Algorithmic Trading in Practice

Abstract and Keywords

The use of computer algorithms in securities trading, or algorithmic trading, has become a central factor in modern financial markets. The desire for cost and time savings within the trading industry spurred buy side as well as sell side institutions to implement algorithmic services along the entire securities trading value chain. This chapter traces this algorithmic evolution, highlighting key cornerstones in its development, discussing the main trading strategies, and summarizing the implications for overall securities market quality. In addition, it touches on the contribution of algorithmic trading to recent market turmoil, notably the U.S. Flash Crash, and discusses potential solutions for assuring market reliability and integrity.

Keywords: algorithmic trading, high-frequency trading, trading technologies, smart order routing, direct market access

10.1 Introduction

In the past few decades, securities trading has experienced significant changes as more and more stages within the trading process have become automated by incorporating electronic systems. Electronic trading desks together with advanced algorithms entered the international trading landscape and introduced a technological revolution to traditional physical floor trading. Nowadays, the securities trading landscape is characterized by a high level of automation, for example, enabling complex basket portfolios to be traded and executed with a single click or finding best execution via smart order-routing algorithms on international markets. Computer algorithms encompass the whole trading process—buy side (traditional asset managers and hedge funds) as well as sell side institutions (banks, brokers, and broker-dealers) have found their business significantly migrated to an information systems–driven area where trading is done with minimum human intervention.

In addition, with the help of new market access models, the buy side has gained more control over the actual trading and order allocation processes and is able to develop and implement its own trading algorithms or use standard software solutions from independent vendors. Nevertheless, the sell side still offers the majority of algorithmic trading tools to its clients. The application of computer algorithms that generate orders automatically has reduced overall trading costs for investors because intermediaries could largely be omitted. Consequently, algorithmic trading (AT) has gained significant market share in international financial markets in recent years as time- and cost-saving automation went hand in hand with cross-market connectivity. Algorithmic trading has not only altered the traditional relation between investors and their market-access intermediaries but also shifted traders’ focus, much as the invention of the telephone in 1876 changed communication between people.

This chapter gives an overview of the evolution of algorithmic trading, highlighting current technological issues as well as presenting scientific findings concerning the impact of this method on market quality. The chapter is structured as follows: First, we characterize algorithmic trading in the light of the definitions available in the academic literature and illustrate the difference between algorithmic trading and related constructs such as high-frequency trading (HFT). Further, we provide insights into the evolution of the trading process within the past thirty years and show how the evolution of trading technology influenced the interaction among market participants along the trading value chain. Several drivers of algorithmic trading are highlighted in order to discuss the significant impact of algorithms on securities trading. In section 10.3 we introduce the ongoing evolution of algorithmic strategies, highlighting such current innovations as newsreader algorithms. Section 10.4 outlines findings provided by academics as well as practitioners, illustrating their conclusions regarding the impact of algorithmic trading on market quality. In section 10.5 we briefly discuss the role of this approach in the 2010 Flash Crash and explain circuit breakers as a key mechanism for handling market stress. A brief outlook closes the chapter.

10.1.1 Characterization, Definition, and Classification

A computer algorithm is a set of pre-defined instructions executed in order to process a given task (Johnson 2010). Transferred to the context of securities trading, algorithms provide a set of instructions on how to process or modify an order or multiple orders without human intervention.

Academic definitions vary, so we summarize the points on which most analysts agree. “Throughout the literature, AT is viewed as a tool for professional traders that may observe market parameters or other information in real-time and automatically generates/carries out trading decisions without human intervention” (Gomber et al. 2011, p. 11). The authors further list real-time market observation and automated order generation as key characteristics of algorithmic traders. These elements are essential in most definitions of algorithmic trading. For example, Chaboud et al. (2009) write: “[I]n algorithmic trading (AT), computers directly interface with trading platforms, placing orders without immediate human intervention. The computers observe market data and possibly other information at very high frequency, and, based on a built-in algorithm, send back trading instructions, often within milliseconds” (p. 1), and Domowitz and Yegerman (2006) state: “[W]e generally define algorithmic trading as the automated, computer-based execution of equity orders via direct market-access channels, usually with the goal of meeting a particular benchmark” (p. 1). A tighter regulatory definition was provided by the European Commission in its 2011 proposal concerning the review of the Markets in Financial Instruments Directive (MiFID). The proposal states: “‘Algorithmic trading’ means trading in financial instruments where a computer algorithm automatically determines individual parameters of orders such as whether to initiate the order, the timing, price or quantity of the order or how to manage the order after its submission, with limited or no human intervention. This definition does not include any system that is only used for the purpose of routing orders to one or more trading venues or for the confirmation of orders” (European Commission 2011, p. 54). To summarize the intersection of these academic and regulatory statements, trading without human intervention is the key aspect of algorithmic trading and the center of most applied definitions. Gomber et al. (2011) further define trade characteristics that are not necessarily but often linked to algorithmic trading:

  1. Agent trading

  2. Minimization of market impact (for large orders)

  3. Achievement of a particular benchmark

  4. Holding periods of days, weeks, or months

  5. Working an order through time and across markets

This characterization delineates algorithmic trading from its closest subcategory, HFT, which is discussed in the following section.

Based on the specified design and parameterization, algorithms not only process simple orders but also make trading decisions in line with pre-defined investment decisions without any human involvement. Therefore, we generally refer to algorithmic trading as computer-supported trading decision making, order submission, and order management.

Given the continuous change in the technological environment, an all-encompassing classification seems unattainable, but the examples given promote a common understanding of this evolving area of electronic trading.

10.1.2 Algorithmic Trading in Contrast to High-Frequency Trading

High-frequency trading is a relatively new phenomenon in the algorithmic trading landscape, and far less literature and fewer definitions can be found for it. Although the media often use the terms HFT and algorithmic trading synonymously, they are not the same, and it is necessary to outline the differences between the concepts. Aldridge (2009), Hendershott and Riordan (2011), and Gomber et al. (2011) acknowledge HFT as a subcategory of algorithmic trading. The literature typically states that HFT-based trading strategies, in contrast to algorithmic trading, update their orders very quickly and try to keep no overnight position. The rapid submission, cancellation, and deletion of instructions is necessary in order to realize small profits per trade in a large number of trades without keeping significant overnight positions. As a prerequisite, HFT needs to rely on high-speed access to markets, that is, low latencies, the use of co-location or proximity services, and individual data feeds. It does not rely on sophisticated strategies to deploy orders as algorithmic trading does, but relies mainly on speed, that is, on technology, to earn small profits on a large number of trades. The concept of defining HFT as a subcategory of algorithmic trading is also applied by the European Commission in its latest MiFID proposal: “A specific subset of algorithmic trading is High Frequency Trading where a trading system analyses data or signals from the market at high speed and then sends or updates large numbers of orders within a very short time period in response to that analysis. High frequency trading is typically done by the traders using their own capital to trade and rather than being a strategy in itself is usually the use of sophisticated technology to implement more traditional trading strategies such as market making or arbitrage” (European Commission 2011, p. 25). Most academic and regulatory papers agree that HFT should be classified as technology rather than a specific trading strategy and therefore demarcate HFT from algorithmic trading.

10.2 Evolution of Trading and Technology (I)

The evolutionary shift toward electronic trading did not happen overnight. In 1971, the National Association of Securities Dealers Automated Quotation (NASDAQ) became the first electronic stock market, displaying quotes for twenty-five hundred over-the-counter securities. Soon competitors followed on both sides of the Atlantic. The following sections focus on the timeline of the shift and the changing relationship between the buy side and the sell side. Significant technological innovations are discussed, and the drivers of this revolution are identified.

10.2.1 Evolution of Trading Processes

Figure 10.1 presents cornerstones of the evolutionary shift in trading since the initial electronification of securities markets. From the early 1990s many of the major securities exchanges became fully electronified, that is, the matching of orders and price determination was performed by matching algorithms (Johnson 2010). The exchanges established electronic central limit order books (e-CLOB), which provided a transparent, anonymous, and cost-effective way to aggregate and store open-limit orders as well as match executable orders in real time. These advancements led to a decentralization of market access, allowing investors to place orders from remote locations and making physical floor trading more and more obsolete. In the mid-1990s the Securities and Exchange Commission further intensified competition between exchanges by allowing electronic communication networks, computer systems that facilitate trading outside traditional exchanges, to enter the battle for order flow, leading the way to today’s highly fragmented electronic trading landscape. On the sell side, electronification proceeded to the implementation of automated price observation mechanisms, electronic eyes and


Figure 10.1 The evolution of trading. Technology walks up the value chain and supports an ever-increasing range of trading behaviors formerly carried out by humans.

automated quoting machines that generate quotes given pre-parameterized conditions, effectively reducing a market maker’s need to provide liquidity manually. About the year 2000, buy side traders began to establish electronic trading desks by connecting with multiple brokers and liquidity sources. Trading saw significant improvements in efficiency owing to the use of order management systems (OMS), which allowed for routing automation, connectivity, and integration with confirmation, clearing, and

settlement systems. The introduction of the Financial Information eXchange (FIX) Protocol allowed for worldwide uniform electronic communication of trade-related messages and became the de facto messaging standard for pre-trade and trade communication (FIX Protocol Limited 2012). About the same time, sell side pioneers implemented the first algorithms to aid and enhance their proprietary executions. Realizing that buy side clients could also benefit from these advancements, brokers started to offer algorithmic services to them shortly thereafter. Since brokers began offering frameworks that allow for individual algorithm creation and parameterization, client uptake has steadily increased (Johnson 2010). The first smart order-routing services were introduced in the United States to support order routing in a multiple-market system.
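
To make the FIX convention concrete, the following sketch assembles a simplified FIX 4.2 new-order message as the tag=value pairs the protocol prescribes. It is a minimal illustration rather than a production implementation: the comp IDs, symbol, and prices are hypothetical, and a real session message would carry additional required fields such as a sequence number and sending time.

```python
# A minimal sketch of a FIX 4.2 NewOrderSingle (35=D) message, built as
# tag=value pairs separated by the SOH delimiter. Field values are illustrative.
SOH = "\x01"

def fix_message(fields):
    """Build a FIX message: prepend BodyLength (9), append CheckSum (10)."""
    body = SOH.join(f"{tag}={val}" for tag, val in fields) + SOH
    head = f"8=FIX.4.2{SOH}9={len(body)}{SOH}"
    checksum = sum((head + body).encode()) % 256  # mod-256 byte sum, 3 digits
    return f"{head}{body}10={checksum:03d}{SOH}"

order = fix_message([
    (35, "D"),        # MsgType: NewOrderSingle
    (49, "BUYSIDE"),  # SenderCompID (hypothetical)
    (56, "BROKER"),   # TargetCompID (hypothetical)
    (11, "ORD-0001"), # ClOrdID: client order id
    (55, "XYZ"),      # Symbol (illustrative)
    (54, "1"),        # Side: 1 = Buy
    (38, "5000"),     # OrderQty
    (40, "2"),        # OrdType: 2 = Limit
    (44, "25.40"),    # Price
])
print(order.replace(SOH, "|"))  # print with visible delimiters
```

The body-length field (tag 9) and the mod-256 checksum (tag 10) are derived rather than hard-coded, mirroring how the protocol defines them.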

About 2006, the sell side started using co-location and proximity services to serve its own and the buy side’s demand for lower transmission latencies between order submission and order arrival. The term “high-frequency trading” emerged. One has to keep in mind, however, that mid-sized and small buy side firms in particular today still use the telephone, fax, or email to communicate orders to their brokers. The 2010 U.S. Flash Crash marks a significant event in the evolution of securities trading because it dramatically intensified the regulatory discussion about the benefits of this evolution (see section 10.5).

10.2.2 Evolution of Trading Processes in Intermediation Relationships

To augment and add detail to the discussion above, this section highlights major technological advancements accompanying the intermediation relationship between the buy side, the sell side, and markets in the process of securities trading. The top panel of figure 10.2 displays the traditional trading process, which reaches from the buy side investor’s allocation decision to the final order arrival in the markets. In this process, the broker played the central role because he or she was responsible for management and execution of the order. Depending on order complexity and benchmark availability (both of which are driven mainly by order size and the liquidity of the traded security), the broker decided either to route the order directly to the market immediately and in full size or to split and time the order to avoid market impact. If liquidity on the market was not available, the broker executed the order against his own proprietary book, providing risk capital.

The bottom panel of figure 10.2 shows how the intermediation relationship between the buy side and the sell side changed during the technological evolution. As illustrated, the responsibility for execution shifted toward the buy side, which assumed more direct control over the order routing and execution process, and the role of the sell side changed to that of a provider of market access and trading technology. The new technologies named in the figure, direct market access and sponsored market access, as well as smart order routing, are described below to show their relation to algorithmic trading. Because execution by full-service or agency broker dark pools, or electronic execution services for large institutional orders without pre-trade transparency, is


Figure 10.2 While traditionally the responsibility for order execution was fully outsourced to the sell side, the new technology-enabled execution services allow for full control by the buy side.

mainly focused on the direct interaction of buy side orders and only indirectly related to algorithmic trading, this technology will not be described in detail.

In markets that are organized by exchanges, only registered members are granted access to the e-CLOB. Those members are the only ones allowed to conduct trading directly; hence their primary role as market access intermediaries for investors. Market members performing that function are referred to as exchange brokers (Harris 2003). These intermediaries transform their clients’ investment decisions into orders that are allocated to the desired market venues. As the buy side has become more aware of trading costs over the years, brokers have begun to provide alternative market access models such as so-called direct market access (DMA). By taking advantage of DMA, an investor no longer has to go through a broker to place an order but, rather, can have it forwarded directly to the markets through the broker’s trading infrastructure. Johnson (2010) refers to this as “zero-touch” DMA because the buy side takes total control over the order without direct intervention by an intermediary. Given the resulting reduction in latency, DMA models provide an important basis for algorithm-based strategies and HFT.

Sponsored market access represents a modified approach to DMA offerings. This approach targets buy side clients that focus on high-frequency strategies and therefore wish to connect to the market via their broker’s identification but omit their broker’s infrastructure. Sponsored access users rely on their own high-speed infrastructure and access markets using the sell side’s identification; that is, they trade on the market by renting the exchange membership of their sell side broker. In this setup, intermediaries only provide automated pre-trade risk checks that are mostly implemented within the exchange software and administered by the broker, for example, by setting a maximum order value or a maximum number of orders in a predefined time period. A further extension, “naked access” or “unfiltered access,” refers to the omission of pre-trade risk checks. In this process, in order to achieve further latency reduction, only post-trade monitoring is conducted, potentially allowing erroneous orders and orders submitted by flawed algorithms to enter the markets. Because of the potentially devastating impacts, the SEC resolved to ban naked access in 2010. Furthermore, the SEC requires all brokers to put in place risk controls and supervisory procedures relating to how they and their customers access the market (SEC 2010b). Naked access is not allowed in the European securities trading landscape.
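
The following sketch illustrates the kind of broker-administered pre-trade checks described above, assuming two simple limits: a cap on single-order value and a cap on the number of orders per rolling time window. The Order type, thresholds, and window length are hypothetical parameters chosen for illustration.

```python
# A minimal sketch of pre-trade risk checks: an order-value cap and a
# message-rate cap over a rolling time window. All thresholds are illustrative.
import time
from collections import deque
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    quantity: int
    limit_price: float

class PreTradeRiskGate:
    def __init__(self, max_order_value=1_000_000.0, max_orders=100, window_s=1.0):
        self.max_order_value = max_order_value
        self.max_orders = max_orders      # max orders per rolling window
        self.window_s = window_s
        self.timestamps = deque()

    def accept(self, order: Order) -> bool:
        now = time.monotonic()
        # drop submissions that fell out of the rolling window
        while self.timestamps and now - self.timestamps[0] > self.window_s:
            self.timestamps.popleft()
        if order.quantity * order.limit_price > self.max_order_value:
            return False                  # order value cap breached
        if len(self.timestamps) >= self.max_orders:
            return False                  # message-rate cap breached
        self.timestamps.append(now)
        return True

gate = PreTradeRiskGate()
print(gate.accept(Order("XYZ", 5_000, 25.40)))  # True: passes both checks
```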

In a setup in which each instrument is traded in only one market, achieving the best possible price requires mainly the optimal timing of the trade and optimal order sizes to minimize price impact, or implicit transaction costs. In a fragmented market system such as those of Europe and the United States, however, this optimization problem becomes more complex. Because each instrument is traded in multiple venues, a trader has to monitor liquidity and price levels in each venue in real time. Automated, algorithm-based low-latency systems provide solutions in fragmented markets. Smart order routing (SOR) engines monitor multiple liquidity pools (that is, exchanges or alternative trading systems) to identify the highest liquidity and optimal price by applying algorithms to optimize order execution. They continuously gather real-time data from the respective venues concerning the available order book situations (Ende et al. 2009). Foucault and Menkveld (2008) analyze executions among two trading venues for Dutch equities and argue that suboptimal trade executions result from a lack of automation of routing decisions. Ende et al. (2009) empirically assess the value of SOR algorithms in a post-MiFID fragmented European securities system. They find suboptimally executed trades worth €262 billion within a four-week data set. With approximately 6.71 percent of all orders capable of being executed at better prices, they predict overall cost savings of €9.5 million within this time period, indicating an increasing need for sophisticated SOR to achieve best possible execution.
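
A stylized version of the routing decision such an engine makes is sketched below. It assumes each venue reports its current best ask and the size displayed at that price, and it allocates a buy order venue by venue, cheapest first; a production engine would additionally weigh fees, latency, and deeper book levels. Venue names and quotes are illustrative.

```python
# A minimal sketch of a smart order-routing decision across fragmented venues.
def route_buy(quantity, venues):
    """Split a buy order across venues, cheapest best-ask first.
    venues: dict venue -> (best_ask, available_size)."""
    plan, remaining = [], quantity
    for venue, (ask, size) in sorted(venues.items(), key=lambda v: v[1][0]):
        if remaining == 0:
            break
        take = min(remaining, size)
        plan.append((venue, take, ask))
        remaining -= take
    return plan, remaining  # remaining > 0 means not enough displayed liquidity

venues = {"ExchangeA": (25.41, 3_000), "MTF_B": (25.40, 1_500), "MTF_C": (25.42, 10_000)}
print(route_buy(5_000, venues))
# ([('MTF_B', 1500, 25.4), ('ExchangeA', 3000, 25.41), ('MTF_C', 500, 25.42)], 0)
```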

10.2.3 Latency Reduction

Among the changes in the trading process triggered by algorithmic trading, execution and information transmission latency saw the most significant adjustment. “Latency” in this context refers to the time that elapses between the insertion of an order into the trading system and the actual arrival of the order and its execution at the market. In the era of physical floor trading, traders with superior capabilities and close physical proximity to the desks of specialists could accomplish more trades and evaluate information faster than competitors and therefore could trade more successfully. Today, average latencies have been reduced to a fraction of a millisecond. This advance was driven mainly by the latest innovations in hardware, exchange co-location services, and improved market infrastructure. Such a decrease in latency translates into an increase in participants’ revenues as well as a reduction of error rates, since traders can avoid missing economically attractive order book situations due to high latency (Riordan and Stockenmaier 2012). Removing the limits of human decision speed became central in promoting algorithms for the purpose of conducting high-speed trading. Combining high-speed data access with predefined decision making, today’s algorithms are able to adapt quickly to permanent changes in market conditions. Trading venues recognize traders’ desire for low latency, and so they intensify the chase for speed by providing more low-latency solutions to attract more clients (Ende et al. 2011).

10.2.4 Co-Location and Proximity Hosting Services

The Commodity Futures Trading Commission (CFTC) states that “[…] the term ‘Co-Location/Proximity Hosting Services’ is defined as trading market and certain third-party facility space, power, telecommunications, and other ancillary products and services that are made available to market participants for the purpose of locating their computer systems/servers in close proximity to the trading market’s trade and execution system” (Commodity Futures Trading Commission 2010a, p. 33200). These services provide participating institutions with further latency reduction by minimizing network and other trading delays. These improvements are essential for all participants conducting HFT but are also beneficial in algorithmic trading strategies. The CFTC thus acknowledges that these services should not be granted in a discriminatory way, for example, by limiting co-location space or by a lack of price transparency. In order to ensure equal, fair, and transparent access to these services, the CFTC proposed a rule that requires institutions that offer co-location or proximity hosting services to offer equal access without artificial barriers that act to exclude some market participants from accessing these services (Commodity Futures Trading Commission 2010a).

10.2.5 Fragmentation of Markets

Fragmentation of investors’ order flow has occurred in U.S. equity markets since the implementation of the Regulation of Exchanges and Alternative Trading Systems (Reg ATS) in 2000, followed by the 2005 implementation of the Regulation National Market System (Reg NMS). Competition in European equity markets began in 2007 after the introduction of MiFID, which enabled new venues to compete with the incumbent national exchanges. Both regulatory approaches, although they differ in the explicit degree of regulation, aim to improve competition in the trading landscape by attracting new entrants to the market for markets. Because traders’ interest in computer-supported trading preceded these regulations, the fragmentation of markets cannot be considered the motivating force for the use of algorithms. But considering that a multiple-market system only allows for beneficial order execution and the resulting cost savings if every relevant trading center is included in decision making, a need for algorithms to support this process is reasonable. Further, cross-market strategies (arbitrage), as well as the provision of liquidity in fragmented markets, can only be achieved with wide availability of cross-market data and a high level of automated decision making. Therefore, fragmentation is considered a spur for promoting algorithm use and high-frequency technologies in today’s markets.

10.2.6 Market Participation and Empirical Relevance

Algorithmic trading influences not only today’s trading environment and market infrastructure but also trading characteristics and intraday patterns. Although exact participation levels remain opaque owing to the anonymity of traders and their protection of their methods, a handful of academic and industry papers try to estimate overall market share. The Aite Group (2006) estimated that algorithm usage grew from a starting point near zero around 2000 to account for over 50 percent of trading volume in the United States in 2010 (Aite Group 2006). Hendershott and Riordan (2011) reached about the same number on the basis of a data set of Deutsche Boerse’s DAX 30 instruments traded on XETRA in 2008. The CME Group (2010) conducted a study of algorithmic activity within its futures markets that indicated algorithm participation of between 35 percent (for crude oil futures) and 69 percent (for EuroFX futures) in 2010. Because the literature is mainly based on historic data sets, these numbers may underestimate actual participation levels. Academics see a significant trend toward a further increase in the use of algorithms. Furthermore, algorithmic trading as well as HFT now claim significant shares of the foreign exchange market. According to the Aite Group, the volume of trade in FX markets executed by algorithms may exceed 10 percent in the year 2011 (Aite Group 2011).

10.3 Evolution of Trading and Technology (II)

Not only has the trading environment adapted to technological advances, but market interaction and order management have improved with computerized support. Section 10.3 gives a comprehensive overview of the status quo in algorithmic trading strategies, focusing on trading strategies used primarily in agent trading as well as in proprietary trading.

10.3.1 Algorithmic Strategies in Agent Trading

From the beginning of algorithm-based trading, the complexity and granularity of the algorithms have developed with their underlying mathematical models and supporting hard- and software. Algorithms react to changing market conditions, level their aggressiveness based on the current trading hour, and consider financial news in their trading behavior. Apart from advancements in customization, the key underlying strategies of algorithms have not changed much. Most of the algorithms today still strive to match given benchmarks, minimize transaction costs, or seek liquidity in different markets. The categorization of the various algorithms is based mainly on the different purposes or behaviors of the strategies used. Domowitz and Yegerman (2005) classify algorithms based on their complexity and mechanics, whereas Johnson (2010) suggests a classification based on their objective. We follow Johnson’s proposal by illustrating the chronology of algorithm development. Impact-driven algorithms seek to minimize market impact costs, and cost-driven algorithms seek to minimize overall trading costs. Johnson places opportunistic algorithms in a separate category. Since both impact-driven and cost-driven algorithms are available for opportunistic modification, we give examples of opportunistic behavior in both types. We also provide a brief introduction to newsreader algorithms, among the latest developments. Section 10.3.5 focuses on algorithms used in proprietary trading.

10.3.2 Impact-Driven Algorithms

Orders entering the market may considerably change the actual market price depending on order quantity, order limit, and current order book liquidity. Imagine a large market order submitted to a low-liquidity market. This order would clear the other side of the order book to a large extent, thus significantly worsening its own execution price with every partial fill. This phenomenon is the reason why market impact costs make up one part of the implicit trading costs (Harris 2003; Domowitz and Yegerman 2005). Impact-driven algorithms seek to minimize the effect that trading has on the asset’s price. By splitting orders into sub-orders and spreading their submission over time, these algorithms characteristically process sub-orders on the basis of a predefined price, time, or volume benchmark. The volume-weighted average price (VWAP) benchmark focuses on previously traded prices relative to the order’s volume. The overall turnover divided by the total volume traded indicates the average price of the given time interval and may represent the benchmark for measuring the performance of the algorithm. Focusing on execution time, the time-weighted average price (TWAP) benchmark algorithm generates—in its simplest implementation—equally large sub-orders and processes them in equally distributed time intervals. Trading intervals can be calculated from the total quantity, the start time, and the end time; for example, an order to buy 120,000 shares in chunks of 5,000 shares from 10 o’clock to 12 o’clock results in five-minute trading intervals. Both methods have substantial disadvantages. Because they schedule the order to meet a predefined benchmark while disregarding the current market situation, both algorithms may produce disadvantageous execution conditions. The predictability of these algorithms may encourage other traders to exploit them, so dynamization of both concepts is reasonable because actual market conditions are obviously a more efficient indicator than historical data. With real-time market data access, VWAP benchmarks are calculated trade by trade, adjusting the operating algorithms with every trade. Percent-of-volume (POV) algorithms base their market participation on the actual market volume, forgo trading if liquidity is low, and intensify aggressiveness if liquidity is high to minimize market impact. Randomization is a further feature of impact-driven algorithms. As predictability decreases with randomization of time or volume, static orders become less prone to detection by other market participants.
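
The following sketch works through the chapter’s TWAP example and the VWAP benchmark definition given above: the schedule splits 120,000 shares into 5,000-share child orders between 10:00 and 12:00, yielding five-minute intervals, and the VWAP helper computes turnover divided by total volume. The trade data are illustrative.

```python
# A minimal sketch of a simple TWAP schedule and the VWAP benchmark.
from datetime import datetime

def twap_schedule(total_qty, child_qty, start, end):
    n = total_qty // child_qty               # 120,000 / 5,000 = 24 child orders
    interval = (end - start) / n             # -> five-minute spacing
    return [(start + i * interval, child_qty) for i in range(n)]

def vwap(trades):
    """trades: iterable of (price, volume); VWAP = sum(p*v) / sum(v)."""
    turnover = sum(p * v for p, v in trades)
    volume = sum(v for _, v in trades)
    return turnover / volume

sched = twap_schedule(120_000, 5_000, datetime(2010, 5, 6, 10), datetime(2010, 5, 6, 12))
print(len(sched), sched[1][0] - sched[0][0])  # 24 child orders, 0:05:00 apart
print(vwap([(25.40, 1_000), (25.45, 3_000)])) # 25.4375
```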

10.3.3 Cost-Driven Algorithms

Market impact costs represent only one part of the overall costs arising in securities trading. The academic literature distinguishes between implicit costs, such as market impact or timing costs, and explicit costs, such as commissions or access fees (Harris 2003). Cost-driven algorithms concentrate on both variants in order to minimize overall trading costs. Therefore, simple order splitting may not be the most desirable mechanism: market impact may be reduced, but at the cost of higher timing risk owing to the extended time span in which the order is processed. Cost-driven algorithms must anticipate such opposing effects in order to minimize overall costs rather than merely shift the sources of risk. Implementation shortfall is one of the most widespread benchmarks in agent trading. It represents the difference between the average execution price achievable at the market when the order arrives and the actual execution price delivered by the algorithm. Since implementation shortfall algorithms are, at least in part, affected by the same market parameters as impact-driven algorithms, both types use similar approaches. Adaptive shortfall is a subcategory of implementation shortfall. Based on the constraints of the latter, this algorithm adapts trading to market condition changes such as price movements, allowing the algorithm to trade more opportunistically in beneficial market situations.
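
As a concrete illustration of the implementation-shortfall benchmark just described, the sketch below compares the average price actually obtained across child executions with the price attainable when the order arrived, expressed in basis points. The arrival price and fills are hypothetical.

```python
# A minimal sketch of the implementation-shortfall benchmark: the gap between
# the arrival price and the achieved average execution price, in basis points.
def implementation_shortfall_bps(arrival_price, fills, side="buy"):
    """fills: list of (price, volume) child executions."""
    executed_value = sum(p * v for p, v in fills)
    executed_volume = sum(v for _, v in fills)
    avg_px = executed_value / executed_volume
    sign = 1 if side == "buy" else -1        # paying up hurts a buyer
    return sign * (avg_px - arrival_price) / arrival_price * 10_000

fills = [(25.41, 2_000), (25.44, 3_000)]
print(round(implementation_shortfall_bps(25.40, fills), 2))  # ~11.02 bps
```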

10.3.4 Newsreader Algorithms

One of the relatively recent innovations is the newsreader algorithm. Since every investment decision is based on some input from news or other distributed information, investors feed their algorithms with real-time newsfeeds. From a theoretical perspective, these investment strategies are based on the semi-strong form of market efficiency (Fama 1970), that is, prices adjust to publicly available new information very rapidly and in an unbiased fashion. In practical terms, information enters market prices with a certain transitory gap, during which investors can realize profits. Humans’ ability to analyze a significant amount of information within short reaction times is limited, however, so newsreaders are deployed to analyze sentiment in documents. A key focus of this approach is to overcome the problem of utilizing the relevant information in documents such as blogs, news articles, or corporate disclosures. This information may be unstructured, meaning it is hard for computers to understand, since written information contains many syntactic and semantic features, and information that is relevant for an investment decision may be concealed within paraphrases. The field of sentiment analysis and text mining encompasses the investigation of documents in order to determine their positive or negative conclusion about the relevant topic. In general, there are two types of in-depth analysis of the semantic orientation of text information (called polarity mining): supervised and unsupervised techniques (Chaovalit and Zhou 2008). Supervised techniques use labeled data sets to train a classifier (for example, a support vector machine), which is set up to classify the content of future documents. In contrast, unsupervised techniques use predefined dictionaries to determine the content by searching for buzzwords within the text. Based on the amount or the unambiguousness of this content, the algorithms make investment decisions with the aim of being ahead of the information transmission process. An introduction to various approaches to extracting investment information from unstructured documents, as well as an assessment of the efficiency of these approaches, is offered by Tetlock (2007) and Tetlock et al. (2008).
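
A minimal sketch of the unsupervised, dictionary-based variant is shown below: it counts buzzwords from predefined positive and negative word lists and emits a signal only when a document’s tone is sufficiently unambiguous. The word lists and threshold are illustrative stand-ins for a real financial sentiment lexicon.

```python
# A minimal sketch of dictionary-based polarity mining; the lexicon and
# threshold are illustrative assumptions, not a production word list.
POSITIVE = {"beat", "upgrade", "growth", "record", "profit"}
NEGATIVE = {"miss", "downgrade", "loss", "recall", "lawsuit"}

def polarity_signal(text, threshold=0.5):
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    if pos + neg == 0:
        return "no-signal"
    score = (pos - neg) / (pos + neg)   # +1 purely positive, -1 purely negative
    if score >= threshold:
        return "buy"
    if score <= -threshold:
        return "sell"
    return "no-signal"                  # ambiguous tone: stay out

print(polarity_signal("quarterly profit hits record after analyst upgrade"))  # buy
```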

10.3.5 Algorithmic Trading Strategies in Proprietary Trading

Whereas the previous sections dealt with agent trading, the rest of this section focuses on strategies that are prevalent in proprietary trading, which have changed significantly owing to the implementation of computer-supported decision making.

10.3.6 Market Making

Market making strategies differ significantly from agent (buy side) strategies because they do not aim to build up permanent positions in assets. Instead, their purpose is to profit from short-term liquidity by simultaneously submitting buy and sell limit orders in various financial instruments. Market makers’ revenues are based on the aggregated bid-ask spread. For the most part, they try to achieve a flat end-of-day position. Market makers frequently employ quote machines, programs that generate, update, and delete quotes according to a pre-defined strategy (Gomber et al. 2011). The implementation of quote machines in most cases has to be authorized by the market venue and has to be monitored by the user. The success of market making is sustained basically through real-time market price observation, since dealers with more timely information about the present market price can set up quotes more precisely and so generate a thinner bid-ask spread through an increased number of executed trades. On the other hand, speed in order submission, execution, and cancellation reduces a market maker’s risk of misquoting instruments in times of high volatility. Therefore, market makers benefit in critical ways from automated market observation as well as algorithm-based quoting.

A market maker might have an obligation to quote owing to requirements of market venue operators, for example, designated sponsors in the Frankfurt Stock Exchange trading system XETRA. High-frequency traders employ strategies that are similar to traditional market making, but they are not obliged to quote and therefore are able to retreat from trading when market uncertainty is high. Besides the earnings generated by the bid-ask spread, HFT market makers benefit from pricing models of execution venues that rebate voluntary HFT market makers whose orders provide liquidity (liquidity makers), that is, sit in the order book and get executed by a liquidity taker, who has to pay a fee. This model is often called asymmetric pricing or maker/taker pricing.
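
The following sketch captures the core of a quote machine as characterized above: it re-centers a two-sided quote around the current mid-price and widens the spread as short-term volatility rises, reflecting the misquoting risk just discussed. All parameters are illustrative assumptions.

```python
# A minimal sketch of a quote machine: symmetric quotes around the mid-price,
# with the spread widened as a simple volatility risk control.
def make_quote(mid, base_spread=0.02, volatility=0.0, vol_factor=2.0, size=1_000):
    half = (base_spread + vol_factor * volatility) / 2  # widen when volatile
    bid = round(mid - half, 2)
    ask = round(mid + half, 2)
    return {"bid": (bid, size), "ask": (ask, size)}

print(make_quote(25.40, volatility=0.00))  # calm market: tight quote
print(make_quote(25.40, volatility=0.05))  # volatile market: wide quote
```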

10.3.7 Statistical Arbitrage

Another field that evolved significantly with the implementation of computer algorithms is financial arbitrage. Harris (2003) defines arbitrageurs as speculators who trade on information about relative values. They profit whenever prices converge so that their purchases appreciate relative to their sales. Types of arbitrage vary with the nature of the underlying assumptions about an asset’s “natural” price. Harris further identifies two categories: Pure arbitrage (also referred to as mean-reverting arbitrage) is based on the opinion that an asset’s value fundamentally tends toward a long- or medium-term average. Deviations from this average only represent momentum shifts due to short-term adjustments. The second category, speculative arbitrage, assumes a nonstationary asset value. Nonstationary variables tend to drop and rise without regularly returning to a particular value. Instead of anticipating a value’s long-term mean, arbitrageurs predict a value’s future motion and base investment strategies on the expected value. The manifold arbitrage strategies in use are derivatives of one of these two approaches, ranging from vanilla pair-trading techniques to trading-pattern prediction based on statistical or mathematical methods. For a detailed analysis of algorithm-based arbitrage strategies and insight into current practices see, for example, Pole (2007).
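
As an example of the vanilla pair-trading technique in the mean-reverting category, the sketch below trades the spread between two related instruments when it deviates from its historical mean by more than a chosen number of standard deviations. The price series and entry threshold are illustrative.

```python
# A minimal sketch of a mean-reverting pair-trading signal based on the
# z-score of the price spread between two related instruments.
from statistics import mean, stdev

def pair_signal(prices_a, prices_b, entry_z=2.0):
    spread = [a - b for a, b in zip(prices_a, prices_b)]
    mu, sigma = mean(spread), stdev(spread)
    z = (spread[-1] - mu) / sigma        # how stretched is the spread now?
    if z > entry_z:
        return "sell A / buy B"          # expect spread to revert down
    if z < -entry_z:
        return "buy A / sell B"          # expect spread to revert up
    return "flat"

a = [25.40, 25.42, 25.41, 25.43, 25.44, 25.90]
b = [25.38, 25.40, 25.40, 25.41, 25.42, 25.43]
print(pair_signal(a, b))                 # "sell A / buy B": spread stretched
```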

Permanent market observation and quantitative models make up only one pillar essential to both kinds of arbitrage. The second pillar is again trading latency. Opportunities to conduct arbitrage frequently exist only for very brief moments. Because only computers are able to scan the markets for such short-lived opportunities, arbitrage has become a major strategy of HFTs (Gomber et al. 2011).

10.4 Impact of Algorithmic Trading on Market Quality and Trading Processes

The prevailing negative opinion about algorithmic trading, especially HFT, is driven in part by media reports that are not always well informed and impartial. Most of the scientific literature credits algorithmic trading with beneficial effects on market quality, liquidity, and transaction costs. Only a few papers highlight possible risks imposed by the greatly increased trading speed. However, all academics encourage objective assessments as well as sound regulation in order to prevent system failures without curtailing technological innovation. This section concentrates on major findings regarding the U.S. and European trading landscapes, covering the impact on trade modification and cancellation rates, market liquidity, and market volatility.

10.4.1 Impact on Trade Modification and Cancellation Rates

Among the first to analyze algorithmic trading patterns in electronic order books were Prix et al. (2007), who studied changes in the lifetime of cancelled orders in the XETRA order book. Owing to the characteristics of their data set, they are able to identify each order by a unique identifier and so recreate the whole history of events for each order. Focusing on the lifetimes of so-called no-fill deletion orders, that is, orders that are inserted and subsequently cancelled without being executed, they find algorithm-specific characteristics concerning the insertion limit of an order compared to ordinary trading by humans. Gsell and Gomber (2009) likewise focus on differences in trading patterns between human and computer-based traders. In their data setup they are able to distinguish between algorithmic and human order submissions. They conclude that automated systems tend to submit more, but significantly smaller, orders. Additionally, they show the ability of algorithms to monitor their orders and modify them so as to be at the top of the order book. The authors state that algorithmic trading behavior is fundamentally different from human trading concerning the use of order types, the positioning of order limits, and modification or deletion behavior. Algorithmic trading systems capitalize on their ability to process high-speed data feeds and react instantaneously to market movements by submitting corresponding orders or modifying existing ones.

Algorithmic trading has resulted in faster trading and more precise trading strategy design, but what is the impact on market liquidity and market volatility? The following sections provide broader insight into this question.

10.4.2 Impact on Market Liquidity

A market’s quality is determined foremost by its liquidity. Harris (2003, p. 394) defines liquidity as “the ability to trade large size quickly, at low cost, when you want to trade.” Liquidity affects the transaction costs for investors and is a decisive factor in the competition for order flow among exchanges and between exchanges and proprietary trading venues. Many academic articles focus on these attributes to discern possible impacts of algorithmic trading and HFT on a market’s liquidity and, therefore, on a market’s quality. Hendershott et al. (2011) provide the first event study, assessing the New York Stock Exchange’s dissemination of automated quotes in 2003. This event marked the introduction of an automated quote update that provided information faster and caused an exogenous increase in algorithmic trading while offering nearly no advantage to human traders. By analyzing trading before and after this event, the authors find that algorithmic trading lowers the costs of trading and increases the informativeness of quotes. These findings are influenced by the fact that the analyzed period covers a general increase in volume traded, which also contributes to market quality but is not controlled for in the authors’ approach. Hendershott and Riordan (2011) confirm the positive effect of algorithmic trading on market quality. They find that algorithmic traders consume liquidity when it is cheap and provide liquidity when it is expensive. Further, they conclude that algorithmic trading contributes to volatility dampening in turbulent market phases because algorithmic traders do not retreat from or attenuate trading during these times and therefore contribute more to the discovery of the efficient price than human trading does. These results are backed by the findings of Chaboud et al. (2009). Based on a data set of algorithmic trades from 2003 to 2007, the authors argue that computers provide liquidity during periods of market stress. Overall, these results illustrate that algorithmic traders closely monitor the market in terms of liquidity and information and react quickly to changes in market conditions, thus providing liquidity in tight market situations (Chaboud et al. 2009). Further empirical evidence for algorithms’ positive effects on market liquidity is provided by Hasbrouck and Saar (2013) as well as Sellberg (2010).

Among the theoretical evidence on the benefits of algorithmic trading, the model presented by Foucault et al. (2011) has attracted significant attention. In order to determine the benefits and costs of monitoring activities in securities markets, the authors develop a model of trading with imperfect monitoring to study this trade-off and its impact on the trading rate. In order to study the effect of algorithmic trading, the authors interpret it as a reduction of monitoring costs, concluding that algorithmic trading should lead to a sharp increase in the trading rate. Moreover, it should lead to a decrease in the bid-ask spread if, and only if, it increases the speed of reaction of market makers relative to the speed of reaction of market takers (the “velocity ratio”). Last, algorithmic trading is socially beneficial because it increases the rate at which gains from trade are realized. Yet adjustments in trading fees redistribute the social gain of algorithmic trading between participants. For this reason, automation of one side may, counterintuitively, make that side worse off after adjustments in maker/taker fees (Foucault et al. 2011).

10.4.3 Impact on Market Volatility

High variability in asset prices indicates great uncertainty about the value of the underlying asset, complicating an investor’s valuation and potentially resulting in incorrect investment decisions when price variability is high. Connecting automation with increased price variability seems a straightforward argument owing to computers’ immense speed. Research, however, shows this prejudice to be unsustainable. By simulating market situations with and without the participation of algorithmic trading, Gsell (2008) finds decreasing price variability when computers act in the market. This might be explained by the fact that the lower latency of algorithmic trading allows more orders to be submitted to the market, and therefore the size of the sliced orders decreases. Fewer partial executions will occur because there will more often be sufficient volume in the order book to completely execute the small order. If fewer partial executions occur, price movements will be narrowed as the order executes at fewer limits in the order book. Assessing the foreign exchange market and basing their work on a data set that differentiates between computer and human trades, Chaboud et al. (2009) find no causal relation between algorithmic trading and increased exchange rate volatility. They state: “If anything, the presence of more algorithmic trading is associated with lower volatility” (p. 1). The authors use an ordinary least-squares approach to test for a causal relation between the fraction of algorithmic trading in overall daily volume and daily volatility. Additionally, Groth (2011) confirms this relation between volatility and algorithmic trading by analyzing data containing a specific flag provided by the respective market operator that allows one to distinguish between algorithmic and human traders. The author indicates that the participation of algorithmic traders is associated not with higher levels of volatility but with more stable prices. Furthermore, algorithmic traders do not withdraw liquidity during periods of high volatility, and traders do not seem to adjust their order cancellation behavior to volatility levels. In other words, algorithmic traders provide liquidity even if markets become turbulent; therefore, algorithms dampen price fluctuations and contribute to the robustness of markets in times of stress.

A more critical view of algorithmic trading is provided by researchers from the London-based Foresight Project. Although they highlight its beneficial effects on market stability, the authors warn that possible self-reinforcing feedback loops within well-intentioned management and control processes can amplify internal risks and lead to undesired interactions and outcomes (Foresight 2011). The authors illustrate possible liquidity or price shock cascades, which also intensified the U.S. Flash Crash of May 6, 2010. This hypothesis is backed, in part, by Zhang (2010) and Kirilenko et al. (2017), each finding HFT to be highly correlated with volatility and the unusually large selling pressure noted during the Flash Crash.

10.5 Regulation and Handling of Market Stress

With increasing trading volume and public discussion, algorithmic trading became a key topic for regulatory bodies. The preceding major regulatory changes, Regulation NMS as well as the Dodd-Frank Act in the United States and MiFID in the European Union, had addressed the reformation of the financial market system. After crises including the collapse of the investment bank Lehman Brothers and the 2010 Flash Crash, regulators started probing and calling the overall automation of trading into question. Since then, the SEC as well as European regulators have promoted dialogue with practitioners and academics in order to evaluate key issues related to algorithmic trading (IOSCO 2011; SEC 2010a; European Commission 2011). Discussion is still intense, with supporters highlighting the beneficial effects for market quality and adversaries wary of the increasing degree of computer-based decision making and the decreasing options for human intervention as trading speed increases further. In the following we focus on a specific event that prompted regulators on both sides of the Atlantic to re-evaluate the contribution of algorithmic trading, the Flash Crash, when a single improperly programmed algorithm led to a serious plunge. We then present mechanisms currently in place to manage and master such events.

10.5.1 Algorithmic Trading in the Context of the Flash Crash

On May 6, 2010, U.S. securities markets suffered one of the most devastating plunges in recent history. Within several minutes equity indices, exchange-traded funds, and futures contracts declined significantly (e.g., the Dow Jones Industrial Average dropped 5.49 percent in five minutes), only to rebound to their original levels shortly afterward. The CFTC together with the SEC investigated the problem and provided evidence in late 2010 that a single erroneous algorithm had initiated the crash. An automated sell program was implemented to slice a large order of E-mini S&P 500 contracts, a stock market index futures contract traded on the Chicago Mercantile Exchange’s Globex electronic trading platform, into several smaller orders to minimize market impact. The algorithm’s parameterization scheduled a fixed percentage-of-volume strategy without accounting for time duration or a minimum execution price. This incautious implementation resulted in a significant dislocation of liquidity and a price drop in E-mini S&P 500 futures contracts. The cascade of selling volume flooded the market, resulting in massive order book imbalances with subsequent price drops. Intermarket linkages transferred these order book imbalances across major broad-based U.S. equity indices such as the Dow Jones Industrial Average and the S&P 500 Index. Finally, the extreme price movements triggered a trading safeguard on the Chicago Mercantile Exchange that stopped trading for several minutes and allowed prices to stabilize (Commodity Futures Trading Commission 2010b). In order to get a more detailed picture of the uniqueness of the Flash Crash, a closer look at the structure of the U.S. equity market and the NMS is necessary. The U.S. trade-through rule and a circuit breaker regime that was neither targeted at individual equities nor sufficiently aligned among U.S. trading venues are also relevant causes of the Flash Crash. In Europe, a more flexible best-execution regime without re-routing obligations and a share-by-share volatility safeguard regime that has existed for more than two decades have largely prevented comparable problems (Gomber et al. 2013).
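
The mechanics of such a fixed percentage-of-volume schedule can be sketched as follows. The point of the example is what is missing: there is no minimum-price constraint and no time pacing, so when volume spikes during a sell-off the algorithm sells faster into the decline. The 9 percent participation rate follows the CFTC/SEC findings; the volume figures are illustrative, not the actual May 6 data.

```python
# A minimal sketch of a fixed percentage-of-volume sell schedule with no
# price floor and no time pacing; parameters are illustrative.
def pov_child_orders(remaining, market_volumes, participation=0.09):
    """Yield child sell quantities per period until the parent order is done."""
    for volume in market_volumes:
        if remaining <= 0:
            break
        child = min(remaining, int(participation * volume))
        remaining -= child
        yield child                  # note: no minimum-price check anywhere

# volume spikes as prices fall, so the algorithm sells *faster* into the decline
print(list(pov_child_orders(75_000, [100_000, 250_000, 600_000])))
# [9000, 22500, 43500]
```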

10.5.2 Circuit Breakers in Securities Trading

Automated safeguard mechanisms are implemented in major exchanges in order to ensure safe, fair, and orderly trading. In 1988 the SEC implemented a marketwide circuit breaker in the aftermath of the crash of October 19, 1987 (Black Monday). Based on a three-level threshold, markets halt trading if the Dow Jones Industrial Average drops more than 10 percent within a predefined time period (NYSE 2011). In addition, many U.S. trading venues introduced further safeguard mechanisms that are also implemented at major European exchanges. So far, the academic literature provides mixed reviews regarding the efficiency of circuit breakers. Most of the studies conclude that circuit breakers do not help decrease volatility (Kim and Yang 2004). Chen (1993) finds no support for the hypothesis that circuit breakers help the market calm down. Kim and Rhee (1997) and likewise Bildik and Gülay (2006) observed volatility spilling over into the periods after a trading halt was put in place. Nevertheless, the importance of such automated safeguards has risen in the eyes of regulators on both sides of the Atlantic. On October 20, 2011, the European Commission published proposals concerning the review of the MiFID framework that now require trading venues to be able to temporarily halt trading if there is any significant price movement on their own market or a related market during a short period (European Commission 2011).
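
In its simplest form, such a threshold rule reduces to the check sketched below: compare the index’s decline from a reference level against a halt threshold. A single 10 percent level is used here as a stand-in for the actual multi-level regime.

```python
# A minimal sketch of a threshold-based circuit breaker check.
def circuit_breaker_triggered(reference_level, current_level, threshold_pct=10.0):
    drop_pct = (reference_level - current_level) / reference_level * 100
    return drop_pct >= threshold_pct        # True -> halt trading

print(circuit_breaker_triggered(10_000, 9_200))  # False: 8 percent drop
print(circuit_breaker_triggered(10_000, 8_900))  # True: 11 percent drop, halt
```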

10.6 Outlook

The demand for automation was initially driven by the desire for cost reduction and the need to adapt to a rapidly changing market environment characterized by the fragmentation of order flow. Algorithmic trading as well as HFT enable sophisticated buy side and sell side participants to achieve legitimate rewards on their investments in technology, infrastructure, and know-how. Looking to the future evolution of algorithmic trading, it seems reasonable that even if the chase for speed is theoretically limited by the speed of light, the continuing alteration of the international securities markets as well as the omnipresent desire to cut costs may fuel the need for algorithmic innovations, allowing algorithmic strategies to claim further significant shares of trading volume. Considering further possible shifts in the securities trading value chain, algorithm-based automation may continue to absorb major processes, contributing significantly to the ongoing displacement of human traders. Richard Balarkas, CEO of Instinet Europe, an institutional brokerage firm, paints a dark future for human intermediaries: “It [algorithmic trading] signaled the death of the dealer that just outsourced all risk and responsibility for the trade to the broker and heralded the arrival of the buy-side trader that could take full control of the trade and be a more discerning buyer of sell-side services” (Trade News 2009).

So far, the academic literature draws a largely positive picture of this evolution. Algorithmic trading contributes to market efficiency and liquidity, although the effects on market volatility are still opaque. Therefore, it is central to enable algorithmic trading and HFT to unfold their benefits in times of quiet trading and to have mechanisms (like circuit breakers) in place to control potential errors both at the level of the users of algorithms and at the market level. Yet hindering these strategies through inadequate regulation that imposes excessive burdens may result in unforeseen negative effects on market efficiency and quality.

References

Aite Group (2006). Algorithmic trading 2006: More bells and whistles. Online. http://www.aitegroup.com/Reports/ReportDetail.aspx?recordItemID=296 [accessed January 17, 2012].

Aite Group (2011). Algorithmic trading in FX: Ready for takeoff? Online. http://www.aitegroup.com/Reports/ReportDetail.aspx?recordItemID=836 [accessed January 17, 2012].

Aldridge, I. (2009). High-Frequency Trading. Wiley.

Bildik, R., and G. Gülay (2006). Are price limits effective? Evidence from the Istanbul Stock Exchange. Journal of Financial Research 29(3), 383–403.

Chaboud, A., B. Chiquoine, E. Hjalmarsson, and C. Vega (2009). Rise of the machines: Algorithmic trading in the foreign exchange market. Report, Board of Governors of the Federal Reserve System.

Chaovalit, P., and L. Zhou (2008). Ontology-supported polarity mining. Journal of the ASIS&T 59(1), 98–110.

Chen, Y.-M. (1993). Price limits and stock market volatility in Taiwan. Pacific-Basin Finance Journal 1(2), 139–153.

CME Group (2010). Algorithmic trading and market dynamics. Online. http://www.cmegroup.com/education/files/Algo_and_HFT_Trading_0610.pdf [accessed January 17, 2012].

Commodity Futures Trading Commission (2010a). Co-location/proximity hosting. Online. http://edocket.access.gpo.gov/2010/pdf/2010-13613.pdf [accessed January 17, 2012].

Commodity Futures Trading Commission (2010b). Findings regarding the market events of May 6, 2010. Online. http://www.cftc.gov/ucm/groups/public/@otherif/documents/ifdocs/staff-findings050610.pdf [accessed January 17, 2012].

Domowitz, I., and H. Yegerman (2005). Measuring and interpreting the performance of broker algorithms. ITG Inc. Research Report.

Domowitz, I., and H. Yegerman (2006). The cost of algorithmic trading: A first look at comparative performance. Journal of Trading 1(1), 33–42.

Ende, B., P. Gomber, and M. Lutat (2009). Smart order routing technology in the new European equity trading landscape. In Proceedings of the Software Services for e-Business and e-Society, 9th IFIP WG 6.1 Conference, I3E, pp. 197–209. Springer.

Ende, B., T. Uhle, and M. C. Weber (2011). The impact of a millisecond: Measuring latency. Proceedings of the 10th International Conference on Wirtschaftsinformatik 1(1), 27–37.

European Commission (2011). Proposal for a directive of the European Parliament and of the Council on Markets in Financial Instruments repealing Directive 2004/39/EC of the European Commission. Online. http://ec.europa.eu/internal_market/securities/docs/isd/mifid/COM_2011_656_en.pdf [accessed January 17, 2012].

Fama, E. (1970). Efficient capital markets: A review of theory and empirical work. Journal of Finance 25(2), 383–417.

FIX Protocol Limited (2012). What is FIX? Online. http://fixprotocol.org/what-is-fix.shtml [accessed January 17, 2012].

Foresight (2011). The future of computer trading in financial markets. Report, Government Office for Science.

Foucault, T., O. Kadan, and E. Kandel (2011). Liquidity cycles and make/take fees in electronic markets.

Foucault, T., and A. Menkveld (2008). Competition for order flow and smart order routing. Journal of Finance 63(1), 119–158.

Gomber, P., B. Arndt, M. Lutat, and T. Uhle (2011). High frequency trading.

Gomber, P., M. Haferkorn, M. Lutat, and K. Zimmermann (2013). The effect of single-stock circuit breakers on the quality of fragmented markets. In F. A. Rabhi and P. Gomber (Eds.), Lecture Notes in Business Information Processing (LNBIP), 135, pp. 71–87. Springer.

Groth, S. (2011). Does algorithmic trading increase volatility? Empirical evidence from the fully-electronic trading platform XETRA. In Proceedings of the 10th International Conference on Wirtschaftsinformatik.

Gsell, M. (2008). Assessing the impact of algorithmic trading on markets: A simulation approach. CFS Working Paper Series 2008/49.

Gsell, M., and P. Gomber (2009). Algorithmic trading engines versus human traders: Do they behave different in securities markets? In S. Newell, E. A. Whitley, N. Pouloudi, J. Wareham, and L. Mathiassen (Eds.), 17th European Conference on Information Systems, pp. 98–109. Verona.

Harris, L. (2003). Trading and Exchanges: Market Microstructure for Practitioners. Oxford University Press.

Hasbrouck, J., and G. Saar (2013). Low-latency trading. Johnson School Research Paper Series No. 35–2010, AFA 2012 Chicago Meetings Paper. Online. https://ssrn.com/abstract=1695460 or http://dx.doi.org/10.2139/ssrn.1695460 [accessed May 22, 2013].

Hendershott, T., C. Jones, and A. Menkveld (2011). Does algorithmic trading improve liquidity? Journal of Finance 66(1), 1–33.

Hendershott, T., and R. Riordan (2011). Algorithmic trading and information. NET Institute Working Paper No. 09-08.

IOSCO (2011). Regulatory issues raised by the impact of technological changes on market integrity and efficiency. Online. http://www.iosco.org/library/pubdocs/pdf/IOSCOPD354.pdf [accessed January 17, 2012].

Johnson, B. (2010). Algorithmic Trading & DMA. 4Myeloma Press.

Kim, K. A., and S. G. Rhee (1997). Price limit performance: Evidence from the Tokyo Stock Exchange. Journal of Finance 52(2), 885–901.

Kim, Y. H., and J. J. Yang (2004). What makes circuit breakers attractive to financial markets? A survey. Financial Markets, Institutions and Instruments 13(3), 109–146.

Kirilenko, A., A. Kyle, M. Samadi, and T. Tuzun (2017). The flash crash: High-frequency trading in an electronic market. Journal of Finance. Online. https://ssrn.com/abstract=1686004 or http://dx.doi.org/10.2139/ssrn.1686004 [accessed January 6, 2017].

NYSE (2011). NYSE Rules 45-299c. Online. http://rules.nyse.com/NYSETools/PlatformViewer.asp?selectednode=chp%5F1%5F3%5F4%5F20&manual=%2Fnyse%2Frules%2Fnyse%2Drules%2F [accessed January 17, 2012].

Pole, A. (2007). Statistical Arbitrage. Wiley.

Prix, J., O. Loistl, and M. Huetl (2007). Algorithmic trading patterns in XETRA orders. European Journal of Finance 13(8), 717–739.

Riordan, R., and A. Stockenmaier (2012). Latency, liquidity and price discovery. Journal of Financial Markets 15(4), 416–437.

SEC (2010a). Concept release on equity market structure. Online. http://www.sec.gov/rules/concept/2010/34-61358.pdf [accessed January 17, 2012].

SEC (2010b). Risk management controls for brokers or dealers with market access; final rule. Federal Register, 17 CFR Part 240 75(229).

Sellberg, L.-I. (2010). Algorithmic trading and its implications for marketplaces. A Cinnober White Paper.

Tetlock, P. C. (2007). Giving content to investor sentiment: The role of media in the stock market. Journal of Finance 62(3), 1139–1168.

Tetlock, P. C., M. Saar-Tsechansky, and S. Macskassy (2008). More than words: Quantifying language to measure firms’ fundamentals. Journal of Finance 63(3), 1437–1467.

Trade News (2009). 2000–2009: The decade of electronic trading. Online. http://www.thetradenews.com/trading-execution/industry-issues/4038 [accessed January 17, 2012].