Would You Prefer Qualitative Trading?: A layman’s thoughts on computational finance

When I was an undergrad, in the early 1990s, I spent a little time trying to figure out how to invest what meager savings I had (generated by my lucrative summer pizza delivery gig). My first efforts were directed at figuring out what public equities I might invest in. [Sidenote: Ultimately I ended up putting all of my savings in my roommate’s start-up, which I do not recommend but which did end up allowing me to buy a house five years later.] I ordered a bunch of annual reports from the investor relations departments of companies I was interested in, got a pencil and some graph paper, and did some rudimentary calculations, like: how fast are these companies growing? what are their PEs? Obviously this seems ridiculous to those weaned on Yahoo Finance, but trust me, much of what I was doing those days would seem ridiculous to most sensible people.

My point is this: even as a 19-year-old, liberal-arts-trained neophyte, my first instinct upon deciding to invest in stocks was to do some math. So when I read all of these people attacking “quant traders”, I have to wonder: as opposed to what? Instinctive traders? Psychic traders? Insider traders? As someone who now spends his time figuring out how to invest in financial services start-ups, understanding the actual contours of this new phenomenon is important to me, so I’ve done some thinking about it. I will leave the question of who to demonize up to you, but I will do my best to lay out some frameworks that hopefully introduce some nuance into the debate.

In actual fact, most of what people are complaining about when they decry quant trading is a sub-set of quantitative trading (admittedly the largest part) called high frequency trading or, somewhat interchangeably, low latency trading. These are really two different techniques often used in combination: high frequency = lots of trades per second; low latency = getting the trades into the marketplaces very quickly. High frequency/low latency trading marries two skill sets: a) an analytical pursuit of transient pricing anomalies and b) a hardware/software/communications configuration designed to get trades into the various markets more quickly than the next guy. Part (b) is important because of the key word “transient” in part (a). This kind of quantitative trading has been around for decades, essentially since the advent of computers. Firms of this type ingest massive amounts of market data, across all types of securities and geographies, and then look for correlations, e.g., if the price of this commodity future goes up 5%, the stock price of this Indian company should trade down 2%. [Note: this example is obviously over-simplified to the point of parody, as will be other examples, so please check your condescending vitriol at the door, if you can.] They carefully back-test these observations to determine their validity and robustness over time, then build trading strategies around looking for events in the future that mirror these historical correlations.
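To make that observe-then-back-test loop a bit more concrete, here is a deliberately toy sketch in Python. Everything in it is invented for illustration — the synthetic “commodity” and “stock” return series, the built-in lag-one relationship, the naive sign-based signal — so treat it as the shape of the exercise rather than anything a real firm would run:

```python
# A toy illustration only: synthetic returns for a hypothetical "commodity future"
# and a hypothetical "related stock", plus a naive lagged-correlation back-test.
# Real firms use vastly more data, instruments, and statistical care.
import numpy as np

rng = np.random.default_rng(42)

n_days = 1000
# Synthetic daily returns: the stock partially follows yesterday's commodity move.
commodity_ret = rng.normal(0, 0.01, n_days)
noise = rng.normal(0, 0.01, n_days)
stock_ret = np.empty(n_days)
stock_ret[0] = noise[0]
stock_ret[1:] = -0.4 * commodity_ret[:-1] + noise[1:]  # built-in lagged link

# "Observation" step: measure the lag-1 correlation on an in-sample window.
split = n_days // 2
insample_corr = np.corrcoef(commodity_ret[:split - 1], stock_ret[1:split])[0, 1]
print(f"in-sample lag-1 correlation: {insample_corr:.2f}")

# "Back-test" step: trade the signal out of sample.
# A negative correlation means a commodity up-move predicts a stock down-move,
# so we short the stock the next day (and vice versa).
signal = -np.sign(insample_corr) * np.sign(commodity_ret[split:-1])
strategy_ret = signal * stock_ret[split + 1:]

print(f"out-of-sample mean daily return: {strategy_ret.mean():.5f}")
print(f"naive daily Sharpe-like ratio:   {strategy_ret.mean() / strategy_ret.std():.2f}")
```

The punchline of the real version is the same as the toy one: once the relationship is measured and validated, the remaining edge is almost entirely in how fast you can act on it.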

The problem with this type of quant trading is that, over time, with everyone working on the same data set, everyone makes the same observations. So the question becomes who can trade on them first. Hence the massive investment in infrastructure to turn these quantitatively-derived investment ideas into low latency trades. What matters here is that, to minimize cycle time, you quickly realize you need to cut the slowest link out of the chain first: the human brain. As a result, we now have computers trading directly with other computers, and this has created many of the market structure issues we are dealing with now, such as “flash crashes” and high levels of volatility. In addition, even if you get there first, the profits available in a given trade are often tiny. Hence the other customary component of this trading style, the high frequency part: if you’re making a penny per trade, you’ve got to do a lot of trading.

But let’s back up a second. The reason that this branch of quantitative trading led to high frequency trading is that, in a sense, the observations are “obvious” (at least if you have $100MM worth of computing power and all of the market data in the world). As such, making the observations is a commodity, albeit an expensive one; it is trading on them first that is the money-maker. But what about types of quant trading that are predicated on making investment decisions that are non-obvious? Specifically, investment decisions based on information coming from outside of the markets, versus strictly from inside the markets, i.e., prices.

Quite clearly, that is what most great equity investors do. They get to know a company well, analytically and otherwise, make a prognostication about the future of the company, and then buy or sell the stock when the rest of the world disagrees with their prognostication. The frequency of their trades can be high or low, depending on how long it takes for the rest of the world (i.e., the markets) to figure out that they were correct. Another simplistic example: I think Coke is worth $70, and it’s trading at $67. I buy at $67 with a plan to sell at $70, regardless of whether it hits $70 in 10 minutes or 6 months. If I’m truly disciplined about that, this could be a high frequency trade indeed, if the market comes to agree with me quickly.
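Just to show how mechanical that discipline really is, here is a tiny Python sketch of the rule in the Coke example. The fair-value estimate, entry price and price path are all made up; the only point is that the holding period falls out of the market, not out of the strategy:

```python
# Toy sketch of the "price vs. my estimate of value" rule from the Coke example.
# The fair-value figure and the price path are invented; no real data here.

FAIR_VALUE = 70.00   # my prognostication of what the stock is worth
ENTRY_PRICE = 67.00  # I buy when the market disagrees with me by enough

def decide(price: float, holding: bool) -> str:
    """Return the action a disciplined value investor would take at this price."""
    if not holding and price <= ENTRY_PRICE:
        return "buy"   # market price is below my estimate of value
    if holding and price >= FAIR_VALUE:
        return "sell"  # market has come around to my view; exit
    return "hold"      # wait, whether that takes 10 minutes or 6 months

# Walk a made-up price path; the holding period is whatever the market dictates.
holding = False
for price in [67.40, 66.90, 68.20, 69.50, 70.10]:
    action = decide(price, holding)
    if action == "buy":
        holding = True
    elif action == "sell":
        holding = False
    print(f"price {price:6.2f} -> {action}")
```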

Our view is that the next great revolution will be applying the information technology techniques of high frequency trading to this kind of non-obvious investing, which relies on the intake and synthesis of exogenous data (from outside the markets) to make pricing observations. Because these kinds of observations will have a durability far in excess of endogenous observations (those based on readily available market data), they will generally be far less dependent on speed and, as a result, not destabilizing. These new types of firms will use computers to enable and validate human investing intuition, rather than using computers to try to be first past the post in a race to the bottom. Two of the emerging winners in this space are Two Sigma and Kinetic Trading. I would humbly submit that these new firms should be called quantitative investors, rather than quantitative traders, and I look forward to backing some of the best of these players.


15 Responses to Would You Prefer Qualitative Trading?: A layman’s thoughts on computational finance

  1. Dave Famolari says:

    Great post!  More lasting value will come from exploring new data sources rather than from building new tools to mine common data. 

  2. Damian says:

I’ve been thinking for a while that perhaps HFT has reached its peak this year. If the SEC does try to level the playing field a bit by forcing something like a 500ms order display before cancellation, then many of their opportunities may disappear.

We’re now also seeing the emergence of organized software platforms, both retail-oriented and aimed at programmers, which could indicate that HFT has followed the same history as the great Gold Rush. That is, the first people made money by pulling gold out of the river, then out of the ground, but as claims got harder and harder to make money on, the people making the money were the people selling the shovels.

The two firms you point to are more recent examples of the quant investor – there are too many others to mention, but I think of DE Shaw as perhaps the prototypical one. Here in Boston, I think of Panagora as another example. My point is that these firms – which use quantitative research to do long-term investing – are not new, and we may expect the same problem that exists within the HFT crowd. Meaning, if they are all exploring the same edges, they may find them harder to come by. As my old boss used to always say: “there’s a limited amount of alpha in the world.”

    Comments welcome. Great article!

    • mattcharris says:

      your old boss sounds like my kind of guy (or gal). i generally agree, though there is a lot of data in the world, and a lot more created each day, almost none of which is being interpreted for trading signal right now. feels like an opportunity. not an easy or simple one, given that most of the data is unstructured and textual, but still, an opportunity.

      • Damian says:

Well, these guys are already doing a fair amount of the alternative data stuff – feeding in earnings, free cash flow, relative value, momentum, internet buzz, news – you name it, it’s probably being looked at. One issue is that, in my experience, existing VC models can’t really fund new companies in this area. Would love to hear your comment on this issue.

        • mattcharris says:

          i would agree that historically vcs have not invested in asset management firms of this type (or any type). i think the reason is similar to why vcs don’t invest in services firms … they don’t typically generate equity value because the assets walk out the door every day. my view is that quantitative investing firms of the type i’m describing will create equity value, given the legitimate and defensible IP assets at the core of these firms. frankly, the asset management field has a pattern of relatively brisk M&A activity, notwithstanding a recent slowdown due to the global financial crisis. further, nearly half of that M&A activity is targeted at alternative asset managers, up from historical levels in the 20% range. this dates from before my time, but the rule in the 1980s was that vcs only did hardware deals, not software deals, because software deals didn’t have enough intellectual property to create equity value. times change.

      • Damian says:

        Matt – given that, do you have any interest in hearing a pitch? Not sure what your criteria would be for defensible IP, but I’ve been trying to get capital to start up a quant asset management company that I’ve been working on for 3 years. Feel free to tell me “no” – my feelings will not be hurt.

  3. Thomas Johnson says:

    This isn’t really very new. AQR, for example, has been doing this since it was founded, as has Dimensional Fund Advisors.

    • mattcharris says:

      Thanks for this. These two organizations are pretty opaque, as it relates to what exactly they are doing, but my understanding is that their work using inputs outside of market data is primarily correlating simple and objective events to pricing outcomes, not the kind of complex and “squishy” events that are truly and lastingly non-obvious. But they are definitely candidates to figure it all out.

  4. David Soloff says:

Matt, your thoughts are very elegant. I’d add that the ‘exogenous-data-introduction value chain’ you very smartly uncover and explicate in this post can and should be extended further. The startups and companies that deliver real value in this regard may live further upstream, not simply introducing or delivering these exogenous data sources into the closed markets system, but rather serving as agents of identification, cultivation, capture and synthesis of these data sources into high-utility information products. Trading firms like Two Sigma and Kinetic will distinguish themselves in their ability to source, model and apply these exogenous data sources to the proprietary models they execute or develop on behalf of others.

    • mattcharris says:

bingo. the one caveat is that it is/can be hard to make money as a data vendor to these guys. this is probably a whole post unto itself, but one of the critical issues for any data vendor to the investment community is how quickly the value of its information degrades as the number of consumers grows. on one hand is a company like GLG (where my partner bo and i were small investors), which created a platform whereby there was relatively little degradation, and it both spread like wildfire AND was able to maintain premium price points. most other companies face a situation where they can get $1mm/year from one hedge fund for exclusive access to data, or they can get little or nothing from 100 hedge funds for non-exclusive access. neither is particularly interesting. only occasionally can a company transcend this problem and go from being valuable only on an exclusive basis to being a “need to have” product that people will pay a premium for on a non-exclusive basis (GLG, Bloomberg, etc). one of the rabbis of this space, jonathan glick, refers to that period of time when you go from having one or two exclusive, high-paying customers to having a whole universe of non-exclusives as the “dark side of the moon”. trust me, it’s cold out there.

      having said that, if you are a company like, say, metamarkets, and you are defining and creating access to a whole new field of data for investors and others, you are going to royally kick ass and i will likely retire on the proceeds.

      • Damian says:

        Exactly right – GLG has a scalable hedge fund model because they concentrate on individual consultations – a one-to-one conversation that isn’t syndicated. In contrast, anyone selling, say, individual syndicated reports to hedge funds (ironically the way GLG started) based on proprietary data will run into exactly the problem that you’ve described.

        Metamarkets seems concentrated on the media buying space as opposed to the financial space – but it’s easy to see how media buying is moving rapidly towards an exchange model.

  5. wealthofinfo says:

Very helpful framework, thanks for putting this out there. If you “zoom out” and think about the history and future of investing, would you agree that the boundaries of what you call “endogenous” data actually expand over time as various metrics become first quantifiable, then “proven,” and then more widely accepted by the market? Today’s Google search frequency or Twitter mention score or Facebook friend growth rate might someday be looked at alongside P/E, if and when correlations to price moves are sufficiently established.

    Clearly, if the adoption rate of proven correlations has a Moore’s Law dynamic (not coincidentally), the value proposition shifts rapidly to finding sources of data that are either proprietary or simply not (yet) readily quantifiable. In the latter case, you are betting on “intuition” to generate hypotheses (to then be scientifically validated) that are beyond the capabilities of statistical software to initially identify. To those of us in the data mining world, this makes perfect sense: you’re advocating a way beyond the local optimum that technology identifies quickly within existing data sets. “Qualitative trading” may be a bad idea, but qualitative hypothesis generation that envisions roles for new data sources is the only way to win big going forward.

    • mattcharris says:

all spot on, but in addition to hypothesis generation as the differentiating competency i would submit back-testing. it is (relatively) easy to back-test market data against market performance, but it is very hard to back-test complex and, in particular, text-based inputs and events.
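a quick toy sketch of what i mean, in Python — the headlines, keyword lists and returns below are all invented, and real text signals need far more care with timestamps, coverage and scoring, but it shows that before you can even compute a correlation you first have to turn messy text into a timestamped number:

```python
# Toy sketch of why text-based back-testing is harder: the text has to be
# converted into a dated numeric signal before any statistics can be run.
# Headlines, keyword lists, and returns are all invented for illustration.
import numpy as np

POSITIVE = {"beats", "record", "upgrade", "growth"}
NEGATIVE = {"misses", "recall", "downgrade", "lawsuit"}

def crude_sentiment(headline: str) -> int:
    """Count positive minus negative keywords -- a stand-in for real NLP."""
    words = set(headline.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

# (date, headline) pairs and the stock's return on the following day.
headlines = [
    ("2011-03-01", "Acme beats estimates on record growth"),
    ("2011-03-02", "Analyst downgrade follows product recall"),
    ("2011-03-03", "Acme announces dividend"),
    ("2011-03-04", "Supplier lawsuit misses key deadline"),
]
next_day_returns = np.array([0.012, -0.020, 0.001, -0.004])

scores = np.array([crude_sentiment(h) for _, h in headlines])
corr = np.corrcoef(scores, next_day_returns)[0, 1]
print(f"sentiment scores: {scores}")
print(f"correlation with next-day returns: {corr:.2f}")
```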
