Market Making Article 2 (Not Illustrated, but Much More In-Depth)

I Draw Charts (David Holt)
16 min read · Nov 5, 2021


Sup. For those of you interested in learning more in-depth about the highly technical, complex, competitive and very difficult world of market-making from an absolute fucking moron who may or may not be completely wrong about half of it, this one’s for you guys!

Warning: it’s not an easy read, and you will have to do math with me.

(oh god I probably messed some equation up somewhere huh)

But hopefully this article is a good jumping-off point for those of you who read my first article and want to try getting started with market-making. I’ll talk you through the more important things to consider when designing a rudimentary system as we build one completely on the fly, then briefly explain how trying it manually can work (less than ideal), and point you towards some decent freeware for starters.

In the last article, we mostly focused on one imaginary scenario where we assumed a lot and things more or less worked out pretty well. In the real world though, it’s very different and there are a million dynamics at play to keep track of. But we’re still gonna assume a lot of shit. First off, let’s talk pricing.

Pick a price, any price (except that one)

Especially when the spread is multiple ticks wide, determining “fair price” isn’t obvious. When there isn’t enough volume, or the market is too volatile for anyone to quote tight spreads, not having a model of fair price puts us at risk of someone with a better understanding of it picking off our orders (aka, getting filled because we’re no longer pricing correctly), leaving us on the wrong side of the market with inventory that we can only close at a loss. There’s no single right answer, and unfortunately that’s the case for a lot of the problems we’ll encounter, especially in smaller crypto markets with weird quirks where the “textbook” answer hasn’t even been written down anywhere yet.

The upside is that if you can ad-hoc decent solutions, there’s a good chance you can make good money even if you’re miles off from the undiscovered textbook solution, because everyone else probably is too.

Back to the pricing problem, there are a few “standard” approaches you can try or tweak.

If you’re quoting on a pair that trades in multiple other places, you can use an index price from those other places, which ought to provide a reliable oracle most of the time. On the rare occasions the data-feed gets interrupted, or a component member of that index experiences abnormal behavior though, your understanding of fair price might become severely unreliable.

Using the mid price (the price point directly in the middle of the best bid/offer) as fair price is much easier, but it does leave you reliant on whatever the other marketmakers are doing, since by default they will be deciding what you think fair price is. They may also try to exploit you if they notice it in very small markets, but being specifically noticed or consistently targeted is unlikely in a busy market with multiple marketmakers. If you’re a comparatively small fish in a large and active market with multiple other marketmakers, using mid price to determine what’s fair is often a great solution. As I recall seeing on twitter somewhere in the bell-curve iq meme format, “Quote around mid, the market is probably right”.
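
For the sake of illustration, here’s a minimal sketch of those two fair-price approaches in Python. All the names and numbers are invented; a real system would pull best bid/offer and index components from live feeds.

```python
# Minimal sketch of the two fair-price ideas above. Everything here is illustrative;
# a real system would pull these values from live order book / ticker feeds.

def mid_price(best_bid: float, best_ask: float) -> float:
    """Fair price as the midpoint of the best bid/offer."""
    return (best_bid + best_ask) / 2.0

def index_price(component_prices: list[float], weights: list[float]) -> float:
    """Fair price as a weighted index of the same pair on other venues."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(p * w for p, w in zip(component_prices, weights))

# Quote around our own book's mid, sanity-checking against an external index:
fair_local = mid_price(best_bid=100.02, best_ask=100.08)                 # 100.05
fair_index = index_price([100.04, 100.07, 100.03], [0.5, 0.3, 0.2])      # ~100.05
```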

As an interesting aside, in very slow markets with wide spreads and only one or two other players, you may be able to rely on other marketmakers’ understanding of price and successfully quote continuously in front of them, so long as you keep your size small enough that your undercutting takes away less of their profit than it would cost them to tighten their spread to beat you. E.g., if they quote a $0.06 spread for $10k and the average taker order size is $5k with low variance, it’s cheaper for them to let you quote a $0.04 spread for $1k in front of them than it would be for them to match or beat you, since they’d be giving up at least 33% of their profit to do so, while you’re taking somewhat less than that (depending on the size variance of taker orders). Cat and mouse games are to be expected!
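
A quick back-of-the-envelope check of those numbers, purely illustrative and using the figures from the paragraph above:

```python
# Back-of-the-envelope check of the undercutting example above (illustrative numbers).
incumbent_spread = 0.06   # their spread, quoted for $10k
our_spread       = 0.04   # our tighter spread, quoted for $1k
taker_size       = 5_000  # average taker order, in dollars

# If the incumbent tightens to match us, every fill earns 0.04 instead of 0.06:
profit_given_up_by_matching = 1 - our_spread / incumbent_spread   # ~33%

# If they let us sit in front, they only lose the slice of each taker order we absorb:
share_we_take = 1_000 / taker_size                                # 20%

print(profit_given_up_by_matching, share_we_take)  # ~0.33 vs 0.2 -- cheaper to leave us alone
```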

There are infinite other ways you could try to calculate or derive fair price, but remember that you always have to be able to unload your inventory. If your fair price model has your quotes skewed too far above or below everyone else’s, you’re unlikely to get filled equally on both sides and your model will end up costing you money no matter how superior you believe it is.

Butter, Jam and Other Spreads

Deciding how wide to quote is one of the most challenging and multilayered questions I’ve encountered in my life, probably because I’m bad at math and staying focused. Let’s give it a shot.

To start with, we need to ask a million questions all at the same time. How much volume is the market doing on average, and how much does that number vary? What’s the average taker order size, and how much does it vary? For each tick away from fair price, how much will be on the book on average, and how much does that vary?

Spoiler, you can’t answer that one because you and the other marketmakers are destined to spend eternity trapped in an ever-repeating dance of constant reaction to each other’s behavior, perpetually making changes to how you quote in order to get ahead of each other.

We also need to know what the low time frame volatility and its variance are, what we’ll be paying or receiving in maker fees, whether our competitors are paying the same amount (if the exchange has fee tiers), and then consider other sub-factors like our API ratelimit, how fast we can place and remove orders, and the list goes on.

But let’s think through this. If you quote at the top of the book on each side to maximize the number of spreads, you’re setting the mid price and you’re guaranteed to get picked off when price changes. So that’s not a great idea, unless the market doesn’t move often or by much (so you’ll get picked off infrequently and at low cost) but still does enough volume that you’ll make more from all those spreads than you lose when it moves against you. But if that’s the case, won’t the other marketmakers just re-jump ahead of you? Eventually this continuous leapfrogging would result in an equilibrium where everybody’s quoting at the same level, because quoting any tighter would mean a net loss over time. Except then everyone’s sharing the taker order volume, so they’re each making fewer spreads, which means less money, which means they’re no longer profitable because they get picked off for more than they make again. It’s a Bitmexican standoff!

To some degree, this is indeed what happens, but calculating the ever-changing perfect bid/ask placement and taking all of these variables into account properly and before anyone else can beat you to it is incredibly difficult and relies on near perfect understanding of the market with no room for error… and marketmaking like this certainly can’t account for that one guy on Binance who suddenly decides that he has 3,000 too many Bitcoin and must immediately sell them at market, completely destroying that “average loss when picked off” stat that we were relying on. So we need to adapt our strategy a bit.

There are academic papers on this topic, formulas to follow and all sorts of highly technical mathematical models for ideal marketmaking, but I never learned any of them because I am relatively dumb. So here is where we part ways with the academically correct solutions and gigabrain quant models so we can instead talk about rough estimations and subpar heuristic solutions, because that is what I’m familiar with. Ironically, this next part is gonna be the one that gives you a headache. Tone shift.

Everything in this article going forward is highly likely to be less-than-ideal reasoning at best and entirely based on my own anecdotal experience and logic. Neither of these are terribly reliable or extensive. You’ve been warned.

Clearly, there are a lot of variables at play and we’re heavily reliant on averages for determining our ideal behavior. But reality diverges from averages often and by large amounts. We can and will compensate somewhat by using variance and standard deviations of behavior, always opting to act more cautiously than the averages suggest, but it is important to remember that market behavior, and therefore most of the data we rely on, DOES NOT FOLLOW A NORMAL DISTRIBUTION AND IT IS NOT RANDOM. Every other participant in the market is trying to take our money. Every taker order is accepting our quote ONLY when they believe we are already or will soon be offsides, unless the order is forced. So any time we rely on an average, or an assumption of evenly distributed random behavior, we must be extra cautious and ensure that it pays significantly more than what we estimate our risk to be.

So let’s get back to the original question: how tight should we quote? I’m going to assume that we are quoting in a market that is busy and has multiple other makers. The following model will be of middling complexity in order to highlight design considerations and it will be full of flaws… but it will give you some ideas to play with while presenting most of the important questions that every model needs to answer.

We’ll start with volatility (the standard deviation formula, written out below; I suggest you get very familiar with it). We want to know how volatile the pair is on a very granular level. We can use a tick (or a number of ticks), elapsed volume, or a time period for our data sample point size. The goal is to find a baseline that tells us “how much does this asset bounce around its (very low timeframe/recent) average performance per [data sample], on average”. Now remember, this formula assumes a normal distribution of market movements, which means the numbers on paper are unlikely to match reality. Since we want to price based on the assumption that mid is fair, but we want to know how far price actually trades from it, we’ll calculate based on relative variance between the two instead of plain variance.
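
Written out for the way we’re using it here (how far mark price strays from the recent mean mid), the formula is just a standard deviation:

```latex
% Standard deviation of mark price around the mean mid over the last n samples
% (e.g. n = 60 one-second samples). The squared term is the "relative variance"
% referred to above.
\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\mathrm{mark}_i - \overline{\mathrm{mid}}\right)^2},
\qquad
\overline{\mathrm{mid}} = \frac{1}{n}\sum_{i=1}^{n}\mathrm{mid}_i
```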

Math time

If we use 1 second as our sampling period with the last 60 seconds as our sample size, and determine that the mean mid price is equal to the price 60 seconds ago with a relative variance of mark price of 0.0016 (in dollars squared), that works out to a standard deviation of mark price of $0.04 relative to the mean mid price. This gives us the basis for a formulaic assumption about the future: ~68% of the time, mark price will close somewhere between +$0.04 and -$0.04 from the mean mid price, and ~99.7% of the time it will close somewhere between +/- $0.12 from the mean mid price, which remains unchanged. If I got the math right.
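
Here’s a rough sketch of that calculation; the function is generic, and the lines below it just plug in the example’s values.

```python
import math

def sigma_vs_mean_mid(mark_prices: list[float], mid_prices: list[float]) -> float:
    """Standard deviation of mark price around the mean mid over the sample window
    (the square root of the 'relative variance' described above)."""
    mean_mid = sum(mid_prices) / len(mid_prices)
    variance = sum((m - mean_mid) ** 2 for m in mark_prices) / len(mark_prices)
    return math.sqrt(variance)

# Plugging in the example's values directly: a relative variance of 0.0016 ($^2)
# over 60 one-second samples means sigma = $0.04, so ~68% of period closes land
# within +/- $0.04 of the mean mid and ~99.7% within +/- $0.12.
sigma = math.sqrt(0.0016)                              # 0.04
one_sigma_band, three_sigma_band = sigma, 3 * sigma    # 0.04, 0.12
```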

Given this information, we can assume that if we quote a spread of $0.08 ($0.04 away from mid on each side), price needs to trade one standard deviation or more from the mean mid price to fill us, which happens in around 32% of periods. Crossing the entire 8 cent spread then requires a move of 4 cents or more past the mean to the other side, which happens in around 16% of all periods. Assuming continuous complete fills at each extreme, including loading exposure to the opposite side for the next spread, the average profit per period rounds out to somewhere around 1.28 cents per unit per period.

Already we are highly inexact, since we haven’t accounted for what happens during our selected periods (only what they close at) to see how much and how often price extends beyond the opening and closing values, and therefore we can’t calculate the “accurate” probability of a fill. However, even if we did, that value would still be based on assuming random market behavior, which we already know is also wrong. If you use individual ticks as your sampling period that calculation becomes redundant, but we’re going to move on and continue with bad assumptions for the sake of brevity; just make sure you understand that these numbers are very unreliable and only designed to serve as example solutions for actual questions. I know it’s a lot, bear with me.

Using the same math, we can estimate that offering 2 cent spreads will return 0.8 cents per unit/period, 4 cent spreads return 1.22 cents per unit/period, and 6 cent spreads return 1.35 cents per unit/period. These numbers assume zero fees or rebates, but we would account for those in our spread size numbers before calculating average returns per period.
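
Here’s a short sketch that reproduces those per-period numbers under the same (already flawed) normal assumption, using scipy’s survival function; sigma is the $0.04 from the example above.

```python
from scipy.stats import norm

sigma = 0.04  # 1s standard deviation of mark price around the mean mid (from above)

def expected_profit_per_period(spread: float, sigma: float) -> float:
    """Spread captured times the probability that price trades at least half the
    spread past the mean (so our far quote fills), under the normal assumption."""
    return spread * norm.sf((spread / 2.0) / sigma)

for spread in (0.02, 0.04, 0.06, 0.08):
    cents = expected_profit_per_period(spread, sigma) * 100
    print(f"{spread:.2f} spread -> ~{cents:.2f} cents per unit per period")
# ~0.80, ~1.23, ~1.36, ~1.27 -- in line with the rough figures in the article.
```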

As you can see, the best returns come from the 6 cent spread, with orders placed 3 cents (0.75 standard deviations) from the mean, before factoring in any risk.

But Wait, There’s More!

Now let’s try to think about risk. If price is moving against us while we hold inventory on the opposite side, at an average rate per period that is equal to or greater than our average income rate, we are losing money. And unfortunately, when this occurs, the assumptions that determined our income rate cease to hold even remotely.

In our example, the average market return per period was 0, since the mean remained unchanged throughout. But if the mean price changes over time, the sample set of selected periods is not following a random distribution because it is weighted to one side, otherwise there could have been no shift in the mean. This means that each result on the opposite side of the mean is less probable than we anticipated and we’re much less likely to fill orders on that side. Not only are we losing money to a shifting mean value when we hold inventory, our average profit per period is revealed to be lower than we expected.

Or is it? What if we expected the mean to move, because we calculated that 1m periods were expected to follow a normal distribution of returns (!warning!) around a 60m mean?

If we calculate that 1m periods have a standard deviation of $0.20 around their 60m mean, it stands to reason that the 1s periods must regularly shift their own mean by quite a bit. So what if we simply quoted a 6 cent spread, but stacked orders every tick and then averaged in and out as the 1m periods floated around their mean? Estimating that the 1s periods hold their volatility values on average relative to their shifting mean, we know that that mean is also likely to revert according to the 1m period data, which means we can simply spread our exposure across the orderbook for the same estimated profit per unit per period. We just won’t have nearly as many units at each level.

For our example, we’ll place orders at every price tick that lies within three 1m-period standard deviations of the 1m mean (this translates to +/- $0.60) and size them evenly to split up our max inventory. We’ll open longs below the mean and open shorts above it. Now, 99.7% of the time, we will be averaging that profit rate of 1.35 cents per unit per second (assuming the 1s variance numbers remain the same on average).
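
A rough sketch of that ladder, with the example’s numbers (the tick size, sigma and max inventory are all assumptions carried over from above):

```python
TICK = 0.01  # assumed tick size for the example

def build_ladder(mean_1m: float, sigma_1m: float, max_inventory: float):
    """Orders at every tick within +/- 3 one-minute standard deviations of the 1m mean:
    buys below the mean, sells above it, with max inventory split evenly across levels."""
    n_levels = int(round(3 * sigma_1m / TICK))   # 60 levels per side with sigma_1m = $0.20
    size = max_inventory / n_levels              # 1 unit per level with 60 units max
    bids = [(round(mean_1m - i * TICK, 2), size) for i in range(1, n_levels + 1)]
    asks = [(round(mean_1m + i * TICK, 2), size) for i in range(1, n_levels + 1)]
    return bids, asks

bids, asks = build_ladder(mean_1m=100.00, sigma_1m=0.20, max_inventory=60)
# bids run 99.99 down to 99.40, asks run 100.01 up to 100.60, one unit at each level.
```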

If the average performance of the 1m period shifts to -$0.006 (-$0.36 of performance on the hour), well, that’s only an average drawdown of $0.0001 per unit held per second. We are holding more inventory for longer though, and that indicates an imbalance on the 1m chart, meaning our probabilities are once again revealed to be estimates at best despite all our fancy math. However, with our current setup, we only hold 1/3 of our maximum inventory per standard deviation from the 1m mean, meaning that the drawdown impact is massively less severe. We’re only holding more than 1/3 exposure approximately 32% of the time, and only holding it on the wrong side (a flawed but approximate estimate of) 16% of the time.

If our maximum inventory size happened to be 60 units, the estimate says we’d be earning $0.0135 per second 99.7% of the time, while losing $0.002 or more per second 16% of the time, with a maximum loss per second of $0.006 at that rate of decline if our inventory were full. Since the income is no longer as sensitive to mean price changes, this looks comfortably profitable even though we don’t entirely trust our own math.
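
A quick arithmetic check of those figures, inheriting every hand-wavy assumption from the example:

```python
# Quick check of the numbers above; all assumptions are inherited from the example.
max_inventory  = 60         # units, one per tick level
income_per_sec = 0.0135     # $: ~1.35 cents per unit/period, ~one unit turning over per second
drift_per_min  = -0.006     # $: assumed shift in the 1m mean
drift_per_sec  = drift_per_min / 60                        # -$0.0001 per unit held per second

loss_at_one_third = drift_per_sec * (max_inventory / 3)    # -$0.002/sec holding 20 units
loss_at_full      = drift_per_sec * max_inventory           # -$0.006/sec worst case, fully loaded

print(income_per_sec, loss_at_one_third, loss_at_full)
```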

To ensure we aren’t gradually caught more and more offsides by the drift though, when the mean 1m price moves, we re-center our orders so that we are still only opening below the mean and closing above it. We will also immediately add an additional closing order at the top of book for each unit of inventory that we should no longer be holding based on our new mean price, or a new opening order for each one that we should. E.g., if the 1m mean price has dropped by a total of $0.04 since we began and price is currently below it, we are now long one more unit than we ought to be at the current price, so we immediately send a closing order for that unit. If price is trading above the mean when this happens, we are now short one less unit than we ought to be, and so we send an opening order.
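
A sketch of that re-centering rule (the names and the one-unit-per-tick sizing are assumptions carried over from the example):

```python
def target_inventory(price: float, mean_1m: float,
                     tick: float = 0.01, units_per_tick: float = 1.0) -> float:
    """Units the ladder says we should hold at this price: long below the 1m mean,
    short above it, one unit per tick of distance (the example's sizing)."""
    ticks_from_mean = round((mean_1m - price) / tick)
    return ticks_from_mean * units_per_tick   # positive = long, negative = short

def rebalance_order(price: float, new_mean_1m: float, held: float):
    """When the 1m mean drifts, quote the difference at top of book: a closing order
    for inventory we should no longer hold, or an opening order for inventory we now should."""
    delta = target_inventory(price, new_mean_1m) - held
    if delta == 0:
        return None
    return ("buy" if delta > 0 else "sell", abs(delta))

# E.g. if the mean has drifted down while we sit below it and are still long from
# the old ladder, delta comes out negative and we send a closing sell at top of book.
```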

We’re going to add one last period to look at, using 1h periods and a 24h average. This one will tell us what variance to expect on the mean hourly returns. We can use it as a gut check to see whether the lower timeframes are behaving and calibrated as they should be, or we could build in additional parameters around it.

If the 1h standard deviation from the 24h mean is $1 and the mean hourly performance over the last 24 hours is -$1, we would expect hourly performance to be between -$2 and $0 (flat) about 68% of the time, and between -$3 and +$1 about 95% of the time, landing outside that range only ~5% of the time.

This would indicate that our lower timeframe means are expected to drift quite a bit further, and more negatively, than they currently do, and if our current setup stayed as it is that would mean much lower and likely negative returns. Fortunately, as our average low timeframe values shift, we adjust our strategy to account for the new variable values. When the 1s standard deviation from the 60s mean increases, so does our spread, and therefore our profit per fill. As the 1m standard deviation from the 60m mean increases, we decrease order size but increase the total number of orders, so that we are not sizing too heavily for the volatility when near the 60m mean.

Given the discrepancy, we could choose to manually interfere with some of the variables, decrease the sizing, pause the bot, anything we want really. An external check like this is really more of a comfort item but can be very useful.
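
If you wanted to automate that external check rather than eyeball it, a sketch might look like this (the threshold and the actions are placeholders, not recommendations):

```python
def gut_check(mean_hourly_return: float, sigma_1h: float,
              hourly_drift_implied_by_1m: float, tolerance_sigmas: float = 0.5) -> str:
    """Compare the hourly drift implied by the low-timeframe data against the
    24h-sample hourly distribution; flag when they disagree by more than an
    (entirely arbitrary) tolerance."""
    gap = abs(hourly_drift_implied_by_1m - mean_hourly_return)
    return "pause-or-resize" if gap > tolerance_sigmas * sigma_1h else "ok"

# Example numbers: the 1m drift implies roughly -$0.36/hour, while the 24h sample
# says hourly performance averages -$1 with a $1 standard deviation -- a 0.64 sigma gap.
print(gut_check(mean_hourly_return=-1.0, sigma_1h=1.0, hourly_drift_implied_by_1m=-0.36))
# -> "pause-or-resize" with the arbitrary half-sigma tolerance above
```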

And with that, we’ve finished the logic design.

System Complete (Said No One Ever)

So that’s it, that’s a system. Is it any good? Fuck no, we relied on a million different assumptions and it’s not nearly risk-averse enough even if the assumptions were correct, along with countless other problems. But it’s not meant to be a perfect system; it’s meant to show you the sort of things you’ll need to consider. Every variable and decision above can and should be tested, optimized and theorized about; most of them were completely arbitrary. There are many, many alternatives to every single decision made above.

But is it profitable? In the long run, no. It’ll go bust when the market does something far enough outside of its poorly padded expectations of “normal”. But it might make a few bucks in between. That’s the trap of assuming a correct understanding of actual probability. It’s easy to get comfy and assume things will always work because they always have, until something that’s never happened before… does.

Made it this far? Impressive

If you’re still interested in all this shit but don’t know how to actually get started after brainstorming, try doing it manually to start. Most people seem surprised to hear anyone’s tried it manually, but after reading this article it should be clear that marketmaking does not require you to be at the top of the book. “Pure” marketmaking is all about mean reversion, and although doing it at very rapid pace for small spreads is usually the most profitable, the concepts are broadly applicable at any distance from mid.

You really do want to be fast though.

To get started, just open your favorite exchange, start a test subaccount and place some buy and sell orders at equal spacing, as close to price as you can manage while still being able to replace them when they fill. Maybe try implementing some of the logic from the example system; all of the parameters can be re-created as indicators in TradingView. If you have no idea how to choose any logic, start by grid trading and then add parameters.

Create rules for yourself, see if they work or not and why. Manual is slow and capital inefficient, but you’ll start to see and understand some of the ephemeral math concepts we talked about earlier. How far out should you stack orders? Is mean reverting a good idea past a certain point of drawdown, or should you dump it? Start testing those averages and questions, then once you think you have answers, start assuming you’ll be wrong at some point and design rules to minimize loss when it happens.

If you can make a system that works consistently and has hard risk limits, you’re doing well.

For those interested in publicly-available software, Hummingbot is free and open-source, which allows you to modify it as you please. It’s an excellent place to start and connects to a wide variety of exchanges. Mango Markets also offers an open-source bot, though it’s designed to work specifically with their DEX on Solana. Of course, you can always code or commission your own as well, but if you’re ready for that, you probably don’t need to be reading this. Just be careful and redundant; software mistakes can cost a lot of money.

Summary

If you got through this, congrats. Seriously. If you actually enjoyed it, you’re a fkn weirdo but you’ll probably end up picking me off in the orderbook soon.

If you’re wondering, my first model was a gridbot whose logic I periodically updated and re-applied, and it ended up behaving a little like the example model. My latest is less different than you might think, but the small changes add up to massive differences. So keep trying new shit!

If any of my smart marketmaker friends are reading, would love any feedback. This entire article is based completely on limited understanding built on trial and error and is far from comprehensive.

If this was useful, claps, likes, shares, blowjobs and follows are always appreciated, thanks for reading!

*Edit 11/09/21, corrected erroneous use of the term “covariance” when referring to relative variance between variables. Simplified portions of the mathematical operation descriptions.
