Utility Indifference Pricing Overview


Inspired by Chen and Pennock’s constant utility market maker framework, we explored exponential utility indifference pricing for long-tail perpetual derivatives, but found the resulting prices not robust. Are there robust alternative pricing frameworks?


Our goal is to create a permissionless perpetual derivatives exchange focused on listing markets with illiquid underlyings before our centralized counterparts are able to. To scale to these markets quickly, we aimed to price perpetuals (more precisely, their funding rates) without trading the underlying, i.e. without hedging risk through replication. This led us to explore pricing in incomplete markets.

Utility Indifference Pricing

One framework for pricing in incomplete markets is utility indifference pricing. In short, an agent deciding whether to accept a risky payoff demands additional capital such that their expected utility is kept constant. This additional capital is the indifference price, also known as the certainty equivalent. More formally, following Henderson and Hobson (2004), define the value function

V(x, k)=\sup_{X_T \in \mathcal{A}(x)} \mathbb{E}_\mathbb{P} [u(X_T + kC_T)]

where \mathbb{P} is the real world probability measure, i.e. the distribution of prices at some terminal time T of some risky asset S, \mathcal{A}(x) is the set of all attainable wealths X_T at time T given an initial endowment x, u is a utility function, C_T is the payoff of a claim contingent on the value of S at time T, and k is the number of claims. Then the indifference price p_{C_T}(k) from purchasing k claims of C_T is defined as the solution to

V(x-p_{C_T}(k),k)=V(x, 0)
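To make the definition concrete, here is a toy sketch under strong simplifying assumptions: no trading is possible (so the only attainable wealth is the cash endowment), CARA utility, and a hypothetical two-state claim; all numbers are made up for illustration. It solves V(x - p, k) = V(x, 0) for p by bisection and checks the result against the CARA closed form.

```python
import math

# Illustrative parameters (hypothetical, not from the text above).
gamma = 1.0                           # risk aversion
x = 1.0                               # initial endowment
k = 1.0                               # number of claims purchased
outcomes = [(0.6, 2.0), (0.4, 0.0)]   # (probability, payoff C_T) under P

def u(w):
    """CARA utility u(w) = -exp(-gamma * w) / gamma."""
    return -math.exp(-gamma * w) / gamma

def V(wealth, claims):
    """Value function when no trading is allowed: wealth stays in cash."""
    return sum(p * u(wealth + claims * c) for p, c in outcomes)

def indifference_price(k, lo=-10.0, hi=10.0, tol=1e-10):
    """Solve V(x - p, k) = V(x, 0) for p; V is decreasing in p, so bisect."""
    target = V(x, 0.0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if V(x - mid, k) > target:
            lo = mid    # still better off than without the claim: can pay more
        else:
            hi = mid
    return 0.5 * (lo + hi)

p = indifference_price(k)
# CARA buyer's closed form: p = -(1/gamma) * log E_P[exp(-gamma * k * C_T)]
p_closed = -math.log(sum(pr * math.exp(-gamma * k * c)
                         for pr, c in outcomes)) / gamma
print(p, p_closed)   # the two agree to bisection tolerance
```

The closed form appears because CARA utility factors the endowment out of the expectation; for other utilities only the bisection step survives.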

Indifference prices can be recovered from a dual optimization problem over the set of equivalent martingale measures \mathcal{Q}. Defining the dual as \tilde{V}(y, k) = \inf_{\mathbb{Q}\in\mathcal{Q}}\big(\mathbb{E}_\mathbb{P}[\tilde{u}(\frac{d\mathbb{Q}}{d\mathbb{P}}y)]-y\,\mathbb{E}_\mathbb{Q}[kC_T]\big), where \tilde{u}(y) = \max_x[u(x)-xy] is the convex conjugate of u, and using the relation V(x,k) = \inf_{y>0}[\tilde{V}(y,k)+xy], one can solve

\mathbb{E}_\mathbb{P}\big[u\big(I(\tfrac{d\mathbb{Q}}{d\mathbb{P}}\hat{y}_2)\big)\big]=\mathbb{E}_\mathbb{P}\big[u\big(I(\tfrac{d\mathbb{Q}}{d\mathbb{P}}\hat{y}_1)\big)\big]

for the indifference price p_{C_T}(k), where I is the inverse of u', \hat{y}_2 solves \mathbb{E}_\mathbb{Q}[I(\frac{d\mathbb{Q}}{d\mathbb{P}}\hat{y}_2)]=x-p_{C_T}(k)+\mathbb{E}_\mathbb{Q}[kC_T], and \hat{y}_1 solves \mathbb{E}_\mathbb{Q}[I(\frac{d\mathbb{Q}}{d\mathbb{P}}\hat{y}_1)]=x for a given \mathbb{Q}. Derivations can be found in Elliott and van der Hoek (2009).

Note that the dual formulation also gives the general arbitrage-free price bounds

(\inf_{\mathbb{Q} \in \mathcal{Q}} \mathbb{E}_\mathbb{Q}[D\cdot C_T], \sup_{\mathbb{Q} \in \mathcal{Q}} \mathbb{E}_\mathbb{Q}[D \cdot C_T])

where D is the risk-free discount factor (Staum 2008). It follows that in a complete market, where there exists a unique equivalent martingale measure, the indifference price converges to the market price.
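These bounds are easy to compute in a toy incomplete market. The sketch below (all numbers made up) uses a one-period trinomial market with zero interest rate (so D = 1): one asset, three states, so the martingale constraint leaves a one-parameter family of equivalent martingale measures, which we sweep to bound the price of a call.

```python
# One-period trinomial market: S_0 = 100, three terminal states, r = 0.
S0 = 100.0
ST = [80.0, 100.0, 120.0]                      # terminal prices per state
payoff = [max(s - 100.0, 0.0) for s in ST]     # a call struck at 100

lo, hi = float("inf"), float("-inf")
n = 100000
for i in range(1, n):
    # Martingale constraint: q1*80 + q2*100 + q3*120 = 100 with q1+q2+q3 = 1,
    # which in this symmetric grid forces q1 = q3. Sweep q3 over (0, 0.5).
    q3 = 0.5 * i / n
    q1 = q3
    q2 = 1.0 - q1 - q3
    if min(q1, q2, q3) <= 0.0:
        continue    # equivalent measures need strictly positive state weights
    price = q1 * payoff[0] + q2 * payoff[1] + q3 * payoff[2]
    lo, hi = min(lo, price), max(hi, price)

print(lo, hi)   # approaches the open interval (0, 10)
```

The interval is wide — (0, 10) for an asset worth 100 — which previews the robustness problem discussed later: no-arbitrage alone pins down very little in an incomplete market.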

We now explore a well-studied special case of utility indifference pricing.

Exponential utility indifference

Exponential (CARA) utility is defined as u(x) = -\frac{1}{\gamma}e^{-\gamma x} for some risk aversion parameter \gamma > 0. Exponential utility indifference pricing is well studied and unifies many frameworks in incomplete market pricing. One way to formalize the pricing problem, following Barrieu and El Karoui (2002): given two agents, a buyer looking to buy a claim C_T and a seller looking to sell the claim C_T,

\begin{align*} \text{maximize} \quad &\mathbb{E}_\mathbb{P} [u_b(C_T - p(C_T))]\\ \text{subject to} \quad &\mathbb{E}_\mathbb{P} [u_s(x + p(C_T) - C_T)] \geq u_s(x) \end{align*}

where u_b is the utility function of the buyer, u_s is the utility function of the seller, and x is the initial endowment of the seller. We abuse notation slightly by taking p^*(C_T) to be the optimal indifference price of a claim C_T.

The exponential utility indifference price is

p^*(C_T) = \frac{1}{\gamma_s}\log \mathbb{E}_\mathbb{P} [e^{\gamma_s \cdot C_T}]

where \gamma_s is the risk aversion of the seller.
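As a sanity check, this closed form can be recovered numerically from the seller's participation constraint. The sketch below (made-up two-state claim and parameters) finds by bisection the smallest premium p at which a CARA seller who owes the claim is no worse off than before, and compares it to the closed form above.

```python
import math

# Illustrative parameters (hypothetical).
gamma_s = 0.5                          # seller's risk aversion
x = 1.0                                # seller's initial endowment
outcomes = [(0.5, 0.0), (0.5, 4.0)]    # (probability, payoff C_T seller owes)

def u(w):
    return -math.exp(-gamma_s * w) / gamma_s

def seller_utility(p):
    """Seller receives premium p and pays C_T at maturity."""
    return sum(pr * u(x + p - c) for pr, c in outcomes)

# seller_utility is increasing in p, so the constraint binds at the optimum.
lo, hi = 0.0, 50.0
while hi - lo > 1e-10:
    mid = 0.5 * (lo + hi)
    if seller_utility(mid) < u(x):
        lo = mid
    else:
        hi = mid
p_numeric = 0.5 * (lo + hi)

# Closed form: p* = (1/gamma_s) * log E_P[exp(gamma_s * C_T)]
p_star = math.log(sum(pr * math.exp(gamma_s * c)
                      for pr, c in outcomes)) / gamma_s
print(p_numeric, p_star)   # agree to bisection tolerance
```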

Interestingly, the dual optimization problem is

p^*(C_T)=\sup_{\mathbb{Q}\in\mathcal{Q}} \mathbb{E}_\mathbb{Q} [C_T] - \frac{1}{\gamma_s}\Big(H(\mathbb{Q}||\mathbb{P})-\inf_{\mathbb{Q}\in\mathcal{Q}}H(\mathbb{Q}||\mathbb{P})\Big)

where H(\mathbb{Q} || \mathbb{P}) = \mathbb{E}_\mathbb{P} [ \frac{d\mathbb{Q}}{d\mathbb{P}} \log \frac{d\mathbb{Q}}{d\mathbb{P}} ] is the relative entropy of \mathbb{Q} with respect to \mathbb{P} (Musiela and Zariphopoulou 2004). Note that the functional form of the exponential utility indifference price is that of the entropic risk measure, a convex risk measure. Thus, maximizing exponential utility is dual to minimizing entropic risk.
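To see the duality concretely, consider the special case with no hedging instruments, where \mathcal{Q} is every measure equivalent to \mathbb{P} and \inf_{\mathbb{Q}} H(\mathbb{Q}||\mathbb{P}) = 0, so the dual reduces to the classical entropic variational formula \frac{1}{\gamma}\log \mathbb{E}_\mathbb{P}[e^{\gamma C_T}] = \sup_\mathbb{Q}\{\mathbb{E}_\mathbb{Q}[C_T] - \frac{1}{\gamma}H(\mathbb{Q}||\mathbb{P})\}. The sketch below (illustrative numbers) verifies this on a finite state space, where the supremum is attained at an exponential tilt of \mathbb{P}.

```python
import math
import random

# Illustrative finite state space (hypothetical probabilities and payoffs).
gamma = 2.0
p = [0.5, 0.3, 0.2]
c = [0.0, 1.0, 3.0]

# Primal side: (1/gamma) * log E_P[exp(gamma * C)].
lhs = math.log(sum(pi * math.exp(gamma * ci) for pi, ci in zip(p, c))) / gamma

def objective(q):
    """Dual objective E_Q[C] - (1/gamma) * H(Q || P)."""
    eq = sum(qi * ci for qi, ci in zip(q, c))
    h = sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)
    return eq - h / gamma

# Candidate maximizer: the Gibbs/exponential tilt q_i ∝ p_i * exp(gamma*c_i).
w = [pi * math.exp(gamma * ci) for pi, ci in zip(p, c)]
z = sum(w)
q_star = [wi / z for wi in w]
print(lhs, objective(q_star))   # the two sides agree at the tilted measure

# Spot-check that random measures never beat the tilt (it is the supremum).
random.seed(0)
for _ in range(200):
    r = [random.random() for _ in p]
    s = sum(r)
    assert objective([ri / s for ri in r]) <= lhs + 1e-12
```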

From the dual optimization problem, we observe that taking the limit \gamma_s \rightarrow 0, i.e. as risk aversion vanishes, the indifference price becomes

p^*(C_T) = \mathbb{E}_{\hat{\mathbb{Q}}} [C_T]

where \hat{\mathbb{Q}}= \arg\inf_{\mathbb{Q} \in \mathcal{Q}} H(\mathbb{Q} || \mathbb{P}) is known as the minimal entropy martingale measure (Frittelli 2002). Alternatively, we observe that taking the limit \gamma_s \rightarrow \infty, i.e. as risk aversion grows infinitely large, the indifference price becomes

p^*(C_T) = \sup_{\mathbb{Q} \in \mathcal{Q}} \mathbb{E}_\mathbb{Q} [C_T]

otherwise known as the superhedging price.
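The interpolation between these two limits is easy to see numerically in the unhedged case, where the price is \frac{1}{\gamma_s}\log\mathbb{E}_\mathbb{P}[e^{\gamma_s C_T}]: it tends to \mathbb{E}_\mathbb{P}[C_T] as \gamma_s \rightarrow 0 and to the worst-case payoff \sup C_T as \gamma_s \rightarrow \infty (on a toy finite space with no hedging instruments, the superhedging price is just the maximum payoff). The payoff distribution below is illustrative.

```python
import math

# Hypothetical three-state claim.
p = [0.7, 0.25, 0.05]
c = [0.0, 1.0, 5.0]

def entropic_price(gamma):
    """Unhedged exponential-utility price (1/gamma)*log E_P[exp(gamma*C)]."""
    return math.log(sum(pi * math.exp(gamma * ci)
                        for pi, ci in zip(p, c))) / gamma

mean_payoff = sum(pi * ci for pi, ci in zip(p, c))   # E_P[C_T] = 0.5
print(entropic_price(1e-6), mean_payoff)   # ~equal as risk aversion vanishes
print(entropic_price(50.0), max(c))        # ~equal as risk aversion explodes
```

Intermediate values of \gamma_s sweep out every price in between, which is exactly the calibration problem discussed below: the modeler must pick a point on this curve.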


The problem with the utility indifference pricing framework is two-fold: estimating the real world probability measure \mathbb{P} and specifying a utility function u.

First, since the market is incomplete, there exist many equivalent martingale measures. We assume that calibration to market prices is not possible, since market prices are unlikely to exist for illiquid assets. Hence the choice of an equivalent martingale measure depends heavily on \mathbb{P}. But \mathbb{P} is notoriously difficult to estimate to sufficient accuracy from finitely many sample points.

Second, calibrating the utility function u is equally difficult: how does one quantify risk aversion precisely? From exponential utility indifference pricing, we observe that the choice of risk aversion \gamma interpolates between pricing under the minimal entropy martingale measure, the equivalent martingale measure with the least informational divergence from \mathbb{P}, and the superhedging price, the most conservative price over equivalent martingale measures.

These two issues make the utility indifference pricing framework non-robust, in the sense that the resulting prices are extremely sensitive to parameters that are fundamentally hard to infer (see this simple simulation).
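The fragility with respect to estimating \mathbb{P} is easy to reproduce in a toy setting. The sketch below (illustrative parameters; a lognormal claim) compares repeated plug-in estimates of the unhedged entropic price \frac{1}{\gamma}\log\frac{1}{n}\sum_i e^{\gamma C_i} against the plain sample mean: the former is dominated by the largest observed payoff and swings wildly across samples.

```python
import math
import random

random.seed(42)
gamma = 1.0
n = 500        # sample size available to the estimator
trials = 200   # independent re-estimations from fresh samples

def estimate_prices():
    # Heavy-tailed claim: C_T ~ lognormal(0, 1). Note E_P[exp(C_T)] is in
    # fact infinite here, so every finite-sample entropic estimate is an
    # understatement of a divergent quantity.
    c = [random.lognormvariate(0.0, 1.0) for _ in range(n)]
    mean = sum(c) / n
    entropic = math.log(sum(math.exp(gamma * ci) for ci in c) / n) / gamma
    return mean, entropic

means, entropics = zip(*(estimate_prices() for _ in range(trials)))

def spread(xs):
    return max(xs) - min(xs)

print(spread(means), spread(entropics))   # entropic spread is far larger
```

The point is not the specific numbers but the comparison: the risk-averse price inherits the tail of the estimated \mathbb{P}, which is precisely the part of the distribution that finite samples pin down worst.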


  1. Does there exist a robust approach to utility indifference pricing?
  2. Or are there alternative robust frameworks for incomplete market pricing that may be insensitive to the accuracy of an estimation of \mathbb{P}?

Regarding 2), topological data analysis (TDA) provides a framework that is insensitive to the chosen metric, is robust to noise, and works well in high dimensions. There is currently a lot of interest in using TDA as a preprocessing step on the underlying data before ingesting it into the ML model of choice; in this case, anything related to short-term price prediction.


This is really interesting, will take a look into topological data analysis and see where it applies. Do you have any recommendations for papers incorporating this technique on financial markets?


There isn’t much, as TDA is still fairly novel:

I think for 2) some creativity is required. The best way to understand the use cases is to develop an intuition for what kind of power TDA unlocks and what is possible.

For example, my approach would be to predict tick-level prices and corresponding movements using persistent homology. Reading the barcode diagrams from persistent homology gives you a “true probability distribution” with respect to the underlying data. You don’t need a lot of data, either; for example, I think this paper (using Mapper) used a dataset with fewer than 50 data points.
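To give a flavor of what a barcode is, here is a deliberately minimal, hand-rolled computation of the 0-dimensional bars only (connected components merging under a growing distance threshold), via a union-find over pairwise distances. The point cloud is made up; a real analysis would use a proper library such as giotto-tda, which also computes higher-dimensional features.

```python
import math
from itertools import combinations

# Two well-separated clusters of illustrative 2-D points.
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),   # one tight cluster
          (5.0, 5.0), (5.1, 5.0)]               # a second, far-away cluster

parent = list(range(len(points)))

def find(i):
    """Union-find root lookup with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

edges = sorted((dist(points[i], points[j]), i, j)
               for i, j in combinations(range(len(points)), 2))

# Every component is born at filtration value 0; a bar "dies" at the
# distance where its component merges into another one.
deaths = []
for d, i, j in edges:
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj
        deaths.append(d)

print(deaths)   # three short bars inside clusters, one long bridging bar
```

The long bar is the persistent feature: it records that the data has two well-separated clusters, and its length is stable under small perturbations of the points, which is the robustness-to-noise property mentioned above.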

I am personally using giotto-tda to analyze MEV trade flow on Olympus POL right now, using Mapper to characterize (and thus predict) MEV bot behavior. giotto-tda is a fantastic library for TDA and fits very well into traditional ML data pipelines. Interestingly, I have found that a single MEV bot address has 4 distinct market behaviors, 3 of which are correlated with human volume and 1 that isn’t. From this it becomes straightforward to understand what drives each behavior type and to design better liquidity mechanics that incentivize desirable MEV behavior.


Could you clarify the last portion? Particularly what do you mean by human volume in this case?

I separated the trading flows of humans from those of MEV bots and built a dataset from them. You can find more details on how I did that here.

Now I am expanding the dataset to include some cross-chain data on liquidation events for further analysis of human vs. MEV behavior. The initial hypothesis is that humans don’t act on liquidation events while MEV bots do, because the bots are the ones responsible for liquidations.


Hey, I recently read this paper on geometric arbitrage theory. It’s a good expository piece that connects some elementary differential geometry with analysis frameworks that people in mathematical finance find more familiar.

The broad overview is to analyze arbitrage from the point of view of differential geometry. This makes pricing less sensitive to the accuracy of the estimate of \mathbb{P} (because every smooth manifold is locally Euclidean). Many of the traditionally known asset pricing theorems are essentially recast and strengthened in the language of differential geometry.

The way you apply this to utility indifference pricing is simple: the farther a price is from the geometric arbitrage-free price, the more “risky” it should be considered.