• current economic landscape is heavily dominated by a few major companies built around large private markets
    • e.g.:
      • Amazon / Alibaba for e-commerce
      • Apple / Google for apps
      • Uber / Lyft / DiDi for transportation
      • Google / Facebook / Microsoft / Amazon for ads
    • so there’s been some interest (from e.g. these institutions) in characterizing the equilibrium behavior of large markets
  • some relevant strands of work:
    • providing formal foundations for market equilibrium was probably the crowning achievement of classical economic theory
        • this classical work seems to have at least partially been motivated by a desire to provide some sort of ‘scientific’ justification for capitalism (this was the 50s, after all)
        • thus, focus on abstract markets, intuitively corresponding to nation-level economies
      • focused on characterizing existence / uniqueness / welfare properties of equilibria
      • based on fixed-point arguments
        • not that useful for actually finding equilibria
    • more recently, developments in algorithmic game theory for computing equilibria of certain markets
      • for some reasonable models of markets, equilibria can be computed in time polynomial in market size (and sometimes quite fast in practice)
      • but given the scale of the markets of interest described above, often even writing down the entire market is a challenge
    • there’s also been some recent work in graph theory on defining / characterizing the limits of graph sequences
      • some conditions under which some property of a graph will converge as the graph gets big
      • markets are basically just bipartite graphs where vertices are consumers and goods
  • so, it feels like you should be able to do something like this:
    1. sample some not-too-small sub-market from your big market
    2. compute equilibrium on this sub-market
    3. equilibrium on the sampled market should be similar-ish to equilibrium on the whole market

I’ll walk through a very simple example to provide some intuition on when this process could work, and then discuss how much foundational work remains to be done.

I. Simple example

We’ll be extremely concrete here and focus on a fairly trivial market with no production / only monetary endowments / specific utilities (a Fisher market with Cobb-Douglas utilities).

I.0. Setup

  • goods $j = 1, \dots, m$
    • each good $j$ has total supply $s_j$
    • in addition, let $\theta_j \in \{0, 1\}$ denote the binary ‘type’ of good $j$
  • consumers $i = 1, \dots, n$
    • each consumer $i$ has total budget $b_i$ to spend on stuff
    • utility function
      \begin{equation}\label{cbutil}
      u_i(x_i) = \prod_j x_{ij}^{\alpha_{ij}}, \qquad \sum_j \alpha_{ij} = 1
      \end{equation}
      • a little calculus gives us that, given prices $p$, the optimal amount of each item to buy is $x_{ij} = \alpha_{ij} b_i / p_j$
      • thus, in particular, the total fraction of consumer $i$’s budget spent on good $j$ is just $\alpha_{ij}$, independent of price
  • an equilibrium is defined by prices $p$ such that supply and demand are equal: $\sum_i x_{ij} = s_j$ for all $j$
    • note that demand for good $j$ is $\sum_i x_{ij} = \frac{1}{p_j} \sum_i \alpha_{ij} b_i$, because the allocation of money to each product doesn’t depend on price
    • so, the market clearing price for each good is just $p_j = \frac{1}{s_j} \sum_i \alpha_{ij} b_i$
    • so this market is actually trivial
  • our aim is to figure out how much total money is spent on goods of type 0 in equilibrium, which can be written
    \begin{equation}\label{rev0}
    R_0 = \sum_{j : \theta_j = 0} \sum_i \alpha_{ij} b_i
    \end{equation}
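
As a sanity check on this closed form, here’s a minimal numerical sketch (the names `alpha`, `b`, `s`, `theta` mirror the setup above; the random instance is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3                                 # consumers, goods
alpha = rng.dirichlet(np.ones(m), size=n)   # budget shares: each row sums to 1
b = rng.uniform(1.0, 2.0, size=n)           # budgets
s = np.ones(m)                              # supplies
theta = np.array([0, 0, 1])                 # binary good types

# market-clearing prices: p_j = sum_i alpha_ij b_i / s_j
p = alpha.T @ b / s

# Cobb-Douglas demands x_ij = alpha_ij b_i / p_j clear the market exactly
x = alpha * b[:, None] / p[None, :]
assert np.allclose(x.sum(axis=0), s)

# type-0 revenue R_0 = sum over type-0 goods of sum_i alpha_ij b_i
R0 = (alpha.T @ b)[theta == 0].sum()
```

Note that total revenue across all goods equals total budget $\sum_i b_i$, since each row of $\alpha$ sums to 1.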

I.1. Equilibrium existence and computation

Equilibrium existence:

  • this is trivial because we have closed form expression for the equilibrium price vector
  • in more general cases, classical general equilibrium results give us existence under some fairly general assumptions (see Arrow and Debreu 1954)

Equilibrium computation:

  • again, trivial because we have closed form solution
  • more generally, various polynomial time algorithms for doing so:
    • tatonnement-style algorithms in exchange economies where the elasticity of substitution is not too high and consumers are endowed with money (e.g. Cole and Fleischer 2008)
    • convex optimization based algorithms in the linear case (e.g. Devanur Garg Vegh 2013)
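
To illustrate the flavor of such algorithms, here’s a toy tatonnement iteration for the Cobb-Douglas market above (a sketch in the spirit of tatonnement, not the actual Cole-Fleischer procedure): raise the price of over-demanded goods and lower it for under-demanded ones.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3
alpha = rng.dirichlet(np.ones(m), size=n)  # budget shares
b = rng.uniform(1.0, 2.0, size=n)          # budgets
s = np.ones(m)                             # supplies

p = np.ones(m)                             # start from uniform prices
for _ in range(500):
    demand = (alpha * b[:, None] / p).sum(axis=0)  # aggregate Cobb-Douglas demand
    p *= 1.0 + 0.1 * (demand - s) / s              # nudge prices toward clearing

# converges to the closed-form clearing prices p_j = sum_i alpha_ij b_i / s_j
assert np.allclose(p, alpha.T @ b / s)
```

In this trivial market the update is a linear contraction toward the closed-form prices, so convergence is geometric; the interesting cases are the ones without a closed form.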

I.2. Sampling

  • doing the sum in \eqref{rev0} requires $O(nm)$ operations
  • this might be large, in which case we’ll probably want to sample
  • if e.g. $m$ is small, we can just subsample a set $S$ of consumers and approximate the fraction of revenue flowing to type-0 goods with $\sum_{j : \theta_j = 0} \sum_{i \in S} \alpha_{ij} b_i \, / \sum_{i \in S} b_i$
    • standard LLN implies consistency
  • if both $n$ and $m$ are large, then we’ll need to sample in both dimensions
    • … is this actually a thing that we can do?
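
For the one-sided case (small $m$, large $n$), a quick simulation of the consumer-subsampling estimator (the random instance and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 100_000, 50
alpha = rng.dirichlet(np.ones(m), size=n)      # budget shares
b = rng.uniform(1.0, 2.0, size=n)              # budgets
type0 = np.arange(m) % 2 == 0                  # half the goods are type-0

# exact fraction of revenue flowing to type-0 goods
exact = (alpha[:, type0] * b[:, None]).sum() / b.sum()

# estimate the same fraction from a uniform subsample of consumers
idx = rng.choice(n, size=2_000, replace=False)
est = (alpha[idx][:, type0] * b[idx, None]).sum() / b[idx].sum()

# LLN: the subsample estimate is close to the population value
assert abs(est - exact) < 0.02
```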

II. Graph limits and parameter testing

II.0. Graph limits

There’s some quite elegant theory on limits of graph sequences (see Lovasz Szegedy 2004):

  • a graph’s adjacency matrix is just a symmetric function from $[0,1]^2$ to $\{0,1\}$
    • discretize the interval $[0,1]$ into $n$ intervals corresponding to the $n$ vertices of the graph
    • set the function’s value to 1 on a rectangle IFF there’s an edge between the two vertices corresponding to the two intervals defining the rectangle
  • so instead of talking about graphs, just talk about these functions instead
  • it turns out that the right limit objects for these piecewise-constant symmetric functions are symmetric measurable functions $W : [0,1]^2 \to [0,1]$, which are called ‘graphons’
    • the value $W(x, y) \in [0,1]$ can be interpreted as a probability distribution on $\{0, 1\}$, i.e. the probability of an edge

Some further work (Borgs et al. 2007) put a metric on this graphon space (in fact, on a more general space, as they allow for edge and vertex weights)

  • namely, the cut-metric $d_\square$:
    • for two simple graphs $G$, $G'$ on the exact same (labeled) vertex set $V$ with $|V| = n$:
      • $e_G(S, T)$ is the number of edges of $G$ between vertex sets $S$ and $T$
      • \begin{equation}\label{cutdistance0}
        d_\square(G, G') = \max_{S, T \subseteq V} \frac{|e_G(S, T) - e_{G'}(S, T)|}{n^2}
        \end{equation}
        is the maximum difference in edge density between any two vertex subsets
      • intuitively, this amounts to saying that $G$ and $G'$ are ‘macroscopically similar’, in that no matter which vertex subsets you look at, the number of edges between them is roughly the same
        • macroscopic because everything is scaled down by the square of the number of vertices, so that it’s possible for e.g. individual vertices to have dramatically different degrees so long as the total differences are $o(n^2)$
      • note that this distance only makes sense for dense graphs, since otherwise this metric would go uniformly to zero as the graphs got large
    • for two simple graphs with the same number of vertices but different labels, just find the best way to relabel the vertices of $G'$ to match the vertex set of $G$, and then use the above definition: $\hat d_\square(G, G') = \min_\pi d_\square(G, \pi(G'))$, where $\pi(G')$ refers to a re-labeling of the vertex set of $G'$ to match the vertex set of $G$
    • this definition can be generalized to graphons by doing something in the same spirit as above, but let’s just stop here and use $\delta_\square$ to refer to this generalized graphon metric
  • this cut-metric ends up being the right metric for graphon space, and the paper proves these results:
    • graph convergence as defined in Lovasz Szegedy 2004 is equivalent to convergence in $\delta_\square$
    • graphon space is the completion of the space of all graphs under $\delta_\square$
    • graphon space with the $\delta_\square$-metric is compact (once you collapse graphons at $\delta_\square$-distance 0 into equivalence classes)
    • with high probability, sampling a sufficiently large subgraph of some graph $G$ produces a sub-graph that is close in $\delta_\square$ to the original graph
  • these results mean that questions about sampling and convergence of functions of graphs can be rephrased in terms of continuity of functionals on graphon space
    • this is cool
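
For intuition, the labeled cut distance can be computed by brute force on toy graphs (exponential in the number of vertices, so tiny examples only; the function name is mine):

```python
from itertools import chain, combinations

import numpy as np

def cut_distance_labeled(A, B):
    """Brute-force d_cut(G, G') for two graphs on the same labeled vertex set:
    max over vertex subsets S, T of |e_G(S,T) - e_G'(S,T)| / n^2, where
    e_G(S,T) counts ordered pairs (s, t) with an edge."""
    n = len(A)
    subsets = list(chain.from_iterable(combinations(range(n), k) for k in range(n + 1)))
    return max(
        abs(int(A[np.ix_(S, T)].sum()) - int(B[np.ix_(S, T)].sum())) / n**2
        for S in subsets
        for T in subsets
    )

# toggling a single edge of a 4-vertex graph moves it by only 2/16 in d_cut
A = np.zeros((4, 4), dtype=int)
A[0, 1] = A[1, 0] = 1
B = A.copy()
B[2, 3] = B[3, 2] = 1
assert cut_distance_labeled(A, A) == 0.0
assert cut_distance_labeled(A, B) == 2 / 16
```

The $n^2$ normalization is visible here: a single-edge difference contributes $O(1/n^2)$, which is the ‘macroscopic’ scaling mentioned above.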

II.1. Parameter testing and continuity

  • so it turns out that this idea of approximating some parameter of a large graph by subsampling it, computing the parameter on the subsampled graph, and hoping the two values are close has a name: parameter testing
  • the Borgs et al. 2007 paper gives some very useful results for parameter testing (Theorem 6.1) in terms of continuity in $\delta_\square$
    • results are only for simple graphs, as opposed to the previous stuff which applied also to weighted graphs
    • the parameter must be real and bounded
  • the theorem states that these are equivalent (plus some other things we don’t care about):
    • (a) $f$ is testable
    • (d) $f$ can be extended to some $\hat f$ on graphon space that is continuous in $\delta_\square$
    • (e.1-e.3):
      • (e.1) $\forall \varepsilon > 0$, $\exists \varepsilon' > 0$ such that for any two graphs $G$, $G'$ on the same vertex set with $d_\square(G, G') < \varepsilon'$, we have $|f(G) - f(G')| < \varepsilon$
      • (e.2) $f(G[q])$ converges as $q \to \infty$, where $G[q]$ represents the graph obtained by duplicating each vertex of $G$ $q$ times and adding in all the corresponding edges
      • (e.3) $f(G \cup K_1) - f(G) \to 0$ as the number of vertices in $G$ goes to $\infty$, where $G \cup K_1$ is just $G$ except with an additional disconnected vertex
  • so, testability is equivalent to continuity in $\delta_\square$, which is equivalent to some easy-ish-to-check conditions
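
As a quick sanity check of condition (e.2) on the simplest parameter, edge density, the $q$-fold blow-up leaves it exactly unchanged (a toy sketch; `blow_up` is my own helper):

```python
import numpy as np

def blow_up(A, q):
    """G[q]: replace each vertex with q copies; copies of adjacent vertices
    are fully connected, copies of a single vertex are not."""
    return np.kron(A, np.ones((q, q), dtype=A.dtype))

def edge_density(A):
    # ordered pairs with an edge, normalized by n^2
    return A.sum() / len(A) ** 2

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])

# (e.2): f(G[q]) = q^2 |E| / (qn)^2 = f(G) for every q, so it trivially converges
for q in (2, 3, 5):
    assert np.isclose(edge_density(blow_up(A, q)), edge_density(A))
```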

III. Testability of market equilibrium: simple example

III.1. One specification that is testable

Let’s now apply this nice theory of parameter testing to the simple setting above. This basically amounts to showing that the type-0 revenue in \eqref{rev0} is a testable graph parameter.

So let’s first represent the simple example above as a graph:

  • consumers and goods are vertices
  • a consumer $i$ has vertex-weight $b_i$ corresponding to its budget
  • a good $j$ has vertex-weight $s_j$ corresponding to its supply
  • the edge weight between consumer $i$ and good $j$ is consumer $i$’s valuation of good $j$, which is $\alpha_{ij}$

As the above theory applies only to simple graphs, we’ll impose some more constraints:

  • in order to get rid of weights for vertices corresponding to goods:
    • normalize the supply of each good to $s_j = 1$ (this is WLOG)
  • in order to get rid of edge weights:
    • make preferences binary, so that a consumer either likes or doesn’t like a good
    • for the goods that consumer $i$ likes, uniformly set $\alpha_{ij} = 1 / d_i$, where $d_i$ is the number of goods $i$ likes
  • in order to get rid of weights on vertices corresponding to consumers:
    • set the budget of consumer $i$ to $b_i = d_i$, so that people who like more stuff have more money to spend
    • it might be more natural to give consumers equal budget, but we’ll discuss later why this is problematic

This then gives us a simple graph with:

  • $n + m$ total vertices
  • consumer-vertex $i$ is attached to good-vertex $j$ IFF consumer $i$ likes good $j$
  • there are no connections between consumer-vertices, and similarly for good-vertices

So, now \eqref{rev0} can be written $R_0(G) = \sum_{j : \theta_j = 0} d_j$, where $d_j$ is the degree of good-vertex $j$; this is just the total number of edges attached to goods of type 0. We’ll also normalize this by the square of the total number of vertices to construct a graph parameter:

\begin{equation}\label{revparam}
f(G) = \frac{R_0(G)}{(n + m)^2}
\end{equation}

So we need to show that $f$ is testable, which we can do via conditions (e.1-e.3):

  • (e.1):
    • $f$ as we’ve constructed it is literally just the edge density between the set of good-vertices with $\theta_j = 0$ and the entire vertex set
    • the condition that \eqref{cutdistance0} be small just says that the difference in edge density between any two vertex subsets is small across the two graphs
    • which guarantees that the difference in $f$ between the two graphs will also be small
  • (e.2):
    • so we duplicate each vertex $q$ times
    • there are now $q$ copies of each good-vertex and similarly for each consumer-vertex
    • each edge of $G$ turns into $q^2$ edges of $G[q]$, while the vertex count grows from $n + m$ to $q(n + m)$
    • so we have $f(G[q]) = \frac{q^2 R_0(G)}{q^2 (n + m)^2} = f(G)$
    • so duplicating the vertices $q$ times actually keeps $f$ the same
  • (e.3):
    • adding an isolated vertex to $G$ will only impact $f$ by changing the denominator from $(n + m)^2$ to $(n + m + 1)^2$
    • this change is clearly negligible as $n + m \to \infty$

Thus, it follows that $f$ is testable, and so it makes sense to approximate $f(G)$ for a large graph $G$ by taking some subsampled $G'$ from $G$ and then using $f(G')$ to approximate $f(G)$.
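
Concretely, here’s a small simulation of that procedure on a dense random instance of the simple-example graph (the construction and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 3000, 3000
like = rng.random((n, m)) < 0.3        # consumer i likes good j: dense bipartite graph
type0 = np.arange(m) % 2 == 0          # half the goods are type-0

# f(G): edges attached to type-0 goods, normalized by (n + m)^2
f_full = like[:, type0].sum() / (n + m) ** 2

# induced subgraph on k vertices sampled uniformly from all n + m vertices
k = 2000
verts = rng.choice(n + m, size=k, replace=False)
ci = verts[verts < n]                  # sampled consumer-vertices
gj = verts[verts >= n] - n             # sampled good-vertices
f_sub = like[np.ix_(ci, gj)][:, type0[gj]].sum() / k ** 2

# the subsampled parameter is close to the full-graph parameter
assert abs(f_sub - f_full) < 0.01
```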

III.2. Other specifications that are not testable

We made some further assumptions to the simple example in order to apply this parameter testing machinery. Two of them are maybe a bit unnatural, and they were made because the more natural alternatives would have resulted in graph parameters that weren’t testable:

  1. normalizing revenue by $(n + m)^2$ in \eqref{revparam}
    • this is kind of like ‘revenue going to type-0 goods, as a fraction of the hypothetical total revenue supportable by the market’
    • more naturally, we might be interested in something like ‘revenue going to type-0 goods, as a fraction of total spending’
      • unfortunately, this parameter is not testable
    • consider this example:
      • every consumer likes exactly one good, and that good is of type 0
      • so, all revenue goes to type-0 goods
      • now, add in a single edge between every consumer and an arbitrary good of type 1
      • this change makes it so that each consumer now spends half of their income on type-0 goods, so now only half of total revenue goes to type-0 goods
      • however, adding in an edge between every consumer and some type-1 good only involves adding $n$ edges
      • so, as $n \to \infty$, the $d_\square$ between these two different graphs goes to 0
      • thus, no matter how small $\varepsilon'$ gets, you can find two graphs within $\varepsilon'$ of each other in the $d_\square$ metric that nevertheless differ significantly in fraction-of-revenue-going-to-type-0-goods
      • so (e.1) fails
    • the crux of the issue here is that the cut-metric scales everything down by the square of the number of vertices
      • so, any graph parameter must also be similarly scaled in order to be continuous in $d_\square$
  2. assigning each consumer a budget proportional to the number of items they like
    • more naturally, we might want to just give every consumer an equal share of the total budget, i.e. $b_i = 1$
      • in this case, we wouldn’t want to normalize revenue by $(n + m)^2$, because the resulting parameter would be… uniformly 0 as the graph gets large (total revenue is only $n$)
      • it turns out that the revenue share of type-0 goods is also not testable here
    • consider this example (similar to the previous example):
      • a fraction $\varepsilon$ of the consumers only like a single item, and that item is type-0
      • now, for every consumer in this fraction $\varepsilon$, add a single edge to some item of type 1
      • this only entails adding $\varepsilon n$ edges, which is negligible in $d_\square$ as the market gets big
      • however, this fraction $\varepsilon$ of consumers will shift half of their spending from type-0 to type-1 goods
      • thus, this moves the revenue share of type-1 goods by $\varepsilon / 2$, which doesn’t get small as the market gets big
      • so (e.1) fails
    • note that, while the last example relied on the graph being sparse, it’s possible in this example for the graph to be dense
      • so long as the remaining $1 - \varepsilon$ fraction of consumers have $\Theta(n)$ edges on average, the resulting graph is dense
    • the key here is that a contingent of consumers with low edge count controls a non-vanishing share of the total revenue, so that $o(n^2)$ changes to the edge set can significantly change the revenue distribution
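
The first (sparse) counterexample is easy to check numerically: with $\alpha_{ij} = 1/d_i$ and $b_i = d_i$, every edge carries exactly one unit of spending, so revenue shares are just edge counts. An illustrative sketch (the cut-distance bound below is the crude edge-count upper bound, not the exact maximum over subsets):

```python
import numpy as np

def type0_revenue_share(like, type0):
    """With alpha_ij = 1/d_i and b_i = d_i, each edge carries $1, so the
    type-0 revenue share is just the fraction of edges hitting type-0 goods."""
    return like[:, type0].sum() / like.sum()

def cut_dist_upper(likeA, likeB, nverts):
    """Edge-count upper bound on the labeled cut distance: each differing
    (symmetric) adjacency entry shifts e(S,T) by at most 1."""
    return 2 * (likeA != likeB).sum() / nverts**2

for n in (100, 500, 2000):
    m = n + 1                                 # n type-0 goods plus one type-1 good
    type0 = np.r_[np.ones(n, dtype=bool), [False]]
    A = np.zeros((n, m), dtype=bool)
    A[np.arange(n), np.arange(n)] = True      # consumer i likes only type-0 good i
    B = A.copy()
    B[:, n] = True                            # one extra edge per consumer, to the type-1 good
    # the revenue share jumps from 1 to 1/2 ...
    assert type0_revenue_share(A, type0) == 1.0
    assert type0_revenue_share(B, type0) == 0.5
    # ... while the cut distance between A and B vanishes as n grows
    assert cut_dist_upper(A, B, n + m) <= 2 / (n + 1)
```
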

III.3. Can we say anything more general?

We had to trivialize the problem setting to get the testability results. How can we relax these assumptions?

  • consumer preferences are binary
    • more generally, we’d prefer to have this be a real number
      • unfortunately, this would make the corresponding graph have edge-weights
      • presumably Borgs et al. 2007 would have characterized parameter testing for weighted graphs rather than just simple graphs if it were easy
      • so probably it’s not
    • or even more generally, we might want each edge to carry an element of some compact function space
      • Lovasz Szegedy 2010 defines graph limits for ‘compact decorated graphs’ where edges can be associated with elements in a general compact space
      • but I don’t think there’s even an equivalent to the cut-metric defined for this stuff yet?
      • so surely quite a long way to go here before we can do anything
  • consumer budgets are proportional to how many items they like
    • it seems like having budget proportional to how many products a consumer likes will be necessary
    • but maybe we’ll want each consumer to have their own separate proportionality constant
    • this would involve adding vertex-weights to the graph
    • again, probably nontrivial
  • cobb-douglas utilities
    • the specification in \eqref{cbutil} trivializes the problem by making spending independent of price
    • this allowed us to turn a question of market equilibrium into a question about edge density in a graph
    • more generally, we would actually have to worry about the equilibrium prices and how they change with small changes in the market
      • we might have to do this on a case-by-case-basis for specific utilities
      • or maybe there’s some way to tie this price-sensitivity-wrt-cut-metric to something like the elasticity of substitution in the utility functions
        • it feels like something like this probably should hold
    • we might consider adopting the standard approach used to prove continuity of equilibrium in the fixed-number-of-consumers-and-goods case, where we prove that each consumer’s optimal consumption is continuous in the cut-metric via Berge’s maximum theorem
      • this is probably hard, because if we allow variable numbers of products, the mapping from prices to optimal consumption is a function-valued functional, which seems like a pretty difficult thing to work with
      • we’ll probably need to rely on some results like (e.1-e.3), except for real-valued-function-valued functionals rather than bounded real-valued functionals
      • this feels quite hard

IV. So…

  • this is pretty cool stuff
  • it certainly feels like some fairly general results should hold here
  • but a lot more groundwork needs to be laid before this graph limit theory stuff can be used to produce general results about equilibrium behavior of large markets
  • even so, existing theory can still provide intuitive motivation for some reasonable approaches to approximating equilibria of large markets

References

  • Arrow, K. and Debreu, G. (1954). Existence of an equilibrium for a competitive economy. Econometrica 22(3).
  • Borgs, C., Chayes, J., Lovász, L., Sós, V., Szegedy, B., and Vesztergombi, K. (2007). Graph limits and parameter testing.
  • Cole, R. and Fleischer, L. (2008). Fast-converging tatonnement algorithms for one-time and ongoing market problems. STOC 2008.
  • Devanur, N., Garg, J., and Végh, L. (2013). A rational convex program for linear Arrow-Debreu markets.
  • Lovász, L. and Szegedy, B. (2004). Limits of dense graph sequences.
  • Lovász, L. and Szegedy, B. (2010). Limits of compact decorated graphs.