Abstracts ECON-CS
Optimal Algorithm for Bayesian Incentive-Compatible Exploration (Lee Cohen)
We consider a social planner faced with a stream of myopic selfish agents. The goal of the social planner is to maximize social welfare; however, she is limited to using only information asymmetry (regarding previous outcomes) and cannot use any monetary incentives. The planner recommends actions to agents, but her recommendations need to be Bayesian Incentive Compatible to be followed by the agents. Our main result is an optimal algorithm for the planner in the case where the actions' realizations are deterministic and have limited support, making significant progress on this open problem. Our optimal protocol has two interesting features. First, it always completes the exploration of a priori more beneficial actions before exploring a priori less beneficial actions. Second, the randomization in the protocol is correlated across agents and actions (and not independent at each decision time).
Machine Learning for Mechanism Design (Nina Balcan)
In this talk I will discuss how machine learning theory tools can be adapted and extended to analyze important aspects of data-driven mechanism design. I will discuss revenue maximization in the setting where the mechanism designer has access to samples from the distribution over buyers' values, not an explicit description thereof. I will present a general technique for providing sample complexity bounds, that is, bounds on the number of samples sufficient to ensure that if a mechanism has high average revenue over the set of samples, then that mechanism will have high revenue in expectation over the buyers' valuation distribution. I will discuss numerous applications of this result to both pricing mechanisms (including posted-price mechanisms and multi-part tariffs) and auctions (including second price auctions with reserves and classes of affine maximizer auctions).
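To make the sample-based viewpoint concrete, here is a minimal sketch (not from the talk) of estimating the revenue of a single posted price from sampled buyer values and picking the empirically best price; the function names and the restriction of candidate prices to the samples are illustrative assumptions.

```python
def empirical_revenue(price, sampled_values):
    """Average revenue of posting `price` to buyers with the sampled values."""
    return sum(price for v in sampled_values if v >= price) / len(sampled_values)

def best_posted_price(sampled_values):
    """Pick the posted price with the highest average revenue on the samples.

    Restricting candidates to the sampled values loses nothing for the empirical
    objective: raising a price to the next sample above it keeps the same set of
    buyers and can only increase revenue.
    """
    return max(set(sampled_values), key=lambda p: empirical_revenue(p, sampled_values))
```

Sample complexity bounds of the kind discussed in the talk quantify how many samples are needed before such an empirically best price is also near-optimal on the true distribution.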
Fiduciary Bandits (Omer Ben-Porat)
Recommendation systems often face exploration-exploitation tradeoffs: the system can only learn about the desirability of new options by recommending them to some user. Such systems can thus be modeled as multi-armed bandit settings; however, users are self-interested and cannot be forced to use the system. We ask whether exploration can nevertheless be performed in a way that scrupulously respects agents' interests---i.e., by a system that acts as a fiduciary. More formally, we introduce a model in which a system faces an exploration-exploitation tradeoff under the constraint that it can never recommend any action that it knows yields lower reward in expectation than an agent would achieve if it acted alone. Our main contribution is a positive result: an asymptotically optimal, incentive compatible, and ex-ante individually rational recommendation algorithm.
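As a minimal illustration of the fiduciary constraint (not the paper's algorithm), the sketch below filters the arms a system may recommend when rewards are deterministic: the benchmark is the expected reward, given everything the system knows, of the arm an agent would pick using the prior alone. The names and the deterministic-reward simplification are assumptions made for the example.

```python
def fiduciary_recommendations(prior_means, observed):
    """Arms a fiduciary recommendation system may still recommend.

    prior_means: dict arm -> prior expected reward.
    observed: dict arm -> realized reward for arms already explored
              (rewards are assumed deterministic, so one observation reveals the arm).
    """
    # The agent acting alone would pick the arm with the highest prior mean.
    alone = max(prior_means, key=prior_means.get)
    # Its expected reward given the system's information is the benchmark.
    benchmark = observed.get(alone, prior_means[alone])
    allowed = []
    for arm, prior in prior_means.items():
        expected = observed.get(arm, prior)   # known realization if explored, else the prior mean
        if expected >= benchmark:
            allowed.append(arm)
    return allowed

# Example: the a priori best arm turned out poorly, so every other arm becomes recommendable.
print(fiduciary_recommendations({"a": 0.6, "b": 0.5}, {"a": 0.2}))
```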
On the power and limits of dynamic pricing in combinatorial markets (Ben Berger)
We study the power and limits of 'optimal dynamic pricing' in combinatorial markets; i.e., dynamic pricing that leads to optimal social welfare. Previous work by Cohen-Addad et al. [EC'16] established the existence of optimal dynamic prices for unit-demand buyers, and showed a market with coverage valuations that admits no such prices. A major open problem is finding the frontier of markets (i.e., valuation functions) that admit optimal dynamic prices. In this work we establish positive and negative results that shrink the existing gap. On the positive side, we provide tools for handling markets beyond unit-demand valuations. In particular, we give a characterization of optimal allocations for weighted rank functions of uniform matroids (hereafter, uniform-matroid valuations). This characterization allows us to partition the items into equivalence classes according to the role they play in achieving optimality. Using these tools, we provide a poly-time optimal dynamic pricing algorithm for up to $3$ buyers with uniform-matroid valuations.
On the negative side, we establish a maximal domain theorem, showing that for every non-gross substitutes valuation, there exist unit-demand valuations such that the obtained market admits no optimal dynamic pricing. This result is analogous to the seminal result of Gul and Stacchetti [JET'99] for Walrasian equilibrium. Yang [JET'17] (who found a bug in the original proof) established a different, incomparable version of the maximal domain theorem. En route to our maximal domain theorem for optimal dynamic pricing, we provide the first complete proof of the original theorem of Gul and Stacchetti.
Fair Allocation of Structured Set Systems (Arpita Biswas)
We consider the problem of fairly allocating indivisible goods while satisfying feasibility constraints defined over the goods. The collection of all feasible sets of goods forms a matroid. We first consider a simple set system, called a partition matroid. In this setting, we are given a partition of the entire set of goods---i.e., the goods are categorized---and a limit is specified on the number of goods that can be allocated from each category to any agent. The objective here is to find a fair allocation in which the subset of goods assigned to any agent satisfies the given cardinality constraints. This problem naturally captures a number of resource-allocation applications, and is a generalization of the well-studied (unconstrained) fair division problem. The two central notions of fairness, in the context of fair division of indivisible goods, are envy-freeness up to one good (EF1) and the (approximate) maximin share guarantee (MMS). We show that the existence and algorithmic guarantees established for these solution concepts in the unconstrained setting can essentially be achieved under cardinality constraints. Specifically, we develop efficient algorithms which compute EF1 and approximate MMS allocations in the constrained setting. Furthermore, focusing on the case wherein all the agents have the same additive valuation, we establish that EF1 allocations exist and can be computed efficiently even under general matroid constraints.
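As a small illustration of the constrained setting (a sketch under simplifying assumptions, not the algorithms from the talk), the snippet below checks partition-matroid feasibility and runs a greedy round-robin that respects the per-category caps; the names and the round-robin rule are assumptions made for the example, and by itself this rule does not certify EF1 or MMS.

```python
def is_feasible(bundle, categories, limits):
    """Partition-matroid feasibility: at most limits[c] goods from each category c."""
    counts = {}
    for good in bundle:
        c = categories[good]
        counts[c] = counts.get(c, 0) + 1
    return all(counts[c] <= limits[c] for c in counts)

def round_robin_with_caps(goods, agents, categories, limits, value):
    """Agents take turns picking their most valuable remaining feasible good.

    value: dict (agent, good) -> additive value. A pick is allowed only if adding
    the good keeps the agent's bundle feasible for the partition matroid.
    """
    remaining = set(goods)
    bundles = {a: [] for a in agents}
    while remaining:
        progress = False
        for a in agents:
            allowed = [g for g in remaining
                       if is_feasible(bundles[a] + [g], categories, limits)]
            if allowed:
                pick = max(allowed, key=lambda g: value[(a, g)])
                bundles[a].append(pick)
                remaining.remove(pick)
                progress = True
        if not progress:   # every remaining good would violate some agent's cap
            break
    return bundles
```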
The two-sided game of Googol and sample-based prophet inequalities (Andres Cristi)
The secretary problem and the game of Googol are classic models for online selection problems that have received significant attention in the last five decades. We consider a variant of the problem and explore its connections to data-driven online selection. Specifically, we are given n cards with arbitrary non-negative numbers written on both sides. The cards are randomly placed on n consecutive positions on a table, and for each card, the visible side is also selected at random. The player sees the visible side of all cards and wants to select the card with the maximum hidden value. To this end, the player flips the first card, sees its hidden value and decides whether to pick it or drop it and continue with the next card. In this talk we will present an algorithm for the natural objective of maximizing the expectation of the selected hidden value, similar to the prophet inequality. We will show a guarantee of at least $0.63518$ with respect to the expected maximum hidden value. Our algorithm results from combining three basic strategies. One is to stop whenever we see a value larger than all of the initial $n$ visible numbers. The second one is to stop the first time the last flipped card's value is the largest of the $n$ currently visible numbers on the table. The third one is similar to the latter, but to stop it additionally requires that the last flipped value be larger than the value on the other side of its card. We apply our results to the prophet secretary problem with unknown distributions, but with access to a single sample from each distribution. In particular, our guarantee improves upon $1-1/e$ for this problem, which is currently the best known guarantee and only applies to the i.i.d. prophet inequality with samples.
This is based on joint work with José Correa, Boris Epstein and José Soto.
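The three stopping rules are concrete enough to simulate. Below is a minimal sketch (purely illustrative; the combination used in the paper to reach $0.63518$ is more involved) that plays one instance under each rule; the names, and the convention of taking the last card when a rule never fires, are assumptions.

```python
import random

def play(cards, rule):
    """Run one stopping rule on an instance of the two-sided game of Googol.

    cards: list of (visible, hidden) pairs in table order.
    Returns the hidden value of the card the player ends up with.
    """
    table = [visible for visible, _ in cards]   # the n numbers currently showing
    initial_max = max(table)
    for i, (visible, hidden) in enumerate(cards):
        table[i] = hidden                        # flipping card i reveals its hidden side
        if rule == "beat_initial" and hidden > initial_max:
            return hidden
        if rule in ("beat_table", "beat_table_and_back"):
            is_largest_showing = hidden == max(table)
            beats_own_back = hidden > visible
            if is_largest_showing and (rule == "beat_table" or beats_own_back):
                return hidden
    return cards[-1][1]                          # never stopped: take the last card

# Rough comparison against the expected maximum hidden value on random instances.
random.seed(0)
rules = ("beat_initial", "beat_table", "beat_table_and_back")
alg_total, max_total = {r: 0.0 for r in rules}, 0.0
for _ in range(2000):
    cards = [tuple(random.sample([random.random(), random.random()], 2)) for _ in range(10)]
    max_total += max(hidden for _, hidden in cards)
    for r in rules:
        alg_total[r] += play(cards, r)
ratios = {r: alg_total[r] / max_total for r in rules}
```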
Non-Clairvoyant Dynamic Mechanism Design with Budget Constraints and Beyond (Yuan Deng)
We design a non-clairvoyant dynamic mechanism under budget and ex-post individual rationality constraints that is dynamic incentive-compatible and achieves non-trivial revenue performance, even without any knowledge about the future. In particular, our dynamic mechanism obtains a constant approximation to the optimal dynamic mechanism having access to all information in advance. Our dynamic mechanism is enabled by a general design framework for dynamic mechanisms under complex environments, coined Lossless History Compression (LHC) mechanisms. LHC mechanisms compress the history into a state carrying the least historical information without losing any generality in terms of either revenue or welfare. In particular, the characterization works for i) almost arbitrary constraints on the outcomes, ii) private type distributions that are publicly correlated, and iii) any objective function defined on the historical reports, allocations, and the cumulative payments. We further prove that a non-clairvoyant dynamic mechanism is dynamic incentive-compatible if and only if it is equivalent to a stage-wise incentive-compatible and state-utility-independent mechanism, where the latter means that the buyer's expected utility at each stage is independent of the state.
Auction Design under Interdependent Values (Alon Eden)
We study combinatorial auctions with interdependent valuations. In such settings, every agent has a private signal and a valuation function that depends on the private signals of all the agents. Interdependent valuations capture settings where agents lack information to determine their own valuations. Examples include auctions for artwork or oil drilling rights. For single-item auctions, under some restrictive conditions (the so-called single-crossing condition), full welfare can be achieved. However, in general, there are strong impossibility results on welfare maximization in the interdependent setting. This is in contrast to settings where agents are aware of their own valuations, where the optimal welfare can always be obtained by an incentive compatible mechanism. Motivated by these impossibility results, we study welfare maximization for interdependent valuations through the lens of approximation. We introduce two valuation properties that enable positive results. The first is a relaxed, parameterized version of single crossing; the second is a submodularity condition over the signals. We obtain a host of approximation guarantees under these two notions for various scenarios.
Based on joint works with Michal Feldman, Amos Fiat, Kira Goldner and Anna Karlin.
A General Framework for Endowment Effects in Combinatorial Markets (Tomer Ezra)
The endowment effect, coined by Nobel Laureate Richard Thaler, posits that people tend to inflate the value of items they own. This bias has been traditionally studied mainly using experimental methodology. Recently, Babaioff et al. proposed a specific formulation of the endowment effect in combinatorial markets, and showed that the existence of Walrasian equilibrium with respect to the endowed valuations extends from gross substitutes to submodular valuations, but provably fails to extend to XOS valuations. We propose to harness the endowment effect further. To this end, we introduce a principle-based framework that captures a wide range of different formulations of the endowment effect (including that of Babaioff et al.). We equip our framework with a partial order over the different formulations, which (partially) ranks them from weak to strong, and provide algorithms for computing endowment equilibria with high welfare for sufficiently strong endowment effects, as well as non-existence results for weaker ones. Our main results are the following:
• For markets with XOS valuations, we provide an algorithm that, for any sufficiently strong endowment effect, given an arbitrary initial allocation S, returns an endowment equilibrium with at least as much welfare as in S. In particular, the socially optimal allocation can be supported in an endowment equilibrium; moreover, every such endowment equilibrium gives at least half of the optimal social welfare. Evidently, the negative result of Babaioff et al. for XOS markets is an artifact of their specific formulation.
• For markets with arbitrary valuations, we show that bundling leads to a sweeping positive result. In particular, if items can be prepacked into indivisible bundles, we provide an algorithm that, for a wide range of endowment effects, given an arbitrary initial allocation S, computes an endowment equilibrium with at least as much welfare as in S. The algorithm runs in poly time with poly many value (resp., demand) queries for submodular (resp., general) valuations. This result is essentially a black-box reduction from the computation of an approximately-optimal endowment equilibrium with bundling to the algorithmic problem of welfare approximation.
Based on a joint work with Michal Feldman and Ophir Friedler.
Reducing Inefficiency in Carbon Auctions with Imperfect Competition (Kira Goldner)
We study carbon cap-and-trade schemes: a policy tool used to distribute licenses and control the social cost of pollution. A license grants its holder the right to pollute a unit of carbon. We focus on the allocation mechanism most commonly deployed in practice: the uniform price auction with a price floor and/or ceiling. This mechanism is not truthful; we quantify its strategic vulnerabilities, and use as a benchmark the welfare of the best uniform price auction with a price floor and ceiling under truth-telling behavior. We then define a subclass of "safe-price auctions", and show how to choose a number of licenses and a high enough price floor such that, at any Bayes-Nash equilibrium, the better of a "safe-price" auction and a single-bidder contract gives a constant-factor approximation to our benchmark.
A Compact, Logical Approach to Scaling Economic Theories (Yannai Gonczarowski)
Strategy-Proof Bilateral Trade (Zi Yang Kang)
This paper studies fixed-price mechanisms in bilateral trade with ex ante symmetric agents. We show that the optimal price is particularly simple: it is exactly equal to the mean of the agents' distribution. The optimal price guarantees a worst-case performance of at least 1/2 of the first-best gains from trade, regardless of the agents' distribution. We also show that the worst-case performance improves as the number of agents increases, and is robust to various extensions. Our results offer an explanation for the widespread use of fixed-price mechanisms for size discovery, such as in workup mechanisms and dark pools.
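A quick numerical illustration of the mean-price rule (a sketch, not from the paper; the uniform distribution and the trade convention below are assumptions): with a fixed price p, trade happens exactly when the buyer's value is at least p and the seller's cost is at most p, and the realized gains can be compared with the first-best gains E[max(buyer - seller, 0)].

```python
import random

def gains_at_fixed_price(price, pairs):
    """Average gains from trade under a fixed price: trade iff buyer >= price >= seller."""
    return sum(b - s for b, s in pairs if b >= price >= s) / len(pairs)

def first_best_gains(pairs):
    """Average gains if trade happened whenever it is efficient."""
    return sum(max(b - s, 0.0) for b, s in pairs) / len(pairs)

random.seed(1)
# Both values drawn i.i.d. from the same distribution (ex ante symmetric agents).
pairs = [(random.random(), random.random()) for _ in range(100_000)]
mean_price = 0.5   # the mean of the uniform [0, 1] distribution
ratio = gains_at_fixed_price(mean_price, pairs) / first_best_gains(pairs)
# The worst-case guarantee says this ratio is at least 1/2 for every distribution;
# for the uniform distribution it comes out around 0.75.
```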
The Adaptive Complexity of Maximizing Gross Substitutes Functions (Ron Kupfer)
We study the adaptive complexity of maximizing a monotone Gross Substitutes function under a cardinality constraint. Our main result is an algorithm that achieves a 1-ε approximation in O(log n) adaptive rounds for any constant ε > 0, which is an exponential speedup in parallel running time compared to previously studied algorithms for Gross Substitutes functions. We also show two lower bounds. The first is that no non-adaptive algorithm obtains an approximation better than 1/2 + ε. The second is that, under the assumption that queries are of size O(k), where k is the cardinality constraint, no algorithm obtains a constant factor approximation in o(log n/log(log n)) rounds. The lower bounds extend to the class of OXS functions.
Joint work with Eric Balkanski, Sharon Qian and Yaron Singer.
Quality Selection in Two-Sided Markets: A Constrained Price Discrimination Approach (Bar Light)
Online platforms collect rich information about participants, and then share this information back with participants to improve market outcomes. In this paper we study the following information disclosure problem of a two-sided market: how much of its available information about sellers' quality should the platform share with buyers to maximize its revenue?
One key innovation in our analysis is to reduce the study of optimal information disclosure policies to a constrained price discrimination problem. The information shared by the platform induces a "menu" of equilibrium prices and sellers' expected qualities. Optimization over feasible menus yields a price discrimination problem. The problem is constrained because feasible menus are only those that can arise in an equilibrium of the two-sided market for some information disclosure policy.
We analyze this constrained price discrimination problem, and apply our insights to two distinct two-sided market models: one in which the platform chooses prices and sellers choose quantities (similar to ride-sharing), and one in which sellers choose prices (similar to e-commerce). We provide conditions under which a simple information structure, which bans a certain portion of sellers from the platform and shares no information about the remaining participating sellers, maximizes the platform's revenue.
Exploration, Exploitation and Incentives (Yishay Mansour)
A classical tradeoff in reinforcement learning is between exploration and exploitation, where one needs to trade off gathering additional knowledge (exploring) with using the current knowledge to maximize the return (exploiting). When the system also involves strategic agents, one needs in addition to consider the incentives of those agents, who are maximizing their own reward. The goal is to maximize the social welfare through controlling the flow of information, while avoiding any monetary payments. The main focus of the talk will be on competition between multiple learning algorithms. Here, the users play three distinct roles: they are customers that generate revenue, they are sources of data for learning, and they are self-interested agents who choose among the competing algorithms. Our goal is to understand to what extent competition incentivizes the adoption of better learning algorithms.
Based on joint works with Lee Cohen, Ilan Kremer, Motty Perry, Alex Slivkins, Vasilis Syrgkanis and Steven Wu.
Alexa, Why Do People Buy Seemingly Irrelevant Items in Voice Product Search? (Liane Lewin-Eytan)
One emerging benefit of voice assistants is to facilitate the product search experience, allowing users to express orally which products they seek, and to take actions on retrieved results such as adding them to their cart or sending the product details to their mobile phone for further examination. Looking at users’ behavior in product search, supported by a digital voice assistant, we have observed an interesting phenomenon where users purchase or engage with search results that are objectively judged irrelevant to their queries. In this work, we analyze and characterize this phenomenon. We provide several hypotheses as to the reasons behind it, including users’ personalized preferences, the product’s popularity, the product’s indirect relation with the query, the user’s tolerance level, the query intent, and the product price. We address each hypothesis by conducting thorough data analyses and offer some insights with respect to users’ purchase and engagement behavior with seemingly irrelevant results. We conclude with a discussion on how this analysis can be used to improve voice product search services.
Multidimensional Disclosure (Giorgio Martini)
I study a model of verifiable information disclosure in which the Sender’s type is multidimensional. I propose a new explanation for why complete unraveling does not occur in practice, in situations such as firms disclosing financial information, politicians revealing conflicts of interest, and participants in two-sided matching markets choosing which attributes to show. When the Sender’s preferences over market beliefs are sufficiently convex, equilibria feature partial disclosure. Types that are low along any dimension pool together and reveal nothing. Thus, unlike in traditional one-dimensional models, complete unraveling does not occur. Convex preferences arise when the Sender is uncertain over which dimensions matter, or is disclosing to a committee or jury with heterogeneous preferences.
Some practical issues in recent combinatorial auction design (Paul Milgrom)
In a Vickrey auction, if one bidder has an option to invest to increase the value of its item, the combined mechanism including investments is still a full optimum. For any α<1, however, there exist monotone outcome functions that guarantee a fraction α of the optimum in the worst case but such that the associated threshold auction game with investments by one bidder can lead to arbitrarily small fractions of the optimum being achieved: the best guarantee is zero. We show that if a monotone outcome function 'respects improvements' (a new property that we define) and guarantees a fraction α of the optimum, then in the equilibrium of the threshold auction game with investments, at least a fraction α of the optimum is achieved. As examples, the greedy algorithm for the generalized knapsack problem and several of its variants all respect improvements.
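To convey the flavor of the 'respects improvements' property, here is a minimal sketch (an illustrative assumption, not the paper's formal construction) of a greedy rule for a plain knapsack allocation; raising a winner's value only moves her earlier in the greedy order, with at least as much capacity still free when she is reached, so she keeps winning.

```python
def greedy_knapsack_winners(bids, capacity):
    """Greedy allocation by value density for a knapsack setting.

    bids: list of (bidder, value, size) triples; capacity: total size available.
    Bids are considered in decreasing value-per-unit-size order and accepted
    whenever they still fit. A winner who increases her value is considered no
    later than before, so she still fits and still wins.
    """
    order = sorted(bids, key=lambda b: b[1] / b[2], reverse=True)
    winners, used = [], 0.0
    for bidder, value, size in order:
        if used + size <= capacity:
            winners.append(bidder)
            used += size
    return winners

# Example: bidder "a" remains a winner after raising her value from 6 to 9.
print(greedy_knapsack_winners([("a", 6, 3), ("b", 5, 4), ("c", 4, 4)], capacity=8))
print(greedy_knapsack_winners([("a", 9, 3), ("b", 5, 4), ("c", 4, 4)], capacity=8))
```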
Approximation Schemes for Revenue Maximization via Symmetries (Divya Mohan)
Optimal mechanisms for multi-item revenue maximization can be prohibitively complex (unlike the single-item case resolved by Myerson). There has been a long line of work studying simple mechanisms through the lens of approximation, and more recently exploring the tradeoffs between simplicity and optimality. In this talk, we consider a revenue-maximizing seller with n items facing a single unit-demand buyer. Our main result is that a $(1-\epsilon)$-approximation to the optimal revenue for a unit-demand buyer can be found in time quasi-polynomial in n (even when the value distribution is unbounded). We introduce the notion of *symmetric menu complexity* of a mechanism, which counts the number of distinct options the buyer may purchase, up to permutations of the items. We show that the above approximately optimal mechanism has quasi-polynomial symmetric menu complexity.
Joint work with Pravesh Kothari, Ariel Schvartzman, Sahil Singla, and Matt Weinberg.
Efficient Adjustment Dynamics (Ellen Muir)
We analyze an asset market with sequential arrival of agents in which imbalances of supply and demand are an implication of efficiency, without any frictions. This setup permits us to unambiguously categorize states of the market as buyers', sellers' or neutral markets and to provide clear-cut yet intuitive definitions of the breadth, depth, thickness, and liquidity of a market. As these concepts relate only to efficiency, they are primitive economic notions. The efficient adjustment dynamics to shocks, defined as out-of-equilibrium states, involve "fire trades"—the immediate liquidation of trades whose level exceeds the efficient storage threshold—followed by slow adjustments to a neutral state. Our framework also allows us to analyze optimal policy interventions, whereby a benevolent policymaker, observing the state of the order book, can improve next period's distribution at some cost to society.
Joint work with Simon Loertscher.
A Second Look at the Incentive Auction: Exploring Trade-offs in the Incentive Auction Design Space (Neil Newman)
Over 13 months in 2016–17 the US Federal Communications Commission conducted an "incentive auction" to repurpose radio spectrum from broadcast television to wireless internet. In the end, the auction grossed $19.8 billion USD, $10.05 billion USD of which was paid to 175 broadcasters for voluntarily relinquishing their licenses across 14 UHF channels. Stations that continued broadcasting were assigned potentially new channels to fit as densely as possible into the channels that remained. The focus of this talk is taking another look at the incentive auction design. We investigated the impact of a wide range of auction variations by running extensive simulations on a large computer cluster over the past couple of years, leveraging an auction simulator and a realistic model of bidder values. Our goal was to understand more deeply how well the auction worked, which elements of the design were most important, and which variations of the design might have led to even better outcomes (and hence might be important to consider in future auctions).
Neural Networks for Predicting Human Interactions in Repeated Games (Gali Noti)
We consider the problem of predicting human players' actions in repeated strategic interactions. Our goal is to predict the dynamic step-by-step behavior of individual players in previously unseen games. We study the ability of neural networks to perform such predictions and the information that they require. We show, on a dataset of normal-form games from experiments with human participants, that standard neural networks are able to learn functions that provide more accurate predictions of the players' actions than established models from behavioral economics. The networks outperform the other models in terms of prediction accuracy and cross-entropy, and yield higher economic value. We show that if the available input is only a short sequence of play, economic information about the game is important for predicting the behavior of human agents. However, interestingly, we find that when the networks are trained with long enough sequences of history of play, action-based networks do well and additional economic details about the game do not improve their performance, indicating that the sequence of actions encodes sufficient information for success in the prediction task.
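For concreteness, the snippet below is a minimal sketch (not the architecture used in the talk) of an action-based predictor: a recurrent network reads a sequence of one-hot encoded past actions, optionally concatenated with game features such as payoffs, and outputs a distribution over the player's next action. All names and dimensions are illustrative assumptions.

```python
import torch
from torch import nn

class NextActionPredictor(nn.Module):
    """Predict a player's next action from the history of play (illustrative only)."""

    def __init__(self, input_dim, hidden_dim, n_actions):
        super().__init__()
        self.rnn = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_actions)

    def forward(self, history):
        # history: (batch, rounds, input_dim) one-hot actions plus optional game features
        _, (h_last, _) = self.rnn(history)
        return self.head(h_last[-1])     # logits over the player's next action

# Usage sketch: 2-action games, 8 rounds of history, trained with cross-entropy loss.
model = NextActionPredictor(input_dim=4, hidden_dim=32, n_actions=2)
logits = model(torch.zeros(16, 8, 4))    # batch of 16 dummy histories
loss = nn.functional.cross_entropy(logits, torch.zeros(16, dtype=torch.long))
```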
Secretary Algorithms for Online Assignment Problems (Rebecca Reiffenhauser)
Online assignment problems have received growing interest due to the evolving field of e-commerce, especially ad auctions. The famous secretary problem and its well-known optimal solution can be extended and transferred quite intuitively to yield (near-)optimal algorithms and truthful mechanisms for rich problems like online weighted bipartite matching. The talk introduces this line of research and briefly outlines exemplary techniques, as well as their limitations.
Financial Networks with Derivatives: Complexity and Systemic Risk (Steffen Schuldenzucker)
Smoothed Analysis of Multi-Item Auctions with Correlated Values (Ariel Schvartzman)
Principal-Agent VCG Contracts (Elisheva S. Shamash)
We study a game of complete information with multiple principals and multiple common agents. Each agent takes an action that can affect the payoffs of all principals. Prat and Rustichini (Econometrica, 2003), who introduced this model, assume first price contracts: each principal offers monetary transfers to each agent conditional on the action taken by the agent. We define a notion of VCG contracts, a restricted natural class of contractible contracts, and study its effect on the existence of efficient pure subgame perfect equilibrium outcomes. We identify a "unitary balancedness" condition that is necessary and sufficient for the existence of a pure subgame perfect equilibrium (SPE) with VCG contracts. As a consequence, we show that the class of instances of this game that admit an efficient SPE with VCG contracts strictly contains the class of instances that admit an efficient SPE with first price contracts. Although VCG contracts broaden the existence of pure subgame perfect equilibria, we show that the worst-case welfare loss in any SPE outcome with VCG contracts is no worse than the respective worst-case loss with first price contracts.
AI-driven mechanism design (Weiran Shen)
Due to its huge success in industry, mechanism design has become one of the central research topics at the interface of economics and computer science. However, we are still faced with challenges in both theory and practice. In theory, due to the huge design space and agent strategy space, how to design optimal mechanisms for various settings still remains largely unknown. In practice, many assumptions made in theory do not hold, resulting in a gap between theory and application. We propose the AI-driven mechanism design framework, which contains an agent model and a mechanism model that interact with each other. By combining AI techniques with mechanism design theory, we are able to solve some problems that cannot be solved using tools from either domain alone. We show that our framework is helpful for both theoretical analysis and practical applications, providing an alternative way of dealing with these problems.
Robust Non-Bayesian Social Learning (Segev Shlomov)
We introduce the class of virtually additive non-Bayesian learning heuristics to aggregate beliefs in social networks. A virtually additive heuristic is characterized by a single function that maps a belief to a real number, the virtual belief. To aggregate beliefs, an agent simply sums up the virtual beliefs of his neighbors to obtain his new virtual belief. We provide two fundamental learning results for virtually additive heuristics. The first result focuses on the robustness of learning with respect to the initial information of the agents.
In a canonical setting with a binary state and conditionally i.i.d. signals, we show that whenever it is possible to naively learn the state robustly, it is possible to do so with a virtually additive heuristic. The second result shows that naive learning with virtually additive heuristics can hold without the common prior assumption.
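As a concrete illustration (a sketch only; the choice of the log-odds map as the virtual-belief function, and all names, are assumptions for the example), one round of a virtually additive heuristic in a binary-state setting can be written as follows: each agent converts neighbors' beliefs to virtual beliefs, sums them, and maps the sum back to a belief.

```python
import math

def to_virtual(belief):
    """One possible virtual-belief function for a binary state: the log-odds map."""
    return math.log(belief / (1.0 - belief))

def from_virtual(virtual):
    """Map a virtual belief (a real number) back to a belief in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-virtual))

def one_round(beliefs, neighbors):
    """One update round of a virtually additive heuristic on a social network.

    beliefs: dict node -> current belief that the state is 1.
    neighbors: dict node -> iterable of nodes whose beliefs the node observes.
    Each node sums its neighbors' virtual beliefs and adopts the resulting belief.
    """
    return {i: from_virtual(sum(to_virtual(beliefs[j]) for j in nbrs))
            for i, nbrs in neighbors.items()}

# Example: a path of three agents, each observing itself and its direct neighbors.
beliefs = {1: 0.6, 2: 0.5, 3: 0.7}
neighbors = {1: [1, 2], 2: [1, 2, 3], 3: [2, 3]}
beliefs = one_round(beliefs, neighbors)
```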
Raising the Bar: Certification Thresholds and Market Outcomes (Steve Tadelis)
Certification is a ubiquitous tool that alleviates information asymmetries in markets, yet little is known about the impact of certification selectivity on market outcomes. Exploiting a policy change on eBay, we study how a more selective certification threshold affects entry, exit, and incumbent behavior. We develop a stylized model which shows that changes in selectivity impact the distribution of quality and prices in predictable ways. Using rich data from hundreds of online categories on eBay.com, we find support for the model’s hypotheses. Our results help inform the design of certification selectivity in electronic and other markets.
Balancing the Robustness and Convergence of Tatonnement (Yixin Tao)
A major goal in Algorithmic Game Theory is to justify equilibrium concepts from an algorithmic and complexity perspective. One appealing approach is to identify robust natural distributed algorithms that converge quickly to an equilibrium. This work addresses a lack of robustness in existing convergence results for discrete forms of tatonnement, including the fact that it need not converge when buyers have linear utility functions. This work achieves greater robustness by seeking approximate rather than exact convergence in large market settings. More specifically, this work shows that for Fisher markets with buyers having CES utility functions, including linear utility functions, tatonnement will converge quickly to an approximate equilibrium (i.e., at a linear rate), modulo a suitable large market assumption. The quality of the approximation is a function of the parameters of the large market assumption.
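For intuition, the sketch below implements a plain multiplicative tatonnement for a Fisher market with CES buyers: prices of over-demanded goods rise and prices of under-demanded goods fall, proportionally to relative excess demand. The demand formula follows the standard CES solution; the step rule and all names are simplifying assumptions, not the update analyzed in the paper.

```python
import numpy as np

def ces_demand(prices, budgets, weights, rho):
    """Aggregate demand per good in a Fisher market with CES buyers.

    prices: (m,); budgets: (n,); weights: (n, m) CES coefficients; rho < 1, rho != 0.
    Buyer i spends budget b_i across goods in proportion to a_ig^sigma * p_g^(1-sigma),
    where sigma = 1 / (1 - rho).
    """
    sigma = 1.0 / (1.0 - rho)
    share = (weights ** sigma) * (prices ** (1.0 - sigma))
    spend = budgets[:, None] * share / share.sum(axis=1, keepdims=True)
    return (spend / prices).sum(axis=0)

def tatonnement(prices, budgets, weights, rho, supply, step=0.2, rounds=500):
    """Multiplicative tatonnement: p_g <- p_g * (1 + step * excess_g / supply_g)."""
    for _ in range(rounds):
        excess = ces_demand(prices, budgets, weights, rho) - supply
        prices = prices * (1.0 + step * excess / supply)
    return prices

# Toy market: two buyers, two goods, unit supply of each good.
prices = tatonnement(np.array([1.0, 1.0]),
                     budgets=np.array([1.0, 1.0]),
                     weights=np.array([[2.0, 1.0], [1.0, 2.0]]),
                     rho=0.5, supply=np.array([1.0, 1.0]))
```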
Buy-many mechanisms are not much better than item pricing (Yifeng Teng)
Multi-item mechanisms can be very complex, offering the buyer many different bundles that could even be randomized. Such complexity is thought to be necessary, as the revenue gaps between randomized and deterministic mechanisms, or between deterministic and simple mechanisms, are huge even for additive valuations. We challenge this conventional belief by showing that these large gaps can only happen in restricted situations. These are situations where the mechanism overcharges a buyer for a bundle while selling individual items at much lower prices. Arguably this is impractical in many settings, because the buyer can break his order into smaller pieces, paying a much lower price overall. Our main result is that if the buyer is allowed to purchase as many (randomized) bundles as he pleases, the revenue of any multi-item mechanism is at most O(log n) times the revenue achievable by item pricing, where n is the number of items. This holds in the most general setting possible, with an arbitrarily correlated distribution of buyer types and arbitrary valuations.
We also show that this result is tight in a very strong sense. Any family of mechanisms of subexponential description complexity cannot achieve better than a logarithmic approximation, even against the best deterministic mechanism and even for additive valuations. In contrast, item pricing, which has linear description complexity, matches this bound against randomized mechanisms.
This is joint work with Shuchi Chawla and Christos Tzamos.
Estimating Approximate Incentive Compatibility (Ellen Vitercik)
In practice, most mechanisms for selling, buying, matching, voting, and so on are not incentive compatible: agents can improve their utilities by bidding strategically. We show how to estimate the extent to which an agent can improve his utility by bidding strategically, given samples from the distribution over agents' values. We do so by measuring the maximum utility an agent can gain by misreporting his value, on average over the samples, assuming his true and reported values are from a finite subset of the value space. The primary challenge is proving that the maximum utility gain over this finite subset nearly matches the maximum utility gain overall, despite the volatility of the utility functions we study. We apply our tools to several well-studied mechanisms, including the first-price and generalized second-price auctions.
This is joint work with Nina Balcan and Tuomas Sandholm.
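A minimal sketch of such a sample-based estimate for a single-item first-price auction (illustrative assumptions throughout: the names, the grid of candidate reports, and the tie-breaking rule are not from the paper): for each candidate misreport, average the utility over sampled competing bids and compare with truthful bidding.

```python
def estimated_gain(true_value, rival_bid_samples, candidate_reports):
    """Estimate how much a bidder can gain by misreporting in a first-price auction.

    rival_bid_samples: sampled highest competing bids, one per sampled profile.
    candidate_reports: finite grid of reports the bidder may submit.
    Utility of reporting b on a sample r: true_value - b if b >= r (win), else 0.
    """
    def avg_utility(report):
        wins = [true_value - report for r in rival_bid_samples if report >= r]
        return sum(wins) / len(rival_bid_samples)

    truthful = avg_utility(true_value)
    best = max(avg_utility(b) for b in candidate_reports)
    return best - truthful

# Example: a bidder with value 0.8 against rival bids sampled from past data.
gain = estimated_gain(0.8, [0.2, 0.5, 0.6, 0.9], [i / 10 for i in range(11)])
```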