Seminars & Conferences


TGIF (Think Grapple Innovate Fridays): Research Seminar Series on Data Science in Business

In early 2020, the School of Data Science instituted a new tradition we call TGIF, or “Think-Grapple-Innovate-Fridays.” These recurring events bring together researchers from a variety of departments and disciplines to explore areas for collaboration.

2021 Speakers:

  • January 22, Rupert Freeman, "Eliciting Truthful Forecasts with Competitive Incentives"
    • Abstract:

      I will discuss a setting in which a principal elicits probabilistic predictions from multiple forecasters on a sequence of events. Unlike the standard setting in which the principal can reward each forecaster independently, we will instead suppose that forecasters compete for a single prize. This competitive incentive structure, common in many real-world settings, is known to distort the incentive properties of existing truthful elicitation schemes, meaning that forecasters may no longer truthfully report their predictions to the principal.

      I will begin by presenting a truthful mechanism for selecting a single winning forecaster once all events have materialized. When the number of events is large, we are guaranteed to choose the most accurate forecaster with high probability. I will then consider a closely related problem in which the principal wishes to use the elicited forecasts to make accurate predictions of her own by, for every event, mimicking the prediction of one of the forecasters. Forecasters derive utility (the "prize") from having their prediction used by the principal. We show that the principal can simultaneously incentivize truthful predictions and make an accurate sequence of predictions, as measured by their competitiveness with the single most accurate forecaster in hindsight (i.e., their regret). The mechanism we obtain can be viewed as a truthful analogue of the multiplicative weights algorithm from online learning.

      Based on joint work with Chara Podimata, Jens Witkowski, Jennifer Wortman Vaughan, David Pennock, and Andreas Krause.
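For context, the standard (non-truthful) multiplicative weights algorithm that the talk's mechanism parallels can be sketched as follows. This is only the classical baseline from online learning, not the speakers' truthful construction; the function name, the use of Brier (squared) loss, and the learning rate `eta` are illustrative assumptions.

```python
def multiplicative_weights(forecasts, outcomes, eta=0.5):
    """Classical multiplicative weights over a pool of forecasters.

    forecasts: list of T rounds; each round is a list of K probabilities,
               one per forecaster, for the event occurring.
    outcomes:  list of T binary outcomes (0 or 1).
    Returns the final weights and the principal's cumulative expected loss.
    """
    k = len(forecasts[0])
    weights = [1.0] * k
    total_loss = 0.0
    for preds, y in zip(forecasts, outcomes):
        # The principal mimics a forecaster drawn in proportion to weights.
        total = sum(weights)
        probs = [w / total for w in weights]
        # Brier (squared) loss of each forecaster this round.
        losses = [(p - y) ** 2 for p in preds]
        # Expected loss of the principal under the weight distribution.
        total_loss += sum(q * l for q, l in zip(probs, losses))
        # Downweight each forecaster in proportion to its loss.
        weights = [w * (1 - eta * l) for w, l in zip(weights, losses)]
    return weights, total_loss
```

Under this update, the principal's cumulative loss is competitive with the best single forecaster in hindsight (low regret); the incentive problem the talk addresses arises because forecasters who know they are being weighted this way may misreport.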

  • February 19, Michael Albert, “Provable Lower Bounds for Black Box Mechanism Design”

    • Abstract: The field of mechanism design has had significant success in constructing optimal mechanisms to allocate resources when there are information asymmetries. These successes include the modern second-price auction format implemented by eBay as well as matching markets for medical residencies and kidney exchanges. However, there are many situations under which no optimal mechanism is known. For example, the revenue-optimal mechanism for two bidders with two items is an open question. This has led to the adoption of black-box non-linear function optimizers, such as deep neural networks, to learn well-performing mechanisms. However, these learned mechanisms only approximately satisfy traditional mechanism design guarantees, such as incentive compatibility. Because they fail these guarantees, they cannot certify any lower bound on their performance. In this work, we present a procedure whereby, given only sample access to a mechanism, we can prove a lower bound on its performance. Moreover, we develop new techniques to construct mechanisms using deep neural networks that provide good lower bounds on the performance.
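The abstract does not specify the talk's actual certification procedure, but the basic flavor of turning sample access into a provable guarantee can be sketched with a standard Hoeffding concentration bound: draw revenue samples from the black-box mechanism and subtract a confidence slack from the empirical mean. The function name, the bound on per-sample revenue `r_max`, and the example auction are all illustrative assumptions, not the paper's method.

```python
import math
import random

def revenue_lower_bound(sample_revenue, n=10000, r_max=1.0, delta=0.05):
    """Hoeffding-style high-probability lower bound on expected revenue.

    sample_revenue: callable giving one i.i.d. revenue draw in [0, r_max]
                    via black-box sample access to the mechanism.
    Returns a value that lower-bounds the mechanism's true expected
    revenue with probability at least 1 - delta.
    """
    mean = sum(sample_revenue() for _ in range(n)) / n
    # Hoeffding slack: with prob. >= 1 - delta, true mean >= mean - slack.
    slack = r_max * math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    return mean - slack
```

For instance, for a second-price auction with two bidders whose values are i.i.d. uniform on [0, 1], revenue per sale is the smaller of the two values and expected revenue is 1/3; the certified bound lands slightly below that, by construction.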