Research Publications

Range-Dependent Attribute Weighting in Consumer Choice: An Experimental Test

Econometrica, 2022, 90(2): 799–830.

Program Recertification Costs: Evidence from SNAP (with Tatiana Homonoff)

American Economic Journal: Economic Policy, 2021, 13(4): 271–298.

Quantifying Brand Loyalty: Evidence from the Cigarette Market (with Philip DeCicca, Donald S. Kenkel and Feng Liu) [nber wp]

Journal of Health Economics, 2021, 76: 102512.

Consumers' Ability to Identify a Surplus When Returns to Attributes are Nonlinear (with Pete Lunn)

Judgment and Decision Making, 2021, 16(5): 1186–1220.

Modeling Risk Aversion in Economics (with Ted O'Donoghue)

Journal of Economic Perspectives, 2018, 32(2): 10–25.

Can Chocolate Cure Blindness? Investigating the Effect of Preference Strength and Incentives on the Incidence of Choice Blindness (with Feidhlim McGowan)

Journal of Behavioral and Experimental Economics, 2016, 61(4): 1–11.

Choice Blindness in Financial Decision Making (with Owen McLaughlin)

Judgment and Decision Making, 2013, 8(5): 561–572.

Working Papers and Work in Progress

Distinguishing Common Ratio Preferences from Common Ratio Effects Using Paired Valuation Tasks [pdf] [Online Appendix] [Supplementary Material]

(with Christina McGranaghan, Kirby Nielsen, Ted O'Donoghue and Charles Sprenger)

The empirical observation of the common ratio effect (CRE) is often interpreted as evidence of an underlying common ratio preference (CRP). However, prior research has demonstrated that, in the presence of noise, expected utility can generate a CRE in standard paired choice tasks. We expand on that research to describe how the presence or absence of a CRE may reveal little about whether an underlying CRP exists. We then propose an alternative test for the existence of a CRP using paired valuation tasks that is robust to heterogeneity and noise. We implement this approach in an online experiment with 900 participants and find no evidence of a systematic CRP. To reconcile our findings with existing evidence, we present the same participants with standard paired choice tasks, and we demonstrate how appropriately chosen experimental parameters can generate a CRE even in our population, which has no systematic CRP.

Inequality of Opportunity and Demand for Redistribution (with Marcel Preuss, Germán Reyes and Joy Wu)

This paper investigates how the dynamics of luck and effort shape demand for redistribution. We vary the timing, importance, and transparency of luck in a two-stage experimental design. First, we recruit workers in an online marketplace to compete at a real-effort task. We generate Inequality of Opportunity (IOp) by experimentally varying the return to workers' effort. We then recruit a nationally representative sample of Americans ("spectators") to make redistributive decisions between pairs of workers facing different IOp. Spectators know the workers' return to effort but not who exerted more effort. We find that spectators' redistribution decisions react strongly to IOp. A 10 percentage point increase in the probability that the winner exerted more effort increases the share of earnings redistributed by 2 percentage points. However, the effect of IOp on redistribution is smaller than in a fully transparent environment in which we determine the winner by a coin flip, raising the possibility that misperceptions about the role of luck affect redistribution. We test this by providing spectators with information about the importance of luck in determining the outcome. Our information treatment decreases redistribution by 3 percentage points on average, implying that biased beliefs about IOp play an important role in explaining redistribution patterns.

Removing Barriers to Program Enrollment: Experimental Evidence from SNAP (with Eric Gianella, Tatiana Homonoff, and Gwen Rino)

We study the impact of providing flexible, on-demand application interviews during enrollment for the Supplemental Nutrition Assistance Program. Using a field experiment involving over 63,000 applicants, we find that on-demand interviews increase program participation by 3.3 percentage points, an increase of 5.7 percent. Applicants with access to on-demand interviews had their cases processed 21 percent faster, suggesting large efficiency and welfare gains. We find no evidence that interview flexibility worsens targeting; our results are driven by an increase in take-up among likely-eligible cases. We document substantial heterogeneity across offices: lower-resource offices see the largest increases in participation, while higher-resource offices see small or null effects. Our results have implications for the design of means-tested programs and highlight the distributional consequences of inefficient targeting.