A New Automated Redistricting Simulator Using Markov Chain Monte Carlo

 

Abstract

Legislative redistricting is a critical element of representative democracy. A number of political scientists have used simulation methods to sample redistricting plans under various constraints in order to assess their impact on partisanship and other aspects of representation. However, while many optimization algorithms have been proposed, surprisingly few simulation methods exist in the literature. Furthermore, the standard algorithm has no theoretical justification, scales poorly, and is unable to incorporate fundamental substantive constraints required by real-world redistricting processes. To fill this gap, we formulate redistricting as a graph-cut problem and propose a new automated redistricting simulator based on Markov chain Monte Carlo. We show how this algorithm can incorporate various constraints, including equal population, geographical compactness, and status quo biases. Finally, we apply simulated and parallel tempering to improve the mixing of the resulting Markov chain. Through a small-scale validation study, we show that the proposed algorithm outperforms the standard algorithm in terms of both speed and its ability to approximate a target distribution. We also apply the proposed methodology to data from New Hampshire and Pennsylvania and show that our algorithm scales to larger problems and yields new substantive insights. Open-source software implementing the proposed methodology is available. (Last revised March 2017)
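The abstract describes sampling redistricting plans by treating the precinct map as a graph and running MCMC over partitions subject to constraints such as equal population. Below is a minimal, hypothetical Python sketch of that general idea, not the authors' algorithm or the released software: it uses a toy grid graph, a simple Metropolis update that moves a boundary precinct to a neighboring district, and a Gibbs-style energy penalizing population imbalance. Contiguity checks and the proposal-asymmetry correction that a full sampler would require are omitted, and all names and parameters are illustrative.

```python
import math
import random
from collections import Counter

def grid_graph(n=4):
    """Toy adjacency graph: precincts on an n-by-n grid, keyed by node id."""
    adj = {}
    for r in range(n):
        for c in range(n):
            v = r * n + c
            nbrs = []
            if r > 0: nbrs.append(v - n)
            if r < n - 1: nbrs.append(v + n)
            if c > 0: nbrs.append(v - 1)
            if c < n - 1: nbrs.append(v + 1)
            adj[v] = nbrs
    return adj

def population_energy(plan, pop, n_districts):
    """Gibbs-style energy: squared deviation of district populations from parity."""
    totals = Counter()
    for v, d in plan.items():
        totals[d] += pop[v]
    target = sum(pop.values()) / n_districts
    return sum((totals[d] - target) ** 2 for d in range(n_districts))

def mcmc_step(plan, adj, pop, n_districts, beta=0.5):
    """One Metropolis update: try reassigning a boundary precinct to a neighbor's district.

    A full sampler must also enforce contiguity and correct for proposal
    asymmetry; both are omitted here to keep the sketch short.
    """
    boundary = [v for v in plan if any(plan[u] != plan[v] for u in adj[v])]
    v = random.choice(boundary)
    new_d = plan[random.choice(adj[v])]  # district of a random neighbor
    if new_d == plan[v]:
        return plan
    proposal = dict(plan)
    proposal[v] = new_d
    delta = (population_energy(proposal, pop, n_districts)
             - population_energy(plan, pop, n_districts))
    if delta <= 0 or random.random() < math.exp(-beta * delta):
        return proposal
    return plan

if __name__ == "__main__":
    random.seed(0)
    adj = grid_graph(4)
    pop = {v: 1 for v in adj}                        # unit population per precinct
    plan = {v: 0 if v % 4 < 2 else 1 for v in adj}   # initial two-district split
    for _ in range(1000):
        plan = mcmc_step(plan, adj, pop, n_districts=2)
    print(plan)
```

The inverse-temperature parameter beta controls how strictly the population constraint is enforced; simulated or parallel tempering, as mentioned in the abstract, would run chains at several beta values to improve mixing.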
