How Small Are Our Big Data: Turning the 2016 Surprise into a 2020 Vision
Host
Department of Applied Mathematics
Speaker
Xiao-Li Meng, Department of Statistics, Harvard University
https://statistics.fas.harvard.edu/people/xiao-li-meng
Description
The term “Big Data” emphasizes data quantity, not quality. However, many current measures of statistical uncertainty and error are adequate only when the data are of the desired quality, that is, when they can be viewed as probabilistic samples. We show that once data quality is taken into account, the effective sample size of a “Big Data” set can be vanishingly small. Without understanding this phenomenon, “Big Data” can do more harm than good: the drastically inflated precision assessment breeds gross overconfidence, setting us up to be caught by surprise when reality unfolds, as we all experienced during the 2016 US presidential election. Data from the Cooperative Congressional Election Study (CCES, conducted by Stephen Ansolabehere, Douglas Rivers, and others, and analyzed by Shiro Kuriwaki) are used to assess the data quality in the 2016 US election polls, with the aim of gaining a clearer vision for the 2020 election and beyond.
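For a feel for the scale of the effect, the sketch below works through the data-defect identity from the speaker's related paper (Meng, 2018, Annals of Applied Statistics): the error of a sample mean factors into a data-defect correlation rho (between the recording indicator R and the outcome Y), a data-quantity term sqrt((1 - f)/f) with sampling fraction f = n/N, and the problem difficulty sigma_Y. Equating the resulting mean squared error with the variance of a simple random sample gives an effective sample size of roughly (f/(1 - f))/rho^2. The population size, sampling fraction, and rho below are illustrative assumptions, not figures quoted in this abstract.

```python
# Minimal sketch of the effective-sample-size calculation implied by the
# data-defect identity:
#     error = rho * sqrt((1 - f) / f) * sigma_Y,   with f = n / N.
# Matching the resulting MSE to that of a simple random sample (large N)
# yields  n_eff ~= (f / (1 - f)) / rho**2.
# All numeric inputs below are illustrative assumptions.

def effective_sample_size(N: int, n: int, rho: float) -> float:
    """Approximate effective size of a non-probability sample of size n
    from a population of size N with data-defect correlation rho."""
    f = n / N
    return (f / (1 - f)) / rho**2

if __name__ == "__main__":
    N = 230_000_000   # rough US adult population (assumption)
    n = N // 100      # a "big" sample covering 1% of the population
    rho = 0.005       # a seemingly negligible data-defect correlation
    print(f"nominal n   = {n:,}")
    print(f"effective n = {effective_sample_size(N, n, rho):,.0f}")
```

Under these assumed inputs, a 2.3-million-respondent sample behaves like a simple random sample of only about 400, which is the sense in which a vanishingly small effective sample size can hide inside a nominally huge data set.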