Measurements are simply the constraints researchers, and at times statisticians, set for themselves. A raw data set is impossible to work with on its own; there is no reason anyone would collect data without some way to test or compare it.
(I’m reserving this line in case I have time to discuss the differences between field researchers and statisticians. There is something worth noting there, but I won’t go on a tangent now. Beamster is quaking.)
Both of these texts bring to mind the term “proof of concept”. The nature of any model is conceptual, yes, even those built on massive amounts of data. There will always be error, as no one can measure the entirety of a population at any given point. However, big data allows that population to be tracked. It delivers raw data readily, without requiring an explicit search for what would be, in the case of a correlational test, at least two separate pieces of information from each of thirty individuals.
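To make that correlational-test example concrete, here is a minimal sketch in Python. The variables and the simulated numbers are my own illustration, not from either text: it pairs two hypothetical measurements for thirty individuals and computes a Pearson correlation by hand.

```python
import random
from statistics import mean, stdev

random.seed(42)

# Hypothetical paired measurements for thirty individuals:
# hours of daily screen time and a self-reported mood score.
hours = [random.uniform(0, 8) for _ in range(30)]
mood = [10 - 0.5 * h + random.gauss(0, 1.5) for h in hours]

def pearson_r(x, y):
    """Pearson correlation: sample covariance of x and y divided by
    the product of their sample standard deviations."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

print(f"r = {pearson_r(hours, mood):.2f}  (n = {len(hours)})")
```

The point is only that the test needs two measurements per person; gathering those thirty pairs by hand is exactly the legwork big data spares us.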
The presence of big data has also generated many algorithms, which contribute massively to the field of psychological marketing and advertising. Rather than working from a general idea, or basing tactics on a single academic study, scientists within corporations can directly harvest data from their consumer base.
If big data operates under the premise of theory, then theory is bolstered, is it not? How else will we generate ideas? How does innovation continue? How do societies use big data to create prosperity?