## Archive for the ‘science’ Category

My tribe—the data nerds—is feeling pretty smug right now, after Nate Silver’s smart poll aggregation totally nailed the election results. But we’re also a little puzzled by the cavalier way in which what Nate Silver does is described as just “math”, or “simple statistics”. There is a huge amount of judgement, and hence subjectivity, required in designing the kind of statistical models that 538 uses. I hesitate to bring this up because it’s one of the clubs idiots use to beat up on Nate Silver, but 538 does not weight all polls equally, and (correct me if I’m wrong) the weights are actually set ~~by hand~~ using a complex series of formulae.

The point is that the kind of model-building that Nate Silver et al. do is not just “math”, but *science*. This is why I don’t really like that XKCD comic that everyone has seen by now. Well, I do like the smug tone, because that is how I, a data scientist, feel about 538’s success. That is right on. But we’ve known that numbers work for a long time. What Nate Silver and 538 do is not just about numbers, about quantifying things. Pollsters have been doing that for a long time. It is about understanding the structured uncertainty in those numbers, the underlying statistical structure, the interesting relationships between the obvious data (polling numbers) and the less obvious data (economic activity, barometric pressure, etc.), and using that understanding to combine lots of little pieces of data into one honkin’, solid piece of data. It is about teasing apart the Signal and the Noise. There is an infinity of ways to combine all the polling numbers that 538 aggregates, and let’s just say there is another infinity’s worth of ways to take all that data and make predictions about what will happen in the space of variables we ultimately care about (like, “who is President in 2014”). It’s not like Nate Silver just sits at his desk with his TI-83 and types in percentage after percentage.
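To make the judgement-versus-arithmetic point concrete, here is a minimal sketch, with entirely made-up polls and weights (this is not 538’s actual model), of how a weighted average differs from a naive one. The weights are where the subjectivity lives: someone has to decide how much each poll counts.

```python
def weighted_average(polls):
    """polls: list of (value, weight) pairs. Returns the weighted mean."""
    total_weight = sum(w for _, w in polls)
    return sum(v * w for v, w in polls) / total_weight

# Hypothetical polls: (candidate share in %, analyst-assigned weight).
# The weights might encode recency, sample size, or house effects.
polls = [(51.0, 1.0), (48.5, 0.5), (52.0, 2.0)]

simple = sum(v for v, _ in polls) / len(polls)  # treats all polls equally
weighted = weighted_average(polls)              # judgement-laden aggregate
print(round(simple, 2), round(weighted, 2))     # the two aggregates disagree
```

Same three numbers in, two different answers out, and the gap between them is pure modeling judgement.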

In fact, Joseph Fruehwald makes this point clearly and elegantly, by quantitatively comparing the 538 predictions with the simple average of the very same polls that 538 aggregates to make those predictions. The 538 prediction is something like twice as good (in RMSE terms), and is especially good where either candidate outperformed the polls, meaning Nate Silver contributes some substantial “special sauce”. Nate Silver isn’t some kind of prophet; there are other poll aggregators who did comparably well. But this whole enterprise is about a lot more than just “using numbers to determine which of two things is bigger”.
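Fruehwald’s comparison boils down to computing root-mean-square error for two sets of predictions against the actual results. A toy sketch, with entirely hypothetical state margins (not his numbers), just to show what “twice as good in RMSE terms” means mechanically:

```python
import math

def rmse(predictions, outcomes):
    """Root-mean-square error between predicted and actual margins."""
    return math.sqrt(
        sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(outcomes)
    )

# Hypothetical margins in points, purely for illustration:
actual     = [3.0, -1.5, 6.0, 0.5]
simple_avg = [1.0, -3.5, 3.0, 2.5]  # a naive poll average
model      = [2.0, -2.5, 5.0, 1.5]  # a judgement-laden model

print(rmse(simple_avg, actual))  # larger error
print(rmse(model, actual))       # smaller error: "twice as good" in RMSE terms
```

In these made-up numbers the model’s RMSE is about half the naive average’s, which is the shape of the claim: same polls, better error, because of what was done with them.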

I think a good analogy can be made with the whole Sabermetrics trend in baseball (which Nate Silver was involved in, of course). There are lots of ways that a baseball player can be quantified: height, total biomass of body hair, red blood cell count, RBI, slugging percentage, etc. Some of these are very useful in quantifying the individual contribution of a player to the team’s success—and hence their monetary value—while others are not. Knowing which numbers to put into your model, and how, is a step beyond just having the numbers, and that takes some knowledge about the domain—what the numbers *mean*.

Looks like we are baby-stepping towards the singularity:

> An international team of scientists in Europe has created a silicon chip designed to function like a human brain. With 200,000 neurons linked up by 50 million synaptic connections, the chip is able to mimic the brain’s ability to learn more closely than any other machine.
>
> Although the chip has a fraction of the number of neurons or connections found in a brain, its design allows it to be scaled up, says Karlheinz Meier, a physicist at Heidelberg University, in Germany, who has coordinated the Fast Analog Computing with Emergent Transient States project, or FACETS.

I think this is very cool, for a couple of reasons…