MIT Technology Review has published its new annual "10 new technologies that will change the way you live" (eventually, when they're finally deployed to the market).

The most immediately interesting is "Reality Mining", on how we might mine the data from location-aware phones. At the moment we still seem stuck in the long-heralded Bluetooth 'for social networking' and 'proximity dating for desperate 30-somethings in upmarket London supermarkets' moment /yawn/. But the article also talks about using things like the data from the accelerometers already in iPhones, and 'voice-tone/speed = emotional state' audio sensors. The study tracked…

“ninety-four subjects using mobile phones pre-installed with several pieces of software that record and send the researcher data on call logs, Bluetooth devices in proximity, [mobile phone] tower IDs, application usage, and phone status. These subjects were observed via mobile phones over the course of nine months, representing over 330,000 person-hours of data (about 35 years worth of observations).” The data provided a remarkably intimate view of the subjects’ lives. The researchers were, for instance, able to “identify characteristic behavioral signatures of relationships that allowed us to accurately predict 95% of the reciprocated friendships in the study. Using these behavioral signatures we can predict, in turn, individual-level outcomes such as job satisfaction.”
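To make the idea of a "behavioral signature" concrete, here is a toy sketch of how such a predictor might look. This is not the researchers' actual model; the features, weights, and threshold are all invented for illustration, though the raw inputs (Bluetooth proximity, call logs, tower IDs) are the ones the study logged.

```python
# Toy illustration of a "behavioral signature" for reciprocated friendship.
# NOT the MIT study's actual method: features and weights are invented.

from dataclasses import dataclass

@dataclass
class PairStats:
    # Hypothetical per-pair features derivable from the logged data
    bluetooth_hours: float       # hours the two phones saw each other via Bluetooth
    offsite_hours: float         # of those, hours away from the workplace (tower IDs)
    weekend_calls: int           # calls between the pair on Saturdays/Sundays
    late_night_proximity: float  # hours in proximity after 22:00

def friendship_score(p: PairStats) -> float:
    """Weighted sum of the pair's features; weights are made up for the sketch."""
    return (0.02 * p.bluetooth_hours
            + 0.05 * p.offsite_hours
            + 0.30 * p.weekend_calls
            + 0.10 * p.late_night_proximity)

def predict_friends(p: PairStats, threshold: float = 1.0) -> bool:
    return friendship_score(p) > threshold

# Example: lots of out-of-office, weekend, late-night contact looks like friendship
pair = PairStats(bluetooth_hours=40, offsite_hours=15,
                 weekend_calls=4, late_night_proximity=6)
print(predict_friends(pair))  # True
```

The point of the sketch is only that friendship leaves a pattern in when and where two phones meet, not just how often, which is presumably why co-location alone is a weaker signal than the full signature.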

MIT’s Reality Mining lab has a list of publications and full-text PDFs, along with the data sets used.

I think we can assume that the heavy hand of our EU data-protection overlords will eventually moderate practices such as employers tracking their employees' emotional well-being and truth-telling as well as their location, and that domestic law will catch up with things like phone spying by jealous partners.

What seems far more interesting is the possibility of mass anonymous real-time feeds of this kind of data: huge washes of it (the already-collected location data, plus acceleration, tone-of-voice, background noise-level) flowing in a loop from the street to servers, to interpretable maps, out to aggregation software, and then back down onto the location-aware device. Would such an 'aggregated' city then be able to autonomously 'twitter' to itself about the ongoing flows of activity happening in it? And would that be expressed only in online maps and mobile phone apps, or also made visible in the sculptural and light/sound elements of the city itself?
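A minimal sketch of what the server side of that loop might look like, assuming phones post anonymous readings each time-window (all the structures, the grid size, and the minimum-count cutoff here are my own assumptions, not any real system):

```python
# Sketch of an anonymous real-time aggregation loop: phones post readings,
# the server bins them by city grid cell, and only well-populated cells are
# published back out as the city's "activity feed". Entirely hypothetical.

from collections import defaultdict
from typing import NamedTuple

class Reading(NamedTuple):
    lat: float
    lon: float
    accel: float     # movement energy from the accelerometer
    noise_db: float  # background noise level from the microphone

CELL = 0.005  # grid size in degrees (~500 m); an arbitrary choice
K_MIN = 10    # publish a cell only if at least this many phones report

def cell_of(r: Reading) -> tuple[int, int]:
    return (round(r.lat / CELL), round(r.lon / CELL))

def aggregate(readings: list[Reading]) -> dict[tuple[int, int], dict[str, float]]:
    """Fold one time-window of readings into per-cell averages,
    dropping sparse cells so no individual phone stands out."""
    cells: dict[tuple[int, int], list[Reading]] = defaultdict(list)
    for r in readings:
        cells[cell_of(r)].append(r)
    feed = {}
    for cell, rs in cells.items():
        if len(rs) >= K_MIN:  # crude k-anonymity-style cutoff
            feed[cell] = {
                "activity": sum(r.accel for r in rs) / len(rs),
                "noise_db": sum(r.noise_db for r in rs) / len(rs),
                "phones": float(len(rs)),
            }
    return feed  # pushed back down to devices, maps, or street-level displays
```

The minimum-count cutoff is the interesting design choice: it is what lets the feed stay a portrait of the crowd rather than of any one person, which is exactly the line the 'aggregated city' idea would have to hold.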