The Sensor Society: Invisible Infrastructure

So much about the Internet age has turned out to be vexing–a far cry from my enthusiasm a generation ago, shared by many nerds, for a utopian future of connectivity and a library of one’s dreams.

“I would not open windows into men's souls.” — Attributed to Queen Elizabeth I

Hard to know where to start with the disappointments and fears, but one that particularly nags is the feeling that we are building (with our eyes closed and our tacit consent) an infrastructure that monitors our every move, encasing each of us in a personal surveillance state, in return for the convenience of carrying a connected device everywhere we go.

Australian Prof. Mark Burdon has termed this the “Sensor Society,” the notion that passively, without our knowledge or consent, and for unknown purposes, everything we do becomes raw data for commercial discovery (and possibly for government snooping). This follows inevitably from the “always on/always connected” world, but is it too high a price to pay?

The entire interview is worth reading, but herewith a few bracing bits:

Q: What are the implications if sensors completely permeate society?

A: Well, it’s not necessarily just about the complete permeation of sensors. Rather, the greater implications regard the emergence of pervasive and always on forms of data collection. The relationship between sensors, the data they produce, and ourselves is important to understand.

For example, sensors don’t watch and listen. Rather, they detect and record. So sensors do not rely on direct and conscious registration on the part of those being monitored. In fact, the opposite is the case. We need to be passive and unaware of the sensing capabilities of our devices for the sensors to be an effective measurer of our activity and our environments.

Our relationship with our devices as sensors is consequently a loaded one. We actively interact with our devices, but we need to be passively unaware of the sensors within our devices. The societal implications are significant—it could mean that everything we do is collected, recorded and analysed without us consciously being aware that such activities are taking place because collection is so embedded in daily life.

Q: How would you recommend someone learn more about the impact of living in a sensor society?

A: Look at your everyday devices in a different way. Behind the device and the sensor are vast and imperceptible, invisible infrastructures. Infrastructures of collection enable the explosion of collectible data and infrastructures of prediction enable understanding and thus give purpose to sensors. Otherwise, sensor-generated data without an analytical framework to understand it is just a mountain of unintelligible data.

The sensor society, therefore, redirects us towards the hidden technological processes that make data collection, capture, storage, and processing possible. This, in turn, highlights the importance of understanding relations of ownership and control of sensors and the infrastructures in which sensors operate. So when you’re at home with your devices, realize that you are not alone, and just think about those invisible infrastructures that are also present with you. The question to ask then is: What data is being collected, by whom, and for what purpose?

Our metadata, ourselves… how are we ever to be left alone? He’s got a good TEDx talk as well.

“[The framers] sought to protect Americans in their beliefs, their thoughts, their emotions and their sensations. They conferred, as against the government, the right to be let alone – the most comprehensive of rights and the right most valued by civilized men.” — Louis Brandeis


Tech Talk: How Computers Work

Tipped by Neverending Search, I’ve been looking at a course (aimed at kids, I think) on How Computers Work. First full episode here (I would skip the content-free intro by one Bill Gates).

The episodes are fun, if a tad hyperactive in the video editing. They’re useful whether the material is new to you, a check on what you already know, or fodder for teaching and learning contexts. It’s from the people.

Quotable Words: Meltdown and Complexity

A few weeks back, CPU bugs Meltdown and Spectre (why the B-movie titles?) made headlines for the comprehensive threat they posed (and still pose) to computer security.

These are low-level bugs, meaning they prey on the very architecture of computers and other digital devices – that is, the chips that make the whole thing go – and NYMag asked some pertinent questions of a chip designer.

[Jake Swearingen]: To me, a layman, it’s odd that CPUs require so much research, since the architecture is designed by humans. Why do they require so much outside research to sort of understand what they’re doing?

Diagram of a chip

[Researcher Anders Fogh] Because CPUs are remarkably complex. So to build a CPU, what you do is, you take a handful of sand, bit of epoxy, a tiny bit of metal, and a bit of pixie dust, and you stir it all together and you get this machine that basically runs our world today. You can imagine that that process has to be very, very complex. So down at the lowest level you have to deal with quantum phenomena; at the next level you have heat dissipation; on the next level you have to connect everything; and then the next level and next level all the way up, you actually have a piece of silicon that takes instructions, and that just turns out to be incredibly complex. For scale, a modern CPU, not even the newest and the biggest, has about 5 billion transistors in them. The Saturn V rocket that took man to the moon has about 3 million. So this is a really ridiculously complex machine, and they have been developed for longer than I have been alive.

Which begins to get at why unwinding CPU-based vulnerabilities is a formidable task.

The Mother of the Father of the World Wide Web

The web’s grandparents, Mary Lee and Conway Berners-Lee.

The British Library’s Sound and Vision blog has a nice piece on Mary Lee Berners-Lee, mother of Tim, who, as everybody knows, wrote the first spec for what became the World Wide Web while working at CERN in the late 1980s.

“After studying mathematics at the University of Birmingham, she [MLBL] spent the latter part of the Second World War working at the Telecommunications Research Establishment (TRE), the secret centre of Britain’s radar development effort. With the war over she returned to her studies, before leaving Britain for the Mount Stromlo observatory in Australia in 1947, where she worked classifying the spectra of stars. In 1951 she returned to Britain and chanced across an advert for a job at Ferranti in Manchester that would change her life: ‘I was reading Nature and saw an advertisement one day for – saying, “Mathematicians wanted to work on a digital computer.”’”

One of many “voices of science” in the Library holdings.

Being a Better Helper

Have you ever asked somebody for computer help? Been asked? Offered advice unasked? Received said unsolicited advice?

I’ve been in all four categories, and I suspect anybody reading this blog has been as well. The ‘computer helping’ game can be a grim business. (If I did reality shows instead of educational media, I’d pitch ‘Family Tech Support’: intense relationship drama, probably too full of bad language even for cable. “But I don’t even see the enter key anywhere! Why the #$#!~*& is it called enter if it means ‘return’?” A question for the ages.)

But there’s hope: earlier today, I encountered the best advice for helping somebody use a computer I’ve seen – and it’s 21 years old. It comes from a post by Phil Agre, who was then at UCLA. The entire thing is online, but here is the first bit…

Computer people are fine human beings, but they do a lot of harm in the ways they “help” other people with their computer problems. Now that we’re trying to get everyone online, I thought it might be helpful to write down everything I’ve been taught about helping people use computers.

First you have to tell yourself some things:

Nobody is born knowing this stuff.

You’ve forgotten what it’s like to be a beginner.


Good advice for teaching in general… It speaks to keeping the experience and the goals of the learner in mind, rather than focusing primarily on what the teacher is doing. Simple, but hard to do…

Tip of the hat to for the link.

Reasoning Words: Should Public Libraries be TOR Exit Relays?

The Electronic Frontier Foundation reports that a pilot project at the Lebanon, New Hampshire, library to serve as a TOR exit relay has been temporarily halted, and potentially scotched, by the U.S. Department of Homeland Security. ProPublica has a rundown as well.

To shed some light on the question of whether this is an outrage or reasonable, here’s a quick TOR 101 lesson. TOR (the name comes from The Onion Router, but no relation to the satirical web site) is a means of using the Internet anonymously. Individual computers (of volunteers) provide entry into and exit from anonymous, encrypted network paths – sort of a series of safe houses that let computer traffic pass from one to the next without recording whence it came or whither it goes. (Disclosure: I’ve not used it; I got as far as downloading the software, installing it, and chickening out. So somebody who has it running live can no doubt improve and correct that description.) Also: there are lots of good explanations around the web, including EFF’s “in plain English”. The key thing is that the set-up provides a theoretically untraceable way to navigate the Internet, and can be installed on any computer.
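For the nerdily inclined, the “safe houses” idea can be sketched in a few lines of code. This is a toy illustration of onion layering only – the relay names, the JSON packaging, and the XOR “cipher” are all made up for demonstration, and bear no resemblance to TOR’s real cryptography (which uses proper ciphers and telescoping circuits). The point is just the structure: each relay can peel exactly one layer, learning only the next hop, never the whole path.

```python
import base64
import json
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher -- for illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# A hypothetical three-hop circuit: entry -> middle -> exit.
relays = {"entry": os.urandom(16), "middle": os.urandom(16), "exit": os.urandom(16)}
path = ["entry", "middle", "exit"]

def build_onion(message: bytes) -> bytes:
    """Sender wraps the message once per relay, innermost (exit) layer first."""
    payload = message
    next_hop = "destination"
    for name in reversed(path):
        layer = json.dumps({"next": next_hop,
                            "data": base64.b64encode(payload).decode()}).encode()
        payload = xor(layer, relays[name])
        next_hop = name
    return payload

def peel(cell: bytes, name: str):
    """A relay strips exactly one layer: it learns the next hop, nothing more."""
    layer = json.loads(xor(cell, relays[name]))
    return layer["next"], base64.b64decode(layer["data"])

cell = build_onion(b"hello from an anonymous reader")
hop = "entry"
while hop in relays:
    hop, cell = peel(cell, hop)
# Only after the exit relay peels its layer does the plaintext emerge.
assert cell == b"hello from an anonymous reader"
assert hop == "destination"
```

Note that the entry relay sees who sent the cell but not where it is ultimately going, while the exit relay sees the destination but not the sender – which is exactly why the exit relay (the role the Kilton library volunteered for) is the hop that attracts all the attention.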

The Kilton, NH, library proposes to offer an exit for TOR, meaning people could use its computer network to get materials anonymously. A bunch of questions ensue: what do people do in TOR, and does it matter as a point of library policy? The dark speculations come easily: Deal drugs? Send a bomb threat? Plot insurrection or worse? But in the other column there are better possibilities: evading censorship for political art? Blowing the whistle on unconstitutional surveillance? Negotiating a job offer across international borders? Protecting a trade secret, or the pre-release version of a blockbuster film? Negotiating safe passage for a political prisoner?

Since it’s software, TOR is simply a platform for human purposes, be they benign or malignant. It is no more culpable than the card catalog of a previous era: those cards listed how to find books on the shelves, providing neutral access to anything, be it The Anarchist Cookbook or Charlotte’s Web. What patrons did with the books was their concern, and librarians at least aspired to stay out of that question.

Were I still a librarian, I would be vexed by this one. Librarianship is a First Amendment-loving profession, and access is central (both characteristics resonate with me). At the same time, criminal activity such as Silk Road or ransomware depends on TOR, to say nothing of the payments that support terror, which perhaps move through it as well. Yet TOR’s stated goals are to support free expression, privacy, and human rights, and libraries, in their nerdy, sometimes quaint way, engage with those every day. If some teenage Ai Weiwei in North Korea is trying to get her message out, and my library is her exit relay, should I say no? Are the ideals of access entwined with rights to privacy, even when that privacy (unlike curling up with a copy of the oft-banned Ulysses, say) means instant connection with the writhing, volatile mass that the Internet can be?

I think on the whole (particularly if I were a New Hampshire librarian, in a state that has “Live Free Or Die” on its license plates) I’d brave the battle and provide the relay. Libraries are now networks, and although it’s easy to stay neutral and let others fight this battle, who else is doing it? Our Google overlords have already got a huge advantage, and are so unfazed by their ability to track our every move online that their position – something I think the Stasi would have been fine with – is “don’t do anything that you shouldn’t, and everything will be fine.” Privacy in our lawful actions is not something we should be compelled to give up, nor should our intentions and our explanations of what we might do become property of the state, even if some of our fellow inhabitants of the planet have dark intentions and use tools to foment them. TOR is a tool to keep things private, at least some of which should be, even at a public library.


Read This: “What Is Code?” in Bloomberg Businessweek

Check out this brilliantly done article by Paul Ford and the Businessweek team. The videos aren’t ads, by the way, and are also great.


“This is real. A Scrum Master in ninja socks has come into your office and said, “We’ve got to budget for apps.” Should it all go pear-shaped, his career will be just fine.”

Huffington Post has a behind-the-scenes piece as well.

Tech & Humanities Watch: Hackers and Hacks

For a further dispatch on the already-noted incursion of big data/AI into journalism, see Tim Adams’ good piece in the Guardian about the cheerful software guys who are building a “Terminator” for the workaday reporter. It’s called Quill, and it is a software program that can take raw data feeds and craft news stories without human intervention. It’s part of the next generation of “data journalism,” and although I’m not sure quite how widespread it is yet, I’m confident it soon will be, as it partakes of the inexorable “if it can be automated, it will be automated” trend.

I suppose, considered as a technical problem, a newspaper is just another “front end” to fill with content (just as the web itself originated as a kind of front end for the underlying Internet). We are living in a time when computing power can dip into previously unimagined sources and craft front ends for all kinds of things instantaneously – and not just presentation or content: software can make other software. One example is “The Grid,” an AI-based system that custom-designs and builds your blog for you. Perhaps the most amazing thing about Quill is that it’s probably not even that hard a content challenge for a computer to turn out the average local news story, earnings report, sports extra, or even a profile of Phyllis George. Not only can a computer replace half the newsroom, it can do it without breaking a sweat. The larger question: what other content is out there waiting to be harvested and automated? Textbooks? Annual reports? TV news broadcasts? Online courses already are, to some extent. Surely somebody in a dorm at Caltech has written a bot to craft the perfect OkCupid profile after scraping your FB feed. It’s the work of a weekend for a sufficiently gifted and lonely programmer.
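To see why the sports extra is such an easy target, here is the core idea – structured data in, prose out – as a toy template sketch. This is emphatically not Quill’s actual pipeline (which is far more sophisticated); the team names, field names, and verb thresholds are all invented for illustration.

```python
def game_recap(game: dict) -> str:
    """Turn a minimal box score into a one-sentence recap."""
    margin = abs(game["home_score"] - game["away_score"])
    if game["home_score"] > game["away_score"]:
        winner, loser = game["home"], game["away"]
        score = f'{game["home_score"]}-{game["away_score"]}'
    else:
        winner, loser = game["away"], game["home"]
        score = f'{game["away_score"]}-{game["home_score"]}'
    # Vary the verb with the margin, the way template-based NLG systems do.
    verb = "edged" if margin <= 3 else "beat" if margin <= 10 else "routed"
    return f"The {winner} {verb} the {loser} {score} on {game['date']}."

game = {"home": "Tigers", "away": "Royals",
        "home_score": 7, "away_score": 2, "date": "Tuesday"}
print(game_recap(game))
# -> The Tigers beat the Royals 7-2 on Tuesday.
```

Multiply the templates, feed in a wire of box scores, and the overnight sports desk writes itself – which is exactly the point, and exactly the worry.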

What’s more, readers don’t really know the difference between computer authors and real ones. From the story:

“Perhaps the most interesting result in the study is that there are [almost] no… significant differences in how the two texts are perceived,” Clerwall concluded. “An optimistic view would be that automated content will… allow reporters to focus on more qualified assignments, leaving the descriptive ‘recaps’ to the software.”

And it’s just begun… The computer can also craft endless localized or more detailed versions of the story, with the pieces that are relevant to a very specific reader (think Amazon suggestions, but tuned to your news interests). The era of the reporter – or the reader – having to manually crunch the numbers, or anything else for that matter, may be passing.

Hammond fully intends to live to see the day when people look at spreadsheets and data sets as being as antiquated as computer punch cards.

What is the most sophisticated thing the machine can do in this respect now? “We can do an eight-page exegesis of one number,” Hammond says, “for example on how likely it is a company is going to default on its debt. The eight pages will be written in plain English, supported where appropriate by graphs and tables. It will show you how it got to its conclusion. It is fine to read. The most important bits of analysis are shoved to the top.”

As a person with a lot of loyalty to the somewhat battered profession of journalism, I’m a little freaked out by this; but as a techy who thinks we’re still at minute 1 or 2 of what we can do with data, I’m super excited. Not about the ordinary stories that will now be automated, but because lurking behind this innovation is some new and potentially much better way of getting news to people. When rich media meets big data, that should set off some sparks; and when the same algorithms that write the overnight sports stories are turned on, say, economic news or science topics, maybe we can change the whole nature of the usually inadequate coverage in those areas. I also think a tool like Quill, seen through an educational lens (explanatory/educational journalism rather than breaking-news reporting), offers a lot.

Yet, and this is for another day, what’s lost when doing it the old way finally fades? What an improbable and glorious human endeavor the newsroom was…

Human-based Content Creation & Management, once upon a time. The Denver Post newsroom in the 1970s.

Beautiful Machines: Frank Gehry and Software

A Frank Gehry building in NYC: “Beekman Place New York” by Emmett Hume. Licensed under CC BY 3.0 via Wikimedia Commons.

The characteristic designs of architect Frank Gehry’s signature buildings are only possible with the advent of sophisticated computerized design programs. (True of most modern buildings, I would think, unless there are some historical re-enactor types who limit themselves to only the tools available to Palladio.)

A fascinating, and underreported, aspect of his legacy is the extension of computer approaches not only into the design process, but also into the engineering, sourcing, and fabrication. The designs go from the computer directly to the fabricators, who create the pieces of the building on a just-in-time basis. The results get shipped to the site, and all of this is a “paper-free” process that is born digital – bits the whole way, until the steel, titanium, or whatever is being fabricated becomes atoms. No blueprints, no 2-D models, just a data stream.

I learned about this overhearing a visitor at an exhibit in the San Jose Tech Museum of Innovation. He was explaining to his companion how the panels for Disney Hall came in from their Midwestern fabricator in batches, everything controlled by algorithms. As the building came together, variances occurred in the panels, and this information could be fed back by computer to the manufacturer, who made the next batch of panels to the slightly changed spec. In fact, the program could predict most of the variances, including any changes that resulted from weather during shipping.
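The feedback loop the visitor described can be caricatured in a few lines. A toy sketch only: the dimensions, field names, and correction factor below are all invented, and the real fabrication systems are vastly more sophisticated. The structure, though – measure the batch, fold the deviation back into the next batch’s spec – is the heart of it.

```python
def adjust_spec(spec_mm, measured_mm, gain=0.5):
    """Nudge the next batch's spec to counteract the average measured deviation.

    gain < 1 applies only a partial correction, to avoid overshooting
    on a noisy measurement (a made-up but typical damping choice).
    """
    avg_deviation = sum(m - spec_mm for m in measured_mm) / len(measured_mm)
    return spec_mm - gain * avg_deviation

spec = 1200.0  # nominal panel width in millimetres (hypothetical figure)
batch_1 = [1200.8, 1201.1, 1200.6]  # this batch arrived slightly oversized
spec = adjust_spec(spec, batch_1)   # next batch is cut to a corrected spec
print(round(spec, 2))
# -> 1199.58
```

Run across a network between site and fabricator, batch after batch, and the panels converge on fit – no blueprints, no re-drafting, just data flowing both ways.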

This use of nearly real-time data across a network is what makes these buildings visually possible and financially feasible. Earlier architects hit unbreachable limits that Gehry and his peers can blow past. (To wit, one of Gehry’s quips: “Had Erich Mendelsohn had the computer stuff that we got now, I would have had to do something else.”)

The fascinating blog Priceonomics has a great piece on it by Lian Chikako Chang, an architecture writer and researcher, with detail about how his system works and just how transformative the concept was to the whole industry. Gehry’s shop is now also a tech company that offers a platform and services to other studios. The piece also mentions the irony that Gehry himself has no patience or aptitude for computers at all!

From the piece:

Gehry suspected that digitally designed geometries could be executed much more efficiently with less redundancy. Instead of creating standard 2D construction drawings, Gehry now had his contractors refer directly to the 3D digital model, translating digitized coordinates directly into manual cutting instructions and machine tooling paths.

The contractors he worked with welcomed his guidance. “Most contractors,” he has since said, “want the architect to be the Daddy.” In 1997, the museum [Guggenheim in Bilbao] opened on budget and on time to rave reviews.

The Guggenheim Museum in Bilbao Spain. Photograph taken by User:MykReeve on 14 January, 2005.