Linked by the indefatigable people at O’Reilly, I came across a Q&A with Judea Pearl, an AI pioneer, who has some measured criticism of the enterprise as it stands:
Hartnett: People are excited about the possibilities for AI. You’re not?
Pearl: As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it’s still a curve-fitting exercise, albeit complex and nontrivial.
Hartnett: The way you talk about curve fitting, it sounds like you’re not very impressed with machine learning.
Pearl: No, I’m very impressed, because we did not expect that so many problems could be solved by pure curve fitting. It turns out they can. But I’m asking about the future—what next? Can you have a robot scientist that would plan an experiment and find new answers to pending scientific questions? That’s the next step. We also want to conduct some communication with a machine that is meaningful, and meaningful means matching our intuition. If you deprive the robot of your intuition about cause and effect, you’re never going to communicate meaningfully. Robots could not say “I should have done better,” as you and I do. And we thus lose an important channel of communication.
So maybe it’s not all curve fitting and optimization problems? Seems plausible, but the already formidable mathematics would presumably become nearly intractable.
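To make “curve fitting” concrete, here’s a toy sketch (my example, not Pearl’s): given noisy samples, find the polynomial that minimizes squared error. On Pearl’s account, deep learning is this same exercise with a vastly bigger function family and billions of parameters.

```python
# Toy illustration of "curve fitting" (my example, not Pearl's):
# least-squares fit of a polynomial to noisy data with NumPy.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3.0 * x**2 - 2.0 * x + rng.normal(scale=5.0, size=x.shape)  # noisy samples

coeffs = np.polyfit(x, y, deg=2)   # minimize squared error over degree-2 polynomials
print(coeffs)                      # roughly [3, -2, 0]
y_hat = np.polyval(coeffs, 12.0)   # predict a new value from the fitted curve

# The fit captures an association between x and y; it says nothing about
# whether x causes y -- which is Pearl's point about the limits of the approach.
```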
Tipped by a friend, I just learned about “The Data Detox,” a sequential 8-day program for reviewing, cleaning up/deleting, and rethinking the data trails you leave (and that people profit from and may exploit in other ways).
As you go through each day, the detox gives quick explanations of how data tracking works, and also explains the implications of decisions you have made (or not made, when you just check ‘agree’). Related: the same group has a useful rundown on whether to stay on Facebook or to go. Not a hard one for me; I was barely on in the first place.
So much about the Internet age has turned out to be vexing–a far cry from my enthusiasm a generation ago, shared by many nerds, for a utopian future of connectivity and a library of one’s dreams.
Hard to know where to start with the disappointments and fears, but one that particularly nags is the feeling that we are building (with our eyes closed and tacit consent) an infrastructure that monitors our every move, encasing every one of us in a personal surveillance state, in return for the convenience of carrying a connected device everywhere we go.
Australian Prof. Mark Burdon has termed this the “Sensor Society,” the notion that passively, without our knowledge or consent, and for unknown purposes, everything we do becomes raw data for commercial discovery (and possibly for government snooping). This follows inevitably from the “always on/always connected” world, but is it too high a price to pay?
Q: What are the implications if sensors completely permeate society?
A: Well, it’s not necessarily just about the complete permeation of sensors. Rather, the greater implications regard the emergence of pervasive and always-on forms of data collection. The relationship between sensors, the data they produce, and ourselves is important to understand.
For example, sensors don’t watch and listen. Rather, they detect and record. So sensors do not rely on direct and conscious registration on the part of those being monitored. In fact, the opposite is the case. We need to be passive and unaware of the sensing capabilities of our devices for the sensors to be an effective measurer of our activity and our environments.
Our relationship with our devices as sensors is consequently a loaded one. We actively interact with our devices, but we need to be passively unaware of the sensors within our devices. The societal implications are significant—it could mean that everything we do is collected, recorded and analysed without us consciously being aware that such activities are taking place because collection is so embedded in daily life.
Q: How would you recommend someone learn more about the impact of living in a sensor society?
A: Look at your everyday devices in a different way. Behind the device and the sensor are vast and imperceptible, invisible infrastructures. Infrastructures of collection enable the explosion of collectible data and infrastructures of prediction enable understanding and thus give purpose to sensors. Otherwise, sensor-generated data without an analytical framework to understand it is just a mountain of unintelligible data.
The sensor society, therefore, redirects us towards the hidden technological processes that make data collection, capture, storage, and processing possible. This, in turn, highlights the importance of understanding relations of ownership and control of sensors and the infrastructures in which sensors operate. So when you’re at home with your devices, realize that you are not alone, and just think about those invisible infrastructures that are also present with you. The question to ask then is: What data is being collected, by whom, and for what purpose?
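To make the “detect and record” point concrete, here’s a toy sketch (my illustration, not Burdon’s) of how little machinery a passive data trail requires: a background loop that samples a stand-in sensor and quietly appends timestamped readings to a log, no action or awareness required from anyone nearby.

```python
# Toy sketch of passive sensing (my illustration, not Burdon's).
# A background loop samples a stand-in "sensor" and appends timestamped
# readings to a log file -- no user action, no visible activity.
import json
import random
import time

def read_ambient_light() -> float:
    """Stand-in for a real device sensor; here it just returns a random value."""
    return random.uniform(0.0, 1000.0)

def log_sensor(path: str = "sensor_trail.jsonl", interval_s: float = 60.0) -> None:
    while True:
        record = {"ts": time.time(), "lux": read_ambient_light()}
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")  # one more row of raw data
        time.sleep(interval_s)
```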
Tipped by Neverending Search, I’ve been looking at a course (aimed at kids, I think) on How Computers Work. First full episode here (I would skip the content-free intro by one Bill Gates).
They are fun, if a tad hyperactive in the video editing. Useful whether the material is new to you, you’re checking what you already know, or you’re in a teaching and learning context. It’s from the Code.org people.
A few weeks back, CPU bugs Meltdown and Spectre (why the b-movie titles?) made headlines for the comprehensive threat they posed (and still do) to computer security.
[Jake Swearingen]: To me, a layman, it’s odd that CPUs require so much research, since the architecture is designed by humans. Why do they require so much outside research to sort of understand what they’re doing?
[Image: diagram of a chip]
[Researcher Anders Fogh] Because CPUs are remarkably complex. So to build a CPU, what you do is, you take a handful of sand, bit of epoxy, a tiny bit of metal, and a bit of pixie dust, and you stir it all together and you get this machine that basically runs our world today. You can imagine that that process has to be very, very complex. So down at the lowest level you have to deal with quantum phenomena; at the next level you have heat dissipation; on the next level you have to connect everything; and then the next level and next level all the way up, you actually have a piece of silicon that takes instructions, and that just turns out to be incredibly complex. For scale, a modern CPU, not even the newest and the biggest, has about 5 billion transistors in them. The Saturn V rocket that took man to the moon has about 3 million. So this is a really ridiculously complex machine, and they have been developed for longer than I have been alive.
Begins to get at why unwinding CPU-based vulnerabilities is a formidable task.
“After studying mathematics at the University of Birmingham, she [MLBL] spent the latter part of the Second World War working at the Telecommunications Research Establishment (TRE), the secret centre of Britain’s radar development effort. With the war over she returned to her studies, before leaving Britain for the Mount Stromlo observatory in Australia in 1947, where she worked classifying the spectra of stars. In 1951 she returned to Britain and chanced across an advert for a job at Ferranti in Manchester that would change her life: “I was reading Nature and saw an advertisement one day for – saying, ‘Mathematicians wanted to work on a digital computer.’”
Have you ever asked somebody for computer help? Been asked? Offered advice unasked? Received said unsolicited advice?
I’ve been in all four categories, and I suspect anybody reading this blog has as well. It can be a grim business, the ‘computer helping’ game. (If I did reality shows instead of educational media, I’d pitch ‘Family Tech Support’: intense relationship drama. Probably too full of bad language even for cable. “But I don’t even see the enter key anywhere? Why the #$#!~*& is it called enter if it means ‘return’?” A question for the ages.)
But there’s hope: earlier today, I encountered the best advice for helping somebody use a computer I’ve seen, and it’s 21 years old. It comes from a post by Phil Agre, who was then at UCLA. The entire thing is at http://polaris.gseis.ucla.edu/pagre/ but here is the first bit…
Computer people are fine human beings, but they do a lot of harm in the ways they “help” other people with their computer problems. Now that we’re trying to get everyone online, I thought it might be helpful to write down everything I’ve been taught about helping people use computers.
First you have to tell yourself some things:
Nobody is born knowing this stuff.
You’ve forgotten what it’s like to be a beginner.
Good advice for teaching in general… It speaks to keeping the experience and the goals of the learner in mind, rather than focusing primarily on what the teacher is doing. Simple, but hard to do…
The Electronic Frontier Foundation reports that a pilot project at the Lebanon, New Hampshire, library to serve as a TOR exit relay has been temporarily halted, and potentially totally scotched, by the U.S. Department of Homeland Security. ProPublica has a rundown as well.
To shed some light on the question of whether this is an outrage or reasonable, here’s a quick TOR 101 lesson. TOR (the name comes from The Onion Router, but no relation to the satirical web site) is a means of using the Internet anonymously. Individual computers (of volunteers) provide entry into and exit from anonymous, encrypted network paths: sort of a series of safe houses that let computer traffic pass from one to the next without recording from whence it came or whither it goest. (Disclosure: I’ve not used it; I got as far as downloading the software, installing it, and chickening out. So somebody who has it running live can no doubt improve and correct that description.) Also: lots of good explanations around the web, including one from EFF’s “in plain English” series. The key thing is that the set-up provides a theoretically untraceable way to navigate the Internet, and can be installed on any computer.
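For the curious, here’s a toy sketch of the onion idea (my illustration, not TOR’s actual protocol; it uses the third-party cryptography package, and real TOR negotiates per-hop keys rather than sharing them up front): the sender wraps a message in one layer of encryption per relay, and each relay peels exactly one layer, learning only its neighbors, never the whole path.

```python
# Toy sketch of onion routing, the idea behind TOR (my illustration, not
# TOR's actual protocol). Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

# Three volunteer relays -- entry, middle, exit -- each with its own key.
relays = [Fernet(Fernet.generate_key()) for _ in range(3)]

message = b"GET http://example.com/"

# The sender wraps the message in three layers, innermost for the exit relay.
onion = message
for relay in reversed(relays):
    onion = relay.encrypt(onion)

# Each relay peels exactly one layer; it sees only where the packet came
# from and where it goes next, never the whole path or (until the exit)
# the plaintext.
for hop, relay in enumerate(relays):
    onion = relay.decrypt(onion)
    print(f"hop {hop}: one layer peeled")

assert onion == message  # only the exit relay recovers the actual request
```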
The library proposes to offer an exit for TOR, meaning people could use its computer network to download materials anonymously. A bunch of questions ensue: what do people do in TOR, and does this activity matter as a point of library policy? The dark speculations come easily: Deal drugs? Send a bomb threat? Plot insurrection or worse? Just steal software? But in the other column, there are better possibilities: evading censorship for political art? Blowing the whistle on unconstitutional surveillance? Negotiating a job offer across international borders or protecting a trade secret? Organizing for rights in a closed regime? Negotiating safe passage for a political prisoner?
Since it’s software, TOR is simply a platform for human purposes, be they benign or malignant. It is no more culpable than the library card catalog of a previous era: the catalog listed how to find books on the shelves, providing neutral access to anything, be it The Anarchist Cookbook or Charlotte’s Web. What patrons did with the books was their concern, and librarians at least aspired to stay out of that question.
Were I still a librarian, I would be vexed by this one. It’s a first-amendment-loving profession, and access is central (both characteristics resonate with me). At the same time, criminal activity such as Silk Road or ransomware bots may live in TOR, and organizing capacity for hate groups and human trafficking networks could lurk there as well. Yet TOR’s stated goals are to support free expression, privacy, and human rights, and libraries, in their nerdy, sometimes quaint way, aim to live that mission every day. If some teenage Ai Weiwei type is trying to get her message out, and my library is her exit relay, should I say no? Access is entwined with the right to privacy: being able to check out the oft-banned Ulysses, for instance, means being able to check it out more or less anonymously. If I use a library terminal to tap the Internet, what content is fair game, and what level of privacy is appropriate?
I think on the whole (particularly if I were a New Hampshire librarian, in a state that has “Live Free Or Die” on its license plates), I’d brave the battle and provide the relay. Libraries are networks, and although it’s easy to stay out of the fray and let others fight this battle, who is really doing it from the public-interest side? Our Google overlords have already got a huge advantage, and are so unfazed by their ability to track our every move online that their position, something I think the Stasi would have been fine with, is “don’t do anything that you shouldn’t, and everything will be fine.” Privacy in our lawful actions is not something we should be compelled to give up, nor do our intentions and our explanations of what we might do become property of the state, even if some of our fellow inhabitants of the planet have dark ones and use tools to foment them. TOR is a tool to keep things private, at least some of which should be, even at a public library, perhaps even particularly there, where there is a means to discuss the public good and answer to it.
“This is real. A Scrum Master in ninja socks has come into your office and said, “We’ve got to budget for apps.” Should it all go pear-shaped, his career will be just fine.”