Being a Better Helper

Have you ever asked somebody for computer help? Been asked? Offered advice unasked? Received said unsolicited advice?

I’ve been in all four categories, and I suspect anybody reading this blog has as well. It can be a grim business, the ‘computer helping’ game. (If I did reality shows instead of educational media, I’d pitch ‘Family Tech Support’: intense relationship drama. Probably too full of bad language even for cable. “But I don’t even see the enter key anywhere? Why the #$#!~*& is it called enter if it means ‘return’?” A question for the ages.)

But there’s hope: earlier today, I encountered the best advice I’ve seen for helping somebody use a computer–and it’s 21 years old. It comes from a post by Phil Agre, who was then at UCLA. The entire thing is worth reading, but here is the first bit…

Computer people are fine human beings, but they do a lot of harm in the ways they “help” other people with their computer problems. Now that we’re trying to get everyone online, I thought it might be helpful to write down everything I’ve been taught about helping people use computers.

First you have to tell yourself some things:

Nobody is born knowing this stuff.

You’ve forgotten what it’s like to be a beginner.


Good advice for teaching in general. It speaks to keeping the experience and the goals of the learner in mind, rather than focusing primarily on what the teacher is doing. Simple, but hard to do…

Tip of the hat to for the link.


Reasoning Words: Should Public Libraries be TOR Exit Relays?

The Electronic Frontier Foundation reports that a pilot project at the Lebanon, New Hampshire, library to serve as a TOR exit relay has been temporarily halted, and potentially scotched, by the U.S. Department of Homeland Security. ProPublica has a rundown as well.

To shed some light on the question of whether this is an outrage or reasonable, here’s a quick TOR 101 lesson. TOR (the name comes from The Onion Router, but it has no relation to the satirical website The Onion) is a means of using the Internet anonymously. Individual computers (belonging to volunteers) provide entry into and exit from anonymous, encrypted network paths–sort of a series of safe houses that let computer traffic pass from one to the next without recording whence it came or whither it goes. (Disclosure: I’ve not used it; I got as far as downloading the software, installing it, and chickening out. So somebody who has it running live can no doubt improve and correct that description.) There are also lots of good explanations around the web, including EFF’s “in plain English.” The key thing is that the set-up provides a theoretically untraceable way to navigate the Internet, and it can be installed on any computer.
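The “safe house” idea above can be sketched in a few lines of toy code. This is strictly an illustration of the layered principle, not TOR’s actual cryptography (which uses real ciphers and key exchange); the names and the XOR “encryption” here are my own stand-ins.

```python
# Toy illustration of onion routing's layered idea -- NOT real TOR cryptography.
# The sender wraps the message in one layer per relay. Each relay can peel
# exactly one layer: no single relay sees both who sent it and what it says.

def xor_layer(data: bytes, key: bytes) -> bytes:
    """Toy 'cipher': XOR with a repeating key (symmetric, so it also unpeels)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message: bytes, relay_keys: list[bytes]) -> bytes:
    """Sender applies layers outermost-last, so the entry relay peels first."""
    for key in reversed(relay_keys):
        message = xor_layer(message, key)
    return message

def route(onion: bytes, relay_keys: list[bytes]) -> bytes:
    """Each relay in turn peels its own layer; only the exit sees plaintext."""
    for key in relay_keys:
        onion = xor_layer(onion, key)
    return onion

keys = [b"entry", b"middle", b"exit"]  # hypothetical three-relay circuit
onion = wrap(b"hello, open web", keys)
assert route(onion, keys) == b"hello, open web"
```

The point of the sketch is the structure: the middle relay only ever handles wrapped bytes, which is why the path is (in theory) untraceable end to end.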

The Kilton, NH, library proposes to offer an exit for TOR, meaning people could use its computer network to get materials anonymously. A bunch of questions ensue: what do people do in TOR, and does it matter as a point of library policy? The dark speculations come easily: Deal drugs? Send a bomb threat? Plot insurrection or worse? But in the other column, there are better possibilities: evading censorship for political art? Blowing the whistle on unconstitutional surveillance? Negotiating a job offer across international borders, protecting a trade secret, or safeguarding the pre-release version of a blockbuster film? Negotiating safe passage for a political prisoner?

Since it’s software, TOR is simply a platform for human purposes, be they benign or malignant. It is no more culpable than the card catalog of a previous era: the catalog listed how to find books on the shelves, providing neutral access to anything, be it The Anarchist Cookbook or Charlotte’s Web. What patrons did with the books was their concern, and librarians at least aspired to stay out of that question.

Were I still a librarian, I would be vexed by this one. It’s a First Amendment-loving profession, and access is central (both characteristics resonate with me). At the same time, criminal activity such as Silk Road or ransomware depends on TOR, to say nothing of the payments that support terror, which perhaps move through it as well. Yet TOR’s stated goals are to support free expression, privacy, and human rights, and libraries, in their nerdy, sometimes quaint way, engage with those every day. If some teenage Ai Weiwei in North Korea is trying to get her message out, and my library is her exit relay, should I say no? Are the ideals of access entwined with rights to privacy–even when that privacy (unlike curling up with a copy of the oft-banned Ulysses, say) means instant connections with the writhing, volatile mass that the Internet can be?

I think on the whole (particularly if I were a New Hampshire librarian–a state that has “Live Free Or Die” on its license plates), I’d brave the battle and provide the relay. Libraries are now networks, and although it’s easy to stay neutral and let others fight this battle, who else is doing it? Our Google overlords have already got a huge advantage, and are so unfazed by their ability to track our every move online that their position–something I think the Stasi would have been fine with–is “don’t do anything that you shouldn’t, and everything will be fine.” Privacy in our lawful actions is not something we should be compelled to give up, nor should our intentions and our explanations of what we might do become property of the state, even if some of our fellow inhabitants of the planet have dark ones, and use tools to foment them. TOR is a tool to keep things private, at least some of which should be, even at a public library.


Read This: “What is Code” in Bloomberg Businessweek

Check out this brilliantly done article by Paul Ford and the Businessweek team. The videos aren’t ads, by the way, and are also great.


“This is real. A Scrum Master in ninja socks has come into your office and said, ‘We’ve got to budget for apps.’ Should it all go pear-shaped, his career will be just fine.”

The Huffington Post has a behind-the-scenes piece as well.

Tech & Humanities Watch: Hackers and Hacks

For a further dispatch on the already-noted incursion of big data and AI into journalism, see Tim Adams’ good piece in the Guardian about the cheerful software guys who are building a “Terminator” for the workaday reporter. It’s called Quill, and it is a software program that can take raw data feeds and craft news stories without human intervention. It’s part of the next generation of “data journalism,” and although I’m not sure quite how widespread it is, I’m confident it soon will be, as it partakes of the inexorable “if it can be automated, it will be automated” trend.

I suppose that, considered as a technical problem, a newspaper is just another “front end” to fill up with content (just as the web itself originated as a kind of “front end” for the underlying Internet). We are living in a time when computing power can dip into previously unimagined sources and craft front ends for all kinds of things instantaneously (and not just presentation or content: software can make other software. One example is “The Grid,” an AI-based system that custom-designs and builds your blog for you.) Perhaps the most amazing thing about Quill is that it’s probably not even that hard a content challenge for a computer to turn out the average local news story, earnings report, sports extra, or even a profile of Phyllis George. Not only can a computer replace half the newsroom, it can do it without breaking a sweat. The larger question: what other content is out there waiting to be harvested and automated? Textbooks? Annual reports? TV news broadcasts? Online courses already are, to some extent. Surely somebody in a dorm at Caltech has written a bot to craft the perfect OkCupid profile after scraping your FB feed. It’s the work of a weekend for a sufficiently gifted and lonely programmer.

What’s more, readers don’t really know the difference between computer authors and human ones. From the story:

“Perhaps the most interesting result in the study is that there are [almost] no… significant differences in how the two texts are perceived,” Clerwall concluded. “An optimistic view would be that automated content will… allow reporters to focus on more qualified assignments, leaving the descriptive ‘recaps’ to the software.”

And it’s just begun…The computer can also craft endless localized or more detailed versions of a story, with the pieces that are relevant to a very specific reader (think Amazon suggestions, but tuned to your news interests). The era of the reporter–or the reader–having to manually crunch the numbers, or anything else for that matter, may be passing.

Hammond fully intends to live to see the day when people look at spreadsheets and data sets as being as antiquated as computer punch cards.

What is the most sophisticated thing the machine can do in this respect now? “We can do an eight-page exegesis of one number,” Hammond says, “for example on how likely it is a company is going to default on its debt. The eight pages will be written in plain English, supported where appropriate by graphs and tables. It will show you how it got to its conclusion. It is fine to read. The most important bits of analysis are shoved to the top.”

As a person with a lot of loyalty to the somewhat battered profession of journalism, I’m a little freaked out by this, but as a techy who thinks we’re still at minute 1 or 2 of what we can do with data, I’m super excited. Not about these ordinary stories that will now be automated, but because lurking behind this innovation is some new and potentially much better way of getting news to people. When rich media meets big data, that should set off some sparks; or when the same algorithms that write the overnight sports stories are turned on, say, economic news or science topics, maybe we can change the whole nature of the usually inadequate coverage in these areas. I also think a tool like Quill, viewed through an educational lens (explanatory/educational journalism rather than breaking-news reporting), offers a lot.

Yet, and for another day, what’s lost when doing it the old way finally fades: what an improbable and glorious human endeavor the newsroom was…

Human-based Content Creation & Management, once upon a time. The Denver Post newsroom in the 1970s.

Beautiful Machines: Frank Gehry and Software

A Frank Gehry building in NYC: “Beekman Place New York” by Emmett Hume. Licensed under CC BY 3.0 via Wikimedia Commons.

The characteristic designs of architect Frank Gehry’s signature buildings are possible only with the advent of sophisticated computerized design programs. (True of most modern buildings, I would think, unless there are some historical re-enactor types who limit themselves to only the tools available to Palladio.)

A fascinating, and underreported, aspect of his legacy is the extension of computer approaches not only into the design process, but also into the engineering, the sourcing, and the fabrication. The designs go from the computer directly to the fabricators, who create the pieces of the building on a just-in-time basis. The results get shipped to the site, and all of this is a “paper-free” process that is born digital–bits the whole way until the steel, titanium, or whatever is being fabricated becomes atoms. No blueprints, no 2-D models, just a data stream.

I learned about this hearing a visitor talk about it at an exhibit in the San Jose Tech Museum of Innovation. He was explaining to his companion how the panels for Disney Hall came in from their Midwestern fabricator in batches, everything controlled by algorithms. As the building came together, variances occurred in the panels, and this information could be fed back by computers to the manufacturer, who made the next batch of panels to the slightly changed spec. In fact, the computer program could predict most of the variances, including any changes that resulted from weather during shipping.
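The feedback loop that visitor described can be sketched in miniature. Everything here is hypothetical–the function name, the gain factor, and the numbers are my own illustration of the idea, not anything from Gehry’s actual system: measure how a batch deviated from spec, then nudge the spec for the next batch to compensate.

```python
# Hypothetical sketch of the panel-fabrication feedback loop: measured
# variances from one delivered batch adjust the spec sent for the next.
# All names and numbers are illustrative, not from Gehry's real pipeline.

def next_batch_spec(nominal: float, measured: list[float], gain: float = 0.5) -> float:
    """Shift the nominal dimension to offset the average observed variance.

    A gain below 1.0 corrects gradually rather than overreacting to one batch.
    """
    avg_variance = sum(m - nominal for m in measured) / len(measured)
    return nominal - gain * avg_variance  # oversized panels pull the spec down

spec = 120.0                        # nominal panel width in cm (made up)
measured = [120.4, 120.3, 120.5]    # this batch arrived slightly oversized
spec = next_batch_spec(spec, measured)
assert spec < 120.0                 # so the next batch is cut slightly smaller
```

The “predict most of the variances” part would amount to feeding known factors (like shipping weather) into the same adjustment before the panels are even cut.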

This use of nearly real-time data across a network is what makes these buildings visually possible, and financially feasible. Earlier architects hit unbreachable limits that Gehry and his peers can blow past. (To wit, one of Gehry’s quips: “Had Erich Mendelsohn had the computer stuff that we got now, I would have had to do something else.”)

The fascinating blog Priceonomics has a great piece on it by Lian Chikako Chang, an architecture writer and researcher, with detail about how his system works and just how transformative the concept was to the whole industry. Gehry’s shop is now also a tech company that offers a platform and service to other studios. It also mentions the irony that Gehry himself has no patience or aptitude for computers at all!

From the piece:

Gehry suspected that digitally designed geometries could be executed much more efficiently with less redundancy. Instead of creating standard 2D construction drawings, Gehry now had his contractors refer directly to the 3D digital model, translating digitized coordinates directly into manual cutting instructions and machine tooling paths.

The contractors he worked with welcomed his guidance. “Most contractors,” he has since said, “want the architect to be the Daddy.” In 1997, the museum [Guggenheim in Bilbao] opened on budget and on time to rave reviews.

The Guggenheim Museum in Bilbao Spain. Photograph taken by User:MykReeve on 14 January, 2005.
The Guggenheim Museum in Bilbao Spain. Photograph taken by User:MykReeve on 14 January, 2005.