Zipheads

Social bookmarking, tagging, and editing have helped launch Web 2.0, or whatever you want to call it. This is a phenomenon few, if any, accurately predicted. Except that Vernor Vinge predicted it quite accurately in 1999 with his Hugo Award-winning novel A Deepness in the Sky. In the novel, the Emergents, a future human civilization, enslave the mind by turning many of their citizens into autistic savants. These “zipheads” become so focused on one task that they are unable to take care of themselves. The Emergents use these zipheads in an end-to-end system roughly analogous to a grid of networked data centers, complete with pattern-recognition capabilities, redundancy, and low latency (the zipheads speak to each other in their own highly modified and efficient language).

Although I highly recommend the novel, the “technological enslavement of the mind” it depicts made it exceedingly difficult for me to get through. Several recent and unsettling developments in social technologies remind me of these zipheads.

Zipheads at Work

Amazon’s Mechanical Turk project, which Amazon calls “artificial artificial intelligence,” pays humans to complete tasks for which they are better suited than computers. Many of these tasks depend on repetitive pattern recognition, something humans are exceedingly good at. This is not enslavement, of course. Instead, Amazon Mechanical Turk is a synthesis of capitalism with Web 2.0, which I suppose some view as a form of enslavement.
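For the curious, here is a minimal sketch of what posting a task (a “HIT,” or human intelligence task) to Mechanical Turk might look like in code. The boto3 SDK, the sandbox endpoint, and the photo-labeling question are my assumptions for illustration, not details from Amazon’s announcement:

```python
# A minimal sketch of posting a human intelligence task (HIT) to
# Amazon Mechanical Turk. Assumes the boto3 SDK and valid AWS
# credentials; the question and reward amount are hypothetical.
import boto3

# The sandbox endpoint lets a requester test without paying real workers.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Mechanical Turk questions are described in XML; this QuestionForm
# asks a worker a single free-text question.
question_xml = """
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>photo_check</QuestionIdentifier>
    <QuestionContent>
      <Text>Does this photo contain a storefront? Answer yes or no.</Text>
    </QuestionContent>
    <AnswerSpecification>
      <FreeTextAnswer/>
    </AnswerSpecification>
  </Question>
</QuestionForm>
"""

hit = mturk.create_hit(
    Title="Identify storefronts in photos",
    Description="Look at a photo and say whether it shows a storefront.",
    Reward="0.05",                    # USD per completed assignment
    MaxAssignments=3,                 # redundancy: ask three workers
    LifetimeInSeconds=86400,          # HIT stays available for a day
    AssignmentDurationInSeconds=300,  # each worker gets five minutes
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])
```

Note the MaxAssignments parameter: asking several workers the same question and comparing their answers is the same redundancy-for-reliability trick the Emergents build into their ziphead grid.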

Zipheads at Play

First there was Slashdot.org. Then there was del.icio.us. Then there was Digg.com. Now there is Diggdot.us, a website that combines them all while attempting to eliminate redundancy. And there is much redundancy to eliminate. Not only do the major social bookmarking/tagging/editing sites overlap in their own coverage, they reveal the redundancy so common in the media- and blogosphere to which they link.

A case in point: I follow planetary science and astronomy news very closely. My options are numerous. The sites I visit, many of which have RSS feeds to which I subscribe, include Space.com, SpaceRef, Spaceflight Now, New Scientist: Space, Universe Today, space agency sites, and mission-specific sites. Many of the same stories also show up on news sites, social sites, and blogs.

In true ziphead fashion, I run my own website primarily focused on planetary science news and commentary.

How do we cull through all this news and commentary? Aggregator sites like Diggdot.us and technologies like RSS and CNET’s “The Big Picture” visual tool are helpful. Unfortunately, the number of aggregator sites, RSS feeds, and tools continues to grow until they too become redundant, a result of the lack of coordination between zipheads. Each feels that he, she, or other has something unique to add to the larger conversation. The cream tends to rise to the top, but not without serious information overload.
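As a rough sketch of the kind of culling an aggregator performs, here is how one might merge several RSS feeds and drop duplicate headlines in Python. The feedparser library and the specific feed URLs are my assumptions; real aggregators use far more sophisticated matching than this:

```python
# A rough sketch of culling redundant news: pull several RSS feeds,
# normalize headlines, and keep only the first occurrence of each.
# Assumes the third-party feedparser library; feed URLs are examples.
import feedparser

FEEDS = [
    "https://www.space.com/feeds/all",
    "https://spaceflightnow.com/feed/",
    "https://www.universetoday.com/feed/",
]

def normalize(title: str) -> str:
    """Lowercase and strip punctuation so trivial rewordings match."""
    return "".join(
        ch for ch in title.lower() if ch.isalnum() or ch.isspace()
    ).strip()

seen = set()
for url in FEEDS:
    feed = feedparser.parse(url)
    for entry in feed.entries:
        key = normalize(entry.get("title", ""))
        if key and key not in seen:
            seen.add(key)
            print(entry.title, "->", entry.link)
```

Of course, exact-headline matching misses the harder problem: five different sites rewriting the same press release under five different titles. That is the pattern-recognition work still left to humans.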

Zipheads at Death

The ziphead phenomenon may be short-lived. What I have not yet pointed out is that all of this activity is part of a larger-scale culling of middlemen everywhere. Eventually, automation technologies will master the techniques that are now unique to humans and will relegate humans to prosumers. I expect this to occur by 2010, when the first autistic savant software agents emerge to create some sort of order out of cyberspace while feeding the results of their reorganization to new user interfaces that are less dependent on text and web portals. In the process, they will eliminate the need for media giants, web portals, and aggregators, as well as the social aspects of Web 2.0.

The automation of news reporting and editing, of searching, categorizing, bookmarking, and tagging…it begins with human zipheads, but does not end with them.


2 thoughts on “Zipheads”

  1. It’s true that there’s a lot of redundancy to the data, but at this point that’s hardly an inefficiency. We have tremendous amounts of unused storage space in PCs around the planet, and the space that is used is mostly used for storing the same large media files over & over. Multiple instances of people saying the same idea are gold compared to most of the trash we’re storing. Text is so small and so important that I’m sure once we start really making trustworthy computing systems we’ll make many thousands of copies every time anyone types anything.

     I think it’s very important that there are lots of different systems right now working on organizing & aggregating data. Theoretically it might be more efficient to have us gather in a smaller number of places, but I think that’s completely overwhelmed by the fact that we can’t trust anyone in particular with having that much control over society, even if we make an explicit social contract with them (power corrupts). It’s better to have thousands of different sets of code on thousands of different servers, so that the whole thing is fault tolerant.

     Fortunately the organizational tools we’re working with now are capable of meta-aggregation. So if things fall apart into Digg, Slashdot and Delicious, you can put them back together as Diggdot.us. Back when you had to actually visit a particular website in order to see the information on it, each person could only go to a few different websites, and there was more danger in schisms. Now it’s all the same soup.

     That’ll go a lot deeper in the next few years. I think we’re going to move past this model where a particular piece of public text is addressed to a particular website, i.e. a “Blogger post.” Things will go out to many different places, just as we now bring things in from many different places. One way that could help the cream rise is by lowering the barrier to entry for newbies– not just to having a blog at all, but to having a blog that gets noticed. Anyone who actually has something to say will be able to say it & have it get to anyone who ought to hear it. That is, assuming that we structure things so that the blogosphere is largely a meritocracy. Which is what we need to work towards. ❤

  2. It’s a huge inefficiency for the reader, in wasted time scanning feed headlines or article summaries to determine to what extent each item rehashes something you’ve already read. It seems like Semantic Web tags could really help, showing what the original and secondary sources are and how much of the article is attributed to each.
