Digital technology, media and intellectual property

WEEKEND TECH: our end-of-the-year edition – “Where in HELL is technology taking us?”

December 20th, 2013 |  Published in Digital and Mobile Technology, Intellectual Property, Telecom and broadband, Weekend tech diversions

 

[Image: data vortex]

20 December 2013 – Edward Snowden’s revelations this year brought to everyone’s mind Aldous Huxley’s best-known work, Brave New World, published in 1932. The title comes from Miranda’s speech in Shakespeare’s The Tempest: “Oh, wonder! / How many goodly creatures are there here! / How beauteous mankind is! Oh brave new world, / That has such people in’t.”

I used Huxley as my lead-in to a presentation I made this past fall to an EU Commission committee on surveillance, ethics, law and technology, in connection with its discussion of data and the safe harbor rules. Huxley was the subject of my first monograph in college and I have kept up my interest in his work ever since.

Brave New World is set in the London of the distant future – AD 2540 – and describes a fictional society inspired by two things: Huxley’s imaginative extrapolation of scientific and social trends; and his first visit to the US, during which he was struck by how a population could apparently be rendered docile by advertising and retail therapy, subjects of several of his essays.

As an intellectual who was fascinated by science, he guessed (correctly, as it turned out) that scientific advances would eventually give humans powers that had hitherto been regarded as the exclusive preserve of the gods. And his encounters with industrialists like Alfred Mond led him to think that societies would eventually be run on lines inspired by the managerial rationalism of mass production (“Fordism”) – which is why AD 2540 appears in the novel as “the Year of Our Ford 632”.

Evgeny Morozov said it best: “Huxley’s dystopia is a totalitarian society, ruled by a supposedly benevolent dictatorship whose subjects have been programmed to enjoy their subjugation through conditioning and the use of a narcotic drug – soma – that is less damaging and more pleasurable than any narcotic known to us. The rulers of ‘Brave New World’ have solved the problem of making people love their servitude.”

Which brings us to today. On the Orwellian front, we are doing rather well – as the revelations of Edward Snowden have recently underlined. We have constructed an architecture of state surveillance that would make Orwell gasp. And indeed for a long time, for those of us who think about such things, it was the internet’s capability to facilitate such comprehensive surveillance that attracted most attention.

In the process, however, we forgot about Huxley’s intuition. We failed to notice that our runaway infatuation with the sleek toys produced by the likes of Apple and Samsung – allied to our apparently insatiable appetite for Facebook, Google and other companies that provide us with “free” services in exchange for the intimate details of our daily lives – might well turn out to be as powerful a narcotic as soma was for the inhabitants of Brave New World.

As Jaron Lanier has pointed out, we have a stupendous amount of information about our private lives being stored, analyzed and acted on in advance of a demonstrated valid use for it. We have an unusually proficient and (relatively) good-natured technical elite. The mostly young people who run the giant cloud computing companies that provide modern services such as social networking or Web searching, as well as many of their counterparts in the intelligence world, are for the most part well intentioned. But what evil lurks beneath?

And now we debate … and confuse … omniscience (having total knowledge) with omnipotence (having total power). It’s a reasonable supposition that, before the Snowden revelations hit, America’s spymasters had made just that mistake. They were … and really still are … a force unto themselves. I think the drip-drip-drip of Snowden’s mother of all leaks has taught us that omniscience is not omnipotence.

And having just finished Dave Eggers’ The Circle, I found the dystopian message that came through – in keeping with Aldous Huxley – is that we are willing to shed privacy in the name of digital connections and convenience.

But, hey. It is not as if we did not know where we were going. Waaaaaaaaay back in 1967, The Public Interest (then a leading venue for highbrow policy debate) published a provocative essay by Paul Baran, one of the fathers of the data transmission method known as packet switching. Titled “The Future Computer Utility,” the essay speculated that someday a few big, centralized computers would provide “information processing … the same way one now buys electricity.” The essay is on the reading list of almost every informatics program. It was on my initial reading list at MIT.

Baran got straight to the point:

“Our home computer console will be used to send and receive messages – like telegrams. We could check to see whether the local department store has the advertised sports shirt in stock in the desired color and size. We could ask when delivery would be guaranteed, if we ordered. The information would be up-to-the-minute and accurate. We could pay our bills and compute our taxes via the console. We would ask questions and receive answers from “information banks” – automated versions of today’s libraries. We would obtain up-to-the-minute listing of all television and radio programs … The computer could, itself, send a message to remind us of an impending anniversary and save us from the disastrous consequences of forgetfulness.”

It took decades for cloud computing to fulfill Baran’s vision. But he was prescient enough to worry that utility computing would need its own regulatory model. He wanted policies that could “offer maximum protection to the preservation of the rights of privacy of information”:

“Highly sensitive personal and important business information will be stored in many of the contemplated systems … At present, nothing more than trust – or, at best, a lack of technical sophistication – stands in the way of a would-be eavesdropper … Today we lack the mechanisms to insure adequate safeguards. Because of the difficulty in rebuilding complex systems to incorporate safeguards at a later date, it appears desirable to anticipate these problems.”

Sharp, bullshit-free analysis: techno-futurism has been in decline ever since.

So now we have this new digital infrastructure, thriving as it does on real-time data contributed by “citizens”. It allows technocrats to take politics, with all its noise, friction and discontent, out of the political process. It replaces the messy stuff of deliberation with the cleanliness and efficiency of the new, the exciting, the “data-powered administration”!

You have read enough about this, I am sure, to know this phenomenon has a meme-friendly name, “algorithmic regulation” (coined by Tim O’Reilly, by the way).

In summary: the algorithm rules!

In essence, we have turned our systems over to the engineers, to the point where we assume all problems can be solved by engineering. (To read about the difficulty in all of this, read Evgeny Morozov’s To Save Everything, Click Here.)

And this is not new stuff, either. One of my delights this year has been my conversations with Todd Haley at eTERA Consulting, a company that my e-discovery readers should get to know. Todd is Vice President (Business Intelligence) at eTERA, an adjunct professor of legal technology at Georgetown University, and a frequent author and speaker on electronic discovery, litigation support and information technology. He and I have both done the “deep dive” into these issues and he reminded me about Spiros Simitis.

So, climbing back once again into the “Way Back Machine” – this time to 1985 – we have Spiros Simitis, Germany’s leading privacy scholar and practitioner and at the time the data protection commissioner of the German state of Hesse, who explored the very same issue that preoccupied Baran: the automation of data processing. He saw that citizens would be “just self-contented suppliers of information to all-seeing and all-optimizing technocrats”. For his primary points, start with his lecture at the University of Pennsylvania Law School (click here).

Today we see the result: software makes its selection decisions based upon multiple variables (even thousands of them). We have read countless stories where a government entity or corporation … asked to provide a detailed response as to why an individual was singled out, or why a decision was made by an automated recommendation system … says: “this is what the algorithm found based on previous cases”.
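To make that opacity concrete, here is a deliberately toy sketch in Python – hypothetical variables, hypothetical weights, not any real agency’s or vendor’s system – of how such an automated decision typically gets made: a weighted score over many variables, with nothing better than “the top contributing factors” available as an explanation.

    # A toy "automated decision" (hypothetical variables and weights, not any
    # real system): score an individual on many weighted variables, flag them
    # if the score crosses a threshold, and offer only the top contributing
    # factors as an "explanation".
    from dataclasses import dataclass

    @dataclass
    class Decision:
        flagged: bool
        score: float
        top_factors: list   # the thin explanation the system can offer

    def score_individual(features, weights, threshold=0.5):
        # Weighted sum over (possibly thousands of) variables.
        contributions = {k: features.get(k, 0.0) * w for k, w in weights.items()}
        score = sum(contributions.values())
        # The only "why" available: which variables contributed most.
        top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:3]
        return Decision(flagged=score > threshold, score=score, top_factors=top)

    # Weights nominally "learned from previous cases" (made up for illustration):
    weights = {"late_night_travel": 0.40, "foreign_transfers": 0.35, "new_sim_cards": 0.20}
    person = {"late_night_travel": 1.0, "foreign_transfers": 1.0, "new_sim_cards": 0.0}

    print(score_individual(person, weights))
    # -> flagged=True, score=0.75, with the top factors as the only "explanation"

Scale that up to thousands of variables learned from previous cases, and the “explanation” gets no more illuminating – which is exactly the problem.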

This is the future we are sleepwalking into. Everything seems to work, and things might even be getting better. Well, ok, in some cases worse. It’s just that we don’t know exactly why or how.

But my biggest issue is that few people are trying to really understand this technology. Granted, I have had some wonderful opportunities. Over the last few years I have had access to MIT, the Swiss Federal Institute of Technology and members of the IBM Research team, with the time to pursue the reading and attend the study sessions. And I do not fear the math, which many people tell me looks like this to them:

[Image: “Math to me”]

 

Math is everywhere, being used to benchmark everything from babies’ tantrums to labor strikes, guerrilla wars, global terrorism and the prediction of confrontations.

But a bonus for my math/data science chums: how much does Google know about where you’ve been? Here is how to plot the geolocation data Google stores about you, using R (click here).
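The linked post uses R; for the Python crowd, here is a minimal sketch of the same exercise. It assumes you have pulled your “Location History” out of Google Takeout as a JSON file with a top-level “locations” list carrying latitudeE7/longitudeE7 fields – check your own export, since Google has changed the format over the years – and it needs matplotlib installed.

    # Minimal sketch: plot your Google location history (a Python take on the
    # R exercise linked above). Assumes a Google Takeout "Location History"
    # JSON export; field names may differ in newer exports.
    import json
    import matplotlib.pyplot as plt

    with open("LocationHistory.json") as f:          # path to your Takeout file
        records = json.load(f).get("locations", [])

    # Coordinates are stored as integers scaled by 1e7.
    points = [(r["longitudeE7"] / 1e7, r["latitudeE7"] / 1e7)
              for r in records if "latitudeE7" in r and "longitudeE7" in r]
    lons, lats = zip(*points) if points else ([], [])

    plt.figure(figsize=(8, 6))
    plt.scatter(lons, lats, s=1, alpha=0.3)          # one faint dot per recorded fix
    plt.xlabel("Longitude")
    plt.ylabel("Latitude")
    plt.title("Everywhere Google says I have been")
    plt.show()

A few thousand dots later, the answer to “how much does Google know about where you’ve been?” tends to be: more than you think.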

And math’s utility. To help connect the dots:

 

[Image: “Information” vs. “Knowledge” – scattered dots vs. connected dots]

 

 

 

As we have been able to generate more data, we and our institutions have become addicted. If you withheld the data and severed the feedback loops, it’s not clear whether our companies/institutions could continue at all. We, as citizens, are caught in an odd position: our reason for disclosing the data is not that we feel deep concern for the public good. No, we release data out of self-interest, on Google or via self-tracking apps. We are too cheap not to use free services subsidized by advertising. So we allow the devil in, and the NSA piggybacks right behind.

We have reached a state that Carl Sagan warned about. One of the most marvelous clips I have of him, from the Charlie Rose show shortly before he died, is below. Carl Sagan inspired a generation of scientists with his work in and out of the classroom. But he didn’t always present science with cheer. In this clip, he passionately defends science with a grave warning. It’s something we all need to hear:

[Video: Carl Sagan on the Charlie Rose show]

Sagan explored these issues in much detail in one of his last books, The Demon-Haunted World: Science as a Candle in the Dark. The book is intended to explain the scientific method to laypeople and to encourage people to learn critical or skeptical thinking. It explains methods to help distinguish between ideas that are considered valid science and ideas that can be considered pseudoscience. Sagan states that when new ideas are offered for consideration, they should be tested by means of skeptical thinking and should stand up to rigorous questioning. One of his more prescient quotes:

 

[Image: Carl Sagan’s “foreboding” quote]

 

 

And no, I do not disparage all technology, all science. Just this data creep. It took 12 years and $3 billion to sequence the human genome and figure out the code in our DNA. This Xmas my wife bought me a subscription that includes a kit to post off a blood sample and get information on my genetic inheritance and a basic genome sequence for as little as a couple of thousand dollars. Wow. From $3 billion to a couple of thousand dollars.

And I recently downloaded one of those “coincidensity” apps that pulls together readers’ and authors’ interests – as determined by AI – to create tiny, human-scale, thematic communities. It pulls together a spectrum of identities, a constellation of communities, all geared to me. Yes, I “surrendered” some data about myself. But I did so voluntarily.

And I remain gobsmacked at the breakneck speed of developments in artificial intelligence, genetics and medical technology. And other developments in science across multiple fields. I cannot possibly catalog all of the past year, but here are just a few with which I had some involvement, if only through knowing the scientists or institutions involved, or the science journalists I have worked with in the past few years:

* The Inkjet Research Centre at Cambridge University (UK) has developed a process to print eye cells, which could one day be used to help cure human blindness.

* Scientists have been able to generate stem cells from the companion cells left behind after neurons die. These stem cells go on to become neurons. This is an important step towards being able to replace neurons lost to neurodegenerative diseases.

* MIT scientists have developed a groundbreaking technique to measure the mass of exoplanets using only their transit signal. This could have important implications for the accuracy of estimates of exoplanet habitability. (A rough sketch of the idea follows this list.)

* “Biology as destiny” – the view that genes override choice – is going out the window. Developments in epigenetics at several institutions are moving toward self-directed biological transformation.

* Several institutions have developed a new breed of computer chips that operate more like the brain, which may narrow the gulf between artificial and natural computation – between circuits that crunch through logical operations at blistering speed and a mechanism honed by evolution to process and act on sensory input from the real world. Advances in neuroscience and chip technology have made it practical to build devices that, on a small scale at least, process data the way a mammalian brain does. These “neuromorphic” chips may be the missing piece of many promising but unfinished projects in artificial intelligence.
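And, as promised, a rough back-of-the-envelope version of the exoplanet-mass idea mentioned above (my gloss, not the MIT paper’s full treatment): the wavelength-dependence of the transit depth reveals the atmosphere’s scale height, which ties temperature, composition and surface gravity together – and gravity gives you the mass.

    % Rough sketch (my gloss): the transmission spectrum yields the scale
    % height H, the temperature T and the mean molecular mass \mu; the
    % transit depth yields the planet radius R_p; gravity, hence mass, follows.
    \[
      H = \frac{k_B T}{\mu\, g_p}, \qquad
      g_p = \frac{G M_p}{R_p^{2}}
      \quad\Longrightarrow\quad
      M_p = \frac{k_B T\, R_p^{2}}{G\, \mu\, H}
    \]

No radial-velocity measurement required – everything comes from the transit signal itself.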

Endnote:

The Singularity goes Mainstream

Finally, an attempt to introduce the term “Technological Singularity” to a mainstream audience. Public awareness and education are a step toward social acceptance. With director Wally Pfister (cinematographer of movies like Inception, The Dark Knight and its sequel The Dark Knight Rises, The Prestige and much more) plus actors Johnny Depp and Morgan Freeman, this looks like the perfect project to use the immersive power of the moving image to spread awareness about our possible future, described in detail by Ray Kurzweil and others.

Yes, controversial. Will technology and biology become one? Yes, it has started (think: the experimental use of nanobots to hunt and combat cancer). The purging of Homo sapiens? Ummm … not sure.

But the movie has an interesting trailer:
[Video: film trailer]
And if you haven’t read any Ray Kurzweil, start with his most recent book, How to Create a Mind. His best.

And for a great ride in the “Way Back Machine” take a look at Ray when he was on “I’ve Got a Secret” in the 1960s:

[Video: Ray Kurzweil on “I’ve Got a Secret”]

 

Wishing you all a very Merry Xmas and a happy and prosperous New Year.

  

Gregory P. Bufithis, Founder/Chairman

