
Happy New Year from Paris

December 31st, 2014 |  Published in New Years Eve 2014


I wish you a happy holiday season and New Year. I extend my sincere thanks and appreciation to my clients, my readers and my friends for their confidence, loyalty and partnership. I have been so very fortunate this year to have had significant time to spend with many of you in all the parts of my milieu: the e-discovery ecosystem, the mobile telecom industry, the neuroscience community, the astronomy/physics community, the literary establishment, and especially my artificial intelligence community. So just a few end-of-the-year observations on the brain and AI, concluding with one of my visits to Google DeepMind (how festive!):

– Gregory P. Bufithis, Esq.

 

Damn, technology develops at an exponential rate. With advances in the collection, processing, and analysis of data, we have turned a civilizational corner. We can now make sense of multitudinous systems of decisions … and “their tethered outcomes,” as Josh Nimoy likes to say … which in the past were far too complex, and seemingly chaotic, for us to extract any tangible benefit from.

The information/telecommunications revolution of the late 20th and early 21st centuries has had a societal impact as massive as that of the Industrial Revolution. Technological advances in mobile computing and data processing, and the introduction of multi-analytical platforms, have transformed the way human beings identify, exhibit, and explore themselves, and the companies, organizations, and nations they populate.

It is an amazing time. Our mind is constantly trying to make sense out of the world. We watch what’s going on around us and seek patterns. In the sciences we attempt to represent those patterns in mathematics (the laws of motion are matched with the calculus, the behavior of numbers is matched with Peano’s axioms). In a religious or spiritual life we attempt to match the patterns in our life with the shared experiences of others or with connections that seem to come from deep inside us. Our technology accelerates that pattern seeking.

 

Although sometimes that “pattern seeking” can be a delusion.

There is a well-known demonstration we learn about at Cambridge Neuroscience. A group is led to a room where wires lace both ceiling and walls, tiny colored light beads spaced evenly along the wires. The room lights go off and then harpsichord music is played from speakers placed in each of the four corners of the room. It is a Bach fugue – and the room is filled with tiny points of flickering light from the beads. The lights dance in perfect harmony with the music; the patterns of notes and the patterns of light weave an intricate web of sensation that seems to transport you.

After the session, the group … the entire group … asks about the computer process involved. How does it work? Where is it?

“In your head,” the professor laughs. “The lights are flickering at random; your mind seeks a pattern to match the music.”

It’s called “patternicity”. It is why people see faces in nature, interpret window stains as human figures, hear voices in random sounds generated by electronic devices. The proximate cause is the priming effect, in which our brain and senses are prepared to interpret stimuli according to an expected model. We have a tendency, a need to find patterns.

The brain is so incredibly complex. It’s a network of neural nodes that can process information in parallel. It’s a plastic neural network that can, in some ways, be actively changed by will or environment. For example, so long as certain crucial portions of the brain aren’t injured, it’s possible for the brain to compensate for injury by actively rewiring its own network. Or, as you might notice in your own life, it’s possible to improve your own cognition just by getting enough sleep and exercise.

You don’t have to delve into the technical details too much to see this in your life. Just consider the prevalence of cognitive dissonance and confirmation bias. Cognitive dissonance is the ability of the mind to believe what it wants even in the face of opposing evidence. Confirmation bias is the ability of the mind to seek out evidence that conforms to its own theories and simply gloss over or completely ignore contradictory evidence. Neither of these aspects of the brain is easily explained through computation – it might not even be possible to express these states mathematically.

And it is why, for example, scientists working on automatic translation software are bedeviled by such problems as morphologically complex words (assign-ment, listen-ed) and how they are represented and processed in the brain. Do complex words have cortical representations, and involve brain processes, equivalent to single lexical objects, or are they processed as sequences of separate morpheme-like units? And either way, how do we duplicate that in a software program?
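To make the second hypothesis concrete, here is a toy sketch (my own illustration, with a made-up suffix list – nothing from any real translation system) of treating words as sequences of morpheme-like units via naive suffix-stripping:

```python
# Toy illustration only: the "separate morpheme-like units" hypothesis,
# modeled as naive suffix-stripping. Real morphological analyzers are
# vastly more sophisticated than this.

SUFFIXES = ["ment", "ing", "ed", "s"]   # a tiny, hypothetical suffix list

def segment(word):
    """Split a word into a stem plus a recognized suffix, if any."""
    for suffix in SUFFIXES:
        # Guard against stripping a "suffix" that is most of the word.
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)], suffix
    return word, None

for w in ["assignment", "listened"]:
    stem, suffix = segment(w)
    print(w, "->", stem, "+", suffix)   # assignment -> assign + ment
```

Even this toy immediately raises the hard cases – irregular forms, ambiguous splits, languages far richer in morphology than English – which is exactly where the real systems struggle.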

I mention all of this with respect to the chatter about artificial intelligence, which seems to be graduating to meaningless-buzzword status like “big data” and “cloud.” Or “predictive coding.”

Those who believe that the mind can be replicated on a computer tend to explain the mind in terms of a computer. I even slipped into that myself in one of the paragraphs above. When theorizing about the mind, especially to outsiders but also to one another, defenders of AI often rely on computational concepts. They regularly describe the mind and brain as the “software and hardware” of thinking, the mind as a “pattern” and the brain as a “substrate,” senses as “inputs” and behaviors as “outputs,” neurons as “processing units” and synapses as “circuitry,” to give just a few common examples.

The argument is that our brain is just a computer. It fires electrical pulses and uses chemicals. All we are is a computer that is biologically programmed in such a way that it gives us intelligence and self-awareness. Life is nothing special. Our intelligence isn’t some sort of magic that can only be created through life. If we program a computer to work like our brain, why can’t it have emotions and feelings? I mean, an emotion is just a bunch of chemicals produced when you see, hear, smell or taste something, right? Our brain structure is basically the programming code of a computer, right?

I am a big supporter of the AI space but there seems to be more misinformation out there than solid facts. The general public seems to view AI as the mythical purple unicorn of technology: elusive, powerful, mysterious, dangerous and most likely made up. And many think Ray Kurzweil owns it (kidding).

The use of computer analogies to describe, understand, and replicate mental processes has led to their widespread abuse. Mental processes cannot be understood entirely in computational terms. There is a complex integration of mental processes (such as memory and emotion) with both neurobiology (such as neural activity in specific circuits) and interpersonal relationships (such as patterns of communication). Experience, gene expression and regulation, mental activity, behavior, and continued interactions with the environment are tightly linked in a transactional set of processes that cannot simply be replicated.

And, yes, there has been much breathless talk of late about all the varied mysteries of human existence that have been or soon will be solved by neuroscience. As a neuroscience student I could easily expatiate on the wonders of the discipline. For a start, it is a science in which many other sciences converge: physics, biology, chemistry, biophysics, biochemistry, pharmacology, and psychology, among others. In addition, its object of study is the one material object that, of all the material objects in the universe, bears most closely on our lives: the brain and, more generally, the nervous system. But I study and learn as a skeptic, so I give proper respect to what neuroscience can tell us … and cannot tell us … about ourselves.

I enjoy the debates/discussions/warnings about artificial intelligence and its potential to exterminate us. Humans, if you hadn’t already noticed, have stopped evolving. As David Attenborough recently reminded us in an excellent BBC series, our species is the first – by its own free will – to remove itself from the process of natural selection, thereby stunting evolution. That, accompanied by Stephen Hawking’s and Elon Musk’s warnings that robots will supersede human intelligence and become our biggest existential threat, paints a pretty bleak vision of the future.

But those conversations are not referring to current AI. The dangers may arise if (when?) some kind of machine intelligence is devised that is as “intelligent” as a human. Lots of things called AI today are just reasonably complex algorithms, nothing more.

But as I noted at the beginning, the “problem” (if that is the correct word to use) is that technology develops at an exponential rate, AI included. Artificial intelligence has become vastly more sophisticated in a short time, with machines now able to learn, not just follow programmed instructions, and to respond to human language and movement.

This means two things. First, while AI is still relatively primitive, its capabilities will develop faster than people realize – and once we have developed a “general intelligence” on par with our own intellect, it will very quickly advance to a level of intelligence beyond what we can even comprehend. Second, can you encode morals/ethics/human values into a computer? Can you balance privacy vs. security on a computer? Program greater good vs. individual rights? Would we trust it to make its own decisions? Can we prevent it from changing the values we impose on it? How would we expect it to show empathy toward us without it developing its own opinions?

Because that is what machine learning is all about: getting machines to do what is probably the human brain’s best trick – making inferences and building capability by learning from what has gone before. If nothing else, the brain is a “learning machine”.
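In its most stripped-down form, that is all “learning from what has gone before” means. A deliberately tiny, hypothetical sketch (mine, in Python – no real system is this simple): store past experience, then judge a new case by its closest match in memory:

```python
# Toy illustration only: "learning" as storing past experience, and
# "inference" as judging a new case by the nearest stored example.

def infer(history, x):
    """Label a new measurement x using the closest previously seen example."""
    _, label = min(history, key=lambda pair: abs(pair[0] - x))
    return label

# Past experience: (temperature, label) pairs the system has lived through.
history = [(2, "cold"), (5, "cold"), (18, "warm"), (25, "warm")]

print(infer(history, 7))    # -> cold  (closest memory is 5)
print(infer(history, 21))   # -> warm  (closest memory is 18)
```

Modern machine learning, neural networks included, is in a loose sense a vastly more sophisticated version of that move: generalizing from stored experience.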

So a brief word on what I think was the most significant development in AI this year:

I had an opportunity to see a demonstration at Google DeepMind (originally DeepMind Technologies), a company founded by Demis Hassabis. He is a bit of a genius, really. He started playing chess at age four and soon blossomed into a child prodigy. At age eight, success on the chessboard led him to ponder two questions that have obsessed him ever since: first, how does the brain learn to master complex tasks; and second, could computers ever do the same? His quest to understand and create intelligence has led him through three careers: game developer, neuroscientist, and now artificial-intelligence entrepreneur. His company was acquired by Google this year, as Google seems hell-bent on cornering the market in deep learning, which, simply put, involves processing data through networks of crudely simulated neurons.

Demis’ achievement? He has been able to borrow a trick (as it were) from an area of our brain (the hippocampus, a part of the brain that underpins memory and spatial navigation, and which is still poorly understood) so that his system replays its past experiences over and over to extract the most useful hints about what it should do in the future. To put it simply, the computers were set up to play certain games but were not programmed with any of the rules. Through trial and error – reinforcement learning – the computers mastered the games. Artificial intelligence researchers have been tinkering with reinforcement learning for decades. But not until Demis built a system capable of learning something as complex as how to play a computer game did AI analysts recognize the breakthrough.
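For the curious, here is a minimal sketch of those two ideas working together – reinforcement learning plus experience replay. This is my own toy illustration on a made-up corridor task, not DeepMind’s system (theirs couples replay with deep neural networks to learn games from raw pixels):

```python
import random
from collections import defaultdict, deque

# Toy illustration only: tabular Q-learning with an experience replay
# buffer. The "environment" is a hypothetical seven-cell corridor; the
# agent starts in the middle and is rewarded for reaching the right end.
# Nobody tells it the rules -- it learns them by trial and error.

N_STATES = 7                     # corridor cells 0..6; cell 6 pays off
ACTIONS = (-1, +1)               # step left or step right
GAMMA, ALPHA, EPSILON = 0.95, 0.1, 0.2

q = defaultdict(float)           # Q[(state, action)] -> value estimate
replay = deque(maxlen=500)       # memory of past transitions

def step(state, action):
    """One move in the corridor: returns (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def choose(state):
    """Epsilon-greedy: usually exploit current estimates, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def learn(batch):
    """Apply the standard Q-learning update to a batch of replayed memories."""
    for s, a, r, s2, done in batch:
        target = r if done else r + GAMMA * max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (target - q[(s, a)])

for episode in range(200):
    state, done = N_STATES // 2, False
    while not done:
        action = choose(state)
        nxt, reward, done = step(state, action)
        replay.append((state, action, reward, nxt, done))
        # The hippocampus-inspired trick: learn from a random sample of
        # PAST experience, not merely the latest transition.
        learn(random.sample(list(replay), min(len(replay), 32)))
        state = nxt

# After training, the greedy policy should point right in every cell.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```

Replaying old transitions, rather than learning only from the most recent one, squeezes more out of each experience and breaks up the correlations between consecutive moments – roughly the same reasons replay in the hippocampus is thought to matter.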


The intelligentsia were a bit shocked because they did not expect anybody could do that at this stage of the technology. DeepMind now has a team of seventy-five people in London applying that technology to all of Google’s products, especially search.

As David Deutsch has said, it is uncontroversial that the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos. It is the only kind of object “capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists”.

 

Nor are its unique abilities confined to such cerebral matters. As Brian Cox has said in his most recent science series the cold, physical fact is that it is the only kind of object “that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances”.

In his biography of Steve Jobs, Walter Isaacson quoted Jobs: “I always thought of myself as a humanities person as a kid, but I liked electronics. Then I read something that one of my heroes, Edwin Land of Polaroid, said about the importance of people who could stand at the intersection of humanities and sciences, and I decided that’s what I wanted to do.”

In an interview, Isaacson said: “I suppose a deep appreciation and respect of both humanities and sciences truly drives innovation. The people who were comfortable at this humanities-technology intersection helped to create the human-machine symbiosis that is at the core of the technology story.”

 

The most creative innovations of the digital age came from those who were able to connect the arts and sciences. They believed that beauty mattered. Human creativity involves values, intentions, aesthetic judgments, social emotions, and personal consciousness. These are what the arts and humanities teach us – and why those realms are as valuable a part of education as science, technology, engineering, and math.

We humans can remain relevant in an era of cognitive computing because we are able to think different, something that an algorithm, almost by definition, can’t master. We possess an imagination that, as Ada Lovelace said, “brings together things, facts, ideas, conceptions in new, original, endless, ever-varying combinations.” We discern patterns and appreciate their beauty. We weave information into narratives. We are storytelling animals as well as social ones.

Can AI replicate that creativity? Maybe. Until now, most of the AI/digital innovation (I exempt medicine) we have seen has involved pouring old wine – books, newspapers, journals, songs, television shows, movies – into new digital bottles. But at LeWeb Paris this year the “buzz” was that the next phase of the AI/digital revolution will bring a true fusion of technology with the creative industries, such as media, fashion, music, entertainment, education, literature, and the arts. I saw many examples of what was in progress, what folks have planned.

 

Oh, the places we’ll go!! What fun!! Bring your robot!!

I wish you and yours a delightful holiday season.

 

Postscript

As most of you know, last year I set up the Alice Nikki Vellios Foundation to carry out the guiding philosophy of my mother: to provide opportunities for reading, travel and other luminous encounters for children. The Foundation’s activities are funded solely through my own resources, although this year we accepted our first in-kind contribution from 105 of you … 105 iPads that will be distributed to several schools in the U.S., in Europe and in the Middle East that do not have funds for such equipment, all of this being coordinated courtesy of CAMI MAC, an Apple reseller based in Belgium. I will have more details in the coming year. Many, many, many, many thanks to all of the contributors.

– Gregory P. Bufithis, Esq.


www.anvf.org

 
