
Can We Integrate As Fast As We Innovate?

November 11, 2016

Humans and computers “think” very differently. The history of human-computer interaction is a story of each one trying to model the other in order to translate and adapt these different ways of thinking.

The problem is that computers are adapting to humans faster than humans are adapting to these new computer adaptations.

Closing the gap between transistors and neurons

At their core, computers “think” in a language of 1’s and 0’s flowing through logic gates. In contrast, human thinking is mostly unconscious and uses a language of extremely sophisticated pattern recognition and prediction (with lots of specialized capabilities for human-to-human interaction built in at birth). The interesting thing about this human-computer story is how the burden of adaptation has been steadily moving from humans to computers.

In the very beginning, if you wanted to use a computer you had to do it on its terms. Programming was done in the computer’s native binary language. This evolved into the slightly less tedious punch card programming. Then there was typing in “assembly language” on terminal keyboards. Then a series of more human-friendly programming languages were built on top of the computer-friendly ones, moving from assembly to C to C++ to C#, Java, Swift, and so on.

Along the way, non-programmers could start “programming” (making the computer do useful work) via applications. Then came friendlier graphical user interfaces running on more personal computers. Then came even friendlier touch UIs running on even more personal mobile devices.

During most of this human-computer evolution, humans and computers were still better at different things. Now, however, computers are actually starting to encroach on territory that has been exclusively human up to this point.

We humans have always had the upper hand in situations where good decisions required pattern recognition and quick, reasonably accurate assessment of our context (especially relative to other humans). Now, our smartphone apps can know where we are, who is near us, what our habits are and even what we are likely to do or ask next.

Applying computers to driving is a good example of this shift. While driving, our brains are unconsciously processing mountains of sensory data about the road, our vehicle, and other drivers. We are recognizing patterns and continuously predicting what will happen in the next moment. For instance, if a car merges onto the highway to my right and I know there is a split up ahead where most people are going to want to bear left, I unconsciously imagine and prepare for the other driver suddenly wanting to move into my lane. The amazing thing is that we do all of this while we are consciously thinking about something completely unrelated, like what to have for lunch.

Today, computer programs are able to do all of this situational sensing, interpreting, and predicting in real time, continuously fine-tuning their model of the user and the user’s context. Self-driving cars are just the latest example of computers modeling and adapting to human needs.
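As a toy illustration of that sense-predict-update loop, here is a minimal sketch in Python (the numbers and thresholds are entirely hypothetical; real autonomous-driving stacks fuse many sensors and far richer models) of how a program might keep a running estimate of a neighboring car’s lateral drift and predict a lane change before it happens:

```python
# Toy sense-predict-update loop: estimate a neighboring car's lateral
# velocity from noisy position readings and predict a lane change.
ALPHA = 0.3          # how quickly the estimate adapts to new evidence
DT = 0.1             # seconds between sensor readings
LANE_EDGE = 1.8      # metres from lane centre that counts as "changing lanes"

def update(estimate_v, prev_pos, new_pos):
    """Blend the newly observed lateral velocity into the running estimate."""
    observed_v = (new_pos - prev_pos) / DT
    return (1 - ALPHA) * estimate_v + ALPHA * observed_v

def predict_position(pos, estimate_v, horizon=1.0):
    """Extrapolate where the car will be `horizon` seconds from now."""
    return pos + estimate_v * horizon

# Simulated readings: the other car slowly drifts toward our lane.
readings = [0.0, 0.05, 0.12, 0.22, 0.35, 0.50, 0.68]
v = 0.0
for prev, cur in zip(readings, readings[1:]):
    v = update(v, prev, cur)
    if predict_position(cur, v) > LANE_EDGE:
        print(f"at lateral pos {cur:.2f} m: expect a lane change, easing off")
        break
else:
    print("no lane change predicted yet")
```

The point of the toy is the loop itself: sense, update the model, predict, act, and repeat many times per second.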

The breathless pace of the techno-human race  

Up until recently, this mutual adaptation has been a bit of a dance. Someone invents a new piece of software that is smarter and easier to use (i.e., more human-adapted), and then we take the next step of integrating it into our lives. This integration involves updating our own mental model of what the computer is doing and why, and deciding how we can (or should) change our behavior to take advantage of the new capabilities.

Beginning around the mid-nineties, with the dawn of the Internet, this dance started getting faster and faster. Today, it’s as if our technology dance partner has gone crazy, shifting and twirling and swooping in so many directions at once we are never sure where we should step next.

So, I believe a key question is: Who is mediating between the technical world of algorithms and the human world of thoughts and feelings? It used to be fairly clear that humans were in the driver’s seat, literally and figuratively. Now, not so much.

I think this is an especially important question for public health. It’s not just a question of what technology can do. The answer to that is already beyond what most of us can even imagine. The better question is: how are we going to integrate these new possibilities into our lives in a way that works well for humans?

The new learning curve

What makes the whole thing even more complex is the fact that both the humans and the algorithms are continuously trying to adapt to each other. For instance, I’ve learned that if I click on one kind of Facebook ad or click-bait “suggested post” I will be deluged with more of the same kind of posts or ads for the next two weeks. So I change my click behavior. But then the algorithms adapt to my changes. It’s a never-ending battle for control, with the algorithms getting smarter while I’m just getting more frustrated.
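To make that adaptation loop concrete, here is a minimal sketch in Python (the topic names and scoring rule are invented for illustration; real feed-ranking systems are vastly more sophisticated) of how a single click can reshape what gets shown next, and how ignoring the same content only slowly undoes it:

```python
from collections import defaultdict

# Hypothetical feed ranker: each click nudges a topic's weight up,
# so similar posts are more likely to be shown again.
weights = defaultdict(lambda: 1.0)

def record_click(topic, boost=0.5):
    """User clicked a post about `topic`; boost that topic's weight."""
    weights[topic] += boost

def record_skip(topic, decay=0.9):
    """User scrolled past without clicking; decay the weight slightly."""
    weights[topic] *= decay

def rank(candidate_topics):
    """Show the highest-weighted topics first."""
    return sorted(candidate_topics, key=lambda t: weights[t], reverse=True)

# One click-bait click...
record_click("celebrity gossip")
# ...and gossip now outranks everything else in the next feed.
print(rank(["local news", "celebrity gossip", "science"]))

# The user notices and stops clicking; only after many skips
# does the system drift back toward the old mix.
for _ in range(10):
    record_skip("celebrity gossip")
print(rank(["local news", "celebrity gossip", "science"]))
```

Notice the asymmetry: one click boosts a topic instantly, while undoing it takes ten deliberate skips, which is roughly what the two-week deluge feels like from the user’s side.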

And computers have the advantage. They are rigorously logical and methodical; humans are not. While we humans excel at some kinds of pattern recognition and prediction, our brain is also hardwired to conserve energy by taking shortcuts. In other words, we generally don’t do any more thinking than seems to be necessary to solve (what we think is) the problem. But sometimes these shortcuts are too short and we confuse ourselves, often by interpreting correlation as causation.

Also, computers have no problem admitting they are wrong, if that is part of the programming. In fact, modern machine learning software actually depends on making mistakes early on so that it can learn the nuances that lead to better outcomes. Humans, however, are full of emotions and ego that cause us to stick with faulty conclusions.
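To illustrate that “learning by being wrong” idea, here is a tiny sketch in Python (a single-weight model fit by gradient descent; the numbers are arbitrary and this stands in for far larger systems) where the model starts out badly wrong and each error is precisely what nudges it toward the right answer:

```python
# Minimal gradient descent: the model starts out wrong on purpose,
# and each error signal is what moves it toward the right answer.
true_slope = 3.0                       # the relationship we want it to learn
data = [(x, true_slope * x) for x in range(1, 6)]

w = 0.0          # initial guess: badly wrong
lr = 0.01        # learning rate

for step in range(50):
    # Mean gradient of squared error: d/dw (w*x - y)^2 = 2*(w*x - y)*x
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad                     # the mistake itself supplies the correction
    if step % 10 == 0:
        print(f"step {step:2d}: w = {w:.3f}")

print(f"learned w = {w:.3f} (true value {true_slope})")
```

There is no ego in the loop: the bigger the error, the bigger the correction, and the model converges precisely because it never argues with the evidence.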

I think all this has profound implications for healthcare. It used to be that if I had a health problem, my doctor would order tests and then sit down with me a week or two later to give me his/her interpretation. The human doctor was the mediator. Now, I get the results in my online chart before the doctor has a chance to help me interpret them. I may worry when I don’t need to (or be relieved when I shouldn’t be).

Or what happens when I get it into my head – via some article I read on the Web – that my symptoms are caused by X, but some software is telling my doctor that the cause is Y? Meanwhile, the doctor has a hunch that the cause is really Z, but he/she knows that ignoring both me and the software could prove disastrous if it ever came to a lawsuit.

And don’t even get me started on the insurance companies, who much of the time seem to be functioning as mediators with a frustrating mix of error-prone human bureaucracy and heartless spreadsheets.

Augmenting (technical) reality with the human touch

There is one thing that humans are still uniquely good at: empathy. Unlike artificial intelligence (so far), humans know what it feels like to be human. So, if we are going to learn to trust what the computer is telling us about our health, it is going to have to be balanced out by human-to-human interaction becoming even more, well … human.

Does that mean that doctors will eventually be reduced to nothing more than empathetic messengers of computer-generated diagnoses? I hope not, but I’ll take that over an unempathetic messenger any day.

Bottom line: The pace of adaptation between humans and computers has become so fast that we simply cannot predict where it is going. It is a complex dance that both we and the technology we unleash are learning on the fly.

Only now the computers are learning much faster than we are.

Which leads to some important questions: Who should be mediating these interactions? Who is really in the driver’s seat? How do we know who (or what) to trust? These are important questions because we are no longer just “using” technology – we are integrating it intimately into our lives.

Which leaves us with perhaps the most important question of all: How will all of this back-and-forth adaptation and deeper human-computer integration ultimately affect our human-to-human relationships? After all, it is how we adapt to each other that still makes our (computer-assisted) lives truly meaningful.

Michael Dennis Moore
Michael is the principal marketing and innovation consultant at Likewhyze and the creator of the Value Story Mapping process.