What is exponential impatience? And how can we teach robots empathy?
Years from now, when I look back on my career as a reporter, this interview will stand out.
Usually, I spend my time talking to people about business, engineering basics and company management. Rarely do I confront existential questions of humanity, and rarer still am I confronted with ideas so unfamiliar that I feel like a tourist in the conversation.
Typically, this happens once in a conversation and stands out as memorable. With Dr. Ben Goertzel, CEO of SingularityNET and Chief Scientist at Hanson Robotics, I was confronted with about five ideas in the space of 15 minutes that were simultaneously obvious and mind-blowing.
This is the skill of a true visionary. I am not referring to someone who calls themselves a visionary because they are good at business. I mean the people who see things differently.
What happens when someone like this speaks is the fantastic intellectual stimulation of discovering an entirely different perspective, but one that is also easy to understand. It is not that the concept is difficult, or that I need to be a genius to figure out the thesis; it is simply that I had never confronted that idea until the visionary poked me in the right direction.
I was lucky enough to speak to Ben at the Singapore Week of Innovation and TeCHnology 2018 last month. He spoke at a panel discussion on the topic of AI and the autonomous world at the Deep Tech Summit organised by SGInnovate.
Here is the interview, edited for brevity and clarity.
During the talk, you spoke about exponential impatience [the idea that as human beings, our technology has created an environment whereby our patience level is shrinking at an exponential speed]. You mentioned this was a concept you have been thinking about. Can we fix it? And should we fix it?
I don’t know if we can fix it. I guess this is a matter of culture and psychology, which is somewhat subtle.
In a way, due to a tight networking of people and machines, the nexus of intelligence is [ascending] to groups and networks of people, rather than individuals. Increasingly, each of us is a neuron in the global brain.
When I started my research career in the 1980s, I could spend six months thinking about one topic without talking to someone else. Now, when you have a new idea, you can immediately find someone on a message board, you can tweet about it, and you get feedback very, very quickly.
So, the individual attention span is less than it used to be, but on the other hand a lot of stuff is still getting done, it’s just getting done by groups and networks of people rather than isolated individuals.
You said evolutionary learning, an important concept in the artificial intelligence sector, started in the 1970s. What’s the difference between then and now?
A big difference now is we have much faster computers, and a lot of memory in the computers. And of course, we have good sensors in cameras and microphones, and we have a lot of data.
Even if the algorithms are basically the same, you can experiment [much more quickly] to refine your ideas.
So multi-layered neural networks, which we call deep neural nets, were around in the 1960s. I used to teach about them in the 1990s.
The algorithms have not changed that much, but people have made small adjustments, and to understand what small adjustments to make, you just have to experiment over and over and over again.
And that’s easier to do when you have really fast computers.
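The point that the algorithms are old and only the compute is new can be sketched concretely. The toy network below is illustrative only (the sizes, learning rate and XOR data are my own choices, not anything from the interview): it trains a two-layer network with plain backpropagation, essentially the algorithm as it has existed for decades.

```python
import numpy as np

# A minimal two-layer neural network trained by backpropagation on XOR.
# The algorithm itself is decades old; what changed is the hardware
# that lets you run experiments like this over and over, quickly.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (squared-error gradient through the sigmoid)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    # gradient descent step
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```

On a 1980s machine, a run like this was an overnight job; today it finishes in a fraction of a second, which is exactly why the same old algorithm can now be tuned by relentless experimentation.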
Yeah, because I think people like me think this AI stuff started in like 2010.
The field of AI was formally founded in 1956. But really, if you look at Norbert Wiener, the founder of cybernetics, he wrote the book Cybernetics in 1948, which basically laid out the theory of AI and its concepts.
When I got my PhD in 1989, all of the key algorithms used in AI were already there and had been there for a decade or two.
The ideas are not new, the math is not new, it’s just we have the resources now to make it all work.
Why are you confident AI will surpass human intelligence?
It is almost obvious. I mean the human brain is a specific combination of molecules. You can easily see it has many limitations.
How many items can you remember in your mind’s eye at a given time? Seven, plus or minus two, definitely not one hundred.
You can’t do calculations as well as a pocket calculator; neither can I. I can’t remember where I was on January 3, 1972.
There are a lot of obvious limitations to this thinking machine in our head.
It seems very obvious that with the right design, you could make a combination of molecules that does thinking better than our brain does. Just like we make airplanes that fly higher than a bird.
[Humans] happened to have evolved to [our current state] but is it the smartest possible system that could be built?
So then that becomes a design and engineering problem.
To me, it seems Artificial Intelligence has been creating tools that are better than humans at one thing, but they are not as good as us at everything. Am I wrong?
That is the status of things right now and I think that is partly driven by business. It is easy to get business funding for doing something in one vertical niche very well. You can make a lot of money that way.
The nature of our economic system has pushed us to the lowest hanging fruit, which is usually “how can I solve this specific problem a little better than people do it?”
Whereas to get to a more general intelligence, you have to take a step back and think of it more like a digital baby. It fundamentally needs to learn about the world and grow its general intelligence bit by bit.
Are there any particularly interesting projects that look like a baby and 20 years from now may be an adult?
I mean, there is no astounding progress.
I would say Google DeepMind was founded with that in mind. My own OpenCog and SingularityNET projects were founded with that in mind.
There are probably one to two dozen significant projects around the world focused on artificial general intelligence, but even those of us who want to do that are still focused on domain-specific AI projects.
It’s like in Hollywood when they have to make the crappy movie to pay the bills to do what they want.
I mean making a domain-specific neural AI is also useful. It is always 80-90 per cent highly specific domain work and 10-20 per cent the actual deep learning you want to do.
But that approach is going to give us self-driving cars, it is optimising supply chains, it is doing a lot of cool things, so you can’t argue with it. Still, it is not like every major government has a Manhattan Project aimed at making human-like general intelligence.
There is a lot of money going into AI, but it is all very narrow, vertical-market-specific stuff. So that’s just a short-time-horizon problem that human beings have.
How do you teach an AI Robot empathy?
Mostly imitation learning. If we have you, me and a robot here, and the robot sees you interact with me, it can observe that interaction and start to imitate the reaction.
If I describe something bad that happened to me and I look very pained and sad, and you react by slowing down, expressing empathy and compassion, maybe matching my movements, then the robot learns that that is what to do.
If I am out on the street with the robot and I see a dying baby bird on the side of the road, and I look concerned and want to help the bird somehow then the robot observes how I am acting.
Then you wire the robot with imitation learning, just like how a child has imitation learning.
That’s probably the only way to do it, because there is no list of human values that is explicitly articulated that means anything. Any list you make is going to have exceptions and weird cases.
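As a rough illustration of the imitation-learning idea described above (not Hanson Robotics’ actual system — the feature encodings and reactions below are hypothetical), a robot can record observed situation-reaction pairs and, in a new situation, reproduce the reaction from the most similar situation it has seen:

```python
# A minimal sketch of imitation learning: the robot records
# (situation, reaction) pairs it observes humans demonstrate, then
# reacts to a new situation by recalling the closest recorded one.

def closeness(a, b):
    """Negative squared distance between two feature vectors."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

class ImitationLearner:
    def __init__(self):
        self.demonstrations = []  # list of (features, reaction)

    def observe(self, features, reaction):
        """Record one human demonstration."""
        self.demonstrations.append((features, reaction))

    def react(self, features):
        """Imitate the reaction shown in the most similar situation."""
        best = max(self.demonstrations,
                   key=lambda d: closeness(d[0], features))
        return best[1]

# Toy features: (sadness_in_voice, smiling, speaking_speed), each 0..1
robot = ImitationLearner()
robot.observe((0.9, 0.0, 0.3), "slow down, express compassion")
robot.observe((0.1, 0.9, 0.8), "smile back, match energy")

print(robot.react((0.8, 0.1, 0.2)))  # a sad, slow situation
```

A real system would learn from raw video and motion rather than hand-made feature vectors, but the principle is the same: no explicit list of rules, just demonstrations and generalisation from them.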
You are based in Hong Kong now, so theoretically, with imitation learning, would the robot be able to adapt to the culture in Hong Kong?
I think there is a strong intersection of human values. There are many differences too, but in the end, almost everyone recoils if they see someone tortured. They feel a burst of happiness when they see something good happen for someone else.
Every culture likes babies, every culture likes cute furry animals. Everyone gets tired when they exert too much energy. There are a lot of common human values.
By the time you get to the point where cultural differences are the main issue, you are in a pretty good position.
A lot of the talk is about big ideas, but what is something that is a bit smaller that you find exciting?
One thing I’m doing that has practical implications is using AI computer vision to analyse pictures of plant leaves to tell whether a plant has a disease that’s going to get worse that season.
That tells the farmer whether or not to put pesticides on the plant.
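The leaf-diagnosis workflow can be sketched in miniature. The snippet below is a hypothetical stand-in, not SingularityNET’s actual model: it scores a leaf image by the fraction of brown pixels as a crude proxy for a trained vision model, and the colour threshold and toy images are invented for illustration.

```python
# A toy stand-in for the leaf-diagnosis idea: score a leaf image by
# the fraction of "brown" pixels, then turn that score into a
# recommendation for the farmer. A real system would use a trained
# computer-vision model instead of this hand-made colour rule.

def brown_fraction(image):
    """image: list of (r, g, b) pixels, each channel 0..255."""
    def is_brown(p):
        r, g, b = p
        return r > 100 and g < 120 and b < 80
    return sum(is_brown(p) for p in image) / len(image)

def disease_risk(image, threshold=0.2):
    """Return (risk score, recommendation) for one leaf image."""
    score = brown_fraction(image)
    action = "apply pesticide" if score > threshold else "monitor only"
    return score, action

# Toy "images": mostly green pixels vs. heavily brown-spotted pixels
healthy = [(30, 160, 40)] * 95 + [(140, 90, 50)] * 5
spotted = [(30, 160, 40)] * 60 + [(140, 90, 50)] * 40

print(disease_risk(healthy))  # low score -> "monitor only"
print(disease_risk(spotted))  # high score -> "apply pesticide"
```

The value for the farmer comes from the decision at the end, not the model in the middle: a photo goes in, and a spray-or-wait recommendation comes out.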
What is interesting is that I went around to a bunch of farms in rural Sichuan province in China and met with farmers to see what diseases their crops get each year.
All of these farmers were like, “Yeah AI, great, bring it on!” The level of awareness of AI and the faith that these farmers put in AI is sort of touching. They were really hoping that the AI doctor would come in and fix all their crop problems.
We can use machine learning for this right now, so we are working to integrate it.
In the AgriStore in rural Sichuan, when the farmer goes in to get plant food and pesticide, they’ll bring in pictures of their plants, upload them into the computer, and some analysis will be made of the odds that the disease will progress that year.
So that’s not as whizzy as a humanoid robot, but it is an example of how AI is infusing every domain of human pursuit.
The post A fascinating interview with SingularityNET CEO Ben Goertzel appeared first on e27.