We’ve spent a lot of time in this class talking about an AI dystopia: a world where the AI owners control us all through powerful apps that become ubiquitous in our lives.
AI will set our work targets – if we have work to go to. AI will curate our news, our music, our email (not a bad thing), our friends. AI will curate everything.
AI will drive our cars, cook our meals, arrange our travel, turn on our lights, wake us in the morning, and lull us into a state of everlasting security. Security that we can enjoy on our guaranteed basic income.
What can be wrong with that?
There is a potential downside. A super-intelligent AI could take control of everything and enslave us in a dystopia where we exist as entertainment for a vastly superior being. Not likely, right?
Three questions arise as we consider any possible scenario:
- Will a superintelligence (one surpassing human intelligence in every way) ever be developed?
- How could this entity improve or destroy our lives?
- What can we do to control the outcome before it happens?
In its current state, AI is great at responding quickly to queries and returning great answers. Kind of like a Super-Google. Experts have called this kind of AI artificial narrow intelligence (ANI). Artificial general intelligence (AGI) is the next step. AGI is AI that can reason, make decisions, and learn new tasks across many domains, much as a human can.
AGI is the kind of AI that raises the most concern, because it could potentially wreak havoc on our economy. Or it could make all of our lives much better. We’ll see. Few of us really doubt that we will see AGI in the next few decades, so we can wait and see how things play out.
It becomes really scary when we think of the breakthrough that produces artificial superintelligence (ASI). This level of AI operates beyond the bounds of what human intelligence can achieve. As Nobel Prize winner Geoffrey Hinton posits, “If you want to know what it’s like not to be the apex intelligence, ask a chicken”.
Hinton also poses a question that should give us pause:
We’ve never had to deal with things more intelligent than ourselves before. And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing?
That’s a sobering thought. Hinton illustrates what he means with an example:
If you have children, when they’re quite young, one day they will try and tie their own shoelaces. And if you’re a good parent, you let them try and you maybe help them do it. But you have to get to the store. And after a while you just say, ‘OK, forget it. Today, I’m going to do it.’ That’s what it’s going to be like between us and the superintelligences.

There’s going to be things we do and the superintelligences just get fed up with the fact that we’re so incompetent and just replace us.
He believes that we will likely be kept in the same way that we keep pet tigers. He said, “I don’t see why they wouldn’t. But we’re not going to control things anymore.”
At least most of us won’t be around to see it (except maybe Taylor).
Maybe.
When are we likely to see ASI? When asked, Hinton replied:
I think it’s maybe five to 20 years before we get superintelligence. Maybe longer, but it’s coming quicker than I thought.
Jeff Clune, another expert in the field, answered this way:

I definitely think that there’s a chance, and a non-trivial chance, that it could show up this year.
We have entered the era in which superintelligence is possible with each passing month and that probability will grow with each passing month.
Oh well, it’s not all bad. Experts estimate that there is a 30 to 35 percent chance that human beings will be able to maintain control over ASI.
We’ll all have to wait and see, and if the experts are right, most of us will be around to see it.
Reference
https://www.cbc.ca/news/science/artificial-intelligence-predictions-1.7427024
Comments
2 responses to “Who Will Be in Control?”
“Experts estimate that there is a 30 to 35 percent chance that human beings will be able to maintain control over ASI”. That statement gives me hope.
However, hope is not a strategy. We as a species need to be considering how we can embed guardrails that allow us either to remain the more intelligent entity or to exercise control over the entity.
This is the exact thing that terrifies me about AI: that our laziness will lead to reliance, and important skills will be lost or taken over. I’ve noticed a resurgence of homesteading as an ideal in recent social media trends. While I don’t think it’s an obvious result of AI, I think the general atmosphere around AI is inviting it.