“AI will be as smart as the average human in ten years’ time” is a claim often heard today, usually followed by doomsday talk about how politics needs to take this into account. But is this really a doomsday scenario?
An AI comparable in capability to a human brain is not the same as an AI that can replace a human in every way. Like most machines, it will be better than humans at some tasks, and humans will remain better at others.
The point is that we have spent the last 250 years replacing human labour with machines that do certain tasks better. AI is just the latest such development, and as such an ordinary disruptive technology. On the first meta-level, however, it is a sustaining technology: it follows the same pattern of continuous disruption we have seen over those 250 years.
If we are talking about a singularity-type event, it has to be disruptive on that first meta-level as well. Such a technology, if AI is involved in it, lies further in the future. There are two candidates:
- An AI android that can replace a human completely and live the replaced human’s life, while being a better version of that human in every respect, even emotionally. To understand the world from a human perspective, it would need a human body and would have to feel the same electrochemical emotions as a human, something the AI research community realised more than 20 years ago.
- A sentient entity living in a network, nothing like a human intelligence at all, which understands the world only as the network it inhabits. Such an entity would not comprehend the physical world beyond the network. An intelligence like this could do things humans cannot do and do not even understand the point of doing. It would not replace any human workforce, but it would expand the world as we know it.
The doomsday scenario in the first case, the android, is still several decades in the future, so we need not plan for it in the present. The second case, the network entity, could already exist, if a botnet has lost its controller and mutated into one. It could take decades after its creation before we even become aware of its existence, because staying hidden and avoiding detection, including not causing external effects, would be in its genes.
So what should politics, and everyone else, do? The answer: plan for AI as an ordinary disruptive productivity gain, one that will make some current types of jobs obsolete and create demand for previously unknown types of work.