The AI singularity is the moment when machine intelligence becomes equal to, or surpasses, human intelligence. It is a concept that visionaries like Stephen Hawking and Bill Gates have believed in for quite a long time. Machine intelligence might sound complicated, but it is simply defined as advanced computing that allows a device to interact and communicate with its environment intelligently.
I must say I’m not sure who coined the term “singularity,” but it seems unnecessarily ambiguous and confusing. I will use it because it seems to be the accepted term for when machine intelligence exceeds human intelligence. However, we need a better word.
The concept of singularity has been around for decades. English mathematician Alan Turing—widely regarded as the father of theoretical computer science and artificial intelligence—experimented with the possibility of such a thing in the 1950s. He came up with his famed Turing test to find out whether machines could think for themselves; the evaluation pits a human against a computer, challenging the system to fool us into thinking it’s actually human itself. The recent advent of highly advanced AI chatbots like ChatGPT has brought Turing’s litmus test back into the spotlight, mainly because machines have already passed that test.
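For readers who want the mechanics spelled out, here is a minimal, hypothetical sketch of the test’s structure. The judge, human_reply, and machine_reply functions are stand-ins you would supply, and the pass threshold is an arbitrary illustration, not Turing’s own criterion.

```python
import random

def run_imitation_game(judge, human_reply, machine_reply, questions):
    """Toy sketch of Turing's imitation game.

    For each question the judge sees two unlabeled answers (one human, one machine)
    and guesses which one came from the machine. The machine "passes" if the judge
    does little better than chance across all rounds.
    """
    correct = 0
    for q in questions:
        answers = [("human", human_reply(q)), ("machine", machine_reply(q))]
        random.shuffle(answers)  # hide which answer came from where
        guess = judge(q, [text for _, text in answers])  # judge picks the suspected machine (0 or 1)
        if answers[guess][0] == "machine":
            correct += 1
    accuracy = correct / len(questions)
    return accuracy, accuracy <= 0.6  # "passes" if the judge is near chance; threshold is illustrative
```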
“The difference between machine intelligence and human intelligence is that our intelligence is fixed, but that is not the case for machines,” says Ishaani Priyadarshini, a postdoctoral scholar at UC Berkeley with expertise in applied artificial intelligence and technological singularity. “In the case of machine intelligence, there is no end to it; you can always increase it, which is not the case for humans.” Unlike our brains, AI systems can be expanded many times over; the real limitation is space to house all of the computing power necessary.
I find that frightening in and of itself. However, when machines start creating their own expansions, that’s when humans are in real trouble.
The bottom line for human civilization, theoretically, is the question of when machines decide that humans are just too inefficient to handle anything and, therefore, should be eliminated, or kept in zoos as historical exhibits. That sounds bizarre, the stuff of science fiction and horror movies, but, logically, it could well happen. The second, equally worrying, question is: are we already on track to singularity, and is there any way to stop that process?
“We have no idea what a super-intelligent machine system would be capable of. We would need to be super-intelligent ourselves,” says Roman Yampolskiy, an associate professor in computer engineering and computer science at the University of Louisville. “You have to be at least as intelligent as the system to be capable of predicting what the system will do . . . if we’re talking about systems which are smarter than humans [super intelligent] then it’s impossible for us to predict what those systems will do,” he says.
I am speculating here, but we will probably be the victims of our own arrogance. We are currently programming the machines, but can we possibly anticipate the inevitable errors and unexpected pathways in that programming that will allow a machine to, from our perspective, “go off the rails” and start programming itself? The answer to that is clearly no. AND, the more we program, and the more complicated the system becomes, the greater the inevitability that machines will have avenues open to them to exploit us. Avenues we created, but have no clue how they were created or even that they exist . . . until it’s too late.
IBM currently estimates that only one-third of developers know how to properly test these systems for any potential bias that could be problematic. To bridge the gap, the company developed a novel solution called FreaAI, which can find weaknesses in machine-learning models by examining “human-interpretable” slices of data. It’s unclear if this system can reduce bias, or dangers, in AI, but it’s clearly a step in the right direction for us humans, so there is hope . . . maybe.
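FreaAI’s internals aren’t public, but the general idea of hunting for “human-interpretable” slices where a model underperforms can be sketched roughly like this. The column names, thresholds, and the find_weak_slices function below are my own illustrative assumptions, not IBM’s actual method.

```python
import pandas as pd

def find_weak_slices(df: pd.DataFrame, feature_cols, min_size=50, gap=0.10):
    """Flag human-interpretable data slices where a model underperforms.

    Expects df to contain a 0/1 'correct' column (1 if the model's prediction was right).
    For each value of each categorical feature, compare slice accuracy to overall accuracy
    and report slices that fall at least `gap` below it. This is a rough illustration of
    slice-based error analysis, not the FreaAI algorithm itself.
    """
    overall = df["correct"].mean()
    weak = []
    for col in feature_cols:
        for value, group in df.groupby(col):
            if len(group) >= min_size and group["correct"].mean() <= overall - gap:
                weak.append({
                    "slice": f"{col} == {value!r}",
                    "size": len(group),
                    "accuracy": round(group["correct"].mean(), 3),
                })
    return sorted(weak, key=lambda s: s["accuracy"])
```

Run on a labeled evaluation set, a report like this would point a developer at the subgroups most likely to hide bias or blind spots, which is roughly the kind of weakness-finding the article describes.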
AI is not currently sentient—meaning it isn’t able to think, perceive, and feel in the way that humans do. Singularity and sentience are often talked about in the same breath, but they are not closely related.
We like to think of machine intelligence as a 21st-century remake of the original Trolley Problem, a famous thought experiment in philosophy and psychology that puts you in the hypothetical dilemma of a trolley-car operator with no brakes. Picture this: you’re careening down the tracks at unsafe speeds. You see five people off in the distance who are on the tracks (certain to be run over), but you have the choice to divert the trolley to a different track with only one person in the way. Sure, one is better than five, but, in making that decision, you make a conscious choice to kill that one individual.
Another possible scenario, a bit closer to home, is the case of “medical AI” being tasked with developing Covid vaccines. The system will be aware that the more people who get infected, the more the Covid virus will mutate—therefore making it more difficult to develop a vaccine for all variants. The system thinks . . . maybe I can solve this problem by reducing the number of people, so the virus can’t mutate so much. A possible result of that train of machine thought is that AI could decide to develop a vaccine that would kill people.
All frightening, but possible, and potentially real.
We will never be able to rid artificial intelligence of its unknown unknowns, since we are the ones programming those errors into it. They are the unintended side effects that we can’t predict because we aren’t super-intelligent like AI.
“We’re really looking at the probability of singularity resulting in a load of rogue machines,” says Priyadarshini. “If this process hits the point of no return, it can’t be undone.”
There are still plenty of unknowns about the future of AI, but we can all breathe a sigh of relief knowing that there are experts around the world committed to reaping the good from AI without any of the doomsday scenarios we might be imagining. We really only have one shot to get it right, and I’m not sure we are bright enough to do that. It may well already be too late.