How worried should we be about artificial intelligence?
Popular culture is rife with terrifying tales of insane or malevolent AI, from HAL 9000 in “2001” to Skynet in the “Terminator” films and the machines that turn human beings into living batteries in the “Matrix” movies.
Rogue AI is a sci-fi cliche. But is it close to becoming a reality?
Some 1,800 tech entrepreneurs, computer scientists and other experts have put their names to a petition calling for a moratorium on AI development. Elon Musk and Apple co-founder Steve Wozniak are among the signatories.
The latest large language models, or “LLMs,” such as GPT-4, can write computer code, and mischievous hackers have already prompted them to try writing code that would implant copies of themselves onto mobile devices.
If an AI could propagate itself through phones, tablets, internet servers and orbital satellites, the human race could find its communications infrastructure crippled, even if the viral intelligence had no aim beyond its own replication.
A biological virus like the one that causes COVID-19 has no hostile intentions either. But by using our bodies to reproduce itself, a virus can kill, and a networked viral AI would be devastating to any computer within its reach.
If the AI were better at coding than human beings are, our only countermeasure would be to build an even more powerful intelligence.
A science fiction novelist could have fun spinning a story about humans becoming mere bystanders—or collateral damage—in an automated arms race between rival coding machines.
All this is still fiction for now. ChatGPT does not write infallible code, just as it doesn’t give infallible answers to questions about history or literature. Virally replicating and extending an AI is a feat that hasn’t been pulled off yet.
But there is a real risk that AI will develop new technology faster than we can understand or control it.
Last year, the journal Nature Machine Intelligence published a paper reporting that commercial AI systems meant to design better medicines could be repurposed to concoct deadlier toxins instead. Chemical, biological and nuclear weapons programs are not cheap, but AI will bring their costs down dramatically.
What should trouble us as much as AI itself, however, is the horizon beyond merely “artificial” intelligence.
There are many things organisms do that neither software nor hardware can replace, just as there are many things that computers and machines can do that organisms cannot. But what happens when the line dividing the living from the mechanical vanishes?
In February, New Scientist ran the ghoulish headline, “Stuffed dead birds made into drones could spy on animals or humans.” The magazine quoted New Mexico Tech associate professor of mechanical engineering Mostafa Hassanalian saying, “Instead of using artificial materials for building drones, we can use the dead birds and re-engineer them as a drone.”
Scientists at Rice University coined the term “necrobotics” last year to describe their success in hydraulically reanimating a dead spider’s limbs, which could then grip objects delicately like a living arachnid.
The world will not be taken over by zombie spider-cyborgs any time soon. But, as with the taxidermied bird drones, this is a hint of where the use of animal bodies for machines is going.
In the case of frog-derived “xenobots” and mouse or human “mini-brain” computers, it’s hard to say whether machines are using animals or animals are using machines—if it even makes sense to distinguish the two.
Xenobots are tiny robots designed with the aid of artificial intelligence and made out of modified living cells from the African clawed frog Xenopus laevis. They can reproduce organically, though these are very simple chunks of meat that perform various kinds of trivial mechanical work; they’re not, yet, monstrous robo-amphibians.
Mini-brain “organoids” are living networks of neural tissue grown from stem cells. In March, New Scientist reported that researchers at the University of Illinois Urbana-Champaign built a “living computer” out of 80,000 mouse brain cells. Organoid experiments are also carried out with human cells. Sometimes the human mini-brains start growing eyes …
Organoids will allow us to understand the brain better and repair it when it’s damaged. Scientists engaged in this research say it’s unlikely these mini-brains could be conscious. Yet they’re building more complex organoids all the time.
AI will help engineer ever more sophisticated organic brains. And what’s to prevent electronic AI neural networks from working in tandem with organic networks? AI is not alive, but mini-brains are. What do we call it when each is made by the other?
Artificial intelligence also has great therapeutic potential for manipulating human genetics when combined with gene-editing tools such as CRISPR. At first, such refinements will help eliminate disease. But they won’t stop there.
Artificial intelligence is proving difficult enough for human beings to control. If biotechnology and AI continue to advance along their current research paths, however, the living beings charged with managing the machines may themselves have been designed by those machines in the first place.
COPYRIGHT 2023 CREATORS.COM