As the battle continues between allowing AI development to proceed efficiently and regulating it to ensure safety, some AI researchers say that imposing strict restrictions on AI while the field is still in its infancy will only hurt its development. Others, however, believe that AGI (artificial general intelligence) may be less than a decade away and that artificial intelligence must be approached very carefully. The ongoing debate among scientists, researchers, and AI experts raises questions about what AI regulation should look like and whether full safety can ever be achieved if AGI becomes a reality.
Are We Years Away From AGI?
While AI can perform a variety of tasks, some experts argue that today's systems are still relatively narrow. In their view, the path to AGI, a system able to understand and learn any intellectual task that a human can, remains long, with little fundamental progress since the first breakthroughs in the 1960s.
“No one has created anything that’s anything like the capabilities of human intelligence,” said Neil Lawrence, a professor at the University of Cambridge and the former machine learning director of Amazon. “These are simple algorithmic decision-making things.”
“People say what if we create a conscious AI and it’s sort of a free will,” said Lawrence. “I think we’re a long way from that even being a relevant discussion.”
That said, not everybody shares Lawrence's optimism. Others believe we may be closer than we think to producing machines far more capable than humans. Once machines reach the point of "technological singularity", the argument goes, an age of rapid technological change and dramatic consequences for humanity will follow. While Lawrence claims we are nowhere near this point, others say it may happen by 2030.
Experts Worried Such An Extreme Intelligence Cannot Be Controlled
An Oxford University team warned: “Such extreme intelligence could not easily be controlled (either by the groups creating them or by some international regulatory regime)…the intelligence will be driven to construct a world without humans or without meaningful features of human existence. This makes extremely intelligent AIs a unique risk, in that extinction is more likely than lesser impacts.”
Another person widely known for his skepticism toward AI is Elon Musk, the CEO of Tesla. Musk has said: "We need to be super careful with AI. Potentially more dangerous than nukes."
“I’m increasingly inclined to think there should be some regulatory oversight [of AI], maybe at the national and international level,” he added.
Musk, Hawking, And Chomsky Urging UN To Ban Weaponized AI
Musk was one of more than 20,000 signatories of an open letter on AI published at the 2015 International Joint Conference on Artificial Intelligence. In the letter, signed by Musk, Hawking, and Chomsky, as well as many of the most prominent robotics and AI researchers, the United Nations was urged to ban the development of weaponized AI, which, the signatories warned, could evolve "beyond meaningful human control".
“Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA won’t make flying safer. They’re there for good reason,” Musk said.
Challenges With Effective AI Regulations
The problem for regulators is that, given the speed of some technological breakthroughs, we don't know how far away we are from conscious AI. It could be a few centuries or a few decades, but it could also be just a few years.
Regulators worldwide are carefully considering how to balance the development of AI against minimizing its risks. There is also the larger question of whether AI should be regulated as a whole or divided into specific areas.
“For it (legislation) to be practically useful, you have to talk about it in context,” said Lawrence, adding that policymakers should identify what “new thing” AI can do that wasn’t possible before and then consider whether regulation is necessary.
The most developed AI regulations are in the European Union. Early last year, the EU issued a draft strategy on developing and regulating AI. A few months later, in October 2020, the European Parliament published recommendations on what AI rules should entail.
“High-risk AI technologies, such as those with self-learning capacities, should be designed to allow for human oversight at any time,” the European Parliament said. The Parliament added that a top priority is figuring out how to disable the self-learning capacities of AI if they become dangerous.
For now, the topic remains a source of constant discussion between experts who say that AI can threaten us and those who say we don't need to worry about it.