Spain has established Europe's first artificial intelligence (AI) policy task force, taking a decisive first step toward regulating the promising but controversial technology while many governments remain uncertain about the best way forward.
The Council of Ministers on Aug. 22 approved a Royal Decree to create the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), a task force that will work under the guidance of the Ministry of Economic Affairs and Digital Transformation.
The task force is the first of its kind in Europe and follows the European Union's Artificial Intelligence Act, which seeks to establish a framework for governance and oversight of the growing technology.
The decree cited the "unquestionable" global impact of AI and the rapid advancement the technology has undergone. The AESIA, created as part of the National Artificial Intelligence Strategy, aims to provide a framework under which Spain can continue to develop and deploy AI.
AI policy remains a difficult topic for many governments, which are committed to developing the technology so as not to fall behind other nations but are wary of allowing its unlimited use for fear of abuse.
Leading nations have differed on where to draw that line in the sand, with China reportedly giving the People's Liberation Army (PLA) virtually total freedom to experiment with the tech and determine its own limits, while requiring any new generative AI platform to pass a security check.
Italy took a more hardline stance and banned ChatGPT in March while authorities investigated a number of alleged data breaches, but lifted the ban about one month later.
Tesla CEO Elon Musk on Wednesday told FOX Business' Hillary Vaughn that AI requires a "referee," but he argued that Congress is "not yet" ready to step into that role. Musk met with other tech leaders, including Meta's Mark Zuckerberg, OpenAI CEO Sam Altman and Microsoft founder Bill Gates, on Capitol Hill in Washington, D.C.
"I think this meeting may go down in history as being very important to the future of civilization," Musk said, noting that at one point, Senate Majority Leader Chuck Schumer, D-N.Y., asked everyone in the room to raise their hands if they were in favor of AI regulation. "And I believe almost everyone did. So that's a good sign."
“The sequence of events will not be jumping in at the deep end and making rules. It starts with insight,” he told reporters. “You start with a group formed to create insights to understand the situation. Then you have proposed rulemaking.”
“You’ll get some objections from industry or whatever, and then ultimately, you get sort of a consensus on rulemaking, that rulemaking then becomes law or regulation,” he added.
The U.K., which pledged 100 million pounds ($125.8 million) toward buying up NVIDIA chips to compete with AI development leaders like the U.S. and China, has tasked its institutions with creating similar frameworks.
The Financial Conduct Authority of the U.K. has started consulting the Alan Turing Institute and other legal and academic institutions to better understand AI and help shape its decisions regarding any such framework.
The United Nations in July held its first formal discussion on AI, addressing both military and non-military applications and the “very serious consequences for global peace and security.”
U.N. Secretary-General António Guterres has repeatedly urged member states to form an oversight body similar to the International Atomic Energy Agency, as the U.N. lacks the power to create such a group on its own. He noted, however, that the organization can lay out recommendations, which it plans to publish by the end of the year.
Reuters contributed to this report.