World leaders in technology are uniting to establish a common set of guidelines on the use of artificial intelligence and rein in the potential for misuse.
The Global AI Council, created as part of a World Economic Forum summit in San Francisco, will focus not just on establishing standards for how AI should and shouldn’t be applied across fields, but on making those standards mesh among world powers, particularly the U.S. and China.
The goal of connecting disparate governments is arguably best exemplified through the council’s leaders — Microsoft President Brad Smith and Chinese AI expert Kai-Fu Lee.
Specifically, according to a statement from the World Economic Forum, the council hopes to establish channels of communication between its partners on best practices and case studies, as well as to address what it calls ‘governance gaps’ — presumably areas where regulation has yet to keep up with potentially harmful technology.
As noted by MIT Technology Review, one particular area that will likely be a flashpoint for regulatory and ethical guidelines surrounding AI is surveillance.
AI-enabled surveillance technology, spurred by an increase in the availability and sophistication of facial recognition algorithms, has already garnered significant attention from lawmakers, advocacy groups, and law enforcement in the U.S.
While critics say that the technology encroaches on privacy and civil liberties, law enforcement agencies have adopted the tool around the country to help identify potential suspects, using it to scan pictures against a database of known criminals.
As some in the U.S. work to regulate the use of AI-backed facial recognition software, China has applied its use broadly through a highly advanced and unprecedented state-sponsored surveillance network.
China’s systems span tens of millions of cameras and have been deployed readily to track its ethnic minorities.
This rift in ideology alone illustrates why the Global AI Council will have its hands full in coalescing a set of shared values, according to Jack Clark, policy director at OpenAI, a San Francisco-based company, who spoke to MIT Technology Review.
‘Different cultures have different values, and AI is a technology that can encode values,’ he said.
‘I think it’s going to be challenging at first to agree on things like ‘What values should we encode into a system?’ from a global perspective.’
The World Economic Forum isn’t the only organization looking to establish a common set of principles around AI.
This month the Organization for Economic Co-operation and Development (OECD) also released a set of principles surrounding AI that it hopes will safeguard human rights while continuing the technology’s growth and development.
It’s worth noting that none of the principles, which were adopted by 36 OECD member states, are legally binding — there are currently no laws, international or otherwise, explicitly regulating the use of AI.