Google CEO calls for regulation of AI to protect against deepfakes and facial recognition


The chief executive of Google has called for international cooperation on regulating artificial intelligence technology to ensure it is ‘harnessed for good’.

Sundar Pichai said that while regulation by individual governments and existing rules such as GDPR can provide a ‘strong foundation’ for the regulation of AI, a more coordinated international effort is ‘critical’ to making global standards work. 

The CEO said that history is full of examples of how ‘technology’s virtues aren’t guaranteed’ and that with technological innovations come side effects.

These range from internal combustion engines, which ushered in the era of personal travel in the 19th century but also caused more accidents, to the internet, which has helped people connect but also made it easier for misinformation to spread.

Pichai said ‘nefarious’ uses of facial recognition and misinformation on the internet, such as deepfakes, are examples of the negative consequences of AI. 

These are lessons that teach us ‘we need to be clear-eyed about what could go wrong’ in the development of AI-based technologies, he said.   

‘Companies such as ours cannot just build promising new technology and let market forces decide how it will be used,’ he said, writing in the Financial Times.

‘It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.

‘Now there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it.’  

Pichai pointed to Google’s AI Principles, a framework by which the company evaluates its own research and application of technologies.

He said the list of seven principles helps Google avoid bias, test for safety and make the technology accountable to people, such as consumers.

The company also vows not to design or deploy technologies that cause harm – such as autonomous weapons or surveillance that violates internationally accepted norms.

To enforce these principles, Google is testing AI decisions for fairness and conducting independent human rights assessments of new products. 

Mr Pichai, who was also made CEO of Google’s parent company Alphabet last month, said that international alignment will be critical to ensure the safety of humanity in the face of AI.

‘We want to be a helpful and engaged partner to regulators as they grapple with the inevitable tensions and trade-offs.

‘While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone,’ wrote Pichai.

‘We offer our expertise, experience and tools as we navigate these issues together.’

Existing rules such as the General Data Protection Regulation (GDPR) can also serve as a strong foundation for individual governments to enforce regulation of technologies, he said.

However, Pichai’s company does not have an entirely clean record in this regard and the first step for Google will be heeding its own advice.

Last year, French data regulator CNIL imposed a record €50 million fine on Google for breaching GDPR.

The company also had to suspend its own facial recognition research programme after reports emerged that its workers had been taking pictures of homeless black people to build up its image database. 

Some critics believe the burden of responsibility to control AI – and prevent an era of independently-thinking killer robots – ultimately lies with companies like Google. 

Last year, for example, a Google software engineer expressed fears about a new generation of robots that could carry out ‘atrocities and unlawful killings’.

Laura Nolan, who previously worked on the tech giant’s military drone initiative, Project Maven, called for the ban of all autonomous war drones, as these machines do not have the same common sense or discernment as humans.

‘What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed,’ said Nolan, who is now a member of the International Committee for Robot Arms Control.

‘There could be large-scale accidents because these things will start to behave in unexpected ways,’ she explained to the Guardian. 

The Campaign to Stop Killer Robots has already penned an open letter to Pichai urging his company not to engage in Project Maven, a US Department of Defense project that’s developing AI for military drone strikes. 

While many of today’s drones, missiles, tanks and submarines are semi-autonomous – and have been for decades – they all have human supervision.

However, a new crop of weapons being developed by nations like the US, Russia and Israel, called lethal autonomous weapons systems (LAWS), can identify, target, and kill a person all on their own, despite no international laws governing their use. 

Consumers, businesses and independent groups alike all fear the point where artificial intelligence becomes so sophisticated that it can outwit or be physically dangerous to humanity – whether it’s programmed to or not.  

National and global AI regulations have been piecemeal and slow to enter into law, although some advances are being made. 

Last May, 42 countries adopted the first set of intergovernmental policy guidelines on AI, including Organisation for Economic Cooperation and Development (OECD) member countries such as the UK, the US, Australia, Japan and South Korea.

The OECD Principles on Artificial Intelligence comprise five principles for the ‘responsible deployment of trustworthy AI’ and recommendations for public policy and international co-operation.

But the Principles don’t have the force of law, and the UK has yet to enact a concrete legal regime regulating the use of AI.

A report from Drone Wars UK also claims that the Ministry of Defence is funding multiple AI weapon systems, despite not developing them itself. 

As for the US, the Pentagon released a set of recommendations on the ethical use of AI by its Department of Defense last November.

However, both the UK and the US were reportedly among a group of states – also including Australia, Israel and Russia – that spoke against legal regulation of killer robots at the UN last March.
