French Defense Minister Florence Parly insisted in an op-ed that the country would not allow advanced military AI to become fully autonomous and end up making life-and-death decisions without human input.
Parly came out strongly against “lethal autonomous weapon systems that some call ‘killer robots’” in an opinion piece published by Defense News.
She said France “refuses to entrust the decision of life or death to a machine that would act fully autonomously and escape any form of human control,” and will not send such machines into combat.
“Such systems are fundamentally contrary to all our principles… Terminator will never march down the Champs-Elysees on Bastille Day,” she wrote.
The minister previously voiced concern over the prospect of fully autonomous robots on the battlefield in April. Recently, France set up a government ethics committee to monitor the development of military AI.
Writing for Defense News, Parly also stressed that advanced military AI must stay out of the hands of “irresponsible” states and non-state actors.
The rapid development of AI technology has propelled questions over its ethical use to the forefront, especially in life-and-death situations in the midst of war.
Last year, Google declined to renew its work on the controversial Project Maven, which would have allowed the Pentagon to enhance the targeting capabilities of its combat drones, after thousands of the company’s employees revolted, arguing that the research went against Google’s core values.
Amazon faced similar scrutiny over its bid to work on a military cloud computing network, known as JEDI, for the Pentagon.