Public trust, regulation and killer robots: Why 2021 is going to be an important year for the AI debate


Public trust in the technology is on a knife-edge, experts believe.

This article is part of Business Futures, a series tackling the key issues shaping Irish business today and in the future. If you’re part of that future, you might want to know that TheJournal.ie has partnered with UCD Smurfit School to offer one reader a full scholarship for the Modular Executive MBA (EMBA) worth more than €30,000. Find out more here.

BUILDING PUBLIC TRUST in artificial intelligence is something that its advocates are hugely concerned with — and with good reason.

In recent memory, few technologies have aroused quite as much public suspicion. Debates around AI frequently throw up questions of privacy, security and, chiefly, whether robots are coming to steal your job or mow you down in an automated car.

“I’m not going to say that you shouldn’t be worried about AI,” says Patricia Scanlon, chief executive officer of Dublin-based SoapBox Labs. “I think it’s the case that there are issues to be worried about.”

Scanlon’s company is one of a small but ever-growing cohort of Irish companies on the frontline of the AI phenomenon.

In a nutshell, SoapBox uses AI to develop children’s speech recognition products, which it licenses to educational technology (EdTech) businesses, toy companies and the entertainment sector.

So for Scanlon, much of the concern about AI stems from confusion about what exactly the technology is and how it’s used. She believes the risks associated with the relatively new technology have to be calculated and assessed within individual industries.

We can expect some progress in this direction in 2021, which is shaping up to be an important year for the AI debate.

Three years after its adoption, the European Union will conduct its first review of its Coordinated Plan on Artificial Intelligence, a programme aimed at increasing investment and fostering development of the technology across the bloc.

Also in 2021, the EU will publish long-awaited legislation on AI regulation, which industry experts say will be key to increasing public trust in the technology.

“I think that’s what some of the EU framework is about: classifying AI and which [usages] are riskier. Autonomous driving is the classic one where we can all understand safety issues around it.

“But it doesn’t mean it shouldn’t be done,” she says. “It should be regulated.”

Mystification

Knowing what we’re even talking about when we talk about AI is difficult at times. This hasn’t helped the sense of mystification around the technology, says Cormac O’Neill, chief executive at Webio.

He believes there’s been too much hype around AI — “like any form of new technology”, he says — “in terms of what it can and cannot achieve” currently.

Webio, his Dublin-based company, uses machine learning to develop tools that businesses use to speak with their customers.

“We’re completely focused on conversations around the area of credit collections of payments,” he explains.

Webio’s systems analyse data from conversations between the business and its customers “to try and help make those conversations run more effectively and efficiently for not just the enterprise — who is our customer, the one we serve — but also their customer; a person who may be in financial difficulty.”

So for O’Neill, AI isn’t just chatbots and automated business procedures.

“I think you have some people who confuse ordinary, everyday stuff as being AI. In fact it actually isn’t. Things like simple automated chatbots, in my view, is not AI. I think AI goes much deeper than that,” he says.

Scanlon says there are really three kinds of AI.

What O’Neill refers to as “ordinary, everyday stuff”, Scanlon describes as rule-based systems: “something that’s pretty much hard-coded and that people are just applying rules to”, like your average customer service chatbot.

“And then you have another form of AI, which I would call ‘applied AI’,” she explains. “That’s where you can take publicly available data and you can maybe use somebody else’s AI services to generate your model or your insights and you use that in your business and sell it on.”

And then there’s the more fundamental use of AI, which is what companies like SoapBox and Webio are doing.

We’ve had to collect our own data; we had to build our own technology. It’s all proprietary. We’re not using Microsoft; we’re not using Google — nobody else’s technology… We had to build that infrastructure ourselves and our AI sits behind that. We have to teach it how children speak and how to respond.

“It’s a case of building from the ground up”, Scanlon explains.

Silver lining

Building from the ground up is something that a lot of businesses will have to do in 2021 after the turbulence of the last 11 months.

After a huge drop-off in investment across the board, business leaders of all stripes are hoping that the trend reverses in 2021.


AI advocates in particular are hoping that the tide turns. Happily, O’Neill says investment seems to be recovering already.

“There certainly was tangible slowdown [in investments by businesses in AI] last year for about six months. But now it’s starting to come back for sure,” he says.

For both Scanlon and O’Neill, another silver lining is that the pandemic has actually settled some of the debates around the application of AI in business and workplace settings.

“The pandemic has certainly made companies look up and question their digital strategies,” O’Neill says.

You had thousands of companies literally overnight having to get their workforce from working in an office to working out of everyone’s homes or small numbers in the office. So I think from a company point of view, it made enterprises realise, ‘Hang on a second. We really need to engage with our digital strategy here.’

Answering difficult questions about how to keep on top of customer engagement at the outset of the pandemic actually forced some companies to find solutions in new technologies like AI.

O’Neill highlights Eir as an example.

“There’s an example of where intelligent use of AI automation of conversations and handling of customer support queries could have a tangible impact for the consumer and also for the people who work in these contact centres,” he says.

Against the backdrop of school closures and lockdowns, AI and machine learning-based products have also proved incredibly useful to the education and EdTech sector — one that is “notoriously slow to move”, Scanlon says, because of sales cycles and regulation. 

A good example, she says, is a SoapBox product that was due to launch in September 2021 but was actually brought to market last December: “a hugely accelerated pace. And that’s never happened in this industry. But it was completely Covid-driven”.

More widespread use of AI and, crucially, “good-use examples” are what Scanlon believes will ultimately build public trust in the technology.

Incoming EU regulation will also help but there’s a balance to be struck, she argues.

AI is much too important to be solely in the hands of large tech multinationals, she says, and there’s a risk that by “over-regulating”, the EU could stifle progress, with SMEs frozen out of the market because of “costly processes”.

But ultimately, Scanlon says, “You want to regulate. I really think it’s important because then you can build trustworthy AI. People can know that this was built in the EU and so you can trust us.

“People are on a knife-edge with that; with not trusting AI… So we want to bring people back to that so they know you can trust it if it’s built in the right way.” 
