On Monday 15 April, in preparation for Canada hosting the G7 summit in 2025, the High Commission of Canada in the UK held a roundtable discussion on AI adoption at Canada House in London. The event was an opportunity to examine how artificial intelligence is currently used, how its use is expected to evolve, and what gaps exist in present-day AI regulation.
The facilitator of the discussion was OpenText, a Canadian company specialising in information management and digital transformation. The attendees included company and government representatives from both the UK and Canada. Darwin took part alongside participants from institutions such as Shell, Nestlé, the Digital Catapult, the Canada Pension Plan and the NHS.
Daniela Petrovic, Darwin’s co-founder, spoke at the roundtable about Darwin’s use of AI in insurance modelling. ‘It was a great honour for Darwin to be invited as a participant, and to help inform global policies in the AI arena,’ she said.
Points that arose at the roundtable, and that will later inform G7 discussions, included:
Data concerns
Data privacy and uncontrolled data sharing were among the greatest concerns raised in the discussion.
Because generative AI needs to be trained on huge quantities of data in order to work effectively, it may not be feasible to inspect all the training data before it’s introduced to the AI model. What if personal or sensitive data finds its way into the dataset? And, if an AI model is mistakenly trained on sensitive data, what if someone then requests that information from the model, whether it’s a trade secret or someone’s home address?
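As an illustration of why inspection is so difficult, the sketch below shows the kind of automated screening a team might run over training records before they reach a model. It is a minimal example under stated assumptions, not a production safeguard: the two regex patterns, the PII_PATTERNS table and the screen_record function are hypothetical names invented here, and real personal data takes many forms that simple pattern-matching will miss.

```python
import re

# Hypothetical patterns for two common kinds of personal data.
# Real screening would need far broader coverage (names, addresses,
# identification numbers) and would still miss context-dependent cases.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"(?:\+44|0)\d{9,10}\b"),
}

def screen_record(text: str) -> list[str]:
    """Return the kinds of possible personal data found in one record."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(text)]

records = [
    "Contact me at jane.doe@example.com for details.",
    "The quarterly report is attached.",
]

for record in records:
    hits = screen_record(record)
    if hits:
        print(f"Flagged ({', '.join(hits)}): {record!r}")
```

The gap between what patterns like these can flag and the many forms personal data actually takes in context is exactly why uncontrolled data sharing was treated as such a serious concern at the roundtable.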
In its publication ‘How to use AI and personal data appropriately and lawfully’, the Information Commissioner’s Office recommends using AI only when necessary, on account of privacy concerns: ‘You should assess whether you need to use AI for the context you will deploy it in. AI is generally considered a high-risk technology and there may be a more privacy-preserving and effective alternative.’
Checking and correcting AI output
Human input will still be required to make effective use of AI.
It’s important to recognise that generative AI isn’t truly intelligent: current AI models don’t understand the content they produce. This means that AI output can include ‘hallucinations’: incorrect or invented details that sound plausible based on the material used for training.
Because of this, AI models can’t be left to create material without supervision. If text written using AI is published online without editing, it may contain factual errors; worse, it may then be used to train other AI models, further spreading false information. To avoid issues like this, a human needs to check AI output for accuracy and rewrite it where necessary.
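As a sketch of what that supervision can mean in practice, the example below shows a publishing step that refuses to release AI-drafted text until a human editor has signed it off. The Draft structure and publish function are assumptions made for illustration; any real editorial workflow would be more elaborate.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    """An AI-generated draft that must be approved before publication."""
    text: str
    reviewed: bool = False
    corrections: list[str] = field(default_factory=list)

def publish(draft: Draft) -> None:
    # Refuse to publish anything a human hasn't signed off on.
    if not draft.reviewed:
        raise ValueError("Draft has not been reviewed by a human editor.")
    print(f"Published: {draft.text}")

draft = Draft(text="AI-drafted article text...")
# publish(draft)  # would raise ValueError: no human review yet

draft.corrections.append("Fixed an invented statistic in paragraph two.")
draft.reviewed = True
publish(draft)
```

The point of a gate like this is organisational rather than technical: by making human review a precondition for publication, unchecked AI output can’t slip through by default.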
AI is here to stay
AI is already in use in many corporate settings, and, now that it’s widely available, its introduction can’t be undone. Because of this, it makes sense for discussions about AI to focus on how it can be used responsibly and effectively, rather than on whether it should be used at all.
One way or another, institutions will need to work out how to operate in a world with AI. Even if an organisation opts not to use AI itself, it will need to understand how other people might be using it. This is a subject that also arose at the ABI conference earlier this year: whether an insurer uses AI or not, it will need to be aware of the possibility that people will use AI to make fraudulent claims, for example by generating false images of damaged items.
If governments strike the right balance in AI regulation, and if people are trained effectively in what AI is capable of and how to mitigate the risks involved, we can help to ensure that artificial intelligence is used as a beneficial tool across countless industries.
Darwin Innovation Group is a UK-based company that provides services related to autonomous vehicles and communications. If you’re interested in working with us, take a look at our careers page. If you’d like to know how we can help your organisation make use of autonomous vehicles, contact us. You can also follow us on LinkedIn or Twitter.