Paul Martin and Colin Rooke discuss the implications of AI for insurance.
Listen to the full episode here, or read the full transcript below.
Paul Martin:
Welcome to Risky Business, Commercial Insurance with Butler Byers. This is Paul Martin, a business commentator on CKOM, and joining me today is Colin Rooke, the Commercial Risk Reduction Specialist at Butler Byers and the star of this particular program that you get to hear very frequently on weekends throughout Southern Saskatchewan.
Colin, I guess what’s old is new again. We’ll get to that part, but this thing called deepfake, it sounds like a movie title, a not very nice, sinister movie title, but it’s something we really need to be cognizant of, isn’t it? I mean, this thing is jumping up and biting us in a way that no one would’ve expected. We’ve been warned that AI is both positive and negative, and there are insurance implications to it too.
Colin Rooke:
Yeah, absolutely. The term deepfake comes from deep learning and then fake. Pretty simple. But what it means is the ability to alter images, videos, or recordings using AI at such a sophisticated level that it’s almost impossible for even the experts to determine whether a video, image, or audio clip is fake. It has gotten that good. In fact, they use other AI tools to determine if AI-generated deepfakes are in fact fake, because for the most part, humans can’t tell. It’s a growing problem.
We’ve talked a lot about the evolution of cyber on this show, and it’s really important to talk about this term because a lot of the tools and tips that I have given over the years have almost been rendered obsolete. We’ve talked about the prevalence of wire transfer fraud and not working through email alone. I’ve said pick up the phone or have a video chat if you’re stuck at home due to COVID. Make sure you get a first-person message saying, “Yes, go ahead and transfer that.”
Well, now you can’t trust that, because what AI does is learn all the predictive behaviors at such a deep level that it is almost impossible for the person on the other end to recognize that they are having a conversation with another party that is completely fraudulent, despite the fact that they’re watching a video call and, during that call, receiving email confirmation.
You’re on the phone. You’re getting emails. You’re like, “Well, I know what Paul Martin sounds like. During that call, Paul Martin is emailing me. Of course I’m going to go ahead and transfer.” And I’m telling you now that all of that could be fraudulent.
Paul Martin:
Getting scary, isn’t it? I mean, how average people protect themselves is going to be the question that comes up with all of this. But I alluded at the beginning of the program to the idea that what’s old is new again. We’re going to go back to what we consider to be primitive business practices if we have no choice on this. If we can’t trust the digital world, we have to go back to an analog world.
Colin Rooke:
It’ll be interesting to see how the insurance markets look at this. We’ve talked about ways that you can proactively reduce the likelihood of wire transfer fraud. We’ve talked about educating your employees around email phishing scams, and we’ve talked about what spear phishing is: a very direct, targeted approach aimed at one individual rather than a blanket approach. But if the technology is so good that you can’t tell, you can no longer easily look for the tips or warning signs of a fraudulent email. Then what do you do?
One of the answers, rather than saying, “Look, get someone on the phone before you authorize a transfer,” is that you get up out of your chair. You walk, you drive, you take a train, you fly to that person. You look them in the eye and explain what you’d like done. Get a physical cheque from that person and take that physical cheque to the bank. It sounds archaic, but it is one of the ways to make sure that you are not a victim of cyber crime.
Paul Martin:
What we’re talking about here is leading edge. This is a new threat that’s just starting to manifest itself. I guess it’s been there; if you were really plugged into AI, you would’ve maybe understood it. But for those of us who are, I’ll call them, lay people in the digital realm, this is new to us. A, are you hearing that people are getting concerned about it? And B, what rudimentary or preliminary steps are you hearing some people are taking?
Colin Rooke:
Well, just back to the nature of the problem as well, I mean, it’s not even just an internal cyber liability or wire transfer fraud issue. Using AI, if you can get video clips… You can get into a social media account. You can find written text. You can find images. And if you can get some audio, there are now completely false videos of public figures, or of the would-be CEOs of companies that don’t exist, trying to sway public opinion or trying to receive donations for a cause.
And then you get into legal and regulatory issues, and the question of who’s truly at fault here: the company that’s fraudulently being represented, or the cyber criminal itself? We are just at the tip of this thing. A lot of people listening to the show today, I think, will not be completely established or well-versed in AI and all its implications, but I guess you need to get there. We talk about training employees. What steps are businesses taking? Well, one, you have to explain what it is. People need to understand what a deepfake is.
They also need to understand AI and its capabilities and how it’s evolving, further than just, well, it can write a nice note to someone that I don’t want to write, or it may take my research job away because it will comb the internet for me. You’ve got to understand all the capabilities of artificial intelligence; you’ve got to start there. What is it? Then you have to talk about how it’s changing and the implications of that. You also need to talk to your IT providers to say, “Okay, what is out there? What’s changing?”
Even if you look at IT from a break-fix perspective, now that you’re aware of deepfakes, ask what is being done to identify them, and have that conversation. And from there, if you are a victim of fraud, you need to have a response strategy. What will you do? We’ve talked about PR-related risks on this show. What will you do if there are videos out there of your CEO giving false information to the public at large that did not originate from your office?
Paul Martin:
This has got me thinking about the Hollywood strike. I mean, it’s one of the topics in there. Maybe we could explore that when we come back. We’ve got to take a little break. You’re listening to Risky Business, Commercial Insurance with Butler Byers. Paul Martin here, talking today with Colin Rooke, Commercial Risk Reduction Specialist at Butler Byers, about deepfakes, a whole new realm of risk that’s coming at us. You don’t want to miss what we’ve got coming up. We’ll be back right after this break.
Welcome back to Risky Business, Commercial Insurance with Butler Byers. This is Paul Martin, and joining me is Colin Rooke, a Commercial Risk Reduction Specialist at Butler Byers. Colin, before the break, you had mentioned the potential for fake videos and stuff.
As you said that, it tweaked in my mind that this is one of the issues at play in the Hollywood actors’ strike: they’re worried about their images being turned into movies when the actors are never really on the set, because so much of their image is already online or available, already recorded, that AI could rebuild them without them ever being there. They’re questioning whether they get paid for that. This is way more real and more in your face than some hypothetical that’s down the road.
Colin Rooke:
Yeah, it’s a very real concern that they would have. If you think about it, you start with AI. You use an app that can read or analyze the audio of any given show. AI would then be able to replicate or recreate movies, and the dialect would be exact. It really does raise the question: does the actor who sounds and looks like the completely artificial version of themselves receive any compensation for that movie?
Again, back to the issue of deepfakes. When you see very obviously fraudulent videos on YouTube or Reels, on Snapchat, or on TikTok, making fun of, for example, Donald Trump or Kim Jong Un and Vladimir Putin, at the core those are early deepfakes: videos created using images and audio of individuals who were never in that room. But now it’s gotten so sophisticated that the experts cannot tell. I know I’ve said that before, but then you think about the impact on public opinion.
If you can’t tell that a fake video is fake, is it actually fake? Again, back to the nature of this talk and the impact on business: you need to make sure you’re aware of what’s out there. You need to have some idea of the ways you could be vulnerable. You need to make sure you’re up to speed on your IT, but also on your own understanding. And we’ve talked about incident response plans and how, over the years, they’ve really evolved into PR plans.
You need to work, today, on what your response will be should you realize you’ve become a victim of deepfakes. Throughout the years on the show, we’ve talked about all these different threats. Ransomware was the big one, then wire transfer fraud. We’ve talked about social engineering and spear phishing. The new and emerging risk that needs to be on your radar is deepfakes and how they can be used to accomplish all of those things I’ve mentioned, all different types of fraud and scams, in a way that’s almost undetectable.
It really is a scary thought, but it’s so important that we are educating our people to at least be aware of the topic so we can all work together to mitigate the risk.
Paul Martin:
I’m guessing you’re talking to business people who are starting to get their heads around this, asking, “What tactics are they using?” What are you hearing from those you’re talking to? Are they taking steps to protect themselves or to wind this back a bit?
Colin Rooke:
Well, wind it back a bit is right. Again, one of the ways to protect yourself is actually to take a step back in technology. Until the detection is there, until you feel like you really understand the nature of the problem, it’s not a bad idea to say, “You know what? Other than maybe a few very trusted accounts, we may need to take a step back and start writing physical cheques, especially over a certain amount.”
I think you might start to see that from the insurance industry as well. If it’s under $5,000, the insurance companies will pretty well leave you alone. But as the amounts grow, there are limitations in the policy itself by way of coverage if certain steps aren’t followed. I think you’ll start to see that, above a certain amount, they might start asking for more physical solutions.
Paul Martin:
We’ve got maybe two, two and a half minutes left, and I don’t want to overlook one important aspect of this. As you and I are talking about this, I’m sure the insurance companies are too. Are they starting to change their policies and their wordings to take this into account? What are you seeing from the industry that, as a buyer of insurance, I need to know?
Colin Rooke:
That’s a good point. The concern that I have is, you’ve got a new risk, and as it becomes more prevalent, the insurance industry will respond: they’ll amend wordings or create new policies altogether. I mean, 15 years ago, there wasn’t much by way of cyber liability. The first policies ever written are completely archaic today. Where the issue lies is in the policy wording itself.
Because if the industry isn’t working on deepfakes and amending the wording to broaden the definition of cyber crime to include them, will we have customers or policyholders finding themselves in a position where the loss is excluded? Quite simply, there may be nothing in the policy language that suggests it’s included, because of how little is known about that risk.
What I will say is the industry is very good at responding to cyber-related threats, and I’m sure they’re working on it. But it is a concern: as AI and technology evolve, are insurers required to pay? If the technology is so good that no one can verify whether or not the message did in fact come from the person who supposedly sent it, what do you do in that case?
Paul Martin:
Well, this is the stuff that the Elon Musks of the world were warning us about, isn’t it? It’s kind of like two sides of the same coin. It’s like gasoline: it’ll power your car, but it’ll also burn down your house if you don’t use it right. This is a whole new world that we’re all going to have to think about, and maybe the resurrection of snail mail. Who knows? Colin, thank you for sitting in on this today and providing us with this insight.
This is a really fascinating story, and I know we’re going to be talking about it more as we go forward. You’ve been listening to Risky Business, Commercial Insurance with Butler Byers. This is Paul Martin. Thanks for joining us and we’ll talk to you next time.