When we think of cybercrime, most people’s minds go to one of two places: either the annoying, misspelled emails that are so obviously scams, or the Hollywood-style hacks in which a criminal manages to overcome the best defenses a government can muster.
In truth, there is a middle ground that many people don’t realize exists: cybercrimes carried out with the help of artificial intelligence, to varying degrees of success. In fact, a newly published study analyzed twenty cybercrimes that might incorporate AI to target victims more effectively.
Let’s go over a few takeaways from this study to see what the data can tell us about AI-enhanced crime as it is projected over the next 15 years.
Spoiler alert: Deepfakes are predicted to be really, really bad news.
The Research Process
To determine the largest threats that AI could play a part in, researchers identified 20 threat categories present in academic papers, news, current events, and even pop culture. Over a two-day conference, representatives from academia, law enforcement, defense, government, and the public sector debated and analyzed these threats to create a catalogue of crimes that AI could enable, rated on four considerations:
- Expected harm to the victim, whether in terms of financial loss or loss of trust.
- Profit that could be generated by the perpetrator, whether in terms of capital or some other motivation. This can often overlap with Harm.
- How achievable the threat would be for the perpetrator to carry out.
- The attack’s defeatability, or how challenging it would be to overcome, prevent, or neuter.
Divided into independent groups, participants ranked these attacks into a bell-curve distribution via Q-sorting, with less severe threats falling to the left and the worst of the worst falling to the right.
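To picture that ranking step, here is a minimal sketch of sorting threats by the mean of the four considerations and forcing them into a bell-curve arrangement of columns. The threat names come from the article, but the ratings are invented for illustration; they are not the study’s data.

```python
# Hypothetical sketch of the Q-sort ranking step. Each threat gets four
# ratings (harm, profit, achievability, defeatability); these numbers are
# invented for illustration and are not the study's data.

threats = {
    "forgery":           (1, 1, 2, 1),
    "burglar bots":      (2, 1, 2, 2),
    "snake oil":         (3, 3, 4, 2),
    "data poisoning":    (3, 3, 3, 3),
    "tailored phishing": (4, 4, 5, 4),
    "deepfakes":         (5, 5, 4, 5),
}

def mean(ratings):
    return sum(ratings) / len(ratings)

# Sort from least to most advantageous for the criminal.
ranked = sorted(threats, key=lambda name: mean(threats[name]))

# Force a bell-curve shape: few threats in the outer columns, more in the middle.
bin_sizes = [1, 2, 2, 1]
columns, start = [], 0
for size in bin_sizes:
    columns.append(ranked[start:start + size])
    start += size

for i, column in enumerate(columns, 1):
    print(f"column {i}: {', '.join(column)}")
```

Threats that land in the same column are considered on par with one another, which is exactly how the experts’ bell curve groups them.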
The Relationship Between AI and Criminal Activity
Crime, as a concept, is remarkably diverse. Not only can a crime be committed against a considerable assortment of targets, but the motivations behind it and the impacts on its victims are equally varied. The introduction of artificial intelligence (whether practical or conceptual) adds another variable to the equation.
Of course, AI is much more applicable to some forms of crime than others. While robotics has come a long way from its origins, AI is still a better tool for phishing than it would be for assault and battery, which, in our computing-centric modern world, makes it a very effective tool for cybercriminals to harness. Furthermore, the kinds of crimes that AI is most effective at enabling can be repeated ad infinitum and ad nauseam.
As a result, the techniques behind these cybercrimes can be bartered, shared, and sold. Since data and information are considered just as valuable as physical goods, this makes AI-powered cybercrime a significant threat.
As one of the study’s authors, Professor Lewis Griffin of UCL Computer Science, said, “As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”
The Researchers’ Results
By the end of the conference, the assembled experts had created a bell curve of the 20 assorted threats that they had identified, with the mean values of the four considerations above defined for each threat in terms of whether they were advantageous for the criminal responsible. Each column in the bell curve contained threats that were considered on par with one another. As a result, these AI-enabled threats could be broken down into three categories: low-level, moderate, and high-level.
The low-level threats generally offered few benefits for the criminal: they would cause little harm and bring small profits, while being neither very achievable nor particularly hard to defeat. In ascending order, these threats included forgery; then AI-assisted stalking and certain forms of AI-authored fake news; and finally bias exploitation (the malicious use of platform algorithms), burglar bots (small remote drones with enough AI to assist with a break-in by stealing keys or opening doors), and evading detection by AI systems.
The moderate threats turned out to be generally more neutral, with the four considerations averaging out to be neither good nor bad for the criminal, apart from a few outliers that still balanced out. These eight threats were divided into two columns of severity. The first column contained market bombing (where financial markets are manipulated via trade patterns), tricking face recognition, online eviction (blocking someone’s access to essential online services), and autonomous attack drones for smuggling and transport disruption.
The second column in the moderate range included learning-based cyberattacks, which essentially boil down to an artificially intelligent distributed denial-of-service attack. This column also featured snake oil, where fake AI is sold as part of a misrepresented service. Data poisoning and military robots rounded out this group, as the injection of false data into a machine learning program and the takeover of autonomous battlefield tools could both raise serious concerns.
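Of these, data poisoning is easy to demonstrate in miniature. The sketch below is a hypothetical illustration, not anything from the study: a toy one-dimensional “spam score” classifier learns a threshold by brute force, and an attacker who injects high-scoring samples mislabeled as benign drags that threshold out of place.

```python
import random

random.seed(0)

# Hypothetical illustration of data poisoning (invented data, not from the
# study): a message is flagged as spam when its score exceeds a learned
# threshold.

def train_threshold(samples):
    """Brute-force the threshold with the fewest errors on labeled samples."""
    best_t, best_err = 0.0, len(samples) + 1
    for t in sorted(score for score, _ in samples):
        err = sum((score >= t) != is_spam for score, is_spam in samples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def accuracy(t, samples):
    return sum((score >= t) == is_spam for score, is_spam in samples) / len(samples)

# Clean training data: benign messages score low, spam scores high.
clean = [(random.uniform(0.0, 0.45), False) for _ in range(50)] + \
        [(random.uniform(0.55, 1.0), True) for _ in range(50)]

# The attacker injects high-scoring samples falsely labeled as benign.
poison = [(random.uniform(0.55, 1.0), False) for _ in range(60)]

held_out = [(random.uniform(0.0, 0.45), False) for _ in range(50)] + \
           [(random.uniform(0.55, 1.0), True) for _ in range(50)]

t_clean = train_threshold(clean)
t_poisoned = train_threshold(clean + poison)

print(f"clean model accuracy:    {accuracy(t_clean, held_out):.2f}")
print(f"poisoned model accuracy: {accuracy(t_poisoned, held_out):.2f}")
```

The poisoned samples push the learned threshold upward, so genuine spam sails under it; real attacks on machine learning systems follow the same principle at far greater scale.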
Finally, plenty of threats were ranked as very concerning by the teams of experts. Crimes like disrupting AI-controlled systems and the more inflammatory forms of AI-authored fake news were joined by wide-scale blackmail; tailored phishing (what we usually describe as spear phishing) and the use of autonomous vehicles as weapons ranked just above those.
As we referenced above, the threat that ranked as most beneficial to the criminal across all four considerations was the use of audio/visual impersonation, more commonly referred to as deepfakes.
Deepfakes are the result of using computer programming and artificial intelligence to digitally recreate an individual’s appearance with great accuracy, enabling someone to make it look as though a person said something they never said or appeared someplace they have never been. YouTube is rife with examples of varying quality, but it is easy to see how a well-made deepfake could be damning to someone who is targeted maliciously.
Of course, just because some threats, like deepfakes, are so much more impactful than others doesn’t mean that you can ignore the rest. In fact, the opposite is true. While having someone literally put words in your mouth is obviously harmful, so is having an assortment of negative reviews shared online, whether they were generated by AI or not.
The importance of keeping this in mind cannot be overstated.
In an increasingly online world, business opportunities are largely migrating to the Internet. As a result, you need to ensure that your business is protected against online threats of all kinds—again, regardless of whether AI is involved.
Seamróg Technology Solutions is here to help. To learn about our cybersecurity solutions and the best practices that we can introduce to your staff, reach out to us at (717) 827-7400.