Artificial Intelligence Will Be Assisting Cybercriminals

To effectively manage the risks that cybercriminals pose to your business, it is important to anticipate the kinds of attacks your business may soon have to deal with. Given how accessible artificial intelligence (AI) and related tools have become, we predict that cybercriminals will likely use AI to their advantage in the very near future.

We aren’t alone in believing so, either. A recent study examined 20 ways that AI could be integrated into cybercrime to determine where the biggest threats would lie.

Here, we’re looking at the results of this study to see what predictions can be made about AI-enhanced crime over the next 15 years. Here’s a sneak preview: Deepfakes (fabricated videos of celebrities and political figures) will be very believable, which is very bad.

The Process
To compile their study, researchers identified 20 threat categories from academic papers, current events, pop culture and other media to establish how AI could be harnessed for crime. These categories were then reviewed and ranked during a conference attended by subject matter experts from academia, law enforcement, government and defense, and the public sector. These deliberations resulted in a catalogue of potential AI-based threats, each evaluated against four considerations:

  • Expected harm to the victim, whether in terms of financial loss or loss of trust.
  • Profit that could be generated by the perpetrator, whether in terms of capital or some other motivation. This can often overlap with harm.
  • An attack’s achievability, as in how feasible it would be to commit the crime in terms of required expense, technical difficulty and other assorted obstacles.
  • The attack’s defeatability, or how challenging it would be to overcome, prevent or neutralize.

Splitting into smaller groups, the participants used q-sorting to rank the collection of threats into a bell curve distribution. Less severe threats and attacks fell to the left of the curve, while the biggest dangers were placed to the right.

When the group reconvened, the individual distributions were combined into a final, consolidated ranking.
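To make the consolidation step concrete, here is a minimal sketch of how individual expert rankings might be combined. The threat names, the 1–5 severity bins and the simple averaging are our own illustrative assumptions; the study’s actual q-sort methodology is more involved.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical expert placements: each expert assigns every threat a
# severity bin from 1 (lowest) to 5 (highest). In a real q-sort, a
# fixed quota forces most threats toward the middle bins, producing
# the bell-shaped distribution described above.
expert_sorts = [
    {"forgery": 1, "fake reviews": 2, "data poisoning": 3,
     "tailored phishing": 4, "deepfakes": 5},
    {"forgery": 2, "fake reviews": 2, "data poisoning": 3,
     "tailored phishing": 5, "deepfakes": 5},
    {"forgery": 1, "fake reviews": 3, "data poisoning": 3,
     "tailored phishing": 4, "deepfakes": 5},
]

# Consolidate: collect each threat's bins across all experts,
# then sort by mean severity so the biggest dangers land on the right.
scores = defaultdict(list)
for sort in expert_sorts:
    for threat, severity in sort.items():
        scores[threat].append(severity)

consensus = sorted(scores.items(), key=lambda kv: mean(kv[1]))
for threat, bins in consensus:
    print(f"{threat}: mean severity {mean(bins):.2f}")
```

With these made-up numbers, forgery ends up on the low end of the consolidated curve and Deepfakes on the high end, mirroring the pattern the study reports.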

How Artificial Intelligence Cooperates with Criminality
Crime is, in and of itself, a diverse concept. A crime can be committed against many different targets and for many different motivations, and its impact on victims is just as varied. Bringing AI into the picture, whether in practice or simply as an idea, only introduces another variable.

Having said that, some crimes are much better suited to AI than others. Sure, we have fairly advanced robotics at this point, but that doesn’t mean using AI to build assault-and-battery bots is a better option for a cybercriminal than a simple phishing attack. Not only is phishing considerably easier to pull off; there are also far more opportunities to profit from it. Unless a crime serves a very specific purpose, AI seems most effective in the criminal sense when it is used repeatedly and on a wide scale.

This has also made cybercrime an all-but-legitimate industry. When data is just as valuable as any physical good, AI becomes a powerful tool for criminals and a significant threat to the rest of us.

Professor Lewis Griffin of UCL Computer Science, one of the authors of the study we are discussing, said, “As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to identify what these threats might be and how they may impact our lives.”

The Results of the Study
When the conference concluded, the assembly of experts had generated a bell curve ranking the 20 threats, breaking each down by the severity of the four considerations listed above. Threats of similar severity were grouped together on the curve, so the results neatly split into three categories: low threats, medium threats and high threats.

Low Threats
As you might imagine, the crimes ranked as low threats offered little value to the cybercriminal: they create little harm, bring little profit, and are both difficult to pull off and easy to defeat. In ascending order, the conference ranked the low threats as follows:

  1. Forgery
  2. AI-assisted stalking and AI-authored fake reviews
  3. Bias exploitation to manipulate online algorithms, burglar bots and evading AI detection

Medium Threats
Overall, these threats leveled out: for most of them, the four considerations canceled each other out, providing no clear advantage or disadvantage to the cybercriminal. The threats included here were as follows:

  1. Market bombing (manipulating financial markets through targeted trading patterns), tricking facial recognition software, blocking people’s access to essential online services (online eviction) and using autonomous drones for smuggling or to interfere with transport.
  2. Learning-based cyberattacks, fraudulent services sold under the guise of AI (snake oil), data poisoning by injecting false data into training sets, and hijacked military robots.

High Threats
Finally, we come to the AI-based attacks that the experts were most concerned about as sources of real damage. These broke down as follows:

  1. AI-authored fake news, blackmail on a wide scale and disruption of systems normally controlled by AI.
  2. Tailored phishing attacks and weaponized driverless vehicles.
  3. Audio/visual impersonation, also referred to as Deepfakes.

A Deepfake is a digital recreation of someone’s appearance, making it seem as though they said or did something they didn’t, or were present somewhere they weren’t. You can find plenty of examples of Deepfakes of varying quality on YouTube. Viewing them, it is easy to see how inflammatory and damaging a well-made Deepfake could be to someone’s reputation.

Don’t Underestimate Any Cyberattack
Now that we’ve gone over these threats and how much practical danger each really poses, it is important to remember that any of them could damage a business in some way, shape or form. We also can’t fool ourselves into thinking these threats require AI to be carried out. Human beings could pull off most of them on their own, which makes them no less of a threat to businesses.

It is crucial we keep this in mind as we work to secure our businesses while we operate them.

As more business opportunities can be found online, more threats have followed them. Keeping your business protected from them, whether AI is involved or not, is crucial to its success.

Advisors Tech can help you keep your business safe from all manner of threats. To find out more about the solutions we can offer to benefit your operations and their security, give us a call at 844.671.6071.