Thanks to the advent of artificial intelligence (AI), cybersecurity professionals have to rethink how they approach modern threats. Machine learning is one option, since it can help today's security solutions learn to be more effective against advanced attacks. On the other hand, there's nothing stopping attackers from taking advantage of artificial intelligence as well.
If you think about it, this makes a lot of sense: computers work much faster than humans and are less prone to human error. Hackers have already found AI to be effective for deploying phishing attacks. In a 2016 study by cybersecurity company ZeroFOX, an AI called SNAP_R sent spear-phishing tweets at a rate of about 6.75 per minute and tricked 275 of 800 users into believing they were legitimate messages. By comparison, a staff writer at Forbes could only churn out about 1.075 tweets per minute and fooled just 49 of 129 users.
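To put those figures side by side, here's a quick back-of-the-envelope calculation in Python. It uses only the numbers quoted above; the "potential victims per minute" metric is our own illustrative derivation, not something reported in the ZeroFOX study.

```python
# Back-of-the-envelope comparison using the figures reported above.
# The "potential victims per minute" metric is our own derivation for
# illustration, not a number from the ZeroFOX study itself.

ai_tweets_per_min = 6.75        # SNAP_R's reported output rate
ai_hits, ai_targets = 275, 800  # users fooled vs. users targeted

human_tweets_per_min = 1.075    # Forbes staff writer's rate
human_hits, human_targets = 49, 129

for label, rate, hits, targets in [
    ("SNAP_R (AI)", ai_tweets_per_min, ai_hits, ai_targets),
    ("Forbes writer", human_tweets_per_min, human_hits, human_targets),
]:
    success_rate = hits / targets
    victims_per_min = rate * success_rate
    print(f"{label:14s} success rate {success_rate:.1%}, "
          f"~{victims_per_min:.2f} potential victims per minute")
```

Run as written, the sketch shows that the per-message conversion rates were actually similar (roughly 34 percent versus 38 percent), but the AI's raw throughput made it several times more productive overall, which is exactly what makes automation attractive to attackers.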
In a more recent development, IBM has used machine learning to create programs capable of breaking through some of the best security measures available. Of course, this also means we'll eventually have to deal with malware powered by artificial intelligence, assuming it isn't already being leveraged somewhere.
IBM's project, DeepLocker, showcased how video conferencing software can be weaponized: the malicious code hidden inside it stays dormant until the target's face is detected in an image, at which point the payload activates. Lead researcher Marc Ph. Stoecklin and the IBM team had this to say about these kinds of attacks: "This may have happened already, and we will see it two or three years from now."
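To make the concealment technique easier to picture, here is a minimal, deliberately harmless sketch in Python of the general idea behind this class of attack: the payload stays encrypted, and the decryption key is derived from an attribute (such as a recognized face) that only appears when the intended target is present. This is not IBM's code; the fake recognize_face function, the toy XOR "cipher", and the string standing in for the payload are all placeholder assumptions for illustration.

```python
# Conceptual sketch of the concealment technique DeepLocker demonstrated:
# the payload stays encrypted until an AI model recognizes a specific target,
# and the recognized identity itself is used to derive the decryption key.
# This is NOT IBM's code. The fake recognize_face() model, the XOR "cipher",
# and the harmless string standing in for the payload are placeholders.

import hashlib

def recognize_face(frame: bytes) -> str:
    """Placeholder for a face-recognition model; returns an identity label."""
    return "alice" if b"alice" in frame else "unknown"

def derive_key(identity: str) -> bytes:
    """Derive a key from the recognized identity, so decryption only succeeds
    when the intended target is actually seen."""
    return hashlib.sha256(identity.encode()).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'encryption' standing in for a real cipher."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# A harmless string plays the role of the hidden payload, locked under the
# key derived from the intended target ("alice" in this toy example).
payload = b"this harmless string stands in for the hidden payload"
locked = xor_bytes(payload, derive_key("alice"))
payload_checksum = hashlib.sha256(payload).digest()  # detects a correct unlock

for frame in (b"ordinary webcam frame", b"webcam frame containing alice"):
    key = derive_key(recognize_face(frame))
    unlocked = xor_bytes(locked, key)
    if hashlib.sha256(unlocked).digest() == payload_checksum:
        print("Target recognized: payload unlocked")
    else:
        print("No match: payload remains opaque ciphertext")
```

The point of this design, and the reason it worries defenders, is that anyone inspecting the file without the triggering input sees only opaque ciphertext: there is no key present and therefore nothing meaningful to reverse engineer.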
Other researchers have demonstrated that AI can be used in cyberattacks as well, going so far as to build working attacks with open-source tools.