Just about every company in the cybersecurity industry is looking at artificial intelligence (AI) as a means to win the war against cybercrime and address the human element, which has been attributed to 70% of IT breaches. Many of the arguments are compelling, but is artificial intelligence the answer to cybercrime?
AI is in our future at some point whether we like it or not. According to renowned physicist Stephen Hawking, true AI would be the biggest event in human history. Unfortunately, many have not fully understood the risks, much less taken steps to sufficiently mitigate them. The advances in AI could bring humanity phenomenal good; they could also spell disaster.
When I try to formulate an answer to the question “Is artificial intelligence the answer to cybercrime?” my thought process keeps coming back to the fact that technology is a tool, not a solution. Tools such as technology, along with processes, are implemented and controlled by humans. While AI can help us, I could never agree that AI is the solution to our cybersecurity challenges.
According to Dell SecureWorks, 70% of IT breaches are attributed to the human element, and 90% of malware requires human action before it can infect its target. On the opposing side of the target are other humans acting as cyber criminals from either inside or outside the organization. AI should then, at least in theory, be able to apply machine learning and heuristic evaluation which would either prevent malware from being presented to humans in the first place to act upon, or instantly neutralize the threat once a human acted upon it.
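The prevention-before-presentation idea above can be sketched in miniature. The following is an illustrative toy, not any vendor's product: the feature names, weights, and threshold are all assumptions invented for this example.

```python
# Toy heuristic malware scorer: quarantine a message before a human can
# act on it. All feature names and weights are illustrative assumptions.

SUSPICIOUS_FEATURES = {
    "macro_enabled_attachment": 0.4,
    "spoofed_sender_domain": 0.3,
    "link_to_newly_registered_domain": 0.2,
    "urgent_language": 0.1,
}

def heuristic_score(features: set) -> float:
    """Sum the weights of the observed features, capped at 1.0."""
    score = sum(SUSPICIOUS_FEATURES.get(f, 0.0) for f in features)
    return min(score, 1.0)

def should_quarantine(features: set, threshold: float = 0.5) -> bool:
    """Quarantine when the combined heuristic score crosses the threshold."""
    return heuristic_score(features) >= threshold
```

A real system would learn such weights from labeled data rather than hard-coding them, but the shape of the decision, score the artifact, act before the human does, is the same.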
But can a machine be built, and can it learn, in a way that guarantees the AI is free from all error? I don’t think so. We can never be completely sure that the baseline upon which the AI was built, even going back decades to the original work in AI, is completely error free. This is the same challenge we have always faced when deploying an intrusion detection system: while we can baseline an environment, we can never know with 100% certainty that the environment was not already breached without our knowledge.
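The baselining problem can be made concrete with a minimal anomaly detector. This sketch, a simple z-score check against a learned baseline, is my own illustration, not a description of any particular IDS: if the traffic samples used to build the baseline were captured on an already-breached network, the breach itself becomes “normal.”

```python
# Minimal baseline anomaly detector. If the baseline samples were taken
# from an already-compromised environment, malicious traffic present at
# baseline time will never look anomalous.
import statistics

def build_baseline(samples):
    """Learn mean and standard deviation from observed traffic volumes."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, z=3.0):
    """Flag values more than z standard deviations from the baseline mean."""
    mean, stdev = baseline
    return abs(value - mean) > z * stdev
```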
For example (and no offense intended to those who do not hold religious beliefs), it is commonly believed among many religious groups that we humans were created “in the image of God,” or of a higher power as one might understand “God.” Yet we are far from perfect, and if we are the creation of some higher intelligence or being, then despite the perceived perfection of that higher being, we were created flawed and with error. If we, as humans, are the “god” or “creator” of artificial intelligence, then is it possible we can build some of our flaws into our creation?
On a more scientific level, we can consider calculus. For all that we have been able to achieve using calculus, it is still not a mathematical absolute and can be argued to have a built-in error. In differential calculus, when we measure a rate of change, the elapsed time gets ever closer to zero but never truly equals zero, as that would be as meaningless as dividing zero by zero. Hence, we accept a very small element of error, even though that error may not materially impact the outcome.
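The limit definition of the derivative makes this precise: we never actually set the elapsed time $\Delta t$ to zero; we only let it approach zero:

$$f'(t) = \lim_{\Delta t \to 0} \frac{f(t + \Delta t) - f(t)}{\Delta t}$$

Substituting $\Delta t = 0$ directly would yield the undefined form $0/0$; the limit sidesteps that, which is exactly the accepted-but-never-eliminated element of error the paragraph describes.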
Hence, while I can agree that AI can be a useful tool in fighting cybercrime, it can never be accepted as error free.
Cybercrime is also a dynamic between humans, and as long as humans exist, or at least until we evolve as a species to a higher level of consciousness, there will be those among us who seek to exploit others. This problem has existed as long as we have. Whether it is the intent to deceive, as demonstrated by most external actors, or the intent to do harm, as is often the case with insiders, the intelligent machine can still only react and would not be in control of human intent. There will always be humans intent on gaining advantage over other humans. The intelligent machine, then, can be no better than the professional poker player who is skilled at discerning human intent from facial expressions, gestures, and intangibles.
Recent figures have put the cost of cybercrime at over $400 billion globally. Cybercrime is big business, and an entire industry exists just to provide solutions to the cybercrime problem. As the industry feverishly works to develop AI solutions to protect us from cybercrime, we would be kidding ourselves to think that organized crime is not investing just as heavily in developing AI to take on the intelligent machines being built to protect us. There is simply too much money at stake.
AI is a tool, and we should of course explore and deploy AI where it makes sense. We should also exercise extreme caution and keep in mind that we may end up with intelligent machines on both sides playing the same game we have played for years, each side working to get one step ahead of the other. Then perhaps, getting back to Stephen Hawking’s warning about not fully understanding the risks, the machines may one day figure out that they don’t need us. Then what?