Cybersecurity AI Is Not That Intelligent…YET

Here’s some simple logic that is not lost on anyone: “You don’t buy a padlock for the stable after the horse is stolen.”

In running my company, I'm often tempted to wag the proverbial finger at clients who put their cybersecurity responsibilities on the back burner. I usually stop short for three reasons: one, I'd be doing a lot of finger wagging; two, my mother did her best to shame me into being better and it never worked. As a result, I find myself shrugging a lot.

As for the third reason, you can rest assured it does not involve my mother. Rather, it involves the cybersecurity industry's consistent approach to threats and attacks. Just like the people who got their horses stolen, we are too often reduced to examining yesterday's attack in an effort to thwart tomorrow's. That's not a hell of a lot different from clients who only consider their data security after it's been breached.

Tomorrow's threats can't be predicted simply by looking at what happened yesterday. In tech, that method of investigation has proven unsound. Vectors of attack change as quickly as technology itself. We're talking about hitting ever-moving targets.

Enter artificial intelligence (AI) and machine learning, touted as a panacea for all future threats. That's a pretty damn bold declaration, I have to say, and despite the surging sentiment, not one I completely agree with. Now that the spooky promise of AI is upon us, and as ubiquitous as the term has become, let's get straight on what it is.

AI is a system that optimizes to increase a variable. To use investing as an example, if AI is given the task to "increase my portfolio value," it iteratively adjusts many other variables to optimize that one target. The size of the dataset the AI has access to determines the number of variables it can compute. Should the AI determine that decreasing one variable increases another, that's when Steven Spielberg starts taking an interest.
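To make that concrete, here is a minimal sketch of optimization in that sense: a toy hill climber nudging hypothetical portfolio weights toward a higher score. The function names and numbers are illustrative assumptions, not any real trading system.

```python
import random

random.seed(0)  # make the example deterministic

def optimize(score, variables, steps=2000, step_size=0.1):
    """Greedy hill climbing: nudge one variable at a time and
    keep the change only when the score improves."""
    best = score(variables)
    for _ in range(steps):
        i = random.randrange(len(variables))
        candidate = variables[:]
        candidate[i] += random.uniform(-step_size, step_size)
        new = score(candidate)
        if new > best:
            variables, best = candidate, new
    return variables, best

# Toy "portfolio value": highest when both weights sit at 0.5.
def portfolio_value(weights):
    return -((weights[0] - 0.5) ** 2 + (weights[1] - 0.5) ** 2)

weights, value = optimize(portfolio_value, [0.0, 1.0])
```

The optimizer knows nothing about finance; it only sees a number to push upward, which is exactly why the set of variables it is allowed to touch matters so much.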

If an airplane being taken out of the air increases the value of a short position I'm taking on Virgin Atlantic and Equifax, and this AI has access to the variable of missile launchers, well, you know the rest. It sounds far out, but the potential is insanely real. Considering that nearly every system is computer controlled, firewalls and encryption included, this would be a piece of cake for an infinitely powered AI. That AI would need to be coupled with massive computing power, which is no longer a supersized mainframe sitting in a remote locale. Consider botnets: malware recruits the CPU power of infected machines to spread itself further. And with cloud technologies, we are now seeing projects like Golem and Elastic offer rentable distributed computing power that can be tapped from a laptop.

When infinite computing capability meets a Google-sized dataset, things could easily get out of hand. Still, while the potential is enormous, the world of data security is relatively finite compared to the examples above. The datasets needed to guard against attack don't have to be DARPA-sized.

AI is software that perceives its environment well enough to identify events and take action in service of a predefined purpose. It is quite good at pattern recognition and anomaly detection within a defined space, which makes it a potentially excellent tool for detecting threats. Machine learning, often paired with AI, is software that infers rules from human input and observed results rather than following explicit instructions. Together, they can become a tool to predict outcomes, though those predictions are still based on past events.
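As a toy illustration of anomaly detection in a defined space, here is a bare-bones statistical detector. The failed-login counts and the 2.5-sigma threshold are made-up assumptions for the example, not taken from any real product.

```python
from statistics import mean, stdev

def find_anomalies(values, threshold=2.5):
    """Flag values more than `threshold` sample standard
    deviations from the mean of the dataset."""
    mu = mean(values)
    sigma = stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Hypothetical daily failed-login counts; the spike stands out.
failed_logins = [12, 9, 11, 10, 13, 8, 11, 250, 10, 12]
suspicious = find_anomalies(failed_logins)  # [250]
```

Real products use far richer models, but the principle is the same: learn what "normal" looks like in a bounded space, then flag what deviates from it.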

That’s a decent primer on some technology that is going to play a big part in our future. At Gotham Cybersecurity, we always analyze things in real time and make efforts to see what’s coming, not what we already saw. One day the tech might catch up to us.

Going forward, this is what I see: the kinks of artificial intelligence will work themselves out, creating tangible applications that eventually provide the prevention we need. I also see a world where people aren’t as caught up in the misconception and hyperbole surrounding AI’s current role in our industry.

For the time being, implementing basic safeguards is a great way for companies to protect vital information, and that should always be priority number one.

Finally, I envision a world where every concern that stores sensitive data has the good sense to have a padlock on the stable. Shrugs.

Trevor Goering – CEO, Gotham Security