Rise of the machines for cyber defense: Artificial intelligence to augment IoT security amidst growing attack vectors
Today’s security teams are tasked with protecting critical embedded, IT, and business systems from a growing number of cyber threats, some of which can mutate to expose vulnerabilities and evade traditional defense mechanisms. In this interview, Amir Husain, Founder and CEO of SparkCognition, addresses the shortcomings of traditional security technologies against advanced attacks such as Stuxnet, and explains how artificial intelligence (AI) can augment the expertise of security professionals working with limited resources.
With all the attack vectors in the Internet of Things (IoT), what is the biggest challenge security teams face?
HUSAIN: The challenge is enormous and actually has two dimensions. First, attacks are becoming more sophisticated and the likelihood is increasing that an attack that has never been seen before will target physical infrastructure. The second dimension, of course, is that the volume of threats is increasing tremendously. Automated tools are being used by hackers to develop and mutate threats, which means that an automated enemy is already engaged in battle with human incident response teams. In order to address this dual challenge of rising complexity and burgeoning quantity, traditional blacklist or signature-based approaches simply won’t suffice. And finding enough security talent to manually deal with these phenomena is also harder than ever before. New tools, techniques, and strategies have to be brought forward.
How can cognitive computing, in other words, artificial intelligence, be applied to augment traditional security technologies such as blacklisting and whitelisting?
HUSAIN: Blacklisting and signatures are good protection against already seen, pre-categorized threats that are known to be malicious. The issue is that with the rate at which new and unknown threats and malevolent sources/destinations are materializing, it’s hard to work off of a blacklist no matter how frequently it is updated.
Stuxnet is a classic case of a previously unseen attack against cyber-physical infrastructure that found its way into a secure, air-gapped facility. That is to suggest that while conventional IT defenses such as air gapping, firewalling, and the like are not to be ignored, they cannot be relied upon exclusively, particularly when it comes to cyber threats that translate into physical consequences. Protecting against threats like these is not about matching a signature, but about watching each connected asset and ensuring that it is functioning as expected at a physical level. If you can do that, then regardless of what vector an attack employed or what it is doing to harm the equipment – causing it to rev up beyond a safe level, for example, as Stuxnet did – the end result of this threat can be detected and remedial action can be taken before damage is done.
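The idea of watching an asset at the physical level can be illustrated with a minimal sketch (not SparkCognition's implementation; the safe RPM band and sensor trace below are hypothetical): whatever vector an attack used, driving the equipment outside its safe operating envelope is detectable.

```python
# Hypothetical sketch: flag any sensor reading that falls outside the
# asset's safe physical envelope, independent of the attack vector.
SAFE_RPM = (800, 1200)  # assumed safe operating band for this asset

def check_envelope(readings, low, high):
    """Return (index, value) pairs for readings outside [low, high]."""
    return [(i, r) for i, r in enumerate(readings) if not (low <= r <= high)]

# The last two spikes simulate a Stuxnet-style overspeed condition.
rpm_trace = [1000, 1010, 990, 1005, 1630, 1702, 995]
alerts = check_envelope(rpm_trace, *SAFE_RPM)
print(alerts)  # -> [(4, 1630), (5, 1702)]
```

A fixed envelope like this is the simplest possible check; in practice the expected behavior would be learned from the asset's own history rather than hard-coded.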
AI and cognitive capabilities step in to augment the security analyst by reasoning like he or she would, but at machine scale. It’s not about matching a previously known pattern, but about looking at myriad different clues that raise suspicion about a certain behavior and then having the ability to research that behavior autonomously, the way a human security researcher does. This can include reading about threats via comprehension of natural language, or “understanding” a suspicious binary by performing AI-powered static analysis. As a matter of fact, our artificially intelligent security platform, SparkSecure, does both of these things. We’ve invested a lot in developing algorithms that enable automated model building, a capability that allows sensor data and physical observations to be automatically translated into a predictive model of the underlying asset. Variations can be observed and potentially disastrous outcomes can be predicted ahead of time. The nice thing about this technology is that it guards against malicious threats and natural failure equally well.
Beyond this, it doesn’t merely suggest that something might be a threat, it also assists the incident response professional by clearly identifying the evidence in support of that conclusion.
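In miniature, automated model building plus evidence-backed alerting might look like the sketch below. This is an illustrative toy, not SparkSecure's actual algorithm: a statistical baseline is learned from observations of the asset behaving normally, and each alert carries the supporting evidence (the reading and its deviation score) for the analyst to review.

```python
import statistics

def build_baseline(normal_readings):
    """'Automated model building' in miniature: learn the expected
    behavior (mean and spread) from normal observations of the asset."""
    mu = statistics.mean(normal_readings)
    sigma = statistics.stdev(normal_readings)
    return mu, sigma

def detect(readings, mu, sigma, k=3.0):
    """Flag deviations and attach supporting evidence (value and
    z-score), so the alert explains itself to the analyst."""
    evidence = []
    for i, r in enumerate(readings):
        z = (r - mu) / sigma
        if abs(z) > k:
            evidence.append({"index": i, "value": r, "z_score": round(z, 1)})
    return evidence

mu, sigma = build_baseline([100, 101, 99, 100, 102, 98, 100, 101])
print(detect([100, 99, 140, 101], mu, sigma))
```

A real system would use far richer models than a mean and standard deviation, but the principle is the same: deviations from a learned physical baseline are flagged whether they stem from an attack or from natural equipment failure.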
What sort of implications does this type of technology have on existing infrastructure? Does it require major architectural rework to integrate with already-deployed tools and systems?
HUSAIN: Many clients cannot host their data in the cloud for privacy, security, and compliance reasons. As such, we support hybrid deployments. Infrastructure can run in the cloud or be implemented via virtual or physical appliances in on-premises private clouds.
With regard to integration, depending on the type of security (cyber, physical, or both), we can integrate with everything from security information and event management (SIEM) systems and logs on distributed file systems to data historians for sensor data and data acquisition systems like those from National Instruments.
We also integrate with IBM Watson to enable functionality that we refer to as in-context remediation. In other words, when we find a threat, the massive security corpus we have trained Watson on can be employed to answer any specific questions the human operator has regarding what the threats and issues are and how they can be resolved. In some cases, custom equipment or remediation documentation may need to be added to the Watson corpus, which is also possible.
Let’s be honest, are we eliminating jobs here? Is there a point in the not-too-distant future where these platforms will operate autonomously and simply email alerts to a CTO when they recognize certain behaviors?
HUSAIN: SparkSecure has an automated capability that can not only detect threats, but block them autonomously by integrating with firewalls and operating system (OS) configurations. But the reality is that when we deploy, customers want to approve such actions before they are put into use. We think there will be an element of human approval in filtering these autonomous security actions for quite some time.
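The human-approval element described above can be sketched as a simple review queue (a hypothetical illustration, not SparkSecure's interface): the AI proposes blocking actions, but nothing is enacted until an analyst signs off.

```python
from dataclasses import dataclass

@dataclass
class BlockAction:
    source_ip: str
    reason: str
    approved: bool = False

class ResponseQueue:
    """Hypothetical human-in-the-loop gate: autonomously detected
    threats become *proposed* actions that await analyst approval."""
    def __init__(self):
        self.pending = []
        self.executed = []

    def propose(self, action):
        self.pending.append(action)

    def approve(self, index):
        action = self.pending.pop(index)
        action.approved = True
        # In a real deployment this step would push a rule to the
        # firewall or adjust OS configuration; here we just record it.
        self.executed.append(action)

q = ResponseQueue()
q.propose(BlockAction("203.0.113.7", "beaconing to known-bad domain"))
q.approve(0)
print([a.source_ip for a in q.executed])  # -> ['203.0.113.7']
```

Removing the approval step would make the pipeline fully autonomous, which is exactly the switch customers are not yet ready to flip.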
The other side, of course, is that we simply don’t have enough cyber security experts and professionals. It’s not as if our autonomous security AI is stepping in and causing people to be relieved of their jobs; more often, it’s assisting small teams who simply can’t keep up with the volume of threats directed at them. Why are these teams small? Because, as I said, we simply don’t have enough cyber security pros trained up. If you factor in the needs of corporations and government – particularly in the context of cyber warfare – cyber security engineers and professionals need have no fear of losing their jobs to AI for the foreseeable future. AI will be a great ally for them.
We are at the cusp of an exciting time when Internet-connected things, both big and small, are about to be integrated with autonomous, software-powered intelligence. This will result in amazing things that come pretty close to our conception of magic: Self-driving cars are just an early preview; autonomous passenger drones such as those demonstrated by the Chinese company Ehang will be our magic carpets; increasingly automated warehouses and factories run by robots such as those built by Kuka, our Aladdin’s lamps; and inexpensive, collaborative, multi-purpose desktop fabricators like MakerArm, Santa’s elves. All this is coming our way soon. But as the physical and the digital meld together, cyber threats can and will have real-world consequences. Autonomous cars can be hacked, drones can be crashed, and robots commandeered. Perhaps the most important element in fulfilling the promise of AI-powered IoT is a secure infrastructure that ensures the benefits of this brave new world outweigh the risks. That is precisely the goal we’ve committed ourselves to.