Written by Roark Pollock and presented by Ziften CEO Chuck Leaver
If you study history, you will find numerous examples of severe unintended consequences arriving alongside new technology. It often surprises people that new technologies can be put to dubious purposes as well as the positive ones for which they were brought to market, but it happens with great regularity.
Consider train robbers using dynamite ("You think you used enough dynamite there, Butch?") or spammers using email. More recently, the use of SSL to hide malware from security controls has become common precisely because the widespread legitimate use of SSL makes the technique effective.
Since new technology is routinely appropriated by bad actors, we have no reason to believe this will not also be true of the new generation of machine learning tools that have reached the market.
How will these tools be misused? There are several ways attackers might turn machine learning to their advantage. At a minimum, malware writers will test their new malware against the new class of advanced threat protection products, modifying their code until it is less likely to be flagged as malicious. The effectiveness of protective security controls always has a half-life because of adversarial learning, and an understanding of machine learning defenses will help attackers proactively degrade those defenses. One example: an attacker floods a network with fake traffic with the intention of "poisoning" the machine learning model being built from that traffic. The attacker's goal is to trick the defender's machine learning tool into misclassifying traffic, or to generate so many false positives that the defenders dial back the fidelity of the alerts.
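The poisoning idea described above can be illustrated with a toy example. The sketch below uses a deliberately naive anomaly detector (flag any connection larger than the learned mean plus three standard deviations); the traffic values, feature choice, and detector are all illustrative assumptions, not any real product's logic.

```python
# Hypothetical sketch of a data-poisoning attack against a naive
# statistical anomaly detector. All numbers are illustrative.
from statistics import mean, stdev

def train_threshold(samples, k=3.0):
    """Learn a 'normal' upper bound: mean + k standard deviations."""
    return mean(samples) + k * stdev(samples)

def is_flagged(value, threshold):
    """Flag anything above the learned baseline as anomalous."""
    return value > threshold

# Clean baseline: typical connection sizes (KB) observed on the network.
clean_traffic = [10, 12, 9, 11, 10, 13, 8, 12, 11, 10]
clean_threshold = train_threshold(clean_traffic)

# A 500 KB exfiltration attempt stands out against the clean model.
assert is_flagged(500, clean_threshold)

# Poisoning: before training, the attacker floods the network with
# inflated but innocuous-looking traffic, dragging the learned
# baseline and its variance upward.
poison = [400, 450, 500, 480, 520] * 4
poisoned_threshold = train_threshold(clean_traffic + poison)

# The same 500 KB transfer now slips under the learned threshold.
assert not is_flagged(500, poisoned_threshold)
```

The same principle applies to far more sophisticated models: if the attacker can influence the training data, the attacker can shift what the model considers "normal".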
Machine learning will also likely be used as an attack tool. For instance, some researchers predict that attackers will use machine learning techniques to refine their social engineering attacks (e.g., spear phishing). Automating the effort it takes to customize a social engineering attack is especially troubling given how effective spear phishing already is, and the ability to mass-customize these attacks is a powerful economic incentive for attackers to adopt the techniques.
Expect breaches of this type that deliver ransomware payloads to increase dramatically in 2017.
The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and to increase operational tempo. While the technology will increasingly become a standard component of defense-in-depth strategies, it is not a magic bullet. It should be understood that attackers are actively working on evasion techniques against machine learning based detection solutions while also using machine learning for their own offensive purposes. This arms race will require defenders to increasingly perform incident response at machine speed, further exacerbating the need for automated incident response capabilities.
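"Incident response at machine speed" can be pictured as a detection verdict driving a containment action directly, with no analyst in the loop for the first step. The sketch below is a minimal, hypothetical example; the event fields, the connection-count heuristic, and the quarantine action are all assumptions made for illustration.

```python
# Minimal sketch of machine-speed automated response: a detection
# verdict immediately triggers containment instead of queuing an
# alert for a human analyst. All fields/thresholds are illustrative.

def classify(event):
    """Toy detector: flag hosts making an unusual number of
    outbound connections (e.g., beaconing or scanning)."""
    return "malicious" if event["outbound_connections"] > 100 else "benign"

def respond(event, quarantined):
    """Automated playbook: quarantine the host the moment the
    detector fires."""
    if classify(event) == "malicious":
        quarantined.add(event["host"])

quarantined = set()
events = [
    {"host": "ws-01", "outbound_connections": 4},
    {"host": "ws-02", "outbound_connections": 350},  # beaconing burst
]
for event in events:
    respond(event, quarantined)

print(quarantined)  # only ws-02 is contained
```

In practice the detector would be a trained model rather than a threshold, and the response action would carry safeguards against the false-positive flooding described earlier, but the shape of the loop (detect, decide, act, all automated) is the point.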