
Cybersecurity: How to Avoid Being Blinded by Artificial Intelligence

The ever-growing threat vector driven by digitization and the proliferation of IoT devices, the rising impact and effectiveness of attacks, and the shortage of skilled resources to identify and mitigate threats are the main challenges most organizations face today. This has led security technology firms to look for ways to reduce the workload of security analysts and improve the effectiveness of identifying malicious code and abnormal activity at earlier stages of the attack kill chain. Artificial intelligence and machine learning are the obvious choices for these technology firms.

To maintain the security posture and reduce the impact of an attack, it is important to identify patterns in user and entity behavior, identify malicious or unwanted code being injected into your network, and detect abnormal behavior and malicious activity at earlier stages of the attack kill chain. This was previously done by writing rules and signatures, a technique that is time-consuming and effective only against known malicious code, vulnerabilities, and activities; a toy example of such a signature scan follows below. Machine learning algorithms can perform these tasks far more efficiently.
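
To make the contrast concrete, a toy signature scan might look like the sketch below. The signatures themselves are hypothetical, and note how a trivial change to the malicious input evades the hand-written pattern:

```python
# Toy illustration of the old rule/signature approach: hand-written
# patterns that only catch byte sequences analysts already know about.
# Both signatures below are hypothetical examples.
import re

KNOWN_BAD_SIGNATURES = [
    rb"\x4d\x5a\x90\x00.{0,64}EVILPAYLOAD",        # hypothetical PE byte signature
    rb"powershell\s+-enc\s+[A-Za-z0-9+/=]{100,}",  # long base64-encoded command
]

def signature_scan(blob: bytes) -> bool:
    """Return True if any known signature matches; novel variants are missed."""
    return any(re.search(sig, blob, re.DOTALL) for sig in KNOWN_BAD_SIGNATURES)

print(signature_scan(b"powershell -enc " + b"A" * 120))  # True: known pattern
print(signature_scan(b"powershell -ENC " + b"A" * 120))  # False: trivial case change evades
```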

There are two broad types of machine learning algorithms: supervised learning and unsupervised learning. Supervised learning needs labeled datasets to train on, for example, code samples labeled as benign or malicious; a minimal sketch of such a classifier follows below. Most security technology firms use these algorithms in their products, which also helps them sell by taking advantage of the hype around artificial intelligence.
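
As a minimal sketch of supervised classification in this context, assume files have already been reduced to numeric feature vectors. The features, data, and model below are hypothetical stand-ins, not any vendor's pipeline:

```python
# Minimal sketch of supervised malware classification on hypothetical,
# pre-extracted numeric features (e.g., entropy, import count, section
# count, all scaled to [0, 1]). The data and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((1000, 3))            # one feature vector per file
y = (X[:, 0] > 0.7).astype(int)      # toy rule standing in for analyst labels (1 = malicious)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)            # learn from the labeled samples

print("holdout accuracy:", clf.score(X_test, y_test))
```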

The Problem – False Sense of Security

Once tools with artificial intelligence are deployed, they can create a false sense of security among security analysts. This is a problem for the following reasons:

  • Adversaries learned their lessons long ago and change their malicious code very frequently. Attacks are becoming subtler, and adversaries try to blend their activity into the targeted organization's normal traffic.
  • Technology firms face deadlines for releasing new products and updates; in this hurry, there is a chance the algorithms are not trained on all of the labeled data.
  • As stated in my previous blog “Five Security Dynamics That Need to be Re-looked”, adversaries can compromise an algorithm by tampering with its labeled training data so that malicious code passes through without generating an alert (see the poisoning sketch after this list).
  • Adversaries can study a security tool's algorithms and modify their malicious code so that it is no longer identified as malicious while still serving its purpose.
  • Last but not least, adversaries are also studying AI and weaponizing it to their advantage. IBM Research developed DeepLocker to better understand how several existing AI models can be combined with current malware techniques to create a particularly challenging new breed of malware. This class of AI-powered evasive malware conceals its intent until it reaches a specific victim, unleashing its malicious action only once the AI model identifies the target through indicators such as facial recognition, geolocation, and voice recognition.
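
The tampering risk in the third bullet can be illustrated with a toy label-poisoning experiment, reusing the hypothetical feature setup from the classifier sketch above. This is an illustration of the concept, not an attack on any real product:

```python
# Toy label-poisoning experiment: flip the labels of half the malicious
# training samples to "benign" and compare detection before and after.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.random((1000, 3))
y = (X[:, 0] > 0.7).astype(int)                     # 1 = malicious, 0 = benign

poisoned = y.copy()
malicious_idx = np.where(y == 1)[0]
flipped = rng.choice(malicious_idx, size=len(malicious_idx) // 2, replace=False)
poisoned[flipped] = 0                               # attacker tampers with the labels

clean_model = RandomForestClassifier(random_state=0).fit(X, y)
poisoned_model = RandomForestClassifier(random_state=0).fit(X, poisoned)

probe = X[malicious_idx]                            # known-malicious samples
print("clean model detection rate:   ", clean_model.predict(probe).mean())
print("poisoned model detection rate:", poisoned_model.predict(probe).mean())
```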

“Humans are susceptible to social engineering. Machines are susceptible to tampering. Machine learning is vulnerable to adversarial attacks. Researchers have been able to successfully attack deep learning models used to classify malware to completely change their predictions by only accessing the output label of the model for the input samples fed by the attacker.” – Holly Stewart, Jugal Parikh and Randy Treit, Microsoft
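
The label-only attack the Microsoft researchers describe can be illustrated with a toy loop in the same hypothetical setup: the attacker repeatedly perturbs a malicious sample and re-queries the model, seeing nothing but the output label, until the verdict flips. A minimal sketch, not a real-world attack:

```python
# Toy label-only (black-box) evasion loop: the attacker sees only the
# model's output label and randomly perturbs a malicious sample until
# the verdict flips to benign. Illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
X = rng.random((1000, 3))
y = (X[:, 0] > 0.7).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

sample = np.array([0.95, 0.5, 0.5])                 # initially classified malicious
evasive = sample.copy()
for attempt in range(500):
    if model.predict(evasive.reshape(1, -1))[0] == 0:
        print(f"verdict flipped to benign after {attempt} queries")
        break
    # Label-only feedback: try a random small perturbation and re-query.
    evasive = np.clip(sample + rng.normal(0, 0.15, size=3), 0, 1)
```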

The Solution

At Black Hat USA 2018, Microsoft's Holly Stewart and her team warned that if a security product relies on a single master algorithm and that algorithm gets compromised, the product will not generate alerts for the malware adversaries develop.

The layered approach we already take when designing security architecture should also be applied to the ML algorithms inside security tools. If a compromised upper-layer algorithm misses malicious code or abnormal activity, a lower layer can still catch it.

The best example of this is Microsoft Windows Defender. Windows Defender Antivirus uses a layered approach to protection: tiers of advanced automation and machine learning models evaluate files in order to reach a verdict on suspected malware.
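
A minimal sketch of the layered idea follows; it is illustrative only and not Windows Defender's actual implementation. Several independently trained models each evaluate a sample, and the sample is flagged if any layer flags it, so a single compromised layer cannot silence the alert on its own:

```python
# Minimal sketch of a layered ML verdict pipeline (illustrative only,
# not Windows Defender's actual implementation). Each layer is an
# independently trained model; a sample is flagged if ANY layer calls
# it malicious, so one compromised layer cannot suppress the alert.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.random((1000, 3))
y = (X[:, 0] > 0.7).astype(int)

layers = [
    LogisticRegression().fit(X, y),                       # fast, cheap first pass
    RandomForestClassifier(random_state=0).fit(X, y),     # deeper second opinion
    GradientBoostingClassifier(random_state=0).fit(X, y),
]

def verdict(sample: np.ndarray) -> str:
    """Flag the sample if any layer classifies it as malicious."""
    flags = [m.predict(sample.reshape(1, -1))[0] for m in layers]
    return "malicious" if any(flags) else "benign"

print(verdict(np.array([0.9, 0.2, 0.4])))   # malicious
print(verdict(np.array([0.3, 0.2, 0.4])))   # benign
```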

Security professionals need to get familiar with AI, machine learning, and data science. This will help them evaluate the effectiveness of a tool's algorithms during the proof-of-concept phase. As part of ongoing operations, they should also perform integrity assessments of the tool's algorithms, as mentioned in my previous blog “Impact of Artificial Intelligence & Machine Learning on Cyber-security Career”. One simple way to frame such an assessment is sketched below.
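
The sketch assumes you maintain a trusted probe set of labeled samples outside the product's update path; all names, data, and thresholds are illustrative:

```python
# Minimal sketch of a recurring integrity assessment: score the deployed
# model against a trusted probe set kept outside the product's update
# path, and raise an alarm if detection drifts below the PoC baseline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
X = rng.random((1000, 3))
y = (X[:, 0] > 0.7).astype(int)
deployed_model = RandomForestClassifier(random_state=0).fit(X, y)

probe_X = rng.random((200, 3))                 # trusted, independently labeled samples
probe_y = (probe_X[:, 0] > 0.7).astype(int)

BASELINE = 0.95                                # accuracy recorded during the PoC
TOLERANCE = 0.05                               # acceptable drift before alarming

def integrity_check(model) -> None:
    rate = model.score(probe_X, probe_y)
    if rate < BASELINE - TOLERANCE:
        print(f"ALERT: accuracy dropped to {rate:.2f}; model may be degraded or tampered with")
    else:
        print(f"OK: accuracy {rate:.2f}")

integrity_check(deployed_model)                # run on a schedule, e.g. weekly
```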

Conclusion

AI- and machine-learning-based tools are going to help organizations to a large extent, but we cannot blindly rely on them to generate alerts and act. However advanced the tools and technologies you deploy, and whatever their automation capabilities, security cannot be a one-time project: adversaries will always be a few steps ahead of the defense mechanisms, so the security of the organization's crown jewels must be managed continuously.

“As cybercriminals increasingly weaponize AI, cyber defenders must understand the mechanisms and implications of the malicious use of AI in order to stay ahead of these threats and deploy appropriate defenses.” – Dhilung Kirat, Jiyong Jang and Marc Ph. Stoecklin, IBM Research
