Ancient “AI” in the Age of Advanced Adversaries
A lot is being said about the use of AI in cyber security, and for good reason, as folks in information security (which is what we called "cyber security" for decades) have experienced first-hand. It's only natural that already stretched InfoSec teams look to AI as the "saviour" that will close the skills / personnel gap. Then again, a lot is also being said about companies selling products as "AI enabled".
But realistically speaking, are there things traditional ("non-AI") organizations can do to actually achieve what many of these "AI enabled" products do? I wouldn't have written this blog post, now would I, if the answer was anything but yes! 🙂
Let’s look at them:
- Anomaly Detection – this is age old! Almost all security tools that "alert" us on something are essentially doing this. How well? That's debatable. The kind of anomaly detection I am talking about is simple (but different). For example, abnormal login attempts on your Internet-facing systems are an anomaly. So is an abnormal pattern of DNS queries. Your CloudTrail logs (in AWS) showing an inordinate spend on EC2 instances is an anomaly. An abnormally short time between a git commit and that commit's production deployment is odd too! Your SaaS or Okta bill running high, or your APIs getting throttled (without any known changes), are all anomalies. How quickly you can respond depends on whether you have automated detection of these anomalies. The day you automate these "known" anomalies (see the first sketch after this list), you are already doing what many of these "AI enabled" products do today (after, of course, charging you an arm and a leg!)
- UBA / User behavior analytics – a lot of products do this, but the simplest version is restricting logins – preventing logons from regions where you do not expect your users to originate. This is "reduction" of attack surface. Is it foolproof? Hell no! Why? Generally speaking, adversaries do not attack systems from their home computers. Adversaries operate through trampoline servers (sometimes layers of them), sending the attack from "attacker controlled bots". But it narrows your area of concentration, and then you can use UBA more effectively, since you do know at a macro level where your users are expected to come from. To improve your "AI-ness", you can then add capabilities that work not at the macro level but at the per-user level, tracking where each specific user is expected to originate from. If a login looks abnormal (or anomalous), ask them to step up authentication (see the second sketch below). There are numerous vendors in this space, and there are open source libraries that can help you do this on the cheap. Again, something the very expensive "AI enabled" products do too.
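To make the anomaly-automation point concrete, here is a minimal sketch of flagging a "known" anomaly – an hourly count of failed logins that deviates sharply from its recent baseline, using a plain z-score check. The data source, threshold, and alert destination are all assumptions; in practice you would wire in your own log pipeline (CloudTrail, DNS logs, CI/CD events) and your own paging or response tooling.

```python
# A minimal sketch (not a product!): flag a metric that deviates sharply
# from its recent baseline. Here the metric is hourly failed-login counts;
# swap in DNS query volumes, EC2 spend, or commit-to-deploy times the same way.

import statistics

def is_anomalous(history: list[float], current: float, z_threshold: float = 3.0) -> bool:
    """Return True if `current` is more than `z_threshold` standard
    deviations away from the mean of `history` (a simple z-score check)."""
    if len(history) < 2:
        return False  # not enough baseline data to judge
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return current != mean  # flat baseline: any change is anomalous
    return abs(current - mean) / stdev > z_threshold

# Example: a day of hourly failed-login counts, then a sudden spike.
baseline = [4, 6, 5, 3, 7, 5, 4, 6, 5, 4, 6, 5]
print(is_anomalous(baseline, 6))    # False -- within the normal range
print(is_anomalous(baseline, 120))  # True  -- page someone or auto-respond
```

The point is not the statistics – a fixed threshold often works fine – but that once the check runs automatically against your logs, you have closed the loop that expensive products charge for.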
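And here is a minimal sketch of the per-user origin idea: keep a baseline of the countries each user normally logs in from, and demand step-up authentication when a login arrives from somewhere new. The baseline store, the country lookup, and `trigger_step_up` are hypothetical placeholders – substitute your identity provider's API (Okta or similar) and a real GeoIP database in practice.

```python
# A minimal sketch of per-user UBA: learn each user's normal login origins
# and step up authentication on anything new. All names here are placeholders.

from collections import defaultdict

# user -> set of countries previously seen (hypothetical in-memory baseline)
seen_locations: dict[str, set[str]] = defaultdict(set)

def trigger_step_up(user: str) -> None:
    """Placeholder: ask your IdP to demand MFA / re-auth for this login."""
    print(f"step-up auth required for {user}")

def record_login(user: str, country: str) -> None:
    """Process a login; step up if the origin is new for this user."""
    if seen_locations[user] and country not in seen_locations[user]:
        trigger_step_up(user)          # anomalous origin at the per-user level
    seen_locations[user].add(country)  # learn the location once it is verified

record_login("alice", "US")   # first login: establishes the baseline
record_login("alice", "US")   # normal, nothing happens
record_login("alice", "KP")   # new country -> step-up auth
```

A real deployment would persist the baseline, expire stale locations, and key on ASN or city rather than country – but the shape of the logic is exactly this.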
I am sure there are many other things an organization can start doing. Obviously, at the end of the day, every initiative takes resources, and by no means are any of these simple – but YMMV depending on the size of your datasets, user base, and organization.