February 28, 2022 | AI, CYBER SECURITY

AI & Cybersecurity

Something I keep hearing about of late is what cybersecurity is set to look like this year (and beyond), and how AI will play a bigger role in the fight against ransomware and breaches.

First of all, it isn’t AI if it doesn’t have the element of prediction.  And here, we’re not predicting anything, to be honest; at best, we’re inferring possible behaviors.  So let’s call AI what it really is: Machine Learning.  And yet the industry keeps saying AI because it sounds so much more impressive.  It’s intelligent.


Well, Machine Learning is just as impressive, is it not?  A machine that “learns” and adapts its behavior based on what transpired in the past?  Tell me that’s not mind-blowing?

Aside from this critical clarification, we also need to remember that hackers aren’t just sitting around idly.  They too have access to “AI” tools.  So, while we may think the new, ultimate tool is on its way, beware: our adversaries are likely already using those very same tools.  Innovation happens on both sides.  Most software tools are open source and available to everyone, for better or for worse.  And those with nefarious intentions are very skilled, very smart individuals too.

This initial consideration aside, we appear to be placing far too much reliance on something that, in all likelihood, will not deliver as we hope.

I myself have tested AI-based AV.

For an entire year.

I used it to scan every email our filters were scanning, in parallel with those filters.  And in that one year, the AI-based AV captured a grand total of 4 emails.  I repeat, FOUR!  Considering that we scan millions of emails every day, that number is beyond minuscule.  Our “traditional” scanners, composed of over 70 engines (each tailored to specific issues), captured hundreds of thousands of emails.  Why?  Because threats don’t come inside emails as attachments.  No hacker would send you a virus attached to an email; that’s far too easy to catch.  Block executable code altogether, and you’re blocking every threat even if you don’t know what threat it is (which, ultimately, doesn’t even matter: a threat is a threat and needs to be stopped, regardless of what name it goes by).
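The "block executable code altogether" policy can be sketched in a few lines.  This is a minimal, hypothetical illustration (the extension blocklist and the function name are my own; a real mail filter would also inspect magic bytes, nested archives, and macros):

```python
import email
from email import policy

# Hypothetical blocklist of executable attachment types; illustrative only.
BLOCKED_EXTENSIONS = {".exe", ".dll", ".scr", ".js", ".vbs", ".bat", ".ps1", ".jar", ".msi"}

def has_executable_attachment(raw_message: bytes) -> bool:
    """Return True if any attachment carries an executable file extension."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    for part in msg.iter_attachments():
        filename = (part.get_filename() or "").lower()
        if any(filename.endswith(ext) for ext in BLOCKED_EXTENSIONS):
            return True
    return False
```

Note the filter never needs to know *which* threat it is blocking; matching the extension is enough to quarantine the message.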

For the most part, hackers send you links.  They send you phishing emails.  Spoofed emails.  They send you something that aims to trick your users into clicking and downloading the threat code.

So, is that email a threat per se even though it does not contain executable code?

Yes, it is.

Because, sadly, users will be users, and some will just keep clicking on things they’re not supposed to because they can’t help themselves.  Instead of a “think before you click” mentality, they click first and think later.  And that’s when the real threat starts: when the clicked link reaches out to grab the code that will infect your entire network.

How do you protect from all this?  No need for fancy AI.

Scan HTTPS traffic, and make sure your web filtering is properly blocking threats.  That’s where the real and best protection needs to be applied nowadays, because once a user clicks (and you know someone will), you still have a chance at blocking the threat, BUT only if you’re properly filtering and scanning HTTPS.
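The core decision a web filter makes at click time can be sketched like so.  Everything here is hypothetical (the domain blocklist and function names are invented for illustration); a production filter would consult live reputation feeds and inspect the decrypted HTTPS payload itself, not just the URL:

```python
import re
from urllib.parse import urlparse

# Hypothetical blocklist of known-bad domains; illustrative only.
BLOCKED_DOMAINS = {"malware.example", "phish.example"}

URL_PATTERN = re.compile(r"https?://[^\s\"'>]+")

def extract_links(body: str) -> list[str]:
    """Pull every http(s) link out of a message or page body."""
    return URL_PATTERN.findall(body)

def is_blocked(url: str) -> bool:
    """Block the exact domain and any of its subdomains."""
    host = urlparse(url).hostname or ""
    return host in BLOCKED_DOMAINS or any(
        host.endswith("." + d) for d in BLOCKED_DOMAINS
    )
```

The same check applies twice: once when the email carrying the link is scanned, and again at the proxy when the user actually clicks, which is your last line of defense.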

Before closing, a final word about endpoint AV.

Traditional AV is practically useless.

With more than 1,250 new threats per minute, a signature-based AV will never be able to keep up, and this is where pundits advocate the use of AI/ML.  Now, I don’t necessarily disagree with this approach, but I honestly believe it’s insufficient.  We’re still in the realm of “trust but verify,” and we know that, on its own, is also no longer sufficient.  Zero Trust tells us that we need to “assume breach,” that it’s bad news from the get-go.

On that note, a better approach is that taken by companies like White Cloud Security.

This company has readapted the concept of whitelisting (born about 15 years ago) and finally turned it into a product that actually works.  The idea is that nothing is allowed to run on your computer unless it has been “trusted.”  Without going into too much detail, the product uses certificates to identify legitimate software (e.g., the OS itself), ships with a very long list of software recognized as legitimate, and, when installed, goes into “learning mode,” checking everything that tries to run on your computer and questioning everything it doesn’t know.

Once you turn it into blocking mode, only what’s been whitelisted is allowed to run.

And nothing else.

You can install ransomware as much as you want; it just won’t be allowed to run.  So, even IF you get breached and download ransomware, it won’t be able to cause you any problems, because, simply put, it will not be allowed to execute.
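The learning-mode/blocking-mode lifecycle described above can be sketched with a simple hash-based allowlist.  This is my own minimal illustration, not White Cloud Security’s implementation (their product also leans on code-signing certificates, which this sketch omits):

```python
import hashlib

def fingerprint(binary: bytes) -> str:
    """Identify a binary by its SHA-256 hash."""
    return hashlib.sha256(binary).hexdigest()

class Allowlist:
    def __init__(self) -> None:
        self.trusted: set[str] = set()
        self.learning = True  # start in learning mode

    def observe(self, binary: bytes) -> bool:
        """Learning mode: record everything that runs and allow it.
        Blocking mode: allow only binaries seen (trusted) during learning."""
        h = fingerprint(binary)
        if self.learning:
            self.trusted.add(h)
            return True
        return h in self.trusted

    def enforce(self) -> None:
        """Switch to blocking mode: nothing new may run from now on."""
        self.learning = False
```

Once `enforce()` is called, a freshly downloaded ransomware binary was never fingerprinted during learning, so `observe()` denies it, regardless of what the payload is or what it’s called.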

To be perfectly honest, I find this a much better approach than an ML tool, since the latter may or may not recognize a threat.  We’re putting too much faith in a technology still in its infancy, one definitely not ready for the great things we claim it’s capable of.

AI/ML will likely be great.  Some day.

But, by that point, hackers will have a similar tool as well, and so the battle continues.

Wouldn’t you agree?