
The good, the bad and the (un)known

Information security, or cyber security, relies on a model we have applied for years, since the very early days. This model has changed very little; comfort has set in, and unfortunately we have come to rely on it so heavily, almost like an addiction, that it keeps us from questioning it or thinking about something new.

This applied model of security is one based on the “known bad”. Its foundation is the idea that we can identify a threat so that a software or security tool can detect it, flag it and prevent the exploit. The techniques used for identification come in many flavors: a file hash, an indicator of compromise, a vulnerability database, URL or IP reputation, and so on. By nature it is a reactive approach: we need to discover the exploit first, and in the majority of cases that means it has already made casualties. These first victims are, in effect, sacrificed for the majority of users; they may suffer severe damage while the others are saved. Once the exploit is discovered, we update all our tools so that they can detect it and eventually prevent the bad from happening.
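To make this concrete, here is a minimal sketch (in Python) of how a “known bad” control works: everything hinges on the artefact already being on a list. The hash and IP values below are placeholders, not real threat intelligence.

```python
import hashlib

# Placeholder "threat intelligence": hashes and IPs already known to be bad.
# In a real tool these would come from a vendor feed, not a hard-coded set.
KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}
KNOWN_BAD_IPS = {"203.0.113.66"}  # documentation range, used here as an example

def file_is_known_bad(path: str) -> bool:
    """Flag a file only if its hash is already on the blocklist."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in KNOWN_BAD_SHA256

def connection_is_known_bad(remote_ip: str) -> bool:
    """Flag a connection only if the destination has a bad reputation."""
    return remote_ip in KNOWN_BAD_IPS
```

Anything these lists have never seen passes straight through, which is exactly the gap the rest of this article is about.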

“This is a global problem. We don't have a malware problem. We have an adversary problem. There are people being paid to try to get inside our systems 24/7.” (Tony Cole)

Technological evolution allows us to narrow the zero-day gap (a zero-day vulnerability is an undisclosed software vulnerability that hackers can exploit to adversely affect computer programs, data, additional computers or a network; source: Wikipedia). Fast detection of a zero-day exploit, for example by using sandbox techniques, lets us protect the environment more quickly and reduce the attack window. Unfortunately this approach has drawbacks: polymorphic malware modifies itself and makes sandboxing difficult. When your organization suffers a polymorphic malware attack, every sample looks different and would require its own sandbox run to get proper detection. Other solutions exist which look workable, at least for malware detection.
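A tiny illustration of why exact hashes break down against polymorphic code: flipping a single byte in a payload produces a completely different digest, so every variant has to be discovered, or sandboxed, on its own. The payload bytes here are obviously made up.

```python
import hashlib

# Two "variants" of the same imaginary payload; a polymorphic engine could
# produce a change like this on every single infection.
variant_a = b"EXAMPLE-PAYLOAD" + b"\x00"
variant_b = b"EXAMPLE-PAYLOAD" + b"\x01"

print(hashlib.sha256(variant_a).hexdigest())
print(hashlib.sha256(variant_b).hexdigest())
# The two digests have nothing in common, so a blocklist entry for variant A
# tells us nothing about variant B.
```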

However, targeted attacks benefit little from this type of protection. If your organization is in the crosshairs of hackers you might suffer a breach and, at best, detect it 200 days later.

“Technological progress is like an axe in the hands of a pathological criminal.” (Albert Einstein)

Why does it fail to detect?

These hackers may use known malware, but at the time they use it, it is likely still unknown to any of the “known bad” tools. A few days or weeks later you detect a piece of malicious software; your security tools and staff clean it up and consider the job done. It is extremely difficult to tell whether that piece of software was part of a well-crafted attack against your organization. In the meantime the hacker remains hidden in your environment and uses covert channels to communicate with it. These hackers might stay dormant for weeks, months or even longer. The variety of bad is far larger than the range of normal and good actions we keep in sight. When the “known bad” model was established it relied on a simple assumption: the misfits are a minority. Luckily that is still the case in our real lives, but online the bad has come to outnumber the good, so we might need to add something to the loop.

How to distinguish the wanted from the unwanted?

Many organizations are only looking outward. Many of us are clueless about the behavior of our own network, systems, users, applications and so on. For the majority it is impossible to dig up baselines that show how remote users log in, or who logs in from which location. Once we have a baseline of what can be considered normal or good behavior, we can start doing proactive security: whenever a deviation is detected, the security team can investigate, and if something serious is on the rise it can be stopped. Today we wait until havoc breaks out and then try to run after the facts. At best we can contain it and fix it at a later stage.
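As a rough sketch of what such a baseline could look like, the snippet below builds a per-user set of “known good” login locations from hypothetical history and flags anything that falls outside it. A real baseline would cover far more dimensions (time of day, device, data volume), but the principle is the same.

```python
from collections import defaultdict

# Hypothetical login history: (user, country). In practice this would be
# pulled from months of authentication logs.
history = [
    ("alice", "NL"), ("alice", "NL"), ("alice", "BE"),
    ("bob", "NL"), ("bob", "NL"),
]

# Build a per-user baseline of locations that have been seen before.
baseline = defaultdict(set)
for user, country in history:
    baseline[user].add(country)

def is_deviation(user: str, country: str) -> bool:
    """True when a login falls outside the user's observed baseline."""
    return country not in baseline[user]

print(is_deviation("alice", "NL"))  # False: matches known good behavior
print(is_deviation("alice", "RU"))  # True: new location, worth a look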

This model implies that your organization accepts that an attack will happen: a hacker will eventually gain access to your environment and try to steal, destroy or blackmail. The “known good” model requires you to learn about normal behavior, since there is no fixed hash, signature or definition for it. And it will be different for each organization; it even differs within an organization, since the accounting department probably has different working habits than the IT department. It is a tough row to hoe, requiring investment in time, staff and money.

What it is not!

It is not a silver bullet and it does not replace the “known bad” model. The two are complementary, and combined with machine learning and artificial intelligence it becomes a continuous improvement process: fine-tuning our understanding of “normal” and distinguishing it from bad. Additionally, we need to rethink what we should and should not log into the SIEM, and which data is useful for correlation and which is not.
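Purely as an illustration of the machine-learning angle, the sketch below uses scikit-learn's IsolationForest to learn “normal” from a handful of invented session features and then flags an outlier. The feature names and numbers are made up; a real deployment would feed it curated data from the SIEM.

```python
from sklearn.ensemble import IsolationForest

# Invented per-session features:
# [logins per hour, megabytes uploaded, distinct hosts contacted]
normal_sessions = [
    [2, 5, 3], [3, 8, 4], [1, 4, 2], [2, 6, 3], [3, 7, 5],
    [2, 5, 4], [1, 3, 2], [3, 9, 4], [2, 6, 3], [2, 4, 3],
]

# Learn what "normal" looks like for this environment; no signatures involved.
model = IsolationForest(contamination=0.1, random_state=0).fit(normal_sessions)

# A session that suddenly uploads far more data to far more hosts.
suspect = [[4, 900, 60]]
print(model.predict(suspect))  # [-1] marks it as an outlier to investigate
```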

 

John Gerards

Global Security Governance Specialist at ABN AMRO Bank N.V.


Keep these excellent articles coming !!! Thanks

Philippe Duluc

CTO big data & security, SVP


Interesting vision. Moreover, "known bad" focuses mainly on IT, while "known good" needs deeper knowledge of business....
