Vague Signals & Behavioral Analytics

Gartner analyst Anton Chuvakin shreds the myth that excellence in detecting threats implies an equal or greater ability to prevent them. For some (including myself), this should be obvious. Preventing, detecting, and responding to security threats should be treated and evaluated as independent disciplines; excellence in one doesn’t guarantee maturity in either of the others. Unfortunately, because some security vendors insist on perpetuating this myth, Chuvakin by necessity eviscerates the false premise with several good arguments. I’m only going to focus on one because of its impact on identity and user behavior analytics.

One of the points that Chuvakin makes regarding prevention is that signals in this area are often vague, making prevention based on that level of data impractical, unless you want angry users storming your gates after being denied access. This is particularly true when evaluating a user’s activity or behavior. While some systems can compute a risk score for a given activity, do we really want to block a connection the moment it barely crosses a threshold that may or may not be valid? The smarter approach would be to escalate the user’s request to another level of authentication. Even if the challenge succeeds, it might make sense to flag the activity for human review.
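The decision logic described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s implementation; the threshold values and function names are my own assumptions.

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    STEP_UP = "step_up"            # challenge for another factor
    BLOCK = "block"

def evaluate_login(risk_score: float,
                   step_up_threshold: float = 0.5,
                   block_threshold: float = 0.9) -> Decision:
    """Prefer escalation over a hard block when the signal is vague."""
    if risk_score >= block_threshold:
        return Decision.BLOCK
    if risk_score >= step_up_threshold:
        # Barely over the line: challenge the user rather than deny access.
        return Decision.STEP_UP
    return Decision.ALLOW

def needs_human_review(challenge_passed: bool, risk_score: float) -> bool:
    """Even a successful challenge near the threshold may merit a second look."""
    return (not challenge_passed) or risk_score >= 0.7
```

The key design choice is that only the highest-confidence signals trigger an outright block; everything in the ambiguous middle band is routed to step-up authentication and, where warranted, to a human reviewer.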

If I log in from a London-based IP address six hours after my last known activity (from the US), it might be prudent for the system in question to challenge me for another factor of authentication to ensure the credentials have not been compromised. If no response is given or the session is terminated, flagging the account for review would be prudent. Even better, if the analytics engine has access to my travel and badging data (both viable points of integration), the event could be de-escalated (or escalated) quickly, improving the signal-to-noise ratio. Human intervention may still be useful here, but automation becomes at least feasible once we can raise or lower the event’s risk score based on the user’s response.
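The travel and badging integration might look something like the following sketch. The field names, base scores, and adjustment weights are illustrative assumptions for this scenario, not references to any real analytics product.

```python
from dataclasses import dataclass

@dataclass
class LoginEvent:
    country: str                      # country of the current login's IP
    last_country: str                 # country of the last known activity
    hours_since_last_activity: float

def base_risk(event: LoginEvent) -> float:
    # "Impossible travel": a new country too soon after the last activity.
    if (event.country != event.last_country
            and event.hours_since_last_activity < 8):
        return 0.8
    return 0.1

def adjusted_risk(event: LoginEvent,
                  travel_booked_to: set[str],
                  badged_in_office: bool) -> float:
    """Fold auxiliary signals into the score before deciding what to do."""
    risk = base_risk(event)
    if event.country in travel_booked_to:
        risk -= 0.5   # corroborated by travel booking data: lower the score
    if badged_in_office:
        risk += 0.3   # badge data says the user is still local: contradiction
    return max(0.0, min(1.0, risk))
```

With corroborating travel data, the London login drops below a step-up threshold; a badge swipe at a US office at the same time pushes it the other way, toward a block or an analyst’s queue.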

The technology behind this level of sophistication in behavioral analytics as a prevention control is fairly mature, but adoption is still nascent in most enterprises. I see this as one of the early challenges in developing a behavioral analytics program. The use case I described is pretty straightforward, but establishing baselines for user behavior, especially in large enterprises, is far more daunting. Integrating that knowledge with your access management tools and policies is another level of challenge. That doesn’t mean we shouldn’t attempt it, however.

As a side note, this is an area where the concept of Shared Signals intrigues me. As our identity fabric becomes more and more decentralized and federated, adding external events to our behavioral analytics engine only makes sense. Better still, we retain control over how to interpret those events, rather than relying on a machine’s interpretation of an external event, which only adds vagueness about what actually took place.
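Retaining local interpretation might look like this sketch, which maps externally shared security events (in the style of the OpenID Shared Signals / CAEP work, which transmits typed Security Event Tokens) onto a locally owned risk adjustment. The weight table is entirely my own assumption; the point is that the receiver, not the sender, decides what an event means.

```python
# Locally chosen weights for externally shared event types. The CAEP-style
# event-type URIs below exist in the OpenID Shared Signals specifications,
# but the weights are illustrative assumptions, not part of any standard.
LOCAL_WEIGHTS = {
    "https://schemas.openid.net/secevent/caep/event-type/credential-change": 0.2,
    "https://schemas.openid.net/secevent/caep/event-type/session-revoked": 0.4,
}

def apply_external_event(current_risk: float, event_type: str) -> float:
    """Fold an external event into our own risk score.

    Unknown event types add nothing on their own; we never inherit the
    sender's verdict, only their observation.
    """
    return min(1.0, current_risk + LOCAL_WEIGHTS.get(event_type, 0.0))
```

A session-revoked event from a federated partner nudges our score upward, but whether that triggers a step-up challenge, a flag, or nothing at all remains our policy decision.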

It stands to reason that detection activities will mature faster than prevention. Arguably, response activities can mature faster still, given appropriate resources. All three are worth investing in to protect company assets. But in the end, reality has to temper our expectations: achievement in one bears no necessary relationship to maturity in the other two.