Applying a Rheostat to Local Admin Rights

“Think of everything you do in terms of a rheostat, rather than a switch.” – horseman Mark Rashid

In information security, we often measure deployed controls in terms of the friction, or resistance, they present to the user. In digital identity, we speak of balancing the user experience against the friction experienced in the name of security. Requiring two-factor authentication is a good example.

In information security, an equivalent of a rheostat might be the principle of least privilege: grant the user no more (or less) privilege than they need to succeed at a given task. If possible, suspend that privilege while it is not in use.

When first discussing privileged access management with an organization of any size, the question I would ask stakeholders first is: do your users retain local admin privileges on their desktop or laptop devices?

According to a recent study by Avecto, over 94% of the critical vulnerabilities that Microsoft patched over the last year could be mitigated by removing local admin access from a user’s profile on their desktop. In the same study, that number rises to 100% for Edge and Internet Explorer vulnerabilities when the user browses under a lower-privilege profile. In cybersecurity, it is often said that there are no ‘silver bullets’ in protecting users, but this one gets pretty darned close.

Removing local admin rights can feel like IT is throwing a switch on privilege, and it is sometimes seen as an extreme measure to protect users. I think that depends greatly on how it is communicated, and on how the experience is delivered. Is it a switch, or does the resistance vary, like a rheostat?

Justin Richer of Bespoke Identity echoes this concept in a recent blog:

In physical systems, friction has a way of wearing out parts and causing mechanisms to fail. Otherwise productive energy is lost as heat to the environment. It’s no wonder we use it as a metaphor in computer science and seek to eliminate it. But at the same time, friction is also responsible for the ability to stop and start motion. For things like wheels and pulleys to work, they need friction between certain parts. In other words, friction in physical systems can be useful, but only when it exists as a tool and not as a byproduct.

I’d like to posit that not every action the user can take in an application should be equally easy. Instead of being eliminated, friction in a user experience needs to be carefully controlled. For example, if an action is destructive, especially if it can’t be undone, then it’s generally a very good idea to stop the user before they break everything and make sure they realize what they’re doing.

Ray Hunt is often credited with being one of the original thinkers behind natural horsemanship. When working with horses, he thought it was important to “make the wrong thing hard and the right thing easy.” That seems to be a pretty solid UX principle. How can we apply this when the user is working away on their laptop or other computing devices?

Extending Justin’s message, executing with a privilege higher than the user’s own should include some friction, but how much? Are you formatting a system partition on a disk? Probably high friction. Are you updating your mouse drivers? Probably low friction. How about installing new software? That probably depends. If it is a known publisher with a signed distribution (possibly on a whitelist of apps), perhaps we give the user no friction at all. Right now the model is binary: you either have the keys to your PC/device kingdom, or you don’t.
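The sliding scale described above can be sketched as a simple policy table. This is a hypothetical illustration, assuming made-up action names and friction tiers, not the behavior of any real EPM product:

```python
# Hypothetical sketch of friction-as-a-rheostat: privileged actions map
# to graduated friction tiers instead of a single allow/deny switch.
# Action names and tiers are illustrative assumptions.

FRICTION = {"none": 0, "confirm": 1, "reauthenticate": 2, "approval": 3}

# Illustrative policy table: action -> friction tier
POLICY = {
    "update_mouse_driver": "none",
    "install_signed_whitelisted_app": "none",
    "install_unsigned_app": "reauthenticate",
    "format_system_partition": "approval",
}

def friction_for(action: str, default: str = "confirm") -> int:
    """Return the friction level for an action; unknown actions get a
    middle-of-the-road confirmation prompt, not a hard block."""
    return FRICTION[POLICY.get(action, default)]
```

An unknown action falls back to a middle tier (a confirmation prompt) rather than a hard denial, which is the rheostat behavior: resistance proportionate to risk, never all-or-nothing.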

We had some early experience with a form of variable friction starting with Windows Vista (through Windows 10) and its UAC, or User Account Control. By default, UAC was set to high, which meant the user had to click through a prompt every time they installed software, updated a non-Windows driver, or executed any of a variety of functions that could result in system changes. The problem is, this wasn’t really a rheostat; it was a switch. The closest thing to a rheostat (though still not really one) was a global slider that determined when the user would be challenged during those events. For users, this often became a game of “how do I make this window go away permanently?” From a security perspective, that is a disastrous result. A simple search for “disable UAC” shows how popular that game became.

In the enterprise context, we have a little more control. We can prevent users from altering UAC settings. We can also revoke their local admin privileges. But we’re still back to the old switch pattern. Probably 80% of the time, this isn’t a problem. But when a VP needs to install a new (non-standard) conferencing client to collaborate with a partner, lacks the rights to do so, and no one is immediately available to help, the phone calls begin.

This is not to say we lack solutions for this today. There are a few vendors in the enterprise privilege management (EPM) space that can help with this problem, leveraging a variety of controls. But how many companies make this an early priority in their overall security strategy? Based on the latest Verizon Data Breach Investigations Report (DBIR), far too few. There are many things to note in the report, but the one that got my attention is that 88% of breaches still leverage methods mentioned in the 2014 report.

Purchasing an EPM tool isn’t a requirement, especially for smaller companies. But once you hit the challenges of scale, EPM solutions will make deployment and management much easier.

If you want to eat your own dog food, yank the local admin privileges from the account you are viewing this post from (if you haven’t already). Then make a log of the number of times you’ve had to leverage an admin credential to do your activities on the device. I did, and it surprised me how little I actually needed it.
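If you want to keep that log on a Linux machine, one rough way is to count sudo invocations in the auth log. A minimal sketch, assuming the default syslog line format; the log path is an assumption, and macOS and Windows record elevation events elsewhere:

```python
# Hypothetical sketch: tally how often an admin credential was actually
# used, by counting sudo COMMAND entries per user in a Linux auth log.
# The regex assumes the default syslog layout for sudo lines.
import re
from collections import Counter

def count_sudo_commands(log_text: str) -> Counter:
    """Count sudo command invocations per invoking user."""
    tally = Counter()
    for line in log_text.splitlines():
        match = re.search(r"sudo:\s+(\w+)\s*:.*COMMAND=", line)
        if match:
            tally[match.group(1)] += 1
    return tally

# Example usage (path is an assumption for a typical Debian/Ubuntu box):
# with open("/var/log/auth.log") as fh:
#     print(count_sudo_commands(fh.read()))
```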

EPM vendors have something going for them, but I would love a low-cost consumer version of this capability. Start with a whitelist of the top 100 consumer applications, and perhaps grow it from there with vendors that have good release/update hygiene. Make this tool more of a rheostat, increasing the resistance only when the user is trying to do something that incurs proportionate risk, like opening an email attachment that results in changes to the system. Our users will be happier and more secure.



Vague Signals & Behavioral Analytics

Gartner analyst Anton Chuvakin shreds the myth that excelling at detecting threats means you should be equally good, or better, at preventing them. For some (including myself), this should be obvious. Preventing, detecting, and responding to security threats should be treated and evaluated as independent disciplines. Excellence in one doesn’t guarantee a level of maturity in either of the others. Unfortunately, since some security vendors insist on perpetuating this myth, Chuvakin by necessity eviscerates the false premise with several good arguments. I’m only going to focus on one, because of its impact on identity and user behavior analytics.

One of the points Chuvakin makes regarding prevention is that signals in this area are often vague, making prevention based on this level of data impossible, unless you want angry users storming your gates after being denied access. This is particularly true when evaluating the activity or behavior of a user. While some systems are capable of computing a risk score for a given activity, do we really want to block a connection when it barely crosses a threshold that may or may not be valid? The smarter approach would be to escalate the user’s request to another level of authentication. Even if the challenge succeeds, it might make sense to flag the activity for human review.

If I log in from a London-based IP address six hours after my last known activity (from the US), it might be prudent for the system in question to challenge me for another factor of authentication to ensure the credentials have not been compromised. If no response is given or the session is terminated, flagging the account for review would be prudent. Even better, if the analytics engine has access to my travel and badging data (both viable points of integration), the event could be de-escalated or escalated quickly. Human intervention may still be useful here, but automation becomes at least feasible, based on our ability to raise or lower the risk score of the event using the user’s context and response.
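The escalation pattern above can be sketched as a small decision function. The thresholds, weights, and signal names are illustrative assumptions, not any vendor’s scoring model:

```python
# Hypothetical sketch of step-up authentication driven by a risk score:
# contextual signals (travel records, badge data) raise or lower the raw
# score, and the outcome is graduated rather than a hard block.
# All numbers and signal names are illustrative assumptions.

def assess_login(risk: float, trip_on_file: bool, badged_in_office: bool) -> str:
    if trip_on_file:
        risk -= 0.3           # a booked trip explains the new geography
    if badged_in_office:
        risk += 0.4           # badge says user is here; login says London
    if risk < 0.5:
        return "allow"
    if risk < 0.8:
        return "step_up"      # challenge for a second factor
    return "flag_for_review"  # likely compromise: escalate to a human
```

Note that the default response to an ambiguous score is a second-factor challenge, not a denial; only when the contextual signals contradict the login does the event go to human review.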

This level of sophistication in behavioral analytics as a prevention control is technically within reach, but still pretty nascent for most enterprises. I see this as one of the early challenges in developing a behavioral analytics program. The use case I described is pretty straightforward, but establishing baselines for user behavior, especially in large enterprises, is far more daunting. Integrating that knowledge with your access management tools and policies is another level of challenge. That doesn’t mean we shouldn’t attempt it, however.

As a side note, this is an area where the concept of Shared Signals intrigues me. As our identity fabric becomes more and more decentralized/federated, adding external events to our behavioral analytics engine only makes sense. Further, we retain control over how to interpret those events, rather than relying on a machine interpretation of an external event, which only adds vagueness about what took place.

It stands to reason that detection activities mature at a faster rate than prevention. Arguably, response activities can mature even faster, given appropriate resources. All three are worth investing in to protect company assets. But in the end, reality has to temper our expectations: achievement in one bears no necessary relationship to maturity in the other two.