Slides from Recent PAM Talk

This talk was originally given at RSA, but I was able to give an expanded version recently at IT Hot Topics. A few people have asked for the slides, so here they are. I hope to write out the talk in full as a blog post at some point, but I have two more talks to write, so probably not soon.

Applying a Rheostat to Local Admin Rights

“Think of everything you do in terms of a rheostat, rather than a switch.” Horseman Mark Rashid

In information security, we often measure the controls we deploy in terms of the friction, or resistance, they present to the user. In digital identity, we speak of balancing the user experience against the friction that is imposed in the name of security. Requiring two-factor authentication is a good example.

In information security, an equivalent of a rheostat might be the principle of least privilege: grant the user no more (or less) privilege than they need to succeed at a given task. If possible, suspend that privilege while it is not in use.

When first discussing privileged access management with any organization, regardless of its size, the first question I ask the stakeholders is: do your users retain local admin privileges on their desktop or laptop devices?

According to a recent study by Avecto, over 94% of the critical vulnerabilities that Microsoft patched over the last year could be mitigated by removing local admin access from a user’s profile on their desktop. In the same study, that number rises to 100% for Edge and Internet Explorer vulnerabilities when the user browses under a lower-privilege profile. In cybersecurity, it is often said that there are no ‘silver bullets’ for protecting users, but this one gets pretty darned close.

Removing local admin rights can feel like IT is throwing a switch on privilege, which can sometimes be seen as an extreme measure to protect users. I think that depends greatly on how it is communicated and how the experience is delivered. Is it a switch, or does the resistance vary, like a rheostat?

Justin Richer of Bespoke Identity echoes this concept in a recent blog:

In physical systems, friction has a way of wearing out parts and causing mechanisms to fail. Otherwise productive energy is lost as heat to the environment. It’s no wonder we use it as a metaphor in computer science and seek to eliminate it. But at the same time, friction is also responsible for the ability to stop and start motion. For things like wheels and pulleys to work, they need friction between certain parts. In other words, friction in physical systems can be useful, but only when it exists as a tool and not as a byproduct.

I’d like to posit that not every action the user can take in an application should be equally easy. Instead of being eliminated, friction in a user experience needs to be carefully controlled. For example, if an action is destructive, especially if it can’t be undone, then it’s generally a very good idea to stop the user before they break everything and make sure they realize what they’re doing.

Ray Hunt is often credited with being one of the original thinkers behind natural horsemanship. When working with horses, he thought it was important to “make the wrong thing hard and the right thing easy.” That seems to be a pretty solid UX principle. How can we apply this when the user is working away on their laptop or other computing devices?

Extending Justin’s message, executing anything at a higher privilege than the user’s own should involve some friction, but how much? Are you formatting a system partition on a disk? Probably high friction. Are you updating your mouse drivers? Probably low friction. How about installing new software? That probably depends. If it is from a known publisher with a signed distribution (possibly on a whitelist of apps), perhaps we give the user no friction. Right now we get more of a binary model: you either have the keys to your PC/device kingdom, or you don’t.
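To make that concrete, here is a minimal sketch of what a “friction policy” could look like, written in Python purely as an illustration. The action names, the `Friction` levels, and the publisher allowlist are all hypothetical; a real EPM agent would make this decision from signed package metadata and centrally managed policy, not a hard-coded set.

```python
from enum import Enum
from typing import Optional

class Friction(Enum):
    NONE = 0      # execute silently
    CONFIRM = 1   # single confirmation prompt
    STEP_UP = 2   # re-authenticate (password, MFA, or a help-desk ticket)
    BLOCK = 3     # deny until an admin approves

# Hypothetical allowlist of trusted, signed publishers.
TRUSTED_PUBLISHERS = {"Known Conferencing Co.", "Major Browser Vendor"}

def friction_for(action: str, publisher: Optional[str] = None, signed: bool = False) -> Friction:
    """Pick a friction level proportionate to the risk of the requested action."""
    if action == "format_system_partition":
        return Friction.BLOCK       # destructive and rarely legitimate for an end user
    if action == "install_software":
        if signed and publisher in TRUSTED_PUBLISHERS:
            return Friction.NONE    # known publisher, signed package: no friction
        return Friction.STEP_UP     # unknown source: make the user prove intent
    if action == "update_driver":
        return Friction.CONFIRM     # low risk, but still a system change
    return Friction.CONFIRM         # default: nudge, don't block

# A conferencing client from a trusted, signed publisher sails through.
print(friction_for("install_software", publisher="Known Conferencing Co.", signed=True))
```

The point is not the specific rules, it is that the answer is a dial with several positions rather than a yes/no on admin rights.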

We had some early experience with a form of variable friction starting with Windows Vista (and continuing through Windows 10) and its UAC, or User Account Control. By default, UAC was set to high, which meant the user had to click through a prompt every time they installed software, updated a non-Windows driver, or executed any of a variety of functions that could result in system changes. The problem is that this wasn’t really a rheostat; it was a switch. The closest thing to a rheostat (though still not really one) was a global slider that determined when the user would be challenged during those events. For users, this often became a game of “how do I make this window go away permanently?” From a security perspective, that is a disastrous result. A simple search for “disable UAC” shows how widespread that outcome has become.
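For the curious, that slider ultimately boils down to a handful of registry values under the local machine policy key. Below is a small, read-only sketch (Windows only, using Python’s standard winreg module) that reports the commonly documented values; `EnableLUA = 0` is the “window gone forever” state those search results lead people toward.

```python
# Read the UAC "slider" settings from the registry (Windows only).
# This sketch only reports settings; it does not change them.
import winreg

UAC_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"

def read_uac_settings():
    settings = {}
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UAC_KEY) as key:
        for name in ("EnableLUA", "ConsentPromptBehaviorAdmin", "PromptOnSecureDesktop"):
            try:
                value, _type = winreg.QueryValueEx(key, name)
                settings[name] = value
            except FileNotFoundError:
                settings[name] = None   # value not present on this build
    return settings

if __name__ == "__main__":
    for name, value in read_uac_settings().items():
        print(f"{name} = {value}")
    # EnableLUA = 0 means UAC has been switched off entirely.
```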

In the enterprise context, we have a little more control. We can prevent users from altering UAC settings, and we can revoke their local admin privileges. But we’re still back to the old switch pattern. Probably 80% of the time, this isn’t a problem. But when a VP needs to install a new (non-standard) conferencing client to collaborate with a partner, lacks the rights to do so, and no one is immediately available to help, the phone calls begin.
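As a rough illustration of what “revoking local admin” means mechanically on a single Windows machine, the built-in `net localgroup` command can list and trim the local Administrators group. The account name below is a placeholder, and at enterprise scale this would normally be driven by Group Policy or an EPM agent rather than an ad hoc script.

```python
# Sketch: inspect (and optionally trim) the local Administrators group on Windows
# using the built-in `net localgroup` command.
import subprocess

def list_local_admins() -> str:
    """Return the raw membership listing of the local Administrators group."""
    result = subprocess.run(
        ["net", "localgroup", "Administrators"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def remove_local_admin(account: str) -> None:
    """Remove an account from local Administrators (requires an elevated prompt)."""
    subprocess.run(
        ["net", "localgroup", "Administrators", account, "/delete"],
        check=True,
    )

if __name__ == "__main__":
    print(list_local_admins())
    # remove_local_admin("CONTOSO\\jsmith")   # placeholder account; uncomment deliberately
```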

This is not to say we lack solutions for this today. There are a few vendors in the enterprise privilege management (EPM) space that can help with this problem, leveraging a variety of controls. But how many companies focus on this as an early priority in their overall security strategy? Based on the latest Verizon Data Breach Investigations Report (DBIR), far too few. There are many things to note in the report, but the one that got my attention is that 88% of breaches still leverage methods mentioned in the 2014 report.

Purchasing an EPM tool isn’t a requirement, especially for smaller companies. But once you run into challenges of scale, EPM solutions will make deployment and management much easier.

If you want to eat your own dog food, yank the local admin privileges from the account you are viewing this post from (if you haven’t already). Then keep a log of the number of times you have to leverage an admin credential to do your work on the device. I did, and it surprised me how little I actually needed it.
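If you want to run the same experiment, a tiny tally script is enough. The sketch below is only one way you might keep that log: it uses the Windows shell32 `IsUserAnAdmin()` call via ctypes to note whether the current session is elevated and appends a timestamped reason to a file of your choosing (the path and reason strings are arbitrary).

```python
# Self-auditing sketch: run this whenever you reach for admin rights, and it
# appends a timestamped entry to a local tally file.
import ctypes
import datetime
from pathlib import Path

LOG = Path.home() / "admin-usage.log"   # arbitrary location for the tally

def record_elevation(reason: str) -> None:
    elevated = bool(ctypes.windll.shell32.IsUserAnAdmin())
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with LOG.open("a", encoding="utf-8") as f:
        f.write(f"{stamp}\televated={elevated}\t{reason}\n")

if __name__ == "__main__":
    record_elevation("installed a printer driver")   # example reason
    print(f"{LOG}: {sum(1 for _ in LOG.open())} elevation(s) recorded so far")
```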

EPM vendors have something going for them, but I would love a low-cost consumer version of this capability. Start with a whitelist of the top 100 consumer applications and perhaps grow it from there with vendors that have good release/update hygiene. Make the tool more of a rheostat, and only increase the resistance when the user is trying to do something that incurs proportionate risk, like opening an email attachment that results in changes to the system. Our users will be happier and more secure.

RSA Thoughts, Part 1

(photo credit: Brian Campbell)

I think teaching eviscerated my time for blogging, so I’m going to try to put more energy into it this year. Naturally, I’m going big on this revival with a two-part post about my experience at the RSA Conference, to the best of my knowledge the largest security conference on the planet (especially if you count its global adjuncts).

This was my first RSA, both as an attendee and as a speaker. I thought Oracle OpenWorld was huge. Good gravy. I think estimates had it at about 45,000 attendees. In spite of the size, kudos to RSA and their management vendor, who run an incredibly tight conference at that scale.

On one hand, it’s awesome that we have so many people, vendors, and speakers focused on the information security space. On the other, it’s a touch overwhelming and nearly impossible to get to all the content you want. Overall I think that’s a good problem to have, because this is a tough problem to solve. It was refreshing that they featured an identity track (a first, I believe) at the conference.

The good news is that they make much of the content available online, including videos of some of the sessions. Mine has audio but no video, which isn’t a loss, heh. It isn’t very technical, but it lays a solid foundation on some of the key elements and challenges that go into a privileged access management program. I’ve delivered this talk at the Cloud Identity Summit, BSides Charlotte, and IT Hot Topics, but this was definitely the most mature version of the talk because of the time that has passed and the lessons learned.

My talk was on Thursday, which I originally loved because I thought it would give me more time to prepare. I was mistaken. This talk is by far the most mature of the ones I’ve developed, so very little additional time was needed to update it for the conference. I don’t necessarily know that I would have wanted to go on Tuesday, as there were some serious industry heavyweights to compete against. My biggest concern was keeping my energy balanced across the sessions, networking, and vendor parties so that I could be as sharp as possible when it came time to take the stage. It required missing a few tracks, but I eventually achieved that.

I discovered in the hours leading up to my talk that seat reservations had reached the point where an overflow room was created in case demand exceeded capacity. That was extremely flattering, but I did my best not to make it bigger than it was; neither the talk nor the stage was changing. I was thrilled that so many people were interested in this area, because I think it sometimes gets lost between the traditional domains of identity & access management and information security. Clearly others felt the same way, given the number that turned up.

Overall, I couldn’t be more pleased with how the talk went. Even though the hall was a little dark so they could broadcast it to the overflow room, I could feel the engagement and energy from the audience. It showed when I finished, as the questions that emerged were insightful and thought-provoking. Once we wrapped up, I went outside and happily answered even more questions for another 40 minutes. Such great conversation with such intelligent and thoughtful people! I retired to the speaker’s lounge to decompress a little and make some mental notes from some of the questions that were asked. (photo credit: Scott Bollinger)

I know I’m kind of working this post backwards, but the next chapter will have some of my takeaways from the conference, both from hallway conversations and from some of the tracks and keynotes I attended.

I’m writing this post at the airport with a feeling of extreme gratitude for the opportunity that was presented to me, and all of the support that I’ve received from countless people to help make this conference a personal and professional success.

PS. Thanks to Ian Glazer for the support.