Evaluating 2FA in the Era of Security Panic Theater

(note: this is a writeup of a talk I gave at DerbyCon 2019 and at UNCC's CyberSecurity Symposium in 2020. I thought it would be useful to get it in blog form, especially with the SolarWinds event unfolding.)

It seems like today's world offers constant reminders of how insecure our digital lives can be. As a security professional, part of my job is to monitor for threats to my company and the organizations with which I have a relationship. A significant part of that effort lies in assessing how likely or realistic those threats are. If you believed every infosec vulnerability headline that comes across Twitter, it would be easy to feel like Chicken Little, with the sky forever falling. I've coined a term for this phenomenon (I'm not sure I actually originated it, but Google seems to think so): Security Panic Theater.

If this term sounds mildly familiar, it is because of its proximity to the phrase 'security theater'. We experience this pretty regularly whenever we attend a major sporting event like the World Series and have to stand in long lines while people wave a wand over us to ensure a keychain knife doesn't get admitted to the stadium. This takes place even though the track record of seizing weapons that would actually matter is pretty poor. But the mere act of the screening makes patrons feel safer. It's even worse when we travel and pass through the TSA's gauntlet of screeners. Penetration tests consistently reveal a woeful rate of detecting items that could cause us harm while we are in flight. To add insult to the process, there is a comic reality to what actually is seized. I'll let comedian Steve Hofstetter explain:

If you bring too much liquid, the TSA confiscates it and throws it away, in case it’s a bomb. So they throw it away. In case it’s a bomb. In the garbage can, right next to them. With all the other possible bombs. In the area with the most amount of people.

In case it’s a bomb.

Steve Hofstetter

Security Panic Theater (SPT) is a bit of a different experience. The process for SPT goes something like this:

Vulnerability/breach announced regarding a product or control (x) [Security]

+ Inflammatory internet headline(s) regarding (x) [Panic], which leads to the conclusion:

Product or Control (x) is useless/defeated [Theater]

A relatively recent example of this was the release of a penetration testing toolkit by Polish researcher Piotr Duszyński named Modlishka, which loosely translates to English as Mantis. The central feature of this toolkit was the use of a reverse proxy that could accelerate a phishing flow by sending a user to a spoofed URL while the rest of the web experience was exactly what the user expected. This enabled a man-in-the-middle (MITM) attack that captured both the credentials and the SMS code entered by the user.

The significance of this new framework didn’t lie with the fact that you could now phish any two-factor authentication (2FA) method that used one time passwords (OTP). What made this release notable was that it was now significantly easier to accelerate the phishing flow because you didn’t have to spin up a fake site. A reverse proxy would do the work for you. To be clear, that is certainly noteworthy, but also not new.
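To see why a real-time relay defeats one-time passwords regardless of how the code is delivered, it helps to remember what an OTP actually is. Here is a minimal sketch of TOTP (RFC 6238) using only the Python standard library; the secret below is an arbitrary demo value, not from any real account, and the sketch uses app-style TOTP rather than SMS, but the relay logic is identical for both:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 of the current 30-second time-step counter."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Any code captured inside the same 30-second window verifies on replay --
# exactly the window a real-time reverse proxy like Modlishka exploits.
secret = "JBSWY3DPEHPK3PXP"  # arbitrary demo secret
assert totp(secret, at=990.0) == totp(secret, at=1010.0)  # same window, same code
```

The takeaway: the code is only bound to a time window, not to the site the user typed it into, which is what makes any OTP phishable by a live proxy.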

However, to hear the Twitterverse and online media outlets tell it, you'd think all our credentials, even those protected by 2FA, were suddenly moments away from being captured by hackers. Now, to be fair, there are some responsible journalists who try to treat these topics fairly, but even a sane article can be overridden by a clickbait title like "Is 2FA Dead?"

Let’s get a few basics clear for the sake of sanity & clarity:

2FA can’t be killed

2FA represents a combination of factors for authentication, not a single technology or pattern. The last few years alone have seen a litany of episodes where a particular technology appeared to be at risk (often temporarily, or misleadingly so), such as:

RSA tokens were allegedly cracked (mostly not true)

SS7 flaw will drain all your bank accounts (true, but hard to implement)

NIST Killed SMS 2FA (sort of, but not really)

Modlishka makes SMS useless (sort of, but not really) 

Google Security keys have Bluetooth flaw (recall for some, not all)

Yubikey FIPS keys flawed (recall for some, not all) 

Apple promoted modifications to SMS 2FA for improved anti-phishing strength & joined FIDO’s board. 

2FA implementation in 2020 Iowa Caucus renders app nearly unusable

And even today as I update this, the SolarWinds hackers bypassed OWA’s 2FA because they compromised the server hosting the private key.

That last one hasn’t had enough oxygen yet for the 2FA headlines to blaze, and they will, but both the company analyzing the hack and Bruce Schneier emphasize:

It should be noted this is not a vulnerability with the MFA provider and underscores the need to ensure that all secrets associated with key integrations, such as those with an MFA provider, should be changed following a breach.

Notice the trend here? While there is some truth to most of these from a vulnerability perspective, the reality is that these technologies still work to protect your credentials. Apple's recent announcement has its own debate worth having (and it has been had on IDPro's Slack), and the debacle in Iowa shows that any technology is a dumpster fire waiting to happen if its implementation is poorly designed.

The diversity of the 2FA landscape makes it stronger, not more vulnerable. 

Let’s take a look at the following categories of authentication: 

Pretty diverse to be killed with a single vulnerability, I would think! Now let’s overlay which ones have at least one known vulnerability:

If we look at all the ones in red, that would be pretty disheartening to the casual observer. That’s where journalists and analysts need to take special care in talking about vulnerabilities. The real story doesn’t fit neatly into a simple headline regarding the vitality of the authentication landscape.

All methods of 2FA are still incredibly effective (some more than others) 

Google published a study of some internal findings on various methods used to secure their public credentials. Yes, SMS should be the low-hanging fruit of 2FA, but guess what: even this well-beaten piñata of 2FA stopped 76% of targeted attacks and nearly 100% of automated bulk phishing attacks!

Microsoft recently published some numbers to similar effect, that the risk of account compromise is reduced by 99% using multi-factor authentication (MFA). I’d say 2FA is far from dead in that context.

Yes, we should get rid of the 2 in 2FA, long live MFA*

The biggest reason for this is that users can be more secure, and less inconvenienced, when they have access to multiple ways of authenticating rather than a single token paired with a password that can be lost, or a phone that can be upgraded and lock a user out. Without promoting one vendor, I can say thoughtfully that I have several methods securing my key accounts, and that diversity of options, I believe, is the key to giving our users the power of choice in how they want to log in. That power is how we eventually reduce passwords to an edge use case. The key is that more sites need to support those methods to incentivize adoption. We're not there yet, but the last few years show a lot of promise toward eventually achieving that goal.

The reality is, even the coolest methods of authentication will eventually find a vulnerability. History proves this. But we don’t throw the baby out with the bathwater when those are discovered. We fix it, learn from it, and stay secure. Let’s leave the theater to the actors, where it belongs.

* For another blog post, but I’m wondering if MFA needs to be retired as a concept and we simply focus on the strength of authentication. To be continued…

SMS as a 2FA Method

I’ll be the first one to admit that I jumped the gun a little when Twitter announced that their founder, Jack Dorsey, had his account hijacked.

Initially, no one (including yours truly) had details on how his account was taken over. However, all fingers pointed at an SMS jacking, which wasn't terribly far from the truth. The assumption was that this allowed the attackers to use SMS, combined with some knowledge of Jack's password, to access the account. That turned out to be inaccurate:

So, yeah, it wasn’t a 2FA hack, but it did show how fragile an account can be when SMS is involved. There’s a reason NIST deprecated SMS as an out-of-band factor of authentication when they updated their 800-63-3 standard.

SMS is still dominant as a method of two-factor authentication because it presents one of the lowest barriers to entry, both for the identity provider (IdP) and the user. It is also arguably the least secure method, as Jack Dorsey's case illustrated.

That said, if SMS is your only option for 2FA, use it. In the case of Twitter, it is not (much to their credit). You can use an application-based method (such as Microsoft Authenticator, Google Authenticator, or Authy) and/or a security key leveraging FIDO's Universal 2nd Factor protocol (U2F). For account recovery, you can store a backup code in your password manager (or somewhere else safe).

A key can cost as little as $20 and can be used to secure a number of your critical accounts.

Twitter caught a lot of flak over this case, somewhat unfairly. That said, I do think they should remove SMS as a method for 2FA. Mobile apps for 2FA are pretty ubiquitous and a low barrier to entry for all users. So help your user base out, and turn it off. That wouldn't have saved Jack, but that's a post for another day.

Slides from Recent PAM Talk

This talk was originally given at RSA, but I was able to do an expanded version recently at IT Hot Topics. A few have asked for the slides, so here they are. I actually hope to write out the talk in full at some point as a blog post, but I have two more talks to write so probably not soon.

Applying a Rheostat to Local Admin Rights

“Think of everything you do in terms of a rheostat, rather than a switch.” Horseman Mark Rashid

In information security, we often measure the controls that are deployed in terms of the friction, or resistance that is presented to the user. In digital identity, we speak of balancing the user experience against the friction that is experienced in the name of security. Requiring two-factor authentication is a good example.

In information security, an equivalent of a rheostat might be the principle of least privilege: grant the user no more (or less) privilege than they need to succeed at a given task. If possible, suspend that privilege while it is not in use.

When first discussing privileged access management in any organization, regardless of their size, the first question I would ask of the stakeholders is: do your users retain local admin privileges on their desktop or laptop devices?

According to a recent study by Avecto, over 94% of the critical vulnerabilities that Microsoft patched over the last year could be mitigated by removing local admin access from a user's profile on their desktop. In the same study, that number rises to 100% for Edge and Internet Explorer vulnerabilities if the user is running a lower-privilege profile for their browsing. In cybersecurity, it is often said that there are no 'silver bullets' in protecting users, but this one gets pretty darned close.

Removing local admin rights can feel like IT is throwing a switch on privilege. That can be seen sometimes as an extreme measure to protect users. I think that depends greatly on how it is communicated, and how the experience is delivered. Is it a switch, or does the resistance vary, like a rheostat?

Justin Richer of Bespoke Identity echoes this concept in a recent blog:

In physical systems, friction has a way of wearing out parts and causing mechanisms to fail. Otherwise productive energy is lost as heat to the environment. It’s no wonder we use it as a metaphor in computer science and seek to eliminate it. But at the same time, friction is also responsible for the ability to stop and start motion. For things like wheels and pulleys to work, they need friction between certain parts. In other words, friction in physical systems can be useful, but only when it exists as a tool and not as a byproduct.

I’d like to posit that not every action the user can take in an application should be equally easy. Instead of being eliminated, friction in a user experience needs to be carefully controlled. For example, if an action is destructive, especially if it can’t be undone, then it’s generally a very good idea to stop the user before they break everything and make sure they realize what they’re doing.

Ray Hunt is often credited with being one of the original thinkers behind natural horsemanship. When working with horses, he thought it was important to “make the wrong thing hard and the right thing easy.” That seems to be a pretty solid UX principle. How can we apply this when the user is working away on their laptop or other computing devices?

Extending Justin's message, executing at a privilege higher than the user's should include some friction, but how much? Are you formatting a system partition on a disk? Probably high friction. Are you updating your mouse drivers? Probably low friction. How about installing new software? That probably depends. If it is a known publisher with a signed distribution (possibly on a whitelist of apps), perhaps we give the user no friction. Right now we get a binary method instead: you either have the keys to your PC/device kingdom, or you don't.
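To make that concrete, here is a minimal sketch of what a rheostat-style elevation policy might look like. Everything in it, the action names, the publisher whitelist, and the friction levels, is hypothetical; the point is only to show resistance dialed to the risk of the action rather than one global switch:

```python
from enum import Enum

class Friction(Enum):
    NONE = 0      # silent allow
    CONFIRM = 1   # one confirmation prompt
    ELEVATE = 2   # re-authenticate, or prompt for a second factor
    DENY = 3      # route through an admin or the help desk

# Hypothetical whitelist of trusted signing publishers.
TRUSTED_PUBLISHERS = {"Known Good Software Co."}

def friction_for(action, signed_by=None):
    """Dial resistance up with risk instead of using an all-or-nothing switch."""
    if action == "update_mouse_driver":
        return Friction.NONE
    if action == "install_software":
        # Signed software from a whitelisted publisher sails through;
        # an unknown publisher triggers re-authentication.
        return Friction.NONE if signed_by in TRUSTED_PUBLISHERS else Friction.ELEVATE
    if action == "format_system_partition":
        return Friction.DENY
    return Friction.CONFIRM  # default: ask once
```

A real EPM product layers signals like code signatures, file reputation, and parent process on top of this, but the shape of the decision is the same.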

We had some early experiences with a form of variable friction starting with Windows Vista (through Windows 10) and its UAC, or User Account Control. By default, UAC was set to high, which meant the user had to click through a prompt every time they installed software, updated a non-Windows driver, or executed any of a variety of functions that could result in system changes. The problem is, this wasn't really a rheostat; it was a switch. The closest thing to a rheostat (though still, not really) was a global slider that determined when the user would be challenged during those events. For users, this often became a game of "how do I make this window go away permanently?" From a security perspective, that is a disastrous result. A simple search for "disable UAC" shows how effective this has become.

In the enterprise context, we have a little more control. We can prevent users from altering UAC settings. We can also revoke their local admin privileges. But we're still back to the old switch pattern. Probably 80% of the time, this isn't a problem. But when a VP needs to install a new (non-standard) conferencing client to collaborate with a partner, lacks the rights, and has no one immediately available to help, the phone calls begin.

This is not to say we lack solutions for this today. There are a few vendors in the enterprise privilege management (EPM) space that can help with this problem, leveraging a variety of controls. But how many companies focus on this as an early priority in their overall security strategy? Based on the latest Verizon Data Breach Investigations Report (DBIR), far too few. There are many things to note in the report, but the one that got my attention is that 88% of breaches still leverage methods mentioned in the 2014 report.

Purchasing an EPM tool isn’t a requirement, especially for smaller companies. But, once you get into scale challenges, EPM solutions will make deployment and management much easier.

If you want to eat your own dog food, yank the local admin privileges from the account you are viewing this post from (if you haven’t already). Then make a log of the number of times you’ve had to leverage an admin credential to do your activities on the device. I did, and it surprised me how little I actually needed it.

EPM vendors have something going for them, but I would love a low-cost consumer version of this capability. Start with a whitelist of the top 100 consumer applications and perhaps grow it from there with vendors that have good release/update hygiene. Make this tool more of a rheostat, and only increase the resistance when the user is trying to do something that incurs proportionate risk, like opening an attachment from an email that results in changes to the system. Our users will be happier, and more secure.

Why UX Matters or How Color (and other) Choices Can Ruin an Identity Experience for Users

I’m not writing this to shame a company, though I do plan to share this post with them in hopes that they can make some adjustments that will benefit customers in the future. As such, I’ll do my best to mask their identity as much as reasonably possible.

Before doing so, I want to back up a second. When I am attempting to convey to someone how critical digital identity is to their product or service, I start with this premise: The experience of managing their digital identity is often their very first interaction with your product or service. If a login is required, it is usually the proverbial front door every time they use your service. Getting that right, consistently, is critical to your success.

Last Sunday, I had an interesting UX lesson in how colors can influence user choices and, in this case, result in a horrible experience trying to manage an identity/account. To be clear, it wasn't just colors that created the experience; I'll illuminate the additional issues below.

Due to an illness, I was trying to access my remote care service that lets me speak with a doctor for basic first aid/primary care. It is a terrific service for times when I have poison ivy (usually once a year) or an average ear infection (not yearly, but pretty common). It usually saves me a primary care visit, and I get a script called into my pharmacy pretty quickly. In some years, I talk to them more than I do my primary care physician. It is usually a huge time and money saver.

To expedite receiving a call, I have a profile set up thru their website. I did this a few years ago. Today, I tried to log in, but they had changed their website since I last visited (I think), and this is what I was presented with (pardon the masking, but I'm trying to be helpful, not critical):

Now, bear in mind, this particular case was somewhat urgent, so time was of the essence. I quickly scanned the screen and couldn't remember whether I was considered a client or a member. The bright blue login button was for members, but the bright blue section below it correlated to businesses trying to partner with them. That created some confusion, so I chose the white login button for the client login portal.

Using my 1Password shortcut, I attempted to log in. No luck: bad username or password. My username is a little complex, so I tried a few more times for good measure. No joy. Well, the website had changed; maybe they force a password reset after a design change, and maybe I missed the notice or it was dumped as spam. So I initiated a password reset, and got this screen.

Seems straightforward, so I input my username and email address. The system accepts my parameters and I get a reset link sent to my email address. I click on the link and get this:

That’s odd. Naturally, the security geek immediately starts wondering if I have a man-in-the-middle attack going on, so I attempt it again. Same result. Once more, no luck.

At this point, I just call the 800 number to request a call. After a wait of about 40 minutes (unusual, given my previous experience) I get an attendant and we navigate the process to get a doctor queued to call me.

Now, it may be blindingly obvious to some (clearly, not me) that I had gone to the wrong portal. I never thought to go back and attempt to use the member portal instead. At the time, I didn't even think there were two portals. After talking with the service operator, she initiated a manual password reset for me and naturally told me to go to THIS page:

A ha! I’m masking this page some, but the rest of the screen makes it quite clear this was enabled for customers of the service. Naturally, armed with my new password I was able to login and update my password and security question. So I was off on the wrong branch of the site flow the whole time. A single, understandable, but ultimately incorrect choice resulted in almost an hour of wasted time. Besides the lessons learned for yours truly, I think there are a few for the vendor.

First, proper error handling is one of the first key tests for an effective user experience. If I’m using a valid member portal user ID on the client portal, maybe test the ID against the member portal and offer to redirect? That would have avoided this entirely.
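As a sketch of that first lesson, the failed-login handler could probe the other portal's directory before falling back to a generic error. All names and identity stores below are hypothetical stand-ins:

```python
# Hypothetical identity stores standing in for the member and client directories.
MEMBER_IDS = {"jane.member"}
CLIENT_IDS = {"acme-clinic"}

def login_error(portal, user_id):
    """On a failed login, check the other portal's store before showing a generic error."""
    if portal == "client" and user_id not in CLIENT_IDS and user_id in MEMBER_IDS:
        return "That ID belongs to the member portal. Redirect there?"
    if portal == "member" and user_id not in MEMBER_IDS and user_id in CLIENT_IDS:
        return "That ID belongs to the client portal. Redirect there?"
    return "Invalid username or password."
```

One caveat: hinting at where an ID exists enables account enumeration, so a real implementation would weigh that against the usability win, perhaps only offering the hint after verifying the account's email.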

Second, while I don’t know that their identity stores are unified or linked, I was able to initiate a reset of my member user ID’s password from the client portal. That’s bad. Had that failed, I might have at least suspected my ID was messed up and gone a different route. Again, checking that ID against the member portal may have saved a step here. Either way, accepting the member portal ID as valid and sending me a reset link to the client portal that kicked back with an expired token reinforced the idea that I was in the right place but something was broken. This ultimately ties into lesson one regarding error handling.

Next, reconsider the color choices on the main page. Perhaps align the member login color with the member solicitation section, and the client login with the client solicitation color. Consistent coloring can reinforce users' choices when they are unsure.

Also, maybe reconsider the ‘client’ term vs. member? I realize the website eventually clarifies, but maybe consider the term ‘partner’? Member vs. Partner is a pretty clear distinction. I don’t think this is critical, but it could be useful. I know patient isn’t in vogue these days, but the patient portal likely would have landed me in the right spot.

Finally, some language on each portal page to assist the user if they selected the wrong portal might be beneficial. The client portal in particular is fairly sparse. They do a good job with the member portal (if I had actually clicked on it).

Now, in full disclosure, I now also have their mobile app installed, which has a significantly better user experience. If I were to guess, it is designed for members only. Therefore, the confusion I had with the dueling web portals couldn’t happen. It also has TouchID/FaceID integration so that’s even better. Aligning the UX of mobile with the web site would be a nice next step to get an even greater consistency for the customer. They should also market their mobile app on the web page.

So, in reality, two or three hopefully minor changes could improve this vendor's client UX considerably. I was fortunate, and persistent, so this ended well. But what if a user, put off by the wait time and the password reset problem, had gone to the ER or Urgent Care instead (this happened on a Sunday)? That's a huge difference in cost, and opportunity cost, for whoever was behind me in line.

While this dealt with a more serious type of service experience, businesses undergoing digital transformations should consider hiring people who can look at these flows (better than I do, as I am not a UX expert) and give them proper guidance. Even if your service is selling t-shirts or fidget spinners, helping your users navigate your service easily from an identity context can be the difference between a sale and a closed browser. Or better, you've created a repeat customer.

Deploying Identity Solutions – ‘Field of Dreams’ Doesn’t Work

(Note: this topic is background for a panel that I’m participating on June 20th at the Cloud Identity Summit, in Chicago, Illinois. I wrote this in hopes of informing some of the context around the panel, though I’m sure it will be revisited in some respect during our session.)

Knock, Knock: Identity is here. Identity Who? Exactly.

Tuesday, June 20th, 4:20pm, Chicago Ballroom IX

The genesis for this panel took place during dinner following the Ping Identity conference in New York. Rob Davis from TIAA & I were talking about some of our challenges in deploying identity solutions, especially ones where customer, stakeholder, or developer engagement is required. In other words, pretty much everything except directory synchronization. Even governance solutions, like certification or privileged access management, that had the benefit of the 'stick' approach to service adoption, seemed to lag in engagement even when participating wasn't necessarily voluntary. You could lead the horse to water (you knew there would be a horse analogy, right?), but you couldn't make them drink.

The simple reality was, this is no 'Field of Dreams'. We built it, but they didn't come to participate. Password recovery and management solutions are probably the easiest example of this failure. Nearly every enterprise worth its salt has deployed a password management and recovery product, and yet password recovery is perpetually listed as the number one reason users call the help desk!

Rob & I both agreed that this would be an excellent subject for a talk at CIS. So I commenced finding the right people who could both explain their own challenges in this space and hopefully offer up solutions that might help others, including myself, succeed in the future. Between us, Rob & I had both financial services and healthcare/life sciences covered, but I wanted diversity of perspective. Through some networking, I think we put together a really great breadth of knowledge and experience across many industries. In addition to yours truly, we also have:

Bernard Diwakar – Security & IAM Architect at Intuit

Frank Villavicencio – CPO, Security Management Services at ADP

Steve Hutchinson – Principal Identity Architect at GE

And finally, no panel is successful without an awesome moderator, so naturally I asked Ian Glazer of Salesforce, Kantara, & IDESG if he’d do the honors in spite of his incredible schedule at the conference. Some promise of bourbon may have been part of the exchange, but in the end I think we’ve got a killer lineup of identity pros that will share their wit, wisdom, and experience on this important subject.

But wait! Part of what will make this a successful session is great questions and shared experiences from the audience. So bring your own stories and let’s make this a conversation!

Unfortunately, the scheduling gods put Rob’s talk against the panel, so we had to go to the bullpen. See you in Chicago! If you can’t make it, follow the action using #CloudIDSummit tag on Twitter.

RSA Thoughts, Part 1

(photo credit: Brian Campbell)

I think teaching eviscerated my time for blogging. I'm going to try to put more energy into it this year. Naturally, I'm going big on this revival with a two-part post about my experience at the RSA Conference, to the best of my knowledge the largest security conference on the planet (especially if you count their global adjuncts).

This was my first RSA, both as an attendee and a speaker. I thought Oracle OpenWorld was huge. Good gravy. I think estimates had it at about 45,000 attendees. In spite of the size, kudos to RSA and their management vendor, who run an incredibly tight conference for that scale.

On one hand, it's awesome that we have so many people, vendors, and speakers focused on the information security space. On the other, it's a touch overwhelming and nearly impossible to get to all the content you want. Overall I think that's a good problem to have, because this is a tough problem to solve. It was refreshing that they featured an identity track (a first, I believe) at the conference.

The good news is they make much of the content available online, including some videos of the sessions. Mine has audio but no video, which isn’t a loss, heh. It isn’t very technical, but has a solid foundation on some of the key elements and challenges that go into a Privileged Access Management program. I’ve delivered this talk at the Cloud Identity Summit, BSides Charlotte, and IT Hot Topics, but this was definitely the most mature version of the talk because of the time that has passed and the lessons learned.

My talk was on Thursday, which I originally loved because I thought it would give me more time to prepare. As it turned out, I didn't need it: this talk is by far the most mature of the ones I've developed, so very little additional time was required to update it for the conference. I don't know that I would have wanted to go on Tuesday, as there were some serious heavyweights in the industry to compete against. My biggest concern was keeping my energy balanced throughout the sessions, networking, and vendor parties so that I could be as sharp as possible when it came time to take the stage. It required missing a few tracks, but I eventually achieved that.

I discovered in the hours leading up to my talk that seat reservations had reached the point where an overflow room was created in case demand exceeded capacity. That was extremely flattering, but I did my best not to make it bigger than it was. The talk wasn't changing, nor was the stage. I was thrilled that so many people were interested in this area, because I think it sometimes gets lost between the traditional domains of identity & access management and information security. Clearly others felt the same way, given the number who turned up.

Overall, I couldn’t be more pleased with how the talk went. Even though the hall was a little dark so they could broadcast it to the overflow room, I could feel the engagement and energy from the audience. It showed when I finished, as the questions that emerged were insightful and thought provoking. Once we wrapped up, I went outside and answered even more questions, happily, for another 40 minutes. Such great conversation with such intelligent and thoughtful people! I retired to the speaker’s lounge to decompress a little and make some mental notes from some of the questions that were asked. (photo credit: Scott Bollinger)

I know I’m kind of working this post backwards, but the next chapter will have some of my takeaways from the conference, both in hallway conversations and some of the tracks and keynotes I attended.

I’m writing this post at the airport with a feeling of extreme gratitude for the opportunity that was presented to me, and all of the support that I’ve received from countless people to help make this conference a personal and professional success.

PS. Thanks to Ian Glazer for the support.

How Do I Get Into InfoSec?

This is a question I hear often, in a variety of forms that I won't belabor here. It's always difficult to answer in a short conversation. To be honest, the point of this post is really self-serving: mainly, to give the folks I speak with an easy place to look, one I can remember to point to when having this conversation.

I could make an effort to answer this question, but frankly I think anything I could offer would be redundant and not as expertly versed as some people I respect who have already attempted it. Some conversations I had at RSA are what encouraged me to finally get something written.

Two people have done a really nice job with this subject. The first, in a somewhat older post from 2014, is Daniel Miessler. That isn't meant to sell his contributions short, far from it; the post just provides a really nice overview of getting into this field. His blog is also excellent and quite prolific.

Next is Lesley Carhart, a Digital Forensics & Incident Response (DFIR) expert, a self-described “Full Spectrum Cyber-Warrior Princess”, and an all-around thoughtful person. She has a terrific blog (and posts way more frequently than I do, though I hope to change that).

Of particular importance is the fact that she posts frequently to an advice section of her blog that often includes career guidance. To wit, here’s a link to several terrific posts on building an infosec career. I would encourage anyone to start with the Chapters 1-3 Megamix.

I will post some follow-up thoughts on this subject, particularly a more specific consideration for folks wanting to learn more about identity & access management, but I hope this helps some people. Oh, and if you’re not following these folks on twitter, you’re missing out.

Dropbox, 2FA, FIDO, and You

Dusting the blog off for a PSA. Hopefully most of you are aware of the news surrounding Dropbox’s 2012 hack and some of the new details surrounding it.

Not going to say too much beyond this but simply request that my friends (or anyone who reads this) do the following:

  1. If you’re using Dropbox, change your password, even if you’ve done it since 2012. Make the new password unique (avoid reuse, especially for services like this) and strong.
  2. Please, please, please set up two-factor authentication (2FA). This article walks you step by step through the process. Do NOT opt for text messages as the form of verification. Easiest is using a mobile app. If you want a recommendation, go with Authy. It’s a terrific mobile app and syncs across devices. It has the added benefit of working with a number of common services like Gmail, Facebook, Amazon, Microsoft Live, WordPress, Evernote, Tumblr, Slack, and I’m sure a list of others.
  3. Consider, in addition to #2, buying a FIDO U2F-compliant security token, like a YubiKey, to secure the account. It’s not as convenient for mobile, but is more secure in my opinion. Doing 1 & 2 gets you solid. #3 is even better.

Finally, seriously consider setting up 2FA for all your accounts that offer it. If you aren’t sure whether your service offers it, check here. If they don’t, tell them to get it or consider a competitor. If they only have SMS/text for 2FA, consider a competitor.
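For the curious: the rotating codes that apps like Authy display are TOTP codes (RFC 6238), and the math behind them is surprisingly small. Here’s a minimal sketch using only Python’s standard library; the base32 secret below is a well-known demo value, not tied to any real account:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // interval            # 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; prints the current 6-digit code
```

The shared secret is what the QR code encodes at enrollment, which is also why SMS-based verification is weaker: there’s no shared secret on your device, just a code in transit that can be intercepted or SIM-swapped.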

Vague Signals & Behavioral Analytics

Gartner analyst Anton Chuvakin shreds the myth that excelling in detection of threats means you should be equally good, or better, at preventing them. For some (including myself), this should be obvious. Preventing, detecting, and responding to security threats should be treated and evaluated as independent disciplines. Excellence in one doesn’t guarantee a level of maturity in either of the others. Unfortunately, given that some security vendors insist on perpetuating this myth, Chuvakin by necessity eviscerates this false premise with several good arguments. I’m only going to focus on one because of its impact on identity and user behavior analytics.

One of the points that Chuvakin makes regarding prevention is that signals in this area are often vague, making prevention with this level of data impossible, unless you want angry users storming your gates for being denied access. This is particularly true when evaluating the activity or behavior of a user. While some machines are capable of assigning a risk score to a given activity, do we really want to block a connection when it barely crosses a threshold that may or may not be valid? The smarter approach would be to escalate the user’s request to another level of authentication. Even if the challenge succeeds, it might make sense to flag the activity for human review.

If I log in from a London-based IP address 6 hours after my last known activity (from the US), it might be prudent to have the system in question challenge me for another factor of authentication to ensure the credentials have not been compromised. If no response is given or the session is terminated, flagging the account for review would be prudent. Even better, if the analytics engine has access to my travel & badging data (both viable points of integration), the ambiguity of the event could be reduced (or the event escalated) quickly. Human intervention may still be useful here, but automation becomes at least feasible once we can raise or lower the risk score of the event based on the user’s response.
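The core of that impossible-travel check can be sketched in a few lines: compute the implied travel speed between two logins and map it to an action. The coordinates, thresholds, and function names here are illustrative assumptions, not any particular product’s logic:

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points on Earth, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def assess_login(prev: tuple, curr: tuple, max_kmh: float = 900.0) -> str:
    """Map implied travel speed between two logins (lat, lon, epoch_secs)
    to an action. 900 km/h approximates airliner cruising speed."""
    dist = haversine_km(prev[0], prev[1], curr[0], curr[1])
    hours = max((curr[2] - prev[2]) / 3600.0, 1e-6)
    speed = dist / hours
    if speed > max_kmh:
        return "step_up_mfa"      # impossible travel: challenge for another factor
    if speed > max_kmh * 0.5:
        return "flag_for_review"  # plausible but suspicious: log it for a human
    return "allow"

# US East Coast login, then a London IP 6 hours later: ~5,570 km implies
# over 900 km/h, so the engine should step up to an MFA challenge.
print(assess_login((40.7, -74.0, 0), (51.5, -0.13, 6 * 3600)))
```

Note the design choice: the vague signal never produces a hard block, only an escalation or a review flag, which is exactly the distinction between prevention and detection that makes angry-users-at-the-gate avoidable.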

The technology for this level of sophistication in behavioral analytics as a prevention mechanism is fairly mature, but adoption is still pretty nascent in most enterprises. I see this as one of the early challenges in developing a behavioral analytics program. The use case I described is pretty straightforward, but establishing baselines for user behavior, especially in large enterprises, is far more daunting. Integrating that knowledge with your access management tools & policies is another level of challenge. That doesn’t mean we shouldn’t attempt to do so, however.

As a side note, this is an area where the concept of Shared Signals intrigues me. As our identity fabric becomes more and more decentralized/federated, adding external events to our behavioral analytics engine only seems to make sense. Further, we still hold control over how to interpret those events vs. relying on a machine interpretation of an external event that raises a higher level of vagueness on what took place.

It stands to reason that detection activities would mature at a faster rate than prevention. Arguably, response activities can mature even faster, given appropriate resources. All three are worth investing in to protect company assets. But in the end, reality has to temper our expectations: achievement in one bears no necessary relationship to maturity in the other two.