Security Unlocked


Inside Insider Risk

Ep. 23

Throughout this podcast series, we’ve had an abundance of great conversations with our colleagues at Microsoft about how they’re working to better protect companies and individuals from cyber-attacks, but today we take a look at a different source of malfeasance: the insider threat. Now that most people are working remotely and have access to their company’s data in the privacy of their own homes, it’s easier than ever to access, download, and share private information.


On today’s episode, hosts Nic Fillingham and Natalia Godyla sit down with Microsoft applied researcher Rob McCann to talk about his work identifying potential insider risk factors and the tools that Microsoft’s internal security team is developing to stop them at the source.


In This Episode, You Will Learn:

• The differences between internal and external threats in cybersecurity 

• Ways that A.I. can factor into anomaly detection in insider risk management 

• Why the rise in insider attacks is making the issue easier to address


Some Questions We Ask:

• How do you identify insider risk? 

• How do you create a tool for customers that requires an extreme amount of case-by-case customization? 

• How are other organizations prioritizing internal versus external risks?


Resources:

Rob McCann’s LinkedIn:

https://www.linkedin.com/in/robert-mccann-004b407/


Rob McCann on Uncovering Hidden Risks:

https://www.audacy.com/podcasts/uncovering-hidden-risks-45444/episode-1-artificial-intelligence-hunts-for-insider-risks-347764242 


Insider Risk Blog Post:

https://techcommunity.microsoft.com/t5/security-compliance-identity/don-t-get-caught-off-guard-by-the-hidden-dangers-of-insider/ba-p/2157957 


Nic Fillingham’s LinkedIn:

https://www.linkedin.com/in/nicfill/ 


Natalia Godyla’s LinkedIn:

https://www.linkedin.com/in/nataliagodyla/ 


Related:

Security Unlocked: CISO Series with Bret Arsenault

https://SecurityUnlockedCISOSeries.com


Transcript

[Full transcript can be found at https://aka.ms/SecurityUnlockedEp23]


Nic Fillingham:

Hello and welcome to Security Unlocked, a new podcast from Microsoft where we unlock insights from the latest in news and research from across Microsoft security engineering and operations teams. I'm Nic Fillingham.


Natalia Godyla:

And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft Security. Deep dive into the newest threat intel, research and data science.


Nic Fillingham:

And profile some of the fascinating people working on artificial intelligence in Microsoft Security.


Natalia Godyla:

And now, let's unlock the pod.


Natalia Godyla:

Hello Nic, welcome to today's episode, how's it going with you?


Nic Fillingham:

Hello Natalia, I'm very well, thank you, I hope you're well, and uh, welcome to listeners, to episode 23, of the Security Unlocked podcast. On the pod today, we have Rob McCann, applied researcher here at Microsoft, working on insider risk management, which is us taking the Security Unlocked podcast into- to new territory. We're in the compliance space, now.


Natalia Godyla:

We are, and so we're definitely interested in feedback. Drop us a note at securityunlocked@microsoft.com to let us know whether these topics interested you, whether there is another avenue you'd like us to go down, in compliance. Also always accepting memes.


Nic Fillingham:

Cat memes, sort of more specifically.


Natalia Godyla:

(laughing)


Nic Fillingham:

All memes? Or just cat memes?


Natalia Godyla:

Cat memes, llama memes, al-


Nic Fillingham:

Alpaca-


Natalia Godyla:

... paca memes.


Nic Fillingham:

... memes. Yeah. Alpaca. Yeah, this is a really interesting uh, topic, so insider risk, and insider risk management is the ability for security teams, for IT teams, for HR to use AI and machine learning, and other sort of automation based tools, to identify when an employee, or when someone inside your organization might be accidentally doing something that is going to create risk for the company, or potentially intentionally uh, whether they have, you know, nefarious or sort of malicious intent.


Nic Fillingham:

So, it really- really great conversation we had with- with Rob about what is insider risk, what are the different types of insider risk, how is uh, AI and ML being used to go tackle it?


Natalia Godyla:

Yeah, there's an incredible amount of work happening to understand the context, because so many of these circumstances require data from different departments, uh, uniquely different departments, like HR, to try to understand, well is- is somebody about to leave the company, and if so, how is that related to the volume of data that they just downloaded? And with that, on to the pod.


Nic Fillingham:

On with the pod.


Nic Fillingham:

Welcome to the Security Unlocked podcast, Rob McCann, thank you so much for your time.


Rob McCann:

Thank you for having me.


Nic Fillingham:

Rob, we'd love to start with a quick intro. Who are you, what do you do? What's your day to day look like at Microsoft, what kind of products or technology do you touch? Give us a- give us an intro, please.


Rob McCann:

Well, I've been at Microsoft for about 15 years, I am a- I've been an applied researcher the entire time. So, what that means is, I get to bounce around various products and solve technical challenges. That's the official thing, what it actually means is, whatever my boss needs done, that's a technical hurdle, uh, they just throw it my way, and I have to try to work on that. So, applied scientist.


Nic Fillingham:

Applied scientist, versus what's a- what's a different type of scientist, so what- what's the parallel to applied science, in this sense?


Rob McCann:

So, applied researcher is sort of a dream job. So, when I initially started, they're sort of the academic style researcher, that it's very much uh, your production is to produce papers and new ideas that sort of in a vacuum look good, and get those out to the scientific community. I love doing that kind of stuff. I don't so much like just writing papers. And so, an applied researcher, what we gotta do, is we gotta sort of be this conduit.


Rob McCann:

We get to solve things that are closer to the product, and sort of deliver those into the product. So we get very real, tangible impact, but then we're also very much a bridge. So, part of our responsibility is to keep, you know, fingers on what's going on in the abstract research world and try to foster, basically, a large innovation pipe. So, I freaking love this job. Uh, it's exactly what I like to do. I like to solve hard technical problems, and then I like to ship stuff. I'm a very um ... I need tangible stuff. So I love it.


Nic Fillingham:

And what are you working on at the moment, what's the scope of your role, what's your bailiwick? (laughing)


Rob McCann:

My bailiwick is uh, right now I'm very much focused on IRM, which is insider risk management, and so what we've been doing over the last year or so ... insider risk management GA'd in February of 2020, I want to say, so Ignite today is a very festive sort of one-year anniversary type thing for that, along with the compliance solutions. So, over this last year, what we've done a lot of is sort of uh, build a team of researchers to try to tackle these challenges that are in insider risk, uh, and sort of bring the science to this brand new product. So, a lot of what I'm doing on a daily basis is, on the one hand, solve some technical things and get it out there, and on the other hand, build a team to strengthen the muscle, the research muscle.


Natalia Godyla:

So, let's talk a little bit more about insider risk management. Can you describe how insider risk differs from external risk, and more specifically, some of the risks associated with internal users?


Rob McCann:

It's uh, there's some overlap. But it's a lot different than external attack. So, first of all, it's very hard, not saying that external attack is not hard, I- I work with a lot of those people as well. But insiders are already in, right? And they already have permissions to do stuff, and they're already doing things in there. So there's not like, you have a- a ... some perimeter that you can just camp on, and try to get people when they're coming in the front door.


Rob McCann:

So that makes it hard. Uh, another thing that makes it hard is the variety of risks. So, different customers have different definitions of risk. So, risk might be um, we might want to protect our data, so we don't want data exfiltrated out of the company. We might want trade secrets, so we don't want people to even see stuff that they shouldn't see. We don't want workplace harassment, uh, we don't want sabotage. We don't want people to come in, and implant stuff into our code that's gonna cause problems later. It's a very broad space of potential risks, and so that makes it challenging as well.


Rob McCann:

And then I would say the third thing that makes it very challenging is, what I said, different customers want- have different definitions of risk. So it's not like ... like, I like the contrast to malware detection. So, we have these external security people that are trying to do all this sophisticated machine learning, to have a classifier that can recognize incoming bad code. Right? And sort of when they get that, like, the whole industry is like, "Yes, we agree, that's bad code, put it in VirusTotal, or wherever the world wants to communicate about bad code." And it's sort of all mutually agreed upon, that this thing is bad.


Rob McCann:

Insider risk is very different. It's um, you know, this customer wants to monitor these things, and they define risk a certain way. Uh, this customer cares about these things, and they want to define risk a certain way. There is a heightened level of customer preferences that have to be brought into the- the intelligence, to- to detect these risks.


Natalia Godyla:

And what does detecting one of those risks look like? So, fraud, or insider trading, can you walk through what a workflow would look like, to detect and remediate an insider attack?


Rob McCann:

Yeah, definitely. So- so, first of all, since it's such a broad landscape of potential damage, I guess you would say, first thing the product has to do is collect signals from a lot of different places. We have to collect signals about people logging in. You have to collect signals about people uploading and downloading files from a- from OneDrive, you have to ... you have to see what people are sharing on Teams, what people are ec- you know, emailing externally. If you want the harassment angle, you gotta- you know, you gotta have a harassment detector on communications.


Rob McCann:

So the first thing is just this huge like, data aggregation problem of this very broad set of signals. So that's one, which in my mind is a- is a very strong advantage of Microsoft to do this, because we have a lot of sources of signals, across all of our products. So, aggregating the data, and then you need to have some detectors that can swim through that, uh, and try to figure out, you know, this thing right here doesn't quite look right. I don't know necessarily that it's bad, but the customer says they care about these kind of things, so I need to surface that to the customer.


Rob McCann:

So, uh, techniques that we use there a lot are anomaly detection. Uh, so a lot of unsupervised type of learning, just to look for strangeness. And then once we surface that to the- the customer, they have to triage it, right? And they have to look at that and make a decision, did I really- do I really want to take action on this thing? Right? And so, along with just the verdict, like, it's probability 98% that this thing is strange, you also have to have all this explanation and context. So you have to say, why do I think this thing is strange?


Rob McCann:

And then you have to pull in all these things, so like, it's strange because they- they moved a bunch of sensitive data around, that- in ways they usually didn't, but then you also need to bring in other context about the user. This is very user-centric. So you have to say things like, "And by the way, this person is getting ready to leave the company." That's a huge piece of context to help them be able to make a decision on this. And then once the customer decides they want to make a decision, then the product, you know, facilitates uh, different workflows that you might do from that. So, escalating a case to legal, or to HR, there are several remediation actions that the customer can choose from.
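A minimal sketch of the kind of detection workflow Rob describes: aggregate a few per-user signals, score them with an unsupervised anomaly detector, and attach context (such as whether the user is about to leave) for triage. The signal names, values, and thresholds below are invented for illustration; this is not the product's actual pipeline.

```python
# Illustrative only: aggregate a few per-user daily signals, score them for
# "strangeness" with an unsupervised detector, and attach context for triage.
import numpy as np
from sklearn.ensemble import IsolationForest

users = ["alice", "bob", "carol", "dave"]
# Columns: files_downloaded, MB_shared_externally, usb_copies, prints
X = np.array([
    [12,   5, 0, 3],
    [15,   8, 1, 2],
    [10,   4, 0, 4],
    [480, 900, 7, 0],   # unusually heavy exfiltration-like activity
])

model = IsolationForest(contamination=0.25, random_state=0).fit(X)
strangeness = -model.score_samples(X)   # higher = more anomalous
is_outlier = model.predict(X) == -1     # strangest fraction gets flagged

# Context a reviewer needs to make a call, e.g. a resignation on file.
leaving_soon = {"alice": False, "bob": False, "carol": False, "dave": True}

for user, score, flag in sorted(zip(users, strangeness, is_outlier), key=lambda t: -t[1]):
    verdict = "REVIEW (anomalous + departing)" if flag and leaving_soon[user] else "ok"
    print(f"{user:6s} strangeness={score:.2f} -> {verdict}")
```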


Nic Fillingham:

On this podcast, we've spoken with a bunch of data scientists,


Nic Fillingham:

... and sort of machine learning folks who have talked about the challenge of building external detections using ML, and from what you've just explained, it sounds like you probably have some, some pretty unique challenges here to give the flexibility to customers, to be able to define what risk means to them. Does that mean that you have to have a customized model built from scratch for every customer? Or can you have a sort of a global model to help with that anomaly detection that then just sort of gets customized more slightly on top based on, on preferences? I, I guess my question is, how do you utilize a tool like machine learning in a solution like this that does require so much sort of customization and, and modification by the, by the customer?


Rob McCann:

That's, that's a fantastic question. So, what you tried to do, you scored on that one.


Nic Fillingham:

(laughs).


Rob McCann:

You try to do both, right? So, customers don't wanna start from scratch with any solution and build everything from the ground up, but they want customizability. So, what you try to do, I always think of it as smart defaults, right? So, you try to have some basic models that sort of do things that maybe the industry agrees is suspicious type, right? And you expose a few high-level knobs. Like, do you care about printing? Or do you care about copying to USB? Or do you want to focus this on people that are leaving the company? Like some very high level knobs.


Rob McCann:

But you don't expose the knobs down to the level of the anomaly detection algorithm and how it's defining distance and all the features it's using to define normal behavior, but you have to design your algorithm to be able to respect those higher level choices that the u- that the user made. And then as far as the smart default, what you try to do as you pr- you try to present a product where out of the box, like it's gonna detect some things that most people agree are sort of risky, and you probably wanna take a look at, but you just give the, you offer the ability to customize as, as people wanna tweak it and say, nah, that's too much. I don't like that. Or printing, it's no big deal for us. We do it. We're a printing house, right?
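A hedged sketch of the "smart defaults plus a few high-level knobs" idea: the customer flips coarse switches, and those switches decide which detectors run, without exposing the anomaly detection internals. All policy and detector names here are hypothetical.

```python
# Hypothetical sketch of "smart defaults plus a few high-level knobs."
# The customer flips coarse switches; detector internals stay hidden.
from dataclasses import dataclass

@dataclass
class InsiderRiskPolicy:
    monitor_printing: bool = True          # knob: do you care about printing?
    monitor_usb_copy: bool = True          # knob: do you care about USB copies?
    focus_on_departing_users: bool = True  # knob: scope to people leaving?

def enabled_detectors(policy: InsiderRiskPolicy) -> list:
    """Translate the customer's knobs into the detectors that actually run."""
    detectors = ["bulk_download_anomaly", "external_sharing_anomaly"]  # defaults
    if policy.monitor_printing:
        detectors.append("unusual_print_volume")
    if policy.monitor_usb_copy:
        detectors.append("usb_copy_anomaly")
    if policy.focus_on_departing_users:
        detectors.append("departing_user_sequence")
    return detectors

# A "printing house" customer simply turns that knob off.
print(enabled_detectors(InsiderRiskPolicy(monitor_printing=False)))
```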


Nic Fillingham:

Does a solution like this, is it geared towards much larger organizations because they would therefore have more signal to allow you to build a high fidelity model and see where there are anomalies. So, for example, could the science of the insider risk management work for a small, you know, multi hundred, couple hundred person organization? Or is it sort of geared to much, much larger entities, sort of more of the size of a, of a Microsoft where there are tens of thousands employees and therefore there's tens of thousands of types of signal and sort of volume of signal.


Rob McCann:

Well, you've talked to enough scientists. I look at your guys's guest list. I mean, you know, the answer, right, more data is better, right? But it's not limiting. So, of course, if you have tons and tons of employees in a rich sorta like dichotomy of roles in the company, and you have all this structure around a large company, if you have all that, we can leverage it to do very powerful things. But if you just have a few hundred employees, you can still go in there and you can still say, okay, your typical employees, they have this kind of activity. Weird, the one guy out of a 100 that's about ready to leave suddenly did something strange, uh, or you can still do that, right? So, you, you got to make it work for all, all spectrums. But more data is always better, man. Um, more signals, more data, bring it on. Let's go. Give me some computers. Let's get this done.


Natalia Godyla:

Spoken like a true applied scientist. So, I know that you mentioned that there's a customized component to insider risk management, but when you look across all of the different customers, are you seeing any commonalities? Are there clear indicators of insider threats that most people would recognize across organizations, like seeing somebody exfiltrate X volume of data, or a certain combination of indicators happening at once? I'm assuming those are probably feeding your smart defaults?


Rob McCann:

Correct. So, there's actually a lot of effort to go. So, I s- I said that we're sort of a bridge between external academic type research and product research. So, that's actually a large focus and it happened in external security too. As you get industry to sort of agree like on these threat matrices, and what's the sort of agreed upon stages of attack or risk in this case. So, yeah, there are things that everybody sort of agrees like, uh, this is fishy. Like, let's make this, let's make this priority. So, that, like you said, it feeds into the smart defaults. The same time we're trying to, you know, we don't think we know everything. So, we're working with external experts. I mean, you saw past podcasts, we talked to Carnegie Mellon, uh, we talked to MITRE, we talked to these sort of industry experts to try to make this community framework or, uh, language and the smart defaults. Uh, and then we try to take what we can do on top of that.


Nic Fillingham:

So, Rob, a couple of times now, you've, you've talked about this scenario where an employee's potentially gearing up to leave the, the company. And in this hypothetical situation, this is an employee that may be looking to, uh, exfiltrate some, some data on their way out or something, something that falls inside the scope of, of identifying and managing, uh, insider risk. I wonder, how do you determine when a user is potentially getting ready to leave the company? Is that, do you need sort of more manual signals from like an HR system because an employee might've been placed on a, on a, on a review, in a review program or review period? Or, uh, are you actually building technology into the solution to try and see behaviors, and then those behaviors in a particular sort of, uh, collection in a particular shape lead you to believe that it could be someone getting ready to leave the company? Or is it both or something else?


Rob McCann:

So, quick question, Nic, what are you doing after this podcast?


Nic Fillingham:

Yeah.


Rob McCann:

Do you want a job? Because it feels like you're reading some of my notes here (laughter). Uh, we, uh-


Nic Fillingham:

If you can just wait while I download these 50 gigs of files first-


Rob McCann:

(laughs).


Nic Fillingham:

... from this SharePoint that, that I don't normally go to, and then I sort of print everything and then I can talk to you about a job. No, I'm being silly.


Rob McCann:

No, I mean, I mean, you hit the nail on the head there. It's, uh, there are manual signals. This is the same case with say asset labels, like file labels, uh, highly sensitive stuff and not sensitive stuff. So, in both cases, like we want the clear signals. When the customers use our plugins or a compliance solution to tell us that, you know, here's an HR event that's about ready to happen. Like the person's leaving or this file's important. We are definitely gonna take that and we're gonna use it. But that's sort of like the scientists wanna go further. Like what about the stuff they're not labeling? Does that mean they just haven't got around to it? Or does that mean that it's really not important? Or like you just said, like, this guy is starting to email recruiters a lot, this is like, is he getting ready to leave? So, there's definitely behavioral type detection and inference that, uh, we're working on behind the scenes to try to augment what the users are already telling us explicitly.
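A toy illustration of what Rob describes: an explicit HR event is a strong signal, while inferred behavioral hints (like a spike in recruiter email) add weaker evidence. The signal names and weights are assumptions for illustration, not real product telemetry.

```python
# Toy sketch with assumed signal names: combine an explicit HR event with
# weaker behavioral hints that someone may be preparing to leave.
def departure_likelihood(signals: dict) -> float:
    score = 0.0
    if signals.get("hr_resignation_filed"):              # explicit connector signal
        score += 0.9
    if signals.get("recruiter_email_count", 0) > 5:      # inferred from behavior
        score += 0.3
    if signals.get("external_job_site_visits", 0) > 10:  # another weak hint
        score += 0.2
    return min(score, 1.0)

print(departure_likelihood({"recruiter_email_count": 8}))    # 0.3 - weak hint
print(departure_likelihood({"hr_resignation_filed": True}))  # 0.9 - strong signal
```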


Natalia Godyla:

So, what's the reality of insider risk management programs? How mature is this practice? Are folks paying attention to insider risk? Is there a gap here or is there still education that needs to happen?


Rob McCann:

Yeah. So, there have been people working on this a lot longer than I have, but I do have to say that things are escalating quickly. I mean, especially with the modern workforce, right? The perimeter is destroyed and everybody's at home and it's easier to do damage, right? And risk is everywhere, but some, you know, cold, hard numbers: the number of incidents is going up over the last two years. I think Gartner just came out and said that in the last two years, the number of incidents has gone up by about half. So, incidents are happening more, probably, maybe 'cause of the way we work now. The amount of money that companies are spending to address this problem is going up. I think Gartner's number was that the average went up several million over the last couple of years, um, they just sort of released an insider risk survey and more people are concerned about it. So, all the metrics are pointing up and it just makes sense with the way the world is right now.


Nic Fillingham:

Where did sort of insider risk start? What's sort of the, the beginning of this solution... what did the sort of incubation technology look like? Where did it start? Uh, are you able to talk to that?


Rob McCann:

I mean, sure. A little bit. So, this was before me, so a lot of this came out of, uh, DSRE, which is our, our sort of internal security team at Microsoft babysitting our own network. So, they had to develop tools to address these very real issues, and the guys that I did a podcast with before, Tyler Mirror and, and Robin, they, um, they sorta, you know, brought this out and started making it a proper product, to take all these technologies that we were using in-house and try to help turn them into a product to help other people. So, it sort of organically grew out of just necessity, uh, in-house. But as far as like industry, like, uh, Carnegie Mellon's, uh, CERT National Insider Threat Center, I think they've been, uh, studying this problem for over a decade.


Nic Fillingham:

And as a solution, as a technical solution, did it start with like, sort of basic heuristics and just looking for like hard coded flags and logs, or did it actually start out as a sort of a data science problem and, you know, the sort of basic models that have gotten more sophisticated over time?


Rob McCann:

Yeah. So, it did start, start out with some data science at the beginning as well. Uh, so of course you always have the heuristics. We do that in external attack too. Heuristics are very precise, they, uh, allow us to write down things that are very specific. And they're a very, very important part of the arsenal. A lot of people diss on heuristics, but they're a very im- very important part of that, that thing. But it also, it started out with some data science in it, you know, the anomaly detection is a big one. Um, and so there were already some models that they brought right from, uh, in-house to detect when stuff was suspicious.


Natalia Godyla:

So, what


Natalia Godyla:

... what's the future of IRM look like? What are you working on next?


Rob McCann:

Well, I mean, we could, you could go several ways. You know, there could be broadness of different types of risk. The thing that I enjoy the most is sort of the more sophisticated ways of doing newer algorithms, maybe for existing charters, or maybe broad charters.


Rob McCann:

Uh, one thing that, I- I'm very interested in lately is the sort of interplay between supervised learning and, and anomaly detection. So you can think of as, uh, semi-supervised. That's a thing that we've actually been playing with at Microsoft for, for a long time.


Rob McCann:

I've had this awesome, awesome journey here. I've, I've always been on teams that were sorta, like ... It's kinda like I've been an ML evangelist. Like, I always get to the teams right when they're starting to do the really cool tech, and then I get to help usher that in. So, I got to do that in the past with spam filtering, when that was important. Remember when Bill Gates promised that we were gonna solve spam in a, in two years or whatever. Those were some of the first ML models we ever did i- in Microsoft products, and even back then we're playing with this intersection of, you know, things look strange, but I know that certain spam looks like this, so how do you combine that sort of strangeness into sort of a semi-supervised stuff ...


Rob McCann:

That's the stuff that really floats my boat is ho- how do you, how do you take this existing technology that some people think of as very different ... There's unsupervised, there's supervised, uh, there's anomaly detection. How do you take that kinda stuff and get it to actually talk to each other and do something cooler than you could do on one set or the other? That's where I see the future from a technical standpoint behind the scene for smarter detectors, is how we do that kind of stuff.


Rob McCann:

Product roadmap, it's related to what we, we talked about earlier about the industry agreeing on threat matrices and customers telling us what's most important to them. That, that stuff's gonna guide, guide the product roadmap. Um, but the technical piece, there's so much interesting work to do.


Natalia Godyla:

When you're trying to make a hybrid of those different models, the unsupervised and supervised machine learning models, what are you trying to achieve? What are the benefits of each that you're trying to capture by combining them?


Rob McCann:

Oh, it's the story of semi-supervised, right? I have tons and tons of data that can tell me things about the distribution of activity, I just o-, d-, only have labels on a little bit of it. So, how do I leverage the distributions of activity that's unlabeled with the things that I can learn from my few labeled examples? And how do I get those two things to make a better decision than, than either way on its own?


Rob McCann:

It's gonna be better than training on just a few things in a supervised fashion, 'cause you don't have a lot of data with labels. So you don't wanna throw away all that distributional information, but if you go over to the distributional information, then you might just detect weirdness. But you never actually get to the target which is risky weirdness, which is two different things.
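A minimal sketch of that semi-supervised combination, on synthetic data: model "normal" activity from plentiful unlabeled examples, then spend the handful of labels on separating risky weirdness from benign weirdness by feeding the anomaly score into a small supervised model. The data and features are made up for illustration.

```python
# Minimal sketch on synthetic data: learn "normal" from plenty of unlabeled
# activity, then spend the few labels on separating risky weirdness from
# benign weirdness by giving the supervised model the anomaly score.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_unlabeled = rng.normal(size=(5000, 4))      # lots of unlabeled activity
X_labeled = rng.normal(size=(20, 4)) + 2.5    # a handful of reviewed cases
y_labeled = np.array([0, 1] * 10)             # 1 = confirmed risky, 0 = benign

# Step 1: model the distribution of activity with no labels at all.
density = IsolationForest(random_state=0).fit(X_unlabeled)

# Step 2: add a "how weird is this" feature so the scarce labels go further.
def with_weirdness(X):
    return np.column_stack([X, -density.score_samples(X)])

clf = LogisticRegression(max_iter=1000).fit(with_weirdness(X_labeled), y_labeled)

X_new = rng.normal(size=(3, 4)) + 2.5
print(clf.predict_proba(with_weirdness(X_new))[:, 1])   # P(risky weirdness)
```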


Nic Fillingham:

Is the end goal, though, supervised learning, so if you, if you have unsupervised learning with a small set of labels, can you use that small set of labels to create a larger set of labels, and then ultimately get to ... I'm horribly paraphrasing all this here, but, is that sort of the path that you're on?


Rob McCann:

So, we're gonna try to make the best out of the labels that we can get, right? But, I don't think you ever throw away the unsupervised side. Because, uh, I mean, this c-, this has come up in the external security stuff, as well, is if you're always only learning how to catch the things that you've already labeled, then you're never gonna really s-, be super good at detecting brand new things that you don't have anything like it. Right?


Rob McCann:

So, you have to have the ... It's sorta like the Explore-exploit Paradigm. You can think of it, at a very high level you can think of supervised as you're exploiting what you already know, and you're finding stuff similar to it. But the explore side is like, "This thing's weird. I don't know what it is, but I wanna show it to a person and see if they can tell me what it is. I wanna see if they like that kinda stuff."


Rob McCann:

Uh, that's sorta synergy. That's, that's a powerful thing.


Nic Fillingham:

What's the most sophisticated thing that the IRM solution can do? Like, have you been sort of surprised by the types of, sort of, anomalies that can be both detected and then sort of triaged and then flagged, or even have automated actions taken? Is there, is there a particular example that you think is a paramount sort of example of what, what this tech can do?


Rob McCann:

Well, it's constantly increasing in complexity. First of all, anybody who's done applied science knows how hard it is to get data together. So when I work with the IRM team, first of all, I'm blown away at the level of the breadth of signals they've managed to put together into a place that we can reason over. That is such a strong thing. So the, their data collection is super strong. And they're always doing more. I mean, these guys are great. If I come up with an idea, and I say, "Hey, if we only had these signals," they'll go make it happen. It is super, super cool.


Rob McCann:

As far as sophistication, I mean, you know, we start, we start with heuristics, and then you start doing, like, very obvious anomaly detection, like, "Hey, these, this guy just blew us out of the water by copying all these files." I mean, that's sort of the next level. And then the next level is, uh, "Okay, this guy's not so obvious. He tries to fly under the radar and sort of stay low and slow. But can we detect an aggregate? Over time he's doing a lot of damage." So those more subtle long-term risks. That's actually something we're releasing right now.
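A toy sketch of catching that "low and slow" pattern: no single day trips a per-day rule, but a rolling-window aggregate does. The volumes and thresholds are made up for illustration.

```python
# Toy sketch, made-up thresholds: "low and slow" copying never spikes on a
# single day but stands out when aggregated over a rolling window.
daily_mb_copied = [40, 55, 48, 60, 52, 47, 58, 61, 50, 45, 62, 49, 57, 53]

SINGLE_DAY_LIMIT = 500     # a per-day rule would never fire on this user
WINDOW_DAYS = 7
WINDOW_LIMIT = 300

for day, amount in enumerate(daily_mb_copied):
    window = daily_mb_copied[max(0, day - WINDOW_DAYS + 1): day + 1]
    if amount > SINGLE_DAY_LIMIT:
        print(f"day {day}: single-day alert ({amount} MB)")
    elif len(window) == WINDOW_DAYS and sum(window) > WINDOW_LIMIT:
        print(f"day {day}: low-and-slow alert ({sum(window)} MB over {WINDOW_DAYS} days)")
```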


Rob McCann:

Another very powerful paradigm that we're releasing right now is, not just individual actions, but very precise sequences of actions. So you could think of it, in external security terms, as a kill chain. Like, "They did this, and then they did this, and then they did this." That can be much more powerful than, "They did all three of those separately and then added together," if you know what I mean.


Rob McCann:

So that sort of interesting sequences thing, that's a very powerful thing. And once you sorta got these frameworks up, like, you can get arbitrarily sophisticated under the hood. And so, it's not gonna stop.
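A small sketch of the ordered-sequence idea: the same actions scattered independently carry less signal than seeing them occur in order. The event names here are hypothetical.

```python
# Small sketch with hypothetical event names: an ordered sequence of actions
# is treated as riskier than the same actions seen independently.
RISKY_SEQUENCE = ["download_sensitive", "create_archive", "upload_external"]

def contains_sequence(events, pattern):
    """True if `pattern` occurs in `events` in order (gaps allowed)."""
    remaining = iter(events)
    return all(step in remaining for step in pattern)

user_events = [
    "login", "download_sensitive", "browse_wiki",
    "create_archive", "send_teams_message", "upload_external",
]
print(contains_sequence(user_events, RISKY_SEQUENCE))                   # True
print(contains_sequence(list(reversed(user_events)), RISKY_SEQUENCE))   # False: order matters
```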


Nic Fillingham:

Rob, you talked about working on spam detection and spam filters as previous sort of projects you were working on. I wonder if you could tell us a little bit about that work, and I wonder if there's any connective tissue between what you did back then and, and IRM.


Rob McCann:

Yeah, so I've worked on a lot more than spam. So, I got hired to do spam, to do the research around the spam team, but it quickly, uh, it was this newfangled ML stuff that we were doing, and, uh, it started working on lots of different problems, if you can imagine that. And so we started working on spam detection, and, and phish detection. We started working on Microsoft accounts. We would, we would look at how they behave and try to detect when it looks like suddenly they've been compromised, and help people, you know, sort of lock down their accounts and get, and get protection.


Rob McCann:

All those things it's been cool to watch. We sorta, we sorta had a little incubation-like science team, and we would put these cool techniques on it and it would start working well, and then they've all sort of branched out into their own very mature products over the years. A- and they're all based very heavily on, uh, the sort of techniques that, that have worked along the way.


Rob McCann:

It's amazing how much reuse there is. I mean, I mean, let's boil down what we do to just finding patterns in data that support a business objective. That's the same game, uh, in a lot of different domains. So, yes, of course, there's a lot of overlap.


Nic Fillingham:

What was your first role at Microsoft? Have you always been in, in research on applied research?


Rob McCann:

I have always been a spoiled brat. I mean, I, I just get to go work on hard problems. Uh, I don't know how I've done it, but they just keep letting me do it, and it's fun. Uh, yeah, I've always been an applied researcher.


Nic Fillingham:

And that, you said you joined about 14 years ago?


Rob McCann:

Yep. Yep, yep. That was even back before, uh, the sort of cluster machine learning stuff was hot. So we, I mean, we used to, we used to take, uh, lots of SQL Servers and crunch data and get our features that way, and then feed it into some, like, single-box, uh, learning algorithms on small samples. And, like, I've got to see this progression to, like, distributed learning over large clusters. In-house first, we used to have a system called Cosmos in-house. I actually got to write some of the first algorithms that did machine learning on that. It was super, super rewarding. And now we have all this stuff that we release to the public and Azure's this big huge ... It's very, very cool to have seen happen.


Nic Fillingham:

Giving the listener maybe a, uh, a reference point for, for your entry into Microsoft-


Rob McCann:

(laughs)


Nic Fillingham:

... is there anything you worked on that's either still around, or that people would have known? I think, like, just the internal Cosmos stuff is, is certainly fascinating. I'm just wondering if there's a, if there's a touchstone on the product side.


Rob McCann:

Spam filtering for Hotmail. That was my first gig.


Nic Fillingham:

Nice! I, I cut my teeth on Hotmail.


Rob McCann:

Yeah, yeah-


Nic Fillingham:

Yeah, I was a Hotmail guy. I was working on the Hotmail team as we transitioned to Outlook.com.


Rob McCann:

Mm-hmm (affirmative).


Nic Fillingham:

And I was, uh, down in Palo Alto, I can't even remember. I was somewhere, where- wherever the Silicon Valley campus is-


Rob McCann:

SVC-


Nic Fillingham:

We were all in, like, a boar-, a boardroom waiting for the new domain to go live, and we got, like, a 15 minute heads-up. So I'm just Nic@Outlook.com. That's, that's my email address, and I got, I got my wife her first name at Outlook.com.


Nic Fillingham:

Were you there for that, Rob? Do you have a, did you get a super secret email address?


Rob McCann:

I was not there for the release, but as soon as it was out, I went and grabbed some for my kids. So I w-, I keep my Hotmail one, 'cause I've had it forever, but, uh-


Nic Fillingham:

Yeah.


Rob McCann:

... I got all my kids, like, the, the ones they needed. So.


Rob McCann:

It's amazing how much stuff came out of the, that, that service right there. So I talked about identity management that we do for Microsoft accounts now. I, that stuff came from trying to protect people, their Hotmail accounts. So we would build models to try to determine, like, "Oh, this guy's suddenly emailing a bunch of people that he doesn't usually," anomaly detection, if you can imagine, right? The-


Nic Fillingham:

Yeah-


Rob McCann:

... same thing works.


Rob McCann:

All that stuff, and then it sorta grew in, and then Microsoft had a bigger account, and then that team's kinda like, "Hey, you guys are doing this ML to detect account compromise, can you come, like, do some


Rob McCann:

... of that over here," and then it grew out to what it is today. A lot of things came from the OML days, it was very fun.


Natalia Godyla:

Thinking of the different policies organizations have and the growing awareness of those policies, over time, employees are going to shift their tactics. Like you said there are some who are already doing low and slow activities that are evading detection, so, how do you think this is going to impact the way you try to tackle these challenges, or have you already noticed people try to subvert the policies that are in place?


Rob McCann:

Yeah, so that's the, that's the next frontier, which is w-, you know, why I said we started just getting into, like, the low and slow stuff. It's gonna be like all other security, it's gonna be, "These guys are watching this thing, I gotta try something different."


Rob McCann:

Actually that's a good motivation for the sort of the high-level approach we're taking, which is tons of signals, so there's not very many activities you could do. You could print, copy to USB, you could upload to something, you could get a third-party app that does the uploading for you. There's not very many avenues that you could do that we're not gonna be able to at least see that happening.


Rob McCann:

So you couple that with some, that mountain of data with some algorithm that can try to pick out, "This is a strange thing, and this is in the context of somebody leaving." It's gonna be an interesting cat-and-mouse, that's for sure.


Natalia Godyla:

Do you have any examples of places where you've already had to shift tactics because you're noticing a user try to subvert the existing policies? Or are you still in the exploration phase trying to figure out what really, what this is really going to look like next?


Rob McCann:

So, right now I don't think we've had ... We haven't got to the phase yet where we're affecting people a lot. Uh, this is very early product, we're a year in. So, I don't see the reactions yet, but I, I guarantee it's gonna happen. And then we're gonna learn from that, and we're gonna say, "Okay, I have the Explore-exploit going. The Explorer just told me that something strange that I've never seen before happened." We're gonna put some people on that that are experts that figure out what that's gonna be. We're gonna figure out how to bring that into the fold of agreed-upon bad stuff, so we're gonna expand this threat matrix, right, as we go along? And we're gonna keep exploring. And that's the same for every single security product.


Nic Fillingham:

Rob, as someone that's been able to sort of come into different teams and, and different solutions and, and help them, as you say, sort of bring more academic or theoretical research into, into product, what techniques are you keeping your eye on? Like, what's, what's coming in the next two or three years, maybe not necessarily for IRM, maybe just in terms of, as machine learning, as sort of AI techniques are evolving and, and, and sort of getting more and more mature, like, what, where are you excited? What are you, what are you looking at?


Rob McCann:

So you want the secret sauce, is what you're asking for?


Nic Fillingham:

That's exactly what I want. I want the secret sauce.


Rob McCann:

(laughs) Um, well, I mean, there's two schools of thought. There's one school of thought which is, "You better keep your finger on the pulse, because the, the new up-n-comers, the whippersnappers are gonna bring you some really cool, cool stuff." And then there's the other school of thought which is, "Everything they've brought in the last ten years is a slight change of what they, was before, the previous ... It's a cycle, right, as with s-, i- ... Science is refinement of existing ideas.


Rob McCann:

So, I'm a very muted person that way, in that I don't latch on to the next latest and greatest big thing. Um, but I do love to see progress. I s-, just see it as more of a multi-faceted gradual rise of mankind's pattern-recognition ability, right?


Rob McCann:

Things that excite me are things that deal with ... Like, big data with big labels? Super, super cool stuff happening there. I mean, like, you know, who doesn't like the word deep learning, or have used it-


Nic Fillingham:

What's a big label? Is there a small label?


Rob McCann:

(laughs) No, I mean lots of labeled data. Like, uh-


Nic Fillingham:

Okay.


Rob McCann:

... yes.


Nic Fillingham:

Big data sets, lots of labels.


Rob McCann:

Yes. That stuff, um, that's exciting. There's a lot of cool stuff we couldn't do two decades ago that are happening right now, and that's very, very powerful.


Rob McCann:

But a lot of the business problems in security, especially, 'cause we're trying to always get this new thing that the bad guys are doing that we haven't seen before. It's very scarce label-wise. And so the things that excite me are how you inject domain knowledge, right? I talked about, we want customers to be able to sort of control on some knobs that you, like, focus the thing on what they think's important.


Rob McCann:

But it also happens with security analysts, because, there's a lot of very smart people that I get to work with, and they have very broad domain knowledge about what risks look like, and various forms of security. How do you get these machines to listen to them, more than them just being a label machine? How do you embed that domain knowledge into there?


Rob McCann:

So there's a lot of cool stuff happening. Uh, in that space, weak learning is one that's very popular. Came out of Stanford, actually. But I'm very la-, I'm very, very excited about what we can do with one-shot, or weak supervision, or very scarce labeled examples. I think that's a very, very powerful paradigm.
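A rough sketch of the weak-supervision idea Rob references (popularized by the Snorkel project out of Stanford): analysts encode their domain knowledge as noisy labeling heuristics, and the combined votes become training labels. This is a plain majority vote, not the actual Snorkel API, and every signal name is invented.

```python
# Rough sketch of weak supervision (Snorkel-style idea, not its actual API):
# experts write noisy labeling heuristics, and combined votes become labels.
RISKY, BENIGN, ABSTAIN = 1, 0, -1

def lf_big_usb_copy(e):               # each labeling function encodes one hunch
    return RISKY if e["usb_mb"] > 200 else ABSTAIN

def lf_departing_and_downloading(e):
    return RISKY if e["leaving_soon"] and e["downloads"] > 100 else ABSTAIN

def lf_quiet_business_hours(e):
    return BENIGN if 9 <= e["hour"] <= 17 and e["usb_mb"] == 0 else ABSTAIN

LABELING_FUNCTIONS = [lf_big_usb_copy, lf_departing_and_downloading, lf_quiet_business_hours]

def weak_label(event):
    votes = [lf(event) for lf in LABELING_FUNCTIONS]
    votes = [v for v in votes if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return RISKY if votes.count(RISKY) >= votes.count(BENIGN) else BENIGN

event = {"usb_mb": 350, "leaving_soon": True, "downloads": 250, "hour": 23}
print(weak_label(event))   # 1 -> becomes a (noisy) training label
```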


Nic Fillingham:

Doing more with less.


Rob McCann:

That's right.


Rob McCann:

And transfer learning, I'm sure you guys have talked to a lot of people about that. That's another one. A lot of things we do in IRM ... Well, in, in lots of security is you try to, like, leverage labeled, uh, supervised classification ... Like, think about HR events.


Rob McCann:

So, maybe I could, don't have a m-, a bunch of labeled, "These are IRM incidents" that I can train this big supervised classifier on. But what I can do is I can get a bunch more HR events, and I can learn things, like you said, that predict that an HR event is probably happening, right? And I chose that HR event, because that's correlated with the label I care about, right? So, I can use all that supervised machinery to try to predict that proxy thing, and then I can try to use what it learned to get me to what I really want with maybe less labels.
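A minimal sketch of that proxy-task idea, on synthetic data: learn the abundant proxy label first (an HR departure event), then reuse the proxy model's score as a feature for the scarce real target. The data, features, and setup are illustrative assumptions.

```python
# Minimal sketch on synthetic data: learn the abundant proxy label first
# ("an HR departure event happened"), then reuse its score as a feature for
# the scarce real target ("confirmed insider risk incident").
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Proxy task: plenty of labeled HR departure events.
X_hr = rng.normal(size=(10_000, 6))
y_hr = (X_hr[:, 0] + X_hr[:, 1] > 1.0).astype(int)
proxy_model = LogisticRegression(max_iter=1000).fit(X_hr, y_hr)

# Real task: only a few confirmed incidents.
X_irm = rng.normal(size=(30, 6))
y_irm = np.array([0, 1] * 15)

def add_proxy_score(X):
    # Transfer step: the proxy model's output becomes an informative feature.
    return np.column_stack([X, proxy_model.predict_proba(X)[:, 1]])

risk_model = LogisticRegression(max_iter=1000).fit(add_proxy_score(X_irm), y_irm)
print(risk_model.predict_proba(add_proxy_score(rng.normal(size=(2, 6))))[:, 1])
```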


Nic Fillingham:

Got it. My final IRM question is, from what I know about IRM, it feels like it's about protecting the organization from an employee who may maliciously or accidentally do something they're not meant to do. And we've used the example of an employee getting ready to leave the company.


Nic Fillingham:

What about, though, IRM as a tool to spot well-meaning, but, but practices that, that o-, expose the company to risk? So instead of, like, looking for the employee that's about to leave and exfil 50 gigs of cat meme data that they shouldn't, what about, like, just using it to identify, "You know what, this team's just sort of got some sloppy practices here that's sort of opening us up to risk"? We can use the IRM tool to go and find the groups that need, sort of, the extra training, and need to sort of bring them up to scratch. And so it's almost more of a, um, just thinking of it more in sort of a positive reinforcement sense, as opposed to sort of avoiding a negative consequence.


Nic Fillingham:

Is that a big function of IRM?


Rob McCann:

Yeah, I mean, I, I'm sorry if I didn't, uh, communicate that well, but, IRM is definitely intentional and unintentional. In s-, in some of the workflows, what you can do when we detect risky activity is just send an email to the, uh, to the employee and say, "Hey, this behavior is risky, change your ways, please," right?


Rob McCann:

So, you're right, it's, it can be a coaching tool as well, it's not just, "Data's gonna leave," right? Intentionally.


Nic Fillingham:

Got it. You've been very generous. This has been a great conversation. I wondered, before you leave us, do you have anything you would like to plug? Do you have a blog, do you have a Twitter? Is there a- another podcast? Which one were you on, Rob?


Rob McCann:

Uncovering Hidden Risks. I would also like to point you guys to, uh, an insider risk blog. I mean, we, we publish a lot on, on what's coming out and where the product is headed, so it's: aka.ms/insiderriskblog. That's a great place to sorta keep abreast of the technologies and, and where we wanna go.


Nic Fillingham:

That sounds good. Well, Rob McCann, thank you so much for your time. Uh, this has been a great conversation, um, we'll have to have you back on at some point in the future to learn more about weak learning and other th-, other sort of, uh, cool new techniques you hinted at.


Rob McCann:

Yeah. I appreciate it. Thanks for having me.


(music)


Natalia Godyla:

Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode.


Nic Fillingham:

And don't forget to tweet us @msftsecurity or email us at securityunlocked@microsoft.com with topics you'd like to hear on a future episode.


Nic Fillingham:

Until then, stay safe.


Natalia Godyla:

Stay secure.

More Episodes

7/21/2021

Discovering Router Vulnerabilities with Anomaly Detection

Ep. 37
Ready for a riddle? What do 40 hypothetical high school students and our guest on this episode have in common?Whythey can help you understand complex cyber-attack methodology, of course!In this episode of Security Unlocked, hostsNic FillinghamandNatalia Godylaare brought back to school byPrincipalSecurityResearcher,Jonathan Bar Or who discusses vulnerabilities in NETGEAR Firmware. During the conversation Jonathan walks through how his teamrecognized the vulnerabilities and worked with NETGEAR to secure the issue,andhelps usunderstand exactly how the attack workedusing an ingenious metaphor.In This Episode You Will Learn: How a side-channel attack worksWhy attackers are moving away fromoperating systemsand towards network equipmentWhy routers are an easy access point for attacksSome Questions We Ask: How do you distinguish an anomaly from an attack?What are the differences between a side-channel attack and an authentication bypass?What can regular users do to protect themselvesfrom similarattacks? Resources: Jonathan Bar Or’s Blog Post:https://www.microsoft.com/security/blog/2021/06/30/microsoft-finds-new-netgear-firmware-vulnerabilities-that-could-lead-to-identity-theft-and-full-system-compromise/Jonathan Bar Or’s LinkedIn:https://www.linkedin.com/in/jonathan-bar-or-89876474/Nic Fillingham’s LinkedIn: https://www.linkedin.com/in/nicfill/Natalia Godyla’s LinkedIn:https://www.linkedin.com/in/nataliagodyla/Microsoft Security Blog: https://www.microsoft.com/security/blog/ Related: Security Unlocked: CISO Series with Bret Arsenault https://thecyberwire.com/podcasts/security-unlocked-ciso-series
7/14/2021

Securing the Internet of Things

Ep. 36
Thereused to bea time when our appliances didn’t talk back to us, but it seems like nowadays everything in our home is getting smarter.Smart watches, smart appliances,smart lights-smart everything! Thisconnectivity to the internetis what we call the Internet of Things(IoT).It’s becoming increasingly common for our everyday items to be “smart,” and while thatmay providea lot of benefits, like your fridge reminding you when you may need to get more milk, it alsomeans thatall ofthose devices becomesusceptible to cyberattacks.On this episode of Security Unlocked, hostsNic FillinghamandNatalia Godylatalk toArjmandSamuelabout protecting IoT devices, especially with a zero trust approach.Listenin to learnnot onlyaboutthe importance of IoT security,but also what Microsoft is doing to protect againstsuchattacks and how you canbettersecurethesedevices.In This Episode You Will Learn: Whatthe techniquesareto verify explicitly on IoT devicesHow to apply the zero trust model in IoTWhat Microsoft is doing to protect against attacks on IoTSome Questions We Ask:What isthedifference between IoT and IT?Why is IoT security so important?What are the best practices for protecting IoT?Resources:ArjmandSamuel’s LinkedIn:https://www.linkedin.com/in/arjmandsamuel/Nic Fillingham’s LinkedIn:https://www.linkedin.com/in/nicfill/Natalia Godyla’s LinkedIn:https://www.linkedin.com/in/nataliagodyla/Microsoft Security Blog:https://www.microsoft.com/security/blog/Related:Security Unlocked: CISO Series with Bret Arsenaulthttps://thecyberwire.com/podcasts/security-unlocked-ciso-seriesTranscript:[Full transcript can be found athttps://aka.ms/SecurityUnlockedEp36]Nic Fillingham:(music) Hello and welcome to Security Unlocked, a new podcast from Microsoft where we unlock insights from the latest in new and research from across Microsoft's security, engineering and operations teams. I'm Nic Fillingham.Natalia Godyla:And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft Security, deep dive into the newest threat intel, research and data science.Nic Fillingham:And profile some of the fascinating people working on artificial intelligence in Microsoft Security.Natalia Godyla:And now, let's unlock the pod. (music)Natalia Godyla:Welcome everyone to another episode of Security Unlocked. Today we are joined by first time guest, Arjmand Samuel, who is joining us to discuss IoT Security, which is fitting as he is an Azure IoT Security leader a Microsoft. Now, everyone has heard the buzz around IoT. There's been constant talk of it over the past several years, and, but now we've all also already had some experience with IoT devices in our personal life. Would about you, Nic? What do you use in your everyday life? What types of IoT devices?Nic Fillingham:Yeah. I've, I've got a couple of smart speakers, which I think a lot of people have these days. They seem to be pretty ubiquitous. And you know what? I sort of just assumed that they automatically update and they've got good security in them. I don't need to worry about it. Uh, maybe that's a bit naïve, but, but I sort of don't think of them as IoT. I just sort of, like, tell them what I music I want to play and then I tell them again, because they get it wrong. And then I tell them a third time, and then I go, "Ugh," and then I do it on my phone.Nic Fillingham:I also have a few cameras that are pointed out around the outside of the house. 
Because I live on a small farm with, with animals, I've got some sheep and pigs, I have to be on the look out for predators. For bears and coyotes and bobcats. Most of my IoT, though, is very, sort of, consummary. Consumers have access to it and can, sort of, buy it or it comes from the utility company.Natalia Godyla:Right. Good point. Um, today, we'll be talking with Arjmand about enterprise grade IoT and OT, or Internet of Things and operational technology. Think the manufacturing floor of, uh, plants. And Arjmand will walk us through the basics of IoT and OT through to the best practices for securing these devices.Nic Fillingham:Yeah. And we spent a bit of time talking about zero trust and how to apply a zero trust approach to IoT. Zero trust, there's sort of three main pillars to zero trust. It's verify explicitly, which for many customers just means sort of MFA, multi factorial authentication. It's about utilizing least privilege access and ensuring that accounts, users, devices just have access to the data they need at the time they need it. And then the third is about always, sort of, assuming that you've been breached and, sort of, maintaining thing philosophy of, of let's just assume that we're breached right now and let's engage in practices that would, sort of, help root out a, uh, potential breach.Nic Fillingham:Anyway, so, Arjmand, sort of, walks us through what it IoT, how does it relate to IT, how does it relate to operational technology, and obviously, what that zero trust approach looks like. On with the pod.Natalia Godyla:On with the pod. (music) Today, we're joined by Arjmand Samuel, principle program manager for the Microsoft Azure Internet of Things Group. Welcome to the show, Arjmand.Arjmand Samuel:Thank you very much, Natalia, and it's a pleasure to be on the show.Natalia Godyla:We're really excited to have you. Why don't we kick it off with talking a little bit about what you do at Microsoft. So, what does your day to day look like as a principle program manager?Arjmand Samuel:So, I am part of the Azure IoT Engineering Team. I'm a program manager on the team. I work on security for IoT and, uh, me and my team, uh, we are responsible for making sure that, uh, IoT services and clients like the software and run times and so on are, are built securely. And when they're deployed, they have the security properties that we need them and our customers demand that. So, so, that's what I do all a long.Nic Fillingham:And, uh, we're going to talk about, uh, zero trust and the relationship between a zero trust approach and IoT. Um, but before we jump into that, Arjmand, uh, we, we had a bit of a look of your, your bio here. I've got a couple of questions I'd love to ask, if that's okay. I want to know about your, sort of, tenure here at Microsoft. Y- y- you've been here for 13 years. Sounds like you started in, in 2008 and you started in the w- what was called the Windows Live Team at the time, as the security lead. I wonder if you could talk a little bit about your, your entry in to Microsoft and being in security in Microsoft for, for that amount of time. You must have seen some, sort of, pretty amazing changes, both from an industry perspective and then also inside Microsoft.Arjmand Samuel:Yeah, yeah, definitely. So, uh, as you said, uh, 2008 was the time, was the year when I came in. I came in with a, a, a degree in, uh, security, in- information security. 
And then, of course, my thinking and my whole work there when I was hired at Microsoft was to be, hey, how do we actually make sure that our product, which was Windows Live at that time, is secure? It has all the right security properties that, that we need that product to have. So, I- I came in, started working on a bunch of different things, including identity and, and there was, these are early times, right? I mean, we were all putting together this infrastructure, reconciling all the identity on times that we had. And all of those were things that we were trying to bring to Windows Live as well.Arjmand Samuel:So, I was responsible for that as well as I was, uh, working on making sure that, uh, our product had all the right diligence and, and security diligence that is required for a product to be at scale. And so, a bunch of, you know, things like STL and tech modeling and those kind of things. I was leading those efforts as well at, uh, Windows Live.Natalia Godyla:So, if 2008 Arjmand was talking to 2021 Arjmand, what would he be most surprised about, about the evolution over the past 13 years, either within Microsoft or just in the security industry.Arjmand Samuel:Yeah. Yeah. (laughs) That's a great, great question, and I think in the industry itself, e- evolution has been about how all around us. We are now engulfed in technology, connected technology. We call it IoT, and it's all around us. That was not the landscape 10, 15 years back. And, uh, what really is amazing is how our customers and partners are taking on this and applying this in their businesses, right? This meaning the whole industry of IoT and, uh, Internet of Things, and taking that to a level where every data, every piece of data in the physical world can be captured or can be acted upon. That is a big change from the last, uh, 10, 15 to where we are today.Nic Fillingham:I thought you were going to say TikTok dance challenges.Arjmand Samuel:(laughs)Natalia Godyla:(laughs)Nic Fillingham:... because that's, that's where I would have gone.Arjmand Samuel:(laughs) that, too. That, too, right? (laughs)Nic Fillingham:That's a (laughs) digression there. So, I'm pretty sure everyone knows what IoT is. I think we've already said it, but let's just, sort of, start there. So, IoT, Internet of Things. Is, I mean, that's correct, right? Is there, is there multiple definitions of IoT, or is it just Internet of Things? And then, what does the definition of an Internet of Things mean?Arjmand Samuel:Yeah, yeah. It;s a... You know, while Internet of Things is a very recognized acronym these days, but I think talking to different people, different people would have a different idea about how Internet of Thing could be defined. And the way I would define it, and again, not, not, uh, necessarily the authority or the, the only definition. There are many definitions, but it's about having these devices around us. Us is not just people but also our, our manufacturing processes, our cars, our, uh, healthcare systems, having all these devices around, uh, these environments. They are, these devices, uh, could be big, could be small. Could be as small as a very small temperature sensor collecting data from an environment or it could be a Roboticom trying to move a full car up and down an assembly line.Arjmand Samuel:And first of all, collecting data from these devices, then bringing them, uh, uh, using the data to do something interesting and insightful, but also beyond that, being able to control these devices based on those insights. 
So, now there's a feedback loop where you're collecting data and you are acting on that, that data as well. And that is where, how IoT is manifesting itself today in, in, in the world. And especially for our customers who are, who tend to be more industrial enterprises and so on, it's a big change that is happening. It's, it's a huge change that, uh, they see and we call it the transformation, the business transformation happening today. And part of that business transformation is being led or is being driven through the technology which we call IoT, but it's really a business transformation.

Arjmand Samuel:
It's really that our customers are finding that in order to remain competitive and in order to remain in business really, at the end of the day, they need to invest. They need to bring in all these technologies to bear, and Internet of Things happens to be that technology.

Nic Fillingham:
So, Arjmand, a couple other acronyms. You know, I think, I think most of our audience are pretty familiar with IoT, but we'll just sort of cover it very quickly. So, IoT versus IT. IT is, obviously, you know, information technology, or I think that's the, that's the (laughs) globally accepted-

Arjmand Samuel:
Yeah, yeah.

Nic Fillingham:
... definition. You know, do we think of IoT as a subset of IT? What is the relationship of, of those two? I mean, clearly, there are three letters versus two letters, (laughs) but there is a relationship there. Wh- wh- what are your thoughts?

Arjmand Samuel:
Yeah. There's a relationship as well as there's a difference, and, and it's important to bring those two out. Information technology, IT, as we know it now for many years, is all about enterprises running their applications, uh, business applications mostly. For that, they need the network support. They need databases. They need applications to be secured and so on. So, all these have to work together. The function of IT, information technology, is to make sure that the, there is availability of all these resources, applications, networks and databases as well as you have them secured and private and so on.

Arjmand Samuel:
So, all of that is good, but IoT takes it to the next level where now it's not only the enterprise applications, but it's also these devices, which are now deployed by the enterprise. I mentioned robotic arms. In a conference room you have all this equipment in there, projection and temperature sensors and occupancy sensors and so on. So, all of those beco- are now the, the add-on to what we used to call IT and we are calling it the IoT.

Arjmand Samuel:
Now, the interesting part here is in the industrial IoT space. Th- this is also called OT, operational technology. So, you know, within an organization there'll be IT and OT. OT's operational technology and these are the people or the, uh, function within an organization who deal with the, with the physical machines, the physical plant. You know, the manufacturing line, the conveyor belts, the robotic arms, and these are called OT functions.

Arjmand Samuel:
The interesting part here is the goal of IT is different from the goal of OT. OT is all about availability. OT's all about safety, safety so that it doesn't hurt anybody working on the manufacturing line. OT's all about environmental concerns. So, it should not leak bad chemicals and so on. Now, if you talk about security, and this is, like, a few years back when we would talk about security with an OT person, the, the person who's actually...
You know, these are people who actually wear those, uh, hard hats, you know, on, uh, a manufacturing plant. And if you talk about security to an OT person, they will typically refer to that guard standing outside and, and, uh, the-

Nic Fillingham:
Physical security.

Arjmand Samuel:
The physical security and the, the walls and the cameras, which would make sure that, you know, and then a key card, and that's about all. This was OT security, but now when we started going in and saying that, okay, all these machines can be connected to, to each other and you can collect all this data and then you can actually start doing something interesting with this data. That is where the definition of security and the functions of OT evolved. And by evolving, I mean different companies are at different stages, but they're now evolving where they're thinking, okay, it's not only about the guard standing outside. It's also the fact that the robotic arm could be taken over remotely and somebody outside, around the world, around the globe could actually be controlling that robotic arm to do something bad. And that realization and the fact that now you actually have to control it in the cyber sense and not only in the physical sense is the evolution that happened within OT.

Arjmand Samuel:
Now, IT and OT work together as well because the same networks are shared typically. Some of the applications that use the data from these devices are common. So, IT and OT, this is the other, uh, thing that has changed and, and we are seeing that change, is starting to work and come closer. Work together more. IoT's really different, but at the same time requires a lot of stuff that IT has traditionally done.

Natalia Godyla:
Hmm. So, what we considered to be simple just isn't simple anymore.

Arjmand Samuel:
That's life, right? (laughs) Yeah.

Natalia Godyla:
(laughs)

Arjmand Samuel:
(laughs)

Natalia Godyla:
So, today we wanted to talk about IoT security. So, let's just start with, with framing the conversation a little bit. Why is IoT security important and what makes it more challenging, different than traditional security?

Arjmand Samuel:
As I just described, right, I mean, we are now infusing compute in every environment around us. I mean, we talked a little bit about the conveyor belt. Imagine the conference rooms, the smart buildings and, and all the different technologies that are coming in. These are technologies, while they're good, they serve a scenario. They, they make things more efficient and so on, but they're also now a point of, uh, of failure for that whole system as well as a way for malicious actors to bring in code if possible. Imagine a scenario, or an attack, where a malicious actor goes into the conveyor belt and knows exactly the product that is passing through. And imagine that someone either takes the data and sells it to somebody or, worst case, stops the conveyor belt. That is millions of dollars of loss, very quickly, that the company might be incurring.

Arjmand Samuel:
So, now that there's infused compute all around us, we are now living in a target, in an environment which can be attacked, and which can be used for bad things much more than when it was only applications, networks and databases. Easy to put a wall around. Easy to understand what's going on. They're easy to lock down.
But with all these devices around us, it's becoming harder and harder to do the same.

Nic Fillingham:
And then what sort of, if, if we think about IoT and IoT security, one of the things that, sort of, makes it different, I- I th- think, and here I'd love you to explain this, sort of... I- I'm thinking of it as a, as a, as a spectrum of IoT devices that, I mean, they have a CPU. They have some memory. They have some storage. They're, they're running an operating system in some capacity all the way through to, I guess, m- much more, sort of, rudimentary devices but do have some connection, some network connection in order for instruction or data to, sort of, move backwards and forwards. What is it that makes this collection of stuff difficult to protect or, you know, is it difficult to protect? And if so, why? And then, how do we think about the, the, the potential vectors for attack that are different in this scenario versus, you know, protecting laptops and servers?

Arjmand Samuel:
Yeah, yeah. That's a good one. So, uh, what happens is you're right. Uh, IoT devices can be big and small, all right. They could be a small MCU class device with a real-time operating system on it. Very small, very, uh, single purpose device, which is, imagine, collecting temperature or humidity only. Then we have these very big, what we call the edge or heavy edge devices, which are like server class devices running a robotic arm or, or even a gateway class device, which is aggregating data from many devices, right, and then taking the data and acting on it.

Arjmand Samuel:
So, now with all this infrastructure, one of the key things that we have seen is diversity and heterogeneity of these devices. Not just in terms of size, but also in terms of who manufactured them, when they were manufactured. So, many of the temperature sensors in environments could be very old. Like, 20 years old and people are trying to use the same equipment and not have to change anything there. And which they can. Technically they could, but then those devices were never designed for a connected environment, for these, this data to actually, uh, be aggregated and sent on the network, meaning they per- perhaps did not have encryption built into them. So, we have to do something, uh, additional there.

Arjmand Samuel:
And so now with the diversity of devices, when they came in, the, the feature set is so diverse. Some of them were, are more recent, built with the right security principles and the right security properties, but then some of them might not be. So, this could raise a, a challenge where how do you actually secure an infrastructure where you have this whole disparity and many different types of devices, many different manufacturers, and the ages are different for these devices. Security properties are different and as we all know talking about security, the attack would always come from the weakest link. So, the attacker would always find, within that infrastructure, the device which has the least security as an entry point into that infrastructure. So, we can't just say, "Oh, I'll just protect my gateway and I'm fine." We have to have some mitigation for everything on that network. Everything. Even the older ones, older devices. We call them brownfield devices because they tend to be old devices, but they're also part of the infrastructure.

Arjmand Samuel:
So, how do we actually think about brownfield and the, the newer ones we call greenfield devices?
Brownfield and greenfield, how do we think about those given they will come from different vendors, different designs, different security properties? So, that's a key challenge today that we have. So, they want to keep those devices as well as make sure that they are secure, because the current threat vectors and, uh, attacks are, are much more sophisticated.

Natalia Godyla:
So, you have a complex set of devices that the security team has to manage and understand. And then you have to determine at another level which of those devices have vulnerabilities or which one is the most vulnerable, and then, uh, assume that your most vulnerable, uh, will be the ones that are exploited. It, so, is that, that typically the attack vector? It's going to be the, the weakest link, like you said? And h- how does an attacker try to breach the IoT device?

Arjmand Samuel:
Yeah, yeah. And, and this is where we, we started using the term zero trust IoT.

Natalia Godyla:
Mm-hmm (affirmative).

Arjmand Samuel:
So, IoT devices are deployed in an environment which cannot be trusted, should not be trusted. You should assume that there is zero trust in that environment, and then all these devices, when they are in there, you will do the right things. You'll put in the right mitigations so that the devices themselves are robust. Now, another example I always give here is, and, uh, I, your question around the attack vectors and, and how attacks are happening, typically in the IT world, now that we, we have the term defined, in the IT world, you will always have, you know, physical security. You will always put servers in a room and lock it, and, and so on, right, but in an IoT environment, you have compute devices. Imagine these are powerful edge nodes doing video analytics, but they're mounted on a pole next to a camera outside on the road, right? So, which means the physical access to that device cannot be controlled. It could be that edge node, again, a powerful compute device with lots of, you know, CPU and, and so on, is deployed in a mall looking at video streams and analyzing those video streams, again, deployed out there where any attacker physically can get a hold of the device and do bad things.

Arjmand Samuel:
So, again, the attack vectors are also different between IT and OT or IoT in the sense that the devices might not be physically contained in a, in an environment. So, that puts another layer of what do we do to protect such, uh, environments?

Nic Fillingham:
And then I want to just talk about the role of, sort of, if we think about traditional computing or traditional, sort of, PC-based computing and PC devices, a lot of the attack vectors and a lot of the, sort of, weakest link is the user and the user account. And that's why, you know, phishing is such a massive issue that if we can socially engineer a way for the person to give us their user name and password or whatever, we, we, we can get access to a device through the user account. IoT devices and OT devices probably don't use that construct, right? They're probably, they're userless. Is that accurate?

Arjmand Samuel:
Yeah. That's very accurate. So, again, all of the attack vectors which we know from IT are still relevant because, you know, if you, there's a phishing attack and the administrator password is taken over you can still go in and destroy the infrastructure, both IT and IoT. But at the same time, these devices, these IoT devices typically do not have a user interacting with them, typically in the compute sense.
You do not log into an IoT device, right? Especially a sensor with an MCU, it doesn't even have a user experience, uh, a screen on it. And so, there is typically no user associated with it, and that's another challenge. So you still need to have an identity of the device, not on the device but of the device, and that identity has to be intrinsic to the device. It has to be part of the device and it has to be stable. It has to be protected, secure, and o- on the device, but it is not typically a user identity.

Arjmand Samuel:
And, and that's not only true for temperature sensors. You know, the smaller MCU class devices. That's true for edge nodes as well. Typically, an edge node, and by the way, when I say the edge node, an edge node is a full blown, rich operating system. CPU, tons of memory, even perhaps a GPU, but does not typically have a user screen, a keyboard and a mouse. All it has is a video stream coming in through some protocol and it's analyzing that and then making some AI decisions, decisions based on AI. And, and, but that's a powerful machine. Again, there might never ever be a user interactively signing into it, but the device has an identity of its own. It has to authenticate itself and its workload to other devices or to the Cloud. And all of that has to be done in a way where there is no user attached to it.

Natalia Godyla:
So, with all of this complexity, how can we think about protecting against IoT attacks? You discussed briefly that we still apply the zero trust model here. So, you know, at a high level, what are best practices for protecting IoT?

Arjmand Samuel:
Yeah, yeah. Exactly. Now that we, we just described the environment, we described the devices and, and the attacks, right? The bad things that can happen, how do we do that? So, the first thing we want to do, talk about, is zero trust. So, do not trust the environment. Even if it is within a factory and you have a guard standing outside and you have all the, you know, the physical security, uh, do not trust it because there are still vectors which can allow malicious actors to come into those devices. So, that's the first one, zero trust.

Arjmand Samuel:
Uh, do not trust anything that is on the device unless you explicitly trust it, you explicitly make sure that you can go in and you can attest the workload, as an example. You can attest the identity of the device, as an example. And you can associate some access control policies and you have to do it explicitly and never assume that, because it's a, uh, an environment in a factory, you're good. So, you never assume that. So, again, that's a property or a principle within zero trust that we always exercise.

Arjmand Samuel:
Uh, the other one is you always assume breach. You always assume that bad things will happen. I- it's not if they'll happen or not. It's about when they're s- uh, going to happen. So, for the, that thinking, then you're putting in place mitigations. You are thinking, okay, if bad things are going to happen, how do I contain the bad things? How do I contain? How do I make sure that first of all, I can detect bad things happening. And we have, and we can talk about some of the offerings that we have, like Defender for IoT as an example, which you can deploy onto the environment. Even if it's brownfield, you can detect bad things happening based on the network characteristics. So, that's Defender for IoT.

Arjmand Samuel:
And, and once you can detect bad things happening then you can do something about it. You get an alert.
You can, you can isolate that device or take that device off the network and refresh it and do those kind of things. So, the first thing that needs to happen is you assume that it's going to breach. You always assume that whatever you are going to trust is explicitly trusted. You always make sure that there is a way to explicitly trust, uh, uh, uh, either the workload or the device or the network that is connected onto the device.

Nic Fillingham:
So, if we start with verify explicitly, in the traditional compute model where it's a user on a device, we can verify explicitly with, usually, multi-factor authentication. So, I have my user name and password. I add an additional layer of authentication, whether it's an, you know, app on my phone, a key or something, some physical device, there's my second factor and I'm, I'm verified explicitly in that model. But again, no users or the user's not, sort of, interacting with the device in, sort of, that traditional sense, so what are those techniques to verify explicitly on an IoT device?

Arjmand Samuel:
Yeah. I, exactly. So, we, in that white paper, which we are talking about, we actually put down a few things that you can actually do to, to, en- ensure that you have all the zero trust requirements together. Now, the first one, of course, is you need, uh, all devices to have strong identity, right? So, because identity is at the core. If you cannot identi- identify something you cannot, uh, give it an access control policy. You cannot trust the data that is coming out from that, uh, device. So, the first thing you do is you have a strong identity. By a strong identity we mean identity which is rooted in hardware, and so, what we call the hardware-based root of trust. It's technologies like TPM, which ensure that you have the private key, which is secured in the hardware, in the hardware, and you cannot get to it, so on and so on. So, you, you ensure that you have a, a strong identity.

Arjmand Samuel:
You always have least privilege access so you do not... And these principles have been known to our IT operations forever, right? So, many years they have been refined and, uh, people know about those, but we're applying them to the IoT world. So, least privilege access, if our device is required to access another device or data or to push out data, it should only do that for the function it is designed for, nothing more than that. You should always have some level of, uh, device health check. Perhaps you should be able to do some kind of attestation of the device. Again, there is no user to access the device health, but you should be able to do, and there are ways, there are services which allow you to measure something on the device and then say yes it's good or not.

Arjmand Samuel:
You should be able to do a continuous update. So, in case there is a device which, uh, has been compromised, you should be able to reclaim that device and update it with a fresh image so that now you can start trusting it. And then finally you should be able to securely monitor it. And not just the device itself, but now we have technologies which can monitor the data which is passing through the network, and based on those characteristics can see if a device is attacked or being attacked or not. So, those are the kind of things that we would recommend for a zero trust environment to take into account and, and make those requirements a must for, for IoT deployments.

Natalia Godyla:
And what's Microsoft's role in protecting against these attacks?

Arjmand Samuel:
Yeah, yeah.
So, uh, a few products that we always recommend. If somebody is putting together a new IoT device right from the silicon and putting that device together, we have a great secure-by-design device, which is called Azure Sphere. Azure Sphere has a bunch of different things that it does, including identity, updates, cert management. All these are important functions that are required for that device to function. And so, a new device could use the design that we have for Azure Sphere.

Arjmand Samuel:
Then we have a gateway software that you put on a gateway which allows you to secure the devices behind that gateway for on-prem deployments. We have Defender for IoT, again as I mentioned, but Defender for IoT is on-prem, so you can actually monitor all the traffic on the network and on the devices. You could also put an agent, a micro agent, on these devices, but then it also connects to Azure Sentinel. Azure Sentinel is an enterprise-class user experience for security administrators to know what bad things are happening on, on-prem. So, it, the whole end-to-end thing works all the way from the network, brownfield devices to the Cloud.

Arjmand Samuel:
We also have things like, uh, IoT Hub Device Provisioning Service. Device Provisioning Service is an interesting concept. I'll try to briefly describe that. So, what happens is when you have an identity on a device and you want to actually put that device, deploy that device in your environment, it has to be linked up with a service in the Cloud so that it can, it knows the device, there's an identity which is shared and so on. Now, you could do it manually. You could actually bring that device in, read a code, put it in the Cloud and you're good to go because now the Cloud knows about that device, but then what do you do when you have to deploy a million devices? And we're talking about IoT scale, millions. A fleet of millions of devices. If you take that same approach of reading a key and putting it in the Cloud, one, you'd make mistakes. Second, you will probably need a lifetime to take all those keys and put them in the cloud.

Arjmand Samuel:
So, in order to solve that problem, we have the Device Provisioning Service, which is a service in the Cloud. It is, uh, linked up to the OEMs manufacturing the devices. And when you deploy your device in the field, you do not have to do any of that. Your credentials are passed between the service and the, and the device. So, so, that's another service, IoT Hub Device Provisioning Service.

Arjmand Samuel:
And then we have, uh, a work, the, uh, a piece of work that we have done, which is the certification of IoT devices. So, again, you need the devices to have certain security properties. And how do you do that? How do you ensure that they have the right security properties, like identity and cert management and updatability and so on? We have what we call the Edge Secured-core Certification as well as the Azure Certified Device Program. So, any device which is in there has been tested by us and we certify that that device has the right security properties. So, we encourage our customers to actually pick from those devices so that they, they actually get the best security properties.

Natalia Godyla:
Wow. That's a lot, which is incredible. What's next for Microsoft's, uh, approach to IoT security?

Arjmand Samuel:
Yeah, yeah. So, uh, one of the key things that we have heard our customers, anybody who's going into IoT, ask the question: what is the risk I'm taking? Right?
So, I'm deploying all these devices in my factories and robotic arms connecting them, and so on, but there's a risk here. And how do I quantify that risk? How do I understand th- that risk and how do I do something about that risk?

Arjmand Samuel:
So, we, we got those questions many years back, like four, five years back. We started working with the industry and together with the Industrial Internet Consortium, IIC, which is a consortium out there and there are many companies part of that consortium, we led something called The Security Maturity Model for IoT. So, so, we put down a set of principles and a set of processes you follow to evaluate the maturity of your security in IoT, right? So, it's an actionable thing. You take the document, you evaluate, and then once you have evaluated, it actually gives you a score. It says you're level one, or two, or three, or four, for things like authentication and access control and management. And then based on th- that level, you know where you are, first of all. So, you know what your weaknesses are and what you need to do. So, that's a very actionable thing. But beyond that, if you're at level two and you want to be at level four, and by want to I mean your scenario dictates that you should be at level four, it is actionable. It gives you a list of things to do to go from level two to level four. And then you can reevaluate yourself and then you know that you're at level four. So, that's the maturity model.

Arjmand Samuel:
Now, in order to operationalize that program, in partnership with IIC, we also have been, and IIC's help, uh, has been instrumental here, we have been working on a training program where we have been training auditors. These are IoT security auditors, third party, independent auditors who are now trained on the SMM, the Security Maturity Model. And we tell our customers, if you have a concern, get yourself audited using SMM, using the auditors, and that will tell you where you are and where you need to go. So, it's evolving. Security for IoT's evolving, but I think we are at the forefront of that evolution.

Nic Fillingham:
Just to, sort of, finish up here, I'm thinking of some of the recent IoT security stories that were in the news. We won't mention any specifically, but there, there have been some recently. My takeaway, hearing those stories, reading those stories in the news, is that, oh, wow, there's probably a lot of organizations out there and maybe individuals at companies that are using IoT and OT devices that maybe don't see themselves as being security people or having to think about IoT security, you know, OT security. I just wonder, do you think there is a, a population of folks out there that don't think of themselves as IoT security people, but they really are? And then therefore, how do we sort of go find those people and help them go, get educated about securing IoT devices?

Arjmand Samuel:
Yeah, that's, uh, that's exactly what we are trying to do here. So, uh, people who know security can obviously know the bad things that can happen and can do something about it, but the worst part is that in OT, people are not thinking about all the bad things that can happen in the cyber world. You mentioned that example with that treatment plant. It should never have been connected to the network, unless required. And if it was connected to the, uh, to the network, to the internet, you should have had a ton of mitigations in place in case somebody was trying to come in and should have been stopped.
And in that particular case, y- there was a phishing attack and the administrative password was, was taken over. But even with that, with the, some of our products, like Defender for IoT, can actually detect the administrative behavior and can, can detect if an administrator is trying to do bad things. It can still tell other administrators there's bad things happening.

Arjmand Samuel:
So, there's a ton of things that one could do, and it all comes down, what we have realized is it all comes down to making sure that this word gets out, that people know that there is bad things that can happen with IoT and it's not only your data being stolen. It's very bad things as in that example. And so, get the word out, uh, so that we can, uh, we can actually make IoT more secure.

Nic Fillingham:
Got it. Arjmand, again, thanks so much for your time. It sounds like we really need to get the word out. IoT security is a thing. You know, if you work in an organization that employs IoT or OT devices, or think you might, go and download this white paper. Um, we'll put the link in the, uh, in the show notes. You can just search for it also probably on the Microsoft Security Blog and learn more about cyber security for IoT, how to apply a zero trust model. Share it with your, with your peers and, uh, let's get as much education as we can out there.

Arjmand Samuel:
Thank you very much for this, uh, opportunity.

Nic Fillingham:
Thanks, Arjmand, for joining us. I think we'll definitely touch on cyber security for IoT, uh, in future episodes. So, I'd love to talk to you again. (music)

Arjmand Samuel:
Looking forward to it. (music)

Natalia Godyla:
Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode.

Nic Fillingham:
And don't forget to Tweet us @MSFTSecurity or email us at securityunlocked@Microsoft.com with topics you'd like to hear on a future episode. (music) Until then, stay safe.

Natalia Godyla:
Stay secure. (music)
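Arjmand's point that a device carries an identity of its own, with no user ever signing in, is easier to picture with a small artifact. Below is a minimal sketch, in Python, of how a device might derive a short-lived shared access signature from a symmetric key, following the commonly documented Azure IoT SAS token format. The hub name, device ID, and key are hypothetical placeholders, and a real deployment would keep the key in a hardware root of trust such as a TPM rather than in code, in line with the zero trust guidance discussed in the episode.

import base64
import hashlib
import hmac
import time
import urllib.parse


def generate_sas_token(resource_uri: str, device_key_b64: str, ttl_seconds: int = 3600) -> str:
    """Derive a device-scoped SAS token from a symmetric key.

    Follows the commonly documented Azure IoT format:
    SharedAccessSignature sr=<uri>&sig=<signature>&se=<expiry>
    """
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    # Sign the URL-encoded resource URI plus the expiry time with the device key.
    to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    key = base64.b64decode(device_key_b64)
    signature = base64.b64encode(hmac.new(key, to_sign, hashlib.sha256).digest()).decode("utf-8")
    return (
        "SharedAccessSignature "
        f"sr={encoded_uri}&sig={urllib.parse.quote_plus(signature)}&se={expiry}"
    )


if __name__ == "__main__":
    # Hypothetical hub and device names, for illustration only.
    token = generate_sas_token(
        resource_uri="contoso-hub.azure-devices.net/devices/conveyor-sensor-01",
        device_key_b64=base64.b64encode(b"not-a-real-device-key").decode("utf-8"),
    )
    print(token)

The same idea carries over to X.509 or TPM-backed identities; what matters for the zero trust requirements Arjmand lists is that the credential is scoped to a single device, expires quickly, and is rooted in something the device can actually protect.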

Looking a Gift Card Horse in the Mouth

Ep. 35
Is it just me, or do you also miss the good ole days of fraudulent activity? You remember the kind I'm talking about, the emails from princes around the world asking for just a couple hundred dollars to help them unfreeze or retrieve their massive fortune, which they would share with you. Attacks have grown more nuanced, complex, and invasive since then, but because of the unbelievable talent at Microsoft, we're constantly getting better at defending against it.

On this episode of Security Unlocked, hosts Nic Fillingham and Natalia Godyla sit down with returning champion, Emily Hacker, to discuss Business Email Compromise (BEC), an attack that has perpetrators pretending to be someone from the victim's place of work and instructing them to purchase gift cards and send them to the scammer. Maybe it's good to look a gift card horse in the mouth?

In This Episode You Will Learn:

• Why BEC is such an effective and pervasive attack
• What are the key things to look out for to protect yourself against one
• Why BEC emails are difficult to track

Some Questions We Ask:

• How do the attackers mimic a true-to-form email from a colleague?
• Why do we classify this type of email attack separately from others?
• Why are they asking for gift cards rather than cash?

Resources:

Emily Hacker's LinkedIn:
https://www.linkedin.com/in/emilydhacker/

FBI's 2020 Internet Crime Report:
https://www.ic3.gov/Media/PDF/AnnualReport/2020_IC3Report.pdf

Nic Fillingham's LinkedIn:
https://www.linkedin.com/in/nicfill/

Natalia Godyla's LinkedIn:
https://www.linkedin.com/in/nataliagodyla/

Microsoft Security Blog:
https://www.microsoft.com/security/blog/

Related:

Security Unlocked: CISO Series with Bret Arsenault
https://SecurityUnlockedCISOSeries.com

Transcript:

[Full transcript can be found at https://aka.ms/SecurityUnlockedEp35]

Nic Fillingham:
Hello, and welcome to Security Unlocked, a new podcast from Microsoft, where we unlock insights from the latest in news and research from across Microsoft security engineering and operations teams. I'm Nic Fillingham.

Natalia Godyla:
And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft security, deep dive into the newest threat intel, research and data science.

Nic Fillingham:
And profile some of the fascinating people working on artificial intelligence in Microsoft security.

Natalia Godyla:
And now, let's unlock the pod.

Nic Fillingham:
Hello listeners, hello, Natalia, welcome to episode 35 of Security Unlocked. Natalia, how are you?

Natalia Godyla:
I'm doing well as always and welcome everyone to another show.

Nic Fillingham:
It's probably quite redundant, me asking you how you are and you asking me how you are, 'cause that's not really a question that you really answer honestly, is it? It's not like, "Oh, my right knee's packing it in a bit," or "I'm very hot."

Natalia Godyla:
Yeah, I'm doing terrible right now, actually. I, I just, uh-

Nic Fillingham:
Everything is terrible.

Natalia Godyla:
(laughs)

Nic Fillingham:
Well, uh, our guest today is, is a returning champ, Emily Hacker. This is her third, uh, appearance on Security Unlocked, and, and she's returning to talk to us about a, uh, new business email compromise campaign that she and her colleagues helped unearth focusing on some sort of gift card scam.

Nic Fillingham:
We've covered business email compromise, or BEC, on the podcast before. Uh, we had, uh, Donald Keating join us, uh, back in the early days of Security Unlocked on episode six.
The campaign itself, not super sophisticated as, as Emily sort of explains, but so much more sort of prevalent than I think a lot of us sort of realize. BEC was actually the number one reported source of financial loss to the FBI in 2020. Like by an order of magnitude above sort of, you know, just places second place, third place, fourth place. You know, I think the losses were in the billions, this is what was reported to the FBI, so it's a big problem. And thankfully, we've got people like, uh, Emily on it.Nic Fillingham:Natalia, can you give us the TLDR on the, on the campaign that Emily helps describe?Natalia Godyla:Yeah, as you said, it's, uh, a BEC gift card campaign. So the attackers use typosquatted domains, and socially engineered executives to request from employees that they purchase gift cards. And the request is very vague. Like, "I need you to do a task for me, "or "Let me know if you're available." And they used that authority to convince the employees to purchase the gift cards for them. And they then co-converted the gift cards into crypto at, at scale to collect their payout.Nic Fillingham:Yeah, and we actually discuss with Emily that, that between the three of us, Natalia, myself and Emily, we actually didn't have a good answer for how the, uh- Natalia Godyla:Mm-hmm (affirmative).Nic Fillingham:... these attackers are laundering these gift cards and, and converting them to crypto. So we're gonna, we're gonna go and do some research, and we're gonna hopefully follow up on a, on a future episode to better understand that process. Awesome. And so with that, on with the pod.Natalia Godyla:On with the pod.Nic Fillingham:Welcome back to the Security Unlocked podcast. Emily hacker, how are you?Emily Hacker:I'm doing well. Thank you for having me. How are you doing?Nic Fillingham:I'm doing well. I'm trying very hard not to melt here in Seattle. We're recording this at the tail end of the heat wave apocalypse of late June, 2021. Natalia, are you all in, I should have asked, have you melted or are you still in solid form?Natalia Godyla:I'm in solid form partially because I think Seattle stole our heat. I'm sitting in Los Angeles now.Nic Fillingham:Uh huh, got it. Emily, thank you for joining us again. I hope you're also beating the heat. You're here to talk about business email compromise. And you were one of the folks that co-authored a blog post from May 6th, talking about a new campaign that was discovered utilizing gift card scams. First of all, welcome back. Thanks for being a return guest. Second of all, do I get credit or do I get blame for the tweet that enabled you to, to- Emily Hacker:(laughs) It's been so long, I was hoping you would have forgotten.Nic Fillingham:(laughs) Emily and I were going backward forward on email, and I basically asked Emily, "Hey, Emily, who's like the expert at Microsoft on business email compromise?" And then Emily responded with, "I am."Emily Hacker:(laughs)Nic Fillingham:As in, Emily is. And so I, I think I apologized profusely. If I didn't, let me do that now for not assuming that you are the subject matter expert, but that then birthed a very fun tweet that you put out into the Twitter sphere. Do you wanna share that with the listeners or is this uncomfortable and we need to cut it from the audio?Emily Hacker:No, it's fine. You can share with the listeners. I, uh- Nic Fillingham:(laughs)Emily Hacker:... I truly was not upset. I don't know if you apologized or not, because I didn't think it was the thing to apologize for. 
Because I didn't take your question as like a, "Hey, can you like get out of the way." I did not take it that way at all. It was just like, I've been in this industry for five years and I have gotten so many emails from people being like, "Hey, who's the subject matter in X?" And I'm always having to be like, "Oh, it's so and so," you know, or, "Oh yeah, I've talked to them, it's so-and-so." And for once I was like, "Oh my goodness, it me."

Natalia Godyla:
(laughs)

Emily Hacker:
Like I'm finally a subject matter in something. It took a long time. So the tweet was, was me being excited that I got to be the subject matter expert, not me being upset at you for asking who it was.

Nic Fillingham:
No, I, I took it in it's, I did assume that it was excitement and not crankiness at me for not assuming that it would be you. But I was also excited because I saw the tweet, 'cause I follow you on Twitter and I'm like, "Oh, that was me. That was me." And I got to use-

Emily Hacker:
(laughs)

Nic Fillingham:
... I got to use the meme that's the s- the, the weird side eye puppet, the side, side eye puppet. I don't know if that translates. There's this meme where it's like a we-weird sort of like H.R. Pufnstuf sort of reject puppet, and it's sort of like looking sideways to the, to the camera.

Emily Hacker:
Yes.

Nic Fillingham:
Uh, I've, and I've-

Emily Hacker:
Your response literally made me laugh out loud, alone in my apartment.

Nic Fillingham:
(laughs) I've never been able to use that meme in its perfect context, and I was like, "This is it."

Emily Hacker:
(laughs) We just set that one up for a comedy home run basically.

Nic Fillingham:
Yes, yes, yes. And I think my dad liked the tweet too-

Natalia Godyla:
(laughs)

Nic Fillingham:
... so I think I had that, so that was good.

Emily Hacker:
(laughs)

Nic Fillingham:
Um, he's like my only follower.

Emily Hacker:
Pure success.

Nic Fillingham:
Um, well, on that note, so yeah, we're here to talk about business email compromise, which we've covered on the, on the podcast before. You, as I said, uh, co-authored this post from May 6th. We'll have a, a broader conversation about BEC, but let's start with this post. Could you give us a summary of what was discussed in this, uh, blog post back on, on May 6th?

Emily Hacker:
Yeah, so this blog post was about a specific type of business email compromise, where the attackers are using lookalike domains and lookalike email addresses to send emails that are trying, in this particular case, to get the user to send them a gift card. And so this is not the type of BEC that a lot of people might be thinking of in terms of conducting wire transfer fraud, or, you know, you read in the news like some company wired several million dollars to an attacker. That wasn't this, but this is still creating a financial impact in that the recipient is either gonna be using their own personal funds or in some cases, company funds to buy gift cards, especially if the threat actor is pretending to be a supervisor and is like, "Hey, you know, admin assistant, can you buy these gift cards for the team?" They're probably gonna use company funds at that point.

Emily Hacker:
So it's still something that we keep an eye out for. And it's actually, these gift card scams are far and away the most common, I would say, type of BEC that I am seeing when I look for BEC type emails.
It's like, well over, I would say 70% of the BEC emails that I see are trying to do this gift card scam, 'cause it's a little easier, I would say for them to fly under the radar maybe, uh, in terms of just like, someone's less likely to report like, "Hey, why did you spend $30 on a gift card?" Than like, "Hey, where did those like six billion dollars go?" So like in that case, "This is probably a little easier for them to fly under the radar for the companies. But in terms of impact, if they send, you know, hundreds upon hundreds of these emails, the actors are still gonna be making a decent chunk of change at the end of the day.Emily Hacker:In this particular instance, the attackers had registered a couple hundred lookalike domains that aligned with real companies, but were just a couple of letters or digits off, or were using a different TLD, or use like a number or sort of a letter or something, something along the lines to where you can look at it and be like, "Oh, I can tell that the attacker is pretending to be this other real company, but they are actually creating their own."Emily Hacker:But what was interesting about this campaign that I found pretty silly honestly, was that normally when the attacker does that, one would expect them to impersonate the company that their domain is looking like, and they totally didn't in this case. So they registered all these domains that were lookalike domains, but then when they actually sent the emails, they were pretending to be different companies, and they would just change the display name of their email address to match whoever they were impersonating.Emily Hacker:So one of the examples in the blog. They're impersonating a guy named Steve, and Steve is a real executive at the company that they sent this email to. But the email address that they registered here was not Steve, and the domain was not for the company that Steve works at. So they got a little bit, I don't know if they like got their wires crossed, or if they just were using the same infrastructure that they were gonna use for a different attack, but these domains were registered the day before this attack. So it definitely doesn't seem like opportunistic, and which it doesn't seem like some actors were like, "Oh, hey look, free domains. We'll send some emails." Like they were brand new and just used for strange purposes.Natalia Godyla:Didn't they also fake data in the headers? Why would they be so careless about connecting the company to the language in the email body but go through the trouble of editing the headers?Emily Hacker:That's a good question. They did edit the headers in one instance that I was able to see, granted I didn't see every single email in this attack because I just don't have that kind of data. And what they did was they spoofed one of the headers, which is an in-reply-to a header, which makes it, which is the header that would let us know that it's a real reply. But I worked really closely with a lot of email teams and we were able to determine that it wasn't indeed a fake reply.Emily Hacker:My only guess, honestly, guess as to why that happened is one of two things. One, the domain thing was like a, a mess up, like if they had better intentions and the domain thing went awry. Or number two, it's possible that this is multiple attackers conducting. 
If one guy was responsible for the emails with the mess of domains, and a different person was responsible for the one that had the email header, like maybe the email header guy is just a little bit more savvy at his job of crime than the first guy.

Natalia Godyla:
(laughs)

Nic Fillingham:
Yeah, I li- I like the idea of, uh, sort of ragtag grouping. I don't mean to make them an attractive image, but, you know, a ragtag group of people here. And like, you've got a very competent person who knows how to go and sort of spoof domain headers, and you have a less competent person who is-

Emily Hacker:
Yeah. It's like Pinky and the Brain.

Nic Fillingham:
Yeah, it is Pinky and the Brain. That's fantastic. I love the idea of Pinky and the Brain trying to conduct a multi-national, uh-

Emily Hacker:
(laughs)

Nic Fillingham:
... BEC campaign as their way to try and take over the world. Can we back up a little bit? We jumped straight into this, which is totally, you know, we asked you to do that. So, but let's go back to a little bit of basics. BEC stands for business email compromise. It is distinct from, I mean, do you say CEC for consumer email compromise? Like what's the opposite side of that coin? And then can you explain what BEC is for us and why we sort of think about it distinctly?

Emily Hacker:
Mm-hmm (affirmative), so I don't know if there's a term for the non-business side of BEC other than just scam. At its basest form, what BEC is, is just a scam where the threat actors are just trying to trick people out of money or data. And so it doesn't involve any malware for the most part at the BEC stage of it. It doesn't involve any phishing for the most part at the BEC stage of it. Those things might exist earlier in the chain, if you will, for more sophisticated attacks. Like an attacker might use a phishing campaign to get access before conducting the BEC, or an attacker might use like a RAT on a machine to gain access to emails before the actual BEC. But the business email compromise email itself, for the most part, is just a scam. And what it is, is when an attacker will pretend to be somebody at a company and ask for money or data, that can include, you know, like W-2's, in which case that would still kind of be BEC.

Emily Hacker:
And when I say that they're pretending to be this company, there's a few different ways that that can happen. And so, the most, in my opinion, sophisticated version of this, but honestly the term sophisticated might be loaded and arguable there, is when the attacker actually uses a real account. So business email compromise, the term might imply that sometimes you're actually compromising an email. And those are the ones that I think are what people are thinking of when they're thinking of these million billion dollar losses, where the attacker gains access to an email account and basically replies as the real individual.

Emily Hacker:
Let's say that there was an email thread going on between accounts payable and a vendor, and the attacker has compromised the, the vendor's email account, well, in the course of the conversation, they can reply to the email and say, "Hey, we just set up a new bank account. Can you change the information and actually wire the million dollars for this particular project to this bank account instead?" And if the recipient of that email is not critical of that request, they might actually do that, and then the money is in the attacker's hands.
And it's difficult to be critical of that request because it'll sometimes literally just be a reply to an ongoing email thread with someone you've probably been doing business with for a while, and nothing about that might stand out as strange, other than them changing the account. It can be possible, but difficult to get it back in those cases. But those are definitely the ones that are, I would say, the most tricky to spot.Emily Hacker:More common, I would say, what we see is the attacker is not actually compromising an email, not necessarily gaining access to it, but using some means of pretending or spoofing or impersonating an email account that they don't actually have access to. And that might include registering lookalike domains as in the case that we talked about in this blog. And that can be typosquatted domains or just lookalike domains, where, for example, I always use this example, even though I doubt this domain is available, but instead of doing microsoft.com, they might do Microsoft with a zero, or like Microsoft using R-N-I-C-R-O-S-O-F-t.com. So it looks like an M at first glance, but it's actually not. Or they might do something like microsoft-com.org or something, which that obviously would not be available, but you get the point. Where they're just getting these domains that kind of look like the right one so that somebody, at first glance, will just look up and be like, "Oh yeah, that looks like Microsoft. This is the right person."Emily Hacker:They might also, more commonly, just register emails using free email services and either do one of two things, make the email specific to the person they're targeting. So let's say that an attacker was pretending to be me. They might register emilyhacker@gmail.com, or more recently and maybe a little bit more targeted, they might register like emily.hacker.microsoft.com@gmail.com, and then they'll send an email as me. And then on the, I would say less sophisticated into the spectrum, is when they are just creating an email address that's like bob@gmail.com. And then they'll use that email address for like tons of different targets, like different victims. And they'll either just change the display name to match someone at the company that they're targeting, or they might just change it to be like executive or like CEO or something, which like the least believable of the bunch in my opinion is when they're just reusing the free emails.Emily Hacker:So that's kind of the different ways that they can impersonate or pretend to be these companies, but I see all of those being used in various ways. But for sure the most common is the free email service. And I mean, it makes sense, because if you're gonna register a domain name that cost money and it takes time and takes skill, same with compromising an email account, but it's quick and easy just to register a free email account. So, yeah.Nic Fillingham:So just to sort of summarize here. So business email compromise i-is obviously very complex. There's lots of facets to it.Emily Hacker:Mm-hmm (affirmative).Nic Fillingham:It sounds like, first of all, it's targeted at businesses as opposed to targeted individuals. In targeted individuals is just more simple scams. We can talk about those, but business email compromise, targeted at businesses- Emily Hacker:Mm-hmm (affirmative).Nic Fillingham:... 
and the end goal is probably to get some form of compromise, and which could be in different ways, but some sort of compromise of a communication channel or a communication thread with that business to ultimately get some money out of them?Emily Hacker:Yep, so it's a social engineering scheme to get whatever their end goals are, usually money. Yeah.Nic Fillingham:Got it. Like if I buy a gift card for a friend or a family for their birthday, and I give that to them, the wording on the bottom says pretty clearly, like not redeemable for cash. Like it's- Emily Hacker:So- Nic Fillingham:... so what's the loophole they're taking advantage of here?Emily Hacker:Criminals kind of crime. Apparently- Natalia Godyla:(laughs)Emily Hacker:... there are sites, you know, on the internet specifically for cashing out gift cards for cryptocurrency.Nic Fillingham:Hmm.Emily Hacker:And so they get these gift cards specifically so that they can cash them out for cryptocurrency, which then is a lot, obviously, less traceable as opposed to just cash. So that is the appeal of gift cards, easier to switch for, I guess, cryptocurrency in a much less traceable manner for the criminals in this regard. And there are probably, you know, you can sell them. Also, you can sell someone a gift card and be like, "Hey, I got a $50 iTunes gift card. Give me $50 and you got an iTunes gift card." I don't know if iTunes is even still a thing. But like that is another means of, it's just, I think a way of like, especially the cryptocurrency one, it's just a way of distancing themselves one step from the actual payout that they end up with.Nic Fillingham:Yeah, I mean, it's clearly a, a laundering tactic.Emily Hacker:Mm-hmm (affirmative).Nic Fillingham:It's just, I'm trying to think of like, someone's eventually trying to get cash out of this gift card-Emily Hacker:Mm-hmm (affirmative).Nic Fillingham:... and instead of going into Target with 10,000 gift cards, and spending them all, and then turning right back around and going to the returns desk and saying like, "I need to return these $10,000 that I just bought."Emily Hacker:Mm-hmm (affirmative).Nic Fillingham:I guess I'm just puzzled as to how, at scale- Emily Hacker:Yeah.Nic Fillingham:... and I guess that's the key word here, at scale, at a criminal scale, how are they, what's the actual return? Are they getting, are they getting 50 cents on the dollar? Are they getting five cents on the dollar? Are they getting 95 cents on the dollar? Um, it sounds like, maybe I don't know how to ask that question, but I think it's a fascinating one, I'd love to learn more about.Emily Hacker:It is a good question. I would imagine that the, the sites where they exchange them for cryptocurrency are set up in a way where rather than one person ending up with all the gift cards to where that you have an issue, like what you're talking about with like, "Hey, uh, can I casually return these six million gift cards?" Like rather than that, they're, it's more distributed. But there probably is a surcharge in terms of they're not getting a one-to-one, but it's- Nic Fillingham:Yeah.Emily Hacker:... I would not imagine that it's very low. Or like I would not imagine that they're getting five cents on the dollar, I would imagine it's higher than that.Nic Fillingham:Got it.Emily Hacker:But I don't know. So, that's a good question.Natalia Godyla:And we're talking about leveraging this cryptocurrency model to cash them out. 
So has there been an increase in these scams because they now have this ability to cash them out for crypto? Like, was that a driver?Emily Hacker:I'm not sure. I don't know how long the crypto cash out method has been available.Natalia Godyla:Mm-hmm (affirmative).Emily Hacker:I've only recently learned about it, but that's just because I don't spend, I guess I don't spend a lot of time dealing with that end of the scam. For the most part, my job is looking at the emails themselves. So, the, learning what they're doing once they get the gift cards was relatively new to me, but I don't think it's new to the criminals. So it's hard for me to answer that question, not knowing how long the, the crypto cash out method has been available to them. But I will say that it does feel like, in the last couple of years, gift card scams have just been either increasing or coming into light more, but I think increasing.Nic Fillingham:Emily, what's new about this particular campaign that you discussed in the blog? I-it doesn't look like there's something very new in the approach here. This feels like it's a very minor tweak on techniques that have been employed for a while. Tell me what's, what's new about this campaign? (laughs)Emily Hacker:(laughs) Um, so I would agree that this is not a revolutionary campaign.Nic Fillingham:Okay.Emily Hacker:And I didn't, you know, choose to write this one into the blog necessarily because it's revolutionary, but rather because this is so pervasive that I felt like it was important for Microsoft customers to be aware that this type of scam is so, I don't know what word, now we're both struggling with words, I wanna say prolific, but suddenly the definition of that word seems like it doesn't fit in that sentence.Nic Fillingham:No, yeah, prolific, that makes sense. Emily Hacker:Okay.Nic Fillingham:Like, this is, it sounds like what you're saying is, this blog exists not because this campaign is very unique and some sort of cutting-edge new technique, it exists because it's incredibly pervasive.Emily Hacker:Yes.Nic Fillingham:And lots and lots of people and lots and lots of businesses are probably going to get targeted by it. Emily Hacker:Exactly.Nic Fillingham:And we wanna make sure everyone knows about it.Emily Hacker:And the difference, yes, and the, the only real thing that I would say set this one apart from some of the other ones, was the use of the lookalike domains. Like so many of the gift cards scams that I see, so many of the gift cards scams that I see are free email accounts, Gmail, AOL, Hotmail, but this one was using the lookalike domains. And that kind of gave us a little bit more to talk about because we could look into when the domains were registered. I saw that they were registered the day, I think one to two days before the attack commenced. And that also gave us a little bit more to talk about in terms of BEC in the blog, because this kind of combined a couple of different methods of BEC, right? It has the gift cards scam, which we see just all the time, but it also had that kind of lookalike domain, which could help us talk about that angle of BEC.Emily Hacker:But I had been, Microsoft is, is definitely starting to focus in on BEC, I don't know, starting to focus in, but increasing our focus on BEC. And so, I think that a lot of the stuff that happens in BEC isn't new. Because it's so successful, there's really not much in the way of reason for the attackers to shift so dramatically their tactics. 
Emily Hacker:

I mean, even with the more sophisticated attacks, such as the ones where they are compromising an account, those are still just like basic phishing emails, logging into an account, setting up forwarding rules, like this is the stuff that we've been talking about in BEC for a long time. But I think Microsoft is talking about these more now because we are trying to get the word out, you know, about this being such a big problem and wanting to shift the focus more to BEC so that more people are talking about it and solving it.

Natalia Godyla:

It seemed like there was A/B testing happening with the cybercriminals. They had occasionally a soft intro where someone would email and ask like, "Are you available?" And then when the target responded, they then tried to get money from that individual, or they just immediately asked for money.

Emily Hacker:

Mm-hmm (affirmative).

Natalia Godyla:

Why the different tactics? Were they actually attempting to be strategic to test which version worked, or was it just, like you said, different actors using different methods?

Emily Hacker:

I would guess it's different actors using different methods or another thing that it could be was that they don't want the emails to say the same thing every time, because then it would be really easy for someone like me to just identify them-

Natalia Godyla:

Mm-hmm (affirmative).

Emily Hacker:

... in terms of looking at mail flow for those specific keywords or whatever. If they switch them up a little bit, it makes it harder for me to find all the emails, right? Or anybody. So I think that could be part of the case in terms of just sending the exact same email every time is gonna make it really easy for me to be like, "Okay, well here's all the emails." But I think there could also be something strategic to it as well. I just saw one just yesterday actually, or what day is it, Tuesday? Yeah, so it must've been yesterday where the attacker did a real reply.

Emily Hacker:

So they sent the, the soft opening, as you said, where it just says, "Are you available?" And then they had sent a second one that asked that full question in terms of like, "I'm really busy, I need you to help me, can you call me or email me," or something, not call obviously, because they didn't provide a phone number. Sometimes they do, but in this case, they didn't. And they had actually responded to their own email. So the attacker replied to their own email to kind of get that second push to the victim. The victim just reported the email to Microsoft so they didn't fall for it. Good for them. But it does seem that there might be some strategy involved or desperation. I'm not sure which one.

Natalia Godyla:

(laughs) Fine line between the two.

Emily Hacker:

(laughs)

Nic Fillingham:

I want to ask a question that I don't know if you can answer, because I don't wanna ask you to essentially, you know, jeopardize any operational security or sort of tradecraft here, but can you give us a little tidbit of a glimpse of your, your job, and, and how you sort of do this day-to-day? Are you going and registering new email accounts and, and intentionally putting them in dodgy places in hopes of being the recipient? Or are you just responding to emails that have been reported as phishing from customers?
Are you doing other things like, again, I don't wanna jeopardize any of your operational security or, you know, the processes that you use, but how do you find these?

Emily Hacker:

Mm-hmm (affirmative).

Nic Fillingham:

And how do you then sort of go and follow the threads and uncover these campaigns?

Emily Hacker:

Yeah, there's a few ways, I guess, that we look for these. We don't currently have any kind of like honey accounts set up or anything like that, where we would be hoping to be targeted and find them this way. I know there are different entities within Microsoft who are, who do different things, right? So my team is not the entity that would be doing that. So my team's job is more looking at what already exists. So we're looking at stuff that customers have reported, and we're also looking at open source intelligence; if anyone else has tweeted or released a blog or something about an ongoing BEC campaign, that might be something that then I can go look at our data and see if we've gotten it.

Emily Hacker:

But the biggest way outside of those, those are the two, like I would say smaller ways. The biggest way that we find these campaigns is we do technique tracking. So we have lots of different, we call them traps basically, and they run over all mail flow, and they look for certain either keywords or there are so many different things that they run on. Obviously not just keywords, I'm just trying to be vague here. But like they run on a bunch of different things and they have different names. So if an email hits on a certain few items, that might tell us, "Hey, this one might be BEC," and then that email can be surfaced to me to look into.

Emily Hacker:

Unfortunately, BEC is very, is a little bit more difficult to track just by the nature of it not containing phishing links or malware attachments or anything along those lines. So it is a little bit more keyword based. And so, a lot of times it's like looking at 10,000 emails and looking for the one that is bad when they all kind of use the same keywords. And of course, we don't just get to see every legitimate email, 'cause that would be like a crazy customer privacy concern. So we only get to really see certain emails that are suspected malicious by the customer, in which case it does help us a little bit because they're already surfacing the bad ones to us.

Emily Hacker:

But yeah, that's how we find these, is just by looking for the ones that already seem malicious and applying logic over them to see like, "Hmm, this one might be BEC," or, you know, we do that, not just for BEC, but like, "Hmm, this one seems like it might be this type of phishing," or like, "Hmm, this one seems like it might be a buzz call," or whatever, you know, these types of things that will surface all these different emails to us in a way that we can then go investigate them.

Nic Fillingham:

So for the folks listening to this podcast, what do you want them to take away from this? What do you want us to know on the SOC side, on the-

Emily Hacker:

Mm-hmm (affirmative).

Nic Fillingham:

... on the SOC side? Like, is there any additional sort of, what are some of the fundamentals and sort of basics of BEC hygiene? Is there anything else you want folks to be doing to help protect the users in their organizations?

Emily Hacker:

Yeah, so I would say not to just focus on monitoring what's going on on the endpoint, because BEC activity is not going to have a lot, if anything, that's going to appear on the endpoint.
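To make the technique tracking Emily describes a little more concrete, here is a deliberately simplified sketch of a keyword-and-heuristic "trap" run over user-reported emails. The keywords, weights, and threshold are invented for illustration; as Emily notes, the real detections go well beyond keywords alone.

# Toy illustration of keyword/heuristic triage for user-reported emails.
# The keywords, weights, and threshold are invented for this example and are
# not Microsoft's actual detection logic, which goes well beyond keywords.
BEC_KEYWORDS = {
    "gift card": 2,
    "are you available": 2,
    "wire transfer": 2,
    "urgent request": 1,
}
FREE_MAIL_DOMAINS = {"gmail.com", "aol.com", "hotmail.com"}


def bec_score(subject: str, body: str, from_domain: str, reply_to_domain: str) -> int:
    """Score a reported email on simple BEC indicators; higher means more suspicious."""
    text = f"{subject} {body}".lower()
    score = sum(weight for phrase, weight in BEC_KEYWORDS.items() if phrase in text)
    if from_domain.lower() in FREE_MAIL_DOMAINS:
        score += 1  # Free webmail sender posing as an executive or vendor.
    if reply_to_domain and reply_to_domain.lower() != from_domain.lower():
        score += 2  # Reply-To routed somewhere other than the visible sender.
    return score


# Anything over a threshold gets surfaced to a human analyst for review.
example = bec_score("Quick favor", "Are you available? I need gift cards today.",
                    "gmail.com", "gmail.com")
if example >= 3:
    print("surface to analyst queue")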
Emily Hacker:

So making sure that you're monitoring emails and looking for not just emails that contain malicious links or attachments, but also looking for emails that might contain BEC keywords. Or even better, if there's a way for you to monitor your organization's forwarding rules, if a user suddenly sets up a, a slew of new forwarding rules from their email account, see if there's a way to turn that into a notification or an alert, I mean, to you in the SOC. And that's a really key indicator that that might be BEC, not necessarily a gift card scam, but BEC.

Emily Hacker:

Or see if there is a way to monitor, uh, not monitor, but like, if your organization has users reporting phishing mails, if you get one that's like, "Oh, this is just your basic low-level credential phishing," don't just toss it aside and be like, "Well, that was just one person and it's a really crappy voicemail phish, no one's going to actually fall for that." Actually, look and see how many people got the email. See if anybody clicked, force password resets on the people that clicked, or, if you can't tell who clicked, on everybody, because it really only takes one person to have clicked on that email and you not reset their password, and now the attackers have access to your organization's email and they can be conducting these kinds of wire transfer fraud.

Emily Hacker:

So like, and I know we're all overworked in this industry, and I know that it can be difficult to try and focus on everything at once. And especially, you know, if you're being told, like, our focus is ransomware, we don't want to have ransomware, you're just constantly monitoring endpoints for suspicious activity, but it's important to try and make sure that you're not neglecting the stuff that only exists in email as well.

Natalia Godyla:

Those are great suggestions. And I'd be remiss not to note that some of those suggestions are available in Microsoft Defender for Office 365, like the suspicious forwarding alerts or attack simulation training for user awareness. But thank you again for joining us, Emily, and we hope to have you back on the show many more times.

Emily Hacker:

Yeah, thanks so much for having me again.

Natalia Godyla:

Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode.

Nic Fillingham:

And don't forget to tweet us @msftsecurity, or email us at securityunlocked@microsoft.com with topics you'd like to hear on a future episode. Until then, stay safe.

Natalia Godyla:

Stay secure.
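A brief illustrative footnote on the forwarding-rule monitoring Emily recommends above: the sketch below assumes inbox-rule creation events have already been exported into simple records (for example, from mailbox audit logs); the field names, time window, and alert threshold are invented for the example and would need to match whatever your own mail platform actually logs.

# Minimal sketch of alerting when a user suddenly creates several forwarding rules.
# The record shape, window, and threshold are illustrative assumptions; adapt them
# to however your mail platform exposes inbox-rule creation events.
from collections import defaultdict
from datetime import datetime, timedelta

rule_events = [
    {"user": "fiona@contoso.com", "created": datetime(2021, 4, 6, 9, 15), "forwards_to": "drop@external.example"},
    {"user": "fiona@contoso.com", "created": datetime(2021, 4, 6, 9, 17), "forwards_to": "drop2@external.example"},
]


def flag_suspicious_forwarding(events, window=timedelta(hours=24), threshold=2):
    """Return users who created at least `threshold` forwarding rules within `window`."""
    by_user = defaultdict(list)
    for event in events:
        if event.get("forwards_to"):  # Only rules that forward mail somewhere.
            by_user[event["user"]].append(event["created"])
    alerts = []
    for user, times in by_user.items():
        times.sort()
        for start in times:
            burst = [t for t in times if start <= t <= start + window]
            if len(burst) >= threshold:
                alerts.append(user)
                break
    return alerts


print(flag_suspicious_forwarding(rule_events))  # ['fiona@contoso.com']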