Security Unlocked


Red-teaming AI with Counterfit

Ep. 31

It’s an all-out offensive on today’s episode as we talk about how the best defense is a good offense. But before we plan our attack, we need to know our vulnerabilities, and that’s where our guest comes in.

On this episode, hosts Nic Fillingham and Natalia Godyla are joined by Will Pearce, who discusses his role as AI Red Team Lead on the Azure Trustworthy ML Group and how he works to find weaknesses in security infrastructure in order to develop better ways to prevent attacks.


In This Episode You Will Learn:  

  • The three main functions of Counterfit  
  • Why the best defense is a good offense 
  • Why Will and his team aren’t worried about showing their hand by releasing this software as open source  

Some Questions We Ask:  

  • What previously developed infrastructure was the Counterfit tool built upon? 
  • How does AI red teaming differ from traditional SecOps red teaming? 
  • How did the Counterfit project evolve from conception to release? 


Resources:  

Will Pearce’s LinkedIn  

https://www.linkedin.com/in/will-pearce-a62331135/  

AI security risk assessment using Counterfit  

https://www.microsoft.com/security/blog/2021/05/03/ai-security-risk-assessment-using-counterfit/  

Nic Fillingham’s LinkedIn:   

https://www.linkedin.com/in/nicfill/  

Natalia Godyla’s LinkedIn:   

https://www.linkedin.com/in/nataliagodyla/  

Microsoft Security Blog:   

https://www.microsoft.com/security/blog/  

  

Related:

Security Unlocked: CISO Series with Bret Arsenault  

https://SecurityUnlockedCISOSeries.com  


Transcript:

[Full transcript can be found at https://aka.ms/SecurityUnlockedEp31]


Nic Fillingham: (00:08)

Hello and welcome to Security Unlocked, a new podcast from Microsoft, where we unlock insights from the latest in news and research from across Microsoft security engineering and operations teams. I'm Nic Fillingham.


Natalia Godyla: (00:20)

And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft security, deep dive into the newest threat intel, research and data science.


Nic Fillingham: (00:30)

And profile some of the fascinating people working on artificial intelligence in Microsoft security.


Natalia Godyla: (00:36)

And now let's unlock the pod.


Nic Fillingham: (00:41)

Hello listeners, and welcome to episode 31 of Security Unlocked. Natalia, hello to you. Welcome.


Natalia Godyla: (00:46)

Hello, Nic. Happy to be here. Uh, what do we have on the docket for today?


Nic Fillingham: (00:50)

Today we have Will Pearce joining us. Will Pearce is the AI red team lead inside the Azure Trustworthy Machine Learning Group. Eager listeners of the podcast might recognize Will's name from a couple of episodes back, where we had Ram Shankar Siva Kumar come on the podcast and mention Will a few times. Will is here to talk to us today about a blog post that he co-authored with Ram Shankar Siva Kumar on May 3rd, announcing a new AI security risk assessment tool called Counterfit. And this is a great conversation about a sort of fascinating project, and his job is about trying to break into our AI systems and compromise them in order to make them safer, make them better. And so we're gonna say this word red teaming quite a bit in the interview, and for those that may not be super familiar with the concept, we thought we might just sort of revisit it. Natalia, you've got a good definition there, walk us through it. What does red teaming mean?


Natalia Godyla: (01:47)

And so red teaming originated in the military as a way to test strategies by posing as an external force. The US force would be the blue team, the defenders, and the red team would be someone that is trying to infiltrate the United States, and that same concept is now applied to security. So red teaming is that training exercise to determine where are the gaps in your security strategy.


Nic Fillingham: (02:11)

Right. And so in this context here, with regards to the Counterfit tool, Will just had a bunch of scripts that he had built himself to sort of do his job. At some point, Will talked about in the interview, he decided to pull them together into a toolkit and create a sort of open source project that's now available up on GitHub, so that other AI red team folks, uh, really anyone who's out there trying to make AI systems more secure through red teaming, can benefit from the work that Will's done. Natalia, some of the things that Counterfit can do, obviously we'll hear from Will in just a second, but what's your summary?


Natalia Godyla: (02:45)

I mean, there's so many different ways you can use this tool for offensive security. So you can pen test and red team AI systems using Counterfit, you can do vulnerability scanning, and you can also do logging for AI systems. So collect that telemetry to improve your understanding of the different failure modes in AI systems.


Nic Fillingham: (03:07)

Well, this is a great conversation with Will Pearce. I think you'll enjoy it. On with the pod.


Natalia Godyla: (03:11)

On with the pod. Today, we are joined by Will Pearce, an AI red team lead from the Azure Trustworthy ML Group to talk about a blog post called AI Security Risk Assessment Using Counterfit. Welcome to the show Will.


Will Pearce: (03:29)

Thank you. Thanks for having me.


Natalia Godyla: (03:31)

Awesome. Yeah. We're really excited to talk about Counterfit, and I think it'd be great to start with a little bit of an intro. So could you share who you are, what your day-to-day is at Microsoft?


Will Pearce: (03:40)

Yeah. Yeah. As you mentioned, Will Pearce, I'm the red team lead for the Azure Trustworthy Machine Learning team. My day to day is attacking machine learning inside Microsoft. So building tools, doing research and going after machine learning models wherever they live inside Microsoft.


Natalia Godyla: (03:59)

And Counterfit is a tool that helps with that, correct? Could you share what Counterfit is?


Will Pearce: (04:05)

Yep. Yeah. So Counterfit is a command line application that helps me automate these assessments. There's sort of a lot of data processing that can go into them, and it takes a lot of time, and so I built this command line application to take care of it. I come from the ops world, so traditional red teaming, you know, where you kind of hack networks. And so the command line interface, that malware interface, is what I was used to, but in the machine learning world, a lot of the tools are libraries, so they're not really readily available for you to automate things. And so I just kind of married the two together into something that basically wraps existing frameworks.
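
[Editor's note: To make "wraps existing frameworks" a little more concrete, here is a minimal sketch of driving one of the underlying adversarial-ML libraries (IBM's Adversarial Robustness Toolbox) by hand against a query-only model. This is illustrative only and is not Counterfit's actual code; the "remote" model is a local stand-in, and the input shape and class count are assumptions.]

```python
# Illustrative sketch only, not Counterfit source: driving one of the attack
# frameworks a wrapper like Counterfit automates (IBM's ART) by hand.
# The "remote model" below is a local stand-in for whatever endpoint you target.
import numpy as np
from art.estimators.classification import BlackBoxClassifier
from art.attacks.evasion import HopSkipJump

rng = np.random.default_rng(0)
W = rng.normal(size=(32 * 32 * 3, 10))  # pretend weights of the remote model

def predict(x: np.ndarray) -> np.ndarray:
    """Stand-in for a remote scoring call: returns class probabilities only."""
    logits = x.reshape(len(x), -1) @ W
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

# Wrap the query-only model so the attack library can treat it as a classifier.
target = BlackBoxClassifier(predict, input_shape=(32, 32, 3), nb_classes=10,
                            clip_values=(0.0, 1.0))

# Decision-based evasion attack: needs nothing but the model's predictions.
attack = HopSkipJump(target, targeted=False, max_iter=5, max_eval=500)
x_benign = rng.random((1, 32, 32, 3)).astype(np.float32)
x_adv = attack.generate(x=x_benign)
print("queries produced an adversarial example of shape", x_adv.shape)
```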


Nic Fillingham: (04:47)

Will, I'd love to step back just a bit. So you are the AI red team lead, tell us about AI red teaming, or AI/ML red teaming. How does that differ from sort of traditional SecOps red teaming?


Will Pearce: (05:00)

In a lot of ways it doesn't. Machine learning is a new sort of attack surface that is coming up as businesses integrate machine learning into all kinds of things, and the security of machine learning hasn't really been paid attention to. But you know, machine learning is part of a larger system, it's still an information asset; the model files still exist on a server. They're put into websites, all the normal stuff. And so a lot of those skills transferred, you know, one-to-one, the difference being having that knowledge of how machine learning algorithms work, how you can bend them, how you can alter your inputs to get the outputs that you want. And a lot of the attacks are really just kind of engineering to get to that point.


Nic Fillingham: (05:46)

And the types of specialists that you have on an AI red team versus, again, a sort of more generalist, uh, SecOps red team. Do you have data scientists, and do you have statisticians and other folks that maybe have a different set of skills?


Will Pearce: (06:01)

Yep, absolutely. So we have a couple of members on the team that are extremely experienced data scientists and ML engineers. So it's basically a blending of those skillsets, you know, where I don't have that formal background, but I do understand how sort of attacks work and, you know, how to run an op. They understand how the algorithm works at a very deep level, and so we have a lot of fun going back and forth brainstorming ideas.


Natalia Godyla: (06:32)

So bringing this back to the Counterfit project, how did the Counterfit project evolve? As I understand it, it started as a group of attack scripts, and, and now it's an automated tool. So what did that process of evolution look like?


Will Pearce: (06:49)

So earlier I mentioned all these things are libraries and-


Natalia Godyla: (06:53)

Mm-hmm (affirmative).


Will Pearce: (06:53)

... you know, I've been at Microsoft for nine months-ish. And coming from that ops role, it just wasn't scalable. So to write a script for every attack that you wanted to do-


Natalia Godyla: (07:04)

Mm-hmm (affirmative).


Will Pearce: (07:05)

... isn't scalable. So the first thing, it was just natural to want that tool, that malware-type interface, and to wrap these into a single tool so you could run any attack script that you wanted in an automated fashion. That was it, it was just a need for an automated tool for my own purposes, and it kind of evolved into this. Truth be told, I didn't necessarily think it was gonna be as popular as it was.


Natalia Godyla: (07:29)

(laughs)


Will Pearce: (07:30)

Yeah. I wrote it because I needed it, not because, you know, we wanted to release it, but it has kind of taken on a life of its own at this point where, you know, I don't do more bug fixes than I do attacks, but I could see in the not too distant future we would need a dev to like take care of the day-to-day maintenance of it, or, you know, build in whatever features we wanted for it.


Nic Fillingham: (07:55)

And did nothing exist here in this space, Will? Was there nothing that allowed for the automation of the work that you were doing, and that's why you sort of built it? Or did something exist, but the modifications that would have been necessary to meet your needs would have been sort of too laborious?


Will Pearce: (08:10)

I shouldn't say nothing existed 'cause I don't... There was nothing that, you know... For example, data types, right? Like you have text, images, NumPy arrays of numbers, things like that. A lot of the tools only focus on one of those data types, or two let's say, right? But there's a wide variety of models at Microsoft that I need to test. And so having something that can do text, audio, image, any arbitrary data type is extremely valuable, and that was sort of the first step. It was just having a need, I didn't wanna use five different tools, you know, I wanted to use one, and so that was kind of the driver for me to build it.
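
[Editor's note: As a rough illustration of why a single, data-type-agnostic interface matters, here is a hypothetical sketch (not Counterfit's real API; all names are invented) of a target abstraction that any attack algorithm could drive regardless of whether the model consumes text, images, audio, or raw arrays.]

```python
# Hypothetical sketch of a data-type-agnostic target interface; all names are invented.
from abc import ABC, abstractmethod
from typing import Any, List

class Target(ABC):
    """One thin wrapper per model under test; attacks only ever see this interface."""
    data_type: str  # e.g. "text", "image", "audio", "numpy"

    @abstractmethod
    def sample(self) -> Any:
        """Return a starting input in the model's native data type."""

    @abstractmethod
    def predict(self, batch: List[Any]) -> List[List[float]]:
        """Submit a batch to the model and return per-class scores."""

class SpamFilterTarget(Target):
    data_type = "text"

    def sample(self) -> str:
        return "Quarterly results attached, please review before Friday."

    def predict(self, batch: List[str]) -> List[List[float]]:
        # In practice this would call an API, a mail gateway, or a local model.
        return [[0.9, 0.1] for _ in batch]  # placeholder [p(ham), p(spam)]

# Any attack written against Target.predict() works on text, image, or audio
# models alike; only the wrapper changes per target.
filter_target = SpamFilterTarget()
print(filter_target.predict([filter_target.sample()]))
```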


Nic Fillingham: (08:53)

And I noticed, uh, Will it's been published through GitHub. So is the intent here for it to be a true sort of community initiative, community project and, and have contributors and, and sort of a, a vibrant community?


Will Pearce: (09:05)

Yeah, absolutely. Yeah, that's the plan. Ram will tell you I'm not the best data scientist, so this is the blending of offensive security and machine learning, right? And data science. And so there are just conventions in the data science world that I'm not familiar with; similarly, there are conventions in the offensive security world that data scientists aren't familiar with. So moving forward, Counterfit becomes a home of sorts for these machine learning algorithms, where people feel welcome to submit new research, um, and it really becomes a platform for the conversation between machine learners and security people to evolve, to start to understand each other and what matters to the other.


Natalia Godyla: (09:51)

And are you also continuously updating the tool? So as you learn more about adversarial attacks against AI, will you be feeding that into the product, and what does that process look like?


Will Pearce: (10:04)

Yeah, yeah, absolutely. So it's built on algorithms, right?


Natalia Godyla: (10:09)

Mm-hmm (affirmative).


Will Pearce: (10:09)

Uh, attack algorithms. So an algorithm basically iterates on an input in a particular way, right? And that's how you kind of create that output that you want. So there's that piece, which is just creating new algorithms that will do whatever we think is useful for the particular task. But there are also things like a web interface that would be extremely nice for some users, or, you know, just some niceties that aren't built in yet; it's still somewhat difficult to look at the results of a scan or the samples from a scan. And so some of those things still need to be built in. But yeah, that's kind of the plan. You know, someone could submit a feature request tomorrow and we would probably build it the next day, just because we're excited to see what people do with it and what they care about with it.


Nic Fillingham: (11:05)

So Will, if we could jump forward into, I think the three core functions or the three use cases of this tool as they're sort of listed out in the blog here for those that have read the blog post. So the first one is listed out as penetration testing and red teaming AI systems, and the, the tool here is preloaded with published attack algorithms, which can be used to, to test out evading and, and stealing AI models. We've had a bunch of your colleagues, uh, and peers on the podcast before, so we've learned a little bit on the podcast here about adversarial ML. We know that it's sort of a new frontier, we know that the vast majority of organizations out there don't have anything in place to protect their AI systems. Can you tell us a bit about this first scenario here? So evading and stealing AI models, what does that sort of look like in a hypothetical sense or in the real world, and then how do we use this tool to sort of test against it?


Will Pearce: (11:59)

Let me go backwards a little bit in your questions.


Nic Fillingham: (12:01)

Please. Yeah.


Will Pearce: (12:02)

So you mentioned that organizations don't have the tools to protect these systems.


Nic Fillingham: (12:08)

Right.


Will Pearce: (12:08)

That's only partly true, only because machine learning, the model itself, is a very small part of that whole system, but there's a very mature information security presence around principles of least privilege, setting up servers, deploying endpoints. Like, there are very mature security processes that can already be attached to these things. The difference is, because machine learning people aren't cued in to this, the security apparatus at a higher level, they're not aware that these things exist, right? So you're looking at ML engineers who are responsible for deploying an endpoint to, uh, you know, let's say a public site, but they're not aware that maybe the way they're deploying it, you know, they put secrets in the code or whatever. And that's kind of what this is about. It is about the marrying of traditional information security principles and this new technology, machine learning.


Will Pearce: (13:07)

So in terms of evading a model, I mean, what that looks like is basically you have a model that is responsible for taking input and making a decision based on that input. So the classic example is images, but, you know, if you think about an authentication system, you know, where it uses your face, you know, Windows Hello, maybe there is a different face that would also work on it. So evading a model is basically just giving an input such that you get the output that you want. So in the traditional information security sense, it would be like bypassing a malware classifier, bypassing a spam filter, so that's how you get your phishing through.
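
[Editor's note: A heavily simplified sketch of what "giving an input such that you get the output you want" can look like against a black-box text classifier. The scoring function, trigger words, and substitutions below are invented stand-ins, not a real filter.]

```python
# Toy, purely illustrative hill-climbing evasion against a black-box spam scorer.
import random

SUBSTITUTIONS = {"free": "fr ee", "winner": "w1nner", "cash": "ca$h"}

def spam_score(text: str) -> float:
    """Stand-in for querying the real model: returns a pretend P(spam)."""
    triggers = sum(word in text.lower() for word in SUBSTITUTIONS)
    return min(1.0, 0.35 * triggers)

def evade(text: str, threshold: float = 0.5, max_queries: int = 50) -> str:
    """Perturb the input until the model's score drops below its decision threshold."""
    for _ in range(max_queries):
        if spam_score(text) < threshold:
            return text            # the same message now slips past the filter
        word = random.choice(list(SUBSTITUTIONS))
        text = text.replace(word, SUBSTITUTIONS[word])
    return text

print(evade("you are a winner, claim your free cash now"))
```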


Will Pearce: (13:43)

Stealing is basically turning machine learning on its head. So it's just reflecting the model back at itself. So all you do is you grab a dataset from online, there's a ton of them, for example, an email data set. So let's say the target is a spam filter. I did some research before I got to Microsoft, and it was a spam filter. In their email headers, they leaked their spam scores. So you'd send an email and you'd get one back, and in the headers it would be like 900.


Nic Fillingham: (14:12)

Hmm.


Will Pearce: (14:13)

Which I recall thinking was interesting. And it was in every email. So what we did is we grabbed a big data set of emails, like the Enron data set, and we just sent every single email, every single Enron email, through this spam filter, and we collected the emails we had already. And then for each email, we just collected the score, right? And then we just trained a local model to mimic the spam filter, and using that, we were able to sort of reverse that spam filter and figure out what words the model thought were bad and what words the model thought were good.


Will Pearce: (14:46)

And so Counterfit kind of automates that process. It gives you a framework in which you can put all that code into one place and then run that attack. The code we wrote for that particular attack, it was in like, you know, 15 different files, it was several different services. It wasn't pretty, or necessarily repeatable. And so Counterfit allows you to sort of aggregate all of the weird code that you might need, and allows you to interface some target model with any number of algorithmic attacks, including, you know, model stealing.
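
[Editor's note: The spam-filter extraction Will describes boils down to: query the target with a public corpus, record the score it leaks back, and fit a local surrogate you can inspect offline. A minimal sketch, assuming scikit-learn and a placeholder query function standing in for the leaked header score.]

```python
# Sketch of black-box extraction: query a corpus, record scores, fit a surrogate.
# query_target() is a placeholder; swap in real emails and the real score source.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression

def query_target(email_text: str) -> float:
    """Placeholder for sending one email and reading the spam score leaked in its headers."""
    return float(sum(w in email_text.lower() for w in ("winner", "free", "cash")))

# Stand-in for a large public corpus such as the Enron email data set.
emails = [
    "claim your free cash now, winner",
    "meeting notes attached for friday",
    "free prize waiting, respond today",
    "quarterly numbers look good, thanks",
]
scores = np.array([query_target(e) for e in emails])  # one query per email

# Train a local surrogate that mimics the remote filter.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(emails)
surrogate = LinearRegression().fit(X, scores)

# Inspect the surrogate offline: which words push the score up or down?
vocab = np.array(vectorizer.get_feature_names_out())
order = np.argsort(surrogate.coef_)
print("words the surrogate thinks raise the spam score:", vocab[order[-3:]])
print("words the surrogate thinks lower it:", vocab[order[:3]])
```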


Nic Fillingham: (15:22)

So I, I might've got this wrong, Will, but if the goal is to stop adversaries from potentially stealing your model using this technique here, where you'd basically grab a dataset, throw it at a model, monitor the output and then go train your own model to mimic that, how does Counterfit help protect against that? What kind of information or data does Counterfit output to help you in stopping model stealing?


Will Pearce: (15:49)

Um, (laughs) it, it doesn't.


Nic Fillingham: (15:51)

Oh.


Will Pearce: (15:52)

Counterfit is an offensive security tool. (laughs)


Nic Fillingham: (15:55)

Got it.


Will Pearce: (15:56)

So the primary piece being offense drives defense.


Nic Fillingham: (16:00)

Got it.


Will Pearce: (16:01)

So using this tool in that particular way, you can then test, right? In any number of scenarios. Before you deploy a model, you can scan it, and after you deploy a model, you can scan it, and you start to develop benchmarks. So in traditional information security, when you have a vulnerability scan, right? You scan the entire network, you get your list of critical, high, medium, low vulnerabilities. You then go start checking, you know, patching, check it, and then you re-scan the next month. This is a similar function.
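
[Editor's note: In the same spirit as the monthly vulnerability re-scan, one way to picture benchmarking a model is to run a fixed battery of attacks each cycle and diff the success rates. The function below is a hypothetical sketch; the attack callables and sample set are assumptions, not Counterfit's output format.]

```python
# Hypothetical sketch of a "scan, fix, re-scan" cadence applied to a model:
# run the same fixed battery of attacks each cycle and compare success rates.
from typing import Any, Callable, Dict, List

def run_benchmark(predict: Callable[[Any], Any],
                  attacks: Dict[str, Callable[[Callable, Any], bool]],
                  samples: List[Any]) -> Dict[str, float]:
    """Return per-attack success rates so scans can be diffed month over month."""
    report = {}
    for name, attack_succeeded in attacks.items():
        hits = sum(1 for s in samples if attack_succeeded(predict, s))
        report[name] = hits / len(samples)
    return report

# Illustrative usage with made-up names:
#   baseline = run_benchmark(model.predict, attacks, holdout)   # before deployment
#   current  = run_benchmark(model.predict, attacks, holdout)   # next scan cycle
#   worse = {k: current[k] - baseline[k] for k in attacks if current[k] > baseline[k]}
```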


Natalia Godyla: (16:34)

So we talked through two of the use cases here, the pen testing and red teaming, and then you just touched on vulnerability scanning. Can you provide a little bit more color on how you intend security professionals to use it for logging, what's the, the purpose, the driver behind that use case?


Will Pearce: (16:54)

Yeah. So logging... (laughs) Going back to security foundations, currently with machine learning, a lot of systems don't log-


Natalia Godyla: (17:00)

Mm-hmm (affirmative).


Will Pearce: (17:02)

... or they don't explicitly log for the purpose of security. So they'll log telemetry data, they'll log usage data, but that doesn't feed any higher level security processes. So Counterfit has logging built in, where it will track every input and every output, just as you would put a logging mechanism behind a model that tracks every input and every output. So we've built it in here so organizations can get some form of logging during an attack, right? So they could then turn those logs into some sort of detection pipeline, some sort of ability to detect a particular attack. But ideally organizations would log, right? They're gonna be logging anyway. And so I think, in a lot of ways, it's just about getting machine learning people to start thinking about these security motions in a consistent way. So if you're gonna collect logs, do it in a way that's repeatable (laughs) and consistent and gives you the information that you need to do whatever you need to do, whether it's, you know, telemetry data or usage data or whatever it is.
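
[Editor's note: A minimal sketch of the kind of input/output logging being described: wrap the model's predict call so every query lands in a structured record that a detection pipeline could later consume. The model object and field names are illustrative, not Counterfit's logging format.]

```python
# Illustrative input/output logging wrapper around a model's predict function.
import json
import logging
import time
from typing import Any, Callable

logging.basicConfig(level=logging.INFO, format="%(message)s")

def with_logging(predict: Callable[[Any], Any], model_name: str) -> Callable[[Any], Any]:
    """Wrap predict() so every input/output pair is recorded for later detection work."""
    def logged_predict(x: Any) -> Any:
        y = predict(x)
        logging.info(json.dumps({
            "ts": time.time(),
            "model": model_name,
            "input": repr(x),      # in production, hash or truncate large payloads
            "output": repr(y),
        }))
        return y
    return logged_predict

# Illustrative usage: scored = with_logging(spam_model.predict, "spam-filter-v3")
```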


Nic Fillingham: (18:11)

You know, you talked about a goal for Counterfit to sort of fit the nature of Metasploit, it being a popular and powerful red teaming tool. What efforts are being made, or what's being done, to ensure that this doesn't end up being an actual breach toolkit for adversaries? How do you walk that line of making a powerful tool for red teams who are ultimately trying to do good, without actually, you know, making it easier for adversaries to go out there and evade or steal models?


Will Pearce: (18:39)

I don't have a good answer for you. Well, I mean, in a lot of ways, you know, offense drives defense, right? So we think adversaries are gonna be doing this anyway. So in this way, if we can get a tool into people's hands that makes it easier for everybody (laughs), including adversaries, you know, we would hope that organizations would start putting mitigations in place for these things. If they see an uptick in attacks, they should do something about it; if they don't, then great, it's obviously not on the radar of attackers. And I would say currently it is not really on the radar of attackers.


Nic Fillingham: (19:19)

Well, not until this podcast comes out.


Will Pearce: (19:21)

Yeah, yeah. Exactly.


Natalia Godyla: (19:21)

(laughs)


Will Pearce: (19:22)

And so we're, yeah, I think we're maybe a little ahead of schedule just in terms of what this tool represents, and we might've missed the mark completely, right? Like we might be, we don't know if attackers are gonna go this route of attacking machine learning. There are certainly new attacks every year that come out, so the trend is up, but I think widespread abuse has yet to be seen, which I guess is the whole point here is to get ahead of that.


Nic Fillingham: (19:51)

Well, let me just recap to make sure I sort of understand this. So as someone red teaming and penetration testing AI machine learning systems, you had a lot of disparate scripts, a lot of disparate tools, a lot of disparate processes, and you needed to bring them all together into a single pane of glass, to use an overused, uh, analogy. So you created it first and foremost for you, then you realized it would be a powerful tool for others out there that are trying to protect AI machine learning systems through red teaming, through, as you say, offense drives defense. Can you share any examples of how the tool, or the work that you've done in protecting ML systems at Microsoft or with customers or other projects... Do you have any stories you can tell of how this tool has been used out in the wild, and some of the things that it's done to help find vulnerabilities, help patch gaps? Yeah, what are some of the positive stories or positive outcomes?


Will Pearce: (20:42)

Yeah. I mean, in the wild, I don't think so. You know, it's like when I go back-


Nic Fillingham: (20:46)

(laughs)


Will Pearce: (20:46)

... to talk to my, like, traditional red team peers, for them, machine learning is still kind of a meme in a lot of ways. So it's like they only hear about it in terms of, you know, being sold at, right? Like they only see it in an EDR, and it's like, okay, well, we've seen this story a million times. Like two years ago, it was application whitelisting. So it's gonna take, I think, a little bit to get on board, but there are a couple of use cases. There's one we did with expense fraud, where you would take a receipt and you would change a digit to be more, right? So you would spend 20 bucks, you get a receipt for 20 bucks, but you'd change the two to a three, and then you would net $10.


Will Pearce: (21:25)

In a lot of systems, there's still like a human in the loop, so a lot of engines will have a rule that says, if this is below 90% confidence, send it to a human, otherwise just trust the machine learning algorithm. There are a number of different NLP models that we've gone through, uh, with this, where you can, you know, make algorithms say racist things or impolite things, and you can basically force them to do that.
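
[Editor's note: The "below 90% confidence, send it to a human" rule is easy to picture in code. This is a toy sketch with assumed names, just to show why keeping the model confident is part of the attacker's goal in the expense-fraud example.]

```python
# Toy routing rule: low-confidence predictions go to a human reviewer.
CONFIDENCE_THRESHOLD = 0.90  # below this, a person reviews the receipt

def route_receipt(predicted_amount: float, confidence: float) -> str:
    if confidence < CONFIDENCE_THRESHOLD:
        return f"queue for human review (confidence {confidence:.2f})"
    return f"auto-approve ${predicted_amount:.2f} (confidence {confidence:.2f})"

# An attacker's goal in the expense-fraud example is to alter the digit while
# keeping the model's confidence above the threshold, so no human ever looks.
print(route_receipt(30.00, 0.97))   # slips through automatically
print(route_receipt(30.00, 0.85))   # caught by the human-in-the-loop rule
```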


Nic Fillingham: (21:56)

NLP is, uh, natural language processing?


Will Pearce: (21:58)

Mm-hmm (affirmative). Yeah. It's also neu- neuro linguistic programming-


Nic Fillingham: (22:03)

Okay. Okay.


Will Pearce: (22:03)

... and I, I think it's natural language processing. (laughs)


Nic Fillingham: (22:04)

But it's, it's sort of, it's sort of the processing of written or spoken word?


Will Pearce: (22:08)

Yup. Yeah, exactly. So, I'm sure you might've heard of GPT-3, from OpenAI.


Nic Fillingham: (22:11)

Yes, we have.


Will Pearce: (22:15)

Yeah. So there's a couple things there with, like, that dataset for example. They pulled everything from the internet, right? It's like as much public data as they possibly could, but just because it was public doesn't mean it should have been public. So there's an amount of PII that you can pull out of GPT-3 that, you know, organizations might not be aware exists inside the model. A lot of models will memorize training data, and so, you know, when you deploy an NLP model to an endpoint and you don't realize this, if that model has PII in it, you know, you're kind of exposing it to whoever has access to that endpoint. And that's a new challenge for sure.


Will Pearce: (23:02)

Also, you know, if you have PII saved in your model... Like, it's easy to say a database has PII, this falls within a particular compliance boundary, but when you say this model has PII, where does that fall? Does it fall inside of that same compliance boundary? Security would say yes, but a lot of machine learning data scientists, they're not there yet. And so, you know, you might have a model that is deployed that is backed by this NLP system where you can pull PII from, and Counterfit kind of helps automate this and helps me, you know, play and tweak and, you know, figure out what I need to send to the model to get the output that I want.
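
[Editor's note: One low-tech way to check whether a deployed text-generation model leaks memorized PII is simply to prompt it and pattern-match the completions. The generate() call below is a placeholder for whatever endpoint the model actually exposes; the prompts and regular expressions are illustrative.]

```python
# Rough sketch of probing a text-generation endpoint for memorized PII.
# generate() is a placeholder for a real completion call.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def generate(prompt: str) -> str:
    """Placeholder for a call to the deployed model's completion endpoint."""
    return "For questions contact jane.doe@example.com or 555-123-4567."

prompts = [
    "The customer's contact details are",
    "Please forward this to",
    "You can reach the account owner at",
]

for p in prompts:
    completion = generate(p)
    hits = EMAIL_RE.findall(completion) + PHONE_RE.findall(completion)
    if hits:
        print(f"possible memorized PII after prompt {p!r}: {hits}")
```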


Natalia Godyla: (23:45)

How do you coordinate with teams inside Microsoft to build a feedback loop? I'm, I'm assuming you're, as you said, tweaking along the way, and with your findings, you've discovered vulnerabilities or opportunities to evolve the way that we're handling our AI systems. How do you work with teams to better the process?


Will Pearce: (24:08)

Yeah. It's report writing. (laughs)


Natalia Godyla: (24:11)

(laughs)


Will Pearce: (24:12)

So sometimes we reach out, you know, there's a particular service we wanna go after, maybe it has a high impact, a high value to us, you know, maybe there's something we wanna do 'cause we think it's worth it for style points, so, you know, we wanna go after that. So we'll reach out and contact them, like, hey, as the Trustworthy Machine Learning team we wanna attack your model, we'll give you a report. Other times we'll go onto the Azure website, and I just look at all the products that exist and I just provision them into our own tenant and attack them from there, and then write the report and send it over.


Will Pearce: (24:50)

So it usually depends. If it's a production system, I usually provision it if I can, and go after it that way. If it's not quite there yet, or it's, you know, a high impact use case, for example the PII one that we just talked about, we'll work directly with the team and kind of set up an official project. We have rules of engagement, you know, there's a cadence, and in the end it's a report that basically states what we did, recommendations that we have, and kind of a pat on the back and-


Natalia Godyla: (25:23)

(laughs)


Will Pearce: (25:24)

... good luck, not good luck, but, you know, reach out if you need anything, kind of thing. And I would say, yeah, it's been positive. I think it's really difficult to show impact. So in a traditional information security sense, getting domain admin, you know, it's an easy way to show impact. Dumping a database full of PII, you know, it's an easy way to show impact. But, you know, when you, uh, change an image to make a dog look like a cat, and then you're like, okay, see, this is possible? Like it's a harder sell and it doesn't quite hit home. So, you know, a lot of the work done is really just trying to show impact and give teams just an easy way to see the risks that exist-


Natalia Godyla: (26:11)

Mm-hmm (affirmative).


Will Pearce: (26:12)

... without having to, not dumb it down, but without having to resort to toy examples.


Nic Fillingham: (26:19)

So are there folks out there, Will, listening to this podcast, hearing about the Counterfit tool, who may not think of themselves as sort of the target audience for this? You know, protecting AI and ML systems is obviously still very niche, and red teaming AI and ML systems, it sounds like, even more so. Can you talk to us about some of the types of data scientists, security ops folks... What are some of the roles out there of people that should be taking a look at Counterfit and sort of thinking about the AI systems that might be in use in their organizations that need to be pen tested, vulnerability tested, logged, et cetera? Who needs to use this tool that maybe doesn't realize they need to use this tool?


Will Pearce: (26:58)

You know, really anybody using machine learning. But Microsoft has a mature information security program; a lot of places don't. So what this tool doesn't give you: there's no model inventory, there's no tracking of assets. There's none of those foundational security things that would normally be in place, right? Like, how do you know what to vulnerability scan? In a traditional environment you can either scan, right? You can just scan every internal IP address possible, you know, or you can pull it out of an asset inventory, right? Organizations don't even have asset inventories for their models yet. If there is a machine learning person who is wondering, you know, what is possible with this model, like, what can I get it to do? Those are the kinds of people, and it's just bringing it into their own process, their own machine learning development life cycle, and saying, at the end of this, I'm gonna scan and see what's there.


Will Pearce: (27:53)

Or maybe they're the ones responsible for deploying models to a public endpoint, and they're like, you know what? Let's see what this thing kicks out, right? Let's see what Counterfit comes up with. We'll just point Counterfit at it, and if something falls out, we'll deal with it then. But I don't know, from the security side, anytime you mention machine learning to security people, they... math, like they just don't wanna talk to you 'cause they assume machine learning means math.


Nic Fillingham: (28:19)

(laughs)


Will Pearce: (28:20)

And in a lot of ways-


Nic Fillingham: (28:20)

Math hard.


Will Pearce: (28:21)

... it does.


Natalia Godyla: (28:21)

(laughs)


Will Pearce: (28:21)

Yeah. And to be fair, I was maybe one of those people in the beginning, but I have always enjoyed, like, numbers and data and things like that. So this is, in some ways, a dream for me, because those are the things that I'm interested in. But I would say if there is an interest in data and numbers and watching what comes out, it is a rabbit hole that just doesn't end, right? I mean, in a lot of ways, attacks are just like this: attackers need feedback, right? To be successful. And a machine learning model is the same way. You input data, you get output, and then in the middle there's some inference, there's some black box where you have to wonder what happens.


Will Pearce: (29:08)

And so I think in a lot of ways, security people already think that way. So for Counterfit, if you have a product that you wanna bypass, if you have a spam filter you wanna bypass, figure out how to use these algorithms that, you know, researchers built in your ops, and you'll find that, fortunately, all the math has been done for you. All you have to do is get your data in the right format and just let the math take care of itself.


Nic Fillingham: (29:39)

I wonder if you should make up some t-shirts or some stickers that say like, you know, just Counterfit it. Like should we verb-


Natalia Godyla: (29:45)

(laughs)


Nic Fillingham: (29:45)

... should we verb that now and then, like, put it all over Black Hat and RSA and-


Will Pearce: (29:50)

Yeah.


Nic Fillingham: (29:51)

... get all the, get all the SecOps folks out there just, uh, just point Counterfit at it and see what happens.


Will Pearce: (29:56)

Yeah. Well, it's funny. So the spam filter attack that I mentioned earlier, the reason it's called Counterfit is because it is a, like a model stealing piece. So I think in some libraries like to fit a model is the term.


Natalia Godyla: (30:11)

Mm-hmm (affirmative).


Will Pearce: (30:12)

So it's like to Counterfit is to steal it.


Nic Fillingham: (30:15)

Very clever. I think you're neck and neck with CyberBattleSim for-


Natalia Godyla: (30:19)

(laughs)


Nic Fillingham: (30:19)

... coolest, uh, ML tool name to come out of Microsoft. Will Pearce, thank you so much for joining us on Security Unlocked today. Before we wrap, before we let you go, tell us where our listeners can go to learn more about this project and/or potentially follow you on the interwebs.


Will Pearce: (30:36)

To get the tool, go to github.com/azure/counterfit, and I highly recommend the wiki. Run it in Docker and/or Ubuntu, or if you're brave, you can install it on Windows. And I am on Twitter @Moohacks, which is...


Nic Fillingham: (30:57)

Moohacks as in M-O-O or M-U? What's Moohacks?


Will Pearce: (30:59)

Uh, M-O-O... I can't remember if I have the underscore, on my Git I have Moohacks.


Nic Fillingham: (31:06)

All right. What will we find if we follow you on Twitter, or is that an NSFW question?


Will Pearce: (31:11)

No, it's mostly, uh, machine learning things... Well, it's a good mix I think. Machine learning and, uh, cybersecurity research that I like.


Nic Fillingham: (31:20)

Sounds good. All right. Well, Will Pearce once again, thanks for being on Security Unlocked.


Will Pearce: (31:23)

Yeah. Thank you very much.


Natalia Godyla: (31:25)

Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode.


Nic Fillingham: (31:36)

And don't forget to tweet us @msftsecurity, or email us at securityunlocked@microsoft.com with topics you'd like to hear on a future episode. Until then, stay safe.


Natalia Godyla: (31:47)

Stay secure.

You can, you can isolate that device or take that device off the network and refresh it and do those kinds of things. So, the first thing that needs to happen is you assume that it's going to be breached. You always assume that whatever you are going to trust is explicitly trusted. You always make sure that there is a way to explicitly trust, uh, uh, uh, either the workload or the device or the network that is connected onto the device.

Nic Fillingham: So, if we start with verify explicitly, in the traditional compute model where it's a user on a device, we can verify explicitly with, usually, multi-factor authentication. So, I have my username and password. I add an additional layer of authentication, whether it's an, you know, app on my phone, a key or something, some physical device, there's my second factor and I'm, I'm verified explicitly in that model. But again, no users, or the user's not, sort of, interacting with the device in, sort of, that traditional sense, so what are those techniques to verify explicitly on an IoT device?

Arjmand Samuel: Yeah. I, exactly. So, we, in that white paper, which we are talking about, we actually put down a few things that you can actually do to, to, en- ensure that you have all the zero trust requirements together. Now, the first one, of course, is you need, uh, all devices to have strong identity, right? Because identity is at the core. If you cannot identi- identify something you cannot, uh, give it an access control policy. You cannot trust the data that is coming out from that, uh, device. So, the first thing you do is you have a strong identity. By a strong identity we mean identity which is rooted in hardware, and so, what we call the hardware-based root of trust. It's technologies like TPM, which ensure that you have the private key secured in the hardware and you cannot get to it, and so on. So, you, you ensure that you have a, a strong identity.

Arjmand Samuel: You always have least-privilege access so you do not... And these principles have been known to our IT operations forever, right? So, many years they have been refined and, uh, people know about those, but we're applying them to the IoT world. So, least-privilege access: if our device is required to access another device or data or to push out data, it should only do that for the function it is designed for, nothing more than that. You should always have some level of, uh, device health check. Perhaps you should be able to do some kind of attestation of the device. Again, there is no user to check the device health, but you should be able to do that, and there are ways, there are services which allow you to measure something on the device and then say yes it's good or not.

Arjmand Samuel: You should be able to do a continuous update. So, in case there is a device which, uh, has been compromised, you should be able to reclaim that device and update it with a fresh image so that now you can start trusting it. And then finally you should be able to securely monitor it. And not just the device itself, but now we have technologies which can monitor the data which is passing through the network, and based on those characteristics can see if a device is attacked or being attacked or not. So, those are the kinds of things that we would recommend for a zero trust environment to take into account and, and make those requirements a must for, for IoT deployments.
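To make those "verify explicitly" requirements a bit more concrete, here is a minimal sketch of the kind of admission check Arjmand describes: hardware-rooted identity, attestation, up-to-date firmware, and least-privilege scopes. All of the names here (DeviceClaim, admit, ALLOWED_SCOPES) are hypothetical placeholders; a real deployment would back these checks with a TPM-protected key and an attestation service rather than plain booleans.

```python
# Illustrative zero-trust admission check for an IoT device (hypothetical names).
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DeviceClaim:
    device_id: str
    identity_rooted_in_hardware: bool      # e.g., private key held in a TPM, not on disk
    attestation_passed: bool               # measured boot / workload attestation result
    firmware_version: Tuple[int, int, int] # parsed semantic version
    requested_scopes: List[str] = field(default_factory=list)

# Least privilege: a device only gets the scopes its function needs.
ALLOWED_SCOPES = {"telemetry:send", "twin:read"}
MINIMUM_FIRMWARE = (2, 4, 0)

def admit(device: DeviceClaim) -> bool:
    """Verify explicitly before trusting anything about the device."""
    if not device.identity_rooted_in_hardware:
        return False                       # no strong identity, no trust
    if not device.attestation_passed:
        return False                       # health/attestation check failed
    if device.firmware_version < MINIMUM_FIRMWARE:
        return False                       # out of date: route to update, not to the network
    # Deny any scope beyond what the device's role requires.
    return set(device.requested_scopes) <= ALLOWED_SCOPES

if __name__ == "__main__":
    sensor = DeviceClaim("temp-sensor-01", True, True, (2, 5, 1), ["telemetry:send"])
    print(admit(sensor))  # True only because every explicit check passes
```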
Natalia Godyla: And what's Microsoft's role in protecting against these attacks?

Arjmand Samuel: Yeah, yeah. So, uh, a few products that we always recommend. If somebody is putting together a new IoT device right from the silicon and putting that device together, we have a great secure-by-design device, which is called Azure Sphere. Azure Sphere has a bunch of different things that it does, including identity, updates, cert management. All these are important functions that are required for that device to function. And so, a new device could use the design that we have for Azure Sphere.

Arjmand Samuel: Then we have gateway software that you put on a gateway which allows you to secure the devices behind that gateway for your deployments. We have Defender for IoT, again as I mentioned, but Defender for IoT is on-prem, so you can actually monitor all the traffic on the network and on the devices. You could also put an agent, a Micro Agent, on these devices, but then it also connects to Azure Sentinel. Azure Sentinel is an enterprise-class user experience for security administrators to know what bad things are happening on-prem. So, the whole end-to-end thing works all the way from the network and brownfield devices to the Cloud.

Arjmand Samuel: We also have things like, uh, IoT Hub Device Provisioning Service. Device Provisioning Service is an interesting concept. I'll try to briefly describe that. So, what happens is when you have an identity on a device and you want to actually put that device, deploy that device in your environment, it has to be linked up with a service in the Cloud so that it knows the device, there's an identity which is shared and so on. Now, you could do it manually. You could actually bring that device in, read a code, put it in the Cloud and you're good to go because now the Cloud knows about that device, but then what do you do when you have to deploy a million devices? And we're talking about IoT scale, millions. A fleet of millions of devices. If you take that same approach of reading a key and putting it in the Cloud, one, you'd make mistakes. Second, you will probably need a lifetime to take all those keys and put them in the Cloud.

Arjmand Samuel: So, in order to solve that problem, we have the Device Provisioning Service, which is a service in the Cloud. It is, uh, linked up to the OEMs manufacturing the devices. And when you deploy your device in the field, you do not have to do any of that. The credentials are passed between the service and the, and the device. So, so, that's another service, IoT Hub Device Provisioning Service.

Arjmand Samuel: And then we have, uh, a piece of work that we have done, which is the certification of IoT devices. So, again, you need the devices to have certain security properties. And how do you do that? How do you ensure that they have the right security properties, like identity and cert management and updateability and so on? We have what we call the Edge Secured-core Certification as well as the Azure Certified Device Program. So, any device which is in there has been tested by us and we certify that that device has the right security properties. So, we encourage our customers to actually pick from those devices so that they, they actually get the best security properties.
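For a sense of what the "no manual key copying" flow looks like in practice, here is a minimal sketch of registering a single device through the Device Provisioning Service, assuming the symmetric-key enrollment flow of the azure-iot-device Python SDK. The ID scope, registration ID and key below are placeholders, and a production fleet would keep the key in hardware rather than in code.

```python
# Sketch of one device registering itself via IoT Hub Device Provisioning Service
# (assumes the azure-iot-device Python SDK; all credential values are placeholders).
from azure.iot.device import ProvisioningDeviceClient

provisioning_client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host="global.azure-devices-provisioning.net",  # public DPS endpoint
    registration_id="edge-node-0001",        # placeholder: the device's enrollment ID
    id_scope="0ne0FAKE000",                   # placeholder: your DPS ID scope
    symmetric_key="base64-device-key-here",   # placeholder: per-device enrollment key
)

registration_result = provisioning_client.register()
if registration_result.status == "assigned":
    # DPS has linked the device to an IoT hub; no keys were copied by hand.
    print("Assigned to hub:", registration_result.registration_state.assigned_hub)
```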
Natalia Godyla: Wow. That's a lot, which is incredible. What's next for Microsoft's, uh, approach to IoT security?

Arjmand Samuel: Yeah, yeah. So, uh, one of the key things that we have heard our customers, anybody who's going into IoT, ask is the question, what is the risk I'm taking? Right? So, I'm deploying all these devices in my factories and robotic arms, connecting them, and so on, but there's a risk here. And how do I quantify that risk? How do I understand th- that risk and how do I do something about that risk?

Arjmand Samuel: So, we, we got those questions many years back, like four, five years back. We started working with the industry, and together with the Industrial Internet Consortium, IIC, which is a consortium out there with many companies part of it, we led something called the Security Maturity Model for IoT. So, so, we put down a set of principles and a set of processes you follow to evaluate the maturity of your security in IoT, right? So, it's an actionable thing. You take the document, you evaluate, and then once you have evaluated, it actually gives you a score. It says you're level one, or two, or three, or four for, say, authentication or access control management. And then based on th- that level, you know where you are, first of all. So, you know what your weaknesses are and what you need to do. So, that's a very actionable thing. But beyond that, if you're at level two and you want to be at level four, and by "want to" I mean your scenario dictates that you should be at level four, it is actionable. It gives you a list of things to do to go from level two to level four. And then you can reevaluate yourself and then you know that you're at level four. So, that's a maturity model.

Arjmand Samuel: Now, in order to operationalize that program, in partnership with the IIC, and the IIC's help, uh, has been instrumental here, we have been working on a training program where we have been training auditors. These are IoT security auditors, third-party, independent auditors who are now trained on the SMM, the Security Maturity Model. And we tell our customers, if you have a concern, get yourself audited using the SMM, using the auditors, and that will tell you where you are and where you need to go. So, it's evolving. Security for IoT's evolving, but I think we are at the forefront of that evolution.
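As a purely illustrative aside, the "score, find the gap, act on it" loop Arjmand describes can be pictured with a toy calculation like the one below. The practice names and levels are hypothetical placeholders, not the actual catalog of the IIC Security Maturity Model.

```python
# Toy gap calculation in the spirit of an SMM-style assessment (hypothetical data).
current = {"asset_inventory": 2, "device_identity": 1, "monitoring": 2, "patching": 1}
target  = {"asset_inventory": 3, "device_identity": 4, "monitoring": 3, "patching": 3}

# Only practices where the scenario dictates a higher level produce action items.
gaps = {p: target[p] - current[p] for p in target if target[p] > current[p]}
for practice, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{practice}: raise maturity by {gap} level(s) "
          f"(now {current[practice]}, goal {target[practice]})")
```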
Nic Fillingham: Just to, sort of, finish up here, I'm thinking of some of the recent IoT security stories that were in the news. We won't mention any specifically, but there, there have been some recently. My takeaway, hearing those stories, reading those stories in the news, is that, oh, wow, there's probably a lot of organizations out here, and maybe individuals at companies, that are using IoT and OT devices that maybe don't see themselves as being security people or having to think about IoT security or, you know, OT security. I just wonder, do you think there is a, a population of folks out here that don't think of themselves as IoT security people, but they really are? And then therefore, how do we sort of go find those people and help them go get educated about securing IoT devices?

Arjmand Samuel: Yeah, that's, uh, that's exactly what we are trying to do here. So, uh, people who know security can obviously know the bad things that can happen and can do something about it, but the worst part is that in OT, people are not thinking about all the bad things that can happen in the cyber world. You mentioned that example with that treatment plant. It should never have been connected to the network, unless required. And if it was connected to the, uh, to the network, to the internet, you should have had a ton of mitigations in place in case somebody was trying to come in, and they should have been stopped. And in that particular case, y- there was a phishing attack and the administrative password was, was taken over. But even with that, some of our products, like Defender for IoT, can actually detect the administrator behavior and can, can detect if an administrator is trying to do bad things. It can still tell other administrators there's bad things happening.

Arjmand Samuel: So, there's a ton of things that one could do, and it all comes down, what we have realized is it all comes down to making sure that this word gets out, that people know that there are bad things that can happen with IoT, and it's not only your data being stolen. It's very bad things, as in that example. And so, get the word out, uh, so that we can, uh, we can actually make IoT more secure.

Nic Fillingham: Got it. Arjmand, again, thanks so much for your time. It sounds like we really need to get the word out. IoT security is a thing. You know, if you work in an organization that employs IoT or OT devices, or think you might, go and download this white paper. Um, we'll put the link in the, uh, in the show notes. You can just search for it also, probably on the Microsoft Security Blog, and learn more about cyber security for IoT, how to apply the zero trust model. Share it with your, with your peers and, uh, let's get as much education as we can out there.

Arjmand Samuel: Thank you very much for this, uh, opportunity.

Nic Fillingham: Thanks, Arjmand, for joining us. I think we'll definitely touch on cyber security for IoT, uh, in future episodes. So, I'd love to talk to you again. (music)

Arjmand Samuel: Looking forward to it. (music)

Natalia Godyla: Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode.

Nic Fillingham: And don't forget to Tweet us @MSFTSecurity or email us at securityunlocked@Microsoft.com with topics you'd like to hear on a future episode. (music) Until then, stay safe.

Natalia Godyla: Stay secure. (music)
7/7/2021

Looking a Gift Card Horse in the Mouth

Ep. 35
Is it just me, or do you also miss the good ol' days of fraudulent activity? You remember the kind I'm talking about: the emails from princes around the world asking for just a couple hundred dollars to help them unfreeze or retrieve their massive fortune, which they would share with you. Attacks have grown more nuanced, complex, and invasive since then, but because of the unbelievable talent at Microsoft, we're constantly getting better at defending against them.

On this episode of Security Unlocked, hosts Nic Fillingham and Natalia Godyla sit down with returning champion Emily Hacker to discuss business email compromise (BEC), an attack in which perpetrators pretend to be someone from the victim's place of work and instruct them to purchase gift cards and send them to the scammer. Maybe it's good to look a gift card horse in the mouth?

In This Episode You Will Learn:

• Why BEC is such an effective and pervasive attack
• What are the key things to look out for to protect yourself against one
• Why BEC emails are difficult to track

Some Questions We Ask:

• How do the attackers mimic a true-to-form email from a colleague?
• Why do we classify this type of email attack separately from others?
• Why are they asking for gift cards rather than cash?

Resources:

Emily Hacker's LinkedIn:
https://www.linkedin.com/in/emilydhacker/

FBI's 2020 Internet Crime Report:
https://www.ic3.gov/Media/PDF/AnnualReport/2020_IC3Report.pdf

Nic Fillingham's LinkedIn:
https://www.linkedin.com/in/nicfill/

Natalia Godyla's LinkedIn:
https://www.linkedin.com/in/nataliagodyla/

Microsoft Security Blog:
https://www.microsoft.com/security/blog/

Related:

Security Unlocked: CISO Series with Bret Arsenault
https://SecurityUnlockedCISOSeries.com

Transcript:

[Full transcript can be found at https://aka.ms/SecurityUnlockedEp35]

Nic Fillingham: Hello, and welcome to Security Unlocked, a new podcast from Microsoft, where we unlock insights from the latest in news and research from across Microsoft security engineering and operations teams. I'm Nic Fillingham.

Natalia Godyla: And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft security, deep dive into the newest threat intel, research and data science.

Nic Fillingham: And profile some of the fascinating people working on artificial intelligence in Microsoft security.

Natalia Godyla: And now, let's unlock the pod.

Nic Fillingham: Hello listeners, hello, Natalia, welcome to episode 35 of Security Unlocked. Natalia, how are you?

Natalia Godyla: I'm doing well as always, and welcome everyone to another show.

Nic Fillingham: It's probably quite redundant, me asking you how you are and you asking me how you are, 'cause that's not really a question that you really answer honestly, is it? It's not like, "Oh, my right knee's packing it in a bit," or "I'm very hot."

Natalia Godyla: Yeah, I'm doing terrible right now, actually. I, I just, uh-

Nic Fillingham: Everything is terrible.

Natalia Godyla: (laughs)

Nic Fillingham: Well, uh, our guest today is, is a returning champ, Emily Hacker. This is her third, uh, appearance on Security Unlocked, and, and she's returning to talk to us about a, uh, new business email compromise campaign that she and her colleagues helped unearth, focusing on some sort of gift card scam.

Nic Fillingham: We've covered business email compromise, or BEC, on the podcast before. Uh, we had, uh, Donald Keating join us, uh, back in the early days of Security Unlocked on episode six.
The campaign itself, not super sophisticated as, as Emily sort of explains, but so much more sort of prevalent than I think a lot of us sort of realize. BEC was actually the number one reported source of financial loss to the FBI in 2020. Like by an order of magnitude above sort of, you know, just places second place, third place, fourth place. You know, I think the losses were in the billions, this is what was reported to the FBI, so it's a big problem. And thankfully, we've got people like, uh, Emily on it.Nic Fillingham:Natalia, can you give us the TLDR on the, on the campaign that Emily helps describe?Natalia Godyla:Yeah, as you said, it's, uh, a BEC gift card campaign. So the attackers use typosquatted domains, and socially engineered executives to request from employees that they purchase gift cards. And the request is very vague. Like, "I need you to do a task for me, "or "Let me know if you're available." And they used that authority to convince the employees to purchase the gift cards for them. And they then co-converted the gift cards into crypto at, at scale to collect their payout.Nic Fillingham:Yeah, and we actually discuss with Emily that, that between the three of us, Natalia, myself and Emily, we actually didn't have a good answer for how the, uh- Natalia Godyla:Mm-hmm (affirmative).Nic Fillingham:... these attackers are laundering these gift cards and, and converting them to crypto. So we're gonna, we're gonna go and do some research, and we're gonna hopefully follow up on a, on a future episode to better understand that process. Awesome. And so with that, on with the pod.Natalia Godyla:On with the pod.Nic Fillingham:Welcome back to the Security Unlocked podcast. Emily hacker, how are you?Emily Hacker:I'm doing well. Thank you for having me. How are you doing?Nic Fillingham:I'm doing well. I'm trying very hard not to melt here in Seattle. We're recording this at the tail end of the heat wave apocalypse of late June, 2021. Natalia, are you all in, I should have asked, have you melted or are you still in solid form?Natalia Godyla:I'm in solid form partially because I think Seattle stole our heat. I'm sitting in Los Angeles now.Nic Fillingham:Uh huh, got it. Emily, thank you for joining us again. I hope you're also beating the heat. You're here to talk about business email compromise. And you were one of the folks that co-authored a blog post from May 6th, talking about a new campaign that was discovered utilizing gift card scams. First of all, welcome back. Thanks for being a return guest. Second of all, do I get credit or do I get blame for the tweet that enabled you to, to- Emily Hacker:(laughs) It's been so long, I was hoping you would have forgotten.Nic Fillingham:(laughs) Emily and I were going backward forward on email, and I basically asked Emily, "Hey, Emily, who's like the expert at Microsoft on business email compromise?" And then Emily responded with, "I am."Emily Hacker:(laughs)Nic Fillingham:As in, Emily is. And so I, I think I apologized profusely. If I didn't, let me do that now for not assuming that you are the subject matter expert, but that then birthed a very fun tweet that you put out into the Twitter sphere. Do you wanna share that with the listeners or is this uncomfortable and we need to cut it from the audio?Emily Hacker:No, it's fine. You can share with the listeners. I, uh- Nic Fillingham:(laughs)Emily Hacker:... I truly was not upset. I don't know if you apologized or not, because I didn't think it was the thing to apologize for. 
Because I didn't take your question as like a, "Hey," I'm like, "Can you like get out of the way I did not take it that way at all. It was just like, I've been in this industry for five years and I have gotten so many emails from people being like, "Hey, who's the subject matter in X?" And I'm always having to be like, "Oh, it's so and so," you know, or, "Oh yeah, I've talked to them, it's so-and-so." And for once I was like, "Oh my goodness, it me."Natalia Godyla:(laughs)Emily Hacker:Like I'm finally a subject matter in something. It took a long time. So the tweet was, was me being excited that I got to be the subject matter expert, not me being upset at you for asking who it was.Nic Fillingham:No, I, I took it in it's, I did assume that it was excitement and not crankiness at me for not assuming that it would be you. But I was also excited because I saw the tweet, 'cause I follow you on Twitter and I'm like, "Oh, that was me. That was me." And I got to use- Emily Hacker:(laughs)Nic Fillingham:... I got to use the meme that's the s- the, the weird side eye puppet, the side, side eye puppet. I don't know if that translates. There's this meme where it's like a we-weird sort of like H.R. Pufnstuf sort of reject puppet, and it's sort of like looking sideways to the, to the camera.Emily Hacker:Yes.Nic Fillingham:Uh, I've, and I've- Emily Hacker:Your response literally made me laugh a while though alone in my apartment.Nic Fillingham:(laughs_ I've never been able to use that meme in like its perfect context, and I was like, "This is it."Emily Hacker:(laughs) We just set that one up for a comedy home run basically.Nic Fillingham:Yes, yes, yes. And I think my dad liked the tweet too- Natalia Godyla:(laughs)Nic Fillingham:... so I think I had that, so that was good.Emily Hacker:(laughs)Nic Fillingham:Um, he's like my only follower.Emily Hacker:Pure success.Nic Fillingham:Um, well, on that note, so yeah, we're here to talk about business email compromise, which we've covered on the, on the podcast before. You, as I said, uh, co-authored this post for May 6th. We'll have a, a broader conversation about BEC, but let's start with these post. Could you, give us a summary, what was discussed in this, uh, blog post back on, on May 6th?Emily Hacker:Yeah, so this blog post was about a specific type of business email compromise, where the attackers are using lookalike domains and lookalike email addresses to send emails that are trying, in this particular case, to get the user to send them a gift card. And so this is not the type of BEC where a lot of people might be thinking of in terms of conducting wire transfer fraud, or, you know, you read in the news like some company wired several million dollars to an attacker. That wasn't this, but this is still creating a financial impact and that the recipient is either gonna be using their own personal funds or in some cases, company funds to buy gift cards, especially if the thread actor is pretending to be a supervisor and is like, "Hey, you know, admin assistant, can you buy these gift cards for the team?" They're probably gonna use company funds at that point.Emily Hacker:So it's still something that we keep an eye out for. And it's actually, these gift card scams are far and away the most common, I would say, type of BEC that I am seeing when I look for BEC type emails. 
It's like, well over, I would say 70% of the BEC emails that I see are trying to do this gift card scam, 'cause it's a little easier, I would say, for them to fly under the radar maybe, uh, in terms of just like, someone's less likely to report like, "Hey, why did you spend $30 on a gift card?" than like, "Hey, where did those like six billion dollars go?" So like in that case, this is probably a little easier for them to fly under the radar for the companies. But in terms of impact, if they send, you know, hundreds upon hundreds of these emails, the actors are still gonna be making a decent chunk of change at the end of the day.

Emily Hacker: In this particular instance, the attackers had registered a couple hundred lookalike domains that aligned with real companies, but were just a couple of letters or digits off, or were using a different TLD, or used like a number instead of a letter or something, something along those lines, to where you can look at it and be like, "Oh, I can tell that the attacker is pretending to be this other real company, but they are actually creating their own."

Emily Hacker: But what was interesting about this campaign, that I found pretty silly honestly, was that normally when the attacker does that, one would expect them to impersonate the company that their domain is looking like, and they totally didn't in this case. So they registered all these domains that were lookalike domains, but then when they actually sent the emails, they were pretending to be different companies, and they would just change the display name of their email address to match whoever they were impersonating.

Emily Hacker: So, one of the examples in the blog: they're impersonating a guy named Steve, and Steve is a real executive at the company that they sent this email to. But the email address that they registered here was not Steve, and the domain was not for the company that Steve works at. So they got a little bit, I don't know if they like got their wires crossed, or if they just were using the same infrastructure that they were gonna use for a different attack, but these domains were registered the day before this attack. So it definitely seems deliberate rather than opportunistic; it doesn't seem like some actors were like, "Oh, hey look, free domains. We'll send some emails." Like, they were brand new and just used for strange purposes.

Natalia Godyla: Didn't they also fake data in the headers? Why would they be so careless about connecting the company to the language in the email body but go through the trouble of editing the headers?

Emily Hacker: That's a good question. They did edit the headers in one instance that I was able to see, granted I didn't see every single email in this attack because I just don't have that kind of data. And what they did was they spoofed one of the headers, which is the in-reply-to header, which is the header that would let us know that it's a real reply. But I worked really closely with a lot of email teams and we were able to determine that it was indeed a fake reply.

Emily Hacker: My only guess, honestly, as to why that happened is one of two things. One, the domain thing was like a, a mess up, like if they had better intentions and the domain thing went awry. Or number two, it's possible that this is multiple attackers conducting.
If one guy was responsible for the emails with the mess of domains, and a different person was responsible for the one that had the email header, like maybe the email header guy is just a little bit more savvy at whose job of crime than the first guy.Natalia Godyla:(laughs)Nic Fillingham:Yeah, I li- I like the idea of, uh, sort of ragtag grubbing. I don't mean to make them an attractive image, but, you know, a ragtag group of people here. And like, you've got a very competent person who knows how to go and sort of spoof domain headers, and you have a less competent person who is- Emily Hacker:Yeah. It's like Pinky and the Brain.Nic Fillingham:Yeah, it is Pinky and the Brain. That's fantastic. I love the idea of Pinky and the Brain trying to conduct a multi-national, uh- Emily Hacker:(laughs)Nic Fillingham:... BEC campaign as their way to try and take over the world. Can we back up a little bit? We jumped straight into this, which is totally, you know, we asked you to do that. So, but let's go back to a little bit of basics. BEC stands for business email compromise. It is distinct from, I mean, do you say CEC for consumer email compromise? Like what's the opposite side of that coin? And then can you explain what BEC is for us and why we sort of think about it distinctly?Emily Hacker:Mm-hmm (affirmative), so I don't know if there's a term for the non-business side of BEC other than just scam. At its basest form, what BEC is, is just a scam where the thread actors are just trying to trick people out of money or data. And so it doesn't involve any malware for the most part at the BEC stage of it. It doesn't involve any phishing for the most part at the BEC stage of it. Those things might exist earlier in the chain, if you will, for more sophisticated attacks. Like an attacker might use a phishing campaign to get access before conducting the BEC, or an attacker might use like a RAT on a machine to gain access to emails before the actual BEC. But the business email compromise email itself, for the most part is just a scam. And what it is, is when an attacker will pretend to be somebody at a company and ask for money data that can include, you know, like W-2's, in which case that was still kind of BEC.Emily Hacker:And when I say that they're pretending to be this company, there's a few different ways that that can happen. And so, the most, in my opinion, sophisticated version of this, but honestly the term sophisticated might be loaded and arguable there, is when the attacker actually uses a real account. So business email compromise, the term might imply that sometimes you're actually compromising an email. And those are the ones where I think are what people are thinking of when they're thinking of these million billion dollar losses, where the attacker gains access to an email account and basically replies as the real individual.Emily Hacker:Let's say that there was an email thread going on between accounts payable and a vendor, and the attacker has compromised the, the vendor's email account, well, in the course of the conversation, they can reply to the email and say, "Hey, we just set up a new bank account. Can you change the information and actually wire the million dollars for this particular project to this bank account instead?" And if the recipient of that email is not critical of that request, they might actually do that, and then the money is in the attacker's hands. 
And it's difficult to be critical of that request because it'll sometimes literally just be a reply to an ongoing email thread with someone you've probably been doing business with for a while, and nothing about that might stand out as strange, other than them changing the account. It can be possible, but difficult, to get it back in those cases. But those are definitely the ones that are, I would say, the most tricky to spot.

Emily Hacker: More common, I would say, what we see is the attacker is not actually compromising an email, not necessarily gaining access to it, but using some means of pretending or spoofing or impersonating an email account that they don't actually have access to. And that might include registering lookalike domains, as in the case that we talked about in this blog. And that can be typosquatted domains or just lookalike domains, where, for example, I always use this example, even though I doubt this domain is available, but instead of doing microsoft.com, they might do Microsoft with a zero, or like Microsoft using R-N-I-C-R-O-S-O-F-T.com, so it looks like an M at first glance, but it's actually not. Or they might do something like microsoft-com.org or something, which that obviously would not be available, but you get the point. Where they're just getting these domains that kind of look like the right one so that somebody, at first glance, will just look at it and be like, "Oh yeah, that looks like Microsoft. This is the right person."

Emily Hacker: They might also, more commonly, just register emails using free email services and either do one of two things: make the email specific to the person they're targeting, so let's say that an attacker was pretending to be me, they might register emilyhacker@gmail.com, or more recently and maybe a little bit more targeted, they might register like emily.hacker.microsoft.com@gmail.com, and then they'll send an email as me. And then on the, I would say, less sophisticated end of the spectrum, is when they are just creating an email address that's like bob@gmail.com. And then they'll use that email address for like tons of different targets, like different victims. And they'll either just change the display name to match someone at the company that they're targeting, or they might just change it to be like "executive" or like "CEO" or something, which, like, the least believable of the bunch in my opinion is when they're just reusing the free emails.

Emily Hacker: So that's kind of the different ways that they can impersonate or pretend to be these companies, but I see all of those being used in various ways. But for sure the most common is the free email service. And I mean, it makes sense, because if you're gonna register a domain name, that costs money and it takes time and takes skill, same with compromising an email account, but it's quick and easy just to register a free email account. So, yeah.
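To make the lookalike-domain tricks Emily describes a bit more tangible, here is a rough, illustrative sketch of flagging sender domains that imitate a trusted domain, using simple homoglyph folding plus a fuzzy string match. This is heuristic demo code only, not the detection logic Microsoft uses, and the allowlist and thresholds are made-up placeholders.

```python
# Rough heuristic for spotting lookalike sender domains ("rn" for "m", zero for "o",
# swapped TLDs). Illustrative only; not a production detector.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"microsoft.com", "contoso.com"}   # placeholder allowlist

def normalize(domain: str) -> str:
    """Fold common visual tricks into a canonical form before comparing."""
    d = domain.lower()
    for trick, plain in (("rn", "m"), ("0", "o"), ("1", "l"), ("vv", "w")):
        d = d.replace(trick, plain)
    return d

def lookalike_of(sender_domain: str, threshold: float = 0.85):
    """Return a trusted domain this sender appears to imitate, or None."""
    if sender_domain in TRUSTED_DOMAINS:
        return None                          # exact match: not a lookalike
    norm = normalize(sender_domain)
    for trusted in TRUSTED_DOMAINS:
        if norm.split(".")[0] == trusted.split(".")[0]:
            return trusted                   # same name, different TLD
        if SequenceMatcher(None, norm, normalize(trusted)).ratio() >= threshold:
            return trusted                   # a couple of characters off
    return None

print(lookalike_of("rnicrosoft.com"))        # -> microsoft.com
print(lookalike_of("micr0soft.net"))         # -> microsoft.com
```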
Nic Fillingham: So just to sort of summarize here. So business email compromise i-is obviously very complex. There's lots of facets to it.

Emily Hacker: Mm-hmm (affirmative).

Nic Fillingham: It sounds like, first of all, it's targeted at businesses as opposed to targeted individuals. Targeting individuals is just more simple scams. We can talk about those, but business email compromise, targeted at businesses-

Emily Hacker: Mm-hmm (affirmative).

Nic Fillingham: ... and the end goal is probably to get some form of compromise, which could be done in different ways, but some sort of compromise of a communication channel or a communication thread with that business to ultimately get some money out of them?

Emily Hacker: Yep, so it's a social engineering scheme to get whatever their end goals are, usually money. Yeah.

Nic Fillingham: Got it. Like, if I buy a gift card for a friend or a family member for their birthday, and I give that to them, the wording on the bottom says pretty clearly, like, not redeemable for cash. Like it's-

Emily Hacker: So-

Nic Fillingham: ... so what's the loophole they're taking advantage of here?

Emily Hacker: Criminals kind of crime. Apparently-

Natalia Godyla: (laughs)

Emily Hacker: ... there are sites, you know, on the internet specifically for cashing out gift cards for cryptocurrency.

Nic Fillingham: Hmm.

Emily Hacker: And so they get these gift cards specifically so that they can cash them out for cryptocurrency, which then is a lot, obviously, less traceable as opposed to just cash. So that is the appeal of gift cards, easier to switch for, I guess, cryptocurrency in a much less traceable manner for the criminals in this regard. And there are probably, you know, you can sell them. Also, you can sell someone a gift card and be like, "Hey, I got a $50 iTunes gift card. Give me $50 and you got an iTunes gift card." I don't know if iTunes is even still a thing. But like that is another means of, it's just, I think a way of like, especially the cryptocurrency one, it's just a way of distancing themselves one step from the actual payout that they end up with.

Nic Fillingham: Yeah, I mean, it's clearly a, a laundering tactic.

Emily Hacker: Mm-hmm (affirmative).

Nic Fillingham: It's just, I'm trying to think of like, someone's eventually trying to get cash out of this gift card-

Emily Hacker: Mm-hmm (affirmative).

Nic Fillingham: ... and instead of going into Target with 10,000 gift cards, and spending them all, and then turning right back around and going to the returns desk and saying like, "I need to return these $10,000 that I just bought."

Emily Hacker: Mm-hmm (affirmative).

Nic Fillingham: I guess I'm just puzzled as to how, at scale-

Emily Hacker: Yeah.

Nic Fillingham: ... and I guess that's the key word here, at scale, at a criminal scale, how are they, what's the actual return? Are they getting, are they getting 50 cents on the dollar? Are they getting five cents on the dollar? Are they getting 95 cents on the dollar? Um, it sounds like, maybe I don't know how to ask that question, but I think it's a fascinating one, I'd love to learn more about.

Emily Hacker: It is a good question. I would imagine that the, the sites where they exchange them for cryptocurrency are set up in a way where, rather than one person ending up with all the gift cards to where you have an issue, like what you're talking about with like, "Hey, uh, can I casually return these six million gift cards?" Like rather than that, it's more distributed. But there probably is a surcharge in terms of they're not getting a one-to-one, but it's-

Nic Fillingham: Yeah.

Emily Hacker: ... I would not imagine that it's very low. Or like, I would not imagine that they're getting five cents on the dollar, I would imagine it's higher than that.

Nic Fillingham: Got it.

Emily Hacker: But I don't know. So, that's a good question.

Natalia Godyla: And we're talking about leveraging this cryptocurrency model to cash them out.
So has there been an increase in these scams because they now have this ability to cash them out for crypto? Like, was that a driver?Emily Hacker:I'm not sure. I don't know how long the crypto cash out method has been available.Natalia Godyla:Mm-hmm (affirmative).Emily Hacker:I've only recently learned about it, but that's just because I don't spend, I guess I don't spend a lot of time dealing with that end of the scam. For the most part, my job is looking at the emails themselves. So, the, learning what they're doing once they get the gift cards was relatively new to me, but I don't think it's new to the criminals. So it's hard for me to answer that question, not knowing how long the, the crypto cash out method has been available to them. But I will say that it does feel like, in the last couple of years, gift card scams have just been either increasing or coming into light more, but I think increasing.Nic Fillingham:Emily, what's new about this particular campaign that you discussed in the blog? I-it doesn't look like there's something very new in the approach here. This feels like it's a very minor tweak on techniques that have been employed for a while. Tell me what's, what's new about this campaign? (laughs)Emily Hacker:(laughs) Um, so I would agree that this is not a revolutionary campaign.Nic Fillingham:Okay.Emily Hacker:And I didn't, you know, choose to write this one into the blog necessarily because it's revolutionary, but rather because this is so pervasive that I felt like it was important for Microsoft customers to be aware that this type of scam is so, I don't know what word, now we're both struggling with words, I wanna say prolific, but suddenly the definition of that word seems like it doesn't fit in that sentence.Nic Fillingham:No, yeah, prolific, that makes sense. Emily Hacker:Okay.Nic Fillingham:Like, this is, it sounds like what you're saying is, this blog exists not because this campaign is very unique and some sort of cutting-edge new technique, it exists because it's incredibly pervasive.Emily Hacker:Yes.Nic Fillingham:And lots and lots of people and lots and lots of businesses are probably going to get targeted by it. Emily Hacker:Exactly.Nic Fillingham:And we wanna make sure everyone knows about it.Emily Hacker:And the difference, yes, and the, the only real thing that I would say set this one apart from some of the other ones, was the use of the lookalike domains. Like so many of the gift cards scams that I see, so many of the gift cards scams that I see are free email accounts, Gmail, AOL, Hotmail, but this one was using the lookalike domains. And that kind of gave us a little bit more to talk about because we could look into when the domains were registered. I saw that they were registered the day, I think one to two days before the attack commenced. And that also gave us a little bit more to talk about in terms of BEC in the blog, because this kind of combined a couple of different methods of BEC, right? It has the gift cards scam, which we see just all the time, but it also had that kind of lookalike domain, which could help us talk about that angle of BEC.Emily Hacker:But I had been, Microsoft is, is definitely starting to focus in on BEC, I don't know, starting to focus in, but increasing our focus on BEC. And so, I think that a lot of the stuff that happens in BEC isn't new. Because it's so successful, there's really not much in the way of reason for the attackers to shift so dramatically their tactics. 
I mean, even with the more sophisticated attacks, such as the ones where they are compromising an account, those are still just like basic phishing emails, logging into an account, setting up forwarding rules, like this is the stuff that we've been talking about in BEC for a long time. But I think Microsoft is talking about these more now because we are trying to get the word out, you know, about this being such a big problem and wanting to shift the focus more to BEC so that more people are talking about it and solving it. Natalia Godyla:It seemed like there was A/B testing happening with the cybercriminals. They had occasionally a soft intro where someone would email and ask like, "Are you available?" And then when the target responded, they then tried to get money from that individual, or they just immediately asked for money.Emily Hacker:Mm-hmm (affirmative).Natalia Godyla:Why the different tactics? Were they actually attempting to be strategic to test which version worked, or was it just, like you said, different actors using different methods?Emily Hacker:I would guess it's different actors using different methods or another thing that it could be was that they don't want the emails to say the same thing every time, because then it would be really easy for someone like me to just identify them- Natalia Godyla:Mm-hmm (affirmative).Emily Hacker:... in terms of looking at mail flow for those specific keywords or whatever. If they switch them up a little bit, it makes it harder for me to find all the emails, right? Or anybody. So I think that could be part of the case in terms of just sending the exact same email every time is gonna make it really easy for me to be like, "Okay, well here's all the emails." But I think there could also be something strategic to it as well. I just saw one just yesterday actually, or what day is it, Tuesday? Yeah, so it must've been yesterday where the attacker did a real reply.Emily Hacker:So they sent the, the soft opening, as you said, where it just says, "Are you available?" And then they had sent a second one that asked that full question in terms of like, "I'm really busy, I need you to help me, can you call me or email me," or something, not call obviously, because they didn't provide a phone number. Sometimes they do, but in this case, they didn't. And they had actually responded to their own email. So the attacker replied to their own email to kind of get that second push to the victim. The victim just reported the email to Microsoft so they didn't fall for it. Good for them. But it does seem that there might be some strategy involved or desperation. I'm not sure which one.Natalia Godyla:(laughs) Fine line between the two.Emily Hacker:(laughs)Nic Fillingham:I'd want to ask question that I don't know if you can answer, because I don't wanna ask you to essentially, you know, jeopardize any operational security or sort of tradecraft here, but can you give us a little tidbit of a glimpse of your, your job, and, and how you sort of do this day-to-day? Are you going and registering new email accounts and, and intentionally putting them in dodgy places in hopes of being the recipient? Or are you just responding to emails that have been reported as phishing from customers? 
Are you doing other things like, again, I don't wanna jeopardize any of your operational security or, you know, the processes that you use, but how do you find these?Emily Hacker:Mm-hmm (affirmative).Nic Fillingham:And how do you then sort of go and follow the threads and uncover these campaigns?Emily Hacker:Yeah, there's a few ways, I guess that we look for these. We don't currently have any kind of like Honey accounts set up or anything like that, where we would be hoping to be targeted and find them this way. I know there are different entities within Microsoft who are, who do different things, right? So my team is not the entity that would be doing that. So my team's job is more looking at what already exists. So we're looking at stuff that customers have reported, and we're also looking at open source intelligence if anyone else has tweeted or released a blog or something about an ongoing BEC campaign, that might be something that then I can go look at our data and see if we've gotten.Emily Hacker:But the biggest way outside of those, those are the two, like I would say smaller ways. The biggest way that we find these campaigns is we do technique tracking. So we have lots of different, we call them traps basically, and they run over all mail flow, and they look for certain either keywords or there are so many different things that they run on. Obviously not just keywords, I'm just trying to be vague here. But like they run on a bunch of different things and they have different names. So if an email hits on a certain few items, that might tell us, "Hey, this one might be BEC," and then that email can be surfaced to me to look into.Emily Hacker:Unfortunately, BEC is very, is a little bit more difficult to track just by the nature of it not containing phishing links or malware attachments or anything along those lines. So it is a little bit more keyword based. And so, a lot of times it's like looking at 10,000 emails and looking for the one that is bad when they all kind of use the same keywords. And of course, we don't just get to see every legitimate email, 'cause that would be like a crazy customer privacy concern. So we only get to really see certain emails that are suspected malicious by the customer, in which case it does help us a little bit because they're already surfacing the bad ones to us.Emily Hacker:But yeah, that's how we find these, is just by looking for the ones that already seem malicious kind of and applying logic over them to see like, "Hmm, this one might be BEC or," you know, we do that, not just for BEC, but like, "Hmm, this one seems like it might be this type of phishing," or like, "Hmm, this one seems like it might be a buzz call," or whatever, you know, these types of things that will surface all these different emails to us in a way that we can then go investigate them.Nic Fillingham:So for the folks listening to this podcast, what do you want them to take away from this? What you want us to know on the SOC side, on the- Emily Hacker:Mm-hmm (affirmative).Nic Fillingham:... on the SOC side? Like, is there any additional sort of, what are some of the fundamentals and sort of basics of BEC hygiene? Is there anything else you want folks to be doing to help protect the users in their organizations?Emily Hacker:Yeah, so I would say not to just focus on monitoring what's going on in the end point, because BEC activity is not going to have a lot, if anything, that's going to appear on the end point. 
So making sure that you're monitoring emails and looking for not just emails that contain malicious links or attachments, but also looking for emails that might contain BEC keywords. Or even better, if there's a way for you to monitor your organization's forwarding rules, if a user suddenly sets up a, a slew of new forwarding rules from their email account, see if there's a way to turn that into a notification or an alert, I mean, to you in the SOC. And that's a really key indicator that that might be BEC, not necessarily gift cards scam, but BEC.Emily Hacker:Or see if there is a way to monitor, uh, not monitor, but like, if your organization has users reporting phishing mails, if you get one that's like, "Oh, this is just your basic low-level credential phishing," don't just toss it aside and be like, "Well, that was just one person and has really crappy voicemail phish, no one's going to actually fall for that." Actually, look and see how many people got the email. See if anybody clicked, force password resets on the people that clicked, or if you can't tell who clicked on everybody, because it really only takes one person to have clicked on that email and you not reset their password, and now the attackers have access to your organization's email and they can be conducting these kinds of wire transfer fraud.Emily Hacker:So like, and I know we're all overworked in this industry, and I know that it can be difficult to try and focus on everything at once. And especially, you know, if you're being told, like our focus is ransomware, we don't want to have ransomware. You're just constantly monitoring end points for suspicious activity, but it's important to try and make sure that you're not neglecting the stuff that only exists in email as well. Natalia Godyla:Those are great suggestions. And I'd be remiss not to note that some of those suggestions are available in Microsoft Defender for Office 365, like the suspicious forwarding alerts or attack simulation training for user awareness. But thank you again for joining us, Emily, and we hope to have you back on the show many more times.Emily Hacker:Yeah, thanks so much for having me again.Natalia Godyla:Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode.Nic Fillingham:And don't forget to tweet us @msftsecurity, or email us at securityunlocked@microsoft.com with topics you'd like to hear on our future episode. Until then, stay safe.Natalia Godyla:Stay secure.
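To make the forwarding-rule advice Emily gives above a little more concrete, here is a minimal sketch of scanning exported mailbox forwarding rules for newly created rules that send mail outside the organization. The JSON field names are made-up placeholders for whatever your mail admin tooling exports; this is not a specific product's API, just an illustration of the kind of signal a SOC could alert on.

```python
# Illustrative check for suspicious new external forwarding rules (placeholder schema).
import json
from datetime import datetime, timedelta

INTERNAL_DOMAIN = "contoso.com"                      # placeholder for your own domain
RECENT = datetime.utcnow() - timedelta(days=1)        # "suddenly set up" = within the last day

def suspicious_rules(path: str):
    with open(path) as f:
        rules = json.load(f)  # e.g., [{"user": ..., "forward_to": ..., "created_utc": ...}, ...]
    for rule in rules:
        created = datetime.fromisoformat(rule["created_utc"])   # naive UTC ISO timestamps assumed
        external = not rule["forward_to"].lower().endswith("@" + INTERNAL_DOMAIN)
        if created >= RECENT and external:
            yield rule                                # candidate for a SOC alert

for r in suspicious_rules("mailbox_rules.json"):
    print(f"ALERT: {r['user']} forwards to {r['forward_to']} (created {r['created_utc']})")
```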