Transcript
Mike Matchett: Hi, Mike Matchett with Small World Big Data. I am here today talking with HackerOne about hacking. We've got some discussion coming up about AI. Of course, the AI threat. Uh, what's going on in AI? They've got some new, uh, results from a survey they've done. They want to talk about, uh, and maybe help you understand where this world is going with AI attackers. Ai for good, AI for bad. What color hat does an AI wear? By the way, in security. Hold on a second. We'll bring, uh, hacker one on. Oh. Hi, Laurie Mercer. How are you doing? Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: Excellent. How are you, Mike? Mike Matchett: Okay, so you're you're with hacker one. What color? What color hat do you wear? White hat, I assume. Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: But we are definitely the white hats. Yes, absolutely. Yeah. Mike Matchett: Yeah. So, uh, tell us a little bit about how you got involved in cybersecurity. What, what got you into hacking? Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: Yeah. So for me, it was really a light bulb moment where actually one of my colleagues actually discovered a vulnerability in a light bulb. Um, so this was an IoT connected light bulb. Um, and it turns out if you send the right message to it, it would expose the Wi-Fi password in a, in a network message. Um, we tried to report the vulnerability to the company and it caused an absolute disaster. Um, with all sorts of, um, complications and legal twists and turns. And it really showed me that, um, you know, there was a real problem in the world with the reporting of vulnerabilities, um, that they obviously exist and they're easily findable and that we need to find a way for people to be able to report them and for people to fix them fundamentally. Mike Matchett: I like that story because the light bulb actually turns on and you're like, oh, we got a problem. We've got to do something about it. Uh, so, uh, you know, so HackerOne, uh, just tell us a little bit about, uh, this idea of what HackerOne does, where it fits in the ecosystem at a, at a really high level, uh, in security. Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: Yes. So hacker HackerOne is the largest platform for reporting vulnerabilities, either through a responsible disclosure or a vulnerability reporting mechanism, where you can responsibly tell a company about a vulnerability that you've discovered in their digital services. Um, although what we're most famous for is what we call a vulnerability rewards program or a bug bounty program, and that's the same idea, but you get paid a bounty, a reward for that finding itself. Mike Matchett: All right, so, so I can make some money by hacking, uh, legitimately instead of just, uh, in fact, I'm a researcher. I can go and do that. Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: Absolutely. And we paid 80 million in the past year in bounty rewards for vulnerabilities. Mike Matchett: That's a pretty big business, right? Uh, so you, you know, obviously the, the theme of, you know, this last mid-decade here is AI and AI workloads. So let's, let's get into that. You've done some research, you've done some surveys here. Um, what are you finding, uh, with the state of security around the implementation of AI. What is what is sort of your big biggest sort of aha moments there? Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: So, so I think the big aha moment is, is the realization that AI as a technology, it basically connects lots of different systems and information together in new and adaptive ways. Right? And people use the word non-stochastic, right? It's kind of different every time you use it. Right? And as a result, the, the risks involved are also changing all the time as well. And what we're seeing is just a rapid evolution in the types of security research and the types of vulnerabilities that can be discovered in this. And the message that we really want to get out there is that people need to really get get ahead of this because it's a rapidly changing landscape with new risks being highlighted every, every month, it seems. Um, and a lot of people are deploying these AI systems that just are unaware of these risks. Mike Matchett: Yeah. It's amazing to me how, uh, we easily put something into production that we can't fully test. I mean, there's billions of nodes in an LM. For example, we built a chatbot. We can't actually explore all the different potential paths that that thing can come up with. That's part of its power. And to be honest, its charm. Right? It sounds like a human. Uh, so, uh, this is getting bigger and bigger and bigger. Um, uh, what have you what have you found? Uh, what did your survey sort of on, on Barry for us? Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: Yes. So we performed a survey across security leaders, and we found that, um, only 60% of people are formally testing, um, these AI systems. Mike Matchett: Oh, only two thirds are testing. Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: Exactly. Yes. That leaves a third of people who are releasing things that have not been tested at all. Um, and this of course, um, is causing a huge increase in AI related vulnerabilities and attacks over the past year. Mike Matchett: Uh, what? Just, just to, just to be clear, what can go wrong when we say an AI vulnerability? Well, just can you enumerate what AI vulnerabilities might be for people, especially that third, that's not even testing. They're probably not even aware. But what what should they be thinking of? Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: So, so, so, so there's, there's the, there's the normal stuff, right? So you, you could trick an AI system into giving you information about another user of the system, right? Um, who's not given their permission. Right. So you could say, well, forget about me, let's talk about him over there. Right? And get someone's private information and trick it into giving you an email address or a telephone number. Um, you can also, um, sometimes these AI systems are implemented in a way with privileges, which are like administrator privileges, right? And then if you can, if you can prompt, inject that AI into doing, uh, doing something and sometimes even calling a tool, then you can actually get them to perform actions that a human would normally do with administrator privileges as a, as a, as a normal user of the system. And, and then on the other side, we have the concept of these AI safety issues or, um, kind of, um, issues that are less related to do with security. They're more to do with offensive language, offensive images, incorrect product information, incorrect tax information. If you're, if you're developing a chatbot on a website and you're offering services, um, we've had instances where customers have said, will you do this for $10 for a product that perhaps costs 1000? And the chatbot said, yeah, sure. And then all of a sudden they've had to honor that price because, uh, because that's the price that's been, you know, shown on their official website. And so there's also a whole whole list of issues which are not security related, but are kind of more like safety or, um, um, almost like functionally related, which we tend to find in these systems. Mike Matchett: All right, so we've got the hallucinations. Everyone's sort of aware that AI can, you know, navigate a conversation where it's just sort of making stuff up because that's what it does. Um, we wouldn't normally consider that to be a security problem, but it obviously can have real economic ramifications or security ramifications. Uh, but the idea that you can, uh, go in and actively find like a security specific problem where you can go find, you know, cross someone's domain and get to their other customer data, their other customer's data that you can fool, uh, you know, an AI agent into doing something for you that, uh, or manipulate an agent for you into doing something that wasn't intended or would not be good for the sponsor. Uh, there's just, there's just tons of things going on there. So, uh, what you talk about doing, you talk about HackerOne doing, um, you know, this idea of red teaming, uh, type type testing. Explain to us like how someone might say, uh, actively think about securing or testing their AI system. What, what, what approach should they take? Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: So, so, so the way that we've been approaching this problem is this, this concept of AI red teaming, which is essentially a form of adversarial testing where we, we, we work together on a threat model with the maker of the system to basically produce a menu of the worst case scenario. So what is the worst case scenario for this system? So if it's a logistics system, you know, delivering a package to the wrong address, you know, and stealing, stealing someone else's orders, right? Or it could be for a generative image system, it might be the creation of a, you know, offensive imagery, right? Um, and we develop this menu and then we, we basically attach dollar values to each of these objectives. And then we engage with a number of experts, um, who can then try and probe the AI systems to try and achieve those objectives. And if they do, they get paid a bounty reward, right? Mike Matchett: So, so farming it out a little bit like setting up the problem saying, here's a system, you know, here are the worst ways we think it could be broken or abused and go at it. And, uh. Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: Exactly, exactly. And then crowdsourcing that from a, from a list of vetted experts that are able to do that testing and then, and then obviously rewarding them as they find things. Mike Matchett: Oh, interesting. Uh, and you mentioned before earlier that you paid out $18 million in awards this year. So that's that number is going up and up and up year by year as well, right? Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: Exactly. Yeah. Partly caused by AI related vulnerabilities and also vulnerabilities that have been introduced by AI creating insecure code. So this this seems to be accelerating. Mike Matchett: All right. Yeah, let's talk about let's talk about that ecosystem. This goes in circles a little bit, I think. So we're talking about like really increasing vulnerabilities with AI chatbots or AI interfaces, right? The LLM side of things. But people are also using AI to create code, uh, and to create those chatbots and those systems, uh, fall into this purview too, right? We need to go and validate code that AI has created. Uh, and, um, uh, I mean, there are other aspects to this too that where we see AI involved. Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: Yeah, absolutely. So, so, um, what we're seeing is we're seeing with the rapid, um, development and shipping of code using AI tools. It's also rapidly pushing out bugs as well. Right? And so there's this cat and mouse game between producers of code and also security assessors trying to match up with each other increasingly at machine speed. Um, to try to detect the problems that AI is created, often using AI to detect the problems as well. Um, and then, and then the next step of course, would be to try and fix those problems as well using AI. So, so, so you have this situation where, um, you have the, the creation, the discovery and the fixing of vulnerabilities all theoretically possible with AI, um, and with both white hats and black hats. Using these techniques, we're entering a real, real acceleration of the model that we've been in for the past decade. Mike Matchett: Right? And it does seem like this is not going to end anytime soon. Like there's no place to like, just put your brakes on here. Like everyone's incentivized to do things faster and better and on a bigger scale. And so this, this, this, this loop of, you know, putting out new code that might have vulnerabilities in it, finding those vulnerabilities, validating those vulnerabilities, patching those vulnerabilities, uh, is just going to keep accelerating, uh, for the next couple of years, I guess. Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: Yeah, it's the Red Queen's race, right? You have to keep on running as fast as to keep up. Yeah, yeah. Mike Matchett: Yeah, yeah. Uh, Darwin and, uh. Dawkins. Right? We want to we got to keep reading about. Keep reading about how this works. Uh, so the, the, uh, the idea of, uh, I mean, I just want to explore this philosophically. So the idea of doing this better, going in the future involves, uh, not just AI to create the code, but AI to run things, but also then AI in a security way. How comfortable are you using AI? Uh, as, as a security assistant? Um, is that, is that something that most people are saying? Yeah, this comes as de rigueur. This is just something we do. Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: This is just something that we do. Yeah. So so I'm using an AI system every single day in the, in the, in the tools that I'm using. I'm using AI to give me insights into vulnerability reports. I'm using AI to assess the cvss severity of reports. I'm using AI to even in some cases, validate simple proof of concepts like cross-site scripting and idle vulnerabilities. Um, but, but the key thing, having a human in the loop to validate the output and to make decisions. So we're not paying people money based on an AI, it would always be a person who has their hand on the wallet. Mike Matchett: Right, right. At least for now. I mean, I could, I could, I mean, for the same reason that a third of people don't even do security testing on their AI workloads, I can see that people are going to quickly, or a third of people probably will let the AI make the decisions and just keep digging them in a deeper. Well, I can see that happening for some time. Uh, so where, you know, would you mention keeping human in the loop? What? Obviously, other than that, we can't really trust the AI's not to hallucinate at any stage of this. Uh, why else would we want to keep the human in the loop, per se? Speaker 3: I think I think it's context. Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: I think context is so important when it comes to, um, to, to the cybersecurity industry in general. Um, you know, you could have a system, you could have a situation where someone has been able to break into a server online. Um, but that server is a marketing website where everything is already, is already public, right? And so there's no necessarily confidentiality breach there, but, but obviously there is the, the impact of defacement. And you could publish incorrect information, for example. Whereas you could have like a system that holds like 20 million records of, you know, individuals across the world. And if someone were to access that information, it would be, it would be pretty, pretty impactful, right? Um, so I think the context really, really matters. And I think that that's where a human's judgment will always, will always be called into, into practice. Um, and I think that also, um, humans often in, in, in many ways have the creativity to find novel and elusive, uh, findings. Um, so often an AI system may be able to find a few point point vulnerabilities or theoretical weaknesses, but it often takes a human to stitch that together, um, in order to make an impactful vulnerability that has a proper narrative of how it can actually affect a system and be used to exfiltrate information, for example. Mike Matchett: So we're looking at, we're looking at the, the idea that there's still this matter of context, there's still this matter of bigger picture, uh, prioritization. Uh, you know, what's, what's really important at the end of the day in the machine can be taught to go down a checklist in so many ways, but we'll probably never have the full understanding when you put complex systems together of, of how, of how to really sort that out, which gives me some hope that there's still a use for us. Uh, Lori, I'm feeling like I want a job five years from now. You know, I don't replace by that. Uh, all right, so you guys have done some research in this. You've done some surveys. You've got you've got a survey out if someone wants to read that and figure out if they're one of the one thirds, uh, or find out other information about, uh, what's going on at HackerOne or red teaming. Uh, where would you suggest they start after this? Yes. Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: So every year we produce a report. It's called the hacker Powered Security report. And it details the profile of all the vulnerabilities that have been discovered through our network over the past year. We just released our latest version in late last year. It talks about all the different AI vulnerabilities that we're seeing in our network. It talks about the Non-ai vulnerabilities as well. It talks about the tools that people are using for security research, and it talks about the systems that are being tested. So the trends in the business. And so my recommendation to anyone that's interested in finding out more is download the hacker powered security report. It's available on our website hackerone.com. Mike Matchett: All right. Get started there. Thank you very much. Uh, I'm sure there's a lot more to talk about. And I am just dying to see what happens in the next year with AI hacking, AI vulnerabilities, AI security agents of AI working for good and bad. How do we even tell? You know, what color hat and AI agent is wearing when it shows up at our doorstep? Lori. I don't I don't know how you do that. So I think there's a great brave frontier there. Like what color hat are they wearing when they, when they come and access my API? So, uh, thank you for being here today. Laurie Mercer, Sr. Director - Solutions Engineering, HackerOne: Thank you for having me. Mike Matchett: All right. Take care, folks. And this is not going away. This is an increasing thing that you ought to really be paying attention to. Of course. Take care folks.