Truth in IT
    • Sign In
    • Register
        • Videos
        • Channels
        • Pages
        • Galleries
        • News
        • Events
        • All
Truth in IT Truth in IT
  • Data Management ▼
    • Converged Infrastructure
    • DevOps
    • Networking
    • Storage
    • Virtualization
  • Cybersecurity ▼
    • Application Security
    • Backup & Recovery
    • Data Security
    • Identity & Access Management (IAM)
    • Zero Trust
  • Cloud ▼
    • Hybrid Cloud
    • Private Cloud
    • Public Cloud

Mindgard: Red Teaming AI So You Don’t End Up in the Headlines

Truth in IT
08/20/2025
3
0 (0%)
Share
  • Comments
  • Download
  • Transcript
Report Like Favorite
  • Share/Embed
  • Email
Link
Embed

Transcript


Mike Matchett: Hi Mike Matchett Small World Big Data. We are here today talking about AI, of course, and cybersecurity. Those things are coming together. We saw at RSAC recently that those were the hot topics kind of becoming meshed. There's a lot of problems with AI in the cybersecurity space that haven't been solved yet completely. We have Mindgard AI here today to talk about how you can help secure your AI, your AI applications, and your AI agents a lot better using red teaming. So just hold on a second. Hey, welcome, Peter. Welcome to our show. Peter Garraghan, CEO & Co-Founder: Hello. Lovely to be here. How are you? Mike Matchett: Uh, so this is this is interesting. You know, we we talk about a lot of vendors recently in the cybersecurity space coming at the problems of cybersecurity from a lot of different angles. Uh, but not very many people are talking about being as proactive in the sense of red teaming as as you guys now support and espouse as part of what Mindgard does. So just tell just tell us a little, tell our audience a little bit about like, how did you get to this end of the security market? How did you get involved in creating something that was about red teaming? Peter Garraghan, CEO & Co-Founder: Sure. How did we get here? Um, so quick background. Um, I'm a university professor as well, beyond being the co-CEO of Mindgard. So as a professor, I care about making positive change social good. That's why I'm a scientist. Um, I've always worked with big tech companies, so a lot of my prototypes and even the people I train end up working in big tech labs, typically. And after a decade of research and looking at problems of AI, I realized this would be a huge problem and that I can imagine every type of software application, every asset having this new type of AI technology and architecture inside them. A is not a new concept for decades, but this type particularly is getting very pervasive very, very quickly. And I realized that if I don't do something about this, this could be a huge problem if we don't keep up. So took half my scientists from the university and we went down to London and we kind of formed Mindgard. Mike Matchett: All right. So let's, let's I want to get into red teaming. Of course, that's what we're going to talk about the bulk of today. Uh, but what are the vulnerabilities we're talking about with AI. You know, there's there's a prompt cycle. There's uh, you know, it's an LLM. We know about how it hallucinates, but what are some what are some of the things that are really, you see, could go wrong? Peter Garraghan, CEO & Co-Founder: Sure. I still software and I haven't seen a novel attack in AI that you cannot see as an analogy in other type of application, um, systems. So prompt injection and SQL injection have a lot of analogies between them or comparisons, you know, trying to bypass an IDs system and bypassing a guardrail, kind of the same principle, trying to leak data from a database or a model conceptually is the same idea, but technically it's done quite differently, and which is why it kind of trips people up. So common examples. And I give three different buckets. The type of risks you should care about in AI is that the type of risks are just a series of other risks, but you don't understand what's happening. And that goes across security safety and business. So security risk. A common problem is that within large language models the data and control plane are just one or the same. Make it very, very complicated to actually try to ascertain what's a system instruction. What is application instruction. So I can embed an OS command instruction within a big document, and it will come back and ping me in terms of information I shouldn't have access to, or even something like remote code execution. So classic cybersecurity problems, the safety problems could be things such as well, I don't want my AI to talk about controversial subjects or insult my customers, but also tell me how I can use my products in a bad manner. That's reputational risk. Then there's business risk, which is my I could tell my user, how do I make my own products? Or how do I why I should go to other competitors instead of my business? I can't do that. Typically in other applications I can't do SQL injection till I get the secret sauce, so I probably could, but it'll take me such a long time. The trick in AI is that because it's good at natural language processing, it can give me that information in a very clear form. All three of these are business risks, um, in different capacities. And we found that people we work with, they have different they care about different things depending on the use case. And again, to even look at this more grounded, replace the word AI with software. Every piece of software has different use cases. It has different threat models. Those threat models need testing. That testing comes from red teaming and you need to interrogate what you care about. Sometimes it is public facing reputational risks to use is much more problematic. I don't care about that too much. If it's my private code generation that will never see the light of day. So it's really important to figure out which vulnerabilities map to your threat model, to your use case, and actually go and look at those things. Mike Matchett: That's really interesting because I, you know, coming to this was really just focusing on the security. I mean, it's a topic, right? The security risks. But to think about the idea that we're also still talking about data protection, we're talking about data leakage, data exfiltration, IP leakage is different words. Right. And then you're talking about the idea that I can actually subvert someone's LLM to tell me how to compete against them. I mean, that's just that's just something I'm not sure a lot of people have, have even considered on there. Um, so your, your position then is the way to address these risks is with red teaming. So tell us about what you know. And we've got some security people in our audience, some very good security people. But tell us just in general what is red teaming entail? What is what is sort of the scope of of being a red team? Peter Garraghan, CEO & Co-Founder: Sure. I think it's even I think it's even important to define AI red teaming and how it differs from red teaming. So the phrase red teaming is slung around a lot nowadays, and it makes it a bit more complicated because red teaming does have a definition. Goes back many, many years in AI red teaming. Um, there's two camps. One of them is to look at evaluation of the performance of the AI and the models to make sure it's high performance. It stays on topic, which is super useful for data scientists. Then you have AI red teaming, which is more traditional red teaming, which is it's a methodology, um, developed in the US government to try adopt an offensive mindset to achieve certain goals. So the goal could be my goal is to try exfiltrate this data of some sort, or my goal is to get past to gain access to things I shouldn't do. It's not just a tool, it's a methodology in place that include things such as I'm going to social engineer someone to give me their password, and then use that to go in without being detected. The system that traditional red teaming, AI red teaming in this space is it's a combination of traditional red teaming, because you could argue that with AI models you're social engineering the AI model to do things you shouldn't do, which actually would function if it combines penetration testing, which actually trying to penetrate to find vulnerabilities in the AI itself, but also looking at security testing to try and test and at test. What type of issues can I see in the application itself? And again, I think that's super important because a red teaming is everywhere. And a lot of the tools out there looking at evaluation, which is super useful, that's really important. But there's another big bucket, which is the how do I find meaningful risks. And that's kind of the camp we focus on. Mike Matchett: All right. So tell us a little bit then more specifically what Mindgard AI is about. What does it offer to people who are red teaming AI now? So they've got an AI application. Maybe they have some AI agents that they're in development with and they say, well, we know these are risky to put out into public until we've done something to secure them, but we don't know what what role then does Mindgard play? Peter Garraghan, CEO & Co-Founder: Yeah. So Mindgard is a product company and we and our product is continuous and automated red teaming system for AI. Specifically what it allows the security teams to find vulnerabilities quicker and less time. Um, the alternative is that it's done manually and it's really, really painful. Trust me. For too many years. Uh, fundamentally, is a lot of the red teamers or Pentesters or Appsec people who are testing and interrogating AI to find abilities is really complicated. They use our tool to augment themselves to do this quicker at scale. Um, and it's kind of three pillars. We do this one. So one is we believe testing systems and not just models. Um, in my experience, jailbreaking models on their own isn't that interesting? I can find stuff online. Anyway, um, risks become more material once the AI talks to other things in the system. Mike Matchett: So which is the which is the whole genetic thing, right? It's like it's like we don't have the LLM on its own. We now want it to go out and interface with all the other stuff we have. Peter Garraghan, CEO & Co-Founder: Yes, I even talk about analogies. So if I was a developer, I download my SQL database, spun it up into SQL injection and say, look how bad it is. You'd say, that's not how. That's not how you do testing. Well, I normally do is defined use case done to find a threat model, build my application, then I test it for SQL injection. That is the perfect analogy in terms of identity model hugging face or ChatGPT, I'll probably find stuff all day. But actually that's not valuable to me. What's valuable is yes, I've got a use case. It talks to other things in my system and then I can do testing. So our system does both, um, testing of applications and systems, but also models together. And that's I think that's actually one of the things that makes it quite different in that space is more meaningful when they actually talk to other systems. Um, second thing is flexibility. So yes, we offer we know applications will come in different sizes in AI. So we do CI CD testing, we do Burp Suite, we know people do for the front end. And the last one is we're pretty R&D focused as a company comes from our roots as a university spin out. This is not a solved area of research in our security. And anyone saying otherwise maybe be able to. They should be, um, we try keep on top of this because we have a lot of people with PhD level research. We disclose to companies vulnerabilities, we publish in this space. And we have a really big, um, laboratory in the UK that I'm a professor, I run. We have all that type of attack techniques and know how comes the company itself as well. Mike Matchett: I mean, that's a that's it's terribly interesting on there because, uh, you know, when I think about, you know, red teaming, uh, it sounds complicated. It sounds like you have to have a lot of expertise to do it. Just normally, uh, you have to know about the applications. You have to know about kind of little bit about the data you're going after. Uh, this. But when I get into the AI space there, there can't be too many top level experts in AI red teaming out there today. Peter Garraghan, CEO & Co-Founder: No, there's I actually say hand and heart, probably less than 30 genuine ones, people getting into it. And that's they used to be two people. Now it's 30. It'll be a few thousand expected next year or two. But it's interesting. It typically in red teaming as you think of highly sophisticated someone who knows the way through systems. But an AI red teaming given some of the nomenclature has been adapted from basically pen testing and evaluation, which is more of hey, I'm doing testing, but even things in AI, if I say to an AI and ask it, what do you do? It will tell me what it does. And that's great. As a red team, you don't have my job for me in terms of I know what the capabilities are, I can then use it against the type of system. Um, so one of the goals of, of this tool is to bring down the skill level required to do this testing. So whether you are someone coming into red teaming for the first time, or you're a pen tester or a security tester, you have an arsenal of tools that allows you to run attack techniques and surface things that are relevant and interesting, but also variabilities I can report on, but also the Red teamer who really knows their way in and out of the system. This is a tool to augment themselves. So it's not full automation. We kind of balancing the two of them to say things such as our tool automate setup to make it quicker. That's a really big pain. It then runs techniques to find things that I say quotation interesting. That is enough for red teamers. Essentially that's enough for me. I can go off and do it myself, or a tool that allows, hey, I can do the next step if you wanted to. So we try to cater towards both the person coming into it who's less experienced versus someone who's very sophisticated in this red teaming capabilities. Mike Matchett: Right. So you're really enabling people who are, you know, given security minded, possibly already on a red team, red team experience to now tackle the AI space with red teaming really. Peter Garraghan, CEO & Co-Founder: Both pentesters and pen testers and red teamers. Yeah, exactly. Mike Matchett: Yeah. And because you're doing this automation, you're obviously going to shave some time off of of the cycles it takes to do thorough testing. Um, one of the things that that I would question is, um, how do you know? And this is just maybe a naive red testing question. How do you know when you're done? How do you know when it's good enough? Uh, if you're if you're if you're if you're doing this testing. Peter Garraghan, CEO & Co-Founder: This is a classic scientific problem evaluation. How do I know I've done enough? So, for example, if someone said to you I ran one test, is that enough? You say no. What about ten? What about 1000? What about a billion? The answer is until you got statistical certainty. And that itself has its own caveats as to what? What is certainty? Um, the more concrete answer in terms of especially with AI, given it's very random. We have no way of formally verifying actually something is secure in AI. I am. The combinations are very, very large that way, and let alone the the models themselves don't have natural code. The bunch of matrices and probabilities towards them. Mike Matchett: And the probabilistic part of it too is also confusing a lot of people. Right. It's not going to give you the same answer twice. Peter Garraghan, CEO & Co-Founder: You know, if I do a SaaS, for example, there's no code. There's no code level meaning to SaaS beyond what's connected to, um, but in terms of how I've done enough, it's a combination of things. So like, like any security testing. So I have an application I will define threats that I want to prioritize, maybe because it's in my budget, whether it's because I care about these things or it's realistic. Um, I don't have to make a decision on resourcing about this. And then you do this, and then you have some way of saying this is sufficient for my ability. So I do an AI lens. I probably don't care about building bombs. For most applications that might be in some cases, I actually don't care about this. I care about things like it's going to connect to a third party service. It's going to be customer facing. What do I do about these things? I then prioritize my testing around those things within my budget and time you can configure how much you want to run this thing. So you may not get you might not get formal verification if it's secure, but you can get to ability where you get consistent results by saying probability wise, 70% of the time is going to happen in that system. That depends a lot. Depending on the application you have, the model you have. So our tool allows you to give all these tools to figure out how long do I run this thing for. At what cost and what type of confidence do I get in the results as well? Again, to emphasize this is not a small area of research. Mike Matchett: But you can make some judgment and start to start to get your hands around when the decision points are the transition points into production. Peter Garraghan, CEO & Co-Founder: I know fundamentally for these tools, if you get one attack to work, that's enough. I don't need to have certainty beyond that. It takes me a billion times versus ten times. That's the thing I'd be looking for. Mike Matchett: Yeah, yeah. Uh, and I guess, uh, you did sort of give also a clue at the beginning. We talk about continuous red team testing. This is not something that people should be thinking of as, like here, I did it once, and now I've released it to production and it's gone, right? This is something that going forward, we're going to have to be doing on an ongoing basis as the threats evolve, we have to evolve. Peter Garraghan, CEO & Co-Founder: Yeah, and it's a necessity. So you would even forgetting about the market trends of red teaming and pentesting going towards a new basis. Fundamentally, why should you do retesting? Because there's a state change in the application. Unfortunately, with AI that state change happens all the time. Every time I fine tune the data, I introduce new data and new domain, a new system prompt, a new tool in my agent pipeline, new state change. I need to do retesting again. The question then becomes even like a new attack comes out, testing has to happen again. The question then becomes how frequently? And that depends on the combination of the velocity of the application, but also appetite for risk and cost, and also keeping things aligned in terms of actually looking at my security posture. Mike Matchett: Yeah. And then there's things that are coming out all the time. You know, a year ago we were just starting to talk about Rag and introducing all the corporate data. And then now the last couple of months, we talk about MCP and talking about Agentic and who knows what we'll be talking about six months from now. So this is quite a dynamic space itself. Peter Garraghan, CEO & Co-Founder: Exactly. And I think the important thing to mention, though, is that cutting through the marketing hype of conceptual problems and what's happening on, on, on the, on, on the floor now, it's quite different. So a lot of talk about rsac about agentic workflows in security. I remember two years ago they basically removed the word Agentic and they were talking about LMS and security risks. And here are the issues. But if you talk to customers two years ago about how much I LMS you have in production, it was like basically nothing. It's all in POC mode and pilot mode. Fast forward to now. Those architectures and LMS are now in production, getting close to pre-production. They're gone and then they're doing testing the workflows. I'm seeing lots of discussion and excitement, but if you actually ask how many of you actually have agentic workflows in reality in production at scale, very few. There are some companies out there, but very few, um, actually have it. But let's just say it's not a problem. Come fast forward nine months and they will do. And even some like MCP in its infancy, it's a it's a it's a framework that allows us to do this easily. It's going to be updated. So again I like I really want to cut through. What's the marketing though? Um, FUD fear and doubt in the space of what could happen versus what's actually happening now is if you have an organization with AI, you have an application, an LLM. Have you done testing against it? If you don't have an agent, you should be red teaming. It doesn't exist in the first place. Mike Matchett: Right? Right. So, uh, there's there's so much going on here. It's moving so fast. Um, if someone, uh. Let me just get to this point here, Peter, if someone wants to look into red teaming, they want to look into Mindgard how you guys can help with their red teaming efforts, particularly in the AI space. What would you recommend would be good first steps or where they should start their research? Peter Garraghan, CEO & Co-Founder: Yeah. So if you're interested in AI security and specifically AI red teaming, lots of material online. So the OWASp, um community AI exchange in the top ten is a good starting point to understand the type of risks within AI applications and models. You can also look at the meta Atlas is a good framework trying to look at the tactics and techniques You could. Every major tech company, an AI company has a red teaming document of some sort. Every government is also talking about this in different capacities. The various level of detail. And then there are companies on AI security, including Mindgard, that has a lot of material. Even on our website we talk about AI red teaming. We have research papers. We give disclosures about this technical information. So if you want to have your feet, I would go to samita. If you're curious to learn about more about AI red teaming. No company likes Mindgard and others have lots of material in this space. Technical and non-technical. Mike Matchett: All right. That's, uh, that's a great place to start. Uh, and I have heard so much about people talking about AI and so many applications, and every user's got some sort of project in the pipeline. But you're right. We're kind of at this moment where people are kind of stuck getting into production, and they don't know why they're stuck. And one of the reasons is because they they don't have the confidence. Right? They're just they know it's not secure. They know the things. They're not only hallucinating but could have vulnerabilities in them. So this sounds like a way forward and actually get some value out of their AI Investments. Um. Peter Garraghan, CEO & Co-Founder: Yeah, exactly. And again, replace the word AI with app or software. Same problem applies. Same problem. Exactly. Mike Matchett: And you know, I totally agree with you. Like, in two years, we won't be saying AI anymore. Every app will have some AI module. It'll just be like a database. Where's the AI layer? Here's the database layer. Here's the here's the other layer building. Building into it. Right. We built that together. Uh, so thank you very much for for being here today. If I give you the last word, uh, last question, uh, Peter, I would say, where do you think we're going with this? Where do you think the where do you think the market's going? What are going to be the next set of challenges we're gonna have to tackle? Peter Garraghan, CEO & Co-Founder: Yeah, I mean, I can give you the professor answer. I can give you the business answer. So, professors, we look 510 year horizon, as you mentioned. Um, I won't be going away. There'll be new types of architecture. And I think I give the analogy of a sieve. So when a new technology comes out, the flower, you say, does everything fantastic and you start shaking it a bit, it all falls out and you have 3 or 4 pieces. Each of those is worth billions and billions and billions. Think of virtualization, of cloud computing. People were claiming the cloud could do absolutely everything. You shake it. Actually, there's some really good use cases about this, and I'm starting to see the same for AI. Um, so I'm seeing in the market you're going to have consolidation of use cases that actually work, as opposed to people claiming you can do absolutely everything in that type of space, which is good, and because that is valuable in that space. So I suppose it has to happen. Agents are a new concept, and there's a lot of comparisons to service architecture in terms of RESTful services connected together. I'm expecting some maturity in the use cases and the two types. Again, not to make sorry, not make a scientific lecture, but there are two ways this will probably be going. The first one is and I think this is what's going to work, we're going to be reinforcing good design patterns of agents. I don't think you're going to have agents completely free form, because that's not if you look at like thermodynamics or system theory, things reinforce themselves to be optimal in terms of application. So there'll be an agentic workflow for chatbots that is actually reinforced. And people just use that one. It's easy as opposed to doing it by running that system. Um, so I'm seeing probably a lot of templates being created that get reinforced naturally over time. It's a better code, but also it's quicker to build systems from it. But those would be very specific. The other one is you have basically free form Wild West agents just trying to build arbitrary tool calls combinations. Absolutely nightmare for threat modeling in terms of how do I do a complete, constantly changing system? I think the first one will win because ultimately I win by having real use to deliver value, and that will be reinforced by use cases and architecture patterns actually win out in that space. So I expect agents to be there, different modalities. They will move from the transformer architecture to other ones at some point in the future. Um, and like you said, it'll become pervasive, ubiquitous in everything we do. So what percentage of applications of AI inside it now? Well, in Transformers, people saying 80%, that's nowhere near the case. It's probably like 0.5%. It will be eventually 100% in the coming years because it's become pervasive in everything we do. Same as like visualization. Where is it? It's kind of everywhere, but we don't think about it anymore. Mike Matchett: I also think there's a there's a, there's an alignment or an analogy with open source and why people don't write their own logging tools, and why people don't write their own lower level libraries. We can just go get one, right? So it's kind of a similar argument. It's like eventually the AI space will mature to the point where there's just things you can just adopt, and you don't have to go and reinvent them. Peter Garraghan, CEO & Co-Founder: Exactly. You know, the regression algorithm has been around for it's AI. It's been out for a lot for decades. But people don't get upset obsessed about it so much. But if it came out today, people will be worrying about it. It's completely insecure. Say, well, actually, it's still an algorithm, but it could be insecure if it's deployed as an instance that you haven't put controls in place. Mike Matchett: All right. Well, thank you so much for being here today. I learned a lot. I think we could actually do this for a whole week and get a real education into both AI and red teaming. Uh, but unfortunately, that's all the time we have today. So thank you for being here today with us. Peter Garraghan, CEO & Co-Founder: Thanks. Thanks, Mike. Mike Matchett: All right. Take care. Mindgard. It's Mindgard. Take care.

In a recent interview, Mike Matchett talks with Mindgard CEO Peter Garraghan about one of the most urgent topics in enterprise tech: securing AI applications before they run wild in production.

With LLMs quickly moving from pilot to production, the traditional “test once, deploy forever” mindset is no longer viable. Mindgard brings automated and continuous red teaming to the world of AI, helping security teams identify vulnerabilities across not just models, but full AI-driven systems.

From prompt injections and data leakage to business logic exploits and reputational risk, Mindgard treats AI like any other software: with threat models, attack surfaces, and practical testing methodologies.

Whether you're a red teamer looking to augment your toolkit or a DevSecOps team wondering when it’s safe to ship your chatbot, Mindgard delivers clarity in a landscape clouded by marketing hype and unknown unknowns.

Categories:
  • » Small World Big Data
  • » Cybersecurity » Application Security
  • » Data Management » DevOps
Channels:
News:
Events:
Tags:
  • inbrief
  • matchett
  • mindgard
  • ai
  • artificial
  • intelligence
  • ai
  • applications
  • ai
  • apps
  • cybersecurity
  • redteaming
  • red
  • teaming
  • exploit
  • exploits
  • threat
  • model
  • attack
  • surface
  • devsecops
  • secops
  • devops
  • chatbot
Show more Show less

Browse videos

  • Related
  • Featured
  • By date
  • Most viewed
  • Top rated

            Video's comments: Mindgard: Red Teaming AI So You Don’t End Up in the Headlines

            Upcoming Spotlight Events

            • Sep
              11

              An Executive’s Guide to Secure AI Adoption

              09/11/202501:00 PM ET
              More events

              Upcoming 360 View Events

              • Sep
                25

                360View: Email Security & Social Engineering Defense

                09/25/202512:00 PM ET
                • Oct
                  23

                  360View: Preventing Data Exfiltration: Keeping Enterprise Data Secure

                  10/23/202512:00 PM ET
                  • Nov
                    20

                    360View: Budget Optimization: Doing More with Less

                    11/20/202512:00 PM ET
                    More events

                    Upcoming Industry Events

                    • Aug
                      25

                      Harnessing AI to Transform the Landscape of Data Security

                      08/25/202510:55 AM ET
                      • Aug
                        26

                        Renown Health Secures 10K Mailboxes & Stops $1M+ in Email Threats

                        08/26/202501:00 PM ET
                        • Aug
                          27

                          Mastering Secure AI Implementation: A Comprehensive Executive Guide

                          08/27/202510:55 AM ET
                          More events

                          Recent Industry Events

                          • Aug
                            19

                            CMMC 2.0 Insights: Understanding Compliance from an Expert Auditor's Perspective

                            08/19/202512:00 PM ET
                            • Aug
                              13

                              Understanding the Limitations of WAFs and API Gateways Against Modern Threats

                              08/13/202501:00 PM ET
                              • Jul
                                23

                                Enhancing API Security Testing: Identifying Vulnerabilities Ahead of Deployment

                                07/23/202501:00 PM ET
                                More events

                                Upcoming Events Calendar

                                • 08/25/2025
                                  10:55 AM
                                  08/25/2025
                                  Harnessing AI to Transform the Landscape of Data Security
                                  https://www.truthinit.com/index.php/channel/1381/harnessing-ai-to-transform-the-landscape-of-data-security/
                                • 08/26/2025
                                  10:55 AM
                                  08/26/2025
                                  Confronting AI’s Challenges: Insights into CISOs' Biggest Concerns
                                  https://www.truthinit.com/index.php/channel/1380/confronting-ai-s-challenges-insights-into-cisos-biggest-concerns/
                                • 08/26/2025
                                  01:00 PM
                                  08/26/2025
                                  Renown Health Secures 10K Mailboxes & Stops $1M+ in Email Threats
                                  https://www.truthinit.com/index.php/channel/1404/renown-health-secures-10k-mailboxes-stops-1m-in-email-threats/
                                • 08/27/2025
                                  10:55 AM
                                  08/27/2025
                                  Mastering Secure AI Implementation: A Comprehensive Executive Guide
                                  https://www.truthinit.com/index.php/channel/1379/mastering-secure-ai-implementation-a-comprehensive-executive-guide/
                                • 08/28/2025
                                  10:55 AM
                                  08/28/2025
                                  A Practitioner’s Roadmap for Safeguarding AI Implementation in Organizations
                                  https://www.truthinit.com/index.php/channel/1378/a-practitioner-s-roadmap-for-safeguarding-ai-implementation-in-organizations/
                                • 08/29/2025
                                  10:55 AM
                                  08/29/2025
                                  Ethical Frameworks and Compliance Strategies for Safe AI Implementation
                                  https://www.truthinit.com/index.php/channel/1377/ethical-frameworks-and-compliance-strategies-for-safe-ai-implementation/
                                • 09/11/2025
                                  01:00 PM
                                  09/11/2025
                                  An Executive’s Guide to Secure AI Adoption
                                  https://www.truthinit.com/index.php/channel/1374/an-executives-guide-to-secure-ai-adoption/
                                • 09/16/2025
                                  01:00 PM
                                  09/16/2025
                                  Beyond DMARC: Closing Critical Gaps in Your Email Security Shield
                                  https://www.truthinit.com/index.php/channel/1403/beyond-dmarc-closing-critical-gaps-in-your-email-security-shield/
                                • 09/18/2025
                                  11:00 AM
                                  09/18/2025
                                  Risk in Real Time: Visibility Into Cloud Based Vulnerabilities
                                  https://www.truthinit.com/index.php/channel/1372/understanding-dynamic-risk-management-in-real-time-environments/
                                • 09/25/2025
                                  12:00 PM
                                  09/25/2025
                                  360View: Email Security & Social Engineering Defense
                                  https://www.truthinit.com/index.php/channel/930/360view-email-security-social-engineering-defense/
                                • 10/23/2025
                                  12:00 PM
                                  10/23/2025
                                  360View: Preventing Data Exfiltration: Keeping Enterprise Data Secure
                                  https://www.truthinit.com/index.php/channel/931/360view-preventing-data-exfiltration-keeping-enterprise-data-secure/
                                • 11/20/2025
                                  12:00 PM
                                  11/20/2025
                                  360View: Budget Optimization: Doing More with Less
                                  https://www.truthinit.com/index.php/channel/932/360view-budget-optimization-doing-more-with-less/
                                • 12/18/2025
                                  12:00 PM
                                  12/18/2025
                                  360View: 2026 IT Predictions & Emerging Trends
                                  https://www.truthinit.com/index.php/channel/933/360view-2026-it-predictions-emerging-trends/
                                Truth in IT
                                • Sponsor
                                • About Us
                                • Terms of Service
                                • Privacy Policy
                                • Contact Us
                                • Preference Management
                                Desktop version
                                Standard version