
Lineaje: Your AI Is Writing Code… Now Who’s Securing It?

Truth in IT
04/22/2026

Transcript


Mike Matchett: Hi, Mike Matchett with Small World Big Data. I'm here today talking with a great security company we spoke with about a year ago, post-RSAC. This year they've got something new. Of course, we're taking cybersecurity into the AI space. We've got Lineaje here today to explain how what they're doing to secure your software bill of materials now translates into securing your AI, maybe even your agents, and maybe even everything crazy your people are trying to do with it. Just hang on, we're going to dive right into it. Hey, Javed, welcome to our show today.

Javed Hasan: Hey, thanks, Mike. Great being here.

Mike: All right. So, fresh off of RSAC 2026, you're riding this wave of AI. We've talked with you previously about what you were doing to help secure the software supply chain. That software bill of materials work was so great, folks, you can go watch our previous recording on that if you're interested. But let's get down into this: it's a world of AI today. What brought Lineaje from looking at software to looking at this AI part of it? It's almost a different workload, almost a different thing, and yet there are a lot of similarities. How did you get there?

Javed: So first, AI is software, and software is increasingly AI. Lineaje has been in the business of using AI to secure software; we call it AI for security. Then, as we saw this emergence of AI, we realized AI is very easy to build. You can write agents, I can write agents; it's catching on pretty well. And we felt that the security landscape, what I call security for AI, was underserved. So we decided to take a close look at it, and we're very excited about where we've gotten with that initiative.

Mike: So you went from AI for security to now security for AI, kind of flipping that around. But a lot of what you were doing before still applies to what you're doing now; you just shift the game over. So let's first talk about the core Lineaje products: you're able to secure open source for people and deliver it to them. Tell us a little bit about that.

Javed: So we've done a very interesting thing, which is safe open source. We call it gold open source, and we deliver it in two ways: gold open source packages and gold open source images. What we can do with gold open source, effectively, is take open source and rebuild it at scale; we can build essentially any open source package and deliver it. And when we do that, we can eliminate critical, high, and exploitable vulnerabilities, attest that it is malware-free, and attest that every component in the chain is reliable and untampered.

Mike: Right, and we've seen the recent rash of npm compromise attacks and so on.

Javed: We detect all of them, so gold open source eliminates all of those.
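To make the attestation idea concrete: a consumer of hardened packages still wants to gate builds on what the SBOM says. Below is a minimal sketch of such a gate in Python, assuming a toy in-memory advisory feed; the data shapes and function names are invented for illustration and are not Lineaje's implementation.

```python
# Hedged sketch: gate a build on SBOM contents. Component and advisory
# shapes are invented for illustration; not Lineaje's implementation.

# Toy "advisory feed": package -> set of known-compromised versions.
KNOWN_MALICIOUS = {
    "event-stream": {"3.3.6"},  # the well-known npm compromise
}
KNOWN_CRITICAL_CVES = {
    ("log4j-core", "2.14.1"): ["CVE-2021-44228"],
}

def audit_sbom(components):
    """Return human-readable findings for an SBOM.

    `components` is a list of (name, version) tuples, e.g. parsed
    from a CycloneDX or SPDX document by other tooling.
    """
    findings = []
    for name, version in components:
        if version in KNOWN_MALICIOUS.get(name, set()):
            findings.append(f"MALWARE: {name}@{version} is a known-compromised release")
        for cve in KNOWN_CRITICAL_CVES.get((name, version), []):
            findings.append(f"CRITICAL: {name}@{version} affected by {cve}")
    return findings

if __name__ == "__main__":
    sbom = [("event-stream", "3.3.6"), ("log4j-core", "2.14.1"), ("left-pad", "1.3.0")]
    for finding in audit_sbom(sbom):
        print(finding)
    # A real gate would fail the CI job whenever findings are non-empty.
```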
Mike: And I would suggest, just as a matter of process and architectural thinking: we know that things like Mythos and that new top tier of hacking-capable AIs are out there. If you're worried about that, you can start preparing now by actually moving your open source includes to gold open source, and solidify and armor your environment even before those things hit the market, get leaked to hackers, or become widely available. Right?

Javed: Exactly right. And the second thing that we do is scan first-party software, the software inside a company, and autonomously fix it. We have two capabilities within that: source code scan-and-fix and container scan-and-fix. When we scan, we find that about 95% of the risk in modern software is sourced from open source. So we can eliminate embedded dependencies, both at the container level and at the source code level, and swap them with gold open source, eliminating about 95% of all risk. That takes pretty vulnerable applications and containers and makes them near zero-vulnerability, and we can do it autonomously, with very low developer effort.

Mike: Right. So generally, you've got these CI/CD pipelines and chains all the way to deployment; you can hook into that and make sure that what actually comes out the end of the pipeline is going to be secure from general hacking, general cybersecurity threats. Okay. So, Javed, you've now brought out something you've just launched called Unify, which nominally is a policy manager for cybersecurity policies. Is that something that is overarching on the existing tool chain you've just described, or does it also now include the AI workloads we started talking about?

Javed: It's a great question, Mike, and it's both. One, the traditional capabilities that I spoke about can of course be managed and orchestrated through the central policy manager for the company. The new ground it starts breaking is around AI applications. What we are seeing is that AI applications require a new policy set. There are new threats attacking AI, like prompt injection, reasoning compromise, poisoned LLMs. How do you manage all of that? There are best practices for AI: agent-to-agent communication should be encrypted; MCP servers should not talk directly to LLMs; and so on. And there are corporate policies that companies want to implement. So what Unify does is bring your AI security policies together with your traditional software policies, because modern software is both traditional software and AI capabilities mixed together. Unify brings it all together.

Mike: And this is really necessary at this point, because AI is like other software in some respects, but it has a whole new ecosystem of vulnerabilities, a whole new set of vectors for being hacked, and a whole new way of diverting a company's IP, of getting into them, of doing bad things, different from how people were maybe just trying to escalate privileges in the old world. We were just looking for escalation last year; now you can drive a company bankrupt by hooking into their AI. So this has a bunch of different policies in it, covering all sorts of different vectors.
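The best-practice policies Javed lists (agent-to-agent encryption, no direct MCP-to-LLM access) could be modeled as declarative checks over an agent deployment manifest. A minimal sketch follows; the manifest shape, field names, and rules are all assumptions made for illustration, not Unify's actual format.

```python
# Hedged sketch: AI best-practice policies as declarative checks over an
# invented agent deployment manifest. Not Unify's actual policy format.

MANIFEST = {
    "agents": [
        {"name": "invoice-bot", "a2a_encryption": "mtls", "talks_to": ["mcp-files"]},
        {"name": "hr-helper",   "a2a_encryption": "none", "talks_to": ["llm-gateway"]},
    ],
    "mcp_servers": [
        {"name": "mcp-files", "direct_llm_access": True},  # should be brokered
    ],
}

def check_a2a_encryption(manifest):
    # Best practice: agent-to-agent communication should be encrypted.
    for agent in manifest["agents"]:
        if agent["a2a_encryption"] == "none":
            yield f"{agent['name']}: agent-to-agent traffic is unencrypted"

def check_mcp_llm_isolation(manifest):
    # Best practice: MCP servers should not talk directly to LLMs.
    for server in manifest["mcp_servers"]:
        if server["direct_llm_access"]:
            yield f"{server['name']}: MCP server talks directly to an LLM"

for rule in (check_a2a_encryption, check_mcp_llm_isolation):
    for violation in rule(MANIFEST):
        print("POLICY VIOLATION:", violation)
```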
Mike: How does this detect when something is not quite right in the AI space? Someone's developing an AI solution; they're trying to get it into the productivity chain, into development, and out of development into production. What are you actually scanning for? Do you ask questions? Do you run prompts? You mentioned best practices.

Javed: So, Mike, let me get a little deeper into it. The predominant ways AI applications are being built are what we call high code, and low code and no code. With high code, a developer uses Cursor, for example, to write code, and that code includes MCP servers, LLM references; they're writing agents, and so on. Now, a company may have security policies. One of our customers has about 40 pages of policies they've already written, but they have 1,000 developers. The fundamental question is: do those 1,000 developers know the 40 pages of policies? The answer is heck no, and they're building new applications. So, for example, they may have a policy that says all PII should be masked when displayed. It's a simple, commonly used policy. And one of the things you suddenly realize is that as soon as you hand data to an AI infrastructure, you have lost control. You had DLP before, you had all kinds of data controls outside the AI infrastructure, but inside it you don't. So as a developer is writing code to display, let's say, information through a prompt, we look at it and, based on the policy, auto-insert a guardrail at that point in the code: you're displaying PII information, we'll mask it. So when the code is compiled and packaged, that guardrail is inherently built in. We can do the same thing with low code. Ultimately, as the agent is delivered into production, it is hardened against data leakage in this case, but also against threats, identity compromise, vulnerabilities, and so on.

Mike: We tend to think of security problems as CVEs, where you've got some exposure of a credential or something else. But this idea that you can also spot where a guardrail is needed and automatically insert that guardrail into the AI code itself is pretty cool. It changes the way you think about development and how we're going to get AI into the world safely.
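What might such an auto-inserted "mask PII on display" guardrail reduce to at runtime? Here is one hedged reading: a masking pass wrapped around the output path. The patterns and function names are illustrative only, not Lineaje's actual guardrail.

```python
# Hedged sketch: a "mask PII before display" guardrail. Patterns and
# names are illustrative only, not Lineaje's generated code.
import re

PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "***-**-****"),           # US SSN
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<masked-email>"),  # email
]

def mask_pii(text: str) -> str:
    """Replace recognized PII spans before text reaches a user or a prompt."""
    for pattern, replacement in PII_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

def display(text: str) -> None:
    # The developer wrote a plain display; the guardrail wraps the output path.
    print(mask_pii(text))

display("Employee jane.doe@example.com, SSN 123-45-6789, cleared for review.")
# -> Employee <masked-email>, SSN ***-**-****, cleared for review.
```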
Mike: But that doesn't necessarily cover what people might be doing with AI as they bring it to the masses, right? That's great for a production coder who's got something they're trying to get into a business application, but people are building their own AI agents left and right in a company. Are you able to help them with that?

Javed: Yeah. They may be using a low-code platform, for example, like Emma or Glean, where you have an HR manager building an application, or a financial analyst building a little application to process invoices. Let's take that invoice-processing agent: you just build it, because the platforms have made it very easy, and it can accept invoices as PDFs and process them. One example we frequently use is that the PDF itself may carry malicious content, hidden content, and so on. A normal developer doesn't even know how to address all of those situations. So what we will do is insert a guardrail automatically that says: when you're accepting documents, make sure they're clean, and if not, reject them. First you detect these kinds of possibilities, some malicious component embedded in the document, and then the policy is applied evenly across all agents.
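In its simplest form, that document-cleanliness guardrail could screen an uploaded PDF for active-content markers before the agent touches it. The keys below (/JavaScript, /OpenAction, /AA, /Launch, /EmbeddedFile) are real PDF dictionary names signaling scripts, auto-run actions, and attachments; the byte-scan approach is a naive sketch, not Lineaje's product logic.

```python
# Hedged sketch: a "reject unclean documents" guardrail for an invoice
# agent. Scanning raw bytes for these names is a rough heuristic (object
# streams can hide them), not Lineaje's product logic.

SUSPICIOUS_MARKERS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA",
                      b"/Launch", b"/EmbeddedFile"]

def is_clean_pdf(data: bytes) -> bool:
    """Very rough screen: reject PDFs carrying active or embedded content."""
    if not data.startswith(b"%PDF-"):
        return False  # not a PDF at all
    return not any(marker in data for marker in SUSPICIOUS_MARKERS)

def accept_invoice(data: bytes) -> bytes:
    if not is_clean_pdf(data):
        raise ValueError("document rejected by ingestion guardrail")
    return data  # hand off to the invoice-processing agent
```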
Mike: All right. And is that going to cover someone who's doing AI for themselves, on a personal basis?

Javed: Absolutely. Let's take this example: OpenClaw is all the rage, and people are using it sometimes on their work laptops, or in other situations. The thing about OpenClaw is that it works; it can learn to do many things very quickly. So now we're seeing consumer usage bleeding into the enterprise, and the requirement, the way we see it, is really: how do you make OpenClaw managed and safe for the enterprise? One of the more interesting angles is that, just as we were talking about developers writing code, OpenClaw is writing code at runtime. All the security policies we said should apply to developers using Cursor should now apply to OpenClaw writing code at runtime. How do you do that? Unify enables you to do that. It's a guardrail of guardrails, if you think of it that way: as code is being written, even at runtime, it is checked against your corporate security policies for developers, making sure OpenClaw follows them. This is what we're doing with a product we're calling Gold Claw. Gold Claw gives you a unified, managed OpenClaw instance that you can deploy in any cloud, on any device; you easily get full visibility into what it is doing; and out of the box it is secured by gold policies, as you would expect, with full visibility and control.

Mike: So you've basically taken Lineaje from something that was very valuable, looking at the software bill of materials for large production applications and making sure the entire supply chain of tooling that went into an application was secure and scanned, to this real-time perspective where people are doing their own personal AI and you're securing that as well. That's like the world turned six times, Javed, which is awesome. I'm really impressed. I really need to get a demo of that, or at least get a copy running on my own OpenClaw so I can feel safer about what I'm doing. Because I think if people understand what you're offering in terms of securing AI for the consumer users within their enterprise, this is a really big thing.

Javed: Yeah, there's just a myriad of use cases. We're very excited about where the space is and what Lineaje can contribute to securing AI.

Mike: So if someone wants to find out more about this, whether it's Unify and what you're doing at the policy level, or even going back to the gold open source packages, where would you send them to look?

Javed: Go to Lineaje.com and click on schedule a demo or a presentation, and we'll be happy to walk you through it.

Mike: All right, that's Lineaje, folks. Obviously we're getting into a much faster world that's evolving with AI. AI is getting into everything, and cybersecurity needs to take some lessons: you need to use AI to secure AI. There's another tagline for you. Thank you, Javed, for being here today.

Javed: Thanks, Mike. My pleasure.

Mike: All right. Take care, folks.
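One last sketch, on the "guardrail of guardrails" idea from the transcript: code an agent writes at runtime can be screened against the same developer policies before it ever executes. The checks and names below are invented for illustration; this is not Gold Claw's implementation.

```python
# Hedged sketch: screen agent-generated code against developer policies
# before execution. Checks and names are invented for illustration.
import re

POLICY_CHECKS = [
    ("hardcoded secret",
     re.compile(r"(api[_-]?key|password)\s*=\s*['\"][^'\"]+['\"]", re.I)),
    ("raw PII display",
     re.compile(r"print\([^)]*\b(ssn|salary|dob)\b", re.I)),
]

def review_generated_code(code: str) -> list[str]:
    """Return the names of policy checks the generated code violates."""
    return [name for name, pattern in POLICY_CHECKS if pattern.search(code)]

generated = 'api_key = "sk-123"\nprint(employee.ssn)'
violations = review_generated_code(generated)
if violations:
    print("blocked before execution:", violations)  # never exec() this code
```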

In this inBrief chat, Mike Matchett speaks with Javed Hasan about the shift from securing traditional software supply chains to addressing emerging risks in AI-driven development.

As AI becomes embedded in modern applications, new attack vectors like prompt injection, data leakage, and compromised models are expanding the security landscape.

Lineaje discusses how techniques originally developed for software bill of materials (SBOM) security are being adapted to AI systems, including automated vulnerability remediation and policy enforcement.

Mike and Javed cover the growing importance of centralized governance across both conventional code and AI-generated logic, particularly as low-code tools and autonomous agents accelerate development. They also explore how security guardrails can be dynamically inserted during development and runtime, helping organizations manage risk as AI adoption spreads across both technical and non-technical users.

Categories:
  • Small World Big Data
  • inBrief Sessions
  • Cybersecurity » Application Security
  • AI & Machine Learning
  • Data Protection
Tags:
  • inBrief
  • Matchett
  • Lineaje
  • AI security
  • software supply chain
  • SBOM
  • guardrails
  • prompt injection
  • policy enforcement
  • DevSecOps
  • agent security
  • LLM risk
  • vulnerability management
  • AI governance
  • secure development