AI Security Trends 1H 2025: Attacks on AI Infrastructure and the Road Ahead
This episode of Trend Talks Threat Research, hosted by Jon Clay, VP of Threat Intelligence at Trend Micro, spotlights the newly released “Trend Micro State of AI Security Report: 1H 2025.” The discussion matters for IT and security teams because it details real-world attacks against AI infrastructure, evolving LLM threats, and how adversaries operationalize AI.
The report’s first section focuses on current attacks against AI infrastructure. Trend Micro’s Zero Day Initiative included AI infrastructure for the first time at its May Berlin hacking event, uncovering critical issues across the stack. Notable findings include exploitable flaws in KronDB, NVIDIA Triton Inference Server, Redis, and the NVIDIA Container Toolkit—paired with a worrying surge in unauthenticated internet exposure.
Exposed AI Inference and MLOps Systems
Post-event internet scanning found thousands of AI-related systems exposed without authentication. The count grew from roughly 3,000 to more than 10,000 servers, underscoring poor access controls and rushed deployments in AI/ML pipelines.
LLM Application Risks and Prompt-Leak Evolution
The report analyzes attacks on complex, LLM-based applications, tracking prompt injection and prompt-leak techniques across popular models. Comparative tests showed varying resilience; for example, Mistral models exhibited higher susceptibility to prompt leaks in Trend Micro’s assessment.
Criminal Adoption: Deepfakes and Off-the-Shelf Tools
Adversaries increasingly leverage legitimate, commercially available AI apps—particularly for deepfake audio/video—rather than building bespoke tools. This lowers barriers for fraud, social engineering, and KYC bypass schemes.
Policy, GenAI Complexity, and Vendor Posture
The report looks ahead to EU policy momentum and the rising complexity of agentic/genetic AI systems. It also outlines vendor responses, including Trend Micro’s research, product posture, and links to first-half content for deeper technical context.
Key Takeaways
- AI infrastructure is a prime target; inference servers and MLOps components are being actively probed and exploited.
- Internet-exposed AI services without authentication are increasing, amplifying organizational risk.
- LLM prompt-leak and injection techniques are evolving; model resilience varies significantly.
- Criminals prefer legitimate deepfake tools, accelerating fraud and KYC bypass attempts.
- EU policy shifts and agentic AI complexity will shape near-term security requirements.
For IT and security leaders, the message is clear: treat AI infrastructure as Tier-0 assets, harden exposure, validate LLM application security, and prepare for agentic AI and regulatory changes that will redefine cloud and data security controls.