Palo Alto Networks announced Prisma AIRS 2.0, a major platform upgrade that completes the native integration of the recently acquired Protect AI to deliver the industry's most comprehensive AI security platform. This release directly confronts a critical enterprise challenge: 78% of organizations are transforming with AI, but only 6% have the guardrails to do so securely. Prisma AIRS 2.0 meets this urgent demand by providing customers with end-to-end protection across the entire AI application lifecycle, securing everything from autonomous agents to the models themselves.

Prisma AIRS 2.0 delivers comprehensive end-to-end AI security, seamlessly connecting deep AI agent and model inspection in development with real-time agent defense at production runtime. The platform, continuously validated by autonomous AI red teaming, secures all interactions between AI models, agents, data, and users. This gives enterprises the confidence to discover, assess, and protect their entire AI ecosystem, accelerating secure innovation.

Already trusted by global leaders in finance, healthcare, and government, Prisma AIRS 2.0 provides visibility, control, and confidence at scale through three enhanced security modules:

AI Agent Security: Securing the Autonomous Workforce. Provides real-time, in-line defense against prompt injections, tool misuse, and malicious agent behavior. Prisma AIRS discovers and inventories every AI agent in use, sanctioned or unsanctioned ("Shadow AI"), giving organizations the visibility and control needed to secure the explosion of AI agents.

AI Red Teaming: Continuous, Autonomous Vulnerability Hunting. Addresses the new, dynamic attack surface of Generative AI applications.

Uses an autonomous, continuous, and context-aware agentic approach with over 500 specialized attacks to proactively find vulnerabilities in enterprise AI systems before they can be exploited. While others offer periodic testing, Prisma AIRS delivers a persistent, automated red team that thinks like a real adversary.

AI Model Security: Shielding Open-Source Deployment. Performs a deep architectural analysis of the model itself to find threats traditional scanners can't see, detecting sophisticated, AI-native threats such as architectural backdoors, data poisoning, and malicious code hidden within the model layers. AI Model Security also provides a complete "list of ingredients" for enterprise AI, including the model's architecture, training datasets, open-source licenses, and all software dependencies.

This provides unparalleled visibility for AI model governance, risk, and compliance.
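To make the "malicious code hidden within the model layers" risk concrete: many open-source models are distributed as Python pickle files, a format that can embed arbitrary callables that execute the moment the file is loaded. The sketch below is a minimal, generic illustration of how a model scanner can detect this class of threat (it is not Prisma AIRS code, and the denylist is an illustrative assumption): it walks the pickle opcode stream with the standard library's pickletools and flags imports of code-execution modules, without ever deserializing the file.

```python
import os
import pickle
import pickletools

# Illustrative (not exhaustive) denylist: modules whose appearance in a
# pickle's import opcodes signals possible code execution on load.
DANGEROUS_MODULES = {"os", "posix", "nt", "subprocess", "sys", "builtins"}

def scan_pickle(data: bytes) -> list[str]:
    """Return suspicious module.name references found in a pickle stream,
    found by static opcode inspection -- the payload is never unpickled."""
    findings, strings = [], []
    for op, arg, _pos in pickletools.genops(data):
        if isinstance(arg, str):
            strings.append(arg)  # track string pushes for STACK_GLOBAL
        if op.name == "GLOBAL":  # protocol <= 3: arg is "module name"
            module, _, name = arg.partition(" ")
            if module.split(".")[0] in DANGEROUS_MODULES:
                findings.append(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL" and len(strings) >= 2:
            # protocol >= 4: module and attribute were pushed as the two
            # most recent strings (simplification: assumes no memo reuse)
            module, name = strings[-2], strings[-1]
            if module.split(".")[0] in DANGEROUS_MODULES:
                findings.append(f"{module}.{name}")
    return findings

# Demo: an object whose __reduce__ smuggles an os.system call into the
# pickle, so that merely loading the "model" would run a shell command.
class Malicious:
    def __reduce__(self):
        return (os.system, ("echo pwned",))

bad = pickle.dumps(Malicious())            # serializing is safe; loading is not
good = pickle.dumps({"weights": [0.1, 0.2]})

print(scan_pickle(bad))   # flags the os.system reference
print(scan_pickle(good))  # no findings
```

Production scanners go much further (safetensors inspection, architectural analysis, weight-level checks), but the core idea is the same: analyze the artifact statically rather than trusting it enough to load it.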