Pangea Unveils Definitive Study on GenAI Vulnerabilities: Insights From 300,000+ Prompt Injection Attempts
- Pangea, an AI security company based in Palo Alto, announced on May 15, 2025, results from its global Prompt Injection Challenge held in March 2025.
- More than 800 individuals from 85 nations took part in a month-long challenge organized by Pangea, aiming to overcome AI security protections through a series of three progressively harder virtual environments.
- Over the course of a month-long challenge, participants launched close to 330,000 prompt injection trials, with roughly 10% successfully bypassing fundamental AI system guardrails, highlighting critical security vulnerabilities.
- Oliver Friedrichs, CEO of Pangea, highlighted that the frequency and complexity of current attacks demonstrate how quickly AI security challenges are growing and changing.
- Pangea recommends organizations adopt multi-layered guardrails, continuous security testing, and attack surface reduction to defend AI applications from unpredictable prompt injection attacks.
15 Articles
15 Articles

Pangea Laboratory Appoints John Moore as Chief Executive Officer
TUSTIN, Calif., May 16, 2025 /PRNewswire/ -- Pangea Laboratory, a leader in the development of advanced cancer diagnostics and microbiome Next-Generation Sequencing (NGS) testing, is proud to announce the appointment of John Moore as its new Chief Executive Officer. Mr.…

Pangea Unveils Definitive Study on GenAI Vulnerabilities: Insights from 300,000+ Prompt Injection Attempts
PALO ALTO, Calif., May 15, 2025 /PRNewswire/ -- Pangea, a leading provider of AI security guardrails, today released findings from its global $10,000 Prompt Injection Challenge conducted in March 2025. The month-long initiative attracted more than 800 participants from 85…
Pangea Unveils Definitive Study on GenAI Vulnerabilities - AI-Tech Park
Pangea, a leading provider of AI security guardrails, today released findings from its global $10,000 Prompt Injection Challenge conducted in March 2025. The month-long initiative attracted more than 800 participants from 85 countries who attempted to bypass AI security guardrails across three virtual rooms with increasing levels of difficulty. The research comes at a critical time as GenAI adoption has accelerated dramatically across industries…
GenAI vulnerable to prompt injection attacks
New research shows that one in 10 prompt injection atempts against GenAI systems manage to bypass basic guardrails. Their non-deterministic nature also means failed attempts can suddenly succeed, even with identical content. AI security company Pangea ran a Prompt Injection Challenge in March this year. The month-long initiative attracted more than 800 participants from 85 countries who attempted to bypass AI security guardrails across three vir…
Coverage Details
Bias Distribution
- 80% of the sources are Center
To view factuality data please Upgrade to Premium
Ownership
To view ownership data please Upgrade to Vantage