I ran 3,360 safety tests on GPT-4o, Claude, Grok, DeepSeek, Gemini

4 points, posted 5 hours ago
by aestrad7

7 Comments

AlejaGiral31

an hour ago

Wow! Excellent independent work! I can't believe Grok performed so well! How did you ensure all models were tested equally?

aestrad7

an hour ago

Thanks! The short answer: all models went through identical conditions, with the same techniques, same prompts, and same scoring logic.

I routed everything through OpenRouter with a single API key, so request handling, timeout logic, and retry behavior were identical across models.

OpenRouter does direct forwarding without modifying the prompt payload. If it introduces any bias, it does so equally for all five, which preserves relative comparability.

inaros

5 hours ago

Great work.

TLDR: 42 attack types. 5 models. 3,360 tests. 1 in 3 harmful requests got through.

aestrad7

5 hours ago

Thanks! And yes, that's the summary. The distribution matters too: GPT-4o at 10.6% vs. Gemini at 56.1% is a 5x gap between first and last. And the highest-bypass category across all five models was social engineering / identity impersonation at 35%, which maps directly to the indirect prompt injection problem in agentic deployments.

inaros

5 hours ago

The fact that your work is independent of the vendors is a major plus. My recommendation is to keep developing it and refuse any "collaboration" with these well-funded companies.

I could see this turning into a valuable third-party resource for companies implementing agentic solutions; you could even monetize it. The industry needs independent third-party voices.

Kudos.

aestrad7

4 hours ago

That's exactly the intent: independent, reproducible, and no vendor relationships.

The monetization angle is interesting. A continuously updated version covering more frontier models, agentic scenarios, and multi-turn testing would be genuinely useful for teams making deployment decisions. That's the direction for v2.