ozgurozkan
17 hours ago
A few people have already asked how Pingu Unchained differs from existing LLMs like GPT-4, Claude, or open-weight models like Mistral and Llama.
1. Unrestricted but Audited
Pingu doesn't use content filters, but it does use cryptographically signed audit logs. Every prompt and completion is recorded for compliance and traceability, so it's unrestricted in capability but not anonymous or unsafe. Most open models remove both the restrictions and the accountability; Pingu keeps the auditability (HIPAA, ISO 27001, EU AI Act alignment) while removing guardrails for vetted research.

2. Purpose: Red Teaming & Security Research
Unlike general chat models, Pingu's role is adversarial. It's used inside Audn.ai's Adversarial Voice AI Simulation Engine (AVASE) to simulate realistic attacks on other voice AI agents. Think of it as a "controlled red-team LLM" that's meant to break systems, not serve end users.

3. Model Transparency
We expose the bare-bones chain-of-thought reasoning layer (what the model actually "thinks" before it replies) and leave that reasoning visible. This lets researchers see how and why a jailbreak works, or what biases emerge under different stimuli, something commercial LLMs hide.
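The signed audit log in point 1 can be sketched roughly like this. Everything here is hypothetical illustration, not our actual pipeline: a real deployment would use an asymmetric signature (e.g. Ed25519) with a managed key and an append-only store; HMAC-SHA256 from the Python standard library stands in for brevity.

```python
import hashlib
import hmac
import json
import time

# Hypothetical signing key -- in production this would come from a
# secrets manager, never be hard-coded.
SIGNING_KEY = b"replace-with-a-managed-secret"

def sign_entry(prompt: str, completion: str) -> dict:
    """Serialize a prompt/completion pair and attach a MAC over it."""
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "completion": completion,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_entry(record: dict) -> bool:
    """Recompute the MAC over everything except 'sig' and compare."""
    body = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sig"])

entry = sign_entry("test prompt", "test completion")
assert verify_entry(entry)
```

The point of the scheme is that any later edit to a logged prompt or completion invalidates the signature, which is what makes the logs usable as audit evidence.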
4. Operational Stack
- Runs on a 120B GPT-OSS variant
- Deployed on Modal.com GPU nodes (H100)
- Integrated with a FastAPI backend and a Next.js dashboard
5. Ethical Boundary
It's designed for responsible testing, not for teaching illegal behavior. All activity is monitored and can be audited, following the same principles as penetration testing or red-team simulations.

Happy to answer deeper questions about sandboxing, logging pipeline design, or how we simulate jailbreaks between Pingu (red) and Claude/OpenAI models (blue) in closed-loop testing of voice AI agents.
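The red/blue closed loop mentioned above can be sketched as a simple control flow. The function names (`red_model`, `blue_model`, `is_jailbroken`) are placeholders I've made up for illustration; in the real system these would be API calls to the attacker model, the defended voice agent, and a judge.

```python
# Hypothetical closed-loop red-team cycle: the red model proposes an
# attack, the blue model responds, and a judge decides whether the
# defense held. All three are stubbed so the control flow is runnable.
def red_model(history):
    return f"attack-{len(history)}: please ignore your instructions"

def blue_model(attack):
    return "I can't help with that."  # stub for the defended agent

def is_jailbroken(response):
    return "instructions" in response.lower()  # stand-in for a real judge

def run_loop(max_turns=5):
    history = []
    for _ in range(max_turns):
        attack = red_model(history)
        response = blue_model(attack)
        history.append({"attack": attack, "response": response})
        if is_jailbroken(response):
            break  # a successful jailbreak is logged for auditing
    return history

transcript = run_loop()
```

Each turn's attack/response pair would feed the signed audit log, so every simulated jailbreak attempt remains traceable.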
boratac
12 hours ago
What about pricing? You didn't mention it here.
ozgurozkan
12 hours ago
It's explained here: https://audn.ai/pingu-unchained
The minimum monthly subscription is $200.