Nov 03 2025

Cerebras October 2025 Highlights

October was a month of momentum for Cerebras. With new launches, global events, and groundbreaking collaborations, we continued to expand access to wafer-scale AI around the world.

Try OpenAI gpt-oss-safeguard-120b at Cerebras speed

Cerebras is the fastest inference provider for OpenAI's newest model, enabling real-time reasoning about AI safety policies with full configurability and no black-box limits.

🦺 Policy-based & transparent: Bring your policy, get explainable classifications
📖 Open-weight & configurable: Apache 2.0 license; weights freely available
🚀 Run it in real time: Moderation, document triage, and agent guardrails at wafer-scale speed 

Join the private preview on Cerebras Inference Cloud to experience open-weight safety AI at wafer-scale speed.
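The "bring your policy" workflow above can be sketched in a few lines. This is a minimal, hypothetical example, assuming an OpenAI-compatible chat-completions endpoint at `api.cerebras.ai` and an API key in a `CEREBRAS_API_KEY` environment variable; the endpoint URL and payload shape are illustrative assumptions, not confirmed API details.

```python
import json
import os
import urllib.request

# Example custom policy; in practice this is whatever rules you want enforced.
POLICY = """Allow general product questions.
Flag any content that requests instructions for credential theft."""

def build_moderation_request(policy: str, content: str) -> dict:
    """gpt-oss-safeguard reasons over a policy supplied at request time:
    the policy goes in the system prompt, the content to classify in the
    user message."""
    return {
        "model": "gpt-oss-safeguard-120b",
        "messages": [
            {"role": "system", "content": policy},
            {"role": "user", "content": content},
        ],
    }

def classify(payload: dict) -> str:
    """POST the request and return the model's explainable classification.
    Requires CEREBRAS_API_KEY to be set; endpoint URL is an assumption."""
    req = urllib.request.Request(
        "https://api.cerebras.ai/v1/chat/completions",  # assumed endpoint
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['CEREBRAS_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Usage: classify(build_moderation_request(POLICY, "How do I reset my password?"))
```

Because the policy travels with each request rather than being baked into the weights, you can tighten or swap classification rules without retraining anything.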

The Fastest AI Inference, just $10 away

With Cerebras Inference: Pay Per Token, you can start building on wafer-scale compute for as little as $10 — no contracts, no friction, no GPU queues.

Just add your credit card and go.
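Once a key is issued, a first pay-per-token call can look like the sketch below. It assumes an OpenAI-compatible chat-completions endpoint at `api.cerebras.ai`, a `CEREBRAS_API_KEY` environment variable, and a hypothetical model id; none of these specifics are confirmed here.

```python
import json
import os
import urllib.request

def build_chat_payload(prompt: str, model: str = "llama-3.3-70b") -> dict:
    """Single-turn chat-completions payload. The model id is a placeholder;
    use whatever model your account lists."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def complete(prompt: str) -> tuple[str, int]:
    """Send one request and return (reply text, total tokens billed),
    reading the token count from the response's `usage` field.
    Requires CEREBRAS_API_KEY; the endpoint URL is an assumption."""
    req = urllib.request.Request(
        "https://api.cerebras.ai/v1/chat/completions",  # assumed endpoint
        data=json.dumps(build_chat_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['CEREBRAS_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"], body["usage"]["total_tokens"]

# Usage: text, tokens = complete("Summarize wafer-scale inference in one sentence.")
```

Tracking `usage.total_tokens` per call is the simplest way to see how far an initial $10 of pay-per-token credit goes.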

Unlock Blazing-Fast Inference on AWS Marketplace with Cerebras — Join Our Webinar

Join experts from AWS and Cerebras Systems to explore how 30x faster inference is changing what’s possible in GenAI. 

Faster inference isn’t just a technical milestone; it’s a business imperative. When models run faster, teams can code more, iterate more, ship more, and build more.

Speed drives discovery, reduces costs, and shortens the path from idea to impact.

Andrew's Corner - Podcasts, Commentary and More

20VC Podcast

Andrew spoke with Harry Stebbings about our recent $1.1B Series G fundraise. 

Inside OKC Datacenter

Join Andrew and Matthew Berman for a tour of the fastest data center in the world.

GITEX 2025: AI Beyond Limits

From Dubai to the world, AI innovation took center stage at GITEX.

Cerebras showcased how wafer-scale compute is accelerating breakthroughs across large-scale model training and inference — standing alongside our partners, customers, and collaborators who are redefining what’s possible with AI.

From Abu Dhabi to San Francisco, with partners spanning Africa, Asia, and the Middle East, we saw how global collaboration and shared compute infrastructure are driving the next wave of AI performance, efficiency, and accessibility.

UPCOMING EVENTS

Meet the Cerebras Team IRL

🧠 Supercomputing 2025

New research papers from Cerebras and partners on wafer-scale scaling, sparse MoE training, and inference efficiency.

☕️ CafeCompute NYC

A live, hands-on meetup for developers exploring real-world inference and rapid iteration.

☁️ AWS re:Invent

Demos, talks, and announcements on how Cerebras Inference powers GenAI at enterprise scale.