Principal Engineer, AI Inference Reliability
Cerebras Systems
About
- Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
- Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields.
- In late 2024, we launched Cerebras Inference, the fastest Generative AI inference service in the world, over 10 times faster than GPU-based hyperscale cloud inference. Since launch, we've scaled to meet the surging demand from AI labs, enterprises, and a thriving developer community.
- In October 2025, we announced our Series G funding, raising $1.1 billion USD to accelerate the expansion of our products and services to meet global AI demand.
About the team
- The Cerebras Inference team’s mission is to deliver the world’s most performant, secure, and reliable enterprise-grade AI service. We build and operate large-scale distributed systems that power AI inference at unprecedented speed and efficiency. Join us to help scale inference and accelerate AI.
About the role
- We’re looking for a hands-on Reliability Tech Lead (IC) to own the mission of making Cerebras Inference the most reliable AI service in the world. You will drive reliability strategy and execution across our inference stack, from client SDKs and public-cloud multi-region deployments to wafer-scale systems in specialized data centers.
- In this role, you will define SLOs and incident-response frameworks, design and implement reliability mechanisms at scale, and partner across hundreds of engineers to ensure our service meets world-class reliability standards.
- If you are passionate about building and operating massive-scale, low-latency, high-reliability distributed systems, we want to hear from you.
Responsibilities
- Define and drive reliability strategy: establish SLOs and ensure alignment across engineering.
- Design and implement reliability mechanisms: build and evolve systems for fault detection, graceful degradation, failover, throttling, and recovery across multiple regions and data centers.
- Lead large-scale incident management: own postmortems, root-cause analysis, and prevention loops for reliability-related incidents.
- Architect for reliability and observability: influence system design for redundancy, durability, and debuggability.
- Develop reliability tooling: create internal tools and frameworks for chaos testing, load simulation, and distributed fault injection.
- Collaborate broadly: work across software, infrastructure, and hardware teams to ensure reliability is embedded into every layer of our inference service.
- Monitor and communicate reliability metrics: build dashboards and alerts that measure service health and provide actionable insights.
- Mentor and influence: guide engineers and set best practices for designing, testing, and operating reliable large-scale systems.
Skills & Qualifications
- Bachelor's or master's degree in computer science or a related field.
- 7+ years of experience in backend, infrastructure, or reliability engineering for large-scale distributed systems.
- Strong programming skills in at least one popular backend programming language such as Python, C++, Go, or Rust.
- Deep, hands-on experience with reliability principles: SLO/SLI/SLA design, incident response, and postmortem culture.
- Excellent communication and cross-functional leadership skills.
- Bonus: prior experience building large-scale AI infrastructure systems.
Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we've reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Enjoy a simple, non-corporate work culture that respects individual beliefs.
- Read our blog: Five Reasons to Join Cerebras in 2025.
- Apply today and join us at the forefront of groundbreaking advancements in AI!
- Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.




