Staff+ Software Engineer, Observability

Anthropic
📍 San Francisco, US 🌐 Remote/hybrid 💰 $405K–$485K 🛂 Visa sponsor available 🛠 AI tools welcome at work · Staff · 10+ yrs
Python · Rust · Go · Prometheus · Grafana · ClickHouse · OpenTelemetry · Kubernetes
TL;DR

Staff+ Software Engineer at Anthropic building observability infrastructure for large-scale GPU/TPU clusters. Focus on telemetry ingest pipelines, columnar storage, distributed tracing, and AI-assisted diagnostics.

Apply at Anthropic →
You'll be redirected to the company's career page.

Job description

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the Role

Anthropic is seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on—from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable. By joining this team, you’ll have a direct impact on the reliability and operational excellence of Anthropic’s research and product systems.

As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data are growing by orders of magnitude. We’re building next-generation observability systems—high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools—to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.
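
As a rough sketch of the ingest pattern this paragraph describes (batching telemetry before writing it to columnar storage), consider the Python example below. It is illustrative only: the `MetricPoint`, `ColumnarSink`, and `IngestBuffer` names are hypothetical stand-ins, not part of Anthropic's actual stack, and a production pipeline would target a real store such as ClickHouse rather than an in-memory sink.

```python
# Illustrative sketch only: batched telemetry ingest for GPU cluster metrics.
# All names here are hypothetical; this is not Anthropic's actual pipeline.
import time
from dataclasses import dataclass, field


@dataclass
class MetricPoint:
    ts: float      # unix timestamp, seconds
    host: str      # e.g. "gpu-node-017"
    name: str      # e.g. "gpu.sm_utilization"
    value: float


@dataclass
class ColumnarSink:
    """Stand-in for a columnar store; a real system might write to ClickHouse."""
    flushed: list = field(default_factory=list)

    def write_batch(self, rows: list) -> None:
        # Columnar stores favor large, sorted batches over row-at-a-time writes.
        self.flushed.append(sorted(rows, key=lambda r: (r.name, r.ts)))


class IngestBuffer:
    """Accumulates points and flushes when the batch is full or stale."""

    def __init__(self, sink: ColumnarSink, max_rows: int = 10_000, max_age_s: float = 5.0):
        self.sink = sink
        self.max_rows = max_rows
        self.max_age_s = max_age_s
        self._buf: list = []
        self._oldest = time.monotonic()

    def add(self, point: MetricPoint) -> None:
        if not self._buf:
            self._oldest = time.monotonic()
        self._buf.append(point)
        too_full = len(self._buf) >= self.max_rows
        too_old = time.monotonic() - self._oldest >= self.max_age_s
        if too_full or too_old:
            self.flush()

    def flush(self) -> None:
        if self._buf:
            self.sink.write_batch(self._buf)
            self._buf = []


if __name__ == "__main__":
    sink = ColumnarSink()
    buf = IngestBuffer(sink, max_rows=3)
    for _ in range(7):
        buf.add(MetricPoint(time.time(), "gpu-node-017", "gpu.sm_utilization", 0.9))
    buf.flush()  # drain whatever is left in the partial batch
    print(f"flushed {len(sink.flushed)} batches")  # -> flushed 3 batches
```

The size-or-age flush rule is the usual trade-off in this kind of pipeline: larger batches are cheaper for columnar storage to compress and index, while the age bound keeps metric freshness (and therefore alerting latency) within a few seconds.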

Responsibilities

You May Be a Good Fit If You:

Strong Candidates May Also Have:

The annual compensation range for this role is listed below. 

For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.

Annual Salary:
$405,000 – $485,000 USD

Logistics

Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience

Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience

Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.  Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.

Apply at Anthropic →

More open roles at Anthropic

Anthropic
Research Engineer, Frontier Red Team (Autonomy)
📍 San Francisco, US 🌐 Remote 💰 $320K–$850K 🛂 Visa sponsor 🛠 AI tools welcome at work · Senior
Research Engineer at Anthropic building autonomous AI systems and defensive agents to understand and counter adversarial AI. Focus on agent design, evals, robotics integration, and policy-relevant demonstrations.
Python · Claude · PyTorch · reinforcement learning · robotics
Anthropic
Research Scientist, Interpretability
📍 San Francisco, US 🌐 Remote 💰 $350K–$850K 🛂 Visa sponsor · Senior
Research Scientist in Interpretability at Anthropic, focused on mechanistic understanding of large language models. Develop methods to reverse-engineer neural network algorithms, design experiments at scale, and build infrastructure for interpretability research.
Python · PyTorch · JAX · Transformers
Anthropic
Research Engineer, Production Model Post-Training
📍 Zürich, CH 🌐 Remote 🛂 Visa sponsor · Senior
Research Engineer at Anthropic building post-training pipelines for production Claude models. Focus on implementing Constitutional AI, RLHF, and alignment techniques at scale on frontier models.
Python · PyTorch · JAX · TensorFlow · distributed systems · HPC
Anthropic
Research Engineer, Agents
📍 San Francisco, US 🌐 Remote 💰 $500K–$850K 🛂 Visa sponsor 🛠 AI tools welcome at work · Senior
Research Engineer, Agents at Anthropic building agentic systems and infrastructure for Claude. Focus on agent harness design, benchmarking, evaluation, and model optimization for complex multi-step tasks.
Claude · LLMs · Python · PyTorch · JAX
Anthropic
Research Engineer / Scientist, Alignment Science
📍 San Francisco, US 🌐 Remote 💰 $350K–$500K 🛂 Visa sponsor · Mid
Research Engineer at Anthropic building empirical AI safety research. Focus on scalable oversight, AI control, alignment stress-testing, and safeguards for advanced AI systems.
Python · PyTorch · Kubernetes · LLMs · reinforcement learning