
Staff Engineer, Distributed Storage, HPC & AI Infrastructure

📍 Amsterdam, NL · 🌐 Remote/hybrid · Staff · 8+ yrs
Go · Python · Kubernetes · WekaFS · Ceph · Lustre · Terraform · Prometheus
TL;DR

Staff engineer at Together AI designing multi-petabyte distributed storage systems for AI training and inference workloads. Focus on high-performance parallel filesystems, Kubernetes-native storage operators, and cost optimization for GPU clusters.

Apply at Together AI →
You'll be redirected to the company's career page.

Job description

About the Role

In this role, you will design and deliver multi-petabyte storage systems purpose-built for the world’s largest AI training and inference workloads. You’ll architect high-performance parallel filesystems and object stores, evaluate and integrate cutting-edge technologies such as WekaFS, Ceph, and Lustre, and drive aggressive cost optimization, routinely achieving 30–50% savings through intelligent tiering, lifecycle policies, capacity forecasting, and right-sizing.
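
To illustrate the kind of tiering arithmetic behind savings like these, here is a minimal back-of-the-envelope sketch. The per-TB prices and hot/warm/cold split are hypothetical placeholders, not Together AI's actual figures; the point is only that moving cold data to cheaper tiers compounds quickly at petabyte scale.

```python
# Hypothetical per-tier prices in $/TB/month (illustrative only).
HOT_PRICE_PER_TB = 23.0    # e.g. NVMe-backed parallel filesystem tier
WARM_PRICE_PER_TB = 10.0   # e.g. object-store tier
COLD_PRICE_PER_TB = 4.0    # e.g. archive tier

def monthly_cost(hot_tb: float, warm_tb: float, cold_tb: float) -> float:
    """Total monthly storage cost across three tiers."""
    return (hot_tb * HOT_PRICE_PER_TB
            + warm_tb * WARM_PRICE_PER_TB
            + cold_tb * COLD_PRICE_PER_TB)

def savings_from_tiering(total_tb: float, hot_frac: float, warm_frac: float) -> float:
    """Fractional savings of a tiered layout vs. keeping everything hot."""
    baseline = monthly_cost(total_tb, 0, 0)
    hot = total_tb * hot_frac
    warm = total_tb * warm_frac
    cold = total_tb - hot - warm
    return (baseline - monthly_cost(hot, warm, cold)) / baseline

# Example: 5 PB (5000 TB) with 30% hot, 30% warm, 40% cold.
ratio = savings_from_tiering(5000, 0.3, 0.3)
```

Under these placeholder prices, that split yields a 50% reduction versus an all-hot baseline, which is how a 30–50% range becomes plausible once access patterns are measured.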

You will also build Kubernetes-native storage operators and self-service platforms that provide automated provisioning, strict multi-tenancy, performance isolation, and quota enforcement at cluster scale. Day-to-day, you’ll optimize end-to-end data paths for 10-50 GB/s per node, design multi-tier caching architectures, implement intelligent prefetching and model-weight distribution, and tune parallel filesystems for AI workloads. 
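
The multi-tier caching read path described above can be sketched in a few lines. This is a simplified, hypothetical model (capacities in item counts, not bytes; no concurrency or prefetching), not the actual architecture: reads check the fastest tier first, fall through to slower tiers and the backing store, and promote hits upward.

```python
from collections import OrderedDict

class LRUTier:
    """One cache tier with LRU eviction (capacity in items for simplicity)."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items: OrderedDict = OrderedDict()

    def get(self, key):
        if key in self.items:
            self.items.move_to_end(key)  # mark as most recently used
            return self.items[key]
        return None

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

class TieredCache:
    """Read path: RAM tier, then NVMe tier, then backing store; promote on hit."""
    def __init__(self, ram_capacity: int, nvme_capacity: int, backing: dict):
        self.ram = LRUTier(ram_capacity)
        self.nvme = LRUTier(nvme_capacity)
        self.backing = backing  # stand-in for the slow remote object store

    def read(self, key):
        v = self.ram.get(key)
        if v is not None:
            return v
        v = self.nvme.get(key)
        if v is None:
            v = self.backing[key]   # slow remote fetch
            self.nvme.put(key, v)   # fill the mid tier
        self.ram.put(key, v)        # promote to the fastest tier
        return v
```

In a real deployment each tier would track bytes rather than item counts and the promotion policy would weigh object size and access frequency, but the layering is the same idea.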

Hybrid working: 2 days a week at our offices in Amsterdam.

Responsibilities

Requirements

Nice to Have Skills

About Together AI

Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.

Equal Opportunity

Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.

Please see our privacy policy at https://www.together.ai/privacy  

