
Systems Research Engineer, GPU Programming

📍 San Francisco, US 🌐 Remote/hybrid 💰 $160K–$230K · Mid
CUDA · Triton · GPU programming · parallel computing
TL;DR

Systems Research Engineer at Together AI optimizing GPU kernels and algorithms for ML/AI applications. Focus on CUDA/Triton programming, performance profiling, and co-designing efficient GPU architectures with modeling and hardware teams.

Apply at Together AI →

Job description

About the Role

As a Systems Research Engineer specializing in GPU programming, you will play a crucial role in developing and optimizing GPU-accelerated kernels and algorithms for ML/AI applications. Working closely with the modeling and algorithm team, you will co-design GPU kernels and model architectures to improve the performance and efficiency of our AI systems. Collaborating with the hardware and software teams, you will help co-design efficient GPU architectures and programming models, drawing on your expertise in GPU programming and parallel computing. Your research skills will keep you current with the latest advances in GPU programming techniques, ensuring that our AI infrastructure remains at the forefront of innovation.

Requirements

Responsibilities

About Together AI

Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.

Compensation

We offer competitive compensation, startup equity, health insurance, and other benefits, as well as flexibility for remote work. The US base salary range for this full-time position is $160,000–$230,000, plus equity and benefits. Our salary ranges are determined by location, level, and role. Individual compensation is determined by experience, skills, and job-related knowledge.

Equal Opportunity

Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.

Please see our privacy policy at https://www.together.ai/privacy  

Apply at Together AI →

More open roles at Together AI

AI Researcher, Core ML (Turbo)
📍 San Francisco, US · Senior
AI Researcher, Core ML at Together AI building efficient inference and RL/post-training systems. Role spans algorithms, inference engines (SGLang, vLLM), and production-scale RL pipelines to optimize model speed, cost, and capabilities.
Python · SGLang · vLLM · GRPO · RLHF · DPO
Research Engineer, Core ML
📍 San Francisco, US · Staff
Research Engineer, Core ML at Together AI building production inference and RL/post-training systems. Focus on efficient inference algorithms, speculative decoding, and scaling RL pipelines to optimize latency, throughput, and model quality.
Python · SGLang · vLLM · ATLAS · PyTorch · GRPO
Machine Learning Engineer - Inference
📍 San Francisco, US 💰 $160K–$230K · Mid
Machine Learning Engineer at Together AI building the inference engine for large language models. Focus on optimizing runtime services, performance at scale, and high-performance systems using PyTorch and low-level systems concepts.
Python · PyTorch · CUDA · Triton · Rust · Cython
LLM Inference Frameworks and Optimization Engineer
📍 San Francisco, US 🌐 Remote 💰 $160K–$230K · Mid
LLM Inference Frameworks and Optimization Engineer at Together AI building distributed inference engines for large language models. Focus on GPU optimization, tensor parallelism, and software-hardware co-design for scalable model serving.
Python · C++ · CUDA · Triton · TensorRT · TensorRT-LLM
Senior Machine Learning Engineer, Voice AI
📍 San Francisco, US 💰 $200K–$260K · Senior
Senior ML Engineer at Together AI optimizing inference for voice models (STT, TTS, speech-to-speech). Focus on model serving engines, GPU optimization, and productionizing voice workloads at scale.
Python · PyTorch · TensorRT-LLM · vLLM · SGLang · CUDA