
Customer Support Engineer (GPU Cluster), India

Together AI
📍 India · Mid-level · 3+ yrs
Kubernetes · GPU · SLURM · Ansible · NFS · container infrastructure · Python
TL;DR

Customer Support Engineer at Together AI providing technical support for GPU cluster infrastructure and AI services. Resolves complex customer issues with Kubernetes, GPU technologies, and HPC environments while collaborating with product and engineering teams.

Apply at Together AI →
You'll be redirected to the company's career page.

Job description

About the Role

As a Customer Support Engineer at Together AI, you'll be the first line of defense, supporting customers as they build out training, fine-tuning, and inference solutions with Together AI. You'll dive deep into complex technical challenges, providing swift and effective solutions while serving as a product expert. As part of the Customer Experience organization, you will collaborate closely with product and sales, driving continuous improvement of our offerings. This is an exciting opportunity for a deeply technical professional passionate about AI and customer success to make a significant impact in a fast-paced, innovative environment.

Responsibilities

Requirements

About Together AI

Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.

Working Schedule

This position operates on a non-standard workweek to support business needs. The schedule will be either Saturday–Wednesday or Wednesday–Sunday, with two days off per week.

Compensation

We offer competitive compensation, startup equity, health insurance, and other benefits, as well as flexible remote work within the respective hiring region. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity

Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, or any other protected status.

