MLOps Engineer 3
Posted on 10 Mar 2026
Role: MLOps Engineer 3
Candidates Required: 1
Focus: Build production ML architecture on scalable, stable infrastructure and act as a bridge between Data Science and Development teams.
Experience: 4–5 Years
Skillset
Productionization:
Expert in deploying models using Nvidia Triton Inference Server and managing containerized workloads via Docker and Kubernetes (K8s) on EC2.
Feature Stores:
Experience building and maintaining scalable Feature Stores (e.g., Feast, Featureform) to ensure training-serving consistency.
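Training-serving consistency is the core property a feature store provides: the training path must see feature values as they existed at label time, while the serving path sees the latest values. A minimal sketch of that idea (a hypothetical toy class, not Feast's or Featureform's actual API):

```python
from bisect import bisect_right
from collections import defaultdict

class MiniFeatureStore:
    """Toy in-memory feature store illustrating point-in-time-correct
    retrieval, the property real stores like Feast provide to keep
    training and serving features consistent. Hypothetical API."""

    def __init__(self):
        # feature name -> sorted list of (event_timestamp, value)
        self._rows = defaultdict(list)

    def ingest(self, feature, timestamp, value):
        self._rows[feature].append((timestamp, value))
        self._rows[feature].sort()

    def get_historical(self, feature, as_of):
        """Training path: latest value with timestamp <= as_of,
        so a training example never sees a feature from its future
        (no label leakage)."""
        rows = self._rows[feature]
        idx = bisect_right(rows, (as_of, float("inf")))
        return rows[idx - 1][1] if idx else None

    def get_online(self, feature):
        """Serving path: most recent value, served with low latency."""
        rows = self._rows[feature]
        return rows[-1][1] if rows else None
```

For example, after ingesting `("user_spend_7d", t=100, 42.0)` and `(t=200, 55.0)`, a training query as of `t=150` returns `42.0` while the online lookup returns `55.0` — the same retrieval logic, two read paths.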
Programming:
Proficiency in Python for ML deployments and in Kotlin or Java for building robust, scalable backend deployment services; knowledge of Go for high-performance systems.
Data Systems:
Hands-on experience with Snowflake (as a source), BigQuery, and high-speed databases like Cassandra or Redis for low-latency serving.
DevOps / CI-CD:
Strong command of Bash scripting and CI/CD pipelines (e.g., GitHub Actions, GitLab CI) tailored for ML (Continuous Training pipelines).
Observability:
Setting up and managing monitoring stacks using Grafana and Kibana to track model drift, latency, and system health.
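One common drift signal surfaced on such dashboards is the Population Stability Index (PSI), which compares the distribution of a feature or model score in production against its training baseline. A self-contained sketch (bin count and thresholds are conventions, not fixed rules):

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index between a baseline (training) sample
    and a live (serving) sample of a feature or model score.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift. Thresholds are conventions."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def hist(sample):
        counts = [0] * bins
        for x in sample:
            # clamp into [0, bins - 1] so out-of-range live values
            # fall into the edge bins instead of raising
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        total = len(sample)
        # small epsilon avoids log(0) for empty bins
        return [max(c / total, 1e-6) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Emitted per feature as a Prometheus gauge or an Elasticsearch document, this becomes a Grafana/Kibana panel with alerting on the drift thresholds.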
Added Value:
- Knowledge of Infrastructure as Code (Terraform/CDK) to manage AWS resources (EC2/SageMaker) programmatically.
- Knowledge of Spot Instances and of right-sizing inference servers for specific workloads.
Role Expectations
Primary Role: Reliable deployment at scale
ML Areas: Sound Knowledge
Deep Learning: Deployment / Quantization
Coding: Python / Java / Go / Bash / SQL
Testing: Load / Stress Testing
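The basic shape of a load test — concurrent requests against an endpoint, latency percentiles out — can be sketched without a dedicated tool like Locust or k6. The `fake_endpoint` below is a stand-in for a real HTTP call to an inference server:

```python
import concurrent.futures
import random
import time

def load_test(endpoint_fn, total_requests=200, concurrency=20):
    """Fire total_requests calls at endpoint_fn from a fixed worker
    pool and report latency percentiles. endpoint_fn stands in for a
    real request to a model-serving endpoint."""
    def one_call(_):
        start = time.perf_counter()
        endpoint_fn()
        return time.perf_counter() - start

    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(one_call, range(total_requests)))

    def pct(p):
        # nearest-rank percentile over the sorted latencies
        return latencies[min(int(p / 100 * len(latencies)), len(latencies) - 1)]

    return {"p50": pct(50), "p95": pct(95), "p99": pct(99)}

# Stub "model endpoint" with ~1-5 ms of simulated inference latency.
def fake_endpoint():
    time.sleep(random.uniform(0.001, 0.005))
```

Swapping the stub for a real client call and sweeping `concurrency` upward is how one finds the saturation point that informs right-sizing of inference servers.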
Skills
- Python
- Java
- SQL
- Go
- Bash
BKC, Mumbai
4 years Exp.
In Office