Sasso Consulting (Pty) Ltd

MLOps Engineer


Job details

Company
Sasso Consulting (Pty) Ltd
Location
United States
Remote
Yes
Field
DevOps
Source
via Himalayas
Posted April 26, 2026
About this role

MLOps Engineer (Remote)

Location: Remote (Global)

Employment Type: Full-Time

About the Role

We are seeking a highly skilled MLOps Engineer to join our growing team. In this role, you will be responsible for deploying, monitoring, and maintaining machine learning models in production environments, ensuring reliability, scalability, and performance.

You will work at the intersection of machine learning, DevOps, and software engineering, enabling seamless integration of AI models into business systems through robust CI/CD pipelines and automation.

Key Responsibilities

  • Design, build, and maintain end-to-end ML pipelines
  • Deploy machine learning models into production environments
  • Implement and manage CI/CD pipelines for ML workflows
  • Monitor model performance, data drift, and system health
  • Automate model retraining and versioning processes
  • Collaborate with data scientists and engineers to productionise models
  • Ensure scalability, reliability, and security of ML systems
  • Manage cloud infrastructure for ML workloads (AWS, Azure, or GCP)
  • Troubleshoot and resolve issues in production ML systems

Required Skills & Experience

  • Strong experience in MLOps or DevOps within ML environments
  • Proficiency in Python and scripting for automation
  • Experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn)
  • Hands-on experience with CI/CD tools (GitHub Actions, Jenkins, GitLab CI)
  • Knowledge of containerisation (Docker) and orchestration (Kubernetes)
  • Experience with cloud platforms (AWS, Azure, or GCP)
  • Familiarity with model monitoring tools (e.g., Prometheus, MLflow, Evidently)
  • Understanding of data pipelines and ETL processes
  • Experience with version control systems (Git)

Nice to Have

  • Experience with Kubeflow, Airflow, or SageMaker
  • Knowledge of infrastructure as code (Terraform, CloudFormation)
  • Exposure to feature stores and model registries
  • Experience with real-time/streaming data systems (Kafka, Spark)

Key Competencies

  • Strong problem-solving and analytical thinking
  • Ability to work independently in a remote environment
  • Excellent collaboration and communication skills
  • Detail-oriented with a focus on system reliability

What We Offer

  • Fully remote work environment
  • Flexible working hours
  • Opportunity to work on cutting-edge AI/ML systems
  • Collaborative and innovative team culture
  • Competitive salary and benefits

Originally posted on Himalayas