
AI/ML Platform Engineer (AWS Focus)

Actively Reviewing Applications

Alphanome.AI

Andhra Pradesh, India · Full-Time
Posted 1 day ago · Apply by July 2, 2026

Job Description

We are seeking a highly skilled AI/ML Platform Engineer to design, scale, and maintain our production-grade AI infrastructure. Unlike a traditional data science role, this position focuses on the "engineering" side of AI—moving models out of notebooks and into robust, scalable production environments. The ideal candidate has deep expertise in the AWS ecosystem, specifically leveraging SageMaker and Bedrock to deploy LLMs and generative AI solutions at scale.

Key Responsibilities


  • Infrastructure & Scaling: Build and maintain scalable AI pipelines and inference endpoints on AWS, ensuring high availability and low latency for production workloads.
  • Model Orchestration: Use AWS Bedrock to integrate, manage, and deploy foundation models, and SageMaker for end-to-end ML lifecycle management.
  • Data Architecture: Design and optimize data storage and retrieval layers using DynamoDB and Elasticsearch/OpenSearch to support high-speed AI applications and Retrieval-Augmented Generation (RAG) patterns.
  • Production Evaluations: Implement automated evaluation frameworks ("Evals") to monitor model performance, drift, and accuracy in live production environments.
  • Advanced Fine-tuning: Execute fine-tuning strategies for LLMs and specialized models, specifically focusing on techniques to achieve high performance in low-data environments (e.g., PEFT, LoRA, synthetic data generation).
  • MLOps: Develop CI/CD pipelines for machine learning, ensuring seamless transitions from experimentation to deployment.

Technical Qualifications

Professional Experience
  • 3+ years of professional experience as an ML Engineer, Platform Engineer, or AI Engineer.
  • Strong AWS Expertise: Proven track record with AWS-native AI services, including SageMaker (Training, Hosting, Pipelines) and AWS Bedrock.
  • Database Proficiency: Hands-on experience with DynamoDB (for state management/metadata) and Elasticsearch/OpenSearch (for vector search and indexing).
  • Production Scaling: Demonstrated experience taking ML products from prototype to a global production scale, handling high request volumes.
  • Fine-tuning Mastery: Experience fine-tuning models where data is scarce or imbalanced using modern optimization techniques.
  • Evaluation Frameworks: Experience building "Evals in production" to provide continuous feedback loops on model quality.
  • Python Ecosystem: Expert-level Python skills and familiarity with frameworks like PyTorch, Hugging Face, LangChain, or LlamaIndex.

Educational Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, Artificial Intelligence, or a related technical discipline.

Preferred (Good-to-Have) Skills

  • Experience with Infrastructure as Code (IaC) such as AWS CDK or Terraform.
  • Knowledge of Serverless architectures (AWS Lambda, Fargate) in the context of AI.
  • Experience with Synthetic Data Generation to augment small datasets for fine-tuning.
  • AWS Certified Machine Learning – Specialty or AWS Certified Solutions Architect.

Soft Skills & Competencies

  • Operational Mindset: You don't just build models; you build systems that stay up and perform under pressure.
  • Independent Contributor: Ability to navigate the fast-moving AWS/AI landscape with minimal supervision.
  • Collaborative Problem Solver: Works effectively with product teams to translate business requirements into technical architecture.
  • Continuous Learner: Stays at the forefront of Generative AI, Bedrock updates, and evolving ML engineering best practices.


