Data Engineer
Vadodara, Gujarat, India
Job Description
Location: Vadodara
Department: Data Engineering

Job Summary
We are seeking a versatile Data, Analytics & MLOps Engineer to join our growing AI Engineering team. The ideal candidate is an expert in designing robust ETL/data pipelines, managing data lake architectures, building enterprise-grade dashboards, and enabling automation and AI/ML workflows using modern DevOps and orchestration tools. You will contribute to a unified data ecosystem that supports advanced analytics, data quality frameworks, reporting, and AI adoption at scale.

Key Responsibilities
- Data Engineering: Design, develop, and maintain scalable data pipelines for ingestion, transformation, and quality. Manage structured and unstructured data in data lakes (Azure, AWS S3, GCP).
- ETL/ELT Orchestration: Build ELT pipelines using Fivetran, dbt, Azure Data Factory, Apache Airflow, SSIS, Talend.
- Automation & Workflow: Implement and maintain automation using Power Automate, Pipefy, or Salesforce Workflows.
- Dashboards & BI: Develop analytics dashboards using Power BI, Tableau, Looker, or QlikView with data modeling and KPI visualizations.
- Data Governance: Implement data access, lineage, audit trail, and classification policies using tools such as Purview, Collibra, or Alation.
- MLOps Engineering: Operationalize ML pipelines using MLflow, Azure ML, Kubeflow, or SageMaker Pipelines. Package and deploy models via CI/CD pipelines.
- DevOps & CI/CD: Integrate data and ML workflows with Azure DevOps, GitHub Actions, or GitLab CI/CD.
- Collaboration: Partner with data scientists, product teams, and business stakeholders to deliver insight-driven solutions.

Required Skills & Experience
- 2-8 years of experience in Data Engineering, Analytics, or MLOps roles.
- Strong programming skills in SQL and Python; Scala or R is a plus.
- Experience with cloud platforms: Azure (Data Lake Gen2, Synapse, Fabric), AWS (Glue, S3, Redshift), GCP (BigQuery, Dataflow).
- Hands-on with pipeline tools: Fivetran, dbt, ADF, Apache NiFi, Kafka.
- Visualization expertise in Power BI, Looker, Tableau, or similar.
- Experience with Power Automate, Pipefy, or Salesforce Flow to automate business processes.
- Understanding of data warehouse concepts and dimensional modeling.
- Working knowledge of DataOps, MLOps, DevOps, and related best practices.

Desirable Skills
- Experience with event-driven architectures using Apache Kafka, Event Hubs, or Pub/Sub.
- Familiarity with GenAI pipelines, RAG architecture, and vector databases such as Pinecone, Weaviate, or Qdrant.
- Exposure to Data Mesh, Data Fabric, or the Modern Data Stack.
- Experience with Salesforce data integration, Power Platform, or Microsoft Fabric.
- Hands-on with PowerShell or Bash scripting for automation.

Certifications Preferred
- Microsoft Certified: Azure Data Engineer Associate / Azure AI Engineer Associate
- AWS Certified Big Data or Machine Learning
- GCP Professional Data Engineer
- dbt Fundamentals / Power BI Data Analyst Associate

Soft Skills
- Ownership mindset and ability to work in a fast-paced, ambiguous environment.
- Strong verbal and written communication for stakeholder alignment.
- Problem-solving skills with a business-outcome-driven mindset.

Opportunities
- Drive enterprise transformation through modern data and AI platforms.
- Work cross-functionally across supply chain, customer insights, marketing analytics, and finance intelligence.
- Engage in real-world GenAI readiness, Copilot integration, and agentic AI deployments.
Additional Information
- Company Name: Drevol
- Industry: N/A
- Department: N/A
- Role Category: AI Engineer
- Job Role: Mid-Senior level
- Education: No Restriction
- Job Types: On-site
- Gender: No Restriction
- Notice Period: Less Than 30 Days
- Years of Experience: 1 - Any Yrs
- Job Posted On: 1 hour ago
- Application Ends: 4 weeks left to apply