SMA Solar India

Big Data Administrator & Development Engineer

Bengaluru, Karnataka, India

1 day ago

Applicants: 0

Data processing, Hadoop, Kafka, SQL, Python
Salary Not Disclosed

2 months left to apply

Job Description

Are you passionate about data platforms, high-performance systems, and large-scale data processing? Join our innovative team as a Big Data Administrator & Development Engineer, where you'll be responsible for ensuring the stability, security, and efficiency of on-premise Big Data environments. You will manage and enhance clusters based on Hadoop, Kafka, and Druid, while also contributing to automation, system optimization, and strategic architectural decisions. This role combines deep technical administration with development responsibilities, offering the opportunity to work on cutting-edge data infrastructure that powers critical business insights and decision-making.

Key Responsibilities

Manage and Optimize Big Data Clusters: Ensure the reliable, secure, and high-performance operation of on-premise Hadoop, Kafka, and Druid clusters, proactively monitoring performance and system health.

Implement and Maintain Monitoring Solutions: Deploy and maintain monitoring and alerting tools such as Cloudera Manager and Splunk to track system availability, detect anomalies, and ensure proactive issue resolution.

Administer Access and Security Controls: Manage role-based access control (RBAC), encryption, and compliance policies to safeguard data integrity and meet security standards.

Develop and Automate Data Workflows: Design and implement ETL processes, data transfers, and analysis pipelines using SQL, NiFi, Python, or Spark to support evolving business requirements (see the illustrative sketch after this description).

Support Architectural and Technical Evolution: Collaborate with cross-functional teams on architecture design, system upgrades, and technology evaluations to improve scalability, reliability, and performance.

Required Qualifications

Minimum 8 years of hands-on experience administering on-premise Hadoop, Kafka, and Druid clusters.
Proven expertise with HDFS, Hive, Spark, Hue, and NiFi in enterprise data environments.
Strong SQL proficiency, including query tuning and performance optimization.
Solid understanding of Linux system administration and cluster maintenance best practices.
Experience with automation and scripting tools such as Ansible, Python, or Bash.

Preferred Qualifications

Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience developing ETL pipelines and integrating diverse data sources.
Familiarity with cloud-based Big Data platforms (Azure preferred).
Knowledge of disaster recovery, high-availability architectures, and runbook creation.
Excellent English communication skills and the ability to collaborate effectively in agile, cross-functional teams.

#bethechange

We look forward to receiving your application. Your contact is Anita Virnave, SMA Solar India Pvt. Ltd. SMA is committed to diversity and equal opportunity, regardless of gender, age, origin, religion, disability, or sexual orientation.
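Illustrative Sketch

For illustration only, the kind of ETL workflow referenced under "Develop and Automate Data Workflows" could look roughly like the minimal PySpark sketch below. The input path, database and table names, and column names (device_id, event_ts) are hypothetical placeholders and are not details from this posting; they simply assume a Spark environment with access to HDFS and a Hive metastore.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-device-aggregation").getOrCreate()

# Read raw events from HDFS (path is a hypothetical placeholder).
raw = spark.read.parquet("hdfs:///data/raw/events")

# Filter out rows without a device identifier and derive an event date column.
daily = (
    raw.filter(F.col("device_id").isNotNull())
       .withColumn("event_date", F.to_date(F.col("event_ts")))
       .groupBy("device_id", "event_date")
       .agg(F.count("*").alias("event_count"))
)

# Write the aggregate to a Hive-managed table partitioned by date
# (database and table names are hypothetical).
daily.write.mode("overwrite").partitionBy("event_date").saveAsTable("analytics.daily_device_events")

spark.stop()

In practice, a job like this would typically be scheduled and monitored by the orchestration and alerting tooling named in the posting rather than run by hand.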

Required Skills

Data processing, Hadoop, Kafka, SQL, Python

Additional Information

Company Name
SMA Solar India
Industry
N/A
Department
N/A
Role Category
Scala Developer
Job Role
Associate
Education
No Restriction
Job Types
On-site
Gender
No Restriction
Notice Period
Less Than 30 Days
Years of Experience
1 - Any Yrs
Job Posted On
1 day ago
Application Ends
2 months left to apply