Intermediate Application Developer - Azure (Databricks, Data Factory, Data Lake Storage, Synapse Analytics)
Chennai, Tamil Nadu, India
1 month ago
Applicants: 0
1 month left to apply
Job Description
Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Summary
This position provides input and support for full systems life cycle management activities (e.g., analyses, technical requirements, design, coding, testing, and implementation of systems and applications software). He/She performs tasks within planned durations and established deadlines. This position collaborates with teams to ensure effective communication and to support the achievement of objectives. He/She provides knowledge, development, maintenance, and support for applications.

Responsibilities
- Generates application documentation.
- Contributes to systems analysis and design.
- Designs and develops moderately complex applications.
- Contributes to integration builds.
- Contributes to maintenance and support.
- Monitors emerging technologies and products.

Technical Skills
- Cloud Platforms: GCP and Azure (Databricks, Data Factory, Data Lake Storage, Synapse Analytics)
- Data Processing: Databricks (PySpark, Spark SQL), Apache Spark
- Programming Languages: Python, SQL
- Data Engineering Tools: Delta Lake, Azure Data Factory, Apache Airflow
- Other: Git, CI/CD

Professional Experience
- Design and implement a scalable data lakehouse on Azure Databricks, optimizing data ingestion, processing, and analysis for improved business insights.
- Develop and maintain efficient data pipelines using PySpark and Spark SQL to extract, transform, and load (ETL) data from diverse Azure and GCP sources (see the sketch following this description).
- Develop SQL stored procedures for data integrity, ensuring data accuracy and consistency across all layers.
- Implement Delta Lake for ACID transactions and data versioning, ensuring data quality and reliability.
- Create frameworks using Databricks and Data Factory to process incremental data for external vendors and applications.
- Implement Azure Functions to trigger and manage data processing workflows.
- Design and implement data pipelines that integrate various data sources, and manage Databricks workflows for efficient data processing.
- Conduct performance tuning and optimization of data processing workflows.
- Provide technical support and troubleshooting for data processing issues.
- Experience with successful migrations from legacy data infrastructure to Azure Databricks, improving scalability and reducing costs.
- Collaborate with data scientists and analysts to build interactive dashboards and visualizations on Databricks for data exploration and analysis.
- Effective oral and written management communication skills.

Qualifications
- Minimum 5 years of relevant experience
- Bachelor's Degree or international equivalent in Computer Science, Information Systems, Mathematics, Statistics, or a related field

Employee Type
Permanent

UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
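For illustration, a minimal sketch of the kind of incremental ETL pipeline the role describes, using PySpark with a Delta Lake MERGE for idempotent loads. The storage path, table name, and the orders schema (order_id, order_ts) are hypothetical, not taken from the posting; a real pipeline would read these from configuration.

    from pyspark.sql import SparkSession, functions as F
    from delta.tables import DeltaTable

    # Hypothetical locations for illustration only.
    SOURCE_PATH = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
    TARGET_TABLE = "lakehouse.silver.orders"

    # On Databricks a SparkSession already exists; getOrCreate() reuses it.
    spark = SparkSession.builder.appName("incremental-orders-etl").getOrCreate()

    # Extract: read newly landed raw files (JSON assumed here).
    raw = spark.read.json(SOURCE_PATH)

    # Transform: deduplicate and enforce basic typing and quality rules.
    orders = (
        raw.dropDuplicates(["order_id"])
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .filter(F.col("order_id").isNotNull())
    )

    # Load: MERGE into the Delta table so reruns stay idempotent;
    # Delta Lake provides the ACID guarantees mentioned above.
    target = DeltaTable.forName(spark, TARGET_TABLE)
    (
        target.alias("t")
        .merge(orders.alias("s"), "t.order_id = s.order_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )

Incremental file selection (e.g., via Auto Loader or a watermark column) and error handling are omitted for brevity.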
Additional Information
- Company Name
- UPS
- Industry
- N/A
- Department
- N/A
- Role Category
- Go Developer
- Job Role
- Mid-Senior level
- Education
- No Restriction
- Job Types
- On-site
- Gender
- No Restriction
- Notice Period
- Less Than 30 Days
- Year of Experience
- 1 - Any Yrs
- Job Posted On
- 1 month ago
- Application Ends
- 1 month left to apply