Your key responsibilities
- Develop and deploy Azure Databricks solutions in a cloud environment using Azure services.
- Design, develop, and deploy ETL pipelines to cloud services.
- Interact with onshore teams, understand their business goals, and contribute to the delivery of workstreams.
- Design and optimize model code for faster execution.
Skills and attributes for success
- 1 to 3 years of experience developing data ingestion, data processing, and analytical pipelines for big data, relational database, NoSQL, and data warehouse solutions.
- 1 to 3 years of hands-on programming experience in Python/PySpark. (Must have)
- Extensive hands-on experience implementing data migration and data processing using Azure services: Databricks, ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, etc.
- Good knowledge of data warehousing (DWH) concepts and implementation experience with Snowflake.
- Well-versed in DevOps and CI/CD deployments. (Good to have)
- Hands-on experience in SQL and procedural SQL languages. (Must have)
- Strong analytical skills and an enthusiasm for solving complex technical problems.
To qualify for the role, you must have
- Graduate or equivalent qualification with 1 to 3 years of industry experience
- Working experience in an Agile-based delivery methodology. (Preferable)
- A flexible, proactive, and self-motivated working style with strong personal ownership of problem resolution.
- Proficiency in software development best practices
- Excellent debugging and optimization skills
- Excellent communicator (written and verbal, formal and informal).
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.