Develop and deploy Azure Databricks solutions in a cloud environment using Azure Cloud services
Design, develop, and deploy ETL pipelines to cloud services
Interact with onshore teams, understand their business goals, and contribute to the delivery of the workstreams
Design and optimize model code for faster execution
Skills and attributes for success
3 to 8 years of experience developing data ingestion, data processing, and analytical pipelines for big data, relational database, NoSQL, and data warehouse solutions
Extensive hands-on experience implementing data migration and data processing using Azure services: Databricks, ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, Azure Data Catalog, Cosmos DB, etc.
Familiarity with cloud services such as Azure
Hands-on experience with Spark/PySpark
Hands-on programming experience in Python/Scala
Well-versed in DevOps and CI/CD deployments
Must have hands-on experience in SQL and procedural SQL languages
Strong analytical skills and enthusiasm for solving complex technical problems
To qualify for the role, you must have
A degree in computer science or equivalent, with 3 to 8 years of industry experience
Working experience in an Agile-based delivery methodology (preferable)
A flexible, proactive, and self-motivated working style with strong personal ownership of problem resolution
Proficiency in software development best practices
Excellent debugging and optimization skills
Experience in enterprise-grade solution implementations and in converting business problems/challenges into technical solutions, considering security, performance, scalability, etc.
Excellent communication skills (written and verbal, formal and informal)
Participate in all aspects of the solution delivery life cycle, including analysis, design, development, testing, production deployment, and support