Job responsibilities
- Design and implement scalable data architectures using Databricks at enterprise scale.
- Design and implement Databricks integration and interoperability with cloud providers such as AWS, Azure, and GCP, and with platforms such as Snowflake, Immuta, and OpenAI.
- Collaborate with data scientists, analysts, and business stakeholders to understand requirements and deliver solutions.
- Develop and maintain data architecture standards, including data product interfaces, data contracts, and governance frameworks.
- Implement data governance and security measures to ensure data quality and compliance with industry and regulatory standards.
- Monitor and optimize the performance and scalability of data products and infrastructure.
- Provide training and support to domain teams on data mesh principles and AWS technologies.
- Stay up to date with industry trends and emerging technologies in data mesh and cloud computing.
Required qualifications, capabilities, and skills
- Formal training or certification on data engineering concepts and 10+ years of applied experience.
- Experience with multiple cloud platforms such as AWS, Azure, and Google Cloud, as well as data platforms such as Databricks and Snowflake.
- Experience as a Data Management Architect or in a similar role in an enterprise environment.
- Hands-on experience delivering system design, application development, testing, and operational stability.
- Influencer with a proven record of successfully driving change and transformation across organizational boundaries.
- Strong communication skills, with the ability to present to and communicate effectively with Senior Leaders and Executives.
- Experience with data governance, security, and industry and regulatory compliance best practices.
Preferred qualifications, capabilities, and skills
- Deep understanding of Apache Spark, Delta Lake, and other big data technologies.
- Proficiency in programming languages such as Python, Scala, and SQL for data processing and analysis.