Job Responsibilities:
- Design and implement scalable data architectures using Databricks at enterprise scale.
- Design and implement Databricks integration and interoperability with other cloud platforms and services such as AWS, Azure, GCP, Snowflake, Immuta, and OpenAI.
- Collaborate with data scientists, analysts, and business stakeholders to understand requirements and deliver solutions.
- Develop and maintain data architecture standards, including data product interfaces, data contracts, and governance frameworks.
- Implement data governance and security measures to ensure data quality and compliance with industry and regulatory standards.
- Monitor and optimize the performance and scalability of data products and infrastructure.
- Provide training and support to domain teams on data mesh principles and cloud data technologies.
- Stay up to date with industry trends and emerging technologies in data mesh and cloud computing.
Required qualifications, capabilities, and skills:
- Formal training or certification in software engineering concepts and 5+ years of applied experience. In addition, 2+ years of experience leading technologists to manage and solve complex technical issues within your domain of expertise.
- 15+ years of applied experience in the data engineering space using enterprise tools and home-grown frameworks, including 3-5+ years specializing in end-to-end Databricks implementations.
- Experience with multiple cloud and data platforms such as AWS, Azure, Google Cloud, Databricks, and Snowflake.
- Experience as a Databricks solution architect, AD lead, or similar role in an enterprise environment.
- Hands-on practical experience delivering system design, application development, testing, and operational stability.
- Influencer with a proven record of successfully driving change and transformation across organizational boundaries.
- Strong leadership skills, with the ability to present to and communicate effectively with senior leaders and executives.
- Experience with data governance, security, and industry and regulatory compliance best practices.
- Deep understanding of Apache Spark, Delta Lake, Delta Live Tables (DLT), and other big data technologies.
- Databricks Certified Data Engineer Associate or Professional certification.
Preferred qualifications, capabilities, and skills:
- Experience with Snowflake.
- Experience with Kinesis and Flink.
- Experience with AI/ML.
- Experience working in development teams using agile techniques, object-oriented development, and scripting languages.