What you will do
Apply Infrastructure-as-Code principles in all aspects of the data platform implementation
Build ML pipeline capabilities into the data platform
Integrate data platform components with other Red Hat systems and infrastructure
Participate in the design, implementation, and reliability engineering of ML pipelines
Participate in the design and development of new features to enable data as a service
Design and write automation software to provision, upgrade, monitor, and heal the data-as-a-service platform
Automate controls and reports that satisfy compliance and regulatory requirements
Develop enablement content that assists peers and users of the data platform
Provide data platform users with cost and usage insights, along with cost optimization recommendations
Ensure a best-in-class experience for data engineers, data analysts, and data scientists using the data platform, with high productivity and low friction
What you will bring
Bachelor's degree in Computer Science, Computer Engineering, or a related field
8+ years of software development experience with a focus on data applications and systems
Exceptional software engineering skills that lead to an elegant, maintainable data platform
Proficiency in at least one general-purpose programming language, e.g., Python, Go, Java, or Rust
Strong opinions, loosely held: perspectives that you kindly debate, defend, or change to ensure the entire team moves as one
A habit of setting and resetting the bar on all things quality, from code through to data and everything in between
Deep empathy for the users of your data platform, leading to a constant focus on removing friction, increasing adoption, and delivering business results
The ability to prune and prioritize work to maximize your contributions and impact
A bias for action and leading by example
Experience building enterprise data platforms with stringent governance and compliance requirements
Optional Skills
Familiarity with open source or inner source development and processes
Familiarity with data mesh architectural principles
Experience with Snowflake, Fivetran, dbt, and Airflow/Astronomer
Experience building ML pipelines with tools like MLflow or Kubeflow
Deep hands-on knowledge of vector databases and RAG pipelines is a huge plus