Responsibilities:
- Coordinate the definition of workflows for Edge AI application development and build end-to-end machine learning pipelines (a minimal pipeline sketch follows this list)
- Define secure coding practices and implement a repository management strategy for proprietary or open-source software
- Use continuous integration / continuous deployment (CI/CD) and containerization techniques to deploy machine learning models to production
- Collaborate with application and platform developers, data scientists, and ML engineers to identify tooling and automation needs and deliver the corresponding solutions
- Identify relevant metrics for DevOps and MLOps practices and use monitoring tools to safeguard development efficiency and operational performance
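
As an illustration of the pipeline-building responsibility above, here is a minimal sketch in Python. The stage names, the dict-based placeholder model, and the `run_pipeline` helper are illustrative assumptions, not part of any existing codebase.

```python
# Minimal sketch of an end-to-end ML pipeline as ordered stages.
# The stages and the trivial dict-based "model" are placeholders.
import json


def prepare_data():
    # Placeholder: load and split data for an Edge AI use case.
    return {"train": [1, 2, 3], "test": [4, 5]}


def train(data):
    # Placeholder training step: a trivial "model" storing a mean value.
    mean = sum(data["train"]) / len(data["train"])
    return {"mean": mean}


def evaluate(model, data):
    # Placeholder evaluation: mean absolute error on the test split.
    errors = [abs(x - model["mean"]) for x in data["test"]]
    return sum(errors) / len(errors)


def package(model):
    # Placeholder packaging step: serialize the model artifact to disk.
    with open("model.json", "w") as f:
        json.dump(model, f)
    return "model.json"


def run_pipeline():
    # Execute the stages in order and report the outcome.
    data = prepare_data()
    model = train(data)
    score = evaluate(model, data)
    artifact = package(model)
    print(f"evaluation score: {score:.2f}, artifact: {artifact}")


if __name__ == "__main__":
    run_pipeline()
```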
Required Skills and Experience:
- Proficiency with version control systems (e.g. Git), cloud platforms (e.g. AWS, Azure) and Infrastructure as Code tools (e.g. Terraform, Ansible, Packer)
- Strong background in defining CI/CD/CT pipelines and in building and maintaining the supporting infrastructure (e.g. Jenkins, GitLab CI, GitHub Actions); a minimal scripted CI step sketch follows this list
- Robust programming and scripting skills, preferably in Python, C, or C++, and prior experience with Linux systems and containers (e.g. Docker, Kubernetes)
- Degree or equivalent experience in Computer Science, Electronics, or a related field
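
To illustrate the CI/CD scripting skills listed above, here is a minimal sketch of a CI step written in Python. The image name, registry, tag handling, and the use of pytest as the test command are illustrative assumptions, not a prescribed setup.

```python
# Sketch of a CI step a pipeline job (e.g. GitHub Actions, GitLab CI) might
# invoke: run the test suite, then build and push a container image.
import subprocess
import sys

# Hypothetical registry and image name used only for illustration.
IMAGE = "registry.example.com/edge-ai/model-service"


def run(cmd):
    # Echo the command and run it; a non-zero exit code raises and fails the job.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def main(tag: str):
    run(["pytest", "-q"])                           # unit tests gate the build
    run(["docker", "build", "-t", f"{IMAGE}:{tag}", "."])
    run(["docker", "push", f"{IMAGE}:{tag}"])       # the deploy step pulls this tag


if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "dev")
```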
“Nice To Have” Skills and Experience:
- Hands-on experience with CI/CT solutions for embedded systems (e.g. software-in-the-loop / hardware-in-the-loop, SIL/HIL) and monitoring tools (e.g. Prometheus, Grafana, Elasticsearch, Kibana)
- Knowledge of repository management tools (e.g. Gerrit, GitLab) and CI/CD pipelines for open-source projects
- Familiarity with ML frameworks (e.g. PyTorch, TensorFlow) and model testing (see the sketch after this list)
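
As a small illustration of model testing with an ML framework, here is a hedged sketch using PyTorch with pytest-style test functions. The tiny Linear model and the expected shapes are illustrative assumptions, not a prescribed test suite.

```python
# Sketch of basic model tests with PyTorch; run with pytest.
import torch


def build_model():
    # Stand-in for loading a real trained model artifact.
    return torch.nn.Linear(in_features=8, out_features=2)


def test_output_shape():
    # A batch of 4 inputs with 8 features should yield 4 outputs with 2 values.
    model = build_model().eval()
    batch = torch.randn(4, 8)
    with torch.no_grad():
        out = model(batch)
    assert out.shape == (4, 2)


def test_inference_is_deterministic():
    # The same input through the same weights must produce identical outputs.
    model = build_model().eval()
    batch = torch.randn(4, 8)
    with torch.no_grad():
        first = model(batch)
        second = model(batch)
    assert torch.allclose(first, second)
```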