Responsibilities:
- Work on cloud-based Unified Cloud Application Protection projects.
- Design, develop, and maintain scalable data pipelines and ETL processes to collect, process, and analyze large-scale datasets.
- Implement and deploy machine learning models into production environments, ensuring reliability, scalability, and performance.
- Collaborate with data scientists, software engineers, and stakeholders to identify and prioritize data engineering and machine learning projects.
- Optimize and fine-tune machine learning models for performance and scalability.
- Develop and maintain robust data architectures and infrastructure, including systems for data storage, processing, and retrieval.
- Conduct code reviews and ensure best practices in software engineering and data engineering.
- Stay up to date with the latest advancements in machine learning, data engineering, and related fields.
Requirements:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering and machine learning roles.
- Strong proficiency in programming languages such as Python, Java, or Scala.
- Experience with machine learning libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn.
- Proficiency in data manipulation and analysis using tools such as SQL, Pandas, and NumPy.
- Hands-on experience with cloud platforms such as AWS, Google Cloud, or Azure.
- Excellent understanding of data architectures, ETL processes, and data warehousing.
- Proven ability to build and deploy machine learning models in production environments.
- Experience with big data technologies such as Elasticsearch, Spark, Flink, or similar.
- Excellent communication and collaboration skills.
Wage ranges are based on factors including the labor market, job type, and job level. Exact salary offers are determined by the candidate's subject knowledge, skill level, qualifications, experience, and geographic location.