What you'll be doing:
Drive hands-on safety research on a wide range of AI and networking products.
Develop tools and processes to expose novel weaknesses in AI systems and preempt threats.
Help define safety standards and ensure AI development processes meet them.
Partner with cross-functional teams to understand their needs and implement solutions.
Serve as a technical focal point across multiple development and networking teams, providing hands-on AI safety and engineering expertise.
What we need to see:
Bachelor’s or Master’s Degree in Computer Science, Computer Engineering, Data Science, or a related field (or equivalent experience).
5+ years of demonstrated experience in AI safety/security and offensive cybersecurity.
Knowledge of AI vulnerabilities (at both the model and infrastructure level) and effective mitigation strategies.
In-depth understanding of LLM, multimodal LLM (MLLM), generative AI, agent, and RAG workflows.
Proven Python programming expertise.
Self-starter with a passion for growth, enthusiasm for continuous learning, and a willingness to share findings across the team.
Highly motivated, passionate, and curious about new technologies.
Ways to stand out from the crowd:
Hands-on experience designing and building software products, including infrastructure and system design.
Knowledge of MLOps technologies such as Docker and Kubernetes.
Familiarity with ML libraries (PyTorch, TensorRT, or TensorRT-LLM).