Leverage expertise in AI safety to uncover potential risks and develop novel mitigation strategies, including alignment techniques, constitutional AI approaches, RLHF, and robustness improvements for large language models.
Create and implement comprehensive evaluation frameworks and red-teaming methodologies to assess model safety across diverse scenarios, edge cases, and potential failure modes.
Build automated safety testing systems, generalize safety solutions into repeatable frameworks, and write efficient code for safety model pipelines and intervention systems.
Maintain a user-oriented perspective: understand safety needs from the user's point of view, validate safety approaches through user research, and serve as a trusted advisor on AI safety matters.
Track advances in AI safety research, identify relevant state-of-the-art techniques, and adapt safety algorithms to drive innovation in production systems serving millions of users.
Embody our culture and values.
Required Qualifications
Bachelor’s Degree in Computer Science or related technical discipline AND 4 years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python
OR equivalent experience.
Experience prompting and working with large language models.
Preferred Qualifications
Bachelor’s Degree in Computer Science or related technical field AND 8 years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python
OR Master’s Degree in Computer Science or related technical field AND 6 years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python
OR equivalent experience.
Demonstrated interest in Responsible AI.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: