In this role you will:
- Design and implement evaluation frameworks for measuring model performance, including human annotation protocols, quality control mechanisms, statistical reliability analysis, and LLM-based autograders to scale evaluation
- Apply statistical methods to extract meaningful signals from human-annotated datasets, derive actionable insights, and implement improvements to models and evaluation methodologies
- Analyze model behavior, identify weaknesses, and drive design decisions through failure analysis. Examples include, but are not limited to: model experimentation, adversarial testing, and creating insight/interpretability tools to understand and predict failure modes
- Work across the entire ML development cycle, such as developing and managing data from various endpoints, managing ML training jobs with large datasets, and building efficient and scalable model evaluation pipelines
- Independently run and analyze ML experiments to deliver real improvements