Responsible AI Engineer (Governance & Safety) - Senior
You will lead the operationalization of governance, security, and privacy controls for production AI systems. The remit spans policy to code: frameworks, controls, tooling, documentation, red teaming, and continuous evaluation.
Key responsibilities
- Establish and run LLMOps/Agentic Ops practices (lifecycle, approvals, versioning, observability, incident response/playbooks) integrated with platform tooling.
- Define and enforce governance & security controls (PII protection, data residency, model access, content safety, jailbreak/prompt injection defenses), integrating with enterprise security.
- Build LLM evaluation pipelines (groundedness/faithfulness, toxicity/PII, bias/fairness, robustness) and quality gates for pre-production checks and ongoing post-deployment monitoring.
- Implement guardrails/ethics (policy as code, allow/deny lists, safety filters, red teaming harnesses) and ensure compliant documentation (model cards, data sheets, DPIAs).
- Contribute production-grade Python libraries, policies, and APIs that product teams can adopt “as a service”; partner with platform teams on AWS/Azure controls.
Must have skills
- LLMOps/Agentic Ops patterns and tooling
- Governance/Security (threat modeling for LLMs, privacy, access controls)
- LLM Evaluation frameworks (human+automatic metrics; eval harnesses)
- Guardrails/Ethics techniques and incident playbooks
- Python and AWS/Azure
- API development for governance services
Good to have
- Familiarity with RAG/advanced agentic AI to guide safe design choices
Qualifications & experience
- B.Tech/M.Tech/MS in CS/EE or equivalent.
- 4+ years in Responsible AI engineering.
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.