
Salesforce Lead Applied Scientist - Responsible AI 
United States, California, San Francisco 
839499348

21.08.2025

Job Category

Software Engineering

Job Details

Job Responsibilities

  • Develop strategy alongside Salesforce AI Research, engineering, data science, and product management to create, develop, and ship cutting-edge generative AI capabilities for Salesforce customers while mitigating ethical risks and capturing ethical opportunities.

  • Identify potential negative consequences, determine how those consequences might be mitigated, and drive prioritization of those mitigations into a team’s roadmap. Conversely, identify positive ethical impacts in a roadmap, specification, or design and ways to amplify them in the product.

  • Conduct trust and safety evaluations, as well as CRM benchmarking against other models and against different versions of the same model.

  • Develop solutions for real-world, large-scale problems.

  • As needed, lead teams to deliver on more complex pure and applied research projects.

Minimum Requirements:

  • Master's degree (or foreign degree equivalent) in Computer Science, Engineering, Information Systems, Data Science, Social or Applied Sciences, or a related field

  • 5-8 years of relevant experience in AI ethics, AI research, security, Trust & Safety, or similar roles, plus experience researching responsible generative AI challenges and risk mitigations.

  • Expertise in one of the following areas: alignment, adversarial robustness, interpretability/explainability, or fairness in generative AI.

  • Proven leadership, organizational, and execution skills. Passion for developing cutting-edge AI ethics technology and deploying it through a multi-stakeholder approach.

  • Experience working in a technical environment with a broad, cross-functional team to drive results, define product requirements, coordinate resources from other groups (design, legal, etc.), and guide the team through key milestones

  • Proven ability to implement, operate, and deliver results via innovation at a large scale.

  • Excellent written and oral communication skills, as well as interpersonal skills, including the ability to articulate technical concepts to both technical and non-technical audiences.

Preferred Requirements:

  • 8-10 years of relevant experience in AI ethics, AI research, security, Trust & Safety, or similar roles

  • Advanced degree in Computer Science, Human-Computer Interaction, Engineering, Data Science or quantitative Social Sciences

  • Published research on algorithmic fairness, accountability, and transparency, especially around detecting and mitigating bias or AI safety.

  • Full-time industry experience in deep learning research/product.

  • Strong experience building and applying machine learning models for business applications.

  • Strong programming skills

  • Experience in implementing high-performance and large-scale deep learning systems.

  • Thoughtful about AI impacts and ethics.

  • Fantastic problem solver; ability to solve problems the world has not solved before.

  • Presented a paper at NeurIPS, FAccT, AIES, or similar conferences

  • Works well under pressure, and is comfortable working in a fast-paced, ever-changing environment.

If you require assistance due to a disability when applying for open positions, please submit a request via this link.

Posting Statement

Pursuant to the San Francisco Fair Chance Ordinance and the Los Angeles Fair Chance Initiative for Hiring, Salesforce will consider for employment qualified applicants with arrest and conviction records. For New York-based roles, the base salary hiring range for this position is $172,000 to $334,600. For Washington-based roles, the base salary hiring range for this position is $157,600 to $306,600. For California-based roles, the base salary hiring range for this position is $172,000 to $334,600.