Across all our hires, it's important that colleagues share our enthusiasm for the role of technology and AI in health and healthcare, but also appreciate the challenges and risks of delivering effective solutions in a complex, safety-critical space. By design we will remain a lean founding team (albeit within a much larger org), so you will need to be highly self-sufficient and able to span from high-level strategy to ground-level execution on a wide range of tasks. This also means you will play a major role in cultivating and promoting a positive team culture.
Responsibilities
MTS-AIs work across the full AI modelling stack, including prompt design, pre- and post-training, inference, evaluation, LM systems design, and deployment. In this role, you will span research and engineering boundaries to deliver LLM solutions in healthcare.
You will work in a small team within a broader organisation, designing and validating hypotheses across a cutting-edge language modelling stack. The work involves a hybrid of research and engineering and can flex across every aspect of that stack to make models as useful and safe for health as possible.
We are looking for early-hire full-stack candidates who possess:
- Deep, full-stack expertise in designing and evaluating AI applications. Evidence of this may include research papers at top AI conferences and journals, open-source projects, or industry experience building production AI stacks.
- Strong intuition about pre- and post-training, metric design for AI, prompt engineering methodologies, and AI systems design.
- Demonstrated experience in one or more of the following areas: prompt engineering, experimental design, language model evaluations, fine-tuning, reinforcement learning/direct preference optimization, data curation, and classic machine learning principles.
Required/Minimum Qualifications
- Bachelor's Degree in Computer Science or related technical discipline AND technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.
- Demonstrated full-stack experience in large-scale AI, with empirical evidence in the form of top-tier publications, open-source contributions, and/or on-the-job work experience.
- Deeper expertise in one or more parts of the AI stack, including prompt engineering, pre-training, fine-tuning, reinforcement learning and direct preference optimization, data curation, LLM inference, orchestration, evaluation pipelines, and deployment.
Additional or Preferred Qualifications
- Ability to flex across research and engineering boundaries, comfortable wearing both hats.
- Passionate about conversational AI and its deployment.
- Demonstrated written and verbal communication skills with the ability to work closely with cross-functional teams, including product managers, designers, and other engineers.
- Passion for learning and staying up to date with industry trends, best practices, and emerging technologies in AI.
- Proven ability to collaborate and contribute to a positive, inclusive work environment, fostering knowledge sharing and growth within the team.