
Microsoft Senior Software Engineer 
United States, Washington 
886141578

Yesterday

In this role you will work as both a software engineer and data scientist. You will be joining the Inference team, which works directly with OpenAI to host models efficiently on Azure, serving a large volume of requests per day. The team builds LLM (Large Language Model) infrastructure, running LLMs and Diffusion models for inference at high scale and low latency.


Required/Minimum Qualifications:

  • Bachelor’s degree in Computer Science or a related technical discipline AND 4+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python, OR equivalent experience.
  • 2+ years’ experience working with LLMs using Python.

Other Requirements:

  • Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud Background Check upon hire/transfer and every two years thereafter.

Preferred Qualifications:

  • Experience in distributed computing and architecture, and/or developing and operating high scale, reliable online services.
  • C/C++ development experience.
  • Proven experience in observability, performance engineering, cost optimization, or a related domain.
  • Knowledge of and experience with Kubernetes-based online services at scale.
  • Proficiency in data science modeling and statistical methodologies.

Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:


Microsoft will accept applications for the role until December 25, 2024.

Responsibilities
  • Engage directly with key partners to understand and implement complex inferencing capabilities and observability strategies for optimizing AI model performance and GPU utilization.
  • Develop solutions for benchmark performance and optimization, a load-testing framework for customer AI workloads, and efficiency improvements driven by data science modeling initiatives (a minimal benchmarking sketch follows this list).
  • Collaborate with cross-functional teams to improve service reliability and performance.
  • Develop and refine metrics to assess the performance and effectiveness of runtime inferencing. Lead efforts to drive down latency and improve throughput.
  • Identify, assess, track, and mitigate project risks and issues in a fast-paced, start-up-like environment.
  • Build constructive and effective relationships and solve problems collaboratively.
  • Meet production inference SLAs for core AI scenarios on Azure.
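
The following sketch is not part of the posting; it is a loose illustration of the benchmarking and latency/throughput metrics work described above, assuming Python (listed in the qualifications). The endpoint URL, request payload, and concurrency/request counts are hypothetical and chosen only for illustration.

# Minimal load-test sketch: send concurrent requests to a hypothetical
# inference endpoint and report latency percentiles and throughput.
# ENDPOINT, PAYLOAD, CONCURRENCY, and TOTAL_REQUESTS are assumptions.
import json
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "http://localhost:8000/v1/completions"  # hypothetical endpoint
PAYLOAD = json.dumps({"prompt": "Hello", "max_tokens": 32}).encode("utf-8")
CONCURRENCY = 8
TOTAL_REQUESTS = 100

def one_request() -> float:
    """Send a single request and return its end-to-end latency in seconds."""
    req = urllib.request.Request(
        ENDPOINT, data=PAYLOAD, headers={"Content-Type": "application/json"}
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        resp.read()
    return time.perf_counter() - start

def main() -> None:
    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = list(pool.map(lambda _: one_request(), range(TOTAL_REQUESTS)))
    wall = time.perf_counter() - wall_start

    latencies.sort()
    p50 = statistics.median(latencies)
    p99 = latencies[int(0.99 * (len(latencies) - 1))]
    print(f"p50 latency: {p50 * 1000:.1f} ms")
    print(f"p99 latency: {p99 * 1000:.1f} ms")
    print(f"throughput: {TOTAL_REQUESTS / wall:.1f} req/s")

if __name__ == "__main__":
    main()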

Other:

  • Embody our culture and values.