Cisco - Solutions Engineer, AI
South Korea, Seoul
Job ID: 580068890

18.11.2024
What You’ll Do
We are seeking a senior Solutions Engineer - Artificial Intelligence (AI) to join our dynamic sales team. As an SE (AI), you will drive the adoption of our AI solutions across various industries. You will identify potential clients, understand their specific needs, and provide tailored AI solutions that enhance their business operations. This role requires a deep understanding of AI technologies and experience explaining technical concepts to diverse audiences.
Who You Are
Minimum Qualifications:
  • 6+ years of technical presales / customer-facing experience (preferably selling Compute, Storage, and/or Network solutions).
  • Good understanding of programming/scripting languages and related technologies (e.g., Python, Go, MySQL, Git/GitHub, ETL, OLAP, RDBMS, Scribus, and cloud platforms such as AWS, Azure, Google Cloud, or IBM Cloud).
  • Ability to provide detailed, consumable documentation of standard deployment methodologies for application acceleration, automation/management efficiencies, enterprise edge, and AI/ML solutions.
  • Excellent presentation skills – ability to value-sell and deliver engaging workshops to both technical and non-technical audiences on AI and/or infrastructure topics.
  • Fluent in Korean and business-level English.
Preferred Qualifications:
  • Bachelor's Degree in Computer Science, Computer Engineering, Electrical Engineering, or related field. Advanced degree is a plus.
  • AI experience with Nvidia, IBM, Microsoft, Dell, NetApp, HPE, and/or other AI vendors.
  • In-depth understanding of language models such as GPT-3, BERT, or similar architectures.
  • Experienced with databases (Oracle, PostgreSQL, MySQL, MongoDB, Cassandra, Redis, Snowflake, BigQuery) and AI/ML frameworks (scikit-learn, TensorFlow, PyTorch, Hugging Face).
  • Expertise in training and fine-tuning LLMs using popular frameworks such as TensorFlow, PyTorch, or Hugging Face Transformers (see the fine-tuning sketch after this list).
  • Experience deploying LLMs in cloud environments (e.g., AWS, Azure, GCP) as well as on-premises infrastructure.
  • Familiarity with containerization technologies (e.g., Docker or equivalent) and orchestration tools (e.g., Kubernetes) for scalable, efficient model deployment (see the deployment sketch below).
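
For illustration of the fine-tuning workflow referenced above, here is a minimal sketch using Hugging Face Transformers and Datasets. It assumes both libraries are installed; the checkpoint (bert-base-uncased), dataset (imdb), subsample sizes, and hyperparameters are illustrative assumptions only, not part of this posting.

# Minimal fine-tuning sketch: adapt a BERT checkpoint to a binary
# sentiment-classification task with the Hugging Face Trainer API.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")  # illustrative public dataset

def tokenize(batch):
    # Truncate/pad reviews to a fixed length so they can be batched.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bert-imdb-finetune",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    logging_steps=100,
)

trainer = Trainer(
    model=model,
    args=args,
    # Subsampled splits keep the demonstration quick; use full splits in practice.
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].shuffle(seed=42).select(range(500)),
)

trainer.train()
print(trainer.evaluate())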
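
The deployment sketch below shows one way a containerized inference service could be rolled out with the Kubernetes Python client; the container image, namespace, GPU request, and replica count are hypothetical placeholders, and a configured kubeconfig is assumed.

# Minimal deployment sketch: create a Kubernetes Deployment that runs a
# containerized inference service and can be scaled by raising `replicas`.
from kubernetes import client, config

config.load_kube_config()  # uses the local kubeconfig / current context

container = client.V1Container(
    name="llm-inference",
    image="registry.example.com/llm-inference:latest",  # hypothetical image
    ports=[client.V1ContainerPort(container_port=8080)],
    resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),  # one GPU per pod
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="llm-inference"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # scale horizontally by adjusting this value
        selector=client.V1LabelSelector(match_labels={"app": "llm-inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "llm-inference"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)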