Expoint – all jobs in one place
The place where the best experts and companies meet

Wanted: Data Center Operations Trainee

The Data Center Operations Trainee role is a rising star in the high-tech sky: developers can choose from a range of interesting projects while doing dynamic, challenging work. Come find your next position as a Data Center Operations Trainee here at Expoint!
390 jobs found
Yesterday

Unity Senior DevOps Engineer Data Platform Israel, Tel-Aviv District, Tel-Aviv

Limitless High-tech career opportunities - Expoint
Description:
The opportunity
  • Technical Leadership & Architecture: Drive data infrastructure strategy and establish standardized patterns for AI/ML workloads, with direct influence on architectural decisions across data and engineering teams
  • DataOps Excellence: Create seamless developer experience through self-service capabilities while significantly improving data engineer productivity and pipeline reliability metrics
  • Cross-Functional Innovation: Lead collaboration between DevOps, Data Engineering, and ML Operations teams to unify our approach to infrastructure as code and orchestration platforms
  • Technology Breadth & Growth: Work across the full DataOps spectrum from pipeline orchestration to AI/ML infrastructure, with clear advancement opportunities as a senior infrastructure engineer
  • Strategic Business Impact: Build scalable analytics capabilities that provide direct line of sight between your infrastructure work and business outcomes through reliable, cutting-edge data solutions
What you'll be doing
  • Design Data-Native Cloud Solutions - Design and implement scalable data infrastructure across multiple environments using Kubernetes, orchestration platforms, and IaC to power our AI, ML, and analytics ecosystem
  • Define DataOps Technical Strategy - Shape the technical vision and roadmap for our data infrastructure capabilities, aligning DevOps, Data Engineering, and ML teams around common patterns and practices
  • Accelerate Data Engineer Experience - Spearhead improvements to data pipeline deployment, monitoring tools, and self-service capabilities that empower data teams to deliver insights faster with higher reliability
  • Engineer Robust Data Platforms - Build and optimize infrastructure that supports diverse data workloads from real-time streaming to batch processing, ensuring performance and cost-effectiveness for critical analytics systems
  • Drive DataOps Excellence - Collaborate with engineering leaders across data teams, champion modern infrastructure practices, and mentor team members to elevate how we build, deploy, and operate data systems at scale
What we're looking for
  • 3+ years of hands-on DevOps experience building, shipping, and operating production systems.
  • Coding proficiency in at least one language (e.g., Python or TypeScript); able to build production-grade automation and tools.
  • Cloud platforms: deep experience with AWS, GCP, or Azure (core services, networking, IAM).
  • Kubernetes: strong end-to-end understanding of Kubernetes as a system (routing/networking, scaling, security, observability, upgrades), with proven experience integrating data-centric components (e.g., Kafka, RDS, BigQuery, Aerospike).
  • Infrastructure as Code: design and implement infrastructure automation using tools such as Terraform, Pulumi, or CloudFormation (modular code, reusable patterns, pipeline integration).
  • GitOps & CI/CD: practical experience implementing pipelines and advanced delivery using tools such as Argo CD / Argo Rollouts, GitHub Actions, or similar.
  • Observability: metrics, logs, and traces; actionable alerting and SLOs using tools such as Prometheus, Grafana, ELK/EFK, OpenTelemetry, or similar.
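The observability bullet above centers on metrics, actionable alerting, and SLOs. As a hedged illustration (the 99.9% target and the request counts are invented for the example, not taken from the posting), here is how an SLO error budget can be computed from raw request counts:

```python
# Hedged sketch: computing remaining SLO error budget from request counts.
# The 99.9% target and the counts below are illustrative assumptions.

def error_budget_remaining(total_requests: int,
                           failed_requests: int,
                           slo_target: float = 0.999) -> float:
    """Return the fraction of the error budget still unspent (0.0-1.0)."""
    if total_requests == 0:
        return 1.0  # no traffic yet, full budget remains
    allowed_failures = total_requests * (1.0 - slo_target)
    if allowed_failures == 0:
        return 0.0 if failed_requests else 1.0
    spent = failed_requests / allowed_failures
    return max(0.0, 1.0 - spent)

if __name__ == "__main__":
    # 1,000,000 requests at a 99.9% SLO allow ~1,000 failures;
    # 250 failures spend about a quarter of the budget.
    print(error_budget_remaining(1_000_000, 250))  # ≈ 0.75
```

Alerting on the budget's burn rate, rather than on raw error counts, is what makes SLO-based alerts actionable.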
You might also have
  • Data Pipeline Orchestration - Demonstrated success building and optimizing data pipeline deployment using modern tools (Airflow, Prefect, Kubernetes operators) and implementing GitOps practices for data workloads
  • Data Engineer Experience Focus - Track record of creating and improving self-service platforms, deployment tools, and monitoring solutions that measurably enhance data engineering team productivity
  • Data Infrastructure Deep Knowledge - Extensive experience designing infrastructure for data-intensive workloads including streaming platforms (Kafka, Kinesis), data processing frameworks (Spark, Flink), storage solutions, and comprehensive observability systems
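The orchestration tools named above (Airflow, Prefect, Kubernetes operators) all reduce, at their core, to executing a DAG of tasks in dependency order. A minimal sketch of that core idea using only Python's standard library — the task names are hypothetical examples, not from any real pipeline:

```python
# Hedged sketch of pipeline orchestration: resolving task execution order
# from declared dependencies, the way DAG schedulers like Airflow do.
# Task names ("extract", "transform", ...) are hypothetical.
from graphlib import TopologicalSorter

def execution_order(dependencies: dict[str, set[str]]) -> list[str]:
    """Return a valid run order; raises CycleError on circular deps."""
    return list(TopologicalSorter(dependencies).static_order())

if __name__ == "__main__":
    dag = {
        "load": {"transform"},     # load runs after transform
        "transform": {"extract"},  # transform runs after extract
        "report": {"load"},
        "extract": set(),          # no upstream dependencies
    }
    print(execution_order(dag))  # ['extract', 'transform', 'load', 'report']
```

Real schedulers add retries, backfills, and parallelism on top, but the dependency-resolution step is exactly this.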
Additional information
  • Relocation support is not available for this position.
  • Work visa/immigration sponsorship is not available for this position.

This position requires the incumbent to have a sufficient knowledge of English to have professional verbal and written exchanges in this language since the performance of the duties related to this position requires frequent and regular communication with colleagues and partners located worldwide and whose common language is English.

Yesterday

Forter Data Researcher Israel, Tel Aviv District, Tel Aviv-Yafo

Description:

What you'll be doing:

  • Invent, design, implement, and refine our system’s core decisioning logic and models in a live production environment.
  • Conduct in-depth research into complex fraud patterns, adversarial networks, and emerging global threats.
  • Leverage rich datasets to derive actionable insights, develop new system components, and advance our feature engineering processes.
  • Develop, prototype, and automate new tools and processes to enhance the precision and scale of our systems.
  • Collaborate with a world-class team of Data Scientists, Analysts, Researchers, and Engineers to develop the next generation of Forter’s AI technology.
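Feature engineering for fraud research often starts from behavioral aggregates over transaction history. As a hedged sketch (the window size and data are invented; Forter's actual features are not public), a simple transaction-velocity feature might look like:

```python
# Hedged sketch of a fraud-research feature: "velocity", the number of
# transactions an account made in a sliding time window. The one-hour
# window and the timestamps are illustrative assumptions.

def velocity(timestamps: list[float], now: float, window: float = 3600.0) -> int:
    """Count events with now - window < t <= now (timestamps in seconds)."""
    return sum(1 for t in timestamps if now - window < t <= now)

if __name__ == "__main__":
    txn_times = [0, 100, 3500, 3590, 3600]  # seconds since some epoch
    # Four of the five transactions fall inside the last hour.
    print(velocity(txn_times, now=3600))  # 4
```

A sudden spike in such a feature is a classic signal of automated or adversarial activity.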

What you'll need:

  • Relevant experience (one of the below)
    • At least 2 years of hands-on experience in quantitative research/data science, or a related role involving production-oriented data analytics and hypothesis-led research.
    • MSc or PhD in a quantitative field (e.g., Physics, Economics, Neuroscience, Biotechnology, Computer Science, etc.).
  • Strong analytical and logical reasoning skills with a proven ability to dissect and solve highly complex problems.
  • Extensive experience working with large datasets using scripting languages (e.g., Python, R, or Matlab).
  • Excellent communication skills – ability to articulate complex technical concepts and research findings to diverse audiences.
Bonus points for:
  • Experience with SQL and big data technologies (e.g., Spark).
  • Deep familiarity with machine learning concepts and practice.
  • Risk / Intelligence experience

Trust is backed by data – Forter is a recipient of over 10 workplace and innovation awards, including:

  • Great Place to Work Certification (2021, 2022, 2023)
  • Fortune’s Best Workplaces in NYC (2022, 2023)
  • Forbes Cloud 100 (2021, 2022, 2023)
  • #3 on Fast Company’s list of “Most Innovative Finance Companies”
  • Anti-Fraud Solution of the Year at the Payments Awards
  • SAP Pinnacle Awards “New Partner Application Award” (2023)
  • Fintech Breakthrough Awards – Best Fraud Prevention Platform (2023)


Yesterday

Unity Staff Data AI Engineer Israel, Tel-Aviv District, Tel-Aviv

Description:
SuperSonic is hiring a Staff Data AI Lead to head our AI initiatives. As a Staff Data AI Lead, you will be responsible for leading the SuperSonic organization's AI integration efforts and serving as the bridge between advanced AI technologies and our business needs: implementing cutting-edge Data & AI technologies, creating AI-driven strategies, and integrating innovative AI solutions across multiple platforms.
What you'll be doing
  • Develop and execute AI strategies aligned with business objectives
  • Advise leadership on AI capabilities and potential applications
  • Guide teams in adopting AI tools and methodologies
  • Ensure ethical and efficient implementation of AI technologies
  • Design and oversee AI-driven process improvements
  • Collaborate with various departments to identify AI opportunities
  • Stay current with the latest AI trends and advancements
  • Conduct AI-related training and workshops for staff
  • Manage AI projects from conception to implementation
  • Evaluate and recommend AI tools and platforms
  • Lead a team of AI engineers
What we're looking for
  • Deep understanding of AI technologies, including large language models
  • Expertise in prompt engineering and AI-powered automation
  • Proficiency with AI tools such as ChatGPT, Claude, Midjourney, and Copilot
  • Knowledge of AI ethics and regulatory considerations
  • Strong problem-solving and analytical skills
  • Proficiency with Python or TypeScript for building AI workflows and data pipelines
  • Excellent communication and leadership abilities
  • Ability to translate complex AI concepts for non-technical audiences
  • Experience in project management and cross-functional collaboration
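Prompt engineering at the workflow level usually means reusable, validated templates rather than ad-hoc strings. A minimal sketch under that assumption — no real LLM API is called, and the field names are invented for illustration:

```python
# Hedged sketch of a prompt-engineering building block: a reusable
# prompt template whose required placeholders are validated before any
# model call would happen. Field names here are hypothetical.
import string

class PromptTemplate:
    def __init__(self, template: str):
        self.template = template
        # Collect the {placeholder} names declared in the template.
        self.fields = {name for _, name, _, _ in
                       string.Formatter().parse(template) if name}

    def render(self, **kwargs) -> str:
        """Fill the template, failing loudly if any field is missing."""
        missing = self.fields - kwargs.keys()
        if missing:
            raise ValueError(f"missing fields: {sorted(missing)}")
        return self.template.format(**kwargs)

if __name__ == "__main__":
    summarize = PromptTemplate(
        "Summarize the following {doc_type} in {n_sentences} sentences:\n{text}"
    )
    print(summarize.render(doc_type="report", n_sentences=2, text="..."))
```

Validating inputs before the call is what turns a one-off prompt into an automatable workflow step.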
You might also have
  • Advanced degree in Computer Science, AI, or related field
  • Previous experience in AI implementation within an organizational setting
  • Certifications in relevant AI technologies or platforms
  • Familiarity with no-code AI application development
  • Bachelor's degree in Computer Science, AI, or related field
Additional information
  • Relocation support is not available for this position.
  • Work visa/immigration sponsorship is not available for this position.

This position requires the incumbent to have a sufficient knowledge of English to have professional verbal and written exchanges in this language since the performance of the duties related to this position requires frequent and regular communication with colleagues and partners located worldwide and whose common language is English.



22.11.2025

Unity Data Platform Engineering Lead Israel, Tel-Aviv District, Tel-Aviv

Description:

Unify online/offline for features: Drive Flink adoption and patterns that keep features consistent and low-latency for experimentation and production.

Make self-serve real: Build golden paths, templates, and guardrails so product/analytics/DS engineers can move fast safely.

Run multi-tenant compute efficiently: EMR on EKS powered by Karpenter on Spot instances; right-size Trino/Spark/Druid for performance and cost.

Cross-cloud interoperability: BigQuery + BigLake/Iceberg interop where it makes sense (analytics, experimentation, partnership).

What you'll be doing
  • Leading a senior Data Platform team: setting clear objectives, unblocking execution, and raising the engineering bar.
  • Owning SLOs, on-call, incident response, and postmortems for core data services.
  • Designing and operating EMR on EKS capacity profiles, autoscaling policies, and multi-tenant isolation.
  • Tuning Trino (memory/spill, CBO, catalogs), Spark/Structured Streaming jobs, and Druid ingestion/compaction for sub-second analytics.
  • Extending Flink patterns for the feature platform (state backends, checkpointing, watermarks, backfills).
  • Driving FinOps work: CUR-based attribution, S3 Inventory-driven retention/compaction, Reservations/Savings Plans strategy, OpenCost visibility.
  • Partnering with product engineering, analytics, and data science & ML engineers on roadmap, schema evolution, and data product SLAs.
  • Leveling up observability (Prometheus/VictoriaMetrics/Grafana), data quality checks, and platform self-service tooling.
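The FinOps bullet above mentions CUR-based cost attribution. A hedged sketch of the underlying "showback" rollup — team tags and hourly rates here are invented stand-ins for real billing data:

```python
# Hedged sketch of FinOps cost attribution ("showback"): rolling raw
# usage records up to per-team cost, the kind of report that CUR-based
# attribution produces. Team names and rates are illustrative.
from collections import defaultdict

def showback(records: list[dict]) -> dict[str, float]:
    """Sum cost per 'team' tag; untagged usage goes to 'unallocated'."""
    totals: dict[str, float] = defaultdict(float)
    for rec in records:
        cost = rec["usage_hours"] * rec["rate_per_hour"]
        totals[rec.get("team", "unallocated")] += cost
    return dict(totals)

if __name__ == "__main__":
    usage = [
        {"team": "analytics", "usage_hours": 100, "rate_per_hour": 0.25},
        {"team": "ml",        "usage_hours":  40, "rate_per_hour": 1.50},
        {"usage_hours": 10, "rate_per_hour": 0.25},  # missing team tag
    ]
    print(showback(usage))  # {'analytics': 25.0, 'ml': 60.0, 'unallocated': 2.5}
```

Surfacing the "unallocated" bucket is usually the first win: it quantifies how much spend lacks an owner.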
What we're looking for
  • 2+ years leading engineers (team lead or manager) building/operating large-scale data platforms; 5+ years total in Data Infrastructure/DataOps roles.
  • Proven ownership of cloud-native data platforms on AWS: S3, EMR (preferably EMR on EKS), IAM, Glue/Data Catalog, Athena.
  • Production experience with Apache Iceberg (schema evolution, compaction, retention, metadata ops) and columnar formats (Parquet/Avro).
  • Hands-on depth in at least two of: Trino/Presto, Apache Spark/Structured Streaming, Apache Druid, Apache Flink.
  • Strong conceptual understanding of Kubernetes (EKS), including autoscaling, isolation, quotas, and observability.
  • Strong SQL skills and extensive experience with performance tuning, with solid proficiency in Python/Java.
  • Solid understanding of Kafka concepts; hands-on experience is a plus.
  • Experience running on-call for data platforms and driving measurable SLO-based improvements.
You might also have
  • Experience building feature platforms (feature definitions, materialization, serving, and online/offline consistency).
  • Airflow (or similar) at scale; Argo experience is a plus.
  • Familiarity with BigQuery (and ideally BigLake/Iceberg interop) and operational DBs like Aurora MySQL.
  • Experience with Clickhouse / Snowflake / Databricks / Starrocks.
  • FinOps background (cost attribution/showback, Spot strategies).
  • Data quality, lineage, and cataloging practices in large orgs.
  • IaC (Terraform/CloudFormation)
Additional information
  • Relocation support is not available for this position.
  • Work visa/immigration sponsorship is not available for this position.

This position requires the incumbent to have a sufficient knowledge of English to have professional verbal and written exchanges in this language since the performance of the duties related to this position requires frequent and regular communication with colleagues and partners located worldwide and whose common language is English.



21.11.2025

Buildots Sales Operations Specialist Israel, Tel Aviv District, Tel Aviv-Yafo

Description:

About the Role:

Buildots is seeking a highly experienced and technical Sales Operations Specialist to optimize and enhance our Salesforce platform. This role is crucial for translating complex business requirements into efficient, scalable technical solutions and maintaining seamless data flow across our integrated applications.


System Design & Process Optimization:

  • Collaborate with business stakeholders to gather, analyze, and document complex business requirements.
  • Design, recommend, and implement scalable, best-practice Salesforce solutions to streamline business processes.
  • Serve as the platform expert, advising on how to best leverage Salesforce capabilities for maximum efficiency.

Configuration, Automation, and Customization:

  • Design and implement advanced automation solutions.
  • Develop custom objects, fields, formula fields, page layouts, record types, and validation rules.
  • Manage sandbox environments, data deployments, and the release management process.
  • Perform all standard administrative tasks including managing user roles, profiles, permission sets, sharing rules, and security settings.

Integration and Data Management:

  • Design, configure, and maintain robust integrations between Salesforce and external business applications such as Slack, Outreach, and Hubspot.
  • Develop and manage complex data integrations using dedicated iPaaS tools, with specific experience in Workato.
  • Work with APIs and integration tools to ensure reliable and secure data flow between systems.

Quality Assurance (QA) and Support:

  • Develop and execute detailed test plans for all new features, configurations, and integrations to ensure system stability.
  • Provide advanced-level support and troubleshooting for complex technical issues, serving as a primary escalation point.
  • Create and maintain detailed system documentation, including design specifications and process maps.

Requirements:

  • Experience: 2 years of hands-on experience as a Salesforce Administrator or Systems Administrator, managing an enterprise-level Salesforce instance.
  • Automation Expertise: Expert proficiency in building complex automation using Salesforce Flow.
  • Integration Tool Experience: Proven, hands-on experience with the Workato integration platform, or similar iPaaS tools, to connect Salesforce with other applications.
  • Technical Skills: Proficiency in configuring the Salesforce platform, including declarative tools and the data model.
  • Custom Code: Familiarity with Apex, Visualforce, and LWC sufficient for troubleshooting, performing minor modifications, and collaborating effectively with developers.
  • Application Integration: Demonstrated success in integrating Salesforce with sales, marketing, and communication platforms like Outreach, Hubspot, and Slack.
  • Data Management: Strong understanding of data security, data modeling, and governance.
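In Salesforce, validation rules are declarative configuration, but the logic they encode is easy to illustrate in code. A hedged sketch of that logic — the field names (Amount, Stage) and conditions are illustrative, not from Buildots' actual org:

```python
# Hedged sketch of what a Salesforce validation rule expresses: block a
# record save when a condition holds. In Salesforce this lives in
# declarative config, not code; field names here are illustrative.

def validate_opportunity(record: dict) -> list[str]:
    """Return error messages; an empty list means the save may proceed."""
    errors = []
    if record.get("Stage") == "Closed Won" and not record.get("Amount"):
        errors.append("Amount is required when Stage is Closed Won")
    if record.get("Amount", 0) < 0:
        errors.append("Amount cannot be negative")
    return errors

if __name__ == "__main__":
    print(validate_opportunity({"Stage": "Closed Won"}))
    print(validate_opportunity({"Stage": "Prospecting", "Amount": 100}))  # []
```

Keeping such rules small and message-driven mirrors how an admin documents them in the org.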

If you don’t meet every single requirement, we still encourage you to apply. Your unique experiences, skills, and passion may be exactly what we’re looking for.

*By submitting your application, you agree that Buildots will process your personal data in accordance with .



21.11.2025

Rapyd Revenue Operations Specialist Israel

Description:


Working with cutting-edge technologies like Python, Node.js, and AWS, you will build innovative, low-latency, and highly available systems that form the core of our business. In this role, you will be instrumental in shaping our technical architecture and driving excellence in a fast-paced, evolving environment.

Key Responsibilities:

  • Architecture & Development: Lead the design and development of robust, scalable, and secure backend systems and APIs, primarily using Python.
  • Cloud Infrastructure: Architect, build, and maintain high-scale solutions on modern cloud platforms (specifically AWS) to ensure reliability, scalability, and performance.
  • System Performance: Design and implement services optimized for low-latency processing, high availability, and fault tolerance for mission-critical financial applications.
  • Technical Leadership: Provide technical leadership and mentorship to junior and mid-level developers, fostering best practices in code quality, testing, and maintainability.
  • Product Focus: Develop and enhance Compliance and Risk management products for the Fintech sector, ensuring they meet strict regulatory requirements and industry standards.
  • Optimization & Reliability: Take ownership of system performance, scalability, and reliability. Proactively identify and resolve bottlenecks in code and architecture.
  • Collaboration: Work closely with cross-functional teams, including frontend developers, DevOps engineers, and product managers, to deliver seamless end-to-end solutions.
  • Innovation: Stay current with emerging technologies and industry trends, driving the adoption of new tools and frameworks to solve complex challenges effectively.
Requirements
  • Experience: 5-8 years of professional backend development experience.
  • Python Proficiency: Expert-level knowledge of Python and its ecosystem.
  • Frameworks: Strong experience with web frameworks such as Django, Flask, Sanic, or FastAPI.
  • Cloud Computing: Proven experience designing, deploying, and managing applications on AWS (e.g., EC2, S3, Lambda, RDS, ECS/EKS).
  • Databases: Proficiency with both relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB, DynamoDB) databases.
  • Architecture: Solid understanding of microservices architecture, RESTful APIs, and event-driven systems.
  • Mentorship: Demonstrated ability to mentor and guide other engineers.
  • Problem Solving: Excellent analytical and problem-solving skills with a strong sense of ownership.
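Event-driven financial systems must tolerate duplicate message delivery. A hedged sketch of an idempotent consumer, the standard pattern for this — an in-memory set stands in for the durable deduplication store a production system would use:

```python
# Hedged sketch of an idempotent event consumer for an event-driven
# system: each event ID is processed at most once, so redelivered
# messages are safe. The in-memory 'seen' set is a stand-in for a
# durable store (e.g. a database table keyed by event ID).

class IdempotentConsumer:
    def __init__(self, handler):
        self.handler = handler
        self.seen: set[str] = set()

    def consume(self, event: dict) -> bool:
        """Process the event once per 'id'; return True if handled now."""
        event_id = event["id"]
        if event_id in self.seen:
            return False  # duplicate delivery, skip
        self.handler(event)
        self.seen.add(event_id)
        return True

if __name__ == "__main__":
    processed = []
    consumer = IdempotentConsumer(processed.append)
    consumer.consume({"id": "evt-1", "amount": 100})
    consumer.consume({"id": "evt-1", "amount": 100})  # duplicate
    print(len(processed))  # 1
```

For payments, recording the event ID and the side effect in the same transaction is what makes the pattern safe under crashes.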

Preferred Qualifications (Nice to Have)

  • Experience with Node.js.
  • Previous experience in the Fintech, RegTech (Regulatory Technology), or financial services industry.
  • Hands-on experience with containerization technologies like Docker and Kubernetes.


A wide variety of jobs like Data Center Operations Trainee. Finding a job at top companies no longer has to be a dream. Expoint helps you find sought-after positions across the world's leading countries, so you can land a challenging role in a country you'll enjoy working in.