
Welcome to LHH Israel Network

On this board you can browse our network of companies that will assist you in finding new job opportunities. The board automatically pulls jobs from their career sites.
Found a suitable job? Send the job link along with your resume to jobs@lhh.co.il, and we will make sure it reaches the right person in the organization.
Please do not apply on this platform.

Before sending your resume, please check how well your CV matches the role requirements using the LHH AI CV Optimizer.

AI Embedded Graph Compiler Engineer

CEVA


Posted on Feb 24, 2026


  • Ra'anana, Israel (IL)
  • Intermediate
  • Global

Description

About the AI Division

The AI Division is a unique and dedicated group within Ceva, driving innovation in Machine Learning and Generative AI architectures for edge devices and cloud inference.

Our R&D domains span Neural Network Processors (NPU), Vision DSPs, and advanced AI algorithms for applications across smartphones, tablets, automotive, surveillance cameras and many more edge AI systems.

We combine cutting-edge hardware IP design with embedded software and system-level solutions, enabling the next generation of intelligent and energy-efficient devices.

About the Role:

In this role, you will be a key contributor to the design and implementation of Ceva’s AI Graph Compiler software stack for Neural Processing Units (NPUs). You will take part in defining software architecture, implementing performance-critical components, and enabling efficient execution of advanced neural networks under tight power, memory, and latency constraints.

You will work closely with hardware and system architects and with software and hardware engineers, influencing both software and hardware decisions. You will design and implement major parts of Ceva's NPU embedded solutions, actively promoting Ceva AI capabilities to customers.

What will you do:

  • Own and design key components of the AI Graph Compiler software stack for NPU-based systems.
  • Optimize inference performance (latency, throughput, memory footprint, power) for edge deployments.
  • Collaborate on HW–SW co-design, influencing NPU architecture.
  • Support IP evaluations and silicon bring-up, root-cause complex HW/SW issues, and influence development methodologies.
  • Mentor junior engineers and contribute to technical best practices.

Requirements

  • 3 years of experience in building high-quality embedded software using C/C++.
  • BSc/MSc in Computer Science, Electrical Engineering, or equivalent.
  • Proven experience developing and maintaining complex embedded systems, including multi-component software stacks, tight HW/SW integration, and system-level debugging.
  • Experience in designing and implementing software based on product & hardware specifications.
  • Experience working under tight memory, power, and real-time constraints.
  • Excellent interpersonal and communication skills, with a proven ability to work well in a team.

Advantages:

  • Experience in data-flow optimization using profiling tools.
  • Interaction with AI compilers / graph optimizers.
  • Familiarity with fixed-point / quantized inference.
  • Familiarity with neural network open-source frameworks such as PyTorch and TensorFlow.
  • Proficiency in Python coding.