
Welcome to LHH Israel Network

On this board you can browse our network of companies to help you find new job opportunities. The board automatically pulls jobs from their career sites.
Found a suitable job? Email the job link along with your resume to: jobs@lhh.co.il and we will make sure it reaches the right person in the organization.
Please do not apply on this platform.

Before sending your resume, please check how well your CV matches the role requirements using the LHH AI CV Optimizer.

AI Graph Compiler Engineer

CEVA

Posted on Jan 28, 2026


  • Ra'anana, Israel (IL)
  • Global

Description

About the AI Division

The AI Division is a unique and dedicated group within Ceva, driving innovation in Machine Learning and Generative AI architectures for edge devices and cloud inference.

Our R&D domains span Neural Network Processors (NPUs), Vision DSPs, and advanced AI algorithms for applications across smartphones, tablets, automotive, surveillance cameras, and other edge AI systems.

We combine cutting-edge hardware IP design with embedded software and system-level solutions, enabling the next generation of intelligent and energy-efficient devices.

About the Role:

The AI Graph Compiler Engineer will design and develop next-generation graph compiler technologies enabling efficient execution of advanced AI models on Ceva Neural Processing Units (NPUs) used in edge and embedded AI devices.

You will work at the intersection of AI frameworks, compiler infrastructure, and hardware acceleration, helping translate high-level AI models into highly optimized execution on Ceva NPUs.

What will you do:

  • Develop and enhance AI graph compiler components targeting Ceva NPU architectures.
  • Implement graph-level optimizations such as operator fusion, scheduling, memory planning, and layout transformations.
  • Participate in lowering models from AI frameworks (e.g., PyTorch, ONNX, TensorFlow) into Ceva NPU–optimized representations.
  • Contribute to compiler passes focused on performance, memory efficiency, and numerical correctness.
  • Collaborate closely with hardware, runtime, and AI framework teams to achieve optimal end-to-end performance.
  • Analyze performance bottlenecks and assist in compiler-based optimizations.
  • Debug and resolve issues across compiler, runtime, and hardware layers.
  • Support testing, validation, and documentation of compiler features.
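To give a flavor of one of the responsibilities above, here is a minimal toy sketch (not Ceva's actual compiler; all names such as `Node` and `fuse_conv_relu` are hypothetical) of a graph-level operator-fusion pass: adjacent Conv and ReLU nodes in a linear operator chain are merged into a single fused node, which on real hardware reduces kernel launches and intermediate memory traffic.

```python
# Hypothetical sketch of a graph-level operator-fusion pass.
# Models the graph as a simple linear chain of operator nodes.

from dataclasses import dataclass

@dataclass
class Node:
    op: str    # operator name, e.g. "Conv", "ReLU", "Add"
    name: str  # unique node identifier

def fuse_conv_relu(graph):
    """Replace each Conv node immediately followed by a ReLU
    with a single fused ConvReLU node."""
    fused, i = [], 0
    while i < len(graph):
        cur = graph[i]
        nxt = graph[i + 1] if i + 1 < len(graph) else None
        if cur.op == "Conv" and nxt is not None and nxt.op == "ReLU":
            # Merge the pair into one fused operator node.
            fused.append(Node("ConvReLU", cur.name + "+" + nxt.name))
            i += 2  # consume both nodes
        else:
            fused.append(cur)
            i += 1
    return fused

graph = [Node("Conv", "c1"), Node("ReLU", "r1"), Node("Add", "a1")]
print([n.op for n in fuse_conv_relu(graph)])  # ['ConvReLU', 'Add']
```

Production graph compilers apply the same idea pattern-matching over a full dataflow graph (not just a chain), alongside scheduling, memory planning, and layout transformations.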

Requirements

  • BSc or MSc in Computer Science, Electrical Engineering, or a related field
  • Proficiency in C++ and Python.
  • 2–5 years of experience in systems software, AI software, embedded software, or other performance-critical development.
  • Strong analytical skills and solid software fundamentals, including data structures, debugging, and performance optimization.
  • Excellent interpersonal skills, flexibility, and a proactive “Can Do” attitude.

Advantages:

  • Experience with compiler or IR-based systems (e.g., LLVM, MLIR, domain-specific compilers).
  • Experience with AI inference engines, runtimes, or model deployment pipelines.
  • Familiarity with AI accelerators such as NPUs, GPUs, DSPs, or ASICs.
  • Knowledge of reduced-precision numerical formats (FP16, BF16, INT8).