About Me


I create smart machines that free people from boring tasks and let them focus on what they do best. For the past ten years, I have built machines that can sense, think, and act in places like factories, farms, and homes. To create these smart machines, I combine expertise in machine vision, machine learning, the Internet of Things, embedded systems, and mechatronics.

Professional Profile

  • Commercial product development, from prototyping to mass production.
  • Optimization of computer vision and machine learning algorithms on custom hardware.
  • Internet-of-Things devices and edge/cloud platforms for machine learning applications.
  • Implementation of electronics systems, from chip design to firmware/software stack.
  • Integration of electronics systems into mechatronics and robotics systems.

Experience


  • January 2024 - Current

    AM-Flow

    Computer Vision Engineer

    Automate a 3D printing factory with computer vision.

  • July 2020 - December 2023

    ams OSRAM

    Machine Vision Consultant via TMC

    Design algorithms and demonstrators for a smart image sensor. Design hardware-friendly HDR algorithms. Analyze AR/VR applications and machine learning models on resource-constrained platforms. Model camera systems. Develop Linux kernel space drivers for Mira image sensors. Bring up a new variant of an intelligent LED chip.

  • April 2023 - August 2023

    Bosch Security Systems

    Embedded Software Engineer via TMC

    Integrate a Linux kernel space driver for a multimedia device in a conference system.

  • July 2020 - December 2023

    TMC

    Machine Vision Expert

    Help clients create innovative machine vision systems.

  • September 2019 - May 2020

    CASPAR.AI

    Hardware and Software Engineer

    Bring up smart home device prototypes. Build edge computing platforms supporting deep learning applications in the vision domain. Build an intelligent operating system for homes.

  • July 2016 - August 2019

    Connecterra

    Hardware and Devices Engineer

    Commercial product development, from prototyping to mass production. Electronics, firmware, and software for wireless Internet-of-Things devices. Optimization of neural networks for motion analysis on battery-powered devices. Edge computing platforms supporting deep learning applications. Building the technology behind IDA.

  • August 2014 - June 2016

    Intel

    Firmware Engineer

    Implementation of the firmware stack for Intel Image Signal Processors (ISPs), used in various products including Intel Apollo Lake.

  • April 2014 - July 2014

    Delft University of Technology

    Researcher on Embedded Vision

    Implementation of a proprietary computer vision algorithm on field-programmable gate arrays (FPGAs) in 4 months.

  • September 2009 - March 2014

    Eindhoven University of Technology

    PhD Researcher on Embedded Vision Architecture

    Design and implementation of high-speed vision processing systems on field-programmable gate arrays (FPGAs), and high-speed high-precision vision-in-the-loop mechatronic systems.

Education


  • 2009 - 2020

    Eindhoven University of Technology

    PhD Degree in Mechanical Engineering

    Design, implementation, and modeling of high-speed and high-precision vision-in-the-loop mechatronic systems. Thesis defended on 26 May 2020.

  • 2006 - 2009

    Eindhoven University of Technology

    Master's Degree in Embedded Systems

    Thesis on graphics processing unit (GPU) architecture and code optimization.

  • 2002 - 2006

    Harbin Institute of Technology

    Bachelor's Degree in Electronic Engineering

    Including a half-year exchange at the Hong Kong University of Science and Technology.

Portfolio


Project 1: Embedded Vision Architecture at TU Eindhoven

Summary: build a prototype high-speed, high-precision machine vision system.

  • Design of vision algorithms and electronic systems for 1000 frames-per-second vision processing on FPGA.
  • Implementation of vision-based closed-loop control systems, a.k.a. "vision-in-the-loop" systems, for precision motion control (see the material below).
  • Cooperation with multiple industrial partners and a multidisciplinary team consisting of electronic engineers, computer vision scientists, and control engineers.

Used technologies: C, assembly, C++, Python, Matlab, CUDA, OpenCL, OpenMP, SIMD, FPGA, RTL design, VHDL/Verilog, High-Level Synthesis, ADC/DAC interface, LVDS interface, CameraLink, high-speed (1000 fps) image processing, image processing algorithms, OpenCV, mechatronics, motion control, feedforward and feedback controller design and tuning.


Figure 1: 1000 fps vision-in-the-loop system.

Project 2: Real-time 2D-to-3D conversion on FPGAs at TU Delft

Summary: implement a proprietary single-image 2D-to-3D conversion algorithm on FPGA.

  • Simplify the 2D-to-3D conversion algorithm designed by image scientists so that it can be efficiently implemented on FPGAs.
  • Rapid prototyping of the vision algorithm within 4 months for demonstration of feasibility.

Used technologies: RTL design, VHDL/Verilog, FPGA, C, image processing.


Figure 2: FPGA system running 1080p 2D-to-3D conversion at 30 fps.

Project 3: Image Signal Processor at Intel

Summary: image processing on custom circuits and vector processors.

  • Implementation of the firmware stack for customized circuits and image signal processors.
  • Support Windows and Android driver teams in bringing up and troubleshooting device features.
  • Cooperate with image algorithm designers on the implementation and interfaces of the firmware.

Used technologies: C, microcontrollers, DSP, SIMD.


Figure 3: Image signal processor of Intel (top right part).

Project 4: Deep Learning on IoT Sensors and Edge Computers at Connecterra

Summary: perform motion analysis of animals on IoT devices and edge computers.

  • Commercial product development, from prototyping to mass production.
  • Electronics, firmware, and software for wireless Internet-of-Things devices at industrial scale.
  • Optimization of machine learning algorithms on battery-powered devices.
  • Building the technology behind IDA: Intelligent Dairy farmer’s Assistant.

Used technologies: C, C++, Python, microcontrollers (Arm Cortex-M), IoT wireless protocols, RF hardware, RFID, edge computing, deep learning, TensorFlow, cloud platforms (Azure/Google).


Figure 4: IDA sensor. More info on http://ida.io.

Project 5: Intelligent Smart Building at CASPAR.AI

Summary: develop smart home devices and on-premises edge computing infrastructure.

  • Prototyping and production of smart sensor devices for smart homes.
  • Build infrastructure to support machine learning on edge computers.

Used technologies: C, Python, OpenCV, deep learning, on-premises edge computing, cloud platform (AWS).


Figure 5: Caspar smart home device.

Project 6: New Image Sensor and Compute Platform at ams OSRAM

Summary: prototyping of next generation image sensor and processing platform.

  • Design classic computer vision algorithms (detecting objects by color, shape, motion, etc.) on a resource-constrained embedded vision platform.
  • Design hardware-friendly High Dynamic Range (HDR) algorithms.
  • Benchmark and optimize deep learning models (object classification, human/face detection, etc.) on resource-constrained platforms.
  • Evaluate various off-the-shelf and custom-built hardware accelerators for computer vision and deep learning applications.
  • Develop Linux kernel space drivers for image sensors and bring up sensor features.

Used technologies: C, C++, Python, Python GUI (Qt), bare-metal, RTOS (Mbed), Linux, Docker, Linux kernel space drivers, microcontrollers (Arm Cortex-M), SIMD, RISC-V, microNPU (Arm Ethos-U55), TensorFlow (Lite/Micro), Caffe, Arm CMSIS-NN, deep learning, neural network quantization, OpenCV, object/shape detection, HDR.
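
Deploying models on an integer-only microNPU such as the Arm Ethos-U55 requires fully int8-quantized networks, which is commonly done with TensorFlow Lite post-training quantization. Below is a minimal, hypothetical sketch of that flow (not the project's actual code; the model file, input shape, and calibration data are placeholders):

```python
import numpy as np
import tensorflow as tf

def representative_data():
    # Yield a few calibration samples; shape must match the model input.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

# Convert a trained Keras model into a fully int8 TFLite model, as needed
# by integer-only accelerators such as the Arm Ethos-U55.
model = tf.keras.models.load_model("detector.h5")  # hypothetical model file
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("detector_int8.tflite", "wb") as f:
    f.write(tflite_model)
```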


Figure 6: ams OSRAM image sensor connected to an evaluation kit.

Project 7: End-to-end Modeling of Vision Systems for AR/VR Use Cases at ams OSRAM

Summary: model illuminator, optics, image sensors, and scene for AR/VR applications.

  • Integrate in-house models (in Matlab and Python) into a framework for camera system simulation.
  • Implement tools for design parameter optimization of camera system.
  • Benchmark and optimize deep learning models (object classification, human/face detection, etc.) on resource-constrained platforms.
  • Co-optimize vision algorithms and deep learning models with camera system.

Used technologies: C, C++, Python, Matlab, bare-metal, microNPU (Arm Ethos-U55), TensorFlow (Lite/Micro), deep learning, neural network quantization, OpenCV, AR/VR applications.
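
As a rough illustration of what design-parameter optimization against a camera-system model can look like (a hypothetical sketch, not the in-house framework), the example below grid-searches one camera parameter against a stand-in image-quality metric; in practice the models came from in-house Matlab/Python code and the metric was application-level, e.g. pupil-detection performance (see the ICSAI 2022 publication below):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_capture(exposure_ms, scene):
    # Toy stand-in for the camera system model: scale the scene by
    # exposure, add read noise, and clip to mimic sensor saturation.
    img = scene * exposure_ms / 10.0 + rng.normal(0.0, 0.01, scene.shape)
    return np.clip(img, 0.0, 1.0)

def quality_metric(img):
    # Placeholder metric: image contrast as a crude proxy for how
    # detectable a feature (e.g. a pupil) would be in the frame.
    return float(img.std())

scene = rng.random((64, 64))               # synthetic scene radiance
exposures = np.linspace(1.0, 20.0, 40)     # candidate design values (ms)
scores = [quality_metric(simulate_capture(e, scene)) for e in exposures]
best = exposures[int(np.argmax(scores))]
print(f"best exposure ~ {best:.1f} ms (score {max(scores):.3f})")
```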


Figure 7: End-to-end modeling framework for AR/VR applications.

Project 8: New Device Bring Up and Driver Integration at Bosch

Summary: bring up a new variant of a multimedia device in a conference system.

  • Customize the Linux kernel driver for new chips in the device.
  • Bring up and troubleshoot new device prototypes.

Used technologies: C, Linux kernel space drivers, I2C, oscilloscope, new device bring-up.


Figure 8: Current generation of Bosch DICENTIS multimedia device in conference system.

Project 9: New Chip Bring Up and Firmware Development at ams OSRAM

Summary: bring up a new variant of intelligent LED for automotive ambient lighting.

  • Bring up initial samples of the new chip, and develop firmware for new features.

Used technologies: C, Arm Cortex-M, bare-metal, SPI, I2C, oscilloscope, logic analyzer.


Figure 9: Current generation of intelligent LED OSIRE E3731i for automotive ambient lighting.

Research


PhD Project: Embedded Vision Architecture

In this project, I implemented an FPGA-based vision system, integrated it into a motion stage, and performed the modeling and control of the visual servo system. Experimental results demonstrate the feasibility of using visual feedback for precision motion control. Below is a demo of the 1000 frames-per-second vision-in-the-loop system. Results of this project were presented at ICT.OPEN 2012 (slides).
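
To make the structure of such a system concrete, here is a minimal, purely illustrative Python/OpenCV sketch of a single vision-in-the-loop iteration; the gain, setpoint, and blob-detection step are hypothetical, and the actual system ran the vision pipeline on an FPGA at 1000 fps with tuned feedforward and feedback controllers:

```python
import cv2
import numpy as np

KP = 0.5                            # proportional gain (hypothetical)
target = np.array([320.0, 240.0])   # desired image position in pixels

def control_step(frame_gray):
    # Measure the object position from one camera frame and return a
    # simple proportional correction toward the target position.
    _, mask = cv2.threshold(frame_gray, 128, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return np.zeros(2)          # object not visible: no correction
    measured = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    error = target - measured       # pixel error: setpoint minus measurement
    return KP * error               # correction, to be mapped to stage units

# Example with a synthetic frame containing one bright blob.
frame = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(frame, (300, 200), 10, 255, -1)
print(control_step(frame))
```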

PhD Thesis: Implementation, Modeling, and Exploration of Precision Visual Servo Systems

This thesis provides methods to design high-speed vision systems, to evaluate the delay and accuracy of vision algorithms, to design control laws that compensate for these constraints, and to determine which of a large number of design options is most suitable for a specific use case. The thesis was defended on 26 May 2020.

Publications

  • Gernot Fiala, Zhenyu Ye, Christian Steger, "TPDNet: A Tiny Pupil Detection Neural Network for Embedded Machine Learning Processor Arm Ethos-U55", to appear in Intelligent Systems Conference (IntelliSys) 2023. (preprint)
  • Gernot Fiala, Zhenyu Ye, Christian Steger, "Framework for Image Sensor Design Parameter Optimization for Pupil Detection", in International Conference on Systems and Informatics (ICSAI) 2022. (preprint, doi)
  • Gernot Fiala, Zhenyu Ye, Christian Steger, "Pupil Detection for Augmented and Virtual Reality based on Images with Reduced Bit Depths", in IEEE Sensors Applications Symposium (SAS) 2022. (preprint, doi)
  • Zhenyu Ye, Henk Corporaal, Pieter Jonker, Henk Nijmeijer, "Cross-domain Modeling and Optimization of High-speed Visual Servo Systems", in 15th International Conference on Control, Automation, Robotics and Vision (ICARCV) 2018. (preprint, slides.pdf, slides.pptx, doi)
  • Roel Pieters, Zhenyu Ye, Pieter Jonker, Henk Nijmeijer, "Direct Motion Planning for Vision-Based Control", in IEEE Transactions on Automation Science and Engineering, Oct. 2014. (preprint, doi)
  • Zhenyu Ye, Pieter Jonker, Henk Corporaal, Henk Nijmeijer, "High-Level Synthesis of Massively Parallel Vision Architectures for 100000 Frames-per-Second Visual Servo Control", in ICT.OPEN 2013. (poster)
  • Zhenyu Ye, Pieter Jonker, Henk Corporaal, Henk Nijmeijer, "Closed-Loop Evaluation of an Embedded Visual Servo System", in ICT.OPEN 2012. (presentation, poster)
  • Zhenyu Ye, Henk Corporaal, Pieter Jonker, "PhD Forum: A Cyber-Physical System Approach To Embedded Visual Servoing", in ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) 2011. (doi, preprint, bibtex, review)
  • Zhenyu Ye, Yifan He, Roel Pieters, Bart Mesman, Henk Corporaal, Pieter Jonker, "Demo: An Embedded Vision System for High Frame Rate Visual Servoing", in ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) 2011. (doi, preprint, bibtex, review, videos)
  • Yifan He, Zhenyu Ye, Dongrui She, Bart Mesman, Henk Corporaal, "Feasibility Analysis of Ultra High Frame Rate Visual Servoing on FPGA and SIMD Processor", in Advanced Concepts for Intelligent Vision Systems (ACIVS) 2011. (preprint, bibtex)
  • Zhenyu Ye, Yifan He, Roel Pieters, Bart Mesman, Henk Corporaal, Pieter Jonker, "Bottlenecks and Tradeoffs in High Frame Rate Visual Servoing: A Case Study", in IAPR Conference on Machine Vision Applications (MVA) 2011. (preprint, poster, bibtex, review)
  • Yu Pu, Yifan He, Zhenyu Ye, Sebastian Moreno Londono, Anteneh Alemu Abbo, Richard Kleihorst, Henk Corporaal, "From Xetal-II to Xetal-Pro: On the Road Towards An Ultra Low-Energy and High Throughput SIMD Processor", in IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), April 2011. (preprint, doi)
  • Yifan He, Yu Pu, Zhenyu Ye, Sebastian M. Moreno, Richard Kleihorst, Anteneh A. Abbo, Henk Corporaal, "Xetal-Pro: An Ultra-low Energy and High Throughput SIMD Processor", in Design Automation Conference (DAC) 2010. (preprint, doi) (HiPEAC Paper Award)
  • Zhenyu Ye, "Design Space Exploration for GPU-Based Architecture", Master thesis, TU Eindhoven, 2009. (preprint, archived record)

Teaching


Contact


zhenyu.z.ye@gmail.com

Amsterdam, Netherlands