I create smart machines that free people from boring tasks and let them focus on what they do best. For the past ten years, I have built machines that sense, think, and act in places like factories, farms, and homes, combining expertise in machine vision, machine learning, the Internet of Things, embedded systems, and mechatronics.
Integrate a Linux kernel space driver for a multimedia device in a conference system.
Design algorithms and demonstrators for a smart image sensor. Design hardware-friendly HDR algorithms. Analyze AR/VR applications and machine learning models on resource-constrained platforms. Model camera systems. Develop Linux kernel space drivers for image sensors.
Help clients create innovative machine vision systems.
Bring up smart home device prototypes. Build edge computing platforms supporting deep learning applications in the vision domain. Build an intelligent operating system for homes.
Commercial product development, from prototyping to mass production. Electronics, firmware, and software for wireless Internet-of-Things devices. Optimization of neural networks for motion analysis on battery-powered devices. Edge computing platforms supporting deep learning applications. Building the technology behind IDA.
Implementation of firmware stack for Intel Image Signal Processors (ISP), used in various products including Intel Apollo Lake.
Implementation of a proprietary computer vision algorithm on field-programmable gate arrays (FPGAs) in 4 months.
Design and implementation of high-speed vision processing systems on field-programmable gate arrays (FPGAs), and high-speed high-precision vision-in-the-loop mechatronic systems.
Design, implementation, and modeling of high-speed and high-precision vision-in-the-loop mechatronic systems. Thesis defended on 26 May 2020.
Thesis on graphics processing units (GPUs) architecture and code optimization.
With a half-year exchange to the Hong Kong University of Science and Technology.
Summary: build a prototype high-speed, high-precision machine vision system.
Used technologies: C, assembly, C++, Python, Matlab, CUDA, OpenCL, OpenMP, SIMD, FPGA, RTL design, VHDL/Verilog, High-Level Synthesis, ADC/DAC interface, LVDS interface, CameraLink, high-speed (1000 fps) image processing, image processing algorithms, OpenCV, mechatronics, motion control, feedforward and feedback controller design and tuning.
Summary: implement a proprietary single-image 2D-to-3D conversion algorithm on FPGA.
Used technologies: RTL design, VHDL/Verilog, FPGA, C, image processing.
Summary: image processing on custom circuits and vector processors.
Used technologies: C, microcontrollers, DSP, SIMD.
Summary: perform motion analysis of animals on IoT devices and edge computers.
Used technologies: C, C++, Python, microcontroller (Arm Cortex-M), IoT wireless protocols, RF hardware, RFID, edge computing, deep learning, TensorFlow, cloud platform (Azure/Google).
Summary: develop smart home devices and on-premises edge computing infrastructure.
Used technologies: C, Python, OpenCV, deep learning, on-premises edge computing, cloud platform (AWS).
Summary: prototyping of a next-generation image sensor with embedded vision processing.
Used technologies: C, C++, Python, Python GUI (Qt), bare-metal, RTOS (Mbed), Linux, Docker, Linux kernel space driver, microcontrollers (Arm Cortex-M), SIMD, RISC-V, microNPU (Arm U55), TensorFlow (Lite, Lite for Microcontrollers), Caffe, Arm CMSIS-NN, deep learning, neural network quantization, OpenCV, object/shape detection, HDR.
Summary: model the illuminator, optics, image sensor, and scene for AR/VR applications.
Used technologies: C, C++, Python, Matlab, bare-metal, microNPU (Arm U55), TensorFlow (Lite, Lite for Microcontrollers), deep learning, neural network quantization, OpenCV, AR/VR applications.
Summary: bring up a new variant of a multimedia device in a conference system.
Used technologies: C, Linux kernel space driver, I2C, oscilloscope, new device bring-up.
In this project, I implemented an FPGA-based vision system, integrated it into a motion stage, and modeled and controlled the resulting visual servo system. Experimental results demonstrate the feasibility of using visual feedback for precision motion control. Below is a demo of the 1000 frames-per-second vision-in-the-loop system. Results of this project were presented at ICT.OPEN 2012 (slides).
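The control cycle of such a vision-in-the-loop system can be sketched in a few lines: each camera frame yields a position measurement, which feeds a control law that actuates the stage. The sketch below is illustrative only; the plant model, gains, and function names are hypothetical, and the real system ran the equivalent pipeline on an FPGA at 1000 fps.

```python
# Minimal sketch of one vision-in-the-loop control cycle per frame.
# All names, the 1D unit-mass stage model, and the PD gains are
# hypothetical stand-ins for the FPGA-based system described above.

def measure_position(true_pos):
    """Stand-in for the vision pipeline: the real system extracts the
    stage position from a camera frame with FPGA image processing."""
    return true_pos  # ideal, noise-free sensor for this sketch


def pd_controller(error, prev_error, dt, kp=50.0, kd=5.0):
    """Simple proportional-derivative control law."""
    return kp * error + kd * (error - prev_error) / dt


def run_loop(setpoint=1.0, fps=1000, steps=3000):
    """Run one control update per camera frame (dt = 1/fps)."""
    dt = 1.0 / fps
    pos, vel = 0.0, 0.0           # stage state: 1D double integrator
    prev_error = setpoint - pos
    for _ in range(steps):
        error = setpoint - measure_position(pos)   # vision feedback
        force = pd_controller(error, prev_error, dt)
        prev_error = error
        vel += force * dt         # unit-mass stage dynamics
        pos += vel * dt
    return pos
```

With these (hypothetical) gains the stage settles close to the setpoint within a few thousand frames; in practice the frame processing delay and measurement accuracy of the vision pipeline limit the achievable bandwidth, which is exactly the trade-off the thesis methods address.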
This thesis provides methods to design high-speed vision systems, to evaluate the delay and accuracy of vision algorithms, to design control laws that compensate for these constraints, and to identify which of a large number of design options is most suitable for a specific use case. The thesis was defended on 26 May 2020.