TFLite in OpenMV

TensorFlow Lite

TensorFlow is a free and open-source machine learning library. TensorFlow Lite is a lightweight version of TensorFlow designed for embedded and mobile devices. It uses a custom memory allocator to reduce execution latency and memory load, and it stores models in the FlatBuffers file format. TensorFlow Lite takes existing models and converts them into an optimized version in the form of a .tflite file.
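
To illustrate the conversion step, here is a minimal sketch using the standard TensorFlow Lite converter API with full-integer quantization. The model file name and the 96x96 grayscale input shape are hypothetical placeholders; adapt them to your own trained model.

```python
# Minimal sketch: converting a trained Keras model into a quantized
# .tflite file. "my_model.h5" and the input shape are placeholders.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("my_model.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Representative data lets the converter calibrate full-integer
# quantization, which microcontroller targets like the OpenMV Cam expect.
def representative_data():
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("my_model.tflite", "wb") as f:
    f.write(tflite_model)
```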

The tf module is capable of executing quantized TensorFlow Lite models on the OpenMV Cam. The final .tflite model can be loaded and run directly by your OpenMV Cam. The model and the model's required scratch RAM must fit within the available frame buffer stack RAM on your OpenMV Cam. Alternatively, you can load a model onto the MicroPython heap or into the OpenMV Cam frame buffer; however, this significantly limits the model size on all OpenMV Cams. A minimal sketch of loading and running a model this way is shown below.
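
The following sketch assumes the OpenMV tf module API (tf.load and classify); the model file name and label list are hypothetical placeholders for your own converted model.

```python
# Minimal sketch: running a quantized .tflite classifier on the OpenMV Cam.
# "person_detection.tflite" and the labels list are placeholders.
import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)  # match your model's input format
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

# load_to_fb=True places the model in the frame buffer stack instead of
# the MicroPython heap, leaving room for larger models.
net = tf.load("person_detection.tflite", load_to_fb=True)
labels = ["no_person", "person"]

while True:
    img = sensor.snapshot()
    for obj in net.classify(img):
        scores = obj.output()               # one confidence per label
        best = scores.index(max(scores))
        print(labels[best], scores[best])
```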

Reference:

https://github.com/openmv/tensorflow/blob/master/tensorflow/lite/micro/examples/person_detection/training_a_model.md