AI Model Training via Edge Impulse

Edge Impulse

Edge AI is the development and deployment of artificial intelligence (AI) algorithms and programs on edge devices. It is a form of edge computing in which data is analyzed and processed close to where it is generated or collected. Edge AI contrasts with cloud-based AI, where data is transmitted across the internet and processed on a remote server.

In machine learning (ML), data is fed into the training process; for supervised learning, a ground-truth label accompanies each sample. The training algorithm automatically updates the parameters (also known as "weights") of the ML model. Most ML projects follow a similar flow: collect data, examine it, train a model, and deploy that model.

Before deployment, optimization can involve a number of processes that reduce the size and complexity of the ML model, such as pruning unimportant nodes from the neural network, quantizing operations to run more efficiently on low-end hardware, and compiling models for specialized hardware (e.g. GPUs and NPUs).
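
To make this flow concrete, the sketch below trains a tiny classifier on synthetic labeled data and then applies post-training quantization with the TensorFlow Lite converter so the model could run on a constrained edge device. The dataset shape, layer sizes, and file name are placeholders for illustration only, not part of the workshop material.

```python
import numpy as np
import tensorflow as tf

# Hypothetical placeholder data: 1000 windows of 3-axis accelerometer
# samples (30 timesteps each) with ground-truth labels for 3 classes.
x_train = np.random.rand(1000, 30, 3).astype("float32")
y_train = np.random.randint(0, 3, size=(1000,))

# A deliberately tiny classifier so it can fit on a microcontroller.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(30, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training: the algorithm updates the model's weights from the labeled samples.
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)

# Optimization: post-training quantization shrinks the model so it runs
# more efficiently on low-end edge hardware.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The snippet uses the simplest variant (dynamic-range quantization); full integer quantization, which many MCU accelerators require, additionally needs a representative dataset supplied to the converter.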

Edge Impulse is the leading edge AI platform for collecting data, training models, and deploying them to your edge computing devices. It provides an end-to-end framework that easily plugs into your edge MLOps workflow.
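
The profiling and deployment steps can also be scripted with the Edge Impulse Python SDK. The sketch below is only illustrative and reflects the publicly documented edgeimpulse package as understood here; the API key, model file, labels, and target device are placeholders, and exact function signatures may differ between SDK versions.

```python
import edgeimpulse as ei

# Assumption: `pip install edgeimpulse` and an API key copied from the
# Edge Impulse project dashboard (placeholder value below).
ei.API_KEY = "ei_your_project_api_key"

# Estimate RAM, flash, and latency of the quantized model on a target MCU.
profile = ei.model.profile(model="model.tflite", device="cortex-m4f-80mhz")
print(profile.summary())

# Package the model as a deployable archive for the edge device.
# The labels are placeholders for the classes used during training.
output_type = ei.model.output_type.Classification(labels=["idle", "wave", "shake"])
deploy_bytes = ei.model.deploy(model="model.tflite", model_output_type=output_type)

with open("deployment.zip", "wb") as f:
    f.write(deploy_bytes.getvalue())
```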
