Introduction to On-device AI

Overview


Why On-device

  1. Cost-effective: Reduces recurring costs by minimizing dependence on cloud computing resources.
  2. Efficient: Lower latency and better power efficiency by leveraging the device's local compute.
  3. Private: Keeps data on the device, enhancing security and protecting user privacy.
  4. Personalized: Allows continuous model customization without transferring data off the device.

Device In-the-loop Deployment

  1. Capture model
  2. Compile for the target device
  3. Validate numerics
  4. Measure performance
  5. Deploy
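
A minimal Python sketch of the first few steps, assuming a PyTorch model: the model is captured with `torch.jit.trace`, exported to a portable ONNX artifact, and its numerics validated against an ONNX Runtime run. Compiling and profiling for the actual target device would go through a platform toolchain (for example, a service like Qualcomm AI Hub) and is omitted here; the model and file names below are illustrative.

```python
import numpy as np
import torch
import onnxruntime as ort

# A stand-in model; in practice this would be the network you want to deploy.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(8, 4, kernel_size=1),
).eval()

example_input = torch.randn(1, 3, 224, 224)

# 1. Capture the model as a static graph.
traced = torch.jit.trace(model, example_input)

# 2. Export a portable artifact; a device toolchain would compile this
#    for the target hardware (that step is omitted in this sketch).
torch.onnx.export(traced, example_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# 3. Validate numerics: compare the exported model's output against
#    the original PyTorch output.
session = ort.InferenceSession("model.onnx")
onnx_out = session.run(None, {"input": example_input.numpy()})[0]
torch_out = traced(example_input).detach().numpy()
print("max abs difference:", np.max(np.abs(onnx_out - torch_out)))

# Steps 4 (measure performance) and 5 (deploy) run on the actual device.
```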

Applications

Semantic Segmentation Models / Algorithms

Fuss-Free Network (FFNet)


https://arxiv.org/abs/2206.08236
Fuss-Free Network (FFNet): A simple encoder-decoder architecture with a ResNet-like backbone and a small multi-scale head.
Performance: On par with or better than more complex semantic segmentation architectures such as HRNet, FANet, and DDRNet.
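
As an illustration of that description, here is a small PyTorch sketch in the spirit of FFNet (not the authors' implementation; all channel counts and stage sizes are made up): a ResNet-style backbone producing feature maps at several strides, and a small head that projects each scale to a common width, upsamples, sums, and predicts per-pixel classes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    """3x3 conv + BN + ReLU, the basic unit of the ResNet-like backbone."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return F.relu(self.bn(self.conv(x)))

class FFNetSketch(nn.Module):
    """Encoder-decoder sketch: simple backbone, small multi-scale head."""
    def __init__(self, num_classes=19):
        super().__init__()
        # Backbone: progressively downsampled stages (strides 4, 8, 16).
        self.stage1 = nn.Sequential(ConvBlock(3, 32, stride=2), ConvBlock(32, 32, stride=2))
        self.stage2 = ConvBlock(32, 64, stride=2)
        self.stage3 = ConvBlock(64, 128, stride=2)
        # Head: project each scale to a common width, fuse, classify.
        self.proj = nn.ModuleList([nn.Conv2d(c, 64, 1) for c in (32, 64, 128)])
        self.classifier = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        size = f1.shape[-2:]
        fused = sum(F.interpolate(p(f), size=size, mode="bilinear", align_corners=False)
                    for p, f in zip(self.proj, (f1, f2, f3)))
        logits = self.classifier(fused)
        # Upsample back to input resolution for per-pixel predictions.
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear", align_corners=False)

print(FFNetSketch()(torch.randn(1, 3, 256, 512)).shape)  # torch.Size([1, 19, 256, 512])
```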

Jupyter Notebook

Preparing for on-device deployment

On-device deployment key concepts

L3 Jupyter Notebook

TensorFlow Lite

Source
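
For illustration, a minimal TensorFlow Lite round trip in Python, assuming a small Keras model: convert it to a `.tflite` flatbuffer and run it with the TFLite interpreter, the same artifact format that would ship to the device. The model and file name are placeholders.

```python
import numpy as np
import tensorflow as tf

# A stand-in Keras model; in practice this would be the trained network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# Convert to a TensorFlow Lite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# Run the converted model with the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.random.rand(1, 224, 224, 3).astype(np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)  # (1, 10)
```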

Also Read

Thoughts 🤔 by Soumendra Kumar Sahoo is licensed under CC BY 4.0