Introduction to On-device AI
Why On-device

  1. Cost-effective: Reduces recurring costs by minimizing dependence on cloud computing resources.
  2. Efficient: Faster processing and better power efficiency by leveraging local compute.
  3. Private: Keeps data on the device, enhancing security and protecting user privacy.
  4. Personalized: Allows continuous model customization without external data transfer or updates.

Device In-the-loop Deployment

  1. Capture model
  2. Compile for the target device
  3. Validate numerics
  4. Measure Performance
  5. Deploy
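The five steps above can be sketched as a simple pipeline. Everything below is illustrative: the helper names (`capture_model`, `compile_model`, and so on) are hypothetical stand-ins for whatever toolchain is actually used, not a real deployment API.

```python
# Illustrative device-in-the-loop pipeline. All helpers are hypothetical
# stand-ins; a real workflow would call into a specific SDK or device farm.

def capture_model(model):
    """Step 1: trace/export the trained model into a portable graph format."""
    return {"graph": model, "format": "captured"}

def compile_model(captured, target_device):
    """Step 2: compile the captured graph for the target device's runtime."""
    return {"binary": captured["graph"], "device": target_device}

def validate_numerics(compiled, tolerance=1e-3):
    """Step 3: in practice, run the same inputs through the reference model
    and the on-device build, and check the outputs agree within tolerance."""
    return True  # placeholder result

def measure_performance(compiled):
    """Step 4: profile latency and memory on the real device."""
    return {"latency_ms": 12.0}  # placeholder numbers

def deploy(compiled):
    """Step 5: ship the validated, profiled binary."""
    return f"deployed to {compiled['device']}"

def device_in_the_loop(model, target_device):
    captured = capture_model(model)
    compiled = compile_model(captured, target_device)
    assert validate_numerics(compiled), "numerics drifted after compilation"
    stats = measure_performance(compiled)
    print(f"measured latency: {stats['latency_ms']} ms")
    return deploy(compiled)

print(device_in_the_loop("ffnet", "example-phone"))
```

The key design point is that validation and profiling happen on the actual target hardware before deployment, so numerical drift or latency regressions are caught in the loop rather than in production.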


Semantic Segmentation Models / Algorithms

Fuss-Free Network (FFNet)

Fuss-Free Network (FFNet): A simple encoder-decoder architecture with a ResNet-like backbone and a small multi-scale head.
Performance: Performs on par with or better than more complex semantic segmentation architectures such as HRNet, FANet, and DDRNet.
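As a rough sketch of how shapes flow through such an encoder-decoder, the snippet below computes the multi-scale feature-map resolutions of a ResNet-like backbone and the common resolution the head fuses them at. The strides and the choice of fusion resolution are illustrative assumptions, not FFNet's published configuration.

```python
# Shape flow through an FFNet-style encoder-decoder.
# Strides and the fusion scale are illustrative assumptions.

def backbone_feature_sizes(h, w, strides=(4, 8, 16, 32)):
    """ResNet-like backbone: each stage downsamples the input by its stride."""
    return [(h // s, w // s) for s in strides]

def multi_scale_head(feature_sizes):
    """Small multi-scale head: all scales are upsampled to a common
    resolution (here the stride-8 map, an assumption) and fused
    before the final per-pixel prediction."""
    target = feature_sizes[1]  # the stride-8 feature map
    return {"fused_resolution": target, "num_scales": len(feature_sizes)}

sizes = backbone_feature_sizes(1024, 2048)
print(sizes)              # [(256, 512), (128, 256), (64, 128), (32, 64)]
print(multi_scale_head(sizes))
```

This is the sense in which FFNet is "fuss-free": a plain feed-forward backbone plus a lightweight fusion head, with none of the multi-branch wiring of HRNet-style architectures.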

Jupyter Notebook

Preparing for on-device deployment

On-device deployment key concepts

L3 Jupyter Notebook

TensorFlow Lite


Thoughts 🤔 by Soumendra Kumar Sahoo is licensed under CC BY 4.0