TUTORIALS


RESOURCE-AWARE MACHINE LEARNING ON BIOMEDICAL WEARABLE SYSTEMS

TIME: Monday 23, 11:00 - 12:30

TEACHERS

Amir Aminifar (EPFL, Switzerland), David Atienza (EPFL, Switzerland)

SUMMARY

We consider wearable technologies for real-time health monitoring in the era of edge artificial intelligence and the Internet of Things (IoT). We discuss the design of real-time health-monitoring wearable systems that address two fundamental problems: long-term monitoring and personalized monitoring. The first problem is tackled by introducing self-awareness into the wearable systems that monitor the patient, in order to improve their energy efficiency. The second is addressed by a self-learning methodology that automatically labels health pathologies, generating personalized training data with minimal patient intervention.

PARTICIPANT TAKEAWAY

  • Understand the key challenges in enabling long-term personalized health monitoring using artificial-intelligence and machine-learning techniques on resource-constrained edge mobile and IoT technologies.
  • Understand the self-awareness concept to enable long-term health monitoring using wearable technologies, from edge to cloud.
  • Understand the self-learning concept to enable personalization in health monitoring using wearable technologies.
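One common way to realize the automatic-labeling idea from the summary is pseudo-labeling (self-training): a generic model labels the patient's own unlabeled recordings, and only high-confidence predictions are kept as personalized training data. The sketch below is a minimal illustration of that generic technique, not the presenters' exact methodology; the toy nearest-centroid "generic model" and the 0.95 threshold are assumptions for demonstration.

```python
import numpy as np

def pseudo_label(predict_proba, unlabeled_x, threshold=0.95):
    """Keep only samples the generic model is confident about, using its
    predictions as labels for personalized retraining (pseudo-labeling)."""
    probs = predict_proba(unlabeled_x)       # shape (n_samples, n_classes)
    conf = probs.max(axis=1)                 # confidence of the top class
    keep = conf >= threshold
    return unlabeled_x[keep], probs[keep].argmax(axis=1)

# Toy "generic model": softmax over distances to two class centroids.
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])

def predict_proba(x):
    d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
    inv = np.exp(-d)
    return inv / inv.sum(axis=1, keepdims=True)

# Unlabeled patient data: two tight clusters near the centroids.
rng = np.random.default_rng(0)
unlabeled = np.vstack([rng.normal(0, 0.3, (10, 2)),
                       rng.normal(5, 0.3, (10, 2))])
x_new, y_new = pseudo_label(predict_proba, unlabeled)
print(len(y_new), "samples auto-labeled")
```

The confidence threshold trades label quantity against label noise; on a wearable, this filtering step is cheap enough to run on-device.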

NEUROMORPHIC AUDITORY PROCESSING

TIME: Monday 23, 11:00 - 12:30

TEACHERS

Shih-Chii Liu (UZH|ETH, Switzerland), Enea Ceolini (UZH|ETH, Switzerland)

SUMMARY

This tutorial focuses on advances in audio machine learning algorithms that have enabled the smart assistants now ubiquitous in devices such as smartphones, laptops, and cars. While the hardest challenge, large-vocabulary speech recognition, can be addressed by powerful but computationally demanding models running in the cloud, many less complex tasks can be implemented by models that run entirely on the device, require limited resources, and execute with low latency. This general idea of computing on the edge (fog level) opens up new challenges for ML research, which must now produce small models that run fast on resource-limited devices. These challenges span personalization, continual training, domain adaptation, and, especially, privacy. This tutorial will be useful for participants interested in ML audio algorithms suitable for embedded low-power systems. They will learn the important steps in developing, training, and deploying ML audio models for various applications, along with the associated model constraints. Examples of solutions implemented specifically for low-resource devices, together with architecture considerations, will be presented.

PARTICIPANT TAKEAWAY

  • Understand the machine learning audio algorithms suitable for embedded low power systems
  • Understand the important steps in developing, training and deploying ML audio models
  • Understand the model architecture choices based on the IoT platform constraints
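One architecture choice that recurs in on-device audio models (e.g. keyword spotting) is replacing standard convolutions with depthwise-separable ones to fit tight memory budgets. The arithmetic below is a generic illustration of that trade-off; the layer sizes are assumptions, not figures from the tutorial.

```python
# Parameter count of a standard conv layer vs a depthwise-separable one,
# the substitution commonly used to shrink audio models for edge devices.
def conv_params(cin, cout, k):
    return cin * cout * k * k + cout          # k x k weights + biases

def ds_conv_params(cin, cout, k):
    depthwise = cin * k * k + cin             # one k x k filter per channel
    pointwise = cin * cout + cout             # 1x1 conv mixes channels
    return depthwise + pointwise

cin, cout, k = 64, 64, 3                      # assumed layer shape
std = conv_params(cin, cout, k)
ds = ds_conv_params(cin, cout, k)
print(std, ds, round(std / ds, 1))            # ~7.7x fewer parameters
```

The same factoring also cuts multiply-accumulate counts, which is why families like DS-CNNs dominate microcontroller-class audio benchmarks.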

ENABLING EFFICIENT DEEP LEARNING ON EMBEDDED SYSTEMS: CHALLENGES AND OPPORTUNITIES

TIME: Tuesday 24, 8:30 - 10:00 (Part I) -- Wednesday 25, 8:30 - 10:00 (Part II)

TEACHERS

Partha Maji (Arm ML Labs), Igor Fedorov (Arm ML Labs)

SUMMARY

When it comes to enabling AI in embedded devices, the hardware implementation is a critical part of any end-user application design effort. To ensure that embedded AI products meet the required functionality, consume low power, and are secure and reliable, device manufacturers face many challenges during the optimisation and design phase. The process starts with model selection and is typically followed by a sequence of selectively chosen optimisation techniques applied to fine-tune the selected model for a particular target architecture. This tutorial focuses on the intricate details of this optimisation process for embedded AI, including a deep understanding of commonly used DNN architectures, a wide variety of implementation strategies, and advanced network architecture search. The tutorial also highlights open problems and challenges for future research in this area.

PARTICIPANT TAKEAWAY

  • Understand the key computational components of modern DNNs
  • Be able to recognise, analyse and compare different computational bottlenecks in deep models
  • Understand various data reuse patterns and how they can be exploited to speed up computation
  • Gain a deep understanding of common software-based kernel implementations for target hardware
  • Understand the common framework of advanced network architecture search (NAS/Auto-ML) based model design and optimisation
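A standard first step in the bottleneck analysis the takeaways describe is a roofline-style check: compare a layer's arithmetic intensity (operations per byte moved) against the hardware's compute-to-bandwidth ratio. The sketch below applies this to a single conv layer; the layer shape and the peak hardware numbers are made-up assumptions for illustration.

```python
# Roofline-style check: is a conv layer compute-bound or memory-bound?
def conv_macs(h, w, cin, cout, k):
    return h * w * cin * cout * k * k          # multiply-accumulates

def conv_bytes(h, w, cin, cout, k, bytes_per_el=1):   # int8 tensors assumed
    weights = cin * cout * k * k
    ifmap = h * w * cin
    ofmap = h * w * cout
    return (weights + ifmap + ofmap) * bytes_per_el   # ideal: each read once

macs = conv_macs(56, 56, 64, 64, 3)
bytes_moved = conv_bytes(56, 56, 64, 64, 3)
intensity = macs / bytes_moved                 # MACs/byte, perfect data reuse
peak_macs_per_s, peak_bytes_per_s = 2e9, 1e9   # assumed hardware peaks
ridge = peak_macs_per_s / peak_bytes_per_s
bound = "compute" if intensity > ridge else "memory"
print(round(intensity, 1), bound)
```

Because the ideal intensity assumes perfect reuse, the gap between it and the measured value indicates how much the data-reuse patterns covered in the tutorial (weight, input, and output stationarity) can still buy.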

DEMYSTIFYING NEURAL NETWORKS AT THE VERY EDGE OF THE IOT

TIME: Tuesday 24, 8:30 - 10:00 (Part I) -- Wednesday 25, 8:30 - 10:00 (Part II)

TEACHERS

Michele Magno (ETH Zurich, Switzerland), Francesco Conti (ETH Zurich & University of Bologna), Manuele Rusci (University of Bologna & GreenWaves Technologies)

SUMMARY

Execution of “heavy” algorithms such as neural networks on platforms with a 10-100 mW power envelope and less than 1 MB of on-chip memory poses two symmetric problems: 1) minimization of memory and computational footprint; 2) maximization of performance and energy efficiency. In this tutorial, we will cover in detail the constraints machine learning algorithms must be subjected to for implementation on low-power end-nodes, as well as several current and future techniques to target these constraints. We range from DNN model distillation/quantization with low-bitwidth integers to the design of AI-dedicated programmable architectures that are flexible and efficient at the same time, achieving billions of operations per second in a microcontroller power budget. We showcase these concepts on commercial ARM-based Cortex-M class IoT end-nodes as well as on RISC-V based parallel ultra-low-power (PULP) processors such as the GreenWaves GAP8.

PARTICIPANT TAKEAWAY

  • Discover the challenges of driving AI to the very edge of the IoT
  • Understand the combined architectural and algorithmic techniques to overcome these challenges
  • See how these techniques can be applied to commercial low-power IoT end-nodes (ARM Cortex-M class, GreenWaves GAP8)
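The low-bitwidth quantization mentioned in the summary can be illustrated with the simplest variant: symmetric post-training quantization of float32 weights to int8, which cuts memory 4x at the cost of a small rounding error. This is a minimal generic sketch, not the specific scheme used on GAP8 or Cortex-M deployments.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization of a weight tensor to int8."""
    scale = np.abs(w).max() / 127.0            # map the largest |w| to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Assumed example: one 64x64 float32 weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()   # bounded by scale / 2
print(q.dtype, w.nbytes // q.nbytes, f"max abs error {err:.4f}")
```

Production flows refine this with per-channel scales, activation quantization with calibration data, and quantization-aware training, all of which keep the same scale-and-round core.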

© 2020 IEEE International Conference on Artificial Intelligence Circuits and Systems.