Recorded lectures are available on the Whova app.
Find the tutorial in the agenda, select it, then click on "View Recording".
At the scheduled time slot, click on "View Stream": the teachers will present a short summary and reply to questions.
TEACHERS
Amir Aminifar (EPFL, Switzerland), David Atienza (EPFL, Switzerland)
SUMMARY
We consider wearable technologies for real-time health monitoring in the era of edge artificial intelligence and the Internet of Things (IoT). We discuss the design of wearable real-time health monitoring systems that aim to solve two fundamental problems: long-term monitoring and personalized monitoring. The first problem is tackled by introducing the concept of self-awareness into wearable monitoring systems in order to improve their energy efficiency. The second problem is addressed by proposing a self-learning methodology that automatically labels health pathologies, generating personalized training data with minimal patient intervention.
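As one possible illustration of the self-awareness idea (a sketch of ours, not code from the tutorial), the Python snippet below shows confidence-driven adaptive sampling: the device stays at a low sampling rate while its on-board classifier is confident and switches to the more power-hungry rate only when uncertainty rises. The classifier, rates, and threshold are placeholder assumptions.

    import random

    def select_sampling_rate(confidence, low_hz=32, high_hz=256, threshold=0.9):
        """Drop to the low-power sampling rate only when the classifier is confident."""
        return low_hz if confidence >= threshold else high_hz

    # Stand-in classifier: returns (label, confidence). A real system would run
    # an on-device detector (e.g. for arrhythmia or epileptic seizures) here.
    def fake_classifier(window):
        confidence = random.uniform(0.5, 1.0)
        label = "normal" if confidence > 0.7 else "needs-review"
        return label, confidence

    rate_hz = 256  # start conservatively at the high rate
    for step in range(5):
        window = [0.0] * rate_hz                     # placeholder for one sensor window
        label, confidence = fake_classifier(window)
        rate_hz = select_sampling_rate(confidence)   # self-aware adjustment
        print(step, label, round(confidence, 2), rate_hz)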
PARTICIPANT TAKEAWAY
TEACHERS
Shih-Chii Liu (UZH|ETH, Switzerland), Enea Ceolini (UZH|ETH, Switzerland)
SUMMARY
This tutorial focuses on advances in audio machine learning algorithms that have led to new applications in smart assistants, now ubiquitous in devices such as smartphones, laptops, and cars. While the hardest task, large-vocabulary speech recognition, can be addressed by powerful but computationally demanding models running in the cloud, many less complex tasks can be implemented by models that run entirely on the device, require limited resources, and execute with low latency. This general idea of computing on the edge (fog level) opens up new challenges for ML research, which now has to deliver small models that run fast on resource-limited devices. These challenges include personalization, continual training, domain adaptation, and especially privacy-related issues. The tutorial will be useful for participants interested in ML audio algorithms suitable for embedded low-power systems. They will learn the important steps in developing, training, and deploying ML audio models for various applications, along with the corresponding model constraints. Examples of solutions implemented for low-resource devices and the related architecture considerations will be presented.
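To make the "small model on the device" idea concrete, here is a minimal, hypothetical keyword-spotting network in PyTorch built from depthwise-separable convolutions over MFCC features, in the spirit of DS-CNN-style spotters used on low-power hardware; the layer sizes and keyword count are illustrative and not taken from the tutorial material.

    import torch
    import torch.nn as nn

    class TinyKWS(nn.Module):
        """Small depthwise-separable CNN over MFCC features for keyword spotting."""
        def __init__(self, n_keywords=12):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 64, kernel_size=(10, 4), stride=(2, 2), padding=(5, 2)),
                nn.ReLU(),
                # one depthwise + pointwise pair
                nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64),
                nn.Conv2d(64, 64, kernel_size=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, n_keywords)

        def forward(self, x):                      # x: (batch, 1, frames, mfcc)
            return self.classifier(self.features(x).flatten(1))

    model = TinyKWS()
    logits = model(torch.randn(1, 1, 49, 10))      # one dummy 49-frame, 10-MFCC window
    print(logits.shape, sum(p.numel() for p in model.parameters()), "parameters")

The printed parameter count stays in the few-thousand range, i.e. the kind of footprint that fits in on-device memory budgets.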
PARTICIPANT TAKEAWAY
TEACHERS
Partha Maji (Arm ML Labs), Igor Fedorov (Arm ML Labs)
SUMMARY
When it comes to enabling AI in embedded devices, the hardware implementation is a critical part of any end-user application design effort. To ensure that embedded AI products deliver the required functionality, consume little power, and remain secure and reliable, device manufacturers face many challenges during the design and optimisation phase. The process starts with model selection and is typically followed by a sequence of carefully chosen optimisation techniques applied to fine-tune the selected model for a particular target architecture. This tutorial focuses on the intricate details of this optimisation process for embedded AI, including a deep understanding of commonly used DNN architectures, a wide variety of implementation strategies, and advanced network architecture search. The tutorial also highlights open problems and challenges for future research in this area.
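As a toy illustration of one step in such an optimisation pipeline (our sketch, not material from the tutorial), the Python snippet below applies post-training symmetric int8 quantization to a weight tensor with NumPy; real deployments would add per-channel scales, activation calibration, and quantization-aware fine-tuning.

    import numpy as np

    def quantize_symmetric_int8(w):
        """Map the weight tensor's max magnitude to 127 and round to int8."""
        scale = np.abs(w).max() / 127.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(64, 32).astype(np.float32)   # a toy weight matrix
    q, scale = quantize_symmetric_int8(w)
    err = np.abs(w - dequantize(q, scale)).mean()
    print("scale:", scale, "mean abs quantization error:", err)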
PARTICIPANT TAKEAWAY
TEACHERS
Michele Magno (ETH Zurich, Switzerland), Francesco Conti (ETH Zurich & University of Bologna), Manuele Rusci (University of Bologna & GreenWaves Technologies)
SUMMARY
Executing “heavy” algorithms such as neural networks on platforms with a 10-100 mW power envelope and less than 1 MB of on-chip memory poses two symmetric problems: 1) minimizing memory and computational footprint; 2) maximizing performance and energy efficiency. In this tutorial, we will cover in detail the constraints that machine learning algorithms must satisfy for implementation on low-power end-nodes, as well as several current and future techniques to meet these constraints. We range from DNN model distillation and quantization with low-bitwidth integers to the design of AI-dedicated programmable architectures that are flexible and efficient at the same time, achieving billions of operations per second within a microcontroller power budget. We showcase these concepts on commercial Arm Cortex-M class IoT end-nodes as well as on RISC-V based parallel ultra-low-power (PULP) processors such as the GreenWaves GAP8.
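For a flavour of what low-bitwidth integer execution looks like on such end-nodes (an illustrative sketch, not code from the tutorial or from any specific kernel library), the NumPy snippet below computes a fully-connected layer with int8 operands, an int32 accumulator, and a fixed-point multiplier-and-shift requantization step, mirroring the integer-only arithmetic used by microcontroller inference kernels. The multiplier and shift values are arbitrary stand-ins for the combined input, weight, and output scales.

    import numpy as np

    def requantize(acc, multiplier, shift):
        """Rescale an int32 accumulator back to int8 with a fixed-point
        multiplier and arithmetic right shift."""
        scaled = (acc.astype(np.int64) * multiplier) >> shift
        return np.clip(scaled, -128, 127).astype(np.int8)

    # Toy int8 fully-connected layer: y = requantize(W @ x), accumulating in int32.
    rng = np.random.default_rng(0)
    W = rng.integers(-127, 128, size=(16, 64), dtype=np.int8)
    x = rng.integers(-127, 128, size=64, dtype=np.int8)

    acc = W.astype(np.int32) @ x.astype(np.int32)   # int32 accumulator, no overflow here
    # The multiplier/shift pair stands in for (input_scale * weight_scale / output_scale).
    y = requantize(acc, multiplier=1342, shift=24)
    print(y)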
PARTICIPANT TAKEAWAY