Event

Embedded Machine Learning [SS222400137]

Type
Seminar (S)
Mixed in-person/online
Term
SS 2022
SWS
Language
German/English
Appointments
0
Links
ILIAS

Lecturers

Organisation

  • KIT-Fakultät für Informatik

Part of

Note

This seminar covers several topics, which are briefly presented here. Students discuss the latest research findings (publications) on the topics below, summarize them in a seminar paper, and present them to the other participants. Students' own topic suggestions are welcome but not required. The seminar can be completed in German or English.

Machine learning on on-chip systems

Machine learning and on-chip systems form a symbiosis in which each research direction benefits from advances in the other. In this seminar, the students discuss the latest findings in both research areas.

Machine learning (ML) is making its way into all areas of information systems, from high-level algorithms such as image classification down to hardware-level, intelligent CPU management. On-chip systems benefit from advances in ML, for example through adaptive resource management or the prediction of application behavior. Conversely, ML techniques also benefit from advances in on-chip systems, for example the acceleration of neural-network training and inference on current desktop graphics cards and even smartphone processors.
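
To make the direction concrete, the following is a minimal sketch of ML inside on-chip resource management: a tiny learned predictor estimates the next interval's CPU utilization and maps it to a frequency level, roughly as an adaptive DVFS governor might. Everything here (the synthetic trace, the thresholds, and the frequency levels) is an illustrative assumption, not a real driver interface.

```python
# Hypothetical sketch: a tiny learned predictor for CPU utilization, as one
# might use inside an adaptive DVFS governor. All names and the synthetic
# workload are illustrative assumptions, not a real kernel/driver API.
import random

def fit_linear(history, horizon=1):
    """Fit y = a*x + b on (u[t], u[t+horizon]) pairs via least squares."""
    xs = history[:-horizon]
    ys = history[horizon:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs) or 1e-9
    a = cov / var
    return a, my - a * mx

# Synthetic utilization trace (0..1) with a slow upward trend plus noise.
random.seed(0)
trace = [min(1.0, 0.3 + 0.005 * t + random.gauss(0, 0.05)) for t in range(200)]

a, b = fit_linear(trace)
predicted = a * trace[-1] + b

# Map the prediction to a discrete frequency level via simple thresholds.
levels = [(0.25, "800 MHz"), (0.50, "1.4 GHz"), (0.75, "2.0 GHz"), (1.01, "2.8 GHz")]
freq = next((f for thr, f in levels if predicted <= thr), levels[-1][1])
print(f"predicted utilization {predicted:.2f} -> set {freq}")
```

A real governor would use richer features (performance counters, temperature) and retrain online, but the predict-then-act loop is the same.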

The students are able to independently research the state of the art on a specific topic. This includes finding and analyzing publications as well as comparing and evaluating them. The students can summarize the state of research on a topic in writing and present it.

Approximate Computing for Efficient Machine Learning

Nowadays, energy efficiency is a first-class design constraint in the ICT sector, and approximate computing has emerged as a design paradigm for building energy-efficient computing systems. A large body of resource-hungry applications (e.g., image processing and machine learning) exhibits an intrinsic resilience to errors: even when the underlying computations are performed approximately, the outputs remain useful and of acceptable quality for the user. By exploiting this inherent error tolerance, approximate computing trades computational accuracy for savings in other metrics, e.g., energy consumption and performance.

Machine learning, a very common and top-trending workload in both data centers and embedded systems, is a perfect candidate for approximate computing since, by definition, it delivers approximate results. Performance as well as energy efficiency (especially in embedded systems) are crucial for machine learning applications, and approximate computing techniques are therefore widely adopted in machine learning hardware (e.g., the TPU) to improve its energy profile as well as its performance.
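
As a concrete illustration of the accuracy-for-savings trade, the sketch below mimics one classic approximate-hardware technique, truncating low-order operand bits before a fixed-point multiplication, and measures the resulting error of a dot product (the core operation of neural-network inference). The bit widths and the error metric are assumptions chosen for illustration; real approximate multipliers realize this simplification in hardware to save area and energy.

```python
# Illustrative sketch of approximate computing: drop low-order bits of
# fixed-point operands before multiplying, trading accuracy for the cheaper
# multiplier circuit this would enable in hardware. Purely conceptual.
import random

SCALE = 1 << 8  # Q8 fixed point: 8 fractional bits

def to_fixed(x):
    return int(round(x * SCALE))

def approx_mul(a, b, dropped_bits=4):
    # Truncate the lowest bits of each operand (a common approximate-
    # multiplier simplification), then multiply and rescale Q16 -> Q8.
    mask = ~((1 << dropped_bits) - 1)
    return ((a & mask) * (b & mask)) // SCALE

def dot(xs, ws, mul):
    return sum(mul(to_fixed(x), to_fixed(w)) for x, w in zip(xs, ws)) / SCALE

random.seed(1)
xs = [random.uniform(-1, 1) for _ in range(256)]
ws = [random.uniform(-1, 1) for _ in range(256)]

exact = sum(x * w for x, w in zip(xs, ws))
for bits in (0, 2, 4, 6):
    approx = dot(xs, ws, lambda a, b, d=bits: approx_mul(a, b, d))
    print(f"dropped bits={bits}: result={approx:+.4f}, |error|={abs(approx - exact):.4f}")
```

Running it shows the error growing gradually with the number of dropped bits, which is exactly the kind of graceful quality degradation that error-tolerant workloads can absorb.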

Machine Learning methods for DNN compilation and mapping

Deep neural networks have achieved great success in challenging tasks such as image classification and object detection, and there is great demand for deploying these networks on a wide range of devices, from cloud servers to embedded devices. Mapping DNNs to these devices is challenging, since each device differs in memory organization, compute units, and other characteristics. There have therefore been efforts to automate the process of mapping and compiling DNNs to diverse hardware. In this seminar, we will discuss work that uses machine learning methods to map and compile DNNs onto hardware.
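
To give a flavor of such methods, here is a conceptual sketch of a learned cost model guiding the search over loop-tiling configurations, in the spirit of auto-tuning compilers such as AutoTVM or Ansor. The "measurement" is a synthetic cost function standing in for timing a real kernel on the target device, and the k-nearest-neighbour model is a deliberately simple stand-in for the learned cost models used in practice; all names are illustrative assumptions.

```python
# Conceptual sketch of ML-guided DNN mapping: bootstrap a cost model from a
# few measurements, then let the model rank the rest of the search space so
# that only promising configurations get measured on the (simulated) target.
import random

def measure(tile_i, tile_j):
    """Stand-in for running a tiled kernel on the target and timing it."""
    # Assumed behavior: a sweet spot near tiles that fit a 32x32 cache block.
    return (tile_i - 32) ** 2 + (tile_j - 32) ** 2 + random.gauss(0, 20)

def featurize(cfg):
    i, j = cfg
    return [i, j, i * j, i / j]

def knn_predict(train, x, k=3):
    """A minimal learned cost model: k-nearest-neighbour regression."""
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    nearest = sorted(train, key=lambda t: dist(t[0], x))[:k]
    return sum(y for _, y in nearest) / k

random.seed(2)
space = [(i, j) for i in (4, 8, 16, 32, 64, 128) for j in (4, 8, 16, 32, 64, 128)]

# 1. Measure a few random configurations to bootstrap the cost model.
train = [(featurize(c), measure(*c)) for c in random.sample(space, 8)]

# 2. Rank the whole space with the model; measure only the top candidates.
ranked = sorted(space, key=lambda c: knn_predict(train, featurize(c)))
best = min(ranked[:5], key=lambda c: measure(*c))
print(f"chosen tiling: {best}")
```

The point of the design is the cost asymmetry: model predictions are cheap while real measurements are expensive, so a learned model lets the tuner explore a large mapping space with only a handful of hardware runs.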