Seminar Summer Semester 2020

Selected Topics in Multimedia Communications and Signal Processing

Audio Signal Processing for Human-Robot Interaction

Prof. Dr.-Ing. Walter Kellermann; Dr.-Ing. Heinrich Löllmann; Alexander Schmidt, M.Sc.

 

The acceptance of robots in our daily life will largely depend on how well a robot is aware of its environment and how responsively it can react to any kind of human expression. Since acoustic signals reveal a wealth of information about the environment and speech is the most effective means of communication between humans, robots must be able to analyze the acoustic scene and use voice communication in a natural way.

In this seminar, state-of-the-art concepts and methods for an intuitive audio-signal-based human-machine interaction for robots are discussed. This task is especially challenging since a robot typically operates in adverse acoustic environments characterized by noise, interfering sources and reverberation. Therefore, efficient and robust methods for acoustic scene analysis and signal enhancement are required, including localization, extraction and separation of acoustic sources as well as, specific to robots, the reduction of the robot's self-noise (ego-noise). Since robots are typically equipped with other sensor modalities besides microphones, sensor fusion is also highly relevant.

Reflecting current research trends in this domain, many of the seminar topics are based on methods from artificial intelligence and machine learning.

This seminar is designed for students of the Bachelor's and Master's programs in Electrical Engineering, Electronics and Information Technology (EEI), Information and Communication Technology (IuK), Industrial Engineering and Management (WING), Computational Engineering (CE), Communications and Multimedia Engineering (CME), and Advanced Signal Processing and Communications Engineering (ASC), as well as related study programs.

The seminar consists of three mandatory meetings:

1st meeting (late April 2020): An introduction will be given and the individual topics will be assigned to the participants.

2nd meeting (early June 2020): The participants will give a brief presentation on the status of their work, and hints for the final presentation will be given.

3rd meeting (mid-July 2020): Each participant will give a 25-minute presentation and submit a report of 10 to 15 pages on his/her topic.

All meetings and presentations will be held in English, and the reports are expected to be written in English.

 

Registration & Contact

Registration for this seminar takes place via the central registration platform of the Department EEI: https://www.studon.fau.de/xcos2828315.html (March 30 to April 5).

In total, we offer 12 seminar places. For questions and further information, contact alexander.as.schmidt@fau.de.

 

Offered Topics

The following topics will be offered:

Methodologies

→ Aspects of Reinforcement Learning with Applications to Audio Signal Processing

→ MM- and EM-Algorithm: Concepts and Applications

Localization

→ Sound Source Position Estimation in Robotics

→ Sound Source Tracking for Robots

→ Active Localization and Exploration

Signal Extraction and Enhancement

→ Blind Source Separation for Robots

→ Ego-noise: Characteristics and Suppression Methods

Classification, Detection and Understanding

→ Acoustic Event Classification for Robotics

→ Privacy for ASR and Scene Classification

Sensor Data Fusion

→ General Concepts for Sensor Data Fusion

→ Audio-visual Signal Enhancement/Audio-visual Tracking

Microphone Array Design

→ Optimal Microphone Placement