
Learning Data-Dependent Transformations for Ego-Noise Suppression

Proposal for a Master Thesis

Topic:

Learning Data-Dependent Transformations for Ego-Noise Suppression

Description:

Robot audition refers to the research area concerned with human-robot interaction by speech. To this end, robots are often equipped with microphone arrays to capture the surrounding acoustic scene. If the robot is moving, the recorded microphone signals are significantly distorted by self-induced noise (ego-noise) emitted by the various moving mechanical parts of the robot. Various algorithms have been proposed to deal with this problem, e.g., [1]. Most approaches work in a transform domain, classically the STFT domain due to its sparsifying effect on speech signals. Recently, increasing research effort has been devoted to learning transformations from training data, a frequently used objective being to enforce sparsity in the transform domain, e.g., [2]. Learning transformations instead of employing data-independent ones has the merit of tailoring them to the specific application. In this thesis, the potential of learned transformations for ego-noise suppression is to be examined. The implemented algorithms are to be evaluated against well-known STFT-based approaches with respect to their effect on subsequent noise-suppression algorithms.
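To make the transform-learning idea concrete, the following is a minimal sketch of learning a square sparsifying transform in the spirit of [2]: it alternates hard-thresholding sparse coding with a closed-form transform update. It is written in Python/NumPy purely for illustration (the thesis work itself is expected in Matlab), and the function name, default regularization weight, sparsity level, and iteration count are illustrative assumptions rather than part of the proposal.

import numpy as np

def learn_sparsifying_transform(Y, sparsity, n_iter=50, lam=None, seed=0):
    """Minimal sketch of square sparsifying-transform learning in the spirit of [2].

    Y        : (n, N) training matrix, one signal patch/frame per column
    sparsity : number of nonzeros kept per transformed column
    lam      : regularization weight (the default scaling is a heuristic assumption)
    Returns the learned transform W (n x n) and the sparse codes X (n x N).
    """
    n, _ = Y.shape
    if lam is None:
        lam = 1e-2 * np.linalg.norm(Y, 'fro') ** 2  # heuristic choice (assumption)

    rng = np.random.default_rng(seed)
    W = np.linalg.qr(rng.standard_normal((n, n)))[0]  # random orthonormal initialization

    # Factor Y Y^T + lam*I once; it is constant over the iterations.
    L = np.linalg.cholesky(Y @ Y.T + lam * np.eye(n))
    L_inv = np.linalg.inv(L)

    for _ in range(n_iter):
        # Sparse-coding step: keep the 'sparsity' largest-magnitude entries per column.
        Z = W @ Y
        X = np.zeros_like(Z)
        idx = np.argsort(np.abs(Z), axis=0)[-sparsity:, :]
        np.put_along_axis(X, idx, np.take_along_axis(Z, idx, axis=0), axis=0)

        # Transform-update step: closed-form solution via an SVD
        # (one of the update strategies discussed in the transform-learning literature).
        U, sig, Vt = np.linalg.svd(L_inv @ Y @ X.T)
        Sig = np.diag(sig + np.sqrt(sig ** 2 + 2.0 * lam))
        W = 0.5 * Vt.T @ Sig @ U.T @ L_inv

    return W, X

Applied to, e.g., time-domain frames or STFT magnitude patches of ego-noise recordings, the learned transform W could then be compared against fixed transforms such as the STFT or DCT in terms of the achieved sparsity and the resulting noise-suppression performance.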
As prerequisites, the student should have an interest in signal processing and machine learning algorithms, an affinity for mathematics, and Matlab programming experience.


[1] T. Tezuka, T. Yoshida, and K. Nakadai, “Ego-motion noise suppression for robots based on Semi-Blind Infinite Non-negative Matrix Factorization,” in Proc. IEEE Int. Conf. on Robotics and Automation (ICRA), 2014.

[2] S. Ravishankar and Y. Bresler, “Learning Sparsifying Transforms,” IEEE Transactions on Signal Processing, 2013.


Professor:

Prof. Dr.-Ing. Walter Kellermann

Supervisor:

M.Sc. Thomas Haubner, room 05.018 (Cauerstr. 7), thomas.haubner@FAU.de

Available:

Immediately