Within the broader scope of machine learning and signal processing, our work focuses on developing algorithms that address real-world challenges across domains such as anomaly detection, generative modeling, 3D scene understanding, and efficient learning under constraints like limited labels or computational budgets.
Generative Models
Learning to generate realistic samples using models such as flow matching, diffusion models, GANs, and VAEs.
Key Focus Areas:
Novel Architecture Design: Developing new generative model architectures that improve sample quality, training stability, and computational efficiency.
Controllable Generation: Investigating methods for disentangling latent factors to allow for fine-grained control over generated outputs.
Data Augmentation and Synthesis: Utilizing generative models to create synthetic data for training robust machine learning models, especially in data-scarce scenarios.
Conditional Generation: Exploring techniques for generating data conditioned on specific inputs or properties, relevant for tasks like image-to-image translation or text-to-image synthesis.
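The flow-matching objective mentioned above can be sketched in a few lines: sample a data point and a noise point, interpolate linearly between them, and regress a velocity field onto their difference. A minimal NumPy illustration — the straight-line probability path, names, and toy shapes are illustrative assumptions, not a reference implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_pair(x0, x1, t):
    """Linear interpolation path and its velocity target.

    x0: data samples, x1: noise samples, t: per-sample times in [0, 1].
    Returns (x_t, u_t), where u_t = x1 - x0 is the regression target
    for a learned velocity field v(x_t, t).
    """
    t = t.reshape(-1, 1)                  # broadcast time over the feature dim
    x_t = (1.0 - t) * x0 + t * x1         # point on the straight-line path
    u_t = x1 - x0                         # constant velocity along that path
    return x_t, u_t

def fm_loss(v_pred, u_t):
    """MSE between predicted and target velocities."""
    return np.mean((v_pred - u_t) ** 2)

# Toy batch: 2-D "data" and Gaussian noise.
x0 = rng.normal(loc=3.0, size=(64, 2))
x1 = rng.normal(size=(64, 2))
t = rng.uniform(size=64)
x_t, u_t = flow_matching_pair(x0, x1, t)
```

In practice the velocity field v(x_t, t) would be a neural network; the loss structure above carries over unchanged.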
Anomaly Detection
Identifying rare or unusual patterns in complex data.
Key Focus Areas:
Unsupervised and Semi-Supervised Anomaly Detection: Developing algorithms that can learn normal behavior without explicit anomaly labels or with very few labels.
Deep Anomaly Detection: Utilizing deep neural networks to learn powerful representations for distinguishing normal from anomalous data points.
Streaming Anomaly Detection: Addressing the challenge of detecting anomalies in real-time data streams.
Explainable Anomaly Detection: Providing insights into why a particular data point is flagged as anomalous.
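As a concrete instance of the unsupervised setting above, a classical baseline fits a Gaussian to nominal data and scores points by squared Mahalanobis distance. A toy NumPy sketch — the regularization constant and all names are illustrative choices:

```python
import numpy as np

def fit_gaussian(x_normal):
    """Fit mean and inverse covariance of nominal data (no anomaly labels needed)."""
    mu = x_normal.mean(axis=0)
    cov = np.cov(x_normal, rowvar=False) + 1e-6 * np.eye(x_normal.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_score(x, mu, cov_inv):
    """Squared Mahalanobis distance: larger means more anomalous."""
    d = x - mu
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)

rng = np.random.default_rng(0)
x_train = rng.normal(size=(500, 2))              # nominal training data
mu, cov_inv = fit_gaussian(x_train)

# The far-away second point should receive a much larger score.
scores = anomaly_score(np.array([[0.1, -0.2], [8.0, 8.0]]), mu, cov_inv)
```

Deep variants replace the raw inputs with learned representations but keep the same score-and-threshold structure.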
Uncertainty Estimation
Modeling confidence in predictions for robust ML applications.
Key Focus Areas:
Bayesian Neural Networks: Exploring Bayesian approaches to deep learning for principled uncertainty quantification.
Ensemble Methods for Uncertainty: Developing efficient ensemble techniques to capture model uncertainty without the full computational cost of traditional Bayesian methods.
Uncertainty-Aware Training: Designing training procedures and loss functions that encourage models to output reliable uncertainty estimates.
Calibration of Uncertainty: Ensuring that predicted uncertainties accurately reflect the true likelihood of errors.
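The calibration point above has a standard diagnostic, expected calibration error (ECE), and deep ensembles give a simple uncertainty signal via member disagreement. A hedged NumPy sketch of both — the equal-width binning scheme and toy shapes are assumptions:

```python
import numpy as np

def ensemble_stats(member_preds):
    """member_preds: (n_members, n_samples).
    Returns the mean prediction and the per-sample std across members,
    the latter serving as a simple disagreement-based uncertainty estimate."""
    return member_preds.mean(axis=0), member_preds.std(axis=0)

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: |accuracy - confidence| averaged over equal-width confidence bins,
    weighted by the fraction of samples falling in each bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()
            conf = confidences[mask].mean()
            ece += mask.sum() / n * abs(acc - conf)
    return ece
```

A perfectly calibrated model (e.g., 75% accuracy at 0.75 confidence) attains ECE 0; overconfident predictions inflate it.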
Out-of-Distribution Detection
Detecting data points not seen during training.
Key Focus Areas:
Novel OOD Scoring Functions: Developing effective metrics and approaches to quantify how "out-of-distribution" a given input is.
Training for OOD Robustness: Designing training strategies that explicitly teach models to distinguish between in-distribution and OOD samples.
Connections to Anomaly and Novelty Detection: Exploring the relationships and distinctions between OOD detection and other forms of outlier analysis.
Evaluation Benchmarks: Contributing to the development of rigorous benchmarks for assessing OOD detection performance across various domains.
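Two widely used OOD scoring functions of the kind described above operate directly on a classifier's logits: maximum softmax probability (MSP) and the energy score. A small NumPy sketch — the function names are ours; the formulas follow the standard definitions:

```python
import numpy as np

def max_softmax_prob(logits):
    """MSP baseline: the top softmax probability. Higher = more in-distribution."""
    z = logits - logits.max(axis=-1, keepdims=True)   # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)

def energy_score(logits, T=1.0):
    """Energy score: -T * logsumexp(logits / T).
    Lower energy = more in-distribution, so -energy ranks OOD-ness."""
    z = logits / T
    m = z.max(axis=-1, keepdims=True)                 # stable logsumexp
    return -T * (m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1)))

confident = np.array([[10.0, 0.0, 0.0]])   # peaked logits, likely in-distribution
flat = np.array([[0.0, 0.0, 0.0]])         # uniform logits, likely OOD
```

Both scores are threshold-free rankings; benchmarks typically compare them via AUROC between in-distribution and OOD test sets.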
Few-Shot Learning
Learning from very limited labeled data.
Key Focus Areas:
Meta-Learning for Few-Shot Tasks: Developing models that learn to learn, enabling them to quickly adapt to new tasks with minimal examples.
Metric Learning: Designing embedding spaces where similar instances are close and dissimilar ones are far apart, facilitating comparison-based few-shot classification.
Data Augmentation and Synthesis: Generating diverse synthetic data to augment limited real datasets for few-shot learning.
Few-Shot Reinforcement Learning: Applying few-shot principles to enable agents to learn new skills with limited interactions.
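Metric-learning-based few-shot classification, as in the bullet above, can be illustrated with a prototypical-networks-style rule: average each class's support embeddings into a prototype and assign queries to the nearest one. A toy NumPy sketch — the embeddings here are given directly rather than produced by a learned encoder:

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    """Class prototype = mean embedding of that class's support examples."""
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    """Assign each query to the nearest prototype (squared Euclidean distance)."""
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# 2-way, 2-shot toy episode with hand-placed embeddings.
support_emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 4.9]])
support_labels = np.array([0, 0, 1, 1])
protos = prototypes(support_emb, support_labels, n_classes=2)
```

With a trained encoder, the same nearest-prototype rule generalizes to new classes from only a handful of labeled examples.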
Hardware-Aware Machine Learning
Optimizing models for efficiency on various hardware platforms.
Key Focus Areas:
Model Compression: Investigating techniques like network pruning, weight quantization, and knowledge distillation to reduce model size and inference latency.
Neural Architecture Search (NAS): Automating the design of efficient neural network architectures tailored for specific hardware constraints and performance targets.
Efficient Inference on Edge Devices: Developing methods to deploy high-performing machine learning models on low-power, resource-limited hardware.
Energy-Efficient AI: Researching techniques that minimize the energy consumption of deep learning models during training and inference.
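Two of the compression techniques listed above, magnitude pruning and 8-bit quantization, reduce to a few array operations. A simplified NumPy sketch — per-tensor affine quantization; thresholds and names are illustrative, not any framework's API:

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

def quantize_uint8(w):
    """Affine (asymmetric) uint8 quantization with a scale and offset."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.clip(np.round((w - lo) / scale), 0, 255).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover an approximate float tensor from the quantized representation."""
    return q.astype(np.float32) * scale + lo
```

The round-trip error of this scheme is bounded by half the quantization step, which is the basic trade-off these methods tune against model accuracy.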
Application Areas
Focusing on practical applications including Depth Estimation, 3D Point Cloud Processing, and other real-world problems.
Key Focus Areas:
Robust Depth Estimation: Developing techniques for accurately inferring per-pixel distance from images or video, crucial for autonomous navigation and augmented reality.
Advanced 3D Point Cloud Analysis: Focusing on processing sparse and dense 3D point clouds for tasks like semantic segmentation, object detection, and scene understanding in real-world environments.
Real-time Inference for Perception: Optimizing models for efficient execution on embedded systems to enable real-time perception in applications like automated driving and robotics.
Multi-modal Data Fusion: Integrating information from various sensors (e.g., cameras, LiDAR) to enhance the robustness and accuracy of perception systems.
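For the depth-estimation bullet, the calibrated stereo case reduces to pinhole geometry: depth = f·B/d for focal length f (in pixels), baseline B (in meters), and disparity d (in pixels). A minimal NumPy sketch — parameter names and the zero-disparity handling are illustrative assumptions:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Pinhole stereo geometry: depth = f * B / d.
    Pixels with (near-)zero disparity lie at infinity and are mapped to inf."""
    d = np.asarray(disparity, dtype=np.float64)
    return np.where(d > eps,
                    focal_px * baseline_m / np.maximum(d, eps),
                    np.inf)
```

Learned monocular or stereo networks typically predict disparity (or inverse depth) and apply exactly this conversion at inference time.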