CV
Education
- Ph.D. in Computer Science (Machine Learning), Aarhus University, 2021 (expected).
- M.S. in Computer Science, Aarhus University, 2018.
Specialization: Algorithms and Machine Learning.
Thesis: Explainable Deep Learning: Facilitating the Use of Deep Neural Networks in Healthcare, grade: 12.
Weighted GPA: 10.4 on a 12-point scale.
Courses from other universities: Advanced Machine Learning (DTU, grade: 12), Computational Data Analysis (DTU, grade: 12), Algorithms for Massive Data Sets (DTU, grade: 12), Introduction to Artificial Intelligence (UC Berkeley, grade: A), Introductory Applied Econometrics (UC Berkeley, grade: A).
- B.S. in Computer Science, Aarhus University, 2015.
GPA: 8.91 on a 12-point scale.
Bachelor thesis: Compiler for the programming language Tiger, grade: 12.
Publications
Teaching
Work experience
- 2015-2016 Windows (WPF) Developer at Visiolink ApS, Aarhus, Denmark.
- 2013-2015 Student Developer at Visiolink ApS, Aarhus, Denmark.
Talks
Projects
Counterfactuals from Invertible Neural Networks
“How can I make a minimal and realistic change to an input such that the predicted outcome changes?” This is the question I aim to answer using invertible neural networks. The idea is closely related to pre-image computation in, e.g., kernel methods.
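As a rough, hedged sketch of the approach: search the latent space of an invertible map for a nearby point whose decoded input flips a classifier's prediction. The toy invertible map, the linear classifier, and all names (`f`, `f_inv`, `logit`) below are illustrative assumptions, not the project's actual models.

```python
# A minimal sketch of latent-space counterfactual search. The invertible map
# (a single well-conditioned linear layer) and the linear classifier are toy
# stand-ins; real work would use, e.g., coupling-layer flows.
import torch

torch.manual_seed(0)
d = 4

W = torch.randn(d, d) + 2 * torch.eye(d)    # well-conditioned, hence invertible
W_inv = torch.linalg.inv(W)

def f(x):       # invertible "network": input -> latent
    return W @ x

def f_inv(z):   # exact inverse: latent -> input
    return W_inv @ z

w_clf = torch.randn(d)                      # toy linear classifier on the latent

def logit(z):
    return w_clf @ z

x = torch.randn(d)                          # input to explain
orig_sign = torch.sign(logit(f(x))).item()  # sign of the current prediction

z0 = f(x).detach()
z = z0.clone().requires_grad_()
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    # Push the logit across the decision boundary while staying close to z0,
    # which keeps the decoded counterfactual minimal (and, hopefully, realistic).
    loss = torch.relu(1.0 + orig_sign * logit(z)) + 0.1 * (z - z0).pow(2).sum()
    loss.backward()
    opt.step()

x_cf = f_inv(z.detach())  # decode the counterfactual back into input space
print("original logit:      ", logit(f(x)).item())
print("counterfactual logit:", logit(f(x_cf)).item())
```

The distance penalty keeps the counterfactual close to the original input; searching in the latent space of an invertible model is what (hopefully) keeps it realistic, mirroring the pre-image intuition from kernel methods.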
Attention Mechanisms for Explanations
Many local post-hoc explainability techniques, such as DeConvNet, Guided Backprop, layer-wise relevance propagation, and integrated gradients, rely on “gradient-like” computations in which explanations are propagated backwards through the neural network, one layer at a time. This backward computation can be altered to incorporate attention, guiding the techniques to produce better explanations.
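As a hedged illustration on a tiny two-layer ReLU network: first a plain gradient-style backward pass, then the same pass with the hidden-layer relevance rescaled by an attention vector before it is propagated further. The softmax “attention” and all names are illustrative assumptions, not the project's exact rule.

```python
# A minimal sketch of weaving attention into a gradient-style backward
# explanation pass, on a tiny two-layer ReLU net.
import torch

torch.manual_seed(0)
W1 = torch.randn(8, 4)                 # layer 1: 4 -> 8
W2 = torch.randn(1, 8)                 # layer 2: 8 -> 1

x = torch.randn(4)
a1 = torch.relu(W1 @ x)                # hidden activations
y = W2 @ a1                            # scalar output

# Plain "gradient-like" backward pass, one layer at a time.
r = W2.squeeze(0) * (a1 > 0).float()   # dy/da1, gated by the ReLU
plain = W1.T @ r                       # explanation on the input

# Attention-modified pass: rescale the hidden-layer relevance by attention
# scores before propagating it through the next layer down.
att = torch.softmax(a1, dim=0)         # illustrative attention over hidden units
attended = W1.T @ (r * att)

print("output:", y.item())
print("plain explanation:    ", plain)
print("attention explanation:", attended)
```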
What if Neural Networks had SVDs?
Various neural networks employ time-consuming matrix operations such as matrix inversion. Many of these operations are faster to compute given the Singular Value Decomposition (SVD). Techniques from [1, 2] allow using the SVD in neural networks without ever computing it explicitly. In theory, these techniques can speed up matrix operations; in practice, however, they are not fast enough.
We present an algorithm that is up to $27 \times$ faster than the previous approach, fast enough to speed up several matrix operations.
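To illustrate why having the SVD helps (a hedged sketch of the general idea, not the $27 \times$ faster algorithm itself): if a weight matrix is kept in the form $W = U \,\mathrm{diag}(s)\, V^\top$ with $U$ and $V$ parameterized as products of Householder reflections, then inverses and log-determinants become cheap. All names below are illustrative.

```python
# A minimal sketch of keeping a weight matrix in SVD form W = U diag(s) V^T,
# with U and V built from Householder reflections, so that the inverse and
# log-determinant never require a decomposition at run time.
import torch

torch.manual_seed(0)
d = 4

def householder_product(vs):
    """Orthogonal matrix as a product of reflections H_i = I - 2 v v^T / ||v||^2."""
    Q = torch.eye(d)
    for v in vs:
        v = v / v.norm()
        Q = Q - 2.0 * torch.outer(v, v @ Q)   # apply H @ Q without forming H
    return Q

U = householder_product([torch.randn(d) for _ in range(d)])
V = householder_product([torch.randn(d) for _ in range(d)])
s = torch.rand(d) + 0.5                       # positive singular values

W = U @ torch.diag(s) @ V.T                   # the layer's weight matrix

# With the SVD in hand, expensive operations become trivial:
W_inv = V @ torch.diag(1.0 / s) @ U.T         # inverse via the SVD factors
logdet = torch.log(s).sum()                   # log|det W| = sum_i log s_i

# Sanity checks against the direct (slower) computations.
print(torch.allclose(W @ W_inv, torch.eye(d), atol=1e-4))
print(torch.allclose(logdet, torch.logdet(W), atol=1e-4))
```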
Skills
- Python
- TensorFlow
- PyTorch
- Keras
- NumPy
- MATLAB
- R
- Java
- C++
Portfolio
- Layer-wise Relevance Propagation (LRP) framework for PyTorch 1.6 available on GitHub.
- Automatic Layer-wise Relevance Propagation (LRP) framework for Tensorflow 1.4 available on GitHub.
- Expectation Maximization for Kalman filters, available on GitHub.
- Material from a study group on GANs, available on GitHub.
Service and leadership
- 2016-2017 Board member of the Computer Science student organization (DSAU) at Aarhus University.
- Many hours of gymnastics teaching.