Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Posts

mlnotes

projects

What if Neural Networks had SVDs?

Published:

Various Neural Networks employ time-consuming matrix operations like matrix inversion. Many such matrix operations are faster to compute given the Singular Value Decomposition (SVD). Techniques from [1, 2] allow using the SVD in Neural Networks without computing it. In theory, these techniques can speed up matrix operations; in practice, however, they are not fast enough.
We present an algorithm that is up to $27 \times $ faster than a previous approach, fast enough to speed up several matrix operations.
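To illustrate the claim that matrix operations become cheap once the SVD is available, here is a minimal NumPy sketch (not the paper's algorithm, just the standard identity it exploits): given $A = U \Sigma V^\top$, the inverse is $V \Sigma^{-1} U^\top$, which costs only a diagonal reciprocal and two matrix products.

```python
import numpy as np

# Given the SVD A = U S V^T, the inverse is V S^{-1} U^T:
# no further factorization is needed, only a diagonal reciprocal
# and two matrix products.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

U, s, Vt = np.linalg.svd(A)
A_inv = Vt.T @ np.diag(1.0 / s) @ U.T

# Agrees with the direct inverse (up to floating-point error).
assert np.allclose(A_inv, np.linalg.inv(A))
```

The same pattern gives cheap determinants ($\det A = \prod_i \sigma_i$ up to sign) and matrix square roots, which is why having the SVD "for free" inside a network is attractive.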

Attention Mechanisms for Explanations

Published:

Many local post-hoc explainability techniques, such as DeConvNet, Guided Backprop, Layer-wise Relevance Propagation, and Integrated Gradients, rely on “gradient-like” computations, where explanations are propagated backwards through Neural Networks, one layer at a time. One can alter this backward computation to include attention, which guides the explanation techniques to produce better explanations.
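As a concrete example of such a “gradient-like” backward computation, the sketch below (illustrative only, not the post's method) shows how Guided Backprop alters the standard ReLU backward rule: plain backprop gates the incoming signal by the forward activation mask, while Guided Backprop additionally zeroes negative signal before propagating it to the previous layer.

```python
import numpy as np

def backprop_relu(grad, x):
    # Standard backward rule: pass signal only where the forward input
    # to the ReLU was positive.
    return grad * (x > 0)

def guided_backprop_relu(grad, x):
    # Guided Backprop's altered rule: additionally drop negative
    # incoming signal, keeping only "positive evidence".
    return grad * (x > 0) * (grad > 0)

x = np.array([-1.0, 2.0, 3.0])   # forward inputs to the ReLU
g = np.array([0.5, -1.0, 2.0])   # signal arriving from the layer above

print(backprop_relu(g, x))         # [ 0. -1.  2.]
print(guided_backprop_relu(g, x))  # [0. 0. 2.]
```

An attention-augmented variant would reweight `grad` by attention values at this step before passing it on, which is the kind of alteration the post describes.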

Counterfactuals from Invertible Neural Networks

Published:

“How can I make a minimal and realistic change to an input such that the predicted outcome changes?” This is the question I aim to answer using invertible Neural Networks. The idea relates closely to pre-imaging from, e.g., kernel methods.
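A toy sketch of the idea, with an invertible affine map standing in for an invertible Neural Network (all names and the decision rule are illustrative): encode the input, make the smallest latent change that flips the prediction, then use invertibility to map the edited latent code back to input space.

```python
import numpy as np

# Invertible "network": f(x) = W x + b with well-conditioned W.
rng = np.random.default_rng(1)
W = rng.standard_normal((2, 2)) + 2.0 * np.eye(2)
b = rng.standard_normal(2)

f = lambda x: W @ x + b
f_inv = lambda z: np.linalg.solve(W, z - b)   # exact inverse of the affine map

x = np.array([0.5, -1.0])
z = f(x)

# Toy decision rule: the predicted class is the sign of z[0].
# Flip the prediction by nudging the latent code just across the boundary.
z_cf = z.copy()
z_cf[0] = -z[0] - 1e-6 * np.sign(z[0])

# Invertibility maps the edited code back to a concrete counterfactual input.
x_cf = f_inv(z_cf)
assert np.sign(f(x_cf)[0]) != np.sign(z[0])
```

The hope is that with a real invertible network, small edits in a well-structured latent space decode to changes that are both minimal and realistic in input space, much like recovering a pre-image in kernel methods.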

talks

teaching