There is a growing sense that neural networks need to be interpretable to humans. The field of neural network interpretability has formed in response to these concerns. As it matures, two major threads of research have begun to coalesce: feature visualization and attribution.
This article focuses on feature visualization. While feature visualization is a powerful tool, actually getting it to work involves a number of details. In this article, we examine the major issues and explore common approaches to solving them. We find that remarkably simple methods can produce high-quality visualizations. Along the way we introduce a few tricks for exploring variation in what neurons react to, how they interact, and how to improve the optimization process.
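At its core, feature visualization answers the question of what a network is looking for by generating an input that maximizes a chosen activation. The sketch below illustrates the basic idea with gradient ascent on the input image; the choice of pretrained model (torchvision's GoogLeNet), layer (`inception4c`), channel index, and hyperparameters are illustrative assumptions, not the exact setup described here.

```python
# Minimal sketch: feature visualization by activation maximization.
# Assumptions: pretrained GoogLeNet from torchvision; the layer, channel,
# learning rate, and step count are arbitrary illustrative choices.
import torch
import torchvision

model = torchvision.models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)  # we only optimize the input image

activations = {}

def hook(module, inputs, output):
    # Stash the layer's output so we can build an objective from it.
    activations["target"] = output

model.inception4c.register_forward_hook(hook)

# Start from a random image and optimize its pixels directly.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

channel = 42  # which feature (channel) to visualize; arbitrary choice
for step in range(256):
    optimizer.zero_grad()
    model(image)
    # Maximize the channel's mean activation by minimizing its negative.
    loss = -activations["target"][0, channel].mean()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)  # keep pixel values in a valid range
```

Run naively like this, the optimization tends to produce noisy, high-frequency images; much of the rest of the article concerns regularization and parameterization choices that steer it toward recognizable visualizations.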