Deep Learning · Visualization or Exposition Techniques for Deep Networks

Title / Authors

A Benchmark for Interpretability Methods in Deep Neural Networks
    Sara Hooker · Dumitru Erhan · Pieter-Jan Kindermans · Been Kim

Accurate, reliable and fast robustness evaluation
    Wieland Brendel · Jonas Rauber · Matthias Kümmerer · Ivan Ustyuzhaninov · Matthias Bethge

Approximate Feature Collisions in Neural Nets
    Ke Li · Tianhao Zhang · Jitendra Malik

Computing Linear Restrictions of Neural Networks
    Matthew Sotoudeh · Aditya V Thakur

CXPlain: Causal Explanations for Model Interpretation under Uncertainty
    Patrick Schwab · Walter Karlen

Deliberative Explanations: Visualizing Network Insecurities
    Pei Wang · Nuno Vasconcelos

Explanations can be manipulated and geometry is to blame
    Ann-Kathrin Dombrowski · Maximillian Alber · Christopher Anders · Marcel Ackermann · Klaus-Robert Müller · Pan Kessel

Fooling Neural Network Interpretations via Adversarial Model Manipulation
    Juyeon Heo · Sunghwan Joo · Taesup Moon

Full-Gradient Representation for Neural Network Visualization
    Suraj Srinivas · François Fleuret

Grid Saliency for Context Explanations of Semantic Segmentation
    Lukas Hoyer · Mauricio Munoz · Prateek Katiyar · Anna Khoreva · Volker Fischer

Intrinsic dimension of data representations in deep neural networks
    Alessio Ansuini · Alessandro Laio · Jakob H Macke · Davide Zoccolan

One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers
    Ari Morcos · Haonan Yu · Michela Paganini · Yuandong Tian

The Geometry of Deep Networks: Power Diagram Subdivision
    Randall Balestriero · Romain Cosentino · Behnaam Aazhang · Richard Baraniuk

Visualizing and Measuring the Geometry of BERT
    Emily Reif · Ann Yuan · Martin Wattenberg · Fernanda B Viegas · Andy Coenen · Adam Pearce · Been Kim

Visualizing the PHATE of Neural Networks
    Scott Gigante · Adam S Charles · Smita Krishnaswamy · Gal Mishne