Start with this overview blog post and a more recent addition.
Now it’s time to look at some specific techniques. Dig as deep as you like! These websites provide plenty of background information.
Deepvis (code and paper are linked; optional further reading on fooling neural networks)
LSTMVis (code and paper are linked)
Heatmapping - especially "Methods for Interpreting and Understanding Deep Neural Networks" (Montavon et al., 2017)
Sanity Checks for Saliency Maps - a reminder that explanations can be misleading (see the gradient-saliency sketch after this list)
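
If you want a feel for what these saliency papers are evaluating, the simplest variant is a vanilla gradient saliency map: the absolute gradient of a class score with respect to each input pixel. Below is a minimal sketch in TF2/Keras, not taken from any of the linked resources; `model`, `image`, and `class_index` are placeholders you would supply.

```python
import numpy as np
import tensorflow as tf

def gradient_saliency(model, image, class_index):
    """Vanilla gradient saliency: |d class_score / d pixel|.

    Assumes `image` is an (H, W, C) numpy array matching the
    model's expected input (hypothetical helper, for illustration).
    """
    image = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(image)                       # track gradients w.r.t. the input
        predictions = model(image)
        score = predictions[0, class_index]     # score of the class of interest
    grads = tape.gradient(score, image)         # shape (1, H, W, C)
    # Collapse the channel axis; larger values = more influential pixels.
    saliency = tf.reduce_max(tf.abs(grads), axis=-1)[0]
    return saliency.numpy()
```

As the sanity-checks paper shows, a plausible-looking heatmap from a method like this is not by itself evidence that the explanation reflects what the model has learned.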
About implementation: to easily apply introspection techniques to your TF/Keras models, you can give tf-explain a try (it currently only works on image data). A usage sketch follows.
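
For example, Grad-CAM takes only a few lines with tf-explain. The sketch below follows the pattern in the library's README; the exact arguments may differ across versions, and the image file and ImageNet class index (281, "tabby cat") are illustrative placeholders.

```python
import tensorflow as tf
from tf_explain.core.grad_cam import GradCAM

# Any Keras image model works; a pretrained VGG16 keeps the example self-contained.
model = tf.keras.applications.VGG16(weights="imagenet", include_top=True)

# "cat.jpg" is a placeholder path to an image on disk.
img = tf.keras.preprocessing.image.load_img("cat.jpg", target_size=(224, 224))
img = tf.keras.preprocessing.image.img_to_array(img)

# tf-explain explainers take an (images, labels) tuple; labels can be None here.
data = ([img], None)

explainer = GradCAM()
grid = explainer.explain(data, model, class_index=281)  # 281 = "tabby cat"
explainer.save(grid, ".", "grad_cam.png")               # writes the heatmap as a PNG
```

The other explainers in tf_explain.core (e.g. occlusion sensitivity, activation extraction) follow the same explain/save pattern, so swapping techniques mostly means changing the import.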