Assignment 4: Graphs & DenseNets

Deadline: November 15th, 9am

Note: Find the notebook from the exercises here.

Graph-based Execution

So far, we have been using so-called “eager execution” exclusively: Commands are run as they are defined, i.e. writing y = tf.matmul(X, w) actually executes the matrix multiplication.

In TensorFlow 1.x, things used to be different: Lines like the above would only define the computation graph but not do any actual computation. The actual computation would happen later in dedicated “sessions” that execute the graph. Eager execution was later added as an alternative way of writing programs and is now the default, mainly because it is much more intuitive and allows for a more natural workflow when designing and testing models.

Graph execution has one big advantage: It is very efficient, because entire models (or even training loops) can be executed in low-level C/CUDA code without ever going “back up” to Python (which is slow). As such, TF 2.0 still retains the option to run computations in graph mode if you so wish – let’s have a look!

As expected, there is a tutorial on the TF website as well as this one, which goes into extreme depth on all the subtleties. The basic gist is: Annotating a Python function with tf.function makes TensorFlow trace it into a graph on the first call and run that graph on subsequent calls.

Go back to some of your previous models and sprinkle some tf.function annotations in there. You might need to refactor slightly – you need to actually wrap things into a function!
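As a rough sketch of what this refactoring could look like, here is a training step wrapped into a tf.function. The model, optimizer and loss are placeholders – substitute whatever you used in your previous assignments.

```python
import tensorflow as tf

# Placeholder model/optimizer/loss -- use your own from previous assignments.
model = tf.keras.Sequential([tf.keras.layers.Dense(64, activation="relu"),
                             tf.keras.layers.Dense(10)])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)


@tf.function  # traced into a graph on the first call, executed as a graph afterwards
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss = loss_fn(y, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```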

DenseNet

Previously, we saw how to build neural networks in a purely sequential manner – each layer receives one input and produces one output that serves as input to the next layer. There are many architectures that do not follow this simple scheme. You might ask yourself how this can be done in Keras. One answer is via the so-called functional API. There is an in-depth guide here. Reading just the intro should be enough for a basic grasp on how to use it, but of course you can read more if you wish.
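To give a flavor of the functional API before you read the guide, here is a minimal sketch (layer sizes and the input shape are arbitrary choices, not requirements): each layer is called on a tensor and returns a tensor, so non-sequential wiring such as a skip connection is just a matter of reusing tensors.

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
skip = x                                         # keep a handle for a skip connection
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.Concatenate()([skip, x])     # non-sequential: merge two tensors
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
```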

Next, use the functional API to implement a DenseNet. You do not need to follow the exact same architecture; in fact, you will probably want to make it smaller for efficiency reasons. Just make sure you have one or more “dense blocks” with multiple layers (say, three or more) each. You can also leave out batch normalization (this will be treated later in the class) as well as “bottleneck layers” (1x1 convolutions) if you want.
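One possible way to structure such a dense block is sketched below. The number of layers, growth rate, input shape and the pooling “transition” between blocks are illustrative choices only – pick whatever fits your dataset and compute budget.

```python
import tensorflow as tf


def dense_block(x, num_layers=3, growth_rate=12):
    """Simplified dense block: each layer receives the concatenation of all
    previous feature maps (no batch norm / bottleneck layers here)."""
    for _ in range(num_layers):
        out = tf.keras.layers.Conv2D(growth_rate, 3, padding="same",
                                     activation="relu")(x)
        x = tf.keras.layers.Concatenate()([x, out])
    return x


inputs = tf.keras.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = dense_block(x, num_layers=3, growth_rate=12)
x = tf.keras.layers.AveragePooling2D()(x)        # simple "transition" between blocks
x = dense_block(x, num_layers=3, growth_rate=12)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10)(x)
model = tf.keras.Model(inputs, outputs)
```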

Bonus: Can you implement DenseNet with the Sequential API? You might want to look at how to implement custom layers (shorter version here)…
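One possible approach for the bonus, sketched under the assumption that you package an entire dense block into a custom layer so the model itself becomes a plain stack of layers (names and sizes here are made up for illustration):

```python
import tensorflow as tf


class DenseBlock(tf.keras.layers.Layer):
    """Wraps a whole dense block into a single layer so it can be used
    inside tf.keras.Sequential."""

    def __init__(self, num_layers=3, growth_rate=12, **kwargs):
        super().__init__(**kwargs)
        self.convs = [tf.keras.layers.Conv2D(growth_rate, 3, padding="same",
                                             activation="relu")
                      for _ in range(num_layers)]

    def call(self, x):
        for conv in self.convs:
            # concatenate each layer's output with everything seen so far
            x = tf.concat([x, conv(x)], axis=-1)
        return x


model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    DenseBlock(num_layers=3, growth_rate=12),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])
```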

What to Hand In

The next two parts are just here for completeness/reference, to show other ways of working with Keras and some additional TensorBoard functionalities. Check them out if you want – we will also (try to) properly present them in the exercise later.

Bonus: High-level Training Loops with Keras

As mentioned previously, Keras actually has ways of packing entire training loops into very few lines of code. This is good whenever you have a fairly “standard” task that doesn’t require much besides iterating over a dataset and computing a loss/gradients at each step. In this case, you don’t need the customizability that writing your own training loops gives you.
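To illustrate how compact this can be, here is a minimal compile/fit example. MNIST and the layer sizes are stand-ins – use your own data and architecture.

```python
import tensorflow as tf

# MNIST as a stand-in dataset; replace with whatever you are actually using.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model = tf.keras.Sequential([tf.keras.layers.Dense(128, activation="relu"),
                             tf.keras.layers.Dense(10)])

# compile() bundles optimizer, loss and metrics; fit() runs the whole training loop.
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=32, epochs=5, validation_split=0.1)
```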

As usual, here are some tutorials that cover this:

There are also some interesting overview articles in the “guide” section, but this should suffice for now. Once again, go back to your previous models and rebuild them with these high-level training loops! Also, from now on, feel free to run your models like this if you want (and can get it to work for your specific case).

Bonus: TensorBoard Computation Graphs

You can display the computation graphs TensorFlow uses internally in TensorBoard. This can be useful for debugging purposes, as well as to get an impression of what is going on “under the hood” in your models. More importantly, this can be combined with profiling, which lets you see how much time/memory specific parts of your model take.

To look at computation graphs, you need to trace computations explicitly. See the last part of this guide for how to trace tf.function-annotated computations. Note: It seems you have to perform the trace the first time the function is called (e.g. on the first training step).
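A minimal sketch of the tracing pattern, assuming a tf.function-annotated step and a log directory of your choosing (the function body and names here are placeholders):

```python
import tensorflow as tf

writer = tf.summary.create_file_writer("logs/graph_demo")


@tf.function
def train_step(x):
    # placeholder computation -- in practice this would be your real training step
    return tf.reduce_sum(tf.square(x))


tf.summary.trace_on(graph=True)               # start recording before the first call
loss = train_step(tf.random.normal((4, 8)))   # first call triggers the trace
with writer.as_default():
    tf.summary.trace_export(name="train_step_trace", step=0)  # write graph for TensorBoard
```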