It is well known that GPUs can significantly accelerate neural network training. However, many questions surround the use of GPUs, especially for beginners. In this talk, we will dissect a particular convolutional neural network (NN) and use it as an example to answer these frequently asked questions. We will illustrate how to summarize and visualize the architecture of a NN, from which we will make a coarse estimate of its memory requirement. Then, we will show how to accurately check GPU memory usage at runtime and offer several suggestions for the case where GPU memory runs out. The live demo in the talk uses the Keras interface on the Graham cluster, and the source code will be provided after the talk.
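The coarse memory estimate mentioned above can be sketched from layer shapes alone. The snippet below is an illustrative calculation only; the layer sizes are hypothetical and are not the network dissected in the talk.

```python
# Hypothetical sketch: coarse memory estimate for a small convolutional
# network from its layer shapes alone (sizes are illustrative, not the
# network from the talk).

def conv2d_params(k_h, k_w, c_in, c_out):
    """Weights (k_h x k_w x c_in per filter) plus one bias per filter."""
    return (k_h * k_w * c_in + 1) * c_out

def dense_params(n_in, n_out):
    """Fully connected layer: one weight per input, plus a bias, per output."""
    return (n_in + 1) * n_out

# Example stack: two conv layers followed by a classifier head.
params = (
    conv2d_params(3, 3, 3, 32)      # conv1: 3x3 kernels, RGB in, 32 filters
    + conv2d_params(3, 3, 32, 64)   # conv2: 3x3 kernels, 32 in, 64 filters
    + dense_params(64 * 8 * 8, 10)  # 8x8x64 feature map flattened -> 10 classes
)

bytes_per_param = 4  # float32
weight_mib = params * bytes_per_param / 2**20
print(f"{params} parameters, ~{weight_mib:.2f} MiB for the weights alone")

# Note: optimizers such as Adam keep extra per-parameter state, and
# activations scale with batch size, so the real GPU footprint at training
# time is several times larger than the weights by themselves.
```

This mirrors the per-layer parameter counts that Keras's `model.summary()` prints, which is one way to cross-check such an estimate at the demo stage.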