Introduction: Understanding TensorFlow 2

Get an overview of TensorFlow 2 and what we will learn.

We'll cover the following

In this chapter, we’ll get an in-depth understanding of TensorFlow, an open-source distributed numerical computation framework that will be the main platform on which we implement all of our exercises.

Flow of the topics

Here, we cover the following topics:

• What is TensorFlow?
• The building blocks of TensorFlow (for example, variables and operations)
• Using Keras for building models
• Implementing our first neural network

TensorFlow 2

We’ll get started with TensorFlow by defining a simple calculation and computing it with the framework. Once that works, we’ll investigate how TensorFlow executes the computation: how it builds a computational graph from our code and runs that graph to produce the desired outputs. We’ll then dive into the details of how the TensorFlow architecture operates, with the help of an analogy of how a fancy café works. Finally, we’ll see how TensorFlow 1 used to work so that we can better appreciate the features TensorFlow 2 offers.
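As a preview of the difference between eager execution and graph execution, here is a minimal sketch (the specific calculation is illustrative, not the exact one we'll use later): TensorFlow 2 runs operations immediately by default, while decorating a function with `tf.function` traces it into a computational graph.

```python
import tensorflow as tf

# Eager execution (the TensorFlow 2 default): operations run immediately.
a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b + 1.0  # computed right away, no session needed
print(c.numpy())  # 7.0

# Decorating a function with tf.function traces it into a computational
# graph, which TensorFlow can then optimize and execute as a whole.
@tf.function
def compute(x, y):
    return x * y + 1.0

print(compute(a, b).numpy())  # 7.0
```

Both calls produce the same result; the difference is that the decorated version runs as a graph rather than operation by operation.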

Note: When we use the word “TensorFlow” by itself, we are referring to TensorFlow 2; we’ll say “TensorFlow 1” explicitly when we mean the earlier version.

Having gained a good conceptual and technical understanding of how TensorFlow operates, we’ll look at some of the important computations the framework offers. First, we’ll define the various data structures TensorFlow provides, such as variables and tensors, and see how to read inputs through data pipelines. Then we’ll work through some neural network–related operations (for example, the convolution operation, defining losses, and optimization).
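To give a flavor of these building blocks ahead of the detailed coverage, here is a small sketch (the values are made up for illustration): a tensor, a variable, and a `tf.data` input pipeline.

```python
import tensorflow as tf

# A tensor is an immutable multi-dimensional array; a variable holds mutable
# state (e.g. model weights) that TensorFlow can update during training.
t = tf.constant([[1.0, 2.0], [3.0, 4.0]])
w = tf.Variable(tf.ones(shape=(2, 2)))
w.assign_add(t)  # variables support in-place updates

# tf.data builds input pipelines: here, batching a small in-memory dataset.
ds = tf.data.Dataset.from_tensor_slices(tf.range(6)).batch(2)
batches = [batch.numpy().tolist() for batch in ds]
print(batches)  # [[0, 1], [2, 3], [4, 5]]
```

In a real pipeline, `from_tensor_slices` would typically be replaced by reading files from disk, with transformations such as shuffling and preprocessing chained before batching.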

Finally, we will apply this knowledge in an exciting exercise: implementing a neural network that can recognize images of handwritten digits. Along the way, we’ll see how quickly and easily we can prototype neural networks using a high-level submodule such as Keras.
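As a taste of how compact this can be, here is a minimal Keras sketch for classifying 28×28 grayscale digit images into 10 classes. The layer sizes are illustrative, not the exact network we’ll build in the exercise; the forward pass below uses random data only to confirm the output shape.

```python
import numpy as np
import tensorflow as tf

# A small fully connected classifier for 28x28 images, 10 output classes.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Forward pass on random data just to check shapes; real training would
# call model.fit on an actual dataset such as MNIST.
dummy = np.random.rand(4, 28, 28).astype("float32")
preds = model.predict(dummy, verbose=0)
print(preds.shape)  # (4, 10)
```

Each row of `preds` is a softmax distribution over the ten digit classes; training on real data is what we’ll do in the exercise itself.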