Now that you understand the fundamental building blocks like layers and activation functions, let's put theory into practice. This section guides you through constructing your first neural networks using both the Sequential and Functional APIs in Keras. We'll build simple models suitable for a basic classification task, reinforcing the concepts covered in this chapter.

We'll start by creating some synthetic data to work with. This allows us to focus purely on the network construction aspect without needing to worry about complex data loading procedures yet. We will use Scikit-learn's `make_moons` function to generate a two-dimensional dataset where two classes are intertwined in a non-linear way, making it a good simple challenge for a neural network.

```python
import numpy as np
from sklearn.datasets import make_moons

import keras
from keras import layers
from keras import models
from keras import utils

# Generate synthetic data
X, y = make_moons(n_samples=100, noise=0.15, random_state=42)

# Print shapes to verify
print(f"Features shape: {X.shape}")
print(f"Labels shape: {y.shape}")

# Optionally, visualize the data (requires matplotlib)
# import matplotlib.pyplot as plt
# plt.scatter(X[:, 0], X[:, 1], c=y, cmap='viridis')
# plt.title("Synthetic Moons Dataset")
# plt.xlabel("Feature 1")
# plt.ylabel("Feature 2")
# plt.show()
```

This code generates 100 data points, each with two features (`X.shape` will be `(100, 2)`), and corresponding binary labels (`y.shape` will be `(100,)`).

## Building a Network with the Sequential API

The Sequential API is ideal for models where layers are stacked linearly, one after the other. It's the simplest way to get started. Let's build a small network with one hidden layer:

- An input layer, implicitly defined by the `input_shape` of the first layer (or explicitly using `keras.Input`). Our data has 2 features.
- A `Dense` hidden layer with 8 neurons and a ReLU activation function.
- An output `Dense` layer with 1 neuron and a sigmoid activation function, suitable for binary classification.

```python
# Define the model using the Sequential API
model_sequential = models.Sequential(
    [
        keras.Input(shape=(2,), name="input_layer"),  # Define input shape
        layers.Dense(8, activation='relu', name="hidden_layer"),
        layers.Dense(1, activation='sigmoid', name="output_layer")
    ],
    name="sequential_model"
)

# Display the model's architecture
model_sequential.summary()
```

Running `model_sequential.summary()` provides a concise overview of the layers, their output shapes, and the number of trainable parameters:

```text
Model: "sequential_model"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ hidden_layer (Dense)            │ (None, 8)              │            24 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ output_layer (Dense)            │ (None, 1)              │             9 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
 Total params: 33 (132.00 B)
 Trainable params: 33 (132.00 B)
 Non-trainable params: 0 (0.00 B)
```

The `None` in the output shape represents the batch size, which can vary. Notice how the parameters are calculated:

- Hidden layer: (2 input features × 8 neurons) + 8 biases = 16 + 8 = 24 parameters.
- Output layer: (8 inputs from the hidden layer × 1 neuron) + 1 bias = 8 + 1 = 9 parameters.
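As a brief aside (an illustration we've added, not part of the original walkthrough), the same model can also be assembled incrementally with the Sequential API's `add()` method, and `count_params()` lets you confirm the parameter arithmetic above in code. The model name here is an arbitrary choice:

```python
# Equivalent incremental construction of the same architecture.
model_incremental = models.Sequential(name="sequential_model_v2")
model_incremental.add(keras.Input(shape=(2,)))
model_incremental.add(layers.Dense(8, activation='relu'))
model_incremental.add(layers.Dense(1, activation='sigmoid'))

# Confirm the parameter count derived by hand: 24 + 9 = 33
print(model_incremental.count_params())  # 33
```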
## Building the Same Network with the Functional API

The Functional API offers more flexibility for building complex models with multiple inputs/outputs or shared layers. While our current network is simple, let's rebuild it using this API to understand the workflow. The core idea is to define layers as functions that operate on tensors:

1. Define an input tensor using `keras.Input`, specifying the shape of the input data.
2. Call layers on the tensors, connecting them sequentially.
3. Create the `Model` instance by specifying the input and output tensors.

```python
# Define the input tensor
inputs = keras.Input(shape=(2,), name="input_tensor")

# Define the hidden layer, connected to the input
hidden = layers.Dense(8, activation='relu', name="hidden_layer")(inputs)

# Define the output layer, connected to the hidden layer
outputs = layers.Dense(1, activation='sigmoid', name="output_layer")(hidden)

# Create the model
model_functional = models.Model(inputs=inputs, outputs=outputs, name="functional_model")

# Display the model's architecture
model_functional.summary()
```

The summary output will be identical to the Sequential model's summary, confirming we've built the same architecture:

```text
Model: "functional_model"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ input_tensor (InputLayer)       │ [(None, 2)]            │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ hidden_layer (Dense)            │ (None, 8)              │            24 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ output_layer (Dense)            │ (None, 1)              │             9 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
 Total params: 33 (132.00 B)
 Trainable params: 33 (132.00 B)
 Non-trainable params: 0 (0.00 B)
```

A significant advantage of the Functional API is the ability to easily visualize the model graph. You can use Keras's `plot_model` utility (which requires the `pydot` and `graphviz` libraries to be installed).

```python
# Generate a visual representation of the functional model
# Note: requires pydot and graphviz to be installed
# utils.plot_model(model_functional, show_shapes=True, show_layer_activations=True, to_file="functional_model.png")
```

This would generate a diagram clearly showing the flow of data through the layers:

*Figure: The functional model architecture, showing the input layer connected to the hidden dense layer (ReLU activation), which connects to the output dense layer (sigmoid activation). Output shapes are indicated for each layer.*

## Choosing Between APIs

- **Sequential API:** Use it for simple, linear stacks of layers. It's concise and easy to read for straightforward architectures.
- **Functional API:** Prefer it for models with non-linear topology, such as those with multiple inputs or outputs, shared layers, or residual connections. It provides much greater flexibility (see the sketch after this list for an example the Sequential API cannot express).
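To make that flexibility concrete, here is a minimal sketch (our addition, not part of the original example) of a model with two separate inputs, a topology the Sequential API cannot represent. All names and layer sizes here are illustrative assumptions:

```python
# A two-input model: each input gets its own branch, and the branches
# are merged before the final prediction. Names and sizes are illustrative.
input_a = keras.Input(shape=(2,), name="input_a")
input_b = keras.Input(shape=(3,), name="input_b")

# Independent processing branches
branch_a = layers.Dense(4, activation='relu', name="branch_a")(input_a)
branch_b = layers.Dense(4, activation='relu', name="branch_b")(input_b)

# Merge the branches and produce a single binary prediction
merged = layers.Concatenate(name="merged")([branch_a, branch_b])
output = layers.Dense(1, activation='sigmoid', name="output")(merged)

model_two_input = models.Model(inputs=[input_a, input_b], outputs=output)
model_two_input.summary()
```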
You have now successfully built your first neural network architectures using Keras! You defined the layers, specified activation functions, and connected them using both the Sequential and Functional APIs. The next step, covered in Chapter 3, is to compile these models by defining a loss function and optimizer, and then train them on data using the `fit` method.
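As a preview (Chapter 3 covers this properly), a compile-and-fit call for this binary classification setup might look like the sketch below. The optimizer, loss, and training settings shown are common choices we've assumed for illustration, not prescriptions from this chapter:

```python
# Preview sketch: compile and train the Sequential model.
# Optimizer, loss, epochs, and batch size are illustrative choices.
model_sequential.compile(
    optimizer='adam',                # a widely used default optimizer
    loss='binary_crossentropy',      # matches the single sigmoid output
    metrics=['accuracy']
)
history = model_sequential.fit(X, y, epochs=50, batch_size=16, verbose=0)
print(f"Final training accuracy: {history.history['accuracy'][-1]:.3f}")
```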