This chapter establishes the fundamental concepts upon which neural networks are built. We begin by examining the structure and function of an artificial neuron, the basic processing unit. You will learn how inputs are combined using weights and biases, the learnable parameters that appear in the pre-activation calculation z = ∑(weight × input) + bias.
We then introduce activation functions, such as Sigmoid, Tanh, and ReLU, explaining their role in introducing non-linearity (a = f(z)), which is essential for learning complex patterns. Finally, we see how these individual neurons are organized into layers (input, hidden, and output) and connected to form the basic architecture of a feedforward neural network. By the end of this chapter, you will understand the key components and how they fit together to process information.
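The two formulas above can be combined into a single computation. The following is a minimal sketch of one artificial neuron: it computes the weighted sum z = ∑(weight × input) + bias, then applies a Sigmoid activation a = f(z). The function name and example values are illustrative, not part of the chapter's material.

```python
import math

def neuron_output(inputs, weights, bias):
    """One artificial neuron: weighted sum plus bias, then Sigmoid activation."""
    # Pre-activation: z = sum(w_i * x_i) + b
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Activation: a = sigmoid(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

# Example with two inputs: z = 0.5*0.8 + (-1.0)*0.2 + 0.1 = 0.3
a = neuron_output(inputs=[0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
print(round(a, 4))  # sigmoid(0.3) ≈ 0.5744
```

Swapping the Sigmoid for Tanh or ReLU changes only the final line; the weighted-sum step is the same for every activation function covered in this chapter.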
1.1 From Biological to Artificial Neurons
1.2 Weights and Biases: The Network's Parameters
1.3 Activation Functions: Introducing Non-Linearity
1.4 Structuring Networks: Layers and Connections
1.5 A Simple Feedforward Network Example
1.6 Practice: Calculating Neuron Output
© 2025 ApX Machine Learning