What is a Residual Neural Network?

A residual neural network, commonly referred to as a “ResNet,” is a well-known type of artificial neural network. It builds on constructs inspired by the pyramidal cells of the cerebral cortex, and it does so by using shortcuts, or “skip connections,” to jump over some layers.

Typical residual network models are implemented with skips that jump over two or three layers, with batch normalization and nonlinearities in between. In some cases an additional weight matrix is used to learn the skip weights; models of this kind are known as “HighwayNets.” Models with several parallel skips are called “DenseNets.” In the residual-network literature, non-residual networks are often described as “plain” networks.
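As a concrete illustration, here is a minimal sketch of such a two-layer residual block, written in PyTorch. The framework and the layer sizes are our own illustrative choices, not a reproduction of the original ResNet implementation:

```python
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """A basic two-layer residual block: output = ReLU(F(x) + x)."""

    def __init__(self, channels):
        super().__init__()
        # Two convolutions with batch normalization, matching the
        # "two-layer skip with batch normalization in between" pattern above.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        shortcut = x                           # the skip connection carries the input forward
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + shortcut)          # add the shortcut before the final nonlinearity
```

Stacking several such blocks, for example with nn.Sequential, gives the kind of deep residual network discussed in the rest of this article.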

A major reason for skipping layers is to avoid vanishing gradients and similar issues: as the gradient is back-propagated through earlier layers, the repeated multiplication can make it extremely small. Skip connections work around this by reusing the activations of a preceding layer until the adjacent layer has learned its weights. During training, these weights adapt to mute the upstream layer and amplify the previously skipped layer. In the simplest case, only the weights connecting the adjacent layers come into play.

However, this works effectively only when a single non-linear layer is stepped over, or when all of the intermediate layers are linear. If that is not the case, an explicit weight matrix should be learned for the skipped connection, which is where a HighwayNet becomes worth considering.
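To sketch that idea, the shortcut itself can be given its own learnable weights. Below, a 1×1 convolution plays that role, one common way to project the input when its shape does not match the block’s output; again, PyTorch and the specific layer sizes are illustrative assumptions rather than a prescribed design:

```python
import torch.nn as nn
import torch.nn.functional as F

class ProjectedResidualBlock(nn.Module):
    """Residual block whose shortcut has its own weight matrix (a 1x1 convolution)."""

    def __init__(self, in_channels, out_channels, stride=2):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # The skip path gets learnable weights instead of being a pure identity.
        self.shortcut = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + self.shortcut(x))
```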

Skipping effectively simplifies the network, so that fewer layers are used during the initial training stage. This speeds up learning by reducing the impact of vanishing gradients, because there are fewer layers for the gradient to propagate through. The network then gradually restores the skipped layers as it learns the feature space.

Towards the end of training, when all layers are expanded, the network stays closer to the manifold and therefore learns more quickly. A neural network without residual parts explores more of the feature space, which makes it more vulnerable to perturbations that push it off the manifold and means it needs extra training data to recover.

What Created the Need for Residual Neural Networks?

After AlexNet’s triumph at the 2012 ILSVRC classification competition, the deep residual network arguably became the most groundbreaking innovation in the history of deep learning and computer vision. ResNet makes it possible to train hundreds, if not thousands, of layers while still achieving compelling performance.

Numerous computer vision applications took advantage of the residual neural network’s strong representational power and saw a massive boost. Image classification was not the only one to benefit: face recognition and object detection also profited from this groundbreaking innovation.

Since residual neural networks astonished the research community at their introduction in 2015, many researchers have tried to uncover the secrets behind their success, and it is safe to say that plenty of refinements have since been made to the ResNet architecture.

The Vanishing Gradient Issue

The vanishing gradient problem is well known in the deep learning and data science community. It often appears when training artificial neural networks with backpropagation and gradient-based learning. As discussed earlier, gradients are used to update the weights of the network.

Sometimes, however, the gradient becomes incredibly small and all but vanishes. This prevents the weights from changing their values, and the network effectively stops training: the same values propagate over and over without any meaningful learning taking place.
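As a rough, hypothetical illustration of why this happens: the backpropagated gradient is a product of per-layer factors, so if each layer contributes a factor of at most 0.25 (the maximum slope of a sigmoid activation), the gradient reaching the earliest layers of a deep stack all but disappears:

```python
# Rough illustration: the backpropagated gradient is a product of per-layer
# factors, and multiplying many small numbers drives it toward zero.
per_layer_factor = 0.25   # e.g. the maximum derivative of a sigmoid activation
for depth in (5, 20, 50):
    gradient = per_layer_factor ** depth
    print(f"{depth} layers -> surviving gradient factor ~ {gradient:.1e}")
```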

ResNet and Deep Learning

Every deep learning model has multiple layers that allow it to understand input features and make an informed decision. While that is straightforward enough, how do networks actually identify the various features present in the data?

It is fair to think of neural networks as universal function approximators: a model tries to learn the parameters that closely approximate the function mapping inputs to the correct outputs. Adding more layers is a natural way to add parameters, and it also makes it possible to model complicated non-linear functions.
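For contrast with the residual blocks sketched earlier, here is what a “plain” stack of layers looks like in PyTorch; every additional layer adds parameters and non-linearity. The depth and widths below are arbitrary, illustrative values:

```python
import torch.nn as nn

def make_plain_network(depth=20, width=64, in_features=32, out_features=10):
    """Stack `depth` fully connected layers with ReLU in between (no skip connections)."""
    layers = [nn.Linear(in_features, width), nn.ReLU()]
    for _ in range(depth - 2):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, out_features))
    return nn.Sequential(*layers)
```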

However, this does not mean that stacking ever more layers automatically improves performance. If you look closely, there is a catch: while adding layers often improves performance at first, under some conditions the results change drastically, with accuracy saturating and then rapidly declining.

Understanding the Issue With Multiple Layers

We must first understand how models learn from training data. Each input is passed forward through the model (the feedforward pass) and the resulting error is then passed back through it (backpropagation). During backpropagation, we update the model’s weights according to how the input was classified: the update subtracts the gradient of the loss function with respect to each weight from that weight’s previous value, scaled by a learning rate.
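In code, that update step amounts to plain gradient descent. The toy example below, with a single weight and a squared-error loss, is only a sketch; real training loops typically delegate this step to an optimizer:

```python
import torch

# Toy example: one weight, a squared-error loss, and a single update step.
w = torch.tensor(2.0, requires_grad=True)     # the weight's previous value
x, target = torch.tensor(3.0), torch.tensor(9.0)

loss = (w * x - target) ** 2                  # feedforward: compute the loss
loss.backward()                               # backpropagation: compute d(loss)/dw

learning_rate = 0.01
with torch.no_grad():
    w -= learning_rate * w.grad               # subtract the gradient, scaled by the learning rate

print(w)                                      # the weight has moved toward the value that reduces the loss
```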

How ResNet Solves the Vanishing Gradient Problem

As mentioned repeatedly, residual neural networks are an effective remedy for the vanishing gradient problem. The shortcuts that skip two or three layers change how gradients are calculated at every layer. Put simply, passing the input straight through to the output prevents the skipped layers from shrinking the gradient on that path, which means the learning procedure for those specific layers can effectively be bypassed. It also explains how the gradient finds its way back into the earlier parts of the network.
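One way to see this is to write the block as y = F(x) + x and differentiate: the shortcut contributes a constant 1 to dy/dx, so even if the gradient through the skipped layers is tiny, the overall gradient cannot vanish. The small autograd check below uses an arbitrary stand-in for F:

```python
import torch

x = torch.tensor(1.5, requires_grad=True)

def residual_branch(x):
    # Stand-in for the skipped layers F(x); imagine its gradient is very small.
    return 1e-6 * torch.tanh(x)

y = residual_branch(x) + x     # residual form: y = F(x) + x
y.backward()
print(x.grad)                  # ~1.0: the identity path keeps the gradient from vanishing
```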

As training continues, the model learns to keep the layers that are useful and to suppress those that do not help, converting the latter into identity mappings. This is a significant factor behind the residual neural network’s success, because it is remarkably simple for a residual block to learn a mapping to the identity function.
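A hypothetical illustration of why identity mappings are so easy to reach: if the residual branch’s weights are driven to zero, the block simply passes its input through unchanged:

```python
import torch
import torch.nn as nn

# If the residual branch's weights are zero, F(x) is zero and the block
# simply forwards its input: y = F(x) + x = x.
branch = nn.Linear(4, 4)
nn.init.zeros_(branch.weight)
nn.init.zeros_(branch.bias)

x = torch.randn(2, 4)
y = branch(x) + x             # residual form with a "switched off" branch
print(torch.allclose(y, x))   # True: the block has become an identity mapping
```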

Furthermore, the option of effectively hiding layers that do not help is immensely useful. A very large number of layers can make optimization difficult, but with residual connections the network can settle on which layers to keep and which ones serve no purpose.

Final Thoughts

It is fair to say that the residual architecture has been incredibly helpful for improving the performance of neural networks with many layers. At their core, ResNets are ordinary networks with a minor modification: they follow the same functional steps as a CNN (convolutional neural network) or a similar architecture, with one additional step that tackles the vanishing gradient problem and related issues.
