Recurrent Neural Network in Autonomous Vehicles

Ritik Bompilwar
4 min read · May 23, 2021


Recurrent Neural Networks (RNN)

Recurrent Neural Networks (Source: Simplilearn)

Recurrent Neural Networks (RNNs) are a class of neural networks in which the output from the previous step is fed as input to the current step.

In a feed-forward neural network, we assume that all the data fed into the network is available at the same time and that the inputs are independent of one another. Derived from feedforward neural networks, RNNs can instead use their internal state (memory) to process variable-length sequences of inputs.

Fully connected Recurrent Neural Network (Source: Simplilearn)

Here, x, h, and y are the input layer, hidden layer, and output layer respectively. A, B, and C are the network parameters (weight matrices) shared across time steps: A maps the input to the hidden layer, B carries the hidden state from one step to the next, and C maps the hidden state to the output. The hidden state at a given time t is computed from the current input x(t) and the previous hidden state h(t-1). The state at each step is fed back into the network so that past context can improve future outputs.
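To make the recurrence concrete, here is a minimal NumPy sketch of that forward pass, reusing A, B, and C as the input, recurrent, and output weight matrices (the layer sizes and random initialization are illustrative assumptions):

import numpy as np

# Minimal vanilla RNN forward pass; sizes are illustrative assumptions
input_size, hidden_size, output_size = 4, 8, 2
rng = np.random.default_rng(0)
A = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
B = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (recurrence)
C = rng.normal(scale=0.1, size=(output_size, hidden_size))  # hidden -> output

def rnn_forward(xs):
    """xs: sequence of inputs, shape (T, input_size)."""
    h = np.zeros(hidden_size)        # initial hidden state
    outputs = []
    for x in xs:
        h = np.tanh(A @ x + B @ h)   # h(t) combines x(t) with h(t-1)
        outputs.append(C @ h)        # y(t) is read off the hidden state
    return np.array(outputs)

ys = rnn_forward(rng.normal(size=(5, input_size)))  # a 5-step sequence
print(ys.shape)  # (5, 2): one output per time step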

Long Short-Term Memory Network (LSTM)

Animated LSTM cell (Source: Raimi Karim)

The Long Short-Term Memory (LSTM) network was developed to overcome the vanishing and exploding gradient problems that arise in deep recurrent neural networks.

The basic idea behind an LSTM is to “remember” the data the network has seen so far and to “forget” the irrelevant data. This is done using activation-function layers called “gates”, each serving a different purpose. Each LSTM recurrent unit also maintains a vector called the Internal Cell State, which carries the information the previous LSTM recurrent unit chose to retain.

- Forget Gate: determines the extent to which the previous data is forgotten.
- Input Gate: determines the extent to which new information is written onto the Internal Cell State.
- Input Modulation Gate: generally considered a sub-part of the Input Gate; it modulates the information the Input Gate writes onto the Internal Cell State by adding non-linearity and making the information zero-mean.
- Output Gate: determines the output (the next hidden state) to be generated from the current Internal Cell State.
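As a rough sketch, one LSTM step can be written out gate by gate in NumPy; the per-gate parameter layout, sizes, and initialization below are assumptions for illustration, not a production implementation:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step; W, U, b hold per-gate parameters keyed by
    f (forget), i (input), g (input modulation), o (output)."""
    f = sigmoid(W["f"] @ x + U["f"] @ h_prev + b["f"])   # forget gate
    i = sigmoid(W["i"] @ x + U["i"] @ h_prev + b["i"])   # input gate
    g = np.tanh(W["g"] @ x + U["g"] @ h_prev + b["g"])   # modulation: non-linear, zero-mean
    o = sigmoid(W["o"] @ x + U["o"] @ h_prev + b["o"])   # output gate
    c = f * c_prev + i * g        # updated Internal Cell State
    h = o * np.tanh(c)            # next hidden state (the output)
    return h, c

hidden, inputs = 8, 4
rng = np.random.default_rng(1)
W = {k: rng.normal(scale=0.1, size=(hidden, inputs)) for k in "figo"}
U = {k: rng.normal(scale=0.1, size=(hidden, hidden)) for k in "figo"}
b = {k: np.zeros(hidden) for k in "figo"}
h, c = lstm_step(rng.normal(size=inputs), np.zeros(hidden), np.zeros(hidden), W, U, b)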

Gated Recurrent Units (GRU)

Animated GRU cell (Source: Raimi Karim)

The GRU is like an LSTM with a forget gate, but it has fewer parameters because it lacks an output gate. GRUs have exhibited better performance than LSTMs on certain smaller and less frequent datasets.
In a GRU, the information an LSTM would store in the Internal Cell State is incorporated directly into the hidden state. This collective information is then passed on to the next Gated Recurrent Unit.

- Update Gate: determines how much past knowledge needs to be passed along to the future; it plays roughly the role of the LSTM’s Forget and Input Gates combined.
- Reset Gate: determines how much of the past knowledge to forget when forming the new candidate state.
- Current Memory Gate: generally incorporated into the computation with the Reset Gate; it introduces non-linearity into the input and makes it zero-mean.
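A comparable NumPy sketch of one GRU step shows how the update and reset gates blend the old hidden state with a new candidate (again, shapes and initialization are illustrative assumptions):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h_prev, W, U, b):
    """One GRU step; parameters keyed by z (update), r (reset), h (candidate)."""
    z = sigmoid(W["z"] @ x + U["z"] @ h_prev + b["z"])             # update gate
    r = sigmoid(W["r"] @ x + U["r"] @ h_prev + b["r"])             # reset gate
    h_cand = np.tanh(W["h"] @ x + U["h"] @ (r * h_prev) + b["h"])  # candidate state
    return (1.0 - z) * h_prev + z * h_cand                         # blend old and new

hidden, inputs = 8, 4
rng = np.random.default_rng(2)
W = {k: rng.normal(scale=0.1, size=(hidden, inputs)) for k in "zrh"}
U = {k: rng.normal(scale=0.1, size=(hidden, hidden)) for k in "zrh"}
b = {k: np.zeros(hidden) for k in "zrh"}
h = gru_step(rng.normal(size=inputs), np.zeros(hidden), W, U, b)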

Thus RNNs offer a natural way to take in a temporal sequence of images (that is, video) and produce state-of-the-art temporal prediction results, which makes them effective for acceleration prediction. One of the most important advantages of RNNs is their capacity to learn from large amounts of temporal data. Since RNNs don’t have to rely only on local, frame-by-frame, pixel-based changes in an image, they increase prediction robustness for the motion of non-rigid objects, like pedestrians and animals.

The RNN can output future position and future velocity predictions for each dynamic object detected in the scene (for example, cars and pedestrians). These results provide essential input to longitudinal control functions in an autonomous vehicle, such as automatic cruise control and automatic emergency braking. A sketch of what such a predictor might look like follows.
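As a purely hypothetical illustration of this idea (not the specific system described above), a trajectory predictor could encode a short history of each object’s state with an LSTM and regress its next few positions; the class name, architecture, and sizes here are all assumptions:

import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    """Hypothetical sketch: encode a history of object states
    (x, y, vx, vy) with an LSTM and regress the next few positions."""
    def __init__(self, state_dim=4, hidden_dim=64, horizon=5):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, horizon * 2)  # (x, y) per future step

    def forward(self, history):
        # history: (batch, past_steps, state_dim)
        _, (h_n, _) = self.encoder(history)
        out = self.head(h_n[-1])               # use the final hidden state
        return out.view(-1, self.horizon, 2)   # (batch, horizon, 2)

model = TrajectoryPredictor()
past = torch.randn(8, 10, 4)   # 8 tracked objects, 10 past steps each
future_xy = model(past)        # predicted (x, y) for 5 future steps
print(future_xy.shape)         # torch.Size([8, 5, 2])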

If you found this story useful, kindly follow me @Ritik Bompilwar. Thanks!
