Unlike a standard RNN, a bidirectional RNN reads the input sequence in both directions. It consists of two separate layers of hidden units: in one layer, the hidden states are passed from left to right, and in the other, from right to left. Both of these layers connect the input layer to the output layer.
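The following is a minimal sketch of this idea using PyTorch's built-in RNN layer (the dimensions here are arbitrary, chosen only for illustration). Setting bidirectional=True adds the second, right-to-left layer of hidden units, and its states are concatenated with the left-to-right states before being fed to the output layer:

```python
import torch
import torch.nn as nn

# bidirectional=True adds a second hidden layer that reads the
# sequence from right to left
birnn = nn.RNN(input_size=8, hidden_size=16, bidirectional=True,
               batch_first=True)

x = torch.randn(1, 5, 8)   # (batch, time steps, features)
output, h_n = birnn(x)

print(output.shape)  # torch.Size([1, 5, 32]) -- 16 forward + 16 backward units
print(h_n.shape)     # torch.Size([2, 1, 16]) -- one final state per direction
```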
One problem with the LSTM is that it has a large number of parameters, mainly due to the many gates and states in the LSTM cell. This large parameter count increases training time. To avoid this, we can use the gated recurrent unit (GRU), which is a simplified and modified version of the LSTM.
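We can see the difference in parameter count directly by comparing same-sized PyTorch layers (the sizes below are arbitrary, for illustration). The LSTM computes four gated quantities per step while the GRU computes three, so the GRU has roughly three-quarters of the parameters:

```python
import torch.nn as nn

lstm = nn.LSTM(input_size=100, hidden_size=100)
gru = nn.GRU(input_size=100, hidden_size=100)

# total number of trainable parameters in each layer
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(lstm))  # 80800 -- four gate/state computations per step
print(count(gru))   # 60600 -- three, hence fewer parameters
```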
In an LSTM, the cell state, also referred to as the internal memory, is mainly used for storing information, whereas the hidden state is mainly used for computing the output.
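A small sketch with PyTorch (again with arbitrary dimensions) makes the two states visible: the layer returns both the final hidden state h_n, which is what the cell exposes for computing outputs, and the final cell state c_n, which is the internal memory:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
x = torch.randn(1, 5, 8)   # (batch, time steps, features)

output, (h_n, c_n) = lstm(x)

print(output.shape)  # torch.Size([1, 5, 16]) -- hidden state at every step
print(h_n.shape)     # torch.Size([1, 1, 16]) -- final hidden state
print(c_n.shape)     # torch.Size([1, 1, 16]) -- final cell state (internal memory)
```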
A recurrent network is preferred over a feedforward network when we want to perform a sequential task. Since a recurrent neural network stores past information in its hidden state, it is more effective than a feedforward network for sequential tasks.
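A bare-bones NumPy sketch of the recurrence shows how this works (weights and sizes here are made up for illustration). The hidden state h at each step is computed from the current input and the previous h, so every step carries a summary of everything that came before; a feedforward network has no such carried state:

```python
import numpy as np

rng = np.random.default_rng(0)
W_x = rng.normal(size=(16, 8))    # input-to-hidden weights
W_h = rng.normal(size=(16, 16))   # hidden-to-hidden weights
b = np.zeros(16)

h = np.zeros(16)                  # hidden state, initially empty
for x_t in rng.normal(size=(5, 8)):   # a sequence of 5 inputs
    # h depends on the current input and on the previous h,
    # so past information persists across time steps
    h = np.tanh(W_x @ x_t + W_h @ h + b)
```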