Questions and Answers
What happens when the entries of the output gate in an LSTM approach 1?
- The hidden state variables are jointly mapped into the inferred label
- All memory information is passed through to the predictor (correct)
- No further processing is done and information is retained within the memory cell
- The model resets all memory cells
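As a concrete illustration, here is a minimal NumPy sketch of the LSTM output step, h_t = o_t * tanh(c_t), with made-up values: a gate near 1 passes the full memory content through to the predictor, while a gate near 0 (see the later question below) withholds it.

```python
import numpy as np

# Minimal sketch of the LSTM output step: h_t = o_t * tanh(c_t).
# The output gate o_t scales how much of the (squashed) memory cell c_t
# reaches the hidden state h_t, which is what the predictor sees.
c_t = np.array([0.8, -1.2, 0.5])   # hypothetical memory cell contents

o_all = np.ones(3)                  # output gate ~ 1: pass everything
o_none = np.zeros(3)                # output gate ~ 0: pass nothing

print(o_all * np.tanh(c_t))    # full memory information reaches the predictor
print(o_none * np.tanh(c_t))   # [0. 0. 0.] -> memory retained, nothing emitted
```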
In what scenarios is a bidirectional RNN useful?
- When future observations are crucial for inferring a label
- When only a single RNN is needed
- When only past observations are required for labeling
- For signal smoothing and denoising scenarios (correct)
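The following PyTorch sketch (sizes and data are arbitrary assumptions for illustration) shows the mechanical consequence of `bidirectional=True`: each time step's output concatenates a forward state built from the past and a backward state built from the future, which is what makes tasks such as smoothing or denoising a whole observed signal possible.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

seq_len, batch, n_in, n_hidden = 5, 1, 3, 4   # hypothetical sizes
x = torch.randn(seq_len, batch, n_in)          # e.g., a noisy signal to smooth

rnn = nn.RNN(n_in, n_hidden, bidirectional=True)
out, _ = rnn(x)

# Each time step's output concatenates a forward state (built from the past)
# and a backward state (built from the future), so the inference at step t
# can draw on observations from both sides of t.
print(out.shape)  # torch.Size([5, 1, 8]) -> 2 * n_hidden per step
```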
What are the trainable parameters in LSTM networks?
- Input and output gates
- Memory cells and hidden states
- Weight matrices and bias vectors (correct)
- Predictors and information retainers
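A quick way to see this is to list the parameters of a PyTorch `nn.LSTM` (the sizes below are arbitrary): the only trainable entries are weight matrices and bias vectors, while gates, memory cells, and hidden states are activations computed from them during the forward pass.

```python
import torch.nn as nn

lstm = nn.LSTM(input_size=3, hidden_size=4)

# The trainable parameters are weight matrices and bias vectors only;
# gates, memory cells, and hidden states are intermediate activations.
for name, p in lstm.named_parameters():
    print(name, tuple(p.shape))
# weight_ih_l0 (16, 3)  -- input weights for the 4 gates stacked together
# weight_hh_l0 (16, 4)  -- recurrent weights
# bias_ih_l0   (16,)    -- bias vectors
# bias_hh_l0   (16,)
```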
How do bidirectional RNNs differ from traditional RNNs?
- They process the sequence in both a forward and a backward pass, so the state at each step combines past and future context, whereas a traditional RNN reads the sequence forward only.
What does an output gate close to 0 indicate in an LSTM?
- The memory cell's contents are retained internally and withheld from the hidden state, so no memory information reaches the predictor at that step.
For what type of systems are RNNs primarily designed?
- Sequential (dynamical) systems, where observations arrive in order and the current output depends on earlier inputs, e.g., time series, text, or speech.
How do bidirectional RNNs leverage past and future observations?
- By running one RNN forward and a second RNN backward over the sequence, then combining the two hidden states at each time step, so every position is informed by observations on both sides, as the sketch below shows.
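For the mechanism itself, here is a hand-rolled sketch using two ordinary PyTorch RNNs (all sizes are illustrative assumptions): one reads the sequence forward, the other reads it reversed, and their per-step states are concatenated.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
seq_len, batch, n_in, n_hidden = 5, 1, 3, 4   # hypothetical sizes
x = torch.randn(seq_len, batch, n_in)

fwd = nn.RNN(n_in, n_hidden)                   # reads x[0] ... x[T-1]
bwd = nn.RNN(n_in, n_hidden)                   # reads x[T-1] ... x[0]

h_fwd, _ = fwd(x)                              # state at t summarizes the past
h_bwd, _ = bwd(torch.flip(x, dims=[0]))        # run over the reversed sequence
h_bwd = torch.flip(h_bwd, dims=[0])            # re-align with forward time

# Concatenate per time step: the combined state at t sees both directions.
h = torch.cat([h_fwd, h_bwd], dim=-1)          # shape (5, 1, 8)
print(h.shape)
```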
What differentiates bidirectional RNNs from traditional RNNs in terms of operation?
- A traditional RNN makes a single forward pass and can run online, while a bidirectional RNN makes two passes (forward and backward) and therefore needs the complete sequence before producing its outputs.