Gated Recurrent Unit (GRU)
Recall how gradients are computed in a recurrent neural network: when the number of time steps is large (or small), the gradients of an RNN tend to vanish or explode. Gradient clipping can cope with exploding gradients, but it does nothing for vanishing ones. For this reason, plain RNNs have difficulty capturing dependencies between time steps that are far apart in a sequence.
Gated recurrent neural networks were proposed precisely to better capture such long-range dependencies in time series. They control the flow of information through learnable gates. The gated recurrent unit (GRU) is a commonly used gated recurrent neural network.
The Gated Recurrent Unit
The GRU's design introduces the concepts of a reset gate and an update gate, which change how the hidden state is computed compared with a plain RNN.
Reset Gate and Update Gate
In a GRU, the reset gate and the update gate both take the current time step's input Xt and the previous time step's hidden state Ht−1 as inputs, and their outputs are computed by fully connected layers with a sigmoid activation function.
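Concretely, writing $\sigma$ for the sigmoid function and using the same weight names as the code below, the gate computations are:

```latex
R_t = \sigma(X_t W_{xr} + H_{t-1} W_{hr} + b_r), \qquad
Z_t = \sigma(X_t W_{xz} + H_{t-1} W_{hz} + b_z)
```

Because of the sigmoid, every entry of $R_t$ and $Z_t$ lies in $(0, 1)$, so the gates act as soft, learnable switches.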
Candidate Hidden State
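The candidate hidden state applies the reset gate elementwise to the previous hidden state before the usual tanh transformation; when entries of $R_t$ are close to 0, the corresponding parts of the past state are discarded:

```latex
\tilde{H}_t = \tanh\bigl(X_t W_{xh} + (R_t \odot H_{t-1}) W_{hh} + b_h\bigr)
```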
Hidden State
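The new hidden state interpolates between the old state and the candidate, with the update gate deciding how much of each to keep; when $Z_t$ is close to 1, the old state is carried forward almost unchanged, which is what lets gradients survive across many time steps:

```latex
H_t = Z_t \odot H_{t-1} + (1 - Z_t) \odot \tilde{H}_t
```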
Code Implementation
#!/usr/bin/env python
# coding: utf-8

import zipfile

import d2lzh as d2l
from mxnet import nd


def load_data_jay_lyrics(file):
    """Load the Jay Chou lyrics data set (used in the Chinese d2l book)."""
    with zipfile.ZipFile(file) as zin:
        with zin.open('jaychou_lyrics.txt') as f:
            corpus_chars = f.read().decode('utf-8')
    corpus_chars = corpus_chars.replace('\n', ' ').replace('\r', ' ')
    corpus_chars = corpus_chars[0:10000]
    idx_to_char = list(set(corpus_chars))
    char_to_idx = dict([(char, i) for i, char in enumerate(idx_to_char)])
    vocab_size = len(char_to_idx)
    corpus_indices = [char_to_idx[char] for char in corpus_chars]
    return corpus_indices, char_to_idx, idx_to_char, vocab_size


file = '/Users/James/Documents/dev/test/data/jaychou_lyrics.txt.zip'
(corpus_indices, char_to_idx, idx_to_char, vocab_size) = load_data_jay_lyrics(file)

num_inputs, num_hiddens, num_outputs = vocab_size, 256, vocab_size
ctx = d2l.try_gpu()


def get_params():
    def _one(shape):
        return nd.random.normal(scale=0.01, shape=shape, ctx=ctx)

    def _three():
        return (_one((num_inputs, num_hiddens)),
                _one((num_hiddens, num_hiddens)),
                nd.zeros(num_hiddens, ctx=ctx))

    W_xz, W_hz, b_z = _three()  # Update gate parameters
    W_xr, W_hr, b_r = _three()  # Reset gate parameters
    W_xh, W_hh, b_h = _three()  # Candidate hidden state parameters
    # Output layer parameters
    W_hq = _one((num_hiddens, num_outputs))
    b_q = nd.zeros(num_outputs, ctx=ctx)
    # Attach gradients
    params = [W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q]
    for param in params:
        param.attach_grad()
    return params


def init_gru_state(batch_size, num_hiddens, ctx):
    return (nd.zeros(shape=(batch_size, num_hiddens), ctx=ctx),)


def gru(inputs, state, params):
    W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q = params
    H, = state
    outputs = []
    for X in inputs:
        Z = nd.sigmoid(nd.dot(X, W_xz) + nd.dot(H, W_hz) + b_z)  # update gate
        R = nd.sigmoid(nd.dot(X, W_xr) + nd.dot(H, W_hr) + b_r)  # reset gate
        H_tilda = nd.tanh(nd.dot(X, W_xh) + nd.dot(R * H, W_hh) + b_h)
        H = Z * H + (1 - Z) * H_tilda
        Y = nd.dot(H, W_hq) + b_q
        outputs.append(Y)
    return outputs, (H,)


num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2
pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开']

d2l.train_and_predict_rnn(gru, get_params, init_gru_state, num_hiddens,
                          vocab_size, ctx, corpus_indices, idx_to_char,
                          char_to_idx, False, num_epochs, num_steps, lr,
                          clipping_theta, batch_size, pred_period, pred_len,
                          prefixes)
Long Short-Term Memory (LSTM)
Another commonly used gated recurrent neural network is long short-term memory (LSTM). Its structure is slightly more complex than the gated recurrent unit's.
Long Short-Term Memory
The LSTM introduces three gates, namely the input gate, the forget gate, and the output gate, as well as a memory cell with the same shape as the hidden state (some literature treats the memory cell as a special kind of hidden state), which records additional information.
Input Gate, Forget Gate and Output Gate
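Like the GRU's gates, all three LSTM gates are fully connected layers with a sigmoid activation, taking the current input $X_t$ and the previous hidden state $H_{t-1}$ (the weight names match the code below):

```latex
I_t = \sigma(X_t W_{xi} + H_{t-1} W_{hi} + b_i) \\
F_t = \sigma(X_t W_{xf} + H_{t-1} W_{hf} + b_f) \\
O_t = \sigma(X_t W_{xo} + H_{t-1} W_{ho} + b_o)
```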
Candidate Memory Cell
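The candidate memory cell is computed like the gates but with a tanh activation, so its values lie in $(-1, 1)$:

```latex
\tilde{C}_t = \tanh(X_t W_{xc} + H_{t-1} W_{hc} + b_c)
```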
Memory Cell
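The memory cell combines the previous cell and the candidate, gated elementwise by the forget gate and the input gate respectively; a forget gate near 1 with an input gate near 0 preserves old information across many time steps:

```latex
C_t = F_t \odot C_{t-1} + I_t \odot \tilde{C}_t
```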
Hidden State
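The hidden state is the output gate applied to the squashed memory cell; when $O_t$ is close to 0, the memory is retained internally without affecting the output at that step:

```latex
H_t = O_t \odot \tanh(C_t)
```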
Code Implementation
# LSTM: parameter initialization
num_inputs, num_hiddens, num_outputs = vocab_size, 256, vocab_size
ctx = d2l.try_gpu()


def get_params():
    def _one(shape):
        return nd.random.normal(scale=0.01, shape=shape, ctx=ctx)

    def _three():
        return (_one((num_inputs, num_hiddens)),
                _one((num_hiddens, num_hiddens)),
                nd.zeros(num_hiddens, ctx=ctx))

    W_xi, W_hi, b_i = _three()  # Input gate parameters
    W_xf, W_hf, b_f = _three()  # Forget gate parameters
    W_xo, W_ho, b_o = _three()  # Output gate parameters
    W_xc, W_hc, b_c = _three()  # Candidate memory cell parameters
    # Output layer parameters
    W_hq = _one((num_hiddens, num_outputs))
    b_q = nd.zeros(num_outputs, ctx=ctx)
    # Attach gradients
    params = [W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc,
              b_c, W_hq, b_q]
    for param in params:
        param.attach_grad()
    return params


def init_lstm_state(batch_size, num_hiddens, ctx):
    # The LSTM state holds both the hidden state H and the memory cell C
    return (nd.zeros(shape=(batch_size, num_hiddens), ctx=ctx),
            nd.zeros(shape=(batch_size, num_hiddens), ctx=ctx))
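The excerpt stops at parameter and state initialization; the forward pass (an lstm function analogous to gru above) is not shown. A minimal NumPy sketch of a single LSTM time step, following the equations above with the same parameter names, might look like this (NumPy stands in for mxnet.nd purely for illustration; it is not the book's implementation):

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def lstm_step(X, H, C, params):
    """One LSTM time step. X: (batch, inputs); H, C: (batch, hiddens)."""
    (W_xi, W_hi, b_i, W_xf, W_hf, b_f,
     W_xo, W_ho, b_o, W_xc, W_hc, b_c) = params
    I = sigmoid(X @ W_xi + H @ W_hi + b_i)        # input gate
    F = sigmoid(X @ W_xf + H @ W_hf + b_f)        # forget gate
    O = sigmoid(X @ W_xo + H @ W_ho + b_o)        # output gate
    C_tilda = np.tanh(X @ W_xc + H @ W_hc + b_c)  # candidate memory cell
    C = F * C + I * C_tilda                       # memory cell update
    H = O * np.tanh(C)                            # hidden state
    return H, C


# Tiny shape check with random parameters (batch n=2, inputs d=5, hiddens h=4)
rng = np.random.default_rng(0)
n, d, h = 2, 5, 4
params = []
for _ in range(4):  # input, forget, output gates, then candidate cell
    params += [rng.normal(scale=0.01, size=(d, h)),
               rng.normal(scale=0.01, size=(h, h)),
               np.zeros(h)]
H, C = lstm_step(rng.normal(size=(n, d)),
                 np.zeros((n, h)), np.zeros((n, h)), params)
print(H.shape, C.shape)  # prints: (2, 4) (2, 4)
```

In the book's setup this step would run in a loop over `inputs`, followed by the output-layer projection `Y = H @ W_hq + b_q`, mirroring the structure of the gru function above.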
Deep Recurrent Neural Networks
Bidirectional Recurrent Neural Networks
Source: https://www.cnblogs.com/jaww/p/12313399.html