Reading TensorFlow's Seq2Seq API source code, and building a numeric predictor

There is plenty of material on Seq2Seq already, so just a quick recap here.
As the name suggests, it is a model that predicts one sequence from another sequence, built on an encoder-decoder framework. A picture first:


The figure above is a typical seq2seq question-answering system. Q: Are you free tomorrow? A: Yes, what's up? Each word is one element of a sequence. In short: whenever you believe the target elements are related to one another, and the input elements are related as well, seq2seq is a candidate. (I stress the targets first and the inputs second deliberately; the reason will become clear below.)
Here is a more detailed figure, to talk about the internals, which I personally find the interesting part.

Each circle is an RNN cell, usually an LSTM or a GRU. (Why these? To mitigate vanishing gradients and the loss of long-range information; the gating and the additive state updates are what make that work. You are not interviewing me, are you!) Personally I lean slightly towards GRU: fewer parameters, so it is less prone to overfitting, which suits those of us who are no good at tuning.
If you are not familiar with RNNs (LSTM, GRU), a good introduction is
www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
Looking at the figure, the vector C in the middle is produced by encoding the inputs; it aggregates the information of every input. For example, if the input words are "an alcoholic drink made from malt and hops", then C stands for "beer"!
Compared with shallow models, a deep model builds higher-level features layer by layer, sparing you hand-crafted features, and that advantage shows up nicely here.
This C can represent the whole sentence made of your words; it is effectively a dimensionality reduction that abstracts a more complex feature (even an article of several sentences can be embedded through C). Under the Distributional Hypothesis there are also neural language models such as word2vec and doc2vec for embeddings. Compared with those, seq2seq takes word order into account, at the cost of more parameters, of course. There are also C&W's SENNA, which trains word embeddings indirectly ("Natural Language Processing (Almost) from Scratch"), and M&H's HLBL ("A scalable hierarchical distributed language model"); both contain ideas worth studying, for instance using a hierarchical structure in the last layer to reduce lookup time complexity! (Getting a bit off topic...)
A few words on the decoder. As the figure clearly shows, the targets are chained together; does that remind you of ResNet's shortcut, the trick that made very deep networks trainable? In my view, the defining feature of Seq2Seq is exactly that the outputs are linked to one another. If the relationship between consecutive outputs is easy to fit, seq2seq will do well, just as ResNet cleverly reformulates the mapping as a residual, making parameter updates more sensitive and deep models easier to train, to great effect. After all, given enough data, depth buys accuracy.
This figure is just for illustration

residual unit

One limitation is that C is identical for every target. Hence the attention mechanism, which gives each target its own C, a "different face for each person" idea. It adds parameters, of course, but with enough data the results are impressive! Have you wondered, though, what happens if you use attention when the input sequence has little influence on the targets? I will leave that as food for thought; feel free to dig in and discuss.
Implementation in TensorFlow
I picked up TensorFlow recently and found it rather hard to use (maybe that is on me). Most seq2seq examples out there are chatbots and the like, i.e. discrete inputs and outputs, which does not match my application, and material on the continuous case is scarce, so I ended up reading the source code.
Below I first walk through the chatbot implementation, then adapt it to numeric prediction (for example, predicting asset prices from features such as P/E ratio and net asset value), which turns out to be simple!
Seq2Seq model training:
Encoder: input_sequences ----> (RNN) ----> C (cell state)
Decoder: C + the target at step i ----> (RNN) ----> prediction of the target at step i+1
Key point: during training, the decoder's input is the ground-truth target.
The difference at inference time: the decoder's input is its own output from the previous step.
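The training/inference difference above can be sketched in plain Python. The `step` function here is a hypothetical stand-in for one RNN cell step (no TensorFlow involved); the point is only where each phase gets its next input from:

```python
def step(prev_token, state):
    # Dummy "RNN cell": predicts prev_token + 1 and carries state along.
    return prev_token + 1, state

def decode_training(targets, state):
    # Teacher forcing: the input at step i is the *ground-truth* target.
    preds = []
    prev = 0  # stand-in for <GO>
    for t in targets:
        pred, state = step(prev, state)
        preds.append(pred)
        prev = t          # feed the true target, not our prediction
    return preds

def decode_inference(length, state):
    # Inference: the input at step i is our *own* previous prediction.
    preds = []
    prev = 0
    for _ in range(length):
        pred, state = step(prev, state)
        preds.append(pred)
        prev = pred       # feed back the prediction
    return preds
```

The two loops differ in a single line: what gets assigned to `prev`.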

Chatbot model pipeline:

seq2seq source: github.com/google/seq2seq
chatbot因?yàn)槭菍υ?,每個(gè)詞是離散的,所以需要將每個(gè)詞embedding后,喂給模型,得到的decoder結(jié)果一般需要連接一個(gè)全連接層(Dense)+softmax來選擇輸出概率最大的詞作為最終結(jié)果,只要對輸入輸出做一個(gè)embedding處理就好,源碼還用了beam_search算法(貪心動(dòng)態(tài)規(guī)劃),來解碼,也就是先選擇幾個(gè)概率較大的輸出,接著他們解碼最后看聯(lián)合概率,選擇輸出序列。
There is also the variable-length case. The usual strategy is to pick the maximum length and pad everything else with 0, which can waste a lot of resources (imagine one sequence of length 100000 while the rest average 10). The bucket strategy helps: choose a few length intervals and pad each sequence only up to its bucket's length, saving resources, a nice bit of engineering.
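The bucket idea can be sketched like this (plain Python; the concrete bucket sizes and the truncate-if-too-long rule are my own assumptions, not anything the TensorFlow code mandates):

```python
def pad_to(seq, length, pad_id=0):
    # pad a single sequence with pad_id up to the given length
    return seq + [pad_id] * (length - len(seq))

def bucket_and_pad(sequences, bucket_sizes):
    """Assign each sequence to the smallest bucket that fits it, padding
    only up to that bucket's length instead of the global maximum."""
    buckets = {size: [] for size in bucket_sizes}
    for seq in sequences:
        for size in sorted(bucket_sizes):
            if len(seq) <= size:
                buckets[size].append(pad_to(seq, size))
                break
        else:
            # longer than every bucket: truncate to the largest one
            buckets[max(bucket_sizes)].append(seq[:max(bucket_sizes)])
    return buckets
```

A length-2 and a length-5 sequence end up in different buckets, so the short one is padded to 4 rather than to the global maximum.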
Let's talk it through by implementing a simple version of the Seq2Seq model, following the chatbot pipeline above.

Encoder construction:
1. Preprocessing

Store every word in a dictionary (uncommon words map to <UNKNOW>; add <GO>, <EOS> and <PAD> as the start, stop and padding symbols) and build a one-to-one mapping between words and ids. Suppose questions have at most 10 words: a sentence of only 8 words gets 2 <PAD> tokens appended; answers are wrapped in <GO> and <EOS>, with at most 20 words.
For one batch (of 32), inputs has shape (32, 10) and targets (32, 20).
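A minimal sketch of this preprocessing in plain Python. The id assignments for the special tokens and the helper names are my own convention for illustration, not anything fixed by TensorFlow:

```python
# Hypothetical special-token ids: <PAD>=0, <GO>=1, <EOS>=2, <UNKNOW>=3
PAD, GO, EOS, UNK = 0, 1, 2, 3

def build_vocab(sentences, min_count=1):
    # count words, then assign ids after the reserved special tokens
    counts = {}
    for sent in sentences:
        for w in sent.split():
            counts[w] = counts.get(w, 0) + 1
    vocab = {'<PAD>': PAD, '<GO>': GO, '<EOS>': EOS, '<UNKNOW>': UNK}
    for w, c in sorted(counts.items()):
        if c >= min_count and w not in vocab:
            vocab[w] = len(vocab)
    return vocab

def encode_question(sent, vocab, max_len=10):
    # map words to ids (rare/unseen -> <UNKNOW>), pad with <PAD> to max_len
    ids = [vocab.get(w, UNK) for w in sent.split()]
    return (ids + [PAD] * max_len)[:max_len]

def encode_answer(sent, vocab, max_len=20):
    # answers are wrapped in <GO> ... <EOS> before padding
    ids = [GO] + [vocab.get(w, UNK) for w in sent.split()] + [EOS]
    return (ids + [PAD] * max_len)[:max_len]
```

This produces exactly the (batch_size, 10) and (batch_size, 20) integer tensors described above once the encoded rows are stacked.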

2. Encoder construction

The encoder is simple: build an RNN and take the final cell state of the last cell as the decoder's initial cell state.

import tensorflow as tf

## define the inputs (word ids, hence tf.int32)
inputs = tf.placeholder(tf.int32, shape=[batch_size, max_input_length])

## the inputs must be embedded into vectors before being fed to the model;
## define the embedding tensors
with tf.variable_scope('embedding'):
  # a Vocabulary_size x embedding_size matrix; embedding_size is the word-vector dimension
  encoder_embeddings = tf.Variable(
      tf.random_uniform([Vocabulary_size, embedding_size], -1.0, 1.0),
      name='encoder_embedding')
  decoder_embeddings = tf.Variable(
      tf.random_uniform([Vocabulary_size, embedding_size], -1.0, 1.0),
      name='decoder_embedding')

# turn the raw inputs into embedded inputs; TensorFlow does this on the CPU
with tf.device('/cpu:0'):
  # embed is what we feed to the RNN; embedding_lookup returns a tensor of shape
  # (batch_size, max_input_length, embedding_size), equivalent to multiplying each
  # word's one-hot vector by the embeddings matrix
  embed = tf.nn.embedding_lookup(encoder_embeddings, inputs)

## define the encoder cell, following the usual TensorFlow RNN modelling flow
with tf.variable_scope('encoder'):
  # here output_dim = 20; rnn_layers is the number of stacked layers
  encoder_cell = tf.contrib.rnn.MultiRNNCell(
      [tf.nn.rnn_cell.BasicLSTMCell(output_dim) for _ in range(rnn_layers)])

## get the encoder's final cell state through dynamic_rnn
_, encoder_final_state = tf.nn.dynamic_rnn(encoder_cell, embed, dtype=tf.float32)

That is the encoder done. For the decoder, the training and inference phases must be considered separately, with some differences in the details, as introduced above; let's get to it!

3. Decoder construction
Overview

A good starting point is the signature of dynamic_decode. In general three pieces are needed: a helper (provides the decoder's inputs and initialization, which differ between training and inference), BasicDecoder (implements a single decoding step) and dynamic_decode (drives the whole loop). The source files are helper.py, basic_decoder.py and decoder.py; the key parts are walked through below.

Helper

The simple ones first: TrainingHelper, used during training, and GreedyEmbeddingHelper, used during inference.

## Notes ##
"""
The helper mainly provides the decoder's initial input and the input for the next step.
Only the important methods are shown; properties defined in the abstract class are cut to save space.
"""
class TrainingHelper(Helper):
#inherits the Helper abstract class, which mainly constrains this class's methods and properties

  def __init__(self, inputs, sequence_length, time_major=False, name=None):

    with ops.name_scope(name, "TrainingHelper", [inputs, sequence_length]):
      inputs = ops.convert_to_tensor(inputs, name="inputs")
      
      # time_major: the original shape (batch_size, max_input_length, embedding_size)
      # is transposed to (max_input_length, batch_size, embedding_size) to speed up training
      if not time_major:
        inputs = nest.map_structure(_transpose_batch_time, inputs)
      
      # after the transpose, each training step consumes the same time step across the
      # batch; _unstack_ta splits the 3-D tensor into max_input_length 2-D
      # (batch_size, embedding_size) tensors
      self._input_tas = nest.map_structure(_unstack_ta, inputs)
      # sequence_length controls the number of steps; it is a 1-D tensor
      self._sequence_length = ops.convert_to_tensor(
          sequence_length, name="sequence_length")
      if self._sequence_length.get_shape().ndims != 1:
        raise ValueError(
            "Expected sequence_length to be a vector, but received shape: %s" %
            self._sequence_length.get_shape())
      # as the name suggests: once a whole sequence has been fed, zeros finish off the remaining iterations
      self._zero_inputs = nest.map_structure(
          lambda inp: array_ops.zeros_like(inp[0, :]), inputs)

      self._batch_size = array_ops.size(sequence_length)

  # get the initial input (the first target)
  def initialize(self, name=None):
    with ops.name_scope(name, "TrainingHelperInitialize"):
      # check whether any sequence in this batch has length 0
      finished = math_ops.equal(0, self._sequence_length)
      # if every sequence is finished, feeding zeros is enough
      all_finished = math_ops.reduce_all(finished)
      # cond() is a control-flow op: if all_finished == True it emits the zero
      # inputs, otherwise the first target (remember _input_tas)
      next_inputs = control_flow_ops.cond(
          all_finished, lambda: self._zero_inputs,
          lambda: nest.map_structure(lambda inp: inp.read(0), self._input_tas))
      return (finished, next_inputs)
   
  # map an output to its id: the decoder usually ends in a softmax, argmax picks the most probable output, and the mapping defined earlier recovers the concrete value
  def sample(self, time, outputs, name=None, **unused_kwargs):
    with ops.name_scope(name, "TrainingHelperSample", [time, outputs]):
      sample_ids = math_ops.cast(
          math_ops.argmax(outputs, axis=-1), dtypes.int32)
      return sample_ids

  # get the next input (next_target and next_cell_state)
  def next_inputs(self, time, outputs, state, name=None, **unused_kwargs):
    """next_inputs_fn for TrainingHelper."""
    with ops.name_scope(name, "TrainingHelperNextInputs",
                        [time, outputs, state]):
      next_time = time + 1
      # is the next step past the end of some sequence?
      finished = (next_time >= self._sequence_length)
      all_finished = math_ops.reduce_all(finished)
      # read the next target; this is wrapped in a named function because the call below needs a callable
      def read_from_ta(inp):
        return inp.read(next_time)
      next_inputs = control_flow_ops.cond(
          all_finished, lambda: self._zero_inputs,
          lambda: nest.map_structure(read_from_ta, self._input_tas))
      return (finished, next_inputs, state)

"""
以上是trainning過程的helper,下面介紹一下inferring過程的helper,主要區(qū)別就是next_inputs獲取的是上一個(gè)預(yù)測的結(jié)果
inferring通過檢查start_token和end_token來判斷停止
"""

class GreedyEmbeddingHelper(Helper):

  def __init__(self, embedding, start_tokens, end_token):
    # embedding must be callable; if it is not, wrap it in an embedding_lookup
    if callable(embedding):
      self._embedding_fn = embedding
    else:
      self._embedding_fn = (
          lambda ids: embedding_ops.embedding_lookup(embedding, ids))

    self._start_tokens = ops.convert_to_tensor(
        start_tokens, dtype=dtypes.int32, name="start_tokens")
    self._end_token = ops.convert_to_tensor(
        end_token, dtype=dtypes.int32, name="end_token")
    if self._start_tokens.get_shape().ndims != 1:
      raise ValueError("start_tokens must be a vector")
    self._batch_size = array_ops.size(start_tokens)
    if self._end_token.get_shape().ndims != 0:
      raise ValueError("end_token must be a scalar")
    
    # the initial input is the start token <GO>, which our preprocessing defines by default
    self._start_inputs = self._embedding_fn(self._start_tokens)

  # this initialize is rather simpler
  def initialize(self, name=None):
    finished = array_ops.tile([False], [self._batch_size])
    return (finished, self._start_inputs)

  # similar to the training version: return the most likely output id
  def sample(self, time, outputs, state, name=None):
    del time, state  # unused by sample_fn
    if not isinstance(outputs, ops.Tensor):
      raise TypeError("Expected outputs to be a single Tensor, got: %s" % type(outputs))
    sample_ids = math_ops.cast(
        math_ops.argmax(outputs, axis=-1), dtypes.int32)
    return sample_ids
  
  # again finished controls next_input; if there is a next input, just look up the embedding vector of the id we just sampled in the embedding matrix and use it as the next input
  def next_inputs(self, time, outputs, state, sample_ids, name=None):
    del time, outputs  # unused by next_inputs_fn
    finished = math_ops.equal(sample_ids, self._end_token)
    all_finished = math_ops.reduce_all(finished)
    next_inputs = control_flow_ops.cond(
        all_finished,
        # If we're finished, the next_inputs value doesn't matter
        lambda: self._start_inputs,
        lambda: self._embedding_fn(sample_ids))
    return (finished, next_inputs, state)
BasicDecoder

Again just the core content, mainly tensorflow.contrib.seq2seq.BasicDecoder.

class BasicDecoder(decoder.Decoder):

  def __init__(self, cell, helper, initial_state, output_layer=None):

    if not rnn_cell_impl._like_rnncell(cell):  # pylint: disable=protected-access
      raise TypeError("cell must be an RNNCell, received: %s" % type(cell))
    if not isinstance(helper, helper_py.Helper):
      raise TypeError("helper must be a Helper, received: %s" % type(helper))
    if (output_layer is not None
        and not isinstance(output_layer, layers_base.Layer)):
      raise TypeError(
          "output_layer must be a Layer, received: %s" % type(output_layer))

    # the decoder's cell
    self._cell = cell
    # the helper chosen for training or inference
    self._helper = helper
    # this is encoder_final_state, i.e. the C from earlier
    self._initial_state = initial_state
    # usually a Dense layer that predicts the output
    self._output_layer = output_layer

  # initialize: return the helper's initialization (finished, first_target) plus encoder_final_state
  def initialize(self, name=None): 
    return self._helper.initialize() + (self._initial_state,)
 
  # one decoding step
  # one caveat: the encoder's output dimension must match the decoder's
  def step(self, time, inputs, state, name=None):
    with ops.name_scope(name, "BasicDecoderStep", (time, inputs, state)):
      # one input and state in, the corresponding output and state out
      cell_outputs, cell_state = self._cell(inputs, state)
      if self._output_layer is not None:
        # run the cell output through the Dense prediction layer
        cell_outputs = self._output_layer(cell_outputs)
      # map the Dense result to its output id
      sample_ids = self._helper.sample(time=time, outputs=cell_outputs, state=cell_state)
      # fetch the next input through the helper's next_inputs; remember it is time_major
      (finished, next_inputs, next_state) = self._helper.next_inputs(
          time=time,
          outputs=cell_outputs,
          state=cell_state,
          sample_ids=sample_ids)
    # the output structure defined by BasicDecoderOutput
    outputs = BasicDecoderOutput(cell_outputs, sample_ids)
    return (outputs, next_state, next_inputs, finished)
dynamic_decode

Again the core content; the code is rather long, so the raise/error-handling parts were cut — see the source for the rest.

# takes a decoder instance; the other arguments are optional, see the source for details
def dynamic_decode(decoder,output_time_major=False,impute_finished=False,maximum_iterations=None,parallel_iterations=32,swap_memory=False,scope=None):

    # get the initial inputs from the decoder's initialization
    initial_finished, initial_inputs, initial_state = decoder.initialize()

    # produce zero outputs shaped like the output tensors
    zero_outputs = _create_zero_outputs(decoder.output_size, decoder.output_dtype, decoder.batch_size)

    # if the maximum iteration count is set to 0, the loop never starts
    # (remember, iteration is driven by the finished vector)
    if maximum_iterations is not None:
      initial_finished = math_ops.logical_or(
          initial_finished, 0 >= maximum_iterations)
    # initialize sequence_lengths to all zeros
    initial_sequence_lengths = array_ops.zeros_like(
        initial_finished, dtype=dtypes.int32)
    # initialize time = 0
    initial_time = constant_op.constant(0, dtype=dtypes.int32)

    def _shape(batch_size, from_shape):
      if not isinstance(from_shape, tensor_shape.TensorShape):
        return tensor_shape.TensorShape(None)
      else:
        batch_size = tensor_util.constant_value(ops.convert_to_tensor(batch_size, name="batch_size"))
        return tensor_shape.TensorShape([batch_size]).concatenate(from_shape)

    def _create_ta(s, d):
      return tensor_array_ops.TensorArray(dtype=d, size=0,dynamic_size=True,element_shape=_shape(decoder.batch_size, s))
    # build the output TensorArrays, matching the declared output structure
    initial_outputs_ta = nest.map_structure(_create_ta, decoder.output_size, decoder.output_dtype)

    # loop condition: have all sequences finished?
    def condition(unused_time, unused_outputs_ta, unused_state, unused_inputs,
                  finished, unused_sequence_lengths):
      return math_ops.logical_not(math_ops.reduce_all(finished))

    # loop body: one model iteration
    def body(time, outputs_ta, state, inputs, finished, sequence_lengths):
      # run one iteration through the decoder's step
      (next_outputs, decoder_state, next_inputs,decoder_finished) = decoder.step(time, inputs, state)
      # decide whether the next step is finished
      next_finished = math_ops.logical_or(decoder_finished, finished)
      # also stop once the maximum iteration count is exceeded
      if maximum_iterations is not None:
        next_finished = math_ops.logical_or(
            next_finished, time + 1 >= maximum_iterations)
      # array_ops.where(condition, x, y) is elementwise: pick x where the condition
      # holds, y otherwise; here, once a sequence finishes, record its length
      next_sequence_lengths = array_ops.where(
          math_ops.logical_and(math_ops.logical_not(finished), next_finished),
          array_ops.fill(array_ops.shape(sequence_lengths), time + 1),
          sequence_lengths)

      nest.assert_same_structure(state, decoder_state)
      nest.assert_same_structure(outputs_ta, next_outputs)
      nest.assert_same_structure(inputs, next_inputs)

      # once a sequence has finished, just emit zeros
      if impute_finished:
        emit = nest.map_structure(lambda out, zero: array_ops.where(finished, zero, out),next_outputs,zero_outputs)
      else:
        emit = next_outputs

      # for shorter sequences in the batch, the extra iterations just carry the last cell_state through, with zero outputs
      def _maybe_copy_state(new, cur):
        if isinstance(cur, tensor_array_ops.TensorArray):
          pass_through = True
        else:
          new.set_shape(cur.shape)
          pass_through = (new.shape.ndims == 0)
        return new if pass_through else array_ops.where(finished, cur, new)

      if impute_finished:
        next_state = nest.map_structure(_maybe_copy_state, decoder_state, state)
      else:
        next_state = decoder_state

      outputs_ta = nest.map_structure(lambda ta, out: ta.write(time, out),
                                      outputs_ta, emit)
      return (time + 1, outputs_ta, next_state, next_inputs, next_finished,
              next_sequence_lengths)
    
    # the while loop: start from the initialized values and iterate to the final result, stored in res
    res = control_flow_ops.while_loop(
        condition,
        body,
        loop_vars=[
            initial_time, initial_outputs_ta, initial_state, initial_inputs,
            initial_finished, initial_sequence_lengths,
        ],
        parallel_iterations=parallel_iterations,
        swap_memory=swap_memory)

    final_outputs_ta = res[1]
    final_state = res[2]
    final_sequence_lengths = res[5]
    
    # remember the transpose to time_major at the beginning? transpose back here, to (batch_size, output_length, output_embedding_size)
    final_outputs = nest.map_structure(lambda ta: ta.stack(), final_outputs_ta)
    
    # this part involves beam_search, skipped here
    try:
      final_outputs, final_state = decoder.finalize(
          final_outputs, final_state, final_sequence_lengths)
    except NotImplementedError:
      pass

    if not output_time_major:
      final_outputs = nest.map_structure(_transpose_batch_time, final_outputs)

  return final_outputs, final_state, final_sequence_lengths

Because the logic is split across three files it is hard to tell a fully linear story; if you want to dig in, reading them yourself leaves a much deeper impression. In short: the helper supplies each phase's model inputs and parses its outputs, BasicDecoder implements a single iteration, and dynamic_decode controls how many iterations the model runs. One pass through them is also a nice tour of TensorFlow's data structures and basic operations.
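The division of labour just described can be compressed into a pure-Python caricature of the while-loop contract. The `decoder` object here is a hypothetical stand-in exposing `initialize()`/`step()` like BasicDecoder does, not the graph-mode TensorFlow version:

```python
def dynamic_decode_sketch(decoder, max_iterations=100):
    """Drive a decoder until every sequence in the batch reports finished
    (or the iteration cap is hit), collecting the per-step outputs."""
    finished, inputs, state = decoder.initialize()
    outputs, time = [], 0
    while not all(finished) and time < max_iterations:
        # one BasicDecoder-style step: output, state, next input, finished flags
        out, state, inputs, finished = decoder.step(time, inputs, state)
        outputs.append(out)
        time += 1
    return outputs, state
```

The real dynamic_decode does the same thing with control_flow_ops.while_loop, TensorArrays for the outputs, and the sequence-length bookkeeping shown above.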
Now let's pick up after the encoder and write the decoder (sorry, that was a long interlude...).

Decoder for the chatbot:
#define the helper
if is_inferring:
    # inference uses GreedyEmbeddingHelper
    start_token = tf.placeholder(tf.int32,shape=[None],name='start_token')  # <GO>
    end_token = tf.placeholder(tf.int32,name='end_token') # <EOS>
    helper = GreedyEmbeddingHelper(decoder_embeddings,start_token,end_token)
else:
    # output_max_length = 20
    target_ = tf.placeholder(tf.int32,shape=[batch_size,output_max_length],name='target_')
    decoder_length = tf.placeholder(tf.int32,shape=[batch_size],name='decoder_length')
    with tf.device('/cpu:0'):
        target_embed = tf.nn.embedding_lookup(decoder_embeddings,target_)
    helper = TrainingHelper(target_embed,decoder_length)

# define the decoder for dynamic_decode
with tf.variable_scope('decoder'):
    decoder_cell = tf.contrib.rnn.MultiRNNCell(
        [tf.nn.rnn_cell.BasicLSTMCell(output_dim) for _ in range(rnn_layers)])
    # encoder_final_state is the decoder's initial state; the cell output feeds a Dense prediction layer
    decoder = BasicDecoder(decoder_cell, helper, encoder_final_state, Dense(Vocabulary_size))

# with the helper and decoder defined, dynamic_decode produces the results
# logits holds each batch's logit outputs and the sampled ids; final_seq_lengths gives each sequence's length
logits, decoder_final_state, final_seq_lengths = dynamic_decode(decoder)

# now define the loss for training, or the prediction output
if is_inferring:
    target_pre = tf.nn.softmax(logits.rnn_output)
else:
    target_y = tf.reshape(target_,[-1])
    logits_y = tf.reshape(logits.rnn_output,[-1,Vocabulary_size])

    # define the loss (the sparse version, since target_y holds ids, not one-hot vectors)
    cost = tf.losses.sparse_softmax_cross_entropy(labels=target_y, logits=logits_y)

    # clip gradients to guard against explosion
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
    # define the optimizer
    optimizer = tf.train.AdamOptimizer(0.01)
    train_step = optimizer.apply_gradients(zip(grads, tvars))

That is the basic chatbot structure. On top of it you can add an attention mechanism, a hierarchical structure, bidirectional RNNs and so on. Within this framework you could even encode and decode without RNNs at all, using high-performance CNNs, perhaps with a gate mechanism controlling how the CNN passes the sequence along; something to think about!
Since my task involves nothing discrete, embeddings are unnecessary, so I adapted the structure for it. What mainly needs to change?

  • First, drop the methods we no longer need from the inherited abstract class, e.g. the helper's sample;
  • GreedyEmbeddingHelper can be rewritten entirely along the lines of TrainingHelper: next_input is simply the previous prediction; note that control_flow_ops.cond() requires callable inputs
  • GreedyEmbeddingHelper's initial input becomes zeros; no start_token or end_token needed
  • BasicDecoder no longer needs the Dense layer; emit cell_output directly. Mind the output dtype: it was tf.int32 and must become tf.float32, or you get an error; drop sample_ids and friends from BasicDecoderOutput
  • dynamic_decode no longer needs beam_search etc.
  • watch out for the instance-type checks

Straight to the code.

"""
It got rather long, so here are just the key pieces.
"""
## Helper: TrainingHelper barely changes

class GreedyEmbeddingHelper(Helper):

  def __init__(self,seq_length, target_output,time_major=False):

    # seq_length controls how many steps the decoder emits
    self._sequence_length = seq_length
    target_output = ops.convert_to_tensor(target_output, name="tar_out")

    if not time_major:
        target_output = nest.map_structure(_transpose_batch_time, target_output)
    self._batch_size = array_ops.size(seq_length)

    self._start_inputs = nest.map_structure(
        lambda inp: array_ops.zeros_like(inp[0, :]), target_output)


  def initialize(self, name=None):
    finished = array_ops.tile([False], [self._batch_size])
    return (finished, self._start_inputs)

  def next_inputs(self, time, outputs, state, name=None):
    next_time = time + 1
    finished = (next_time >= self._sequence_length)
    all_finished = math_ops.reduce_all(finished)
    self.out = outputs
    next_inputs = control_flow_ops.cond(
        all_finished, lambda: self._start_inputs,
        lambda: self.out)
    return (finished, next_inputs, state)

## BasicDecoder

class BasicDecoderOutput(
    collections.namedtuple("BasicDecoderOutput", ("rnn_output",))):
  pass

class BasicDecoder(decoder.Decoder):
  """Basic sampling decoder."""

  def __init__(self, cell, helper, initial_state):
    self._cell = cell
    self._helper = helper
    self._initial_state = initial_state

  def initialize(self, name=None):
    return self._helper.initialize() + (self._initial_state,)

  def step(self, time, inputs, state, name=None):
    with ops.name_scope(name, "BasicDecoderStep", (time, inputs, state)):
      cell_outputs, cell_state = self._cell(inputs, state)

      (finished, next_inputs, next_state) = self._helper.next_inputs(time=time,outputs=cell_outputs,state=cell_state,)
    outputs = BasicDecoderOutput(cell_outputs)
    return (outputs, next_state, next_inputs, finished)

#dynamic_decode
"""
Most of the work is done by the helper and BasicDecoder, so little changes here. The main
issue is that dynamic_decode checks types such as isinstance(decoder, Decoder), and the
modified classes live under my own file's __main__, so I simply commented those checks out.
"""

That wraps up the whole flow. This is my first write-up; thanks everyone, and I hope it is of some help!
I am not from a related field, so there are bound to be mistakes, and some personal views here may not hold up; please do point them out.
The holiday is almost over, sigh...

If I get the chance, next time: building a Turing question-answering machine with GAN + RL!
