TensorFlow Estimator can implement fine-tuning by using hooks.


There are two approaches to implementing fine-tuning.
1. Define the model inside model_fn, then assign the pretrained values to it directly:

def model_fn(features, labels, mode, params):
    # ...
    # finetune: initialize variables from a pretrained checkpoint, but only
    # when model_dir does not already contain a checkpoint of its own
    if params.checkpoint_path and (not tf.train.latest_checkpoint(params.model_dir)):
        if tf.gfile.IsDirectory(params.checkpoint_path):
            checkpoint_path = tf.train.latest_checkpoint(params.checkpoint_path)
        else:
            checkpoint_path = params.checkpoint_path

        tf.train.init_from_checkpoint(
            ckpt_dir_or_file=checkpoint_path,
            assignment_map={params.checkpoint_scope: params.checkpoint_scope}  # e.g. 'OptimizeLoss/': 'OptimizeLoss/'
        )
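Note that assignment_map is not limited to identity mappings: it can also rename scopes when the pretrained graph and the new graph are laid out differently. A minimal sketch, where 'old_scope/' and 'new_scope/' are hypothetical scope names:

# Restore every checkpoint variable under 'old_scope/' into the current
# graph's 'new_scope/' (both scope names are hypothetical placeholders).
tf.train.init_from_checkpoint(
    ckpt_dir_or_file=checkpoint_path,
    assignment_map={'old_scope/': 'new_scope/'}
)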
2. Use hooks.
When defining a tf.contrib.learn.Experiment, hooks can be passed in through the train_monitors parameter:

# Define the experiment
experiment = tf.contrib.learn.Experiment(
    estimator=estimator,  # Estimator
    train_input_fn=train_input_fn,  # First-class function
    eval_input_fn=eval_input_fn,  # First-class function
    train_steps=params.train_steps,  # Minibatch steps
    min_eval_frequency=params.eval_min_frequency,  # Eval frequency
    # train_monitors=[],  # Hooks for training
    # eval_hooks=[eval_input_hook],  # Hooks for evaluation
    eval_steps=params.eval_steps  # Use evaluation feeder until it is empty
)
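For example, the RestoreCheckpointHook defined at the end of this post could be attached here (a minimal sketch; the params fields are hypothetical):

# Pass the custom hook (defined below) through train_monitors.
# params.exclude_scope_patterns / params.include_scope_patterns are
# hypothetical comma-separated pattern strings.
experiment = tf.contrib.learn.Experiment(
    estimator=estimator,
    train_input_fn=train_input_fn,
    eval_input_fn=eval_input_fn,
    train_monitors=[RestoreCheckpointHook(
        params.checkpoint_path,
        params.exclude_scope_patterns,
        params.include_scope_patterns)]
)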
Alternatively, hooks can be passed when defining the tf.estimator.EstimatorSpec, via the training_chief_hooks parameter.
Personally, I think this is the best option: define the hooks on the Estimator side, and let the Experiment concern itself only with the shape of the experiment (number of training steps, evaluation steps, and so on).

def model_fn(features, labels, mode, params):

    # ...

    return tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=predictions,
        loss=loss,
        train_op=train_op,
        eval_metric_ops=eval_metric_ops,
        # scaffold=get_scaffold(),
        # training_chief_hooks=None
    )
Here is a rundown of tf.estimator.EstimatorSpec: the object describes every aspect of the model, including:

The current mode:
mode: A ModeKeys. Specifies if this is training, evaluation or prediction.

The computation graph:
predictions: Predictions Tensor or dict of Tensor.
loss: Training loss Tensor. Must be either scalar, or with shape [1].
train_op: Op for the training step.
eval_metric_ops: Dict of metric results keyed by name. The values of the dict are the results of calling a metric function, namely a (metric_tensor, update_op) tuple. metric_tensor should be evaluated without any impact on state (typically a pure computation based on variables). For example, it should not trigger the update_op or require any input fetching. (A concrete example follows this list.)

The export policy:
export_outputs: Describes the output signatures to be exported to SavedModel and used during serving. A dict {name: output} where:
name: An arbitrary name for this output.
output: An ExportOutput object such as ClassificationOutput, RegressionOutput, or PredictOutput. Single-headed models only need to specify one entry in this dictionary. Multi-headed models should specify one entry for each head, one of which must be named using signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY.

Chief hooks (hooks run at training time for things like saving the model, e.g. CheckpointSaverHook, or restoring it):
training_chief_hooks: Iterable of tf.train.SessionRunHook objects to run on the chief worker during training.

Worker hooks (monitoring hooks run at training time, e.g. NanTensorHook and LoggingTensorHook):
training_hooks: Iterable of tf.train.SessionRunHook objects to run on all workers during training.

Initialization and the saver:
scaffold: A tf.train.Scaffold object that can be used to set initialization, saver, and more to be used in training.

Evaluation hooks:
evaluation_hooks: Iterable of tf.train.SessionRunHook objects to run during evaluation.
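To make the eval_metric_ops contract concrete, here is a minimal sketch: tf.metrics.accuracy already returns the (metric_tensor, update_op) pair the dict expects (predicted_classes is a hypothetical tensor of predicted class ids):

# tf.metrics.accuracy returns (accuracy_tensor, update_op), which is
# exactly the tuple eval_metric_ops expects for each named metric.
eval_metric_ops = {
    'accuracy': tf.metrics.accuracy(labels=labels,
                                    predictions=predicted_classes)
}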
A custom hook looks like this:

import os

import tensorflow as tf

slim = tf.contrib.slim


class RestoreCheckpointHook(tf.train.SessionRunHook):
    def __init__(self,
                 checkpoint_path,
                 exclude_scope_patterns,
                 include_scope_patterns):
        tf.logging.info("Create RestoreCheckpointHook.")
        self.checkpoint_path = checkpoint_path

        self.exclude_scope_patterns = None if (not exclude_scope_patterns) else exclude_scope_patterns.split(',')
        self.include_scope_patterns = None if (not include_scope_patterns) else include_scope_patterns.split(',')

    def begin(self):
        # You can add ops to the graph here.
        print('Before starting the session.')

        # 1. Create the saver.
        # (The same filtering could be done by hand: split the exclude scopes
        # on ',' and keep every model variable whose op.name does not start
        # with one of them. filter_variables does this for us.)
        variables_to_restore = tf.contrib.framework.filter_variables(
            slim.get_model_variables(),
            include_patterns=self.include_scope_patterns,  # e.g. ['Conv']
            exclude_patterns=self.exclude_scope_patterns,  # e.g. ['biases', 'Logits']

            # If True (default), performs re.search to find matches
            # (i.e. pattern can match any substring of the variable name).
            # If False, performs re.match (i.e. regexp should match from
            # the beginning of the variable name).
            reg_search=True
        )
        self.saver = tf.train.Saver(variables_to_restore)

    def after_create_session(self, session, coord):
        # When this is called, the graph is finalized and
        # ops can no longer be added to it.
        print('Session created.')

        tf.logging.info('Fine-tuning from %s' % self.checkpoint_path)
        self.saver.restore(session, os.path.expanduser(self.checkpoint_path))
        tf.logging.info('End fine-tuning from %s' % self.checkpoint_path)

    def before_run(self, run_context):
        # Nothing extra to fetch on each step; return
        # SessionRunArgs(...) here to request tensors.
        return None

    def after_run(self, run_context, run_values):
        # Call run_context.request_stop() here if the loop should stop.
        pass

    def end(self, session):
        # Nothing to clean up when the session ends.
        pass
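Putting it together, the hook can then be handed to the EstimatorSpec (a minimal sketch; the params fields are hypothetical):

def model_fn(features, labels, mode, params):
    # ... build predictions, loss, train_op, eval_metric_ops ...

    # Hypothetical params fields: where the pretrained checkpoint lives
    # and which variable scopes to include/exclude when restoring.
    fine_tune_hook = RestoreCheckpointHook(
        params.checkpoint_path,
        params.exclude_scope_patterns,
        params.include_scope_patterns)

    return tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=predictions,
        loss=loss,
        train_op=train_op,
        eval_metric_ops=eval_metric_ops,
        training_chief_hooks=[fine_tune_hook])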
That is how TensorFlow Estimator uses hooks to implement fine-tuning, and everything this small write-up has to share. I hope it serves as a useful reference. Thank you.