[P2] Week 2 Day 2


What are my study goals today?
  • xlm-roberta-large
  • Use all 5 submission opportunities

What did I do today to achieve my learning goals?
  • accuracy : 76.0000%
  • training_args = TrainingArguments(
      output_dir='./results',
      save_total_limit=5,
      save_steps=100,
      num_train_epochs=10,
      learning_rate=5e-5,
      per_device_train_batch_size=64,
      per_device_eval_batch_size=64,
      warmup_steps=300,
      weight_decay=0.01,
      logging_dir='./logs',
      logging_steps=100,
      evaluation_strategy='steps',
      eval_steps=100,
      fp16=True,
      dataloader_num_workers=4,
      label_smoothing_factor=0.5
    )
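The config above sets `label_smoothing_factor=0.5`, which is unusually aggressive. A minimal sketch of what standard label smoothing does to a one-hot target (the 3-class example and the helper function are hypothetical, not part of the project code):

```python
# Standard label smoothing: mix the one-hot target with a uniform
# distribution. With factor f and n classes, each target probability
# becomes p*(1-f) + f/n. (Illustrative helper, not from the project.)
def smooth_labels(one_hot, factor):
    n = len(one_hot)
    return [p * (1 - factor) + factor / n for p in one_hot]

# With factor=0.5 and 3 classes, the hard "1" drops to ~0.667 and
# each "0" rises to ~0.167, so the model is never pushed toward
# fully confident predictions.
smoothed = smooth_labels([1.0, 0.0, 0.0], 0.5)
```

When `label_smoothing_factor` is set, Hugging Face's `Trainer` applies the equivalent smoothing inside the loss computation, so the dataset itself keeps hard labels.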
  • accuracy : 76.7000%
  •   training_args = TrainingArguments(
        output_dir='./results',
        save_total_limit=5,
        save_steps=100,
        num_train_epochs=10,
        learning_rate=5e-5,
        per_device_train_batch_size=32,
        per_device_eval_batch_size=32,
        warmup_steps=300,
        weight_decay=0.01,
        logging_dir='./logs',
        logging_steps=100,
        evaluation_strategy='steps',
        eval_steps=100,
        fp16=True,
        dataloader_num_workers=4,
        label_smoothing_factor=0.5
      )
  • accuracy : 77.7000%
  • ensemble of 3 models
  • accuracy : 79.0000%
  •   training_args = TrainingArguments(
        output_dir='./results',
        save_total_limit=5,
        save_steps=100,
        num_train_epochs=15,
        learning_rate=1e-5,
        per_device_train_batch_size=32,
        per_device_eval_batch_size=32,
        warmup_steps=300,
        weight_decay=0.01,
        logging_dir='./logs',
        logging_steps=100,
        evaluation_strategy='steps',
        eval_steps=100,
        dataloader_num_workers=4,
        label_smoothing_factor=0.5
      )
  • accuracy : 78.7000%
  •   training_args = TrainingArguments(
        output_dir='./results',
        save_total_limit=5,
        save_steps=100,
        num_train_epochs=15,
        learning_rate=1e-5,
        per_device_train_batch_size=32,
        per_device_eval_batch_size=32,
        warmup_steps=300,
        weight_decay=0.01,
        logging_dir='./logs',
        logging_steps=100,
        evaluation_strategy='steps',
        eval_steps=100,
        dataloader_num_workers=4,
        label_smoothing_factor=0.5
      )
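The ensemble entry above doesn't say how the three models were combined; one common option is soft voting, i.e. averaging the class probabilities of the three checkpoints and taking the argmax. A minimal sketch with toy numbers (the probabilities below are made up for illustration, not actual model outputs):

```python
import numpy as np

def soft_vote(prob_arrays):
    """Average per-model class probabilities, then pick the argmax class."""
    avg = np.mean(prob_arrays, axis=0)  # shape: (n_samples, n_classes)
    return avg.argmax(axis=1)

# Toy probabilities for 3 models, 2 samples, 3 classes.
m1 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
m2 = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
m3 = np.array([[0.3, 0.6, 0.1], [0.2, 0.2, 0.6]])
preds = soft_vote([m1, m2, m3])  # one predicted class id per sample
```

Soft voting can flip a prediction even when most individual models agree, because it weighs confidence rather than just counting votes.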
How did I improve the model today?
  • I improved the model by changing the hyperparameters of xlm-roberta-large and the baseline code.
Will I try anything different tomorrow?
  • Tomorrow I will try a policy of using 100% of the train dataset, and a policy of merging in a percentage of the test dataset.
  • Closing
    I'm looking forward to growing even more tomorrow than today. See you again tomorrow!
    Thank you for reading!