April 29, 2019

openai/finetune-transformer-lm

Code and model for the paper “Improving Language Understanding by Generative Pre-Training”

| | |
| --- | --- |
| repo name | openai/finetune-transformer-lm |
| repo link | https://github.com/openai/finetune-transformer-lm |
| homepage | https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf |
| language | Python |
| size (curr.) | 423329 kB |
| stars (curr.) | 1459 |
| created | 2018-06-11 |
| license | MIT License |

Status: Archive (code is provided as-is, no updates expected)

finetune-transformer-lm

Code and model for the paper “Improving Language Understanding by Generative Pre-Training”

Currently this code implements the ROCStories Cloze Test result reported in the paper, which can be reproduced by running:

python train.py --dataset rocstories --desc rocstories --submit --analysis --data_dir [path to data here]
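
For context, the paper fine-tunes the pre-trained transformer with a task-specific classification head while keeping an auxiliary language-modeling objective, combining them as L3 = L2 + λ·L1 with λ = 0.5. Below is a minimal sketch of that combined loss in TF 1.x (the era this codebase targets); the function name and tensor arguments are illustrative, not the repo's actual API:

```python
import tensorflow as tf  # TF 1.x

def finetune_loss(task_logits, task_labels, lm_logits, lm_targets, lm_coef=0.5):
    """Hypothetical helper: task loss plus weighted auxiliary LM loss."""
    # Classification loss on the task head (e.g., choosing the correct
    # ROCStories ending among the candidates).
    task_loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=task_labels, logits=task_logits))
    # Auxiliary language-modeling loss: next-token prediction over the input.
    lm_loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=lm_targets, logits=lm_logits))
    # Paper's combined objective: L3 = L2 + lambda * L1, lambda = 0.5.
    return task_loss + lm_coef * lm_loss
```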

Note: the code is currently non-deterministic due to various GPU ops. The median accuracy of 10 runs with this codebase (using default hyperparameters) is 85.8%, slightly lower than the paper's reported single run of 86.5%.
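
Seeding can narrow, though not eliminate, this run-to-run variance. A minimal sketch, assuming the TF 1.x API of this codebase; the `set_seed` helper below is illustrative, not part of the repo:

```python
import random

import numpy as np
import tensorflow as tf  # TF 1.x

def set_seed(seed=42):
    # Seed the Python, NumPy, and TensorFlow graph-level RNGs.
    # Some GPU ops remain non-deterministic, so individual runs
    # can still differ slightly even with fixed seeds.
    random.seed(seed)
    np.random.seed(seed)
    tf.set_random_seed(seed)
```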

The ROCStories dataset can be downloaded from the associated website.
