June 12, 2021

265 words 2 mins read

NielsRogge/Transformers-Tutorials

This repository contains demos I made with the Transformers library by HuggingFace.

repo name: NielsRogge/Transformers-Tutorials
repo link: https://github.com/NielsRogge/Transformers-Tutorials
homepage: (none listed)
language: Jupyter Notebook
size (curr.): 57943 kB
stars (curr.): 370
created: 2020-08-31
license: (none listed)

Transformers-Tutorials

Hi there!

This repository contains demos I made with the Transformers library by 🤗 HuggingFace.

Currently, it contains the following demos:

  • BERT (paper):
    • fine-tuning BertForTokenClassification on a named entity recognition (NER) dataset Open In Colab
  • LayoutLM (paper):
    • fine-tuning LayoutLMForTokenClassification on the FUNSD dataset Open In Colab
    • fine-tuning LayoutLMForSequenceClassification on the RVL-CDIP dataset Open In Colab
    • adding image embeddings to LayoutLM during fine-tuning on the FUNSD dataset Open In Colab
  • TAPAS (paper):
  • Vision Transformer (paper):
    • performing inference with ViTForImageClassification Open In Colab (a minimal ViT sketch follows this list)
    • fine-tuning ViTForImageClassification on CIFAR-10 using PyTorch Lightning Open In Colab
    • fine-tuning ViTForImageClassification on CIFAR-10 using the 🤗 Trainer Open In Colab
  • LUKE (paper):
    • fine-tuning LukeForEntityPairClassification on a custom relation extraction dataset using PyTorch Lightning Open In Colab (a minimal LUKE sketch follows this list)
  • DETR (paper):
    • performing inference with DetrForObjectDetection Open In Colab (a minimal DETR sketch follows this list)
    • fine-tuning DetrForObjectDetection on a custom object detection dataset Open In Colab
    • evaluating DetrForObjectDetection on the COCO detection 2017 validation set Open In Colab
    • performing inference with DetrForSegmentation Open In Colab
    • fine-tuning DetrForSegmentation on COCO panoptic 2017 Open In Colab
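
To give a flavour of what the ViT notebooks cover, here is a minimal inference sketch with ViTForImageClassification. It is not the notebooks' own code: the checkpoint name and image path are assumptions chosen for illustration.

```python
from PIL import Image
import torch
from transformers import ViTFeatureExtractor, ViTForImageClassification

# ImageNet-pretrained checkpoint from the Hub (assumed for illustration)
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

image = Image.open("cat.jpg").convert("RGB")  # any local image; path is an assumption
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```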
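
The LUKE notebook fine-tunes LukeForEntityPairClassification on a custom relation extraction dataset with PyTorch Lightning; the sketch below instead runs the publicly released TACRED-fine-tuned checkpoint, so the checkpoint name, sentence, and entity spans are illustrative assumptions rather than the notebook's code.

```python
from transformers import LukeTokenizer, LukeForEntityPairClassification

checkpoint = "studio-ousia/luke-large-finetuned-tacred"  # public checkpoint, assumed for illustration
tokenizer = LukeTokenizer.from_pretrained(checkpoint)
model = LukeForEntityPairClassification.from_pretrained(checkpoint)

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7), (17, 28)]  # character spans of the head and tail entities

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)
predicted_idx = outputs.logits.argmax(-1).item()
print("Predicted relation:", model.config.id2label[predicted_idx])
```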
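
Similarly, here is a minimal DETR sketch for plain object-detection inference with DetrForObjectDetection; the checkpoint, image path, and 0.9 confidence threshold are illustrative assumptions, not the notebook's exact code.

```python
from PIL import Image
import torch
from transformers import DetrFeatureExtractor, DetrForObjectDetection

# COCO-pretrained checkpoint from the Hub (assumed for illustration)
feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

image = Image.open("street.jpg").convert("RGB")  # path is an assumption
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# rescale the normalized predicted boxes to the original image size (height, width)
target_sizes = torch.tensor([image.size[::-1]])
results = feature_extractor.post_process(outputs, target_sizes)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    if score > 0.9:  # keep only confident detections; threshold is an arbitrary choice
        print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```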

… more to come! 🤗

If you have any questions regarding these demos, feel free to open an issue on this repository.

Btw, I was also the main contributor behind adding the Vision Transformer (ViT) by Google AI, Data-efficient Image Transformers (DeiT) by Facebook AI, TAbular PArSing (TAPAS) by Google AI, LUKE by Studio Ousia, and DEtection TRansformers (DETR) by Facebook AI to the library, so each of them was an incredible learning experience. I can recommend contributing an AI algorithm to the library to anyone!
