| field | value |
| --- | --- |
| repo name | NielsRogge/Transformers-Tutorials |
| repo link | https://github.com/NielsRogge/Transformers-Tutorials |
| homepage | |
| language | Jupyter Notebook |
| size (curr.) | 57943 kB |
| stars (curr.) | 370 |
| created | 2020-08-31 |
| license | |
Transformers-Tutorials
Hi there!
This repository contains demos I made with the Transformers library by 🤗 HuggingFace.
Currently, it contains the following demos:
- BERT (paper):
- LayoutLM (paper):
- TAPAS (paper):
  - fine-tuning `TapasForQuestionAnswering` on the Microsoft Sequential Question Answering (SQA) dataset
  - evaluating `TapasForSequenceClassification` on the Table Fact Checking (TabFact) dataset
  - fine-tuning
- Vision Transformer (paper):
- LUKE (paper):
- DETR (paper):

… more to come! 🤗
If you have any questions regarding these demos, feel free to open an issue on this repository.
By the way, I was also the main contributor who added the Vision Transformer (ViT) by Google AI, Data-efficient Image Transformers (DeiT) by Facebook AI, TAbular PArSing (TAPAS) by Google AI, LUKE by Studio Ousia and DEtection TRansformers (DETR) by Facebook AI to the library, so working on all of them was an incredible learning experience. I can recommend contributing an AI algorithm to the library to anyone!