Title: Transformers: State-of-the-Art Natural Language Processing
Type: Software
Wolf, Thomas; Debut, Lysandre; Sanh, Victor; Chaumond, Julien; Delangue, Clement; Moi, Anthony; Cistac, Perric; Ma, Clara; Jernite, Yacine; Plu, Julien; Xu, Canwen; Le Scao, Teven; Gugger, Sylvain; Drame, Mariama; Lhoest, Quentin; Rush, Alexander M. (2020): Transformers: State-of-the-Art Natural Language Processing. Zenodo. Software. https://zenodo.org/record/5770483
Links
- Item record in Zenodo
- Digital object URL
Summary
New Model additions
Perceiver
Eight new models are released as part of the Perceiver implementation: PerceiverModel, PerceiverForMaskedLM, PerceiverForSequenceClassification, PerceiverForImageClassificationLearned, PerceiverForImageClassificationFourier, PerceiverForImageClassificationConvProcessing, PerceiverForOpticalFlow, PerceiverForMultimodalAutoencoding, in PyTorch.
The Perceiver IO model was proposed in Perceiver IO: A General Architecture for Structured Inputs & Outputs by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
Add Perceiver IO by @NielsRogge in https://github.com/huggingface/transformers/pull/14487
Compatible checkpoints can be found on the hub: https://huggingface.co/models?other=perceiver
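A minimal sketch of one of the new classes, using the deepmind/language-perceiver checkpoint from the hub link above (an illustrative choice); the byte-level tokenizer pads to the model's fixed input length:

```python
import torch
from transformers import PerceiverTokenizer, PerceiverForMaskedLM

# Checkpoint taken from the compatible-checkpoints link above.
tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
model = PerceiverForMaskedLM.from_pretrained("deepmind/language-perceiver")

text = "This is an incomplete sentence where some words are missing."
inputs = tokenizer(text, padding="max_length", return_tensors="pt")
with torch.no_grad():
    # Perceiver models take raw byte IDs through the `inputs` argument
    outputs = model(inputs=inputs["input_ids"], attention_mask=inputs["attention_mask"])
logits = outputs.logits  # one prediction per input byte
```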
mLUKE
The mLUKE tokenizer is added. The tokenizer can be used for the multilingual variant of LUKE.
The mLUKE model was proposed in mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. It's a multilingual extension of the LUKE model trained on the basis of XLM-RoBERTa.
Add mLUKE by @Ryou0634 in https://github.com/huggingface/transformers/pull/14640
Compatible checkpoints can be found on the hub: https://huggingface.co/models?other=luke
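A short sketch of the new tokenizer paired with a LUKE model, using the studio-ousia/mluke-base checkpoint from the hub link above (an illustrative choice); entity_spans marks entity mentions as character offsets:

```python
import torch
from transformers import MLukeTokenizer, LukeModel

tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base")
model = LukeModel.from_pretrained("studio-ousia/mluke-base")

text = "Beyoncé lives in Los Angeles."
# (17, 28) covers "Los Angeles"; spans are character offsets into the text
inputs = tokenizer(text, entity_spans=[(17, 28)], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
```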
ImageGPT
Three new models are released as part of the ImageGPT integration: ImageGPTModel, ImageGPTForCausalImageModeling, ImageGPTForImageClassification, in PyTorch.
The ImageGPT model was proposed in Generative Pretraining from Pixels by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. ImageGPT (iGPT) is a GPT-2-like model trained to predict the next pixel value, allowing for both unconditional and conditional image generation.
Add ImageGPT by @NielsRogge in https://github.com/huggingface/transformers/pull/14240
Compatible checkpoints can be found on the hub: https://huggingface.co/models?other=imagegpt
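A hedged sketch of the classification head on a dummy image; openai/imagegpt-small is one of the linked checkpoints, and the classification head on top of it is newly initialized, so it would need fine-tuning before producing meaningful predictions:

```python
import numpy as np
import torch
from transformers import ImageGPTFeatureExtractor, ImageGPTForImageClassification

# Checkpoint family taken from the compatible-checkpoints link above.
feature_extractor = ImageGPTFeatureExtractor.from_pretrained("openai/imagegpt-small")
model = ImageGPTForImageClassification.from_pretrained("openai/imagegpt-small", num_labels=10)

image = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)  # dummy RGB image
# The feature extractor maps pixels into the model's color-cluster vocabulary
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 10); head is randomly initialized
```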
QDQBert
Eight new models are released as part of the QDQBert implementation: QDQBertModel, QDQBertLMHeadModel, QDQBertForMaskedLM, QDQBertForSequenceClassification, QDQBertForNextSentencePrediction, QDQBertForMultipleChoice, QDQBertForTokenClassification, QDQBertForQuestionAnswering, in PyTorch.
The QDQBERT model can be referenced in Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
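QDQBERT inserts quantize/dequantize (QDQ) nodes into the network but is otherwise architecturally identical to BERT, so it can load any BERT checkpoint. A minimal sketch, assuming NVIDIA's pytorch-quantization toolkit is installed:

```python
from transformers import AutoTokenizer, QDQBertModel

# Requires the pytorch-quantization package; regular BERT weights load directly.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = QDQBertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Quantization-aware BERT inference", return_tensors="pt")
outputs = model(**inputs)
```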
Add QDQBert model and quantization examples of SQUAD task by @shangz-ai in https://github.com/huggingface/transformers/pull/14066
Semantic Segmentation models
The semantic segmentation models' API is unstable and bound to change between this version and the next.
The first semantic segmentation models are added. In semantic segmentation, the goal is to predict a class label for every pixel of an image. The models that are added are SegFormer (by NVIDIA) and BEiT (by Microsoft Research). BEiT was already available in the library, but this release includes the model with a semantic segmentation head.
The SegFormer model was proposed in SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. The model consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on image segmentation benchmarks such as ADE20K and Cityscapes.
The BEiT model was proposed in BEiT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong, Furu Wei. Rather than pre-training the model to predict the class of an image (as done in the original ViT paper), BEiT models are pre-trained to predict visual tokens from the codebook of OpenAI's DALL-E model given masked patches.
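A minimal inference sketch for the new segmentation heads; the ADE20K-finetuned SegFormer checkpoint named here is an illustrative assumption:

```python
import numpy as np
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

name = "nvidia/segformer-b0-finetuned-ade-512-512"  # illustrative checkpoint
feature_extractor = SegformerFeatureExtractor.from_pretrained(name)
model = SegformerForSemanticSegmentation.from_pretrained(name)

image = Image.fromarray(np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8))
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, height/4, width/4)
seg_map = logits.argmax(dim=1)  # upsample to the input size for per-pixel labels
```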
Add SegFormer by @NielsRogge in https://github.com/huggingface/transformers/pull/14019
Add BeitForSemanticSegmentation by @NielsRogge in https://github.com/huggingface/transformers/pull/14096
Vision-text dual encoder
Adds the VisionTextDualEncoder model in PyTorch and Flax, which can load any pre-trained vision (ViT, DeiT, BEiT, CLIP's vision model) and text (BERT, RoBERTa) model in the library for CLIP-like vision-text tasks.
This model pairs a vision encoder with a text encoder and adds projection layers that map both embeddings into a shared space of matching dimension, which can then be used to align the two modalities.
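A sketch of pairing two library checkpoints; the ViT/BERT combination is an arbitrary illustration, and the projection layers come out randomly initialized, so the model needs CLIP-style contrastive fine-tuning before use:

```python
from transformers import (
    AutoFeatureExtractor,
    AutoTokenizer,
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
)

model = VisionTextDualEncoderModel.from_vision_text_pretrained(
    "google/vit-base-patch16-224", "bert-base-uncased"  # illustrative pairing
)
feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
processor = VisionTextDualEncoderProcessor(feature_extractor, tokenizer)
# Fine-tune contrastively on (image, text) pairs before using the similarity scores.
```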
VisionTextDualEncoder by @patil-suraj in https://github.com/huggingface/transformers/pull/13511
CodeParrot
CodeParrot, a model trained to generate code, has been open-sourced in the research projects by @lvwerra.
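A generation sketch; the lvwerra/codeparrot checkpoint name reflects where the research-project model was published at release time and is an assumption:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="lvwerra/codeparrot")  # assumed checkpoint name
print(generator("def fibonacci(n):", max_length=64)[0]["generated_text"])
```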
Add CodeParrot 🦜 codebase by @lvwerra in https://github.com/huggingface/transformers/pull/14536
Language model support for ASR
Add language model support for CTC models by @patrickvonplaten in https://github.com/huggingface/transformers/pull/14339
Language model boosted decoding is added for all CTC models via https://github.com/kensho-technologies/pyctcdecode and https://github.com/kpu/kenlm. See https://huggingface.co/patrickvonplaten/wav2vec2-xlsr-53-es-kenlm for more information.
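A decoding sketch, assuming the processor class introduced here is Wav2Vec2ProcessorWithLM and that the linked checkpoint bundles a kenlm language model; both assumptions are worth verifying against the model card:

```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

name = "patrickvonplaten/wav2vec2-xlsr-53-es-kenlm"  # checkpoint linked above
processor = Wav2Vec2ProcessorWithLM.from_pretrained(name)
model = Wav2Vec2ForCTC.from_pretrained(name)

speech = torch.randn(16000)  # one second of dummy 16 kHz audio
inputs = processor(speech.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# batch_decode runs pyctcdecode with the bundled kenlm LM instead of plain argmax
transcription = processor.batch_decode(logits.numpy()).text
```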
Flax-specific additions
Adds a Flax version of the vision encoder-decoder model, and a Flax version of GPT-J.
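A construction sketch of the new Flax vision encoder-decoder (added in the PRs below); the ViT/GPT-2 pairing is an arbitrary illustration, and the cross-attention weights start out randomly initialized:

```python
import numpy as np
from transformers import FlaxVisionEncoderDecoderModel, ViTFeatureExtractor

model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "gpt2"  # illustrative encoder/decoder pair
)
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")

image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # dummy image
pixel_values = feature_extractor(images=image, return_tensors="np").pixel_values
# Fine-tune on (image, caption) pairs before generating; cross-attention is untrained.
```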
Add FlaxVisionEncoderDecoderModel by @ydshieh in https://github.com/huggingface/transformers/pull/13359
FlaxGPTJ by @patil-suraj in https://github.com/huggingface/transformers/pull/14396
TensorFlow-specific additions
Vision transformers are here! Convnets are so 2012, now that ML is converging on self-attention as a universal model.
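A sketch for TFViTModel (added in the PR just below); from_pt=True converts PyTorch weights on the fly, an assumption in case this checkpoint lacks native TF weights:

```python
import numpy as np
from transformers import ViTFeatureExtractor, TFViTModel

feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
model = TFViTModel.from_pretrained("google/vit-base-patch16-224-in21k", from_pt=True)

image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # dummy image
inputs = feature_extractor(images=image, return_tensors="tf")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, 197, 768): 196 patches + [CLS]
```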
Add TFViTModel by @ydshieh in https://github.com/huggingface/transformers/pull/13778
Want to handle real-world tables, where text and data are positioned in a 2D grid? TAPAS is now here for both TensorFlow and PyTorch.
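A TensorFlow sketch for the TAPAS port in the PR below; the WTQ-finetuned checkpoint is an illustrative choice, and the TF port also needs tensorflow_probability installed (an assumption worth checking in the docs):

```python
import pandas as pd
from transformers import TapasTokenizer, TFTapasForQuestionAnswering

name = "google/tapas-base-finetuned-wtq"  # illustrative checkpoint
tokenizer = TapasTokenizer.from_pretrained(name)
model = TFTapasForQuestionAnswering.from_pretrained(name)

# TapasTokenizer expects every table cell as a string
table = pd.DataFrame({"City": ["Paris", "London"], "Population": ["2161000", "8982000"]})
inputs = tokenizer(table=table, queries=["Which city has the larger population?"],
                   padding="max_length", return_tensors="tf")
outputs = model(**inputs)
```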
Tapas tf by @kamalkraj in https://github.com/huggingface/transformers/pull/13393
Automatic checkpointing and cloud saves to the HuggingFace Hub during training are now live, allowing you to resume training when it's interrupted, even if your initial instance is terminated. This is an area of very active development; watch this space for future developments, including automatic model card creation and more.
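A sketch of the resumable-checkpointing callback added in the PR below; the checkpoint=True argument name and the surrounding training objects (model, train_dataset) are assumptions:

```python
from transformers import AutoTokenizer
from transformers.keras_callbacks import PushToHubCallback

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# checkpoint=True (assumed name) uploads full training state each epoch so an
# interrupted run can resume from the last state saved on the Hub.
callback = PushToHubCallback(
    output_dir="./model-ckpts",  # hypothetical local dir / Hub repo name
    save_strategy="epoch",
    checkpoint=True,
    tokenizer=tokenizer,
)
# Then pass it to Keras training as usual:
# model.fit(train_dataset, epochs=3, callbacks=[callback])
```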
Add model checkpointing to push_to_hub and PushToHubCallback by @Rocketknight1 in https://github.com/huggingface/transformers/pull/14492
Auto-processors
A new class to automatically select processors is added: AutoProcessor. It can be used for all models that require a processor, in both computer vision and audio.
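For example, AutoProcessor resolves to the right processor class from the checkpoint's config, whatever the modality:

```python
from transformers import AutoProcessor

# Resolves to Wav2Vec2Processor for an audio checkpoint...
asr_processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
# ...and to CLIPProcessor for a vision-text checkpoint.
clip_processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
```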
Auto processor by @sgugger in https://github.com/huggingface/transformers/pull/14465
New documentation frontend
A new documentation frontend is out for the transformers library! The goal of this documentation is to be better aligned with the rest of our website, and it contains tools to improve readability. The documentation can now be written in Markdown rather than RST.
Doc new front by @sgugger in https://github.com/huggingface/transformers/pull/14590
LayoutLM Improvements
The LayoutLMv2 feature extractor now supports non-English languages, and LayoutXLM gets its own processor.
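A sketch of non-English OCR with the feature extractor; the ocr_lang argument name (taking a Tesseract language code such as "deu") is an assumption based on this release's description:

```python
from transformers import LayoutLMv2FeatureExtractor

# ocr_lang (assumed name) is forwarded to Tesseract; "deu" selects German OCR.
feature_extractor = LayoutLMv2FeatureExtractor(apply_ocr=True, ocr_lang="deu")
```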
LayoutLMv2FeatureExtractor now supports non-English languages when applying Tesseract OCR. by @Xargonus in https://github.com/huggingface/transformers/pull/14514
Add LayoutXLMProcessor (and LayoutXLMTokenizer, LayoutXLMTokenizerFast) by @NielsRogge in https://github.com/huggingface/transformers/pull/14115
Trainer Improvements
You can now take advantage of Ampere hardware with the Trainer. A minimal TrainingArguments sketch is shown below, followed by the new flags:
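A minimal sketch of the equivalent TrainingArguments, assuming an Ampere GPU and a CUDA/PyTorch build with bfloat16 and TF32 support:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    bf16=True,            # train/eval in mixed bfloat16 precision
    bf16_full_eval=True,  # run evaluation fully in bfloat16
    tf32=True,            # toggle TF32 mode on Ampere
)
```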
- --bf16: do training or eval in mixed precision of bfloat16
- --bf16_full_eval: do eval in full bfloat16
- --tf32: control having TF32 mode on/off
Improvements and bugfixes
- Replace assertions with RuntimeError exceptions by @ddrm86 in https://github.com/huggingface/transformers/pull/14186
- Adding batch_size support for (almost) all pipelines by @Narsil in https://github.com/huggingface/transformers/pull/13724
- Remove n_ctx from configs by @thomasw21 in https://github.com/huggingface/transformers/pull/14165
- Add BlenderbotTokenizerFast by @stancld in https://github.com/huggingface/transformers/pull/13720
- Adding handle_long_generation paramters for text-generation pipeline. by @Narsil in https://github.com/huggingface/transformers/pull/14118
- Fix pipeline tests env and fetch by @sgugger in https://github.com/huggingface/transformers/pull/14209
- Generalize problem_type to all sequence classification models by @sgugger in https://github.com/huggingface/transformers/pull/14180
- Fixing image segmentation with inference mode. by @Narsil in https://github.com/huggingface/transformers/pull/14204
- Add a condition for checking labels by @hrxorxm in https://github.com/huggingface/transformers/pull/14211
- Torch 1.10 by @LysandreJik in https://github.com/huggingface/transformers/pull/14169
- Add more missing models to models/init.py by @ydshieh in https://github.com/huggingface/transformers/pull/14177
- Clarify QA examples by @NielsRogge in https://github.com/huggingface/transformers/pull/14172
- Fixing image-segmentation tests. by @Narsil in https://github.com/huggingface/transformers/pull/14223
- Tensor location is already handled by @Narsil in https://github.com/huggingface/transformers/pull/14224
- Raising exceptions instead of using assertions for few models by @pdcoded in https://github.com/huggingface/transformers/pull/14219
- Fix the write problem in trainer.py comment by @wmathor in https://github.com/huggingface/transformers/pull/14202
- [GPTJ] enable common tests and few fixes by @patil-suraj in https://github.com/huggingface/transformers/pull/14190
- improving efficiency of mlflow metric logging by @wamartin-aml in https://github.com/huggingface/transformers/pull/14232
- Fix generation docstring by @qqaatw in https://github.com/huggingface/transformers/pull/14216
- Fix test_configuration_tie in FlaxEncoderDecoderModelTest by @ydshieh in https://github.com/huggingface/transformers/pull/14076
- [Tests] Fix DistilHubert path by @anton-l in https://github.com/huggingface/transformers/pull/14245
- Add PushToHubCallback in main init by @sgugger in https://github.com/huggingface/transformers/pull/14246
- Fixes Beit training for PyTorch 1.10+ by @sgugger in https://github.com/huggingface/transformers/pull/14249
- Added Beit model ouput class by @lumliolum in https://github.com/huggingface/transformers/pull/14133
- Update Transformers to huggingface_hub >= 0.1.0 by @sgugger in https://github.com/huggingface/transformers/pull/14251
- Add cross attentions to TFGPT2Model by @ydshieh in https://github.com/huggingface/transformers/pull/14038
- [Wav2Vec2] Adapt conversion script by @patrickvonplaten in https://github.com/huggingface/transformers/pull/14258
- Put load_image function in image_utils.py & fix image rotation issue by @mishig25 in https://github.com/huggingface/transformers/pull/14062
- minimal fixes to run DataCollatorForWholeWordMask with return_tensors="np" and return_tensors="tf" by @dwyatte in https://github.com/huggingface/transformers/pull/13891
- Adding support for truncation parameter on feature-extraction pipeline. by @Narsil in https://github.com/huggingface/transformers/pull/14193
- Fix of issue #13327: Wrong weight initialization for TF t5 model by @dshirron in https://github.com/huggingface/transformers/pull/14241
- Fixing typo in error message. by @Narsil in https://github.com/huggingface/transformers/pull/14226
- Pin Keras cause they messed their release by @sgugger in https://github.com/huggingface/transformers/pull/14262
- Quality explain by @sgugger in https://github.com/huggingface/transformers/pull/14264
- Add more instructions to the release guide by @sgugger in https://github.com/huggingface/transformers/pull/14263
- Fixing slow pipeline tests by @Narsil in https://github.com/huggingface/transformers/pull/14260
- Fixing mishandling of ignore_labels. by @Narsil in https://github.com/huggingface/transformers/pull/14274
- improve rewrite state_dict missing _metadata by @changwangss in https://github.com/huggingface/transformers/pull/14276
- Removing Keras version pinning by @Rocketknight1 in https://github.com/huggingface/transformers/pull/14280
- Pin TF until tests are fixed by @sgugger in https://github.com/huggingface/transformers/pull/14283
- [Hubert Docs] Make sure example uses a fine-tuned model by @patrickvonplaten in https://github.com/huggingface/transformers/pull/14291
- Add new LFS prune API by @sgugger in https://github.com/huggingface/transformers/pull/14294
- Remove DPRPretrainedModel from docs by @xhlulu in https://github.com/huggingface/transformers/pull/14300
- Handle long answer needs to be updated. by @Narsil in https://github.com/huggingface/transformers/pull/14279
- [tests] Fix SegFormer and BEiT tests by @NielsRogge in https://github.com/huggingface/transformers/pull/14289
- Fix typo on PPLM example README by @Beomi in https://github.com/huggingface/transformers/pull/14287
- [Marian Conversion] Fix eos_token_id conversion in conversion script by @patrickvonplaten in https://github.com/huggingface/transformers/pull/14320
- [Tests] Update audio classification tests to support torch 1.10 by @anton-l in
More information
- DOI: 10.5281/zenodo.5770483
Dates
- Publication date: 2020
- Issued: October 01, 2020
Notes
Other: If you use this software, please cite it using these metadata.
Rights
- Open Access (info:eu-repo/semantics/openAccess)
Format
electronic resource
Related items
| Relationship | URI |
|---|---|
| IsSupplementTo | https://github.com/huggingface/transformers/tree/v4.13.0 |
| IsVersionOf | https://doi.org/10.5281/zenodo.3385997 |
| IsPartOf | https://zenodo.org/communities/zenodo |