This is a limited proof of concept to search for research data, not a production system.


Title: Transformers: State-of-the-Art Natural Language Processing

Type: Software

Wolf, Thomas; Debut, Lysandre; Sanh, Victor; Chaumond, Julien; Delangue, Clement; Moi, Anthony; Cistac, Perric; Ma, Clara; Jernite, Yacine; Plu, Julien; Xu, Canwen; Le Scao, Teven; Gugger, Sylvain; Drame, Mariama; Lhoest, Quentin; Rush, Alexander M. (2020): Transformers: State-of-the-Art Natural Language Processing. Zenodo. Software. https://zenodo.org/record/5784317

Authors: Wolf, Thomas; Debut, Lysandre; Sanh, Victor; Chaumond, Julien; Delangue, Clement; Moi, Anthony; Cistac, Perric; Ma, Clara; Jernite, Yacine; Plu, Julien; Xu, Canwen; Le Scao, Teven; Gugger, Sylvain; Drame, Mariama; Lhoest, Quentin; Rush, Alexander M.


Summary

Perceiver

The Perceiver model was released in the previous version (v4.13.0).

Eight new models are released as part of the Perceiver implementation: PerceiverModel, PerceiverForMaskedLM, PerceiverForSequenceClassification, PerceiverForImageClassificationLearned, PerceiverForImageClassificationFourier, PerceiverForImageClassificationConvProcessing, PerceiverForOpticalFlow, PerceiverForMultimodalAutoencoding, in PyTorch.
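As a minimal sketch of the base class, the snippet below instantiates a small, randomly initialized PerceiverModel from a config, so no pretrained weights are downloaded. The config values are illustrative choices, not the defaults of any released checkpoint; it assumes transformers and PyTorch are installed.

```python
import torch
from transformers import PerceiverConfig, PerceiverModel

# Illustrative (small) hyperparameters -- not a released checkpoint's config.
config = PerceiverConfig(
    num_latents=16,               # rows in the latent array
    d_latents=32,                 # latent dimension
    d_model=64,                   # dimension of the raw inputs
    num_blocks=1,
    num_self_attends_per_block=2,
    num_self_attention_heads=8,
    num_cross_attention_heads=8,
)
model = PerceiverModel(config)
model.eval()

# Without an input preprocessor, PerceiverModel expects inputs of shape
# (batch_size, sequence_length, d_model); the cross-attention maps them
# onto the fixed-size latent array.
inputs = torch.randn(1, 50, config.d_model)
with torch.no_grad():
    outputs = model(inputs=inputs)

# The latent array comes back as last_hidden_state with shape
# (batch_size, num_latents, d_latents), independent of sequence length.
print(outputs.last_hidden_state.shape)
```

The task-specific classes listed above wrap this same backbone with modality-specific preprocessors and decoders.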

The Perceiver IO model was proposed in Perceiver IO: A General Architecture for Structured Inputs & Outputs by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.

Add Perceiver IO by @NielsRogge in https://github.com/huggingface/transformers/pull/14487

Compatible checkpoints can be found on the hub: https://huggingface.co/models?other=perceiver

Version v4.14.0 adds support for Perceiver in multiple pipelines, including the fill mask and sequence classification pipelines.
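A hedged sketch of the fill-mask pipeline with a Perceiver checkpoint follows. The checkpoint name deepmind/language-perceiver is assumed from the hub filter linked above (weights are downloaded on first use), and because Perceiver tokenizes text at the byte level, each mask token stands for a single byte.

```python
from transformers import pipeline

# Assumed hub checkpoint; downloads model weights on first use.
fill_mask = pipeline("fill-mask", model="deepmind/language-perceiver")

# One mask token == one masked byte for Perceiver's byte-level tokenizer.
mask = fill_mask.tokenizer.mask_token
result = fill_mask(f"Hello worl{mask}!")

# Each candidate is a dict with "token_str", "score", and "sequence" keys.
for candidate in result:
    print(candidate["token_str"], candidate["score"])
```

Masking a longer span takes several consecutive mask tokens, which is what the multiple-mask-token support added in this release (see PR #14716 below) enables.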

Keras model cards

The Keras push-to-hub callback now generates model cards when pushing to the model hub. In addition to the callback, model cards are also generated by default by the model.push_to_hub() method.

TF model cards by @Rocketknight1 in https://github.com/huggingface/transformers/pull/14720

What's Changed

  • Fix : wrong link in the documentation (ConvBERT vs DistilBERT) by @Tikquuss in https://github.com/huggingface/transformers/pull/14705
  • Put back open in colab markers by @sgugger in https://github.com/huggingface/transformers/pull/14684
  • Fix doc examples: KeyError by @ydshieh in https://github.com/huggingface/transformers/pull/14699
  • Fix doc examples: 'CausalLMOutput...' object has no attribute 'last_hidden_state' by @ydshieh in https://github.com/huggingface/transformers/pull/14678
  • Adding Perceiver to AutoTokenizer. by @Narsil in https://github.com/huggingface/transformers/pull/14711
  • Fix doc examples: unexpected keyword argument by @ydshieh in https://github.com/huggingface/transformers/pull/14689
  • Automatically build doc notebooks by @sgugger in https://github.com/huggingface/transformers/pull/14718
  • Fix special character in MDX by @sgugger in https://github.com/huggingface/transformers/pull/14721
  • Fixing tests for perceiver (texts) by @Narsil in https://github.com/huggingface/transformers/pull/14719
  • [doc] document MoE model approach and current solutions by @stas00 in https://github.com/huggingface/transformers/pull/14725
  • [Flax examples] remove dependancy on pytorch training args by @patil-suraj in https://github.com/huggingface/transformers/pull/14636
  • Update bug-report.md by @patrickvonplaten in https://github.com/huggingface/transformers/pull/14715
  • [Adafactor] Fix adafactor by @patrickvonplaten in https://github.com/huggingface/transformers/pull/14713
  • Code parrot minor fixes/niceties by @ncoop57 in https://github.com/huggingface/transformers/pull/14666
  • Fix doc examples: modify config before super().init by @ydshieh in https://github.com/huggingface/transformers/pull/14697
  • Improve documentation of some models by @NielsRogge in https://github.com/huggingface/transformers/pull/14695
  • Skip Perceiver tests by @LysandreJik in https://github.com/huggingface/transformers/pull/14745
  • Add ability to get a list of supported pipeline tasks by @codesue in https://github.com/huggingface/transformers/pull/14732
  • Fix the perceiver docs by @LysandreJik in https://github.com/huggingface/transformers/pull/14748
  • [CI/pt-nightly] switch to cuda-11.3 by @stas00 in https://github.com/huggingface/transformers/pull/14726
  • Swap TF and PT code inside two blocks by @LucienShui in https://github.com/huggingface/transformers/pull/14742
  • Fix doc examples: cannot import name by @ydshieh in https://github.com/huggingface/transformers/pull/14698
  • Fix: change tooslow to slow by @ydshieh in https://github.com/huggingface/transformers/pull/14734
  • Small fixes for the doc by @sgugger in https://github.com/huggingface/transformers/pull/14751
  • Update transformers metadata by @sgugger in https://github.com/huggingface/transformers/pull/14724
  • Mention no images added to repository by @LysandreJik in https://github.com/huggingface/transformers/pull/14738
  • Avoid using tf.tile in embeddings for TF models by @ydshieh in https://github.com/huggingface/transformers/pull/14735
  • Change how to load config of XLNetLMHeadModel by @josutk in https://github.com/huggingface/transformers/pull/14746
  • Improve perceiver by @NielsRogge in https://github.com/huggingface/transformers/pull/14750
  • Convert Trainer doc page to MarkDown by @sgugger in https://github.com/huggingface/transformers/pull/14753
  • Update Table of Contents by @sgugger in https://github.com/huggingface/transformers/pull/14755
  • Fixing tests for Perceiver by @Narsil in https://github.com/huggingface/transformers/pull/14739
  • Make data shuffling in run_clm_flax.py respect global seed by @bminixhofer in https://github.com/huggingface/transformers/pull/13410
  • Adding support for multiple mask tokens. by @Narsil in https://github.com/huggingface/transformers/pull/14716
  • Fix broken links to distillation on index page of documentation by @amitness in https://github.com/huggingface/transformers/pull/14722
  • [doc] performance: groups of operations by compute-intensity by @stas00 in https://github.com/huggingface/transformers/pull/14757
  • Fix the doc_build_test job by @sgugger in https://github.com/huggingface/transformers/pull/14774
  • Fix preprocess_function in run_summarization_flax.py by @ydshieh in https://github.com/huggingface/transformers/pull/14769
  • Simplify T5 docs by @xhlulu in https://github.com/huggingface/transformers/pull/14776
  • Update Perceiver code examples by @NielsRogge in https://github.com/huggingface/transformers/pull/14783

New Contributors

  • @Tikquuss made their first contribution in https://github.com/huggingface/transformers/pull/14705
  • @codesue made their first contribution in https://github.com/huggingface/transformers/pull/14732
  • @LucienShui made their first contribution in https://github.com/huggingface/transformers/pull/14742
  • @josutk made their first contribution in https://github.com/huggingface/transformers/pull/14746
  • @amitness made their first contribution in https://github.com/huggingface/transformers/pull/14722

Full Changelog: https://github.com/huggingface/transformers/compare/v4.13.0...v4.14.0

More information

  • DOI: 10.5281/zenodo.5784317

Dates

  • Publication date: 2020
  • Issued: October 01, 2020

Notes

Other: If you use this software, please cite it using these metadata.

Rights

  • Open Access (info:eu-repo/semantics/openAccess)

We don't yet have good examples for much of the data past this point. Please share in the #rdi Slack channel if you have good examples of anything that appears below. Thanks!

Format

electronic resource

Related items

  • IsSupplementTo: https://github.com/huggingface/transformers/tree/v4.14.0
  • IsVersionOf: https://doi.org/10.5281/zenodo.3385997
  • IsPartOf: https://zenodo.org/communities/zenodo