
language-models

Here are 40 public repositories matching this topic...

transformers
patrickvonplaten commented Dec 11, 2020

🚀 Feature request

Bart is a seq2seq model, but there might be applications where one would like to use only the pre-trained BartDecoder in an EncoderDecoder setting with a "long" encoder, such as

from transformers import EncoderDecoderModel

# checkpoint names are combined via the from_encoder_decoder_pretrained
# classmethod rather than passed to the constructor directly
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "allenai/longformer-large-4096", "facebook/bart-large"
)

# fine-tune model ...

This is already p…
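For context, here is one way such a combined model could be wired up for fine-tuning. This is a minimal sketch assuming a summarization-style setup; the example strings, the tokenizer pairing, and the single training step are illustrative assumptions, not part of the issue:

from transformers import AutoTokenizer, EncoderDecoderModel

# assumption: inputs use the encoder's tokenizer, targets use the decoder's
enc_tok = AutoTokenizer.from_pretrained("allenai/longformer-large-4096")
dec_tok = AutoTokenizer.from_pretrained("facebook/bart-large")

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "allenai/longformer-large-4096", "facebook/bart-large"
)

inputs = enc_tok("a very long source document ...", return_tensors="pt")
targets = dec_tok("a short target summary", return_tensors="pt")

# passing labels makes the model return a cross-entropy loss over the decoder outputs
outputs = model(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    decoder_input_ids=targets.input_ids,
    labels=targets.input_ids,
)
outputs.loss.backward()  # an optimizer step would follow in a real training loop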

EricFillion commented Jan 9, 2020

HappyROBERTA Large outperforms all of our other language models for masked word prediction. We should display a logger message encouraging users to switch to HappyROBERTA Large whenever they load a less capable model, as sketched below.

There are still situations where a user may want another model, so the others will remain available.
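A minimal sketch of what that logger message could look like, assuming a plain logging.Logger and a hypothetical warn_if_suboptimal helper (the model identifiers are placeholders, not Happy Transformer's actual names):

import logging

logger = logging.getLogger(__name__)

# assumed identifier for HappyROBERTA Large; placeholder only
RECOMMENDED_MODEL = "ROBERTA-LARGE"

def warn_if_suboptimal(model_name: str) -> None:
    # nudge the user toward the best-performing model without blocking their choice
    if model_name.upper() != RECOMMENDED_MODEL:
        logger.warning(
            "%s generally performs worse than HappyROBERTA Large for masked "
            "word prediction; consider switching.",
            model_name,
        )

warn_if_suboptimal("BERT-BASE")  # example: logs the warning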

The iltalk-protobuf (InterLingual-Talk) plugin performs interlingual, bidirectional procedural calls that are unlimited in depth (reentrant) and unrestricted in the choice of language pair. Each language in the pair must support Google Protobuf and its extensions. The default pair is Java/Logtalk.
  • Updated Oct 12, 2019
  • Java

Improve this page

Add a description, image, and links to the language-models topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the language-models topic, visit your repo's landing page and select "manage topics."

Learn more
