language-model
Here are 600 public repositories matching this topic...
chooses 15% of tokens
The paper says:
"Instead, the training data generator chooses 15% of tokens at random, e.g., in the sentence my dog is hairy it chooses hairy."
This wording implies that exactly 15% of the tokens are chosen. However, in https://github.com/codertimo/BERT-pytorch/blob/master/bert_pytorch/dataset/dataset.py#L68, each token independently has a 15% chance of going through the follow-up masking procedure, so the actual fraction selected varies from sentence to sentence.
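The difference between the two readings can be seen in a toy simulation (illustrative only, not code from the repository — the token list and seed are made up):

```python
import random

random.seed(0)

tokens = "my dog is hairy and my cat is not".split()

# Reading A (what the paper's wording suggests): pick exactly 15% of tokens.
k = max(1, round(0.15 * len(tokens)))
chosen_exact = set(random.sample(range(len(tokens)), k))

# Reading B (what dataset.py#L68 implements): each token independently has
# a 15% chance of being selected, so the count varies per sentence and can
# be zero or well above 15%.
chosen_bernoulli = {i for i in range(len(tokens)) if random.random() < 0.15}

print(len(chosen_exact))      # always exactly k
print(len(chosen_bernoulli))  # varies from run to run
```

In expectation both select 15% of tokens, but only reading A fixes the count per sentence.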
Question
Hi, I have been experimenting with the QA capabilities of Haystack so far. I was wondering whether it is possible for the model to generate paragraph-like contexts.
Additional context
Currently, when a question is asked, the model outputs an answer together with the context the answer was found in. The context output by the model is often a fragment of a sentence or a fragment of a paragraph.
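One way to get paragraph-like contexts is to widen the extracted span as a post-processing step, independent of the model. A minimal sketch, assuming the reader returns character offsets of the answer within the source document (`expand_context` is a hypothetical helper, not a Haystack API):

```python
def expand_context(document: str, answer_start: int, answer_end: int) -> str:
    """Widen an answer's context to the enclosing paragraph by scanning
    outward for blank-line boundaries around the answer offsets."""
    start = document.rfind("\n\n", 0, answer_start)
    start = 0 if start == -1 else start + 2
    end = document.find("\n\n", answer_end)
    end = len(document) if end == -1 else end
    return document[start:end]

doc = ("Intro paragraph.\n\n"
       "BERT was introduced in 2018. It uses masked language modelling.\n\n"
       "Closing paragraph.")
answer = "2018"
i = doc.index(answer)
print(expand_context(doc, i, i + len(answer)))
# prints "BERT was introduced in 2018. It uses masked language modelling."
```

This returns the whole paragraph containing the answer rather than the fragment the reader extracted.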
The architecture GPT2ForSequenceClassification was added in #7501 in PyTorch. It would be great to have it in TensorFlow (cf. issue #7622), but it would also be great to have it for other causal models: OpenAI GPT, CTRL, TransfoXL.
Currently working on OpenAI GPT: @fmcurti (done).
Below is a list of items to follow to make sure the integration of such an architecture