natural-language-inference
Here are 75 public repositories matching this topic...


Description
When calling tokenizers.create with the model and vocab files for a custom corpus, the code throws an error and fails to generate the BERT vocab file.
Error Message
ValueError: Mismatch vocabulary! All special tokens specified must be control tokens in the sentencepiece vocabulary.
To Reproduce
from gluonnlp.data import tokenizers
tokenizers.create('spm', model_p