word-embeddings
Here are 518 public repositories matching this topic...
I tried selecting the hyperparameters of my model following "Tutorial 8: Model Tuning" below:
https://github.com/flairNLP/flair/blob/master/resources/docs/TUTORIAL_8_MODEL_OPTIMIZATION.md
Although I got the "param_selection.txt" file in the result directory, I am not sure how to interpret it, i.e. which parameter combination to use. At the bottom of the file, I found "
Hi there,
I think there might be a mistake in the documentation. The "Understanding Scaled F-Score" section says the F-Score of these two values is defined as:
$$ \mathcal{F}_\beta(\mbox{prec}, \mbox{freq}) = (1 + \beta^2) \frac{\mbox{prec} \cdot \mbox{freq}}{\beta^2 \cdot \mbox{prec} + \mbox{freq}}. $$
$\beta \in \mathcal{R}^+$ is a scaling factor where frequency is favored if $\beta
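Numerically, the formula above is just a weighted harmonic mean of the two inputs; a minimal sketch (the function name is illustrative, not the library's API):

```python
def f_beta(prec, freq, beta=1.0):
    """Weighted harmonic mean of precision and frequency, per the formula above.

    beta > 1 weights frequency more heavily; beta < 1 weights precision.
    """
    return (1 + beta**2) * (prec * freq) / (beta**2 * prec + freq)

# With beta = 1 and equal inputs, the score equals the inputs themselves.
print(f_beta(0.5, 0.5))          # 0.5
print(f_beta(0.9, 0.1, beta=2))  # pulled toward the low frequency value
```
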
I would like to know what all the abbreviations mean. Some I can guess, like "PUNCT", but I have no idea what "X" might be. I want to retain contractions, but it is hard to choose options without documentation.
Thanks. Great code performance!
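These abbreviations look like the Universal Dependencies POS tagset (an assumption; confirm against the project's docs). As a quick glossary, the UD v2 tag-to-meaning mapping as a plain dict:

```python
# Universal Dependencies v2 POS tag glossary (assumption: the project's
# abbreviations follow this tagset; "X" is the UD tag for "other").
UPOS = {
    "ADJ": "adjective", "ADP": "adposition", "ADV": "adverb",
    "AUX": "auxiliary", "CCONJ": "coordinating conjunction",
    "DET": "determiner", "INTJ": "interjection", "NOUN": "noun",
    "NUM": "numeral", "PART": "particle", "PRON": "pronoun",
    "PROPN": "proper noun", "PUNCT": "punctuation",
    "SCONJ": "subordinating conjunction", "SYM": "symbol",
    "VERB": "verb", "X": "other",
}

print(UPOS["PUNCT"])  # punctuation
print(UPOS["X"])      # other
```
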
OS: macOS 10.13.3
I installed MeTA as instructed in the setup guide, but when I run the unit tests, the "describe [ranker regression]" case aborts with:
libc++abi.dylib: terminating with uncaught exception of type meta::corpus::corpus_exception: corpus configuration file (../data//cranfield/line.toml) not present
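The exception itself is informative: the test aborts because ../data//cranfield/line.toml is missing, i.e. the cranfield dataset is not where the test configuration's data prefix points. For reference, a MeTA line corpus is described by a tiny TOML file of roughly this shape (a sketch; confirm the exact keys against the MeTA documentation):

```toml
# line.toml — marks the corpus directory as a line corpus
# (one document per line of the data file)
type = "line-corpus"
```

If the file is absent entirely, the cranfield test data likely was not downloaded or unpacked into ../data before running the tests.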
ERROR: tensorflow 1.15.0 has requirement numpy<2.0,>=1.16.0, but you'll have numpy 1.14.6 which is incompatible.
ERROR: spacy 2.1.8 has requirement numpy>=1.15.0, but you'll have numpy 1.14.6 which is incompatible.
ERROR: mxnet-cu92 1.5.1.post0 has requirement numpy<2.0.0,>1.16.0, but you'll have numpy 1.14.6 which is incompatible.
ERROR: imgaug 0.2.9 has requirement numpy>=1.15.0, but you'll have numpy 1.14.6 which is incompatible.
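All four errors point the same way: the installed numpy 1.14.6 sits below every package's lower bound. A small sketch that checks a candidate version against the pins quoted above (pure-Python tuple comparison standing in for a real resolver):

```python
def satisfies_all_pins(version):
    """Check a numpy version string against the four pins from the errors:

    tensorflow 1.15.0     : numpy >=1.16.0, <2.0
    spacy 2.1.8           : numpy >=1.15.0
    mxnet-cu92 1.5.1.post0: numpy >1.16.0, <2.0.0
    imgaug 0.2.9          : numpy >=1.15.0
    """
    v = tuple(int(part) for part in version.split("."))
    return (v >= (1, 16, 0) and v < (2, 0, 0)  # tensorflow
            and v >= (1, 15, 0)                # spacy, imgaug
            and v > (1, 16, 0))                # mxnet-cu92 (strict lower bound)

print(satisfies_all_pins("1.14.6"))  # False: the installed version fails them all
print(satisfies_all_pins("1.16.4"))  # True: upgrading resolves every conflict
```

In practice something like `pip install "numpy>1.16.0,<2.0.0"` followed by `pip check` should clear all four errors at once.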
The fastText supervised model does not take document and word representations into account; it just learns bag-of-words features and labels. Embeddings are computed only on the word->label relation. It would be interesting to jointly learn the semantic relation label<->document<->word<->context.
For now it is only possible to pre-train word embeddings and then use them as initial vectors for the classifier.
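The workaround described in the last sentence looks like this with the standard fastText CLI (file names are illustrative; the flags are the tool's documented options):

```shell
# 1) Pre-train unsupervised word vectors on raw text
./fasttext skipgram -input corpus.txt -output vectors

# 2) Initialise the supervised classifier with them instead of random vectors
#    (-dim must match the dimension of the pre-trained .vec file)
./fasttext supervised -input labeled.txt -output model \
    -pretrainedVectors vectors.vec -dim 100
```

This only seeds the classifier's input vectors; the label<->document<->word<->context relation is still not learned jointly, which is the point of the issue.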
Example (from TfidfTransformer):
This method expects a list of tuples instead of an iterable, which means that the entire corpus has to be stored as a list.
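The memory implication is easy to demonstrate: a generator yields one bag-of-words document at a time, while a list-expecting method forces the whole corpus to be materialised at once (toy data; names are illustrative):

```python
def stream_corpus():
    """Yield bag-of-words documents, each a list of (token_id, count) tuples."""
    docs = [[(0, 1), (1, 2)], [(1, 1), (2, 3)], [(0, 2)]]
    for doc in docs:
        yield doc

lazy = stream_corpus()         # iterable: one document in memory at a time
eager = list(stream_corpus())  # what a list-of-tuples API forces: all at once

print(len(eager))  # 3
```

Accepting any iterable instead of a concrete list would let large corpora be streamed from disk rather than held in RAM.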