pytorch
Here are 11,165 public repositories matching this topic...
Hi, is there any plan to provide a tutorial showing an example of using the Transformer as an alternative to an RNN for seq2seq tasks such as machine translation?
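For reference, here is a minimal sketch (not an official tutorial) of wiring torch.nn.Transformer into a seq2seq model; the vocabulary sizes and batch shapes are made-up placeholders, and positional encodings are omitted for brevity.

import torch
import torch.nn as nn

# Made-up sizes, for illustration only
SRC_VOCAB, TGT_VOCAB, D_MODEL = 1000, 1000, 512

class Seq2SeqTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, D_MODEL)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, D_MODEL)
        self.transformer = nn.Transformer(d_model=D_MODEL)
        self.out = nn.Linear(D_MODEL, TGT_VOCAB)

    def forward(self, src, tgt):
        # nn.Transformer expects (seq_len, batch, d_model) tensors by default;
        # the causal mask keeps the decoder from attending to future tokens
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt.size(0))
        h = self.transformer(self.src_emb(src), self.tgt_emb(tgt), tgt_mask=tgt_mask)
        return self.out(h)

model = Seq2SeqTransformer()
src = torch.randint(0, SRC_VOCAB, (10, 32))  # (src_len, batch)
tgt = torch.randint(0, TGT_VOCAB, (9, 32))   # (tgt_len, batch)
logits = model(src, tgt)                     # (tgt_len, batch, TGT_VOCAB)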
For some reason, when I open the web document, real_a and fake_b match, but real_b comes from another image; in the images folder, however, the images are correct. Does anyone know why this happens?
The example scripts contain some dependencies not listed for Horovod, and in some cases require datasets without explaining how to obtain them. We should provide a README file along with a set of packages (requirements.txt) for successfully running the examples.
I found that in examples/retinaface.cpp, if OMP acceleration is enabled, there seems to be a memory leak when a face is detected, but I cannot pin down the exact cause of the problem.
It is worth noting that if the OMP directives in the qsort_descent_inplace function are commented out, the problem disappears.
static void qsort_descent_inplace(std::vector<FaceObject>& faceobjects, int left, int right)
{
    int i = left;
    int j = right;
    float p = faceobjects[(left + right) / 2].prob;
    ...
    // #pragma omp parallel sections
    {
        // #pragma

Interpreting param_selection.txt for model tuning (selecting hyper parameters)
I tried selecting hyper parameters of my model following "Tutorial 8: Model Tuning" below:
https://github.com/flairNLP/flair/blob/master/resources/docs/TUTORIAL_8_MODEL_OPTIMIZATION.md
Although I got the "param_selection.txt" file in the result directory, I am not sure how to interpret the file, i.e. which parameter combination to use. At the bottom of the "param_selection.txt" file, I found "
more details at: allenai/allennlp#2264 (comment)
Several parts of the op spec, such as the main op description and the attribute, input, and output descriptions, become part of the binary that consumes ONNX (e.g. onnxruntime), increasing its size with strings that take no part in the execution of the model or its verification.
Setting __ONNX_NO_DOC_STRINGS doesn't really help here since (1) it's not used in the SetDoc(string) overload (s
❓ Questions and Help
I followed the fine-tuning example described here: https://github.com/pytorch/fairseq/blob/master/examples/mbart/README.md
However, I didn't manage to reproduce the results described in the paper for EN-RO translation.
How to reproduce fine-tuning with mBART?
- Can you clarify where you got the data and how you preprocessed it for training in more detail?
The documentation about edge orientation is inconsistent. In the "Creating Message Passing Networks" tutorial, the main expression says that e_{i,j} denotes (optional) edge features from node i to node j, and the attached expression also suggests this. However, the documentation for MessagePassing.message() says it constructs messages from node j to node i (which is what actually happens).
I
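To make the convention concrete, here is a minimal sketch assuming the default flow='source_to_target': x_j carries the features of the source node j of each edge, so message() really does build messages flowing from node j to node i.

import torch
from torch_geometric.nn import MessagePassing

class MeanConv(MessagePassing):
    def __init__(self):
        super().__init__(aggr='mean')  # aggregate incoming messages at node i

    def forward(self, x, edge_index):
        # edge_index has shape [2, num_edges]; row 0 holds source nodes j,
        # row 1 holds target nodes i (with the default flow='source_to_target')
        return self.propagate(edge_index, x=x)

    def message(self, x_j):
        # x_j: features of the source node j for each edge (j, i),
        # i.e. the message sent from node j to node i
        return x_j

x = torch.randn(4, 8)                              # 4 nodes, 8 features each
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])  # edges 0->1, 1->2, 2->3
out = MeanConv()(x, edge_index)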
Add Trainer states
🚀 Feature
Add trainer states so that at any point the user can get info about what is happening.
Motivation
Simplify the return values and clean up the interface.
Pitch
The state will be implemented as an enum and updated according to the Trainer phase (see the sketch after the list below):
- INITIALIZE
- RUNNING
- FINISHED
- TEARING_DOWN
- TERMINATED
- ...
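A minimal sketch of what such a state enum could look like; the TrainerState name and the points where the state is updated are assumptions for illustration, not the actual Lightning implementation.

from enum import Enum, auto

class TrainerState(Enum):
    # Hypothetical states mirroring the list above
    INITIALIZE = auto()
    RUNNING = auto()
    FINISHED = auto()
    TEARING_DOWN = auto()
    TERMINATED = auto()

class Trainer:
    def __init__(self):
        # State is set as soon as the Trainer is constructed
        self.state = TrainerState.INITIALIZE

    def fit(self, model):
        self.state = TrainerState.RUNNING
        try:
            ...  # run the training loop
        except KeyboardInterrupt:
            self.state = TrainerState.TERMINATED
            return
        self.state = TrainerState.FINISHED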
Excuse me, at https://github.com/graykode/nlp-tutorial/blob/master/1-1.NNLM/NNLM-Torch.py#L50 the comment may be wrong. It should be X = X.view(-1, n_step * m) # [batch_size, n_step * m]
Sorry for disturbing you.
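A quick shape check of the suggested fix (the sizes below are arbitrary):

import torch

batch_size, n_step, m = 4, 5, 16        # arbitrary illustrative sizes
X = torch.randn(batch_size, n_step, m)  # [batch_size, n_step, m]
X = X.view(-1, n_step * m)
print(X.shape)                          # torch.Size([4, 80]), i.e. [batch_size, n_step * m]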
Describe the bug
I tried to run tensorboardX/examples/demo_graph.py in a Jupyter notebook (launched by Anaconda Navigator) and got the error shown under Additional context.
I just copy-pasted the code into the notebook from GitHub.
Minimal runnable code to reproduce the behavior
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super(SimpleModel, self).__init__()
This doesn't seem very well documented at present.
Split the code from create_sandbox into two separate functions and remove the noqa: C901.
Describe alternatives you've considered
Simplify the function such that no split is required.
Additional context
Code quality:
2020-04-29T13:13:32.5968940Z ./syft/sandbox.py:12:1: C901 'create_sandbox' is too complex (26)
2020-04-29T13:13:32.5969257Z def create_sandbox(gbs,
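A generic sketch of the requested split; the helper names below are hypothetical and not taken from the PySyft codebase.

def _setup_workers(gbs):
    # Hypothetical helper holding the worker-creation half of create_sandbox
    ...

def _distribute_datasets(workers):
    # Hypothetical helper holding the dataset-loading half of create_sandbox
    ...

def create_sandbox(gbs, *args, **kwargs):
    # Thin wrapper composing the two helpers, so the noqa: C901 can be dropped
    workers = _setup_workers(gbs)
    _distribute_datasets(workers)
    return workers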
Consider this code that downloads models and tokenizers to disk and then uses BertTokenizer.from_pretrained to load the tokenizer from disk.
ISSUE:
BertTokenizer.from_pretrained() does not seem to be compatible with Python's native pathlib module.
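A minimal sketch of the reproduction the report describes; the local directory name is a made-up placeholder.

from pathlib import Path
from transformers import BertTokenizer

save_dir = Path("bert-local")  # made-up local directory name
save_dir.mkdir(exist_ok=True)

# Download the tokenizer and save it to disk
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.save_pretrained(str(save_dir))

# Reloading with a plain string path works; passing the pathlib.Path object
# itself is what the issue reports as incompatible
tokenizer = BertTokenizer.from_pretrained(str(save_dir))  # works
tokenizer = BertTokenizer.from_pretrained(save_dir)       # reported to fail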