gan
Here are 1,819 public repositories matching this topic...
I want to see the values of tf.log(predict_real + EPS) and tf.log(1 - predict_fake + EPS) while training, so I added some code (marked # add code) in main:
if should(a.progress_freq):
    fetches["discrim_loss"] = model.discrim_loss
    fetches["gen_loss_GAN"] = model.gen_loss_GAN
    fetches["gen_loss_L1"] = model.gen_loss_L1
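To see what those two log terms actually compute (and why the EPS constant is there), here is a minimal plain-Python sketch; predict_real and predict_fake stand in for the discriminator's sigmoid outputs, and the names mirror pix2pix's but the function itself is an illustration, not the repo's code.

```python
import math

EPS = 1e-12  # same role as pix2pix's EPS: keeps log() away from log(0)

def log_terms(predict_real, predict_fake):
    """The two terms of the discriminator loss, mirroring
    tf.log(predict_real + EPS) and tf.log(1 - predict_fake + EPS)."""
    real_term = math.log(predict_real + EPS)
    fake_term = math.log(1 - predict_fake + EPS)
    return real_term, fake_term

# A confident discriminator (real -> 1, fake -> 0) drives both terms toward 0:
r, f = log_terms(0.99, 0.01)
```

In the TensorFlow graph, the analogous move is to give those intermediate tensors names and add them to the same fetches dict next to the existing loss entries, provided the model object exposes them.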
It seems like the help string was copied and pasted from the is_train flag on the line above. It currently says
"True for training, False for testing [False]"
https://github.com/carpedm20/DCGAN-tensorflow/blob/9166ef99bf9fcd1d0bf641ae752428bb06903b00/main.py#L26
I understand that these two python files show two different methods of constructing a model. The original n_epoch is 500, which works perfectly for both files. But if I change n_epoch to 20, only tutorial_mnist_mlp_static.py achieves a high test accuracy (~0.97); the other file, tutorial_mnist_mlp_static_2.py, only gets 0.47.
The models built from these two files look the same to me (the s
I tried some RNN regression learning based on the code in the "PyTorch-Tutorial/tutorial-contents/403_RNN_regressor.py" file, which did not work for me at all.
According to an accepted answer on Stack Overflow (https://stackoverflow.com/questions/52857213/recurrent-network-rnn-wont-learn-a-very-simple-function-plots-shown-in-the-q?noredirect=1#comment92916825_52857213), it turns out that the li
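The quoted sentence is cut off, so without guessing the Stack Overflow answer's exact fix, here is a minimal sketch of the regression setup that tutorial uses: an nn.RNN over a sine input, with the hidden state carried (and detached) between windows. All names and sizes are assumptions for illustration, not the tutorial's exact code.

```python
import torch
import torch.nn as nn

class RNNRegressor(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        # batch_first=True -> input shape (batch, time, features)
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)  # one prediction per time step

    def forward(self, x, h_state):
        r_out, h_state = self.rnn(x, h_state)  # h_state=None means zero init
        return self.out(r_out), h_state

model = RNNRegressor()
h = None  # no history before the first window
x = torch.sin(torch.linspace(0.0, 6.28, 10)).view(1, 10, 1)
y, h = model(x, h)
```

Between training windows, pass `h = h.detach()` back in, so the state persists but gradients do not flow across the whole history, which is the pattern 403_RNN_regressor.py relies on.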
According to scipy, scipy.misc.toimage(), which is used on line 46 of utils/visualizer.py, is now deprecated under the newest scipy version:
"toimage is deprecated! toimage is deprecated in SciPy 1.0.0, and will be removed in 1.2.0. Use Pillow’s Image.fromarray directly instead."
As a result this co
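Following the deprecation notice's own advice, a hypothetical drop-in replacement can wrap Pillow's Image.fromarray directly. This is a simplified sketch: the real scipy.misc.toimage also supported cmin/cmax rescaling and other modes, which are omitted here.

```python
import numpy as np
from PIL import Image

def to_image(arr):
    """Convert a float array in [0, 1], or a uint8 array, to a PIL Image.
    Simplified stand-in for the deprecated scipy.misc.toimage."""
    if arr.dtype != np.uint8:
        # scale floats to 0..255 and clamp out-of-range values
        arr = (np.clip(arr, 0.0, 1.0) * 255).astype(np.uint8)
    return Image.fromarray(arr)

img = to_image(np.random.rand(64, 64, 3))  # HxWx3 uint8 -> RGB image
```

Swapping this in for the toimage call in utils/visualizer.py would remove the dependency on the deprecated scipy function.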
Hi, thanks for the great code!
I wonder whether you have plans to support resuming from checkpoints for classification. As we all know, training on ImageNet takes a really long time and can be interrupted, but I haven't noticed any code related to "resume" in scripts/classification/train_imagenet.py.
Maybe @hetong007 ? Thanks in advance.
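For reference, resume support usually boils down to persisting the epoch counter and the parameter-file name alongside training, then reading them back on startup. A framework-agnostic sketch (all names hypothetical; a real train_imagenet.py would additionally save and reload the network parameters, e.g. via Gluon's net.save_parameters / net.load_parameters):

```python
import json
import os
import tempfile

def save_checkpoint(state, path):
    """Persist training metadata (epoch, params file) as JSON."""
    with open(path, "w") as f:
        json.dump(state, f)

def load_checkpoint(path):
    """Return saved metadata, or a fresh-start state if none exists."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"epoch": 0}

ckpt_path = os.path.join(tempfile.gettempdir(), "train_state.json")
save_checkpoint({"epoch": 42, "params": "net-0042.params"}, ckpt_path)
start_epoch = load_checkpoint(ckpt_path)["epoch"]  # resume from epoch 42
```

The training loop then starts its range at start_epoch instead of 0, so an interrupted ImageNet run picks up where it left off.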
First of all, thanks to the OP for sharing. For the code in basic_seq2seq, running it produces the following error. My tensorflow is the latest 1.3.0 GPU version, and I ran all the code completely unmodified; could the OP please take a look?
Traceback (most recent call last):
  File "D:\Workspaces\Eclipse\PythonLearn1\src\seq2seq_init_.py", line 227, in
    num_layers)
  File "D:\Workspaces\Eclipse\PythonLearn1\src\seq2seq_init_.py", line 189, in seq2seq_model
    decoder_input)
  File "D:\Workspaces\Eclipse\PythonLearn1\sr
Is there any place I can read about how to produce high-quality results? The pictures in the README look very well produced, whereas the result I get from the Colab demo is not as good. Is there any documentation about the parameters for that Trump-Cage example?
Thank you so much
In config.py:
parser.add_argument('--padd_size', type=int, help='net pad size', default=0)  # math.floor(opt.ker_size/2)
I'm wondering why you commented out the code for "same" padding and use "valid" padding instead. Is this important? (I checked both the paper and the ICCV talk, and it wasn't mentioned.)
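The commented-out expression math.floor(opt.ker_size/2) is the standard amount of "same" padding for a stride-1 convolution: it keeps the spatial size unchanged. A quick sketch of the arithmetic (ker_size=3 is assumed here as the typical SinGAN default; treat that value as an assumption):

```python
import math

def same_padding(ker_size):
    """Padding that keeps spatial size unchanged for a stride-1 conv:
    output = input + 2*pad - ker_size + 1 == input  when pad = (ker_size-1)//2
    for odd kernels."""
    return math.floor(ker_size / 2)

# With ker_size=3, "same" padding would be 1,
# while the shipped default of 0 means "valid" (shrinking) convolutions.
pad = same_padding(3)
```

So with default=0, each conv layer trims the feature map by ker_size - 1 pixels, which is presumably a deliberate choice rather than an oversight, hence the question.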
Traceback (most recent call last):
  File "train.py", line 45, in
    train_display_images_b = torch.stack([train_loader_b.dataset[i] for i in range(display_size)]).cuda()
  File "train.py", line 45, in
    train_display_images_b = torch.stack([train_loader_b.dataset[i] for i in range(display_size)]).cuda()
  File "/content/MUNIT/data.py", line 119, in __getitem__
    pat
This project is fantastic.
If anyone would be interested in trying this project out on Kubeflow, let me know (kubeflow.slack.com); I'd be happy to support that by providing a Kubeflow cluster.
It would be great to understand how well the following works:
Try to run the sample notebook on Kubeflow:
- Deploy Kubeflow
- Navigate to JupyterHub
- Launch Jupyter
For some reason, when I open the web document, real_a and fake_b match, but real_b is from another image; in the images folder, however, the images are correct. Does anyone know why this happens?