Generative loss stuck #26

@tmralmeida


Hi,

While playing with your Social GAN code, I found something that I couldn't understand.

For example, when running:

python -m trajnetbaselines.sgan.trainer --k 1

This means we are running a vanilla GAN in which the generator outputs a single sample (the standard GAN setting, without the L2/variety loss). In this case the GAN loss stays at about 1.38 throughout training. Thus, the vanilla GAN (with only the adversarial loss) does not appear capable of modeling the data.
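For context on why 1.38 looks like a "stuck" value: with the usual binary cross-entropy GAN objective, a discriminator that outputs 0.5 for every real and generated sample yields a loss of -ln(0.5) - ln(0.5) = 2 ln 2 ≈ 1.386, so a constant 1.38 suggests the discriminator is giving the generator no useful signal. The sketch below is not the trajnetbaselines code, just a minimal PyTorch illustration of that arithmetic:

```python
import torch
import torch.nn.functional as F

# Minimal sketch (not the repository's loss code): the standard GAN
# discriminator objective is BCE(D(real), 1) + BCE(D(fake), 0).
# If D outputs 0.5 for every sample, the sum is 2 * ln(2) ≈ 1.386,
# which matches the value the training appears stuck at.
scores_real = torch.full((8, 1), 0.5)  # hypothetical D(real) for a batch of 8
scores_fake = torch.full((8, 1), 0.5)  # hypothetical D(fake)

loss_real = F.binary_cross_entropy(scores_real, torch.ones_like(scores_real))
loss_fake = F.binary_cross_entropy(scores_fake, torch.zeros_like(scores_fake))
print(float(loss_real + loss_fake))  # ≈ 1.386
```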

My question is: to what extent are we actually taking advantage of the GAN framework? Under the conditions above, it seems we are effectively just training an LSTM predictor.
