[Bug] eval_split_size in recipe and config.json don't match up #1423

Closed
fijipants opened this issue Mar 18, 2022 · 1 comment
Labels
bug Something isn't working

Comments

fijipants (Contributor) commented Mar 18, 2022

🐛 Description

Preface: I'm not sure whether this is expected behaviour or not; I'm just guessing here.

Scenario 1:

Let's say I want to run a training with a custom eval_split_size. I see that eval_split_size is in BaseTTSConfig, so I modify this line:

config = VitsConfig(

to this:

config = VitsConfig(eval_split_size=0.05,

Result: The eval_split_size does not get set properly. (It's set to 0.01 over here instead.)

Scenario 2:

Let's say I want to run a training with a custom eval_split_size, and I know that editing the base config won't work. Instead I modify this line:

train_samples, eval_samples = load_tts_samples(dataset_config, eval_split=True)

To this:

train_samples, eval_samples = load_tts_samples(dataset_config, eval_split=True, eval_split_size=0.05)

Result: Although it works, the outputted config.json has the wrong eval_split_size:

    "eval_split_size": 0.01,
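For reference, eval_split_size here acts as the fraction of the dataset held out for evaluation (0.01 means 1%). A toy sketch of that split — a hypothetical simplification, not the library's actual load_tts_samples implementation:

```python
import random

def split_samples(samples, eval_split_size=0.01, seed=0):
    """Toy train/eval split: hold out eval_split_size (a fraction)
    of the shuffled samples for evaluation. Hypothetical helper,
    only illustrating the semantics discussed in this issue."""
    samples = samples[:]
    random.Random(seed).shuffle(samples)
    n_eval = max(1, int(len(samples) * eval_split_size))
    return samples[n_eval:], samples[:n_eval]

# e.g. with 1000 samples and eval_split_size=0.05,
# 50 samples land in the eval set and 950 in the train set.
train, evals = split_samples(list(range(1000)), eval_split_size=0.05)
```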

Scenario 3:

Let's say I completed a training with a custom eval_split_size, and now I want to change the eval_split_size and continue it, so I open up the outputted config.json and change this line:

    "eval_split_size": 0.01,

To this:

    "eval_split_size": 0.07,

Then I start the training with --continue_path.

Result: It uses whichever eval_split_size is set on this line, instead of the one from config.json.

To Reproduce

See description.

Expected behavior

  1. I should be able to set it with config = VitsConfig(eval_split_size=0.05, ...).
  2. If that's not possible, then when I set it with load_tts_samples(..., eval_split_size=0.05), I'd like it to be written out to the config.json.
  3. When I continue a run, it should re-use the eval_split_size from the config.json.
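The three expectations above amount to one precedence rule: explicit call-site argument, then the value stored in config.json, then the library default. A minimal sketch of that rule (hypothetical helper and names, not the library's API):

```python
DEFAULT_EVAL_SPLIT_SIZE = 0.01  # the default observed in the scenarios above

def resolve_eval_split_size(config, override=None):
    """Resolve eval_split_size with the precedence this issue asks for:
    explicit override > value in config.json > library default.
    Writing an override back into the config keeps the outputted
    config.json in sync (expectation 2)."""
    if override is not None:
        config["eval_split_size"] = override
        return override
    return config.get("eval_split_size", DEFAULT_EVAL_SPLIT_SIZE)
```

Under this rule, a --continue_path run that passes no override would pick up whatever value was edited into config.json (expectation 3).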

Environment

{
    "CUDA": {
        "GPU": [
            "NVIDIA GeForce RTX 3090",
            "NVIDIA GeForce RTX 3090"
        ],
        "available": true,
        "version": "10.2"
    },
    "Packages": {
        "PyTorch_debug": false,
        "PyTorch_version": "1.11.0+cu102",
        "TTS": "0.6.1",
        "numpy": "1.19.5"
    },
    "System": {
        "OS": "Linux",
        "architecture": [
            "64bit",
            ""
        ],
        "processor": "x86_64",
        "python": "3.7.11",
        "version": "#202202230823 SMP PREEMPT Wed Feb 23 14:53:24 UTC 2022"
    }
}

Additional context

fijipants added the bug label Mar 18, 2022
Edresson (Contributor) commented:
@fijipants Thanks for reporting the bug. Indeed, in #1251 we forgot to change it for all the recipes. PR #1424 will fix it :).

Edresson mentioned this issue Mar 18, 2022