Immediate evaluation of variables does not reset between prompts #517
Comments
I encountered the same problem after updating today.
@akx what are your thoughts on this? It isn't a bug per se, in that the sampler is designed to fix the value of a variable once it has been evaluated, but the expectation of randomised values within a batch is reasonable. This problem would be resolved if we created a new sampler for every batch, but then other session-based state, like cyclical samplers, wouldn't work. Perhaps we should reset variables each time we sample?
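The distinction being proposed can be sketched in plain Python (the `Session` class and its names here are purely illustrative, not the dynamicprompts API): per-prompt variables are re-evaluated on every sample, while session-scoped state, such as a cyclical sampler's position, persists across samples.

```python
import itertools
import random

class Session:
    """Hypothetical sketch: session state survives, variables reset per prompt."""

    def __init__(self):
        # Session state: a cycler whose position must NOT reset between prompts.
        self.cycler = itertools.cycle(["portrait", "landscape"])

    def sample(self):
        # Per-prompt variables: re-evaluated fresh for every prompt, so the
        # same variable reference resolves identically within ONE prompt only.
        variables = {"ball": random.choice(["red", "blue"])}
        return f"{variables['ball']} {variables['ball']} {next(self.cycler)}"

random.seed(0)  # seeded only to make the sketch reproducible
session = Session()
prompts = [session.sample() for _ in range(40)]

# Both colours appear across prompts, yet each prompt is self-consistent,
# and the cycler keeps alternating rather than resetting.
colours = {p.split()[0] for p in prompts}
```

Under this scheme the two expectations in the thread coexist: `${ball} ${ball}` is consistent within a prompt, but varies across a batch.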
Same bug for me.
@adieyal Hm, don't we reset (or at least overwrite) them in the context for each generation? 🤔 |
I'm not 100% sure. The variable assignment takes place in `_get_sequence`:

```python
def _get_sequence(
    self,
    command: SequenceCommand,
    context: SamplingContext,
) -> StringGen:
    tokens, context = context.process_variable_assignments(command.tokens)
    sub_generators = [context.generator_from_command(c) for c in tokens]
    while True:
        yield rotate_and_join(sub_generators, separator=command.separator)
```

The variables are processed outside of the loop (on line 71), but the generators are sampled inside the loop and yielded to the caller. From my reading, variables are only ever processed once. Here is a unit test that fails:

```python
def test_random_variables_across_prompts(wildcard_manager: WildcardManager):
    cmd = parse("${ball=!{red|blue}} ${ball} ${ball} ${ball}")
    scon = SamplingContext(
        default_sampling_method=SamplingMethod.RANDOM,
        wildcard_manager=wildcard_manager,
    )
    gen = scon.sample_prompts(cmd)
    seen = set(islice(gen, 40))
    assert len(seen) == 2
```

This change lets the test pass:

```python
def _get_sequence(
    self,
    command: SequenceCommand,
    context: SamplingContext,
) -> StringGen:
    while True:
        tokens, context = context.process_variable_assignments(command.tokens)
        sub_generators = [context.generator_from_command(c) for c in tokens]
        yield rotate_and_join(sub_generators, separator=command.separator)
```

but breaks a lot of other stuff.
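The pattern described above can be reproduced outside the library with a tiny standalone sketch (plain Python, no dynamicprompts involved): a generator that evaluates its "variable" before the loop yields the same value forever, while moving the evaluation inside the loop re-samples it on every iteration.

```python
import random
from itertools import islice

def frozen_prompts():
    # Value is fixed once, before the loop, mirroring the original _get_sequence.
    ball = random.choice(["red", "blue"])
    while True:
        yield f"{ball} {ball}"

def fresh_prompts():
    # Value is re-evaluated on each iteration, mirroring the proposed change.
    while True:
        ball = random.choice(["red", "blue"])
        yield f"{ball} {ball}"

random.seed(0)  # seeded only for reproducibility of the sketch
frozen = set(islice(frozen_prompts(), 40))  # one distinct prompt
fresh = set(islice(fresh_prompts(), 40))    # both colours appear
```

This is why the failing test above sees only one distinct prompt: the generator closes over a value evaluated once, for its entire lifetime.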
Not sure if I understand this topic correctly, but I'm wondering whether #668 is related?
I have the following prompt (and corresponding wildcard files for `f_sentient` and `f_animal`):

```
${animal=!{__f_animal__}} ${sentient=!{__f_sentient__}} [${animal}|${sentient}|${animal}] AND ${sentient} in an ancient forest. Dramatic lighting. dappled lighting. deeps shadows.
```
Every time I run with a batch count > 1, each resulting image uses exactly the same wildcard values. The same is true if I set batch size > 1 and batch count = 1; the values repeat no matter what.
However, if I hit "Generate" again, the values are different.
Dynamic Prompts version
45b2137 | Sat Jun 3 10:48:47 2023
version: v1.3.2 • python: 3.10.7 • torch: 2.0.1+cu118 • xformers: N/A • gradio: 3.32.0 • checkpoint: 9140772852