
Simplification: Remove model.running #1230

Open · rht opened this issue Apr 3, 2022 · 36 comments

rht (Contributor) commented Apr 3, 2022

In most cases, models are run for an explicit number of steps, N. In cases where a model has a halting condition, it is easy to override model.run_model instead, e.g. for forest fire.

Before:

    def step(self):
        ...
        # Halt if no more fire
        if self.count_type(self, "On Fire") == 0:
            self.running = False

    def run_model(self):
        while self.running:
            self.step()

After:

    def run_model(self):
        while True:
            self.step()
            if self.count_type(self, "On Fire") == 0:
                break
rht (Contributor, Author) commented Apr 3, 2022

This way, people won't have to stop and read what model.running is, what it is used for, etc. -- saving time.

rht (Contributor, Author) commented Apr 3, 2022

Based on reading https://juliadynamics.github.io/Agents.jl/stable/comparison/, it seems that Agents.jl considers it a feature to have a boolean model state that tells whether a model has terminated or not. I can't find the relevant termination-condition check in the Agents.jl code base (see https://github.com/JuliaDynamics/Agents.jl/blob/7e5c3dcbc7d49993b7155595a8427d2510a7a232/src/simulations/collect.jl#L109).

The comparison table at https://juliadynamics.github.io/Agents.jl/stable/comparison/ inaccurately states that users can only terminate a simulation by writing explicit loops. But that is bound to become true if we agree to remove model.running.

@Libbum if I may ask, what is the reason that you have this model termination feature in Agents.jl? I may be missing something about its benefit.

rht (Contributor, Author) commented Apr 5, 2022

OK, I'm answering my own question about Agents.jl termination. run! has n as an argument, which can be an Int or a function.

The until function handles both cases of n:
https://github.com/JuliaDynamics/Agents.jl/blob/7e5c3dcbc7d49993b7155595a8427d2510a7a232/src/simulations/step.jl#L42-L43
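
In Python terms, that dispatch is roughly the following sketch (the until name and signature mirror the Julia helper; nothing here is Mesa API):

    def until(step, n, model):
        # n is either an int (run for n steps) or a callable
        # (model, step) -> bool that returns True when the run should stop
        if callable(n):
            return not n(model, step)  # keep running while the check is False
        return step < n                # keep running for n steps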

rht (Contributor, Author) commented Apr 5, 2022

IMO, the simulation termination condition should be defined separately from the model step. I think writing it in run_model is cleaner.

jackiekazil (Member) commented:

agree

Corvince (Contributor) commented Apr 7, 2022

I think the biggest advantage of the current approach is that the model can be stopped from anywhere, not just from the run_model loop itself.

This is, for example, used by our frontend, where the stop button sets model.running to False. So we should be extra careful not to shoot ourselves in the foot by removing it.

And I can also imagine models where single agents have the power to stop the model. Right now they just have to set model.running = False. Breaking a while loop from the outside is a much harder problem.
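
For illustration, a sketch of that pattern (the BankruptcyAgent name and the wealth attribute are invented for the example):

    import mesa

    class BankruptcyAgent(mesa.Agent):
        def step(self):
            if self.wealth <= 0:  # hypothetical halting condition
                # Any agent can halt the whole simulation by flipping the shared
                # flag; no machinery for breaking an outer while loop is needed.
                self.model.running = False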

rht (Contributor, Author) commented Apr 7, 2022

Regarding this.running in the front-end: it is actually not related to model.running. The only interaction with the server is the get_step message in

    send({ type: "get_step", step: this.tick });

Models where single agents stop the simulation within the agent's step() are probably not that common. Should this be needed, the modeler can define model.running just for that use case.

rht (Contributor, Author) commented Apr 7, 2022

It looks like the ModelController is not leveraging the model's run_model when the "start" button is clicked, where it should have. This seems incorrect.

Corvince (Contributor) commented Apr 7, 2022

Ha, you are right, I was misremembering how the server works.

But the start button not triggering run_model is by design in the current state. If it did trigger it, the model would keep on running and the frontend and backend would quickly get out of sync.

rht (Contributor, Author) commented Apr 8, 2022

I think it is actually safer with run_model, because the client is passively waiting for websocket messages from the server anyway. I have seen that currently, the simulation keeps doing step() indefinitely, even though model.running is already False.

There is one use case that needs model.running: clicking the step button manually in the GUI. Without model.running, the step() keeps happening. But even in the latest main branch, the model.step() method doesn't check model.running.

Corvince (Contributor) commented Apr 8, 2022

Well, yes, I think there are several trade-offs to consider. But calling run_model would run the model until the stop condition is reached, or even indefinitely. There would be no way to stop the model from the frontend - you could only stop rendering, but there would be no sane way to continue. So the tricky part is not start, but stop.

But just to be clear: I now agree that currently we don't need/use model.running.

So I think it is okay to remove it.

Corvince (Contributor) commented Apr 9, 2022

Sorry for spamming the issue, but model.running is used by the visualization server, so be careful with any changes:

https://github.com/projectmesa/mesa/blob/main/mesa/visualization/ModularVisualization.py#L206

rht (Contributor, Author) commented Apr 9, 2022

OK, I think a middle-ground solution is to not specify model.running = True in examples that never set model.running = False (only a few of the examples ever set it to False).

This also means that self.running = True needs to be moved to __new__, because I could imagine people forgetting to call super().__init__() from time to time.
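
A minimal sketch of what that could look like (assuming the flag is set on the instance inside Model.__new__):

    class Model:
        def __new__(cls, *args, **kwargs):
            obj = super().__new__(cls)
            # Setting the flag here, rather than in __init__, means it exists
            # even when a subclass forgets to call super().__init__().
            obj.running = True
            return obj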

rht (Contributor, Author) commented Apr 9, 2022

This is what Agents.jl does:
https://github.com/JuliaDynamics/InteractiveDynamics.jl/blob/15c271d7de77717864548fc6b999d1c11701d0e1/src/agents/interaction.jl#L47-L52. It looks like in Agents.jl, the step button steps the simulation in all situations. But I'm not sure. cc @Datseris (sorry to include you in this thread, but I really want to know how it is done in Agents.jl)

Datseris commented:

Hi,
happy to help, but what is the exact question I should be answering?

Agents.jl considers it a feature to have a boolean model state to tell whether a model has terminated or not.

There is no such thing in Agents.jl. There is no model property such as model.running or anything like that (unless a user manually makes this a model-level property, although that doesn't make much sense). The stepping is function-based, not "class-based". A user provides a function that takes a model and returns a bool. In this case iteration continues until the function returns true.

if I may ask, what is the reason that you have this model termination feature in Agents.jl? I may be missing something about its benefit

Because for some models it is more intuitive, scientifically, to terminate evolution when a condition is met rather than after a pre-defined and arbitrary number of steps. E.g., in Schelling, terminate when all agents are happy.

rht (Contributor, Author) commented Apr 10, 2022

@Datseris thank you for the answer! I have actually answered my own question about the termination condition of Agents.jl.

This one: #1230 (comment) is my question to you specifically.

Because for some models it is more intuitive, scientifically, to terminate evolution when a condition is met rather than after a pre-defined and arbitrary number of steps. E.g., in Schelling, terminate when all agents are happy.

What I had in mind is that the user could write a while-true loop by hand, which checks whether all agents are happy at each step. There is a concern about the cost of adding complexity to the library's API when the same feature could be succinctly written by users themselves. This is in contrast with, e.g., abmplot, which does a lot of things under the hood.

rht (Contributor, Author) commented Apr 10, 2022

@Datseris OK, I'm formulating my question in a way that is self-contained, so that you don't have to read the whole thread.

In Agents.jl, does the step button in the GUI check whether the model has completed its run (e.g. all agents are happy), and so do nothing when clicked after completion, or does it NOT do any such check and blindly step the model?

Edit: clarification

Datseris commented Apr 10, 2022

No, the GUI app does not check if the "model has completed the run", because as I said above, in Agents.jl this concept is not a property of the model. If we allowed the user to provide a termination function to the GUI app, then we could do that, but we thought that for an interactive app termination functions aren't really relevant.

What I had in mind is that the user could write a while-true loop by hand, which checks whether all agents are happy at each step.

Sure. The users could also write their entire ABM by hand; they don't have to use Agents.jl or Mesa ;) The point is how convenient a library makes things for you. In Agents.jl the termination condition simply replaces the step number in step!. Instead of writing step!(model, f, 5) you can write

    terminate(model) = all(agent -> agent.happy, allagents(model))
    step!(model, f, terminate)

This is very convenient, and no loops are written.

rht (Contributor, Author) commented Apr 10, 2022

That's very informative, thank you!

but we thought that for an interactive app termination functions aren't really relevant.

Hmm, ok, I'm taking your design choice into consideration. I do think that the simulation in the interactive GUI ought to stop if all agents are happy / the forest fire has stopped.

Regarding the termination function: my point is that users have to write a lot of code if they want to reinvent InteractiveDynamics.abmplot or mesa.visualization.ModularVisualization.ModularServer, but not so for reinventing running the model until the termination condition is reached.

But your example shows that it is super concise for those who already know the API.

The closest Mesa equivalent would be

    def terminate(model):
        return model.happy == model.schedule.get_agent_count()

    model.run_model(terminate)

where run_model is defined as

    def run_model(self, terminate):
        while not terminate(self):
            self.step()

Though this may not be much more concise than just defining the whole thing in run_model itself, as I wrote in #1230 (comment).

Datseris commented:

I do think that the simulation in the interactive GUI ought to stop if all agents are happy / the forest fire has stopped.

Sure, that makes sense for the examples you cited, but a generic framework for ABM interactive exploration should not make design decisions based on specific cases. Nevertheless, the possibility of allowing what you want is something I fully agree with.

I do not agree that you'd need to write a lot of code to "re-invent" our GUI app to add a termination function possibility. In fact, I can already tell you that this would be at most a 10-line code change to InteractiveDynamics.jl. So far, no user has asked for this, but if you'd like this, feel free to open an issue at InteractiveDynamics.jl, where we can guide you on how to actually implement the feature (although you already know where in the source code the stepping happens, so you already know where to make the simple modification :) )

rht (Contributor, Author) commented Apr 10, 2022

I do not agree that you'd need to write a lot of code to "re-invent" our GUI app to add a termination function possibility.

What I actually meant was comparing the effort of reinventing the wheel of writing the GUI app vs. reinventing the wheel of writing a loop with a termination condition.

rht (Contributor, Author) commented Apr 10, 2022

I see your point about keeping a framework for interactive exploration generic, but a termination condition is a general feature that is common across models, which is why Agents.step! takes the termination function as an argument in the first place. I cited the Schelling / forest fire examples to make the point that they are common (though not necessarily >=80% of cases).

Corvince (Contributor) commented Apr 11, 2022

I think it is actually safer with run_model, because the client is passively waiting for websocket messages from the server anyway. I have seen that currently, the simulation keeps doing step() indefinitely, even though model.running is already False.

There is one use case that needs model.running: clicking the step button manually in the GUI. Without model.running, the step() keeps happening. But even in the latest main branch, the model.step() method doesn't check model.running.

I was confused by this and could now finally try it out again. I cannot reproduce models running indefinitely (edit: models that should terminate, that is, e.g., Schelling, forest fire). For the interactive models, the visualization server indeed checks model.running, and no more steps are run once model.running = False. Is there a specific example where you had this experience?

However, I see a simplification here if we switch from model.running to model.terminated. That way, users of models without a termination condition can run their models without caring about model.running, and those with a termination condition have an easy way to handle it.

rht (Contributor, Author) commented Apr 11, 2022

I cannot reproduce models running indefinitely (edit: models that should terminate, that is, e.g., Schelling, forest fire)

Ah, right, my mistake. I might have been running a model without a termination condition, where I assumed it had one.

I'm OK with model.terminated. I suppose then run_model becomes

    def run_model(self):
        while not (hasattr(self, "terminated") and self.terminated):
            self.step()

But I think adding cls.running = True in Model.__new__ also works. This would not break existing models that still use model.running.

rht (Contributor, Author) commented Apr 11, 2022

By existing models, I mean other people's code that uses Mesa. Changing to model.terminated would be a backward-incompatible change.

EwoutH (Member) commented Jan 21, 2024

One thing I haven't heard in this discussion is its use in the batch_run() function. Last week a student asked how he could stop a model conditionally, and I remembered you could set Model.running to False, which is recognized by the batch runner. If we remove it, this is an important use case to remember.

mesa/mesa/batchrunner.py, lines 135 to 136 at 72b0a9d:

    while model.running and model.schedule.steps <= max_steps:
        model.step()
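
This is exactly the pattern the student needed: the model flips its own flag, and the batch runner's while loop exits. A minimal sketch (the equilibrium check is a made-up placeholder, not Mesa API):

    import mesa

    class MyModel(mesa.Model):
        def step(self):
            self.schedule.step()
            if self.reached_equilibrium():  # hypothetical condition
                self.running = False  # batch_run's while loop then stops this run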

rht (Contributor, Author) commented Jan 21, 2024

The batch_run() use case is indeed indispensable. Closing now.

rht closed this as completed Jan 21, 2024
EwoutH (Member) commented Jan 21, 2024

Don't think it's totally indispensable, but removing it would break the API. So removing it - if deemed useful - would be a 3.0 change.

rht reopened this Jan 22, 2024
EwoutH added this to the v3.0 milestone Sep 3, 2024
EwoutH (Member) commented Oct 2, 2024

Should we revisit this for Mesa 3.0? I'm reminded again because of:

rht (Contributor, Author) commented Oct 2, 2024

As of today, I think it's a matter of API choice. Either:

  1. retain model.running, where you can stop the model from anywhere at any time if needed. The stopping condition is defined inside step().
  2. have a terminate(model) function, as is done in Agents.jl. There would no longer be a hardcoded model.running attribute, but a function checking the stopping condition (e.g. terminate) has to be supplied instead.
  3. drop the stopping condition entirely and do not impose on users how they want to stop. Not feasible, because the visualization and the batch runner both need it.

A tie-breaker: with model.running you can check whether a model has stopped just by reading the attribute, but this is situational, since the flag is updated only in model.step(). terminate(model) recomputes the condition from scratch every time, which guarantees the information is up to date. Most of the time, though, this performance difference is negligible anyway.
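
In code, the difference between the two checks is roughly this (a sketch; terminate is the user-supplied function from option 2):

    flag_stopped = not model.running  # option 1: cached flag, only as fresh as the last model.step()
    cond_stopped = terminate(model)   # option 2: recomputed from current model state on every call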

quaquel (Member) commented Oct 2, 2024

  1. retain model.running, where you can stop the model from anywhere at any time if needed. The stopping condition is defined inside step().

This is a good point that I had not thought about. There might be many reasons why a run is complete. In many cases, it will be a simple stopping condition tied to the number of steps. However, there might be conditions tied to the dynamics of the model over time indicating that, e.g., some equilibrium has been reached and no further running is required. Having a model.running flag that is broadly accessible is quite useful for catering to both use cases.

So, I am coming around to keeping model.running and instead adding explicit support for the stopping condition to make run_model more useful. The simplest approach, which would probably cover 80% of all use cases, is to have an explicit n_steps keyword argument on run_model and a stopping_condition keyword argument. So, something along the lines of:

    # (assumes `import sys` and `from collections.abc import Callable` at module level)
    def run_model(self, n_steps: int | None = None, stopping_condition: Callable | None = None):
        """Run the model for the specified number of steps or until the stopping condition is reached.

        Args:
            n_steps: number of steps for which to run the model
            stopping_condition: a callable called with the model as its only argument that
                returns True if the model should stop running and False otherwise
        """
        if stopping_condition is None:
            stopping_condition = lambda model: False  # no condition given: never stop early
        if n_steps is None:
            n_steps = sys.maxsize  # no step limit given: run until the condition triggers

        while self.running:
            self.step()
            if self.steps >= n_steps or stopping_condition(self):
                self.running = False
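
Usage would then look something like this (a sketch against the proposed signature; the ForestFire constructor arguments are made up, and count_type is borrowed from the earlier example):

    model = ForestFire(width=100, height=100, density=0.65)  # hypothetical args
    model.run_model(
        n_steps=500,
        stopping_condition=lambda m: m.count_type(m, "On Fire") == 0,
    )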

EwoutH (Member) commented Oct 3, 2024

@quaquel I like your proposal, especially the part about having both self.running and a stopping condition.

I was wondering, however: what value does the whole run_model add? Users can also just do this themselves:

    model = Model()
    while model.running:
        model.step()

model.running can then be set to False from anywhere in the model, the agents, or any other component with model access. It can also be set to False on reaching a certain number of steps.

Want to run for some number of steps beforehand, with some condition in your model?

    while model.running and model.steps < 1000:
        model.step()

Do you have a condition outside the model you want to apply?

    while model.running and model.steps < 1000 and not stopping_condition(model):
        model.step()

These are all one-liners, and they make it very explicit what's happening and how.

I think I would be a proponent of removing the whole run_model concept and just documenting this clearly. Especially now that model.steps increments automatically, all these things are trivial.

quaquel (Member) commented Oct 3, 2024

Fair point, but what about the batch runner? For this, it would be quite convenient if there were a default run_model method or function it could call.

EwoutH (Member) commented Oct 3, 2024

model.running is already there, and a max_steps as well, so the only thing needed would be a stopping condition you can pass in. That can also easily be done inside the model by setting model.running = False, but we can provide it for convenience.

I would say keep model.running, remove run_model.

rht (Contributor, Author) commented Oct 3, 2024

I was about to say what @EwoutH said: the code saved is too short and easy to write to justify the extra documentation to read. What we should provide are functions that are hard to write and test.

For the batch runner, perhaps it could just define its own run_model function?

quaquel (Member) commented Oct 3, 2024

I was about to say what @EwoutH said: the code saved is too short and easy to write to justify the extra documentation to read. What we should provide are functions that are hard to write and test.

I am not sure I fully understand this point. As stated, I disagree. In fact, if we used this line of reasoning, a lot of Mesa code should be removed. Many of the methods in Mesa are relatively short and could easily be written by a user. The value of Mesa, or any library, is that it offers a framework that lets you focus on the important stuff without having to think about the default stuff. Having a convenience function for running the model fits my view of what Mesa should offer. It is also easy to document and test.

This does not imply that I am against removing run_model. However, I would like to see a clear argument if we do so, and I would like to understand what it would imply for the batch runner.
