Simplification: Remove model.running #1230
This way, people won't have to stop and read what `model.running` is for. |
Based on reading https://juliadynamics.github.io/Agents.jl/stable/comparison/, it seems that Agents.jl considers it a feature to have a boolean model state telling whether a model has terminated or not. I can't find the relevant termination-condition checking in the Agents.jl code base (see https://github.com/JuliaDynamics/Agents.jl/blob/7e5c3dcbc7d49993b7155595a8427d2510a7a232/src/simulations/collect.jl#L109). The comparison table at https://juliadynamics.github.io/Agents.jl/stable/comparison/ inaccurately states that users can only write explicit loops to terminate a simulation, but that is bound to become true if we agree to remove `model.running`. @Libbum, if I may ask, what is the reason for having this model termination feature in Agents.jl? I may be missing something about its benefit. |
OK, I'm answering my own question regarding Agents.jl termination. |
IMO, the simulation termination condition should be defined separately from the model step. I think writing it in |
agree |
I think the biggest advantage of the current approach is that the model can be stopped from anywhere, not just from the run-model loop itself. This is, for example, used by our Frontend, where the stop button sets `running` to `False`. And I can also imagine models where single agents have the power to stop the model. Right now they just have to set `self.model.running = False`. |
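A minimal self-contained sketch of that last idea (the classes here are stand-ins, not Mesa's real `Model`/`Agent` API): a single agent halts the whole simulation from inside its own step by flipping `model.running`.

```python
class Model:
    """Toy model: loops over its agents each step, stops when running is False."""

    def __init__(self):
        self.running = True
        self.steps = 0
        self.agents = [Agent(self)]

    def step(self):
        self.steps += 1
        for agent in self.agents:
            agent.step()


class Agent:
    """Toy agent holding a back-reference to its model."""

    def __init__(self, model):
        self.model = model

    def step(self):
        # A single agent can stop the whole run once some local
        # condition is met (here, arbitrarily: after 3 model steps).
        if self.model.steps >= 3:
            self.model.running = False


model = Model()
while model.running:
    model.step()
```

The run loop itself never needs to know why the model stopped; any agent (or the frontend's stop button) can set the flag.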
Regarding `model.running`: the only interaction with the server is the `get_step` message. The models that have single agents stopping the simulation within the agents |
It looks like the |
Ha, you are right, I was misremembering how the server works. But the start button not triggering the run-model loop is by design in the current state. If it did trigger it, the model would keep on running, and the frontend and backend would quickly be out of sync. |
I think it is actually safer with `model.running`. There is one use case that needs |
Well, yes, I think there are several tradeoffs to consider. But just to be clear: I now agree that currently we don't need/use `model.running`. So I think it is okay to remove it. |
Sorry for spamming the issue, but |
OK, I think a middle-ground solution is to not specify it there (line 32 in e549107) but in `__new__`, because I could imagine people forgetting to call `super().__init__()` from time to time. |
This is what Agents.jl does: |
Hi,
There is no such thing in Agents.jl. There is no model property such as `running`.
Because for some models it is more intuitive, scientifically, to terminate evolution when a condition is met rather than after a pre-defined and arbitrary number of steps. E.g., in Schelling, terminate when all agents are happy. |
@Datseris thank you for the answer! I have actually answered my own question regarding the termination condition of Agents.jl in this comment: #1230 (comment); this was my question to you specifically.
What I had in mind is that users could write a while-true loop by hand, by themselves, which checks whether all agents are happy at each step. There is the concern of the cost of adding complexity to the API of the library, when the same feature could be succinctly written by the users themselves. This is in contrast with e.g. |
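The hand-rolled loop described here can be sketched as follows (a toy Schelling-like model with invented internals, purely for illustration):

```python
class ToyAgent:
    """Stand-in agent with a single boolean happiness state."""

    def __init__(self):
        self.happy = False


class ToySchelling:
    """Toy model: one more agent becomes happy each step."""

    def __init__(self, n=4):
        self.agents = [ToyAgent() for _ in range(n)]
        self.steps = 0

    def step(self):
        self.steps += 1
        self.agents[self.steps - 1].happy = True


model = ToySchelling()
# The user-written termination loop: no library support needed.
while not all(agent.happy for agent in model.agents):
    model.step()
```

The termination condition lives entirely in user code, outside both `step()` and the library.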
@Datseris OK, I'm formulating my question in a way that is self-contained, so that you don't have to read the whole thread. In Agents.jl, does the step button in the GUI check whether the model has completed the run (e.g. all agents are happy), and so do nothing when the step button is clicked after completion, or does it NOT do any checking of that sort and blindly step the model? Edit: clarification |
No, the GUI app does not check if the "model has completed the run", because as I said above, in Agents.jl this concept is not a property of the model. If we allowed the user to provide a termination function to the GUI app, then we could do that, but we thought that for an interactive app termination functions aren't really relevant.
Sure. The users could also write their entire ABM by hand as well; they don't have to use Agents.jl or Mesa ;) The point is how convenient a library makes things for you. In Agents.jl the termination condition simply replaces the step number in `step!`:

```julia
terminate(model) = all(agent -> agent.happy, allagents(model))
step!(model, f, terminate)
```

This is very convenient, and no loops are written. |
That's very informative, thank you!
Hmm, ok, I'm taking your design choice into consideration. I do think that the simulation in the interactive GUI ought to stop if all agents are happy / the forest fire has stopped. Regarding the termination function: your example shows that it is super concise for those who already know the API. The closest Mesa equivalent would be

```python
def terminate(model):
    return model.happy == model.schedule.get_agent_count()

model.run_model(terminate)
```

where `run_model` is defined as

```python
def run_model(self, terminate):
    while not terminate(self):
        self.step()
```

Though this may not be much more concise than just defining the whole thing in |
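To see that this Mesa-style equivalent behaves as described, here is a runnable version with stand-in classes (the `happy` counter and the toy schedule are invented; Mesa's real scheduler has more to it):

```python
class ToySchedule:
    """Stand-in for a Mesa scheduler exposing get_agent_count()."""

    def __init__(self, n):
        self._n = n

    def get_agent_count(self):
        return self._n


class ToyModel:
    """Toy model: one more agent becomes happy per step."""

    def __init__(self, n=3):
        self.schedule = ToySchedule(n)
        self.happy = 0
        self.steps = 0

    def step(self):
        self.steps += 1
        self.happy += 1

    def run_model(self, terminate):
        # Run until the user-supplied predicate says we are done.
        while not terminate(self):
            self.step()


def terminate(model):
    return model.happy == model.schedule.get_agent_count()


model = ToyModel()
model.run_model(terminate)
```

The predicate replaces a fixed step count, mirroring the Agents.jl `step!(model, f, terminate)` pattern.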
Sure, that makes sense for the examples you cited, but a generic framework for ABM interactive exploration should not make design decisions based on specific cases. Nevertheless, the possibility of allowing what you want is something I fully agree with. I do not agree that you'd need to write a lot of code to "re-invent" our GUI app to add a termination-function possibility. In fact, I can already tell you that this would be a change of at most 10 lines of code to InteractiveDynamics.jl. So far, no user has asked for this, but if you'd like it, feel free to open an issue at InteractiveDynamics.jl, where we can guide you on how to actually implement the feature (although, you already know where in the source code the stepping happens, so you already know where to do the simple modification :) ) |
What I actually meant is comparing the effort of reinventing the wheel of writing the GUI app, vs reinventing the wheel of writing a loop with a termination condition. |
I see your point about ensuring a generic framework for interactive exploration, but a termination condition is something generic that is common in models, which is the reason why |
I was confused by this and could now finally try it out again. I cannot reproduce models running indefinitely (edit: models that should terminate, that is, e.g. Schelling, Forest Fire). For the interactive models the visualization server indeed checks for `model.running`. However, I see a simplification of this if we switch from |
Ah, right, my mistake. I might be running a model without a termination condition where I assumed it has one. I'm OK with

```python
def run_model(self):
    while not (hasattr(self, "terminated") and self.terminated):
        self.step()
```

But I think adding |
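A runnable sketch of this `hasattr`-based loop, with a toy model that sets `terminated` lazily (the attribute name comes from the comment above; the model internals are invented):

```python
class ToyModel:
    """Toy model that only grows a `terminated` attribute once it is done."""

    def __init__(self):
        self.steps = 0

    def step(self):
        self.steps += 1
        if self.steps >= 7:
            # Attribute is absent until termination, hence the hasattr check.
            self.terminated = True

    def run_model(self):
        while not (hasattr(self, "terminated") and self.terminated):
            self.step()


model = ToyModel()
model.run_model()
```

The upside is that models without a termination condition need no attribute at all; the downside is that a typo in the attribute name fails silently into an infinite loop.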
By existing models, I meant other people's code that uses Mesa. Changing to |
One thing I haven't heard in this discussion is its use in lines 135 to 136 in 72b0a9d. |
The |
I don't think it's totally indispensable, but removing it would break the API. So removing it, if deemed useful, would be a 3.0 change. |
Should we revisit this for Mesa 3.0? I'm reminded again because of: |
As of today, I think it's a matter of API choice. Either:
A tie-breaker would be that you can check whether a model has stopped just by reading the value of `model.running`. |
This is a good point that I had not thought about. There might be many reasons why a run is completed. In many cases, it will be a simple stopping condition tied to the number of steps. However, there might be conditions tied to the dynamics of the model over time that would indicate that, e.g., some equilibrium has been reached and no further running is required. Having a `running` attribute supports this.

So, I am coming around to keeping it:

```python
import sys
from collections.abc import Callable


def run_model(self, n_steps: int | None = None,
              stopping_condition: Callable | None = None):
    """Run the model for the specified number of steps or until the
    stopping condition is reached.

    Args:
        n_steps: number of steps for which to run the model
        stopping_condition: a callable called with the model as only
            argument that returns True if the model should stop running
            and False otherwise
    """
    if stopping_condition is None:
        stopping_condition = lambda model: False
    if n_steps is None:
        n_steps = sys.maxsize  # sys.maxint no longer exists in Python 3
    while self.running:
        self.step()
        if self.steps >= n_steps or stopping_condition(self):
            self.running = False
```
|
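To check that the proposed signature behaves as described, here is a runnable version with a toy model (the `ToyModel` internals and its `happy` counter are mine, purely for illustration; note `sys.maxint` was replaced by `sys.maxsize` in Python 3):

```python
import sys


class ToyModel:
    """Stand-in model implementing the run_model proposal above."""

    def __init__(self):
        self.running = True
        self.steps = 0
        self.happy = 0

    def step(self):
        self.steps += 1
        self.happy += 1  # toy dynamics: one agent turns happy per step

    def run_model(self, n_steps=None, stopping_condition=None):
        if stopping_condition is None:
            stopping_condition = lambda model: False
        if n_steps is None:
            n_steps = sys.maxsize
        while self.running:
            self.step()
            if self.steps >= n_steps or stopping_condition(self):
                self.running = False


# Step-count limit:
m1 = ToyModel()
m1.run_model(n_steps=10)

# Dynamics-based stopping condition:
m2 = ToyModel()
m2.run_model(stopping_condition=lambda m: m.happy >= 5)
```

Both stopping mechanisms funnel into the same `running` flag, so external code can still inspect why-agnostic completion via `model.running`.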
@quaquel I like your proposal, especially the part about having both `n_steps` and a `stopping_condition`. I was thinking, however: what value does the whole `run_model` method add? All you need is:

```python
model = Model()
while model.running:
    model.step()
```

Want to run for some amount of steps beforehand, with some condition in your model?

```python
while model.running and model.steps < 1000:
    model.step()
```

Have a condition outside the model you want to apply?

```python
while model.running and model.steps < 1000 and not stopping_condition(model):
    model.step()
```

These are all one-liners, and they make it very explicit what's happening and how. I think I would be a proponent of removing the whole `run_model` method. |
Fair point, but what about the batch runner? For it, it would be quite convenient to have a default `run_model` method or function it could call. |
I would say keep |
I was about to say what @EwoutH said, because the code added is too short and easy to write, at the expense of more documentation to read. What we should provide are functions that are hard to write and test. For the batch runner, perhaps it could just define its own |
I am not sure I fully understand this point. As stated, I disagree. In fact, if we used this line of reasoning, a lot of Mesa code should be removed. Many of the methods in Mesa are relatively short and could easily be written by a user. The value of Mesa, or any library, is that it offers a framework that allows you to focus on the important stuff and not have to think about default stuff. Having a convenience function for running the model fits my view of what Mesa should offer. It is also easy to document and test. This does not imply that I am against removing |
In most cases, models are run for an explicit number of steps, N. In cases where there is a halting condition, it is easy to override `model.run_model` like this, e.g. for forest fire:

before

after
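A hedged sketch of what such an override might look like for a forest fire model (the `ToyForestFire` internals here are invented; Mesa's actual forest fire example tracks burning trees differently):

```python
class ToyForestFire:
    """Toy forest fire: a fixed number of burning trees, one goes out per step."""

    def __init__(self, burning=5):
        self.burning = burning  # number of currently burning trees
        self.steps = 0

    def step(self):
        self.steps += 1
        self.burning -= 1  # toy dynamics: one fire goes out per step

    def run_model(self):
        # Override: halt once the fire has burned out, no `running` flag needed.
        while self.burning > 0:
            self.step()


model = ToyForestFire()
model.run_model()
```

The halting condition is local to the model subclass, so no shared `running` attribute is required.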