
Investigate terms being processed in tick #1560

Open · xsebek opened this issue Sep 30, 2023 · 7 comments
Labels: Bug (the observed behaviour is incorrect or unexpected), Z-Performance (memory or time efficiency of the Swarm game)

Comments

@xsebek (Member) commented Sep 30, 2023

Describe the bug

It seems we might be too lazy with term processing, leading to terms being repeatedly processed during the tick:

Inspecting the profile with Profiteur:

[Screenshot: Profiteur profile view, 2023-09-30]

To Reproduce

Profile the non-inlined version in #1556 (comment).

Add a cabal.project.local file like this one:

ignore-project: False

program-options
  ghc-options:
    -haddock
    -O2
    -fprof-auto
    "-with-rtsopts=-N -p -s -h -i0.1"

profiling: True
profiling-detail: all-functions
library-profiling: True
executable-profiling: True

Then run:

cabal run swarm -- -i data/scenarios/Challenges/wave.yaml --autoplay
profiteur swarm.prof
open swarm.prof.html

Expected behavior
I would expect no (significant) runParser in the tick profile; parsing should have been done long before, in initAppState.

xsebek added the Bug and Z-Performance labels on Sep 30, 2023
@xsebek (Member, Author) commented Sep 30, 2023

This occurred to me only after writing it all, but I bet it is the run/import problem again.

@byorgey @kostmo if that is true, we can move some of the info from here to the import issue, or keep this open and link it.

@byorgey (Member) commented Sep 30, 2023

Yes, you're probably right, though on some level I don't know how well we can reasonably optimize the situation where multiple robots import the same file. In general, if robot A and robot B both import x.sw and I (1) first run robot A, then (2) edit x.sw, then (3) run robot B, I want robot B to re-parse + process x.sw from scratch.

However, I suppose in the special case that a bunch of robots are created as part of a scenario description and all import the same file, we should be able to parse & process the file only once.

@xsebek (Member, Author) commented Sep 30, 2023

Yes, in this case parsing an unchanged file 242 times is a bit silly. 😅

If import works at parse time, then all imports of the same file that are parsed at the same time can reuse the result, no? At the very least, robots built from the same template should.
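
As a rough sketch of that kind of sharing (loadImport, parseTerm, and Term here are hypothetical stand-ins, not Swarm's actual API), a single load pass could thread a memo table through its imports, so 242 robots importing the same file trigger exactly one parse:

import qualified Data.Map as M
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.State (StateT, gets, modify)

newtype Term = Term String  -- placeholder for a parsed AST

parseTerm :: String -> Term
parseTerm = Term  -- stand-in for the real parser

-- Memoize imports for the duration of one load pass: the first import
-- of a path parses it, every later import reuses the cached result.
loadImport :: FilePath -> StateT (M.Map FilePath Term) IO Term
loadImport path = do
  cached <- gets (M.lookup path)
  case cached of
    Just t -> pure t
    Nothing -> do
      src <- liftIO (readFile path)
      let t = parseTerm src
      modify (M.insert path t)
      pure t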

@byorgey (Member) commented Sep 30, 2023

> If import works at parse time, then all imports of the same file that are parsed at the same time can reuse the result, no? At the very least, robots built from the same template should.

Yes, good point.

I will get back to hacking on import soon, since it seems every time I turn around there is another issue caused by run. But it is rather difficult, for some annoying reasons. Last time I kind of gave up at some point; hopefully this time I will have some new insights after not thinking about it for a while.

@kostmo (Member) commented Mar 14, 2024

> I don't know how well we can reasonably optimize the situation where multiple robots import the same file. In general, if robot A and robot B both import x.sw and I (1) first run robot A, then (2) edit x.sw, then (3) run robot B, I want robot B to re-parse + process x.sw from scratch.

A cache that allows reuse of a parsed and processed file among multiple robots should be keyed by the SHA1 of the file's contents. That will properly invalidate the entry if the file is edited between uses.
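
A minimal sketch of such a content-addressed cache (assuming the cryptohash-sha1 package; loadProcessed and Processed are hypothetical stand-ins for the real load pipeline):

import qualified Crypto.Hash.SHA1 as SHA1
import qualified Data.ByteString as BS
import qualified Data.Map as M
import Data.IORef (IORef, readIORef, modifyIORef')

newtype Processed = Processed BS.ByteString  -- placeholder result

process :: BS.ByteString -> Processed
process = Processed  -- stand-in for parse + elaborate

-- Key the cache by the SHA1 of the file's contents: if the file is
-- edited, the digest changes and we fall through to a fresh parse.
loadProcessed :: IORef (M.Map BS.ByteString Processed)
              -> FilePath -> IO Processed
loadProcessed cacheRef path = do
  contents <- BS.readFile path
  let key = SHA1.hash contents
  cache <- readIORef cacheRef
  case M.lookup key cache of
    Just p -> pure p
    Nothing -> do
      let p = process contents
      modifyIORef' cacheRef (M.insert key p)
      pure p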

@byorgey (Member) commented Mar 15, 2024

> A cache that allows reuse of a parsed and processed file among multiple robots should be keyed by the SHA1 of the file's contents.

Sure, though that would still require reading the file every time to calculate its SHA1 hash. If we're reading the entire file from disk anyway, I don't know how much time we really save by not also parsing + processing it.

Perhaps a better idea would be to key the cache on file modification time.
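
A minimal sketch of that mtime-keyed variant (using getModificationTime from the directory package; loadProcessed and Processed are again hypothetical stand-ins):

import qualified Data.Map as M
import Data.IORef (IORef, readIORef, modifyIORef')
import Data.Time.Clock (UTCTime)
import System.Directory (getModificationTime)

newtype Processed = Processed String  -- placeholder result

process :: String -> Processed
process = Processed  -- stand-in for parse + elaborate

-- On a cache hit this never touches the file's contents at all, at
-- the cost of trusting the filesystem's modification timestamps.
loadProcessed :: IORef (M.Map FilePath (UTCTime, Processed))
              -> FilePath -> IO Processed
loadProcessed cacheRef path = do
  mtime <- getModificationTime path
  cache <- readIORef cacheRef
  case M.lookup path cache of
    Just (t, p) | t == mtime -> pure p
    _ -> do
      src <- readFile path
      let p = process src
      modifyIORef' cacheRef (M.insert path (mtime, p))
      pure p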

@byorgey (Member) commented Oct 7, 2024

Linking #495 here since ideally solving that should solve this issue as well.
