
WIP: add benchmarking #192

Closed

wants to merge 4 commits

Conversation

@madig (Collaborator) commented Oct 10, 2021

I'm playing a bit with criterion, as per #177.

Needs https://github.com/madig/noto-amalgamated and the glyphs2ufo output for NotoSans-MM.glyphs.

@github-actions

🗜 Bloat check ⚖️

Comparing 38003b7 against 65fb995

| target | old size | new size | difference |
|---|---|---|---|
| target/release/examples/load_save | 1.78 MB | 1.79 MB | 1.53 KB (0.08%) |
| target/debug/examples/load_save | 8.31 MB | 8.32 MB | 2.72 KB (0.03%) |

@linebender linebender deleted a comment from github-actions bot Oct 10, 2021
@github-actions

🗜 Bloat check ⚖️

Comparing 10f87bf against 65fb995

| target | old size | new size | difference |
|---|---|---|---|
| target/release/examples/load_save | 1.78 MB | 1.79 MB | 1.53 KB (0.08%) |
| target/debug/examples/load_save | 8.31 MB | 8.32 MB | 2.72 KB (0.03%) |

@cmyr (Member) left a comment


benchmarks that involve IO are going to be especially difficult to interpret, since so much depends on hardware, read congestion, caching, etc etc. My inclination would be to just use the load_save example which does time those operations, which is useful enough for getting a loose sense of the impact of various changes?

If you want to do actual benchmarks, I think it is best to keep IO out of the benchmarks: load some files into memory first, and then benchmark the parsing.

I think it would also be useful to try and keep these benchmarks narrow in scope, e.g. benchmark reading just some .glif files, and benchmark just parsing font_info.plist, etc?
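The pattern suggested above — do the IO once up front, then time only the in-memory work — can be sketched without any benchmark harness (criterion wraps the same loop with statistical analysis). The point-counting function below is a purely illustrative stand-in for real glif parsing:

```rust
use std::time::Instant;

// Stand-in for real .glif parsing, just so the timed loop does some work.
fn count_points(glif: &str) -> usize {
    glif.matches("<point").count()
}

fn main() {
    // 1. The "IO" happens once, outside the timed region
    //    (here the data is inline; in practice, read the files first).
    let glif = r#"<glyph name="A" format="2">
  <outline>
    <contour>
      <point x="0" y="0" type="line"/>
      <point x="100" y="0" type="line"/>
      <point x="50" y="100" type="line"/>
    </contour>
  </outline>
</glyph>"#;

    // 2. Only the in-memory work is timed.
    let start = Instant::now();
    let mut total = 0;
    for _ in 0..10_000 {
        total += count_points(glif);
    }
    println!("{} points counted in {:?}", total, start.elapsed());
}
```

Keeping disk reads out of the timed region is what makes run-to-run numbers comparable across machines and cache states.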

```rust
pub fn load(
    base_dir: &Path,
    glyph_names: &NameList,
    request: &crate::DataRequest,
```
Member

I would not include DataRequest as part of the base API here, especially if it is only being used to thread through whether or not we load the glyph libs?

Collaborator Author

Oh right, could have made it a bool.

Member

Or make it a separate method or something? I'd prefer not to break the public API for this (and we may not need to merge this at all, it sounds like?)

```diff
@@ -13,6 +13,7 @@
 pub struct DataRequest {
     pub layers: bool,
     pub lib: bool,
+    pub glyph_lib: bool,
```
Member

Is this a useful addition, or is it only useful in the context of benchmarking?

@madig (Collaborator Author) commented Oct 12, 2021

This was primarily added to test the impact of glyph lib loading on loading times. Result: it roughly doubles parsing time on my Linux and Windows machines. Ouch. I currently have no other use for it, but this PR is a WIP for a reason :)
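The flag under discussion would be exercised roughly like this. A self-contained sketch of a `DataRequest`-style struct; the `without_glyph_lib` helper is hypothetical, not norad's actual API:

```rust
/// Sketch of a request struct with the flag from this PR's diff.
#[derive(Debug, Clone, Copy)]
pub struct DataRequest {
    pub layers: bool,
    pub lib: bool,
    pub glyph_lib: bool,
}

impl Default for DataRequest {
    fn default() -> Self {
        // Load everything by default.
        DataRequest { layers: true, lib: true, glyph_lib: true }
    }
}

impl DataRequest {
    /// Hypothetical helper: skip per-glyph libs, e.g. to measure
    /// their share of total parsing time.
    pub fn without_glyph_lib(mut self) -> Self {
        self.glyph_lib = false;
        self
    }
}

fn main() {
    let req = DataRequest::default().without_glyph_lib();
    println!("glyph_lib = {}", req.glyph_lib); // prints "glyph_lib = false"
}
```

A separate method like this (rather than threading the flag through `load`) is one way to avoid breaking the public API, as suggested earlier in the review.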

@madig (Collaborator Author) commented Oct 12, 2021

> benchmarks that involve IO are going to be especially difficult to interpret, since so much depends on hardware, read congestion, caching, etc etc.

Hm, yes. That's actually what I wanted to test here. I found my Windows work machine takes roughly double the time of my Linux PC :(

I agree with the rest of your comment. I haven't fully thought through yet what I want to benchmark and why.

@rsheeter (Collaborator)

This is very old, do you plan to finish it or could it be closed?

@madig madig closed this Aug 27, 2023
@RickyDaMa RickyDaMa deleted the add-benchmarking branch August 28, 2023 09:05