
Badger2 integration #9230

Closed
wants to merge 9 commits into from

Conversation

feld commented Aug 26, 2022

Supersedes #7705

welcome bot commented Aug 26, 2022

Thank you for submitting this PR!
A maintainer will be here shortly to review it.
We are super grateful, but we are also overloaded! Help us by making sure that:

  • The context for this PR is clear, with relevant discussion, decisions
    and stakeholders linked/mentioned.

  • Your contribution itself is clear (code comments, self-review for the
    rest) and in its best form. Follow the code contribution guidelines if
    they apply.

Getting other community members to do a review would be great help too on complex PRs (you can ask in the chats/forums). If you are unsure about something, just leave us a comment.
Next steps:

  • A maintainer will triage and assign priority to this PR, commenting on
    any missing things and potentially assigning a reviewer for high
    priority items.

  • The PR gets reviewed, discussed, and approved as needed.

  • The PR is merged by maintainers when it has been approved and comments addressed.

We currently aim to provide initial feedback/triaging within two business days. Please keep an eye on any labelling actions, as these will indicate priorities and status of your contribution.
We are very grateful for your contribution!

feld (Author) commented Aug 26, 2022

This is working for me so far and could use more testers. I haven't experimented with Badger 3 yet, but this should work for larger IPFS storage that needs reliable garbage collection, since flatfs is at the mercy of your OS filesystem's ability to handle small files and your kernel's ability to handle the flood of syscalls...

Jorropo (Contributor) commented Aug 31, 2022

@feld CI is not passing; try running go vet ./... on your code.

dokterbob (Contributor) commented

@feld Would LOVE this! Please please please clean up the branch and save the world by finally having a performant DS in IPFS!

Zir0h commented Oct 24, 2022

Hi, having this would be super awesome!

acejam commented Oct 28, 2022

It would be great to see this merged!

CsterKuroi (Contributor) commented

How about Badger2's GC performance?

feld (Author) commented Nov 9, 2022

> How about Badger2's GC performance?

It's working for me, but it definitely needs more testers.

gammazero and others added 9 commits November 9, 2022 22:01
This is a provisional PR and should not be merged until its dependencies are merged into their master branches.

- Add badger2 plugin
- Update docs to describe badger2
- Add tests for badger2
- Add options to configure compression and blockCacheSize
- Document vlogFileSize config param
- Update go-ipfs-config
- Update go-ds-badger2
We were opening the database and telling it to keep infinite copies by not defining a value.
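For context, a sketch of what a Datastore.Spec using this backend might look like in Kubo's config. This is a hedged illustration based only on the options named in the commit messages above — the "badger2ds" type name and the compression/blockCacheSize/vlogFileSize keys are assumptions, not confirmed against the PR diff:

```json
{
  "Datastore": {
    "Spec": {
      "type": "measure",
      "prefix": "badger2.datastore",
      "child": {
        "type": "badger2ds",
        "path": "badger2ds",
        "compression": "zstd",
        "blockCacheSize": "268435456",
        "vlogFileSize": "1073741824"
      }
    }
  }
}
```

The measure wrapper mirrors how Kubo wraps its default flatfs/badger specs to collect datastore metrics.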
dokterbob (Contributor) commented

Soon(tm) we'll be bumping our kubo build at ipfs-search.com. Once that's gathered a bit of data, I could give this a spin and see what it does in production. ETA ~2 weeks.

feld (Author) commented Nov 10, 2022

> Soon(tm) we'll be bumping our kubo build at ipfs-search.com. Once that's gathered a bit of data, I could give this a spin and see what it does in production. ETA ~2 weeks.

I actually think IPFS should be using MDBX instead of something like Badger as the default storage engine. I'm quite surprised nobody has gone down that rabbit hole yet.

https://github.com/torquem-ch/mdbx-go

https://twitter.com/deplinenoise/status/840453414546898944

https://gitflic.ru/project/erthink/libmdbx#improvements-beyond-lmdb

And then if you need to scale REALLY big, it should be some object storage backend like Ceph.

Maybe someone with more free time will pull on this thread for me.

lidel (Member) left a comment

Thank you for submitting this PR.

Before one of the Kubo maintainers reviews this, I need to ask two questions:

Merging an unsupported datastore is not something we want, so the team is unlikely to invest time in reviewing this until we have confidence it won't be dead on arrival.
Hope you understand our hesitance.

> I actually think IPFS should be using MDBX instead of something like Badger as the default storage engine

Looking at alternatives to badger that are actively maintained sounds sensible. Mind opening a dedicated issue for discussing this? (ideally with working code and benchmarks that show which workloads would benefit from it over flatfs, but a stub issue with links you provided is ok too).

Let's keep this PR about Badger 2.

guseggert (Contributor) commented Nov 15, 2022

I don't like the fact that Kubo makes it hard for you to use whatever datastore you want, and that Kubo maintainers have to act as gatekeepers for this, which is why I wrote https://github.com/guseggert/go-ds-grpc. I have a PR here to add it to Kubo, but it has failed to gain traction (and in retrospect I'd bias towards HTTP instead of gRPC).

If a mechanism like this were added in Kubo, allowing you to download or build an out-of-process datastore, such as Badger 2, and wire it up to a vanilla Kubo binary with config, would that suffice for your use cases? The wiring is slightly more complicated, but it would let folks distribute datastores for Kubo without having to fork it, make a plugin, or convince Kubo maintainers to support it.

(I'm not saying we won't accept this or support it, just offering an alternative that would solve this problem more generally.)

Jorropo (Contributor) commented Dec 1, 2022

@hsanjuan started work on badger3: https://github.com/ipfs/go-ds-badger3/ (and http://github.com/ipfs/go-ds-pebble), which sound better than the [no longer maintained] badger 2.
I join @lidel and @guseggert in their comments.

Filecoin has also done work on datastores. What would be awesome is a test suite that accepts a datastore interface and benchmarks all of the features (we might actually have code for this somewhere, probably on the Filecoin side).
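A minimal sketch of what such a harness could look like, assuming nothing beyond the standard library. The Store interface and all names here are illustrative, not the actual go-datastore or Filecoin interfaces; a real harness would wrap badger/pebble/flatfs behind the same interface:

```go
package main

import (
	"fmt"
	"time"
)

// Store is the minimal surface a backend must provide for this sketch
// (a real harness would also cover deletes, queries, batching, and GC).
type Store interface {
	Put(key string, value []byte) error
	Get(key string) ([]byte, error)
}

// mapStore is an in-memory stand-in used to exercise the harness.
type mapStore struct{ m map[string][]byte }

func newMapStore() *mapStore { return &mapStore{m: make(map[string][]byte)} }

func (s *mapStore) Put(k string, v []byte) error { s.m[k] = v; return nil }

func (s *mapStore) Get(k string) ([]byte, error) {
	v, ok := s.m[k]
	if !ok {
		return nil, fmt.Errorf("not found: %s", k)
	}
	return v, nil
}

// benchStore writes and then reads n blocks of blockSize bytes,
// returning the elapsed time for each phase.
func benchStore(s Store, n, blockSize int) (putTime, getTime time.Duration) {
	block := make([]byte, blockSize)
	start := time.Now()
	for i := 0; i < n; i++ {
		s.Put(fmt.Sprintf("block-%d", i), block)
	}
	putTime = time.Since(start)
	start = time.Now()
	for i := 0; i < n; i++ {
		s.Get(fmt.Sprintf("block-%d", i))
	}
	getTime = time.Since(start)
	return putTime, getTime
}

func main() {
	put, get := benchStore(newMapStore(), 1000, 256*1024)
	fmt.Printf("puts: %v, gets: %v\n", put, get)
}
```

Running the same loops against each backend while varying n and blockSize would give the per-workload comparison discussed in this thread.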

But given that neither of them (badger 2 or 3) is maintained, I don't think we will be adding them; this is easy to ship as a plugin.

Jorropo closed this Dec 1, 2022
hsanjuan (Contributor) commented Dec 1, 2022

Note badger3 is maintained; they had a release some weeks ago: https://github.com/dgraph-io/badger/releases

Unfortunately I don't think the "come back with benchmarks" is a useful approach.

Badger is optimized for SSDs. Both Badger and Pebble have lots of options that control many performance factors depending on things like available memory, the need for compression, etc. One big problem in Kubo is that badger options cannot be tuned, and the defaults are especially bad at larger scale (they work on a 1GB Raspberry Pi, though).

The underlying filesystem also matters, and the disk layout too (some datastores may work better with layouts in which writes are sharded across disks, etc.). In the case of flatfs, it is generally very slow, especially with ext4, but faster with XFS. Some datastores could be optimized for spinning disks rather than SSDs.

What benchmarks show is that badger cannot be used at scale (I mean, you don't need a benchmark for that). Node restarts take multiple minutes at certain scales. Also, compaction doesn't work well at all in Badger v1, which means a lot of disk usage overhead. But perhaps badger3 with the right tuning works much, much better, and perhaps Pebble even more so.

Another issue is that the Blockstore introduces caching and bloom-filter layers, but Badger3/Pebble already include those, so the duplicated layers would probably make things even worse.

Ultimately, things depend a lot on usage patterns (reads vs. writes, total number of blocks, block sizes, etc.). Badger v3 keeps everything under 1MB in the LSM tree (memory) by default. That means someone using 1MB ipfs blocks is going to get completely different behaviour than someone using 256KB blocks. Even worse, badger v1 used 32 bytes for this threshold, which means even small things like CIDs go to disk.
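The block-size effect described above can be sketched with a toy placement check. The constants mirror the defaults mentioned in this comment (roughly 1MB for badger v3, 32 bytes for v1); the function is an illustration of the routing rule, not badger's actual code:

```go
package main

import "fmt"

// Illustrative thresholds taken from the discussion above: badger v3
// keeps values below ~1 MiB in the LSM tree; badger v1 used 32 bytes.
const (
	valueThresholdV3 = 1 << 20 // 1 MiB
	valueThresholdV1 = 32      // bytes
)

// storedInLSM sketches the placement rule: values smaller than the
// threshold stay in the (memory-resident) LSM tree, larger ones go
// to the on-disk value log.
func storedInLSM(valueSize, threshold int) bool {
	return valueSize < threshold
}

func main() {
	for _, size := range []int{36, 256 * 1024, 2 << 20} {
		fmt.Printf("%8d bytes: LSM under v3=%v, under v1=%v\n",
			size, storedInLSM(size, valueThresholdV3), storedInLSM(size, valueThresholdV1))
	}
}
```

So a 36-byte CID lands on disk under v1 but stays in the LSM tree under v3, while a full 1MB block flips the other way, which is why block size changes the observed behaviour so much.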

What I mean is that the only way of making Kubo storage not suck right now (it is really in a very bad place) is to add support for additional stores (badger3, pebble, potentially bolt), along with control of all configuration values, so that people can actually test different configurations. Whether that's via plugins or core support is up to you, but it is very low-hanging fruit to just "pass".


> https://github.com/guseggert/go-ds-grpc

This is a nice experiment, but the network/serialization overhead here would kill performance. Some use-cases may not mind, but it is definitely not for heavy use like other backends could provide. I would add it nevertheless, as it allows remote backends etc.


On another note, there is also not-so-low-hanging fruit: blockstore implementations like https://github.com/ipld/go-storethehash, sqlite-backed storage à la https://github.com/whyrusleeping/whypfs with refcounting, etc. These do require time and more refactoring.

Jorropo (Contributor) commented Dec 2, 2022

@hsanjuan thanks for your answer.


> Note badger3 is maintained; they had a release some weeks ago: https://github.com/dgraph-io/badger/releases

I jumped to conclusions too quickly, but looking at the releases, the activity seems quite light:

Nov 4:

Fixed

  • fix(manifest): fix manifest corruption due to race condition in concurrent compactions (dgraph-io/badger#1756)

Chores

  • Add CI/CD jobs to release branch
  • Add PR and Issue templates to release branch
  • Update Codeowners in release branch
  • Update Readme in release branch

Oct 15:

fix(arm64): bump ristretto v0.1.0 --> v0.1.1 (dgraph-io/badger#1806)

Looking at the code frequency, we can see a clear shutdown in early 2021: https://github.com/dgraph-io/badger/graphs/code-frequency

The project being maintained isn't a key point (we use badger1 after all), but AFAICT this isn't a project we can add now and iron out later, so we should be more careful when adding it.


> Unfortunately I don't think the "come back with benchmarks" is a useful approach.

My understanding is that for now the badger2 and badger3 datastores are experiments / toes in the water; I hope that we (Kubo maintainers) don't need to get pulled in for this kind of work.
If it is hard for people to run their own fork or add plugins, that is something I would like to fix.


> What I mean is that the only way of making Kubo storage not suck right now (it is really in a very bad place) is to add support for additional stores (badger3, pebble, potentially bolt), along with control of all configuration values, so that people can actually test different configurations. Whether that's via plugins or core support is up to you, but it is very low-hanging fruit to just "pass".

This sounds awesome, but I think this shouldn't involve us (see the point above).


On another note, we don't have any good visibility into how bad the issue is. I mean, I know I'm using Kubo every day, but flatfs and badger are fine enough for my hundreds-of-GiBs workflow on my machine with too much RAM. I know there is work in different places, mainly around supporting bigger sizes, but I haven't been able to find an issue summarizing the problem. If you know of one, or of some other docs, it would be nice, so I could raise this at our triage meetings and we could place it on our board of priorities.

hsanjuan (Contributor) commented Dec 2, 2022

@Jorropo you mean Kubo maintainers should not be involved in doing benchmarking or something else?

> this is a thing I would like to fix.

How? What's on your mind?


Note that benchmarking was done early on in Lotus development, and that's how Badger2 was chosen for Lotus. Badger2 handles huge datastores there. Even if badger is no longer heavily maintained, that doesn't mean it is bad for many use-cases (better GC is what many people complain about, and that should be way better in Badger3). Pebble is a more open question: as it is heavily maintained, it has probably improved since it was tried out. Unfortunately, the benchmarks done back then (2 years ago) may have little meaning now, given how codebases and hardware improve, etc.

@ajnavarro has some recent benchmarks that can give an idea of how things are today, but in the end I think it all depends on use-cases.

lidel mentioned this pull request Jun 5, 2023