On the GC vs noGC divide #34
Perhaps D's operator new should have a default argument of the allocator. Also, a global allocator would be set like the following:
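A minimal sketch of what that could look like (all names here are hypothetical; nothing like this exists in druntime today):

```d
// Hypothetical sketch only: an allocator interface plus a settable
// process-global default that `new` would consult.
interface Allocator
{
    void* allocate(size_t size) nothrow;
    void deallocate(void* p) nothrow;
}

__gshared Allocator globalAllocator;

void setGlobalAllocator(Allocator a) nothrow
{
    globalAllocator = a;
}

// The idea: `new T(args)` would lower to roughly
//   auto mem = globalAllocator.allocate(__traits(classInstanceSize, T));
// followed by initialization and the constructor call, with the GC
// remaining the default when nothing else has been set.
```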
Yeah, I've been thinking a little about this again too; what I'm leaning toward right now is a push/pop global allocator facility. I'll write more later, gotta run again.
Out of time again, but remind me later. The idea is you can push/pop things to work with an arena allocator while still being able to force the full GC for something you know will escape. Emscripten is my goal right now; got a druntime running w/o a C library on plain Linux too. But this is pretty soon on the list, so I'll write about it in... idk, a week or two if nothing else comes up first.
Would it be too much to ask to have an opNew? You technically can overload it per type.
opNew I say is a bad idea. D actually used to have it, and it was removed. (This is the reason you could pass parameters to new; that syntax is a parse error nowadays, but try it with D1 and you'll see it work.)

The D1 feature was basically a clone of C++'s thing, just without the option of a global overload. (D never does global overloads.) All that remains of this in current-day D is the ability to... (If you've never tried D1, I do kinda suggest trying it, not just for this historical curiosity, but also to see how freaking fast it can compile. It is a nice little language. I like modern-day D better, but that old D really wasn't bad.) Anyway, this was removed - a decision I agree with - because it doesn't really work, and even if it did, the use cases are iffy:
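For reference, the removed D1 feature looked roughly like this (from memory, so treat the details as approximate):

```d
import std.c.stdlib;  // D1-era module path for malloc/free

// D1-era per-class allocator/deallocator, long since removed from D2.
class Foo
{
    new(size_t size)        // custom allocator; called by `new Foo`
    {
        void* p = std.c.stdlib.malloc(size);
        assert(p !is null);
        return p;
    }

    delete(void* p)         // matching deallocator; run by `delete foo;`
    {
        if (p)
            std.c.stdlib.free(p);
    }
}

// Extra arguments were passed through the `new(...)` syntax mentioned
// above, which is why that syntax existed at all.
```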
Given those constraints, and D's general lack of global overloads, what we do instead in D is you can @disable new on the type.
That commented string is not supported... but it should be. I need to add that to the work list, but even without it, the disabled new at least prompts people to check the documentation. For example, here: https://opendlang.org/library/arsd.simpleaudio.AudioOutputThread.html - the first note you see is "DO NOT USE NEW ON THIS" and it tells you what to do instead. Having the error message itself say it would be nice, but at least the error prevents you from making a mistake, and the next natural place to check, the docs, tells you what to do. I'm pretty happy with this right now for the case where you want to do something different for specific objects. (You might note it is fairly rare that I actually implement some other thing; usually the answer is to just disable it in favor of a stack construction.)

But what about places where you can't edit the source? You're right that you can overload the GC: https://github.com/opendlang/opend/blob/master/druntime/src/core/internal/gc/impl/manual/gc.d GCs currently must be set before the runtime initializes. I'm fairly happy with this too. You still have the trouble of not typically being able to swap it afterward... because swapping it could be really useful. Consider a case like this:
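The disable-plus-factory pattern described above, in sketch form (Connection is a made-up type; the real arsd.simpleaudio API differs):

```d
import core.lifetime : emplace;
import core.stdc.stdlib : malloc;

class Connection
{
    @disable new();   // `new Connection()` is now a compile error,
                      // steering users to the documented construction path
    private this() {}

    // Documented alternative: placement-construct into malloc'd memory.
    static Connection create()
    {
        enum size = __traits(classInstanceSize, Connection);
        void[] mem = malloc(size)[0 .. size];
        return emplace!Connection(mem);
    }
}
```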
Or heck:
For these usage points, you know all the memory used in a block of code can be freed at once; it is a nice place to use an arena allocator of some sort. You set it at the beginning, then free everything all at once when you're finished, similar to automatic stack variables when the function ends. But since you can't modify the called functions, the most realistic way to do this is with some kind of global overload. What I propose is something like:
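A sketch of what the proposed push/pop override might look like (all names hypothetical):

```d
void handleRequest()
{
    auto arena = new ArenaAllocator();  // hypothetical arena-backed GC impl
    gcPush(arena);                      // allocations below now come from the arena
    scope(exit) gcPop();                // refcount hits zero: free it all at once

    parseAndRespond();  // unmodified library code, arena-allocated for free
}
```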
When the Arena refcount goes back to zero, it can free everything at once, done automatically here on the call to pop. Now, what if the function needs something to survive long term? They can push the main GC back on for a while:
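And the escape hatch for long-lived data, in the same hypothetical notation:

```d
void rememberResult(Result r)
{
    gcPush(gcMainInstance());  // temporarily restore the default collector
    scope(exit) gcPop();

    globalCache ~= r.clone();  // this allocation survives the caller's arena
}
```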
So that's your escape when you know something must have a long lifetime, even when the user might have overloaded it. (Instead of push/pop calls, it might be a RAII thing, or a function that takes a lambda with the overridden GC present inside; this way it can automatically set up the scope exit for you.) This would be useful both in a bare-metal case and in regular programs where you need to adapt some existing code to the new usage. The downside is if a function does a global cache thing when you've overridden it and they didn't know, so you end up with a use-after-free situation... that's why I said to push the main GC back for anything that must survive. Have any thoughts on this?
This kind of implies, then, that it would have to be either a compiler builtin or a keyword/trait/etc.
Wait... how much are we talking? It's far too late for me to go back to D1 for osdev, but I am very interested in fast-compiling languages. dmd with no optimizations currently compiles faster than or as fast as gcc at -O0 (tested with a hello world program, dmd in betterC). I have some cases where D is actually faster to compile than C as well. If it in any way competes with tcc, I see it as a win. TCC should be the end goal.
Syntactic sugar (and maybe some safety). e.g.
This requires a class and an interface. This is a chicken-and-egg problem.
Don't tempt me to do a garbage-collected kernel and write a paper on it, because now I really want to.
Now this is getting messy. In kernelspace, if something lives "long term", that means we never free it. You don't ever free the memory used by the various platform-dependent structures because they are constantly used throughout the system's life. Take this example from SerenityOS: https://github.com/SerenityOS/serenity/blob/6123113255b7aec0d7f9e81041c3229f534b49ba/Kernel/Devices/PCISerialDevice.cpp#L36
Also a threading issue, but I'm not terribly worried about this because I wrap things in locks.
Yeah, thread local; don't wanna have to take the lock to make the switch. (One of the biggest problems with the current GC is that a lot of things take the global lock. Steve and Amaury upstream are working on changing that; I'm following their progress (it is in the SDC repository: https://github.com/snazzy-d/sdc/commits/master/) and I'll pull it here when it shows results too...) That Jai thing looks similar; druntime does a lot of that already too, just more scattered, and keeping it close is a good idea. And he also does push/pop allocators, so that's a good sign I'm not completely off track lol
Yeah, I'm aware of that. It's nice.
It seems you can push/pop the entire context or just allocators. The macros also do poor man's RAII by automatically popping at scope exit.
My RTS game does a full recompile and link in 1/10th of a second in D1. D2 can replicate this too; in fact, I think most of the slowness in a standard D2 build is linking Phobos rather than anything else. My current freestanding D2 program - which uses druntime, mind you - has similar timing, 96 milliseconds, with ldc (no optimizations enabled). Though that's just a sample program and not a whole working game! Still very promising, and it makes me think we can have those good times again. Of course, compiling (not linking) something like arsd.minigui eats 580ms, so I don't think we'll ever be amazing, but to be fair, minigui is by itself bigger than that old game... (the game is about 15k lines, all D dependencies included except Phobos; minigui is 54k lines including D dependencies except druntime (minigui does not actually require Phobos!)). Anyway, moving on, it isn't that important; it's just I've never seen a D1 program compile more slowly than a D2 program, and I liked it.
The alternative to a struct-provided opNew is a struct-provided factory, so the user side would look like
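So the user side of a struct-provided factory would look something like this (sketch only; Buffer is a made-up type):

```d
struct Buffer
{
    ubyte* data;
    size_t length;

    // the named factory standing in for opNew
    static Buffer alloc(size_t n)
    {
        import core.stdc.stdlib : malloc;
        return Buffer(cast(ubyte*) malloc(n), n);
    }

    void release()
    {
        import core.stdc.stdlib : free;
        free(data);
        data = null;
        length = 0;
    }
}

void demo()
{
    auto b = Buffer.alloc(4096);
    scope(exit) b.release();
    // ... use b.data[0 .. b.length] ...
}
```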
do it! Actually I think a kernel providing GC as a service to all applications is potentially interesting for a few reasons - it has some global knowledge about available memory on the system, dirty pages for knowing what to scan, etc. to schedule it smarter than an application. I think Microsoft Research did some studies about it.
Yeah, in those cases you'd similarly avoid the arena/pool...
This approach, although not first pioneered in that language, is a popular choice in the Rust ecosystem. I really thought about this approach, but .alloc would be a static method, which means it wouldn't be able to auto-free in the destructor. Even if it could, RAII-style, why not then put the allocation in the constructor? This is a real-world example of a nogc struct that has to be built from the ground up: https://github.com/Connor-GH/xv6-public/blob/relics/kernel/d/kobject.d#L78-L125.
I mean that the kmalloc implementation will be a garbage collector. Currently in xv6, userspace malloc calls sbrk, which calls growproc, which calls allocuvm, which calls kpage_alloc(), which, as you may guess, allocates a page. The more I think about this, though, the worse an idea it becomes: a large, fat guard in the way of every allocation, stopping the world sometimes to collect memory... sigh. Yeah, scratch that. I can just deal with memory lifetimes, but first I need the proper features in D.
You can make the alloc function return a smart pointer or similar that deallocates the rest in its dtor. And yeah, once you have two pieces like that, it could indeed alloc in its constructor too, but since D has no default constructors, the named static method is more reliable for it. It works OK for something like your KArray, which has to have elements to allocate anyway, but it's less reliable for smaller individual objects. Destructors aren't called on pointers without either a wrapper or an explicit call anyway... we could prolly make...
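A sketch of the smart-pointer-returning factory described above (the Owned/make names are made up for illustration; compare std.typecons.Unique):

```d
import core.lifetime : emplace;
import core.stdc.stdlib : malloc, free;

// Owning wrapper: frees in its destructor, so lifetime is tied to scope
// rather than to the (static) factory method.
struct Owned(T)
{
    T* ptr;

    @disable this(this);  // forbid copies so there is exactly one owner

    ~this()
    {
        if (ptr !is null)
        {
            destroy(*ptr);
            free(ptr);
            ptr = null;
        }
    }
}

Owned!T make(T, Args...)(auto ref Args args)
{
    auto p = cast(T*) malloc(T.sizeof);
    emplace(p, args);
    return Owned!T(p);
}
```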
You might be able to make it work better using MMU-protected pages, but yeah, idk, kernel space is a different zone. You might be able to make use of the "manual GC", which hooks new into your custom kalloc and depends on you to delete it, though.
I didn't say a default constructor. I meant a constructor for structs in the sense of just...
You overestimate my allocator severely. It literally just page-maps memory up to 224M, and userspace steals memory from the kernel as it needs to. (Not used pages, just free ones. Yes, a rogue process could cause a kernel panic, but I could easily make an OOM killer. The problem is that I don't have e820 memory maps, or 64-bit support for that matter.)
This seems pretty ideal. I can implement...
If I say "new", I want this to be a heap-allocated object, period. The only exception is compile-time objects. How hard could this be? I would probably be willing to implement it myself if the code around new is approachable.
I don't think it'd be hard to change the definition of new to always call the allocator, but the optimizer might still inline and dead-code-remove things too. The problem is that it's a breaking change to existing code, so it'd be kinda iffy to change at this point.
One problem with that...
So there's this: apparently registerGCFactory exists if you overload interface GC, but in order to register it, you need to pass an initialization function pointer that calls... I did try taking this approach and making a manual GC, but it would not only be difficult, but also hard to understand, and it adds a lot of cruft.
The chicken-and-egg problem is not really a problem - eggs obviously came first, since many other animals can also lay eggs. For example, the... Though I'd note that there is already a manual GC; you can see it for yourself in the Museum of druntime, core/impl/gc/manual. It wasn't functional at all until we did some repairs to its decayed structure, but it looks like a pretty sturdy skeleton now.
core.internal.gc.impl.manual
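For reference, registering a custom GC through that machinery looks roughly like this (based on druntime's core.gc.registry; MyGC is an assumed class implementing the GC interface, and details may differ by druntime version):

```d
import core.gc.registry : registerGCFactory;
import core.gc.gcinterface : GC;

// Must run before druntime initializes its GC, hence crt_constructor.
extern (C) pragma(crt_constructor) void registerMyGC()
{
    registerGCFactory("mygc", &createMyGC);
}

GC createMyGC()
{
    // Can't use the GC to build the GC: malloc + emplace instead.
    import core.stdc.stdlib : malloc;
    import core.lifetime : emplace;
    enum size = __traits(classInstanceSize, MyGC);
    auto mem = malloc(size)[0 .. size];
    return emplace!MyGC(mem);
}

// Then selected at runtime with: --DRT-gcopt=gc:mygc
```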
I think this divide is completely avoidable if data structures just accept an allocator argument. That would also make it much easier to mix strategies. Sure, that might mean that builtin dynamic arrays and hashmaps have to go because they don't have a destructor. But to be honest, thanks to operator overloading, library solutions are just as ergonomic. In practice you don't feel any difference. And maybe it would be a good thing to have fewer primitives builtin.
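A sketch of the allocator-argument style (using std.experimental.allocator's Mallocator, which already ships; the List container itself is made up):

```d
import std.experimental.allocator.mallocator : Mallocator;

struct List(T, Alloc = Mallocator)
{
    private T[] items;
    private size_t count;

    void push(T value)
    {
        if (count == items.length)
        {
            // grow through the supplied allocator, never the GC
            auto bytes = Alloc.instance.allocate((items.length * 2 + 4) * T.sizeof);
            auto bigger = cast(T[]) bytes;
            bigger[0 .. count] = items[0 .. count];
            if (items.length)
                Alloc.instance.deallocate(items);
            items = bigger;
        }
        items[count++] = value;
    }

    ~this()  // deterministic cleanup, unlike builtin arrays
    {
        if (items.length)
            Alloc.instance.deallocate(items);
    }
}

// Swap strategies without touching call sites:
//   List!(int) a;                      // malloc-backed (default)
//   List!(int, SomeArenaAllocator) b;  // arena-backed
```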