
Debounced computeds #283

Open
pimterry opened this issue Nov 18, 2020 · 6 comments


@pimterry

I've been playing with debouncing computeds. I have a complex & CPU-expensive computed that subscribes to many observables, and I'd like to debounce recalculations there.

I haven't been able to find a nice way to do this. #24 gets close, but was never merged, and it only works for debouncing downstream updates, not incoming updates, so it isn't really sufficient (unless your computed has only one or two inputs).

I know this breaks the general contract of @computed, and I fully agree this shouldn't be a common case, but I do think it's a useful niche case. I've got the below currently working as a drop-in alternative for the @computed decorator, which I think works nicely.

A) Is there anything obviously wrong with this approach, or any other existing alternatives I've missed?
B) If not, would you be interested in including it in mobx-utils?

import { computed, createAtom, IAtom, IComputedValueOptions } from "mobx";

function debounceComputed<T>(timeoutMs: number, computedOptions: IComputedValueOptions<T> = {}) {
    // Note: no second <T> on the inner function -- redeclaring it there would
    // shadow the outer type parameter and disconnect it from computedOptions.
    return (target: any, key: string, descriptor: PropertyDescriptor): PropertyDescriptor => {
        if (!descriptor.get) throw new Error('debounceComputed requires a getter');

        const internalFn = descriptor.get as () => T;
        let cachedValue: { value: T, atom: IAtom } | undefined;

        return computed(computedOptions)(target, key, { ...descriptor, get: function (this: any) {
            if (cachedValue) {
                // Don't recalculate until the atom pings us
                cachedValue.atom.reportObserved();
            } else {
                // Calculate and cache the result:
                cachedValue = { value: internalFn.apply(this), atom: createAtom("DebounceAtom") };

                // Batch subsequent runs for the next timeoutMs:
                setTimeout(() => {
                    const { atom } = cachedValue!;
                    cachedValue = undefined;
                    atom.reportChanged(); // Ping subscribers to update
                }, timeoutMs);
            }
            return cachedValue.value;
        }});
    };
}

I'm still using MobX 5, but as far as I'm aware the same concept (same code?) should work just the same in v6.
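The decorator above computes eagerly on the first read and then holds the cached value until the atom pings subscribers. For readers without MobX at hand, the same leading-edge caching idea can be sketched as a plain higher-order function (all names here are illustrative, not from mobx-utils; a time window stands in for the setTimeout + reportChanged push, and the injectable `now` clock exists only to make the behaviour deterministic):

```typescript
// Leading-edge cache: the first call computes and caches; calls within the
// next windowMs return the cached value; once the window has passed, the
// next call recomputes. (A hypothetical sketch -- not the mobx-utils API.)
function debounceGetter<T>(
    windowMs: number,
    fn: () => T,
    now: () => number = Date.now
): () => T {
    let cached: { value: T; expiresAt: number } | undefined;
    return () => {
        if (!cached || now() >= cached.expiresAt) {
            cached = { value: fn(), expiresAt: now() + windowMs };
        }
        return cached.value;
    };
}
```

Unlike the decorator, this pull-based variant never notifies anyone when the window expires; in MobX that push is exactly what `atom.reportChanged()` provides.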

@NaridaL
Collaborator

NaridaL commented Mar 13, 2021

If you want to create a PR, this seems like a useful function 👍.

@Dragomir-Ivanov

Hi @pimterry
I would like to use this function, but I'm having a hard time understanding its usage. Can you give a brief example? I'm also willing to dig into it and make a PR into mobx-utils.

@pimterry
Author

One example in my application: performant free-text filtering over lots of data.

I have a free-text entry field where you can type in strings to search for in a large data set. The searching is quite expensive. A simple approach would be to have a computed filteredData that returns the data filtered by the input value, but this can cause performance issues if it runs for every keypress while you're typing. There are also issues if the dataset receives a few changes in very rapid succession.

Instead I debounce this computed, so it only runs every 250ms. This means:

  • The user presses a key in the input, and the observable input value updates
  • The computed getter immediately runs, and recalculates the filtering
  • The user presses more keys during the next 250ms, and nothing happens
  • At the end of the 250ms, the computed runs again, with the final value of the observable

This can happen repeatedly, e.g. if you're typing constantly for a second it should update the filtering 5 times throughout that time (0ms, 250ms, 500ms, 750ms, 1s) instead of every single key press.
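The timeline above can be simulated deterministically in plain TypeScript with a fake clock (everything here is illustrative and MobX-free; the 250ms leading-edge window mirrors the decorator's caching, and all names are made up for the example):

```typescript
// Simulating one second of typing against a 250ms leading-edge cache.
// `clock` is a fake timestamp, advanced manually instead of via setTimeout.
const WINDOW_MS = 250;
let clock = 0;
let recomputeCount = 0;
let cached: { value: string[]; expiresAt: number } | undefined;

const data = ["apple", "apricot", "banana"];
let query = "";

function filteredData(): string[] {
    if (!cached || clock >= cached.expiresAt) {
        recomputeCount++; // the expensive filtering runs here
        cached = {
            value: data.filter(d => d.includes(query)),
            expiresAt: clock + WINDOW_MS,
        };
    }
    return cached.value;
}

// One keypress every 50ms for a second, reading the computed each time:
for (clock = 0; clock <= 1000; clock += 50) {
    query = "ap".slice(0, Math.min(2, clock / 50 + 1));
    filteredData();
}
// recomputeCount is now 5 (recalculations at 0, 250, 500, 750 and 1000ms),
// rather than 21 -- one per read.
```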

This does potentially result in an inconsistent data model (e.g. in this case the filter input and the filtered results might be out of sync) for up to the max duration of the debouncing, but it should always settle into the correct in-sync value by the end.

My current code for this is here and there are a few quick tests here. It's used by just decorating the getter with @debounceComputed(250) as here (the actual usage there is a bit more complicated than the example above, but the concepts are the same).

@Dragomir-Ivanov

@pimterry
That is awesome!
This is exactly what I need. I will use it to calculate a dirty flag for a model, having cloneDeep-ed the initial value. However, I will do this with a deep isEqual, and I would like to save CPU exactly when the data is changing fast.

One question: I have a class that I make observable with makeAutoObservable. MobX deduces its annotations automatically in this case, so how do I specify that a getter needs to be a debounced computed?

Thank you!

@pimterry
Author

One question: I have a class that I make observable with makeAutoObservable. MobX deduces its annotations automatically in this case, so how do I specify that a getter needs to be a debounced computed?

No idea, sorry! I'm still using decorators instead, and I'm actually on MobX 5 anyway, so there might be minor differences there.

@Dragomir-Ivanov

Thank you @pimterry, for the valuable information. For the rest, I'll ask ChatGPT.
