More user feedback: when plotting too much data (e.g. 100k+ points), the graph becomes polluted; the points are too close together and no differences are visible.
For example, when rendering `examples/CSV/sine_huge_100k.csv`, the sine looks like a green horizontal stripe.
My proposal:

- set a maximum number of (logical) points to render on a graph (`resolution = 10k`)
- if the number of actual data points (e.g. `size = 100k`) is bigger:
  - compute the reduction ratio (`resolution / size`)
  - subsample the data (choose every nth point, average, ...?)
  - plot the subsampled data
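The proposal above could be sketched roughly like this (a minimal sketch, not the actual implementation; the `downsample` helper and the `{x, y}` point shape are assumptions):

```javascript
// Reduce `points` to at most `resolution` logical points by averaging
// fixed-size buckets. "Choose every nth" would instead keep bucket[0].
// Assumption: `points` is an array of {x, y} objects.
function downsample(points, resolution = 10000) {
  if (points.length <= resolution) return points; // nothing to do

  // Reduction ratio resolution/size, expressed as a bucket size.
  const bucketSize = Math.ceil(points.length / resolution);
  const out = [];
  for (let i = 0; i < points.length; i += bucketSize) {
    const bucket = points.slice(i, i + bucketSize);
    const avg = (sel) =>
      bucket.reduce((sum, p) => sum + sel(p), 0) / bucket.length;
    out.push({ x: avg((p) => p.x), y: avg((p) => p.y) });
  }
  return out;
}
```

Averaging smooths noise but can hide spikes; picking every nth point keeps extremes but is noisier. Which strategy to use per graph is open for discussion.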
What statistics/preprocessing frameworks are there for JS? Do you have any favourites?
I just learned about Vega (preprocessing & presentation).
What do you think?