
Influxdb deletes too much #40

Open
genofire opened this issue Mar 13, 2017 · 11 comments

Comments

@genofire
Member

genofire commented Mar 13, 2017

Config

[influxdb]
enable   = true
address  = "http://localhost:8086"
database = "ffhb"
username = ""
password = ""
# cleaning data of measurement node,
#   which are older than 7d
delete_after = "7d"
#   how often run the cleaning
delete_interval = "1h"

Version
016e2dc

Description
InfluxDB deletes more than just the data older than 7 days :(
But this happens only on some nodes, not on all of them.

privat-genofire

@genofire genofire added the bug label Mar 13, 2017
@genofire
Member Author

We do not handle individual nodes; we run the delete on the complete measurement, see:

db.client.Query(client.NewQuery(query, db.config.Influxdb.Database, "m"))
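For reference, a minimal sketch of how such a measurement-wide delete could be issued with the Go InfluxDB v1 client. The measurement name, cutoff, and connection settings are assumptions for illustration, not necessarily yanic's exact code:

package main

import (
	"log"

	client "github.com/influxdata/influxdb1-client/v2"
)

func main() {
	// connect to the InfluxDB instance from the config above (assumed address)
	c, err := client.NewHTTPClient(client.HTTPConfig{Addr: "http://localhost:8086"})
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// drop everything in the "node" measurement older than delete_after (7d);
	// measurement name and duration are illustrative
	query := "DELETE FROM node WHERE time < now() - 7d"
	resp, err := c.Query(client.NewQuery(query, "ffhb", "m"))
	if err == nil {
		err = resp.Error()
	}
	if err != nil {
		log.Fatal(err)
	}
}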

@genofire genofire changed the title [BUG] Influxdb clean to much Influxdb clean to much Mar 16, 2017
@corny corny changed the title Influxdb clean to much Influxdb deletes too much Apr 11, 2017
@genofire
Member Author

The related issue at InfluxDB has been closed - this should be looked at again, and this issue could probably be closed soon as well.

@valcryst

Hi, we still have problems with this issue running the latest influx > 1.4.0~n201710310800
(yanic is configured to keep 365 days)
https://map.ff-en.de/graph/dashboard/db/freifunk-en-router?refresh=1m&orgId=1&from=now-30d&to=now

Why does yanic not use the built-in "RETENTION POLICY" mechanism?
The InfluxDB devs strongly suggest doing so.

Regards
Matthias

@genofire
Member Author

Thanks for the feedback.

If I read the documentation correctly, it sounds like a "RETENTION POLICY" applies to a whole database and not to a specific measurement.
(We want to keep the data in the global measurement forever.)

Or am I wrong?
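For what it's worth, in InfluxDB 1.x a retention policy is indeed created per database, but one database can hold several policies and each write targets one of them. A minimal sketch of creating a short-lived policy next to the default one, issued through the same Go client (policy and database names are just examples):

package main

import (
	"log"

	client "github.com/influxdata/influxdb1-client/v2"
)

func main() {
	c, err := client.NewHTTPClient(client.HTTPConfig{Addr: "http://localhost:8086"})
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// data written into "seven_days" is dropped automatically after 7 days,
	// while data written into the default policy of "ffhb" is kept
	q := `CREATE RETENTION POLICY "seven_days" ON "ffhb" DURATION 7d REPLICATION 1`
	resp, err := c.Query(client.NewQuery(q, "ffhb", ""))
	if err == nil {
		err = resp.Error()
	}
	if err != nil {
		log.Fatal(err)
	}
}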

@valcryst

valcryst commented Nov 19, 2017

Sorry for the delay.
The latest InfluxDB works perfectly without deleting too much.

Currently using: InfluxDB shell version: 1.5.0~n201711120800

Yanic is configured to prune after 60d; before this new InfluxDB version it deleted everything after ~14 days.
https://map.ff-en.de/graph/dashboard/db/freifunk-en-router?refresh=1m&orgId=1&from=now-30d&to=now

This is very satisfying after all this time :)

@genofire You're correct about that, it's defined per database.
How about moving "global" to a separate database, since it is not used together with regular "node" queries?
Don't get me wrong (no criticism), but I would really like to use fully supported built-in functions.
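A minimal sketch of what writing the long-lived global data into its own database could look like with the Go InfluxDB v1 client - the database, measurement, and field names below are only examples, not yanic's actual ones:

package main

import (
	"log"
	"time"

	client "github.com/influxdata/influxdb1-client/v2"
)

func main() {
	c, err := client.NewHTTPClient(client.HTTPConfig{Addr: "http://localhost:8086"})
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// points for a separate database that keeps its data forever, while the
	// per-node database can use a short retention policy or delete_after
	bp, err := client.NewBatchPoints(client.BatchPointsConfig{
		Database:  "ffhb_global",
		Precision: "m",
	})
	if err != nil {
		log.Fatal(err)
	}

	// example counters only
	pt, err := client.NewPoint(
		"global",
		nil,
		map[string]interface{}{"nodes": 423, "clients": 1337},
		time.Now(),
	)
	if err != nil {
		log.Fatal(err)
	}
	bp.AddPoint(pt)

	if err := c.Write(bp); err != nil {
		log.Fatal(err)
	}
}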

Regards
Matthias

@ghost

ghost commented Jul 28, 2018

Now it doesn't delete anything anymore. The nodes config is:

[nodes]
# Cache file
# a json file to cache all data collected directly from respondd
state_path    = "/var/lib/yanic/state.json"
# prune data in RAM, cache-file and output json files (i.e. nodes.json)
# that were inactive for longer than
prune_after   = "7d"
# Export nodes and graph periodically
save_interval = "5s"
# Set node to offline if not seen within this period
offline_after = "10m"

But I still have node data from the end of June in InfluxDB.

@genofire
Member Author

@CharlemagneLasse thank you for the feedback - it still seems to be a bug in InfluxDB.
Which version of InfluxDB are you using?

@ghost

ghost commented Jul 29, 2018

1.5.3. But it is actually a bug in your issue report: the deletion is configured using "[database]" and not "[nodes]".

[database]
# this will send delete commands to the database to prune data
# which is older than:
delete_after    = "30d"
# how often run the cleaning
delete_interval = "1h"

@genofire
Member Author

genofire commented Aug 2, 2018

@CharlemagneLasse You were right, I had not pasted the right content of the config file. Is everything okay now?

@ghost

ghost commented Aug 3, 2018

Yes

@maurerle
Contributor

I think this issue can be closed?
