
Please read memory usage data from cgroups #2

Open
baryluk opened this issue Mar 23, 2019 · 1 comment
Labels: help wanted (Extra attention is needed)

Comments

baryluk commented Mar 23, 2019

Similar to #1 but memory related stats:

# cd /sys/fs/cgroup/memory/system.slice/system-postgresql.slice
/sys/fs/cgroup/memory/system.slice/system-postgresql.slice# grep . *
cgroup.clone_children:0
grep: cgroup.event_control: Invalid argument
memory.failcnt:0
grep: memory.force_empty: Invalid argument
memory.kmem.failcnt:0
memory.kmem.limit_in_bytes:9223372036854771712
memory.kmem.max_usage_in_bytes:52396032
memory.kmem.tcp.failcnt:0
memory.kmem.tcp.limit_in_bytes:9223372036854771712
memory.kmem.tcp.max_usage_in_bytes:0
memory.kmem.tcp.usage_in_bytes:0
memory.kmem.usage_in_bytes:17018880
memory.limit_in_bytes:9223372036854771712
memory.max_usage_in_bytes:17903652864
memory.move_charge_at_immigrate:0
memory.numa_stat:total=0 N0=0
memory.numa_stat:file=0 N0=0
memory.numa_stat:anon=0 N0=0
memory.numa_stat:unevictable=0 N0=0
memory.numa_stat:hierarchical_total=226104 N0=226104
memory.numa_stat:hierarchical_file=18420 N0=18420
memory.numa_stat:hierarchical_anon=207684 N0=207684
memory.numa_stat:hierarchical_unevictable=0 N0=0
memory.oom_control:oom_kill_disable 0
memory.oom_control:under_oom 0
memory.oom_control:oom_kill 0
grep: memory.pressure_level: Invalid argument
memory.soft_limit_in_bytes:9223372036854771712
memory.stat:cache 0
memory.stat:rss 0
memory.stat:rss_huge 0
memory.stat:shmem 0
memory.stat:mapped_file 0
memory.stat:dirty 0
memory.stat:writeback 0
memory.stat:pgpgin 0
memory.stat:pgpgout 0
memory.stat:pgfault 0
memory.stat:pgmajfault 0
memory.stat:inactive_anon 0
memory.stat:active_anon 0
memory.stat:inactive_file 0
memory.stat:active_file 0
memory.stat:unevictable 0
memory.stat:hierarchical_memory_limit 9223372036854771712
memory.stat:total_cache 921415680
memory.stat:total_rss 7901184
memory.stat:total_rss_huge 0
memory.stat:total_shmem 846168064
memory.stat:total_mapped_file 135168
memory.stat:total_dirty 811008
memory.stat:total_writeback 0
memory.stat:total_pgpgin 18059580
memory.stat:total_pgpgout 17833660
memory.stat:total_pgfault 17232468
memory.stat:total_pgmajfault 0
memory.stat:total_inactive_anon 27570176
memory.stat:total_active_anon 823103488
memory.stat:total_inactive_file 67342336
memory.stat:total_active_file 8105984
memory.stat:total_unevictable 0
memory.swappiness:60
memory.usage_in_bytes:943161344
memory.use_hierarchy:1
notify_on_release:0
grep: [email protected]: Is a directory
# 

memory.stat's total_rss and memory.usage_in_bytes are probably of biggest interest, but it would be great to have almost all of these (especially the total_* ones) exposed as metrics.
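
A minimal sketch of what reading these files could look like under cgroup v1 (not the exporter's actual code; the helper names are illustrative and the path is just the one from the output above):

```go
// Hypothetical sketch: read memory.usage_in_bytes and the total_* keys from
// memory.stat for a cgroup v1 memory controller directory.
package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

// readUint reads a single-integer file such as memory.usage_in_bytes.
func readUint(cgPath, file string) (uint64, error) {
	b, err := os.ReadFile(filepath.Join(cgPath, file))
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(b)), 10, 64)
}

// readMemoryStat parses memory.stat into a key -> value map
// (cache, rss, total_rss, total_cache, ...).
func readMemoryStat(cgPath string) (map[string]uint64, error) {
	f, err := os.Open(filepath.Join(cgPath, "memory.stat"))
	if err != nil {
		return nil, err
	}
	defer f.Close()

	stats := make(map[string]uint64)
	s := bufio.NewScanner(f)
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) != 2 {
			continue
		}
		v, err := strconv.ParseUint(fields[1], 10, 64)
		if err != nil {
			continue
		}
		stats[fields[0]] = v
	}
	return stats, s.Err()
}

func main() {
	// Example slice from the issue; adjust for whichever unit is scraped.
	cg := "/sys/fs/cgroup/memory/system.slice/system-postgresql.slice"

	usage, err := readUint(cg, "memory.usage_in_bytes")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	stats, err := readMemoryStat(cg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("usage_in_bytes=%d total_rss=%d total_cache=%d\n",
		usage, stats["total_rss"], stats["total_cache"])
}
```

Note that in the output above the non-hierarchical counters (rss, cache, ...) are all 0 while the total_* ones are not, because use_hierarchy is 1 and the charges live in the child .service cgroups; the total_* keys aggregate over the whole slice.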

povilasv (Contributor) commented Mar 25, 2019

Hey, the plan is to figure out what we want to read from cgroups and what from procfs, but this definitely looks like the right way to go.

PRs are welcome!
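
For comparison, a minimal sketch of the procfs side of that decision, assuming per-process RSS is read from /proc/<pid>/status (VmRSS is reported in kB); the cgroup counters above cover the whole slice in one read, while procfs values would have to be collected and summed per process:

```go
// Hypothetical sketch: per-process RSS from procfs, to contrast with the
// slice-wide cgroup figures. VmRSS in /proc/<pid>/status is given in kB.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

func procRSSBytes(pid int) (uint64, error) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/status", pid))
	if err != nil {
		return 0, err
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), "VmRSS:") {
			fields := strings.Fields(s.Text()) // ["VmRSS:", "1234", "kB"]
			kb, err := strconv.ParseUint(fields[1], 10, 64)
			if err != nil {
				return 0, err
			}
			return kb * 1024, nil
		}
	}
	return 0, s.Err()
}

func main() {
	rss, err := procRSSBytes(os.Getpid())
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("self VmRSS = %d bytes\n", rss)
}
```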
