feat: support maxmemory-clients config #2513

Closed

Conversation

AntiTopQuark
Contributor

Related to #2284.
Implemented a new maxmemory-clients configuration to enable client eviction based on overall memory usage, preventing potential OOM issues due to excessive client connections.
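
As a rough illustration of the intended semantics (the helper name and the treatment of 0 as "unlimited" below are assumptions for this sketch, not the PR's actual code):

```cpp
#include <cstddef>

// Hypothetical helper: decide whether client eviction should kick in.
// A max_memory_clients of 0 is assumed to mean "no limit", mirroring how
// Redis treats maxmemory-clients.
bool ShouldEvictClients(std::size_t total_client_memory, std::size_t max_memory_clients) {
  if (max_memory_clients == 0) return false;  // limit disabled
  return total_client_memory > max_memory_clients;
}
```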


  auto db_stats = storage->GetDBStats();
  string_stream << "keyspace_hits:" << db_stats->keyspace_hits << "\r\n";
  string_stream << "keyspace_misses:" << db_stats->keyspace_misses << "\r\n";

{
{https://www.anyknew.com/go/10112884
Member

hmm

Contributor Author

I'm sorry for the mistake

Contributor Author

I'll review the code again shortly.

@AntiTopQuark AntiTopQuark changed the title feat: support maxmemory-clients [WIP]feat: support maxmemory-clients Aug 31, 2024
AntiTopQuark and others added 13 commits August 31, 2024 13:32
…#2516)

This closes #2512.

Currently, the replication thread waits to acquire the worker's exclusive guard before closing the DB.
But it now stops the worker from running new commands after acquiring the worker's exclusive guard,
which might cause a deadlock if a switch happens at the same time.

The following steps will show how it may happen:

- T0: client A sends `slaveof MASTER_IP0 MASTER_PORT0`; the replication thread is started and waits for the exclusive guard.

- T1: client B sends `slaveof MASTER_IP1 MASTER_PORT1`, and `AddMaster` tries to stop the previous replication thread, which is still waiting for the exclusive guard. But the exclusive guard is already held by the current thread.

The workaround is also straightforward: stop workers from running new commands by setting `is_loading_` to
true before acquiring the lock in the replication thread.
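
Abstracting away the Kvrocks-specific types, the lock-ordering problem can be sketched like this (a generic illustration only, not the actual code):

```cpp
#include <mutex>
#include <thread>

std::mutex exclusive_guard;  // stands in for the worker's exclusive guard

// First `slaveof`: the replication thread blocks here until it can take the guard.
void ReplicationThreadBody() {
  std::lock_guard<std::mutex> lock(exclusive_guard);
  // ... close the DB and start syncing with the new master ...
}

// Second `slaveof`: the command path already holds the guard to block new commands.
void HandleSecondSlaveof(std::thread &prev_replication_thread) {
  std::lock_guard<std::mutex> lock(exclusive_guard);
  // AddMaster() waits for the previous replication thread to finish, but that
  // thread is blocked acquiring the guard we are holding -> deadlock.
  prev_replication_thread.join();
}
```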
@AntiTopQuark AntiTopQuark changed the title [WIP]feat: support maxmemory-clients feat: support maxmemory-clients Sep 5, 2024
@AntiTopQuark AntiTopQuark changed the title feat: support maxmemory-clients feat: support maxmemory-clients config Sep 5, 2024
@caipengbo
Contributor

I want to discuss how we can implement this feature. My idea is to count client memory within each worker (using libevent's callback to count evbuffer) to avoid locking across workers. @AntiTopQuark @PragmaTwice @git-hulk @mapleFU @torwig
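
A minimal sketch of how that per-worker counting could look with libevent, assuming a counter per worker that is updated from an evbuffer callback (the counter name and the wiring below are illustrative):

```cpp
#include <atomic>
#include <cstddef>

#include <event2/buffer.h>

// In the real code this counter would live inside each Worker; it is a single
// global here only to keep the sketch short.
std::atomic<std::size_t> worker_client_memory{0};

// libevent calls this whenever data is added to or drained from an evbuffer,
// so the counter tracks input/output buffer sizes without any extra lock.
void OnEvbufferChanged(struct evbuffer *, const struct evbuffer_cb_info *info, void *) {
  worker_client_memory.fetch_add(info->n_added, std::memory_order_relaxed);
  worker_client_memory.fetch_sub(info->n_deleted, std::memory_order_relaxed);
}

// Hypothetical wiring: register the callback when the worker accepts a connection.
void TrackConnectionBuffers(struct evbuffer *input, struct evbuffer *output) {
  evbuffer_add_cb(input, OnEvbufferChanged, nullptr);
  evbuffer_add_cb(output, OnEvbufferChanged, nullptr);
}
```

Readers on other threads (e.g. for INFO) would see slightly stale values, which matches the relaxed accuracy trade-off discussed below.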

@git-hulk
Member

git-hulk commented Sep 6, 2024

I want to discuss how we can implement this feature. My idea is to count client memory within each worker (using libevent's callback to count evbuffer) to avoid locking across workers. @AntiTopQuark @PragmaTwice @git-hulk @mapleFU @torwig

Yes, as mentioned in the previous comment, it would be better to limit the connection output first by checking the libevent buffer. And for maxmemory-clients, I think we don't need to count every byte; just checking the libevent input and output buffers is fine with me as well.

@AntiTopQuark
Contributor Author

I want to discuss how we can implement this feature. My idea is to count client memory within each worker (using libevent's callback to count evbuffer) to avoid locking across workers. @AntiTopQuark @PragmaTwice @git-hulk @mapleFU @torwig

Yes, as mentioned in the previous comment, it would be better to limit the connection output first by checking the libevent buffer. And for maxmemory-clients, I think we don't need to count every byte; just checking the libevent input and output buffers is fine with me as well.

I have modified the code and moved the evictionClients function into Worker.
On another note, I believe that maxmemory-clients is meant to limit the total memory used by all connections. If you want to limit the size of the output buffer, you can use the client-output-buffer-limit configuration option. I will submit a PR for this in the future.
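
For illustration, such a worker-local pass might take roughly this shape; the free-function signature, the Conn stand-in type, and the eviction order are assumptions for the sketch, not the PR's actual code:

```cpp
#include <algorithm>
#include <cstddef>
#include <map>

// Minimal stand-in for the connection type; the real code would use the
// project's Connection class (see GetConnectionMemoryUsed in the diff below).
struct Conn {
  std::size_t GetConnectionMemoryUsed() const;
  void Close();
};

// Worker-local sketch: each worker passes only the connections it owns,
// so no lock shared with other workers is needed.
void EvictionClients(std::map<int, Conn *> &conns, std::size_t max_memory_clients) {
  if (max_memory_clients == 0) return;  // limit disabled

  std::size_t total = 0;
  for (const auto &[fd, conn] : conns) total += conn->GetConnectionMemoryUsed();

  // Disconnect the most memory-hungry connections first until back under the limit.
  while (total > max_memory_clients && !conns.empty()) {
    auto victim = std::max_element(conns.begin(), conns.end(), [](const auto &a, const auto &b) {
      return a.second->GetConnectionMemoryUsed() < b.second->GetConnectionMemoryUsed();
    });
    total -= victim->second->GetConnectionMemoryUsed();
    victim->second->Close();  // hypothetical disconnect call
    conns.erase(victim);
  }
}
```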

@mapleFU
Member

mapleFU commented Sep 7, 2024

I want to discuss how we can implement this feature. My idea is to count client memory within each worker (using libevent's callback to count evbuffer) to avoid locking across workers.

Yeah, I think holding a lock is harmful here. I can accept some untimely memory changes, but global checking is harmful...

@@ -559,4 +562,37 @@ void Connection::ResetMultiExec() {
  DisableFlag(Connection::kMultiExec);
}

size_t Connection::GetConnectionMemoryUsed() const {
  size_t total_memory = sizeof(*this);  // 包含所有成员变量的静态内存大小 (static memory size including all member variables)
Member

Please don't include Chinese comments.

Contributor Author

done

Comment on lines +568 to +593
  total_memory += name_.capacity();
  total_memory += ns_.capacity();
  total_memory += ip_.capacity();
  total_memory += announce_ip_.capacity();
  total_memory += addr_.capacity();
  total_memory += last_cmd_.capacity();
  total_memory += output_buffer_.capacity();
  total_memory += slave_output_buffer_.capacity();
  total_memory += evbuffer_get_length(Output()) + evbuffer_get_length(Input());

  for (const auto &channel : subscribe_channels_) {
    total_memory += channel.capacity();
  }
  for (const auto &pattern : subscribe_patterns_) {
    total_memory += pattern.capacity();
  }
  for (const auto &channel : subscribe_shard_channels_) {
    total_memory += channel.capacity();
  }
  for (const auto &cmd : multi_cmds_) {
    total_memory += cmd.capacity();
  }

  if (saved_current_command_) {
    total_memory += saved_current_command_->GetMemoryUsage();
  }
Member

@PragmaTwice PragmaTwice Sep 8, 2024

The code seems hard to maintain, and it also doesn't look like a precise estimation of the memory usage.

I'm wondering if it's really necessary to limit the connection memory usage.

@PragmaTwice
Member

I believe implementing an elegant solution to track and restrict client connection memory usage is challenging and could reduce code maintainability.

Instead, I suggest exploring alternative approaches, like limiting the number of clients.
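
For comparison, a sketch of that simpler alternative, a Redis-style maxclients check at accept time (the function name and error text are illustrative assumptions):

```cpp
#include <cstddef>
#include <string>

// Reject new connections once the configured client count is reached,
// instead of tracking per-connection memory.
bool CanAcceptNewClient(std::size_t current_clients, std::size_t max_clients, std::string *err) {
  if (max_clients != 0 && current_clients >= max_clients) {
    *err = "ERR max number of clients reached";  // Redis replies with a similar error
    return false;
  }
  return true;
}
```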

@AntiTopQuark
Contributor Author

I believe implementing an elegant solution to track and restrict client connection memory usage is challenging and could reduce code maintainability.

Instead, I suggest exploring alternative approaches, like limiting the number of clients.

When I looked at Redis previously, I saw that it uses the maxclients configuration option to limit the number of clients and maxmemory-clients to limit the total memory used by connections. If you only want to limit the output buffer, you can use client-output-buffer-limit to restrict the usage per connection.
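
To make the distinction concrete, a per-connection check in the spirit of client-output-buffer-limit might look like this sketch (the helper and what to do on violation are assumptions):

```cpp
#include <cstddef>

#include <event2/buffer.h>

// Look only at this connection's pending output, independent of any aggregate
// maxmemory-clients accounting; a value of 0 disables the limit.
bool OutputBufferExceeded(struct evbuffer *output, std::size_t hard_limit_bytes) {
  return hard_limit_bytes != 0 && evbuffer_get_length(output) > hard_limit_bytes;
}
```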

@AntiTopQuark
Contributor Author

@PragmaTwice @git-hulk Hi, should we continue pushing this PR forward, or can it be closed?
