feat: support maxmemory-clients config #2513
Conversation
src/server/server.cc
Outdated

auto db_stats = storage->GetDBStats();
string_stream << "keyspace_hits:" << db_stats->keyspace_hits << "\r\n";
string_stream << "keyspace_misses:" << db_stats->keyspace_misses << "\r\n";

{
{https://www.anyknew.com/go/10112884
hmm
I'm sorry for the mistake
I'll review the code again shortly.
…ction feature (#2506) Co-authored-by: mwish <[email protected]>
…#2516) This closes #2512. Currently, the replication thread waits to acquire the workers' exclusive guard before closing the DB, but workers are only stopped from running new commands after that guard has been acquired, which can deadlock when two switches happen at the same time. The following steps show how it may happen:
- T0: client A sends `slaveof MASTER_IP0 MASTER_PORT0`; the replication thread is started and waits for the exclusive guard.
- T1: client B sends `slaveof MASTER_IP1 MASTER_PORT1`; `AddMaster` tries to stop the previous replication thread, which is still waiting for the exclusive guard, but the guard is held by the current thread.

The workaround is straightforward: stop workers from running new commands by setting `is_loading_` to true before acquiring the lock in the replication thread.
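To make the ordering concrete, here is a minimal, self-contained sketch of that workaround, assuming a shared-mutex worker guard and an atomic loading flag (the names is_loading and works_mu are simplified stand-ins, not the actual kvrocks members): the replication thread sets the flag before trying to take the exclusive guard, so workers stop starting new commands and release their shared locks instead of blocking the switch.

#include <atomic>
#include <chrono>
#include <iostream>
#include <shared_mutex>
#include <thread>

std::shared_mutex works_mu;           // stand-in for the workers' exclusivity guard
std::atomic<bool> is_loading{false};  // stand-in for is_loading_

void Worker() {
  for (int i = 0; i < 200; ++i) {
    if (is_loading.load()) break;  // back off while replication is switching
    std::shared_lock<std::shared_mutex> guard(works_mu);
    std::this_thread::sleep_for(std::chrono::milliseconds(1));  // "run a command"
  }
}

void ReplicationSwitch() {
  is_loading.store(true);  // the fix: publish the flag BEFORE acquiring the lock
  std::unique_lock<std::shared_mutex> guard(works_mu);  // now acquirable
  std::this_thread::sleep_for(std::chrono::milliseconds(5));  // switch master, close DB, ...
  is_loading.store(false);
}

int main() {
  std::thread w(Worker), r(ReplicationSwitch);
  w.join();
  r.join();
  std::cout << "switched without deadlock\n";
  return 0;
}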
I want to discuss how we can implement this feature. My idea is to count client memory within each worker (using …
Yes, as mentioned in the previous comment. It would be better to limit the connection output first by checking the libevent buffer. And for the …
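For context, a rough sketch of what checking the libevent buffer could look like; evbuffer_get_length and bufferevent_get_output are real libevent calls, but the 64 MiB cap and the helper name are illustrative, not taken from this PR.

#include <event2/buffer.h>
#include <event2/bufferevent.h>

#include <cstddef>

// Illustrative cap on pending output per connection (not a kvrocks constant).
constexpr size_t kOutputBufferLimit = 64 * 1024 * 1024;

// True while the connection's pending libevent output is under the cap; a caller
// could stop queueing replies (or disconnect the client) once this returns false.
bool OutputWithinLimit(bufferevent *bev) {
  evbuffer *out = bufferevent_get_output(bev);
  return evbuffer_get_length(out) <= kOutputBufferLimit;
}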
I have modified the code and moved the …
Yeah, I think holding a lock is harmful here. I can accept some untimely memory change, but global checking is harmful...
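One way to read "accept some untimely memory change" is to keep a relaxed per-worker counter and only sum the counters when the limit is checked, so no global lock is ever taken; the naming below is mine, not the PR's.

#include <atomic>
#include <cstddef>
#include <vector>

// Each worker owns one counter; its connections add or subtract their estimated
// usage on that counter only, so updates never cross a lock.
struct WorkerMemoryStat {
  std::atomic<size_t> client_memory{0};
};

// Summing the per-worker counters yields an approximate (possibly slightly stale)
// total, which matches the "untimely but lock-free" trade-off discussed above.
size_t TotalClientMemory(const std::vector<WorkerMemoryStat> &workers) {
  size_t total = 0;
  for (const auto &w : workers) {
    total += w.client_memory.load(std::memory_order_relaxed);
  }
  return total;
}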
src/server/redis_connection.cc
Outdated
@@ -559,4 +562,37 @@ void Connection::ResetMultiExec() {
  DisableFlag(Connection::kMultiExec);
}

size_t Connection::GetConnectionMemoryUsed() const {
  size_t total_memory = sizeof(*this);  // 包含所有成员变量的静态内存大小 (static memory size of all member variables)
Please don't include Chinese comments.
done
  total_memory += name_.capacity();
  total_memory += ns_.capacity();
  total_memory += ip_.capacity();
  total_memory += announce_ip_.capacity();
  total_memory += addr_.capacity();
  total_memory += last_cmd_.capacity();
  total_memory += output_buffer_.capacity();
  total_memory += slave_output_buffer_.capacity();
  total_memory += evbuffer_get_length(Output()) + evbuffer_get_length(Input());

  for (const auto &channel : subscribe_channels_) {
    total_memory += channel.capacity();
  }
  for (const auto &pattern : subscribe_patterns_) {
    total_memory += pattern.capacity();
  }
  for (const auto &channel : subscribe_shard_channels_) {
    total_memory += channel.capacity();
  }
  for (const auto &cmd : multi_cmds_) {
    total_memory += cmd.capacity();
  }

  if (saved_current_command_) {
    total_memory += saved_current_command_->GetMemoryUsage();
  }
The code seems hard to maintain, and it also doesn't look like a precise estimation of memory usage.
I'm wondering if it's really necessary to limit the connection memory usage.
I believe implementing an elegant solution to track and restrict client connection memory usage is challenging and could reduce code maintainability. Instead, I suggest exploring alternative approaches, like limiting the number of clients.
When I looked at Redis previously, I saw that it uses the maxclients configuration option to limit the number of clients, and maxmemory-clients to limit the total memory used by connections. If you only want to limit the output buffer, you can use client-output-buffer-limit to restrict the usage per connection.
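For reference, the corresponding redis.conf settings look like this (the values are only examples):

maxclients 10000                                  # cap the number of simultaneous connections
maxmemory-clients 1gb                             # Redis 7.0+: cap total client memory (also accepts percentages such as 5%)
client-output-buffer-limit normal 256mb 64mb 60   # per-connection output-buffer limit for normal clients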
@PragmaTwice @git-hulk Hi, should we continue pushing this PR forward, or can it be closed?
Relates to #2284.
Implemented a new maxmemory-clients configuration to enable client eviction based on overall memory usage, preventing potential OOM issues due to excessive client connections.
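The thread doesn't show the final option syntax, so purely as an illustration (the value format is an assumption), usage in kvrocks.conf would presumably look something like:

# Hypothetical example; the accepted value format depends on the final implementation.
maxmemory-clients 1073741824   # start evicting clients once total client memory exceeds ~1 GiB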