router: Make the number of router threads configurable #1385

Merged — 1 commit merged into nginx:master from rthreads on Aug 19, 2024

Conversation

@ac000 (Member) commented Aug 13, 2024

Unit generally creates a number of extra router threads (to handle client connections, not including the main thread) to match the number of available CPUs.

There are cases where this can go wrong, e.g. on a high-CPU-count machine where Unit is effectively limited to a few CPUs via the cgroups cpu controller. Unit may then create a large number of router threads that will only ever run on a couple of CPUs.

There may be other cases where you would like to tweak the number of router threads, depending on your workload.

As it turns out, it looks like this was intended to be made configurable but was just never hooked up to the config system.

This adds a new '/settings/router/threads' config option which can be set like

  {
      "listen": {
          ...
      },

      "settings": {
          "router": {
              "threads": 2
          }
      },

      ...
  }

Before this patch (on a four cpu system)

  $  ps -efL | grep router
  andrew   419832 419829 419832  0    5 Aug12 pts/10   00:00:00 unit: router
  andrew   419832 419829 419833  0    5 Aug12 pts/10   00:00:00 unit: router
  andrew   419832 419829 419834  0    5 Aug12 pts/10   00:00:00 unit: router
  andrew   419832 419829 445145  0    5 03:31 pts/10   00:00:00 unit: router
  andrew   419832 419829 445146  0    5 03:31 pts/10   00:00:00 unit: router

After, with a threads setting of 2.

  $ ps -efL | grep router
  andrew   419832 419829 419832  0    3 Aug12 pts/10   00:00:00 unit: router
  andrew   419832 419829 419833  0    3 Aug12 pts/10   00:00:00 unit: router
  andrew   419832 419829 419834  0    3 Aug12 pts/10   00:00:00 unit: router

@ac000 ac000 requested a review from hongzhidao August 13, 2024 02:35
@ac000 ac000 linked an issue Aug 13, 2024 that may be closed by this pull request
@hongzhidao (Contributor) left a comment

Hi, about the usage in the commit example.

  1. Use "listeners" instead in the example.
 {
      "listen": {
          ...
      },

      "settings": {
          "router": {
              "threads": 2
          }
      },

      ...
  }
  2. I feel developers who are familiar with unit code can understand the "router" object in the "settings" well since threads run in the router process. But I'm not sure if it's good enough to introduce the new "router" option in the configuration. For users, can they understand it well? Thus we have two usages to choose:
    a. the current way.
    b. just use "listen_threads"
{
    "settings": {
             "listen_threads": 3
     }
}

I'd like to ask the team for suggestions.

static nxt_conf_vldt_object_t  nxt_conf_vldt_router_members[] = {
    {
        .name       = nxt_string("threads"),
        .type       = NXT_CONF_VLDT_INTEGER,
Contributor:

It seems we need to add more checks on its value. It's similar to "processes", but with fewer checks.

static nxt_int_t
nxt_conf_vldt_processes(nxt_conf_validation_t *vldt, nxt_conf_value_t *value,
    void *data)
{
    int64_t                         int_value;
    nxt_int_t                       ret;
    nxt_conf_vldt_processes_conf_t  proc;

    if (nxt_conf_type(value) == NXT_CONF_INTEGER) {
        int_value = nxt_conf_get_number(value);

        if (int_value < 1) {
            return nxt_conf_vldt_error(vldt, "The \"processes\" number must be "
                                       "equal to or greater than 1.");
        }

        if (int_value > NXT_INT32_T_MAX) {
            return nxt_conf_vldt_error(vldt, "The \"processes\" number must "
                                       "not exceed %d.", NXT_INT32_T_MAX);
        }

        return NXT_OK;
    }
    ...

Member Author:

I suspect I can just re-use one of the "existing" "threads" checks...

Contributor:

Correct.

Member Author:

Right, this one should do the trick...

static nxt_int_t
nxt_conf_vldt_threads(nxt_conf_validation_t *vldt, nxt_conf_value_t *value,
    void *data)
{
    int64_t  threads;

    threads = nxt_conf_get_number(value);

    if (threads < 1) {
        return nxt_conf_vldt_error(vldt, "The \"threads\" number must be "
                                   "equal to or greater than 1.");
    }

    if (threads > NXT_INT32_T_MAX) {
        return nxt_conf_vldt_error(vldt, "The \"threads\" number must "
                                   "not exceed %d.", NXT_INT32_T_MAX);
    }

    return NXT_OK;
}

Member Author:

Unless of course we change the name... shame to basically duplicate that function...

Contributor:

You could leave it until the team are happy with the usage.

@oopsoop2:

Thank you for the team's efforts. I am a developer from China. There are 700M people using nginx products here. If OK, I will promote it.

Member Author:

Sure, the more the merrier!

@hongzhidao (Contributor)

It would be great to have tests for this. Would test/test_reconfigure.py be a good place to put them?
Others look good.

@ac000 (Member Author) commented Aug 13, 2024

2. I feel developers who are familiar with unit code can understand the "router" object in the "settings" well since threads run in the router process. But I'm not sure if it's good enough to introduce the new "router" option in the configuration. For users, can they understand it well? Thus we have two usages to choose:

Well, if it's documented...

   a. the current way.
   b. just use "listen_threads"

I don't mind adding it as a top-level "settings" setting. But maybe there will be other 'router' related things that want configuring.

It seems like we will need to allow the listen(2) backlog to be configurable. Would that perhaps go under a 'router' settings? Seeing as it's the router process/threads that do the listening...

@ac000 (Member Author) commented Aug 13, 2024

It would be great to have tests for this. Would test/test_reconfigure.py be a good place to put them?

Heh, maybe!

@hongzhidao (Contributor)

2. I feel developers who are familiar with unit code can understand the "router" object in the "settings" well since threads run in the router process. But I'm not sure if it's good enough to introduce the new "router" option in the configuration. For users, can they understand it well? Thus we have two usages to choose:

Well, if it's documented...

   a. the current way.
   b. just use "listen_threads"

I don't mind adding it as a top-level "settings" setting. But maybe there will be other 'router' related things that want configuring.

It seems like we will need to allow the listen(2) backlog to be configurable. Would that perhaps go under a 'router' settings? Seeing as it's the router process/threads that do the listening...

It depends on whether it's a global option; I feel the backlog is bound to a specific listener.

Let’s discuss it with the team.

@oopsoop2 (an earlier similar comment was marked as off-topic):

Thank you for the team's efforts. I am a developer from China. There are 700M people using nginx products here. I will promote it.

@hongzhidao (Contributor)

Hi @ac000,
Considering #1383, we could extend the option to make the feature more general and powerful. For example:

    "settings": {
          "router": {
              "threads": "the number of available CPU numbers"
          }
      },

I found nginx has a similar feature.
https://nginx.org/en/docs/ngx_core_module.html#worker_processes
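
For reference, the analogous nginx directive is just:

  worker_processes auto;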

@ac000 (Member Author) commented Aug 13, 2024

It seems like we will need to allow the listen(2) backlog to be configurable. Would that perhaps go under a 'router' settings? Seeing as it's the router process/threads that do the listening...

It depends on whether it's a global option; I feel the backlog is bound to a specific listener.

Yes, the listen backlog is per listen(2) call. However I'm not sure why you'd want it to be different in each of the threads...

@hongzhidao (Contributor)

It seems like we will need to allow the listen(2) backlog to be configurable. Would that perhaps go under a 'router' settings? Seeing as it's the router process/threads that do the listening...

It depends on whether it's a global option; I feel the backlog is bound to a specific listener.

Yes, the listen backlog is per listen(2) call. However I'm not sure why you'd want it to be different in each of the threads...

I only mean it belongs to listen and should be the same between threads.

@ac000 (Member Author) commented Aug 13, 2024

Yes, the listen backlog is per listen(2) call. However I'm not sure why you'd want it to be different in each of the threads...

I only mean it belongs to listen and should be the same between threads.

Exactly!

Which is why I suggested to put it under /settings/router...

@ac000 (Member Author) commented Aug 13, 2024

Hi @ac000, Considering #1383, we could extend the option to make the feature more general and powerful. For example:

    "settings": {
          "router": {
              "threads": "the number of available CPU numbers"
          }
      },

I found nginx has a similar feature. https://nginx.org/en/docs/ngx_core_module.html#worker_processes

Currently we try to do "auto" by default, but there are cases where it's wrong. With #1383 there are more cases where it's correct. But there are still cases where it could be wrong which is when you'd use this setting.

IOW

"threads": "the number of available CPU numbers"

That is essentially how it is now. But can be wrong in certain cases.

Or did I misunderstand?

@hongzhidao (Contributor)

There might be some confusion between the issues, so let me lay out my thoughts again.

  1. About the listen backlog. The option is per-listener, for example:
{
    "listeners": {
        "*:80": {
            "backlog": 511
       }
    }
}

It's independent of the feature of router threads.

  2. Configurable threads in the router process.
{
    "settings": {
         "router": { "threads": 3 }
    }
}
or 
{
   "settings": {
        "listen_threads": 3
   }
}

Anyway, it's inside the "settings" object.

Let me know if I missed something.

@hongzhidao (Contributor) commented Aug 13, 2024

Currently we try to do "auto" by default

It seems a default based on the available CPU count makes more sense than one based on the total CPU count.
That's what you changed.

The other solution is to keep the default value as it is and introduce a new feature that is configurable.
So it will be like this:

  1. Make the number of router threads configurable
  2. There is no "Better available CPU count determination on Linux", but using a specific string to simplify the option.
    For example:
{
    "threads": "auto" /* auto means the available number */
}

The advantage is users can change the available CPU number at any time without recompiling Unit.
The difference is to move the code from lib init to the configuration phase.
3. Introduce the ability to specify CPU affinity for router threads; I feel this feature can improve Unit performance.
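
(Point 3 would be separate future work; purely to illustrate the mechanism involved, not Unit code, pinning a thread to a CPU on Linux looks roughly like this:)

/*
 * Illustration only, not Unit code: pin the calling thread to a single
 * CPU using the GNU pthread_setaffinity_np() extension.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

static int
pin_to_cpu(int cpu)
{
    cpu_set_t  set;

    CPU_ZERO(&set);
    CPU_SET(cpu, &set);

    /* Returns 0 on success, an errno value on failure. */
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}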

@ac000 (Member Author) commented Aug 13, 2024

There might be some confusion between the issues, so let me lay out my thoughts again.

1. About the listen backlog. The option is per-listener, for example:
{
    "listeners": {
        "*:80": {
            "backlog": 511
       }
    }
}

OK, yes, that does make sense...

It's independent of the feature of router threads.

Correct.

@ac000 (Member Author) commented Aug 13, 2024

Currently we try to do "auto" by default

It seems a default based on the available CPU count makes more sense than one based on the total CPU count. That's what you changed.

You've lost me.

All this patch does is allow you to set the number of router threads that are created.

The other solution is to keep the default value as it is and introduce a new feature that is configurable. So it will be like this:

1. Make the number of router threads configurable

Which is exactly what this patch does!

2. There is no "Better available CPU count determination on Linux", but using a specific string to simplify the option.
   For example:

Not sure I understand.

Are you referring to #1383 ?

Sure, with this patch we technically wouldn't need it, but why not try to do better by default? So I still want it... we can do similar with the BSDs I think also...

{
    "threads": "auto" /* auto means the available number */
}

That's how it is now! We try to determine how many threads to create via sysconf(_SC_NPROCESSORS_ONLN).

The advantage is users can change the available CPU number at any time without recompiling Unit. The difference is to

Yes, that's exactly what this patch allows!

move the code from lib init to the configuration phase. 3. Introduce the ability to specify CPU affinity for router threads; I feel this feature can improve Unit performance.

Yes, that could be done as future work, but is a separate issue from this patch...

@hongzhidao (Contributor) commented Aug 13, 2024

Are you referring to #1383 ?
Sure, with this patch we technically wouldn't need it, but why not try to do better by default? So I still want it... we can do similar with the BSDs I think also...

I'm fine with either way. I mean setting the default from the total CPU count or from the available CPU count.

@ac000 (Member Author) commented Aug 13, 2024

Are you referring to #1383 ?
Sure, with this patch we technically wouldn't need it, but why not try to do better by default? So I still want it... we can do similar with the BSDs I think also...

I'm fine with either way. I mean setting the default from the total CPU count or from the available CPU count.

If by "total CPUs" you mean what is returned by _SC_NPROCESSORS_ONLN, then that is precisely the problem!

It doesn't take into account CPU allowed masks and so can give drastically wrong answers...
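
For example (a stand-alone sketch, not Unit code), the gap is easy to see when a process is restricted with taskset or a cpuset:

/*
 * Sketch only, not Unit code: compare the online CPU count with the
 * number of CPUs this process is actually allowed to run on.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    long       online;
    cpu_set_t  set;

    online = sysconf(_SC_NPROCESSORS_ONLN);

    CPU_ZERO(&set);

    if (sched_getaffinity(0, sizeof(set), &set) == 0) {
        printf("online CPUs: %ld, allowed CPUs: %d\n",
               online, CPU_COUNT(&set));
    }

    return 0;
}

Run under something like 'taskset -c 0,1', the "online" number stays at the full CPU count while the "allowed" number drops to 2.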

@ac000 (Member Author) commented Aug 13, 2024

Add missing threads validation check

$ git range-diff e93f1588...079ce2b2
1:  e93f1588 ! 1:  079ce2b2 router: Make the number of router threads configurable
    @@ src/nxt_conf_validation.c: static nxt_conf_vldt_object_t  nxt_conf_vldt_http_mem
     +    {
     +        .name       = nxt_string("threads"),
     +        .type       = NXT_CONF_VLDT_INTEGER,
    ++        .validator  = nxt_conf_vldt_threads,
     +    },
     +
     +    NXT_CONF_VLDT_END

@hongzhidao (Contributor) left a comment

The implementation looks good to me; what's left is to settle the usage with the team.

@ac000 (Member Author) commented Aug 13, 2024

The implementation looks good to me; what's left is to settle the usage with the team.

So to re-cap, we need to decide two things

  1. The name of the setting: "threads" vs "listeners_threads". I think the plural was right, as it's not a thread per listen(2); each thread can have multiple listeners.
  2. The location of the setting, whether it's directly under /settings or under a new /settings/router category.

Well, that's my understanding anyway...

@hongzhidao (Contributor) commented Aug 13, 2024

"listeners" inside "listeners_threads" means the created threads are working for "listeners". And the configuration is applied to the router process.
Btw, maybe "threads" is easy to understand.

@ac000 (Member Author) commented Aug 14, 2024

"listeners" inside "listeners_threads" means the created threads are working for "listeners". And the configuration is applied to the router process.

What I meant was that I think "listeners_threads" (which is what it was originally called) is more accurate than "listener_thread", if we went down that naming route.

Btw, maybe "threads" is easy to understand.

It is if we do it under /settings/router. Also remember that "router" is front and centre when doing a process listing:

andrew    18287 152794  18287  0    1 Aug13 pts/10   00:00:00 unit: main v1.33.0 [/opt/unit/sbin/unitd --no-daemon]
andrew    18289  18287  18289  0    1 Aug13 pts/10   00:00:00 unit: controller
andrew    18290  18287  18290  0    5 Aug13 pts/10   00:00:00 unit: router
andrew    18290  18287  18291  0    5 Aug13 pts/10   00:00:00 unit: router
andrew    18290  18287  18292  0    5 Aug13 pts/10   00:00:00 unit: router
andrew    18290  18287  18293  0    5 Aug13 pts/10   00:00:00 unit: router
andrew    18290  18287  18294  0    5 Aug13 pts/10   00:00:00 unit: router

The main concern I have about putting the setting directly under /settings is that if we ever want to add more router-related options and decide to create a new sub-settings category, e.g. /settings/router, then we'd have router-related settings in two different places. Or the /settings top level will simply become a dumping ground.

@callahad (Collaborator)

I think y'all were talking past each other earlier on. I believe there's agreement that:

  1. We should absolutely improve the default by merging "Better available CPU count determination on Linux" (#1383).
  2. For places where the default isn't optimal, we should provide configurability by merging this PR.

The remaining question is what we name the option and where we put it in the hierarchy.

And honestly, we shouldn't care too much. If what we choose ends up feeling wrong down the line, we can rename and alias it in a future release.

The current patch LGTM.

@callahad (Collaborator)

(all of the other naming proposals also LGTM. just pick one and run with it)

@ac000 (Member Author) commented Aug 15, 2024

@hongzhidao What's your preference for the name and location?

@hongzhidao (Contributor)

@hongzhidao What's your preference for the name and location?

I prefer the top location but can't find a good name. I'd like just "threads".

@ac000 (Member Author) commented Aug 15, 2024

@hongzhidao What's your preference for the name and location?

I prefer the top location but can't find a good name. I'd like just "threads".

OK, I'll put it under /settings

However, just "threads" is a little ambiguous there, so I'm just gonna call it "listen_threads"...
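
Once that's in, it can be set at runtime through the control API like any other setting, e.g. (the control socket path below is just an example and varies by install):

  $ curl -X PUT -d '2' --unix-socket /var/run/control.unit.sock \
        http://localhost/config/settings/listen_threads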

@ac000 (Member Author) commented Aug 15, 2024

  • Change the name and location of the threads setting to /settings/listen_threads
$ git range-diff --creation-factor=100 079ce2b2...1a87fd8c
1:  079ce2b2 ! 1:  1a87fd8c router: Make the number of router threads configurable
    @@ Commit message
         As it turns out it looks like it was intended to be made configurable
         but was just never hooked up to the config system.
     
    -    This adds a new '/settings/router/threads' config option which can be
    +    This adds a new '/settings/listen_threads' config option which can be
         set like
     
           {
    @@ Commit message
               },
     
               "settings": {
    -              "router": {
    -                  "threads": 2
    -              }
    +              "listen_threads": 2,
    +
    +              ...
               },
     
               ...
    @@ Commit message
           andrew   419832 419829 445145  0    5 03:31 pts/10   00:00:00 unit: router
           andrew   419832 419829 445146  0    5 03:31 pts/10   00:00:00 unit: router
     
    -    After, with a threads setting of 2.
    +    After, with a threads setting of 2
     
           $ ps -efL | grep router
           andrew   419832 419829 419832  0    3 Aug12 pts/10   00:00:00 unit: router
    @@ Commit message
         Signed-off-by: Andrew Clayton <[email protected]>
     
      ## src/nxt_conf_validation.c ##
    -@@ src/nxt_conf_validation.c: static nxt_int_t nxt_conf_vldt_js_module_element(nxt_conf_validation_t *vldt,
    +@@ src/nxt_conf_validation.c: static nxt_int_t nxt_conf_vldt_python_protocol(nxt_conf_validation_t *vldt,
    +     nxt_conf_value_t *value, void *data);
    + static nxt_int_t nxt_conf_vldt_python_prefix(nxt_conf_validation_t *vldt,
    +     nxt_conf_value_t *value, void *data);
    ++static nxt_int_t nxt_conf_vldt_listen_threads(nxt_conf_validation_t *vldt,
    ++    nxt_conf_value_t *value, void *data);
    + static nxt_int_t nxt_conf_vldt_threads(nxt_conf_validation_t *vldt,
    +     nxt_conf_value_t *value, void *data);
    + static nxt_int_t nxt_conf_vldt_thread_stack_size(nxt_conf_validation_t *vldt,
    +@@ src/nxt_conf_validation.c: static nxt_conf_vldt_object_t  nxt_conf_vldt_root_members[] = {
      
    - static nxt_conf_vldt_object_t  nxt_conf_vldt_setting_members[];
    - static nxt_conf_vldt_object_t  nxt_conf_vldt_http_members[];
    -+static nxt_conf_vldt_object_t  nxt_conf_vldt_router_members[];
    - static nxt_conf_vldt_object_t  nxt_conf_vldt_websocket_members[];
    - static nxt_conf_vldt_object_t  nxt_conf_vldt_static_members[];
    - static nxt_conf_vldt_object_t  nxt_conf_vldt_forwarded_members[];
    -@@ src/nxt_conf_validation.c: static nxt_conf_vldt_object_t  nxt_conf_vldt_setting_members[] = {
    + static nxt_conf_vldt_object_t  nxt_conf_vldt_setting_members[] = {
    +     {
    ++        .name       = nxt_string("listen_threads"),
    ++        .type       = NXT_CONF_VLDT_INTEGER,
    ++        .validator  = nxt_conf_vldt_listen_threads,
    ++    }, {
    +         .name       = nxt_string("http"),
              .type       = NXT_CONF_VLDT_OBJECT,
              .validator  = nxt_conf_vldt_object,
    -         .u.members  = nxt_conf_vldt_http_members,
    -+    }, {
    -+        .name       = nxt_string("router"),
    -+        .type       = NXT_CONF_VLDT_OBJECT,
    -+        .validator  = nxt_conf_vldt_object,
    -+        .u.members  = nxt_conf_vldt_router_members,
    - #if (NXT_HAVE_NJS)
    -     }, {
    -         .name       = nxt_string("js_module"),
    -@@ src/nxt_conf_validation.c: static nxt_conf_vldt_object_t  nxt_conf_vldt_http_members[] = {
    - };
    +@@ src/nxt_conf_validation.c: nxt_conf_vldt_python_prefix(nxt_conf_validation_t *vldt,
    +     return NXT_OK;
    + }
      
    ++static nxt_int_t
    ++nxt_conf_vldt_listen_threads(nxt_conf_validation_t *vldt,
    ++    nxt_conf_value_t *value, void *data)
    ++{
    ++    int64_t  threads;
    ++
    ++    threads = nxt_conf_get_number(value);
    ++
    ++    if (threads < 1) {
    ++        return nxt_conf_vldt_error(vldt, "The \"listen_threads\" number must "
    ++                                   "be equal to or greater than 1.");
    ++    }
    ++
    ++    if (threads > NXT_INT32_T_MAX) {
    ++        return nxt_conf_vldt_error(vldt, "The \"listen_threads\" number must "
    ++                                   "not exceed %d.", NXT_INT32_T_MAX);
    ++    }
    ++
    ++    return NXT_OK;
    ++}
    ++
      
    -+static nxt_conf_vldt_object_t  nxt_conf_vldt_router_members[] = {
    -+    {
    -+        .name       = nxt_string("threads"),
    -+        .type       = NXT_CONF_VLDT_INTEGER,
    -+        .validator  = nxt_conf_vldt_threads,
    -+    },
    -+
    -+    NXT_CONF_VLDT_END
    -+};
    -+
    -+
    - static nxt_conf_vldt_object_t  nxt_conf_vldt_websocket_members[] = {
    -     {
    -         .name       = nxt_string("read_timeout"),
    + static nxt_int_t
    + nxt_conf_vldt_threads(nxt_conf_validation_t *vldt, nxt_conf_value_t *value,
     
      ## src/nxt_router.c ##
     @@ src/nxt_router.c: nxt_router_conf_send(nxt_task_t *task, nxt_router_temp_conf_t *tmcf,
    @@ src/nxt_router.c: nxt_router_conf_send(nxt_task_t *task, nxt_router_temp_conf_t
      static nxt_conf_map_t  nxt_router_conf[] = {
          {
     -        nxt_string("listeners_threads"),
    -+        nxt_string("threads"),
    ++        nxt_string("listen_threads"),
              NXT_CONF_MAP_INT32,
              offsetof(nxt_router_conf_t, threads),
          },
    @@ src/nxt_router.c: nxt_router_conf_create(nxt_task_t *task, nxt_router_temp_conf_
      #endif
          nxt_conf_value_t            *root, *conf, *http, *value, *websocket;
     -    nxt_conf_value_t            *applications, *application;
    -+    nxt_conf_value_t            *applications, *application, *router_conf;
    ++    nxt_conf_value_t            *applications, *application, *settings;
          nxt_conf_value_t            *listeners, *listener;
          nxt_socket_conf_t           *skcf;
          nxt_router_conf_t           *rtcf;
    @@ src/nxt_router.c: nxt_router_conf_create(nxt_task_t *task, nxt_router_temp_conf_
          nxt_router_app_conf_t       apcf;
          nxt_router_listener_conf_t  lscf;
      
    -+    static const nxt_str_t  router_path = nxt_string("/settings/router");
    ++    static const nxt_str_t  settings_path = nxt_string("/settings");
          static const nxt_str_t  http_path = nxt_string("/settings/http");
          static const nxt_str_t  applications_path = nxt_string("/applications");
          static const nxt_str_t  listeners_path = nxt_string("/listeners");
    @@ src/nxt_router.c: nxt_router_conf_create(nxt_task_t *task, nxt_router_temp_conf_
     -    if (ret != NXT_OK) {
     -        nxt_alert(task, "root map error");
     -        return NXT_ERROR;
    -+    router_conf = nxt_conf_get_path(root, &router_path);
    -+    if (router_conf != NULL) {
    -+        ret = nxt_conf_map_object(mp, router_conf, nxt_router_conf,
    ++    settings = nxt_conf_get_path(root, &settings_path);
    ++    if (settings != NULL) {
    ++        ret = nxt_conf_map_object(mp, settings, nxt_router_conf,
     +                                  nxt_nitems(nxt_router_conf), rtcf);
     +        if (ret != NXT_OK) {
     +            nxt_alert(task, "router_conf map error");

@callahad callahad added this to the 1.33 milestone Aug 19, 2024
Unit generally creates an extra number of router threads (to handle
client connections, not incl the main thread) to match the number of
available CPUs.

There are cases when this can go wrong, e.g on a high CPU count machine
and Unit is being effectively limited to a few CPUs via the cgroups cpu
controller. So Unit may create a large number of router threads when
they are only going to effectively run on a couple of CPUs or so.

There may be other cases where you would like to tweak the number of
router threads, depending on your workload.

As it turns out it looks like it was intended to be made configurable
but was just never hooked up to the config system.

This adds a new '/settings/listen_threads' config option which can be
set like

  {
      "listen": {
          ...
      },

      "settings": {
          "listen_threads": 2,

          ...
      },

      ...
  }

Before this patch (on a four cpu system)

  $  ps -efL | grep router
  andrew   419832 419829 419832  0    5 Aug12 pts/10   00:00:00 unit: router
  andrew   419832 419829 419833  0    5 Aug12 pts/10   00:00:00 unit: router
  andrew   419832 419829 419834  0    5 Aug12 pts/10   00:00:00 unit: router
  andrew   419832 419829 445145  0    5 03:31 pts/10   00:00:00 unit: router
  andrew   419832 419829 445146  0    5 03:31 pts/10   00:00:00 unit: router

After, with a threads setting of 2

  $ ps -efL | grep router
  andrew   419832 419829 419832  0    3 Aug12 pts/10   00:00:00 unit: router
  andrew   419832 419829 419833  0    3 Aug12 pts/10   00:00:00 unit: router
  andrew   419832 419829 419834  0    3 Aug12 pts/10   00:00:00 unit: router

Closes: nginx#1042
Signed-off-by: Andrew Clayton <[email protected]>
@ac000 (Member Author) commented Aug 19, 2024

Rebased with master.

@ac000 ac000 merged commit 57c88fd into nginx:master Aug 19, 2024
24 checks passed
@ac000 ac000 deleted the rthreads branch August 19, 2024 22:42

Successfully merging this pull request may close these issues.

How to manually set the number of router threads