Prevent Kubernetes from breaking (kubectl does not respond) when there are too many Pods


























Kubernetes breaks (no response from kubectl) when I have too many Pods running in the cluster (1000 pods).



There are more than enough resources (CPU and memory), so it seems to me that some controller is breaking down and cannot handle a large number of Pods.



The workload I need to run is massively parallelizable, hence the high number of Pods.



Actually, I would like to be able to run many times more than 1000 Pods, maybe even 100,000 Pods.



My Kubernetes master node is an AWS EC2 m4.xlarge instance.



My intuition tells me that it is the master node's network performance that is holding the cluster back. Is that right?



Any ideas?



Details:

I am running 1000 Pods in a Deployment.

When I run kubectl get deploy, it shows:



DESIRED  CURRENT  UP-TO-DATE  AVAILABLE
1000     1000     1000        458


Through my application-side DB, I can see that only 458 Pods are actually doing work.
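
A quick way to see where the remaining replicas are stuck is to group the Pods by status (a rough sketch; app=<my-app> stands in for the Deployment's label selector):

# count the Deployment's Pods per status (Running, Pending, ContainerCreating, ...)
kubectl get pods -l app=<my-app> --no-headers | awk '{print $3}' | sort | uniq -c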



When I run kops validate cluster, I receive this warning:



VALIDATION ERRORS
KIND             NAME                                                    MESSAGE
ComponentStatus  controller-manager                                      component is unhealthy
ComponentStatus  scheduler                                               component is unhealthy
Pod              kube-system/kube-controller-manager-<ip>.ec2.internal   kube-system pod "kube-controller-manager-<ip>.ec2.internal" is not healthy
Pod              kube-system/kube-scheduler-<ip>.ec2.internal            kube-system pod "kube-scheduler-<ip>.ec2.internal" is not healthy









      amazon-ec2 kubernetes kops






asked Nov 13 at 3:46, edited Nov 13 at 7:45
cryanbhu
          1 Answer
The fact that it takes a long time to list your pods is not really about your nodes; they will handle as many pods as their resources (CPU and memory) allow.

The issue you are seeing is more about the kube-apiserver being able to query and return a large number of pods or resources.

So the two contention points here are the kube-apiserver and etcd, where the state of everything in a Kubernetes cluster is stored. The more you optimize those two components, the faster you will get responses from, say, kubectl get pods. (Networking is another contention point, but only if you are issuing kubectl commands over a slow connection.)
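
A quick way to confirm where the latency is coming from (a rough sketch, assuming kubectl can still reach the apiserver at all):

# -v=6 makes kubectl log every HTTP request it sends, together with how long it took
kubectl get pods -v=6

# ask the apiserver whether its etcd backend reports healthy (the check path may vary by version)
kubectl get --raw /healthz/etcd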



          You can try:




• Setting up an HA external etcd cluster on fairly beefy machines with fast disks.

• Upgrading the machines where your kube-apiserver(s) live (see the sketch below).

• Following the additional guidelines described here.
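
For the second point, on a kops-managed cluster that could look roughly like this (a sketch; the instance group name and machine type below are examples, not values from your cluster):

kops get instancegroups              # find the master instance group's name
kops edit ig master-us-east-1a       # raise spec.machineType, e.g. to m4.4xlarge
kops update cluster --yes
kops rolling-update cluster --yes    # replace the master so it comes up at the new size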







answered Nov 13 at 6:44
Rico
• Thanks @Rico, I have updated the question with more error messages. Does it align with what you suspect?
  – cryanbhu
  Nov 13 at 7:45






• I would check the logs on those components, but it would make sense that if your kube-apiserver is overloaded, other components like the kube-controller-manager and the kube-scheduler are affected as well.
  – Rico
  Nov 13 at 15:16
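
For reference, one way to do that check (a sketch; the pod names come from the kops output in the question):

kubectl -n kube-system logs kube-controller-manager-<ip>.ec2.internal
kubectl -n kube-system logs kube-scheduler-<ip>.ec2.internal
kubectl -n kube-system get pods      # also shows whether the control-plane pods are restarting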










