Prevent Kubernetes from breaking (kubectl does not respond) when there are too many Pods
Kubernetes breaks (no response from kubectl) when I have too many Pods running in the cluster (1,000 Pods).
There are more than enough resources (CPU and memory), so it seems to me that some kind of controller is breaking and unable to handle a large number of Pods.
The workload I need to run is massively parallelizable, hence the high number of Pods.
In fact, I would like to be able to run many times more than 1,000 Pods, maybe even 100,000.
My Kubernetes master node is an AWS EC2 m4.xlarge instance.
My intuition tells me that it is the master node's network performance that is holding the cluster back, but I am not sure.
Any ideas?
Details:
I am running 1000 Pods in a Deployment.
When I run kubectl get deploy, it shows:
DESIRED CURRENT UP-TO-DATE AVAILABLE
1000 1000 1000 458
Through my application-side DB, I can also see that only 458 Pods are actually working.
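(To cross-check that number against the cluster itself, counting Running Pods directly should work; the app=<my-app> label selector below is just a placeholder for whatever labels the Deployment uses:)

# Count Pods that are actually in the Running phase (label selector is a placeholder).
kubectl get pods -l app=<my-app> --field-selector=status.phase=Running --no-headers | wc -l
# List Pods stuck in other phases, along with their status.
kubectl get pods -l app=<my-app> --no-headers | grep -v Running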
When I run kops validate cluster, I receive this warning:
VALIDATION ERRORS
KIND             NAME                                                    MESSAGE
ComponentStatus  controller-manager                                      component is unhealthy
ComponentStatus  scheduler                                               component is unhealthy
Pod              kube-system/kube-controller-manager-<ip>.ec2.internal   kube-system pod "kube-controller-manager-<ip>.ec2.internal" is not healthy
Pod              kube-system/kube-scheduler-<ip>.ec2.internal            kube-system pod "kube-scheduler-<ip>.ec2.internal" is not healthy
amazon-ec2 kubernetes kops
asked Nov 13 at 3:46 by cryanbhu, edited Nov 13 at 7:45
1 Answer
The fact that it takes a long time to list your Pods is not really about your nodes; they will handle as many Pods as their resources (CPU and memory) allow.
The issue you are seeing is more about the kube-apiserver being able to query and return a large number of Pods or resources.
So the two contention points here are the kube-apiserver and etcd, where the state of everything in a Kubernetes cluster is stored. The more you optimize those two components, the faster you will get responses from, say, kubectl get pods.
(Networking is another contention point, but only if you are issuing kubectl commands over a slow connection.)
You can try:
- Setting up an HA external etcd cluster with pretty beefy machines and fast disks.
- Upgrading the machines where your kube-apiserver(s) live (a rough kops sketch follows below).
- Following the further guidelines described here.
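As a rough sketch of the first two points with kops (assuming KOPS_STATE_STORE is set; the instance group name, target instance size, and volume numbers are assumptions, adjust them to your cluster):

# Find the master instance group's actual name (it is cluster-specific).
kops get instancegroups --name <cluster-name>

# Bump the master machine type, e.g. m4.xlarge -> m4.4xlarge
# (the target size is a guess; size it based on your apiserver/etcd load).
kops edit ig master-us-east-1a --name <cluster-name>
#   spec:
#     machineType: m4.4xlarge

# etcd disk settings live in the cluster spec under etcdClusters -> etcdMembers,
# e.g. a bigger volume for the main etcd cluster:
kops edit cluster --name <cluster-name>
#   etcdClusters:
#   - name: main
#     etcdMembers:
#     - name: a
#       instanceGroup: master-us-east-1a
#       volumeType: gp2
#       volumeSize: 100

# Apply and roll out the change.
kops update cluster --name <cluster-name> --yes
kops rolling-update cluster --name <cluster-name> --yes

A truly external HA etcd cluster would mean running etcd on dedicated machines outside what kops manages, which is a bigger change than the spec tweaks sketched above.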
answered Nov 13 at 6:44 by Rico
Thanks, @Rico, I have updated with more error messages. Does it align with what you suspect? – cryanbhu, Nov 13 at 7:45
I would check the logs on those components, but it would make sense that if your kube-apiserver is overloaded, it would affect other components like the kube-controller-manager and the kube-scheduler. – Rico, Nov 13 at 15:16
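For reference, a minimal way to pull those component logs (the pod names reuse the <ip>.ec2.internal placeholders from the question):

kubectl -n kube-system logs kube-apiserver-<ip>.ec2.internal --tail=200
kubectl -n kube-system logs kube-controller-manager-<ip>.ec2.internal --tail=200
kubectl -n kube-system logs kube-scheduler-<ip>.ec2.internal --tail=200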