Kops pause cluster should bring the EC2 instances of the cluster into a stopped state























I am really new to Kubernetes.

I have deployed Kubernetes using kops. My question is: how can I shut down my instances (not terminate them) so that my data, deployments and services are not lost?

Currently, after editing the instance groups (ig) of the master and nodes and setting the max and min instance sizes to 0 in the EC2 Auto Scaling groups, my instances end up terminated, which also makes me lose my pods and the data inside them.

How can I overcome this issue?










      kubernetes aws-cli kops






asked yesterday by Ashish Kamat · edited 19 hours ago by aurelius
























          1 Answer
You have actually answered this yourself: all that is required is to scale the instance groups down to 0.
Following this tutorial, the steps are:





• kops edit ig nodes – change minSize and maxSize to 0

• kops get ig – to get the name of the master instance group

• kops edit ig <master ig name> – change minSize and maxSize to 0

• kops update cluster --yes

• kops rolling-update cluster


After that you will see in EC2 that all of the cluster machines are terminated. When you want to start the cluster again, just repeat the steps but change the values to the desired number of machines (at least 1 master). The sketch below shows the whole pause/resume sequence.
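
A minimal command-line sketch of that sequence, assuming KOPS_STATE_STORE and the cluster name are already configured in your shell, and assuming a hypothetical master instance group named master-eu-central-1a (use whatever name kops get ig actually reports):

    # Pause: scale every instance group down to 0
    kops edit ig nodes                     # in the editor, set spec.minSize and spec.maxSize to 0
    kops get ig                            # list the instance groups and note the master ig name
    kops edit ig master-eu-central-1a      # hypothetical master ig name; set minSize/maxSize to 0
    kops update cluster --yes              # push the new sizes to the AWS Auto Scaling groups
    kops rolling-update cluster            # note: rolling-update previews by default; add --yes to apply

    # Resume: restore the original sizes and repeat the last two commands
    kops edit ig nodes                     # set minSize/maxSize back to the desired node count
    kops edit ig master-eu-central-1a      # restore at least 1 master
    kops update cluster --yes
    kops rolling-update cluster

The minSize and maxSize fields live under spec: in the InstanceGroup manifest that kops edit ig opens in your editor.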



I can confirm that all the pods, services and deployments were running again after scaling the cluster back to its initial size. In my case those were nginx pods and the hello-minikube pod from the Kubernetes documentation example. Did you miss any of these steps, so that it did not work in your case? Do you have an S3 bucket that stores the cluster state? You need to run these commands before creating the kops cluster:

          aws s3api create-bucket --bucket ... --region eu-central-1 --create-bucket-configuration LocationConstraint=eu-central-1
          aws s3api put-bucket-versioning --bucket ... --versioning-configuration Status=Enabled




          kops lets you manage your clusters even after installation. To do
          this, it must keep track of the clusters that you have created, along
          with their configuration, the keys they are using etc. This
          information is stored in an S3 bucket. S3 permissions are used to
          control access to the bucket.
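
As a hedged illustration of that last point, this is roughly how kops is usually pointed at the state bucket (the bucket and cluster names below are hypothetical placeholders, not values from the question):

    # Tell kops where the cluster state lives; most kops commands read this variable
    export KOPS_STATE_STORE=s3://my-kops-state-bucket

    # Sanity check that kops can read the stored cluster configuration
    kops get cluster
    kops get ig --name my-cluster.example.com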




Screenshot: the EC2 console after scaling down to 0.

Screenshot: the EC2 console after scaling back up.






answered yesterday by aurelius






























                     
