`kubectl logs counter` not showing any output following official Kubernetes example
I am not able to see any log output when deploying a very simple Pod:



myconfig.yaml:



apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c,
           'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']


Then I run:



kubectl apply -f myconfig.yaml


This example comes from the official logging tutorial: https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes



The pod appears to be running fine:



kubectl describe pod counter
Name:           counter
Namespace:      default
Node:           ip-10-0-0-43.ec2.internal/10.0.0.43
Start Time:     Tue, 20 Nov 2018 12:05:07 -0500
Labels:         <none>
Annotations:    kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"counter","namespace":"default"},"spec":{"containers":[{"args":["/bin/sh","-c","i=0...
Status:         Running
IP:             10.0.0.81
Containers:
  count:
    Container ID:   docker://d2dfdb8644b5a6488d9d324c8c8c2d4637a460693012f35a14cfa135ab628303
    Image:          busybox
    Image ID:       docker-pullable://busybox@sha256:2a03a6059f21e150ae84b0973863609494aad70f0a80eaeb64bddd8d92465812
    Port:           <none>
    Host Port:      <none>
    Args:
      /bin/sh
      -c
      i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done
    State:          Running
      Started:      Tue, 20 Nov 2018 12:05:08 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r6tr6 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-r6tr6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-r6tr6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age  From                                Message
  ----    ------                 ---- ----                                -------
  Normal  Scheduled              16m  default-scheduler                   Successfully assigned counter to ip-10-0-0-43.ec2.internal
  Normal  SuccessfulMountVolume  16m  kubelet, ip-10-0-0-43.ec2.internal  MountVolume.SetUp succeeded for volume "default-token-r6tr6"
  Normal  Pulling                16m  kubelet, ip-10-0-0-43.ec2.internal  pulling image "busybox"
  Normal  Pulled                 16m  kubelet, ip-10-0-0-43.ec2.internal  Successfully pulled image "busybox"
  Normal  Created                16m  kubelet, ip-10-0-0-43.ec2.internal  Created container
  Normal  Started                16m  kubelet, ip-10-0-0-43.ec2.internal  Started container


Nothing appears when running:



kubectl logs counter --follow=true
Tags: kubernetes amazon-eks
edited Nov 20 '18 at 22:45 by Rico
asked Nov 20 '18 at 17:23 by seenickcode
5 Answers
The only thing I can think of that would keep the logs from showing up is if the default logging driver for Docker was changed in the /etc/docker/daemon.json config file on the node where your pod is running:

{
  "log-driver": "anything-but-json-file"
}

That would essentially stop Docker from capturing stdout/stderr in the place kubectl expects, so something like kubectl logs <podid> -c <containerid> returns nothing. You can check what's configured for the container in your pod on your node (10.0.0.43):

$ docker inspect -f '{{.HostConfig.LogConfig.Type}}' <container-id>
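If you have shell access to the node, the same check can be scripted. A minimal sketch (the sample config string stands in for the contents of /etc/docker/daemon.json so the snippet is self-contained; on a real node you would read the file instead):

```shell
# Stand-in for the contents of /etc/docker/daemon.json (illustrative value):
cfg='{"log-driver": "journald"}'

# Extract the configured driver; Docker defaults to json-file when the key is absent.
driver=$(printf '%s' "$cfg" | python3 -c 'import json,sys; print(json.load(sys.stdin).get("log-driver", "json-file"))')

# kubectl logs reads the json-file output on dockershim nodes,
# so any other driver would explain empty logs.
if [ "$driver" != "json-file" ]; then
  echo "kubectl logs will be empty: log-driver is $driver"
fi
```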
answered Nov 20 '18 at 23:03 by Rico
          • I'm trying to shell into a running container but I can't even do that, it just hangs when I run: kubectl exec -it <my pod name> -- /bin/bash ... I get: "Error from server: error dialing backend: dial tcp 10.0.0.43:10250: getsockopt: connection timed out"

            – seenickcode
            Nov 21 '18 at 14:22













          • I found the issue. The AWS tutorial here docs.aws.amazon.com/eks/latest/userguide/getting-started.html cites CloudFormation templates that fail to set the required security groups so that one can properly see logs. I basically opened up all traffic and ports for my k8s worker nodes (EC2 instances) and things work now. I'm now wondering what the "best practice" is for security groups for worker nodes. Maybe you can offer some insight there?

            – seenickcode
            Nov 21 '18 at 15:17











          • Ahh makes sense! with no firewall access, you can't proxy to the container.

            – Rico
            Nov 21 '18 at 15:23
I followed seenickcode's comment and got it working.

I found the new CloudFormation template for 1.10.11 or 1.11.5 (the current versions on AWS) useful to compare with my stack.

Here is what I learned:

1. Allow ports 1025-65535 from the cluster security group to the worker nodes.
2. Allow port 443 egress from the control plane to the worker nodes.

Then kubectl logs started to work.

Sample CloudFormation template updates:

  NodeSecurityGroupFromControlPlaneIngress:
    Type: AWS::EC2::SecurityGroupIngress
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow worker Kubelets and pods to receive communication from the cluster control plane
      GroupId: !Ref NodeSecurityGroup
      SourceSecurityGroupId: !Ref ControlPlaneSecurityGroup
      IpProtocol: tcp
      FromPort: 1025
      ToPort: 65535

Also

  ControlPlaneEgressToNodeSecurityGroupOn443:
    Type: AWS::EC2::SecurityGroupEgress
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow the cluster control plane to communicate with pods running extension API servers on port 443
      GroupId:
        Ref: ControlPlaneSecurityGroup
      DestinationSecurityGroupId:
        Ref: NodeSecurityGroup
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443
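For reference, kubectl logs and kubectl exec specifically need the control plane to reach the kubelet API on the worker nodes, port 10250 (the same port that appears in the dial-timeout error in the comments above). A narrower rule than the broad 1025-65535 range, sketched in the same CloudFormation style (the resource name is illustrative):

```yaml
  NodeSecurityGroupFromControlPlaneKubeletIngress:
    Type: AWS::EC2::SecurityGroupIngress
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow the control plane to reach the kubelet API (used by kubectl logs/exec)
      GroupId: !Ref NodeSecurityGroup
      SourceSecurityGroupId: !Ref ControlPlaneSecurityGroup
      IpProtocol: tcp
      FromPort: 10250
      ToPort: 10250
```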
            Use this:



            $ kubectl logs -f counter --namespace default
            • That didn't work I'm afraid.

              – seenickcode
              Nov 20 '18 at 17:59











            • But that should work. Actually my command is same as yours.

              – Shudipta Sharma
              Nov 20 '18 at 18:07













            • It is working fine in my cluster for the pod yaml you attached.

              – Shudipta Sharma
              Nov 20 '18 at 18:13











            • are you using AWS EKS?

              – seenickcode
              Nov 20 '18 at 21:00
The error you mentioned in the comments indicates that your kubelet process is either not running or keeps restarting.

ss -tnpl | grep 10250
LISTEN 0 128 :::10250 :::* users:(("kubelet",pid=1102,fd=21))

Run the command above and see whether the pid changes within some interval.

Also check /var/log/messages for any node-related issues. Hope this helps.
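The pid check can be scripted. A small sketch that extracts the pid from the ss output (shown on a captured sample line so it runs anywhere; on a real node you would pipe in the output of `ss -tnpl | grep 10250` instead):

```shell
# Sample line as captured from `ss -tnpl | grep 10250` on the node:
line='LISTEN 0 128 :::10250 :::* users:(("kubelet",pid=1102,fd=21))'

# Pull out the pid= field; run this twice a few seconds apart --
# a changing pid means kubelet is crash-looping.
pid=$(printf '%s' "$line" | sed -n 's/.*pid=\([0-9]*\).*/\1/p')
echo "kubelet pid: $pid"
```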
              I found the issue. The AWS tutorial here docs.aws.amazon.com/eks/latest/userguide/getting-started.html cites CloudFormation templates that fail to set the required security groups so that one can properly see logs. I basically opened up all traffic and ports for my k8s worker nodes (EC2 instances) and things work now.






              share|improve this answer























                Your Answer






                StackExchange.ifUsing("editor", function () {
                StackExchange.using("externalEditor", function () {
                StackExchange.using("snippets", function () {
                StackExchange.snippets.init();
                });
                });
                }, "code-snippets");

                StackExchange.ready(function() {
                var channelOptions = {
                tags: "".split(" "),
                id: "1"
                };
                initTagRenderer("".split(" "), "".split(" "), channelOptions);

                StackExchange.using("externalEditor", function() {
                // Have to fire editor after snippets, if snippets enabled
                if (StackExchange.settings.snippets.snippetsEnabled) {
                StackExchange.using("snippets", function() {
                createEditor();
                });
                }
                else {
                createEditor();
                }
                });

                function createEditor() {
                StackExchange.prepareEditor({
                heartbeatType: 'answer',
                autoActivateHeartbeat: false,
                convertImagesToLinks: true,
                noModals: true,
                showLowRepImageUploadWarning: true,
                reputationToPostImages: 10,
                bindNavPrevention: true,
                postfix: "",
                imageUploader: {
                brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
                contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
                allowUrls: true
                },
                onDemand: true,
                discardSelector: ".discard-answer"
                ,immediatelyShowMarkdownHelp:true
                });


                }
                });














                draft saved

                draft discarded


















                StackExchange.ready(
                function () {
                StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f53398316%2fkubectl-logs-counter-not-showing-any-output-following-official-kubernetes-exam%23new-answer', 'question_page');
                }
                );

                Post as a guest















                Required, but never shown

























                5 Answers
                5






                active

                oldest

                votes








                5 Answers
                5






                active

                oldest

                votes









                active

                oldest

                votes






                active

                oldest

                votes









                1














                The only thing I can think of that may be causing it to not output the logs is if you configured the default logging driver for Docker in your /etc/docker/docker.json config file for the node where your pod is running:



                {
                "log-driver": "anything-but-json-file",
                }


                That would essentially make Docker, not output stdout/stderr logs for something like kubectl logs <podid> -c <containerid>. You can take a look at what's configured in the container in your pod in your node (10.0.0.43):



                $ docker inspect -f '{{.HostConfig.LogConfig.Type}}' <container-id>





                share|improve this answer
























                • I'm trying to shell into a running container but I can't even do that, it just hangs when I run: kubectl exec -it <my pod name> -- /bin/bash ... I get: "Error from server: error dialing backend: dial tcp 10.0.0.43:10250: getsockopt: connection timed out"

                  – seenickcode
                  Nov 21 '18 at 14:22













                • I found the issue. The AWS tutorial here docs.aws.amazon.com/eks/latest/userguide/getting-started.html cites CloudFormation templates that fail to set the required security groups so that one can properly see logs. I basically opened up all traffic and ports for my k8s worker nodes (EC2 instances) and things work now. I'm now wondering what the "best practice" is for security groups for worker nodes. Maybe you can offer some insight there?

                  – seenickcode
                  Nov 21 '18 at 15:17











                • Ahh makes sense! with no firewall access, you can't proxy to the container.

                  – Rico
                  Nov 21 '18 at 15:23
















                1














                The only thing I can think of that may be causing it to not output the logs is if you configured the default logging driver for Docker in your /etc/docker/docker.json config file for the node where your pod is running:



                {
                "log-driver": "anything-but-json-file",
                }


                That would essentially make Docker, not output stdout/stderr logs for something like kubectl logs <podid> -c <containerid>. You can take a look at what's configured in the container in your pod in your node (10.0.0.43):



                $ docker inspect -f '{{.HostConfig.LogConfig.Type}}' <container-id>





                share|improve this answer
























                • I'm trying to shell into a running container but I can't even do that, it just hangs when I run: kubectl exec -it <my pod name> -- /bin/bash ... I get: "Error from server: error dialing backend: dial tcp 10.0.0.43:10250: getsockopt: connection timed out"

                  – seenickcode
                  Nov 21 '18 at 14:22













                • I found the issue. The AWS tutorial here docs.aws.amazon.com/eks/latest/userguide/getting-started.html cites CloudFormation templates that fail to set the required security groups so that one can properly see logs. I basically opened up all traffic and ports for my k8s worker nodes (EC2 instances) and things work now. I'm now wondering what the "best practice" is for security groups for worker nodes. Maybe you can offer some insight there?

                  – seenickcode
                  Nov 21 '18 at 15:17











                • Ahh makes sense! with no firewall access, you can't proxy to the container.

                  – Rico
                  Nov 21 '18 at 15:23














                1












                1








                1







                The only thing I can think of that may be causing it to not output the logs is if you configured the default logging driver for Docker in your /etc/docker/docker.json config file for the node where your pod is running:



                {
                "log-driver": "anything-but-json-file",
                }


                That would essentially make Docker, not output stdout/stderr logs for something like kubectl logs <podid> -c <containerid>. You can take a look at what's configured in the container in your pod in your node (10.0.0.43):



                $ docker inspect -f '{{.HostConfig.LogConfig.Type}}' <container-id>





                share|improve this answer













                The only thing I can think of that may be causing it to not output the logs is if you configured the default logging driver for Docker in your /etc/docker/docker.json config file for the node where your pod is running:



                {
                "log-driver": "anything-but-json-file",
                }


                That would essentially make Docker, not output stdout/stderr logs for something like kubectl logs <podid> -c <containerid>. You can take a look at what's configured in the container in your pod in your node (10.0.0.43):



                $ docker inspect -f '{{.HostConfig.LogConfig.Type}}' <container-id>






                share|improve this answer












                share|improve this answer



                share|improve this answer










                answered Nov 20 '18 at 23:03









                RicoRico

                28.6k95167




                28.6k95167













                • I'm trying to shell into a running container but I can't even do that, it just hangs when I run: kubectl exec -it <my pod name> -- /bin/bash ... I get: "Error from server: error dialing backend: dial tcp 10.0.0.43:10250: getsockopt: connection timed out"

                  – seenickcode
                  Nov 21 '18 at 14:22













                • I found the issue. The AWS tutorial here docs.aws.amazon.com/eks/latest/userguide/getting-started.html cites CloudFormation templates that fail to set the required security groups so that one can properly see logs. I basically opened up all traffic and ports for my k8s worker nodes (EC2 instances) and things work now. I'm now wondering what the "best practice" is for security groups for worker nodes. Maybe you can offer some insight there?

                  – seenickcode
                  Nov 21 '18 at 15:17











                • Ahh makes sense! with no firewall access, you can't proxy to the container.

                  – Rico
                  Nov 21 '18 at 15:23



















                • I'm trying to shell into a running container but I can't even do that, it just hangs when I run: kubectl exec -it <my pod name> -- /bin/bash ... I get: "Error from server: error dialing backend: dial tcp 10.0.0.43:10250: getsockopt: connection timed out"

                  – seenickcode
                  Nov 21 '18 at 14:22













                • I found the issue. The AWS tutorial here docs.aws.amazon.com/eks/latest/userguide/getting-started.html cites CloudFormation templates that fail to set the required security groups so that one can properly see logs. I basically opened up all traffic and ports for my k8s worker nodes (EC2 instances) and things work now. I'm now wondering what the "best practice" is for security groups for worker nodes. Maybe you can offer some insight there?

                  – seenickcode
                  Nov 21 '18 at 15:17











                • Ahh makes sense! with no firewall access, you can't proxy to the container.

                  – Rico
                  Nov 21 '18 at 15:23

















                I'm trying to shell into a running container but I can't even do that, it just hangs when I run: kubectl exec -it <my pod name> -- /bin/bash ... I get: "Error from server: error dialing backend: dial tcp 10.0.0.43:10250: getsockopt: connection timed out"

                – seenickcode
                Nov 21 '18 at 14:22







                I'm trying to shell into a running container but I can't even do that, it just hangs when I run: kubectl exec -it <my pod name> -- /bin/bash ... I get: "Error from server: error dialing backend: dial tcp 10.0.0.43:10250: getsockopt: connection timed out"

                – seenickcode
                Nov 21 '18 at 14:22















                I found the issue. The AWS tutorial here docs.aws.amazon.com/eks/latest/userguide/getting-started.html cites CloudFormation templates that fail to set the required security groups so that one can properly see logs. I basically opened up all traffic and ports for my k8s worker nodes (EC2 instances) and things work now. I'm now wondering what the "best practice" is for security groups for worker nodes. Maybe you can offer some insight there?

                – seenickcode
                Nov 21 '18 at 15:17





                I found the issue. The AWS tutorial here docs.aws.amazon.com/eks/latest/userguide/getting-started.html cites CloudFormation templates that fail to set the required security groups so that one can properly see logs. I basically opened up all traffic and ports for my k8s worker nodes (EC2 instances) and things work now. I'm now wondering what the "best practice" is for security groups for worker nodes. Maybe you can offer some insight there?

                – seenickcode
                Nov 21 '18 at 15:17













                Ahh makes sense! with no firewall access, you can't proxy to the container.

                – Rico
                Nov 21 '18 at 15:23





                Ahh makes sense! with no firewall access, you can't proxy to the container.

                – Rico
                Nov 21 '18 at 15:23













                1














                I followed Seenickode's comment and i got it working.



                I found the new cloudformation template for 1.10.11 or 1.11.5 (current version in aws) useful to compare with my stack.



                Here is what i learned:




                1. Allowed ports 1025 - 65535 from cluster security group to worker nodes.

                2. Allowed port 443 Egress from Control Plane to Worker Nodes.


                Then the kubectl logs started to work.



                Sample Cloudformation template updates here:



                  NodeSecurityGroupFromControlPlaneIngress:
                Type: AWS::EC2::SecurityGroupIngress
                DependsOn: NodeSecurityGroup
                Properties:
                Description: Allow worker Kubelets and pods to receive communication from the cluster control plane
                GroupId: !Ref NodeSecurityGroup
                SourceSecurityGroupId: !Ref ControlPlaneSecurityGroup
                IpProtocol: tcp
                FromPort: 1025
                ToPort: 65535


                Also



                  ControlPlaneEgressToNodeSecurityGroupOn443:
                Type: AWS::EC2::SecurityGroupEgress
                DependsOn: NodeSecurityGroup
                Properties:
                Description: Allow the cluster control plane to communicate with pods running extension API servers on port 443
                GroupId:
                Ref: ControlPlaneSecurityGroup
                DestinationSecurityGroupId:
                Ref: NodeSecurityGroup
                IpProtocol: tcp
                FromPort: 443
                ToPort: 443





                answered Jan 26 at 22:50 by vpack














                        Use this:



                        $ kubectl logs -f counter --namespace default





                        • That didn't work I'm afraid.

                          – seenickcode
                          Nov 20 '18 at 17:59











                        • But that should work. Actually, my command is the same as yours.

                          – Shudipta Sharma
                          Nov 20 '18 at 18:07













                        • It is working fine in my cluster for the pod yaml you attached.

                          – Shudipta Sharma
                          Nov 20 '18 at 18:13











                        • Are you using AWS EKS?

                          – seenickcode
                          Nov 20 '18 at 21:00
















                        answered Nov 20 '18 at 17:27 by Shudipta Sharma













                        The error you mentioned in the comment is an indication that your kubelet process is either not running or keeps restarting.



                        ss -tnpl | grep 10250
                        LISTEN 0 128 :::10250 :::* users:(("kubelet",pid=1102,fd=21))


                        Run the above command a few times and see whether the PID changes between runs.



                        Also, check /var/log/messages for any node-related issues. Hope this helps.
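The PID check above can be scripted. A small sketch; the sample line is the captured ss output from this answer, standing in for a live node where you would pipe `ss -tnpl | grep 10250` instead:

```shell
# Extract the kubelet PID from an ss listing so it can be compared across runs.
# The sample line below is captured output, used here in place of a live node.
line='LISTEN 0 128 :::10250 :::* users:(("kubelet",pid=1102,fd=21))'
pid=$(printf '%s\n' "$line" | sed -n 's/.*pid=\([0-9]*\).*/\1/p')
echo "kubelet pid: $pid"
# Run this periodically; a PID that keeps changing means kubelet is crash-looping.
```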






                            answered Nov 21 '18 at 14:38 by Prafull Ladha























                                I found the issue. The AWS tutorial at docs.aws.amazon.com/eks/latest/userguide/getting-started.html cites CloudFormation templates that fail to set the security groups required for kubectl logs to work. I opened up all traffic and ports for my k8s worker nodes (EC2 instances) and things work now.






                                    answered Nov 21 '18 at 15:17 by seenickcode





























