<p>I have a StatefulSet with 2 pods. It has a headless service and each pod has a LoadBalancer service that makes it available to the world. </p> <p>Let's say pod names are pod-0 and pod-1. </p> <p>If I want to delete pod-0 but keep pod-1 active, I am not able to do that. </p> <p>I tried</p> <pre><code>kubectl delete pod pod-0 </code></pre> <p>This deletes it but then restarts it because StatefulSet replica is set to 2. </p> <p>So I tried</p> <pre><code>kubectl delete pod pod-0 kubectl scale statefulset some-name --replicas=1 </code></pre> <p>This deletes pod-0, deletes pod-1 and then restarts pod-0. I guess because when replica is set to 1, StatefulSet wants to keep pod-0 active but not pod-1. </p> <p>But how do I keep pod-1 active and delete pod-0?</p>
<p>This is not supported by the StatefulSet controller. Probably the best you could do is try to create that pod yourself with a sleep shim and maybe you could be faster. But then the sts controller will just be unhappy forever.</p>
<p>I made an NFS file share and am using it in Kubernetes pods, but when I start the pods, it gives me this error:</p> <pre><code>2020-05-31 03:00:06+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.30-1debian10 started. chown: changing ownership of '/var/lib/mysql/': Operation not permitted </code></pre> <p>I searched the internet and understand that NFS by default maps remote root logins to the nfsnobody account, so this error can happen if the privileges are not correct, but I followed the steps and still have not solved it. These are the ways I have tried:</p> <p>1. Added the insecure config <code>no_root_squash</code> in <code>/etc/exports</code>:</p> <pre><code>/mnt/data/apollodb/apollopv *(rw,sync,no_subtree_check,no_root_squash) </code></pre> <p>2. Removed the PVC and PV and used NFS directly in the pod like this:</p> <pre><code>volumes: - name: apollo-mysql-persistent-storage nfs: server: 192.168.64.237 path: /mnt/data/apollodb/apollopv containers: - name: mysql image: 'mysql:5.7' ports: - name: mysql containerPort: 3306 protocol: TCP env: - name: MYSQL_ROOT_PASSWORD value: gfwge4LucnXwfefewegLwAd29QqJn4 resources: {} volumeMounts: - name: apollo-mysql-persistent-storage mountPath: /var/lib/mysql terminationMessagePath: /dev/termination-log terminationMessagePolicy: File imagePullPolicy: IfNotPresent restartPolicy: Always terminationGracePeriodSeconds: 30 dnsPolicy: ClusterFirst securityContext: {} schedulerName: default-scheduler </code></pre> <p>This tells me the problem is not in the pod definition but in the NFS config itself.</p> <p>3. Gave every privilege using this command:</p> <pre><code>chmod 777 /mnt/data/apollodb/apollopv </code></pre> <p>4. Chowned to nfsnobody like this:</p> <pre><code>sudo chown nfsnobody:nfsnobody -R apollodb/ sudo chown 999:999 -R apollodb </code></pre> <p>But the problem is still not solved, so what should I try to make it work?</p>
<p>You wouldn't set this via chown, you would use <code>fsGroup</code> security setting instead.</p>
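<p>A minimal sketch of what that could look like in the MySQL pod spec; the group ID here is an assumption (999 is the mysql group in the official image), and whether the ownership change is actually applied depends on the volume type, so treat this as a starting point rather than a drop-in fix:</p> <pre><code>spec:
  securityContext:
    fsGroup: 999            # assumed GID of the mysql user inside the image
  containers:
    - name: mysql
      image: 'mysql:5.7'
      volumeMounts:
        - name: apollo-mysql-persistent-storage
          mountPath: /var/lib/mysql
  volumes:
    - name: apollo-mysql-persistent-storage
      nfs:
        server: 192.168.64.237
        path: /mnt/data/apollodb/apollopv
</code></pre>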
<p>This question is regarding the Kubernetes behavior when a request needs more memory than is allocated to the pod containing a PHP app. If GC is not able to free memory and the request needs more memory, causing the pod to become unresponsive or the request to time out, will Kubernetes restart the pod by itself?</p> <p>In case swap memory is available, will Kubernetes try to use it before it restarts the pod?</p>
<p>Swap is not supported with Kubernetes and it will refuse to start if active. Request values for pod resources cannot be overcommitted so if a pod requests 1GB and no node has 1GB available (compared to total RAM minus the requests on all pods scheduled to that node) then the pod will remain unscheduled until something changes. Limits can be overcommitted, but Kubernetes is not involved in that process. It just sets the memory limit in the process cgroup. Out of memory handling works like it always does on Linux, if the kernel decides it is out of memory, then it triggers a thing called the <code>oomkiller</code> which ranks processes and kills the worst one.</p>
<p>I want to create external authentication for Service A, which listens to traffic on port <code>8080</code>. What I desire is to have a second container (<em>Service B</em>) running in the same pod as <em>Service A</em>, that intercepts, evaluates and (maybe) forwards the traffic coming in on port <code>8080</code>. "Maybe" means that <em>Service B</em> should filter out every request that is not authenticated.</p> <p><em>Service B</em> would be injected into every service that is getting deployed in order to ensure consistent authorisation and still keep the deployment process simple.</p> <p>(How) is this possible?</p>
<p>Look up Traefik's forward-auth mode or nginx's <code>auth_request</code> module. They both do what you are looking for. Or more generally this kind of thing is called an API gateway and there are many good ones.</p>
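<p>If you end up on ingress-nginx, the <code>auth_request</code> behavior is exposed through annotations. A rough sketch, where the auth service name and the <code>/verify</code> path are assumptions standing in for whatever Service B becomes once it is its own deployment:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: service-a
  annotations:
    kubernetes.io/ingress.class: nginx
    # every incoming request is first sent here; a non-2xx response rejects it
    nginx.ingress.kubernetes.io/auth-url: http://auth-service.default.svc.cluster.local:9000/verify
spec:
  rules:
    - host: service-a.example.com
      http:
        paths:
          - backend:
              serviceName: service-a
              servicePort: 8080
</code></pre>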
<p>I have a Openshift Route of type :</p> <pre><code>- apiVersion: route.openshift.io/v1 kind: Route metadata: name: &lt;Route name&gt; labels: app.kubernetes.io/name: &lt;name&gt; spec: host: &lt;hostname&gt; port: targetPort: http tls: termination: passthrough to: kind: Service name: &lt;serviceName&gt; </code></pre> <p>I want to convert it into a Ingress Object as there are no routes in bare k8s. I couldn't find any reference to tls termination as passthrough in Ingress documentation. Can someone please help me converting this to an Ingress object?</p>
<p>TLS passthrough is not officially part of the Ingress spec. Some specific ingress controllers support it, usually via non-standard TCP proxy modes. But what you probably want is a LoadBalancer-type service.</p>
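<p>A minimal sketch of the LoadBalancer approach, reusing the placeholders from the Route above; the selector is an assumption since the Route does not show how the Service selects its pods. The TLS connection is passed through untouched and terminated by the pod itself:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: &lt;serviceName&gt;-lb
spec:
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: http      # raw TLS is forwarded straight to the pods, no termination here
  selector:
    app.kubernetes.io/name: &lt;name&gt;
</code></pre>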
<p>I have set up an nginx ingres that routes traffic to specific deployments based on host. </p> <blockquote> <p><code>host A --&gt; Service A, host B --&gt; Service B</code></p> </blockquote> <p>If I update the config for Deployment A, that update completes in <em>less than 2 seconds</em>. However, my nginx ingress goes down for host A after that and takes <em>5 to 7 seconds</em> to point to Service A with new pod. </p> <p>How can I reduce this delay? Is there a way to speed up the performance of the nginx ingress so that it points to the new pod as soon as possible ( preferably less than 3 seconds?)</p> <p>Thank you!</p>
<p>You can use the <code>nginx.ingress.kubernetes.io/service-upstream</code> annotation to suppress the normal Endpoints behavior and use the Service directly instead. This has better integration with some deployment models but 5-7 seconds is extreme for ingress-nginx to be seeing the Endpoints update. There can be a short gap from when a pod is removed and when ingress-nginx sees the Endpoint removal. You usually fix that with a pre-stop hook that just sleeps for a few seconds to ensure by the time it actually exits, the Endpoint change has been processed everywhere.</p>
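<p>A sketch of both pieces, assuming ingress-nginx; the 5-second sleep is an arbitrary drain window you would tune to your rollout timing:</p> <pre><code># on the Ingress for host A
metadata:
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: 'true'
---
# in Deployment A's pod template
spec:
  containers:
    - name: app-a
      lifecycle:
        preStop:
          exec:
            command: ['sleep', '5']   # keep serving while the Endpoints change propagates
</code></pre>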
<p>Is there a way to enable caching between pods in a Kubernetes cluster? For example: let's say we have more than one pod running in high-availability mode, and we want to share some value between them using distributed caching between the pods. Is this possible?</p>
<p>There are some experimental projects to let you reuse the etcd that powers the cluster, but I probably wouldn’t. Just run your own using etcd-operator or something. The specifics will massively depend on what your exact use case and software is, distributed databases are among the most complex things ever.</p>
<p>I have a home Kubernetes cluster with multiple SSDs attached to one of the nodes. I currently have one persistent volume per mounted disk. Is there an easy way to create a persistent volume that can access data from multiple disks? I thought about symlinks but they don't seem to work.</p>
<p>You would have to combine them at a lower level. The simplest approach would be Linux LVM but there's a wide range of storage strategies. Kubernetes orchestrates mounting volumes but it's not a storage management solution itself, just the last-mile bits.</p>
<p>I have an image called <code>http</code> which has a file called <code>httpd-isec.conf</code>. I'd like to edit <code>httpd-isec.conf</code> before the image is started by kubernetes. Is that possible?</p> <p>Would <code>initContainers</code> and mounting the image work in some way? </p>
<p>This is not possible. Images are immutable. What you can do is use an init container to copy the file to an emptyDir volume, edit it, and then mount that volume over the original file in the main container.</p>
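<p>A rough sketch of that pattern; the config path and the <code>sed</code> edit are assumptions, since I don't know where <code>httpd-isec.conf</code> lives in your <code>http</code> image:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: http-edited-config
spec:
  volumes:
    - name: edited-conf
      emptyDir: {}
  initContainers:
    - name: edit-config
      image: http                      # same image, so the original file is available to copy
      command: ['sh', '-c']
      args:
        - |
          # assumed source path -- adjust to where the file lives in your image
          cp /etc/httpd/conf/httpd-isec.conf /work/httpd-isec.conf
          sed -i 's/old-value/new-value/' /work/httpd-isec.conf
      volumeMounts:
        - name: edited-conf
          mountPath: /work
  containers:
    - name: http
      image: http
      volumeMounts:
        - name: edited-conf
          mountPath: /etc/httpd/conf/httpd-isec.conf   # shadows the original file
          subPath: httpd-isec.conf
</code></pre>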
<p>As an example, if I define a CRD of <code>kind: Animal</code>, can I define a CRD, <code>Dog</code>, as a specific <code>type</code> of <code>Animal</code>? The <code>Dog</code> CRD would have a different/extended schema requirement than the base <code>Animal</code> CRD. </p> <p>My goal here is to be able to do a <code>kubectl get animals</code> and be able to see all the different <code>type</code>'s of <code>Animals</code> that were created. </p> <p>This seems to have been achieved by using the <code>type</code> metadata for certain resources like <code>Secret</code>, but I can't seem to find how to achieve this with a CRD. </p> <p><strong>Note</strong>: my real use case isn't around storing <code>Animals</code>, but, it's just a typical example from OOP.</p>
<p>No, this is not a feature of Kubernetes. All Secret objects are of the same Kind, and Type is just a string field.</p>
<p>I have an app that requires MS Office. This app is a Docker-containerized app and should run on a GCP Kubernetes cluster. How can I install MS Office in a Linux Docker container?</p>
<p>Via <a href="https://learn.microsoft.com/en-us/dotnet/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/when-not-to-deploy-to-windows-containers" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/dotnet/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/when-not-to-deploy-to-windows-containers</a> this was not possible as of April 2018.</p>
<p>I'm building an SPA using with React and Node.js on Kubernetes. I have separate services and ingresses for the front-end and back-end services. I've seen people also use Nginx to serve the React build, but I found that doing below works well. </p> <pre><code># Dockerfile.production FROM node:8.7.0-alpine RUN mkdir -p /usr/app/client WORKDIR /usr/app/client COPY package*.json /usr/app/client/ RUN npm install RUN npm install -g serve COPY . /usr/app/client EXPOSE 3000 RUN npm run build CMD ["serve", "-s", "build", "-l", "3000" ] </code></pre> <p>Alternatively, I could serve the build with Nginx like the following. This seems like "the right way" to do it, but I'm unsure what the advantage is over using the serve npm package, though it does feel very hacky to me. It seems like everything that could be configured with Nginx to serve the app could also be done in the Ingress, right?</p> <pre><code>server { server_name example.com; ... location ~ / { root /var/www/example.com/static; try_files $uri /index.html; } } </code></pre>
<p>Serve is fine. Nginx might use a few bytes less RAM for serving, but that will be cancelled out by carrying around all the extra features you aren't using. We use a similar Serve setup for a lot of our K8s SPAs and it uses between 60 and 100MB of RAM per pod at full load. For a few other apps we have a cut down version of Caddy and it maxes out around 70MB instead so slightly less but there are probably better ways to worry about 30MB of RAM :)</p>
<p>The idea is that instead of installing these scripts, they could perhaps be applied via YAML and run with access to kubectl and host tools to find potential issues with the running environment.</p> <p>I figure the pod would need special elevated permissions, etc. I'm not quite sure if there is an example or even a better way of accomplishing the same idea.</p> <p>Is there a way to package scripts in a container to run for diagnostic purposes against Kubernetes?</p>
<p>It's an Alpha feature and not recommended for production use, but check out the ephemeral containers system: <a href="https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/</a></p> <p>It's designed for exactly this, having a bundle of debugging tools that you can connect in to an existing file/pid namespace. However the feature is still incomplete as it is being added incrementally.</p>
<p>today while trying to run my pod , I discovered this error which we see in the describe events:</p> <pre><code># kubectl describe pod monitor-prometheus-alertmanager-c94f7b6b7-tg6vc -n monitoring Name: monitor-prometheus-alertmanager-c94f7b6b7-tg6vc Namespace: monitoring Priority: 0 Node: kube-worker-vm2/192.168.1.36 Start Time: Sun, 09 May 2021 20:42:57 +0100 Labels: app=prometheus chart=prometheus-13.8.0 component=alertmanager heritage=Helm pod-template-hash=c94f7b6b7 release=monitor Annotations: cni.projectcalico.org/podIP: 192.168.222.51/32 cni.projectcalico.org/podIPs: 192.168.222.51/32 Status: Running IP: 192.168.222.51 IPs: IP: 192.168.222.51 Controlled By: ReplicaSet/monitor-prometheus-alertmanager-c94f7b6b7 Containers: prometheus-alertmanager: Container ID: docker://0ce55357c5f32c6c66cdec3fe0aaaa06811a0a392d0329c989ac6f15426891ad Image: prom/alertmanager:v0.21.0 Image ID: docker-pullable://prom/alertmanager@sha256:24a5204b418e8fa0214cfb628486749003b039c279c56b5bddb5b10cd100d926 Port: 9093/TCP Host Port: 0/TCP Args: --config.file=/etc/config/alertmanager.yml --storage.path=/data --cluster.advertise-address=[$(POD_IP)]:6783 --web.external-url=http://localhost:9093 State: Running Started: Sun, 09 May 2021 20:52:33 +0100 Ready: False Restart Count: 0 Readiness: http-get http://:9093/-/ready delay=30s timeout=30s period=10s #success=1 #failure=3 Environment: POD_IP: (v1:status.podIP) Mounts: /data from storage-volume (rw) /etc/config from config-volume (rw) /var/run/secrets/kubernetes.io/serviceaccount from monitor-prometheus-alertmanager-token-kspg6 (ro) prometheus-alertmanager-configmap-reload: Container ID: docker://eb86ea355b820ddc578333f357666156dc5c5a3a53c63220ca00b98ffada5531 Image: jimmidyson/configmap-reload:v0.4.0 Image ID: docker-pullable://jimmidyson/configmap-reload@sha256:17d34fd73f9e8a78ba7da269d96822ce8972391c2838e08d92a990136adb8e4a Port: &lt;none&gt; Host Port: &lt;none&gt; Args: --volume-dir=/etc/config --webhook-url=http://127.0.0.1:9093/-/reload State: Running Started: Sun, 09 May 2021 20:44:59 +0100 Ready: True Restart Count: 0 Environment: &lt;none&gt; Mounts: /etc/config from config-volume (ro) /var/run/secrets/kubernetes.io/serviceaccount from monitor-prometheus-alertmanager-token-kspg6 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: monitor-prometheus-alertmanager Optional: false storage-volume: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: prometheus-pv-claim ReadOnly: false monitor-prometheus-alertmanager-token-kspg6: Type: Secret (a volume populated by a Secret) SecretName: monitor-prometheus-alertmanager-token-kspg6 Optional: false QoS Class: BestEffort Node-Selectors: boardType=x86vm Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9m54s default-scheduler Successfully assigned monitoring/monitor-prometheus-alertmanager-c94f7b6b7-tg6vc to kube-worker-vm2 Normal Pulled 7m53s kubelet Container image &quot;jimmidyson/configmap-reload:v0.4.0&quot; already present on machine Normal Created 7m52s kubelet Created container prometheus-alertmanager-configmap-reload Normal Started 7m52s kubelet Started container prometheus-alertmanager-configmap-reload Warning Failed 6m27s (x2 over 7m53s) kubelet Failed to pull 
image &quot;prom/alertmanager:v0.21.0&quot;: rpc error: code = Unknown desc = context canceled Warning Failed 5m47s (x3 over 7m53s) kubelet Error: ErrImagePull Warning Failed 5m47s kubelet Failed to pull image &quot;prom/alertmanager:v0.21.0&quot;: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Normal BackOff 5m11s (x6 over 7m51s) kubelet Back-off pulling image &quot;prom/alertmanager:v0.21.0&quot; Warning Failed 5m11s (x6 over 7m51s) kubelet Error: ImagePullBackOff Normal Pulling 4m56s (x4 over 9m47s) kubelet Pulling image &quot;prom/alertmanager:v0.21.0&quot; Normal Pulled 19s kubelet Successfully pulled image &quot;prom/alertmanager:v0.21.0&quot; in 4m36.445692759s </code></pre> <p>then I tried to ping first with google.com since it was working I wanted to check <a href="https://registry-1.docker.io/v2/" rel="nofollow noreferrer">https://registry-1.docker.io/v2/</a> so I tried to ping docker.io but I do not get ping result. what is causing this ?</p> <pre><code>osboxes@kube-worker-vm2:~$ ping google.com PING google.com (142.250.200.14) 56(84) bytes of data. 64 bytes from lhr48s29-in-f14.1e100.net (142.250.200.14): icmp_seq=10 ttl=117 time=35.8 ms 64 bytes from lhr48s29-in-f14.1e100.net (142.250.200.14): icmp_seq=11 ttl=117 time=11.9 ms 64 bytes from lhr48s29-in-f14.1e100.net (142.250.200.14): icmp_seq=12 ttl=117 time=9.16 ms 64 bytes from lhr48s29-in-f14.1e100.net (142.250.200.14): icmp_seq=13 ttl=117 time=11.2 ms 64 bytes from lhr48s29-in-f14.1e100.net (142.250.200.14): icmp_seq=14 ttl=117 time=12.1 ms ^C --- google.com ping statistics --- 14 packets transmitted, 5 received, 64% packet loss, time 13203ms rtt min/avg/max/mdev = 9.163/16.080/35.886/9.959 ms osboxes@kube-worker-vm2:~$ ping docker.io PING docker.io (35.169.217.170) 56(84) bytes of data. </code></pre>
<p>Because <code>docker.io</code> does not respond to pings, from anywhere.</p>
<p>I'm stuck on k8s log storage. We have logs that can't be output to stdout, but have to be saved to a directory. We want to save them to a GlusterFS shared directory like /data/logs/./xxx.log. Our apps are written in Java; how can we do that?</p>
<p>This is mostly up to your CRI plugin, usually Docker command line options. They already do write to local disk by default, you just need to mount your volume at the right place (probably /var/log/containers or similar, look at your Docker config).</p>
<p>I am doing <a href="https://kubernetes.io/docs/tutorials/stateless-application/guestbook/" rel="nofollow noreferrer">this exercise</a> on my vagrant built bare metal cluster on a windows machine.</p> <p>Was able to successfully run the app.</p> <p><a href="https://i.stack.imgur.com/VYVTb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VYVTb.png" alt="guestbook app" /></a></p> <p>But I am not able to connect to the database to see the data, say from mongo db compass.</p> <p><a href="https://i.stack.imgur.com/8G9Xi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8G9Xi.png" alt="connecting to mongodb using compass" /></a></p> <p>What should be the user id or password for this?</p> <p>After a bit of research, I used the following steps to get into the mongo container and verify the data. But I want to connect to the database using a client like compass.</p> <p>Used the following command to find where the mongo db backend database pod is deployed.</p> <pre><code>vagrant@kmasterNew:~/GuestBookMonog$ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES frontend-848d88c7c-95db6 1/1 Running 0 4m51s 192.168.55.11 kworkernew2 &lt;none&gt; &lt;none&gt; mongo-75f59d57f4-klmm6 1/1 Running 0 4m54s 192.168.55.10 kworkernew2 &lt;none&gt; &lt;none&gt; </code></pre> <p>Then ssh into that node and did</p> <pre><code>docker container ls </code></pre> <p>to find the mongo db container</p> <p>It looks something like this. I removed irrelevant data.</p> <pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 1ba3d05168ca dc77715107a9 &quot;docker-entrypoint.s…&quot; 53 minutes ago Up 53 minutes k8s_mongo_mongo-75f59d57f4-5tw5b_default_eeddf81b-8dde-4c3e-8505-e08229f97c8b_0 </code></pre> <p>A reference from <a href="https://stackoverflow.com/a/46645243/1977871">SO</a></p> <pre><code>docker exec -it 1ba3d05168ca bash </code></pre> <p>Another reference from <a href="https://stackoverflow.com/a/49450034/1977871">SO</a> in this context</p> <pre><code>mongo show dbs use guestbook show collections db.messages.find() </code></pre> <p>Finally I was able to verify the data</p> <pre><code>&gt; db.messages.find() { &quot;_id&quot; : ObjectId(&quot;6097f6c28088bc17f61bdc32&quot;), &quot;message&quot; : &quot;,message1&quot; } { &quot;_id&quot; : ObjectId(&quot;6097f6c58088bc17f61bdc33&quot;), &quot;message&quot; : &quot;,message1,message2&quot; } </code></pre> <p>But the question is how can I see this data from mongo db compass? I am exposing the both the frontend as well as the backend services using NodePort type. 
You can see them below.</p> <p>The follow are the k8s manifest files for the deployment that I got from the <a href="https://kubernetes.io/docs/tutorials/stateless-application/guestbook/" rel="nofollow noreferrer">above example</a>.</p> <p>Front end deployment</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: frontend labels: app.kubernetes.io/name: guestbook app.kubernetes.io/component: frontend spec: selector: matchLabels: app.kubernetes.io/name: guestbook app.kubernetes.io/component: frontend replicas: 1 template: metadata: labels: app.kubernetes.io/name: guestbook app.kubernetes.io/component: frontend spec: containers: - name: guestbook image: paulczar/gb-frontend:v5 # image: gcr.io/google-samples/gb-frontend:v4 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns ports: - containerPort: 80 </code></pre> <p>The front end service.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: frontend labels: app.kubernetes.io/name: guestbook app.kubernetes.io/component: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer type: NodePort ports: - port: 80 nodePort: 30038 # - targetPort: 80 # port: 80 # nodePort: 30008 selector: app.kubernetes.io/name: guestbook app.kubernetes.io/component: frontend </code></pre> <p>Next the mongo db deployment</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: mongo labels: app.kubernetes.io/name: mongo app.kubernetes.io/component: backend spec: selector: matchLabels: app.kubernetes.io/name: mongo app.kubernetes.io/component: backend replicas: 1 template: metadata: labels: app.kubernetes.io/name: mongo app.kubernetes.io/component: backend spec: containers: - name: mongo image: mongo:4.2 args: - --bind_ip - 0.0.0.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 27017 </code></pre> <p>Finally the mongo service</p> <pre><code>apiVersion: v1 kind: Service metadata: name: mongo labels: app.kubernetes.io/name: mongo app.kubernetes.io/component: backend spec: ports: - port: 27017 targetPort: 27017 nodePort: 30068 type: NodePort selector: app.kubernetes.io/name: mongo app.kubernetes.io/component: backend </code></pre>
<p>Short answer: there isn't one.</p> <p>Long answer: you are using the <code>mongo</code> image, do you can pull up the readme for that on <a href="https://hub.docker.com/_/mongo" rel="nofollow noreferrer">https://hub.docker.com/_/mongo</a>. That shows that authentication is disabled by default and must be manually enabled via <code>--auth</code> as a command line argument. When doing that, you can specific the initial auth configuration via environment variables and then more complex stuff in the referenced .d/ folder.</p>
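<p>A hedged sketch of what enabling that could look like in your mongo Deployment; the Secret name is an assumption, and note that the official image turns auth on automatically once the root username/password variables are set:</p> <pre><code>containers:
  - name: mongo
    image: mongo:4.2
    args:
      - --bind_ip
      - 0.0.0.0
      - --auth
    env:
      - name: MONGO_INITDB_ROOT_USERNAME
        value: admin                    # pick your own username
      - name: MONGO_INITDB_ROOT_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mongo-credentials     # assumed Secret, create it yourself
            key: password
</code></pre> <p>Those credentials are then what you would type into Compass when connecting to the NodePort.</p>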
<p>I am still learning Kubernetes and I stumbled over the objects 'Ingress' and 'IngressRoute'. What is the difference between these two objects? Did IngressRoute replace the 'old' Ingress? I am running a Kubernetes cluster v1.17 with Traefik 2.1. My IngressRoute works fine, but I also found blogs explaining how to define an Ingress.</p>
<p>Ingress is a shared abstraction that can be implemented by many providers (Nginx, ALBs, Traefik, HAProxy, etc). It is specifically an abstraction over a fairly simple HTTP reverse proxy that can do routing based on hostnames and path prefixes. Because it has to be a shared thing, that means it's been awkward to handle configuration of provider-specific settings. Some teams on the provider side have decided the benefits of a shared abstraction are not worth the complexities of implementation and have made their own things, so far Contour and Traefik have both named them IngressRoute but there is no connection other than similar naming.</p> <p>Contour handled this fairly well and allowed the two systems to coexist, the Traefik team disregarded our warnings and basically nerfed Ingress to vanilla configs only because they don't see any benefit from supporting it. Can you tell I'm salty about this? Because I definitely am.</p> <p>Basically Ingress is the official thing but it's imperfect, some people are trying to make a new, better thing but it's not going well.</p>
<p>I have created a <strong>k8s <a href="https://www.howtoforge.com/tutorial/centos-kubernetes-docker-cluster/" rel="nofollow noreferrer">cluster</a> by installing &quot;kubelet kubeadm kubectl&quot;</strong>. Now I'm trying to deploy a microservice application as follows:</p> <ol> <li><p>docker build -t demoserver:1.0 . =&gt;<strong>image created successfully</strong></p> </li> <li><p>kubectl run demoserver --image=demoserver --port=8000 --image-pull-policy=Never =&gt;<strong>POD STATUS: ErrImageNeverPull</strong></p> </li> </ol> <p>I tried &quot; eval $(minikube docker-env)&quot; but it says bash: <strong>minikube: command not found...</strong></p> <p>Do I need to install minikube? <em><strong>Is my above cluster setup not enough for deployment?</strong></em></p>
<p>Minikube and kubeadm are two unrelated tools. Minikube builds a (usually) single node cluster in a local VM for development and learning. Kubeadm is part of how you install Kubernetes in production environments (sometimes, not all installers use it but it's designed to be a reusable core engine).</p>
<p>I know that k8s has a default hard eviction threshold of memory.available&lt;100Mi. So k8s should evict pods if that threshold is exceeded. In these conditions, can a pod provoke a SYSTEM OOM? When I talk about SYSTEM OOM I mean the situation when Linux starts to kill processes randomly (or not quite randomly, it doesn't matter). Let's suppose that other processes on the node consume a constant amount of memory. I hope that k8s watches pods and kills them <strong>before</strong> the threshold is exceeded. Am I right?</p>
<p>Yes, very yes. Eviction takes time. If the kernel has no memory, oomkiller activates immediately. Also if you set a <code>resources.limits.memory</code> then if you exceed that you get an OOM.</p>
<p>I have a Windows Server 2019 (v1809) machine with Kubernetes (bundled with Docker Desktop for Windows). I want to enable Vertical Pod Autoscaling for the cluster I have created.</p> <p>However, all the documentation I can find is either for a <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/verticalpodautoscaler" rel="nofollow noreferrer">cloud service</a> or a <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler" rel="nofollow noreferrer">Linux-based system</a>. Is it possible to implement VPA for the Docker Desktop Kubernetes in Windows or Minikube?</p>
<p>While VPA itself is a daemon, the pods it controls are just API objects as far it knows and can be anything on any platform. As for compiling a VPA container for Windows, I wouldn't expect any problems, you'll just need to build it yourself since we don't provide one.</p>
<p>When secrets are created, they are 0755 owned by root:</p> <pre><code>/ # ls -al /var/run/secrets/ total 0 drwxr-xr-x 4 root root 52 Apr 16 21:56 . drwxr-xr-x 1 root root 21 Apr 16 21:56 .. drwxr-xr-x 3 root root 28 Apr 16 21:56 eks.amazonaws.com drwxr-xr-x 3 root root 28 Apr 16 21:56 kubernetes.io </code></pre> <p>I want them to be <code>0700</code> instead. I know that for regular secret volumes I can use</p> <pre><code> - name: vol-sec-smtp secret: defaultMode: 0600 secretName: smtp </code></pre> <p>and it will mount (at least the secret files themselves) as 0600. Can I achieve the same with the secrets located at <code>/var/run/secrets</code> directly from the yaml file?</p>
<p>You can disable the default service account token mount and then mount it yourself as you showed.</p>
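<p>A sketch of that combination; the <code>kube-root-ca.crt</code> ConfigMap name is an assumption (it exists by default on newer clusters, older ones may need the CA provided another way):</p> <pre><code>spec:
  automountServiceAccountToken: false
  volumes:
    - name: sa-token
      projected:
        defaultMode: 0600
        sources:
          - serviceAccountToken:
              path: token
              expirationSeconds: 3600
          - configMap:
              name: kube-root-ca.crt
              items:
                - key: ca.crt
                  path: ca.crt
          - downwardAPI:
              items:
                - path: namespace
                  fieldRef:
                    fieldPath: metadata.namespace
  containers:
    - name: app
      volumeMounts:
        - name: sa-token
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          readOnly: true
</code></pre>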
<p>We tried attaching a shell to a container inside the Traefik Pod using the following command, but it didn't work. Just FYI, we used the Helm chart to install Traefik on our k8s cluster.</p> <p><code>kubectl exec -it &lt;traefik Pod name&gt; -- /bin/sh</code></p> <p>We tried this too but with no success - <code>kubectl exec -it &lt;traefik Pod name&gt; -- /bin/bash</code></p> <p>Any help in this context will be appreciated. Thanks.</p>
<p>Traefik 1.7 uses a <a href="https://github.com/containous/traefik/blob/master/Dockerfile" rel="nofollow noreferrer"><code>FROM scratch</code></a> container image that has only the Traefik executable and some support files. There is no shell. You would have to switch to the <code>-alpine</code> variant of the image. For 2.x it looks like they use Alpine by default for some reason.</p>
<p>Is it possible to set a liveness probe to check that a separate service is existing? For an app in one pod, and a database in a separate pod, I would like for the app pod to check the liveness of the database pod rather than this pod itself. The reason for this is that once the db is restarted, the app is unable to reconnect to the new database. My idea is to set this so that when the db liveness check fails, the app pod is automatically restarted in order to make a fresh connection to the new db pod.</p>
<p>No, you would need to write that in a script or as part of your http endpoint.</p>
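<p>For example, a health endpoint in the app that internally tests its database connection, wired into the probe; the <code>/healthz/db</code> path and port are hypothetical, you would implement them in your app:</p> <pre><code>livenessProbe:
  httpGet:
    path: /healthz/db       # the handler behind this runs a cheap query against the DB
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3
</code></pre> <p>Be careful with this pattern though: if the database goes down, every app pod will fail the probe at once and they will all restart together.</p>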
<p>I have created a StorageClass and PersistentVolume but when I try to create a PersistentVolumeClaim I get the following error, "The PersistentVolumeClaim "esp-pv" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims". I have tried to delete the StorageClass PersistentVolume, and PersistentVolumeClaim, as other posts have suggested, and then recreate the sc, pv and pvc but I get the same error. </p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: manual provisioner: kubernetes.io/no-provisioner #volumeBindingMode: WaitForFirstConsumer volumeBindingMode: Immediate allowVolumeExpansion: true </code></pre> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: esp-pv-volume # name of the pv namespace: espkube # namespace where the p vis applied labels: type: local spec: storageClassName: manual accessModes: - ReadWriteMany # esp, studio and streamviewer can all write to this space hostPath: path: "/mnt/data/" capacity: storage: 10Gi # volume size requested </code></pre> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: esp-pv namespace: espkube spec: storageClassName: manual accessModes: - ReadWriteMany # esp, studio and streamviewer can all write to this space resources: requests: storage: 10Gi # volume size requested </code></pre>
<p>Solved in comments, deleting a namespaced object (which is most of them) requires specifying the namespace.</p>
<p>We have a statefulset with 2 replicas, on each pod there is Postgres instance. One acts as master while the other acts as replica.</p> <p>There are 2 services exposed, one is PG master service and the other is PG replica service. Both services are without selector, and there are 2 endpoints sharing the name of its related service.</p> <p>The IP of the Postgres pod will be patched into endpoints so that the traffic can be routed to the correct pod when accessing the service.</p> <p>For example, PG master service is corresponding to the endpoint of the same name, and in that endpoint there is ip of the pod running master Postgres instance.</p> <p>There is another traffic pod which will set up the DB connection to the master Postgres service.</p> <p>The problem is:</p> <p>1.The traffic pod(issue DB connections with JDBC) and PG master pod are in the same worker node(let’s call it worker1).</p> <p>2.The PG replica pod is in another worker(worker2).</p> <p>3.Run a testing case which is: “Shutdown network interface on worker1, sleep 60s, take up network interface on worker1”</p> <p>4.Then the previous PG replica pod is promoted to master, and the previous PG master pod is demoted to replica.</p> <p>5.The traffic pod’s target address is the PG master service name but at that time it connects to the replica pod. Thus the traffic could try to write to a PG in ‘read-only’ mode and test case fails.</p> <p>The kube-proxy mode is iptables. We suspect the iptables in kube-proxy doesn’t update the routing information in time. It means the iptables could update the routing information a bit later than the traffic pod establishing the DB connection.</p> <p>We made a restart of the kube-proxy and the problem hasn't been reproduced since then. That's strange. So we hope to know the root cause of that but haven't got clue.</p> <p>Here is the kubectl version:</p> <pre><code>•Kubernetes version (use kubectl version): Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19&quot;, GitVersion:&quot;v1.19.3&quot;, GitCommit:&quot;1e11e4a2108024935ecfcb2912226cedeafd99df&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-10-14T12:50:19Z&quot;, GoVersion:&quot;go1.15.2&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19&quot;, GitVersion:&quot;v1.19.3&quot;, GitCommit:&quot;91fb1371fc570cfd3b3052012ce68fdd78b41c07&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-10-28T08:47:48Z&quot;, GoVersion:&quot;go1.15.2&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p>Thanks.</p>
<p>Yes? This is just a thing that happens, there is no way to atomically update things because the world is not transactional. Any active-passive HA system doing this kind of failover will have a time in which the system is not converged. The important thing is that Postgres itself never have more than one primary. It can have fewer than one, but never more. It sounds like you have that working, the demoted replica would be in read-only mode and any write queries sent to it would get an error as they should.</p>
<p>In Docker, the host and the containers have separate process namespaces. In the case of Kubernetes, the containers are wrapped inside a pod. Does that mean that in Kubernetes the host (a worker node), the pod on the node, and the container in the pod all have separate process namespaces?</p>
<p>Pods don't have anything of their own, they are just a collection of containers. By default, all containers run in their own PID namespace (which is separate from the host), however you can set all the containers in a pod to share the same one. This is also used with Debug Containers.</p>
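<p>A minimal sketch of the shared option; <code>shareProcessNamespace</code> is a pod-level field, and the images here are just placeholders:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-example
spec:
  shareProcessNamespace: true      # all containers in this pod see each other's processes
  containers:
    - name: app
      image: nginx
    - name: sidecar
      image: busybox
      command: ['sh', '-c', 'sleep 3600']
</code></pre>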
<p>I have a few CRDs and each of them is supposed to edit <code>Container.Spec</code>s across the cluster, like ENVs, labels, etc.</p> <p>Is it okay if the resource is managed by more than one controller?</p> <p>What are the possible pitfalls of this approach?</p>
<p>Yes, the same object can be updated by multiple controllers. The Pod object is updated by almost a dozen at this point I think. The main problem you can run into is write conflicts. Generally in an operator you do a get, then some stuff happens, then you do an update (usually to the status subresource in the root-object case). This can lead to race conditions. I would recommend looking at using Server Side Apply to reduce these issues, as it handles per-field ownership tracking rather than versioning whole objects via serial numbers.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: hello-kubernetes-ingress annotations: kubernetes.io/ingress.class: nginx spec: rules: - host: hw1.your_domain http: paths: - backend: serviceName: hello-kubernetes-first servicePort: 80 - host: hw2.your_domain http: paths: - backend: serviceName: hello-kubernetes-second servicePort: 80 </code></pre> <p><strong>vs</strong></p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: ingress spec: backend: serviceName: nginx-svc servicePort: 80 </code></pre> <p>In the second yaml file nginx-svc points to a nginx controller which has the configMap that configures the routing of requests and other nginx related configuration.</p>
<p>The Ingress system is an abstraction over a simple HTTP fanout proxy, with routing on hostnames and URL prefixes. Nginx can be this kind of proxy, but it can also be an HTTP server. The first Ingress is a hostname-based fanout between two backend services. The second is a fallback route when no other rule matches, presumably aimed at an Nginx server that will send back some kind of simple HTTP page.</p> <p>tl;dr Nginx can be both a proxy and a server. Ingress is proxy, nginx-svc is probably server.</p>
<p>I am trying to parse a Helm chart YAML file using Python. The file contains some curly braces, which is why I am unable to parse the YAML file.</p> <p>A sample YAML file:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: {{ .Values.nginx.name }}-config-map labels: app: {{ .Values.nginx.name }}-config-map data: SERVER_NAME: 12.121.112.12 CLIENT_MAX_BODY: 500M READ_TIME_OUT: '500000' </code></pre> <p>Basically, I could not figure out how to ignore the template values on the right-hand side.</p> <p>Thank you.</p>
<p>You would have to write an implementation of Go's <code>text/template</code> library in Python. A better option is probably to push your content through <code>helm template</code> first and then parse it.</p>
<p>I have an error with an unhealthy pod even though I think the pod works as expected after rescheduling. If I restart (delete) it, it becomes ready, but I would like to understand why it ends up in an unhealthy state.</p> <p>My probe is as simple as this:</p> <pre><code>readinessProbe: httpGet: path: / port: 4000 initialDelaySeconds: 30 periodSeconds: 30 </code></pre> <p>Events:</p> <pre><code> Type Reason Age From Message ---- ------ ---- ---- ------- Warning Unhealthy 99s (x2253 over 35h) kubelet, aks-nodepool1-23887969-vmss000000 Readiness probe failed: Get http://10.244.0.142:4000/: net/http: request canceled (Client.Timeout exceeded while awaiting headers) </code></pre> <p>State &amp; Last state</p> <pre><code> State: Running Started: Fri, 17 Apr 2020 19:44:58 +0200 Last State: Terminated Reason: OOMKilled Exit Code: 1 Started: Fri, 17 Apr 2020 00:20:31 +0200 Finished: Fri, 17 Apr 2020 19:44:56 +0200 Ready: False </code></pre> <p>If I run</p> <pre><code> kubectl exec -t other pod -- curl -I 10.244.0.142:4000/ </code></pre> <p>I get 200 OK</p> <p>Can someone explain why the pod does not become ready? I guess it has something to do with being OOMKilled because of the memory limit, and that should be fixed. But I would like to understand why it doesn't restart properly.</p>
<p>OOMKilled is the previous state; the current state is Running. The problem is the readiness probe failing, not the old OOM kill.</p>
<p>Due to a customer's requirements, I have to install k8s on two nodes (there are only two available nodes, so adding another one is not an option). In this situation, setting up etcd on only one node would cause a problem if that node went down; however, setting up an etcd cluster (etcd on each node) would still not solve the problem, since if one node went down, the etcd cluster would not be able to achieve the majority vote for the quorum. So I was wondering if it would be possible to override the &quot;majority rule&quot; for the quorum so that the quorum could just be 1? Or would there be any other way to solve this?</p>
<p>No, you cannot force it to lie to itself. What you see is what you get, two nodes provide the same redundancy as one.</p>
<p>I want to deploy some Java (Spring Boot, MicroProfile, ...) apps to k8s. I want to define CPU requests and limits for those apps. The problem with limits is that the apps take a very long time to start (30-90 seconds) depending on the limit (around 300-500m). This is too long. The apps also don't need that much CPU: idle they are at &lt;10m, and under load &lt;100m.</p> <p>How do you solve this kind of issue?</p> <p>Is there something planned like startup probes, but for limits? (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes</a>)</p> <p>Thanks</p> <p>P.S. I'm aware of frameworks like Quarkus or Micronaut, but we have some legacy apps here that we want to migrate to k8s.</p>
<p>The usual solution is just to not use CPU limits. They are often best left off unless you know the service abuses the CPU and you can't fix it any other way.</p>
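<p>A hedged sketch of what that looks like in the container spec; the numbers are assumptions based on the idle/load figures you gave and a memory size you would pick for your app:</p> <pre><code>resources:
  requests:
    cpu: 100m          # roughly your under-load usage, used only for scheduling
    memory: 512Mi
  limits:
    memory: 512Mi      # keep a memory limit; just leave cpu out so startup can burst
</code></pre>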
<p>I'm wondering about a distributed batch job I need to run. Is there a way in K8s, if I use a Job/StatefulSet or whatever, for the pod itself (via an ENV var or whatever) to know it's 1 of X pods run for this job?</p> <p>I need to chunk up some data and have each process fetch the stuff it needs.</p> <p>--</p> <p>I guess the StatefulSet hostname setting is one way of doing it. Is there a better option?</p>
<p>This is planned but not yet implemented that I know of. You probably want to look into higher order layers like Argo Workflows or Airflow instead for now.</p>
<p>One of the options for using <code>Kubernetes</code> on Windows 10 is to enable it from <code>Docker for Windows</code>.</p> <p>However, many tutorials on the K8s site manage things by using minikube - for example, adding addons.</p> <p>When using the Docker option we don't have minikube.</p> <p>For example, how do I add an addon to such an instance?</p>
<p>You would have to manually grab the addon YAML file and <code>kubectl apply -f</code> it. But most things have Helm charts available too so maybe just do that instead?</p>
<p>I have been experimenting with blue-green deployment in <code>kubernetes</code> using <code>nginx-ingress</code>. I created a few concurrent HTTP requests to hit the v1 version of my application. In the meanwhile I switched the router to point to the v2 version of my application. As expected, v2 was serving the requests after the swap, but what made me curious is that all the requests were successful. It is highly probable that there were some in-progress requests on v1 while I made the swap. Why didn't such requests fail?</p> <p>I also tried the same thing by introducing some delay in my service so that the requests take a longer time to process. Still none of the requests failed.</p>
<p>Usually in-flight requests are allowed to complete, just no new requests will be sent by the proxy.</p>
<p>I am trying to run a process in only ONE docker pod (and not the other n pods),</p> <p>can I know (from inside a pod/instance) </p> <ul> <li>am I the first pod?</li> <li>how many pods are running?</li> </ul> <p>thanks.</p>
<p>Don't do this. Put that thing in its own deployment (or statefulset more likely) that is unrelated to the others.</p>
<p>What is the best way to deploy a Helm chart using C#? Helm 3 is written in Go and I could not find a C# wrapper; any advice on that? Thank you.</p>
<p>Helm is written in Go so unless you want to get incredibly fancy your best bet is running it as a subprocess like normal. A medium-fancy solution would be using one of the many Helm operators and then using a C# Kubernetes api client library to set the objects.</p>
<p>I’m looking for a really simple, lightweight way of persisting logs from a docker container running in kubernetes. I just want the stdout (and stderr I guess) to go to persistent disk, I don’t want anything else for analysing the logs, to send them over the internet to a third party, etc. as part of this.</p> <p>Having done some reading I’ve been considering a DaemonSet with the application container, but then another container which has <code>/var/lib/docker/containers</code> mounted and also a persistent volume (maybe NFS) mounted too. That container would then need a way to copy logs from the default docker JSON logging driver in <code>/var/lib/docker/containers</code> to the persistent volume, maybe rsync running regularly.</p> <p>Would that work (presumably if the rsync container goes down it's going to miss stuff because nothing's queuing, perhaps that's ok rather than trying to queue potentially huge amounts of logs), is this a sensible approach for the desired outcome? It’s only for one or two containers if that makes a difference. Thanks.</p>
<p>Fluentd supports a simple file output plugin (<a href="https://docs.fluentd.org/output/file" rel="nofollow noreferrer">https://docs.fluentd.org/output/file</a>) which you can easily aim at a PersistentVolume mount. Otherwise you would configure Fluentd (or Fluent Bit if you prefer) just like normal for Kubernetes, so find your favorite guide and follow it.</p>
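<p>A rough sketch of the relevant output section, wrapped in a ConfigMap the way most Fluentd DaemonSet guides do it; the ConfigMap name, match pattern, and <code>/logs</code> path (your PersistentVolume mount inside the Fluentd pod) are all assumptions:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-file-output
data:
  output.conf: |
    &lt;match kubernetes.**&gt;
      @type file
      path /logs/app        # lands on the PersistentVolume mounted at /logs
      append true
    &lt;/match&gt;
</code></pre>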
<p>I have four Kubernetes clusters, and I want to check their expiration times with the kubernetes-python-client.</p> <p>I am following this page: <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">https://github.com/kubernetes-client/python</a></p> <p>Does anyone know how to get this?</p>
<p>The apiserver certificate is generally handled out of band, either by your Kubernetes installer tool (kubeadm, rancher, talos, etc) or off-cluster in a load balancer layer. As such the K8s API won't help you with this.</p> <p>That said, you can get the certificate of any HTTPS server in Python using <code>ssl.get_server_certificate()</code> (<a href="https://docs.python.org/3/library/ssl.html#ssl.get_server_certificate" rel="nofollow noreferrer">https://docs.python.org/3/library/ssl.html#ssl.get_server_certificate</a>) along with other functions in the <code>ssl</code> module to parse the cert data and then look at the Not After timestamp.</p>
<p>I need to expose application-wide metrics for Prometheus collection from a Kubernetes application that is deployed with multiple instances, e.g. scaled by Horizontal Pod Autoscaler. The scrape point is exposed by every instance of the pod for fail-over purposes, however I do not want Prometheus to actually call the scrape endpoint on every pod's instance, only one instance at a time and failover to another instance only if necessary. </p> <p>The statistics is application-wide, not per-pod instance, all instance endpoints report the same data, and calling them in parallel would serve no useful purpose and only increase a workload on the backend system that has to be queried for statistics. I do not want 30 calls to the backend (assuming the app is scaled up to 30 pods) where just one call would suffice.</p> <p>I hoped that exposing the scrape endpoint as a k8s service (and annotating the service for scraping) should do the trick. However instead of going through the service proxy and let it route the request to one of the pods, Prometheus seems to be going directly to the instances behind the service, and to all of them, rather than only one at a time.</p> <p><a href="https://i.stack.imgur.com/6oevk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6oevk.png" alt="enter image description here"></a></p> <p>Is there a way to avoid Prometheus calling all the instances, and have it call only one?</p> <p>The service is defined as:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: k8worker-msvc labels: app: k8worker-msvc annotations: prometheus.io/scrape: 'true' prometheus.io/path: '/metrics' prometheus.io/port: '3110' spec: selector: app: k8worker type: LoadBalancer ports: - protocol: TCP port: 3110 targetPort: 3110 </code></pre> <p>In case this is not possible, what are my options other than running leader election inside the app and reporting empty metrics data from non-leader instances?</p> <p>Thanks for advice.</p>
<p>This implies the metrics are coming from some kind of backend database rather than a usual in-process exporter. Move the metrics endpoint to a new service connected to the same DB and only run one copy of it.</p>
<p>I am trying to run a Redis cluster on Kubernetes. I am not planning to persist any Redis data to the disk. Is it possible to run the Redis cluster as Kubernetes deployment and not as a stateful set?</p>
<p>Yes, though I would probably still use StatefulSet specifically for the features to ensure only one pod starts at a time.</p>
<h3>Context</h3> <p>I am running Airflow, and trying to run a proof of concept for a Docker container using Airflow's <a href="https://airflow.apache.org/docs/apache-airflow/1.10.4/_api/airflow/operators/docker_operator/index.html" rel="nofollow noreferrer">DockerOperator</a>. I am deploying to Kubernetes (EKS), but not using Kubernetes Executor yet. Given that I am running pods, by using the <code>DockerOperator</code> I will be running (to my understanding) Docker in Docker.</p> <p>Whenever I run my task, I am receiving the Error: <code>ERROR - Error while fetching server API version</code>. The error happens both on <code>docker-compose</code> as well as EKS (kubernetes).</p> <h3>My current status</h3> <p>Thi is how my airflow Dockerfile looks like:</p> <pre class="lang-sh prettyprint-override"><code>FROM apache/airflow:1.10.14-python3.8 # Use airflow user for pip installs and other things. USER root # Copying Airflow requirements USER airflow COPY requirements.txt /tmp/requirements.txt # Installing requirements. Using airflow user (docs: https://airflow.apache.org/docs/apache-airflow/stable/production-deployment.html) RUN pip install --no-cache-dir -r /tmp/requirements.txt </code></pre> <p>This is how the dag I am trying to run looks like:</p> <pre class="lang-py prettyprint-override"><code>with DAG( dag_id='task', default_args=dict( start_date=days_ago(0) ), schedule_interval='@daily' ) as dag: task_1 = DockerOperator( dag=dag, task_id='docker_task', image='centos:latest', api_version=&quot;auto&quot;, docker_url='unix://var/run/docker.sock', command='/bin/sleep 30' ) </code></pre> <p>This is the stack trace of the error I am getting:</p> <pre><code>[2020-12-29 14:18:52,601] {taskinstance.py:1150} ERROR - Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')) Traceback (most recent call last): File &quot;/home/airflow/.local/lib/python3.8/site-packages/urllib3/connectionpool.py&quot;, line 670, in urlopen httplib_response = self._make_request( File &quot;/home/airflow/.local/lib/python3.8/site-packages/urllib3/connectionpool.py&quot;, line 392, in _make_request conn.request(method, url, **httplib_request_kw) File &quot;/usr/local/lib/python3.8/http/client.py&quot;, line 1255, in request self._send_request(method, url, body, headers, encode_chunked) File &quot;/usr/local/lib/python3.8/http/client.py&quot;, line 1301, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File &quot;/usr/local/lib/python3.8/http/client.py&quot;, line 1250, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File &quot;/usr/local/lib/python3.8/http/client.py&quot;, line 1010, in _send_output self.send(msg) File &quot;/usr/local/lib/python3.8/http/client.py&quot;, line 950, in send self.connect() File &quot;/home/airflow/.local/lib/python3.8/site-packages/docker/transport/unixconn.py&quot;, line 43, in connect sock.connect(self.unix_socket) FileNotFoundError: [Errno 2] No such file or directory During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/airflow/.local/lib/python3.8/site-packages/requests/adapters.py&quot;, line 439, in send resp = conn.urlopen( File &quot;/home/airflow/.local/lib/python3.8/site-packages/urllib3/connectionpool.py&quot;, line 726, in urlopen retries = retries.increment( File &quot;/home/airflow/.local/lib/python3.8/site-packages/urllib3/util/retry.py&quot;, line 410, in increment raise six.reraise(type(error), 
error, _stacktrace) File &quot;/home/airflow/.local/lib/python3.8/site-packages/urllib3/packages/six.py&quot;, line 734, in reraise raise value.with_traceback(tb) File &quot;/home/airflow/.local/lib/python3.8/site-packages/urllib3/connectionpool.py&quot;, line 670, in urlopen httplib_response = self._make_request( File &quot;/home/airflow/.local/lib/python3.8/site-packages/urllib3/connectionpool.py&quot;, line 392, in _make_request conn.request(method, url, **httplib_request_kw) File &quot;/usr/local/lib/python3.8/http/client.py&quot;, line 1255, in request self._send_request(method, url, body, headers, encode_chunked) File &quot;/usr/local/lib/python3.8/http/client.py&quot;, line 1301, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File &quot;/usr/local/lib/python3.8/http/client.py&quot;, line 1250, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File &quot;/usr/local/lib/python3.8/http/client.py&quot;, line 1010, in _send_output self.send(msg) File &quot;/usr/local/lib/python3.8/http/client.py&quot;, line 950, in send self.connect() File &quot;/home/airflow/.local/lib/python3.8/site-packages/docker/transport/unixconn.py&quot;, line 43, in connect sock.connect(self.unix_socket) urllib3.exceptions.ProtocolError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/airflow/.local/lib/python3.8/site-packages/docker/api/client.py&quot;, line 214, in _retrieve_server_version return self.version(api_version=False)[&quot;ApiVersion&quot;] File &quot;/home/airflow/.local/lib/python3.8/site-packages/docker/api/daemon.py&quot;, line 181, in version return self._result(self._get(url), json=True) File &quot;/home/airflow/.local/lib/python3.8/site-packages/docker/utils/decorators.py&quot;, line 46, in inner return f(self, *args, **kwargs) File &quot;/home/airflow/.local/lib/python3.8/site-packages/docker/api/client.py&quot;, line 237, in _get return self.get(url, **self._set_request_timeout(kwargs)) File &quot;/home/airflow/.local/lib/python3.8/site-packages/requests/sessions.py&quot;, line 543, in get return self.request('GET', url, **kwargs) File &quot;/home/airflow/.local/lib/python3.8/site-packages/requests/sessions.py&quot;, line 530, in request resp = self.send(prep, **send_kwargs) File &quot;/home/airflow/.local/lib/python3.8/site-packages/requests/sessions.py&quot;, line 643, in send r = adapter.send(request, **kwargs) File &quot;/home/airflow/.local/lib/python3.8/site-packages/requests/adapters.py&quot;, line 498, in send raise ConnectionError(err, request=request) requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py&quot;, line 984, in _run_raw_task result = task_copy.execute(context=context) File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/docker_operator.py&quot;, line 260, in execute self.cli = APIClient( File &quot;/home/airflow/.local/lib/python3.8/site-packages/docker/api/client.py&quot;, line 197, in __init__ self._version = self._retrieve_server_version() File &quot;/home/airflow/.local/lib/python3.8/site-packages/docker/api/client.py&quot;, line 221, in _retrieve_server_version raise DockerException( 
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')) </code></pre> <h3>What I have tried</h3> <ol> <li>Mount the socket into docker compose <code>/var/run/docker.sock:/var/run/docker.sock:ro</code></li> </ol> <p>First, that gives me a new error to:</p> <pre><code>ERROR - Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied')) </code></pre> <p>Second, how will I be able to mount on Kubernetes? I would guess that complicates things</p> <ol start="2"> <li>Install docker within my container and try to give privileges to the airflow user like:</li> </ol> <pre><code>FROM apache/airflow:1.10.14-python3.8 # Use airflow user for pip installs and other things. USER root # Docker RUN curl -sSL https://get.docker.com/ | sh # Usermod RUN usermod -aG docker airflow # Copying Airflow requirements USER airflow COPY requirements.txt /tmp/requirements.txt # Installing requirements. Using airflow user (docs: https://airflow.apache.org/docs/apache-airflow/stable/production-deployment.html) RUN pip install --no-cache-dir -r /tmp/requirements.txt </code></pre> <p>But that also did not work.</p> <ol start="3"> <li>Mounting the socket into the DockerOperator task like:</li> </ol> <pre class="lang-py prettyprint-override"><code> task_1 = DockerOperator( dag=dag, task_id='docker_task', image='centos:latest', api_version=&quot;auto&quot;, docker_url='unix://var/run/docker.sock', command='/bin/sleep 30', volumes=['/var/run/docker.sock:/var/run/docker.sock:ro'], ) </code></pre> <p>But that also has had no effect</p>
<p>Copying down from comment:</p> <p>The direct issue is likely that the docker control socket file is owned by something like <code>root:docker</code> on the host, and assuming Airflow isn't running as root (which it shouldn't) then you would need to specifically set it to run with the same gid as the <code>docker</code> group on the host machine (which probably has no matching group inside the container, which could confuse other things).</p> <p>The real answer is to use the KubernetesPodOperator instead. DockerOperator won't work at all once you stop using Docker on the underlying host which will be happening soon now that dockershim is deprecated.</p>
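<p>For reference, a rough sketch of what that replacement could look like with the 1.10-era contrib operator (the image, namespace, and pod name here are placeholders, and the <code>kubernetes</code> extra must be installed for the import to work):</p> <pre class="lang-py prettyprint-override"><code>from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

# Runs the same "sleep" workload as the DockerOperator example, but as a pod
# scheduled by Kubernetes instead of through the Docker socket.
task_1 = KubernetesPodOperator(
    dag=dag,
    task_id='docker_task',
    name='docker-task',              # pod name (hypothetical)
    namespace='default',             # adjust to your namespace
    image='centos:latest',
    cmds=['/bin/sleep', '30'],
    in_cluster=True,                 # use the service account of the Airflow pod
    get_logs=True,
    is_delete_operator_pod=True,     # clean up the pod when it finishes
)
</code></pre>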
<p>My situation at the moment: I'm setting up a mail server, and just after getting it to work, the logs are flooded with <code>authentication failed</code> messages from a suspicious Iranian network trying to log in to random accounts.</p> <p>After some googling I found out that <code>fail2ban</code> can stop those attacks, but there's one problem: how do I use fail2ban in Kubernetes? My ideas:</p> <ul> <li>I found <a href="https://pilot.traefik.io/plugins/280093067746214409/fail2-ban" rel="nofollow noreferrer">this plugin</a> for Traefik, but it requires the Traefik instance to be connected to their SaaS management service, which I don't need</li> <li>Installing <code>fail2ban</code> on the host: as Kubernetes connects multiple nodes, <code>fail2ban</code> on node 1 only gets the logs from that node and cannot block traffic coming in on node 2.</li> </ul> <p>Is there a solution to run fail2ban in Kubernetes, maybe linked to the ingress controller, as is possible with Traefik, but without any connection to a SaaS provider?</p>
<p>There isn't really a good way to do this. Both on the log access front, and more importantly on tweaking the iptables rules from inside a container. You could definitely use the core engine of fail2ban to build a tool around the k8s native APIs (<code>pods/logs</code>, NetworkPolicy) however I don't know any such project at time of writing.</p>
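<p>If you did build something around fail2ban's engine yourself, the "ban" action would most plausibly be a generated NetworkPolicy rather than an iptables call. A rough sketch of what such a generated object could look like (the namespace, labels, and banned CIDR are made up, and it only helps if your CNI enforces NetworkPolicy and client source IPs are preserved, e.g. with <code>externalTrafficPolicy: Local</code>):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ban-203-0-113-0
  namespace: mail
spec:
  podSelector:
    matchLabels:
      app: mailserver          # hypothetical label on the mail pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 203.0.113.0/24   # the banned source range
</code></pre>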
<p>Will the application stay live (in transaction) during the time of pod deployment in AKS?</p> <p>While we are performing the pod deployment, will the application transactions go through, or will they error out?</p>
<p>The Deployment system does a rolling update. New pods are created with the new template and once Ready they are added to the service load balancer, and then old ones are removed and terminated.</p>
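<p>For illustration, the pace of that roll is controlled on the Deployment itself; a conservative sketch (names and image are placeholders) that never drops below the desired replica count during the update:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # placeholder
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1              # create one extra pod at a time
      maxUnavailable: 0        # never go below the desired replica count
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: myrepo/my-app:v2   # placeholder image
</code></pre> <p>With a readiness probe on the container, "Ready" means the new pod can actually serve requests, so in-flight transactions keep going to the old pods until then.</p>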
<p>A question: how can I control how much of each GPU is used in a k8s cluster of two machines, with two graphics cards on each machine? Each card has 15 GB. I want to use 10+ GB on the other three cards, leaving 7+ GB free on one graphics card.</p>
<p>That's not how graphics cards work. The GPU RAM is physically part of the graphics card and is exclusive to that GPU.</p>
<p>I have the following manifest:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: my-psp-rb roleRef: kind: Role name: psp-role apiGroup: rbac.authorization.k8s.io subjects: - kind: Group name: system:serviceaccounts:${NAMESPACE} </code></pre> <p>I would like to deploy this manifest as follows: <code>kubectl apply -f psp-rb.yaml -n some_namespace</code> in a way that the name in the subject is dynamically set. I know that <code>${NAMESPACE}</code> is not allowed, so I am looking for a way on how to do this. I looked into <a href="https://stackoverflow.com/questions/56003777/how-to-pass-environment-variable-in-kubectl-deployment">this</a> but I am not a huge fan of installing dependencies only for passing variables to a manifest (plus I agree variables shouldn't be used in this context). Maybe there is a kubernetesy way to do this?</p>
<p>Copying down from comments and confirming: this is not something kubectl supports. kubectl is intentionally very simple :)</p>
<p>I'm trying to write a script that runs some commands inside the container using kubectl exec. I'd like to use the environment variables that exist inside the container, but struggling to figure out how to prevent my local shell from evaluating the var and still have it evaluated in the container.</p> <p>This was my first try, but $MONGODB_ROOT_PASSWORD get evaluated by my local shell instead of inside the container:</p> <pre><code>kubectl -n enterprise exec mycontainer -- mongodump --username root --password $MONGODB_ROOT_PASSWORD --out /dump </code></pre> <p>I tried this, but the had the same issue with pipe, it was evaluated in my local instead of in the container:</p> <pre><code>kubectl -n enterprise exec mycontainer -- echo 'mongodump --username root --password $MONGODB_ROOT_PASSWORD --out /dump' | sh </code></pre> <p>Is there a way to do this with kubectl exec?</p>
<p>You need a <code>sh -c</code> in there, like <code>exec -- sh -c 'whatever $PASSWORD'</code>.</p>
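<p>Applied to the command in the question, that looks something like the following; the single quotes stop the local shell from expanding the variable, and the <code>sh</code> inside the container expands it instead:</p> <pre><code>kubectl -n enterprise exec mycontainer -- sh -c \
  'mongodump --username root --password "$MONGODB_ROOT_PASSWORD" --out /dump'
</code></pre>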
<p>It seems that I can't get my target scraped by <code>prometheus</code> neither via the annotation method nor the <code>ServiceMonitor</code> workaround.</p> <p>Here is the <code>spec</code> scetion of my <code>Service</code> Object exposing the metrics</p> <pre><code>spec: clusterIP: 10.107.228.89 ports: - name: metricsport port: 8282 protocol: TCP targetPort: 8282 selector: app: my-app release: my-app sessionAffinity: None type: ClusterIP </code></pre> <p>This <strong>DOES</strong> expose metrics, which I verify by <code>curl</code>ing it via another pod from within the cluster</p> <pre><code>curl http://my-service-metrics:8282/metrics (...a lot of metrics) </code></pre> <p>Here is my <code>ServiceMonitor</code> <code>spec</code></p> <pre><code>spec: endpoints: - path: /metrics port: metricsport namespaceSelector: matchNames: - default selector: matchLabels: app: my-app release: my-app </code></pre> <p>What else should I do/try to get my metrics being scraped by <code>prometheus</code>?</p> <p>(the target does not appear in my <code>http://prometheus/targets</code>)</p>
<p>Answered in Slack, need to make sure the labels on the ServiceMonitor object itself match the <code>serviceMonitorSelector</code> on the Prometheus object.</p>
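<p>In other words, whatever the Prometheus object selects on must exist as labels on the ServiceMonitor itself, not only on the Service. A sketch of a matching pair (the <code>release: my-app</code> label is just an example):</p> <pre><code>apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceMonitorSelector:
    matchLabels:
      release: my-app
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  labels:
    release: my-app          # must match the selector above
spec:
  endpoints:
    - port: metricsport
      path: /metrics
  selector:
    matchLabels:
      app: my-app
      release: my-app
</code></pre>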
<p>I have 2 components that run in a Kubernetes environment. One is listing all the nodes in the cluster (using Kubernetes API) and the other is reporting details of the node it runs on to the first one.</p> <p>I want my first component to match the reported node from the second component to a node in the first component's list.</p> <p>In order to do that, I need 2 things:</p> <ol> <li>Identify a unique value for a node in a Kubernetes cluster.</li> <li>Get the same ID for a node from two different places - The Kubernetes API and from the node itself.</li> </ol> <p><strong>Identifying a node:</strong></p> <p>I'm trying to identify a Kubernetes node. But I still can't get a <strong>unique</strong> identifier for a node.</p> <p>At first, I thought the field machine-id was unique, but it's not (Copied when cloning node). Then I created an identification formula, consist of _.</p> <p>This formula is not unique but it has been working pretty well for now.</p> <p><strong>Getting the same node ID using 2 different methods:</strong></p> <p>It's pretty easy to get a node's machine_id and hostname using the Kubernetes API, and also by running the commands on the node's operating system. The problem I'm facing is that for some cases, the IDs don't match. In particular, the hostname is not identical. <strong>Getting the hostname using the Kubernetes API doesn't return the real hostname of the node.</strong> I've been facing this issue on IKS and ICP.</p> <p>Is there a normal way to get a unique ID for the Kubernetes node? One that will return the same result by running the command on the node and using the API?</p>
<p>You would use the node name, as in the name on the API object. You can pass it in to the DaemonSet process using an env var with a field ref or a downward api volume.</p>
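<p>A sketch of the env var approach with the downward API (container name and image are placeholders):</p> <pre><code>spec:
  containers:
    - name: reporter
      image: myorg/node-reporter:latest   # placeholder
      env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName    # the Node object's name, same as in the API listing
</code></pre> <p>The second component can then report <code>NODE_NAME</code> and the first component can match it directly against <code>metadata.name</code> of the nodes it lists.</p>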
<p>I am trying to run a legacy application inside Kubernetes. The application consists of one of more controllers, and one or more workers. The workers and controllers can be scaled independently. The controllers take a configuration file as a command line option, and the configuration looks similar to the following:</p> <pre><code>instanceId=hostname_of_machine Memory=XXX .... </code></pre> <p>I need to be able to populate the instanceId field with the name of the machine, and this needs to be stable over time. What are the general guidelines for implementing something like this? The application doesn't support environment variables, so my first thought was to record the stateful set stable network ID in an environment variable and rewrite the configuration file with an init container. Is there a cleaner way to approach this? I haven't found a whole lot of solutions when I searched the 'net.</p>
<p>Nope, that's the way to do it (use an initContainer to update the config file).</p>
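<p>A rough sketch of that pattern, assuming the unrendered config lives in a ConfigMap and the rendered copy goes on a shared <code>emptyDir</code> (all names and paths here are made up):</p> <pre><code>spec:
  initContainers:
    - name: render-config
      image: busybox:1.36
      command:
        - sh
        - -c
        - sed "s/^instanceId=.*/instanceId=${POD_NAME}/" /templates/app.conf > /config/app.conf
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name   # stable for StatefulSet pods, e.g. controller-0
      volumeMounts:
        - { name: config, mountPath: /config }
        - { name: templates, mountPath: /templates }
  containers:
    - name: controller
      image: myorg/legacy-controller:latest   # placeholder
      args: ["--config", "/config/app.conf"]
      volumeMounts:
        - { name: config, mountPath: /config }
  volumes:
    - name: config
      emptyDir: {}
    - name: templates
      configMap:
        name: controller-config-template     # holds the unrendered app.conf
</code></pre>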
<p>In a Kubernetes operator based on operator-sdk, do you know how to write code to synchronize CR resource when CR specification is updated with <code>kubectl apply</code>? Could you please provide some code samples?</p>
<p>It is mostly up to how you deploy things. The default skeleton gives you a Kustomize-based deployment structure so <code>kustomize build config/default | kubectl apply -f</code>. This is also wrapped up for you behind <code>make deploy</code>. There is also <code>make install</code> for just installing the generated CRD files.</p>
<p>Is my understanding of the following workflow correct:</p> <ol> <li><p>When a request goes to the Load Balancer, it will also go through the Ingress Object (essentially a map of exactly how to process the incoming request).</p> </li> <li><p>This request is then forwarded to an Ingress Controller to fulfil (the request ultimately gets sent to the appropriate Pod/Service).</p> </li> </ol> <p>But what happens when there is only one Ingress Controller? It seems to me that the purpose of the load balancer will be defeated as all the requests will go to <code>EKS Worker Node - 1</code>?</p> <p>And additionally, let's say that <code>Pod A</code> and <code>Pod B</code> in <code>EKS Worker Node - 1</code> is occupied/down, will the Ingress controller forward that request to <code>EKS Worker Node - 2</code>?</p> <p>Is my assumptions correct? And should you always have multiple ingress controllers across different nodes?</p> <p>I'm confused as I don't understand how these two components work together? Which component is actually balancing the load?</p> <p><a href="https://i.stack.imgur.com/l59Vv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l59Vv.png" alt="enter image description here" /></a></p>
<p>To your main question, having multiple ingress controller replicas for redundancy in case of node failure is very common and probably a good thing to do on any production setup.</p> <p>As to how it works: there's two modes for Load Balancer services depending on the &quot;external traffic policy&quot;. In the default mode, the NLB doesn't talk directly to Nginx, it talk to the kube-proxy mesh on every node which then routes the Nginx pods as needed, even if they are on a different node. With traffic policy &quot;Local&quot;, the NLB will bypass the kube-proxy mesh and only talks to nodes that have at least 1 Nginx pod, so in your example the NLB would not talk to node 2 at all.</p> <p>The extra hop in the default mode improves resiliency and has smoother balancing but at a cost of hiding the client IP (moot here since the NLB does that anyway) and introducing a small bit of latency from the extra packet hops, usually only a few milliseconds.</p>
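<p>For reference, that policy is set on the Service that fronts the ingress controller, e.g.:</p> <pre><code># excerpt of the controller's Service
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # default is Cluster, which allows the extra kube-proxy hop
</code></pre>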
<p>I've been working on a <code>Kubernetes</code> cluster with microservices written in <code>Flask</code> for some time now and I'm not sure if my current method for containerizing them is correct. </p> <p>I've been using <a href="https://github.com/tiangolo/uwsgi-nginx-flask-docker" rel="nofollow noreferrer">this</a> image as a base.</p> <p>But I've been seeing various posts saying that something like that may be a bit overkill.</p> <p>The problem I have is that whenever I look up an article about using <code>Flask</code> with <code>Kubernetes</code>, they always skip over the details of the actual container and focus on building the cluster, which is something I already have a pretty solid handle on. I guess what I'm wondering is whether there's a better way to build a docker image for a single <code>Flask</code> app because it's hard to find a straight answer.</p>
<p>"better" is entirely relative, but here is the one I use.</p> <pre><code>FROM python:3.7 AS build ENV PYTHONFAULTHANDLER=1 \ PYTHONUNBUFFERED=1 \ PYTHONHASHSEED=random \ PIP_NO_CACHE_DIR=off \ PIP_DISABLE_PIP_VERSION_CHECK=on \ PIP_DEFAULT_TIMEOUT=100 RUN pip install poetry==1.0.5 WORKDIR /app COPY poetry.lock pyproject.toml /app/ RUN poetry config virtualenvs.create false &amp;&amp; \ poetry install --no-dev --no-interaction --no-ansi FROM gcr.io/distroless/python3-debian10 WORKDIR /app ENV PYTHONPATH=/usr/local/lib/python3.7/site-packages/ COPY --from=build /usr/local/lib/python3.7/site-packages/ /usr/local/lib/python3.7/site-packages/ COPY . /app CMD ["-m", "myapp"] </code></pre> <p>With that -m entrypoint looking like:</p> <pre><code>from . import create_app application = create_app() def main() -&gt; None: import sys from twisted import logger # type: ignore from twisted.internet import reactor # type: ignore from twisted.internet.endpoints import TCP4ServerEndpoint # type: ignore from twisted.python import threadpool # type: ignore from twisted.web.server import Site # type: ignore from twisted.web.wsgi import WSGIResource # type: ignore from prometheus_client.twisted import MetricsResource # type: ignore observers = [logger.textFileLogObserver(sys.stdout)] logger.globalLogBeginner.beginLoggingTo(observers) logger.Logger().info("myapp starting on :8000") pool = threadpool.ThreadPool() reactor.callWhenRunning(pool.start) django_resource = WSGIResource(reactor, pool, application) django_site = Site(django_resource) django_endpoint = TCP4ServerEndpoint(reactor, 8000) django_endpoint.listen(django_site) metrics_resource = MetricsResource() metrics_site = Site(metrics_resource) metrics_endpoint = TCP4ServerEndpoint(reactor, 9000) metrics_endpoint.listen(metrics_site) reactor.run() pool.stop() if __name__ == "__main__": main() </code></pre>
<p>In plain nginx, I can use the <a href="http://nginx.org/en/docs/http/ngx_http_geo_module.html" rel="nofollow noreferrer">nginx geo module</a> to set a variable based on the remote address. I can use this variable in the ssl path to choose a different SSL certificate and key for different remote networks accessing the server. This is necessary because the different network environments have different CAs.</p> <p>How can I reproduce this behavior in a Kubernetes nginx ingress? or even Istio?</p>
<p>You can customize the generated config both for the base and for each Ingress. I'm not familiar with the config you are describing but some mix of the various *-snippet configmap options (<a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#server-snippet" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#server-snippet</a>) or a custom template (<a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/</a>)</p>
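<p>As an illustration of the snippet route: a <code>geo</code> block has to live in the http context, so it would go in the controller ConfigMap's <code>http-snippet</code> key (the ConfigMap name and the networks below are assumptions, and the certificate-selection part would still need corresponding server-level config):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on how you installed the controller
  namespace: ingress-nginx
data:
  http-snippet: |
    geo $client_network {
      default     external;
      10.0.0.0/8  internal;
    }
</code></pre>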
<p>Requirement is to orchestrate ETL containers depending upon the number of records present at the Source system (SQL/Google Analytics/SAAS/CSV files).</p> <p>To explain take a Use Case:- ETL Job has to process 50K records present in SQL server, however, it takes good processing time to execute this job by one server/node as this server makes a connection with SQL, fetches the data and process the records. </p> <p>Now the problem is how to orchestrate in Kubernetes this ETL Job so that it scales up/down the containers depending upon number of records/Input. Like the case discussed above if there are 50K records to process in parallel then it should scale up the containers process the records and scales down.</p>
<p>You would generally use a queue of some kind and Horizontal Pod Autoscaler (HPA) to watch the queue size and adjust the queue consumer replicas automatically. Specifics depend on the exact tools you use.</p>
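<p>As a sketch, assuming a metrics adapter already exposes the queue depth as an external metric (the metric name below is hypothetical and depends entirely on your queue and adapter):</p> <pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: etl-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: etl-worker               # the queue-consumer Deployment
  minReplicas: 1
  maxReplicas: 50
  metrics:
    - type: External
      external:
        metric:
          name: queue_messages_visible   # hypothetical metric from your adapter
        target:
          type: AverageValue
          averageValue: "1000"           # aim for ~1000 pending records per worker
</code></pre>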
<p>I have a <em>kustomization.yaml</em> file that uses a private repository as a resource:</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - https://gitlab.com/my-user/k8s-base-cfg.git patchesStrategicMerge: - app-patch.yaml </code></pre> <p>I want to automate this on a Jenkins Pipeline. I don't know how to pass Git credentials to the kustomize build command. Is there any option to do that?</p> <p>Thank you</p>
<p>You can't, you would set up the credentials in git before starting Kustomize. In this case probably something very simple like <code>git config --global user.password &quot;your password&quot;</code> but look up the <code>credentials.helper</code> setting for more complex options, either from a local file or a tool that reads from some backing store directly.</p>
<p>I'm using <a href="https://github.com/helm/charts/tree/master/stable/jenkins" rel="nofollow noreferrer">Jenkins with Kubernetes plugin</a> but I think the problem will be the same with Tekton or any pipeline that build, test, and deploy a project using Kubernetes'pods and Gradle.</p> <p>Is there a way to share the Gradle daemon process through multiple pods?</p> <p>Note that I enabled remote Gradle caches.</p>
<p>Not easily. The whole model of the Kubernetes plugin is that every build runs in a new environment. You would have to run it outside of the build, probably via a DaemonSet with hostNetwork mode on and then configure Gradle in the build to look at a different IP (the host IP) instead of localhost. </p> <p>Basically everyone just copes with <code>--no-daemon</code> mode :-/</p>
<p>How can I upload a binary file, like a cert file, as a ConfigMap?</p> <p>I am trying to upload a cert file (.p12) as a ConfigMap but it's failing every time. After the upload I do not see the file, just the entry.</p> <p>Command that I used:</p> <pre><code>oc create configmap mmx-cert --from-file=xyz.p12 </code></pre> <p>Failed.</p> <p>Also used:</p> <pre><code>oc create configmap mmx-cert --from-file=game-special-key=example-files/xyz.p12 </code></pre> <p>Also failed.</p>
<p>You cannot, ConfigMaps cannot contain binary data on their own. You will need to encode it yourself and decode it on the other side, usually base64. Or just a Secret instead, which can handle binary data.</p>
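<p>A sketch of the Secret route, which stores the content base64-encoded and so copes with binary files:</p> <pre><code>oc create secret generic mmx-cert --from-file=xyz.p12
</code></pre> <p>You can then mount that Secret as a volume (or project individual keys) in the pod, the same way you would have mounted the ConfigMap.</p>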
<p>I am trying to install Kubernetes on Debian 9 (stretch) server, which is on cloud and therefore can't do virtualization. And it doesn't have systemd. Also, I'm trying for really minimal configuration, not big cluster.</p> <p>I've found Minikube, <a href="https://docs.gitlab.com/charts/development/minikube/index.html" rel="nofollow noreferrer">https://docs.gitlab.com/charts/development/minikube/index.html</a> which is supposed to run without virtualization using docker, but it requires systemd, as mentioned here <a href="https://github.com/kubernetes/minikube/issues/2704" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/2704</a> (and yes I get the related error message).</p> <p>I also found k3s, <a href="https://github.com/rancher/k3s" rel="nofollow noreferrer">https://github.com/rancher/k3s</a> which can run either on systemd or openrc, but when I install openrc using <a href="https://wiki.debian.org/OpenRC" rel="nofollow noreferrer">https://wiki.debian.org/OpenRC</a> I don't have the "net" service it depends on.</p> <p>Then I found microk8s, <a href="https://microk8s.io/" rel="nofollow noreferrer">https://microk8s.io/</a> which needs systemd simply because snapd needs systemd.</p> <p>Is there some other alternative or solution to mentioned problems? Or did Poettering already bribed everyone?</p>
<p>Since you are well off the beaten path, you can probably just run things by hand with k3s. It's a single executable AFAIK. See <a href="https://github.com/rancher/k3s#manual-download" rel="nofollow noreferrer">https://github.com/rancher/k3s#manual-download</a> as a simple starting point. You will eventually want some kind of service monitor to restart things if they crash, if not systemd then perhaps Upstart (which is not packaged for Deb9) or Runit (which itself usually runs under supervision).</p>
<p>During deployment of a new version of the application, the 4 pods are sequentially terminated and replaced by newer ones; but for those ~10 minutes, the other microservice calling this app is still hitting the older endpoints, causing 502/404 errors. Does anyone know of a way to deploy 4 new pods, drain traffic from the old ones to the new ones, and only terminate the old pods after all connections to the previous version are finished?</p>
<p>This probably means you don't have a readiness probe set up? Because the default is already to only roll 25% of the pods at once. If you have a readiness probe, this will include waiting until the new pods are actually available and Ready but otherwise it only waits until they start.</p>
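<p>A minimal readiness probe sketch for the pod template; the path and port are assumptions about your app, and the endpoint must only return success once the pod can really serve traffic:</p> <pre><code>spec:
  containers:
    - name: app
      image: myorg/app:v2          # placeholder
      readinessProbe:
        httpGet:
          path: /healthz           # hypothetical health endpoint
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 5
        failureThreshold: 3
</code></pre> <p>With that in place, the rollout only removes old pods once enough new ones report Ready, so callers shouldn't see the 502/404 window.</p>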
<p>I'm reading the Kubernetes docs on <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/" rel="nofollow noreferrer">Reserve Compute Resources for System Daemons</a>, and it says "Be extra careful while enforcing system-reserved reservation since it can lead to critical system services being CPU starved, OOM killed, or unable to fork on the node."</p> <p>I've seen this warning in a few places, and I'm having a hard time understanding the practical implication.</p> <p>Can someone give me a scenario in which enforcing <code>system-reserved</code> reservation would lead to system services being starved, etc, that would NOT happen if I did not enforce it?</p>
<p>You probably have at least a few things running on the host nodes outside of Kubernetes' view. Like systemd, some hardware stuffs, maybe sshd. Minimal OSes like CoreOS definitely have a lot less, but if you're running on a more stock OS image, you need to leave room for all the other gunk that comes with them. Without leaving RAM set aside, the Kubelet will happily use it all up and then when you go to try and SSH in to debug why your node has gotten really slow and unstable, you won't be able to.</p>
<p>I am working on moving a application which require near real time exchange of data between processes running in multiple containers in kubernetes cluster. I am thinking of using redis cache for this purpose. </p> <p>The type of data that needs to be exchanges are simple types like double,string values. The frequency of exchange needs to near real time(sub seconds)</p> <p>are there any other more performant mechanisms available for exchanging data between containers hosted in kubernetes environment?</p>
<p>This is hugely complex question with way more nuance than can fit in here. It very much depends on the object sizes, uptime requirements, cluster scale, etc. I would recommend you try all of them, evaluate performance, and analyze failure modes as they apply to your use case.</p> <p>Some things you can try out:</p> <ul> <li>Redis</li> <li>Memcached</li> <li>Local files with mmap</li> <li>Network block device with mmap</li> <li>NFS with mmap</li> <li>All three of the above with RocksDB</li> <li>Postgres</li> <li>Kafka</li> </ul> <p>On the encodings side evaluate:</p> <ul> <li>JSON (don't use this, just for baseline)</li> <li>ProtocolBuffers</li> <li>CapnProto</li> <li>Msgpack</li> <li>Maybe BSON?</li> </ul>
<pre><code>values.yaml aa: bb: cc: dd: "hi~!!" </code></pre> <p>In the values ​​file above, the value "cc" is a variable. I'm want to get "hi~!!".</p> <pre><code>myPod.yaml apiVersion: v1 ... ... data: myData: {{ printf "%s\.dd" $variableKey | index .Values.aa.bb }} </code></pre> <p>Is this possible?</p>
<p>You need two separate args, <code>{{ index .Values.aa.bb $variableKey "dd" }}</code></p>
<p>Does anyone know how to get a Kubernetes deployment to automatically update when a configMap is changed?</p>
<p>Unfortunately there is nothing built in for this. You used the <code>helm</code> tag, so with Helm you do this by setting a checksum of the rendered configmap (or secret, same issue there) as an annotation in the pod template. This means that changing the configmap results in a (meaningless) change to the pod template, which triggers a rolling update.</p>
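<p>The usual Helm pattern looks roughly like this in the Deployment template, assuming your ConfigMap template lives at <code>templates/configmap.yaml</code>:</p> <pre><code># templates/deployment.yaml (excerpt)
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
</code></pre> <p>Any edit to the ConfigMap changes the checksum, which changes the pod template and triggers the rolling update.</p>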
<p>I apologize for my poor English.</p> <p>I created 1 master node and 1 worker node in the cluster, and deployed a container (replicas: 4).<br> Then <code>kubectl get all</code> shows something like the below (omitted):</p> <pre><code>NAME  NODE pod/container1 k8s-worker-1.local pod/container2 k8s-worker-1.local pod/container3 k8s-worker-1.local pod/container4 k8s-worker-1.local </code></pre> <p>Next, I added 1 worker node to this cluster, but all containers stay deployed on worker1.<br> Ideally, I want 2 containers to stop and start up on worker2, like below.</p> <pre><code>NAME  NODE pod/container1 k8s-worker-1.local pod/container2 k8s-worker-1.local pod/container3 k8s-worker-2.local pod/container4 k8s-worker-2.local </code></pre> <p>Do I need to run some commands after adding the additional node?</p>
<p>Scheduling only happens when a pod is started. After that, it won't be moved. There are tools out there for deleting (evicting) pods when nodes get too imbalanced, but if you're just starting out I wouldn't go that far for now. If you delete your 4 pods and recreate them (or let the Deployment system recreate them as is more common in a real situation) they should end up more balanced (though possibly not 2 and 2 since the system isn't exact and spreading out is only one of the factors used in scheduling).</p>
<p>I am able to access the <code>nginx ingress controller</code> on the <code>NodePort</code>. My goal is to access the controller on <code>port 80</code>.</p> <blockquote> <p>Output of <code>kubectl -n ingress-nginx describe service/ingress-nginx</code></p> </blockquote> <pre><code>Name: ingress-nginx Namespace: ingress-nginx Labels: app.kubernetes.io/name=ingress-nginx app.kubernetes.io/part-of=ingress-nginx Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/par... Selector: app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx Type: NodePort IP: 10.100.48.223 Port: http 80/TCP TargetPort: 80/TCP NodePort: http 30734/TCP Endpoints: 192.168.0.8:80 Port: https 443/TCP TargetPort: 443/TCP NodePort: https 32609/TCP Endpoints: 192.168.0.8:443 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>I have few ideas of solving that problem:</p> <ul> <li>redirect traffinc incoming on port 30734 to port 80 via <code>iptables</code></li> <li>resize the range for nodeports so port 80 can be a nodeport as well</li> </ul> <p>I am not sure if these are common ways to do this, so I'd love to hear how you usually deal with this. Probably there is another component necessary?</p>
<p>The normal way to handle this is with a LoadBalancer mode service which puts a cloud load balancer in front of the existing NodePort so that you can remap the normal ports back onto it.</p>
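<p>Roughly, that means changing the Service type (shown here against the names from the question; the rest is a sketch, and on-prem you'd need something like MetalLB to hand out the external IP):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80          # external port 80 maps straight to the controller
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
</code></pre>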
<p>I want to load-balance 2 stateful applications running on 2 pods. This application cannot have 2 replicas as it is stateful. </p> <p>I tried giving the same service names to both the pods but it looks like Kubernetes get confused and nothing is served.</p> <p>I am using on-premies Kubernetes cluster with metallb as a load-balancer.</p> <p>Currently, these pods are exposed over public IP with service TYPE as a load-balancer and added A record to both the pods. But it cannot do a health check with DNS.</p> <p>I only think of having Nginx pod and do mod-proxy to it. Is there any better solution other than this?</p>
<p>The selector on a service can be anything, and can match pods from multiple statefulsets (or deployments). So make a label on your pods and use that in the selector of a new service to target both.</p>
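<p>A sketch, assuming you add a common label such as <code>app: my-db</code> to the pod template of both StatefulSets:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-db-all
spec:
  type: LoadBalancer
  selector:
    app: my-db          # matches pods from both StatefulSets
  ports:
    - port: 5432        # placeholder port
      targetPort: 5432
</code></pre> <p>MetalLB then gives that one Service a single external IP, and the normal readiness-probe-based endpoint handling decides which of the two pods receive traffic.</p>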
<p>This is my first time running through the Kubernetes tutorial. I installed Docker, Kubectl and Minikube on a headless Ubuntu server (18.04). I ran Minikube like this - </p> <pre><code>minikube start --vm-driver=none </code></pre> <p>I have a local docker image that run a restful service on port 9110. I create a deployment and expose it like this - </p> <pre><code>kubectl run hello-node --image=dbtemplate --port=9110 --image-pull-policy=Never kubectl expose deployment hello-node --type=NodePort </code></pre> <p>status of my service - </p> <pre><code># kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE hello-node NodePort 10.98.104.45 &lt;none&gt; 9110:32651/TCP 39m kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 3h2m # kubectl describe services hello-node Name: hello-node Namespace: default Labels: run=hello-node Annotations: &lt;none&gt; Selector: run=hello-node Type: NodePort IP: 10.98.104.45 Port: &lt;unset&gt; 9110/TCP TargetPort: 9110/TCP NodePort: &lt;unset&gt; 32651/TCP Endpoints: 172.17.0.5:9110 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; # minikube ip 192.168.1.216 </code></pre> <p>As you can see, the service is available on the internal IP of 172.17.0.5. </p> <p>Is there some way for me to get this service mapped to/exposed on the IP of the parent host, which is 192.168.1.216. I would like my restful service at 192.168.1.216:9110. </p>
<p>I think <code>minikube tunnel</code> might be what you're looking for. <a href="https://github.com/kubernetes/minikube/blob/master/docs/networking.md" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/blob/master/docs/networking.md</a></p> <blockquote> <p>Services of type <code>LoadBalancer</code> can be exposed via the <code>minikube tunnel</code> command.</p> </blockquote>
<p>is it possible to pass a function as the value in a K8s' pod command for evaluation? I am passing in JVM arguments to set the MaxRAM parameter and would like to read the cgroups memory to ascertain a value for the argument</p> <p>This is an example of what I'm trying to do</p> <pre><code>- command: - /opt/tools/Linux/jdk/openjdk1.8.0_181_x64/bin/java - -XX:MaxRAM=$(( $(cat /sys/fs/cgroup/memory/memory.limit_in_bytes) * 70/100 )) </code></pre> <p>Unfortunately the above doesn't work and fails with the following error:</p> <pre><code>Improperly specified VM option 'MaxRAM=$(( $(cat /sys/fs/cgroup/memory/memory.limit_in_bytes) * 100 / 70 ))' Error: Could not create the Java Virtual Machine. Error: A fatal exception has occurred. Program will exit. </code></pre> <p>Is this doable? If so, what's the right way to do it? Thanks!</p>
<p>That is shell syntax so you need to run a shell to interpret it.</p> <pre><code>command: - sh - -c - | exec /opt/tools/Linux/jdk/openjdk1.8.0_181_x64/bin/java -XX:MaxRAM=$(( $(cat /sys/fs/cgroup/memory/memory.limit_in_bytes) * 70/100 )) </code></pre>
<p>If a distributed computing framework spins up nodes for running Java/ Scala operations then it has to include the JVM in every container. E.g. every Map and Reduce step spawns its own JVM.</p> <p>How does the efficiency of this instantiation compare to spinning up containers for languages like Python? Is it a question of milliseconds, few seconds, 30 seconds? Does this cost add up in frameworks like Kubernetes where you need to spin up many containers?</p> <p>I've heard that, much like Alpine Linux is just a few MB, there are stripped down JVMs, but still, there must be a cost. Yet, Scala is the first class citizen in Spark and MR is written in Java.</p>
<p>Linux container technology uses layered filesystems so bigger container images don't generally have a ton of runtime overhead, though you do have to download the image the first time it is used on a node which can potentially add up on truly massive clusters. In general this is not usually a thing to worry about, aside from the well known issues of most JVMs being a bit slow to start up. Spark, however, does not spin up a new container for every operation as you describe. It creates a set of executor containers (pods) which are used for the whole Spark execution run.</p>
<p>I am fairly new to Kubernetes and had a question concerning kube-state-metrics. When I simply monitor Kubernetes using Prometheus I obtain a set of metrics from the cAdvisor, the nodes (node exporter), the pods, etc. When I include the kube-state-metrics, I seem to obtain more "relevant" metrics. Do kube-state-metrics allow to scrape <strong>"new"</strong> information from Kubernetes or are they rather <strong>"formatted"</strong> metrics using the initial Kubernetes metrics (from the nodes, etc. I mentioned earlier). </p>
<p>The two are basically unrelated. Cadvisor is giving you low-level stats about the containers like how much RAM and CPU they are using. KSM gives you info from the Kubernetes API like the Pod object status. Both are useful for different things and you probably want both.</p>
<p>My namespace contains multiple secrets and pods. The secrets are selectively mounted on pods as volumes using the deployment spec. Is it possible to deny specific secrets from being mounted as volumes in certain pods. I have tested RBAC and it prevents pods from accessing secrets over api. Is there a similar mechanism for mounted secrets considering that there is a security risk in allowing all secrets to be mounted in pods in the same namespace.</p>
<p>The other answer is the correct one but in the interest of completeness, you could write an admission controller which checks requests against some kind of policy. This is what the built in NodeRestriction admission controller does to help limit things so the kubelet can only access secrets for pods it is supposed to be running. </p>
<p>I have a config yaml file for a kubernetes deployment that looks like this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: app: &lt;some_app&gt; name: &lt;some_app&gt; namespace: dataengineering spec: replicas: 1 strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: labels: app: &lt;some_app&gt; spec: dnsPolicy: ClusterFirst restartPolicy: Always terminationGracePeriodSeconds: 30 containers: - image: 127579856528.dkr.ecr.us-west-2.amazonaws.com/dataengineering/&lt;some_app&gt;:latest imagePullPolicy: Always name: &lt;some_app&gt; env: - name: ES_HOST value: "vpc-some-name-dev-wrfkk5v7kidaro67ozjrv4wdeq.us-west-2.es.amazonaws.com" - name: ES_PORT value: "443" - name: DATALOADER_QUEUE valueFrom: configMapKeyRef: name: &lt;some_name&gt; key: DATALOADER_QUEUE - name: AWS_DEFAULT_REGION value: "us-west-2" - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: &lt;some_name&gt; key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: &lt;some_name&gt; key: AWS_SECRET_ACCESS_KEY ... </code></pre> <p>Currently, this file is in <code>dev/deployment.yaml</code> but I also want a <code>prod/deployment.yaml</code>. Instead of copying this whole file over, is there a better way to DRY up this file so it can be used for both dev and prod clusters? The parts of this file that differ are some of the environment variables (I used a different <code>DATALOADER_QUEUE</code> variable for prod and dev, and the AWS keys. What can be done?</p> <p>I looked into some options like a configmap. How does one do this? What's a mounted volume? I'm reading this: <code>https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume</code> but I'm not sure what it is.... what is a volume? How do I access the data stored in this "volume"?</p> <p>Can the image be switched from prod to dev? I know that seems odd...</p>
<p>Something like this would help with the env vars:</p> <pre><code> envFrom: - configMapRef: name: myapp-config - secretRef: name: myapp-secrets </code></pre> <p>You can then use different namespaces for dev vs. prod so the references don't have to vary. For handling labels, look at Kustomize overlays and setting labels at the overlay level.</p>
<p>I have a service that generates a picture. Once it's ready, the user will be able to download it.</p> <p>What is the recommended way to share a storage volume between a worker pod and a backend service?</p>
<p>In general the recommended way is "don't". While a few volume providers support multi-mounting, it's very hard to do that in a way that isn't sadmaking. Preferably use an external services like AWS S3 for hosting the actual file content and store references in your existing database(s). If you need a local equivalent, check out Minio for simple cases.</p>
<p>I have a kustomization file that's generating a ConfigMap and behaving as expected. I need to be able to create a new pod that pulls in the environment variables from that same configMap without regenerating the configMap. In other words, I'm having to do this:</p> <pre><code>envFrom: - configMapRef: name: config-name-HASH </code></pre> <p>but I want to do this:</p> <pre><code>envFrom: - configMapRef: name: config-name </code></pre> <p>without needing to regenerate the ConfigMap with kustomize. I've found PodPresets which would seem to be the fix, but that's in Alpha, so it's not good for my organization.</p>
<p>That is not possible. While ConfigMap volumes update in-place and automatically (so you could switch that and make your app re-read the file when it updates), env vars pulled from a ConfigMap (or Secret, all of this applies to both) are only checked when the pod is launched. The usual workaround is to put a checksum or generation ID of your configmap as an annotation in the pod template which will automatically trigger a rolling update through the Deployment, StatefulSet, or DaemonSet controllers.</p>
<p>I have one service called "workspace-service-b6" which is running on port 5000, See the below ingress file. Now I want to serve the static content on the same service (workspace-service-b6) by adding the path route.</p> <p>Example:- Service is working on <a href="https://workspace-b6.dev.example.com" rel="nofollow noreferrer">https://workspace-b6.dev.example.com</a></p> <p>Now if the user adds "/workspace/v2/ "at the end of the URL.</p> <p>Like this:- <a href="https://workspace-b6.dev.example.com" rel="nofollow noreferrer">https://workspace-b6.dev.example.com</a><strong>/workspace/v2/</strong> it will redirect to s3 bucket "<a href="https://s3.console/buckets/xyz/abc/build" rel="nofollow noreferrer">https://s3.console/buckets/xyz/abc/build</a>" where my static content is available.</p> <p>My Ingress file :-</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: b6-ingress namespace: b6 annotations: kubernetes.io/ingress.class: "nginx" kubernetes.io/tls-acme: "true" spec: tls: - hosts: - workspace-b6.dev.example.com secretName: xyz-crt rules: - host: workspace-b6.dev.example.com http: paths: - backend: serviceName: workspace-service-b6 service port: 5000 </code></pre>
<p>While it’s kind of possible, the real answer is “don’t”. The ingress system is just a proxy, set up separate pods for content.</p>
<p>The Google Cloud Platform Kubernetes Engine based backend deployment I work on has between 4-60 nodes running at all times, spanning two different services.</p> <p>I want to interface with an API that employs IP whitelisting however, which would mean that all outgoing requests would have to be funneled through one singular IP address. </p> <p>How do I do this? The deployment uses an Nginx Ingress controller, which doesn't allow many options when it comes to the <em>egress</em> part of things.</p> <p>I tried setting up a VM outside of the deployment, but still on GCP in the same region, and was unable to set up a forward proxy. At least, not one that I could connect to off my local device. Not sure if this was because of GCP's firewall or anything of that sort. This was using Squid, as well Apache, with no success in either.</p> <p>I also looked at the Cloud NAT option, but it seems like I would have to recreate all the services, CI/CD pipelines, and DNS settings etc. I would ideally avoid that, as it would be a few days worth of work and would call for some downtime of the systems as well.</p> <p>Ideally I would have a working forward proxy. I tried looking for Docker images that would function as one, but that does not seem to be a thing, sadly. SSHing into a VM to set up such a proxy hasn't led to success yet, either.</p>
<p>You have already found the solution, you have to rebuild things using either Cloud NAT or an equivalent solution made yourself. Even that is relatively recent and I've not actually tried it myself, as recently as a 6 months ago we were told this was not supported for GKE. Our solution was the proxy idea you mentioned, an HTTP proxy running outside of GKE and directing things through it at the app code level rather than infrastructure. It was not fun.</p>
<p>When I type <code>kubectl edit clusterrolebinding foo-role</code>, I can see something like:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: foo-role roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - kind: ServiceAccount name: foo-user namespace: ns1 - kind: ServiceAccount name: foo-user namespace: ns2 </code></pre> <p>I can add a new <code>ClusterRoleBinding</code> for namespace <code>ns3</code> by appending the following config to above file:</p> <pre class="lang-yaml prettyprint-override"><code>- kind: ServiceAccount name: foo-user namespace: ns3 </code></pre> <p>However, I want to use Kustomize to add new bindings instead of manually modifying the above file.</p> <p>I tried to apply the .yaml file below:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: foo-role selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/foo-role uid: 64a4a787-d5ab-4c83-be2b-476c1bcb6c96 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - kind: ServiceAccount name: foo-user namespace: ns3 </code></pre> <p>It did add a new <code>ClusterRoleBinding</code> in the namespace <code>ns3</code>, but it will remove existing <code>ClusterRoleBinding</code>s for <code>ns1</code> and <code>ns2</code>.</p> <p>Is there a way to add new <code>ClusterRoleBinding</code> with Kustomize without removing existing ones?</p>
<p>Give them different names in the metadata. You didn't make a new one, you just overwrote the same one.</p>
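<p>For example, a second binding with its own name leaves the existing <code>foo-role</code> binding untouched:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: foo-role-ns3        # a new name, not foo-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
  - kind: ServiceAccount
    name: foo-user
    namespace: ns3
</code></pre>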
<p>I came across an <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/" rel="nofollow noreferrer">article</a> which States that we can have mixed os in cluster.</p> <p>Article talk about having flannel as networking plugin but i want to use Calico opensource plugin instead as it provides encryption.</p> <p>Any idea if this is possible using Calico opensource?</p>
<p>Calico for Windows does exist <a href="https://www.tigera.io/tigera-products/calico-for-windows/" rel="nofollow noreferrer">https://www.tigera.io/tigera-products/calico-for-windows/</a></p> <p>But it appears to be a commercial product so you would probably want to contact them to ask about it. Assuming it's equivalent to normal Calico, I don't see any reason it wouldn't work. BGP and IPIP are both standardized protocols that aren't specific to any OS.</p>
<p>The setup is on GCP GKE. I deploy a Postgres database with a persistent volume (retain reclaim policy), and:</p> <pre><code> strategy: type: Recreate </code></pre> <p>Will the data be retained or re-initialized if the database pod gets deleted?</p>
<p>The update strategy has nothing to do with the on-delete behavior. That's used when a change to the pod template triggers an update. Basically, does it nuke the old ReplicaSet all at once or gradually scale things up/down. You almost always want RollingUpdate unless you are working with software that requires all nodes be on exactly the same version and understand this will cause downtime on any change.</p> <p>As for the Retain volume mode, this is mostly a safety net for admins. Assuming you used a PVC, deleting the pod will have no effect on the data since the volume is tied to the claim rather than the pod itself (obviously things will go down while the pod restarts but that's unrelated). If you delete the PVC, a Retain volume will be kept on the backend but if you wanted to do anything with it you would have to go in and do it manually. It's like an "oops" protection, requiring two steps to actually delete the data.</p>
<p>We got an existed secret in K8S(suppose it is "secret_1") and we want to write a yaml to create a new secret "secret_2", using some values from secret_1.</p> <p>That is, in this yaml we'd like to </p> <ol> <li>Read values from other secret</li> <li>Store values to new secret</li> </ol> <p>Is it possible to do this? It will be great help if a sample can be provided.</p> <p>Thanks in advance.</p>
<p>You cannot do this directly in YAML. You would need to write a script of some kind to do the steps you described, though you can use <code>kubectl get secret -o yaml</code> (or <code>-o json</code>) for a lot of the heavy lifting, possibly with <code>jq</code> for the reformatting.</p>
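<p>A rough sketch of such a script (the key names are placeholders; values in <code>.data</code> come back base64-encoded, and <code>kubectl create secret</code> re-encodes literals for you):</p> <pre><code># read a value out of secret_1
PASSWORD=$(kubectl get secret secret_1 -o jsonpath='{.data.password}' | base64 -d)

# create secret_2 using that value plus anything new
kubectl create secret generic secret_2 \
  --from-literal=password="$PASSWORD" \
  --from-literal=extra-key=extra-value
</code></pre>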
<p>It's possible to make an Ingress Controller, or anything else (preferably something already done, not needing to code a service per say), to send traffic to an external IP? Why: I have an application which will interact with my k8s cluster from the outside, I already know that I can use an Ingress Controller to make its connection to the cluster, but what if the other applications need to reach this external application? Is there a way to do this?</p>
<p>It depends on the controller, but most will work with an ExternalName type Service to proxy to an arbitrary IP even if that's outside the cluster.</p>
<p>I have a requirements.yaml file:</p> <pre><code>dependencies: - name: mongodb-replicaset # Can be found with "helm search &lt;chart&gt;" version: 3.13.0 # This is the binaries repository, as documented in the GitHub repo repository: https://kubernetes-charts.storage.googleapis.com/ </code></pre> <p>And i want to modify the values.yaml file of the mongodb-replicaset chart , espacialy this section:</p> <pre><code>auth: enabled: false existingKeySecret: "" existingAdminSecret: "" existingMetricsSecret: "" # adminUser: username # adminPassword: password # metricsUser: metrics # metricsPassword: password # key: keycontent </code></pre> <p>How can i override the values.yaml file on initialization in a dependency chart?</p>
<p>You put the values under a key matching the name of the upstream chart so</p> <pre><code>mongodb-replicaset: auth: enabled: true etc etc </code></pre>
<p>I am very confused about why my pods are staying in pending status.</p> <p>Vitess seems have problem scheduling the vttablet pod on nodes. I built a 2-worker-node Kubernetes cluster (nodes A &amp; B), and started vttablets on the cluster, but only two vttablets start normally, the other three is stay in pending state. </p> <p>When I allow the master node to schedule pods, then the three pending vttablets all start on the master (first error, then running normally), and I create tables, two vttablet failed to execute.</p> <p>When I add two new nodes (nodes C &amp; D) to my kubernetes cluster, tear down vitess and restart vttablet, I find that the three vttablet pods still remain in pending state, also if I kick off node A or node B, I get <code>vttablet lost</code>, and it will not restart on new node. I tear down vitess, and also tear down k8s cluster, rebuild it, and this time I use nodes C &amp; D to build a 2-worker-node k8s cluster, and all vttablet now remain in pending status.</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE default etcd-global-5zh4k77slf 1/1 Running 0 46m 192.168.2.3 t-searchredis-a2 &lt;none&gt; default etcd-global-f7db9nnfq9 1/1 Running 0 45m 192.168.2.5 t-searchredis-a2 &lt;none&gt; default etcd-global-ksh5r9k45l 1/1 Running 0 45m 192.168.1.4 t-searchredis-a1 &lt;none&gt; default etcd-operator-6f44498865-t84l5 1/1 Running 0 50m 192.168.2.2 t-searchredis-a2 &lt;none&gt; default etcd-test-5g5lmcrl2x 1/1 Running 0 46m 192.168.2.4 t-searchredis-a2 &lt;none&gt; default etcd-test-g4xrkk7wgg 1/1 Running 0 45m 192.168.1.5 t-searchredis-a1 &lt;none&gt; default etcd-test-jkq4rjrwm8 1/1 Running 0 45m 192.168.2.6 t-searchredis-a2 &lt;none&gt; default vtctld-z5d46 1/1 Running 0 44m 192.168.1.6 t-searchredis-a1 &lt;none&gt; default vttablet-100 0/2 Pending 0 40m &lt;none&gt; &lt;none&gt; &lt;none&gt; default vttablet-101 0/2 Pending 0 40m &lt;none&gt; &lt;none&gt; &lt;none&gt; default vttablet-102 0/2 Pending 0 40m &lt;none&gt; &lt;none&gt; &lt;none&gt; default vttablet-103 0/2 Pending 0 40m &lt;none&gt; &lt;none&gt; &lt;none&gt; default vttablet-104 0/2 Pending 0 40m &lt;none&gt; &lt;none&gt; &lt;none&gt; apiVersion: v1 kind: Pod metadata: creationTimestamp: 2018-11-27T07:25:19Z labels: app: vitess component: vttablet keyspace: test_keyspace shard: "0" tablet: test-0000000100 name: vttablet-100 namespace: default resourceVersion: "22304" selfLink: /api/v1/namespaces/default/pods/vttablet-100 uid: 98258046-f215-11e8-b6a1-fa163e0411d1 spec: containers: - command: - bash - -c - |- set -e mkdir -p $VTDATAROOT/tmp chown -R vitess /vt su -p -s /bin/bash -c "/vt/bin/vttablet -binlog_use_v3_resharding_mode -topo_implementation etcd2 -topo_global_server_address http://etcd-global-client:2379 -topo_global_root /global -log_dir $VTDATAROOT/tmp -alsologtostderr -port 15002 -grpc_port 16002 -service_map 'grpc-queryservice,grpc-tabletmanager,grpc-updatestream' -tablet-path test-0000000100 -tablet_hostname $(hostname -i) -init_keyspace test_keyspace -init_shard 0 -init_tablet_type replica -health_check_interval 5s -mysqlctl_socket $VTDATAROOT/mysqlctl.sock -enable_semi_sync -enable_replication_reporter -orc_api_url http://orchestrator/api -orc_discover_interval 5m -restore_from_backup -backup_storage_implementation file -file_backup_storage_root '/usr/local/MySQL_DB_Backup/test'" vitess env: - name: EXTRA_MY_CNF value: /vt/config/mycnf/master_mysql56.cnf image: vitess/lite imagePullPolicy: Always livenessProbe: failureThreshold: 3 httpGet: path: /debug/vars 
port: 15002 scheme: HTTP initialDelaySeconds: 60 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 10 name: vttablet ports: - containerPort: 15002 name: web protocol: TCP - containerPort: 16002 name: grpc protocol: TCP resources: limits: cpu: 500m memory: 1Gi requests: cpu: 500m memory: 1Gi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /dev/log name: syslog - mountPath: /vt/vtdataroot name: vtdataroot - mountPath: /etc/ssl/certs/ca-certificates.crt name: certs readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-7g2jb readOnly: true - command: - sh - -c - |- mkdir -p $VTDATAROOT/tmp &amp;&amp; chown -R vitess /vt su -p -c "/vt/bin/mysqlctld -log_dir $VTDATAROOT/tmp -alsologtostderr -tablet_uid 100 -socket_file $VTDATAROOT/mysqlctl.sock -init_db_sql_file $VTROOT/config/init_db.sql" vitess env: - name: EXTRA_MY_CNF value: /vt/config/mycnf/master_mysql56.cnf image: vitess/lite imagePullPolicy: Always name: mysql resources: limits: cpu: 500m memory: 1Gi requests: cpu: 500m memory: 1Gi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /dev/log name: syslog - mountPath: /vt/vtdataroot name: vtdataroot - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-7g2jb readOnly: true dnsPolicy: ClusterFirst priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - hostPath: path: /dev/log type: "" name: syslog - emptyDir: {} name: vtdataroot - hostPath: path: /etc/ssl/certs/ca-certificates.crt type: "" name: certs - name: default-token-7g2jb secret: defaultMode: 420 secretName: default-token-7g2jb status: conditions: - lastProbeTime: null lastTransitionTime: 2018-11-27T07:25:19Z message: '0/3 nodes are available: 1 node(s) had taints that the pod didn''t tolerate, 2 Insufficient cpu.' reason: Unschedulable status: "False" type: PodScheduled phase: Pending qosClass: Guaranteed </code></pre>
<p>As you can see down at the bottom:</p> <pre><code>message: '0/3 nodes are available: 1 node(s) had taints that the pod didn''t tolerate, 2 Insufficient cpu.' </code></pre> <p>Meaning that your two worker nodes are out of resources based on the limits you specified in the pod. You will need more workers, or smaller CPU requests.</p>
<p>I am running filebeat as deamon set with 1Gi memory setting. my pods getting crashed with <code>OOMKilled</code> status.</p> <p>Here is my limit setting </p> <pre><code> resources: limits: memory: 1Gi requests: cpu: 100m memory: 1Gi </code></pre> <p>What is the recommended memory setting to run the filebeat.</p> <p>Thanks</p>
<p>The RAM usage of Filebeat is relative to how much it is doing, in general. You can limit the number of harvesters to try and reduce things, but overall you just need to run it uncapped and measure what the normal usage is for your use case and scenario.</p>
<p>I want to create a symlink using a kubernetes deployment yaml. Is this possible?</p> <p>Thanks</p>
<p>Not really but you could set your command to something like <code>[/bin/sh, -c, "ln -s whatever whatever &amp;&amp; exec originalcommand"]</code>. Kubernetes isn't involved per se, but it would probably do the job. Normally that should be part of your image build process, not a deployment-time thing.</p>
<p>With the instruction <a href="https://docs.aws.amazon.com/eks/latest/userguide/worker.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/worker.html</a> it is possible to bring up Kube cluster worker nodes. I wanted the worker node not to have public ip. I don't see Amazon gives me that option as when running the cloudformation script. How can I have option not to have public ip on worker nodes</p>
<p>You would normally set this up ahead of time in the Subnet rather than doing it per machine. You can set <code>Auto-assign public IPv4 address</code> to false in the subnets you are using the for the worker instances.</p>
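<p>If you're scripting it, the same subnet attribute can be flipped with the AWS CLI (the subnet ID is a placeholder); the worker nodes will then need a NAT gateway or similar for outbound access:</p> <pre><code>aws ec2 modify-subnet-attribute \
  --subnet-id subnet-0123456789abcdef0 \
  --no-map-public-ip-on-launch
</code></pre>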
<p>I would like to mount an amazon ebs volume (with data on it) to my pod. The problem is that I didn't find a way to determine in advance the availability zone of the pod before starting it. If the pod doesn't start on the same availability zone of the volume, it leads to a binding error.</p> <p>How can I specify or determine the availability zone of a pod before starting it?</p>
<p>You use the <code>topology.kubernetes.io/zone</code> label and node selectors for this kind of thing. However unless you're on a very old version of Kubernetes, this should be handled automatically by the scheduler.</p>
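<p>If you ever do need to pin it manually, a nodeSelector on that label in the pod spec works (the zone value is an example and must match the volume's AZ):</p> <pre><code>spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-west-2a
</code></pre>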
<p>I have been deploying apps to Kubernetes for the last 2 years, and in my org all our apps (especially stateless ones) are running in Kubernetes. I still have a fundamental question, because very recently we found some issues with a few of our Python apps.</p> <p>Initially, when we deployed our Python apps (written in Flask and Django), we ran them using <code>python app.py</code>. It's known that, because of the GIL, Python doesn't really execute threads in parallel, so it will only serve one request at a time; if that one request is CPU heavy, it will not be able to process further requests. This sometimes causes the health API to not work. We have observed that if there is a single request which is not I/O bound and is doing some computation, it will hold the CPU and we cannot process another request in parallel. And since it's only doing a few operations, we have observed there is no increase in CPU utilization either. This has an impact on how <code>HorizontalPodAutoscaler</code> works; it's unable to scale the pods.</p> <p>Because of this, we started using <code>uWSGI</code> in our pods. Basically <code>uWSGI</code> can run multiple worker processes under the hood and handle multiple requests in parallel, automatically spinning up new processes on demand. But here comes another problem we have seen: <code>uwsgi</code> is too slow at auto-scaling the processes to correctly serve the requests, and it causes <code>HTTP 503</code> errors. Because of this we are unable to serve a few of our APIs at 100% availability.</p> <p>At the same time, all our other apps, written in <code>nodejs</code>, <code>java</code> and <code>golang</code>, are giving 100% availability.</p> <p>I am looking for the best way to run a Python app at 100% (99.99%) availability in Kubernetes, with the following:</p> <blockquote> <ol> <li>Having the health API and liveness API served by the app</li> <li>An app running in Kubernetes</li> <li>If possible without uwsgi (a single process per pod is the fundamental Docker concept)</li> <li>If with uwsgi, are there any specific configs we can apply for the k8s environment?</li> </ol> </blockquote>
<p>We use Twisted's WSGI server with 30 threads and it's been solid for our Django application. It keeps to a single-process-per-pod model, which more closely matches Kubernetes' expectations, as you mentioned. Yes, the GIL means only one of those 30 threads can be running Python code at a time, but as with most webapps, most of those threads are blocked on I/O (usually waiting for a response from the database) the vast majority of the time. Then run multiple replicas on top of that, both for redundancy and to give you true concurrency at whatever level you need (we usually use 4-8 depending on the site traffic, some big ones are up to 16).</p>
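<p>For reference, a minimal sketch of that setup — the project module and port are placeholders, and the thread count is just the value we happen to use:</p> <pre><code># serve.py — run a Django WSGI app under Twisted with a 30-thread pool
from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.wsgi import WSGIResource

from myproject.wsgi import application  # placeholder: your Django WSGI module

reactor.suggestThreadPoolSize(30)  # threads available for WSGI calls
resource = WSGIResource(reactor, reactor.getThreadPool(), application)
reactor.listenTCP(8000, Site(resource))  # one process per pod
reactor.run()
</code></pre> <p>Each pod then runs exactly one of these processes, and the Deployment's <code>replicas</code> field provides the real parallelism.</p>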
<p>Dnsjava is an implementation of DNS in Java. We have built some of our application logic around it. I just wanted to check whether Kubernetes would support DNS interfaces at the application level.</p>
<p>Not entirely sure what you mean, but Kubernetes doesn't care what you run on it. Your workloads are your problem :)</p>
<p>I'm using <code>kubectl apply</code> to update my Kubernetes pods:</p> <pre><code>kubectl apply -f /my-app/service.yaml
kubectl apply -f /my-app/deployment.yaml
</code></pre> <p>Below is my service.yaml:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    run: my-app
spec:
  type: NodePort
  selector:
    run: my-app
  ports:
  - protocol: TCP
    port: 9000
    nodePort: 30769
</code></pre> <p>Below is my deployment.yaml:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      run: my-app
  replicas: 2
  template:
    metadata:
      labels:
        run: my-app
    spec:
      containers:
      - name: my-app
        image: dockerhubaccount/my-app-img:latest
        ports:
        - containerPort: 9000
          protocol: TCP
      imagePullSecrets:
      - name: my-app-img-credentials
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
</code></pre> <p>This works fine the first time, but on subsequent runs, my pods are not getting updated.</p> <p>I have read the suggested workaround at <a href="https://github.com/kubernetes/kubernetes/issues/33664" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/33664</a> which is:</p> <pre><code>kubectl patch deployment my-app -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
</code></pre> <p>I was able to run the above command, but it did not resolve the issue for me.</p> <p>I know that I can trigger pod updates by manually changing the image tag from "latest" to another tag, but I want to make sure I get the latest image without having to check Docker Hub.</p> <p>Any help would be greatly appreciated.</p>
<p>If nothing changes in the deployment spec, the pods will not be updated for you. This is one of many reasons it is not recommended to use <code>:latest</code>, as the other answer went into more detail on. The <code>Deployment</code> controller is very simple and pretty much just does <code>DeepEquals(old.Spec.Template, new.Spec.Template)</code>, so you need some actual change, such as you have with the PATCH call by setting a label with the current datetime.</p>
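<p>A common pattern, sketched here with a placeholder version tag, is to tag every build and point the Deployment at the new tag so there is always a real diff:</p> <pre><code>docker build -t dockerhubaccount/my-app-img:v1.0.1 .
docker push dockerhubaccount/my-app-img:v1.0.1
kubectl set image deployment/my-app my-app=dockerhubaccount/my-app-img:v1.0.1
</code></pre> <p>On newer kubectl versions (1.15+), <code>kubectl rollout restart deployment my-app</code> is another way to force a fresh rollout without hand-editing labels.</p>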
<p>I am looking to keep some kind of baseline for everything applied to Kubernetes (or just a namespace).</p> <p>For example, versioning the microservices in a list and then checking that in to GitHub, in case I need to roll something back.</p>
<p>Check out Velero, it is a backup tool for kubernetes. I don’t think it can use git as a backend, but you could add that (or use s3 or similar).</p>
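<p>As a rough example (names are placeholders, and this assumes Velero is already installed with an object-store backend such as S3):</p> <pre><code>velero backup create my-namespace-backup --include-namespaces my-namespace
velero restore create --from-backup my-namespace-backup
</code></pre>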
<p>I am new to K8s. Say I want to start up a RabbitMQ in my cluster but I also want to ensure its default AMQP port is secure (AMQPS). Is it possible to do so using a GCP-managed key + certificate? If so, how? For example, I was thinking of using a LoadBalancer somehow to take care of it. Or, maybe Ingress, although it's not HTTP-based traffic (still, maybe we can work around this?)</p> <p>Thanks</p>
<p>I don’t think so; all the ways you can interact with Google-managed certs are aimed at HTTPS. You can use cert-manager with LetsEncrypt though.</p>
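<p>A rough sketch of the cert-manager route — the issuer name and hostname below are assumptions — would be to request a Certificate and mount the resulting TLS secret into RabbitMQ for its AMQPS listener:</p> <pre><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: rabbitmq-tls
spec:
  secretName: rabbitmq-tls          # TLS secret RabbitMQ can mount
  dnsNames:
  - rabbitmq.example.com            # placeholder hostname
  issuerRef:
    name: letsencrypt-prod          # assumes a ClusterIssuer with this name exists
    kind: ClusterIssuer
</code></pre> <p>Depending on how the domain is exposed, a non-HTTP port like AMQPS may need the DNS-01 challenge rather than HTTP-01.</p>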
<p><a href="https://github.com/kubernetes-retired/contrib/tree/master/ingress/controllers/nginx/examples/tls" rel="nofollow noreferrer">https://github.com/kubernetes-retired/contrib/tree/master/ingress/controllers/nginx/examples/tls</a> </p> <p>I've tried to configure https for my ingress resource by this tutorial. I've done all the needed steps, but when I try to go to my site it send me: </p> <p><a href="https://i.stack.imgur.com/23EGK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/23EGK.png" alt="enter image description here"></a></p> <p>Should I do some additional steps?</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/from-to-www-redirect: "true" spec: rules: - host: www.domain.com http: paths: - backend: serviceName: front-end-service servicePort: 80 path: / - host: www.domain.com http: paths: - backend: serviceName: back-end-service servicePort: 3000 path: /api tls: - hosts: - www.domain.com secretName: my-sectet </code></pre> <p>Sectet which I've created exist . I've checked it by using this command <code>kubectl get secrets</code> and name the same like I use in ingress resource.</p> <p>If you need additiona info , pls let me know</p>
<p>As mentioned in the comments, this tutorial is guiding you through setting up a self-signed certificate, which is not trusted by your browser. You would need to provide a cert your browser trusts or temporarily ignore the error locally. LetsEncrypt is an easy and free way to get a real cert, and cert-manager is a way to do that via Kubernetes.</p>
<p>I was looking for a load-balancing technique with health checks while making my worker nodes communicate with the API server. </p> <p>Kubernetes itself has a service called "kubernetes" whose endpoints are the API servers.</p> <p>I entered the domain of this service in the kubeconfig of the worker nodes and it is behaving well.</p> <p>The only concern is that there are no health checks of the API servers; if any of them goes down, the service will still forward traffic to it.</p> <p>Can I configure some health check here?</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-09-06T07:54:44Z
  labels:
    component: apiserver
    provider: kubernetes
  name: kubernetes
  namespace: default
  resourceVersion: "96"
  selfLink: /api/v1/namespaces/default/services/kubernetes
  uid: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
spec:
  clusterIP: 10.32.0.1
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 6443
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  type: ClusterIP
status:
  loadBalancer: {}
</code></pre> <p>I know I can use an LB like HAProxy or a cloud provider's LB, but I want to achieve this inside the cluster only.</p>
<p>It's magic ✨. The endpoints of the service are managed directly by the apiservers themselves. That's why it has no selector. The Service is really only there for compat with cluster DNS. It is indeed what you use to talk to the API from inside the cluster, this is generally detected automatically by most client libraries.</p>
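<p>You can see this for yourself — the Endpoints object lists the apiserver addresses even though the Service has no selector:</p> <pre><code>kubectl get endpoints kubernetes -n default -o yaml
</code></pre>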
<p>My Traefik Ingress DaemonSet shows some awkard metrics in its dashboard. </p> <p>Is it correct? I really doubt that my average response time is beyond minutes.</p> <p>I think I'm doing something wrong but I have no idea what it is.</p> <p><a href="https://i.stack.imgur.com/72SCs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/72SCs.png" alt="Traefik Dashboard"></a></p>
<p>Answered in comments: Traefik's stats are very literal, and when using WebSockets it counts the socket as a single HTTP connection (because it technically is one) that lasts for minutes or hours, which is what skews the averages.</p>
<p>I need this information to measure mean time to recovery (MTTR). I have tried using different kube-state-metrics but it does not seem to help much. Any hints on measuring MTTR will also be appreciated</p>
<p>You can use the pod status information; it records a last transition time for each status condition. In this case you probably want the time difference between <code>PodScheduled</code> and <code>Ready</code>, but it's up to you to decide what counts as "startup time" (for example, does the time spent pulling container images count?).</p>
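<p>For example, one way to pull the two timestamps out with kubectl (the pod name is a placeholder):</p> <pre><code>kubectl get pod my-pod -o jsonpath='{.status.conditions[?(@.type=="PodScheduled")].lastTransitionTime}'
kubectl get pod my-pod -o jsonpath='{.status.conditions[?(@.type=="Ready")].lastTransitionTime}'
</code></pre> <p>The difference between the two is a reasonable proxy for startup time, subject to the caveats above.</p>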
<p>I followed this tutorial: <a href="https://cloud.google.com/python/django/kubernetes-engine" rel="nofollow noreferrer">https://cloud.google.com/python/django/kubernetes-engine</a> on how to deploy a Django application to GKE. </p> <p>Unfortunately, I made a mistake while deploying the application, and one of my 3 pods in the cluster failed to come up. I believe I've fixed the failure, and now want to redeploy the application.</p> <p>I can't figure out how to do that, or if I didn't fix the error and that's why it is still in error. I don't know how to diagnose if that's the case either...</p> <p>After fixing my Dockerfile, I re-built and re-pushed to the Google Container Registry. It seemed to update, but I have no idea how to track this sort of deployment.</p> <p>How does the traditional model of pushing a new version of an application and rolling back work in GKE? </p> <p>Edit: The issue I'm specifically having is I updated <code>settings.py</code> in my Django application but this is not being propagated to my cluster</p>
<p>The normal way would be to push a new image with a new tag and then edit the container image tag in the Deployment (<a href="https://github.com/GoogleCloudPlatform/python-docs-samples/blob/78d8a59d59c5eca788495666b43283534a50b7ee/container_engine/django_tutorial/polls.yaml#L42" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/python-docs-samples/blob/78d8a59d59c5eca788495666b43283534a50b7ee/container_engine/django_tutorial/polls.yaml#L42</a>), and then re-apply the file (<code>kubectl apply -f polls.yaml</code>). However, because their example is not using image tags (read: it is implicitly using the tag <code>latest</code>), you just need to delete the existing pods and force all three to restart. A fast way to do this is <code>kubectl delete pod -l app=polls</code>.</p>
<p>Due to some internal issues, we need to remove unused images as soon as they become unused.<br> I do know it's possible to use <a href="https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/" rel="nofollow noreferrer">Garbage collection</a> but it doesn't offer strict policy as we need. I've come across <a href="https://hub.docker.com/r/meltwater/docker-cleanup/" rel="nofollow noreferrer">this</a> solution but</p> <ol> <li>it's deprecated</li> <li>it also removes containers and possible mounted volumes</li> </ol> <p>I was thinking about setting a <code>cron</code> job directly over the nodes to run <code>docker prune</code> but I hope there is a better way</p> <p>No idea if it makes a difference but we are using AKS</p>
<p>This doesn't really accomplish much since things will be re-downloaded if they are requested again. But if you insist on a silly thing, best bet is a DaemonSet that runs with the host docker control socket hostPath-mounted in and runs <code>docker system prune</code> as you mentioned. You can't use a cron job so you need to write the loop yourself, probably just <code>bash -c 'while true; do docker system prune &amp;&amp; sleep 3600; done'</code> or something.</p>
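<p>A rough sketch of that DaemonSet — the image tag and interval are arbitrary, and this assumes your nodes actually run Docker (newer AKS node pools use containerd, where this socket does not exist):</p> <pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: docker-prune
spec:
  selector:
    matchLabels:
      app: docker-prune
  template:
    metadata:
      labels:
        app: docker-prune
    spec:
      containers:
      - name: prune
        image: docker:cli                  # any image with the docker CLI works
        command: ["/bin/sh", "-c", "while true; do docker system prune -af; sleep 3600; done"]
        volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
          type: Socket
</code></pre> <p>If you only want images removed (not stopped containers or networks), <code>docker image prune -af</code> is the narrower command.</p>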
<p>I've finally managed to run my containers and let them communicate. Currently, they're 1-1 (1 frontend, 1 backend). Now I wish to have n instances of the frontend and m instances of the backend, but a question came up about handling the logs. If I run only 1 instance of each, I can configure 2 volumes (1 for the frontend and 1 for the backend) and have them write there. When the containers are orchestrated by Kubernetes, how can I set up the volumes so that one frontend instance won't overwrite data written by another frontend instance?</p> <p>Thanks</p>
<p>You don't write logs to a volume, generally. You write them to stdout/err and then the container runtime system manages them for you. You can then access them via <code>kubectl logs</code> or ship them elsewhere using tools like Fluentd.</p>
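<p>For example, to read recent logs across all replicas of one app by label (the label is a placeholder):</p> <pre><code>kubectl logs -l app=frontend --tail=100
</code></pre>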