<p>I am attempting to install gitlab using helm. I have a certificate issued to me by the internal Certificate Authority and I have used the <code>.pem</code> and <code>.key</code> files to generate a TLS secret with this command:</p> <pre><code>kubectl create secret tls gitlab-cert --cert=&lt;cert&gt;.pem --key=&lt;cert&gt;.key </code></pre> <p>When I run the helm installation, I am expecting to be able to view gitlab with <code>https://{internal-domain}</code>, however I get the below image.</p> <p><a href="https://i.stack.imgur.com/CoYaG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CoYaG.png" alt="enter image description here" /></a></p> <p><strong>Helm installation configuration</strong></p> <pre><code>helm install gitlab gitlab/gitlab \ --timeout 600s \ --set global.hosts.domain=${hosts_domain} \ --namespace ${helm_namespace} \ --set global.hosts.externalIP=${static_ip} \ --set postgresql.install=false \ --set global.psql.host=${postgres_sql_ip} \ --set global.psql.password.secret=${k8s_password_secret} \ --set global.psql.username=${postgres_sql_user} \ --set global.psql.password.key=${k8s_password_key} \ --set global.psql.ssl.secret=${psql_ssl_secret} \ --set global.psql.ssl.clientCertificate=${psql_ssl_client_certificate} \ --set global.psql.ssl.clientKey=${psql_ssl_client_key} \ --set global.psql.ssl.serverCA=${psql_ssl_server_ca} \ --set global.extraEnv.PGSSLCERT=${extra_env_pg_ssl_cert} \ --set global.extraEnv.PGSSLKEY=${extra_env_pg_ssl_key} \ --set global.extraEnv.PGSSLROOTCERT=${extra_env_pg_ssl_root_cert} \ --set global.host.https=true \ --set global.ingress.tls.enabled=true \ --set global.ingress.tls.secretName=${gitlab-cert} \ --set certmanager.install=false \ --set global.ingress.configureCertmanager=false \ --set gitlab.webservice.ingress.tls.secretName=${gitlab-cert} </code></pre> <p>The pods run fine.</p>
<p>Posted community wiki answer for better visibility. Feel free to expand it.</p> <hr /> <p>Based on @sytech's comment:</p> <blockquote> <p>The error you have there is <code>CERT_WEAK_SIGNATURE_ALGORITHM</code>. It seems you should probably regenerate your certificate using a stronger algorithm.</p> </blockquote> <p>You are probably using some weak signature algorithm. Both <a href="https://www.chromium.org/Home/chromium-security/education/tls#TOC-Deprecated-and-Removed-Features" rel="nofollow noreferrer">Chrome</a> and <a href="https://developer.mozilla.org/en-US/docs/Web/Security/Weak_Signature_Algorithm" rel="nofollow noreferrer">Mozilla Firefox</a> are not <a href="https://developer.mozilla.org/en-US/docs/Web/Security/Weak_Signature_Algorithm" rel="nofollow noreferrer">treating certificates based on weak algorithms as secure</a>:</p> <blockquote> <p>SHA-1 certificates will no longer be treated as secure by major browser manufacturers beginning in 2017.</p> </blockquote> <blockquote> <p>Support for MD5 based signatures was removed in early 2012.</p> </blockquote> <p>Please make sure that you are <a href="https://blog.mozilla.org/security/2014/09/23/phasing-out-certificates-with-sha-1-based-signature-algorithms/" rel="nofollow noreferrer">using a more secure algorithm</a>:</p> <blockquote> <p>We encourage Certification Authorities (CAs) and Web site administrators to upgrade their certificates to use signature algorithms with hash functions that are stronger than SHA-1, such as SHA-256, SHA-384, or SHA-512.</p> </blockquote> <p>Another option is that it may be an issue at your end - check your network and browser settings - <a href="https://www.auslogics.com/en/articles/fix-neterr-cert-weak-signature-algorithm-error/" rel="nofollow noreferrer">steps are presented in this article</a>.</p>
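<p>As a rough sketch of the fix (the file names, subject CN and key size below are placeholders, and your internal CA has to re-issue the certificate signed with SHA-256 or stronger), you can check the signature algorithm of the current certificate, create a new key/CSR, and recreate the TLS secret once the new certificate is issued:</p> <pre><code># check which signature algorithm the current certificate uses
openssl x509 -in &lt;cert&gt;.pem -noout -text | grep &quot;Signature Algorithm&quot;

# generate a new key and CSR requesting SHA-256, then have the internal CA sign it
openssl req -new -newkey rsa:2048 -sha256 -nodes \
  -keyout gitlab.key -out gitlab.csr -subj &quot;/CN={internal-domain}&quot;

# replace the Kubernetes TLS secret with the re-issued certificate
kubectl delete secret gitlab-cert
kubectl create secret tls gitlab-cert --cert=&lt;new-cert&gt;.pem --key=gitlab.key
</code></pre>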
<p>In a scenario where a zone/dc dropped and 2 master nodes out of 5 are now offline, I would like to restore etcd on the remaining 3 master nodes.</p> <p>So far the best I could manage was restoring from etcd backup, but I found myself needing to destroy the remaining 2 and recreating them. Otherwise, I got a split brain issue.</p> <p>Is there a way to remove the 2 dropped members from etcd and restore quorum for the remaining 3?</p> <p>(OKD 4.7)</p>
<p>As per this <a href="https://platform9.com/kb/kubernetes/restore-etcd-cluster-from-quorum-loss" rel="nofollow noreferrer">doc</a> by Platform9:</p> <p><strong>Restoring ETCD Backup to Recover Cluster From Loss of Quorum</strong>:</p> <p>Master nodes going offline or a lack of connectivity between the master nodes leading to an unhealthy cluster state could cause a loss of quorum.</p> <blockquote> <p>The idea behind Etcd restore is to restore Etcd from a backup using etcdctl and reduce the master count to 1. In order for it to start up as a brand-new etcd cluster once that is finished, we may need to manually make some adjustments. When things are back up, we increment the master count one node at a time as the remaining nodes join.</p> <p>If the master nodes are hard offline or unreachable after restoring from the etcd backup, proceed with deauthorizing the nodes as well. From a kubectl perspective, the detached master nodes will be seen to be in the &quot;NotReady&quot; state. Delete these nodes from the cluster.</p> <p>At this point, the cluster should be back up and running with a single master node. Verify the same.</p> <p>Once the nodes are scaled back up, they should have a PMK stack running on them which will ensure ETCD members will sync amongst each other.</p> </blockquote> <p>Refer to this <a href="https://docs.openshift.com/container-platform/3.11/admin_guide/assembly_restore-etcd-quorum.html" rel="nofollow noreferrer">doc 1</a> and <a href="https://docs.openshift.com/container-platform/4.8/backup_and_restore/control_plane_backup_and_restore/replacing-unhealthy-etcd-member.html" rel="nofollow noreferrer">doc 2</a> by Red Hat for more information about restoring etcd quorum and replacing an unhealthy etcd member.</p>
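<p>To address the specific question of removing the two dropped members: assuming the three remaining members are still healthy and you can reach etcd on one of them (the endpoint and certificate paths below are placeholders and differ per distribution, including OKD), removing the dead members with <code>etcdctl</code> is what restores a clean member list for the remaining three:</p> <pre><code># list members and note the IDs of the two offline masters
etcdctl --endpoints=https://&lt;healthy-master-ip&gt;:2379 \
  --cacert=&lt;ca.crt&gt; --cert=&lt;client.crt&gt; --key=&lt;client.key&gt; \
  member list -w table

# remove each unreachable member by its ID
etcdctl --endpoints=https://&lt;healthy-master-ip&gt;:2379 \
  --cacert=&lt;ca.crt&gt; --cert=&lt;client.crt&gt; --key=&lt;client.key&gt; \
  member remove &lt;member-id&gt;
</code></pre>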
<p>Is it possible to move every deployment on a running k8s cluster to another running k8s cluster? I don't need to move cluster configurations, just every deployment, its pods, secrets, services, endpoints, pv/pvc, ingress etc. Everything that would be attached to a namespace and deleted in case that namespace gets deleted.</p>
<p>As @<strong>Jonas</strong> suggested, except for PVs, <strong>you can apply the same Yaml manifests to migrate every deployment from one kubernetes cluster to another cluster</strong>. For PVs, you need to backup and restore PVs by referring to the official kubernetes doc on <a href="https://kubernetes.io/docs/concepts/storage/volume-snapshots/" rel="nofollow noreferrer">Creating volume snapshot</a> and <a href="https://kubernetes.io/docs/concepts/storage/volume-snapshots/" rel="nofollow noreferrer">Restoring Volume from Snapshot Support</a>.</p> <blockquote> <p>Volume snapshots provide Kubernetes users with a standardized way to copy a volume's contents at a particular point in time without creating an entirely new volume. This functionality enables, for example, database administrators to backup databases before performing edit or delete modifications.</p> </blockquote> <p>You can also use the <a href="https://velero.io/docs/v1.2.0/" rel="nofollow noreferrer">Velero</a>, <a href="https://github.com/utkuozdemir/pv-migrate#use-cases" rel="nofollow noreferrer">pv-migrate</a> tool/kubectl plugin and similar <a href="https://github.com/kubernetes/kubernetes/issues/24229" rel="nofollow noreferrer">github issue</a> for more details, which will help you to easily migrate the contents of one Kubernetes PVC to another.</p>
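<p>As a minimal sketch of the snapshot/restore flow (the names, <code>volumeSnapshotClassName</code> and <code>storageClassName</code> below are placeholders, and your cluster needs a CSI driver with snapshot support; for moving data between two different clusters, Velero or pv-migrate mentioned above is usually the simpler route):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-pvc-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass      # class offered by your CSI driver
  source:
    persistentVolumeClaimName: my-pvc         # the PVC to back up
---
# restore: a new PVC that uses the snapshot as its data source
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-restored
spec:
  storageClassName: my-storage-class
  dataSource:
    name: my-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
</code></pre>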
<p>Hi, I was just trying to install Argo CD on my local machine. I have minikube installed and running.</p> <p>After creating the argocd namespace, I just tried this command:</p> <pre><code>kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml </code></pre> <p>This error persists:</p> <pre><code>Unable to connect to the server: dial tcp [2405:200:1607:2820:41::36]:443: i/o timeout </code></pre> <p>Could I get some help? Btw, I'm new to Argo...</p>
<p>The error “<code>Unable to connect to the Server TCP I/O timeout</code>” usually happens due to a few common causes, and you can try to troubleshoot based on the steps below:</p> <p>1) Your Kubernetes cluster is not running. Verify that your cluster has been started, e.g. by pinging the IP address.</p> <p>2) There are networking issues that prevent you from accessing the cluster. Verify that you can ping the IP and try to track down whether there is a firewall in place which is preventing access.</p> <p>3) You have configured a cluster that does not exist any more. The error might also result from an IP address difference in the kubelet configuration.</p> <p>4) Refer to this official <a href="https://argo-cd.readthedocs.io/en/stable/developer-guide/running-locally/" rel="nofollow noreferrer">doc</a> about how to install Argo CD on a local machine; as mentioned in the doc, you need to run in the same namespace where Argo CD is installed. Try setting <code>argocd</code> as the default namespace for the current context:</p> <pre><code>kubectl config set-context --current --namespace=argocd </code></pre> <p><strong>To see your current context</strong>:</p> <p><code>kubectl config current-context</code></p> <p><strong>To see the contexts you have</strong>:</p> <p><code>kubectl config view</code></p> <p><strong>To switch context</strong>:</p> <pre><code>kubectl config use-context &lt;context-cluster-name&gt; </code></pre> <p>Make sure you are using the correct kubectl context.</p> <p>Also you can refer to this <a href="https://www.techcrumble.net/2019/06/kubernetes-error-unable-to-connect-to-the-server-tcp-i-o-timeout/" rel="nofollow noreferrer">doc</a> authored by Aruna Lakmal for more information about this error.</p>
<p>I'm attempting to set up a path rewriting ingress to my backend service using the following:</p> <ul> <li>Kubernetes NGINX ingress controller (<a href="https://kubernetes.github.io/ingress-nginx" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx</a>), version 1.6.4 (built on Nginx 1.21.6)</li> <li>Running in Docker Desktop 4.9.0 with Kubernetes 1.24.0</li> </ul> <p>I've deployed my service and the ingress controller using Helm. When I describe the Ingress resource, it looks exactly like <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">this example</a>, but with a few extra labels from Helm.</p> <p>When I attempt to GET or POST to any resource (existing or not) that matches the regex using curl (using a path starting with <code>/something</code>), I get a HTTP 400 response. Logging on my backend service shows that it never received the request. When I attempt to hit any other nonexistent path, I get a HTTP 404 from nginx, which is expected.</p> <p>How do I resolve the HTTP 400s and get nginx to forward the traffic to my service? I'm guessing there's something missing from either my nginx config or from the ingress controller config, but I don't see anything obvious in the docs.</p>
<p>Try these <a href="https://kubernetes.github.io/ingress-nginx/troubleshooting/" rel="nofollow noreferrer">troubleshooting</a> steps, which may help you to resolve your issue.</p> <p>1. Check whether your ingress controller is configured to respect the proxy protocol settings of the LB.</p> <p>2. Check if you added the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-proxy-protocol" rel="nofollow noreferrer">proxy protocol</a> directive to your ConfigMap (a sketch is shown after these steps).</p> <p>3. If the issue is still not resolved, then you may need to add extra args as below:</p> <pre><code>extraArgs: # - --enable-skip-login - --enable-insecure-login # - --system-banner=&quot;Welcome to Kubernetes&quot; </code></pre> <p>Attaching the <a href="https://github.com/kubernetes/dashboard/issues/7201" rel="nofollow noreferrer">Git issue</a> which addresses a similar issue.</p>
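<p>For step 2, a minimal sketch of that ConfigMap directive (the ConfigMap name and namespace depend on how your controller was installed, so treat them as placeholders, and only enable it if the load balancer in front of the controller actually sends proxy protocol; a proxy-protocol mismatch is a common cause of HTTP 400 responses):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # placeholder: use your controller's ConfigMap name
  namespace: ingress-nginx
data:
  use-proxy-protocol: &quot;true&quot;
</code></pre>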
<p>After I uninstall a release (with --keep-history), a release history with &quot;uninstalled&quot; status remains.</p> <p>Then if I want to install this release again, <code>install</code> and <code>upgrade --install</code> both fail.</p> <p><code>install</code> fails because of &quot;cannot re-use a name that is still in use&quot;, while <code>upgrade --install</code> fails because of &quot;xxx has no deployed releases&quot;.</p> <p>Is the only way to fix this to remove the history, or to uninstall without keeping history?</p> <p>I tried both the <code>install</code> and <code>upgrade --install</code> commands; both failed.</p>
<p>As described in this <a href="https://phoenixnap.com/kb/helm-has-no-deployed-releases" rel="nofollow noreferrer">doc</a> by phoenixnap:</p> <p><strong>There are several ways to fix the “helm has no deployed releases” error; one way is by running the following command</strong>:</p> <pre><code>kubectl -n kube-system patch configmap [release name].[release version] --type=merge -p '{&quot;metadata&quot;:{&quot;labels&quot;:{&quot;STATUS&quot;:&quot;DEPLOYED&quot;}}}' </code></pre> <p>[release name] is the name of the release you want to update.</p> <p>[release version] is the current version of your release.</p> <p>Since Helm 3 stores the deployment history as <a href="https://phoenixnap.com/kb/kubernetes-secrets" rel="nofollow noreferrer">Kubernetes secrets</a>, check the deployment secrets:</p> <pre><code>kubectl get secrets </code></pre> <p>Find the secret referring to the failed deployment, then use the following command to change the deployment status:</p> <pre><code>kubectl patch secret [name-of-secret-related-to-deployment] --type=merge -p '{&quot;metadata&quot;:{&quot;labels&quot;:{&quot;status&quot;:&quot;deployed&quot;}}}' </code></pre> <p>You can also refer to this <a href="https://jacky-jiang.medium.com/how-to-fix-helm-upgrade-error-has-no-deployed-releases-mystery-3dd67b2eb126" rel="nofollow noreferrer">blog</a> by Jacky Jiang for more information about how to upgrade Helm.</p>
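<p>A hint for finding the right secret: Helm 3 stores each release revision as a Secret named <code>sh.helm.release.v1.&lt;release&gt;.v&lt;revision&gt;</code> in the release's namespace and labels it with the release name and status, so you can narrow the search like this (namespace and release name are placeholders):</p> <pre><code># all revisions of the release
kubectl get secrets -n &lt;namespace&gt; -l owner=helm,name=&lt;release-name&gt;

# only revisions marked as uninstalled - usually the latest one is the record to patch
kubectl get secrets -n &lt;namespace&gt; -l owner=helm,name=&lt;release-name&gt;,status=uninstalled
</code></pre>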
<p>When deploying Prisma to Kubernetes locally and trying to connect through another pod, it prompts the next message: &quot;Prisma needs to perform transactions, which requires your MongoDB server to be run as a replica set.&quot;</p> <p>The complete log is as follows:</p> <pre><code>PrismaClientKnownRequestError: [be-auth] Invalid `prisma.user.create()` invocation in [be-auth] /usr/src/app/src/routes/signup.ts:38:36 [be-auth] [be-auth] 35 [be-auth] 36 const hashedPassword = await Password.toHash(password); [be-auth] 37 [be-auth] → 38 const user = await prisma.user.create( [be-auth] Prisma needs to perform transactions, which requires your MongoDB server to be run as a replica set. https://pris.ly/d/mongodb-replica-set [be-auth] at zr.handleRequestError (/usr/src/app/node_modules/@prisma/client/runtime/library.js:122:8308) [be-auth] at zr.handleAndLogRequestError (/usr/src/app/node_modules/@prisma/client/runtime/library.js:122:7697) [be-auth] at zr.request (/usr/src/app/node_modules/@prisma/client/runtime/library.js:122:7307) { [be-auth] code: 'P2031', [be-auth] clientVersion: '5.0.0', [be-auth] meta: {} [be-auth] } </code></pre> <p>This is the .yaml file for deploying the mongodb image:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: be-auth-mongo-depl spec: replicas: 1 selector: matchLabels: app: be-auth-mongo template: metadata: labels: app: be-auth-mongo spec: containers: - name: be-auth-mongo image: mongo --- apiVersion: v1 kind: Service metadata: name: be-auth-mongo-srv spec: selector: app: be-auth-mongo ports: - name: db protocol: TCP port: 27017 targetPort: 27017 </code></pre>
<p>To resolve your issue, follow the <a href="https://www.prisma.io/docs/concepts/database-connectors/mongodb#replica-set-configuration" rel="nofollow noreferrer">Prisma documentation</a>, which boils down to changing your deployment to one with a replica set configured. You have a few options (a sketch of the last one applied to your Deployment is shown below):</p> <blockquote> <ol> <li><p>Change your deployment to one with a replica set configured.</p> </li> <li><p>One simple way to fix your issue is to use <a href="https://www.mongodb.com/atlas/database" rel="nofollow noreferrer">MongoDB Atlas</a>/<a href="https://github.com/prisma/prisma/blob/main/docker/mongodb_replica/Dockerfile" rel="nofollow noreferrer">Docker</a> to launch a free instance that has replica set support out of the box.</p> </li> <li><p>Another way is to run the replica set locally with the help of <a href="https://docs.mongodb.com/manual/tutorial/convert-standalone-to-replica-set/" rel="nofollow noreferrer">Convert a Standalone mongod to a Replica Set</a>.</p> </li> </ol> </blockquote> <p>Also you can refer to the <strong>Medium blog</strong> written by <strong>Grigor Khachatryan</strong> on how to <a href="https://medium.com/devgorilla/deploy-prisma-with-mongodb-to-kubernetes-ccbf1daa51be" rel="nofollow noreferrer">Deploy Prisma with MongoDB to Kubernetes</a> for more information.</p>
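<p>As a sketch of the last option applied to the Deployment from your question (this is an assumption about a minimal single-node replica set for local development, not a production setup; the replica set name <code>rs0</code> is arbitrary, and older images ship <code>mongo</code> instead of <code>mongosh</code>), you could start <code>mongod</code> with <code>--replSet</code> and initiate the set once:</p> <pre class="lang-yaml prettyprint-override"><code>      # in the be-auth-mongo container of the Deployment
      containers:
        - name: be-auth-mongo
          image: mongo
          command: [&quot;mongod&quot;, &quot;--replSet&quot;, &quot;rs0&quot;, &quot;--bind_ip_all&quot;]
</code></pre> <pre><code># run once after the pod is up to initiate the single-node replica set
kubectl exec deploy/be-auth-mongo-depl -- mongosh --eval &quot;rs.initiate()&quot;
</code></pre> <p>Your Prisma connection string may then also need <code>replicaSet=rs0</code> (and possibly <code>directConnection=true</code>) as query parameters.</p>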
<p>I have a baremetal cluster deployed using Kubespray with kubernetes 1.22.2, MetalLB, and ingress-nginx enabled. I am getting <code>404 Not found</code> when trying to access any service deployed via helm when setting <code>ingressClassName: nginx</code>. However, everything works fine if I don't use <code>ingressClassName: nginx</code> but <code>kubernetes.io/ingress.class: nginx</code> instead in the helm chart values.yaml. How can I get it to work using <code>ingressClassName</code>?</p> <p>These are my kubespray settings for <code>inventory/mycluster/group_vars/k8s_cluster/addons.yml</code></p> <pre><code># Nginx ingress controller deployment ingress_nginx_enabled: true ingress_nginx_host_network: false ingress_publish_status_address: &quot;&quot; ingress_nginx_nodeselector: kubernetes.io/os: &quot;linux&quot; ingress_nginx_tolerations: - key: &quot;node-role.kubernetes.io/master&quot; operator: &quot;Equal&quot; value: &quot;&quot; effect: &quot;NoSchedule&quot; - key: &quot;node-role.kubernetes.io/control-plane&quot; operator: &quot;Equal&quot; value: &quot;&quot; effect: &quot;NoSchedule&quot; ingress_nginx_namespace: &quot;ingress-nginx&quot; ingress_nginx_insecure_port: 80 ingress_nginx_secure_port: 443 ingress_nginx_configmap: map-hash-bucket-size: &quot;128&quot; ssl-protocols: &quot;TLSv1.2 TLSv1.3&quot; ingress_nginx_configmap_tcp_services: 9000: &quot;default/example-go:8080&quot; ingress_nginx_configmap_udp_services: 53: &quot;kube-system/coredns:53&quot; ingress_nginx_extra_args: - --default-ssl-certificate=default/mywildcard-tls ingress_nginx_class: &quot;nginx&quot; </code></pre> <p>grafana helm values.yaml</p> <pre><code>ingress: enabled: true # For Kubernetes &gt;= 1.18 you should specify the ingress-controller via the field ingressClassName # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress ingressClassName: nginx # Values can be templated annotations: # kubernetes.io/ingress.class: nginx # kubernetes.io/tls-acme: &quot;true&quot; labels: {} path: / # pathType is only for k8s &gt;= 1.1= pathType: Prefix hosts: - grafana.mycluster.org tls: - secretName: mywildcard-tls hosts: - grafana.mycluster.org </code></pre> <p><code>kubectl describe pod grafana-679bbfd94-p2dd7</code></p> <pre><code>... 
Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 25m default-scheduler Successfully assigned default/grafana-679bbfd94-p2dd7 to node1 Normal Pulled 25m kubelet Container image &quot;grafana/grafana:8.2.2&quot; already present on machine Normal Created 25m kubelet Created container grafana Normal Started 25m kubelet Started container grafana Warning Unhealthy 24m (x3 over 25m) kubelet Readiness probe failed: Get &quot;http://10.233.90.33:3000/api/health&quot;: dial tcp 10.233.90.33:3000: connect: connection refused </code></pre> <p><code>kubectl get svc</code></p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE grafana LoadBalancer 10.233.14.90 10.10.30.52 80:30285/TCP 55m kubernetes ClusterIP 10.233.0.1 &lt;none&gt; 443/TCP 9d </code></pre> <p><code>kubectl get ing</code> (no node address assigned)</p> <pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE grafana nginx grafana.mycluster.org 80, 443 25m </code></pre> <p><code>kubectl describe ing grafana</code> (no node address assigned)</p> <pre><code>Name: grafana Namespace: default Address: Default backend: default-http-backend:80 (&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;) TLS: mywildcard-tls terminates grafana.mycluster.org Rules: Host Path Backends ---- ---- -------- grafana.mycluster.org / grafana:80 (10.233.90.33:3000) Annotations: meta.helm.sh/release-name: grafana meta.helm.sh/release-namespace: default Events: &lt;none&gt; </code></pre> <p><code>kubectl get all --all-namespaces</code></p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE default pod/grafana-b988b9b6-pxccw 1/1 Running 0 2m53s default pod/nfs-client-nfs-subdir-external-provisioner-68f44cd9f4-wjlpv 1/1 Running 0 17h ingress-nginx pod/ingress-nginx-controller-6m2vt 1/1 Running 0 17h ingress-nginx pod/ingress-nginx-controller-xkgxl 1/1 Running 0 17h kube-system pod/calico-kube-controllers-684bcfdc59-kmsst 1/1 Running 0 17h kube-system pod/calico-node-dhlnt 1/1 Running 0 17h kube-system pod/calico-node-r8ktz 1/1 Running 0 17h kube-system pod/coredns-8474476ff8-9sbwh 1/1 Running 0 17h kube-system pod/coredns-8474476ff8-fdgcb 1/1 Running 0 17h kube-system pod/dns-autoscaler-5ffdc7f89d-vskvq 1/1 Running 0 17h kube-system pod/kube-apiserver-node1 1/1 Running 0 17h kube-system pod/kube-controller-manager-node1 1/1 Running 1 17h kube-system pod/kube-proxy-hbjz6 1/1 Running 0 16h kube-system pod/kube-proxy-lfqzt 1/1 Running 0 16h kube-system pod/kube-scheduler-node1 1/1 Running 1 17h kube-system pod/kubernetes-dashboard-548847967d-qqngw 1/1 Running 0 17h kube-system pod/kubernetes-metrics-scraper-6d49f96c97-2h7hc 1/1 Running 0 17h kube-system pod/nginx-proxy-node2 1/1 Running 0 17h kube-system pod/nodelocaldns-64cqs 1/1 Running 0 17h kube-system pod/nodelocaldns-t5vv6 1/1 Running 0 17h kube-system pod/registry-proxy-kljvw 1/1 Running 0 17h kube-system pod/registry-proxy-nz4qk 1/1 Running 0 17h kube-system pod/registry-xzh9d 1/1 Running 0 17h metallb-system pod/controller-77c44876d-c92lb 1/1 Running 0 17h metallb-system pod/speaker-fkjqp 1/1 Running 0 17h metallb-system pod/speaker-pqjgt 1/1 Running 0 17h NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default service/grafana LoadBalancer 10.233.1.104 10.10.30.52 80:31116/TCP 2m53s default service/kubernetes ClusterIP 10.233.0.1 &lt;none&gt; 443/TCP 17h kube-system service/coredns ClusterIP 10.233.0.3 &lt;none&gt; 53/UDP,53/TCP,9153/TCP 17h kube-system service/dashboard-metrics-scraper ClusterIP 10.233.35.124 &lt;none&gt; 8000/TCP 17h kube-system 
service/kubernetes-dashboard ClusterIP 10.233.32.133 &lt;none&gt; 443/TCP 17h kube-system service/registry ClusterIP 10.233.30.221 &lt;none&gt; 5000/TCP 17h NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE ingress-nginx daemonset.apps/ingress-nginx-controller 2 2 2 2 2 kubernetes.io/os=linux 17h kube-system daemonset.apps/calico-node 2 2 2 2 2 kubernetes.io/os=linux 17h kube-system daemonset.apps/kube-proxy 2 2 2 2 2 kubernetes.io/os=linux 17h kube-system daemonset.apps/nodelocaldns 2 2 2 2 2 kubernetes.io/os=linux 17h kube-system daemonset.apps/registry-proxy 2 2 2 2 2 &lt;none&gt; 17h metallb-system daemonset.apps/speaker 2 2 2 2 2 kubernetes.io/os=linux 17h NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE default deployment.apps/grafana 1/1 1 1 2m53s default deployment.apps/nfs-client-nfs-subdir-external-provisioner 1/1 1 1 17h kube-system deployment.apps/calico-kube-controllers 1/1 1 1 17h kube-system deployment.apps/coredns 2/2 2 2 17h kube-system deployment.apps/dns-autoscaler 1/1 1 1 17h kube-system deployment.apps/kubernetes-dashboard 1/1 1 1 17h kube-system deployment.apps/kubernetes-metrics-scraper 1/1 1 1 17h metallb-system deployment.apps/controller 1/1 1 1 17h NAMESPACE NAME DESIRED CURRENT READY AGE default replicaset.apps/grafana-b988b9b6 1 1 1 2m53s default replicaset.apps/nfs-client-nfs-subdir-external-provisioner-68f44cd9f4 1 1 1 17h kube-system replicaset.apps/calico-kube-controllers-684bcfdc59 1 1 1 17h kube-system replicaset.apps/coredns-8474476ff8 2 2 2 17h kube-system replicaset.apps/dns-autoscaler-5ffdc7f89d 1 1 1 17h kube-system replicaset.apps/kubernetes-dashboard-548847967d 1 1 1 17h kube-system replicaset.apps/kubernetes-metrics-scraper-6d49f96c97 1 1 1 17h kube-system replicaset.apps/registry 1 1 1 17h metallb-system replicaset.apps/controller-77c44876d 1 1 1 17h </code></pre> <p><code>kubectl get ing grafana -o yaml</code></p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: kubernetes.io/tls-acme: &quot;true&quot; meta.helm.sh/release-name: grafana meta.helm.sh/release-namespace: default nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/ssl-redirect: &quot;true&quot; creationTimestamp: &quot;2021-11-11T07:16:12Z&quot; generation: 1 labels: app.kubernetes.io/instance: grafana app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: grafana app.kubernetes.io/version: 8.2.2 helm.sh/chart: grafana-6.17.5 name: grafana namespace: default resourceVersion: &quot;3137&quot; uid: 6c34d3bd-9ab6-42fe-ac1b-7620a9566f62 spec: ingressClassName: nginx rules: - host: grafana.mycluster.org http: paths: - backend: service: name: ssl-redirect port: name: use-annotation path: /* pathType: Prefix - backend: service: name: grafana port: number: 80 path: / pathType: Prefix status: loadBalancer: {} </code></pre>
<blockquote> <p>Running <code>kubectl get ingressclass</code> returned 'No resources found'.</p> </blockquote> <p>That's the main reason for your issue.</p> <p><em>Why?</em></p> <p>When you are specifying <code>ingressClassName: nginx</code> in your Grafana <code>values.yaml</code> file you are <a href="https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress" rel="noreferrer">setting your Ingress resource to use</a> the <code>nginx</code> Ingress class, which does not exist.</p> <p>I replicated your issue using <a href="https://minikube.sigs.k8s.io/docs/start/" rel="noreferrer">minikube</a>, <a href="https://metallb.universe.tf/" rel="noreferrer">MetalLB</a> and <a href="https://kubernetes.github.io/ingress-nginx/" rel="noreferrer">NGINX Ingress</a> installed via a modified <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/cloud/deploy.yaml" rel="noreferrer">deploy.yaml file</a> with the <code>IngressClass</code> resource commented out + the NGINX Ingress controller name set to <code>nginx</code>, as in your example. The result was exactly the same - <code>ingressClassName: nginx</code> didn't work (no address), but the annotation <code>kubernetes.io/ingress.class: nginx</code> worked.</p> <hr /> <p>(For the below solution I'm using the controller pod name <code>ingress-nginx-controller-86c865f5c4-qwl2b</code>, but in your case it will be different - check it using the <code>kubectl get pods -n ingress-nginx</code> command. Also keep in mind it's kind of a workaround - usually the <code>IngressClass</code> resource should be installed automatically with a complete installation of NGINX Ingress. I'm presenting this solution to understand <em>why</em> it didn't work for you before, and <em>why</em> it works with NGINX Ingress installed using Helm.)</p> <p>In the logs of the Ingress NGINX controller I found (<code>kubectl logs ingress-nginx-controller-86c865f5c4-qwl2b -n ingress-nginx</code>):</p> <pre><code>&quot;Ignoring ingress because of error while validating ingress class&quot; ingress=&quot;default/minimal-ingress&quot; error=&quot;no object matching key \&quot;nginx\&quot; in local store&quot; </code></pre> <p>So it clearly shows that there is no object matching the <code>nginx</code> Ingress class - because there is no <code>IngressClass</code> resource, which is the &quot;link&quot; between the NGINX Ingress controller and the running Ingress resource.</p> <p>You can verify which controller class name is bound to the controller by running <code>kubectl get pod ingress-nginx-controller-86c865f5c4-qwl2b -n ingress-nginx -o yaml</code>:</p> <pre class="lang-yaml prettyprint-override"><code>... spec: containers: - args: - /nginx-ingress-controller - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller - --election-id=ingress-controller-leader - --controller-class=k8s.io/nginx ... 
</code></pre> <p>Now I will create and apply the following IngressClass resource:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: IngressClass metadata: name: nginx spec: controller: k8s.io/nginx </code></pre> <p>Now in the logs I can see that it's properly configured:</p> <pre><code>I1115 12:13:42.410384 7 main.go:101] &quot;successfully validated configuration, accepting&quot; ingress=&quot;minimal-ingress/default&quot; I1115 12:13:42.420408 7 store.go:371] &quot;Found valid IngressClass&quot; ingress=&quot;default/minimal-ingress&quot; ingressclass=&quot;nginx&quot; I1115 12:13:42.421487 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;Ingress&quot;, Namespace:&quot;default&quot;, Name:&quot;minimal-ingress&quot;, UID:&quot;c708a672-a8dd-45d3-a2ec-f2e2881623ea&quot;, APIVersion:&quot;networking.k8s.io/v1&quot;, ResourceVersion:&quot;454362&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'Sync' Scheduled for sync </code></pre> <p>After I re-applied the Ingress resource definition, I got an IP address for the Ingress resource.</p> <hr /> <p>As I said before, instead of using this workaround, I'd suggest installing NGINX Ingress using a solution that automatically installs the <code>IngressClass</code> as well. As you have chosen the Helm chart, it has an <a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-ingressclass.yaml" rel="noreferrer">IngressClass</a> resource, so the problem is gone. Other possible ways to install <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="noreferrer">are here</a>.</p>
<p>What happens when Kubernetes <code>liveness-probe</code> returns false? Does Kubernetes restart that pod immediately?</p>
<p>First, please note that <code>livenessProbe</code> concerns <strong>containers</strong> in the pod, not the pod itself. So if you have multiple containers in one pod, only the affected container will be restarted.</p> <p>It's worth noting that there is the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="noreferrer">parameter <code>failureThreshold</code></a>, which is set by default to 3. So, after 3 failed probes a container will be restarted:</p> <blockquote> <p><code>failureThreshold</code>: When a probe fails, Kubernetes will try <code>failureThreshold</code> times before giving up. Giving up in case of liveness probe means restarting the container. In case of readiness probe the Pod will be marked Unready. Defaults to 3. Minimum value is 1.</p> </blockquote> <p>Ok, we have information that a container is restarted after 3 failed probes - but what does it mean to <em>restart</em>?</p> <p>I found a good article about <em>how</em> Kubernetes terminates pods - <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace" rel="noreferrer">Kubernetes best practices: terminating with grace</a>. It seems a container restart caused by a liveness probe works similarly - I will share my experience below.</p> <p>So basically, when a container is being terminated because of a failed liveness probe, the steps are:</p> <ul> <li>if there is a <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="noreferrer"><code>PreStop</code> hook</a>, it will be executed</li> <li>the <a href="https://www.gnu.org/software/libc/manual/html_node/Termination-Signals.html" rel="noreferrer">SIGTERM signal</a> is sent to the container</li> <li>Kubernetes waits for the grace period</li> <li>after the grace period, the <a href="https://www.gnu.org/software/libc/manual/html_node/Termination-Signals.html" rel="noreferrer">SIGKILL signal</a> is sent to the pod</li> </ul> <p>So... if an app in your container is catching the SIGTERM signal properly, then the container will shut down and will be started again. Typically this happens pretty fast (as I tested for the NGINX image) - almost immediately.</p> <p>The situation is different when SIGTERM is not handled by your application. In that case, after the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="noreferrer"><code>terminationGracePeriodSeconds</code> period</a> the SIGKILL signal is sent and the container is forcibly removed.</p> <p>Example below (modified example from <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command" rel="noreferrer">this doc</a>) + I set <code>failureThreshold: 1</code></p> <p>I have the following pod definition:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: labels: test: liveness name: liveness-exec spec: containers: - name: liveness image: nginx livenessProbe: exec: command: - cat - /tmp/healthy periodSeconds: 10 failureThreshold: 1 </code></pre> <p>Of course there is no <code>/tmp/healthy</code> file, so livenessProbe will fail. The NGINX image is properly catching the SIGTERM signal, so the container will be restarted almost immediately (for every failed probe). 
Let's check it:</p> <pre><code>user@shell:~/liveness-test-short $ kubectl get pods NAME READY STATUS RESTARTS AGE liveness-exec 0/1 CrashLoopBackOff 3 36s </code></pre> <p>So after ~30 sec the container is already restarted a few times and its status is <a href="https://sysdig.com/blog/debug-kubernetes-crashloopbackoff/" rel="noreferrer">CrashLoopBackOff</a> as expected. I created the same pod without the livenessProbe and I measured the time needed to shut it down:</p> <pre><code>user@shell:~/liveness-test-short $ time kubectl delete pod liveness-exec pod &quot;liveness-exec&quot; deleted real 0m1.474s </code></pre> <p>So it's pretty fast.</p> <p>A similar example, but with a <code>sleep 3000</code> command added:</p> <pre class="lang-yaml prettyprint-override"><code>... image: nginx args: - /bin/sh - -c - sleep 3000 ... </code></pre> <p>Let's apply it and check...</p> <pre><code>user@shell:~/liveness-test-short $ kubectl get pods NAME READY STATUS RESTARTS AGE liveness-exec 1/1 Running 5 3m37s </code></pre> <p>So after ~4 min there are only 5 restarts. Why? Because we need to wait for the full <code>terminationGracePeriodSeconds</code> period (default is 30 seconds) for every restart. Let's measure the time needed to shut down:</p> <pre><code>user@shell:~/liveness-test-short $ time kubectl delete pod liveness-exec pod &quot;liveness-exec&quot; deleted real 0m42.418s </code></pre> <p>It's much longer.</p> <p><strong>To sum up:</strong></p> <blockquote> <p>What happens when Kubernetes liveness-probe returns false? Does Kubernetes restart that pod immediately?</p> </blockquote> <p>The short answer is: by default no. Why?</p> <ul> <li>Kubernetes will restart a container in a pod after <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="noreferrer"><code>failureThreshold</code></a> failed probes. By default it is 3 times - so after 3 failed probes.</li> <li>Depending on the configuration of your container, the time needed for container termination can differ significantly</li> <li>You can adjust both the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="noreferrer"><code>failureThreshold</code></a> and <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="noreferrer"><code>terminationGracePeriodSeconds</code> period</a> parameters, so the container will be restarted immediately after every failed probe.</li> </ul>
<p>Conditions: I have a <em>3 tier application</em> to deploy using Kubernetes. I have created <strong>two namespaces</strong> for <em>backend</em> and <em>frontend</em> respectively.</p> <p>Problem: I want to know how would my <strong>backend</strong> talk to the <strong>frontend</strong> or vice versa.</p> <p>In simple words, how do backend and frontend communicate if they are in different namespaces?</p>
<p>You should create <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">services</a> for the frontend and backend, and then you can communicate using:</p> <ul> <li>(Recommended) the <strong>DNS name of the service</strong>, <code>&lt;service-name&gt;.&lt;namespace-name&gt;.svc.cluster.local</code>, which will always be the same</li> <li>(Not recommended) the IP address of the service (but it will be different with each re-creation of the service)</li> </ul> <p>Example and further explanation below.</p> <p>Let's create deployments with 3 replicas - one in the <code>default</code> namespace, one in the <code>test</code> namespace:</p> <pre><code>kubectl create deployment nginx --image=nginx --replicas=3 kubectl create deployment nginx --image=nginx --replicas=3 -n test </code></pre> <p>For each deployment we will <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#expose" rel="nofollow noreferrer">create a service of type ClusterIP using <code>kubectl expose</code></a> (it would be the same if I had created the services from a YAML file):</p> <pre><code>kubectl expose deployment nginx --name=my-service --port=80 kubectl expose deployment nginx --name=my-service-test --port=80 -n test </code></pre> <p>Time to get the IP addresses of the services:</p> <pre><code>user@shell:~$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 64d my-service ClusterIP 10.107.224.54 &lt;none&gt; 80/TCP 12m </code></pre> <p>And in the <code>test</code> namespace:</p> <pre><code>user@shell:~$ kubectl get svc -n test NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-service-test ClusterIP 10.110.51.62 &lt;none&gt; 80/TCP 8s </code></pre> <p>I will exec into the pod in the default namespace and <code>curl</code> the IP address of <code>my-service-test</code> in the second namespace:</p> <pre><code>user@shell:~$ kubectl exec -it nginx-6799fc88d8-w5q8s -- sh # curl 10.110.51.62 &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Welcome to nginx!&lt;/title&gt; &lt;style&gt; </code></pre> <p>Okay, it's working... Let's try with the hostname:</p> <pre><code># curl my-service-test curl: (6) Could not resolve host: my-service-test </code></pre> <p>Not working as expected. 
Let's check the <code>/etc/resolv.conf</code> file:</p> <pre><code># cat resolv.conf nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5 </code></pre> <p>It's looking for <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#does-the-service-work-by-dns-name" rel="nofollow noreferrer">hostnames only in the namespace where the pod is located</a>.</p> <p>So a pod in the <code>test</code> namespace will have something like:</p> <pre><code># cat resolv.conf nameserver 10.96.0.10 search test.svc.cluster.local svc.cluster.local cluster.local options ndots:5 </code></pre> <p>Let's try to curl <code>my-service-test.test.svc.cluster.local</code> from the pod in the default namespace:</p> <pre><code># curl my-service-test.test.svc.cluster.local &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Welcome to nginx!&lt;/title&gt; </code></pre> <p>It's working.</p> <p>If you have problems, make sure you have a proper <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/" rel="nofollow noreferrer">CNI plugin</a> installed for your cluster - also check this article - <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">Cluster Networking</a> for more details. If you are using some cloud-provided solution (like AWS EKS or GCP GKE) you should have it by default.</p> <p>Also check these:</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/" rel="nofollow noreferrer">Access Services Running on Clusters | Kubernetes</a></li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service | Kubernetes</a></li> <li><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/" rel="nofollow noreferrer">Debug Services | Kubernetes</a></li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods | Kubernetes</a></li> </ul>
<p>Today when I tried to use this command to get the Kubernetes deployment rollout status, it showed output like this and hung:</p> <pre><code>&gt; kubectl rollout status deployment/texhub-server-service -n reddwarf-pro Waiting for deployment spec update to be observed... </code></pre> <p>The command gets stuck and never ends. I also tried using this command to restart:</p> <pre><code>kubectl rollout restart deployment/texhub-server-service -n reddwarf-pro </code></pre> <p>It still gets stuck when getting the status. I did not find this deployment to be busy in the console. Why did this happen? What should I do to fix this issue?</p>
<p>As per <strong>Garry Shutler’s</strong> article on <a href="https://gshutler.com/2018/11/kubernetes-rollout-stuck/" rel="nofollow noreferrer">Kubernetes rollout stuck</a>:</p> <blockquote> <p>This seemed to put all replica set creation in limbo as other deployments also ended up stuck in a similar state.</p> </blockquote> <blockquote> <p>Wild speculation is that an in-memory lock was blocking the replica set creation and that killing the process broke the lock.</p> </blockquote> <p><strong>Try removing old ReplicaSets; it may help things start working again.</strong> You can achieve this using the command <code>kubectl delete rs &lt;rs-name&gt;</code>, where <code>rs-name</code> is the name of your ReplicaSet (see the example after the list below).</p> <p>You can also refer to other causes for this error in <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/workloads/controllers/deployment/#failed-deployment" rel="nofollow noreferrer">Failed Deployment</a>:</p> <blockquote> <ol> <li><p>Insufficient quota</p> </li> <li><p>Readiness probe failures (ensure your liveness and readiness probes are correctly set)</p> </li> <li><p>Image pull errors</p> </li> <li><p>Insufficient permissions</p> </li> <li><p>Limit ranges</p> </li> <li><p>Application runtime misconfiguration</p> </li> </ol> </blockquote>
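<p>For example (the namespace matches the one from your question; adjust the names as needed), you can list the deployment's ReplicaSets sorted by age and delete the stale ones:</p> <pre><code># list ReplicaSets, oldest first
kubectl get rs -n reddwarf-pro --sort-by=.metadata.creationTimestamp

# delete an old/stuck ReplicaSet (placeholder name)
kubectl delete rs &lt;rs-name&gt; -n reddwarf-pro
</code></pre>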
<p>If we have a role change in the team, I read that EKS creator can NOT be transferred. Can we instead rename the creator's IAM user name via aws cli? Will that break EKS?</p> <p>I only find ways to add new user using configmap but this configmap doesn't have the root user in there.</p> <pre><code>$ kubectl edit configmap aws-auth --namespace kube-system </code></pre>
<p>There is no way to transfer the root user of an EKS cluster to another IAM user. The only way to do this would be to delete the cluster and recreate it with the new IAM user as the root user.</p>
<p>I installed <code>kubectl</code> and <code>k3d</code> after following these official documentations (<a href="https://k3d.io/v5.4.9/" rel="nofollow noreferrer">k3d</a> and <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/" rel="nofollow noreferrer">kubectl</a>).<br /> I created a cluster with the following command:</p> <pre><code>k3d cluster create mycluster </code></pre> <p>Then I ran this command:</p> <pre><code>kubectl cluster-info </code></pre> <p>Which gave me this error message:<br /> <code>The connection to the server localhost:8080 was refused - did you specify the right host or port?</code></p> <p>After searching I found that the file <code>/etc/kubernetes</code> wasn't created (which means no <code>admin.conf</code> file was created), and the folder <code>~/.kube</code> also doesn't exist.<br /> PS: see <a href="https://stackoverflow.com/a/52262765/15368012">this answer</a> to understand</p> <p>Here are the versions of <code>k3d</code> and <code>kubectl</code></p> <pre><code>$ k3d version k3d version v5.4.9 k3s version v1.25.7-k3s1 (default) </code></pre> <pre><code>$ kubectl version --short Client Version: v1.25.2 Kustomize Version: v4.5.7 </code></pre>
<p>Check whether kubeadm was properly installed. The command <code>kubeadm version</code> helps you to know the installation status of kubeadm.</p> <p><strong>Note:</strong> If you can see the kubeadm version number, kubeadm was installed properly.</p> <p>Also run <code>kubeadm init</code> to initialize the control plane on your machine and create the necessary configuration files. See this <a href="https://discuss.kubernetes.io/t/facing-connection-issue-to-localhost-8080-while-using-the-cmd-kubectl-version/22644" rel="nofollow noreferrer">Kubernetes Community forum</a> issue for more information.</p> <blockquote> <p>You may not have set the kubeconfig environment variable, and the <code>.kube/config</code> file is not exported to the user's $HOME directory.</p> </blockquote> <p>See <strong>Nijo Luca’s</strong> <a href="https://k21academy.com/docker-kubernetes/the-connection-to-the-server-localhost8080-was-refused/" rel="nofollow noreferrer">blog</a> on <strong>K21 Academy</strong> for more information, which may help to resolve your issue.</p>
<p>I am following this document to deploy cluster auto scaler in EKS <a href="https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html</a></p> <p>EKS Version is 1.24. Cluster. Public traffic is allowed on the open internet and we have whitelisted the .amazonaws.com domain in the squid proxy.</p> <p>I feel there might be something wrong with the role or policy configuration</p> <p><strong>Error in pod:</strong></p> <blockquote> <p>F0208 05:39:52.442470 1 aws_cloud_provider.go:386] Failed to generate AWS EC2 Instance Types: WebIdentityErr: failed to retrieve credentials caused by: RequestError: send request failed caused by: Post &quot;https://sts.us-west-1.amazonaws.com/&quot;: dial tcp 176.32.112.54:443: i/o timeout</p> </blockquote> <p>The service account has the annotation in place to make use of the IAM role</p> <p>Kubectl describes cluster-autoscaler service account</p> <pre><code>Name: cluster-autoscaler Namespace: kube-system Labels: k8s-addon=cluster-autoscaler.addons.k8s.io k8s-app=cluster-autoscaler Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::&lt;ID&gt;:role/irsa-clusterautoscaler Image pull secrets: &lt;none&gt; Mountable secrets: &lt;none&gt; Tokens: &lt;none&gt; Events: &lt;none&gt; </code></pre>
<p>It was solved by adding the proxy details to the container env of the deployment, which is missing from the actual documentation; they could add it as a hint. The pod was not picking up the proxy settings available on the node; it expected them to be configured explicitly.</p>
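<p>For reference, a minimal sketch of what that looks like in the cluster-autoscaler Deployment (the proxy URL and the NO_PROXY entries are placeholders for your environment; NO_PROXY should typically exclude the instance metadata endpoint and cluster-internal addresses):</p> <pre class="lang-yaml prettyprint-override"><code># under spec.template.spec.containers[0] of the cluster-autoscaler Deployment
env:
  - name: HTTP_PROXY
    value: &quot;http://squid.internal.example.com:3128&quot;
  - name: HTTPS_PROXY
    value: &quot;http://squid.internal.example.com:3128&quot;
  - name: NO_PROXY
    value: &quot;169.254.169.254,10.0.0.0/8,.svc,.cluster.local&quot;
</code></pre>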
<p>I created a three-node cluster through a k8s deployment using the TDengine version 2.4.0.3 image. Viewing the pod information:</p> <pre><code>kubectl get pods -n mytdengine </code></pre> <pre><code>NAME READY STATUS RESTART AGE tdengine-01 1/1 Running 0 1m45s tdengine-02 1/1 Running 0 1m45s tdengine-03 1/1 Running 0 1m45s </code></pre> <p>Everything was going well.</p> <p>However, when I tried to stop a pod with a delete operation:</p> <pre><code>kubectl delete pod tdengine-03 -n mytdengine </code></pre> <p>The target pod is not deleted as expected. The status turns to:</p> <pre><code>NAME READY STATUS RESTART AGE tdengine-01 1/1 Running 0 2m35s tdengine-02 1/1 Running 0 2m35s tdengine-03 1/1 Terminating 0 2m35s </code></pre> <p>After several tests, the pod is only successfully deleted after 3 minutes, which is abnormal. I didn't actually use the TDengine instance, which means there is no excessive load or storage occupation. I cannot find a reason to explain why it takes 3 minutes to shut down.</p>
<p>After testing, I ruled out a problem with the Kubernetes configuration. Moreover, I found that the parameter <strong>terminationGracePeriodSeconds</strong> configured in the Pod's YAML file is 180:</p> <pre><code>terminationGracePeriodSeconds: 180 </code></pre> <p>This means that the pod was not shut down gracefully, but was forcibly removed after the timeout.</p> <p>Generally speaking, stopping a pod sends a SIGTERM signal. The container handles the signal correctly and makes a graceful shutdown. However, if it does not stop, or the container does not respond to the signal and exceeds the timeout set by the above parameter <strong>terminationGracePeriodSeconds</strong>, the container receives a SIGKILL signal and is forcibly killed. Ref: <a href="https://tasdikrahman.me/2019/04/24/handling-singals-for-applications-in-kubernetes-docker/" rel="nofollow noreferrer">https://tasdikrahman.me/2019/04/24/handling-singals-for-applications-in-kubernetes-docker/</a></p> <p>The reason for this is that in the TDengine 2.4.0.3 image, the startup script pulls up taosadapter first and then taosd, but it does not override the handling of the SIGTERM signal. Due to the particularities of Linux PID 1, only PID 1 receives the SIGTERM signal that k8s sends to the container in the pod (as shown in the process listing below, PID 1 is the startup script), and it does not pass the signal on to taosadapter and taosd (which become zombie processes).</p> <pre><code> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 9 root 20 0 2873404 80144 2676 S 2.3 0.5 112:30.81 taosadapter 8 root 20 0 2439240 41364 2996 S 1.0 0.3 130:53.67 taosd 1 root 20 0 20044 1648 1368 S 0.0 0.0 0:00.01 run_taosd.sh 7 root 20 0 20044 476 200 S 0.0 0.0 0:00.00 run_taosd.sh 135 root 20 0 20176 2052 1632 S 0.0 0.0 0:00.00 bash 146 root 20 0 38244 1788 1356 R 0.0 0.0 0:00.00 top </code></pre> <p>I personally chose to add a preStop hook in the k8s YAML file so the processes are stopped immediately:</p> <pre><code>lifecycle: preStop: exec: command: - /bin/bash - -c - procnum=`ps aux | grep taosd | grep -v -e grep -e entrypoint -e run_taosd | awk '{print $2}'`; kill -15 $procnum; if [ &quot;$?&quot; -eq 0 ]; then echo &quot;kill </code></pre> <p>Of course, once we know the cause of the problem, there are other solutions which are not discussed here.</p>
<p>While installing influxdb2 using k8s manifest from the link <a href="https://docs.influxdata.com/influxdb/v2.6/install/?t=Kubernetes" rel="nofollow noreferrer">influxdb2 installation on k8s</a> I get below &quot;<code>pod has unbound immediate PersistentVolumeClaims</code>&quot; error.</p> <p>The instruction is given for minikube but I am installing it as a normal k8s cluster. Any idea about the issue and how to fix.</p> <pre><code>/home/ravi#kubectl describe pod influxdb-0 -n influxdb Name: influxdb-0 Namespace: influxdb Priority: 0 Node: &lt;none&gt; Labels: app=influxdb controller-revision-hash=influxdb-78bc684b99 statefulset.kubernetes.io/pod-name=influxdb-0 Annotations: &lt;none&gt; Status: Pending IP: IPs: &lt;none&gt; Controlled By: StatefulSet/influxdb Containers: influxdb: Image: influxdb:2.0.6 Port: 8086/TCP Host Port: 0/TCP Environment: &lt;none&gt; Mounts: /var/lib/influxdb2 from data (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-k9d8t (ro) Conditions: Type Status PodScheduled False Volumes: data: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: data-influxdb-0 ReadOnly: false default-token-k9d8t: Type: Secret (a volume populated by a Secret) SecretName: default-token-k9d8t Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling &lt;unknown&gt; default-scheduler pod has unbound immediate PersistentVolumeClaims Warning FailedScheduling &lt;unknown&gt; default-scheduler pod has unbound immediate PersistentVolumeClaims /home/ravi# </code></pre> <p>influx db2 yaml file</p> <pre><code>--- apiVersion: v1 kind: Namespace metadata: name: influxdb --- apiVersion: apps/v1 kind: StatefulSet metadata: labels: app: influxdb name: influxdb namespace: influxdb spec: replicas: 1 selector: matchLabels: app: influxdb serviceName: influxdb template: metadata: labels: app: influxdb spec: containers: - image: influxdb:2.0.6 name: influxdb ports: - containerPort: 8086 name: influxdb volumeMounts: - mountPath: /var/lib/influxdb2 name: data volumeClaimTemplates: - metadata: name: data namespace: influxdb spec: accessModes: - ReadWriteOnce resources: requests: storage: 10G --- apiVersion: v1 kind: Service metadata: name: influxdb namespace: influxdb spec: ports: - name: influxdb port: 8086 targetPort: 8086 selector: app: influxdb type: ClusterIP </code></pre> <p>k8s version</p> <pre><code>/home/ravi#kubectl version Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;16&quot;, GitVersion:&quot;v1.16.0&quot;, GitCommit:&quot;2bd9643cee5b3b3a5ecbd3af49d09018f0773c77&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2019-09-18T14:36:53Z&quot;, GoVersion:&quot;go1.12.9&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;16&quot;, GitVersion:&quot;v1.16.0&quot;, GitCommit:&quot;2bd9643cee5b3b3a5ecbd3af49d09018f0773c77&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2019-09-18T14:27:17Z&quot;, GoVersion:&quot;go1.12.9&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} /home/ravi&gt;sudo kubectl get pvc -A NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE influxdb data-influxdb-0 Pending 4h41m ricplt pvc-ricplt-alarmmanager Bound pv-ricplt-alarmmanager 100Mi RWO local-storage 5h17m ricplt 
pvc-ricplt-e2term-alpha Bound pv-ricplt-e2term-alpha 100Mi RWO local-storage 5h18m ricplt r4-influxdb-influxdb2 Pending 32m /home/ravi&gt; /home/ravi&gt; /home/ravi&gt; /home/ravi&gt;sudo kubectl describe pvc data-influxdb-0 -n influxdb Name: data-influxdb-0 Namespace: influxdb StorageClass: Status: Pending Volume: Labels: app=influxdb Annotations: &lt;none&gt; Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Mounted By: influxdb-0 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal FailedBinding 2m12s (x1021 over 4h17m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set /home/ravi&gt; </code></pre>
<p><strong>Looks like there is no auto-provisioning in the k8s cluster you are running. On <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#local" rel="nofollow noreferrer">local</a> clusters, PersistentVolumes &amp; StorageClasses need to be created manually by the cluster admin.</strong></p> <p>Create a provisioner, reference it from your StorageClass, and set that StorageClass on the volume claim template so the PVC and PV are created automatically (a manual, no-provisioner sketch is shown below).</p> <p>Refer to the official Kubernetes documentation on <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">Configure a Pod to Use a PersistentVolume for Storage</a>, which may help to resolve your issue.</p> <blockquote> <p><strong>To configure a Pod to use a PersistentVolumeClaim for storage, here is a summary of the process:</strong></p> <ol> <li><p>You, as a cluster administrator, create a PersistentVolume backed by physical storage. You do not associate the volume with any Pod.</p> </li> <li><p>You, now taking the role of a developer / cluster user, create a PersistentVolumeClaim that is automatically bound to a suitable PersistentVolume.</p> </li> <li><p>You create a Pod that uses the above PersistentVolumeClaim for storage.</p> </li> </ol> </blockquote> <p>Also, if the volume needs to be accessed by all your pods, try changing <code>accessModes</code> to <code>ReadWriteMany</code>. A <code>subPath</code> needs to be used if each pod wants to have its own directory. Refer to the official Kubernetes document on <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">Using subPath</a>, like below:</p> <pre><code>volumeMounts: - name: data mountPath: /var/lib/influxdb2 subPath: $(POD_NAME) </code></pre>
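<p>As a sketch of the manual route (the node name, host path and capacity are placeholders, the directory must already exist on that node, and a local PV ties the pod to that node):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: influxdb-data-pv
spec:
  capacity:
    storage: 10G
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mnt/influxdb-data        # must exist on the node below
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - &lt;your-node-name&gt;
</code></pre> <p>Then add <code>storageClassName: local-storage</code> under the <code>spec</code> of the <code>volumeClaimTemplates</code> entry in the StatefulSet so the pending PVC can bind to this PV.</p>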
<p>I use <code>kubespray v2.16.0</code>. I am trying to add a master node by following the instructions <a href="https://github.com/kubernetes-sigs/kubespray/blob/v2.16.0/docs/nodes.md#addingreplacing-a-master-node" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kubespray/blob/v2.16.0/docs/nodes.md#addingreplacing-a-master-node</a> . In step 3 I need restart <code>kube-system/nginx-proxy</code>, but I use <code>containerd</code>. What should I do in this case?</p>
<p>I wrote a simple playbook for this; it worked for me:</p> <pre><code>- name: Stop nginx-proxy
  shell: &quot;{{ bin_dir }}/crictl ps | grep nginx-proxy | awk '{print $1}' | xargs {{ bin_dir }}/crictl stop&quot;

- name: Check container state
  shell: &quot;{{ bin_dir }}/crictl ps --name nginx-proxy --state running | grep nginx-proxy&quot;
  changed_when: false
  register: _containerd_state
  until: _containerd_state is success
  retries: 10
  delay: 10
</code></pre>
<p>I am trying to disable all HTTP ingress traffic for a specific API. I tried deleting the ingress and recreating after adding this annotation <code>kubernetes.io/ingress.allow-http: &quot;false&quot;</code> , but that doesn't work too. I can still hit the API and get a response on http://&lt;ingress-dns-name/shipping-address/api</p> <ul> <li>Both Nginx Controller and the API are deployed of course on the same azure Kubernetes cluster.</li> <li>A secret <code>my-tls-secret</code> is created in the default namespace</li> <li>Nginx controller has its own namespace</li> </ul> <p>Here is the ingress yaml file:</p> <pre><code>kind: Ingress metadata: annotations: kubernetes.io/ingress.allow-http: &quot;false&quot; kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: /$2 generation: 1 labels: app.kubernetes.io/managed-by: Helm name: api-shipping-address-ingress namespace: nonprod-dev resourceVersion: &quot;31734103&quot; uid: c9a698a0-3d2e-4f3b-99a9-c16c6fa83774 spec: rules: - http: paths: - backend: service: name: api-shipping-address port: number: 3000 path: /shipping-address(/|$)(.*) pathType: ImplementationSpecific tls: - secretName: my-tls-secret status: loadBalancer: ingress: - ip: 10.86.168.200 </code></pre>
<p>I had the same problem and solved it by adding <code>nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;true&quot;</code> as described in <a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/#server-side-https-enforcement-through-redirect" rel="nofollow noreferrer">the ingress-nginx documentation</a></p>
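<p>For clarity, this is roughly where that annotation goes in the Ingress from the question (a sketch only - the rest of the resource stays as you already have it). Note that <code>kubernetes.io/ingress.allow-http</code> is an annotation of the GCE/GKE ingress controller, so on ingress-nginx the redirect annotation is the one that actually takes effect:</p> <pre><code>metadata:
  name: api-shipping-address-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    # ingress-nginx specific: redirect plain HTTP to HTTPS
    nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;true&quot;
</code></pre>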
<p>I get an error when I try <code>kubectl apply</code> or <code>kubectl edit</code>; <code>kubectl get pods</code> works normally.</p> <p>&quot;error validating data: the server has asked for the client to provide credentials; if you choose to ignore these errors, turn validation off with <code>--validate=false</code>&quot;</p> <p>I edited the config and checked all pods in kube-system; everything is working normally.</p>
<p>I just found the problem. If you get this error, check whether your local kubectl version matches the server version: my computer's kubectl was v1.27.1 while the server was v1.24.10. After switching my local kubectl to v1.24.10 as well, it worked fine!</p> <p><code>kubectl version</code> # shows both your client and server versions</p>
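<p>As an illustration of how to line the versions up (assuming a Linux amd64 workstation - adjust the OS/architecture in the URL), a client matching the server can be downloaded from the official release URL:</p> <pre><code># check the client/server skew first
kubectl version

# download the kubectl release that matches the server (v1.24.10 here)
curl -LO &quot;https://dl.k8s.io/release/v1.24.10/bin/linux/amd64/kubectl&quot;
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
</code></pre>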
<p>I have two mac mini computers and I'm trying to create a K8s cluster using k3d. I can create a cluster on a single host very easily. However, I'm having trouble finding any guidance on creating a cluster that has multiple hosts (machines). Any ideas?</p>
<p>According to your question, you want to create a k8s cluster with k3d across multiple hosts.</p> <p>However, based on the information described on the <a href="https://github.com/rancher/k3d/issues/408#issuecomment-733883237" rel="nofollow noreferrer">k3d Issues page on GitHub</a> (issue still open), this setup:</p> <blockquote> <p>with the current version of k3 is not officially &quot;supported&quot;</p> </blockquote> <p>Thus, at the moment it looks like only the single-host setup is possible with k3d.</p>
<p>I create some pods with containers for which I set <code>ephemeral-storage</code> request and limit, like: (here <code>10GB</code>)</p> <p><a href="https://i.stack.imgur.com/87ew4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/87ew4.png" alt="enter image description here" /></a></p> <p>Unfortunately, for some containers, the <code>ephemeral-storage</code> will be fully filled for unknown reasons. I would like to understand which dirs/files are responsible for filling it all, but I did not find a solution to do it.</p> <p>I tried with <code>df -h</code>, but unfortunately, it will give stats for the whole node and not only for the particular pod/container.</p> <p>Is there a way to retrieve the kubernetes container's <code>ephemeral-storage</code> usage details?</p>
<p>Pods use ephemeral local storage for scratch space, caching, and logs. The kubelet can provide scratch space to Pods using local ephemeral storage to mount emptyDir volumes into containers.</p> <p>Depending on your Kubernetes platform, <strong>you may not be able to easily determine where these files are being written; any filesystem can fill up, but rest assured that disk is being consumed somewhere</strong> (or worse, memory - depending on the specific configuration of your emptyDir and/or Kubernetes platform).</p> <p>Refer to this <a href="https://stackoverflow.com/questions/70931881"><strong>SO</strong></a> link for more details on how the default &amp; allocatable ephemeral-storage in a standard Kubernetes environment is sourced from the filesystem (mounted to /var/lib/kubelet).</p> <p>Also refer to the Kubernetes documentation on how <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage" rel="nofollow noreferrer">ephemeral storage can be managed</a> &amp; how <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-emphemeralstorage-consumption" rel="nofollow noreferrer">ephemeral storage consumption management works</a>.</p> <p><strong>Assuming you're a GCP user, you can get a sense of your ephemeral-storage usage via:</strong> Menu&gt;Monitoring&gt;Metrics Explorer&gt; Resource type: <code>kubernetes node</code> &amp; Metric: <code>Ephemeral Storage</code></p> <p><strong>Try the commands below to see a Kubernetes pod/container's ephemeral-storage usage details:</strong></p> <ol> <li>Try <strong>du -sh /</strong> (run inside the container): <code>du -sh</code> returns the amount of disk space the current directory and everything under it use as a whole, something like: 2.4G.</li> </ol> <p>You can also check the size of a specific directory using the <strong>du -h someDir</strong> command.</p> <p><a href="https://i.stack.imgur.com/ptxQT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ptxQT.png" alt="enter image description here" /></a></p> <ol start="2"> <li><a href="https://blog.px.dev/container-filesystems/" rel="nofollow noreferrer">Inspecting container filesystems</a>: You can use <strong>/bin/df</strong> as a tool to monitor <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-emphemeralstorage-consumption" rel="nofollow noreferrer">ephemeral storage usage</a> on the volumes where ephemeral container data is located, which are /var/lib/kubelet and /var/lib/containers.</li> </ol>
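<p>As a quick illustration (pod and container names are placeholders, and <code>--max-depth</code> assumes GNU <code>du</code> - busybox-based images use <code>-d 1</code> instead), you can run <code>du</code> through <code>kubectl exec</code> to see which directories inside the container are consuming the space:</p> <pre><code># largest top-level directories inside the container
kubectl exec -it &lt;pod-name&gt; -c &lt;container-name&gt; -- du -h --max-depth=1 / 2&gt;/dev/null | sort -h | tail -n 15

# then drill into the biggest one, e.g. /var
kubectl exec -it &lt;pod-name&gt; -c &lt;container-name&gt; -- du -h --max-depth=1 /var 2&gt;/dev/null | sort -h
</code></pre>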
<p>I'm looking for the OS version (such as Ubuntu 20.04.1 LTS) of a container that runs on a Kubernetes server. In other words, I need the OS of the server on which I have Kubernetes running with a number of pods (and containers). I saw there is a library called &quot;kubernetes&quot;, but I didn't find any relevant info on this specific subject. Is there a way to get this info with Python? Many thanks for the help!</p>
<p>If you need to get the OS version of a running container, you should read</p> <p><a href="https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/</a></p> <p>As described there, you can get access to your running pod with:</p> <p><code>kubectl exec --stdin --tty &lt;pod_name&gt; -- /bin/bash</code></p> <p>Then just type <code>cat /etc/os-release</code> and you will see the OS info the pod is running on. In most cases containers run on some Unix system, so you will find the current pod's OS there.</p> <p>You can also install Python or anything else inside your pod, but I do not recommend doing that. Containers ship with the minimum needed to make your app work; for a quick check it is OK, but afterwards just deploy a new container.</p>
<p>I am trying to make this basic example work on docker desktop on windows, I am not using <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">minikube</a>.</p> <p>I managed to reach the service using NodePort with:</p> <pre><code>http://localhost:31429 </code></pre> <p>But when I try <code>http://hello-world.info</code> (made sure to add it in hosts) - <code>404 not found</code>.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 20m default web NodePort 10.111.220.81 &lt;none&gt; 8080:31429/TCP 6m47s ingress-nginx ingress-nginx-controller LoadBalancer 10.107.29.182 localhost 80:30266/TCP,443:32426/TCP 19m ingress-nginx ingress-nginx-controller-admission ClusterIP 10.101.138.244 &lt;none&gt; 443/TCP 19m kube-system kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP,9153/TCP 20m kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE example-ingress &lt;none&gt; hello-world.info 80 21m</code></pre> </div> </div> </p> <p>I am lost, can someone please help ? I also noticed that ADDRESS is empty.</p> <p>Many thanks.</p>
<p><em>Reproduced this case on Docker Desktop 4.1.1, Windows 10 Pro</em></p> <ol> <li><p>Install <a href="https://kubernetes.github.io/ingress-nginx/deploy/#docker-desktop" rel="nofollow noreferrer">Ingress Controller for Docker Desktop</a>:</p> <p><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/cloud/deploy.yaml</code></p> </li> <li><p>As I understand it, @dev1334 used an example from <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">Set up Ingress on Minikube with the NGINX Ingress Controller</a> article. I also tried it with some modifications to the original example.</p> </li> <li><p>In the example for the <code>example-ingress.yaml</code> file in the <code>spec.rules</code> section, the host <code>hello-world.info</code> is specified. Since Docker Desktop for Windows adds to a hosts file in <code>C:\Windows\System32\drivers\etc\hosts</code> during installation the following entry: <code>127.0.0.1 kubernetes.docker.internal</code> I changed the host in the <code>example-ingress.yaml</code> from <code>hello-world.info</code> to <code>kubernetes.docker.internal</code></p> </li> <li><p>But Ingress still didn't work as expected due to the following error: <code>&quot;Ignoring ingress because of error while validating ingress class&quot; ingress=&quot;default/example-ingress&quot; error=&quot;ingress does not contain a valid IngressClass&quot;</code></p> <p>I added this line <code>kubernetes.io/ingress.class: &quot;nginx&quot;</code> to the annotations section in <code>example-ingress.yaml</code></p> </li> </ol> <p>So, the final version of the <code>example-ingress.yaml</code> file is below.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress annotations: kubernetes.io/ingress.class: &quot;nginx&quot; nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - host: kubernetes.docker.internal http: paths: - path: / pathType: Prefix backend: service: name: web port: number: 8080 - path: /v2 pathType: Prefix backend: service: name: web2 port: number: 8080 </code></pre> <p><strong>Test results</strong></p> <pre><code>C:\Users\Andrew_Skorkin&gt;kubectl get po -A NAMESPACE NAME READY STATUS RESTARTS AGE default web-79d88c97d6-c8xnf 1/1 Running 0 112m default web2-5d47994f45-cxtzm 1/1 Running 0 94m ingress-nginx ingress-nginx-admission-create-sjdcq 0/1 Completed 0 114m ingress-nginx ingress-nginx-admission-patch-wccc9 0/1 Completed 1 114m ingress-nginx ingress-nginx-controller-5c8d66c76d-jb4w9 1/1 Running 0 114m ... C:\Users\Andrew_Skorkin&gt;kubectl get svc -A NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 7d15h default web NodePort 10.101.43.157 &lt;none&gt; 8080:32651/TCP 114m default web2 NodePort 10.100.4.84 &lt;none&gt; 8080:30081/TCP 96m ingress-nginx ingress-nginx-controller LoadBalancer 10.106.138.217 localhost 80:30287/TCP,443:32664/TCP 116m ingress-nginx ingress-nginx-controller-admission ClusterIP 10.111.208.242 &lt;none&gt; 443/TCP 116m kube-system kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP,9153/TCP 7d15h C:\Users\Andrew_Skorkin&gt;curl kubernetes.docker.internal Hello, world! Version: 1.0.0 Hostname: web-79d88c97d6-c8xnf C:\Users\Andrew_Skorkin&gt;curl kubernetes.docker.internal/v2 Hello, world! Version: 2.0.0 Hostname: web2-5d47994f45-cxtzm </code></pre>
<p>I am trying to run a local cluster on Mac with M1 chip using Minikube (Docker driver). I enabled ingress addon in Minikube, I have a separate terminal in which I'm running <code>minikube tunnel</code> and I enabled Minikube dashboard, which I want to expose using Ingress. This is my configuration file:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: dashboard-ingress namespace: kubernetes-dashboard spec: rules: - host: dashboard.com http: paths: - backend: service: name: kubernetes-dashboard port: number: 80 pathType: Prefix path: / </code></pre> <p>I also put &quot;dashboard.com&quot; in my /etc/hosts file and it's actually resolving to the right IP, but it's not responding when I put &quot;http://dashboard.com&quot; in a browser or when I try to ping it and I always receive a timeout.</p> <p>NOTE: when I run <code>minikube tunnel</code> I get</p> <pre><code>❗ The service/ingress dashboard-ingress requires privileged ports to be exposed: [80 443] 🔑 sudo permission will be asked for it. </code></pre> <p>I insert my sudo password and then nothing gets printed afterwards. Not sure if this is is an issue or the expected behavior.</p> <p>What am I doing wrong?</p>
<p>I had the same behavior. Apparently, for <code>minikube tunnel</code> to work, you need to map the hostname to &quot;127.0.0.1&quot; in <code>/etc/hosts</code>, instead of the output of <code>minikube ip</code> or the address from the ingress description. That fixed it for me.</p>
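<p>Concretely, the <code>/etc/hosts</code> entry would look like this, using the host name from the question:</p> <pre><code># /etc/hosts
127.0.0.1   dashboard.com
</code></pre> <p>Keep the <code>minikube tunnel</code> terminal open while testing - the tunnel only exists as long as that process is running.</p>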
<p>When using emptydir with limits, the pod get killed/recreated when it consumes the disk more than the limits specified. Which is expected behaviour which we also want.</p> <pre><code> limits: ephemeral-storage: &quot;300Gi&quot; </code></pre> <p>However, when using ephemeral from Storage class instead of local, the pod is not getting recreated instead disk usage becomes 100 % and all future build jobs are failing. Is there a way to recreate the pod when such volumes reaches 100 % ?</p> <pre><code> - ephemeral: name: buiddir volumeClaimTemplate: metadata: labels: app.kubernetes.io/name: k8s-helmchart-agent spec: accessModes: - ReadWriteOnce resources: limits: storage: 220Gi requests: storage: 220Gi storageClassName: ot-iscsi-basic volumeMode: Filesystem </code></pre>
<p>As explained by <a href="https://stackoverflow.com/questions/71787864/node-not-enough-temp-storage-for-ephemeral-storage">RahulKumarShaw</a>, emptyDir is managed by the kubelet on each node: an emptyDir volume is empty at Pod startup, with storage coming locally from the kubelet base directory (usually the root disk) or RAM.</p> <p>You can also refer to this <a href="https://github.com/Azure/AKS/issues/930" rel="nofollow noreferrer">GitHub</a> discussion, where other users have reported the same kind of issue.</p> <p>Conclusion: the observed ephemeral-storage usage differs from the requested size because the filesystem running on the node is what is actually used as ephemeral storage. The kubelet also writes node-level container logs into that first filesystem and treats these similarly to ephemeral local storage. That may be why the ephemeral storage keeps growing even though a 300Gi limit is set for it.</p> <p>You can try to <a href="https://www.ibm.com/support/pages/pod-getting-evicted-due-pod-ephemeral-local-storage-usage-exceeds-total-limit-containers-256mi" rel="nofollow noreferrer">increase the ephemeral storage</a>; the linked page walks through the following steps (for the IBM wml operator):</p> <ol> <li>Make sure wml is in Completed state by running <code>oc get wmlbase -n </code></li> <li>Proceed with this step only when step 1 is in Completed state: <code>oc get csv -A | grep wml</code></li> <li>Patch or increase the ephemeral storage limit to 512MB by editing the wml csv: <code>oc edit csv ibm-cpd-wml-operator.v2.2.0 -n ibm-common-services</code></li> <li>Save the csv, and the new wml operator pod comes up with the increased ephemeral storage under limits.</li> </ol>
<p>I have an application in Docker Container which connects with the host application using SCTP protocol. When this container is deployed on Kubernetes Pods, connectivity with this pod from another pod inside a cluster is working fine. I have tried exposing this pod using a Load Balancer Service and NodePort service externally. When the host application tries to connect to this pod, I am getting an intermittent &quot;Connection Reset By Peer&quot; error. Sometimes after 1st request itself and sometimes after 3rd request. I have tried other SCTP based demo containers other than my application but having the same issue, where after certain no. of request getting connection reset by peer error. So it isn't problem of my appliation.</p> <p>My application is listening to the correct port. Below is the output of the command &quot;netstat -anp&quot; inside the pod.</p> <pre><code>Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 10.244.0.27:80 0.0.0.0:* LISTEN 4579/./build/bin/AM sctp 10.244.0.27:38412 LISTEN 4579/./build/bin/AM </code></pre> <p>My Service file is given below:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-service spec: clusterIP: 10.100.0.2 selector: app: my-app type: NodePort ports: - name: sctp protocol: SCTP port: 38412 targetPort: 38412 nodePort : 31000 - name: tcp protocol: TCP port: 80 targetPort: 80 </code></pre> <p>I have this whole setup on Minikube.I haven't used any CNI. I am stuck due to this. Am I missing something ? Since I am working with K8s for the last 2 weeks only. Please help with this issue, and if possible mention any resource regarding SCTP on Kubernetes, since I could find very little.</p> <p>The following is the tcpdump collected from inside the pod running the sctp connection.</p> <pre><code>tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes 20:59:02.410219 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 100) 10.244.0.1.41024 &gt; amf-6584c544-cvvrs.31000: sctp (1) [INIT] [init tag: 2798567257] [rwnd: 106496] [OS: 2] [MIS: 100] [init TSN: 2733196134] 20:59:02.410260 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 324) amf-6584c544-cvvrs.31000 &gt; 10.244.0.1.41024: sctp (1) [INIT ACK] [init tag: 1165596116] [rwnd: 106496] [OS: 2] [MIS: 2] [init TSN: 4194554342] 20:59:02.410308 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 296) 10.244.0.1.41024 &gt; amf-6584c544-cvvrs.31000: sctp (1) [COOKIE ECHO] 20:59:02.410348 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) amf-6584c544-cvvrs.31000 &gt; 10.244.0.1.41024: sctp (1) [COOKIE ACK] 20:59:02.410552 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 100) 10.244.0.1.5369 &gt; amf-6584c544-cvvrs.31000: sctp (1) [INIT] [init tag: 2156436948] [rwnd: 106496] [OS: 2] [MIS: 100] [init TSN: 823324664] 20:59:02.410590 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 324) amf-6584c544-cvvrs.31000 &gt; 10.244.0.1.5369: sctp (1) [INIT ACK] [init tag: 2865549963] [rwnd: 106496] [OS: 2] [MIS: 2] [init TSN: 1236428521] 20:59:02.410640 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 296) 10.244.0.1.5369 &gt; amf-6584c544-cvvrs.31000: sctp (1) [COOKIE ECHO] 20:59:02.410673 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) 
amf-6584c544-cvvrs.31000 &gt; 10.244.0.1.5369: sctp (1) [COOKIE ACK] 20:59:04.643163 IP (tos 0x2,ECT(0), ttl 64, id 58512, offset 0, flags [DF], proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 &gt; host.minikube.internal.5369: sctp (1) [HB REQ] 20:59:05.155162 IP (tos 0x2,ECT(0), ttl 64, id 58513, offset 0, flags [DF], proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 &gt; charles-02.5369: sctp (1) [HB REQ] 20:59:05.411135 IP (tos 0x2,ECT(0), ttl 64, id 60101, offset 0, flags [DF], proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 &gt; charles-02.41024: sctp (1) [HB REQ] 20:59:05.411293 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.41024 &gt; amf-6584c544-cvvrs.31000: sctp (1) [ABORT] 20:59:06.179159 IP (tos 0x2,ECT(0), ttl 64, id 58514, offset 0, flags [DF], proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 &gt; charles-02.5369: sctp (1) [HB REQ] 20:59:06.403172 IP (tos 0x2,ECT(0), ttl 64, id 58515, offset 0, flags [DF], proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 &gt; host.minikube.internal.5369: sctp (1) [HB REQ] 20:59:06.695155 IP (tos 0x2,ECT(0), ttl 64, id 58516, offset 0, flags [DF], proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 &gt; charles-02.5369: sctp (1) [HB REQ] 20:59:06.695270 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.5369 &gt; amf-6584c544-cvvrs.31000: sctp (1) [ABORT] 20:59:09.584088 IP (tos 0x2,ECT(0), ttl 63, id 1, offset 0, flags [DF], proto SCTP (132), length 116) 10.244.0.1.41024 &gt; amf-6584c544-cvvrs.31000: sctp (1) [DATA] (B)(E) [TSN: 2733196134] [SID: 0] [SSEQ 0] [PPID 0x3c] 20:59:09.584112 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) amf-6584c544-cvvrs.31000 &gt; 10.244.0.1.41024: sctp (1) [ABORT] 20:59:10.530610 IP (tos 0x2,ECT(0), ttl 63, id 1, offset 0, flags [DF], proto SCTP (132), length 40) 10.244.0.1.5369 &gt; amf-6584c544-cvvrs.31000: sctp (1) [SHUTDOWN] 20:59:10.530644 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) amf-6584c544-cvvrs.31000 &gt; 10.244.0.1.5369: sctp (1) [ABORT] </code></pre> <p>The following is the tcpdump collected from the host trying to connect.</p> <pre><code>tcpdump: listening on br-c54f52300570, link-type EN10MB (Ethernet), capture size 262144 bytes 02:29:02.410177 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 100) charles-02.58648 &gt; 192.168.49.2.31000: sctp (1) [INIT] [init tag: 2798567257] [rwnd: 106496] [OS: 2] [MIS: 100] [init TSN: 2733196134] 02:29:02.410282 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 324) 192.168.49.2.31000 &gt; charles-02.58648: sctp (1) [INIT ACK] [init tag: 1165596116] [rwnd: 106496] [OS: 2] [MIS: 2] [init TSN: 4194554342] 02:29:02.410299 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 296) charles-02.58648 &gt; 192.168.49.2.31000: sctp (1) [COOKIE ECHO] 02:29:02.410360 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 36) 192.168.49.2.31000 &gt; charles-02.58648: sctp (1) [COOKIE ACK] 02:29:02.410528 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 100) charles-02.54336 &gt; 192.168.49.2.31000: sctp (1) [INIT] [init tag: 2156436948] [rwnd: 106496] [OS: 2] [MIS: 100] [init TSN: 823324664] 02:29:02.410610 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 324) 192.168.49.2.31000 &gt; 
charles-02.54336: sctp (1) [INIT ACK] [init tag: 2865549963] [rwnd: 106496] [OS: 2] [MIS: 2] [init TSN: 1236428521] 02:29:02.410630 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 296) charles-02.54336 &gt; 192.168.49.2.31000: sctp (1) [COOKIE ECHO] 02:29:02.410686 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 36) 192.168.49.2.31000 &gt; charles-02.54336: sctp (1) [COOKIE ACK] 02:29:04.643276 IP (tos 0x2,ECT(0), ttl 63, id 58512, offset 0, flags [DF], proto SCTP (132), length 92) 192.168.49.2.31000 &gt; charles-02.5369: sctp (1) [HB REQ] 02:29:04.643303 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.5369 &gt; 192.168.49.2.31000: sctp (1) [ABORT] 02:29:05.155288 IP (tos 0x2,ECT(0), ttl 63, id 58513, offset 0, flags [DF], proto SCTP (132), length 92) 192.168.49.2.31000 &gt; charles-02.5369: sctp (1) [HB REQ] 02:29:05.155322 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.5369 &gt; 192.168.49.2.31000: sctp (1) [ABORT] 02:29:06.179324 IP (tos 0x2,ECT(0), ttl 63, id 58514, offset 0, flags [DF], proto SCTP (132), length 92) 192.168.49.2.31000 &gt; charles-02.5369: sctp (1) [HB REQ] 02:29:06.179376 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.5369 &gt; 192.168.49.2.31000: sctp (1) [ABORT] 02:29:06.403290 IP (tos 0x2,ECT(0), ttl 63, id 58515, offset 0, flags [DF], proto SCTP (132), length 92) 192.168.49.2.31000 &gt; charles-02.5369: sctp (1) [HB REQ] 02:29:06.403332 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.5369 &gt; 192.168.49.2.31000: sctp (1) [ABORT] 02:29:09.584056 IP (tos 0x2,ECT(0), ttl 64, id 1, offset 0, flags [DF], proto SCTP (132), length 116) charles-02.58648 &gt; 192.168.49.2.31000: sctp (1) [DATA] (B)(E) [TSN: 2733196134] [SID: 0] [SSEQ 0] [PPID 0x3c] 02:29:09.584132 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 36) 192.168.49.2.31000 &gt; charles-02.58648: sctp (1) [ABORT] 02:29:10.530566 IP (tos 0x2,ECT(0), ttl 64, id 1, offset 0, flags [DF], proto SCTP (132), length 40) charles-02.54336 &gt; 192.168.49.2.31000: sctp (1) [SHUTDOWN] 02:29:10.530668 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 36) 192.168.49.2.31000 &gt; charles-02.54336: sctp (1) [ABORT] </code></pre>
<p>You cannot expose port 38412 via NodePort.</p> <blockquote> <p>If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag <strong>(default: 30000-32767)</strong>. Each node proxies that port (the same port number on every Node) into your Service. Your Service reports the allocated port in its .spec.ports[*].nodePort field.</p> </blockquote> <p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport</a></p> <p>Take a look at this link to understand how to translate the port: <a href="https://stackoverflow.com/questions/71100744/unable-to-expose-sctp-server-running-in-a-kubernetes-pod-using-nodeport">Unable to expose SCTP server running in a kubernetes pod using NodePort</a></p> <p>Also, make sure you are using Calico as a network plugin (minimum version 3.3).</p> <blockquote> <p>Kubernetes 1.12 includes alpha Stream Control Transmission Protocol (SCTP) support. Calico v3.3 has been updated to support SCTP if included in your network policy spec.</p> </blockquote>
<p>We are using Lens for developing on Kubernetes and we have started using Lens Metrics Stack. Is there a way to change time period of visualization? It is set to <code>-60m</code> by default and so far we could not find any way to change that.</p>
<p>Yes, you are right. 60 minutes is the default, according to the information from <a href="https://github.com/lensapp/lens/blob/58a446bd45f9ef21fe633e5cda1c695113d5b5c4/src/common/k8s-api/endpoints/metrics.api.ts#L55" rel="nofollow noreferrer">lensapp/lens GitHub repository</a>:</p> <blockquote> <p>time-range in seconds for data aggregation (default: 3600s = last 1h)</p> </blockquote> <p>and there is no way to change this default value directly from the lens app.</p> <p>I found a mention for an <a href="https://github.com/lensapp/lens/issues/428" rel="nofollow noreferrer">improvement of that behavior</a>:</p> <blockquote> <p>Metrics history beyond 1h #428</p> </blockquote> <p>But at the moment it is still in Open status.</p>
<p>I want to know what will be the appropriate position of a field in the resource yaml file (say e.g. <code>capabilities</code> field in a <code>pod</code> yaml file).</p> <p>I can use <code>kubectl explain pod.spec --recursive | less</code> and then <em>search</em> for <code>capabilities</code> and then scroll up to see who is the parent field of <code>capabilities</code> field and so on.</p> <p>Is there a <em>simpler</em> way to know the hierarchy or parents of field?</p> <p>I want to see hierarchical output something like this without having to scroll up and figure out manually:</p> <p><code>pod &gt; spec &gt; containers &gt; securityContext &gt; capabilities</code></p>
<p>I hope this helps - you can use <code>yq</code> to output your file in many ways.</p> <ul> <li>Get directly the part you are after:</li> </ul> <pre><code>cat my-yaml-file.yaml | yq e '.spec.template.spec.containers' </code></pre> <ul> <li>The <code>-j</code> flag will output it in JSON format:</li> </ul> <pre><code>cat my-yaml-file.yaml | yq e -j '.spec.template.spec.containers' </code></pre> <ul> <li>Delete a specific path from the output:</li> </ul> <pre><code>cat my-yaml-file.yaml | yq e -j 'del(.spec.template.spec.containers)' </code></pre>
<p>We have some services that can be installed in multiple locations with differing configurations. We've been asked to support multi-level configuration options using environment variables set with defaults, configmaps, secrets, and command-line options passed in via <code>helm install --set</code>. The following works, but is very cumbersome as the number of parameters for some of the services are numerous and the Values dot-notation goes a few levels deeper.</p> <pre><code>env: # Set default values - name: MY_VAR value: default-value - name: OTHER_VAR value: default-other-value # Allow configmap to override - name: MY_VAR valueFrom: configMapKeyRef: name: env-configmap key: MY_VAR optional: true - name: OTHER_VAR valueFrom: configMapKeyRef: name: env-configmap key: OTHER_VAR optional: true # Allow secrets to override - name: MY_VAR valueFrom: secretsKeyRef: name: env-secrets key: MY_VAR optional: true - name: OTHER_VAR valueFrom: secretsKeyRef: name: env-secrets key: OTHER_VAR optional: true # Allow 'helm install --set' to override {{- if .Values.env }} {{- if .Values.env.my }} {{- if .Values.env.my.var }} - name: MY_VAR value: {{ .Values.env.my.var }} {{- end }} {{- end }} {{- if .Values.env.other }} {{- if .Values.env.other.var }} - name: OTHER_VAR value: {{ .Values.env.other.var }} {{- end }} {{- end }} {{- end }} </code></pre> <p>Using envFrom for the ConfigMap and Secrets would be nice, but tests and docs show this would not allow the command-line override, since <code>env:</code> and <code>envFrom:</code> doesn't mix in the way that's needed. As the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#container-v1-core" rel="nofollow noreferrer">v1.9</a> and <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#container-v1-core" rel="nofollow noreferrer">v2.1</a> Kubernetes API states:</p> <blockquote> <p>envFrom: List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated.</p> </blockquote> <p>Is there a better way to provide this default-&gt;configmap-&gt;secrets-&gt;cmd-line override precedence?</p>
<p>I found a solution that I mostly like. My issue was caused by giving too much weight to the &quot;Values defined by an Env with a duplicate key will take precedence&quot; comment in the docs, and thinking I needed to exclusively use Env. The defined precedence is exactly what I needed.</p> <p>Here's the helm chart files for my current solution.</p> <p><strong>configmap/templates/configmap.yaml</strong></p> <pre class="lang-yml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: name: my-configmap data: {{- if .Values.env }} {{- toYaml $.Values.env | nindent 2 }} {{- end }} {{- if .Values.applicationYaml }} application.yml: | {{- toYaml $.Values.applicationYaml | nindent 4 }} {{- end }} </code></pre> <p><strong>secrets/templates/secrets.yaml</strong></p> <pre class="lang-yml prettyprint-override"><code>apiVersion: v1 kind: Secret metadata: name: my-secrets type: Opaque data: {{- range $key, $val := .Values.env }} {{ $key }}: {{ $val | b64enc }} {{- end }} stringData: {{- if .Values.applicationYaml }} application.yml: | {{- toYaml $.Values.applicationYaml | nindent 4 }} {{- end }} </code></pre> <p><strong>deployment.yaml</strong></p> <pre class="lang-yml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment ... spec: ... containers: - name: my-deployment {{- if .Values.env }} env: {{- range $key, $val := .Values.env }} - name: {{ $key }} value: {{ $val }} {{- end }} {{- end }} envFrom: - configMapRef: name: my-configmap - secretRef: name: my-secrets volumeMounts: - name: configmap-application-config mountPath: /application/config/configmap/ - name: secrets-application-config mountPath: /application/config/secrets/ volumes: - name: configmap-application-config configMap: name: my-configmap optional: true - name: secrets-application-config secret: secretName: my-secrets optional: true </code></pre> <p>Since this is a Spring Boot app, I used volumeMounts to allow the application.yml default values to be overridden in the ConfigMap and Secrets. The order of precedence from lowest to highest is:</p> <ul> <li>the application's application.yml (v1 in following examples)</li> <li>the configmap's applicationYaml (v2)</li> <li>the secret's applicationYaml (v3)</li> <li>the configmap env (v4)</li> <li>the secret env (v5)</li> <li>the helm install/uninstall --set (v6)</li> </ul> <p>To complete the example, here's test values yaml files and the command-line.</p> <p><strong>app/src/main/resources/application.yml</strong></p> <pre class="lang-yml prettyprint-override"><code>applicationYaml: test: v1: set-from-this-value v2: overridden v3: overridden v4: overridden v5: overridden v6: overridden </code></pre> <p><strong>configmap/values.yaml</strong></p> <pre class="lang-yml prettyprint-override"><code>applicationYaml: test: v2: set-from-this-value v3: overridden v4: overridden v5: overridden v6: overridden env: TEST_V4: set-from-this-value TEST_V5: overridden TEST_V6: overridden </code></pre> <p><strong>secrets/values.yaml</strong></p> <pre class="lang-yml prettyprint-override"><code>applicationYaml: test: v3: set-from-this-value v4: overridden v5: overridden v6: overridden env: TEST_V5: set-from-this-value TEST_V6: overridden </code></pre> <p><strong>command-line</strong></p> <pre class="lang-bash prettyprint-override"><code>helm install --set env.TEST_V6=set-from-this-value ... </code></pre> <p>Ideally, I'd like to be able to use dot-notation instead of TEST_V6 in the env and --set fields, but I'm not finding a way in helm to operate only on the leaves of yaml. 
In other words, I'd like something like <code>range $key, $val</code>, but where the key is equal to &quot;test.v6&quot;. If that was possible, the key could be internally converted to an environment variable name with <code>{{ $key | upper | replace &quot;-&quot; &quot;_&quot; | replace &quot;.&quot; &quot;_&quot; }}</code>.</p>
<p>I am deploying to version 1.16 but the pods are getting crashed below are the pod's error.</p> <p>istiod pod:</p> <p>2023-03-21T11:58:09.768255Z info kube controller &quot;extensions.istio.io/v1alpha1/WasmPlugin&quot; is syncing... controller=crd-controller 2023-03-21T11:58:09.868998Z info kube controller &quot;extensions.istio.io/v1alpha1/WasmPlugin&quot; is syncing... controller=crd-controller 2023-03-21T11:58:09.887383Z info klog k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1alpha1.WasmPlugin: wasmplugins.extensions.istio.io is forbidden: User &quot;system:serviceaccount:istio-system:istiod-service-account&quot; cannot list resource &quot;wasmplugins&quot; in API group &quot;extensions.istio.io&quot; at the cluster scope 2023-03-21T11:58:09.887472Z error watch error in cluster Kubernetes: failed to list *v1alpha1.WasmPlugin: wasmplugins.extensions.istio.io is forbidden: User &quot;system:serviceaccount:istio-system:istiod-service-account&quot; cannot list resource &quot;wasmplugins&quot; in API group &quot;extensions.istio.io&quot; at the cluster scope</p> <p>external-dns: time=&quot;2023-03-21T12:17:22Z&quot; level=fatal msg=&quot;failed to sync cache: timed out waiting for the condition&quot;</p> <p>Version istioctl version:</p> <p>client version: 1.17.1 control plane version: 1.16.2 data plane version: none</p> <p>kubectl version --short:</p> <p>Client Version: v1.24.10 Kustomize Version: v4.5.4 Server Version: v1.24.10-eks-48e63af</p>
<p>The error message says it all: the ServiceAccount <code>istiod-service-account</code> has no privileges on the CRD <code>extensions.istio.io/v1alpha1/WasmPlugin</code>.</p> <p>The solution to your problem is documented here: <a href="https://github.com/istio/istio/issues/36886#issue-1107794465" rel="nofollow noreferrer">https://github.com/istio/istio/issues/36886#issue-1107794465</a></p>
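<p>This typically happens when the istiod ClusterRole installed in the cluster is older than the control plane/CRDs in use (note the skew here: istioctl 1.17.1 vs. control plane 1.16.2), so re-running <code>istioctl install</code> / <code>istioctl upgrade</code> with a matching version usually regenerates the correct RBAC. Purely as an illustration of what the missing grant looks like - the role name is made up and this is not the official Istio manifest - a manual sketch would be:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istiod-wasmplugins-read     # hypothetical name
rules:
  - apiGroups: [&quot;extensions.istio.io&quot;]
    resources: [&quot;wasmplugins&quot;]
    verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: istiod-wasmplugins-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: istiod-wasmplugins-read
subjects:
  - kind: ServiceAccount
    name: istiod-service-account
    namespace: istio-system
</code></pre>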
<p>What will be the equivalent ConfigMap YAML code for the following command line?</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>kubectl create configmap mongo-initdb --from-file=init-mongo.js</code></pre> </div> </div> </p>
<p>There is actually a kubectl command that lets you get the generated yaml for a created config map. In your case:</p> <pre><code>kubectl get configmaps mongo-initdb -o yaml </code></pre>
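<p>For illustration, the generated manifest looks roughly like the sketch below - the <code>data</code> value is simply the full contents of your <code>init-mongo.js</code> file as a multi-line string (the script line shown is just a placeholder). You can also preview it without creating anything by appending <code>--dry-run=client -o yaml</code> to the original <code>kubectl create configmap</code> command.</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-initdb
data:
  init-mongo.js: |
    // the real file contents go here, indented under the key
    db = db.getSiblingDB(&quot;mydb&quot;);   // placeholder line only
</code></pre>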
<p>I'm trying to use the AWS S3 SDK for Java to connect to a bucket from a Kubernetes pod running an Spring Boot application. In order to get external access I had to create a service as follows:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: s3 namespace: production spec: type: ExternalName externalName: nyc3.digitaloceanspaces.com </code></pre> <p>And then I modified my configuration in <code>application.properties</code> specifying the endpoint:</p> <pre><code>cloud.aws.endpoint=s3 cloud.aws.credentials.accessKey=ASD cloud.aws.credentials.secretKey=123 cloud.aws.credentials.instanceProfile=true cloud.aws.credentials.useDefaultAwsCredentialsChain=true </code></pre> <p>Because the SDK builds the host name for the bucket as <code>bucket.s3...</code> I modified my client to use "path style" access with this configuration:</p> <pre><code>@Bean(name = "amazonS3") public AmazonS3Client amazonS3Client(AWSCredentialsProvider credentialsProvider, RegionProvider regionProvider) { EndpointConfiguration endpointConfiguration = new EndpointConfiguration( endpoint, regionProvider.getRegion().getName()); return (AmazonS3Client) AmazonS3ClientBuilder.standard() .withCredentials(credentialsProvider) .withEndpointConfiguration(endpointConfiguration) .withPathStyleAccessEnabled(true) .build(); } </code></pre> <p>But when I try to perform any bucket operation I get the following error regarding the name mismatch with the SSL certificate:</p> <pre><code>javax.net.ssl.SSLPeerUnverifiedException: Certificate for &lt;s3&gt; doesn't match any of the subject alternative names: [*.nyc3.digitaloceanspaces.com, nyc3.digitaloceanspaces.com] </code></pre> <p>How can I avoid this certificate error?</p>
<p>I ran into a similar issue. The TLS certificate presented by the storage endpoint only covers <code>*.nyc3.digitaloceanspaces.com</code> / <code>nyc3.digitaloceanspaces.com</code>, so connecting through a Kubernetes service name like <code>s3</code> can never pass hostname verification. I had to use the real host name directly instead of the K8s service name.</p>
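<p>In other words, skip the ExternalName indirection for the endpoint and point the client at the Spaces host directly - a sketch of the assumed change in <code>application.properties</code>:</p> <pre><code># use the real DigitalOcean Spaces host so the certificate's SANs match
cloud.aws.endpoint=nyc3.digitaloceanspaces.com
</code></pre> <p>Since the client is built with <code>withPathStyleAccessEnabled(true)</code>, the bucket name goes into the request path, so the host stays <code>nyc3.digitaloceanspaces.com</code> and matches the certificate.</p>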
<p>So I am creating a system composed of different components that are installed via Helm charts. Since I needed different customizations for each of the charts, I created my own separate repositories, copied the charts there and added my customizations.</p> <p>The question is: how do I conveniently upgrade the chart versions while keeping the customizations?</p> <p>For some of the components it was an easy task, since the custom templates were moved to a different folder, and when an upgrade is needed I simply replace the old files with the new ones.</p> <p>The issue is that in some charts I needed to alter the original files directly, for example adding necessary stuff in the middle of a template. So, during the upgrade of such a component I have to go through all the files, compare them, remember which changes are mine and then update the whole repository. So yes, you guessed it - it's a manual, inconvenient and error-prone task.</p> <p>I tried looking for good practices on how to maintain such repositories, but I couldn't find any answers, so I'd be more than happy to take any advice.</p>
<p>Unfortunately, it often happens that an application's Helm chart isn't flexible enough for the environment in which you are going to install it.</p> <p>I suggest modifying the templates following the patterns in which they were written and sending PRs to the application's GitHub repository, so your customizations become configurable upstream.</p> <p>Unless your changes are &quot;reasonable&quot; ones that upstream would accept (rather than things that complicate the application for your specific case), there's no other way than to do a <code>git diff</code> between your customized copy and each new upstream release before pulling in the update.</p>
<p>I have a k8s cluster which uses rancher.io/local-path. There is a PV there</p> <pre><code> $ kubectl describe pv pvc-979af6ff-3617-4707-8a2c-b6c4ac00043f Name: pvc-979af6ff-3617-4707-8a2c-b6c4ac00043f Labels: &lt;none&gt; Annotations: pv.kubernetes.io/provisioned-by: rancher.io/local-path Finalizers: [kubernetes.io/pv-protection] StorageClass: local-path Status: Bound Claim: gitlab/gitlab-prometheus-server Reclaim Policy: Delete Access Modes: RWO VolumeMode: Filesystem Capacity: 8Gi Node Affinity: Required Terms: Term 0: kubernetes.io/hostname in [agent-1] Message: Source: Type: HostPath (bare host directory volume) Path: /var/lib/rancher/k3s/storage/pvc-979af6ff-3617-4707-8a2c-b6c4ac00043f_gitlab_gitlab-prometheus-server HostPathType: DirectoryOrCreate Events: &lt;none&gt; </code></pre> <p>I would like to move that PV to another node, agetn-5. How can I achieve that? Important point that PV in question is <code>rancher.io/local-path</code> provisioned.</p>
<p>It isn't possible to do this in a single step.</p> <p>But you can:</p> <ol> <li>Access the K8s cluster where PVC exists</li> <li>Create a Deployment (or single Pod) that mounts PVC on the path you prefer (Example /xyz)</li> <li>Run</li> </ol> <pre><code> kubectl -n NAMESPACE cp POD_NAME:/xyz /tmp/ </code></pre> <p>to locally copy the contents of the /xyz folder to the /tmp path</p> <ol start="4"> <li><p>Logout from K8s cluster</p> </li> <li><p>Login to the K8s cluster where data will be migrated</p> </li> <li><p>Create new PVC</p> </li> <li><p>Create a Deployment (or Single Pod) that mounts the PVC on the path you prefer (Example /new-xyz)</p> </li> <li><p>Run</p> </li> </ol> <pre><code> kubectl -n NAMESPACE cp /tmp/xyz/ POD_NAME:/new-xyz/ </code></pre> <p>to copy the local content to the path /new-xyz</p>
<p>Looking for some ideas on how to expose an http endpoint from kubernetes cluster that shows the docker images tag for each service that is live and up-to-date as services are updated with newer tags.</p> <p>Example something like this: <code>GET endpoint.com/api/metadata</code></p> <pre><code>{ &quot;foo-service&quot;: &quot;registry.com/foo-service:1.0.1&quot;, &quot;bar-service&quot;: &quot;registry.com/bar-service:2.0.1&quot; } </code></pre> <p>when <code>foo-service</code> is deployed with a new tag <code>registry.com/foo-service:1.0.2</code>, I want the endpoint to reflect that change.</p> <p>I can't just store the values as environment variables as it is not guaranteed the service that exposes that endpoint will be updated on each deploy.</p> <p>Some previous I had but does not seem clean:</p> <ul> <li><p>Update an external file in s3 to keep track of image tags on each deployment, and cache/load data on each request to endpoint.</p> </li> <li><p>Update a key in Redis within the cluster and read from that.</p> </li> </ul>
<p>Following on @DavidMaze's suggestion, I ended up using the Kubernetes API to display a formatted version of the deployed services via a REST API endpoint.</p> <p>Steps:</p> <ul> <li>Attach a role with the relevant permissions to a service account.</li> </ul> <pre><code># This creates a service role that can be attached to
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-role
  namespace: dev
rules:
  - verbs:
      - list
    apiGroups:
      - apps
    resources:
      - deployments
</code></pre> <ul> <li>Use one of the available Kubernetes API libraries to connect to the Kubernetes API from within the cluster (Java in my case) and retrieve the deployment list.</li> </ul> <pre><code>implementation io.kubernetes:client-java:15.0.1
</code></pre> <ul> <li>Output the result of the Kubernetes API call and expose a cached version of it through an endpoint (Java snippet).</li> </ul> <pre><code>public Map&lt;String, String&gt; getDeployedServiceVersions() {
    ApiClient client = Config.defaultClient();
    Configuration.setDefaultApiClient(client);

    AppsV1Api api = new AppsV1Api();
    V1DeploymentList v1DeploymentList = api.listNamespacedDeployment(releaseNamespace, null, false, null, null, null, null, null, null, 10, false);

    // Helper method to map deployments to result
    return getServiceVersions(v1DeploymentList);
}
</code></pre>
<p>I am using zsh with oh-my-zsh and I put an entry into my ~/.zshrc to have a shortcut for the option <code>--dry-run=client -o yaml</code> as a variable and be faster for generating yaml files with imperative commands when I enter for example <code>kubectl run test-pod --image=nginx $do</code> I get the error <code>error: Invalid dry-run value (client -o yaml). Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;.</code> as if the equal operator has not been read. it works fine with bash</p> <p>I am using kubectl plugin for auto-completion</p> <p>my zshrc:</p> <pre><code>plugins=(git docker kubectl docker-compose ansible zsh-autosuggestions zsh-syntax-highlighting sudo terraform zsh-completions) alias ls=&quot;exa --icons --group-directories-first&quot; alias ls -l=&quot;exa --icons --group-directories-first -lg&quot; alias ll=&quot;exa --icons --group-directories-first -lg&quot; alias cat=&quot;ccat --bg=dark -G Plaintext=brown -G Punctuation=white&quot; export PATH=$PATH:/usr/local/go/bin # create yaml on-the-fly faster export do='--dry-run=client -o yaml' </code></pre> <p>my bashrc:</p> <pre><code># enable programmable completion features (you don't need to enable # this, if it's already enabled in /etc/bash.bashrc and /etc/profile # sources /etc/bash.bashrc). if ! shopt -oq posix; then if [ -f /usr/share/bash-completion/bash_completion ]; then . /usr/share/bash-completion/bash_completion elif [ -f /etc/bash_completion ]; then . /etc/bash_completion fi fi export do='--dry-run=client -o yaml' </code></pre> <p>and when I execute the command it works fine</p> <pre><code>$ kubectl run test-pod --image=nginx $do apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: test-pod name: test-pod spec: containers: - image: nginx name: test-pod resources: {} dnsPolicy: ClusterFirst restartPolicy: Always status: {} </code></pre>
<p>As @Gairfowl mentioned, the distinction is that zsh (by default) does not perform word splitting for unquoted parameter expansions.</p> <p>By enabling the SH_WORD_SPLIT option, or by using the = flag on a specific expansion, you can enable &quot;regular&quot; word splitting like in bash. To do that, use this syntax:</p> <pre><code>kubectl run test-pod --image=nginx ${=do} </code></pre> <p>or</p> <pre><code>kubectl run test-pod --image=nginx ${do} </code></pre> <p>If these two don't work, try <code>setopt</code>:</p> <pre><code>setopt SH_WORD_SPLIT
kubectl run test-pod --image=nginx $do </code></pre>
<p>I have Kubernetes cluster running on a VM. A truncated overview of the mounts is:</p> <pre class="lang-sh prettyprint-override"><code>$ df -h Filesystem Size Used Avail Use% Mounted on /dev/sda1 20G 4.5G 15G 24% / /dev/mapper/vg001-lv--docker 140G 33G 108G 23% /var/lib/docker </code></pre> <p>As you can see, I added an extra disk to store the docker images and its volumes. However, when querying the node's capacity, the following is returned</p> <pre><code>Capacity: cpu: 12 ephemeral-storage: 20145724Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 65831264Ki nvidia.com/gpu: 1 pods: 110 </code></pre> <p><code>ephemeral-storage</code> is <code>20145724Ki</code> which is 20G, referring to the disk mounted at <code>/</code>.</p> <p>How does Kubelet calculate its <code>ephemeral-storage</code>? Is it simply looking at the disk space available at <code>/</code>? Or is it looking at another folder like <code>/var/log/containers</code>?</p> <p><a href="https://stackoverflow.com/questions/58269443/how-can-we-increase-the-size-of-ephemeral-storage-in-a-kubernetes-worker-node">This is a similar post</a> where the user eventually succumbed to increasing the disk mounted at <code>/</code>.</p>
<p><strong>Some theory</strong></p> <p>By default <code>Capacity</code> and <code>Allocatable</code> for ephemeral-storage in standard kubernetes environment is sourced from filesystem (mounted to /var/lib/kubelet). This is the default location for kubelet directory.</p> <p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals" rel="noreferrer">The kubelet supports the following filesystem partitions:</a></p> <blockquote> <ol> <li><code>nodefs</code>: The node's main filesystem, used for local disk volumes, emptyDir, log storage, and more. For example, <code>nodefs</code> contains <code>/var/lib/kubelet/</code>.</li> <li><code>imagefs</code>: An optional filesystem that container runtimes use to store container images and container writable layers.</li> </ol> <p>Kubelet auto-discovers these filesystems and ignores other filesystems. Kubelet does not support other configurations.</p> </blockquote> <p>From <a href="https://kubernetes.io/docs/concepts/storage/volumes/#resources" rel="noreferrer">Kubernetes website</a> about volumes:</p> <blockquote> <p>The storage media (such as Disk or SSD) of an <code>emptyDir</code> volume is determined by the medium of the filesystem holding the kubelet root dir (typically <code>/var/lib/kubelet</code>).</p> </blockquote> <p>Location for kubelet directory can be configured by providing:</p> <ol> <li><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="noreferrer">Command line parameter during kubelet initialization</a></li> </ol> <p><code>--root-dir</code> string Default: /var/lib/kubelet</p> <ol start="2"> <li>Via <a href="https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/#kubeadm-init-configuration-types" rel="noreferrer">kubeadm with config file</a> (e.g.)</li> </ol> <pre><code>apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration nodeRegistration: kubeletExtraArgs: root-dir: &quot;/data/var/lib/kubelet&quot; </code></pre> <p><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/#customizing-the-kubelet" rel="noreferrer">Customizing kubelet</a>:</p> <blockquote> <p>To customize the kubelet you can add a <code>KubeletConfiguration</code> next to the <code>ClusterConfiguration</code> or <code>InitConfiguration</code> separated by <code>---</code> within the same configuration file. 
This file can then be passed to <code>kubeadm init</code>.</p> </blockquote> <p>When bootstrapping kubernetes cluster using kubeadm, <code>Capacity</code> reported by <code>kubectl get node</code> is equal to the disk capacity mounted into <code>/var/lib/kubelet</code></p> <p>However <code>Allocatable</code> will be reported as: <code>Allocatable</code> = <code>Capacity</code> - <code>10% nodefs</code> using the standard kubeadm configuration, since <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#hard-eviction-thresholds" rel="noreferrer">the kubelet has the following default hard eviction thresholds:</a></p> <ul> <li><code>nodefs.available&lt;10%</code></li> </ul> <p>It can be configured during <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="noreferrer">kubelet initialization</a> with: <code>-eviction-hard</code> mapStringString Default: imagefs.available&lt;15%,memory.available&lt;100Mi,nodefs.available&lt;10%</p> <hr /> <p><strong>Example</strong></p> <p>I set up a test environment for Kubernetes with a master node and two worker nodes (worker-1 and worker-2).</p> <p>Both worker nodes have volumes of the same capacity: 50Gb.</p> <p>Additionally, I mounted a second volume with a capacity of 20Gb for the Worker-1 node at the path <code>/var/lib/kubelet</code>. Then I created a cluster with kubeadm.</p> <p><strong>Result</strong></p> <p><em>From worker-1 node:</em></p> <pre><code>skorkin@worker-1:~# df -h Filesystem Size Used Avail Use% Mounted on /dev/sda1 49G 2.8G 46G 6% / ... /dev/sdb 20G 45M 20G 1% /var/lib/kubelet </code></pre> <p>and</p> <pre><code>Capacity: cpu: 2 ephemeral-storage: 20511312Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 4027428Ki pods: 110 </code></pre> <p>Size of ephemeral-storage is the same as volume mounted at /var/lib/kubelet.</p> <p><em>From worker-2 node:</em></p> <pre><code>skorkin@worker-2:~# df -h Filesystem Size Used Avail Use% Mounted on /dev/sda1 49G 2.7G 46G 6% / </code></pre> <p>and</p> <pre><code>Capacity: cpu: 2 ephemeral-storage: 50633164Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 4027420Ki pods: 110 </code></pre>
<p>I'm using Jenkins configuration as code (JCASC).</p> <p>I'm having a pod template and I want to add NodeSelector + Tolerations. podTemplate doesn't support key of tolerations and NodeSelector so I need to add pod YAML spec...</p> <pre><code> agent: enabled: true podTemplates: podTemplates: jenkins-slave-pod: | - name: jenkins-slave-pod label: global-slave serviceAccount: jenkins idleMinutes: &quot;15&quot; containers: - name: main image: 'xxxxxx.dkr.ecr.us-west-2.amazonaws.com/jenkins-slave:ecs-global' command: &quot;sleep&quot; args: &quot;30d&quot; privileged: true </code></pre> <p>I was thinking of adding yaml: and just configuring the spec of the pod... But when I'm adding yaml: and adding yamlStrategy: merge/overrid it ignores the YAML it and only uses my podTemplate instead.</p> <p>How can I merge/override my podTemplate and add pod with tolerations/nodeSelecotr?</p> <p>Thats the YAML I want to have inside my podTemplate:</p> <pre><code> apiVersion: v1 kind: Pod serviceAccount: jenkins-non-prod idleMinutes: &quot;15&quot; containers: - name: main image: 'xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/jenkins-slave:ecs-global' command: &quot;sleep&quot; args: &quot;30d&quot; privileged: true spec: nodeSelector: karpenter.sh/provisioner-name: jenkins-provisioner tolerations: - key: &quot;jenkins&quot; operator: &quot;Exists&quot; effect: &quot;NoSchedule&quot; </code></pre> <p><a href="https://i.stack.imgur.com/NVjt4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NVjt4.png" alt="enter image description here" /></a></p>
<p>Let me give you a little suggestion - let me know if it works.</p> <p>If you have an up-and-running Jenkins instance (with the Kubernetes plugin installed), you can go to “Manage Jenkins”/“Configure Clouds” and prepare your Pod Templates as you see fit. There you will also find the fields for nodeSelector and Tolerations.</p> <p>Once you have saved the setup you prefer, go to “Manage Jenkins”/“Configuration as Code” and export the JCasC configuration of your Jenkins (click “Download Configuration”).</p> <p>You can repeat this workflow for any new configuration you want to add to your Jenkins.</p>
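<p>If you prefer to stay in JCasC directly, a sketch of what the exported template could look like is below. It is untested, and the exact keys (<code>nodeSelector</code>, <code>yaml</code>, <code>yamlMergeStrategy</code>) come from the Kubernetes plugin's JCasC schema, so double-check them against a configuration exported as described above:</p> <pre><code>jenkins-slave-pod: |
  - name: jenkins-slave-pod
    label: global-slave
    serviceAccount: jenkins
    idleMinutes: &quot;15&quot;
    nodeSelector: &quot;karpenter.sh/provisioner-name=jenkins-provisioner&quot;
    yamlMergeStrategy: &quot;merge&quot;
    yaml: |
      apiVersion: v1
      kind: Pod
      spec:
        tolerations:
          - key: &quot;jenkins&quot;
            operator: &quot;Exists&quot;
            effect: &quot;NoSchedule&quot;
    containers:
      - name: main
        image: &quot;xxxxxx.dkr.ecr.us-west-2.amazonaws.com/jenkins-slave:ecs-global&quot;
        command: &quot;sleep&quot;
        args: &quot;30d&quot;
        privileged: true
</code></pre>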
<p>I got the following error:</p> <pre><code>controller.go:228] unable to sync kubernetes service: Post &quot;https://[::1]:6443/api/v1/namespaces&quot;: dial tcp [::1]:6443: connect: cannot assign requested address
</code></pre> <p>I also have the following warnings in my Kubernetes cluster (3x3 masters/workers, on-prem on KVM) with 3 etcd members on the masters.</p> <pre><code>kubectl get events --field-selector type!=Normal -n kube-system
LAST SEEN   TYPE      REASON      OBJECT                             MESSAGE
3m25s       Warning   Unhealthy   pod/kube-apiserver-kube-master-1   Readiness probe failed: HTTP probe failed with statuscode: 500
3m24s       Warning   Unhealthy   pod/kube-apiserver-kube-master-2   Readiness probe failed: HTTP probe failed with statuscode: 500
3m25s       Warning   Unhealthy   pod/kube-apiserver-kube-master-2   Liveness probe failed: HTTP probe failed with statuscode: 500
3m27s       Warning   Unhealthy   pod/kube-apiserver-kube-master-3   Readiness probe failed: HTTP probe failed with statuscode: 500
17m         Warning   Unhealthy   pod/kube-apiserver-kube-master-3   Liveness probe failed: HTTP probe failed with statuscode: 500
</code></pre> <p>This error does not affect my cluster or my services in any way. It has been there from the beginning. How do I solve it? :D</p>
<p>I had the same error. My coworker had deactivated IPv6 (to test something) and Kubernetes was still trying to use IPv6.</p> <p>After rebooting my master, IPv6 came back and it worked again.</p> <p>I searched for a bit and found this article: <a href="https://kubernetes.io/blog/2021/12/08/dual-stack-networking-ga/" rel="nofollow noreferrer">https://kubernetes.io/blog/2021/12/08/dual-stack-networking-ga/</a>, which explains that you can set <code>ipFamilyPolicy</code> on a Service to one of three options:</p> <ul> <li>SingleStack</li> <li>PreferDualStack</li> <li>RequireDualStack</li> </ul>
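<p>If you want to pin this down explicitly instead of relying on the node's IPv6 state, a minimal sketch of a dual-stack Service using those fields (names and ports are illustrative) looks like this:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamilyPolicy: PreferDualStack   # falls back to single-stack if IPv6 is unavailable
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
</code></pre>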
<p>I have this cron job running on kubernetes:</p> <pre><code># cronjob.yaml apiVersion: batch/v1beta1 kind: CronJob metadata: name: loadjob spec: schedule: &quot;05 10 31 Mar *&quot; successfulJobsHistoryLimit: 3 jobTemplate: spec: template: metadata: # Dictionary name: apiaplication labels: # Dictionary product_id: myprod annotations: vault.security.banzaicloud.io/vault-role: #{KUBERNETES_NAMESPACE}# prometheus.io/path: /metrics prometheus.io/port: &quot;80&quot; prometheus.io/scrape: &quot;true&quot; spec: containers: - name: al-onetimejob image: #{TIMELOAD_IMAGE_TAG}# imagePullPolicy: Always restartPolicy: OnFailure imagePullSecrets: - name: secret </code></pre> <p>In the above cron expression I have set it to today morning 10.05AM using cron syntax schedule: <code>05 10 31 Mar *</code> - but unfortunately when I checked after 10.05 my job (pod) was not running.</p> <p>So I found it's not running as expected at 10.05 using the above expression. Can someone please help me to write the correct cron syntax? Any help would be appreciated. Thanks</p>
<p>Check the <strong>timezone</strong> your cluster is using first by executing the <strong>date</strong> command on a node — most clusters run in <strong>UTC</strong>:</p> <pre><code>$ date
Fri Mar 31 07:21:47 UTC 2023
</code></pre> <p>Now set the schedule in the <strong>CronJob</strong> based on that timezone. Your expression <code>5 10 31 MAR *</code> is syntactically fine, but it fires at 10:05 in the <em>cluster's</em> timezone (UTC here), not at 10:05 your local time, so convert the intended local time to UTC first. Use the <a href="https://crontab.guru/#5_10_31_MAR_*" rel="nofollow noreferrer">crontab</a> site to validate the syntax.</p>
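<p>For example, if your local timezone were IST (UTC+5:30) — substitute your own offset — 10:05 local time is 04:35 UTC. On clusters new enough to use <code>batch/v1</code> CronJobs (the field is on by default from Kubernetes 1.25 and GA in 1.27) you can alternatively keep the local expression and set <code>timeZone</code>; a sketch:</p> <pre><code># Option 1: convert to the cluster timezone (UTC)
spec:
  schedule: &quot;35 4 31 Mar *&quot;   # 10:05 IST expressed in UTC (example offset only)

# Option 2: let Kubernetes do the conversion (batch/v1, Kubernetes 1.25+)
spec:
  schedule: &quot;5 10 31 Mar *&quot;
  timeZone: &quot;Asia/Kolkata&quot;
</code></pre>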
<p>I have a docker image that I want to run inside my django code. Inside that image there is an executable that I have written using c++ that writes it's output to google cloud storage. Normally when I run the django code like this:</p> <pre><code>container = client.V1Container(name=container_name, command=[&quot;//usr//bin//sleep&quot;], args=[&quot;3600&quot;], image=container_image, env=env_list, security_context=security) </code></pre> <p>And manually go inside the container to run this:</p> <pre><code>gcloud container clusters get-credentials my-cluster --region us-central1 --project proj_name &amp;&amp; kubectl exec pod-id -c jobcontainer -- xvfb-run -a &quot;path/to/exe&quot; </code></pre> <p>It works as intended and gives off the output to cloud storage. (I need to use a virtual monitor so I'm using xvfb first). However I must call this through django like this:</p> <pre><code>container = client.V1Container(name=container_name, command=[&quot;xvfb-run&quot;], args=[&quot;-a&quot;,&quot;\&quot;path/to/exe\&quot;&quot;], image=container_image, env=env_list, security_context=security) </code></pre> <p>But when I do this, the job gets created but never finishes and does not give off an output to the storage. When I go inside my container to run <code>ps aux</code> I get this output:</p> <pre><code>USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 2888 1836 ? Ss 07:34 0:00 /bin/sh /usr/bin/xvfb-run -a &quot;path/to/exe&quot; root 16 0.0 1.6 196196 66256 ? S 07:34 0:00 Xvfb :99 -screen 0 1280x1024x24 -nolisten tcp -auth /tmp/xvfb-run.r5gaBO/Xauthority root 35 0.0 0.0 7016 1552 ? Rs 10:31 0:00 ps aux </code></pre> <p>It looks like it's stuck inside my code but my code does not have a loop that it can stuck inside, perhaps there is an error occurring (I don't think so since the exact same command is working when typed manually). If there is an error how can I see the console output? Why is my code get stuck and how can I get my desired output? Could there be an error caused by permissions (The code does a lot of stuff that requires permissions like writing to storage and reading files inside the pod, but like mentioned works normally when i run it via the command line)?</p>
<p>Apparently, for anyone having a similar issue: we fixed it by adding the command we want to run at the end of the <code>Dockerfile</code> instead of passing it as a parameter inside Django's container call, like this:</p> <pre><code>CMD [&quot;./entrypoint.sh&quot;]
</code></pre> <p>entrypoint.sh:</p> <pre><code>#!/bin/sh
xvfb-run -a &quot;path/to/exe&quot;
</code></pre> <p>Then we simply removed the command argument from the container call inside Django, so it looked like this:</p> <pre><code>container = client.V1Container(name=container_name, image=container_image, env=env_list, stdin=True, security_context=security)
</code></pre>
<p>Apologies for a basic question. I have a simple Kubernetes deployment where I have 3 containers (each in their own pod) deployed to a Kubernetes cluster.</p> <p>The <code>RESTapi</code> container is dependent upon the <code>OracleDB</code> container starting. However, the <code>OracleDB</code> container takes a while to startup and by that time the <code>RESTapi</code> container has restarted a number of times due to not being able to connect and ends up in a <code>Backoff</code> state.</p> <p>Is there a more elegant solution for this?</p> <p>I’ve also noticed that when the <code>RESTapi</code> container goes into the <code>Backoff</code> state it stops retrying?</p>
<p><em>This is a community wiki answer posted for better visibility. Feel free to expand it.</em></p> <p>The best approach in this case is to improve your “RESTapi” application so that it provides a more reliable and fault-tolerant service and can reconnect to the database on its own.</p> <p>From <a href="https://github.com/learnk8s/kubernetes-production-best-practices/blob/dfebcd934ee6d6a6b8c4b8f7aa3cfad9a980f592/application-development.md#the-app-retries-connecting-to-dependent-services" rel="nofollow noreferrer">Kubernetes production best practices</a>:</p> <blockquote> <p>When the app starts, it shouldn't crash because a dependency such as a database isn't ready.</p> <p>Instead, the app should keep retrying to connect to the database until it succeeds.</p> <p>Kubernetes expects that application components can be started in any order.</p> </blockquote> <p>Otherwise, you can use a solution based on <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">Init Containers</a>, as sketched below.</p> <p>You can look at this <a href="https://stackoverflow.com/questions/50838141/how-can-we-create-service-dependencies-using-kubernetes" rel="nofollow noreferrer">question on Stack Overflow</a>, which is just one of many others about the practical use of Init Containers for the case described.</p>
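<p>A minimal sketch of the Init Container approach — the service name <code>oracledb</code> and port <code>1521</code> are assumptions based on your description, so adjust them to your actual Oracle Service:</p> <pre><code>spec:
  initContainers:
    - name: wait-for-oracle
      image: busybox:1.36
      # -z is a connect-only check; if your busybox build lacks it, use any other TCP check
      command: ['sh', '-c', 'until nc -z oracledb 1521; do echo waiting for database; sleep 5; done']
  containers:
    - name: restapi
      image: my-restapi:latest   # placeholder image name
</code></pre> <p>The RESTapi container is only started once the init container exits successfully, so it no longer crash-loops while the database is still coming up.</p>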
<p>I am currently having issues trying to get Prometheus to scrape the metrics for my Minikube cluster. Prometheus is installed via the <code>kube-prometheus-stack</code></p> <pre class="lang-bash prettyprint-override"><code>kubectl create namespace monitoring &amp;&amp; \ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts &amp;&amp; \ helm repo update &amp;&amp; \ helm install -n monitoring prometheus-stack prometheus-community/kube-prometheus-stack </code></pre> <p>I am currently accessing Prometheus from an Ingress with a locally signed TLS certificate and it appears it's leading to conflicts as connection keeps getting refused by the cluster.</p> <p>TLS is set up via Minikube ingress <a href="https://minikube.sigs.k8s.io/docs/tutorials/custom_cert_ingress/" rel="nofollow noreferrer">add-on</a>:</p> <pre class="lang-bash prettyprint-override"><code>kubectl create secret -n kube-system tls mkcert-tls-secret --cert=cert.pem --key=key.pem minikube addons configure ingress &lt;&lt;&lt; &quot;kube-system/mkcert-tls-secret&quot; &amp;&amp; \ minikube addons disable ingress &amp;&amp; \ minikube addons enable ingress </code></pre> <p>It seems Prometheus can't get access to <code>http-metrics</code> as a target. I installed Prometheus via a helm chart:</p> <pre class="lang-bash prettyprint-override"><code>kubectl create namespace monitoring &amp;&amp; \ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts &amp;&amp; \ helm repo update &amp;&amp; \ helm install -n monitoring prometheus-stack prometheus-community/kube-prometheus-stack </code></pre> <p>Here is my Prometheus configuration:</p> <pre class="lang-yaml prettyprint-override"><code>global: scrape_interval: 30s scrape_timeout: 10s evaluation_interval: 30s external_labels: prometheus: monitoring/prometheus-stack-kube-prom-prometheus prometheus_replica: prometheus-prometheus-stack-kube-prom-prometheus-0 alerting: alert_relabel_configs: - separator: ; regex: prometheus_replica replacement: $1 action: labeldrop alertmanagers: - follow_redirects: true enable_http2: true scheme: http path_prefix: / timeout: 10s api_version: v2 relabel_configs: - source_labels: [__meta_kubernetes_service_name] separator: ; regex: prometheus-stack-kube-prom-alertmanager replacement: $1 action: keep - source_labels: [__meta_kubernetes_endpoint_port_name] separator: ; regex: http-web replacement: $1 action: keep kubernetes_sd_configs: - role: endpoints kubeconfig_file: &quot;&quot; follow_redirects: true enable_http2: true namespaces: own_namespace: false names: - monitoring rule_files: - /etc/prometheus/rules/prometheus-prometheus-stack-kube-prom-prometheus-rulefiles-0/*.yaml scrape_configs: - job_name: serviceMonitor/monitoring/prometheus-stack-kube-prom-kube-controller-manager/0 honor_timestamps: true scrape_interval: 30s scrape_timeout: 10s metrics_path: /metrics scheme: https authorization: type: Bearer credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt insecure_skip_verify: true follow_redirects: true enable_http2: true relabel_configs: - source_labels: [job] separator: ; regex: (.*) target_label: __tmp_prometheus_job_name replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_label_app, __meta_kubernetes_service_labelpresent_app] separator: ; regex: (kube-prometheus-stack-kube-controller-manager);true replacement: $1 action: keep - source_labels: 
[__meta_kubernetes_service_label_release, __meta_kubernetes_service_labelpresent_release] separator: ; regex: (prometheus-stack);true replacement: $1 action: keep - source_labels: [__meta_kubernetes_endpoint_port_name] separator: ; regex: http-metrics replacement: $1 action: keep - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name] separator: ; regex: Node;(.*) target_label: node replacement: ${1} action: replace - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name] separator: ; regex: Pod;(.*) target_label: pod replacement: ${1} action: replace - source_labels: [__meta_kubernetes_namespace] separator: ; regex: (.*) target_label: namespace replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_name] separator: ; regex: (.*) target_label: service replacement: $1 action: replace - source_labels: [__meta_kubernetes_pod_name] separator: ; regex: (.*) target_label: pod replacement: $1 action: replace - source_labels: [__meta_kubernetes_pod_container_name] separator: ; regex: (.*) target_label: container replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_name] separator: ; regex: (.*) target_label: job replacement: ${1} action: replace - source_labels: [__meta_kubernetes_service_label_jobLabel] separator: ; regex: (.+) target_label: job replacement: ${1} action: replace - separator: ; regex: (.*) target_label: endpoint replacement: http-metrics action: replace - source_labels: [__address__] separator: ; regex: (.*) modulus: 1 target_label: __tmp_hash replacement: $1 action: hashmod - source_labels: [__tmp_hash] separator: ; regex: &quot;0&quot; replacement: $1 action: keep kubernetes_sd_configs: - role: endpoints kubeconfig_file: &quot;&quot; follow_redirects: true enable_http2: true namespaces: own_namespace: false names: - kube-system - job_name: serviceMonitor/monitoring/prometheus-stack-kube-prom-kube-etcd/0 honor_timestamps: true scrape_interval: 30s scrape_timeout: 10s metrics_path: /metrics scheme: http authorization: type: Bearer credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token follow_redirects: true enable_http2: true relabel_configs: - source_labels: [job] separator: ; regex: (.*) target_label: __tmp_prometheus_job_name replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_label_app, __meta_kubernetes_service_labelpresent_app] separator: ; regex: (kube-prometheus-stack-kube-etcd);true replacement: $1 action: keep - source_labels: [__meta_kubernetes_service_label_release, __meta_kubernetes_service_labelpresent_release] separator: ; regex: (prometheus-stack);true replacement: $1 action: keep - source_labels: [__meta_kubernetes_endpoint_port_name] separator: ; regex: http-metrics replacement: $1 action: keep - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name] separator: ; regex: Node;(.*) target_label: node replacement: ${1} action: replace - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name] separator: ; regex: Pod;(.*) target_label: pod replacement: ${1} action: replace - source_labels: [__meta_kubernetes_namespace] separator: ; regex: (.*) target_label: namespace replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_name] separator: ; regex: (.*) target_label: service replacement: $1 action: replace - source_labels: 
[__meta_kubernetes_pod_name] separator: ; regex: (.*) target_label: pod replacement: $1 action: replace - source_labels: [__meta_kubernetes_pod_container_name] separator: ; regex: (.*) target_label: container replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_name] separator: ; regex: (.*) target_label: job replacement: ${1} action: replace - source_labels: [__meta_kubernetes_service_label_jobLabel] separator: ; regex: (.+) target_label: job replacement: ${1} action: replace - separator: ; regex: (.*) target_label: endpoint replacement: http-metrics action: replace - source_labels: [__address__] separator: ; regex: (.*) modulus: 1 target_label: __tmp_hash replacement: $1 action: hashmod - source_labels: [__tmp_hash] separator: ; regex: &quot;0&quot; replacement: $1 action: keep kubernetes_sd_configs: - role: endpoints kubeconfig_file: &quot;&quot; follow_redirects: true enable_http2: true namespaces: own_namespace: false names: - kube-system </code></pre> <p>I am also currently accessing (works just fine) the Prometheus instance outside of the cluster with an Ingress using the TLS certificate:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: prometheusdashboard-ingress namespace: monitoring labels: name: prometheusdashboard-ingress spec: tls: - hosts: - prometheus.demo secretName: mkcert-tls-secret rules: - host: prometheus.demo http: paths: - pathType: Prefix path: &quot;/&quot; backend: service: name: prometheus-stack-kube-prom-prometheus port: number: 9090 </code></pre> <p>Here's the output in the target page of Prometheus:</p> <p><a href="https://i.stack.imgur.com/iOd8O.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iOd8O.png" alt="Prometheus targets" /></a></p> <p>What do I get the stack access to this TLS certificate which I assume is the main issue here?</p>
<h1>After some more analysis</h1> <p>You can look into the <code>kubernetes-dashboard</code> namespace to reach the metrics components.</p> <p>After some debugging, these are the images used by Minikube:</p> <ul> <li><a href="https://hub.docker.com/r/kubernetesui/metrics-scraper" rel="nofollow noreferrer">kubernetesui/metrics-scraper</a></li> <li><a href="https://hub.docker.com/r/kubernetesui/dashboard" rel="nofollow noreferrer">kubernetesui/dashboard:v2.3.1</a></li> <li><a href="https://console.cloud.google.com/gcr/images/k8s-minikube/GLOBAL/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944?tag=v5" rel="nofollow noreferrer">K8S Provisioner</a></li> </ul> <h2>Okay now, what do I do with the namespace?</h2> <p>You can look at the services inside it:</p> <p><code>kubectl get svc -n kubernetes-dashboard</code></p> <pre><code>NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.98.229.151   &lt;none&gt;        8000/TCP   3m55s
kubernetes-dashboard        ClusterIP   10.107.41.221   &lt;none&gt;        80/TCP     3m55s
</code></pre> <p>Using the kubernetes-dashboard service you can access metrics from the services; at <code>localhost:9090/metrics</code> you can get some more metrics like this:</p> <p><a href="https://i.stack.imgur.com/DE3W5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DE3W5.png" alt="enter image description here" /></a></p> <h3>EDIT</h3> <p><em>You can configure Prometheus to access these metrics.</em></p> <h3>EDIT II</h3> <p>I am not a Prometheus expert, but in my opinion the configuration file shown above isn't well suited for mining these metrics. You should consider using a custom configuration file.</p>
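<p>Regarding the first EDIT: if you want a Prometheus installed via <code>kube-prometheus-stack</code> to scrape an extra in-cluster endpoint that has no ServiceMonitor, the chart accepts additional scrape jobs through its values (this assumes your chart version exposes <code>prometheus.prometheusSpec.additionalScrapeConfigs</code>; the target address is a placeholder):</p> <pre><code>prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: extra-endpoint
        metrics_path: /metrics
        static_configs:
          - targets:
              - my-service.my-namespace.svc:8080   # placeholder service:port
</code></pre>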
<p>I'm trying to configure prometheus to target kubernetes pods in a namespace, but only port 8081, even though the pods expose both 8080 and 8081. With this configuration:</p> <pre><code> - job_name: 'api' metrics_path: &quot;/actuator/prometheus&quot; kubernetes_sd_configs: - role: endpoints namespaces: names: - &quot;api&quot; </code></pre> <p>I get two targets for each pod - one for each port. I can't figure out how to narrow down the targets further to just the 8081 ports. Thanks for any help with this!</p>
<p>You can use a regex in <strong>kubernetes_sd_configs</strong> relabelling to match the port and endpoint you are trying to target. Example:</p> <pre><code>- job_name: my_job
  metrics_path: /metrics
  kubernetes_sd_configs:
    - role: endpoints
      namespaces:
        names:
          - api
  relabel_configs:
    - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
      regex: my_service;8081
      action: keep
</code></pre> <p>Here we are using the regex to keep only the endpoints whose <strong>service name</strong> is <code>my_service</code> and whose <strong>port</strong> is <code>8081</code>. Note that <code>__meta_kubernetes_endpoint_port_name</code> matches the port <em>name</em>; if your ports are unnamed, match the port number at the end of the address instead, e.g. source labels <code>[__meta_kubernetes_service_name, __address__]</code> with <code>regex: my_service;.*:8081</code>. If you need to scrape the metrics from different endpoints, modify the regular expression accordingly.</p> <p>Make sure you have put the appropriate service discovery annotations on your Kubernetes services so that Prometheus can discover them using <strong>kubernetes_sd_configs</strong>.</p> <p>For more information have a glance at this <a href="https://se7entyse7en.dev/posts/how-to-set-up-kubernetes-service-discovery-in-prometheus/" rel="nofollow noreferrer">blog</a> written by Lou Marvin Caraig.</p> <p>Check this <a href="https://discuss.prometheus.io/t/how-to-scrape-only-a-single-port-of-a-pod-using-kubernetes-sd-config/203/5" rel="nofollow noreferrer">prometheus forum</a> discussion for more inputs.</p>
<p>I'm trying to deploy a Flask python API to Kubernetes (EKS). I've got the Dockerfile setup, but with some weird things going on.</p> <p><code>Dockerfile</code>:</p> <pre><code>FROM python:3.8 WORKDIR /app COPY . /app RUN pip3 install -r requirements.txt EXPOSE 43594 ENTRYPOINT [&quot;python3&quot;] CMD [&quot;app.py&quot;] </code></pre> <p>I build the image running <code>docker build -t store-api .</code>.</p> <p>When I try running the container and hitting an endpoint, I get <code>socker hung up</code>. However, if I run the image doing</p> <pre class="lang-sh prettyprint-override"><code>docker run -d -p 43594:43594 store-api </code></pre> <p>I can successfully hit the endpoint with a response.</p> <p><strong>My hunch is the port mapping.</strong></p> <p>Now having said all that, running the image in a Kubernetes pod, I cannot get anything back from the endpoint and get <code>socket hung up</code>.</p> <p>My question is, how do I explicitly add port mapping to my Kubernetes deployment/service?</p> <p>Part of the <code>Deployment.yaml</code>:</p> <pre class="lang-yaml prettyprint-override"><code> spec: containers: - image: store-api name: store-api ports: - containerPort: 43594 resources: {} volumeMounts: - mountPath: /usr/local/bin/wait-for-it.sh name: store-api-claim0 imagePullPolicy: Always </code></pre> <p><code>Service.yaml</code>:</p> <pre><code>spec: type: LoadBalancer ports: - port: 43594 protocol: TCP targetPort: 43594 selector: app: store-api status: loadBalancer: {} </code></pre> <p>If I port forward using <code>kubectl port-forward deployment/store-api 43594:43594</code> and post the request to <code>localhost:43594/</code> it works fine.</p>
<p><em>This is a community wiki answer posted for better visibility. Feel free to expand it.</em></p> <p><strong>Problem</strong></p> <p>Output for <code>kubectl describe service &lt;name_of_the_service&gt;</code> command contains <code>Endpoints: &lt;none&gt;</code></p> <p><strong>Some theory</strong></p> <p>From Kubernetes Glossary:</p> <p><a href="https://kubernetes.io/docs/reference/glossary/?all=true#term-service" rel="nofollow noreferrer">Service</a></p> <blockquote> <p>An abstract way to expose an application running on a set of Pods as a network service. The set of Pods targeted by a Service is (usually) determined by a selector. If more Pods are added or removed, the set of Pods matching the selector will change. The Service makes sure that network traffic can be directed to the current set of Pods for the workload.</p> </blockquote> <p><a href="https://kubernetes.io/docs/reference/glossary/?all=true#term-endpoints" rel="nofollow noreferrer">Endpoints</a></p> <blockquote> <p>Endpoints track the IP addresses of Pods with matching selectors.</p> </blockquote> <p><a href="https://kubernetes.io/docs/reference/glossary/?all=true#term-selector" rel="nofollow noreferrer">Selector</a>:</p> <p>Allows users to filter a list of resources based on labels. Selectors are applied when querying lists of resources to filter them by labels.</p> <p><strong>Solution</strong></p> <p>Labels in <code>spec.template.metadata.labels</code> of the Deployment should be the same as in <code>spec.selector</code> from the Service.</p> <p>Additional information related to such issue can be found at <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#does-the-service-have-any-endpoints" rel="nofollow noreferrer">Kubernetes site</a>:</p> <blockquote> <p>If the ENDPOINTS column is &lt;none&gt;, you should check that the spec.selector field of your Service actually selects for metadata.labels values on your Pods.</p> </blockquote>
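<p>A minimal sketch of the pairing, using the names from your manifests — the important part is that the Pod template labels and the Service selector are identical:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: store-api
spec:
  selector:
    matchLabels:
      app: store-api
  template:
    metadata:
      labels:
        app: store-api          # must match the Service selector below
    spec:
      containers:
        - name: store-api
          image: store-api
          ports:
            - containerPort: 43594
---
apiVersion: v1
kind: Service
metadata:
  name: store-api
spec:
  type: LoadBalancer
  selector:
    app: store-api              # selects Pods by their template labels
  ports:
    - port: 43594
      targetPort: 43594
      protocol: TCP
</code></pre> <p>After applying this, <code>kubectl describe service store-api</code> should list Pod IPs under <code>Endpoints</code> instead of <code>&lt;none&gt;</code>.</p>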
<p>I am currently having issues trying to get Prometheus to scrape the metrics for my Minikube cluster. Prometheus is installed via the <code>kube-prometheus-stack</code></p> <pre class="lang-bash prettyprint-override"><code>kubectl create namespace monitoring &amp;&amp; \ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts &amp;&amp; \ helm repo update &amp;&amp; \ helm install -n monitoring prometheus-stack prometheus-community/kube-prometheus-stack </code></pre> <p>I am currently accessing Prometheus from an Ingress with a locally signed TLS certificate and it appears it's leading to conflicts as connection keeps getting refused by the cluster.</p> <p>TLS is set up via Minikube ingress <a href="https://minikube.sigs.k8s.io/docs/tutorials/custom_cert_ingress/" rel="nofollow noreferrer">add-on</a>:</p> <pre class="lang-bash prettyprint-override"><code>kubectl create secret -n kube-system tls mkcert-tls-secret --cert=cert.pem --key=key.pem minikube addons configure ingress &lt;&lt;&lt; &quot;kube-system/mkcert-tls-secret&quot; &amp;&amp; \ minikube addons disable ingress &amp;&amp; \ minikube addons enable ingress </code></pre> <p>It seems Prometheus can't get access to <code>http-metrics</code> as a target. I installed Prometheus via a helm chart:</p> <pre class="lang-bash prettyprint-override"><code>kubectl create namespace monitoring &amp;&amp; \ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts &amp;&amp; \ helm repo update &amp;&amp; \ helm install -n monitoring prometheus-stack prometheus-community/kube-prometheus-stack </code></pre> <p>Here is my Prometheus configuration:</p> <pre class="lang-yaml prettyprint-override"><code>global: scrape_interval: 30s scrape_timeout: 10s evaluation_interval: 30s external_labels: prometheus: monitoring/prometheus-stack-kube-prom-prometheus prometheus_replica: prometheus-prometheus-stack-kube-prom-prometheus-0 alerting: alert_relabel_configs: - separator: ; regex: prometheus_replica replacement: $1 action: labeldrop alertmanagers: - follow_redirects: true enable_http2: true scheme: http path_prefix: / timeout: 10s api_version: v2 relabel_configs: - source_labels: [__meta_kubernetes_service_name] separator: ; regex: prometheus-stack-kube-prom-alertmanager replacement: $1 action: keep - source_labels: [__meta_kubernetes_endpoint_port_name] separator: ; regex: http-web replacement: $1 action: keep kubernetes_sd_configs: - role: endpoints kubeconfig_file: &quot;&quot; follow_redirects: true enable_http2: true namespaces: own_namespace: false names: - monitoring rule_files: - /etc/prometheus/rules/prometheus-prometheus-stack-kube-prom-prometheus-rulefiles-0/*.yaml scrape_configs: - job_name: serviceMonitor/monitoring/prometheus-stack-kube-prom-kube-controller-manager/0 honor_timestamps: true scrape_interval: 30s scrape_timeout: 10s metrics_path: /metrics scheme: https authorization: type: Bearer credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt insecure_skip_verify: true follow_redirects: true enable_http2: true relabel_configs: - source_labels: [job] separator: ; regex: (.*) target_label: __tmp_prometheus_job_name replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_label_app, __meta_kubernetes_service_labelpresent_app] separator: ; regex: (kube-prometheus-stack-kube-controller-manager);true replacement: $1 action: keep - source_labels: 
[__meta_kubernetes_service_label_release, __meta_kubernetes_service_labelpresent_release] separator: ; regex: (prometheus-stack);true replacement: $1 action: keep - source_labels: [__meta_kubernetes_endpoint_port_name] separator: ; regex: http-metrics replacement: $1 action: keep - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name] separator: ; regex: Node;(.*) target_label: node replacement: ${1} action: replace - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name] separator: ; regex: Pod;(.*) target_label: pod replacement: ${1} action: replace - source_labels: [__meta_kubernetes_namespace] separator: ; regex: (.*) target_label: namespace replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_name] separator: ; regex: (.*) target_label: service replacement: $1 action: replace - source_labels: [__meta_kubernetes_pod_name] separator: ; regex: (.*) target_label: pod replacement: $1 action: replace - source_labels: [__meta_kubernetes_pod_container_name] separator: ; regex: (.*) target_label: container replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_name] separator: ; regex: (.*) target_label: job replacement: ${1} action: replace - source_labels: [__meta_kubernetes_service_label_jobLabel] separator: ; regex: (.+) target_label: job replacement: ${1} action: replace - separator: ; regex: (.*) target_label: endpoint replacement: http-metrics action: replace - source_labels: [__address__] separator: ; regex: (.*) modulus: 1 target_label: __tmp_hash replacement: $1 action: hashmod - source_labels: [__tmp_hash] separator: ; regex: &quot;0&quot; replacement: $1 action: keep kubernetes_sd_configs: - role: endpoints kubeconfig_file: &quot;&quot; follow_redirects: true enable_http2: true namespaces: own_namespace: false names: - kube-system - job_name: serviceMonitor/monitoring/prometheus-stack-kube-prom-kube-etcd/0 honor_timestamps: true scrape_interval: 30s scrape_timeout: 10s metrics_path: /metrics scheme: http authorization: type: Bearer credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token follow_redirects: true enable_http2: true relabel_configs: - source_labels: [job] separator: ; regex: (.*) target_label: __tmp_prometheus_job_name replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_label_app, __meta_kubernetes_service_labelpresent_app] separator: ; regex: (kube-prometheus-stack-kube-etcd);true replacement: $1 action: keep - source_labels: [__meta_kubernetes_service_label_release, __meta_kubernetes_service_labelpresent_release] separator: ; regex: (prometheus-stack);true replacement: $1 action: keep - source_labels: [__meta_kubernetes_endpoint_port_name] separator: ; regex: http-metrics replacement: $1 action: keep - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name] separator: ; regex: Node;(.*) target_label: node replacement: ${1} action: replace - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name] separator: ; regex: Pod;(.*) target_label: pod replacement: ${1} action: replace - source_labels: [__meta_kubernetes_namespace] separator: ; regex: (.*) target_label: namespace replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_name] separator: ; regex: (.*) target_label: service replacement: $1 action: replace - source_labels: 
[__meta_kubernetes_pod_name] separator: ; regex: (.*) target_label: pod replacement: $1 action: replace - source_labels: [__meta_kubernetes_pod_container_name] separator: ; regex: (.*) target_label: container replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_name] separator: ; regex: (.*) target_label: job replacement: ${1} action: replace - source_labels: [__meta_kubernetes_service_label_jobLabel] separator: ; regex: (.+) target_label: job replacement: ${1} action: replace - separator: ; regex: (.*) target_label: endpoint replacement: http-metrics action: replace - source_labels: [__address__] separator: ; regex: (.*) modulus: 1 target_label: __tmp_hash replacement: $1 action: hashmod - source_labels: [__tmp_hash] separator: ; regex: &quot;0&quot; replacement: $1 action: keep kubernetes_sd_configs: - role: endpoints kubeconfig_file: &quot;&quot; follow_redirects: true enable_http2: true namespaces: own_namespace: false names: - kube-system </code></pre> <p>I am also currently accessing (works just fine) the Prometheus instance outside of the cluster with an Ingress using the TLS certificate:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: prometheusdashboard-ingress namespace: monitoring labels: name: prometheusdashboard-ingress spec: tls: - hosts: - prometheus.demo secretName: mkcert-tls-secret rules: - host: prometheus.demo http: paths: - pathType: Prefix path: &quot;/&quot; backend: service: name: prometheus-stack-kube-prom-prometheus port: number: 9090 </code></pre> <p>Here's the output in the target page of Prometheus:</p> <p><a href="https://i.stack.imgur.com/iOd8O.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iOd8O.png" alt="Prometheus targets" /></a></p> <p>What do I get the stack access to this TLS certificate which I assume is the main issue here?</p>
<h1>Solution to the problem</h1> <p>After some thorough analysis of my various Kubernetes clusters, I found the following errors:</p> <h3>Docker Desktop environment throws this:</h3> <p><code>Warning Failed 6s (x3 over 30s) kubelet Error: failed to start container &quot;node-exporter&quot;: Error response from daemon: path / is mounted on / but it is not a shared or slave mount</code></p> <p>The linked <a href="https://stackoverflow.com/questions/70556984/kubernetes-node-exporter-container-is-not-working-it-shows-this-error-message">solution</a> attempts to address it, but it didn't work out for me.</p> <h3>Minikube environment</h3> <p>The same error was replicated there too. I opened the Minikube web UI and found which services are related to which ports: <a href="https://i.stack.imgur.com/X7fyw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X7fyw.png" alt="Web UI" /></a></p> <p>From this screenshot we can see which services map to which ports. You can try port-forwarding them, but even that doesn't help much.</p> <p>The only way I can see this working is to apply the configuration through the <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus" rel="nofollow noreferrer">prometheus chart</a>.</p>
<p>This is my eks cluster details: kubectl get all</p> <pre><code>NAME READY STATUS RESTARTS AGE pod/feed-4fqdrrf-fwcc3 1/1 Running 0 64m pod/gst-7adn3njl-fg43 1/1 Running 0 71m pod/ingress-nginx-controller-f567efvef-f653dc 1/1 Running 0 9d pod/app-24dfs2d-m2fdqw 1/1 Running 0 66m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/feed NodePort 10.100.24.643 &lt;none&gt; 8082:30002/TCP 64m service/gst NodePort 10.100.54.543 &lt;none&gt; 8081:30004/TCP 71m service/ingress-nginx-controller LoadBalancer 10.100.643.256 ******************.&lt;region&gt;.elb.amazonaws.com 80:30622/TCP,443:30721/TCP 9d service/ingress-nginx-controller-admission ClusterIP 10.100.654.542 &lt;none&gt; 443/TCP 9d service/kubernetes ClusterIP 10.100.0.7 &lt;none&gt; 443/TCP 14d service/app NodePort 10.100.456.34 &lt;none&gt; 3001:30003/TCP 66m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/feed 1/1 1 1 64m deployment.apps/gst 1/1 1 1 71m deployment.apps/ingress-nginx-controller 1/1 1 1 9d deployment.apps/app 1/1 1 1 66m NAME DESIRED CURRENT READY AGE replicaset.apps/feed-4fqdrrf 1 1 1 64m replicaset.apps/gst-7adn3njl 1 1 1 71m replicaset.apps/ingress-nginx-controller-f567efvef 1 1 1 9d replicaset.apps/app-m2fdqw 1 1 1 66m kubectl logs feed-4fqdrrf-fwcc3 helloworld: listening on port 8082 ~ % kubectl logs gst-7adn3njl-fg43 helloworld: listening on port 8081 ~ % kubectl logs app-24dfs2d-m2fdqw helloworld: listening on port 3001 </code></pre> <p>these are my deployment and service yamls:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: feed labels: app: feed spec: selector: matchLabels: app: feed template: metadata: labels: app: feed spec: containers: - name: feed-container image: **************.dkr.ecr.******.amazonaws.com/feed:latest ports: - containerPort: 8082 --- apiVersion: v1 kind: Service metadata: name: feed spec: ports: - port: 8082 protocol: TCP targetPort: 8082 selector: app: feed type: NodePort </code></pre> <p>Similar for other 2 services and this is my ingress yaml:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-ingress annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/enable-websocket: &quot;true&quot; nginx.org/websocket-services: app nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; spec: ingressClassName: nginx rules: - http: paths: - path: /feed pathType: Prefix backend: service: name: expertfeed port: number: 8082 - path: /app pathType: Prefix backend: service: name: wehealapp port: number: 3001 - path: /socket.io/app pathType: Prefix backend: service: name: app port: number: 3001 - path: /gst pathType: Prefix backend: service: name: gst port: number: 8001 </code></pre> <p>I have multiple get request in all 3 services and I have exposed them like this:</p> <pre><code> app.get('/getrequest1', jsonParser, async (request, response) =&gt; { //my code )} app.get('/getrequest2', jsonParser, async (request, response) =&gt; { //my code )} . . . const port = process.env.PORT || 8082; app.listen(port, () =&gt; { console.log(`helloworld: listening on port ${port}`); }); </code></pre> <p>Similar pattern in all 3 node services. I am getting 404 Not Found nginx when I hit: ******************.elb.amazonaws.com and when I hit: ******************.elb.amazonaws.com/feed getting this error:Cannot GET / If I use this url ******************.elb.amazonaws.com/feed/getreqest1 still getting same error</p>
<p>I found one way to do it that I add my nginx load balancer EXTERNAL-IP to my DNS records and add a CNAME record to it. Here is the YAML below:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-ingress annotations: kubernetes.io/ingress.class: nginx nginx.org/websocket-services: &quot; ************.svc.cluster.local&quot; spec: rules: - host: expertfeed.************* http: paths: - path: / pathType: Prefix backend: service: name: expertfeed port: number: 8082 - host: wehealapp.************* http: paths: - path: /socket.io pathType: Prefix backend: service: name: wehealapp port: number: 3001 - host: gst.************ http: paths: - path: / pathType: Prefix backend: service: name: gst port: number: 8081 </code></pre>
<p>When running the following command:</p> <pre class="lang-sh prettyprint-override"><code>helm upgrade --cleanup-on-fail \
  -- install $releaseName $dockerHubName/$dockerHubRepo:$tag \
  -- namespace $namespace \
  -- create-namespace \
  -- values config.yaml
</code></pre> <p>I get the following error:</p> <pre><code>Error: Failed to download &quot;$dockerHubName/$dockerHubRepo&quot;
</code></pre> <p>I've also tried with different tags, with semantic versioning (tag=&quot;1.0.0&quot;), and there's an image with the tag &quot;latest&quot; on the Docker Hub repo (which is public).</p> <p>This also works with the base JupyterHub image <code>jupyterhub/jupyterhub</code>.</p>
<p>Based on information from the <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/jupyterhub/customizing/user-environment.html#choose-and-use-an-existing-docker-image" rel="nofollow noreferrer">jupyterhub for kubernetes site</a>, to use a different image from jupyter/docker-stacks, the following steps are required:</p> <blockquote> <ol> <li>Modify your config.yaml file to specify the image. For example:</li> </ol> </blockquote> <pre><code> singleuser: image: # You should replace the &quot;latest&quot; tag with a fixed version from: # https://hub.docker.com/r/jupyter/datascience-notebook/tags/ # Inspect the Dockerfile at: # https://github.com/jupyter/docker-stacks/tree/HEAD/datascience-notebook/Dockerfile name: jupyter/datascience-notebook tag: latest </code></pre> <blockquote> <ol start="2"> <li>Apply the changes by following the directions listed in <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/jupyterhub/customizing/extending-jupyterhub.html#apply-config-changes" rel="nofollow noreferrer">apply the changes</a>.</li> </ol> <p>If you have configured prePuller.hook.enabled, all the nodes in your cluster will pull the image before the hub is upgraded to let users use the image. The image pulling may take several minutes to complete, depending on the size of the image.</p> <ol start="3"> <li>Restart your server from JupyterHub control panel if you are already logged in.</li> </ol> </blockquote>
<p>Hi, I am looking in the values.yaml file of the helm chart <code>kube-prometheus-stack</code> and I am not able to find the key/value for the <code>startupProbe</code>. I've got an issue where the Prometheus pod takes longer to start than the default limit of 15 minutes, and I am trying to increase that limit. Editing the <code>statefulset</code> directly does not work either, as the changes get overwritten by the Prometheus operator itself. Could anyone help me with this issue?<br /> Thanks.</p>
<p>You can just override the default <code>startupProbe</code> settings in the <code>Prometheus</code> custom resource as follows:</p> <pre><code>spec:
  containers:
    - name: prometheus
      startupProbe:
        failureThreshold: 120
</code></pre>
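<p>If you would rather keep this in the chart's values.yaml, the kube-prometheus-stack chart passes a containers list through to that same custom resource as a strategic merge patch — a sketch, assuming your chart version exposes <code>prometheus.prometheusSpec.containers</code> (with the default period of 15s, a failureThreshold of 120 gives the pod roughly 30 minutes to start):</p> <pre><code>prometheus:
  prometheusSpec:
    containers:
      - name: prometheus
        startupProbe:
          failureThreshold: 120
          periodSeconds: 15
</code></pre>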
<p>I am trying to understand where this warning is coming from. I have disabled PSP support in my cluster and am indeed using a k8s version lower than 1.25. But I want to understand and disable this warning. Is that possible? Which controller is responsible for emitting this WARNING?</p> <pre><code>kubectl get psp -A
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
</code></pre>
<p>There is a Kubernetes blog post where various aspects of the topic &quot;Warnings&quot; are explained: <a href="https://kubernetes.io/blog/2020/09/03/warnings/#deprecation-warnings" rel="nofollow noreferrer">https://kubernetes.io/blog/2020/09/03/warnings/#deprecation-warnings</a></p> <p>In summary, these deprecation warnings are sent by the API server and have been surfaced by kubectl since version 1.19, and you can’t remove them easily (unless you write your own client with the k8s.io/client-go library and customize its warning handling). <a href="https://kubernetes.io/blog/2020/09/03/warnings/#customize-client-handling" rel="nofollow noreferrer">https://kubernetes.io/blog/2020/09/03/warnings/#customize-client-handling</a></p> <p>The last resort might be to &quot;throw away&quot; the output:</p> <pre><code>kubectl get psp -A 2&gt;&amp;1 | grep -vi &quot;warn&quot; | grep -vi &quot;deprecat&quot;
</code></pre>
<p>I am trying to install using Helm Chart Repository image of Keycloak so that MariaDB Galera is used as database.</p> <p><strong>Installation</strong></p> <pre><code>helm repo add bitnami https://charts.bitnami.com/bitnami helm repo update helm upgrade keycloak bitnami/keycloak --create-namespace --install --namespace default --values values-keycloak.yaml --version 13.3.0 </code></pre> <p>**values-keycloak.yaml **</p> <pre><code>global: storageClass: &quot;hcloud-volumes&quot; auth: adminUser: user adminPassword: &quot;user&quot; tls: enabled: true autoGenerated: true production: true extraEnvVars: - name: KC_DB value: 'mariadb' - name: KC_DB_URL value: 'jdbc:mariadb://mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak;' replicaCount: 1 service: type: ClusterIP ingress: enabled: true hostname: example.com annotations: cert-manager.io/cluster-issuer: letsencrypt-staging kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/proxy-buffer-size: 128k tls: true postgresql: enabled: false externalDatabase: host: &quot;mariadb-galera.default.svc.cluster.local&quot; port: 3306 user: bn_keycloak database: bitnami_keycloak password: &quot;password&quot; </code></pre> <p><strong>Error</strong></p> <pre><code>kubectl logs -n default keycloak-0 keycloak 23:50:06.59 keycloak 23:50:06.59 Welcome to the Bitnami keycloak container keycloak 23:50:06.60 Subscribe to project updates by watching https://github.com/bitnami/containers keycloak 23:50:06.60 Submit issues and feature requests at https://github.com/bitnami/containers/issues keycloak 23:50:06.60 keycloak 23:50:06.60 INFO ==&gt; ** Starting keycloak setup ** keycloak 23:50:06.62 INFO ==&gt; Validating settings in KEYCLOAK_* env vars... keycloak 23:50:06.66 INFO ==&gt; Trying to connect to PostgreSQL server mariadb-galera.default.svc.cluster.local... keycloak 23:50:06.69 INFO ==&gt; Found PostgreSQL server listening at mariadb-galera.default.svc.cluster.local:3306 keycloak 23:50:06.70 INFO ==&gt; Configuring database settings keycloak 23:50:06.78 INFO ==&gt; Enabling statistics keycloak 23:50:06.79 INFO ==&gt; Configuring http settings keycloak 23:50:06.82 INFO ==&gt; Configuring hostname settings keycloak 23:50:06.83 INFO ==&gt; Configuring cache count keycloak 23:50:06.85 INFO ==&gt; Configuring log level keycloak 23:50:06.89 INFO ==&gt; Configuring proxy keycloak 23:50:06.91 INFO ==&gt; Configuring Keycloak HTTPS settings keycloak 23:50:06.94 INFO ==&gt; ** keycloak setup finished! ** keycloak 23:50:06.96 INFO ==&gt; ** Starting keycloak ** Appending additional Java properties to JAVA_OPTS: -Djgroups.dns.query=keycloak-headless.default.svc.cluster.local Changes detected in configuration. Updating the server image. Updating the configuration and installing your custom providers, if any. Please wait. 2023-03-18 23:50:13,551 WARN [org.keycloak.services] (build-22) KC-SERVICES0047: metrics (org.jboss.aerogear.keycloak.metrics.MetricsEndpointFactory) is implementing the internal SPI realm-restapi-extension. This SPI is internal and may change without notice 2023-03-18 23:50:14,494 WARN [org.keycloak.services] (build-22) KC-SERVICES0047: metrics-listener (org.jboss.aerogear.keycloak.metrics.MetricsEventListenerFactory) is implementing the internal SPI eventsListener. This SPI is internal and may change without notice 2023-03-18 23:50:25,703 INFO [io.quarkus.deployment.QuarkusAugmentor] (main) Quarkus augmentation completed in 15407ms Server configuration updated and persisted. 
Run the following command to review the configuration: kc.sh show-config Next time you run the server, just run: kc.sh start --optimized -cf=/opt/bitnami/keycloak/conf/keycloak.conf 2023-03-18 23:50:28,160 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: Base URL: &lt;unset&gt;, Hostname: &lt;request&gt;, Strict HTTPS: false, Path: &lt;request&gt;, Strict BackChannel: false, Admin URL: &lt;unset&gt;, Admin: &lt;request&gt;, Port: -1, Proxied: true 2023-03-18 23:50:30,398 WARN [io.quarkus.agroal.runtime.DataSources] (main) Datasource &lt;default&gt; enables XA but transaction recovery is not enabled. Please enable transaction recovery by setting quarkus.transaction-manager.enable-recovery=true, otherwise data may be lost if the application is terminated abruptly 2023-03-18 23:50:31,267 WARN [io.agroal.pool] (agroal-11) Datasource '&lt;default&gt;': Socket fail to connect to host:address=(host=mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak;)(port=3306)(type=primary). mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak; 2023-03-18 23:50:31,269 WARN [org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator] (JPA Startup Thread: keycloak-default) HHH000342: Could not obtain connection to query metadata: java.sql.SQLNonTransientConnectionException: Socket fail to connect to host:address=(host=mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak;)(port=3306)(type=primary). mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak; at org.mariadb.jdbc.client.impl.ConnectionHelper.connectSocket(ConnectionHelper.java:136) at org.mariadb.jdbc.client.impl.StandardClient.&lt;init&gt;(StandardClient.java:103) at org.mariadb.jdbc.Driver.connect(Driver.java:70) at org.mariadb.jdbc.MariaDbDataSource.getXAConnection(MariaDbDataSource.java:225) at io.agroal.pool.ConnectionFactory.createConnection(ConnectionFactory.java:232) at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:535) at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:516) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at io.agroal.pool.util.PriorityScheduledExecutor.beforeExecute(PriorityScheduledExecutor.java:75) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833) Caused by: java.net.UnknownHostException: mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak; at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:567) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:327) at java.base/java.net.Socket.connect(Socket.java:633) at org.mariadb.jdbc.client.impl.ConnectionHelper.connectSocket(ConnectionHelper.java:130) ... 11 more </code></pre> <p>I would like to get the correct connection. Perhaps this link will help (<a href="https://passe-de-mode.uedasoft.com/tips/software/server/keycloak/keycloak01.html#conclusion" rel="nofollow noreferrer">https://passe-de-mode.uedasoft.com/tips/software/server/keycloak/keycloak01.html#conclusion</a>), but I can't figure it out.</p>
<p>Try using</p> <pre><code>jdbc:mariadb://host/database jdbc:mariadb://mariadb-galera.default.svc.cluster.local/bitnami_keycloak </code></pre> <p>as KC_DB_URL value.</p> <p><a href="https://www.keycloak.org/server/containers" rel="nofollow noreferrer">https://www.keycloak.org/server/containers</a> (chapter “Relevant options”)</p> <p><a href="https://github.com/keycloak/keycloak/blob/fb315b57c3c308d5d5e6646b8cce1f86abf1d523/docs/tests-db.md#mariadb" rel="nofollow noreferrer">https://github.com/keycloak/keycloak/blob/fb315b57c3c308d5d5e6646b8cce1f86abf1d523/docs/tests-db.md#mariadb</a> (“Run tests:” step)</p>
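<p>Applied to the values file from the question, the <code>extraEnvVars</code> block would then look like this (host and database names taken from your values; the port is added explicitly for clarity):</p> <pre><code>extraEnvVars:
  - name: KC_DB
    value: 'mariadb'
  - name: KC_DB_URL
    value: 'jdbc:mariadb://mariadb-galera.default.svc.cluster.local:3306/bitnami_keycloak'
</code></pre>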
<p>Currently, I have this in my <code>springboot</code> <code>application.dev.yaml</code>:</p> <pre><code>datasource:
  url: jdbc:mysql://mysql/$DB_HOST?useSSL=false&amp;allowPublicKeyRetrieval=true
  username: root
  password: root
  driver-class-name: com.mysql.cj.jdbc.Driver
</code></pre> <p>I want to make the username, password and <code>$DB_HOST</code> fields dynamic so that their values can be picked up from the Secret in <code>Kubernetes</code>. The Secret values in <code>Kubernetes</code> are base64-encoded.</p>
<p>In Spring Boot, any property can be overridden by an environment variable of the same name, with the characters changed to upper case and the dots changed to underscores.</p> <p>For example, <code>datasource.url</code> can be overridden by setting an environment variable named <code>DATASOURCE_URL</code>, which you define in Kubernetes.</p> <p>Source: <a href="https://developers.redhat.com/blog/2017/10/04/configuring-spring-boot-kubernetes-secrets#setup" rel="nofollow noreferrer">https://developers.redhat.com/blog/2017/10/04/configuring-spring-boot-kubernetes-secrets#setup</a></p>
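<p>A sketch of wiring this up in the Deployment — the Secret name <code>db-credentials</code> and its keys are assumptions, so use whatever your Secret actually defines (and note that placeholders inside application.yaml need the <code>${DB_HOST}</code> form for Spring to resolve them from the environment):</p> <pre><code>containers:
  - name: my-springboot-app          # placeholder container name
    image: my-springboot-app:latest  # placeholder image
    env:
      - name: DB_HOST
        valueFrom:
          secretKeyRef:
            name: db-credentials
            key: host
      - name: DATASOURCE_USERNAME
        valueFrom:
          secretKeyRef:
            name: db-credentials
            key: username
      - name: DATASOURCE_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-credentials
            key: password
</code></pre>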
<p>I am trying to run OpenEBS on Minikube v1.29.0 with --driver=docker and --kubernetes-version=v1.23.12. I have installed OpenEBS using the following command:</p> <pre><code>kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml </code></pre> <p>However, the openebs-ndm pod is stuck in ContainerCreating status.</p> <p>When I run <code>kubectl describe pod openebs-ndm-bbj6s -n openebs</code>, I get the following error message:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 51s default-scheduler Successfully assigned openebs/openebs-ndm-bbj6s to minikube Warning FailedMount 19s (x7 over 51s) kubelet MountVolume.SetUp failed for volume &quot;udev&quot; : hostPath type check failed: /run/udev is not a directory </code></pre> <p>I have tried installing udev as suggested <a href="https://stackoverflow.com/questions/70507005/etc-udev-directory-not-present-in-docker-container">here</a> on my host but it didn't work. Any ideas on how to solve this issue?</p>
<p>If <code>/run/udev</code> is available on your local machine but not present in the minikube node, try mounting that folder into the minikube cluster using minikube's mount option, because OpenEBS needs access to <code>/run/udev</code> to run properly.</p> <pre><code># Syntax of the minikube mount option
$ minikube start --mount-string=&quot;source_path:destination_path&quot; --mount

# In your case try something like this
$ minikube start --mount-string=&quot;/run/udev:/run/udev&quot; --mount
</code></pre> <p>This will mount <code>/run/udev</code> into the minikube node. Now redeploy the pods and check the pod's volume mounts.</p> <p>Have a glance at a similar error reference in <a href="https://github.com/openebs/openebs/issues/3489" rel="nofollow noreferrer">github issues</a>.</p>
<p>In the context of Kubernetes and the <em>Nginx ingress-controller</em>, I can't grasp the <strong>difference between an <em>external</em> ingress and an <em>internal</em> ingress</strong>.</p> <ul> <li><p><em>how do an <em>external</em> ingress and an <em>internal</em> ingress differ?</em></p> </li> <li><p><em>when should they be used and what use cases do they serve?</em></p> </li> <li><p><em>when should one use <code>ingressClassName: nginx-internal</code>, <code>ingressClassName: nginx</code>, <code>metadata.annotations: [ kubernetes.io/ingress.class: nginx-external ]</code> or similar?</em></p> </li> </ul> <p>I can't find much on the net that discusses this difference or exemplifies how to use them. There's always some implicit knowledge assumed.</p>
<p>Both types route traffic into the service named in the ingress definition, but an external ingress exposes it to the internet. An internal ingress does not: it is reachable only on the local subnet, outside the Kubernetes &quot;bubble&quot; but still inside your network. Internal ingresses are used when you need to allow connections from another workload or cluster on the same network without exposing the service to the internet.</p>
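<p>In practice the split usually comes from running two ingress-nginx controller installations — one whose Service is a public LoadBalancer and one whose Service is an internal LoadBalancer — each watching its own ingress class. A sketch using the class names from your question (the controller setup itself is assumed to exist):</p> <pre><code># Internet-facing: picked up by the public controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-app
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
---
# Internal-only: picked up by the controller behind an internal load balancer
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-app
spec:
  ingressClassName: nginx-internal
  rules:
    - host: app.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
</code></pre>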
<p>I have one service with 10 pods inside my cluster, and a client has been sending it multiple requests through the master node for more than an hour. I modified my ingress resource with some annotations; the first annotation I used only changes the load-balancing method to EWMA:</p> <pre><code>Annotations: nginx.ingress.kubernetes.io/load-balance: ewma
</code></pre> <p>While the client was requesting the service through the master node, the requests were handled as expected, meaning the client received responses from different pods on different agent nodes inside the cluster.</p> <p>But when I changed the annotation to this one:</p> <pre><code>nginx.ingress.kubernetes.io/upstream-hash-by: &quot;ewma&quot;
</code></pre> <p>the client received the answer from the same pod and the same node every time, even though I sent requests to the service 5 times per second for more than 30 minutes. Are they implemented correctly? And why do they behave differently?</p>
<p><em>This is a community wiki answer posted for better visibility. Feel free to expand it.</em></p> <p><strong>Root cause:</strong></p> <p>The goal is for the pods to receive client requests evenly (load balancing) while making good use of machine resources.</p> <p>Given that, you need to choose between the two annotations from the question:</p> <pre class="lang-yaml prettyprint-override"><code>nginx.ingress.kubernetes.io/load-balance: ewma
</code></pre> <p>and</p> <pre class="lang-yaml prettyprint-override"><code>nginx.ingress.kubernetes.io/upstream-hash-by: ewma
</code></pre> <p><strong>Solution:</strong></p> <p>Using the <code>nginx.ingress.kubernetes.io/load-balance: ewma</code> annotation is the preferable solution for this purpose.</p> <p>Based on the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#load-balance" rel="nofollow noreferrer">documentation</a>: <code>load-balance</code> uses the Peak EWMA method for routing. In contrast, <code>upstream-hash-by</code> load balances by consistent hashing of the client IP or another variable, which deliberately keeps sending a given client to the same pod. (Note that <code>upstream-hash-by: &quot;ewma&quot;</code> hashes on the literal string &quot;ewma&quot; rather than a variable, so every request maps to the same upstream — exactly the behaviour you observed.)</p> <p>For more information, see this article <a href="https://flugel.it/kubernetes/kubernetes-nginx-ingress-consistent-hash-subset-load-balancer/" rel="nofollow noreferrer">Kubernetes Nginx Ingress: Consistent hash subset load balancer</a>.</p>
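<p>For completeness, the EWMA behaviour can also be enabled cluster-wide instead of per Ingress by setting the same key in the ingress-nginx controller's ConfigMap (its name and namespace depend on how the controller was installed):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # adjust to your installation
  namespace: ingress-nginx
data:
  load-balance: &quot;ewma&quot;
</code></pre>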
<p>I have a configuration for a Postgres deployment, something like this...</p> <p>at the volumeMounts level:</p> <pre><code>  volumeMounts:
    - mountPath: /var/lib/postgresql/data
      name: postgres-persistance-storage-new
</code></pre> <p>at the volumes level:</p> <pre><code>  volumes:
    - name: postgres-persistance-storage-new
      persistentVolumeClaim:
        claimName: postgres-pv-claim-new
</code></pre> <p>the PersistentVolumeClaim spec:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim-new # name of PVC essential for identifying the storage data
  labels:
    app: postgres
    tier: database
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre> <p>My problem is the following: when I delete the PersistentVolumeClaim via a kubectl command and then launch my Postgres deployment YAML spec again, the PersistentVolume always seems to still be there, as the logs in the Postgres container say:</p> <pre><code>PostgreSQL Database directory appears to contain a database; Skipping initialization </code></pre> <p>How is that possible?</p>
<p>When you delete a PVC, if there is a resource that still uses it (for example, if the volume is attached to a Deployment with running Pods), the PVC remains ACTIVE.</p> <p>This is the reason: <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storage-object-in-use-protection" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storage-object-in-use-protection</a></p> <p>When you reapply the YAML describing the Deployment, Kubernetes performs a rolling update.</p> <blockquote> <p>Rolling updates allow Deployments' update to take place with zero downtime by incrementally updating Pods instances with new ones. The new Pods will be scheduled on Nodes with available resources.</p> </blockquote> <p><a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/</a></p> <p>This means that your &quot;old&quot; Pod remains active until the &quot;new&quot; one is up and running (and if the new one keeps failing, the &quot;old&quot; one is never killed, so the PVC is never destroyed and keeps holding the application data).</p> <p>To conclude, I suggest you delete the resources that use the PVC (the PostgreSQL Deployment in this case) before deleting the PVC and re-installing them.</p>
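<p>As a rough sketch of that cleanup order (the PVC name is taken from the question, the Deployment and manifest names are placeholders):</p> <pre><code># Delete the Deployment first so no Pod is using the volume anymore
kubectl delete deployment &lt;postgres-deployment-name&gt;

# Now the PVC can actually be removed, since it is no longer in use
kubectl delete pvc postgres-pv-claim-new

# Optionally verify that no old PersistentVolume is still Bound/Released
kubectl get pv

# Re-apply the manifests; a fresh volume is provisioned and PostgreSQL initializes it
kubectl apply -f postgres.yaml
</code></pre>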
<p>We are setting up a Fleet Server in Kubernetes. It has been given a CA and states that it's running, but we cannot shell into it, and the logs are nothing but the following:</p> <blockquote> <p>E0817 09:12:10.074969 927 leaderelection.go:330] error retrieving resource lock default/elastic-agent-cluster-leader: leases.coordination.k8s.io &quot;elastic-agent-cluster-leader&quot; is forbidden: User &quot;system:serviceaccount:default:elastic-agent&quot; cannot get resource &quot;leases&quot; in API group &quot;coordination.k8s.io&quot; in the namespace &quot;default&quot;</p> </blockquote> <p>I can find very little information on this ever happening, let alone a resolution. Any information pointing to a possible resolution would be massively helpful!</p>
<p>You need to make sure that you have applied the <code>ServiceAccount</code>, <code>ClusterRole</code> and <code>ClusterRoleBinding</code> from the setup files.</p> <p>An example of these can be found in the quickstart documentation.</p> <p><a href="https://www.elastic.co/guide/en/cloud-on-k8s/2.2/k8s-elastic-agent-fleet-quickstart.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/2.2/k8s-elastic-agent-fleet-quickstart.html</a></p> <p>Service Account</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: elastic-agent
  namespace: default
</code></pre> <p>Cluster Role</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-agent
rules:
- apiGroups: [&quot;&quot;] # &quot;&quot; indicates the core API group
  resources:
  - pods
  - nodes
  - namespaces
  verbs:
  - get
  - watch
  - list
- apiGroups: [&quot;coordination.k8s.io&quot;]
  resources:
  - leases
  verbs:
  - get
  - create
  - update
</code></pre> <p>Cluster Role Binding</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-agent
subjects:
- kind: ServiceAccount
  name: elastic-agent
  namespace: default
roleRef:
  kind: ClusterRole
  name: elastic-agent
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>I have a k8s service running on my PC locally and it works fine. The database and the services: everything works fine and is accessible via localhost in my browser and in Postman. However, I want to be able to connect to it from another PC or a mobile app on the same router/internet. So I have tried port forwarding, and I've tried mapping using <em><strong>netsh interface portproxy add v4tov4 listenport=8014 listenaddress=0.0.0.0 connectport=8014 connectaddress=172.18.xx.xx</strong></em>, but no luck yet. My question is: &quot;Is this possible, or am I just wasting my time?&quot;. I have spent up to 5 days on this already and I'm not sure whether it is possible or not. Please, any lead or solution will be highly appreciated.<a href="https://i.stack.imgur.com/PnRby.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PnRby.png" alt="enter image description here" /></a></p>
<p>Try listening on <code>127.0.0.1</code>, or completely remove the <code>listenaddress</code> which should make it default to <code>localhost</code>.</p>
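<p>For reference, the two variants of that suggestion, applied to the exact command from the question (the connect address is left as-is), would look roughly like this:</p> <pre><code>:: Variant 1: listen on 127.0.0.1 explicitly
netsh interface portproxy add v4tov4 listenport=8014 listenaddress=127.0.0.1 connectport=8014 connectaddress=172.18.xx.xx

:: Variant 2: omit listenaddress so it falls back to the default (localhost)
netsh interface portproxy add v4tov4 listenport=8014 connectport=8014 connectaddress=172.18.xx.xx
</code></pre>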
<p>My goal is to have a kubernetes cluster running with stateless replicas of some kind of frontend which can schedule jobs.</p> <p>Afterwards my plan is to have multiple runners (as pods) which are polling for scheduled jobs. Once they receive job data they should launch job executors (also as a pod) on demand.</p> <p>This should look somehow like this:</p> <pre><code> pod A pod B ________ __________ | | kube | | | runner | ------&gt; | executor | |________| |__________| . . pod A' . pod B' ________ __________ | | kube | | | runner | ------&gt; | executor | |________| |__________| </code></pre> <p>Basically I am kind of inspired by the gitlab architecture. <a href="https://docs.gitlab.com/runner/executors/kubernetes.html#kubernetes-executor-interaction-diagram" rel="nofollow noreferrer">https://docs.gitlab.com/runner/executors/kubernetes.html#kubernetes-executor-interaction-diagram</a></p> <p>Therefore it would be really awesome if you could give me a hint where to start with. I was considering to use the python api of the cluster but in this I have to somehow make the cluster administration available to the runner.</p> <p>So I'm really thankful for any idea or approach how to realize such an architecture of pod lifecycling.</p> <p>Do you see any security risks with such an approach or is this reasonable securitywise?</p> <p>Thank you very much for your help and hints</p>
<p>Here is a <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">list of available Kubernetes API clients</a>. You will use a <strong><a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/" rel="nofollow noreferrer">service account</a></strong> to make authorized calls with your API client against the <strong>kube-apiserver</strong> running on the <strong>controlplane</strong> node. I made a first draft of an idea for your architecture; maybe it is inspiring to you. Good luck with your project! As an alternative to the database, you may want to use a message queue.</p> <p><a href="https://i.stack.imgur.com/SFBOV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SFBOV.png" alt="enter image description here" /></a></p>
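<p>To address the security question: you can keep the runner's permissions narrow by binding its service account to a namespaced Role that only allows managing executor Jobs/Pods. The sketch below is only an illustration; the names, the <code>ci</code> namespace and the exact verbs are assumptions you would adapt to your runner's real needs.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: runner
  namespace: ci
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: runner-executor
  namespace: ci
rules:
# Let the runner create and clean up executor Jobs
- apiGroups: [&quot;batch&quot;]
  resources: [&quot;jobs&quot;]
  verbs: [&quot;create&quot;, &quot;get&quot;, &quot;list&quot;, &quot;watch&quot;, &quot;delete&quot;]
# Let the runner follow the executor Pods and read their logs
- apiGroups: [&quot;&quot;]
  resources: [&quot;pods&quot;, &quot;pods/log&quot;]
  verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: runner-executor
  namespace: ci
subjects:
- kind: ServiceAccount
  name: runner
  namespace: ci
roleRef:
  kind: Role
  name: runner-executor
  apiGroup: rbac.authorization.k8s.io
</code></pre> <p>The runner Pods then run with <code>serviceAccountName: runner</code> and use one of the API clients linked above to create executor Jobs, without ever needing cluster-admin credentials.</p>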
<p>In k8s, a DNS name can stay unchanged while an IP address is not flexible. The cnosdb GitHub repo uses IPs to configure the clusters, but in Kubernetes the cluster should use DNS names. Please provide a workaround for this configuration.</p> <p>I'd like to know the best practice for deploying cnosdb in k8s.</p>
<p>I don't know the code of the tool you indicate in the question, but not offering the possibility to configure a DNS name, and requiring a static IP instead, is generally an anti-pattern, especially on Kubernetes.</p> <p>However, network plug-ins like Calico allow you to reserve a static IP address for your Pod.</p> <p>Take a look here: <a href="https://docs.tigera.io/calico/latest/networking/ipam/use-specific-ip" rel="nofollow noreferrer">https://docs.tigera.io/calico/latest/networking/ipam/use-specific-ip</a></p>
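<p>As a minimal sketch of what the linked Calico feature looks like, assuming Calico IPAM is in use and the address comes from one of your configured IP pools (the Pod name, image and IP below are placeholders):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: cnosdb-meta-0
  annotations:
    # Calico assigns this specific address to the Pod
    cni.projectcalico.org/ipAddrs: '[&quot;10.244.0.50&quot;]'
spec:
  containers:
  - name: cnosdb-meta
    image: cnosdb/cnosdb:latest   # placeholder image tag
</code></pre> <p>That said, for a database on Kubernetes the more idiomatic route is a StatefulSet with a headless Service, which gives each replica a stable DNS name, so a DNS-based configuration option in the tool itself would still be preferable.</p>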
<p>I recently updated the nginx-controller in my kubernetes cluster. The current behavior when multiple ingress resources are defined is &quot;choose winner&quot;, meaning that if there are 2 ingress resources for the same host, nginx will only define one of them instead of merging the paths.</p> <p>The way to get around this seems to be documented <a href="https://github.com/nginxinc/kubernetes-ingress/tree/v2.3.0/examples/mergeable-ingress-types" rel="nofollow noreferrer">here</a>, which describes the solution of adding <code>nginx.org/mergeable-ingress-type</code> annotations.</p> <p>However, to the best of my knowledge, cert-manager does not create the ingresses with this annotation, resulting in a 404 for the challenge URL (or worse: a 404 for the site I'm trying to create a certificate for).</p> <p>Here is the ingress serving the site:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: farmers
  annotations:
    nginx.org/mergeable-ingress-type: &quot;master&quot;
spec:
  ingressClassName: nginx
  rules:
  - host: farmers.klino.me
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
</code></pre> <p>And the ingress of the challenge:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0,::/0
  generateName: cm-acme-http-solver-
  generation: 3
  labels:
    acme.cert-manager.io/http-domain: &quot;...&quot;
    acme.cert-manager.io/http-token: &quot;...&quot;
    acme.cert-manager.io/http01-solver: &quot;true&quot;
  name: cm-acme-http-solver-7677r
  namespace: staging
spec:
  rules:
  - host: farmers.klino.me
    http:
      paths:
      - backend:
          service:
            name: cm-acme-http-solver-hkx6g
            port:
              number: 8089
        path: /.well-known/acme-challenge/&lt;...reducted...&gt;
        pathType: ImplementationSpecific
</code></pre> <p>In summary, I have 2 ingress resources for the host <code>farmers.klino.me</code>: one serving the site at <code>/</code>, the other, created automatically by cert-manager, serving the challenge at <code>/.well-known/acme-challenge/&lt;...reducted...&gt;</code>.</p> <p>How should I configure the nginx-ingress and cert-manager to merge the ingresses and work together without human intervention?</p>
<p><strong>EDIT:</strong> There appear to be two different NGINX Ingress controller deployments. One is called <a href="https://artifacthub.io/packages/helm/nginx/nginx-ingress" rel="nofollow noreferrer">nginx-ingress</a> and the other is called <a href="https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx" rel="nofollow noreferrer">ingress-nginx</a>.</p> <p>The one you want is <a href="https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx" rel="nofollow noreferrer">ingress-nginx</a>. This version automatically merges the ingress rules created by cert-manager, and instantly all my certificates renewed. The controller documentation can be found <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">here</a> and the cert-manager documentation found <a href="https://cert-manager.io/docs/tutorials/acme/nginx-ingress/" rel="nofollow noreferrer">here</a> is applicable.</p> <p>Hope this helps!</p> <p><strong>Original Post:</strong> I'm also experiencing the same issue. I've not yet tried it, but it looks like you can configure an <code>ingressTemplate</code> for your given issuer. For example:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: ...
spec:
  acme:
    server: ...
    privateKeySecretRef:
      name: ...
    solvers:
    - http01:
        ingress:
          ingressTemplate:
            metadata:
              labels:
                foo: &quot;bar&quot;
              annotations:
                &quot;nginx.ingress.kubernetes.io/whitelist-source-range&quot;: &quot;0.0.0.0/0,::/0&quot;
                &quot;nginx.org/mergeable-ingress-type&quot;: &quot;minion&quot;
                &quot;traefik.ingress.kubernetes.io/frontend-entry-points&quot;: &quot;http&quot;
</code></pre> <p>See <a href="https://cert-manager.io/v1.6-docs/configuration/acme/http01/#ingresstemplate" rel="nofollow noreferrer">https://cert-manager.io/v1.6-docs/configuration/acme/http01/#ingresstemplate</a> for more details.</p> <p>This doesn't seem like the most efficient way to handle this issue, though, so if anyone has a better solution I'm all ears.</p>
<p>I am getting - when installing <a href="https://cilium.io" rel="nofollow noreferrer">Cilium</a>:</p> <pre><code>Warning FailedScheduling 4m21s (x17 over 84m) default-scheduler 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. </code></pre> <p>How can I see the rule and can I change it?</p> <p>If I do <code>kubectl describe node</code>, id do not have anyy <code>nodeAffinity</code> settings. And the Node has <code>Taints:&lt;none&gt;</code></p>
<p>Run <code>$ kubectl get pods</code>; it shows a Pending status (<code>kubectl get pods -o wide</code>).</p> <p>To describe the pod, run <code>$ kubectl describe pod POD_NAME</code>; it shows a warning as part of the events. If that doesn't help, try, as suggested by @<strong>Chris</strong>, running <code>kubectl get pod &lt;name&gt; -o yaml</code>. There you'll find <code>spec.affinity</code>.</p> <p>After identifying which anti-affinity rule triggers the warning, you can choose to</p> <blockquote> <p>either rectify the rule or make some changes in the cluster to support the rule.</p> </blockquote> <p><strong>For example:</strong> take the case of trying to deploy 4 replicas of an Nginx Deployment with a <strong>podAntiAffinity</strong> rule in a 3-node cluster. Here the last replica cannot be scheduled because no available node is left (a minimal spec for such a rule is sketched below).</p> <blockquote> <p>You can choose to reduce the number of replicas, increase the number of nodes, adjust the rule to use soft/preference requirements, or remove the <strong>podAntiAffinity</strong> rule.</p> </blockquote>
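<p>For illustration only (the <code>app: nginx</code> label and the topology key are generic placeholders), such a rule inside the Deployment's pod template could look like this; the commented-out variant is the softer &quot;preferred&quot; form that still schedules when no node is left:</p> <pre class="lang-yaml prettyprint-override"><code>    spec:
      affinity:
        podAntiAffinity:
          # Hard requirement: at most one matching pod per node;
          # a 4th replica on a 3-node cluster stays Pending.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: nginx
            topologyKey: kubernetes.io/hostname
          # Softer alternative: prefer to spread, but do not block scheduling.
          # preferredDuringSchedulingIgnoredDuringExecution:
          # - weight: 100
          #   podAffinityTerm:
          #     labelSelector:
          #       matchLabels:
          #         app: nginx
          #     topologyKey: kubernetes.io/hostname
</code></pre>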
<p>The below container.conf works fine in Kubernetes 1.23 but fails after migrating to 1.25. I have also specified the deamonset that I have used to push the logs to cloudwatch. When I look into the logs of the fluentd deamonset I could see a lot of below errors</p> <p>2023-04-03 01:32:06 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: &quot;2023-04-03T01:32:02.9256618Z stdout F [2023-04-03T01:32:02.925Z] DEBUG transaction-677fffdfc4-tc4rx-18/TRANSPORTER: NATS client pingTimer: 1&quot;</p> <pre><code> container.conf ============== &lt;source&gt; @type tail @id in_tail_container_logs @label @containers path /var/log/containers/*.log exclude_path [&quot;/var/log/containers/fluentd*&quot;] pos_file /var/log/fluentd-containers.log.pos tag * read_from_head true &lt;parse&gt; @type json time_format %Y-%m-%dT%H:%M:%S.%NZ &lt;/parse&gt; &lt;/source&gt; &lt;label @containers&gt; &lt;filter **&gt; @type kubernetes_metadata @id filter_kube_metadata &lt;/filter&gt; &lt;filter **&gt; @type record_transformer @id filter_containers_stream_transformer &lt;record&gt; stream_name ${tag_parts[3]} &lt;/record&gt; &lt;/filter&gt; &lt;match **&gt; @type cloudwatch_logs @id out_cloudwatch_logs_containers region &quot;#{ENV.fetch('AWS_REGION')}&quot; log_group_name &quot;/k8s-nest/#{ENV.fetch('AWS_EKS_CLUSTER_NAME')}/containers&quot; log_stream_name_key stream_name remove_log_stream_name_key true auto_create_stream true &lt;buffer&gt; flush_interval 5 chunk_limit_size 2m queued_chunks_limit_size 32 retry_forever true &lt;/buffer&gt; &lt;/match&gt; &lt;/label&gt; Deamonset ========== apiVersion: apps/v1 kind: DaemonSet metadata: labels: k8s-app: fluentd-cloudwatch name: fluentd-cloudwatch namespace: kube-system spec: selector: matchLabels: k8s-app: fluentd-cloudwatch template: metadata: labels: k8s-app: fluentd-cloudwatch annotations: iam.amazonaws.com/role: fluentd spec: serviceAccount: fluentd serviceAccountName: fluentd containers: - env: - name: AWS_REGION value: us-west-1 - name: AWS_EKS_CLUSTER_NAME value: dex-eks-west #image: 'fluent/fluentd-kubernetes-daemonset:v1.1-debian-cloudwatch' image: 'fluent/fluentd-kubernetes-daemonset:v1.15.3-debian-cloudwatch-1.1' imagePullPolicy: IfNotPresent name: fluentd-cloudwatch resources: limits: memory: 200Mi requests: cpu: 100m memory: 200Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /config-volume name: config-volume - mountPath: /fluentd/etc name: fluentdconf - mountPath: /var/log name: varlog - mountPath: /var/lib/docker/containers name: varlibdockercontainers readOnly: true - mountPath: /run/log/journal name: runlogjournal readOnly: true dnsPolicy: ClusterFirst initContainers: - command: - sh - '-c' - cp /config-volume/..data/* /fluentd/etc image: busybox imagePullPolicy: Always name: copy-fluentd-config resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /config-volume name: config-volume - mountPath: /fluentd/etc name: fluentdconf terminationGracePeriodSeconds: 30 volumes: - configMap: defaultMode: 420 name: fluentd-config name: config-volume - emptyDir: {} name: fluentdconf - hostPath: path: /var/log type: '' name: varlog - hostPath: path: /var/lib/docker/containers type: '' name: varlibdockercontainers - hostPath: path: /run/log/journal type: '' name: runlogjournal </code></pre>
<p>I had the same problem a while ago.</p> <p>The underlying reason is the log format: after the dockershim removal (your 1.25 nodes use containerd), container logs are written in the CRI format <code>&lt;timestamp&gt; &lt;stdout|stderr&gt; &lt;P|F&gt; &lt;message&gt;</code> instead of Docker's JSON, so the <code>@type json</code> parser in <code>container.conf</code> no longer matches the lines.</p> <blockquote> <p>It seems to be an issue between the logs being emitted from the container and what is being written to the log file. Something is prefixing all logs with the &lt;stdout/stderr&gt; &lt;?&gt; </p> </blockquote> <p>Ref. <a href="https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434#issuecomment-747173567" rel="nofollow noreferrer">https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434#issuecomment-747173567</a></p> <p>Try following the discussion in the link pasted above; I solved it by replacing the <code>&lt;parse&gt;</code> section with a regexp parser:</p> <pre><code>    &lt;parse&gt;
      @type regexp
      expression /^(?&lt;time&gt;.+) (?&lt;stream&gt;stdout|stderr)( (?&lt;logtag&gt;.))? (?&lt;log&gt;.*)$/
    &lt;/parse&gt;
</code></pre>
<p>How can I get <strong>real client IP</strong> from Nginx ingress load balancer in GKE? According to the online resource, I have configured the External Traffic Policy: Local and added use-proxy-protocol: &quot;true&quot; property also.</p> <p>But still, I'm seen the GKE node IP/interface in the log, not the real client IP.</p> <p>My load balancer service -&gt;</p> <pre><code>Name: ingress-nginx-controller Namespace: ingress-nginx Labels: app.kubernetes.io/component=controller app.kubernetes.io/instance=ingress-nginx app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=ingress-nginx app.kubernetes.io/version=0.41.2 helm.sh/chart=ingress-nginx-3.10.1 Annotations: networking.gke.io/load-balancer-type: Internal Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: xx.xxx.xx.xx IPs: xx.xx.xxx.xx LoadBalancer Ingress: xx.xx.xx.xx Port: http 80/TCP TargetPort: http/TCP NodePort: http 32118/TCP Endpoints: xx.x.xx.xx:80 Port: https 443/TCP TargetPort: https/TCP NodePort: https 31731/TCP Endpoints: xx.x.xx.xxx:443 Session Affinity: None External Traffic Policy: Local HealthCheck NodePort: 30515 </code></pre> <p>My config map -&gt;</p> <pre><code>apiVersion: v1 data: access-log-path: /var/log/nginx-logs/access.log compute-full-forwarded-for: &quot;true&quot; enable-real-ip: &quot;true&quot; enable-underscores-in-headers: &quot;true&quot; error-log-path: /var/log/nginx-logs/error.log large-client-header-buffers: 4 64k log-format-upstream: $remote_addr - $request_id - [$proxy_add_x_forwarded_for] - $remote_user [$time_local] &quot;$request&quot; $status $body_bytes_sent &quot;$http_referer&quot; &quot;$http_user_agent&quot; $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status proxy-read-timeout: &quot;240&quot; proxy-send-timeout: &quot;240&quot; real-ip-header: proxy_protocol use-forwarded-headers: &quot;true&quot; use-proxy-protocol: &quot;true&quot; </code></pre>
<p>I tried to use the following nginx ConfigMap:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.5.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: &quot;true&quot;
  enable-real-ip: &quot;true&quot;
  use-forwarded-headers: &quot;true&quot;
  proxy-real-ip-cidr: &quot;&lt;pods_cidr&gt;,&lt;services_cidr&gt;,&lt;load_balance_ip&gt;/32&quot;
  use-proxy-protocol: &quot;false&quot;
</code></pre> <p>And I added the statement <code>externalTrafficPolicy: Local</code> to the nginx Service that gets the load balancer assigned:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.5.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Local
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer
  loadBalancerIP: &lt;load_balance_ip&gt;
</code></pre> <p>I didn't have success. Then I also tried to configure <em>ip-masq-agent</em> with the following ConfigMap:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
    - &lt;load_balance_ip&gt;/32
    - &lt;pods_cidr&gt;
    - &lt;services_cidr&gt;
    masqLinkLocal: false
    resyncInterval: 30s
</code></pre> <p>Then I deleted the DaemonSet <em>ip-masq-agent</em> and GKE automatically recreated it.</p> <p>After that, I got my GKE cluster working as expected.</p> <p>You can find more information about <em>ip-masq-agent</em> on GKE at <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent</a></p>
<p>How can I do a health check for cnosdb in Kubernetes? As we know, health checks are important in Kubernetes; commonly a binary should provide a RESTful API that Kubernetes can probe periodically for health checking. Does cnosdb provide such an API?</p> <p>What is the best practice for health checking cnosdb in Kubernetes?</p>
<p>It's a very general question, which in my opinion leaves too much room for interpretation.</p> <p>In general, however, you should think about configuring a monitoring stack, perhaps based on Prometheus, Grafana and Alertmanager, that allows you to scrape the metrics exposed by the DB.</p> <p>Otherwise you can do something very rough with the following cURL calls:</p> <pre><code>curl http://127.0.0.1:21001/metrics
curl http://127.0.0.1:21002/metrics
curl http://127.0.0.1:21003/metrics
</code></pre> <p><a href="https://docs.cnosdb.com/en/cluster/cluster.html#meta-custer-startup-process" rel="nofollow noreferrer">https://docs.cnosdb.com/en/cluster/cluster.html#meta-custer-startup-process</a></p>
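<p>If you only need Kubernetes-native liveness/readiness checks, the same endpoint can be probed directly from the Pod spec. This is just a sketch: it assumes the container really serves <code>/metrics</code> on port 21001 as in the cURL examples above, and the names, image and timings are placeholders to adjust for your deployment.</p> <pre class="lang-yaml prettyprint-override"><code>    containers:
    - name: cnosdb-meta
      image: cnosdb/cnosdb-meta   # placeholder image
      ports:
      - containerPort: 21001
      livenessProbe:
        httpGet:
          path: /metrics
          port: 21001
        initialDelaySeconds: 30
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /metrics
          port: 21001
        periodSeconds: 10
</code></pre>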
<p>Kuberhealthy deployment health check fails frequently saying [Prometheus]: [FIRING:2] kuberhealthy (ClusterUnhealthy kuberhealthy http kuberhealthy observability/kube-prometheus-stack-prometh</p> <h2>Steps to reproduce:</h2> <p>kuberhealthy runs a deployment check regularly While the deployment seems to complete it fails to report the status on kuberhealthy service</p> <pre><code>$ k get events -nkuberhealthy | grep deployment | tail 12m Normal ScalingReplicaSet deployment/deployment-deployment Scaled down replica set deployment-deployment-XXX to 2 12m Normal ScalingReplicaSet deployment/deployment-deployment Scaled up replica set deployment-deployment-XXXto 4 12m Normal ScalingReplicaSet deployment/deployment-deployment Scaled down replica set deployment-deployment-XXX to 0 3m31s Normal ScalingReplicaSet deployment/deployment-deployment Scaled up replica set deployment-deployment-XXX to 4 3m9s Normal ScalingReplicaSet deployment/deployment-deployment Scaled up replica set deployment-deployment-XXX to 2 3m9s Normal ScalingReplicaSet deployment/deployment-deployment Scaled down replica set deployment-deployment-69459d778b to 2 3m9s Normal ScalingReplicaSet deployment/deployment-deployment Scaled up replica set deployment-deployment-XXX to 4 3m Normal ScalingReplicaSet deployment/deployment-deployment Scaled down replica set deployment-deployment-XXX to 0 63m Warning FailedToUpdateEndpoint endpoints/deployment-svc Failed to update endpoint kuberhealthy/deployment-svc: Operation cannot be fulfilled on endpoints &quot;deployment-svc&quot;: the object has been modified; please apply your changes to the latest version and try again 53m Warning FailedToUpdateEndpoint </code></pre> <h2>debug logs</h2> <pre><code>$ k logs deployment-XXX -nkuberhealthy time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Found instance namespace: kuberhealthy&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Kuberhealthy is located in the kuberhealthy namespace.&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Debug logging enabled.&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=debug msg=&quot;[/app/deployment-check]&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Parsed CHECK_IMAGE: XXXX&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Parsed CHECK_IMAGE_ROLL_TO: XXX&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Found pod namespace: kuberhealthy&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Performing check in kuberhealthy namespace.&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Parsed CHECK_DEPLOYMENT_REPLICAS: 2&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Parsed CHECK_SERVICE_ACCOUNT: default&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Check time limit set to: 14m46.760673918s&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Parsed CHECK_DEPLOYMENT_ROLLING_UPDATE: true&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Check deployment image will be rolled from [XXX] to [XXXX]&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=debug msg=&quot;Allowing this check 14m46.760673918s to finish.&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Kubernetes client created.&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Waiting for node to become ready before starting check.&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=debug msg=&quot;Checking if the kuberhealthy endpoint: XXX is 
ready.&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=debug msg=&quot;XXX.&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=debug msg=&quot;Kuberhealthy endpoint: XXX is ready. Proceeding to run check.&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Starting check.&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Wiping all found orphaned resources belonging to this check.&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Attempting to find previously created service(s) belonging to this check.&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=debug msg=&quot;Found 1 service(s).&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=debug msg=&quot;Service: kuberhealthy&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Did not find any old service(s) belonging to this check.&quot; time=&quot;2022-12-16T12:36:43Z&quot; level=info msg=&quot;Attempting to find previously created deployment(s) belonging to this check.&quot; time=&quot;2022-12-16T12:36:44Z&quot; level=debug msg=&quot;Found 1 deployment(s)&quot; time=&quot;2022-12-16T12:36:44Z&quot; level=debug msg=kuberhealthy time=&quot;2022-12-16T12:36:44Z&quot; level=info msg=&quot;Did not find any old deployment(s) belonging to this check.&quot; time=&quot;2022-12-16T12:36:44Z&quot; level=info msg=&quot;Successfully cleaned up prior check resources.&quot; time=&quot;2022-12-16T12:36:44Z&quot; level=info msg=&quot;Creating deployment resource with 2 replica(s) in kuberhealthy namespace using image XXX]&quot; time=&quot;2022-12-16T12:36:44Z&quot; level=info msg=&quot;Creating container using image [XXX]&quot; time=&quot;2022-12-16T12:36:44Z&quot; level=info msg=&quot;Created deployment resource.&quot; time=&quot;2022-12-16T12:36:44Z&quot; level=info msg=&quot;Creating deployment in cluster with name: deployment-deployment&quot; time=&quot;2022-12-16T12:36:44Z&quot; level=info msg=&quot;Watching for deployment to exist.&quot; time=&quot;2022-12-16T12:36:44Z&quot; level=debug msg=&quot;Received an event watching for deployment changes: deployment-deployment got event ADDED&quot; time=&quot;2022-12-16T12:36:47Z&quot; level=debug msg=&quot;Received an event watching for deployment changes: deployment-deployment got event MODIFIED&quot; time=&quot;2022-12-16T12:36:48Z&quot; level=debug msg=&quot;Received an event watching for deployment changes: deployment-deployment got event MODIFIED&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=debug msg=&quot;Received an event watching for deployment changes: deployment-deployment got event MODIFIED&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=info msg=&quot;Deployment is reporting Available with True.&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=info msg=&quot;Created deployment in kuberhealthy namespace: deployment-deployment&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=info msg=&quot;Creating service resource for kuberhealthy namespace.&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=info msg=&quot;Created service resource.&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=info msg=&quot;Creating service in cluster with name: deployment-svc&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=info msg=&quot;Watching for service to exist.&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=debug msg=&quot;Received an event watching for service changes: ADDED&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=info msg=&quot;Cluster IP found:XXX&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=info msg=&quot;Created service in 
kuberhealthy namespace: deployment-svc&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=debug msg=&quot;Retrieving a cluster IP belonging to: deployment-svc&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=info msg=&quot;Found service cluster IP address: XXX&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=info msg=&quot;Looking for a response from the endpoint.&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=debug msg=&quot;Setting timeout for backoff loop to: 3m0s&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=info msg=&quot;Beginning backoff loop for HTTP GET request.&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=debug msg=&quot;Making GET to XXX&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=debug msg=&quot;Got a 401&quot; time=&quot;2022-12-16T12:36:53Z&quot; level=info msg=&quot;Retrying in 5 seconds.&quot; time=&quot;2022-12-16T12:36:58Z&quot; level=error msg=&quot;error occurred making request to service in cluster: could not get a response from the given address: XXX&quot; time=&quot;2022-12-16T12:36:58Z&quot; level=info msg=&quot;Cleaning up deployment and service.&quot; time=&quot;2022-12-16T12:36:58Z&quot; level=info msg=&quot;Attempting to delete service deployment-svc in kuberhealthy namespace.&quot; time=&quot;2022-12-16T12:36:58Z&quot; level=debug msg=&quot;Checking if service has been deleted.&quot; time=&quot;2022-12-16T12:36:58Z&quot; level=debug msg=&quot;Delete service and wait has not yet timed out.&quot; time=&quot;2022-12-16T12:36:58Z&quot; level=debug msg=&quot;Waiting 5 seconds before trying again.&quot; time=&quot;2022-12-16T12:37:03Z&quot; level=info msg=&quot;Attempting to delete deployment in kuberhealthy namespace.&quot; time=&quot;2022-12-16T12:37:03Z&quot; level=debug msg=&quot;Checking if deployment has been deleted.&quot; time=&quot;2022-12-16T12:37:03Z&quot; level=debug msg=&quot;Delete deployment and wait has not yet timed out.&quot; time=&quot;2022-12-16T12:37:03Z&quot; level=debug msg=&quot;Waiting 5 seconds before trying again.&quot; time=&quot;2022-12-16T12:37:08Z&quot; level=info msg=&quot;Finished clean up process.&quot; time=&quot;2022-12-16T12:37:08Z&quot; level=error msg=&quot;Reporting errors to Kuberhealthy: [could not get a response from the given address: XXX&quot; </code></pre>
<p><strong>It looks like your Kubernetes deployment is working fine. It's common behavior of k8s to let the k8s clients (controllers) know to try again; it's perfectly fine and you can safely ignore that.</strong></p> <p><em><strong>Let me try to explain the generic cause of such a warning in the events:</strong></em></p> <p>The K8s API Server implements something called <code>&quot;Optimistic concurrency control&quot;</code> (sometimes referred to as optimistic locking). This is a method where, instead of locking a piece of data and preventing it from being read or updated while the lock is in place, the piece of data includes a version number. Every time the data is updated, the version number increases.</p> <p>When updating the data, the version number is checked to see if it has increased between the time the client read the data and the time it submits the update. If this happens, the update is rejected and the client must re-read the new data and try to update it again. The result is that when two clients try to update the same data entry, only the first one succeeds.</p> <p>You can also refer to this <a href="https://stackoverflow.com/questions/57957516/kubernetes-failed-to-update-endpoints-warning">SO</a> for relevant information.</p> <p>Also please go through <a href="https://komodor.com/blog/kubernetes-health-checks-everything-you-need-to-know/" rel="nofollow noreferrer">Kubernetes Health Checks: Everything You Need to Know</a> for more info.</p> <p><strong>EDIT:</strong> If you're running a version which doesn't contain the latest fixes/patches, I recommend you upgrade your master and nodes to a newer minor version, for example 1.12.10-gke.20 or 22. This will help isolate whether the issue is the GKE version or some other underlying issue; please go through the <a href="https://cloud.google.com/kubernetes-engine/docs/release-notes#november_11_2019" rel="nofollow noreferrer">GKE release notes</a> for more information. The nodes are unhealthy and deployments are experiencing timeouts; upgrading might resolve the issue.</p> <p>It seems the above warning message appears when you run the current file from Kubernetes Engine &gt; Workloads &gt; YAML. If so, then to solve the problem you need to find the exact YAML file and edit it as per your requirement; after that you can run a command like <code>$ kubectl apply -f nginx-1.yaml</code>. If this does not work, then please check the details of your executed operation (including the deployment of the pod).</p> <p>Please check the <a href="https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/" rel="nofollow noreferrer">Debug Pods</a> instructions, which describe some common troubleshooting steps to help users debug applications that are deployed into Kubernetes and are not behaving correctly.</p> <p>You may also visit the troubleshooting document for <a href="https://kubernetes.io/docs/tasks/debug/" rel="nofollow noreferrer">Monitoring, Logging, and Debugging</a> for more information.</p> <p>Also go through another similar <a href="https://stackoverflow.com/questions/60942257/kubernetes-high-latency-access-svc-ip-on-other-nodes-but-works-well-in-nodeport">SO</a>, which may help to resolve your issue.</p>
<p>I am trying to create a k8s tls secret (data and key) using a pfx certificate that I would like to retrieve from Azure key vault using Azure CLI. It doesn't work because Azure downloads the public part(certificate) and the secret part(key) separately and then creating the k8s secret fails. Here's my script.</p> <pre><code>cert_key=cert.key cert_pem=cert.pem cert_pfx=cert.pfx keyvault_name=akv_name cert_name=akv_cert_name secret_name=cert_pw_secret #Get the password of the pfx certificate secret_value=$(az keyvault secret show --name $secret_name --vault-name $keyvault_name -o tsv --query value) #Download the secret az keyvault secret download --file $cert_key --name $cert_name --vault-name $keyvault_name #Download the public part of the certificate az keyvault certificate download --file $cert_pfx --name $cert_name --vault-name $keyvault_name #Convert pfx to pem using openssl #This will return an error: #139728379422608:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1239: #139728379422608:error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error:tasn_dec.c:405:Type=PKCS12 openssl pkcs12 -in $cert_pfx -clcerts -nokeys -out $cert_pem -password pass:$secret_value #Convert pfx to key using openssl #This will return an error: #140546015532944:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1239: #140546015532944:error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error:tasn_dec.c:405:Type=PKCS12 openssl pkcs12 -in $cert_pfx -nocerts -out $cert_key -password pass:$secret_value #Create the k8s secret kubectl create secret tls secret-ssl --cert=$cert_pem --key=$cert_key </code></pre> <p>Any idea why it's not working?</p> <p>Thanks in advance</p>
<p><em><strong>I tried to reproduce the issue in my environment and got the below results</strong></em></p> <p><em><strong>I have created the RG &amp; KV and secrets</strong></em></p> <pre><code>az keyvault create -n kv_name -g RG_name az keyvault secret set --vault-name kv_name --name secret_name --value &quot;value&quot; </code></pre> <p><img src="https://i.stack.imgur.com/LG8lx.png" alt="enter image description here" /></p> <p><em><strong>I have added the polices to access the secrets</strong></em></p> <p><em><strong>We can find the SPN id in active directory by creating with keyvault</strong></em></p> <p><img src="https://i.stack.imgur.com/F2wnp.png" alt="enter image description here" /></p> <pre><code>az keyvault set-policy -n &lt;kv-name&gt; --spn &lt;spn-id&gt; --secret-permissions get </code></pre> <p><img src="https://i.stack.imgur.com/GXFGR.png" alt="enter image description here" /></p> <p><em><strong>Added the certificate using below command</strong></em></p> <pre><code>az keyvault certificate create --vault-name &lt;kv-name&gt; --name &lt;cert-name&gt; -p &quot;$(az keyvault certificate get-default-policy -o json)&quot; </code></pre> <p><em><strong>I have added the polices to access the certificates</strong></em></p> <pre><code>az keyvault set-policy -n &lt;kv-name&gt; --spn &lt;spn-id&gt; --certificate-permissions get </code></pre> <p><em><strong>I have generated the private key using below command</strong></em></p> <p><code>openssl genrsa 2048 &gt; private-key.key</code></p> <p><img src="https://i.stack.imgur.com/go2gN.png" alt="enter image description here" /></p> <p><em><strong>Generated the certificate and i have converted the file into .pfx</strong></em></p> <pre><code>openssl req -new -x509 -nodes -sha256 -days 365 -key private-key.key -out certificate.cert </code></pre> <p><img src="https://i.stack.imgur.com/1VEY4.png" alt="enter image description here" /></p> <pre><code>openssl pkcs12 -export -out certificate.pfx -inkey private-key.key -in certificate.cert </code></pre> <p><img src="https://i.stack.imgur.com/MVzLI.png" alt="enter image description here" /></p> <p><em><strong>Encoded string and store it as a secret in Azure Key Vault. 
I have used the PowerShell commands to convert the <code>.pfx</code> file.</strong></em></p> <pre><code>$fileContentBytes = get-content 'certificate.pfx' -AsByteStream
[System.Convert]::ToBase64String($fileContentBytes) | Out-File 'pfx-encoded-bytes.pem'
</code></pre> <p><img src="https://i.stack.imgur.com/Z1Sfj.png" alt="enter image description here" /></p> <p><em><strong>The secret needs to have the content type set to &quot;application/x-pkcs12&quot; to tell Azure Key Vault that it is in PKCS file format.</strong></em></p> <pre><code>az keyvault secret set --vault-name &lt;kv-name&gt; --name &lt;secret-name&gt; --file pfx-encoded-bytes.pem --description &quot;application/x-pkcs12&quot; </code></pre> <p><img src="https://i.stack.imgur.com/h8fUF.png" alt="enter image description here" /></p> <p><em><strong>Downloaded the certificate using the command below</strong></em></p> <pre><code>az keyvault certificate download --file certificate1.pem --name my-certificate --vault-name komali-test </code></pre> <p><em><strong>Converted the pfx to pem using openssl</strong></em></p> <pre><code>openssl pkcs12 -in certificate.pfx -clcerts -nokeys -out certificate.pem -password pass:123 </code></pre> <p><img src="https://i.stack.imgur.com/AU4Cz.png" alt="enter image description here" /></p> <p><em><strong>I have converted the .pfx to a key using openssl</strong></em></p> <pre><code>openssl pkcs12 -in certificate.pfx -nocerts -out private-key.key -password pass:XXXX </code></pre> <p><img src="https://i.stack.imgur.com/u7pqH.png" alt="enter image description here" /></p>
<p>Kubernetes Version</p> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;21&quot;, GitVersion:&quot;v1.21.1&quot;, GitCommit:&quot;5e58841cce77d4bc13713ad2b91fa0d961e69192&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-05-13T02:40:46Z&quot;, GoVersion:&quot;go1.16.3&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;20&quot;, GitVersion:&quot;v1.20.7&quot;, GitCommit:&quot;e1d093448d0ed9b9b1a48f49833ff1ee64c05ba5&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-06-03T00:20:57Z&quot;, GoVersion:&quot;go1.15.12&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p>I have a Kubernetes crobjob that serves the purpose of running some Azure cli commands on a time based schedule.</p> <p>Running the container locally works fine, however, manually triggering the Cronjob through <a href="https://k8slens.dev/" rel="nofollow noreferrer">Lens</a>, or letting it run per the schedule results in weird behaviour (Running in the cloud as a job yeilds unexpected results).</p> <p>Here is the cronjob definition:</p> <pre><code>--- apiVersion: batch/v1beta1 kind: CronJob metadata: name: development-scale-down namespace: development spec: schedule: &quot;0 22 * * 0-4&quot; concurrencyPolicy: Allow startingDeadlineSeconds: 60 failedJobsHistoryLimit: 5 jobTemplate: spec: backoffLimit: 0 # Do not retry activeDeadlineSeconds: 360 # 5 minutes template: spec: containers: - name: scaler image: myimage:latest imagePullPolicy: Always env: ... restartPolicy: &quot;Never&quot; </code></pre> <p>I ran the cronjob manually and it created job <code>development-scale-down-manual-xwp1k</code>. Describing this job after it completed, we can see the following:</p> <pre><code>$ kubectl describe job development-scale-down-manual-xwp1k Name: development-scale-down-manual-xwp1k Namespace: development Selector: controller-uid=ecf8fb47-cd50-42eb-9a6f-888f7e2c9257 Labels: controller-uid=ecf8fb47-cd50-42eb-9a6f-888f7e2c9257 job-name=development-scale-down-manual-xwp1k Annotations: &lt;none&gt; Parallelism: 1 Completions: 1 Start Time: Wed, 04 Aug 2021 09:40:28 +1200 Active Deadline Seconds: 360s Pods Statuses: 0 Running / 0 Succeeded / 1 Failed Pod Template: Labels: controller-uid=ecf8fb47-cd50-42eb-9a6f-888f7e2c9257 job-name=development-scale-down-manual-xwp1k Containers: scaler: Image: myimage:latest Port: &lt;none&gt; Host Port: &lt;none&gt; Environment: CLUSTER_NAME: ... NODEPOOL_NAME: ... NODEPOOL_SIZE: ... RESOURCE_GROUP: ... 
SP_APP_ID: &lt;set to the key 'application_id' in secret 'scaler-secrets'&gt; Optional: false SP_PASSWORD: &lt;set to the key 'application_pass' in secret 'scaler-secrets'&gt; Optional: false SP_TENANT: &lt;set to the key 'application_tenant' in secret 'scaler-secrets'&gt; Optional: false Mounts: &lt;none&gt; Volumes: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 24m job-controller Created pod: development-scale-down-manual-xwp1k-b858c Normal SuccessfulCreate 23m job-controller Created pod: development-scale-down-manual-xwp1k-xkkw9 Warning BackoffLimitExceeded 23m job-controller Job has reached the specified backoff limit </code></pre> <p>This differs from <a href="https://stackoverflow.com/questions/55083361/pod-deletes-automatically-with-kubernetes-cronjob">other issues I have read</a>, where it does not mention a &quot;SuccessfulDelete&quot; event.</p> <p>The events received from <code>kubectl get events</code> tell an interesting story</p> <pre><code>$ ktl get events | grep xwp1k 3m19s Normal Scheduled pod/development-scale-down-manual-xwp1k-b858c Successfully assigned development/development-scale-down-manual-xwp1k-b858c to aks-burst-37275452-vmss00000d 3m18s Normal Pulling pod/development-scale-down-manual-xwp1k-b858c Pulling image &quot;myimage:latest&quot; 2m38s Normal Pulled pod/development-scale-down-manual-xwp1k-b858c Successfully pulled image &quot;myimage:latest&quot; in 40.365655229s 2m23s Normal Created pod/development-scale-down-manual-xwp1k-b858c Created container myimage 2m23s Normal Started pod/development-scale-down-manual-xwp1k-b858c Started container myimage 2m12s Normal Killing pod/development-scale-down-manual-xwp1k-b858c Stopping container myimage 2m12s Normal Scheduled pod/development-scale-down-manual-xwp1k-xkkw9 Successfully assigned development/development-scale-down-manual-xwp1k-xkkw9 to aks-default-37275452-vmss000002 2m12s Normal Pulling pod/development-scale-down-manual-xwp1k-xkkw9 Pulling image &quot;myimage:latest&quot; 2m11s Normal Pulled pod/development-scale-down-manual-xwp1k-xkkw9 Successfully pulled image &quot;myimage:latest&quot; in 751.93652ms 2m10s Normal Created pod/development-scale-down-manual-xwp1k-xkkw9 Created container myimage 2m10s Normal Started pod/development-scale-down-manual-xwp1k-xkkw9 Started container myimage 3m19s Normal SuccessfulCreate job/development-scale-down-manual-xwp1k Created pod: development-scale-down-manual-xwp1k-b858c 2m12s Normal SuccessfulCreate job/development-scale-down-manual-xwp1k Created pod: development-scale-down-manual-xwp1k-xkkw9 2m1s Warning BackoffLimitExceeded job/development-scale-down-manual-xwp1k Job has reached the specified backoff limit </code></pre> <p>I cant figure out why the container was killed, the logs all seem fine and there are no resource constraints. The container is removed very quickly meaning I have very little time to debug. The more verbose event line reads as such</p> <pre><code>3m54s Normal Killing pod/development-scale-down-manual-xwp1k-b858c spec.containers{myimage} kubelet, aks-burst-37275452-vmss00000d Stopping container myimage 3m54s 1 development-scale-down-manual-xwp1k-b858c.1697e9d5e5b846ef </code></pre> <p>I note that the image pull takes a good few seconds (40) initially, might this aid in exceeding the startingDeadline or another cron spec?</p> <p>Any thoughts or help appreciated, thank you</p>
<p>Reading logs! Always helpful.</p> <h2>Context</h2> <p>For context, the job itself scales an AKS nodepool. We have two: the default <code>system</code> one, and a new user-controlled one. The cronjob is meant to scale the new <code>user</code> pool (not the <code>system</code> pool).</p> <h2>Investigating</h2> <p>I noticed that the <code>scale-down</code> job always takes longer compared to the <code>scale-up</code> job; this is because the image pull always happens when the scale-down job runs.</p> <p>I also noticed that the <code>Killing</code> event mentioned above originates from the kubelet. (<code>kubectl get events -o wide</code>)</p> <p>I went to check the kubelet logs on the host, and realised that the host name was a little atypical (<code>aks-burst-XXXXXXXX-vmss00000d</code>) in the sense that most hosts in our small development cluster usually have numbers on the end, not <code>d</code>.</p> <p>There I realised the naming was different because this node was not part of the default nodepool, and I could not check the kubelet logs because the host had been removed.</p> <h2>Cause</h2> <p>The job scales down compute resources. The scale-down would fail because it was always preceded by a scale-up, at which point a new node was in the cluster. This node had nothing running on it, so the next Job was scheduled on it. The Job started on the new node, told Azure to scale the new node down to 0, and subsequently the kubelet killed the job as it was running.</p> <p>Always being scheduled on the new node explains why the image pull happened each time as well.</p> <h2>Fix</h2> <p>I changed the spec and added a NodeSelector so that the Job would always run on the <code>system</code> pool, which is more stable than the <code>user</code> pool.</p> <pre class="lang-yaml prettyprint-override"><code>---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: development-scale-down
  namespace: development
spec:
  schedule: &quot;0 22 * * 0-4&quot;
  concurrencyPolicy: Allow
  startingDeadlineSeconds: 60
  failedJobsHistoryLimit: 5
  jobTemplate:
    spec:
      backoffLimit: 0 # Do not retry
      activeDeadlineSeconds: 360 # 5 minutes
      template:
        spec:
          containers:
            - name: scaler
              image: myimage:latest
              imagePullPolicy: Always
              env:
                ...
          restartPolicy: &quot;Never&quot;
          nodeSelector:
            agentpool: default
</code></pre>
<p>I try to configure the API server to consume this file during cluster creation. My system is Ubuntu 22.04.2 LTS x86_64</p> <p>kind version is v0.17.0 go1.19.2 linux/amd64</p> <p>minikube version: v1.29.0</p> <p>The config file is:</p> <pre><code>kind: Cluster apiVersion: kind.x-k8s.io/v1alpha4 nodes: - role: control-plane kubeadmConfigPatches: - | kind: ClusterConfiguration apiServer: extraArgs: admission-control-config-file: /etc/config/cluster-level-pss.yaml extraVolumes: - name: accf hostPath: /etc/config mountPath: /etc/config readOnly: false pathType: &quot;DirectoryOrCreate&quot; extraMounts: - hostPath: /tmp/pss containerPath: /etc/config # optional: if set, the mount is read-only. # default false readOnly: false # optional: if set, the mount needs SELinux relabeling. # default false selinuxRelabel: false # optional: set propagation mode (None, HostToContainer or Bidirectional) # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation # default None propagation: None </code></pre> <p>When I run: <code>kind create cluster --name psa-with-cluster-pss --image kindest/node:v1.24.0 --config /tmp/pss/cluster-config.yaml --retain</code></p> <p>I get (the last step crushes after a long time):</p> <pre><code>Creating cluster &quot;psa-with-cluster-pss&quot; ... ✓ Ensuring node image (kindest/node:v1.24.0) 🖼 ✓ Preparing nodes 📦 ✓ Writing configuration Starting control-pane </code></pre> <p>traceback:</p> <pre><code>0226 15:54:45.575727 123 round_trippers.go:553] GET https://psa-with-cluster-pss-control-plane:6443/healthz?timeout=10s in 0 milliseconds Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. 
Here is one example how you may list all running Kubernetes containers by using crictl: - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID' couldn't initialize a Kubernetes cluster k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:108 k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1 cmd/kubeadm/app/cmd/phases/workflow/runner.go:234 k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll cmd/kubeadm/app/cmd/phases/workflow/runner.go:421 k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run cmd/kubeadm/app/cmd/phases/workflow/runner.go:207 k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1 cmd/kubeadm/app/cmd/init.go:153 k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute vendor/github.com/spf13/cobra/command.go:856 k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC vendor/github.com/spf13/cobra/command.go:974 k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute vendor/github.com/spf13/cobra/command.go:902 k8s.io/kubernetes/cmd/kubeadm/app.Run cmd/kubeadm/app/kubeadm.go:50 main.main cmd/kubeadm/kubeadm.go:25 runtime.main /usr/local/go/src/runtime/proc.go:250 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1571 error execution phase wait-control-plane k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1 cmd/kubeadm/app/cmd/phases/workflow/runner.go:235 k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll cmd/kubeadm/app/cmd/phases/workflow/runner.go:421 k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run cmd/kubeadm/app/cmd/phases/workflow/runner.go:207 k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1 cmd/kubeadm/app/cmd/init.go:153 k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute vendor/github.com/spf13/cobra/command.go:856 k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC vendor/github.com/spf13/cobra/command.go:974 k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute vendor/github.com/spf13/cobra/command.go:902 k8s.io/kubernetes/cmd/kubeadm/app.Run cmd/kubeadm/app/kubeadm.go:50 main.main cmd/kubeadm/kubeadm.go:25 runtime.main /usr/local/go/src/runtime/proc.go:250 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1571 </code></pre>
<p>Try removing and reinstalling <code>Docker</code>, <code>docker-ce</code> and <code>CNI</code>. As part of the kubelet installation procedure you must configure the Docker container runtime.</p> <p>The error message appears because a few steps were missed that are not mentioned in the documented procedure. Please go through the <a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker" rel="nofollow noreferrer">container runtime official document</a> for more information. You may have to reset with <code>kubeadm reset</code>, then use a permanent IP, and then run <code>kubeadm init</code>.</p> <pre><code>sudo kubeadm reset
sudo apt-get install -qy kubelet kubectl kubeadm
sudo apt-mark hold kubelet kubeadm kubectl

sudo mkdir /etc/docker
cat &lt;&lt;EOF | sudo tee /etc/docker/daemon.json
{
  &quot;exec-opts&quot;: [&quot;native.cgroupdriver=systemd&quot;],
  &quot;log-driver&quot;: &quot;json-file&quot;,
  &quot;log-opts&quot;: {
    &quot;max-size&quot;: &quot;100m&quot;
  },
  &quot;storage-driver&quot;: &quot;overlay2&quot;
}
EOF

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker

sudo kubeadm init --control-plane-endpoint kube-master:6443 --pod-network-cidr=192.168.0.0/16
</code></pre> <p>If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:</p> <p><code>systemctl status kubelet</code></p> <p><code>journalctl -xeu kubelet</code></p> <p>Refer to <a href="https://discuss.kubernetes.io/t/kubeadm-init-fails-with-controlplaneendpoint/15950" rel="nofollow noreferrer">Kubeadm init fails with controlPlaneEndpoint</a> for more information.</p> <p>Also refer to the <code>Kind</code> <strong>Known Issues</strong>: <a href="https://kind.sigs.k8s.io/docs/user/known-issues/#troubleshooting-kind" rel="nofollow noreferrer">Troubleshooting kind</a>, and also check <a href="https://kind.sigs.k8s.io/docs/user/known-issues/#failure-to-create-cluster-with-docker-desktop-as-container-runtime" rel="nofollow noreferrer">Failure to Create Cluster with Docker Desktop as Container Runtime</a> for more information.</p>
<p>Following the YugabyteDB Voyager database migration steps (<a href="https://docs.yugabyte.com/preview/migrate/migrate-steps/" rel="nofollow noreferrer">https://docs.yugabyte.com/preview/migrate/migrate-steps/</a>) going from PostgreSQL to YugabyteDB on a local Kubernetes, on Docker Desktop, on WSL2, on Windows. Using Ubuntu 22.04 on WSL2 to run yb-voyager, I get an error on the Import Data step:</p> <pre><code>import of data in &quot;postgres&quot; database started Target YugabyteDB version: 11.2-YB-2.15.2.1-b0 Error Resolving name=yb-tserver-1.yb-tservers.yb-demo.svc.cluster.local: lookup yb-tserver-1.yb-tservers.yb-demo.svc.cluster.local: no such host </code></pre> <p>The Import Schema step worked correctly (from using pgAdmin connected to the YugabyteDB), so I know that the database can be connected to. Command used:</p> <pre><code>yb-voyager import schema --export-dir ${EXPORT_DIR} --target-db-host ${TARGET_DB_HOST} --target-db-user ${TARGET_DB_USER} --target-db-password ${TARGET_DB_PASSWORD} --target-db-name ${TARGET_DB_NAME} </code></pre> <p>The command used to import the data, which fails:</p> <pre><code>yb-voyager import data --export-dir ${EXPORT_DIR} --target-db-host ${TARGET_DB_HOST} --target-db-user ${TARGET_DB_USER} --target-db-password ${TARGET_DB_PASSWORD} --target-db-name ${TARGET_DB_NAME} </code></pre> <p>ENV variables:</p> <pre><code>EXPORT_DIR=/home/abc/db-export TARGET_DB_HOST=127.0.0.1 TARGET_DB_USER=ybvoyager TARGET_DB_PASSWORD=password TARGET_DB_NAME=postgres </code></pre> <p>Why does the import data fail when the import schema works connecting to the same database?</p>
<p>Putting the solution here in case anybody else runs into this issue. If a load balancer is present and the YugabyteDB servers' IPs are not resolvable from the voyager machine, the import data command errors out. Ideally it should use the load balancer for importing the data.</p> <p>Use <code>--target-endpoints=LB_HOST:LB_PORT</code> to force the server address.</p> <p>See tickets:<br /> <a href="https://github.com/yugabyte/yb-voyager/issues/553" rel="nofollow noreferrer">Import data 'Error Resolving name' on local kubernetes #553</a><br /> <a href="https://github.com/yugabyte/yb-voyager/issues/585" rel="nofollow noreferrer">Import Data failed if LB is present and cluster servers host is not resolvable #585</a></p>
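<p>For example, reusing the environment variables from the question, the import command would look roughly like this; the value of <code>--target-endpoints</code> below is only a hypothetical load-balancer/port-forward address and must be replaced with your own LB host and port:</p> <pre><code>yb-voyager import data --export-dir ${EXPORT_DIR} \
  --target-db-host ${TARGET_DB_HOST} \
  --target-db-user ${TARGET_DB_USER} \
  --target-db-password ${TARGET_DB_PASSWORD} \
  --target-db-name ${TARGET_DB_NAME} \
  --target-endpoints=127.0.0.1:5433
</code></pre>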
<p>I wrote a <strong>CronJob</strong> that periodically takes volume snapshot of my <strong>PVC</strong>. Below is the <strong>CronJob</strong> file:</p> <pre><code>kind: CronJob metadata: name: mycronjob spec: schedule: &quot;*/2 * * * *&quot; # Run the job every 2 minutes jobTemplate: spec: template: spec: containers: - name: webserver-container image: kubectl:latest command: - /bin/sh - -c - /app/script.sh volumeMounts: - name: script-volume mountPath: /app restartPolicy: OnFailure volumes: - name: script-volume configMap: name: script-configmap </code></pre> <p>The <strong>Volume Snapshot</strong> file looks like this:</p> <pre><code>kind: VolumeSnapshot metadata: name: snap-shot-test spec: volumeSnapshotClassName: csi-snapclass source: persistentVolumeClaimName: my-test-apps </code></pre> <p>And here is the <strong>script</strong> file:</p> <pre><code>#!/bin/bash kubectl apply -f volumesnapshot.yml </code></pre> <p>First time the cronjob is successfully executed, but after that it says <em>volumesnapshot.snapshot.storage.k8s.io/snap-shot-test unchanged</em></p> <p>How can I periodically take volumesnapshot of a PVC with having latest 2 copies of the snapshot?</p>
<p>The behaviour you describe matches the YAML files you shared: nothing in the script ever changes the name of the <code>VolumeSnapshot</code>, so after the first run the same object is re-applied and no new snapshot is created. To keep the latest two copies you would need to generate a new snapshot name on every run (for example with a timestamp) and delete the older ones.</p> <p>If you want to back up one PVC to another in a &quot;raw&quot; way with scripts, consider a Job that mounts two volumes (source and destination) and executes a simple <code>cp -Rp /volume-source/* /volume-destination/</code>, as sketched below.</p> <p>Otherwise, if you want to get the job done right, consider using a tool like Velero.</p> <p><a href="https://velero.io/" rel="nofollow noreferrer">https://velero.io/</a></p>
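<p>A minimal sketch of such a copy Job, assuming two existing PVCs named <code>source-pvc</code> and <code>destination-pvc</code> (both names are hypothetical, not taken from your manifests):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: pvc-copy
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: copy
        image: busybox:1.36
        # copy everything from the source volume into the destination volume
        command: [&quot;sh&quot;, &quot;-c&quot;, &quot;cp -Rp /volume-source/* /volume-destination/&quot;]
        volumeMounts:
        - name: source
          mountPath: /volume-source
        - name: destination
          mountPath: /volume-destination
      volumes:
      - name: source
        persistentVolumeClaim:
          claimName: source-pvc
      - name: destination
        persistentVolumeClaim:
          claimName: destination-pvc
</code></pre>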
<h2>I have a microservices project which has:</h2> <ul> <li>user-service</li> <li>post-service</li> </ul> <h2>Let's talk about the user-service to explain the problem:</h2> <p>After deploying the k8s objects like deployment that contains the user image, pods are created and each pod contains the user container and the cloud-sql-proxy container in a sidecar pattern.</p> <p>Now inside this pod, I am using Prisma inside the user container to connect to the cloud sql proxy container inside the same pod on this url: <code>postgresql://username:password@localhost/db_name?host=/cloudsql/gcp_project:us-central1:db</code></p> <h2>Problem:</h2> <p>When I log the user-service pod, I find this error:</p> <p><code>Error: P1013: The provided database string is invalid. invalid port number in database URL. Please refer to the documentation in https://www.prisma.io/docs/reference/database-reference/connection-urls for constructing a correct connection string. In some cases, certain characters must be escaped. Please check the string for any illegal characters.</code></p> <p>My Dockerfile:</p> <pre><code>FROM node:alpine WORKDIR /app COPY . . RUN npm install # Copy the start.sh into the container COPY start.sh . # Make the shell script executable RUN chmod +x start.sh # Execute the shell script CMD [&quot;/bin/sh&quot;, &quot;start.sh&quot;] </code></pre> <h3>Inside start.sh</h3> <p>#!/bin/bash</p> <p>cd src/</p> <p>npx prisma db push</p> <p>cd ..</p> <p>npm start</p> <p>Inside the src/ directory I have the prisma/ dir.</p> <h3>Note: I have also tried replacing ':' with %3A in the DB string param, but it did not work.</h3> <h2>Deployment File</h2> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: user-depl namespace: social-app spec: replicas: 1 selector: matchLabels: app: user template: metadata: labels: app: user spec: containers: - name: user image: &lt;image_name&gt; resources: limits: cpu: &quot;500m&quot; memory: &quot;512Mi&quot; requests: cpu: &quot;250m&quot; memory: &quot;256Mi&quot; - name: cloud-sql-proxy image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.1.0 args: - &quot;--structured-logs&quot; - &quot;--port=5432&quot; - &quot;PROJECT_ID:asia-south1:POSTGRES_INSTANCE_NAME&quot; - &quot;--credentials-file=/secrets/service_account.json&quot; securityContext: runAsNonRoot: true volumeMounts: - name: cloudsql-sa-volume mountPath: /secrets/ readOnly: true resources: requests: memory: &quot;768Mi&quot; cpu: &quot;500m&quot; volumes: - name: cloudsql-sa-volume secret: secretName: cloudsql-sa --- # Cluster IP service for user service apiVersion: v1 kind: Service metadata: name: user-srv namespace: social-app spec: selector: app: user ports: - name: user protocol: TCP port: 5001 targetPort: 5001 </code></pre> <p>I have also checked if the cloud-sql-proxy container is running or not by logging it, and the message that it is ready for connections.</p> <p>When I run this command: <code>npx prisma db push</code> using the shell script, I am expecting prisma to successfully connect to the cloudsql proxy container which will connect to the cloudsql instance on google cloud.</p>
<p>There seems to be a bit of confusion happening here.</p> <p>The Cloud SQL Proxy can be deployed to connect to Cloud SQL via two different options:</p> <p>a) <a href="https://github.com/GoogleCloudPlatform/cloud-sql-proxy#configuring-port" rel="nofollow noreferrer">TCP connection over a port</a> (default)</p> <p>b) <a href="https://github.com/GoogleCloudPlatform/cloud-sql-proxy#configuring-unix-domain-sockets" rel="nofollow noreferrer">Unix domain socket</a> (using the <code>--unix-socket</code> flag)</p> <p>It seems as though your configuration is deploying the Cloud SQL Proxy to connect via a TCP connection (using <code>--port</code>), but your application (Prisma config) is attempting to connect via a Unix socket (by using <code>host</code> as a query param, <a href="https://www.prisma.io/docs/concepts/database-connectors/postgresql#connecting-via-sockets" rel="nofollow noreferrer">Prisma docs</a>).</p> <p>Your deployment YAML looks fine as is to me; it should successfully set up the proxy to listen for a TCP connection on <code>127.0.0.1:5432</code> inside your pod. You should be able to just update your application's Prisma URL to the following:</p> <pre><code>postgresql://username:password@127.0.0.1:5432/db_name
</code></pre>
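<p>Assuming Prisma reads its connection string from a <code>DATABASE_URL</code> environment variable (that variable name is an assumption based on a typical Prisma setup, it is not shown in your manifests), the user container could then be pointed at the proxy's TCP port like this:</p> <pre><code>      containers:
        - name: user
          image: &lt;image_name&gt;
          env:
            # TCP connection to the Cloud SQL Proxy sidecar listening on 127.0.0.1:5432
            - name: DATABASE_URL
              value: &quot;postgresql://username:password@127.0.0.1:5432/db_name&quot;
</code></pre>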
<p>Kubernetes newbie here. We have a JMS server outside the cluster that is only accessible through our cluster. How can I create a port-forward proxy on the cluster so I can connect to it from my local PC?</p>
<p>A proxy is an application-layer function or feature, whereas port forwarding is really just a manual entry in one of the NAPT tables. A proxy understands the application protocol and can be used as a single entry point for multiple exposed servers.</p> <p>The NGINX Ingress Controller for Kubernetes (as a proxy) is compatible with the NGINX web server. If you want to <a href="https://docs.giantswarm.io/getting-started/exposing-workloads/" rel="nofollow noreferrer">access workloads</a> that are already running on your cluster from outside of it, creating an Ingress resource is the standard procedure: add an ingress controller to your workload cluster. For installation instructions, see this <a href="https://docs.giantswarm.io/getting-started/ingress-controller/" rel="nofollow noreferrer">page</a>.</p> <p><strong>Kubernetes port forwarding:</strong></p> <p>This is especially useful when you want to communicate directly with a specific port on a Pod from your local machine, according to the official Kubernetes <a href="https://jamesdefabia.github.io/docs/user-guide/connecting-to-applications-port-forward/#:%7E:text=Compared%20to%20kubectl%20proxy%2C%20kubectl,be%20useful%20for%20database%20debugging" rel="nofollow noreferrer">Connect with Port Forwarding</a> documentation. Additionally, you don't have to manually expose services to accomplish this. <code>kubectl port-forward</code> forwards connections from a local port to a pod port; it is more general than <code>kubectl proxy</code> because it can forward arbitrary TCP traffic, while <code>kubectl proxy</code> can only forward HTTP traffic. Although kubectl makes port forwarding simple, it should only be used for debugging.</p> <p>You can learn more about how to <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">use port-forward to access applications</a> in a cluster; this similar <a href="https://www.containiq.com/post/kubectl-port-forward" rel="nofollow noreferrer">link</a> &amp; this <a href="https://stackoverflow.com/questions/51468491">SO</a> question aid in better comprehension.</p> <p>Finally, for more information, see <a href="https://docs.armor.com/display/KBSS/Port-Forwarding+and+Proxy+Server+and+Client+Deployment.mobile.phone" rel="nofollow noreferrer">Port-Forwarding and Proxy Server and Client Deployment</a>.</p>
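<p>As a quick illustration of the debugging workflow, a typical <code>kubectl port-forward</code> invocation looks like this (the service name and ports are placeholders, not taken from your cluster):</p> <pre><code># forward local port 8080 to port 80 of the Service named my-service
kubectl port-forward svc/my-service 8080:80

# the workload is then reachable from the local machine
curl http://localhost:8080/
</code></pre>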
<p>So, according to <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="nofollow noreferrer">Kubernetes documentation</a>, when you have an external LoadBalancer service set with <code>externalTrafficPolicy=Local</code>, you can explicitly define a healthCheckNodePort.</p> <p>If I understood correctly, since LB services with this specific externalTrafficPolicy are unable to determine if a pod is running inside of a node, this healthCheckNodePort should be used to evaluate just that. As a result, a properly set healthCheckNodePort should avoid situations in which our request is routed to improper nodes.</p> <p>An example would be the image below, extracted from <a href="https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies" rel="nofollow noreferrer">https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies</a>. If I properly understood this concept, the healthCheckNodePort should let the LB know that all traffic should be directed to the left node, as the right one doesn't have a pod to handle it.</p> <p><a href="https://i.stack.imgur.com/0HjGf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0HjGf.png" alt="enter image description here" /></a></p> <p>My question here is: how does the proxy attached to healthCheckNodePort checks for the pod existence? Is this something I should implement from my side, or is Kubernetes auto-assigned port able to handle this health-check on its own? It seems weirdly convenient that any port number would do, so I would like some more insight on how this works.</p>
<p>It's actually very simple; here is the relevant part of the OpenShift documentation:</p> <blockquote> <p>healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used. If not specified, a value will be automatically allocated. <strong>External systems (e.g. load-balancers) can use this port to determine if a given node holds endpoints for this service or not. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type).</strong></p> </blockquote> <p><a href="https://docs.openshift.com/container-platform/4.10/rest_api/network_apis/service-v1.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.10/rest_api/network_apis/service-v1.html</a></p> <p>The health check itself is served by <strong>kube-proxy</strong>: on every node it listens on the health check node port and answers those requests, reporting whether that node has local endpoints for the service, so there is nothing you need to implement yourself.</p> <p>I suggest you take a look at this example: <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer</a></p>
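<p>A quick way to see this in action, assuming you already have a LoadBalancer Service with <code>externalTrafficPolicy: Local</code> (the service name below is a placeholder), is to read the allocated port and query it on a node:</p> <pre><code># read the auto-assigned health check port from the Service
kubectl get svc my-lb-service -o jsonpath='{.spec.healthCheckNodePort}'

# ask kube-proxy on a node whether that node has local endpoints for the service;
# it answers HTTP 200 when at least one ready pod runs on the node and 503 otherwise
curl -i http://&lt;node-ip&gt;:&lt;healthCheckNodePort&gt;/healthz
</code></pre>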
<p>I have a server that is receiving <strong>multicast messages</strong> on a network interface called &quot;em2&quot;.</p> <p>I need to get the information coming over into my pods running in minikube so I can use it in an application which I am writing. How would I go about doing this?</p> <p>The network interfaces shown in <code>minikube ssh</code> are: docker0, eth0, eth1, lo, and two &quot;veth&quot;</p> <p>Thank you!</p>
<p>There are a few ways to get this traffic to your Kubernetes pod(s):</p> <ul> <li><p>Adding the <code>hostNetwork: true</code> flag to the pod spec, along with a <code>hostPort</code> configuration, in order to receive the traffic directly in the pod (see the sketch below).</p> </li> <li><p>The <a href="https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/quickstart.md" rel="nofollow noreferrer">multus-cni</a> project allows the creation of additional interfaces for your pods (your default one won't accept multicast). Then you will need to bridge the new interface with the em2 interface on your host machine, either by using a bridge or <code>macvlan</code>.</p> </li> <li><p>You could use a firewall (e.g. <code>iptables</code>, <code>ipfw</code>, <code>nftables</code>, etc.) to forward the traffic from the em2 interface to the internal K8s network.</p> </li> </ul>
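<p>A minimal sketch of the first option, assuming your application listens for the multicast/UDP traffic on port 5000 (the port and image name are placeholders, not from your setup):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: multicast-listener
spec:
  # share the node's network namespace so the pod sees the host's interfaces
  hostNetwork: true
  containers:
  - name: listener
    image: my-listener-image   # placeholder
    ports:
    - containerPort: 5000
      hostPort: 5000
      protocol: UDP
</code></pre>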
<p>I am trying to run Kubernetes and trying to use <code>sudo kubeadm init</code>. Swap is off as recommended by official doc.</p> <p>The issue is it displays the warning:</p> <pre><code>[kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) - No internet connection is available so the kubelet cannot pull or find the following control plane images: - k8s.gcr.io/kube-apiserver-amd64:v1.11.2 - k8s.gcr.io/kube-controller-manager-amd64:v1.11.2 - k8s.gcr.io/kube-scheduler-amd64:v1.11.2 - k8s.gcr.io/etcd-amd64:3.2.18 - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images are downloaded locally and cached. If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker. 
Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' couldn't initialize a Kubernetes cluster </code></pre> <p>The docker version I am using is <code>Docker version 17.03.2-ce, build f5ec1e2</code> I m using Ubuntu 16.04 LTS 64bit</p> <p>The docker images shows the following images:</p> <pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE k8s.gcr.io/kube-apiserver-amd64 v1.11.2 821507941e9c 3 weeks ago 187 MB k8s.gcr.io/kube-controller-manager-amd64 v1.11.2 38521457c799 3 weeks ago 155 MB k8s.gcr.io/kube-proxy-amd64 v1.11.2 46a3cd725628 3 weeks ago 97.8 MB k8s.gcr.io/kube-scheduler-amd64 v1.11.2 37a1403e6c1a 3 weeks ago 56.8 MB k8s.gcr.io/coredns 1.1.3 b3b94275d97c 3 months ago 45.6 MB k8s.gcr.io/etcd-amd64 3.2.18 b8df3b177be2 4 months ago 219 MB k8s.gcr.io/pause 3.1 da86e6ba6ca1 8 months ago 742 kB </code></pre> <p>Full logs can be found here : <a href="https://pastebin.com/T5V0taE3" rel="noreferrer">https://pastebin.com/T5V0taE3</a></p> <p>I didn't found any solution on internet.</p> <p><strong>EDIT:</strong></p> <p><em>docker ps -a</em> output:</p> <pre><code>ubuntu@ubuntu-HP-Pavilion-15-Notebook-PC:~$ docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS </code></pre> <p><em>journalctl -xeu kubelet</em> output:</p> <pre><code>journalctl -xeu kubelet -- Subject: Unit kubelet.service has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit kubelet.service has finished shutting down. Sep 01 10:40:05 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: Started kubelet: T -- Subject: Unit kubelet.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit kubelet.service has finished starting up. -- -- The start-up result is done. Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: Flag --cgroup-d Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: Flag --cgroup-d Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06. Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06. Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06. Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06. Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: F0901 10:40:06. Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: M Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: U Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: F lines 788-810/810 (END) -- Subject: Unit kubelet.service has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit kubelet.service has finished shutting down. Sep 01 10:40:05 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: Started kubelet: The Kubernetes Node Agent. -- Subject: Unit kubelet.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit kubelet.service has finished starting up. -- -- The start-up result is done. 
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: Flag --cgroup-driver has been deprecated, This parameter should be set via the Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: Flag --cgroup-driver has been deprecated, This parameter should be set via the Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06.117131 9107 server.go:408] Version: v1.11.2 Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06.117406 9107 plugins.go:97] No cloud provider specified. Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06.121192 9107 certificate_store.go:131] Loading cert/key pair Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06.145720 9107 server.go:648] --cgroups-per-qos enabled, but -- Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: F0901 10:40:06.146074 9107 server.go:262] failed to run Kubelet: Running wi Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: Unit entered failed state. Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: Failed with result 'exit-code'. ~ PORTS NAMES </code></pre> <p>Any help/suggestion/comment would be appreciated.</p>
<p>I faced a similar issue recently. The problem was the cgroup driver: the kubelet was configured to use <code>systemd</code> while Docker was using a different driver (<code>cgroupfs</code>), so the two did not match. So I created <code>/etc/docker/daemon.json</code> and added the below to switch Docker to <code>systemd</code>:</p> <pre><code>{
  &quot;exec-opts&quot;: [&quot;native.cgroupdriver=systemd&quot;]
}
</code></pre> <p>Then</p> <pre><code>sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet
</code></pre> <p>Run <code>kubeadm init</code> or <code>kubeadm join</code> again.</p>
<p>I want to make a <code>YAML</code> file with a Deployment, Ingress, and Service (maybe also with a ClusterIssuer, Issuer and cert) in one file. How can I do that? I tried</p> <pre><code>kubectl apply -f (name_file.yaml)
</code></pre>
<p>You can do it by separating the resources with three dashes (<code>---</code>) in your YAML file, like this:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mock
spec:
  ...
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: mock
spec:
</code></pre> <p>Source : <a href="https://levelup.gitconnected.com/kubernetes-merge-multiple-yaml-into-one-e8844479a73a" rel="nofollow noreferrer">https://levelup.gitconnected.com/kubernetes-merge-multiple-yaml-into-one-e8844479a73a</a></p>
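<p>All of the documents in the combined file are then created with a single apply, using the file name from your question:</p> <pre><code>kubectl apply -f name_file.yaml
</code></pre>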
<p>I want to deploy postgres using kubernetes</p> <p>This is my postgres pod yaml file</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: postgres labels: app: postgres spec: replicas: 1 selector: matchLabels: app: postgres template: metadata: labels: app: postgres spec: securityContext: runAsUser: 70 runAsGroup: 70 fsGroup: 70 fsGroupChangePolicy: &quot;Always&quot; containers: - image: docker.io/postgres:14.8-alpine3.18 name: postgres resources: limits: hugepages-2Mi: 512Mi memory: 2Gi cpu: &quot;8&quot; requests: memory: 128Mi cpu: &quot;1&quot; env: - name: POSTGRES_DB value: postgres_db_name - name: POSTGRES_USER value: postgres_db_user - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: postgres-secrets key: root_password_key - name: PGDATA value: /some/path/here ports: - containerPort: 5432 name: postgres volumeMounts: - name: postgres-volume-name mountPath: /some/path/here volumes: - name: postgres-volume-name persistentVolumeClaim: claimName: postgres-pv-claim </code></pre> <p>After running</p> <blockquote> <p>kubectl get pods</p> </blockquote> <p>I POD status is terminating, so I have checked logs and It shows</p> <blockquote> <p>mkdir: can't create directory '/some/path/here': Permission denied</p> </blockquote> <p>How can I solve this? Thanks!</p>
<p>As per the official Kubernetes doc on <a href="https://kubernetes.io/blog/2020/12/14/kubernetes-release-1.20-fsgroupchangepolicy-fsgrouppolicy/#allow-users-to-skip-recursive-permission-changes-on-mount" rel="nofollow noreferrer">Allow users to skip recursive permission changes on mount</a>:</p> <p>While inspecting the YAML used for the <code>Deployment</code>, I noticed the use of <code>fsGroup</code> inside the pod's <code>securityContext</code>, which makes sure that the volume's content is readable and writable by each new pod. One side-effect of setting <code>fsGroup</code> is that, each time a volume is mounted, <strong>Kubernetes must recursively change the owner and permission of all the files and directories inside the volume</strong>. This happens even if group ownership of the volume already matches the requested <code>fsGroup</code>, and it can be pretty expensive for larger volumes with lots of small files, which causes pod startup to take a long time.</p> <p><strong>Solution:</strong> As per <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods" rel="nofollow noreferrer">Configure volume permission and ownership change policy for Pods</a>, suggest setting <code>fsGroupChangePolicy</code> to <code>&quot;OnRootMismatch&quot;</code> so that if the root of the volume already has the correct permissions, the recursive permission change can be skipped.</p> <blockquote> <p><code>fsGroupChangePolicy</code> - fsGroupChangePolicy defines behavior for changing ownership and permission of the volume before being exposed inside a Pod. This field only applies to volume types that support fsGroup controlled ownership and permissions. This field has two possible values:</p> <p><code>OnRootMismatch:</code> Only change permissions and ownership if the permission and the ownership of the root directory do not match the expected permissions of the volume. This could help shorten the time it takes to change ownership and permission of a volume.</p> <p><code>Always:</code> Always change permission and ownership of the volume when the volume is mounted.</p> <p><strong>For example:</strong></p> <pre><code>securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
  fsGroupChangePolicy: &quot;OnRootMismatch&quot;
</code></pre> </blockquote> <p>Also refer to the <strong>System Admin</strong> <a href="https://ny55.blogspot.com/2020/06/error-mkdir-cannot-create-directory.html" rel="nofollow noreferrer">blog</a> by <strong>LiveStream</strong> about the same error, which may help to resolve your issue.</p>
<p>Below is the config for probes in my application helm chart</p> <pre><code>{{- if .Values.endpoint.liveness }} livenessProbe: httpGet: host: localhost path: {{ .Values.endpoint.liveness | quote }} port: 9080 initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }} periodSeconds: 5 {{- end }} {{- if .Values.endpoint.readiness }} readinessProbe: httpGet: host: localhost path: {{ .Values.endpoint.readiness | quote }} port: 9080 initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }} periodSeconds: 60 {{- end }} {{- end }} </code></pre> <p>when I deploy, in deployment.yaml</p> <pre><code>livenessProbe: httpGet: path: /my/app/path/health port: 9080 host: localhost scheme: HTTP initialDelaySeconds: 8 timeoutSeconds: 1 periodSeconds: 5 successThreshold: 1 failureThreshold: 3 readinessProbe: httpGet: path: /my/app/path/health port: 9080 host: localhost scheme: HTTP initialDelaySeconds: 5 timeoutSeconds: 1 periodSeconds: 60 successThreshold: 1 failureThreshold: 3 </code></pre> <p>But in pod.yaml, it is</p> <pre><code>livenessProbe: httpGet: path: /app-health/app-name/livez port: 15020 host: localhost scheme: HTTP initialDelaySeconds: 8 timeoutSeconds: 1 periodSeconds: 5 successThreshold: 1 failureThreshold: 3 readinessProbe: httpGet: path: /app-health/app-name/readyz port: 15020 host: localhost scheme: HTTP initialDelaySeconds: 5 timeoutSeconds: 1 periodSeconds: 60 successThreshold: 1 failureThreshold: 3 </code></pre> <p>and then gives the following error in the pod:</p> <p>`Readiness probe failed: Get http://IP:15021/healthz/ready: dial tcp IP:15021: connect: connection refused spec.containers{istio-proxy}</p> <p>warning Liveness probe failed: Get http://localhost:15020/app-health/app-name/livez: dial tcp 127.0.0.1:15020: connect: connection refused spec.containers{app-name}</p> <p>warning Readiness probe failed: Get http://localhost:15020/app-health/app-name/readyz: dial tcp 127.0.0.1:15020: connect: connection refused spec.containers{app-name} `</p> <p>why is the pod using a different path and port for the probes and it is failing giving the above error. Can someone please help me with what am missing?</p>
<p>You're getting those different paths and the port 15020 because probe rewriting is configured globally across the mesh in an Istio control-plane component, the <code>istio-sidecar-injector</code> ConfigMap, and it is applied to your pod during sidecar webhook injection. Look for the property below in the <code>istio-sidecar-injector</code> ConfigMap:</p> <blockquote> <p>sidecarInjectorWebhook.rewriteAppHTTPProbe=true</p> </blockquote>
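<p>If you want to confirm that this is what is happening in your mesh, one way (assuming the default <code>istio-system</code> namespace) is to inspect the injector ConfigMap directly:</p> <pre><code># check whether application HTTP probe rewriting is enabled in the sidecar injector configuration
kubectl -n istio-system get configmap istio-sidecar-injector -o yaml | grep -i rewriteAppHTTPProbe
</code></pre>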
<p>I was wondering what the correct way to split traffic with x % to deployment A and y % to deployment B is. In the kodekloud learning platform, they suggested, to just scale the number of replicas from A and B to certain numbers of replicas in order to obtain the desired ratio. But what if I want a 90% and 10% traffic split without having to spin up 9 replicas on deployment A and have only 1 for deployment B ? I wasn’t able to find anything about this topic in the documentation, I’ve only found third party extensions for this, but I would like to stick with the native api. Can the ingresses be configured somehow in order to do this?</p>
<p>Currently such behavior isn't provided by the native Kubernetes APIs. You can &quot;fake&quot; it by using two different Deployments whose labels match one Service; however, as you've already mentioned, this only works if you run a certain number of Pods and it doesn't scale well.</p> <p>You can use a &quot;third party&quot; such as Argo Rollouts for such cases: <a href="https://argoproj.github.io/argo-rollouts/concepts/#canary" rel="nofollow noreferrer">https://argoproj.github.io/argo-rollouts/concepts/#canary</a></p> <p>Also, some ingress controllers such as the NGINX Ingress Controller support canary releases: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary</a></p> <p>That being said, there's nothing wrong with using third-party products, especially one such as Argo, which recently reached Graduated status in the CNCF.</p>
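<p>As a rough sketch of the NGINX Ingress Controller approach, a second Ingress pointing at deployment B's Service can be marked as a canary that receives 10% of the traffic (the host, service name and port below are placeholders):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-canary
  annotations:
    # mark this Ingress as the canary for the &quot;main&quot; Ingress with the same host/path
    nginx.ingress.kubernetes.io/canary: &quot;true&quot;
    # send roughly 10% of the requests to the canary backend
    nginx.ingress.kubernetes.io/canary-weight: &quot;10&quot;
spec:
  ingressClassName: nginx
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: deployment-b-service
            port:
              number: 80
</code></pre>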
<p>Pods in <code>error</code> state are not getting cleaned up.</p> <p>I can manually remove these pods using:</p> <pre><code>kubectl delete pod --field-selector status.phase=Failed </code></pre> <p>Is there any <code>annotation</code> or <code>label</code> that I can use for auto-cleanup? What's the recommended way to do this?</p>
<p>As per the official Kubernetes doc on <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection" rel="nofollow noreferrer">Garbage collection of Pods</a>:</p> <blockquote> <p>The Pod garbage collector (PodGC), which is a controller in the control plane, cleans up terminated Pods (with a phase of <code>Succeeded</code> or <code>Failed</code>), when the number of Pods exceeds the configured threshold (determined by <code>terminated-pod-gc-threshold</code> in the <code>kube-controller-manager</code>). This avoids a resource leak as Pods are created and terminated over time.</p> </blockquote> <p>Refer to a similar GitHub issue, <a href="https://github.com/kubernetes/kubernetes/issues/99986" rel="nofollow noreferrer">Auto-delete failed Pods #99986</a>, for more details.</p> <p><strong>Without a CronJob:</strong> to clean up the failed pods created by a Kubernetes Job automatically, you can set a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#ttl-mechanism-for-finished-jobs" rel="nofollow noreferrer">TTL</a> on your <strong>Job</strong> so that its pods are cleaned up for you (see the sketch below).</p> <p><strong>With a CronJob:</strong> there is a cleanup script in <strong>Chris Ed Rego's</strong> Medium post <a href="https://chrisedrego.medium.com/how-to-automatically-clean-up-failed-kubernetes-pod-every-24-hours-d66ac2a3964d" rel="nofollow noreferrer">How to automatically clean up failed Kubernetes Pod every 24 hours</a>, which may help to resolve your issue.</p> <p><strong>Conclusion:</strong> for Pods that are not owned by a Job there is currently no annotation or label for this, so beyond the PodGC threshold it may need a fundamental change in Kubernetes; you may want to follow the GitHub issue above to discuss it further.</p>
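<p>If the failed Pods come from a Job, a minimal sketch of the TTL approach looks like this (the Job name, image and TTL value are placeholders); once the Job finishes, whether it succeeded or failed, the TTL controller deletes the Job and its Pods after the given number of seconds:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  # delete the Job (and its Pods) 300 seconds after it finishes
  ttlSecondsAfterFinished: 300
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        command: [&quot;sh&quot;, &quot;-c&quot;, &quot;echo doing some work&quot;]
</code></pre>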
<p>Argocd failed to load after restart. In the argocd server logs I see that <code>server.secretkey</code> is missing but I didn't see where it is declared and I think it should be generated by argo server</p> <p>the server logs:</p> <pre><code>time=&quot;2023-01-08T06:48:55Z&quot; level=info msg=&quot;Starting configmap/secret informers&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=info msg=&quot;Configmap/secret informer synced&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=info msg=&quot;Initialized server signature&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=info msg=&quot;Initialized admin password&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=warning msg=&quot;Unable to parse updated settings: server.secretkey is missing&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=info msg=&quot;Starting configmap/secret informers&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=info msg=&quot;configmap informer cancelled&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=warning msg=&quot;Unable to parse updated settings: server.secretkey is missing&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=info msg=&quot;secrets informer cancelled&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=info msg=&quot;Configmap/secret informer synced&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=info msg=&quot;Starting configmap/secret informers&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=info msg=&quot;configmap informer cancelled&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=info msg=&quot;Configmap/secret informer synced&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=info msg=&quot;secrets informer cancelled&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=info msg=&quot;Creating client app (argo-cd)&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=info msg=&quot;argocd v2.1.2+7af9dfb serving on port 8080 (url: https://argo.jgjhg.hgg.tech, tls: false, namespace: argocd, sso: true)&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=info msg=&quot;0xc000d7f380 subscribed to settings updates&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=info msg=&quot;Starting rbac config informer&quot; time=&quot;2023-01-08T06:48:55Z&quot; level=info msg=&quot;RBAC ConfigMap 'argocd-rbac-cm' added&quot; time=&quot;2023-01-08T06:49:22Z&quot; level=warning msg=&quot;Unable to parse updated settings: server.secretkey is missing&quot; **time=&quot;2023-01-08T06:50:37Z&quot; level=warning msg=&quot;Unable to parse updated settings: server.secretkey is missing&quot; **time=&quot;2023-01-08T06:51:22Z&quot; level=warning msg=&quot;Unable to parse updated settings: server.secretkey is missing&quot; time=&quot;2023-01-08T06:51:49Z&quot; level=warning msg=&quot;Unable to parse updated settings: server.secretkey is missing&quot; time=&quot;2023-01-08T06:52:22Z&quot; level=warning msg=&quot;Unable to parse updated settings: server.secretkey is missing&quot; time=&quot;2023-01-08T06:58:55Z&quot; level=info msg=&quot;Alloc=14201 TotalAlloc=64664 Sys=74065 NumGC=13 Goroutines=139&quot; time=&quot;2023-01-08T07:03:36Z&quot; level=info msg=&quot;received unary call /version.VersionService/Version&quot; grpc.method=Version grpc.request.claims=null grpc.request.content= grpc.service=version.VersionService grpc.start_time=&quot;2023-01-08T07:03:36Z&quot; span.kind=server system=grpc time=&quot;2023-01-08T07:03:36Z&quot; level=error msg=&quot;finished unary call with code Unknown&quot; **error=&quot;server.secretkey is missing&quot; grpc.code=Unknown grpc.method=Version **grpc.service=version.VersionService 
grpc.start_time=&quot;2023-01-08T07:03:36Z&quot; grpc.time_ms=20.524 span.kind=server system=grpc </code></pre> <p>I am using argo helm 3.21.0</p> <p>argo should restart and run without problems</p>
<p>The <code>argocd-secret</code> is created only during the initial installation. Within the <a href="https://github.com/argoproj/argo-helm/blob/8ee317128d31b75a07d263bffb07b57cb80a69f5/charts/argo-cd/templates/argocd-configs/argocd-secret.yaml" rel="nofollow noreferrer">Helm chart</a> you see that the condition to create it is defined at the top <code>{{- if .Values.configs.secret.createSecret }}</code>. However I presume that this secret got deleted somehow, and there's no ArgoCD component that &quot;manages&quot; this secret. You could recreate the secret, with other values such as <a href="https://github.com/argoproj/argo-helm/blob/8ee317128d31b75a07d263bffb07b57cb80a69f5/charts/argo-cd/values.yaml#L1919" rel="nofollow noreferrer">admin user credentials</a>.</p> <p>However re-generating the secret can be also achieved by re-applying your Helm Chart which will only install the missing/changed components: <code>helm upgrade RELEASE-NAME argo/argo-cd</code>. Once the Secret exists, you'll need to restart/delete the ArgoCD-Server pod, which will inject the values such as (those will be populated only after the said argocd-sever restart!):</p> <pre><code>data: admin.password: JDJhJDEwJHl4cmhWRE5zRGQuOVdSMVRKNkE2VWVlM24uaXhmRmROblVZVkhQTzdqYVA3LmdmcWdEc2JT admin.passwordMtime: MjAyMy0wMS0wOVQwODoyMDo0OFo= server.secretkey: R1hzK0hDSk1oTVdqT3grK0J6SVJmdzhMUXpqNzUwN0ZSUmVQeXZDdXRBYz0= tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURaakNDQWs2Z0F3SUJBZ0lSQUttbzVJbGc1WmpDM004YlNaazZKeDR3RFFZSktvWklodmNOQVFFTEJRQXcKRWpFUU1BNEdBMVVFQ2hNSFFYSm5ieUJEUkRBZUZ3MHlNekF4TURrd09ESXdORGhhRncweU5EQXhNRGt3T0RJdwpORGhhTUJJeEVEQU9CZ05WQkFvVEIwRnlaMjhnUTBRd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3CmdnRUtBb0lCQVFEYThqTnZ2bTBOVktzQmVDbnQrSTZ0SjVCTWpaS0hoald0SzlmM0Zvak5Ga3JkN0xkWU1yckYKclcrNzQ2Y3d4b2ZCdmtRMXZTd2dRV1haM00wN0dhQWhkMTRXZmcxOE9oTHNKS1RtcEdxMXpBVlhKUWxkYVpSTgp1eG5iOHpZVzhzM3VVYzdOWTNtdllWS0VJVWk0VFIvUGVvY1EwVUdoa2hQRlVJWGM3YlVqQVQrcUtyQjA2TmEyCk5NamtpYUN2ZS9LZGtialhRbVlqdVNIa2tLNnRMNXJiMHFUOW81cThDZjBNQXcyT0dILzhqUmcxNXljQzVlWU4KUithc0F6RVJrcy82NDFCRm5jUCsxODlPcG0xMmU0eUVpU2NPSmRsR1p0L2QrN25kVVM3eDVBcDhCTDhsWjNrawptWS9NbXora3BTa1Zzdlk1VmxjS0V2SEl5MXJXdTJXcEFnTUJBQUdqZ2JZd2diTXdEZ1lEVlIwUEFRSC9CQVFECkFnV2dNQk1HQTFVZEpRUU1NQW9HQ0NzR0FRVUZCd01CTUF3R0ExVWRFd0VCL3dRQ01BQXdmZ1lEVlIwUkJIY3cKZFlJSmJHOWpZV3hvYjNOMGdnMWhjbWR2WTJRdGMyVnlkbVZ5Z2hWaGNtZHZZMlF0YzJWeWRtVnlMbVJsWm1GMQpiSFNDR1dGeVoyOWpaQzF6WlhKMlpYSXVaR1ZtWVhWc2RDNXpkbU9DSjJGeVoyOWpaQzF6WlhKMlpYSXVaR1ZtCllYVnNkQzV6ZG1NdVkyeDFjM1JsY2k1c2IyTmhiREFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBVytTZGxtTHEKcDk4UVdBQjRtNVhKL0diOG15Slc1MFRCWHZ0MlFVVjRmalFmTDZaVThFc05sRGpPWDJOK3F5ZjIxR01rZUtTNwpaeXI1ajEwMi9WU3VOYVFseVpobExyWW1SZU9BNXUycEhzQlRxTEgvUVJHYjJtYTI2U1dLeXdkTDJGUFJJRCt5CjJUZVRmZ1VtdmJkNitIM2xad3NKQjBrYmhoTmdqMjZEc283K0k0bEVneXhZNlZjdWx0U2dLbWw0aTV5RjhMU2wKcUljNnU3WUJnU0c0L1U4enJncDBrbFFRNnpFaW9wL2dqa0F3TjdBZkl2L214ZTdySjVvUlk2UHhmU2RxcXJKRApoMmlKL0dJTXlSMVltS0kwY0NuQnRrbDB6aTlrUHFqdjZBckdOeDhUejRJVFhHMTdBdW9lZ2RjSzFyRHFSVEpECnJCZ0tvVExUZGxoanZRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= tls.key: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcGdJQkFBS0NBUUVBMnZJemI3NXREVlNyQVhncDdmaU9yU2VRVEkyU2g0WTFyU3ZYOXhhSXpSWkszZXkzCldESzZ4YTF2dStPbk1NYUh3YjVFTmIwc0lFRmwyZHpOT3htZ0lYZGVGbjROZkRvUzdDU2s1cVJxdGN3RlZ5VUoKWFdtVVRic1oyL00yRnZMTjdsSE96V041cjJGU2hDRkl1RTBmejNxSEVORkJvWklUeFZDRjNPMjFJd0UvcWlxdwpkT2pXdGpUSTVJbWdyM3Z5blpHNDEwSm1JN2toNUpDdXJTK2EyOUtrL2FPYXZBbjlEQU1OamhoLy9JMFlOZWNuCkF1WG1EVWZtckFNeEVaTFArdU5RUlozRC90ZlBUcVp0ZG51TWhJa25EaVhaUm1iZjNmdTUzVkV1OGVRS2ZBUy8KSldkNUpKbVB6SnMvcEtVcEZiTDJPVlpYQ2hMeHlNdGExcnRscVFJREFRQUJBb0lCQVFDeHpDV0JCUDdCNkpQRgo2Yk1ERU9tc0s0aSs0ZWl3TFlqQlMrMWhOZWQ1eERTZjYyOG9MR29IeFVRTExGL0UrRE9lWGNnK2E1Uzl6TjNOCkFjV1h6TU9BNmRKNktYc0IrcGNMTk9iRWRaaENjWitVbVByMTVKc09WSFkzYTFYdFpOZGVSUWpQT1l6RG95REQKTTlROTlrTnkxV21CZXF6MWJBNnFHUzNicngxOWdqRG1HaGVWaUN5Mlh4eGY1Qk0rUFl2M0xhT0RiUHR1dlExcApaTlZpOEkxMHdJUUdza2FXNG8wMnRFN2hCaXhrOXh3WDhUbGdLcDlmQzRteEN1dXdVekt3R1FXSXk5SDdhMXFaCi9DTW9kdVZseEozc3pjQWlqcUhheTkxOFV6aTRUTlNNVlczMk1NT3R3UkVYK3J6enFRUjRTTFZZa2dWMXIrRW0KcmhWMVB5S1JBb0dCQU44eUFvQzBlYWllSWRQWE1WMFpVN2YxOGlzVjBVcHFML0FwdTNGZTJCU0F0dCtLQ3ZNbQpTc1p5K1pCdEdDTllCSkEzbVVhUmsrRWw4QjJHV0hPZE1oSlJuVHV3ekhlV0luYzV6N3V2bEwvNXFaQ0tuK2NYCmlPNWJiYmprZEcvWXMzMkxKWmFCWCtVRThYbGxoV1MwSWJWdWFKNnlaZ1dLR2EwYkhoNEU5ZDZ6QW9HQkFQc2cKVHRWWE1CUVJiOUhkaDIxMmU1dkhmMmlUdmdueWtFWXd1dlpZd3RrOVJvOHRRb0hLVHJzRzIxS0JBaVQ3bzZMMApWTUkxWXBHTDZsdjR0NkRBTmlwZ0tPZENvaHRBa01IbVc5WDhSQkhpS052VTg4UzhFQUdXcHpVL3FsU1FYRGtwCkZZVy96T0tKa3Zkd1poWFRCdWVtaU1vOE1PR0o0Rlp4MTFydlE5Z3pBb0dCQUk4amxYTlJTd3lXalg4OGJRNFYKNWhqK2hGYVpZV1hsLytSMy94eFFCU2Z3L0ZjVVFyMTVlMDhXQVhOY1k3U1hDQ1l0WWdGZDc0YmZPOFRUbWZwYgpmL2M3bkNqaDA5K0Z5NGpHN0xDamhEUXlPMHJWZklOS0ZxazJ2WUZzRWppQXMydjZSeHJrMGNrZ2lIU2daUklXCmE5L2RkUDhCem1nVER6QnpTYmRhMnd2OUFvR0JBTVNoU0hqTmV1ekx0bVY1OHRkbjVXTlZjdEo4bEZtUG54NEcKZSszMkZDTXJVbnM3TWc4VVRFOHZFRDNxbTdZL2ZxSlNjNmRaUzZPeERVRVVYeHUwUlBVck0vdlg5YndtOHZHUQpJS3BOaXpNQmhZS1RuSWVYeFdTcEJLV3lBdm1SeTFSS2NmcTI4b01TdTR1WmE2VjlCYlFPZHA3N1FEN295VXFHCmV0eE42aTUzQW9HQkFObjhKZnZoRlEvY2xmS0FTMVlBTmsxR3B1SGI2b3lKTVpFYUE3cmJuRTA4eTZMeXRWTzIKZTZ0czhrSVRYZVB6V0FPS2RFeU5zZXJGeGZJMEN6LzNzekNOeUNaQmRDdTNBeXloZGllbldLTlZSMnBHbGhaTApKc1V6cWYzeE5uS0wzdm4wS1B6SVl6UDdQWVNlUDl1OHhLaTBBYm9SRzJHcTJOVXlha0hSTzhOegotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo= </code></pre>
<p>When you read about API Groups in Kubernetes, you may come across group names like <code>core/v1</code>, <code>batch/v1</code>, <code>storage.k8s.io/v1</code>, and <code>scheduling.k8s.io/v1</code>. Some of these groups have the <code>k8s.io</code> suffix, while others do not. It can be confusing when specifying the apiVersion in your Kubernetes manifests, as you need to specify it as <code>v1</code>, <code>batch/v1</code>, or <code>storage.k8s.io/v1</code>.</p> <p>Knowing the reason would make this less confusing.</p> <p>Thanks.</p>
<p>I haven't found a precise explanation for your question; I believe it's an implementation choice.</p> <p>But I'm sharing some resources that might clarify your ideas a bit.</p> <blockquote> <p>Resources are bound together in API groups - each group may have one or more versions that evolve independent of other API groups, and each version within the group has one or more resources. Group names are typically in domain name form - the Kubernetes project reserves use of the empty group, all single word names (&quot;extensions&quot;, &quot;apps&quot;), and any group name ending in &quot;*.k8s.io&quot; for its sole use. When choosing a group name, we recommend selecting a subdomain your group or organization owns, such as &quot;widget.mycompany.com&quot;.</p> </blockquote> <p><a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#api-conventions" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#api-conventions</a></p> <p><a href="https://stackoverflow.com/a/57854939/21404450">https://stackoverflow.com/a/57854939/21404450</a></p>
<p>I had created the a stateful set file for Elasticsearch 7.16.1 but on upgrading the ELK stack to 8.0.0, I get this error in the logs of the elastic pod:- <code>&quot;java.lang.IllegalStateException: failed to obtain node locks, tried [/usr/share/elasticsearch/data];</code> maybe these locations are not writable or multiple nodes were started on the same data path?</p> <p>Likely root cause:</p> <pre><code>java.nio.file.NoSuchFileException: /usr/share/elasticsearch/data/node.lock&quot; </code></pre> <p><a href="https://i.stack.imgur.com/aUk3i.png" rel="nofollow noreferrer">Elastic pod error</a></p> <p>Kibana pod gives this error:-</p> <pre><code>&quot;curl: (7) Failed to connect to elastic-cluster port 9200: Connection refused&quot; </code></pre> <p><a href="https://i.stack.imgur.com/Nmt4d.png" rel="nofollow noreferrer">Kibana pod error</a></p> <p>I didn't get this error with the <strong>7.16.1</strong> version.</p> <p>Should I make some changes in the statefulset files or any other files? Please help me solve this.</p>
<p>Adding the following initContainers in my statefulset.yaml fixed the issue for me:</p> <pre><code>initContainers: - name: fix-permissions image: busybox command: [&quot;sh&quot;, &quot;-c&quot;, &quot;chown -R 1000:1000 /usr/share/elasticsearch/data&quot;] securityContext: privileged: true volumeMounts: - name: local-storage mountPath: /usr/share/elasticsearch/data </code></pre>
<p>I have followed successfully this tutorial in order to create a private docker registry hosted in Kubernetes: <a href="https://www.knowledgehut.com/blog/devops/private-docker-registry" rel="nofollow noreferrer">https://www.knowledgehut.com/blog/devops/private-docker-registry</a></p> <p>I've called my private registry <code>registro</code>, and the IP address of the service is: 10.43.48.208</p> <p>With kaniko, I've gathered to push an <code>alpine</code> image to my registry in other to test.</p> <p>I've created a secret in order to authenticate with the registry:</p> <pre><code>kubectl create secret generic cred-reg \ --from-file=.dockerconfigjson=./config.json \ --type=kubernetes.io/dockerconfigjson </code></pre> <p>Then:</p> <pre><code>kubectl edit serviceaccounts default </code></pre> <p>and added:</p> <pre><code>imagePullSecrets: - name: cred-reg </code></pre> <p>Now I want to use that image in a pod.</p> <pre><code>kubectl run -ti --rm -q test --image=registro:5000/alpine </code></pre> <p>The result is this:</p> <pre><code>Failed to pull image &quot;registro:5000/alpine&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;registro:5000/alpine:latest&quot;: failed to resolve reference &quot;registro:5000/alpine:latest&quot;: failed to do request: Head &quot;https://registro:5000/v2/alpine/manifests/latest&quot;: dial tcp: lookup registro on 127.0.0.53:53: server misbehaving </code></pre> <p>It also fails:</p> <pre><code>kubectl run -ti --rm -q test --image=registro:5000/alpine --overrides='{ &quot;spec&quot;: { &quot;template&quot;: { &quot;spec&quot;: { &quot;imagePullSecrets&quot;: [{&quot;name&quot;: &quot;cred-reg&quot;}] } } } }' </code></pre> <p>How can I avoid this error?</p> <p>What is the IP address <code>127.0.0.53</code> about?</p> <p><code>registro:5000</code> is using a self-signed certificate. How do I configure the Kubernetes cluster to accept this certificate?</p> <p>Related questions:</p> <ul> <li><a href="https://stackoverflow.com/questions/56366046/kubernetes-not-pulling-image-from-private-registry">Kubernetes not pulling image from private registry</a></li> </ul> <p><strong>UPDATE 1</strong>:</p> <p>With:</p> <pre><code>kubectl port-forward svc/registro 5000:5000 </code></pre> <p>and placed this line in my <code>/etc/hosts</code>:</p> <pre><code>127.0.0.1 registro </code></pre> <p>and copy my self-signed certificate to: <code>/etc/docker/certs.d/registro:5000/ca.crt</code>, I can do successfully:</p> <pre><code>$ docker login https://registro:5000 $ docker run -ti --rm registro:5000/alpine </code></pre> <p>but I can't still access from kubernetes:</p> <pre><code>Failed to pull image &quot;registro:5000/alpine&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;registro:5000/alpine:latest&quot;: failed to resolve reference &quot;registro:5000/alpine:latest&quot;: failed to do request: Head &quot;https://registro:5000/v2/alpine/manifests/latest&quot;: EOF </code></pre> <p>I've tried with <code>curl</code> and it does work:</p> <pre><code>curl https://registro:5000/v2/alpine/manifests/latest -u myuser:mypassword </code></pre>
<p>This looks like a <strong>DNS issue preventing the image pull</strong>. For the pull to succeed, <strong>DNS, routing and firewall requirements must all be met</strong>.</p> <p>If the cluster's nodes do not have external IP addresses themselves, which is the case for a private cluster, you will need to enable private access on the subnet used by the cluster. You can enable private access explicitly, or it can be turned on implicitly when you configure certain resources without external IP addresses to create outbound connections to the internet (used for internet egress).</p>
<p>I have a service for which i dont want to use autoscaling, so i created a copy of it and now i want to have it behind an internal LB using nginx ingress, one of them will be always active and other one will be passive which can be get through probes status, ingress is as below</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: appcj-ingress spec: ingressClassName: nginx rules: - host: &quot;{{ .Values.ingress.appcjudomain }}&quot; http: paths: - pathType: Prefix path: &quot;/&quot; backend: service: name: {{ .Values.Name }}-appcjuengineprihttp port: number: 81 service: name: {{ .Values.Name }}-appcjuenginesechttp port: number: 81 </code></pre> <p>This is more like a traditional load balancer, like having multiple servers behind a loadbalancer, having multiple backend services for same host, is it possible?</p>
<p>Kubernetes Ingress does NOT allow you to do this.</p> <p>You have to distinguish the backends by host or by path.</p> <p>The only thing you could do is implement a simple-fanout structure like this (see the sketch below): <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout</a></p> <p>Take a look here: <a href="https://stackoverflow.com/questions/46373616/how-do-i-map-multiple-services-to-one-kubernetes-ingress-path">How do I map multiple services to one Kubernetes Ingress path?</a></p> <p>Otherwise, consider replacing the Kubernetes Ingress with a Layer 7 load balancer.</p>
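<p>A rough sketch of the simple-fanout idea applied to your two services, distinguishing them by path instead of serving both on <code>/</code> (the host and paths are placeholders you would have to choose; the service names follow your chart but without the Helm templating):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: appcj-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: &quot;appcju.example.com&quot;   # placeholder host
    http:
      paths:
      # primary engine reachable under /primary
      - path: /primary
        pathType: Prefix
        backend:
          service:
            name: appcjuengineprihttp   # replace with your real Service name
            port:
              number: 81
      # secondary engine reachable under /secondary
      - path: /secondary
        pathType: Prefix
        backend:
          service:
            name: appcjuenginesechttp   # replace with your real Service name
            port:
              number: 81
</code></pre>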
<p>I have followed the AWS official document to create an ALB controller and made sure few things like providing aws region and vpc id when creating a controller.</p> <p><a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html</a></p> <p>However I notice the below error in the ALB contoller pod logs. I am running the controller pods and other resources in Fargate nodes with AKS running on version 1.21.0</p> <blockquote> <p>{&quot;level&quot;:&quot;error&quot;,&quot;ts&quot;:1643650856.9675832,&quot;logger&quot;:&quot;controller-runtime.manager.controller.ingress&quot;,&quot;msg&quot;:&quot;Reconciler error&quot;,&quot;name&quot;:&quot;app-ingress&quot;,&quot;namespace&quot;:&quot;backend&quot;,&quot;error&quot;:&quot;WebIdentityErr: failed to retrieve credentials\ncaused by: RequestError: send request failed\ncaused by: Post &quot;https://sts.us-east-1.amazonaws.com/&quot;: dial tcp: i/o timeout&quot;}</p> </blockquote>
<p>According to your error it looks like your coreDNS setup is not correct.</p> <blockquote> <p>By default, CoreDNS is configured to run on Amazon EC2 infrastructure on Amazon EKS clusters. If you want to <em>only</em> run your pods on Fargate in your cluster, complete the following steps.</p> </blockquote> <ol> <li>Create a Fargate profile for CoreDNS.</li> </ol> <pre><code>aws eks create-fargate-profile \ --fargate-profile-name coredns \ --cluster-name [your cluster name] \ --pod-execution-role-arn arn:aws:iam::[your account ID]:role/AmazonEKSFargatePodExecutionRole \ --selectors namespace=kube-system,labels={k8s-app=kube-dns} \ --subnets subnet-[1st ID of your private subnet] subnet-[2nd ID of your private subnet] subnet-[3rd ID of your private subnet] </code></pre> <p>Replace <code>AmazonEKSFargatePodExecutionRole</code> with the name of your pod execution role. If you don't have a pod execution role, you must <a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-sg-pod-execution-role" rel="nofollow noreferrer">create one</a> first.</p> <blockquote> <p><strong>NOTE</strong> The format of the role ARN must be <code>arn:aws:iam::`111122223333`:role/`role-name` </code>.</p> </blockquote> <ol start="2"> <li>Following command removes the <code>eks.amazonaws.com/compute-type : ec2</code> annotation from the CoreDNS pods, run it:</li> </ol> <pre class="lang-yaml prettyprint-override"><code>kubectl patch deployment coredns \ -n kube-system \ --type json \ -p='[{&quot;op&quot;: &quot;remove&quot;, &quot;path&quot;: &quot;/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type&quot;}]' </code></pre> <p><a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html" rel="nofollow noreferrer">Here is a link to the documentation</a>.</p>
<p>I know why you would use a StatefulSet for stateful applications (e.g. a DB or similar). In most cases the advice reads like &quot;You want to deploy a stateful app to k8s? Use StatefulSet!&quot;. However, I have never seen anything like &quot;You want to deploy a stateless app to k8s? Then DO NOT USE StatefulSet&quot;.</p> <p>Even though nobody says &quot;I don't recommend using StatefulSet for a stateless app&quot;, most stateless apps are deployed through a Deployment, as if that were the standard.</p> <p>A StatefulSet has clear pros for a stateful app, but I don't think a Deployment has any for a stateless app. Are there any pros in a Deployment for stateless apps? Or are there any clear cons in a StatefulSet for stateless apps?</p> <p>I supposed that a StatefulSet cannot use a LoadBalancer Service, or that it has a penalty when using HPA, but both assumptions turned out to be wrong.</p> <p>I'm really curious about this question.</p> <p>P.S. The precondition is that the stateless app also uses a PV, but does not persist stateful data, for example only logs.</p> <p>I googled &quot;When not to use StatefulSet&quot;, &quot;when Deployment is better than StatefulSet&quot;, &quot;Why Deployment is used for stateless apps&quot;, and similar questions.</p> <p>I also looked at the k8s docs about StatefulSet.</p>
<p>In contrast to a Kubernetes Deployment, where pods are easily replaceable, each pod in a StatefulSet is given a name and treated individually. Pods with distinct identities are necessary for stateful applications.</p> <p>This implies that if any pod dies, it is immediately apparent which one it was. StatefulSets act as controllers but do not generate ReplicaSets; rather, they generate pods with distinctive names that follow a predefined pattern. The ordinal index appears in the DNS name of a pod. A distinct persistent volume claim (PVC) is created for each pod, and each replica in a StatefulSet has its own state.</p> <p>For instance, a StatefulSet with four replicas generates four pods, each of which has its own volume, i.e. four PVCs. StatefulSets require a headless service to return the IPs of the associated pods and enable direct interaction with them. The headless service gives the set a stable DNS name but has no cluster IP of its own, and it has to be created separately (see the sketch below). The major components of a StatefulSet are the set itself, the persistent volumes and the headless service.</p> <p>That all being said, people do deploy stateful applications with Deployments; usually they mount a RWX PV into the pods so all &quot;frontends&quot; share the same backend. This is quite common in CNCF projects.</p>
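<p>For reference, a minimal sketch of such a headless Service and the StatefulSet field that points at it (all names and the image are placeholders):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app-headless
spec:
  clusterIP: None          # &quot;headless&quot;: DNS returns the pod IPs instead of a single service IP
  selector:
    app: my-app
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app-headless   # the headless Service used for the per-pod DNS names
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: nginx:1.25
        ports:
        - containerPort: 80
</code></pre>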
<p>I'm using a k3s cluster with Traefik and I want to expose an nginx web server that runs outside of the cluster, in a VM.</p> <p>I want to know if it is even possible to do that and, if it is, how to do it.</p> <p>I've tried creating a Service and an Ingress but unfortunately it didn't work.</p> <p>I've tried using a Service with the ExternalName type:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: website-a-service namespace: test spec: type: ExternalName ports: - port: 80 targetPort: 80 externalName: 10.0.0.1 </code></pre> <p>And an IngressRoute:</p> <pre><code>apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: website-a-service-ingress-route namespace: test spec: entryPoints: - web routes: - match: Host(`website-a-service.local`) kind: Rule services: - name: website-a-service kind: Service port: 80 </code></pre> <p>After reading the Kubernetes documentation I've learned that I can't use ExternalName this way (it expects a DNS name, not an IP address), and I don't know of any other way to make this work.</p>
<p>I've got it working.</p> <p>I created an entrypoint that matches the IP of my external machine and an IngressRoute to route everything to that address.</p> <p>I used a similar approach to <a href="https://stackoverflow.com/q/71210782/21420364">this question</a>.</p> <p>Sorry for any inconvenience.</p>
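<p>For readers looking for a fully declarative alternative (not necessarily what was done above), a common pattern is a Service without a selector plus a manually created Endpoints object that points at the external machine; the IP <code>10.0.0.1</code> is taken from the question and the names are placeholders. The IngressRoute from the question can then reference this Service as usual:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: website-a-service
  namespace: test
spec:
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: website-a-service    # must match the Service name
  namespace: test
subsets:
  - addresses:
      - ip: 10.0.0.1         # the external VM running nginx
    ports:
      - port: 80
</code></pre>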
<p>I'm using Grafana with Helm <a href="https://github.com/grafana/helm-charts/tree/main/charts/grafana" rel="nofollow noreferrer">https://github.com/grafana/helm-charts/tree/main/charts/grafana</a>. I would like to switch from SQLite 3 to PostgreSQL as my backend database. However, I'm concerned about the security of my database credentials, which are currently stored in the values.yaml file as plain text.</p> <p>What is the recommended way to switch to PostgreSQL and hide the database credentials in a secure way? Can I use Kubernetes secrets or some other mechanism to achieve this? (Please note that I need to know where in the values.yaml file I have to do the configuration.)</p> <p>I'm connecting Grafana with the PostgreSQL database inside the grafana.ini section in the values.yaml, e.g.:</p> <pre><code>grafana.ini: database: type: &quot;postgres&quot; host: &quot;db.postgres.database.azure.com&quot; name: &quot;grafana-db&quot; user: &quot;grafana-db-user&quot; password: &quot;grafana-db-pass&quot; ssl_mode: &quot;require&quot; </code></pre> <p>Thanks in advance for your help!</p> <p>I've tried to use the env section but it's not working.</p>
<blockquote> <p>Had you already seen this section from your link? <em>How to securely reference secrets in grafana.ini</em> – jordanm</p> </blockquote> <p>That section of the chart's README was exactly what I needed. Thank you so much @jordanm :)</p>
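<p>For anyone who finds this later, a minimal sketch of that approach, assuming you create a Kubernetes Secret yourself (the names <code>grafana-db-credentials</code> and <code>password</code> below are placeholders). The chart's <code>envValueFrom</code> value injects the secret as an environment variable, and Grafana's standard <code>GF_&lt;SECTION&gt;_&lt;KEY&gt;</code> environment variables override anything set in <code>grafana.ini</code>:</p>
<pre class="lang-yaml prettyprint-override"><code># created beforehand, e.g.:
#   kubectl create secret generic grafana-db-credentials --from-literal=password='...'

envValueFrom:
  GF_DATABASE_PASSWORD:
    secretKeyRef:
      name: grafana-db-credentials
      key: password

grafana.ini:
  database:
    type: &quot;postgres&quot;
    host: &quot;db.postgres.database.azure.com&quot;
    name: &quot;grafana-db&quot;
    user: &quot;grafana-db-user&quot;
    ssl_mode: &quot;require&quot;
    # no password here; GF_DATABASE_PASSWORD supplies it from the Secret
</code></pre>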
<p>There are several microservices (on the order of 5 or more) and they need to be launched in a certain order.</p> <p>The deployment target could be Kubernetes, as an example.</p> <p>Is it possible to specify that the applications launch in a certain order?</p> <p>In addition, can the launch of the next application be initiated only once the previous application has reported a successful start?</p>
<p>Kubernetes doesn't have a feature that lets you deploy something in &quot;order&quot;. However, ArgoCD, which manages the deployment of your applications, can make this work with Sync Waves/Argo Hooks.</p> <p>You could also just create a shell script that does this for you if you want an easy solution. You could include an init container that would check the previous application's health and then have it start.</p> <p>Edit1: You could add an init container that checks the pod/service of the previous &quot;application&quot; with:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app.kubernetes.io/name: MyApp spec: containers: - name: myapp-container image: busybox:1.28 command: ['sh', '-c', 'echo The app is running! &amp;&amp; sleep 3600'] initContainers: - name: init-myservice image: busybox:1.28 command: ['sh', '-c', &quot;until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done&quot;] - name: init-mydb image: busybox:1.28 command: ['sh', '-c', &quot;until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done&quot;] </code></pre> <p>If a Pod's init container fails, the kubelet repeatedly restarts that init container until it succeeds. This way you can &quot;wait&quot; until the previous application is running before the next one starts.</p>
<p>Following the mysql example,</p> <p><a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/</a></p> <blockquote> <p>kubectl describe deployment mysql</p> </blockquote> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning ReplicaSetCreateError 15m deployment-controller Failed to create new replica set &quot;mysql-79c846684f&quot;: Unauthorized Normal ScalingReplicaSet 15m deployment-controller Scaled up replica set mysql-79c846684f to 1 </code></pre> <p>Unauthorized? What might be the cause?</p> <p>Kubectl: v1.27.1</p> <p>Server: v1.25.9</p>
<p><strong>Unauthorized:</strong> Indicates that the server can be reached and understands the request, but refuses to take any further action without the client providing appropriate authorization. If the client has provided authorization, this error indicates the provided credentials are insufficient or invalid.</p> <p>The Kubeconfig authentication method does not support external identity providers or certificate-based authentication. Verify whether RBAC has been implemented correctly in your deployment configuration. See <a href="https://github.com/kubernetes/dashboard#create-an-authentication-token-rbac" rel="nofollow noreferrer">Create An Authentication Token (RBAC)</a>.</p> <p><strong>Try restarting the kube-apiserver pods in the user cluster's control plane namespace, especially after a cluster certificate renewal.</strong></p> <p><strong>Refer to the official Kubernetes <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">Authenticating</a> documentation:</strong></p> <blockquote> <p>API requests are tied to a normal user or service account or treated as <code>anonymous requests</code>. This means every process inside or outside the cluster, from a human user typing <code>kubectl</code> on a workstation, to <code>kubelets</code> on nodes, to members of the control plane, must authenticate when making requests to the API server, or be treated as an anonymous user.</p> </blockquote> <p><strong>Refer to the official Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#writing-a-replicaset-manifest" rel="nofollow noreferrer">Writing a ReplicaSet manifest</a> documentation:</strong></p> <blockquote> <p>When the control plane creates new Pods for a ReplicaSet, the <code>.metadata.name</code> of the ReplicaSet is part of the basis for naming those Pods. The name of a ReplicaSet must be a valid <code>DNS subdomain</code> value, but this can produce unexpected results for the Pod hostnames. For best compatibility, the name should follow the more restrictive rules for a <code>DNS label</code>.</p> </blockquote> <p>Also refer to <strong>Matt Mickiewicz's Sitepoint Blog</strong> on <a href="https://www.sitepoint.com/troubleshooting-kubernetes-unauthorized-access-and-more/" rel="nofollow noreferrer">Troubleshooting Kubernetes: Unauthorized Access</a>.</p>
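<p>A few generic diagnostic commands that can help narrow this down (the angle-bracket value is a placeholder; the control-plane steps only apply if you manage the control plane yourself, e.g. with kubeadm):</p>
<pre><code># check that your own credentials are valid and allowed to create ReplicaSets
kubectl auth can-i create replicasets -n default

# on kubeadm-managed control planes, check for expired certificates
kubeadm certs check-expiration

# restart the API server by deleting its static pod; the kubelet recreates it
kubectl -n kube-system delete pod kube-apiserver-&lt;control-plane-node-name&gt;
</code></pre>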
<p>I have Docker Desktop and I want to create multiple clusters so I can work on different projects. For example, cluster 1 named hello and cluster 2 named world.</p> <p>I currently have one cluster, with the docker-desktop context, that is actually working.</p>
<p>To clarify, I am posting a Community Wiki answer.</p> <p>The tool <code>kind</code> meets your expectations in this case.</p> <blockquote> <p><a href="https://sigs.k8s.io/kind" rel="nofollow noreferrer">kind</a> is a tool for running local Kubernetes clusters using Docker container “nodes”. kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.</p> </blockquote> <p>The kind Quick Start guide (linked below) describes 5 ways to install it:</p> <ul> <li><a href="https://kind.sigs.k8s.io/docs/user/quick-start/#installing-with-a-package-manager" rel="nofollow noreferrer">With A Package Manager</a></li> <li><a href="https://kind.sigs.k8s.io/docs/user/quick-start/#installing-from-release-binaries" rel="nofollow noreferrer">From Release Binaries</a></li> <li><a href="https://kind.sigs.k8s.io/docs/user/quick-start/#installing-from-source" rel="nofollow noreferrer">From Source</a></li> <li><a href="https://kind.sigs.k8s.io/docs/user/quick-start/#installing-with-make" rel="nofollow noreferrer">With <em><strong>make</strong></em></a></li> <li><a href="https://kind.sigs.k8s.io/docs/user/quick-start/#installing-with-go-get--go-install" rel="nofollow noreferrer">With <em><strong>go get</strong></em> / <em><strong>go install</strong></em></a></li> </ul> <p>To create a cluster with this tool, run:</p> <pre><code>kind create cluster </code></pre> <p>To specify another image, use the <code>--image</code> flag:</p> <pre><code>kind create cluster --image=xyz </code></pre> <p>In <code>kind</code> the node-image is built off the base-image, which installs all the dependencies required for Docker and Kubernetes to run in a container.</p> <p>To assign the cluster a name other than <code>kind</code>, use the <code>--name</code> flag (see the example below for the hello/world clusters from the question).</p> <p>More options can be found with:</p> <pre><code>kind create cluster --help </code></pre>
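<p>Applied to the question, a quick sketch creating the two clusters (kind prefixes the resulting kubectl contexts with <code>kind-</code>):</p>
<pre><code>kind create cluster --name hello
kind create cluster --name world

# list the clusters and switch between them
kind get clusters
kubectl config use-context kind-hello
kubectl config use-context kind-world
</code></pre>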
<p>I have a single-node Kubernetes cluster running in a VM in Azure. I have a service running an SCTP server on port 38412 and I need to expose that port externally. I have tried changing the service type to NodePort, but with no success. I am using flannel as the overlay network and Kubernetes version 1.23.3.</p> <p>This is my service.yaml file:</p> <pre><code>apiVersion: v1 kind: Service metadata: annotations: meta.helm.sh/release-name: fivegcore meta.helm.sh/release-namespace: open5gs creationTimestamp: &quot;2022-02-11T09:24:09Z&quot; labels: app.kubernetes.io/managed-by: Helm epc-mode: amf name: fivegcore-amf namespace: open5gs resourceVersion: &quot;33072&quot; uid: 4392dd8d-2561-49ab-9d57-47426b5d951b spec: clusterIP: 10.111.94.85 clusterIPs: - 10.111.94.85 externalTrafficPolicy: Cluster internalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - name: tcp nodePort: 30314 port: 80 protocol: TCP targetPort: 80 - name: ngap nodePort: 30090 port: 38412 protocol: SCTP targetPort: 38412 selector: epc-mode: amf sessionAffinity: None type: NodePort status: loadBalancer: {} </code></pre> <p>As you can see, I changed the service type to NodePort.</p> <pre><code>open5gs fivegcore-amf NodePort 10.111.94.85 &lt;none&gt; 80:30314/TCP,38412:30090/SCTP </code></pre> <p>This is my Configmap.yaml. In this configmap, the ngap dev entry is the server I want to connect to, which uses the default eth0 interface in the container.</p> <pre><code>apiVersion: v1 data: amf.yaml: | logger: file: /var/log/open5gs/amf.log #level: debug #domain: sbi amf: sbi: - addr: 0.0.0.0 advertise: fivegcore-amf ngap: dev: eth0 guami: - plmn_id: mcc: 208 mnc: 93 amf_id: region: 2 set: 1 tai: - plmn_id: mcc: 208 mnc: 93 tac: 7 plmn_support: - plmn_id: mcc: 208 mnc: 93 s_nssai: - sst: 1 sd: 1 security: integrity_order : [ NIA2, NIA1, NIA0 ] ciphering_order : [ NEA0, NEA1, NEA2 ] network_name: full: Open5GS amf_name: open5gs-amf0 nrf: sbi: name: fivegcore-nrf kind: ConfigMap metadata: annotations: meta.helm.sh/release-name: fivegcore meta.helm.sh/release-namespace: open5gs creationTimestamp: &quot;2022-02-11T09:24:09Z&quot; labels: app.kubernetes.io/managed-by: Helm epc-mode: amf </code></pre> <p>I exec'd into the container to check whether the server is running. This is the netstat output from the container.</p> <pre><code>Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 10.244.0.31:37742 10.105.167.186:80 ESTABLISHED 1/open5gs-amfd sctp 10.244.0.31:38412 LISTEN 1/open5gs-amfd </code></pre> <p>The sctp module is also loaded on the host.</p> <pre><code>$lsmod | grep sctp sctp 356352 8 xt_sctp 20480 0 libcrc32c 16384 5 nf_conntrack,nf_nat,nf_tables,ip_vs,sctp x_tables 49152 18 ip6table_filter,xt_conntrack,xt_statistic,iptable_filter,iptable_security,xt_tcpudp,xt_addrtype,xt_nat,xt_comment,xt_owner,ip6_tables,xt_sctp,ipt_REJECT,ip_tables,ip6table_mangle,xt_MASQUERADE,iptable_mangle,xt_mark </code></pre> <p>Is it possible to expose this server externally?</p>
<p>We found that using a NodePort to translate ports breaks the SCTP protocol (see RFC 4960), because the SCTP transport address structure (embedded in each stream) will match the NodePort port, not the SCTP port. Using port translation (e.g. a NodePort or a firewall) will therefore cause the decryptor to fail in the 5G core NFs we have worked with. This is obviously not a problem if the code listens on the same port as the NodePort. Port 38412 is only recommended, it is not required. Can you recompile your code to use a port in the NodePort range? Or turn off the decryptor error function?</p>
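<p>For illustration, a sketch of the relevant <code>ports</code> entry from the Service in the question if the AMF is reconfigured to listen on a port inside the default NodePort range (30412 is an arbitrary example, not a 3GPP-assigned port), so that no port translation happens on the SCTP path:</p>
<pre class="lang-yaml prettyprint-override"><code>ports:
  - name: ngap
    protocol: SCTP
    port: 30412        # service port
    targetPort: 30412  # the AMF's ngap config must now listen on this port
    nodePort: 30412    # identical externally, so the SCTP transport address still matches
</code></pre>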
<p>I have downloaded Docker and then enabled Kubernetes in Docker Desktop. When I execute the 'kubectl version' command in PowerShell it says:</p> <pre><code>kubectl : Unable to connect to the server: dial tcp : connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. At line:1 char:1 + kubectl version </code></pre>
<p>I encountered the same issue just today and found the solution the same day.</p> <p>Here's the solution to my problem:</p> <ol> <li>I right-clicked the Docker icon in the taskbar.</li> <li>I changed the kubectl context my setup was pointing at from 'gke_regal-hybrid' to 'docker-desktop' (in your case the old context will be different) and was able to rerun the commands without any problems. The command-line equivalent is shown below.</li> </ol> <p><img src="https://i.stack.imgur.com/KIPNJ.png" alt="Click this for image reference" /></p>
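<p>The same context switch can also be done from PowerShell without the tray menu:</p>
<pre><code>kubectl config get-contexts          # list the available contexts
kubectl config use-context docker-desktop
kubectl version                      # should now reach the local cluster
</code></pre>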
<p><strong>Scenario</strong>: A NestJS app is executing a cron job that sends messages to subscribers according to a cron expression. The scenario works fine when only one instance is running.</p> <p><strong>Problem</strong>: Subscribers receive multiple emails when <em>multiple</em> instances (pods) are running in the K8s cluster.</p> <p><strong>Solution I am looking for</strong>: Without a database or a separate service, can we do something in the &quot;<strong>K8s</strong>&quot; <em><strong>yaml</strong></em> so that at any given point in time only one pod is executing the cron job?</p>
<p>You can try the <code>concurrencyPolicy</code> option of a <strong>CronJob</strong> on the Kubernetes cluster. This option helps to run only one task at a time. Even if multiple triggers are requested for a job, it will skip the triggers until the execution of the current task is done.</p> <p>In your case you need to set the option to <code>Forbid</code> as shown below, so that it makes sure only one run executes at a time.</p> <pre><code>apiVersion: batch/v1 kind: CronJob metadata: name: demo spec: schedule: &quot;* * * * *&quot; concurrencyPolicy: Forbid #Enabling the concurrencyPolicy jobTemplate: spec: template: spec: containers: - name: demo image: busybox:1.28 imagePullPolicy: IfNotPresent command: - /bin/sh - -c - date; echo Hello from the Kubernetes cluster restartPolicy: OnFailure </code></pre> <p><strong>Note:</strong> Using the <code>concurrencyPolicy</code> field to limit the number of pods might affect the performance of a job, as it may result in a longer execution time.</p> <p>You can also limit the number of pods created by using the <code>parallelism</code> field in the job. For more information refer to <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#parallel-jobs" rel="nofollow noreferrer">parallel jobs</a>.</p> <p>For more detailed information refer to the official Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#concurrency-policy" rel="nofollow noreferrer">concurrency policy</a> documentation.</p>
<p>We are consuming the kubelet <code>/stats/summary</code> endpoint.</p> <p>We noticed that the metrics returned are not always present and might be missing in some scenarios.</p> <p>In particular we are interested in <code>Rootfs.UsedBytes</code>, which is missing in <code>minikube</code> but present in other environments.</p> <p>Command to retrieve <code>/stats/summary</code> from the kubelet; notice that the port can vary in different k8s flavours:</p> <pre><code>token=$(k get secrets &lt;service-account-token-with-enough-privileges&gt; -o json \ | jq .data.token -r | base64 -d -) k run curler --rm -i --restart=Never --image nginx -- \ curl -X GET https://&lt;nodeIP&gt;:10250/stats/summary --header &quot;Authorization: Bearer $token&quot; --insecure </code></pre> <pre><code>&quot;pods&quot;: [ { ... &quot;containers&quot;: [ { ... &quot;rootfs&quot;: { ... &quot;usedBytes&quot;: 36864, ... } </code></pre> <ul> <li>Why is that?</li> <li>Is there a similar metric that is more reliable?</li> <li>Can we add anything in Minikube to enable that?</li> </ul> <p>EDIT:</p> <blockquote> <p>It is possible that the issue is related to the --driver=docker option of minikube</p> </blockquote>
<p>To clarify, I am posting a community wiki answer.</p> <p>The problem here was resolved by changing the driver to <em><strong>HyperKit</strong></em>.</p> <p>According to the <a href="https://minikube.sigs.k8s.io/docs/drivers/hyperkit/" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p><a href="https://github.com/moby/hyperkit" rel="nofollow noreferrer">HyperKit</a> is an open-source hypervisor for macOS hypervisor, optimized for lightweight virtual machines and container deployment.</p> </blockquote> <p>There are two ways to install HyperKit (if you have installed Docker for Desktop, you don't need to do anything - you already have HyperKit):</p> <ul> <li>you can <a href="https://github.com/moby/hyperkit" rel="nofollow noreferrer">install HyperKit from GitHub</a></li> <li>if you have the <a href="https://brew.sh/" rel="nofollow noreferrer">Brew Package Manager</a> - run the following command:</li> </ul> <pre class="lang-sh prettyprint-override"><code>brew install hyperkit </code></pre> <p>See also <a href="https://github.com/moby/hyperkit" rel="nofollow noreferrer">this reference</a>.</p>
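<p>For completeness, switching an existing minikube setup to that driver looks roughly like this (the delete step wipes the old docker-driver cluster, so only run it if that is acceptable):</p>
<pre class="lang-sh prettyprint-override"><code>minikube delete                    # remove the existing docker-driver cluster
minikube start --driver=hyperkit   # recreate it on the HyperKit driver
</code></pre>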
<p>I've figured out the syntax to mount a volume (Kubernetes YAML):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: ... spec: containers: - name: php volumeMounts: - mountPath: /app/db_backups name: db-backups readOnly: true volumes: - hostPath: path: /mnt/c/Users/Mark/PhpstormProjects/proj/db_backups type: DirectoryOrCreate name: db-backups </code></pre> <p>And the volume does show when I drop into a shell:</p> <pre><code>kubectl --context docker-desktop exec --stdin --tty deploy/app-deployment-development -cphp -nmyns -- /bin/bash </code></pre> <p>But the <code>db_backups</code> directory is empty, so I guess the volume is backed by nothing -- it's not finding the volume on my Windows host machine.</p> <p>I've tried setting the host path like <code>C:\Users\Mark\PhpstormProjects\proj\db_backups</code> but if I do that then my Deployment fails with a <code>CreateContainerError</code>:</p> <blockquote> <p>Error: Error response from daemon: invalid volume specification: 'C:\Users\Mark\PhpstormProjects\proj\db_backups:/app/db_backups:ro'</p> </blockquote> <p>So I guess it doesn't like the Windows-style filepath.</p> <p>So what then? If neither style of path works, how do I get it to mount?</p>
<p>From <a href="https://stackoverflow.com/questions/66985465/kubernetes-how-to-correctly-mount-windows-path-in-wsl2-backed-environment">here</a> it is clear that, for <strong>WSL2</strong>, we need to add a specific prefix before the path we actually want on the host machine.</p> <p>In your file you are giving <code>path: /mnt/c/Users/Mark/PhpstormProjects/proj/db_backups</code>, but you need to write the path like this: <code>path: /run/desktop/mnt/host/path_of_directory_in_local_machine</code>. The key is that we need to put <code>/run/desktop/mnt/host/</code> in front of the actual path to the directory.</p> <p>You set <code>type: DirectoryOrCreate</code> in the above file, so Kubernetes creates an <strong>empty directory</strong> at the path you mentioned, because it is not actually resolving to your desired path.</p> <p>So try this:</p> <pre><code>apiVersion: v1 kind: Pod metadata: ... spec: containers: - name: php volumeMounts: - mountPath: /app/db_backups name: db-backups readOnly: true volumes: - hostPath: path: /run/desktop/mnt/host/c/Users/Mark/PhpstormProjects/proj/db_backups #In my case tested with path: /run/desktop/mnt/host/d/K8-files/voldir type: DirectoryOrCreate name: db-backups </code></pre> <p>It worked in our case; we created a directory on the <strong>'d'</strong> drive, so we used <code>path: /run/desktop/mnt/host/d/K8-files/voldir</code>. So try putting <code>/run/desktop/mnt/host/</code> before the actual path.</p> <p>For more information refer to this <a href="https://devpress.csdn.net/k8s/62ebfca589d9027116a0fe82.html" rel="nofollow noreferrer">link</a>.</p>
<p>I have two deployments in Kubernetes on Azure, both with three replicas. Both deployments use an <code>oauth2 reverse proxy</code> for authenticating external users/requests. The manifest file for both deployments looks like the following:</p> <pre class="lang-yaml prettyprint-override"><code> apiVersion: apps/v1 kind: Deployment metadata: name: myservice1 labels: aadpodidbinding: my-pod-identity-binding spec: replicas: 3 progressDeadlineSeconds: 1800 selector: matchLabels: app: myservice1 template: metadata: labels: app: myservice1 aadpodidbinding: my-pod-identity-binding annotations: aadpodidbinding.k8s.io/userAssignedMSIClientID: pod-id-client-id aadpodidbinding.k8s.io/subscriptionID: my-subscription-id aadpodidbinding.k8s.io/resourceGroup: my-resource-group aadpodidbinding.k8s.io/useMSI: 'true' aadpodidbinding.k8s.io/clientID: pod-id-client-id spec: securityContext: fsGroup: 2000 containers: - name: myservice1 image: mycontainerregistry.azurecr.io/myservice1:latest imagePullPolicy: Always ports: - containerPort: 5000 securityContext: runAsUser: 1000 allowPrivilegeEscalation: false readinessProbe: initialDelaySeconds: 1 periodSeconds: 2 timeoutSeconds: 60 successThreshold: 1 failureThreshold: 1 httpGet: host: scheme: HTTP path: /healthcheck port: 5000 httpHeaders: - name: Host value: http://127.0.0.1 resources: requests: memory: &quot;4G&quot; cpu: &quot;2&quot; limits: memory: &quot;8G&quot; cpu: &quot;4&quot; env: - name: MESSAGE value: Hello from the external app!! --- apiVersion: v1 kind: Service metadata: name: myservice1 spec: type: ClusterIP ports: - port: 80 targetPort: 5000 selector: app: myservice1 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/auth-url: &quot;https://myservice1.com/oauth2/auth&quot; nginx.ingress.kubernetes.io/auth-signin: &quot;https://myservice1.com/oauth2/start?rd=https://myservice1.com/oauth2/callback&quot; kubernetes.io/ingress.class: nginx-external nginx.org/proxy-connect-timeout: 3600s nginx.org/proxy-read-timeout: 3600s nginx.org/proxy-send-timeout: 3600s name: myservice1-external spec: rules: - host: myservice1.com http: paths: - path: / pathType: Prefix backend: service: name: myservice1 port: number: 80 </code></pre> <p>Now, I want to restrict the communication between the pods in the following ways:</p> <ol> <li><p>Intra-deployment: I want to deny any communication between the 3 pods of each deployment internally; meaning that all 3 pods can and must only communicate with their corresponding proxy (the Ingress part of the manifest).</p> </li> <li><p>Inter-deployment: I want to deny any communication between any two pods belonging to different deployments; meaning that if, for example, pod1 from deployment1 tries to ping or send an http request to pod2 from deployment2, this will be denied.</p> </li> <li><p>Allow requests through the proxies: the only requests that are allowed to enter must go through the corresponding deployment's proxy.</p> </li> </ol> <p>How do I implement the network policy manifest that achieves these requirements?</p>
<p>You can make use of NetworkPolicies and reference the Policy in your ingress configuration like below:-</p> <p><strong>My networkpolicy.yml:-</strong></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: default-deny-all spec: podSelector: {} policyTypes: - Ingress - Egress </code></pre> <p>I applied it in my Azure Kubernetes like below:-</p> <pre class="lang-bash prettyprint-override"><code>kubectl apply -f networkpolicy.yml kubectl get networkpolicies </code></pre> <p><img src="https://i.imgur.com/IHgAUw4.png" alt="enter image description here" /></p> <p><img src="https://i.imgur.com/lZxeuGc.png" alt="enter image description here" /></p> <p>Then use the below yml file to reference the NetworkPolicy in the ingress settings:-</p> <p><strong>ingress.yml:-</strong></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: ingress-access spec: podSelector: matchLabels: app.kubernetes.io/name: ingress-nginx ingress: - from: - ipBlock: cidr: 192.168.1.0/24 --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: ingress-to-backends spec: podSelector: matchLabels: app: myapp ingress: - from: - namespaceSelector: matchLabels: ingress: &quot;true&quot; podSelector: matchLabels: app.kubernetes.io/name: ingress-nginx </code></pre> <p><img src="https://i.imgur.com/y4c138J.png" alt="enter image description here" /></p>
<p>I want to install the <code>kube-prometheus-stack</code> (<a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">https://github.com/prometheus-operator/kube-prometheus</a>) but exclude <code>Grafana</code> and <code>Alertmanager</code> from the helm install as I am installing them separately.</p> <p>I would like to install only the following components:</p> <ul> <li>Prometheus Operator</li> <li>Prometheus</li> <li>Prometheus node-exporter</li> <li>Prometheus Adapter for Kubernetes Metrics APIs</li> <li>kube-state-metrics</li> </ul> <p>I tried to create a <code>values.yaml</code> file locally and install using helm chart but I am not successful doing so.</p> <p>What's the best way to do this?</p> <p>TIA.</p>
<p>Those are subcharts (dependencies) of the <code>kube-prometheus-stack</code> helm chart. Usually all subcharts have something like this in the parent chart's values:</p> <pre class="lang-yaml prettyprint-override"><code>## subchartname: ## enabled: true grafana: enabled: false </code></pre> <p>You can also check the <code>dependencies</code> section in the <code>Chart.yaml</code> file to see which condition key enables/disables a dependency.</p> <p>Links:</p> <ul> <li><a href="https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/Chart.yaml#L53" rel="nofollow noreferrer">grafana condition key in <code>Chart.yaml</code> file</a></li> <li><a href="https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml#L754-L755" rel="nofollow noreferrer">option to disable grafana in values.yaml</a></li> </ul>
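<p>For this particular case, a minimal sketch of a values file that keeps the operator, Prometheus, node-exporter and kube-state-metrics while disabling Grafana and Alertmanager. The key names are taken from the chart's values.yaml, but double-check them against the chart version you install; also note that the Prometheus Adapter is usually installed from its own chart (prometheus-community/prometheus-adapter), not as part of this one, and the release name <code>monitoring</code> below is a placeholder:</p>
<pre class="lang-yaml prettyprint-override"><code># values.yaml
grafana:
  enabled: false

alertmanager:
  enabled: false
</code></pre>
<pre><code>helm install monitoring prometheus-community/kube-prometheus-stack -f values.yaml
</code></pre>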