<p>I have deployed GitLab from the official GitLab Helm chart. When I deployed it I didn't enable LDAP. Note that I didn't edit the values.yaml; rather, I used the <code>helm upgrade --install XXX</code> command to do it.</p> <p>My question is: how do I extract the values.yaml of my existing Helm deployment (name: <code>prime-gitlab</code>)? I know how to use the <code>helm show values</code> command to download the default values.yaml from the GitLab chart / Artifact Hub, but here I would like to extract the values of my existing release so I can edit the LDAP part in the values.yaml file.</p> <pre class="lang-none prettyprint-override"><code>01:36 AM βœ” root on my-k8s-man-01 Ξ” [~] Ξ© helm ls -n prime-gitlab NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION prime-gitlab prime-gitlab 1 2022-02-12 01:02:15.901215658 -0800 PST deployed gitlab-5.7.2 14.7.2 </code></pre>
<p>The answer here is very short. Exactly as @DavidMaze mentioned in the comments section, you're looking for <a href="https://docs.helm.sh/docs/helm/helm_get_values/" rel="noreferrer"><code>helm get values</code></a>.</p> <p>This command accepts several options.</p> <blockquote> <p>This command downloads a values file for a given release.</p> <pre><code>helm get values RELEASE_NAME [flags] </code></pre> </blockquote> <p><em>Options:</em></p> <pre><code> -a, --all dump all (computed) values -h, --help help for values -o, --output format prints the output in the specified format. Allowed values: table, json, yaml (default table) --revision int get the named release with revision </code></pre>
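<p>For the release shown in the question, a minimal sketch would be (the chart reference <code>gitlab/gitlab</code> assumes the official repo was added under the alias <code>gitlab</code>; adjust to your setup):</p> <pre><code># dump the user-supplied values of the existing release to a file
helm get values prime-gitlab -n prime-gitlab -o yaml &gt; values.yaml

# edit the LDAP section in values.yaml, then apply it back to the same release
helm upgrade --install prime-gitlab gitlab/gitlab -n prime-gitlab -f values.yaml
</code></pre> <p>Note that <code>helm get values</code> without <code>--all</code> only returns the values you overrode at install time; add <code>-a</code> if you want the fully computed values as well.</p>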
<p>We create around two hundred ingress resources when we deploy the environment to k8s. We see that they are added sequentially in k8s; adding one ingress takes ~5-10 seconds, so adding 200 ingresses took us ~30 minutes.</p> <p>The command looks like this: <code>kubectl apply -n namespace-1 -f file-that-contains-200-ingresses.yml --server-side=true --force-conflicts=true</code></p> <p>Is it possible to speed up that process? Can we do the update of the nginx configuration in one batch?</p>
<p>I don't think this is possible with <code>kubectl apply...</code> and a single file, since each resource is a separate <code>API</code> call and needs to go through all the checks. You can find a more detailed description <a href="https://github.com/jamiehannaford/what-happens-when-k8s" rel="nofollow noreferrer">here</a> if you would like to know what happens when you send a create request to <code>kube-api</code>.</p> <p>What I can advise is to split this file with <a href="https://stackoverflow.com/a/72087064/15441928">yq</a> and apply the individual files in parallel in your <code>CI</code>, as sketched below.</p>
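<p>A rough sketch of that approach, assuming yq v4 (the exact split syntax differs between yq versions, so treat the file naming here as illustrative):</p> <pre><code># split the multi-document manifest into one file per document (0.yml, 1.yml, ...)
yq -s '$index' file-that-contains-200-ingresses.yml

# apply the pieces in parallel, e.g. 10 at a time
ls [0-9]*.yml | xargs -n 1 -P 10 kubectl apply -n namespace-1 --server-side=true --force-conflicts=true -f
</code></pre> <p>The API server still processes each Ingress individually, but running the requests concurrently removes the sequential wait on the client side.</p>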
<p>We have elasticsearch cluster at <code>${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}</code> and filebeat pod at k8s cluster that exports other pods' logs</p> <p>There is <code>filebeat.yml</code>:</p> <pre><code>filebeat.autodiscover: providers: - type: kubernetes templates: - condition: equals: kubernetes.namespace: develop config: - type: container paths: - /var/log/containers/*-${data.kubernetes.container.id}.log exclude_lines: [&quot;^\\s+[\\-`('.|_]&quot;] hints.enabled: true hints.default_config: type: container multiline.type: pattern multiline.pattern: '^[[:space:]]' multiline.negate: false multiline.match: after http: enabled: true host: localhost port: 5066 output.elasticsearch: hosts: '${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}' username: ${ELASTICSEARCH_USERNAME} password: ${ELASTICSEARCH_PASSWORD} indices: - index: &quot;develop&quot; when: equals: kubernetes.namespace: &quot;develop&quot; - index: &quot;kubernetes-dev&quot; when: not: and: - equals: kubernetes.namespace: &quot;develop&quot; filebeat.inputs: - type: container paths: - /var/log/containers/*.log processors: - add_kubernetes_metadata: host: ${NODE_NAME} matchers: - logs_path: logs_path: &quot;/var/log/containers/&quot; - decode_json_fields: fields: [&quot;message&quot;] add_error_key: true process_array: true overwrite_keys: false max_depth: 10 target: json_message </code></pre> <p>I've checked: filebeat has access to <code>/var/log/containers/</code> on kuber but elastic cluster still doesn't get any <code>develop</code> or <code>kubernetes-dev</code> indices. (Cluster has relative index templates for this indices)</p> <p><code>http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_cluster/health?pretty</code>:</p> <pre><code>{ &quot;cluster_name&quot; : &quot;elasticsearch&quot;, &quot;status&quot; : &quot;green&quot;, &quot;timed_out&quot; : false, &quot;number_of_nodes&quot; : 3, &quot;number_of_data_nodes&quot; : 3, &quot;active_primary_shards&quot; : 14, &quot;active_shards&quot; : 28, &quot;relocating_shards&quot; : 0, &quot;initializing_shards&quot; : 0, &quot;unassigned_shards&quot; : 0, &quot;delayed_unassigned_shards&quot; : 0, &quot;number_of_pending_tasks&quot; : 0, &quot;number_of_in_flight_fetch&quot; : 0, &quot;task_max_waiting_in_queue_millis&quot; : 0, &quot;active_shards_percent_as_number&quot; : 100.0 } </code></pre> <p>Filebeat log:</p> <pre><code>{ &quot;log.level&quot;: &quot;info&quot;, &quot;@timestamp&quot;: &quot;2022-11-25T08:35:18.084Z&quot;, &quot;log.logger&quot;: &quot;monitoring&quot;, &quot;log.origin&quot;: { &quot;file.name&quot;: &quot;log/log.go&quot;, &quot;file.line&quot;: 184 }, &quot;message&quot;: &quot;Non-zero metrics in the last 30s&quot;, &quot;service.name&quot;: &quot;filebeat&quot;, &quot;monitoring&quot;: { &quot;metrics&quot;: { &quot;beat&quot;: { &quot;cgroup&quot;: { &quot;cpu&quot;: { &quot;stats&quot;: { &quot;periods&quot;: 38 } }, &quot;cpuacct&quot;: { &quot;total&quot;: { &quot;ns&quot;: 1576170001 } }, &quot;memory&quot;: { &quot;mem&quot;: { &quot;usage&quot;: { &quot;bytes&quot;: 4096 } } } }, &quot;cpu&quot;: { &quot;system&quot;: { &quot;ticks&quot;: 13570, &quot;time&quot;: { &quot;ms&quot;: 56 } }, &quot;total&quot;: { &quot;ticks&quot;: 23320, &quot;time&quot;: { &quot;ms&quot;: 90 }, &quot;value&quot;: 23320 }, &quot;user&quot;: { &quot;ticks&quot;: 9750, &quot;time&quot;: { &quot;ms&quot;: 34 } } }, &quot;handles&quot;: { &quot;limit&quot;: { &quot;hard&quot;: 1048576, &quot;soft&quot;: 1048576 }, &quot;open&quot;: 11 }, &quot;info&quot;: { 
&quot;ephemeral_id&quot;: &quot;a88f461e-1fd3-48d3-a1c7-3bfea1124ab9&quot;, &quot;uptime&quot;: { &quot;ms&quot;: 7530198 }, &quot;version&quot;: &quot;8.1.0&quot; }, &quot;memstats&quot;: { &quot;gc_next&quot;: 17767824, &quot;memory_alloc&quot;: 16385608, &quot;memory_total&quot;: 709250600, &quot;rss&quot;: 104206336 }, &quot;runtime&quot;: { &quot;goroutines&quot;: 23 } }, &quot;filebeat&quot;: { &quot;harvester&quot;: { &quot;open_files&quot;: 0, &quot;running&quot;: 0 } }, &quot;libbeat&quot;: { &quot;config&quot;: { &quot;module&quot;: { &quot;running&quot;: 0 } }, &quot;output&quot;: { &quot;events&quot;: { &quot;active&quot;: 0 } }, &quot;pipeline&quot;: { &quot;clients&quot;: 0, &quot;events&quot;: { &quot;active&quot;: 0 } } }, &quot;registrar&quot;: { &quot;states&quot;: { &quot;current&quot;: 0 } }, &quot;system&quot;: { &quot;load&quot;: { &quot;1&quot;: 3.99, &quot;15&quot;: 3.91, &quot;5&quot;: 4.08, &quot;norm&quot;: { &quot;1&quot;: 0.4988, &quot;15&quot;: 0.4888, &quot;5&quot;: 0.51 } } } }, &quot;ecs.version&quot;: &quot;1.6.0&quot; } } </code></pre> <p>Where can be problem? Im absolutely new to elk so working with it not from the start is a bit hard</p>
<p>In the end I found out that the Elasticsearch cluster's version used by the company is much older than that of Filebeat (Filebeat 8.1 against Elasticsearch 7.9). So temporarily allowing the use of older versions solved the issue for the time being.</p> <pre><code>output.elasticsearch: allow_older_versions: true </code></pre> <p>BUT @Paulo also had a point: the <code>not + and</code> construction was excessive in the end.</p>
<p>I just created a new AKS cluster that has to replace an old cluster. The new cluster is now ready to replace the old one, except for one crucial thing: its outbound IP address. The address of the old cluster must be used so that our existing DNS records do not have to change.</p> <p><strong>How do I change the public IP address of the Azure load balancer (that is used by the nginx ingress controller) of the new cluster to the one used by the old cluster?</strong> The old cluster is still running; I want to switch it off / delete it when the new cluster is available. Some downtime to switch the IP address is acceptable.</p> <p>I think that the IP first has to be deleted from the load balancer's Frontend IP configuration of the old cluster and can then be added to the Frontend IP configuration of the load balancer used in the new cluster. But I need to know exactly how to do this and what else needs to be done (maybe adding a backend pool?)</p> <p><strong>Update</strong></p> <p>During the installation of the new cluster I already added the public IP address of the load balancer of the old cluster in the YAML of the new ingress-nginx-controller. The nginx controller load balancer in the new cluster is in the state <em>Pending</em> and continuously generating events with the message &quot;Ensuring Load Balancer&quot;. Could it be that simple that I only need to assign another IP address to the ingress-nginx-controller load balancer in the old cluster so that the IP can be used in the new cluster?</p>
<p>You have to create a static public IP address for the AKS cluster. Once you delete the old cluster, the public IP address and load balancer associated with it will be deleted as well. You can follow this documentation[1] for a detailed guide; a rough outline is sketched below.</p> <p>[1] <a href="https://learn.microsoft.com/en-us/azure/aks/static-ip" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/static-ip</a></p>
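<p>A minimal sketch of the static-IP route, assuming the Azure CLI and illustrative resource names (see the linked guide for the authoritative steps):</p> <pre><code># find the node resource group of the new cluster
az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv

# create a static public IP in that resource group
az network public-ip create --resource-group &lt;node-resource-group&gt; --name myAKSPublicIP --sku Standard --allocation-method Static
</code></pre> <p>You can then set that address as <code>loadBalancerIP</code> in the ingress-nginx controller Service (or reuse the old cluster's address once it has been released); if the IP lives outside the node resource group, the Service also needs the <code>service.beta.kubernetes.io/azure-load-balancer-resource-group</code> annotation. The <em>Pending</em> / &quot;Ensuring Load Balancer&quot; state mentioned in the question is typically what you see while the requested IP is still attached to the old cluster's load balancer.</p>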
<p>I am attempting to create a kubernetes ConfigMap with helm, which simply consists of the first line within a config file. I put my file in <code>helm/config/file.txt</code>, which has several lines of content, but I only want to extract the first. My first attempt at this was to loop over the lines of the file (naturally), but quit out after the first loop:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: one-line-cm data: first-line: {{- range .Files.Lines &quot;config/file.txt&quot; }} {{ . }} {{ break }} # not a real thing {{- end }} </code></pre> <p>Unfortunately, <code>break</code> doesn't seem to be a concept/function in helm, even though it is within golang. I discovered this the hard way, as well as reading about a similar question in this other post: <a href="https://stackoverflow.com/questions/60966946/helm-break-loop-range-in-template">Helm: break loop (range) in template</a></p> <p>I'm not stuck on using a loop, I'm just wondering if there's another solution to perform the simple task of extracting the first line from a file with helm syntax.</p>
<p>EDIT:<br /> I've determined the following is the cleanest solution:</p> <pre><code>.Files.Lines &quot;config/file.txt&quot; | first </code></pre> <p>(As a side note, I had to pipe to <code>squote</code> in my actual solution due to my file contents containing special characters)</p> <hr /> <p>After poking around in the helm <a href="https://helm.sh/docs/chart_template_guide/function_list/" rel="nofollow noreferrer">docs</a> for alternative functions, I came up with a solution that works; it's just not that pretty:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: one-line-cm data: first-line: | {{ index (regexSplit &quot;\\n&quot; (.Files.Get &quot;config/file.txt&quot;) -1) 0 }} </code></pre> <p>This is what's happening above (working inside outward):</p> <ol> <li><code>.Files.Get &quot;config/file.txt&quot;</code> is returning a string representation of the file contents.</li> <li><code>regexSplit &quot;\\n&quot; &lt;step-1&gt; -1</code> is splitting the file contents from step-1 by newline (-1 means return the max number of substring matches possible)</li> <li><code>index &lt;step-2&gt; 0</code> is grabbing the first item (index 0) from the list returned by step-2.</li> </ol> <p>Hope this is able to help others in similar situations, and I am still open to alternative solution suggestions.</p>
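<p>For reference, a sketch of the original ConfigMap rewritten with the cleaner form (adding <code>squote</code> as mentioned above, in case the line contains special characters):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: one-line-cm
data:
  first-line: {{ .Files.Lines &quot;config/file.txt&quot; | first | squote }}
</code></pre>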
<p>I have an API Service running as a Docker Image and now I want to test it on Kubernetes with Docker Desktop, but I can't get it running. The docker image's name is <code>api_service</code></p> <p>this is the yaml:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-api-service spec: selector: matchLabels: app: my-api-service replicas: 1 template: metadata: labels: app: my-api-service spec: containers: - name: api_service image: api_service ports: - containerPort: 5001 apiVersion: v1 kind: Service metadata: name: my-api-service spec: selector: ports: - protocol: TCP port: 5001 targetPort: 5001 </code></pre> <p>By checking with <code>kubectl get pods --all-namespaces</code>, The status is <code>ImagePullBackOff</code>. What am I doing wrong?</p> <p>update:</p> <p>calling kubectl describe:</p> <pre><code>Name: my-api-service-7ffdb9d6b7-x5zs8 Namespace: default Priority: 0 Node: docker-desktop/192.168.65.4 Start Time: Mon, 15 Aug 2022 13:55:47 +0200 Labels: app=my-api-service pod-template-hash=7ffdb9d6b7 Annotations: &lt;none&gt; Status: Pending IP: 10.1.0.15 IPs: IP: 10.1.0.15 Controlled By: ReplicaSet/my-api-service-7ffdb9d6b7 Containers: my-api-service: Container ID: Image: api_service Image ID: Port: 5001/TCP Host Port: 0/TCP State: Waiting Reason: ImagePullBackOff Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hgghw (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-hgghw: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: &lt;nil&gt; DownwardAPI: true QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal BackOff 4m37s (x413 over 99m) kubelet Back-off pulling image &quot;api_service&quot; </code></pre>
<p>You need to have the built image in a registry and not rely on a local Docker image. Kubernetes does not share a registry with your local docker/containerd image store. As far as I can see, you built and tagged an image locally and then referenced it in the Kubernetes Deployment, which should not be done. You can, however, link Kubernetes with your local Docker image repository and try it that way. But still, k8s is a big whale, and doing this in production may cause grave mistakes. Please follow this <a href="https://medium.com/swlh/how-to-run-locally-built-docker-images-in-kubernetes-b28fbc32cc1d" rel="nofollow noreferrer">link</a>.</p> <p>Also, any time you don't know where the problem is, you can check the <strong>kubelet</strong> logs, as the <strong>kubelet</strong> is what pulls the images. How to get those logs depends on your setup; a common example is sketched below.</p>
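<p>For example, on a node where the kubelet runs as a systemd service (the common case, but not guaranteed for every setup):</p> <pre><code># follow the kubelet logs live
journalctl -u kubelet -f
</code></pre>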
<p>I have a new install of K8s master and node both on ubuntu-18. The master is using weave for CNI and all pods are running:</p> <pre><code>$ sudo kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-6d4b75cb6d-29qg5 1/1 Running 0 31m kube-system coredns-6d4b75cb6d-kxxc8 1/1 Running 0 31m kube-system etcd-ubuntu-18-extssd 1/1 Running 2 31m kube-system kube-apiserver-ubuntu-18-extssd 1/1 Running 2 31m kube-system kube-controller-manager-ubuntu-18-extssd 1/1 Running 2 31m kube-system kube-proxy-nvqjl 1/1 Running 0 31m kube-system kube-scheduler-ubuntu-18-extssd 1/1 Running 2 31m kube-system weave-net-th4kv 2/2 Running 0 31m </code></pre> <p>When I execute the <code>kubeadm join</code> command on the node I get the following error:</p> <pre><code>sudo kubeadm join 192.168.0.12:6443 --token ikk2kd.177ij0f6n211sonl --discovery-token-ca-cert-hash sha256:8717baa3c634321438065f40395751430b4fb55f43668fac69489136335721dc [preflight] Running pre-flight checks error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR CRI]: container runtime is not running: output: E0724 16:24:41.009234 8391 remote_runtime.go:925] &quot;Status from runtime service failed&quot; err=&quot;rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService&quot; time=&quot;2022-07-24T16:24:41-06:00&quot; level=fatal msg=&quot;getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService&quot; , error: exit status 1 [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` To see the stack trace of this error execute with --v=5 or higher </code></pre> <p>The only problem showing up in <code>journalctl -r -u kubelet</code> is:</p> <pre><code>kubelet.service: Main process exited, code=exited, status=1/FAILURE ... Error: failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml </code></pre> <p>That is from several minutes before the <code>join</code> failed when kubelet was trying to start. I would expect that config.yaml file to be missing until the node joined a cluster.</p> <p>The preflight error message says</p> <pre><code>[ERROR CRI]: container runtime is not running: output: E0724 16:32:41.120653 10509 remote_runtime.go:925] &quot;Status from runtime service failed&quot; err=&quot;rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService&quot; </code></pre> <p>What is this trying to tell me?</p> <p>====Edit===== I am running CrashPlan on the worker node that is failing, but I have <code>fs.inotify.max_user_watches=1048576</code> in /etc/sysctl.conf.</p> <p>This node worked before both with on-prem master and with GKE with kubernetes 1.20.</p>
<p>The <code>[ERROR CRI]: container runtime is not running ... unknown service runtime.v1alpha2.RuntimeService</code> message typically means the containerd CRI plugin is disabled: the <code>config.toml</code> shipped by some containerd packages lists <code>cri</code> under <code>disabled_plugins</code>.</p> <p>Remove <code>/etc/containerd/config.toml</code> and restart containerd (see the commands below); after that you can run the <code>kubeadm init</code> / <code>kubeadm join</code> command again.</p>
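<p>The same fix as a short command sequence (a sketch; back up the file instead of deleting it if you have custom containerd settings):</p> <pre><code>sudo rm /etc/containerd/config.toml
sudo systemctl restart containerd
# then re-run the failing kubeadm join / kubeadm init command
</code></pre>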
<p>In the new <strong>Kubespray</strong> release <strong>containerd</strong> is set as the default, but in the old one it isn't.</p> <p>I want to change Docker to containerd in the old version and install it with that version.</p> <p>When I look at the <code>offline.yml</code> I don't see any option for <strong>containerd</strong> for <strong>RedHat</strong>. Below is the code from <code>offline.yml</code>:</p> <pre><code># CentOS/Redhat/AlmaLinux/Rocky Linux ## Docker / Containerd docker_rh_repo_base_url: &quot;{{ yum_repo }}/docker-ce/$releasever/$basearch&quot; docker_rh_repo_gpgkey: &quot;{{ yum_repo }}/docker-ce/gpg&quot; # Fedora ## Docker docker_fedora_repo_base_url: &quot;{{ yum_repo }}/docker-ce/{{ ansible_distribution_major_version }}/{{ ansible_architecture }}&quot; docker_fedora_repo_gpgkey: &quot;{{ yum_repo }}/docker-ce/gpg&quot; ## Containerd containerd_fedora_repo_base_url: &quot;{{ yum_repo }}/containerd&quot; containerd_fedora_repo_gpgkey: &quot;{{ yum_repo }}/docker-ce/gpg&quot; # Debian ## Docker docker_debian_repo_base_url: &quot;{{ debian_repo }}/docker-ce&quot; docker_debian_repo_gpgkey: &quot;{{ debian_repo }}/docker-ce/gpg&quot; ## Containerd containerd_debian_repo_base_url: &quot;{{ ubuntu_repo }}/containerd&quot; containerd_debian_repo_gpgkey: &quot;{{ ubuntu_repo }}/containerd/gpg&quot; containerd_debian_repo_repokey: 'YOURREPOKEY' # Ubuntu ## Docker docker_ubuntu_repo_base_url: &quot;{{ ubuntu_repo }}/docker-ce&quot; docker_ubuntu_repo_gpgkey: &quot;{{ ubuntu_repo }}/docker-ce/gpg&quot; ## Containerd containerd_ubuntu_repo_base_url: &quot;{{ ubuntu_repo }}/containerd&quot; containerd_ubuntu_repo_gpgkey: &quot;{{ ubuntu_repo }}/containerd/gpg&quot; containerd_ubuntu_repo_repokey: 'YOURREPOKEY' </code></pre> <p>How should I set containerd in <code>offline.yml</code> and how do I find which version of containerd is stable with this Kubespray?</p> <p>Thanks for answering.</p>
<p>Always try to dig into the history of the documentation. Since you're looking for an outdated version, see <a href="https://github.com/kubernetes-sigs/kubespray/commit/8f2b0772f9ca2d146438638e1fb9f7484cbdbd55#:%7E:text=calicoctl%2Dlinux%2D%7B%7B%20image_arch%20%7D%7D%22-,%23%20CentOS/Redhat,extras_rh_repo_gpgkey%3A%20%22%7B%7B%20yum_repo%20%7D%7D/containerd/gpg%22,-%23%20Fedora" rel="nofollow noreferrer">this fragment</a> of the older offline.yml:</p> <pre class="lang-yaml prettyprint-override"><code># CentOS/Redhat ## Docker ## Docker / Containerd docker_rh_repo_base_url: &quot;{{ yum_repo }}/docker-ce/$releasever/$basearch&quot; docker_rh_repo_gpgkey: &quot;{{ yum_repo }}/docker-ce/gpg&quot; ## Containerd extras_rh_repo_base_url: &quot;{{ yum_repo }}/centos/{{ ansible_distribution_major_version }}/extras/$basearch&quot; extras_rh_repo_gpgkey: &quot;{{ yum_repo }}/containerd/gpg&quot; </code></pre> <p>Reference: <a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer"><em>kubespray documentation</em></a>.</p>
<p>I've recently used Google Kubernetes Engine to deploy my Magento project, and I deployed it successfully. My next step is that on each git push my Jenkins pipeline should start building and update the project in my Kubernetes cluster. I've been looking for tutorials, but I found no documentation about how to run kubectl in Jenkins with my GKE credentials. If anyone is familiar with this kind of task and has any reference, please help me.</p>
<p>Actually, you're asking this question:</p> <blockquote> <p>how to run kubectl in jenkins with my GKE credentials</p> </blockquote> <p>While researching how you can manage it, I found this <a href="https://blog.bewgle.com/2020/04/13/setting-up-ci-cd-for-gke-with-jenkins/" rel="nofollow noreferrer">tutorial</a> about setting up CI/CD for GKE with Jenkins, which contains the steps to build and update the project. You can also take a look at the whole tutorial; it might help you with your project.<br> Here we're going to check the part where it says <strong>Jenkins Job Build</strong>; there you'll find the authentication method and how to get the credentials for the cluster:</p> <blockquote> <p>#To activate creds for the first time, can also be done in Jenkins machine directly and get credentials for kubectl</p> <pre><code>gcloud auth activate-service-account account_name --key-file [KEY_FILE] </code></pre> <p>#Get credentials for cluster</p> <pre><code>gcloud container clusters get-credentials &lt;cluster-name&gt; --zone &lt;zone-name&gt; --project &lt;project-name&gt; </code></pre> </blockquote> <p>And also, deploying it to the project:</p> <blockquote> <p>#To create new deployment</p> <pre><code>kubectl create deployment &lt;deployment-name&gt; --image=gcr.io/&lt;project-name&gt;/nginx:${version} </code></pre> <p>#For rolling update</p> <pre><code>kubectl set image deployment/&lt;app_name&gt; nginx=gcr.io/&lt;project-name&gt;/&lt;appname&gt;/nginx:${version} --record </code></pre> </blockquote>
<p>I'm currently learning Kubernetes basics and I would like to expose a MongoDB outside of my cluster. I've set up my nginx ingress controller and followed this <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">doc</a> to expose plain TCP connections.</p> <p>This is my ingress Service configuration:</p> <pre><code>apiVersion: v1 kind: Service metadata: annotations: labels: helm.sh/chart: ingress-nginx-4.0.15 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.1.1 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx-controller namespace: ingress-nginx spec: type: NodePort ipFamilyPolicy: SingleStack externalIPs: - 172.30.63.51 ipFamilies: - IPv4 ports: - name: http port: 80 protocol: TCP targetPort: http appProtocol: http - name: https port: 443 protocol: TCP targetPort: https appProtocol: https - name: proxied-tcp-27017 port: 27017 protocol: TCP targetPort: 27017 selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller </code></pre> <p>The ConfigMap to proxy TCP connections:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: tcp-services namespace: ingress-nginx data: 27017: &quot;global-stack/global-stack-mongo-svc:27017&quot; </code></pre> <p>My ingress controller works well on ports 80 and 443 to expose my services, but I can't access port 27017.</p> <p>Result of <code>kubectl get svc -n ingress-nginx</code>:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) ingress-nginx-controller NodePort 10.97.149.93 172.30.63.51 80:30159/TCP,443:32585/TCP,27017:30098/TCP ingress-nginx-controller-admission ClusterIP 10.107.33.165 &lt;none&gt; 443/TCP </code></pre> <p>The external IP responds correctly to curl 172.30.63.51:80:</p> <pre><code>&lt;html&gt; &lt;head&gt;&lt;title&gt;404 Not Found&lt;/title&gt;&lt;/head&gt; &lt;body&gt; &lt;center&gt;&lt;h1&gt;404 Not Found&lt;/h1&gt;&lt;/center&gt; &lt;hr&gt;&lt;center&gt;nginx&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>But it doesn't respond on port 27017:</p> <pre><code>curl: (7) Failed to connect to 172.30.63.51 port 27017: Connection refused </code></pre> <p>My mongo service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: global-stack-mongo-svc namespace: global-stack labels: app: global-stack-mongo-app spec: type: ClusterIP ports: - name: http port: 27017 protocol: TCP targetPort: 27017 selector: app: global-stack-mongo-app </code></pre> <p>The service's cluster IP is 10.244.1.57 and it responds correctly:</p> <pre><code>&gt;&gt; curl 10.244.1.57:27017 It looks like you are trying to access MongoDB over HTTP on the native driver port. </code></pre> <p>If anyone could help me I would be very grateful. Thanks</p> <p>Guieme.</p>
<p>After some research I solved my issue.</p> <p>It's not described in the nginx-ingress documentation, but you need to map the TCP ConfigMap to the ingress-controller container by adding these lines to the deployment file:</p> <pre><code> args: - /nginx-ingress-controller - --election-id=ingress-controller-leader - --controller-class=k8s.io/ingress-nginx - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller - --validating-webhook=:8443 - --validating-webhook-certificate=/usr/local/certificates/cert - --validating-webhook-key=/usr/local/certificates/key - --tcp-services-configmap=ingress-nginx/tcp-services </code></pre>
<p>I am learning Kubernetes and created my first pod using the command below:</p> <pre><code>kubectl run helloworld --image=&lt;image-name&gt; --port=8080 </code></pre> <p>The Pod creation was successful. But since it is neither a ReplicationController nor a Deployment, how can I expose it as a Service? Please advise.</p>
<p>Please refer to the documentation of the <strong>Kubernetes Service concept</strong>: <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/</a> At the end of the page there is also an interactive tutorial on Minikube. A concrete example for your pod is sketched below.</p>
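<p>As a sketch for the pod created above (service name and type are illustrative): <code>kubectl expose</code> works on a bare pod as long as it has labels, and <code>kubectl run</code> adds a <code>run=helloworld</code> label automatically:</p> <pre><code># expose the pod through a NodePort service
kubectl expose pod helloworld --port=8080 --target-port=8080 --type=NodePort --name=helloworld-svc

# check the assigned node port
kubectl get svc helloworld-svc
</code></pre>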
<p>I have installed Minikube, but when I run <code>minikube start</code>, I get this error:</p> <pre class="lang-none prettyprint-override"><code>πŸ˜„ minikube v1.17.1 on Ubuntu 20.04 ✨ Using the docker driver based on existing profile πŸ‘ Starting control plane node minikube in cluster minikube πŸƒ Updating the running docker &quot;minikube&quot; container ... 🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.0 ... 🀦 Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared β–ͺ Generating certificates and keys ... β–ͺ Booting up control plane ... πŸ’’ initialization failed, will try again: wait: /bin/bash -c &quot;sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables&quot;: Process exited with status 1 stdout: [init] Using Kubernetes version: v1.20.2 [preflight] Running pre-flight checks [preflight] The system verification failed. Printing the output from the verification: KERNEL_VERSION: 5.8.0-40-generic DOCKER_VERSION: 20.10.0 OS: Linux CGROUPS_CPU: enabled CGROUPS_CPUACCT: enabled CGROUPS_CPUSET: enabled CGROUPS_DEVICES: enabled CGROUPS_FREEZER: enabled CGROUPS_MEMORY: enabled CGROUPS_PIDS: enabled CGROUPS_HUGETLB: enabled [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder &quot;/var/lib/minikube/certs&quot; [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing &quot;sa&quot; key [kubeconfig] Using kubeconfig folder &quot;/etc/kubernetes&quot; [kubeconfig] Writing &quot;admin.conf&quot; kubeconfig file [kubeconfig] Writing &quot;kubelet.conf&quot; kubeconfig file [kubeconfig] Writing &quot;controller-manager.conf&quot; kubeconfig file [kubeconfig] Writing &quot;scheduler.conf&quot; kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file &quot;/var/lib/kubelet/kubeadm-flags.env&quot; [kubelet-start] Writing kubelet configuration to file &quot;/var/lib/kubelet/config.yaml&quot; [kubelet-start] Starting the kubelet [control-plane] Using manifest folder &quot;/etc/kubernetes/manifests&quot; [control-plane] Creating static Pod manifest for &quot;kube-apiserver&quot; [control-plane] Creating static Pod manifest for &quot;kube-controller-manager&quot; [control-plane] Creating 
static Pod manifest for &quot;kube-scheduler&quot; [etcd] Creating static Pod manifest for local etcd in &quot;/etc/kubernetes/manifests&quot; [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory &quot;/etc/kubernetes/manifests&quot;. This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get &quot;http://localhost:10248/healthz&quot;: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get &quot;http://localhost:10248/healthz&quot;: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get &quot;http://localhost:10248/healthz&quot;: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get &quot;http://localhost:10248/healthz&quot;: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get &quot;http://localhost:10248/healthz&quot;: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: [ β–ͺ Generating certificates and keys ... β–ͺ Booting up control plane ... πŸ’£ Error starting cluster: wait: /bin/bash -c &quot;sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables&quot;: Process exited with status 1 stdout: [init] Using Kubernetes version: v1.20.2 [preflight] Running pre-flight checks [preflight] The system verification failed. 
Printing the output from the verification: KERNEL_VERSION: 5.8.0-40-generic DOCKER_VERSION: 20.10.0 OS: Linux CGROUPS_CPU: enabled CGROUPS_CPUACCT: enabled CGROUPS_CPUSET: enabled CGROUPS_DEVICES: enabled CGROUPS_FREEZER: enabled CGROUPS_MEMORY: enabled CGROUPS_PIDS: enabled CGROUPS_HUGETLB: enabled [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder &quot;/var/lib/minikube/certs&quot; [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing &quot;sa&quot; key [kubeconfig] Using kubeconfig folder &quot;/etc/kubernetes&quot; [kubeconfig] Writing &quot;admin.conf&quot; kubeconfig file [kubeconfig] Writing &quot;kubelet.conf&quot; kubeconfig file [kubeconfig] Writing &quot;controller-manager.conf&quot; kubeconfig file [kubeconfig] Writing &quot;scheduler.conf&quot; kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file &quot;/var/lib/kubelet/kubeadm-flags.env&quot; [kubelet-start] Writing kubelet configuration to file &quot;/var/lib/kubelet/config.yaml&quot; [kubelet-start] Starting the kubelet [control-plane] Using manifest folder &quot;/etc/kubernetes/manifests&quot; [control-plane] Creating static Pod manifest for &quot;kube-apiserver&quot; [control-plane] Creating static Pod manifest for &quot;kube-controller-manager&quot; [control-plane] Creating static Pod manifest for &quot;kube-scheduler&quot; [etcd] Creating static Pod manifest for local etcd in &quot;/etc/kubernetes/manifests&quot; [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory &quot;/etc/kubernetes/manifests&quot;. This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get &quot;http://localhost:10248/healthz&quot;: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get &quot;http://localhost:10248/healthz&quot;: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get &quot;http://localhost:10248/healthz&quot;: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get &quot;http://localhost:10248/healthz&quot;: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get &quot;http://localhost:10248/healthz&quot;: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: 😿 minikube is exiting due to an error. If the above message is not useful, open an issue: πŸ‘‰ https://github.com/kubernetes/minikube/issues/new/choose ❌ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c &quot;sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables&quot;: Process exited with status 1 stdout: [init] Using Kubernetes version: v1.20.2 [preflight] Running pre-flight checks [preflight] The system verification failed. 
Printing the output from the verification: KERNEL_VERSION: 5.8.0-40-generic DOCKER_VERSION: 20.10.0 OS: Linux CGROUPS_CPU: enabled CGROUPS_CPUACCT: enabled CGROUPS_CPUSET: enabled CGROUPS_DEVICES: enabled CGROUPS_FREEZER: enabled CGROUPS_MEMORY: enabled CGROUPS_PIDS: enabled CGROUPS_HUGETLB: enabled [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder &quot;/var/lib/minikube/certs&quot; [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing &quot;sa&quot; key [kubeconfig] Using kubeconfig folder &quot;/etc/kubernetes&quot; [kubeconfig] Writing &quot;admin.conf&quot; kubeconfig file [kubeconfig] Writing &quot;kubelet.conf&quot; kubeconfig file [kubeconfig] Writing &quot;controller-manager.conf&quot; kubeconfig file [kubeconfig] Writing &quot;scheduler.conf&quot; kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file &quot;/var/lib/kubelet/kubeadm-flags.env&quot; [kubelet-start] Writing kubelet configuration to file &quot;/var/lib/kubelet/config.yaml&quot; [kubelet-start] Starting the kubelet [control-plane] Using manifest folder &quot;/etc/kubernetes/manifests&quot; [control-plane] Creating static Pod manifest for &quot;kube-apiserver&quot; [control-plane] Creating static Pod manifest for &quot;kube-controller-manager&quot; [control-plane] Creating static Pod manifest for &quot;kube-scheduler&quot; [etcd] Creating static Pod manifest for local etcd in &quot;/etc/kubernetes/manifests&quot; [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory &quot;/etc/kubernetes/manifests&quot;. This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get &quot;http://localhost:10248/healthz&quot;: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get &quot;http://localhost:10248/healthz&quot;: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get &quot;http://localhost:10248/healthz&quot;: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get &quot;http://localhost:10248/healthz&quot;: dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get &quot;http://localhost:10248/healthz&quot;: dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: πŸ’‘ Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start 🍿 Related issue: https://github.com/kubernetes/minikube/issues/4172 </code></pre> <p>I can't understand what the problem is here. It was working, but then I got a similar error. It says:</p> <blockquote> <p>🐳 Preparing Kubernetes v1.20.0 on Docker 20.10.0 ...| ❌ Unable to load cached images: loading cached images: stat /home/feiz-nouri/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4: no such file or directory</p> </blockquote> <p>I uninstalled it and then reinstalled it, but I still got the error.</p> <p>How can I fix this?</p>
<p>You can use <code>minikube delete</code> to delete the old cluster. After that, start Minikube again using <code>minikube start</code>; a short example is sketched below.</p>
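<p>A minimal sequence (the <code>--purge</code> option mentioned in the comment is only needed if you also want to clear the local cache, and may not exist in very old Minikube releases):</p> <pre><code># remove the broken cluster (optionally add --purge to wipe ~/.minikube as well)
minikube delete

# start a fresh cluster
minikube start
</code></pre>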
<p>I have deployed my running application in AKS. I want to add a new disk (a 30 GB hard disk), but I don't know how to do it.</p> <p>I want to attach 3 disks.</p> <p>Here are the details of the AKS cluster:</p> <ul> <li>Node size: <code>Standard_DS2_v2</code></li> <li>Node pools: <code>1 node pool</code></li> <li>Storage is:</li> </ul> <hr /> <pre><code>default (default) kubernetes.io/azure-disk Delete WaitForFirstConsumer true </code></pre> <p>Please tell me how to add it.</p>
<p>Based on <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#:%7E:text=A%20PersistentVolume%20(PV)%20is%20a,node%20is%20a%20cluster%20resource.&amp;text=Pods%20can%20request%20specific%20levels%20of%20resources%20(CPU%20and%20Memory)." rel="nofollow noreferrer">Kubernetes documentation</a>:</p> <blockquote> <p>A <em>PersistentVolume</em> (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">Storage Classes</a>.</p> <p>It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV.</p> </blockquote> <p>In the <a href="https://learn.microsoft.com/en-us/azure/aks/concepts-storage#persistent-volumes" rel="nofollow noreferrer">Azure documentation</a> one can find clear guides how to:</p> <ul> <li><a href="https://learn.microsoft.com/en-us/azure/aks/azure-disk-volume" rel="nofollow noreferrer"><em>create a static volume using Azure Disks</em></a></li> <li><a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-volume" rel="nofollow noreferrer"><em>create a static volume using Azure Files</em></a></li> <li><a href="https://learn.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv" rel="nofollow noreferrer"><em>create a dynamic volume using Azure Disks</em></a></li> <li><a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv" rel="nofollow noreferrer"><em>create a dynamic volume using Azure Files</em></a></li> </ul> <p><strong>NOTE</strong>: Before you begin you should have <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough" rel="nofollow noreferrer">existing AKS cluster</a> and Azure CLI version 2.0.59 or <a href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli" rel="nofollow noreferrer">later installed</a> and configured. To check your version run:</p> <pre><code>az --version </code></pre> <p>See also <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">this documentation</a>.</p>
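<p>As a minimal sketch of the dynamic-provisioning route (names are illustrative), a PersistentVolumeClaim like the one below requests a 30 GiB Azure Disk from the <code>default</code> storage class shown in the question; Azure creates and attaches the disk when a pod mounts the claim. Repeat with different claim names for the three disks:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-disk-1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: default
  resources:
    requests:
      storage: 30Gi
</code></pre>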
<p>Having trouble running the <code>manage.py migrate</code> command in our kubernetes cluster. It seems to have lost permission to run anything. None of the <code>manage.py</code> commands work, they all get the same issue.</p> <p>I have no ability to change the permissions or ownership on the container. This worked in the past (at least Nov 2021) but using the latest version causes this error. Does anyone have any idea why the commands no longer work?</p> <pre><code>bash-4.4$ ./manage.py migrate Traceback (most recent call last): File &quot;./manage.py&quot;, line 12, in &lt;module&gt; execute_from_command_line(sys.argv) File &quot;/venv/lib64/python3.8/site-packages/django/core/management/__init__.py&quot;, line 425, in execute_from_command_line utility.execute() File &quot;/venv/lib64/python3.8/site-packages/django/core/management/__init__.py&quot;, line 419, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File &quot;/venv/lib64/python3.8/site-packages/django/core/management/base.py&quot;, line 373, in run_from_argv self.execute(*args, **cmd_options) File &quot;/venv/lib64/python3.8/site-packages/django/core/management/base.py&quot;, line 417, in execute output = self.handle(*args, **options) File &quot;/venv/lib64/python3.8/site-packages/django/core/management/base.py&quot;, line 90, in wrapped res = handle_func(*args, **kwargs) File &quot;/venv/lib64/python3.8/site-packages/django/core/management/commands/migrate.py&quot;, line 75, in handle self.check(databases=[database]) File &quot;/venv/lib64/python3.8/site-packages/django/core/management/base.py&quot;, line 438, in check all_issues = checks.run_checks( File &quot;/venv/lib64/python3.8/site-packages/django/core/checks/registry.py&quot;, line 77, in run_checks new_errors = check(app_configs=app_configs, databases=databases) File &quot;/venv/lib64/python3.8/site-packages/tcms/core/checks.py&quot;, line 15, in check_installation_id with open(filename, &quot;w&quot;, encoding=&quot;utf-8&quot;) as file_handle: PermissionError: [Errno 13] Permission denied: '/Kiwi/uploads/installation-id' </code></pre>
<p>I needed to add this to <code>deployment.yaml</code>. Setting <code>fsGroup</code> makes Kubernetes apply group ID 1001 to the mounted volumes, so the container's non-root user can write to <code>/Kiwi/uploads</code> again (see the placement sketch below):</p> <pre><code>securityContext: fsGroup: 1001 </code></pre>
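<p>For context, this is a pod-level security context, so in a Deployment manifest it sits under <code>spec.template.spec</code>, roughly like this (surrounding fields are illustrative):</p> <pre><code>spec:
  template:
    spec:
      securityContext:
        fsGroup: 1001
      containers:
        - name: kiwi
          image: kiwitcms/kiwi   # illustrative; keep your existing container spec
</code></pre>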
<p>This should be fairly easy, or I might doing something wrong, but after a while digging into it I couldn't find a solution.</p> <p>I have a Terraform configuration that contains a <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret" rel="nofollow noreferrer">Kubernetes Secret</a> resource which data comes from Vault. The resource configuration looks like this:</p> <pre><code>resource &quot;kubernetes_secret&quot; &quot;external-api-token&quot; { metadata { name = &quot;external-api-token&quot; namespace = local.platform_namespace annotations = { &quot;vault.security.banzaicloud.io/vault-addr&quot; = var.vault_addr &quot;vault.security.banzaicloud.io/vault-path&quot; = &quot;kubernetes/${var.name}&quot; &quot;vault.security.banzaicloud.io/vault-role&quot; = &quot;reader&quot; } } data = { &quot;EXTERNAL_API_TOKEN&quot; = &quot;vault:secret/gcp/${var.env}/micro-service#EXTERNAL_API_TOKEN&quot; } } </code></pre> <p>Everything is working fine so far, but every time I do <code>terraform plan</code> or <code>terraform apply</code>, it marks that resource as &quot;changed&quot; and updates it, even when I didn't touch the resource or other resources related to it. E.g.:</p> <pre><code>... (other actions to be applied, unrelated to the offending resource) ... # kubernetes_secret.external-api-token will be updated in-place ~ resource &quot;kubernetes_secret&quot; &quot;external-api-token&quot; { ~ data = (sensitive value) id = &quot;platform/external-api-token&quot; type = &quot;Opaque&quot; metadata { annotations = { &quot;vault.security.banzaicloud.io/vault-addr&quot; = &quot;https://vault.infra.megacorp.io:8200&quot; &quot;vault.security.banzaicloud.io/vault-path&quot; = &quot;kubernetes/gke-pipe-stg-2&quot; &quot;vault.security.banzaicloud.io/vault-role&quot; = &quot;reader&quot; } generation = 0 labels = {} name = &quot;external-api-token&quot; namespace = &quot;platform&quot; resource_version = &quot;160541784&quot; self_link = &quot;/api/v1/namespaces/platform/secrets/external-api-token&quot; uid = &quot;40e93d16-e8ef-47f5-92ac-6d859dfee123&quot; } } Plan: 3 to add, 1 to change, 0 to destroy. </code></pre> <p>It is saying that the data for this resource has been changed. However the data in Vault remains the same, nothing has been modified there. This update happens every single time now.</p> <p>I was thinking on to use the <a href="https://www.terraform.io/language/meta-arguments/lifecycle#ignore_changes" rel="nofollow noreferrer"><code>ignore_changes</code></a> lifecycle feature, but I assume this will make any changes done in Vault secret to be ignored by Terraform, which I also don't want. <strong>I would like the resource to be updated only when the secret in Vault was changed.</strong></p> <p>Is there a way to do this? What am I missing or doing wrong?</p>
<p>You need to add the Terraform <code>lifecycle</code> <code>ignore_changes</code> meta-argument to your code. For data with API token values, and also for annotations, Terraform for some reason seems to assume that the data changes every time a plan, apply or even destroy is run. I had a similar issue with Azure Key Vault.</p> <p>Here is the code with the lifecycle <code>ignore_changes</code> meta-argument included:</p> <pre><code>resource &quot;kubernetes_secret&quot; &quot;external-api-token&quot; { metadata { name = &quot;external-api-token&quot; namespace = local.platform_namespace annotations = { &quot;vault.security.banzaicloud.io/vault-addr&quot; = var.vault_addr &quot;vault.security.banzaicloud.io/vault-path&quot; = &quot;kubernetes/${var.name}&quot; &quot;vault.security.banzaicloud.io/vault-role&quot; = &quot;reader&quot; } } data = { &quot;EXTERNAL_API_TOKEN&quot; = &quot;vault:secret/gcp/${var.env}/micro-service#EXTERNAL_API_TOKEN&quot; } lifecycle { ignore_changes = [ # Ignore changes to data and annotations, e.g. because a management agent # updates these based on some ruleset managed elsewhere. data, annotations, ] } } </code></pre> <p>Link to meta-arguments with lifecycle:</p> <p><a href="https://www.terraform.io/language/meta-arguments/lifecycle" rel="nofollow noreferrer">https://www.terraform.io/language/meta-arguments/lifecycle</a></p>
<p>I am encountering a weird behavior when I try to attach <code>podAffinity</code> to the <strong>Scheduler deployment from the official Airflow helm chart</strong>, like:</p> <pre><code> affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - postgresql topologyKey: &quot;kubernetes.io/hostname&quot; </code></pre> <p>With an example Deployment to which the <code>podAffinity</code> should &quot;hook up&quot; to:</p> <pre><code>metadata: name: {{ template &quot;postgresql.fullname&quot; . }} labels: app: postgresql chart: {{ template &quot;postgresql.chart&quot; . }} release: {{ .Release.Name | quote }} heritage: {{ .Release.Service | quote }} spec: serviceName: {{ template &quot;postgresql.fullname&quot; . }}-headless replicas: 1 selector: matchLabels: app: postgresql release: {{ .Release.Name | quote }} template: metadata: name: {{ template &quot;postgresql.fullname&quot; . }} labels: app: postgresql chart: {{ template &quot;postgresql.chart&quot; . }} </code></pre> <p>Which results in:</p> <pre><code>NotTriggerScaleUp: pod didn't trigger scale-up: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't match pod affinity rules </code></pre> <p><strong>However, applying the same <code>podAffinity</code> config to the Webserver deployment works just fine. Plus, changing the example Deployment to a vanilla nginx manifested itself in the outcome.</strong></p> <p>It does not seem to be any resource limitation issue since I already tried various configs, every time with the same result. I do not use any custom configurations apart from node affinity.</p> <p>Has anyone encounter the same or has any idea what I might do wrong?</p> <p><strong>Setup:</strong></p> <ul> <li>AKS cluster</li> <li>Airflow helm chart 1.1.0</li> <li>Airflow 1.10.15 (but I don't think this matters)</li> <li>kubectl client (1.22.1) and server (1.20.7)</li> </ul> <p><strong>Links to Airflow charts:</strong></p> <ul> <li><a href="https://github.com/apache/airflow/blob/main/chart/templates/scheduler/scheduler-deployment.yaml" rel="nofollow noreferrer">Scheduler</a></li> <li><a href="https://github.com/apache/airflow/blob/main/chart/templates/webserver/webserver-deployment.yaml" rel="nofollow noreferrer">Webserver</a></li> </ul>
<p>I've recreated this scenario on my GKE cluster and I've decided to provide a Community Wiki answer to show that the <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">podAffinity</a> on the <a href="https://github.com/apache/airflow/blob/main/chart/templates/scheduler/scheduler-deployment.yaml" rel="nofollow noreferrer">Scheduler</a> works as expected. I will describe step by step how I tested it below.</p> <hr /> <ol> <li>In the <code>values.yaml</code> file, I've configured the <code>podAffinity</code> as follows:</li> </ol> <pre><code>$ cat values.yaml ... # Airflow scheduler settings scheduler: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - postgresql topologyKey: &quot;kubernetes.io/hostname&quot; ... </code></pre> <ol start="2"> <li>I've installed the <a href="https://airflow.apache.org/docs/helm-chart/stable/index.html#installing-the-chart" rel="nofollow noreferrer">Airflow</a> on a Kubernetes cluster using the Helm package manager with the <code>values.yaml</code> file specified.</li> </ol> <pre><code>$ helm install airflow apache-airflow/airflow --values values.yaml </code></pre> <p>After a while we can check the status of the <code>scheduler</code>:</p> <pre><code>$ kubectl get pods -owide | grep &quot;scheduler&quot; airflow-scheduler-79bfb664cc-7n68f 0/2 Pending 0 8m6s &lt;none&gt; &lt;none&gt; &lt;none&gt; &lt;none&gt; </code></pre> <ol start="3"> <li>I've created an example Deployment with the <code>app: postgresql</code> label:</li> </ol> <pre><code>$ cat test.yaml apiVersion: apps/v1 kind: Deployment metadata: labels: app: postgresql name: test spec: replicas: 1 selector: matchLabels: app: postgresql template: metadata: labels: app: postgresql spec: containers: - image: nginx name: nginx $ kubectl apply -f test.yaml deployment.apps/test created $ kubectl get pods --show-labels | grep test test-7d4c9c654-7lqns 1/1 Running 0 2m app=postgresql,... </code></pre> <ol start="4"> <li>Finally, we can check that the <code>scheduler</code> was successfully created:</li> </ol> <pre><code>$ kubectl get pods -o wide | grep &quot;scheduler\|test&quot; airflow-scheduler-79bfb664cc-7n68f 2/2 Running 0 14m 10.X.1.6 nodeA test-7d4c9c654-7lqns 1/1 Running 0 2m27s 10.X.1.5 nodeA </code></pre> <hr /> <p>Additionally, detailed informtion on <code>pod affinity</code> and <code>pod anti-affinity</code> can be found in the <a href="https://docs.openshift.com/container-platform/4.9/nodes/scheduling/nodes-scheduler-pod-affinity.html#nodes-scheduler-pod-affinity-about_nodes-scheduler-pod-affinity" rel="nofollow noreferrer">Understanding pod affinity</a> documentation:</p> <blockquote> <p>Pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key/value labels on other pods.</p> <p>Pod affinity can tell the scheduler to locate a new pod on the same node as other pods if the label selector on the new pod matches the label on the current pod.</p> <p>Pod anti-affinity can prevent the scheduler from locating a new pod on the same node as pods with the same labels if the label selector on the new pod matches the label on the current pod.</p> </blockquote>
<p>I am following the conference demo here <a href="https://www.youtube.com/watch?v=3KtEAa7_duA&amp;list=PLsClnAJ27pEXdSwW2tI0uc0YJ2wzxJG6b" rel="nofollow noreferrer">https://www.youtube.com/watch?v=3KtEAa7_duA&amp;list=PLsClnAJ27pEXdSwW2tI0uc0YJ2wzxJG6b</a></p> <p>My aim is to start all the Kubernetes components by hand to understand the architecture better; however, I stumbled upon this problem when I start the API server:</p> <pre><code>root@BLQ00667LT:/home/user/kubernetes# ./kubernetes/server/bin/kube-apiserver --etcd-servers=http://localhost:2379 W0704 11:13:35.394474 4924 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver. I0704 11:13:35.394558 4924 server.go:391] external host was not specified, using 172.17.89.222 W0704 11:13:35.394569 4924 authentication.go:527] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer E0704 11:13:35.395059 4924 run.go:74] &quot;command failed&quot; err=&quot;[service-account-issuer is a required flag, --service-account-signing-key-file and --service-account-issuer are required flags]&quot; </code></pre> <p>I suspect the Kubernetes version used by the tutorial is different from mine, mine being much more recent: the video is from 2019 and I downloaded the latest Kubernetes, v1.28.</p> <p>Would you know what's wrong, or is there any other tutorial or learning path I could follow?</p>
<p>Thanks for asking.</p> <p>If you want to know more about the Kubernetes architecture, you can try this:</p> <p><a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way</a> from Kelsey Hightower.</p> <p>It's easier because it has code that you can copy and paste, but you need to adapt it to your environment (subnets, routes and network interfaces), because it is naturally written for Google Cloud.</p> <p>Good luck.</p>
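<p>For example, getting the labs locally is just a matter of cloning the repository (the <code>docs/</code> folder layout below is how the repo was organised at the time of writing, so double-check it against the current README):</p> <pre><code>git clone https://github.com/kelseyhightower/kubernetes-the-hard-way.git
cd kubernetes-the-hard-way
ls docs/   # the numbered labs, meant to be followed in order</code></pre>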
<p>I am getting this error when I want to install docker.io (<code>sudo apt-get install docker.io</code>):</p> <pre><code>The following information may help to resolve the situation:

The following packages have unmet dependencies:
 containerd.io : Conflicts: containerd
                 Conflicts: runc
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.</code></pre> <p>I have tried to reinstall containerd and runc, but that didn't solve the problem.</p>
<p><strong>try this</strong></p> <p><code>sudo apt-get remove docker docker-engine docker.io containerd runc</code></p> <p><code>sudo apt-get purge docker-ce docker-ce-cli containerd.io docker-compose-plugin docker-compose</code></p> <p><code>sudo rm -rf /etc/bash_completion.d/docker /usr/local/bin/docker-compose /etc/bash_completion.d/docker-compose</code></p> <p><code>sudo apt install containerd -y</code></p> <p><code>sudo apt install -y docker.io docker-compose</code></p>
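<p>As a quick sanity check after the reinstall (assuming a regular systemd-based Ubuntu machine, not WSL), you can verify the packages and the daemon:</p> <pre><code>docker --version
docker-compose --version
sudo systemctl status docker --no-pager</code></pre>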
<p>I have a question about Kubernetes networking.</p> <p>My working scenario:</p> <ul> <li>I have a Jenkins container on my localhost and this container is up and running. Inside Jenkins, I have a job. To access Jenkins, I use the &quot;http://localhost:8080&quot; URL. (Jenkins is not running inside Kubernetes.)</li> <li>My Flask app triggers the Jenkins job with this code:</li> </ul> <blockquote> <pre><code> @app.route(&quot;/create&quot;,methods=[&quot;GET&quot;,&quot;POST&quot;]) def create(): if request.method ==&quot;POST&quot;: dosya_adi=request.form[&quot;sendmail&quot;] server = jenkins.Jenkins('http://localhost:8080/', username='my-user-name', password='my-password') server.build_job('jenkins_openvpn', {'FILE_NAME': dosya_adi}, token='my-token') </code></pre> </blockquote> <ul> <li>Then I Dockerized this Flask app. My image name is: &quot;jenkins-app&quot;</li> <li>If I run this command, everything is perfect:</li> </ul> <blockquote> <p><code>docker run -it --network=&quot;host&quot; --name=jenkins-app jenkins-app</code></p> </blockquote> <p>But I want to do the same thing with Kubernetes. For that I wrote this YAML file:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: jenkins-pod spec: hostNetwork: true containers: - name: jenkins-app image: jenkins-app:latest imagePullPolicy: Never ports: - containerPort: 5000 </code></pre> <ul> <li>With this YAML file, I can access the Flask app on port 5000, but when I try to trigger the Jenkins job I get an error like this: requests.exceptions.ConnectionError</li> </ul> <p>Could you suggest a way to do this with Kubernetes?</p>
<p>I created an endpoint.yml file with the content below, and this solved my problem:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Endpoints metadata: name: jenkins-server subsets: - addresses: - ip: my-ps-ip ports: - port: 8080 </code></pre> <p>Then I changed this line in my Flask app like this:</p> <pre class="lang-py prettyprint-override"><code>server = jenkins.Jenkins('http://my-ps-ip:8080/', username='my-user-name', password='my-password') </code></pre>
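<p>If you would rather keep a stable in-cluster name instead of hard-coding the host IP in the Flask code, an Endpoints object is normally paired with a selector-less Service of the same name. This is only a sketch; the port is assumed to match the Endpoints above:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: jenkins-server
spec:
  ports:
    - port: 8080
      targetPort: 8080</code></pre> <p>With that in place the Flask app could point at <code>http://jenkins-server:8080/</code>, and only the Endpoints object needs updating if the host IP ever changes.</p>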
<p>I have a requirement to convert a multi-pod setup into a single pod with multiple containers. I had pod x running microservice x and pod y running microservice y, with the REST endpoints below.</p> <ul> <li><code>http://x:8080/{context path-x}/endpoint</code></li> <li><code>http://y:8080/{context path-y}/endpoint</code></li> </ul> <p>I want to have pod z with the x and y microservices, with container x exposed on port 8080 and y on port 8081 within the same pod. I am able to achieve this with a multi-container pod.</p> <p>My problem is that the URLs have now changed:</p> <ul> <li><code>http://z:8080/{context path-x}/endpoint</code></li> <li><code>http://z:8081/{context path-y}/endpoint</code></li> </ul> <p>I am looking for a way to hit the endpoints without changing the URLs, or with minimal changes, i.e. with the URLs below:</p> <ul> <li><code>http://x:8080/{context path-x}/endpoint</code></li> <li><code>http://y:8081/{context path-y}/endpoint</code></li> </ul> <p>My real project has 5 containers in a single pod and hundreds of endpoints exposed.</p> <p>How can I achieve this?</p>
<p>Here's how I addressed my problem:</p> <p>Application Deployment File (x and y containers on deployment z)</p> <pre><code> --- apiVersion: apps/v1 kind: Deployment metadata: name: z spec: replicas: 1 progressDeadlineSeconds: 600 selector: matchLabels: component: z template: metadata: annotations: version: v1.0 labels: component: z occloud.oracle.com/open-network-policy: allow name: z spec: containers: - name: x image:x:dev ports: - containerPort: 8080 - name: y image: y:dev ports: - containerPort: 8081 --- kind: Service apiVersion: v1 metadata: name: x annotations: version: v1.0 spec: selector: component: z ports: - name: x port: 8080 targetPort: 8080 type: ClusterIP --- kind: Service apiVersion: v1 metadata: name: y annotations: version: v1.0 spec: selector: component: z ports: - name: y port: 8080 targetPort: 8081 type: ClusterIP </code></pre> <p>http://x:8080/{context path-x}/endpoint http://y:8080/{context path-y}/endpoint</p>
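<p>As a quick sanity check you can confirm that both Services point at the same pod (on different container ports) and then call them from another pod in the cluster; the context paths below are just the placeholders from the question:</p> <pre><code>kubectl get endpoints x y
# from any pod inside the cluster (same namespace):
curl http://x:8080/{context path-x}/endpoint
curl http://y:8080/{context path-y}/endpoint</code></pre>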
<p>I installed Prometheus on my Kubernetes cluster with Helm, using the community chart <a href="https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack" rel="nofollow noreferrer">kube-prometheus-stack</a> - and I get some beautiful dashboards in the bundled Grafana instance. I now wanted the recommender from the Vertical Pod Autoscaler to use Prometheus as a data source for historic metrics, <a href="https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/FAQ.md#how-can-i-use-prometheus-as-a-history-provider-for-the-vpa-recommender" rel="nofollow noreferrer">as described here</a>. Meaning, I had to make a change to the Prometheus scraper settings for cAdvisor, and <a href="https://stackoverflow.com/a/65421764/310937">this answer</a> pointed me in the right direction, as after making that change I can now see the correct <code>job</code> tag on metrics from cAdvisor.</p> <p>Unfortunately, now some of the charts in the Grafana dashboards are broken. It looks like it no longer picks up the CPU metrics - and instead just displays &quot;No data&quot; for the CPU-related charts.</p> <p>So, I assume I have to tweak the charts to be able to pick up the metrics correctly again, but I don't see any obvious places to do this in Grafana?</p> <p>Not sure if it is relevant for the question, but I am running my Kubernetes cluster on Azure Kubernetes Service (AKS).</p> <p>This is the full <code>values.yaml</code> I supply to the Helm chart when installing Prometheus:</p> <pre class="lang-yaml prettyprint-override"><code>kubeControllerManager: enabled: false kubeScheduler: enabled: false kubeEtcd: enabled: false kubeProxy: enabled: false kubelet: serviceMonitor: # Diables the normal cAdvisor scraping, as we add it with the job name &quot;kubernetes-cadvisor&quot; under additionalScrapeConfigs # The reason for doing this is to enable the VPA to use the metrics for the recommender # https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/FAQ.md#how-can-i-use-prometheus-as-a-history-provider-for-the-vpa-recommender cAdvisor: false prometheus: prometheusSpec: retention: 15d storageSpec: volumeClaimTemplate: spec: # the azurefile storage class is created automatically on AKS storageClassName: azurefile accessModes: [&quot;ReadWriteMany&quot;] resources: requests: storage: 50Gi additionalScrapeConfigs: - job_name: 'kubernetes-cadvisor' scheme: https metrics_path: /metrics/cadvisor tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt insecure_skip_verify: true bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token kubernetes_sd_configs: - role: node relabel_configs: - action: labelmap regex: __meta_kubernetes_node_label_(.+) </code></pre> <p>Kubernetes version: 1.21.2</p> <p>kube-prometheus-stack version: 18.1.1</p> <p>helm version: version.BuildInfo{Version:&quot;v3.6.3&quot;, GitCommit:&quot;d506314abfb5d21419df8c7e7e68012379db2354&quot;, GitTreeState:&quot;dirty&quot;, GoVersion:&quot;go1.16.5&quot;}</p>
<p>Unfortunately, I don't have access to Azure AKS, so I've reproduced this issue on my GKE cluster. Below I'll provide some explanations that may help to resolve your problem.</p> <p>First you can try to execute this <code>node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate</code> rule to see if it returns any result:</p> <p><a href="https://i.stack.imgur.com/UZ4uk.png" rel="noreferrer"><img src="https://i.stack.imgur.com/UZ4uk.png" alt="enter image description here" /></a></p> <p>If it doesn't return any records, please read the following paragraphs.</p> <h2>Creating a scrape configuration for cAdvisor</h2> <p>Rather than creating a completely new scrape configuration for cadvisor, I would suggest using one that is generated by default when <code>kubelet.serviceMonitor.cAdvisor: true</code>, but with a few modifications such as changing the label to <code>job=kubernetes-cadvisor</code>.</p> <p>In my example, the 'kubernetes-cadvisor' scrape configuration looks like this:</p> <p><strong>NOTE:</strong> I added this config under the <code>additionalScrapeConfigs</code> in the <code>values.yaml</code> file (the rest of the <code>values.yaml</code> file may be like yours).</p> <pre><code>- job_name: 'kubernetes-cadvisor' honor_labels: true honor_timestamps: true scrape_interval: 30s scrape_timeout: 10s metrics_path: /metrics/cadvisor scheme: https authorization: type: Bearer credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt insecure_skip_verify: true follow_redirects: true relabel_configs: - source_labels: [job] separator: ; regex: (.*) target_label: __tmp_prometheus_job_name replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name] separator: ; regex: kubelet replacement: $1 action: keep - source_labels: [__meta_kubernetes_service_label_k8s_app] separator: ; regex: kubelet replacement: $1 action: keep - source_labels: [__meta_kubernetes_endpoint_port_name] separator: ; regex: https-metrics replacement: $1 action: keep - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name] separator: ; regex: Node;(.*) target_label: node replacement: ${1} action: replace - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name] separator: ; regex: Pod;(.*) target_label: pod replacement: ${1} action: replace - source_labels: [__meta_kubernetes_namespace] separator: ; regex: (.*) target_label: namespace replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_name] separator: ; regex: (.*) target_label: service replacement: $1 action: replace - source_labels: [__meta_kubernetes_pod_name] separator: ; regex: (.*) target_label: pod replacement: $1 action: replace - source_labels: [__meta_kubernetes_pod_container_name] separator: ; regex: (.*) target_label: container replacement: $1 action: replace - separator: ; regex: (.*) target_label: endpoint replacement: https-metrics action: replace - source_labels: [__metrics_path__] separator: ; regex: (.*) target_label: metrics_path replacement: $1 action: replace - source_labels: [__address__] separator: ; regex: (.*) modulus: 1 target_label: __tmp_hash replacement: $1 action: hashmod - source_labels: [__tmp_hash] separator: ; regex: &quot;0&quot; replacement: $1 action: keep kubernetes_sd_configs: - role: endpoints kubeconfig_file: &quot;&quot; follow_redirects: true 
namespaces: names: - kube-system </code></pre> <h3>Modifying Prometheus Rules</h3> <p>By default, Prometheus rules fetching data from cAdvisor use <code>job=&quot;kubelet&quot;</code> in their PromQL expressions:</p> <p><a href="https://i.stack.imgur.com/oNFnF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/oNFnF.png" alt="enter image description here" /></a></p> <p>After changing <code>job=kubelet</code> to <code>job=kubernetes-cadvisor</code>, we also need to modify this label in the Prometheus rules:<br /> <strong>NOTE:</strong> We just need to modify the rules that have <code>metrics_path=&quot;/metrics/cadvisor</code> (these are rules that retrieve data from cAdvisor).</p> <pre><code>$ kubectl get prometheusrules prom-1-kube-prometheus-sta-k8s.rules -o yaml ... - name: k8s.rules rules: - expr: |- sum by (cluster, namespace, pod, container) ( irate(container_cpu_usage_seconds_total{job=&quot;kubernetes-cadvisor&quot;, metrics_path=&quot;/metrics/cadvisor&quot;, image!=&quot;&quot;}[5m]) ) * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) ( 1, max by(cluster, namespace, pod, node) (kube_pod_info{node!=&quot;&quot;}) ) record: node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate ... here we have a few more rules to modify... </code></pre> <p>After modifying Prometheus rules and waiting some time, we can see if it works as expected. We can try to execute <code>node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate</code> as in the beginning.</p> <p>Additionally, let's check out our Grafana to make sure it has started displaying our dashboards correctly: <a href="https://i.stack.imgur.com/Z7LRc.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Z7LRc.png" alt="enter image description here" /></a></p>
<p>I created a service account user and got the token for the user. However, ever time I try to access the names spaces I get the following error:</p> <pre><code>{ &quot;kind&quot;: &quot;Status&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;metadata&quot;: {}, &quot;status&quot;: &quot;Failure&quot;, &quot;message&quot;: &quot;namespaces is forbidden: User \&quot;system:serviceaccount:default:svcacc\&quot; cannot list resource \&quot;namespaces\&quot; in API group \&quot;\&quot; at the cluster scope&quot;, &quot;reason&quot;: &quot;Forbidden&quot;, &quot;details&quot;: { &quot;kind&quot;: &quot;namespaces&quot; }, &quot;code&quot;: 403 } </code></pre> <p>This is my service account:</p> <pre><code>Name: svcacc-token-87jd6 Namespace: default Labels: &lt;none&gt; Annotations: kubernetes.io/service-account.name: svcacc kubernetes.io/service-account.uid: 384aa590-dac4-472c-a9a7-116c5fb0562b Type: kubernetes.io/service-account-token </code></pre> <p>Do I need to give the service account roles or add it to a group? This is running in AWS EKS, not sure if that make a difference.</p> <p>I am trying to use ServiceNow discovery to discover my Kubernetes cluster. Regardless if I am using ServiceNow or Postman, I get the same message.</p> <p>EDIT: Ended up using YAML to configure the service account and roles.</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: svcacc namespace: default --- # Create ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: svcacc roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: svcacc namespace: default </code></pre> <p>Once this was configured I updated the <code>kubeconfig</code> and ran to get token:</p> <pre><code>$(kubectl describe secrets &quot;$(kubectl describe serviceaccount svcacc -n default| grep -i Tokens | awk '{print $2}')&quot; -n default | grep token: | awk '{print $2}') </code></pre>
<p>To clarify I am posting a Community Wiki answer.</p> <p>You solved this problem using YAML file to configure the service account and roles.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ServiceAccount metadata: name: svcacc namespace: default --- # Create ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: svcacc roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: svcacc namespace: default </code></pre> <p>And after that you updated the <code>kubeconfig</code> and ran to get token:</p> <pre class="lang-yaml prettyprint-override"><code>$(kubectl describe secrets &quot;$(kubectl describe serviceaccount svcacc -n default| grep -i Tokens | awk '{print $2}')&quot; -n default | grep token: | awk '{print $2}') </code></pre> <p><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Here</a> is documentation about RBAC Authorization with many examples.</p> <blockquote> <p>Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization.</p> </blockquote>
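<p>One quick way to verify that the binding works as intended is to check the permission by impersonating the service account (just a sanity check, not required):</p> <pre><code>kubectl auth can-i list namespaces --as=system:serviceaccount:default:svcacc
# expected output: yes</code></pre>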
<p>I have a Kubernetes cluster that is running a Jenkins Pod with a service set up for Metallb. Currently when I try to hit the <code>loadBalancerIP</code> for the pod outside of my cluster I am unable to. I also have a <code>kube-verify</code> pod that is running on the cluster with a service that is also using Metallb. When I try to hit that pod outside of my cluster I can hit it with no problem.</p> <p>When I switch the service for the Jenkins pod to be of type <code>NodePort</code> it works but as soon as I switch it back to be of type <code>LoadBalancer</code> it stops working. Both the Jenkins pod and the working <code>kube-verify</code> pod are running on the same node.</p> <p>Cluster Details: The master node is running and is connected to my router wirelessly. On the master node I have dnsmasq setup along with iptable rules that forward the connection from the wireless port to the Ethernet port. Each of the nodes is connected together via a switch via Ethernet. Metallb is setup up in layer2 mode with an address pool that is on the same subnet as the ip address of the wireless port of the master node. The <code>kube-proxy</code> is set to use <code>strictArp</code> and <code>ipvs</code> mode.</p> <p><strong>Jenkins Manifest:</strong></p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: jenkins-sa namespace: &quot;devops-tools&quot; labels: app: jenkins version: v1 tier: backend --- apiVersion: v1 kind: Secret metadata: name: jenkins-secret namespace: &quot;devops-tools&quot; labels: app: jenkins version: v1 tier: backend type: Opaque data: jenkins-admin-password: *************** jenkins-admin-user: ******** --- apiVersion: v1 kind: ConfigMap metadata: name: jenkins namespace: &quot;devops-tools&quot; labels: app: jenkins version: v1 tier: backend data: jenkins.yaml: |- jenkins: authorizationStrategy: loggedInUsersCanDoAnything: allowAnonymousRead: false securityRealm: local: allowsSignup: false enableCaptcha: false users: - id: &quot;${jenkins-admin-username}&quot; name: &quot;Jenkins Admin&quot; password: &quot;${jenkins-admin-password}&quot; disableRememberMe: false mode: NORMAL numExecutors: 0 labelString: &quot;&quot; projectNamingStrategy: &quot;standard&quot; markupFormatter: plainText clouds: - kubernetes: containerCapStr: &quot;10&quot; defaultsProviderTemplate: &quot;jenkins-base&quot; connectTimeout: &quot;5&quot; readTimeout: 15 jenkinsUrl: &quot;jenkins-ui:8080&quot; jenkinsTunnel: &quot;jenkins-discover:50000&quot; maxRequestsPerHostStr: &quot;32&quot; name: &quot;kubernetes&quot; serverUrl: &quot;https://kubernetes&quot; podLabels: - key: &quot;jenkins/jenkins-agent&quot; value: &quot;true&quot; templates: - name: &quot;default&quot; #id: eeb122dab57104444f5bf23ca29f3550fbc187b9d7a51036ea513e2a99fecf0f containers: - name: &quot;jnlp&quot; alwaysPullImage: false args: &quot;^${computer.jnlpmac} ^${computer.name}&quot; command: &quot;&quot; envVars: - envVar: key: &quot;JENKINS_URL&quot; value: &quot;jenkins-ui:8080&quot; image: &quot;jenkins/inbound-agent:4.11-1&quot; ttyEnabled: false workingDir: &quot;/home/jenkins/agent&quot; idleMinutes: 0 instanceCap: 2147483647 label: &quot;jenkins-agent&quot; nodeUsageMode: &quot;NORMAL&quot; podRetention: Never showRawYaml: true serviceAccount: &quot;jenkins-sa&quot; slaveConnectTimeoutStr: &quot;100&quot; yamlMergeStrategy: override crumbIssuer: standard: excludeClientIPFromCrumb: true security: apiToken: creationOfLegacyTokenEnabled: false tokenGenerationOnCreationEnabled: false usageStatisticsEnabled: true 
unclassified: location: adminAddress: url: jenkins-ui:8080 --- apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer --- apiVersion: v1 kind: PersistentVolume metadata: name: jenkins-pv-volume labels: type: local spec: storageClassName: local-storage claimRef: name: jenkins-pv-claim namespace: devops-tools capacity: storage: 16Gi accessModes: - ReadWriteMany local: path: /mnt nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - heine-cluster1 --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: jenkins-pv-claim namespace: devops-tools labels: app: jenkins version: v1 tier: backend spec: accessModes: - ReadWriteMany resources: requests: storage: 8Gi --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: jenkins-cr rules: - apiGroups: [&quot;&quot;] resources: [&quot;*&quot;] verbs: [&quot;*&quot;] --- # This role is used to allow Jenkins scheduling of agents via Kubernetes plugin. apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: jenkins-role-schedule-agents namespace: devops-tools labels: app: jenkins version: v1 tier: backend rules: - apiGroups: [&quot;&quot;] resources: [&quot;pods&quot;, &quot;pods/exec&quot;, &quot;pods/log&quot;, &quot;persistentvolumeclaims&quot;, &quot;events&quot;] verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;] - apiGroups: [&quot;&quot;] resources: [&quot;pods&quot;, &quot;pods/exec&quot;, &quot;persistentvolumeclaims&quot;] verbs: [&quot;create&quot;, &quot;delete&quot;, &quot;deletecollection&quot;, &quot;patch&quot;, &quot;update&quot;] --- # The sidecar container which is responsible for reloading configuration changes # needs permissions to watch ConfigMaps apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: jenkins-casc-reload namespace: devops-tools labels: app: jenkins version: v1 tier: backend rules: - apiGroups: [&quot;&quot;] resources: [&quot;configmaps&quot;] verbs: [&quot;get&quot;, &quot;watch&quot;, &quot;list&quot;] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: jenkins-crb roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: jenkins-cr subjects: - kind: ServiceAccount name: jenkins-sa namespace: &quot;devops-tools&quot; --- # We bind the role to the Jenkins service account. The role binding is created in the namespace # where the agents are supposed to run. 
apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: jenkins-schedule-agents namespace: &quot;devops-tools&quot; labels: app: jenkins version: v1 tier: backend roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: jenkins-role-schedule-agents subjects: - kind: ServiceAccount name: jenkins-sa namespace: &quot;devops-tools&quot; --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: jenkins-watch-configmaps namespace: &quot;devops-tools&quot; labels: app: jenkins version: v1 tier: backend roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: jenkins-casc-reload subjects: - kind: ServiceAccount name: jenkins-sa namespace: &quot;devops-tools&quot; --- apiVersion: v1 kind: Service metadata: name: jenkins namespace: &quot;devops-tools&quot; labels: app: jenkins version: v1 tier: backend annotations: metallb.universe.tf/address-pool: default spec: type: LoadBalancer loadBalancerIP: 172.16.1.5 ports: - name: ui port: 8080 targetPort: 8080 externalTrafficPolicy: Local selector: app: jenkins --- apiVersion: v1 kind: Service metadata: name: jenkins-agent namespace: &quot;devops-tools&quot; labels: app: jenkins version: v1 tier: backend spec: ports: - name: agents port: 50000 targetPort: 50000 selector: app: jenkins --- apiVersion: apps/v1 kind: Deployment metadata: name: jenkins namespace: &quot;devops-tools&quot; labels: app: jenkins version: v1 tier: backend spec: replicas: 1 selector: matchLabels: app: jenkins template: metadata: labels: app: jenkins version: v1 tier: backend annotations: checksum/config: c0daf24e0ec4e4cb59c8a66305181a17249770b37283ca8948e189a58e29a4a5 spec: securityContext: runAsUser: 1000 fsGroup: 1000 runAsNonRoot: true containers: - name: jenkins image: &quot;heineza/jenkins-master:2.323-jdk11-1&quot; imagePullPolicy: Always args: [ &quot;--httpPort=8080&quot;] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: JAVA_OPTS value: -Djenkins.install.runSetupWizard=false -Dorg.apache.commons.jelly.tags.fmt.timeZone=America/Chicago - name: JENKINS_SLAVE_AGENT_PORT value: &quot;50000&quot; ports: - containerPort: 8080 name: ui - containerPort: 50000 name: agents resources: limits: cpu: 2000m memory: 4096Mi requests: cpu: 50m memory: 256Mi volumeMounts: - mountPath: /var/jenkins_home name: jenkins-home readOnly: false - name: jenkins-config mountPath: /var/jenkins_home/jenkins.yaml - name: admin-secret mountPath: /run/secrets/jenkins-admin-username subPath: jenkins-admin-user readOnly: true - name: admin-secret mountPath: /run/secrets/jenkins-admin-password subPath: jenkins-admin-password readOnly: true serviceAccountName: &quot;jenkins-sa&quot; volumes: - name: jenkins-cache emptyDir: {} - name: jenkins-home persistentVolumeClaim: claimName: jenkins-pv-claim - name: jenkins-config configMap: name: jenkins - name: admin-secret secret: secretName: jenkins-secret </code></pre> <p>This Jenkins manifest is a modified version of what the Jenkins helm-chart generates. I redacted my secret but in the actual manifest there are <code>base64</code> encoded strings. Also, the docker image I created and use in the deployment uses the Jenkins 2.323-jdk11 image as a base image and I just installed some plugins for Configuration as Code, kubernetes, and Git. What could be preventing the Jenkins pod from being accessible outside of my cluster when using Metallb?</p>
<p>By default, MetalLB doesn't allow you to re-use/share the same LoadBalancerIP address across services.</p> <p>According to the <a href="https://metallb.universe.tf/usage/" rel="nofollow noreferrer">MetalLB documentation</a>:</p> <blockquote> <p>MetalLB respects the <code>spec.loadBalancerIP</code> parameter, so if you want your service to be set up with a specific address, you can request it by setting that parameter.</p> <p>If MetalLB <strong>does not own</strong> the requested address, or if the address is <strong>already in use</strong> by another service, assignment will fail and MetalLB will log a warning event visible in <code>kubectl describe service &lt;service name&gt;</code>.<a href="https://metallb.universe.tf/usage/#requesting-specific-ips" rel="nofollow noreferrer">[1]</a></p> </blockquote> <p>In case you need to have several services on a single IP, you can enable selective IP sharing. To do so you have to add the <code>metallb.universe.tf/allow-shared-ip</code> annotation to the services.</p> <blockquote> <p>The value of the annotation is a &quot;sharing key.&quot; Services can share an IP address under the following conditions:</p> <ul> <li>They both have the same sharing key.</li> <li>They request the use of different ports (e.g. tcp/80 for one and tcp/443 for the other).</li> <li>They both use the <code>Cluster</code> external traffic policy, or they both point to the <em>exact</em> same set of pods (i.e. the pod selectors are identical). <a href="https://metallb.universe.tf/usage/#ip-address-sharing" rel="nofollow noreferrer">[2]</a></li> </ul> </blockquote> <hr /> <p><strong>UPDATE</strong></p> <p>I tested your setup successfully with one minor difference - I needed to remove <code>externalTrafficPolicy: Local</code> from the Jenkins Service spec.</p> <p>Try this solution; if it still doesn't work, then it's a problem with your cluster environment.</p>
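<p>For illustration, a minimal sketch of what the sharing annotation could look like on your two Services. The sharing key is arbitrary, and the kube-verify port/selector details are assumptions rather than values taken from your setup:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: devops-tools
  annotations:
    metallb.universe.tf/allow-shared-ip: "shared-172-16-1-5"
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.1.5
  externalTrafficPolicy: Cluster
  selector:
    app: jenkins
  ports:
    - name: ui
      port: 8080
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: kube-verify
  annotations:
    metallb.universe.tf/allow-shared-ip: "shared-172-16-1-5"
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.1.5
  externalTrafficPolicy: Cluster
  ports:
    - port: 80   # assumed; use whatever port kube-verify actually serves on</code></pre>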
<p>I have two separate IngressControllers, one internal and one external. I would like to define which controller to use for each Ingress.</p> <p>I have defined <code>--ingress.class=hapxroxy-ext</code> arg for the external controller and <code>--empty-ingress-class</code> for the internal.</p> <p>Ingress Services</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: annotations: labels: run: ext-haproxy-ingress name: ext-haproxy-ingress namespace: ext-haproxy-controller spec: selector: run: ext-haproxy-ingress type: LoadBalancer --- apiVersion: v1 kind: Service metadata: annotations: &quot;service.beta.kubernetes.io/azure-load-balancer-internal&quot;: &quot;true&quot; labels: run: haproxy-ingress name: haproxy-ingress namespace: haproxy-controller spec: selector: run: haproxy-ingress type: LoadBalancer </code></pre> <p>I have IngressClasses.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: IngressClass metadata: name: external-lb spec: controller: haproxy.org/ingress-controller/hapxroxy-ext --- apiVersion: networking.k8s.io/v1 kind: IngressClass metadata: name: internal-lb annotations: &quot;ingressclass.kubernetes.io/is-default-class&quot;: &quot;true&quot; spec: controller: haproxy.org/ingress-controller </code></pre> <p>I have one Ingress</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress annotations: &quot;kubernetes.io/ingress.class&quot;: internal-lb spec: ingressClassName: internal-lb ... </code></pre> <p>Despite mapping the Ingress to just <code>internal-lb</code>, both <code>internal-lb</code> and <code>external-lb</code> handle requests.</p> <p>It seems pretty straightforward in the <a href="https://www.haproxy.com/documentation/kubernetes/latest/configuration/controller/#--ingressclass" rel="nofollow noreferrer">docs</a>, but I'm missing something.</p>
<p>This issue is due to a bug in <a href="https://github.com/haproxytech/kubernetes-ingress" rel="nofollow noreferrer">https://github.com/haproxytech/kubernetes-ingress</a> when using <code>ingressClassName</code> in ingress.yaml. If you remove <code>ingressClassName</code> from your ingress.yaml and just use the <code>&quot;kubernetes.io/ingress.class&quot;:</code> annotation, the issue goes away; it is more of a workaround than a fix.</p> <p>This issue has been raised and is still open; see the link below for updates.</p> <p><a href="https://github.com/haproxytech/kubernetes-ingress/issues/354#issuecomment-904551220" rel="nofollow noreferrer">https://github.com/haproxytech/kubernetes-ingress/issues/354#issuecomment-904551220</a></p>
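<p>In practice that means the Ingress from the question would look roughly like this: the annotation from the question is kept as-is and only <code>spec.ingressClassName</code> is dropped (the rule and backend below are placeholders):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    "kubernetes.io/ingress.class": internal-lb
spec:
  rules:
    - host: app.internal.example.com   # placeholder
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app           # placeholder
                port:
                  number: 80</code></pre>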
<p>I have a GitHub Action that builds a Docker image and pushes it to our repo.</p> <pre><code>docker build -t mySuperCoolTag --build-arg PIP_INDEX_URL=${{ secrets.PIP_INDEX_URL }} . docker push mySuperCoolTag </code></pre> <p>Per our deployment process, we take the SHA of the latest image and add it to our YAML files for K8s to read and use.</p> <p>Originally, I incorrectly thought that the local SHA of the image was the same one being pushed to the repo, so I grabbed it and added it to the file like so:</p> <pre><code> docker images --no-trunc --quiet mySuperCoolTag dockerSHA=$(docker images --no-trunc --quiet mySuperCoolTag) #replace the current SHA in the configuration with the latest SHA sed -i -E &quot;s/sha256:\w*/$dockerSHA/g&quot; config-file.yaml </code></pre> <p>This ended up not being the SHA I was looking for. 😅</p> <p><code>docker push</code> <em>does</em> output the expected SHA, but I'm not sure how to grab that SHA programmatically, short of having a script read the output and extract it from there. I'm hoping there is a more succinct way to do it. Any ideas?</p>
<p>Ended up using this command instead:</p> <pre><code>dockerSHA=$(docker inspect --format='{{index .RepoDigests 0}}' mySuperCoolTag | perl -wnE'say /sha256.*/g') </code></pre> <p>And it just works.</p>
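<p>If it helps, the digest can then be dropped straight into the <code>sed</code> replacement from the question (assuming config-file.yaml references the image by a <code>sha256:</code> digest, as in the original script):</p> <pre><code>dockerSHA=$(docker inspect --format='{{index .RepoDigests 0}}' mySuperCoolTag | perl -wnE'say /sha256.*/g')
# dockerSHA now holds something like sha256:4f53...  (the repo digest, i.e. what `docker push` printed)
sed -i -E "s/sha256:\w*/$dockerSHA/g" config-file.yaml</code></pre>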
<p>I am trying to create a cluster by using <a href="https://gridscale.io/en/community/tutorials/kubernetes-cluster-with-kubeadm/" rel="nofollow noreferrer">this article</a> in my WSL Ubuntu, but it returns some errors.</p> <p>Errors:</p> <pre><code>yusuf@DESKTOP-QK5VI8R:~/aws/kubs2$ sudo systemctl daemon-reload System has not been booted with systemd as init system (PID 1). Can't operate. Failed to connect to bus: Host is down yusuf@DESKTOP-QK5VI8R:~/aws/kubs2$ sudo systemctl restart kubelet System has not been booted with systemd as init system (PID 1). Can't operate. Failed to connect to bus: Host is down yusuf@DESKTOP-QK5VI8R:~/aws/kubs2$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 [init] Using Kubernetes version: v1.21.1 [preflight] Running pre-flight checks [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service' [WARNING IsDockerSystemdCheck]: detected &quot;cgroupfs&quot; as the Docker cgroup driver. The recommended driver is &quot;systemd&quot;. Please follow the guide at https://kubernetes.io/docs/setup/cri/ error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR Port-6443]: Port 6443 is in use [ERROR Service-Docker]: docker service is not active, please run 'systemctl start docker.service' [ERROR Swap]: running with swap on is not supported. Please disable swap [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` To see the stack trace of this error execute with --v=5 or higher </code></pre> <p>I don't understand why, when I use <code>sudo systemctl restart kubelet</code>, an error like this occurs:</p> <pre><code>docker service is not enabled, please run 'systemctl enable docker.service' </code></pre> <p>When I use:</p> <pre><code>yusuf@DESKTOP-QK5VI8R:~/aws/kubs2$ systemctl enable docker.service Failed to enable unit, unit docker.service does not exist. </code></pre> <p>But I still have Docker images running:</p> <p><a href="https://i.stack.imgur.com/Zs5eQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zs5eQ.png" alt="enter image description here" /></a></p> <p>What is wrong when creating a Kubernetes cluster in WSL? Is there any good tutorial for creating a cluster in WSL?</p>
<p>The tutorial you're following is designed for cloud virtual machines running a full Linux OS (this is important, since WSL works a bit differently). For example, systemd is not present in WSL; the behaviour you're facing is currently <a href="https://github.com/MicrosoftDocs/WSL/issues/457" rel="nofollow noreferrer">in the development phase</a>.</p> <p>What you need is to follow a tutorial designed for WSL (WSL2 in this case). Also make sure Docker is set up on the Windows machine and shares its features with the WSL integration. Please see the <a href="https://kubernetes.io/blog/2020/05/21/wsl-docker-kubernetes-on-the-windows-desktop/" rel="nofollow noreferrer">Kubernetes on Windows desktop tutorial</a> (it uses kind or minikube, which is enough for development and testing).</p> <p>There is also a <a href="https://kubernetes.io/blog/2020/05/21/wsl-docker-kubernetes-on-the-windows-desktop/#minikube-enabling-systemd" rel="nofollow noreferrer">part about enabling systemd</a> which can potentially resolve your issue from the state you are in (I didn't test this as I don't have a Windows machine).</p>
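<p>As a rough sketch of the end state (assuming Docker Desktop with WSL2 integration is already enabled), a local development cluster can then be created from the WSL shell with either tool:</p> <pre><code># kind (Kubernetes in Docker)
kind create cluster --name dev

# or minikube with the docker driver
minikube start --driver=docker</code></pre>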
<p>In short: I want to remove a docker image, but if I do so it tells me that it cannot be removed because the image is being used by a running container. But as far as I can tell there is no container running at all.</p> <p>In detail: I call <code>docker images -a</code> to see call images. This way I determine the Image ID which I want to delete and call <code>docker image rm {ID}</code> where {ID} is the String which should be deleted (it worked for other images so I am pretty confident so far). I get the response:</p> <p><em>Error response from daemon: conflict: unable to delete {ID} (cannot be forced) - image is being used by running container 08815cd48523</em> (The ID at the end seems to change with every call)</p> <p>This error appears to be pretty easy to understand, but if I call <code>docker ps -a</code>, it shows me that I do not have a single container running and therefore no container running with the specified ID.</p> <p>This problem occurs with some images. But all seem to be related to Kubernetes. Does anyone know what the problem is?</p> <p>As asked for in the comments hear docker inspect on one of the invisible containers (I replaced all part where I was not sure if it contains sensetive data with &quot;removed_for_post&quot;):</p> <pre><code>[ { &quot;Id&quot;: &quot;Removed_for_post&quot;, &quot;Created&quot;: &quot;2021-10-05T07:04:33.2059908Z&quot;, &quot;Path&quot;: &quot;/pause&quot;, &quot;Args&quot;: [], &quot;State&quot;: { &quot;Status&quot;: &quot;running&quot;, &quot;Running&quot;: true, &quot;Paused&quot;: false, &quot;Restarting&quot;: false, &quot;OOMKilled&quot;: false, &quot;Dead&quot;: false, &quot;Pid&quot;: 3570, &quot;ExitCode&quot;: 0, &quot;Error&quot;: &quot;&quot;, &quot;StartedAt&quot;: &quot;2021-10-05T07:04:33.4266642Z&quot;, &quot;FinishedAt&quot;: &quot;0001-01-01T00:00:00Z&quot; }, &quot;Image&quot;: &quot;sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c&quot;, &quot;ResolvConfPath&quot;: &quot;/var/lib/docker/containers/ Removed_for_post /resolv.conf&quot;, &quot;HostnamePath&quot;: &quot;/var/lib/docker/containers/ Removed_for_post /hostname&quot;, &quot;HostsPath&quot;: &quot;/var/lib/docker/containers/ Removed_for_post /hosts&quot;, &quot;LogPath&quot;: &quot;/var/lib/docker/containers/ Removed_for_post / Removed_for_post -json.log&quot;, &quot;Name&quot;: &quot;/k8s_POD_etcd-docker-desktop_kube-system_ Removed_for_post &quot;, &quot;RestartCount&quot;: 0, &quot;Driver&quot;: &quot;overlay2&quot;, &quot;Platform&quot;: &quot;linux&quot;, &quot;MountLabel&quot;: &quot;&quot;, &quot;ProcessLabel&quot;: &quot;&quot;, &quot;AppArmorProfile&quot;: &quot;&quot;, &quot;ExecIDs&quot;: null, &quot;HostConfig&quot;: { &quot;Binds&quot;: null, &quot;ContainerIDFile&quot;: &quot;&quot;, &quot;LogConfig&quot;: { &quot;Type&quot;: &quot;json-file&quot;, &quot;Config&quot;: {} }, &quot;NetworkMode&quot;: &quot;host&quot;, &quot;PortBindings&quot;: {}, &quot;RestartPolicy&quot;: { &quot;Name&quot;: &quot;&quot;, &quot;MaximumRetryCount&quot;: 0 }, &quot;AutoRemove&quot;: false, &quot;VolumeDriver&quot;: &quot;&quot;, &quot;VolumesFrom&quot;: null, &quot;CapAdd&quot;: null, &quot;CapDrop&quot;: null, &quot;CgroupnsMode&quot;: &quot;host&quot;, &quot;Dns&quot;: null, &quot;DnsOptions&quot;: null, &quot;DnsSearch&quot;: null, &quot;ExtraHosts&quot;: null, &quot;GroupAdd&quot;: null, &quot;IpcMode&quot;: &quot;shareable&quot;, &quot;Cgroup&quot;: &quot;&quot;, &quot;Links&quot;: null, &quot;OomScoreAdj&quot;: -998, 
&quot;PidMode&quot;: &quot;&quot;, &quot;Privileged&quot;: false, &quot;PublishAllPorts&quot;: false, &quot;ReadonlyRootfs&quot;: false, &quot;SecurityOpt&quot;: [ &quot;no-new-privileges&quot; ], &quot;UTSMode&quot;: &quot;&quot;, &quot;UsernsMode&quot;: &quot;&quot;, &quot;ShmSize&quot;: 67108864, &quot;Runtime&quot;: &quot;runc&quot;, &quot;ConsoleSize&quot;: [ 0, 0 ], &quot;Isolation&quot;: &quot;&quot;, &quot;CpuShares&quot;: 2, &quot;Memory&quot;: 0, &quot;NanoCpus&quot;: 0, &quot;CgroupParent&quot;: &quot;/kubepods/kubepods/besteffort/removed_for_Post&quot;, &quot;BlkioWeight&quot;: 0, &quot;BlkioWeightDevice&quot;: null, &quot;BlkioDeviceReadBps&quot;: null, &quot;BlkioDeviceWriteBps&quot;: null, &quot;BlkioDeviceReadIOps&quot;: null, &quot;BlkioDeviceWriteIOps&quot;: null, &quot;CpuPeriod&quot;: 0, &quot;CpuQuota&quot;: 0, &quot;CpuRealtimePeriod&quot;: 0, &quot;CpuRealtimeRuntime&quot;: 0, &quot;CpusetCpus&quot;: &quot;&quot;, &quot;CpusetMems&quot;: &quot;&quot;, &quot;Devices&quot;: null, &quot;DeviceCgroupRules&quot;: null, &quot;DeviceRequests&quot;: null, &quot;KernelMemory&quot;: 0, &quot;KernelMemoryTCP&quot;: 0, &quot;MemoryReservation&quot;: 0, &quot;MemorySwap&quot;: 0, &quot;MemorySwappiness&quot;: null, &quot;OomKillDisable&quot;: false, &quot;PidsLimit&quot;: null, &quot;Ulimits&quot;: null, &quot;CpuCount&quot;: 0, &quot;CpuPercent&quot;: 0, &quot;IOMaximumIOps&quot;: 0, &quot;IOMaximumBandwidth&quot;: 0, &quot;MaskedPaths&quot;: [ &quot;/proc/asound&quot;, &quot;/proc/acpi&quot;, &quot;/proc/kcore&quot;, &quot;/proc/keys&quot;, &quot;/proc/latency_stats&quot;, &quot;/proc/timer_list&quot;, &quot;/proc/timer_stats&quot;, &quot;/proc/sched_debug&quot;, &quot;/proc/scsi&quot;, &quot;/sys/firmware&quot; ], &quot;ReadonlyPaths&quot;: [ &quot;/proc/bus&quot;, &quot;/proc/fs&quot;, &quot;/proc/irq&quot;, &quot;/proc/sys&quot;, &quot;/proc/sysrq-trigger&quot; ] }, &quot;GraphDriver&quot;: { &quot;Data&quot;: { &quot;LowerDir&quot;: &quot;/var/lib/docker/overlay2/ Removed_for_post -init/diff:/var/lib/docker/overlay2/ Removed_for_post /diff&quot;, &quot;MergedDir&quot;: &quot;/var/lib/docker/overlay2/Removed_for_post /merged&quot;, &quot;UpperDir&quot;: &quot;/var/lib/docker/overlay2/Removed_for_post /diff&quot;, &quot;WorkDir&quot;: &quot;/var/lib/docker/overlay2/Removed_for_post /work&quot; }, &quot;Name&quot;: &quot;overlay2&quot; }, &quot;Mounts&quot;: [], &quot;Config&quot;: { &quot;Hostname&quot;: &quot;docker-desktop&quot;, &quot;Domainname&quot;: &quot;&quot;, &quot;User&quot;: &quot;&quot;, &quot;AttachStdin&quot;: false, &quot;AttachStdout&quot;: false, &quot;AttachStderr&quot;: false, &quot;Tty&quot;: false, &quot;OpenStdin&quot;: false, &quot;StdinOnce&quot;: false, &quot;Env&quot;: [ &quot;PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin&quot; ], &quot;Cmd&quot;: null, &quot;Image&quot;: &quot;k8s.gcr.io/pause:3.2&quot;, &quot;Volumes&quot;: null, &quot;WorkingDir&quot;: &quot;/&quot;, &quot;Entrypoint&quot;: [ &quot;/pause&quot; ], &quot;OnBuild&quot;: null, &quot;Labels&quot;: { &quot;annotation.kubeadm.kubernetes.io/etcd.advertise-client-urls&quot;: &quot;https://192.168.65.4:2379&quot;, &quot;annotation.kubernetes.io/config.hash&quot;: &quot; Removed_for_post &quot;, &quot;annotation.kubernetes.io/config.seen&quot;: &quot;2021-10-05T07:04:32.243805800Z&quot;, &quot;annotation.kubernetes.io/config.source&quot;: &quot;file&quot;, &quot;component&quot;: &quot;etcd&quot;, &quot;io.kubernetes.container.name&quot;: &quot;POD&quot;, 
&quot;io.kubernetes.docker.type&quot;: &quot;podsandbox&quot;, &quot;io.kubernetes.pod.name&quot;: &quot;etcd-docker-desktop&quot;, &quot;io.kubernetes.pod.namespace&quot;: &quot;kube-system&quot;, &quot;io.kubernetes.pod.uid&quot;: &quot;removed for Post&quot;, &quot;tier&quot;: &quot;control-plane&quot; } }, &quot;NetworkSettings&quot;: { &quot;Bridge&quot;: &quot;&quot;, &quot;SandboxID&quot;: &quot; Removed_for_post&quot;, &quot;HairpinMode&quot;: false, &quot;LinkLocalIPv6Address&quot;: &quot;&quot;, &quot;LinkLocalIPv6PrefixLen&quot;: 0, &quot;Ports&quot;: {}, &quot;SandboxKey&quot;: &quot;/var/run/docker/netns/default&quot;, &quot;SecondaryIPAddresses&quot;: null, &quot;SecondaryIPv6Addresses&quot;: null, &quot;EndpointID&quot;: &quot;&quot;, &quot;Gateway&quot;: &quot;&quot;, &quot;GlobalIPv6Address&quot;: &quot;&quot;, &quot;GlobalIPv6PrefixLen&quot;: 0, &quot;IPAddress&quot;: &quot;&quot;, &quot;IPPrefixLen&quot;: 0, &quot;IPv6Gateway&quot;: &quot;&quot;, &quot;MacAddress&quot;: &quot;&quot;, &quot;Networks&quot;: { &quot;host&quot;: { &quot;IPAMConfig&quot;: null, &quot;Links&quot;: null, &quot;Aliases&quot;: null, &quot;NetworkID&quot;: &quot; Removed_for_post &quot;, &quot;EndpointID&quot;: &quot; Removed_for_post &quot;, &quot;Gateway&quot;: &quot;&quot;, &quot;IPAddress&quot;: &quot;&quot;, &quot;IPPrefixLen&quot;: 0, &quot;IPv6Gateway&quot;: &quot;&quot;, &quot;GlobalIPv6Address&quot;: &quot;&quot;, &quot;GlobalIPv6PrefixLen&quot;: 0, &quot;MacAddress&quot;: &quot;&quot;, &quot;DriverOpts&quot;: null } } } } ]`` </code></pre>
<p>Your container seems to be created by another tool like docker-compose, as I guess it is local on your computer.</p> <p>Which means you cannot easily stop it from being re-created with a <code>docker rm</code> statement.</p> <h2>EDIT 1 - Kubernetes</h2> <ul> <li>You seem to be using Docker Desktop.</li> <li>Your container seems to be <code>etcd</code>, which is part of the Kubernetes master system (very important):</li> </ul> <blockquote> <p>k8s_POD_etcd-docker-desktop_kube-system_ Removed_for_post</p> </blockquote> <p>So, as it is part of the Kubernetes components, you should remove it only if you want to stop using Kubernetes, by removing Kubernetes or the etcd pod.</p> <p>If it is Docker Desktop, that should not be a problem.</p> <p>To summarize: <code>your container is created by a pod, which is a resource handled by kubernetes</code>.</p> <h2>docker-compose</h2> <ul> <li>Find your docker-compose directory (usually container names start with the project name) and go into that directory.</li> <li>Execute <code>docker-compose down</code>.</li> </ul> <h2>docker</h2> <ul> <li>Try to restart the Docker daemon, for instance in a Linux-based environment: <code>sudo systemctl restart docker</code></li> <li>A reboot of your machine should work too.</li> </ul> <p>Finally you should be able to remove the containers as proposed in your question.</p>
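<p>A side note on why <code>docker ps -a</code> shows nothing: as far as I know, Docker Desktop hides its Kubernetes system containers from the CLI unless the &quot;Show system containers (advanced)&quot; option is ticked under Settings &gt; Kubernetes (the exact wording may differ between versions). With that option enabled, the pause/etcd containers should become visible, e.g.:</p> <pre><code>docker ps --filter "name=k8s_" --format "table {{.Names}}\t{{.Image}}"</code></pre>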
<p>We are trying to create a namespace that is tied to a specific node pool. How can we achieve that on Azure Kubernetes Service (AKS)?</p> <pre><code>error: Unable to create namespace with specific node pool. Ex: namespace for user nodepool1 </code></pre>
<p>Posting this as a community wiki, feel free to edit and expand it.</p> <p>As <a href="https://stackoverflow.com/users/5951680/luca-ghersi">Luca Ghersi</a> mentioned in the comments, it's possible to have namespaces assigned to specific nodes. For this there is an admission controller called <code>PodNodeSelector</code> (you can read about it in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#podnodeselector" rel="nofollow noreferrer">official Kubernetes documentation</a>).</p> <p>In short:</p> <blockquote> <p>This admission controller defaults and limits what node selectors may be used within a namespace by reading a namespace annotation and a global configuration.</p> </blockquote> <p>Based on the <a href="https://learn.microsoft.com/en-us/azure/aks/faq#what-kubernetes-admission-controllers-does-aks-support-can-admission-controllers-be-added-or-removed" rel="nofollow noreferrer">Azure FAQ</a>, AKS has this admission controller enabled by default.</p> <pre><code>AKS supports the following admission controllers: - NamespaceLifecycle - LimitRanger - ServiceAccount - DefaultStorageClass - DefaultTolerationSeconds - MutatingAdmissionWebhook - ValidatingAdmissionWebhook - ResourceQuota - PodNodeSelector - PodTolerationRestriction - ExtendedResourceToleration Currently, you can't modify the list of admission controllers in AKS. </code></pre>
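<p>As a minimal sketch of how this is usually wired up: the <code>PodNodeSelector</code> admission controller reads the <code>scheduler.alpha.kubernetes.io/node-selector</code> annotation on the namespace, so pods created in it get pinned to the matching nodes. The <code>agentpool=nodepool1</code> label below assumes the default label AKS puts on its node-pool nodes; check what your nodes actually carry with <code>kubectl get nodes --show-labels</code> first:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
  name: user-nodepool1
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: "agentpool=nodepool1"</code></pre>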
<p>Is there an immutable equivalent for labels in Kubernetes that we can attach to nodes? I want to use labels to identify and segregate nodes but I want to ensure those labels dont get modified.</p> <p>I was looking at annotations but are they immutable?</p>
<p>There is no such thing as an immutable label (or an immutable equivalent of labels) in Kubernetes. However, labels attached to Kubernetes nodes can only be updated by users who are allowed to modify Node objects, which is typically restricted to cluster admins.</p>
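<p>One way to make that stick in practice is to give everyone except admins read-only access to Node objects, so node labels can't be changed by ordinary users; a rough RBAC sketch (the role name and the binding you attach it to are up to you):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]   # no update/patch, so labels stay untouched</code></pre>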
<p>I am trying to connect a folder in windows to a container folder. This is for a .NET app that needs to read files in a folder. In a normal docker container, with docker-compose, the app works without problems, but since this is only one of several different apps that we will have to monitor, we are trying to get kubernetes involved. That is also where we are failing. As a beginner with kubernetes, I used kompose.exe to convert the compose files to kuberetes style. However, no matter if I use hostPath or persistentVolumeClaim as a flag, I do not get things to work &quot;out of the box&quot;. With hostPath, the path is very incorrect, and with persistentVolumeClaim I get a warning saying volume mount on the host is not supported. I, therefore, tried to do that part myself but can get it to work with neither persistent volume nor entering mount data in the deployment file directly. The closest I have come is that I can enter the folder, and I can change to subfolders within, but as soon as I try to run any other command, be it 'ls' or 'cat', I get &quot;Operation not permitted&quot;. Here is my docker compose file, which works as expected by</p> <pre><code>version: &quot;3.8&quot; services: test-create-hw-file: container_name: &quot;testcreatehwfile&quot; image: test-create-hw-file:trygg network_mode: &quot;host&quot; volumes: - /c/temp/testfiles:/app/files </code></pre> <p>Running konvert compose on that file:</p> <pre><code>PS C:\temp&gt; .\kompose.exe convert -f .\docker-compose-tchwf.yml --volumes hostPath -v DEBU Checking validation of provider: kubernetes DEBU Checking validation of controller: DEBU Docker Compose version: 3.8 WARN Service &quot;test-create-hw-file&quot; won't be created because 'ports' is not specified DEBU Compose file dir: C:\temp DEBU Target Dir: . INFO Kubernetes file &quot;test-create-hw-file-deployment.yaml&quot; created </code></pre> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: annotations: kompose.cmd: C:\temp\kompose.exe convert -f .\docker-compose-tchwf.yml --volumes hostPath -v kompose.version: 1.26.1 (a9d05d509) creationTimestamp: null labels: io.kompose.service: test-create-hw-file name: test-create-hw-file spec: replicas: 1 selector: matchLabels: io.kompose.service: test-create-hw-file strategy: type: Recreate template: metadata: annotations: kompose.cmd: C:\temp\kompose.exe convert -f .\docker-compose-tchwf.yml --volumes hostPath -v kompose.version: 1.26.1 (a9d05d509) creationTimestamp: null labels: io.kompose.service: test-create-hw-file spec: containers: - image: test-create-hw-file:trygg name: testcreatehwfile resources: {} volumeMounts: - mountPath: /app/files name: test-create-hw-file-hostpath0 restartPolicy: Always volumes: - hostPath: path: C:\temp\c\temp\testfiles name: test-create-hw-file-hostpath0 status: {} </code></pre> <p>Running kubectl apply on that file just gives the infamous error &quot;Error: Error response from daemon: invalid mode: /app/files&quot;, which means, as far as I can understand, not that the &quot;/app/files&quot; is wrong, but the format of the supposedly connected folder is incorrect. This is the quite weird <code>C:\temp\c\temp\testfiles</code> row. After googling and reading a lot, I have two ways of changing that, to either <code>/c/temp/testfiles</code> or <code>/host_mnt/c/temp/testfiles</code>. Both end up in the same &quot;Operation not permitted&quot;. 
I am checking this via going into the CLI on the container in the docker desktop.</p> <p>The image from the test is just an app that does nothing right now other than wait for five minutes to not quit before I can check the folder. I am logged on to the shell as root, and I have this row for the folder when doing 'ls -lA':</p> <pre><code>drwxrwxrwx 1 root root 0 Feb 7 12:04 files </code></pre> <p>Also, the <code>docker-user</code> has full access to the <code>c:\temp\testfiles</code> folder.</p> <p><strong>Some version data:</strong></p> <pre><code>Docker version 20.10.12, build e91ed57 Kubectl version Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;22&quot;, GitVersion:&quot;v1.22.5&quot;, GitCommit:&quot;5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-12-16T08:38:33Z&quot;, GoVersion:&quot;go1.16.12&quot;, Compiler:&quot;gc&quot;, Platform:&quot;windows/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;22&quot;, GitVersion:&quot;v1.22.5&quot;, GitCommit:&quot;5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-12-16T08:32:32Z&quot;, GoVersion:&quot;go1.16.12&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Kompose version 1.26.1 (a9d05d509) Host OS: Windows 10, 21H2 </code></pre> <p>//Trygg</p>
<p>Glad that my initial comment solved your issue. I would like to expand my thoughts a little in the form of an official answer.</p> <p>To mount volumes using Kubernetes on Docker Desktop for Windows, the path should be:</p> <pre class="lang-yaml prettyprint-override"><code>/run/desktop/mnt/host/c/PATH/TO/FILE </code></pre> <p>Unfortunately there is no official documentation, but <a href="https://github.com/docker/for-win/issues/5325#issuecomment-567594291" rel="noreferrer">here</a> is a good comment explaining that this is related to the Docker daemon:</p> <blockquote> <p>/mnt/wsl is actually the mount point for the cross-distro mounts tmpfs<br /> Docker Daemon mounts it in its /run/desktop/mnt/host/wsl directory</p> </blockquote>
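<p>For completeness, a minimal sketch of how that path could be plugged into the generated deployment's volume definition (the volume name is taken from the question's kompose output; the <code>type</code> field is an optional extra):</p> <pre><code>volumes:
  - name: test-create-hw-file-hostpath0
    hostPath:
      path: /run/desktop/mnt/host/c/temp/testfiles
      type: Directory
</code></pre>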
<p>I have used this document for creating kafka <a href="https://kow3ns.github.io/kubernetes-kafka/manifests/" rel="nofollow noreferrer">https://kow3ns.github.io/kubernetes-kafka/manifests/</a></p> <p>able to create zookeeper, facing issue with the creation of kafka.getting error to connect with the zookeeper.</p> <p>this is the manifest i have used for creating for kafka:</p> <p><a href="https://kow3ns.github.io/kubernetes-kafka/manifests/kafka.yaml" rel="nofollow noreferrer">https://kow3ns.github.io/kubernetes-kafka/manifests/kafka.yaml</a> for Zookeeper</p> <p><a href="https://github.com/kow3ns/kubernetes-zookeeper/blob/master/manifests/zookeeper.yaml" rel="nofollow noreferrer">https://github.com/kow3ns/kubernetes-zookeeper/blob/master/manifests/zookeeper.yaml</a></p> <p><strong>The logs of the kafka</strong></p> <pre><code> kubectl logs -f pod/kafka-0 -n kaf [2021-10-19 05:37:14,535] INFO KafkaConfig values: advertised.host.name = null advertised.listeners = null advertised.port = null authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.id = 0 broker.id.generation.enable = true broker.rack = null compression.type = producer connections.max.idle.ms = 600000 controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delete.topic.enable = false fetch.purgatory.purge.interval.requests = 1000 group.max.session.timeout.ms = 300000 group.min.session.timeout.ms = 6000 host.name = inter.broker.listener.name = null inter.broker.protocol.version = 0.10.2-IV0 leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT listeners = PLAINTEXT://:9093 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /var/lib/kafka log.dirs = /tmp/kafka-logs log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.format.version = 0.10.2-IV0 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = message.max.bytes = 1000012 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 num.io.threads = 8 num.network.threads = 3 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = 
-1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 1440 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 3 offsets.topic.segment.bytes = 104857600 port = 9092 principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder producer.purgatory.purge.interval.requests = 1000 queued.max.requests = 500 quota.consumer.default = 9223372036854775807 quota.producer.default = 9223372036854775807 quota.window.num = 11 quota.window.size.seconds = 1 replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 10000 replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.enabled.mechanisms = [GSSAPI] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.mechanism.inter.broker.protocol = GSSAPI security.inter.broker.protocol = PLAINTEXT socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = null ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS unclean.leader.election.enable = true zookeeper.connect = zk-cs.default.svc.cluster.local:2181 zookeeper.connection.timeout.ms = 6000 zookeeper.session.timeout.ms = 6000 zookeeper.set.acl = false zookeeper.sync.time.ms = 2000 (kafka.server.KafkaConfig) [2021-10-19 05:37:14,569] INFO starting (kafka.server.KafkaServer) [2021-10-19 05:37:14,570] INFO Connecting to zookeeper on zk-cs.default.svc.cluster.local:2181 (kafka.server.KafkaServer) [2021-10-19 05:37:14,579] INFO Starting ZkClient event thread. 
(org.I0Itec.zkclient.ZkEventThread) [2021-10-19 05:37:14,583] INFO Client environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT (org.apache.zookeeper.ZooKeeper) [2021-10-19 05:37:14,583] INFO Client environment:host.name=kafka-0.kafka-hs.kaf.svc.cluster.local (org.apache.zookeeper.ZooKeeper) [2021-10-19 05:37:14,583] INFO Client environment:java.version=1.8.0_131 (org.apache.zookeeper.ZooKeeper) [2021-10-19 05:37:14,583] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper) [2021-10-19 05:37:14,583] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper) [2021-10-19 05:37:14,583] INFO Client environment:java.class.path=:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b05.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/connect-api-0.10.2.1.jar:/opt/kafka/bin/../libs/connect-file-0.10.2.1.jar:/opt/kafka/bin/../libs/connect-json-0.10.2.1.jar:/opt/kafka/bin/../libs/connect-runtime-0.10.2.1.jar:/opt/kafka/bin/../libs/connect-transforms-0.10.2.1.jar:/opt/kafka/bin/../libs/guava-18.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b05.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b05.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b05.jar:/opt/kafka/bin/../libs/jackson-annotations-2.8.0.jar:/opt/kafka/bin/../libs/jackson-annotations-2.8.5.jar:/opt/kafka/bin/../libs/jackson-core-2.8.5.jar:/opt/kafka/bin/../libs/jackson-databind-2.8.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.8.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.8.5.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.8.5.jar:/opt/kafka/bin/../libs/javassist-3.20.0-GA.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b05.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/opt/kafka/bin/../libs/jersey-client-2.24.jar:/opt/kafka/bin/../libs/jersey-common-2.24.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.24.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.24.jar:/opt/kafka/bin/../libs/jersey-guava-2.24.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.24.jar:/opt/kafka/bin/../libs/jersey-server-2.24.jar:/opt/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.3.jar:/opt/kafka/bin/../libs/kafka-clients-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka-streams-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka-streams-examples-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka-tools-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.2.1-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.2.1-test-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.2.1.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-1.3.0.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/reflections-0.9.10.jar:/opt/kafka/bin/../libs/rocksdbjni-5.0.1.jar:/opt/kafka/bin/../libs/scala-library-2.11.8.jar:/opt/kafka/bin/../
libs/scala-parser-combinators_2.11-1.0.4.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.21.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.21.jar:/opt/kafka/bin/../libs/snappy-java-1.1.2.6.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.10.jar:/opt/kafka/bin/../libs/zookeeper-3.4.9.jar (org.apache.zookeeper.ZooKeeper) [2021-10-19 05:37:14,583] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) [2021-10-19 05:37:14,583] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) [2021-10-19 05:37:14,583] INFO Client environment:java.compiler=&lt;NA&gt; (org.apache.zookeeper.ZooKeeper) [2021-10-19 05:37:14,583] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) [2021-10-19 05:37:14,583] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) [2021-10-19 05:37:14,583] INFO Client environment:os.version=5.4.141-67.229.amzn2.x86_64 (org.apache.zookeeper.ZooKeeper) [2021-10-19 05:37:14,583] INFO Client environment:user.name=kafka (org.apache.zookeeper.ZooKeeper) [2021-10-19 05:37:14,583] INFO Client environment:user.home=/home/kafka (org.apache.zookeeper.ZooKeeper) [2021-10-19 05:37:14,583] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper) [2021-10-19 05:37:14,584] INFO Initiating client connection, connectString=zk-cs.default.svc.cluster.local:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@5e0826e7 (org.apache.zookeeper.ZooKeeper) [2021-10-19 05:37:14,591] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread) [2021-10-19 05:37:14,592] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer) org.I0Itec.zkclient.exception.ZkException: Unable to connect to zk-cs.default.svc.cluster.local:2181 at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:72) at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1228) at org.I0Itec.zkclient.ZkClient.&lt;init&gt;(ZkClient.java:157) at org.I0Itec.zkclient.ZkClient.&lt;init&gt;(ZkClient.java:131) at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:106) at kafka.utils.ZkUtils$.apply(ZkUtils.scala:88) at kafka.server.KafkaServer.initZk(KafkaServer.scala:326) at kafka.server.KafkaServer.startup(KafkaServer.scala:187) at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39) at kafka.Kafka$.main(Kafka.scala:67) at kafka.Kafka.main(Kafka.scala) Caused by: java.net.UnknownHostException: zk-cs.default.svc.cluster.local: Name or service not known at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) at java.net.InetAddress.getAllByName0(InetAddress.java:1276) at java.net.InetAddress.getAllByName(InetAddress.java:1192) at java.net.InetAddress.getAllByName(InetAddress.java:1126) at org.apache.zookeeper.client.StaticHostProvider.&lt;init&gt;(StaticHostProvider.java:61) at org.apache.zookeeper.ZooKeeper.&lt;init&gt;(ZooKeeper.java:445) at org.apache.zookeeper.ZooKeeper.&lt;init&gt;(ZooKeeper.java:380) at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:70) ... 
10 more [2021-10-19 05:37:14,594] INFO shutting down (kafka.server.KafkaServer) [2021-10-19 05:37:14,597] INFO shut down completed (kafka.server.KafkaServer) [2021-10-19 05:37:14,597] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable) org.I0Itec.zkclient.exception.ZkException: Unable to connect to zk-cs.default.svc.cluster.local:2181 at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:72) at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1228) at org.I0Itec.zkclient.ZkClient.&lt;init&gt;(ZkClient.java:157) at org.I0Itec.zkclient.ZkClient.&lt;init&gt;(ZkClient.java:131) at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:106) at kafka.utils.ZkUtils$.apply(ZkUtils.scala:88) at kafka.server.KafkaServer.initZk(KafkaServer.scala:326) at kafka.server.KafkaServer.startup(KafkaServer.scala:187) at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39) at kafka.Kafka$.main(Kafka.scala:67) at kafka.Kafka.main(Kafka.scala) Caused by: java.net.UnknownHostException: zk-cs.default.svc.cluster.local: Name or service not known at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) at java.net.InetAddress.getAllByName0(InetAddress.java:1276) at java.net.InetAddress.getAllByName(InetAddress.java:1192) at java.net.InetAddress.getAllByName(InetAddress.java:1126) at org.apache.zookeeper.client.StaticHostProvider.&lt;init&gt;(StaticHostProvider.java:61) at org.apache.zookeeper.ZooKeeper.&lt;init&gt;(ZooKeeper.java:445) at org.apache.zookeeper.ZooKeeper.&lt;init&gt;(ZooKeeper.java:380) at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:70) ... 10 more </code></pre> <p><a href="https://i.stack.imgur.com/P1FuF.png" rel="nofollow noreferrer">Crash-Loop-Kafka</a> <a href="https://i.stack.imgur.com/Ang7w.png" rel="nofollow noreferrer">kafka deployed manifest</a></p>
<p>Your Kafka and Zookeeper deployments are running in the <code>kaf</code> namespace according to your screenshots; presumably you have set this up manually and applied the configurations while in that namespace? Neither the Kafka nor the Zookeeper YAML file explicitly states a namespace in metadata, so they will be deployed to the active namespace when created.</p> <p>Anyway, the Kafka deployment YAML you have is hardcoded to assume Zookeeper is set up in the <code>default</code> namespace, with the following line:</p> <pre><code> --override zookeeper.connect=zk-cs.default.svc.cluster.local:2181 \ </code></pre> <p>Change this to:</p> <pre><code> --override zookeeper.connect=zk-cs.kaf.svc.cluster.local:2181 \ </code></pre> <p>and it should connect, whether you do that by downloading and locally editing the YAML file or otherwise.</p> <p>Alternatively, deploy Zookeeper into the <code>default</code> namespace.</p> <p>I also recommend looking at other options like the <a href="https://github.com/bitnami/charts/tree/master/bitnami/kafka/#installing-the-chart" rel="nofollow noreferrer">Bitnami Kafka Helm charts</a>, which deploy Zookeeper as needed with Kafka, manage most of the connection details and allow for easier customisation. They are also kept far more up to date.</p>
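<p>As a quick sanity check of which service names actually resolve inside the cluster, you can run a throwaway pod (a sketch; <code>busybox:1.28</code> is just an assumed image with a working <code>nslookup</code>):</p> <pre><code>kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -n kaf -- \
  nslookup zk-cs.kaf.svc.cluster.local
</code></pre>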
<p>I am new to DevOps and I am deploying an application on OpenShift where users can upload PDF / JPG files...</p> <p>However, I am not sure if provisioning a persistent volume is enough, and how it's possible to display all these files later (graphical interface). I need some solution similar to an S3 bucket in AWS.</p>
<p>MinIO provides a consistent, performant and scalable object store because it is Kubernetes-native by design and S3 compatible from inception.</p> <p><a href="https://min.io/product/private-cloud-red-hat-openshift" rel="nofollow noreferrer">https://min.io/product/private-cloud-red-hat-openshift</a></p> <p>We have installed and tested MinIO on OpenShift 3.11 successfully.</p>
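<p>As a rough sketch of how a deployment could look with the official MinIO Helm chart (the chart repository URL and value names are assumptions and may differ between chart versions):</p> <pre><code># assumes the official MinIO chart repository; check the chart's values for your version
helm repo add minio https://charts.min.io/
helm install my-minio minio/minio \
  --namespace minio --create-namespace \
  --set rootUser=admin,rootPassword=change-me-please \
  --set persistence.size=50Gi
</code></pre> <p>This gives you an S3-compatible endpoint inside the cluster, and the MinIO console provides the graphical file browsing you are after.</p>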
<h1>Suspending &amp; resuming my virtual machine breaks the k8s deployment</h1> <p>When I suspend with <code>minikube stop</code> and then resume the Virtual Machine with <code>minikube start</code>, Minikube re-deploys my app from scratch.</p> <p>I see this behaviour with newer versions of Minikube, higher than <em>v1.18</em> (I run on <em>v1.19</em>).</p> <hr /> <h1>The setup:</h1> <ul> <li>The <em>Kubernetes</em> deployment mounts a volume with the source code from my host machine, via <code>hostPath</code>.</li> <li>Also, I have an <code>initContainers</code> container that sets up the application.</li> </ul> <p>Since the new <em>&quot;redeploy behaviour on resume&quot;</em> happens, the init-container <strong>breaks my deploy <em>if</em> I have work-in-progress code on my host machine</strong>.</p> <h1>The issue:</h1> <p>Now, if I have temporary/non-perfectly-running code, I cannot suspend the machine with unfinished work anymore between working days, because every time I resume it <strong>Minikube will try to deploy again but with broken code</strong> and fail with an <code>Init:CrashLoopBackOff</code>.</p> <h1>The workaround:</h1> <p>For now, each time I resume the machine I need to</p> <ol> <li>stash/commit my WIP code</li> <li>checkout the last commit with a working deployment</li> <li>run the deployment &amp; wait for it to complete the initialization (minutes...)</li> <li>checkout/stash-pop the code saved at point <em>1)</em>.</li> </ol> <p>I can survive, but the workflow is terrible.</p> <h1>How do I restore the old behaviour?</h1> <ul> <li><em>How do I make my deploys stay untouched, as expected when suspending the VM, instead of being re-deployed every time I resume?</em></li> </ul>
<p>In short, there are two ways to achieve what you want:</p> <ul> <li>On current versions of <code>minikube</code> and <code>virtualbox</code> you can use the <code>save state</code> option in VirtualBox directly.</li> <li>Move the initContainer's code to a separate <code>job</code>.</li> </ul> <p><strong>More details about minikube + VirtualBox</strong></p> <p>I have an environment with minikube version 1.20, VirtualBox 6.1.22 (from yesterday) and macOS. Also, the minikube driver is set to <code>virtualbox</code>.</p> <p>First with <code>minikube</code> + <code>VirtualBox</code>. Different scenarios:</p> <p><code>minikube stop</code> does the following:</p> <blockquote> <p>Stops a local Kubernetes cluster. This command stops the underlying VM or container, but keeps user data intact.</p> </blockquote> <p>What happens is that the virtual machine where minikube is set up stops entirely. <code>minikube start</code> starts the VM and all processes in it. All containers are started as well, so if your pod has an init-container, it will run first anyway.</p> <p><code>minikube pause</code> pauses all processes and frees up CPU resources while memory is still allocated. <code>minikube unpause</code> brings back CPU resources and continues executing containers from the state they were paused in.</p> <p>Based on the different scenarios I tried with <code>minikube</code>, it's not achievable using only minikube commands. To avoid any state loss in your <code>minikube</code> environment due to a host restart or the need to stop the VM to get more resources, you can use the <code>save state</code> feature in VirtualBox, either in the UI or the CLI. Below is what it does:</p> <blockquote> <p><strong>VBoxManage controlvm savestate</strong>: Saves the current state of the VM to disk and then stops the VM.</p> </blockquote> <p>VirtualBox creates something like a snapshot that includes all memory content. When the virtual machine is restarted, VirtualBox will restore the VM to the state it was in when it was saved.</p> <p>One more assumption: since this works the same way in v1.20, it is expected behaviour and not a bug (otherwise it would have been fixed already).</p> <p><strong>Init-container and jobs</strong></p> <p>You may consider moving your init-container's code to a separate <code>job</code> so you avoid unintended pod restarts breaking your deployment in the main container. It's also advised to keep the init-container's code idempotent. Here's a quote from the official documentation:</p> <blockquote> <p>Because init containers can be restarted, retried, or re-executed, init container code should be idempotent. In particular, code that writes to files on <code>EmptyDirs</code> should be prepared for the possibility that an output file already exists.</p> </blockquote> <p>This can be achieved by using <code>jobs</code> in Kubernetes, which you can run manually when you need to do so.
To keep the workflow intact, you can have the deployment's init container check for the <code>Job</code>'s completion, or for a specific file on a data volume, to confirm the setup code has run and the deployment will be fine (a minimal sketch of such a job follows the links below).</p> <p>Links with more information:</p> <ul> <li><p><a href="https://www.virtualbox.org/manual/ch08.html#vboxmanage-controlvm" rel="nofollow noreferrer">VirtualBox <code>save state</code></a></p> </li> <li><p><a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#detailed-behavior" rel="nofollow noreferrer">initContainers</a></p> </li> <li><p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">kubernetes jobs</a></p> </li> </ul>
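<p>A minimal sketch of such a job, assuming the setup logic lives in the same image under a hypothetical <code>setup.sh</code> and works on the same <code>hostPath</code> volume as the deployment:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: app-setup
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: setup
          image: my-app:dev                               # assumption: same image the deployment uses
          command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;./setup.sh&quot;]        # hypothetical setup script
          volumeMounts:
            - name: src
              mountPath: /app
      volumes:
        - name: src
          hostPath:
            path: /path/to/source                         # assumption: the same hostPath the deployment mounts
</code></pre> <p>Run it manually (<code>kubectl apply -f job.yaml</code>) only when the setup code is known to be good, so a VM resume never re-runs it against work-in-progress code.</p>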
<p>I am new to Kubernetes and this is my first time deploying a react-django web app to Kubernetes cluster.</p> <p>I have created:</p> <ol> <li>frontend.yaml # to run npm server</li> <li>backend.yaml # to run django server</li> <li>backend-service.yaml # to make django server accessible for react.</li> </ol> <p>In my frontend.yaml file I am passing <code>REACT_APP_HOST</code> and <code>REACT_APP_PORT</code> as a env variable and changed URLs in my react app to:</p> <pre><code>axios.get('http://'+`${process.env.REACT_APP_HOST}`+':'+`${process.env.REACT_APP_PORT}`+'/todolist/api/bucket/').then(res =&gt; { setBuckets(res.data); setReload(false); }).catch(err =&gt; { console.log(err); }) </code></pre> <p>and my URL becomes <code>http://backend-service:8000/todolist/api/bucket/</code></p> <p>here <code>backend-service</code> is name of backend-service that I am passing using env variable <code>REACT_APP_HOST</code>.</p> <p>I am not getting any errors, but when I used <code>kubectl port-forward &lt;frontend-pod-name&gt; 3000:3000</code> and accessed <code>localhost:3000</code> I saw my react app page but it did not hit any django apis.</p> <p>On chrome, I am getting error:</p> <pre><code>net::ERR_NAME_NOT_RESOLVED </code></pre> <p>and in Mozilla:</p> <pre><code>Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://backend-service:8000/todolist/api/bucket/. (Reason: CORS request did not succeed). </code></pre> <p>Please help on this issue, I have spent 3 days but not getting any ideas.</p> <p><strong>frontend.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: frontend name: frontend spec: replicas: 1 selector: matchLabels: app: frontend strategy: {} template: metadata: creationTimestamp: null labels: app: frontend spec: containers: - image: 1234567890/todolist:frontend-v13 name: react-todolist env: - name: REACT_APP_HOST value: &quot;backend-service&quot; - name: REACT_APP_PORT value: &quot;8000&quot; ports: - containerPort: 3000 volumeMounts: - mountPath: /var/log/ name: frontend command: [&quot;/bin/sh&quot;, &quot;-c&quot;] args: - npm start; volumes: - name: frontend hostPath: path: /var/log/ </code></pre> <p><strong>backend.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: backend name: backend spec: replicas: 1 selector: matchLabels: app: backend template: metadata: creationTimestamp: null labels: app: backend spec: serviceAccountName: backend-sva containers: - image: 1234567890/todolist:backend-v11 name: todolist env: - name: DB_NAME value: &quot;todolist&quot; - name: MYSQL_HOST value: &quot;mysql-service&quot; - name: MYSQL_USER value: &quot;root&quot; - name: MYSQL_PORT value: &quot;3306&quot; - name: MYSQL_PASSWORD value: &quot;mysql123&quot; ports: - containerPort: 8000 volumeMounts: - mountPath: /var/log/ name: backend command: [&quot;/bin/sh&quot;, &quot;-c&quot;] args: - apt-get update; apt-get -y install vim; python manage.py makemigrations bucket; python manage.py migrate; python manage.py runserver 0.0.0.0:8000 volumes: - name: backend hostPath: path: /var/log/ </code></pre> <p><strong>backend-service.yaml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: creationTimestamp: null labels: app: backend name: backend-service spec: ports: - port: 8000 targetPort: 8000 selector: app: backend status: loadBalancer: {} </code></pre> <p><strong>frontend docker file</strong></p> <pre><code>FROM 
node:14.16.1-alpine COPY package.json /app/react-todolist/react-todolist/ WORKDIR /app/react-todolist/react-todolist/ RUN npm install COPY . /app/react-todolist/react-todolist/ EXPOSE 3000 </code></pre> <p><strong>backend docker file</strong></p> <pre><code>FROM python:3.6 COPY requirements.txt ./app/todolist/ WORKDIR /app/todolist/ RUN pip install -r requirements.txt COPY . /app/todolist/ </code></pre> <p><strong>django settings</strong></p> <pre><code>CORS_ORIGIN_ALLOW_ALL=True # Application definition INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', # Rest Frame Work 'rest_framework', # CORS 'corsheaders', # Apps 'bucket', ] MIDDLEWARE = [ 'corsheaders.middleware.CorsMiddleware', 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] </code></pre> <p><strong>ingress.yaml</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: todolist-ingress spec: rules: - host: kubernetes.docker.internal http: paths: - path: / backend: serviceName: frontend-service servicePort: 3000 - path: / backend: serviceName: backend-service servicePort: 8000 </code></pre> <p><strong>react axios api</strong></p> <pre><code>useEffect(() =&gt; { axios.get('http://'+`${process.env.REACT_APP_HOST}`+':'+`${process.env.REACT_APP_PORT}`+'/todolist/api/bucket/', { headers: {&quot;Access-Control-Allow-Origin&quot;: &quot;*&quot;} }).then(res =&gt; { setBuckets(res.data); setReload(false); }).catch(err =&gt; { console.log(err); }) }, [reload]) </code></pre> <p><strong>web app github link</strong> <a href="https://github.com/vgautam99/ToDoList" rel="nofollow noreferrer">https://github.com/vgautam99/ToDoList</a></p>
<p>Welcome to the community!</p> <p>I reproduced your example and made it work fine. I forked your repository, made some changes to the js files and package.json, and added Dockerfiles (you can see the commit <a href="https://github.com/fivecatscats/ToDoList/commit/74790836659232284832688beb2e1779660d7615" rel="nofollow noreferrer">here</a>).</p> <p>Since I didn't change the database settings in <code>settings.py</code>, I attached it as a <code>configMap</code> to the backend deployment (see <a href="https://github.com/fivecatscats/ToDoList/blob/master/backend-deploy.yaml#L37-L39" rel="nofollow noreferrer">here</a> how it's done). The config map was created with this command:</p> <p><code>kubectl create cm django1 --from-file=settings.py</code></p> <p>The trickiest part here is to use your domain name <code>kubernetes.docker.internal</code> and add your port with the <code>/backend</code> path to the environment variables you're passing to your frontend application (see <a href="https://github.com/fivecatscats/ToDoList/blob/master/frontend-deploy.yaml#L21-L24" rel="nofollow noreferrer">here</a>).</p> <p>Once this is done, it's time to set up an ingress controller (this one uses apiVersion <code>extensions/v1beta1</code> as in your example; however, it'll be deprecated soon, so it's advised to use <code>networking.k8s.io/v1</code> - an example of the newer apiVersion is <a href="https://github.com/fivecatscats/ToDoList/blob/master/ingress-after-1-22.yaml" rel="nofollow noreferrer">here</a>):</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: todolist-backend-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; spec: rules: - host: kubernetes.docker.internal http: paths: - path: /backend(/|$)(.*) backend: serviceName: backend-service servicePort: 8000 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: todolist-frontend-ingress annotations: spec: rules: - host: kubernetes.docker.internal http: paths: - path: / backend: serviceName: frontend-service servicePort: 3000 </code></pre> <p>I set it up as two different ingresses because there are some issues with <code>rewrite-target</code> and <code>regex</code> when using the root path <code>/</code>. As you can see, we use <code>rewrite-target</code> here because requests are supposed to hit the <code>/todolist/api/bucket</code> path instead of the <code>/backend/todolist/api/bucket</code> path. Please see the <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">nginx rewrite annotations</a>.</p> <p>The next step is to find an IP address to test your application, from the node where kubernetes is running and from the web. To find IP addresses and ports, run <code>kubectl get svc</code> and find <code>ingress-nginx-controller</code>:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE backend-service ClusterIP 10.100.242.79 &lt;none&gt; 8000/TCP 21h frontend-service ClusterIP 10.110.102.96 &lt;none&gt; 3000/TCP 21h ingress-nginx-controller LoadBalancer 10.107.31.20 192.168.1.240 80:31323/TCP,443:32541/TCP 8d </code></pre> <p>There are two options: <code>CLUSTER-IP</code>, and <code>EXTERNAL-IP</code> if you have a load balancer set up. On your kubernetes control plane, you can run a simple check with the <code>curl</code> command using the <code>CLUSTER-IP</code> address.
In my case it looks like:</p> <p><code>curl http://kubernetes.docker.internal/ --resolve kubernetes.docker.internal:80:10.107.31.20</code></p> <p>And the next test is:</p> <p><code>curl -I http://kubernetes.docker.internal/backend/todolist/api/bucket --resolve kubernetes.docker.internal:80:10.107.31.20</code></p> <p>The output will look like:</p> <pre><code>HTTP/1.1 301 Moved Permanently Date: Fri, 14 May 2021 12:21:59 GMT Content-Type: text/html; charset=utf-8Content-Length: 0 Connection: keep-alive Location: /todolist/api/bucket/ X-Content-Type-Options: nosniff Referrer-Policy: same-origin Vary: Origin </code></pre> <p>The next step is to access your application via a web browser. You'll need to edit <code>/etc/hosts</code> on your local machine (Linux/Mac OS; for Windows it's a bit different, but very easy to find) to map the <code>kubernetes.docker.internal</code> domain to the proper IP address.</p> <p>If you're using a <code>load balancer</code> then <code>EXTERNAL-IP</code> is the right address. If you don't have a <code>load balancer</code> then it's possible to reach the node directly. You can find the IP address in the cloud console and add it to <code>/etc/hosts</code>. In this case you will need to use a different port; in my case it was <code>31323</code> (you can find it above in the <code>kubectl get svc</code> output).</p> <p>When it's set up, I hit the application in my web browser at <code>http://kubernetes.docker.internal:31323</code></p> <p>(The repository is <a href="https://github.com/fivecatscats/ToDoList" rel="nofollow noreferrer">here</a>; feel free to use everything you need from it)</p>
<p>I’m looking for a breakdown of the minimal requirements for a kubelet implementation. Something like sequence diagrams/descriptions and APIs.</p> <p>I’m looking to write a minimal kubelet I can run on a reasonably capable microcontroller so that app binaries can be loaded and managed from an existing cluster (the container engine would actually flash to a connected microcontroller and restart). I’ve been looking through the kubelet code and there’s a lot to follow so any starting points would be helpful.</p> <p>A related question, does a kubelet need to run gRPC or can it fall back to a RESTful api? (there’s no existing gRPC I can run on the micro but there is nanopb and existing https APIs)</p>
<p>This probably won't be a full answer, however there are some details that will help you.</p> <p>First, the related question about using <code>gRPC</code> and/or a <code>REST API</code>. Based on the <a href="https://github.com/kubernetes/kubernetes/blob/4a6935b31fcc4d1498c977d90387e02b6b93288f/pkg/kubelet/server/server.go#L160-L174" rel="nofollow noreferrer">kubelet code</a>, there is a server creation part that handles HTTP requests. Taking this into account, we can assume the <code>kubelet</code> receives requests on its HTTPS endpoint. This is also indirectly confirmed by the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/#overview" rel="nofollow noreferrer">kubelet authentication/authorization documentation</a>, which only covers the <code>HTTPS endpoint</code>.</p> <p>Moving on to the API itself: it's still not documented properly, so the best way to find information is to look into the code, e.g. <a href="https://github.com/kubernetes/kubernetes/blob/bd239d42e463bff7694c30c994abd54e4db78700/pkg/kubelet/server/server.go#L76-L84" rel="nofollow noreferrer">the endpoint definitions</a>.</p> <p>Finally, <a href="https://www.deepnetwork.com/blog/2020/01/13/kubelet-api.html" rel="nofollow noreferrer">this useful page</a> gathers a lot of information about the <code>kubelet API</code>.</p>
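<p>To get a feel for the surface a minimal kubelet would have to expose, it can help to probe a real kubelet's HTTPS endpoint. A sketch, assuming network access to a node and a token that is authorized for <code>nodes/proxy</code> (paths such as <code>/healthz</code> and <code>/pods</code> exist today, but they are not a formally stable API):</p> <pre><code># default secure port is 10250; -k skips server certificate verification for a quick look
TOKEN=$(kubectl create token default)   # assumption: kubectl v1.24+; the account still needs nodes/proxy RBAC
curl -sk -H &quot;Authorization: Bearer $TOKEN&quot; https://&lt;node-ip&gt;:10250/healthz
curl -sk -H &quot;Authorization: Bearer $TOKEN&quot; https://&lt;node-ip&gt;:10250/pods
</code></pre>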
<p>It looks like there is no support for deleting a HorizontalPodAutoscaler using fabric8's K8S Java client ver:6.0.0.</p> <p>However, it is straightforward to create a HorizontalPodAutoscaler using fabric8's K8S Java client ver:6.0.0.</p> <p>E.g.</p> <pre><code> HorizontalPodAutoscalerStatus hpaStatus = k8sClient.resource(createHPA()) .inNamespace(namespace) .createOrReplace().getStatus(); </code></pre> <pre><code>public HorizontalPodAutoscaler createHPA(){ return new HorizontalPodAutoscalerBuilder() .withNewMetadata() .withName(applicationName) .addToLabels(&quot;name&quot;, applicationName) .endMetadata() .withNewSpec() .withNewScaleTargetRef() .withApiVersion(hpaApiVersion) .withKind(&quot;Deployment&quot;) .withName(applicationName) .endScaleTargetRef() .withMinReplicas(minReplica) .withMaxReplicas(maxReplica) .addNewMetric() .withType(&quot;Resource&quot;) .withNewResource() .withName(&quot;cpu&quot;) .withNewTarget() .withType(&quot;Utilization&quot;) .withAverageUtilization(cpuAverageUtilization) .endTarget() .endResource() .endMetric() .addNewMetric() .withType(&quot;Resource&quot;) .withNewResource() .withName(&quot;memory&quot;) .withNewTarget() .withType(&quot;AverageValue&quot;) .withAverageValue(new Quantity(memoryAverageValue)) .endTarget() .endResource() .endMetric() .withNewBehavior() .withNewScaleDown() .addNewPolicy() .withType(&quot;Pods&quot;) .withValue(podScaleDownValue) .withPeriodSeconds(podScaleDownPeriod) .endPolicy() .withStabilizationWindowSeconds(podScaledStabaliztionWindow) .endScaleDown() .endBehavior() .endSpec().build(); } </code></pre> <p>Any solution to delete a HorizontalPodAutoscaler using fabric8's K8S Java client ver:6.0.0 will be appreciated.</p>
<p>First, you need to identify which API group <code>(v1, v2beta1, v2beta2)</code> was used when the resource was created. The autoscaling function for that same API group then needs to be called to get the HPA instance, after which you can perform any action on that HPA instance.</p> <p>In my case the HPA was created with the v2beta2 API group. The code snippet below helped me delete the HorizontalPodAutoscaler object from the provided namespace.</p> <pre><code>k8sClient.autoscaling().v2beta2().horizontalPodAutoscalers().inNamespace(&quot;test&quot;).withName(&quot;myhpa&quot;).delete() </code></pre>
<p>I have a next js app that I am trying to deploy to a kubernetes cluster as a deployment. Parts of the application contain axios http requests that reference an environment variable containing the value of a backend service.</p> <p>If I am running locally, everything works fine, here is what I have in my <code>.env.local</code> file:</p> <pre><code>NEXT_PUBLIC_BACKEND_URL=http://localhost:8080 </code></pre> <p>Anywhere in the app, I can successfully access this variable with <code>process.env.NEXT_PUBLIC_BACKEND_URL</code>.</p> <p>When I create a kubernetes deployment, I try to inject that same env variable via a configMap and the variable shows as <code>undefined</code>.</p> <p><code>deployment.yaml</code></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: my-site-frontend name: my-site-frontend spec: replicas: 1 selector: matchLabels: app: my-site-frontend strategy: {} template: metadata: creationTimestamp: null labels: app: my-site-frontend spec: containers: - image: my-site:0.1 name: my-site resources: {} envFrom: - configMapRef: name: my-site-frontend imagePullSecrets: - name: dockerhub </code></pre> <p><code>configMap.yaml</code></p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: my-site-frontend data: NEXT_PUBLIC_BACKEND_URL: backend_service </code></pre> <p>When I run the deployment and expose the application via a nodePort, I see these environment variables as <code>undefined</code> in my browser console. All api calls to my backend_service (ClusterIP) fail as you can imagine.</p> <p>I can see the env variable is present when I exec into the running pod.</p> <pre><code>my-mac:manifests andy$ k get pods NAME READY STATUS RESTARTS AGE my-site-frontend-77fb459dbf-d996n 1/1 Running 0 25m --- my-mac:manifests andy$ k exec -it my-site-frontend-77fb459dbf-d996n -- sh --- /app $ env | grep NEXT_PUBLIC NEXT_PUBLIC_BACKEND_URL=backend_service </code></pre> <p>Any idea as to why the build process for my app does not account for this variable?</p> <p>Thanks!</p>
<p><strong>Make sure the kubernetes part did its job right</strong></p> <p>First, check whether the environment variables actually get to the pod. Your approach works, however there are cases when <code>kubectl exec -it pod_name -- sh / bash</code> creates a different session and configmaps can be reloaded again.</p> <p>So let's check whether the environment is present right after the pod is created.</p> <p>I created a deployment based on yours, used the <code>nginx</code> image and extended the <code>spec</code> part with:</p> <pre><code>command: [&quot;/bin/bash&quot;, &quot;-c&quot;] args: [&quot;env | grep BACKEND_URL ; nginx -g \&quot;daemon off;\&quot;&quot;] </code></pre> <p>Right after the pod started, I got the logs and confirmed the environment variable is present:</p> <pre><code>kubectl logs my-site-frontend-yyyyyyyy-xxxxx -n name_space | grep BACKEND NEXT_PUBLIC_BACKEND_URL=SERVICE_URL:8000 </code></pre> <p><strong>Why the browser doesn't show environment variables</strong></p> <p>This part is more tricky. Based on some research on <code>next.js</code>, the variables must be set before the project is built (more details <a href="https://nextjs.org/docs/basic-features/environment-variables#exposing-environment-variables-to-the-browser" rel="nofollow noreferrer">here</a>):</p> <blockquote> <p>The value will be inlined into JavaScript sent to the browser because of the NEXT_PUBLIC_ prefix. This inlining occurs at build time, so your various NEXT_PUBLIC_ envs need to be set when the project is built.</p> </blockquote> <p>You can also see a <a href="https://github.com/vercel/next.js/tree/canary/examples/environment-variables" rel="nofollow noreferrer">good example of using environment variables</a> from the <code>next.js</code> GitHub project. You can try the <code>Open in StackBlitz</code> option, which is very convenient and transparent.</p> <p>At this point you may want to introduce DNS names, since IPs can change, and also different URL paths for the front and back ends (depending on the application; below is an example with a <code>react</code> app).</p> <p><strong>Kubernetes ingress</strong></p> <p>If you decide to use DNS, then you may run into the necessity to route the traffic.</p> <p>A short note on what ingress is:</p> <blockquote> <p>An API object that manages external access to the services in a cluster, typically HTTP.</p> <p>Ingress may provide load balancing, SSL termination and name-based virtual hosting.</p> </blockquote> <p>Why this is needed: once you have a DNS endpoint, the frontend and backend should be separated but share the same domain name to avoid any CORS issues (this is possible to resolve of course; here it's more about testing and developing on a local cluster).</p> <p><a href="https://stackoverflow.com/questions/67470540/react-is-not-hitting-django-apis-on-kubernetes-cluster/67534740#67534740">This is a good case</a> for solving such issues with a <code>react</code> application and a <code>python</code> backend. Since <code>next.js</code> is an open-source React front-end web framework, it should be useful.</p> <p>In this case, there's a frontend located on <code>/</code> with a service on port <code>3000</code>, and a backend located on <code>/backend</code> (please see the deployment example).
The linked answer also shows how to set up <code>/etc/hosts</code>, test it and get the deployed app working.</p> <p>Useful links:</p> <ul> <li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">kubernetes ingress and how to get started with it</a></li> <li><a href="https://github.com/fivecatscats/ToDoList" rel="nofollow noreferrer">repository with all the necessary yamls</a> that the SO answer links to</li> </ul>
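<p>Since <code>NEXT_PUBLIC_*</code> values are inlined at build time, one common pattern is to pass them as Docker build arguments so they exist when <code>next build</code> runs. A minimal sketch, assuming a standard <code>npm run build</code> setup (image tag, base image and URL are placeholders):</p> <pre><code># Dockerfile (sketch)
FROM node:14-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
ARG NEXT_PUBLIC_BACKEND_URL
ENV NEXT_PUBLIC_BACKEND_URL=$NEXT_PUBLIC_BACKEND_URL
RUN npm run build                 # the variable is baked into the client bundle here
CMD [&quot;npm&quot;, &quot;start&quot;]
</code></pre> <pre><code>docker build \
  --build-arg NEXT_PUBLIC_BACKEND_URL=http://kubernetes.docker.internal/backend \
  -t my-site:0.2 .
</code></pre> <p>With this approach the runtime env from the configMap is no longer what the browser sees; the value chosen at build time is.</p>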
<p>Upon submitting a few jobs (say, 50) targeted at a single node, I am getting pod status &quot;OutOfpods&quot; for a few of them. I have reduced the maximum number of pods on this worker node to &quot;10&quot;, but still observe the above issue. The kubelet configuration is the default with no changes.</p> <p>kubernetes version: v1.22.1</p> <ul> <li>Worker node: CentOS 7.9, 528 GB memory, 40 CPU cores</li> </ul> <p>kubectl describe pod:</p> <blockquote> <p>Warning OutOfpods 72s kubelet Node didn't have enough resource: pods, requested: 1, used: 10, capacity: 10</p> </blockquote>
<p>I have realized this is a known issue in kubelet v1.22, as confirmed <a href="https://github.com/kubernetes/kubernetes/issues/104560" rel="nofollow noreferrer">here</a>. The fix will be included in an upcoming release.</p> <p>A simple resolution here is to downgrade kubernetes to v1.21.</p>
<p>I'm trying to override the node selector for a <code>kubectl run</code>.</p> <pre><code>kubectl run -it powershell --image=mcr.microsoft.com/powershell:lts-nanoserver-1809-20211215 --restart=Never --overrides='{ &quot;apiVersion&quot;: &quot;v1&quot;, &quot;spec&quot;: { &quot;template&quot;: { &quot;spec&quot;: { &quot;nodeSelector&quot;: { &quot;kubernetes.io/os&quot;: &quot;windows&quot; } } } } }' -- pwsh </code></pre> <p>But I get &quot;Invalid Json Path&quot;.</p> <p>This is my yaml if I do a deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: ... spec: ... template: ... spec: ... nodeSelector: kubernetes.io/os: windows </code></pre> <p>and if I do <code>get pods -o json</code> I get:</p> <pre><code>{ &quot;apiVersion&quot;: &quot;v1&quot;, &quot;kind&quot;: &quot;Pod&quot;, &quot;metadata&quot;: { ... }, &quot;spec&quot;: { ... &quot;nodeSelector&quot;: { &quot;kubernetes.io/os&quot;: &quot;windows&quot; } </code></pre>
<p><code>kubectl run</code> is a command to start a <code>Pod</code>. You can read more about it <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands" rel="nofollow noreferrer">here</a>.</p> <pre><code>kubectl run -it powershell --image=mcr.microsoft.com/powershell:lts-nanoserver-1809-20211215 --restart=Never --overrides='{ &quot;apiVersion&quot;: &quot;v1&quot;, &quot;spec&quot;: { &quot;template&quot;: { &quot;spec&quot;: { &quot;nodeSelector&quot;: { &quot;kubernetes.io/os&quot;: &quot;windows&quot; } } } } }' -- pwsh </code></pre> <p>Using the command above, you are trying to run a <code>Pod</code> with the specification <code>&quot;template&quot;: { &quot;spec&quot;: { </code>, which is used only for a <code>Deployment</code>, and that is why you get the error <code>Invalid Json Path</code>.</p> <p><code>nodeSelector</code>, as you can see in the <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector" rel="nofollow noreferrer">documentation</a>, can be specified under <code>spec</code> in a <code>Pod</code> config file as below:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent nodeSelector: disktype: ssd </code></pre> <p>When you add <code>--dry-run=client -o yaml</code> to your command to see how the object would be processed, you will see the output below, which doesn't have a <code>nodeSelector</code>:</p> <pre><code>apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: powershell name: powershell spec: containers: - image: mcr.microsoft.com/powershell:lts-nanoserver-1809-20211215 name: powershell resources: {} dnsPolicy: ClusterFirst restartPolicy: Never status: {} </code></pre> <p>To solve your issue, you can delete <code>template</code> and <code>spec</code> from your command, which should then look as below:</p> <pre><code>kubectl run -it powershell --image=mcr.microsoft.com/powershell:lts-nanoserver-1809-20211215 --restart=Never --overrides='{ &quot;apiVersion&quot;: &quot;v1&quot;, &quot;spec&quot;: { &quot;nodeSelector&quot;: { &quot;kubernetes.io/os&quot;: &quot;windows&quot; } } }' -- pwsh </code></pre> <p>Adding <code>--dry-run=client -o yaml</code> to see what will be changed, you will see that <code>nodeSelector</code> exists:</p> <pre><code>apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: powershell name: powershell spec: containers: - image: mcr.microsoft.com/powershell:lts-nanoserver-1809-20211215 name: powershell resources: {} dnsPolicy: ClusterFirst nodeSelector: kubernetes.io/os: windows restartPolicy: Never status: {} </code></pre>
<p>I have a use case wherein I have a REST API running on a pod inside a kubernetes cluster, and a helm pre-upgrade hook which runs a k8s Job needs to access that REST API. What is the best way to expose this URL so that the helm hook can access it? I do not want to hardcode any IP.</p>
<p>Posting this as a community wiki, feel free to edit and expand it for a better experience.</p> <p>As David Maze and Lucia pointed out in the comments, services are accessible by their IPs and by URLs based on their service names.</p> <p>This part is covered and well explained in the official kubernetes documentation: <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for services and pods</a>.</p>
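<p>In other words, the hook job can simply call the in-cluster DNS name <code>&lt;service&gt;.&lt;namespace&gt;.svc.cluster.local:&lt;port&gt;</code>. A rough sketch of a pre-upgrade hook job (service name, namespace, port and path are placeholders):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: &quot;{{ .Release.Name }}-pre-upgrade&quot;
  annotations:
    &quot;helm.sh/hook&quot;: pre-upgrade
    &quot;helm.sh/hook-delete-policy&quot;: hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: call-api
          image: curlimages/curl        # assumption: any image that ships curl
          args:
            - &quot;http://my-rest-api.my-namespace.svc.cluster.local:8080/health&quot;
</code></pre>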
<p>I have an (AKS) Kubernetes cluster running a couple of pods. Those pods have dynamic persistent volume claims. An example is:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pvc namespace: prd spec: accessModes: - ReadWriteOnce storageClassName: custom-azure-disk-retain resources: requests: storage: 50Gi </code></pre> <p>The disks are Azure Managed Disks and are backed up (snapshots) with the Azure backup center. In the backup center I can create a disk from a snapshot.</p> <p>Here is my question: how can I use the new disk in the PVC? Because I don't think I can patch the PV with a new DiskURI.</p> <p>What I figured out myself is how to use the restored disk directly as a volume. But if I'm not mistaken, this does not use a PVC anymore, meaning I cannot benefit from dynamically resizing the disk.</p> <p>I'm using kustomize; here is how I can link the restored disk directly in the deployment's yaml:</p> <pre><code>- op: remove path: &quot;/spec/template/spec/volumes/0/persistentVolumeClaim&quot; - op: add path: &quot;/spec/template/spec/volumes/0/azureDisk&quot; value: {kind: Managed, diskName: mysql-restored-disk, diskURI: &lt;THE_URI&gt;} </code></pre> <p>Some people will tell me to use <a href="https://velero.io/" rel="nofollow noreferrer">Velero</a> but we're not ready for that yet.</p>
<p>You are using dynamic provisioning and then you want to hardcode DiskURIs? With this you also have to bind pods to nodes. This will be a nightmare when you have a disaster recovery case.</p> <p>To be honest, use Velero :) Invest the time to get comfortable with it; your MTTR will thank you.</p> <p>Here is a quick start article with AKS: <a href="https://dzone.com/articles/setup-velero-on-aks" rel="nofollow noreferrer">https://dzone.com/articles/setup-velero-on-aks</a></p>
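<p>For a sense of scale, the day-to-day usage after setup is small — a hedged sketch (the backup name is a placeholder, and the Azure plugin must be configured for volume snapshots):</p> <pre><code>velero backup create mysql-backup --include-namespaces prd
velero restore create --from-backup mysql-backup
</code></pre> <p>The restore recreates the PVC and PV together, so the deployment keeps using a claim and you keep dynamic resizing.</p>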
<p>I have a Mac with Apple Silicon (M1) and I have minikube installed. The installation was done following <a href="https://medium.com/@seohee.sophie.kwon/how-to-run-a-minikube-on-apple-silicon-m1-8373c248d669" rel="nofollow noreferrer">https://medium.com/@seohee.sophie.kwon/how-to-run-a-minikube-on-apple-silicon-m1-8373c248d669</a> by executing:</p> <pre><code>curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-arm64 sudo install minikube-darwin-arm64 /usr/local/bin/minikube </code></pre> <p>How do I remove minikube?</p>
<p>Have you tried following any online material to delete Minikube? Test if this works for you and let me know if you face any issues.</p> <p>Try using the command below:</p> <pre><code>minikube stop; minikube delete &amp;&amp; docker stop $(docker ps -aq) &amp;&amp; rm -rf ~/.kube ~/.minikube &amp;&amp; sudo rm -rf /usr/local/bin/localkube /usr/local/bin/minikube &amp;&amp; launchctl stop '*kubelet*.mount' &amp;&amp; launchctl stop localkube.service &amp;&amp; launchctl disable localkube.service &amp;&amp; sudo rm -rf /etc/kubernetes/ &amp;&amp; docker system prune -af --volumes </code></pre> <p>Reference used: <a href="https://gist.github.com/rahulkumar-aws/65e6fbe16cc71012cef997957a1530a3" rel="noreferrer">Delete minikube on Mac</a></p>
<p>I have a simple app that I need to deploy in K8S (running on AWS EKS) and expose to the outside world.</p> <p>I know that I can add a service with the type LoadBalancer and voila, K8S will create an AWS ALB for me.</p> <pre><code>spec: type: LoadBalancer </code></pre> <p>However, the issue is that it will <strong>create</strong> a new LB.</p> <p>The main reason why this is an issue for me is that I am trying to separate out infrastructure creation/upgrades (vs. software deployment/upgrades). All of my infrastructure will be managed by Terraform and all of my software will be defined via K8S YAML files (maybe Helm in the future).</p> <p>And the creation of a load balancer (infrastructure) breaks this model.</p> <p>Two questions:</p> <ul> <li><p>Do I understand correctly that you can't change this behavior (<strong>create</strong> vs. <strong>use existing</strong>)?</p> </li> <li><p>I read multiple articles about K8S and all of them lead me in the direction of Ingress + Ingress Controller. Is this the way to solve this problem?</p> </li> </ul> <p>I am hesitant to go in this direction. There are tons of steps to get it working and it will take time for me to figure out how to retrofit it into my Terraform and k8s YAML files.</p>
<p>Short answer: you can only change it to &quot;<strong>NodePort</strong>&quot; and couple it to the existing LB manually, by adding the EKS nodes with the right exposed port.</p> <p>For example:</p> <pre><code>spec: type: NodePort externalTrafficPolicy: Cluster ports: - name: http port: 80 protocol: TCP targetPort: http nodePort: 30080 </code></pre> <p>But attaching an existing LB natively is not supported by the AWS k8s controller yet, and supporting such behavior may not be a priority, because:</p> <ul> <li>Configuration: controllers get their configuration from k8s config maps or special CustomResourceDefinitions (CRDs), which would conflict with any manual config on the already existing LB and may lead to wiping existing configs, as they are not tracked in the config source.</li> </ul> <hr /> <p>Q: Direct expose or overlay ingress?</p> <blockquote> <p>Note: Use ingress (Nginx or AWS ALB) if you have more than one service to expose or you need to add controls on the exposed APIs.</p> </blockquote>
<p>In my helm chart, I have a few files that need credentials to be filled in. For example:</p> <pre><code>&lt;Resource name=&quot;jdbc/test&quot; auth=&quot;Container&quot; driverClassName=&quot;com.microsoft.sqlserver.jdbc.SQLServerDriver&quot; url=&quot;jdbc:sqlserver://{{ .Values.DB.host }}:{{ .Values.DB.port }};selectMethod=direct;DatabaseName={{ .Values.DB.name }};User={{ Values.DB.username }};Password={{ .Values.DB.password }}&quot; /&gt; </code></pre> <p>I created a secret:</p> <pre><code>Name: databaseinfo Data: username password </code></pre> <p>I then create environment variables to retrieve those secrets in my deployment.yaml:</p> <pre><code>env: - name: DBPassword valueFrom: secretKeyRef: key: password name: databaseinfo - name: DBUser valueFrom: secretKeyRef: key: username name: databaseinfo </code></pre> <p>In my values.yaml or this other file, I need to be able to reference this secret/environment variable. I tried the following but it does not work: values.yaml</p> <pre><code>DB: username: $env.DBUser password: $env.DBPassword </code></pre>
<p>you can't pass variables from any template to <code>values.yaml</code> with helm. Just from <code>values.yaml</code> to the templates.</p> <p>The answer you are seeking was posted by <a href="https://stackoverflow.com/users/3061469/mehowthe">mehowthe</a> :</p> <p>deployment.yaml =</p> <pre><code> env: {{- range .Values.env }} - name: {{ .name }} value: {{ .value }} {{- end }} </code></pre> <p>values.yaml =</p> <pre><code>env: - name: &quot;DBUser&quot; value: &quot;&quot; - name: &quot;DBPassword&quot; value: &quot;&quot; </code></pre> <p>then</p> <p><code>helm install chart_name --name release_name --set env.DBUser=&quot;FOO&quot; --set env.DBPassword=&quot;BAR&quot;</code></p>
<p>I wanted to install minikube, and after the start command I got the following error text:</p> <pre><code>😄 minikube v1.26.1 on Ubuntu 22.04 ❗ minikube skips various validations when --force is supplied; this may lead to unexpected behavior ✨ Using the docker driver based on existing profile 🛑 The &quot;docker&quot; driver should not be used with root privileges. If you wish to continue as root, use --force. 💡 If you are running minikube within a VM, consider using --driver=none: 📘 https://minikube.sigs.k8s.io/docs/reference/drivers/none/ 💡 Tip: To remove this root owned cluster, run: sudo minikube delete 👍 Starting control plane node minikube in cluster minikube 🚜 Pulling base image ... ✋ Stopping node &quot;minikube&quot; ... 🛑 Powering off &quot;minikube&quot; via SSH ... 🔥 Deleting &quot;minikube&quot; in docker ... 🤦 StartHost failed, but will try again: boot lock: unable to open /tmp/juju-mke11f63b5835bf422927bf558fccac7a21a838f: permission denied 😿 Failed to start docker container. Running &quot;minikube delete&quot; may fix it: boot lock: unable to open /tmp/juju-mke11f63b5835bf422927bf558fccac7a21a838f: permission denied ❌ Exiting due to HOST_JUJU_LOCK_PERMISSION: Failed to start host: boot lock: unable to open /tmp/juju-mke11f63b5835bf422927bf558fccac7a21a838f: permission denied 💡 Suggestion: Run 'sudo sysctl fs.protected_regular=0', or try a driver which does not require root, such as '--driver=docker' 🍿 Related issue: https://github.com/kubernetes/minikube/issues/6391 </code></pre>
<p>If this Minikube is running in a lower (non-production) environment, try removing the stale lock files; you may need <code>sudo</code>, since the error indicates they are owned by root:</p> <pre><code>rm /tmp/juju-* </code></pre> <p>See also: <a href="https://github.com/kubernetes/minikube/issues/5660" rel="nofollow noreferrer">unable to open /tmp/juju-kubeconfigUpdate: permission denied</a></p>
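<p>For completeness, a possible cleanup sequence based on the suggestions in the error output itself (adjust to your setup; <code>minikube delete</code> removes the existing cluster):</p> <pre><code># remove the root-owned lock files that block the start
sudo rm /tmp/juju-*

# alternatively, relax the protected_regular restriction as minikube itself suggests
sudo sysctl fs.protected_regular=0

# if the profile is broken, delete the root-owned cluster and recreate it as a non-root user
sudo minikube delete
minikube start --driver=docker
</code></pre>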
<p>I ran a docker container locally and it stores data in a file (currently no volume is mounted). I stored some data using the API. After that I crashed the container using <code>process.exit(1)</code> and started the container again. The previously stored data in the container survives (as expected). But when I do the same thing in Kubernetes (minikube), the data is lost.</p>
<p>Posting this as a community wiki for better visibility, feel free to edit and expand it.</p> <p>As described in the comments, Kubernetes replaces failed containers with new (identical) ones, which explains why the container's filesystem starts out clean.</p> <p>Also, as mentioned, containers should be stateless. There are different options for running applications and taking care of their data (see the volume sketch after the links below):</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/" rel="nofollow noreferrer">Run a stateless application using a Deployment</a></li> <li>Run a stateful application either as a <a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/" rel="nofollow noreferrer">single instance</a> or as a <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="nofollow noreferrer">replicated set</a></li> <li><a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer">Run automated tasks with a CronJob</a></li> </ul> <p>Useful links:</p> <ul> <li><a href="https://kubernetes.io/docs/concepts/workloads/" rel="nofollow noreferrer">Kubernetes workloads</a></li> <li><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">Pod lifecycle</a></li> </ul>
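<p>If the application has to keep its file on disk across restarts, a minimal sketch (assuming a cluster with a default StorageClass; names and the image are illustrative) is to mount a PersistentVolumeClaim into the pod:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: [&quot;ReadWriteOnce&quot;]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest        # placeholder image
        volumeMounts:
        - name: data
          mountPath: /app/data      # write the data file under this path
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: app-data
</code></pre> <p>With this, data written under <code>/app/data</code> survives container restarts and pod rescheduling, within the limits of the underlying volume.</p>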
<p>How can egress from a Kubernetes pod be limited to only specific FQDN/DNS with Azure CNI Network Policies?</p> <p>This is something that can be achieved with:</p> <p>Istio</p> <pre><code>apiVersion: config.istio.io/v1alpha2 kind: EgressRule metadata: name: googleapis namespace: default spec: destination: service: &quot;*.googleapis.com&quot; ports: - port: 443 protocol: https </code></pre> <p>Cilium</p> <pre><code>apiVersion: &quot;cilium.io/v2&quot; kind: CiliumNetworkPolicy metadata: name: &quot;fqdn&quot; spec: endpointSelector: matchLabels: app: some-pod egress: - toFQDNs: - matchName: &quot;api.twitter.com&quot; - toEndpoints: - matchLabels: &quot;k8s:io.kubernetes.pod.namespace&quot;: kube-system &quot;k8s:k8s-app&quot;: kube-dns toPorts: - ports: - port: &quot;53&quot; protocol: ANY rules: dns: - matchPattern: &quot;*&quot; </code></pre> <p>OpenShift</p> <pre><code>apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default-rules spec: egress: - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0 </code></pre> <p>How can something similar be done with Azure CNI Network Policies?</p>
<p>At the moment, network policies with FQDN/DNS rules are not supported on AKS.</p> <p>If you use Azure CNI with the Azure network policy plugin, you get the default Kubernetes NetworkPolicies.</p> <p>If you use Azure CNI with the Calico network policy plugin, you get advanced possibilities like GlobalNetworkPolicies, but not the FQDN/DNS rules. Unfortunately, this is a paid feature on Calico Cloud.</p>
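<p>As a workaround, the closest you can get with plain Kubernetes NetworkPolicies is restricting egress by IP ranges instead of FQDNs. A minimal sketch (the CIDR and pod label are placeholders you would have to maintain yourself, e.g. from the published IP ranges of the target service):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-api
spec:
  podSelector:
    matchLabels:
      app: some-pod
  policyTypes:
  - Egress
  egress:
  # allow DNS resolution to any destination
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  # allow HTTPS only to the known IP range of the external API
  - to:
    - ipBlock:
        cidr: 203.0.113.0/24
    ports:
    - protocol: TCP
      port: 443
</code></pre>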
<p>I'm looking to create a small web application that lists some data about the ingresses in my cluster. The application will be hosted in the cluster itself, so I assume i'm going to need a service account attached to a backend application that calls the kubernetes api to get the data, then serves that up to the front end through a GET via axios etc. Am I along the right lines here?</p>
<p>You can use the JavaScript Kubernetes client package for Node directly in your Node application to access the kube-apiserver over its REST APIs:</p> <pre><code>npm install @kubernetes/client-node </code></pre> <p>There are several ways to provide authentication information to the Kubernetes client. This is code that worked for me:</p> <pre><code>const k8s = require('@kubernetes/client-node'); const cluster = { name: '&lt;cluster-name&gt;', server: '&lt;server-address&gt;', caData: '&lt;certificate-data&gt;' }; const user = { name: '&lt;cluster-user-name&gt;', certData: '&lt;certificate-data&gt;', keyData: '&lt;certificate-key&gt;' }; const context = { name: '&lt;context-name&gt;', user: user.name, cluster: cluster.name, }; const kc = new k8s.KubeConfig(); kc.loadFromOptions({ clusters: [cluster], users: [user], contexts: [context], currentContext: context.name, }); const k8sApi = kc.makeApiClient(k8s.NetworkingV1Api); k8sApi.listNamespacedIngress('&lt;namespace&gt;').then((res) =&gt; { console.log(res.body); }); </code></pre> <p>You need to create the API client that matches the resources you want to read; for Ingresses in my case that was <code>NetworkingV1Api</code>.</p> <p>You can find further options at <a href="https://github.com/kubernetes-client/javascript" rel="nofollow noreferrer">https://github.com/kubernetes-client/javascript</a></p>
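<p>Since you plan to run the backend inside the cluster with a service account, you most likely do not need to hard-code any credentials at all. A minimal sketch (assuming the pod's service account has RBAC permission to list ingresses; <code>loadFromCluster()</code> reads the mounted service-account token):</p> <pre><code>const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
kc.loadFromCluster();      // in-cluster config from the pod's service account
// kc.loadFromDefault();   // alternative for local development (~/.kube/config)

const netApi = kc.makeApiClient(k8s.NetworkingV1Api);

// e.g. call this from an Express GET handler and return the result to the frontend
async function listIngressNames(namespace) {
  const res = await netApi.listNamespacedIngress(namespace);
  return res.body.items.map((ing) =&gt; ing.metadata.name);
}
</code></pre>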
<p>As part of kubernetes 1.19, <a href="https://kubernetes.io/blog/2020/09/04/kubernetes-1-19-introducing-structured-logs/" rel="nofollow noreferrer">structured logging</a> has been implemented.</p> <p>I've <a href="https://kubernetes.io/docs/concepts/cluster-administration/system-logs/" rel="nofollow noreferrer">read</a> that kubernetes log's engine is <code>klog</code> and structured logs are following this format :</p> <pre><code>&lt;klog header&gt; &quot;&lt;message&gt;&quot; &lt;key1&gt;=&quot;&lt;value1&gt;&quot; &lt;key2&gt;=&quot;&lt;value2&gt;&quot; ... </code></pre> <p>Cool ! But even better, you apparently can pass a <code>--logging-format=json</code> flag to <code>klog</code> so logs are generated in <code>json</code> directly !</p> <pre><code>{ &quot;ts&quot;: 1580306777.04728, &quot;v&quot;: 4, &quot;msg&quot;: &quot;Pod status updated&quot;, &quot;pod&quot;:{ &quot;name&quot;: &quot;nginx-1&quot;, &quot;namespace&quot;: &quot;default&quot; }, &quot;status&quot;: &quot;ready&quot; } </code></pre> <p>Unfortunately, I haven't been able to find out how and where I should specify that <code>--logging-format=json</code> flag.</p> <p>Is it a <code>kubectl</code> command? I'm using Azure's aks.</p>
<p><code>--logging-format=json</code> is a flag which needs to be set on the Kubernetes system components themselves (kubelet, API server, controller manager &amp; scheduler). You can check all kubelet flags <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">here</a>.</p> <p>Unfortunately you can't do that right now with AKS, because the control plane is managed by Microsoft.</p>
<p>I have an existing deployment in my Kubernetes cluster. I want to read its deployment.yaml from the Kubernetes environment using the fabric8 client, with functionality similar to the command <code>kubectl get deploy deploymentname -o yaml</code>. Please help me find its fabric8 Java client equivalent.</p> <p>Objective: I want to get the deployment.yaml for a resource and save it, perform some experiments in the Kubernetes environment, and after the experiments are done, revert back to the previous deployment. So I need to have the deployment.yaml handy to roll back the operation. Please help.</p> <p>Thanks, Sapna</p>
<p>You can get the yaml representation of an object with the <a href="https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-client-api/src/main/java/io/fabric8/kubernetes/client/utils/Serialization.java#L140" rel="nofollow noreferrer">Serialization#asYaml</a> method.</p> <p>For example:</p> <pre><code>System.out.println(Serialization.asYaml(client.apps().deployments().inNamespace(&quot;abc&quot;).withName(&quot;ms1&quot;).get())); </code></pre>
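<p>To cover the roll-back part of the question, a possible sketch (method names may differ slightly between fabric8 client versions; Java 11 file helpers are used for brevity) is to save the YAML to a file and later re-apply it:</p> <pre><code>// save the current state of the deployment
Deployment current = client.apps().deployments()
    .inNamespace(&quot;abc&quot;).withName(&quot;ms1&quot;).get();
Files.writeString(Path.of(&quot;deployment-backup.yaml&quot;), Serialization.asYaml(current));

// ... run your experiments ...

// restore the saved state
Deployment saved = Serialization.unmarshal(
    Files.readString(Path.of(&quot;deployment-backup.yaml&quot;)), Deployment.class);
// on some client/server versions you may need to clear the stale resourceVersion first:
// saved.getMetadata().setResourceVersion(null);
client.apps().deployments().inNamespace(&quot;abc&quot;).createOrReplace(saved);
</code></pre>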
<p>I have kubernetes cluster with two replicas of a PostgreSQL database in it, and I wanted to see the values stored in the database.</p> <p>When I <code>exec</code> myself into one of the two postgres pod (<code>kubectl exec --stdin --tty [postgres_pod] -- /bin/bash</code>) and check the database from within, I have only a partial part of the DB. The rest of the DB data is on the other Postgres pod, and I don't see any directory created by the persistent volumes with all the database stored.</p> <p>So in short I create 4 tables; in one <em>postgres pod</em> I have 4 tables but 2 are empty, in the other <em>postgres pod</em> there are 3 tables and the tables that were empty in the first pod, here are filled with data.</p> <p>Why the pods don't have the same data in it?</p> <p>How can I access and download the entire database?</p> <p>PS. I deploy the cluster using HELM in minikube.</p> <hr /> <p>Here are the YAML files:</p> <pre><code>--- apiVersion: v1 kind: ConfigMap metadata: name: postgres-config labels: app: postgres data: POSTGRES_DB: database-pg POSTGRES_USER: postgres POSTGRES_PASSWORD: postgres PGDATA: /data/pgdata --- kind: PersistentVolume apiVersion: v1 metadata: name: postgres-pv-volume labels: type: local app: postgres spec: storageClassName: manual capacity: storage: 1Gi accessModes: - ReadWriteMany hostPath: path: &quot;/mnt/data&quot; --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgres-pv-claim spec: storageClassName: manual accessModes: - ReadWriteMany resources: requests: storage: 1Gi --- apiVersion: v1 kind: Service metadata: name: postgres labels: app: postgres spec: ports: - name: postgres port: 5432 nodePort: 30432 type: NodePort selector: app: postgres --- apiVersion: apps/v1 kind: StatefulSet metadata: name: postgres spec: serviceName: postgres-service selector: matchLabels: app: postgres replicas: 2 template: metadata: labels: app: postgres spec: containers: - name: postgres image: postgres:13.2 volumeMounts: - name: postgres-disk mountPath: /data # Config from ConfigMap envFrom: - configMapRef: name: postgres-config volumeClaimTemplates: - metadata: name: postgres-disk spec: accessModes: [&quot;ReadWriteOnce&quot;] --- apiVersion: apps/v1 kind: Deployment metadata: name: postgres labels: app: postgres spec: selector: matchLabels: app: postgres replicas: 2 template: metadata: labels: app: postgres spec: containers: - name: postgres image: postgres:13.2 imagePullPolicy: IfNotPresent envFrom: - configMapRef: name: postgres-config volumeMounts: - mountPath: /var/lib/postgresql/data name: postgredb volumes: - name: postgredb persistentVolumeClaim: claimName: postgres-pv-claim --- </code></pre>
<p>I found a solution to my problem of downloading the volume directory; however, when I run multiple replicas of Postgres, the tables of the DB are still scattered between the pods.</p> <p>Here's what I did to download the postgres volume:</p> <p>First of all, minikube only persists volumes under some specific directories:</p> <blockquote> <p>minikube is configured to persist files stored under the following directories, which are made in the Minikube VM (or on your localhost if running on bare metal). You may lose data from other directories on reboots.</p> <pre><code>/data /var/lib/minikube /var/lib/docker /tmp/hostpath_pv /tmp/hostpath-provisioner </code></pre> </blockquote> <p>So I've changed the mount path to be under the <code>/data</code> directory. This made the database volume visible.</p> <p>After this I ssh'ed into minikube and copied the database volume to a new directory (I used <code>/home/docker</code>, since the <code>minikube</code> user is <code>docker</code>).</p> <pre><code>sudo cp -R /data/pgdata /home/docker </code></pre> <p>The volume <code>pgdata</code> was still owned by <code>root</code> (access denied error), so I changed it to be owned by <code>docker</code>. For this I also set a new password which I knew:</p> <pre><code>sudo passwd docker # change password for docker user sudo chown -R docker: /home/docker/pgdata # change owner from root to docker </code></pre> <p>Then you can exit and copy the directory onto your local machine:</p> <pre><code>exit scp -r -i $(minikube ssh-key) docker@$(minikube ip):/home/docker/pgdata [your_local_path] </code></pre> <p><em>NOTE</em></p> <p>Mario's advice, which is to use <code>pg_dump</code>, is probably a better way to copy a database. I still wanted to download the volume directory to see whether it contains the full database when the pods only have a part of all the tables. In the end it turned out it doesn't.</p>
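<p>For reference, the <code>pg_dump</code> approach mentioned in the note can be run directly through <code>kubectl exec</code>. A possible sketch (user and database names taken from the ConfigMap in the question; pod names are placeholders):</p> <pre><code># dump one pod's database to a local file
kubectl exec -i &lt;postgres-pod-name&gt; -- pg_dump -U postgres database-pg &gt; dump.sql

# restore it into another instance
kubectl exec -i &lt;other-postgres-pod&gt; -- psql -U postgres database-pg &lt; dump.sql
</code></pre>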
<p>Attempting to deploy autoscaling to my cluster, but the target shows &quot;unknown&quot;, I have tried different metrics servers to no avail. I followed [this githhub issue](https&quot;//github.com/kubernetes/minikube/issues4456/) even thought I'm using Kubeadm not minikube and it did not change the problem.</p> <p>I also <a href="https://stackoverflow.com/questions/54106725/docker-kubernetes-mac-autoscaler-unable-to-find-metrics">followed this Stack post</a> with no success either.</p> <p>I'm running Ubuntu 20.0.4 LTS.</p> <p>Using kubernetes version 1.23.5, for kubeadm ,kubcectl, ect</p> <p>Following the advice the other stack post, I grabbed the latest version via curl</p> <p><code>curl -L https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml</code></p> <p>I edited the file to be as followed:</p> <pre><code> spec: containers: - args: - --cert-dir=/tmp - --secure-port=4443 - --kubectl-insecure-tls - --kubelet-preferred-address-types=InternalIP - --kubelet-use-node-status-port - --metric-resolution=15s image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1 imagePullPolicy: IfNotPresent </code></pre> <p>I then ran kubectl apply -f components.yaml</p> <p>Still did not work:</p> <p>$ kubectl get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE teastore-webui-hpa Deployment/teastore-webui &lt;unknown&gt;/50% 1 20 1 20h</p> <p>Another suggestion was specifically declaring limits.</p> <pre><code>$ kubectl autoscale deployment teastore-webui --max=20 --cpu-percent=50 --min=1 horizontalpodautoscaler.autoscaling/teastore-webui autoscaled group8@group8:~/Downloads/TeaStore-master/examples/kubernetes$ kubectl get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE teastore-webui Deployment/teastore-webui &lt;unknown&gt;/50% 1 20 0 4s teastore-webui-hpa Deployment/teastore-webui &lt;unknown&gt;/50% 1 20 1 20h </code></pre> <p>That also did not work.</p> <p>Here is an exert of the deployment and service config that I'm trying to autoscale.</p> <pre><code> spec: containers: - name: teastore-webui image: descartesresearch/teastore-webui ports: - containerPort: 8080 env: - name: HOST_NAME value: &quot;teastore-webui&quot; - name: REGISTRY_HOST value: &quot;teastore-registry&quot; resources: requests: cpu: &quot;250m&quot; --- apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler metadata: name: teastore-webui-hpa labels: app: teastore spec: maxReplicas: 20 minReplicas: 1 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: teastore-webui metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 50 --- apiVersion: v1 kind: Service metadata: name: teastore-webui labels: app: teastore run: teastore-webui spec: type: NodePort ports: - port: 8080 nodePort: 30080 protocol: TCP selector: run: teastore-webui </code></pre> <p>Based on other suggestions I have the resource specifically declared as cpu with 50% utilization, and CPU requests are set to 250 milicores.</p> <pre><code> $kubectl describe hpa Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler Name: teastore-webui Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; CreationTimestamp: Sat, 02 Apr 2022 16:07:25 -0400 Reference: Deployment/teastore-webui Metrics: ( current / target ) resource cpu on pods (as a percentage of request): &lt;unknown&gt; / 50% Min replicas: 1 Max replicas: 20 Deployment pods: 1 current / 0 desired Conditions: Type Status Reason 
Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedComputeMetricsReplicas 29m (x12 over 32m) horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io) Warning FailedGetResourceMetric 2m12s (x121 over 32m) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io) </code></pre>
<p>There was a syntax error on line 6 of this YAML: the flag needs to be <code>- --kubelet-insecure-tls</code>, not <code>- --kubectl-insecure-tls</code>.</p> <pre><code>spec: containers: - args: - --cert-dir=/tmp - --secure-port=4443 - --kubectl-insecure-tls - --kubelet-preferred-address-types=InternalIP - --kubelet-use-node-status-port - --metric-resolution=15s image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1 imagePullPolicy: IfNotPresent </code></pre> <p>I noticed it by checking the metrics-server log files with</p> <pre><code>kubectl logs -f metric-server -n kube-system </code></pre> <p>Thank you David Maze for the comment.</p>
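<p>A quick way to verify the fix afterwards (standard commands; values appear once metrics-server is healthy and the HPA has resolved its target):</p> <pre><code># the metrics API should be reported as Available
kubectl get apiservice v1beta1.metrics.k8s.io

# node/pod metrics should return values instead of errors
kubectl top nodes
kubectl top pods

# the HPA target should change from &lt;unknown&gt; to a percentage
kubectl get hpa
</code></pre>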
<p>By using the reference of <a href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-nginx-tls" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-nginx-tls</a> this document, I'm trying to fetch the TLS secrets from AKV to AKS pods. Initially I created and configured <strong>CSI driver configuration</strong> with using <strong>User Assigned Managed Identity</strong>.</p> <p>I have performed the following steps:</p> <ul> <li>Create AKS Cluster with 1 nodepool.</li> <li>Create AKV.</li> <li>Created user assigned managed identity and assign it to the nodepool i.e. to the VMSS created for AKS.</li> <li>Installed CSI Driver helm chart in AKS's <strong>&quot;kube-system&quot;</strong> namespace. and completed all the requirement to perform this operations.</li> <li>Created the TLS certificate and key.</li> <li>By using TLS certificate and key, created .pfx file.</li> <li>Uploaded that .pfx file in the AKV certificates named as <strong>&quot;ingresscert&quot;</strong>.</li> <li>Created new namespace in AKS named as &quot;ingress-test&quot;.</li> <li>Deployed secretProviderClass in that namespace are as follows.:</li> </ul> <pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: azure-tls spec: provider: azure secretObjects: # secretObjects defines the desired state of synced K8s secret objects - secretName: ingress-tls-csi type: kubernetes.io/tls data: - objectName: ingresscert key: tls.key - objectName: ingresscert key: tls.crt parameters: usePodIdentity: &quot;false&quot; useVMManagedIdentity: &quot;true&quot; userAssignedIdentityID: &quot;7*******-****-****-****-***********1&quot; keyvaultName: &quot;*****-*****-kv&quot; # the name of the AKV instance objects: | array: - | objectName: ingresscert objectType: secret tenantId: &quot;e*******-****-****-****-***********f&quot; # the tenant ID of the AKV instance </code></pre> <ul> <li>Deployed the <strong>nginx-ingress-controller</strong> helm chart in the same namespace, where certificates are binded with application.</li> <li>Deployed the Busy Box deployment are as follows:</li> </ul> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: busybox-one labels: app: busybox-one spec: replicas: 1 selector: matchLabels: app: busybox-one template: metadata: labels: app: busybox-one spec: containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29-1 command: - &quot;/bin/sleep&quot; - &quot;10000&quot; volumeMounts: - name: secrets-store-inline mountPath: &quot;/mnt/secrets-store&quot; readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: &quot;azure-tls&quot; --- apiVersion: v1 kind: Service metadata: name: busybox-one spec: type: ClusterIP ports: - port: 80 selector: app: busybox-one </code></pre> <ul> <li>Check secret is created or not by using command</li> </ul> <pre><code>kubectl get secret -n &lt;namespaceName&gt; </code></pre> <p><strong>One thing to notice here is, if I attach shell with the busy box pod and go to the mount path which I provided to mount secrets I have seen that secrets are successfully fetched there. But this secrets are not showing in the AKS's secret list.</strong></p> <p>I have troubleshooted all the AKS,KV and manifest files but not found anything. IF there is anything I have missed or anyone has solution for this please let me know.</p> <p>Thanks in advance..!!!</p>
<p>I added this as a new answer because the formatting was bad in the comments:</p> <p>As you are using the Helm chart, you have to activate the secret sync in the <code>values.yaml</code> of the Helm chart:</p> <pre><code>secrets-store-csi-driver: syncSecret: enabled: true </code></pre> <p>I would still recommend using the <code>csi-secrets-store-provider-azure</code> as an AKS add-on instead of the Helm chart.</p>
<p>I am running a code which opens a raw socket inside a docker container with kubernetes as the orchestrator.</p> <p>Following is my sample code:</p> <pre><code>#include &lt;stdio.h&gt; #include &lt;sys/socket.h&gt; #include &lt;stdlib.h&gt; #include &lt;errno.h&gt; #include &lt;netinet/tcp.h&gt; #include &lt;netinet/ip.h&gt; #include &lt;arpa/inet.h&gt; #include &lt;unistd.h&gt; int main (void) { //Create a raw socket int s = socket (AF_INET, SOCK_RAW, IPPROTO_SCTP); if(s == -1) { perror(&quot;Failed to create socket&quot;); exit(1); } } </code></pre> <p>On running the code as a non-root user in my container/pod, I got this error.</p> <pre><code>./rawSocTest Failed to create socket: Operation not permitted </code></pre> <p>This is obvious as it requires root level privileges to open a raw socket. This I corrected by setting capability cap_net_raw.</p> <pre><code>getcap rawSocTest rawSocTest = cap_net_raw+eip </code></pre> <p>Now when I run it again. I am getting a different error.</p> <pre><code>./rawSocTest bash: ./rawSocTest: Permission denied </code></pre> <p>As per my understanding, setting the capability should have fixed my issue. Am I missing something here? or Is this a known limitation of container?</p> <p>Thanks in advance.</p>
<p>I ran it using the user kubernetes-admin.</p> <p>Adding a sample deployment file with the container security context, to complement what was stated earlier:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: httpd-deployment labels: app: httpd spec: replicas: 1 selector: matchLabels: app: httpd template: metadata: labels: app: httpd spec: containers: - name: my-apache2 image: docker.io/arunsippy12/my-apache2:latest securityContext: allowPrivilegeEscalation: true capabilities: add: [&quot;NET_RAW&quot;] ports: - containerPort: 80 </code></pre> <p>You can also refer to the Kubernetes documentation: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a></p>
<p>I am trying to deploy the aws-load-balancer-controller on my Kubernetes cluster on AWS = by following the steps given in <a href="https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html</a></p> <p>After the yaml file is applied and while trying to check the status of the deployment , I get :</p> <pre><code>$ kubectl get deployment -n kube-system aws-load-balancer-controller NAME READY UP-TO-DATE AVAILABLE AGE aws-load-balancer-controller 0/1 1 0 6m39s </code></pre> <p>I tried to debug it and I got this :</p> <pre><code>$ kubectl logs -n kube-system deployment.apps/aws-load-balancer-controller {&quot;level&quot;:&quot;info&quot;,&quot;logger&quot;:&quot;controller-runtime.metrics&quot;,&quot;msg&quot;:&quot;metrics server is starting to listen&quot;,&quot;addr&quot;:&quot;:8080&quot;} {&quot;level&quot;:&quot;error&quot;,&quot;logger&quot;:&quot;setup&quot;,&quot;msg&quot;:&quot;unable to create controller&quot;,&quot;controller&quot;:&quot;Ingress&quot;,&quot;error&quot;:&quot;the server could not find the requested resource&quot;} </code></pre> <p>The yaml file is pulled directly from <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.3.0/v2_3_0_full.yaml" rel="noreferrer">https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.3.0/v2_3_0_full.yaml</a> and apart from changing the Kubernetes cluster name, no other modifications are done.</p> <p>Please let me know if I am missing some step in the configuration. Any help would be highly appreciated.</p>
<p>I am not sure if this helps, but for me the issue was that the version of the aws-load-balancer-controller was not compatible with the version of Kubernetes.</p> <ul> <li>aws-load-balancer-controller = v2.3.1</li> <li>Kubernetes/EKS = 1.22</li> </ul> <p>Github issue for more information: <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2495" rel="noreferrer">https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2495</a></p>
<p>What is the cause of 'terraform apply' giving me the error below on my local machine? It seems to run fine on the build server.</p> <p>I've also checked the related stackoverflow messages:</p> <ul> <li>Windows Firewall is disabled, thus 80 is allowed on the private network</li> <li>config_path in AKS is not used, no kubeconfig seems to be configured anywhere</li> </ul> <pre><code>Plan: 3 to add, 0 to change, 0 to destroy. Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: yes kubernetes_namespace.azurevotefront-namespace: Creating... kubernetes_service.azurevotefront-metadata: Creating... kubernetes_deployment.azurevotefront-namespace: Creating... β•· β”‚ Error: Post &quot;http://localhost/api/v1/namespaces&quot;: dial tcp 127.0.0.1:80: connect: connection refused β”‚ β”‚ with kubernetes_namespace.azurevotefront-namespace, β”‚ on kubernetes.tf line 1, in resource &quot;kubernetes_namespace&quot; &quot;azurevotefront-namespace&quot;: β”‚ 1: resource &quot;kubernetes_namespace&quot; &quot;azurevotefront-namespace&quot; { β”‚ β•΅ β•· β”‚ Error: Failed to create deployment: Post &quot;http://localhost/apis/apps/v1/namespaces/azurevotefront-namespace/deployments&quot;: dial tcp 127.0.0.1:80: connect: connection refused β”‚ β”‚ with kubernetes_deployment.azurevotefront-namespace, β”‚ on main.tf line 1, in resource &quot;kubernetes_deployment&quot; &quot;azurevotefront-namespace&quot;: β”‚ 1: resource &quot;kubernetes_deployment&quot; &quot;azurevotefront-namespace&quot; { β”‚ β•΅ β•· β”‚ Error: Post &quot;http://localhost/api/v1/namespaces/azurevotefront-namespace/services&quot;: dial tcp 127.0.0.1:80: connect: connection refused β”‚ β”‚ with kubernetes_service.azurevotefront-metadata, β”‚ on main.tf line 47, in resource &quot;kubernetes_service&quot; &quot;azurevotefront-metadata&quot;: β”‚ 47: resource &quot;kubernetes_service&quot; &quot;azurevotefront-metadata&quot; { </code></pre> <p>Kubernetes.tf</p> <pre class="lang-json prettyprint-override"><code>resource &quot;kubernetes_namespace&quot; &quot;azurevotefront-namespace&quot; { metadata { annotations = { name = &quot;azurevotefront-annotation&quot; } labels = { mylabel = &quot;azurevotefront-value&quot; } name = &quot;azurevotefront-namespace&quot; } } </code></pre> <p>Provider.tf</p> <pre class="lang-json prettyprint-override"><code>terraform { backend &quot;azurerm&quot; { key = &quot;terraform.tfstate&quot; resource_group_name = &quot;MASKED&quot; storage_account_name = &quot;MASKED&quot; access_key = &quot;MASKED&quot; container_name = &quot;MASKED&quot; } required_providers { azurerm = { source = &quot;hashicorp/azurerm&quot; version = &quot;~&gt;2.68&quot; } kubernetes = { source = &quot;hashicorp/kubernetes&quot; version = &quot;~&gt; 2.4&quot; } } } provider &quot;azurerm&quot; { tenant_id = &quot;MASKED&quot; subscription_id = &quot;MASKED&quot; client_id = &quot;MASKED&quot; client_secret = &quot;MASKED&quot; features {} } </code></pre>
<p>As mentioned in the comments, you are missing the kubernetes provider configuration:</p> <pre><code>provider &quot;kubernetes&quot; { host = azurerm_kubernetes_cluster.aks.kube_admin_config.0.host client_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_certificate) client_key = base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_key) cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.cluster_ca_certificate) } </code></pre>
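<p>If the AKS cluster is not created in the same Terraform configuration, a variant of the same idea (a sketch; the resource group and cluster names are placeholders) is to read it through a data source and wire its kube_config into the provider:</p> <pre><code>data &quot;azurerm_kubernetes_cluster&quot; &quot;aks&quot; {
  name                = &quot;kubernetescluster&quot;
  resource_group_name = &quot;myResourceGroup&quot;
}

provider &quot;kubernetes&quot; {
  host                   = data.azurerm_kubernetes_cluster.aks.kube_config.0.host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}
</code></pre>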
<p>I use aws-load-balancer-eip-allocations assign static IP to LoadBalancer service using k8s on AWS. The version of EKS is v1.16.13. The doc at <a href="https://github.com/kubernetes/kubernetes/blob/v1.16.0/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L208-L211" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/v1.16.0/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L208-L211</a>, line 210 and 211 says &quot;static IP addresses for the NLB. Only supported on elbv2 (NLB)&quot;. I do not know what the elbv2 is. I use the code below. But, I did not get static IP. Is elbv2 the problem? How do I use elbv2? Please also refer to <a href="https://github.com/kubernetes/kubernetes/pull/69263" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/69263</a> as well.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: ingress-service annotations: service.beta.kubernetes.io/aws-load-balancer-type: &quot;nlb&quot; service.beta.kubernetes.io/aws-load-balancer-eip-allocations: &quot;eipalloc-0187de53333555567&quot; service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: &quot;true&quot; </code></pre>
<p>Keep in mind that you need one EIP per subnet/zone, and by default EKS uses a minimum of 2 zones.</p> <p>This is a working example you may find useful:</p> <pre><code>metadata: annotations: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true' service.beta.kubernetes.io/aws-load-balancer-type: nlb service.beta.kubernetes.io/aws-load-balancer-subnets: &quot;subnet-xxxxxxxxxxxxxxxx,subnet-yyyyyyyyyyyyyyyyy&quot; service.beta.kubernetes.io/aws-load-balancer-eip-allocations: &quot;eipalloc-wwwwwwwwwwwwwwwww,eipalloc-zzzzzzzzzzzzzzzz&quot; </code></pre> <p>I hope this is useful to you.</p>
<p>In microservices environment deployed to the Kubernetes cluster, why will we use API gateway (for example Spring cloud gateway) if Kubernetes supplies the same service with Ingress?</p>
<p>An Ingress controller runs as one Kubernetes service that gets exposed as a LoadBalancer. For a simple mental model, you can think of an ingress as an Nginx server that just forwards traffic to services based on a rule set. An ingress does not have as much functionality as an API gateway: some ingress controllers don't support authentication, rate limiting, application routing, security, merging requests &amp; responses, and other add-on/plugin options.</p> <p>An API gateway can also do simple routing, but it is mostly used when you need more flexibility, security and configuration options. While multiple teams or projects can share a set of Ingress controllers, or Ingress controllers can be specialized on a per-environment basis, there are reasons you might choose to deploy a dedicated API gateway inside Kubernetes rather than leveraging the existing Ingress controller. Using both an Ingress controller and an API gateway inside Kubernetes can give organizations the flexibility to meet business requirements.</p> <p>For accessing a database:</p> <p>If the database and the cluster are somewhere in the cloud, you could use the internal database IP. If not, you should provide the IP of the machine where the database is hosted.</p> <p>You can also refer to this <a href="https://medium.com/@ManagedKube/kubernetes-access-external-services-e4fd643e5097" rel="nofollow noreferrer">Kubernetes Access External Services</a> article.</p>
<p>In Terraform I wrote a resource that deploys to AKS. I want to apply the terraform changes multiple times, but don't want to have the error below. The system automatically needs to detect whether the resource already exists / is identical. Currently it shows me 'already exists', but I don't want it to fail. Any suggestions how I can fix this issue?</p> <pre><code>β”‚ Error: services &quot;azure-vote-back&quot; already exists β”‚ β”‚ with kubernetes_service.example2, β”‚ on main.tf line 91, in resource &quot;kubernetes_service&quot; &quot;example2&quot;: β”‚ 91: resource &quot;kubernetes_service&quot; &quot;example2&quot; { </code></pre> <pre class="lang-json prettyprint-override"><code>provider &quot;azurerm&quot; { features {} } data &quot;azurerm_kubernetes_cluster&quot; &quot;aks&quot; { name = &quot;kubernetescluster&quot; resource_group_name = &quot;myResourceGroup&quot; } provider &quot;kubernetes&quot; { host = data.azurerm_kubernetes_cluster.aks.kube_config[0].host client_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate) client_key = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_key) cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate) } resource &quot;kubernetes_namespace&quot; &quot;azurevote&quot; { metadata { annotations = { name = &quot;azurevote-annotation&quot; } labels = { mylabel = &quot;azurevote-value&quot; } name = &quot;azurevote&quot; } } resource &quot;kubernetes_service&quot; &quot;example&quot; { metadata { name = &quot;azure-vote-front&quot; } spec { selector = { app = kubernetes_pod.example.metadata.0.labels.app } session_affinity = &quot;ClientIP&quot; port { port = 80 target_port = 80 } type = &quot;LoadBalancer&quot; } } resource &quot;kubernetes_pod&quot; &quot;example&quot; { metadata { name = &quot;azure-vote-front&quot; labels = { app = &quot;azure-vote-front&quot; } } spec { container { image = &quot;mcr.microsoft.com/azuredocs/azure-vote-front:v1&quot; name = &quot;front&quot; env { name = &quot;REDIS&quot; value = &quot;azure-vote-back&quot; } } } } resource &quot;kubernetes_pod&quot; &quot;example2&quot; { metadata { name = &quot;azure-vote-back&quot; namespace = &quot;azure-vote&quot; labels = { app = &quot;azure-vote-back&quot; } } spec { container { image = &quot;mcr.microsoft.com/oss/bitnami/redis:6.0.8&quot; name = &quot;back&quot; env { name = &quot;ALLOW_EMPTY_PASSWORD&quot; value = &quot;yes&quot; } } } } resource &quot;kubernetes_service&quot; &quot;example2&quot; { metadata { name = &quot;azure-vote-back&quot; namespace = &quot;azure-vote&quot; } spec { selector = { app = kubernetes_pod.example2.metadata.0.labels.app } session_affinity = &quot;ClientIP&quot; port { port = 6379 target_port = 6379 } type = &quot;ClusterIP&quot; } } </code></pre>
<p>That's the ugly thing about deploying things inside Kubernetes with Terraform: you will run into errors like this from time to time, which is why it is often not recommended :/</p> <p>You could try to just <a href="https://www.terraform.io/cli/commands/state/rm" rel="nofollow noreferrer">remove the record from the state file</a>:</p> <p><code>terraform state rm 'kubernetes_service.example2'</code></p> <p>Terraform will then no longer track this resource and, importantly, <strong>it will not be deleted</strong> on the remote system.</p> <p>Note, however, that Terraform does not automatically adopt existing remote objects into its state: if the object still exists in the cluster and in your configuration, the next apply will hit the same &quot;already exists&quot; error unless you import it first.</p>
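<p>A hedged sketch of that import step (for namespaced resources the kubernetes provider generally expects the ID in <code>namespace/name</code> form; adjust to your provider version):</p> <pre><code>terraform import kubernetes_service.example2 azure-vote/azure-vote-back
terraform plan   # should no longer try to create this service
</code></pre>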
<p>I am trying to enable kubernetes for Docker Desktop. Kubernetes is however failing to start.</p> <p>My log file shows:</p> <pre><code>cannot get lease for master node: Get &quot;https://kubernetes.docker.internal:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/docker-desktop&quot;: x509: certificate signed by unknown authority: Get &quot;https://kubernetes.docker.internal:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/docker-desktop&quot;: x509: certificate signed by unknown authority </code></pre> <p>I have NO_PROXY env var set already, and my hosts file has <code>127.0.0.1 kubernetes.docker.internal</code> at the end, as was suggested <a href="https://stackoverflow.com/questions/66364587/docker-for-windows-stuck-at-kubernetes-is-starting">here</a></p> <p>I appreciate any help</p>
<p>The workarounds below may help you resolve your issue.</p> <p>You can try to solve this by:</p> <ul> <li>Opening <code>~\.kube\config</code> in a text editor</li> <li>Replacing <a href="https://kubernetes.docker.internal:6443" rel="nofollow noreferrer">https://kubernetes.docker.internal:6443</a> with https://localhost:6443</li> <li>Trying to connect again</li> </ul> <p>From this <a href="https://forums.docker.com/t/waiting-for-kubernetes-to-be-up-and-running/47009" rel="nofollow noreferrer">issue</a>:</p> <ul> <li>Reset Docker to factory settings</li> <li>Quit Docker</li> <li>Set the KUBECONFIG environment variable to %USERPROFILE%\.kube\config</li> <li>Restart Docker and enable Kubernetes (it still took a few minutes to start)</li> </ul> <p>Attaching troubleshooting <a href="https://bobcares.com/blog/docker-x509-certificate-signed-by-unknown-authority/" rel="nofollow noreferrer">blog1</a> and <a href="https://velaninfo.com/rs/techtips/docker-certificate-authority/" rel="nofollow noreferrer">blog2</a> for your reference.</p>
<p>I am using the opentelemetry-ruby otlp exporter for auto instrumentation: <a href="https://github.com/open-telemetry/opentelemetry-ruby/tree/main/exporter/otlp" rel="nofollow noreferrer">https://github.com/open-telemetry/opentelemetry-ruby/tree/main/exporter/otlp</a></p> <p>The otel collector was installed as a daemonset: <a href="https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-collector" rel="nofollow noreferrer">https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-collector</a></p> <p>I am trying to get the OpenTelemetry collector to collect traces from the Rails application. Both are running in the same cluster, but in different namespaces.</p> <p>We have enabled auto-instrumentation in the app, but the rails logs are currently showing these errors:</p> <p><code>E, [2022-04-05T22:37:47.838197 #6] ERROR -- : OpenTelemetry error: Unable to export 499 spans</code></p> <p>I set the following env variables within the app:</p> <pre><code>OTEL_LOG_LEVEL=debug OTEL_EXPORTER_OTLP_ENDPOINT=http://0.0.0.0:4318 </code></pre> <p>I can't confirm that the application can communicate with the collector pods on this port. Curling this address from the rails/ruby app returns &quot;Connection Refused&quot;. However I am able to curl <code>http://&lt;OTEL_POD_IP&gt;:4318</code> which returns 404 page not found.</p> <p>From inside a pod:</p> <pre><code># curl http://localhost:4318/ curl: (7) Failed to connect to localhost port 4318: Connection refused # curl http://10.1.0.66:4318/ 404 page not found </code></pre> <p>This helm chart created a daemonset but there is no service running. Is there some setting I need to enable to get this to work?</p> <p>I confirmed that otel-collector is running on every node in the cluster and the daemonset has HostPort set to 4318.</p>
<p>The correct solution is to use the <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">Kubernetes Downward API</a> to fetch the node IP address, which will allow you to export the traces directly to the daemonset pod within the same node:</p> <pre class="lang-yaml prettyprint-override"><code> containers: - name: my-app image: my-image env: - name: HOST_IP valueFrom: fieldRef: fieldPath: status.hostIP - name: OTEL_EXPORTER_OTLP_ENDPOINT value: http://$(HOST_IP):4318 </code></pre> <p>Note that using the deployment's service as the endpoint (<code>&lt;service-name&gt;.&lt;namespace&gt;.svc.cluster.local</code>) is incorrect, as it effectively bypasses the daemonset and sends the traces directly to the deployment, which makes the daemonset useless.</p>
<h2>My specific case</h2> <p>I have machines in a <a href="https://k3s.io" rel="nofollow noreferrer">k3s</a> cluster. I upgraded from an older version (v1.20?) to v1.21.1+k3s1 a few days ago by running <code>curl -sfL https://get.k3s.io | sh -</code> with <code>INSTALL_K3S_CHANNEL</code> set to <code>latest</code>.</p> <p>My main reason for installing was that I wanted the bundled ingress controller to go from using traefik v1 to v2.</p> <p>The upgrade worked, but I still have traefik 1.81.0:</p> <pre><code>$ k -n kube-system describe deployment.apps/traefik Name: traefik Namespace: kube-system CreationTimestamp: Mon, 29 Mar 2021 22:26:11 -0700 Labels: app=traefik app.kubernetes.io/managed-by=Helm chart=traefik-1.81.0 heritage=Helm release=traefik Annotations: deployment.kubernetes.io/revision: 1 meta.helm.sh/release-name: traefik meta.helm.sh/release-namespace: kube-system Selector: app=traefik,release=traefik Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app=traefik chart=traefik-1.81.0 heritage=Helm release=traefik </code></pre> <pre><code>$ k -n kube-system describe addon traefik Name: traefik Namespace: kube-system Labels: &lt;none&gt; Annotations: &lt;none&gt; API Version: k3s.cattle.io/v1 Kind: Addon Metadata: Creation Timestamp: 2021-03-25T05:30:34Z Generation: 1 Managed Fields: API Version: k3s.cattle.io/v1 Fields Type: FieldsV1 fieldsV1: f:spec: .: f:checksum: f:source: f:status: Manager: k3s Operation: Update Time: 2021-03-25T05:30:34Z Resource Version: 344 UID: ... Spec: Checksum: 2925a96b84dfaab024323ccc7bf1c836b77b9b5f547e0a77348974c7f1e67ad2 Source: /var/lib/rancher/k3s/server/manifests/traefik.yaml Status: Events: &lt;none&gt; </code></pre> <h2>What I understand about k3s addons</h2> <p>k3s installs with <a href="https://rancher.com/docs/rke/latest/en/config-options/add-ons/" rel="nofollow noreferrer">addons</a> including ingress, DNS, local-storage. These are set up using helm charts and a custom resource definition called <code>addon</code>.</p> <p>For traefik, there's also a job that appears called <code>helm-install-traefik</code>. It looks like this job ran when I upgraded the cluster:</p> <pre><code>$ k describe jobs -A Name: helm-install-traefik Namespace: kube-system Selector: controller-uid=b2130dde-45ff-4d27-8e22-ee8f7a621d35 Labels: helmcharts.helm.cattle.io/chart=traefik objectset.rio.cattle.io/hash=c42f5b5dd9ee50718523a82c68d4392a7dec9fc4 Annotations: objectset.rio.cattle.io/applied: ... objectset.rio.cattle.io/id: helm-controller objectset.rio.cattle.io/owner-gvk: helm.cattle.io/v1, Kind=HelmChart objectset.rio.cattle.io/owner-name: traefik objectset.rio.cattle.io/owner-namespace: kube-system Parallelism: 1 Completions: 1 Start Time: Thu, 17 Jun 2021 09:13:07 -0700 Completed At: Thu, 17 Jun 2021 09:13:22 -0700 Duration: 15s Pods Statuses: 0 Running / 1 Succeeded / 0 Failed Pod Template: Labels: controller-uid=b2130dde-45ff-4d27-8e22-ee8f7a621d35 helmcharts.helm.cattle.io/chart=traefik job-name=helm-install-traefik Annotations: helmcharts.helm.cattle.io/configHash: ... 
Service Account: helm-traefik Containers: helm: Image: rancher/klipper-helm:v0.5.0-build20210505 Port: &lt;none&gt; Host Port: &lt;none&gt; Args: install Environment: NAME: traefik VERSION: REPO: HELM_DRIVER: secret CHART_NAMESPACE: kube-system CHART: https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz HELM_VERSION: TARGET_NAMESPACE: kube-system NO_PROXY: .svc,.cluster.local,10.42.0.0/16,10.43.0.0/16 Mounts: /chart from content (rw) /config from values (rw) Volumes: values: Type: ConfigMap (a volume populated by a ConfigMap) Name: chart-values-traefik Optional: false content: Type: ConfigMap (a volume populated by a ConfigMap) Name: chart-content-traefik Optional: false Events: &lt;none&gt; </code></pre> <p>Looks like my addons weren't re-created in the update:</p> <pre><code>$ k -n kube-system get addons NAME AGE aggregated-metrics-reader 90d auth-delegator 90d auth-reader 90d ccm 90d coredns 90d local-storage 90d metrics-apiservice 90d metrics-server-deployment 90d metrics-server-service 90d resource-reader 90d rolebindings 90d traefik 90d </code></pre> <h2>The question</h2> <p>The docs give the impression that running the k3s install script should update add-ons. Should it? If so, why hasn't my traefik deployment been upgraded? What can I do to force it to upgrade?</p>
<p>Posting this as a community wiki, feel free to edit and expand.</p> <p>First, regarding the <code>job run</code> from your question: you can see in the output that the job still installed the Traefik 1.81.0 chart, i.e. Traefik v1:</p> <pre><code>CHART: https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz </code></pre> <p>Related to <code>traefik</code> I found some information in the k3s documentation:</p> <blockquote> <p>If Traefik is not disabled K3s versions 1.20 and earlier will install Traefik v1, while K3s versions 1.21 and later will install Traefik v2 if v1 is not already present.</p> <p>To migrate from an older Traefik v1 instance please refer to the <a href="https://doc.traefik.io/traefik/migration/v1-to-v2/" rel="nofollow noreferrer">Traefik documentation</a> and <a href="https://github.com/traefik/traefik-migration-tool" rel="nofollow noreferrer">migration tool</a>.</p> </blockquote> <p><a href="https://rancher.com/docs/k3s/latest/en/networking/#traefik-ingress-controller" rel="nofollow noreferrer">Reference for the above</a></p> <p>So since Traefik v1 was already present in your cluster, the upgrade did not replace it with v2. Based on my research, <a href="https://rancher.com/docs/k3s/latest/en/upgrades/basic/" rel="nofollow noreferrer">upgrading using the command line</a> covers only the Kubernetes system components (the docs say nothing about addons), while for <code>RKE</code> it is clearly stated that addons are updated:</p> <blockquote> <p>When a cluster is upgraded with rke up, using the default options, the following process is used:</p> <p>1 - The etcd plane gets updated, one node at a time.</p> <p>2 - Controlplane nodes get updated, one node at a time. This includes the controlplane components and worker plane components of the controlplane nodes.</p> <p>3 - Worker plane components of etcd nodes get updated, one node at a time.</p> <p>4 - Worker nodes get updated in batches of a configurable size. The default configuration for the maximum number of unavailable nodes is ten percent, rounded down to the nearest node, with a minimum batch size of one node.</p> <p>5 - <strong>Addons get upgraded one by one</strong>.</p> </blockquote> <p><a href="https://rancher.com/docs/rke/latest/en/upgrades/how-upgrades-work/" rel="nofollow noreferrer">Reference for RKE</a></p>
<p>We are deploying azure ingress.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: name1 annotations: kubernetes.io/ingress.class: nginx spec: rules: </code></pre> <hr /> <p>If we do not explicitly mention ingress class(<code>kubernetes.io/ingress.class:</code>) in the manifest, what would be the default ingress controller type?</p> <p>Nginx documentation says default as N/A. <a href="https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/" rel="nofollow noreferrer">https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/</a></p>
<p>If you do not specify any ingress class explicitly, the value is simply omitted: there is no implicit default controller type, and the Ingress will typically only be picked up by a controller if the cluster has an IngressClass marked as the default (some controllers can also be configured to watch class-less Ingresses). You can specify a default ingress class for your cluster as you can read <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#default-ingress-class" rel="nofollow noreferrer">here</a>.</p> <p>You should also migrate to the <code>ingressClassName</code> field, as the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation" rel="nofollow noreferrer">annotation is deprecated</a>.</p>
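<p>For illustration, a minimal sketch of both pieces (names and the backend service are placeholders): mark one IngressClass as the cluster default, and/or reference it explicitly from the Ingress via <code>spec.ingressClassName</code>:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: &quot;true&quot;
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name1
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
</code></pre>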
<p>I'm unable to find any references other than this link that confirms that the failure has to be consecutive. <a href="https://github.com/kubernetes/website/issues/37414" rel="nofollow noreferrer">https://github.com/kubernetes/website/issues/37414</a></p> <p>Background: Our Java application is getting restarted every day because of liveness probe failure. The application's access logs don't show 3 consecutive failures. So wanted to understand the behavior of probes.</p>
<p>The liveness check is created when Kubernetes creates the pod and is recreated each time that pod is restarted. In your configuration you have set initialDelaySeconds: 20, so after creating a pod Kubernetes will wait 20 seconds and then start calling the liveness probe; by default the probe has to fail 3 consecutive times (failureThreshold: 3). After 3 consecutive failures, Kubernetes will restart the container according to the pod's restartPolicy. You will also be able to see this in the events.</p> <p>When you are using <code>kubectl get events</code> you only get events from the last hour.</p> <pre><code>Kubectl get events LAST SEEN TYPE REASON OBJECT 47m Normal Starting node/kubeadm 43m Normal Scheduled pod/liveness-http 43m Normal Pulling pod/liveness-http 43m Normal Pulled pod/liveness-http 43m Normal Created pod/liveness-http 43m Normal Started pod/liveness-http 4m41s Warning Unhealthy pod/liveness-http 40m Warning Unhealthy pod/liveness-http 12m20s Warning BackOff pod/liveness-http </code></pre> <p>The same command after ~1 hour:</p> <pre><code>LAST SEEN TYPE REASON OBJECT 43s Normal Pulling pod/liveness-http 8m40s Warning Unhealthy pod/liveness-http 20m Warning BackOff pod/liveness-http </code></pre> <p>So that might be the reason you are seeing only one failure.</p> <p>A liveness probe can be configured using the fields below:</p> <ul> <li><p>initialDelaySeconds: Number of seconds after the container has started before liveness or readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.</p> </li> <li><p>periodSeconds: How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.</p> </li> <li><p>timeoutSeconds: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.</p> </li> <li><p>successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1.</p> </li> <li><p>failureThreshold: When a probe fails, Kubernetes will try failureThreshold times before giving up. Giving up in case of a liveness probe means restarting the container. In case of a readiness probe the Pod will be marked Unready. Defaults to 3. Minimum value is 1.</p> </li> </ul> <p>If you set the minimal values for periodSeconds, timeoutSeconds, successThreshold and failureThreshold you can expect more frequent checks and faster restarts.</p> <p>Liveness probe:</p> <ul> <li>Kubernetes will restart a container in a pod after <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="noreferrer">failureThreshold</a> times.
By default it is 3 times, so after 3 consecutive failed probes.</li> <li>Depending on the configuration of your container, the time needed for container termination can vary a lot</li> <li>You can adjust both the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="noreferrer">failureThreshold</a> and <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="noreferrer">terminationGracePeriodSeconds</a> parameters, so the container will be restarted immediately after every failed probe</li> </ul> <p>You can find more information in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="noreferrer">liveness probe configuration</a> docs and these <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes" rel="noreferrer">best practices</a>.</p>
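<p>To make the field list concrete, here is a minimal sketch of a liveness probe tuned for fast restarts (the endpoint, port and values are illustrative, not a recommendation for every workload):</p> <pre><code>livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 20
  periodSeconds: 5        # probe every 5 seconds
  timeoutSeconds: 1
  successThreshold: 1     # must be 1 for liveness
  failureThreshold: 3     # restart after 3 consecutive failures
</code></pre>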
<p>I've scanned through all resources, still cannot find a way to change <code>extraPortMappings</code> in Kind cluster without deleting and creating again.</p> <p>Is it possible and how?</p>
<p>It's not said explicitly in the official docs, but I found some references that confirm: your thoughts are correct and changing <code>extraPortMappings</code> (as well as other cluster settings) is only possible with recreation of the kind cluster.</p> <blockquote> <p>if you use extraPortMappings in your config, they are β€œfixed” and cannot be modified, unless you recreate the cluster.</p> </blockquote> <p><a href="https://blog.devgenius.io/updating-kind-kubernetes-api-certificate-after-reboot-1521b43f7574" rel="nofollow noreferrer">Source - Issues I Came Across</a></p> <blockquote> <p>Note that the cluster configuration cannot be changed. The only workaround is to delete the cluster (see below) and create another one with the new configuration.</p> </blockquote> <p><a href="https://10clouds.com/blog/kubernetes-environment-devs/" rel="nofollow noreferrer">Source - Kind Installation</a></p> <blockquote> <p>However, there are obvious disadvantages in the configuration and update of the cluster, and the cluster can only be configured and updated by recreating the cluster. So you need to consider configuration options when you initialize the cluster.</p> </blockquote> <p><a href="https://www.programmersought.com/article/12347371148/" rel="nofollow noreferrer">Source - Restrictions</a></p>
<p>I have created one docker image and publish that image to Jfrog Artifactory. Now, I am trying to create kubernetes Pod or trying to create deployment using that image.</p> <p>Find the content of pod.yaml file</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: &lt;name of pod&gt; spec: nodeSelector: type: &quot;&lt;name of node&gt;&quot; containers: - name: &lt;name of container&gt; image: &lt;name and path of image&gt; imagePullPolicy: Always </code></pre> <p>But I am getting <strong>ErrImagePull</strong> status after pod creation. That means pod is not getting created succesfully. Error: error: code = Unknown desc = failed to pull and unpack image</p> <p>Can anyone please help me with this?</p>
<p>Please ensure that you create a <code>kubernetes.io/dockerconfigjson</code> secret for your Artifactory registry in the same namespace as the Pod, and reference it from the Pod spec via <code>imagePullSecrets</code>.</p>
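<p>A minimal sketch of that setup (registry URL, credentials and secret name are placeholders for your Artifactory instance):</p> <pre><code>kubectl create secret docker-registry jfrog-pull-secret \
  --docker-server=&lt;your-artifactory-registry&gt; \
  --docker-username=&lt;user&gt; \
  --docker-password=&lt;password-or-api-key&gt; \
  -n &lt;namespace&gt;
</code></pre> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: &lt;name of pod&gt;
spec:
  imagePullSecrets:
  - name: jfrog-pull-secret
  containers:
  - name: &lt;name of container&gt;
    image: &lt;name and path of image&gt;
    imagePullPolicy: Always
</code></pre>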
<p>It's currently possible to allow a single domain or subdomain but I would like to allow multiple origins. I have tried many things like adding headers with snipets but had no success.</p> <p>This is my current ingress configuration:</p> <pre><code>kind: Ingress apiVersion: extensions/v1beta1 metadata: name: nginx-ingress namespace: default selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/nginx-ingress uid: adcd75ab-b44b-420c-874e-abcfd1059592 resourceVersion: '259992616' generation: 7 creationTimestamp: '2020-06-10T12:15:18Z' annotations: cert-manager.io/cluster-issuer: letsencrypt-prod ingress.kubernetes.io/enable-cors: 'true' ingress.kubernetes.io/force-ssl-redirect: 'true' kubernetes.io/ingress.class: nginx kubernetes.io/tls-acme: 'true' nginx.ingress.kubernetes.io/cors-allow-credentials: 'true' nginx.ingress.kubernetes.io/cors-allow-headers: 'Authorization, X-Requested-With, Content-Type' nginx.ingress.kubernetes.io/cors-allow-methods: 'GET, PUT, POST, DELETE, HEAD, OPTIONS' nginx.ingress.kubernetes.io/cors-allow-origin: 'https://example.com' nginx.ingress.kubernetes.io/enable-cors: 'true' nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/secure-backends: 'true' </code></pre> <p>I also would like to extend the cors-allow-origin like:</p> <pre><code>nginx.ingress.kubernetes.io/cors-allow-origin: 'https://example.com, https://otherexample.com' </code></pre> <p>Is it possible to allow multiple domains in other ways?</p>
<p>Ingress-nginx doesn’t support CORS with multiple origins in its <code>cors-allow-origin</code> annotation.</p> <p>However, you can work around it with the <strong>nginx.ingress.kubernetes.io/configuration-snippet</strong> annotation, which matches the request's <code>Origin</code> header against a regular expression and echoes it back when it matches:</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: | if ($http_origin ~* &quot;^https?://((?:exactmatch\.com)|(?:regexmatch\.com))$&quot;) { add_header &quot;Access-Control-Allow-Origin&quot; &quot;$http_origin&quot; always; add_header &quot;Access-Control-Allow-Methods&quot; &quot;GET, PUT, POST, OPTIONS&quot; always; add_header &quot;Access-Control-Allow-Headers&quot; &quot;DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization&quot; always; add_header &quot;Access-Control-Expose-Headers&quot; &quot;Content-Length,Content-Range&quot; always; } </code></pre> <p>You can find more information in this <a href="https://github.com/kubernetes/ingress-nginx/issues/5496" rel="nofollow noreferrer">ingress-nginx issue</a>.</p>
<p>I am trying to achieve <strong>Client IP based routing</strong> using Istio features.</p> <p>I have two versions of application <strong>V1(Stable)</strong> and <strong>V2(Canary)</strong>. I want to route the traffic to the canary version(V2) of the application if the Client IP is from a particular CIDR block (Mostly the CIDR my org) and all other traffic should be routed to the stable version(V1) which is the live traffic.</p> <p>Is there any way to achieve this feature using Istio?</p>
<p>Yes, this is possible.</p> <hr /> <p>Since you have a <code>load balancer</code> in front of the kubernetes cluster, the first thing to address is <strong>preserving</strong> the <code>client IP</code>: because of NAT, the <code>load balancer</code> opens a new session towards the kubernetes cluster and the original <code>source IP</code> is lost, so it has to be preserved explicitly. How this is done depends on the <code>load balancer</code> type used. Please see:</p> <p><a href="https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/#source-ip-address-of-the-original-client" rel="nofollow noreferrer">Source IP address of the original client</a></p> <hr /> <p>The next part is to configure a <code>virtual service</code> that routes traffic based on the <code>client IP</code>. For HTTP traffic this can be done by matching on the <code>x-forwarded-for</code> header. Below is a working example:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: app-vservice namespace: test spec: hosts: - &quot;app-service&quot; http: - match: - headers: x-forwarded-for: exact: 123.123.123.123 route: - destination: host: app-service subset: v2 - route: - destination: host: app-service subset: v1 </code></pre> <p><a href="https://github.com/istio/istio/issues/24852#issuecomment-647831912" rel="nofollow noreferrer">Source - Github issue comment</a></p> <p>There is also a <a href="https://github.com/istio/istio/issues/24852#issuecomment-647704245" rel="nofollow noreferrer">comment</a> about TCP traffic and using <code>addresses</code> in a <code>ServiceEntry</code>.</p>
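<p>As one concrete illustration of the first step (a sketch only; whether it applies depends on your load balancer type, see the linked Istio documentation): with a TCP/UDP proxy load balancer in front of the ingress gateway, the client IP can often be preserved by setting <code>externalTrafficPolicy: Local</code> on the gateway service, assuming the default <code>istio-ingressgateway</code> service in the <code>istio-system</code> namespace:</p> <pre><code>kubectl patch svc istio-ingressgateway -n istio-system \
  -p '{&quot;spec&quot;:{&quot;externalTrafficPolicy&quot;:&quot;Local&quot;}}'
</code></pre>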
<p>For each namespace in K8s (existing ones), I would like to create an object which contains a text, for example the Jenkins URL of the job.</p> <p>Which K8s object should be used for this?</p> <p>Thanks,</p>
<p>As @jordanm said, you can use ConfigMaps as volumes; you can also create them from literal values with the <code>--from-literal</code> option.</p> <p>To do so, follow the basic syntax:</p> <pre><code>kubectl create configmap [configmap_name] --from-literal [key1]=[value1] </code></pre> <p>To see the details of a Kubernetes ConfigMap and the values of its keys, use the command:</p> <pre><code>kubectl get configmaps [configmap_name] -o yaml </code></pre> <p>The output should display the information in yaml format:</p> <pre><code>… apiVersion: v1 data: key1: value1 … </code></pre> <p>Once you have created a ConfigMap, you can mount the configuration into the pod by using volumes. Add a volume section to the yaml file of your pod:</p> <pre><code>volumes: - name: config configMap: name: [configmap_name] items: - key: [key/file_name] path: [inside_the_pod] </code></pre> <p>For more info refer to this <a href="https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_create_configmap/" rel="nofollow noreferrer">document</a>.</p>
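<p>Putting it together, here is a minimal pod sketch that mounts the ConfigMap as a file (the names <code>app-config</code>, <code>jenkins-url</code> and the mount path are only example values matching the Jenkins URL use case from the question):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: busybox
    command: [&quot;sh&quot;, &quot;-c&quot;, &quot;cat /etc/app-config/jenkins-url; sleep 3600&quot;]
    volumeMounts:
    - name: config
      mountPath: /etc/app-config
  volumes:
  - name: config
    configMap:
      name: app-config
      items:
      - key: jenkins-url
        path: jenkins-url
</code></pre> <p>The ConfigMap itself could be created once per namespace with <code>kubectl create configmap app-config --from-literal jenkins-url=https://jenkins.example.com/job/foo -n &lt;namespace&gt;</code>.</p>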
<p>When I am developing an operator, I need to build a pod object. When I write pod.Spec.Volumes, what I need to mount is a configmap type, so I operate according to the structure in the core\v1 package to ConfigMapVolumeSource When I created the structure, I found that the name field of the configmap was not specified. There were only four other fields. The directory of my file was:</p> <p><a href="https://i.stack.imgur.com/TreRW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TreRW.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/z8sZr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/z8sZr.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/xYcTd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xYcTd.png" alt="enter image description here" /></a> So when I build the pod, it will report an error. This field is required</p> <p><a href="https://i.stack.imgur.com/ENReK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ENReK.png" alt="enter image description here" /></a></p> <p>Am I using the wrong version? thank you very much for your help!</p>
<p>The <code>name</code> field is not missing: it comes from the embedded <code>LocalObjectReference</code> struct inside <code>ConfigMapVolumeSource</code>. In Go you set it through that embedded struct, for example <code>corev1.ConfigMapVolumeSource{LocalObjectReference: corev1.LocalObjectReference{Name: &quot;my-configmap&quot;}}</code> (assuming <code>corev1</code> is the imported <code>k8s.io/api/core/v1</code> package), which satisfies the required name validation.</p>
<p>Say that I have a job history limit &gt; 1, is there a way to use kubectl to find which jobs that have been spawned by a CronJob?</p>
<p>Use labels.</p> <pre><code>$ kubectl get jobs -n namespace -l created-by=cronjob </code></pre> <p><strong>created-by=cronjob</strong> is the label you define in the <code>jobTemplate</code> of your CronJob, so every Job it spawns carries it:</p> <pre><code>apiVersion: batch/v1 kind: CronJob metadata: name: hello spec: schedule: &quot;* * * * *&quot; jobTemplate: metadata: labels: created-by: cronjob spec: template: spec: containers: - name: hello image: busybox imagePullPolicy: IfNotPresent command: - /bin/sh - -c - date; echo Hello from the Kubernetes cluster restartPolicy: OnFailure </code></pre>
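<p>If the CronJob was created without such a label, you can still find its Jobs through the owner reference or the name prefix the controller adds (a sketch, assuming the CronJob is called <code>hello</code>; the job name below is just an example):</p> <pre><code># Jobs spawned by a CronJob are named &lt;cronjob-name&gt;-&lt;scheduled-timestamp&gt;
$ kubectl get jobs -n namespace | grep '^hello-'

# or check the owner of a particular job
$ kubectl get job hello-27950842 -n namespace \
  -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
</code></pre>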
<p>I have some questions regarding my minikube cluster, specifically why there needs to be a tunnel, what the tunnel means actually, and where the port numbers come from.</p> <h2>Background</h2> <p>I'm obviously a total kubernetes beginner...and don't have a ton of networking experience.</p> <p>Ok. I have the following docker image which I pushed to docker hub. It's a hello express app that just prints out &quot;Hello world&quot; at the <code>/</code> route.</p> <p>DockerFile:</p> <pre><code>FROM node:lts-slim RUN mkdir /code COPY package*.json server.js /code/ WORKDIR /code RUN npm install EXPOSE 3000 CMD [&quot;node&quot;, &quot;server.js&quot;] </code></pre> <p>I have the following pod spec:</p> <p>web-pod.yaml:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: web-pod spec: containers: - name: web image: kahunacohen/hello-kube:latest ports: - containerPort: 3000 </code></pre> <p>The following service:</p> <p>web-service.yaml</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: web-service spec: type: NodePort selector: app: web-pod ports: - port: 8080 targetPort: 3000 protocol: TCP name: http </code></pre> <p>And the following deployment:</p> <p>web-deployment.yaml</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: web-deployment spec: replicas: 2 selector: matchLabels: app: web-pod service: web-service template: metadata: labels: app: web-pod service: web-service spec: containers: - name: web image: kahunacohen/hello-kube:latest ports: - containerPort: 3000 protocol: TCP </code></pre> <p>All the objects are up and running and look good after I create them with kubectl.</p> <p>I do this:</p> <pre><code>$ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 7h5m web-service NodePort 10.104.15.61 &lt;none&gt; 8080:32177/TCP 25m </code></pre> <ol start="4"> <li>Then, as per a book I'm reading if I do:</li> </ol> <pre><code>$ curl $(minikube ip):8080 # or :32177, # or :3000 </code></pre> <p>I get no response.</p> <p>I found when I do this, however I can access the app by going to <code>http://127.0.0.1:52650/</code>:</p> <pre><code>$ minikube service web-service |-----------|-------------|-------------|---------------------------| | NAMESPACE | NAME | TARGET PORT | URL | |-----------|-------------|-------------|---------------------------| | default | web-service | http/8080 | http://192.168.49.2:32177 | |-----------|-------------|-------------|---------------------------| πŸƒ Starting tunnel for service web-service. |-----------|-------------|-------------|------------------------| | NAMESPACE | NAME | TARGET PORT | URL | |-----------|-------------|-------------|------------------------| | default | web-service | | http://127.0.0.1:52472 | |-----------|-------------|-------------|------------------------| </code></pre> <h2>Questions</h2> <ol> <li>what this &quot;tunnel&quot; is and why we need it?</li> <li>what the targetPort is for (8080)?</li> <li>What this line means when I do <code>kubectl get services</code>:</li> </ol> <pre><code>web-service NodePort 10.104.15.61 &lt;none&gt; 8080:32177/TCP 25m </code></pre> <p>Specifically, what is that port mapping means and where <code>32177</code> comes from?</p> <ol start="4"> <li>Is there some kind of problem with simply mapping the internal port to the same port number externally, e.g. 3000:3000? 
If so, do we specifically have to provide this mapping?</li> </ol>
<p>Let me answer all your questions.</p> <p>0 - There's no need to create pods separately (unless it's something to test); this should be done by creating deployments (or statefulsets, depending on the app and its needs), which create a <code>replicaset</code> that is responsible for keeping the right amount of pods in operational condition (you can get familiar with <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployments in kubernetes</a>).</p> <hr /> <p>1 - <a href="https://minikube.sigs.k8s.io/docs/commands/tunnel/" rel="nofollow noreferrer">Tunnel</a> is used to expose the service from inside of the VM where minikube is running to the host machine's network. It works with the <code>LoadBalancer</code> service type. Please refer to <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="nofollow noreferrer">access applications in minikube</a>.</p> <p>1.1 - The reason why the application is not accessible on <code>localhost:NodePort</code> is that the NodePort is exposed within the VM where <code>minikube</code> is running, not on your local machine.</p> <p>You can find the minikube VM's IP by running <code>minikube ip</code> and then <code>curl &lt;given-ip&gt;:&lt;NodePort&gt;</code>. You should get a response from your app.</p> <hr /> <p>2 - <code>targetPort</code> indicates the port on the pods to which the service forwards the connection. Please refer to <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">define the service</a>.</p> <p>In <code>minikube</code> it may be confusing since the output points to the <code>service port</code>, not to the <code>targetPort</code> which is defined within the service. I think the idea was to indicate on which port the <code>service</code> is accessible within the cluster.</p> <hr /> <p>3 - As for this question, the output has column headers and you can read them literally. For instance:</p> <pre><code>$ kubectl get svc -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR web-service NodePort 10.106.206.158 &lt;none&gt; 80:30001/TCP 21m app=web-pod </code></pre> <p><code>NodePort</code> comes from your <code>web-service.yaml</code> for the <code>service</code> object. <code>Type</code> is explicitly specified and therefore a <code>NodePort</code> is allocated. If you don't specify the <code>type</code> of the service, it will be created as <code>ClusterIP</code> type and will be accessible only within the kubernetes cluster. Please refer to <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">Publishing Services (ServiceTypes)</a>.</p> <p>When a service is created with the <code>ClusterIP</code> type, there won't be a <code>NodePort</code> in the output. E.g.</p> <pre><code>$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE web-service ClusterIP 10.106.206.158 &lt;none&gt; 80/TCP 23m </code></pre> <p><code>External-IP</code> will pop up when the <code>LoadBalancer</code> service type is used. Additionally, for <code>minikube</code> the address will appear once you run <code>minikube tunnel</code> in a different shell. After that your service will be accessible on your host machine via <code>External-IP</code> + <code>service port</code>.</p> <hr /> <p>4 - There are no issues with such a mapping. Moreover, this is the default behaviour for kubernetes:</p> <blockquote> <p>Note: A Service can map any incoming port to a targetPort.
By default and for convenience, the targetPort is set to the same value as the port field.</p> </blockquote> <p>Please refer to <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">define a service</a>.</p> <hr /> <p>Edit:</p> <p>Depending on the <a href="https://minikube.sigs.k8s.io/docs/drivers/" rel="nofollow noreferrer">driver</a> of <code>minikube</code> (usually <code>virtualbox</code> or <code>docker</code>; on a linux VM this can be checked in <code>.minikube/profiles/minikube/config.json</code>), <code>minikube</code> can have different port forwardings. E.g. I have a <code>minikube</code> based on the <code>docker</code> driver and I can see some mappings:</p> <pre><code>$ docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ebcbc898b557 gcr.io/k8s-minikube/kicbase:v0.0.23 &quot;/usr/local/bin/entr…&quot; 5 days ago Up 5 days 127.0.0.1:49157-&gt;22/tcp, 127.0.0.1:49156-&gt;2376/tcp, 127.0.0.1:49155-&gt;5000/tcp, 127.0.0.1:49154-&gt;8443/tcp, 127.0.0.1:49153-&gt;32443/tcp minikube </code></pre> <p>For instance, port 22 is forwarded so that you can ssh into the <code>minikube</code> VM. This may explain why you got a response from <code>http://127.0.0.1:52650/</code>.</p>
<p>Ingress is not forwarding traffic to pods. Application is deployed on Azure Internal network. I can access app successfully using pod Ip and port but when trying Ingress IP/ Host I am getting 404 not found. I do not see any error in Ingress logs. Bellow are my config files. Please help me if I am missing anything or a how I can troubleshoot to find issue.</p> <p>Deployment.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: aks-helloworld-one spec: replicas: 1 selector: matchLabels: app: aks-helloworld-one template: metadata: labels: app: aks-helloworld-one spec: containers: - name: aks-helloworld-one image: &lt;image&gt; ports: - containerPort: 8290 protocol: &quot;TCP&quot; env: - name: env1 valueFrom: secretKeyRef: name: configs key: env1 volumeMounts: - mountPath: &quot;mnt/secrets-store&quot; name: secrets-mount volumes: - name: secrets-mount csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: &quot;azure-keyvault&quot; imagePullSecrets: - name: acr-secret --- apiVersion: v1 kind: Service metadata: name: aks-helloworld-one spec: type: ClusterIP ports: - name: http protocol: TCP port: 8080 targetPort: 8290 selector: app: aks-helloworld-one </code></pre> <p>Ingress.yaml</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: hello-world-ingress namespace: ingress-basic annotations: nginx.ingress.kubernetes.io/ssl-redirect: &quot;false&quot; nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; nginx.ingress.kubernetes.io/rewrite-target: /$2 spec: ingressClassName: nginx rules: - http: paths: - path: / pathType: Prefix backend: service: name: aks-helloworld port: number: 80 </code></pre>
<p>Correct the service name and service port in your ingress.yaml: the ingress points to <code>aks-helloworld</code> on port 80, but the service you created is <code>aks-helloworld-one</code> listening on port 8080.</p> <pre><code>spec: ingressClassName: nginx rules: - http: paths: - path: / pathType: Prefix backend: service: # wrong: name: aks-helloworld name: aks-helloworld-one port: # wrong: number: 80 number: 8080 </code></pre> <p>You can use the command below to confirm whether the ingress has resolved any endpoints:</p> <pre><code>kubectl describe ingress hello-world-ingress -n ingress-basic </code></pre>
<p>I am trying to debug my pod throwing CrashLoopBackOff error. When I run decribe command, I found that <code>Back-off restarting failed container</code> is the error. I excuted the logs for the failing pod and I got the below data.</p> <pre><code>vagrant@master:~&gt; kubectl logs pod_name standard_init_linux.go:228: exec user process caused: exec format error vagrant@master:/vagrant&gt; kubectl logs -p pod_name unable to retrieve container logs for containerd://db0f2dbd549676d8bf1026e5757ff45847c62152049b36037263f81915e948eavagrant </code></pre> <p>Why I am not able to execute the logs command?</p> <p>More details:</p> <p><a href="https://i.stack.imgur.com/wIejs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wIejs.png" alt="enter image description here" /></a></p> <p>yaml file is as follows</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: service: udaconnect-app name: udaconnect-app spec: ports: - name: &quot;3000&quot; port: 3000 targetPort: 3000 nodePort: 30000 selector: service: udaconnect-app type: NodePort --- apiVersion: apps/v1 kind: Deployment metadata: labels: service: udaconnect-app name: udaconnect-app spec: replicas: 1 selector: matchLabels: service: udaconnect-app template: metadata: labels: service: udaconnect-app spec: containers: - image: udacity/nd064-udaconnect-app:latest name: udaconnect-app imagePullPolicy: Always resources: requests: memory: &quot;128Mi&quot; cpu: &quot;64m&quot; limits: memory: &quot;256Mi&quot; cpu: &quot;256m&quot; restartPolicy: Always </code></pre> <p>My vagrant file</p> <pre><code>default_box = &quot;opensuse/Leap-15.2.x86_64&quot; Vagrant.configure(&quot;2&quot;) do |config| config.vm.define &quot;master&quot; do |master| master.vm.box = default_box master.vm.hostname = &quot;master&quot; master.vm.network 'private_network', ip: &quot;192.168.0.200&quot;, virtualbox__intnet: true master.vm.network &quot;forwarded_port&quot;, guest: 22, host: 2222, id: &quot;ssh&quot;, disabled: true master.vm.network &quot;forwarded_port&quot;, guest: 22, host: 2000 # Master Node SSH master.vm.network &quot;forwarded_port&quot;, guest: 6443, host: 6443 # API Access for p in 30000..30100 # expose NodePort IP's master.vm.network &quot;forwarded_port&quot;, guest: p, host: p, protocol: &quot;tcp&quot; end master.vm.provider &quot;virtualbox&quot; do |v| v.memory = &quot;3072&quot; v.name = &quot;master&quot; end master.vm.provision &quot;shell&quot;, inline: &lt;&lt;-SHELL sudo zypper refresh sudo zypper --non-interactive install bzip2 sudo zypper --non-interactive install etcd sudo zypper --non-interactive install apparmor-parser curl -sfL https://get.k3s.io | sh - SHELL end config.vm.provider &quot;virtualbox&quot; do |vb| vb.memory = &quot;4096&quot; vb.cpus = 4 end </code></pre> <p>Any help is appreciated.</p>
<p>Summarizing the comments: <code>CrashLoopBackOff</code> error occurs, when there is a mismatch of AMD64 and ARM64 devices. According to your docker image <code>udacity/nd064-udaconnect-app</code>, we can see that it's <a href="https://hub.docker.com/r/udacity/nd064-udaconnect-app/tags" rel="nofollow noreferrer">AMD64 arch</a> and your box <code>opensuse/Leap-15.2.x86_64</code> is <a href="https://en.opensuse.org/openSUSE:AArch64" rel="nofollow noreferrer">ARM64 arch</a>.</p> <p>Hence, you have to change either your docker image, or the box in order to resolve this issue.</p>
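<p>A quick way to double-check this on your own cluster is to compare the node architecture with the architecture the image was built for (a sketch; the image tag is the one from your deployment):</p> <pre><code># architecture of the kubernetes nodes
$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{&quot;\t&quot;}{.status.nodeInfo.architecture}{&quot;\n&quot;}{end}'

# architecture(s) the image was published for
$ docker manifest inspect udacity/nd064-udaconnect-app:latest | grep architecture
</code></pre> <p>If the two don't match, either rebuild the image for the node's architecture (for example with <code>docker buildx build --platform ...</code>) or run the workload on nodes of the matching architecture.</p>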
<p>I am following <a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-volume" rel="nofollow noreferrer">setting up the Azure File share to the pod</a>.</p> <ul> <li>created the namespace</li> <li>created the secrets as specified</li> <li>pod configuration</li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: test-storage-pod namespace: storage-test spec: containers: - image: nginx:latest name: test-storage-pod resources: requests: cpu: 100m memory: 128Mi limits: cpu: 250m memory: 256Mi volumeMounts: - name: azure mountPath: /mnt/azure-filestore volumes: - name: azure azureFile: secretName: azure-storage-secret shareName: appdata/data readOnly: false </code></pre> <ul> <li><code>kubectl describe -n storage-test pod/&lt;pod-name&gt;</code> or <code>kubectl get -n storage-test event</code></li> </ul> <pre><code>LAST SEEN TYPE REASON OBJECT MESSAGE 2m13s Normal Scheduled pod/test-storage-pod Successfully assigned storage-test/test-storage-pod to aks-default-1231523-vmss00001a 6s Warning FailedMount pod/test-storage-pod MountVolume.SetUp failed for volume &quot;azure&quot; : Couldn't get secret default/azure-storage-secret 11s Warning FailedMount pod/test-storage-pod Unable to attach or mount volumes: unmounted volumes=[azure], unattached volumes=[default-token-gzxk8 azure]: timed out waiting for the condition </code></pre> <p>Question:</p> <ul> <li>the secret is created under the namespace storage-test as well, is that Kubelet first checks the storage under default namespace?</li> </ul>
<p>Probably you are working in the <code>default</code> namespace, which is why the kubelet first checks the <code>default</code> namespace for the secret. Please try to switch to the namespace you created with the command:</p> <blockquote> <p>kubens storage-test</p> </blockquote> <p>Then try to run your pod under the <code>storage-test</code> namespace once again.</p>
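<p>In any case, the error <code>Couldn't get secret default/azure-storage-secret</code> means the secret is being looked up in a namespace where it does not exist. Recreating it in the namespace the volume plugin is searching usually resolves it; a sketch following the Azure Files guide linked in the question (the storage account name and key are placeholders):</p> <pre><code>kubectl create secret generic azure-storage-secret \
  --namespace storage-test \
  --from-literal=azurestorageaccountname=&lt;storage-account-name&gt; \
  --from-literal=azurestorageaccountkey=&lt;storage-account-key&gt;
</code></pre> <p>You can verify in which namespaces the secret exists with <code>kubectl get secrets --all-namespaces --field-selector metadata.name=azure-storage-secret</code>.</p>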
<p>I have created a container which is basically a directory containing configuration files. These configuration files will be used by another container which contains the application executables. Both the containers are deployed in the same pod.</p> <p>I have created the pv and pvc for the 1st container like -</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: app-pv labels: type: local spec: storageClassName: standard capacity: storage: 5Gi volumeMode: Filesystem accessModes: - ReadWriteOnce hostPath: path: /mnt/data type: DirectoryOrCreate --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: app-pvc labels: type: local spec: storageClassName: standard volumeName: app-pv accessModes: - ReadWriteOnce resources: requests: storage: 5Gi </code></pre> <p>Next I am creating the deployment of both the containers like -</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: app labels: app: app spec: selector: matchLabels: app: app strategy: type: Recreate template: metadata: labels: app: app spec: containers: - name: lshome image: &lt;local_registry&gt;/container1:latest imagePullPolicy: Always volumeMounts: - name: container1-persistant-storage mountPath: /home/app/config - name: wildfly image: &lt;local_registry&gt;/container2:latest imagePullPolicy: Always volumeMounts: - name: container1-config mountPath: /home/app/config volumes: - name: container1-persistant-storage persistentVolumeClaim: claimName: app-pvc - name: container1-config persistentVolumeClaim: claimName: app-pvc </code></pre> <p>What I want is that the data under /home/app/config directory in container 1 be available to container 2 at the same directory structure. I have created the same directory structure in container 2 as well.</p> <p>When I am trying to create the deployment, its giving the message - Unable to attach or mount volumes and then not able to create the deployment.</p>
<p>Don't define two volumes, delete <strong>container1-config</strong> and it works. Like below.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: app labels: app: app spec: selector: matchLabels: app: app strategy: type: Recreate template: metadata: labels: app: app spec: containers: - name: lshome image: &lt;local_registry&gt;/container1:latest imagePullPolicy: Always volumeMounts: - name: container1-persistant-storage mountPath: /home/app/config - name: wildfly image: &lt;local_registry&gt;/container2:latest imagePullPolicy: Always volumeMounts: - name: container1-persistant-storage mountPath: /home/app/config volumes: - name: container1-persistant-storage persistentVolumeClaim: claimName: app-pvc </code></pre>
<p>I'm using a HPA based on a custom metric on GKE.</p> <p>The HPA is not working and it's showing me this error log:</p> <blockquote> <p>unable to fetch metrics from custom metrics API: the server is currently unable to handle the request</p> </blockquote> <p>When I run <code>kubectl get apiservices | grep custom</code> I get</p> <blockquote> <p>v1beta1.custom.metrics.k8s.io services/prometheus-adapter False (FailedDiscoveryCheck) 135d</p> </blockquote> <p>this is the HPA spec config :</p> <pre><code>spec: scaleTargetRef: kind: Deployment name: api-name apiVersion: apps/v1 minReplicas: 3 maxReplicas: 50 metrics: - type: Object object: target: kind: Service name: api-name apiVersion: v1 metricName: messages_ready_per_consumer targetValue: '1' </code></pre> <p>and this is the service's spec config :</p> <pre><code>spec: ports: - name: worker-metrics protocol: TCP port: 8080 targetPort: worker-metrics selector: app.kubernetes.io/instance: api app.kubernetes.io/name: api-name clusterIP: 10.8.7.9 clusterIPs: - 10.8.7.9 type: ClusterIP sessionAffinity: None ipFamilies: - IPv4 ipFamilyPolicy: SingleStack </code></pre> <p>What should I do to make it work ?</p>
<p>First of all, confirm that the Metrics Server POD is running in your <code>kube-system</code> namespace. Also, you can use the following manifest:</p> <pre><code>--- apiVersion: v1 kind: ServiceAccount metadata: name: metrics-server namespace: kube-system --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: metrics-server namespace: kube-system labels: k8s-app: metrics-server spec: selector: matchLabels: k8s-app: metrics-server template: metadata: name: metrics-server labels: k8s-app: metrics-server spec: serviceAccountName: metrics-server volumes: # mount in tmp so we can safely use from-scratch images and/or read-only containers - name: tmp-dir emptyDir: {} containers: - name: metrics-server image: k8s.gcr.io/metrics-server-amd64:v0.3.1 command: - /metrics-server - --kubelet-insecure-tls - --kubelet-preferred-address-types=InternalIP imagePullPolicy: Always volumeMounts: - name: tmp-dir mountPath: /tmp </code></pre> <p>If so, take a look into the logs and look for any <em><strong>stackdriver adapter’s</strong></em> line. This issue is commonly caused due to a problem with the <code>custom-metrics-stackdriver-adapter</code>. It usually crashes in the <code>metrics-server</code> namespace. To solve that, use the resource from this <a href="https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter.yaml" rel="nofollow noreferrer">URL</a>, and for the deployment, use this image:</p> <pre><code>gcr.io/google-containers/custom-metrics-stackdriver-adapter:v0.10.1 </code></pre> <p>Another common root cause of this is an <strong>OOM</strong> issue. In this case, adding more memory solves the problem. To assign more memory, you can specify the new memory amount in the configuration file, as the following example shows:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: memory-demo namespace: mem-example spec: containers: - name: memory-demo-ctr image: polinux/stress resources: limits: memory: &quot;200Mi&quot; requests: memory: &quot;100Mi&quot; command: [&quot;stress&quot;] args: [&quot;--vm&quot;, &quot;1&quot;, &quot;--vm-bytes&quot;, &quot;150M&quot;, &quot;--vm-hang&quot;, &quot;1&quot;] </code></pre> <p>In the above example, the Container has a memory request of 100 MiB and a memory limit of 200 MiB. In the manifest, the &quot;--vm-bytes&quot;, &quot;150M&quot; argument tells the Container to attempt to allocate 150 MiB of memory. You can visit this Kubernetes Official <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="nofollow noreferrer">Documentation</a> to have more references about the Memory settings.</p> <p>You can use the following threads for more reference <a href="https://stackoverflow.com/questions/61098043/gke-hpa-using-custom-metrics-unable-to-fetch-metrics">GKE - HPA using custom metrics - unable to fetch metrics</a>, <a href="https://stackoverflow.com/questions/60541105/stackdriver-metadata-agent-cluster-level-gets-oomkilled/60549732#60549732">Stackdriver-metadata-agent-cluster-level gets OOMKilled</a>, and <a href="https://github.com/GoogleCloudPlatform/k8s-stackdriver/issues/303" rel="nofollow noreferrer">Custom-metrics-stackdriver-adapter pod keeps crashing</a>.</p>
<p>I am toying with the spark operator in kubernetes, and I am trying to create a Spark Application resource with the following manifest.</p> <pre><code>apiVersion: &quot;sparkoperator.k8s.io/v1beta2&quot; kind: SparkApplication metadata: name: pyspark-pi namespace: spark-jobs spec: batchScheduler: volcano batchSchedulerOptions: priorityClassName: routine type: Python pythonVersion: &quot;3&quot; mode: cluster image: &quot;&lt;image_name&gt;&quot; imagePullPolicy: Always mainApplicationFile: local:///spark-files/csv_data.py arguments: - &quot;10&quot; sparkVersion: &quot;3.0.0&quot; restartPolicy: type: OnFailure onFailureRetries: 3 onFailureRetryInterval: 10 onSubmissionFailureRetries: 5 onSubmissionFailureRetryInterval: 20 timeToLiveSeconds: 86400 driver: cores: 1 coreLimit: &quot;1200m&quot; memory: &quot;512m&quot; labels: version: 3.0.0 serviceAccount: driver-sa volumeMounts: - name: sparky-data mountPath: /spark-data executor: cores: 1 instances: 2 memory: &quot;512m&quot; labels: version: 3.0.0 volumeMounts: - name: sparky-data mountPath: /spark-data volumes: - name: sparky-data hostPath: path: /spark-data </code></pre> <p>I am running this in kind, where I have defined a volume mount to my local system where the data to be processed is present. I can see the volume being mounted in the kind nodes. But when I create the above resource, the driver pod crashes by giving the error 'no such path'. I printed the contents of the root directory of the driver pod and I could not see the mounted volume. What is the problem here and how do I fix this?</p>
<p>The issue is related to permissions. When mounting a volume into a pod, you need to make sure that the permissions are set correctly. Specifically, the user or group that runs the application in the pod must have permission to access the data. You should also make sure that the path to the volume is valid and that the volume is properly mounted. To check whether a path exists, you can use the exec command:</p> <pre><code>kubectl exec &lt;pod_name&gt; -- ls &lt;path&gt; </code></pre> <p>Try to add a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">security context</a>, which configures privilege and access control settings for a Pod.</p> <p>For more information follow this <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">document</a>.</p>
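<p>To illustrate the security context idea on a plain pod (the UID/GID values are only examples and should match the owner of <code>/spark-data</code> on the kind node; for the spark operator the equivalent settings go into the driver/executor sections of the <code>SparkApplication</code> spec, check the operator's API reference for the exact field names):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
  containers:
  - name: app
    image: busybox
    command: [&quot;sh&quot;, &quot;-c&quot;, &quot;ls -l /spark-data; sleep 3600&quot;]
    volumeMounts:
    - name: sparky-data
      mountPath: /spark-data
  volumes:
  - name: sparky-data
    hostPath:
      path: /spark-data
</code></pre> <p>If this pod can list the directory, the data path and mount are fine and the remaining problem is on the Spark side; if it cannot, fix the ownership/permissions on the node first.</p>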
<p>I'm trying to understand how the Kubernetes <strong>HorizontalPodAutoscaler</strong> works. Until now, I have used the following configuration:</p> <pre><code>apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: my-deployment spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: my-deployment minReplicas: 1 maxReplicas: 10 targetCPUUtilizationPercentage: 50 </code></pre> <p>This uses the <code>targetCPUUtilizationPercentage</code> parameter but I would like to use a metric for the memory percentage used, but I was not able to find any example. Any hint?</p> <p>I found also that there is this type of configuration to support multiple metrics, but the <strong>apiVersion</strong> is <code>autoscaling/v2alpha1</code>. Can this be used in a production environment?</p> <pre><code>kind: HorizontalPodAutoscaler apiVersion: autoscaling/v2alpha1 metadata: name: WebFrontend spec: scaleTargetRef: kind: ReplicationController name: WebFrontend minReplicas: 2 maxReplicas: 10 metrics: - type: Resource resource: name: cpu targetAverageUtilization: 80 - type: Object object: target: kind: Service name: Frontend metricName: hits-per-second targetValue: 1k </code></pre>
<p>Here is a manifest example for what you need, that includes <strong>Memory Metrics</strong>:</p> <pre><code>apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler metadata: name: web-servers spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: web-servers minReplicas: 1 maxReplicas: 10 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 20 - type: Resource resource: name: memory target: type: AverageValue averageValue: 30Mi </code></pre> <p>An important thing to notice is that, as you can see, it uses the <strong>autoscaling/v2beta2 API version</strong>, so you need to follow all the previous instructions listed <a href="https://loft.sh/blog/kubernetes-horizontal-pod-autoscaling/" rel="nofollow noreferrer">here</a>.</p> <p>Regarding the possibility to use the <strong>autoscaling/v2alpha1</strong>, yes, you can use it, as it includes support for scaling on memory and custom metrics as this <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">URL</a> specifies, but keep in mind that alpha versions are released for testing, as they are not final versions.</p> <p>For more <strong>autoscaling/v2beta2 YAML’s</strong> examples and a deeper look into memory metrics, you can take a look at this <a href="https://stackoverflow.com/questions/69184304/how-memory-metric-is-evaluated-by-kubernetes-hpa">thread</a>.</p>
<p>I have a statefulset deployed with 1 replica for jenkins. few days back the node on which jenkins pod was running went into NotReady State . Once Node went in NotReady state, Jenkins pod went in Terminating state and stuck there for long time until Node went back in Ready State.</p> <p>Ideally, my Jenkins pod should have been re-scheduled to a healthy node in case my current node is not healthy. due to this my jenkins application had downtime for the time node was in Not Ready State.</p> <p>Can anything be done in this case in order to prevent such downtime in statefulset pods</p> <p>Kubectl version:</p> <p>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19&quot;, GitVersion:&quot;v1.19.13&quot;, GitCommit:&quot;53c7b65d4531a749cd3a7004c5212d23daa044a9&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-07-15T20:58:11Z&quot;, GoVersion:&quot;go1.15.14&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19&quot;, GitVersion:&quot;v1.19.13&quot;, GitCommit:&quot;53c7b65d4531a749cd3a7004c5212d23daa044a9&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-07-15T20:53:19Z&quot;, GoVersion:&quot;go1.15.14&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}</p>
<p>Your issue could be that the cluster doesn't have enough resources, or that the pod scheduler can't make a placement decision because the nodes don't carry enough labels. Look into the Kubernetes scheduler's logs to find out exactly what happened in this case. If resources were insufficient, consider expanding the size of your cluster or adding more nodes to better distribute the workload. In addition, make sure that all of the cluster's nodes have accurate labels so that the scheduler knows which node is suitable for running your Jenkins pod.</p> <p>A <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity" rel="nofollow noreferrer">node affinity</a> rule in your Jenkins statefulset's manifest can be set up to reduce downtime caused by a node entering the Not Ready state: the pod will then only be scheduled onto nodes that match particular labels, and you can use additional labels in the affinity rule to reflect the health status of your nodes, so the pod is rescheduled to a healthy node when its node becomes Not Ready. Last but not least, consider using a pod disruption budget for your statefulset, which makes it less likely that the pod is evicted at an unfortunate moment.</p> <p>Attaching supporting <a href="https://komodor.com/learn/how-to-fix-kubernetes-node-not-ready-error/" rel="nofollow noreferrer">blog-1</a> and <a href="https://www.datadoghq.com/blog/debug-kubernetes-pending-pods/" rel="nofollow noreferrer">blog-2</a> for reference.</p>
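<p>A minimal sketch of such a pod disruption budget, assuming the Jenkins statefulset's pods are labeled <code>app: jenkins</code>:</p> <pre><code>apiVersion: policy/v1   # on clusters older than 1.21 use policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: jenkins-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: jenkins
</code></pre> <p>Note that with a single replica a PDB mainly protects against voluntary disruptions such as drains and upgrades; it does not by itself move a pod off a node that became unreachable.</p>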
<p>Can someone please help to spot the issue with <code>ingress-2</code> ingress rule? why <code>ingress-1</code> is working vs <code>ingress-2</code> is not working.</p> <p><strong>Description of my setup, I have two deployments:</strong></p> <p>1st deployment is of <code>nginx</code><br /> 2nd deployment is of <code>httpd</code></p> <p>Both of the deployments are exposed via <code>ClusterIP</code> services named <code>nginx-svc</code> and <code>httpd-svc</code> respectively. All the <code>endpoints</code> are proper for the services. However, while setting up the ingress for these services, I am not able to setup the ingress using <code>host</code> (as described in <code>ingress-2</code>). however, when I am using <code>ingress-1</code>, things work fine.</p> <p>// my host file for name resolution</p> <pre><code>grep myapp.com /etc/hosts 127.0.0.1 myapp.com </code></pre> <p>// deployment details</p> <pre><code>kubectl get deployments.apps NAME READY UP-TO-DATE AVAILABLE AGE nginx 3/3 3 3 29m httpd 3/3 3 3 29m </code></pre> <p>// service details</p> <pre><code>kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.152.183.1 &lt;none&gt; 443/TCP 7h48m nginx-svc ClusterIP 10.152.183.233 &lt;none&gt; 80/TCP 28m httpd-svc ClusterIP 10.152.183.58 &lt;none&gt; 80/TCP 27m </code></pre> <p>// endpoints details</p> <pre><code>kubectl get ep NAME ENDPOINTS AGE kubernetes 10.0.2.15:16443 7h51m nginx-svc 10.1.198.86:80,10.1.198.87:80,10.1.198.88:80 31m httpd-svc 10.1.198.89:80,10.1.198.90:80,10.1.198.91:80 31m </code></pre> <p>Attempt-1: <code>ingress-1</code></p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-1 spec: rules: - http: paths: - path: /nginx pathType: Prefix backend: service: name: nginx-svc port: number: 80 - path: /httpd pathType: Prefix backend: service: name: httpd-svc port: number: 80 </code></pre> <p>// Example showing that ingress routing is working fine when <code>ingress-1</code> is used:</p> <pre><code> curl myapp.com/nginx &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Welcome to nginx!&lt;/title&gt; &lt;style&gt; body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } &lt;/style&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Welcome to nginx!&lt;/h1&gt; &lt;p&gt;If you see this page, the nginx web server is successfully installed and working. 
Further configuration is required.&lt;/p&gt; &lt;p&gt;For online documentation and support please refer to &lt;a href=&quot;http://nginx.org/&quot;&gt;nginx.org&lt;/a&gt;.&lt;br/&gt; Commercial support is available at &lt;a href=&quot;http://nginx.com/&quot;&gt;nginx.com&lt;/a&gt;.&lt;/p&gt; &lt;p&gt;&lt;em&gt;Thank you for using nginx.&lt;/em&gt;&lt;/p&gt; &lt;/body&gt; &lt;/html&gt; curl myapp.com/httpd &lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt; </code></pre> <p>// following ingress rule is not working as I was expecting</p> <p>Attempt-2: <code>ingress-2</code></p> <pre><code>kind: Ingress metadata: name: ingress-2 spec: rules: - host: &quot;myapp.com&quot; http: paths: - pathType: Prefix path: &quot;/nginx&quot; backend: service: name: nginx-svc port: number: 80 - pathType: Prefix path: &quot;/httpd&quot; backend: service: name: httpd-svc port: number: 80 </code></pre> <p>// I could not spot any issue in the ing describe</p> <pre><code>kubectl describe ingress ingress-2 Name: ingress-2 Namespace: default Address: 127.0.0.1 Default backend: default-http-backend:80 (&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;) Rules: Host Path Backends ---- ---- -------- myapp.com /nginx nginx-svc:80 (10.1.198.86:80,10.1.198.87:80,10.1.198.88:80) /httpd httpd-svc:80 (10.1.198.89:80,10.1.198.90:80,10.1.198.91:80) Annotations: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 9m15s (x2 over 10m) nginx-ingress-controller Scheduled for sync </code></pre> <p>// example showing ingress routing is not working with this ingress resource</p> <pre><code>curl myapp.com/nginx &lt;html&gt; &lt;head&gt;&lt;title&gt;404 Not Found&lt;/title&gt;&lt;/head&gt; &lt;body&gt; &lt;center&gt;&lt;h1&gt;404 Not Found&lt;/h1&gt;&lt;/center&gt; &lt;hr&gt;&lt;center&gt;nginx/1.21.1&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; curl myapp.com/httpd &lt;!DOCTYPE HTML PUBLIC &quot;-//IETF//DTD HTML 2.0//EN&quot;&gt; &lt;html&gt;&lt;head&gt; &lt;title&gt;404 Not Found&lt;/title&gt; &lt;/head&gt;&lt;body&gt; &lt;h1&gt;Not Found&lt;/h1&gt; &lt;p&gt;The requested URL was not found on this server.&lt;/p&gt; &lt;/body&gt;&lt;/html&gt; </code></pre>
<h2 id="difference-between-ingresses">Difference between ingresses</h2> <p>I created a one node <code>microk8s</code> cluster <a href="https://microk8s.io/" rel="noreferrer">following official documentation</a> and I wasn't able to reproduce behaviour you described which is correct behaviour. Added two pods with <code>mendhak/http-https-echo</code> image (highly recommend: very convenient for troubleshooting ingress or understanding how ingress works) and two services for each of pods.</p> <p>The difference between two ingress rules is first ingress rule listens on all domains (HOSTS):</p> <pre><code>$ mkctl get ing -o wide NAME CLASS HOSTS ADDRESS PORTS AGE ingress-1 public * 127.0.0.1 80 2m53s $ curl -I --header &quot;Host: myapp.com&quot; http://127.0.0.1/httpd HTTP/1.1 200 OK $ curl -I --header &quot;Host: example.com&quot; http://127.0.0.1/httpd HTTP/1.1 200 OK $ curl -I --header &quot;Host: myapp.com&quot; http://127.0.0.1/missing_url HTTP/1.1 404 Not Found </code></pre> <p>While the second ingress rule will serve only <code>myapp.com</code> domain (HOST):</p> <pre><code>$ mkctl get ing NAME CLASS HOSTS ADDRESS PORTS AGE ingress-2 public myapp.com 127.0.0.1 80 60s $ curl -I --header &quot;Host: myapp.com&quot; http://127.0.0.1/httpd HTTP/1.1 200 OK $ curl -I --header &quot;Host: example.com&quot; http://127.0.0.1/httpd HTTP/1.1 404 Not Found </code></pre> <h2 id="what-exactly-happens">What exactly happens</h2> <p>Last results in your question actually show that ingress is working as expected. You're getting responses not from <code>kubernetes ingress</code> but from pods within the cluster. First response is <code>404</code> from <code>nginx 1.21.0</code> and second is <code>404</code> from <code>apache</code>.</p> <p>This happens because ingress sends requests to pods with the same <code>path</code> from URL without any transformations. For instance (this output I got using image mentioned above):</p> <pre><code>$ curl myapp.com/httpd { &quot;path&quot;: &quot;/httpd&quot; ... </code></pre> <p>While both <code>nginx</code> and <code>apache</code> are serving on <code>/</code>.</p> <h2 id="how-to-resolve-it">How to resolve it</h2> <p>Nginx ingress has a lot of features and one of them is <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="noreferrer">rewriting</a> which helps to transform <code>paths</code> from what ingress gets to what goes to pods.</p> <p>For example, if request goes to <code>http://myapp.com/nginx</code> then it will be directed to <code>nginx</code> service with <code>/nginx</code> path which will cause <code>nginx</code> to throw <code>404</code> since there's nothing on this <code>path</code>.</p> <p>Ingress rule below fixes this by adding <code>rewrite-target</code> to <code>/</code> which we need to pass to <code>nginx</code> and <code>apache</code> services:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-2 annotations: # kubernetes.io/ingress.class: nginx # this should be uncommented if ingress used in &quot;regular&quot; cluster nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: myapp.com http: paths: - path: /nginx pathType: Prefix backend: service: name: service-a port: number: 80 - path: /httpd pathType: Prefix backend: service: name: service-b port: number: 80 </code></pre> <p>Quick test how it works:</p> <pre><code>$ curl myapp.com/nginx { &quot;path&quot;: &quot;/&quot;, ... </code></pre> <p>And</p> <pre><code>$ curl myapp.com/httpd { &quot;path&quot;: &quot;/&quot;, ... 
</code></pre> <p>As you can see now <code>path</code> is <code>/</code>.</p> <p>Switching image to <code>nginx</code> and:</p> <pre><code>$ curl myapp.com/nginx &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Welcome to nginx!&lt;/title&gt; &lt;style&gt; ... </code></pre> <h2 id="useful-links">Useful links</h2> <ul> <li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Kubernetes ingress</a></li> <li><a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="noreferrer">Nginx ingress - rewrite</a></li> </ul>
<p>We can check the service accounts in Kubernetes Cluster. Likewise, Is it possible to check the existing users and groups of my Kubernetes cluster with Cluster Admin privileges. If yes then how ? If no then why ?</p> <p>NOTE: I am using EKS</p>
<p>Posting this as a community wiki, feel free to edit and expand.</p> <hr /> <p>This won't answer everything, however there are some concepts and ideas.</p> <p>In short words there's no easy way. It's not possible to do using kubernetes itself. Reason for this is:</p> <blockquote> <p>All Kubernetes clusters have two categories of users: service accounts managed by Kubernetes, and normal users.</p> <p>It is assumed that a cluster-independent service manages normal users in the following ways:</p> <ul> <li>an administrator distributing private keys</li> <li>a user store like Keystone or Google Accounts</li> <li>a file with a list of usernames and passwords</li> </ul> <p>In this regard, Kubernetes does not have objects which represent normal user accounts. Normal users cannot be added to a cluster through an API call.</p> </blockquote> <p><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#users-in-kubernetes" rel="nofollow noreferrer">Source</a></p> <p><a href="https://stackoverflow.com/questions/51612976/how-to-view-members-of-subject-with-group-kind">More details and examples from another answer on SO</a></p> <hr /> <p>As for EKS part which is mentioned, it should be done using AWS IAM in connection to kubernetes RBAC. Below articles about setting up IAM roles in kubernetes cluster. Same way it will be possible to find which role has <code>cluster admin</code> permissions:</p> <ul> <li><a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">Managing users or IAM roles for your cluster</a></li> <li><a href="https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/" rel="nofollow noreferrer">provide access to other IAM users and roles</a></li> </ul> <p>If another tool is used for identity managing, it should be used (e.g. LDAP)</p>
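<p>As a practical starting point on EKS, the IAM-to-RBAC mappings live in the <code>aws-auth</code> ConfigMap, and cluster-admin grants can be traced through ClusterRoleBindings. A sketch of what to inspect (these commands only read, they change nothing):</p> <pre><code># which IAM roles/users are mapped into the cluster and to which groups
kubectl describe configmap aws-auth -n kube-system

# which subjects are bound to the cluster-admin ClusterRole
kubectl get clusterrolebindings -o wide | grep cluster-admin
</code></pre> <p>Cross-referencing the groups from <code>aws-auth</code> with the subjects of those bindings gives a reasonable picture of who effectively has cluster admin, even though Kubernetes itself has no &quot;user&quot; object to list.</p>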
<p>I'm trying to learn Kubernetes as I go and I'm currently trying to deploy a test application that I made.</p> <p>I have 3 containers and each container is running on its own pod</p> <ul> <li>Front end App (Uses Nuxtjs)</li> <li>Backend API (Nodejs)</li> <li>MongoDB</li> </ul> <p>For the Front End container I have configured an External Service (LoadBalancer) which is working fine. I can access the app from my browser with no issues.</p> <p>For the backend API and MongoDB I configured an Internal Service for each. The communication between Backend API and MongoDB is working. The problem that I'm having is communicating the Frontend with the Backend API.</p> <p>I'm using the Axios component in Nuxtjs and in the nuxtjs.config.js file I have set the Axios Base URL to be http://service-name:portnumber/. But that does not work, I'm guessing its because the url is being call from the client (browser) side and not from the server. If I change the Service type of the Backend API to LoadBalancer and configure an IP Address and Port Number, and use that as my axios URL then it works. However I was kind of hoping to keep the BackEnd-API service internal. Is it possible to call the Axios base URL from the server side and not from client-side.</p> <p>Any help/guidance will be greatly appreciated.</p> <p>Here is my Front-End YML File</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: mhov-ipp name: mhov-ipp namespace: mhov spec: replicas: 1 selector: matchLabels: app: mhov-ipp template: metadata: labels: app: mhov-ipp spec: containers: - image: mhov-ipp:1.1 name: mhov-ipp ports: - containerPort: 8080 env: - name: NODE_ENV value: &quot;development&quot; - name: PORT value: &quot;8080&quot; - name: TITLE value: &quot;MHOV - IPP - K8s&quot; - name: API_URL value: &quot;http://mhov-api-service:4000/&quot; --- apiVersion: v1 kind: Service metadata: name: mhov-ipp-service spec: selector: app: mhov-ipp type: LoadBalancer ports: - protocol: TCP port: 8082 targetPort: 8080 nodePort: 30600 </code></pre> <p>Here is the backend YML File</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: mhov-api-depl labels: app: mhov-api spec: replicas: 1 selector: matchLabels: app: mhov-api template: metadata: labels: app: mhov-api spec: containers: - name: mhov-api image: mhov-api:1.0 ports: - containerPort: 4000 env: - name: mongoURI valueFrom: configMapKeyRef: name: mhov-configmap key: database_url --- apiVersion: v1 kind: Service metadata: name: mhov-api-service spec: selector: app: mhov-api ports: - protocol: TCP port: 4000 targetPort: 4000 </code></pre>
<h2 id="what-is-ingress-and-how-to-install-it">What is ingress and how to install it</h2> <p>Your guess is correct. Frontend is running in browser and browser &quot;doesn't know&quot; where backend is and how to reach out to it. You have two options here:</p> <ul> <li>as you did with exposing backend outside your cluster</li> <li>use advanced solution such as <code>ingress</code></li> </ul> <p>This will move you forward and will need to change some configuration of your application such as URL since application will be exposed to &quot;the internet&quot; (not really, but you can do it using cloud).</p> <p>What is <code>ingress</code>:</p> <blockquote> <p>Ingress is api object which exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.</p> </blockquote> <p>Most common option is <code>nginx ingress</code> - their page <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">NGINX Ingress Controller</a>.</p> <p>Installation depends on cluster type, however I suggest using <code>helm</code>. (if you're not familiar with <code>helm</code>, it's a template engine which uses charts to install and setup application. There are quite a lot already created charts, e.g. <a href="https://kubernetes.github.io/ingress-nginx/deploy/#using-helm" rel="nofollow noreferrer">ingress-nginx</a>.</p> <p>If you're using <code>minikube</code> for example, it already has built-in <code>nginx-ingress</code> and can be enabled as addon.</p> <h2 id="how-to-expose-services-using-ingress">How to expose services using ingress</h2> <p>Once you have working ingress, it's type to create rules for it.</p> <p>What you need is to have ingress which will be able to communicate with frontend and backend as well.</p> <p>Example taken from official kubernetes documentation:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: simple-fanout-example spec: rules: - host: foo.bar.com http: paths: - path: /foo pathType: Prefix backend: service: name: service1 port: number: 4200 - path: /bar pathType: Prefix backend: service: name: service2 port: number: 8080 </code></pre> <p>In this example, there are two different services available on different <code>paths</code> within the <code>foo.bar.com</code> hostname and both services are within the cluster. No need to expose them out of the cluster since traffic will be directed through <code>ingress</code>.</p> <h2 id="actual-solution-how-to-approach">Actual solution (how to approach)</h2> <p><a href="https://stackoverflow.com/q/67470540/15537201">This is very similar configuration</a> which was fixed and started working as expected. This is my answer and safe to share :)</p> <p>As you can see OP faced the same issue when frontend was accessible, while backend wasn't.</p> <p>Feel free to use anything out of that answer/repository.</p> <h2 id="useful-links">Useful links:</h2> <ul> <li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">kubernetes ingress</a></li> </ul>
<p>I've been reading the Google Cloud documentation about <strong>hybrid GKE cluster</strong> with <a href="https://cloud.google.com/anthos/multicluster-management/connect/registering-a-cluster" rel="nofollow noreferrer">Connect</a> or completely on prem with GKE on-prem and VMWare.</p> <p>However, I see that GKE with <strong>Connect</strong> you can manage the on-prem Kubernetes cluster from Google Cloud dashboard.</p> <p>But, what I am trying to find, is, to mantain a hybrid cluster with GKE mixing <strong>on-prem and cloud nodes</strong>. Graphical example:</p> <p><a href="https://i.stack.imgur.com/r9ZJ0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r9ZJ0.png" alt="enter image description here" /></a></p> <p>For the above solution, the master node is managed by GCloud, but the ideal solution is to manage <strong>multiple node masters</strong> (High availability) <strong>on cloud</strong> and nodes on prem. Graphical example:</p> <p><a href="https://i.stack.imgur.com/B78pa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B78pa.png" alt="enter image description here" /></a></p> <p>Is it possible to apply some or both of the proposed solutions on Google Cloud with GKE?</p>
<p>If you want to maintain hybrid clusters, mixing on prem and cloud nodes, you need to use Anthos.</p> <p>Anthos is a modern application management platform that provides a consistent development and operations experience for cloud and on-premises environments.</p> <p>The primary computing environment for Anthos uses Anthos clusters, which extend GKE for use on Google Cloud, on-premises, or multicloud to manage Kubernetes installations in the environments where you intend to deploy your applications. These offerings bundle upstream Kubernetes releases and provide management capabilities for creating, scaling, and upgrading conformant Kubernetes clusters. With Kubernetes installed and running, you have access to a common orchestration layer that manages application deployment, configuration, upgrade, and scaling.</p> <p>If you want to know more about Anthos in GCP please <a href="https://cloud.google.com/anthos/docs/concepts/overview" rel="nofollow noreferrer">follow this link.</a></p>
<p>I created a cronjob with the following spec in GKE:</p> <pre><code># cronjob.yaml apiVersion: batch/v1beta1 kind: CronJob metadata: name: collect-data-cj-111 spec: schedule: &quot;*/5 * * * *&quot; concurrencyPolicy: Allow startingDeadlineSeconds: 100 suspend: false successfulJobsHistoryLimit: 3 failedJobsHistoryLimit: 1 jobTemplate: spec: template: spec: containers: - name: collect-data-cj-111 image: collect_data:1.3 restartPolicy: OnFailure </code></pre> <p>I create the cronjob with the following command:</p> <pre><code>kubectl apply -f collect_data.yaml </code></pre> <p>When I later watch if it is running or not (as I scheduled it to run every 5th minute for for the sake of testing), here is what I see:</p> <pre><code>$ kubectl get pods --watch NAME READY STATUS RESTARTS AGE XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 Pending 0 0s XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 Pending 0 1s XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ContainerCreating 0 1s XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ErrImagePull 0 3s XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ImagePullBackOff 0 17s XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ErrImagePull 0 30s XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ImagePullBackOff 0 44s </code></pre> <p>It does not seem to be able to pull the image from Artifact Registry. I have both GKE and Artifact Registry created under the same project.</p> <p>What can be the reason? After spending several hours in docs, I still could not make progress and I am quite new in the world of GKE.</p> <p>If you happen to recommend me to check anything, I really appreciate if you also describe where in GCP I should check/control your recommendation.</p> <hr /> <p>ADDENDUM:</p> <p>When I run the following command:</p> <pre><code>kubectl describe pods </code></pre> <p>The output is quite large but I guess the following message should indicate the problem.</p> <pre><code> Failed to pull image &quot;collect_data:1.3&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;docker.io/library/collect_data:1.3&quot;: failed to resolve reference &quot;docker.io/library/collect_data:1.3&quot;: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed </code></pre> <p>How do I solve this problem step by step?</p>
<p>From the error shared, I can tell that the image is not being pulled from Artifact Registry: by default, GKE resolves a bare image name such as <code>collect_data:1.3</code> against Docker Hub, and since no such image exists there, the pull fails.</p> <p>The correct way to reference an image stored in Artifact Registry is as follows:</p> <pre><code>image: &lt;location&gt;-docker.pkg.dev/&lt;project&gt;/&lt;repo-name&gt;/&lt;image-name&gt;:&lt;tag&gt;
</code></pre> <p>Be aware that the repository format has to be set to &quot;docker&quot; if you are storing Docker images in it.</p> <p>Take a look at the <a href="https://cloud.google.com/artifact-registry/docs/docker/quickstart#gcloud" rel="nofollow noreferrer">Quickstart for Docker</a> guide, which shows how to push and pull Docker images to and from Artifact Registry, along with the permissions required.</p>
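<p>For instance, with a hypothetical Docker-format repository called <code>my-repo</code> in <code>us-central1</code> inside your project (substitute your real location, project and repository names), the container spec in the CronJob would look like this:</p> <pre><code>containers:
- name: collect-data-cj-111
  image: us-central1-docker.pkg.dev/my-project/my-repo/collect_data:1.3
</code></pre> <p>Also make sure the image has actually been pushed to that path and that the cluster's node service account has read access to the repository (for example the Artifact Registry Reader role); otherwise the pull will still fail, just with an authorization error instead.</p>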
<p>I am trying to use the kubernetes extension in vscode. However, when I try to click on any item in the menu list (see image), I receive the error popup <code>Unable to connect to the server: Forbidden</code>.</p> <p><a href="https://i.stack.imgur.com/1JaDO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1JaDO.png" alt="image" /></a></p> <p>The kubernetes debug logs are however completely empty, and the kubectl CLI also seems to work fine. For example the command <code>kubectl config get-contexts</code> returns:</p> <pre><code>CURRENT NAME CLUSTER AUTHINFO NAMESPACE .... * ftxt-gpus-dev.oa ftxt-gpus-dev.oa username my-namespace </code></pre> <p>When I run <code>kubectl auth can-i --list</code> I get the following:</p> <pre><code>Resources Non-Resource URLs Resource Names Verbs pods/exec [] [] [*] pods/portforward [] [] [*] pods/status [] [] [*] pods [] [] [*] secrets [] [] [*] cronjobs.batch [] [] [*] jobs.batch [] [] [*] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.k8s.io [] [] [create] events [] [] [get list watch] namespaces/status [] [] [get list watch] namespaces [] [] [get list watch] nodes/status [] [] [get list watch] nodes [] [] [get list watch] [/api/*] [] [get] [/api] [] [get] [/apis/*] [] [get] [/apis] [] [get] [/healthz] [] [get] [/healthz] [] [get] [/livez] [] [get] [/livez] [] [get] [/openapi/*] [] [get] [/openapi] [] [get] [/readyz] [] [get] [/readyz] [] [get] [/version/] [] [get] [/version/] [] [get] [/version] [] [get] [/version] [] [get] </code></pre>
<p>This error usually means that the required Role-Based Access Control (RBAC) permissions or authorization policy are not in place for the requests the extension makes. To fix it, first check the RBAC permissions for the user account you are using. You can do this by running <code>kubectl get clusterrolebinding</code> to view the current bindings. If you don’t have a suitable role binding, create one using <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Kubernetes RBAC</a>.</p> <p>You should also check any authorization policy that applies to the cluster, for example with <code>kubectl get authorizationpolicies</code> if your cluster uses such resources (e.g. via Istio). If a policy denies access to the resources the extension tries to list, update it so that your user can read them.</p>
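<p>As a minimal sketch (the binding name below is a placeholder, and <code>username</code> stands for the user shown in your kubeconfig), a read-only grant that covers what the extension typically lists can be created with the built-in <code>view</code> ClusterRole:</p> <pre><code># grant the built-in &quot;view&quot; ClusterRole cluster-wide to the user
kubectl create clusterrolebinding vscode-view \
  --clusterrole=view \
  --user=username
</code></pre> <p>Afterwards, re-run <code>kubectl auth can-i --list</code> and check whether resources such as deployments and services now appear with get/list/watch verbs; the extension needs at least list access to the resource kinds it shows in its tree view.</p>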
<p>Our system runs on GKE in a VPC-native network. We've recently upgraded from v1.9 to v1.21, and when we transferred the configuration, I've noticed the spec.template.spec.affinity.nodeAffinity in out kube-dns deployment is deleted and ignored. I tried manually adding this with &quot;kubectl apply -f kube-dns-deployment.yaml&quot;</p> <p>I get &quot;deployment.apps/kube-dns configured&quot;, but after a few seconds the kube-dns reverts to a configuration without this affinity.</p> <p>This is the relevant code in the yaml:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: addonmanager.kubernetes.io/mode: Reconcile k8s-app: kube-dns kubernetes.io/cluster-service: &quot;true&quot; name: kube-dns namespace: kube-system spec: progressDeadlineSeconds: 600 replicas: 2 revisionHistoryLimit: 10 selector: matchLabels: k8s-app: kube-dns strategy: rollingUpdate: maxSurge: 10% maxUnavailable: 0 type: RollingUpdate template: metadata: annotations: components.gke.io/component-name: kubedns prometheus.io/port: &quot;10054&quot; prometheus.io/scrape: &quot;true&quot; scheduler.alpha.kubernetes.io/critical-pod: &quot;&quot; seccomp.security.alpha.kubernetes.io/pod: runtime/default creationTimestamp: null labels: k8s-app: kube-dns spec: affinity: nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: - preference: matchExpressions: - key: cloud.google.com/gke-nodepool operator: In values: - pool-1 weight: 20 - preference: matchExpressions: - key: cloud.google.com/gke-nodepool operator: In values: - pool-3 - training-pool weight: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: cloud.google.com/gke-nodepool operator: In values: - pool-1 - pool-3 - training-pool podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: - kube-dns topologyKey: kubernetes.io/hostname weight: 100 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: k8s-app operator: In values: - kube-dns topologyKey: cloud.google.com/hostname containers: .... dnsPolicy: Default nodeSelector: kubernetes.io/os: linux </code></pre> <p>This is what I get when I run <em>$ kubectl get deployment kube-dns -n kube-system -o yaml</em>:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: annotations: .... labels: addonmanager.kubernetes.io/mode: Reconcile k8s-app: kube-dns kubernetes.io/cluster-service: &quot;true&quot; name: kube-dns namespace: kube-system resourceVersion: &quot;16650828&quot; uid: .... spec: progressDeadlineSeconds: 600 replicas: 2 revisionHistoryLimit: 10 selector: matchLabels: k8s-app: kube-dns strategy: rollingUpdate: maxSurge: 10% maxUnavailable: 0 type: RollingUpdate template: metadata: annotations: components.gke.io/component-name: kubedns prometheus.io/port: &quot;10054&quot; prometheus.io/scrape: &quot;true&quot; scheduler.alpha.kubernetes.io/critical-pod: &quot;&quot; seccomp.security.alpha.kubernetes.io/pod: runtime/default creationTimestamp: null labels: k8s-app: kube-dns spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: - kube-dns topologyKey: kubernetes.io/hostname weight: 100 containers: ... 
dnsPolicy: Default nodeSelector: kubernetes.io/os: linux priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 65534 supplementalGroups: - 65534 serviceAccount: kube-dns serviceAccountName: kube-dns terminationGracePeriodSeconds: 30 tolerations: - key: CriticalAddonsOnly operator: Exists - key: components.gke.io/gke-managed-components operator: Exists volumes: - configMap: defaultMode: 420 name: kube-dns optional: true name: kube-dns-config status: ... </code></pre> <p>As you can see, GKE just REMOVES the NodeAffinity part, as well as one part of the podAffinity.</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/kube-dns" rel="nofollow noreferrer">kube-dns</a> is a service discovery mechanism within GKE, and the default DNS provider used by the clusters. It is managed by Google and that is why the changes are not holding, and most probably that part of the code was removed in the new version.</p> <p>If you need to apply a custom configuration, you can do that following the guide <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/custom-kube-dns" rel="nofollow noreferrer">Setting up a custom kube-dns Deployment</a>.</p>
<p>I need to create a Kubernetes clientset using a token extracted from JSON service account key file.</p> <p>I explicitly provide this token inside the config, however it still looks for Google Application-Default credentials, and crashes because it cannot find them.</p> <p>Below is my code:</p> <pre><code>package main import ( &quot;context&quot; &quot;encoding/base64&quot; &quot;fmt&quot; &quot;io/ioutil&quot; &quot;golang.org/x/oauth2&quot; &quot;golang.org/x/oauth2/google&quot; gke &quot;google.golang.org/api/container/v1&quot; &quot;google.golang.org/api/option&quot; &quot;k8s.io/client-go/kubernetes&quot; _ &quot;k8s.io/client-go/plugin/pkg/client/auth/gcp&quot; &quot;k8s.io/client-go/tools/clientcmd&quot; &quot;k8s.io/client-go/tools/clientcmd/api&quot; ) const ( projectID = &quot;my_project_id&quot; clusterName = &quot;my_cluster_name&quot; scope = &quot;https://www.googleapis.com/auth/cloud-platform&quot; ) func main() { ctx := context.Background() // Read JSON key and extract the token data, err := ioutil.ReadFile(&quot;sa_key.json&quot;) if err != nil { panic(err) } creds, err := google.CredentialsFromJSON(ctx, data, scope) if err != nil { panic(err) } token, err := creds.TokenSource.Token() if err != nil { panic(err) } fmt.Println(&quot;token&quot;, token.AccessToken) // Create GKE client tokenSource := oauth2.StaticTokenSource(token) gkeClient, err := gke.NewService(ctx, option.WithTokenSource(tokenSource)) if err != nil { panic(err) } // Create a dynamic kube config inMemKubeConfig, err := createInMemKubeConfig(ctx, gkeClient, token, projectID) if err != nil { panic(err) } // Use it to create a rest.Config config, err := clientcmd.NewNonInteractiveClientConfig(*inMemKubeConfig, clusterName, &amp;clientcmd.ConfigOverrides{CurrentContext: clusterName}, nil).ClientConfig() if err != nil { panic(err) } // Create the clientset clientset, err := kubernetes.NewForConfig(config) if err != nil { panic(err) // this where the code crashes because it can't find the Google ADCs } fmt.Printf(&quot;clientset %+v\n&quot;, clientset) } func createInMemKubeConfig(ctx context.Context, client *gke.Service, token *oauth2.Token, projectID string) (*api.Config, error) { k8sConf := api.Config{ APIVersion: &quot;v1&quot;, Kind: &quot;Config&quot;, Clusters: map[string]*api.Cluster{}, AuthInfos: map[string]*api.AuthInfo{}, Contexts: map[string]*api.Context{}, } // List all clusters in project with id projectID across all zones (&quot;-&quot;) resp, err := client.Projects.Zones.Clusters.List(projectID, &quot;-&quot;).Context(ctx).Do() if err != nil { return nil, err } for _, f := range resp.Clusters { name := fmt.Sprintf(&quot;gke_%s_%s_%s&quot;, projectID, f.Zone, f.Name) // My custom naming convention cert, err := base64.StdEncoding.DecodeString(f.MasterAuth.ClusterCaCertificate) if err != nil { return nil, err } k8sConf.Clusters[name] = &amp;api.Cluster{ CertificateAuthorityData: cert, Server: &quot;https://&quot; + f.Endpoint, } k8sConf.Contexts[name] = &amp;api.Context{ Cluster: name, AuthInfo: name, } k8sConf.AuthInfos[name] = &amp;api.AuthInfo{ Token: token.AccessToken, AuthProvider: &amp;api.AuthProviderConfig{ Name: &quot;gcp&quot;, Config: map[string]string{ &quot;scopes&quot;: scope, }, }, } } return &amp;k8sConf, nil } </code></pre> <p>and here is the error message:</p> <pre><code>panic: cannot construct google default token source: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information. </code></pre>
<p>Here's what worked for me.</p> <p>It is based on this <a href="https://gist.github.com/ahmetb/548059cdbf12fb571e4e2f1e29c48997" rel="nofollow noreferrer">gist</a> and it's exactly what I was looking for. It uses an <code>oauth2.TokenSource</code> object which can be fed with a variety of token types so it's quite flexible.</p> <p>It took me a long time to find this solution so I hope this helps somebody!</p> <pre><code>package main import ( &quot;context&quot; &quot;encoding/base64&quot; &quot;fmt&quot; &quot;io/ioutil&quot; &quot;log&quot; &quot;net/http&quot; gke &quot;google.golang.org/api/container/v1&quot; &quot;google.golang.org/api/option&quot; &quot;golang.org/x/oauth2&quot; &quot;golang.org/x/oauth2/google&quot; metav1 &quot;k8s.io/apimachinery/pkg/apis/meta/v1&quot; &quot;k8s.io/client-go/kubernetes&quot; &quot;k8s.io/client-go/rest&quot; clientcmdapi &quot;k8s.io/client-go/tools/clientcmd/api&quot; ) const ( googleAuthPlugin = &quot;gcp&quot; projectID = &quot;my_project&quot; clusterName = &quot;my_cluster&quot; zone = &quot;my_cluster_zone&quot; scope = &quot;https://www.googleapis.com/auth/cloud-platform&quot; ) type googleAuthProvider struct { tokenSource oauth2.TokenSource } // These funcitons are needed even if we don't utilize them // So that googleAuthProvider is an rest.AuthProvider interface func (g *googleAuthProvider) WrapTransport(rt http.RoundTripper) http.RoundTripper { return &amp;oauth2.Transport{ Base: rt, Source: g.tokenSource, } } func (g *googleAuthProvider) Login() error { return nil } func main() { ctx := context.Background() // Extract a token from the JSON SA key data, err := ioutil.ReadFile(&quot;sa_key.json&quot;) if err != nil { panic(err) } creds, err := google.CredentialsFromJSON(ctx, data, scope) if err != nil { panic(err) } token, err := creds.TokenSource.Token() if err != nil { panic(err) } tokenSource := oauth2.StaticTokenSource(token) // Authenticate with the token // If it's nil use Google ADC if err := rest.RegisterAuthProviderPlugin(googleAuthPlugin, func(clusterAddress string, config map[string]string, persister rest.AuthProviderConfigPersister) (rest.AuthProvider, error) { var err error if tokenSource == nil { tokenSource, err = google.DefaultTokenSource(ctx, scope) if err != nil { return nil, fmt.Errorf(&quot;failed to create google token source: %+v&quot;, err) } } return &amp;googleAuthProvider{tokenSource: tokenSource}, nil }); err != nil { log.Fatalf(&quot;Failed to register %s auth plugin: %v&quot;, googleAuthPlugin, err) } gkeClient, err := gke.NewService(ctx, option.WithTokenSource(tokenSource)) if err != nil { panic(err) } clientset, err := getClientSet(ctx, gkeClient, projectID, org, env) if err != nil { panic(err) } // Demo to make sure it works pods, err := clientset.CoreV1().Pods(&quot;&quot;).List(ctx, metav1.ListOptions{}) if err != nil { panic(err) } log.Printf(&quot;There are %d pods in the cluster&quot;, len(pods.Items)) for _, pod := range pods.Items { fmt.Println(pod.Name) } } func getClientSet(ctx context.Context, client *gke.Service, projectID, name string) (*kubernetes.Clientset, error) { // Get cluster info cluster, err := client.Projects.Zones.Clusters.Get(projectID, zone, name).Context(ctx).Do() if err != nil { panic(err) } // Decode cluster CA certificate cert, err := base64.StdEncoding.DecodeString(cluster.MasterAuth.ClusterCaCertificate) if err != nil { return nil, err } // Build a config using the cluster info config := &amp;rest.Config{ TLSClientConfig: rest.TLSClientConfig{ CAData: cert, }, Host: 
&quot;https://&quot; + cluster.Endpoint, AuthProvider: &amp;clientcmdapi.AuthProviderConfig{Name: googleAuthPlugin}, } return kubernetes.NewForConfig(config) } </code></pre>
<p>We have created two machine deployments.</p> <pre><code>kubectl get machinedeployment -A
NAMESPACE     NAME                       REPLICAS   AVAILABLE-REPLICAS   PROVIDER   OS       KUBELET   AGE
kube-system   abc                        3          3                    hetzner    ubuntu   1.24.9    116m
kube-system   vnr4jdxd6s-worker-tgl65w   1          1                    hetzner    ubuntu   1.24.9    13d
</code></pre> <pre><code>kubectl get nodes
NAME                                        STATUS   ROLES    AGE   VERSION
abc-b6647d7cb-bcprj                         Ready    &lt;none&gt;   62m   v1.24.9
abc-b6647d7cb-llsq8                         Ready    &lt;none&gt;   65m   v1.24.9
abc-b6647d7cb-mtlsl                         Ready    &lt;none&gt;   58m   v1.24.9
vnr4jdxd6s-worker-tgl65w-59ff7fc46c-d9tm6   Ready    &lt;none&gt;   13d   v1.24.9
</code></pre> <p>We know that we can add a label to a specific node:</p> <pre><code>kubectl label nodes abc-b6647d7cb-bcprj key=value
</code></pre> <p>But our nodes are autoscaled. We would like to install, for example, MariaDB Galera on the nodes of a specific MachineDeployment. Is it somehow possible to annotate all nodes belonging to a particular MachineDeployment?</p>
<p>You can use the <a href="https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_annotate/" rel="nofollow noreferrer">kubectl annotate</a> command to add a key-value annotation to a set of nodes. For example, to put an annotation on every node in the cluster you can run:</p> <pre><code>kubectl annotate nodes --all key=value
</code></pre> <p>Keep in mind that <code>--all</code> really does mean every node in the cluster; it does not filter by machine deployment on its own (see the label-selector sketch below for a scoped variant).</p> <p>For more information follow this <a href="https://www.kubermatic.com/blog/annotating-machine-deployment-for-autoscaling/" rel="nofollow noreferrer">blog by Seyi Ewegbemi</a>.</p>
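<p>To restrict the annotation to the nodes of one MachineDeployment, select them by a label they share. The label key and annotation below are placeholders; the exact label depends on how your machine-controller labels nodes in your setup:</p> <pre><code># annotate only the nodes matching the label selector
kubectl annotate nodes -l worker-group=abc mariadb.example.com/galera=true
</code></pre> <p>If no such label exists yet, you can add one to the MachineDeployment's node template (or apply it with <code>kubectl label nodes</code>) and then use it both for the annotation and as a <code>nodeSelector</code> for the MariaDB Galera pods.</p>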
<p>I created a node js TLS server, dockerized it, and created a K8S Deployment and ClusterIP service for it. I created a DNS for the LoadBalancer service external IP of istio-ingressgateway and I’m using this DNS to try access this TLS server using istio but for some reason this error appears</p> <pre><code>[2022-02-10T04:28:38.302Z] &quot;- - -&quot; 0 NR filter_chain_not_found - &quot;-&quot; 0 0 3087 - &quot;-&quot; &quot;-&quot; &quot;-&quot; &quot;-&quot; &quot;-&quot; &quot;-&quot; - - 10.120.22.33:7070 10.101.31.172:44748 - - </code></pre> <p>The node server.js file:</p> <pre><code>const tls = require(&quot;tls&quot;); const fs = require(&quot;fs&quot;); const options = { key: fs.readFileSync(&quot;server-key.pem&quot;), cert: fs.readFileSync(&quot;server-cert.pem&quot;), rejectUnauthorized: false, }; process.env[&quot;NODE_TLS_REJECT_UNAUTHORIZED&quot;] = 0; const server = tls.createServer(options, (socket) =&gt; { console.log( &quot;server connected&quot;, socket.authorized ? &quot;authorized&quot; : &quot;unauthorized&quot; ); socket.write(&quot;welcome!\n&quot;); socket.setEncoding(&quot;utf8&quot;); socket.pipe(socket); }); server.listen(7070, () =&gt; { console.log(&quot;server bound&quot;); }); </code></pre> <p>The client.js file I use to connect to the server:</p> <pre><code>const tls = require(&quot;tls&quot;); const fs = require(&quot;fs&quot;); const options = { ca: [fs.readFileSync(&quot;server-cert.pem&quot;, { encoding: &quot;utf-8&quot; })], }; var socket = tls.connect( 7070, &quot;HOSTNAME&quot;, options, () =&gt; { console.log( &quot;client connected&quot;, socket.authorized ? &quot;authorized&quot; : &quot;unauthorized&quot; ); process.stdin.pipe(socket); process.stdin.resume(); } ); socket.setEncoding(&quot;utf8&quot;); socket.on(&quot;data&quot;, (data) =&gt; { console.log(data); }); socket.on(&quot;end&quot;, () =&gt; { console.log(&quot;Ended&quot;); }); </code></pre> <p>The cluster service.yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nodejs-service namespace: nodejs-tcp spec: ports: - name: web port: 7070 protocol: TCP targetPort: 7070 selector: app: nodejs sessionAffinity: None type: ClusterIP </code></pre> <p>The istio-gateway.yaml</p> <pre><code>apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: nodejs-gw namespace: nodejs-tcp spec: selector: istio: istio-ingressgateway servers: - hosts: - 'HOSTNAME' port: name: tls number: 7070 protocol: TLS tls: credentialName: tls-secret mode: PASSTHROUGH </code></pre> <p>In credentialName, I created a generic secret that holds the values of the private key and the certificate of the server</p> <p>The istio-virtual-service.yaml</p> <pre><code>apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: nodejs-vs namespace: nodejs-tcp spec: gateways: - nodejs-gw hosts: - 'HOSTNAME' tls: - match: - port: 7070 sniHosts: - HOSTNAME route: - destination: host: nodejs-service port: number: 7070 </code></pre> <p>The Istio version I’m using:</p> <pre><code>client version: 1.12.2 control plane version: 1.12.2 data plane version: 1.12.2 (159 proxies) </code></pre> <p>Your help is so much appreciated. Thanks in advance.</p>
<p>One thing I noticed right away is that you are using the incorrect selector in your <code>istio-gateway</code>, it should be:</p> <pre><code>spec: selector: istio: ingressgateway </code></pre> <p>A good troubleshooting starting point would be to get the routes for your <code>ingressgateway</code> and validate that you see the expected ones.</p> <ol> <li>First you need to know your pod's name:</li> </ol> <pre><code>kubectl get pods -n &lt;namespace_of_your_app&gt; NAME READY STATUS RESTARTS AGE pod/my-nginx-xxxxxxxxx-xxxxx 2/2 Running 0 50m </code></pre> <p>In my deployment, it is an nginx pod.</p> <ol start="2"> <li>Once you have the name, you can get the routes specific to your hostname:</li> </ol> <pre><code>istioctl pc routes &lt;your_pod_name&gt;.&lt;namespace&gt; NOTE: This output only contains routes loaded via RDS. NAME DOMAINS MATCH VIRTUAL SERVICE my-nginx.default.svc.cluster.local:443 my-nginx /* </code></pre> <p>This is an output example for a hostname &quot;my-nginx&quot;. If the output returns no route, it usually means that it does not match SNI and/or cannot find a specific route.</p>
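<p>For reference, here is the Gateway from the question with only the selector corrected; everything else is kept exactly as you had it:</p> <pre><code>apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: nodejs-gw
  namespace: nodejs-tcp
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - 'HOSTNAME'
    port:
      name: tls
      number: 7070
      protocol: TLS
    tls:
      credentialName: tls-secret
      mode: PASSTHROUGH
</code></pre> <p>Also double-check that the istio-ingressgateway Service actually exposes port 7070; if it only listens on the default 80/443 ports, traffic sent to 7070 on the load balancer hostname will never reach the gateway listener.</p>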
<p>I want to access a value whose name contains a dot with Helm, to use it in a ConfigMap. The value looks like this:</p> <pre><code>valuenum:
  joji.json: zok
</code></pre> <p>I want to use it in the ConfigMap template like this:</p> <pre><code>{{ toYaml .Values.valuenum.joji.json }}
</code></pre> <p>It returns a syntax error, and I could not find a fix for it.</p>
<p>I found the answer myself: with the <code>index</code> function you can look up nested values, and quoting the key lets it contain dots.</p> <pre><code>{{ index .Values.valuenum &quot;joji.json&quot; }}
</code></pre> <p><a href="https://helm.sh/docs/chart_template_guide/variables/" rel="nofollow noreferrer">Link to the Helm docs covering index and more</a></p>
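<p>As a small sketch of how this looks in a ConfigMap template (the ConfigMap name and data key below are made up for illustration):</p> <pre><code># values.yaml
valuenum:
  joji.json: zok

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  myvalue: {{ index .Values.valuenum &quot;joji.json&quot; | quote }}
</code></pre> <p>The <code>quote</code> pipe is optional; it simply wraps the rendered value in quotes so the YAML stays valid for any string value.</p>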
<p>I am having trouble with kubernetes cluster and setting up a load balancer with Digital Ocean. The config has worked before, but I'm not sure if something is an outdated version or needs some other change to make this work. Is there a way to ensure the SyncLoadBalancer succeeds? I have waited for more than an hour and the load balancer has long been listed as online in the DigitalOcean dashboard.</p> <pre><code>Name: my-cluster Namespace: default Labels: app.kubernetes.io/instance=prod app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=my-company app.kubernetes.io/part-of=my-company app.kubernetes.io/version=1.1.67 helm.sh/chart=my-company-0.1.51 Annotations: kubernetes.digitalocean.com/load-balancer-id: e7bbf8b7-29e0-407c-adce-XXXXXXXXX meta.helm.sh/release-name: prod meta.helm.sh/release-namespace: default service.beta.kubernetes.io/do-loadbalancer-certificate-id: 8be22723-b242-4bea-9963-XXXXXXXX service.beta.kubernetes.io/do-loadbalancer-disable-lets-encrypt-dns-records: false service.beta.kubernetes.io/do-loadbalancer-name: prod-load-balancer service.beta.kubernetes.io/do-loadbalancer-protocol: https service.beta.kubernetes.io/do-loadbalancer-size-unit: 1 Selector: app.kubernetes.io/instance=prod,app.kubernetes.io/name=my-company,app.kubernetes.io/part-of=my-company Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.245.16.78 IPs: 10.245.16.78 LoadBalancer Ingress: 24.199.70.237 Port: https 443/TCP TargetPort: http/TCP NodePort: https 32325/TCP Endpoints: 10.244.0.163:80 Session Affinity: None External Traffic Policy: Cluster Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning SyncLoadBalancerFailed 18m service-controller Error syncing load balancer: failed to ensure load balancer: load-balancer is not yet active (current status: new) Warning SyncLoadBalancerFailed 18m service-controller Error syncing load balancer: failed to ensure load balancer: failed to update load-balancer with ID e7bbf8b7-29e0-407c-adce-94a3205b38b5: PUT https://api.digitalocean.com/v2/load_balancers/e7bbf8b7-29e0-407c-adce-94a3205b38b5: 403 (request &quot;b06545a5-c701-46d1-be84-3740196c21c7&quot;) Load Balancer can't be updated while it processes previous actions Warning SyncLoadBalancerFailed 18m service-controller Error syncing load balancer: failed to ensure load balancer: failed to update load-balancer with ID e7bbf8b7-29e0-407c-adce-94a3205b38b5: PUT https://api.digitalocean.com/v2/load_balancers/e7bbf8b7-29e0-407c-adce-94a3205b38b5: 403 (request &quot;27b58084-7ff0-46a3-830b-6210a12278ab&quot;) Load Balancer can't be updated while it processes previous actions Warning SyncLoadBalancerFailed 17m service-controller Error syncing load balancer: failed to ensure load balancer: failed to update load-balancer with ID e7bbf8b7-29e0-407c-adce-94a3205b38b5: PUT https://api.digitalocean.com/v2/load_balancers/e7bbf8b7-29e0-407c-adce-94a3205b38b5: 403 (request &quot;22ff352c-8486-4a69-8ffc-a4bba64147dc&quot;) Load Balancer can't be updated while it processes previous actions Warning SyncLoadBalancerFailed 17m service-controller Error syncing load balancer: failed to ensure load balancer: failed to update load-balancer with ID e7bbf8b7-29e0-407c-adce-94a3205b38b5: PUT https://api.digitalocean.com/v2/load_balancers/e7bbf8b7-29e0-407c-adce-94a3205b38b5: 403 (request &quot;ec7f0138-99ba-4932-b1ff-1cfe46ed24c5&quot;) Load Balancer can't be updated while it processes previous actions Normal EnsuringLoadBalancer 15m (x10 over 10h) service-controller Ensuring load 
balancer Normal EnsuredLoadBalancer 15m (x5 over 10h) service-controller Ensured load balancer </code></pre>
<p>Below are troubleshooting steps that might help resolve your issue:</p> <ol> <li>If you specify 2 ports in the YAML file, the load balancer takes the whole range between those 2 ports, which blocks them from being reused by another service.</li> <li>If a port (e.g. 8086) is already used by a forwarding rule, it cannot be reused for another service.</li> <li>If you have health checks enabled on your load balancer, check that they are all passing.</li> <li>Verify that the load balancer is reachable from the public internet.</li> <li>Finally, restart the cluster and try to deploy again.</li> </ol> <p>For more information follow the troubleshooting <a href="https://docs.digitalocean.com/support/check-your-load-balancers-connectivity/#:%7E:text=Check%20the%20status%20of%20load,pointed%20at%20the%20load%20balancer." rel="nofollow noreferrer">documentation</a>. Also see this <a href="https://github.com/digitalocean/digitalocean-cloud-controller-manager/issues/110" rel="nofollow noreferrer">issue</a> with a similar error.</p>
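<p>Once you have ruled the above out, you can watch whether the sync eventually succeeds by re-checking the service events and, if you use <code>doctl</code>, the load balancer state itself (the ID below is the one from your annotation):</p> <pre><code># re-check the Kubernetes side
kubectl describe service my-cluster -n default

# check the DigitalOcean side (requires doctl and an API token)
doctl compute load-balancer get e7bbf8b7-29e0-407c-adce-XXXXXXXXX
</code></pre> <p>If the events alternate between &quot;Load Balancer can't be updated while it processes previous actions&quot; and successful syncs, that usually just means the controller retried while the load balancer was still provisioning; the final &quot;Ensured load balancer&quot; event in your output suggests the sync did complete in the end.</p>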
<p>I have a GKE cluster running with several persistent disks for storage. To set up a staging environment, I created a second cluster inside the same project. Now I want to use the data from the persistent disks of the production cluster in the staging cluster.</p> <p>I already created persistent disks for the staging cluster. What is the best approach to move over the production data to the disks of the staging cluster.</p>
<p>You can use the open source tool <a href="https://velero.io/" rel="nofollow noreferrer">Velero</a> which is designed to migrate Kubernetes cluster resources.</p> <p>Follow these steps to migrate a persistent disk within GKE clusters:</p> <ol> <li>Create a GCS bucket:</li> </ol> <pre><code>BUCKET=&lt;your_bucket_name&gt; gsutil mb gs://$BUCKET/ </code></pre> <ol start="2"> <li>Create a <a href="https://cloud.google.com/iam/docs/service-accounts" rel="nofollow noreferrer">Google Service Account</a> and store the associated email in a variable for later use:</li> </ol> <pre><code>GSA_NAME=&lt;your_service_account_name&gt; gcloud iam service-accounts create $GSA_NAME \ --display-name &quot;Velero service account&quot; SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \ --filter=&quot;displayName:Velero service account&quot; \ --format 'value(email)') </code></pre> <ol start="3"> <li>Create a custom role for the Service Account:</li> </ol> <pre><code>PROJECT_ID=&lt;your_project_id&gt; ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list ) gcloud iam roles create velero.server \ --project $PROJECT_ID \ --title &quot;Velero Server&quot; \ --permissions &quot;$(IFS=&quot;,&quot;; echo &quot;${ROLE_PERMISSIONS[*]}&quot;)&quot; gcloud projects add-iam-policy-binding $PROJECT_ID \ --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \ --role projects/$PROJECT_ID/roles/velero.server gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET} </code></pre> <ol start="4"> <li>Grant access to Velero:</li> </ol> <pre><code>gcloud iam service-accounts keys create credentials-velero \ --iam-account $SERVICE_ACCOUNT_EMAIL </code></pre> <ol start="5"> <li>Download and install Velero on the source cluster:</li> </ol> <pre><code>wget https://github.com/vmware-tanzu/velero/releases/download/v1.8.1/velero-v1.8.1-linux-amd64.tar.gz tar -xvzf velero-v1.8.1-linux-amd64.tar.gz sudo mv velero-v1.8.1-linux-amd64/velero /usr/local/bin/velero velero install \ --provider gcp \ --plugins velero/velero-plugin-for-gcp:v1.4.0 \ --bucket $BUCKET \ --secret-file ./credentials-velero </code></pre> <p>Note: The download and installation was performed on a Linux system, which is the OS used by Cloud Shell. 
If you are managing your GCP resources via Cloud SDK, the release and installation process could vary.</p> <ol start="6"> <li>Confirm that the velero pod is running:</li> </ol> <pre><code>$ kubectl get pods -n velero NAME READY STATUS RESTARTS AGE velero-xxxxxxxxxxx-xxxx 1/1 Running 0 11s </code></pre> <ol start="7"> <li>Create a backup for the PV,PVCs:</li> </ol> <pre><code>velero backup create &lt;your_backup_name&gt; --include-resources pvc,pv --selector app.kubernetes.io/&lt;your_label_name&gt;=&lt;your_label_value&gt; </code></pre> <ol start="8"> <li>Verify that your backup was successful with no errors or warnings:</li> </ol> <pre><code>$ velero backup describe &lt;your_backup_name&gt; --details Name: your_backup_name Namespace: velero Labels: velero.io/storage-location=default Annotations: velero.io/source-cluster-k8s-gitversion=v1.21.6-gke.1503 velero.io/source-cluster-k8s-major-version=1 velero.io/source-cluster-k8s-minor-version=21 Phase: Completed Errors: 0 Warnings: 0 </code></pre> <hr /> <p>Now that the Persistent Volumes are backed up, you can proceed with the migration to the destination cluster following these steps:</p> <ol> <li>Authenticate in the destination cluster</li> </ol> <pre><code>gcloud container clusters get-credentials &lt;your_destination_cluster&gt; --zone &lt;your_zone&gt; --project &lt;your_project&gt; </code></pre> <ol start="2"> <li>Install Velero using the same parameters as step 5 on the first part:</li> </ol> <pre><code>velero install \ --provider gcp \ --plugins velero/velero-plugin-for-gcp:v1.4.0 \ --bucket $BUCKET \ --secret-file ./credentials-velero </code></pre> <ol start="3"> <li>Confirm that the velero pod is running:</li> </ol> <pre><code>kubectl get pods -n velero NAME READY STATUS RESTARTS AGE velero-xxxxxxxxxx-xxxxx 1/1 Running 0 19s </code></pre> <ol start="4"> <li>To avoid the backup data being overwritten, change the bucket to read-only mode:</li> </ol> <pre><code>kubectl patch backupstoragelocation default -n velero --type merge --patch '{&quot;spec&quot;:{&quot;accessMode&quot;:&quot;ReadOnly&quot;}}' </code></pre> <ol start="5"> <li>Confirm Velero is able to access the backup from bucket:</li> </ol> <pre><code>velero backup describe &lt;your_backup_name&gt; --details </code></pre> <ol start="6"> <li>Restore the backed up Volumes:</li> </ol> <pre><code>velero restore create --from-backup &lt;your_backup_name&gt; </code></pre> <ol start="7"> <li>Confirm that the persistent volumes have been restored on the destination cluster:</li> </ol> <pre><code>kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE redis-data-my-release-redis-master-0 Bound pvc-ae11172a-13fa-4ac4-95c5-d0a51349d914 8Gi RWO standard 79s redis-data-my-release-redis-replicas-0 Bound pvc-f2cc7e07-b234-415d-afb0-47dd7b9993e7 8Gi RWO standard 79s redis-data-my-release-redis-replicas-1 Bound pvc-ef9d116d-2b12-4168-be7f-e30b8d5ccc69 8Gi RWO standard 79s redis-data-my-release-redis-replicas-2 Bound pvc-65d7471a-7885-46b6-a377-0703e7b01484 8Gi RWO standard 79s </code></pre> <p>Check out this <a href="https://faun.pub/clone-migrate-data-between-kubernetes-clusters-with-velero-e298196ec3d8" rel="nofollow noreferrer">tutorial</a> as a reference.</p>
<p>I want to ask why the Kubernetes networking driver <a href="https://www.weave.works/docs/net/latest/overview/" rel="nofollow noreferrer">weave</a> container is generating a lot of logs?</p> <p>The log file size is 700MB after two days.</p> <p>How can I solve that?</p>
<h2 id="logs-in-kubernetes">Logs in kubernetes</h2> <p>As it was said in comment, kubernetes is not responsible for log rotation. This is from kubernetes documentation:</p> <blockquote> <p>An important consideration in node-level logging is implementing log rotation, so that logs don't consume all available storage on the node. Kubernetes is not responsible for rotating logs, but rather a deployment tool should set up a solution to address that. For example, in Kubernetes clusters, deployed by the kube-up.sh script, there is a logrotate tool configured to run each hour. You can also set up a container runtime to rotate an application's logs automatically.</p> </blockquote> <p>As proposed option, this can be managed on container's runtime level.</p> <p>Please refer to <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#logging-at-the-node-level" rel="nofollow noreferrer">Logging at the node level</a>.</p> <h2 id="reducing-logs-for-weave-cni">Reducing logs for Weave CNI</h2> <p>There are two containers in each pod. Weave itself and weave-npc (which is a network policy controller).</p> <p>By default weave's log level is set to INFO. This can be changed to WARNING to see only exceptions. This can be achieved by adding <code>--log-level</code> flag through the <code>EXTRA_ARGS</code> environment variable for the weave:</p> <pre><code>$ kubectl edit daemonset weave-net -n kube-system </code></pre> <p>So <code>weave container</code> part should look like:</p> <pre><code>spec: containers: - command: - /home/weave/launch.sh env: - name: HOSTNAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName - name: EXTRA_ARGS # this was added with value below! value: --log-level=warning - name: INIT_CONTAINER value: &quot;true&quot; image: docker.io/weaveworks/weave-kube:2.8.1 imagePullPolicy: IfNotPresent name: weave </code></pre> <p><a href="https://www.weave.works/docs/net/latest/kubernetes/kube-addon/#-reading-the-logs" rel="nofollow noreferrer">Weave - logs level</a>.</p> <p>A lot of logs go from <code>Weave NPC</code>, there's an option that allows to <code>disable</code> NPC. However based on documentation this is a paid option based on their documentation - <a href="https://www.weave.works/product/cloud/" rel="nofollow noreferrer">cloud.weave.works</a></p> <p><a href="https://www.weave.works/docs/net/latest/kubernetes/kube-addon/#-changing-configuration-options" rel="nofollow noreferrer">Weave - Changing configuration options</a></p>
<p>Good afternoon,</p> <p>I have recently deployed cnvrg CORE application on-premise with Minikube.</p> <p>In cnvrg CORE we can create a &quot;machine resource&quot; to give to the application some computer resources like CPU and GPU from a different machine through SSH.</p> <p>I have found a problem when creating a new resource of any type (in the attached image you can see an example). It says that I can't create the machine because &quot;I have reached the limit&quot; but the only machine I have is the default one (Kubernetes in my case).</p> <p>I haven't found any information about this on the internet, can you please tell me, is it a problem with the version of cnvrg CORE (v3.9.27)? Or is it something I have to configure during the installation?</p> <p>Thank you very much!</p> <p><a href="https://i.stack.imgur.com/zmNRL.png" rel="nofollow noreferrer">cnvrg message &quot;You've reached the maximum machines&quot;</a></p>
<p>I wrote to cnvrg support: adding machines is not allowed because CORE is the free community version, so the option to add new resources is not supported. Other, paid versions of cnvrg allow you to add new resources and provide other functionality, so cnvrg CORE is not a full open-source version of cnvrg.</p>
<p>I want to deploy redis pod which loads a list. Then I will have kubernetes job which will execute bash script with variable taken from that list in redis.</p> <p>How can I make this redis pod to be auto deleted when all items from a list are used?</p>
<p>By default, Kubernetes keeps the completed jobs and associated objects for debugging purposes, and you will lose all the generated logs by them when deleted.</p> <p>That being said, a job can be automatically deleted by using the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#ttl-mechanism-for-finished-jobs" rel="nofollow noreferrer">TTL mechanism for finished Jobs</a>.</p> <p>Here you can find an example of a job's manifest with the TTL enabled and set to delete the job and associated objects (pods, services, etc.) 100 sec after its completion:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: pi-with-ttl spec: ttlSecondsAfterFinished: 100 template: spec: containers: - name: pi image: perl command: [&quot;perl&quot;, &quot;-Mbignum=bpi&quot;, &quot;-wle&quot;, &quot;print bpi(2000)&quot;] restartPolicy: Never </code></pre>
<p>when one pod has below tolerations.</p> <pre><code> tolerations: - key: CriticalAddonsOnly operator: Exists - effect: NoExecute operator: Exists - effect: NoSchedule operator: Exists </code></pre> <p>why it can be scheduled to the node with below taints. <strong><code>Runtime=true:NoSchedule</code></strong></p> <p>And I also searched the kubernetes documents. Generally one tolerations group will include the 'key', so how does below works?</p> <pre><code> - effect: NoSchedule operator: Exists </code></pre>
<p>I have reproduced the issue,</p> <p>I have tainted the node <code>gke-cluster-4-default-pool-8ad24f8f-2ixm</code> with <code>Runtime=true:NoSchedule</code></p> <pre><code>$ kubectl get nodes NAME STATUS ROLES AGE VERSION gke-cluster-4-default-pool-8ad24f8f-2ixm Ready &lt;none&gt; 10d v1.26.6-gke.1700 gke-cluster-4-default-pool-8ad24f8f-ncy0 Ready &lt;none&gt; 10d v1.26.6-gke.1700 gke-cluster-4-default-pool-8ad24f8f-o537 Ready &lt;none&gt; 10d v1.26.6-gke.1700 $ kubectl taint nodes gke-cluster-4-default-pool-8ad24f8f-2ixm Runtime=true:NoSchedule node/gke-cluster-4-default-pool-8ad24f8f-2ixm tainted </code></pre> <p>Then I have created a deployment without any tolerations, so the pods are not scheduled with nodes with taints:</p> <pre><code>$ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-deployment-7f456874f4-95x66 1/1 Running 0 34s 10.24.2.6 gke-cluster-4-default-pool-8ad24f8f-o537 &lt;none&gt; &lt;none&gt; nginx-deployment-7f456874f4-9sj68 1/1 Running 0 34s 10.24.0.12 gke-cluster-4-default-pool-8ad24f8f-ncy0 &lt;none&gt; &lt;none&gt; nginx-deployment-7f456874f4-f4s98 1/1 Running 0 34s 10.24.0.13 gke-cluster-4-default-pool-8ad24f8f-ncy0 &lt;none&gt; &lt;none&gt; nginx-deployment-7f456874f4-zbgp9 1/1 Running 0 34s 10.24.2.7 gke-cluster-4-default-pool-8ad24f8f-o537 &lt;none&gt; &lt;none&gt; nginx-deployment-7f456874f4-zs4js 1/1 Running 0 34s 10.24.0.11 gke-cluster-4-default-pool-8ad24f8f-ncy0 &lt;none&gt; &lt;none&gt; </code></pre> <p>Later I have added the tolerations given by you and 2 pods have scheduled on the node with applied taints : (I have deleted the existing deployment and deployed after adding tolerations)</p> <pre><code>kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-deployment-6d998db8f-58wr2 1/1 Running 0 6s 10.24.1.7 gke-cluster-4-default-pool-8ad24f8f-2ixm &lt;none&gt; &lt;none&gt; nginx-deployment-6d998db8f-62dcm 1/1 Running 0 6s 10.24.2.8 gke-cluster-4-default-pool-8ad24f8f-o537 &lt;none&gt; &lt;none&gt; nginx-deployment-6d998db8f-srcmg 1/1 Running 0 6s 10.24.0.14 gke-cluster-4-default-pool-8ad24f8f-ncy0 &lt;none&gt; &lt;none&gt; nginx-deployment-6d998db8f-wv48m 1/1 Running 0 6s 10.24.1.6 gke-cluster-4-default-pool-8ad24f8f-2ixm &lt;none&gt; &lt;none&gt; nginx-deployment-6d998db8f-zbck2 1/1 Running 0 6s 10.24.0.15 gke-cluster-4-default-pool-8ad24f8f-ncy0 &lt;none&gt; &lt;none&gt; </code></pre> <p>So the tolerations and taints are working fine so the issue is with the resource itself, it can be due to below reasons:</p> <p>1.Insufficient resources on nodes such as CPU or memory check those with kubectl describe command.</p> <p>2.Double check the taints and tolerations if there are any other taints or tolerations that are stopping the pod to schedule.</p> <p>3.Check if you have any nodeselectors and affinity rules that prevent pods from scheduling the node.</p> <p>For further debugging add the describe command of pod and node. Attaching a <a href="https://foxutech.com/kubernetes-taints-and-tolerations-explained/" rel="nofollow noreferrer">blog</a> written by motoskia for your reference.</p>
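<p>On the original question itself: the reason the pod can land on the tainted node is the last toleration in the list. A toleration with <code>operator: Exists</code> and no <code>key</code> matches every taint with that effect, so <code>effect: NoSchedule</code> plus <code>operator: Exists</code> tolerates <code>Runtime=true:NoSchedule</code> (and any other NoSchedule taint). If the intent is to tolerate only that particular taint, a narrower toleration would look like this:</p> <pre><code>tolerations:
- key: Runtime
  operator: Equal
  value: &quot;true&quot;
  effect: NoSchedule
</code></pre>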
<p>So I'm dealing with a structure like this:</p> <pre><code>. β”œβ”€β”€ 1 β”‚Β Β  β”œβ”€β”€ env-vars β”‚Β Β  └── kustomization.yaml β”œβ”€β”€ 2 β”‚Β Β  β”œβ”€β”€ env-vars β”‚Β Β  └── kustomization.yaml β”œβ”€β”€ env-vars β”œβ”€β”€ kustomization.yaml └── shared β”œβ”€β”€ env-vars └── kustomization.yaml </code></pre> <p>while env-vars within each level has some env vars and</p> <pre><code>$cat kustomization.yaml bases: - 1/ - 2/ namePrefix: toplevel- configMapGenerator: - name: env-cm behavior: merge envs: - env-vars </code></pre> <pre><code>$cat 1/kustomization.yaml bases: - ./../shared namePrefix: first- configMapGenerator: - name: env-cm behavior: merge envs: - env-vars </code></pre> <pre><code>$cat 2/kustomization.yaml bases: - ./../shared namePrefix: second- configMapGenerator: - name: env-cm behavior: merge envs: - env-vars </code></pre> <pre><code>$cat shared/kustomization.yaml configMapGenerator: - name: env-cm behavior: create envs: - env-vars </code></pre> <p>I'm essentially trying to create 2 configmaps with some shared values (which are injected from different resources: from <code>shared</code> directory and the top-level directory)</p> <hr /> <p><code>kustomize build .</code> fails with some conflict errors for finding multiple objects:</p> <pre><code>Error: merging from generator &lt;blah&gt;: found multiple objects &lt;blah&gt; that could accept merge of ~G_v1_ConfigMap|~X|env-cm </code></pre> <p>Unfortunately I need to use <code>merge</code> on the top-level <code>configMapGenerator</code>, since there are some labels injected to <code>1</code> and <code>2</code> configmaps (so <code>create</code>ing a top-level configmap altho addresses the env-vars, but excludes the labels)</p> <p>Any suggestion on how to address this issue is appreciated</p>
<p>I believe this should solve your issue.</p> <p><code>kustomization.yaml</code> which is located at the base (<code>/</code>) level:</p> <pre><code>$ cat kustomization.yaml
resources:
- ./1
- ./2

namePrefix: toplevel-

configMapGenerator:
- name: first-env-cm
  behavior: merge
  envs:
  - env-vars
- name: second-env-cm
  behavior: merge
  envs:
  - env-vars
</code></pre> <p>With help of search I found <a href="https://github.com/kubernetes-sigs/kustomize/issues/1442" rel="nofollow noreferrer">this GitHub issue</a>, which looks like the same problem, and then a pull request with <a href="https://github.com/kubernetes-sigs/kustomize/pull/1520/files#diff-c1e6b6a8ce9692d830228e40df4a604cf063ef54ca54e157f70981557e72f08bL606-R609" rel="nofollow noreferrer">changes in code</a>. It shows that during the <code>kustomize</code> render the merge behaviour was changed to look for the <code>currentId</code> instead of the <code>originalId</code>. Knowing that, we can refer to the exact &quot;pre-rendered&quot; config maps (<code>first-env-cm</code>, <code>second-env-cm</code>) separately in the top-level generator.</p>
<p>I want to implement custom logic to determine readiness for my pod, and I went over this: <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.kubernetes-probes.external-state" rel="noreferrer">https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.kubernetes-probes.external-state</a> and they mention an example property: <code>management.endpoint.health.group.readiness.include=readinessState,customCheck</code></p> <p>Question is - how do I override <code>customCheck</code>? In my case I want to use HTTP probes, so the yaml looks like:</p> <pre><code>readinessProbe: initialDelaySeconds: 10 periodSeconds: 10 httpGet: path: /actuator/health port: 12345 </code></pre> <p>So then again - where and how should I apply logic that would determine when the app is ready (just like the link above, i'd like to rely on an external service in order for it to be ready)</p>
<p><code>customCheck</code> is the key of your custom <code>HealthIndicator</code>. The key for a given <code>HealthIndicator</code> is the name of the bean without the <code>HealthIndicator</code> suffix.</p> <p>You can read: <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.health.writing-custom-health-indicators" rel="nofollow noreferrer">https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.health.writing-custom-health-indicators</a></p> <p>You are defining a readinessProbe, so hitting <code>/actuator/health/readiness</code> is probably the better choice.</p> <pre><code>import org.springframework.boot.actuate.availability.AvailabilityStateHealthIndicator;
import org.springframework.boot.actuate.health.Status;
import org.springframework.boot.availability.ApplicationAvailability;
import org.springframework.boot.availability.AvailabilityState;
import org.springframework.boot.availability.ReadinessState;
import org.springframework.stereotype.Component;

// Registered as a bean named "customCheckHealthIndicator",
// so its health key becomes "customCheck"
@Component
public class CustomCheckHealthIndicator extends AvailabilityStateHealthIndicator {

    private final YourService yourService;

    public CustomCheckHealthIndicator(ApplicationAvailability availability, YourService yourService) {
        super(availability, ReadinessState.class, (statusMappings) -&gt; {
            statusMappings.add(ReadinessState.ACCEPTING_TRAFFIC, Status.UP);
            statusMappings.add(ReadinessState.REFUSING_TRAFFIC, Status.OUT_OF_SERVICE);
        });
        this.yourService = yourService;
    }

    @Override
    protected AvailabilityState getState(ApplicationAvailability applicationAvailability) {
        if (yourService.isInitCompleted()) {
            return ReadinessState.ACCEPTING_TRAFFIC;
        } else {
            return ReadinessState.REFUSING_TRAFFIC;
        }
    }

}
</code></pre>
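<p>If you switch to the readiness group endpoint, the probe from your manifest only needs its path changed (ports and timings kept as in the question):</p> <pre><code>readinessProbe:
  initialDelaySeconds: 10
  periodSeconds: 10
  httpGet:
    path: /actuator/health/readiness
    port: 12345
</code></pre> <p>With <code>management.endpoint.health.group.readiness.include=readinessState,customCheck</code> in place, that endpoint only reports UP once your indicator returns <code>ACCEPTING_TRAFFIC</code>, which is what makes Kubernetes hold traffic until your external dependency is ready.</p>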
<p>I am trying to prepare my application so that I can deploy it via kubernetes in the cloud. Therefore I installed minikube to get accustomed with how to set up an ingress. Therefore I followed the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">ingress documentation by kubernetes</a>. (NOTE: I did not expose my frontend service like they do in the beginning of the tutorial, as I understood it's not needed for an ingress).</p> <p>But after hours of desperate debugging and no useful help by ChatGPT I am still not able to resolve my bug. Whenever I try to access my application via my custom host (example.com), I get <code>InvalidHostHeader</code> as a response.</p> <p>For simplicity's sake right now my application simply has one deployment with one pod that runs a vuejs frontend. My <code>frontend-deployment.yaml</code> looks like this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: frontend labels: app: frontend spec: replicas: 1 selector: matchLabels: app: frontend template: metadata: labels: app: frontend spec: containers: - name: frontend image: XXXXXXXXXXX imagePullPolicy: Always ports: - name: http containerPort: 8080 </code></pre> <p>My <code>frontend-service.yaml</code>:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: frontend spec: ports: - name: http port: 8080 targetPort: http selector: app: frontend type: ClusterIP </code></pre> <p>I use the default NGINX ingress controller of minikube. And my <code>ingress.yaml</code> looks like this:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: example.com http: paths: - path: / pathType: Prefix backend: service: name: frontend port: name: http </code></pre> <p>Obviously, I configure my <code>/etc/hosts</code> file to map my minikube ip address to <code>example.com</code></p> <p>Any help and also suggestions on best practices/improvements on the structure of the yaml files is welcome!</p>
<p>An <strong>invalid host header error</strong> can occur for several reasons, as outlined below:</p> <ol> <li>An incorrect hostname in your configuration. Check and confirm that the host name is correct using the <code>kubectl describe</code> command.</li> <li>Check that the image you are using exists and that you have access to it.</li> <li>Verify that the endpoints and services are configured correctly and running.</li> <li>Check the firewall rules in case they are blocking the traffic.</li> <li>Check that the DNS entry points to the correct host and IP, and check the logs for any errors to debug further.</li> </ol> <p>You can also refer to the <a href="https://medium.com/@ManagedKube/kubernetes-troubleshooting-ingress-and-services-traffic-flows-547ea867b120" rel="nofollow noreferrer">blog</a> by managedkube and another <a href="https://medium.com/@AvinashBlaze/what-is-this-invalid-host-header-error-9cd760ae6d16" rel="nofollow noreferrer">blog</a> written by Avinash Thakur for further information.</p>
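<p>A quick way to check several of these points at once, using the resource names from your manifests (<code>frontend</code> service, <code>my-ingress</code>):</p> <pre><code># is the ingress picking up the host and backend you expect?
kubectl describe ingress my-ingress

# does the service actually have endpoints behind it?
kubectl get endpoints frontend

# does the app respond when you bypass the ingress entirely?
kubectl port-forward svc/frontend 8080:8080
curl http://localhost:8080/
</code></pre> <p>If the pod answers correctly through the port-forward but not through the ingress host, the problem is in the ingress/DNS layer; if it rejects the request either way, the &quot;Invalid Host Header&quot; response is likely coming from the Vue dev server inside the container (its host check) rather than from NGINX, and its allowed-hosts setting would be the place to adjust.</p>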