<p>I have a Kubernetes controller written using the client-go <a href="https://pkg.go.dev/k8s.io/client-go/informers" rel="nofollow noreferrer">informers</a> package. It maintains a watch on all Pods in the cluster; there are about 15k of these objects, and their YAML representation takes around 600 MB to print (I assume their in-memory representation is not that different.)</p> <p>As a result, this (otherwise really small) controller watching Pods ends up with a huge memory footprint (north of 1 GiB). Even methods that you'd think offer a way of filtering, such as <a href="https://pkg.go.dev/k8s.io/client-go/informers#NewFilteredSharedInformerFactory" rel="nofollow noreferrer"><code>NewFilteredSharedInformerFactory</code></a>, don't really give you a way to specify a predicate function that chooses which objects are stored in the in-memory <code>cache</code>.</p> <p>Instead, that method in client-go offers a <a href="https://pkg.go.dev/k8s.io/client-go/informers/internalinterfaces#TweakListOptionsFunc" rel="nofollow noreferrer"><code>TweakListOptionsFunc</code></a>. It helps you control <code>ListOptions</code>, but my predicate unfortunately cannot be satisfied with a <code>labelSelector</code> or <code>fieldSelector</code>. I need to drop the objects through a predicate function when they arrive at the controller.</p> <blockquote> <p>Note: the predicate I have is something like <em>&quot;Pods that have an ownerReference to a DaemonSet&quot;</em> (which is <a href="https://stackoverflow.com/questions/59442065/how-to-use-kubernetes-fieldselector-to-query-ownerreferences">not possible with <code>fieldSelectors</code></a>, also another question of mine) and there's no <code>labelSelector</code> that can work in my scenario.</p> </blockquote> <p>How would I go about configuring an informer on Pods that only have DaemonSet owner references to reduce the memory footprint of my controller?</p>
<p>It appears that today, if you use <a href="https://pkg.go.dev/k8s.io/client-go/informers#SharedInformerFactory" rel="nofollow noreferrer">SharedInformers</a>, there's no way to filter which objects to keep in the shared cache and which ones to discard.</p> <p>I have found an <a href="https://github.com/kubernetes/kube-state-metrics/blob/734389481ab1a47422f07e80ef93b4359cadbaed/pkg/sharding/listwatch.go" rel="nofollow noreferrer">interesting code snippet</a> in the kube-state-metrics project that opts into the lower layer of abstraction of initiating <code>Watch</code> calls directly (which would normally be considered an anti-pattern) and uses <a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/watch#Filter" rel="nofollow noreferrer"><code>watch.Filter</code></a> to decide whether an object coming out of a Watch() call is passed on (to a cache/reflector) or not.</p> <p>That said, many controller authors might choose not to go down this path, as it requires you to set up your own cache/reflector/indexer around the Watch() call. Furthermore, projects like <code>controller-runtime</code> don't even let you get access to this low-level machinery, as far as I know.</p> <hr /> <p>Another way of reducing a controller's memory footprint is field/data erasure on the structs (instead of discarding objects altogether). This is possible in newer versions of <code>client-go</code> through <a href="https://pkg.go.dev/k8s.io/client-go/tools/cache#TransformFunc" rel="nofollow noreferrer"><code>cache.TransformFunc</code></a>, which lets you delete unneeded fields from objects before they are cached (though the objects themselves still consume some memory). This one is more of a band-aid that can make your situation better.</p> <hr /> <p>In my case, I mostly needed to watch for DaemonSet Pods in certain namespaces, so I refactored the code from using 1 informer (watching all namespaces) to N namespace-scoped informers running concurrently.</p>
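<p>For illustration, here is a minimal sketch of the <code>cache.TransformFunc</code> approach mentioned above (assuming a client-go version recent enough to expose <code>SetTransform</code>; the fields stripped below are just examples, pick whatever your controller never reads):</p> <pre class="lang-golang prettyprint-override"><code>package main

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// newSlimPodInformer returns a Pod informer whose cached objects are stripped
// of fields the controller never reads. It does not drop whole objects, but it
// can shrink the cache considerably. SetTransform must be called before the
// informer is started.
func newSlimPodInformer(clientset kubernetes.Interface) cache.SharedIndexInformer {
	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	podInformer := factory.Core().V1().Pods().Informer()

	_ = podInformer.SetTransform(func(obj interface{}) (interface{}, error) {
		if pod, ok := obj.(*corev1.Pod); ok {
			pod.ManagedFields = nil // usually the biggest offender
			pod.Annotations = nil
			pod.Spec.Volumes = nil
			pod.Status.Conditions = nil
		}
		return obj, nil
	})
	return podInformer
}
</code></pre>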
<p>I'm trying to deploy Traefik as an ingress controller on my GKE cluster. It's a basic cluster with 3 nodes.</p> <p>I'm used to deploying Traefik using manifests on a Kubernetes cluster deployed by Kubespray, but we are migrating some of our infrastructure to GCP.</p> <p>So I tried to deploy Traefik using the <a href="https://github.com/kubernetes/charts/tree/master/stable/traefik" rel="nofollow noreferrer">community helm chart</a> with the following configuration:</p> <pre><code>image: traefik
imageTag: 1.6.2
serviceType: LoadBalancer
loadBalancerIP: X.X.X.X
kubernetes:
  ingressClass: traefik
ssl:
  enabled: false
  enforced: false
  insecureSkipVerify: false
acme:
  enabled: false
  email: [email protected]
  staging: true
  logging: false
  challengeType: http-01
dashboard:
  enabled: true
  domain: traefik.mydomain.com
  ingress:
    annotations:
      kubernetes.io/ingress.class: traefik
gzip:
  enabled: true
accessLogs:
  enabled: true
  format: common
</code></pre> <p>And then launched it with the following command:</p> <pre><code>helm install --namespace kube-system --name traefik --values values.yaml stable/traefik
</code></pre> <p>All is deployed well on my K8S cluster, except the dashboard-ingress, which has the following error:</p> <pre><code>kevin@MBP-de-Kevin ~/W/g/s/traefik&gt; kubectl describe ingress traefik-dashboard -n kube-system
Name:             traefik-dashboard
Namespace:        kube-system
Address:
Default backend:  default-http-backend:80 (10.20.2.6:8080)
Rules:
  Host                  Path  Backends
  ----                  ----  --------
  traefik.mydomain.com        traefik-dashboard:80 (10.20.1.14:8080)
Annotations:
Events:
  Type     Reason  Age   From                     Message
  ----     ------  ---   ----                     -------
  Warning  Sync    4m    loadbalancer-controller  googleapi: Error 400: Invalid value for field 'namedPorts[2].port': '0'. Must be greater than or equal to 1, invalid
</code></pre> <p>Any idea where my error is?</p> <p>Thanks a lot!</p>
<pre><code>Invalid value for field 'namedPorts[0].port': '0' </code></pre> <p>This error happens when the <code>Service</code> that's being used by GKE Ingress is of type <code>ClusterIP</code> (and not <code>NodePort</code>). GKE Ingress requires backing Services to be of type NodePort.</p>
<p>When I have a multi-zone GKE cluster, my node pools run <code>num-nodes</code> nodes in each of the cluster's zones.</p> <p>GKE uses <em>zonal</em> instance groups, one in each of my cluster's zones.</p> <p>It seems like this could be implemented with a <em>regional</em> instance group instead.</p> <p>It seems that GKE node pools and regional instance groups are a similar age. Is the only reason node pools don't use regional instance groups simply that they weren't available as a GCE feature at the time?</p>
<p>As the other comment says, this question is not really suitable for Stack Overflow. It's an implementation detail of GKE, and not an important one to a user in practice.</p> <p>I work at Google (though I don't know the implementation details here), and my guess would be that it's because GKE needs to choose which 3 zones in a region it needs to use.</p> <p>For example, if a user's node pool is in the <code>-a</code>, <code>-b</code>, <code>-d</code> zones, Google (internally) also needs to create GKE master instances (not visible to users) in the same set of zones, and probably the way to coordinate this is to explicitly describe which zones to use by creating separate "zonal node pools".</p> <p>But I might be wrong. :) In the end, you should not really care how it's implemented. You should not go make edits to the managed instance groups created by GKE either. Maybe some day GKE will move on to "regional instance groups", too.</p>
<p>I have a GCE Airflow (Composer) cluster with a bunch of workers:</p> <pre><code>$ kubectl get pods
NAME                      READY     STATUS    RESTARTS   AGE
airflow-redis-0           1/1       Running   0          7h
airflow-scheduler         2/2       Running   0          7h
airflow-sqlproxy          1/1       Running   0          8h
airflow-worker            50/50     Running   0          7h
composer-fluentd-daemon   1/1       Running   0          7h
composer-fluentd-daemon   1/1       Running   0          7h
</code></pre> <p>I also have a bunch of unique persistent NFS volumes that have data that needs processing. Is there a way to dynamically mount a different NFS volume to each of the respective workers?</p> <p>Alternatively, is it possible for the DockerOperator called within the worker to mount the NFS volume pertaining to its specific workload?</p> <p>In theory the workflow would be: <code>Spin up 1x worker per Dataset</code> > <code>Get Dataset</code> > <code>Run Dataset through Model</code> > <code>Dump results</code></p> <p>One way to accomplish this would be to download the Dataset to the given pod that is processing it; however, these Datasets are several hundred GB each and will need to be processed many times against different models.</p> <p>Eventually we plan on putting all of this data in BigTable, but I need to show a proof of concept using volumes with a few hundred GB of data before we get the green light to spin up a BigTable cluster with multiple TB of data in it.</p> <p>Input appreciated. Telling me I'm doing it wrong with a better solution is also a viable answer.</p>
<p>A Deployment, by definition, uses a set of identical replicas as pods (i.e. a ReplicaSet). Therefore all pods of a Deployment will have the same PodSpec, pointing to the same volume.</p> <p>It sounds like you need to write some custom logic yourself to orchestrate spinning up new workloads (i.e. Jobs) with different volumes.</p> <p>You can do this by simply deploying a bash script that calls kubectl in a loop (by default, kubectl inside a pod can talk to its own cluster directly). Or you can write something that uses the Kubernetes API to discover the new volumes, create workloads to process them (and then maybe clean up the volumes), as sketched below.</p>
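<p>To illustrate the second option, here is a rough client-go sketch (not your exact setup; it assumes in-cluster credentials, one PVC per dataset, and a made-up worker image and mount path) that creates one Job per PersistentVolumeClaim in a namespace:</p> <pre class="lang-golang prettyprint-override"><code>package main

import (
	"context"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// createJobsForVolumes lists the PVCs in a namespace and creates one Job per
// claim, mounting that claim into the worker container at /data.
func createJobsForVolumes(ctx context.Context, namespace, image string) error {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}

	pvcs, err := clientset.CoreV1().PersistentVolumeClaims(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, pvc := range pvcs.Items {
		job := &amp;batchv1.Job{
			ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("process-%s", pvc.Name)},
			Spec: batchv1.JobSpec{
				Template: corev1.PodTemplateSpec{
					Spec: corev1.PodSpec{
						RestartPolicy: corev1.RestartPolicyNever,
						Containers: []corev1.Container{{
							Name:         "worker",
							Image:        image,
							VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
						}},
						Volumes: []corev1.Volume{{
							Name: "data",
							VolumeSource: corev1.VolumeSource{
								PersistentVolumeClaim: &amp;corev1.PersistentVolumeClaimVolumeSource{ClaimName: pvc.Name},
							},
						}},
					},
				},
			},
		}
		if _, err := clientset.BatchV1().Jobs(namespace).Create(ctx, job, metav1.CreateOptions{}); err != nil {
			return err
		}
	}
	return nil
}
</code></pre>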
<p>TL/DR:</p> <ol> <li>I don't know if I'm using the asynchronous programming features of C# and Blazor correctly.</li> <li>Even though things technically work, I'd like some guidance if I'm doing things correctly.</li> <li>Also, I'm having issues trying to get my &quot;loading spinner&quot; to work. What am I doing wrong?</li> </ol> <p>I'd like some guidance of my code is doing things the correct way.</p> <p>I'm currently trying to use KubernetesClient with a Blazor webapp to interact with my kubernetes cluster.</p> <p>As a test, I've tried to list nodes in a cluster, asynchronously. Things appear to work, but I'm unsure if I'm doing this correctly. Please see the code below:</p> <pre><code>@page &quot;/kclient&quot; @using k8s &lt;PageTitle&gt;Kubernetes Client Test&lt;/PageTitle&gt; &lt;h1&gt;Kubernetes Client Test&lt;/h1&gt; &lt;br /&gt; &lt;button class=&quot;btn btn-primary&quot; @onclick=&quot;@GetNodesAsync&quot;&gt;Refresh Node List&lt;/button&gt; &lt;br /&gt;&lt;br /&gt; &lt;p&gt;LOADING = @spin.ToString()&lt;/p&gt; &lt;label&gt;Node list:&lt;/label&gt; @if (spin) { &lt;div class=&quot;spinner&quot;&gt;&lt;/div&gt; }else { &lt;ul&gt; @if (MyNodes == null || MyNodes.Count == 0) { &lt;li&gt;No Nodes. Please try to refresh the node list.&lt;/li&gt; }else { @foreach(string node in MyNodes) { &lt;li&gt;@node&lt;/li&gt; } } &lt;/ul&gt; } @code { public bool spin = false; public IKubernetes client { get; set; } public List&lt;string&gt; MyNodes { get; set; } protected override void OnInitialized() { spin = false; KubernetesClientConfiguration config = KubernetesClientConfiguration.BuildConfigFromConfigFile(&quot;C:\\Users\\DevAdmin\\.kube\\config&quot;); client = new Kubernetes(config); System.Console.WriteLine(&quot;Base URI: &quot; + client.BaseUri); } async Task GetNodesAsync() { spin = true; Task.Delay(1500).Wait(); await InvokeAsync(GetNodes); spin = false; await Task.CompletedTask; } public async void GetNodes() { MyNodes = null; MyNodes = new List&lt;string&gt;(); System.Console.WriteLine(&quot;=== TRYING TO GET ALL NODES! ===&quot;); var nodeList = await client.ListNodeAsync(); if (nodeList.Items.Count == 0) { Console.WriteLine(&quot;Empty! There are no nodes!&quot;); } else { foreach (var item in nodeList.Items) { //Console.WriteLine(item.Metadata.Name); MyNodes.Add(item.Metadata.Name); } } StateHasChanged(); } } </code></pre> <p>The code above produces the page below: <a href="https://i.stack.imgur.com/Tlmpv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tlmpv.png" alt="enter image description here" /></a></p> <p>When you click on the button &quot;Refresh Node List&quot; button, it will use the KubernetesClient library to get a list of Kubernetes nodes. Because I'm still new to .NET, Blazor, and asynchronous programming, I'm unsure if the way I'm retrieving the list of nodes and then showing it in a Blazor (razor?) page is correct.</p> <p>Also I've been trying to show a loading spinner while waiting for the client to retrieve data, but I've been unable to get that working, and I'm sure it's because I'm not using async/tasks correctly.</p> <p>Any guidance on my usage of the asynchronously programming features of C#/.NET or anything else with the code I've provided would be greatly appreciated.</p> <p>Thanks in advance.</p>
<p>I can spot a few issues. Always avoid <code>async void</code>.</p> <pre class="lang-cs prettyprint-override"><code>async Task GetNodesAsync()
{
    spin = true;
    //Task.Delay(1500).Wait();  -- .Wait() blocks the UI
    await Task.Delay(1500);
    //await InvokeAsync(GetNodes);  -- this won't run on another Thread
    await GetNodes();
    spin = false;
    //await Task.CompletedTask;
}

//public async void GetNodes()
public async Task GetNodes()
{
    ... // as before

    //StateHasChanged(); -- not needed
}
</code></pre>
<p>When I make a request to this server once it's started: <a href="https://gist.github.com/Rasarts/1180479de480d7e36d6d7aef08babe59#file-server" rel="nofollow noreferrer">https://gist.github.com/Rasarts/1180479de480d7e36d6d7aef08babe59#file-server</a></p> <p>I get the right response:</p> <pre><code>{
  "args": {},
  "headers": {
    "Accept-Encoding": "gzip",
    "Connection": "close",
    "Host": "httpbin.org",
    "User-Agent": "Go-http-client/1.1"
  },
  "origin": "",
  "url": "https://httpbin.org/get"
}
</code></pre> <p>But when I make the request to that server running on minikube, which was created this way: <a href="https://gist.github.com/Rasarts/1180479de480d7e36d6d7aef08babe59#file-serve-yaml" rel="nofollow noreferrer">https://gist.github.com/Rasarts/1180479de480d7e36d6d7aef08babe59#file-serve-yaml</a></p> <p>I get an error:</p> <pre><code>ERROR: Get https://httpbin.org/get: EOF&lt;nil&gt;
</code></pre> <p>How can I make HTTP requests from a Kubernetes pod?</p>
<p>Knative uses <a href="https://istio.io" rel="nofollow noreferrer">Istio</a> and Istio, by default, doesn't allow outbound traffic to external hosts, such as httpbin.org. That's why your request is failing.</p> <p>Follow <a href="https://github.com/knative/docs/blob/f31d7106a119b453b0dbf208c3b1c0d698af4323/serving/outbound-network-access.md" rel="nofollow noreferrer">this document</a> to learn how to configure Knative (so that it configures Istio correctly) to make outbound connections. Or, you can directly configure the Istio by adding an egress policy: <a href="https://istio.io/docs/tasks/traffic-management/egress/" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/egress/</a></p>
<p>I am trying to follow tutorial of Kubernetes but I am kinda lost on first steps when trying to use Katacoda... When I just try to open minikube dashboard I encounter error:</p> <blockquote> <p>failed to open browser: exec: "xdg-open": executable file not found in $PATH</p> </blockquote> <p>and dashboard itself remains unavailable when I try to open it through host 1.</p> <p>Later steps like running <code>hello-world</code> work fine and I am able to run it locally using my own <code>minikube</code> instance but I am a bit confused with this issue. Can I debug it somehow to access dashboard during course? This is particularly confusing because I am a bit afraid that I might encounter same or similar issue during potential exam that also runs online...</p> <p><a href="https://i.stack.imgur.com/yYf1b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yYf1b.png" alt="Katacoda issue"></a></p>
<p>Founder of Katacoda here. When running locally, xdg-open provides the wrapper for opening processes on your local machine, and installing the package would resolve the issue. As Katacoda runs everything within a sandbox, we cannot launch processes directly on your machine.</p> <p>We have added an override for xdg-open that displays a friendly error message to users. They'll now be prompted to use the Preview Port link provided. The output is now:</p> <pre><code>$ minikube dashboard
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
* Opening %s in your default browser...

Minikube Dashboard is not supported via the interactive terminal experience.

Please click the 'Preview Port 30000' link above to access the dashboard.
This will now exit. Please continue with the rest of the tutorial.

X failed to open browser: exit status 1
</code></pre>
<p>We have several microservices implemented in Java/Kotlin and Spring MVC, running in Tomcat docker images. These services provide public APIs which are authenticated by users' cookies/sessions. These work correctly.</p> <p>Now, we would like to create an internal endpoint, which would either not be accessible from outside of GKE at all, or would require some kind of internal authentication.</p> <p>What would be a good way to go about this, especially for Spring MVC and GKE?</p> <p>EDIT:</p> <p>I would like to authenticate different endpoints of one service differently. For instance:</p> <ul> <li><code>/public/</code> - no auth</li> <li><code>/private/</code> - user must be logged in</li> <li><code>/internal/</code> - only other microservices can access</li> </ul> <p>I would prefer to implement such auth on the application level, but I am not sure what would be the best way. An IP range of internal Google IPs? Some other way of securely identifying the caller?</p> <p>Maybe my idea is bad; if so, I will be happy to change my mind.</p>
<p>Your question isn't GKE specific. It's broadly a Kubernetes question.</p> <p>I encourage you to search Kubernetes service authentication.</p> <p>There are many ways to do this, including rolling your own auth model. One feature that can help here is Kubernetes NetworkPolicy resource (it's like firewalls), you can learn more about it here <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/network-policies/</a> and see here for some examples: <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes" rel="nofollow noreferrer">https://github.com/ahmetb/kubernetes-network-policy-recipes</a> (Keep in mind that this is a firewall, not authentication.)</p> <p>If you want to get this automatically, you can use Istio (<a href="https://istio.io" rel="nofollow noreferrer">https://istio.io</a>) which allows you to automatically set up mutual TLS between all your services without any code changes. Istio also gives a strong identity to each workload. You can use Istio's authentication policies to set up auth between your microservices <em>without changing your application code</em> which is really cool: <a href="https://istio.io/docs/tasks/security/authn-policy/" rel="nofollow noreferrer">https://istio.io/docs/tasks/security/authn-policy/</a></p>
<p>The application inside the container is inaccessible from the outside, i.e. if I exec into the Docker container and do</p> <pre class="lang-sh prettyprint-override"><code>curl localhost:5000
</code></pre> <p>it works correctly, but not in the browser on my computer; there I get the error: This site can't be reached.</p> <p>My Dockerfile:</p> <pre><code># Use an official Python runtime as a parent image
FROM python:3.7-slim

# Set the working directory to /app
WORKDIR /web-engine

# Copy the current directory contents into the container at /app
COPY . /web-engine

# Install Gunicorn3
RUN apt-get update &amp;&amp; apt-get install default-libmysqlclient-dev gcc -y

# Install any needed packages specified in requirements.txt
RUN pip3 install --trusted-host pypi.python.org -r requirements.txt

# Make port 5000 available to the world outside this container
EXPOSE 5000

# Define environment variable
ENV username root

# Run app.py when the container launches
CMD gunicorn --workers 4 --bind 127.0.0.1:5000 application:app --threads 1
</code></pre> <p>Upon executing Docker in this way:</p> <pre class="lang-sh prettyprint-override"><code>sudo docker run -e password=$password -p 5000:5000 $reg/web-engine:ve0.0.2
</code></pre> <p>I get the following output:</p> <pre><code>[2019-09-08 11:53:36 +0000] [6] [INFO] Starting gunicorn 19.9.0
[2019-09-08 11:53:36 +0000] [6] [INFO] Listening at: http://127.0.0.1:5000 (6)
[2019-09-08 11:53:36 +0000] [6] [INFO] Using worker: sync
[2019-09-08 11:53:36 +0000] [9] [INFO] Booting worker with pid: 9
[2019-09-08 11:53:36 +0000] [10] [INFO] Booting worker with pid: 10
[2019-09-08 11:53:36 +0000] [11] [INFO] Booting worker with pid: 11
[2019-09-08 11:53:36 +0000] [12] [INFO] Booting worker with pid: 12
</code></pre> <p>So as you can see, I'm mapping port 5000 of the container to port 5000 of my computer, but localhost:5000 is not working.</p> <p>Therefore I tried everything the same but with the Flask development server, with the following change in my Dockerfile,</p> <p>from</p> <pre class="lang-sh prettyprint-override"><code>CMD gunicorn --workers 4 --bind 127.0.0.1:5000 application:app --threads 1
</code></pre> <p>to</p> <pre class="lang-sh prettyprint-override"><code>CMD python3.7 application.py
</code></pre> <p>and IT WORKED; I go to localhost:5000 and see the application working.</p> <p>There is nothing wrong with the application. I suppose there's an error with the gunicorn server.</p> <p>The requirements.txt file:</p> <pre><code>Flask
Flask-SQLAlchemy
mysqlclient
gunicorn
bs4
html5lib
</code></pre> <p>Please help me out.</p> <p>I also tried different combinations of the gunicorn and docker run commands, like</p> <pre class="lang-sh prettyprint-override"><code>CMD gunicorn -application:app &amp;&amp; sudo docker run -e password=$password -p 5000:8000 $reg/web-engine:ve0.0.2
</code></pre> <p>It didn't work: <a href="https://i.stack.imgur.com/Z6ors.png" rel="noreferrer">terminal image of container working with development server</a></p> <p>I would appreciate a solution involving nothing outside what's mentioned here, like nginx, supervisor etc. Someone please helllppp meeeeee.......😢</p>
<p>By default, 127.0.0.1 means a different thing inside the container than it does outside. Use <code>0.0.0.0:5000</code> instead.</p>
<p>I want to delete namespace and all resources under it.</p> <p>So i ran</p> <pre><code>% kubectl delete namespace observability namespace &quot;observability&quot; deleted </code></pre> <p>but this command was stuck, when I checked the namespace status it was showing <code>Terminating</code></p> <pre><code>$ kubectl get ns NAME STATUS AGE observability Terminating 14h odigos-system Terminating 14h </code></pre> <p>Then I tried to delete it by steps mentioned <a href="https://www.ibm.com/docs/en/cloud-private/3.2.0?topic=console-namespace-is-stuck-in-terminating-state" rel="nofollow noreferrer">here</a></p> <p>This command didn't returned any result <code>kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n observability</code> so kill it.</p> <p>Then I tried <strong>Manually delete a terminating namespace</strong></p> <p>But following command return</p> <pre><code>% curl -k -H &quot;Content-Type: application/json&quot; -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/observability/finalize { &quot;kind&quot;: &quot;Status&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;metadata&quot;: { }, &quot;status&quot;: &quot;Failure&quot;, &quot;message&quot;: &quot;the object provided is unrecognized (must be of type Namespace): couldn't get version/kind; json parse error: invalid character 'a' looking for beginning of value (61706956657273696f6e3a2076310a6b696e643a204e616d657370616365 ...)&quot;, &quot;reason&quot;: &quot;BadRequest&quot;, &quot;code&quot;: 400 }% </code></pre> <p>Here is the namespace definition</p> <pre><code>kubectl get ns observability -o yaml apiVersion: v1 kind: Namespace metadata: creationTimestamp: &quot;2022-09-20T02:25:56Z&quot; deletionTimestamp: &quot;2022-09-20T15:04:26Z&quot; labels: kubernetes.io/metadata.name: observability name: observability name: observability resourceVersion: &quot;360388862&quot; uid: 8cef9b90-af83-4584-b26e-8aa89212b80c spec: finalizers: - kubernetes status: conditions: - lastTransitionTime: &quot;2022-09-20T15:04:31Z&quot; message: 'Discovery failed for some groups, 1 failing: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: an error on the server (&quot;Internal Server Error: \&quot;/apis/metrics.k8s.io/v1beta1?timeout=32s\&quot;: the server could not find the requested resource&quot;) has prevented the request from succeeding' reason: DiscoveryFailed status: &quot;True&quot; type: NamespaceDeletionDiscoveryFailure - lastTransitionTime: &quot;2022-09-20T15:04:32Z&quot; message: All legacy kube types successfully parsed reason: ParsedGroupVersions status: &quot;False&quot; type: NamespaceDeletionGroupVersionParsingFailure - lastTransitionTime: &quot;2022-09-20T15:05:43Z&quot; message: All content successfully deleted, may be waiting on finalization reason: ContentDeleted status: &quot;False&quot; type: NamespaceDeletionContentFailure - lastTransitionTime: &quot;2022-09-20T15:05:43Z&quot; message: All content successfully removed reason: ContentRemoved status: &quot;False&quot; type: NamespaceContentRemaining - lastTransitionTime: &quot;2022-09-20T15:04:32Z&quot; message: All content-preserving finalizers finished reason: ContentHasNoFinalizers status: &quot;False&quot; type: NamespaceFinalizersRemaining phase: Terminating </code></pre> <p>I crated these Namespaces following this <a href="https://medium.com/@edeNFed/how-to-build-an-end-to-end-open-source-observability-solution-on-kubernetes-c8725c016dd5" rel="nofollow 
noreferrer">https://medium.com/@edeNFed/how-to-build-an-end-to-end-open-source-observability-solution-on-kubernetes-c8725c016dd5</a></p> <p>I am on following version</p> <pre><code>% kubectl version --short Client Version: v1.23.6 Server Version: v1.22.12-eks-6d3986b </code></pre> <p>How can i fix this ?</p>
<p>Looks like there is something wrong with your metrics-server. Check to see if it's up with <code>kubectl -n kube-system get pods | grep metrics-server</code> and try to resolve that root cause. See <a href="https://github.com/kubernetes-sigs/metrics-server" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/metrics-server</a> for more info.</p>
<p>We use Google Cloud Run on our K8s cluster on GCP, which is powered by Knative and Anthos. However, it seems the load balancer doesn't amend the x-forwarded-for header (which is not expected anyway, as it is a TCP load balancer), and Istio doesn't do it either.</p> <p>Do you have the same issue, or is it limited to our deployment? I understand Istio supports this as part of their upcoming <a href="https://istio.io/latest/docs/ops/configuration/traffic-management/network-topologies/" rel="nofollow noreferrer">Gateway Network Topology</a> feature, but not in the current GCP version.</p>
<p>I think you are correct in assessing that current Cloud Run for Anthos set up (unintentionally) does not let you see the origin IP address of the user.</p> <p>As you said, the created gateway for Istio/Knative in this case is a Cloud Network Load Balancer (TCP) and this LB doesn’t preserve the client’s IP address on a connection when the traffic is routed to Kubernetes Pods (due to how Kubernetes networking works with iptables etc). That’s why you see an <code>x-forwarded-for</code> header, but it contains internal hops (e.g. 10.x.x.x).</p> <p>I am following up with our team on this. It seems that it was not noticed before.</p>
<p>I'm mounting a local folder into minikube and using that folder inside a pod. The folder contains the code I am developing. It works great but changes I make are not being reflected in the browser. If I exec into the pod I can see my code changes, just not in the browser. </p> <p>If I delete the pod when it is automatically recreated the changes are reflected in the browser. Is this a limitation of the solution? </p> <p>Can anybody please advise a novice?</p>
<p>Have a look at <a href="https://github.com/GoogleContainerTools/skaffold/" rel="nofollow noreferrer">Skaffold</a> and its <a href="https://github.com/GoogleContainerTools/skaffold/blob/master/examples/annotated-skaffold.yaml" rel="nofollow noreferrer"><code>sync</code></a> feature; it keeps what your YAML files describe running inside Minikube and ships files of your selection back and forth into the running containers.</p>
<p>I have just been certified CKAD (Kubernetes Application Developer) by The Linux Foundation.</p> <p>And now I am wondering: is a RabbitMQ queueing system unnecessary in a Kubernetes cluster?</p> <p>We use workers with a queueing system in order to avoid 30-second HTTP timeouts. Let's say, for example, we have a microservice which generates big PDF documents in an average of 50 seconds each and you have 20 documents to generate right now; the classical schema would be to make a worker which will queue each document one by one (this is the case for the company I have been working for lately).</p> <p>But in a Kubernetes cluster, by default there is no timeout for HTTP requests going inside the cluster. You can wait 1000 seconds without any issue (20 documents * 50 seconds = 1000 seconds).</p> <p>With this last point, is it enough to say that a RabbitMQ queueing system (via the amqplib module) is useless in a Kubernetes cluster? Moreover, Kubernetes handles load balancing across each of your microservice's replicas so well...</p>
<blockquote> <p>But in a Kubernetes cluster, by default there is no timeout for HTTP requests going inside the cluster.</p> </blockquote> <p>Not sure where you got that idea. Depending on your config there might be no timeouts at the proxy level, but there are still client and server timeouts to consider. Kubernetes doesn't change what you deploy, just how you deploy it. There are certainly other options than RabbitMQ specifically, and other system architectures you could consider, but &quot;queue workers&quot; is still a very common pattern and likely will be forever, even as the tech around it changes.</p>
<p>I have got a Google Cloud IAM service account key file (in JSON format) that contains the data below.</p> <pre><code>{
  "type": "service_account",
  "project_id": "****",
  "private_key_id": "****",
  "private_key": "-----BEGIN PRIVATE KEY----blah blah -----END PRIVATE KEY-----\n",
  "client_email": "*****",
  "client_id": "****",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth/v1/certs",
  "client_x509_cert_url": "****"
}
</code></pre> <p>I can use this service account to access the Kubernetes API server by passing this key file to the Kubernetes API client libraries.</p> <p>But I'm not finding any way to pass this service account to the kubectl binary to have kubectl authenticate to the project this service account was created for.</p> <p>Is there any way to make kubectl use this service account file for authentication?</p>
<p>This answer provides some guidance: <a href="https://stackoverflow.com/questions/48400966/access-kubernetes-gke-cluster-outside-of-gke-cluster-with-client-go/48412272#48412272">Access Kubernetes GKE cluster outside of GKE cluster with client-go?</a> but it's not complete.</p> <p>You need to do two things:</p> <ol> <li><p>Set the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable to the path of your JSON key file for the IAM service account, and use <code>kubectl</code> while this variable is set; you should be authenticated with the token.</p></li> <li><p>(this may be optional, not sure) Create a custom <code>KUBECONFIG</code> that only contains your cluster IP and CA certificate, save this file, and use it to connect to the cluster.</p></li> </ol> <p>Step 2 looks like this:</p> <pre><code>cat &gt; kubeconfig.yaml &lt;&lt;EOF
apiVersion: v1
kind: Config
current-context: cluster-1
contexts: [{name: cluster-1, context: {cluster: cluster-1, user: user-1}}]
users: [{name: user-1, user: {auth-provider: {name: gcp}}}]
clusters:
- name: cluster-1
  cluster:
    server: "https://$(eval "$GET_CMD --format='value(endpoint)'")"
    certificate-authority-data: "$(eval "$GET_CMD --format='value(masterAuth.clusterCaCertificate)'")"
EOF
</code></pre> <p>So with this, you should do</p> <pre><code>export GOOGLE_APPLICATION_CREDENTIALS=&lt;path-to-key.json&gt;
export KUBECONFIG=kubeconfig.yaml
kubectl get nodes
</code></pre>
<p>My Kubernetes cluster on AKS is using one resource group to report its costs.</p> <p>At this moment we have many projects in the company; it would be great if each pod reported its costs to a different resource group named after the project.</p> <p>How can I do this?</p>
<p>You'll need to implement an intra-cluster cost tool.</p> <p>The most popular in the Kubernetes ecosystem is Kubecost, and they have recently released an OSS version, OpenCost: <a href="https://www.opencost.io/" rel="nofollow noreferrer">https://www.opencost.io/</a></p> <p>Typically you'd create a namespace for each app/cost centre. The default dashboards show a breakdown of cost by namespace over a time period.</p>
<p>I have a very basic question about Argo. Apologies if this is triggering.</p> <p>From my understanding, Argo is an extension to the Kubernetes API via being a &quot;Resource&quot;, i.e. it is invoked by &quot;kubectl argo xyz&quot;, i.e. the endpoint is argo instead of the endpoint being a pod etc. Each resource has objects. In the case of pods it is containers. In the case of the argo resource, it is YAML files which have docker containers/script/dag/task etc.</p> <p>I initiate</p> <p><code>kubectl create ns argo</code></p> <p><code>kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/quick-start-postgres.yaml</code></p> <p>and when I try</p> <p><code>kubectl create ns argo</code></p> <p>I see 4-5 pods already running.</p> <p>Is the same word argo being used for a new namespace (group of resources) AND the argo extension for a reason? Moreover, when I used <code>kubectl apply -n argo -f</code>, was it just creating a new resource argo with that yaml file?</p> <p>So, three entities exist here - argo ns, argo resource, argo api (not used till now, no argo yaml used either with workflow/dag/task defined..)?</p> <p>What exactly does <a href="https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/quick-start-postgres.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/quick-start-postgres.yaml</a> contain?</p> <p><a href="https://i.stack.imgur.com/rwekS.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>Argo Workflows is an operator: it's a daemon you deploy into your cluster, so it has to run actual code. And the only way you do that is with a pod.</p>
<p>I am trying to create a housekeeping job where I erase namespaces that have not been used for seven days. I have two options:</p> <ol> <li>I can use a Job, but I don't know how to make sure the Jobs are running on the date that I want.</li> <li>I read about <a href="https://stackoverflow.com/questions/5473780/how-to-setup-cron-to-run-a-file-just-once-at-a-specific-time-in-future">CronJob</a>. Unfortunately, CronJob in Kubernetes can only support 5 fields (default format). This means we can only define dates and months, but not years. </li> </ol> <p>Which one is better to use?</p>
<p>The Kubernetes CronJob API is very similar to cron, as you said, and doesn't have a year field.</p> <p>If you need something that gets scheduled at a specific time, you should write a Kubernetes controller that waits until the date you want, and then calls into the Kubernetes API to create a <strong>Job</strong> object. This shouldn't be very complicated if you can program Go, with the examples here: <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">https://github.com/kubernetes/client-go</a></p>
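<p>As a rough illustration of that idea, here is a minimal client-go sketch (assuming in-cluster credentials; the cleanup image name is made up, and a real controller would also need retries and persistence of the target date):</p> <pre class="lang-golang prettyprint-override"><code>package main

import (
	"context"
	"time"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// runJobAt blocks until the given time, then creates a one-off Job that runs
// the housekeeping image. Run this program itself as a Deployment in the cluster.
func runJobAt(ctx context.Context, when time.Time, namespace string) error {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}

	// Wait for the target date (a year field is no problem here).
	time.Sleep(time.Until(when))

	job := &amp;batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "namespace-cleanup-"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:  "cleanup",
						Image: "example.com/namespace-cleanup:latest", // hypothetical image
					}},
				},
			},
		},
	}
	_, err = clientset.BatchV1().Jobs(namespace).Create(ctx, job, metav1.CreateOptions{})
	return err
}
</code></pre>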
<p>I'm using the handy <code>kubectl logs -l label=value</code> command to get log from all my pods matching a label. I want to see which pod outputted what log, but only the log text is displayed. Is there a way to control the log format, or a command argument which will let me do this?</p>
<p><code>kubectl</code> now has a <code>--prefix</code> option that allows you to prefix the pod name before the log message.</p>
<p>I've got several gRPC microservices deployed via Istio in my k8s pod behind a gateway that handles the routing for web clients. Things work great when I need to send an RPC from client (browser) to any of these services.</p> <p>I'm now at the point where I want to call service A from service B directly. How do I go about doing that?</p> <p>Code for how both the servers are instantiated:</p> <pre><code> const server = new grpc.Server(); server.addService(MyService, new MyServiceImpl()); server.bindAsync(`0.0.0.0:${PORT_A}`, grpc.ServerCredentials.createInsecure(), () =&gt; { server.start(); }); </code></pre> <p>A Service Account is being used with GOOGLE_APPLICATION_CREDENTIALS and a secret in my deployment YAML.</p> <p>To call service A from service B, I was thinking the code in service B would look something like:</p> <pre><code> const serviceAClient: MyServiceClient = new MyServiceClient(`0.0.0.0:${PORT_A}`, creds); const req = new SomeRpcRequest()...; serviceAClient.someRpc(req, (err: grpc.ServiceError, response: SomeRpcResponse) =&gt; { // yay! }); </code></pre> <p>Is that naive? One thing I'm not sure about is the creds that I need to pass when instantiating the client. I get complaints that I need to pass ChannelCredentials, but every mechanism I've tried to create those creds has not worked.</p> <p>Another thing I'm realizing is that 0.0.0.0 can't be correct because each service is in its own container paired with a sidecar proxy... so how do I route RPCs properly and attach proper creds?</p> <p>I'm trying to construct the creds this way:</p> <pre><code>let callCreds = grpc.CallCredentials.createFromGoogleCredential(myOauthClient); let channelCreds = grpc.ChannelCredentials.createSsl().compose(callCreds); const serviceAClient = new MyServiceClient(`0.0.0.0:${PORT_A}`, channcelCreds); </code></pre> <p>and I'm mysteriously getting the following error stack:</p> <pre><code>UnhandledPromiseRejectionWarning: TypeError: Channel credentials must be a ChannelCredentials object at new ChannelImplementation (/bish/proto/activities/node_modules/@grpc/grpc-js/build/src/channel.js:69:19) at new Client (/bish/proto/activities/node_modules/@grpc/grpc-js/build/src/client.js:58:36) at new ServiceClientImpl (/bish/proto/activities/node_modules/@grpc/grpc-js/build/src/make-client.js:58:5) at PresenceService.&lt;anonymous&gt; (/bish/src/servers/presence/dist/presence.js:348:44) at step (/bish/src/servers/presence/dist/presence.js:33:23) at Object.next (/bish/src/servers/presence/dist/presence.js:14:53) at fulfilled (/bish/src/servers/presence/dist/presence.js:5:58) at processTicksAndRejections (internal/process/task_queues.js:97:5) </code></pre> <p>Which is odd because channelCreds is a ComposedChannelCredentialsImpl which, in fact, extends ChannelCredentials</p>
<p>Ok, at least the root cause of the &quot;Channel credentials must be a ChannelCredentials object&quot; error is now known. I'm developing node packages side by side as symlinks, and each of the dependencies has its own copy of grpc-js in it.</p> <p><a href="https://github.com/npm/npm/issues/7742#issuecomment-257186653" rel="nofollow noreferrer">https://github.com/npm/npm/issues/7742#issuecomment-257186653</a></p>
<p>I'm trying to give a group of users permission to scale a specific set of deployments in kubernetes 1.20</p> <p>I've tried using the API reference doc here: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#patch-scale-deployment-v1-apps" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#patch-scale-deployment-v1-apps</a> to set resource names like so:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: kubeoperator-cr rules: ... #irrelevant rules omitted - apiGroups: [&quot;apps&quot;] resources: - /namespaces/my-namespace-name/deployments/my-deployment-name/scale - deployments/my-deployment-name/scale verbs: - update - patch </code></pre> <p>This doesn't work:</p> <pre><code>$ kubectl scale deployments -n my-namespace-name my-deployment-name --replicas 3 Error from server (Forbidden): deployments.apps &quot;my-deployment-name&quot; is forbidden: User &quot;kubeoperatorrole&quot; cannot patch resource &quot;deployments/scale&quot; in API group &quot;apps&quot; in the namespace &quot;my-namespace-name&quot; </code></pre> <p>The only way I can get the scale command to work is to grant the permission for all deployments (which is <strong>not</strong> what I want) like this:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: kubeoperator-cr rules: ... #irrelevant rules omitted - apiGroups: [&quot;apps&quot;] resources: - deployments/scale verbs: - update - patch </code></pre> <pre><code>$ kubectl scale deployments -n my-namespace-name my-deployment-name --replicas 3 deployment.apps/my-deployment-name scaled </code></pre> <p>What is the correct syntax for specifying a specific deployment resource by name, or is this not possible? The deployments I'm targeting cannot be moved to an isolated namespace.</p>
<p><code>resources</code> isn't what you're looking for; it's <code>resourceNames</code>, which has to be a specific object name like <code>resourceNames: [my-deployment-name]</code>. In general this isn't a very good approach; the expectation is that you will segment things by namespace and give them permissions in just one namespace (or two or three, or whatever it is).</p>
<p>I've added a second TCP port to my Kubernetes service, and have noticed that which port kubelet assigns to the <code>{SVCNAME}_SERVICE_PORT</code> <a href="https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables" rel="nofollow noreferrer">environment variable</a> appears to be order dependent.</p> <p>For example, if I declare my ports like this in my service:</p> <pre class="lang-yaml prettyprint-override"><code>ports:
  - name: example
    port: 9000
    protocol: TCP
  - name: http
    port: 8080
    protocol: TCP
</code></pre> <p>Then <code>FOO_SERVICE_PORT</code> will be assigned the value <code>9000</code>. But if I flip the order ...</p> <pre class="lang-yaml prettyprint-override"><code>ports:
  - name: http
    port: 8080
    protocol: TCP
  - name: example
    port: 9000
    protocol: TCP
</code></pre> <p>... then <code>FOO_SERVICE_PORT</code> is now <code>8080</code>.</p> <p>Is there a way to force kubelet to pick a specific port to set into this variable so that it's not dependent on the order I've defined my ports in? That is, is there a configuration I can set so it always uses the &quot;http&quot; port (8080) as the value it assigns to this variable, regardless of where in the list this particular port is declared?</p>
<p>In older Kubernetes versions, a Service could define only a single port. When they added support for multiple ports, they chose by design to put the <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/envvars/envvars.go#L47" rel="nofollow noreferrer">first port in the backwards compatible environment variable</a>. There is no configuration that changes this behavior.</p> <p>However, Kubernetes also sets <code>{serviceName}_SERVICE_PORT_{portName}</code> environment variables for named ports, so you can get the port number by port name. For example:</p> <pre class="lang-sh prettyprint-override"><code>FOO_SERVICE_PORT_EXAMPLE=9000
FOO_SERVICE_PORT_HTTP=8080
</code></pre>
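<p>In other words, read the named variant instead of the positional one; for example, a small Go sketch (the service and port names are the ones from the question):</p> <pre class="lang-golang prettyprint-override"><code>package main

import (
	"fmt"
	"os"
)

func main() {
	// FOO_SERVICE_PORT depends on the order of the ports in the Service,
	// but the named variant below does not.
	httpPort := os.Getenv("FOO_SERVICE_PORT_HTTP")
	if httpPort == "" {
		panic("FOO_SERVICE_PORT_HTTP is not set; is the pod running in the cluster?")
	}
	fmt.Println("http port:", httpPort)
}
</code></pre>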
<h1>Scenario</h1> <p>I'm using the <a href="https://martinfowler.com/bliki/CQRS.html" rel="nofollow noreferrer">CQRS pattern</a> for REST services that deal with Sales as in the diagram below. (The question focuses on the REST services.)</p> <p><a href="https://i.stack.imgur.com/be2mZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/be2mZ.jpg" alt="CQRS example" /></a></p> <p>On K8S I have:</p> <ul> <li>A microservice for GET endpoints (queries) that runs on a given pod.</li> <li>Another microservice for POST and PUT endpoints (commands) tat runs on a different pod.</li> </ul> <p>Both operate on the same <em>resource</em>: Sales. The natural URL mapping would be:</p> <ul> <li>GET endpoints on <code>mydomain.com/api/sales/*</code> should go to the first pod.</li> <li>POST and PUT endpoints on <code>mydomain.com/api/sales/*</code> should go to the second pod.</li> </ul> <h1>Ingress</h1> <p>To deploy these two microservices to a K8S cluster, I define a separate <code>Service</code> and <code>Deployment</code> for each. Then I create an <code>Ingress</code> definition to route requests to the Services:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-ingress spec: rules: - host: mydomain.com http: paths: - path: /api/sales pathType: Prefix backend: service: name: sales-queries-service port: name: 80 - path: /api/sales pathType: Prefix backend: service: name: sales-commands-service port: name: 80 ingressClassName: ingress-nginx </code></pre> <h3>Problem</h3> <p>The ingress definition above does not work because it has the mapping of the same path <code>/api/sales</code> to two different services. As such, it's non deterministic.</p> <p>It's a <a href="https://github.com/kubernetes/ingress-nginx/issues/187#issuecomment-499626492" rel="nofollow noreferrer">feature desired by many</a> that the ingress spec can allow the specification of the HTTP methods. In fact, when I asked ChatGPT it suggested adding this snippet to my ingress yaml, but unfortunately it's a parameter that doesn't exist... yet.</p> <pre><code> httpMethods: - GET </code></pre> <p>Is there any other option to specify the routing in a K8S cluster (without installing an API gateway product)?</p>
<p>I think it is better to route based on the URL (like <strong>/queries/xxxx</strong> and <strong>/commands/xxxx</strong>) and not on the HTTP verb/method because it might be useful to be able to do GET requests to the command service.</p> <p>For example, you might want to do GET requests to the /health endpoint for health checks.</p> <p>Or if you do want to do some specific READ queries against the write sides, yes, that is actually OK to do for some specific scenarios when you need an immediately consistent answer.</p> <p>Just as Greg says:</p> <p><em>&quot;Your aggregate state <em>IS</em> a read model just one that often has a very short life time and is used to process transactions&quot;</em> <a href="https://twitter.com/gregyoung/status/1275467796177137670" rel="nofollow noreferrer">https://twitter.com/gregyoung/status/1275467796177137670</a></p> <p>The key here is to be agile, flexible and don't be too strict.</p>
<p>Let's say that I have a running pod named <code>my-pod</code>.</p> <p><code>my-pod</code> reads the secrets from <code>foobar-secrets</code>.</p> <p>Now let's say that I update some value in <code>foobar-secrets</code>:</p> <pre><code>kubectl patch secret foobar-secrets --namespace kube-system --context=cluster-1 --patch &quot;{\&quot;data\&quot;: {\&quot;FOOBAR\&quot;: \&quot;$FOOBAR_BASE64\&quot;}}&quot;
</code></pre> <p>What should I do to restart/reload the pod in order to get the new value?</p>
<p><a href="https://github.com/stakater/Reloader" rel="nofollow noreferrer">https://github.com/stakater/Reloader</a> is the usual solution for a fully standalone setup. Another option is <a href="https://github.com/jimmidyson/configmap-reload" rel="nofollow noreferrer">https://github.com/jimmidyson/configmap-reload</a> or similar but that requires coordination with the daemon process to have an API of some kind for reloading.</p>
<p>Is there any way to run <code>kubectl proxy</code>, giving it a command as input, and shutting it down when the response is received?</p> <p>I'm imagining something with the <code>-u</code> (unix socket) flag, like this:</p> <pre><code>kubectl proxy -u - &lt; $(echo "GET /api/v1/namespaces/default") </code></pre> <p>I don't think it's possible, but maybe my socket fu just isn't strong enough.</p>
<p>You don't need a long-running <code>kubectl proxy</code> for this.</p> <p>Try this:</p> <pre><code>kubectl get --raw=/api/v1/namespaces/default </code></pre>
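<p>If you are doing this from a program rather than a shell, a client-go sketch of the same raw request (assuming in-cluster configuration) would look something like this:</p> <pre class="lang-golang prettyprint-override"><code>package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same request as `kubectl get --raw=/api/v1/namespaces/default`.
	raw, err := clientset.CoreV1().RESTClient().
		Get().
		AbsPath("/api/v1/namespaces/default").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw))
}
</code></pre>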
<p>We have a bunch of pods that use RabbitMQ. If the pods are shut down by K8S with SIGTERM, we have found that our RMQ client (Python Pika) has no time to close the connection to RMQ Server causing it to think those clients are still alive until 2 heartbeats are missed.</p> <p>Our investigation has turned up that on SIGTERM, K8S kills all in- and most importantly OUTbound TCP connections, among other things (removing endpoints, etc.) Tried to see if any connections were still possible during preStop hooks, but preStop seems very internally focused and no traffic got out.</p> <p>Has anybody else experienced this issue and solved it? All we need to do is be able to get a message out the door before kubelet slams the door. Our pods are not K8S &quot;Services&quot; so some <a href="https://stackoverflow.com/questions/62567844/kubernetes-graceful-shutdown-continue-to-serve-traffic-during-termination">suggestions</a> didn't help.</p> <p>Steps to reproduce:</p> <ol> <li>add preStop hook sleep 30s to Sender pod</li> <li>tail logs of Receiver pod to see inbound requests</li> <li>enter Sender container's shell &amp; loop curl Receiver - requests appear in the logs</li> <li><code>k delete pod</code> to start termination of Sender pod</li> <li>curl requests immediately begin to hang in Sender, nothing in the Receiver logs</li> </ol>
<p>SIGTERM kills nothing; it's up to your application to decide how to handle it. SIGKILL is sent some time later, which does forcibly nuke the process, but 1) it also closes all sockets, which RMQ can detect, and 2) you control how long the container has to close cleanly via <code>terminationGracePeriodSeconds</code>.</p>
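<p>The question's client is Python Pika, but the shape of the fix is the same in any language: trap SIGTERM yourself and close the broker connection before the grace period runs out. Purely as an illustration, a Go sketch (the cleanup function is a stand-in for whatever &quot;close the AMQP connection&quot; means in your client):</p> <pre class="lang-golang prettyprint-override"><code>package main

import (
	"context"
	"log"
	"os/signal"
	"syscall"
	"time"
)

// closeBrokerConnection stands in for a clean AMQP shutdown
// (e.g. connection.close() in Pika).
func closeBrokerConnection(ctx context.Context) {
	log.Println("closing broker connection...")
	time.Sleep(1 * time.Second) // pretend this takes a moment
	log.Println("closed")
}

func main() {
	// ctx is cancelled when Kubernetes sends SIGTERM to the container's main process.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
	defer stop()

	&lt;-ctx.Done() // ...serve traffic until SIGTERM arrives...
	log.Println("SIGTERM received")

	// There are roughly terminationGracePeriodSeconds (30s by default) before
	// SIGKILL; finish cleanup well within that budget.
	shutdownCtx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
	defer cancel()
	closeBrokerConnection(shutdownCtx)
}
</code></pre>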
<p>Is it possible to create a Cloud Run on GKE (Anthos) Kubernetes cluster with preemptible nodes? And if so, can you also enable plugins such as gke-node-pool-shifter and gke-pvm-killer, or will they interfere with Cloud Run actions such as autoscaling pods?</p> <p><a href="https://hub.helm.sh/charts/rimusz/gke-node-pool-shifter" rel="nofollow noreferrer">https://hub.helm.sh/charts/rimusz/gke-node-pool-shifter</a></p> <p><a href="https://hub.helm.sh/charts/rimusz/gke-pvm-killer" rel="nofollow noreferrer">https://hub.helm.sh/charts/rimusz/gke-pvm-killer</a></p>
<p>Technically a Cloud Run on GKE cluster is still a GKE cluster at the end of the day, so it can have preemptible node pools.</p> <p>However, some Knative Serving components, such as the <code>activator</code> and <code>autoscaler</code>, are in the hot path of serving requests. You need to make sure they don't end up in a preemptible pool. Similarly, the <code>controller</code> and <code>webhook</code> are somewhat central to the control plane lifecycle of Knative API objects, so you also need to make sure these pods end up in a non-preemptible node pool.</p> <p>Secondly, Knative (for now) does not support node selectors or taints/tolerations: <a href="https://knative.tips/pod-config/node-affinity/" rel="nofollow noreferrer">https://knative.tips/pod-config/node-affinity/</a> It simply doesn't give you a way to specify <code>nodeSelector</code> or other affinity fields in the Pod template of the Knative Service object.</p> <p>Therefore, you have to find a way (like implementing your own mutating admission webhook for Knative-created pods) to add such node selectors to the Pods, which is quite tedious.</p> <p>However, by combining node taints and pod tolerations, I think you can have the Knative system components end up in a non-preemptible pool, and everything else (i.e. Knative-created pods) in other nodes (i.e. preemptible nodes).</p>
<p>SSH to GKE node private IP from the jump server (Bastion host) is not working.</p> <p>I even tried the following as suggested by one of my friends, but it did not help.</p> <pre><code>gcloud compute instances add-metadata $INSTANCE_NAME_OR_INSTANCE_ID --metadata block-project-ssh-keys=false --zone $YOUR_VM_ZONE --project $YOUR_PROJECT </code></pre> <p>Also please confirm if the solution works for Private GKE too.</p>
<p>A GKE node is just a GCE VM. You can access it as a normal GCE instance if you have the proper privileges and an SSH key configured.</p> <p>One thing worth mentioning is that <a href="https://cloud.google.com/iap/docs/using-tcp-forwarding#tunneling_with_ssh" rel="nofollow noreferrer">GCP supports IAP-based SSH port forwarding</a>.</p>
<p>I'm running the leader-elector (v0.5) as a sidecar on three pods, on three different nodes.</p> <p>Arguments: --election=XXX --http=0.0.0.0:4040</p> <p>All works well until I kill the leader pod.</p> <p>Now I get one of the pods into a state where the logs say it switched to the new leader:</p> <pre><code>kubectl logs -c elector upper-0
I0803 21:08:38.645849       7 leaderelection.go:296] lock is held by upper-1 and has not yet expired
</code></pre> <p>So that indicates that <strong>upper-1</strong> is now the leader.</p> <p>But if I query the HTTP server of upper-0, it returns the old leader:</p> <pre><code>kubectl exec -it upper-0 bash
root@nso-lsa-upper-0:/# curl http://localhost:4040
{"name":"upper-2"}
</code></pre> <p>Do I need to do something for the leader-elector's HTTP service to update?</p>
<p>Yes, bug. I've uploaded a fixed container here: <a href="https://hub.docker.com/r/fredrikjanssonse/leader-elector/tags/" rel="nofollow noreferrer">https://hub.docker.com/r/fredrikjanssonse/leader-elector/tags/</a></p>
<p>I am currently running a deployment that is pulling from a database and is set as read only. Unfortunately the deployment freezes on coaction without notice so I came up with the idea of having the code write to a log file and have the liveness probe check to see if the file is up to date.</p> <p>This has been tested and it works great, however, the issue is getting past the read only part. As you well know, I can not create or write to a file in read only mode so when my code attempts this, I get a permission denied error and the deployment ends. I thought I could fix this by including two file paths in the container and using:</p> <pre><code>allowedHostPaths: - pathPrefix: &quot;/code&quot; readonly: true - pathPrefix: &quot;/code/logs&quot; readonly: false </code></pre> <p>and tell the code to wright in the &quot;logs&quot; file, but this did not work. Is there a why to have all my code in a read only file but also have a log file that can be read and written to?</p> <p>Here is my Dockerfile:</p> <pre><code>FROM python:3.9.7-alpine3.14 RUN mkdir -p /code/logs WORKDIR /code COPY requirements.txt . RUN pip install --upgrade pip RUN pip install -r requirements.txt COPY src/ . CMD [ &quot;python&quot;, &quot;code.py&quot;] </code></pre> <p>and here is my yaml file:</p> <pre><code>apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: restricted annotations: seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default' apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default' seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default' apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default' spec: privileged: false # Required to prevent escalations to root. allowPrivilegeEscalation: false requiredDropCapabilities: - ALL volumes: # Allow core volume types. - 'configMap' - 'emptyDir' - 'projected' - 'secret' - 'downwardAPI' - 'persistentVolumeClaim' hostNetwork: false hostIPC: false hostPID: false runAsUser: rule: 'MustRunAsNonRoot' seLinux: rule: 'RunAsAny' supplementalGroups: rule: 'MustRunAs' ranges: - min: 1 max: 65535 runAsGroup: rule: 'MustRunAs' ranges: - min: 1 max: 65535 fsGroup: rule: 'MustRunAs' ranges: - min: 1 max: 65535 readOnlyRootFilesystem: true allowedHostPaths: - pathPrefix: &quot;/code&quot; readOnly: true - pathPrefix: &quot;/code/logs&quot; readOnly: false --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: psp:restricted rules: - apiGroups: - policy resources: - podsecuritypolicies verbs: - use resourceNames: - restricted --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: manage-app namespace: default subjects: - kind: User name: my-app-sa apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: psp:restricted apiGroup: rbac.authorization.k8s.io --- apiVersion: apps/v1 kind: Deployment metadata: name: my-app spec: selector: matchLabels: app: orbservice template: metadata: labels: app: orbservice spec: serviceAccountName: my-app-sa securityContext: runAsUser: 1000 runAsGroup: 3000 fsGroup: 2000 containers: - name: my-app image: app:v0.0.1 resources: limits: memory: &quot;128Mi&quot; cpu: &quot;500m&quot; livenessProbe: exec: command: - python - check_logs.py initialDelaySeconds: 60 periodSeconds: 30 failureThreshold: 1 </code></pre> <p>An explanation of the solution or errors you see would be most appreciated. Thanks!</p>
<p><code>allowedHostPaths</code> is related to hostPath volume mounts (aka bind-mounting in folder from the host system). Nothing to do with stuff inside the container image. What you probably want is <code>readOnlyRootFilesystem: true</code> like you have now plus an emptyDir volume mounted at <code>/code/logs</code> set up to be writable by the container's user.</p>
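<p>For example, a minimal sketch of the pod spec part of the Deployment, keeping the root filesystem read-only while making only <code>/code/logs</code> writable (the container and volume names here are just placeholders):</p> <pre><code>    spec:
      containers:
      - name: my-app
        image: app:v0.0.1
        securityContext:
          readOnlyRootFilesystem: true
        volumeMounts:
        - name: logs               # writable scratch space for the log file
          mountPath: /code/logs
      volumes:
      - name: logs
        emptyDir: {}               # pod-scoped scratch volume, not a hostPath
</code></pre> <p>Your liveness probe script can then check the file under <code>/code/logs</code> while everything else baked into the image stays read-only.</p>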
<p>I have the following CSR object in Kubernetes:</p> <pre><code>$ kubectl get csr NAME AGE REQUESTOR CONDITION test-certificate-0.my-namespace 53m system:serviceaccount:my-namespace:some-user Pending </code></pre> <p>And I would like to approve it using the Python API client:</p> <pre class="lang-py prettyprint-override"><code>from kuberentes import config, client # configure session config.load_kube_config() # get a hold of the certs API certs_api = client.CertificatesV1beta1Api() # read my CSR csr = certs_api.read_certificate_signing_request("test-certificate-0.my-namespace") </code></pre> <p>Now, the contents of the <code>csr</code> object are:</p> <pre><code>{'api_version': 'certificates.k8s.io/v1beta1', 'kind': 'CertificateSigningRequest', 'metadata': {'annotations': None, 'cluster_name': None, 'creation_timestamp': datetime.datetime(2019, 3, 15, 14, 36, 28, tzinfo=tzutc()), 'deletion_grace_period_seconds': None, 'name': 'test-certificate-0.my-namespace', 'namespace': None, 'owner_references': None, 'resource_version': '4269575', 'self_link': '/apis/certificates.k8s.io/v1beta1/certificatesigningrequests/test-certificate-0.my-namespace', 'uid': 'b818fa4e-472f-11e9-a394-124b379b4e12'}, 'spec': {'extra': None, 'groups': ['system:serviceaccounts', 'system:serviceaccounts:cloudp-38483-test01', 'system:authenticated'], 'request': 'redacted', 'uid': 'd5bfde1b-4036-11e9-a394-124b379b4e12', 'usages': ['digital signature', 'key encipherment', 'server auth'], 'username': 'system:serviceaccount:test-certificate-0.my-namespace'}, 'status': {'certificate': 'redacted', 'conditions': [{'last_update_time': datetime.datetime(2019, 3, 15, 15, 13, 32, tzinfo=tzutc()), 'message': 'This CSR was approved by kubectl certificate approve.', 'reason': 'KubectlApprove', 'type': 'Approved'}]}} </code></pre> <p>I would like to <strong>approve</strong> this cert programmatically, if I use kubectl to do it with (<code>-v=10</code> will make <code>kubectl</code> output the http trafffic):</p> <pre><code>kubectl certificate approve test-certificate-0.my-namespace -v=10 </code></pre> <p>I get to see the <code>PUT</code> operation used to <strong>Approve</strong> my certificate:</p> <pre><code>PUT https://my-kubernetes-cluster.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests/test-certificate-0.my-namespace/approval </code></pre> <p>So I need to <code>PUT</code> to the <code>/approval</code> resource of the certificate object. Now, how do I do it with the Python Kubernetes client?</p>
<p>Here's to answer my question based on @jaxxstorm answer and my own investigation:</p> <pre><code># Import required libs and configure your client from datetime import datetime, timezone from kubernetes import config, client config.load_kube_config() # this is the name of the CSR we want to Approve name = 'my-csr' # a reference to the API we'll use certs_api = client.CertificatesV1beta1Api() # obtain the body of the CSR we want to sign body = certs_api.read_certificate_signing_request_status(name) # create an approval condition approval_condition = client.V1beta1CertificateSigningRequestCondition( last_update_time=datetime.now(timezone.utc).astimezone(), message='This certificate was approved by Python Client API', reason='MyOwnReason', type='Approved') # patch the existing `body` with the new conditions # you might want to append the new conditions to the existing ones body.status.conditions = [approval_condition] # patch the Kubernetes object response = certs_api.replace_certificate_signing_request_approval(name, body) </code></pre> <p>After this, the KubeCA will approve and issue the new certificate. The issued certificate file can be obtained from the <code>response</code> object we just got:</p> <pre><code>import base64 base64.b64decode(response.status.certificate) # this will return the decoded cert </code></pre>
<p>I recently installed <strong>FluxCD 1.19.0</strong> on an <strong>Azure AKS</strong> k8s cluster using <strong>fluxctl install</strong>. We use a private git (<strong>self hosted bitbucket</strong>) which Flux is able to reach and check out. </p> <p>Now Flux is not applying anything with the error message:</p> <pre><code>ts=2020-06-10T09:07:42.7589883Z caller=loop.go:133 component=sync-loop event=refreshed url=ssh://[email protected]:7999/infra/k8s-gitops.git branch=master HEAD=7bb83d1753a814c510b1583da6867408a5f7e21b ts=2020-06-10T09:09:00.631764Z caller=sync.go:73 component=daemon info="trying to sync git changes to the cluster" old=7bb83d1753a814c510b1583da6867408a5f7e21b new=7bb83d1753a814c510b1583da6867408a5f7e21b ts=2020-06-10T09:09:01.6130559Z caller=sync.go:539 method=Sync cmd=apply args= count=3 ts=2020-06-10T09:09:20.2097034Z caller=sync.go:605 method=Sync cmd="kubectl apply -f -" took=18.5965923s err="running kubectl: error: unable to recognize \"STDIN\": an error on the server (\"\") has prevented the request from succeeding" output= ts=2020-06-10T09:09:38.7432182Z caller=sync.go:605 method=Sync cmd="kubectl apply -f -" took=18.5334244s err="running kubectl: error: unable to recognize \"STDIN\": an error on the server (\"\") has prevented the request from succeeding" output= ts=2020-06-10T09:09:57.277918Z caller=sync.go:605 method=Sync cmd="kubectl apply -f -" took=18.5346491s err="running kubectl: error: unable to recognize \"STDIN\": an error on the server (\"\") has prevented the request from succeeding" output= ts=2020-06-10T09:09:57.2779965Z caller=sync.go:167 component=daemon err="&lt;cluster&gt;:namespace/dev: running kubectl: error: unable to recognize \"STDIN\": an error on the server (\"\") has prevented the request from succeeding; &lt;cluster&gt;:namespace/prod: running kubectl: error: unable to recognize \"STDIN\": an error on the server (\"\") has prevented the request from succeeding; dev:service/hello-world: running kubectl: error: unable to recognize \"STDIN\": an error on the server (\"\") has prevented the request from succeeding" ts=2020-06-10T09:09:57.2879489Z caller=images.go:17 component=sync-loop msg="polling for new images for automated workloads" ts=2020-06-10T09:09:57.3002208Z caller=images.go:27 component=sync-loop msg="no automated workloads" </code></pre> <p>From what I understand, Flux passes the resource definitions to kubectl, which then applies them? </p> <p>The way I interpret the error would mean that kubectl isn't passed anything to. However I opened a shell in the container and made sure Flux was in fact checking something out - which it did. </p> <p>I tried raising the verbosity to 9, but it didn't return anything that I deemed relevant (detailed outputs of the http requests and responses against the Kubernetes API).</p> <p>So what is happening here? </p>
<p>The problem was with the version of kubectl used in the 1.19 flux release, so I fixed it by using a prerelease: <a href="https://hub.docker.com/r/fluxcd/flux-prerelease/tags" rel="nofollow noreferrer">https://hub.docker.com/r/fluxcd/flux-prerelease/tags</a></p>
<p>So I have an Helm template:</p> <pre><code> spec: containers: - name: {{ .Values.dashboard.containers.name }} image: {{ .Values.dashboard.containers.image.repository }}:{{ .Values.dashboard.containers.image.tag }} imagePullPolicy: Always env: - name: BASE_PATH value: /myapp/web </code></pre> <p>and I want to pass extra environment variable to it</p> <p>my <code>values.yaml</code>:</p> <pre><code> extraEnvs: - name: SOMETHING_ELSE value: hello - name: SOMETHING_MORE value: world </code></pre> <p>how can I do it so that my result would be like this?</p> <pre><code> spec: containers: - name: {{ .Values.dashboard.containers.name }} image: {{ .Values.dashboard.containers.image.repository }}:{{ .Values.dashboard.containers.image.tag }} imagePullPolicy: Always env: - name: BASE_PATH value: /myapp/web - name: SOMETHING_ELSE value: hello - name: SOMETHING_MORE value: world </code></pre> <p>I was thinking something like this:</p> <pre><code> {{- if .Values.extraEnvs}} env: -| {{- range .Values.extraEnvs }} - {{ . | quote }} {{- end }} {{- end -}} </code></pre> <p>But this will override the previous settings</p>
<p>Just remove the <code>env:</code> from your bit.</p> <pre><code> env: - name: BASE_PATH value: /myapp/web {{- if .Values.extraEnvs}} {{- range .Values.extraEnvs }} - name: {{ .name }} value: {{ .value }} {{- end }} {{- end -}} </code></pre> <p>You can also use <code>toYaml</code> as mentioned in comments rather than iterating yourself.</p>
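<p>For reference, a sketch of the <code>toYaml</code> variant, assuming <code>extraEnvs</code> keeps the same list-of-name/value shape as in your <code>values.yaml</code>:</p> <pre><code>          env:
            - name: BASE_PATH
              value: /myapp/web
          {{- with .Values.extraEnvs }}
            {{- toYaml . | nindent 12 }}
          {{- end }}
</code></pre> <p>The number passed to <code>nindent</code> has to match the indentation of the surrounding <code>env</code> entries in your template.</p>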
<p>Assuming I have a Kubernetes Deployment object with the <code>Recreate</code> strategy and I update the Deployment with a new container image version. Kubernetes will:</p> <ol> <li>scale down/kill the existing Pods of the Deployment,</li> <li>create the new Pods,</li> <li>which will pull the new container images</li> <li>so the new containers can finally run.</li> </ol> <p>Of course, the <code>Recreate</code> strategy is exepected to cause a downtime between steps 1 and 4, where no Pod is actually running. However, step 3 can take a lot of time if the container images in question are or the container registry connection is slow, or both. In a test setup (Azure Kubernetes Services pulling a Windows container image from Docker Hub), I see it taking 5 minutes and more, which makes for a really long downtime.</p> <p>So, what is a good option to reduce that downtime? Can I somehow get Kubernetes to pull the new images before killing the Pods in step 1 above? (Note that the solution should work with Windows containers, which are notoriously large, in case that is relevant.)</p> <p>On the Internet, I have found <a href="https://codefresh.io/kubernetes-tutorial/single-use-daemonset-pattern-pre-pulling-images-kubernetes/" rel="nofollow noreferrer">this Codefresh article using a DaemonSet and Docker in Docker</a>, but I guess <a href="https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/#so-why-the-confusion-and-what-is-everyone-freaking-out-about" rel="nofollow noreferrer">Docker in Docker is no longer compatible with containerd</a>.</p> <p>I've also found <a href="https://stackoverflow.com/a/59588935/62838">this StackOverflow answer</a> that suggests using an Azure Container Registry with Project Teleport, but that is in private preview and doesn't support Windows containers yet. Also, it's specific to Azure Kubernetes Services, and I'm looking for a more general solution.</p> <p>Surely, this is a common problem that has a &quot;standard&quot; answer?</p> <p><strong>Update 2021-12-21:</strong> Because I've got a corresponding answer, I'll clarify that I cannot easily change the deployment strategy. The application in question does not support running Pods of different versions at the same time because it uses a database that needs to be migrated to the corresponding application version, without forwards or backwards compatibility.</p>
<p>Via <a href="https://www.reddit.com/r/kubernetes/comments/oeruh9/can_kubernetes_prepull_and_cache_images/" rel="nofollow noreferrer">https://www.reddit.com/r/kubernetes/comments/oeruh9/can_kubernetes_prepull_and_cache_images/</a>, I've found these ideas:</p> <ul> <li>Implement a DaemonSet that runs a &quot;sleep&quot; loop on all the images I need.</li> <li>Use <a href="http://github.com/mattmoor/warm-image" rel="nofollow noreferrer">http://github.com/mattmoor/warm-image</a>, which has no Windows support.</li> <li>Use <a href="https://github.com/ContainerSolutions/ImageWolf" rel="nofollow noreferrer">https://github.com/ContainerSolutions/ImageWolf</a>, which says, &quot;ImageWolf is currently alpha software and intended as a PoC - please don't run it in production!&quot;</li> <li>Use <a href="https://github.com/uber/kraken" rel="nofollow noreferrer">https://github.com/uber/kraken</a>, which seems to be a registry, not a pre-pulling solution.</li> <li>Use <a href="https://github.com/dragonflyoss/Dragonfly" rel="nofollow noreferrer">https://github.com/dragonflyoss/Dragonfly</a> (now <a href="https://github.com/dragonflyoss/Dragonfly2" rel="nofollow noreferrer">https://github.com/dragonflyoss/Dragonfly2</a>), which also seems to do somethings completely different.</li> <li>Use <a href="https://github.com/senthilrch/kube-fledged" rel="nofollow noreferrer">https://github.com/senthilrch/kube-fledged</a>, which looks exactly right and more mature than the others, but <a href="https://github.com/senthilrch/kube-fledged/issues/118" rel="nofollow noreferrer">has no Windows support</a>.</li> <li>Use <a href="https://github.com/dcherman/image-cache-daemon" rel="nofollow noreferrer">https://github.com/dcherman/image-cache-daemon</a>, which <a href="https://hub.docker.com/r/exiges/image-cache-daemon/tags" rel="nofollow noreferrer">has no Windows support</a>.</li> <li>Use <a href="https://goharbor.io/blog/harbor-2.1/" rel="nofollow noreferrer">https://goharbor.io/blog/harbor-2.1/</a>, which also seems to be a registry, not a pre-pulling solution.</li> <li>Use <a href="https://openkruise.io/docs/user-manuals/imagepulljob/" rel="nofollow noreferrer">https://openkruise.io/docs/user-manuals/imagepulljob/</a>, which also looks right, but a) OpenKruise is huge and I'm not sure I want to install this just to preload images, and b) it seems <a href="https://hub.docker.com/r/openkruise/kruise-manager/tags" rel="nofollow noreferrer">it has no Windows support</a>.</li> </ul> <p>So, it seems I have to implement this on my own, with a DaemonSet. I still hope someone can provide a better answer than this one 🙂 .</p>
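<p>For completeness, a rough sketch of the first idea (the DaemonSet &quot;sleep&quot; pre-puller); all image names below are placeholders. Each init container exists only to force the kubelet to pull one of the big images, and the main container just pauses so the pod keeps running and the images stay in the node cache. Windows nodes would need a Windows-based pause image plus a matching <code>nodeSelector</code>, which is exactly the part the tools above don't cover.</p> <pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: image-prepuller
spec:
  selector:
    matchLabels:
      app: image-prepuller
  template:
    metadata:
      labels:
        app: image-prepuller
    spec:
      initContainers:
      - name: prepull-app                      # pulls the large app image, then exits immediately
        image: example.azurecr.io/my-app:1.2.3
        command: ["cmd", "/c", "exit 0"]       # Windows image; use ["sh", "-c", "true"] for Linux images
      containers:
      - name: pause                            # any minimal long-running image works here
        image: mcr.microsoft.com/oss/kubernetes/pause:3.6
</code></pre>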
<p>I need to find out if all deployments having label=a are in the READY state. An example is below. I need to return true or false based on whether all deployments are READY or not. I can parse the text, but I think there might be a more clever way with just kubectl and JSONPath or something.</p> <pre><code>PS C:\Users\artis&gt; kubectl get deployment -n prod -l role=b
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
apollo-api-b   0/3     3            0           107s
esb-api-b      0/3     3            0           11m
frontend-b     3/3     3            3           11m
</code></pre>
<p>Add <code>-o yaml</code> to see the YAML objects for each, which you can then use to build a <code>-o jsonpath</code> like <code>-o jsonpath='{range .items[*]}{.status.conditions[?(@.type == &quot;Available&quot;)].status}{&quot;\n&quot;}{end}'</code>. You can't do logic operations in JSONPath so you'll need to filter externally like <code>| grep False</code> or something.</p>
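<p>For example, building on the query from the question, something like this prints OK only when no deployment reports an Available condition of False (run from a bash-like shell; PowerShell needs different quoting):</p> <pre><code>kubectl get deployment -n prod -l role=b \
  -o jsonpath='{range .items[*]}{.status.conditions[?(@.type == "Available")].status}{"\n"}{end}' \
  | grep -q False &amp;&amp; echo "NOT READY" || echo "OK"
</code></pre>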
<p>There's <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#server-tokens" rel="nofollow noreferrer">a setting</a> to turn off the NGINX Server header, but I'm new to Kubernetes and Helm so I don't actually know how to set it.</p> <p>I've tried to turn off server tokens like so:</p> <pre><code>helm upgrade --reuse-values nginx-ingress stable/nginx-ingress -ndefault --set controller.config.server-tokens='"false"' </code></pre> <p>Which is indeed reflected when I read back the chart values:</p> <pre><code>❯ helm get values nginx-ingress -ndefault USER-SUPPLIED VALUES: controller: config: server-tokens: '"false"' publishService: enabled: true </code></pre> <p>And in the YAML:</p> <pre><code>❯ kubectl get -n default configmap ingress-controller-leader-nginx -oyaml apiVersion: v1 data: server-tokens: "false" kind: ConfigMap metadata: ... </code></pre> <p>But it doesn't seem to be applied against the internal <code>nginx.conf</code>:</p> <pre><code>❯ kubectl exec -ndefault nginx-ingress-controller-b545558d8-829dz -- cat /etc/nginx/nginx.conf | grep tokens server_tokens on; </code></pre> <p>And also my web-server is still sending the <code>server:</code> header.</p> <p>Do I have to reboot the service for the ConfigMap to be reflected or what? How do I do this?</p>
<p><del>I think it has to be a capital F in <code>False</code>.</del> Doesn't need to be. It just wasn't taking before. I don't know why. <a href="https://github.com/nginxinc/kubernetes-ingress/issues/226#issuecomment-391183286" rel="nofollow noreferrer">This</a> and <a href="https://github.com/nginxinc/kubernetes-ingress/issues/226#issuecomment-391210099" rel="nofollow noreferrer">this</a> both work.</p> <pre><code>❯ kubectl get configmap -ndefault nginx-ingress-controller -oyaml apiVersion: v1 data: server-tokens: "False" kind: ConfigMap metadata: annotations: meta.helm.sh/release-name: nginx-ingress meta.helm.sh/release-namespace: default creationTimestamp: "2020-06-19T06:37:21Z" labels: app: nginx-ingress app.kubernetes.io/managed-by: Helm chart: nginx-ingress-1.37.0 component: controller heritage: Helm release: nginx-ingress name: nginx-ingress-controller namespace: default resourceVersion: "7130237" selfLink: /api/v1/namespaces/default/configmaps/nginx-ingress-controller uid: 633efe3a-73cf-4c40-8e40-581937e367e2 </code></pre> <p>You can do this manually by editing the configmap via <code>kubectl edit configmap</code> and then deleting the controller pod, or you can set it via helm values:</p> <pre><code>helm get values nginx-ingress -ndefault -oyaml &gt; tmp/nginx-ingress-values.yaml </code></pre> <p>Change to:</p> <pre><code>controller: config: server-tokens: "False" publishService: enabled: true </code></pre> <p>And apply it:</p> <pre><code>helm upgrade nginx-ingress stable/nginx-ingress -f tmp/nginx-ingress-values.yaml -ndefault </code></pre> <p>Delete the controller pod for good measure.</p> <p>Grep it:</p> <pre><code>kubectl exec -ndefault nginx-ingress-controller-b545558d8-gmz64 -- cat /etc/nginx/nginx.conf | grep tokens </code></pre> <p>Confirm your headers in Chrome dev tools:</p> <p><img src="https://i.stack.imgur.com/0OTNA.png" alt=""></p> <hr> <p>Perhaps more simply this would have worked:</p> <pre><code>helm upgrade --reuse-values nginx-ingress stable/nginx-ingress -ndefault --set-string controller.config.server-tokens=false </code></pre> <p>But my config is working now so I'm not going to try it.</p>
<p>When I tries create pod from docker images, I get create container error. Here is my pod.yml file</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: client spec: containers: - image: es-tutorial_web imagePullPolicy: Never name: es-web ports: - containerPort: 3000 - image: es-tutorial_filebeat imagePullPolicy: Never name: es-filebeat </code></pre> <p>docker-compose.yml</p> <pre><code>version: '3.7' services: web: build: context: . dockerfile: ./Dockerfile container_name: test-app working_dir: /usr/src/app command: /bin/bash startup.sh volumes: - .:/usr/src/app ports: - "3000:3000" networks: - logs filebeat: build: context: . dockerfile: filebeat/Dockerfile container_name: test-filebeat volumes: - .:/usr/src/app depends_on: - web networks: - logs networks: logs: driver: bridge </code></pre> <p>kubectl get pods</p> <pre><code>client 1/2 CreateContainerError 0 24m </code></pre> <p>kubectl describe client</p> <pre><code>Name: client Namespace: default Priority: 0 Node: minikube/10.0.2.15 Start Time: Tue, 15 Oct 2019 15:29:02 +0700 Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"client","namespace":"default"},"spec":{"containers":[{"image":"es-tut... Status: Pending IP: 172.17.0.8 Containers: es-web: Container ID: Image: es-tutorial_web Image ID: Port: 3000/TCP Host Port: 0/TCP State: Waiting Reason: CreateContainerError Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-5ftqt (ro) es-filebeat: Container ID: docker://4174e7eb5bf8abe7662698c96d7945a546503f3c5494cad2ae10d2a8d4f02762 Image: es-tutorial_filebeat Image ID: docker://sha256:4e3d24ef67bb05b2306eb49eab9d8a3520aa499e7a30cf0856b8658807b49b57 Port: &lt;none&gt; Host Port: &lt;none&gt; State: Running Started: Tue, 15 Oct 2019 15:29:03 +0700 Ready: True Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-5ftqt (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: default-token-5ftqt: Type: Secret (a volume populated by a Secret) SecretName: default-token-5ftqt Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 13m default-scheduler Successfully assigned default/client to minikube Normal Pulled 13m kubelet, minikube Container image "es-tutorial_filebeat" already present on machine Normal Created 13m kubelet, minikube Created container es-filebeat Normal Started 13m kubelet, minikube Started container es-filebeat Warning Failed 11m (x11 over 13m) kubelet, minikube Error: Error response from daemon: No command specified Normal Pulled 3m26s (x50 over 13m) kubelet, minikube Container image "es-tutorial_web" already present on machine </code></pre> <p>Dockerfile</p> <pre><code>... RUN apt-get update &amp;&amp; apt-get install -y curl RUN curl -sL "https://deb.nodesource.com/setup_12.x" | bash - &amp;&amp; apt-get install -y nodejs &amp;&amp; echo 'node' &gt; node RUN mkdir -p /usr/src/app COPY . /usr/src/app WORKDIR /usr/src/app RUN chmod +x startup.sh RUN npm install -g nodemon </code></pre> <p>startup.sh</p> <pre><code>if [ ! -d /usr/src/app/node_modules ]; then echo "Install dependencies..." 
cd /usr/src/app &amp;&amp; npm install --no-bin-links fi cd /usr/src/app &amp;&amp; nodemon -L bin/www </code></pre> <p>What am I doing wrong? Please help me.</p>
<p>I believe you're missing <a href="https://docs.docker.com/engine/reference/builder/#cmd" rel="nofollow noreferrer"><code>CMD</code></a> or <a href="https://docs.docker.com/engine/reference/builder/#entrypoint" rel="nofollow noreferrer"><code>ENTRYPOINT</code></a> in your <code>Dockerfile</code>. They're required to run the container.</p> <p>It should be set to some default command which you plan to run when executing the container.</p> <p>If <code>startup.sh</code> is your script running the app, try the following:</p> <pre><code>ENTRYPOINT /usr/src/app/startup.sh </code></pre> <p>Or modify your <code>Dockerfile</code> to:</p> <pre><code># ...
WORKDIR /usr/src/app
RUN chmod +x startup.sh
RUN npm install -g nodemon
# install dependencies at build time only if they were not copied in already
RUN test -d /usr/src/app/node_modules || npm install --no-bin-links
# nodemon is installed globally, so it is on the PATH
ENTRYPOINT [&quot;nodemon&quot;, &quot;-L&quot;, &quot;bin/www&quot;]
</code></pre>
<pre><code>kubectl get cs -o yaml
</code></pre> <p>returns a healthy status for the control plane, but for some reason,</p> <pre><code>kubectl get pods --all-namespaces
</code></pre> <p>does not show any control plane pods like the api-server, scheduler, controller manager, etc.</p> <p>I can also see the manifest files at the /etc/kubernetes/manifests location as well.</p> <p>Please help, what am I missing?</p>
<p>GKE does not run the control plane in pods. Google does not really talk about how they run it but it's likely as containers in some GKE-specific management system.</p>
<p>I've found 2 different ways to run a one-off command in my kubernetes cluster:</p> <h2>Method 1</h2> <pre><code>kubectl apply -f kubernetes/migrate.job.yaml kubectl wait --for=condition=complete --timeout=600s job/migrate-job kubectl delete job/migrate-job </code></pre> <p>The problem with this is (a) it doesn't show me the output which I like to see, and (b) it's 3 commands</p> <h2>Method 2</h2> <pre><code>kubectl run migrate --stdin --tty --rm --restart=Never --image=example.org/app/php:v-$(VERSION) --command -- ./artisan -vvv migrate </code></pre> <p>This almost works except I also need a volume mount to run this command, which AFAIK would require a rather lengthy <code>--overrides</code> arg. If I could pull the override in from a file instead it'd probably work well. Can I do that?</p> <p>I also need to to return the exit code if the command fails.</p>
<p>There's an open ticket for this: <a href="https://github.com/kubernetes/kubernetes/issues/63214" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/63214</a></p> <p>A short term solution is to run your job like this:</p> <pre><code>kubectl run migrate --stdin --tty --rm --restart=Never --image=example.org/app/php:v-$(VERSION) --overrides="$(cat kubernetes/migrate.pod.yaml | y2j)" </code></pre> <p>Using <a href="https://github.com/wildducktheories/y2j" rel="nofollow noreferrer">y2j</a> to convert YAML to JSON so that I can use a standard pod manifest.</p> <p><code>migrate.pod.yaml</code> looks like:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: migrate-pod spec: volumes: - name: migrate-secrets-volume secret: secretName: google-service-account containers: - name: migrate-container image: example.org/app/php command: ["./artisan", "-vvv", "migrate"] stdin: true stdinOnce: true tty: true envFrom: - secretRef: name: dev-env volumeMounts: - name: migrate-secrets-volume mountPath: /app/secrets readOnly: true restartPolicy: Never imagePullSecrets: - name: regcred </code></pre>
<p>I have a spring boot web app which simply prints a property that is passed in a Kubernetes' ConfigMap.</p> <p>This is my main class:</p> <pre><code>@SpringBootApplication @EnableDiscoveryClient @RestController public class DemoApplication { private MyConfig config; private DiscoveryClient discoveryClient; @Autowired public DemoApplication(MyConfig config, DiscoveryClient discoveryClient) { this.config = config; this.discoveryClient = discoveryClient; } @RequestMapping("/") public String info() { return config.getMessage(); } public static void main(String[] args) { SpringApplication.run(DemoApplication.class, args); } @RequestMapping("/services") public String services() { StringBuilder b = new StringBuilder(); discoveryClient.getServices().forEach((s) -&gt; b.append(s).append(" , ")); return b.toString(); } } </code></pre> <p>and the <code>MyConfig</code> class is:</p> <pre><code>@Configuration @ConfigurationProperties(prefix = "bean") public class MyConfig { private String message = "a message that can be changed live"; public String getMessage() { return message; } public void setMessage(String message) { this.message = message; } } </code></pre> <p>Basically, by invoking root resource I always get:</p> <blockquote> <p>a message that can be changed live</p> </blockquote> <p>And invoking /services I actually get a list of Kubernetes services.</p> <p>I'm creating the ConfigMap with <code>kubectl create -f configmap-demo.yml</code> being the content:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: demo data: bean.message: This is an info from k8 </code></pre> <p>And the deployment with <code>kubecetl create -f deploy-demo.yml</code> and the content is:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: demo labels: app: demo spec: replicas: 1 selector: matchLabels: app: demo template: metadata: labels: app: demo spec: # this service account was created according to # https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions # point 5 - Grant super-user access to all service accounts cluster-wide (strongly discouraged) serviceAccountName: i18n-spring-k8 containers: - name: demo image: aribeiro/sck-demo imagePullPolicy: Never env: - name: JAVA_OPTS value: ports: - containerPort: 8080 volumes: - name: demo configMap: name: demo </code></pre> <p>The problem is that when accessing the root resource <code>/</code> I always get the default hardcoded value and never what is defined in Kubernetes' ConfigMap.</p> <p>Example project also with yaml files and Docker file available at <a href="https://drive.google.com/open?id=107IcwnYIbVpmwVgdgi8Dhx4nHEFAVxV8" rel="nofollow noreferrer">https://drive.google.com/open?id=107IcwnYIbVpmwVgdgi8Dhx4nHEFAVxV8</a> .</p> <p>Also checked the startup DEBUG logs and I don't see any error or clue why it should not work.</p>
<p>The <a href="https://github.com/spring-cloud/spring-cloud-kubernetes/blob/master/README.adoc#kubernetes-propertysource-implementations" rel="noreferrer">Spring Cloud Kubernetes documentation</a> is incomplete. It lacks the instruction to include this dependency to enable loading application properties from ConfigMap's:</p> <pre><code>&lt;dependency&gt; &lt;groupId&gt;org.springframework.cloud&lt;/groupId&gt; &lt;artifactId&gt;spring-cloud-starter-kubernetes-config&lt;/artifactId&gt; &lt;/dependency&gt; </code></pre>
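<p>For completeness, a minimal <code>bootstrap.yml</code> sketch for the setup in the question, assuming the app should read the ConfigMap named <code>demo</code> from its own namespace (property names as documented for Spring Cloud Kubernetes; adjust if your version differs):</p> <pre><code>spring:
  application:
    name: demo              # defaults the ConfigMap lookup to this name
  cloud:
    kubernetes:
      config:
        name: demo          # ConfigMap holding the bean.message property
        namespace: default
</code></pre> <p>With that in place, the <code>bean.message</code> key from the ConfigMap should be bound onto the <code>@ConfigurationProperties(prefix = &quot;bean&quot;)</code> bean.</p>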
<p>Hi guys I have an error and I can't find the answer. I am trying to deploy a very simple MySQL deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: mysql name: mysql spec: replicas: 1 selector: matchLabels: app: mysql template: metadata: labels: app: mysql spec: containers: - image: mysql:8.0.25 name: mysql ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql mountPath: /var/lib/mysql env: - name: MYSQL_USER value: weazel - name: MYSQL_DATABASE value: weazel - name: MYSQL_PASSWORD value: weazel - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: mysql key: rootPassword volumes: - name: mysql persistentVolumeClaim: claimName: mysql --- apiVersion: v1 kind: Service metadata: name: mysql spec: ports: - port: 3306 selector: app: mysql clusterIP: None </code></pre> <p>The deployment starts running and the pod after few minutes of initialization gives me that error:</p> <pre><code>2021-06-20 21:19:58+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.25-1debian10 started. 2021-06-20 21:19:59+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 2021-06-20 21:19:59+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.25-1debian10 started. 2021-06-20 21:19:59+00:00 [Note] [Entrypoint]: Initializing database files 2021-06-20T21:19:59.461650Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.25) initializing of server in progress as process 43 2021-06-20T21:19:59.510070Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2021-06-20T21:21:15.206744Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended. 2021-06-20T21:24:18.876746Z 6 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option. 2021-06-20 21:28:29+00:00 [Note] [Entrypoint]: Database files initialized 2021-06-20 21:28:29+00:00 [Note] [Entrypoint]: Starting temporary server 2021-06-20T21:28:30.333051Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.25) starting as process 126 2021-06-20T21:28:30.538635Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2021-06-20T21:28:32.723573Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended. 2021-06-20T21:28:33.273688Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: /var/run/mysqld/mysqlx.sock 2021-06-20T21:28:39.828471Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed. 2021-06-20T21:28:39.828950Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel. 2021-06-20T21:28:40.155589Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory. 2021-06-20T21:28:40.215423Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.25' socket: '/var/run/mysqld/mysqld.sock' port: 0 MySQL Community Server - GPL. 2021-06-20 21:28:40+00:00 [Note] [Entrypoint]: Temporary server started. Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it. 
2021-06-20 21:31:13+00:00 [Note] [Entrypoint]: Creating database weazel mysql: [ERROR] unknown option '--&quot;'. </code></pre> <p>Then it will restart the pod, but I will not be able to reconnect to the database anymore with the provided credentials. I have tried to do the same in a docker container, and everything works very well as expected. No matter what I do, I will always get that error. I suppose it can be related to my kubernetes cluster, which is a three-master on-premise cluster.</p> <p>Please help.</p> <p><strong>SOLUTION</strong></p> <p>My secret value was base64-encoded with a newline at the end of the variable; after encoding it without the trailing newline, everything went smoothly. Thank you</p>
<p>Answered in comments, the password was corrupted which threw off the setup scripts.</p>
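<p>For anyone hitting the same thing: the tell-tale sign is the <code>unknown option '--&quot;'</code> line in the entrypoint log. One way to avoid a trailing newline is to let kubectl do the encoding, or to use <code>echo -n</code> if you base64-encode the value yourself (the password below is a placeholder):</p> <pre><code># let kubectl encode the value for you
kubectl create secret generic mysql --from-literal=rootPassword='MyRootPassword'

# or, when encoding by hand, make sure no newline sneaks in
echo -n 'MyRootPassword' | base64
</code></pre>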
<p>My node.js kubernetes dev server is stuck in a crash loop. The logs show:</p> <pre><code>/bin/busybox:1 ELF ^ SyntaxError: Invalid or unexpected token at wrapSafe (internal/modules/cjs/loader.js:992:16) at Module._compile (internal/modules/cjs/loader.js:1040:27) at Object.Module._extensions..js (internal/modules/cjs/loader.js:1097:10) at Module.load (internal/modules/cjs/loader.js:941:32) at Function.Module._load (internal/modules/cjs/loader.js:776:16) at Function.executeUserEntryPoint [as runMain (internal/modules/run_main.js:72:12) at internal/main/run_main_module.js:17:47 stream closed </code></pre> <p>I found the image that's running via <code>kubectl describe</code> and ran it in a shell. It ought to be running this (from the <code>Dockerfile</code>):</p> <pre><code>... ENTRYPOINT [&quot;node&quot;, &quot;--max-old-space-size=4096&quot;, &quot;--enable-source-maps&quot;] CMD ['node_modules/.bin/rollup','-cw'] </code></pre> <p>So in the shell I ran:</p> <pre><code>❯ dsh myregistry/project/server:R4SUGQt +dsh:1&gt; docker run --rm -it --entrypoint sh myregistry/project/server:R4SUGQt -c '{ command -v zsh &amp;&amp; exec zsh -il ;} || { command -v bash &amp;&amp; bash -il ;} || { command -v ash &amp;&amp; ash -il ;}' /bin/ash 06884c20d5cc:/app# node --max-old-space-size=4096 --enable-source-maps node_modules/.bin/rollup -cw </code></pre> <p>And it works perfectly. So where's that error stemming from? All the paths in the stack trace internal (<code>internal/modules/...</code>) not from my script or rollup.</p> <p>I'm using <code>node:alpine</code> as my base image which is currently on Node v14.11.0.</p> <hr /> <p>Running it directly</p> <pre><code>docker run --rm myregistry/project/server:R4SUGQ </code></pre> <p><strong>does</strong> reproduce the error.</p> <hr /> <p>Running</p> <pre><code>docker run -it myregistry/project/server:R4SUGQt node_modules/.bin/rollup -cw </code></pre> <p>Works fine... is there something wrong with my <code>CMD</code> syntax?</p> <pre><code>CMD ['node_modules/.bin/rollup','-cw'] </code></pre>
<p>It was the single quotes I used in the <code>CMD</code>.</p> <p>Found this reddit post after the fact: <a href="https://www.reddit.com/r/docker/comments/9wn5gw/be_aware_of_the_quotes_in_your_dockerfile/" rel="nofollow noreferrer">https://www.reddit.com/r/docker/comments/9wn5gw/be_aware_of_the_quotes_in_your_dockerfile/</a></p> <p>What a waste of time... leaving this question here in case someone else hits a similar error, maybe it'll be Googlable for them.</p>
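<p>For the record, what seems to happen with single quotes is that Docker can't parse the <code>CMD</code> as a JSON array, so it falls back to shell form and the ENTRYPOINT ends up being handed <code>/bin/sh</code> (busybox on alpine) as the script to run, which is where the ELF syntax error comes from. The fixed line is just:</p> <pre><code>CMD ["node_modules/.bin/rollup", "-cw"]
</code></pre>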
<p>I've created the following sample program which I need to create secret values</p> <p>index.js</p> <pre><code>const express = require(&quot;express&quot;); const port = process.env.PORT || 3000; app = express(); app.get('/', (req, res) =&gt; ( res.send(&quot;hello from k8s&quot;)) ) app.listen(3000, function () { console.log(&quot;my secret ---&gt; &quot;, process.env.TOKEN1) console.log(&quot;server is listen to port&quot;, port) }) </code></pre> <p>This is the secret.yaml</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: secert1 namespace: trail type: Opaque data: TOKEN1: cmVhbGx5X3NlY3JldF92YWx1ZTE= </code></pre> <p>and this is how I connected between of them</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: app1 namespace: trail spec: replicas: 1 template: metadata: labels: app: app1 spec: containers: - name: app1 image: myimage imagePullPolicy: Always ports: - containerPort: 5000 env: - name: myenv valueFrom: secretKeyRef: name: secert1 key: TOKEN1 </code></pre> <p>When deploying the program I see in k8s logs</p> <pre><code> my secret ---&gt; undefined server is listen to port 5000 </code></pre> <p>What am I missing here ? in addition assume that I've more than 20 properties which I need to read from my app, is there a better way or just map each of the key value in the secret ?</p>
<p>The <code>name</code> is the key for the env var so with what you have, that should be <code>process.env.myenv</code>. You probably instead want to use the <code>envFrom</code> option.</p>
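<p>For example, with <code>envFrom</code> every key in the Secret becomes an environment variable of the same name, so <code>process.env.TOKEN1</code> works without listing all 20+ keys one by one (a sketch based on the manifests in the question):</p> <pre><code>      containers:
      - name: app1
        image: myimage
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
        envFrom:
        - secretRef:
            name: secert1       # every key in this Secret (e.g. TOKEN1) becomes an env var
</code></pre>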
<p>I am getting metrics exposed by kube-state-metrics by querying the Prometheus server, but the issue is that I am getting duplicate metrics that differ only in the job field. I am running a query such as:</p> <pre><code>curl 'http://10.101.202.25:80/api/v1/query?query=kube_pod_status_phase'| jq </code></pre> <p>The only difference is in the job field. <a href="https://i.stack.imgur.com/HHXJW.jpg" rel="nofollow noreferrer">Metrics coming when querying Prometheus-Server</a></p> <p>All pods running in the cluster: <a href="https://i.stack.imgur.com/WxNXz.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/WxNXz.jpg</a></p> <p>Any help is appreciated.</p> <p>Thank You</p> <p><strong>prometheus.yml</strong></p> <pre><code>global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  # - &quot;first.rules&quot;
  # - &quot;second.rules&quot;

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
</code></pre>
<p>You are running (or at least ingesting) two copies of kube-state-metrics. Probably one you installed and configured yourself and another from something like kube-prometheus-stack?</p>
<p>I am trying to enable the local account on an AKS cluster (version 1.24.10) by running commands from an AzureDevOps yaml pipeline.</p> <p>The “old” az aks command : <code>az aks update -g &lt;myresourcegroup&gt; -n &lt;myclustername&gt; --enable-local</code> used to serve me well to enable a local account on an AKS cluster. In the yaml pipeline, however, this does not seem to work and I resorted to running the <a href="https://learn.microsoft.com/en-us/powershell/module/az.aks/set-azakscluster?view=azps-9.7.1" rel="nofollow noreferrer">Set-AzAksCluster</a> command from within a AzurePowerShell@5 task</p> <pre><code> - task: AzurePowerShell@5 displayName: 'disable-local-account' name: disablelocalaccount inputs: azureSubscription: 'myazsubscription' ScriptType: InlineScript Inline: | Set-AzAksCluster -ResourceGroupName myresourcegrp -Name mycluster -DisableLocalAccount azurePowerShellVersion: LatestVersion </code></pre> <p>By passing the -DisableLocalAccount switch to the command we can disable the local account on the cluster. The enabling of the local account on the cluster just seems to elude me somehow…sigh.</p> <p>Does anybody know if it is possible to enable the local account using the <a href="https://learn.microsoft.com/en-us/powershell/module/az.aks/set-azakscluster?view=azps-9.7.1" rel="nofollow noreferrer">Set-AzAksCluster</a> command? And if so, what is the correct way to go about this?</p> <p>Many thanks!</p> <p>Kind regards,<br /> Morné</p>
<p>Wow, great spot... The enable flag isn't there. You might want to raise this on the <a href="https://github.com/Azure/azure-powershell/discussions/categories/feature-requests" rel="nofollow noreferrer">Azure PowerShell GitHub</a>.</p> <p>As a workaround, the Azure CLI does allow enabling of local accounts.</p> <pre class="lang-bash prettyprint-override"><code>az aks update --name cluster --resource-group rg --enable-local-accounts </code></pre> <p><a href="https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-update" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-update</a></p>
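<p>If it helps, that CLI call can sit in the same pipeline as your AzurePowerShell task by using an <code>AzureCLI@2</code> task (subscription, resource group and cluster names below are the placeholders from your question):</p> <pre><code>  - task: AzureCLI@2
    displayName: 'enable-local-accounts'
    inputs:
      azureSubscription: 'myazsubscription'
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        az aks update --name mycluster --resource-group myresourcegrp --enable-local-accounts
</code></pre>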
<p>I'm trying to create some reusable Terraform modules that provision Kubernetes resources on a cluster. My modules do not explicitly configure a Kubernetes provider, expecting that a configured one will be created by the "root" module. I believe this is in line with <a href="https://www.terraform.io/docs/configuration/modules.html" rel="nofollow noreferrer">Terraform best practices</a>.</p> <p>If the root module "forgets" to provide a configured Kubernetes provider, though, it appears that Terraform will provide one by default, and with the default behaviour of using whatever context may currently be configured in the executing user's <code>kubeconfig</code>. If the user is not paying attention, they may inadvertently end up modifying resources on the wrong cluster.</p> <p>Is there a way to prevent this behaviour and effectively say "you <em>must</em> explicitly pass in a provider for this module to use"?</p>
<p>The best option I've come up with is to create a Kubernetes provider in the module like:</p> <pre><code># Prevents this module from loading a default context from local kubeconfig when calling module forgets to define a Kubernetes provider provider "kubernetes" { load_config_file = false } </code></pre> <p>Then, as long as the calling module provides a different instance, eg:</p> <pre><code>provider "kubernetes" { # properly configure stuff here } module "my-module" { source = "blah" providers = { kubernetes = kubernetes } etc. } </code></pre> <p>you can avoid accidentally using the default provider.</p> <p>This is fine, but a little non-obvious until you're used to the pattern.</p>
<p>How can I trigger an update (redeploy) of the pods through the k8s golang client?</p> <p>At the moment, I use these libraries to get information about pods and namespaces:</p> <pre><code>v1 &quot;k8s.io/api/core/v1&quot;
k8s.io/apimachinery/pkg/apis/meta/v1
k8s.io/client-go/kubernetes
k8s.io/client-go/rest
</code></pre> <p>Maybe there is another library, or it can be done through Linux signals.</p>
<p>The standard way to trigger a rolling restart is set/update an annotation in the pod spec with the current timestamp. The change itself does nothing but that changes the pod template hash which triggers the Deployment controller to do its thang. You can use <code>client-go</code> to do this, though maybe work in a language you're more comfortable with if that's not Go.</p>
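<p>A minimal client-go sketch of that pattern, assuming you already have cluster access from inside a pod (the annotation key mirrors the one <code>kubectl rollout restart</code> sets; the namespace and deployment name are placeholders):</p> <pre><code>package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// restartDeployment bumps a pod-template annotation; the changed template hash
// makes the Deployment controller roll all pods according to its update strategy.
func restartDeployment(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) error {
	patch := fmt.Sprintf(
		`{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":%q}}}}}`,
		time.Now().Format(time.RFC3339))
	_, err := cs.AppsV1().Deployments(namespace).Patch(
		ctx, name, types.StrategicMergePatchType, []byte(patch), metav1.PatchOptions{})
	return err
}

func main() {
	cfg, err := rest.InClusterConfig() // use clientcmd instead when running outside the cluster
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := restartDeployment(context.Background(), cs, "default", "my-deployment"); err != nil {
		panic(err)
	}
}
</code></pre>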
<p>I have a deployment with multiple pods in Azure Kubernetes Service.<br> There is a K8s service that is used to connect deployment pods.<br> The service has a private IP accessible in Azure Virtual Network. The service type is LoadBalancer.<br> I want to monitor and see if the service is up. If it is not up, trigger an email alert.</p> <p>I have identified two options:</p> <p><strong>Option 1:</strong><br> I enabled AKS diagnostics so that I get the service logs. When I check the logs with the query below, I can see service failure logs. I think I can use these logs in Azure Monitor to trigger an alert. I still need to verify if it will work in every type of failure.</p> <pre><code>KubeEvents | where TimeGenerated &gt; ago(7d) | where not(isempty(Namespace)) | where ObjectKind == 'Service' </code></pre> <p><strong>Option 2:</strong><br> Create an Azure Function with HTTPS API enabled so I can call it externally from Pingdom. Make sure the function uses AppService with a VM so that it can access private IPs and the service (As this is using VM, it is increasing the cost). The function checks the private IP and sees if it is returning 200, and it will return 200; otherwise, it will return an error code. So Pingdom will keep the uptime details and also alert accordingly when it is down.</p> <p><strong>Summary:</strong><br> I am not 100% sure about option one. For the second option, it seems like doing too much work, and I think that there should be a better and more robust way of doing it.</p> <p>So I am interested in getting feedback from some Azure and K8s experts who dealt with the problem and solved it in a more robust way.</p>
<p>Using Azure Application Insights, there are two <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/availability-private-test" rel="nofollow noreferrer">private monitoring options</a> described:</p> <ol> <li>Allowing limited inbound connectivity</li> <li>Using Azure Functions, as you have described in your Option 2.</li> </ol> <p>Personally I prefer endpoint monitoring to be more independent from the resource that's hosting the service.</p>
<p>I just ran through the <code>JHipster</code> demo &quot;Learn JHipster In 15 Minutes&quot; and I'm now trying to deploy the result into kubernetes using <a href="https://www.jhipster.tech/kubernetes/" rel="nofollow noreferrer">these instructions</a>. It's failing on the 2nd question.</p> <pre><code>$ jhipster kubernetes INFO! Using JHipster version installed globally ⎈ Welcome to the JHipster Kubernetes Generator ⎈ Files will be generated in folder: /Users/johnaron/git/JHipDemo/k8s ✔ Docker is installed ? Which *type* of application would you like to deploy? Monolithic application ? Enter the root directory where your applications are located (../) ../ &gt;&gt; No monolith found in /Users/johnaron/git/JHipDemo/ </code></pre> <p>This is the listing of the parent directory</p> <pre><code>$ ll .. total 4384 drwxr-xr-x 40 johnaron staff 1280 Jun 22 16:08 . drwxr-xr-x@ 99 johnaron staff 3168 Jun 22 16:13 .. -rw-r--r-- 1 johnaron staff 853 Jun 15 18:26 .browserslistrc -rw-r--r-- 1 johnaron staff 478 Jun 15 18:26 .editorconfig -rw-r--r-- 1 johnaron staff 92 Jun 15 18:27 .eslintignore -rw-r--r-- 1 johnaron staff 3041 Jun 15 18:26 .eslintrc.json drwxr-xr-x 12 johnaron staff 384 Jun 22 16:13 .git -rw-r--r-- 1 johnaron staff 3413 Jun 15 18:26 .gitattributes -rw-r--r-- 1 johnaron staff 2075 Jun 15 18:26 .gitignore drwxr-xr-x 7 johnaron staff 224 Jun 15 18:41 .gradle -rw-r--r-- 1 johnaron staff 53 Jun 15 18:26 .huskyrc drwxr-xr-x 5 johnaron staff 160 Jun 15 18:31 .jhipster -rw-r--r-- 1 johnaron staff 113 Jun 15 18:26 .lintstagedrc.js -rw-r--r-- 1 johnaron staff 69 Jun 15 18:26 .prettierignore -rw-r--r-- 1 johnaron staff 251 Jun 15 18:26 .prettierrc -rw-r--r-- 1 johnaron staff 143 Jun 15 18:26 .yo-rc-global.json -rw-r--r-- 1 johnaron staff 1468 Jun 22 16:12 .yo-rc.json -rw-r--r-- 1 johnaron staff 7404 Jun 15 18:26 README.md -rw-r--r-- 1 johnaron staff 3210 Jun 15 18:26 angular.json -rw-r--r--@ 1 johnaron staff 418 Jun 15 18:29 blog.jdl drwxr-xr-x 8 johnaron staff 256 Jun 15 18:44 build -rw-r--r-- 1 johnaron staff 10204 Jun 15 18:26 build.gradle -rw-r--r-- 1 johnaron staff 793 Jun 15 18:26 checkstyle.xml drwxr-xr-x 9 johnaron staff 288 Jun 15 18:26 gradle -rw-r--r-- 1 johnaron staff 2424 Jun 15 18:26 gradle.properties -rwxr-xr-x 1 johnaron staff 5960 Jun 15 18:26 gradlew -rw-r--r-- 1 johnaron staff 2842 Jun 15 18:26 gradlew.bat -rw-r--r-- 1 johnaron staff 1448 Jun 15 18:26 jest.conf.js drwxr-xr-x 2 johnaron staff 64 Jun 22 16:08 k8s -rw-r--r-- 1 johnaron staff 530 Jun 15 18:26 ngsw-config.json drwxr-xr-x 1228 johnaron staff 39296 Jun 15 18:33 node_modules -rw-r--r-- 1 johnaron staff 2108706 Jun 15 18:33 package-lock.json -rw-r--r-- 1 johnaron staff 6672 Jun 15 18:33 package.json -rw-r--r-- 1 johnaron staff 694 Jun 15 18:26 settings.gradle -rw-r--r-- 1 johnaron staff 1538 Jun 15 18:26 sonar-project.properties drwxr-xr-x 4 johnaron staff 128 Jun 15 18:26 src -rw-r--r-- 1 johnaron staff 151 Jun 15 18:26 tsconfig.app.json -rw-r--r-- 1 johnaron staff 729 Jun 15 18:26 tsconfig.json -rw-r--r-- 1 johnaron staff 285 Jun 15 18:26 tsconfig.spec.json drwxr-xr-x 5 johnaron staff 160 Jun 15 18:26 webpack </code></pre> <p>What is <code>jhipster</code> looking for?</p> <p>John</p> <p>Now I tried moving the k8s dir to the same level as the JHipDemo, but it was no help.</p> <pre><code>johnaron@JOHNARON-M-841U:k8s$ jhipster kubernetes INFO! Using JHipster version installed globally ⎈ Welcome to the JHipster Kubernetes Generator ⎈ Files will be generated in folder: /Users/johnaron/git/k8s ✔ Docker is installed ? 
Which *type* of application would you like to deploy? Monolithic application ? Enter the root directory where your applications are located (../) ../JHipDemo &gt;&gt; No monolith found in /Users/johnaron/git/JHipDemo </code></pre>
<p>I believe you need to create it at the same level as your monolith. For example:</p> <ul> <li>JHipDemo</li> <li>k8s</li> </ul> <p>Can you try this?</p>
<p>we have pods on <a href="https://cloud.google.com/kubernetes-engine" rel="nofollow noreferrer">GKE</a>.</p> <p>we can delete pod by <code>kubectl -n &lt;ns&gt; delete pod &lt;pod name&gt;</code>. we can also delete the pod by clicking the following delete button.</p> <p><a href="https://i.stack.imgur.com/ZYP6o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZYP6o.png" alt="enter image description here" /></a></p> <p>what are the differences? what are the results if I did both?</p> <p>thanks</p> <p><strong>UPDATE</strong></p> <p>thanks. The pod has <code>terminationGracePeriodSeconds: 60</code>. what will happen if I run <code>kubectl delete pod pod_name</code> and then <code>ctrl C</code>? Then click the delete button on the web ui? all these are in 60 seconds.</p> <p>I am curious whether it will delete the pod by force without waiting for 60 seconds.</p> <p>thanks</p> <pre><code>$ kubectl -n ns delete pod pod-0 pod &quot;pod-0&quot; deleted ^C </code></pre>
<p>Both trigger the same API call to the kube-apiserver. If you try to delete something twice, the second call will fail either &quot;already deleted&quot; or &quot;not found&quot;.</p>
<p>I have setup a Kubernetes cluster with a few VMs. I know the external static IP of each node. Is there a way for me to manually register these external IPs to my cluster nodes?</p> <p>I don't want to use Service, Ingress, MetalLB, or any cloud-specific load balancer solution.</p> <p>Basically, I want to manually populate these fields for my nodes:</p> <p><a href="https://i.stack.imgur.com/I8An9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I8An9.png" alt="How do I add external IPs to my nodes" /></a></p> <hr /> <p>For context, here's what I'm trying to accomplish: I have some NodePort services which I want to expose to the internet without explicitly specifying the <code>externalIps</code> list in the service definition. I want a request from the browser to hit one of my nodes at its external IP, and inside the cluster behave as if the internal node IP was requested.</p> <p>I'm assuming that if I can somehow tell my nodes what their external IP is, then this scenario will work automatically. Is this assumption correct? And if so, is there a way, <code>kubectl</code> or otherwise, for me to add external IP to my node definitions?</p> <p>Else, can I achieve what I've described via a CNI plugin? Or by changing the content of <code>/etc/cni/net.d/10-weave.conflist</code>?</p>
<p>Okay, so to rewind, this is what the actual data structure looks like:</p> <pre><code>  addresses:
  - address: 10.45.0.53
    type: InternalIP
  - address: 34.127.42.172
    type: ExternalIP
</code></pre> <p>So multiple address types are present in parallel (there's also usually hostname data set in there too). However the Kubelet only ever sets InternalIP itself, based on either <code>--node-ip</code> or the primary interface address of the system. The other values are populated by cloud controllers, which you don't have. So either you need to write your own cloud controller or set up the primary IP of the node to be public (it will be noted as &quot;internal&quot;, but IPs are IPs).</p>
<p>I am re-designing a dotnet backend api using the CQRS approach. This question is about how to handle the Query side in the context of a Kubernetes deployment.</p> <p>I am thinking of using MongoDb as the Query Database. The app is dotnet webapi app. So what would be the best approach:</p> <ol> <li><p>Create a sidecar Pod which containerizes the dotnet app AND the MongoDb together in one pod. Scale as needed.</p> </li> <li><p>Containerize the MongoDb in its own pod and deploy one MongoDb pod PER REGION. And then have the dotnet containers use the MongoDb pod within its own region. Scale the MongoDb by region. And the dotnet pod as needed within and between Regions.</p> </li> <li><p>Some other approach I haven't thought of</p> </li> </ol>
<p>I would start with the simplest approach, and that is to place the write and read side together because they belong to the same bounded context.</p> <p>Then, in the future, if it is needed, I would consider adding more read sides or scaling out to other regions.</p> <p>To get started I would also consider adding the read side inside the same VM as the write side. Just to keep it simple, as getting it all up and working in production is always a big task with a lot of pitfalls.</p> <p>I would consider using a Kafka-like system to transport the data to the read sides, because with queues, if you later add a new read-side instance or want to rebuild one, things might get troublesome: the sender would need to know what read sides you have. With a Kafka style of integration, each &quot;read side&quot; can consume the events at its own pace. You can also more easily add more read sides later on. And the sender does not need to be aware of the receivers.</p> <p>Kafka allows you to decouple the producers of data from consumers of the data, like this picture that is taken from one of my training classes: <a href="https://i.stack.imgur.com/MBste.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MBste.png" alt="enter image description here" /></a></p> <p>In Kafka you have a set of producers appending data to the Kafka log:</p> <p><a href="https://i.stack.imgur.com/t0ZIO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t0ZIO.png" alt="enter image description here" /></a></p> <p>Then you can have one or more consumers processing this log of events:</p> <p><a href="https://i.stack.imgur.com/p1hgY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p1hgY.png" alt="enter image description here" /></a></p>
<p>I have created a custom Identity Service that mainly uses Identity Server 4 with Azure AD as an external provider. I have configured azure ad, having all the required ids &amp; secrets and locally was able to authenticate any registered user in Azure.</p> <p>The problem appears when we deployed that service into Kubernetes. <a href="https://i.stack.imgur.com/Js8jp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Js8jp.png" alt="Reply Url Error" /></a></p> <p>I have added a public URL like <a href="https://myidentitydomain.com/signin-oidc" rel="nofollow noreferrer">https://myidentitydomain.com/signin-oidc</a>. In a pod, we have a different domain of identity service than public one (be-identity-service), but it is how it works in Kubernetes. Not sure that its the issue connected to reply URL failure. But also my identity service has to be hosted in private network in Azure.</p> <p>Really appreciate for any given advice.</p>
<p>When you click on the sign-in button to authenticate with IdentityServer, do look at the URL to see what <strong>returnurl</strong> was actually sent to it and add it to the client definition.</p> <p>For example:</p> <pre><code>https://demo.identityserver.io/Account/Login?ReturnUrl=%2Fdiagnostics </code></pre>
<p>We need to initiate a Kubernetes cluster and start our development. Is it OK to have 1 master (control plane) node and 1 worker node with our containers to start the development? We can afford for services to be unavailable in case of upgrades, scaling and so on; I'm just worried that I am lacking some more important info. I was planning to have 8 CPUs and 64 GB, since those are similar resources to what we have on one of our VMs running the same apps without containers.</p> <p>We will deploy the cluster with Azure Kubernetes Service. Thank you</p>
<p>Sure, you can also have single node clusters. Just as you said, that means if one node goes down, the cluster is unavailable.</p>
<p>While scaling-in, HPA shouldn't terminate a pod that has a job running on it. This is taken care of by AWS autoscaling groups in the form of scale-in protection for instances. Is there something similar in kubernetes?</p>
<p>You use <code>terminationGracePeriodSeconds</code> to make your worker process wait until it is done. It will get a SIGTERM, then has that many seconds to finish (the default is 30, but you can make it anything; some of my workers have it set to 12 hours), then SIGKILL if it hasn't exited. So stop accepting new work units on SIGTERM, set the threshold to be the length of your longest work unit, and no worries :)</p>
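<p>As a sketch, the only Kubernetes-side change is on the pod template; the 12-hour figure is just an example, and your worker still has to catch SIGTERM, finish its current job and stop pulling new ones:</p> <pre><code>    spec:
      terminationGracePeriodSeconds: 43200   # 12h: at least your longest work unit
      containers:
      - name: worker
        image: my-worker:latest              # placeholder image
</code></pre>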
<p>I have an application made of a few microservices. The UI is served via Nginx and accesses the API via a reverse proxy. This is a local Kubernetes deployed with Rancher.</p> <p>The UI service is NodePort so I can access it outside of the cluster, and the API can be either NodePort or ClusterIP. In the API microservice there is an IP logger that always shows the IP of the node on which the UI is deployed, no matter what I do.</p> <p>So far I have tried 'externalTrafficPolicy: Local' for the NodePort services in combination with setting Nginx headers (X-Forwarded-For and X-Forwarded-Proto) for reverse proxying. Nothing seems to work.</p> <p>Is it possible to get the client IP in an application hosted in Kubernetes?</p> <p>Kubernetes Version: v1.17.3</p> <p>Rancher v2.3.4</p>
<p>You need to use <code>externalTrafficPolicy: Local</code> and then look at the actual <code>REMOTE_ADDRESS</code> or whatever your web framework calls it.</p>
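<p>A minimal sketch of such a service, assuming the UI pods carry the label <code>app: ui</code> (adjust names, labels and ports to your setup):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: ui-nodeport              # hypothetical name
spec:
  type: NodePort
  externalTrafficPolicy: Local   # keep the original client source IP
  selector:
    app: ui                      # placeholder label
  ports:
  - port: 80
    targetPort: 80
</code></pre> <p>With <code>Local</code>, traffic is only handled by pods on the node that received it, so the source IP is not rewritten; Nginx can then pass it on to the API via the X-Forwarded-For header as you are already doing.</p>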
<p>We plan to use AWS EKS to run a stateless application.</p> <p>There is a goal to achieve optimal budget by using spot instances and prefer them to on-demand ones.</p> <p>Per <a href="https://aws.github.io/aws-eks-best-practices/cluster-autoscaling/cluster-autoscaling/#spot-instances" rel="nofollow noreferrer">AWS recommendations</a>, we plan to have two Managed Node Groups: one with on-demand instances, and one with spot instances, plus Cluster Autoscaler to adjust groups size.</p> <p>Now, the problem to solve is achieving two somewhat conflicting requirements:</p> <ul> <li>Prefer spot nodes to on-demand, e.g. run 90% of pods on spot instances and 10% on on-demand ones</li> <li>But still, ensure that some pods always do run within on-demand group, so even in case of massive spot instance drain, there still will be some pods that can process requests</li> </ul> <p>After some research I found following possible approaches to solving it:</p> <p>Approach A: Using <code>preferredDuringSchedulingIgnoredDuringExecution</code> with weights based on Node Group capacity type label. E.g. one <code>preferredDuringSchedulingIgnoredDuringExecution</code> rule with weight 90 would prefer nodes with capacity type <code>spot</code>, and other rule with weight 1 would prefer on-demand ones, e.g.:</p> <pre><code>preferredDuringSchedulingIgnoredDuringExecution: - weight: 90 preference: matchExpressions: - key: eks.amazonaws.com/capacityType operator: In values: - spot - weight: 1 preference: matchExpressions: - key: eks.amazonaws.com/capacityType operator: NotIn values: - spot </code></pre> <p>The downside is that — as I understand — you are not guaranteed to have pods running on least preferred group, as those are just (added) weights, not some sort of exact distribution.</p> <p>Other approach, which in theory could be combined with one above (?) is also using <code>topologySpreadConstraints</code>, e.g.:</p> <pre><code>spec: topologySpreadConstraints: - maxSkew: 20 topologyKey: eks.amazonaws.com/capacityType whenUnsatisfiable: ScheduleAnyway labelSelector: matchLabels: foo: bar </code></pre> <p>Which would distribute pods across nodes with different capacity types, while allowing a skew of, say, 20 pods between them, and probably should (?) be combined with <code>preferredDuringSchedulingIgnoredDuringExecution</code> to achieve the desired effect.</p> <p>How feasible is the approach above? Are those the right tools to achieve the goals? I would very much appreciate any advice on the case!</p>
<p>This is not something the Kubernetes scheduler supports. Weights in affinities are more like score multipliers, and maxSkew is a very general cap on how out of balance things can get, but it does not control the direction of that imbalance.</p> <p>You would have to write something custom AFAIK, or at least I had not seen anything for this when I last went looking. Check out the scheduler extender webhook system for a somewhat easy way to implement it.</p>
<p>I'm planning to have an <em>initcontainer</em> that will handle some crypto stuff and then generate a source file to be sourced by a <em>container</em>. The source file will be dynamically generated, the VARS will be dynamic, this means I will never know the VAR names or it's contents. This also means I cannot use k8s <em>env</em>. The file name will always be the same.</p> <p>I know I can change the Dockerfile from my applications and include an entrypoint to execute a script before running the workload to source the file, but, still, is this the only option? There's no way to achieve this in k8s?</p> <p>My <em>container</em> can mount the dir where the file was created by the <em>initcontainer</em>. But it can't, somehow, source the file?</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: pod-init namespace: default spec: nodeSelector: env: sm initContainers: name: genenvfile image: busybox imagePullPolicy: Always command: [&quot;/bin/sh&quot;] # just an example, there will be a software here that will translate some encrypted stuff into VARS and then append'em to a file args: [&quot;-c&quot;, &quot;echo MYVAR=func &gt; /tmp/sm/filetobesourced&quot;] volumeMounts: - mountPath: /tmp/sm name: tmpdir containers: image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim imagePullPolicy: IfNotPresent name: mypod-cm tty: true volumeMounts: - mountPath: /tmp/sm readOnly: true name: tmpdir volumes: name: tmpdir emptyDir: medium: Memory </code></pre> <p>The step-by-step that I'm thinking would be:</p> <ol> <li><em>initcontainer</em> mounts /tmp/sm and generates a file called /tmp/sm/filetobesourced</li> <li><em>container</em> mounts the /tmp/sm</li> <li><em>container</em> source the /tmp/sm/filetobesourced</li> <li>workload runs using all the vars sourced by the last step</li> </ol> <p>Am I missing something to get the third step done?</p>
<p>Change the <code>command</code> and/or <code>args</code> on the main container to be more like <code>bash -c 'source /tmp/sm/filetobesourced &amp;&amp; exec whatevertheoriginalcommandwas'</code>.</p>
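<p>Applied to the pod from the question, that would look roughly like this (untested sketch; <code>whatevertheoriginalcommandwas</code> stands in for whatever the image normally runs):</p> <pre><code>  containers:
  - name: mypod-cm
    image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
    imagePullPolicy: IfNotPresent
    command: [&quot;/bin/bash&quot;, &quot;-c&quot;]
    args: [&quot;source /tmp/sm/filetobesourced &amp;&amp; exec whatevertheoriginalcommandwas&quot;]
    volumeMounts:
    - mountPath: /tmp/sm
      readOnly: true
      name: tmpdir
</code></pre>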
<p>When I run gdb on my binary inside kubernetes pod in container it starts loading symbol after that it suddenly gets terminated with SIGTERM and exit code 137. I checked for describe pod it shows reason Error. I have added ptrace capabilities in yaml. Can someone help me with this.</p>
<p>Exit code 137 means that the process was killed by signal 9 (137=128+9). Most likely reason is that container was running out of memory and gdb was killed by OOM Killer. Check <code>dmesg</code> output for any messages from OOM Killer.</p>
<h3>Context</h3> <p>I am running an application (Apache Airflow) on EKS, that spins up new workers to fulfill new tasks. Every worker is required to spin up a new pod. I am afraid to run out of memory and/or CPU when there are several workers being spawned. My objective is to trigger auto-scaling.</p> <h3>What I have tried</h3> <p>I am using Terraform for provisioning (also happy to have answers that are not in Terraform, which i can conceptually transform to Terraform code).</p> <p>I have setup a fargate profile like:</p> <pre class="lang-sh prettyprint-override"><code># Create EKS Fargate profile resource &quot;aws_eks_fargate_profile&quot; &quot;airflow&quot; { cluster_name = module.eks_cluster.cluster_id fargate_profile_name = &quot;${var.project_name}-fargate-${var.env_name}&quot; pod_execution_role_arn = aws_iam_role.fargate_iam_role.arn subnet_ids = var.private_subnet_ids selector { namespace = &quot;fargate&quot; } tags = { Terraform = &quot;true&quot; Project = var.project_name Environment = var.env_name } } </code></pre> <p>My policy for auto scaling the nodes:</p> <pre class="lang-sh prettyprint-override"><code># Create IAM Policy for node autoscaling resource &quot;aws_iam_policy&quot; &quot;node_autoscaling_pol&quot; { name = &quot;${var.project_name}-node-autoscaling-${var.env_name}&quot; policy = data.aws_iam_policy_document.node_autoscaling_pol_doc.json } # Create autoscaling policy data &quot;aws_iam_policy_document&quot; &quot;node_autoscaling_pol_doc&quot; { statement { actions = [ &quot;autoscaling:DescribeAutoScalingGroups&quot;, &quot;autoscaling:DescribeAutoScalingInstances&quot;, &quot;autoscaling:DescribeLaunchConfigurations&quot;, &quot;autoscaling:DescribeTags&quot;, &quot;autoscaling:SetDesiredCapacity&quot;, &quot;autoscaling:TerminateInstanceInAutoScalingGroup&quot;, &quot;ec2:DescribeLaunchTemplateVersions&quot; ] effect = &quot;Allow&quot; resources = [&quot;*&quot;] } } </code></pre> <p>And finally a (just a snippet for brevity):</p> <pre class="lang-sh prettyprint-override"><code># Create EKS Cluster module &quot;eks_cluster&quot; { cluster_name = &quot;${var.project_name}-${var.env_name}&quot; # Assigning worker groups worker_groups = [ { instance_type = var.nodes_instance_type_1 asg_max_size = 1 name = &quot;${var.project_name}-${var.env_name}&quot; } ] } </code></pre> <h3>Question</h3> <p>Is increasing the <code>asg_max_size</code> sufficient for auto scaling? I have a feeling that I need to set something where along the lines of: &quot;When memory exceeds X do y&quot; but I am not sure.</p> <p>I don't have so much experience with advanced monitoring/metrics tools, so a somewhat simple solution that does basic auto-scaling would be the best fit for my needs = )</p>
<p>This is handled by a tool called cluster-autoscaler. You can find the EKS guide for it at <a href="https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html</a> or the project itself at <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler</a></p>
<p><strong>The scenario</strong>: Selenium is a browser automation tool that can be run in a K8s cluster. It consists of Selenium-hub (master) and selenium-nodes (workers), where the hub receives test requests and creates nodes (pods) on demand (dynamically) to run the test case; after execution of a test case the runner node (pod) gets thrown away. Also, Selenium supports a live preview of the test being run by the runner, and a client (outside of K8s) can watch this live preview. There is a small chance that, while a client is watching the live preview of a test and it ends, another pod gets created with the same IP that the client is still watching. This is a problem because the client may continue watching the run of another test: the client's software is not aware of the length of the run and may still fetch the traffic with the same user/pass/IP combination.</p> <p><strong>The question</strong>: is it possible to change the way Kubernetes assigns IP addresses? Let's say the first pod to be created gets IP 1.1.1.1, the second one gets 1.1.1.2 and the third 1.1.1.3, and before the fourth request the first pod dies and its IP is freed; then the fourth pod would be created with IP 1.1.1.1.</p> <p>What I am trying to do is tell Kubernetes to reuse a previously assigned IP after some time, or change the sequence of IP assignment, or something similar.</p> <p>Any ideas?</p>
<p>Technically: yes you can either configure or edit-the-code of your CNI plugin (or write one from scratch).</p> <p>In practice: I know of none that work quite that way. I know Calico does allow having multiple IP pools so you could have a small one just for Selenium pods but I think it still attempts to minimize reuse. But check the docs for your CNI plugin and see what it offers.</p>
<p>I am unable to delete ‘test/deployment/sandbox-v2/tmp/dns’ after deleting the github repository 'test'. I am unable to reclone it in my CentOS system. Even after recloning in other folder,running site.yml file of sandbox fails at logs. So i'm trying to completely remove test repository and reclone it for fresh run. I have tried all ways and commands to remove it, it gets removed as well but then gets recreated automatically with this file mentioned. Any clue how to completely remove it and clone a fresh repo.</p>
<p>It's hard to say but if this was bind-mounted into a container and that container was running its process as root (uid 0) then files it created would be owned by uid 0 even outside the container. Gotta get your <code>sudo</code> on.</p>
<p>I have a k8s cluster in DigitalOcean.</p> <p>I would like to expose some apps to the internet.</p> <p>Do I need a DO load balancer for every app? It costs $10/month, which gets expensive when I want to expose 5 apps.</p> <p>Is there any workaround without an external DO load balancer?</p>
<p>Copying this from the last time someone asked this a few days ago (which has now been deleted):</p> <p>You can use a node port service (annoying because you'll have to use random high ports instead of 80/443) or you can switch your ingress controller to listen on the host network directly (allows use of 80/443 but potentially bigger security impact if your ingress controller is compromised).</p>
<p>I have the following to exec into a pod to run a command:</p> <pre><code>fmt.Println("Running the command:", command)
parameterCodec := runtime.NewParameterCodec(scheme)
req.VersionedParams(&amp;corev1.PodExecOptions{
    Command: strings.Fields(command),
    Stdin:   stdin != nil,
    Stdout:  true,
    Stderr:  true,
    TTY:     false,
}, parameterCodec)
</code></pre> <p>I'm looking to run the same command but adding the <code>Container</code> option, which is a string. I'm having a hard time figuring out how I can list all the containers in a pod.</p> <p>Thanks</p>
<p>Got it:</p> <pre><code>pod, err := clientset.CoreV1().Pods("default").Get(podname, metav1.GetOptions{})
if err != nil {
    return "", err
}
// print every container name in the pod
for _, c := range pod.Spec.Containers {
    fmt.Println(c.Name)
}
</code></pre>
<p>I am managing my k8s cluster using terraform and has tiller version 0.10.4,</p> <p>Now I made some changes in my terraform file. so when I run <strong>terraform init</strong> I am getting following error.</p> <p><em>error initializing local helm home: Looks like &quot;https://kubernetes-charts.storage.googleapis.com&quot; is not a valid chart repository or cannot be reached: Failed to fetch <a href="https://kubernetes-charts.storage.googleapis.com/index.yaml" rel="nofollow noreferrer">https://kubernetes-charts.storage.googleapis.com/index.yaml</a> : 403 Forbidden</em></p> <p>So I change the stable url in my terraform file, and now it looks something like</p> <pre><code>data &quot;helm_repository&quot; &quot;stable&quot; { name = &quot;stable&quot; url = &quot;https://charts.helm.sh/stable&quot; } provider &quot;kubernetes&quot; { config_path = &quot;kubeconfig.yaml&quot; } provider &quot;helm&quot; { install_tiller = true version = &quot;0.10.4&quot; service_account = &quot;tiller&quot; namespace = &quot;kube-system&quot; kubernetes { config_path = &quot;kubeconfig.yaml&quot; } } </code></pre> <p>But I am still getting the same error.</p>
<p>The old Google based Chart storage system has been decommissioned. But also Helm 2 is no longer supported at all and Helm 3 does not use Tiller. You can find a static mirror of the old charts repo on Github if you go poking, but you need to upgrade to Helm 3 anyway so just do that instead.</p>
<p>Is it alright to use Google Compute Engine virtual machines for MySQL DB?</p> <p>db-n1-standard-2 costs around $97 DB for single Clould SQL instance and replication makes it double.</p> <p>So I was wondering if its okay to use <code>n1-standard-2</code> which costs around $48 and the applications will be in Kubernetes cluster and the pods would connect to Compute Engine VM for DB connection. Would the pods be able to connect to Compute Engine VM?</p> <p>Also is it true that Google doesn't charge <code>GKE Cluster Management Fees</code> when using Zonal Kubernetes cluster? When I check with calculator it shows they don't charge management fees.</p>
<p>This is entirely up to your needs. If you want to be on call for DB failover and replication management, it will definitely be cheaper to run it yourself. Zalando has a lot of Postgres-on-Kubernetes automation that is very good, but at the end of the day who do you want waking up at 2AM if something breaks. I will never run another production SQL database myself as long as I live, it's just always worth the money.</p>
<p>Running Pods with WorkloadIdentity makes an Google Credential error when auto scaling started.</p> <p>My application is configured with WorkloadIdentity to use Google Pub/Sub and also set HorizontalPodAutoscaler to scale the pods up to 5 replicas.</p> <p>The problem arises when an auto scaler create replicas of the pod, GKE's metadata server does not work for few seconds then after 5 to 10 seconds no error created.</p> <p>here is the error log after a pod created by auto scaler.</p> <pre><code>WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable onattempt 1 of 3. Reason: timed out WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable onattempt 2 of 3. Reason: timed out WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable onattempt 3 of 3. Reason: timed out WARNING:google.auth._default:Authentication failed using Compute Engine authentication due to unavailable metadata server Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://cloud.google.com/docs/authentication/getting-started </code></pre> <p>what exactly is the problem here?</p> <p>When I read the doc from here <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#limitations" rel="nofollow noreferrer">workload identity docs</a></p> <pre><code>&quot;The GKE metadata server takes a few seconds to start to run on a newly created Pod&quot; </code></pre> <p>I think the problem is related to this issue but is there a solution for this kind situation?</p> <p>Thanks</p>
<p>There is no specific solution other than to ensure your application can cope with this. Kubernetes uses DaemonSets to launch per-node apps like the metadata intercept server but as the docs clearly tell you, that takes a few seconds (noticing the new node, scheduling the pod, pulling the image, starting the container).</p> <p>You can use an initContainer to prevent your application from starting until some script returns, which can just try to hit a GCP API until it works. But that's probably more work than just making your code retry when those errors happen.</p>
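<p>If you do go the initContainer route, a rough sketch (image is a placeholder; the endpoint is the standard GCE/GKE metadata token URL; tune the retry loop to taste) could be:</p> <pre><code>  initContainers:
  - name: wait-for-gke-metadata        # hypothetical name
    image: curlimages/curl:8.5.0       # any image with curl works
    command:
    - sh
    - -c
    - |
      until curl -sf -H &quot;Metadata-Flavor: Google&quot; \
        http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token &gt; /dev/null; do
        echo &quot;waiting for GKE metadata server...&quot;
        sleep 2
      done
</code></pre>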
<p>I've deployed a Kubernetes cluster on my local machine. The default number of allocatable pods per node in Kubernetes is 110. I want to increase the number of pods per node in my cluster. Can anyone let me know if it's possible? If yes, how can we do it?</p>
<p>Yes, you can control this with the <code>max-pods</code> option to the Kubelet, either via a command line flag or Kubelet config file option. But beware that we don't test as much outside the normal scaling targets so you might be able to break things.</p>
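<p>For example, in a kubelet config file the setting looks like this (250 is only an illustration; pick a number you have actually tested):</p> <pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 250
</code></pre> <p>or the equivalent command line flag <code>--max-pods=250</code>.</p>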
<p>I am trying to mount an S3 bucket into a Kubernetes pod using s3fs-fuse. My S3 bucket is protected by IAM roles and I don't have Access Keys and Secret Keys to access the S3 bucket. I know how to access an S3 bucket from a Kubernetes pod using Access &amp; Secret Keys, but how do we access an S3 bucket using IAM roles?</p> <p>Does anyone have a suggestion on how to do this?</p>
<p>You use the IRSA system, attaching an IAM role to a Kubernetes service account and then attaching that K8s SA to your pod. See <a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html</a> for a starting point.</p>
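<p>The rough shape of it (role ARN and names are placeholders) is an annotated ServiceAccount plus a pod that uses it:</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-access                 # hypothetical name
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-s3-role  # placeholder ARN
---
apiVersion: v1
kind: Pod
metadata:
  name: s3fs-pod                  # hypothetical name
spec:
  serviceAccountName: s3-access
  containers:
  - name: app
    image: my-s3fs-image:latest   # placeholder image
</code></pre> <p>The AWS SDKs inside the pod pick up the role automatically via the injected web identity token; whether your particular s3fs-fuse build can consume those credentials directly depends on its version, so check its documentation.</p>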
<p>I have a kubectl installed on the my local system. I connect to K8S running on GCP. Where does my k8s master server run ? On my local machine or on GCP?</p>
<p>In GCP. If you mean GKE specifically, it is running on special magic internal GKE systems you can't see directly.</p>
<p>I guess I am just asking for confirmation really, as we had some major issues in the past with our Elasticsearch cluster on Kubernetes.</p> <p>Is it fine to add a pod affinity rule to an already running deployment? This is a live production Elasticsearch cluster and I want to pin the Elasticsearch pods to specific nodes with large storage. I kind of understand Kubernetes but not really Elasticsearch, so I don't want to cause any production issues/outages as there is no one around that could really help to fix it.</p> <p>Currently I'm running 6 replicas but want to reduce to 3 that run on 3 worker nodes with plenty of storage. I have labelled my 3 worker nodes with the label 'priority-elastic-node=true'.</p> <p>This is the podAffinity I will add to my YAML file and apply:</p> <pre><code> podAffinity:
   preferredDuringSchedulingIgnoredDuringExecution:
   - labelSelector:
       matchExpressions:
       - key: priority-elastic-node
         operator: In
         values:
         - "true"
     topologyKey: "kubernetes.io/hostname"
</code></pre> <p>What I assume will happen is nothing after I apply it, but then when I start scaling down the Elasticsearch replicas, the remaining pods stay on the preferred worker nodes.</p>
<p>Any change to the pod template will cause the deployment to roll all pods. That includes a change to those fields. So it’s fine to change, but your pods will be restarted in a rolling fashion. This should be fine as long as your replication settings are cromulent.</p>
<p>I'm trying to run the command &quot;<strong>oc apply -f basic-ocp-demo.yaml</strong>&quot; in order to <strong>create services and deployment</strong> in my <strong>openshift</strong>, but I get an error regarding &quot;<strong>unable to decode &quot;basic-ocp-demo.yaml &quot;: no kind &quot;Deployment&quot; is registered for version &quot;v1beta1 &quot;</strong></p> <p><strong>My file look like :</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx-service spec: type: NodePort selector: app: nginx-app type: front-end ports: - port: 80 targetPort: 80 nodePort: 30012 --- kind: Deployment apiVersion: v1beta1 metadata: name: nginx-deployment labels: app: nginx-app type: front-end spec: replicas: 3 selector: matchLabels: app: nginx-app type: front-end template: metadata: labels: app: nginx-app type: front-end spec: containers: - name: nginx-container image: nginx:latest </code></pre> <p><strong>oc apply command output:</strong></p> <pre><code>[root@localhost openshift]# oc apply -f basic-ocp-demo.yaml service &quot;nginx-service&quot; created error: unable to decode &quot;basic-ocp-demo.yaml&quot;: no kind &quot;Deployment&quot; is registered for version &quot;v1beta1&quot; </code></pre> <p>Seems that my service nginx was created but the Deployment kind has no registration for version &quot;<strong>v1bet1</strong>&quot;, I tried to put many other version but the same problem:</p> <p>I've tried with &quot;<strong>apps/v1</strong>&quot;, the same issue:</p> <pre><code>error: unable to decode &quot;basic-ocp-demo.yaml&quot;: no kind &quot;Deployment&quot; is registered for version &quot;apps/v1&quot; </code></pre> <p>I've made a curl where can be seen all the apis available but I don't know what to choose from that list for my Deployment Kind.</p> <p>Here is my apis list : <a href="https://92.42.107.195:8443/apis" rel="nofollow noreferrer">https://92.42.107.195:8443/apis</a></p>
<p>It would appear you are running an extremely old version of OpenShift/Kubernetes. So all you have available is <code>apps/v1alpha1</code>. You should upgrade ASAP.</p>
<p>When am building the image path, this is how I want to build the image Path, where the docker registry address, I want to fetch it from the configMap.</p> <p>I can't hard code the registry address in the values.yaml file because for each customer the registry address would be different and I don't want to ask customer to enter this input manually. These helm charts are deployed via argoCD, so fetching registryIP via shell and then invoking the helm command is also not an option.</p> <p>I tried below code, it isn't working because the env variables will not be available in the context where image path is present.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: {{ template &quot;helm-guestbook.fullname&quot; . }} spec: template: metadata: labels: app: {{ template &quot;helm-guestbook.name&quot; . }} release: {{ .Release.Name }} spec: containers: - name: {{ .Chart.Name }} {{- if eq .Values.isOnPrem &quot;true&quot; }} image: {{ printf &quot;%s/%s:%s&quot; $dockerRegistryIP .Values.image.repository .Values.image.tag }} {{- else }} env: - name: DOCKER_REGISTRY_IP valueFrom: configMapKeyRef: name: docker-registry-config key: DOCKER_REGISTRY_IP </code></pre> <p>Any pointers on how can I solve this using helm itself ? Thanks</p>
<p>Check out the <code>lookup</code> function, <a href="https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function" rel="nofollow noreferrer">https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function</a></p> <p>Though this could get very complicated very quickly, so be careful to not overuse it.</p>
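<p>A sketch of how that could look in the deployment template, assuming the ConfigMap from the question already exists in the release namespace (note that <code>lookup</code> returns an empty map during <code>helm template</code> or <code>--dry-run</code>, so guard for that in real charts):</p> <pre><code>{{- $cm := lookup &quot;v1&quot; &quot;ConfigMap&quot; .Release.Namespace &quot;docker-registry-config&quot; }}
image: {{ printf &quot;%s/%s:%s&quot; $cm.data.DOCKER_REGISTRY_IP .Values.image.repository .Values.image.tag }}
</code></pre>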
<p>I am trying to pull values out of a kubernetes helm chart values.yaml that has a number as one of the keys and I'm getting a <code>parse error unexpected ".1" in operand</code>. How can I access values that contain a number in its path? </p> <p>Let's say my values.yaml looks like this:</p> <pre><code>global: foo: bar1: 1 1: bar2: 2 </code></pre> <p>Using helm charts, I can access <code>bar1</code> by typing: <code>{{ .Values.global.foo.bar1 }}</code>. </p> <p>If I try to do the same with accessing <code>bar2</code> by typing: <code>{{ .Values.global.1.bar2 }}</code> I receive a parse error. It doesn't get better if I try to use brackets <code>{{ .Values.global[1].bar2 }}</code>, quotes <code>{{ .Values.global."1".bar2 }}</code>, or brackets and quotes: <code>{{ .Values.global["1"].bar2 }}</code>. </p> <p>I know that helm charts utilize golang templates under the hood, is there some kind of template I could create to extract this information?</p> <p>Thanks so much!</p>
<p>The easy option is to just quote it in your values file so it's a string, but:</p> <pre><code>{{ index .Values.global 1 "bar2" }} </code></pre> <p>is probably what you want.</p>
<p>In my application with docker-compose I have 2 container, 1 nginx and 1 python script crontab that update some files in nginx/html folder. With docker-compose when I declare </p> <pre><code>volumes: - shared-volume:/usr/share/nginx/html/assets/xxx:ro </code></pre> <p>the initial files in the nginx images are copied to the shared volume.</p> <p>Now I'm trying to move the application to k8s, but when I use shared volume I see that initial files in nginx/html are missing. </p> <p>So the question is, is it possible to copy initial files from my nginx images to the shared volume? How? </p> <p>____________________________EDIT______________________________________</p> <p>To clarify, I'm new to k8s, With VM we usually run script that update an nginx assets folder. With docker-compose I use something like this:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>version: '3.7' services: site-web: build: . image: "site-home:1.0.0" ports: - "80:80" volumes: - v_site-home:/usr/share/nginx/html/assets/:ro site-cron: build: ./cronScript image: "site-home-cron:1.0.0" volumes: - v_site-home:/app/my-assets volumes: v_site-home: name: v_site-home</code></pre> </div> </div> </p> <p>Now I'm starting to write a deployment (with persistent volume? Because as I understand even if there is a persistent volume a stateful set is not useful in this case) to convert my docker-compose to k8s. Actually we cannot use any public cloud for security policy (data must be in our country and now there's no big company with this option). So the idea is to run vanilla k8s in multiple bare metal server and start migration with very simple application like this. I tried with the 2 docker, replica:1 and an empty volume in a single pod. In this case I see that initially the application has the nginx folder empty, and I need to wait the crontab update to see my results. So this is the first problem.</p> <p>Now I read your answer and obviously I've other doubts. Is it better to split the pod, so 1 pod for container? A deployment with persistent volume is the way? In this case, I've the old problem, how to see initial nginx assets files? Thank you so much for the help!</p>
<p>This generally requires an initContainer which runs cp to copy the initial files into the shared volume. It’s not a great solution but it gets the job done. </p>
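<p>A rough sketch of that pattern, reusing the image name from the question (everything else is illustrative): an emptyDir shared between an initContainer that copies the baked-in assets and the nginx container that serves them.</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: site-web                 # illustrative; normally this is a Deployment's pod template
spec:
  initContainers:
  - name: seed-assets
    image: site-home:1.0.0       # the image that already contains the assets
    command: [&quot;sh&quot;, &quot;-c&quot;, &quot;cp -r /usr/share/nginx/html/assets/. /work/&quot;]
    volumeMounts:
    - name: assets
      mountPath: /work
  containers:
  - name: nginx
    image: site-home:1.0.0
    volumeMounts:
    - name: assets
      mountPath: /usr/share/nginx/html/assets
  volumes:
  - name: assets
    emptyDir: {}
</code></pre> <p>The cron container can mount the same volume in the same pod (or, if it must stay in a separate pod, a shared PersistentVolumeClaim) to keep updating the files.</p>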
<p>The nodeSelectorTerms in a PersistentVolume help the volume identify which node to bind to. For example:</p> <pre><code>nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - mynodename </code></pre> <p>means that we only want to bind to a node of the name <code>mynodename</code>.</p> <p>I would like to replace <code>mynodename</code> with a variable defined in a configMap. For example, the following syntax is what I was imagining, but it does not work:</p> <pre><code>nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - valueFrom: configMapKeyRef: name: my-configmap key: MYNODENAME </code></pre> <p>where <code>my-configmap</code> is a configmap and <code>MYNODENAME</code> is a variable in it.</p> <p>Can I achieve this somehow?</p>
<p>This is not supported. Apparently I need more words than just that.</p>
<p><strong>Context</strong> :</p> <p>We have a Apache Nifi cluster deployed in Kubernetes as Stateful sets, and a volume claim template is used for Nifi repositories.</p> <p><a href="https://github.com/cetic/helm-nifi" rel="nofollow noreferrer">Nifi helm charts we are using</a></p> <p>There is a use case where file processing is done by Nifi. So the file feeds are put into a shared folder and nifi would read it from the shared folder. When multiple nodes of Nifi is present all three would read from the shared folder.</p> <p>In a non kubernetes environment we use NFS file share.</p> <p>In AWS we use AWS S3 for storage and Nifi has processors to read from S3.</p> <p><strong>Problem</strong> :</p> <p>Nifi is already deployed as a statefulset and use volume claim template for the storage repository. How can we mount this NFS share for file feed to all nifi replicas.</p> <p>or in other word putting the question in a generic manner,</p> <p><em><strong>How can we mount a single NFS shared folder to all statefulset replicas ?</strong></em></p> <p><strong>Solutions tried</strong></p> <p>We tried to link separate pvc claimed folders to the nfs share , but looks like a work around.</p> <p>Can somebody please help. Any hints would be highly appreciated.</p>
<p>Put it in the pod template like normal. NFS is a &quot;ReadWriteMany&quot; volume type so you can create one PVC and then use it on every pod simultaneously. You can also configure NFS volumes directly in the pod data but using a PVC is probably better.</p> <p>It sounds like what you have is correct :)</p>
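<p>A minimal sketch, with the NFS server address, export path and size as placeholders:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: nifi-feed-nfs             # hypothetical name
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.5              # placeholder NFS server
    path: /exports/nifi-feed      # placeholder export
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nifi-feed
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: &quot;&quot;
  resources:
    requests:
      storage: 100Gi
</code></pre> <p>Then reference the <code>nifi-feed</code> claim in the StatefulSet's pod template under <code>volumes</code>/<code>volumeMounts</code>; every replica mounts the same share.</p>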
<p>In my company, we have an internal Security Token Service consumed by all web apps to validate the STS token issued by the company's central access management server (e.g. BigIP/APM). Therefore the same endpoint for the token validation REST API has to be repeatedly set as an environment variable in the Deployment Configuration for each individual web app (OpenShift project). So is an ES256 public key used by each web app for validating JWT tokens.</p> <p>I'm wondering whether there is a way to set up a global environment variable or ConfigMap or anything else in OpenShift for this kind of <strong>common</strong>, <strong>shared</strong> settings per cluster, such that they are accessible by default to all web apps running in all pods in the cluster? Of course, each individual Deployment Config should be able to override these default values from the global settings at will.</p>
<p>Nothing built in. You could build that yourself with some webhooks and custom code. Otherwise you need to add the <code>envFrom</code> pointing at a Secret and/or ConfigMap to each pod template and copy that Secret/ConfigMap to all namespaces that need it (kubed can help with that part at least).</p>
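<p>The per-app part is small; for example, assuming the shared values live in a ConfigMap named <code>sts-common</code> that has been copied into the app's namespace:</p> <pre><code>    spec:
      containers:
      - name: api
        image: my-web-app:latest   # placeholder image
        envFrom:
        - configMapRef:
            name: sts-common       # hypothetical shared ConfigMap
        # - secretRef:
        #     name: sts-common-secrets   # if the key material lives in a Secret
</code></pre>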
<p>I have an nginx pod in the default namespace and a ClusterIP service exposing the pod.</p> <pre><code>$ kubectl run nginx-pod --image=nginx $ kubectl expose po nginx-pod --name=nginx-service --port=8080 --target-port=80 --type=ClusterIP </code></pre> <p>I can access the service via its internal IP from inside the cluster.</p> <pre><code>$ kubectl get svc nginx-service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-service ClusterIP 10.100.5.218 &lt;none&gt; 8080/TCP 2m49s $ wget -O- 10.100.5.218:8080 </code></pre> <p>--&gt; 200 OK</p> <p>I can access the service by name from inside a pod.</p> <pre><code>$ kubectl run tmp -it --rm --image=busybox --command -- /bin/sh -c 'wget -O- nginx-service:8080' </code></pre> <p>--&gt; 200 OK</p> <p>However, why can't I access the service by name from outside a pod?</p> <pre><code>$ wget -O- nginx-service:8080 </code></pre> <p>or</p> <pre><code>$ wget -O- nginx-service.default.svc.cluster.local:8080 </code></pre> <p>--&gt; <code>wget: unable to resolve host address ‘nginx-service’</code></p>
<p>The magic service hostnames (and pod hostnames) are provided by the &quot;cluster DNS&quot; service, usually CoreDNS these days. A resolv.conf aimed at the internal CoreDNS is automatically injected into all pods. But I'm guessing by &quot;outside of a pod&quot; you mean on the underlying host which has no such entries in its resolv.conf.</p>
<p>I have two use cases where teams only want Pod A to end up on a Node where Pod B is running. They often have many Copies of Pod B running on a Node, but they only want one copy of Pod A running on that same Node. </p> <p>Currently they are using daemonsets to manage Pod A, which is not effective because then Pod A ends up on a lot of nodes where Pod B is not running. I would prefer not to restrict the nodes they can end up on with labels because that would limit the Node capacity for Pod B (ie- if we have 100 nodes and 20 are labeled, then Pod B's possible capacity is only 20).</p> <p>In short, how can I ensure that one copy of Pod A runs on any Node with at least one copy of Pod B running?</p>
<p>The current scheduler doesn’t really have anything like this. You would need to write something yourself.</p>
<p>I have EKS cluster where my application code resides in pods (container). Client want this cluster to be hosted on their AWS cloud. How do I make sure that my code will be secure in client's environment. How do I make sure that he cannot copy or has no access to the code?</p>
<p>You can't. At most you can compile and obfuscate it with whatever tools your language provides. This is generally pointless though, decompilers are very very good these days.</p>
<p>I have traefik installed via the helm chart.</p> <p>I have a domain: <strong>example.com</strong></p> <p>It has some blog posts.</p> <p>I now created a subdomain: <strong>subdomain.example.com</strong></p> <p>I have the list of my blogs urls:</p> <pre><code>/blog-1 /blog-2 </code></pre> <p>Both the base domain and the subdomain are on the same cluster.</p> <p>I want to have 301 redirects so that if someone tries to visit:</p> <pre><code>example.com/blog-1 </code></pre> <p>they would be redirected to:</p> <pre><code>subdomain.example.com/blog-1 </code></pre> <p>I do not want to direct with a wildcard just with my list of blog urls.</p> <p>Thanks</p> <p>Here is my middleware for redirect to https</p> <pre><code>apiVersion: traefik.containo.us/v1alpha1 kind: Middleware metadata: name: https-only namespace: exmaple spec: redirectScheme: scheme: https permanent: true </code></pre> <p>a redirect is:</p> <pre><code>apiVersion: traefik.containo.us/v1alpha1 kind: Middleware metadata: name: test-redirectregex spec: redirectRegex: regex: &quot;https://example.com&quot; replacement: &quot;https://subdomain.example.com&quot; permanent: true </code></pre> <p>can I have multiple redirectRegex in the same middleware? I would then just have lots of them to redirect each url</p>
<p>Just one redirect per middleware, but you can have as many middleware as you want.</p> <p>But in this case you can use the regex:</p> <pre><code>  redirectRegex:
    regex: &quot;https://example.com/(blog-1|blog-2|whatever)&quot;
    replacement: &quot;https://subdomain.example.com/$1&quot;
</code></pre>
<p>I want to know if the apiserver_request_duration_seconds accounts the time needed to transfer the request (and/or response) from the clients (e.g. kubelets) to the server (and vice-versa) or it is just the time needed to process the request internally (apiserver + etcd) and no communication time is accounted for ?</p> <p>As a plus, I also want to know where this metric is updated in the apiserver's HTTP handler chains ?</p>
<p>How long API requests are taking to run. Whole thing, from when it starts the HTTP handler to when it returns a response.</p>
<p>I am fairly new to Kubernetes and learning Kubernetes deployments from scratch. For a microservice-based project that I am working on, each microservice has to authenticate to the auth server with its own client-id and client-secret before requesting any information (JWT). These ids and secrets are required for each service and need to be in its environment variables. Initially the auth service will generate those ids and secrets via database seeds. What is the best way in the world of Kubernetes to automatically set these values in the environment of a pod deployment before pod creation?</p>
<p>Depends on how automatic you want it to be. A simple approach would be an initContainer to provision a new token, put that in a shared volume file, and then an entrypoint script in the main container which reads the file and sets the env var.</p> <p>The problem with that is authenticating the initContainer is hard. The big hammer solution would be to write a custom operator to manage this but if you're new to Kubernetes that's going to be super hard and probably overkill anyway.</p>
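<p>The simple variant looks roughly like this (all names and the provisioning command are made up; the fetcher is whatever talks to your auth service):</p> <pre><code>spec:
  initContainers:
  - name: fetch-credentials
    image: my-credential-fetcher:latest    # hypothetical image
    command: [&quot;sh&quot;, &quot;-c&quot;, &quot;fetch-creds &gt; /creds/client.env&quot;]   # hypothetical tool
    volumeMounts:
    - name: creds
      mountPath: /creds
  containers:
  - name: app
    image: my-service:latest               # hypothetical image
    command: [&quot;sh&quot;, &quot;-c&quot;, &quot;. /creds/client.env &amp;&amp; exec my-service&quot;]
    volumeMounts:
    - name: creds
      mountPath: /creds
  volumes:
  - name: creds
    emptyDir: {}
</code></pre>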
<p>I'm running a k8 cluster on Docker for Mac. To allow a connection from my database client to my mysql pod, I use the following command <code>kubectl port-forward mysql-0 3306:3306</code>. It works great, however a few hours later I get the following error <code>E0201 18:21:51.012823 51415 portforward.go:233] lost connection to pod</code>.</p> <p>I check the actual mysql pod, and it still appears to be running. This happens every time I run the <code>port-forward</code> command. </p> <p>I've seen the following answer here: <a href="https://stackoverflow.com/questions/47484312/kubectl-port-forwarding-timeout-issue">kubectl port forwarding timeout issue</a> and the solution is to use the following flag <code>--streaming-connection-idle-timeout=0</code> but the flag is now deprecated. </p> <p>So following on from there, It appears that I have to set that parameter via a kubelet config file (<a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/" rel="noreferrer">config file</a>)? I'm unsure on how I could achieve this as Docker for Mac runs as a daemon and I don't manually start the cluster. </p> <p>Could anyone send me a code example or instructions as to how i could configure <code>kubectl</code> to set that flag so my port forwarding won't have timeouts?</p>
<p>Port forwards are generally for short term debugging, not “hours”. What you probably want is a NodePort type service which you can then connect to directly.</p>
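<p>A sketch of such a service for the MySQL case, assuming the pods carry the label <code>app: mysql</code>:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql-external        # hypothetical name
spec:
  type: NodePort
  selector:
    app: mysql                # placeholder label
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30306           # any free port in the 30000-32767 range
</code></pre> <p>With Docker for Mac the NodePort should then be reachable from your database client at <code>localhost:30306</code>, with no port-forward process to drop.</p>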
<p>I am trying to configure kubernetes cluster but as per blog it's telling me to disable SELinux. Is there any specific reason for it?</p>
<p>In theory you could write all the needed policies for it to work. But the SELinux subsystem doesn’t really understand namespaces and only barely understands containers. So if you’re already running a minimalist host OS it gets you very little added security for a great deal of complexity, so most people skip it.</p>
<p>I have an application which makes use of RabbitMQ messages - it sends messages. Other applications can react on these messages but they need to know which messages are available on the system and what they semantically mean.</p> <p>My message queuing system is RabbitMQ and RabbitMQ as well as the applications are hosted and administered using Kubernetes.</p> <p>I am aware of</p> <ul> <li><a href="https://kubemq.io/" rel="nofollow noreferrer">https://kubemq.io/</a>: That seems to be an alternative to RabbitMQ?</li> <li><a href="https://knative.dev/docs/eventing/event-registry/" rel="nofollow noreferrer">https://knative.dev/docs/eventing/event-registry/</a> also an alternative to RabbitMQ? but with a meta-layer approach to integrate existing event sources? The documentation is not clear for me. </li> </ul> <p>Is there a general-purpose "MQ-interface service availabe, a solution, where I can register which messages are sent by an application, how the payload is technically and semantically set up, which serialization format is used and under what circumstances errors will be sent?</p> <p>Can I do this in Kubernetes YAML-files?</p>
<p>RabbitMQ does not have this in any kind of generic fashion so you would have to write it yourself. Rabbit messages are just a series of bytes, there is no schema registry.</p>
<p>For multi tenancy support, I would like to create like a datastore concept limiting the storage per tenant across all namespaces in the tenant. Kubernetes website <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/#storage-resource-quota" rel="nofollow noreferrer">[1]</a> says "given namespace", but I want to be able to set resource quota based on storage class but not limited to "per namespace". How can I do that ?</p>
<p>Not directly. The quota system is per namespace only. But of course you can make a validating webhook yourself which implements whatever logic you want.</p>
<p>What would be the behavior of a multi node kubernetes cluster if it only has a single master node and if the node goes down?</p>
<p>The control plane would be unavailable. Existing pods would continue to run, however calls to the API wouldn't work, so you wouldn't be able to make any changes to the state of the system. Additionally self-repair systems like pods being restarted on failure would not happen since that functionality lives in the control plane as well.</p>
<p>I am trying to ensure that a pod is deleted before proceeding with another Kubernetes Operation. So the idea I have is to Call the Pod Delete Function and then call the Pod Get Function.</p> <pre><code>// Delete Pod err := kubeClient.CoreV1().Pods(tr.namespace).Delete(podName, &amp;metav1.DeleteOptions{}) if err != nil { .... } pod, err := kubeClient.CoreV1().Pods(tr.namespace).Get(podName, &amp;metav1.DeleteOptions{}) // What do I look for to confirm that the pod has been deleted? </code></pre>
<p><code>err != nil &amp;&amp; errors.IsNotFound(err)</code></p> <p>Also this is silly and you shouldn't do it.</p>
<p>My team is using the Python3 kubernetes package in some of our code. What are the exceptions that can be raised by a call to <code>kubernetes.config.load_kube_config</code>?</p>
<p>On top of the standard errors that can be raised at any point (MemoryError, OSError, KeyboardInterrupt, etc), it mostly uses its own ConfigException class. Just go read the code for yourself <a href="https://github.com/kubernetes-client/python-base/blob/master/config/kube_config.py" rel="nofollow noreferrer">https://github.com/kubernetes-client/python-base/blob/master/config/kube_config.py</a></p>
<p>I am attempting to use the k8s API inside the cluster to update a deployment within the namespace home.</p> <p>ClusterRole:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: namespace: home name: home-role rules: - apiGroups: [""] # "" indicates the core API group resources: ["pods, deployments"] verbs: ["get", "watch", "list", "create", "delete", "update"] </code></pre> <p>Service Account:</p> <pre><code>get serviceaccounts -n home NAME SECRETS AGE default 1 3h2m kubectl describe serviceaccounts -n home Name: default Namespace: home Labels: &lt;none&gt; Annotations: &lt;none&gt; Image pull secrets: &lt;none&gt; Mountable secrets: default-token-8rzns Tokens: default-token-8rzns Events: &lt;none&gt; </code></pre> <p>ClusterRoleBinding:</p> <pre><code>kubectl create clusterrolebinding home-role-binding \ --clusterrole=home-role \ --serviceaccount=home:default </code></pre> <p>But I am getting this error when the API call is made:</p> <pre><code>open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory </code></pre> <p>Does anyone have any insight into where the issue may lie?</p>
<p>First off deployments are in apps/v1, not v1. Then you probably need to share the pod definition for the place you are running your api call from. You may have disabled service account token mounting.</p>
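<p>For the first point, a corrected rules block would look roughly like this, splitting the resources that live in different API groups:</p> <pre><code>rules:
- apiGroups: [&quot;&quot;]
  resources: [&quot;pods&quot;]
  verbs: [&quot;get&quot;, &quot;watch&quot;, &quot;list&quot;, &quot;create&quot;, &quot;delete&quot;, &quot;update&quot;]
- apiGroups: [&quot;apps&quot;]
  resources: [&quot;deployments&quot;]
  verbs: [&quot;get&quot;, &quot;watch&quot;, &quot;list&quot;, &quot;create&quot;, &quot;delete&quot;, &quot;update&quot;]
</code></pre> <p>Note the original <code>resources: [&quot;pods, deployments&quot;]</code> is also a single string rather than two items, which this fixes.</p>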
<p>I am using kubernetes(v1.15.2) to manage my skywalking-ui(v6.5.0) apps,recently I found some app's not accessable but the pod is still running, I am not sure the app is works fine,there is no error output in pod's logs.But the pod status icon give tips: <code>the pod is in pending state</code>.</p> <p><a href="https://i.stack.imgur.com/rmTwu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rmTwu.png" alt="enter image description here"></a></p> <p>Why the status not same in different places?The service is down now.How to avoid this situation or make the service recover automatic? This is pod info:</p> <pre><code>$ kubectl describe pod ws-red-envelope-service-575dc8f4fb-mg72g Name: ws-red-envelope-service-575dc8f4fb-mg72g Namespace: dabai-fat Priority: 0 Node: azshara-k8s01/172.19.104.231 Start Time: Sat, 29 Feb 2020 23:07:43 +0800 Labels: k8s-app=ws-red-envelope-service pod-template-hash=575dc8f4fb Annotations: &lt;none&gt; Status: Running IP: 172.30.224.4 IPs: &lt;none&gt; Controlled By: ReplicaSet/ws-red-envelope-service-575dc8f4fb Containers: ws-red-envelope-service: Container ID: docker://d1459b7edc1c02f1558b773f89711eeb63c12c9f180a8a426a3dc31d081b2a88 Image: registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/ws-red-envelope:v0.0.1 Image ID: docker-pullable://registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/ws-red-envelope@sha256:448c47db9d1366c9e50984054812ebed9cbcc718e206d20600e6c8ac02a35625 Port: &lt;none&gt; Host Port: &lt;none&gt; State: Running Started: Sat, 29 Feb 2020 23:07:45 +0800 Ready: True Restart Count: 0 Environment: APOLLO_META: &lt;set to the key 'apollo.meta' of config map 'fat-config'&gt; Optional: false ENV: &lt;set to the key 'env' of config map 'fat-config'&gt; Optional: false Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-xnrwt (ro) Conditions: Type Status Initialized True Ready False ContainersReady True PodScheduled True Volumes: default-token-xnrwt: Type: Secret (a volume populated by a Secret) SecretName: default-token-xnrwt Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 360s node.kubernetes.io/unreachable:NoExecute for 360s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 12h default-scheduler Successfully assigned dabai-fat/ws-red-envelope-service-575dc8f4fb-mg72g to azshara-k8s01 Normal Pulled 12h kubelet, azshara-k8s01 Container image "registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/ws-red-envelope:v0.0.1" already present on machine Normal Created 12h kubelet, azshara-k8s01 Created container ws-red-envelope-service Normal Started 12h kubelet, azshara-k8s01 Started container ws-red-envelope-service </code></pre>
<p>So you can see this more clearly in the output. The pod is Running but the Ready flag is false meaning the container is up but is failing the Readiness Probe.</p>
<p>I'm looking for a way to call the /test endpoint of my service on all the pods I have (essentially I have only 3 pods).</p> <p>Is that possible?</p> <p>I tried a Cloud Function that calls my load-balanced IP (<a href="https://10.10.10.10:1111/test" rel="nofollow noreferrer">https://10.10.10.10:1111/test</a>), however this sends the request to only one pod, i.e. Pod1, therefore Pod2 and Pod3 don't execute /test, and I need the request to be executed on all 3 pods.</p> <p>It doesn't matter what this /test does, I just need it to be executed on all pods.</p> <p>Any hint to achieve this would be awesome.</p>
<p>There is no specific way, you'll have to use the Kubernetes API to get all the pod IPs and make a request to each individually. This is usually a sign of some wonky software design though.</p>
<p>Let's say we have 2 Nodes in a cluster.</p> <p><code>Node A</code> has 1 replica of a pod, <code>Node B</code> has 2 replicas. According to <a href="https://youtu.be/y2bhV81MfKQ?list=WL&amp;t=1848" rel="nofollow noreferrer">this talk (YouTube video with a time tag)</a> from Google Cloud engineers, a request which was routed to <code>Node A</code> might be rerouted to the <code>Node B</code> by <code>iptables</code> which is inside the <code>Node A</code>. I have several questions regarding this behavior:</p> <ul> <li><p>What information <code>iptables</code> of <code>Node A</code> knows about replicas of a pod outside of it? How does it know where to send the traffic?</p></li> <li><p>Can it be that <code>iptables</code> of the <code>Node B</code> reroutes this request to <code>Node C</code>? If so, then will the egress traffic go back to the <code>Node B</code> -> <code>Node A</code> -> Client?</p></li> </ul>
<p>I think you might be mixing up two subsystems, service proxies and CNI. CNI is first, it’s a plug-in based system that sets up the routing rules across all your nodes so that the network appears flat. A pod IP will work like normal from any node. Exactly how that happens varies by plugin, Calico uses BGP between the nodes. Then there’s the service proxies, usually implemented using iptables though also somewhat pluggable. Those define the service IP -> endpoint IP (read: pod IP) load balancing. But the actual routing is handled by whatever your CNI plugin set up. There’s a lot of special modes and cases but that’s the basic overview.</p>
<p>I was looking into an entirely separate issue and then came across this question which raised some concerns:</p> <p><a href="https://stackoverflow.com/a/50510753/3123109">https://stackoverflow.com/a/50510753/3123109</a></p> <p>I'm doing something pretty similar. I'm using the <a href="https://github.com/Azure/secrets-store-csi-driver-provider-azure" rel="nofollow noreferrer">CSI Driver</a> for Azure to integrate Azure Kubernetes Service with Azure Key Vault. My manifests for the integration are something like:</p> <pre><code>apiVersion: aadpodidentity.k8s.io/v1 kind: AzureIdentity metadata: name: aks-akv-identity namespace: prod spec: type: 0 resourceID: $identityResourceId clientID: $identityClientId --- apiVersion: aadpodidentity.k8s.io/v1 kind: AzureIdentityBinding metadata: name: aks-akv-identity-binding namespace: prod spec: azureIdentity: aks-akv-identity selector: aks-akv-identity-binding-selector </code></pre> <pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1alpha1 kind: SecretProviderClass metadata: name: aks-akv-secret-provider namespace: prod spec: provider: azure secretObjects: - secretName: ${resourcePrefix}-prod-secrets type: Opaque data: - objectName: PROD-PGDATABASE key: PGDATABASE - objectName: PROD-PGHOST key: PGHOST - objectName: PROD-PGPORT key: PGPORT - objectName: PROD-PGUSER key: PGUSER - objectName: PROD-PGPASSWORD key: PGPASSWORD parameters: usePodIdentity: &quot;true&quot; keyvaultName: ${resourceGroupName}akv cloudName: &quot;&quot; objects: | array: objectName: PROD-PGDATABASE objectType: secret objectVersion: &quot;&quot; - | objectName: PROD-PGHOST objectType: secret objectVersion: &quot;&quot; - | objectName: PROD-PGPORT objectType: secret objectVersion: &quot;&quot; - | objectName: PROD-PGUSER objectType: secret objectVersion: &quot;&quot; - | objectName: PROD-PGPASSWORD objectType: secret objectVersion: &quot;&quot; tenantId: $tenantId </code></pre> <p>Then in the micro service manifest:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: api-deployment-prod namespace: prod spec: replicas: 3 selector: matchLabels: component: api template: metadata: labels: component: api aadpodidbinding: aks-akv-identity-binding-selector spec: containers: - name: api image: appacr.azurecr.io/app-api ports: - containerPort: 5000 env: - name: PGDATABASE valueFrom: secretKeyRef: name: app-prod-secrets key: PGDATABASE - name: PGHOST value: postgres-cluster-ip-service-prod - name: PGPORT valueFrom: secretKeyRef: name: app-prod-secrets key: PGPORT - name: PGUSER valueFrom: secretKeyRef: name: app-prod-secrets key: PGUSER - name: PGPASSWORD valueFrom: secretKeyRef: name: app-prod-secrets key: PGPASSWORD volumeMounts: - name: secrets-store01-inline mountPath: /mnt/secrets-store readOnly: true volumes: - name: secrets-store01-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: aks-akv-secret-provider --- apiVersion: v1 kind: Service metadata: name: api-cluster-ip-service-prod namespace: prod spec: type: ClusterIP selector: component: api ports: - port: 5000 targetPort: 5000 </code></pre> <p>Then in my application <code>settings.py</code>:</p> <pre><code>DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': os.environ['PGDATABASE'], 'USER': os.environ['PGUSER'], 'PASSWORD': os.environ['PGPASSWORD'], 'HOST': os.environ['PGHOST'], 'PORT': os.environ['PGPORT'], } } </code></pre> <p>Nothing in my <code>Dockerfile</code> refers to any of these variables, just the Django micro service code.</p> 
<p>According to the link, one of the comments was:</p> <blockquote> <p>current best practices advise against doing this exactly. secrets managed through environment variables in docker are easily viewed and should not be considered secure.</p> </blockquote> <p>So I'm second guessing this approach.</p> <p><strong>Do I need to look into revising what I have here?</strong></p> <p>The suggestion in the link is to place the <code>os.environ[]</code> with a call to a method that pulls the credentials from a key vault... but the credentials to even access the key vault would need to be stored in secrets... so I'm not seeing how it is any different.</p> <hr /> <p><strong>Note:</strong> One thing I noticed is this is the use of <code>env:</code> and mounting the secrets to a volume is redundant. The latter was done per <a href="https://learn.microsoft.com/en-us/azure/key-vault/general/key-vault-integrate-kubernetes" rel="nofollow noreferrer">the documentation</a> on the integration, but it makes the secrets available from <code>/mnt/secrets-store</code> so you can do something like <code>cat /mnt/secrets-store/PROD-PGUSER</code>. <code>os.environ[]</code> isn't really necessary and the <code>env:</code> I don't think because you could pull the secret from that location in the Pod.</p> <p>At least doing something like the following prints out the secret value:</p> <pre><code>kubectl exec -it $(kubectl get pods -l component=api -o custom-columns=:metadata.name -n prod) -n prod -- cat /mnt/secrets-store/PROD-PGUSER </code></pre>
<p>The comment on the answer you linked was incorrect. I've left a note to explain the confusion. What you have is fine, if possibly over-built :) You're not actually gaining any security vs. just using Kubernetes Secrets directly but if you prefer the workflow around AKV then this looks fine. You might want to look at externalsecrets rather than this weird side feature of the CSI stuff? The CSI driver is more for exposing stuff as files rather than external-&gt;Secret-&gt;envvar.</p>
<p>I want to refer to a property of an object created by a CRD.</p> <p>Here is my example. I create a Cloud SQL instance using the CRD from <a href="https://github.com/GoogleCloudPlatform/k8s-config-connector" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/k8s-config-connector</a>.</p> <p>This generates an instance with an IP. I want to reference the IP address in another resource.</p> <p>Is there something similar to the downward API that will allow me to do this?</p> <p>If I can't do it natively, can I do it with third-party templating tools like Helm, Helmfile or Kustomize?</p>
<p>Nothing in particular, the way we do it is the controller exposes info like the IP or hostname on the Status sub-struct of the subordinate object, and then copy that into the Status of the root object, and then we read from that and inject it into a config file.</p> <p><a href="https://github.com/Ridecell/ridecell-operator/blob/39344f4318ff3bcb68ce32dd4319b655a60277da/pkg/controller/summon/components/postgres.go#L58-L61" rel="nofollow noreferrer">https://github.com/Ridecell/ridecell-operator/blob/39344f4318ff3bcb68ce32dd4319b655a60277da/pkg/controller/summon/components/postgres.go#L58-L61</a> is an example of the copy-over but it's in our framework so probably not super helpful directly.</p> <p>Another option we use in other places is making an init container that reads from the CRD status and writes out (or transforms) config files. An example of that is <a href="https://github.com/Ridecell/ridecell-operator/blob/39344f4318ff3bcb68ce32dd4319b655a60277da/cmd/initcontainer/main.go#L181-L203" rel="nofollow noreferrer">https://github.com/Ridecell/ridecell-operator/blob/39344f4318ff3bcb68ce32dd4319b655a60277da/cmd/initcontainer/main.go#L181-L203</a></p>
<p>I am trying to track and monitor how much time a pod takes to come online/healthy/Running.</p> <p>I am using EKS, and I have HPA and cluster-autoscaler installed on my cluster.</p> <p>Let's say I have a deployment with a <code>HorizontalPodAutoscaler</code> scaling policy with 70% <code>targetAverageUtilization</code>.<br /> So whenever the average utilization of the deployment goes beyond 70%, HPA will trigger the creation of a new Pod. Now, depending on different factors, like whether a node is available and, if one is, whether the image needs to be downloaded or is already cached, scaling can take anywhere from a few seconds to a few minutes.</p> <p>I want to track this duration: every time a Pod is scheduled, how much time does it take to reach the <code>Running</code> state? Any suggestions?</p> <p>Or any direction where I should be looking.</p> <p>I found these <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler-visibility" rel="nofollow noreferrer">Cluster Autoscaler Visibility Logs</a>, but they are only available on GKE.</p> <p>I am looking for any solution: an out-of-the-box integration, raising events and storing them in some time-series DB, or <strong>scraping</strong> data with Prometheus. But I couldn't find anything for this so far.</p> <p>Thanks in advance.</p>
<p>There is nothing out of the box for this; you will need to build something yourself. One possible starting point is sketched below.</p>
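<p>One way to build it, assuming you already scrape kube-state-metrics with Prometheus: recent kube-state-metrics versions expose per-pod timestamps such as <code>kube_pod_created</code> and the experimental <code>kube_pod_status_ready_time</code>, so a recording rule can derive a time-to-ready duration. Metric names and availability vary by version, so treat this as a sketch rather than a drop-in rule:</p> <pre><code>groups:
  - name: pod-startup-latency
    rules:
      # Seconds from Pod creation until its Ready condition first became true.
      # Requires a kube-state-metrics build that exposes kube_pod_status_ready_time.
      - record: pod:time_to_ready:seconds
        expr: kube_pod_status_ready_time - on (namespace, pod, uid) kube_pod_created
</code></pre> <p>Alternatively, a small controller watching Pod events can compute the same delta itself and push it to whatever time-series store you prefer, which gives you more control over what counts as "ready".</p>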
<p>I'm trying to build a basic frontend/backend topology in Kubernetes (Amazon EKS, actually) with both frontend and backend pods residing on the same node. I want every node to have two interfaces: a public one that connects to an internet gateway, and a private one that doesn't. So it would seem natural to somehow map the frontend pods (or Service) to the public interface to route traffic to/from the internet, and map the backend pods to the private interface to prevent any external access to them. Is that even possible in Kubernetes? I know I should probably use public interfaces everywhere and restrict access with ACLs, but a design with different interfaces looks simpler and more secure to me.</p>
<p>This is not usually how things work in Kubernetes. Pod IPs are always "private", i.e. cluster-internal addresses that are not routable from the internet. You poke specific holes into that cluster IP space using LoadBalancer-type Services. In AWS terms, all pods have private IPs and you use ELBs to bridge specific things to the public network.</p>
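<p>A minimal sketch of that split (labels and ports are placeholders): the frontend Service is of type LoadBalancer, which provisions an internet-facing ELB on EKS by default, while the backend stays a ClusterIP Service that is unreachable from outside the cluster:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer        # EKS provisions a public ELB for this
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP           # only reachable from inside the cluster
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
</code></pre> <p>If you also want to limit which pods inside the cluster can reach the backend, add a NetworkPolicy on top; the per-node interface split itself isn't something Kubernetes models.</p>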
<p>I am practicing with K8s, and I need to share some data between the containers inside a Pod. The problem is that I need to make data that is already present in some containers at startup available to other containers. Here is an example of what I mean:</p> <p>Container A at startup: <code>/root/dir/a.txt</code><br /> Container B at startup: <code>/root/dirB/b.txt</code></p> <p>In container C I want to have a directory that contains both the a.txt file and the b.txt file, without doing any operation such as writing, just using volumes. How can I do it? Thank you in advance.</p>
<p>Make an emptyDir volume and mount it at <code>/newroot</code> on both A and B, with both of those set as initContainers that run something like <code>command: [bash, -c, &quot;mkdir -p /newroot/dir; cp /root/dir/a.txt /newroot/dir/a.txt&quot;]</code> (and similarly for B). On C, mount that emptyDir using <code>subPath</code> on either <code>/root/dir</code> or the whole path as needed. A fuller sketch is below.</p>
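<p>A sketch of the whole Pod, with placeholder image names (the real images are whatever already contain <code>a.txt</code> and <code>b.txt</code>):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: shared-files
spec:
  volumes:
    - name: newroot
      emptyDir: {}
  initContainers:
    - name: seed-a
      image: image-a        # placeholder: ships /root/dir/a.txt
      command:
        - bash
        - -c
        - mkdir -p /newroot/dir; cp /root/dir/a.txt /newroot/dir/a.txt
      volumeMounts:
        - name: newroot
          mountPath: /newroot
    - name: seed-b
      image: image-b        # placeholder: ships /root/dirB/b.txt
      command:
        - bash
        - -c
        - mkdir -p /newroot/dirB; cp /root/dirB/b.txt /newroot/dirB/b.txt
      volumeMounts:
        - name: newroot
          mountPath: /newroot
  containers:
    - name: c
      image: image-c        # placeholder: the container that reads both files
      command:
        - sleep
        - infinity
      volumeMounts:
        - name: newroot
          mountPath: /root  # C now sees /root/dir/a.txt and /root/dirB/b.txt
</code></pre> <p>Mounting the volume at <code>/root</code> hides anything else C's image had there; if that matters, mount with <code>subPath: dir</code> at <code>/root/dir</code> (and <code>subPath: dirB</code> at <code>/root/dirB</code>) instead.</p>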
<p>I am trying to get Redis failover to work in Kubernetes with a worker-node failure scenario. I have a K8s cluster that consists of a master node and two worker nodes. The master node does not schedule pods. The manifests for Redis are such that there is a master and a slave instance in a stateful set and 3 sentinels in another stateful set. The manifests have affinity to steer the pods to be scheduled on separate worker nodes. If I drain a worker node that has the master instance and one sentinel, failover works like a champ. </p> <p>If, however, there are <strong>2</strong> sentinels that are evicted with the master instance, no master is elected and the 2 sentinels that are re-started on the remaining worker node report: <code>-failover-abort-no-good-slave master jnpr-ipb-redis-masters 10.244.1.209 7380</code>. That IP address in the log message is the IP address of the former slave (which I expected to be promoted to the new master).</p> <p>Is there a bit of wizardry to make this work? Is this a valid cluster config? Not really sure what I should be looking at to get an idea of what is happening.</p>
<p>What you want is a PodDisruptionBudget. That will at least keep voluntary evictions from breaking things. Beyond that you can use hard anti-affinities to force the pods to be scheduled on different nodes (sketched below). Failures are still possible, though: if you lose two nodes at the same time, the Sentinels can desync. This is a large part of why Redis Sentinel is mostly no longer used in favor of Cluster Mode.</p>
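<p>A sketch of both pieces, assuming the Sentinel pods carry an <code>app: redis-sentinel</code> label (adjust the labels and selectors to your manifests; use <code>policy/v1beta1</code> on clusters older than 1.21):</p> <pre><code>apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: redis-sentinel-pdb
spec:
  minAvailable: 2             # keep a Sentinel quorum through voluntary evictions
  selector:
    matchLabels:
      app: redis-sentinel
---
# Goes under the Sentinel StatefulSet's spec.template.spec to force one Sentinel per node.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: redis-sentinel
        topologyKey: kubernetes.io/hostname
</code></pre> <p>The PodDisruptionBudget only protects against voluntary disruptions like node drains; the anti-affinity reduces the chance that a single involuntary node failure takes out more than one Sentinel at once.</p>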