<p>I am running a Spring Batch application in a Kubernetes environment. The k8s cluster has one master and three worker nodes. I am testing Spring Batch under high load, which is spawning around 100 worker pods. However, all the 100 pods are coming up only on two out of three worker nodes. No node selector or additional labeling has been done on the nodes.</p> <p>I have used Spring Cloud Deployer for Kubernetes to create worker pods in Kubernetes.</p> <p>The versions involved are:</p> <ul> <li>Spring Boot: 2.1.9.RELEASE</li> <li>Spring Cloud: 2020.0.1</li> <li>Spring Cloud Deployer: 2.5.0</li> <li>Spring Cloud Task: 2.1.1.RELEASE</li> <li>Kubernetes: 1.21</li> </ul> <p>How can I ensure that worker pods get scheduled on all available worker nodes evenly?</p> <p>Following is the partition handler implementation responsible for launching the tasks.</p> <pre><code>@Bean public PartitionHandler partitionHandler(TaskLauncher taskLauncher, JobExplorer jobExplorer) { Resource resource = this.resourceLoader.getResource(resourceSpec); DeployerPartitionHandler partitionHandler = new DeployerPartitionHandler(taskLauncher, jobExplorer, resource, &quot;worker&quot;); commandLineArgs.add(&quot;--spring.profiles.active=worker&quot;); commandLineArgs.add(&quot;--spring.cloud.task.initialize.enable=false&quot;); commandLineArgs.add(&quot;--spring.batch.initializer.enabled=false&quot;); commandLineArgs.add(&quot;--spring.cloud.task.closecontext_enabled=true&quot;); commandLineArgs.add(&quot;--logging.level.root=DEBUG&quot;); partitionHandler.setCommandLineArgsProvider(new PassThroughCommandLineArgsProvider(commandLineArgs)); partitionHandler.setEnvironmentVariablesProvider(environmentVariablesProvider()); partitionHandler.setApplicationName(appName + &quot;worker&quot;); partitionHandler.setMaxWorkers(maxWorkers); return partitionHandler; } @Bean public EnvironmentVariablesProvider environmentVariablesProvider() { return new SimpleEnvironmentVariablesProvider(this.environment); } </code></pre>
<p>Posting this out of comments as a community wiki for better visibility, feel free to edit and expand.</p> <hr /> <p>There are scheduling mechanics which can prevent scheduling pods on some nodes:</p> <ul> <li><p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="nofollow noreferrer">Taints and tolerations</a></p> </li> <li><p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">Node selector</a></p> </li> </ul> <p>If nothing is set, it's worth trying to rejoin the node. For instance it might not be registered correctly (this solved the issue above).</p>
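<p>As a quick first check (the node name below is a placeholder), the taints and labels of the worker that never receives pods can be compared against the other two workers:</p>
<pre><code>kubectl describe node &lt;node-name&gt; | grep -i taints
kubectl get nodes --show-labels
</code></pre>
<p>If an unexpected taint shows up on that node, it can be removed with <code>kubectl taint nodes &lt;node-name&gt; &lt;taint-key&gt;:NoSchedule-</code> (note the trailing dash). If taints and labels look identical across the workers, rejoining the node as described above is the next thing to try.</p>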
<p>Lately I have been configuring a k8s cluster composed of 3 nodes (master, worker1 and worker2) that will host a UDP application (8 replicas of it). Everything is done and the cluster is working very well but there is only one problem.</p> <p>Basically there is a Deployment which describes the Pod and it looks like:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: &lt;name&gt; labels: app: &lt;app_name&gt; spec: replicas: 8 selector: matchLabels: app: &lt;app_name&gt; template: metadata: labels: app: &lt;app_name&gt; spec: containers: - name: &lt;name&gt; image: &lt;image&gt; ports: - containerPort: 6000 protocol: UDP </code></pre> <p>There is also a Service which is used to access the UDP application:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: &lt;service_name&gt; labels: app: &lt;app_name&gt; spec: type: NodePort ports: - port: 6000 protocol: UDP nodePort: 30080 selector: app: &lt;app_name&gt; </code></pre> <p>When I try to access the service, 2 different scenarios may occur:</p> <ul> <li>The request is assigned to a POD that is on the same node that received the request</li> <li>The request is assigned to a POD that is on the other node</li> </ul> <p>In the second case the request arrives correctly at the POD but with a source IP which ends in 0 (for example 10.244.1.0), so the response will never be delivered correctly.</p> <p>I can't figure it out; I have really tried everything but this problem still remains. For the moment, to make the cluster work properly, I added <code>externalTrafficPolicy: Local</code> and <code>internalTrafficPolicy: Local</code> to the Service; this way the requests remain local, so when a request is sent to worker1 it will be assigned to a Pod which is running on worker1, and the same for worker2.</p> <p>Do you have any ideas about the problem? Thanks to everyone.</p>
<p>Have you confirmed that the response is not delivered correctly for your second scenario? The source IP address in that case should be the one of the node where the request first arrived.</p> <p>I am under the impression that you are assuming that since the IP address ends in 0 this is necessarily a network address, and that could be a wrong assumption, as it depends on the <a href="https://www.hacksplaining.com/glossary/netmasks#:%7E:text=Netmasks%20(or%20subnet%20masks)%20are,Internet%20Protocol%20(IP)%20address." rel="nofollow noreferrer">Netmask</a> configured for the Subnetwork where the nodes are allocated; for example, if the nodes are in the Subnet 10.244.0.0/23, then the network address is 10.244.0.0, and 10.244.1.0 is just another usable address that can be assigned to a node.</p> <p>Now, if your application needs to preserve the client's IP address, then that could be an issue since, by default, the source IP seen in the target container is not the original source IP of the client. In this case, additionally to configuring the <code>externalTrafficPolicy</code> as Local, you would need to configure a <code>healthCheckNodePort</code> as specified in the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer">Preserving the client source IP</a> documentation.</p>
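<p>For reference, a minimal sketch of the Service from the question with source-IP preservation enabled (same placeholder names and ports as in the question; note that <code>healthCheckNodePort</code> is only accepted by the API when the Service type is <code>LoadBalancer</code>, so it is omitted for this <code>NodePort</code> case):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: &lt;service_name&gt;
  labels:
    app: &lt;app_name&gt;
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ports:
    - port: 6000
      protocol: UDP
      nodePort: 30080
  selector:
    app: &lt;app_name&gt;
</code></pre>
<p>With <code>externalTrafficPolicy: Local</code> the original client IP is preserved, at the cost of each node only forwarding to Pods running locally, which is the trade-off already observed in the question.</p>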
<p>I can still access kubectl after disabling Kubernetes from Docker Desktop v4.18.0.</p> <p><a href="https://i.stack.imgur.com/LIubu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LIubu.png" alt="Disabled Kubernetes service in Docker Desktop" /></a></p> <p>On running the <code>kubectl version</code> command, the following output is displayed:</p> <p><a href="https://i.stack.imgur.com/270LG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/270LG.png" alt="kubectl version command in terminal" /></a></p> <pre><code>Environment details: - OS: MacOS Ventura 13.3.1 (M1) - Docker Desktop: v4.18.0 </code></pre> <p>I want to entirely remove kubectl and related services, so that I can do a fresh install of minikube for local development. Please suggest the required steps for this.</p>
<p>As per the official <a href="https://docs.docker.com/desktop/kubernetes/#disable-kubernetes" rel="nofollow noreferrer">doc</a>:</p> <blockquote> <p>After disabling Kubernetes, select Apply &amp; Restart to save the settings. This stops and removes Kubernetes containers, and also removes the /usr/local/bin/kubectl command.</p> </blockquote> <p>The troubleshooting steps below can help you:</p> <ul> <li><p>If you don't have kubectl installed in <code>/usr/local/bin</code>, Docker will install it for you on startup, so check where kubectl is located and then remove the kubectl binary.</p> </li> <li><p>Check if Kubernetes is still in the starting state.</p> </li> <li><p>Check if kubectl was also installed through Homebrew using <code>brew search kubectl</code>.</p> </li> <li><p>Try clean / purge data or reset to factory defaults in the <a href="https://docs.docker.com/desktop/troubleshoot/overview/#troubleshoot-menu" rel="nofollow noreferrer">troubleshooting menu</a>.</p> </li> </ul> <p>Also, as @David Maze suggested, you can install minikube using <code>brew install minikube</code> without removing kubectl.</p>
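<p>A rough sequence of shell commands for the first bullet (the path is an assumption; confirm it with <code>which</code> before deleting anything):</p>
<pre><code># find out which kubectl binary is on the PATH and where it points
which kubectl
ls -l &quot;$(which kubectl)&quot;

# if it is the Docker Desktop provided binary/symlink, remove it
sudo rm /usr/local/bin/kubectl

# then install minikube (it can also provide its own kubectl via &quot;minikube kubectl&quot;)
brew install minikube
</code></pre>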
<p>I try to use SparkKubernetesOperator to run spark job into Kubernetes with the same DAG and yaml files as the following question:</p> <p><a href="https://stackoverflow.com/questions/68371840/unable-to-create-sparkapplications-on-kubernetes-cluster-using-sparkkubernetesop/69129609#69129609">Unable to create SparkApplications on Kubernetes cluster using SparkKubernetesOperator from Airflow DAG</a></p> <p>But airflow shows the following error:</p> <pre><code>HTTP response headers: HTTPHeaderDict({'Audit-Id': 'e2e1833d-a1a6-40d4-9d05-104a32897deb', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Fri, 10 Sep 2021 08:38:33 GMT', 'Content-Length': '462'}) HTTP response body: {&quot;kind&quot;:&quot;Status&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;metadata&quot;:{},&quot;status&quot;:&quot;Failure&quot;,&quot;message&quot;:&quot;the object provided is unrecognized (must be of type SparkApplication): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string \&quot;json:\\\&quot;apiVersion,omitempty\\\&quot;\&quot;; Kind string \&quot;json:\\\&quot;kind,omitempty\\\&quot;\&quot; } (222f7573722f6c6f63616c2f616972666c6f772f646167732f636f6e6669 ...)&quot;,&quot;reason&quot;:&quot;BadRequest&quot;,&quot;code&quot;:400} </code></pre> <p>Any suggestion to resolve that problem???</p>
<p>I think you had the same problem as me:</p> <pre><code>SparkKubernetesOperator( task_id='spark_pi_submit', namespace=&quot;default&quot;, application_file=open(&quot;/opt/airflow/dags/repo/script/spark-test.yaml&quot;).read(), #officially known bug kubernetes_conn_id=&quot;kubeConnTest&quot;, #ns default in airflow connection UI do_xcom_push=True, dag=dag ) </code></pre> <p>I wrapped the yaml file in <code>open(...).read()</code> like this, and it works like a charm.</p> <p><a href="https://github.com/apache/airflow/issues/17371" rel="nofollow noreferrer">https://github.com/apache/airflow/issues/17371</a></p>
<p>I am working on an application which is running on a Kubernetes cluster. I want to restart n pods manually, in sequence. Can we do that? Would <code>kubectl scale &lt;options&gt;</code> work here?</p>
<p>The answer is <strong>yes</strong>, you can restart 5 out of 10 pods of a particular deployment, though there won't be a single command for this.</p> <p>As you correctly assumed, <code>kubectl scale</code> will help you here.</p> <p>Restarting 5 pods out of 10 consists of 2 operations:</p> <ol> <li><p>Scaling down the deployment from 10 to 5 pods</p> <pre><code>kubectl scale deployment deployment-name --replicas=5 </code></pre> </li> <li><p>Scaling the deployment back up from 5 to 10 pods:</p> <pre><code>kubectl scale deployment deployment-name --replicas=10 </code></pre> </li> </ol> <p>You can also delete specific pods; <code>kube-controller-manager</code>, with the <code>deployment/replicaset</code> controllers within it, will make sure that the actual state matches the <code>desired</code> state, and therefore the missing pods will be automatically rescheduled.</p> <hr /> <p>However, following best practice (thanks to @DavidMaze), the ideal scenario is to restart the whole deployment. This can be done with the following command:</p> <pre><code>kubectl rollout restart deployment deployment-name </code></pre> <p>This is a safer option and it allows you to roll back easily in case of any mistakes/errors.</p> <p>It's also possible to restart pods 1 by 1 within the deployment when <code>rollout restart</code> is requested.</p> <p><code>.spec.strategy.rollingUpdate.maxUnavailable</code> should be set to <code>1</code>, which means at most 1 pod will be unavailable during the restart (see the sketch below) - <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable" rel="nofollow noreferrer">reference to max unavailable</a>.</p> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Kubernetes Deployments</a></p>
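<p>A minimal sketch of the relevant part of the Deployment spec for that last point (the deployment name is a placeholder; selector and pod template are omitted):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod unavailable at any moment during the restart
</code></pre>
<p>With this in place, <code>kubectl rollout restart deployment deployment-name</code> replaces the pods gradually, with at most one of them unavailable at a time.</p>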
<p>I'm trying to route POST requests through a K8s Load Balancer to a Webhook in Argo Events. I can't find any clear documentation on this. I'm able to get the Webhook created and I can successfully communicate with it when I port forward the webhook-eventsource-svc. The Load Balancer is built fine and displays the external IP that I assign. However when I try to POST to the Load Balancer I just get a connection timed out error. I'm hoping I'm just configuring these manifests wrong.</p> <p>Here is the manifest for both services.</p> <pre><code>apiVersion: argoproj.io/v1alpha1 kind: EventSource metadata: name: webhook namespace: argo-events spec: service: ports: - port: 12000 targetPort: 12000 webhook: example: endpoint: /deploy method: POST port: &quot;12000&quot; --- apiVersion: v1 kind: Service metadata: name: webhook-loadbalancer namespace: argo-events annotations: service.beta.kubernetes.io/azure-load-balancer-internal: &quot;true&quot; service.beta.kubernetes.io/azure-load-balancer-internal-subnet: DevelopSubnet spec: type: LoadBalancer loadBalancerIP: 1XX.X.X.XXX ports: - protocol: TCP port: 90 targetPort: 12000 selector: app: webhook-eventsource-svc controller: eventsource-controller </code></pre> <p>And here is how I am sending the request:</p> <pre><code>curl -d '@params.json' -H &quot;Content-Type: application/json&quot; -X POST http://1XX.X.X.XXX:90/deploy </code></pre> <p>Any suggestions?</p>
<p>I'm trying to do something similar in AWS. I can get the sample webhook to work with port forwarding (<a href="https://argoproj.github.io/argo-events/quick_start/" rel="nofollow noreferrer">https://argoproj.github.io/argo-events/quick_start/</a>) But it won't work with regular K8s objects. In my case, an Ingress and a Service object. I can see my Service selector correctly pick the webhook sensor pod. Both Argo Events and Argo Workflow run in the same argo namespace. Once configured, access to the Ingress from Postman returns a 404. What I find confusing is that the actual Port the sensor pod exposes is 7777 in the sample, not 12000. So, I've tried a Service with Port 12000 / TargetPort 12000 or 7777. In either case, the POST returns 404.</p> <p>What I can point out that's applicable in your case and mine is this (<a href="https://argoproj.github.io/argo-events/eventsources/services/" rel="nofollow noreferrer">https://argoproj.github.io/argo-events/eventsources/services/</a>) in the second paragraph it states that you must remove the service field from your EventSource object to refactor the sample from port forwarding. Hope it helps. I'm still trying to make this work.</p>
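<p>For what it's worth, this is what the EventSource from the question would look like with the <code>service</code> field removed, as that page suggests (an untested sketch; everything else is kept as in the question):</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: webhook
  namespace: argo-events
spec:
  webhook:
    example:
      endpoint: /deploy
      method: POST
      port: &quot;12000&quot;
</code></pre>
<p>The separate LoadBalancer Service (or Ingress backend Service) then has to select the event source pod itself and forward to container port 12000.</p>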
<p>I have a web application hosted in EKS and there is a CPU utilization metric in place for scaling the pods horizontally.</p> <p>If the current number of pods is 10, and I increase the load (increasing requests per minute), then the desired number of pods depends on how aggressively I am increasing the load, so it could be 13, 16 etc.</p> <p>But I want the number of pods to always increase in multiples of 5 and decrease in multiples of 3. Is this possible?</p>
<p>After going through the documentation and some code, it looks impossible to force the horizontal pod autoscaler (HPA) to scale down or up by exact numbers of pods, since there are no flags/options for it.</p> <p><strong>The closest you can get</strong> is to set up <code>scaleDown</code> and <code>scaleUp</code> policies.</p> <p>Below is an example (<strong>note</strong>: this will work with the <code>v2beta2</code> api version); this part should be located under <code>spec</code>:</p> <pre><code>behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 3 periodSeconds: 15 scaleUp: stabilizationWindowSeconds: 0 policies: - type: Pods value: 5 periodSeconds: 15 </code></pre> <p><strong>What this means:</strong></p> <ul> <li><code>scaleDown</code> will be performed by at most 3 pods every 15 seconds.</li> <li><code>scaleUp</code> will be performed by at most 5 pods every 15 seconds.</li> <li><code>stabilizationWindowSeconds</code> - The stabilization window is used to restrict the flapping of replicas when the metrics used for scaling keep fluctuating. The autoscaling algorithm uses it to consider the computed desired state from the past and avoid undesired scale changes.</li> </ul> <p>This doesn't guarantee that the HPA will scale up or down by the exact number of specified pods; it's just a policy. However, if the workload increases or decreases quickly, the result should be close to the behaviour you'd like to see. A complete manifest is shown below.</p> <p><strong>Useful link:</strong></p> <ul> <li><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior" rel="nofollow noreferrer">Support for configurable scaling behavior</a></li> </ul>
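<p>Putting it together, a complete manifest could look like this (the deployment name, CPU target and replica bounds are placeholders to adapt to your setup):</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 10
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Pods
          value: 3
          periodSeconds: 15
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Pods
          value: 5
          periodSeconds: 15
</code></pre>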
<p>I'm using a PersistentVolume and a Claim and then mounting it to my workdir '/server' to create a simple Minecraft server using K8s, and when I deploy it the jar file isn't there anymore?<a href="https://i.stack.imgur.com/Dj7Du.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dj7Du.png" alt="enter image description here" /></a></p> <pre><code>deployment.yaml --------------- spec: volumes: - name: minecraft-pvstorage persistentVolumeClaim: claimName: minecraft-pvclaim containers: - name: minecraft-deployment image: localhost:32000/minecraft:1.18.2-new imagePullPolicy: Always ports: - containerPort: 30007 volumeMounts: - name: minecraft-pvstorage mountPath: /server </code></pre> <pre><code>pv.yaml ------- apiVersion: v1 kind: PersistentVolume metadata: name: minecraft-pv labels: type: local spec: storageClassName: manual capacity: storage: 5Gi accessModes: - ReadWriteOnce hostPath: path: &quot;/mnt/minecraft&quot; </code></pre> <pre><code>pvclaim.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: minecraft-pvclaim spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 5Gi </code></pre> <p>Can anyone help me with this? It works when I delete the volumeMounts from the deployment.yaml.</p>
<p>I figured it out.</p> <p>Mounting the volume at <code>/server</code> hides everything the image already ships in that directory, including the jar. In my deployment.yaml I instead defined a mount for just the world directory, and that ended up persisting all of the world data while keeping the server files from the image.</p> <pre><code>deployment.yaml -------------- volumeMounts: - name: minecraft-pvstorage mountPath: /server/1.18.2/world subPath: world </code></pre>
<p>Good afternoon.</p> <p>So I have an RKE2 cluster with the security policy in place that does not allow root pods to run. I have a pod that has to run as root and have been trying to figure out how to allow my pod to deploy on this cluster without success.</p> <p>So far I have tried to explicitly set the following:</p> <pre><code>securityContext: runAsUser: 0 runAsGroup: 0 </code></pre> <p>The pod still fails to be allowed to run on the environment. Is there a way to not totally disable the security policy and perhaps an an exception for a single namespace? Thank you.</p>
<p>To run a pod that has to run as root on a cluster where the security policy does not allow root pods, you need to create a security policy for that namespace instead of disabling the policy entirely. Role Based Access Control (RBAC) allows you to create fine-grained roles and policies to manage access control for users and software running on your cluster; you can find more information in this <a href="https://docs.giantswarm.io/getting-started/rbac-and-psp/" rel="nofollow noreferrer">document</a>.</p> <p>PodSecurityPolicy is deprecated and will be completely removed in v1.25, so you should start considering <a href="https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/" rel="nofollow noreferrer">migrating to Pod Security Admission</a>, where the &quot;kube-system&quot; namespace is explicitly exempted from PodSecurity. See also <a href="https://kubernetes.io/docs/reference/access-authn-authz/psp-to-pod-security-standards/" rel="nofollow noreferrer">the mapping of PodSecurityPolicies to Pod Security Standards</a>.</p> <p>Known limitations: <a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/2579-psp-replacement/README.md#namespace-policy-update-warnings" rel="nofollow noreferrer">Namespace policy update warnings</a></p> <p>Follow this <a href="https://capstonec.com/2020/04/22/hands-on-with-kubernetes-pod-security-policies/" rel="nofollow noreferrer">document</a> for more information.</p>
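<p>As an illustration of the Pod Security Admission route (the namespace name is hypothetical), the namespace that has to run the root pod can be labelled with a more permissive level while the rest of the cluster keeps a stricter one:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: needs-root
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/warn: baseline
</code></pre>
<p>With PodSecurityPolicy (while it still exists in your cluster version), the equivalent is a per-namespace policy plus an RBAC binding so that only the service account in that namespace may use the more permissive policy.</p>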
<p>I'm having some trouble getting the Nginx ingress controller working in my Minikube cluster. It's likely to be some faults in Ingress configuration but I cannot pick it out.</p> <p>First, I deployed a service and it worked well without ingress.</p> <pre><code>kind: Service apiVersion: v1 metadata: name: online labels: app: online spec: selector: app: online ports: - protocol: TCP port: 8080 targetPort: 5001 type: LoadBalancer --- apiVersion: apps/v1 kind: Deployment metadata: name: online labels: app: online spec: replicas: 1 selector: matchLabels: app: online template: metadata: labels: app: online annotations: dapr.io/enabled: &quot;true&quot; dapr.io/app-id: &quot;online&quot; dapr.io/app-port: &quot;5001&quot; dapr.io/log-level: &quot;debug&quot; dapr.io/sidecar-liveness-probe-threshold: &quot;300&quot; dapr.io/sidecar-readiness-probe-threshold: &quot;300&quot; spec: containers: - name: online image: online:latest ports: - containerPort: 5001 env: - name: ADDRESS value: &quot;:5001&quot; - name: DAPR_HTTP_PORT value: &quot;8080&quot; imagePullPolicy: Never </code></pre> <p>Then check its url</p> <pre><code>minikube service online --url http://192.168.49.2:32323 </code></pre> <p>It looks ok for requests.</p> <pre><code>curl http://192.168.49.2:32323/userOnline OK </code></pre> <p>After that I tried to apply nginx ingress offered by minikube. I installed ingress and run an example by referring to <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">this</a> and it's all ok.</p> <p>Lastly, I configured my Ingress.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: online-ingress annotations: spec: rules: - host: online http: paths: - path: / pathType: Prefix backend: service: name: online port: number: 8080 </code></pre> <p>And changed /etc/hosts by adding line</p> <pre><code>192.168.49.2 online </code></pre> <p>And Test:</p> <pre><code>curl online/userOnline 502 Bad Gateway </code></pre> <p>The logs are like this:</p> <pre><code>192.168.49.1 - - [26/Aug/2021:09:45:56 +0000] &quot;GET /userOnline HTTP/1.1&quot; 502 150 &quot;-&quot; &quot;curl/7.68.0&quot; 80 0.002 [default-online-8080] [] 172.17.0.5:5001, 172.17.0.5:5001, 172.17.0.5:5001 0, 0, 0 0.004, 0.000, 0.000 502, 502, 502 578ea1b1471ac973a2ac45ec4c35d927 2021/08/26 09:45:56 [error] 2514#2514: *426717 upstream prematurely closed connection while reading response header from upstream, client: 192.168.49.1, server: online, request: &quot;GET /userOnline HTTP/1.1&quot;, upstream: &quot;http://172.17.0.5:5001/userOnline&quot;, host: &quot;online&quot; 2021/08/26 09:45:56 [error] 2514#2514: *426717 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.49.1, server: online, request: &quot;GET /userOnline HTTP/1.1&quot;, upstream: &quot;http://172.17.0.5:5001/userOnline&quot;, host: &quot;online&quot; 2021/08/26 09:45:56 [error] 2514#2514: *426717 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.49.1, server: online, request: &quot;GET /userOnline HTTP/1.1&quot;, upstream: &quot;http://172.17.0.5:5001/userOnline&quot;, host: &quot;online&quot; W0826 09:45:56.918446 7 controller.go:977] Service &quot;default/online&quot; does not have any active Endpoint. 
I0826 09:46:21.345177 7 status.go:281] &quot;updating Ingress status&quot; namespace=&quot;default&quot; ingress=&quot;online-ingress&quot; currentValue=[] newValue=[{IP:192.168.49.2 Hostname: Ports:[]}] I0826 09:46:21.349078 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;Ingress&quot;, Namespace:&quot;default&quot;, Name:&quot;online-ingress&quot;, UID:&quot;b69e2976-09e9-4cfc-a8e8-7acb51799d6d&quot;, APIVersion:&quot;networking.k8s.io/v1beta1&quot;, ResourceVersion:&quot;23100&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'Sync' Scheduled for sync </code></pre> <p>I found the error is very about annotations of Ingress. If I changed it to:</p> <pre><code> annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 </code></pre> <p>The error would be:</p> <pre><code>404 page not found </code></pre> <p>and logs:</p> <pre><code>I0826 09:59:21.342251 7 status.go:281] &quot;updating Ingress status&quot; namespace=&quot;default&quot; ingress=&quot;online-ingress&quot; currentValue=[] newValue=[{IP:192.168.49.2 Hostname: Ports:[]}] I0826 09:59:21.347860 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;Ingress&quot;, Namespace:&quot;default&quot;, Name:&quot;online-ingress&quot;, UID:&quot;8ba6fe97-315d-4f00-82a6-17132095fab4&quot;, APIVersion:&quot;networking.k8s.io/v1beta1&quot;, ResourceVersion:&quot;23760&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'Sync' Scheduled for sync 192.168.49.1 - - [26/Aug/2021:09:59:32 +0000] &quot;GET /userOnline HTTP/1.1&quot; 404 19 &quot;-&quot; &quot;curl/7.68.0&quot; 80 0.002 [default-online-8080] [] 172.17.0.5:5001 19 0.000 404 856ddd3224bbe2bde9d7144b857168e0 </code></pre> <p>Other infos.</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE online LoadBalancer 10.111.34.87 &lt;pending&gt; 8080:32323/TCP 6h54m </code></pre> <p>The example I mentioned above is a <code>NodePort</code> service and mine is a <code>LoadBalancer</code>, that's the biggest difference. But I don't know why it does not work for me.</p>
<p>Moving this out of comments so it will be visible.</p> <hr /> <p><strong>Ingress</strong></p> <p>The main issue was with the <code>path</code> in the ingress rule, since the application serves traffic on <code>online/userOnline</code>. If requests go to <code>online</code> alone, the ingress returns <code>404</code>.</p> <p>The rewrite annotation is not needed in this case either.</p> <p><code>ingress.yaml</code> should look like:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: online-ingress # annotations: spec: rules: - host: online http: paths: - path: /userOnline pathType: Prefix backend: service: name: online port: number: 8080 </code></pre> <p>More details about <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">ingress</a>.</p> <hr /> <p><strong>LoadBalancer on Minikube</strong></p> <p>Since minikube is considered a <code>bare metal</code> installation, to get an <code>external IP</code> for a service/ingress it's necessary to use the specially designed <code>metallb</code> solution.</p> <p><a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a> is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols.</p> <p>It ships as an add-on for <code>minikube</code> and can be enabled with:</p> <pre><code>minikube addons enable metallb </code></pre> <p>It also needs a <code>configMap</code> with the address pool setup; please refer to the <a href="https://metallb.universe.tf/configuration/" rel="nofollow noreferrer">metallb configuration</a>. A sketch is shown below.</p>
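<p>A minimal sketch of that configMap (the address range is an assumption and must be a free range on the minikube network, e.g. within the <code>192.168.49.0/24</code> subnet used by the docker driver):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.49.100-192.168.49.120
</code></pre>
<p>The same values can also be entered interactively with <code>minikube addons configure metallb</code>. After that, the <code>online</code> service should get an external IP instead of staying in <code>&lt;pending&gt;</code>.</p>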
<p>I try to deploy the web-app <a href="https://akauntuing.com" rel="nofollow noreferrer">akaunting</a> to a k8s cluster.</p> <p>Therefore, I converted the given (and working!) <a href="https://github.com/akaunting/docker" rel="nofollow noreferrer">docker-compose script</a> using <a href="https://kompose.io" rel="nofollow noreferrer">kompose</a> to k8s yaml files.</p> <p>When I try to apply these files (given <code>AKAUNTING_SETUP=true</code>), I get the following error; I have no clue how to fix it...</p> <pre><code>Call to a member function get() on null Setting locale en-US Creating database tables Connecting to database akaunting@akaunting-db:3306 Creating company [2021-11-22 13:14:32] production.ERROR: Call to a member function get() on null {&quot;exception&quot;:&quot;[object] (Error(code: 0): Call to a member function get() on null at /var/www/html/app/Abstracts/Commands/Module.php:59) [stacktrace] #0 /var/www/html/overrides/akaunting/laravel-module/Commands/InstallCommand.php(50): App\\Abstracts\\Commands\\Module-&gt;createHistory('installed') #1 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(36): Akaunting\\Module\\Commands\\InstallCommand-&gt;handle() #2 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/Util.php(40): Illuminate\\Container\\BoundMethod::Illuminate\\Container\\{closure}() #3 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(93): Illuminate\\Container\\Util::unwrapIfClosure(Object(Closure)) #4 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(37): Illuminate\\Container\\BoundMethod::callBoundMethod(Object(Illuminate\\Foundation\\Application), Array, Object(Closure)) #5 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/Container.php(653): Illuminate\\Container\\BoundMethod::call(Object(Illuminate\\Foundation\\Application), Array, Array, NULL) #6 /var/www/html/vendor/laravel/framework/src/Illuminate/Console/Command.php(136): Illuminate\\Container\\Container-&gt;call(Array) #7 /var/www/html/vendor/symfony/console/Command/Command.php(299): Illuminate\\Console\\Command-&gt;execute(Object(Symfony\\Component\\Console\\Input\\ArrayInput), Object(Illuminate\\Console\\OutputStyle)) #8 /var/www/html/vendor/laravel/framework/src/Illuminate/Console/Command.php(121): Symfony\\Component\\Console\\Command\\Command-&gt;run(Object(Symfony\\Component\\Console\\Input\\ArrayInput), Object(Illuminate\\Console\\OutputStyle)) #9 /var/www/html/vendor/symfony/console/Application.php(978): Illuminate\\Console\\Command-&gt;run(Object(Symfony\\Component\\Console\\Input\\ArrayInput), Object(Symfony\\Component\\Console\\Output\\BufferedOutput)) #10 /var/www/html/vendor/symfony/console/Application.php(295): Symfony\\Component\\Console\\Application-&gt;doRunCommand(Object(Akaunting\\Module\\Commands\\InstallCommand), Object(Symfony\\Component\\Console\\Input\\ArrayInput), Object(Symfony\\Component\\Console\\Output\\BufferedOutput)) #11 /var/www/html/vendor/symfony/console/Application.php(167): Symfony\\Component\\Console\\Application-&gt;doRun(Object(Symfony\\Component\\Console\\Input\\ArrayInput), Object(Symfony\\Component\\Console\\Output\\BufferedOutput)) #12 /var/www/html/vendor/laravel/framework/src/Illuminate/Console/Application.php(94): Symfony\\Component\\Console\\Application-&gt;run(Object(Symfony\\Component\\Console\\Input\\ArrayInput), Object(Symfony\\Component\\Console\\Output\\BufferedOutput)) #13 
/var/www/html/vendor/laravel/framework/src/Illuminate/Console/Application.php(186): Illuminate\\Console\\Application-&gt;run(Object(Symfony\\Component\\Console\\Input\\ArrayInput), Object(Symfony\\Component\\Console\\Output\\BufferedOutput)) #14 /var/www/html/vendor/laravel/framework/src/Illuminate/Foundation/Console/Kernel.php(263): Illuminate\\Console\\Application-&gt;call('module:install', Array, NULL) #15 /var/www/html/vendor/laravel/framework/src/Illuminate/Support/Facades/Facade.php(261): Illuminate\\Foundation\\Console\\Kernel-&gt;call('module:install', Array) #16 /var/www/html/database/seeds/Modules.php(32): Illuminate\\Support\\Facades\\Facade::__callStatic('call', Array) #17 /var/www/html/database/seeds/Modules.php(20): Database\\Seeds\\Modules-&gt;create() #18 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(36): Database\\Seeds\\Modules-&gt;run() #19 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/Util.php(40): Illuminate\\Container\\BoundMethod::Illuminate\\Container\\{closure}() #20 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(93): Illuminate\\Container\\Util::unwrapIfClosure(Object(Closure)) #21 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(37): Illuminate\\Container\\BoundMethod::callBoundMethod(Object(Illuminate\\Foundation\\Application), Array, Object(Closure)) #22 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/Container.php(653): Illuminate\\Container\\BoundMethod::call(Object(Illuminate\\Foundation\\Application), Array, Array, NULL) #23 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Seeder.php(149): Illuminate\\Container\\Container-&gt;call(Array, Array) #24 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Seeder.php(49): Illuminate\\Database\\Seeder-&gt;__invoke(Array) #25 /var/www/html/database/seeds/Company.php(20): Illuminate\\Database\\Seeder-&gt;call('Database\\\\Seeds\\\\...') #26 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(36): Database\\Seeds\\Company-&gt;run() #27 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/Util.php(40): Illuminate\\Container\\BoundMethod::Illuminate\\Container\\{closure}() #28 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(93): Illuminate\\Container\\Util::unwrapIfClosure(Object(Closure)) #29 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(37): Illuminate\\Container\\BoundMethod::callBoundMethod(Object(Illuminate\\Foundation\\Application), Array, Object(Closure)) #30 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/Container.php(653): Illuminate\\Container\\BoundMethod::call(Object(Illuminate\\Foundation\\Application), Array, Array, NULL) #31 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Seeder.php(149): Illuminate\\Container\\Container-&gt;call(Array, Array) #32 /var/www/html/app/Console/Commands/CompanySeed.php(36): Illuminate\\Database\\Seeder-&gt;__invoke() #33 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(36): App\\Console\\Commands\\CompanySeed-&gt;handle() #34 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/Util.php(40): Illuminate\\Container\\BoundMethod::Illuminate\\Container\\{closure}() #35 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(93): Illuminate\\Container\\Util::unwrapIfClosure(Object(Closure)) #36 
/var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(37): Illuminate\\Container\\BoundMethod::callBoundMethod(Object(Illuminate\\Foundation\\Application), Array, Object(Closure)) #37 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/Container.php(653): Illuminate\\Container\\BoundMethod::call(Object(Illuminate\\Foundation\\Application), Array, Array, NULL) #38 /var/www/html/vendor/laravel/framework/src/Illuminate/Console/Command.php(136): Illuminate\\Container\\Container-&gt;call(Array) #39 /var/www/html/vendor/symfony/console/Command/Command.php(299): Illuminate\\Console\\Command-&gt;execute(Object(Symfony\\Component\\Console\\Input\\ArrayInput), Object(Illuminate\\Console\\OutputStyle)) #40 /var/www/html/vendor/laravel/framework/src/Illuminate/Console/Command.php(121): Symfony\\Component\\Console\\Command\\Command-&gt;run(Object(Symfony\\Component\\Console\\Input\\ArrayInput), Object(Illuminate\\Console\\OutputStyle)) #41 /var/www/html/vendor/symfony/console/Application.php(978): Illuminate\\Console\\Command-&gt;run(Object(Symfony\\Component\\Console\\Input\\ArrayInput), Object(Symfony\\Component\\Console\\Output\\BufferedOutput)) #42 /var/www/html/vendor/symfony/console/Application.php(295): Symfony\\Component\\Console\\Application-&gt;doRunCommand(Object(App\\Console\\Commands\\CompanySeed), Object(Symfony\\Component\\Console\\Input\\ArrayInput), Object(Symfony\\Component\\Console\\Output\\BufferedOutput)) #43 /var/www/html/vendor/symfony/console/Application.php(167): Symfony\\Component\\Console\\Application-&gt;doRun(Object(Symfony\\Component\\Console\\Input\\ArrayInput), Object(Symfony\\Component\\Console\\Output\\BufferedOutput)) #44 /var/www/html/vendor/laravel/framework/src/Illuminate/Console/Application.php(94): Symfony\\Component\\Console\\Application-&gt;run(Object(Symfony\\Component\\Console\\Input\\ArrayInput), Object(Symfony\\Component\\Console\\Output\\BufferedOutput)) #45 /var/www/html/vendor/laravel/framework/src/Illuminate/Console/Application.php(186): Illuminate\\Console\\Application-&gt;run(Object(Symfony\\Component\\Console\\Input\\ArrayInput), Object(Symfony\\Component\\Console\\Output\\BufferedOutput)) #46 /var/www/html/vendor/laravel/framework/src/Illuminate/Foundation/Console/Kernel.php(263): Illuminate\\Console\\Application-&gt;call('company:seed', Array, NULL) #47 /var/www/html/vendor/laravel/framework/src/Illuminate/Support/Facades/Facade.php(261): Illuminate\\Foundation\\Console\\Kernel-&gt;call('company:seed', Array) #48 /var/www/html/app/Jobs/Common/CreateCompany.php(50): Illuminate\\Support\\Facades\\Facade::__callStatic('call', Array) #49 /var/www/html/app/Jobs/Common/CreateCompany.php(27): App\\Jobs\\Common\\CreateCompany-&gt;callSeeds() #50 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Concerns/ManagesTransactions.php(29): App\\Jobs\\Common\\CreateCompany-&gt;App\\Jobs\\Common\\{closure}(Object(Illuminate\\Database\\MySqlConnection)) #51 /var/www/html/vendor/laravel/framework/src/Illuminate/Database/DatabaseManager.php(388): Illuminate\\Database\\Connection-&gt;transaction(Object(Closure)) #52 /var/www/html/vendor/laravel/framework/src/Illuminate/Support/Facades/Facade.php(261): Illuminate\\Database\\DatabaseManager-&gt;__call('transaction', Array) #53 /var/www/html/app/Jobs/Common/CreateCompany.php(30): Illuminate\\Support\\Facades\\Facade::__callStatic('transaction', Array) #54 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(36): 
App\\Jobs\\Common\\CreateCompany-&gt;handle() #55 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/Util.php(40): Illuminate\\Container\\BoundMethod::Illuminate\\Container\\{closure}() #56 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(93): Illuminate\\Container\\Util::unwrapIfClosure(Object(Closure)) #57 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(37): Illuminate\\Container\\BoundMethod::callBoundMethod(Object(Illuminate\\Foundation\\Application), Array, Object(Closure)) #58 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/Container.php(653): Illuminate\\Container\\BoundMethod::call(Object(Illuminate\\Foundation\\Application), Array, Array, NULL) #59 /var/www/html/vendor/laravel/framework/src/Illuminate/Bus/Dispatcher.php(128): Illuminate\\Container\\Container-&gt;call(Array) #60 /var/www/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(128): Illuminate\\Bus\\Dispatcher-&gt;Illuminate\\Bus\\{closure}(Object(App\\Jobs\\Common\\CreateCompany)) #61 /var/www/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(103): Illuminate\\Pipeline\\Pipeline-&gt;Illuminate\\Pipeline\\{closure}(Object(App\\Jobs\\Common\\CreateCompany)) #62 /var/www/html/vendor/laravel/framework/src/Illuminate/Bus/Dispatcher.php(132): Illuminate\\Pipeline\\Pipeline-&gt;then(Object(Closure)) #63 /var/www/html/vendor/laravel/framework/src/Illuminate/Bus/Dispatcher.php(98): Illuminate\\Bus\\Dispatcher-&gt;dispatchNow(Object(App\\Jobs\\Common\\CreateCompany), false) #64 /var/www/html/vendor/laravel/framework/src/Illuminate/Foundation/helpers.php(405): Illuminate\\Bus\\Dispatcher-&gt;dispatchSync(Object(App\\Jobs\\Common\\CreateCompany), NULL) #65 /var/www/html/app/Utilities/Installer.php(241): dispatch_sync(Object(App\\Jobs\\Common\\CreateCompany)) #66 /var/www/html/app/Console/Commands/Install.php(82): App\\Utilities\\Installer::createCompany('Schokoladensouf...', 'finance@schokol...', 'en-US') #67 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(36): App\\Console\\Commands\\Install-&gt;handle() #68 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/Util.php(40): Illuminate\\Container\\BoundMethod::Illuminate\\Container\\{closure}() #69 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(93): Illuminate\\Container\\Util::unwrapIfClosure(Object(Closure)) #70 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(37): Illuminate\\Container\\BoundMethod::callBoundMethod(Object(Illuminate\\Foundation\\Application), Array, Object(Closure)) #71 /var/www/html/vendor/laravel/framework/src/Illuminate/Container/Container.php(653): Illuminate\\Container\\BoundMethod::call(Object(Illuminate\\Foundation\\Application), Array, Array, NULL) #72 /var/www/html/vendor/laravel/framework/src/Illuminate/Console/Command.php(136): Illuminate\\Container\\Container-&gt;call(Array) #73 /var/www/html/vendor/symfony/console/Command/Command.php(299): Illuminate\\Console\\Command-&gt;execute(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Illuminate\\Console\\OutputStyle)) #74 /var/www/html/vendor/laravel/framework/src/Illuminate/Console/Command.php(121): Symfony\\Component\\Console\\Command\\Command-&gt;run(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Illuminate\\Console\\OutputStyle)) #75 /var/www/html/vendor/symfony/console/Application.php(978): 
Illuminate\\Console\\Command-&gt;run(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput)) #76 /var/www/html/vendor/symfony/console/Application.php(295): Symfony\\Component\\Console\\Application-&gt;doRunCommand(Object(App\\Console\\Commands\\Install), Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput)) #77 /var/www/html/vendor/symfony/console/Application.php(167): Symfony\\Component\\Console\\Application-&gt;doRun(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput)) #78 /var/www/html/vendor/laravel/framework/src/Illuminate/Console/Application.php(94): Symfony\\Component\\Console\\Application-&gt;run(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput)) #79 /var/www/html/vendor/laravel/framework/src/Illuminate/Foundation/Console/Kernel.php(129): Illuminate\\Console\\Application-&gt;run(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput)) #80 /var/www/html/artisan(22): Illuminate\\Foundation\\Console\\Kernel-&gt;handle(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput)) #81 {main} &quot;} In Module.php line 59: Call to a member function get() on null </code></pre> <p>If you need more information to answer, feel free to ask. Any help is appreciated.</p> <p><em>Moved from <a href="https://serverfault.com/q/1084263/940982">ServerFault</a></em></p>
<h2>Workaround</h2> <p>I found the root cause of the problem. It turns out that the default Dockerfile image does NOT have the modules included; these modules seem to be downloaded after the first successful login (which, interestingly, requires the setup to finish first - a catch-22).</p> <p>Fix:</p> <ul> <li>Download the following 3 modules from their GitHub: offline-payments, paypal-standard and bc21.</li> <li>Extract the contents of these into 3 folders on the akaunting-modules volume; the folder names need to be OfflinePayments, PaypalStandard and BC21 (likely case sensitive).</li> <li>Restart the containers.</li> </ul> <p>Let me know if that helps. I got to the Wizard steps by doing the above. After you pass the Wizard steps, remember to restart with <code>AKAUNTING_SETUP=false</code> (the checks for existing records don't seem to be great).</p>
<p>I'm working on a Python application which is deployed as a &quot;Deployment&quot; on k8s. The application scales up based on the number of SQS messages. When the message count is zero, the pods are terminated without completing their processing. How can I allow a pod to continue processing until it finishes the requests it is running, while scaling based on memory and cpu usage?</p>
<p>To allow a pod to finish processing its in-flight work before termination, you can use the suggestions below:</p> <p>1. Use a <code>preStop</code> hook and graceful shutdown to delay the termination of the pod (a sketch follows at the end of this answer), so that:</p> <ul> <li>The application will wait a few seconds and then stop accepting new connections.</li> <li>The application will wait until all requests are complete, and close all idle keepalive connections.</li> </ul> <p>You can find more information about it in the blogs below:</p> <ul> <li><a href="https://blog.palark.com/graceful-shutdown-in-kubernetes-is-not-always-trivial/" rel="nofollow noreferrer">Blog - Graceful shutdown</a> by Ilya Andreev with an NGINX example.</li> <li><a href="https://ubuntu.com/blog/avoiding-dropped-connections-in-nginx-containers-with-stopsignal-sigquit" rel="nofollow noreferrer">Blog - Graceful shutdown</a> by Robin Winslow.</li> </ul> <p>2. You can use HPA to scale the deployment based on memory and cpu usage, so that at least one pod remains active to take requests. You can find info related to scaling a deployment via CPU and Memory in this <a href="https://granulate.io/blog/kubernetes-autoscaling-the-hpa/#:%7E:text=of%20the%20cluster.-,HPA%20Example%3A%20Scaling%20a%20Deployment%20via%20CPU%20and%20Memory%20Metrics,-The%20following%20is" rel="nofollow noreferrer">blog - HPA</a> authored by Naom Salinger.</p> <p>3. In addition, you can also use <a href="https://www.alibabacloud.com/help/en/container-service-for-kubernetes/latest/horizontal-expansion-of-containers-based-on-rabbitmq-indicator" rel="nofollow noreferrer">horizontal pod autoscaling based on the metrics of Message Queue for RabbitMQ</a>.</p>
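<p>A minimal sketch for point 1 (the names, image, sleep duration and grace period are placeholder values to adapt to your processing time):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: sqs-worker                         # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sqs-worker
  template:
    metadata:
      labels:
        app: sqs-worker
    spec:
      terminationGracePeriodSeconds: 120   # time the app gets to finish in-flight work
      containers:
        - name: worker
          image: my-python-worker:latest   # placeholder image
          lifecycle:
            preStop:
              exec:
                command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;sleep 10&quot;]   # runs before SIGTERM is sent
</code></pre>
<p>On top of that, the Python process itself should trap SIGTERM, stop pulling new SQS messages, and exit once the current message is processed; Kubernetes only force-kills the pod after <code>terminationGracePeriodSeconds</code> expires.</p>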
<p>I have a set of environment variables in my <code>deployment</code> using <code>EnvFrom</code> and <code>configMapRef</code>. The environment variables held in these configMaps were set by kustomize originally from json files.</p> <pre><code>spec.template.spec.containers[0]. envFrom: - secretRef: name: eventstore-login - configMapRef: name: environment - configMapRef: name: eventstore-connection - configMapRef: name: graylog-connection - configMapRef: name: keycloak - configMapRef: name: database </code></pre> <p>The issue is that it's not possible for me to access the specific environment variables directly.</p> <p>Here is the result of running <code>printenv</code> in the pod:</p> <pre><code>... eventstore-login={ &quot;EVENT_STORE_LOGIN&quot;: &quot;admin&quot;, &quot;EVENT_STORE_PASS&quot;: &quot;changeit&quot; } evironment={ &quot;LOTUS_ENV&quot;:&quot;dev&quot;, &quot;DEV_ENV&quot;:&quot;dev&quot; } eventstore={ &quot;EVENT_STORE_HOST&quot;: &quot;eventstore-cluster&quot;, &quot;EVENT_STORE_PORT&quot;: &quot;1113&quot; } graylog={ &quot;GRAYLOG_HOST&quot;:&quot;&quot;, &quot;GRAYLOG_SERVICE_PORT_GELF_TCP&quot;:&quot;&quot; } ... </code></pre> <p>This means that from my nodejs app I need to do something like this</p> <pre><code>&gt; process.env.graylog '{\n &quot;GRAYLOG_HOST&quot;:&quot;&quot;,\n &quot;GRAYLOG_SERVICE_PORT_GELF_TCP&quot;:&quot;&quot;\n}\n' </code></pre> <p>This only returns the json string that corresponds to my original json file. But I want to be able to do something like this:</p> <pre><code>process.env.GRAYLOG_HOST </code></pre> <p>To retrieve my environment variables. But I don't want to have to modify my deployment to look something like this:</p> <pre><code> env: - name: NODE_ENV value: dev - name: EVENT_STORE_HOST valueFrom: secretKeyRef: name: eventstore-secret key: EVENT_STORE_HOST - name: EVENT_STORE_PORT valueFrom: secretKeyRef: name: eventstore-secret key: EVENT_STORE_PORT - name: KEYCLOAK_REALM_PUBLIC_KEY valueFrom: configMapKeyRef: name: keycloak-local key: KEYCLOAK_REALM_PUBLIC_KEY </code></pre> <p>Where every variable is explicitly declared. I could do this but this is more of a pain to maintain.</p>
<h2>Short answer:</h2> <p>You will need to define variables explicitly or change the configmaps so they have a <code>1 environment variable = 1 value</code> structure; this way you will be able to refer to them using <code>envFrom</code>. E.g.:</p> <pre><code>&quot;apiVersion&quot;: &quot;v1&quot;, &quot;data&quot;: { &quot;EVENT_STORE_LOGIN&quot;: &quot;admin&quot;, &quot;EVENT_STORE_PASS&quot;: &quot;changeit&quot; }, &quot;kind&quot;: &quot;ConfigMap&quot;, </code></pre> <h2>More details</h2> <p><code>Configmaps</code> are key-value pairs, which means for one key there's only one value; <code>configmaps</code> can take a <code>string</code> as data, but they can't work with a <code>map</code>.</p> <p>I tried editing the <code>configmap</code> manually to confirm the above and got the following:</p> <pre><code>invalid type for io.k8s.api.core.v1.ConfigMap.data: got &quot;map&quot;, expected &quot;string&quot; </code></pre> <p>This is the reason why environment comes up as one string instead of a structure.</p> <p>For example, this is how the <code>configmap</code> generated from the json file looks:</p> <pre><code>$ kubectl describe cm test2 Name: test2 Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; Data ==== test.json: ---- environment={ &quot;LOTUS_ENV&quot;:&quot;dev&quot;, &quot;DEV_ENV&quot;:&quot;dev&quot; } </code></pre> <p>And this is how it's stored in kubernetes:</p> <pre><code>$ kubectl get cm test2 -o json { &quot;apiVersion&quot;: &quot;v1&quot;, &quot;data&quot;: { &quot;test.json&quot;: &quot;evironment={\n \&quot;LOTUS_ENV\&quot;:\&quot;dev\&quot;,\n \&quot;DEV_ENV\&quot;:\&quot;dev\&quot;\n}\n&quot; }, </code></pre> <p>In other words, the observed behaviour is expected.</p> <h2>Useful links:</h2> <ul> <li><a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">ConfigMaps</a></li> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">Configure a Pod to Use a ConfigMap</a></li> </ul>
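<p>Since the question mentions the configMaps were generated by kustomize from json files, one way to get the flat structure without maintaining <code>env:</code> entries by hand is to feed kustomize a <code>KEY=VALUE</code> env file instead of json (a sketch; the file name is hypothetical):</p>
<pre><code># eventstore.env
EVENT_STORE_HOST=eventstore-cluster
EVENT_STORE_PORT=1113
</code></pre>
<pre><code># kustomization.yaml (fragment)
configMapGenerator:
  - name: eventstore-connection
    envs:
      - eventstore.env
</code></pre>
<p>The generated configMap then has one key per variable, and the existing <code>envFrom</code> block in the deployment exposes them directly as <code>process.env.EVENT_STORE_HOST</code> and so on.</p>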
<p>I'm trying to deploy a knative service in my local Kubernetes cluster (Docker Desktop for Windows). I could create a knative service when I use images from the Google Cloud container registry (gcr.io/knative-samples/helloworld-go) but I'm facing an issue when I use images from Docker Hub. Please note that I am not using any private repository in the Docker registry.</p> <p>The revision.serving will be in status <strong>unknown</strong> for the first 10 minutes and later changes to false with the reason <strong>ProgressDeadlineExceeded</strong>. The knative service fails with reason <strong>RevisionMissing</strong>. I have tried using the official hello-world image from Docker Hub and the response is the same. The issue occurs only when I'm using images from the official Docker registry but not when GCR is used.</p> <p>Below is the Kubernetes manifest file I used to create a knative service.</p> <pre><code>apiVersion: serving.knative.dev/v1 kind: Service metadata: name: ********** spec: template: metadata: # This is the name of our new &quot;Revision,&quot; it must follow the convention {service-name}-{revision-name} name: *******-rev1 spec: containers: - image: docker.io/*****/****:v1 imagePullPolicy: IfNotPresent ports: - containerPort: 3007 </code></pre> <p><a href="https://i.stack.imgur.com/dHT0w.png" rel="nofollow noreferrer">screenshot of kubernetes resources</a></p> <p>Note: I'm using knative-serving version 1.0. Edit: I have hidden the image name.</p> <p><a href="https://i.stack.imgur.com/xaqY1.png" rel="nofollow noreferrer">status of revision.serving</a></p>
<p>Finally, I resolved the issue by removing the ports section in the YAML file. If the container port is included, the application gets started in a container (I have verified the logs) but it never receives any traffic and fails with the ProgressDeadlineExceeded error.</p>
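<p>For completeness, the working manifest is simply the one from the question without the <code>ports</code> block (Knative then assumes the container listens on port 8080, or on the value of the <code>PORT</code> environment variable it injects, so the app may need to listen there instead of 3007):</p>
<pre><code>apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello              # placeholder, use your service name
spec:
  template:
    metadata:
      name: hello-rev1     # must follow the convention {service-name}-{revision-name}
    spec:
      containers:
        - image: docker.io/&lt;user&gt;/&lt;image&gt;:v1
          imagePullPolicy: IfNotPresent
</code></pre>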
<p>I am using DSBulk to unload data into CSV from a DSE cluster installed under Kubernetes. My cluster consists of 9 Kubernetes Pods, each with 120 GB of RAM.</p> <p>I have monitored the resources while unloading the data and observed that the more data is fetched into CSV, the more RAM is utilised, and pods are restarting due to lack of memory.</p> <p>If one Pod is down at a time the DSBulk unload won't fail, but if 2 Pods are down the unload will fail with the exception:</p> <blockquote> <p><strong>Cassandra timeout during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded).</strong></p> </blockquote> <p>Is there a way to avoid this memory exhaustion, or is there a way to increase the timeout duration?</p> <p>The command I am using is:</p> <pre><code>dsbulk unload -maxErrors -1 -h '[&quot; &lt; My Host &gt; &quot;]' -port 9042 -u &lt; My user name &gt; -p &lt; Password &gt; -k &lt; Key Space &gt; -t &lt; My Table &gt; -url &lt; My Table &gt; --dsbulk.executor.continuousPaging.enabled false --datastax-java-driver.basic.request.page-size 1000 --dsbulk.engine.maxConcurrentQueries 128 --driver.advanced.retry-policy.max-retries 100000 </code></pre>
<p>After a lot of trial and error, we found out the problem was with the Kubernetes Cassandra pods using the main server's memory size as <strong>Max Direct Memory Size</strong>, rather than using the pod's max assigned RAM.</p> <p>The pods were assigned 120 GB of RAM, but Cassandra on each pod was assigning 185 GB of RAM to <strong>file_cache_size</strong>, which made the unloading process fail, as Kubernetes was rebooting each Pod that utilised more than 120 GB of RAM.</p> <p>The reason is that <strong>Max Direct Memory Size</strong> is calculated as:</p> <pre><code>Max direct memory = (system memory - JVM heap size) / 2 </code></pre> <p>Each pod was using 325 GB as <strong>Max Direct Memory Size</strong>, and each pod's <strong>file_cache_size</strong> is automatically set to half of the <strong>Max Direct Memory Size</strong> value, so whenever a pod requested more than 120 GB of memory, Kubernetes would restart it.</p> <p>The solution was to set <strong>Max Direct Memory Size</strong> as an env variable in the Kubernetes cluster's yaml file with a sensible default value, or to override it by setting the <strong>file_cache_size</strong> value in each pod's Cassandra yaml file.</p>
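<p>A hedged sketch of the env-variable approach (the exact mechanism depends on the image; this assumes the container's <code>cassandra-env.sh</code> appends <code>JVM_EXTRA_OPTS</code> to the JVM options, which is the stock Cassandra behaviour, and the image tag is a placeholder):</p>
<pre><code>containers:
  - name: cassandra
    image: datastax/dse-server:6.8.x       # placeholder tag
    resources:
      limits:
        memory: 120Gi
    env:
      - name: JVM_EXTRA_OPTS
        value: &quot;-XX:MaxDirectMemorySize=50G&quot;   # keep heap + direct memory well under the 120Gi limit
</code></pre>
<p>Alternatively, setting <code>file_cache_size_in_mb</code> explicitly in each pod's <code>cassandra.yaml</code> avoids the auto-derived value that exceeded the pod limit.</p>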
<p>I am trying to understand the k8s pod autoscaler and have the following question. Even the k8s documentation does not seem to talk about it.</p> <p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">k8s pod autoscaler</a></p> <p>In the below yaml what is the &quot;status&quot; node for? Any pointers to the documentation will be of great help.</p> <pre><code>apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: php-apache spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: php-apache minReplicas: 1 maxReplicas: 10 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 50 - type: Pods pods: metric: name: packets-per-second target: type: AverageValue averageValue: 1k - type: Object object: metric: name: requests-per-second describedObject: apiVersion: networking.k8s.io/v1 kind: Ingress name: main-route target: type: Value value: 10k status: observedGeneration: 1 lastScaleTime: &lt;some-time&gt; currentReplicas: 1 desiredReplicas: 1 currentMetrics: - type: Resource resource: name: cpu current: averageUtilization: 0 averageValue: 0 - type: Object object: metric: name: requests-per-second describedObject: apiVersion: networking.k8s.io/v1 kind: Ingress name: main-route current: value: 10k </code></pre>
<p><code>status</code> describes the current state of the object, supplied and updated by the Kubernetes system and its components. The master node (control plane) continually and actively manages every object's actual state to match the desired state you supplied.</p> <p><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status" rel="nofollow noreferrer">Kubernetes status</a></p>
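<p>You normally don't write the <code>status</code> block yourself; it can be inspected on a live object, for example:</p>
<pre><code>kubectl get hpa php-apache -o yaml    # full object, including the status stanza
kubectl describe hpa php-apache       # human-readable view of current/desired replicas and metrics
</code></pre>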
<p>I have setup a private registry (Kubernetes) using the following configuration based on this repo <a href="https://github.com/sleighzy/k8s-docker-registry" rel="nofollow noreferrer">https://github.com/sleighzy/k8s-docker-registry</a>:</p> <p>Create the password file, see the Apache htpasswd documentation for more information on this command.</p> <pre><code>htpasswd -b -c -B htpasswd docker-registry registry-password! Adding password for user docker-registry </code></pre> <p>Create namespace</p> <pre><code>kubectl create namespace registry </code></pre> <p>Add the generated password file as a Kubernetes secret.</p> <pre><code>kubectl create secret generic basic-auth --from-file=./htpasswd -n registry secret/basic-auth created </code></pre> <p><em>registry-secrets.yaml</em></p> <pre><code>--- # https://kubernetes.io/docs/concepts/configuration/secret/ apiVersion: v1 kind: Secret metadata: name: s3 namespace: registry data: REGISTRY_STORAGE_S3_ACCESSKEY: Y2hlc0FjY2Vzc2tleU1pbmlv REGISTRY_STORAGE_S3_SECRETKEY: Y2hlc1NlY3JldGtleQ== </code></pre> <p><em>registry-service.yaml</em></p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: registry namespace: registry spec: ports: - protocol: TCP name: registry port: 5000 selector: app: registry </code></pre> <p>I am using my MinIO (already deployed and running)</p> <p><em>registry-deployment.yaml</em></p> <pre><code>--- kind: Deployment apiVersion: apps/v1 metadata: namespace: registry name: registry labels: app: registry spec: replicas: 1 selector: matchLabels: app: registry template: metadata: labels: app: registry spec: containers: - name: registry image: registry:2 ports: - name: registry containerPort: 5000 volumeMounts: - name: credentials mountPath: /auth readOnly: true env: - name: REGISTRY_LOG_ACCESSLOG_DISABLED value: &quot;true&quot; - name: REGISTRY_HTTP_HOST value: &quot;https://registry.mydomain.io:5000&quot; - name: REGISTRY_LOG_LEVEL value: info - name: REGISTRY_HTTP_SECRET value: registry-http-secret - name: REGISTRY_AUTH_HTPASSWD_REALM value: homelab - name: REGISTRY_AUTH_HTPASSWD_PATH value: /auth/htpasswd - name: REGISTRY_STORAGE value: s3 - name: REGISTRY_STORAGE_S3_REGION value: ignored-cos-minio - name: REGISTRY_STORAGE_S3_REGIONENDPOINT value: charity.api.com -&gt; This is the valid MinIO API - name: REGISTRY_STORAGE_S3_BUCKET value: &quot;charitybucket&quot; - name: REGISTRY_STORAGE_DELETE_ENABLED value: &quot;true&quot; - name: REGISTRY_HEALTH_STORAGEDRIVER_ENABLED value: &quot;false&quot; - name: REGISTRY_STORAGE_S3_ACCESSKEY valueFrom: secretKeyRef: name: s3 key: REGISTRY_STORAGE_S3_ACCESSKEY - name: REGISTRY_STORAGE_S3_SECRETKEY valueFrom: secretKeyRef: name: s3 key: REGISTRY_STORAGE_S3_SECRETKEY volumes: - name: credentials secret: secretName: basic-auth </code></pre> <p>I have created an entry in /etc/hosts</p> <blockquote> <p>192.168.xx.xx registry.mydomain.io</p> </blockquote> <p><em>registry-IngressRoute.yaml</em></p> <pre><code>--- apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: registry namespace: registry spec: entryPoints: - websecure routes: - match: Host(`registry.mydomain.io`) kind: Rule services: - name: registry port: 5000 tls: certResolver: tlsresolver </code></pre> <p>I have accees to the private registry using <code>http://registry.mydomain.io:5000/</code> and it obviously returns a blank page.</p> <p>I have already pushed some images and <code>http://registry.mydomain.io:5000/v2/_catalog</code> returns:</p> <blockquote> 
<p>{&quot;repositories&quot;:[&quot;console-image&quot;,&quot;hello-world&quot;,&quot;hello-world-2&quot;,&quot;hello-world-ha&quot;,&quot;myfirstimage&quot;,&quot;ubuntu-my&quot;]}</p> </blockquote> <p>Above configuration seems to work.</p> <p>Then I tried to add a registry-ui provide by joxit with the following configuration:</p> <p><em>registry-ui-service.yaml</em></p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: registry-ui namespace: registry spec: ports: - protocol: TCP name: registry-ui port: 80 selector: app: registry-ui </code></pre> <p><em>registry-ui-deployment.yaml</em></p> <pre><code>--- kind: Deployment apiVersion: apps/v1 metadata: namespace: registry name: registry-ui labels: app: registry-ui spec: replicas: 1 selector: matchLabels: app: registry-ui template: metadata: labels: app: registry-ui spec: containers: - name: registry-ui image: joxit/docker-registry-ui:1.5-static ports: - name: registry-ui containerPort: 80 env: - name: REGISTRY_URL value: https://registry.mydomain.io - name: SINGLE_REGISTRY value: &quot;true&quot; - name: REGISTRY_TITLE value: &quot;CHARITY Registry UI&quot; - name: DELETE_IMAGES value: &quot;true&quot; </code></pre> <p><em>registry-ui-ingress-route.yaml</em></p> <pre><code>--- apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: registry-ui namespace: registry spec: entryPoints: - websecure routes: - match: Host(`registry.mydomain.io`) &amp;&amp; PathPrefix(`/ui/`) kind: Rule services: - name: registry-ui port: 80 middlewares: - name: stripprefix tls: certResolver: tlsresolver --- apiVersion: traefik.containo.us/v1alpha1 kind: Middleware metadata: name: stripprefix namespace: registry spec: stripPrefix: prefixes: - /ui/ </code></pre> <p>I have access to the browser ui at <code>https://registry.mydomain.io/ui/</code>, however it returns nothing.</p> <p>Am I missing something here?</p>
<p>As the owner of that repository, there may be something missing here. Your <code>IngressRoute</code> rule has an <code>entryPoint</code> of <code>websecure</code> and a <code>certResolver</code> of <code>tlsresolver</code>. This is intended to be the https entrypoint for Traefik; see my other repository <a href="https://github.com/sleighzy/k3s-traefik-v2-kubernetes-crd" rel="nofollow noreferrer">https://github.com/sleighzy/k3s-traefik-v2-kubernetes-crd</a> and the associated Traefik documentation on which this Docker Registry repo is based.</p> <p>Can you review your Traefik deployment to ensure that you have this entrypoint, and that you also have this certificate resolver along with a generated https certificate that it is using? Can you also check the Traefik logs for any errors during startup (e.g. missing certs) and for any access log entries that might indicate why this is not being routed?</p> <p>If you don't have these items set up, you could help narrow this down further by changing the <code>IngressRoute</code> config in your <code>registry-ui-ingress-route.yaml</code> manifest to use just the <code>web</code> entrypoint, removing the <code>tls</code> section, and then reapplying it. This means you can access the UI over http to at least rule out any https issues.</p>
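<p>For reference, a minimal sketch of that http-only test variant, assuming the rest of your manifest stays exactly as you posted it (only the entrypoint changes and the <code>tls</code> block is dropped):</p> <pre><code>---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry-ui
  namespace: registry
spec:
  entryPoints:
    - web                 # plain http entrypoint instead of websecure
  routes:
    - match: Host(`registry.mydomain.io`) &amp;&amp; PathPrefix(`/ui/`)
      kind: Rule
      services:
        - name: registry-ui
          port: 80
      middlewares:
        - name: stripprefix
  # tls section removed on purpose for this http-only test
</code></pre> <p>If the UI loads over http, the problem is on the TLS/certificate side rather than in the UI deployment itself.</p>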
<p>I have a VPS with a public domain. I installed Kubernetes with the Docker driver and followed <a href="https://kubernetes.io/docs/tutorials/hello-minikube/" rel="nofollow noreferrer">this instruction</a> to expose a simple service on the internet, but I am faced with a <code>Connection refused</code> error.</p> <p>I attached a screenshot of my k8s status below:</p> <p><a href="https://i.stack.imgur.com/CdkBG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CdkBG.png" alt="enter image description here" /></a></p> <p>I also tried to create an Ingress and a LoadBalancer service, but with no successful result.</p>
<p>In order to expose an application you need 4 things:</p> <ol> <li><strong>Deployment/Pod</strong> which holds your application image</li> <li><strong>Service</strong> that exposes the port of your container</li> <li><strong>Ingress</strong> which holds the incoming routing rules</li> <li><strong>Ingress Controller</strong> that manages ingresses</li> </ol> <p>If you are using the katacoda environment on the official site, the step-by-step guide is correct and has no flaws. There may be some additional configuration hidden if you are running minikube on a local machine. A rough sketch of the first three manifests is shown below.</p>
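<p>Here is that sketch; the names and image are placeholders rather than anything from your setup, and the ingress controller itself is installed separately (e.g. ingress-nginx, or the minikube ingress addon via <code>minikube addons enable ingress</code>):</p> <pre><code># Deployment: runs the application container
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app              # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: nginx:alpine  # placeholder image
          ports:
            - containerPort: 80
---
# Service: exposes the container port inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  selector:
    app: hello-app
  ports:
    - port: 80
      targetPort: 80
---
# Ingress: routes external HTTP traffic to the Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-app
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-app
                port:
                  number: 80
</code></pre>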
<p>I'm currently researching and experimenting with Kubernetes in Azure. I'm playing with AKS and the Application Gateway ingress. As I understand it, when a pod is added to a service, the endpoints are updated and the ingress controller continuously polls this information. As new endpoints are added AG is updated. As they're removed AG is also updated.</p> <p>As pods are added there will be a small delay whilst that pod is added to the AG before it receives requests. However, when pods are removed, does that delay in update result in requests being forwarded to a pod that no longer exists?</p> <p>If not, how does AG/K8S guarantee this? What behaviour could the end client potentially experience in this scenario?</p>
<p>Azure Application Gateway ingress is an ingress controller for your kubernetes deployment which allows you to use the native Azure Application Gateway to expose your application to the internet. Its purpose is to route traffic to the pods directly, while everything about pod availability, scheduling and general lifecycle management stays with kubernetes itself.</p> <p>When a pod receives the command to terminate, it does not disappear instantly. Only after that do the kube-proxies update iptables to stop directing traffic to the pod, and there may also be ingress controllers or load balancers forwarding connections directly to the pod (which is the case with an application gateway). It is impossible to eliminate this window completely, but adding a 5-10 second delay can significantly improve the user experience.</p> <p>If you need to terminate or scale down your application, you should consider the following steps:</p> <ul> <li>Wait for a few seconds and then stop accepting connections</li> <li>Close all keep-alive connections that are not in the middle of a request</li> <li>Wait for all active requests to finish</li> <li>Shut down the application completely</li> </ul> <p>Here are the exact kubernetes mechanics which will help you implement this (a combined sketch follows after the list):</p> <ul> <li><p><strong>preStop hook</strong> - this hook is called immediately before a container is terminated. It is very helpful for graceful shutdowns of an application. For example, a simple <code>sh</code> command with <code>sleep 5</code> in a preStop hook can prevent users from seeing &quot;Connection refused&quot; errors. After the pod receives an API request to be terminated, it takes some time to update iptables and let the application gateway know that this pod is out of service. Since the preStop hook is executed prior to the SIGTERM signal, it helps to bridge that gap. (an example can be found in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">attach lifecycle event</a>)</p> </li> <li><p><strong>readiness probe</strong> - this type of probe always runs on the container and defines whether the pod is ready to accept and serve requests or not. When a container's readiness probe returns success, the container can handle requests and it is added to the endpoints. If a readiness probe fails, the pod is not capable of handling requests and it is removed from the endpoints object. This works very well for newly created pods when an application takes some time to load, as well as for already running pods if an application needs time for processing. Before being removed from the endpoints the readiness probe has to fail several times; it is possible to lower this amount to a single failure using the <code>failureThreshold</code> field, however it still needs to detect one failed check. (additional information on how to set it up can be found in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes" rel="nofollow noreferrer">configure liveness readiness startup probes</a>)</p> </li> <li><p><strong>startup probe</strong> - for applications which require additional time for their first initialisation, it can be tricky to set the readiness probe parameters correctly without compromising a fast response from the application. Using the <code>failureThreshold * periodSeconds</code> fields provides this flexibility.</p> </li> <li><p><strong>terminationGracePeriod</strong> - may also be considered if an application requires more than the default 30 seconds to gracefully shut down (e.g. this is important for stateful applications)</p> </li> </ul>
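<p>To illustrate how these pieces fit together, here is a hedged sketch of a pod spec fragment; the image, health endpoint and timings are placeholders, not taken from your deployment:</p> <pre><code>spec:
  terminationGracePeriodSeconds: 60      # give the app up to 60s to shut down
  containers:
    - name: app
      image: example/app:latest          # placeholder image
      readinessProbe:                    # pod is removed from endpoints when this fails
        httpGet:
          path: /healthz                 # assumed health endpoint
          port: 8080
        periodSeconds: 5
        failureThreshold: 1
      lifecycle:
        preStop:                         # runs before SIGTERM, lets iptables/App Gateway catch up
          exec:
            command: [&quot;sh&quot;, &quot;-c&quot;, &quot;sleep 10&quot;]
</code></pre>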
<p>I am just started learning Jenkins deployment on Google Kubernetes engine. I was able to successfully deploy an application to my GKE instance. However, I couldn't figure out how to manage Nodes and Clouds.</p> <p>Any tutorial or guidance would be highly appreciated.</p>
<p>Underlying idea behind nodes: just one node may not be sufficient or effective for running multiple jobs, so to distribute the load, jobs are transferred to a different node to attain good performance.</p> <h2>Prerequisites</h2> <p>#1 : An instance (let's call it DEV) which is hosting Jenkins (git, maven, Jenkins)</p> <p>#2 : An instance (let's call it Slave) which will be used as the host machine for our new node</p> <blockquote> <p>In this machine you need to have java installed. A passwordless SSH connection should be established between the two instances. To achieve it, enable password authentication, generate a key on the main (Dev) machine and copy the public key onto the Slave machine (a sketch of this setup is shown at the end). Create a directory &quot;workspace&quot; in this machine (/home/Ubuntu/workspace)</p> </blockquote> <p>Now let's get started with the Jenkins part - go to Manage Jenkins &gt; Manage nodes and clouds</p> <blockquote> <p>By default Jenkins contains only the master node</p> </blockquote> <p>To create a new node, use the option &quot;New node&quot; available on the right side of the screen.</p> <blockquote> <p>Provide a name for the new node, mark it as a permanent agent</p> </blockquote> <p>Define the remote root directory: it is a directory defined by you.</p> <blockquote> <p>For e.g., a location like &quot;/home/Ubuntu/workspace&quot;</p> </blockquote> <p>Provide a label of your choice, for e.g., let's give the label &quot;slave_lab&quot;</p> <blockquote> <p>Label = slave_lab</p> </blockquote> <p>Now define your launch method</p> <blockquote> <p>Let's select &quot;Launch agent via execution of command on the master&quot;</p> </blockquote> <p>In the command field put a command like:</p> <blockquote> <p>ssh ubuntu@private_IP_of_slave java -jar slave.jar Note: here by private_IP_of_slave I mean the IP of the machine which will be used for our new node</p> </blockquote> <p>Now we can proceed to configure jobs to be run on our new node</p> <blockquote> <p>For that right click on your job &gt; select configure</p> </blockquote> <p>Under the general tab select the following</p> <blockquote> <p>&quot;Restrict where this project can be run&quot; and provide the label &quot;slave_lab&quot;</p> </blockquote> <p>Now when you run the job it will be executed on the slave node, not on the master node.</p>
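<p>For the passwordless connection prerequisite, a typical sequence run on the Dev (Jenkins master) machine looks roughly like this; the user name and IP are placeholders:</p> <pre><code># generate a key pair on the Dev machine (accept the defaults)
ssh-keygen -t rsa

# copy the public key to the Slave machine
ssh-copy-id ubuntu@&lt;private_IP_of_slave&gt;

# verify that login works without a password and that Java is available
ssh ubuntu@&lt;private_IP_of_slave&gt; java -version
</code></pre>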
<p>I am a bit confused about the individual roles of Kubernetes and Terraform when using them both on a project.</p> <p>Until very recently, I had a very clear understanding of both their purposes and everything made sense to me. But, then <strong>I heard in one of <a href="https://youtu.be/l5k1ai_GBDE" rel="nofollow noreferrer">Nana's videos on Terraform</a>, that Terraform was also very advanced in orchestration and I got confused</strong>.</p> <p>Here's my current understanding of both these tools:</p> <ul> <li><p><strong>Kubernetes</strong>: Orchestration software that controls many docker containers working together seamlessly. Kubernetes makes sure that new containers are deployed based on the desired infrastructure defined in configuration files (written with the help of a tool like Terraform, as IaC).</p> </li> <li><p><strong>Terraform</strong>: Tool for provisioning, configuring, and managing infrastructure as IaC.</p> </li> </ul> <p>Clarification:</p> <p>According to RedHat, “Orchestration is the <strong>automated</strong> configuration, management, and coordination of infrastructure.”</p> <p>For example, if we manually describe a Docker container as a resource, and that said Docker container is then deleted, we need to manually run that Terraform plan again for the Docker container to be re-deployed. To automatically handle this, that's where Kubernetes comes into play (orchestration).</p> <p>So, when we say that Terraform is a good tool for orchestration, what are we really saying?</p> <p>I hope someone can clear that out for me!</p>
<p>Terraform is an infrastructure-as-code tool which lets you declare resources and their desired state as code; it then determines a plan and executes it to bring the infrastructure to that desired state. Terraform needs providers to manage these resources through their respective APIs. Once resources are provisioned, terraform does not keep enforcing the desired state unless you run <code>terraform plan</code>/<code>apply</code> again to determine whether the resources still match it and to correct any drift.</p> <p>Kubernetes, on the other hand, is an expert at orchestrating container workloads: it takes care of keeping the workloads in the desired state throughout the whole lifecycle of the resources. It continuously monitors the cluster and makes changes to keep the desired state of the workloads.</p> <p>The major difference is that kubernetes is a container orchestration platform which actively maintains the desired state of container workloads (among many other features), whereas terraform is a tool which helps you write, provision and maintain the state of resources as code. It uses provider APIs to create resources and match the desired state by identifying the difference between the current state and the desired state of the resources.</p> <p>Both terraform and kubernetes can be used together. There are kubernetes providers for terraform which can help you define the desired state of your cluster resources. Once you apply the terraform state, kubernetes takes care of maintaining it.</p> <p>Kubernetes is very specific to container workload orchestration, whereas terraform can be used for any kind of resource state management, such as provisioning cloud resources, server resources or anything else that offers a terraform provider.</p> <p>A simple example to better understand the difference: you can use terraform with the docker provider to declare that you want a container, and once you apply that state the container will spin up; but if you then delete the container it won't be recreated automatically unless you run <code>terraform plan</code>/<code>apply</code> again, which will detect the drift from the desired state and recreate the container. Solving exactly this problem of continuously maintaining the desired state of container workloads is what kubernetes orchestration does. Kubernetes has many more features and much more flexibility than just container orchestration, but this is the core idea.</p> <p>I hope that helps you understand the difference. In case my understanding is wrong please correct me.</p>
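<p>A quick, purely illustrative way to see this reconciliation difference on the command line (resource names are placeholders):</p> <pre><code># Kubernetes: delete a pod owned by a Deployment and watch a replacement appear automatically
kubectl delete pod &lt;some-pod-owned-by-a-deployment&gt;
kubectl get pods -w

# Terraform: after a resource is deleted out-of-band, nothing happens until you run it again
terraform plan     # only now does Terraform notice the drift
terraform apply    # and only now is the resource recreated
</code></pre>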
<p>When creating a PVC whose storage class has <code>allowVolumeExpansion: true</code> in k8s, how does it work together with a Pod?</p> <p>If I create a PVC of 10 Gb and later try to extend it to 15 Gb, how does that work across the PV, PVC and Pod components in real time? When I try it on a PVC, it is not extended as expected.</p> <p>Could you help me identify the logic behind this? The docs refer to extending the PVC, but I am not sure how it is applicable to the PV (the source).</p>
<p>Welcome to the community!</p> <p>The <code>allowVolumeExpansion</code> feature was introduced in kubernetes 1.11 and it works only with specific volume types.</p> <p>Types of volumes supported:</p> <ul> <li>gcePersistentDisk</li> <li>awsElasticBlockStore</li> <li>Cinder</li> <li>glusterfs</li> <li>rbd</li> <li>Azure File</li> <li>Azure Disk</li> <li>Portworx</li> <li>FlexVolumes</li> <li>CSI</li> </ul> <p>(This can be found here - <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#allow-volume-expansion" rel="nofollow noreferrer">Allow Volume Expansion</a>)</p> <p>As you mentioned, the storage class should have this field set to <code>true</code>; once that is done, editing the Persistent Volume Claim will trigger the volume expansion.</p> <p>How this works in practice: once you edit the PVC to increase the volume size, the change is reflected in the PVC and a new condition <code>FileSystemResizePending</code> is added. You can check it with</p> <p><code>kubectl get pvc &lt;your_pvc_name&gt; -o yaml</code></p> <p>For the PV size to actually change, a pod restart is required. This can be achieved by deleting the pod or by scaling the deployment down and up.</p> <p>All related details can be found here: <a href="https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/" rel="nofollow noreferrer">Resizing Persistent Volumes using Kubernetes</a></p>
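<p>For example, assuming a claim called <code>data-pvc</code> (placeholder name), the whole cycle could look like this:</p> <pre><code># edit the claim and raise spec.resources.requests.storage, e.g. from 10Gi to 15Gi
kubectl edit pvc data-pvc

# watch for the FileSystemResizePending condition on the claim
kubectl get pvc data-pvc -o yaml

# restart the pod that mounts the claim so the filesystem resize is actually applied
kubectl delete pod &lt;pod-using-the-pvc&gt;
</code></pre>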
<p>I need to get the current date in Helm with the following format YY-MM-DD. I'm trying <code>{{ now | date }}</code> but this returns an error because the number of arguments expected by date is two, not one.</p>
<p>The <code>date</code> function takes a Go layout string based on the reference time <code>2006-01-02 15:04:05</code>. To format the date as YEAR-MONTH-DAY use:</p> <pre><code>{{ now | date &quot;2006-01-02&quot; }}
</code></pre> <p>Source <a href="http://masterminds.github.io/sprig/date.html" rel="nofollow noreferrer">here</a>.</p> <p>Or, for a full timestamp, add the following annotation:</p> <pre><code>annotations:
  deploymentTime: {{ now | date &quot;2006-01-02T15:04:05&quot; }}
</code></pre>
<p>I am building my <em>docker</em> image and then deploying it using <em>kubernetes</em>.</p> <p>When I execute the command:</p> <pre><code>kubectl get pods
NAME               READY   STATUS   RESTARTS   AGE
lmhfhfhf-def-abc   0/1     Error    0          2m
</code></pre> <p>I am almost sure that it's because of my code changes. How do I debug this pod to find out the logical error in my code?</p> <p>Please help.</p>
<p>Welcome to the community @witti_minds</p> <p>@user2830517 and @andD have generally answered it, however I'd like to add a bit more detail and some sources.</p> <p>Your pod has crashed, however since it was scheduled, the options used for debugging running pods will still work (<a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-pod-replication-controller/#my-pod-is-crashing-or-otherwise-unhealthy" rel="nofollow noreferrer">k8s debug unhealthy pods</a>).</p> <p>First, get all information about your pod by running the following command (<a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application-introspection/" rel="nofollow noreferrer">k8s pod inspection</a>):</p> <pre><code>kubectl describe pod %pod_name% </code></pre> <p>Then examine the pod's logs using the following command:</p> <pre><code>kubectl logs lmhfhfhf-def-abc %container_name% </code></pre> <p>The container name is optional here; if your pod has more than one container, you have to specify it. You can also check the logs from the previous state of the pod:</p> <pre><code>kubectl logs --previous %pod_name% %container_name% </code></pre> <p>Once you have your container up and running, you can continue troubleshooting your application by running commands directly in the pod using the <code>exec</code> command:</p> <pre><code>kubectl exec -it %pod_name% -- sh </code></pre> <p>There are different possible troubleshooting methods; they are all described here: <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/" rel="nofollow noreferrer">k8s debugging a running pod</a></p>
<p>Is there anyway to list all securityContext including default and defined:</p> <ol> <li>Pod Level</li> <li>Container Level</li> </ol> <p>Using <code>kubectl get pod -o yaml</code> only show the defined in <code>spec.securityContext</code> and <code>spec.containers[*].securityContext</code> of manifest without the default one?</p>
<p>Here you are, source can be found <a href="https://medium.com/@pjbgf/kubectl-list-security-context-settings-for-all-pods-containers-within-a-cluster-93349681ef5d" rel="noreferrer">in this medium article</a>:</p> <pre><code>kubectl get pods --all-namespaces -o go-template \ --template='{{range .items}}{{&quot;pod: &quot;}}{{.metadata.name}} {{if .spec.securityContext}} PodSecurityContext: {{&quot;runAsGroup: &quot;}}{{.spec.securityContext.runAsGroup}} {{&quot;runAsNonRoot: &quot;}}{{.spec.securityContext.runAsNonRoot}} {{&quot;runAsUser: &quot;}}{{.spec.securityContext.runAsUser}} {{if .spec.securityContext.seLinuxOptions}} {{&quot;seLinuxOptions: &quot;}}{{.spec.securityContext.seLinuxOptions}} {{end}} {{else}}PodSecurity Context is not set {{end}}{{range .spec.containers}} {{&quot;container name: &quot;}}{{.name}} {{&quot;image: &quot;}}{{.image}}{{if .securityContext}} {{&quot;allowPrivilegeEscalation: &quot;}}{{.securityContext.allowPrivilegeEscalation}} {{if .securityContext.capabilities}} {{&quot;capabilities: &quot;}}{{.securityContext.capabilities}} {{end}} {{&quot;privileged: &quot;}}{{.securityContext.privileged}} {{if .securityContext.procMount}} {{&quot;procMount: &quot;}}{{.securityContext.procMount}} {{end}} {{&quot;readOnlyRootFilesystem: &quot;}}{{.securityContext.readOnlyRootFilesystem}} {{&quot;runAsGroup: &quot;}}{{.securityContext.runAsGroup}} {{&quot;runAsNonRoot: &quot;}}{{.securityContext.runAsNonRoot}} {{&quot;runAsUser: &quot;}}{{.securityContext.runAsUser}} {{if .securityContext.seLinuxOptions}} {{&quot;seLinuxOptions: &quot;}}{{.securityContext.seLinuxOptions}} {{end}}{{if .securityContext.windowsOptions}} {{&quot;windowsOptions: &quot;}}{{.securityContext.windowsOptions}} {{end}} {{else}} SecurityContext is not set {{end}} {{end}}{{end}}' </code></pre>
<p>I have a k0s Kubernetes cluster on a single node. I am trying to run a <code>selenium/standalone-chrome</code> to create a remote Selenium node. The trouble that I am having is that it responds if I port forward <code>4444</code> from the pod, but cannot seem to access it via a Service port. I get connection refused. I don't know if it's because it's ignore connections that non-localhost.</p> <p>The <code>Pod</code> definition for <code>pod/standalone-chrome</code> is:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: standalone-chrome spec: containers: - name: standalone-chrome image: selenium/standalone-chrome ports: - containerPort: 4444 env: - name: JAVA_OPTS value: '-Dwebdriver.chrome.whitelistedIps=&quot;&quot;' </code></pre> <p>The <code>Service</code> definition I have for <code>service/standalone-chrome-service</code> is:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: standalone-chrome-service labels: app: standalone-chrome spec: ports: - port: 4444 name: standalone-chrome type: ClusterIP selector: app: standalone-chrome </code></pre> <p>This creates the following, along with a <code>busybox</code> container I have just for testing connectivity.</p> <pre><code>NAME READY STATUS RESTARTS AGE pod/busybox1 1/1 Running 70 2d22h pod/standalone-chrome 1/1 Running 0 3m15s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 18d service/standalone-chrome-service ClusterIP 10.111.12.1 &lt;none&gt; 4444/TCP 3m5s </code></pre> <p>The issue I am having now is that I'm not able to access the remote Selenium service via <code>standalone-chrome-service</code>. I get connection refused. For example, here is trying to reach it via the <code>busybox1</code> container:</p> <pre><code>$ wget http://standalone-chrome-service:4444 Connecting to standalone-chrome-service:4444 (10.111.12.1:4444) wget: can't connect to remote host (10.111.12.1): Connection refused </code></pre> <p>I am able to port forward from <code>pod/standalone-chrome</code> to my host machine using <code>kubectl port-forward</code> though and it works OK, which I think confirms a service is successfully running but not accessible via the <code>Service</code>:</p> <pre><code>$ kubectl port-forward pod/standalone-chrome 4444:4444 &amp; Forwarding from 127.0.0.1:4444 -&gt; 4444 Forwarding from [::1]:4444 -&gt; 4444 $ wget http://localhost:4444 --2021-11-22 13:37:20-- http://localhost:4444/ Resolving localhost (localhost)... ::1, 127.0.0.1 Connecting to localhost (localhost)|::1|:4444... connected. ... </code></pre> <p>I'd greatly appreciate any help in figuring out how to get the Selenium remote server accessible via the <code>Service</code>.</p> <hr /> <p>EDIT: Here is the updated Service definition with <code>name</code>...</p> <pre><code>apiVersion: v1 kind: Service metadata: name: standalone-chrome-service labels: app: standalone-chrome spec: ports: - port: 4444 name: standalone-chrome type: ClusterIP selector: name: standalone-chrome </code></pre> <p>Here is the output of describe:</p> <pre><code>Name: standalone-chrome-service Namespace: default Labels: app=standalone-chrome Annotations: &lt;none&gt; Selector: name=standalone-chrome Type: ClusterIP IP Families: &lt;none&gt; IP: 10.100.179.116 IPs: 10.100.179.116 Port: standalone-chrome 4444/TCP TargetPort: 4444/TCP Endpoints: &lt;none&gt; Session Affinity: None Events: &lt;none&gt; </code></pre>
<p>Service's syntax with:</p> <pre><code> selector: app: standalone-chrome </code></pre> <p>is correct, <code>selector</code> should be matched by <code>label</code>.</p> <blockquote> <p>Services match a set of Pods using labels and selectors, a grouping primitive that allows logical operation on objects in Kubernetes. Labels are key/value pairs attached to objects</p> </blockquote> <p>See for more details <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/" rel="nofollow noreferrer">Using a Service to Expose Your App</a>.</p> <p>Now you need to add this <code>label</code> (which is <code>app: standalone-chrome</code>) to your <code>pod.yaml</code> metadata:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: standalone-chrome labels: app: standalone-chrome # this label should match to selector in service spec: containers: - name: standalone-chrome image: selenium/standalone-chrome ports: - containerPort: 4444 env: - name: JAVA_OPTS value: '-Dwebdriver.chrome.whitelistedIps=&quot;&quot;' </code></pre>
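<p>Once the label is in place you can confirm that the Service has actually picked up the pod by checking its endpoints, which were <code>&lt;none&gt;</code> in your describe output:</p> <pre><code>kubectl get endpoints standalone-chrome-service
# the ENDPOINTS column should now show the pod IP on port 4444 instead of &lt;none&gt;
</code></pre>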
<p>i am trying to make a cicd pipeline github-&gt;travisci-&gt;aws eks everything works fine images are posted to dockerhub and all.but when travis is executing kubectl apply -f &quot;the files&quot; it is throwing a error.. error: exec plugin: invalid apiVersion &quot;client.authentication.k8s.io/v1alpha1&quot;</p> <p>(theres nothing wrong with the source coe/deployment/service files as i manually deployed them on aws eks and they worked fine.)</p> <pre><code> #-----------------travis.yml------------- sudo: required services: - docker env: global: - SHA=$(git rev-parse HEAD) before_install: # Install kubectl - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl - chmod +x ./kubectl - sudo mv ./kubectl /usr/local/bin/kubectl # Install AWS CLI - if ! [ -x &quot;$(command -v aws)&quot; ]; then curl &quot;https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip&quot; -o &quot;awscliv2.zip&quot; ; unzip awscliv2.zip ; sudo ./aws/install ; fi # export environment variables for AWS CLI (using Travis environment variables) - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} - export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION} # Setup kubectl config to use the desired AWS EKS cluster - aws eks update-kubeconfig --region ${AWS_DEFAULT_REGION} --name ${AWS_EKS_CLUSTER_NAME} - echo &quot;$DOCKER_PASSWORD&quot; | docker login -u &quot;$DOCKER_USERNAME&quot; --password-stdin - docker build -t akifboi/multi-client -f ./client/Dockerfile.dev ./client # - aws s3 ls script: - docker run -e CI=true akifboi/multi-client npm test deploy: provider: script script: bash ./deploy.sh on: branch: master </code></pre> <pre><code>#----deploy.sh-------- # docker build -t akifboi/multi-client:latest -t akifboi/multi-client:$SHA -f ./client/Dockerfile ./client # docker build -t akifboi/multi-server:latest -t akifboi/multi-server:$SHA -f ./server/Dockerfile ./server # docker build -t akifboi/multi-worker:latest -t akifboi/multi-worker:$SHA -f ./worker/Dockerfile ./worker # docker push akifboi/multi-client:latest # docker push akifboi/multi-server:latest # docker push akifboi/multi-worker:latest # docker push akifboi/multi-client:$SHA # docker push akifboi/multi-server:$SHA # docker push akifboi/multi-worker:$SHA echo &quot;starting&quot; aws eks --region ap-south-1 describe-cluster --name test001 --query cluster.status #eikhane ashe problem hoitese! echo &quot;applying k8 files&quot; kubectl apply -f ./k8s/ # kubectl set image deployments/server-deployment server=akifboi/multi-server:$SHA # kubectl set image deployments/client-deployment client=akifboi/multi-client:$SHA # kubectl set image deployments/worker-deployment worker=akifboi/multi-worker:$SHA echo &quot;done&quot; </code></pre> <pre><code>#------travis;logs---------- last few lines: starting &quot;ACTIVE&quot; applying k8 files error: exec plugin: invalid apiVersion &quot;client.authentication.k8s.io/v1alpha1&quot; done Already up to date. HEAD detached at c1858f7 Untracked files: (use &quot;git add &lt;file&gt;...&quot; to include in what will be committed) aws/ awscliv2.zip nothing added to commit but untracked files present (use &quot;git add&quot; to track) Dropped refs/stash@{0} (3b51f951e824689d6c35fc40dadf6fb8881ae225) Done. Your build exited with 0. </code></pre>
<p>We were installing the latest version of kubectl in CI and hit this error today. After pinning to a previous version (1.18) the error was resolved.</p> <p>The last working version for us was 1.23.6; we saw the errors with 1.24.</p>
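<p>If you want to try the same workaround, the kubectl download in your <code>before_install</code> can be pinned to a fixed release instead of resolving <code>stable.txt</code>; for example (the version number is just the one that worked for us, adjust as needed):</p> <pre><code>before_install:
  # pin kubectl instead of installing whatever &quot;stable&quot; currently points to
  - curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
  - chmod +x ./kubectl
  - sudo mv ./kubectl /usr/local/bin/kubectl
</code></pre>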
<p>Currently my application uses the bitnami container for kafka. I want to change the number of partitions in a topic to three, but I don't know where I should do that. When I go to the minikube dashboard, I see that there is a KAFKA_CFG_NUM_PARTITIONS variable, but I don't know where to define it in my project. I have the files below.</p> <ol> <li><p>Chart.yaml which has</p> <pre><code>apiVersion: v2
name: infra
type: application
sources:
  - https:/examplePorject
dependencies:
  - name: kafka
    version: 14.x.x
    respository: https://charts.bitnami.com/bitnami
</code></pre> </li> <li><p>values.yaml</p> <pre><code>kafka:
  url: infra-kafka-0.infra-headless:9092
</code></pre> </li> <li><p>statefulset.yaml</p> </li> </ol> <p>I found that there is a property <strong>environment</strong> in it and KAFKA_CFG_NUM_PARTITIONS is defined inside that.</p>
<p>Apparently, this ENV variable <code>KAFKA_CFG_NUM_PARTITIONS</code> sets the server config <code>num.partitions</code>: <a href="https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html#brokerconfigs_num.partitions" rel="nofollow noreferrer">https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html#brokerconfigs_num.partitions</a></p> <p>Which is the default number of partitions that auto created topics will have.</p> <p>To increase the number of partitions of an existing topic, you can use the <code>kafka-topics</code> tool:</p> <pre><code>./bin/kafka-topics.sh --alter --bootstrap-server &lt;kafka&gt;:9092 --topic &lt;topic-name&gt; --partitions &lt;new-num-partitions&gt; </code></pre>
<p>I have a cron job set up on the Kubernetes cluster to process millions of records from the database. Sometimes pod corresponding to the cron job gets Evicted or OOM Killed. Now the issue I am facing is whenever this cron job starts again it processes all those records from the beginning.</p> <p>Just wanted to understand how should I approach storing the progress of this cron job somewhere. Let's say I store it in a database then how frequent should I make a db call to store the state?</p>
<p>I recommend fixing the OOM issue instead of finding workarounds. I've listed my thoughts on both.</p> <p><strong>Fixing OOM</strong>: Assuming the CronJob is processing millions of records and hitting the OOM issue, this is mostly due to a memory leak. I would check whether data structures/resources are being released once you are done with them. Another approach is to increase the memory limit (a sketch follows below).</p> <p><strong>Workaround:</strong> If you are already using a database, it doesn't make much sense to introduce another technology just to save the progress. You can create a table for the cronjob's progress and update it after processing each batch of records, e.g. with a page number or offset.</p>
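<p>If you decide to raise the memory, the limits live in the CronJob's pod template; a rough sketch with placeholder name, schedule, image and sizes:</p> <pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: record-processor                # placeholder name
spec:
  schedule: &quot;0 * * * *&quot;                 # placeholder schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: processor
              image: example/processor:latest   # placeholder image
              resources:
                requests:
                  memory: &quot;1Gi&quot;
                limits:
                  memory: &quot;2Gi&quot;         # raise this if the job keeps getting OOMKilled
</code></pre>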
<p>We run a Kubernetes-compatible (OKD 3.11) on-prem / private cloud cluster with backend apps communicating with low-latency Redis databases used as caches and K/V stores. The new architecture design is about to divide worker nodes equally between two geographically distributed data centers (&quot;regions&quot;). We can assume static pairing between node names and regions, an now we have added labeling of nodes with region names as well.</p> <p>What would be the recommended approach to protect low-latency communication with the in-memory databases, making client apps stick to the same region as the database they are allowed to use? Spinning up additional replicas of the databases is feasible, but does not prevent round-robin routing between the two regions...</p> <p>Related: <a href="https://stackoverflow.com/questions/55743299/kubernetes-node-different-region-in-single-cluster">Kubernetes node different region in single cluster</a></p>
<p>Posting this out of comments as community wiki for better visibility, feel free to edit and expand.</p> <hr /> <p>Best option to solve this question is to use <a href="https://istio.io/latest/docs/tasks/traffic-management/locality-load-balancing/" rel="nofollow noreferrer"><code>istio - Locality Load Balancing</code></a>. Major points from the link:</p> <blockquote> <p>A locality defines the geographic location of a workload instance within your mesh. The following triplet defines a locality:</p> <ul> <li><p>Region: Represents a large geographic area, such as us-east. A region typically contains a number of availability zones. In Kubernetes, the label topology.kubernetes.io/region determines a node’s region.</p> </li> <li><p>Zone: A set of compute resources within a region. By running services in multiple zones within a region, failover can occur between zones within the region while maintaining data locality with the end-user. In Kubernetes, the label topology.kubernetes.io/zone determines a node’s zone.</p> </li> <li><p>Sub-zone: Allows administrators to further subdivide zones for more fine-grained control, such as “same rack”. The sub-zone concept doesn’t exist in Kubernetes. As a result, Istio introduced the custom node label topology.istio.io/subzone to define a sub-zone.</p> </li> </ul> <p>That means that a pod running in zone <code>bar</code> of region <code>foo</code> is not considered to be local to a pod running in zone <code>bar</code> of region <code>baz</code>.</p> </blockquote> <hr /> <p>Another option that can be considered with traffic balancing adjusting is suggested in comments:</p> <p>use <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer"><code>nodeAffinity</code></a> to achieve consistency between scheduling <code>pods</code> and <code>nodes</code> in specific &quot;regions&quot;.</p> <blockquote> <p>There are currently two types of node affinity, called <code>requiredDuringSchedulingIgnoredDuringExecution</code> and <code>preferredDuringSchedulingIgnoredDuringExecution</code>. You can think of them as &quot;hard&quot; and &quot;soft&quot; respectively, in the sense that the former specifies rules that must be met for a pod to be scheduled onto a node (similar to nodeSelector but using a more expressive syntax), while the latter specifies preferences that the scheduler will try to enforce but will not guarantee. The &quot;IgnoredDuringExecution&quot; part of the names means that, similar to how nodeSelector works, if labels on a node change at runtime such that the affinity rules on a pod are no longer met, the pod continues to run on the node. 
In the future we plan to offer <code>requiredDuringSchedulingRequiredDuringExecution</code> which will be identical to <code>requiredDuringSchedulingIgnoredDuringExecution</code> except that it will evict pods from nodes that cease to satisfy the pods' node affinity requirements.</p> <p>Thus an example of <code>requiredDuringSchedulingIgnoredDuringExecution</code> would be &quot;only run the pod on nodes with Intel CPUs&quot; and an example <code>preferredDuringSchedulingIgnoredDuringExecution</code> would be &quot;try to run this set of pods in failure zone XYZ, but if it's not possible, then allow some to run elsewhere&quot;.</p> </blockquote> <p><strong>Update</strong>: based on <a href="https://stackoverflow.com/questions/70006961/best-method-to-keep-client-server-traffic-in-the-same-region-node-in-kubernete#comment123810994_70036319">@mirekphd comment</a>, it will still not be fully functioning in a way it was asked to:</p> <blockquote> <p>It turns out that in practice Kubernetes does not really let us switch off secondary zone, as soon as we spin up a realistic number of pod replicas (just a few is enough to see it)... they keep at least some pods in the other zone/DC/region by design (which is clever when you realize that it removes the dependency on the docker registry survival, at least under default imagePullPolicy for tagged images), <a href="https://github.com/kubernetes/kubernetes/issues/99630#issuecomment-790740081" rel="nofollow noreferrer">GibHub issue #99630 - NodeAffinity preferredDuringSchedulingIgnoredDuringExecution doesn't work well</a></p> </blockquote> <p>Please refer to <a href="https://stackoverflow.com/a/70041931/15537201">@mirekphd's answer</a></p>
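<p>For completeness, a node-affinity sketch for the client pods could look like the snippet below (the label key and region value are whatever you used when labelling your nodes), keeping in mind the caveat above that the <code>preferred...</code> variant is only best-effort:</p> <pre><code>spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
              - key: topology.kubernetes.io/region   # or your own region label
                operator: In
                values:
                  - region-a                         # placeholder region name
</code></pre>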
<p>How can I configure internode encryption (i.e., TLS) for Cassandra in K8ssandra?</p>
<p>K8ssandra 1.4.0 included some changes that should make it possible to configure TLS. For reference this is the <a href="https://github.com/k8ssandra/k8ssandra-operator/issues/235" rel="nofollow noreferrer">ticket</a>, and this is the corresponding <a href="https://github.com/k8ssandra/k8ssandra/pull/1180" rel="nofollow noreferrer">PR</a>.</p> <p>There is chart property, <code>cassandraYamlConfigMap</code>, with which you can specify a ConfigMap that contains your custom <code>cassandra.yaml</code>. The properties that you supply will be merged with those generated by k8ssandra with yours taking precedence.</p> <p>Note that your <code>cassandra.yaml</code> does not need to be a complete config file. It is sufficient to specify only the properties you are interested in since it will get merged with the based configuration file generated by K8ssandra.</p> <p>There are some additional properties required for internode and client encryption because you need to specify the keystore and truststore secrets so that volume mounts can be created. Note that you need to create the keystore and truststore secrets in advance.</p> <p>See the inline docs for the new chart properties <a href="https://github.com/k8ssandra/k8ssandra/blob/v1.4.1/charts/k8ssandra/values.yaml#L138" rel="nofollow noreferrer">here</a>.</p> <p>Here is an example chart properties file that demonstrates the new properties:</p> <pre class="lang-yaml prettyprint-override"><code>cassandra: version: 4.0.1 cassandraYamlConfigMap: cassandra-config encryption: keystoreSecret: keystore keystoreMountPath: /mnt/keystore truststoreSecret: truststore truststoreMountPath: /mnt/truststore heap: size: 512M datacenters: - name: dc1 size: 1 </code></pre> <p>There are a couple things to note about the charts properties. First, <code>keystoreSecret</code> and <code>truststoreSecret</code> refer to secrets that should live in the same namespace in which k8ssandra is installed. The user should create those secrets before installing (or upgrading k8ssandra).</p> <p>Secondly, <code>keystoreMountPath</code> and <code>truststoreMountPath</code> specify where those secrets should be mounted in the Cassandra pods. These properties must be specified and must match what is specified in <code>cassandra.yaml</code>.</p> <p>Here is an example of a ConfigMap that contains my custom cassandra.yaml:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: name: cassandra-config data: cassandra.yaml: |- server_encryption_options: internode_encryption: all keystore: /mnt/keystore/keystore.jks keystore_password: cassandra truststore: /mnt/truststore/truststore.jks truststore_password: cassandra </code></pre> <p>K8ssandra uses <a href="https://github.com/k8ssandra/cass-operator" rel="nofollow noreferrer">Cass Operator</a> to manage Cassandra. With that in mind I recommend the following for further reading:</p> <ul> <li>This <a href="https://thelastpickle.com/blog/2021/10/28/cassandra-certificate-management-part_2-cert-manager-and-k8s.html" rel="nofollow noreferrer">article</a> covers configuring TLS for a cass-operator managed cluster using cert-manager.</li> <li>This <a href="https://github.com/k8ssandra/cass-operator/issues/217#issuecomment-949779469" rel="nofollow noreferrer">ticket</a> provides a detailed explanation of how Cass Operator configure internode encryption.</li> </ul>
<p>I have a Kubernetes cluster with the followings:</p> <ul> <li>A deployment of some demo web server</li> <li>A ClusterIP service that exposes this deployment pods</li> </ul> <p>Now, I have the cluster IP of the service:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 5d3h svc-clusterip ClusterIP 10.98.148.55 &lt;none&gt; 80/TCP 16m </code></pre> <p>Now I can see that I can access this service from the host (!) - not within a Pod or anything:</p> <pre><code>$ curl 10.98.148.55 Hello world ! Version 1 </code></pre> <p>The thing is that I'm not sure if this capability is part of the definition of the ClusterIP service - i.e. is it guaranteed to work this way no matter what network plugin I use, or is this plugin-dependant.</p> <p>The <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">Kubernetes docs</a> state that:</p> <blockquote> <p>ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType</p> </blockquote> <p>It's not clear what is meant by &quot;within the cluster&quot; - does that mean within a container (pod) in the cluster? or even from the nodes themselves as in the example above?</p>
<blockquote> <p>does that mean within a container (pod) in the cluster? or even from the nodes themselves</p> </blockquote> <p>You can access the ClusterIP both from the cluster nodes and from pods. This IP is a virtual IP and it only works within the cluster. One way it works (apart from the CNI) is that, using the Linux kernel's <code>iptables</code>/<code>IPVS</code> features, the packet destination is rewritten to a Pod IP address and the traffic is load balanced among the pods. These rules are maintained by <code>kube-proxy</code>.</p>
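<p>You can observe this on a node; for example, with the <code>svc-clusterip</code> service from your output (assuming kube-proxy in iptables mode):</p> <pre><code># the real pod IPs behind the virtual IP
kubectl get endpoints svc-clusterip

# on the node, kube-proxy programs NAT rules for the ClusterIP
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.98.148.55
</code></pre>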
<p>How to change CPU Limit for namespace <code>kube-system</code> in Azure Kubernetes? My pod could not be deployed successfully due to some pods from namespace <code>kube-system</code> using lots of resource.</p> <p><a href="https://i.stack.imgur.com/8ccf0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8ccf0.png" alt="enter image description here" /></a></p>
<p>Posting this as community wiki out of comment, feel free to edit and expand</p> <hr /> <p>In short words, this is not possible to change limits for <code>coreDNS</code> and other critical resources located within <code>kube-system</code> namespace. (Technically it's possible to set custom values, but they will be overwritten shortly and initial state will get back to pre-defined one, below answer from microsoft how exactly it works).</p> <hr /> <p>There's a very similar question to it on <code>microsoft question platform</code> and this is the answer:</p> <blockquote> <p>The deployment coredns runs system critical workload using the CoreDNS project for cluster DNS management and resolution with all 1.12.x and higher clusters. [Reference].</p> <p>If you do a kubectl describe deployment -n kube-system coredns, you will find a very interesting label addonmanager.kubernetes.io/mode=Reconcile</p> <p>Now, addons with label addonmanager.kubernetes.io/mode=Reconcile will be periodically reconciled. Direct manipulation to these addons through apiserver is discouraged because addon-manager will bring them back to the original state. In particular:</p> <ul> <li><p>Addon will be re-created if it is deleted.</p> </li> <li><p>Addon will be reconfigured to the state given by the supplied fields in the template file periodically.</p> </li> <li><p>Addon will be deleted when its manifest file is deleted from the $ADDON_PATH.</p> </li> </ul> <p>The $ADDON_PATH by default is set to /etc/kubernetes/addons/ on the control plane node(s).</p> <p>For more information please check this document.</p> <p>Since AKS is a managed Kubernetes Service you will not be able to access $ADDON_PATH. We strongly recommend against forcing changes to kube-system resources as these are critical for the proper functioning of the cluster.</p> </blockquote> <p>Which was also confirmed in comment by OP:</p> <blockquote> <p>just contacted MS support that we cannot change the limits form kube-system namespace.</p> </blockquote>
<p>I am trying to understand where do these username field is mapped to in the Kubernetes cluster.</p> <p>This is a sample configmap:</p> <pre><code>apiVersion: v1 data: mapRoles: | - rolearn: arn:aws:iam::111122223333:role/eksctl-my-cluster-nodegroup username: system:node:{{EC2PrivateDNSName}} groups: - system:bootstrappers - system:nodes mapUsers: | - userarn: arn:aws:iam::111122223333:user/admin username: admin groups: - system:masters - userarn: arn:aws:iam::444455556666:user/ops-user username: ops-user groups: - eks-console-dashboard-full-access-group </code></pre> <ol> <li><p>If I change the username from <code>system:node:{{EC2PrivateDNSName}}</code> to something like <code>mynode:{{EC2PrivateDNSName}}</code> does it really make any difference? Does It make any sense to the k8's cluster by adding the <code>system:</code> prefix ?.</p> </li> <li><p>And where can I see these users in k8's. Can I query it using <code>kubectl</code> just like <code>k get pods</code>, as <code>kubectl get usernames</code>. Is it a dummy user name we are providing to map with or does it hold any special privileges.</p> </li> <li><p>From where do these names <code>{{EC2PrivateDNSName}}</code> comes from. Are there any other variables available? I can't see any information related to this from the documentation.</p> </li> </ol> <p>Thanks in advance!</p>
<p>Posting the answer as a community wiki, feel free to edit and expand.</p> <hr /> <ol> <li>As you can read in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#core-component-roles" rel="nofollow noreferrer">documentation</a>, <code>system:node</code> is required to have the <code>system:</code> prefix. If you remove <code>system:</code>, the name will no longer map to the built-in group/role and it won't work correctly:</li> </ol> <blockquote> <p><strong>system:node</strong><br /> Allows access to resources required by the kubelet, <strong>including read access to all secrets, and write access to all pod status objects</strong>. You should use the Node authorizer and NodeRestriction admission plugin instead of the system:node role, and allow granting API access to kubelets based on the Pods scheduled to run on them. The system:node role only exists for compatibility with Kubernetes clusters upgraded from versions prior to v1.8.</p> </blockquote> <ol start="2"> <li>You can view RBAC users using an external plugin, for example <a href="https://github.com/FairwindsOps/rbac-lookup" rel="nofollow noreferrer">RBAC Lookup</a>, and its <code>rbac-lookup</code> command:</li> </ol> <blockquote> <p>RBAC Lookup is a CLI that allows you to easily find Kubernetes roles and cluster roles bound to any user, service account, or group name. Binaries are generated with goreleaser for each release for simple installation.</p> </blockquote> <ol start="3"> <li>Names like <code>{{EC2PrivateDNSName}}</code> come from your AWS IAM / EKS authenticator. You can read more about it <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">here</a>:</li> </ol> <blockquote> <p>Access to your cluster using AWS IAM entities is enabled by the AWS IAM Authenticator for Kubernetes which runs on the Amazon EKS control plane. The authenticator gets its configuration information from the <code>aws-auth</code> <code>ConfigMap</code>.</p> </blockquote>
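<p>Basic usage of that plugin, once installed, is simply the following (run <code>rbac-lookup --help</code> for the filtering flags available in your version):</p> <pre><code># list users, groups and service accounts together with their role bindings
rbac-lookup

# filter by a name, e.g. the ops-user mapped in aws-auth
rbac-lookup ops-user
</code></pre>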
<p>My local system's kubectl is currently pointing to my digitalocean kubernetes cluster. I wrote an app in Rust that can list the pods on my cluster and their CPU usage, which is working fine on local system.</p> <p>Now, I dockerized this application, so now it is running as a container.</p> <p>How can I configure my docker-compose in such a way, so that it can access local system's kubectl ?</p> <hr /> <p>I tried this on the basis of suggested solutions:</p> <pre><code>version: &quot;3.3&quot; services: custom-tool: container_name: custom-tool image: myimage:v16 restart: always command: ./main-thread environment: - KUBECONFIG=/root/config volumes: - /home/keval/.kube/config:/root/config </code></pre> <p>But, no luck yet !</p>
<p>It seems like you can create your own configuration programmatically, have it read from <code>~/.kube/config</code> or from env variables: <a href="https://docs.rs/kube/0.73.0/kube/struct.Config.html" rel="nofollow noreferrer">https://docs.rs/kube/0.73.0/kube/struct.Config.html</a></p> <p>The easiest option you have is to have your local <code>.kube/config</code> available inside your container by using a <a href="https://stackoverflow.com/a/54658017/19059974">bind mount</a> (most likely at <code>/root/.kube/config</code>).</p> <p>It will look like this:</p> <pre><code>version: &quot;3.3&quot; services: custom-tool: container_name: custom-tool image: myimage:v16 restart: always command: ./main-thread volumes: - type: bind source: /home/keval/.kube/config target: /root/.kube/config </code></pre>
<p>I have the following pod setup:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: proxy-test namespace: test spec: containers: - name: container-a image: &lt;Image&gt; imagePullPolicy: Always ports: - name: http-port containerPort: 8083 - name: container-proxy image: &lt;Image&gt; ports: - name: server containerPort: 7487 protocol: TCP - name: container-b image: &lt;Image&gt; </code></pre> <p>I <code>exec</code> into <code>container-b</code> and execute following curl request:</p> <pre><code>curl --proxy localhost:7487 -X POST http://localhost:8083/ </code></pre> <p>Due to some reason, <code>http://localhost:8083/</code> is directly getting called and proxy is ignored. Can someone explain why this can happen ?</p>
<h2>Environment</h2> <p>I replicated the scenario on <code>kubeadm</code> and <code>GCP GKE</code> kubernetes clusters to see if there is any difference - no, they behave the same, so I assume AWS EKS should behave the same too.</p> <p>I created a pod with 3 containers within:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: proxy-pod spec: containers: - image: ubuntu # client where connection will go from name: ubuntu command: ['bash', '-c', 'while true ; do sleep 60; done'] - name: proxy-container # proxy - that's obvious image: ubuntu command: ['bash', '-c', 'while true ; do sleep 60; done'] - name: server # regular nginx server which listens to port 80 image: nginx </code></pre> <p>For this test stand I installed <code>squid</code> proxy on <code>proxy-container</code> (<a href="https://ubuntu.com/server/docs/proxy-servers-squid" rel="nofollow noreferrer">what is squid and how to install it</a>). By default it listens to port <code>3128</code>.</p> <p>As well as <code>curl</code> was installed on <code>ubuntu</code> - client container. (<code>net-tools</code> package as a bonus, it has <code>netstat</code>).</p> <h2>Tests</h2> <p><strong>Note!</strong></p> <ul> <li>I used <code>127.0.0.1</code> instead of <code>localhost</code> because <code>squid</code> has some resolving questions, didn't find an easy/fast solution.</li> <li><code>curl</code> is used with <code>-v</code> flag for verbosity.</li> </ul> <p>We have <code>proxy</code> on <code>3128</code> and <code>nginx</code> on <code>80</code> within the pod:</p> <pre><code># netstat -tulpn Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN - tcp 0 0 0.0.0.0:3128 0.0.0.0:* LISTEN - tcp6 0 0 :::80 :::* LISTEN - </code></pre> <p><code>curl</code> directly:</p> <pre><code># curl 127.0.0.1 -vI * Trying 127.0.0.1:80... # connection goes directly to port 80 which is expected * TCP_NODELAY set * Connected to 127.0.0.1 (127.0.0.1) port 80 (#0) &gt; HEAD / HTTP/1.1 &gt; Host: 127.0.0.1 &gt; User-Agent: curl/7.68.0 &gt; Accept: */* </code></pre> <p><code>curl</code> via proxy:</p> <pre><code># curl --proxy 127.0.0.1:3128 127.0.0.1:80 -vI * Trying 127.0.0.1:3128... # connecting to proxy! * TCP_NODELAY set * Connected to 127.0.0.1 (127.0.0.1) port 3128 (#0) # connected to proxy &gt; HEAD http://127.0.0.1:80/ HTTP/1.1 # going further to nginx on `80` &gt; Host: 127.0.0.1 &gt; User-Agent: curl/7.68.0 &gt; Accept: */* </code></pre> <p><code>squid</code> logs:</p> <pre><code># cat /var/log/squid/access.log 1635161756.048 1 127.0.0.1 TCP_MISS/200 958 GET http://127.0.0.1/ - HIER_DIRECT/127.0.0.1 text/html 1635163617.361 0 127.0.0.1 TCP_MEM_HIT/200 352 HEAD http://127.0.0.1/ - HIER_NONE/- text/html </code></pre> <h2>NO_PROXY</h2> <p><code>NO_PROXY</code> environment variable might be set up, however by default it's empty.</p> <p>I added it manually:</p> <pre><code># export NO_PROXY=127.0.0.1 # printenv | grep -i proxy NO_PROXY=127.0.0.1 </code></pre> <p>Now <code>curl</code> request via proxy will look like:</p> <pre><code># curl --proxy 127.0.0.1:3128 127.0.0.1 -vI * Uses proxy env variable NO_PROXY == '127.0.0.1' # curl detects NO_PROXY envvar * Trying 127.0.0.1:80... 
# and ignores the proxy, connection goes directly * TCP_NODELAY set * Connected to 127.0.0.1 (127.0.0.1) port 80 (#0) &gt; HEAD / HTTP/1.1 &gt; Host: 127.0.0.1 &gt; User-Agent: curl/7.68.0 &gt; Accept: */* </code></pre> <p>It's possible to override <code>NO_PROXY</code> envvar while executing <code>curl</code> command with <code>--noproxy</code> flag.</p> <blockquote> <p>--noproxy no-proxy-list</p> <p>Comma-separated list of hosts which do not use a proxy, if one is specified. The only wildcard is a single * character, which matches all hosts, and effectively disables the proxy. Each name in this list is matched as either a domain which contains the hostname, or the hostname itself. For example, local.com would match local.com, local.com:80, and <a href="http://www.local.com" rel="nofollow noreferrer">www.local.com</a>, but not <a href="http://www.notlocal.com" rel="nofollow noreferrer">www.notlocal.com</a>. (Added in 7.19.4).</p> </blockquote> <p>Example:</p> <pre><code># curl --proxy 127.0.0.1:3128 --noproxy &quot;&quot; 127.0.0.1 -vI * Trying 127.0.0.1:3128... # connecting to proxy as it was supposed to * TCP_NODELAY set * Connected to 127.0.0.1 (127.0.0.1) port 3128 (#0) # connection to proxy is established &gt; HEAD http://127.0.0.1/ HTTP/1.1 # connection to nginx on port 80 &gt; Host: 127.0.0.1 &gt; User-Agent: curl/7.68.0 &gt; Accept: */* </code></pre> <p>This proves that proxy works! with localhost.</p> <p><strong>Another option</strong> is something incorrectly configured in <code>proxy</code> which is used in the question. You can get this pod and install <code>squid</code> and <code>curl</code> into both containers and try yourself.</p>
<p>I use this manifest configuration to deploy a registry into 3 mode Kubernetes cluster:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: pv1 namespace: registry-space spec: capacity: storage: 5Gi # specify your own size volumeMode: Filesystem persistentVolumeReclaimPolicy: Retain local: path: /opt/registry # can be any path nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kubernetes2 accessModes: - ReadWriteMany # only 1 node will read/write on the path. --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pv1-claim namespace: registry-space spec: # should match specs added in the PersistenVolume accessModes: - ReadWriteMany volumeMode: Filesystem resources: requests: storage: 5Gi --- apiVersion: apps/v1 kind: Deployment metadata: name: private-repository-k8s namespace: registry-space labels: app: private-repository-k8s spec: replicas: 1 selector: matchLabels: app: private-repository-k8s template: metadata: labels: app: private-repository-k8s spec: volumes: - name: certs-vol hostPath: path: /opt/certs type: Directory - name: task-pv-storage persistentVolumeClaim: claimName: pv1-claim # specify the PVC that you've created. PVC and Deployment must be in same namespace. containers: - image: registry:2 name: private-repository-k8s imagePullPolicy: IfNotPresent env: - name: REGISTRY_HTTP_TLS_CERTIFICATE value: &quot;/opt/certs/registry.crt&quot; - name: REGISTRY_HTTP_TLS_KEY value: &quot;/opt/certs/registry.key&quot; ports: - containerPort: 5000 volumeMounts: - name: certs-vol mountPath: /opt/certs - name: task-pv-storage mountPath: /opt/registry </code></pre> <p>I manually created directories on every node under <code>/opt/certs</code> and <code>/opt/registry</code>.</p> <p>But when I try to deploy the manifest without hardcoded <code>nodeSelectorTerms</code> on tha control plane I get error:</p> <pre><code>kubernetes@kubernetes1:/opt/registry$ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system calico-kube-controllers-58dbc876ff-fsjd5 1/1 Running 1 (74m ago) 84m kube-system calico-node-5brzt 1/1 Running 1 (73m ago) 84m kube-system calico-node-nph9n 1/1 Running 1 (76m ago) 84m kube-system calico-node-pcd74 1/1 Running 1 (74m ago) 84m kube-system calico-node-ph2ht 1/1 Running 1 (76m ago) 84m kube-system coredns-565d847f94-7pswp 1/1 Running 1 (74m ago) 105m kube-system coredns-565d847f94-tlrfr 1/1 Running 1 (74m ago) 105m kube-system etcd-kubernetes1 1/1 Running 2 (74m ago) 105m kube-system kube-apiserver-kubernetes1 1/1 Running 2 (74m ago) 105m kube-system kube-controller-manager-kubernetes1 1/1 Running 2 (74m ago) 105m kube-system kube-proxy-4slm4 1/1 Running 1 (76m ago) 86m kube-system kube-proxy-4tnx2 1/1 Running 2 (74m ago) 105m kube-system kube-proxy-9dgsj 1/1 Running 1 (73m ago) 85m kube-system kube-proxy-cgr44 1/1 Running 1 (76m ago) 86m kube-system kube-scheduler-kubernetes1 1/1 Running 2 (74m ago) 105m registry-space private-repository-k8s-6d5d954b4f-xkmj5 0/1 Pending 0 4m55s kubernetes@kubernetes1:/opt/registry$ </code></pre> <p>Do you know how I can let Kubernetes to decide where to deploy the pod?</p>
<p>It seems like your node has taints, hence pods are not getting scheduled. Can you try using this command to remove the taints from your node?</p> <pre><code>kubectl taint nodes &lt;node-name&gt; node-role.kubernetes.io/master- </code></pre> <p>or</p> <pre><code>kubectl taint nodes --all node-role.kubernetes.io/master- </code></pre> <p>To get the node name, use <code>kubectl get nodes</code>.</p> <p>The user was able to get the pod scheduled after running the command below:</p> <pre><code>kubectl taint nodes kubernetes1 node-role.kubernetes.io/control-plane:NoSchedule- </code></pre> <p>Now the pod is failing due to CrashLoopBackOff, which implies the pod has been scheduled.</p> <p>Can you please check if this pod is getting scheduled and running properly?</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: nginx1 namespace: test spec: containers: - name: webserver image: nginx:alpine ports: - containerPort: 80 resources: requests: memory: &quot;64Mi&quot; cpu: &quot;200m&quot; limits: memory: &quot;128Mi&quot; cpu: &quot;350m&quot; </code></pre>
<p>Would love to get peoples thoughts on this.</p> <p>I have a front-end application that lives on the apple store. It interacts with custom JavaScript APIs that we've built and that are deployed on an EKS cluster. The cluster and the EC2 instances of the cluster live in private subnets in AWS, but are exposed to the world through an application load balancer that lives in a public subnet.</p> <p>Since the front end application lives on apples servers, I can't think of an easy way to securely access the APIs in AWS without exposing them to the world. This is what I have in mind:</p> <ol> <li>Use API keys. Not ideal as the keys could still potentially be scraped from a header</li> <li>Restrict access to the APIs to the apple server network via ACLs and security groups. Again, not something that seems achievable since there is no network CIDR that apple provide (that I know of)</li> <li>Set up some sort of SSH tunnel</li> </ol> <p>I've hit a wall on this and would really appreciate anyones thoughts if they've had a similar issue.</p> <p>Thanks!</p>
<p>In Google CDP you can have another type of ACL which monitors the client URL. If requests don't come from your.frontend.app, they are denied. Check if you can find that in AWS as well.</p> <p>I recommend further thinking about the following, if possible in your project:</p> <p>1.) CSRF strategy. Apply tokens to clients, which must be provided on each request to the API.<br> 2.) AccessLimiter. Maintain a fingerprint or session for your clients and count/limit requests as you need. E.g. if the request didn't run through an index file before, no request is possible as the client didn't collect a token.</p>
<p>docker desktop on mac is getting error:</p> <pre><code>Unable to connect to the server: x509: certificate signed by unknown authority </code></pre> <p>The <a href="https://stackoverflow.com/questions/46234295/kubectl-unable-to-connect-to-server-x509-certificate-signed-by-unknown-authori">following answers</a> didn't helped much:</p> <p>My system details:</p> <ul> <li><p>Operating system: macOS Big Sur Version 11.6</p> </li> <li><p>Docker desktop version: v20.10.12</p> </li> <li><p>Kubernetes version: v1.22.5</p> </li> </ul> <p>When I do:</p> <pre><code>kubectl get pods </code></pre> <p>I get the below error:</p> <pre><code>Unable to connect to the server: x509: certificate signed by unknown authority </code></pre>
<p>Posting the answer from comments</p> <hr /> <p>As it turned out after additional questions and answers, there was a previous installation of a <code>rancher</code> cluster which left its traces: a certificate and context in <code>~/.kube/config</code>.</p> <p>The solution in this case, for local development/testing, is to delete the <code>~/.kube</code> folder with its configs entirely and init the cluster from scratch.</p>
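<p>A minimal sketch of that clean-up, assuming Docker Desktop is the only local cluster you need (back up the folder rather than deleting it outright):</p> <pre><code># keep a backup of the old config instead of deleting it
mv ~/.kube ~/.kube.backup

# then reset Kubernetes from Docker Desktop:
# Preferences -&gt; Kubernetes -&gt; Reset Kubernetes Cluster (or disable/re-enable Kubernetes)

# verify that only the docker-desktop context is left and the API is reachable again
kubectl config get-contexts
kubectl get pods
</code></pre>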
<p>I am new to the Kubernetes world but some time ago I developed a Kubernetes operator using OperatorSDK and Golang. I was using the cluster-admin role for running the operator pod but now I want to reduce the resources the operator can work with.</p> <p>Is there some tool that can scan the code of the operator and generate an appropriate clusterrole? Or is there some clever way to find what resources are used by the operator?</p>
<p>Assuming that you need to add privileges to your clusterrole because your controller is reconciling Kubernetes workloads itself, you could use the operator-builder project (see <a href="https://github.com/vmware-tanzu-labs/operator-builder" rel="nofollow noreferrer">https://github.com/vmware-tanzu-labs/operator-builder</a>) to do it for you. The code that does this automatically for you is found at <a href="https://github.com/vmware-tanzu-labs/operator-builder/blob/main/internal/workload/v1/rbac.go#L104" rel="nofollow noreferrer">https://github.com/vmware-tanzu-labs/operator-builder/blob/main/internal/workload/v1/rbac.go#L104</a> .</p> <p>The pattern from OperatorSDK will be familiar to what you are currently doing as OperatorSDK and Operator Builder are both plugins to Kubebuilder, and thus follow similar patterns (e.g. <code>&lt;command&gt; init &lt;args&gt;</code> and <code>&lt;command&gt; create api &lt;args&gt;</code>.</p> <p><strong>EXAMPLE:</strong></p> <p>Your config (project-specific construct) would look like this:</p> <pre class="lang-yaml prettyprint-override"><code>name: webstore kind: StandaloneWorkload spec: api: domain: acme.com group: apps version: v1alpha1 kind: WebStore clusterScoped: false resources: - resources.yaml </code></pre> <p>Basically your input (<code>resources.yaml</code> from the above config) would be something like this:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: webstore-deploy spec: replicas: 2 # +operator-builder:field:name=webStoreReplicas,default=2,type=int selector: matchLabels: app: webstore template: metadata: labels: app: webstore spec: containers: - name: webstore-container #+operator-builder:field:name=webstoreImage,default=&quot;nginx:1.17&quot;,type=string,description=&quot;Defines the web store image&quot; image: nginx:1.17 ports: - containerPort: 8080 --- apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: webstore-ing annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: app.acme.com http: paths: - path: / backend: serviceName: webstorep-svc servicePort: 80 --- kind: Service apiVersion: v1 metadata: name: webstore-svc # +operator-builder:field:name=serviceName,type=string,default=&quot;webstore-svc&quot; spec: selector: app: webstore ports: - protocol: TCP port: 80 targetPort: 8080 </code></pre> <p>Running the following commands:</p> <pre class="lang-sh prettyprint-override"><code>operator-builder init \ --workload-config &lt;path_to_config&gt; \ --repo github.com/acme/acme-cnp-mgr \ --skip-go-version-check operator-builder create \ create api \ --workload-config &lt;path_to_config&gt; \ --controller \ --resource </code></pre> <p>Your output would be something like (in the controller file):</p> <pre class="lang-golang prettyprint-override"><code>// +kubebuilder:rbac:groups=apps.acme.com,resources=webstores,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=apps.acme.com,resources=webstores/status,verbs=get;update;patch // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=networking.k8s.io,resources=ingresses,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=services,verbs=get;list;watch;create;update;patch;delete </code></pre> <p>When you run the <code>make manifests</code> command, your RBAC is generated based upon the correct Kubebuilder markers.</p> <p><strong>DISCLAIMER:</strong> I am a 
key contributor to the project and of course think that this will be helpful to automate the generation of RBAC markers. :)</p>
<p>I have minikube running on Ubuntu 22.04. When I update a docker image, push it to dockerhub, delete the previous deployment, and apply the deployment the result in the log is from code that is several hours old.</p> <p>I see the service is gone after doing <code>kubectl delete -f deploy.yml</code> and exists after doing an <code>apply</code>, but the logs show output from old code.</p> <p>When I do a <code>docker run ...</code> on the local docker image that was pushed to dockerhub the output shows the new code. I've verified that <code>docker push ...</code> has uploaded the new code.</p> <p>Is this a bug in minikube? Here's my deployment script.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: metadata spec: replicas: 2 selector: matchLabels: app: metadata template: metadata: labels: app: metadata spec: containers: - name: metadata image: dwschulze/metadata:1.0.0 imagePullPolicy: IfNotPresent ports: - containerPort: 8081 --- apiVersion: v1 kind: Service metadata: name: metadata spec: type: NodePort ports: - name: http port: 80 targetPort: 8081 selector: app: metadata </code></pre>
<p>In your deployment script you have:</p> <pre><code>imagePullPolicy: IfNotPresent </code></pre> <p>You need to change it to:</p> <pre><code>imagePullPolicy: Always </code></pre> <p>The &quot;Always&quot; value makes Kubernetes pull the image every time a pod is deployed.</p> <p>I hope I helped :)</p>
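<p>As a follow-up, a small hedged sketch of how to pick the change up (assuming the Deployment from the question is named <code>metadata</code>):</p> <pre><code># re-apply the manifest with the new pull policy
kubectl apply -f deploy.yml

# force the Deployment to recreate its pods so the image is pulled again
kubectl rollout restart deployment/metadata
</code></pre>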
<p>Service type: NodePort</p> <p>Problem: can't access clusterIP:NodePort</p> <p>I found a kube-proxy pod log like the one below:</p> <p>&quot;can't open port, skipping it&quot; err=&quot;listen tcp4 :32060: bind: address already in use&quot; port={Description:nodePort for default/network-example2 IP: IPFamily:4 Port:32060 Protocol:TCP}</p> <p>What is the problem?</p>
<p>This seems to be caused by a reported <a href="https://github.com/kubernetes/kubernetes/issues/107170" rel="nofollow noreferrer">bug</a> in the kube-proxy versions after v1.20.x (mine is v1.23.4). This <a href="https://github.com/kubernetes/kubernetes/pull/107413" rel="nofollow noreferrer">fix</a> is merged in the upcoming v1.24 release.</p> <p>It is also confirmed in this <a href="https://github.com/kubernetes/kubernetes/issues/107297" rel="nofollow noreferrer">issue</a> that there would be no error if downgraded to the earlier release v1.20.1.</p>
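<p>If you want to confirm which kube-proxy version you are running before upgrading or downgrading, a quick check (assuming the standard <code>k8s-app=kube-proxy</code> label) could be:</p> <pre><code># print the image (and therefore the version tag) of the running kube-proxy
kubectl get pods -n kube-system -l k8s-app=kube-proxy \
  -o jsonpath='{.items[0].spec.containers[0].image}'
</code></pre>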
<p>I am trying to execute below commands in a Kubeflow(v1.4.1) Jupyter Notebook.</p> <pre><code>KServe = KServeClient() KServe.create(isvc) </code></pre> <p>I am getting mentioned error while attempting to execute above mentioned command.</p> <pre><code>ApiException: (403) Reason: Forbidden HTTP response headers: HTTPHeaderDict({'Audit-Id': '86bb1b59-20ae-4127-9732-d0355671b12f', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'X-Kubernetes-Pf-Flowschema-Uid': 'a5a5d542-8a9a-4031-90d9-4faf01914391', 'X-Kubernetes-Pf-Prioritylevel-Uid': '6846984d-14c5-4f4d-9251-fe97d91b17fc', 'Date': 'Thu, 02 Jun 2022 07:53:30 GMT', 'Content-Length': '429'}) HTTP response body: {&quot;kind&quot;:&quot;Status&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;metadata&quot;:{},&quot;status&quot;:&quot;Failure&quot;,&quot;message&quot;:&quot;inferenceservices.serving.kserve.io is forbidden: User \&quot;system:serviceaccount:kubeflow-user-example-com:default-editor\&quot; cannot create resource \&quot;inferenceservices\&quot; in API group \&quot;serving.kserve.io\&quot; in the namespace \&quot;kubeflow-user-example-com\&quot;&quot;,&quot;reason&quot;:&quot;Forbidden&quot;,&quot;details&quot;:{&quot;group&quot;:&quot;serving.kserve.io&quot;,&quot;kind&quot;:&quot;inferenceservices&quot;},&quot;code&quot;:403} </code></pre> <p>As a mitigation step I have added underlying manifests to my Kubernetes cluster via kubectl apply -f &lt;manifest-location.yaml&gt;</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: sla-manager-service-role namespace: default //(have tried it with kubeflow-user-example-com as well) labels: app: sla-manager-app rules: - apiGroups: [&quot;serving.kserve.io&quot;] # &quot;&quot; indicates the core API group resources: [&quot;inferenceservices&quot;] verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;, &quot;create&quot;, &quot;update&quot;, &quot;patch&quot;, &quot;delete&quot;] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: role-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: sla-manager-service-role subjects: - kind: ServiceAccount name: default //(have added it with default-editor as well) namespace: kubeflow-user-example-com </code></pre> <p>But this does not resolved the issue.</p> <p>Output I am receiving for</p> <pre><code>kubectl auth can-i create inferenceservice --as=system:serviceaccount:kubeflow-user-example-com:default-editor -n default (Output) no </code></pre> <pre><code>kubectl auth can-i create inferenceservices (Output) yes </code></pre> <p>Can you please help me with what I am missing here?</p>
<p>It turned out that KServe is not compatible with Kubeflow version 1.4 and works for Kubeflow version &gt;=1.5.</p> <p>Switching to kfserving 0.6 resolved my issue.</p>
<p>I write the following content to <code>/etc/modules-load.d/ipvs.conf</code></p> <pre><code>ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4 </code></pre> <p>Then I execute <code>sudo systemctl restart systemd-modules-load.service</code>. However, its error shows,</p> <pre><code>Job for systemd-modules-load.service failed because the control process exited with error code. </code></pre> <p>Then I run <code>sudo systemctl status systemd-modules-load.service</code>, the error message shows that</p> <pre><code>... Failed to find module 'ip_vs_wrr' Failed to find module 'ip_vs_sh' ... </code></pre> <p>Then I use <code>modprobe ip_vs_wrr</code>, it returns the error message,</p> <pre><code>modprobe: FATAL: Module ip_vs_wrr not found in directory /lib/modules/4.9.201-tegra </code></pre> <p>I suppose that my system doesn't have <code>ip_vs_wrr</code> and <code>ip_vs_sh</code> module. My kernel version is <code>Linux version 4.9.253-tegra</code> and the system is <code>Ubuntu 18.04.5 LTS</code>. How can I load the <code>ip_vs_wrr</code> and <code>ip_vs_sh</code> module correctly? If I don't load these modules, can I still use kubernets successfully?</p>
<p>I'll answer to the last question which is directly related to kubernetes:</p> <blockquote> <p>If I don't load these modules, can I still use kubernets successfully?</p> </blockquote> <p>The answer is <strong>yes</strong>, you can use kubernetes.</p> <p>There are 3 different modes of <code>kube-proxy</code> which can be used in kubernetes cluster:</p> <ul> <li>userspace (legacy, not recommended anymore)</li> <li>iptables (default)</li> <li>ipvs</li> </ul> <p>By default <code>iptables</code> is used. Using <code>ipvs</code> is reasonable if there are more than <code>1000</code> services within the cluster. For better understanding and comparison, you can read <a href="https://www.tigera.io/blog/comparing-kube-proxy-modes-iptables-or-ipvs/" rel="nofollow noreferrer">this article</a>.</p> <p>If <code>ipvs</code> kernel modules are not available, cluster will start in <code>iptables</code> mode. This is from <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs" rel="nofollow noreferrer">official documentation</a>:</p> <blockquote> <p>Note: To run kube-proxy in IPVS mode, you must make IPVS available on the node before starting kube-proxy.</p> <p>When kube-proxy starts in IPVS proxy mode, it verifies whether IPVS kernel modules are available. If the IPVS kernel modules are not detected, then kube-proxy falls back to running in iptables proxy mode.</p> </blockquote>
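<p>If you want to double-check which mode your cluster ended up with, a rough check on a kubeadm-based cluster (assuming the default <code>kube-proxy</code> ConfigMap and the <code>k8s-app=kube-proxy</code> label) looks like this:</p> <pre><code># configured mode; an empty value means the default, iptables
kubectl -n kube-system get configmap kube-proxy -o yaml | grep &quot;mode:&quot;

# what the running kube-proxy actually chose
# (look for &quot;Using iptables Proxier&quot; or &quot;Using ipvs Proxier&quot; in the output)
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i proxier
</code></pre>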
<p>In Kubernetes, to enable client-certificate authN, the annotation nginx.ingress.kubernetes.io/auth-tls-verify-client can be used in an ingress. Will client-cert authN work even if I don't do TLS termination in that ingress? For instance, in this <a href="https://kubernetes.github.io/ingress-nginx/examples/auth/client-certs/ingress.yaml" rel="nofollow noreferrer">ingress</a>, will client-cert authN still work if I remove the tls block from the ingress?</p> <pre><code>tls: - hosts: - mydomain.com secretName: tls-secret </code></pre> <p>(More info: I have two ingresses for the same host, one which has a TLS section, and another ingress which has rule for a specific api-path, and has a client-cert section but no TLS section).</p> <p>Also, if the request is sent on <code>http</code> endpoint (not <code>https</code>) I observed that the client-cert is ignored even if the annotation value is set to <code>on</code>. Is this a documented behavior?</p>
<p>If you define two ingresses as described then a certificate will be required unless you specify auth-tls-verify-client as optional. See the documentation mentioned in the comments.</p> <p>Also TLS is required if you want to do client certificate authentication. The client certificate is used during the TLS handshake which is why specifying client certificates for one ingress applies to all where the host is the same (eg <a href="http://www.example.com" rel="nofollow noreferrer">www.example.com</a>)</p>
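<p>For reference, a hedged sketch of the relevant ingress-nginx annotations (the secret name <code>default/ca-secret</code> is only an example):</p> <pre><code># secret holding the CA used to verify client certificates, as namespace/name
nginx.ingress.kubernetes.io/auth-tls-secret: &quot;default/ca-secret&quot;
# &quot;on&quot; = always require a client certificate,
# &quot;optional&quot; = verify it only when the client presents one
nginx.ingress.kubernetes.io/auth-tls-verify-client: &quot;optional&quot;
</code></pre>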
<p>I'm trying to pass mysql configuration with kubernetes configmap and PV , PVC and i'm getting error.</p> <pre><code>chown: /var/lib/mysql/..data: Read-only file system chown: /var/lib/mysql/crud.sql: Read-only file system chown: /var/lib/mysql/..2021_12_29_12_32_17.053559530: Read-only file system chown: /var/lib/mysql/..2021_12_29_12_32_17.053559530: Read-only file system chown: /var/lib/mysql/: Read-only file system chown: /var/lib/mysql/: Read-only file system </code></pre> <p>if im not <code>initContainers</code> using the im getting error:</p> <pre><code>2021-12-29 12:49:05+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.27-1debian10 started. chown: changing ownership of '/var/lib/mysql/': Read-only file system chown: changing ownership of '/var/lib/mysql/..data': Read-only file system chown: changing ownership of '/var/lib/mysql/crud.sql': Read-only file system chown: changing ownership of '/var/lib/mysql/..2021_12_29_12_43_00.339135384': Read-only file system chown: changing ownership of '/var/lib/mysql/..2021_12_29_12_43_00.339135384/crud.sql': Read-only file system </code></pre> <p><strong>Deployment.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: {{ .Values.deployment.mysql.name }} namespace: {{ .Values.namespace }} spec: selector: matchLabels: app: {{ .Values.deployment.mysql.name }} strategy: type: Recreate template: metadata: labels: app: {{ .Values.deployment.mysql.name }} spec: # initContainers: # - name: chmod-er # image: busybox:latest # command: # - /bin/chown # - -R # - &quot;999:999&quot; # - /var/lib/mysql # volumeMounts: # - name: cm2 # mountPath: /var/lib/mysql containers: - image: {{ .Values.deployment.mysql.image }} name: {{ .Values.deployment.mysql.name }} securityContext: runAsUser: 0 env: - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: mysql-secret key: mysql-root-password ports: - containerPort: {{ .Values.deployment.mysql.port }} name: {{ .Values.deployment.mysql.name }} volumeMounts: - name: cm2 mountPath: /var/lib/mysql/ readOnly: false volumes: - name: mysqlvolume persistentVolumeClaim: claimName: mysqlvolume readOnly: false - name: cm2 configMap: name: cm2 </code></pre> <p>as you can see i tried with <code>initContainers</code> but sill got the same error.</p> <p><strong>pv.yaml</strong></p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: mysqlvolume spec: accessModes: - ReadWriteOnce capacity: storage: 1Gi hostPath: path: C:\Users\ib151w\.minikube\volume </code></pre> <p><strong>pvc.yaml</strong></p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysqlvolume namespace: {{ .Values.namespace }} namespace: app spec: storageClassName: standard accessModes: - ReadWriteOnce resources: requests: storage: 50Mi </code></pre> <p><strong>configmap.yaml</strong></p> <pre><code>apiVersion: v1 data: crud.sql: |- /*!40101 SET NAMES utf8 */; /*!40014 SET FOREIGN_KEY_CHECKS=0 */; /*!40101 SET SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */; /*!40111 SET SQL_NOTES=0 */; CREATE DATABASE /*!32312 IF NOT EXISTS*/ crud /*!40100 DEFAULT CHARACTER SET utf8mb4 */; USE crud; DROP TABLE IF EXISTS books; CREATE TABLE `books` ( `id` int NOT NULL AUTO_INCREMENT, `first_name` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL, `last_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL, `email` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL, `phone` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL, 
`department` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL, `manager` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL, `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, `updated_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci; kind: ConfigMap metadata: creationTimestamp: &quot;2021-12-29T10:24:41Z&quot; name: cm2 namespace: app resourceVersion: &quot;6526&quot; uid: 31ab7ef8-94a4-41d3-af19-ddfceb04e124 </code></pre> <p>im just trying to pass this configuration to the pod when it starting soo the mysql pod will be with the databases i want .</p>
<p>You're mounting a configMap (<code>cm2</code>), and k8s mounts <code>configMap</code>s as read-only. Did you mean to mount <code>mysqlvolume</code> under <code>/var/lib/mysql/</code> and mount <code>cm2</code> somewhere else? <code>/var/lib/mysql</code> is the data directory where MySQL writes its <code>tablespace</code> data, and it's not where you mount a <code>configMap</code>.</p> <p>If so, this:</p> <pre><code>volumeMounts: - name: cm2 mountPath: /var/lib/mysql/ </code></pre> <p>should be changed to:</p> <pre><code>volumeMounts: - name: mysqlvolume mountPath: /var/lib/mysql/ - name: cm2 mountPath: /db_scripts # or any other absolute path </code></pre> <p>You also need to execute the command <code>mysql &lt; /db_scripts/crud.sql</code> in the container to create the tables.</p>
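<p>As a side note (not required for the read-only error itself): the official <code>mysql</code> image runs any <code>*.sql</code> files it finds under <code>/docker-entrypoint-initdb.d</code> on the first start with an empty data directory, so mounting the ConfigMap there would avoid the manual <code>mysql &lt;</code> step. A sketch, keeping the volume names from the question:</p> <pre><code>volumeMounts:
- name: mysqlvolume
  mountPath: /var/lib/mysql/
- name: cm2
  mountPath: /docker-entrypoint-initdb.d   # crud.sql is executed automatically on first init
</code></pre>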
<p>I would like to manage configuration for a service using terraform to a GKE cluster defined using external terraform script.</p> <p>I created the configuration using <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret" rel="nofollow noreferrer"><code>kubernetes_secret</code></a>.</p> <p>Something like below</p> <pre><code>resource &quot;kubernetes_secret&quot; &quot;service_secret&quot; { metadata { name = &quot;my-secret&quot; namespace = &quot;my-namespace&quot; } data = { username = &quot;admin&quot; password = &quot;P4ssw0rd&quot; } } </code></pre> <p>And I also put this google client configuration to configure the kubernetes provider.</p> <pre><code>data &quot;google_client_config&quot; &quot;current&quot; { } data &quot;google_container_cluster&quot; &quot;cluster&quot; { name = &quot;my-container&quot; location = &quot;asia-southeast1&quot; zone = &quot;asia-southeast1-a&quot; } provider &quot;kubernetes&quot; { host = &quot;https://${data.google_container_cluster.cluster.endpoint}&quot; token = data.google_client_config.current.access_token cluster_ca_certificate = base64decode(data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate) } </code></pre> <p>when I apply the terraform it shows error message below</p> <p><a href="https://i.stack.imgur.com/JeROi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JeROi.png" alt="error" /></a></p> <p><code>data.google_container_cluster.cluster.endpoint is null</code></p> <p>Do I miss some steps here?</p>
<p>I just had the same/similar issue when trying to initialize the kubernetes provider from a google_container_cluster data source. <code>terraform show</code> just displayed all null values for the data source attributes. The fix for me was to specify the project in the data source, e.g.,</p> <pre><code>data &quot;google_container_cluster&quot; &quot;cluster&quot; { name = &quot;my-container&quot; location = &quot;asia-southeast1&quot; zone = &quot;asia-southeast1-a&quot; project = &quot;my-project&quot; } </code></pre> <p><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/container_cluster#project" rel="nofollow noreferrer">https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/container_cluster#project</a></p> <blockquote> <p>project - (Optional) The project in which the resource belongs. If it is not provided, the provider project is used.</p> </blockquote> <p>In my case the google provider was pointing to a different project than the one containing the cluster I wanted to get info about.</p> <p>In addition, you should be able to remove the <code>zone</code> attribute from that block. <code>location</code> should refer to the zone if it is a zonal cluster or the region if it is a regional cluster.</p>
<p>I am trying to install loki with helm</p> <pre><code>$ helm upgrade --install loki grafana/loki-stack </code></pre> <p>I got the following error msg:</p> <pre><code>Release &quot;loki&quot; does not exist. Installing it now. Error: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource: podsecuritypolicies.policy &quot;loki&quot; is forbidden: User &quot;secret user :)&quot; cannot get resource &quot;podsecuritypolicies&quot; in API group &quot;policy&quot; at the cluster scope $ helm list -all NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION </code></pre> <p>I'm a simple user, but I can make deployment/pods via yaml files manual. I need to use helm charts.</p>
<p>It seems that your User has insufficient privileges to create policies. You need to ask your cluster administrator for more privileges, unless you can assign them yourself to this user. I'm providing example yaml below to achieve that. First, create ClusterRole with proper privileges:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: &lt;role name&gt; rules: - apiGroups: ['policy'] resources: ['podsecuritypolicies'] verbs: ['get'] </code></pre> <p>Then, you need to bind this ClusterRole to user(s):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: &lt;binding name&gt; roleRef: kind: ClusterRole name: &lt;role name&gt; apiGroup: rbac.authorization.k8s.io subjects: # Authorize all service accounts in a namespace (recommended): - kind: Group apiGroup: rbac.authorization.k8s.io name: system:serviceaccounts:&lt;authorized namespace&gt; # Authorize specific service accounts (not recommended): - kind: ServiceAccount name: &lt;authorized service account name&gt; namespace: &lt;authorized pod namespace&gt; # Authorize specific users (not recommended): - kind: User apiGroup: rbac.authorization.k8s.io name: &lt;authorized user name&gt; </code></pre> <p>Go <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#enabling-pod-security-policies" rel="nofollow noreferrer">here</a> for more detailed explanation.</p>
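<p>After applying both manifests you can verify the new permission with <code>kubectl auth can-i</code> (the file names here are just placeholders):</p> <pre><code>kubectl apply -f cluster-role.yaml -f cluster-role-binding.yaml

# should now print &quot;yes&quot; for the user or service account you bound
kubectl auth can-i get podsecuritypolicies --as=&lt;authorized user name&gt;
</code></pre>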
<p>Good day!</p> <p>I am facing a strange problem. I have a standard deployment that uses a public image. But when I create it, I get the error <strong>ImagePullBackOff</strong></p> <pre><code>$ kubectl get pods </code></pre> <p>result</p> <pre><code>api-gateway-deployment-74968fbf5c-cvqwj 0/1 ImagePullBackOff 0 6h23m api-gateway-gateway-deployment-74968fbf5c-hpdxb 0/1 ImagePullBackOff 0 6h23m api-gateway-gateway-deployment-74968fbf5c-rctv6 0/1 ImagePullBackOff 0 6h23m </code></pre> <p>my deployment</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: api-gateway-deployment labels: app: api-gateway-deployment spec: replicas: 3 selector: matchLabels: app: api-gateway-deployment template: metadata: labels: app: api-gateway-deployment spec: containers: - name: api-gateway-node image: creatorsprodhouse/api-gateway:latest imagePullPolicy: Always ports: - containerPort: 80 </code></pre> <p>I am using the docker driver, is there anything I can do wrong?</p> <pre><code>minikube start --driver=docker </code></pre>
<p>I think your internet connection is slow. The timeout to pull an image is <code>120</code> seconds, so the kubelet could not pull the image in under <code>120</code> seconds.</p> <p>First, pull the image via <code>Docker</code>:</p> <pre class="lang-bash prettyprint-override"><code>docker image pull creatorsprodhouse/api-gateway:latest </code></pre> <p>Then load the downloaded image into <code>minikube</code>:</p> <pre class="lang-bash prettyprint-override"><code>minikube image load creatorsprodhouse/api-gateway:latest </code></pre> <p>And then everything will work, because now the kubelet will use the image that is stored locally.</p>
<p>I am currently setting up a kubernetes cluster (bare ubuntu servers). I deployed metallb and ingress-nginx to handle the ip and service routing. This seems to work fine. I get a response from nginx, when I wget the externalIP of the ingress-nginx-controller service (works on every node). But this only works inside the cluster network. How do I access my services (the ingress-nginx-controller, because it does the routing) from the internet through a node/master servers ip? I tried to set up routing with iptables, but it doesn't seem to work. What am I doing wrong and is it the best practise ?</p> <pre><code>echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward iptables -t nat -A PREROUTING -i eth0 -p tcp -d &lt;Servers IP&gt; --dport 80 -j DNAT --to &lt;ExternalIP of nginx&gt;:80 iptables -A FORWARD -p tcp -d &lt;ExternalIP of nginx&gt; --dport 80 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT iptables -F </code></pre> <p>Here are some more information:</p> <pre><code>kubectl get services -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.103.219.111 198.51.100.1 80:31872/TCP,443:31897/TCP 41h ingress-nginx-controller-admission ClusterIP 10.108.194.136 &lt;none&gt; 443/TCP 41h </code></pre> <p>Please share some thoughts Jonas</p>
<p>Bare-metal clusters are a bit tricky to set up because you need to create and manage the point of contact to your services. In cloud environments these are available on demand.</p> <p>I followed <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">this doc</a> and can assume that your load balancer seems to be working fine, as you are able to <code>curl</code> this IP address. However, you are trying to get a response when calling a domain. For this you need some app running inside your cluster, which is exposed to a hostname via an Ingress component.</p> <p>I'll take you through the steps to achieve that. First, create a Deployment to run a webservice; I'm gonna use a simple nginx example:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment spec: selector: matchLabels: app: nginx replicas: 2 template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 </code></pre> <p>Second, create a Service of type LoadBalancer to be able to access it externally. You can do that by simply running this command: <code>kubectl expose deployment nginx-deployment --type=LoadBalancer --name=&lt;service_name&gt;</code> If your software load balancer is set up correctly, this should give an external IP address to the Deployment you created before.</p> <p>Last but not least, create an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> resource which will manage external access and name-based virtual hosting. Example:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: &lt;ingress_name&gt; spec: rules: - host: &lt;your_domain&gt; http: paths: - path: / pathType: Prefix backend: service: name: &lt;service_name&gt; port: number: 80 </code></pre> <p>Now you should be able to use your domain name for external access to your cluster.</p>
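<p>To verify the whole chain from outside the cluster, a rough check could look like this (exact names depend on your setup):</p> <pre><code># note the EXTERNAL-IP assigned by MetalLB to the ingress controller
kubectl get svc -n ingress-nginx ingress-nginx-controller

# simulate a request for your domain before DNS is set up
curl -H &quot;Host: &lt;your_domain&gt;&quot; http://&lt;EXTERNAL-IP&gt;/
</code></pre> <p>Once that works, point your public DNS record at that external IP.</p>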
<p>For some reason, the <code>_confluent_telemetry_metrics</code> gets automatically enabled. This happens even though Confluent Telemetry Reporter is turned off with <code>telemetry.enabled=false</code>. This is with Confluent Operator with Kubernetes on my laptop (Confluent Platform v6.0).</p> <pre><code>[INFO] 2020-12-01 07:21:41,923 [main] io.confluent.telemetry.exporter.kafka.KafkaExporterConfig logAll - KafkaExporterConfig values: enabled = true topic.name = _confluent-telemetry-metrics topic.partitions = 12 topic.replicas = 3 </code></pre> <p>This results in boatloads of errors because it repeatedly tries to create that topic with 3 replicas even though Kafka is configured with only 1 replica.</p> <p>How does one turn this off? I don't see this setting in Kafka's <code>server.properties</code> or in the Operator's <code>values.yaml</code> file. I searched in several places but wasn't able to find any documentation for this setting, or for Kafka Exporter Config (as in the log excerpt above). No answers on Confluent's Slack community either.</p> <p>Thanks so much for any help you can provide!</p>
<p>I had exactly the same problem and came across this question. I know the question is old, but I got a solution from Confluent support: you have to set <code>confluent.reporters.telemetry.auto.enable</code> to <code>false</code> to stop this topic from being fed. See <a href="https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html#confluent.reporters.telemetry.auto.enable" rel="nofollow noreferrer">https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html#confluent.reporters.telemetry.auto.enable</a> for side effects (it disables self-balancing).</p>
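<p>For a plain broker configuration that would be a single line in <code>server.properties</code> (or wherever your deployment lets you pass broker config overrides):</p> <pre><code># stop the broker-side telemetry reporter from auto-creating and feeding _confluent-telemetry-metrics
confluent.reporters.telemetry.auto.enable=false
</code></pre>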
<p>I am a Java developer and my DevOps expertise is not deep. Currently we use containers everywhere: Docker, k8s, etc.</p> <p>In most cases we put a single application into a separate container and put it into a separate k8s pod. Are there any scenarios when we should put</p> <ul> <li>2 or more applications into the same container</li> <li>2 or more containers into the same pod?</li> </ul>
<p>Multiple containers <code>in a Pod</code>:</p> <p>A helper or side application to your main application -- called a <code>Side-Car</code> container. Examples of side-car containers would be: a back-up container, authentication, synchronising, etc. A minimal example Pod is sketched at the end of this answer.</p> <hr /> <p>Also we have <code>Pause Containers</code>:</p> <ul> <li>A <code>Pause</code> container runs in each Pod</li> <li>Also called <code>sandbox</code> containers</li> <li>Reserves and holds the network namespace</li> <li>Enables communication between containers</li> </ul> <hr /> <p>More Resources:</p> <ul> <li><a href="https://medium.com/bb-tutorials-and-thoughts/kubernetes-learn-sidecar-container-pattern-6d8c21f873d" rel="nofollow noreferrer">Learn Sidecar Container Pattern</a></li> </ul>
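<p>A minimal sketch of the side-car pattern (image names and paths are just placeholders): the two containers share an <code>emptyDir</code> volume, the main container writes logs and the side-car reads/ships them.</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}
  containers:
  - name: main-app                 # main application container
    image: my-app:latest           # hypothetical image
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app
  - name: log-shipper              # side-car: reads what the main container writes
    image: busybox:latest
    command: [&quot;sh&quot;, &quot;-c&quot;, &quot;touch /var/log/app/app.log; tail -f /var/log/app/app.log&quot;]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app
</code></pre>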
<p>I am trying to view portal that build with angular uses netcore backend runs on docker swarm fluently. When I try to deploy angular image on openshift, I get following error;</p> <pre><code>[emerg] 1#1: bind() to 0.0.0.0:80 failed (13: Permission denied) nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) </code></pre> <p>First I created nginx deployment as root user using &quot;nginx:1.19.6-alpine&quot; and defined service account(anyuid), it works fine. Then I try to create openshift deployment with &quot;nginxinc/nginx-unprivileged&quot; image to run as non-root user. I had change nginx.conf according to &quot;nginxinc/nginx-unprivileged&quot; image. I defined service account again but it throws &quot;bind() to 0.0.0.0:80 failed (13: Permission denied)&quot; error.</p> <p>Container 80 port open. There was no ingress. Service uses 80 port to expose route. What could be the solution ?</p> <p>Here is my Dockerfile;</p> <pre><code>### STAGE 1: Build ### FROM node:12.18-alpine as build-env ENV TZ=Europe/Istanbul RUN export NG_CLI_ANALYTICS=false COPY ng/package.json ng/package-lock.json ng/.npmrc ./ COPY ng/projects/package.json ./projects/package.json RUN npm install &amp;&amp; pwd &amp;&amp; ls -ltra COPY ./ng/ ./ RUN time node --max_old_space_size=12000 node_modules/@angular/cli/bin/ng build project --configuration production WORKDIR /usr/src/app/dist/ COPY ng/.npmrc ./ RUN npm publish WORKDIR /usr/src/app/ RUN time node --max_old_space_size=12000 node_modules/@angular/cli/bin/ng build portal --configuration production ### STAGE 2: Run ### FROM nginxinc/nginx-unprivileged:1.23-alpine as runtime-env ENV TZ=Europe/Istanbul COPY ng/nginx.conf /etc/nginx/nginx.conf COPY ng/nginx.template.conf /etc/nginx/nginx.template.conf COPY --from=build-env /usr/src/app/dist/portal/ /usr/share/nginx/html/ CMD [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;envsubst &lt; /usr/share/nginx/html/assets/env.template.js &gt; /usr/share/nginx/html/assets/env.js &amp;&amp; envsubst '$API_URL' &lt; /etc/nginx/nginx.template.conf &gt; /etc/nginx/conf.d/default.conf &amp;&amp; exec nginx -g 'daemon off;'&quot;] </code></pre> <p>nginx.conf file :</p> <pre><code> worker_processes auto; # nginx.conf file taken from nginxinc_nginx-unprivileged image error_log /var/log/nginx/error.log notice; pid /tmp/nginx.pid; events { worker_connections 1024; } http { proxy_temp_path /tmp/proxy_temp; client_body_temp_path /tmp/client_temp; fastcgi_temp_path /tmp/fastcgi_temp; uwsgi_temp_path /tmp/uwsgi_temp; scgi_temp_path /tmp/scgi_temp; include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] &quot;$request&quot; ' '$status $body_bytes_sent &quot;$http_referer&quot; ' '&quot;$http_user_agent&quot; &quot;$http_x_forwarded_for&quot;'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } </code></pre> <p>nginx.template.conf</p> <pre><code>server { listen 80; server_name localhost; #charset koi8-r; #access_log /var/log/nginx/host.access.log main; location / { root /usr/share/nginx/html; try_files $uri $uri/ /index.html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } location /api { proxy_pass ${API_URL}; proxy_pass_request_headers on; #rewrite /api/(.*) /$1 break; } } </code></pre> <p>I have used all service 
accounts on deployment such as nonroot, hostaccess, hostmount-anyuid, priviledged, restricted and anyuid.</p> <p>Also I tried to add following command to dockerfile:</p> <pre><code>&quot;RUN chgrp -R root /var/cache/nginx /var/run /var/log/nginx &amp;&amp; \ chmod -R 770 /var/cache/nginx /var/run /var/log/nginx&quot; </code></pre> <p>Gets it from <a href="https://stackoverflow.com/questions/54360223/openshift-nginx-permission-problem-nginx-emerg-mkdir-var-cache-nginx-cli">here</a>.</p>
<p>I have found the mistake. I had to change the listen port in <code>nginx.template.conf</code> from <code>80</code> to <code>8080</code>. But <code>openshift</code> did not roll out the updated deployment, so I deployed a new image, which fixed the problem.</p>
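<p>For anyone hitting the same thing: the unprivileged nginx image cannot bind ports below 1024 as a non-root user, so the server block in <code>nginx.template.conf</code> ends up looking roughly like this (and the Service/route must then target <code>8080</code> instead of <code>80</code>):</p> <pre><code>server {
    listen 8080;
    server_name localhost;
    ...
}
</code></pre>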
<p>I have a problem with <code>k8s</code> hosted on my own bare-metal infrastructure.</p> <p>The k8s was installed via <code>kubeadm init</code> without special configuration, and then I apply <code>CNI</code> <a href="https://www.weave.works/docs/net/latest/kubernetes/kube-addon/#-installation" rel="nofollow noreferrer">plugin</a></p> <p>Everything works perfectly expects external DNS resolution from <code>Pod</code> to the external world (internet).</p> <p>For example:</p> <p>I have <code>Pod</code> with the name <code>foo</code>, if I invoke command <code>curl google.com</code> I receive error <code>curl: (6) Could not resolve host: google.com</code></p> <p>but if I invoke the same command on the same pod a second time I receive properly HTML</p> <pre><code>&lt;HTML&gt;&lt;HEAD&gt;&lt;meta http-equiv=&quot;content-type&quot; content=&quot;text/html;charset=utf-8&quot;&gt; &lt;TITLE&gt;301 Moved&lt;/TITLE&gt;&lt;/HEAD&gt;&lt;BODY&gt; &lt;H1&gt;301 Moved&lt;/H1&gt; The document has moved &lt;A HREF=&quot;http://www.google.com/&quot;&gt;here&lt;/A&gt;. &lt;/BODY&gt;&lt;/HTML&gt; </code></pre> <p>and if I repeat this command again I can receive errors with DNS resolution or HTML and so on. this behavior is random sometimes I must hit 10times and get an error and on 11 hits I can receive Html</p> <p>I also try to debug this error with <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">this</a> guide, but it does not help.</p> <p>Additional information: CoreDNS is up and running and have default config</p> <pre><code>apiVersion: v1 data: Corefile: | .:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } kind: ConfigMap metadata: name: coredns </code></pre> <p>and files <code>/etc/resolv.conf</code> looks fine</p> <pre><code>nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5 </code></pre> <p>the problem exists on <code>Centos 8</code>(master, <code>kubeadm init</code>) and on <code>Debian 10</code>(node, <code>kubeadm join</code>) <code>SELinux</code> in on <code>permissive</code> and <code>SWAP</code> is disabled</p> <p>it is looks like after install k8s and weavenet problem appear even on the host machine.</p> <p>I'm not certain where the problem came from either k8s or Linux. It started after I have installed k8s.</p> <p>what have I missed?</p>
<p>I can suggest using different CNI plugin and setting it up from scratch. Remember when using <code>kubeadm</code> , apply CNI plugin after you ran <code>kubeadm init</code>, then add worker nodes. <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model" rel="nofollow noreferrer">Here</a> you can find supported CNI plugins. If the problem still exists, it's probably within your OS.</p>
<p>Creating a Pod with spec <code>terminationGracePeriodSeconds</code> specified, I can't check whether this spec has been applied successfully using <code>kubectl describe</code>. How can I check whether <code>terminationGracePeriodSeconds</code> option has been successfully applied? I'm running kubernetes version 1.19.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mysql-client spec: serviceAccountName: test terminationGracePeriodSeconds: 60 containers: - name: mysql-cli image: blah command: [&quot;/bin/sh&quot;, &quot;-c&quot;] args: - sleep 2000 restartPolicy: OnFailure </code></pre>
<p>Assuming the pod is running successfully. You should be able to see the settings in the manifest.</p> <p><strong>terminationGracePeriodSeconds</strong> is available in v1.19 as per the following page. Search for &quot;terminationGracePeriodSeconds&quot; here. <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/</a></p> <p>Now try this command:</p> <pre><code>kubectl get pod mysql-client -o yaml | grep terminationGracePeriodSeconds -a10 -b10 </code></pre>
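<p>Alternatively, you can pull just that one field with jsonpath:</p> <pre><code>kubectl get pod mysql-client -o jsonpath='{.spec.terminationGracePeriodSeconds}'
</code></pre>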
<p>I have multiple microservices running on my AWS ECS Fargate cluster. Inside a task (pod), there will be multiple containers (1 container for core-business service and additional 3 sidecar containers). Here are the list of those containers:</p> <ul> <li>core-business service container (runs the core business service on specific port)</li> <li>consul-agent container (runs consul-agent and joins it with the consul-master)</li> <li>consul-template container (gets the service information from consul and updates the haproxy.cfg)</li> <li>haproxy container (takes the haproxy.cfg from consul-template and runs)</li> </ul> <p>All these containers are up and running fine as well. The problem is to reload the haproxy. Since the consul-template is responsible for updating the haproxy.cfg file, Should I need to add some configuration on consul-template itself to update the haproxy?</p> <p>Here is the command I am currently using for consul-template:</p> <pre><code>consul-template -consul-addr=xx.xx.xx.xx:8500 -template /etc/consul-template/data/haproxy.cfg.ctmpl:/etc/consul-template/data/haproxy.cfg </code></pre> <p>What can I try to achieve this?</p>
<p>I am currently having the exact same issue, but for Nginx.</p> <p>From my research, there is no good solution. The one that seems to work is to mount the volume</p> <pre><code>-v /var/run/docker.sock:/var/run/docker.sock </code></pre> <p>on the consul-template container.</p> <p>I see you are using :</p> <pre><code>consul-template -consul-addr=xx.xx.xx.xx:8500 -template /etc/consul-template/data/haproxy.cfg.ctmpl:/etc/consul-template/data/haproxy.cfg </code></pre> <p>in order to run a command after the consul-template renders a new config file, use something like:</p> <pre><code>consul-template -consul-addr=xx.xx.xx.xx:8500 -template /etc/consul-template/data/haproxy.cfg.ctmpl:/etc/consul-template/data/haproxy.cfg:docker run haproxy-container command-to-reload-haproxy </code></pre> <hr /> <p><strong>HINT</strong></p> <p>I find it more readable to use consul-template config files, the language is very well specified here: <a href="https://github.com/hashicorp/consul-template/blob/master/docs/configuration.md" rel="nofollow noreferrer">https://github.com/hashicorp/consul-template/blob/master/docs/configuration.md</a> .</p> <p>Using that approach you would have a config file (for example consul_template_config.hcl) for consul-template like:</p> <pre><code>consul { address = &quot;consul-address:8500&quot; } template { source = &quot;/etc/consul-template/data/haproxy.cfg.ctmpl&quot; destination = &quot;/etc/consul-template/data/haproxy.cfg&quot; command = &quot;docker run haproxy-container command-to-reload-haproxy&quot; } </code></pre> <p>Then, you run the consul-template container using</p> <pre><code>consul-template -config /path/to/consul_template_config.hcl </code></pre> <p>or using</p> <pre><code>consul-template -config /path/to/folder/containing-&gt;consul_template_config.hcl (This approach is more advanced, and lets you have consul-template-configs across multiple files, and not only in 1 like consul_template_config.hcl) </code></pre> <p>I know it is not a good solution to use this docker volume (security warnings), but it was what I could find about our use case.</p>
<p>I got an alert while configuring the monitoring module using <code>prometheus/kube-prometheus-stack 25.1.0</code>.</p> <p><strong>Alert</strong></p> <pre><code>[FIRING:1] KubeProxyDown - critical Alert: Target disappeared from Prometheus target discovery. - critical Description: KubeProxy has disappeared from Prometheus target discovery. Details: • alertname: KubeProxyDown • prometheus: monitoring/prometheus-kube-prometheus-prometheus • severity: critical </code></pre> <p>I think it is a new default rule in <code>kube-prometheus-stack 25.x.x</code>. It does not exist in <code>prometheus/kube-prometheus-stack 21.x.x</code>.</p> <p>The same issue happened in the EKS and minikube.</p> <p><strong>KubeProxyDown</strong> Rule</p> <pre><code>alert: KubeProxyDown expr: absent(up{job=&quot;kube-proxy&quot;} == 1) for: 15m labels: severity: critical annotations: description: KubeProxy has disappeared from Prometheus target discovery. runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubeproxydown summary: Target disappeared from Prometheus target discovery. </code></pre> <p>How can I resolve this issue?</p> <p>I would be thankful if anyone could help me</p>
<p>There was a change in <code>metrics-bind-address</code> in <code>kube-proxy</code>. Following the issues posted <a href="https://stackoverflow.com/questions/60734799/all-kubernetes-proxy-targets-down-prometheus-operator">here</a>, <a href="https://github.com/helm/charts/issues/16476" rel="noreferrer">here</a> and <a href="https://github.com/kubernetes/kubernetes/pull/74300" rel="noreferrer">here</a>. I can suggest the following. Change <code>kube-proxy</code> ConfigMap to different value:</p> <pre><code>$ kubectl edit cm/kube-proxy -n kube-system ## Change from metricsBindAddress: 127.0.0.1:10249 ### &lt;--- Too secure ## Change to metricsBindAddress: 0.0.0.0:10249 $ kubectl delete pod -l k8s-app=kube-proxy -n kube-system </code></pre>
<p>I'm trying to use Kubectl get namespaces command it is fetching the data.</p> <pre><code>kubectl get namespace NAME STATUS AGE default Active 1d kube-node-lease Active 1d kube-public Active 1d kube-system Active 1d </code></pre> <p>but I want to filter it with name only. So when u run the script it should show like this.</p> <pre><code>kubectl get namespace NAME default kube-node-lease kube-public kube-system </code></pre> <p>I've tried some powershell command but it is not working out for me.</p>
<p>Try any one of the command</p> <pre><code>kubectl get namespace --template '{{range .items}}{{.metadata.name}}{{&quot;\n&quot;}}{{end}}' </code></pre> <pre><code>kubectl get namespace | awk '{print $1}' </code></pre> <pre><code>kubectl get namespace --no-headers -o custom-columns=&quot;:metadata.name&quot; </code></pre> <pre><code>kubectl get namespace -o=name | sed &quot;s/^.\{10\}//&quot; </code></pre>
<p>I ran the command <code>kubectl cordon &lt;nameNode&gt;</code> and now I don't know how to make my node schedulable again?</p>
<pre><code>kubectl uncordon &lt;node name&gt; </code></pre> <p>Try the above command.</p>
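<p>Afterwards you can confirm the node is schedulable again -- its STATUS should no longer show <code>SchedulingDisabled</code>:</p> <pre><code>kubectl get nodes
</code></pre>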
<p>We are using client-go to talk to our Kubernetes cluster (the API versions: batchv1/appv1/corev1), and we mainly use three types of resources: Job, Deployment and Service.</p> <p>My question is: how do we judge when a Job or Deployment is ready and in running status?</p> <p>For a Job, we found that when batchV1.Spec.Active &gt; 0, the pods controlled by this Job may be in either pending or running status. So to check if a Kubernetes Job's pods are all in running status, do we have to enumerate every pod of the Kubernetes Job and check that they are all running before the Job counts as ready and running? Is there a simpler way to do that?</p> <p>And how about the Kubernetes Deployment and Service? Is there a simple way to check that a Deployment is ready?</p>
<p>To check the deployment status you need to check the pod status created by this deployment. example:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: myapp name: myapp spec: replicas: 2 selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - image: myimage name: myapp livenessProbe: # your desired liveness check </code></pre> <p>You can get the desired PodTemplate from deployments using client-go</p> <p>For example:</p> <pre class="lang-golang prettyprint-override"><code>clientset := kubernetes.NewForConfigOrDie(config) deploymentClient := clientset.AppsV1().Deployments(&quot;mynamespace&quot;) deployment, err := deploymentClient.Get(&quot;myapp&quot;, metav1.GetOptions{}) for _, container := range deployment.Spec.Template.Spec.Containers { container.LivenessProbe // add your logic } </code></pre> <p><strong>Note:</strong> The Deployment only contains the desired PodTemplate, so to look at any status, you have to look at the created Pods. Pods</p> <p>You can list the Pods created from the deployment by using the same labels as in the selector of the Deployment.</p> <p><strong>Example list of Pods:</strong></p> <pre class="lang-golang prettyprint-override"><code>pods, err := clientset.CoreV1().Pods(namespace).List(metav1.ListOptions{ LabelSelector: &quot;app=myapp&quot;, }) // check the status for the pods - to see Probe status for _, pod := range pods.Items { pod.Status.Conditions // use your custom logic here for _, container := range pod.Status.ContainerStatuses { container.RestartCount // use this number in your logic } } </code></pre> <p>The Status part of a Pod contain conditions: with some Probe-information and containerStatuses: with restartCount:, also illustrated in the Go example above. Use your custom logic to use this information.</p> <p>A Pod is restarted whenever the livenessProbe fails.</p> <p>Example of a Pod Status:</p> <pre class="lang-yaml prettyprint-override"><code>status: conditions: - lastProbeTime: null lastTransitionTime: &quot;2020-09-15T07:17:25Z&quot; status: &quot;True&quot; type: Initialized containerStatuses: - containerID: docker://25b28170c8cec18ca3af0e9c792620a3edaf36aed02849d08c56b78610dec31b image: myimage imageID: docker-pullable://myimage@sha256:a432251b2674d24858f72b1392033e0d7a79786425555714d8e9a656505fa08c name: myapp restartCount: 0 </code></pre> <p>I hope that can help you to resolve your issue .</p>
<p>If anyone know solutions please help me how I can do this.<br /> I have “statefulset” which has following “volumeClaimTemplates” inside:<br /> When I scale my replica count:<br /> <strong>“kubectl scale statefulset --replicas=2 my-statefulset”</strong><br /> new “PVC” create from “volumesnapshot” object which name <strong>= “MySnapshot”</strong></p> <pre class="lang-yaml prettyprint-override"><code> volumeClaimTemplates: - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data spec: accessModes: - ReadWriteOnce dataSource: apiGroup: snapshot.storage.k8s.io kind: VolumeSnapshot name: MySnapshot resources: requests: storage: 800Gi storageClassName: ebs-sc volumeMode: Filesystem </code></pre> <p><strong>My question:</strong><br /> Is it possible to use dynamic name in field</p> <pre class="lang-yaml prettyprint-override"><code>volumeClaimTemplates: dataSource: apiGroup: snapshot.storage.k8s.io kind: VolumeSnapshot name: ? </code></pre> <p><strong>Clarify:</strong><br /> When new snapshot created, modify statefulset and set volumeClaimTemplates.dataSource.name = new-name<br /> <strong>Why I need this:</strong><br /> I have cronjob which automatically create snapshot with new name ex: MySnapshot_1, MySnapshot_2 … And I need latest data into my “PVC” when new replica is created .</p>
<p>Kyverno (<a href="https://kyverno.io/" rel="nofollow noreferrer">https://kyverno.io/</a>) can do this job. A mutating ClusterPolicy can look up the latest ready VolumeSnapshot and patch its name into each new PVC:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: kyverno.io/v1 kind: ClusterPolicy metadata: name: mutate-pvc spec: rules: - name: set-name-latest-snapshot match: any: - resources: kinds: - PersistentVolumeClaim context: - name: latestSnapshotTime apiCall: urlPath: &quot;/apis/snapshot.storage.k8s.io/v1/namespaces/{{request.namespace}}/volumesnapshots&quot; jmesPath: &quot;items[?status.readyToUse].status.creationTime | sort(@)[-1]&quot; - name: latestSnapshotName apiCall: urlPath: &quot;/apis/snapshot.storage.k8s.io/v1/namespaces/{{request.namespace}}/volumesnapshots&quot; jmesPath: &quot;items[?status.creationTime == '{{latestSnapshotTime}}'][].metadata.name | [0]&quot; mutate: patchStrategicMerge: spec: dataSource: name: &quot;{{latestSnapshotName}}&quot; </code></pre>
<p>I have this Ingress and Service created on my Kubernetes cluster:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: google-storage-buckets spec: type: ExternalName externalName: storage.googleapis.com --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: proxy-assets-ingress annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: /kinto-static-websites/gatsby/public/$1 nginx.ingress.kubernetes.io/upstream-vhost: &quot;storage.googleapis.com&quot; nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot; spec: rules: - host: gatsby.vegeta.kintohub.net http: paths: - path: /(.*)$ backend: serviceName: google-storage-buckets servicePort: 443 </code></pre> <p>However, this works only if I add <code>index.html</code> after gatsby.vegeta.kintohub.net.</p> <p>The same happens if I go to gatsby.vegeta.kintohub.net/page-2.</p> <p>How could I make this work, please?</p> <p>Thanks</p>
<p>We had a very similar case, gatsby static site on a GCP bucket.</p> <p>we also tested <code>try_files</code> and <code>index</code> directives but didn't work.</p> <p>In our case these hacky <code>configuration-snippets</code> did the trick:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: gcp-storage-bucket spec: type: ExternalName externalName: storage.googleapis.com --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /&lt;BUCKET_NAME&gt;$uri nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot; nginx.ingress.kubernetes.io/upstream-vhost: &quot;storage.googleapis.com&quot; nginx.ingress.kubernetes.io/configuration-snippet: | if ($uri ~ &quot;^/(.*)/$&quot;) { rewrite ^(.+)/$ $1 last; proxy_pass https://storage.googleapis.com; } if ($uri ~ &quot;^\/$&quot;) { rewrite ^ /&lt;BUCKET_NAME&gt;/index.html break; proxy_pass https://storage.googleapis.com; } if ($uri !~ &quot;^(.*)\.(.*)$&quot;) { rewrite ^ /&lt;BUCKET_NAME&gt;$uri/index.html break; proxy_pass https://storage.googleapis.com; } labels: app.kubernetes.io/instance: static-site.example.com name: static-site.example.com namespace: default spec: rules: - host: static-site.example.com http: paths: - path: /(.*) backend: service: name: gcp-storage-bucket port: number: 443 pathType: Prefix </code></pre> <p>Everything seems to be working fine on our case, except for the 404s</p> <p>There might be a more efficient way to do this but hopes this helps.</p>
<p>I executing a command that give me cpu limit</p> <pre><code>kubectl get pods -o=jsonpath='{.items[*]..resources.limits.cpu}' -A </code></pre> <p>How can I modify the command to show pod name and the memory limit also</p>
<p>You can format the jsonpath like this.</p> <pre><code>kubectl get pods -Ao jsonpath='{range .items[*]}{&quot;name: &quot;}{@.metadata.name}{&quot; cpu: &quot;}{@..resources.limits.cpu}{&quot; memory: &quot;}{@..resources.limits.memory}{&quot;\n&quot;}{&quot;\n&quot;}{end}' </code></pre>
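<p>If you prefer a tabular output, roughly the same information can be pulled with <code>custom-columns</code> instead of jsonpath (the column names here are arbitrary):</p>
<pre><code>kubectl get pods -A -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,CPU_LIMIT:.spec.containers[*].resources.limits.cpu,MEM_LIMIT:.spec.containers[*].resources.limits.memory'
</code></pre>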
<p>I am trying to use Kubernetes volumes for Nginx, but am facing an issue with it. After the volume is set, Nginx is unable to serve the HTML page. I also tried with a PV and PVC and got the same error that time as well.</p>
<p>nginx.yml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nginxhtml
          # persistentVolumeClaim:
          #   claimName: pvc
          hostPath:
            path: /home/amjed/Documents/SPS/k8s/mongo/mnt
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: nginxhtml
              mountPath: /usr/share/nginx/html
          ports:
            - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
</code></pre>
<p>First, create folder that you want to mount inside minikube:</p> <pre><code>dobrucki@minikube:~$ minikube ssh Last login: Tue Jan 11 13:54:50 2022 from 192.168.49.1 docker@minikube:~$ ls -l total 4 drwxr-xr-x 2 docker docker 4096 Jan 11 13:56 nginx-mount </code></pre> <p>This folder is what is mapped to <code>/usr/share/nginx/html</code> inside Pods, so files you paste here will be displayed when you connect to your service. Make sure that you have some <code>.html</code> file inside that folder, otherwise you will get 403 error. For me, example <code>index.html</code> is this:</p> <pre><code>&lt;html&gt; &lt;head&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Hello World&lt;h1&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>You also need to add <code>securityContext</code> <code>fsGroup</code> inside your Deployment manifest, so that <code>/usr/share/nginx/html</code> is owned by nginx user (101 uid).</p> <p>Then, apply Deployment and LoadBalancer resources using this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 2 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: securityContext: fsGroup: 101 volumes: - name: nginxhtml hostPath: path: /home/docker/nginx-mount containers: - name: nginx image: nginx volumeMounts: - name: nginxhtml mountPath: /usr/share/nginx/html ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: nginx-service spec: selector: app: nginx type: LoadBalancer ports: - protocol: TCP port: 80 targetPort: 80 </code></pre> <p>After that you can check if the content is served correctly</p> <pre><code>dobrucki@minikube:~$ curl $(minikube service nginx-service --url) &lt;html&gt; &lt;head&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Hello World&lt;h1&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Let me know if you have more questions.</p>
<p>I have installed a kubernetes cluster with kubeadm with the instructions base on the below link :</p> <pre><code>https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm </code></pre> <p>and cri-dockerd for Container Runtime via the link :</p> <pre><code>https://github.com/Mirantis/cri-dockerd </code></pre> <p>the installed go version for build the code is : &quot;go1.21.0 linux/amd64&quot;</p> <p>and for pod Networking (Calico) as the link :</p> <pre><code>https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises </code></pre> <p>The cluster and nodes are ok : <a href="https://i.stack.imgur.com/ipOqu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ipOqu.jpg" alt="enter image description here" /></a> and <a href="https://i.stack.imgur.com/N7wGc.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N7wGc.jpg" alt="enter image description here" /></a> and <a href="https://i.stack.imgur.com/Iq1La.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iq1La.png" alt="enter image description here" /></a></p> <p>But I have problems with some calico related pods ! <a href="https://i.stack.imgur.com/rMfdm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rMfdm.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/sNdqH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sNdqH.png" alt="enter image description here" /></a></p> <p>The Events : <a href="https://i.stack.imgur.com/7EBVG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7EBVG.png" alt="enter image description here" /></a></p> <p>Please help me, I'm new in Kubernetes learning. Thanks in advance.</p>
<p>I searched for the issue and realized that downgrading &quot;docker&quot; to 24.0.4 and &quot;go&quot; to go1.20.5 is the solution, because there are some security-related changes in go1.20.6+ that cause this error. The most relevant links are below:</p>
<p><a href="https://docs.docker.com/engine/release-notes/24.0" rel="nofollow noreferrer">https://docs.docker.com/engine/release-notes/24.0</a></p>
<p><a href="https://discuss.kubernetes.io/t/http-invalid-host-header-error-while-executing-a-command-in-pod/24868" rel="nofollow noreferrer">https://discuss.kubernetes.io/t/http-invalid-host-header-error-while-executing-a-command-in-pod/24868</a></p>
<p><a href="https://gitlab.com/gitlab-org/gitlab-runner/-/issues/36051" rel="nofollow noreferrer">https://gitlab.com/gitlab-org/gitlab-runner/-/issues/36051</a></p>
<p><a href="https://stackoverflow.com/questions/76698552/invalid-host-header-when-running-k3d-clusterrun">Invalid host header when running k3d clusterRun</a></p>
<p><a href="https://bugs.gentoo.org/910491" rel="nofollow noreferrer">https://bugs.gentoo.org/910491</a></p>
<p><a href="https://github.com/golang/go/issues/61076" rel="nofollow noreferrer">https://github.com/golang/go/issues/61076</a></p>
<p><a href="https://github.com/moby/moby/issues/45935" rel="nofollow noreferrer">https://github.com/moby/moby/issues/45935</a></p>
<p><a href="https://i.stack.imgur.com/2RZfp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2RZfp.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/CDpi6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CDpi6.jpg" alt="enter image description here" /></a></p>
<p>I'm trying to automate the deployment of Kong in a Kubernetes cluster using a GitLab CI/CD pipeline. I have an agent connected to my cluster and in my pipeline script I'm able run kubectl commands within my cluster. The next step is to run Helm commands but when I do I get &quot;helm: command not found&quot;. Is there an image I can use that allows for kubectl and helm commands? How will I be able to run those Helm commands inside my cluster? Is it possible to have multiple images in a script?</p> <p>This is my pipeline script in GitLab:</p> <pre class="lang-yaml prettyprint-override"><code>deploy-job: stage: deploy environment: production image: name: bitnami/kubectl:latest entrypoint: [''] script: - echo &quot;Deploying Kong application...&quot; - kubectl config get-contexts - kubectl config use-context test-project/kong:kong-agent - kubectl get pods -n kong - helm version - echo &quot;Kong successfully deployed.&quot; </code></pre>
<p>You'll run into your current problem a lot. My recommendation here is to maintain a custom image that includes both <code>kubectl</code> and <code>helm</code> (and any other tooling you may need). Also try to explicitly reference a specific tag/version rather than <code>latest</code> -- it goes a long way when you inevitably need to troubleshoot an error that you were &quot;getting today but not yesterday&quot;.</p>
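<p>As a rough sketch, such an image could be built from a small Dockerfile like the one below; the base image and tool versions are placeholders, so pin whatever versions match your cluster:</p>
<pre><code>FROM alpine:3.18

# Pin tool versions explicitly (placeholders - adjust to your cluster)
ARG KUBECTL_VERSION=v1.27.4
ARG HELM_VERSION=v3.12.3

RUN apk add --no-cache curl bash ca-certificates \
 &amp;&amp; curl -fsSL -o /usr/local/bin/kubectl &quot;https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl&quot; \
 &amp;&amp; chmod +x /usr/local/bin/kubectl \
 &amp;&amp; curl -fsSL &quot;https://get.helm.sh/helm-${HELM_VERSION}-linux-amd64.tar.gz&quot; | tar -xz \
 &amp;&amp; mv linux-amd64/helm /usr/local/bin/helm \
 &amp;&amp; rm -rf linux-amd64
</code></pre>
<p>Push it to your registry and reference it (with an explicit tag) in the <code>image:</code> field of the job, keeping <code>entrypoint: ['']</code> as you already do.</p>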
<p>No matter how hard I tune this, I am not able to make the Pods spread evenly across the four nodes in AWS EKS Cluster. Here's the Pod yaml.</p> <pre><code> nodeSelector: node_env: my-node-name affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node_env operator: In values: - my-node-name podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - my-app-name topologyKey: topology.kubernetes.io/hostname podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - my-app-name topologyKey: node_env topologySpreadConstraints: - maxSkew: 1 nodeAffinityPolicy: Honor topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule labelSelector: matchExpressions: - key: app operator: In values: - my-app-name </code></pre> <p>There are seven nodes under the EKS Cluster - four for apps pods, two for envoy pods and one for miscellaneous pods.</p> <p>I am looking to place four replicas of a pod and evenly distribute it in those apps pods node.</p>
<p>While it's possible to utilize <code>topologySpreadConstraints</code> alongside <code>podAntiAffinity/podAffinity</code>, I wouldn't recommend doing so. I suggest you review the following answers for a comparison:</p> <p><a href="https://stackoverflow.com/a/73159361/19206466">https://stackoverflow.com/a/73159361/19206466</a></p> <p>If your objective is to achieve an even distribution across nodes, you can accomplish this by combining <code>nodeAffinity</code> with <code>topologySpreadConstraints</code>. Use <code>nodeAffinity</code> to select the appropriate nodes, ensuring that your EKS nodes have the <code>node_env: my-node-name</code> label attached. To list a node's labels, you can use <code>kubectl describe node &lt;eks_node&gt;</code>.</p> <p>Then, employ <code>topologySpreadConstraints</code> to evenly distribute the pods across these nodes; setting <code>maxSkew: 1</code> should suffice for evenly distributing the pods across the selected nodes. In addition, make sure that the <code>app: my-app-name</code> label exists in your app pods' template.</p> <p><strong>Important Note:</strong> In case you are utilizing an auto-scaler for your EKS cluster (e.g. Karpenter) you may also need to define the <code>minDomains</code> <code>topologySpreadConstraints</code> attribute supported since Kubernetes 1.25. In your case, it would define the minimum amount of required EKS nodes.</p> <p>For more information, refer to: <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#spread-constraint-definition" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#spread-constraint-definition</a></p> <pre><code>affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node_env operator: In values: - my-node-name topologySpreadConstraints: - maxSkew: 1 nodeAffinityPolicy: Honor topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: app: my-app-name </code></pre> <p><strong>References</strong></p> <ul> <li>Karpenter: <a href="https://github.com/aws/karpenter/issues/2572" rel="nofollow noreferrer">https://github.com/aws/karpenter/issues/2572</a></li> </ul>
<p>My DockerFile looks like :</p> <pre><code> FROM openjdk:8-jdk-alpine ARG JAR_FILE=target/*.jar COPY ${JAR_FILE} app.jar ENTRYPOINT [&quot;java&quot;,&quot;-jar&quot;,&quot;/app.jar&quot;] </code></pre> <p>and my yml file looks like :</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: imagename namespace: default spec: replicas: 3 selector: matchLabels: bb: web template: metadata: labels: bb: web spec: containers: - name: imagename image: imagename:1.1 imagePullPolicy: Never env: - name: MYSQL_USER value: root ports: - containerPort: 3306 --- apiVersion: v1 kind: Service metadata: name: imagename namespace: default spec: type: NodePort selector: bb: web ports: - port: 8080 targetPort: 8080 nodePort: 30001 </code></pre> <p>i have build docker image using below command :</p> <pre><code>docker build -t dockerimage:1.1 . </code></pre> <p>and running the docker image like :</p> <pre><code>docker run -p 8080:8080 --network=host dockerimage:1.1 </code></pre> <p>When i deploy this image in kubernetes environment i am getting error :</p> <pre><code>ERROR com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Exception during pool initialization. com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174) at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64) </code></pre> <p>Also i have done port forwarding :</p> <pre><code>Forwarding from 127.0.0.1:13306 -&gt; 3306 </code></pre> <p>Any suggestion what is wrong with the above configuration ?</p>
<p>You need to add a headless ClusterIP service for your database, like this:</p>
<h3>MySQL Service:</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql-deployment
    tier: mysql
  clusterIP: None
</code></pre>
<h3>MySQL PVC:</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: my-db-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
</code></pre>
<h3>MySQL Deployment</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql-deployment
spec:
  selector:
    matchLabels:
      app: mysql-deployment
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql-deployment
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: MYSQL_ROOT_PASSWORD
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
</code></pre>
<p>Now, on the Spring application side, what you need in order to access the database is:</p>
<h3>Spring Boot deployment</h3>
<pre><code>apiVersion: apps/v1                   # API version
kind: Deployment                      # Type of kubernetes resource
metadata:
  name: order-app-server              # Name of the kubernetes resource
  labels:                             # Labels that will be applied to this resource
    app: order-app-server
spec:
  replicas: 1                         # No. of replicas/pods to run in this deployment
  selector:
    matchLabels:                      # The deployment applies to any pods matching the specified labels
      app: order-app-server
  template:                           # Template for creating the pods in this deployment
    metadata:
      labels:                         # Labels that will be applied to each Pod in this deployment
        app: order-app-server
    spec:                             # Spec for the containers that will be run in the Pods
      imagePullSecrets:
        - name: testXxxxxsecret
      containers:
        - name: order-app-server
          image: XXXXXX/order:latest
          ports:
            - containerPort: 8080     # The port that the container exposes
          env:                        # Environment variables supplied to the Pod
            - name: MYSQL_ROOT_USERNAME   # Name of the environment variable
              valueFrom:              # Get the value of environment variable from kubernetes secrets
                secretKeyRef:
                  name: mysql-secret
                  key: MYSQL_ROOT_USERNAME
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: MYSQL_ROOT_PASSWORD
            - name: MYSQL_ROOT_URL
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: MYSQL_ROOT_URL
</code></pre>
<h3>Create your Secret :</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
data:
  MYSQL_ROOT_URL: &lt;BASE64-ENCODED-DB-NAME&gt;
  MYSQL_ROOT_USERNAME: &lt;BASE64-ENCODED-DB-USERNAME&gt;
  MYSQL_ROOT_PASSWORD: &lt;BASE64-ENCODED-DB-PASSWORD&gt;
metadata:
  name: mysql-secret
</code></pre>
<h3>Spring Boot Service:</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1                        # API version
kind: Service                         # Type of the kubernetes resource
metadata:
  name: order-app-server-service      # Name of the kubernetes resource
  labels:                             # Labels that will be applied to this resource
    app: order-app-server
spec:
  type: LoadBalancer                  # The service will be exposed by opening a Port on each node and proxying it.
  selector:
    app: order-app-server             # The service exposes Pods with label `app=order-app-server`
  ports:                              # Forward incoming connections on port 8080 to the target port 8080
    - name: http
      port: 8080
</code></pre>
<p>I'm trying to deploy apps with Kubernetes and minikube. But I have a strange issue: i can access my app with curl in terminal but not from browser. I'm using &quot;minikube tunnel&quot; command for external ip.</p> <p><a href="https://i.stack.imgur.com/rPtqC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rPtqC.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/ZvI8W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZvI8W.png" alt="enter image description here" /></a></p> <p>This is my deployment and service files</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: nodeapp-deployment labels: app: nodeapp spec: replicas: 1 selector: matchLabels: app: nodeapp template: metadata: labels: app: nodeapp spec: containers: - name: nodeserver image: tanyadovzhenko/chi-questionnaire-back env: - name: PORT value: "4000" - name: JWT_ACCESS_KEY value: "111" - name: JWT_REFRESH_KEY value: "111" - name: HASH_PASSWORD_ALGORITM value: "sha256" ports: - containerPort: 4000</code></pre> </div> </div> </p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: nodeapp-service spec: selector: app: nodeapp type: LoadBalancer ports: - protocol: TCP port: 4000 targetPort: 4000</code></pre> </div> </div> </p>
<p>This is a <a href="https://minikube.sigs.k8s.io/docs/drivers/docker/#known-issues" rel="nofollow noreferrer">known issue</a> for minikube and ingress is only limited to Linux. Using minikube service with a tunnel the network is limited if using the Docker driver on Darwin, Windows, or WSL, and the Node IP is not reachable directly. Running minikube on Linux with the Docker driver will result in no tunnel being created. Follow this <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#:%7E:text=minikube%20tunnel%20runs%20as%20a,on%20the%20host%20operating%20system" rel="nofollow noreferrer">tunneling</a> into Minikube for accessing it on the browser. Alternatively you can run this command <code>minikube service &lt;SERVICE_NAME&gt; --url</code> which will give you a direct URL to access the application and access the URL in the web browser.</p> <p>As per this <a href="https://stackoverflow.com/a/71749078/19230181">Thread</a> by the community member Aniki you can also resolve it by changing minikube base driver to hyperv from docker.</p>
<p>Sorry for the noobish question, but I'm having an issue reaching my pod and I have no idea why (I'm using Minikube locally).</p>
<p><strong>So I've created this basic pod:</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: front-end
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
      - containerPort: 80
</code></pre>
<p><strong>And this basic service:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  type: NodePort
  ports:
    - targetPort: 80
      port: 80
      nodePort: 30008
  selector:
    app: myapp
    type: front-end
</code></pre>
<p>However, when I try reaching nginx through the browser I fail to do so. I enter http://NodeIP:30008. However, when I type minikube service service --url I am able to access it.</p>
<p><strong>So basically I have 2 questions:</strong></p>
<p><strong>1-</strong> Why does my attempt entering the node IP and port fail? I've seen that when I enter minikube ssh and curl <strong>http://NodeIP:30008</strong> it works, so basically while I'm using Minikube I won't be able to browse to my apps? Only curl through minikube ssh or the command below?</p>
<p><strong>2-</strong> Why does the minikube service --url command work? What's the difference?</p>
<p>Thanks a lot!</p>
<p>Use the external IP address (LoadBalancer Ingress) to access your application:</p> <pre><code>curl http://&lt;external-ip&gt;:&lt;port&gt;
</code></pre> <p>where <code>&lt;external-ip&gt;</code> is the external IP address (LoadBalancer Ingress) of your Service, and <code>&lt;port&gt;</code> is the value of Port in your Service description. <em><strong>If you are using minikube, typing <code>minikube service my-service</code> will automatically open your application in a browser.</strong></em></p> <p>You can find more details <a href="https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/" rel="nofollow noreferrer">here</a>.</p>
<p>I have a Kubernetes deployment and I am trying to scale it down to zero replicas. How can I wait to ensure the number of pods is zero?</p> <p>The following is my command to scale down the deployment:</p> <pre><code>kubectl scale deployment openet-k6 --replicas=0
</code></pre> <p>How can I ensure all pods are scaled down? I am actually using this command in a shell script, so I can write additional logic to check that.</p> <p>Thank you</p>
<p>You can do a while loop, something like:</p> <pre><code>#! /bin/bash while true; do kubectl get pods -o json | jq -r '.items[].status.phase' | grep 'Running' RC=$? if [ ${RC} -eq 1 ]; then echo &quot;We're done here&quot; exit 0 fi sleep 3 done </code></pre> <p>The above is just a very simple example. Ideally, you'll want to filter your pods based on labels.</p>
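<p>As a follow-up to the label-filtering note above, here is a sketch that waits only for the pods of that deployment, assuming they carry the label <code>app=openet-k6</code> (check your deployment's pod labels first):</p>
<pre><code>kubectl scale deployment openet-k6 --replicas=0

# Option 1: block until every matching pod is gone (or the timeout is hit)
kubectl wait --for=delete pod -l app=openet-k6 --timeout=120s

# Option 2: poll the count of remaining pods
while [ &quot;$(kubectl get pods -l app=openet-k6 --no-headers 2&gt;/dev/null | wc -l)&quot; -gt 0 ]; do
  sleep 3
done
</code></pre>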
<p>I am trying to create a HPA using Prometeus and Prometeus Adapter.</p> <p>I am getting a &quot;unexpected GroupVersion&quot; error in the Status and I just have no idea what it means.</p> <pre><code>$ kubectl describe hpa Name: pathology-uploader-hpa Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; CreationTimestamp: Tue, 08 Mar 2022 15:50:12 +0000 Reference: Deployment/pathology-uploader-dep Metrics: ( current / target ) &quot;pathology_uploader_avg_process_time&quot; on pods: &lt;unknown&gt; / 180k Min replicas: 1 Max replicas: 100 Deployment pods: 0 current / 0 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale False FailedGetScale the HPA controller was unable to get the target's current scale: unexpected GroupVersion string: /apps/v1 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetScale 4m34s (x41 over 14m) horizontal-pod-autoscaler unexpected GroupVersion string: /apps/v1 </code></pre> <p>Here is my spec:</p> <pre><code>apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler metadata: name: pathology-uploader-hpa spec: maxReplicas: 100 # define max replica count minReplicas: 1 # define min replica count scaleTargetRef: kind: Deployment name: pathology-uploader-dep apiVersion: /apps/v1 metrics: - type: Pods pods: metric: name: pathology_uploader_avg_process_time target: type: AverageValue averageValue: 180000 # allow one job to take 3 minutes </code></pre> <p>I am using the helm charts supplied by the prometheus-community. The rules section that I have supplied to the prometheus-adapter is</p> <pre><code>rules: default: true custom: - seriesQuery: '{__name__=~&quot;^pathology_uploader_process_time$&quot;}' resources: template: &lt;&lt;.Resource&gt;&gt; name: matches: &quot;.*&quot; as: &quot;pathology_uploader_avg_process_time&quot; metricsQuery: &lt;&lt;.Series&gt;&gt; prometheus: url: http://prometheus-server.default.svc port: 80 path: &quot;&quot; </code></pre>
<p>Update your spec like below.</p> <p>&quot;/apps/v1&quot; to &quot;apps/v1&quot;</p> <pre><code>apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler metadata: name: pathology-uploader-hpa spec: maxReplicas: 100 # define max replica count minReplicas: 1 # define min replica count scaleTargetRef: kind: Deployment name: pathology-uploader-dep ## update &quot;/apps/v1&quot; to &quot;apps/v1&quot; apiVersion: apps/v1 </code></pre>
<p>I have K0s in single node mode on AlmaLinux 9 (SELinux disabled). I'm trying to install MetalLB to expose apps via single external IP.</p> <ul> <li>Public IP: <code>51.159.174.224</code></li> <li>Private IP: <code>10.200.106.35</code></li> </ul> <h4>K0s config</h4> <pre class="lang-yaml prettyprint-override"><code>[alma@scw-k8s ~]$ cat /etc/k0s/k0s.yaml apiVersion: k0s.k0sproject.io/v1beta1 kind: ClusterConfig metadata: creationTimestamp: null name: k0s spec: api: address: 51.159.174.224 k0sApiPort: 9443 port: 6443 sans: - 10.200.106.35 - 10.244.0.1 - 2001:bc8:1202:ca11::1 - fe80::dc3c:54ff:fe06:5012 - fe80::c8df:81ff:fe0d:811d - fe80::cc9e:7cff:fe1c:9da3 - fe80::f80c:35ff:fefe:c5e2 tunneledNetworkingMode: false controllerManager: {} extensions: helm: charts: null repositories: null storage: create_default_storage_class: true type: openebs_local_storage images: calico: cni: image: docker.io/calico/cni version: v3.24.1 kubecontrollers: image: docker.io/calico/kube-controllers version: v3.24.1 node: image: docker.io/calico/node version: v3.24.1 coredns: image: docker.io/coredns/coredns version: 1.9.4 default_pull_policy: IfNotPresent konnectivity: image: quay.io/k0sproject/apiserver-network-proxy-agent version: 0.0.32-k0s1 kubeproxy: image: k8s.gcr.io/kube-proxy version: v1.25.2 kuberouter: cni: image: docker.io/cloudnativelabs/kube-router version: v1.5.1 cniInstaller: image: quay.io/k0sproject/cni-node version: 1.1.1-k0s.0 metricsserver: image: k8s.gcr.io/metrics-server/metrics-server version: v0.6.1 pushgateway: image: quay.io/k0sproject/pushgateway-ttl version: edge@sha256:7031f6bf6c957e2fdb496161fe3bea0a5bde3de800deeba7b2155187196ecbd9 installConfig: users: etcdUser: etcd kineUser: kube-apiserver konnectivityUser: konnectivity-server kubeAPIserverUser: kube-apiserver kubeSchedulerUser: kube-scheduler konnectivity: adminPort: 8133 agentPort: 8132 network: calico: null clusterDomain: cluster.local dualStack: {} kubeProxy: mode: iptables kuberouter: autoMTU: true hairpinMode: false metricsPort: 8080 mtu: 0 peerRouterASNs: &quot;&quot; peerRouterIPs: &quot;&quot; podCIDR: 10.244.0.0/16 provider: kuberouter serviceCIDR: 10.96.0.0/12 scheduler: {} storage: etcd: externalCluster: null peerAddress: 10.200.106.35 type: etcd telemetry: enabled: true status: {} </code></pre> <p>MetalLB and nginx ingress controller are installed via Helm Charts:</p> <ul> <li>MetalLB into <code>metallb-system</code> namespace</li> <li>nginx-ingress-controller into <code>default</code> namespace:</li> </ul> <p><code>metallb-system</code> namespace has all needed annotations:</p> <p><a href="https://i.stack.imgur.com/WxX6C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WxX6C.png" alt="enter image description here" /></a></p> <p>I've created ConfigMap with allocation addresses:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: namespace: metallb-system name: config data: config: | address-pools: - name: production-public-ips protocol: layer2 addresses: - 51.159.174.224 </code></pre> <p>Added a new service:</p> <pre class="lang-yaml prettyprint-override"><code>... 
--- apiVersion: v1 kind: Service metadata: name: web-server-service namespace: web annotations: metallb.universe.tf/address-pool: production-public-ips spec: selector: app: web ports: - protocol: TCP port: 80 targetPort: 80 type: LoadBalancer </code></pre> <p>But services with <strong>LoadBalancer</strong> type can't get any external IP:</p> <pre><code>➜ kubectl get services --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 28h default nginx-ingress-controller-1664428090 LoadBalancer 10.104.57.67 &lt;pending&gt; 80:30762/TCP,443:30274/TCP 13h default nginx-ingress-controller-1664428090-default-backend ClusterIP 10.103.23.177 &lt;none&gt; 80/TCP 13h kube-system kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP,9153/TCP 28h kube-system metrics-server ClusterIP 10.101.214.5 &lt;none&gt; 443/TCP 28h metallb-system metallb-webhook-service ClusterIP 10.105.95.57 &lt;none&gt; 443/TCP 14h web web-server-service LoadBalancer 10.110.222.82 &lt;pending&gt; 80:31936/TCP 2m4s </code></pre> <p>Any ideas, why LoadBalancer services can't get external IPs? Thank you.</p>
<p>No firm ideas, and I'm not sure this will work, but can you try <code>51.159.174.224/32</code> in the <strong>ConfigMap</strong> instead of <code>51.159.174.224</code>?</p>
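<p>One more thing worth checking: the <code>metallb-webhook-service</code> in your output suggests a recent MetalLB release (v0.13+), which no longer reads the ConfigMap at all; address pools are configured with CRDs instead. A sketch of the equivalent CRD-based config (resource names are arbitrary):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: production-public-ips
  namespace: metallb-system
spec:
  addresses:
  - 51.159.174.224/32
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: production-public-ips-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - production-public-ips
</code></pre>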
<p>I could not get documentation on <code>manual</code> storageclasses and <code>kubernetes.io/no-provisioner</code> provisioner type. Could some one please explain them</p>
<p>A StorageClass is the Kubernetes mechanism for describing classes of storage: administrators define the classes, and workloads request a specific class through their PersistentVolumeClaims. When a real provisioner backs the class, PersistentVolumes (PV) can be provisioned dynamically; a &quot;manual&quot; storage class is simply one where that does not happen and the PVs are created by hand.</p> <p>The &quot;provisioner&quot; field tells Kubernetes which plugin to use for dynamic provisioning. <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#local" rel="nofollow noreferrer">kubernetes.io/no-provisioner</a> is the special value indicating that no dynamic provisioning should occur for that StorageClass; it is typically used for manually provisioned local persistent volumes. Essentially, it means that the user is responsible for manually creating PersistentVolumes for any claims that use this StorageClass.</p> <p>Refer to this <a href="https://bluexp.netapp.com/blog/cvo-blg-kubernetes-storageclass-concepts-and-common-operations" rel="nofollow noreferrer">Kubernetes StorageClass: Concepts and Common Operations</a> authored by Yifat Perry and the official documentation on <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">storage class</a>.</p>
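<p>For reference, a minimal example of such a StorageClass together with a manually created local PersistentVolume (names, sizes and paths below are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1
</code></pre>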
<p>I want to write a script to health-check our Elasticsearch cluster (deployed on Kubernetes).</p>
<ol>
<li>I go inside a pod which runs the Elasticsearch master container and run the commands below:</li>
</ol>
<pre><code>[elasticsearch@elasticsearch-master-0 ~]$ curl localhost:9200/frontend-dev-2021.12.03/_count
{&quot;count&quot;:76,&quot;_shards&quot;:{&quot;total&quot;:1,&quot;successful&quot;:1,&quot;skipped&quot;:0,&quot;failed&quot;:0}}

[elasticsearch@elasticsearch-master-0 ~]$ curl localhost:9200/_cluster/health?pretty
{
  &quot;cluster_name&quot; : &quot;elasticsearch&quot;,
  &quot;status&quot; : &quot;green&quot;,
  &quot;timed_out&quot; : false,
  &quot;number_of_nodes&quot; : 3,
  &quot;number_of_data_nodes&quot; : 3,
  &quot;active_primary_shards&quot; : 617,
  &quot;active_shards&quot; : 1234,
  &quot;relocating_shards&quot; : 0,
  &quot;initializing_shards&quot; : 0,
  &quot;unassigned_shards&quot; : 0,
  &quot;delayed_unassigned_shards&quot; : 0,
  &quot;number_of_pending_tasks&quot; : 0,
  &quot;number_of_in_flight_fetch&quot; : 0,
  &quot;task_max_waiting_in_queue_millis&quot; : 0,
  &quot;active_shards_percent_as_number&quot; : 100.0
}
</code></pre>
<p>As you can see, both the index count and the health check commands succeed. But when I run these commands from outside (I gave the Elasticsearch cluster a public endpoint):</p>
<pre><code>root@ip-192-168-1-1:~# curl --user username:password esdev.example.com/frontend-dev-2021.12.03/_count
{&quot;count&quot;:76,&quot;_shards&quot;:{&quot;total&quot;:1,&quot;successful&quot;:1,&quot;skipped&quot;:0,&quot;failed&quot;:0}}

root@ip-192-168-1-1:~# curl --user username:password esdev.example.com/_cluster/health
&lt;html&gt;
&lt;head&gt;&lt;title&gt;403 Forbidden&lt;/title&gt;&lt;/head&gt;
&lt;body&gt;
&lt;center&gt;&lt;h1&gt;403 Forbidden&lt;/h1&gt;&lt;/center&gt;
&lt;hr&gt;&lt;center&gt;nginx&lt;/center&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre>
<p>Only the index count command succeeds; the health check command always produces a <code>403 Forbidden</code> error.</p>
<p>I have searched and read through the official <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html" rel="nofollow noreferrer">docs</a> from Elasticsearch, but even the official docs only run the command from inside the Elasticsearch cluster or via Kibana (an HTTP service internal to the k8s cluster).</p>
<p>How can I health-check Elasticsearch from outside?
Or we can not do this because some mechanism of elasticsearch cluster?</p> <p>Notes: I create a basic auth nginx (username:password) stand before the elasticsearch and this nginx has an ingressroute from traefik-v2</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: annotations: meta.helm.sh/release-name: basic-auth-nginx-dev meta.helm.sh/release-namespace: dev creationTimestamp: &quot;2021-01-23T08:12:55Z&quot; generation: 2 labels: app: basic-auth-nginx-dev app.kubernetes.io/managed-by: Helm managedFields: - apiVersion: traefik.containo.us/v1alpha1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:meta.helm.sh/release-name: {} f:meta.helm.sh/release-namespace: {} f:labels: .: {} f:app: {} f:app.kubernetes.io/managed-by: {} f:spec: .: {} f:entryPoints: {} f:routes: {} manager: Go-http-client operation: Update time: &quot;2021-01-23T08:12:55Z&quot; name: basic-auth-nginx-dev-web namespace: dev resourceVersion: &quot;103562796&quot; selfLink: /apis/traefik.containo.us/v1alpha1/namespaces/dev/ingressroutes/basic-auth-nginx-dev-web uid: 5832b501-b2d7-4600-93b6-b3c72c420115 spec: entryPoints: - web routes: - kind: Rule match: Host(`esdev.example.com`) &amp;&amp; PathPrefix(`/`) priority: 1 services: - kind: Service name: basic-auth-nginx-dev port: 80 </code></pre>
<p>Could you please show us your nginx config?</p> <ol> <li><p>I think the problem comes from your nginx: the output you show indicates that nginx returns the 403, not Elasticsearch.</p> </li> <li><p>Could you please try another command that starts with <code>_</code>, like <code>_template</code>? There is a chance your nginx prevents access to paths starting with the <code>_</code> character.</p> </li> </ol>
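<p>To narrow it down, you can test a couple of other underscore-prefixed endpoints from outside. If they all return the nginx 403 page while plain index paths keep working, the block is almost certainly in the nginx/proxy layer rather than in Elasticsearch (the endpoints below are standard Elasticsearch APIs):</p>
<pre><code>curl --user username:password esdev.example.com/_cat/health
curl --user username:password esdev.example.com/_cat/indices
curl --user username:password esdev.example.com/frontend-dev-2021.12.03/_count
</code></pre>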
<p>I have Ingress</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress labels: app.kubernetes.io/managed-by: Helm annotations: kubernetes.io/ingress.class: nginx meta.helm.sh/release-name: ingress nginx.ingress.kubernetes.io/configuration-snippet: | location ~ favicon.ico { log_not_found off; } nginx.ingress.kubernetes.io/cors-allow-headers: content-type, x-att-timezone nginx.ingress.kubernetes.io/cors-allow-methods: GET, POST, PUT, DELETE, OPTIONS nginx.ingress.kubernetes.io/cors-allow-origin: '*' nginx.ingress.kubernetes.io/cors-expose-headers: 'x-att-userrole, x-att-userdetails, x-att-userid, xatt-location ' nginx.ingress.kubernetes.io/enable-cors: 'true' nginx.ingress.kubernetes.io/force-ssl-redirect: 'true' nginx.ingress.kubernetes.io/proxy-body-size: 10000m nginx.ingress.kubernetes.io/proxy-connect-timeout: '6000000' nginx.ingress.kubernetes.io/proxy-read-timeout: '6000000' nginx.ingress.kubernetes.io/proxy-send-timeout: '6000000' nginx.ingress.kubernetes.io/use-regex: 'true' spec: tls: - hosts: - st-my-doamin.com secretName: ingress rules: - host: st-my-doamin.com http: paths: - path: /rootpath/.* pathType: Prefix backend: service: name: someService port: number: 80 </code></pre> <p>And i want to create redirection like this :</p> <p>if i go to st-my-doamin.com/rootpath i will be <strong>redirect</strong> to st-my-doamin.com/rootpath/login</p> <p>i tried to create this redirection and got error :</p> <pre><code>This page isn’t working st-my-doamin.com redirected you too many times. Try clearing your cookies. ERR_TOO_MANY_REDIRECTS </code></pre>
<p>Since you are getting the error &quot;ERR_TOO_MANY_REDIRECTS&quot;, follow this <a href="https://kinsta.com/blog/err_too_many_redirects/" rel="nofollow noreferrer">link</a>; it helps in clearing that error. Follow this <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">link</a> for redirecting the path with a rewrite.</p> <p>Add the annotation below to the yaml (the target shown is just an example; point it at the path you actually want):</p> <pre><code>nginx.ingress.kubernetes.io/rewrite-target: /get_similarity/$2
</code></pre> <p>And add the path as below:</p> <pre><code> - path: /rootpath(/|$)(.*)
</code></pre>
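<p>For the concrete redirect described in the question (<code>/rootpath</code> to <code>/rootpath/login</code>), one possible sketch is a <code>configuration-snippet</code> annotation on the Ingress; this is only an illustration, not a tested config for your setup:</p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
  rewrite ^/rootpath/?$ /rootpath/login permanent;
</code></pre>
<p>Keeping the rewrite anchored to the exact <code>/rootpath</code> path avoids matching <code>/rootpath/login</code> itself, which is what would otherwise produce a redirect loop.</p>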
<p>I am new to DevOps and OpenShift. I have a simple <code>.sh</code> script that connects to a download website and checks if a new version is present; if it gets a <code>200</code> response, it downloads the file. I need to know if it is possible to run this script centrally, using a cronjob, on each pod (1 in my case). If so, are there any examples where a cron job runs an <code>.sh</code> file (the ones I came across only had simple echo commands run by the cronjob)?</p>
<blockquote> <p>I need to know if it is possible to run this script using cronjob on each pod(1 in my case) centrally.</p> </blockquote> <p>No, such things are impossible without hacking, because <strong>Kubernetes pods are designed as relatively ephemeral, disposable entities</strong>.<br /> In case you want to update some binary files inside running pods, I suggest the following approach (since you did not provide any details, I will show how you can achieve your goal using the Nginx deployment example):</p> <h4>1. Create a Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployment</a> for your app and add <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">Init containers</a> section (nginx example here):</h4> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 volumeMounts: - name: workdir mountPath: /usr/share/nginx/html initContainers: - name: install image: busybox command: - wget - &quot;-O&quot; - &quot;/work-dir/index.html&quot; - http://time.is volumeMounts: - name: workdir mountPath: &quot;/work-dir&quot; dnsPolicy: Default volumes: - name: workdir emptyDir: {} </code></pre> <p>This <code>init container</code> downloads some html file, and main nginx container takes this file as index.html.</p> <h4>2. Create a k8s CronJob to run a rolling update on deployment:</h4> <pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1 kind: CronJob metadata: name: restart-deployment spec: concurrencyPolicy: Forbid schedule: '0/5 * * * *' jobTemplate: spec: backoffLimit: 2 activeDeadlineSeconds: 10 template: spec: serviceAccountName: restart-deployment restartPolicy: Never containers: - name: kubectl image: bitnami/kubectl command: - /bin/bash - -c - | #place your new version check here, if true run: kubectl rollout restart deployment/nginx-deployment </code></pre> <p>Place your script that checks if the new version of a binary is available in this CronJob.</p> <h4>3. 
Create a ServiceAccount, Role, RoleBinding to allow the CronJob to manipulate the k8s cluster:</h4> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ServiceAccount metadata: name: restart-deployment --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: restart-deployment rules: - apiGroups: [&quot;apps&quot;, &quot;extensions&quot;] resources: [&quot;deployments&quot;] resourceNames: [&quot;nginx-deployment&quot;] verbs: [&quot;get&quot;, &quot;patch&quot;, &quot;list&quot;, &quot;watch&quot;] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: restart-deployment roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: restart-deployment subjects: - kind: ServiceAccount name: restart-deployment </code></pre> <p><strong>As a result, you will have a k8s CronJob that checks something every 5 minutes, and if it's true, runs</strong></p> <pre><code>kubectl rollout restart deployment/nginx-deployment </code></pre> <p>This will perform a <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="nofollow noreferrer">Rolling Update</a> (&quot;RollingUpdate&quot; is the default <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy" rel="nofollow noreferrer">update strategy</a> ).</p> <h4>New pods will use the latest downloaded by init-containers files.</h4>
<p>The Kubernetes documentation about <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="nofollow noreferrer">PreStop</a> hooks claims the following:</p> <blockquote> <p>PreStop hooks are not executed asynchronously from the signal to stop the Container; the hook must complete its execution before the TERM signal can be sent.</p> </blockquote> <p>However, it does not say anything about other containers in the pod. Suppose <code>terminationGracePeriodSeconds</code> has not yet passed.</p> <p>Are all containers in a pod protected from termination until the <code>PreStop</code> hook for each container finishes? Or does each PreStop hook only protect its own container?</p>
<p>I believe <code>preStop</code> hook only protects the container for which it's declared.</p> <p>For example, in the following set up:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: lifecycle-demo spec: containers: - name: lifecycle-demo-container image: nginx lifecycle: preStop: exec: command: [&quot;/bin/sh&quot;,&quot;-c&quot;,&quot;sleep 15&quot;] - name: other-container image: mysql env: - name: MYSQL_ALLOW_EMPTY_PASSWORD value: &quot;true&quot; </code></pre> <p>If I terminate the pod, the mysql receives SIGTERM and shuts down immediately while the nginx container stays alive for extra 15 seconds due to its <code>preStop</code> hook</p>
<p>I am a newbie on Ops and need to update through Lens the HPA configuration like:</p> <p>From:</p> <pre><code> minReplicas: 6 maxReplicas: 10 </code></pre> <p>To:</p> <pre><code> minReplicas: 4 maxReplicas: 16 </code></pre> <p>My doubt is if the PODs will be recreated or not once we have 8 instances running.</p>
<blockquote> <p>In Kubernetes, a HorizontalPodAutoscaler automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand.</p> </blockquote> <p>The HorizontalPodAutoscaler is implemented as a Kubernetes API <code>resource</code> and a <code>controller</code>.</p> <p>By configuring <code>minReplicas</code> and <code>maxReplicas</code> you are configuring the API resource.</p> <p><strong>In this case, the HPA controller does not recreate running pods. And it does not scale up/down the workload if the number of currently running replicas is within the new min/max.</strong></p> <p>The HPA controller then continues to monitor the load:</p> <ul> <li>If the load decreases, and the number of Pods is above the configured minimum, the HorizontalPodAutoscaler instructs the workload resource (the Deployment, StatefulSet, or other similar resource) to scale down.</li> <li>If the load increases, and the number of Pods is below the configured maximum, the HorizontalPodAutoscaler instructs the workload resource (the Deployment, StatefulSet, or other similar resource) to scale up.</li> </ul> <p>See more info about <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaling here</a>.</p>
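<p>If you prefer making the same change from the command line instead of Lens, a sketch (the HPA name here is a placeholder):</p>
<pre><code>kubectl patch hpa my-hpa --type merge -p '{&quot;spec&quot;:{&quot;minReplicas&quot;:4,&quot;maxReplicas&quot;:16}}'

# Verify the new bounds and the current replica count
kubectl get hpa my-hpa
</code></pre>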
<p>I am new to Loki but all i want to do is to use it as simply as possible with helm.</p> <p>I want to get the logs of my app witch is in kubernetes, but it seems that there is no instructions on how to do that. All I find is how to install Loki and to add it to Grafana as a datasource but I don't think that's what Loki is made for.</p> <p>I simply want to track my app's logs in kubernetes so I am using Loki helm chart and all I can find about a custom config is this line:</p> <pre><code>Deploy with custom config helm upgrade --install loki grafana/loki-stack --set &quot;key1=val1,key2=val2,...&quot; </code></pre>
<p>After installing Loki you can set it as a data source for Grafana. For more details you can follow this example: <a href="https://codersociety.com/blog/articles/loki-kubernetes-logging" rel="nofollow noreferrer">Logging in Kubernetes with Loki and the PLG Stack</a></p> <p>I hope this helps you resolve your issue.</p>
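<p>As a concrete sketch: the <code>loki-stack</code> chart already ships Promtail, which tails the logs of every pod on each node, so after an install along these lines your app logs become queryable from Grafana's Explore view with a LogQL selector such as <code>{namespace=&quot;default&quot;, app=&quot;my-app&quot;}</code> (the label names depend on your pod metadata):</p>
<pre><code>helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install loki grafana/loki-stack \
  --namespace monitoring --create-namespace \
  --set grafana.enabled=true
</code></pre>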
<p>I am configuring jenkins + jenkins agents in kubernetes using this guide:</p> <p><a href="https://akomljen.com/set-up-a-jenkins-ci-cd-pipeline-with-kubernetes/" rel="nofollow noreferrer">https://akomljen.com/set-up-a-jenkins-ci-cd-pipeline-with-kubernetes/</a></p> <p>which gives the below example of a jenkins pipeline using multiple/different containers for different stages:</p> <pre><code>def label = &quot;worker-${UUID.randomUUID().toString()}&quot; podTemplate(label: label, containers: [ containerTemplate(name: 'gradle', image: 'gradle:4.5.1-jdk9', command: 'cat', ttyEnabled: true), containerTemplate(name: 'docker', image: 'docker', command: 'cat', ttyEnabled: true), containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.8.8', command: 'cat', ttyEnabled: true), containerTemplate(name: 'helm', image: 'lachlanevenson/k8s-helm:latest', command: 'cat', ttyEnabled: true) ], volumes: [ hostPathVolume(mountPath: '/home/gradle/.gradle', hostPath: '/tmp/jenkins/.gradle'), hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock') ]) { node(label) { def myRepo = checkout scm def gitCommit = myRepo.GIT_COMMIT def gitBranch = myRepo.GIT_BRANCH def shortGitCommit = &quot;${gitCommit[0..10]}&quot; def previousGitCommit = sh(script: &quot;git rev-parse ${gitCommit}~&quot;, returnStdout: true) stage('Test') { try { container('gradle') { sh &quot;&quot;&quot; pwd echo &quot;GIT_BRANCH=${gitBranch}&quot; &gt;&gt; /etc/environment echo &quot;GIT_COMMIT=${gitCommit}&quot; &gt;&gt; /etc/environment gradle test &quot;&quot;&quot; } } catch (exc) { println &quot;Failed to test - ${currentBuild.fullDisplayName}&quot; throw(exc) } } stage('Build') { container('gradle') { sh &quot;gradle build&quot; } } stage('Create Docker images') { container('docker') { withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: 'dockerhub', usernameVariable: 'DOCKER_HUB_USER', passwordVariable: 'DOCKER_HUB_PASSWORD']]) { sh &quot;&quot;&quot; docker login -u ${DOCKER_HUB_USER} -p ${DOCKER_HUB_PASSWORD} docker build -t namespace/my-image:${gitCommit} . docker push namespace/my-image:${gitCommit} &quot;&quot;&quot; } } } stage('Run kubectl') { container('kubectl') { sh &quot;kubectl get pods&quot; } } stage('Run helm') { container('helm') { sh &quot;helm list&quot; } } } } </code></pre> <p>But why would you bother with this level of granularity? E.g. why not just have one container that have all you need, jnlp, helm, kubectl, java etc. and use that for all your stages?</p> <p>I know from a purist perspective its good to keep container/images as small as possible but if that's the only argument I would rather have it one container + not having to bother my end users (developers writing jenkinsfiles) with picking the right container - they should not have to worry about stuff at this level instead they you need to be able to get an agent and that's it.</p> <p>Or am I missing some <strong>functional</strong> reason for this multiple container setup?</p>
<p>Using one single image to handle all processes is functionally feasible, but it adds a burden to your <strong>operations</strong>.</p> <p>We don't always find an image that fulfills all our needs, i.e. the desired tools in the desired versions. Most likely, you are going to build one.</p> <p>To achieve this, you need to build docker images for different architectures (amd/arm) and maintain/use a docker registry to store your built image; this process can be time consuming as your image gets more complicated. <strong>More importantly, it is very likely that some of your tools 'favour' a particular Linux distro, so you will find it difficult and <em>not always functionally ok.</em></strong></p> <p>Imagine you need to use a newer version of a docker image in one of your pipeline's steps: you would have to repeat the whole process of building and uploading images. Alternatively, with per-stage containers, you only need to change the image version in your pipeline, which minimises your operational effort.</p>
<p>I'm trying to run this, after creating the folder \data\MongoDb\database and sharing it with everyone (Windows 10 doesn't seem to want to share with localsystem, but that should work anyway)</p> <p>It crashes trying to Create the container with a 'Create Container error' I think that somehow I've specified the mapping on how to mount the claim - I'm trying for /data/db which I've confirmed there is data there if I remove the 'volumeMounts' part at the bottom - but if I don't have that, then how does it know that is where I want it mounted? It appears to not mount that folder if I don't add that, and the server works fine in that case, but of course, it has the data inside the server and when it gets powered down and back up POOF! goes your data.</p> <p>Here is the YAML file I'm using</p> <pre><code>apiVersion: v1 kind: Service metadata: name: mongodb labels: app: mongodb spec: ports: - port: 27017 targetPort: 27017 selector: app: mongodb type: LoadBalancer externalIPs: - 192.168.1.9 --- apiVersion: v1 kind: PersistentVolume metadata: name: mongo-storage labels: type: local spec: capacity: storage: 5Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: manual hostPath: path: c:/data/mongodb/database --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pv-mongo-storage spec: accessModes: - ReadWriteOnce storageClassName: manual resources: requests: storage: 5Gi --- apiVersion: apps/v1 kind: Deployment metadata: name: mongodb labels: app: mongodb spec: replicas: 1 selector: matchLabels: app: mongodb template: metadata: labels: app: mongodb spec: containers: - name: mongo2 image: mongo ports: - containerPort: 27017 name: mongodb env: - name: MONGO_INITDB_ROOT_USERNAME value: xxx - name: MONGO_INITDB_ROOT_PASSWORD value: xxxx - name: MONGO_INITDB_DATABASE value: defaultDatabase volumeMounts: - mountPath: &quot;/data/db&quot; name: mongo-storage volumes: - name: mongo-storage persistentVolumeClaim: claimName: pv-mongo-storage </code></pre> <p>I would presume that there is also some vastly better way to set the password in the MongoDb Container too... This is the only way I've see that worked so far... Oh, I also tried the mountPath without the &quot; around it, because 1. that makes more sense to me, and 2 some of the examples did it that way... No luck</p> <p>The error I'm getting is 'invalid mode: /data/db' - which would imply that the image can't mount that folder because it has the wrong mode... On the other hand, is it because the host is Windows?</p> <p>I don't know... I would hope that it can mount it under Windows...</p>
<p>Adding this from comments so it will be visible to a community.</p> <p>Docker Desktop for Windows provides an ability to access Windows files/directories from Docker containers. The windows directories are mounted in a docker container in <code>/run/desktop/mnt/host/</code> dir.</p> <p>So, you should specify the path to your db directory on Windows (<code>c:/data/mongodb/database</code>) host as:</p> <pre><code>/run/desktop/mnt/host/c/data/mongodb/database </code></pre> <p>This is only specific to Docker Desktop for Windows.</p>
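<p>Applied to the manifest in the question, the PersistentVolume's hostPath would then look roughly like this (only the path changes):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-storage
  labels:
    type: local
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: /run/desktop/mnt/host/c/data/mongodb/database
</code></pre>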
<p>I want to prevent brute force attacks to my services on Kubernetes. My solution is to ban IPs that have many failed attempts but every request will be SNAT (Source NAT) and I don't know what can I do. Is there Any proxy I can use for my TCP requests and after that I can ban IPs?</p>
<p>If you are using a database for your service and are facing this brute force attack through repeated failed attempts, then you can block those IPs or users for some time and release them later. A database-persisted short lockout period for the given account (1-5 minutes) is a practical way to handle this. Each <code>userid</code> in your database stores a <code>timeOfLastFailedLogin</code> and <code>numberOfFailedAttempts</code>. When <code>numberOfFailedAttempts &gt; X</code> you can lock the account out for some minutes.</p> <p>Refer to this <a href="https://stackoverflow.com/questions/424210/preventing-brute-force-logins-on-websites">SO question</a> and <a href="https://community.f5.com/t5/technical-articles/how-to-persist-real-ip-into-kubernetes/ta-p/291165" rel="nofollow noreferrer">doc</a> for more information.</p>
<p>I am currently developing a small API in golang which will connect to a sql database. Below is the snip where I am passing the database connection string details statically ( hardcoded in the code ) .</p> <pre><code>const ( DB_USER = &quot;username&quot; DB_PASSWORD = &quot;password&quot; DB_NAME = &quot;db_name&quot; DB_HOST = &quot;db_server_name&quot; DB_PORT = db_port ) </code></pre> <p>Now I want to use ENV variables to make this static hardcoded values to become dynamic, so anyone on the fly can set these details using ENV vars. Can someone help me how I can achieve this. How to set them in such way from code so it can be dynamically passed as ENV vars in k8's deployment ?</p> <p>My current deployment.yaml file as below</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: go-demo-app spec: replicas: 1 selector: matchLabels: app: go-demo-app template: metadata: labels: app: go-demo-app spec: containers: - name: go-demo-app image: dockerhub/go-api:latest resources: {} ports: - containerPort: 8001 </code></pre> <p>The above is working fine as the db server name is hardcoded in the code and now I can looking to pass it dynamically using ENV vars via code which I can refer in k8's deployment</p>
<p>Change the .yaml to</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-demo-app
  template:
    metadata:
      labels:
        app: go-demo-app
    spec:
      containers:
      - name: go-demo-app
        image: dockerhub/go-api:latest
        resources: {}
        ports:
        - containerPort: 8001
        env:
        - name: DB_USER
          value: username
        - name: DB_PASSWORD
          value: password
        - name: DB_NAME
          value: db_name
        - name: DB_HOST
          value: db_server_name
        - name: DB_PORT
          value: &quot;3306&quot;
</code></pre>
<p><code>kubectl apply -f /path/to/file</code></p>
<p>In your code, change the initialization to:</p>
<pre><code>var (
    DB_USER     = os.Getenv(&quot;DB_USER&quot;)
    DB_PASSWORD = os.Getenv(&quot;DB_PASSWORD&quot;)
    DB_NAME     = os.Getenv(&quot;DB_NAME&quot;)
    DB_HOST     = os.Getenv(&quot;DB_HOST&quot;)
    DB_PORT     = os.Getenv(&quot;DB_PORT&quot;)
)
</code></pre>
<p>I changed <code>const</code> to <code>var</code> because a function call like <code>os.Getenv</code> is not allowed in a <code>const</code> declaration.</p>
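<p>Since <code>DB_PASSWORD</code> is sensitive, it is usually better not to keep it as a plain value in the Deployment. A sketch using a Secret (the secret name and key below are made up); the Go code stays exactly the same, it still just reads the environment variable:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: go-demo-db-credentials
type: Opaque
stringData:
  DB_PASSWORD: password
</code></pre>
<p>and in the Deployment's <code>env</code> section, replace the plain value with:</p>
<pre><code>        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: go-demo-db-credentials
              key: DB_PASSWORD
</code></pre>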
<p>I am looking for help in running Hop pipelines on a Spark cluster running on Kubernetes.</p> <ol> <li>I have a Spark master deployed with 3 worker nodes on Kubernetes.</li> <li>I am using the hop-run.sh command to run a pipeline on Spark running on Kubernetes.</li> </ol> <p>I am facing the below exception: java.lang.NoClassDefFoundError: Could not initialize class com.amazonaws.services.s3.AmazonS3ClientBuilder</p> <p>It looks like the fat jar is not getting associated with Spark when running the hop-run.sh command.</p> <hr /> <p>I tried running the same with the spark-submit command too, but I'm not sure how to pass references to pipelines and workflows to Spark running on Kubernetes, though I am able to add the fat jar to the classpath (as can be seen in the logs).</p> <p>Any kind of help is appreciated. Thanks.</p>
<p>Could it be that you are using version 1.0? We had a missing jar for S3 VFS which has been resolved in 1.1 <a href="https://issues.apache.org/jira/browse/HOP-3327" rel="nofollow noreferrer">https://issues.apache.org/jira/browse/HOP-3327</a></p> <p>For more information on how to use spark-submit you can take a look at the following documentation: <a href="https://hop.apache.org/manual/latest/pipeline/pipeline-run-configurations/beam-spark-pipeline-engine.html#_running_with_spark_submit" rel="nofollow noreferrer">https://hop.apache.org/manual/latest/pipeline/pipeline-run-configurations/beam-spark-pipeline-engine.html#_running_with_spark_submit</a></p> <p>The location to the fat-jar the pipeline and the required metadata-export can all be VFS locations so no need to place those on the cluster itself.</p>
<p>According to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/</a></p> <blockquote> <p>Kubernetes uses QoS classes to make decisions about scheduling and evicting Pods.</p> </blockquote> <p>I don't understand what QoS classes have to do with scheduling. The documentation mentions that QoS classes determine the eviction order in case a node runs out of resources.</p> <p>On the other hand, scheduling uses pod priorities (PriorityClass) to set the scheduling order and preemption.</p> <p>My question is: what is the link between QoS and scheduling?</p>
<blockquote> <p>My question is what is the link between QoS and scheduling?</p> </blockquote> <p>According to <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/</a></p> <blockquote> <p>kube-scheduler selects a node for the pod in a 2-step operation:</p> <pre><code>Filtering Scoring </code></pre> <p>The filtering step finds the set of Nodes where it's feasible to schedule the Pod. For example, the PodFitsResources filter checks whether a candidate Node has enough available resource to meet a Pod's specific resource requests. After this step, the node list contains any suitable Nodes; often, there will be more than one. If the list is empty, that Pod isn't (yet) schedulable.</p> <p>In the scoring step, the scheduler ranks the remaining nodes to choose the most suitable Pod placement. The scheduler assigns a score to each Node that survived filtering, basing this score on the active scoring rules.</p> <p>Finally, kube-scheduler assigns the Pod to the Node with the highest ranking. If there is more than one node with equal scores, kube-scheduler selects one of these at random.</p> </blockquote> <p>So the resource requests that determine a Pod's QoS class play a role in the Filtering step of the kube-scheduler operation, through the corresponding PodFitsResources filter.</p> <p>According to <a href="https://kubernetes.io/docs/reference/scheduling/policies/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/scheduling/policies/</a></p> <blockquote> <p>PodFitsResources: Checks if the Node has free resources (eg, CPU and Memory) to meet the requirement of the Pod.</p> </blockquote>
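<p>To make the link concrete: the QoS class is derived from the same <code>requests</code> and <code>limits</code> that PodFitsResources evaluates. A minimal sketch (pod name and values are placeholders) of a Pod that ends up in the <code>Guaranteed</code> class because requests equal limits for every container:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: &quot;500m&quot;
        memory: &quot;256Mi&quot;
      limits:
        cpu: &quot;500m&quot;
        memory: &quot;256Mi&quot;
</code></pre> <p>You can check the assigned class with <code>kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'</code>. The scheduler only looks at the requests when filtering and scoring nodes, while the resulting QoS class matters later, when the kubelet has to pick eviction victims on a node under resource pressure.</p>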
<p>I have a multinode Kubernetes system set up with three nodes. I am creating a connection from a pod on Node 2 to the Arango deployment using PyArango; the Arango deployment has two coordinator pods, one on Node 2 and one on Node 3.</p> <p>I'm testing out how resilient the system is and I've noticed an issue. It seems that if I'm updating collections on Arango and my program (running on Node 2) connects to the Arango coordinator pod on Node 3 and I power off Node 3, the connection will not time out; it will simply stay put for as long as 20 minutes.</p> <p>I want the connection to time out if the connection is idle or getting no response after 30 seconds.</p> <p>I've tried some different things using the PyArango methods with no luck. How do I get Python or PyArango to time out on a stale connection asap?</p> <p>At the minute this is a snippet of the connection settings code:</p> <pre><code>from pyArango.connection import Connection
from urllib3.util.retry import Retry

retry_policy = Retry(total=0, connect=0, read=0, other=0, backoff_factor=0)

conn = None
while conn is None:
    try:
        if username and password:
            conn = Connection(arango_url, username, password, max_retries=retry_policy)
        else:
            conn = Connection(arangoURL=arango_url, max_retries=retry_policy)
        conn.session.session.headers['Retry-After'] = '10'
        conn.session.session.headers['Keep-Alive'] = 'timeout=5'
    except Exception:
        pass  # keep retrying until a coordinator answers
</code></pre> <p>Any help would be great!</p>
<p>A connection-string setting like <code>connect timeout=180;</code> (see the <a href="https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlconnection.connectiontimeout?redirectedfrom=MSDN&amp;view=dotnet-plat-ext-7.0#System_Data_SqlClient_SqlConnection_ConnectionTimeout" rel="nofollow noreferrer">SQL Connection timeout</a> property doc) only covers the time needed to establish the initial connection, and it applies to SQL clients rather than to ArangoDB's HTTP API, so it will not help here.</p> <p>For ArangoDB, refer to this <a href="https://stackoverflow.com/a/71868410/19230181">SO</a> answer instead. You can increase the HTTP client's <a href="https://docs.python-requests.org/en/latest/user/advanced/#timeouts" rel="nofollow noreferrer">timeout</a> by using a custom <a href="https://docs.python-arango.com/en/main/http.html" rel="nofollow noreferrer">HTTP client for Arango</a>. The default is set <a href="https://github.com/ArangoDB-Community/python-arango/blob/ff990fde4a4403da170d8759adb46a7100e403a6/arango/http.py#L69" rel="nofollow noreferrer">here</a> to 60 seconds.</p>
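<p>For what it's worth, here is a minimal sketch of the custom HTTP client approach with python-arango (note: that is a different driver than pyArango, so switching would mean changing the client library). The <code>REQUEST_TIMEOUT</code> attribute mirrors what the linked <code>http.py</code> shows and should be verified against the version you install; treat the whole snippet as an assumption to check against the python-arango docs:</p> <pre><code>from arango import ArangoClient
from arango.http import DefaultHTTPClient

class ShortTimeoutHTTPClient(DefaultHTTPClient):
    # assumed attribute name, see the linked http.py; the default is 60 seconds
    REQUEST_TIMEOUT = 30

client = ArangoClient(hosts=&quot;http://localhost:28015&quot;,
                      http_client=ShortTimeoutHTTPClient())
</code></pre> <p>With a 30 second request timeout, a request hanging on a powered-off coordinator should fail after roughly that time instead of sitting idle for many minutes.</p>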
<p>I am trying to backup a database from a Kubernetes cluster to my computer as a BSON file. I have connected my MongoDB Compass to the Kubernetes cluster using port-forwarding. Can anyone help me with the command I need to download a particular collection (450GB) from a database to my desktop?</p> <p>I've been trying for a while now but I can't seem to find a way around it.</p> <p>In MongoDB Compass there is unfortunately no way to download a collection as a BSON file. The port I have forwarded the Kubernetes pod to is 27017.</p>
<p>From the MongoDB official docs:</p> <blockquote> <p>Run <a href="https://docs.mongodb.com/database-tools/mongodump/#mongodb-binary-bin.mongodump" rel="nofollow noreferrer"><code>mongodump</code></a> from the system command line, not the <a href="https://docs.mongodb.com/manual/reference/program/mongo/#mongodb-binary-bin.mongo" rel="nofollow noreferrer"><code>mongo</code></a> shell.</p> </blockquote> <p>So, assuming you configured <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">Kubernetes Port Forwarding</a> on your <strong>local machine</strong> like this:</p> <pre><code>$ kubectl port-forward service/mongo 28015:27017 </code></pre> <p>And you've got output similar to this:</p> <pre class="lang-yaml prettyprint-override"><code>Forwarding from 127.0.0.1:28015 -&gt; 27017 Forwarding from [::1]:28015 -&gt; 27017 </code></pre> <p>You can now export data from MongoDB using the following command:</p> <pre><code>$ mongodump --username root --port=28015 -p secretpassword </code></pre> <p>This will create a <code>dump</code> directory in the current working directory and export data there.</p> <p>Also, to export only a specific collection, use the following option:</p> <pre><code>--collection=&lt;collection&gt;, -c=&lt;collection&gt; </code></pre> <blockquote> <p>Specifies a collection to backup. If you do not specify a collection, this option copies all collections in the specified database or instance to the dump files.</p> </blockquote> <p>You can check the other available options <a href="https://docs.mongodb.com/database-tools/mongodump/" rel="nofollow noreferrer">here</a>.</p>
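<p>Putting it together for your case, a single large collection can be dumped through the forwarded port with something like the following (database and collection names, user and password are placeholders to replace with your own):</p> <pre><code>$ mongodump --host 127.0.0.1 --port=28015 --username root --password secretpassword \
    --db=mydatabase --collection=mycollection --out=./dump
</code></pre> <p>The BSON file ends up under <code>./dump/mydatabase/mycollection.bson</code>. For a 450GB collection, make sure there is enough free disk space locally, and consider adding <code>--gzip</code> to compress the output while it is written.</p>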
<p>I want to create an image that contains all the dependencies needed for development, like Java, Maven, Node, etc. I want to create that image and then deploy it to several different PCs at the same time.</p> <p>I wanted to know if this is possible to do with Docker, and if you could share some guide or information on how to do it. I want all of those machines to get the same software from the image, but still remain unique, e.g. keep their own configuration. I only want the image to quickly provide an environment to program in. Thanks in advance.</p>
<p>The advantage of Docker is that containers share the kernel of the host system while being encapsulated in their own environment with their own network adapter. That is why Docker is fast: it does not need to emulate hardware or a full operating system.</p> <p>But here comes the catch for you: since it does not simulate or contain a full OS, an image is not automatically executable on every host. The common way is to tell all users that you are using Linux containers, which already covers Linux/macOS users. Windows users should get the info that they need WSL (Windows Subsystem for Linux) installed; that is how Windows can run a Linux environment in parallel and stay compatible as well.</p> <p>For your dependencies: you can build a container image or a compose setup that contains Java, Node, ... and all the environment tooling; only Docker itself needs to be compatible with the host via that WSL/Linux-container requirement.</p> <p>So much about Docker. The same goes for kubernetes/minikube/... or whatever you want to use locally: of course you need the correct installation for your Windows/Linux target, and if you use Linux containers and require Windows users to have WSL, you can install a local Kubernetes as well and stay consistent everywhere.</p>
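<p>To make the image part concrete, a minimal sketch of such a shared dev image could look like this (the base image, versions and package list are just assumptions; pick whatever your team standardizes on):</p> <pre><code>FROM ubuntu:22.04

# toolchain shared by every developer
RUN apt-get update &amp;&amp; \
    apt-get install -y --no-install-recommends openjdk-17-jdk maven nodejs npm git &amp;&amp; \
    rm -rf /var/lib/apt/lists/*

WORKDIR /workspace
CMD [&quot;bash&quot;]
</code></pre> <p>Every developer builds or pulls the same image and starts it with their own source tree and configuration mounted in, for example <code>docker run -it -v $(pwd):/workspace -v ~/.m2:/root/.m2 my-dev-image</code>. That keeps the machines unique (own code, own config) while the toolchain stays identical.</p>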
<p>I am having a problem with jenkins kubernetes pod that stopped working after the last pod restart (done by kubernetes).</p> <p>So, I am having errors like this in my log:</p> <pre><code>2021-02-05 11:00:55.856+0000 [id=27] INFO jenkins.InitReactorRunner$1#onAttained: Listed all plugins 2021-02-05 11:00:56.883+0000 [id=30] SEVERE jenkins.InitReactorRunner$1#onTaskFailed: Failed Loading plugin Pipeline: Multibranch v2.22 (workflow-multibranch) java.io.IOException: Failed to load: Pipeline: Multibranch (2.22) - Update required: Pipeline: Job (2.36) to be updated to 2.39 or higher at hudson.PluginWrapper.resolvePluginDependencies(PluginWrapper.java:952) at hudson.PluginManager$2$1$1.run(PluginManager.java:549) at org.jvnet.hudson.reactor.TaskGraphBuilder$TaskImpl.run(TaskGraphBuilder.java:169) at org.jvnet.hudson.reactor.Reactor.runTask(Reactor.java:296) at jenkins.model.Jenkins$5.runTask(Jenkins.java:1131) at org.jvnet.hudson.reactor.Reactor$2.run(Reactor.java:214) at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:117) at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) </code></pre> <p>I can see that I should upgrade the version of <code>workflow-job</code> plugin to 2.39.</p> <p>Jenkins is managed with helm. If I download the latest helm chart from S3, I can see it has a folder <code>jenkins</code> in it, where I can see <code>jenkins/values.yaml</code> looking like this:</p> <pre><code># Default values for jenkins. # This is a YAML-formatted file. # Declare name/value pairs to be passed into your templates. # name: value ## Overrides for generated resource names # See templates/_helpers.tpl # nameOverride: # fullnameOverride: Master: Name: jenkins-master Image: &quot;jenkins/jenkins&quot; ImageTag: &quot;lts&quot; ImagePullPolicy: &quot;Always&quot; # ImagePullSecret: jenkins Component: &quot;jenkins-master&quot; NumExecutors: 0 # configAutoReload requires UseSecurity is set to true: UseSecurity: true # SecurityRealm: # Optionally configure a different AuthorizationStrategy using Jenkins XML # AuthorizationStrategy: |- # &lt;authorizationStrategy class=&quot;hudson.security.FullControlOnceLoggedInAuthorizationStrategy&quot;&gt; # &lt;denyAnonymousReadAccess&gt;true&lt;/denyAnonymousReadAccess&gt; # &lt;/authorizationStrategy&gt; HostNetworking: false # When enabling LDAP or another non-Jenkins identity source, the built-in admin account will no longer exist. # Since the AdminUser is used by configAutoReload, in order to use configAutoReload you must change the # .Master.AdminUser to a valid username on your LDAP (or other) server. This user does not need # to have administrator rights in Jenkins (the default Overall:Read is sufficient) nor will it be granted any # additional rights. Failure to do this will cause the sidecar container to fail to authenticate via SSH and enter # a restart loop. Likewise if you disable the non-Jenkins identity store and instead use the Jenkins internal one, # you should revert Master.AdminUser to your preferred admin user: AdminUser: admin # AdminPassword: &lt;defaults to random&gt; OwnSshKey: false # If CasC auto-reload is enabled, an SSH (RSA) keypair is needed. Can either provide your own, or leave unconfigured\false to allow a random key to be auto-generated. 
# If you choose to use your own, you must upload your decrypted RSA private key (not the public key above) to a Kubernetes secret using the following command: # kubectl -n &lt;namespace&gt; create secret generic &lt;helm_release_name&gt; --dry-run --from-file=jenkins-admin-private-key=~/.ssh/id_rsa -o yaml |kubectl -n &lt;namespace&gt; apply -f - # Replace ~/.ssh/id_rsa in the above command with the path to your private key file and the &lt;helm_release_name&gt; and &lt;namespace&gt; placeholders to suit. RollingUpdate: {} # Ignored if Persistence is enabled # maxSurge: 1 # maxUnavailable: 25% resources: requests: cpu: &quot;50m&quot; memory: &quot;256Mi&quot; limits: cpu: &quot;2000m&quot; memory: &quot;4096Mi&quot; # Environment variables that get added to the init container (useful for e.g. http_proxy) # InitContainerEnv: # - name: http_proxy # value: &quot;http://192.168.64.1:3128&quot; # ContainerEnv: # - name: http_proxy # value: &quot;http://192.168.64.1:3128&quot; # Set min/max heap here if needed with: # JavaOpts: &quot;-Xms512m -Xmx512m&quot; # JenkinsOpts: &quot;&quot; # JenkinsUrl: &quot;&quot; # If you set this prefix and use ingress controller then you might want to set the ingress path below # JenkinsUriPrefix: &quot;/jenkins&quot; # Enable pod security context (must be `true` if RunAsUser or FsGroup are set) UsePodSecurityContext: true # Set RunAsUser to 1000 to let Jenkins run as non-root user 'jenkins' which exists in 'jenkins/jenkins' docker image. # When setting RunAsUser to a different value than 0 also set FsGroup to the same value: # RunAsUser: &lt;defaults to 0&gt; # FsGroup: &lt;will be omitted in deployment if RunAsUser is 0&gt; ServicePort: 8080 # For minikube, set this to NodePort, elsewhere use LoadBalancer # Use ClusterIP if your setup includes ingress controller ServiceType: LoadBalancer # Master Service annotations ServiceAnnotations: {} # Master Custom Labels DeploymentLabels: {} # foo: bar # bar: foo # Master Service Labels ServiceLabels: {} # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https # Put labels on jeknins-master pod PodLabels: {} # Used to create Ingress record (should used with ServiceType: ClusterIP) # HostName: jenkins.cluster.local # NodePort: &lt;to set explicitly, choose port between 30000-32767 # Enable Kubernetes Liveness and Readiness Probes # ~ 2 minutes to allow Jenkins to restart when upgrading plugins. Set ReadinessTimeout to be shorter than LivenessTimeout. HealthProbes: true HealthProbesLivenessTimeout: 90 HealthProbesReadinessTimeout: 60 HealthProbeReadinessPeriodSeconds: 10 HealthProbeLivenessFailureThreshold: 12 SlaveListenerPort: 50000 # SlaveHostPort: 50000 DisabledAgentProtocols: - JNLP-connect - JNLP2-connect CSRF: DefaultCrumbIssuer: Enabled: true ProxyCompatability: true CLI: false # Kubernetes service type for the JNLP slave service # SlaveListenerServiceType is the Kubernetes Service type for the JNLP slave service, # either 'LoadBalancer', 'NodePort', or 'ClusterIP' # Note if you set this to 'LoadBalancer', you *must* define annotations to secure it. 
By default # this will be an external load balancer and allowing inbound 0.0.0.0/0, a HUGE # security risk: https://github.com/kubernetes/charts/issues/1341 SlaveListenerServiceType: ClusterIP SlaveListenerServiceAnnotations: {} # Example of 'LoadBalancer' type of slave listener with annotations securing it # SlaveListenerServiceType: LoadBalancer # SlaveListenerServiceAnnotations: # service.beta.kubernetes.io/aws-load-balancer-internal: &quot;True&quot; # service.beta.kubernetes.io/load-balancer-source-ranges: &quot;172.0.0.0/8, 10.0.0.0/8&quot; # LoadBalancerSourcesRange is a list of allowed CIDR values, which are combined with ServicePort to # set allowed inbound rules on the security group assigned to the master load balancer LoadBalancerSourceRanges: - 0.0.0.0/0 # Optionally assign a known public LB IP # LoadBalancerIP: 1.2.3.4 # Optionally configure a JMX port # requires additional JavaOpts, ie # JavaOpts: &gt; # -Dcom.sun.management.jmxremote.port=4000 # -Dcom.sun.management.jmxremote.authenticate=false # -Dcom.sun.management.jmxremote.ssl=false # JMXPort: 4000 # Optionally configure other ports to expose in the Master container ExtraPorts: # - name: BuildInfoProxy # port: 9000 # List of plugins to be install during Jenkins master start OverwritePlugins: true InstallPlugins: - kubernetes:1.16.1 - workflow-job:2.39 - workflow-aggregator:2.6 - workflow-basic-steps:2.18 - credentials-binding:1.23 - job-dsl:1.76 - git:4.2.2 - parameterized-trigger:2.35.2 - slack:2.34 - global-slack-notifier:1.5 - ansicolor:0.6.2 - simple-theme-plugin:0.5.1 - aws-bucket-credentials:1.0.0 - aws-credentials:1.28 - ssh-agent:1.17 # - blueocean:1.21.0 - basic-branch-build-strategies:1.3.2 - buildtriggerbadge:2.10 - rebuild:1.31 - ghprb:1.42.0 - antisamy-markup-formatter:1.5 - github-oauth:0.31 - role-strategy:2.15 # Enable to always override the installed plugins with the values of 'Master.InstallPlugins' on upgrade or redeployment. # OverwritePlugins: true # Enable HTML parsing using OWASP Markup Formatter Plugin (antisamy-markup-formatter), useful with ghprb plugin. # The plugin is not installed by default, please update Master.InstallPlugins. # EnableRawHtmlMarkupFormatter: true # Used to approve a list of groovy functions in pipelines used the script-security plugin. Can be viewed under /scriptApproval # ScriptApproval: # - &quot;method groovy.json.JsonSlurperClassic parseText java.lang.String&quot; # - &quot;new groovy.json.JsonSlurperClassic&quot; # List of groovy init scripts to be executed during Jenkins master start InitScripts: # - | # print 'adding global pipeline libraries, register properties, bootstrap jobs...' # Kubernetes secret that contains a 'credentials.xml' for Jenkins # CredentialsXmlSecret: jenkins-credentials # Kubernetes secret that contains files to be put in the Jenkins 'secrets' directory, # useful to manage encryption keys used for credentials.xml for instance (such as # master.key and hudson.util.Secret) # SecretsFilesSecret: jenkins-secrets # Jenkins XML job configs to provision # Jobs: # test: |- # &lt;&lt;xml here&gt;&gt; # Below is the implementation of Jenkins Configuration as Code. Add a key under ConfigScripts for each configuration area, # where each corresponds to a plugin or section of the UI. Each key (prior to | character) is just a label, and can be any value. # Keys are only used to give the section a meaningful name. The only restriction is they may only contain RFC 1123 \ DNS label # characters: lowercase letters, numbers, and hyphens. 
The keys become the name of a configuration yaml file on the master in # /var/jenkins_home/casc_configs (by default) and will be processed by the Configuration as Code Plugin. The lines after each | # become the content of the configuration yaml file. The first line after this is a JCasC root element, eg jenkins, credentials, # etc. Best reference is https://&lt;jenkins_url&gt;/configuration-as-code/reference. The example below creates a welcome message: JCasC: enabled: false PluginVersion: 1.5 SupportPluginVersion: 1.5 ConfigScripts: welcome-message: | jenkins: systemMessage: Welcome to our CI\CD server. This Jenkins is configured and managed 'as code'. Sidecars: configAutoReload: # If enabled: true, Jenkins Configuration as Code will be reloaded on-the-fly without a reboot. If false or not-specified, # jcasc changes will cause a reboot and will only be applied at the subsequent start-up. Auto-reload uses the Jenkins CLI # over SSH to reapply config when changes to the ConfigScripts are detected. The admin user (or account you specify in # Master.AdminUser) will have a random SSH private key (RSA 4096) assigned unless you specify OwnSshKey: true. This will be saved to a k8s secret. enabled: false image: shadwell/k8s-sidecar:0.0.2 imagePullPolicy: IfNotPresent resources: # limits: # cpu: 100m # memory: 100Mi # requests: # cpu: 50m # memory: 50Mi # SSH port value can be set to any unused TCP port. The default, 1044, is a non-standard SSH port that has been chosen at random. # Is only used to reload jcasc config from the sidecar container running in the Jenkins master pod. # This TCP port will not be open in the pod (unless you specifically configure this), so Jenkins will not be # accessible via SSH from outside of the pod. Note if you use non-root pod privileges (RunAsUser &amp; FsGroup), # this must be &gt; 1024: sshTcpPort: 1044 # label that the configmaps with dashboards are marked with: label: jenkins_config # folder in the pod that should hold the collected dashboards: folder: /var/jenkins_home/casc_configs # If specified, the sidecar will search for dashboard config-maps inside this namespace. # Otherwise the namespace in which the sidecar is running will be used. # It's also possible to specify ALL to search in all namespaces: # searchNamespace: # Allows you to inject additional/other sidecars other: ## The example below runs the client for https://smee.io as sidecar container next to Jenkins, ## that allows to trigger build behind a secure firewall. ## https://jenkins.io/blog/2019/01/07/webhook-firewalls/#triggering-builds-with-webhooks-behind-a-secure-firewall ## ## Note: To use it you should go to https://smee.io/new and update the url to the generete one. # - name: smee # image: docker.io/twalter/smee-client:1.0.2 # args: [&quot;--port&quot;, &quot;{{ .Values.Master.ServicePort }}&quot;, &quot;--path&quot;, &quot;/github-webhook/&quot;, &quot;--url&quot;, &quot;https://smee.io/new&quot;] # resources: # limits: # cpu: 50m # memory: 128Mi # requests: # cpu: 10m # memory: 32Mi # Node labels and tolerations for pod assignment # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature NodeSelector: {} Tolerations: {} PodAnnotations: {} # The below two configuration-related values are deprecated and replaced by Jenkins Configuration as Code (see above # JCasC key). They will be deleted in an upcoming version. 
CustomConfigMap: false # By default, the configMap is only used to set the initial config the first time # that the chart is installed. Setting `OverwriteConfig` to `true` will overwrite # the jenkins config with the contents of the configMap every time the pod starts. # This will also overwrite all init scripts OverwriteConfig: false ingress: enabled: false labels: {} annotations: {} # kubernetes.io/ingress.class: nginx # kubernetes.io/tls-acme: &quot;true&quot; # Set this path to JenkinsUriPrefix above or use annotations to rewrite path # path: &quot;/jenkins&quot; tls: # - secretName: jenkins.cluster.local # hosts: # - jenkins.cluster.local AdditionalConfig: {} # Master.HostAliases allows for adding entries to Pod /etc/hosts: # https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/ HostAliases: [] # - ip: 192.168.50.50 # hostnames: # - something.local # - ip: 10.0.50.50 # hostnames: # - other.local Agent: Enabled: true Image: jenkins/jnlp-slave ImageTag: 3.27-1 CustomJenkinsLabels: [] # ImagePullSecret: jenkins Component: &quot;jenkins-slave&quot; Privileged: false resources: requests: cpu: &quot;200m&quot; memory: &quot;256Mi&quot; limits: cpu: &quot;200m&quot; memory: &quot;256Mi&quot; # You may want to change this to true while testing a new image AlwaysPullImage: false # Controls how slave pods are retained after the Jenkins build completes # Possible values: Always, Never, OnFailure PodRetention: Never # You can define the volumes that you want to mount for this container # Allowed types are: ConfigMap, EmptyDir, HostPath, Nfs, Pod, Secret # Configure the attributes as they appear in the corresponding Java class for that type # https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes # Pod-wide ennvironment, these vars are visible to any container in the slave pod envVars: # - name: PATH # value: /usr/local/bin volumes: # - type: Secret # secretName: mysecret # mountPath: /var/myapp/mysecret NodeSelector: {} # Key Value selectors. Ex: # jenkins-agent: v1 # Executed command when side container gets started Command: Args: # Side container name SideContainerName: jnlp # Doesn't allocate pseudo TTY by default TTYEnabled: false # Max number of spawned agent ContainerCap: 10 # Pod name PodName: default Persistence: Enabled: true ## A manually managed Persistent Volume and Claim ## Requires Persistence.Enabled: true ## If defined, PVC must be created manually before volume will be bound # ExistingClaim: ## jenkins data Persistent Volume Storage Class ## If defined, storageClassName: &lt;storageClass&gt; ## If set to &quot;-&quot;, storageClassName: &quot;&quot;, which disables dynamic provisioning ## If undefined (the default) or set to null, no storageClassName spec is ## set, choosing the default provisioner. (gp2 on AWS, standard on ## GKE, AWS &amp; OpenStack) ## # StorageClass: &quot;-&quot; Annotations: {} AccessMode: ReadWriteOnce Size: 8Gi volumes: # - name: nothing # emptyDir: {} mounts: # - mountPath: /var/nothing # name: nothing # readOnly: true NetworkPolicy: # Enable creation of NetworkPolicy resources. 
Enabled: false # For Kubernetes v1.4, v1.5 and v1.6, use 'extensions/v1beta1' # For Kubernetes v1.7, use 'networking.k8s.io/v1' ApiVersion: networking.k8s.io/v1 ## Install Default RBAC roles and bindings rbac: install: false serviceAccountName: default # Role reference roleRef: cluster-admin # Role kind (Role or ClusterRole) roleKind: ClusterRole # Role binding kind (RoleBinding or ClusterRoleBinding) roleBindingKind: ClusterRoleBinding ## Backup cronjob configuration ## Ref: https://github.com/nuvo/kube-tasks backup: # Backup must use RBAC # So by enabling backup you are enabling RBAC specific for backup enabled: false # Schedule to run jobs. Must be in cron time format # Ref: https://crontab.guru/ schedule: &quot;0 2 * * *&quot; annotations: # Example for authorization to AWS S3 using kube2iam # Can also be done using environment variables iam.amazonaws.com/role: jenkins image: repository: nuvo/kube-tasks tag: 0.1.2 # Additional arguments for kube-tasks # Ref: https://github.com/nuvo/kube-tasks#simple-backup extraArgs: [] # Add additional environment variables env: # Example environment variable required for AWS credentials chain - name: AWS_REGION value: us-east-1 resources: requests: memory: 1Gi cpu: 1 limits: memory: 1Gi cpu: 1 # Destination to store the backup artifacts # Supported cloud storage services: AWS S3, Minio S3, Azure Blob Storage # Additional support can added. Visit this repository for details # Ref: https://github.com/nuvo/skbn destination: s3://nuvo-jenkins-data/backup </code></pre> <p>My <code>requirements.yaml</code> file in the chart looks like</p> <pre><code>dependencies: - name: jenkins version: 0.35.1 repository: https://kubernetes-charts.storage.googleapis.com/ - name: istio-components version: 0.4.1 repository: s3://helm-chart-repository/ </code></pre> <p>I tried updating the plugin version under <code>InstallPlugins</code> and pushing helm chart and listing it (I can see it has been updated), but it had no effect on the error, like plugin is not updated at all.</p> <p>Any advice how to proceed on debugging this problem?</p>
<p>I was able to fix this issue by deleting the /plugins folder under /var/jenkins_home and then restarting my Jenkins pod. This created a new /plugins folder and then I no longer had any plugin dependency issues.</p>
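<p>If you want to do the same without opening a shell inside the container, the fix can be applied with kubectl directly (pod name and namespace are placeholders for your deployment):</p> <pre><code>$ kubectl -n &lt;namespace&gt; exec -it &lt;jenkins-pod-name&gt; -- rm -rf /var/jenkins_home/plugins
$ kubectl -n &lt;namespace&gt; delete pod &lt;jenkins-pod-name&gt;
</code></pre> <p>Deleting the pod lets it be recreated, and on start-up the chart reinstalls the plugins pinned under <code>InstallPlugins</code> (the values file in the question has <code>OverwritePlugins: true</code>), so the bumped <code>workflow-job:2.39</code> should actually land in the fresh <code>/plugins</code> folder. Keep in mind that removing the folder throws away every previously downloaded plugin, so anything not listed in the values file has to be reinstalled manually.</p>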
<p>I am creating a new Operator with Kubebuilder to deploy a Kubernetes controller to manage a new CRD Custom Resource Definition.</p> <p>This new CRD (let's say is called <code>MyNewResource</code>), needs to list/create/delete CronJobs.</p> <p>So in the Controller Go code where the <code>Reconcile(...)</code> method is defined I added a new RBAC comment to allow the reconciliation to work on CronJobs (see <a href="https://book.kubebuilder.io/reference/markers/rbac.html" rel="nofollow noreferrer">here</a>):</p> <pre class="lang-golang prettyprint-override"><code>//+kubebuilder:rbac:groups=batch,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete </code></pre> <p>However after building pushing and deploying the Docker/Kubernetes controller (repo <code>myrepo</code>, <code>make manifests</code>, then <code>make install</code>, then <code>make docker-build docker-push</code>, then <code>make deploy</code>), then in the logs I still see:</p> <pre><code>E0111 09:35:18.785523 1 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1beta1.CronJob: failed to list *v1beta1.CronJob: cronjobs.batch is forbidden: User &quot;system:serviceaccount:myrepo-system:myrepo-controller-manager&quot; cannot list resource &quot;cronjobs&quot; in API group &quot;batch&quot; at the cluster scope </code></pre> <p>I also see issues about the cache, but they might not be related (not sure):</p> <pre><code>2022-01-11T09:35:57.857Z ERROR controller.mynewresource Could not wait for Cache to sync {&quot;reconciler group&quot;: &quot;mygroup.mydomain.com&quot;, &quot;reconciler kind&quot;: &quot;MyNewResource&quot;, &quot;error&quot;: &quot;failed to wait for mynewresource caches to sync: timed out waiting for cache to be synced&quot;} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:234 sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1 /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:696 2022-01-11T09:35:57.858Z ERROR error received after stop sequence was engaged {&quot;error&quot;: &quot;leader election lost&quot;} 2022-01-11T09:35:57.858Z ERROR setup problem running manager {&quot;error&quot;: &quot;failed to wait for mynewresource caches to sync: timed out waiting for cache to be synced&quot;} </code></pre> <p>How can I allow my new Operator to deal with CronJobs resources?</p> <p>At the moment basically I am not able to create new CronJobs programmatically (Go code) when I provide some YAML for a new instance of my CRD, by invoking:</p> <pre><code>kubectl create -f mynewresource-project/config/samples/ </code></pre>
<p>You need to create a new Role or ClusterRole (depending on whether you want your permissions to be namespaced or cluster-wide) and bind it to your <code>system:serviceaccount:myrepo-system:myrepo-controller-manager</code> user using a RoleBinding/ClusterRoleBinding. I will provide examples for the cluster-wide configuration.</p> <p>ClusterRole:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cronjobs-role rules: - apiGroups: [&quot;batch&quot;] resources: [&quot;cronjobs&quot;] verbs: [&quot;get&quot;, &quot;watch&quot;, &quot;list&quot;, &quot;create&quot;, &quot;update&quot;, &quot;patch&quot;, &quot;delete&quot;] </code></pre> <p>Then, bind that using a ClusterRoleBinding:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: cronjobs-rolebinding subjects: - kind: User name: system:serviceaccount:myrepo-system:myrepo-controller-manager apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: cronjobs-role apiGroup: rbac.authorization.k8s.io </code></pre> <p>Note that CronJobs live in the <code>batch</code> API group (as your logs show), which is why the rule above uses <code>apiGroups: [&quot;batch&quot;]</code>; the empty string would refer to the core API group, which does not contain CronJobs. More about k8s RBAC <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">here</a>.</p> <p><strong>Kubebuilder</strong></p> <p>With Kubebuilder the ClusterRole and the ClusterRoleBinding YAML code are autogenerated and stored in the <code>config/rbac/</code> directory.</p> <p>To grant the binding on all groups (rather than just <code>batch</code>), you can place the Go comment with an asterisk like this:</p> <pre class="lang-golang prettyprint-override"><code>//+kubebuilder:rbac:groups=*,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete </code></pre> <p>This will change the autogenerated YAML for the <code>ClusterRole</code> to:</p> <pre><code>rules: - apiGroups: - '*' # instead of simply: batch </code></pre> <p>When deploying the updated operator, the controller should then be able to list/create/delete CronJobs.</p> <p>See here for a <a href="https://book.kubebuilder.io/reference/markers/rbac.html" rel="nofollow noreferrer">reference on RBAC markers for Kubebuilder comments</a>.</p>
<p>I would like to be able to limit the amount of jobs of a given &quot;type&quot; that run at the same time (maybe based on their label, e.g. no more than N jobs with label <code>mylabel</code> may run at the same time).</p> <p>I have a long running computation that requires a license key to run. I have N license keys and I would like to limit the amount of simultaneously running jobs to N. Here's how I imagine it working: I label the jobs with some special <code>tag</code>. Then, I schedule N + K jobs, then at most N jobs may be in state &quot;running&quot; and K jobs should be in the queue and may only transition to &quot;running&quot; state when the total number of running jobs labeled <code>mytag</code> is less or equal to N.</p> <p>[UPDATE]</p> <ul> <li>The jobs are independent of each other.</li> <li>The execution order is not important, although I would like them to be FIFO (time wise).</li> <li>The jobs are scheduled on user requests. That is, there is no fixed amount of work known in advance that needs to be processed, the requests to run a job with some set of parameters (configuration file) come sporadically in time.</li> </ul>
<p>Unfortunately there is no built-in feature to do this with labels in k8s. But since your jobs are scheduled based on unpredictable user requests, you can achieve your goal like this:</p> <ul> <li>create a new namespace: <code>kubectl create namespace quota-pod-ns</code></li> <li>create a ResourceQuota</li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ResourceQuota metadata: name: pod-max-number namespace: quota-pod-ns spec: hard: pods: &quot;5&quot; </code></pre> <p>This will limit the maximum number of pods in the namespace quota-pod-ns to 5.</p> <ul> <li>create your k8s jobs in the quota-pod-ns namespace.</li> </ul> <p>When you want to run a 6th job in that namespace, k8s will try to create the 6th pod and fail to do so. But once one of the running pods is Completed, the job controller will create the pending pod again, staying within the limit.</p>
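<p>To see how many of the N slots are currently occupied, you can inspect the quota itself (names taken from the example above):</p> <pre><code>$ kubectl describe resourcequota pod-max-number -n quota-pod-ns
</code></pre> <p>which reports the Used versus Hard count for pods in that namespace. One caveat for your FIFO wish: the quota only caps concurrency, it does not guarantee that the oldest pending Job gets its pod first once a slot frees up, since that depends on the job controller's retry timing rather than on submission order.</p>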
<p>Just wondering if there is any way in ingress-nginx to enforce rate limiting only if the custom health check URL is fine. I have been searching through the docs, but failed to find a way to do so. Any help will be appreciated.</p>
<p>Rate limiting is available in NGINX Ingress by using the correct <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rate-limiting" rel="nofollow noreferrer">annotations</a>. The available options are:</p> <ol> <li> <p><code>nginx.ingress.kubernetes.io/limit-connections</code>: number of concurrent connections allowed from a single IP address. A 503 error is returned when exceeding this limit.</p> </li> <li> <p><code>nginx.ingress.kubernetes.io/limit-rps</code>: number of requests accepted from a given IP each second. The burst limit is set to this limit multiplied by the burst multiplier, the default multiplier is 5. When clients exceed this limit, limit-req-status-code default: 503 is returned.</p> </li> <li> <p><code>nginx.ingress.kubernetes.io/limit-rpm</code>: number of requests accepted from a given IP each minute. The burst limit is set to this limit multiplied by the burst multiplier, the default multiplier is 5. When clients exceed this limit, limit-req-status-code default: 503 is returned.</p> </li> <li> <p><code>nginx.ingress.kubernetes.io/limit-burst-multiplier</code>: multiplier of the limit rate for burst size. The default burst multiplier is 5, this annotation overrides the default multiplier. When clients exceed this limit, limit-req-status-code default: 503 is returned.</p> </li> <li> <p><code>nginx.ingress.kubernetes.io/limit-rate-after</code>: initial number of kilobytes after which the further transmission of a response to a given connection will be rate limited. This feature must be used with proxy-buffering enabled.</p> </li> <li> <p><code>nginx.ingress.kubernetes.io/limit-rate</code>: number of kilobytes per second allowed to send to a given connection. The zero value disables rate limiting. This feature must be used with proxy-buffering enabled.</p> </li> <li> <p><code>nginx.ingress.kubernetes.io/limit-whitelist</code>: client IP source ranges to be excluded from rate-limiting. The value is a comma separated list of CIDRs.</p> </li> </ol> <p>There is one limitation of rate limiting with NGINX Ingress that matters for your case: the annotations apply to the whole Ingress resource, and there is no way to configure exceptions, e.g. to exclude a health check path such as <code>/healthz</code>. If a path needs different behaviour, it has to be moved into a separate Ingress resource without the rate-limit annotations.</p> <p>You can read more about NGINX rate limiting in Kubernetes in this <a href="https://medium.com/titansoft-engineering/rate-limiting-for-your-kubernetes-applications-with-nginx-ingress-2e32721f7f57" rel="nofollow noreferrer">guide</a>.</p>
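<p>For completeness, applying one of these annotations looks like this (host, service name and the values are placeholders):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: &quot;5&quot;
    nginx.ingress.kubernetes.io/limit-whitelist: &quot;10.0.0.0/8&quot;
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
</code></pre> <p>A health checker calling from a whitelisted source range bypasses the limit entirely, which is probably the closest built-in approximation to the conditional rate limiting you are after.</p>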
<p>I have been trying to fix the error on the Manage Jenkins section -</p> <p>Credentials from Kubernetes Secrets will not be available.See the log for more details.<a href="https://i.stack.imgur.com/DVBS6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DVBS6.png" alt="enter image description here" /></a></p> <p>I see the following error below when I click on <em><strong>See the log for more details</strong></em> -</p> <pre><code>java.net.UnknownHostException: kubernetes.default.svc: Name or service not known </code></pre> <p>and Jenkins system logs says <a href="https://i.stack.imgur.com/6EBCO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6EBCO.png" alt="enter image description here" /></a></p> <p>Jenkins version: 2.368 Kubernetes Credentials Provider Plugin version: 1.206.v7ce2cf7b_0c8b</p> <p>Any pointers to fix the same would be really great, thanks</p>
<p>This error <a href="https://aws.plainenglish.io/jenkins-failed-to-initialize-kubernetes-secret-provider-fe87a240477" rel="nofollow noreferrer"><strong>java.net.UnknownHostException: kubernetes.default.svc: Name or service not known</strong></a> appears when the Kubernetes Credentials Provider plug-in is installed but cannot reach the cluster API. <code>kubernetes.default.svc</code> is the in-cluster DNS name of the API server, so it only resolves when Jenkins runs inside a Kubernetes cluster with working cluster DNS. If you installed the plug-in but are not actually using it, disable or uninstall it and this message will not appear. If you do need to connect to Kubernetes via Jenkins, install the <a href="https://plugins.jenkins.io/kubernetes/" rel="nofollow noreferrer">Kubernetes plugin</a>.</p> <p>Credentials will be added and updated by adding/updating them as Secrets in Kubernetes. The format of the Secret differs depending on the type of credential you wish to expose, but they all have several things in common. Find these <a href="https://jenkinsci.github.io/kubernetes-credentials-provider-plugin/examples/" rel="nofollow noreferrer">examples</a>.</p>
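<p>For orientation, the username/password example from that page has roughly the following shape; the label and key names below are written from memory, so double-check them against the linked examples before relying on them:</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: my-app-credentials
  labels:
    jenkins.io/credentials-type: usernamePassword
  annotations:
    jenkins.io/credentials-description: credentials provided by Kubernetes
type: Opaque
stringData:
  username: my-user
  password: my-password
</code></pre> <p>The plug-in watches for Secrets labelled this way and exposes them as Jenkins credentials, but again, that only works when Jenkins itself runs inside the cluster.</p>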
<p>I've created a NiFi cluster on the AWS EKS. The initial deployment was working fine. Later I attached Persistent volume and persistent volume claim to the NiFi setup. After starting the NiFi, I'm getting this error:</p> <pre><code>ERROR in ch.qos.logback.core.rolling.RollingFileAppender[USER_FILE] - openFile(/opt/nifi/nifi-current/logs/nifi-user.log,true) call failed. java.io.FileNotFoundException: /opt/nifi/nifi-current/logs/nifi-user.log (Permission denied) </code></pre> <p>As I'm not an expert in NiFi and Kubernetes, I couldn't identify the issue. It looks like a permission issue on NiFi. The NiFi version I'm using is NiFI 1.15.0.</p> <p>What may be the possible root cause for this? Is that because NiFi is not using the root user or is that something else?</p> <p>I'm sharing the full error here:</p> <pre><code>13:56:22,449 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[USER_FILE] - openFile(/opt/nifi/nifi-current/logs/nifi-user.log,true) call failed. java.io.FileNotFoundException: /opt/nifi/nifi-current/logs/nifi-user.log (Permission denied) at java.io.FileNotFoundException: /opt/nifi/nifi-current/logs/nifi-user.log (Permission denied) at at java.io.FileOutputStream.open0(Native Method) at at java.io.FileOutputStream.open(FileOutputStream.java:270) at at java.io.FileOutputStream.&lt;init&gt;(FileOutputStream.java:213) at at ch.qos.logback.core.recovery.ResilientFileOutputStream.&lt;init&gt;(ResilientFileOutputStream.java:26) at at ch.qos.logback.core.FileAppender.openFile(FileAppender.java:204) at at ch.qos.logback.core.FileAppender.start(FileAppender.java:127) at at ch.qos.logback.core.rolling.RollingFileAppender.start(RollingFileAppender.java:100) at at ch.qos.logback.core.joran.action.AppenderAction.end(AppenderAction.java:90) at at ch.qos.logback.core.joran.spi.Interpreter.callEndAction(Interpreter.java:309) at at ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:193) at at ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:179) at at ch.qos.logback.core.joran.spi.EventPlayer.play(EventPlayer.java:62) at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:165) at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:152) at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:110) at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:53) at at ch.qos.logback.classic.util.ContextInitializer.configureByResource(ContextInitializer.java:75) at at ch.qos.logback.classic.util.ContextInitializer.autoConfig(ContextInitializer.java:150) at at org.slf4j.impl.StaticLoggerBinder.init(StaticLoggerBinder.java:84) at at org.slf4j.impl.StaticLoggerBinder.&lt;clinit&gt;(StaticLoggerBinder.java:55) at at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150) at at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124) at at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:417) at at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:362) at at org.apache.nifi.bootstrap.RunNiFi.&lt;init&gt;(RunNiFi.java:145) at at org.apache.nifi.bootstrap.RunNiFi.main(RunNiFi.java:284) </code></pre> <p>I'm also sharing the Kubernetes manifest part that describe the pv and PVC I used for creating the NiFi cluster:</p> <pre><code> volumeMounts: - name: &quot;data&quot; mountPath: /opt/nifi/nifi-current/data - name: &quot;flowfile-repository&quot; mountPath: /opt/nifi/nifi-current/flowfile_repository - name: 
&quot;content-repository&quot; mountPath: /opt/nifi/nifi-current/content_repository - name: &quot;provenance-repository&quot; mountPath: /opt/nifi/nifi-current/provenance_repository - name: &quot;logs&quot; mountPath: /opt/nifi/nifi-current/logs volumeClaimTemplates: - metadata: name: &quot;data&quot; spec: accessModes: [&quot;ReadWriteOnce&quot;] storageClassName: &quot;gp2&quot; resources: requests: storage: 1Gi - metadata: name: &quot;flowfile-repository&quot; spec: accessModes: [&quot;ReadWriteOnce&quot;] storageClassName: &quot;gp2&quot; resources: requests: storage: 10Gi - metadata: name: &quot;content-repository&quot; spec: accessModes: [&quot;ReadWriteOnce&quot;] storageClassName: &quot;gp2&quot; resources: requests: storage: 10Gi - metadata: name: &quot;provenance-repository&quot; spec: accessModes: [&quot;ReadWriteOnce&quot;] storageClassName: &quot;gp2&quot; resources: requests: storage: 10Gi - metadata: name: &quot;logs&quot; spec: accessModes: [&quot;ReadWriteOnce&quot;] storageClassName: &quot;gp2&quot; resources: requests: storage: 5Gi </code></pre> <p>Any help is appreciated.</p>
<p>Assuming you don't have any issues creating the PV and PVC, try adding an extra <code>initContainers</code> section that lets the NiFi user (UID and GID 1000) read and write the provisioned EBS volume:</p> <pre class="lang-yaml prettyprint-override"><code>initContainers: - name: fixmount image: busybox command: [ 'sh', '-c', 'chown -R 1000:1000 /opt/nifi/nifi-current/logs' ] volumeMounts: - name: logs mountPath: /opt/nifi/nifi-current/logs </code></pre> <p>I hope this helps solve your issue. Here is the official Kubernetes documentation page on <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">Init Containers</a>.</p>
<p>Is it possible to create resource:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: projectcalico.org/v3 kind: HostEndpoint </code></pre> <p>using calico operator? I want to get rid of <code>calicoctl</code>.</p>
<p>Creating a host endpoint resource is currently only possible with <code>calicoctl</code>, so you cannot get rid of it for this particular resource.</p> <p>As mentioned in the <a href="https://projectcalico.docs.tigera.io/reference/host-endpoints/objects" rel="nofollow noreferrer">document</a>:</p> <blockquote> <p>For each host endpoint that you want Calico to secure, you’ll need to create a host endpoint object in etcd. Use the <code>calicoctl create</code> command to create a host endpoint resource (HostEndpoint).</p> </blockquote> <p>There are two ways to specify the interface that a host endpoint should refer to. You can either specify the name of the interface or its expected IP address. In either case, you’ll also need to know the name given to the Calico node running on the host that owns the interface; in most cases this will be the same as the hostname of the host.</p>
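<p>For reference, a minimal HostEndpoint manifest applied with calicoctl could look like this (node name, labels, interface and IP are placeholders for your environment):</p> <pre><code>apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: worker1-eth0
  labels:
    role: worker
spec:
  node: worker1          # must match the Calico node name, usually the hostname
  interfaceName: eth0
  expectedIPs:
  - 192.168.0.11
</code></pre> <p>Apply it with <code>calicoctl apply -f hostendpoint.yaml</code> (or <code>calicoctl create -f ...</code> as quoted above).</p>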
<p>When I run the command <code>kubectl get svc</code> from the <a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#eks-configure-kubectl" rel="nofollow noreferrer">tutorial</a> I'm following,</p> <p>I get: error: You must be logged in to the server (the server has asked for the client to provide credentials).</p> <p>When I look at my <code>~/.kube/config</code> file all looks good. The user there is the exact same one that I used to create the cluster in the first place.</p> <p>So I see two options:</p> <ol> <li>The user has no IAM policy that allows it to run <code>kubectl get svc</code>, which is quite likely because all my problems so far have come from IAM</li> <li>It has something to do with the <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html#aws-auth-users" rel="nofollow noreferrer">IAM principal</a>.</li> </ol> <p>So my questions are: what IAM policies do I need to run <code>kubectl get svc</code>, or alternatively, how do I add an IAM principal to the EKS cluster? The <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html#aws-auth-users" rel="nofollow noreferrer">doc</a> uses kubectl to add the IAM principal to the cluster, which... that's a loop with no end in sight.</p>
<p>Here are some troubleshooting steps which you can try to fix the error:</p> <ol> <li>Check if the credentials in your kubeconfig are expired or stale.</li> </ol> <p>For EKS, try regenerating the kubeconfig entry by running</p> <pre><code>$ aws eks update-kubeconfig --region &lt;region&gt; --name &lt;cluster-name&gt; </code></pre> <p>(If you were renewing Kubernetes certificates on a self-managed cluster, you would instead replace the <code>client-certificate-data</code> and <code>client-key-data</code> values in <code>~/.kube/config</code> with the values of the same name from the updated <code>/etc/kubernetes/kubelet.conf</code>; on EKS the control plane is managed, so that step does not apply.)</p> <ol start="2"> <li><p>The authentication problem can also be related to one of the pods using a service account that has issues like an invalid token.</p> </li> <li><p>When an EKS cluster is created, the IAM user (or role) that creates the cluster is automatically granted the system:masters permissions in the cluster's RBAC configuration. Other users or roles that need the ability to interact with your cluster have to be added explicitly. Refer to the link <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-api-server-unauthorized-error/#You.27re_not_the_cluster_creator" rel="nofollow noreferrer">here</a> for the related info.</p> </li> </ol> <p>You can also refer to this GitHub <a href="https://github.com/kubernetes/kubernetes/issues/63128" rel="nofollow noreferrer">link</a> for additional information.</p>
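<p>For point 3, the mapping lives in the <code>aws-auth</code> ConfigMap in <code>kube-system</code> and has to be edited by an identity that already has cluster access (typically the cluster creator), which is exactly the loop mentioned in the question. A hedged sketch with a placeholder account ID and user name:</p> <pre><code>$ kubectl edit configmap aws-auth -n kube-system
</code></pre> <p>and add under <code>data</code>:</p> <pre><code>  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/my-user
      username: my-user
      groups:
        - system:masters
</code></pre> <p>Alternatively, <code>eksctl create iamidentitymapping</code> can write the same entry for you. Either way, the first mapping has to be created with the cluster creator's AWS credentials; after that, the mapped user can use kubectl normally.</p>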
<p>I'm using the GKE Ingress controller. The ingress is configured to terminate TLS, the underlying service is listening on port 80.</p> <p>I don't need to serve my frontend externally on port 80 and would prefer that the LB created by the controller keep that port closed. I can't figure out if that's even possible, or what to change in the ingress resource definition.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: foo-ingress namespace: foo spec: tls: - hosts: - foo.example.com secretName: foo-example-com-cert rules: - host: foo.example.com http: paths: - path: / pathType: Prefix backend: service: name: foo-service port: number: 80 </code></pre>
<p>You can <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb#disabling_http" rel="nofollow noreferrer">disable HTTP on the Ingress</a> by setting the <code>kubernetes.io/ingress.allow-http: &quot;false&quot;</code> annotation, which keeps port 80 closed on the load balancer. Alternatively, if you only want to avoid serving your frontend over plain HTTP, you can configure an <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect" rel="nofollow noreferrer">HTTP-to-HTTPS redirect</a> with a FrontendConfig; that keeps port 80 open, but only for the redirect.</p> <pre><code>apiVersion: networking.gke.io/v1beta1 kind: FrontendConfig metadata: name: ssl-redirect spec: redirectToHttps: enabled: true # add below to ingress # metadata: # annotations: # networking.gke.io/v1beta1.FrontendConfig: ssl-redirect </code></pre> <p><strong>Note</strong> - When toggling the <code>allow-http</code> annotation you may need to recreate the Ingress for the change to be applied to the load balancer; the cluster itself does not need to be recreated.</p> <p>You can also refer to this <a href="https://github.com/kubernetes/ingress-gce/issues/290" rel="nofollow noreferrer">Github</a> link for more information.</p>
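<p>A minimal sketch of the first option: only the <code>metadata</code> block of your Ingress changes, the <code>tls</code> and <code>rules</code> sections stay exactly as they are:</p> <pre><code>metadata:
  name: foo-ingress
  namespace: foo
  annotations:
    kubernetes.io/ingress.allow-http: &quot;false&quot;
</code></pre> <p>After this is in place, the load balancer only exposes the HTTPS (443) frontend for this Ingress.</p>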
<p>When I run the command <code>kubectl get pods --all-namespaces</code> I get this error: <code>Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.</code></p> <p>All of my pods are running and ready 1/1, but when I use <code>microk8s kubectl get service -n kube-system</code> I get</p> <pre><code>kubernetes-dashboard ClusterIP 10.152.183.132 &lt;none&gt; 443/TCP 6h13m dashboard-metrics-scraper ClusterIP 10.152.183.10 &lt;none&gt; 8000/TCP 6h13m </code></pre> <p>I am missing kube-dns even though DNS is enabled. Also, when I run this to proxy on all IP addresses, <code>microk8s kubectl proxy --accept-hosts=.* --address=0.0.0.0 &amp;</code>, I only get <code>Starting to serve on [::]:8001</code> and the background job ID (e.g. <code>[1] 84623</code>) is missing.</p> <p>I am using microk8s and multipass with Hyper-V Manager on Windows, and I can't reach the dashboard in the browser. I am also a beginner, this is for my college paper. I saw something similar online but it was for Azure.</p>
<p>Posting answer from comments for better visibility: Problem solved by reinstalling multipass and microk8s. Now it works.</p>
<p>I'm using kubernetes 1.21 cronjob to schedule a few jobs to run at a certain time every day.</p> <p>I scheduled a job to be run at 4pm, via <code>kubectl apply -f &lt;name of yaml file&gt;</code>. Subsequently, I updated the yaml <code>schedule: &quot;0 22 * * *&quot;</code> to trigger the job at 10pm, using the same command <code>kubectl apply -f &lt;name of yaml file&gt;</code> However, after applying the configuration at around 1pm, the job still triggers at 4pm (shouldn't have happened), and then triggers again at 10pm (intended trigger time).</p> <p>Is there an explanation as to why this happens, and can I prevent it?</p> <p>Sample yaml for the cronjob below:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1 kind: CronJob metadata: name: job-name-1 spec: schedule: &quot;0 16 * * *&quot; # 4pm successfulJobsHistoryLimit: 1 failedJobsHistoryLimit: 1 jobTemplate: spec: template: spec: containers: - image: sample-image name: job-name-1 args: - node - ./built/script.js env: - name: NODE_OPTIONS value: &quot;--max-old-space-size=5000&quot; restartPolicy: Never nodeSelector: app: cronjob </code></pre> <p>I'm expecting the job to only trigger at 10pm.</p> <p>Delete the cronjob and reapply it seems to eliminate such issues, but there are scenarios where I cannot the delete the job (because it's still running).</p>
<ol> <li>When you ran <code>kubectl apply -f &lt;name of yaml file&gt;</code> with the new schedule, check whether it actually updated the existing CronJob or created a second one (this happens, for example, if <code>metadata.name</code> changed between the two versions of the file). With two CronJob objects present, both the 4pm and the 10pm schedule fire, which matches what you observed; <code>kubectl get cronjobs</code> will show whether there is more than one.</li> </ol> <p>To change only the schedule of the existing CronJob in place, you can also patch it:</p> <pre><code>kubectl patch cronjob job-name-1 -p '{&quot;spec&quot;:{&quot;schedule&quot;: &quot;0 22 * * *&quot;}}' </code></pre> <p>This will make the Job trigger only at 10pm.</p> <ol start="2"> <li>To delete the CronJob even while one of its Jobs is still running, use kubectl:</li> </ol> <pre><code>kubectl delete cronjob job-name-1 </code></pre> <p>This stops any further scheduling. By default the delete cascades to the Jobs (and pods) the CronJob created, including a running one; if you want to leave a running Job untouched, add <code>--cascade=orphan</code> and clean it up later with</p> <pre><code>kubectl delete job &lt;job-name&gt; </code></pre>