<p>I use saml2aws with Okta authentication to access AWS from my local machine. I have also added the k8s cluster config to my machine. While trying to connect to k8s, say to list pods, a simple <code>kubectl get pods</code> returns an error: <code>[Errno 2] No such file or directory: '/var/run/secrets/eks.amazonaws.com/serviceaccount/token' Unable to connect to the server: getting credentials: exec: executable aws failed with exit code 255</code></p> <p>But if I do <code>saml2aws exec kubectl get pods</code> I am able to fetch pods.</p> <p>I don't understand whether the problem is with how the credentials are stored, or where to even begin diagnosing it.</p> <p>Any kind of help will be appreciated.</p>
<p>To integrate saml2aws with Okta, you need to create a saml2aws profile first.</p> <ul> <li>Configure the profile</li> </ul> <pre><code>saml2aws configure \
  --skip-prompt \
  --mfa Auto \
  --region &lt;region, e.g. us-east-2&gt; \
  --profile &lt;awscli_profile&gt; \
  --idp-account &lt;saml2aws_profile_name&gt; \
  --idp-provider Okta \
  --username &lt;your email&gt; \
  --role arn:aws:iam::&lt;account_id&gt;:role/&lt;aws_role_initial_assume&gt; \
  --session-duration 28800 \
  --url &quot;https://&lt;company&gt;.okta.com/home/amazon_aws/.......&quot;
</code></pre> <blockquote> <p>The URL, region, etc. can be obtained from the Okta integration UI.</p> </blockquote> <ul> <li>Login</li> </ul> <pre><code>saml2aws login --idp-account &lt;saml2aws_profile_name&gt;
</code></pre> <p>That should prompt you for your password and MFA, if it exists.</p> <ul> <li>Verification</li> </ul> <pre><code>aws --profile=&lt;awscli_profile&gt; s3 ls
</code></pre> <p>Then finally, just export AWS_PROFILE:</p> <pre><code>export AWS_PROFILE=&lt;awscli_profile&gt;
</code></pre> <p>and use the AWS CLI directly:</p> <pre><code>aws sts get-caller-identity
</code></pre>
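<p>If you prefer not to export <code>AWS_PROFILE</code> in every shell, another option is to pin the profile inside the kubeconfig itself. This is only a sketch: it assumes your kubeconfig user entry uses the usual <code>aws eks get-token</code> exec plugin, and the cluster/profile names are placeholders.</p> <pre><code># kubeconfig excerpt (user entry); names are placeholders
users:
- name: my-eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args: [&quot;eks&quot;, &quot;get-token&quot;, &quot;--cluster-name&quot;, &quot;&lt;cluster_name&gt;&quot;]
      env:
      - name: AWS_PROFILE
        value: &lt;awscli_profile&gt;
</code></pre> <p>With that in place, <code>kubectl</code> runs the credential helper with the right profile even when the variable is not exported in the current shell.</p>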
<p>I have an AKS cluster, as well as a separate VM. The AKS cluster and the VM are in the same VNET (as well as subnet).</p> <p>I deployed an echo server with the following yaml. I'm able to directly curl the pod with the vnet ip from the VM, but when trying that through the load balancer, nothing returns. Really not sure what I'm missing. Any help is appreciated.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: &quot;true&quot;
spec:
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: echo-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
        - name: echo-server
          image: ealen/echo-server
          ports:
            - name: http
              containerPort: 8080
</code></pre> <p>The following pictures demonstrate the situation <a href="https://i.stack.imgur.com/b9KFj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b9KFj.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/oG4dF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oG4dF.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/kQEUM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kQEUM.png" alt="enter image description here" /></a></p> <p>I'm expecting that when I curl the vnet ip of the load balancer, I receive the same response as I did when directly curling the pod ip.</p>
<p>Can you check your internal load balancer's health probe?</p> <p>&quot;For Kubernetes 1.24+ the services of type LoadBalancer with appProtocol HTTP/HTTPS will switch to use HTTP/HTTPS as health probe protocol (while before v1.24.0 it uses TCP). And / will be used as the default health probe request path. If your service doesn’t respond 200 for /, please ensure you're setting the service annotation service.beta.kubernetes.io/port_{port}_health-probe_request-path or service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path (applies to all ports) with the correct request path to avoid service breakage.&quot; (ref: <a href="https://github.com/Azure/AKS/releases/tag/2022-09-11" rel="nofollow noreferrer">https://github.com/Azure/AKS/releases/tag/2022-09-11</a>)</p> <p>If you are using the nginx-ingress controller, try adding the same annotation as mentioned in the doc (<a href="https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli#basic-configuration" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli#basic-configuration</a>):</p> <pre><code>helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --reuse-values \
  --namespace &lt;NAMESPACE&gt; \
  --set controller.service.annotations.&quot;service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path&quot;=/healthz
</code></pre>
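<p>If you are exposing the echo server directly through the internal load balancer (no ingress controller in between), the same request-path annotation from the release notes can go straight onto the Service from the question. A sketch, assuming the echo server answers 200 on <code>/</code>:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: &quot;true&quot;
    # assumption: the container returns 200 for &quot;/&quot;; adjust the path otherwise
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /
spec:
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: echo-server
</code></pre>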
<p>We have configured Fluent-bit to send the logs from our cluster directly to CloudWatch. We have enabled the Kubernetes filter in order to set our log_stream_name as $(kubernetes['container_name']).</p> <p>However, the logs are terrible.</p> <p>Each CloudWatch line looks like this:</p> <pre><code> 2022-06-23T14:17:34.879+02:00 {&quot;kubernetes&quot;:{&quot;redacted_redacted&quot;:&quot;145236632541.lfl.ecr.region-#.amazonaws.com/redacted@sha256:59392fab7hsfghsfghsfghsfghsfghsfghc39c1bee75c0b4bfc2d9f4a405aef449b25&quot;,&quot;redacted_image&quot;:&quot;145236632541.lfl.ecr.region-#.amazonaws.com/redacted:ve3b56a45&quot;,&quot;redacted_name&quot;:&quot;redacted-redacted&quot;,&quot;docker_id&quot;:&quot;b431f9788f46sd5f4ds65f4sd56f4sd65f4d336fff4ca8030a216ecb9e0a&quot;,&quot;host&quot;:&quot;ip-0.0.0.0.region-#.compute.internal&quot;,&quot;namespace_name&quot;:&quot;namespace&quot;,&quot;pod_id&quot;:&quot;podpodpod-296c-podpod-8954-podpodpod&quot;,&quot;pod_name&quot;:&quot;redacted-redacted-redacted-7dcbfd4969-mb5f5&quot;}, 2022-06-23T14:17:34.879+02:00 {&quot;kubernetes&quot;:{&quot;redacted_redacted&quot;:&quot;145236632541.lfl.ecr.region-#.amazonaws.com/redacted@sha256:59392fab7hsfghsfghsfghsfghsfghsfghc39c1bee75c0b4bfc2d9f4a405aef449b25&quot;,&quot;redacted_image&quot;:&quot;145236632541.lfl.ecr.region-#.amazonaws.com/redacted:ve3b56a45&quot;,&quot;redacted_name&quot;:&quot;redacted-redacted&quot;,&quot;docker_id&quot;:&quot;b431f9788f46sd5f4ds65f4sd56f4sd65f4d336fff4ca8030a216ecb9e0a&quot;,&quot;host&quot;:&quot;ip-0.0.0.0.region-#.compute.internal&quot;,&quot;namespace_name&quot;:&quot;namespace&quot;,&quot;pod_id&quot;:&quot;podpodpod-296c-podpod-8954-podpodpod&quot;,&quot;pod_name&quot;:&quot;redacted-redacted-redacted-7dcbfd4969-mb5f5&quot;}, 2022-06-23T14:17:34.879+02:00 {&quot;kubernetes&quot;:{&quot;redacted_redacted&quot;:&quot;145236632541.lfl.ecr.region-#.amazonaws.com/redacted@sha256:59392fab7hsfghsfghsfghsfghsfghsfghc39c1bee75c0b4bfc2d9f4a405aef449b25&quot;,&quot;redacted_image&quot;:&quot;145236632541.lfl.ecr.region-#.amazonaws.com/redacted:ve3b56a45&quot;,&quot;redacted_name&quot;:&quot;redacted-redacted&quot;,&quot;docker_id&quot;:&quot;b431f9788f46sd5f4ds65f4sd56f4sd65f4d336fff4ca8030a216ecb9e0a&quot;,&quot;host&quot;:&quot;ip-0.0.0.0.region-#.compute.internal&quot;,&quot;namespace_name&quot;:&quot;namespace&quot;,&quot;pod_id&quot;:&quot;podpodpod-296c-podpod-8954-podpodpod&quot;,&quot;pod_name&quot;:&quot;redacted-redacted-redacted-7dcbfd4969-mb5f5&quot;}, 2022-06-23T14:20:07.074+02:00 {&quot;kubernetes&quot;:{&quot;redacted_redacted&quot;:&quot;145236632541.lfl.ecr.region-#.amazonaws.com/redacted@sha256:59392fab7hsfghsfghsfghsfghsfghsfghc39c1bee75c0b4bfc2d9f4a405aef449b25&quot;,&quot;redacted_image&quot;:&quot;145236632541.lfl.ecr.region-#.amazonaws.com/redacted:ve3b56a45&quot;,&quot;redacted_name&quot;:&quot;redacted-redacted&quot;,&quot;docker_id&quot;:&quot;b431f9788f46sd5f4ds65f4sd56f4sd65f4d336fff4ca8030a216ecb9e0a&quot;,&quot;host&quot;:&quot;ip-0.0.0.0.region-#.compute.internal&quot;,&quot;namespace_name&quot;:&quot;namespace&quot;,&quot;pod_id&quot;:&quot;podpodpod-296c-podpod-8954-podpodpod&quot;,&quot;pod_name&quot;:&quot;redacted-redacted-redacted-7dcbfd4969-mb5f5&quot;}, </code></pre> <p>Which makes the logs unusable unless expanded, and once expanded the logs look like this:</p> <pre><code>2022-06-23T14:21:34.207+02:00 { &quot;kubernetes&quot;: { &quot;container_hash&quot;: 
&quot;145236632541.lfl.ecr.region.amazonaws.com/redacted@sha256:59392fab7hsfghsfghsfghsfghsfghsfghc39c1bee75c0b4bfc2d9f4a405aef449b25&quot;, &quot;container_image&quot;: &quot;145236632541.lfl.ecr.region-#.amazonaws.com/redacted:ve3b56a45&quot;, &quot;container_name&quot;: &quot;redacted-redacted&quot;, &quot;docker_id&quot;: &quot;b431f9788f46sd5f4ds65f4sd56f4sd65f4d336fff4ca8030a216ecb9e0a&quot;, &quot;host&quot;: &quot;ip-0.0.0.0.region-#.compute.internal&quot;, &quot;namespace_name&quot;: &quot;redacted&quot;, &quot;pod_id&quot;: &quot;podpodpod-296c-podpod-8954-podpodpod&quot;, &quot;pod_name&quot;: &quot;redacted-redacted-redacted-7dcbfd4969-mb5f5&quot; }, &quot;log&quot;: &quot;[23/06/2022 12:21:34] loglineloglinelogline\ loglineloglinelogline \n&quot;, &quot;stream&quot;: &quot;stdout&quot; } {&quot;kubernetes&quot;:{&quot;redacted_redacted&quot;:&quot;145236632541.lfl.ecr.region-#.amazonaws.com/redacted@sha256:59392fab7hsfghsfghsfghsfghsfghsfghc39c1bee75c0b4bfc2d9f4a405aef449b25&quot;,&quot;redacted_image </code></pre> <p>Which is also a bit horrible because every line is flooded with Kubernetes data. I would like to remove the Kubernetes data from the logs completely, But I would like to keep using $(kubernetes['container_name']) as the log stream name so that the logs are properly named. I have tried using filters with Remove_key and LUA scripts that would remove the Kubernetes data. But as soon as something removes it, the log stream cannot be named $(kubernetes['container_name']).</p> <p>I have found very little documentation on this. And have not found a proper way to remove Kubernetes data and to keep my log_stream_name as my container_name.</p> <p>Here is the raw with the fluent bit config that I used: <a href="https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/fluent-bit/fluent-bit-compatible.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/fluent-bit/fluent-bit-compatible.yaml</a></p> <p>Any help would be appreciated.</p>
<p>There is an instruction for this in <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-logs-FluentBit.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-logs-FluentBit.html</a>, under &quot;(Optional) Reducing the log volume from Fluent Bit&quot;.</p> <p>Just add a nest filter in the log config, e.g.:</p> <pre><code>user-api.conf: |
  [INPUT]
      Name                tail
      Tag                 user-api.*
      Path                /var/log/containers/user-api*.log
      Docker_Mode         On
      Docker_Mode_Flush   5
      Docker_Mode_Parser  container_firstline_user
      Parser              docker
      DB                  /var/fluent-bit/state/flb_user_api.db
      Mem_Buf_Limit       50MB
      Skip_Long_Lines     On
      Refresh_Interval    10
      Rotate_Wait         30
      storage.type        filesystem
      Read_from_Head      ${READ_FROM_HEAD}

  [FILTER]
      Name                kubernetes
      Match               user-api.*
      Kube_URL            https://kubernetes.default.svc:443
      Kube_Tag_Prefix     user-api.var.log.containers.
      Merge_Log           On
      Merge_Log_Key       log_processed
      K8S-Logging.Parser  On
      K8S-Logging.Exclude Off
      Labels              Off
      Annotations         Off

  [FILTER]
      Name                grep
      Match               user-api.*
      Exclude             log /.*&quot;GET \/ping HTTP\/1.1&quot; 200.*/

  [FILTER]
      Name                nest
      Match               user-api.*
      Operation           lift
      Nested_under        kubernetes
      Add_prefix          Kube.

  [FILTER]
      Name                modify
      Match               user-api.*
      Remove              kubernetes.kubernetes.host
      Remove              Kube.container_hash
      Remove              Kube.container_image
      Remove              Kube.container_name
      Remove              Kube.docker_id
      Remove              Kube.host
      Remove              Kube.pod_id

  [FILTER]
      Name                nest
      Match               user-api.*
      Operation           nest
      Wildcard            Kube.*
      Nested_under        kubernetes
      Remove_prefix       Kube.

  [OUTPUT]
      Name                cloudwatch_logs
      Match               user-api.*
      region              ${AWS_REGION}
      log_group_name      /aws/containerinsights/${CLUSTER_NAME}/user-api
      log_stream_prefix   app-
      auto_create_group   true
      extra_user_agent    container-insights
</code></pre>
<p>I have a Kubernetes deployment that is continuously running with a number of replicas that varies over time depending on an autoscaling logic. Is there any command or (series of commands) that return the number of replicas used and their lifetime (e.g., creationTime and terminationTime of each replica)?</p> <p>Note that the deployment is on GKE so a solution that leverages GCP logging query language is also welcomed.</p>
<p>You could run the following command:</p> <pre><code>kubectl get pods -l app=&lt;deployment-name&gt; -o jsonpath='{range .items[*]}{.metadata.creationTimestamp}{&quot;\t&quot;}{.metadata.deletionTimestamp}{&quot;\n&quot;}{end}'
</code></pre> <p>It uses <code>jsonpath</code> to extract the creation timestamp <code>(metadata.creationTimestamp)</code> and deletion timestamp <code>(metadata.deletionTimestamp)</code> for each pod matching the specified deployment label <code>(-l app=&lt;deployment-name&gt;)</code>.</p> <p>The result will show the creation time and termination time (if applicable) of each replica associated with the deployment. The termination time will be <code>null</code> for replicas that are currently running.</p>
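<p>If you prefer a tabular view, a roughly equivalent sketch using <code>custom-columns</code> (same assumption about the label selector matching your deployment's pods) is:</p> <pre><code>kubectl get pods -l app=&lt;deployment-name&gt; \
  -o custom-columns=NAME:.metadata.name,CREATED:.metadata.creationTimestamp,DELETED:.metadata.deletionTimestamp
</code></pre> <p>Keep in mind that <code>kubectl get pods</code> only sees pods that still exist in the API; replicas that were fully deleted some time ago will not appear, so for a longer history you would have to rely on your cluster's audit or GCP logs.</p>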
<p>I have a private GKE cluster with no public endpoint. I have confirmed that I can authenticate and run <code>kubectl</code> commands against the cluster with my principal account. I am deploying Terraform from the same VM I tested the <code>kubectl</code> commands against and have added that IP address to the cluster's Master Authorized Networks.</p> <p>Whenever I try to deploy workload identity with Terraform, I receive this error:</p> <pre><code>Error: Post &quot;https://10.0.0.2/api/v1/namespaces/default/serviceaccounts&quot;: context deadline exceeded

  on .terraform/modules/gke-workload-identity/modules/workload-identity/main.tf line 48, in resource &quot;kubernetes_service_account&quot; &quot;main&quot;:
  48: resource &quot;kubernetes_service_account&quot; &quot;main&quot; {
</code></pre> <p>I have granted the Service Account for Terraform deployment the proper IAM Roles for WI. I am using the standard <code>terraform-google-modules</code> for <a href="https://registry.terraform.io/modules/terraform-google-modules/kubernetes-engine/google/latest/submodules/workload-identity" rel="nofollow noreferrer">workload identity</a> and GKE <a href="https://registry.terraform.io/modules/terraform-google-modules/kubernetes-engine/google/latest/submodules/safer-cluster" rel="nofollow noreferrer">cluster</a>.</p> <p>Here is also my TF Kubernetes provider block:</p> <pre><code>provider &quot;kubernetes&quot; {
  host                   = &quot;https://${module.gke.endpoint}&quot;
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.gke.ca_certificate)
}
</code></pre>
<p><strong>kubectl config view</strong></p> <p>Displays merged kubeconfig settings or a specified kubeconfig file.</p> <p>You can use <code>--output jsonpath={...}</code> to extract specific values using a jsonpath expression.</p> <p><strong>Example:</strong></p> <pre><code>kubectl config view --raw
</code></pre> <p>For community reference: <a href="https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_config_view/" rel="nofollow noreferrer">kubectl_config_view</a></p>
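<p>For instance, a jsonpath query of that kind (the field names follow the standard kubeconfig schema) can pull out just the API server endpoints:</p> <pre><code>kubectl config view -o jsonpath='{.clusters[*].cluster.server}'
</code></pre>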
<p>I recently set up a GKE Autopilot cluster but realized it doesn't support webhooks, which cert-manager depends on. What other options do we have to add/manage SSL certificates on a GKE Autopilot cluster?</p>
<p>As of May 2021, GKE Autopilot has no support for 3rd party webhooks. Without webhooks, many Kubernetes plugins such as cert-manager cannot operate correctly. Cert-manager uses a custom mutating admission webhook to manage certificates, which is immutable on GKE Autopilot.</p> <p>To add/manage SSL certificates for Autopilot clusters, you should first start with this official GCP doc <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="noreferrer">Google-managed SSL certificates</a>.</p> <p>You can configure Google-managed SSL certificates using a ManagedCertificate custom resource, which is available in different API versions, depending on your GKE cluster version. It's recommended that you use a newer API version.</p> <ul> <li>ManagedCertificate v1beta2 API is available in GKE cluster versions 1.15 and later.</li> <li>ManagedCertificate v1 API is available in GKE cluster versions 1.17.9-gke.6300 and later.</li> </ul> <blockquote> <p><strong>Note</strong>: Google-managed SSL certificates aren't currently supported for internal HTTPS load balancers. For internal HTTPS load balancers, use self-managed SSL certificates instead. This feature is only available for Ingress for External HTTP(S) Load Balancing, can read more <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="noreferrer">here</a>.</p> </blockquote> <p>To configure a Google-managed SSL certificate and associate it with an Ingress, follow the two basic steps first:</p> <ul> <li>Create a ManagedCertificate object in the same namespace as the Ingress.</li> <li>Associate the ManagedCertificate object to an Ingress by adding an annotation networking.gke.io/managed-certificates to the Ingress. This annotation is a comma-separated list of ManagedCertificate resources, cert1,cert2,cert3 for example. Which is mentioned in detail <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#creating_an_ingress_with_a_google-managed_certificate" rel="noreferrer">here</a>.</li> </ul> <p>You have to follow some <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#prerequisites" rel="noreferrer">prerequisites</a>:</p> <ul> <li>You must own the domain name (Google Domains or another registrar).</li> <li>Your &quot;kubernetes.io/ingress.class&quot; must be &quot;gce&quot;.</li> <li>Create a reserved (static) external IP address. If you do not reserve an address, it may change, requiring you to reconfigure your domain's DNS records.</li> </ul> <p>For setting up a Google-managed certificate, go through the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#setting_up_a_google-managed_certificate" rel="noreferrer">sample ManagedCertificate manifest</a>.</p>
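<p>A minimal sketch of those two steps, assuming the <code>gce</code> ingress class, a reserved static IP named <code>my-static-ip</code>, and placeholder domain/service names:</p> <pre><code>apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: example-cert
spec:
  domains:
    - example.yourdomain.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: &quot;gce&quot;
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
    networking.gke.io/managed-certificates: example-cert
spec:
  defaultBackend:
    service:
      name: example-service
      port:
        number: 80
</code></pre> <p>Once DNS for the domain points at the reserved IP, provisioning can take a while; the certificate status is visible with <code>kubectl describe managedcertificate example-cert</code>.</p>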
<p>I'm configuring app to works in Kubernetes google cloud cluster. I'd like to pass parameters to <code>application.properties</code> in Spring boot application via <code>configMap</code>. I'm trying to pass value by <strong>Environment Variable</strong>.</p> <p>I've created config map in google cloud Kubernetes cluster in namespace <code>default</code> like below:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: app-config data: config-key: 12345789abde kubectl create -f app-config.yaml -n default configmap/app-config created </code></pre> <p>I'm checking if configMap has been created:</p> <p><a href="https://i.stack.imgur.com/qYEoJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qYEoJ.png" alt="enter image description here" /></a></p> <p>key value pair looks fine:</p> <p><a href="https://i.stack.imgur.com/gPgBK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gPgBK.png" alt="enter image description here" /></a></p> <p>I'm trying to deploy Spring boot app with using <code>cloudbuild.yaml</code>(it works when I not use configMap). The content is:</p> <pre><code>substitutions: _CLOUDSDK_COMPUTE_ZONE: us-central1-c # default value _CLOUDSDK_CONTAINER_CLUSTER: kubernetes-cluster-test # default value steps: - id: 'Build docker image' name: 'gcr.io/cloud-builders/docker' args: ['build', '-t', 'gcr.io/${_TECH_RADAR_PROJECT_ID}/${_TECH_CONTAINER_IMAGE}:$SHORT_SHA', '.'] - id: 'Push image to Container Registry' name: 'gcr.io/cloud-builders/docker' args: ['push', 'gcr.io/${_TECH_RADAR_PROJECT_ID}/${_TECH_CONTAINER_IMAGE}:$SHORT_SHA'] - id: 'Set image in yamls' name: 'ubuntu' args: ['bash','-c','sed -i &quot;s,${_TECH_CONTAINER_IMAGE},gcr.io/${_TECH_RADAR_PROJECT_ID}/${_TECH_CONTAINER_IMAGE}:$SHORT_SHA,&quot; deployment.yaml'] - id: 'Create or update cluster based on last docker image' name: 'gcr.io/cloud-builders/kubectl' args: ['apply', '-f', 'deployment.yaml'] env: - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}' - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}' - id: 'Expose service to outside world via load balancer' name: 'gcr.io/cloud-builders/kubectl' args: [ 'apply', '-f', 'service-load-balancer.yaml' ] env: - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}' - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}' options: logging: CLOUD_LOGGING_ONLY </code></pre> <p><code>deployment.yaml</code> with reference to config map (container is also in default namespace) is:</p> <pre><code>apiVersion: &quot;apps/v1&quot; kind: &quot;Deployment&quot; metadata: name: &quot;java-kubernetes-clusters-test&quot; namespace: &quot;default&quot; labels: app: &quot;java-kubernetes-clusters-test&quot; spec: replicas: 3 selector: matchLabels: app: &quot;java-kubernetes-clusters-test&quot; template: metadata: labels: app: &quot;java-kubernetes-clusters-test&quot; spec: containers: - name: &quot;app&quot; image: &quot;kubernetes-cluster-test-image&quot; env: - name: CONFIG_KEY valueFrom: configMapKeyRef: name: app-config key: config-key </code></pre> <p>Spring boot is not able to read placeholder and I'm obtaining an error in attempt to deply app to google cloud like below:</p> <p><a href="https://i.stack.imgur.com/igoGi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/igoGi.png" alt="enter image description here" /></a></p> <p>I'm trying to reference to env with name CONFIG_KEY in <code>application.properties</code>:</p> <pre><code>com.example.dockerkubernetes.property.value=${CONFIG_KEY} </code></pre> <p>and then in 
Spring Boot controller:</p> <pre><code>@RestController
@RequiredArgsConstructor
public class MyController {

    @Value(&quot;${com.example.dockerkubernetes.property.value}&quot;)
    private String testValue;

    public String getConfigMapTestKey() {
        return this.testValue;
    }
}
</code></pre> <p>Does anyone have any idea why it doesn't work? Maybe some permissions are missing? I would be grateful for help.</p>
<p>You can read properties from the environment, but that could require a bit of coding &amp; refactoring (depending on the rest of your code) to import and use such values, e.g. into a <code>Properties</code> object.</p> <p>If you want to use an <code>application.properties</code> file directly, you're better off mounting the single file into the app's container by <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">using subPath</a>.</p> <p>Make sure you mount the file exactly as specified in the linked official docs, or you are likely to get errors related to the container's FS permissions.</p>
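<p>A minimal sketch of that subPath mount, assuming the properties file lives in a ConfigMap under the key <code>application.properties</code> (the mount path is only an example; point it at wherever your Spring Boot app actually looks for its config, e.g. a <code>config/</code> directory next to the jar):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-file
data:
  application.properties: |
    com.example.dockerkubernetes.property.value=12345789abde
</code></pre> <p>and in the Deployment's pod template:</p> <pre><code>    spec:
      containers:
        - name: app
          image: kubernetes-cluster-test-image
          volumeMounts:
            - name: app-props
              mountPath: /config/application.properties
              subPath: application.properties
      volumes:
        - name: app-props
          configMap:
            name: app-config-file
</code></pre>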
<p>I want to create an nginx load balancer in Kubernetes.</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ecom-ingress
spec:
  defaultBackend:
    service:
      name: ecom-ui-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /identity
            pathType: ImplementationSpecific
            backend:
              service:
                name: ecom-identity-service
                port:
                  number: 80
          - path: /catalog
            pathType: ImplementationSpecific
            backend:
              service:
                name: ecom-catalog-service
                port:
                  number: 80
</code></pre> <p>So I have created the services and pods on the cluster, and now I want to create an ingress to access my services using rewrites:</p> <ul> <li>http://ip-of-kubernetes (main page)</li> <li>http://ip-of-kubernetes/catalog (list products)</li> <li>http://ip-of-kubernetes/catalog/112 (single product)</li> </ul> <p>When I apply this ingress yml file with &quot;kubectl apply -f ingress.yml&quot;, it creates a GCE load balancer in the Ingress tab of the Google console, like the following, and I cannot access the catalog endpoint; it returns the nginx 404 page. <a href="https://i.stack.imgur.com/KuMd2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KuMd2.png" alt="enter image description here" /></a></p> <p>If I use the following helm command in the Google management console, it creates a load balancer in the Services tab.</p> <pre><code>helm install nginx-ingress nginx-stable/nginx-ingress --set rbac.create=true
</code></pre> <p><a href="https://i.stack.imgur.com/UxfIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UxfIp.png" alt="enter image description here" /></a></p> <p>So I either want to create an nginx load balancer in the Services tab, or I want to use rewrite rules with the GCE load balancer.</p> <p>How can I specify my ingress yml file for this purpose? I also want the rewrite so my services can use query strings etc.</p>
<p>In this <a href="https://github.com/GoogleCloudPlatform/gke-networking-recipes/tree/main/ingress/single-cluster/ingress-nginx" rel="nofollow noreferrer">link</a> one of the use case is rewrite request URI before sending it to application.</p> <p>There is also information about yaml deployment in ingress and in service that you can use as a guidance in your deployment.</p>
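<p>For reference, a hedged sketch of what the rewrite usually looks like with the community <code>kubernetes/ingress-nginx</code> controller (service names are taken from the question; the capture-group pattern is that controller's standard rewrite idiom). Note that the <code>nginx-stable/nginx-ingress</code> Helm chart from the question installs NGINX Inc's controller, which uses its own <code>nginx.org/*</code> annotations instead, so adjust accordingly:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ecom-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /catalog(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: ecom-catalog-service
                port:
                  number: 80
          - path: /identity(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: ecom-identity-service
                port:
                  number: 80
</code></pre> <p>With <code>ingressClassName: nginx</code> (or the equivalent class annotation) set, GKE's built-in GCE ingress controller should ignore this Ingress, and traffic flows through the nginx controller's own LoadBalancer Service instead.</p>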
<p>I am using below command to patch new storage to volumeclaimtemplate:</p> <pre><code> minikube kubectl -- --namespace default patch pvc elasticsearch-data-elasticsearch-data-0 --patch '{\&quot;spec\&quot;: {\&quot;volumeClaimTemplate\&quot;: {\&quot;requests\&quot;: {\&quot;storage\&quot;: \&quot;2Gi\&quot;}}}}' </code></pre> <p>But, I am getting the below error:</p> <pre><code>error: unable to parse &quot;'{\&quot;spec\&quot;:&quot;: YAML: found unexpected end of stream. </code></pre> <p>Should I use another escape character instead of &quot;&quot;? Please Help.</p>
<p>Replacing single quote and escaping double quotes worked for me.</p> <p>Result is then :</p> <pre><code>--patch &quot;{\&quot;spec\&quot;: {\&quot;volumeClaimTemplate\&quot;: {\&quot;requests\&quot;: {\&quot;storage\&quot;: \&quot;2Gi\&quot;}}}}&quot; </code></pre>
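<p>For reference, the full command with that quoting would look something like the following (the patch body simply mirrors the one from the question; as an aside, a PVC's size is normally changed through <code>spec.resources.requests.storage</code>, since <code>volumeClaimTemplates</code> belong to the StatefulSet rather than to the PVC):</p> <pre><code>minikube kubectl -- --namespace default patch pvc elasticsearch-data-elasticsearch-data-0 \
  --patch &quot;{\&quot;spec\&quot;: {\&quot;volumeClaimTemplate\&quot;: {\&quot;requests\&quot;: {\&quot;storage\&quot;: \&quot;2Gi\&quot;}}}}&quot;
</code></pre>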
<p>I have the following microservice based node.js program: <code>posts</code> is responsible for creating a post and sending the corresponding event to the <code>event bus</code> via an axios <code>POST</code> request. The <code>event bus</code> also establishes connection to <code>posts</code> via an axios <code>GET</code> request upon startup. The <code>GET</code> request from <code>event bus</code> to <code>posts</code> is successful, but the <code>POST</code> request from <code>posts</code> to <code>event bus</code> fails with the error <code>getaddrinfo ENOTFOUND event-bus-srv</code>. Both <code>posts</code> and <code>event bus</code> are running in pods managed by kubernetes within the same node. Only <code>posts</code> is available to the user vis the URL <code>http://localhost:31218/</code>. When I use postman to post to the endpoint <code>http://localhost:31218/posts</code>, I receive code 201 successfully, but the event is never forwarded to <code>event bus</code>. My question is this: what is causing <code>getaddrinfo ENOTFOUND event-bus-srv</code> and how do I fix it?</p> <p>Relavent code:</p> <ol> <li>posts index.js</li> <li>event bus index.js</li> <li>EVENT BUS KUBERNETES DEPLOYMENT FILE</li> <li>POSTS KUBERNETES DEPLOYMENT FILE</li> <li>POSTS KUBERNETES NODEPORT FILE</li> </ol> <pre><code>_____________________________________POSTS___________________________________________ const express = require('express'); const { randomBytes } = require('crypto'); const bodyParser = require('body-parser'); const cors = require('cors'); const axios = require('axios'); const app = express(); app.use(cors()); app.use(bodyParser.json()); const posts = {}; app.get('/posts', (req, res) =&gt; { res.send(posts); }); app.post('/posts', async(req, res) =&gt; { const id = randomBytes(4).toString('hex'); const { title } = req.body; posts[id] = { id, title }; await axios.post('http://event-bus-srv:4005/events', { type: 'PostCreated', data: { id, title } }).catch((err) =&gt; { console.log(&quot;сбой такой: &quot; + err.message); }); res.status(201).send(posts[id]); }); app.post('/events', (req, res) =&gt; { res.status(200); }) app.listen(4000, () =&gt; { console.log('Listening to port ' + 4000); }); _____________________________________EVENT BUS___________________________________________ const express = require('express'); const bodyParser = require('body-parser'); const axios = require('axios'); const app = express(); app.use(bodyParser.json()); const events = []; app.get('/events', (req, res) =&gt; { res.send(events); }); app.post('/events', (req, res) =&gt; { const event = req.body; events.push(event); axios.post('http://posts-clusterip-svr:4000/events', event).catch((err) =&gt; { console.log(err.message); }); res.send({status: 'OK'}); }); app.listen(4005, async() =&gt; { console.log(&quot;listening to &quot; + 4005); const res = await axios.get('http://posts-clusterip-svr:4000/posts').catch((err) =&gt; { console.log(&quot;сбой такой: &quot; + err.message); }); }); ___________EVENT BUS KUBERNETES DEPLOYMENT/SERVICE FILE_________________________________ apiVersion: apps/v1 kind: Deployment metadata: name: event-bus-depl spec: replicas: 1 selector: matchLabels: app: event-bus template: metadata: labels: app: event-bus spec: containers: - name: event-bus image: wliamwhite/event-bus:latest --- apiVersion: v1 kind: Service metadata: name: event-bus-svr spec: selector: app: event-bus type: ClusterIP ports: - name: event-bus protocol: TCP port: 4005 targetPort: 4005 ___________POSTS KUBERNETES 
DEPLOYMENT/SERVICE FILE_____________________________________ apiVersion: apps/v1 kind: Deployment metadata: name: posts-depl spec: replicas: 1 selector: matchLabels: app: posts template: metadata: labels: app: posts spec: containers: - name: posts image: wliamwhite/posts:latest --- apiVersion: v1 kind: Service metadata: name: posts-clusterip-svr spec: type: ClusterIP selector: app: posts ports: - name: posts protocol: TCP port: 4000 targetPort: 4000 ___________POSTS KUBERNETES NODEPORT FILE_________________________________________ apiVersion: v1 kind: Service metadata: name: posts-svr spec: type: NodePort selector: app: posts ports: - name: posts protocol: TCP port: 4000 targetPort: 4000 </code></pre>
<p>Aside from the <code>event-bus-srv</code> vs <code>event-bus-svr</code> name confusion, the error message you are getting typically indicates that DNS resolution of the hostname failed.</p> <p>Things to consider for this concern (see the sketch after this list):</p> <ol> <li>Just like @DavidMaze mentioned, double-check the hostname and make sure it is spelled correctly.</li> <li>Also double-check the configuration files and code where the hostname is used.</li> <li>Make sure the DNS servers are correctly configured. You can use a ping or nslookup command for testing.</li> <li>Make sure the hostname is properly configured in a DNS record and points to the right IP address.</li> <li>Check for firewall rules that might be blocking the connection to the hostname, and ensure the necessary ports are open.</li> </ol>
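<p>A quick way to test points 1–3 from inside the cluster (the image and pod name here are arbitrary choices for a throwaway debug pod):</p> <pre><code># try to resolve the Service name that the code actually uses
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup event-bus-svr

# list the Services that really exist, to compare spellings
kubectl get svc -o wide
</code></pre> <p>If <code>nslookup</code> only succeeds for <code>event-bus-svr</code>, the fix is simply to make the hostname in the axios call match the Service's <code>metadata.name</code>.</p>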
<p>I am working on setting up a victoria metrics cluster in minikube to monitor my servers.</p> <p>In this cluster I am using prometheus to send data to vminsert which further sends it to vmstorage and i have used vmselect to fetch the data and show it on grafana.</p> <p>Data flow is like =&gt;&gt;&gt; prometheus---&gt; vminsert ---&gt; vmstorage &lt;--- vmselect &lt;--- grafana</p> <p>Vminsert and vmselect is stateless while vmstorage is stateful. Initially I have 5 replicas of vmstorage. When I apply all the files everything is working fine and I am able to see the data on grafana.</p> <p>I have created a storageclass for this purpose.</p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: vmstorage-sc provisioner: k8s.io/minikube-hostpath reclaimPolicy: Retain </code></pre> <p>After that I have also created a pvc for this. This pvc is using the above storageclass. I have used this pvc in my vmstorage deployment.</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-vms namespace: vms spec: accessModes: - ReadWriteMany volumeMode: Filesystem resources: requests: storage: 50Mi storageClassName: vmstorage-sc </code></pre> <p>This is my vmstorage deployment file</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: vmstorage namespace: vms spec: serviceName: vmstorage replicas: 5 selector: matchLabels: app: vmstorage template: metadata: labels: app: vmstorage spec: containers: - name: vmstorage image: victoriametrics/vmstorage imagePullPolicy: &quot;IfNotPresent&quot; args: - -retentionPeriod=1 ports: - containerPort: 8400 name: vminsert - containerPort: 8401 name: vmselect volumeMounts: - name: mypvc mountPath: /victoria-metrics-data volumes: - name: mypvc persistentVolumeClaim: claimName: pvc-vms --- apiVersion: v1 kind: Service metadata: name: vmstorage namespace: vms spec: ports: - name: vminsert port: 8400 targetPort: 8400 - name: vmselect port: 8401 targetPort: 8401 selector: app: vmstorage </code></pre> <p>I am adding other files as well this is my vminsert file</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: vminsert namespace: vms spec: replicas: 4 selector: matchLabels: app: vminsert template: metadata: labels: app: vminsert spec: containers: - name: vminsert image: victoriametrics/vminsert args: - &quot;-maxConcurrentInserts=64&quot; - &quot;-insert.maxQueueDuration=5m&quot; - &quot;-replicationFactor=5&quot; # - &quot;-dedup.minScrapeInterval=1ms&quot; - -storageNode=vmstorage-0.vmstorage.vms.svc.cluster.local:8400 - -storageNode=vmstorage-1.vmstorage.vms.svc.cluster.local:8400 - -storageNode=vmstorage-2.vmstorage.vms.svc.cluster.local:8400 - -storageNode=vmstorage-3.vmstorage.vms.svc.cluster.local:8400 - -storageNode=vmstorage-4.vmstorage.vms.svc.cluster.local:8400 ports: - containerPort: 8480 name: http-insert --- apiVersion: v1 kind: Service metadata: name: vminsert namespace: vms spec: ports: - name: http port: 8480 targetPort: 8480 selector: app: vminsert </code></pre> <p>And this is my vmselect file</p> <pre><code>apiVersion : apps/v1 kind : Deployment metadata : name : vmselect namespace : vms spec : replicas : 2 selector : matchLabels : app : vmselect template : metadata : labels : app : vmselect spec : containers : - name : vmselect image : victoriametrics/vmselect args : - -search.maxUniqueTimeseries=1000000 - -storageNode=vmstorage-2.vmstorage.vms.svc.cluster.local:8401 - -storageNode=vmstorage-1.vmstorage.vms.svc.cluster.local:8401 # - 
-storageNode=vmstorage-3.vmstorage.vms.svc.cluster.local:8401 ports : - containerPort : 8481 name : http-select --- apiVersion: v1 kind: Service metadata: name: vmselect namespace: vms spec: ports: - name: http port: 8481 targetPort: 8481 selector: app: vmselect </code></pre> <p><strong>All the pods are having same data.</strong> Because of the replication factor I mentioned in vminsert file.</p> <p>But the problem arises <strong>when I increase the number of replicas of vmstorage. When I make it 6, a new pod is created but it has no data</strong>, it is just a fresh pod without any data. Or <strong>if I delete a pod, a new pod automatically creates but this pod also have no data.</strong></p> <p>I want to achieve two things</p> <ol> <li>Whenever I increase the number of replica, the new pod should have same data as other pods.</li> <li>If somehow a pod gets restarted then that pod should also have same data as others.</li> </ol> <p>In this setup I need to store the data of at least 1 month. So it is very important for me that if any of pod restarts or I scale up the number of pods then data should exists .</p>
<p>Based on your vminsert file, a recommendation is to increase your</p> <pre><code>&quot;-replicationFactor=5&quot;
</code></pre> <p>to 6 and then add another</p> <pre><code>-storageNode=vmstorage-x.vmstorage.vms.svc.cluster.local:8400
</code></pre> <p>This may fix the concern of a new pod being created without data.</p>
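<p>As a sketch, the vminsert args for a 6-replica vmstorage StatefulSet would then look like this fragment (following the naming pattern already used in the question):</p> <pre><code>        args:
          - &quot;-maxConcurrentInserts=64&quot;
          - &quot;-insert.maxQueueDuration=5m&quot;
          - &quot;-replicationFactor=6&quot;
          - -storageNode=vmstorage-0.vmstorage.vms.svc.cluster.local:8400
          - -storageNode=vmstorage-1.vmstorage.vms.svc.cluster.local:8400
          - -storageNode=vmstorage-2.vmstorage.vms.svc.cluster.local:8400
          - -storageNode=vmstorage-3.vmstorage.vms.svc.cluster.local:8400
          - -storageNode=vmstorage-4.vmstorage.vms.svc.cluster.local:8400
          - -storageNode=vmstorage-5.vmstorage.vms.svc.cluster.local:8400
</code></pre>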
<p>I'm running <code>nvidia/cuda:11.8.0-base-ubuntu20.04</code> on Google Kubernetes Engine using GPU Timesharing on T4 gpus</p> <p>Checking the driver capabilities I get compute and utility. I was hoping to also get graphics and video. Is this a limitation of Timesharing on GKE?</p>
<p>It should let you use the resources for graphics and video; however, time-sharing GPUs are ideal for workloads that are not using a high amount of the resources all the time.</p> <p><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/timesharing-gpus#limitations" rel="nofollow noreferrer">Limitations</a> of using time-sharing GPUs on GKE:</p> <ul> <li>GKE enforces memory (address space) isolation, performance isolation, and fault isolation between containers that share a physical GPU. However, memory limits aren't enforced on time-shared GPUs. To avoid running into out-of-memory (OOM) issues, set GPU memory limits in your applications. To avoid security issues, only deploy workloads that are in the same trust boundary to time-shared GPUs.</li> <li>GKE might reject certain time-shared GPU requests to prevent unexpected behavior during capacity allocation.</li> <li>The maximum number of containers that can share a single physical GPU is 48. When planning your time-sharing configuration, consider the resource needs of your workloads and the capacity of the underlying physical GPUs to optimize your performance and responsiveness.</li> </ul>
<p>Hope you are doing fine.</p> <p>I got this error:</p> <p><code>error converting YAML to JSON: yaml: line 33: found character that cannot start any token</code></p> <p>while trying to deploy this CronJob on my k8s cluster. Can you please check and let me know if you have any clues about the reason for this error?</p> <p>The file is as follows:</p> <pre><code>---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: resourcecleanup
spec:
  # 10:00 UTC == 1200 CET
  schedule: '0 10 * * 1-5'
  jobTemplate:
    spec:
      template:
        metadata:
          annotations:
            iam.amazonaws.com/role: arn:aws:iam::%%AWS_ACCOUNT_NUMBER%%:role/k8s/pod/id_ResourceCleanup
        spec:
          containers:
            - name: resourcecleanup
              image: cloudcustodian/c7n
              args:
                - run
                - -v
                - -s
                - /tmp
                - -f
                - /tmp/.cache/cloud-custodian.cache
                - /home/custodian/delete-unused-ebs-volumes-policies.yaml
              volumeMounts:
                - name: cleanup-policies
                  mountPath: /home/custodian/delete-unused-ebs-volumes-policies.yaml
                  subPath: delete-unused-ebs-volumes-policies.yaml
              env:
                - name: AWS_DEFAULT_REGION
                  value: %%AWS_REGION%%
          volumes:
            - name: cleanup-policies
              configMap:
                name: cleanup-policies
          restartPolicy: Never
---
</code></pre>
<p>The problem could be from your indentation method, Try using spaces and not tabs for your indentation. Use 2 spaces for each indentation. Hope this helps.</p>
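<p>A quick way to pinpoint exactly which character the parser is complaining about is to validate the manifest locally before applying it (the file name <code>cronjob.yaml</code> below is a placeholder). As a related possibility, an unquoted value starting with <code>%</code> (such as the <code>%%AWS_REGION%%</code> placeholder before substitution, around line 33) is a classic trigger for this exact &quot;cannot start any token&quot; message, so quoting it can also help:</p> <pre><code># client-side validation reports YAML syntax errors with line numbers
kubectl apply --dry-run=client -f cronjob.yaml

# or parse it with any YAML tool, e.g. (requires PyYAML):
python3 -c 'import yaml; yaml.safe_load(open(&quot;cronjob.yaml&quot;))'
</code></pre> <pre><code>            env:
              - name: AWS_DEFAULT_REGION
                value: &quot;%%AWS_REGION%%&quot;   # quoted so '%' is not read as the start of a token
</code></pre>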
<p><code>rke --debug up --config cluster.yml</code></p> <p>fails with health checks on etcd hosts with the error:</p> <blockquote> <p>DEBU[0281] [etcd] failed to check health for etcd host [x.x.x.x]: failed to get /health for host [x.x.x.x]: Get &quot;https://x.x.x.x:2379/health&quot;: remote error: tls: bad certificate</p> </blockquote> <p>Checking the etcd healthchecks (running on that master node):</p> <pre><code>for endpoint in $(docker exec etcd /bin/sh -c &quot;etcdctl member list | cut -d, -f5&quot;); do
  echo &quot;Validating connection to ${endpoint}/health&quot;;
  curl -w &quot;\n&quot; --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) &quot;${endpoint}/health&quot;;
done

Validating connection to https://x.x.x.x:2379/health
{&quot;health&quot;:&quot;true&quot;}
Validating connection to https://x.x.x.x:2379/health
{&quot;health&quot;:&quot;true&quot;}
Validating connection to https://x.x.x.x:2379/health
{&quot;health&quot;:&quot;true&quot;}
Validating connection to https://x.x.x.x:2379/health
{&quot;health&quot;:&quot;true&quot;}
</code></pre> <p>You can run it manually and see if it responds correctly:</p> <pre><code>curl -w &quot;\n&quot; --cacert /etc/kubernetes/ssl/kube-ca.pem --cert /etc/kubernetes/ssl/kube-etcd-x-x-x-x.pem --key /etc/kubernetes/ssl/kube-etcd-x-x-x-x-key.pem https://x.x.x.x:2379/health
</code></pre> <p>Checking my self-signed certificate hashes:</p> <pre><code># md5sum /etc/kubernetes/ssl/kube-ca.pem
f5b358e771f8ae8495c703d09578eb3b  /etc/kubernetes/ssl/kube-ca.pem
# for key in $(cat /home/kube/cluster.rkestate | jq -r '.desiredState.certificatesBundle | keys[]'); do echo $(cat /home/kube/cluster.rkestate | jq -r --arg key $key '.desiredState.certificatesBundle[$key].certificatePEM' | sed '$ d' | md5sum) $key; done | grep kube-ca
f5b358e771f8ae8495c703d09578eb3b  - kube-ca
</code></pre> <p>Versions on my master node:</p> <pre><code>Debian GNU/Linux 10
rke version v1.3.1
docker version Version: 20.10.8
kubectl v1.21.5 v1.21.5-rancher1-1
</code></pre> <p>I think my <code>cluster.rkestate</code> has gone bad. Are there any other locations where the rke tool checks for certificates? Currently I cannot do anything with this production cluster, and I want to avoid downtime. I experimented with different scenarios on a testing cluster; as a last resort I could recreate the cluster from scratch (<code>rke remove</code> &amp;&amp; <code>rke up</code>), but maybe I can still fix it...</p>
<p><code>rke util get-state-file</code> helped me to reconstruct the bad <code>cluster.rkestate</code> file, and I was able to successfully <code>rke up</code> and add a new master node to fix the whole situation.</p>
<p>I have obtained a cert from name.com.</p> <pre class="lang-sh prettyprint-override"><code>➜ tree .
.
├── ca.crt
├── vpk.crt
├── vpk.csr
└── vpk.key
</code></pre> <p><strong>How I created the secrets</strong></p> <p>I added the ca.crt content at the end of the vpk.crt file.</p> <pre class="lang-sh prettyprint-override"><code>(⎈ | vpk-dev-eks:argocd) ➜ k create secret tls tls-secret --cert=vpk.crt --key=vpk.key --dry-run -o yaml | kubectl apply -f -
(⎈ | vpk-dev-eks:argocd) ➜ kubectl create secret generic ca-secret --from-file=ca.crt=ca.crt --dry-run -o yaml | kubectl apply -f -
</code></pre> <p><strong>This is my ingress:</strong></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: websockets-ingress
  namespace: development
  annotations:
    kubernetes.io/ingress.class: &quot;nginx&quot;
    nginx.ingress.kubernetes.io/proxy-send-timeout: &quot;3600&quot;
    nginx.ingress.kubernetes.io/proxy-read-timeout: &quot;3600&quot;
    # Enable client certificate authentication
    nginx.ingress.kubernetes.io/auth-tls-verify-client: &quot;optional_no_ca&quot;
    # Create the secret containing the trusted ca certificates
    nginx.ingress.kubernetes.io/auth-tls-secret: &quot;development/ca-secret&quot;
    # Specify the verification depth in the client certificates chain
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: &quot;1&quot;
    # Specify if certificates are passed to upstream server
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: &quot;true&quot;
    argocd.argoproj.io/sync-wave: &quot;10&quot;
spec:
  tls:
    - hosts:
        - backend-dev.project.com
      secretName: tls-secret
  rules:
    - host: backend-dev.project.com
      http:
        paths:
          - path: /ws/
            backend:
              serviceName: websockets-service
              servicePort: 443
</code></pre> <p>The cert is properly validated, I can connect via various CLI WebSocket clients, and <a href="https://www.ssllabs.com/ssltest" rel="nofollow noreferrer">https://www.ssllabs.com/ssltest</a> gives me &quot;A+&quot;.</p> <p>However, if I set</p> <p><code>nginx.ingress.kubernetes.io/auth-tls-verify-client: &quot;on&quot;</code></p> <p>then everything stops working and I get a 400 error on the nginx ingress controller side (pod logs).</p> <p><strong>I am confused by the official docs:</strong></p> <p>The optional_no_ca parameter (1.3.8, 1.2.5) requests the client certificate but does not require it to be signed by a trusted CA certificate. This is intended for the use in cases when a service that is external to nginx performs the actual certificate verification. The contents of the certificate is accessible through the $ssl_client_cert variable.</p> <h3>So what exactly is &quot;optional_no_ca&quot; doing, and why does &quot;on&quot; fail the requests?</h3>
<p><strong>Optional_no_ca</strong> does the optional client certificate validation and it does not fail the request when the client certificate is not signed by the CAs from <strong>auth-tls-secret</strong>. Even after specifying the optional_no_ca parameter, it is necessary to provide the client certificate. As mentioned in the document <a href="http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_trusted_certificate" rel="nofollow noreferrer">1</a>, the actual certificate verification is done when the service is external to Nginx.</p> <p>When you set <strong>nginx.ingress.kubernetes.io/auth-tls-verify-client:on</strong>, it requests a client certificate that must be signed by a certificate that is included in the secret key <strong>ca.crt</strong> of the secret specified by <strong>nginx.ingress.kubernetes.io/auth-tls-secret: secretName</strong>.</p> <p>If not so, then certificate verification will fail and result in a status code 400 (Bad Request). Check <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#client-certificate-authentication" rel="nofollow noreferrer">this</a> for further information.</p>
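<p>To see the difference in practice, you can hit the endpoint with and without a client certificate (the host comes from the question; the certificate file names are placeholders):</p> <pre><code># without a client certificate: accepted under optional_no_ca,
# rejected with 400 when auth-tls-verify-client is &quot;on&quot;
curl -k https://backend-dev.project.com/ws/

# with a client certificate signed by the CA stored in development/ca-secret
curl -k --cert client.crt --key client.key https://backend-dev.project.com/ws/
</code></pre>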
<p>I just installed from scratch a small Kubernetes test cluster in a 4 Armbian/Odroid_MC1 (Debian 10) nodes. The install process is this <a href="https://github.com/rodolfoap/odroid-k8s-armbian/blob/master/README.md" rel="nofollow noreferrer">1</a>, nothing fancy or special, adding k8s apt repo and install with apt.</p> <p>The problem is that the <strong>API server</strong> dies constantly, like every 5 to 10 minutes, after the <strong>controller-manager</strong> and the <strong>scheduler</strong> die together, who seem to stop simultaneously before. Evidently, the API becomes unusable for like a minute. All three services do restart, and things run fine for the next four to nine minutes, when the loop repeats. Logs are here <a href="https://gist.github.com/rodolfoap/f14a8974ad96314ab162bd97478c9198" rel="nofollow noreferrer">2</a>. This is an excerpt:</p> <pre><code>$ kubectl get pods -o wide --all-namespaces The connection to the server 192.168.1.91:6443 was refused - did you specify the right host or port? (a minute later) $ kubectl get pods -o wide --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kube-system coredns-74ff55c5b-8pm9r 1/1 Running 2 88m 10.244.0.7 mc1 &lt;none&gt; &lt;none&gt; kube-system coredns-74ff55c5b-pxdqz 1/1 Running 2 88m 10.244.0.6 mc1 &lt;none&gt; &lt;none&gt; kube-system etcd-mc1 1/1 Running 2 88m 192.168.1.91 mc1 &lt;none&gt; &lt;none&gt; kube-system kube-apiserver-mc1 0/1 Running 12 88m 192.168.1.91 mc1 &lt;none&gt; &lt;none&gt; kube-system kube-controller-manager-mc1 1/1 Running 5 31m 192.168.1.91 mc1 &lt;none&gt; &lt;none&gt; kube-system kube-flannel-ds-fxg2s 1/1 Running 5 45m 192.168.1.94 mc4 &lt;none&gt; &lt;none&gt; kube-system kube-flannel-ds-jvvmp 1/1 Running 5 48m 192.168.1.92 mc2 &lt;none&gt; &lt;none&gt; kube-system kube-flannel-ds-qlvbc 1/1 Running 6 45m 192.168.1.93 mc3 &lt;none&gt; &lt;none&gt; kube-system kube-flannel-ds-ssb9t 1/1 Running 3 77m 192.168.1.91 mc1 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-7t9ff 1/1 Running 2 45m 192.168.1.93 mc3 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-8jhc7 1/1 Running 2 88m 192.168.1.91 mc1 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-cg75m 1/1 Running 2 45m 192.168.1.94 mc4 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-mq8j7 1/1 Running 2 48m 192.168.1.92 mc2 &lt;none&gt; &lt;none&gt; kube-system kube-scheduler-mc1 1/1 Running 5 31m 192.168.1.91 mc1 &lt;none&gt; &lt;none&gt; $ docker ps -a # (check the exited and restarted services) CONTAINER ID NAMES STATUS IMAGE NETWORKS PORTS 0e179c6495db k8s_kube-apiserver_kube-apiserver-mc1_kube-system_c55114bd57b1bf357c8f4c0d749ae105_13 Up About a minute 66eaad223e2c 2ccb014beb73 k8s_kube-scheduler_kube-scheduler-mc1_kube-system_fe362b2b6b08ca576b7416df7f2e7845_6 Up 3 minutes 21e17680ca2d 3322f6ec1546 k8s_kube-controller-manager_kube-controller-manager-mc1_kube-system_17cf17caf36ba27e3d2ec4f113a0cf6f_6 Up 3 minutes a1ab72ce4ba2 583129da455f k8s_kube-apiserver_kube-apiserver-mc1_kube-system_c55114bd57b1bf357c8f4c0d749ae105_12 Exited (137) About a minute ago 66eaad223e2c 72268d8e1503 k8s_install-cni_kube-flannel-ds-ssb9t_kube-system_dbf3513d-dad2-462d-9107-4813acf9c23a_0 Exited (0) 5 minutes ago 263b01b3ca1f fe013d07f186 k8s_kube-controller-manager_kube-controller-manager-mc1_kube-system_17cf17caf36ba27e3d2ec4f113a0cf6f_5 Exited (255) 3 minutes ago a1ab72ce4ba2 34ef8757b63d k8s_kube-scheduler_kube-scheduler-mc1_kube-system_fe362b2b6b08ca576b7416df7f2e7845_5 Exited (255) 3 minutes ago 21e17680ca2d 
fd8e0c0ba27f k8s_coredns_coredns-74ff55c5b-8pm9r_kube-system_3b813dc9-827d-4cf6-88cc-027491b350f1_2 Up 32 minutes 15c1a66b013b f44e2c45ed87 k8s_coredns_coredns-74ff55c5b-pxdqz_kube-system_c3b7fbf2-2064-4f3f-b1b2-dec5dad904b7_2 Up 32 minutes 15c1a66b013b 04fa4eca1240 k8s_POD_coredns-74ff55c5b-8pm9r_kube-system_3b813dc9-827d-4cf6-88cc-027491b350f1_42 Up 32 minutes k8s.gcr.io/pause:3.2 none f00c36d6de75 k8s_POD_coredns-74ff55c5b-pxdqz_kube-system_c3b7fbf2-2064-4f3f-b1b2-dec5dad904b7_42 Up 32 minutes k8s.gcr.io/pause:3.2 none a1d6814e1b04 k8s_kube-flannel_kube-flannel-ds-ssb9t_kube-system_dbf3513d-dad2-462d-9107-4813acf9c23a_3 Up 32 minutes 263b01b3ca1f 94b231456ed7 k8s_kube-proxy_kube-proxy-8jhc7_kube-system_cc637e27-3b14-41bd-9f04-c1779e500a3a_2 Up 33 minutes 377de0f45e5c df91856450bd k8s_POD_kube-flannel-ds-ssb9t_kube-system_dbf3513d-dad2-462d-9107-4813acf9c23a_2 Up 34 minutes k8s.gcr.io/pause:3.2 host b480b844671a k8s_POD_kube-proxy-8jhc7_kube-system_cc637e27-3b14-41bd-9f04-c1779e500a3a_2 Up 34 minutes k8s.gcr.io/pause:3.2 host 1d4a7bcaad38 k8s_etcd_etcd-mc1_kube-system_14b7b6d6446e21cc57f0b40571ae3958_2 Up 35 minutes 2e91dde7e952 e5d517a9c29d k8s_POD_kube-controller-manager-mc1_kube-system_17cf17caf36ba27e3d2ec4f113a0cf6f_1 Up 35 minutes k8s.gcr.io/pause:3.2 host 3a3da7dbf3ad k8s_POD_kube-apiserver-mc1_kube-system_c55114bd57b1bf357c8f4c0d749ae105_2 Up 35 minutes k8s.gcr.io/pause:3.2 host eef29cdebf5f k8s_POD_etcd-mc1_kube-system_14b7b6d6446e21cc57f0b40571ae3958_2 Up 35 minutes k8s.gcr.io/pause:3.2 host 3631d43757bc k8s_POD_kube-scheduler-mc1_kube-system_fe362b2b6b08ca576b7416df7f2e7845_1 Up 35 minutes k8s.gcr.io/pause:3.2 host </code></pre> <p>I see no weird issues on the logs (I'm a k8s beginner). This was working until a month ago, when I've reinstalled this for practicing, this is probably my tenth install attempt, I've tried different options, versions and googled a lot, but can't find no solution.</p> <p>What could be the reason? What else can I try? How can I get to the root of the problem?</p> <h1>UPDATE 2021/02/06</h1> <p>The problem is not occurring anymore. Apparently, the issue was the version in this specific case. Didn't filed an issue because I didn't found clues regarding what specific issue to report.</p> <p>The installation procedure in all cases was this:</p> <pre><code># swapoff -a # curl -sL get.docker.com|sh # usermod -aG docker rodolfoap # curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - # echo &quot;deb http://apt.kubernetes.io/ kubernetes-xenial main&quot; &gt; /etc/apt/sources.list.d/kubernetes.list # apt-get update # apt-get install -y kubeadm kubectl kubectx # Master # kubeadm config images pull # kubeadm init --apiserver-advertise-address=0.0.0.0 --pod-network-cidr=10.244.0.0/16 </code></pre> <ul> <li>Armbian-20.08.1 worked fine. My installation procedure has not changed since.</li> <li>Armbian-20.11.3 had the issue: the API, scheduler and coredns restarted every 5 minutes, blocking the access to the API 5 of each 8 minutes, average..</li> <li>Armbian-21.02.1 works fine. 
Worked at the first install, same procedure.</li> </ul> <p>All versions were updated to the last kernel, at the moment of the install, current is 5.10.12-odroidxu4.</p> <p>As you can see, after around two hours, no API reboots:</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE LABELS kube-system coredns-74ff55c5b-gnvf2 1/1 Running 0 173m 10.244.0.2 mc1 k8s-app=kube-dns,pod-template-hash=74ff55c5b kube-system coredns-74ff55c5b-wvnnz 1/1 Running 0 173m 10.244.0.3 mc1 k8s-app=kube-dns,pod-template-hash=74ff55c5b kube-system etcd-mc1 1/1 Running 0 173m 192.168.1.91 mc1 component=etcd,tier=control-plane kube-system kube-apiserver-mc1 1/1 Running 0 173m 192.168.1.91 mc1 component=kube-apiserver,tier=control-plane kube-system kube-controller-manager-mc1 1/1 Running 0 173m 192.168.1.91 mc1 component=kube-controller-manager,tier=control-plane kube-system kube-flannel-ds-c4jgv 1/1 Running 0 123m 192.168.1.93 mc3 app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node kube-system kube-flannel-ds-cl6n5 1/1 Running 0 75m 192.168.1.94 mc4 app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node kube-system kube-flannel-ds-z2nmw 1/1 Running 0 75m 192.168.1.92 mc2 app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node kube-system kube-flannel-ds-zqxh7 1/1 Running 0 150m 192.168.1.91 mc1 app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node kube-system kube-proxy-bd596 1/1 Running 0 75m 192.168.1.94 mc4 controller-revision-hash=b89db7f56,k8s-app=kube-proxy,pod-template-generation=1 kube-system kube-proxy-n6djp 1/1 Running 0 75m 192.168.1.92 mc2 controller-revision-hash=b89db7f56,k8s-app=kube-proxy,pod-template-generation=1 kube-system kube-proxy-rf4cr 1/1 Running 0 173m 192.168.1.91 mc1 controller-revision-hash=b89db7f56,k8s-app=kube-proxy,pod-template-generation=1 kube-system kube-proxy-xhl95 1/1 Running 0 123m 192.168.1.93 mc3 controller-revision-hash=b89db7f56,k8s-app=kube-proxy,pod-template-generation=1 kube-system kube-scheduler-mc1 1/1 Running 0 173m 192.168.1.91 mc1 component=kube-scheduler,tier=control-plane </code></pre> <p>Cluster is fully functional :)</p>
<p>I have the same problem, but with Ubuntu:</p> <pre><code>PRETTY_NAME=&quot;Ubuntu 22.04 LTS&quot;
NAME=&quot;Ubuntu&quot;
VERSION_ID=&quot;22.04&quot;
VERSION=&quot;22.04 (Jammy Jellyfish)&quot;
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL=&quot;https://www.ubuntu.com/&quot;
SUPPORT_URL=&quot;https://help.ubuntu.com/&quot;
BUG_REPORT_URL=&quot;https://bugs.launchpad.net/ubuntu/&quot;
PRIVACY_POLICY_URL=&quot;https://www.ubuntu.com/legal/terms-and-policies/privacy-policy&quot;
UBUNTU_CODENAME=jammy
</code></pre> <p>The cluster works fine on:</p> <ul> <li>Ubuntu 20.04 LTS</li> <li>Ubuntu 18.04 LTS</li> </ul> <p>Hopefully this helps someone else who is running Ubuntu instead of Armbian. The solution for Ubuntu (possibly for Armbian too) is here: <a href="https://stackoverflow.com/questions/72567945/issues-with-stability-with-kubernetes-cluster-before-adding-networking">Issues with &quot;stability&quot; with Kubernetes cluster before adding networking</a></p> <p>Apparently it is a problem with the config of containerd on those versions.</p> <p><strong>UPDATE:</strong></p> <p>The problem is that if you use <code>sudo apt install containerd</code>, you will install <strong>version v1.5.9</strong>, which ships with the option <code>SystemdCgroup = false</code>. That worked, in my case, on Ubuntu 20.04, but it doesn't work on Ubuntu 22.04. If you change it to <code>SystemdCgroup = true</code>, it works (this default was updated in <strong>containerd v1.6.2</strong> so that it is set to true); see the sketch below. This will hopefully fix your problem too.</p>
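<p>A sketch of the usual change (the file path and section name below are the containerd defaults; if the config file does not exist yet, generate it first):</p> <pre><code># generate a full default config if /etc/containerd/config.toml is missing
containerd config default | sudo tee /etc/containerd/config.toml

# flip the runc cgroup driver to systemd, i.e. set SystemdCgroup = true under
# [plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd.runtimes.runc.options]
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

sudo systemctl restart containerd
sudo systemctl restart kubelet
</code></pre>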
<p>I have an OpenShift namespace (<code>SomeNamespace</code>), and in that namespace I have several pods.</p> <p>I have a route associated with that namespace (<code>SomeRoute</code>).</p> <p>In one of the pods I have my Spring application. It has REST controllers.</p> <p>I want to send a message to that REST controller; how can I do it?</p> <p>I have a route URL: <code>https://some.namespace.company.name</code>. What should I find next?</p> <p>I tried to send requests to <code>https://some.namespace.company.name/rest/api/route</code> but it didn't work. I guess I must somehow specify the pod in my URL, so the route will redirect requests to a concrete pod, but I don't know how to do that.</p>
<p>You don't need to specify the pod in the route.</p> <p>The chain goes like this:</p> <ul> <li><code>Route</code> exposes a given port of a <code>Service</code></li> <li><code>Service</code> selects some pod to route the traffic to by its <code>.spec.selector</code> field</li> </ul> <p>You need to check your <code>Service</code> and <code>Route</code> definitions.</p> <p>Example service and route (including only the related parts of the resources):</p> <p><code>Service</code></p> <pre><code>spec: ports: - name: 8080-tcp port: 8080 protocol: TCP targetPort: 8080 selector: &lt;label-key&gt;: &lt;label-value&gt; </code></pre> <p>Where <code>label-key</code> and <code>label-value</code> is any label key-value combination that selects your pod with the http application.</p> <p><code>Route</code></p> <pre><code>spec: port: targetPort: 8080-tcp &lt;port name of the service&gt; to: kind: Service name: &lt;service name&gt; </code></pre> <p>When your app exposes some endpoint on <code>:8080/rest/api</code>, you can invoke it with <code>&lt;route-url&gt;/rest/api</code></p> <p>You can try it out with some example application (some I found randomly on github, to verify everything works correctly on your cluster):</p> <ul> <li><p>create a new app using s2i build from github repository: <code>oc new-app registry.redhat.io/openjdk/openjdk-11-rhel7~https://github.com/redhat-cop/spring-rest</code></p> </li> <li><p>wait until the s2i build is done and the pod is started</p> </li> <li><p>expose the service via route: <code>oc expose svc/spring-rest</code></p> </li> <li><p>grab the route url: <code>oc get route spring-rest -o jsonpath='{.spec.host}'</code></p> </li> <li><p>invoke the api: <code>curl -k &lt;route-url&gt;/v1/greeting</code></p> </li> <li><p>response should be something like: <code>{&quot;id&quot;:3,&quot;content&quot;:&quot;Hello, World!&quot;}</code></p> </li> </ul>
<p>I am trying to mount dags folder to be able to run a python script inside of the KubernetesPodOperator on Aiflow, but can't figure out how to do it. In production I would like to do it in Google Composer. Here is my task:</p> <pre><code>kubernetes_min_pod = KubernetesPodOperator( task_id='pod-ex-minimum', cmds=[&quot;bash&quot;, &quot;-c&quot;], arguments=[&quot;cd /usr/local/tmp&quot;], namespace='default', image='toru2220/scrapy-chrome:latest', is_delete_operator_pod=True, get_logs=True, in_cluster=False, volumes=[ Volume(&quot;my-volume&quot;, {&quot;persistentVolumeClaim&quot;: {&quot;claimName&quot;: &quot;my-volume&quot;}}) ], volume_mounts=[ VolumeMount(&quot;my-volume&quot;, &quot;/usr/local/tmp&quot;, sub_path=None, read_only=False) ], ) </code></pre> <p>I am trying to understand what is the easiest way to mount the current folder where dag is?</p>
<p>As per this <a href="https://cloud.google.com/composer/docs/composer-2/cloud-storage" rel="nofollow noreferrer">doc</a>, when you create an environment, Cloud Composer creates a Cloud Storage bucket and associates the bucket with your environment. The name of the bucket is based on the environment region, name, and a random ID such as “us-central1-b1-6efannnn-bucket”. Cloud Composer stores the source code for your workflows (DAGs) and their dependencies in specific folders in Cloud Storage and uses <a href="https://cloud.google.com/storage/docs/gcs-fuse" rel="nofollow noreferrer">Cloud Storage FUSE</a> to map the folders to the Airflow instances in your Cloud Composer environment.</p> <p>The Cloud Composer runs on top of a GKE cluster with all the DAGs, tasks, and services running on a single node pool. As per your requirement, you are trying to mount a DAGs folder in your code, which is already mounted in the Airflow pods under “<strong>/home/airflow/gcs/dags</strong>” path. Please refer to this <a href="https://cloud.google.com/composer/docs/how-to/using/using-kubernetes-pod-operator#begin" rel="nofollow noreferrer">doc</a> for more information about KubernetesPodOperator in Cloud Composer.</p>
<p>The whole cluster consists of 3 nodes and everything seems to run correctly:</p> <pre><code>$ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE default ingress-nginx-controller-5c8d66c76d-wk26n 1/1 Running 0 12h ingress-nginx-2 ingress-nginx-2-controller-6bfb65b8-9zcjm 1/1 Running 0 12h kube-system calico-kube-controllers-684bcfdc59-2p72w 1/1 Running 1 (7d11h ago) 7d11h kube-system calico-node-4zdwr 1/1 Running 2 (5d10h ago) 7d11h kube-system calico-node-g5zt7 1/1 Running 0 7d11h kube-system calico-node-x4whm 1/1 Running 0 7d11h kube-system coredns-8474476ff8-jcj96 1/1 Running 0 5d10h kube-system coredns-8474476ff8-v5rvz 1/1 Running 0 5d10h kube-system dns-autoscaler-5ffdc7f89d-9s7rl 1/1 Running 2 (5d10h ago) 7d11h kube-system kube-apiserver-node1 1/1 Running 2 (5d10h ago) 7d11h kube-system kube-controller-manager-node1 1/1 Running 3 (5d10h ago) 7d11h kube-system kube-proxy-2x8fg 1/1 Running 2 (5d10h ago) 7d11h kube-system kube-proxy-pqqv7 1/1 Running 0 7d11h kube-system kube-proxy-wdb45 1/1 Running 0 7d11h kube-system kube-scheduler-node1 1/1 Running 3 (5d10h ago) 7d11h kube-system nginx-proxy-node2 1/1 Running 0 7d11h kube-system nginx-proxy-node3 1/1 Running 0 7d11h kube-system nodelocaldns-6mrqv 1/1 Running 2 (5d10h ago) 7d11h kube-system nodelocaldns-lsv8x 1/1 Running 0 7d11h kube-system nodelocaldns-pq6xl 1/1 Running 0 7d11h kubernetes-dashboard dashboard-metrics-scraper-856586f554-6s52r 1/1 Running 0 4d11h kubernetes-dashboard kubernetes-dashboard-67484c44f6-gp8r5 1/1 Running 0 4d11h </code></pre> <p>The Dashboard service works fine as well:</p> <pre><code>$ kubectl get svc -n kubernetes-dashboard NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dashboard-metrics-scraper ClusterIP 10.233.20.30 &lt;none&gt; 8000/TCP 4d11h kubernetes-dashboard ClusterIP 10.233.62.70 &lt;none&gt; 443/TCP 4d11h </code></pre> <p>What I did recently, was creating an Ingress to expose the Dashboard to be available globally:</p> <pre><code>$ cat ingress.yml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: dashboard spec: defaultBackend: service: name: kubernetes-dashboard port: number: 443 </code></pre> <p>After applying the configuration above, it looks like it works correctly:</p> <pre><code>$ kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE dashboard &lt;none&gt; * 80 10h </code></pre> <p>However, trying to access the Dashboard on any of the URLs below, both http and https, returns <code>Connection Refused</code> error:</p> <pre><code>https://10.11.12.13/api/v1/namespaces/kube-system/services/kube-dns/proxy https://10.11.12.13/api/v1/ https://10.11.12.13/ </code></pre> <p>What did I miss in this configuration? Additional comment: I don't want to assign any domain to the Dashboard, at the moment it's OK to access its IP address.</p>
<p>Ingress is a <strong>namespaced</strong> resource, and the kubernetes-dashboard pod is located in the &quot;kubernetes-dashboard&quot; namespace.</p> <p>So you need to move the ingress to the &quot;kubernetes-dashboard&quot; namespace.</p> <p>To list all namespaced k8s resources:</p> <pre><code>kubectl api-resources --namespaced=true </code></pre>
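<p>As a sketch, the Ingress from the question only needs its metadata moved into that namespace. Since the dashboard service serves HTTPS on 443, the nginx annotation <code>nginx.ingress.kubernetes.io/backend-protocol</code> may also be needed; treat that as an assumption about your setup:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot;
spec:
  defaultBackend:
    service:
      name: kubernetes-dashboard
      port:
        number: 443
</code></pre>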
<p>Following is my application deployment yaml file</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: sharemarket-crud-deployment spec: selector: matchLabels: app: sharemarket-k8s-sb-service replicas: 2 template: metadata: labels: app: sharemarket-k8s-sb-service spec: containers: - name: sharemarket-k8s-sb-service-container image: joy999/shareserviceproj:release06 ports: - containerPort: 8080 env: # Setting Enviornmental Variables - name: DB_HOST # Setting Database host address from configMap valueFrom : configMapKeyRef : name : db-config key : host - name: DB_NAME # Setting Database name from configMap valueFrom : configMapKeyRef : name : db-config key : dbName - name: DB_USERNAME # Setting Database username from Secret valueFrom : secretKeyRef : name : mysql-secrets key : username - name: DB_PASSWORD # Setting Database password from Secret valueFrom : secretKeyRef : name : mysql-secrets key : password --- apiVersion: v1 # Kubernetes API version kind: Service # Kubernetes resource kind we are creating metadata: # Metadata of the resource kind we are creating name: springboot-sb-service-svc spec: selector: app: springboot-k8s-sb-service ports: - protocol: &quot;TCP&quot; port: 8080 # The port that the service is running on in the cluster targetPort: 8080 # The port exposed by the service type: NodePort # type of the service. </code></pre> <p>I can see pods are created successfully, services are also good. Database is also good with table created.</p> <p><a href="https://i.stack.imgur.com/OURng.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OURng.jpg" alt="enter image description here" /></a></p> <p>The exposed port service shows as 30119 but if I POST or GET the request from postman, I am getting fallowing error all the time:</p> <p>POST <a href="http://192.168.99.100:30119/stock" rel="nofollow noreferrer">http://192.168.99.100:30119/stock</a> Error: connect ETIMEDOUT 192.168.99.100:30119</p> <p>GET <a href="http://192.168.99.100:30119/stock/1" rel="nofollow noreferrer">http://192.168.99.100:30119/stock/1</a> Error: connect ETIMEDOUT 192.168.99.100:30119</p> <p>Can anyone please help to troubleshoot the issue.</p>
<p>You need to pin the nodePort explicitly as well, so you can reach this service from outside the cluster. Your GET request tries to reach port 30119, but you only exposed 8080; make sure 30119 is set in the service spec:</p> <pre><code>type: NodePort
ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30119
</code></pre> <p>Another option is a LoadBalancer service, using its endpoint:</p> <pre><code>kubectl get endpoints -A </code></pre>
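<p>Put together, a sketch of the full Service from the question with the nodePort pinned (this assumes the selector matches the Deployment's pod label <code>app: sharemarket-k8s-sb-service</code>, which is what the Deployment template in the question uses):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: springboot-sb-service-svc
spec:
  type: NodePort
  selector:
    app: sharemarket-k8s-sb-service
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30119
</code></pre>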
<p>I have a VueJS app running out of a Docker image in kubernetes. As soon as there is more than one replica / pod the client cannot load the app - many, but not all, calls to load files return a 404.</p> <p>I assume that is because they are sent to a different pod than the one originally servicing the request.</p> <p>How can this be fixed?</p> <p>This is my setup:</p> <ul> <li>VueJS app (node.js-Server) running from a Docker image in kubernetes.</li> <li>Service and endpoint in kubernetes above that.</li> <li>nginx ingress in kubernetes as the next outward layer (see below).</li> <li>haproxy firewall such that myapp.mydomain.com/ gets routed to the ingress on k8s.</li> </ul> <p>This is an example call which gets a 404 returned: GET <a href="https://myapp.mydomain.com/js/chunk-d18c0136.7a3f0664.js" rel="nofollow noreferrer">https://myapp.mydomain.com/js/chunk-d18c0136.7a3f0664.js</a></p> <p>This is my service spec:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: ${CI_PROJECT_NAME}-${CI_BUILD_REF_SLUG_SHORT} labels: app: ${CI_ENVIRONMENT_SLUG} spec: ports: - port: 80 targetPort: 8080 protocol: TCP name: ${CI_PROJECT_NAME}-${CI_BUILD_REF_SLUG} selector: app: ${CI_ENVIRONMENT_SLUG} </code></pre> <p>This is my nginx ingress spec:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ${CI_PROJECT_NAME}-${CI_BUILD_REF_SLUG_SHORT} labels: app: ${CI_ENVIRONMENT_SLUG} annotations: kubernetes.io/ingress.class: &quot;nginx&quot; nginx.ingress.kubernetes.io/affinity: &quot;cookie&quot; nginx.ingress.kubernetes.io/proxy-connect-timeout: &quot;30&quot; nginx.ingress.kubernetes.io/proxy-send-timeout: &quot;3600&quot; nginx.ingress.kubernetes.io/proxy-read-timeout: &quot;3600&quot; spec: defaultBackend: service: name: ${CI_PROJECT_NAME}-${CI_BUILD_REF_SLUG_SHORT} port: number: 80 rules: - host: ${CI_APPLICATION_HOST} http: paths: - path: / pathType: Prefix backend: service: name: ${CI_PROJECT_NAME}-${CI_BUILD_REF_SLUG_SHORT} port: number: 80 </code></pre> <p>As a workaround we've configured the firewall to speak directly with only one pod, or running only one replica.</p> <p>Setting session-stickyness &quot;cookie&quot; on the nginx ingress doesn't work.</p>
<p>Sorry, this was a complete red herring.</p> <p>In the end, the problem was a typo in the external proxy routing - only two of six nodes were correctly configured and reachable. That's why &quot;most&quot; requests returned a 404 - the node couldn't be found.</p>
<p>Please let me know what is my mistake!</p> <p>Used this command to backup AWS EKS cluster using velero tool but it's not working :</p> <pre><code>./velero.exe install --provider aws --bucket backup-archive/eks-cluster-backup/prod-eks-cluster/ --secret-file ./minio.credentials --use-restic --backup-location-config region=minio,s3ForcePathStyle=true,s3Url=s3Url=s3://backup-archive/eks-cluster-backup/prod-eks-cluster/ --kubeconfig ../kubeconfig-prod-eks --plugins velero/velero-plugin-for-aws:v1.0.0 </code></pre> <p>cat minio.credentials</p> <pre><code>[default] aws_access_key_id=xxxx aws_secret_access_key=yyyyy/zzzzzzzz region=ap-southeast-1 </code></pre> <p>Getting Error:</p> <pre><code>../kubectl.exe --kubeconfig=../kubeconfig-prod-eks.txt logs deployment/velero -n velero time=&quot;2020-12-09T09:07:12Z&quot; level=error msg=&quot;Error getting backup store for this location&quot; backupLocation=default controller=backup-sync error=&quot;backup storage location's bucket name \&quot;backup-archive/eks-cluster-backup/\&quot; must not contain a '/' (if using a prefix, put it in the 'Prefix' field instead)&quot; error.file=&quot;/go/src/github.com/vmware-tanzu/velero/pkg/persistence/object_store.go:110&quot; error.function=github.com/vmware-tanzu/velero/pkg/persistence.NewObjectBackupStore logSource=&quot;pkg/controller/backup_sync_controller.go:168&quot; </code></pre> <p>Note: I have tried --bucket backup-archive but still no use</p>
<p>Provide S3 full access to the IAM role that is annotated on the Velero service account.</p>
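<p>A rough sketch of doing that with the AWS CLI (the role name is a placeholder; use the role from the <code>eks.amazonaws.com/role-arn</code> annotation on the service account):</p> <pre><code># check which role the Velero service account is annotated with
kubectl -n velero describe sa velero

# attach the AWS managed S3 full-access policy to that role
aws iam attach-role-policy \
  --role-name &lt;velero-role-name&gt; \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
</code></pre>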
<p>I'm trying to run in kubernetes mern application (<a href="https://github.com/ibrahima92/fullstack-typescript-mern-todo/" rel="nofollow noreferrer">https://github.com/ibrahima92/fullstack-typescript-mern-todo/</a>). I have a client and a server container, and I need to replace the path to the url client in the backend, so I defined variables in the backend code, but they don't replace the values of the variables from the manifest files. There are variables inside the container, but the backend does not use them. I tried such options 1. ${FRONT_URL}, ${process.env.FRONT_URL}, process.env.FRONT_URL. If I directly insert the URL of the service with the port number in backend code then everything works. How to correctly define variables in a container?</p> <p><strong>I need replace http://localhost:${PORT} to url of service from K8S and the same thing need to do with ${MONGO_URL}</strong></p> <pre><code>import express, { Express } from 'express' import mongoose from 'mongoose' import cors from 'cors' import todoRoutes from './routes' const app: Express = express() const PORT: string | number = process.env.PORT || 4000 app.use(cors()) app.use(todoRoutes) const uri: string = `mongodb://${MONGO_URL}?retryWrites=true&amp;w=majority` const options = { useNewUrlParser: true, useUnifiedTopology: true } mongoose.set('useFindAndModify', false) mongoose .connect(uri, options) .then(() =&gt; app.listen(PORT, () =&gt; console.log(`Server running on http://localhost:${PORT}`) ) ) .catch((error) =&gt; { throw error }) </code></pre> <p><strong>Manifest</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: todo-server-app-deploy spec: replicas: 1 selector: matchLabels: app: todo-server-app template: metadata: labels: app: todo-server-app spec: containers: - image: repo/todo-server-app:24 name: container1 ports: - containerPort: 4000 env: - name: FRONT_URL value: a1ecab155236d4c7fba8b0c6a1b6ad2b-549550669.us-east-1.elb.amazonaws.com:80 - name: MONGO_URL value: todo-mongo-service:27017 imagePullPolicy: IfNotPresent </code></pre>
<p>You can create a <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">config map</a>, giving your container runtime variables, or alternatively build your own Docker image using <code>ENV</code>.</p> <p>You can also achieve that using kustomization.</p> <ul> <li>kustomization.yml</li> </ul> <pre><code>secretGenerator:
  - name: my-secret
    behavior: create
    env: .env
</code></pre> <ul> <li>Deployment.yml</li> </ul> <pre><code>envFrom:
  - secretRef:
      name: my-secret
</code></pre>
<p>Does Argo CD return metrics such as target version? The bottom line is that for each application object in 2 different clusters, we would like to compare versions of all entities described in the application manifest. And if there are any differences, send an alert about the inconsistency of the clusters.</p>
<p>There are several built-in metrics in Argo CD which can help you monitor your applications, including the sync status and target revision of each application in the Kubernetes cluster.</p> <p>ArgoCD exposes these metrics in <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/metrics/" rel="nofollow noreferrer">Prometheus format</a>, which can be scraped and visualized using a variety of monitoring tools.</p> <p>By collecting the metrics from both clusters you can identify differences in the target version, and you can use <a href="https://prometheus.io/docs/alerting/latest/alertmanager/" rel="nofollow noreferrer">Prometheus Alertmanager</a> to automate the alerts.</p>
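<p>As a sketch of the alerting side (assuming the <code>argocd_app_info</code> metric is being scraped; the alert name, threshold and duration are placeholders), a Prometheus rule like this fires when any application drifts from its target state:</p> <pre><code>groups:
  - name: argocd
    rules:
      - alert: ArgoCdAppOutOfSync
        expr: argocd_app_info{sync_status!=&quot;Synced&quot;} &gt; 0
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: &quot;Application {{ $labels.name }} is out of sync with its target revision&quot;
</code></pre>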
<p>I'm using zerolog in golang, which outputs json formatted log, the app is running on k8s, and has cri-o format as following. <a href="https://i.stack.imgur.com/M1qMe.png" rel="nofollow noreferrer">actual log screenshot on Grafana loki</a></p> <p>My question is, since there's some non-json text prepended to my json log, I can't seem to effectively query the log, one example is, when I tried to pipe the log into logfmt, exceptions were thrown.</p> <p>What I want is to be able to query into the sub field of the json. My intuition is to maybe for each log, only select the parts from <code>{</code> (start of the json), then maybe I can do more interesting manipulation. I'm a bit stuck and not sure what's the best way to proceed.</p> <p>Any help and comments is appreciated.</p>
<p>After some head scratching, the problem is solved.</p> <p>I'm directly using the promtail setup from here: <a href="https://raw.githubusercontent.com/grafana/loki/master/tools/promtail.sh" rel="nofollow noreferrer">https://raw.githubusercontent.com/grafana/loki/master/tools/promtail.sh</a></p> <p>Within this setup, the default parser is docker, but it needs to be changed to <code>cri</code>. Afterwards, the logs are properly parsed as JSON in my Grafana dashboard.</p>
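<p>For illustration, a minimal sketch of what that change looks like in a promtail scrape config (the <code>json</code> stage and its field names are an assumption about the zerolog output; adjust them to your log keys):</p> <pre><code>scrape_configs:
  - job_name: kubernetes-pods
    pipeline_stages:
      - cri: {}        # strips the CRI prefix (timestamp, stream, flags)
      - json:
          expressions:
            level: level
            message: message
</code></pre>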
<p>I am currently trying to enable and configure audit logs in a k3s cluster. Currently, I am using k3d to set up my k3s cluster. Is there a way how to configure the audit logging?</p> <p>I know you can parse k3s server args when creating a cluster with k3d. So, I tried it with this:</p> <pre><code>k3d cluster create test-cluster --k3s-server-arg '--kube-apiserver-arg=audit-log-path=/var/log/kubernetes/apiserver/audit.log' --k3s-server-arg '--kube-apiserver-arg=audit-policy-file=/etc/kubernetes/audit-policies/policy.yaml' </code></pre> <p>The obvious problem is that the audit policy does not exist at the cluster until now. Thus it crashes when creating the cluster.</p> <p>Also tried it, with setting up the cluster using:</p> <pre><code>k3d cluster create test-cluster --k3s-server-arg '--kube-apiserver-arg=audit-log-path=/var/log/kubernetes/apiserver/audit.log' </code></pre> <p>Then ssh onto the master node, created the policy file in the wanted dir, but then I cannot find a way to set the cluster variable <code>audit-log-path</code> to this directory. And thus, the policies will not apply.</p> <p>Doing this with minikube is quite simple (since it is also documented), but I couldn't get it to work with k3d - There is also nothing regards to this in the docs. But I am sure, there has to be a way how to configure audit-logs on k3s, without using a third-party-app like <code>Falco</code>.</p> <p>Has someone an idea of how to solve the problem? Or want to share some experiences doing similar?</p>
<p>I used the following command to create a cluster with audit-log functionality. I used volumes to provide the policy file to the cluster. I think it requires both the audit-policy-file and audit-log-path flags to be set.</p> <pre><code>k3d cluster create test-cluster \
  --k3s-arg '--kube-apiserver-arg=audit-policy-file=/var/lib/rancher/k3s/server/manifests/audit.yaml@server:*' \
  --k3s-arg '--kube-apiserver-arg=audit-log-path=/var/log/kubernetes/audit/audit.log@server:*' \
  --volume &quot;$(pwd)/audit/audit.yaml:/var/lib/rancher/k3s/server/manifests/audit.yaml&quot;
</code></pre>
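<p>For completeness, a minimal <code>audit/audit.yaml</code> policy that logs request metadata for everything could look like this (a sketch; tighten the rules for real use):</p> <pre><code>apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # log metadata (user, verb, resource, timestamp) for all requests
  - level: Metadata
</code></pre>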
<p>I'm developing a NATS based solution with deterministic subject partitioning, I use this type of mapping:</p> <blockquote> <p>service.* --&gt; service.*.&lt;number of partition&gt;</p> </blockquote> <p>Now I need a way to subscribe only one of my replicas per partition, what's the right way to do that?</p> <p>I was thinking about K8s ordinal index, but all the replicas should be stateless.</p>
<p>One way to ensure that only one replica processes each message for a partition is to use a queue group subscription in NATS. When multiple subscribers are part of the same queue group, only one of them will receive each message. This allows you to ensure that only one replica per partition processes a given message at a time.</p> <p>Example:</p> <ol> <li><p>Assign a unique identifier to each replica, such as the pod name or another unique identifier.</p> </li> <li><p>If there are multiple subscribers in the queue group, NATS will distribute messages to them in a round-robin fashion.</p> </li> <li><p>If there is only one replica subscribed in the queue group, then it will receive all the messages for the partition.</p> </li> </ol> <p>With this approach only one replica per partition will receive each message, and even if a replica goes down, NATS will automatically redistribute messages to the remaining members of the group.</p> <p>For more information please check this <a href="https://docs.nats.io/using-nats/developer/receiving/queues" rel="nofollow noreferrer">official page</a>.</p>
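<p>As a quick sketch with the <code>nats</code> CLI (assuming the natscli tool is available; the queue group name is a placeholder), each replica responsible for partition 1 would subscribe like this, and NATS then delivers each message to only one member of the group:</p> <pre><code># run this in every replica responsible for partition 1
nats subscribe &quot;service.*.1&quot; --queue partition-1-workers
</code></pre>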
<p>I have airflow 1.10.5 on a Kubernetes cluster.</p> <p>The DAGs are written with Kubernetes operator so that they can spin pods for each task inside the DAG on execution, on the k8s cluster.</p> <p>I have 10 worker nodes.</p> <p>The pods created by airflow are being created on the same node, where airflow is running. When many pods have to spin up, they all are queued on the same node, which makes many pod failures due to lack of resources on the node.</p> <p>At the same time, all other 9 nodes are being used very less, as we have huge load only for the airflow jobs.</p> <p>How to make the airflow to use all the worker nodes of the k8s cluster?</p> <p>I do not use any of node affinity or node selector.</p>
<p>Solved this little 'issue' by attaching <code>affinity</code> to worker pods manually in the Helm chart, as suggested by the document: <a href="https://airflow.apache.org/docs/helm-chart/stable/parameters-ref.html#workers" rel="nofollow noreferrer">Airflow Helm-Chart</a>.</p> <pre class="lang-yaml prettyprint-override"><code>workers:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchLabels:
                component: worker
            topologyKey: kubernetes.io/hostname
          weight: 100
</code></pre> <p>The Airflow Helm Chart <code>values.yaml</code> defines this affinity and says that it will be the default for workers.</p> <pre class="lang-yaml prettyprint-override"><code>  affinity: {}
  # default worker affinity is:
  #  podAntiAffinity:
  #    preferredDuringSchedulingIgnoredDuringExecution:
  #      - podAffinityTerm:
  #          labelSelector:
  #            matchLabels:
  #              component: worker
  #          topologyKey: kubernetes.io/hostname
  #        weight: 100
</code></pre> <p>But it fails to mention that this default only applies to the worker Deployment under <code>CeleryExecutor</code> or <code>CeleryKubernetesExecutor</code> in <code>worker-deployment.yaml</code>, not to worker pods in general.</p> <pre><code>...
################################
## Airflow Worker Deployment
#################################
{{- $persistence := .Values.workers.persistence.enabled }}
{{- if or (eq .Values.executor &quot;CeleryExecutor&quot;) (eq .Values.executor &quot;CeleryKubernetesExecutor&quot;) }}
...
</code></pre> <p>So if you do want to spread out your worker pods more, you need to add this affinity (or other custom affinity) to your worker pod template, which can be done through the Helm values.yaml.</p> <p>Though I don't think this will be considered an 'issue', as most likely that particular node is free enough, so Kubernetes keeps scheduling worker pods to it. When the system load goes high, Kubernetes will spread out worker pods. And having worker pods on the same node might reduce the network traffic between nodes in some cases.</p> <p>But in my case, when all worker pods are scheduled to the same node, the pod initialization latency is higher than with a distributed workload. So I decided to spread them out across the cluster.</p>
<p>I have Elasticsearch Data pods that are currently running on an AKS and are connected to Persistent Volumes that is using a Premium SSD Managed Disk Storage Class and I want to downgrade it to Standard SSD Managed Disk without losing the data I have on the currently used Persistent Volume. I've created a new Storage Class that is defined with Standard SSD Managed Disk, but if I create a new PV from that it obviously doesn't keep the old data and I need to copy it somehow, so I was wondering what would be best practice switching PV's Storage Class.</p>
<p>Unfortunately, once a PVC is created and a PV is provisioned for it, the only thing you can change without creating a new one is the volume's size</p> <p>The only straightforward way I could think of without leveraging CSI snapshots/clones, which you might not have access to (depends on how you created PVCs/PVs AFAIK), would be to create a new PVC and mount both volumes on a Deployment whose Pod has root access and the <code>rsync</code> command.</p> <p>Running <code>rsync -a /old/volume/mount/path /new/volume/mount/path</code> on such a Pod should get you what you want.</p> <p>However, you should make sure that you do so <strong>BEFORE</strong> deleting PVCs or any other resource using your PVs. By default, most of the default storage classes create volumes with reclaim policies that immediately delete the PV as soon as all resources using it are gone, so there's a small risk of data loss</p>
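<p>A sketch of what that could look like as a one-off Pod (the PVC names and the alpine image are placeholders, and installing rsync this way assumes the pod has internet access):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pv-migrate
spec:
  restartPolicy: Never
  containers:
    - name: migrate
      image: alpine:3.17
      command:
        - sh
        - -c
        - apk add --no-cache rsync &amp;&amp; rsync -a /old/ /new/
      volumeMounts:
        - name: old-volume
          mountPath: /old
        - name: new-volume
          mountPath: /new
  volumes:
    - name: old-volume
      persistentVolumeClaim:
        claimName: premium-ssd-pvc
    - name: new-volume
      persistentVolumeClaim:
        claimName: standard-ssd-pvc
</code></pre> <p>Once the Pod completes, the new PVC can be attached to the Elasticsearch data pods in place of the old one.</p>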
<p>Is there a way to configure Kubernetes to work with HTTP/3 and QUIC?</p> <p><a href="https://quicwg.org/" rel="nofollow noreferrer">QUIC protocol</a> is a quite new transport protocol, explained in the <a href="https://www.rfc-editor.org/rfc/rfc9000.html" rel="nofollow noreferrer">RFC9000</a>, that many research highlight to mix the advantages of TCP and UDP protocols.</p> <p>The only thing I found is that NGINX is developing a branch <a href="https://quic.nginx.org/" rel="nofollow noreferrer">nginx-quic</a> to add the support for QUIC but is still on in beta version and I don't know how to try inside Kubernetes.</p> <p>I cannot find a solution to configure Services and Ingress, within a Kubernetes deployment, with QUIC.</p>
<p>This was the last update from the Roadmap for QUIC and HTTP/3 Support in NGINX <a href="https://www.nginx.com/blog/our-roadmap-quic-http-3-support-nginx/#comment-5884347500" rel="nofollow noreferrer">article</a> from about three months ago:</p> <blockquote> <p>The latest update for our QUIC+HTTP/3 implementation is the release of <a href="https://www.nginx.com/blog/binary-packages-for-preview-nginx-quic-http3-implementation/" rel="nofollow noreferrer">binary packages</a>. The plan is to merge the QUIC+HTTP/3 into the NGINX Open Source mainline branch in the next few months, but I don't have an exact date. To be sure you know as soon as the merge happens, you can subscribe to this <a href="https://mailman.nginx.org/mailman/listinfo/nginx-announce" rel="nofollow noreferrer">link</a></p> </blockquote>
<p>I am a newbie to this k8s ingress, please help. Current problem is I am trying to use k8s to create mongo-express service which inturn will connect to MongoDB service. Now when I tried to expose mongo-express to an external service by setting <code>type: LoadBalancer</code> it quickly creates up an IP and I am able to access my db via it. But same when I am trying to do expose it to a domain name via ingress I am not getting assigned an address</p> <p>Mongo DB Deployment :</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: mongo-deployments labels: app: mongodb spec: replicas: 3 selector: matchLabels: app: mongodb template: metadata: labels: app: mongodb spec: containers: - name: mongodb image: mongo ports: - containerPort: 27017 env: - name: MONGO_INITDB_ROOT_USERNAME valueFrom: secretKeyRef: name: mongo-secrets key: mongo-root-username - name: MONGO_INITDB_ROOT_PASSWORD valueFrom: configMapKeyRef: name: mongo-coi key: mongo-root-password </code></pre> <p>Mongo DB Service</p> <pre><code>apiVersion: v1 kind: Service metadata: name: mongodb-service spec: selector: app: mongodb ports: - protocol: TCP port : 27017 targetPort: 27017 </code></pre> <p>Mongo express deployment :</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: mongo-express-deployments labels: app: mongo-express spec: replicas: 1 selector: matchLabels: app: mongo-express template: metadata: labels: app: mongo-express spec: containers: - name: mongo-express image: mongo-express ports: - containerPort: 8081 env: - name: ME_CONFIG_MONGODB_ADMINUSERNAME valueFrom: secretKeyRef: name: mongo-secrets key: mongo-root-username - name: ME_CONFIG_MONGODB_ADMINPASSWORD valueFrom: secretKeyRef: name: mongo-secrets key: mongo-root-password - name: ME_CONFIG_MONGODB_SERVER valueFrom: configMapKeyRef: name: mongo-config-map key: database-url </code></pre> <p>Mongo express service file</p> <pre><code>apiVersion: v1 kind: Service metadata: name: mongo-express-service spec: selector: app: mongo-express # type: LoadBalancer ports: - protocol: TCP port : 8081 targetPort: 8081 # nodePort : 31313 </code></pre> <p>mongo ingress file :</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: mongo-express-dashboard namespace: default annotations: kubernetes.io/ingress.class: public nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: mongo-jainva.com http: paths: - path: / pathType: Prefix backend: service: name: mongo-express-service port: number: 8081 </code></pre> <p>I am attaching how my current k8s structure looks like :</p> <pre><code>NAME READY STATUS RESTARTS AGE pod/mongo-deployments-66764645fc-jpjf2 1/1 Running 0 3d2h pod/mongo-express-deployments-976669559-l4s7h 1/1 Running 0 3d NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 40d service/mongo-express-service ClusterIP 10.109.215.229 &lt;none&gt; 8081/TCP 3d service/mongodb-service ClusterIP 10.103.145.203 &lt;none&gt; 27017/TCP 3d2h NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/mongo-deployments 1/1 1 1 3d2h deployment.apps/mongo-express-deployments 1/1 1 1 3d1h NAME DESIRED CURRENT READY AGE replicaset.apps/mongo-deployments-66764645fc 1 1 1 3d2h replicaset.apps/mongo-express-deployments-59f9544cf7 0 0 0 3d1h replicaset.apps/mongo-express-deployments-854b7f8b86 0 0 0 3d replicaset.apps/mongo-express-deployments-976669559 1 1 1 3d </code></pre> <p>Attaching current ingress status</p> <pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE mongo-express-dashboard 
&lt;none&gt; mongo-jainva.com 80 33m </code></pre> <p>And also sharing description of ingres file</p> <pre><code>Name: mongo-express-dashboard Labels: &lt;none&gt; Namespace: default Address: Ingress Class: &lt;none&gt; Default backend: &lt;default&gt; Rules: Host Path Backends ---- ---- -------- mongo-jainva.com / mongo-express-service:8081 (172.17.0.4:8081) Annotations: kubernetes.io/ingress.class: public nginx.ingress.kubernetes.io/rewrite-target: / Events: &lt;none&gt; </code></pre> <p><strong>NOTE</strong> - I have already tried mentioned IP <code>172.17.0.4:8081</code> its not working and also I am not able to understand what IP address it is specifying</p>
<p>Thank you to everyone who responded; I have figured out the problem. The problem is that in the ingress file I mentioned 2 ingress classes. Somehow the first one, <code>kubernetes.io/ingress.class: public</code>, was creating the problem; after removing it I was able to see the address.</p> <p>For now I am closing this question, but I still have one question, if anyone can kindly answer. When I use the annotation shown below to route my mongo-express in the ingress file, all the JS and other styling files are not served with mongo-express, so I can see the mongo-express buttons but with a completely ruined UI.</p> <pre><code>nginx.ingress.kubernetes.io/rewrite-target: /$1 </code></pre> <p>But if I change the above line to</p> <pre><code>nginx.ingress.kubernetes.io/rewrite-target: / </code></pre> <p>it gives me a clean and stable UI. What does this <code>$1</code> specify that causes the behaviour I am seeing?</p>
<p>I have an AKS cluster, as well as a separate VM. AKS cluster and the VM are in the same VNET (as well as subnet).</p> <p>I deployed a echo server with the following yaml, I'm able to directly curl the pod with vnet ip from the VM. But when trying that with load balancer, nothing returns. Really not sure what I'm missing. Any help is appreciated.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: echo-server annotations: service.beta.kubernetes.io/azure-load-balancer-internal: &quot;true&quot; spec: type: LoadBalancer ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: echo-server --- apiVersion: apps/v1 kind: Deployment metadata: name: echo-deployment spec: replicas: 1 selector: matchLabels: app: echo-server template: metadata: labels: app: echo-server spec: containers: - name: echo-server image: ealen/echo-server ports: - name: http containerPort: 8080 </code></pre> <p>The following pictures demonstrate the situation <a href="https://i.stack.imgur.com/b9KFj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b9KFj.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/oG4dF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oG4dF.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/kQEUM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kQEUM.png" alt="enter image description here" /></a></p> <p>I'm expecting that when curl the vnet ip from load balancer, to receive the same response as I did directly curling the pod ip</p>
<p>Found the solution, the only thing I need to do is to add the following to the Service declaration:</p> <pre><code>externalTrafficPolicy: 'Local' </code></pre> <p>Full yaml as below</p> <pre><code>apiVersion: v1 kind: Service metadata: name: echo-server annotations: service.beta.kubernetes.io/azure-load-balancer-internal: &quot;true&quot; spec: type: LoadBalancer externalTrafficPolicy: 'Local' ports: - port: 80 protocol: TCP targetPort: 80 selector: app: echo-server </code></pre> <p>previously it was set to 'Cluster'.</p> <hr /> <p>Just got off with azure support, seems like a specific bug on this (it happens with newer version of the AKS), posting the related link here: <a href="https://github.com/kubernetes/ingress-nginx/issues/8501" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/8501</a></p>
<p>I have deployed strimzi kafka on kubernetes and I have kube setup in my local as well. But each time I want a new topic in kafka, I need to create one by importing the yaml file via rancher and provide the topic name.</p> <p>Is there a way to create a kafka topic directly via kubectl command?</p> <p>These are the commands I use to run kafka:</p> <p><code>Producer: kubectl run kafka-producer1 -ti --image=strimzi/kafka:0.18.0-kafka-2.4.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list 11.23.41.32:31025 --topic topic-name</code></p> <p><code>Consumer: kubectl run kafka-consumer1 -ti --image=strimzi/kafka:0.18.0-kafka-2.4.0 --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server 11.23.41.32:31025 --topic topic-name --from-beginning</code></p>
<p>Steps to create a topic via the command line (assuming ZooKeeper is running on port 2181 and the Kafka server on 9092):</p> <ol> <li>Get inside the Kafka pod by using this command</li> </ol> <p><code>kubectl exec -it kafka-pod-name -- /bin/bash</code></p> <ol start="2"> <li>Create the topic by using the command below</li> </ol> <p><code>kafka-topics --bootstrap-server localhost:9092 --create --topic topic-name --replication-factor 1 --partitions 3</code></p> <ol start="3"> <li>You can verify that messages are produced and consumed using the commands below</li> </ol> <p>a) produce --&gt;</p> <ol> <li><code>kafka-console-producer --broker-list localhost:9092 --topic &lt;topic-you-created-before&gt;</code></li> <li>provide some message</li> </ol> <p>b) consume --&gt; <code>kafka-console-consumer --bootstrap-server localhost:9092 --topic &lt;topic-you-created-before&gt; --from-beginning</code></p> <p>You can see the message (provided by the producer) here.</p>
<p>I built a little stack in a minikube cluster with:</p> <ul> <li>a Java Spring Boot application in a Pod/Container</li> <li>a Postgres database in a Pod/Container</li> </ul> <p>The Spring Boot application has to reach the Postgres container but it doesn't. The Postgres container seems to start correctly, but the Spring Boot container does not. At start, I have a message like:</p> <pre><code>Could not obtain connection to query metadata : HikariPool-1 - Connection is not available, request timed out after 30000ms. </code></pre> <p>So I'd like to learn how to debug this kind of problem, because I don't know how to deal with this error:</p> <ul> <li>is it a network issue (the pod can't be contacted)?</li> <li>is it a credentials issue?</li> <li>other?</li> </ul>
<p>Well, all is ok now. The mistake I made was that I believed I had built the new image, but I kept building it with the same tag. So, the image changed on DockerHub, but the cluster never pulled the new content of the image, because the tagged version had been cached.</p> <p>When building with a tag N+1, kubernetes then pulled the N+1 version and the pods were started successfully.</p> <p>My new rule: new docker build --&gt; new tag !!!</p>
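<p>In other words, something along these lines each time (the image, deployment and container names here are placeholders):</p> <pre><code># build and push under a NEW tag
docker build -t myrepo/spring-app:v2 .
docker push myrepo/spring-app:v2

# point the deployment at the new tag so the cluster actually pulls it
kubectl set image deployment/spring-app spring-app=myrepo/spring-app:v2
</code></pre>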
<p>I have a problem about running Spring Boot Microservices on Kubernetes. After I installed minikube, I started it and open its dashboard.</p> <p>Here is the commands to open dashboards.</p> <pre><code>1 ) minikube start 2 ) minikube dashboard </code></pre> <p>Next, I run all services through this command.</p> <pre><code>kubectl apply -f k8s </code></pre> <p>After waiting for a certain amount of time, I got this issue shown below.</p> <pre><code>15:22:37.395 [main] ERROR org.springframework.boot.SpringApplication - Application run failed org.springframework.cloud.config.client.ConfigClientFailFastException: Could not locate PropertySource and the resource is not optional, failing at org.springframework.cloud.config.client.ConfigServerConfigDataLoader.doLoad(ConfigServerConfigDataLoader.java:197) at org.springframework.cloud.config.client.ConfigServerConfigDataLoader.load(ConfigServerConfigDataLoader.java:102) at org.springframework.cloud.config.client.ConfigServerConfigDataLoader.load(ConfigServerConfigDataLoader.java:61) at org.springframework.boot.context.config.ConfigDataLoaders.load(ConfigDataLoaders.java:107) at org.springframework.boot.context.config.ConfigDataImporter.load(ConfigDataImporter.java:128) at org.springframework.boot.context.config.ConfigDataImporter.resolveAndLoad(ConfigDataImporter.java:86) at org.springframework.boot.context.config.ConfigDataEnvironmentContributors.withProcessedImports(ConfigDataEnvironmentContributors.java:116) at org.springframework.boot.context.config.ConfigDataEnvironment.processWithProfiles(ConfigDataEnvironment.java:311) at org.springframework.boot.context.config.ConfigDataEnvironment.processAndApply(ConfigDataEnvironment.java:232) at org.springframework.boot.context.config.ConfigDataEnvironmentPostProcessor.postProcessEnvironment(ConfigDataEnvironmentPostProcessor.java:102) at org.springframework.boot.context.config.ConfigDataEnvironmentPostProcessor.postProcessEnvironment(ConfigDataEnvironmentPostProcessor.java:94) at org.springframework.boot.env.EnvironmentPostProcessorApplicationListener.onApplicationEnvironmentPreparedEvent(EnvironmentPostProcessorApplicationListener.java:102) at org.springframework.boot.env.EnvironmentPostProcessorApplicationListener.onApplicationEvent(EnvironmentPostProcessorApplicationListener.java:87) at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:176) at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:169) at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:143) at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:131) at org.springframework.boot.context.event.EventPublishingRunListener.environmentPrepared(EventPublishingRunListener.java:85) at org.springframework.boot.SpringApplicationRunListeners.lambda$environmentPrepared$2(SpringApplicationRunListeners.java:66) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:120) at org.springframework.boot.SpringApplicationRunListeners.doWithListeners(SpringApplicationRunListeners.java:114) at org.springframework.boot.SpringApplicationRunListeners.environmentPrepared(SpringApplicationRunListeners.java:65) at 
org.springframework.boot.SpringApplication.prepareEnvironment(SpringApplication.java:344) at org.springframework.boot.SpringApplication.run(SpringApplication.java:302) at org.springframework.boot.SpringApplication.run(SpringApplication.java:1306) at org.springframework.boot.SpringApplication.run(SpringApplication.java:1295) at com.microservice.orderservice.OrderServiceApplication.main(OrderServiceApplication.java:15) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:65) Caused by: org.springframework.web.client.ResourceAccessException: I/O error on GET request for &quot;http://config-server-svc:9296/ORDER-SERVICE/default&quot;: connect timed out; nested exception is java.net.SocketTimeoutException: connect timed out at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:785) at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:711) at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:602) at org.springframework.cloud.config.client.ConfigServerConfigDataLoader.getRemoteEnvironment(ConfigServerConfigDataLoader.java:303) at org.springframework.cloud.config.client.ConfigServerConfigDataLoader.doLoad(ConfigServerConfigDataLoader.java:118) ... 35 common frames omitted Caused by: java.net.SocketTimeoutException: connect timed out at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) at java.base/java.net.Socket.connect(Socket.java:609) at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:177) at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:508) at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:603) at java.base/sun.net.www.http.HttpClient.&lt;init&gt;(HttpClient.java:276) at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:375) at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:396) at java.base/sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1253) at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1187) at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1081) at java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1015) at org.springframework.http.client.SimpleBufferingClientHttpRequest.executeInternal(SimpleBufferingClientHttpRequest.java:76) at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48) at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:66) at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:776) ... 
39 common frames omitted </code></pre> <p>Here is my deployment.yaml file shown below.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: auth-service-app spec: selector: matchLabels: app: auth-service-app template: metadata: labels: app: auth-service-app spec: containers: - name: auth-service-app image: noyandocker/authservice imagePullPolicy: IfNotPresent ports: - containerPort: 7777 env: - name: CONFIG_SERVER_URL valueFrom: configMapKeyRef: name: config-cm key: config_url - name: EUREKA_SERVER_ADDRESS valueFrom: configMapKeyRef: name: eureka-cm key: eureka_service_address - name: DB_HOST valueFrom: configMapKeyRef: name: mysql-cm key: hostname --- apiVersion: v1 kind: Service metadata: name: auth-service-svc spec: selector: app: auth-service-app ports: - port: 80 targetPort: 7777 </code></pre> <p>Here is the configmap yaml file shown below</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: config-cm data: config_url: &quot;config-server-svc&quot; --- apiVersion: v1 kind: ConfigMap metadata: name: eureka-cm data: eureka_service_address: &quot;http://eureka-0.eureka:8761/eureka&quot; --- apiVersion: v1 kind: ConfigMap metadata: name: mysql-cm data: hostname: &quot;mysql-0.mysql&quot; </code></pre> <p>Here is the config server yaml file shown below</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: config-server-app spec: selector: matchLabels: app: config-server-app template: metadata: labels: app: config-server-app spec: containers: - name: config-server-app image: noyandocker/configserver imagePullPolicy: IfNotPresent ports: - containerPort: 9296 readinessProbe: httpGet: path: /actuator/health port: 9296 initialDelaySeconds: 20 timeoutSeconds: 10 periodSeconds: 3 failureThreshold: 10 livenessProbe: httpGet: path: /actuator/health port: 9296 initialDelaySeconds: 30 timeoutSeconds: 2 periodSeconds: 8 failureThreshold: 10 env: - name: EUREKA_SERVER_ADDRESS valueFrom: configMapKeyRef: name: eureka-cm key: eureka_service_address --- apiVersion: v1 kind: Service metadata: name: config-server-svc spec: selector: app: config-server-app ports: - port: 80 targetPort: 9296 </code></pre> <p>I thought all the services will start simultaneously. Config Server is the Dependent Service for all other Serivces like auth service and this Auth service should not start until Config Server service is up and running.</p> <p><strong>Editted</strong></p> <p>I added this code snippets shown below in cloud_gateway_deployment.yaml file but it didn't work.</p> <pre><code> initContainers: - name: init-configserver image: noyandocker/configserver command: [ 'sh', '-c', &quot;until nslookup config-server-svc.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for config server; sleep 2; done&quot; ] </code></pre> <p>How can I do that?</p> <p>Here is my repo : <a href="https://github.com/Rapter1990/springbootmicroservicedailybuffer" rel="nofollow noreferrer">Link</a></p> <p>Here is my docker hub : <a href="https://hub.docker.com/search?q=noyandocker" rel="nofollow noreferrer">Link</a></p> <p>Here is git backend system : <a href="https://github.com/Rapter1990/springappconfig" rel="nofollow noreferrer">Link</a></p>
<p>If you have a dependency you need other services to wait on, I'd suggest implementing an init container, which will allow you to program the k8s deployment to wait for some dependency to exist or finish starting up. We've done something similar for a database, since many of our spring boot services in k8s need a database to be up. <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a></p> <p>Also, make sure you familiarize yourself with the way the k8s network routing works, and DNS for other pods, which uses their name. <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></p>
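<p>For this particular setup, a sketch of such an init container on the dependent deployments could look like the following. It polls the config server's health endpoint through the <code>config-server-svc</code> Service (port 80 in the manifests above) before the main container starts; the busybox image and the 5-second interval are just assumptions:</p> <pre><code>initContainers:
  - name: wait-for-config-server
    image: busybox:1.35
    command:
      - sh
      - -c
      - |
        until wget -qO- http://config-server-svc/actuator/health; do
          echo &quot;waiting for config server&quot;; sleep 5;
        done
</code></pre>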
<p>Currently I use <strong>Traefik IngressRoute</strong> to expose the Traefik dashboard. I am using this configuration:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: traefik-dashboard namespace: my-namespace spec: routes: - match: Host(`traefik.example.com`) &amp;&amp; (PathPrefix(`/api`) || PathPrefix(`/dashboard`)) kind: Rule services: - name: api@internal kind: TraefikService middlewares: - name: traefik-dashboard-https-redirect - name: traefik-dashboard-basic-auth tls: certResolver: le </code></pre> <p>and it works fine.</p> <p>However I would like to expose it with a native <strong>Kubernetes Ingress</strong>. I can't find any resource which shows how to access <code>api@internal</code> from an Ingress. Is it even possible?</p>
<p>It is not possible to reference api@internal from an Ingress.</p> <p>There is a workaround, I think, which could be:</p> <ul> <li>expose the API as insecure; it exposes the dashboard by default on an entrypoint called traefik on port 8080.</li> <li>update the entrypoint manually in the static conf if needed: <code>entrypoints.traefik.address=&lt;what-you-want&gt;</code></li> <li>create a service pointing to the traefik entrypoint (port 8080 by default).</li> <li>create an ingress pointing to the service (see the sketch below).</li> </ul>
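<p>A rough sketch of those steps (the Traefik pod label and the namespace here are assumptions; match them to your own Traefik deployment):</p> <pre><code># static configuration: add these to the Traefik container args
- --api.dashboard=true
- --api.insecure=true
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-dashboard
  namespace: my-namespace
spec:
  selector:
    app.kubernetes.io/name: traefik
  ports:
    - name: dashboard
      port: 8080
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: traefik-dashboard
  namespace: my-namespace
spec:
  rules:
    - host: traefik.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: traefik-dashboard
                port:
                  number: 8080
</code></pre>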
<p>I have a k8s cluster deployed with kubeadm; its current version is v1.21 and it uses Docker as the CRI. I want to upgrade it to v1.25, so I was reading the release notes, and I read about the new k8s registry, <code>registry.k8s.io</code>, in the last paragraph of the release notes <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#no-really-you-must-read-this-before-you-upgrade" rel="nofollow noreferrer">there</a>. But I don't know how to check what my current registry is, or how to change the default registry to the new one.</p> <p>I found this <a href="https://github.com/kubernetes/kubeadm/issues/2671" rel="nofollow noreferrer">link</a> about this change, but I can't find my answers.</p>
<p>To check your current registry run the command below:</p> <pre><code>kubeadm config images list </code></pre> <p>The output will be the image list, where you can find the current registry.</p> <p>To change the registry to <code>registry.k8s.io</code> you can use the <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/" rel="nofollow noreferrer">kubeadm init command</a>:</p> <pre><code>kubeadm init --image-repository=registry.k8s.io </code></pre> <p>Restart the cluster after changing the image repository so the configuration is updated correctly, then run <code>kubeadm upgrade apply v1.25.x</code> to upgrade your cluster. Now the cluster will be upgraded and will use the new registry.</p> <p>After that you can use the <code>kubeadm config images pull</code> command to pull the images from the new registry; also make sure to update any existing deployments.</p>
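<p>To preview or pre-pull the images from the new registry before the upgrade, a quick check (these repository/version flags exist on <code>kubeadm config images list/pull</code>; the exact patch version is a placeholder):</p> <pre><code>kubeadm config images list --image-repository registry.k8s.io --kubernetes-version v1.25.0
kubeadm config images pull --image-repository registry.k8s.io --kubernetes-version v1.25.0
</code></pre>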
<p>The work directory of my Docker image is set to &quot;/opt&quot;, and it contains the jar file, config files, etc. I am trying to mount a PVC to it in k8s, but I find the PVC overwrites the /opt directory. Is there any way to mount a directory of the Docker image which is not empty? Thanks.</p>
<p>When a PVC is mounted to a directory, instead of adding new files alongside the existing ones it will overwrite (hide) the directory you specify. In your case, when the PVC is mounted the directory “<code>/opt</code>” will be replaced. To prevent this you need to mount the PVC on a new subdirectory within <code>/opt</code>, or mount it on a different directory than the current one. Note that the mount path is set on the Pod's container (<code>volumeMounts</code>), not on the PVC itself:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
  volumeMode: Filesystem
---
# in the Pod/Deployment spec
containers:
  - name: app
    volumeMounts:
      - name: data
        mountPath: /opt/my-pvc
volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc
</code></pre> <p>This will create a new directory named '<code>my-pvc</code>' within the '<code>/opt</code>' directory and mount the PVC on it. Any files that are already in the '<code>/opt</code>' directory will not be overwritten.</p> <p>Or else:</p> <pre><code>    volumeMounts:
      - name: data
        mountPath: /mnt
</code></pre> <p>This will mount the PVC to the <code>/mnt</code> directory instead of <code>/opt</code>, so the data in <code>/opt</code> will not be overwritten.</p> <p>This <a href="https://medium.com/hackernoon/mount-file-to-kubernetes-pod-without-deleting-the-existing-file-in-the-docker-container-in-the-88b5d11661a6" rel="nofollow noreferrer">blog by Iman Tumorang</a> will help you for reference.</p>
<p>I have created an EKS private cluster along with a node group. I'm accessing the cluster through the bastion host. I'm able to access the cluster and run the pods in the cluster but the pods don't have any internet access.</p> <p>EKS nodes have internet access and it is able to pull the public docker images but <strong>the pods running inside it don't have internet access</strong>. I'm not using any different networking like calico or anything.</p> <p>Can someone please help to fix this issue?</p>
<p>Below are the troubleshooting steps for your problem:</p> <p>By default pods are not isolated and they will accept traffic from any source. Please check whether you have met networking requirements or not from this <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">page</a>.</p> <p>You need to expose your pods to the service</p> <p>Ex:</p> <pre><code>$ kubectl run nginx --image=nginx --replicas=5 -n web deployment.apps/nginx created $ kubectl expose deployment nginx --port=80 -n web service/nginx exposed $ kubectl get svc -n web NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx ClusterIP 10.100.94.70 &lt;none&gt; 80/TCP 2s # kubectl exec -ti busybox -n web -- nslookup nginx Server: 10.100.0.10 Address 1: 10.100.0.10 ip-10-100-0-10.ap-southeast-2.compute.internal Name: nginx Address 1: 10.100.94.70 ip-10-100-94-70.ap-southeast-2.compute.internal </code></pre> <p>And if it fails; check DNS <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-dns-failure/" rel="nofollow noreferrer">troubleshooting</a>.</p> <ul> <li>If you use any security groups to the pods then you need to confirm whether there is any communication to the group or not.</li> <li>Check ACL does not deny any connection.</li> <li>Subnets should have the default route communications within the VPC.</li> <li>Check whether you have enough <a href="https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html#vpc-cidr" rel="nofollow noreferrer">IP addresses</a>.</li> <li>Your pods should be scheduled and should be in the <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-pod-status-troubleshooting/" rel="nofollow noreferrer">running state</a>.</li> <li>Finally check the version and whether it is compatible or not.</li> </ul>
<p>I have k8s cluster with version 1.21 and cert-manager (v0.12) which was installed via helm. As far as I understand I should upgrade cert-manager version-by-version but I can't find CRDs file for version 0.13 (<a href="https://github.com/cert-manager/cert-manager/releases/download/v0.13/cert-manager.crds.yaml" rel="nofollow noreferrer">https://github.com/cert-manager/cert-manager/releases/download/v0.13/cert-manager.crds.yaml</a> - there is no such file. CRDs were installed separately from cert-manager). So, guys, help me!</p>
<p>I can get a file from this <a href="https://github.com/jetstack/cert-manager/releases/download/v0.13.0/cert-manager.yaml" rel="nofollow noreferrer">URL</a>; please test it and let me know how it works. When upgrading from earlier versions, use this file, which contains all of the Custom Resource Definitions (CRDs) for cert-manager 0.13.</p> <p>You can upgrade using static manifests also by running the below command:</p> <pre><code>kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/&lt;version&gt;/cert-manager.yaml </code></pre> <p>You can find more information in this <a href="https://cert-manager.io/docs/installation/upgrading/#upgrading-using-static-manifests" rel="nofollow noreferrer">doc</a>.</p>
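<p>Concretely, for this upgrade step that would be (the URL is the one linked above):</p> <pre><code>kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.13.0/cert-manager.yaml </code></pre>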
<p>Here is my working condition:</p> <ol> <li>My laptop @ 192.168.12.85 gw 192.168.1.254</li> <li>Controll plane : bino-k8-master @ 192.168.1.66 gw 192.168.1.247</li> <li>worker node : bino-k8-wnode1 @ 192.168.1.67 gw 192.168.1.247</li> </ol> <p>k8s cluster is build per <a href="https://www.hostafrica.ng/blog/kubernetes/kubernetes-ubuntu-20-containerd/" rel="nofollow noreferrer">https://www.hostafrica.ng/blog/kubernetes/kubernetes-ubuntu-20-containerd/</a></p> <p>I build simple flask app image per <a href="https://faun.pub/run-your-flask-app-on-kubernetes-ff03854db842" rel="nofollow noreferrer">https://faun.pub/run-your-flask-app-on-kubernetes-ff03854db842</a></p> <p>Currently, the app is running:</p> <pre><code>ubuntu@bino-k8-master:~$ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES flask-k8s-deployment-59bd54648c-jdgxv 1/1 Running 0 16h 10.244.1.8 bino-k8-wnode1 &lt;none&gt; &lt;none&gt; </code></pre> <p>also the service:</p> <pre><code>ubuntu@bino-k8-master:~$ kubectl get services -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR flask-k8s-service LoadBalancer 10.96.179.198 &lt;pending&gt; 6000:30787/TCP 17h app=flask-k8s kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 18h &lt;none&gt; </code></pre> <p>But I can not access the app or service from control plane ... curl <a href="http://10.244.1.8:5000" rel="nofollow noreferrer">http://10.244.1.8:5000</a> and <a href="http://10.96.179.198:6000" rel="nofollow noreferrer">http://10.96.179.198:6000</a> both failed (no message, just stuck)</p> <p>But both curl will work if I did it from the worker node.</p> <p>Kindly please tell me what to do to make the app or service can be acessed from my laptop (192.168.1.85)</p> <p>Sincerely</p> <p>Bino</p>
<p>If you want to access it from a laptop you need to get the external-IP of the load balancer. In your output it is still shown as &lt;pending&gt;, because on a bare-metal cluster nothing assigns LoadBalancer IPs by default (you would need something like MetalLB for that). If you just want to test it, you can port forward with the correct ports; your Flask container listens on 5000, as your curl from the worker node shows:</p> <pre><code>kubectl port-forward flask-k8s-deployment-59bd54648c-jdgxv 5000:5000 </code></pre> <p>and then just call</p> <pre><code>http://localhost:5000 </code></pre> <p>If you want to access it from the internet, create an Ingress: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>
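<p>Alternatively, the service output in your question already exposes a NodePort (<code>6000:30787/TCP</code>), so from the laptop you should be able to reach the app directly through any node's IP on that port, assuming kube-proxy is healthy and no firewall blocks the NodePort range (node IPs taken from your description):</p> <pre><code>curl http://192.168.1.67:30787   # worker node
curl http://192.168.1.66:30787   # control plane node; a NodePort is opened on every node
</code></pre>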
<p>I am using an <code>Ubuntu 22.04</code> machine to run and test Kubernetes locally. I need some functionality like <code>Docker-Desktop</code>. I mean it seems both <code>master</code> and <code>worker</code> nodes/machines will be installed by <code>Docker-Desktop</code> on the same machine. But when I try to install Kubernetes and following the instructions like <a href="https://www.cloudsigma.com/how-to-install-and-use-kubernetes-on-ubuntu-20-04/" rel="nofollow noreferrer">this</a>, at some points it says run the following codes on <code>master</code> node:</p> <pre><code>sudo hostnamectl set-hostname kubernetes-master </code></pre> <p>Or run the following comands on the <code>worker</code> node machine:</p> <pre><code>sudo hostnamectl set-hostname kubernetes-worker </code></pre> <p>I don't know how to specify <code>master</code>/<code>worker</code> nodes if I have only my local Ubuntu machine?</p> <p>Or should I run <code>join</code> command after <code>kubeadm init</code> command? Because I can't understand the commands I run in my terminal will be considered as a command for which <code>master</code> or <code>worker</code> machine?</p> <p>I am a little bit confused about this <code>master</code>/<code>worker</code> nodes or <code>client</code>/<code>server</code> machine stuff while I am just using one machine for both client and server machines.</p>
<p>Prerequisites for installing Kubernetes as a cluster:</p> <ol> <li>Ubuntu instance with 4 GB RAM - Master Node - (with ports open to all traffic)</li> <li>Ubuntu instance with at least 2 GB RAM - Worker Node - (with ports open to all traffic)</li> </ol> <p>It means you need to create two instances (one master, one worker) from any cloud provider like Google (GCP), Amazon (AWS), Atlantic.Net Cloud Platform or CloudSigma, as per your convenience.</p> <p>For creating an instance in GCP follow this <a href="https://cloud.google.com/compute/docs/create-linux-vm-instance" rel="nofollow noreferrer">guide</a>. If you don't have an account, create one; new customers also get $300 in free credits to run, test, and deploy workloads.</p> <p>After creating the instances you will get their IPs; using them you can SSH into each instance from a terminal on your local machine with the command: <code>ssh root@&lt;ip address&gt;</code></p> <p>From there you can follow any guide for installing Kubernetes using master and worker nodes.</p> <p>example:</p> <pre><code>sudo hostnamectl set-hostname &lt;host name&gt; </code></pre> <p>The above should be executed over SSH on the master node and, similarly (with its own hostname), on the worker node.</p>
<p>I am looking for a helm chart of <a href="https://superset.apache.org/" rel="nofollow noreferrer">Superset</a> to set it up on Kubernetes which is hosted remotely. In other words I would like to call &quot;helm repo add&quot; on a remote url.</p> <p>I found this one <a href="https://github.com/helm/charts#%EF%B8%8F-deprecation-and-archive-notice" rel="nofollow noreferrer">here</a> but it says that it is deprecated with no reference to a new location. The only thing I could find is this pull request, but the <a href="https://github.com/helm/charts/pull/24466/files" rel="nofollow noreferrer">repository</a> it is leading to does not seem to contain Superset. Does anyone know if there is still a remote Superset helm chart somewhere out there?</p>
<p>According to the documentation <a href="https://superset.apache.org/docs/installation/running-on-kubernetes" rel="nofollow noreferrer">here</a></p> <p>A helm chart has been published and can be added as</p> <pre><code>helm repo add superset https://apache.github.io/superset </code></pre> <p>Then installed</p> <pre><code>helm upgrade --install --values my-values.yaml superset superset/superset </code></pre>
<p>I am using kubernetes with docker desktop on MacOS Monterey. I have problem with starting kubernetes, because 1 year passed and my kubernetes certificates are invalid.</p> <p>How can I renew them ?</p> <p>Error message:</p> <pre><code>Error: Kubernetes cluster unreachable: Get &quot;https://kubernetes.docker.internal:6443/version&quot;: EOF </code></pre> <p>I tried to install <code>kubeadm</code> but I think it is only suitable if I use <code>minikube</code>.</p> <p>Edit: I am using Mac with M1 chip.</p>
<p>To update the certificates used by Docker Desktop's Kubernetes on macOS you will need to create a new set of <a href="https://docs.docker.com/desktop/faqs/macfaqs/#how-do-i-add-tls-certificates" rel="nofollow noreferrer">certificates</a> and keys and add them to the Kubernetes configuration file: create a certificate signing request (CSR) first, use the CSR to obtain new certificates and keys, place them in the appropriate directory structure, update the Kubernetes configuration file to point to them, and finally restart your Kubernetes cluster so the new certificates and keys take effect. In practice the simplest way to get fresh certificates with Docker Desktop is to reset the Kubernetes cluster from its settings (Settings &gt; Kubernetes &gt; Reset Kubernetes Cluster), which recreates the cluster (and its certificates) from scratch.</p> <p>If you were using the minikube command-line tool instead, the first step in renewing the certificates is to remove the existing cluster with the minikube delete command. After the cluster has been deleted, the minikube start command creates a new cluster with fresh <a href="https://minikube.sigs.k8s.io/docs/handbook/untrusted_certs/" rel="nofollow noreferrer">certificates</a> and updates your kubeconfig to point at it.</p> <p>Also check your Kubernetes version; if you are using an older version, upgrade it to the latest one. The Kubernetes version can be upgraded after a Docker Desktop update; however, when a new Kubernetes version is added to Docker Desktop, you need to reset the current cluster in order to use the newer version.</p>
<p>I'm studying for the CKAD and I'm trying to create a <code>PersistenVolume</code> of type <code>hostPath</code> on my local minikube cluster and mount to a container</p> <p>These are the steps:</p> <ol> <li>I created a PV of type <code>hostPath</code> and <code>path: &quot;/data/vol1/&quot;</code></li> <li>I created a PVC and its state is <code>Bound</code></li> <li>I created a POD and mounted the PVC as volume under &quot;/var/something/&quot;</li> <li>I ran <code>minikube ssh</code> and created a file <code>/data/vol1/foo.bar</code></li> <li>I expected to see the file <code>foo.bar</code> under the folder /var/something/` of the container, but it's not there.</li> </ol> <p>This is the yaml file</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: 1311-pv spec: capacity: storage: 2Gi hostPath: path: &quot;/data/vol1/&quot; accessModes: - ReadWriteMany --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: 1311-pvc spec: accessModes: - ReadWriteMany resources: requests: storage: 1Gi --- apiVersion: v1 kind: Pod metadata: name: ex10 spec: volumes: - name: pvc persistentVolumeClaim: claimName: 1311-pvc containers: - image: httpd name: web volumeMounts: - mountPath: &quot;/var/something/&quot; name: pvc </code></pre> <p>And this is the status of the system:</p> <pre><code>k get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE 1311-pvc Bound pvc-c892f798-8ee6-4040-9177-3e77327e9ec6 1Gi RWX standard 5m </code></pre> <pre><code>k get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE 1311-pv 2Gi RWX Retain Available 13m pvc-c892f798-8ee6-4040-9177-3e77327e9ec6 1Gi RWX Delete Bound default/1311-pvc standard 13m </code></pre> <pre><code>pod describe ex10 ... Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-lw45q (ro) /var/something/ from pvc (rw) .... </code></pre> <p>Everything looks correct to be, but I don't see the file inside the container:</p> <pre><code>k exec ex10 -- ls /var/something - no results here - </code></pre> <p>that I created into the minikube vm:</p> <pre><code>ssh minikube $ ls /data/vol1/ foo.bar </code></pre>
<p>I found a solution for this issue, the PV must specify the <code>storageClassName: standard</code> in order to work.</p> <p>This is how the PV should look to properly store data on the minikube host</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolume metadata: name: 1311-pv spec: storageClassName: &quot;standard&quot; capacity: storage: 2Gi hostPath: path: &quot;/data/vol1/&quot; accessModes: - ReadWriteMany </code></pre>
<p>I am studying &quot;kubectl describe&quot; sourcecodes at <a href="https://github.com/kubernetes/kubectl/blob/master/pkg/describe/describe.go" rel="nofollow noreferrer">https://github.com/kubernetes/kubectl/blob/master/pkg/describe/describe.go</a></p> <p>However, I still could not figure out how &quot;kubectl decsribe [CRD]&quot; works (as in which function/functions are called).</p> <p>I am a Go newbie, so would like to get some pointers please. Thanks.</p> <p>I have read describePod function and understand how it works more or less, but still could not figure out how &quot;kubectl describe [CRD]&quot; works.</p>
<p>The &quot;kubectl describe&quot; command is part of the Kubernetes command-line interface (CLI), the &quot;kubectl&quot; tool, which is used to manage and interact with a Kubernetes cluster and its resources. As for <code>kubectl describe [CRD]</code> specifically: if I read <code>describe.go</code> correctly, custom resources have no type-specific describer (there is no per-CRD equivalent of <code>describePod</code>), so kubectl falls back to the generic describer defined in the same file (see <code>genericDescriber</code> and <code>GenericDescriberFor</code>), which fetches the object through the dynamic client and prints its fields generically.</p> <p><img src="https://i.stack.imgur.com/mZ6Tz.png" alt="enter image description here" /></p>
<p>So I have a JCasC ConfigMap containing</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: master-jcasc namespace: master-namespace data: entries.yaml: | master: JCasC: defaultConfig: false configScripts: jenkins-settings: | field1: ... field2: ... jobs: - script: &gt; someJobDefenition </code></pre> <p>And keeping the <code>jobs</code> section here in the Jcasc file works when I <code>k apply</code>, however when I create another ConfigMap with just the <code>jobs</code> in it, they keep overwriting each other.</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: jobs-file namespace: master-namespace data: entries.yaml: | master: JCasC: defaultConfig: false configScripts: jenkins-settings: | jobs: - script: &gt; someJobDefenition </code></pre> <p>How can I seperate a ConfigMap into two? without having them overwrite each other?</p>
<p>You can use <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">kustomize</a> to keep the configuration in separate files.</p> <p>As mentioned in this <a href="https://stackoverflow.com/questions/71008589/">stack question</a> you can use 2 yaml files and create a config map as below:</p> <pre><code>configMapGenerator: - name: my-configmap files: - datasource1.yaml - datasource2.yaml </code></pre> <p>Where datasource1.yaml and datasource2.yaml are the files obtained by splitting the ConfigMap data into two files.</p> <p>Here is a <a href="https://stackoverflow.com/questions/71008589/kustomize-overlays-when-using-a-shared-configmap">stack question</a> with another approach.</p>
<p>I have the following ConfigMap which is having a variable called <code>VAR</code>. This variable should get the value from the workflow while accessing it as a volume</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: test-pod-cfg data: test-pod.yaml: |- apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test image: ubuntu command: [&quot;/busybox/sh&quot;, &quot;-c&quot;, &quot;echo $VAR&quot;] </code></pre> <p>Here is the argo workflow which is fetching script <code>test-pod.yaml</code> in ConfigMap and adding it as a volume to container. In this how to pass Environment variable <code>VAR</code> to the ConfigMap for replacing it dynamically</p> <pre><code>apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: name: test-wf- spec: entrypoint: main templates: - name: main container: image: &quot;ubuntu&quot; command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;cat /mnt/vc/test&quot;] volumeMounts: - name: vc mountPath: &quot;/mnt/vc&quot; volumes: - name: vc configMap: name: test-pod-cfg items: - key: test-pod.yaml path: test </code></pre>
<p>To mount the <code>ConfigMap</code> as a volume and make the environment variable VAR available to the container, you will need to add a volume to the pod's spec and set the environment variable in the container's spec.</p> <p>In the volume spec, you will need to add the <code>ConfigMap</code> as a volume source and set the path to the file containing the environment variable. For example:</p> <pre><code>spec: entrypoint: test-pod templates: - name: test-pod container: image: ubuntu command: [&quot;/busybox/sh&quot;, &quot;-c&quot;, &quot;echo $VAR&quot;] volumeMounts: - name: config mountPath: /etc/config env: - name: VAR valueFrom: configMapKeyRef: name: test-pod-cfg key: test-pod.yaml volumes: - name: config configMap: name: test-pod-cfg </code></pre> <p>The environment variable <code>VAR</code> will then be available in the container with the value specified in the ConfigMap.</p> <p>For more information follow this <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-with-data-from-multiple-configmaps" rel="nofollow noreferrer">official doc</a>.</p>
<p>I deployed a pod and service of a Flask API in Kubernetes.</p> <p>When I run the Nifi processor InvoqueHTTP that calls the API, I have the error :</p> <pre><code>File &quot;/opt/app-root/lib64/python3.8/site-packages/psycopg2/__init__.py&quot; </code></pre> <p><code>psycopg2.OperationalError: SCRAM authentication requires libpq version 10 or above</code></p> <p>The API connects to PGAAS database, in local it is running fine to connect but in the Kubernetes pod I need libpq library but I'm not finding the right library to install. I also tried to install psycopg2-binary and it's throwing the same error.</p> <p>Do you have any idea how to solve this issue ?</p> <p>version tried in requirements : psycopg2==2.9.3 or psycopg2-binary==2.9.5</p>
<p>For <code>psycopg2.OperationalError: SCRAM authentication requires libpq version 10 or above</code>, try the workarounds below:</p> <p><strong>Solution 1</strong></p> <p>Download libpq.dll from <a href="https://www.exefiles.com/en/dll/libpq-dll/" rel="nofollow noreferrer">https://www.exefiles.com/en/dll/libpq-dll/</a> and replace the old libpq.dll in your application (or PHP) directory with the downloaded one (this only applies when you hit the error on a Windows host).</p> <p><strong>Solution 2</strong></p> <p>Change the authentication to md5, then reset your password and restart the PostgreSQL service. Here are the steps:</p> <ul> <li>Find the file postgresql.conf in C:\Program Files\PostgreSQL\13\data and set password_encryption = md5</li> <li>Find the file pg_hba.conf in C:\Program Files\PostgreSQL\13\data and change all METHOD entries to md5</li> <li>Open a command line (cmd, cmder, git bash...), run <code>psql -U postgres</code> and enter the password you set when installing PostgreSQL. Then change your password by running <code>ALTER USER postgres WITH PASSWORD 'new-password';</code> on that command line</li> <li>Restart the postgresql service in your Services panel</li> </ul> <p><strong>Solution 3</strong></p> <p>Check whether psycopg2 is picking up an additional, older copy of libpq that is present in the container image or on your machine. Locate that library, then upgrade or remove it; psycopg2 (or psycopg2-binary) may also need to be upgraded for that.</p>
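<p>To check which libpq your psycopg2 build is actually linked against inside the pod (this relates to Solution 3), something like the following should work; the pod name is a placeholder for your API pod:</p> <pre><code>kubectl exec -it &lt;your-api-pod&gt; -- python -c &quot;import psycopg2; print(psycopg2.__libpq_version__)&quot;
# e.g. 100012 means libpq 10.12; anything below 100000 is older than libpq 10 and triggers the SCRAM error
</code></pre>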
<p>How to pass on my own kubernetes audit policy yaml to the GKE master node?</p> <p>For self-managed kubernetes, we can ssh into master node and pass the policy yaml into <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code>. How could we achieve the same in GKE?</p>
<p>For creating or updating an <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/audit-policy#kubernetes_audit_policy" rel="nofollow noreferrer">audit policy</a> you have to pass the <code>--audit-policy-file</code> and <code>--audit-webhook-config-file</code> flags as arguments to the API server.</p> <p>Google manages the GKE master (control plane) completely, so you cannot reach it or update its API server configuration. There is currently no gcloud option for supplying a custom audit policy to an existing cluster either; I have verified this against the gcloud container clusters update command <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/update" rel="nofollow noreferrer">documentation</a>.</p> <p>Instead, you can filter the audit logs in the Cloud Console; the Logs page has two filtering interfaces: basic and advanced. For information about the two filtering interfaces, <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/audit-logging#basic_and_advanced_filter_interfaces" rel="nofollow noreferrer">see Logs Viewer filter interfaces</a>.</p> <p>There is a <a href="https://issuetracker.google.com/issues/185868707" rel="nofollow noreferrer">feature request</a> for this; check it, and raise a new request if you need to by using the <a href="https://cloud.google.com/support/docs/issue-trackers" rel="nofollow noreferrer">issue tracker</a>.</p>
<p>I am searching for a way to add a specific label to ConfigMap using Kustomize. Using <code>commonLabels</code> is not an option because it adds a label to every resource. For config map generation, I am using the following:</p> <pre><code>configMapGenerator: - name: ${REPOSITORY_NAME}-env envs: - .env </code></pre> <p>And I would like to achieve something like this:</p> <pre><code>configMapGenerator: - name: ${REPOSITORY_NAME}-env labels: key: value envs: - .env </code></pre>
<p>As per this <a href="https://github.com/keleustes/kustomize/tree/allinone/examples/issues/issue_1501" rel="nofollow noreferrer">git link</a> you can add specific labels by using patch files in kustomize.</p> <blockquote> <pre><code>apiVersion: v1 data: a: &quot;0&quot; kind: ConfigMap metadata: labels: apps: a name: a-config-map-26kgmbk2md </code></pre> <p>Then reference the patch file in the kustomization that holds the configMapGenerator:</p> <pre><code>configMapGenerator: - name: a-config-map envs: - a.properties patchesStrategicMerge: - patch-a.yaml </code></pre> </blockquote> <p>This adds the label to that individual generated ConfigMap only, without labelling every resource. A generator-options variant is sketched below as well.</p> <p>Additionally you can refer to this <a href="https://github.com/kubernetes-sigs/kustomize/issues/3108#issuecomment-710344736" rel="nofollow noreferrer">git issue</a> for more information.</p>
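<p>Depending on your kustomize version, the generator itself can also carry labels through its <code>options</code> field. A hedged sketch applied to your generator (verify against the kustomize version you actually run):</p> <pre><code>configMapGenerator:
- name: ${REPOSITORY_NAME}-env
  envs:
  - .env
  options:
    labels:
      key: value
</code></pre>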
<p>According to articles below, it seems we can pull container image to GKE from Artifact Registry without any additional authentication when these in same project.</p> <p><a href="https://cloud.google.com/artifact-registry/docs/integrate-gke" rel="nofollow noreferrer">https://cloud.google.com/artifact-registry/docs/integrate-gke</a></p> <p><a href="https://www.youtube.com/watch?v=BfS7mvPA-og" rel="nofollow noreferrer">https://www.youtube.com/watch?v=BfS7mvPA-og</a></p> <p><a href="https://stackoverflow.com/questions/73205712/error-imagepullbackoff-and-error-errimagepull-errors-with-gke">Error: ImagePullBackOff and Error: ErrImagePull errors with GKE</a></p> <p>But when I try it, I faced <code>ImagePullBackOff</code> error.<br /> Is there any mistake? misunderstanding? Or should I need use another authentication?</p> <h2>Reproduce</h2> <p>It's convenient to use Google Cloud Shell in some project on <a href="https://console.cloud.google.com" rel="nofollow noreferrer">https://console.cloud.google.com</a> .</p> <h3>Create Artifact Registry</h3> <pre class="lang-bash prettyprint-override"><code>gcloud artifacts repositories create test \ --repository-format=docker \ --location=asia-northeast2 </code></pre> <h3>Push sample image</h3> <pre class="lang-bash prettyprint-override"><code>gcloud auth configure-docker asia-northeast2-docker.pkg.dev docker pull nginx docker tag nginx asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image docker push asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image </code></pre> <h3>Create GKE Autopilot cluster</h3> <p>Create GKE Autopilot cluster by using GUI console.</p> <p>Almost all options is default but I changed these 2.</p> <ul> <li>Set cluster name as test.</li> <li>Set region same as registry's one. (In this case, asia-northeast2)</li> <li>Enabled Anthos Service Mesh.</li> </ul> <h3>Deploy container image to GKE from Artifact Registry</h3> <pre class="lang-bash prettyprint-override"><code>gcloud container clusters get-credentials test --zone asia-northeast2 kubectl run test --image asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image </code></pre> <h3>Check Pod state</h3> <pre class="lang-bash prettyprint-override"><code>kubectl describe po test </code></pre> <pre><code>Name: test Namespace: default Priority: 0 Service Account: default Node: xxxxxxxxxxxxxxxxxxx Start Time: Wed, 08 Feb 2023 12:38:08 +0000 Labels: run=test Annotations: autopilot.gke.io/resource-adjustment: {&quot;input&quot;:{&quot;containers&quot;:[{&quot;name&quot;:&quot;test&quot;}]},&quot;output&quot;:{&quot;containers&quot;:[{&quot;limits&quot;:{&quot;cpu&quot;:&quot;500m&quot;,&quot;ephemeral-storage&quot;:&quot;1Gi&quot;,&quot;memory&quot;:&quot;2Gi&quot;},&quot;reque... 
seccomp.security.alpha.kubernetes.io/pod: runtime/default Status: Pending IP: 10.73.0.25 IPs: IP: 10.73.0.25 Containers: test: Container ID: Image: asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image Image ID: Port: &lt;none&gt; Host Port: &lt;none&gt; State: Waiting Reason: ErrImagePull Ready: False Restart Count: 0 Limits: cpu: 500m ephemeral-storage: 1Gi memory: 2Gi Requests: cpu: 500m ephemeral-storage: 1Gi memory: 2Gi Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-szq85 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-szq85: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: &lt;nil&gt; DownwardAPI: true QoS Class: Guaranteed Node-Selectors: &lt;none&gt; Tolerations: kubernetes.io/arch=amd64:NoSchedule node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 19s gke.io/optimize-utilization-scheduler Successfully assigned default/test to xxxxxxxxxxxxxxxxxxx Normal Pulling 16s kubelet Pulling image &quot;asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image&quot; Warning Failed 16s kubelet Failed to pull image &quot;asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image:latest&quot;: failed to resolve reference &quot;asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image:latest&quot;: failed to authorize: failed to fetch oauth token: unexpected status: 403 Forbidden Warning Failed 16s kubelet Error: ErrImagePull Normal BackOff 15s kubelet Back-off pulling image &quot;asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image&quot; Warning Failed 15s kubelet Error: ImagePullBackOff </code></pre> <p>then, I got <code>ImagePullBackOff</code>.</p>
<p>This could be because the GKE Autopilot service account does not have the necessary permissions to access the Artifact Registry. You can grant the needed permissions by adding the <code>roles/artifactregistry.reader</code> role to the service account that the GKE Autopilot node pool is configured to use. Additionally, you may need to adjust the <a href="https://cloud.google.com/container-registry/docs/access-control#permissions_and_roles" rel="nofollow noreferrer">IAM permissions</a> for the service account so that it has access to the private Artifact Registry.</p> <pre><code>gcloud artifacts repositories add-iam-policy-binding &lt;repository-name&gt; \ --location=&lt;location&gt; \ --member=serviceAccount:&lt;nnn&gt;-compute@developer.gserviceaccount.com \ --role=&quot;roles/artifactregistry.reader&quot; </code></pre> <p>You can also try creating a new service account, granting it the necessary permissions to pull the image, and pulling the image again.</p> <p>Simple <a href="https://cloud.google.com/container-registry/docs/troubleshooting#imagepullbackoff" rel="nofollow noreferrer">troubleshooting steps</a> are:</p> <ol> <li>Ensure that your GKE cluster is configured to allow access to the Artifact Registry. You can do this by going to the GKE dashboard and making sure that the “Allow access to Artifact Registry” option is enabled.</li> <li>Check that the container image you are trying to pull actually exists in the Artifact Registry: make sure it has been uploaded correctly and can be accessed.</li> <li>Look into the error logs to get more information on what is causing this issue. Additionally, you can check the GKE documentation for more information on troubleshooting this issue.</li> </ol>
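<p>To find out which service account the cluster's nodes are actually using (and therefore which account needs <code>roles/artifactregistry.reader</code>), something along these lines should work; the field path can differ between GKE versions and Autopilot, so fall back to reading the full describe output if it prints nothing:</p> <pre><code>gcloud container clusters describe test \
  --region asia-northeast2 \
  --format=&quot;value(nodeConfig.serviceAccount)&quot;
# &quot;default&quot; here means the Compute Engine default service account,
# i.e. &lt;project-number&gt;-compute@developer.gserviceaccount.com
</code></pre>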
<p>I think I am misunderstanding Kubernetes CronJobs. On the CKAD exam there was a question to have a CronJob run every minute, but it should start after an arbitrary amount of time. I don't see any properties for CronJobs or Jobs to have them start after a specific time. Should that be part of the cron string or am I completely misunderstanding?</p>
<p>Maybe you misunderstood the question; it was to terminate the job if it didn't complete within x seconds. You should use <code>.spec.activeDeadlineSeconds</code> on the Job to satisfy that requirement.</p>
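<p>For reference, here is a minimal sketch of where that field lives in a CronJob (the names and values are placeholders):</p> <pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob
spec:
  schedule: &quot;*/1 * * * *&quot;
  jobTemplate:
    spec:
      activeDeadlineSeconds: 30   # terminate the job if it has not completed within 30s
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: task
            image: busybox
            command: [&quot;sh&quot;, &quot;-c&quot;, &quot;echo hello&quot;]
</code></pre>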
<p>How to safely delete node from the cluster?</p> <p>Firstly, I have drained and deleted the node. However, after few seconds kube created again. I believe its because of the cluster service, where number of replicas are defined. Should i update my cluster service and delete ? Or is there any other way to safely delete ?</p>
<p><strong>To delete a node and stop recreating another one automatically follow the below steps:</strong></p> <ol> <li><p>First drain the node</p> <p><code>kubectl drain &lt;node-name&gt;</code></p> </li> <li><p>Edit instance group for nodes (using kops)</p> <p><code>kops edit ig nodes</code></p> </li> <li><p>Finally delete the node</p> <p><code>kubectl delete node &lt;node-name&gt;</code></p> </li> <li><p>Update the cluster (using kops)</p> <p><code>kops update cluster --yes</code></p> </li> </ol> <p>Note: If you are using a pod autoscaler then disable or edit the replica count before deleting the node.</p>
<p>I am new to Kubernetes. I am planning to build/deploy an application to EKS. This will likely be deployed on azure and gcp as well.</p> <p>I want to separate data plane and control plane in my application deployed in EKS.</p> <p>Is there anyway EKS/kubernetes allows to accomplish this?</p> <p>Or should we go for two EKS with one for data plane and another for control plane?</p> <p>Here is the problem(copied from the answer below)</p> <blockquote> <p>I have an application, built using the microservice architecture (meaning you will have it split into components that will communicate with eachother).</p> <p>I want to deploy this application on a public cloud (EKS, GCP, AWS).</p> <p>I want a separation of the APPLICATION control plane (decision making components like authentication APIs, internal service routing) from the APPLICATION data plane (the serving of your application data to the clients through the egress).</p> </blockquote>
<p>A great open-source observability control plane for your apps is Odigos. The installation is super easy and within a few minutes you can get traces, metrics and logs. You get auto-instrumentation for all languages (including Go) as well as a manager for your OpenTelemetry collectors. Check it out: <a href="https://github.com/keyval-dev/odigos" rel="nofollow noreferrer">https://github.com/keyval-dev/odigos</a></p>
<p>How do we update the <code>imagePullPolicy</code> alone for certain deployments using <code>kubectl</code>? The image tag has changed, however we don't require a restart. Need to update existing deployments with <code>--image-pull-policy</code> as <code>IfNotPresent</code></p> <p>Note: Don't have the complete YAML or JSON for the deployments, hence need to do it via <code>kubectl</code></p>
<p>Use</p> <pre><code>kubectl edit deployment &lt;deployment_name&gt; -n &lt;namespace&gt; </code></pre> <p>Then you will be able to edit the <code>imagePullPolicy</code> field under the pod template and save.</p>
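<p>If you prefer a non-interactive, scriptable way, a JSON patch along these lines should also work (the container index 0 and the placeholder names are assumptions; adjust them to your deployment). Note that any change to the pod template, whether made via edit or patch, still triggers a rolling restart of the deployment's pods:</p> <pre><code>kubectl patch deployment &lt;deployment_name&gt; -n &lt;namespace&gt; --type='json' \
  -p='[{&quot;op&quot;: &quot;replace&quot;, &quot;path&quot;: &quot;/spec/template/spec/containers/0/imagePullPolicy&quot;, &quot;value&quot;: &quot;IfNotPresent&quot;}]'
</code></pre>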
<p>After an unexpected power failure, the mongodb service I deployed on k8s could not be restarted normally. The log of mongodb showed that there was a problem with its data and could not be started.</p> <p>I did not record the specific error log.</p>
<p>Here is my fix:</p> <h2>First</h2> <p>Change the k8s deployment.yaml config.</p> <p>Because we want to repair MongoDB's data files, the first step is to get the mongo pod running so that we can run commands inside it.</p> <p>Now change the startup command of the container:</p> <pre class="lang-yaml prettyprint-override"><code> containers: - name: mongodb image: mongo:latest command: [&quot;sleep&quot;] args: [&quot;infinity&quot;] imagePullPolicy: IfNotPresent ports: # ....... </code></pre> <p>After applying it, the mongo pod should be up and running.</p> <h2>Second</h2> <p>Use the mongod command to repair the data.</p> <pre class="lang-bash prettyprint-override"><code>kubectl exec -it &lt;YOURPODNAME&gt; -- mongod --dbpath &lt;YOURDBPATH&gt; --repair --directoryperdb </code></pre> <p>I had to run it with <code>--directoryperdb</code>; if that option gives you an error, you can try removing it.</p> <p>If everything went well, the data is repaired at this point.</p> <h2>Third</h2> <p>Restore the k8s deployment.yaml back to the way it was.</p> <p>Now reapply it.</p> <p>---------- Manual split line</p> <p>The above is my repair process. It worked for me; I'm just recording it here so you can refer to it to fix your MongoDB. Thank you.</p>
<p>I'm a little confused about the default Service Account in new created Namespace in my Minikube.</p> <ul> <li>Does it have any permissions? It seems not because I can't find any rolebinding or clusterrolebindung which references this SA</li> <li>Then why is it created when it does not have a permission, or is there a use case around that?</li> <li>and lastly, why are service accounts by default mount to pods?</li> </ul> <p>Regards ralph</p>
<ol> <li><p>The default <a href="https://thenewstack.io/kubernetes-access-control-exploring-service-accounts/" rel="nofollow noreferrer">service account</a> doesn't have enough permissions even to retrieve the services running in its own namespace.</p> </li> <li><p>Kubernetes follows the convention of closed-to-open, which means that by default no user or service account has any permissions.</p> <p>To give the default service account a permission, we need to create a role binding associating it with an appropriate role. This is similar to assigning a viewer role to a service account to give it permission to list pods; see the sketch after this list.</p> </li> <li><p>Pods have the default service account assigned even when you don't ask for it. This is because every pod in the cluster needs to have one (and only one) service account assigned to it.</p> <p>Refer to <a href="https://stackoverflow.com/questions/52995962/kubernetes-namespace-default-service-account">Kubernetes namespace default service account</a> for more information.</p> </li> </ol>
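<p>As a hedged sketch of such a role binding (the namespace is an example and the built-in <code>view</code> ClusterRole is just one option; grant only what your workload actually needs):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-sa-view
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: default
  namespace: my-namespace
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
</code></pre>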
<p>I'm building Istio/K8s-based platform for controlling traffic routing with NodeJS. I need to be able to programmatically modify Custom Resources and I'd like to use the <a href="https://github.com/kubernetes-client/javascript" rel="nofollow noreferrer">@kubernetes/node-client</a> for that. I wasn't able to find the right API for accessing Custome Resources in docs and the repo. Am I missing something? Thx in adv.</p> <p>EDIT: When using CustomObjectApi.patchNamespacedCustomObject function, I'm getting the following error back from K8s API:</p> <p><code>message: 'the body of the request was in an unknown format - accepted media types include: application/json-patch+json, application/merge-patch+json, application/apply-patch+yaml', reason: 'UnsupportedMediaType', code: 415</code></p> <p>My Code:</p> <pre class="lang-js prettyprint-override"><code>const k8sYamls = `${path.resolve(path.dirname(__filename), '..')}/k8sYamls` const vServiceSpec = read(`${k8sYamls}/${service}/virtual-service.yaml`) const kc = new k8s.KubeConfig() kc.loadFromDefault() const client = kc.makeApiClient(k8s.CustomObjectsApi) const result = await client.patchNamespacedCustomObject(vServiceSpec.apiVersion.split('/')[0], vServiceSpec.apiVersion.split('/')[1], namespace, 'virtualservices', vServiceSpec.metadata.name, vServiceSpec) </code></pre> <p>virtual-service.yaml:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: message-service spec: hosts: - message-service http: - name: 'production' route: - destination: host: message-service weight: 100 retries: attempts: 3 perTryTimeout: 2s retryOn: 5xx </code></pre>
<p>I was using the wrong type for the <code>body</code> object in that method. I got it to work following <a href="https://github.com/kubernetes-client/javascript/blob/master/examples/patch-example.js" rel="nofollow noreferrer">this example</a>.</p> <pre class="lang-js prettyprint-override"><code>const patch = [ { &quot;op&quot;: &quot;replace&quot;, &quot;path&quot;:&quot;/metadata/labels&quot;, &quot;value&quot;: { &quot;foo&quot;: &quot;bar&quot; } } ]; const options = { &quot;headers&quot;: { &quot;Content-type&quot;: k8s.PatchUtils.PATCH_FORMAT_JSON_PATCH}}; k8sApi.patchNamespacedPod(res.body.items[0].metadata.name, 'default', patch, undefined, undefined, undefined, undefined, options) .then(() =&gt; { console.log(&quot;Patched.&quot;)}) .catch((err) =&gt; { console.log(&quot;Error: &quot;); console.log(err)}); </code></pre>
<p>I got empty values for CPU and Memory, when I used igztop for check running pods in iguazio/mlrun solution. See the first line in output for this pod <code>*m6vd9</code>:</p> <pre><code>[ jist @ iguazio-system 07:41:43 ]-&gt;(0) ~ $ igztop -s cpu +--------------------------------------------------------------+--------+------------+-----------+---------+-------------+-------------+ | NAME | CPU(m) | MEMORY(Mi) | NODE | STATUS | MLRun Proj. | MLRun Owner | +--------------------------------------------------------------+--------+------------+-----------+---------+-------------+-------------+ | xxxxxxxxxxxxxxxx7445dfc774-m6vd9 | | | k8s-node3 | Running | | | | xxxxxx-jupyter-55b565cc78-7bjfn | 27 | 480 | k8s-node1 | Running | | | | nuclio-xxxxxxxxxxxxxxxxxxxxxxxxxx-756fcb7f74-h6ttk | 15 | 246 | k8s-node3 | Running | | | | mlrun-db-7bc6bcf796-64nz7 | 13 | 717 | k8s-node2 | Running | | | | xxxx-jupyter-c4cccdbd8-slhlx | 10 | 79 | k8s-node1 | Running | | | | v3io-webapi-scj4h | 8 | 1817 | k8s-node2 | Running | | | | v3io-webapi-56g4d | 8 | 1827 | k8s-node1 | Running | | | | spark-worker-8d877878c-ts2t7 | 8 | 431 | k8s-node1 | Running | | | | provazio-controller-644f5784bf-htcdk | 8 | 34 | k8s-node1 | Running | | | </code></pre> <p>and It also was not possible to see performance metrics (CPU, Memory, I/O) for this pod in Grafana.</p> <p>Do you know, how can I resolve this issue without whole node restart (and what is the root cause)?</p>
<p>The troubleshooting steps below will help you resolve the issue:</p> <p>1. Check if you can see the CPU and memory of the pod using the describe command:</p> <pre><code>kubectl describe pods my-pod </code></pre> <p>2. Check if you can view the CPU and memory of all pods and nodes using the commands below:</p> <pre><code>kubectl top pod kubectl top node </code></pre> <p>3. Check if the metrics server is running by using the commands below:</p> <pre><code>kubectl get apiservices v1beta1.metrics.k8s.io kubectl get pod -n kube-system -l k8s-app=metrics-server </code></pre> <p>4. Check the CPU and memory of the pod using the Prometheus queries below:</p> <blockquote> <p>CPU Utilisation Per Pod:</p> <pre><code>sum(irate(container_cpu_usage_seconds_total{container!=&quot;POD&quot;, container=~&quot;.+&quot;}[2m])) by (pod) </code></pre> <p>RAM Usage Per Pod:</p> <pre><code>sum(container_memory_usage_bytes{container!=&quot;POD&quot;, container=~&quot;.+&quot;}) by (pod) </code></pre> </blockquote> <p>5. Check the logs of the pod and the node; if you find any errors, attach them for further troubleshooting.</p>
<p>I created new <code>config</code> file for Kubernetes from <code>Azure</code> in <code>Powershell</code> by <code>az aks get-credentials --resource-group &lt;RGName&gt; --name &lt;ClusterName&gt;</code>. Got a message that <code>Merged &quot;cluster_name&quot; as current context in C:\michu\.kube\config</code>. I copied this file into default <code>.kube\config</code> location and now when I try to run any command e.g <code>kubectl get pods</code> I am receiving:</p> <pre><code>Unable to connect to the server: getting credentials: exec: executable kubelogin not found It looks like you are trying to use a client-go credential plugin that is not installed. To learn more about this feature, consult the documentation available at: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins </code></pre> <p>What is wrong here?</p> <p>I just realized that when I type <code>kubectl config get-contexts</code> then I can see my <code>cluster_name</code> and I can even switch to this by <code>kubectl config use-context cluster_name</code> and message is correct: <code>Switched to context cluster_name</code> but then still all other commands ends with <code>Unable to connect to the server: getting credentilas: exec: executable kubelogin not found</code></p>
<p>The error implies that the <code>kubelogin</code> executable could not be located. You need to install <code>kubelogin</code> via the Azure CLI using <code>az aks install-cli</code>; after that it works as expected.</p> <p>Refer to the installation instructions on <a href="https://azure.github.io/kubelogin/install.html" rel="nofollow noreferrer">github</a> for the installation process.</p> <p><em>I tried the same requirement in my environment, and it worked for me as follows.</em></p> <pre class="lang-bash prettyprint-override"><code>az aks get-credentials --resource-group caroline --name sampleaks1 kubectl get pods </code></pre> <p><em><strong>Output:</strong></em></p> <p><img src="https://i.stack.imgur.com/ToC95.png" alt="enter image description here" /></p> <p>Once you have the <code>aks</code> credentials, running <code>kubectl get pods</code> will prompt you for Azure Kubernetes Service authentication with AAD, as shown.</p> <p><img src="https://i.stack.imgur.com/IDoCf.png" alt="enter image description here" /></p> <p><em>Just run</em> <code>kubectl</code> <em>in bash to verify whether it is installed successfully.</em></p> <p><img src="https://i.stack.imgur.com/U7ve2.png" alt="enter image description here" /></p> <p><em>If the issue still persists,</em></p> <ol> <li><p>Delete all the cache and any unused folders inside ~/.kube/ and run the aks get-credentials command again with the <code>--admin</code> flag at the end.</p> <p>Refer to this <a href="https://blog.baeke.info/2021/06/03/a-quick-look-at-azure-kubelogin/" rel="nofollow noreferrer">doc</a> by @Geert Baeke for more related information.</p> </li> <li><p>Check the kube config version and upgrade it if required.</p> </li> </ol>
<p>From my understanding (based on this guide <a href="https://snyk.io/blog/10-kubernetes-security-context-settings-you-should-understand/" rel="nofollow noreferrer">https://snyk.io/blog/10-kubernetes-security-context-settings-you-should-understand/</a>), if I have the following security context specified for some kubernetes pod</p> <pre><code>securityContext: # Enforce to be run as non-root user runAsNonRoot: true # Random values should be fine runAsUser: 1001 runAsGroup: 1001 # Automatically convert mounts to user group fsGroup: 1001 # For whatever reasons this is not working fsGroupChangePolicy: &quot;Always&quot; </code></pre> <p>I expect this pod to be run as user 1001 with the group 1001. This is working as expected, because running <code>id</code> in the container results in: <code>uid=1001 gid=1001 groups=1001</code>.</p> <p>The file system of all mounts should automatically be accessible by user group 1001, because we specified <code>fsGroup</code> and <code>fsGroupChangePolicy</code>. I guess that this also works because when running <code>ls -l</code> in one of the mounted folders, I can see that the access rights for the files look like this: <code>-rw-r--r-- 1 50004 50004</code>. Ownership is still with the uid and gid it was initialised with but I can see that the file is now readable for others.</p> <p>The question now is how can I add write permission for my <code>fsGroup</code> those seem to still be missing?</p>
<p>You need to add an additional <code>initContainer</code> to your pod/deployment/{supported_kinds} with commands that change the ownership/permissions of the volume mounted into the pod to the UID/GID the container runs as.</p> <pre class="lang-yaml prettyprint-override"><code> initContainers: - name: volumepermissions image: busybox ## any image having linux utilities mkdir,echo,chown will work. imagePullPolicy: IfNotPresent env: - name: &quot;VOLUME_DATA_DIR&quot; value: mountpath_for_the_volume command: - sh - -c - | mkdir -p $VOLUME_DATA_DIR chown -R 1001:1001 $VOLUME_DATA_DIR echo 'Volume permissions OK ✓' volumeMounts: - name: data mountPath: mountpath_for_the_volume </code></pre> <p><s>This is necessary when a container in a pod is running as a user other than root and needs write permissions on a mounted volume.</s></p> <p>If this is a Helm chart, this init container can be created as a template and reused in all the pods/deployments/{supported kinds} that need to change volume permissions.</p> <p>Update: As mentioned by @The Fool, it should already work with your current setup if you are using Kubernetes v1.23 or greater. As of v1.23 the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods" rel="nofollow noreferrer">securityContext.fsGroup and securityContext.fsGroupChangePolicy</a> features went GA/stable.</p>
<p>When low on resources kubernetes starts to re-create pods but newer pods also fail, so they keep growing in number. The cluster becomes unusable. This seems an illogical behaviour. Is it possible to prevent it ? Is it possible to recover without deleting everything ?</p> <pre><code>light@o-node0:~/lh-orchestrator$ k get pod NAME READY STATUS RESTARTS AGE aa344-detect-5cd757f65d-8kz2n 0/1 ContainerStatusUnknown 536 (62m ago) 46h bb756-detect-855f6bcc78-jnfzd 0/1 ContainerStatusUnknown 8 (59m ago) 75m aa344-analyz-5cc6c59d6c-rchkm 0/1 ContainerStatusUnknown 1 46h lh-graphql-77fc996db5-8qcxl 0/1 ContainerStatusUnknown 1 (2d ago) 2d lh-pgadmin-5b598d4d4-shjbz 0/1 ContainerStatusUnknown 1 2d bb756-analyz-8cd7c48f7-k2xh9 0/1 ContainerStatusUnknown 1 75m lh-postgres-698bc448bd-9vkqp 0/1 ContainerStatusUnknown 1 2d lh-pgadmin-5b598d4d4-c4ts4 0/1 ContainerStatusUnknown 1 54m lh-graphql-77fc996db5-btvzx 0/1 ContainerStatusUnknown 1 54m lh-postgres-698bc448bd-99m55 0/1 ContainerStatusUnknown 1 54m aa344-detect-5cd757f65d-qmvcc 0/1 ContainerStatusUnknown 1 58m bb756-detect-855f6bcc78-7lc7g 0/1 ContainerStatusUnknown 1 56m lh-graphql-77fc996db5-7lbms 1/1 Running 0 34m lh-pgadmin-5b598d4d4-l6f7s 0/1 ContainerStatusUnknown 1 34m aa344-analyz-5cc6c59d6c-78ltt 0/1 ContainerStatusUnknown 1 (17m ago) 55m lh-postgres-698bc448bd-gjbf2 0/1 ContainerStatusUnknown 1 34m aa344-detect-5cd757f65d-cbspd 0/1 ContainerStatusUnknown 1 33m bb756-detect-855f6bcc78-qvqsf 0/1 ContainerStatusUnknown 1 32m lh-pgadmin-5b598d4d4-4znww 1/1 Running 0 17m lh-postgres-698bc448bd-xxm28 1/1 Running 0 16m aa344-analyz-5cc6c59d6c-h7vfc 1/1 Running 3 (9m41s ago) 16m bb756-analyz-8cd7c48f7-4tdcp 1/1 Running 7 (10m ago) 54m bb756-detect-855f6bcc78-fgpzx 0/1 Pending 0 2s bb756-detect-855f6bcc78-t4p4q 0/1 ContainerStatusUnknown 1 16m aa344-detect-5cd757f65d-cd6gl 0/1 ContainerStatusUnknown 1 16m aa344-detect-5cd757f65d-dwhf6 0/1 Pending 0 1s </code></pre>
<p>Before deleting the pods, check why they are failing to be created; if it is due to a resource (memory/disk) shortage, follow the steps below:</p> <ol> <li><p>If you are using Docker as the container runtime, run <code>docker system prune -a</code> to clean up space taken by Docker so the node gets some room back; then drain the node and restart Docker.</p> </li> <li><p>A container can write an arbitrary amount of data to the filesystem. Set a <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">quota</a> (<code>limits.ephemeral-storage, requests.ephemeral-storage</code>) to limit this.</p> </li> <li><p>You may need to add more storage if Kubernetes genuinely needs more space.</p> </li> </ol> <p><strong>For Deployment:</strong> scale the deployment down so the failing pods stop being recreated; when you scale down, Kubernetes deletes the excess pods. Then scale it back up so Kubernetes creates fresh replicas (see the scale commands at the end of this answer).</p> <p>You can also delete all the pods which are in the Failed phase, without scaling the deployment, by running the command below:</p> <pre><code>kubectl delete pod --field-selector=status.phase==Failed </code></pre> <p>You can find more methods for deleting and restarting pods in this <a href="https://komodor.com/learn/kubectl-restart-pod/" rel="nofollow noreferrer">blog</a> by Oren Ninio.</p> <p>Generally it is recommended to use an <a href="https://www.kubecost.com/kubernetes-autoscaling/kubernetes-cluster-autoscaler/" rel="nofollow noreferrer">autoscaler</a> to manage deployments.</p>
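<p>For the scale-down/scale-up approach above, the commands look like this (the deployment name and replica count are placeholders; take them from <code>kubectl get deployments</code>):</p> <pre><code>kubectl scale deployment &lt;deployment-name&gt; --replicas=0
# wait for the failed pods to be cleaned up, then restore the original count
kubectl scale deployment &lt;deployment-name&gt; --replicas=&lt;desired-count&gt;
</code></pre>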
<p>I am familiar with Kubernetes <a href="https://kubernetes.io/docs/tasks/administer-cluster/limit-storage-consumption/" rel="nofollow noreferrer">documentation</a> that describes how to setup limits for PVC. However, what if the container is not assigned PVC?</p> <p>Suppose a Kubernetes container that simply defines:</p> <pre><code>- image: 'redis:7' name: redis </code></pre> <p>... I keep writing data to this Redis instance.</p> <ul> <li>How do I set a quota to ensure that the container does not use more than allocated storage?</li> <li>How to inspect how much storage is already used?</li> </ul> <p>I have tried setting <code>ResourceQuota</code> for ephemeral resources such as:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ResourceQuota metadata: labels: # {{ include &quot;app.resource_labels&quot; . | indent 4 }} name: '{{ .Release.Name }}' spec: hard: configmaps: 10 limits.cpu: 4 limits.ephemeral-storage: 1Gi limits.memory: 10Gi pods: 30 secrets: 5 services: 20 </code></pre> <p>However, when inspecting quota, it always says 0 for <code>ephemeral-storage</code>.</p> <pre><code>kubectl describe quota Name: gaia-review-contra-resource-quota-c79e5b3c Namespace: gaia-review-c79e5b3c Resource Used Hard -------- ---- ---- configmaps 2 10 limits.cpu 21 4 limits.ephemeral-storage 0 1Gi limits.memory 25576Mi 10Gi pods 16 30 secrets 4 5 services 8 20 </code></pre> <p>Therefore, I suspect that something else is not working as it should or I am looking at the wrong place.</p> <p>Meanwhile, the VMs that are running these pods is experiencing disk pressure.</p> <p><a href="https://i.stack.imgur.com/CUQ8B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CUQ8B.png" alt="VMs" /></a></p> <p><a href="https://i.stack.imgur.com/114dZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/114dZ.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/5L8AJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5L8AJ.png" alt="enter image description here" /></a></p> <p>My next best theory is that it is actually the Docker image layers that are filling the disk space, but I am unsure how to confirm that or why resources are not being freed.</p>
<p>If the container is not using a PVC, it writes to the node's local ephemeral storage (the writable container layer, emptyDir volumes, logs) by default.</p> <p>To ensure that the container does not use more than the allocated storage, set the <strong><code>ephemeral-storage limit field spec.containers[].resources.limits.ephemeral-storage</code></strong>; a memory limit alone does not bound disk usage.</p> <p>The example below is taken from this <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage" rel="nofollow noreferrer">official kubernetes</a> doc; similarly you can set these limits on your containers.</p> <pre><code>containers: - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: ephemeral-storage: &quot;2Gi&quot; limits: ephemeral-storage: &quot;4Gi&quot; </code></pre> <p>As @larks suggested, that document contains a detailed explanation and more methods which will help you, and you can also use <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/#resource-quota-per-priorityclass" rel="nofollow noreferrer">resource quotas</a> for storage to set namespace-wide limits. Note that a ResourceQuota's Used column only sums the requests/limits that pods declare, not their actual disk consumption, which is why <code>limits.ephemeral-storage</code> shows 0 while none of your pods define an ephemeral-storage limit. Also be aware that once the quota covers ephemeral-storage, new pods must declare it (or pick up a default from a LimitRange, as sketched below) or they will be rejected.</p> <p>For more information you can also refer to the <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#example-1" rel="nofollow noreferrer">Resource Management for Pods and Containers</a> doc.</p>
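<p>A hedged sketch of such a LimitRange (the values are examples and the namespace is taken from your question; tune both to your workloads):</p> <pre><code>apiVersion: v1
kind: LimitRange
metadata:
  name: ephemeral-storage-defaults
  namespace: gaia-review-c79e5b3c
spec:
  limits:
  - type: Container
    default:
      ephemeral-storage: 512Mi      # default limit applied to containers that declare none
    defaultRequest:
      ephemeral-storage: 256Mi      # default request used for scheduling
</code></pre>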
<p>Hi I have added secret in my hashi corp vault in the below path</p> <p>cep-kv/dev/sqlpassword</p> <p>I am trying to access secret in my manifest as below</p> <pre><code>spec: serviceAccountName: default containers: # List - name: cep-container image: myinage:latest env: - name: AppSettings__Key value: vault:cep-kv/dev/sqlpassword#sqlpassword </code></pre> <p>This is throwing error below</p> <pre><code>failed to inject secrets from vault: failed to read secret from path: cep-kv/dev/sqlpassword: Error making API request.\n\nURL: GET https://vaultnet/v1/cep-kv/dev/sqlpassword?version=-1\nCode: 403. Errors:\n\n* 1 error occurred:\n\t* permission denied\n\n&quot; app=vault-env </code></pre> <p>Is the path I am trying to access is correct value:</p> <blockquote> <p>vault:cep-kv/dev/sqlpassword#sqlpassword</p> </blockquote> <p>I tried with below path too</p> <pre><code>value: vault:cep-kv/dev/sqlpassword </code></pre> <p>This says secret not found in respective path. Can someone help me to get secret from hashi corp vault. Any help would be appreciated. Thanks</p>
<p>As you are getting a 403 (permission denied), you need to configure Kubernetes authentication in Vault. You can configure it with the following steps:</p> <ol> <li>Enable the Kubernetes auth method:</li> </ol> <blockquote> <p><code>vault auth enable kubernetes</code></p> </blockquote> <ol start="2"> <li>Configure the Kubernetes authentication method with the location of the Kubernetes API</li> </ol> <pre><code>vault write auth/kubernetes/config \ kubernetes_host=https://192.168.99.100:&lt;your TCP port or blank for 443&gt; </code></pre> <ol start="3"> <li><p>Create a named role. Note that <code>bound_service_account_names</code> must match the service account your pod actually runs with (your manifest uses <code>serviceAccountName: default</code>), otherwise the login is still denied:</p> <pre><code>vault write auth/kubernetes/role/demo \ bound_service_account_names=myapp \ bound_service_account_namespaces=default \ policies=default \ ttl=1h </code></pre> </li> <li><p>Write out the &quot;myapp&quot; policy that enables the &quot;read&quot; capability for secrets at your path:</p> <pre><code>vault policy write myapp - &lt;&lt;EOF
path &quot;yourpath&quot; {
  capabilities = [&quot;read&quot;]
}
EOF
</code></pre> </li> </ol> <p>For more information follow <a href="https://developer.hashicorp.com/vault/docs/auth/kubernetes#configuration" rel="nofollow noreferrer">Configuration</a>. Here is a <a href="https://blog.cloudthat.com/detailed-guide-to-securely-manage-secrets-for-kubernetes-using-hashicorp-vault/" rel="nofollow noreferrer">blog</a> explaining the usage of secrets in kubernetes.</p>
<p>I am looking for best practices to avoid <code>Pod The node had condition: [DiskPressure]</code>.</p> <p>So what I'm doing is full database export of all our views which is massive. At some point the pod runs into DiskPressure error and the k8 decides to Evict and kill it.</p> <p>What would be the best practices to handle this? There is 7GB of free space which maybe is not enough. Is just raising that the best way to go about it or are the other mechanisms to handle this type of work?</p> <p>Hope my question makes sense</p>
<p>The <code>Pod The node had condition: [DiskPressure]</code> message appears when the kubelet stops admitting new pods on the node (so they won't start) and begins evicting existing ones. Node disk pressure means that the disks attached to the node are running out of available space or inodes.</p> <p>Common reasons for node disk pressure are unused images that Kubernetes has not cleaned up yet and logs building up: if you have a long-running container with a lot of logs, they may build up enough to overload the capacity of the node disk. In your case, a massive database export written to the node's local disk can have the same effect.</p> <p><strong>Troubleshooting Node Disk Pressure:</strong></p> <p>To troubleshoot node disk pressure, you need to figure out which files are taking up the most space. You can either manually SSH into each Kubernetes node, or use a DaemonSet; you can see how to do that in this <a href="https://www.containiq.com/post/using-kubernetes-daemonsets-effectively" rel="nofollow noreferrer">link</a>.</p> <p>After installing it you can start looking at the logs of the pods that are running by executing kubectl logs -l app=disk-checker. You will see a list of files and their sizes, which will give you greater insight into what is taking up space on your nodes.</p> <p><strong>Possible solutions:</strong></p> <p>If the issue is caused by necessary application data, making it impossible to delete the files, you will have to increase the size of the node disks to ensure that there's sufficient room for the application files.</p> <p>Another solution is to find applications that have produced a lot of files that are no longer needed and simply delete the unnecessary files. For a big one-off job like your export, you can also bound and plan its disk usage explicitly (see the ephemeral-storage sketch at the end of this answer).</p> <p><strong>Adding more for your information:</strong></p> <p><strong>1) To avoid DiskPressure crashing the node:</strong></p> <p>DiskPressure triggers when either the node's root filesystem or the image filesystem crosses an eviction threshold for available disk space or inodes, which in turn causes pod eviction; refer to these <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#node-conditions" rel="nofollow noreferrer">Node conditions</a>.</p> <p>Based on the Node conditions, you should consider adjusting your kubelet parameters <code>--image-gc-high-threshold</code> and <code>--image-gc-low-threshold</code> so that there is always enough space for normal operations, and consider provisioning more disk space for your nodes, depending on your requirements (the older <code>--low-diskspace-threshold-mb</code> flag has been deprecated in favour of the eviction settings below).</p> <p><strong>2) To reduce the DiskPressure condition</strong></p> <p>Use the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubelet</a> command line arg <code>--eviction-hard</code> (mapStringString): a set of eviction thresholds (e.g. <code>memory.available&lt;1Gi</code> or <code>nodefs.available&lt;10%</code>) that, if met, trigger a pod eviction. This flag is DEPRECATED and should instead be set via the config file specified by the kubelet's <code>--config</code> flag; see <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/" rel="nofollow noreferrer">Set Kubelet parameters via a config file</a> for more information.</p>
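<p>If the export writes into the container's writable layer or an <code>emptyDir</code>, declaring ephemeral-storage requests and limits on that pod helps in two ways: the scheduler only places it on a node with enough free disk, and if it exceeds the limit the pod itself gets evicted instead of dragging the whole node into DiskPressure. A hedged sketch; the sizes are placeholders, not a recommendation for your data set:</p> <pre><code>containers:
- name: db-export
  image: &lt;your-export-image&gt;
  resources:
    requests:
      ephemeral-storage: &quot;10Gi&quot;   # scheduler reserves this much node disk
    limits:
      ephemeral-storage: &quot;20Gi&quot;   # pod is evicted if it writes more than this
</code></pre> <p>For a truly large export it is usually better to write the dump to a PersistentVolumeClaim (or stream it out of the cluster) rather than to node-local disk.</p>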
<p>I am currently using emailing alerts for our application. One of the contents of the email is the start and end time of the alert but they are displaying as UTC+0000.</p> <p>Tried using .Start.Local.Format but realized that the only time that alertmanager pod has it UTC+0000. Was wondering if there is a way I can set the pod timezone</p>
<p>You can change the timezone of pods by using <a href="https://evalle.github.io/blog/20200914-kubernetes-tz" rel="nofollow noreferrer">volumes and volumeMount</a>s to specify the timezone, like :</p> <pre><code>volumeMounts: - name: tz-config mountPath: /etc/localtime volumes: - name: tz-config hostPath: path: /usr/share/zoneinfo/America/Chicago </code></pre> <p>Or you can use the &quot;TZ&quot; environment variable in the container section inside the pod spec, to configure the desired time zone:</p> <pre><code>spec: containers: - name: foo-bar image: foobar:latest imagePullPolicy: Always env: - name: TZ value: America/Chicago </code></pre> <p>This changes the timezone displayed by the pod, which you can confirm using &quot;kubectl exec POD date&quot;.</p> <p>The Default Dashboards Timezone is utc. Attaching the <a href="https://en.wikipedia.org/wiki/List_of_tz_database_time_zones" rel="nofollow noreferrer">list of timezones</a> for reference.</p>
<p>I've tried a number of approaches of accessing my Kubernetes pods.</p> <ol> <li><p>I'm able to run this on Docker and am able to access it through http://localhost:1880</p> </li> <li><p>I created the following deployment file to run on K8s:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nodered2 labels: app: nodered2 spec: replicas: 2 selector: matchLabels: app: nodered2 template: metadata: labels: app: nodered2 spec: hostNetwork: true containers: - name: nodered2 image: localhost:1881/nodered2 imagePullPolicy: IfNotPresent ports: - containerPort: 1881 </code></pre> </li> <li><p>I'm able to see the deployment</p> </li> </ol> <p>PS C:\01. SolisPLC\01. Software\02. Kubernetes\02. NodeRed K8S&gt;kubectl get deployment</p> <pre><code>NAME READY UP-TO-DATE AVAILABLE AGE nodered2 2/2 2 2 31m </code></pre> <ol start="4"> <li>I've read that I need to forward the port as one of the ways to access the pod via my browser, so I've gotten stuck on this error:</li> </ol> <blockquote> <p>PS C:\01. SolisPLC\01. Software\02. Kubernetes\02. NodeRed K8S&gt; kubectl port-forward deployment/nodered2 1881:443 Forwarding from 127.0.0.1:1881 -&gt; 443 Forwarding from [::1]:1881 -&gt; 443 Handling connection for 1881 E0715 22:00:47.383640 10056 portforward.go:406] an error occurred forwarding 1881 -&gt; 443: error forwarding port 443 to pod af7fd02bc1d1374fcfed02adf08f9590dd82cb7230e6db523ff7211cf7dd9b2a, uid : exit status 1: 2023/07/16 02:00:47 socat[15579] E connect(11, AF=2 127.0.0.1:443, 16): Connection refused E0715 22:00:47.385258<br /> 10056 portforward.go:234] lost connection to pod</p> </blockquote> <ol start="5"> <li>I've tried different ports, I've tried to add a load balancer (unsuccessfully), etc.</li> </ol> <p>Any help on this would be greatly appreciated; it seems that most tutorials stop at this working &quot;as expected&quot;.</p> <p>NOTE: All of this is running on my local machine. I can see the deployment inside of the visual Docker app and it's running in VS Code tabs I have for Docker and K8s.</p>
<p>The troubleshooting steps below will help you resolve your issue:</p> <p>1. Check which port the pod is listening on by using <code>kubectl describe pod</code>.</p> <p>2. Ensure the port you forward to matches the port number the application inside the pod is listening on.</p> <p>3. You mentioned that you can access the app on Docker through http://localhost:1880, but your deployment specifies container port 1881. If that is not simply a typo, check the logs for more errors.</p> <p>4. Try using the pod instead of the deployment in kubectl port forwarding:</p> <pre><code>kubectl port-forward &lt;pod-name&gt; &lt;your-local-port&gt;:&lt;container-port&gt; </code></pre> <p>To get the container port run the below command:</p> <pre><code>kubectl get pod &lt;pod_name&gt; --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{&quot;\n&quot;}}' </code></pre> <p>You can find more info in this <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">official doc</a>.</p> <p>5. As you are using <code>hostNetwork: true</code>, also consider exposing the app through a Service or <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a>; you can find more information about it in this <a href="https://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/" rel="nofollow noreferrer">blog</a> authored by Ales.</p>
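<p>As a quick sketch, assuming the Node-RED container is actually listening on its default port 1880 rather than the declared 1881 (which you would confirm with <code>kubectl describe pod</code>), the port-forward would look like:</p> <pre><code># confirm the port the container actually listens on
kubectl describe pod &lt;pod-name&gt; | grep -i port

# forward a local port to the port the app listens on
kubectl port-forward deployment/nodered2 1880:1880
# then browse to http://localhost:1880
</code></pre>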
<p>I'm having a pain point with Kubernetes and I hope someone can help me out. I'm trying to see the contents of a PVC (Persistent Volume Claim), but it seems that the only way to do so is to mount the PVC into a container and check what's inside.</p> <p>As you can imagine, this is quite cumbersome and not very practical. I was wondering if there are any tools or solutions that could make this process easier?</p> <p>I've searched around but haven't found anything yet. If anyone has any ideas or suggestions, please let me know! I'm eager to hear your thoughts and experiences.</p> <p>Thank you in advance!</p>
<p>Unfortunately, there is no built-in way to do this. To get around this, you would have to attach the disk to a container or VM and inspect its contents</p> <p>You need to add the pvc to a pod and copy the files using kubectl cp or list the files using kubectl exec to check the contents of PVC.</p> <p>A simpler way is to create an inspector POD to inspect the contents of PVC and clean up the pod easily as explained in this <a href="https://frank.sauerburger.io/2021/12/01/inspect-k8s-pvc.html" rel="nofollow noreferrer">blog</a> by Frank Sauerburger.</p>
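<p>For example, a throwaway inspector pod sketch (the claim name <code>my-claim</code> is a placeholder; substitute your own PVC) that mounts the claim so you can browse it with <code>kubectl exec</code>:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  containers:
  - name: inspector
    image: busybox
    command: [&quot;sleep&quot;, &quot;3600&quot;]   # keep the pod alive while you look around
    volumeMounts:
    - name: data
      mountPath: /pvc
      readOnly: true
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-claim            # placeholder PVC name
</code></pre> <p>Then run <code>kubectl exec -it pvc-inspector -- ls -la /pvc</code> and delete the pod when you are done. Note that a ReadWriteOnce volume already attached to a pod on another node cannot be mounted by the inspector at the same time.</p>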
<p>I need to forward an exposed port of a Docker container to a remote port of a remote server. There are two approaches. One is with IP tables, but this wouldn't work for me since I'm not able to give NET_ADMIN access to my containers, which will be running on a cloud provider's Kubernete hosting platform. The second approach would be to utilize SOCAT, but that has it's own problems since it forks a process for each connection, reaching the maximum allowed open files in no time since I have thousands of concurrent connections.</p> <p>Are there any alternatives, which can forward a port like iptables does, but without NET_ADMIN requirements, and without needing to create a process for each connection?</p>
<p><strong>Check the options below:</strong></p> <p>1) If you just want to connect to a port on a specific host without the use of netcat:</p> <p>Try <code>telnet host.example.com port</code> (e.g. <code>telnet www.example.com 80</code>).</p> <p>Another possibility is <code>/dev/tcp</code>:</p> <p><code>$ echo &quot;HEAD / HTTP/1.0&quot; &gt; /dev/tcp/www.example.com/80</code></p> <p>2) There's a tiny, lightweight program called <code>redir</code> which is pretty configurable.</p> <p><code>apt-get install redir</code> to install on Debian-based distributions.</p> <p><code>redir :SRC :DEST</code> will run in the background as a daemon.</p> <p>3) <code>rinetd</code> is a daemon that redirects TCP connections. Have a look at the man page to see if it suits your needs: <a href="https://manpages.debian.org/unstable/rinetd/rinetd.8.en.html" rel="nofollow noreferrer">https://manpages.debian.org/unstable/rinetd/rinetd.8.en.html</a></p> <p>4) <code>portfwd</code> (TCP and UDP forwarding), <a href="https://portfwd.sourceforge.net/" rel="nofollow noreferrer">portfwd.sourceforge.net</a> (its latest release is from 2007 and it targets the 2.6 kernel).</p>
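<p>To make the <code>redir</code>/<code>rinetd</code> options concrete, a small sketch; the addresses and ports are placeholders, and the exact <code>redir</code> syntax differs slightly between versions:</p> <pre><code># redir: listen on local port 8080 and forward to a remote host
redir :8080 203.0.113.10:9000

# rinetd: one rule per line in /etc/rinetd.conf
# bindaddress  bindport  connectaddress  connectport
0.0.0.0        8080      203.0.113.10    9000
</code></pre> <p>rinetd in particular handles all connections from a single select-based process, which avoids the per-connection fork that makes socat run out of file descriptors.</p>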
<p>In one of my project, we have 9 PODs for a single microservice, and during the load test run that is noticing the load distribution across the 9 PODs are not even. Also, on the PODs (which have received low traffic compared to other PODs) there is a gap between the traffics. Has anyone faced this issue and advise the areas / spaces that could cause this</p> <p>All 9 PODs are hosted on a different node under the same cluster and we have 3 zones</p> <p>The load balancer algorithm used is round-robin.</p> <p>Sample flow: microservices 1 (is running in 3 PODs, which uses Nginx but not as a load balancer) -&gt; microservices 2 (is running 9 PODs, which uses node js)</p> <p>Another flow: microservices 1 (is running in 6 PODs) -&gt; microservices 2 (running in 9 PODs)</p> <p>Refer to the below screenshots,</p> <p><a href="https://i.stack.imgur.com/8xuNb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8xuNb.png" alt="Load distribution numbers across 9 PODs" /></a></p> <p><a href="https://i.stack.imgur.com/112zJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/112zJ.png" alt="Traffic at POD 6" /></a>[<img src="https://i.stack.imgur.com/TQ0ah.png" alt="Traffic at POD 4Traffic at POD 1/2" /></p> <p><a href="https://i.stack.imgur.com/gkaAf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gkaAf.png" alt="Traffic at POD 4" /></a></p> <p><a href="https://i.stack.imgur.com/TQ0ah.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TQ0ah.png" alt="Traffic at POD 1/2" /></a></p>
<p>As far as Kubernetes is concerned, the LB distributes requests at the node level, not at the pod level, and it completely disregards the number of pods on each node. Unfortunately, this is a limitation of Kubernetes. You may also have a look at the last paragraph of this documentation about traffic not being equally load balanced across pods. [1]</p> <p>Defining resources for containers [2] is important, as it allows the scheduler to make better decisions when it comes time to place pods on nodes. I recommend having a look at the following documentation [3] on how pods with resource limits are run. It mentions that a container will not be allowed to exceed its CPU limit for an extended period of time and will not be killed for it, which eventually leads to decreased performance.</p> <p>[1] <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer</a></p> <p>[2] <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers</a></p> <p>[3] <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run</a></p> <p>Regards, Anbu.</p>
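<p>As a sketch of what defining resources looks like in practice for one of your microservices (the numbers and names below are assumptions you would tune to your workload, not values taken from your cluster):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-2        # placeholder name
spec:
  replicas: 9
  selector:
    matchLabels:
      app: microservice-2
  template:
    metadata:
      labels:
        app: microservice-2
    spec:
      containers:
      - name: app
        image: registry.example.com/microservice-2:latest   # placeholder image
        resources:
          requests:
            cpu: 250m         # used by the scheduler for placement
            memory: 256Mi
          limits:
            cpu: 500m         # the container is throttled above this
            memory: 512Mi
</code></pre>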
<p>I'm using Terraform workload-identity module , to create Kubernetes service account in Google Cloud. When i apply the changes, I'm getting below warning.</p> <blockquote> <p>&quot;default_secret_name&quot; is no longer applicable for Kubernetes v1.24.0 and above │ │ with module.app-workload-identity.kubernetes_service_account_v1.main, │<br /> on ../../modules/workload-identity/main.tf line 57, in resource &quot;kubernetes_service_account_v1&quot; &quot;main&quot;: │ 57: resource &quot;kubernetes_service_account_v1&quot; &quot;main&quot; { │ │ Starting from version 1.24.0 Kubernetes does not automatically generate a token for service accounts, in this case, &quot;default_secret_name&quot; will be │ empty</p> </blockquote> <p><strong>Workload-Identity main.tf</strong></p> <pre><code>locals { service_account_tmp = var.google_service_account_email== &quot;&quot; ? &quot;projects/${var.project_id}/serviceAccounts/cloudsql-sa@${var.project_id}.iam.gserviceaccount.com&quot; : var.google_service_account_email service_id = &quot;projects/${var.project_id}/serviceAccounts/cloudsql-sa@${var.project_id}.iam.gserviceaccount.com&quot; k8s_sa_gcp_derived_name = &quot;serviceAccount:${var.project_id}.svc.id.goog[${var.namespace}/${local.output_k8s_name}]&quot; gcp_sa_email = var.google_service_account_email # This will cause terraform to block returning outputs until the service account is created k8s_given_name = var.k8s_sa_name != null ? var.k8s_sa_name : var.name output_k8s_name = var.use_existing_k8s_sa ? local.k8s_given_name : kubernetes_service_account.main[0].metadata[0].name output_k8s_namespace = var.use_existing_k8s_sa ? var.namespace : kubernetes_service_account.main[0].metadata[0].namespace } # resource &quot;google_service_account&quot; &quot;cluster_service_account&quot; { # GCP service account ids must be &lt; 30 chars matching regex ^[a-z](?:[-a-z0-9]{4,28}[a-z0-9])$ # KSA do not have this naming restriction. # account_id = substr(var.name, 0, 30) # display_name = substr(&quot;GCP SA bound to K8S SA ${local.k8s_given_name}&quot;, 0, 100) # project = var.project_id # } resource &quot;kubernetes_namespace&quot; &quot;k8s_namespace&quot; { metadata { name = var.namespace } } # resource &quot;kubernetes_secret_v1&quot; &quot;main&quot; { # metadata { # name = var.name # namespace = var.namespace # annotations = { # &quot;kubernetes.io/service-account.name&quot; = kubernetes_service_account_v1.main.metadata.0.name # &quot;kubernetes.io/service-account.namespace&quot; = kubernetes_service_account_v1.main.metadata.0.namespace # } # generate_name = &quot;${kubernetes_service_account_v1.main.metadata.0.name}-token-&quot; # } # type = &quot;kubernetes.io/service-account-token&quot; # wait_for_service_account_token = true #} resource &quot;kubernetes_service_account&quot; &quot;main&quot; { count = var.use_existing_k8s_sa ? 
0 : 1 metadata { name = var.name namespace = var.namespace annotations = { &quot;iam.gke.io/gcp-service-account&quot; = var.google_service_account_email } } } module &quot;annotate-sa&quot; { source = &quot;terraform-google-modules/gcloud/google//modules/kubectl-wrapper&quot; version = &quot;~&gt; 2.0.2&quot; enabled = var.use_existing_k8s_sa &amp;&amp; var.annotate_k8s_sa skip_download = true cluster_name = var.cluster_name cluster_location = var.location project_id = var.project_id kubectl_create_command = &quot;kubectl annotate --overwrite sa -n ${local.output_k8s_namespace} ${local.k8s_given_name} iam.gke.io/gcp-service-account=${local.gcp_sa_email}&quot; kubectl_destroy_command = &quot;kubectl annotate sa -n ${local.output_k8s_namespace} ${local.k8s_given_name} iam.gke.io/gcp-service-account-&quot; } resource &quot;google_service_account_iam_member&quot; &quot;main&quot; { service_account_id = local.service_id role = &quot;roles/iam.workloadIdentityUser&quot; member = local.k8s_sa_gcp_derived_name } </code></pre> <p>As per the <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/service_account" rel="nofollow noreferrer">this</a> documentation , I have tried to add the resource &quot;kubernetes_secret_v1&quot; to create a service account token. But still getting the same warning message.</p>
<p>From this <a href="https://github.com/hashicorp/terraform-provider-kubernetes/pull/1792" rel="nofollow noreferrer">git issue</a> kubernetes_service_account issue has been successfully fixed using this <a href="https://github.com/hashicorp/terraform-provider-kubernetes/pull/1792/files/04140ea649a2dcdaffb2da3f85dde35320fd97c8#diff-c743e045ffac6c322ed857bb5f5b6efa1b2d854c02de71996f9d937e0242dd03" rel="nofollow noreferrer">manifest</a>.</p> <p>I found this <a href="https://github.com/yasserisa/terraform-google-kubernetes-engine/commit/a1972155e856c702c13f1196a202f65b71378bde" rel="nofollow noreferrer">alternative solution</a> where changes are made using the terraform resource kubernetes_manifest to manually generate the service accounts along with their secret.</p> <p>Can you try the main.tf file and let me know if this works.</p> <p>For more information follow this <a href="https://github.com/hashicorp/terraform-provider-kubernetes/issues/1724" rel="nofollow noreferrer">Issue</a>.</p>
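<p>For reference, a sketch along the lines of the block you already have commented out, assuming a recent <code>hashicorp/kubernetes</code> provider that supports <code>wait_for_service_account_token</code>:</p> <pre><code>resource &quot;kubernetes_secret_v1&quot; &quot;sa_token&quot; {
  metadata {
    name      = &quot;${kubernetes_service_account.main[0].metadata[0].name}-token&quot;
    namespace = var.namespace
    annotations = {
      &quot;kubernetes.io/service-account.name&quot; = kubernetes_service_account.main[0].metadata[0].name
    }
  }
  type                           = &quot;kubernetes.io/service-account-token&quot;
  wait_for_service_account_token = true
}
</code></pre> <p>Note that the message you are seeing is only a warning: on Kubernetes 1.24+ the <code>default_secret_name</code> attribute of the service account will simply be empty, so creating a token secret like the one above gives you a usable token but may not silence the warning itself.</p>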
<p>I'm trying to get Redocly's <a href="https://github.com/Redocly/developer-portal-starter" rel="nofollow noreferrer">developer portal</a> running in kubernetes. I need to use nginix because the developer portal doesn't bind to anything else than localhost so I'm using nginx to proxy to localhost. I keep running into permission issues with the app running and nginx. I fixed the nginx permissions (I think) by adding this to the dockerfile (def not good but can't figure out how else):</p> <pre><code>RUN chmod -R 777 /var RUN chmod -R 777 /etc/nginx </code></pre> <p>My Dockerfile looks like this:</p> <pre><code>FROM docker.abc.cc/node:16-alpine # Define arguments and environment variables ARG ARTIFACTORY_USERNAME ARG ARTIFACTORY_PASSWORD ENV ARTIFACTORY_USERNAME=$ARTIFACTORY_USERNAME ENV ARTIFACTORY_PASSWORD=$ARTIFACTORY_PASSWORD # Add curl and bash for debugging RUN apk add --no-cache curl bash nginx # Install npm-cli-login RUN npm install -g npm-cli-login WORKDIR /usr/src/app # Log in to Artifactory RUN \ npm-cli-login \ -u $ARTIFACTORY_USERNAME \ -p $ARTIFACTORY_PASSWORD \ -e [email protected] \ -r https://artifactory.abc.cc/artifactory/api/npm/npm # Copy package files and install dependencies COPY --chown=node package*.json /usr/src/app/ RUN npm install --legacy-peer-deps # Copy application files COPY --chown=node . /usr/src/app/ # Add Nginx configuration file COPY nginx.conf /etc/nginx/nginx.conf RUN chmod -R 777 /usr/src RUN chmod -R 777 /var RUN chmod -R 777 /etc/nginx # Start Nginx and the application CMD [&quot;sh&quot;, &quot;-c&quot;, &quot;nginx &amp;&amp; npm start&quot;] # Expose the Nginx port EXPOSE 3001 </code></pre> <p>and the relevant part of my deployment looks like this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: caas-dev-portal namespace: internal-tools spec: replicas: 2 revisionHistoryLimit: 3 spec: containers: - env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace envFrom: - configMapRef: name: caas-dev-portal - configMapRef: name: abc-k8s-cluster-info - secretRef: name: apm-settings image: docker.abc.cc/caas-portal:v1test imagePullPolicy: Always lifecycle: preStop: exec: command: - /bin/sleep - &quot;30&quot; name: web ports: - containerPort: 3001 name: http-api - containerPort: 9999 name: http-management resources: limits: cpu: 1500m ephemeral-storage: 1Gi memory: 1024Mi requests: cpu: 100m ephemeral-storage: 10Mi memory: 128Mi startupProbe: failureThreshold: 45 httpGet: path: /health port: http-api periodSeconds: 4 imagePullSecrets: - name: artifactory-docker-registry restartPolicy: Always serviceAccountName: caas-dev-portal terminationGracePeriodSeconds: 40 </code></pre> <p>I then see this error in the app logs:</p> <pre><code>/usr/src/app/node_modules/yoga-layout-prebuilt/yoga-layout/build/Release/nbind.js:53 throw ex; ^ Error: EPERM: operation not permitted, copyfile '/usr/src/app/package.json' -&gt; '/usr/src/app/node_modules/@redocly/developer-portal/dist/engine/package.json' at copyFileSync (node:fs:2847:3) at runCommand (/usr/src/app/node_modules/@redocly/developer-portal/dist/cli/run-command.js:1:356) at Object.handler (/usr/src/app/node_modules/@redocly/developer-portal/dist/cli.js:2:1376) at Object.runCommand (/usr/src/app/node_modules/@redocly/developer-portal/node_modules/yargs/build/index.cjs:446:48) at Object.parseArgs [as _parseArgs] (/usr/src/app/node_modules/@redocly/developer-portal/node_modules/yargs/build/index.cjs:2697:57) at Object.get 
[as argv] (/usr/src/app/node_modules/@redocly/developer-portal/node_modules/yargs/build/index.cjs:2651:25) at Object.&lt;anonymous&gt; (/usr/src/app/node_modules/@redocly/developer-portal/dist/cli.js:2:4118) at Module._compile (node:internal/modules/cjs/loader:1198:14) at Object.Module._extensions..js (node:internal/modules/cjs/loader:1252:10) at Module.load (node:internal/modules/cjs/loader:1076:32) { errno: -1, syscall: 'copyfile', code: 'EPERM', path: '/usr/src/app/package.json', dest: '/usr/src/app/node_modules/@redocly/developer-portal/dist/engine/package.json' } npm notice npm notice New major version of npm available! 8.19.4 -&gt; 9.8.1 npm notice Changelog: &lt;https://github.com/npm/cli/releases/tag/v9.8.1&gt; npm notice Run `npm install -g [email protected]` to update! npm notice </code></pre> <p>It looks like the app doesn't have permissions to fully run but I can't quite figure out how to give it the right permissions. I've even added <code>RUN chmod -R 777 /usr/src</code> in the Dockerfile right before I run <code>CMD</code> but still get this error.</p> <p>whats the best way to fix my permissions issue so I can get both nginx and the app to run?</p>
<p>The troubleshooting steps below will help you resolve your issue:</p> <p>1. As you are using an older npm version, try cleaning the cache with <code>npm cache clean --force</code> and install the latest version of npm globally: <code>npm install -g npm@latest --force</code> or <code>npm install -g npm@9.8.1</code> (the version suggested in your log).</p> <p>2. Try to remove the read-only attribute from the project folder.</p> <p>3. As per this <a href="https://discuss.circleci.com/t/docker-build-fails-with-nonsensical-eperm-operation-not-permitted-copyfile/37364" rel="nofollow noreferrer">issue</a>, the same error can also be caused by the Docker version; there it was fixed by pinning the version with the <code>version</code> attribute in the CircleCI config:</p> <pre><code>- setup_remote_docker:
    version: 19.03.13
</code></pre> <p>4. You are using <code>RUN chmod -R 777</code>, which is not recommended, as it sets the permissions to read, write, and execute for the owner, group, and others, meaning everyone has full access to the directory and its contents.</p> <pre><code>Try sudo chown $USER:$USER -R __YOUR_PROJECT__DIRECTORY__
Or  sudo chmod -R 775 &lt;/img/folder and project/folder&gt;
</code></pre> <p>Note: If you need to change permissions on files or directories that exist outside of the container, use sudo. If you need to change permissions on files or directories within a Docker container, use a RUN instruction in the Dockerfile.</p>
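<p>A less blunt alternative to <code>chmod -R 777</code> is to give ownership to the unprivileged <code>node</code> user that the <code>node:16-alpine</code> base image already ships with. This is only a sketch of the relevant Dockerfile lines; it assumes the EPERM error comes from the app writing into <code>node_modules</code> at runtime, and note that running nginx as a non-root user needs additional setup (e.g. the unprivileged nginx image), so this covers only the Node side:</p> <pre><code># copy the app owned by the unprivileged node user
COPY --chown=node:node . /usr/src/app/

# make sure the runtime user owns everything it may write to,
# including node_modules (the failing copyfile targets it)
RUN chown -R node:node /usr/src/app

# drop root before starting the app
USER node

CMD [&quot;sh&quot;, &quot;-c&quot;, &quot;npm start&quot;]
</code></pre>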
<p>I am running an application/deployment called <em>cons1persec</em> on google kubernetes engine GKE. My application/deployment is being monitored through a controller application and autoscaled approparitely based on a metric. I can view the logs of my deployment in google cloud logs explorer through the following query :</p> <pre><code>resource.type=&quot;k8s_container&quot; resource.labels.project_id=&quot;autoscaling-kafka&quot; resource.labels.location=&quot;europe-west1-d&quot; resource.labels.cluster_name=&quot;autoscalekafka&quot; resource.labels.namespace_name=&quot;default&quot; labels.k8s-pod/app=&quot;cons1persec&quot; severity&gt;=DEFAULT </code></pre> <p>My question is about the appropriate query to get the number of pods created belonging to my application/deployment <em>cons1persec</em>, the name of pods and their creation/deletion time etc..</p> <p>Thank you.</p>
<p>When using Deployment, it makes use of deployment’s name as a prefix for a pod’s name it creates and we cannot have two deployments with the same name, so we can make use of these to query for pods belonging to a specific deployment.<br /> Refer to the below sample query which uses regular expressions/substring comparison operator to match the deployment name which is prefix of a pod’s name and reason for log creation to query for the pod’s name, creation/deletion and corresponding timestamps.</p> <p>Sample Log query:</p> <pre><code>Severity = INFO Resource.type = &quot;k8s_cluster&quot; log_name = &quot;projects/&lt;PROJECT-ID&gt;/logs/events&quot; jsonPayload.reason = (&quot;SuccessfulCreate&quot; OR &quot;SuccessfulDelete&quot;) # Using regular expressions[1] jsonPayload.metadata.name =~ &quot;&lt;WORKLOAD-NAME&gt;\S*&quot; # Using substring comparison operator[2] jsonPayload.metadata.name : &quot;WORKLOAD-NAME&quot; </code></pre> <p>[1]- <a href="https://cloud.google.com/logging/docs/view/logging-query-language#regular-expressions" rel="nofollow noreferrer">https://cloud.google.com/logging/docs/view/logging-query-language#regular-expressions</a><br /> [2]- <a href="https://cloud.google.com/logging/docs/view/logging-query-language#comparisons" rel="nofollow noreferrer">https://cloud.google.com/logging/docs/view/logging-query-language#comparisons</a></p>
<p>My React host has an IP of 10.60.160.61.</p> <p>The API endpoint that I'm trying to reach has an IP of 10.200.50.21.</p> <p>My App.js:</p> <pre><code>useEffect(() =&gt; { const fecthPods = async () =&gt; { try { const response = await axios.get(`https://kubernetes.endpoint.com/k8s/clusters/name/v1/pods/myproject`,{ headers: { 'Authorization': 'Bearer token-myToken' } }) console.log(response.data) } catch (err) { if (err.response) { // Not in the 200 response range console.log(err.response.data) console.log(err.response.status) console.log(err.response.headers) } else { console.log(`Error: ${err.message}`) } } } fecthPods() },[]) </code></pre> <p>I get two errors in the network tab of developer tools:</p> <ol> <li>401</li> <li>CORS preflight</li> </ol> <p>I can access an external API endpoint with no issue (ie. <a href="https://pokeapi.co/api/v2/pokemon/ditto" rel="nofollow noreferrer">https://pokeapi.co/api/v2/pokemon/ditto</a>). I can successfully run a curl command to the Kubernetes API endpoint by passing the Auth token in the headers parameter, but just not working with Axios. I don't think I need to run a backend express server since the API endpoint that I'm trying to reach is not on my localhost. Not sure what else to try.</p>
<p>I believe I have found the answer to this question. Seems like I will need to run my API calls through a custom proxy as mentioned in this <a href="https://forums.rancher.com/t/request-from-react-app-to-rancher-api-returning-401/14449" rel="nofollow noreferrer">post</a>.</p>
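<p>For anyone hitting the same CORS preflight wall in development, a minimal sketch of such a proxy using <code>http-proxy-middleware</code> in a Create React App project; the target URL is a placeholder for your Rancher/Kubernetes API endpoint:</p> <pre><code>// src/setupProxy.js (Create React App picks this file up automatically)
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  app.use(
    '/k8s-api',
    createProxyMiddleware({
      target: 'https://kubernetes.endpoint.com', // placeholder endpoint
      changeOrigin: true,
      secure: false, // only if the endpoint uses a self-signed certificate
      pathRewrite: { '^/k8s-api': '' },
    })
  );
};
</code></pre> <p>The React code then calls <code>axios.get('/k8s-api/k8s/clusters/...')</code>, so the browser only talks to its own origin and no CORS preflight is triggered; the Authorization header is still sent by your code as before.</p>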
<p>We are running a daily cronjob on GKE. This job is executed on spot nodes. The container respects the <code>SIGTERM</code> and gracefully shuts down. However, this is then marked as successful and not restarted. How can I ensure that this job is restarted on a different node?</p> <p>I've read <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#graceful-node-shutdown" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/architecture/nodes/#graceful-node-shutdown</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#handling-pod-and-container-failures" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/job/#handling-pod-and-container-failures</a>, but I see nothing in there that helps me.</p>
<p>By default the cron jobs in Kubernetes are not rescheduled after a node shutdown. However, you can configure the job to use a <code>restartPolicy</code> of <code>OnFailure</code> to ensure that it is rescheduled after a node shutdown.</p> <p>You need to set the restartPolicy in the pod template spec as follows:</p> <pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: myjob
spec:
  schedule: &quot;* * * * *&quot;
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: myjob
            image: nginx
            imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure
</code></pre> <p>By using this <strong>restartPolicy</strong>, if a node is shut down or the pod running the cron job terminates for any reason, the Kubernetes scheduler will automatically reschedule the cron job to run on a healthy node.</p> <p><strong>Note:</strong> It is important to ensure that the resources the cron job requires are available on the node.</p>
<p>I am trying to run kubectl commands in offline mode but it keeps saying&gt;</p> <pre><code> kubectl cordon hhpsoscr0001 The connection to the server localhost:8080 was refused - did you specify the right host or port? </code></pre> <p>I fail to understand what can be the issue. Can anyone help me on this?</p>
<p>Please execute <code>kubectl get svc</code> to see whether you get a ClusterIP-type output. If you don't, please configure your kubeconfig properly, as @David Maze suggested earlier.</p>
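<p>For reference, a short sketch of pointing kubectl at a cluster config; the <code>localhost:8080</code> error almost always means no kubeconfig is loaded, and the paths below are the usual kubeadm defaults, so adjust them to wherever your admin config actually lives:</p> <pre><code># see what kubectl currently knows about
kubectl config view
kubectl config current-context

# point kubectl at an existing admin kubeconfig (example path)
export KUBECONFIG=/etc/kubernetes/admin.conf

# or copy it to the default location
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>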
<p>Following <a href="https://kubernetes.github.io/ingress-nginx/deploy/#installation-guide" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/#installation-guide</a></p> <pre><code>kubectl create deployment demo --image=httpd --port=80 kubectl expose deployment demo kubectl create ingress demo-localhost --class=nginx \ --rule=&quot;demo.localdev.me/*=demo:80&quot; kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80 </code></pre> <p>how the domain &quot;demo.localdev.me&quot; is resolved to localhost?</p> <pre><code> &gt; curl http://demo.localdev.me:8080 &lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt; </code></pre> <p>The domain &quot;demo.localdev.me&quot; is not in the c:\Windows\System32\drivers\etc\hosts file on windows 11. Is it added to a DNS server?</p>
<p>Domain names or IP addresses can be resolved locally by adding entries to the hosts file (<code>C:\Windows\System32\drivers\etc\hosts</code> on Windows), but that is not what is happening here, since you already checked that file.</p> <p>Before adding anything, check how the name already resolves with <code>nslookup demo.localdev.me</code>. You will see it resolves to <code>127.0.0.1</code>: <code>*.localdev.me</code> is a public wildcard DNS record that points to <code>127.0.0.1</code>, which is exactly why the ingress-nginx guide uses it. No hosts-file or DNS-server entry is needed, and the request to <code>http://demo.localdev.me:8080</code> simply hits the port-forward you opened on localhost.</p> <p>If you are running on minikube instead, note as per the <a href="https://minikube.sigs.k8s.io/docs/drivers/docker/#known-issues" rel="nofollow noreferrer">official doc</a>:</p> <blockquote> <p>The ingress, and ingress-dns addons are currently only supported on Linux.</p> </blockquote> <p>If you have already used <code>minikube addons enable ingress</code>, try <code>minikube tunnel</code> and your ingress resources will be available at <code>127.0.0.1</code>, so the requests to <code>http://demo.localdev.me:8080</code> will also be sent to <code>127.0.0.1</code>.</p> <p>Refer to the <a href="https://minikube.sigs.k8s.io/docs/handbook/addons/ingress-dns/" rel="nofollow noreferrer">official doc</a> and this <a href="https://snehalhingane1998.medium.com/configuring-dns-server-for-kubernetes-clusters-384cddb4f11d" rel="nofollow noreferrer">blog</a> authored by Snehalhingane for more information.</p>
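<p>You can confirm the wildcard resolution yourself from a normal shell; this is just an illustrative check, not something the install guide requires:</p> <pre><code># should report that the name resolves to 127.0.0.1
nslookup demo.localdev.me

# the equivalent check in Windows PowerShell
Resolve-DnsName demo.localdev.me
</code></pre>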
<p>I'm trying to update the certificate of the Load balancer in GKE from Google-managed to self-managed.</p> <p>I was following up on this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">docs</a> to create Google-managed certificates, but I didn't find any docs for creating self-managed certificates. I am not sure how to do this. I would appreciate it if someone could tell me what should I do?</p>
<p>Self-managed SSL certificates are certificates that you obtain, provision, and renew yourself. You can use this resource to secure communication between clients and your load balancer.</p> <p>Make sure that you have the domain names that you want to use for your self-managed SSL certificate. If you're using Google Domains, see Step 1: <a href="https://cloud.google.com/dns/docs/tutorials/create-domain-tutorial#register-domain" rel="nofollow noreferrer">Register a domain name using Google Domains</a>.</p> <p>Step 1: Create a private key and certificate</p> <p>Step 2: Create a self-managed SSL certificate resource</p> <p>Step 3: Associate an SSL certificate with a target proxy</p> <p>Step 4: Update the DNS A and AAAA records to point to the load balancer's IP address</p> <p>Step 5: Test with OpenSSL</p> <p>After the <a href="https://cloud.google.com/load-balancing/docs/ssl-certificates/self-managed-certs#certificate-resource-status" rel="nofollow noreferrer">certificate and domain statuses are active</a>, it can take up to 30 minutes for your load balancer to begin using your self-managed SSL certificate.</p> <p>For detailed information follow <a href="https://cloud.google.com/load-balancing/docs/ssl-certificates/self-managed-certs#create-key-and-cert" rel="nofollow noreferrer">Use self-managed SSL certificates</a>.</p>
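<p>A condensed sketch of steps 1–3 using <code>openssl</code> and <code>gcloud</code>; the certificate, proxy and domain names are placeholders, and the flags assume a global external HTTPS load balancer:</p> <pre><code># 1. Create a private key and a self-signed certificate
openssl genrsa -out key.pem 2048
openssl req -new -x509 -key key.pem -out cert.pem -days 365 \
    -subj &quot;/CN=www.example.com&quot;

# 2. Create the self-managed SSL certificate resource
gcloud compute ssl-certificates create my-ssl-cert \
    --certificate=cert.pem --private-key=key.pem --global

# 3. Attach it to the load balancer's target HTTPS proxy
gcloud compute target-https-proxies update my-https-proxy \
    --ssl-certificates=my-ssl-cert
</code></pre>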
<p>I'm having some problems with the Kubernetes Dashboard not showing any information when I tried to access it:</p> <p><a href="https://i.stack.imgur.com/Pr5Hy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pr5Hy.png" alt="enter image description here" /></a></p> <p>I checked the version that I'm using:</p> <pre><code>$ kubectl version --short Flag --short has been deprecated, and will be removed in the future. The --short output will become the default. Client Version: v1.26.0 Kustomize Version: v4.5.7 Server Version: v1.26.0+k3s1 </code></pre> <p>Both the client and the server versions are indeed the same, so I'm not sure what is causing the Dashboard UI to not show any information? Any ideas?</p> <p>EDIT: I even lowered the version of my kubectl and my k3s server, but still I do not see the Unknown error go away.</p> <pre><code>Client Version: v1.25.0 Kustomize Version: v4.5.7 Server Version: v1.25.6+k3s1 $ kubectl get clusterrolebinding admin-user NAME ROLE AGE admin-user ClusterRole/cluster-admin 19h $ kubectl get sa -n kubernetes-dashboard NAME SECRETS AGE default 0 19h kubernetes-dashboard 0 19h admin-user 0 19h </code></pre>
<p>This might happen for various reasons, for example if either the <strong>ClusterRoleBinding</strong> or the <strong>ServiceAccount</strong> was not created properly. Sometimes it is related to version compatibility as well. Try these troubleshooting steps:</p> <ol> <li>If you are creating the <strong>Service Account</strong> and <strong>ClusterRoleBinding</strong> manually, make sure you create them in the proper namespace and give them the proper configuration and roles.</li> </ol> <p>You can use these commands to get the details about the SA and role bindings:</p> <pre><code>$ kubectl get sa -n kubernetes-dashboard
NAME         SECRETS   AGE
admin-user   0         61m

$ kubectl get clusterrolebinding admin-user
NAME         ROLE                        AGE
admin-user   ClusterRole/cluster-admin   62m
</code></pre> <ol start="2"> <li>Check whether the <code>dashboard</code> version is compatible with the <code>server</code> version. You can find the compatibility information on the <a href="https://github.com/kubernetes/dashboard/releases" rel="nofollow noreferrer">official kubernetes-dashboard page</a> on GitHub. If the versions are not compatible, change the dashboard to a release that supports your server version (in your case, one that supports 1.25).</li> <li>Check this <a href="https://github.com/AcalephStorage/kubernetes-dashboard#documentation" rel="nofollow noreferrer">official documentation</a> for detailed kubernetes-dashboard troubleshooting.</li> </ol> <p>These SO links cover similar issues: <a href="https://stackoverflow.com/questions/68885798/kubernetes-dashboard-web-ui-has-nothing-to-display">SO1</a>, <a href="https://stackoverflow.com/questions/59141055/kubernetes-dashboard-unknown-server-error-after-login">SO2</a></p>
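<p>For reference, a sketch of the admin-user ServiceAccount and ClusterRoleBinding as they are usually created for the dashboard, mirroring the upstream &quot;creating sample user&quot; guide (adjust the names if yours differ):</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
</code></pre> <p>On 1.24+ the login token is then generated with <code>kubectl -n kubernetes-dashboard create token admin-user</code>.</p>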
<p>We need to disable the automount of service account from our existing deployments in AKS cluster. There are 2 ways to do by adding the property &quot;automountserviceaccount : false&quot; in either in the service account manifest or pod template.</p> <p>We are using separate service account specified in our application deployments, however when we looked in the namespace, there are default service account also created.</p> <p>So inorder to secure our cluster, do we need to disable the automount property for both default and application specific service accounts?.</p> <p>Since our app already live, will there be any impact by adding this to the service account s.</p> <p>How to know the used service accounts of a pod and it's dependencies ?</p>
<blockquote> <p>So inorder to secure our cluster, do we need to disable the automount property for both default and application specific service accounts?.</p> </blockquote> <p>The design behind the <code>default</code> ServiceAccount is that it does not have any rights unless you give them some. So from a security point of view there is not much need to disable the mount unless you granted them access for some reason. Instead, whenever an application truly needs some access, go ahead and create a ServiceAccount for that particular application and grant it the permissions it needs via RBAC.</p> <blockquote> <p>Since our app already live, will there be any impact by adding this to the service account s.</p> </blockquote> <p>In case you truly want to disable the mount there won't be an impact on your application if it didn't use the ServiceAccount beforehand. What is going to happen though, is that a new Pod will be created and the existing one is being delete. However, if you properly configured readinessProbes and a rolling update strategy, then Kubernetes will ensure that there will be no downtime.</p> <blockquote> <p>How to know the used service accounts of a pod and it's dependencies ?</p> </blockquote> <p>You can check what ServiceAccount a Pod is mounting by executing <code>kubectl get pods &lt;pod-name&gt; -o yaml</code>. The output is going to show you the entirety of the Pod's manifest and the field <code>spec.serviceAccountName</code> contains information on which ServiceAccount the Pod is mounting.</p>
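<p>To make the two places where the automount can be disabled concrete, a short sketch (the names are placeholders):</p> <pre><code># Option 1: on the ServiceAccount itself
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa                 # placeholder name
automountServiceAccountToken: false
---
# Option 2: on the pod template inside your Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app                    # placeholder name
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      serviceAccountName: app-sa
      automountServiceAccountToken: false
      containers:
      - name: app
        image: nginx           # placeholder image
</code></pre> <p>The pod-level setting takes precedence over the ServiceAccount-level one, so you can disable the mount per workload without touching the account itself.</p>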
<p>I am running EKS cluster in the AWS. The version is 1.25. Today I start to get this message:</p> <pre><code>Internal error occurred: failed calling webhook &quot;vingress.elbv2.k8s.aws&quot;: failed to call webhook: Post &quot;https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1-ingress?timeout=10s&quot;: x509: certificate signed by unknown authority </code></pre> <p>How I can fix that?</p>
<p>The troubleshooting steps below will help you resolve your issue:</p> <p>1. As per this <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2462" rel="nofollow noreferrer">git issue</a>, allowing port 9443/TCP in the security group attached to the worker node EC2 instances should resolve your issue. You can do it either in the <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2039#issuecomment-1189524910" rel="nofollow noreferrer">console via the EC2 security group</a> or by adding the Terraform code below:</p> <pre><code>node_security_group_additional_rules = {
  ingress_allow_access_from_control_plane = {
    type                          = &quot;ingress&quot;
    protocol                      = &quot;tcp&quot;
    from_port                     = 9443
    to_port                       = 9443
    source_cluster_security_group = true
    description                   = &quot;Allow access from control plane to webhook port of AWS load balancer controller&quot;
  }
}
</code></pre> <p>2. Redeploy the service account on the cluster.</p> <p>3. Check the logs for more information.</p> <p>4. Since you are also getting <code>x509: certificate signed by unknown authority</code>, check this <a href="https://www.positioniseverything.net/x509-certificate-signed-by-unknown-authority/" rel="nofollow noreferrer">document</a> curated by the site Position is Everything for troubleshooting. Verify that the right webhook configuration has been set up, that the correct endpoints are being used, and that the certificates are properly installed and not expired. If the certificates have expired, renew them to clear this issue.</p>
<p>I have my app deployed to Kubernetes and it's producing some logs. I can see the logs by running <code>kubectl logs -f &lt;pod-id&gt; -n staging</code>, but I can't find where the logs are physically located on the pod. The <code>/var/log/</code> folder is empty, and I can't find the logs anywhere else on the pod either.</p> <p>Why is this happening, and where should the logs be?</p>
<p>As @Achraf Bentabib said,</p> <p>Kubernetes creates a directory structure to help you find logs based on Pods, so you can find the container logs for each Pod running on a node at</p> <pre><code>/var/log/pods/&lt;namespace&gt;_&lt;pod_name&gt;_&lt;pod_id&gt;/&lt;container_name&gt;/ </code></pre> <ol> <li><p>Identify the node on which the Pod is running:</p> <p><code>kubectl get pod pod-name -o wide</code></p> </li> <li><p>SSH onto that node; you can check which logging driver is being used by the node with:</p> </li> </ol> <p>If you are using docker:</p> <pre><code>docker info | grep -i logging </code></pre> <p>If you have a kubectl ssh plugin installed, you can reach the node with:</p> <pre><code>kubectl ssh node NODE_NAME </code></pre> <ol start="3"> <li><p>If the logging driver writes to file, you can check the current output for a specific Pod by looking up the container id of that Pod; to do so, on a control-plane node:</p> <p><code>kubectl get pod pod-name -ojsonpath='{.status.containerStatuses[0].containerID}'</code></p> </li> </ol> <p>Example:</p> <pre><code>/var/log/containers/&lt;pod-name&gt;_&lt;namespace&gt;_&lt;container-name&gt;-&lt;container-id&gt;.log -&gt; /var/log/pods/&lt;some-uuid&gt;/&lt;container-name&gt;_0.log </code></pre>
<p>I am new to istio and I want to expose three services and route traffic to those services based on the port number passed to &quot;website.com:port&quot; or subdomain.</p> <p>services deployment config files:</p> <pre><code> apiVersion: v1 kind: Service metadata: name: visitor-service labels: app: visitor-service spec: ports: - port: 8000 nodePort: 30800 targetPort: 8000 selector: app: visitor-service --- apiVersion: apps/v1 kind: Deployment metadata: name: visitor-service spec: replicas: 1 selector: matchLabels: app: visitor-service template: metadata: labels: app: visitor-service spec: containers: - name: visitor-service image: visitor-service ports: - containerPort: 8000 </code></pre> <p>second service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: auth-service labels: app: auth-service spec: ports: - port: 3004 nodePort: 30304 targetPort: 3004 selector: app: auth-service --- apiVersion: apps/v1 kind: Deployment metadata: name: auth-service spec: replicas: 1 selector: matchLabels: app: auth-service template: metadata: labels: app: auth-service spec: containers: - name: auth-service image: auth-service ports: - containerPort: 3004 </code></pre> <p>Third one:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: gateway labels: app: gateway spec: ports: - port: 8080 nodePort: 30808 targetPort: 8080 selector: app: gateway --- apiVersion: apps/v1 kind: Deployment metadata: name: gateway spec: replicas: 1 selector: matchLabels: app: gateway template: metadata: labels: app: gateway spec: containers: - name: gateway image: gateway ports: - containerPort: 8080 </code></pre> <p>If someone can help setting up the gateway and virtual service configuration it would be great.</p>
<p>It seems like you simply want to expose your applications, for that reason istio seems like a total overkill since it comes with a lot of overhead that you won't be using. Regardless of whether you want to use istio as your default ingress or any other ingress-controller (nginx, traefik, ...) the following construct applies to all of them: Expose the ingress-controller via a service of type <code>NodePort</code> or <code>LoadBalancer</code>, depending on your infrastructure. In a cloud environment the latter one will most likely work the best for you (if on GKE, AKS, EKS, ...). Once it is exposed set up a DNS A record to point to the external IP address. Afterwards you can start configuring your ingress, depending on which ingress-controller you chose the following YAML may need some adjustments (example is given for istio):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: istio name: ingress spec: rules: - host: httpbin.example.com http: paths: - path: / pathType: Prefix backend: serviceName: httpbin servicePort: 8000 </code></pre> <p>If a request for something like <code>httpbin.example.com</code> comes in to your ingress-controller it is going to send the request to a service named <code>httpbin</code> on port <code>8000</code>. As can be seen in the YAML posted above, the <code>rules</code> and <code>paths</code> field take a list (indicated by the <code>-</code> in the next line). To expose multiple services simply add a new entry to the list, e.g.:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: istio name: ingress spec: rules: - host: httpbin.example.com http: paths: - path: /httpbin pathType: Prefix backend: serviceName: httpbin servicePort: 8000 - path: /apache pathType: Prefix backend: serviceName: apache servicePort: 8080 </code></pre> <p>This is going to send requests like <code>httpbin.example.com/httpbin/</code> to <code>httpbin</code> and <code>httpbin.example.com/apache/</code> to <code>apache</code>.</p> <p>For further information see:</p> <ul> <li><a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/</a></li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></li> </ul>
<p>I am trying to start a pod in privileged mode using the following manifest but it doesn't work for me.</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: ftp
spec:
  privileged: true
  hostNetwork: true
  containers:
  - name: ftp
    image: 84d3f7ba5876/ftp-run
</code></pre>
<p><strong><code>privileged: true</code></strong> needs to be in <strong><code>securityContext</code></strong> in the spec section of the pod template.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: ftp spec: hostNetwork: true containers: - name: ftp image: 84d3f7ba5876/ftp-run securityContext: privileged: true </code></pre> <p>You can refer to this <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context" rel="nofollow noreferrer">doc</a> for detailed information for privileged mode</p>
<p>I am working through &quot;learn kubernetes the hard way&quot; and am at the &quot;bootstrapping the etcd cluster&quot; step: <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/07-bootstrapping-etcd.md" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/07-bootstrapping-etcd.md</a></p> <p>I have run into a command that causes a timeout:</p> <pre><code>somersbmatthews@controller-0:~$ { sudo systemctl daemon-reload; sudo systemctl enable etcd; sudo systemctl start etcd; } Job for etcd.service failed because a timeout was exceeded. See &quot;systemctl status etcd.service&quot; and &quot;journalctl -xe&quot; for details. </code></pre> <p>Here I follow the above recommendations:</p> <p>This is the first thing the CLI asked me to check:</p> <pre><code>somersbmatthews@controller-0:~$ systemctl status etcd.service ● etcd.service - etcd Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled) Active: activating (start) since Wed 2020-12-02 03:15:05 UTC; 34s ago Docs: https://github.com/coreos Main PID: 49251 (etcd) Tasks: 8 (limit: 9544) Memory: 10.2M CGroup: /system.slice/etcd.service └─49251 /usr/local/bin/etcd --name controller-0 --cert-file=/etc/etcd/kubernetes.pem --key-file=/etc/etcd/kubernetes-key.pem --peer-cert-file&gt; Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 is starting a new election at term 570 Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 became candidate at term 571 Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 571 Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a&gt; Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a&gt; Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 is starting a new election at term 571 Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 became candidate at term 572 Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 572 Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a&gt; Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a&gt; </code></pre> <p>The second thing the CLI asked me to check:</p> <pre><code>somersbmatthews@controller-0:~$ journalctl -xe -- A stop job for unit etcd.service has finished. -- -- The job identifier is 3597 and the job result is done. Dec 02 03:05:32 controller-0 systemd[1]: Starting etcd... -- Subject: A start job for unit etcd.service has begun execution -- Defined-By: systemd -- Support: http://www.ubuntu.com/support -- -- A start job for unit etcd.service has begun execution. -- -- The job identifier is 3597. 
Dec 02 03:05:32 controller-0 etcd[48861]: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead Dec 02 03:05:32 controller-0 etcd[48861]: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead Dec 02 03:05:32 controller-0 etcd[48861]: etcd Version: 3.4.10 Dec 02 03:05:32 controller-0 etcd[48861]: Git SHA: 18dfb9cca Dec 02 03:05:32 controller-0 etcd[48861]: Go Version: go1.12.17 Dec 02 03:05:32 controller-0 etcd[48861]: Go OS/Arch: linux/amd64 Dec 02 03:05:32 controller-0 etcd[48861]: setting maximum number of CPUs to 2, total number of available CPUs is 2 Dec 02 03:05:32 controller-0 etcd[48861]: the server is already initialized as member before, starting as etcd member... Dec 02 03:05:32 controller-0 etcd[48861]: peerTLS: cert = /etc/etcd/kubernetes.pem, key = /etc/etcd/kubernetes-key.pem, trusted-ca = /etc/etcd/ca.pem, cli&gt; Dec 02 03:05:32 controller-0 etcd[48861]: name = controller-0 Dec 02 03:05:32 controller-0 etcd[48861]: data dir = /var/lib/etcd Dec 02 03:05:32 controller-0 etcd[48861]: member dir = /var/lib/etcd/member Dec 02 03:05:32 controller-0 etcd[48861]: heartbeat = 100ms Dec 02 03:05:32 controller-0 etcd[48861]: election = 1000ms Dec 02 03:05:32 controller-0 etcd[48861]: snapshot count = 100000 Dec 02 03:05:32 controller-0 etcd[48861]: advertise client URLs = https://10.240.0.10:2379 Dec 02 03:05:32 controller-0 etcd[48861]: initial advertise peer URLs = https://10.240.0.10:2380 Dec 02 03:05:32 controller-0 etcd[48861]: initial cluster = Dec 02 03:05:32 controller-0 etcd[48861]: restarting member f98dc20bce6225a0 in cluster 3e7cc799faffb625 at commit index 3 Dec 02 03:05:32 controller-0 etcd[48861]: raft2020/12/02 03:05:32 INFO: f98dc20bce6225a0 switched to configuration voters=() Dec 02 03:05:32 controller-0 etcd[48861]: raft2020/12/02 03:05:32 INFO: f98dc20bce6225a0 became follower at term 183 Dec 02 03:05:32 controller-0 etcd[48861]: raft2020/12/02 03:05:32 INFO: newRaft f98dc20bce6225a0 [peers: [], term: 183, commit: 3, applied: 0, lastindex: &gt; Dec 02 03:05:32 controller-0 etcd[48861]: simple token is not cryptographically signed Dec 02 03:05:32 controller-0 etcd[48861]: starting server... [version: 3.4.10, cluster version: to_be_decided] Dec 02 03:05:32 controller-0 etcd[48861]: raft2020/12/02 03:05:32 INFO: f98dc20bce6225a0 switched to configuration voters=(4203990652121993521) Dec 02 03:05:32 controller-0 etcd[48861]: added member 3a57933972cb5131 [https://10.240.0.12:2380] to cluster 3e7cc799faffb625 Dec 02 03:05:32 controller-0 etcd[48861]: starting peer 3a57933972cb5131... 
Dec 02 03:05:32 controller-0 etcd[48861]: started HTTP pipelining with peer 3a57933972cb5131 Dec 02 03:05:32 controller-0 etcd[48861]: started streaming with peer 3a57933972cb5131 (writer) Dec 02 03:05:32 controller-0 etcd[48861]: started streaming with peer 3a57933972cb5131 (writer) Dec 02 03:05:32 controller-0 etcd[48861]: started peer 3a57933972cb5131 somersbmatthews@controller-0:~$ journalctl -xe Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 is starting a new election at term 224 Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 became candidate at term 225 Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 225 Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a&gt; Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a&gt; Dec 02 03:06:32 controller-0 etcd[48861]: health check for peer 3a57933972cb5131 could not connect: dial tcp 10.240.0.12:2380: connect: connection refused Dec 02 03:06:32 controller-0 etcd[48861]: health check for peer ffed16798470cab5 could not connect: dial tcp 10.240.0.11:2380: connect: connection refused Dec 02 03:06:32 controller-0 etcd[48861]: health check for peer ffed16798470cab5 could not connect: dial tcp 10.240.0.11:2380: connect: connection refused Dec 02 03:06:32 controller-0 etcd[48861]: health check for peer 3a57933972cb5131 could not connect: dial tcp 10.240.0.12:2380: connect: connection refused Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 is starting a new election at term 225 Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 became candidate at term 226 Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 226 Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a&gt; Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a&gt; Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 is starting a new election at term 226 Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 became candidate at term 227 Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 227 Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a&gt; Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a&gt; Dec 02 03:06:35 controller-0 etcd[48861]: publish error: etcdserver: request timed out Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 is starting a new election at term 227 Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 became candidate at term 228 Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: 
f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 228 Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a&gt; Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a&gt; Dec 02 03:06:37 controller-0 etcd[48861]: health check for peer 3a57933972cb5131 could not connect: dial tcp 10.240.0.12:2380: connect: connection refused Dec 02 03:06:37 controller-0 etcd[48861]: health check for peer ffed16798470cab5 could not connect: dial tcp 10.240.0.11:2380: connect: connection refused Dec 02 03:06:37 controller-0 etcd[48861]: health check for peer ffed16798470cab5 could not connect: dial tcp 10.240.0.11:2380: connect: connection refused Dec 02 03:06:37 controller-0 etcd[48861]: health check for peer 3a57933972cb5131 could not connect: dial tcp 10.240.0.12:2380: connect: connection refused Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 is starting a new election at term 228 Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 became candidate at term 229 Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 229 Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a&gt; Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a&gt; Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 is starting a new election at term 229 Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 became candidate at term 230 Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 230 Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a&gt; Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a&gt; Dec 02 03:06:41 controller-0 etcd[48861]: raft2020/12/02 03:06:41 INFO: f98dc20bce6225a0 is starting a new election at term 230 Dec 02 03:06:41 controller-0 etcd[48861]: raft2020/12/02 03:06:41 INFO: f98dc20bce6225a0 became candidate at term 231 Dec 02 03:06:41 controller-0 etcd[48861]: raft2020/12/02 03:06:41 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 231 </code></pre> <p>So I redo the step that I think allows what is not being allowed above <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/03-compute-resources.md#firewall-rules" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/03-compute-resources.md#firewall-rules</a>:</p> <pre><code>somersbmatthews@controller-0:~$ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ &gt; --allow tcp,udp,icmp \ &gt; --network kubernetes-the-hard-way \ &gt; --source-ranges 10.240.0.0/24,10.200.0.0/16 Creating firewall...failed. 
ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource: - The resource 'projects/k8s-hard-way-2571/global/firewalls/kubernetes-the-hard-way-allow-internal' already exists </code></pre> <p>but I'm still getting the timeout errors above.</p> <p>Any help is appreciated, thanks :)</p>
<p>I encountered a similar error. First, I had made a mistake by using the master-1 IP address in <code>listen-peer-urls</code>, <code>listen-client-urls</code> and <code>advertise-client-urls</code> on the other nodes.</p> <p>Second, test whether you can reach port 2380 on the other masters (for example with telnet). If you can't, open the firewall on both masters:</p> <p><code>sudo firewall-cmd --add-port=2380/tcp --permanent</code></p> <p><code>sudo systemctl restart firewalld</code></p> <p>Also, make sure the clocks on the masters are not significantly out of sync.</p>
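<p>If it helps, here is a minimal connectivity check you could run from controller-0. It assumes the peer IPs from the tutorial (10.240.0.11 and 10.240.0.12) and the certificate paths used in Kubernetes the Hard Way; adjust addresses and paths to your setup:</p> <pre><code># can we reach the etcd peer port on the other controllers? (nc or telnet)
nc -zv 10.240.0.11 2380
nc -zv 10.240.0.12 2380

# if the peers are reachable, check cluster membership
sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
</code></pre> <p>&quot;connection refused&quot; on 2380 can also simply mean etcd isn't running or listening on that peer, so it is worth checking <code>systemctl status etcd</code> on controller-1 and controller-2 as well.</p>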
<p>I have an application with many services that are deployed in kubernetes. These services are represented by yaml files like <code>configmap.yaml</code> and <code>deployment.yaml</code>.</p> <p>How can I convert these files to helm charts et deploy the application using:</p> <pre><code>helm install </code></pre>
<p>You can turn your existing deployment.yaml (and other manifest) files into a Helm chart and deploy them with <code>helm install</code>. The rough steps, illustrated in the sketch below, are:</p> <ol> <li>Create your Helm chart skeleton</li> <li>Move/convert your manifests and update Chart.yaml, deployment.yaml, service.yaml and values.yaml</li> <li>Verify the converted YAMLs</li> <li>Run/install the Helm chart</li> </ol> <p>This <a href="https://jhooq.com/convert-kubernetes-yaml-into-helm/" rel="nofollow noreferrer">document</a> walks through each step in detail. Attaching a similar <a href="https://stackoverflow.com/questions/68766805/how-to-build-helm-chart-from-existing-kubernetes-manifest">Stack Overflow question</a> for reference.</p>
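<p>For a rough idea of what that looks like in practice, here is a minimal sketch (the chart and release names are just placeholders):</p> <pre><code># scaffold a chart and drop the generated example templates
helm create myapp
rm -rf myapp/templates/*

# copy your existing manifests in as templates
cp deployment.yaml configmap.yaml myapp/templates/

# render locally to verify, then install
helm template myapp ./myapp
helm install myapp ./myapp
</code></pre> <p>From there you can gradually replace hard-coded values in the copied manifests with <code>{{ .Values.something }}</code> references and move the defaults into values.yaml.</p>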
<p>I have an Ingress configuration, I want to enable cors headers on some specific hosts!</p> <p>I set the annotation in the ingress to</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: | more_set_headers &quot;Access-Control-Allow-Origin: $http_origin&quot;; nginx.ingress.kubernetes.io/enable-cors: &quot;true&quot; nginx.ingress.kubernetes.io/cors-allow-credentials: &quot;true&quot; </code></pre> <p>This works but, also will set Access-Control-Allow-Origin in some other hosts For example:</p> <pre><code>curl 'https://example.com' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:88.0) Gecko/20100101 Firefox/88.0' -H 'Accept: application/json' -H 'Accept-Language: en,en-US;q=0.7,en;q=0.3' --compressed -H 'Origin: https://hacker.org' -H 'Connection: keep-alive' -v </code></pre> <p>The result would be <code>Access-Control-Allow-Origin: hacker.org</code> which is not what I expect!</p> <p>I want to configure the ingress in a way that it sets the <code>Access-Control-Allow-Origin</code> only on some specific hosts!</p> <p>I tried this annotation!</p> <pre><code> nginx.ingress.kubernetes.io/configuration-snippet: | if ($http_origin ~* 'https://example.com') { more_set_headers &quot;Access-Control-Allow-Origin: $http_origin&quot;; } nginx.ingress.kubernetes.io/enable-cors: &quot;true&quot; nginx.ingress.kubernetes.io/cors-allow-credentials: &quot;true&quot; </code></pre> <p>But does not work!</p> <p>I want the ingress configured to set Cors headers only on some specific hosts!</p>
<p>This is due to the fact that you enabled cors <code>nginx.ingress.kubernetes.io/enable-cors: &quot;true&quot;</code> and with that the default for <code>nginx.ingress.kubernetes.io/cors-allow-origin</code> came into play which is <code>*</code>. Simply configure it with the appropriate annotation like so:</p> <pre class="lang-yaml prettyprint-override"><code>nginx.ingress.kubernetes.io/enable-cors: &quot;true&quot; nginx.ingress.kubernetes.io/cors-allow-origin: &quot;https://origin-site.com&quot; </code></pre> <p>More information can be found here: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#enable-cors" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#enable-cors</a></p>
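<p>For completeness, a minimal sketch of how those annotations could look on the Ingress (host, service and origin here are placeholders; adjust the allowed methods/headers to what your API actually needs):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: &quot;true&quot;
    nginx.ingress.kubernetes.io/cors-allow-origin: &quot;https://example.com&quot;
    nginx.ingress.kubernetes.io/cors-allow-credentials: &quot;true&quot;
    nginx.ingress.kubernetes.io/cors-allow-methods: &quot;GET, POST, OPTIONS&quot;
    nginx.ingress.kubernetes.io/cors-allow-headers: &quot;Authorization, Content-Type&quot;
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-svc
            port:
              number: 80
</code></pre> <p>With <code>cors-allow-origin</code> restricted like this, the controller no longer echoes back arbitrary origins such as <code>https://hacker.org</code>.</p>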
<p>We would like to pack as many pods into each nodes in our cluster as much as possible do decrease the amount of nodes we have on some of our environments. I saw <a href="https://github.com/kubernetes-sigs/descheduler" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/descheduler</a> HighNodeUtilization strategy which seems to fit the bill for what we need. However, it seems the cluster needs to have the scoring strategy <strong>MostAllocated</strong> to work with this.</p> <p>I believe that the kube-scheduler in EKS in inaccessible to be configured. How do I then configure the MostAllocated scoring strategy?</p> <p>Better yet, how do I configure this automated packing of pods in as little nodes as possible in a cluster without the use of Descheduler?</p> <p>Tried deploying the descheduler as is without the MostAllocated scoring strategy configured. Obviously did not provide the results expected.</p> <p>Many of my digging online led to having to create a custom-scheduler, but I have found little/unclear resources to be able to do so.</p>
<p>Eks does not provide the ability to override the default scheduler configuration, which means that actually configuring the <code>default-scheduler</code> profile with the <code>MostAllocated</code> scoring strategy is not an option. However, you may run your own scheduler <em>alongside</em> the default scheduler, and <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/" rel="nofollow noreferrer">this one may be configured how you like</a>. Once you create a custom scheduler, you can override <em>that</em> scheduler's configuration with the <code>MostAllocated</code> scoring strategy and then instruct your workloads to use that scheduler.</p> <p>In order to run multiple schedulers, you have to set up several Kubernetes Objects. These objects are documented in the guide linked above:</p> <ul> <li>ServiceAccount</li> <li>ClusterRoleBinding x2</li> <li>RoleBinding</li> <li>ConfigMap</li> <li>Deployment</li> </ul> <p>The deployment will use the standard <code>kube-scheduler</code> image provided by Google, <a href="https://www.youtube.com/watch?v=IYcL0Un1io0" rel="nofollow noreferrer">unless you'd like to create your own</a>. I wouldn't recommend it.</p> <h3>Major Note: Ensure your version of the kube-scheduler is the same version as the control plane. This will not work otherwise.</h3> <p>In addition, ensure that your version of the <code>kube-scheduler</code> is compatible with the version of the configuration objects that you use to configure the scheduler profile. <code>v1beta2</code> is safe for <code>v1.22.x</code> -&gt; <code>v1.24.x</code> but only <code>v1beta3</code> or <code>v1</code> is safe for <code>v.1.25+</code>.</p> <p>For example, here's a working version of a deployment manifest and config map that are used to create a custom scheduler compatible with <code>k8s</code> <code>v.1.22.x</code>. Note you'll still have to create the other objects for this to work:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: custom-scheduler namespace: kube-system spec: replicas: 1 selector: matchLabels: name: custom-scheduler template: metadata: labels: component: scheduler name: custom-scheduler tier: control-plane spec: containers: - command: - /usr/local/bin/kube-scheduler - --config=/etc/kubernetes/custom-scheduler/custom-scheduler-config.yaml env: [] image: registry.k8s.io/kube-scheduler:v1.22.16 imagePullPolicy: IfNotPresent livenessProbe: httpGet: path: /healthz port: 10259 scheme: HTTPS name: custom-scheduler readinessProbe: httpGet: path: /healthz port: 10259 scheme: HTTPS volumeMounts: - mountPath: /etc/kubernetes/custom-scheduler name: custom-scheduler-config serviceAccountName: custom-scheduler volumes: - configMap: name: custom-scheduler-config name: custom-scheduler-config </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap data: custom-scheduler-config.yaml: | apiVersion: kubescheduler.config.k8s.io/v1beta2 kind: KubeSchedulerConfiguration leaderElection: leaderElect: false profiles: - pluginConfig: - args: apiVersion: kubescheduler.config.k8s.io/v1beta2 kind: NodeResourcesFitArgs scoringStrategy: resources: - name: cpu weight: 1 - name: memory weight: 1 type: MostAllocated name: NodeResourcesFit plugins: score: enabled: - name: NodeResourcesFit weight: 1 schedulerName: custom-scheduler metadata: name: custom-scheduler-config namespace: kube-system </code></pre>
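<p>Once the custom scheduler is running, the last piece is pointing your workloads at it via <code>spec.schedulerName</code>. A minimal sketch (the deployment name and image are placeholders):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: binpacked-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: binpacked-app
  template:
    metadata:
      labels:
        app: binpacked-app
    spec:
      schedulerName: custom-scheduler   # must match schedulerName in the profile above
      containers:
      - name: app
        image: nginx:1.25
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
</code></pre> <p>Note that the <code>MostAllocated</code> strategy scores nodes by requested resources, so setting realistic requests on every container matters as much as the scheduler profile itself.</p>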
<p>I'm trying to create my own helm chart package for prometheus and its components but I am trying to reuse parts of the kube-prometheus-stack helm chart on github : <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="nofollow noreferrer">https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack</a></p> <p>I've modified the templates to my liking but when I try to create a package for them which I can then upload it to my repo I get the following :</p> <pre><code>helm package prometheus-chart/ Error: found in Chart.yaml, but missing in charts/ directory: alertmanager, kube-state-metrics, prometheus-node-exporter, prometheus-pushgateway </code></pre> <p>How can I get the templates from that repo, and create a deployable package from my local machine which I can then share it?</p>
<p>The components <strong>alertmanager, kube-state-metrics, prometheus-node-exporter and prometheus-pushgateway</strong> are declared as dependencies in the chart's <a href="https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/Chart.yaml" rel="nofollow noreferrer">Chart.yaml</a>, so <code>helm package</code> checks that they are present in the <code>charts/</code> directory.</p> <p>You therefore need to download the dependencies before packaging. Both commands take the path of <em>your</em> chart (the one whose Chart.yaml lists the dependencies), not the name of an individual dependency:</p> <pre><code>$ helm dependency update CHART_PATH
</code></pre> <p>Example:</p> <pre><code>$ helm dependency update prometheus-chart/
</code></pre> <p>If a Chart.lock already exists, you can instead rebuild the <code>charts/</code> directory from it with</p> <pre><code>$ helm dependency build prometheus-chart/
</code></pre> <p>For more detailed information refer to the official documents <a href="https://helm.sh/docs/helm/helm_dependency_build/" rel="nofollow noreferrer">doc1</a> <a href="https://helm.sh/docs/helm/helm_dependency_update/" rel="nofollow noreferrer">doc2</a></p>
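<p>Putting it together, the packaging workflow could look roughly like this (the chart directory name follows your question; the repo alias is an assumption):</p> <pre><code># make sure the repo hosting the dependencies is known locally
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# pull the dependencies into prometheus-chart/charts/
helm dependency update prometheus-chart/

# now packaging should succeed
helm package prometheus-chart/
</code></pre> <p>Depending on your Helm version, the repositories referenced under <code>dependencies:</code> in Chart.yaml may need to be added locally first (as above) for the update step to resolve them.</p>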
<p>I have the following Kubernetes job defined in <code>job.yaml</code></p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: pi-$count spec: template: spec: containers: - name: pi image: perl:5.34.0 command: [&quot;perl&quot;, &quot;-Mbignum=bpi&quot;, &quot;-wle&quot;, &quot;print bpi(2000)&quot;] restartPolicy: Never backoffLimit: 4 </code></pre> <p>I apply this with the following bash script:</p> <pre><code>for i in {1..3} do kubectl apply -f job.yaml done </code></pre> <p>Is there a way to create 3 jobs here and use an environment variable <code>pi-$count</code> so <code>pi-1</code>, <code>pi-2</code>, <code>pi-3</code> are created?</p>
<p>You can use <code>sed</code> to replace <code>$count</code>, write a manifest file for each job and apply it.</p> <p>For example, create a file called <code>pi-job.yaml</code> (taking your code as the template):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: pi-$count
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: [&quot;perl&quot;, &quot;-Mbignum=bpi&quot;, &quot;-wle&quot;, &quot;print bpi(2000)&quot;]
      restartPolicy: Never
  backoffLimit: 4
</code></pre> <p>Create a folder called <strong>jobs</strong> to store the generated manifest files (this directory is completely optional). Then you can use something like this:</p> <pre><code>mkdir jobs
for i in {1..3}
do
  sed &quot;s/\$count/$i/&quot; pi-job.yaml &gt; ./jobs/job-$i.yaml
  kubectl apply -f ./jobs/job-$i.yaml
done
</code></pre> <p>Running this creates three jobs named <strong>pi-1</strong>, <strong>pi-2</strong> and <strong>pi-3</strong>, but the generated manifest files remain in the jobs folder, so you need to clean them up. Alternatively, add one more line to the script to remove each file after applying it:</p> <pre><code>mkdir jobs
for i in {1..3}
do
  sed &quot;s/\$count/$i/&quot; pi-job.yaml &gt; ./jobs/job-$i.yaml
  kubectl apply -f ./jobs/job-$i.yaml
  rm -f ./jobs/job-$i.yaml
done
</code></pre> <p>For more detailed information refer to the <a href="https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/#create-jobs-based-on-a-template" rel="nofollow noreferrer">official Kubernetes job document</a></p>
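<p>If you don't need the intermediate files at all, you can also pipe the substituted manifest straight into <code>kubectl</code> (a minimal variation on the same idea):</p> <pre><code>for i in {1..3}
do
  sed &quot;s/\$count/$i/&quot; pi-job.yaml | kubectl apply -f -
done
</code></pre> <p>Here <code>kubectl apply -f -</code> reads the manifest from stdin, so nothing is written to disk.</p>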
<p>How do you find the creator of a namespace in Kubernetes? There was a debate today about who had created a namespace and we weren't able to find who the creator was.</p>
<p>If you didn't configure it already you cannot, the information is not being saved by Kubernetes unless you explicitly want to log it.</p> <p>In order to do so you would have to activate <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="nofollow noreferrer">audit logs</a>. Audit logs can be customized to a high degree and can contain information such as <em>when</em> did <em>who</em> do <em>what</em>. This also includes the creation of namespaces.</p>
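<p>For future debates like this one, a minimal audit policy that records who creates (or deletes) namespaces could look something like the sketch below. Enabling it requires restarting the kube-apiserver with the audit flags, and the file paths are only examples:</p> <pre><code># /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
  verbs: [&quot;create&quot;, &quot;delete&quot;]
  resources:
  - group: &quot;&quot;
    resources: [&quot;namespaces&quot;]
# don't log anything else
- level: None
</code></pre> <pre><code># kube-apiserver flags (example)
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/kubernetes/audit.log
</code></pre> <p>Each matching audit event then contains a <code>user.username</code> field, which is exactly the &quot;who created this namespace&quot; information you were missing. On managed clusters (GKE, EKS, AKS) the audit log is usually exposed through the cloud provider's logging service rather than a file on disk.</p>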
<p>My groovy pipeline has 3 steps (all with shell):</p> <ul> <li><strong>stage 1:</strong> authenticate to GKE cluster and update kubeconfig</li> <li><strong>stage 2:</strong> helm install on that cluster (using --context)</li> <li><strong>stage 3:</strong> kubectl wait for condition (using --context)</li> </ul> <p>Now most of the time these jobs run fine with no issues at all. But a few days ago it gave me this error on stage 3:</p> <p><code>error: context &quot;...&quot; does not exist</code></p> <p>I can't figure out why this failed once, and unfortunately I don't have the full log of that job any more. It's weird, as the context worked for the helm install stage, so how could it be not found all of a sudden?</p> <p>what do you think can cause this random issue? How can I avoid it in the future?</p>
<p>The reason is that the <code>&quot;...&quot;</code> context does not exist in your kubeconfig file at the time stage 3 runs. You can run <code>kubectl config view -o jsonpath='{.current-context}'</code> to check the current context and use that context instead.</p> <p>As described in <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-context-and-configuration" rel="nofollow noreferrer">this document</a>, the <code>kubectl config</code> commands set which Kubernetes cluster kubectl communicates with and modify its configuration information.</p>
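<p>As a sketch of how to make the pipeline more robust (cluster, zone and project names are placeholders), you could re-resolve the context inside the stage that needs it instead of relying on the name being stable:</p> <pre><code># stage 3 (shell step)
# refresh kubeconfig in this executor, in case stage 1 ran on a different workspace/agent
gcloud container clusters get-credentials my-cluster --zone us-central1-a --project my-project

# list what is actually available, then use the current context explicitly
kubectl config get-contexts
CTX=$(kubectl config current-context)
kubectl --context &quot;$CTX&quot; wait --for=condition=available deployment/my-app --timeout=300s
</code></pre> <p>A common cause of this kind of intermittent failure is the stages running on different agents (or with a different HOME/KUBECONFIG), so the kubeconfig written in stage 1 is not the one stage 3 reads; regenerating it per stage avoids that.</p>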
<p>Good afternoon,</p> <p>i'd like to ask. im a &quot;little&quot; bit upset regarding ingress and its traffic flow</p> <p>i created test nginx deployment with service and ingress. ( in titaniun cloud )</p> <p>i have no direct connect via browser so im using tunneling to get access via browser abd sock5 proxy in firefox.</p> <p>deployment:</p> <pre><code>k describe deployments.apps dpl-nginx Name: dpl-nginx Namespace: xxx CreationTimestamp: Thu, 09 Jun 2022 07:20:48 +0000 Labels: &lt;none&gt; Annotations: deployment.kubernetes.io/revision: 1 field.cattle.io/publicEndpoints: [{&quot;port&quot;:32506,&quot;protocol&quot;:&quot;TCP&quot;,&quot;serviceName&quot;:&quot;xxx:xxx-svc&quot;,&quot;allNodes&quot;:true},{&quot;addresses&quot;:[&quot;172.xx.xx.117&quot;,&quot;172.xx.xx.131&quot;,&quot;172.xx.x... Selector: app=xxx-nginx Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app=xxx-nginx Containers: nginx: Image: nginx Port: 80/TCP Host Port: 0/TCP Environment: &lt;none&gt; Mounts: /usr/share/nginx/html/ from nginx-index-file (rw) Volumes: nginx-index-file: Type: ConfigMap (a volume populated by a ConfigMap) Name: index-html-configmap Optional: false Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable OldReplicaSets: &lt;none&gt; NewReplicaSet: xxx-dpl-nginx-6ff8bcd665 (2/2 replicas created) Events: &lt;none&gt; </code></pre> <p>service:</p> <pre><code>Name: xxx-svc Namespace: xxx Labels: &lt;none&gt; Annotations: field.cattle.io/publicEndpoints: [{&quot;port&quot;:32506,&quot;protocol&quot;:&quot;TCP&quot;,&quot;serviceName&quot;:&quot;xxx:xxx-svc&quot;,&quot;allNodes&quot;:true}] Selector: app=xxx-nginx Type: NodePort IP Family Policy: SingleStack IP Families: IPv4 IP: 10.43.95.33 IPs: 10.43.95.33 Port: http-internal 888/TCP TargetPort: 80/TCP NodePort: http-internal 32506/TCP Endpoints: 10.42.0.178:80,10.42.0.179:80 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>ingress:</p> <pre><code>Name: test-ingress Namespace: xxx Address: 172.xx.xx.117,172.xx.xx.131,172.xx.xx.132 Default backend: default-http-backend:80 (&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;) Rules: Host Path Backends ---- ---- -------- test.xxx.io / xxx-svc:888 (10.42.0.178:80,10.42.0.179:80) Annotations: field.cattle.io/publicEndpoints: [{&quot;addresses&quot;:[&quot;172.xx.xx.117&quot;,&quot;172.xx.xx.131&quot;,&quot;172.xx.xx.132&quot;],&quot;port&quot;:80,&quot;protocol&quot;:&quot;HTTP&quot;,&quot;serviceName&quot;:&quot;xxx:xxx-svc&quot;,&quot;ingressName... nginx.ingress.kubernetes.io/proxy-read-timeout: 3600 nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/ssl-redirect: false Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 9m34s (x37 over 3d21h) nginx-ingress-controller Scheduled for sync </code></pre> <p>when i try curl/wget to host / nodeIP ,direcly from cluster , both option works, i can get my custom index</p> <pre><code> wget test.xxx.io --no-proxy --no-check-certificate --2022-06-13 10:35:12-- http://test.xxx.io/ Resolving test.xxx.io (test.xxx.io)... 172.xx.xx.132, 172.xx.xx.131, 172.xx.xx.117 Connecting to test.xxx.io (test.xxx.io)|172.xx.xx.132|:80... connected. HTTP request sent, awaiting response... 
200 OK Length: 197 [text/html] Saving to: ‘index.html.1’ index.html.1 100%[===========================================================================================&gt;] 197 --.-KB/s in 0s </code></pre> <p>curl:</p> <pre><code>curl test.xxx.io --noproxy '*' -I HTTP/1.1 200 OK Date: Mon, 13 Jun 2022 10:36:31 GMT Content-Type: text/html Content-Length: 197 Connection: keep-alive Last-Modified: Thu, 09 Jun 2022 07:20:49 GMT ETag: &quot;62a19f51-c5&quot; Accept-Ranges: bytes </code></pre> <p>nslookup</p> <pre><code>nslookup,dig,ping from cluster is working as well: nslookup test.xxx.io Server: 127.0.0.53 Address: 127.0.0.53#53 Name: test.xxx.io Address: 172.xx.xx.131 Name: test.xxx.io Address: 172.xx.xx.132 Name: test.xxx.io Address: 172.xx.xx.117 </code></pre> <p>dig</p> <pre><code>dig test.xxx.io +noall +answer test.xxx.io. 22 IN A 172.xx.xx.117 test.xxx.io. 22 IN A 172.xx.xx.132 test.xxx.io. 22 IN A 172.xx.xx.131 </code></pre> <p>ping</p> <pre><code>ping test.xxx.io PING test.xxx.io (172.xx.xx.132) 56(84) bytes of data. 64 bytes from xx-k3s-1 (172.xx.xx.132): icmp_seq=1 ttl=64 time=0.038 ms 64 bytes from xx-k3s-1 (172.xx.xx.132): icmp_seq=2 ttl=64 time=0.042 ms </code></pre> <p>also from ingress nginx pod curl works fine... in firefox via <code>nodeIP:port</code>, i can get index, but via host its not possible</p> <p>seems that ingress forwarding traffic to the pod, but is this issue only something to do with browser ?</p> <p>Thanks for any advice</p>
<p>So, for clarification: I'm reaching the ingress from my local PC through an SSH tunnel with a SOCKS5 proxy in the browser,</p> <pre><code>ssh [email protected] -D 1090 </code></pre> <p>and the solution is trivial: add</p> <pre><code>172.xx.xx.117 test.xxx.io </code></pre> <p>to <em>/etc/hosts</em> on the jump server, so the hostname resolves there as well.</p>
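<p>If you'd rather test without touching /etc/hosts, <code>curl</code> can pin the hostname to a specific node IP for a single request, which is a quick way to confirm the ingress host rule works end to end:</p> <pre><code>curl --resolve test.xxx.io:80:172.xx.xx.117 http://test.xxx.io/ -I
</code></pre> <p>This sends the request to 172.xx.xx.117 while still presenting <code>Host: test.xxx.io</code>, which is what the ingress controller routes on.</p>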
<p>Is there a way to use an environment variable as the default for another? For example:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: Work spec: containers: - name: envar-demo-container image: gcr.io/google-samples/node-hello:1.0 env: - name: ALWAYS_SET value: &quot;default&quot; - name: SOMETIMES_SET value: &quot;custom&quot; - name: RESULT value: &quot;$(SOMETIMES_SET) ? $(SOMETIMES_SET) : $(ALWAYS_SET)&quot; </code></pre>
<p>I don't think there is a way to do that but anyway you can try something like this</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: Work spec: containers: - name: envar-demo-container image: gcr.io/google-samples/node-hello:1.0 args: - RESULT=${SOMETIMES_SET:-${ALWAYS_SET}}; command_to_run_app command: - sh - -c env: - name: ALWAYS_SET value: &quot;default&quot; - name: SOMETIMES_SET value: &quot;custom&quot; </code></pre>
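<p>One way to sanity-check that the fallback behaves as intended is to have the container print the resolved value before starting the real process, then read it from the pod logs. A small sketch of that idea; the echo is only for verification, <code>command_to_run_app</code> is still a placeholder, and the image is assumed to ship a shell:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    command: [&quot;sh&quot;, &quot;-c&quot;]
    args:
    - RESULT=${SOMETIMES_SET:-${ALWAYS_SET}}; echo &quot;RESULT=$RESULT&quot;; exec command_to_run_app
    env:
    - name: ALWAYS_SET
      value: &quot;default&quot;
    # comment out SOMETIMES_SET to see RESULT fall back to &quot;default&quot;
    - name: SOMETIMES_SET
      value: &quot;custom&quot;
</code></pre> <p>After that, <code>kubectl logs envar-demo</code> shows the resolved value. Note that <code>RESULT</code> only exists in that shell; if the application itself needs it as an environment variable, use <code>export RESULT=...</code> before the <code>exec</code>.</p>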
<p>I am running a Spring Boot application utilizing Azure Kubernetes Service. I found this strange error in my pod logs recently.</p> <blockquote> <p>com.azure.identity.CredentialUnavailableException: EnvironmentCredential authentication unavailable. Environment variables are not fully configured.To mitigate this issue, please refer to the troubleshooting guidelines here at <a href="https://aka.ms/azsdk/java/identity/environmentcredential/troubleshoot" rel="nofollow noreferrer">https://aka.ms/azsdk/java/identity/environmentcredential/troubleshoot</a> ManagedIdentityCredential authentication unavailable. Connection to IMDS endpoint cannot be established. SharedTokenCacheCredential authentication unavailable. No accounts were found in the cache. IntelliJ Authentication not available. Please log in with Azure Tools for IntelliJ plugin in the IDE. Failed to read Vs Code credentials from Linux Key Ring. AzureCliCredential authentication unavailable. Azure CLI not installed.To mitigate this issue, please refer to the troubleshooting guidelines here at <a href="https://aka.ms/azsdk/java/identity/azclicredential/troubleshoot" rel="nofollow noreferrer">https://aka.ms/azsdk/java/identity/azclicredential/troubleshoot</a> Unable to execute PowerShell. Please make sure that it is installed in your systemTo mitigate this issue, please refer to the troubleshooting guidelines here at <a href="https://aka.ms/azure-identity-java-default-azure-credential-troubleshoot" rel="nofollow noreferrer">https://aka.ms/azure-identity-java-default-azure-credential-troubleshoot</a></p> </blockquote> <p>Any hints are much appreciated !</p> <p>My trails so far:</p> <ol> <li>Upgrade/Downgrade Kubernetes versions</li> <li>Checking Environment Variable Assignments</li> </ol>
<p><strong>Could you please validate that you are setting the following environment variables?</strong></p> <ol> <li><strong>ENVIRONMENT_VARIABLES</strong>: ensure that the variables <strong>AZURE_CLIENT_ID, AZURE_TENANT_ID and AZURE_CLIENT_SECRET</strong> are properly set.</li> </ol> <p>The steps below apply when authenticating with <strong>environment variables</strong>. Add the following variables to the environment:</p> <pre><code>export AZURE_CLIENT_ID=XXXXXXXXXXXXXX
export AZURE_TENANT_ID=XXXXXXXXXXXXX
export AZURE_CLIENT_SECRET=XXXXXXXX
</code></pre> <p>Check that they are visible to the application with</p> <pre><code>System.getenv(&quot;AZURE_CLIENT_ID&quot;)
</code></pre> <ol start="2"> <li><strong>MANAGED_IDENTITY_CREDENTIALS</strong></li> </ol> <p>If Managed Identity is not available to your workload, you can use <em><strong>client secret</strong></em> or <em><strong>certificate</strong></em> authentication instead. For example:</p> <pre><code>export AZURE_CLIENT_ID=XXXXXXXXXXXXXX
export AZURE_TENANT_ID=XXXXXXXXXXXXX
export AZURE_CLIENT_CERTIFICATE_PATH=XXXXXXXXXXXX
</code></pre> <ol start="3"> <li>In <strong>Visual Studio</strong>, go to <em>Tools</em> &gt; <em>Options</em> &gt; <em>Azure Service Authentication</em> &gt; <em>Account Selection</em> and sign in with your credentials.</li> </ol> <p><a href="https://i.stack.imgur.com/jDMdl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jDMdl.png" alt="enter image description here" /></a></p> <p>If you see the &quot;<em>Re-enter your credentials</em>&quot; link, click it and sign in again; otherwise <em>sign out</em> and <em>sign in</em> again.</p> <ol start="4"> <li><strong>PROFILE_ENV_APPLICATION</strong>: check the profile environment for the application</li> </ol> <p><code>windir\System32\inetsrv\config\applicationHost.config</code></p> <p>In that file, if <em>setProfileEnvironment</em> is <em><strong>false</strong></em>, change it to <em><strong>true</strong></em>.</p> <p>If it is missing, add it under the &lt;<em>applicationPoolDefaults</em>&gt; tag like below.</p> <pre><code>&lt;applicationPoolDefaults managedRuntimeVersion=&quot;vXX&quot;&gt;
  &lt;processModel identityType=&quot;ApplicationPoolIdentity&quot; loadUserProfile=&quot;true&quot; setProfileEnvironment=&quot;true&quot;&gt;
</code></pre> <ol start="5"> <li><strong>SHARED_TOKEN_CACHE_CREDENTIAL</strong></li> </ol> <p>To skip the shared token cache credential, configure the credential accordingly, e.g.</p> <pre><code>DefaultAzureCredential(connection_verify=False, exclude_shared_token_cache_credential=True)
</code></pre> <ol start="6"> <li><p><strong>AZURE_CLI_CREDENTIAL</strong> and <strong>AZURE_CLI</strong>: make sure the Azure CLI is installed and on your <em>PATH</em> (check with <code>echo $PATH</code> in a terminal).</p> </li> <li><p><strong>POWERSHELL</strong></p> </li> </ol> <p>Open PowerShell as administrator and run the command to check the disk and display a status report:</p> <pre><code>Chkdsk c: /F
</code></pre> <p>After this command you will have to restart the computer for PowerShell to work.</p>
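<p>Since the application runs in AKS, the environment variables ultimately have to be present inside the pod. A minimal sketch of wiring them in from a Kubernetes secret (the secret, deployment and image names are placeholders; alternatively, AKS workload identity can inject a managed identity instead of a client secret):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-spring-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-spring-app
  template:
    metadata:
      labels:
        app: my-spring-app
    spec:
      containers:
      - name: app
        image: myregistry.azurecr.io/my-spring-app:latest
        env:
        - name: AZURE_TENANT_ID
          valueFrom:
            secretKeyRef:
              name: azure-sp-credentials
              key: tenant-id
        - name: AZURE_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: azure-sp-credentials
              key: client-id
        - name: AZURE_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              name: azure-sp-credentials
              key: client-secret
</code></pre> <p>With those three variables set, <code>EnvironmentCredential</code> (the first credential DefaultAzureCredential tries) should succeed and the rest of the chain is never consulted.</p>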
<p>I try to deploy a MySQL service with a database named <strong>'main'</strong> with a table named <strong>'configs'</strong>. Below are my Kubernetes resources, which includes an SQL init script within a ConfigMap resource. After deploying, I use a forward port to access the MySQL pod, and communicate with the pod via the Python console using the PyMySQL library. The code I use for such communication is indicated below, where I try to insert an item into the <strong>'main.configs'</strong> table. The problem is that, I get an error from PyMySQL saying that the table 'main.configs' doesn't exist (error 1146), although the <strong>'main'</strong> database does. Note that when I use PyMySQL to create a table <strong>'configs'</strong>, it works perfectly. Something seems to prevent the the creation of <strong>'main.configs'</strong> table in my SQL init script in the ConfigMap (but the <strong>'main'</strong> database is created without any issue).</p> <p><strong>Kubernetes resources:</strong></p> <pre><code>apiVersion: v1 kind: Secret metadata: name: mysql-secret type: Opaque stringData: mysql-root-password: abcd1234567 mysql-user: batman mysql-database: main mysql-password: abcd1234567 --- apiVersion: v1 kind: ConfigMap metadata: name: mysql-initdb-config data: # store initial SQL commands init.sql: | CREATE USER 'batman'@'%' IDENTIFIED BY 'abcd1234567'; GRANT ALL PRIVILEGES ON *.* TO 'batman'@'%'; FLUSH PRIVILEGES; CREATE DATABASE IF NOT EXISTS main; USE main; CREATE TABLE configs( task_id VARCHAR(124) NOT NULL, PRIMARY KEY (task_id) ) ENGINE=InnoDB; --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Mi --- apiVersion: v1 kind: Service metadata: name: mysql spec: ports: - port: 3306 selector: app: mysql --- apiVersion: apps/v1 kind: Deployment metadata: name: mysql spec: selector: matchLabels: app: mysql strategy: type: Recreate template: metadata: labels: app: mysql spec: containers: - image: mysql:5.6 name: mysql env: - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: mysql-secrets key: mysql-root-password - name: MYSQL_USER valueFrom: secretKeyRef: name: mysql-secrets key: mysql-user - name: MYSQL_DATABASE valueFrom: secretKeyRef: name: mysql-secrets key: mysql-database - name: MYSQL_PASSWORD valueFrom: secretKeyRef: name: mysql-secrets key: mysql-password ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql subPath: &quot;mysql&quot; - name: mysql-initdb mountPath: /docker-entrypoint-initdb.d # used to configure our database volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim - name: mysql-initdb configMap: name: mysql-initdb-config </code></pre> <p><strong>PyMySQL code:</strong></p> <pre><code>import pymysql host = 'localhost' port = 8080 # The port you forwarded with kubectl user = 'batman' password = 'abcd1234567' database = 'main' # Establish a connection connection = pymysql.connect( host=host, port=port, user=user, password=password, database=database ) cursor = connection.cursor() query = &quot;&quot;&quot; INSERT INTO configs (task_id) VALUES ('{}') &quot;&quot;&quot;.format(*['id001']) cursor.execute(query) connection.commit() </code></pre>
<p>There is a minor issue with the Secret definition, where the secret name referenced in the Deployment environment variables must match the name of the Secret. Beyond that, the issue seems to be in the Python code, specifically in the way you're establishing the connection and forwarding the port.</p> <p>1 - The '<strong>secretKeyRef</strong>' entries in the Deployment should reference the secret name defined in your <em>Secret</em> manifest, which is '<strong>mysql-secret</strong>' (the Deployment currently points at '<strong>mysql-secrets</strong>'). Update your Deployment to this:</p> <pre><code>env:
- name: MYSQL_ROOT_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-secret
      key: mysql-root-password
- name: MYSQL_USER
  valueFrom:
    secretKeyRef:
      name: mysql-secret
      key: mysql-user
- name: MYSQL_DATABASE
  valueFrom:
    secretKeyRef:
      name: mysql-secret
      key: mysql-database
- name: MYSQL_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-secret
      key: mysql-password
</code></pre> <p>2 - In your Python code, you're using port '<strong>8080</strong>' for the MySQL connection, but in your Kubernetes resources the MySQL service is exposed on port '<strong>3306</strong>'. If you forward 3306:3306 as shown below, update the port variable in your Python code to match:</p> <pre><code>port = 3306  # The port you forwarded with kubectl
</code></pre> <p>3 - Ensure that you have port-forwarded the MySQL service from your local machine to the cluster with this command:</p> <pre><code>kubectl port-forward service/mysql 3306:3306
</code></pre> <p><em><strong>Make sure this command is still running while your Python code is trying to connect to MySQL.</strong></em></p> <p>4 - After making these changes, test the connection and table creation again. If everything is configured correctly, your Python code should be able to connect to the MySQL database and see the objects created by the '<strong>init</strong>' script.</p>
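<p>Two quick checks that may save time here (commands assume the manifest names above): verify what the init script actually created, and remember that the official MySQL image only runs the scripts in <code>/docker-entrypoint-initdb.d</code> when the data directory is empty, so a PersistentVolume that was initialized before you added the ConfigMap will silently skip them.</p> <pre><code># does the table exist inside the pod?
kubectl exec deploy/mysql -- mysql -ubatman -pabcd1234567 -e &quot;SHOW TABLES IN main;&quot;

# check the startup log for init-script errors
kubectl logs deploy/mysql
</code></pre> <p>If the init script never ran because the volume already had data, deleting the PVC (or starting with a fresh volume) and redeploying will trigger it again.</p>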
<p>I am trying to provision a <strong>private AKS cluster</strong> using terraform. I want to connect my private AKS cluster to <strong>an existing VNET</strong> that I have created using the Azure portal.</p> <p>The Virtual network option is available in the Azure portal. Please find the below image.</p> <p><a href="https://i.stack.imgur.com/9sueQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9sueQ.png" alt="enter image description here" /></a></p> <p>However, the terraform documentation on <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster" rel="nofollow noreferrer">azurerm_kubernetes_cluster</a> has very limited information on how to achieve that.</p> <p>Please find my <code>main.tf</code> below</p> <pre><code>resource &quot;azurerm_kubernetes_cluster&quot; &quot;kubernetes_cluster&quot; { name = var.cluster_name location = var.location resource_group_name = var.resource_group_name private_cluster_enabled = true default_node_pool { name = &quot;default&quot; node_count = var.node_count vm_size = var.vm_size max_pods = var.max_pods_count } kube_dashboard { enabled = true } network_profile { network_plugin = &quot;azure&quot; } } </code></pre> <p><strong>Please note that the VNET and the cluster that is to be created share the same location and resource group.</strong></p> <p>Any help on how to provision a private AKS cluster to an existing VNET using Terraform would be much appreciated.</p>
<p>I used an existing code from Github with some changes as we already have vnet so instead of resource block I have used data block to get the details of the existing Vnet and instead of using the default subnet I created a subnet for aks and other one for firewall.</p> <pre><code>terraform { required_version = &quot;&gt;= 0.14&quot; required_providers { azurerm = { source = &quot;hashicorp/azurerm&quot; version = &quot;&gt;=2.50.0&quot; } } } provider &quot;azurerm&quot; { features {} } #local vars locals { environment = &quot;test&quot; resource_group = &quot;AKS-test&quot; resource_group_location = &quot;East US&quot; name_prefix = &quot;private-aks&quot; aks_node_prefix = [&quot;10.3.1.0/24&quot;] firewall_prefix = [&quot;10.3.2.0/24&quot;] } #Existing vnet with address space &quot;10.3.0.0/16&quot; data &quot;azurerm_virtual_network&quot; &quot;base&quot; { name = &quot;existing-vnet&quot; resource_group_name = &quot;AKS-test&quot; } #subnets resource &quot;azurerm_subnet&quot; &quot;aks&quot; { name = &quot;snet-${local.name_prefix}-${local.environment}&quot; resource_group_name = local.resource_group address_prefixes = local.aks_node_prefix virtual_network_name = data.azurerm_virtual_network.base.name } resource &quot;azurerm_subnet&quot; &quot;firewall&quot; { name = &quot;AzureFirewallSubnet&quot; resource_group_name = local.resource_group virtual_network_name = data.azurerm_virtual_network.base.name address_prefixes = local.firewall_prefix } #user assigned identity resource &quot;azurerm_user_assigned_identity&quot; &quot;base&quot; { resource_group_name = local.resource_group location = local.resource_group_location name = &quot;mi-${local.name_prefix}-${local.environment}&quot; } #role assignment resource &quot;azurerm_role_assignment&quot; &quot;base&quot; { scope = &quot;/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/AKS-test&quot; role_definition_name = &quot;Network Contributor&quot; principal_id = azurerm_user_assigned_identity.base.principal_id } #route table resource &quot;azurerm_route_table&quot; &quot;base&quot; { name = &quot;rt-${local.name_prefix}-${local.environment}&quot; location = data.azurerm_virtual_network.base.location resource_group_name = local.resource_group } #route resource &quot;azurerm_route&quot; &quot;base&quot; { name = &quot;dg-${local.environment}&quot; resource_group_name = local.resource_group route_table_name = azurerm_route_table.base.name address_prefix = &quot;0.0.0.0/0&quot; next_hop_type = &quot;VirtualAppliance&quot; next_hop_in_ip_address = azurerm_firewall.base.ip_configuration.0.private_ip_address } #route table association resource &quot;azurerm_subnet_route_table_association&quot; &quot;base&quot; { subnet_id = azurerm_subnet.aks.id route_table_id = azurerm_route_table.base.id } #firewall resource &quot;azurerm_public_ip&quot; &quot;base&quot; { name = &quot;pip-firewall&quot; location = data.azurerm_virtual_network.base.location resource_group_name = local.resource_group allocation_method = &quot;Static&quot; sku = &quot;Standard&quot; } resource &quot;azurerm_firewall&quot; &quot;base&quot; { name = &quot;fw-${local.name_prefix}-${local.environment}&quot; location = data.azurerm_virtual_network.base.location resource_group_name = local.resource_group ip_configuration { name = &quot;ip-${local.name_prefix}-${local.environment}&quot; subnet_id = azurerm_subnet.firewall.id public_ip_address_id = azurerm_public_ip.base.id } } #kubernetes_cluster resource &quot;azurerm_kubernetes_cluster&quot; &quot;base&quot; { 
name = &quot;${local.name_prefix}-${local.environment}&quot; location = local.resource_group_location resource_group_name = local.resource_group dns_prefix = &quot;dns-${local.name_prefix}-${local.environment}&quot; private_cluster_enabled = true network_profile { network_plugin = &quot;azure&quot; outbound_type = &quot;userDefinedRouting&quot; } default_node_pool { name = &quot;default&quot; node_count = 1 vm_size = &quot;Standard_D2_v2&quot; vnet_subnet_id = azurerm_subnet.aks.id } identity { type = &quot;UserAssigned&quot; user_assigned_identity_id = azurerm_user_assigned_identity.base.id } depends_on = [ azurerm_route.base, azurerm_role_assignment.base ] } </code></pre> <p><strong>Reference:</strong> <a href="https://github.com/kuhlman-labs/terraform_azurerm_aks_private_cluster" rel="noreferrer">Github</a></p> <p><strong>Before Test:</strong></p> <p><a href="https://i.stack.imgur.com/6RboM.png" rel="noreferrer"><img src="https://i.stack.imgur.com/6RboM.png" alt="enter image description here" /></a></p> <p><strong>Doing a terraform Plan on the above code:</strong></p> <p><a href="https://i.stack.imgur.com/FoTBv.png" rel="noreferrer"><img src="https://i.stack.imgur.com/FoTBv.png" alt="enter image description here" /></a></p> <p><strong>After applying the code:</strong></p> <p><a href="https://i.stack.imgur.com/U0DGX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/U0DGX.png" alt="enter image description here" /></a></p> <p><strong>After the deployment :</strong></p> <p><a href="https://i.stack.imgur.com/oNYXq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/oNYXq.png" alt="enter image description here" /></a></p>
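<p>One follow-up note, since the cluster above is private: its API server has no public endpoint, so after <code>terraform apply</code> you can only use <code>kubectl</code> from inside (or peered to) the VNet, e.g. from a jumpbox VM, or run one-off commands through AKS command invoke. A quick sketch (resource group and cluster names follow the values used above):</p> <pre><code># from a machine with network line-of-sight to the private API server
az aks get-credentials --resource-group AKS-test --name private-aks-test
kubectl get nodes

# or, from anywhere, without direct network access to the API server
az aks command invoke --resource-group AKS-test --name private-aks-test --command &quot;kubectl get nodes&quot;
</code></pre> <p>The private DNS zone that AKS creates for the API server must be resolvable from wherever you run kubectl, which is usually the part that trips people up with private clusters.</p>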
<p>I am using k8 to host my grpc service.</p> <p>Sometimes, I am getting the following error (few milliseconds into my request):</p> <pre><code>rpc error: code = Unavailable desc = closing transport due to: connection error: desc = &quot;error reading from server: read tcp &lt;ipaddr&gt;:52220-&gt;&lt;internal ip addr&gt;:8070: read: connection reset by peer&quot;, received prior goaway: code: NO_ERROR </code></pre> <p>May I ask how will this occur? Could it be that the k8's network is down?</p>
<p>This happened because on server shutdown, it sends an initial GOAWAY to support graceful shutdown. This means your server shut down.</p>
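<p>If these errors line up with deployments or pod evictions, giving the server time to drain in-flight RPCs usually makes them disappear. A hedged sketch of the Kubernetes side (names and image are placeholders, and the image is assumed to have a shell); the server itself should also stop accepting new streams gracefully, e.g. via its gRPC graceful-stop API, when it receives SIGTERM:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: grpc-service
  template:
    metadata:
      labels:
        app: grpc-service
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: server
        image: myregistry/grpc-service:latest
        ports:
        - containerPort: 8070
        lifecycle:
          preStop:
            exec:
              # small delay so endpoints/load balancers stop sending new requests
              # before the server begins shutting down
              command: [&quot;sh&quot;, &quot;-c&quot;, &quot;sleep 10&quot;]
</code></pre> <p>Clients can additionally enable retries, since a GOAWAY with code NO_ERROR generally means the server was shutting down rather than that the request failed mid-processing.</p>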
<p>I'm using the <a href="https://airflow.apache.org/docs/helm-chart/stable/index.html" rel="nofollow noreferrer">official Airflow Helm Chart</a> to deploy KubernetesExecutor (still locally) on a KinD Cluster.</p> <p>Because this is a Helm Chart, I'm having a lot of trouble trying to configure anything that are not explicitly shown at the documentation.</p> <p>Using this scenario, I want to send all my logs data produced by my DAGs to a s3 bucket (which is a common thing to do on the airflow stack).</p> <p>The problem is: there's nothing on the <a href="https://airflow.apache.org/docs/helm-chart/stable/manage-logs.html" rel="nofollow noreferrer">documentation</a> and even on other threads that can help me achieve this.</p> <p>Is there anything that I can do?</p>
<p>I'm not sure what exactly your problem is, but the following values.yaml works for me with the official airflow helm chart:</p> <pre><code>config: logging: # Airflow can store logs remotely in AWS S3. Users must supply a remote # location URL (starting with either 's3://...') and an Airflow connection # id that provides access to the storage location. remote_logging: 'True' #colored_console_log : 'True' remote_base_log_folder : &quot;s3://PATH&quot; # the following connection must be created in the airflow web ui remote_log_conn_id : 'S3Conn' # Use server-side encryption for logs stored in S3 encrypt_s3_logs : 'True' </code></pre>
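<p>The one piece the snippet above assumes is that the <code>S3Conn</code> connection exists. Instead of creating it by hand in the web UI, you could also inject it as an environment variable through the chart's values; a sketch, where the connection ID matches the config above and the credentials are placeholders (if your pods already have IAM-based access, e.g. via IRSA, the connection can be left without explicit keys):</p> <pre><code>env:
  - name: AIRFLOW_CONN_S3CONN
    value: &quot;aws://AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY@&quot;
</code></pre> <p>Airflow resolves connections named <code>AIRFLOW_CONN_&lt;CONN_ID&gt;</code> from the environment, so this makes the connection available to every worker pod the KubernetesExecutor spawns without any manual UI step.</p>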