<p>I am trying to deploy Management Center for Hazelcast 5.1.4 on Kubernetes, but I need LDAP to be configured from startup.</p> <p>In version 3.12.x I used to set a ConfigMap with an ldap.properties value, and once the pod was up I could use the LDAP login, but from version 4.x.x onwards this seems to have changed.</p> <p>Has anyone managed to set this up in version 4.x.x or 5.x.x?</p> <p>Thanks</p>
<p>As of release 5.0, you can find the setup in this <a href="https://docs.hazelcast.com/management-center/5.3-snapshot/deploy-manage/ldap" rel="nofollow noreferrer">official doc</a>.</p> <p>You can use your existing LDAP server for authentication/authorization on Management Center. To create and manage additional users, you must configure them on the LDAP server.</p> <p>You can use the <code>ldap update-password</code> task in the <code>hz-mc conf</code> tool to update the encrypted LDAP password stored in the keystore. This command expects information about the keystore, such as its location and password, and the new LDAP password that you want to use. After updating the LDAP password, you need to click the Reload Security Config button on the Management Center login page.</p>
<p>Is there any query to get the transmitted-bytes rate of traffic that leaves the node for a remote peer? For a pod, likewise, I need only the traffic that goes out of the pod, through the node, and then out of the node. I suppose the usual &quot;container_network_transmit_bytes_total&quot; and &quot;node_network_transmit_bytes_total&quot; give the total bytes transmitted to all destinations. For example, if two pods are running on the same node and communicating with each other, I do not want to count that traffic.</p>
<p><strong>Try using the following queries:</strong></p> <pre><code>sum(irate(node_network_transmit_bytes_total[1m])) by (instance)
irate(container_network_transmit_bytes_total{name=&quot;redis&quot;}[1m])
</code></pre> <p>Here <code>irate(v range-vector)</code> calculates the per-second instant rate of increase of the time series in the range vector. It is based on the last two data points, and breaks in monotonicity (such as counter resets due to target restarts) are automatically adjusted for.</p> <p>You can find more information in this <a href="https://prometheus.io/docs/prometheus/latest/querying/functions/#irate" rel="nofollow noreferrer">official doc</a>.</p>
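<p>As a minimal sketch (assuming the standard cAdvisor labels exposed by the kubelet; the <code>namespace</code>/<code>pod</code> label names and the namespace value are assumptions that may differ in your setup), you can aggregate the per-pod transmit rate like this:</p> <pre><code># per-pod egress rate over the last 5 minutes
sum by (namespace, pod) (
  irate(container_network_transmit_bytes_total{namespace=&quot;default&quot;}[5m])
)
</code></pre> <p>Note that these counters measure everything leaving the pod's interface, so traffic to another pod on the same node is still included; isolating only the traffic that actually leaves the node would need flow-level metrics from something like the CNI or an eBPF exporter rather than these counters.</p>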
<p>i have a GKE cluster with FastAPI service, i tried to install certificate with &quot;google managed certificate&quot; so i can access it through https. Certificate installation is working as expected and i can access it with https, but i got problem when i try to access my api, it return me 502 with this error:</p> <p>Error: Server Error The server encountered a temporary error and could not complete your request. Please try again in 30 seconds.</p> <p>I already wait it for a long time but it just didn't fixed, i thing there is something wrong with the routing but i already tried many thing and still confused why it still didn't work. i also already tried documentation provide from google <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">Using Google-managed SSL certificates</a> but still got 502.</p> <p>Below is all my yaml files related to this case:</p> <p>deployment.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: noctis-stg spec: replicas: 1 selector: matchLabels: app: noctis-stg template: metadata: labels: app: noctis-stg spec: containers: - name: noctis-stg image: asia-southeast1-docker.pkg.dev/#####/######/#######:##### ports: - containerPort: 80 volumeMounts: - name: noctis-stg-env mountPath: /code/.env subPath: .env volumes: - name: noctis-stg-env configMap: name: noctis-stg-env </code></pre> <p>service.yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: noctis-stg spec: type: NodePort selector: app: noctis-stg ports: - protocol: TCP port: 80 targetPort: 80 </code></pre> <p>managed-cert.yaml</p> <pre><code>apiVersion: networking.gke.io/v1 kind: ManagedCertificate metadata: name: managed-cert-stg spec: domains: - stg.noctis.test-domain.com </code></pre> <p>managed-cert-ingress.yaml</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: managed-cert-ingress annotations: kubernetes.io/ingress.global-static-ip-name: global-noctis-stg networking.gke.io/managed-certificates: managed-cert-stg kubernetes.io/ingress.class: &quot;gce&quot; spec: tls: - secretName: tls-secret-stg rules: - host: stg.noctis.test-domain.com http: paths: - path: /* pathType: ImplementationSpecific backend: service: name: noctis-stg port: number: 80 </code></pre> <p>It would really great if anyone could help me to solve this.</p>
<p>This error occurs mainly when the Ingress backend is <code>unhealthy</code> or the load balancer <code>failed to pick backend</code>; you can find this in the load balancer logs. The troubleshooting steps below can help resolve your issue:</p> <ol> <li>When creating a GKE Ingress, make sure the Service type is <code>NodePort</code>.</li> <li>Check that the firewall rules are configured correctly.</li> </ol> <p>For Ingress in general, follow this <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">documentation</a>; if you are using an external Ingress, follow <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting/troubleshoot-load-balancing#ingress-502s" rel="nofollow noreferrer">External Ingress produces HTTP 502 errors</a>.</p> <p>If you find logs with the error <code>backend_connection_closed_before_data_sent_to_client</code>, configure the web server software used by your backends so that its keepalive timeout is longer than 600 seconds, to prevent connections from being closed prematurely by the backend. Refer to the <a href="https://cloud.google.com/load-balancing/docs/https#timeouts_and_retries" rel="nofollow noreferrer">documentation</a> describing the two distinct types of timeouts in External HTTP(S) Load Balancing, and this other <a href="https://cloud.google.com/load-balancing/docs/https/https-logging-monitoring#statusdetails_http_failure_messages" rel="nofollow noreferrer">documentation</a> about the messages appearing in your logs.</p> <p>For more troubleshooting steps on 502 errors, follow this <a href="https://cloud.google.com/load-balancing/docs/https/troubleshooting-ext-https-lbs#unexplained_5xx_errors" rel="nofollow noreferrer">official documentation</a>.</p>
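<p>One common cause of an unhealthy backend on GKE is that the default health check hits <code>/</code> and the FastAPI app does not return 200 there. A minimal sketch that gives the load balancer a passing health check (the <code>/healthz</code> path and the BackendConfig name are assumptions; adjust them to your app):</p> <pre><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: noctis-stg-backendconfig
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz   # must return HTTP 200 from the FastAPI app
    port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: noctis-stg
  annotations:
    cloud.google.com/backend-config: '{&quot;default&quot;: &quot;noctis-stg-backendconfig&quot;}'
spec:
  type: NodePort
  selector:
    app: noctis-stg
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
</code></pre> <p>Alternatively, a readinessProbe on the Deployment achieves a similar effect, since GKE derives the load balancer health check from it when one is present.</p>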
<p>I need to setup a shared cache in minikube in such a way that different services can use that cache to pull and update DVC models and data needed for training Machine Learning models. The structure of the project is to use 1 pod to periodically update the cache with new models and outputs. Then, multiple pods can read the cache to recreate the updated models and data. So I need to be able to update the local cache directory and pull from it using DVC commands, so that all the services have consistent view on the latest models and data created by a service.</p> <p>More specifically, I have a docker image called <code>inference-service</code> that should only <code>dvc pull</code> or some how use the info in the shared dvc cache to get the latest model and data locally in <code>models</code> and <code>data</code> folders (see dockerfile) in minikube. I have another image called <code>test-service</code> that runs the ML pipeline using <code>dvc repro</code> which creates the models and data that DVC needs (dvc.yaml) to track and store in the shared cache. So <code>test-service</code> should push created outputs from the ML pipeline into the shared cache so that <code>inference-service</code> can pull it and use it instead of running dvc repro by itself. <code>test-service</code> should only re-train and write the updated models and data into the shared cache while <code>inference-service</code> should only read and recreate the updated/latest models and data from the shared cache.</p> <p><em><strong>Problem: the cache does get mounted on the minikube VM, but the inference service does not pull (using <code>dvc pull -f</code>) the data and models after the test service is done with <code>dvc repro</code> and results the following warnings and failures:</strong></em></p> <p><em>relevant kubernetes pod log of inference-service</em></p> <pre><code>WARNING: Output 'data/processed/train_preprocessed.pkl'(stage: 'preprocess') is missing version info. Cache for it will not be collected. Use `dvc repro` to get your pipeline up to date. You can also use `dvc commit preprocess` to associate existing 'data/processed/train_preprocessed.pkl' with stage: 'preprocess'. WARNING: Output 'data/processed/validation_preprocessed.pkl'(stage: 'preprocess') is missing version info. Cache for it will not be collected. Use `dvc repro` to get your pipeline up to date. You can also use `dvc commit preprocess` to associate existing 'data/processed/validation_preprocessed.pkl' with stage: 'preprocess'. WARNING: Output 'data/processed/test_preprocessed.pkl'(stage: 'preprocess') is missing version info. Cache for it will not be collected. Use `dvc repro` to get your pipeline up to date. You can also use `dvc commit preprocess` to associate existing 'data/processed/test_preprocessed.pkl' with stage: 'preprocess'. WARNING: Output 'data/interim/train_featurized.pkl'(stage: 'featurize') is missing version info. Cache for it will not be collected. Use `dvc repro` to get your pipeline up to date. You can also use `dvc commit featurize` to associate existing 'data/interim/train_featurized.pkl' with stage: 'featurize'. WARNING: Output 'data/interim/validation_featurized.pkl'(stage: 'featurize') is missing version info. Cache for it will not be collected. Use `dvc repro` to get your pipeline up to date. You can also use `dvc commit featurize` to associate existing 'data/interim/validation_featurized.pkl' with stage: 'featurize'. WARNING: Output 'data/interim/test_featurized.pkl'(stage: 'featurize') is missing version info. 
Cache for it will not be collected. Use `dvc repro` to get your pipeline up to date. You can also use `dvc commit featurize` to associate existing 'data/interim/test_featurized.pkl' with stage: 'featurize'. WARNING: Output 'models/mlb.pkl'(stage: 'featurize') is missing version info. Cache for it will not be collected. Use `dvc repro` to get your pipeline up to date. WARNING: Output 'models/tfidf_vectorizer.pkl'(stage: 'featurize') is missing version info. Cache for it will not be collected. Use `dvc repro` to get your pipeline up to date. WARNING: Output 'models/model.pkl'(stage: 'train') is missing version info. Cache for it will not be collected. Use `dvc repro` to get your pipeline up to date. WARNING: Output 'reports/scores.json'(stage: 'evaluate') is missing version info. Cache for it will not be collected. Use `dvc repro` to get your pipeline up to date. WARNING: No file hash info found for '/root/models/model.pkl'. It won't be created. WARNING: No file hash info found for '/root/reports/scores.json'. It won't be created. WARNING: No file hash info found for '/root/data/processed/train_preprocessed.pkl'. It won't be created. WARNING: No file hash info found for '/root/data/processed/validation_preprocessed.pkl'. It won't be created. WARNING: No file hash info found for '/root/data/processed/test_preprocessed.pkl'. It won't be created. WARNING: No file hash info found for '/root/data/interim/train_featurized.pkl'. It won't be created. WARNING: No file hash info found for '/root/data/interim/validation_featurized.pkl'. It won't be created. WARNING: No file hash info found for '/root/data/interim/test_featurized.pkl'. It won't be created. WARNING: No file hash info found for '/root/models/mlb.pkl'. It won't be created. WARNING: No file hash info found for '/root/models/tfidf_vectorizer.pkl'. It won't be created. 10 files failed ERROR: failed to pull data from the cloud - Checkout failed for following targets: /root/models/model.pkl /root/reports/scores.json /root/data/processed/train_preprocessed.pkl /root/data/processed/validation_preprocessed.pkl /root/data/processed/test_preprocessed.pkl /root/data/interim/train_featurized.pkl /root/data/interim/validation_featurized.pkl /root/data/interim/test_featurized.pkl /root/models/mlb.pkl /root/models/tfidf_vectorizer.pkl Is your cache up to date? </code></pre> <p><em>relevant kubernetes pod log of test-service</em></p> <pre><code>Stage 'preprocess' is cached - skipping run, checking out outputs Generating lock file 'dvc.lock' Updating lock file 'dvc.lock' Stage 'featurize' is cached - skipping run, checking out outputs Updating lock file 'dvc.lock' Stage 'train' is cached - skipping run, checking out outputs Updating lock file 'dvc.lock' Stage 'evaluate' is cached - skipping run, checking out outputs Updating lock file 'dvc.lock' Use `dvc push` to send your updates to remote storage. </code></pre> <p><strong>Project Tree</strong></p> <pre><code>├─ .dvc │ ├─ .gitignore │ ├─ config │ └─ tmp ├─ deployment │ ├─ docker-compose │ │ ├─ docker-compose.yml │ ├─ minikube-dep │ │ ├─ inference-test-services_dep.yaml │ ├─ startup_minikube_with_mount.sh.sh ├─ Dockerfile # for inference service ├─ dvc-cache # services should push and pull from this cache folder and see this as the DVC repo ├- dvc.yaml ├- params.yaml ├─ src │ ├─ build_features.py | ├─ preprocess_data.py | ├─ serve_model.py | ├─ startup.sh | ├─ requirements.txt ├─ test_dep │ ├─ .dvc # same as .dvc in the root folder | | ├─... 
│ ├─ Dockerfile # for test service │ ├─ dvc.yaml | ├─ params.yaml │ └─ src │ ├─ build_features.py # same as root src folder | ├─ preprocess_data.py # same as root src folder | ├─ serve_model.py # same as root src folder | ├─ startup_test.sh | ├─ requirements.txt # same as root src folder </code></pre> <p><strong>dvc.yaml</strong></p> <pre><code>stages: preprocess: cmd: python ${preprocess.script} params: - preprocess deps: - ${preprocess.script} - ${preprocess.input_train} - ${preprocess.input_val} - ${preprocess.input_test} outs: - ${preprocess.output_train} - ${preprocess.output_val} - ${preprocess.output_test} featurize: cmd: python ${featurize.script} params: - preprocess - featurize deps: - ${featurize.script} - ${preprocess.output_train} - ${preprocess.output_val} - ${preprocess.output_test} outs: - ${featurize.output_train} - ${featurize.output_val} - ${featurize.output_test} - ${featurize.mlb_out} - ${featurize.tfidf_vectorizer_out} train: cmd: python ${train.script} params: - featurize - train deps: - ${train.script} - ${featurize.output_train} outs: - ${train.model_out} evaluate: cmd: python ${evaluate.script} params: - featurize - train - evaluate deps: - ${evaluate.script} - ${train.model_out} - ${featurize.output_val} metrics: - ${evaluate.scores_path} </code></pre> <p><strong>params.yaml</strong></p> <pre><code>preprocess: script: src/preprocess/preprocess_data.py input_train: data/raw/train.tsv input_val: data/raw/validation.tsv input_test: data/raw/test.tsv output_train: data/processed/train_preprocessed.pkl output_val: data/processed/validation_preprocessed.pkl output_test: data/processed/test_preprocessed.pkl featurize: script: src/features/build_features.py output_train: data/interim/train_featurized.pkl output_val: data/interim/validation_featurized.pkl output_test: data/interim/test_featurized.pkl mlb_out: models/mlb.pkl tfidf_vectorizer_out: models/tfidf_vectorizer.pkl train: script: src/models/train_model.py model_out: models/model.pkl evaluate: script: src/models/evaluate_model.py scores_path: reports/scores.json roc_json: reports/roc_plot.json prc_json: reports/prc_plot.json </code></pre>
<p>After running <code>dvc repro</code> in <code>test-service</code>, a new <code>dvc.lock</code> will be created, containing the file hashes relative to your pipeline (i.e. the hash for <code>models/model.pkl</code> etc).</p> <p>If you're running a shared cache, <code>inference-service</code> should have access to the updated <code>dvc.lock</code>. If that is present, it will be sufficient to run <code>dvc checkout</code> to populate the workspace with the files corresponding to the hashes in the shared cache.</p>
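<p>As a minimal sketch of wiring both pods to the same cache (the mount path <code>/dvc-cache</code> and sharing <code>dvc.lock</code> via git are assumptions about your setup):</p> <pre><code># run once in each container, pointing DVC at the shared volume
dvc cache dir /dvc-cache          # use the mounted shared cache
dvc config cache.shared group     # make cache files group-writable
dvc config cache.type symlink     # avoid copying large files into the workspace

# test-service, after a successful `dvc repro`:
git add dvc.lock &amp;&amp; git commit -m &quot;update pipeline outputs&quot;   # dvc.lock carries the new hashes

# inference-service, once it sees the updated dvc.lock:
dvc checkout
</code></pre> <p>The warnings in your log (&quot;missing version info&quot;, &quot;No file hash info found&quot;) are exactly what you get when the inference side has no up-to-date <code>dvc.lock</code> to resolve hashes from, so sharing the cache directory alone is not enough.</p>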
<h2>Issue Description</h2> <p>I am getting an error in fluent-bit basically saying it cant resolve host</p> <pre><code>getaddrinfo(host='&lt;My Elastic Cloud Instance&gt;.aws.elastic-cloud.com:9243', err=4): Domain name not found </code></pre> <p>I suspect it has something to do with the port getting appended in the dns lookup, but I can't seem to see any settings that join the two together in my configuration's</p> <p>I have verified that using the a dnsutil pod in the same namespace, I am able to resolve the host correctly</p> <h2>Info that may be helpful</h2> <p>Config Map output-elasticsearch.conf</p> <pre><code>[OUTPUT] Name es Match * Host ${CLOUD_ELASTICSEARCH_HOST} Port ${CLOUD_ELASTICSEARCH_PORT} Cloud_ID ${CLOUD_ELASTICSEARCH_ID} Cloud_Auth ${CLOUD_ELASTICSEARCH_USER}:${CLOUD_ELASTICSEARCH_PASSWORD} Logstash_Format On Logstash_Prefix kube1 Replace_Dots On Retry_Limit False tls On tls.verify Off </code></pre> <p>elasticsearch-configmap</p> <pre><code>data: CLOUD_ELASTICSEARCH_HOST: &lt;MyCloudId&gt;.aws.elastic-cloud.com CLOUD_ELASTICSEARCH_ID: &gt;- elastic-security-deployment:&lt;Bunch of Random Bits&gt; CLOUD_ELASTICSEARCH_PORT: '9243' </code></pre> <p>env portion of my daemonset</p> <pre><code> env: - name: CLOUD_ELASTICSEARCH_HOST valueFrom: configMapKeyRef: name: elasticsearch-configmap key: CLOUD_ELASTICSEARCH_HOST - name: CLOUD_ELASTICSEARCH_PORT valueFrom: configMapKeyRef: name: elasticsearch-configmap key: CLOUD_ELASTICSEARCH_PORT - name: CLOUD_ELASTICSEARCH_ID valueFrom: configMapKeyRef: name: elasticsearch-configmap key: CLOUD_ELASTICSEARCH_ID - name: CLOUD_ELASTICSEARCH_USER valueFrom: secretKeyRef: name: elasticsearch-secret key: CLOUD_ELASTICSEARCH_USER - name: CLOUD_ELASTICSEARCH_PASSWORD valueFrom: secretKeyRef: name: elasticsearch-secret key: CLOUD_ELASTICSEARCH_PASSWORD - name: FLUENT_ELASTICSEARCH_HOST value: elasticsearch - name: FLUENT_ELASTICSEARCH_PORT value: '9200' </code></pre>
<p>If you are using Elastic Cloud, try decoding the value of the <code>${CLOUD_ELASTICSEARCH_ID}</code> variable, removing the <code>:443</code>, and encoding it again.</p> <p>I was getting this error and it was solved after doing this.</p>
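<p>A rough sketch of what that looks like (the Cloud ID has the form <code>deployment-name:base64(host$es-uuid$kibana-uuid)</code>; the values below are placeholders, not your real IDs):</p> <pre><code># CLOUD_ELASTICSEARCH_ID has the form &lt;deployment-name&gt;:&lt;base64 blob&gt;
ENCODED_PART=$(echo &quot;$CLOUD_ELASTICSEARCH_ID&quot; | cut -d: -f2)
echo &quot;$ENCODED_PART&quot; | base64 -d
# -&gt; something like: myhost.aws.elastic-cloud.com:443$es-uuid$kibana-uuid

# strip the :443 from the decoded string, re-encode, and put it back after the deployment name
echo -n 'myhost.aws.elastic-cloud.com$es-uuid$kibana-uuid' | base64
</code></pre>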
<p>I am trying to run a kubectl exec command on a pod, but it fails saying <em>'No such file or directory'</em>.</p> <p>I can run the command if I log in to the terminal of the pod through bash. This problem only occurs for a few commands, and I found that there is a PATH variable difference:</p> <ol> <li><p>When I do <code>kubectl exec $POD -- printenv</code>, then PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin</p> </li> <li><p>When I run <code>printenv</code> from the terminal of the pod, then PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/abc/scripts:/opt/abc/bin:/opt/admin/bin:/opt/abc/bin:/root/bin</p> </li> </ol> <p>I am guessing this is what causes the commands to fail when run through kubectl exec.</p> <p>Any ideas to overcome this are welcome; can we somehow pass the PATH environment variable to the pod when using kubectl exec?</p>
<p>You can try executing <code>bash -c &quot;&lt;command&gt;&quot;</code></p> <pre><code>$ kubectl exec &lt;pod&gt; -- bash -c &quot;&lt;cmd&gt;&quot; </code></pre> <p>It is likely PATH is being modified by some shell initialization files</p>
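<p>Building on that, a couple of variations may help (a sketch; the extra PATH entries are taken from the question and may need adjusting): run a login shell so the profile scripts that extend PATH get sourced, or pass PATH explicitly via <code>env</code> for one invocation:</p> <pre><code># login shell: sources /etc/profile and ~/.profile, which usually add the extra PATH entries
kubectl exec &lt;pod&gt; -- bash -lc &quot;your-command&quot;

# or set PATH explicitly for just this invocation
kubectl exec &lt;pod&gt; -- env PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/abc/bin your-command
</code></pre>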
<p>I am deploying a NextJS app in various K8s environments, each passing its own variables. I can pass them in and read them with <code>getServerSideProps()</code>, but as this function only works on page components (or at least that's what the docs say), I have to add it to every single page through which my users can &quot;enter&quot; the application. This becomes error-prone.</p> <p>What I would like, ideally, is to be able to have a <code>getServerSideProps()</code> on <code>_app.tsx</code>, which I cannot.</p> <p>What is the best alternative?</p>
<p><a href="https://nextjs.org/docs/pages/api-reference/functions/get-initial-props" rel="nofollow noreferrer"><code>getInitialProps</code></a> seems what you want.</p> <pre class="lang-js prettyprint-override"><code>App.getInitialProps = async ({ ctx }: AppContext) =&gt; { const something = await getData(); return { pageProps: { something } } } export default App; </code></pre>
<p>I am really confused about Kubernetes default Network Policies. From the official <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">document</a>, I can read that:</p> <ul> <li>By default, a pod is non-isolated for egress; all outbound connections are allowed</li> <li>By default, a pod is non-isolated for ingress; all inbound connections are allowed</li> </ul> <p>Hence, <code>my understanding is all connections are allowed</code>. However, that does not appear to be true. If, for example, I create a namespace and two pods inside it, they cannot communicate with each other unless I specify both <code>Ingress</code> and <code>Egress</code> Network Policies. <em><strong>What am I missing?</strong></em></p>
<p>Check the link <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-policies" rel="nofollow noreferrer">Default policies</a>:</p> <p>By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace.</p> <p>You can verify this by creating two pods with the nginx image in a namespace:</p> <pre><code>kubectl run nginx1 --image=nginx --port=80
kubectl run nginx2 --image=nginx --port=80
</code></pre> <p>Check the pod IPs:</p> <pre><code>kubectl get pods -o wide
</code></pre> <p>Then run curl to try to access one pod from the other.</p> <p>nginx1 to nginx2:</p> <pre><code>kubectl exec -it nginx1 -- curl -I &lt;nginx2-pod-ip&gt;
</code></pre> <p>Output:</p> <pre><code>HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Wed, 22 Jun 2022 13:42:57 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: &quot;61f01158-267&quot;
Accept-Ranges: bytes
</code></pre> <p>nginx2 to nginx1:</p> <pre><code>kubectl exec -it nginx2 -- curl -I &lt;nginx1-pod-ip&gt;
</code></pre> <p>Output:</p> <pre><code>HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Wed, 22 Jun 2022 13:45:26 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: &quot;61f01158-267&quot;
Accept-Ranges: bytes
</code></pre> <p>Both requests above get successful responses, because no NetworkPolicy selects these pods.</p>
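<p>Isolation only kicks in once a NetworkPolicy selects a pod. As a sketch, a default-deny policy following the pattern in the Kubernetes docs makes every pod in the namespace isolated for ingress, after which the curl above fails until you add an allow rule:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}        # selects every pod in the namespace
  policyTypes:
    - Ingress            # no ingress rules listed, so all inbound traffic is denied
</code></pre> <p>Note that NetworkPolicies are only enforced if the cluster's CNI plugin supports them (e.g. Calico or Cilium); with a plugin that does not, traffic stays allowed regardless of the policies you create.</p>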
<p>I'm trying to create my first Helm release on an AKS cluster using a GitLab pipeline, but when I run the following command</p> <pre><code>- helm upgrade server ./aks/server --install --namespace demo --kubeconfig ${CI_PROJECT_DIR}/.kube/config --set image.name=${CI_PROJECT_NAME}/${CI_PROJECT_NAME}-server --set image.tag=${CI_COMMIT_SHA} --set database.user=${POSTGRES_USER} --set database.password=${POSTGRES_PASSWORD} </code></pre> <p>I receive the following error:</p> <pre><code>&quot;Error: Secret in version &quot;v1&quot; cannot be handled as a Secret: v1.Secret.Data: decode base64: illegal base64 data at input byte 8, error found in #10 byte of ...&quot; </code></pre> <p>It looks like something is not working with the secrets file, but I don't understand what.</p> <p>The <code>secret.yaml</code> template file is defined as follows:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: server-secret namespace: demo type: Opaque data: User: {{ .Values.database.user }} Host: {{ .Values.database.host }} Database: {{ .Values.database.name }} Password: {{ .Values.database.password }} Port: {{ .Values.database.port }} </code></pre> <p>I will also add the deployment and the service <code>.yaml</code> files.</p> <p><code>deployment.yaml</code></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: {{ .Values.app.name }} labels: app: {{ .Values.app.name }} spec: replicas: {{ .Values.replicaCount }} selector: matchLabels: tier: backend stack: node app: {{ .Values.app.name }} template: metadata: labels: tier: backend stack: node app: {{ .Values.app.name }} spec: containers: - name: {{ .Values.app.name }} image: &quot;{{ .Values.image.name }}:{{ .Values.image.tag }}&quot; imagePullPolicy: IfNotPresent env: - name: User valueFrom: secretKeyRef: name: server-secret key: User optional: false - name: Host valueFrom: secretKeyRef: name: server-secret key: Host optional: false - name: Database valueFrom: secretKeyRef: name: server-secret key: Database optional: false - name: Password valueFrom: secretKeyRef: name: server-secret key: Password optional: false - name: Ports valueFrom: secretKeyRef: name: server-secret key: Ports optional: false resources: limits: cpu: &quot;1&quot; memory: &quot;128M&quot; ports: - containerPort: 3000 </code></pre> <p><code>service.yaml</code></p> <pre><code>apiVersion: v1 kind: Service metadata: name: server-service spec: type: ClusterIP selector: tier: backend stack: node app: {{ .Values.app.name }} ports: - protocol: TCP port: 3000 targetPort: 3000 </code></pre> <p>Any hint?</p>
<p>You have to encode secret values to base64 yourself when you put them under <code>data</code>.</p> <p>Check the doc <a href="https://helm.sh/docs/chart_template_guide/function_list/#encoding-functions" rel="nofollow noreferrer">encoding-functions</a>.</p> <p>Try the code below (note that numeric values such as the port must be converted to a string before <code>b64enc</code>):</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: server-secret
  namespace: demo
type: Opaque
data:
  User: {{ .Values.database.user | b64enc }}
  Host: {{ .Values.database.host | b64enc }}
  Database: {{ .Values.database.name | b64enc }}
  Password: {{ .Values.database.password | b64enc }}
  Port: {{ .Values.database.port | toString | b64enc }}
</code></pre> <p>Alternatively, use <code>stringData</code> instead of <code>data</code>.</p> <p><code>stringData</code> lets you create the secret without encoding to base64; the values are passed through as plain (quoted) strings.</p> <p>Check the example in the <a href="https://kubernetes.io/docs/concepts/configuration/secret/#bootstrap-token-secrets" rel="nofollow noreferrer">link</a>.</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: server-secret
  namespace: demo
type: Opaque
stringData:
  User: {{ .Values.database.user | quote }}
  Host: {{ .Values.database.host | quote }}
  Database: {{ .Values.database.name | quote }}
  Password: {{ .Values.database.password | quote }}
  Port: {{ .Values.database.port | quote }}
</code></pre>
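<p>A quick sanity check before running the upgrade again is to render the chart locally and inspect the generated Secret (a sketch; the chart path and values mirror the command in the question):</p> <pre><code>helm template server ./aks/server \
  --set database.user=$POSTGRES_USER \
  --set database.password=$POSTGRES_PASSWORD \
  | grep -A 10 'kind: Secret'
</code></pre>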
<p>Our project is undergoing a refactor to a micro-services architecture, and we are currently considering different API gateway solutions.</p> <p>We did our research, looked at the official sites for several solutions, went over some technical comparisons of different solutions, and read articles about our top picks.</p> <p>So far our main contenders are <strong>Apache APISIX</strong> and <strong>Kong</strong>, but we are quite torn between them and would like to get a general opinion from actual users.</p> <p>Below are outlined the different properties and requirements of the project. I would appreciate it if any of you can point out some pros and cons of a solution you are familiar with in regard to them, and it would be great if someone facing similar requirements could share their experience with actually integrating one.</p> <p><strong>General Info</strong></p> <ul> <li>The project is of medium scale, has an active user base, and sees daily use around the clock with an incoming traffic count of a few thousand per minute on the backend.</li> <li>The project is hosted in a private network, and no cloud services are utilized, so we are looking for a good on-prem solution.</li> <li>Looking for a rather lightweight solution.</li> </ul> <p><strong>Technical Info and Requirements</strong></p> <ul> <li>AD FS-based authentication.</li> <li>Significant reliance on JWT.</li> <li>Using WebSocket in some micro-services, specifically Socket.io.</li> <li>Kubernetes deployment, supported by Helm.</li> <li>Full-stack under Monorepo.</li> <li>Repository and CI/CD are hosted and managed on GitLab.</li> <li>The team is trained in several coding languages but prefers working mainly with Typescript as we use React for the front-end, and NestJS for the back-end.</li> </ul> <p>Thank you!</p>
<p>Both Kong and Apache APISIX are popular and feature-rich API gateway solutions. Choosing the right one depends on your specific requirements and use case.</p> <ol> <li><p>API management features: both Kong and Apache APISIX provide a wide range of API management features, including API authentication, rate limiting, caching, SSL/TLS termination, request/response transformations, and more.</p> </li> <li><p>Scalability: both solutions are built to scale horizontally and vertically. However, Apache APISIX uses a more lightweight and efficient architecture, making it a better option for high-performance, low-latency workloads.</p> </li> <li><p>Ecosystem: both solutions have a rich ecosystem of plugins and extensions, and both can be installed and configured easily.</p> </li> </ol> <p>In summary, for large-scale, high-performance, low-latency use cases, Apache APISIX might be a better fit.</p> <p>This comparison page may also help: <a href="https://api7.ai/apisix-vs-kong" rel="nofollow noreferrer">https://api7.ai/apisix-vs-kong</a></p>
<p>I tried to send a request to 192.168.49.2:31083, but it did not work. I'm using Kubernetes with Docker.</p> <pre class="lang-none prettyprint-override"><code>kubectl expose deployment k8s-web-hello --port=3000 --type=NodePort
kubectl get service
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
k8s-web-hello   NodePort    10.109.32.222   &lt;none&gt;        3000:31083/TCP   9m54s
kubernetes      ClusterIP   10.96.0.1       &lt;none&gt;        443/TCP          7d

minikube ip
192.168.49.2
</code></pre> <p>How do I fix it?</p> <p>Edited:</p> <pre><code>minikube service k8s-web-hello
|-----------|---------------|-------------|---------------------------|
| NAMESPACE |     NAME      | TARGET PORT |            URL            |
|-----------|---------------|-------------|---------------------------|
| default   | k8s-web-hello |        3000 | http://192.168.49.2:31083 |
|-----------|---------------|-------------|---------------------------|
🏃  Starting tunnel for service k8s-web-hello.
|-----------|---------------|-------------|------------------------|
| NAMESPACE |     NAME      | TARGET PORT |          URL           |
|-----------|---------------|-------------|------------------------|
| default   | k8s-web-hello |             | http://127.0.0.1:45767 |
|-----------|---------------|-------------|------------------------|
🎉  Opening service default/k8s-web-hello in default browser...
❗  Because you are using a Docker driver on linux, the terminal needs to be open to run it.
Opening in existing browser session.
</code></pre> <p>I do not understand why I cannot access the service via the first URL, with the IP address of the Minikube virtual machine, while the second URL with the local IP works.</p>
<p>Refer to this <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">official doc</a>: you need a Service of <code>type: NodePort</code>, and you must curl the node IP with that nodePort value.</p> <p>Below is a sample:</p> <blockquote> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: MyApp
  ports:
      # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
</code></pre> </blockquote> <p>Note that, as you are using the Minikube IP with the Docker driver, you may need to <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#:%7E:text=minikube%20tunnel%20runs%20as%20a,on%20the%20host%20operating%20system" rel="nofollow noreferrer">tunnel</a> into Minikube (or keep <code>minikube service</code> running) for access; with the Docker driver the node IP may not be directly reachable from the host, which is why the tunnelled 127.0.0.1 URL works.</p> <p>Could you let us know how you are sending the request (curl or a browser over HTTP), and share your deployment YAML as well? That would make this easier to answer.</p>
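<p>For the service in the question specifically, a quick check looks like this (a sketch; the 31083 nodePort is taken from your output, and if the node IP is not reachable with your driver, fall back to the tunnelled URL printed by <code>minikube service</code>):</p> <pre><code># read the allocated nodePort
kubectl get svc k8s-web-hello -o jsonpath='{.spec.ports[0].nodePort}'

# try the node IP directly
curl http://$(minikube ip):31083

# or let minikube tunnel/open it for you
minikube service k8s-web-hello --url
</code></pre>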
<p>I want to define the thread count for my fixed thread pool in my Spring application. Where should I do that? I am confused between the YAML and application.properties files.</p>
<p>You should define the thread count for your fixed thread pool in the <code>application.properties</code> file. This is the default configuration file for Spring applications, and it is the recommended place to store configuration settings. You can specify the number of threads for your fixed thread pool with the property <code>spring.task.execution.pool.core-size</code>.</p> <p>For example, if you wanted to define a fixed thread pool with 10 threads, you would add the following to your application.properties file: <code>spring.task.execution.pool.core-size=10</code></p> <p>As per the <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.external-config.yaml.mapping-to-properties" rel="nofollow noreferrer">Spring Boot doc</a>:</p> <blockquote> <p>YAML files cannot be loaded by using the @PropertySource or @TestPropertySource annotations. So, in the case that you need to load values that way, you need to use a properties file.</p> </blockquote> <p>Refer to this <a href="https://stackoverflow.com/a/47464820/19230181">SO1</a> and <a href="https://stackoverflow.com/a/58407832/19230181">SO2</a> to learn the difference between YAML and application.properties files.</p>
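<p>For a truly fixed-size pool you typically pin the core and max sizes to the same value and bound the queue; a sketch in application.properties (property names come from Spring Boot's <code>spring.task.execution</code> namespace; the values are examples, not recommendations):</p> <pre><code># fixed pool of 10 threads for @Async / TaskExecutor usage
spring.task.execution.pool.core-size=10
spring.task.execution.pool.max-size=10
spring.task.execution.pool.queue-capacity=100
spring.task.execution.thread-name-prefix=worker-
</code></pre> <p>With an unbounded queue the pool never grows past core-size anyway, so bounding the queue is what makes max-size meaningful.</p>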
<p>After running a pipeline job in Jenkins that runs in my k8s cluster</p> <p>I am getting this error -</p> <pre><code>‘Jenkins’ doesn’t have label ‘jenkins-eks-pod’. </code></pre> <p>What am I missing in my configuration?</p> <p>Pod Logs in k8s-</p> <pre><code> 2023-02-20 14:37:03.379+0000 [id=1646] WARNING o.c.j.p.k.KubernetesLauncher#launch: Error in provisioning; agent=KubernetesSlave name: jenkins-eks-agent-h4z6t, template=PodTemplate{id='05395ad55cc56972ee3e4c69c2731189bc03a75c0b51e637dc7f868fa85d07e8', name='jenkins-eks-agent', namespace='default', slaveConnectTimeout=100, label='jenkins-non-prod-eks-global-slave', serviceAccount='default', nodeUsageMode=NORMAL, podRetention='Never', containers=[ContainerTemplate{name='jnlp', image='805787217936.dkr.ecr.us-west-2.amazonaws.com/aura-jenkins-slave:ecs-global-node_master_57', alwaysPullImage=true, workingDir='/home/jenkins/agent', command='', args='', ttyEnabled=true, resourceRequestCpu='512m', resourceRequestMemory='512Mi', resourceRequestEphemeralStorage='', resourceLimitCpu='512m', resourceLimitMemory='512Mi', resourceLimitEphemeralStorage='', envVars=[KeyValueEnvVar [getValue()=http://jenkins-non-prod.default.svc.cluster.local:8080/, getKey()=JENKINS_URL]], livenessProbe=ContainerLivenessProbe{execArgs='', timeoutSeconds=0, initialDelaySeconds=0, failureThreshold=0, periodSeconds=0, successThreshold=0}}]} java.lang.IllegalStateException: Containers are terminated with exit codes: {jnlp=0} at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.checkTerminatedContainers(KubernetesLauncher.java:275) at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:225) at hudson.slaves.SlaveComputer.lambda$_connect$0(SlaveComputer.java:298) at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:48) at jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:82) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) 2023-02-20 14:37:03.380+0000 [id=1646] INFO o.c.j.p.k.KubernetesSlave#_terminate: Terminating Kubernetes instance for agent jenkins-eks-agent-h4z6t 2023-02-20 14:37:03.380+0000 [id=1646] SEVERE o.c.j.p.k.KubernetesSlave#_terminate: Computer for agent is null: jenkins-eks-agent-h4z6t </code></pre>
<p>This error might be due to the label ‘jenkins-eks-pod’ not existing on the Jenkins server.</p> <p>To create a label on the Jenkins server:</p> <blockquote> <p>go to Manage Jenkins &gt; Manage Nodes and Clouds &gt; labels, and then enter the label name.</p> </blockquote> <p>After creating this label, try to run the job and check whether it works.</p> <p>Refer to this <a href="https://devopscube.com/jenkins-build-agents-kubernetes/" rel="nofollow noreferrer">blog by Bibin Wilson</a>.</p> <p>If you also see the error <code>No httpclient implementations found on the context classloader</code>, upgrade the Jenkins Kubernetes plugin to 3802; that error is fixed in this release. Refer to <a href="https://github.com/jenkinsci/kubernetes-plugin/releases/tag/3802.vb_b_600831fcb_3" rel="nofollow noreferrer">jenkinsci/kubernetes-plugin/releases/tag/3802</a>.</p>
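<p>Alternatively, your log shows that the Kubernetes cloud already defines a pod template whose label is <code>jenkins-non-prod-eks-global-slave</code>, so the pipeline can request that existing label instead of a new one. A sketch of a declarative Jenkinsfile (the stage contents are placeholders):</p> <pre><code>pipeline {
  // use the label that the Kubernetes pod template actually advertises
  agent { label 'jenkins-non-prod-eks-global-slave' }
  stages {
    stage('Build') {
      steps {
        sh 'echo running inside the EKS agent pod'
      }
    }
  }
}
</code></pre> <p>Whichever direction you choose, the label the pipeline asks for and the label the pod template advertises have to match exactly.</p>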
<p>In Kubernetes we need a new service to handle the root path, but but still a catch everything else on our current frontend.</p> <p>Current frontend Ingress</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: current-frontend labels: app: current-frontend tier: frontend annotations: kubernetes.io/ingress.class: nginx spec: tls: - hosts: - my.domain.com secretName: tls-secret rules: - host: my.domain.com http: paths: - backend: service: name: current-frontend port: number: 80 path: / pathType: Prefix </code></pre> <p>New service Ingress</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: new-service labels: app: new-service tier: frontend annotations: kubernetes.io/ingress.class: nginx spec: tls: - hosts: - my.domain.com secretName: tls-secret rules: - host: my.domain.com http: paths: - backend: service: name: new-service port: number: 80 path: /someendpoint pathType: ImplementationSpecific - backend: service: name: new-service port: number: 80 path: / pathType: Exact </code></pre> <p>According to the documentation of Kuberntes Ingress, it should prioritize Exact over Prefix</p> <blockquote> <p>If two paths are still equally matched, precedence will be given to paths with an exact path type over prefix path type.</p> </blockquote> <p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#multiple-matches" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#multiple-matches</a></p> <p>The problem is that everything else then my.domain.com/someendpoint goes to the current-frontend, while the expected would be that my.domain.com/ would go to new-service.</p> <p>How do I achieving this?</p> <hr /> <p><strong>Solution</strong></p> <p>I got it working, even If it doesn't seams to be the optimal solution (According to the Documentation)</p> <p>I Followed Hemanth Kumar's answer and changed the Current Frontend to use Regex, with (.+) instead of (.*) as I wanted at least one char after the slash for it to be hit.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: current-frontend labels: app: current-frontend tier: frontend annotations: nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; kubernetes.io/ingress.class: nginx spec: tls: - hosts: - my.domain.com secretName: tls-secret rules: - host: my.domain.com http: paths: - backend: service: name: current-frontend port: number: 80 path: /(.+) pathType: Prefix </code></pre> <p>At the same time I needed to change the New service to use Prefix instead of Exact as it does not work, event if there is no other services to hit.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: new-service labels: app: new-service tier: frontend annotations: kubernetes.io/ingress.class: nginx spec: tls: - hosts: - my.domain.com secretName: tls-secret rules: - host: my.domain.com http: paths: - backend: service: name: new-service port: number: 80 path: /someendpoint pathType: ImplementationSpecific - backend: service: name: new-service port: number: 80 path: / pathType: Prefix </code></pre>
<p>Regex path matching can be enabled by setting the <code>nginx.ingress.kubernetes.io/use-regex</code> annotation to <code>true</code>.</p> <p>See an example using the use-regex annotation:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: &quot;true&quot;
spec:
  ingressClassName: nginx
  rules:
  - host: test.com
    http:
      paths:
      - path: /foo/.*
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
</code></pre> <p>Refer to this <a href="https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/" rel="nofollow noreferrer">ingress path matching</a> page for more information on path priority.</p>
<p>I want to organize my web APIs with the Kubernetes Ingress tool.</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/use-regex: &quot;true&quot;
    # nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: api.myapp.com
      http:
        paths:
          - pathType: Prefix
            path: /catalog
            backend:
              service:
                name: myapp-catalog-service
                port:
                  number: 80
          - pathType: Prefix
            path: /identity
            backend:
              service:
                name: myapp-identity-service
                port:
                  number: 80
</code></pre> <p>With this configuration, I can access &quot;<code>api.myapp.com/catalog</code>&quot;.</p> <p>But &quot;<code>api.myapp.com/identity</code>&quot; is 404 not found. How can I fix this configuration?</p>
<p>This looks like an issue with path rewriting causing the 404 error. Try adding the annotation below to the Ingress:</p> <pre><code>nginx.ingress.kubernetes.io/rewrite-target: /$2
</code></pre> <p>As per this rewrite <a href="https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/rewrite" rel="nofollow noreferrer">target example</a>, the <code>$2</code> placeholder refers to a capture group declared in the <code>path</code>, and the rewrite-target is the URI the traffic is rewritten to before it reaches your service, as shown in the sketch below.</p> <p>As per the Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout" rel="nofollow noreferrer">ingress</a> docs, a simple fanout like the example below routes foo.bar.com/foo to port 4200 and foo.bar.com/bar to port 8080:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 4200
      - path: /bar
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 8080
</code></pre> <p>Refer to this <a href="https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/" rel="nofollow noreferrer">ingress path matching doc</a> and this <a href="https://stackoverflow.com/questions/56942594/kubernetes-ingress-gives-404-error-for-a-particular-service">SO question</a>.</p>
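<p>Applied to the Ingress from the question, a sketch with regex capture groups (the <code>use-regex</code> annotation and the <code>(/|$)(.*)</code> pattern follow the ingress-nginx rewrite example; the service names are taken from the question):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: &quot;true&quot;
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: api.myapp.com
      http:
        paths:
          - path: /catalog(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: myapp-catalog-service
                port:
                  number: 80
          - path: /identity(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: myapp-identity-service
                port:
                  number: 80
</code></pre> <p>With this, a request to <code>api.myapp.com/catalog/items</code> reaches the catalog service as <code>/items</code>; drop the rewrite annotation if your services actually expect the <code>/catalog</code> or <code>/identity</code> prefix.</p>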
<p>I have deployed Jupyterhub on Kubernetes following this guide <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/" rel="nofollow noreferrer">link</a>, I have setup nbgrader and ngshare on jupyterhub using this guide <a href="https://nbgrader.readthedocs.io/en/stable/configuration/jupyterhub_config.html" rel="nofollow noreferrer">link</a>, I have a Learning management system(LMS) similar to moodle, I want to view the list of assignments both for instructors and students I can do that by using the rest API of Jupyternotebook like this</p> <pre><code>import requests import json api_url = 'http://xx.xxx.xxx.xx/user/kevin/api/contents/release' payload = {'token': 'XXXXXXXXXXXXXXXXXXXXXXXXXX'} r = requests.get(api_url,params = payload) r.raise_for_status() users = r.json() print(json.dumps(users, indent = 1)) </code></pre> <p>now I want to grade all submitted assignments using the nbgrader command <code>nbgrader autograde &quot;Assignment1&quot;</code>, I can do that by logging into instructor notebook server and going to terminal and running the same command but I want to run this command on the notebook server terminal using Jupyter Notebook server rest API, so that instructor clicks on grade button on the LMS frontend which sends a request to LMS backend and which sends a rest API request(which has the above command) to jupyter notebook , which runs the command on terminal and returns the response to LMS backend. I cannot find anything similar on the Jupyter Notebook API <a href="https://jupyter-server.readthedocs.io/en/latest/developers/rest-api.html" rel="nofollow noreferrer"> documentation</a> there is endpoint to start a terminal but not how to run commands on it.</p>
<p>An easier way to invoke the terminal from a Jupyter notebook is to use the <code>%%bash</code> cell magic and treat the cell as a terminal:</p> <pre><code>%%bash
head xyz.txt
pip install keras
git add model.h5.dvc data.dvc metrics.json
git commit -m &quot;Second model, trained with 2000 images&quot;
</code></pre> <p>For more information, refer to these <a href="https://www.dominodatalab.com/blog/lesser-known-ways-of-using-notebooks" rel="nofollow noreferrer">advanced Jupyter notebook tricks</a>.</p> <p><strong>Also check this <a href="https://stackoverflow.com/questions/54475896/interact-with-jupyter-notebooks-via-api">link</a> on interacting with Jupyter Notebooks via API.</strong></p>
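<p>For the grading use case in the question, the same idea is just a notebook cell that runs the nbgrader CLI (a sketch; the assignment name comes from the question and the cell must run in the instructor's course directory):</p> <pre><code>%%bash
# run inside the instructor's notebook server, from the course directory
nbgrader autograde &quot;Assignment1&quot;
</code></pre> <p>Keep in mind that the contents REST API can only read and write files; executing such a cell remotely means driving a kernel over the kernels (WebSocket) API, or running the notebook with a tool such as papermill.</p>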
<p>My service (with no ingress) is running in an Amazon EKS cluster, and I was asked to provide a CA-signed cert for a third party that consumes the API hosted by the service. I have tried provisioning my cert using the certificates.k8s.io API, but it is still self-signed, I believe. Is there a CA that provides certificates for services in a Kubernetes cluster?</p>
<p>Yes, certificates created using the certificates.k8s.io API are signed by a <a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/#configuring-your-cluster-to-provide-signing" rel="nofollow noreferrer">dedicated CA</a>. It is possible to configure your cluster to use the cluster root CA for this purpose, but you should never rely on this, and you should not assume that these certificates will validate against the cluster root CA.</p> <p>Refer to the <a href="https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/" rel="nofollow noreferrer">Certificate Signing Request process</a>.</p>
<p>I want to restart my Kubernetes pods every 21 days, probably with a cron job that can do that. That job should be able to restart pods from all of my deployments. I don't want to write any extra script for it; is that possible? Thank you.</p>
<p>If you want it to run every 21 days regardless of the day of the month, cron does not support a simple, direct translation of that requirement, and neither do most similar scheduling systems.</p> <p>In that specific case you'd have to track the days yourself and calculate the next run by adding 21 days to the current date whenever the function runs.</p> <p>My suggestion: if you want it to run at 00:00 on the 21st day of each month:</p> <pre><code>CronJob schedule: 0 0 21 * *   (at 00:00 on day-of-month 21)
</code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/jfqso.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jfqso.png" alt="enter image description here" /></a></p> <p>If you want it to run at 12:00 on the 21st day of each month:</p> <pre><code>CronJob schedule: 0 12 21 * *   (at 12:00 on day-of-month 21)
</code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/DT5di.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DT5di.png" alt="enter image description here" /></a></p> <p>Alternatively, if you want to add the 21 days yourself, you can use setInterval to schedule a function at a fixed interval, as below (note that setInterval takes milliseconds):</p> <pre><code>const waitMilliseconds = 21 * 24 * 60 * 60 * 1000; // 21 days

function scheduledFunction() {
  // Do something
}

// Run now
scheduledFunction();

// Run every 21 days
setInterval(scheduledFunction, waitMilliseconds);
</code></pre> <p>For more information, refer to this <a href="https://stackoverflow.com/questions/52422300/how-to-schedule-pods-restart">SO link</a> on how to schedule a Pod restart.</p>
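<p>Since the goal is restarting deployments without a custom script, a Kubernetes CronJob that runs <code>kubectl rollout restart</code> on the monthly schedule above is a common pattern. A minimal sketch (the ServiceAccount name, namespace, and the bitnami/kubectl image are assumptions, and the account needs RBAC permission to patch deployments):</p> <pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart-deployments
spec:
  schedule: &quot;0 0 21 * *&quot;          # 00:00 on day-of-month 21
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: deployment-restarter   # must be allowed to patch deployments
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - kubectl rollout restart deployment -n default   # restarts every deployment in the namespace
</code></pre>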
<p>I have followed the quickstart of ArgoCD documentation <a href="https://argo-cd.readthedocs.io/en/stable/#quick-start" rel="nofollow noreferrer">https://argo-cd.readthedocs.io/en/stable/#quick-start</a></p> <p>The only difference is that I deployed in <code>argo-cd</code> namespace instead of <code>argocd</code>.</p> <p>Now, I'm having issues to access this application through my Nginx ingress. It's throwing <code>ERR_TOO_MANY_REDIRECTS</code> on the browser and in the logs of Nginx:</p> <pre><code>172.71.6.219 - - [05/Jul/2023:23:01:54 +0000] &quot;GET / HTTP/1.1&quot; 308 164 &quot;-&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36&quot; 904 0.000 [argo-cd-argocd-server-http] [] - - - - ba62103809faf5da2f9e2661a2c1f50b 172.71.6.219 - - [05/Jul/2023:23:01:54 +0000] &quot;GET / HTTP/1.1&quot; 308 164 &quot;-&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36&quot; 904 0.000 [argo-cd-argocd-server-http] [] - - - - 0d05a1f48fdceb5d4c557884c3850074 172.71.6.219 - - [05/Jul/2023:23:01:55 +0000] &quot;GET / HTTP/1.1&quot; 308 164 &quot;-&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36&quot; 904 0.000 [argo-cd-argocd-server-http] [] - - - - d724f5d890ce91e713c768db2126d44f .......... more and more </code></pre> <p>On the ArgoCD server, I've setted the <code>server.insecure:&quot;true&quot;</code> on <code>argocd-cmd-params-cm</code>.</p> <p>This is my Ingress:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: argocd-ingress namespace: argo-cd annotations: cert-manager.io/issuer: prod-issuer cert-manager.io/issuer-kind: OriginIssuer cert-manager.io/issuer-group: cert-manager.k8s.cloudflare.com nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;false&quot; nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTP&quot; spec: tls: - hosts: - '*.my_site.dev' secretName: argocd-secret ingressClassName: nginx rules: - host: argocd.my_site.dev http: paths: - path: / pathType: Prefix backend: service: name: argocd-server port: name: http </code></pre> <p>I've tried to set <code>server.insecure:&quot;false&quot;</code> and my Ingress like this:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: argocd-ingress namespace: argo-cd annotations: cert-manager.io/issuer: prod-issuer cert-manager.io/issuer-kind: OriginIssuer cert-manager.io/issuer-group: cert-manager.k8s.cloudflare.com nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot; spec: tls: - hosts: - '*.my_site.dev' secretName: argocd-secret ingressClassName: nginx rules: - host: argocd.my_site.dev http: paths: - path: / pathType: Prefix backend: service: name: argocd-server port: name: https </code></pre> <p>PS: Of course I've deleted the argo-cd-server between the changes on the ConfigMap.</p> <p>Nothing is working.</p> <p>Do you guys have any idea how to solve this?</p>
<p><code>ERR_TOO_MANY_REDIRECTS</code> is the error a browser returns when it fails to load a page because of a redirect loop: the requested page redirects to another page, and that page redirects back to the origin. Follow this <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">link</a> on path rewriting and add the annotation below to the Ingress YAML; it may help in getting past this error.</p> <pre><code>nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>
<p>My two microservices talk to each other via gRPC and I am using Envoy proxy. Microservice A calls microservice B, and the Envoy container is inside microservice A's pod.</p> <p>My problem is that if microservice B's pod crashes for any reason, I don't get any error at A; it just keeps trying to establish the connection.</p> <p>My expectation is that, since B's pod has crashed, the Envoy proxy should immediately return some &quot;can't connect&quot; error, because it knows the connection is not possible as no pods are available.</p> <p>I have a standard connection timeout at A, but I want Envoy to fail fast in this case.</p> <p>Nothing can be done on B's side since it will already have crashed, and A relies on the Envoy proxy.</p>
<p>To get a status or error quickly when microservice B fails, Envoy has a feature called <strong><a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/circuit_breaking#circuit-breaking" rel="nofollow noreferrer">circuit breaking</a></strong>. It stops sending requests to pod B while B is experiencing issues and unable to respond, so callers fail fast instead of waiting. Refer to this <a href="https://blog.christianposta.com/microservices/01-microservices-patterns-with-envoy-proxy-part-i-circuit-breaking/" rel="nofollow noreferrer">blog</a> by Christian Posta, which clearly explains circuit breaking with Envoy proxy and walks through its configuration with a demo.</p> <p>Envoy circuit breaking works at the request level between the microservices, whereas to detect and alert on a pod-level crash you need <strong>Prometheus</strong>. Prometheus is a good fit for microservices because you just need to expose a metrics port and don’t need to add much complexity or run additional services. You can find alert templates for pod crash-looping and unhealthy pods <a href="https://samber.github.io/awesome-prometheus-alerts/rules.html#rule-kubernetes-1-18" rel="nofollow noreferrer">here</a>; configure them accordingly and they will alert you when a pod is crashing or unhealthy.</p>
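<p>A minimal sketch of the relevant fragment of an Envoy cluster definition (field names follow Envoy's circuit-breaker and outlier-detection config; the thresholds are illustrative, and the cluster name is assumed to map to your service B):</p> <pre><code>clusters:
  - name: service_b
    connect_timeout: 1s
    type: STRICT_DNS
    circuit_breakers:
      thresholds:
        - priority: DEFAULT
          max_connections: 100
          max_pending_requests: 10   # excess requests fail immediately with 503
          max_requests: 100
          max_retries: 3
    outlier_detection:               # eject endpoints that keep failing
      consecutive_5xx: 5
      interval: 10s
      base_ejection_time: 30s
</code></pre> <p>The combination of a short connect timeout, bounded pending requests, and outlier ejection is what turns a crashed upstream into a fast failure at A rather than an endless connection attempt.</p>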
<p>I tried the command below, but it only provides the creation date. How do I find out when the HPA was last updated?</p> <pre><code>kubectl describe hpa -n xyz hpa-hello
</code></pre> <p>Example: today I create an HPA with max replicas 3, and tomorrow I apply the same YAML but with max replicas 6. If I do that, I can see only the creation date and not the update date, although the description correctly shows the updated max replicas as 6.</p> <p>Update: there is no direct way to obtain this information!</p>
<p>I assume here that the &quot;last updated&quot; date or time means when the HPA last autoscaled. To get details about the Horizontal Pod Autoscaler, you can use <code>kubectl get hpa</code> with the <code>-o yaml</code> flag. The status field contains information about the current number of replicas and any recent autoscaling events.</p> <pre><code>kubectl get hpa &lt;hpa name&gt; -o yaml
</code></pre> <p>In this output we can see the last transition times along with their messages. Refer to this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/horizontal-pod-autoscaling#kubectl-get" rel="nofollow noreferrer">doc</a>, which might help you, and also to this one on <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/horizontal-pod-autoscaling#viewing" rel="nofollow noreferrer">viewing</a> the details of an HPA.</p> <p>Edit 1: Moreover, I have gone through many docs and I don't think we can get this information from a get command. Instead, <a href="https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/" rel="nofollow noreferrer">audit logging</a> in Kubernetes can be configured to log all activity generated by users for a specific resource type; this might help you find when the spec was changed.</p>
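<p>A quick sketch of pulling just the condition timestamps out of the status with jsonpath (names from the question; this assumes your cluster serves autoscaling/v2, where conditions live under <code>.status.conditions</code>, and it reflects scaling activity rather than spec edits):</p> <pre><code>kubectl get hpa hpa-hello -n xyz \
  -o jsonpath='{range .status.conditions[*]}{.type}{&quot;\t&quot;}{.lastTransitionTime}{&quot;\t&quot;}{.message}{&quot;\n&quot;}{end}'
</code></pre>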
<p>How do I get the equivalent of <code>kubectl get po -o yaml</code> in Golang?</p> <p>I tried to run this code: <a href="https://github.com/kubernetes/client-go/tree/master/examples/out-of-cluster-client-configuration" rel="nofollow noreferrer">Go client example</a>,</p> <p>where the expected output is</p> <pre><code>./app
There are 3 pods in the cluster
There are 3 pods in the cluster
There are 3 pods in the cluster
</code></pre> <p>but when I run it I get</p> <pre><code>go build -o app .
./app
panic: exec plugin: invalid apiVersion &quot;client.authentication.k8s.io/v1alpha1&quot;

goroutine 1 [running]:
main.main()
        /Users/padmanabanpr/Documents/client-go/examples/out-of-cluster-client-configuration/main.go:61 +0x5b6
</code></pre>
<p>This might be due to your k8s client/server version being 1.24 or higher, while the exec plugin entry in your kubeconfig still requests the removed <code>client.authentication.k8s.io/v1alpha1</code> API. Please refer to <a href="https://github.com/aws/aws-cli/issues/6920" rel="nofollow noreferrer">https://github.com/aws/aws-cli/issues/6920</a> for further clarification.</p>
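<p>If this is an EKS kubeconfig generated by an old AWS CLI, the usual fix is to upgrade the AWS CLI and regenerate the kubeconfig entry, which writes <code>client.authentication.k8s.io/v1beta1</code> into the exec section (a sketch; replace the cluster name and region with your own):</p> <pre><code># check what the kubeconfig currently requests
grep client.authentication ~/.kube/config

# upgrade awscli (v2), then regenerate the kubeconfig entry
aws eks update-kubeconfig --name &lt;cluster-name&gt; --region &lt;region&gt;
</code></pre>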
<p>We have a batch data-processing script inside a container and want to check whether it is alive and actually working, or whether it should be restarted.</p> <p>It is a PHP command-line script and doesn't expose any kind of server. It currently runs in Docker and will soon run in Kubernetes.</p> <p>How can we monitor the liveness of such a script without introducing unnecessary features/libraries?</p>
<p>You can use a liveness probe <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command" rel="nofollow noreferrer">command</a> to check whether the process is alive or not. Below is the sample from the k8s documentation. You can write a small script or command that returns 0 if the process is alive and a non-zero code if the process is dead.</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
</code></pre>
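<p>For a PHP command-line script specifically, a minimal sketch of such a probe (the script name <code>process_batch.php</code> is an assumption, and <code>pgrep</code> must be available in the image) could simply check that the process still exists:</p> <pre><code>livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - pgrep -f process_batch.php    # exits 0 while the script is running, non-zero otherwise
  initialDelaySeconds: 10
  periodSeconds: 30
</code></pre> <p>If the script can hang while its process is still alive, a slightly more robust variant is to have the script touch a heartbeat file periodically and let the probe check the file's age instead of the process itself.</p>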
<p>I am following <a href="https://aws.amazon.com/blogs/containers/building-a-gitops-pipeline-with-amazon-eks/" rel="nofollow noreferrer">Building a GITOPS pipeline with EKS</a>.</p> <p>After <code>flux create source</code> and <code>flux create kustomization</code></p> <p>my fleet-intra repo</p> <pre><code>tree -L 3 . └── adorable-mushroom-1655535375 ├── flux-system │   ├── gotk-components.yaml │   ├── gotk-sync.yaml │   └── kustomization.yaml └── guestbook-gitops.yaml </code></pre> <p>I have problem with kustomization</p> <pre><code>flux get kustomizations NAME REVISION SUSPENDED READY MESSAGE flux-system main/75d8189db82f1c2c77d22a9deb6baea06f179d0c False False failed to decode Kubernetes YAML from /tmp/kustomization-985813897/adorable-mushroom-1655535375/guestbook-gitops.yaml: missing Resource metadata </code></pre> <p>My guestbook-gitops.yaml looks like this</p> <pre><code>apiVersion: kustomize.toolkit.fluxcd.io/v1beta2 kind: Kustomization metadata: name: guestbook-gitops namespace: flux-system spec: interval: 1h0m0s path: ./deploy prune: true sourceRef: kind: GitRepository name: guestbook-gitops </code></pre> <p>What is wrong with metadata?</p>
<p>The <a href="https://github.com/weaveworks/guestbook-gitops/blob/master/deploy/kustomization.yaml" rel="nofollow noreferrer">guestbook repository</a> used in the guide you followed contains a sample kustomization YAML under the deploy path. You can prepare your kustomize YAML in a similar manner and check it again.</p>
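<p>For reference, a minimal <code>kustomization.yaml</code> of that shape is sketched below (the resource file names are assumptions; list whatever manifests actually live in your <code>./deploy</code> directory). The &quot;missing Resource metadata&quot; error typically means the kustomize controller tried to decode a YAML document that lacks a proper <code>metadata</code> section, so every document in the referenced path should either be a complete Kubernetes object or be pulled in through a kustomization like this:</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml   # hypothetical file names
  - service.yaml
</code></pre>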
<p>I used the following guide to set up my chaostoolkit cluster: <a href="https://chaostoolkit.org/deployment/k8s/operator/" rel="nofollow noreferrer">https://chaostoolkit.org/deployment/k8s/operator/</a></p> <p>I am attempting to kill a pod using kubernetes, however the following error:</p> <pre><code>HTTP response body: {&quot;kind&quot;:&quot;Status&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;metadata&quot;:{},&quot;status&quot;:&quot;Failure&quot;,&quot;message&quot;:&quot;pods is forbidden: User \&quot;system:serviceaccount:chaostoolkit-run:chaostoolkit-b3af262edb\&quot; cannot list resource \&quot;pods\&quot; in API group \&quot;\&quot; in the namespace \&quot;task-dispatcher\&quot;&quot;,&quot;reason&quot;:&quot;Forbidden&quot;,&quot;details&quot;:{&quot;kind&quot;:&quot;pods&quot;},&quot;code&quot;:403} </code></pre> <p>I set my serviceAccountName to an RBAC that I created but for some reason my kubernetes defaults to &quot;system:serviceaccount:chaostoolkit-run:chaostoolkit-b3af262edb&quot;.</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: my-chaos-exp namespace: chaostoolkit-run data: experiment.yaml: | --- version: 1.0.0 title: Terminate Pod Experiment description: If a pod gets terminated, a new one should be created in its place in a reasonable amount of time. tags: [&quot;kubernetes&quot;] secrets: k8s: KUBERNETES_CONTEXT: &quot;docker-desktop&quot; method: - type: action name: terminate-k8s-pod provider: type: python module: chaosk8s.pod.actions func: terminate_pods arguments: label_selector: '' name_pattern: my-release-rabbitmq-[0-9]$ rand: true ns: default --- apiVersion: chaostoolkit.org/v1 kind: ChaosToolkitExperiment metadata: name: my-chaos-exp namespace: chaostoolkit-crd spec: serviceAccountName: test-user automountServiceAccountToken: false pod: image: chaostoolkit/chaostoolkit:full imagePullPolicy: IfNotPresent experiment: configMapName: my-chaos-exp configMapExperimentFileName: experiment.yaml restartPolicy: Never </code></pre>
<p>The error you shared shows that the run is using the default chaostoolkit service account (&quot;chaostoolkit-b3af262edb&quot; in the &quot;chaostoolkit-run&quot; namespace), and it looks like the role associated with it does not have the required permissions.</p> <p>The service account &quot;test-user&quot; which is being used in the ChaosToolkitExperiment definition should have role access that allows it to list and delete pods.</p> <p>Please make sure a proper service account with the proper role access is actually used for the experiment.</p>
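<p>As a rough sketch (the <code>task-dispatcher</code> namespace comes from the error message and the object names are made up; adjust them to your setup), the RBAC objects could look like this:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-chaos-role
  namespace: task-dispatcher         # namespace whose pods will be terminated
rules:
- apiGroups: [&quot;&quot;]
  resources: [&quot;pods&quot;]
  verbs: [&quot;get&quot;, &quot;list&quot;, &quot;delete&quot;]   # terminate_pods needs to list and delete pods
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-chaos-binding
  namespace: task-dispatcher
subjects:
- kind: ServiceAccount
  name: test-user                    # service account referenced in the experiment
  namespace: chaostoolkit-run        # namespace where the experiment pod runs
roleRef:
  kind: Role
  name: pod-chaos-role
  apiGroup: rbac.authorization.k8s.io
</code></pre>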
<p>I have 2 AKS clusters, Cluster 1 and Cluster 2 both running Istio 1.14 minimal out-of-the-box (default configs).</p> <p>Everything on Cluster 1 works as expected (after deploying Istio).</p> <p>On Cluster 2, all HTTPS outbound connections initiated from my services (injected with istio-proxy) fail.</p> <pre><code>curl http://www.google.com #works curl https://www.google.com #fails </code></pre> <p>If I create a service entry for google, then the https curl works:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: google spec: hosts: - www.google.com ports: - number: 443 name: https protocol: HTTPS resolution: DNS location: MESH_EXTERNAL </code></pre> <p>Both Istio installations are out-of-the-box so <code>meshConfig.outboundTrafficPolicy.mode</code> is set to <code>ALLOW_ANY</code> (double-checked).</p> <p>I read online that there were some Istio bugs that would cause this behavior, but I don't think it's the case here. I also compared the Istio configs between the 2 clusters and they really seem to be the same.</p> <p>I'm starting to think the problem may lie in some cluster configs because I know there are some differences between the 2 clusters here.</p> <p>How would you go about troubleshooting this? Do you think the issue is related to Istio or Cluster services/configs? What should I look into first?</p>
<p>You are correct. By default, ALLOW_ANY is the value set for meshConfig.outboundTrafficPolicy.mode. This can be verified in the cluster by running the command below.</p> <pre><code>kubectl get configmap istio -n istio-system -o yaml | grep -o &quot;mode: ALLOW_ANY&quot; </code></pre> <p>Please also refer to the <a href="https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/" rel="nofollow noreferrer">Istio documentation</a> for the options available for accessing external services.</p>
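<p>To narrow down whether the failing cluster's sidecars actually have the passthrough cluster for unknown destinations, a quick check with istioctl (pod and namespace names are placeholders) is:</p> <pre><code># Should list a PassthroughCluster entry when ALLOW_ANY is in effect
istioctl proxy-config cluster &lt;your-app-pod&gt; -n &lt;namespace&gt; | grep -i passthrough

# Compare the outbound listener on port 443 between the working and failing clusters
istioctl proxy-config listener &lt;your-app-pod&gt; -n &lt;namespace&gt; --port 443
</code></pre> <p>If the passthrough cluster is missing, or the 443 listener points to BlackHoleCluster, the difference lies in the mesh configuration served by istiod rather than in the cluster's networking.</p>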
<p>I have a container image that already has the <code>index.html</code> file. My goal is to pick that <code>index.html</code> update it (using simple placeholder replacement) and then put it back.</p> <p>I am trying to use the <code>initContainer</code> for the same. But somehow it's not working out for me.</p> <p>Here is what I have tried in <code>deployment.yaml</code></p> <pre><code> initContainers: - name: init-container image: &quot;my-image-name&quot; command: [&quot;/bin/sh&quot;, &quot;-c&quot;] args: - cp /usr/share/nginx/html/index.html /tmp/index.html - sed -i 's#&lt;base href=&quot;./&quot;&gt;#&lt;base href=&quot;/my-base/&quot;&gt;#' /tmp/index.html - cp /tmp/index.html /usr/share/nginx/html/index.html volumeMounts: - name: custom-index mountPath: /usr/share/nginx/html/ containers: - name: {{ .Chart.Name }} securityContext: {{- toYaml .Values.securityContext | nindent 12 }} image: &quot;my-image-name&quot; ports: - name: http containerPort: 8080 protocol: TCP volumeMounts: - mountPath: /usr/share/nginx/html/ name: custom-index volumes: - emptyDir: {} name: custom-index </code></pre> <p>However with above, I am getting the following error:</p> <pre><code>Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting &quot;/var/lib/kubelet/pods/bb343f44-00eb-477b-9e26-17949a73100d/volume-subpaths/custom-index/my-container-name/7&quot; to rootfs at &quot;/usr/share/nginx/html/&quot;: mount /var/lib/kubelet/pods/bb343f44-00eb-477b-9e26-17949a73100d/volume-subpaths/custom-index/my-container-name/7:/usr/share/nginx/html/ (via /proc/self/fd/6), flags: 0x5000: not a directory: unknown </code></pre> <p>How to achieve this?</p> <p>Please note that I cannot maintain the separate copy of <code>index.html</code>, I need to use the one present in the main container, update it and put it back.</p>
<p>For the initContainer, can you please check with the configuration below? It passes the whole copy-and-edit pipeline as a single <code>sh -c</code> string and writes the result into the shared volume mounted at a separate path (<code>/remoteDir</code>) instead of over <code>/usr/share/nginx/html/</code>. Correct the sed statement if required.</p> <pre><code> command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;cp /usr/share/nginx/html/index.html /remoteDir &amp;&amp; sed -i 's#&lt;base href=\&quot;./\&quot;&gt;#&lt;base href=\&quot;/my-base/\&quot;&gt;#' /remoteDir/index.html&quot;]
 volumeMounts:
        - name: custom-index
          mountPath: /remoteDir
</code></pre>
<p>I'm trying to set up balancing between applications in different namespaces with Istio. But I am always getting reply from one application in dev namespace. My point is get response from applications from same prefix, like: <a href="http://example.com" rel="nofollow noreferrer">http://example.com</a> - need to balance between namespaces <code>dev</code>, <code>staging</code>, <code>production</code></p> <p>I deployed <code>VirtualService</code>, <code>DestinationRule</code> and <code>Gateway</code> in different namespaces. Also I have tried to deploy one <code>Gateway</code> in the <code>istio-system</code> namespace and set correct path to it in the all of <code>VirtualService</code>. Obviously I don't understand how it works... My <code>VirtualService</code></p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: istio-nginx-vs namespace: dev spec: hosts: - xxx.elb.amazonaws.com gateways: - dev/istio-nginx-gw http: - route: - destination: host: istio-nginx.dev.svc.cluster.local port: number: 80 --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: istio-nginx-vs namespace: staging spec: hosts: - xxx.elb.amazonaws.com gateways: - staging/istio-nginx-gw http: - route: - destination: host: istio-nginx.staging.svc.cluster.local port: number: 80 --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: istio-nginx-vs namespace: production spec: hosts: - xxx.elb.amazonaws.com gateways: - production/istio-nginx-gw http: - route: - destination: host: istio-nginx.production.svc.cluster.local port: number: 80 </code></pre> <p>Here is the <code>DestinationRule</code></p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: istio-nginx-dr namespace: dev spec: host: istio-nginx.svc.dev.cluster.local trafficPolicy: loadBalancer: simple: ROUND_ROBIN --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: istio-nginx-dr namespace: staging spec: host: istio-nginx.staging.svc.cluster.local trafficPolicy: loadBalancer: simple: ROUND_ROBIN --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: istio-nginx-dr namespace: production spec: host: istio-nginx.production.svc.cluster.local trafficPolicy: loadBalancer: simple: ROUND_ROBIN </code></pre> <p>And the <code>Gateway</code></p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: istio-nginx-gw namespace: dev spec: selector: app: istio-gateway servers: - port: number: 80 name: http protocol: HTTP hosts: - xxx.elb.amazonaws.com --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: istio-nginx-gw namespace: staging spec: selector: app: istio-gateway servers: - port: number: 80 name: http protocol: HTTP hosts: - xxx.elb.amazonaws.com --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: istio-nginx-gw namespace: production spec: selector: app: istio-gateway servers: - port: number: 80 name: http protocol: HTTP hosts: - xxx.amazonaws.com </code></pre>
<p>The hosts value &quot;xxx.elb.amazonaws.com&quot; is the same across the gateway and virtual service definitions in the different namespaces, so only one routing rule gets applied for the &quot;istio-nginx&quot; service and all requests are served by a single application. More details on routing rule precedence can be found in the <a href="https://istio.io/latest/docs/concepts/traffic-management/#routing-rule-precedence" rel="nofollow noreferrer">Istio documentation</a>.</p> <p>Using the istioctl tool you can verify the routes configured:</p> <pre><code>istioctl pc routes deploy/istio-ingressgateway.istio-system | grep &quot;istio-nginx&quot; </code></pre>
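<p>If the goal is to spread traffic for the same host across the three namespaces, one approach (a sketch, not a drop-in config; weights and the gateway reference are assumptions) is to keep a single Gateway and a single VirtualService whose route lists all three destinations with weights:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istio-nginx-vs
  namespace: dev                 # any one namespace that references the shared gateway
spec:
  hosts:
    - xxx.elb.amazonaws.com
  gateways:
    - dev/istio-nginx-gw         # single gateway definition
  http:
    - route:
        - destination:
            host: istio-nginx.dev.svc.cluster.local
            port:
              number: 80
          weight: 34
        - destination:
            host: istio-nginx.staging.svc.cluster.local
            port:
              number: 80
          weight: 33
        - destination:
            host: istio-nginx.production.svc.cluster.local
            port:
              number: 80
          weight: 33
</code></pre>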
<p>I am having a following working SFTP Deployment in my Kubernetes:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: sftp-deployment spec: replicas: 1 selector: matchLabels: app: sftp template: metadata: labels: app: sftp spec: volumes: - name: sftp-storage persistentVolumeClaim: claimName: sftp-pvc containers: - name: sftp image: atmoz/sftp ports: - containerPort: 22 env: - name: SFTP_USERS value: &quot;user1:password:::user-directory&quot; volumeMounts: - name: ftm-sftp-storage mountPath: &quot;/home/user1/user-directory&quot; resources: requests: memory: &quot;64Mi&quot; cpu: &quot;250m&quot; limits: memory: &quot;128Mi&quot; cpu: &quot;500m&quot; </code></pre> <p>I know that the Deployment is working fine because if I <code>port-forward</code> the deployment I can easily access the directories on port 22 using any FTP Client, like FileZilla.</p> <p>Of course, what I want to achieve is to expose this deployment to the public using Istio. I have tried to do it by exposing the port 22 in a custom gateway:</p> <pre><code>apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: sftp-gw spec: selector: istio: ingressgateway servers: - hosts: - '*.example.com' port: name: sftp-gw-server number: 22 protocol: TCP targetPort: 22 </code></pre> <p>And then creating the Virtual Service for it:</p> <pre><code>apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: sftp-vs spec: gateways: - sftp-gw.default.svc.cluster.local hosts: - my-host.example.com tcp: - match: - port: 22 route: - destination: host: sftp-service.default.svc.cluster.local port: number: 22 </code></pre> <p>The Service for the Deployment looks as such:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: sftp-service spec: ports: - protocol: TCP port: 22 targetPort: 22 selector: app: sftp-deployment </code></pre> <p>This configuration does not work for me. Does anybody know how to configure properly the Istio to expose the port 22 of the Service/Deployment so it can be accessed via for example FTP Client? Thanks in advance.</p>
<p>Please make the changes below in the application Gateway, the VirtualService and the Istio ingress gateway service. Because the ports in these objects aren't matching, the requests won't get routed properly. The changes below assume NodePort; if a LoadBalancer is used, kindly adjust accordingly.</p> <p>Gateway:</p> <pre><code>apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: sftp-gw
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - '*.example.com'
      port:
        name: tcp
        number: &lt;mention nodeport number of tcp service ingwservice&gt;
        protocol: TCP
</code></pre> <p>Virtual Service:</p> <pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: sftp-vs
spec:
  gateways:
    - sftp-gw.default.svc.cluster.local
  hosts:
    - my-host.example.com
  tcp:
    - match:
        - port: &lt;mention nodeport number of tcp service ingwservice&gt;
      route:
        - destination:
            host: sftp-service.default.svc.cluster.local
            port:
              number: 22
</code></pre> <p>Istio Ingress Gateway Service: please declare a new TCP port entry that will listen for the SFTP requests. The information below has to match the application Gateway definition, so provide the nodePort, port and targetPort numbers accordingly.</p> <pre><code>- name: tcp
  nodePort: 30009
  port: 30009
  protocol: TCP
  targetPort: 30009
</code></pre>
<p>I'm trying to deploy a docker container with a python app to Azure Kubernetes Service. After deploying and looking at the logs on the new pod, I see this error:</p> <blockquote> <p>exec /usr/bin/sh: exec format error</p> </blockquote> <p>I'm building the container on a mac using the following docker buildx command:</p> <pre><code>docker buildx build --platform linux/x86_64 -t &lt;my username&gt;/ingest . </code></pre> <p>My Docker file has the following header</p> <pre><code>FROM --platform=linux/x86_64 python:3.11 </code></pre> <p>My deployment yaml looks like the following and seems to pull the image just fine.(I just used something from the azure documentation as a template.)</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: ingest namespace: default spec: replicas: 1 selector: matchLabels: bb: ingest template: metadata: labels: bb: ingest spec: containers: - name: ingest image: &lt;my username&gt;/ingest:0.0.1 imagePullPolicy: Always </code></pre> <p>When I inspect the images locally, I see</p> <pre><code>&quot;Architecture&quot;: &quot;amd64&quot;, &quot;Os&quot;: &quot;linux&quot;, </code></pre> <p>I'm was assuming that the default chip architecture is x86_64, but not sure. I also built the image with default chip architecture and OS and tested locally - it works fine. I'm new to K8 and Azure. Maybe, I'm missing something obvious. Is there a way to specify the chip architecture and the OS for in my configuration? If I don't specify it, what is the default?</p>
<p>The error below occurs when a container is executed on an architecture it was not built for.</p> <blockquote> <pre><code>exec /usr/bin/sh: exec format error
</code></pre> </blockquote> <p>The default value is the OS/architecture of the machine where BuildKit is running. More details on how to build images for other architectures are explained in the <a href="https://docs.docker.com/engine/reference/commandline/buildx_build/#platform" rel="nofollow noreferrer">docker buildx documentation</a>.</p> <p>In your case you can use the command below, which builds the image for both the arm64 (macOS AArch64) and x86-64 architectures:</p> <pre><code>docker buildx build --platform=linux/amd64,linux/arm64 </code></pre>
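<p>A fuller invocation might look like the sketch below (the image name and tag come from the question; note that a multi-platform build generally needs a <code>docker-container</code> builder and is usually pushed straight to the registry rather than loaded into the local daemon):</p> <pre><code># One-time: create and select a builder that supports multi-platform builds
docker buildx create --name multiarch --use

# Build for both architectures and push the manifest list to the registry
docker buildx build --platform linux/amd64,linux/arm64 \
  -t &lt;my username&gt;/ingest:0.0.1 --push .
</code></pre>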
<p>Basic question: I have the following LB:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-lb
spec:
  type: LoadBalancer
  ports:
    - name: my-udp-port
      port: 54545
      protocol: UDP
  selector:
    app: my-app
</code></pre> <p>And for some reason the traffic is not distributed equally (a single pod gets ~99% of the traffic). Any idea how to control it?</p>
<p>When a resource is created with the &quot;LoadBalancer&quot; service type in k8s, the Service object by itself does not determine how traffic is balanced. For proper load balancing the service needs to be associated with an external load balancer of the cloud provider. For more details please refer to the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">k8s service documentation</a>.</p> <p>Depending on the cloud provider, the balancing algorithm can be chosen on the external load balancer end.</p> <p>Affinity is a different topic, and it can also be configured on the external load balancer.</p> <p>Inside k8s, the service abstraction / service proxying is provided by kube-proxy, which has different modes; more details are in the <a href="https://kubernetes.io/docs/reference/networking/virtual-ips/#proxy-modes" rel="nofollow noreferrer">k8s documentation</a>.</p>
<p>I need some help in understanding how we implement service discovery for microservices in Kubernetes.</p> <p>I'm going through some tutorials on Spring Boot and noticed that we need to use Eureka discovery for implementing service discovery and maintaining communication between microservices. But my question is: if we deploy those Spring Boot microservices in Kubernetes, do we still need to use the Eureka tool? We can use Kubernetes services for implementing service discovery and load balancing, right?</p>
<p>The Kubernetes orchestration platform provides <a href="https://kubernetes.io/docs/tasks/administer-cluster/coredns/" rel="nofollow noreferrer">CoreDNS</a> for service discovery. Microservices deployed to the platform can use it by default, so there is no need to implement your own discovery (such as Eureka) unless you have a specific requirement that it does not satisfy. The Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a> service type can be used for load balancing of services.</p>
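<p>As a small illustration (the service and app names are made up), exposing a Spring Boot microservice through a plain Service is enough for other pods to discover it via its DNS name:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: orders-service            # hypothetical name
spec:
  selector:
    app: orders                   # matches the pod labels of the orders deployment
  ports:
    - port: 80                    # port other services call
      targetPort: 8080            # port the Spring Boot app listens on
</code></pre> <p>Another microservice in the same namespace can then simply call <code>http://orders-service/...</code> (or <code>orders-service.&lt;namespace&gt;.svc.cluster.local</code> across namespaces), and kube-proxy balances the requests across the backing pods, which is what Eureka plus a client-side load balancer would otherwise give you.</p>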
<p>I have setup VirtualService and ServiceEntry for a few services outside my kubernetes cluster.<br /> My apps can access them without any issue.<br /> I was wondering if it would be somehow possible to perform a port-forward to these services via <code>kubectl</code>, similar to how you would port-forward into a pod or a service.</p> <p>This works:<br /> <code>kubectl port-forward service/my-service 8080:80</code></p> <p>This Doesn't:<br /> <code>kubectl port-forward vs/my-virtual-service 6379:6379</code></p> <p>I get:</p> <blockquote> <p>error: no kind &quot;VirtualService&quot; is registered for version &quot;networking.istio.io/v1beta1&quot; in scheme &quot;k8s.io/client-go/kubernetes/scheme/register.go:72&quot;</p> </blockquote>
<p>Port forwarding can be done to the istio-ingressgateway service, but not to a VirtualService, since a VirtualService is only routing configuration rather than a workload. More details on Istio services and virtual services are in the <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/" rel="nofollow noreferrer">official documentation</a>.</p> <p>One more option is to change the istio-ingressgateway service to NodePort:</p> <pre><code>kubectl edit svc istio-ingressgateway -n istio-system </code></pre>
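<p>For example, assuming a default Istio installation (the namespace, service name and ports may differ in your setup), forwarding a local port to the ingress gateway looks like this:</p> <pre><code># Forward local port 8080 to the ingress gateway's HTTP port 80
kubectl port-forward -n istio-system svc/istio-ingressgateway 8080:80
</code></pre>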
<p>I am mounting DB secrets as a file in my Kubernetes container. The DB secrets will get updated after the password expiry time. I am using a polling mechanism to check whether the DB secrets have been reset to the updated value. Is it possible to change the mounted secret file from inside the container?</p>
<pre><code>is secret mounted as file is editable from application code in kubernetes </code></pre> <p>The secret file that gets mounted into the container is mounted read-only, so it can't be edited from inside the container. But the value can still be changed by updating the Secret object itself, or by copying the file to a different location within the container and editing the copy there.</p>
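<p>For instance, a rough way to roll the password by updating the Secret object from outside the pod (the secret name, key and value are placeholders) is:</p> <pre><code># Re-create the secret manifest with the new value and apply it in place
kubectl create secret generic db-secret \
  --from-literal=password=&lt;new-password&gt; \
  --dry-run=client -o yaml | kubectl apply -f -
</code></pre> <p>When the secret is mounted as a volume (and not via <code>subPath</code>), the kubelet eventually refreshes the projected file automatically, which fits the polling approach described in the question.</p>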
<p>I am trying to launch a new pod from inside another pod using a Kubernetes cluster, but when trying this the following error message can be seen:</p> <p>Failed to pull image &quot;dhammikalks/node&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;docker.io/dhammikalks/node:latest&quot;: failed to copy: httpReadSeeker: failed open: unexpected status code <a href="https://registry-1.docker.io/v2/XXXXXXXXXX" rel="nofollow noreferrer">https://registry-1.docker.io/v2/XXXXXXXXXX</a> 9: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: <a href="https://www.docker.com/increase-rate-limit" rel="nofollow noreferrer">https://www.docker.com/increase-rate-limit</a></p> <p>From what I can find out, I need to log in to the Docker registry in order to avoid this. Can someone explain how I can do this? Currently, docker commands are not available inside the pod.</p>
<p>An easy option to overcome the Docker Hub rate limiting problem is to have the image stored in a private repository. There are different articles that explain how Docker rate limiting works. Either way, you do not need docker login commands inside the pod: the image pull is performed by the node's container runtime, not by your container.</p> <p>If login credentials are required for the image repository, supply them through a Kubernetes secret. Refer to the Kubernetes <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret" rel="nofollow noreferrer">sample definition</a> for how to use &quot;imagePullSecrets&quot;.</p>
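<p>Roughly, that looks like the following (the secret name <code>regcred</code> and the credentials are placeholders):</p> <pre><code># Create a registry credential secret in the pod's namespace
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=&lt;your-docker-id&gt; \
  --docker-password=&lt;your-password-or-token&gt;
</code></pre> <p>Then reference it from the pod spec that your code creates, e.g. under <code>spec.imagePullSecrets</code> with <code>- name: regcred</code>.</p>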
<p>We use a headless service to balance traffic, but the developers do not like this option. I installed Istio and read the documentation, but my eyes glaze over.</p> <p>Now the task is: balance traffic to the service, in my case: <code>results-service2.predprod.svc.cluster.local</code></p> <p>Do I understand correctly that it is enough for me to create a <code>DestinationRule</code> and all incoming traffic to <code>results-service2.predprod.svc.cluster.local</code> will be balanced across replicas using the <code>LEAST_CONN</code> balancing algorithm?</p> <p>In my case:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: results-load-balancer
spec:
  host: results-service2.predprod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
</code></pre>
<p>An Istio DestinationRule is the place to define the service load balancing algorithm. By default Istio uses &quot;LEAST_REQUEST&quot; as the LB algorithm, and &quot;LEAST_CONN&quot; is a deprecated algorithm. More details are mentioned in the <a href="https://istio.io/latest/docs/reference/config/networking/destination-rule/#LoadBalancerSettings-SimpleLB" rel="nofollow noreferrer">Istio documentation</a>. If you would like to change the LB algorithm to &quot;ROUND_ROBIN&quot; or another supported method, this can be done as per the sample below.</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: configure-client-mtls-dr-with-workloadselector
spec:
  host: example.com
  workloadSelector:
    matchLabels:
      app: ratings
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
</code></pre>
<p>Currently when I apply a StatefulSet, the pods get created as ab-app-0, ab-app-1, ab-app-2.</p> <p>But I want pods like ab-app-1000, ab-app-1001.</p>
<p>The <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#ordinal-index" rel="nofollow noreferrer">Kubernetes documentation on StatefulSet</a> says the following about ordinal index:</p> <blockquote> <p>For a StatefulSet with N replicas, each Pod in the StatefulSet will be assigned an integer ordinal, from 0 up through N-1, that is unique over the Set.</p> </blockquote> <p>So there looks to be no way of customising this. But if your concern is that you need to pass this unique ID to a service, you could do the following in the config file, as proposed in <a href="https://github.com/kubernetes/kubernetes/issues/40651" rel="nofollow noreferrer">this</a> open github issue:</p> <pre class="lang-yaml prettyprint-override"><code> env: - name: ZOO_MY_ID valueFrom: fieldRef: fieldPath: metadata.annotations['spec.pod.beta.kubernetes.io/statefulset-index'] </code></pre>
<p>I want to configure an EnvoyFilter to run only on Gateway and Sidecar-Inbound. Gateway and the Apps are in different namespaces.</p> <p>If I specify the context as ANY, it will apply to Gateway, Sidecar-inbound and sidecar-outbound. However, I want it to apply only to Gateway and Sidecar-Inbound. how can I do that?</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: filter-0-mydebugger namespace: istio-system spec: configPatches: - applyTo: HTTP_FILTER match: context: GATEWAY # AND SIDECAR-INBOUND HOW? listener: filterChain: filter: name: envoy.http_connection_manager subFilter: name: envoy.router patch: operation: INSERT_BEFORE value: name: envoy.lua.mydebugger typed_config: '@type': type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua inlineCode: | function envoy_on_request(request_handle) request_handle:logInfo(&quot;HelloWorld&quot;) end </code></pre> <p>If you see, I have set the context to GATEWAY. How can I specify multiple matches - Gateway and Sidecar-Inbound? (Without having to repeat/duplicate the patch section)</p>
<p>The <code>context</code> is an <a href="https://istio.io/latest/docs/reference/config/networking/envoy-filter/#EnvoyFilter-PatchContext" rel="nofollow noreferrer">enum</a> so you can't do something like <code>[GATEWAY, SIDECAR_INBOUND]</code>. Therefore, unfortunately you will need to create another element inside <code>configPatches</code> with an <code>applyTo</code>, <code>match</code>, and <code>patch</code>.</p> <p>However, with yaml you can use anchors(<code>&amp;</code>) and references(<code>*</code>) to reuse blocks of code which makes the duplication easier.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: filter-0-mydebugger namespace: istio-system spec: configPatches: - applyTo: HTTP_FILTER match: &amp;mymatch # create an anchor for reuse context: GATEWAY listener: filterChain: filter: name: envoy.http_connection_manager subFilter: name: envoy.router patch: &amp;mypatch # create an anchor for reuse operation: INSERT_BEFORE value: name: envoy.lua.mydebugger typed_config: '@type': type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua inlineCode: | function envoy_on_request(request_handle) request_handle:logInfo(&quot;HelloWorld&quot;) end - applyTo: HTTP_FILTER match: &lt;&lt;: *mymatch # reuse the match context: SIDECAR_INBOUND # ... but override the context patch: *mypatch # reuse the patch without any overriding </code></pre>
<p>I want to share a volume across pods in a node in GKE.</p> <p>I am using Terraform to provision all of this and using GKE autoscaling but no auto-provsioning.</p> <pre><code>cluster_autoscaling { enabled = false autoscaling_profile = var.aggresive_node_autoscale ? &quot;OPTIMIZE_UTILIZATION&quot; : &quot;BALANCED&quot; } </code></pre> <p>Which means we will not get new node-pools created by GKE, just new nodes from the existing pools. Right?</p> <p>My original idea was to attach a volume per node and then share that by creating a persistent volume.</p> <p>I have failed to mount to such disks, so now I am trying to use the boot disk from the nodes.</p> <p>From the <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster#nested_node_config" rel="nofollow noreferrer">documentation</a> I understand that we can setup SSD disks as boot disk</p> <pre><code> disk_size_gb = 256 disk_type = &quot;pd-ssd&quot; </code></pre> <p>And I want to make this boot disk usable as a <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/persistent_volume_v1#local" rel="nofollow noreferrer">Local Persistent Volume</a></p> <p>From what I see, the boot disk have the same name as the node pool, so I can use that to reference them</p> <pre><code>data &quot;google_compute_disk&quot; &quot;gke_boot_disk&quot; { name = google_container_node_pool.cpu_pool_n2_highmem_8.name project = local.project_id zone = var.google_zone } </code></pre> <p>And then how would I actually create the persistent disk?</p> <p>Should I approach this in a different way? Do you have any recommendations or a good read for how to share a local SSD across pods of a node in GKE?</p> <p>Thank you.</p>
<blockquote> <p>Sharing a local volume across pods in GKE with autoscaling</p> </blockquote> <p>Sharing a local volume across pods in GKE with autoscaling is not possible, because the cluster autoscaler has some <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#limitations" rel="nofollow noreferrer">limitations</a>: local PersistentVolumes are currently not supported by the cluster autoscaler.</p> <p>If you are using Pods tied to a node lifecycle, make sure to plan accordingly as your workloads may be disrupted during cluster maintenance and node upgrades. If you require persistent storage, consider using <a href="https://cloud.google.com/filestore/docs/accessing-fileshares" rel="nofollow noreferrer">Filestore</a> instead.</p> <blockquote> <p>How would I actually create the persistent disk and share a local SSD across pods?</p> </blockquote> <p>Local SSDs can be specified as <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes" rel="nofollow noreferrer">PersistentVolumes</a>. A local persistent volume represents a local disk that is attached to a single node. Local persistent volumes allow you to share local SSDs across Pods. The data is still ephemeral. You can create PersistentVolumes from local SSDs by manually creating a PersistentVolume, or by running the local volume static provisioner.</p> <p>Refer to this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd#pv" rel="nofollow noreferrer">document</a> for creating the PersistentVolume manually for each local SSD on each node in your cluster.</p> <p>We recommend this approach if any of the following run on your cluster:</p> <ul> <li>Workloads using StatefulSets and volumeClaimTemplates.</li> <li>Workloads that share node pools. Each local SSD can be reserved through a PersistentVolumeClaim, and specific host paths are not encoded directly in the Pod spec.</li> </ul> <p>A GKE node pool can be configured to use local SSD for <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd#example_ephemeral" rel="nofollow noreferrer">ephemeral storage</a>, including <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/local-ssd#emptydir-volume" rel="nofollow noreferrer">emptyDir volumes</a>. If you are using GKE with Standard clusters, you can attach local SSDs to nodes when creating node pools. This improves the performance of ephemeral storage for your workloads that use emptyDir volumes.</p>
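<p>For reference, a manually created local PersistentVolume for one local SSD on one node might look roughly like the sketch below (the node name, path, StorageClass and capacity are placeholders):</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-ssd-pv-0
spec:
  capacity:
    storage: 375Gi                  # size of one local SSD partition
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # a StorageClass with volumeBindingMode: WaitForFirstConsumer
  local:
    path: /mnt/disks/ssd0           # where the local SSD is mounted on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - &lt;node-name&gt;       # the node this SSD is attached to
</code></pre> <p>Pods scheduled on that node can then share the disk through PersistentVolumeClaims bound to it, but as noted above this does not combine with the cluster autoscaler, so Filestore (or another network-attached option) remains the safer choice when nodes come and go.</p>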
<p>I am diving into K8s and deployed my first pod via <code>kubectl apply</code> on the master node. In order to check its status, I called <code>kubectl get pods</code> twice in a row. I did not touch anything, but a subsequent call of the same command failed with the error below. Can anyone help me understand what may have happened?</p> <pre><code>ubuntu:~$ kubectl get pods NAME READY STATUS RESTARTS AGE nginx 0/1 Pending 0 18s ubuntu:~$ kubectl get pods The connection to the server xxx.xxx.xxx.xxx:6443 was refused - did you specify the right host or port? </code></pre> <p>For completion, here the status of <code>kubelet.service</code> on the master node:</p> <pre><code>● kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled) Drop-In: /etc/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: active (running) since Thu 2023-04-20 02:12:52 UTC; 23min ago </code></pre> <p>Thanks for any hint.</p>
<p>This is explained in this <a href="https://www.thegeekdiary.com/troubleshooting-kubectl-error-the-connection-to-the-server-x-x-x-x6443-was-refused-did-you-specify-the-right-host-or-port/" rel="nofollow noreferrer">doc</a> by The Geek Diary ‘admin’, which shows how to fix the kubectl error <code>The connection to the server x.x.x.x:6443 was refused - did you specify the right host or port?</code></p> <p><strong>The troubleshooting steps below will help to resolve your issue</strong>:</p> <blockquote> <ol> <li><p>kubectl should be executed on the Master Node.</p> </li> <li><p>The current user must have the Kubernetes cluster configuration environment variable set (details are listed under the section Preparing to Use Kubernetes as a Regular User):</p> <p><code>$ env | grep -i kube</code></p> <p><code>KUBECONFIG=/root/.kube/config</code></p> </li> </ol> <p>3. The Docker service must be running:</p> <pre><code> $ systemctl status docker
</code></pre> <p>4. The kubelet service must be running:</p> <pre><code> $ systemctl status kubelet
</code></pre> <p>5. TCP port 6443 should show up as a listening port:</p> <pre><code>netstat -pnlt | grep 6443
</code></pre> <p>If TCP port 6443 is not available, check the firewall/iptables rules against the requirements:</p> <pre><code>$ firewall-cmd --list-all
</code></pre> <p>Also check the kubelet logs:</p> <pre><code># journalctl -xeu kubelet
</code></pre> <p>6. Try restarting the Kubernetes cluster, which will also do some basic checks:</p> <pre><code> $ kubeadm-setup.sh restart
</code></pre> </blockquote>
<p>I'm a little confused why my <code>rancher-agent</code> is no longer able to connect to the cluster server. This was working for me for a long time, but it appears to have broken on its own. DNS and networking confuses me.</p> <p>My setup:</p> <ul> <li>OS: <code>Ubuntu 20.04.6 LTS</code></li> <li>Docker: <code>Docker version 23.0.1</code></li> <li>Rancher: <code>v2.6.5</code></li> </ul> <p>I have configured my cluster to run a single node, as <a href="https://ranchermanager.docs.rancher.com/v2.6/pages-for-subheaders/rancher-on-a-single-node-with-docker" rel="nofollow noreferrer">specified here</a>, and then I followed the <a href="https://ranchermanager.docs.rancher.com/v2.6/reference-guides/single-node-rancher-in-docker/advanced-options" rel="nofollow noreferrer">advanced setup instructions</a> to run <code>rancher/rancher</code> and <code>rancher/rancher-agent</code> on the same node.</p> <h2>The issue</h2> <p>Everything boots and runs. I can access all my applications in my cluster from <code>https://homelab.local</code> and everything loads and runs. My rancher admin UI boots on <code>https://homelab.local:8443/dashboard/home</code>. The issue is that I cannot manage the cluster at all.</p> <p>I see these two errors under Cluster Management: <code>Unsupported Docker version found [23.0.1] on host [192.168.0.75], supported versions are [1.13.x 17.03.x 17.06.x 17.09.x 18.06.x 18.09.x 19.03.x 20.10.x]</code><br /> and<br /> <code>[Disconnected] Cluster agent is not connected</code></p> <p>So it appears that I have inadvertently upgraded Docker and this is breaking my cluster?</p> <p>When I run kubectl get pods, I get some kind of cert error:</p> <pre><code>kubectl get nodes E0326 19:56:23.504726 70231 memcache.go:265] couldn't get current server API group list: Get &quot;https://localhost:8443/api?timeout=32s&quot;: x509: certificate signed by unknown authority E0326 19:56:23.506701 70231 memcache.go:265] couldn't get current server API group list: Get &quot;https://localhost:8443/api?timeout=32s&quot;: x509: certificate signed by unknown authority E0326 19:56:23.508357 70231 memcache.go:265] couldn't get current server API group list: Get &quot;https://localhost:8443/api?timeout=32s&quot;: x509: certificate signed by unknown authority E0326 19:56:23.510425 70231 memcache.go:265] couldn't get current server API group list: Get &quot;https://localhost:8443/api?timeout=32s&quot;: x509: certificate signed by unknown authority E0326 19:56:23.513743 70231 memcache.go:265] couldn't get current server API group list: Get &quot;https://localhost:8443/api?timeout=32s&quot;: x509: certificate signed by unknown authority Unable to connect to the server: x509: certificate signed by unknown authority </code></pre> <p>How can I get my cluster back to a good state?</p> <h2>Update</h2> <p>I uninstalled the latest Docker with:<br /> <code>sudo apt-get remove docker-ce docker-ce-cli docker-ce-rootless-extras docker-compose-plugin docker-scan-plugin docker-buildx-plugin</code> And installed Rancher's supported version like this:<br /> <code>curl https://releases.rancher.com/install-docker/20.10.sh | sh</code></p> <p>This fixes the issue with the unsupported Docker version, but the rancher-agent image is still not booting up. 
When I look at the logs of the container, I see this:</p> <pre><code>time=&quot;2023-03-27T03:20:59Z&quot; level=fatal msg=&quot;Certificate chain is not complete, please check if all needed intermediate certificates are included in the server certificate (in the correct order) and if the cacerts setting in Rancher either contains the correct CA certificate (in the case of using self signed certificates) or is empty (in the case of using a certificate signed by a recognized CA). Certificate information is displayed above. error: Get \&quot;https://192.168.0.75:8443\&quot;: x509: certificate signed by unknown authority&quot; </code></pre>
<p>As explained in this official <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#tls-certificate-errors" rel="nofollow noreferrer">doc</a></p> <blockquote> <p><strong>The following error “Unable to connect to the server: x509: certificate signed by unknown authority&quot; indicates a possible certificate mismatch.</strong></p> <p>When you run</p> <pre><code>kubectl get pods </code></pre> <p>If you are getting this error Unable to connect to the server: x509: certificate signed by unknown authority</p> <p><strong>To resolve this error try below troubleshooting methods</strong>:</p> <p>1)Verify that the $HOME/.kube/config file contains a valid certificate, and regenerate a certificate if necessary. The certificates in a kubeconfig file are base64 encoded. The base64 --decode command can be used to decode the certificate and openssl x509 -text -noout can be used for viewing the certificate information.</p> <p>2)Unset the KUBECONFIG environment variable using:</p> <pre><code>unset KUBECONFIG </code></pre> <p>Or set it to the default KUBECONFIG location:</p> <pre><code>export KUBECONFIG=/etc/kubernetes/admin.conf </code></pre> <p>3)Another workaround is to overwrite the existing kubeconfig for the &quot;admin&quot; user:</p> <pre><code> mv $HOME/.kube $HOME/.kube.bak mkdir $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config </code></pre> </blockquote> <p>Refer to this official <a href="https://ranchermanager.docs.rancher.com/v2.5/getting-started/installation-and-upgrade/resources/custom-ca-root-certificates" rel="nofollow noreferrer">doc</a> for more information.</p>
<p>I have followed this <a href="https://thechief.io/c/cloudplex/what-kubeflow-and-how-deploy-it-kubernetes/" rel="nofollow noreferrer">doc</a> to deploy Kubeflow on GKE.</p> <blockquote> <p>Error: 404 page not found</p> </blockquote> <p>when accessing the central dashboard.</p> <p>I have tried deploying multiple times but still get the same error.</p> <p>I'm new to Kubernetes.</p>
<p>As per this official <a href="https://v0-5.kubeflow.org/docs/gke/troubleshooting-gke/#404-page-not-found-when-accessing-central-dashboard" rel="nofollow noreferrer">doc</a>, try the troubleshooting steps below to resolve your error:</p> <blockquote> <p>This section provides troubleshooting information for 404 (page not found) responses returned by the central dashboard, which is served at</p> <pre><code>https://${KUBEFLOW_FQDN}/
</code></pre> <p>KUBEFLOW_FQDN is your project’s OAuth web app URI domain name <code>&lt;name&gt;.endpoints.&lt;project&gt;.cloud.goog</code></p> <p>Because you were able to sign in, the Ambassador reverse proxy should be functioning properly; you can confirm it by running the following command.</p> <pre><code> kubectl -n ${NAMESPACE} get pods -l service=envoy

NAME                     READY     STATUS    RESTARTS   AGE
envoy-76774f8d5c-lx9bd   2/2       Running   2          4m
envoy-76774f8d5c-ngjnr   2/2       Running   2          4m
envoy-76774f8d5c-sg555   2/2       Running   2          4m
</code></pre> <p>Now try accessing different services to check whether they're reachable, for instance</p> <pre><code> https://${KUBEFLOW_FQDN}/whoami
https://${KUBEFLOW_FQDN}/tfjobs/ui
https://${KUBEFLOW_FQDN}/hub
</code></pre> <p>If other services are accessible, then the issue is specific to the central dashboard and not to the ingress.</p> <p>Now check that the central dashboard is running:</p> <pre><code>kubectl get pods -l app=centraldashboard
NAME                                READY     STATUS    RESTARTS   AGE
centraldashboard-6665fc46cb-592br   1/1       Running   0          7h
</code></pre> <p>Check whether an Ambassador route is properly defined:</p> <pre><code>kubectl get service centraldashboard -o jsonpath='{.metadata.annotations.getambassador\.io/config}'

apiVersion: ambassador/v0
kind:  Mapping
name: centralui-mapping
prefix: /
rewrite: /
service: centraldashboard.kubeflow
</code></pre> <p>Take a look at the Envoy logs for errors. Check whether there are errors like the ones below that indicate a problem parsing the route. If you are using the new Stackdriver Kubernetes Monitoring, you can use the following filter in the Stackdriver console:</p> <pre><code> resource.type=&quot;k8s_container&quot;
resource.labels.location=${ZONE}
resource.labels.cluster_name=${CLUSTER}
metadata.userLabels.service=&quot;ambassador&quot;
&quot;could not parse YAML&quot;
</code></pre> </blockquote> <p>You can also refer to this <a href="https://v1-5-branch.kubeflow.org/docs/distributions/gke/" rel="nofollow noreferrer">doc</a> for more information about deploying Kubeflow on GKE.</p>
<p>What is the reason for this error?</p> <p>How do I fix the error below?</p> <p>I have tried to resolve it but keep getting the same error again and again.</p> <p>I'm new to Kubernetes, so I need your help.</p> <p>Error: when deploying a pod, I get an ImagePullBackOff error in GKE.</p>
<p>As David Maze mentioned, &quot;The name you specified in image: doesn't exist on the node where it's being run, and it can't be fetched from Docker Hub or the registry named in the image name.&quot;</p> <p>As per this official <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#ImagePullBackOff" rel="nofollow noreferrer">doc</a>, the ImagePullBackOff error indicates that the image used by a container cannot be loaded from the image registry.</p> <p>Network connectivity issues, an incorrect image name or tag, missing credentials, or insufficient permissions are all possible causes of this error.</p> <blockquote> <p>You can verify this issue using the Google Cloud console or the kubectl command-line tool.</p> <pre><code>kubectl describe pod &lt;pod-name&gt;
</code></pre> <p><strong>If the image is not found</strong>:</p> <p>You need to follow these <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#if_the_image_is_not_found" rel="nofollow noreferrer">troubleshooting</a> steps to resolve your issue.</p> <p><strong>If you get a Permission denied error</strong>:</p> <p>If you encounter a &quot;permission denied&quot; or &quot;no pull access&quot; error, verify that you are logged in and have access to the image. Try one of the following <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#permission_denied_error" rel="nofollow noreferrer">methods</a> depending on the registry in which you host your images.</p> </blockquote>
<p>I'm trying to use client-go informers to get the replica count on deployments. Whenever autoscaling changes the number of replicas, I need to retrieve this in order to handle some other logic. I was previously using the Watch() function, but there are a few inconsistencies with timeouts and connection drops.</p> <p>The following code below shows an example of the implementation:</p> <pre><code>labelOptions := informers.WithTweakListOptions(func(opts *v1.ListOptions) { opts.FieldSelector = &quot;metadata.name=&quot; + name }) factory := informers.NewSharedInformerFactoryWithOptions(clientSet, 2*time.Second, informers.WithNamespace(namespace), labelOptions) informer := factory.Apps().V1().Deployments().Informer() // Using the channels and goroutines below didn't show changes: stopper := make(chan struct{}) defer close(stopper) //go func() { informer.AddEventHandler(cache.ResourceEventHandlerFuncs{ AddFunc: func(obj interface{}) { mObj, ok := obj.(*appsv1.Deployment) if !ok { panic(spew.Sdump(&quot;informer returned invalid type&quot;, mObj)) } replicas := int(*mObj.Spec.Replicas) logger.Infof(&quot;updating replicas to %d&quot;, replicas) sendUpdates() // use updates elsewhere }, UpdateFunc: func(oldObj, newObj interface{}) { old, ok := oldObj.(*appsv1.Deployment) if !ok { panic(spew.Sdump(&quot;informer returned invalid type&quot;, old)) } newDeployment, ok := newObj.(*appsv1.Deployment) if !ok { panic(spew.Sdump(&quot;informer returned invalid type&quot;, newDeployment)) } oldReplicas := int(*old.Spec.Replicas) newReplicas := int(*newDeployment.Spec.Replicas) if oldReplicas != newReplicas { sendUpdates() } }, }) //factory.Start(wait.NeverStop) //factory.WaitForCacheSync(wait.NeverStop) informer.Run(stopper) </code></pre> <p>When Kubernetes autoscales or I change the Deployments replica manually, I get <code>deployment.apps/app scaled</code> but it doesn't get caught by the Informer. Nothing gets printed in the logs and it enters a crash loop with no error message.</p> <p>I used the following resources:</p> <ul> <li><a href="https://dev.to/davidsbond/go-creating-dynamic-kubernetes-informers-1npi" rel="nofollow noreferrer">https://dev.to/davidsbond/go-creating-dynamic-kubernetes-informers-1npi</a></li> <li><a href="https://stackoverflow.com/questions/53200785/whats-the-best-way-to-get-notified-when-kubernetes-deployments-change-using-the">What&#39;s the best way to get notified when kubernetes Deployments change using the k8s.io/client-go library?</a></li> <li><a href="https://stackoverflow.com/questions/47205516/kubernetes-filter-objects-in-informer">kubernetes filter objects in Informer</a></li> </ul>
<p>A few things to note:</p> <ul> <li>Before calling <code>informerFactory.Start()</code>, ensure that the Informer is accessed directly (<code>informer := factory.Apps().V1().Deployments().Informer()</code>), or <code>Start()</code> won't start anything.</li> <li>Using a goroutine to start the SharedInformerFactory is unnecessary because <code>informerFactory.Start()</code> uses one internally.</li> <li>Doing so will also prevent the <code>informerFactory.WaitForCacheSync()</code> method from working properly, resulting in it getting the wrong data for started informers.</li> </ul> <pre><code>labelOptions := informers.WithTweakListOptions(func(opts *v1.ListOptions) {
    opts.FieldSelector = &quot;metadata.name=&quot; + name
})
factory := informers.NewSharedInformerFactoryWithOptions(clientSet, 2*time.Second, informers.WithNamespace(namespace), labelOptions)

informer := factory.Apps().V1().Deployments().Informer()

informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc: func(obj interface{}) {
        mObj, ok := obj.(*appsv1.Deployment)
        if !ok {
            doSomething()
        }
        replicas := int(*mObj.Spec.Replicas)
        doSomething()
    },
    UpdateFunc: func(oldObj, newObj interface{}) {
        old, ok := oldObj.(*appsv1.Deployment)
        if !ok {
            doSomething()
        }
        newDeployment, ok := newObj.(*appsv1.Deployment)
        if !ok {
            doSomething()
        }
        oldReplicas := int(*old.Spec.Replicas)
        newReplicas := int(*newDeployment.Spec.Replicas)
        if oldReplicas != newReplicas {
            doSomething()
        }
    },
})

// Initializes all active informers and starts the internal goroutine
factory.Start(wait.NeverStop)
factory.WaitForCacheSync(wait.NeverStop)
</code></pre>
<p>I was under the impression that the main point of cluster-issuer is that its namespaced and doesn't have to be recreated across different resources, in general there could be one main cluster-issuer that will manage all ingresses across the cluster.</p> <p>From what I am seeing the cluster-issuer can only create one secret and if its in use by one ingress the second wont wont be created properly cause its already taken.</p> <p>Is there anyway to create one cluster-issuer to manage all ingresses across the cluster?</p> <p>Code included below</p> <h3>Cluster-issuer.yaml</h3> <pre><code>apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-grafana namespace: cert-manager spec: acme: server: https://acme-v02.api.letsencrypt.org/directory email: [email protected] privateKeySecretRef: name: letsencrypt-grafana solvers: - selector: dnsZones: - &quot;foo.com&quot; dns01: route53: region: eu-central-1 hostedZoneID: foo accessKeyID: foo secretAccessKeySecretRef: name: aws-route53-creds key: password.txt </code></pre> <h3>Ingress.yaml</h3> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: grafana-ingress namespace: loki annotations: cert-manager.io/cluster-issuer: letsencrypt-grafana kubernetes.io/tls-acme: &quot;true&quot; nginx.ingress.kubernetes.io/ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/proxy-body-size: &quot;125m&quot; nginx.ingress.kubernetes.io/rewrite-target: / spec: ingressClassName: nginx tls: - hosts: - grafana.foo.com secretName: letsencrypt-grafana # &lt; cert-manager will store the created certificate in this secret. rules: - host: grafana.foo.com http: paths: - path: / pathType: Prefix backend: service: name: loki-grafana port: number: 80 </code></pre>
<p>@Harsh Manvar while I do appreciate your anwser I found something that is a better suit for my needs.</p> <p><a href="https://cert-manager.io/docs/tutorials/syncing-secrets-across-namespaces/" rel="nofollow noreferrer">Cert-manager documentation</a> contains multiple options to sync secrets across namespaces</p> <p>The one I chose was <a href="https://github.com/emberstack/kubernetes-reflector" rel="nofollow noreferrer">reflector</a>. The steps to install are included in the documentation but just for the sake of service i'll post here aswell</p> <h5>Requirements: <a href="https://helm.sh/docs/helm/helm_install/" rel="nofollow noreferrer">Helm</a></h5> <h4>Installation:</h4> <pre><code>helm repo add emberstack https://emberstack.github.io/helm-charts helm repo update helm upgrade --install reflector emberstack/reflector </code></pre> <h3>Setup:</h3> <p>Add the following annotation to your secret <code>reflector.v1.k8s.emberstack.com/reflection-allowed: &quot;true&quot;</code>, it should look like the following</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: source-secret annotations: reflector.v1.k8s.emberstack.com/reflection-allowed: &quot;true&quot; </code></pre> <p>Done! Your secret should be replicated within all namespaces. For multiple ingress configurations within the same namespace you could edit your ingress.yaml like this</p> <h4>Ingress.yaml</h4> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: jenkins-ingress namespace: jenkins annotations: cert-manager.io/cluster-issuer: letsencrypt-global kubernetes.io/tls-acme: &quot;true&quot; nginx.ingress.kubernetes.io/ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/proxy-body-size: &quot;125m&quot; nginx.ingress.kubernetes.io/rewrite-target: / spec: ingressClassName: nginx tls: - hosts: - jenkins.foo.com - nginx.foo.com secretName: letsencrypt-global # &lt; cert-manager will store the created certificate in this secret. rules: - host: jenkins.foo.com http: paths: - path: / pathType: Prefix backend: service: name: jenkins port: number: 80 - host: nginx.foo.com http: paths: - path: / pathType: Prefix backend: service: name: nginx port: number: 80 </code></pre>
<p>I have a deployment in Kubernetes that takes advantage of HorizontalPodAutoscaler. What are the recommended or typical settings for averageUtilization for CPU and Memory? I'm running a FastAPI and React image within each pod and wanted to know what experiences you all have had in production environments for the averageUtilization setting.</p> <pre><code>apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler metadata: name: *** spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: *** minReplicas: 2 maxReplicas: 10 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 70 - type: Resource resource: name: memory target: type: Utilization averageUtilization: 70 </code></pre> <p>My current settings are at 70% for both CPU and Memory thresholds.</p>
<p>If there are no sudden bursts of requests on your pod, 70% is a good number, keeping in mind that it takes time for additional pods to become ready and the existing pods may be fully utilized before the scaled-up pods are ready.<br><br>When set to 50%, it will scale up faster in the event of a sudden burst of requests on your FastAPI and React image. <br>Once the scaled-up pods are ready, HPA will maintain an average CPU/memory utilization of the desired percentage (50% or 70%) across all pods.<br></p>
<p>I am migrating from ingress to gateway on GKE and wondering if there are things I could keep from the old configuration or if I have to start from zero with GW. Also, I use envoy proxy, do I need a specific one per route?</p>
<p>If your concern is related to the GKE Gateway controller, I recommend checking this <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/gateway-api#comparison_of_ingress_and_gateway" rel="nofollow noreferrer">Comparison of Ingress and Gateway</a>. It explains how Gateway is an evolution of Ingress and also shows the configuration differences between Ingress and Gateway.</p> <p>For additional information, I also recommend checking these documents about <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/gateway-api" rel="nofollow noreferrer">Gateway</a> and <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-gateways" rel="nofollow noreferrer">Deploying Gateways</a>. For other concerns, you may contact <code>[email protected]</code> since this feature is still in the Preview launch stage.</p>
<p>So, I started out trying to fix what was supposed to be a simple issue on my company's website. The HTTP was not redirecting to the HTTPS.</p> <p>To try to resolve this, I tried deleting the HTTPS load balancer so that I could recreate it with the redirect.</p> <p>However, now, 3 days later, another developer and myself can't get it recreated and can't figure out why.</p> <p>When I try creating the load balancer, I get the error:</p> <p><code>Invalid value for field 'resource.IPAddress': 'projects/bes-com/global/addresses/bes-load-balancer-ip'. Specified IP address is in-use and would result in a conflict.</code></p> <p>If I go to IP Addresses, the specified IP address says &quot;<strong>In use by</strong>: <em>None</em>&quot;</p> <p>I also tried recreating the load balancer with a new IP Address instead of the current one, but this one didn't work either.</p> <p>With this, I got:</p> <p><code>Operation type [patch] failed with message &quot;Validation failed for instance 'projects/bes-com/zones/us-central1-c/instances/gke-bes-com-bes-com-n-37339480-9xpj': instance may belong to at most one load-balanced instance group.&quot;</code></p> <p>If I go to the Ingress, it has this error at the top:</p> <p><code>Error syncing to GCP: error running backend syncing routine: received errors when updating backend service: googleapi: Error 400: INSTANCE_IN_MULTIPLE_LOAD_BALANCED_IGS - Validation failed for instance 'projects/bes-com/zones/us-central1-c/instances/gke-bes-com-bes-com-n-37339480-9xpj': instance may belong to at most one load-balanced instance group. googleapi: Error 400: INSTANCE_IN_MULTIPLE_LOAD_BALANCED_IGS</code></p> <p>Note that I did not create this website. I inherited it, and am new to GCP so am learning as I go. However, I have to get this website back up and do not know where to go next. Thanks for any help, and please let me know what I can show.</p>
<p><strong>Note</strong>: Make sure your domain is accessible via HTTPS and that the Google-managed certificate is already in Active status and fully propagated. Also, <strong>these steps are part of the troubleshooting steps. It was posted as an answer since it won't fit in the comment section</strong>. If you have questions or clarifications, you may reply in the comment section.</p> <p>Please also list the backend services, so you can confirm that no MIG is used in multiple backend services.</p> <p>If you are deploying using YAML, I recommend using the procedure below as a reference. Since your primary concern is the redirection from HTTP to HTTPS, let me share a basic configuration for deploying the GKE redirect using an existing static frontend IP address and a managed SSL certificate.</p> <ol> <li><p>In Cloud Shell, execute the commands below to point at the region and cluster you're working on (<code>test-gke</code> is your existing cluster, <code>us-central1-a</code> is that cluster's zone):</p> <pre><code>gcloud config set compute/zone us-central1-a
gcloud container clusters get-credentials test-gke
</code></pre></li> <li><p>In Cloud Shell, create the redirection YAML file (<code>web-redirect.yaml</code> is the preferred filename):</p> <pre><code>sudo nano web-redirect.yaml
</code></pre></li> <li><p>Copy and paste the lines below. <code>my-frontend-config</code> is the preferred name for the FrontendConfig, and <code>PERMANENT_REDIRECT</code> makes the HTTP-to-HTTPS redirect return a 308 response code:</p> <pre><code>apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: PERMANENT_REDIRECT
</code></pre></li> <li><p>Save the <code>web-redirect.yaml</code> file: press <code>Ctrl+O</code> to write the lines, then <code>Enter</code> to confirm the filename, and finally <code>Ctrl+X</code> to exit.</p></li> <li><p>Apply the resource to the cluster:</p> <pre><code>kubectl apply -f web-redirect.yaml
</code></pre></li> <li><p>Modify your existing ingress YAML (sample: <code>web-ingress.yaml</code>):</p> <pre><code>sudo nano web-ingress.yaml
</code></pre></li> <li><p>Add the annotation that references the FrontendConfig. In the sample modified ingress YAML below, <code>my-frontend-config</code> is the FrontendConfig metadata name from <code>web-redirect.yaml</code>, <code>web</code> is the name of the Service deployed in your cluster, <code>bes-load-balancer-ip</code> is the name of the reserved external IP, and <code>managed-cert</code> is the name of the managed certificate:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: &quot;my-frontend-config&quot;
    kubernetes.io/ingress.global-static-ip-name: &quot;bes-load-balancer-ip&quot;
    networking.gke.io/managed-certificates: &quot;managed-cert&quot;
    kubernetes.io/ingress.class: &quot;gce&quot;
spec:
  defaultBackend:
    service:
      name: web
      port:
        number: 8080
</code></pre></li> <li><p>Save the existing ingress YAML file (<code>Ctrl+O</code>, <code>Enter</code>, <code>Ctrl+X</code>).</p></li> <li><p>Apply the resource to the cluster:</p> <pre><code>kubectl apply -f web-ingress.yaml
</code></pre></li> </ol> <p>For more information, you may check these documents: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect" rel="nofollow noreferrer">HTTP to HTTPS redirects</a> and <a href="https://www.doit-intl.com/say-goodbye-to-improvised-https-redirection-workarounds/" rel="nofollow noreferrer">Say goodbye to improvised HTTPS Redirection Workarounds</a>.</p>
<p>We need to allow our pods to connect to the master nodes, which is quite feasible via a network policy with the required IPs; however, it requires the IPs of the master nodes to be configured within it.</p> <p>The issue is that the master node IPs can change (not frequently, but it can happen due to some changes), so we would like to lock this down further using other options such as labels.</p> <pre><code> egress: - to: - ipBlock: cidr: 172.20.x.x/32 ports: - protocol: TCP port: 443 - to: - ipBlock: cidr: 172.20.y.y/32 ports: - protocol: TCP port: 443 - to: - ipBlock: cidr: 172.20.z.z/32 ports: - protocol: TCP port: 443 </code></pre> <p>I dug around but couldn't find any <code>nodeSelector</code> field available within a network policy that could be used, which makes things complicated. Any suggestion to overcome this situation?</p>
<p>You might want to consider using DNS resolution. You can start with this <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/" rel="nofollow noreferrer">link</a>; it lets you use DNS names in your application instead of IP addresses to reach the master nodes. Another option, I believe, is an <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="nofollow noreferrer">external load balancer</a> with a static IP address that routes traffic to the master nodes. Your network policy can then reference that single stable IP address.</p>
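<p>As a rough illustration of the second idea: if a load balancer with one stable virtual IP sits in front of the control plane, the egress rule only has to reference that single address instead of every master node. The policy below is only a sketch; the name, the pod selector and the <code>172.20.100.10</code> VIP are placeholders:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-apiserver-egress      # hypothetical name
spec:
  podSelector: {}                   # narrow this to the pods that need API access
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 172.20.100.10/32      # placeholder: stable VIP in front of the masters
    ports:
    - protocol: TCP
      port: 443
</code></pre>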
<p>I've follwed the steps at <a href="https://docs.docker.com/get-started/kube-deploy/" rel="nofollow noreferrer">https://docs.docker.com/get-started/kube-deploy/</a> to enable Kubernetes in Docker Desktop for Mac. I can run some commands like <code>kubectl apply</code> and <code>kubectl get</code>, but other commands like <code>kubectl exec</code> and <code>kubectl logs</code> fail like so:</p> <pre><code>$ kubectl.docker logs -n kube-system kube-apiserver-docker-desktop Error from server: Get &quot;https://192.168.65.4:10250/containerLogs/kube-system/kube-apiserver-docker-desktop/kube-apiserver&quot;: open /run/config/pki/apiserver-kubelet-client.crt: no such file or directory </code></pre> <p>How can I work around this?</p> <pre><code>$ kubectl.docker version --short Flag --short has been deprecated, and will be removed in the future. The --short output will become the default. Client Version: v1.24.0 Kustomize Version: v4.5.4 Server Version: v1.24.0 </code></pre> <pre><code>$ docker version Client: Cloud integration: v1.0.24 Version: 20.10.14 API version: 1.41 Go version: go1.16.15 Git commit: a224086 Built: Thu Mar 24 01:49:20 2022 OS/Arch: darwin/arm64 Context: default Experimental: true Server: Docker Desktop 4.8.2 (79419) Engine: Version: 20.10.14 API version: 1.41 (minimum version 1.12) Go version: go1.16.15 Git commit: 87a90dc Built: Thu Mar 24 01:45:44 2022 OS/Arch: linux/arm64 Experimental: false containerd: Version: 1.5.11 GitCommit: 3df54a852345ae127d1fa3092b95168e4a88e2f8 runc: Version: 1.0.3 GitCommit: v1.0.3-0-gf46b6ba docker-init: Version: 0.19.0 GitCommit: de40ad0 </code></pre>
<p>You may check this reference <a href="https://forums.docker.com/t/enable-docker-commands-run-without-sudo/135671/14" rel="nofollow noreferrer">forum thread</a>; as per rimelek, the recommendation is to run the command</p> <blockquote> <p>chmod +xr /usr/local/bin</p> </blockquote> <p>After restarting Docker, the Kubernetes commands (and even the Kubernetes context menu) become accessible again.</p>
<p>I have been trying to deploy Spark and a Jupyter notebook on minikube. I used Helm charts to deploy both:</p> <p>Jupyter notebook - <a href="https://artifacthub.io/packages/helm/pyspark-notebook-helm/pyspark-notebook" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/pyspark-notebook-helm/pyspark-notebook</a></p> <p>Spark - <a href="https://bitnami.com/stack/spark/helm" rel="nofollow noreferrer">https://bitnami.com/stack/spark/helm</a></p> <p>I am able to establish a connection to the master using</p> <pre><code>spark = SparkSession.builder.master(&quot;spark://my-release-spark-master-0.my-release-spark-headless.default.svc.cluster.local:7077&quot;).getOrCreate() </code></pre> <p>but when running the following snippet</p> <pre><code>nums= sc.parallelize([1,2,3,4]) squared = nums.map(lambda x: x*x).collect() for num in squared: print('%i ' % (num)) </code></pre> <p>the execution takes a long time and never completes once it reaches the <code>collect()</code> method.</p>
<p>You can check the <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#prerequisites" rel="nofollow noreferrer">prerequisites</a> for running Spark on Kubernetes to determine whether meeting them will improve performance:</p> <ul> <li><p>A running Kubernetes cluster at version &gt;= 1.22 with access configured to it using <a href="https://kubernetes.io/docs/user-guide/prereqs/" rel="nofollow noreferrer">kubectl</a>. If you do not already have a working Kubernetes cluster, you may set up a test cluster on your local machine using <a href="https://kubernetes.io/docs/getting-started-guides/minikube/" rel="nofollow noreferrer">minikube</a>.</p> <ul> <li>We recommend using the latest release of minikube with the DNS addon enabled.</li> <li>Be aware that the default minikube configuration is not enough for running Spark applications. We recommend 3 CPUs and 4g of memory to be able to start a simple Spark application with a single executor.</li> <li>Check the <a href="https://github.com/fabric8io/kubernetes-client" rel="nofollow noreferrer">kubernetes-client library</a>&rsquo;s version in your Spark environment, and its compatibility with your Kubernetes cluster&rsquo;s version.</li> </ul> </li> <li><p>You must have appropriate permissions to list, create, edit and delete <a href="https://kubernetes.io/docs/user-guide/pods/" rel="nofollow noreferrer">pods</a> in your cluster. You can verify that you can list these resources by running:</p> <pre><code>kubectl auth can-i &lt;list|create|edit|delete&gt; pods </code></pre> <p>The service account credentials used by the driver pods must be allowed to create pods, services and configmaps.</p> </li> <li><p>You must have <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">Kubernetes DNS</a> configured in your cluster.</p></li> </ul>
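<p>If minikube is indeed the cluster being used, a quick way to apply the resource recommendation and verify the permissions mentioned above is sketched below (flag syntax may vary slightly between minikube versions):</p> <pre><code># start minikube with the resources recommended for a single-executor Spark app
minikube start --cpus 3 --memory 4g

# verify the permissions the driver's service account will need
kubectl auth can-i create pods
kubectl auth can-i create services
kubectl auth can-i create configmaps
</code></pre>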
<p>I have a Minikube instance installed on an Ubuntu VM (Digital Ocean). I have also configured ingress for one my deployments but when I curl the hostname I am getting an error :</p> <pre><code>curl myserver.xyz curl: (7) Failed to connect to myserver.xyz port 80: Connection refused </code></pre> <p>My deployment :</p> <pre><code>apiVersion: v1 kind: Service metadata: name: echo2 spec: ports: - port: 80 targetPort: 5678 selector: app: echo2 --- apiVersion: apps/v1 kind: Deployment metadata: name: echo2 spec: selector: matchLabels: app: echo2 replicas: 1 template: metadata: labels: app: echo2 spec: containers: - name: echo2 image: hashicorp/http-echo args: - &quot;-text=echo2&quot; ports: - containerPort: 5678 </code></pre> <p>And the ingress resource (deployed in same namespace) :</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: echo-ingress annotations: kubernetes.io/ingress.class: &quot;nginx&quot; spec: rules: - host: myserver.xyz http: paths: - pathType: Prefix path: &quot;/&quot; backend: service: name: echo1 port: number: 80 </code></pre> <p>My ingress is deployed using :</p> <pre><code> kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/do/deploy.yaml </code></pre> <p>What I have tried so far --&gt; 1. Changing the nginx service to type LoadBalancer :</p> <pre><code> kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;LoadBalancer&quot;, &quot;externalIPs&quot;:[&quot;146.190.XXX.XXX&quot;]}}' </code></pre> <ol start="2"> <li><p>Checking status of ingress</p> <pre><code> kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE echo-ingress &lt;none&gt; myserver.xyz 192.168.XX.X 80 2d9h example-ingress nginx hello-world.info 192.168.XX.X 80 61d </code></pre> </li> <li><p>Checking status of the services</p> </li> </ol> <pre><code> kubectl get svc --namespace=ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.108.xxx.XXX 146.190.XXX.XXX 80:317XX/TCP,443:320XX/TCP 2d23h ingress-nginx-controller-admission ClusterIP 10.101.xxx.XXX &lt;none&gt; 443/TCP 2d23h </code></pre> <ol start="3"> <li>Checking the ingress definition</li> </ol> <pre><code>kubectl describe ingress echo-ingress Name: echo-ingress Labels: &lt;none&gt; Namespace: default Address: 192.168.XX.X Ingress Class: &lt;none&gt; Default backend: &lt;default&gt; Rules: Host Path Backends ---- ---- -------- myserver.xyz / echo1:80 (XX.XXX.X.XXX:5678) Annotations: kubernetes.io/ingress.class: nginx Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 131m nginx-ingress-controller Scheduled for sync Normal Sync 126m (x2 over 126m) nginx-ingress-controller Scheduled for sync Normal Sync 108m nginx-ingress-controller Scheduled for sync </code></pre> <p>I have already pointed the domain name and DNS A records to the DigitalOcean VM 's External IP Address.</p> <p>I have also checked /etc/hosts and I added an entry for the domain as follows :</p> <pre><code>146.190.XXX.XXX myserver.xyz </code></pre> <p>What am I missing?</p> <p>Do I need to install <strong>metallb</strong> Minikube addon or its not applicable in this context ?</p> <p>PS: I am following the example tutorial given <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">here</a></p>
<p>Per @jordanm</p> <blockquote> <p>If you use service type of LoadBalancer, you connect to the load balancer IP, not to the VMs IP. If you want to connect directly to the VM, your nginx ingress should be a service type of NodePort.</p> </blockquote>
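<p>A hedged example of switching the controller Service to NodePort (the Service name and namespace below match the standard ingress-nginx manifests, which is also what your <code>kubectl get svc</code> output shows; adjust if your install differs):</p> <pre><code>kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;NodePort&quot;}}'

# note the assigned node ports, then browse to http://&lt;VM-IP&gt;:&lt;nodePort&gt;
kubectl get svc ingress-nginx-controller -n ingress-nginx
</code></pre>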
<p>When using the Google Cloud console, I can navigate to my GKE cluster in the browser and see/edit the &quot;control plane authorized networks&quot; like this:</p> <p><a href="https://i.stack.imgur.com/H5BaR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H5BaR.png" alt="enter image description here" /></a></p> <p>I've blurred the image, but essentially what you see is a list of &quot;display name&quot; - &quot;cidr block&quot; pairs. From the command line, I can update the authorized networks using:</p> <pre><code>gcloud container clusters update CLUSTER_NAME --region=REGION \ --enable-master-authorized-networks \ --master-authorized-networks=59.28.227.255/32,59.28.227.254/32 </code></pre> <p>But I also want to specify a display name for each CIDR block. How can I specify a display name when updating the authorized networks for a GKE cluster from the command line?</p>
<p>At the moment, it is not possible to specify a display name when adding a new master authorized network or when modifying an existing one using the gcloud command. A workaround would be to do it through the console.</p> <p>To <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-networks#add" rel="nofollow noreferrer">add an authorized network to an existing cluster</a>:</p> <ol> <li>Go to the Google Kubernetes Engine page in the Google Cloud console.</li> <li>Click the name of the cluster you want to modify.</li> <li>Under Networking, in the Control plane authorized networks field, click <code>Edit control plane authorized networks</code>.</li> <li>Select the <code>Enable control plane authorized networks checkbox</code>.</li> <li>Click Add authorized network.</li> <li>Enter a <code>Name</code> for the network.</li> <li>For Network, enter a CIDR range that you want to grant access to your cluster control plane.</li> <li>Click Done. Add additional authorized networks as needed.</li> <li>Click Save Changes.</li> </ol> <p>To add a display name to an existing authorized network in an existing cluster</p> <ol> <li>Click the name of the cluster you want to modify.</li> <li>Under Networking, in the Control plane authorized networks field, click <code>Edit control plane authorized networks</code>.</li> <li>Select the Enable control plane authorized networks checkbox.</li> <li>Click the <code>drop down arrow</code> next to the existing authorized network that you want to name</li> <li>Add the name of the authorized network</li> <li>Click Done and Save Changes</li> </ol> <p>Meanwhile, a feature request has been submitted to address the limitation. To stay updated on the status and progress of this feature request, please follow this <a href="https://issuetracker.google.com/issues/123983759" rel="nofollow noreferrer">link</a> to access the feature request page. Once there, click on the <code>+1</code> and <code>star</code> buttons to receive notifications regarding any updates.</p>
<p>I'm running the following cronjob in my minikube:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1 kind: CronJob metadata: name: hello spec: schedule: &quot;* * * * *&quot; concurrencyPolicy: Allow suspend: false successfulJobsHistoryLimit: 3 failedJobsHistoryLimit: 1 jobTemplate: spec: template: spec: containers: - name: hello image: busybox args: - /bin/sh - somefailure restartPolicy: OnFailure </code></pre> <p>I've added the &quot;somefailure&quot; to force failing of the job. My problem is that it seems that my minikube installation (running v1.23.3) ignores <code>successfulJobsHistoryLimit</code> and <code>failedJobsHistoryLimit</code>. I've checked the documentation on <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/</a> and it says that both parameters are available, but in the end, Kubernetes generates up to 10 jobs. When I add <code>ttlSecondsAfterFinished: 1</code>, it removes the container after 1 second, but the other parameters are completely ignored. So I wonder if I need to enable something in minikube or if these parameters are deprecated or what's the reason why it doesn't work. Any idea?</p>
<p>It seems it's a Kubernetes bug: <a href="https://github.com/kubernetes/kubernetes/issues/53331" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/53331</a>.</p>
<p><a href="https://i.stack.imgur.com/D5fPZ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/D5fPZ.png" alt="enter image description here" /></a></p> <p>I have been trying to setup minikube but the command is stuck at pulling base image since forever</p>
<p>It's not stuck; it is just not showing the output of the downloading progress. It's an 800 MB file, it takes time to download.</p> <p>See <a href="https://github.com/kubernetes/minikube/issues/7012" rel="noreferrer">https://github.com/kubernetes/minikube/issues/7012</a></p>
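<p>If you want to confirm that the download is actually progressing rather than hanging, one option is to start minikube with verbose logging; the flags below are passed through to the underlying logger and may differ slightly between versions:</p> <pre><code>minikube start --alsologtostderr -v=1
</code></pre>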
<p>I'm trying to migrate Several spring boot services to EKS and they can't retrieve aws credentials from credentials chain and pods are failing with following error: <code>Unable to load credentials from any of the providers in the chain AwsCredentialsProviderChain</code></p> <p>These are what I've tried so far:</p> <p>I'm using <a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html" rel="nofollow noreferrer">Web identity token from AWS STS</a> for credentials retrieval.</p> <pre><code>@Bean public AWSCredentialsProvider awsCredentialsProvider() { if (System.getenv(&quot;AWS_WEB_IDENTITY_TOKEN_FILE&quot;) != null) { return WebIdentityTokenCredentialsProvider.builder().build(); } return new DefaultAWSCredentialsProviderChain(); } @Bean public SqsClient sqsClient(AWSCredentialsProvider awsCredentialsProvider) { return SqsClient .builder() .credentialsProvider(() -&gt; (AwsCredentials) awsCredentialsProvider.getCredentials()) .region(Region.EU_WEST_1).build(); } @Bean public SnsClient snsClient(AWSCredentialsProvider awsCredentialsProvider) { return SnsClient .builder() .credentialsProvider(() -&gt; (AwsCredentials) awsCredentialsProvider.getCredentials()) .region(Region.EU_WEST_1).build(); } </code></pre> <p>The services also have <code>aws-java-sdk-sts</code> maven dependency packaged.</p> <p>IAM role for the services is also fine and <code>AWS_WEB_IDENTITY_TOKEN_FILE</code> is a also automatically created within pod after each Jenkins build based on K8s manifest file.</p> <p>From pod I can make GET and POST request to SNS and SQS without any problem.</p>
<p>Problem was fixed.</p> <p>The main issue was a conflicting AWS SDK BOM version with the individual modules. Also, the previous version of the BOM I was using didn't support AWS SDK v2.x.</p> <p>These are the main takeaways from the issue:</p> <ol> <li><p>The AWS SDK authenticates services using the <a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html" rel="nofollow noreferrer">credentials provider chain</a>. The <a href="https://sdk.amazonaws.com/java/api/latest/index.html?software/amazon/awssdk/auth/credentials/DefaultCredentialsProvider.html" rel="nofollow noreferrer">default credential provider</a> chain of the AWS SDK for Java 2.x searches for credentials in your environment using a predefined sequence.</p> <p>1.1 As of the <a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html" rel="nofollow noreferrer">AWS SDK for Java 2.x</a>, the web identity token from AWS STS is within the default provider chain.</p> <p>1.2 As long as you use v2 of the SDK and have the STS dependency, explicit configuration of the web identity token is redundant.</p> <p>1.3 Make sure the candidate service is using AWS SDK v2, as it reduces the configuration code to a minimum.</p> </li> </ol> <p>If a candidate service is using AWS SDK v1, the following configuration should be added, as the web identity token isn't in the default provider chain for v1.</p> <pre><code>@Bean public AWSCredentialsProvider awsCredentialsProvider() { if (System.getenv(&quot;AWS_WEB_IDENTITY_TOKEN_FILE&quot;) != null) { return WebIdentityTokenCredentialsProvider.builder().build(); } return new DefaultAWSCredentialsProviderChain(); } </code></pre> <p>Last but not least, try to use the latest <a href="https://mvnrepository.com/artifact/software.amazon.awssdk/bom" rel="nofollow noreferrer">AWS SDK BOM</a> dependency. <a href="https://github.com/aws/aws-sdk-java-v2" rel="nofollow noreferrer">(currently all modules have the same version, but this may not always be the case)</a></p>
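<p>For completeness: on EKS the <code>AWS_WEB_IDENTITY_TOKEN_FILE</code> variable is injected automatically when the pod runs under a service account annotated with an IAM role (IRSA). A minimal sketch, where the service account name and the role ARN are placeholders:</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service                  # hypothetical name
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-service-role   # placeholder ARN

# referenced from the Deployment's pod template:
# spec:
#   template:
#     spec:
#       serviceAccountName: my-service
</code></pre>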
<p>I am trying to create a path in a GKE ingress like this: /organizations/''/entity/''/download.</p> <p>NOTE: The '' above represents a wildcard (*)</p> <p>The values after organizations/ and after entity/ are dynamic so I have to use two wildcards, but this is not working, the first wildcard after the /organizations/* is taking all the requests.</p> <p>I want to apply a different timeout only on this specific request, therefore I need to configure it just like this, if there is /test instead of /download at the end, it shouldn't take place.</p> <p>I can't be the only one to have the same situation, and I am struggling to find anything on internet with the same issue? Any workaround?</p>
<p>The only supported wildcard character for the path field of an Ingress is the <code>*</code> character. The <code>*</code> character must follow a forward slash (<code>/</code>) and must be the last character in the pattern. For example, <code>/*</code>, <code>/foo/*</code>, and <code>/foo/bar/*</code> are valid patterns, but <code>*</code>, <code>/foo/bar*</code>, and <code>/foo/*/bar</code> are not.</p> <p>Source (from the GKE Ingress docs): <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#multiple_backend_services" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#multiple_backend_services</a></p>
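<p>So a pattern like <code>/organizations/*/entity/*/download</code> cannot be expressed directly; the closest a GKE Ingress path can get is a single trailing wildcard. A small sketch with a hypothetical backend service name:</p> <pre><code>spec:
  rules:
  - http:
      paths:
      - path: /organizations/*          # matches everything under /organizations/
        pathType: ImplementationSpecific
        backend:
          service:
            name: example-service       # hypothetical backend
            port:
              number: 80
</code></pre> <p>Any finer matching on the middle segments or on the <code>/download</code> suffix (for example to apply a different timeout) would have to happen behind that path, e.g. in the application itself or in a proxy/ingress controller that supports regex paths.</p>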
<p>I am trying to install kubeflow from branch master from manifests, using the command</p> <pre><code>while ! kustomize build example | awk '!/well-defined/' | kubectl apply -f -; do echo &quot;Retrying to apply resources&quot;; sleep 10; done </code></pre> <p>I am using kubernetes 1.24 from rancher desktop:</p> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;24&quot;, GitVersion:&quot;v1.24.4&quot;, GitCommit:&quot;95ee5ab382d64cfe6c28967f36b53970b8374491&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2022-08-17T18:54:23Z&quot;, GoVersion:&quot;go1.18.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;darwin/amd64&quot;} Kustomize Version: v4.5.4 Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;24&quot;, GitVersion:&quot;v1.24.11+k3s1&quot;, GitCommit:&quot;c14436a9ecfffb3be553a06bb0a4fac6122579ce&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2023-03-10T21:47:44Z&quot;, GoVersion:&quot;go1.19.6&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/arm64&quot;} and kustomize 5.0.0.1. </code></pre> <p>During the deployement I obtain an error:</p> <pre><code> NAMESPACE NAME READY STATUS RESTARTS AGE kube-system helm-install-traefik-crd-hs4r2 0/1 Completed 0 43m kube-system helm-install-traefik-s2c7l 0/1 Completed 2 43m kube-system traefik-64b96ccbcd-tjdz9 1/1 Running 1 (5m38s ago) 43m auth dex-8579644bbb-p5kc7 1/1 Running 1 (5m38s ago) 36m istio-system istiod-586fcd6677-nsfvh 1/1 Running 1 (5m38s ago) 38m cert-manager cert-manager-cainjector-d5dc6cd7f-qrjtt 1/1 Running 1 (9m11s ago) 37m kubeflow metadata-envoy-deployment-76c587bd47-dpxv2 1/1 Running 1 (5m38s ago) 13m kube-system local-path-provisioner-687d6d7765-gqnlg 1/1 Running 1 (5m38s ago) 43m kube-system svclb-traefik-1503cd1b-w69sd 2/2 Running 2 (5m38s ago) 43m kubeflow kubeflow-pipelines-profile-controller-5dd5468d9b-nxv99 1/1 Running 0 13m kube-system coredns-7b5bbc6644-xd9xp 1/1 Running 1 (5m38s ago) 43m kubeflow kserve-controller-manager-7879bf6dd7-29bdj 2/2 Running 2 (5m38s ago) 13m knative-eventing eventing-controller-5b7bfc8895-vzb4x 1/1 Running 1 (5m38s ago) 13m knative-eventing eventing-webhook-5896d776b-l4xb4 1/1 Running 1 (5m38s ago) 13m kubeflow katib-controller-86d4d45478-pstv7 1/1 Running 1 (5m38s ago) 13m cert-manager cert-manager-7475574-2w29b 1/1 Running 1 (9m11s ago) 37m cert-manager cert-manager-webhook-6868bd8b7-lbvrx 1/1 Running 1 (5m38s ago) 37m kube-system metrics-server-667586758d-59g4s 1/1 Running 1 (5m38s ago) 43m kubeflow katib-db-manager-689cdf95c6-v7jl8 1/1 Running 1 (7m47s ago) 13m kubeflow metacontroller-0 1/1 Running 0 12m kubeflow cache-server-86584db5d8-fvzq5 2/2 Running 0 13m kubeflow ml-pipeline-persistenceagent-75bccd8b64-n2gfl 2/2 Running 0 13m knative-serving net-istio-webhook-6858cd8998-mznfm 2/2 Running 5 (4m47s ago) 13m istio-system cluster-local-gateway-757849494c-cqv88 1/1 Running 1 (5m38s ago) 13m istio-system authservice-0 1/1 Running 0 13m istio-system istio-ingressgateway-cf7bd56f-9lvmg 1/1 Running 1 (5m38s ago) 38m kubeflow minio-6d6d45469f-8f7qt 2/2 Running 1 (5m38s ago) 13m knative-serving controller-657b7bb75c-gjxkm 2/2 Running 4 (4m32s ago) 13m knative-serving webhook-76f9bc6584-kzm74 2/2 Running 5 (4m39s ago) 13m knative-serving domainmapping-webhook-f76bcd89f-qdzg7 2/2 Running 5 (4m28s ago) 13m knative-serving domain-mapping-6c4878cc54-zvwz6 2/2 Running 5 (4m26s ago) 13m knative-serving net-istio-controller-6cb499fccb-g7dvk 2/2 Running 4 (4m33s ago) 13m kubeflow workflow-controller-78c979dc75-gl46c 2/2 Running 4 (4m29s ago) 13m 
kubeflow katib-mysql-5bc98798b4-v5tbv 1/1 Running 1 13m kubeflow ml-pipeline-scheduledworkflow-6dfcd5dd89-m4lmd 2/2 Running 1 (5m38s ago) 13m kubeflow ml-pipeline-viewer-crd-86cbc45d9b-8rrg8 2/2 Running 4 (4m23s ago) 13m knative-serving autoscaler-5cc8b77f4d-ztbzd 2/2 Running 3 (4m8s ago) 13m knative-serving activator-5bbf976855-979ch 2/2 Running 3 (4m9s ago) 13m kubeflow katib-ui-b5d5cf978-djvs5 2/2 Running 5 (4m20s ago) 13m kubeflow mysql-6878bbff69-pzq2p 2/2 Running 0 13m kubeflow training-operator-7f768bbbdb-9cp57 1/1 Running 2 (3m54s ago) 13m kubeflow metadata-writer-6c576c94b8-d7dhl 2/2 Running 2 (3m8s ago) 13m kubeflow ml-pipeline-77d4d9974b-vx5sz 2/2 Running 3 (2m12s ago) 13m kubeflow metadata-grpc-deployment-5c8599b99c-zp7qk 2/2 Running 5 (117s ago) 13m kubeflow ml-pipeline-visualizationserver-5577c64b45-d2v4b 2/2 Running 0 13m kubeflow admission-webhook-deployment-cb6db9648-78rtl 0/1 ImagePullBackOff 0 13m kubeflow kserve-models-web-app-f9c576856-88qdc 1/2 ImagePullBackOff 1 (5m38s ago) 13m kubeflow centraldashboard-dd9c778b6-78snk 1/2 ImagePullBackOff 1 (5m38s ago) 13m kubeflow ml-pipeline-ui-5ddb5b76d8-89hdf 2/2 Running 7 (2m15s ago) 13m kubeflow jupyter-web-app-deployment-cc9cbc696-bvb48 1/2 ImagePullBackOff 1 (5m38s ago) 13m kubeflow volumes-web-app-deployment-7b998df674-765sm 1/2 ImagePullBackOff 0 13m kubeflow tensorboards-web-app-deployment-8474fd9569-4xnst 1/2 ImagePullBackOff 0 13m kubeflow notebook-controller-deployment-699589b4f9-bb6fd 1/2 ImagePullBackOff 1 (5m38s ago) 13m kubeflow profiles-deployment-74f656c59f-qbzlz 1/3 ImagePullBackOff 1 (5m38s ago) 13m kubeflow tensorboard-controller-deployment-5655cc9dbb-5mvfg 2/3 ImagePullBackOff 0 13m </code></pre> <p>When I inspect the problematic pods: (for example tensorboard-controller-deployment-5655cc9dbb-5mvfg ) I obtained:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 15m default-scheduler 0/1 nodes are available: 1 Insufficient cpu. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod. Warning FailedScheduling 8m10s default-scheduler 0/1 nodes are available: 1 Insufficient cpu. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod. Normal Scheduled 7m57s default-scheduler Successfully assigned kubeflow/tensorboard-controller-deployment-5655cc9dbb-5mvfg to lima-rancher-desktop Warning FailedScheduling 16m default-scheduler 0/1 nodes are available: 1 Insufficient cpu. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod. 
Normal Created 7m43s kubelet Created container istio-init Normal Pulled 7m43s kubelet Container image &quot;docker.io/istio/proxyv2:1.16.0&quot; already present on machine Normal Started 7m42s kubelet Started container istio-init Normal Pulling 7m28s kubelet Pulling image &quot;gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0&quot; Normal Pulled 7m8s kubelet Successfully pulled image &quot;gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0&quot; in 19.515117093s Normal Created 7m7s kubelet Created container kube-rbac-proxy Normal Created 7m6s kubelet Created container istio-proxy Normal Pulled 7m6s kubelet Container image &quot;docker.io/istio/proxyv2:1.16.0&quot; already present on machine Normal Started 7m6s kubelet Started container kube-rbac-proxy Normal Started 7m5s kubelet Started container istio-proxy Warning Unhealthy 7m2s (x2 over 7m3s) kubelet Readiness probe failed: Get &quot;http://10.42.0.104:15021/healthz/ready&quot;: dial tcp 10.42.0.104:15021: connect: connection refused Normal Pulling 6m20s (x3 over 7m39s) kubelet Pulling image &quot;docker.io/kubeflownotebookswg/tensorboard-controller:v1.7.0-rc.0&quot; Warning Failed 6m8s (x2 over 7m28s) kubelet Failed to pull image **&quot;docker.io/kubeflownotebookswg/tensorboard-controller:v1.7.0-rc.0&quot;: rpc error: code = NotFound desc = failed to pull and unpack image &quot;docker.io/kubeflownotebookswg/tensorboard-controller:v1.7.0-rc.0&quot;: no match for platform in manifest: not found** Warning Failed 6m8s (x2 over 7m28s) kubelet Error: ErrImagePull Warning Failed 5m55s (x3 over 6m47s) kubelet Error: ImagePullBackOff Normal BackOff 81s (x20 over 6m47s) kubelet Back-off pulling image &quot;docker.io/kubeflownotebookswg/tensorboard-controller:v1.7.0-rc </code></pre> <p>It looks like the docker image registry is not found. Any idea how should I proceed ?</p>
<p>Yes, after further research I found that the image I need is not published for macOS on the Apple M1 (arm64). I need full virtualization in order to make it work.</p>
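<p>If someone wants to verify this for their own setup, one way to list the platforms an image manifest is published for is shown below (the image tag is the one from the error message; the command requires a Docker CLI with manifest support):</p> <pre><code>docker manifest inspect docker.io/kubeflownotebookswg/tensorboard-controller:v1.7.0-rc.0 \
  | grep -A3 '&quot;platform&quot;'
</code></pre> <p>If only <code>amd64</code> entries show up, the image cannot run natively on an arm64 (M1) node without emulation.</p>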
<p>I'm currently trying to autoscale my tcp services with kubernetes. I'm not using cloud infrastructure(amazon, aws..etc) so i use haproxy for load balancing. my tcp service isn't http protocol so i added my services with --configmap-tcp-services argument.</p> <pre><code> apiVersion: apps/v1 kind: Deployment metadata: name: prokittest spec: replicas: 1 selector: matchLabels: app: prokit-server-label template: metadata: labels: app: prokit-server-label spec: volumes: - name: host-volume hostPath: path: /cosmo/ type: DirectoryOrCreate containers: - name: flask-web-server image: jakgon/app:2.2 securityContext: privileged: true volumeMounts: - name: host-volume mountPath: /node/plugins env: - name: POD_HOSTNAME valueFrom: fieldRef: fieldPath: status.podIP - name: SERVER_GROUP value: &quot;lobby&quot; - name: MAX_PLAYER value: &quot;100&quot; - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name ports: - name: server-port containerPort: 25565 - name: healthz-checker containerPort: 8088 --- kind: Service apiVersion: v1 metadata: name: prokittest annotations: haproxy.org/send-proxy-protocol: proxy-v2 haproxy.org/load-balance: &quot;leastconn&quot; haproxy.org/check: &quot;true&quot; haproxy.org/check-interval: &quot;10s&quot; labels: run: prokittest app: service-monitor-label spec: selector: app: prokit-server-label ports: - name: server-port port: 25565 - name: healthz-checker port: 8088 targetPort: 8088 --- apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: blog-service-monitor labels: release: prometheus-operator spec: selector: matchLabels: app: service-monitor-label endpoints: - port: healthz-checker </code></pre> <p>and following is my configmap for configmap-tcp-services</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: tcp-services namespace: default data: 9000: &quot;cosmoage/prokittest:25565&quot; </code></pre> <p>After this applies, haproxy.cfg automatically changed when service pod resizing. But, I can't access to haproxy server without port-forwarding manually like this:</p> <pre><code>kubectl port-forward [[ingress pod name]] 9000:9000 --address 0.0.0.0 </code></pre> <p>Am i misunderstanding about haproxy ConfigMap? ingress pod name is not fixed so port forward is not a good idea for using.</p>
<p>Have you checked the ingress controller logs? I can see that your service is in the <code>default</code> namespace, not in <code>cosmoage</code>.</p> <p>Try using this ConfigMap instead:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: tcp-services namespace: default data: &quot;9000&quot;: default/prokittest:25565 </code></pre>
<p>I can't find where the kubelet logs are located for Docker Desktop (Windows). There's a similar question <a href="https://stackoverflow.com/questions/34113476/where-are-the-kubernetes-kubelet-logs-located">here</a>, but the answers all refer to linux/kind installs of kubernetes.</p>
<p><em>I get that this is an old question, but it does seem to get some traffic, has no accepted answer and is the only of this kind on SO - here is my take on this problem and a solution:</em></p> <h2>The process</h2> <p>As already pointed out by @acid_fuji, everything is running inside a VM we need to access, to get the required information. As most things in Linux have some representation in the filesystem, and tools like <code>ps</code> use <em>procfs</em> to query the requested information, mounting the VMs root directory (<code>/</code>) ist sufficient for introspection and quite easy:</p> <p>Run a container and mount the VMs root directory <code>/</code> to <code>/host</code>:</p> <pre class="lang-bash prettyprint-override"><code>docker run --rm -it -v /:/host alpine </code></pre> <p>In that container, get a shell with the VMs root directory in <code>/host</code> as root <code>/</code>:</p> <pre class="lang-bash prettyprint-override"><code>chroot /host </code></pre> <p>From this shell with changed root directory, tools like <code>ps</code> will return information about the VM and not your container anymore. Next step is to find some information about the running <code>kubelet</code>. Get a tree formatted list of all running processes in the VM with their command line and highlight <em>kubelet</em>:</p> <pre class="lang-bash prettyprint-override"><code>ps -ef --forest | grep --color -E 'kubelet|$' </code></pre> <p>Reduced to the relevant portions, the result will look something like this:</p> <pre><code>UID PID PPID C STIME TTY TIME CMD ... root 30 1 0 Aug11 ? 00:05:09 /usr/bin/memlogd -fd-log 3 -fd-query 4 -max-lines 5000 -max-line-len 1024 ... root 543 1 0 Aug11 ? 00:00:16 /usr/bin/containerd-shim-runc-v2 -namespace services.linuxkit -id docker -address /run/containerd/containerd.sock root 563 543 0 Aug11 ? 00:00:02 \_ /usr/bin/docker-init /usr/bin/entrypoint.sh root 580 563 0 Aug11 ? 00:00:00 | \_ /bin/sh /usr/bin/entrypoint.sh root 679 580 0 Aug11 ? 00:00:00 | \_ /usr/bin/logwrite -n lifecycle-server /usr/bin/lifecycle-server root 683 679 0 Aug11 ? 00:00:35 | \_ /usr/bin/lifecycle-server ... root 1539 683 0 Aug11 ? 00:00:01 | \_ /usr/bin/logwrite -n kubelet kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config /etc/kubeadm/kubelet.yaml --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --hostname-override=docker-desktop --container-runtime=remote --container-runtime-endpoint unix:///var/run/cri-dockerd.sock root 1544 1539 2 Aug11 ? 00:38:38 | \_ kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config /etc/kubeadm/kubelet.yaml --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --hostname-override=docker-desktop --container-runtime=remote --container-runtime-endpoint unix:///var/run/cri-dockerd.sock </code></pre> <p>We find <code>kubelet</code> as PID <strong>1544</strong>, spawned by <code>logwrite</code> (PID <strong>1539</strong>). Issuing <code>logwrite --help</code> we find the <code>-n</code> flag to set the name to appear in logs for the launched instance, and <code>logwrite</code> to be sending its logs to <code>memlogd</code>, which we find as PID <strong>30</strong>.</p> <p>Knowing what to search for I found this blog post <a href="https://www.docker.com/blog/capturing-logs-in-docker-desktop/" rel="nofollow noreferrer">Capturing Logs in Docker Desktop</a> describing how logging in <em>Docker Desktop</em> is implemented. 
What we can learn:</p> <p><code>logwrite</code> and <code>memlogd</code> are part of <a href="https://github.com/linuxkit/linuxkit" rel="nofollow noreferrer">Linuxkit</a>, and LinuxKit provides binaries of their packages as containers, with the logging infrastructure as <a href="https://hub.docker.com/r/linuxkit/memlogd" rel="nofollow noreferrer"><code>linuxkit/memlogd</code></a>. <code>memlogd</code> has a socket to query for logs at <code>/run/guest-services/memlogdq.sock</code>.</p> <p><em>Be aware, that the <code>latest</code>-tag on <code>linuxkit/memlogd</code> is quite useless and pick a specific tag matching the running version in the VM; just try another version if the next step errors.</em></p> <h2>The solution</h2> <p>Run <code>logread</code> from <a href="https://hub.docker.com/r/linuxkit/memlogd" rel="nofollow noreferrer"><code>linuxkit/memlogd</code></a> and mount the VMs <code>/run/guest-services/memlogdq.sock</code> to the <code>logread</code>s expected default location at <code>/var/run/memlogdq.sock</code>. Tell <code>logread</code> to either show only new entries and follow, using <code>-f</code>, or dump all existing entries and follow with <code>-F</code>. Pipe this through something like grep, or a powershell equivalent and filter for <code>kubelet</code>:</p> <pre class="lang-bash prettyprint-override"><code># example assuming presence grep on windows docker run --rm -it -v /run/guest-services/memlogdq.sock:/var/run/memlogdq.sock linuxkit/memlogd:014f86dce2ea4bb2ec13e92ae5c1e854bcefec40 /usr/bin/logread -F | grep -i kubelet </code></pre> <p>Tag <code>014f86dce2ea4bb2ec13e92ae5c1e854bcefec40</code> is working with <em>Docker Desktop <code>v4.11.1</code></em>; just test which container version/tag works with your docker version.</p>
<p>So I was going to set up a GKE cluster and interact it with <code>kubectl</code>. But when I tried to apply the namespace, it just threw an error.</p> <p>I've added my IP to Control plane authorized networks in GKE dashboard.</p> <p>I'm using a Windows 10 machine, my kube/config</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: [REDACTED] server: https://34.66.200.196 name: gke_project-307907_us-central1_test contexts: - context: cluster: gke_project-307907_us-central1_test user: gke_project-307907_us-central1_test name: gke_project-307907_us-central1_test current-context: gke_project-307907_us-central1_test kind: Config preferences: {} users: - name: gke_project-307907_us-central1_test user: exec: apiVersion: client.authentication.k8s.io/v1beta1 command: gke-gcloud-auth-plugin.exe installHint: Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke provideClusterInfo: true </code></pre> <p>I tried to apply namespace but,</p> <pre class="lang-bash prettyprint-override"><code>╰─ kubectl apply -f k8s/canary/namespace.yaml Unable to connect to the server: dial tcp 34.66.200.196:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. </code></pre>
<p><strong>It looks like kubectl lost its connection to the cluster. You can set the cluster context by following the official GCP troubleshooting doc&nbsp;<a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#kubectl-times-out" rel="nofollow noreferrer">GCP kubectl command times out</a>; try the two solutions below:</strong></p> <p><strong>Solution 1: kubectl cannot communicate with the cluster control plane, or the context doesn't exist</strong></p> <blockquote> <p>To resolve your issue, verify the context where the cluster is set:</p> <p>Go to <code>$HOME/.kube/config</code> or run the command <code>kubectl config view</code> to verify the config file contains the cluster context and the external IP address of the control plane (there is a possibility that the server mentioned there is old or not reachable).</p> <p>Set the cluster credentials:</p> <pre><code> gcloud container clusters get-credentials CLUSTER_NAME \ --region=COMPUTE_REGION \ --project=PROJECT_ID </code></pre> <p><strong>Note:</strong> For zonal clusters, use <code>--zone=COMPUTE_ZONE</code></p> </blockquote> <p>The above command will automatically update the default cluster for kubectl. In case you don’t know the correct cluster name and zone, use <code>gcloud container clusters list</code>. After completing the above steps, please try to create the namespace again and let me know the outcome.</p> <p><strong>Solution 2: The source IP is not allowed in the &quot;<code>Control plane authorized networks</code>&quot; cluster config</strong></p> <blockquote> <p>If the cluster is a private GKE cluster, then ensure that the outgoing IP of the machine you are attempting to connect from is included in the list of existing authorized networks.</p> <p>You can find your existing authorized networks by running the following command:</p> <pre><code> gcloud container clusters describe \ --region= --project= \ --format &quot;flattened(masterAuthorizedNetworksConfig.cidrBlocks[])&quot; </code></pre> </blockquote> <p>In GKE, there is a feature called &quot;<code>Control plane authorized networks</code>&quot;. The main purpose of this parameter is to allow the user to specify CIDR ranges and allow IP addresses in those ranges to <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept#overview" rel="nofollow noreferrer">access GKE cluster endpoints</a>.</p> <p>You can also use the GCP console to check the allowed IP CIDRs in the &quot;<code>Control plane authorized networks</code>&quot; section of the GKE cluster details.</p> <p><strong>Other scenarios:</strong></p> <p><strong>a.</strong> For other common causes, refer to <strong>Uli Köhler’s</strong> blog post on <strong>TechOverflow</strong>: <a href="https://techoverflow.net/2019/04/01/how-to-fix-kubectl-unable-to-connect-to-the-server-dial-tcp-443-i-o-timeout/" rel="nofollow noreferrer">How to fix kubectl Unable to connect to the server: dial tcp …:443: i/o timeout</a>.</p> <p><strong>b.</strong> If you're using Docker's default bridge network as your GKE endpoint, disable the bridge network to avoid the network conflict.</p> <p><strong>c.</strong> Also, check if there are recent updates to <strong>Windows/Docker</strong>. Refer to <strong>Dzmitry Paulouski’s</strong> answer <a href="https://stackoverflow.com/questions/68954948/kubernetes-on-windows-error-unable-to-connect-to-the-server-dial-tcp-some-ip">Kubernetes on Windows Error: Unable to connect to the server: dial tcp</a>, which may help to resolve your issue.</p> <p><strong>d.</strong> If the issue is with the config file itself, restarting Docker Desktop after adding/replacing the file in <code>$HOME/.kube/config</code> restores kubectl with the new configuration.</p>
<p>I have been getting alerts that a few APIs were deprecated in v1.22 of my Kubernetes cluster in GCP, so I changed all of my deployments and nginx controllers to apps/v1 and networking.k8s.io/v1. But my cluster still shows that there are some components using the deprecated APIs. The error on the pod is -</p> <pre><code>k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: the server could not find the requested resource </code></pre> <p>The deprecated APIs called are - API:</p> <pre><code>/apis/networking.k8s.io/v1beta1/ingresses </code></pre> <p>for user agent -</p> <pre><code>nginx-ingress-controller/v0.35.0 (linux/amd64) ingress-nginx/54ad65e32bcab32791ab18531a838d1c0f0811ef </code></pre> <p>and</p> <pre><code>/apis/networking.k8s.io/v1beta1/ingressclasses </code></pre> <p>for</p> <pre><code>nginx-ingress-controller/v0.35.0 (linux/amd64) ingress-nginx/54ad65e32bcab32791ab18531a838d1c0f0811ef </code></pre> <p>Today some of my deployments were unreachable from the browser. What changes do I need to make?</p>
<p>The root cause of the issue is that the <strong>NGINX Controller has to be compatible with the kubernetes version</strong>.</p> <p><strong>Follow resolutions steps below :</strong></p> <ol> <li><p>Check the version of NGINX.</p> </li> <li><p>Check Kubernetes version GKE console in GCP UI.</p> </li> <li><p>Confirm the supported versions with kubernetes <a href="https://github.com/kubernetes/ingress-nginx#support-versions-table" rel="nofollow noreferrer">here</a>.</p> </li> <li><p>Update NGINX (change the API version of NGINX Ingress)</p> </li> </ol> <p><strong>If NGINX was installed using Helm, its name will be ingress-nginx, and it can be upgraded using a helm command like below:</strong></p> <pre><code>helm upgrade --reuse-values ingress-nginx ingress-nginx/ingress-nginx </code></pre> <p>You can also refer to the GCP official doc <a href="https://cloud.google.com/kubernetes-engine/docs/deprecations/apis-1-22#ingress-v122" rel="nofollow noreferrer">Upgrading third-party components</a> and similar <a href="https://github.com/bitnami/charts/issues/7264" rel="nofollow noreferrer">Github issue</a> for relevant info, which may help to resolve your issue.</p>
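<p>A hedged way to check the running controller version for step 1 (this assumes the community ingress-nginx deployment in the <code>ingress-nginx</code> namespace with its standard labels; adjust names to your install):</p> <pre><code>POD=$(kubectl get pods -n ingress-nginx \
  -l app.kubernetes.io/name=ingress-nginx \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n ingress-nginx &quot;$POD&quot; -- /nginx-ingress-controller --version
</code></pre>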
<p>After upgrading Kubernetes node pool from 1.21 to 1.22, ingress-nginx-controller pods started crashing. The same deployment has been working fine in EKS. I'm just having this issue in GKE. Does anyone have any ideas about the root cause?</p> <p><code>$ kubectl logs ingress-nginx-controller-5744fc449d-8t2rq -c controller</code></p> <pre><code>------------------------------------------------------------------------------- NGINX Ingress controller Release: v1.3.1 Build: 92534fa2ae799b502882c8684db13a25cde68155 Repository: https://github.com/kubernetes/ingress-nginx nginx version: nginx/1.19.10 ------------------------------------------------------------------------------- W0219 21:23:08.194770 8 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. I0219 21:23:08.194995 8 main.go:209] &quot;Creating API client&quot; host=&quot;https://10.1.48.1:443&quot; </code></pre> <p>Ingress pod events:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 27m default-scheduler Successfully assigned infra/ingress-nginx-controller-5744fc449d-8t2rq to gke-infra-nodep-ffe54a41-s7qx Normal Pulling 27m kubelet Pulling image &quot;registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974&quot; Normal Started 27m kubelet Started container controller Normal Pulled 27m kubelet Successfully pulled image &quot;registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974&quot; in 6.443361484s Warning Unhealthy 26m (x6 over 26m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 502 Normal Killing 26m kubelet Container controller failed liveness probe, will be restarted Normal Created 26m (x2 over 27m) kubelet Created container controller Warning FailedPreStopHook 26m kubelet Exec lifecycle hook ([/wait-shutdown]) for Container &quot;controller&quot; in Pod &quot;ingress-nginx-controller-5744fc449d-8t2rq_infra(c4c166ff-1d86-4385-a22c-227084d569d6)&quot; failed - error: command '/wait-shutdown' exited with 137: , message: &quot;&quot; Normal Pulled 26m kubelet Container image &quot;registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974&quot; already present on machine Warning BackOff 7m7s (x52 over 21m) kubelet Back-off restarting failed container Warning Unhealthy 2m9s (x55 over 26m) kubelet Liveness probe failed: HTTP probe failed with statuscode: 502 </code></pre>
<p>The Beta API versions (<code>extensions/v1beta1 and networking.k8s.io/v1beta1</code>) of <code>Ingress</code> are no longer served (removed) for GKE clusters created on versions 1.22 and later. Please refer to the official <a href="https://cloud.google.com/kubernetes-engine/docs/deprecations/apis-1-22#ingress-v122" rel="nofollow noreferrer">GKE ingress documentation</a> for changes in the GA API version.</p> <p>Also refer to Official Kubernetes documentation for <a href="https://kubernetes.io/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/#api-changes" rel="nofollow noreferrer">API removals for Kubernetes v1.22</a> for more information.</p> <p>Before upgrading your Ingress API as a client, make sure that every ingress controller that you use is compatible with the v1 Ingress API. See <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#prerequisites" rel="nofollow noreferrer">Ingress Prerequisites</a> for more context about Ingress and ingress controllers.</p> <p><strong>Also check below possible causes for Crashloopbackoff :</strong></p> <ol> <li><p>Increasing the <code>initialDelaySeconds</code> value for the livenessProbe setting may help to alleviate the issue, as it will give the container more time to start up and perform its initial work operations before the liveness probe server checks its health.</p> </li> <li><p>Check <code>“Container restart policy”</code>, the spec of a Pod has a restartPolicy field with possible values Always, OnFailure, and Never. The default value is Always.</p> </li> <li><p><strong>Out of memory or resources :</strong> Try to increase the VM size. Containers may crash due to memory limits, then new ones spun up, the health check failed and Ingress served up 502.</p> </li> <li><p>Check <code>externalTrafficPolicy=Local</code> is set on the NodePort service will prevent nodes from forwarding traffic to other nodes.</p> </li> </ol> <p>Refer to the Github issue <a href="https://github.com/kubernetes/ingress-gce/issues/34" rel="nofollow noreferrer">Document how to avoid 502s #34</a> for more information.</p>
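<p>Regarding point 1, the probe settings live on the controller container in the ingress-nginx Deployment. A hedged sketch of the relevant fragment (the path and port match the community manifests, the timing values are only examples):</p> <pre><code>containers:
- name: controller
  livenessProbe:
    httpGet:
      path: /healthz
      port: 10254
    initialDelaySeconds: 30   # default manifests use 10; give the controller more time to start
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 5
</code></pre> <p>It can be edited in place with <code>kubectl edit deployment ingress-nginx-controller -n ingress-nginx</code> (name/namespace may differ in your install).</p>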
<p>I have one api, which will run about 4 minutes. I have deployed it on kubernetes with ingress-nginx. All api work normally except the long-run api, it always return 504 Gateaway as below: <a href="https://i.stack.imgur.com/zBS9z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zBS9z.png" alt="enter image description here" /></a></p> <p>I have check in stackoverflow, and try some solution, none of it work for me.</p> <p>any help is welcome.</p> <p><a href="https://stackoverflow.com/questions/63945381/kubernetes-ingress-specific-app-504-gateway-time-out-with-60-seconds">Kubernetes Ingress (Specific APP) 504 Gateway Time-Out with 60 seconds</a></p> <p>I have changed ingress config as below:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: {{ .Values.releaseName }}-ingress annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/use-regex: 'true' nginx.ingress.kubernetes.io/affinity: cookie nginx.ingress.kubernetes.io/session-cookie-name: &quot;route&quot; nginx.ingress.kubernetes.io/session-cookie-expires: &quot;172800&quot; nginx.ingress.kubernetes.io/session-cookie-max-age: &quot;172800&quot; nginx.ingress.kubernetes.io/server-snippet: &quot;keepalive_timeout 3600s; grpc_read_timeout 3600s; grpc_send_timeout 3600s;client_body_timeout 3600s;&quot; nginx.ingress.kubernetes.io/proxy-connect-timeout: &quot;3601&quot; nginx.ingress.kubernetes.io/proxy-send-timeout: &quot;3601&quot; nginx.ingress.kubernetes.io/proxy-read-timeout: &quot;3601&quot; # nginx.org/proxy-connect-timeout: 3600s # nginx.org/proxy-read-timeout: 3600s # nginx.org/proxy-send-timeout: 3600s nginx.ingress.kubernetes.io/proxy-body-size: &quot;100M&quot; nginx.ingress.kubernetes.io/proxy-next-upstream: &quot;error non_idempotent http_502 http_503 http_504&quot; nginx.ingress.kubernetes.io/retry-non-idempotent: &quot;true&quot; nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: &quot;5&quot; nginx.ingress.kubernetes.io/proxy-next-upstream-tries: &quot;1&quot; # nginx.ingress.kubernetes.io/configuration-snippet: |- # location /HouseKeeping/Health/Healthz { # deny all; # return 403; # } # location /InternalApi/ { # deny all; # return 403; # } nginx.ingress.kubernetes.io/server-snippets: | http { client_max_body_size 100m; } location / { proxy_set_header Upgrade $http_upgrade; proxy_http_version 1.1; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Port $server_port; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header Connection $http_connection; proxy_cache_bypass $http_upgrade; proxy-connect-timeout: 3602; proxy-read-timeout: 3602; proxy-send-timeout: 3602; } spec: rules: - host: {{ .Values.apiDomain }} http: paths: - path: / pathType: Prefix backend: service: name: {{ .Values.releaseName }}-web-clusterip-srv port: number: 80 </code></pre> <p>I also change the configMap for ingress-nginx-controller to add below config:</p> <pre><code>apiVersion: v1 data: allow-snippet-annotations: &quot;true&quot; proxy-connect-timeout: &quot;300&quot; proxy-read-timeout: &quot;300&quot; proxy-send-timeout: &quot;300&quot; kind: ConfigMap </code></pre> <p>I also used command to get ingress-nginx's conf, it seems okay</p> <pre><code>kubectl -n ingress-nginx exec ingress-nginx-controller-6cc65c646d-ljmrm cat /etc/nginx/nginx.conf | tee nginx.test-ingress-export.conf </code></pre> <pre><code> # Custom headers to 
proxied server proxy_connect_timeout 3601s; proxy_send_timeout 3601s; proxy_read_timeout 3601s; </code></pre> <p>it still timeout in 120 seconds.</p>
<p><strong>Typo</strong></p> <p>Replace below (ingress config):</p> <pre><code>nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: &quot;5&quot; nginx.ingress.kubernetes.io/proxy-next-upstream-tries: &quot;1&quot; </code></pre> <p>In place of (ingress config)</p> <pre><code>nginx.ingress.kubernetes.io/proxy_next_upstream_timeout: &quot;5&quot; nginx.ingress.kubernetes.io/proxy_next_upstream_tries: &quot;1&quot; </code></pre> <p><strong>Edit :</strong></p> <p>Check this <a href="https://github.com/kubernetes/ingress-nginx/issues/2007#issuecomment-656662251" rel="nofollow noreferrer">comment</a> may help to resolve your issue.</p>
<p>I have a production cluster is currently running on K8s version <code>1.19.9</code>, where the kube-scheduler and kube-controller-manager failed to have leader elections. The leader is able to acquire the first lease, however it then cannot renew/reacquire the lease, this has caused other pods to constantly in the loop of electing leaders as none of them could stay on long enough to process anything/stay on long enough to do anything meaningful and they time out, where another pod will take the new lease; this happens from node to node. Here are the logs:</p> <pre><code>E1201 22:15:54.818902 1 request.go:1001] Unexpected error when reading response body: context deadline exceeded E1201 22:15:54.819079 1 leaderelection.go:361] Failed to update lock: resource name may not be empty I1201 22:15:54.819137 1 leaderelection.go:278] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition F1201 22:15:54.819176 1 controllermanager.go:293] leaderelection lost </code></pre> <p>Detailed Docker logs:</p> <pre><code>Flag --port has been deprecated, see --secure-port instead. I1201 22:14:10.374271 1 serving.go:331] Generated self-signed cert in-memory I1201 22:14:10.735495 1 controllermanager.go:175] Version: v1.19.9+vmware.1 I1201 22:14:10.736289 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt I1201 22:14:10.736302 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt I1201 22:14:10.736684 1 secure_serving.go:197] Serving securely on 0.0.0.0:10257 I1201 22:14:10.736747 1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager... I1201 22:14:10.736868 1 tlsconfig.go:240] Starting DynamicServingCertificateController E1201 22:14:20.737137 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get &quot;https://[IP address]:[Port]/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s&quot;: context deadline exceeded E1201 22:14:32.803658 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get &quot;https://[IP address]:[Port]/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s&quot;: context deadline exceeded E1201 22:14:44.842075 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get &quot;https://[IP address]:[Port]/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s&quot;: context deadline exceeded E1201 22:15:13.386932 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: context deadline exceeded I1201 22:15:44.818571 1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager I1201 22:15:44.818755 1 event.go:291] &quot;Event occurred&quot; object=&quot;kube-system/kube-controller-manager&quot; kind=&quot;Endpoints&quot; apiVersion=&quot;v1&quot; type=&quot;Normal&quot; reason=&quot;LeaderElection&quot; message=&quot;master001_1d360610-1111-xxxx-aaaa-9999 became leader&quot; I1201 22:15:44.818790 1 event.go:291] &quot;Event occurred&quot; object=&quot;kube-system/kube-controller-manager&quot; kind=&quot;Lease&quot; apiVersion=&quot;coordination.k8s.io/v1&quot; type=&quot;Normal&quot; reason=&quot;LeaderElection&quot; message=&quot;master001_1d360610-1111-xxxx-aaaa-9999 became leader&quot; E1201 22:15:54.818902 1 request.go:1001] Unexpected error when reading response body: context deadline 
exceeded E1201 22:15:54.819079 1 leaderelection.go:361] Failed to update lock: resource name may not be empty I1201 22:15:54.819137 1 leaderelection.go:278] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition F1201 22:15:54.819176 1 controllermanager.go:293] leaderelection lost goroutine 1 [running]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc000fb20d0, 0x4c, 0xc6) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9 k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x6a57fa0, 0xc000000003, 0x0, 0x0, 0xc000472070, 0x68d5705, 0x14, 0x125, 0x0) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191 </code></pre> <p>My duct tape recovery method was to shutdown the other candidates and disable leader elections <code>--leader-elect=false</code>. We manually set a leader and let it stay on for a while, then reactivated leader elections after. This has seemed to work as intended again, the leases are renewing normally after.</p> <p>Could it be possible that the api-server may be too overwhelmed to expend any resources(?), because the elections have failed due to timeout? Was wondering if anyone has ever encountered such an issue.</p>
<p>@<strong>janeosaka</strong>, you are right: this problem occurs when you have either a <code>1) resource crunch</code> or a <code>2) network issue</code>.</p> <p>It looks like the leader election API call is timing out because the Kube API Server is under a resource crunch, which has increased the latency of API calls.</p> <p><strong>1) Resource Crunch :</strong> (<strong>increase the CPU and memory of the nodes</strong>)</p> <p>This appears to be the expected behavior: when the leader election fails, the controller cannot renew its lease, and by design the controller is restarted to ensure that a single controller is active at a time.</p> <p>LeaseDuration and RenewDeadline (the duration for which the acting master will keep retrying) are <a href="https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/manager/manager.go#L177-L186" rel="nofollow noreferrer">configurable in controller-runtime</a>.</p> <p>Another approach you may consider is to leverage <a href="https://kubernetes.io/docs/concepts/cluster-administration/flow-control/" rel="nofollow noreferrer">API Priority &amp; Fairness</a> to increase the chances of success of the calls made to the API by your controller, if the controller is not itself at the origin of the API overload.</p> <p><strong>2) Network Issue :</strong> if it is a network issue, the lost leader election is a symptom that the host has network problems, not a cause. The issue may resolve after restarting the SDN pod.</p> <p>Keep in mind that <code>&quot;sdn-controller&quot;</code> and <code>&quot;sdn&quot;</code> are very different things. If restarting an <em>sdn</em> pod fixed things, then the <em>sdn-controller</em> error you noticed was not the actual problem.</p>
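<p>As a concrete illustration of the first option, the leader-election timings of the control-plane components can also be relaxed so that a slow API server does not immediately cost the lease. This is a minimal sketch: the flags below are the standard kube-controller-manager/kube-scheduler leader-election flags, but the values shown are assumptions you would tune for your own environment, typically by editing the static pod manifest.</p> <pre><code># sketch: relax leader-election timings on the control plane (values are illustrative)
# defaults are 15s / 10s / 2s; renew-deadline must be shorter than lease-duration
# edit the command flags in /etc/kubernetes/manifests/kube-controller-manager.yaml
kube-controller-manager \
  --leader-elect=true \
  --leader-elect-lease-duration=60s \
  --leader-elect-renew-deadline=40s \
  --leader-elect-retry-period=10s
</code></pre> <p>Longer timings make the election more tolerant of API latency, at the cost of a slower failover when a leader really dies.</p>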
<p>I am trying to install PGO operator by following <a href="https://access.crunchydata.com/documentation/postgres-operator/5.3.0/quickstart/#installation" rel="nofollow noreferrer">this</a> Docs. When I run this command</p> <pre><code>kubectl apply --server-side -k kustomize/install/default </code></pre> <p>my Pod run and soon it hit to crash loop back.</p> <p><strong>What I have done</strong> I check the logs of Pods with this command</p> <pre><code>kubectl logs pgo-98c6b8888-fz8zj -n postgres-operator </code></pre> <p><strong>Result</strong></p> <pre><code>time=&quot;2023-01-09T07:50:56Z&quot; level=debug msg=&quot;debug flag set to true&quot; version=5.3.0-0 time=&quot;2023-01-09T07:51:26Z&quot; level=error msg=&quot;Failed to get API Group-Resources&quot; error=&quot;Get \&quot;https://10.96.0.1:443/api?timeout=32s\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; version=5.3.0-0 panic: Get &quot;https://10.96.0.1:443/api?timeout=32s&quot;: dial tcp 10.96.0.1:443: i/o timeout goroutine 1 [running]: main.assertNoError(...) github.com/crunchydata/postgres-operator/cmd/postgres-operator/main.go:42 main.main() github.com/crunchydata/postgres-operator/cmd/postgres-operator/main.go:84 +0x465 </code></pre> <p>To check the network connection to host I run this command</p> <pre><code>wget https://10.96.0.1:443/api </code></pre> <p><strong>The Result is</strong></p> <pre><code>--2023-01-09 09:49:30-- https://10.96.0.1/api Connecting to 10.96.0.1:443... connected. ERROR: cannot verify 10.96.0.1's certificate, issued by ‘CN=kubernetes’: Unable to locally verify the issuer's authority. To connect to 10.96.0.1 insecurely, use `--no-check-certificate'. </code></pre> <p>As you can see it is connected to API</p> <p><strong>Strange issue might be useful to help me</strong></p> <p>I run <code>kubectl get pods --all-namespaces</code> and see this output</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE kube-flannel kube-flannel-ds-9gmmq 1/1 Running 0 3d16h kube-flannel kube-flannel-ds-rcq8l 0/1 CrashLoopBackOff 10 (3m15s ago) 34m kube-flannel kube-flannel-ds-rqwtj 0/1 CrashLoopBackOff 10 (2m53s ago) 34m kube-system etcd-masterk8s-virtual-machine 1/1 Running 1 (5d ago) 3d17h kube-system kube-apiserver-masterk8s-virtual-machine 1/1 Running 2 (5d ago) 3d17h kube-system kube-controller-manager-masterk8s-virtual-machine 1/1 Running 8 (2d ago) 3d17h kube-system kube-scheduler-masterk8s-virtual-machine 1/1 Running 7 (5d ago) 3d17h postgres-operator pgo-98c6b8888-fz8zj 0/1 CrashLoopBackOff 7 (4m59s ago) 20m </code></pre> <p>As you can see my two kube-flannel Pods are also in crash loop-back and one is running. I am not sure if this is the main cause of this problem</p> <p><strong>What I want?</strong> I want to run the PGO pod successfully with no error.</p> <p><strong>How you can help me?</strong> Please help me to find the issue or any other way to get detailed logs. I am not able to find the root cause of this problem because, If it was network issue then why its connected? 
if its something else then how can I find the information?</p> <p><strong>Update and New errors after apply the fixes:</strong></p> <pre><code>time=&quot;2023-01-09T11:57:47Z&quot; level=debug msg=&quot;debug flag set to true&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Metrics server is starting to listen&quot; addr=&quot;:8080&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;upgrade checking enabled&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;starting controller runtime manager and will wait for signal to exit&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Starting server&quot; addr=&quot;[::]:8080&quot; kind=metrics path=/metrics version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=debug msg=&quot;upgrade check issue: namespace not set&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Starting EventSource&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster source=&quot;kind source: *v1beta1.PostgresCluster&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Starting EventSource&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster source=&quot;kind source: *v1.ConfigMap&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Starting EventSource&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster source=&quot;kind source: *v1.Endpoints&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Starting EventSource&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster source=&quot;kind source: *v1.PersistentVolumeClaim&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Starting EventSource&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster source=&quot;kind source: *v1.Secret&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Starting EventSource&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster source=&quot;kind source: *v1.Service&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Starting EventSource&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster source=&quot;kind source: *v1.ServiceAccount&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Starting EventSource&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster source=&quot;kind source: *v1.Deployment&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Starting EventSource&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster source=&quot;kind source: *v1.StatefulSet&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Starting EventSource&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster source=&quot;kind source: *v1.Job&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info 
msg=&quot;Starting EventSource&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster source=&quot;kind source: *v1.Role&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Starting EventSource&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster source=&quot;kind source: *v1.RoleBinding&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Starting EventSource&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster source=&quot;kind source: *v1.CronJob&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Starting EventSource&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster source=&quot;kind source: *v1.PodDisruptionBudget&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Starting EventSource&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster source=&quot;kind source: *v1.Pod&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Starting EventSource&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster source=&quot;kind source: *v1.StatefulSet&quot; version=5.3.0-0 time=&quot;2023-01-09T11:57:47Z&quot; level=info msg=&quot;Starting Controller&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster version=5.3.0-0 W0109 11:57:48.006419 1 reflector.go:324] k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User &quot;system:serviceaccount:postgres-operator:pgo&quot; cannot list resource &quot;poddisruptionbudgets&quot; in API group &quot;policy&quot; at the cluster scope E0109 11:57:48.006642 1 reflector.go:138] k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User &quot;system:serviceaccount:postgres-operator:pgo&quot; cannot list resource &quot;poddisruptionbudgets&quot; in API group &quot;policy&quot; at the cluster scope time=&quot;2023-01-09T11:57:49Z&quot; level=info msg=&quot;{\&quot;pgo_versions\&quot;:[{\&quot;tag\&quot;:\&quot;v5.1.0\&quot;},{\&quot;tag\&quot;:\&quot;v5.0.5\&quot;},{\&quot;tag\&quot;:\&quot;v5.0.4\&quot;},{\&quot;tag\&quot;:\&quot;v5.0.3\&quot;},{\&quot;tag\&quot;:\&quot;v5.0.2\&quot;},{\&quot;tag\&quot;:\&quot;v5.0.1\&quot;},{\&quot;tag\&quot;:\&quot;v5.0.0\&quot;}]}&quot; X-Crunchy-Client-Metadata=&quot;{\&quot;deployment_id\&quot;:\&quot;288f4766-8617-479b-837f-2ee59ce2049a\&quot;,\&quot;kubernetes_env\&quot;:\&quot;v1.26.0\&quot;,\&quot;pgo_clusters_total\&quot;:0,\&quot;pgo_version\&quot;:\&quot;5.3.0-0\&quot;,\&quot;is_open_shift\&quot;:false}&quot; version=5.3.0-0 W0109 11:57:49.163062 1 reflector.go:324] k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User &quot;system:serviceaccount:postgres-operator:pgo&quot; cannot list resource &quot;poddisruptionbudgets&quot; in API group &quot;policy&quot; at the cluster scope E0109 11:57:49.163119 1 reflector.go:138] k8s.io/[email 
protected]/tools/cache/reflector.go:167: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User &quot;system:serviceaccount:postgres-operator:pgo&quot; cannot list resource &quot;poddisruptionbudgets&quot; in API group &quot;policy&quot; at the cluster scope W0109 11:57:51.404639 1 reflector.go:324] k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User &quot;system:serviceaccount:postgres-operator:pgo&quot; cannot list resource &quot;poddisruptionbudgets&quot; in API group &quot;policy&quot; at the cluster scope E0109 11:57:51.404811 1 reflector.go:138] k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User &quot;system:serviceaccount:postgres-operator:pgo&quot; cannot list resource &quot;poddisruptionbudgets&quot; in API group &quot;policy&quot; at the cluster scope W0109 11:57:54.749751 1 reflector.go:324] k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User &quot;system:serviceaccount:postgres-operator:pgo&quot; cannot list resource &quot;poddisruptionbudgets&quot; in API group &quot;policy&quot; at the cluster scope E0109 11:57:54.750068 1 reflector.go:138] k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User &quot;system:serviceaccount:postgres-operator:pgo&quot; cannot list resource &quot;poddisruptionbudgets&quot; in API group &quot;policy&quot; at the cluster scope W0109 11:58:06.015650 1 reflector.go:324] k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User &quot;system:serviceaccount:postgres-operator:pgo&quot; cannot list resource &quot;poddisruptionbudgets&quot; in API group &quot;policy&quot; at the cluster scope E0109 11:58:06.015710 1 reflector.go:138] k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User &quot;system:serviceaccount:postgres-operator:pgo&quot; cannot list resource &quot;poddisruptionbudgets&quot; in API group &quot;policy&quot; at the cluster scope W0109 11:58:25.355009 1 reflector.go:324] k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User &quot;system:serviceaccount:postgres-operator:pgo&quot; cannot list resource &quot;poddisruptionbudgets&quot; in API group &quot;policy&quot; at the cluster scope E0109 11:58:25.355391 1 reflector.go:138] k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User &quot;system:serviceaccount:postgres-operator:pgo&quot; cannot list resource &quot;poddisruptionbudgets&quot; in API group &quot;policy&quot; at the cluster scope W0109 11:59:10.447123 1 reflector.go:324] k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User &quot;system:serviceaccount:postgres-operator:pgo&quot; cannot list resource 
&quot;poddisruptionbudgets&quot; in API group &quot;policy&quot; at the cluster scope E0109 11:59:10.447490 1 reflector.go:138] k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User &quot;system:serviceaccount:postgres-operator:pgo&quot; cannot list resource &quot;poddisruptionbudgets&quot; in API group &quot;policy&quot; at the cluster scope time=&quot;2023-01-09T11:59:47Z&quot; level=error msg=&quot;Could not wait for Cache to sync&quot; controller=postgrescluster controllerGroup=postgres-operator.crunchydata.com controllerKind=PostgresCluster error=&quot;failed to wait for postgrescluster caches to sync: timed out waiting for cache to be synced&quot; version=5.3.0-0 time=&quot;2023-01-09T11:59:47Z&quot; level=info msg=&quot;Stopping and waiting for non leader election runnables&quot; version=5.3.0-0 time=&quot;2023-01-09T11:59:47Z&quot; level=info msg=&quot;Stopping and waiting for leader election runnables&quot; version=5.3.0-0 time=&quot;2023-01-09T11:59:47Z&quot; level=info msg=&quot;Stopping and waiting for caches&quot; version=5.3.0-0 time=&quot;2023-01-09T11:59:47Z&quot; level=error msg=&quot;failed to get informer from cache&quot; error=&quot;Timeout: failed waiting for *v1.PodDisruptionBudget Informer to sync&quot; version=5.3.0-0 time=&quot;2023-01-09T11:59:47Z&quot; level=error msg=&quot;error received after stop sequence was engaged&quot; error=&quot;context canceled&quot; version=5.3.0-0 time=&quot;2023-01-09T11:59:47Z&quot; level=info msg=&quot;Stopping and waiting for webhooks&quot; version=5.3.0-0 time=&quot;2023-01-09T11:59:47Z&quot; level=info msg=&quot;Wait completed, proceeding to shutdown the manager&quot; version=5.3.0-0 panic: failed to wait for postgrescluster caches to sync: timed out waiting for cache to be synced goroutine 1 [running]: main.assertNoError(...) github.com/crunchydata/postgres-operator/cmd/postgres-operator/main.go:42 main.main() github.com/crunchydata/postgres-operator/cmd/postgres-operator/main.go:118 +0x434 </code></pre>
<p>If this is a new deployment, I suggest using <a href="https://access.crunchydata.com/documentation/postgres-operator/v5/" rel="nofollow noreferrer">v5</a>.</p> <p>That said, since PGO manages the networking for Postgres clusters (and, as such, manages listen_addresses), there's no reason to modify the listen_addresses configuration parameter. If you need to manage networking or network access, you can do that by setting the pg_hba config or by using <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">NetworkPolicies</a>.</p> <p>Please go through <a href="https://github.com/CrunchyData/postgres-operator/issues/2904" rel="nofollow noreferrer">Custom 'listen_addresses' not applied #2904</a> for more information.</p> <p><strong>CrashLoopBackOff:</strong> Check the pod logs for configuration or deployment issues such as missing dependencies (for example, Kubernetes doesn't support docker-compose's depends-on, so dependencies that relied on it need to be handled differently), and also check for pods being OOM-killed or using excessive resources.</p> <p>Check for <a href="https://github.com/projectcalico/calico/issues/3422" rel="nofollow noreferrer">timeout issues</a> and also this <a href="https://forum.linuxfoundation.org/discussion/856520/lab-12-7-problem-timeouts-on-dashboard" rel="nofollow noreferrer">lab on a timeout problem</a>.</p> <pre><code>ERROR: cannot verify 10.96.0.1's certificate, issued by ‘CN=kubernetes’: Unable to locally verify the issuer's authority. To connect to 10.96.0.1 insecurely, use `--no-check-certificate'. </code></pre> <p><strong>Try this solution for the above error:</strong> first, remove the ip link flannel.1 on every host that has this problem;</p> <p>secondly, delete the kube-flannel-ds DaemonSet from the cluster;</p> <p>lastly, recreate kube-flannel-ds; the flannel.1 ip link will be recreated and come back healthy.</p> <p>(For flannel to work correctly, you must pass <code>--pod-network-cidr=10.244.0.0/16</code> to kubeadm init, i.e. use that CIDR.)</p> <p><strong>Edit :</strong></p> <p>Please check this similar <a href="https://bugzilla.redhat.com/show_bug.cgi?id=2040136" rel="nofollow noreferrer">issue and solution</a>, which may help to resolve your issue.</p>
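<p>For the network-access side, a NetworkPolicy is usually enough. Below is a minimal, hypothetical sketch (the namespace <code>postgres-operator</code> and the pod label <code>postgres-operator.crunchydata.com/cluster: hippo</code> are assumptions; adjust them to the labels your cluster actually uses) that only allows traffic to the Postgres pods on port 5432 from pods labelled <code>app: my-client</code>:</p> <pre><code>kubectl apply -f - &lt;&lt;EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-postgres-clients        # hypothetical name
  namespace: postgres-operator        # assumption: namespace where the cluster runs
spec:
  podSelector:
    matchLabels:
      postgres-operator.crunchydata.com/cluster: hippo   # assumption: your cluster's pod label
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: my-client              # assumption: label of the allowed client pods
    ports:
    - protocol: TCP
      port: 5432
EOF
</code></pre> <p>Note that NetworkPolicies only take effect if your CNI plugin enforces them (e.g. Calico); plain flannel does not.</p>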
<p>I'm trying to deploy a react app on my local machine with docker-desktop and its kubernetes cluster, using the <strong>bitnami apache helm chart</strong>.<br/> I'm following <a href="https://docs.bitnami.com/tutorials/deploy-react-application-kubernetes-helm/" rel="nofollow noreferrer">this tutorial</a>.<br/> The tutorial makes you publish the image on a public repo (step 2) and I don't want to do that. It is indeed possible to pass the app files through a persistent volume claim.<br/> This is described in the <a href="https://docs.bitnami.com/tutorials/deploy-static-content-apache-kubernetes/" rel="nofollow noreferrer">following tutorial</a>.</p> <p>Step 2 of this second tutorial lets you create a pod pointing to a PVC and then asks you to copy the app files there by using the command<br/></p> <pre><code>kubectl cp /myapp/* apache-data-pod:/data/ </code></pre> <p>My issues:</p> <ol> <li>I cannot use the * wildcard or else I get an error. To avoid this I just run</li> </ol> <blockquote> <p>kubectl cp . apache-data-pod:/data/ <br/></p> </blockquote> <ol start="2"> <li>This instruction copies the files into the pod, but it creates another data folder inside the already existing data folder in the pod filesystem.<br/> After this command my pod filesystem looks like this <a href="https://i.stack.imgur.com/PqBdm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PqBdm.png" alt="enter image description here" /></a> I tried executing</li> </ol> <blockquote> <p>kubectl cp . apache-data-pod:/</p> </blockquote> <p>But this copies the files into the root of the pod filesystem, at the same location where the first data folder is.<br/></p> <p>I need to copy the data directly into &lt;my_pod&gt;:/data/. How can I achieve such behaviour?</p> <p>Regards</p>
<p><strong>Use the full path in the command, as shown below, to copy a file from the POD to your local machine:</strong></p> <pre><code>kubectl cp apache-pod:/var/www/html/index.html /tmp </code></pre> <p><em>If there are multiple containers in the POD, use the below syntax to copy a file from local to the pod:</em></p> <pre><code>kubectl cp /&lt;path-to-your-file&gt;/&lt;file-name&gt; &lt;pod-name&gt;:&lt;fully-qualified-file-name&gt; -c &lt;container-name&gt; </code></pre> <p><strong>Points to remember :</strong></p> <ol> <li>When referring to a file path on the POD, it is always relative to the WORKDIR you have defined in your image.</li> <li>Unlike on Linux, the base directory does not always start from /; the WORKDIR is the base directory.</li> <li>When you have multiple containers in the POD, you need to specify the container to use for the copy operation with the -c parameter.</li> </ol> <p><strong>Quick example of kubectl cp :</strong> Here is the command to copy the index.html file from the POD’s /var/www/html to the local /tmp directory.</p> <p>There is no need to mention the full path when the doc root is the workdir (the default directory of the image).</p> <pre><code>kubectl cp apache-pod:index.html /tmp </code></pre> <p>To make it less confusing, you can always write the full path like this</p> <pre><code>kubectl cp apache-pod:/var/www/html/index.html /tmp </code></pre> <p>Also refer to this <a href="https://stackoverflow.com/questions/52407277/how-to-copy-files-from-kubernetes-pods-to-local-system">stack question</a> for more information.</p>
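<p>For the original problem of copying the <em>contents</em> of the current directory straight into the pod's /data (without ending up with a nested folder), one workaround is to stream a tar archive through kubectl exec instead of kubectl cp. This is a sketch assuming the pod is named apache-data-pod and that its image ships a tar binary:</p> <pre><code># run from inside your local app directory; extracts the archive directly into /data on the pod
tar cf - . | kubectl exec -i apache-data-pod -- tar xf - -C /data
</code></pre> <p>kubectl cp itself uses tar under the hood, so this behaves the same way but lets you control the target directory explicitly with -C.</p>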
<p>I know that the kubelet reports that a node is in diskpressure if there is not enough space on the node.<br /> But I want to know the exact threshold of diskpressure.<br /> Please point me to the kubelet source code related to this issue if you can.<br /> I would also really appreciate the relevant official Kubernetes documentation, or anything else that helps.<br /> Thanks again!!</p>
<p>Disk pressure is a condition indicating that a node is using too much disk space, or is using disk space too fast, according to the thresholds you have set in your Kubernetes configuration.</p> <p>A DaemonSet can deploy apps to multiple nodes in a single step. Like Deployments, DaemonSets must be applied using kubectl before they can take effect.</p> <p>Since Kubernetes runs on Linux, checking usage is easily done by running the du command. You can either manually ssh into each Kubernetes node, or use a DaemonSet as follows:</p> <pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: disk-checker
  labels:
    app: disk-checker
spec:
  selector:
    matchLabels:
      app: disk-checker
  template:
    metadata:
      labels:
        app: disk-checker
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
        - resources:
            requests:
              cpu: 0.15
          securityContext:
            privileged: true
          image: busybox
          imagePullPolicy: IfNotPresent
          name: disk-checked
          command: [&quot;/bin/sh&quot;]
          args: [&quot;-c&quot;, &quot;du -a /host | sort -n -r | head -n 20&quot;]
          volumeMounts:
            - name: host
              mountPath: &quot;/host&quot;
      volumes:
        - name: host
          hostPath:
            path: &quot;/&quot;
</code></pre> <p>The DiskPressure condition means that available disk space and inodes on either the node's root filesystem or image filesystem has satisfied an eviction threshold; check the complete <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#node-conditions" rel="nofollow noreferrer">Node Conditions</a> for more details.</p> <p><strong>Ways to set Kubelet options :</strong></p> <p>1) Command line options like --eviction-hard.</p> <p>2) Config file.</p> <p>3) More recent is dynamic configuration.</p> <p>When you experience node disk pressure, your immediate suspicion should be garbage collection that isn't keeping up, or log files filling the disk. In either case the better answer is to clean up unused files (free up some disk space).</p> <p>So monitor your clusters and get notified of any node disks approaching pressure, and get the issue resolved before it starts evicting pods inside the cluster.</p> <p><strong>Edit :</strong> AFAIK there is no magic trick to know the exact threshold of diskpressure. You need to start with reasonable values (limits &amp; requests) and refine using trial and error.</p> <p>Refer to this <a href="https://serverfault.com/questions/1031424/gke-kill-pod-when-monitoring-tool-still-show-that-we-have-memory">SO</a> for more information on how to set the threshold of diskpressure.</p>
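<p>To make the &quot;exact threshold&quot; part more concrete: in practice the kubelet signals DiskPressure when one of its configured eviction thresholds is crossed. On Linux the documented defaults for hard eviction are nodefs.available&lt;10%, imagefs.available&lt;15%, memory.available&lt;100Mi and nodefs.inodesFree&lt;5%, and they can be overridden per node. The snippet below is a sketch; the node name and threshold values are placeholders you would adapt:</p> <pre><code># check whether a node currently reports DiskPressure (replace the node name)
kubectl describe node &lt;node-name&gt; | grep -i -A2 DiskPressure

# example kubelet flag overriding the hard eviction thresholds (illustrative values)
--eviction-hard=nodefs.available&lt;10%,imagefs.available&lt;15%,nodefs.inodesFree&lt;5%,memory.available&lt;100Mi
</code></pre>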
<p>I am trying to install PGO (Postgres Operator) in k8s. I am following <a href="https://access.crunchydata.com/documentation/postgres-operator/5.3.0/quickstart/" rel="nofollow noreferrer">this</a> documentation. At the 2nd step when I run following command</p> <pre><code>kubectl apply --server-side -k kustomize/install/default </code></pre> <p>I see error</p> <blockquote> <p>master-k8s@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl apply --server-side -k kustomize/install/default<br /> error: containers path is not of type []interface{} but map[string]interface {}</p> </blockquote> <p><strong>System Specifications:</strong></p> <ul> <li>I have k8s 2 node cluster with one master node.</li> <li>All running Ubuntu 20.4</li> </ul> <p><strong>What I have try:</strong></p> <ul> <li><p>I download repository again without clone and directory uploaded on master node</p> </li> <li><p>I try to provide full path and this time I received the same error</p> </li> <li><p>I checked the default directory there 2 files present</p> </li> <li><p>I try to run this command inside the directory.</p> </li> </ul> <p><strong>What Do I need?</strong></p> <p>I am looking for solution why I am not able to follow the 2nd step of document. Please help me to find what did I missed or doing it wrong.</p> <p>I really thankful.</p> <p><strong>Update Question:</strong></p> <p>I updated the version of k8s and kustomize and still see the same issue.</p> <pre><code>master-k8s@masterk8s-virtual-machine:~/test/postgres-operator-examples-main$ kubectl apply --server-side -k kustomize/install/default error: containers path is not of type []interface{} but map[string]interface {} </code></pre> <p><strong>Kustomize version:</strong></p> <pre><code>{Version:kustomize/v4.5.7 GitCommit:56d82a8xxxxxxxxxxxxx BuildDate:2022-08-02T16:35:54Z GoOs:linux GoArch:amd64} </code></pre>
<p><strong>To fix your issue, make sure the kustomize integration in kubectl is recent enough, as described below:</strong></p> <p>As <strong>@Ralle</strong> commented, check your versions. <strong>kubectl apply -k uses the kustomize version bundled with kubectl, and a modern kustomize (v4+) is only included in kubectl 1.21+</strong>; for more information please look at <a href="https://github.com/kubernetes/kubernetes/issues/107751" rel="nofollow noreferrer">Kustomize doesn't work with CRDs when specifying images</a>.</p> <p><strong>Update :</strong> The kustomize build flow at v2.0.3 was added to kubectl v1.14. The kustomize flow in kubectl remained frozen at v2.0.3 until kubectl v1.21, which <a href="https://github.com/kubernetes/kubernetes/blob/4d75a6238a6e330337526e0513e67d02b1940b63/CHANGELOG/CHANGELOG-1.21.md#kustomize-updates-in-kubectl" rel="nofollow noreferrer">updated it to v4.0.5</a>. It will be updated on a regular basis going forward; check your versions &amp; updates in the Kubernetes release notes. <a href="https://i.stack.imgur.com/kgvkQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kgvkQ.png" alt="enter image description here" /></a></p> <p>For examples and guides for using the kubectl integration, please see the <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">kubernetes documentation</a>.</p> <p>Also check <a href="https://kubernetes.io/blog/2021/08/06/server-side-apply-ga/" rel="nofollow noreferrer">Kubernetes 1.22: Server Side Apply moves to GA</a> for more information.</p>
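<p>A quick way to confirm which kustomize build is actually used by <code>kubectl apply -k</code> (as opposed to a standalone kustomize binary) is to look at the client version output; recent kubectl releases print the bundled kustomize version alongside the client version:</p> <pre><code># shows the kubectl client version and, on recent releases, the bundled kustomize version
kubectl version --client

# version of the standalone kustomize binary (not necessarily the one kubectl uses)
kustomize version
</code></pre>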
<p>I'm using <code>HA-Proxy</code> (Independent server) and <code>Kubernetes v1.25.4</code> cluster (One master and three workers) bare metal based.</p> <p>I have deployed <code>Jenkins</code> and<code>Nginx ingress controller</code> with help of this <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">Link</a>,</p> <p>But when i tried to access our jenkins URL <code>http://jenkins.company.com/jenkins</code> getting <code>404 Not Found</code> error.</p> <p><strong>My Jenkins application name space status:-</strong></p> <pre><code>$ kubectl get all -n jenkins NAME READY STATUS RESTARTS AGE pod/jenkins-75cbc954b6-2wfpt 1/1 Running 2 (13d ago) 70d NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/jenkins-svc ClusterIP 10.96.180.240 &lt;none&gt; 80/TCP 70d NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/jenkins 1/1 1 1 70d NAME DESIRED CURRENT READY AGE replicaset.apps/jenkins-75cbc954b6 1 1 1 70d </code></pre> <p><strong>Ingress controller status:-</strong></p> <pre><code>$ kubectl get all -n nginx-ingress NAME READY STATUS RESTARTS AGE pod/nginx-ingress-5xnz4 1/1 Running 2 (13d ago) 70d pod/nginx-ingress-h2g9p 1/1 Running 3 (13d ago) 70d pod/nginx-ingress-jgtc9 1/1 Running 2 (13d ago) 70d NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/nginx-ingress 3 3 3 3 3 &lt;none&gt; 70d </code></pre> <p><strong>Ingress pod log:-</strong></p> <pre><code>$ kubectl logs nginx-ingress-h2g9p -n nginx-ingress 2023/01/09 03:21:33 [notice] 25#25: signal 29 (SIGIO) received 2023/01/09 03:21:33 [notice] 25#25: signal 17 (SIGCHLD) received from 145 2023/01/09 03:21:33 [notice] 25#25: worker process 145 exited with code 0 2023/01/09 03:21:33 [notice] 25#25: worker process 164 exited with code 0 2023/01/09 03:21:33 [notice] 25#25: worker process 178 exited with code 0 2023/01/09 03:21:33 [notice] 25#25: signal 29 (SIGIO) received </code></pre> <p><strong>Ingress status:-</strong></p> <pre><code>$ kubectl get ingress jenkins-ingress -n jenkins NAME CLASS HOSTS ADDRESS PORTS AGE jenkins-ingress nginx jenkins.company.com 80 63s </code></pre> <pre><code>$ kubectl describe ingress jenkins-ingress -n jenkins Name: jenkins-ingress Labels: &lt;none&gt; Namespace: jenkins Address: Ingress Class: nginx Default backend: &lt;default&gt; Rules: Host Path Backends ---- ---- -------- jenkins.company.com /jenkins jenkins-svc:80 (10.244.2.19:8080) Annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal AddedOrUpdated 9m4s nginx-ingress-controller Configuration for jenkins/jenkins-ingress was added or updated Normal AddedOrUpdated 9m4s nginx-ingress-controller Configuration for jenkins/jenkins-ingress was added or updated Normal AddedOrUpdated 9m4s nginx-ingress-controller Configuration for jenkins/jenkins-ingress was added or updated </code></pre> <p>Even on Kubernetes cluster master terminal also <code>curl</code> fails with below errors.</p> <pre><code>$ curl -D- http://jenkins.company.com/jenkins -v * Trying 160.20.21.6... 
* TCP_NODELAY set * Connected to jenkins.company.com (160.20.21.6) port 80 (#0) &gt; GET /jenkins HTTP/1.1 &gt; Host: jenkins.company.com &gt; User-Agent: curl/7.58.0 &gt; Accept: */* &gt; &lt; HTTP/1.1 404 Not Found HTTP/1.1 404 Not Found &lt; Server: nginx/1.23.2 Server: nginx/1.23.2 &lt; Date: Mon, 09 Jan 2023 02:58:06 GMT Date: Mon, 09 Jan 2023 02:58:06 GMT &lt; Content-Type: text/html Content-Type: text/html &lt; Content-Length: 153 Content-Length: 153 &lt; Connection: keep-alive Connection: keep-alive &lt; &lt;html&gt; &lt;head&gt;&lt;title&gt;404 Not Found&lt;/title&gt;&lt;/head&gt; &lt;body&gt; &lt;center&gt;&lt;h1&gt;404 Not Found&lt;/h1&gt;&lt;/center&gt; &lt;hr&gt;&lt;center&gt;nginx/1.23.2&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; * Connection #0 to host jenkins.company.com left intact </code></pre> <p>Please let me know whats wrong with my configurations side?</p> <p>Any help will be helpful. Thanks in advance.</p>
<p>To me it looks like your request is not reaching the Nginx container itself, so try the options mentioned below.</p> <p>Since nginx-ingress runs as a DaemonSet, the Nginx container ports 80 and 443 are mapped to ports 80 and 443 of the host, so you can use http://{node-IP} to reach the Nginx container. You can use any node IP out of the 3 nodes. Once the request reaches the Nginx container, it will be routed to the backend container after checking the Ingress resource rules that you created, so you can use http://{node-ip}/jenkins to reach the Jenkins container.</p> <p>To test with a curl client, you may use hostname spoofing: <code>curl -H &quot;Host: jenkins.company.com&quot; http://{host-ip}/jenkins</code></p>
<p>i am struggling to create a custom subject when receiving alerts from my AlertManager, i am doing it with manifest file:</p> <pre><code>apiVersion: monitoring.coreos.com/v1alpha1 kind: AlertmanagerConfig metadata: name: my-name labels: alertmanagerConfig: email alertconfig: email-config spec: route: groupBy: - node groupWait: 30s groupInterval: 5m repeatInterval: 12h receiver: 'myReceiver' receivers: - name: 'Name' emailConfigs: - to: [email protected] </code></pre> <p>i have read that i need to add headers under the emailConfigs tab, but when i do like follows:</p> <pre><code>apiVersion: monitoring.coreos.com/v1alpha1 kind: AlertmanagerConfig metadata: name: my-name labels: alertmanagerConfig: email alertconfig: email-config spec: route: groupBy: - node groupWait: 30s groupInterval: 5m repeatInterval: 12h receiver: 'myReceiver' receivers: - name: 'Name' emailConfigs: - to: [email protected] headers: - subject: &quot;MyTestSubject&quot; </code></pre> <p>or</p> <pre><code>apiVersion: monitoring.coreos.com/v1alpha1 kind: AlertmanagerConfig metadata: name: my-name labels: alertmanagerConfig: email alertconfig: email-config spec: route: groupBy: - node groupWait: 30s groupInterval: 5m repeatInterval: 12h receiver: 'myReceiver' receivers: - name: 'Name' emailConfigs: - to: [email protected] headers: subject: &quot;MyTestSubject&quot; </code></pre> <p>I receive following errors:</p> <p>either:</p> <p>com.coreos.monitoring.v1alpha1.AlertmanagerConfig.spec.receivers.emailConfigs.headers, ValidationError(AlertmanagerConfig.spec.receivers[0].emailConfigs[0].headers[0]): missing required field &quot;key&quot; in com.coreos.monitoring.v1alpha1.AlertmanagerConfig.spec.receivers.emailConfigs.headers, ValidationError(AlertmanagerConfig.spec.receivers[0].emailConfigs[0].headers[0]): missing required field &quot;value&quot; in com.coreos.monitoring.v1alpha1.AlertmanagerConfig.spec.receivers.emailConfigs.headers];</p> <p>or</p> <p>error: error validating &quot;alert-config.yaml&quot;: error validating data: ValidationError(AlertmanagerConfig.spec.receivers[0].emailConfigs[0].headers): invalid type for com.coreos.monitoring.v1alpha1.AlertmanagerConfig.spec.receivers.emailConfigs.headers: got &quot;map&quot;, expected &quot;array&quot;</p> <p>i have checked other solutions and everyone is doing it like headers: subject: mySubject<br /> but for some reason to me, it doesn't work</p>
<p>Hi @Blackcat, from this <a href="https://www.suse.com/c/wp-content/uploads/2021/09/rancher_blog_07-rancher-alertmanager-status-tab.png" rel="nofollow noreferrer">example</a> I found that you need a capital 'S' for the subject header; it looks like the header names under <code>headers</code> must start with a capital letter. Note also that, judging by the validation errors you got (missing required fields &quot;key&quot;/&quot;value&quot;, and &quot;expected array&quot; for the map form), the AlertmanagerConfig CRD expects <code>headers</code> to be a list of key/value pairs rather than a plain map, so the two requirements combined look like this:</p> <pre><code>apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: my-name
  labels:
    alertmanagerConfig: email
    alertconfig: email-config
spec:
  route:
    groupBy:
      - node
    groupWait: 30s
    groupInterval: 5m
    repeatInterval: 12h
    receiver: 'myReceiver'
  receivers:
    - name: 'Name'
      emailConfigs:
        - to: [email protected]
          headers:
            - key: Subject
              value: &quot;MyTestSubject&quot;
            - key: From
              value: Me
            - key: To
              value: You
</code></pre> <p>For more detailed information check this <a href="https://www.suse.com/c/rancher_blog/custom-alerts-using-prometheus-queries/" rel="nofollow noreferrer">link</a></p>
<p>For example, I have a Role that gives a user permission to get the list of pods from a specific namespace. And I have a ClusterRole that gives the user permission to get the list of pods from all namespaces.</p> <p>Can the user get all pods from all namespaces? Or does the Role override the ClusterRole because the Role is more specific?</p> <p>I know this doesn't make sense. But what if I do it by mistake?</p>
<p>In Kubernetes, RBAC is purely additive: there are no &quot;deny&quot; rules, so the effective permissions of a <strong>user</strong> or <strong>service account</strong> are the union of everything granted by all Roles and ClusterRoles bound to it. A <strong>Role</strong> is bound to a certain <strong>namespace</strong>, whereas a ClusterRole is a <strong>non-namespaced</strong> resource. Whenever a user requests a resource, Kubernetes checks all the Roles and ClusterRoles assigned to that user or service account.</p> <p>So a narrower Role cannot &quot;override&quot; or restrict a broader ClusterRole: in your example the ClusterRole's cluster-wide grant still applies, and the user can list pods in all namespaces.</p> <p>For more detailed information refer to the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="nofollow noreferrer">Official RBAC Document</a></p>
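<p>A quick way to check what a given subject can effectively do, regardless of which Role or ClusterRole the permission comes from, is <code>kubectl auth can-i</code>. The namespace and service-account names below are placeholders:</p> <pre><code># can the service account list pods in a single namespace?
kubectl auth can-i list pods -n my-namespace --as=system:serviceaccount:my-namespace:my-sa

# can it list pods across all namespaces?
kubectl auth can-i list pods --all-namespaces --as=system:serviceaccount:my-namespace:my-sa
</code></pre>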
<p>I have an <code>endpoints</code> that I can see via (for example) :</p> <pre><code>kubectl get endpoints busybox-service </code></pre> <p>This endpoints is &quot;backed&quot; by a <code>service</code>:</p> <pre><code>kubectl get services busybox-service </code></pre> <p>Is there a way for me via the <code>endpoints</code> object, to find the &quot;backing&quot; service, without finding it by name?</p> <p>What I mean by that, is having just the information from (for example):</p> <pre><code>kubectl get endpoints busybox-service -o=json </code></pre> <p>or anything like that, to be able to tell what the <code>metadata.uid</code> of the <em>service</em> is?</p> <p>So, if I do : <code>kubectl get endpoints busybox-service -o=json</code>, I would get:</p> <pre><code>&quot;metadata&quot;: { &quot;annotations&quot;: { &quot;endpoints.kubernetes.io/last-change-trigger-time&quot;: &quot;2023-03-19T19:40:08Z&quot; }, &quot;creationTimestamp&quot;: &quot;2023-03-19T19:40:08Z&quot;, &quot;labels&quot;: { &quot;a&quot;: &quot;b&quot; }, &quot;name&quot;: &quot;busybox-service&quot;, &quot;namespace&quot;: &quot;default&quot;, &quot;resourceVersion&quot;: &quot;277476&quot;, &quot;uid&quot;: &quot;8d76841b-ad74-4697-8d47-e7449b2cea24&quot; } </code></pre> <p>that is, I get the <code>uid</code> of the <code>endpoints</code>: <code>&quot;uid&quot;: &quot;8d76841b-ad74-4697-8d47-e7449b2cea24&quot;</code>. Is there an extended call of some kind to the endpoints that would give me the <code>uid</code> of the service that this endpoints is based on.</p> <p>Something like this does not exist, but may be there is a certain call that I miss:</p> <pre><code>&quot;metadata&quot;: { &quot;annotations&quot;: { &quot;endpoints.kubernetes.io/last-change-trigger-time&quot;: &quot;2023-03-19T19:40:08Z&quot; }, &quot;creationTimestamp&quot;: &quot;2023-03-19T19:40:08Z&quot;, &quot;labels&quot;: { &quot;a&quot;: &quot;b&quot; }, &quot;name&quot;: &quot;busybox-service&quot;, &quot;namespace&quot;: &quot;default&quot;, &quot;resourceVersion&quot;: &quot;277476&quot;, &quot;uid&quot;: &quot;8d76841b-ad74-4697-8d47-e7449b2cea24&quot; &quot;ownerReference&quot;: &quot;&lt;UID_OF_THE_SERVICE&gt;&quot; } </code></pre> <p>Notice the last field: <code>&quot;ownerReference&quot;: &quot;&lt;UID_OF_THE_SERVICE&gt;&quot;</code> - in reality it does not exist, but may be I miss some kind of a call that would give me this information.</p> <p>Thank you.</p>
<p>The <strong>EndpointSlice</strong> object may suit this kind of requirement, as it has the <strong><code>ownerReferences</code></strong> metadata and also labels like <code>kubernetes.io/service-name</code> and <code>endpointslice.kubernetes.io/managed-by</code> which help to trace the source object.</p> <ol> <li><code>kubernetes.io/service-name</code> indicates the name of the Service object from which the EndpointSlice was created.</li> <li><code>endpointslice.kubernetes.io/managed-by</code> indicates the controller or operator which is managing the <code>EndpointSlice</code> object.</li> </ol> <p>You can use the following kubectl commands to get the details of the <strong>ownerReferences</strong> and <strong>labels</strong>, as shown here.</p> <p><strong>ownerReferences:</strong></p> <pre><code>$ kubectl get endpointslice &lt;endpointslice-name&gt; -o=jsonpath='{.metadata.ownerReferences}' </code></pre> <p><strong>Labels:</strong></p> <pre><code>$ kubectl describe endpointslice &lt;endpointslice-name&gt; | grep kubernetes.io/service-name
$ kubectl describe endpointslice &lt;endpointslice-name&gt; | grep endpointslice.kubernetes.io/managed-by </code></pre> <p>For more detailed information refer to these documents: <a href="https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/#management" rel="nofollow noreferrer">EndpointSlice</a>, <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/" rel="nofollow noreferrer">Ownerships</a></p>
<p>This is my Dockerfile:</p> <pre><code>FROM gcr.io/distroless/static:nonroot WORKDIR / COPY ls . COPY tail . COPY test . COPY manager . ENTRYPOINT [&quot;/manager&quot;] </code></pre> <p>after</p> <pre><code>[root@master go-docker-test]# docker build -t strangething:v1.13 . [root@master go-docker-test]# docker run -d strangething:v1.13 [root@master go-docker-test]# docker logs b2 </code></pre> <p>it shows:</p> <pre><code>exec /manager: no such file or directory </code></pre> <p>I'm pretty sure it is there. I use dive to see it:</p> <pre><code>[Layers]───────────────────────────────────────────────────────────────────── [● Current Layer Contents]────────────────────────────────────────────────── Cmp Image ID Size Command Permission UID:GID Size Filetree sha256:cb60fb9b862c6a89f9 2.3 MB FROM sha256:cb60fb9b862c6a89f9 drwxr-xr-x 0:0 2.3 MB ├── . sha256:3e884d7c2d4ba9bac6 118 kB COPY ls . # buildkit drwxr-xr-x 0:0 0 B │ ├── bin sha256:e75e9da8f1605f7944 67 kB COPY tail . # buildkit drwxr-xr-x 0:0 0 B │ ├── boot sha256:7a0f1970f36a364672 1.8 MB COPY test . # buildkit drwxr-xr-x 0:0 0 B │ ├── dev sha256:c9ab59cb1ce11477ca 47 MB COPY manager . # buildkit drwxr-xr-x 0:0 220 kB │ ├─⊕ etc drwxr-xr-x 65532:65532 0 B │ ├─⊕ home [Layer Details]────────────────────────────────────────────────────────────── drwxr-xr-x 0:0 0 B │ ├── lib drwxr-xr-x 0:0 0 B │ ├── proc Digest: sha256:c9ab59cb1ce11477cac4d634bb81cf7316c344b50f01a62a8e5ddcf355d5fe drwx------ 0:0 0 B │ ├── root cf drwxr-xr-x 0:0 0 B │ ├── run Tar ID: 998c57d00785ccffaf3b308a529c7f816633897097d1ef6519269a8e3c5af59b drwxr-xr-x 0:0 0 B │ ├── sbin Command: drwxr-xr-x 0:0 0 B │ ├── sys COPY manager . # buildkit drwxrwxrwx 0:0 0 B │ ├── tmp drwxr-xr-x 0:0 2.1 MB │ ├─⊕ usr [Image Details]────────────────────────────────────────────────────────────── drwxr-xr-x 0:0 1.8 kB │ └─⊕ var -rwxr-xr-x 0:0 118 kB ├── ls Total Image size: 51 MB -rwxr-xr-x 0:0 47 MB ├── manager Potential wasted space: 0 B -rwxr-xr-x 0:0 67 kB ├── tail Image efficiency score: 100 % -rwxr-xr-x 0:0 1.8 MB └── test Count Total Space Path </code></pre> <p>This file tree confuses me more or less.</p> <p>Anyway, strange thing is, <code>ls tail manager</code> can't exec, but <code>test</code>can exec.</p> <p>after I change <code>ENTRYPOINT [&quot;/manager&quot;]</code> to <code>ENTRYPOINT [&quot;/test&quot;]</code>:</p> <pre><code>[root@master go-docker-test]# docker logs c11 empty </code></pre> <p>test is a program built by test.go, I write it myself:</p> <pre><code>package main import ( &quot;fmt&quot; ) type Per struct { Name string Age int } type Person struct { Name string Age int Lov Per } func main() { var one Person one.Name=&quot;abc&quot; one.Age=11 var two Per one.Lov=two if one.Lov != (Per{}){ fmt.Println(&quot;not empty!&quot;) }else { fmt.Println(&quot;empty&quot;) } } </code></pre>
<p>After reading silh's comment and the kubebuilder <code>Dockerfile</code> again, I solved this problem by adding <code>CGO_ENABLED=0</code> to <code>go build</code>. This switches the binary from dynamic linking to static linking, so it no longer needs a dynamic loader that the distroless image doesn't ship.</p> <pre><code>CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o manager main.go </code></pre> <p>At least the <code>no such file or directory</code> error is now gone.</p>
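<p>If you want to verify this on your side, you can check whether a binary is statically linked before copying it into the distroless image (the file names here match the ones in the question):</p> <pre><code># a statically linked binary reports &quot;not a dynamic executable&quot;
ldd manager

# alternatively, file prints &quot;statically linked&quot; vs &quot;dynamically linked&quot;
file manager
</code></pre> <p>The earlier binaries (<code>ls</code>, <code>tail</code>, the old <code>manager</code>) were most likely dynamically linked, which is why they failed on <code>gcr.io/distroless/static</code> while the statically linked <code>test</code> binary ran fine.</p>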
<p>I am getting an error while configuring Prometheus in Azure Kubernetes:</p> <p><a href="https://i.stack.imgur.com/8hbaD.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8hbaD.jpg" alt="enter image description here" /></a></p>
<p><em>I tried to reproduce the same issue in my environment and got the results below.</em></p> <p><em>I have a cluster, and when I configure Prometheus in Azure Kubernetes the deployment succeeds.</em></p> <p><img src="https://i.stack.imgur.com/U7EOF.png" alt="enter image description here" /></p> <p><em>To verify whether the agent is deployed, use the commands below:</em></p> <pre><code>kubectl get ds &lt;dep_name&gt; --namespace=kube-system
kubectl get rrs --namespace=kube-system </code></pre> <p><img src="https://i.stack.imgur.com/PUfvM.png" alt="enter image description here" /></p> <p><em>This error occurs because you are using a service principal instead of a managed identity.</em></p> <p><em>To enable managed identity, follow the commands below.</em></p> <p><em>For an AKS cluster with a service principal, first disable monitoring and then upgrade to managed identity; the Azure public cloud supports this migration.</em></p> <p><em>To get the Log Analytics workspace id:</em></p> <pre><code>az aks show -g &lt;rg_name&gt; -n &lt;cluster_name&gt; | grep -i &quot;logAnalyticsWorkspaceResourceID&quot; </code></pre> <p>To disable monitoring, use the below command:</p> <pre><code>az aks disable-addons -a monitoring -g &lt;rg_name&gt; -n &lt;cluster_name&gt; </code></pre> <p><em>Alternatively, you can find it in the portal under Azure Monitor logs.</em></p> <p><img src="https://i.stack.imgur.com/wmByq.png" alt="enter image description here" /></p> <p><em>I have upgraded the cluster to a system-assigned managed identity; use the below command to upgrade:</em></p> <pre><code>az aks update -g &lt;rg_name&gt; -n &lt;cluster_name&gt; --enable-managed-identity </code></pre> <p><img src="https://i.stack.imgur.com/96Zo8.png" alt="enter image description here" /></p> <p><em>I have enabled the monitoring addon with managed identity authentication:</em></p> <pre><code>az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g &lt;rg_name&gt; -n &lt;cluster_name&gt; --workspace-resource-id &lt;workspace_resource_id&gt; </code></pre> <p><em>For more information, use this document as a <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-enable-aks?tabs=azure-cli#migrate-to-managed-identity-authentication" rel="nofollow noreferrer">Reference</a>.</em></p>
<p>We have upgraded our AKS to 1.24.3, and since then we have been having an issue with containers refusing connections.</p> <p>There have been no changes to the deployed microservices as part of the AKS upgrade, and the issue occurs at random intervals.</p> <p>From what I can see, the container is returning the error &quot;The client closed the connection&quot;.</p> <p>What I cannot seem to trace is the connections within AKS, and the issue occurs across all services.</p> <p>Has anyone experienced anything similar and is able to provide any advice?</p>
<p>I hit a similar issue upgrading from 1.23.5 to 1.24.3; the issue was a configuration mismatch between the Kubernetes load balancer health probe path and the ingress-nginx probe endpoints.</p> <p>Adding this annotation to my ingress-nginx helm install command corrected my problem: <code>--set controller.service.annotations.&quot;service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path&quot;=/healthz</code></p>
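<p>For an already-installed release, the same fix can be applied without reinstalling by annotating the controller Service directly; the namespace and Service name below are the common defaults and may differ in your cluster:</p> <pre><code># assumes the ingress-nginx controller Service is named ingress-nginx-controller in namespace ingress-nginx
kubectl -n ingress-nginx annotate service ingress-nginx-controller \
  &quot;service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path=/healthz&quot; --overwrite
</code></pre> <p>After the annotation propagates, the Azure load balancer probes /healthz instead of /, which matches what the ingress-nginx controller actually serves for health checks.</p>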
<p>I have a Kubernetes Cluster in an on-premise server, I also have a server on <a href="https://www.ncloud.com/" rel="nofollow noreferrer">Naver Cloud</a> lets call it <code>server A</code>, I want to join my <code>server A</code> to my Kubernetes Cluster, the server can join normally, but the <code>kube-proxy</code> and <code>kube-flannel</code> pods spawned from daemonset are constantly in <code>CrashLoopBackOff</code> status</p> <p>here is the log from <code>kube-proxy</code></p> <pre><code>I0405 03:13:48.566285 1 node.go:163] Successfully retrieved node IP: 10.1.0.2 I0405 03:13:48.566382 1 server_others.go:109] &quot;Detected node IP&quot; address=&quot;10.1.0.2&quot; I0405 03:13:48.566420 1 server_others.go:535] &quot;Using iptables proxy&quot; I0405 03:13:48.616989 1 server_others.go:176] &quot;Using iptables Proxier&quot; I0405 03:13:48.617021 1 server_others.go:183] &quot;kube-proxy running in dual-stack mode&quot; ipFamily=IPv4 I0405 03:13:48.617040 1 server_others.go:184] &quot;Creating dualStackProxier for iptables&quot; I0405 03:13:48.617063 1 server_others.go:465] &quot;Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6&quot; I0405 03:13:48.617093 1 proxier.go:242] &quot;Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses&quot; I0405 03:13:48.617420 1 server.go:655] &quot;Version info&quot; version=&quot;v1.26.0&quot; I0405 03:13:48.617435 1 server.go:657] &quot;Golang settings&quot; GOGC=&quot;&quot; GOMAXPROCS=&quot;&quot; GOTRACEBACK=&quot;&quot; I0405 03:13:48.618790 1 conntrack.go:52] &quot;Setting nf_conntrack_max&quot; nf_conntrack_max=131072 </code></pre> <p>there is no log from <code>kube-flannel</code>, <code>kube-flannel</code> pods failed on its Init containers named <code>install-cni-plugin</code>, when I try <code>kubectl -n kube-flannel logs kube-flannel-ds-d2l4q -c install-cni-plugin</code> it returns</p> <pre><code>unable to retrieve container logs for docker://47e4c8c580474b384b128c8e4d74297a0e891b5f227c6313146908b06ee7b376 </code></pre> <p>I have no other clue that I can think of, please tell me if I need to attach more info</p> <p>Please help, I've been stuck for so long T.T</p> <p>More info:</p> <p><code>kubectl get nodes</code></p> <pre><code>NAME STATUS ROLES AGE VERSION accio-randi-ed05937533 Ready &lt;none&gt; 8d v1.26.3 accio-test-1-b3fb4331ee NotReady &lt;none&gt; 89m v1.26.3 master Ready control-plane 48d v1.26.1 </code></pre> <p><code>kubectl -n kube-system get pods</code></p> <pre><code>NAME READY STATUS RESTARTS AGE coredns-787d4945fb-rms6t 1/1 Running 0 30d coredns-787d4945fb-t6g8s 1/1 Running 0 33d etcd-master 1/1 Running 168 (36d ago) 48d kube-apiserver-master 1/1 Running 158 (36d ago) 48d kube-controller-manager-master 1/1 Running 27 (6d17h ago) 48d kube-proxy-2r8tn 1/1 Running 6 (36d ago) 48d kube-proxy-f997t 0/1 CrashLoopBackOff 39 (90s ago) 87m kube-proxy-wc9x5 1/1 Running 0 8d kube-scheduler-master 1/1 Running 27 (6d17h ago) 48d </code></pre> <p><code>kubectl -n kube-system get events</code></p> <pre><code>LAST SEEN TYPE REASON OBJECT MESSAGE 42s Warning DNSConfigForming pod/coredns-787d4945fb-rms6t Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.8.0.1 192.168.18.1 fe80::1%3 54s Warning DNSConfigForming pod/coredns-787d4945fb-t6g8s Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.8.0.1 192.168.18.1 fe80::1%3 3m10s Warning DNSConfigForming pod/etcd-master Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.8.0.1 192.168.18.1 fe80::1%3 2m48s Warning DNSConfigForming pod/kube-apiserver-master Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.8.0.1 192.168.18.1 fe80::1%3 3m33s Warning DNSConfigForming pod/kube-controller-manager-master Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.8.0.1 192.168.18.1 fe80::1%3 3m7s Warning DNSConfigForming pod/kube-proxy-2r8tn Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.8.0.1 192.168.18.1 fe80::1%3 15s Normal SandboxChanged pod/kube-proxy-f997t Pod sandbox changed, it will be killed and re-created. 5m15s Warning BackOff pod/kube-proxy-f997t Back-off restarting failed container kube-proxy in pod kube-proxy-f997t_kube-system(7652a1c4-9517-4a8a-a736-1f746f36c7ab) 3m30s Warning DNSConfigForming pod/kube-scheduler-master Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.8.0.1 192.168.18.1 fe80::1%3 </code></pre> <p><code>kubectl -n kube-flannel get pods</code></p> <pre><code>NAME READY STATUS RESTARTS AGE kube-flannel-ds-2xgbw 1/1 Running 0 8d kube-flannel-ds-htgts 0/1 Init:CrashLoopBackOff 0 (2s ago) 88m kube-flannel-ds-sznbq 1/1 Running 6 (36d ago) 48d </code></pre> <p><code>kubectl -n kube-flannel get events</code></p> <pre><code>LAST SEEN TYPE REASON OBJECT MESSAGE 100s Normal SandboxChanged pod/kube-flannel-ds-htgts Pod sandbox changed, it will be killed and re-created. 
26m Normal Pulled pod/kube-flannel-ds-htgts Container image &quot;docker.io/flannel/flannel-cni-plugin:v1.1.2&quot; already present on machine 46m Warning BackOff pod/kube-flannel-ds-htgts Back-off restarting failed container install-cni-plugin in pod kube-flannel-ds-htgts_kube-flannel(4f602997-5502-4dcf-8fca-23eba01325dd) 5m Warning DNSConfigForming pod/kube-flannel-ds-sznbq Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.8.0.1 192.168.18.1 fe80::1%3 </code></pre> <p><code>kubectl -n kube-flannel describe pod kube-flannel-ds-htgts</code></p> <pre><code>Name: kube-flannel-ds-htgts Namespace: kube-flannel Priority: 2000001000 Priority Class Name: system-node-critical Service Account: flannel Node: accio-test-1-b3fb4331ee/10.1.0.2 Start Time: Thu, 06 Apr 2023 09:25:12 +0900 Labels: app=flannel controller-revision-hash=6b7b59d784 k8s-app=flannel pod-template-generation=1 tier=node Annotations: &lt;none&gt; Status: Pending IP: 10.1.0.2 IPs: IP: 10.1.0.2 Controlled By: DaemonSet/kube-flannel-ds Init Containers: install-cni-plugin: Container ID: docker://0fed30cc41f305203bf5d6fb7668f92f449a65f722faf1360e61231e9107ef66 Image: docker.io/flannel/flannel-cni-plugin:v1.1.2 Image ID: docker-pullable://flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443 Port: &lt;none&gt; Host Port: &lt;none&gt; Command: cp Args: -f /flannel /opt/cni/bin/flannel State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 06 Apr 2023 15:11:34 +0900 Finished: Thu, 06 Apr 2023 15:11:34 +0900 Ready: True Restart Count: 0 Environment: &lt;none&gt; Mounts: /opt/cni/bin from cni-plugin (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gbk6z (ro) install-cni: Container ID: Image: docker.io/flannel/flannel:v0.21.0 Image ID: Port: &lt;none&gt; Host Port: &lt;none&gt; Command: cp Args: -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /etc/cni/net.d from cni (rw) /etc/kube-flannel/ from flannel-cfg (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gbk6z (ro) Containers: kube-flannel: Container ID: Image: docker.io/flannel/flannel:v0.21.0 Image ID: Port: &lt;none&gt; Host Port: &lt;none&gt; Command: /opt/bin/flanneld Args: --ip-masq --kube-subnet-mgr --iface=accio-k8s-net State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 100m memory: 50Mi Environment: POD_NAME: kube-flannel-ds-htgts (v1:metadata.name) POD_NAMESPACE: kube-flannel (v1:metadata.namespace) KUBERNETES_SERVICE_HOST: 10.1.0.1 KUBERNETES_SERVICE_PORT: 6443 EVENT_QUEUE_DEPTH: 5000 Mounts: /etc/kube-flannel/ from flannel-cfg (rw) /run/flannel from run (rw) /run/xtables.lock from xtables-lock (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gbk6z (ro) Conditions: Type Status Initialized False Ready False ContainersReady False PodScheduled True Volumes: run: Type: HostPath (bare host directory volume) Path: /run/flannel HostPathType: cni-plugin: Type: HostPath (bare host directory volume) Path: /opt/cni/bin HostPathType: cni: Type: HostPath (bare host directory volume) Path: /etc/cni/net.d HostPathType: flannel-cfg: Type: ConfigMap (a volume populated by a ConfigMap) Name: kube-flannel-cfg Optional: false xtables-lock: Type: HostPath (bare host directory volume) Path: /run/xtables.lock HostPathType: FileOrCreate kube-api-access-gbk6z: Type: Projected 
(a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: &lt;nil&gt; DownwardAPI: true QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: :NoSchedule op=Exists node.kubernetes.io/disk-pressure:NoSchedule op=Exists node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/network-unavailable:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists node.kubernetes.io/pid-pressure:NoSchedule op=Exists node.kubernetes.io/unreachable:NoExecute op=Exists node.kubernetes.io/unschedulable:NoSchedule op=Exists Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning BackOff 31m (x8482 over 5h46m) kubelet Back-off restarting failed container install-cni-plugin in pod kube-flannel-ds-htgts_kube-flannel(4f602997-5502-4dcf-8fca-23eba01325dd) Normal Created 21m (x8783 over 5h46m) kubelet Created container install-cni-plugin Normal Pulled 11m (x9051 over 5h46m) kubelet Container image &quot;docker.io/flannel/flannel-cni-plugin:v1.1.2&quot; already present on machine Normal SandboxChanged 81s (x18656 over 5h46m) kubelet Pod sandbox changed, it will be killed and re-created. </code></pre>
<p>I had a similar issue in one of my nodes due to the container runtime being incorrectly configured. Please check the <code>containerd</code> configuration located at <code>/etc/containerd/config.toml</code> for specifying daemon level options. The default configuration can be generated by running</p> <pre><code>containerd config default &gt; /etc/containerd/config.toml </code></pre> <p>To use the systemd cgroup driver in <code>/etc/containerd/config.toml</code> with <code>runc</code>, set</p> <pre><code>[plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd.runtimes.runc] ... [plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd.runtimes.runc.options] SystemdCgroup = true </code></pre> <p>If the cgroup driver is incorrect, that might lead to the pod in that node always being in a <code>CrashLoopBackOff</code>.</p>
<p>I want to create a ConfigMap from a file, with the key taken from the value of a variable, using the kubectl tool.</p> <p>However,</p> <pre><code>MY_VARIBLE=&quot;something&quot; kubectl create configmap myconfigmap --from-file=${MY_VARIBLE}=myfile.json </code></pre> <p>does not return anything and the ConfigMap is not created, while</p> <pre><code>kubectl create configmap myconfigmap --from-file=something=myfile.json </code></pre> <p>works correctly. How can I work around this? Thanks!!</p>
<p>Thank you all! My real problem was that MY_VARIABLE contained a &quot;:&quot; character, which is not allowed in a ConfigMap key. After replacing &quot;:&quot; with &quot;-&quot;, both of the forms above started working correctly.</p>
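<p>For reference, ConfigMap keys may only contain alphanumeric characters, <code>-</code>, <code>_</code> and <code>.</code>, so any other character has to be stripped or replaced before the variable is used as a key. A minimal sketch of doing that in a bash-like shell (the variable value here is made up):</p> <pre><code># hypothetical value containing a colon
MY_VARIABLE=&quot;prod:config&quot;

# replace every ':' with '-' so the result is a valid ConfigMap key
SAFE_KEY=&quot;${MY_VARIABLE//:/-}&quot;   # becomes &quot;prod-config&quot;

kubectl create configmap myconfigmap --from-file=&quot;${SAFE_KEY}&quot;=myfile.json
</code></pre>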
<p>I want to deploy two apps &quot;A&quot; and &quot;B&quot; on our Kubernetes cluster (a first for me).</p> <p>&quot;A&quot; handles traffic from outside the cluster and can request further data from &quot;B&quot; over HTTP. &quot;B&quot; needs much more resources than &quot;A&quot; as it's CPU and memory intensive.</p> <p>&quot;A&quot; and &quot;B&quot; are quite tied together since the body of the HTTP calls between them is versioned and quite complex, but &quot;B&quot; should be free to scale independently from &quot;A&quot;.</p> <p>Today, &quot;A&quot; calls &quot;B&quot; using a dedicated Service whose URL is hardcoded in &quot;A&quot;.</p> <p>During deployment earlier today, &quot;A&quot; and &quot;B&quot; were deployed simultaneously, but &quot;A&quot; managed to call an old version of &quot;B&quot; (since the Service was still routing requests to the old &quot;B&quot; for a few seconds/minutes).</p> <p><strong>What's the best practice to tie calls from newly deployed &quot;A&quot; to newly deployed &quot;B&quot; pods only?</strong></p> <p>I don't want to have conditional processing in &quot;A&quot; based on the version of the payload received from &quot;B&quot;.</p> <p>Thank you for your help,</p>
<p>I presume you're just working with native Kubernetes resources, so the simplest way to do this would be to deploy your new backend <code>B</code> separately, with different labels than the previous version. Once all pods are up, you simply change the service's label selector, which instantly switches all the traffic to the newly created backend pods.</p> <p>If you were to update the current backend <code>B</code>, this would, depending on the number of pods, cause a RollingUpdate by default, so there would be a timeframe where requests could reach both the old and the newly created backend <code>B</code>'s pods.</p> <p>However, there are specialized tools that solve such issues in a more &quot;clean&quot; way, such as Argo Rollouts. But if this is your only use case I would recommend the first method, as it causes a sudden switch of all traffic.</p> <p>Let's assume your backend <code>B</code> looks like the following:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: backend-b labels: app.kubernetes.io/name: backend-b spec: containers: - name: nginx image: nginx:stable ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: backend-b-service spec: selector: app.kubernetes.io/name: backend-b ports: - protocol: TCP port: 80 targetPort: 80 </code></pre> <p>Now you would create a second backend B; please note the different label:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: backend-b-new labels: app.kubernetes.io/name: backend-b-new spec: containers: - name: nginx image: nginx:stable ports: - containerPort: 80 </code></pre> <p>Currently there would be no traffic hitting this new backend. In order to switch all traffic instantly to the new backend, you would need to change the label selector of the service for backend B (note that the targetPort must match the port the container actually listens on, 80 here):</p> <pre><code>apiVersion: v1 kind: Service metadata: name: backend-b-service spec: selector: app.kubernetes.io/name: backend-b-new ports: - protocol: TCP port: 80 targetPort: 80 </code></pre> <p>As I said, this isn't the best solution, but it should work for your use case, provided that your application <code>A</code> communicates with application B through the DNS name of the given Service.</p>
<p>I am new to K8S so please be gentle. I have a test hello world Flask app and I would like to deploy it on EKS. I am using the AWS load Balancer Controller Addon using the link below. At the end when I check the deployment it shows up without any issues as the link describes. <a href="https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html</a></p> <p>When I apply the three files below they all apply correctly and I see the pods up, but on the ingress I dont see an external IP address and cant access my Flask app.</p> <p>My goal is to have AWS create a dummy DNS name and then I can point my public DNS name to it as an CNAM entry. Also the Ingress should be in port 80 and then that should forward to port 5000 for the Flask app internally.</p> <p>What am I missing? Could someone please point me in the right direction?</p> <p>ingress.yaml</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: &quot;flask-test-ingress&quot; annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: ip alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTP&quot;: 80}, {&quot;HTTPS&quot;: 443}]' labels: app: hello-world spec: rules: - host: testing.somesite.com http: paths: - path: / pathType: Prefix backend: service: name: &quot;hello-world-service&quot; port: number: 80 </code></pre> <p>deployment.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: hello-world spec: selector: matchLabels: app: hello-world replicas: 2 template: metadata: labels: app: hello-world spec: containers: - name: hello-world image: gitlab.privaterepo.com:5050/jmartinez/flask_helloworld:v4 ports: - containerPort: 5000 protocol: TCP imagePullSecrets: - name: regcred </code></pre> <p>service.yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: hello-world-service spec: selector: app: hello-world type: NodePort ports: - protocol: &quot;TCP&quot; port: 80 targetPort: 5000 nodePort: 30000 </code></pre>
<p>Finally got it working. When I realized that the ALB was not created automatically I researched and found the solution. I had to remove the ingress.class value from the annotations as well as remove the host. So now my ingress looks like the following. After deleting the old ingress and reapplying this one, I waited about 10 minutes and my hello world app is now running.</p> <p>ingress.yaml</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: &quot;flask-test-ingress&quot; annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: ip labels: app: hello-world spec: ingressClassName: alb rules: - http: paths: - path: / pathType: Prefix backend: service: name: &quot;hello-world-service&quot; port: number: 80 </code></pre>
<p>How can I find the version of the 'sriov' cni plugin installed on a Kubernetes cluster?</p> <p>I've tried the commands below but they fail.</p> <pre><code>/opt/cni/bin# ./sriov --version { &quot;code&quot;: 4, &quot;msg&quot;: &quot;required env variables [CNI_COMMAND] missing&quot; } </code></pre>
<p>The CNI Protocol parameters are passed to the plugins via OS environment variables.</p> <p>CNI_COMMAND: indicates the desired operation; ADD, DEL, CHECK, GC, or VERSION. Refer: <a href="https://github.com/containernetworking/cni/blob/main/SPEC.md#version-probe-plugin-version-support" rel="nofollow noreferrer">https://github.com/containernetworking/cni/blob/main/SPEC.md#version-probe-plugin-version-support</a></p> <p>So the following command worked.</p> <pre><code>/opt/cni/bin# export CNI_COMMAND=VERSION; ./sriov --version {&quot;cniVersion&quot;:&quot;1.0.0&quot;,&quot;supportedVersions&quot;:[&quot;0.1.0&quot;,&quot;0.2.0&quot;,&quot;0.3.0&quot;,&quot;0.3.1&quot;,&quot;0.4.0&quot;,&quot;1.0.0&quot;]} </code></pre>
<p>I'm running Airflow on Kubernetes using this Helm chart: <a href="https://github.com/apache/airflow/tree/1.5.0" rel="nofollow noreferrer">https://github.com/apache/airflow/tree/1.5.0</a></p> <p>I've written a very simple DAG just to test some things. It looks like this:</p> <pre><code>default_args={ 'depends_on_past': False, 'email': ['[email protected]'], 'email_on_failure': False, 'email_on_retry': False, 'retries': 1, 'retry_delay': timedelta(minutes=5) } with DAG( 'my-dag', default_args=default_args, description='simple dag', schedule_interval=timedelta(days=1), start_date=datetime(2022, 4, 21), catchup=False, tags=['example'] ) as dag: t1 = SparkKubernetesOperator( task_id='spark-pi', trigger_rule=&quot;all_success&quot;, depends_on_past=False, retries=3, application_file=&quot;spark-pi.yaml&quot;, namespace=&quot;my-ns&quot;, kubernetes_conn_id=&quot;myk8s&quot;, api_group=&quot;sparkoperator.k8s.io&quot;, api_version=&quot;v1beta2&quot;, do_xcom_push=True, dag=dag ) t2 = SparkKubernetesOperator( task_id='other-spark-job', trigger_rule=&quot;all_success&quot;, depends_on_past=False, retries=3, application_file=other-spark-job-definition, namespace=&quot;my-ns&quot;, kubernetes_conn_id=&quot;myk8s&quot;, api_group=&quot;sparkoperator.k8s.io&quot;, api_version=&quot;v1beta2&quot;, dag=dag ) t1 &gt;&gt; t2 </code></pre> <p>When I run the DAG from the Airflow UI, the first task Spark job (<code>t1</code>, spark-pi) gets created and is immediately marked as successful, and then Airflow launches the second (t2) task right after that. This can be seen in the web UI:</p> <p><a href="https://i.stack.imgur.com/CXBP3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CXBP3.png" alt="enter image description here" /></a></p> <p>What you're seeing is the status of the two tasks in 5 separate DAG runs, as well as their total status (the circles). 
The middle row of the image shows the status of <code>t1</code>, which is <code>&quot;success&quot;</code>.</p> <p><strong>However</strong>, the actual spark-pi pod of <code>t1</code> launched by the Spark operator fails on every run, and its status can be seen by querying the Sparkapplication resource on Kubernetes:</p> <pre><code>$ kubectl get sparkapplications/spark-pi-2022-04-28-2 -n my-ns -o json { &quot;apiVersion&quot;: &quot;sparkoperator.k8s.io/v1beta2&quot;, &quot;kind&quot;: &quot;SparkApplication&quot;, &quot;metadata&quot;: { &quot;creationTimestamp&quot;: &quot;2022-04-29T13:28:02Z&quot;, &quot;generation&quot;: 1, &quot;name&quot;: &quot;spark-pi-2022-04-28-2&quot;, &quot;namespace&quot;: &quot;my-ns&quot;, &quot;resourceVersion&quot;: &quot;111463226&quot;, &quot;uid&quot;: &quot;23f1c8fb-7843-4628-b22f-7808b562f9d8&quot; }, &quot;spec&quot;: { &quot;driver&quot;: { &quot;coreLimit&quot;: &quot;1500m&quot;, &quot;cores&quot;: 1, &quot;labels&quot;: { &quot;version&quot;: &quot;2.4.4&quot; }, &quot;memory&quot;: &quot;512m&quot;, &quot;volumeMounts&quot;: [ { &quot;mountPath&quot;: &quot;/tmp&quot;, &quot;name&quot;: &quot;test-volume&quot; } ] }, &quot;executor&quot;: { &quot;coreLimit&quot;: &quot;1500m&quot;, &quot;cores&quot;: 1, &quot;instances&quot;: 1, &quot;labels&quot;: { &quot;version&quot;: &quot;2.4.4&quot; }, &quot;memory&quot;: &quot;512m&quot;, &quot;volumeMounts&quot;: [ { &quot;mountPath&quot;: &quot;/tmp&quot;, &quot;name&quot;: &quot;test-volume&quot; } ] }, &quot;image&quot;: &quot;my.google.artifactory.com/spark-operator/spark:v2.4.4&quot;, &quot;imagePullPolicy&quot;: &quot;Always&quot;, &quot;mainApplicationFile&quot;: &quot;local:///opt/spark/examples/jars/spark-examples_2.11-2.4.4.jar&quot;, &quot;mainClass&quot;: &quot;org.apache.spark.examples.SparkPi&quot;, &quot;mode&quot;: &quot;cluster&quot;, &quot;restartPolicy&quot;: { &quot;type&quot;: &quot;Never&quot; }, &quot;sparkVersion&quot;: &quot;2.4.4&quot;, &quot;type&quot;: &quot;Scala&quot;, &quot;volumes&quot;: [ { &quot;hostPath&quot;: { &quot;path&quot;: &quot;/tmp&quot;, &quot;type&quot;: &quot;Directory&quot; }, &quot;name&quot;: &quot;test-volume&quot; } ] }, &quot;status&quot;: { &quot;applicationState&quot;: { &quot;errorMessage&quot;: &quot;driver container failed with ExitCode: 1, Reason: Error&quot;, &quot;state&quot;: &quot;FAILED&quot; }, &quot;driverInfo&quot;: { &quot;podName&quot;: &quot;spark-pi-2022-04-28-2-driver&quot;, &quot;webUIAddress&quot;: &quot;172.20.23.178:4040&quot;, &quot;webUIPort&quot;: 4040, &quot;webUIServiceName&quot;: &quot;spark-pi-2022-04-28-2-ui-svc&quot; }, &quot;executionAttempts&quot;: 1, &quot;lastSubmissionAttemptTime&quot;: &quot;2022-04-29T13:28:15Z&quot;, &quot;sparkApplicationId&quot;: &quot;spark-3335e141a51148d7af485457212eb389&quot;, &quot;submissionAttempts&quot;: 1, &quot;submissionID&quot;: &quot;021e78fc-4754-4ac8-a87d-52c682ddc483&quot;, &quot;terminationTime&quot;: &quot;2022-04-29T13:28:25Z&quot; } } </code></pre> <p>As you can see in the <code>status</code> section, we have <code>&quot;state&quot;: &quot;FAILED&quot;</code>. Still, Airflow marks it as successful and thus runs t2 right after it, which is not what we want when defining <code>t2</code> as dependent on (downstream of) <code>t1</code>.</p> <p>Why does Airflow see <code>t1</code> as successful even though the Spark job itself fails?</p>
<p>That's just how the operator is implemented. If you look at the code for the operator, it is basically a submit-and-forget job. To monitor the status you use a <code>SparkKubernetesSensor</code>:</p> <pre class="lang-py prettyprint-override"><code> t2 = SparkKubernetesSensor( task_id=&quot;spark_monitor&quot;, application_name=&quot;{{ task_instance.xcom_pull(task_ids='spark-job-full-refresh.spark_full_refresh') ['metadata']['name'] }}&quot;, attach_log=True, ) </code></pre> <p>I have tried to create a custom operator that combines both, but it does not work very well via inheritance because they have slightly different execution patterns, so it would need to be created from scratch. But for all intents and purposes, the sensor works perfectly; it just adds a few extra lines of code.</p>
<p>I have a problem where I need to trigger multiple PipelineRuns at the same time, and the runs will take longer than 1 hour. The global timeout for a PipelineRun is 1 hour by default, so they fail. The team managing the OpenShift cluster does not want to change the global timeout, so I have to override it in some way.</p> <p>The resources I am using are an EventListener, TriggerBinding, TriggerTemplate, Pipeline and PipelineRun.</p> <p>I have tried to set a timeout on the Pipeline tasks like:</p> <pre><code> runAfter: - fetch-repository taskRef: kind: ClusterTask name: buildah timeout: &quot;3h0m0s&quot; </code></pre> <p>but the PipelineRun timeout seems to take precedence, so it does not work.</p> <ul> <li>Openshift version: 4.9.35</li> <li>K8s version: v1.22.8</li> <li>Tekton seems to be installed with Openshift and I cannot find which version, but it is probably very new.</li> </ul> <p>Any ideas?</p>
<p>You can specify a timeout in the PipelineRun:</p> <pre><code>spec: ... timeout: 12h0m0s </code></pre> <p>If you set it to 0, it will fail immediately upon encountering an error. For now it works on OpenShift Pipelines, but it might get deprecated <a href="https://tekton.dev/docs/pipelines/pipelineruns/#configuring-a-failure-timeout" rel="nofollow noreferrer">as in Tekton</a> (not fully deprecated, just syntax change).</p>
<p>I don't understand how I can restrict access to custom resources in OpenShift using RBAC.</p> <p>Let's assume I have a custom API:</p> <pre><code>apiVersion: yyy.xxx.com/v1 kind: MyClass metadata: ... </code></pre> <p>Is it possible to prevent some users from deploying resources where <code>apiVersion=yyy.xxx.com/v1</code> and <code>kind=MyClass</code>?</p> <p>Also, can I grant access to other users to deploy resources where <code>apiVersion=yyy.xxx.com/v1</code> and <code>kind=MyOtherClass</code>?</p> <p>If this can be done using RBAC roles, how can I deploy RBAC roles in OpenShift? Only using the CLI, or can I create some YAML configuration files and deploy them, for example?</p>
<p>You can use cluster roles and role bindings for this. The relevant commands are:</p> <pre><code>oc adm policy remove-cluster-role-from-group &lt;cluster-role&gt; system:authenticated
oc adm policy add-cluster-role-to-group &lt;cluster-role&gt; &lt;your-group&gt;
</code></pre> <p>So the general idea is to remove the permission to deploy the resource from all authenticated users (the <code>system:authenticated</code> group). The next step is to add the permission to deploy that resource only to the users, groups, or ServiceAccounts in the specific namespaces that should have it.</p>
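<p>Since you asked whether this can be done with YAML: yes, RBAC objects are ordinary resources that you can <code>oc apply -f</code>. A rough sketch of a ClusterRole plus binding granting access to the custom resource is below; the plural resource name and the group name are assumptions and must match what your CRD actually defines:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: myclass-editor
rules:
- apiGroups: [yyy.xxx.com]
  resources: [myclasses]      # plural name defined by the CRD
  verbs: [create, get, list, watch, update, patch, delete]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: myclass-editor-binding
subjects:
- kind: Group
  name: myclass-editors       # hypothetical group of allowed users
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: myclass-editor
  apiGroup: rbac.authorization.k8s.io
</code></pre> <p>Users who are not in that group (and have no other role covering <code>yyy.xxx.com</code> resources) simply cannot create <code>MyClass</code> objects, and you can create an analogous role for <code>MyOtherClass</code> for the other set of users.</p>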
<p>I'm using Gradle as the build tool and I need to run helm commands like <code>helm repo add</code> inside a Gradle task. I have tried as below.</p> <pre><code> exec { workingDir = file(&quot;&lt;Helm.exe location&gt;&quot;) CommandLine &quot;helm.exe repo add&quot; } </code></pre> <p>But this fails with</p> <pre><code>Error: unknown command &quot;repo add&quot; for &quot;helm&quot; Run 'helm --help' for usage. </code></pre> <p>Any help is much appreciated.</p>
<p>Gradle exec does not do any processing of the arguments. If you supply <code>&quot;repo add&quot;</code> as a single argument, helm will receive one argument: <code>&quot;repo add&quot;</code>. This is the same as calling <code>helm &quot;repo add&quot;</code> in the terminal. What you want to do is supply two separate arguments: <code>&quot;repo&quot;, &quot;add&quot;</code>.</p> <p>For more details see my answer here: <a href="https://stackoverflow.com/questions/74190600/gradle-exec-command-line-difference-between-separated-and-connected-args/74195472#74195472">gradle exec command line: difference between separated and connected args</a></p>
<p><a href="https://i.stack.imgur.com/LxnEI.png" rel="nofollow noreferrer">Please click here for the screen shot</a>I'm trying to configure a replicaset for my podspec file in my minikube. My minikube version: v1.26.1. I couldn't figure, whether the issue is with my code or api resources.</p> <p>this is my error: resource mapping not found for name: &quot;my-replicaset&quot; namespace: &quot;&quot; from &quot;replicasets.yaml&quot;: no matches for kind &quot;Replicaset&quot; in version &quot;apps/v1&quot; ensure CRDs are installed first</p> <p>tried to change the apps/v1 to some thing else; nothing works!</p>
<p>It’s Replica<strong>S</strong>et … (capital S).</p>
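<p>For completeness, a minimal sketch of what such a manifest looks like once the kind is spelled correctly; the labels and image below are placeholders, not taken from the question:</p> <pre><code>apiVersion: apps/v1
kind: ReplicaSet        # note the capital S
metadata:
  name: my-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx:stable
</code></pre>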
<p>How do I set up several curl requests in YAML using curlimages/curl? I get the error &quot;Mark bundle as not supporting multiuse&quot;</p> <p>I tried these variants:</p> <pre><code>apiVersion: batch/v1 kind: CronJob metadata: name: croncurl spec: schedule: &quot;0 * * * *&quot; jobTemplate: spec: template: spec: containers: - name: croncurl image: curlimages/curl:7.72.0 imagePullPolicy: IfNotPresent command: - /bin/sh - -ec - &quot;curl -v http://{1}&quot; - /bin/sh - -ec - &quot;curl -v http://{2}&quot; - /bin/sh - -ec - &quot;curl -v http://{3}&quot; - /bin/sh - -ec - &quot;curl -v http://{4}&quot; restartPolicy: OnFailure </code></pre> <pre><code> command: - /bin/sh - -ec - &quot;curl -v http://{1}&quot; - &quot;curl -v http://{2}&quot; - &quot;curl -v http://{3}&quot; - &quot;curl -v http://{4}&quot; </code></pre>
<p>Use <code>-:</code></p> <pre><code>command: - /bin/sh - -ec - &quot;curl -v http://{1} -: http://{2} -: http://{3}&quot; </code></pre>
<p>I use OpenSSL &amp; generated self signed certificate &amp; key files.</p> <p>Now I try to transform them to base64 encoded value in order to put into kubernetes &quot;Secret&quot; manifest. I tried <a href="https://stackoverflow.com/questions/48341240/transform-ssl-crt-to-kubernetes-inline-format">the answer from this post</a>.</p> <p>But the accepted answer doesn't work for me, this is what I get:</p> <pre><code>cat myapp.com.crt | base64 -w0 base64: invalid argument -w0 Usage: base64 [-hDd] [-b num] [-i in_file] [-o out_file] -h, --help display this message -Dd, --decode decodes input -b, --break break encoded string into num character lines -i, --input input file (default: &quot;-&quot; for stdin) -o, --output output file (default: &quot;-&quot; for stdout) </code></pre> <p>Why is that? How to get the base64 encoded value for my certificate so that I can use the value in k8s secret manifest?</p>
<p>What kind of distro are you using? It seems that your <code>base64</code> binary is not the same program as used in your referenced answer.</p> <p>On Ubuntu (20.04) it should work:</p> <pre><code>base64 --help Usage: base64 [OPTION]... [FILE] Base64 encode or decode FILE, or standard input, to standard output. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -d, --decode decode data -i, --ignore-garbage when decoding, ignore non-alphabet characters -w, --wrap=COLS wrap encoded lines after COLS character (default 76). Use 0 to disable line wrapping --help display this help and exit --version output version information and exit </code></pre> <p>Or you can just use <code>cat server.csr | base64 | tr -d '\n'</code> as mentioned in <a href="https://stackoverflow.com/a/48341521/20424756">this answer</a>.</p>
<p>There is already two existing Network Policies present and one of which allows all the outbound traffic for all pods</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-default namespace: sample-namespace spec: podSelector: {} policyTypes: - Ingress - Egress egress: - to: - podSelector: {} ingress: - from: - podSelector: {} </code></pre> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-egress namespace: sample-namespace spec: podSelector: {} policyTypes: - Egress egress: - to: - podSelector: {} - ipBlock: cidr: 0.0.0.0/0 - ports: - port: 53 protocol: UDP - port: 53 protocol: TCP </code></pre> <p>and I want to block all outbound traffic for a certain pod with label <code>app: localstack-server</code> so I created one more Network Policy for it but its not getting applied on that Pod</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: psp-localstack-default-deny-egress namespace: sample-namespace spec: podSelector: matchLabels: app: localstack-server policyTypes: - Egress </code></pre> <p>I'm able to run <code>curl www.example.com</code> inside that pod and its working fine which it should not have.</p>
<p>NetworkPolicies are additive, and they only have allow rules. So for each pod (as selected by podSelector), the traffic that will be allowed is the sum of all network policies that select this pod. In your case, that's all traffic, since you have a policy that allows all traffic for an empty selector (all pods).</p> <p>To solve your problem, you should apply the allow-all policy to a label selector that matches all pods except <code>app: localstack-server</code>. So, add a label like <code>netpol-all-allowed: &quot;true&quot;</code> to the pods that should keep their egress, and don't add it to <code>localstack-server</code>.</p>
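<p>A rough sketch of what the adjusted allow-all egress policy could look like, assuming you add a <code>netpol-all-allowed: &quot;true&quot;</code> label to every pod except <code>localstack-server</code> (the label name is just an example):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress
  namespace: sample-namespace
spec:
  podSelector:
    matchLabels:
      netpol-all-allowed: &quot;true&quot;   # only pods carrying this label are granted egress
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector: {}
    - ipBlock:
        cidr: 0.0.0.0/0
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
</code></pre> <p>With the allow-all policy no longer selecting <code>localstack-server</code>, your deny-egress policy is the only one that applies to it, so its outbound traffic is blocked.</p>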
<p>I have an old k8s cluster with 1 master and 2 worker nodes. It was shut down for a long time and I have now started it again. It had many running pods and deployments. After restarting the VMs, all k8s commands return</p> <pre><code>The connection to the server 123.70.70.70:6443 was refused - did you specify the right host or port? </code></pre> <p><strong>What have I done so far?</strong> I saw many Stack Overflow questions about fixing this error, and also posts on GitHub and some other sites. All of them require <code>kubeadm reset</code>. If I reset, I will lose all running pods. I don't know how to start those pods again, as they were not deployed by me.</p> <p><strong>What do I want?</strong> Is there a way I can get all the pods and nodes up and running without a reset? Or, even if I reset, how can I get all the pods back into their running state? This cluster was designed and set up by someone else; I have no idea about its deployments.</p> <p><strong>Update Question</strong></p> <p>When I run <code>docker ps -a | grep api</code> I see this</p> <pre><code>1a6ba468af3a 123.70.70.70:444/demo/webapikl &quot;dotnet UserProfileA…&quot; 6 months ago Exited (255) 22 hours ago k8s_webapikl-image_webapikl-deployment-74648888d5-bjpcj_fwd_13a76dd9-8ce5 </code></pre> <p>There are many containers like this. Any advice on how to start them, please?</p> <p>I am new to K8s, that's why I like to be sure before I do anything.</p>
<p>I'm really thankful for your time and effort. What worked for me is <a href="https://stackoverflow.com/questions/56320930/renew-kubernetes-pki-after-expired/56334732#56334732">this</a> Stack Overflow answer, along with some changes.</p> <p>In my case, when I ran <code>systemctl status kubelet</code> I saw this error</p> <pre><code>devops@kubemaster:/$ systemctl status kubelet ● kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled) Drop-In: /etc/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: activating (auto-restart) (Result: exit-code) since Wed 2023-01-11 12:51:04 EET; 9s ago Docs: https://kubernetes.io/docs/home/ Process: 188116 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXT&gt; Main PID: 188116 (code=exited, status=1/FAILURE) </code></pre> <p>Kubelet was stuck at activating.</p> <p>I followed these steps from the mentioned answer.</p> <pre><code> cd /etc/kubernetes/pki/ $ mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} ~/ $ kubeadm init phase certs all --apiserver-advertise-address &lt;IP&gt; $ cd /etc/kubernetes/ $ mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} ~/ $ kubeadm init phase kubeconfig all $ reboot </code></pre> <p>I also had to delete my <code>etcd .crt</code> and <code>.key</code> files from <code>/etc/kubernetes/pki/etcd/</code> as mentioned in one comment.</p> <p>This put kubelet into an active state. I then generated a new join command and joined all the worker nodes to the master node one by one. Once all nodes were ready, I deleted the terminating and CrashLoopBackOff pods, and Kubernetes recreated them on different worker nodes. Now all pods are working without any issue.</p>
<p>I understand what a Cluster IP service is and everything, I just don't understand the 'behind the scenes' and routing. Can someone PLEASE explain it to me!</p> <ol> <li><p>A request is made to the Cluster IP service. If this service is serving multiple pods, how does the service choose which pod to send the request to, what is controlling and deciding that?</p> </li> <li><p>Once a pod is chosen and the request is sent to it are there iptables (added by kube-proxy) within that pod mapping the service to its corresponding pods and the pod then forwards/load-balances that request off again? unless it's that pods turn to process the request?</p> </li> </ol> <p>Thank you so much</p>
<p>Let me try my best to help you out. A ClusterIP service is not exposed to the outer network; it is handled automatically, and K8s distributes its traffic in a round-robin-like scheme. It also depends upon resources. Let's suppose you have one frontend app with 4 pods behind a ClusterIP service, running on 4 worker nodes of k8s, and these pods talk to a backend app. K8s will spread the load accordingly. It all depends on what resource policies you have set (like how much CPU, RAM or storage each pod can use). If you don't set any policies, the resources can auto-scale. It also depends on which pod is running on which node and what spec each node has (maybe one node is more powerful than another). Let's say one pod is running on node-x, which is the most powerful node, and you don't have any pod-level resource policies; then most of the traffic will go to the pod running on that powerful node. If all the nodes have equal specs and there are no pod-level policies, then K8s will distribute the traffic equally across the pods. K8s decides how to distribute the traffic based on the leftover resources at the node level, and this is done by the Scheduler API. I hope this helps.</p>
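<p>If you want to see the &quot;behind the scenes&quot; part yourself, one way (assuming kube-proxy is running in its default iptables mode; the service name below is made up) is to look at the endpoints the Service resolves to and at the rules kube-proxy programs on each node:</p> <pre><code># the set of pod IPs the Service currently balances across
kubectl get endpoints my-service -o wide

# on a node: the NAT rules kube-proxy created for that Service
sudo iptables-save | grep my-service
</code></pre> <p>Each incoming request is DNAT'ed on the node where it originates to one of those endpoint pod IPs; the chosen pod just serves the request and does not forward or re-balance it again.</p>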
<p>In my k3s cluster, I am editing a <code>Role</code> to add permission to a resource e.g. <code>deployment</code>.</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: &quot;2023-09-02T08:00:23Z&quot; name: foo namespace: default resourceVersion: &quot;432595&quot; uid: 603d0e0b-62f8-4e46-a42c-66436203bdc9 rules: - apiGroups: - apps. # I know API Group is 'apps' but I wonder how to figure it out in general resources: - deployments verbs: - get - list - watch </code></pre> <p>In the <code>apiGroups</code> field above, I know the API group for <code>deployment</code> resource is <code>app</code> based on my googling.</p> <p>But I wonder in general how can I figure out which API group certain resource belongs to using <code>kubectl</code> command?</p> <p>For example, I know also there is <code>kubectl api-resources</code> command, but if I run it for <code>deployment</code> resource as an example:</p> <pre><code>k api-resources | grep 'deployment' NAME SHORTNAMES APIVERSION NAMESPACED KIND deployments deploy apps/v1 true Deployment </code></pre> <p>The output tells me the <code>APIVERSION</code> is <code>apps/v1</code>. But what exactly is the <strong>API Group</strong> when I see value of <code>apps/v1</code>? Should I consider the value of <code>APIVERSION</code> is the API Group of this resource or is it always the left side of <code>/</code> be the API group?</p> <p>Could someone please clarify for me? Thank you.</p>
<p>Yes, you can consider that the left side of the <code>/</code> is always the apiGroup. <code>apps/v1</code> indicates the <strong>REST API path</strong>; in general the API path looks like <code>/apis/$GROUP_NAME/$VERSION</code>, where <strong>apps</strong> is the <a href="https://kubernetes.io/docs/reference/using-api/#api-groups" rel="nofollow noreferrer"><strong>apiGroup</strong></a> and <strong>v1</strong> is the version.</p> <p>There is one exception though: if you are using components from the <strong>core</strong> (also called <strong>legacy</strong>) group, the definition looks like <code>apiVersion: v1</code> instead of <code>apiVersion: core/v1</code>.</p> <p>When you want to use a particular object, say <code>Role</code> in your example, you can go through the official documentation of that particular component, or through the <strong><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/" rel="nofollow noreferrer">API reference documentation</a></strong>, to find the <strong>apiGroup</strong> of that object.</p>
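<p>You can also get this straight from <code>kubectl</code> itself; for example (the resource names here are just illustrations):</p> <pre><code># list every resource that belongs to a given API group
kubectl api-resources --api-group=apps

# show the kind, group and version of a single resource
kubectl explain deployment | head
</code></pre>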
<p>I am fairly new to Kubernetes. I am trying to figure out the best structure for a pod that requires configuration depending upon which pod requests its services.</p> <p>So for example. Data Stream port might be 8080, this is a stream of data of unknown size. Data is processes as it is received. So I can't really use a REST API with a payload as the payload is a stream which could be days long.</p> <p>The issue is that I might have 10+ copies of this service, and they need to be configured dynamically upon a client pod connecting to that service. I would prefer to use a separate port like 9000 to connect to the pod with an XML or INI file type of configuration.</p> <p>My concern is the following. Since there is 10 copies and the same pod is making 2 unique requests to a services, are they guaranteed to connect to the same service pod or could they be 2 different ones? Ultimately, I would want to select a service pod (orchestrator can select, but it be a known IP address now), send a configuration file to 9000, then connect to port 8080 with a data stream for the service to be properly completed.</p>
<p>Since you are trying to open multiple ports for the same service on the same pod, you can assign each port a different name. I’m providing a sample YAML file in which a service exposes both port 80 and port 443.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-service spec: selector: app: MyApp ports: - name: http protocol: TCP port: 80 targetPort: 9376 - name: https protocol: TCP port: 443 targetPort: 9377 </code></pre> <p>For more information and explanation you can refer to this <a href="https://www.devopsschool.com/blog/how-to-expose-multiple-port-in-services-in-kubernetes-or-multi-port-services/" rel="nofollow noreferrer">link</a>, from which the above YAML example is taken.</p> <p><strong>Update:</strong></p> <ol> <li>@mazecreator, I can suggest a workaround for now: attach a temporary or persistent volume to your service's deployment. When you push your configuration files over port 9000, write them to this volume; when your application performs the data-stream operation it can fetch its configuration from there, so the request is completed properly (see the sketch after this list).</li> <li>@mazecreator, follow these steps and create separate deployment files for each application you are going to deploy; this way, only the pods belonging to that particular application can access the volume.</li> </ol>
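<p>A rough sketch of how the pod side of that workaround could look, with two named container ports plus a shared volume for the pushed configuration. The port numbers match the question, but the image, mount path and volume type are placeholders:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: stream-service
spec:
  replicas: 10
  selector:
    matchLabels:
      app: stream-service
  template:
    metadata:
      labels:
        app: stream-service
    spec:
      containers:
      - name: stream-service
        image: example/stream-service:latest   # placeholder image
        ports:
        - name: config
          containerPort: 9000
        - name: data-stream
          containerPort: 8080
        volumeMounts:
        - name: pushed-config                  # config received on port 9000 is written here
          mountPath: /etc/stream-config
      volumes:
      - name: pushed-config
        emptyDir: {}
</code></pre>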
<p>The pinniped CLI is not working in Windows. pinniped-cli-windows-amd64.exe is downloaded, but when I type pinniped, it's not recognized.</p> <p>C:\Users\hello&gt;pinniped pinniped is not recognized as a internal command or external command, operable program or batch file.</p> <p>It seems Windows is not recognizing this .exe file as published by a valid publisher.</p> <p>pinniped should show the pinniped CLI options and be recognized as a command. I created a folder called pinniped and copied the .exe file and tried ...that did work.</p>
<p>Hi <code>Chai</code>, I have gone through the link and tried installing the pinniped CLI, and it threw me the same error. After troubleshooting I found that the pinniped CLI executable does not get added to the PATH. You can still run pinniped commands by executing the exe directly, but then every time you need to go to the directory where your pinniped-cli.exe file is located. To resolve this, add the folder containing the pinniped CLI exe to your PATH and it will solve your problem; follow this <a href="https://helpdeskgeek.com/how-to/fix-not-recognized-as-an-internal-or-external-command/" rel="nofollow noreferrer">document</a> for more information.</p>
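<p>As a sketch of what that can look like from a Windows command prompt (the folder name is just an example, and <code>setx</code> only affects newly opened terminals; it also truncates values longer than 1024 characters, in which case the graphical environment-variable editor is the safer route):</p> <pre><code>REM put the binary in a dedicated folder under the name you want to type
mkdir C:\pinniped
move pinniped-cli-windows-amd64.exe C:\pinniped\pinniped.exe

REM append that folder to the PATH, then open a new terminal
setx PATH &quot;%PATH%;C:\pinniped&quot;
</code></pre>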
<p>I am using <code>NFS</code> <code>Persistent Volume</code> to create PV.</p> <p>The reclaim policy being used in <code>persistentVolumeReclaimPolicy: Delete.</code></p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: pv001 spec: capacity: storage: 3Gi volumeMode: Filesystem accessModes: - ReadOnlyMany persistentVolumeReclaimPolicy: Delete mountOptions: - hard - nfsvers=4.1 nfs: path: / server: fs-0bb.efs.us-east-2.amazonaws.com </code></pre> <br> <p>However, when I delete my <code>deployment-controller</code> and also delete <code>PersistentVolumeClaim</code>, the NFS volume is not getting deleted.</p> <br> <p><strong>Expected Behaviour:</strong> <code>The NFS PV volume should be deleted after PVC is deleted.</code></p>
<p>There are two types of PV configuration: static PV and dynamic PV. If you have configured a static PV, you need to delete both the PVC and the PV. For a dynamic PV you just need to delete the PVC, and the PV will then be released. From the manifest file you have provided, it seems that you are using a static PV, so you need to delete both the PVC and the PV.</p>
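<p>If you want the &quot;delete the PVC and the volume goes away&quot; behaviour, the usual route is dynamic provisioning through a StorageClass whose <code>reclaimPolicy</code> is <code>Delete</code>. A rough sketch for an EFS-backed class follows; it assumes the AWS EFS CSI driver is installed, and the file system ID is a placeholder:</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
reclaimPolicy: Delete               # dynamically provisioned PVs are removed with their PVC
parameters:
  provisioningMode: efs-ap          # dynamic provisioning via EFS access points
  fileSystemId: fs-0bb0000000000000 # placeholder
  directoryPerms: &quot;700&quot;
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 3Gi
</code></pre>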
<p>So, in our organization we use GKE (GCP) to configure and deploy everything. Most of our apps are Java-based. Hence, by defining a normal HPA, we face 2 issues.</p> <ol> <li>Say we scale up when we reach 70% of the total capacity for that app. Many times, the time delay provided for scale-up is not sufficient for new pods to come up and start running. And it's not standardized because we deal with many apps.</li> <li>The obvious choice seems to be &quot;then just scale at 50%&quot;. Not as simple as it sounds. By decreasing the threshold, which can now be easily met, we observed a very large number of pods spawning and being removed, which is difficult to deal with as it also frequently (and unnecessarily) fluctuates the number of nodes in our cluster.</li> </ol> <p>So to sum it up, I'm looking for a good solution/strategy with which I can scale Java apps in Kubernetes, with or without using the HPA feature.</p> <p>I'm pretty sure I can't be the only one facing this problem. Feel free to discuss and share ideas. Thanks!</p>
<p>There are many ways to reduce these issues. Firstly, you need to get a better understanding of your application and perform load tests alongside the security tests and other tests you already run.</p> <p>To do this you need several environments, such as testing, non-prod and production. Let’s say three projects: one for a test environment, one for non-prod and the final one for production. You might think having multiple environments in the cloud will cost more, but cloud providers like GCP only charge you for the resources you actually use. You might also consider an on-prem setup on your laptop or desktop, but the way scaling works there differs from how it works with cloud providers; in the cloud you have more options than on-prem, where you need to create everything from scratch.</p> <p>So the workflow is: create three projects in GCP (or another cloud provider), test your application's performance on single-node and multi-node deployments, vary the machine sizes, and use various load-test simulators. Configure monitoring for your application using Cloud Monitoring/Stackdriver (since you mentioned GCP), analyze the resource-utilization graphs over a given time period, and then configure your HPA based on that analysis; this will also help optimize your billing. Ideally you would have done this before deploying the application, but it's still not too late to create a checklist, validate your application's performance at various sizes, and apply the results to production accordingly; go through this <a href="https://cloud.google.com/kubernetes-engine/docs/best-practices/scalability" rel="nofollow noreferrer">best practices document</a> by GCP on scalability for more detailed information.</p>
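<p>On the flapping itself, the HPA's scaling behaviour can also be tuned directly. A hedged sketch using the <code>autoscaling/v2</code> <code>behavior</code> field to keep scale-up responsive while slowing scale-down (the metric, thresholds and windows are illustrative, not recommendations):</p> <pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-java-app        # hypothetical app name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-java-app
  minReplicas: 3
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 0     # react quickly when load rises
      policies:
      - type: Percent
        value: 100
        periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 600   # wait 10 minutes before removing pods
      policies:
      - type: Pods
        value: 1
        periodSeconds: 120
</code></pre> <p>A long scale-down stabilization window plus a conservative scale-down policy is a common way to stop slow-starting JVM pods from being added and removed in quick succession, without having to lower the scale-up threshold.</p>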
<p>I want to fetch the cluster ID using the Java Spring Kubernetes API. Is there a way to get it programmatically, like we get pod information using the code below?</p> <pre><code>ApiClient client = Config.defaultClient(); CoreV1Api api = new CoreV1Api(client); String continuationToken = null; V1PodList items = api.listPodForAllNamespaces(null, continuationToken, null, null, 10, null, null, null, 10, false); </code></pre>
<p>In the <strong><a href="https://docs.spring.io/spring-cloud-open-service-broker/docs/3.1.0.RELEASE/apidocs/org/springframework/cloud/servicebroker/model/KubernetesContext.html" rel="nofollow noreferrer">Kubernetes Spring API documentation</a></strong> there is a detailed explanation of the KubernetesContext class. It has a method called <code>getClusterid()</code> which returns the cluster id. The following is the explanation provided in the documentation:</p> <blockquote> <p><strong>getClusterid</strong></p> <p>public String getClusterid()</p> <p>Retrieve the kubernetes clusterid from the collection of platform properties.</p> <p><strong>Returns:</strong> the clusterid</p> </blockquote> <p>Note: The above information is provided by referring to the <a href="https://docs.spring.io/spring-cloud-open-service-broker/docs/3.1.0.RELEASE/apidocs/org/springframework/cloud/servicebroker/model/KubernetesContext.html" rel="nofollow noreferrer">official documentation</a>.</p>
<p>I'm having trouble deploying my fastapi app to a k8s container in GCP. Even though I have green checkmark indicating its up and running my log shows this:<br /> <a href="https://i.stack.imgur.com/JKEzE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JKEzE.png" alt="enter image description here" /></a></p> <p>When I run my app locally, it builds. When I build my image and container locally, Docker desktop shows no issues and its not hitting this alive endpoint over and over. Before I added this endpoint to my app. The logs were showing the app stopping and restarting again and again.</p> <p>Other endpoints that are deployed don't have any of these logs. So I feel like something keeps making the app restart over and over and checking this health liveness endpoint, what am I doing wrong here?</p> <p>Dockerfile:</p> <pre><code>FROM python:3.9 #need to run virtualenv RUN python3 -m venv /opt/venv # Install system dependencies RUN apt-get update ENV PATH=&quot;${PATH}:/root/.poetry/bin&quot; WORKDIR .app work directory for code ARG PIPCONF_B64 RUN mkdir -p ~/.pip &amp;&amp; echo $PIPCONF_B64 | base64 -d &gt; ~/.pip/pip.conf RUN pip install poetry # Copy over the requirements.txt so we can install it Copy project files and test files COPY . app COPY requirements.txt /requirements.txt COPY poetry.lock pyproject.toml COPY pyproject.toml pyproject.toml RUN poetry export -f requirements.txt --output requirements.txt --without-hashes # Install requirements RUN pip3 install --upgrade pip RUN pip3 install -r /requirements.txt RUN . /opt/venv/bin/activate &amp;&amp; pip install -r requirements.txt ENV PYTHONPATH=&quot;${PYTHONPATH}:/appstuff&quot; EXPOSE 80 CMD [&quot;uvicorn&quot;, &quot;main:app&quot;, &quot;--proxy-headers&quot;, &quot;--host&quot;, &quot;0.0.0.0&quot;, &quot;--port&quot;, &quot;80&quot;] </code></pre> <p>my deployment.yaml:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: &quot;{{ .Release.Name }}-deployment&quot; spec: revisionHistoryLimit: 5 {{- if not .Values.hpa.enabled }} replicas: {{ .Values.replicas }} {{ end }} selector: matchLabels: app: &quot;{{ .Release.Name }}&quot; {{ toYaml .Values.labels | indent 6 }} template: metadata: annotations: ad.datadoghq.com/postgres.logs: '[{&quot;source&quot;: ...}]' labels: app: &quot;{{ .Release.Name }}&quot; {{ toYaml .Values.labels | indent 8 }} spec: serviceAccountName: &quot;{{ .Release.Name }}-sa&quot; containers: - name: &quot;{{ .Release.Name }}&quot; image: &quot;{{ .Values.image.repository }}:{{ .Values.image.tag }}&quot; imagePullPolicy: &quot;{{ .Values.image.pullPolicy }}&quot; command: [&quot;uvicorn&quot;] args: [&quot;main:app&quot;, &quot;--proxy-headers&quot;, &quot;--host&quot;, &quot;0.0.0.0&quot;, &quot;--port&quot;, &quot;80&quot;] ports: - name: ui containerPort: 80 protocol: TCP env: envFrom: - configMapRef: name: &quot;{{ .Release.Name }}-configmap&quot; # - secretRef: # name: my-secret-name livenessProbe: httpGet: path: /alive port: 80 initialDelaySeconds: 120 resources: {{ toYaml .Values.resources | indent 12 }} strategy: type: RollingUpdate rollingUpdate: maxSurge: 33% maxUnavailable: 33% </code></pre> <p>my config.yaml:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: &quot;{{ .Release.Name }}-configmap&quot; annotations: meta.helm.sh/release-name: &quot;{{ .Release.Name }}&quot; meta.helm.sh/release-namespace: &quot;{{ .Release.Namespace }}&quot; labels: app.kubernetes.io/managed-by: Helm data: {{ toYaml .Values.envConfig | indent 2 }} </code></pre>
<p>5xx errors such as 502 occur when the server is unable to process or serve the client’s request. This <a href="https://komodor.com/learn/how-to-fix-kubernetes-502-bad-gateway-error/" rel="nofollow noreferrer"><strong>blog</strong></a> written by Nir Shtein outlines the various points of failure that can cause 5xx errors when an application is deployed on a Kubernetes cluster.</p> <blockquote> <p>Consider a typical scenario in which you map a Service to a container within a pod, and the client is attempting to access an application running on that container. This creates several points of failure:</p> <ul> <li>The pod</li> <li>The container</li> <li>Network ports exposed on the container</li> <li>The Service</li> <li>The Ingress</li> </ul> </blockquote> <p>As per the description provided by you, the main reason for getting 502 errors is that your containers are restarting continuously.</p> <p>A pod or container restart might occur because of excessive resource utilization (CPU or memory overshoot) or because the pod shuts down prematurely due to an error in the application code; follow this <a href="https://www.bluematador.com/docs/troubleshooting/kubernetes-pod#:%7E:text=on%20the%20container.-,Container%20Restarts,-In%20most%20cases" rel="nofollow noreferrer"><strong>blog</strong></a> for more information regarding container restarts.</p> <p><strong>Troubleshooting steps:</strong></p> <p>Check the pod logs for more information on why the pod is getting restarted; you can use the commands below for this</p> <pre><code>kubectl logs &lt;pod&gt; kubectl describe pod &lt;pod&gt; </code></pre> <p>Pod restarts might also happen due to high CPU or memory utilization. Use the command below to find the pods that are consuming the most resources</p> <pre><code>kubectl top pods </code></pre> <p>Check whether your application pods show up in that list, then log in to the pod with high memory utilization and check which processes are consuming the most resources.</p> <p>Sometimes improper autoscaler configurations will also cause container or pod restarts; check your autoscaler config and correct any misconfiguration found.</p> <p><a href="https://dwdraju.medium.com/a-pod-restarts-so-whats-going-on-fa12bb8a57ea" rel="nofollow noreferrer">Here</a> is an additional reference on pod restarts; go through it for additional information and debugging steps.</p>
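<p>Since the containers are restarting, the logs of the previous (crashed) instance are usually the most useful; a couple of extra commands that might help here (the pod and namespace names are placeholders):</p> <pre><code># logs from the instance that crashed, not the freshly restarted one
kubectl logs &lt;pod&gt; -n &lt;namespace&gt; --previous

# recent events (OOMKilled, failed probes, scheduling problems, ...)
kubectl get events -n &lt;namespace&gt; --sort-by=.metadata.creationTimestamp
</code></pre>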
<p>I got a list in yml - credentials. And supposedly each bank has to have a different password that needs to be encrypted. What would be the right way to specify that? As of now I got it configured like this, but that doesn't work.</p> <p>This is the config.yml</p> <pre><code>infopoint: endpoint: https://test.test.com/ws/SSS/Somthing.pl system: TEST mock: false credentials: - bank: 1111 user: LSSER existingSecret: name: infopoint-creds-s1-hb - bank: 2222 user: TESSER existingSecret: name: infopoint-creds-s1 envFrom: - secretRef: name: infopoint-creds-s1-hb - secretRef: name: infopoint-creds-s1 </code></pre> <p>This is how I created both secret keys on the server.</p> <pre><code>C:\Users\mks\IdeaProjects&gt;kubectl.exe create secret generic infopoint-creds-s1-hb --from-literal=INFOPOINT_CREDENTIALS_PASSWORD=SOMEPASS -o yaml -n test-env --dry-run=client | kubeseal -o yaml --scope namespace-wide &gt; infopoint-creds-s1-hb.yaml C:\Users\mks\IdeaProjects&gt;kubectl.exe create secret generic infopoint-creds-s1 --from-literal=INFOPOINT_CREDENTIALS_PASSWORD=SOMEPASS -o yaml -n test-env --dry-run=client | kubeseal -o yaml --scope namespace-wide &gt; infopoint-creds-s1.yaml </code></pre> <p>This is my Spring configuration.</p> <pre><code>@Configuration @ConfigurationProperties(prefix = &quot;infopoint&quot;) class InfopointAPIConfiguration { lateinit var endpoint: String var proxyServerName: String? = null var proxyPortNumber: String? = null lateinit var system: String lateinit var mock: String lateinit var credentials: List&lt;Credentials&gt; data class Credentials( var bank: String? = null, var user: String? = null, var password: String? = null ) fun credentialsByBank(bank: Int): Credentials { return credentials.firstOrNull { it.bank == bank.toString() } ?: error(&quot;Could not load credential for bank $bank&quot;) } } </code></pre>
<p>Kubernetes secrets can be used or configured in applications in multiple ways, for example via ConfigMaps, sealed secrets and environment variables. Since you got stuck with the sealed-secrets part, I am providing an answer related to that.</p> <p>First we need to create the sealed secret in the same namespace and with the same name as the Secret it will unseal; this prevents other users on the same cluster from reusing your sealed secret. For more information related to sealed secrets please go through this <a href="https://github.com/bitnami-labs/sealed-secrets#usage" rel="nofollow noreferrer">document</a>.</p> <p>Now that we have our secret created, all we need to do is use it in the application. The secret we created needs to be referenced in the YAML file. There is a detailed description of how to configure secrets in a Spring Boot application, along with a sample project, available <a href="https://capgemini.github.io/engineering/securing-spring-boot-config-with-kubernetes/" rel="nofollow noreferrer">here</a>.</p>
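<p>One detail worth pointing out: both of your secrets expose the same key name (<code>INFOPOINT_CREDENTIALS_PASSWORD</code>), so pulling them in with two <code>envFrom</code> entries means one value shadows the other. A possible sketch is to map each secret to its own, distinctly named environment variable instead; the variable names below are made up, and your Spring configuration would have to bind each bank's password to the matching variable:</p> <pre><code>env:
  - name: INFOPOINT_CREDENTIALS_1111_PASSWORD   # hypothetical name for bank 1111
    valueFrom:
      secretKeyRef:
        name: infopoint-creds-s1-hb
        key: INFOPOINT_CREDENTIALS_PASSWORD
  - name: INFOPOINT_CREDENTIALS_2222_PASSWORD   # hypothetical name for bank 2222
    valueFrom:
      secretKeyRef:
        name: infopoint-creds-s1
        key: INFOPOINT_CREDENTIALS_PASSWORD
</code></pre>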
<p>Developed a nextjs app and have deployed it to a kubernetes (AKS) cluster. The docker file looks pretty much identical to the sample nextjs docker files (see <a href="https://github.com/vercel/next.js/blob/v12.3.2/examples/with-docker/Dockerfile" rel="nofollow noreferrer">here</a>).</p> <p>The pod manifest is nothing special either, all I'm doing is setting some environment variables and setting the container up with the right docker image.</p> <p>The nextjs app refuses to start on a pod in the AKS cluster.</p> <p>I have pulled the docker image from the container registry and am able to start it up locally on docker desktop - all works fine.</p> <p>However, it refuses to start on a kubernetes pod, and I am at my wits end! I overrode the entry point to the container on the pod and have manually tried running <code>node server.js</code> via kubectl inside the container, and... nothing happens. Node just exits after a second which seems to indicate that something is causing nodejs to silently crash.</p> <p>What could be going wrong here? Is there anyway for me to get more output to try and diagnose this issue?</p>
<p>Ok, it seems that running the build steps and the final command as the nextjs USER, as specified in the Dockerfile, is the bit that's causing issues in Kubernetes. Removing those directives and running as the default root user fixes the issue.</p>