Columns: Response (string, length 8–2k), Instruction (string, length 18–2k), Prompt (string, length 14–160)
Add this to the makefile:

# makefile
git clone REPO
cd REPO_DIR; python setup.py bdist_wheel
cp REPO_DIR/dist/* .
rm -rf REPO_DIR/

Add this to the dockerfile:

# dockerfile
RUN pip install REPO*.whl

and then the package is successfully installed within docker.
I am using this Docker image (FROM lambci/lambda:python3.6) and I need to install a private repository package. The problem is that the image does not have git, and I cannot install git using apt-get or apk because the image is not a standard Linux distribution. Is there any possible way to fix this by installing git? Or is there any other, better method I could use to install this private repository package?
pip install git+url within a docker environment
Seems to be as easy as this:

FROM centos

RUN yum install -y \
    java-1.8.0-openjdk \
    java-1.8.0-openjdk-devel

ENV JAVA_HOME /etc/alternatives/jre

. .
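To sanity-check the result, something along these lines should print a 1.8.0 version string (the image tag here is just an example, not part of the original answer):

docker build -t centos-openjdk8 .
docker run --rm centos-openjdk8 java -version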
Assuming that there is a CentOS Dockerfile:

FROM centos

What is the right way of adding OpenJDK 8 to it? I have tried to use a similar approach as for Fedora: https://github.com/projectatomic/docker-fedora-images/blob/master/java-openjdk-8/Dockerfile

But when I run the image, the java version is "1.7.0_111", even though it is expected to be JDK 8:

docker run -i -t /bin/bash
[user@2fcc1e47c3cd projects]$ java -version
java version "1.7.0_111"
OpenJDK Runtime Environment (rhel-2.6.7.2.el7_2-x86_64 u111-b01)
OpenJDK 64-Bit Server VM (build 24.111-b01, mixed mode)

There are many sources which describe Oracle JDK installations, but I was not able to find any relevant instructions for OpenJDK.
How to define OpenJDK 8 in CentOS based Dockerfile?
The problem with this approach is that the mysql user does not have permission to write to the /var/log/mysql directory

The problem actually is that the directory /var/log/mysql does not exist on the mysql:5.7 Docker image. You can make sure of it by running the following container:

$ docker run --rm mysql:5.7 ls /var/log/
alternatives.log  apt  bootstrap.log  btmp  dmesg  dpkg.log  faillog  fsck  lastlog  wtmp

Furthermore, MySQL binary logs aren't logs meant for following your MySQL server activity or errors; they are logs meant to give your MySQL server a chance to recover data in case of a server crash.

As a consequence, you want those binary logs:

to stay close to your data
to be written on a fast file system

In most cases, the Docker container file system is slow, and that's why the MySQL data folder for the container is declared as a VOLUME. So you also want your binary logs to be written on a Docker data volume and not the Docker container file system.

Long story short, start your container with:

docker run -d \
  --name mysql \
  -v /var/lib/mysql:/var/lib/mysql \
  mysql:5.7 \
  mysqld \
  --datadir=/var/lib/mysql \
  --user=mysql \
  --server-id=1 \
  --log-bin=/var/lib/mysql/mysql-bin.log \
  --binlog_do_db=test
What would be the best way to enable binary logging using the official mysql image?

I have tried using the mysql:5.7 image, overriding the command when running it to also pass through the startup options to enable binary logging to mysqld (see below). The problem with this approach is that the mysql user does not have permission to write to the /var/log/mysql directory.

The run command:

docker run -d \
  --name mysql \
  -v /var/lib/mysql:/var/lib/mysql \
  mysql:5.7 \
  mysqld \
  --datadir=/var/lib/mysql \
  --user=mysql \
  --server-id=1 \
  --log-bin=/var/log/mysql/mysql-bin.log \
  --binlog_do_db=test

The output:

mysqld: File '/var/log/mysql/mysql-bin.index' not found (Errcode: 2 - No such file or directory)

Should I fork the repository and add a volume for /var/log/mysql which the mysql user can write to and create a custom image, or is there a better way to do it? Is this possible using only the official mysql image?
How can I enable MySQL binary logging using the official Docker image?
The hostname directive simply sets the hostname inside the container (that is, the name you get back in response to the hostname or uname -n commands). It does not result in a DNS alias for the service. For that, you want the aliases directive. Since that directive is per-network, you need to be explicit about networks rather than using the compose default, for example:

version: '3'
services:
  redis1:
    image: "redis:alpine"
    hostname: redis1host
    networks:
      redis:
        aliases:
          - redis1host
  redis2:
    image: "redis:alpine"
    hostname: redis2host
    networks:
      redis:
        aliases:
          - redis2host
networks:
  redis:
I'm unable to get the Docker Compose hostname command to work. I'm running a simple docker-compose.yml:

version: '3'
services:
  redis1:
    image: "redis:alpine"
    hostname: redis1host
  redis2:
    image: "redis:alpine"
    hostname: redis2host

Once I run this with docker-compose up, I should be able to run docker-compose exec redis1 /bin/ash and then ping redis2host to talk to the other Redis container, but the ping just doesn't reach its destination. I can ping the other Redis container with ping redis2. ping redis2host should work, no?
Docker Compose hostname command not working
Use WORKDIR (https://docs.docker.com/engine/reference/builder/#workdir) or do it all in one RUN: your cd is "forgotten" when you are in another RUN.

By the way, group your RUN instructions, as indicated in the Dockerfile best practices: https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
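A minimal sketch of the WORKDIR approach, reusing the file and directory names from the question's Dockerfile (treat it as an illustration, not the full file):

FROM ubuntu:16.04
COPY intel_virtual_gateway_console64_1_9_0.tar /root/
# WORKDIR persists for the following RUN/CMD instructions, unlike a RUN cd
WORKDIR /root
RUN tar -xvf intel_virtual_gateway_console64_1_9_0.tar
WORKDIR /root/virtualgatewayconsole_package
RUN apt-get update && apt-get install -y expect expect-dev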
I am trying to create an Image from the Dockerfile.# cat Dockerfile FROM ubuntu:16.04 COPY $pwd/intel_virtual_gateway_console64_1_9_0.tar /root/ COPY $pwd/login.exp /root/ RUN cd /root RUN echo $PWD RUN tar -xvf intel_virtual_gateway_console64_1_9_0.tar RUN cd virtualgatewayconsole_package RUN apt-get update && apt-get install expect \ expect-devWhile building the Image the directory is not getting changed to /root/. I thought the issue could be the tar file is missing, in order to confirm that printing the current working directory after the changing it to /root directory.But I have verified in the container that the packages were successfully copied to the /root directory. I have even verified by experimenting with other directories as well, even for those the directory is not getting changed. Due to this issue the consequent steps are failing:# docker build -t release:1.0 . Sending build context to Docker daemon 633.2MB Step 1/8 : FROM ubuntu:16.04 ---> 6a2f32de169d Step 2/8 : COPY $pwd/intel_virtual_gateway_console64_1_9_0.tar /root/ ---> Using cache ---> 36e9ea407082 Step 3/8 : COPY $pwd/login.exp /root/ ---> Using cache ---> 578f9f9481d9 Step 4/8 : RUN cd /root ---> Running in 07ccfc507888 ---> ad60f9d31c7e Removing intermediate container 07ccfc507888 Step 5/8 : RUN echo $PWD ---> Running in e0ec2df6a0dc / ---> 979a42368814 Removing intermediate container e0ec2df6a0dc Step 6/8 : RUN tar -xvf intel_virtual_gateway_console64_1_9_0.tar ---> Running in 0701db595e27 tar: intel_virtual_gateway_console64_1_9_0.tar: Cannot open: No such file or directory tar: Error is not recoverable: exiting now The command '/bin/sh -c tar -xvf intel_virtual_gateway_console64_1_9_0.tar' returned a non-zero code: 2But able to change the directory within the container.# docker run -it 979a42368814 /bin/bash root@100b02ddc98a:/# pwd / root@100b02ddc98a:/# cd /root/ root@100b02ddc98a:~# pwd /rootPlease help to find out what is causing the issue.
Unable to change directories while building docker Image using Dockerfile
I'm not sure using the ~/.profile configuration file is the best way to do what you want. Also, using RUN source /root/.profile won't have any effect since the line will be executed only once and won't be persistent when trying to execute the bash binary inside the container. (It will actually run a new bash session).

So, first of all, the kind of configuration you are trying to do should be in the .bashrc file (just because it is the place where it usually appears).

Then, as the bash man page says:

When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order

And:

When an interactive shell that is not a login shell is started, bash reads and executes commands from /etc/bash.bashrc and ~/.bashrc, if these files exist.

What you should probably do:

In the Dockerfile:

COPY config/.bashrc /root/.bashrc

The .bashrc file you want to copy into your container is located in a config repo. This is where you should put your configuration.

Then, in the entrypoint:

exec "$@"

Then, you could run bash using the docker command:

docker run XXX /bin/bash
I am trying to set a custom configuration for the Docker container bash prompt to display the git branch name when connected. I found everything to make it work properly, but I fail to execute the ~/.profile or even ~/.bash_profile files at the container's build time.

If I run source ~/.profile manually inside the container, it works fine. But I don't want my users to type any command to enable the custom prompt.

I tried to put RUN /bin/bash -c "source /root/.profile" or RUN source /root/.profile in my Dockerfile, and source /root/.profile in my entrypoint.sh file, without any success.

I saw some solutions for when running docker run, but I am using docker-compose.

Thank you all if you have any piece of advice :D !
Customize docker container bash
I had the same issue. Apparently this is a known bug for docker-compose 1.29 and Ubuntu 20 [1].

My docker-compose was installed using curl. What worked for me was removing docker-compose and installing it using pip instead:

sudo rm /usr/local/bin/docker-compose
pip3 install docker-compose

After that everything worked as expected.

This bug has been reported several times:

https://github.com/docker/compose/issues/8170
https://github.com/docker/compose/issues/8309
https://github.com/docker/compose/issues/8461

At the time of this writing (December 2021) things have not yet been resolved.
I'm following this Apache Airflow tutorial and I'm failing to execute the docker-compose up -d command. I get the following error:

Building webserver
unable to prepare context: unable to 'git clone' to temporary context directory: error fetching: /usr/lib/git-core/git-remote-https: /tmp/_MEItH0v3Q/libcrypto.so.1.1: version `OPENSSL_1_1_1' not found (required by /lib/x86_64-linux-gnu/libssh.so.4)
: exit status 128
ERROR: Service 'webserver' failed to build

I'm using Ubuntu 20.04 on WSL2. I've installed exactly that version of OpenSSL - OPENSSL_1_1_1 - but the error remains. I've also updated git to 2.30.2 because I've read it could fix it, but no luck.
Error executing docker-compose: Building webserver unable to prepare context: unable to 'git clone' to temporary context directory: error fetching
You need to explicitly install node / npm in your container before running npm install. Add this to your Dockerfile:

RUN apt-get update && apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get update && apt-get install -y nodejs
I'm really new to Docker and would like to create a container that has all my Ruby on Rails website and its dependencies in it. I read TONS of documentations and how-to's to do it but I keep struggling.Since I use Rubymine, there is a built-in tool that allows you to Dockerize the current project you're in from the Dockerfile you created in it. However, I keep having the error "npm not foundwhile NodeJSshouldbe installed.Here is my Dockerfile:FROM ruby:2.4.3 MAINTAINER Jaeger RUN apt-get update -qq && apt-get install -y build-essential nodejs RUN mkdir -p /app WORKDIR /app COPY package.json /app RUN npm i -g yarn && yarn COPY Gemfile Gemfile.lock ./ RUN gem install bundler && bundle install --jobs 20 --retry 5 RUN yarn COPY . ./ EXPOSE 3000 CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]The ultimate goal should be that, since I use Webpacker and React in it (well it's a dummy project for a test, but the real website has this), I would like to install Yarn, and make Yarn install all the depencies.I found some other people that had the same problem but got lost trying to understand the Docker layers concept and trying to copy some codes that didn't work either
Dockerfile returns npm not found on build
Yes, it is possible: see "Lab 6: Docker Networking".

The key part of an overlay network is the discovery service, like for instance Consul. An overlay network requires a key-value store:

The store maintains information about the network state which includes discovery, networks, endpoints, ip-addresses, and more. Engine supports Consul, etcd, and ZooKeeper (distributed store) key-value stores.

The article "Docker Networks: Discovering Services on an Overlay" offers some criticism of the current service discovery tools, which are not built for individual container registration or discovery:

Overlay uses KV stores under the covers to model the network topology and enable cross-host container-to-container communication. It does not provide SRV record resolution. The rub is that in an overlay network every container has its own IP address.

So, the only way you could make this work is by running Consul agents inside of every container on the network that contributes a service. That is certainly not transparent to the developer, or compatible with off-the-shelf images.
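As a rough sketch of what that setup looked like around Docker 1.9 (the Consul address, interface name, network name and image are placeholders, and the exact daemon flags may differ between versions):

# on every host, point the engine at a shared key-value store
docker daemon --cluster-store=consul://consul.example.com:8500 \
              --cluster-advertise=eth0:2376

# then, from any host, create the multi-host network without a swarm cluster
docker network create --driver overlay my-multihost-net
docker run -d --net=my-multihost-net --name web nginx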
I have followed through the example at "getting started overlay" and I have a simple query. Is it possible to use the multi-host networking feature using overlay without creating a swarm cluster? I don't want to use third-party plugins like weave etc. I want to use docker native networking support for this. I have a 3.16+ kernel running RHEL and docker 1.9.
Is swarm required for using multi-host networking feature using overlay in docker
In that case, you don't have to define a "link"; the database service is already running, so all you need to do is configure your django app to connect to that host.

I don't have experience with django, but based on the example in the docker-compose documentation, it would look something like:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'postgres',
        'USER': 'postgres',
        'HOST': 'example.blahblahblah.eu-west-1.rds.amazonaws.com',
        'PORT': 5432,
    }
}
My docker-compose.yml looks something like this:

django:
  build: .
  user: django
  links:
    # LINK TO AMAZON RDS?
  command: /gunicorn.sh
  env_file: config/settings/.env

nginx:
  build: ./compose/nginx
  links:
    - django
  ports:
    - "0.0.0.0:80:80"

How do I link the django container to the Amazon RDS, which has a URL like: example.blahblahblah.eu-west-1.rds.amazonaws.com:5432
How to link from docker-compose to Amazon RDS
CentOS 6 and RHEL 6 are no longer supported, and the last build for them is docker 1.7.1.

That page of the documentation (https://docs.docker.com/engine/installation/centos/) also mentions:

Docker runs on CentOS 7.X. Docker requires a 64-bit installation regardless of your CentOS version. Also, your kernel must be 3.10 at minimum, which CentOS 7 runs.

The kernel that those distros are running on (2.6.x) is over 13 years old, and although newer features are back-ported to them by Red Hat, they lack certain options that are required by Docker, and have proven to be unstable and unsuitable for production.

I encourage you to upgrade to CentOS 7.x if you want to (keep) using Docker.
I'm now deploying on CentOS 6.5, and I'm starting to use docker. So I followed the instructions on: https://docs.docker.com/engine/installation/centos/

No matter which method I follow to install, I get version 1.7.1:

[root@VM_72_235_centos ~]# docker version
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d/1.7.1
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d/1.7.1
OS/Arch (server): linux/amd64

But now I run the command from the documentation:

docker network ls

It fails with: docker: 'network' is not a docker command.

And I finally learned that the network command was first introduced in 1.9. So is there any way to install docker 1.9 on CentOS 6?
How to install docker 1.9+ in CentOS 6.5?
The way I do it is that I use busybox for all data stored and shared with mariadb, then use --volumes-from in mariadb to link those directories. Please have a look at my simplified compose.yml file:

db-data:
  container_name: db-data
  image: busybox:latest
  volumes:
    - /data/mysql:/var/lib/mysql

db:
  container_name: db
  image: million12/mariadb
  restart: always
  volumes_from:
    - db-data
  environment:
    - MARIADB_USER=admin
    - MARIADB_PASS=my_pass

Now all database files are accessible on the host OS too, and there shouldn't be any permissions issues.

Update for docker-compose 2.0:

version: '2'
volumes:
  database:
services:
  db:
    container_name: db
    image: million12/mariadb
    restart: always
    volumes:
      - database:/var/lib/mysql
    environment:
      - MARIADB_USER=admin
      - MARIADB_PASS=my_pass

You can see where docker is storing that volume on your hard drive by running the command:

docker volume inspect docker_database

[
    {
        "Name": "docker_database",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/docker_database/_data",
        "Labels": null,
        "Scope": "local"
    }
]
I'm having trouble in configuring persistent data withMariadb. I'm usingdocker-compose, with each service in a single container (Nginx,PHP-FPMandMariadb). Everything is working, exceptMariadbdoesn't store data. Every time I restart the container, I lose all the data. Then I found out that I can use another container just to keep data, and it doesn't even have to be running.So I'm using, inMariadbcontainervolume_fromcontent container. But when I do that, when I try to map the volume/var/lib/mysql, the ContainerMariaDbdoesn't start.Error2015-12-29 12:16:40 7f2f02e4a780InnoDB: Operating system error number 13 in a file operation.InnoDB: The error means mysqld does not have the access rights toInnoDB: the directory.The error refers to a problem about volume permissions, but I've tried to set permissions throughDockerfilein both containers, and the problem persists. I'm a bit lost. I'm using OSX, so I believe this is an OSX problem. Can anyone help me on this?This is my code:My Docker Composecontent: build: containers/content container_name: content hostname: content volumes: - /var/lib/mysql mariadb: build: containers/mariadb container_name: mariadb hostname: mariadb ports: - "3306:3306" volumes_from: - content environment: - MYSQL_ROOT_PASSWORD=mariadb - TERM=xterm - PORT=3306MariaDB DockerfileFROM debian:jessie RUN apt-get update && apt-get install -y mariadb-server EXPOSE 3306Content DockerfileFROM debian:jessie VOLUME /var/lib/mysql CMD ["true"]
Docker-Compose Persistent Data Trouble
Create your python container with the --net host argument; it will share the network address and ports with the host, so it can access a process which is running on the host. Refer to my other answer: https://stackoverflow.com/a/48069763/5465023
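A minimal sketch of that (the image and script names are placeholders):

# --net=host shares the host's network namespace, so localhost:8080 inside
# the container is the same localhost:8080 the host application listens on
docker run --net=host my-python-image python consume.py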
I have an application runing on my localhost at port 8080. I have some python code that consumes that service. The code runs fine on my base system but as soon as I put it inside a docker container I geturllib2.URLError: . I have another application that exposes an api at port 6543. Same problem.I assume I need to tell docker that it's allowed to consume certain localhost ports. How do I do that?Here are some more specific details:I can execute this line of code just fine on my base system:urllib2.urlopen(req, json.dumps(dData))but when I try to do it from inside a docker container then I get:File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen return opener.open(url, data, timeout) File "/usr/lib/python2.7/urllib2.py", line 431, in open response = self._open(req, data) File "/usr/lib/python2.7/urllib2.py", line 449, in _open '_open', req) File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain result = func(*args) File "/usr/lib/python2.7/urllib2.py", line 1227, in http_open return self.do_open(httplib.HTTPConnection, req) File "/usr/lib/python2.7/urllib2.py", line 1197, in do_open raise URLError(err) urllib2.URLError: I've tried adding permissions to docker.sockls -l /var/run/docker.sock => srw-rw-rwx 1 root docker 0 Feb 17 11:09 /var/run/docker.sock
making requests to localhost from inside docker container
As of 17.06, you can create node-local networks with a swarm scope. Do so with the --scope=swarm option, e.g.:

docker network create --scope=swarm --driver=bridge \
  --subnet=172.22.0.0/16 --gateway=172.22.0.1 user_defined_bridge

Then you can use this network with services and stacks defined in swarm mode. For more details, you can see PR #32981.

Edit: you appear to have significantly overcomplicated your problem. As long as everything is being done in a single compose file, there's no need to define the network as external. There is a requirement to use an overlay network if you want to communicate container-to-container. DNS discovery is included on bridge and overlay networks with the exception of the default "bridge" network that docker creates. With a compose file, you would never use this network without explicitly configuring it as an external network with that name. So to get container-to-container networking to work, you can let docker-compose or docker stack deploy create the network for your project/stack automatically with:

version: "3.0"
services:
  web1:
    image: "test"
    ports:
      - "12023:22"
  web2:
    image: "test"
    ports:
      - "12024:22"

Note that I have also removed the "hostname" setting. It's not needed for DNS resolution. You can communicate directly with a service VIP with the name "web1" or "web2" from either of these containers.

With docker-compose it will create a default bridge network. Swarm mode will create an overlay network. These defaults are ideal to allow DNS discovery and container-to-container communication in each of the scenarios.
I learned fromdocker documentationthat I can not use docker DNS to find containers using their hostnames without utilizing user-defined bridge network. I created one using the command:docker network create --driver=overlay --subnet=172.22.0.0/16 --gateway=172.22.0.1 user_defined_overlayand tried to deploy a container that uses it. compose file looks like:version: "3.0" services: web1: image: "test" ports: - "12023:22" hostname: "mytest-web1" networks: - test web2: image: "test" ports: - "12024:22" hostname: "mytest-web2" networks: - test networks: test: external: name: user_defined_overlaymy docker version is:Docker version 17.06.2-ce, build cec0b72and I got the following error when I tried deploying the stack:network "user_defined_bridge" is declared as external, but it is not in the right scope: "local" instead of "swarm"I was able to create an overlay network and define it in compose file. that worked fine but it didn't for bridge. result of docker network ls:NETWORK ID NAME DRIVER SCOPE cd6c1e05fca1 bridge bridge local f0df22fb157a docker_gwbridge bridge local 786416ba8d7f host host local cuhjxyi98x15 ingress overlay swarm 531b858419ba none null local 15f7e38081eb user_defined_overlay overlay swarmUPDATEI tried creating two containers running on two different swarm nodes(1st container runs on manager while second runs on worker node) and I specified the user-defined overlay network as shown in stack above. I tried pinging mytest-web2 container from within mytest-web1 container using hostname but I gotunknown host mytest-web2
can not use user-defined bridge in swarm compose yaml file
I think it is not possible to prevent docker from searching Docker Hub. But it is possible to prevent/avoid pulling images from Docker Hub, and in that case all searching on Docker Hub will find nothing.

To do it, you must prefix your image name with your private registry URL (without https:// or http://), like:

FROM myprivate_registry/myimagename:if_version

or

FROM myprivate_registry:registry_port/myimagename:if_version

Also, you can tell docker to use your private registry as its first registry, and Docker Hub will become the second:

ADD_REGISTRY='--add-registry myregistry' or INSECURE_REGISTRY='--insecure-registry myregistry'
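For context, those variables come from Red Hat's downstream packaging of docker, where they are set in /etc/sysconfig/docker; a hedged sketch (the registry name is a placeholder, and the --block-registry flag exists only in that downstream build, not in upstream Docker):

# /etc/sysconfig/docker  (RHEL/CentOS docker package only)
ADD_REGISTRY='--add-registry myregistry.example.com'
BLOCK_REGISTRY='--block-registry docker.io'

With upstream Docker there is no equivalent daemon option, so the usual approach is to only reference images by their full private-registry name, as shown above.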
I'm standing up a few docker hosts to run in a production environment. We want all of our images to have to go through our container pipeline, and we do not want to be able to pull images from Docker Hub (security concerns). How can I stop docker from being able to pull images from Docker Hub? Ideally I would like to do this via configuring the docker daemon.
How to prevent docker from searching docker hub
Did you specify --token-auth-file= and/or --basic-auth-file=, or one of the other authentication modes? I don't know that the https endpoint will work without one of these (maybe it should, but it doesn't, apparently). Check out https://kubernetes.io/docs/admin/authentication/
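If you do enable the static token file, a hedged example of what the request then looks like (the token value and file path are placeholders; the CA path is the one from the question):

# kube-apiserver ... --token-auth-file=/etc/kubernetes/tokens.csv
# tokens.csv line format: token,user,uid[,"group1,group2"]
curl --cacert /home/mongeo/ku-certs/ca.pem \
     -H "Authorization: Bearer mysecrettoken" \
     https://192.168.0.139/api/v1/namespaces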
The Kubernetes API request

curl https://192.168.0.139 --cacert /home/mongeo/ku-certs/ca.pem

returns Unauthorized.

The request curl localhost:8080 worked fine. My kube-proxy and kube-apiserver are standard (coreos+k8s tutorial). How do I get data over HTTPS?
Kubernetes. HTTPS API return `Unauthorized`
I had the same issue and found a way to set an environment variable as the result of a command by using the RUN instruction in the dockerfile.

For example, I need to set SECRET_KEY_BASE for a Rails app just once, without it changing as it would when I run:

docker run -e SECRET_KEY_BASE="$(openssl rand -hex 64)"

Instead, I write into the Dockerfile a line like:

RUN bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" >> /etc/bash.bashrc'

and my env variable is available from root, even after bash login. Or maybe:

RUN /bin/bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" > /etc/profile.d/docker_init.sh'

Then the variable is available in CMD and ENTRYPOINT commands. Docker caches it as a layer and changes it only if you change some lines before it.

You can also try different ways to set an environment variable.
I need to fill a variable in a dockerfile with the result of a command, like in bash var=$(date).

EDIT 1

date is just an example. In my case I use FROM phusion/baseimage:0.9.17, and I want each build to use the latest version, so I use this:

curl -v --silent api.github.com/repos/phusion/baseimage-docker/tags 2>&1 | grep -oh 'rel-.*",' | head -1 | sed 's/",//' | sed 's/rel-//'

==> 0.9.17

But I don't know how to put it into a variable in the dockerfile for this result:

ENV verbaseimage=curl...
FROM phusion/baseimage:$verbaseimage

RESULT

In my use case:

FROM phusion/baseimage:latest

But the question remains unresolved for other cases.
Parse a variable with the result of a command in DockerFile
The most elegant solution I've found is described in this post: Docker-compose make 2 microservices (frontend+backend) communicate to each other with http requests.

Example implementation:

In next.config.js:

module.exports = {
  serverRuntimeConfig: {
    // Will only be available on the server side
    URI: 'your-docker-uri:port'
  },
  publicRuntimeConfig: {
    // Will be available on both server and client
    URI: 'http://localhost:port'
  }
}

In pages/index.js:

import getConfig from 'next/config';
const { serverRuntimeConfig, publicRuntimeConfig } = getConfig();
const API_URI = serverRuntimeConfig.apiUrl || publicRuntimeConfig.apiUrl;

const Index = ({ json }) => Index;

Index.getInitialProps = async () => {
  ...
  const res = await fetch(`${API_URI}/endpoint`);
  ...
}
I have two docker containersfrontendanddata-service.frontendis using NextJS which is only relevant because NextJS has a method calledgetInitialProps()which can be run on the server, or can be run in the visitor's browser (I have no control over this).IngetInitialProps()I need to call an API to get the data for the page:fetch('http://data-service:3001/user/123').then(...When this is called on the server the API returns fine because my frontend container has access to the internal docker network and therefor can reference the data-service using the hostnamehttp://data-service.When this is called on the client, however, it fails (obviously) because Docker is now exposed ashttp://localhostand I can't referencehttp://data-serviceanymore.How can I configure Docker so that I can use 1 URL for both use cases. I would prefer not to have to figure out which environment I'm in in my NextJS code if possible.If seeing my docker-compose is useful I have included it below:version: '2.2' services: data-service: build: ./data-service command: npm run dev volumes: - ./data-service:/usr/src/app/ - /usr/src/app/node_modules ports: - "3001:3001" environment: SDKKEY: "whatever" frontend: build: ./frontend command: npm run dev volumes: - ./frontend:/usr/src/app/ - /usr/src/app/node_modules environment: API_PORT: "3000" API_HOST: "http://catalog-service" ports: - "3000:3000"
Referencing Docker container from server-side (from another container) AND from client-side (browser) with same URL
Inside the container, when I pip install bugsnag, I get the following:

root@af08af24a458:/app# pip install bugsnag
Requirement already satisfied: bugsnag in /usr/local/lib/python2.7/dist-packages
Requirement already satisfied: webob in /usr/local/lib/python2.7/dist-packages (from bugsnag)
Requirement already satisfied: six<2,>=1.9 in /usr/local/lib/python2.7/dist-packages (from bugsnag)

You probably see the problem here. You're installing the package for python2.7, which is the OS default, instead of python3.6, which is what you're trying to use.

Check out this answer for help resolving this issue: "ModuleNotFoundError: No module named <package>" in my Docker container

Alternatively, this is a problem virtualenv and similar tools are meant to solve; you could look into that as well.
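A hedged sketch of what that change could look like in the question's Dockerfile, assuming curl is available in the image and that bootstrapping pip for python3.6 via get-pip.py is acceptable (just one way to do it):

RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.6
RUN python3.6 -m pip install bugsnag
RUN python3.6 -m pip install -r requirements.txt
CMD ["python3.6", "import_emails.py"]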
I'm trying to run a python script in a Docker container, and i don't know why, python can't find any of the python's module. I thaught it has something to do with the PYTHONPATH env variable, so i tried to add it in the Dockerfile like this :ENV PYTHONPATH $PYTHONPATHBut it didn't work. this is what my Dockerfile looks like:FROM ubuntu:16.04 MAINTAINER SaveMe[email protected]ADD . /app WORKDIR /app RUN apt-get update RUN DEBIAN_FRONTEND=noninteractive apt-get install -y locales # Set the locale RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \ locale-gen ENV LANG en_US.UTF-8 ENV LANGUAGE en_US:en ENV LC_ALL en_US.UTF-8 ENV PYTHONPATH ./app #Install dependencies RUN echo "===> Installing sudo to emulate normal OS behavior..." RUN apt-get install -y software-properties-common RUN apt-add-repository universe RUN add-apt-repository ppa:jonathonf/python-3.6 RUN (apt-get update && apt-get upgrade -y -q && apt-get dist-upgrade - y -q && apt-get -y -q autoclean && apt-get -y -q autoremove) RUN apt-get install -y libxml2-dev libxslt-dev RUN apt-get install -y python3.6 python3.6-dev python3.6-venv openssl ca-certificates python3-pip RUN apt-get install -y python3-dev python-dev libffi-dev gfortran RUN apt-get install -y swig RUN apt-get install -y sshpass openssh-client rsync python-pip python- dev libffi-dev libssl-dev libxml2-dev libxslt1-dev libjpeg8-dev zlib1g-dev libpulse-dev RUN pip install --upgrade pip RUN pip install bugsnag #Install python package + requirements.txt RUN pip3 install -r requirements.txt CMD ["python3.6", "import_emails.py"]when i'm trying to run:sudo docker run i got this Traceback:Traceback (most recent call last): File "import_emails.py", line 9, in import bugsnag ModuleNotFoundError: No module named 'bugsnag'As you can see i'm using python3.6 for this project. Any lead on how to solve this ?
"ModuleNotFoundError: No module named <package>" in my Docker container
It actually works. I mean: the actual behaviour / final result.

Despite the fact that it shows xdebug.mode => develop, the actual features are ALL turned OFF:

Feature => Enabled/Disabled
Development Aids => ✘ disabled
Coverage => ✘ disabled
GC Stats => ✘ disabled
Profiler => ✘ disabled
Step Debugger => ✘ disabled
Tracing => ✘ disabled

I've tested it locally on Windows 10 and I see the same:

php.ini has xdebug.mode = debug

Without the XDEBUG_MODE override, cmd shows that the debugger is enabled as it should be:

Feature => Enabled/Disabled
Development Aids => ✘ disabled
Coverage => ✘ disabled
GC Stats => ✘ disabled
Profiler => ✘ disabled
Step Debugger => ✔ enabled
Tracing => ✘ disabled
...
xdebug.mode => debug => debug

With the XDEBUG_MODE override:

C:\Users\Andriy $ SET XDEBUG_MODE=off
C:\Users\Andriy $ php -r "xdebug_info();"
...
Feature => Enabled/Disabled
Development Aids => ✘ disabled
Coverage => ✘ disabled
GC Stats => ✘ disabled
Profiler => ✘ disabled
Step Debugger => ✘ disabled
Tracing => ✘ disabled
...
xdebug.mode => debug => debug

If I run this command (passing an additional Xdebug config param that tells it to start debugging straight away):

php -dxdebug.start_with_request=yes -r "xdebug_info();"

then WITHOUT the override it will try to establish the debug connection, and WITH the override it will not try to do that. That confirms that the override works (at the very least here in my environment).
I’m trying to configXdebug 3in PHP container, and setXDEBUG_MODEenv variable tooffaccording with documentationhttps://xdebug.org/docs/all_settings#modebutxdebug_info()shows thatmode=develop. How to fix?Dockerfile:FROM php:7.4.11-fpm … ENV XDEBUG_MODE=off ENV XDEBUG_CONFIG="" RUN pecl install xdebug \ && docker-php-ext-enable xdebug \ ...docker-compose.yml:services: php: build: dockerfile: ${PWD}/.devcontainer/Dockerfile image: php-fpm environment: XDEBUG_MODE: ${XDEBUG_MODE} // off XDEBUG_CONFIG: ${XDEBUG_CONFIG}xdebug info:php -r 'xdebug_info();' Version => 3.0.0 Support Xdebug on Patreon, GitHub, or as a business: https://xdebug.org/support Feature => Enabled/Disabled Development Aids => ✘ disabled Coverage => ✘ disabled GC Stats => ✘ disabled Profiler => ✘ disabled Step Debugger => ✘ disabled Tracing => ✘ disabled PHP Build Configuration Version => 7.4.11 Debug Build => no Thread Safety => disabled Settings Configuration File (php.ini) Path => /usr/local/etc/php Loaded Configuration File => /usr/local/etc/php/php.ini Scan this dir for additional .ini files => /usr/local/etc/php/conf.d Additional .ini files parsed => /usr/local/etc/php/conf.d/docker-php-ext-amqp.ini, /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini, Directive => Local Value => Master Value xdebug.mode => develop => developUPDATE:My case: I use VSCode to debug my app, so I need to turn on Xdebug module only when Xdebug listening is active. Better way to do that is using env XDEBUG_CONFIG and XDEBUG_MODE, because it not require change ini files.
Docker PHP with Xdebug 3 env XDEBUG_MODE doesn't work
Found the error: this seems to be a Docker 17.06.1-ce bug. This version seems to not delete images correctly, keeping files in /var/lib/docker/aufs/mnt/

So just upgrade to a newer docker version and this will be fine. Now df shows me:

Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sda1       51558236 3821696  45595452   8% /
udev               10240       0     10240   0% /dev
tmpfs            1398308   57696   1340612   5% /run
tmpfs            3495768       0   3495768   0% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs            3495768       0   3495768   0% /sys/fs/cgroup

This is better :)
got huge problem, all my inodes seems to be used. I've cleaned all volumes unused Cleaned all container and images with command -> docker prunebut still it seems that it stay full :Filesystem Inodes IUsed IFree IUse% Mounted on none 3200000 3198742 1258 100% / tmpfs 873942 16 873926 1% /dev tmpfs 873942 13 873929 1% /sys/fs/cgroup /dev/sda1 3200000 3198742 1258 100% /images shm 873942 1 873941 1% /dev/shm tmpfs 873942 1 873941 1% /sys/firmwaredocker infoContainers: 5 Running: 3 Paused: 0 Stopped: 2 Images: 23 Server Version: 17.06.1-ce Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 53 Dirperm1 Supported: true Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: bridge host macvlan null overlay Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog Swarm: inactive Runtimes: runc Default Runtime: runc Init Binary: docker-init containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170 runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2 init version: 949e6fa Kernel Version: 3.16.0-4-amd64 Operating System: Debian GNU/Linux 8 (jessie) OSType: linux Architecture: x86_64 CPUs: 2 Total Memory: 6.668GiB Name: serveur-1 ID: CW7J:FJAH:S4GR:4CGD:ZRWI:EDBY:AYBX:H2SD:TWZO:STZU:GSCX:TRIC Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): false Registry: https://index.docker.io/v1/ Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: falseThe only thing i think can do this, is a build i'm doing on this machine. This build runs a npm install with many files. Can these files stays on server ? is there any chance i have to delete these temporary files ?
docker is full, all inodes are used
There are tools designed to solve this problem:

https://github.com/yelp/dumb-init
https://github.com/krallin/tini

I think if you only have a single process, all you need to do is explicitly handle the signal with a signal handler, which bash doesn't do for you.

Using the ["node", "."] syntax, you could use https://nodejs.org/api/process.html#process_signal_events and just have it exit on SIGTERM. I believe that would be enough.

Or, using a bash script, you can use trap "exit 0" TERM

You could also use a process supervisor like http://skarnet.org/software/s6/
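For the tini route, a hedged Dockerfile sketch based on the pattern in tini's README (the release version is only an example; the CMD is the one from the question):

# run tini as PID 1 so it forwards SIGTERM to the child and reaps zombies
ADD https://github.com/krallin/tini/releases/download/v0.19.0/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
CMD ["node", "."]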
When my Dockerfile ends with CMD node ., docker runs that container with the command /bin/sh -c "node ." instead of simply node . (I know, I could do that with CMD ["node", "."]).

I thought that this behavior is actually nice, since it means that inside the container PID 1 is /bin/sh and not my humble node script.

If I understand correctly, PID 1 is responsible for reaping orphaned zombie processes, and I don't really want to be responsible for that... So if /bin/sh could do that, that would be nice. (I actually thought that this is the reason why docker rewrites my CMD).

The problem is that when I send a SIGTERM to the container (started with /bin/sh -c "node ."), either via docker-compose stop or docker-compose kill -s SIGTERM, the signal doesn't reach my node process and therefore it gets forcefully killed every time with a SIGKILL after the 10-second grace period. Not nice.

Is there a way to have someone manage my zombies and have my node instance receive the signals sent by docker?
SIGTERM does not reach node script when docker runs it with `/bin/sh -c`
If you take a look at the official Docker MySQL image Dockerfile, you will discover how they did it using debconf-set-selections.

The relevant instructions are:

RUN { \
    echo mysql-community-server mysql-community-server/data-dir select ''; \
    echo mysql-community-server mysql-community-server/root-pass password ''; \
    echo mysql-community-server mysql-community-server/re-root-pass password ''; \
    echo mysql-community-server mysql-community-server/remove-test-db select false; \
  } | debconf-set-selections \
  && apt-get update && apt-get install -y mysql-server

debconf-set-selections is a tool that allows you to prepare the answers for the questions that will be asked during the later installation.
When installing mysql in ubuntu with apt-get install mysql-server, the mysql username and password are asked for during the installation. But when using a dockerfile to build a mysql image, how can the username and password be provided?

I tried using a dockerfile as follows:

FROM ubuntu:14.04
apt-get update
apt-get install -y mysql-server

But when building the image, I found out we can log in to mysql without a username and password. How can I set the username and password when I use a dockerfile to build my images?
How to set mysql username in dockerfile
Docker containers, by default, run inside an isolated network namespace where they do not have access to the host network configuration (including iptables).

If you want your container to be able to modify the network configuration of the host, you need to pass the --net=host option to docker run. From the docker-run(1) man page:

--net="bridge"
   Set the Network mode for the container
   'bridge': creates a new network stack for the container on the docker bridge
   'none': no networking for this container
   'container:<name|id>': reuses another container network stack
   'host': use the host network stack inside the container.
   Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure.

You will need to run with both --privileged and --net=host.
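A hedged example of such a run command (the image name is a placeholder, and mounting the host's logs read-only is just one common way to feed fail2ban; adjust to your setup):

docker run -d --name fail2ban \
  --net=host \
  --privileged \
  -v /var/log:/var/log:ro \
  my-fail2ban-image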
I want to run a docker container with a central log and a fail2ban service to prevent dos/ddos attacks. I'm having a problem running a container with such capabilities that it could also modify the host's iptables.

There is a project, ianblenke/docker-fail2ban, however it does not work...

Giving the container the privileged flag only allows me to control iptables in this container. Is there any way to control the host's iptables through the container?

Regards.
Docker - modifying IPTABLES for host from container
If I understand your question correctly, you're asking how something can be run (specifically in the context of docker) without invoking a command shell.

The way things are run by the linux kernel is usually via the exec family of system calls. You pass it the path to the executable you want to run and the arguments that need to be passed to it, via an execl call for example.

This is actually what your shell (sh, bash, ksh, zsh) does under the hood anyway. You can observe this yourself if you run something like strace -f bash -c "cat /tmp/foo"

In the output of that command you'll see something like this:

execve("/bin/cat", ["cat", "/tmp/foo"], [/* 66 vars */]) = 0

What's really going on is that bash looks up cat in $PATH, and it then finds that cat is actually an executable binary available at /bin/cat. It then simply invokes it via execve with the correct arguments, as you can see above.

You can trivially write a C program that does the same thing as well. This is what such a program would look like:

#include <unistd.h>

int main() {
    execl("/bin/cat", "/bin/cat", "/tmp/foo", (char *)NULL);
    return 0;
}

Every language provides its own way of interfacing with these system calls. C does, Python does, and Go, which is what's used to write Docker for the most part, does as well. A RUN instruction in docker likely translates to one of these exec calls when you hit docker build. You can run strace -f docker build and then grep for exec calls in the log to see how the magic happens.
I'm learning about Docker at the moment and going through the Dockerfile reference, specifically the RUN instruction. There are two forms of RUN: the shell form, which runs the command in a shell, and the exec form, which "does not invoke a command shell" (quoted from the Note section).

If I understood the documentation correctly, my question is: if, and how, can Docker run a command without a shell?

Note that the answers to "Can a command be executed without a shell?" don't actually answer the question.
How does Docker run a command without invoking a command shell?
I had the same issues. The posted solutions didn't fit my requirements. Here is my solution. If you run more than one container, iterate over the cids in /var/lib/vagrant/cids/

The first script disables the docker-daemon container autostart at boot. The second script starts the container by its CID only if it isn't running.

This works for the initial vagrant up and the following vagrant [ up | reload ] --provision

# -*- mode: ruby -*-
# vi: set ft=ruby :

$disableAutostart = <
I have a Vagrant virtualbox which hosts a Docker container. The host machine has a folder which needs to be accessible in the VM and the container:

Host: /host/path => VM: /vagrant/path => Container: /docker/path

Background: /host/path/ holds the development files for a project which are available at container level to ensure automatic reloads when a change was made.

Configuration

Vagrant:

Vagrant.configure("2") do |config|
  config.vm.synced_folder "/host/path", "/vagrant/path"
end

Docker:

docker run -name mycontainer -d -v /vagrant/path:/docker/path my/image

Problem

This configuration works until I reload the VM. For example, when I restart my computer and start the VM with vagrant up, the docker container only recognizes an empty folder in /docker/path. I guess that could be some timing or sequencing issue. /vagrant/path is not empty and has the correct content.

My workaround at the moment is to reload the container after each restart of the VM:

docker rm mycontainer
docker kill mycontainer
docker run -name mycontainer -d -v /vagrant/path:/docker/path my/image

That feels wrong. Any ideas?
Shared Volume in Docker through Vagrant
Please check that the program in the container is listening on the 0.0.0.0 interface.

In the container, run the command:

ss -lntp

If it appears like:

LISTEN 0 128 127.0.0.1:5000 *:*

that means your web app only listens on localhost, so the container host cannot access your web app. You should make your server listen on the 0.0.0.0 interface by changing your web app build setting.

For example, if your server is a nodejs app:

var app = connect().use(connect.static('public')).listen(5000, "0.0.0.0");

If your server uses webpack:

"scripts": {
  "dev": "webpack-dev-server --host 0.0.0.0 --port 5000 --progress"
}
I'm using Docker for Mac. I have a container that runs a server; for example, my server runs on port 5000. I have exposed this port in the Dockerfile.

When my container is running, I connect to the container and check whether this server is working or not by running the command below, and see that it returns data (a bunch of html and javascript):

wget -d localhost:5000

Note, I start this container and also publish the port outside with the command:

docker run -d -p 5000:5000

But on the docker host (my mac, running El Capitan), I open Chrome and go to the address localhost:5000. It doesn't work. Just a little note: if I go to any arbitrary port such as localhost:4000, I see an error message from Chrome such as:

This site can't be reached. localhost refused to connect.

But the error message for localhost:5000 is:

The localhost page isn't working. localhost didn't send any data.

So it seems I have configured it to work "a little", but something is wrong. Please tell me how to fix this.
Docker: cannot open port from container to host
To control the master key the Function host uses on startup - instead of generating random keys - prepare your own host_secrets.json file like:

{
  "masterKey": {
    "name": "master",
    "value": "asGmO6TCW/t42krL9CljNod3uG9aji4mJsQ7==",
    "encrypted": false
  },
  "functionKeys": [{
    "name": "default",
    "value": "asGmO6TCW/t42krL9CljNod3uG9aji4mJsQ7==",
    "encrypted": false
  }]
}

and then feed this file into the designated secrets folder of the Function host (Dockerfile):

For V1 Functions (assuming your runtime root is C:\WebHost):

...
ADD host_secrets.json C:\\WebHost\\SiteExtensions\\Functions\\App_Data\\Secrets\\host.json
...

For V2 Functions (assuming your runtime root is C:\runtime):

...
ADD host_secrets.json C:\\runtime\\Secrets\\host.json
USER ContainerAdministrator
RUN icacls "c:\runtime\secrets" /t /grant Users:M
USER ContainerUser
ENV AzureWebJobsSecretStorageType=files
...

The function keys can be used to call protected functions like .../api/myfunction?code=asGmO6TCW/t42krL9CljNod3uG9aji4mJsQ7==.

The master key can be used to call Functions Admin API and Key management API.

In my blog I describe the whole journey of bringing the V1 and later V2 Functions runtime into Docker containers and hosting those in Service Fabric.

For V3 Functions on Windows:

ENV FUNCTIONS_SECRETS_PATH=C:\Secrets
ENV AzureWebJobsSecretStorageType=Files
ADD host_secrets.json C:\\Secrets\\host.json

For V3 Functions on Linux:

RUN mkdir /etc/secrets/
ENV FUNCTIONS_SECRETS_PATH=/etc/secrets
ENV AzureWebJobsSecretStorageType=Files
ADD host_secrets.json /etc/secrets/host.json
I am playing around with an Http Triggered Azure Function in a Docker container. Up to now, all tutorials and guides I found on setting this up configure the Azure Function with the authLevel set to "anonymous".

After reading this blog carefully, it seems possible (although tricky) to also configure other authentication levels. Unfortunately the promised follow-up blog post has not (yet) been written.

Can anyone help me clarify how I would go about setting this up?
Http Trigger Azure Function in Docker with non anonymous authLevel
It works.

In the Dockerfile:

# sendmail config ############################################
RUN apt-get install -q -y ssmtp mailutils
# root is the person who gets all mail for userids < 1000
RUN echo "[email protected]" >> /etc/ssmtp/ssmtp.conf
# Here is the gmail configuration (or change it to your private smtp server)
RUN echo "mailhub=smtp.gmail.com:587" >> /etc/ssmtp/ssmtp.conf
RUN echo "[email protected]" >> /etc/ssmtp/ssmtp.conf
RUN echo "AuthPass=yourGmailPass" >> /etc/ssmtp/ssmtp.conf
RUN echo "UseTLS=YES" >> /etc/ssmtp/ssmtp.conf
RUN echo "UseSTARTTLS=YES" >> /etc/ssmtp/ssmtp.conf
# Set up php sendmail config
RUN echo "sendmail_path=sendmail -i -t" >> /usr/local/etc/php/conf.d/php-sendmail.ini

For testing inside the php sendmail container:

echo "Un message de test" | mail -s "sujet de test" [email protected]

I succeeded with the help of these two documents:

https://unix.stackexchange.com/questions/36982/can-i-set-up-system-mail-to-use-an-external-smtp-server
https://github.com/cmaessen/docker-php-sendmail/blob/master/Dockerfile
I'm on ubuntu 16.04. I have a (testing) docker (docker-compose) container running php 5.6 and apache 2.4. On the production platform (without docker) the mail is sent with sendmail. How do I send a test email from the docker container (with sendmail)? Thanks in advance for responses.
Send email on testing docker container with php and sendmail
Exposing ports in a container does not imply that the ports will be opened on the docker host. You should be using the docker run -p option. The documentation says:

-p=[] : Publish a container's port or a range of ports to the host
format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
Both hostPort and containerPort can be specified as a range of ports. When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range. (e.g., -p 1234-1236:1234-1236/tcp) (use 'docker port' to see the actual mapping)

Since you tried the -p containerPort form, the actual port opened on your host (Linux Mint) was randomly chosen by docker when you ran the docker run command. To figure out what port was chosen, you have to use the docker port command.

Since this is not convenient, you should use the -p hostPort:containerPort form and specify that hostPort is 35729. (I also assume you expect ports 80, 8080 and 3000 to be accessible in the same manner.)

The command to run your container would then be:

docker run --name=gulp_container -i -t --rm \
  -v /var/www/my_app:/var/www/my_app:rw \
  -p 35729:35729 \
  -p 80:80 \
  -p 8080:8080 \
  -p 3000:3000 \
  gulp_image bash

An easier way to deal with ports is to run your docker container in host networking mode. In this mode, any port opened on the container is in fact opened on the host network interface (they are actually both sharing the same interface).

You would then start your container with:

docker run --name=gulp_container -i -t --rm \
  -v /var/www/my_app:/var/www/my_app:rw \
  --net=host \
  gulp_image bash
I created a docker container to run tasks with gulp. All tasks are running, the problem is I can't enable livrereload in Chrome although I exposed the 35729 port in my container.Here is the Dockerfile :FROM ubuntu:latest MAINTAINER jiboulex EXPOSE 80 8080 3000 35729 RUN apt-get update RUN apt-get install curl -y RUN apt-get install software-properties-common -y RUN add-apt-repository ppa:chris-lea/node.js RUN apt-get update RUN apt-get install nodejs -y RUN curl -L https://www.npmjs.com/install.sh | sh RUN npm install --global gulp -y # overwrite this with 'CMD []' in a dependent Dockerfile CMD ["/bin/bash"]I create the image with the following command :docker build -t gulp_image .I create a container :docker run --name=gulp_container -i -t --rm -v /var/www/my_app:/var/www/my_app:rw gulp_image bashthen in my containercd /var/www/my_app gulpHere is my Gulpfile.jsvar gulp = require('gulp'), livereload = require('gulp-livereload'), exec = require('child_process').exec; gulp.task('js', function() { gulp.src([ './src/js/*.js' ]).pipe(livereload()); }); gulp.task('watch', function(){ var onChange = function (event) { console.log('File '+event.path+' has been '+event.type); }; livereload.listen(); gulp.watch([ './src/js/*.js' ], ['js']) .on('change', onChange); }); gulp.task('default', ['watch', 'js']);When I edit a js file, I can see in my container that the files are processed but when I try to enable live reload in my browser (Chrome), I got the following message : "Could not connect to LiveReload server.."Anyone got a clue about what I missed or didn't do ? Thanks for reading !
How to run livereload with gulp within a docker container?
You can adjust the history limit in swarm by running:

docker swarm update --task-history-limit=1

which will only keep one previous task instead of the default 5. See the CLI docs for more details: https://docs.docker.com/engine/reference/commandline/swarm_update/
When running docker in swarm mode, a history of past tasks accumulates as docker services are updated. Running docker node ps displays the log of tasks. How do I clear this log without removing the service?
How to clear Docker task history
You probably don't have an internet connection from the container. I had a similar issue when connecting from a containerized java application to a public web service.

At first I would try to restart docker:

systemctl restart docker

If that does not help, then look at resolv.conf in your container:

docker run --rm myflaskimage cat /etc/resolv.conf

If it shows nameserver 127.x.x.x, then you can try:

1) on the host system, comment out the dns=dnsmasq line in the /etc/NetworkManager/NetworkManager.conf file with a # and restart NetworkManager using systemctl restart network-manager

2) or explicitly set DNS for docker by adding this into the /etc/docker/daemon.json file and restarting docker:

{
    "dns": ["my.dns.server"]
}
I have aflaskbased python code which simply connects tomongodb.It has two routesGetPost.Getsimply printshello worldand usingPostwe can post any json data which is later saved inMongoDBThis python code is working fine.MongoDBis hosted on cloud.I have now created a Dockerfile:FROM tiangolo/uwsgi-nginx-flask:python3.6-alpine3.7 RUN pip3 install pymongo ENV LISTEN_PORT=8000 EXPOSE 8000 COPY /app /appUsing command to rundocker run --rm -it -p 8000:8000 myflaskimageAfter starting the container for this docker image, I am getting response ofGETbut no response fromPOST. I am usingPostmansoftware to post json data. I get below error:pymongo.errors.ServerSelectionTimeoutError: No servers found yetI am bit confused as to why the python code is working fine but when I put the same in docker and start container, it throws error. Do we have to include anything inDockerfileto enable connections toMongoDB.Please help. ThanksPython Code:from flask import Flask, request from pymongo import MongoClient app = Flask(__name__) def connect_db(): try: client = MongoClient() return client.get_database() except Exception as e: print(e) def main(): db = connect_db() collection = db.get_collection('') @app.route('/data', methods=['POST']) def data(): j_data = request.get_json() x = collection.insert_one(j_data).inserted_id return "Data added successfully" @app.route('/') def hello_world(): return "Hello World" main() if __name__ == '__main__': app.run()
Docker container not able to connect to remote MongoDB
The root cause of this issue is that your docker daemon is not authenticated with the credentials necessary to push to gcr.io. For the original question, I believe this is likely because the user account being used was _token instead of oauth2accesstoken.

I was experiencing an error similar to this, except that instead of using docker login, I was using docker-credential-gcr and was getting the same unexpected EOF error.

My problem was the fact that I was running on GCE, from which docker-credential-gcr was detecting and using a different service account via the GCE metadata API.

So, for others experiencing this issue who are running on GCP and trying to authenticate a service account via docker-credential-gcr: you need to tell it to only look at the gcloud credentials, instead of looking at the environment for the metadata API details. My flow looks like this now:

gcloud auth activate-service-account --key-file=$FILE
docker-credential-gcr configure-docker --token-source="gcloud"
docker push gcr.io/....

Hope it helps someone.
I'm trying to push to the Google container registry from my Jenkins. The builds run inside the Kubernetes Jenkins Plugin, which uses the gcr.io/cloud-solutions-images/jenkins-k8s-slave to build the docker image into the Kubernetes native Docker.After authenticating to the Google container registry I'm trying to push the newly built image. This is my pipeline script:def imageTag = 'gcr.io/project-id/tag' def version = version from pom sh './mvnw package' sh "docker build -t $imageTag:$version ." sh('gcloud auth activate-service-account --key-file=$FILE') sh('docker login -p $(gcloud auth print-access-token) -u _token https://gcr.io') sh("gcloud docker -- push $imageTag:$version")The push fails with the following output:c6ff94654483: Preparing 209db64c273a: Preparing 762429e05518: Preparing 2be465c0fdf6: Preparing 5bef08742407: Preparing c6ff94654483: Retrying in 5 seconds 5bef08742407: Retrying in 5 seconds 209db64c273a: Retrying in 5 seconds 2be465c0fdf6: Layer already exists 762429e05518: Layer already exists c6ff94654483: Retrying in 4 seconds 5bef08742407: Retrying in 4 seconds 209db64c273a: Retrying in 4 seconds c6ff94654483: Retrying in 3 seconds 5bef08742407: Retrying in 3 seconds 209db64c273a: Retrying in 3 seconds c6ff94654483: Retrying in 2 seconds 5bef08742407: Retrying in 2 seconds 209db64c273a: Retrying in 2 seconds c6ff94654483: Retrying in 1 second 5bef08742407: Retrying in 1 second 209db64c273a: Retrying in 1 second 5bef08742407: Retrying in 10 seconds ... unexpected EOF
Push to google container registry fails: Retrying
In a default vue-cli setup, npm start (the command you are using) runs npm run dev. And, again by default, npm run dev binds to localhost only.

Add --host 0.0.0.0 to your webpack-dev-server line in package.json so you can access it from outside the docker container.

From something like:

"scripts": {
    "dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js",

To something like (add --host 0.0.0.0):

"dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js --host 0.0.0.0",

Note: I'm assuming, because you used CMD ["npm", "start"], that you are creating a container for development or debugging purposes. If you are targeting production, you should really consider generating the bundle (npm run build) and serving the generated files directly on an HTTP server like nginx (which could be created in a docker as well).
I am unable to access the site locally on http://172.17.0.2:8080/ in Chrome, I get "172.17.0.2 took too long to respond".

I used the inspect command to obtain the IP address of the container:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' e83c95d05d63

The run command that I used:

docker run -it -p 8080:8080 --name portfolio-vue portfolio-vue:v1

And my Dockerfile:

FROM node:7.7-alpine
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /opt/portfolio-vue && cp -a /tmp/node_modules /opt/portfolio-vue-app
WORKDIR /opt/portfolio-vue
COPY . /opt/portfolio-vue
EXPOSE 8080
CMD ["npm", "start"]
How to Containerize a Vue.js app?
You can force using the local image by retagging the existing image:

docker tag remote/image local_image

And then inside the compose file use local_image instead of remote/image.
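As an illustration only (the image and service names below are made up, not taken from the question): retag whatever your sbt docker plugin built locally to a name that cannot exist on Docker Hub, and point the Compose service at that tag.

docker images                                  # confirm the locally built image is present
docker tag myorg/my-service:latest my-service-local
docker-compose up -d my-service                # the service's image: is set to my-service-local

Because my-service-local does not exist on any remote registry, Compose can only resolve it from the local image cache.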
I have a docker-compose file and want to be able to make one of the images be spun up from the image in my local cache vs. pulling from dockerhub. I'm using the sbt docker plugin, so I can see the image being created, and can see it when I dodocker imagesat the command line. Yet, when I dodocker-compose up -d myimageit always defaults to the remote image. How can I force it to use my local image??Here is the relevant part of my compose file:spark-master: image: gettyimages/spark:2.2.0-hadoop-2.7 command: bin/spark-class org.apache.spark.deploy.master.Master -h spark-master hostname: spark-master environment: MASTER: spark://spark-master:7077 SPARK_CONF_DIR: /conf SPARK_PUBLIC_DNS: localhost expose: - 7001 - 7002 - 7003 - 7004 - 7005 - 7006 - 7077 - 6066 ports: - 4040:4040 - 6066:6066 - 7077:7077 - 8080:8080 volumes: - ./conf/master:/conf - ./data:/tmp/data hydra-streams: image: ****/hydra-spark-core command: bin/spark-class org.apache.spark.deploy.worker.Worker spark://spark-master:7077 hostname: worker environment: SPARK_CONF_DIR: /conf SPARK_WORKER_CORES: 2 SPARK_WORKER_MEMORY: 1g SPARK_WORKER_PORT: 8881 SPARK_WORKER_WEBUI_PORT: 8091 SPARK_PUBLIC_DNS: localhost links: - spark-master expose: - 7012 - 7013 - 7014 - 7015 - 7016 - 8881 ports: - 8091:8091 volumes: - ./conf/worker:/conf - ./data:/tmp/data
How to use local Docker image with docker-compose?
1) General idea: Docker is not Vagrant. It is wrong to put two different services into one container! Split it into two different images and link them together. Don't build a monolithic image like this.

Check and follow https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/

Avoid installing unnecessary packages
Run only one process per container
Minimize the number of layers

If you do that:

you will remove your supervisor
you can decrease the number of layers

It should be something like (example):

FROM alpine

RUN apk add --update \
    wget \
    curl

RUN apk add --update \
    php \
    php-fpm \
    php-pdo \
    php-json \
    php-openssl \
    php-mysql \
    php-pdo_mysql \
    php-mcrypt \
    php-ctype \
    php-zlib

RUN usermod -u 1000 www-data

RUN rm -rf /var/cache/apk/*

EXPOSE 9000

For nginx it is enough to use the default image and mount the configs. A docker-compose file like:

nginx:
    image: nginx
    container_name: site.dev
    volumes:
        - ./myconf1.conf:/etc/nginx/conf.d/myconf1.conf
        - ./myconf2.conf:/etc/nginx/conf.d/myconf2.conf
        - $PWD/cms:/srv/cms
    ports:
        - "80:80"
    links:
        - phpfpm
phpfpm:
    build: ./phpfpm/
    container_name: phpfpm.dev
    command: php5-fpm -F --allow-to-run-as-root
    volumes:
        - $PWD/cms:/srv/cms

2) Add RUN usermod -u 1000 www-data into the Dockerfile for the php container; it will fix the problem with permissions.
I run Docker 1.8.1 in OSX 10.11 via an local docker-machine VM.I have the following docker-compose.yml:web: build: docker/web ports: - 80:80 - 8080:8080 volumes: - $PWD/cms:/srv/cmsMy Dockerfile looks like this:FROM alpine # install nginx and php RUN apk add --update \ nginx \ php \ php-fpm \ php-pdo \ php-json \ php-openssl \ php-mysql \ php-pdo_mysql \ php-mcrypt \ php-ctype \ php-zlib \ supervisor \ wget \ curl \ && rm -rf /var/cache/apk/* RUN mkdir -p /etc/nginx && \ mkdir -p /etc/nginx/sites-enabled && \ mkdir -p /var/run/php-fpm && \ mkdir -p /var/log/supervisor && \ mkdir -p /srv/cms RUN rm /etc/nginx/nginx.conf ADD nginx.conf /etc/nginx/nginx.conf ADD thunder.conf /etc/nginx/sites-enabled/thunder.conf ADD nginx-supervisor.ini /etc/supervisor.d/nginx-supervisor.ini WORKDIR "/srv/cms" VOLUME "/srv/cms" EXPOSE 80 EXPOSE 8080 EXPOSE 22 CMD ["/usr/bin/supervisord"]When I run everything withdocker-compose upeverything works fine, my volumes are mounted at the correct place.But the permissions in the mounted folder /srv/cms look wrong. The user is "1000" and the group is "50" in the container. The webserver could not create any files in this folder, because it runs with the user "root".
Wrong permissions in volume in Docker container
go-wrapper has been deprecated and removed from the images using go version 10 and above. See here.

If you are fine using go v1.9 you can use the following image: golang:1.9.6-alpine3.7. So your Dockerfile will be:

FROM golang:1.9.6-alpine3.7 AS builder
WORKDIR /go/src/app
COPY . .
RUN apk add --no-cache git
RUN go-wrapper download   # "go get -d -v ./..."
RUN go-wrapper install    # "go install -v ./..."

#final stage
FROM alpine:latest
RUN apk --no-cache add ca-certificates
COPY --from=builder /go/bin/app /app
ENTRYPOINT ./app
LABEL Name=cloud-native-go Version=0.0.1
EXPOSE 3000

Note that the first stage is named builder so that the COPY --from=builder line can reference it.
I've a golang application which I want to build a docker image for. The application folder is called cloud-native-go and the dockerfile is under the project root. Any idea what is wrong here?

FROM golang:alpine3.7
WORKDIR /go/src/app
COPY . .
RUN apk add --no-cache git
RUN go-wrapper download   # "go get -d -v ./..."
RUN go-wrapper install    # "go install -v ./..."

#final stage
FROM alpine:latest
RUN apk --no-cache add ca-certificates
COPY --from=builder /go/bin/app /app
ENTRYPOINT ./app
LABEL Name=cloud-native-go Version=0.0.1
EXPOSE 3000

The error is:

Step 5/12 : RUN go-wrapper download # "go get -d -v ./..."
 ---> Running in 70c2e00f332d
/bin/sh: go-wrapper: not found

I build it with docker build -t cloud-native-go:1.0.0 .
Docker for golang application
The build script for the docker dnsmasq service needs to be changed in order to bind to your server's public IP, which in this case is 192.168.1.12 on my eth0 interface:

#!/bin/bash
NIC="eth0"
name="dnsmasq_"
timenow=$(date +%s)
name="$name$timenow"
MY_IP=$(ifconfig $NIC | grep 'inet addr:'| grep -v '127.0.0.1' | cut -d: -f2 | awk '{ print $1}')

sudo docker run \
    -v="$(pwd)/dnsmasq.hosts:/dnsmasq.hosts" \
    --name=$name \
    -p=$MY_IP:53:5353/udp \
    -d sroegner/dnsmasq

On the host (in this case ubuntu 12), you need to update the resolv.conf or /etc/network/interfaces file so that you have registered your public IP (eth0 or eth1 device) as the nameserver.

You may want to set a secondary nameserver to be google for whenever the container is not running, by changing the line to be

dns-nameservers xxx.xxx.xxx.xxx 8.8.8.8

i.e. there is no comma or extra line.

You then need to restart your networking service

sudo /etc/init.d/networking restart

if you updated the /etc/network/interfaces file, so that this auto updates the /etc/resolv.conf file that docker will copy to the container during the build.

Now restart all of your containers:

sudo docker stop $CONTAINER_ID
sudo docker start $CONTAINER_ID

This causes their /etc/resolv.conf files to update, so they point to the new nameserver settings.

DNS lookups in all your docker containers (that you built since making the changes) should now work using your dnsmasq container!

As a side note, this means that docker containers on other hosts can also take advantage of your dnsmasq service on this host, as long as their host's nameserver settings are set to use this server's public IP.
I'm trying to set up a docker dnsmasq container so that I can have all my docker containers look up the domain names rather than having hard-coded IPs (if they are on the same host). This fixes an issue with the fact thatone cannot alter the /etc/hosts file in dockercontainers, and this allows me to easily update all my containers in one go, by altering a single file that the dnsmasq container references.It looks like someone has already done the hard work for me and created adnsmasq container. Unfortunately, it is not "working" for me. I wrote a bash script to start the container as shown below:name="dnsmasq_" timenow=$(date +%s) name="$name$timenow" sudo docker run \ -v="$(pwd)/dnsmasq.hosts:/dnsmasq.hosts" \ --name=$name \ -p='127.0.0.1:53:5353/udp' \ -d sroegner/dnsmasqBefore running that, I created the dnsmasq.hosts directory and inserted a single file within it called hosts.txt with the following contents:192.168.1.3 database.mydomain.comUnfortunately whenever I try to ping that domain from within:the hostThe dnsmasq containeranother container on the same hostI always receive theping: unknown hosterror message.I tried starting the dnsmasq container without daemon mode so I could debug its output, which is below:dnsmasq: started, version 2.59 cachesize 150 dnsmasq: compile time options: IPv6 GNU-getopt DBus i18n DHCP TFTP conntrack IDN dnsmasq: reading /etc/resolv.dnsmasq.conf dnsmasq: using nameserver 8.8.8.8#53 dnsmasq: read /etc/hosts - 7 addresses dnsmasq: read /dnsmasq.hosts//hosts.txt - 1 addressesI am guessing that I have not specified the-pparameter correctly when starting the container.Can somebody tell me what it should be for other docker containers to lookup the DNS, or whether what I am trying to do is actually impossible?
Setting Up Docker Dnsmasq
VS Code uses an SSH tunnel to connect to the remote machine. The port forwarding simply creates this tunnel. You can do it without vscode with the command below if you have an ssh client installed. You have to run this command from the localhost shell prompt. (I assumed we want to connect to port 8080 on remote-machine using localhost:8085.)

ssh -L 8085:remote-machine-ip:8080 remote-machine-ip

Now, from your browser, if you go to http://localhost:8085 it will show content from remote-machine's 8080 service.
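If you want the same tunnel without retyping the command each time, the forwarding can also be declared in ~/.ssh/config; this is only a sketch, and the host alias and ports are placeholders:

Host remote-machine
    HostName remote-machine-ip
    LocalForward 8085 localhost:8080

After that, plain "ssh remote-machine" (or any tool that reuses the SSH config) sets up the same forward automatically.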
When using VSCode Remote Development "Open Folder in Container" to develop in a docker container on Mac, I can not find:

Any info about the port by docker inspect containerId
Any port config in the Dockerfile

But I can still access the service in the container from the host browser.
How does VSCode [Remote Development] [Forward Port] work?
To add an insecure docker registry, add the file /etc/docker/daemon.json (in Linux) with the following content:

{
    "insecure-registries" : [ "your.registry.host:5000" ]
}

and then you need to restart docker.

In the case of Windows the file is at the following path: C:\ProgramData\docker\config\daemon.json
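As a rough sketch of the restart-and-verify step on a Linux host with systemd (the registry address is a placeholder):

sudo systemctl restart docker
docker info                                   # the host should now be listed under "Insecure Registries"
docker pull your.registry.host:5000/some/image

On Windows the equivalent restart is typically done through the Docker Desktop/service restart rather than systemctl.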
While trying to pull Windows images from a Private Docker Registry, I'm getting the following error:

x509: certificate signed by unknown authority

I've installed the proper certificate and I can pull Linux images without any issue, but for some reason I'm unable to pull Windows ones.

My co-workers don't have this problem.

Any ideas on this one?
Private Docker Registry: 'x509: certificate signed by unknown authority' only for Windows images
Using docker volumes on a cluster like Kubernetes gives you no data persistency. The workload can get scheduled on a different node and you're done. To provide persistent storage in a Kubernetes cluster you need to use the Kubernetes solution to the problem, i.e. PersistentVolumes and PersistentVolumeClaims, rather than plain Docker volumes.
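For illustration, a minimal PersistentVolumeClaim and the pod-side reference to it; the claim name, size and mount path are made up, and the storage class/provisioner depends entirely on your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi

# then, inside the database pod/statefulset spec:
#   volumes:
#   - name: data
#     persistentVolumeClaim:
#       claimName: db-data
#   containers[].volumeMounts:
#   - name: data
#     mountPath: /var/lib/postgresql/data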
Both Docker images & Kubernetes clusters have mechanisms to configure persistent storage on the host machine, a separate container, or just some form of cloud/network storage mechanism.I'm trying to understand how they are different in use cases and why you'd use one over the other. For context, I'm also looking at this more with transactional database persistence in mind, rather than log files or for a shared file/folder access.Thanks in advance!
Docker volume vs Kubernetes persistent volume for databases
How can I ensure the target host will have the libraries needed to compile the node-gyp modules?

The target host is running docker as well. As long as the dependencies are in your image then your server has them as well. That's the entire point with docker if you ask me. If it runs locally, then it runs on the server as well.

I'd go with node-alpine (FROM node:8-alpine) for even smaller files. I struggled with node-gyp before I wrapped my head around it, but now I don't even see how I ever thought it was a problem. As long as you add the build tools

RUN apk add python make gcc g++

you are good to go (this adds some 100-200mb to the size however).

Also, if it ever gets time consuming (say you find yourself rebuilding your image with --no-cache every now and then), then it can be a good idea to split it up into a base image of your own and another image FROM my-base-image:latest which contains the things that you change more often.

There is some learning curve for sure, but I didn't find it that steep. At least not if you have touched docker before.

The other way I'm looking at is to build the Dockerfile FROM ubuntu:version.

I had only used CentOS before jumping on docker, and I run CentOS on my servers. So I thought it would be a good idea to run CentOS images as well, but I found that to be just silly. There is absolutely zero gain unless you need something very OS-specific. Now I've only used alpine for maybe half a year, and so far the only alpine-specific command I've needed to learn is apk add/del.

And you probably know already, but don't spend too much time optimizing docker file size in the beginning. (You can reduce layer size a lot by combining commands on one line: adding packages, running a command, removing packages. But that cancels out the use of the docker image cache if you make any small changes in big layers. Better to leave that out until it matters.)
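A minimal sketch of the Alpine approach described above; the entry file name is an assumption, and the exact apk packages depend on which native modules you build (canvas/lwip may need extras such as cairo-dev):

FROM node:8-alpine
# toolchain node-gyp needs to compile native addons
RUN apk add --no-cache python make gcc g++
WORKDIR /app
COPY package.json ./
RUN npm install --production
COPY . .
CMD ["node", "server.js"]

Because the compiled addons end up inside the image, the target host only needs Docker; it never has to run node-gyp itself.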
I'm planning to use Docker to deploy a node.js app. The app has several dependencies that require node-gyp. Node-gyp builds these modules (e.g. canvas, lwip, qrcode) against compiled libraries on the delivery platform, and in my experience these builds can be highly dependent on the o/s version and libraries installed, and they often break a simplenpm install.So is building my DockerfileFROM node:versionthe correct approach? This seems to be the approach shown in every Docker/Node tutorial I've found so far. But if I build from a node image, what will happen when I deploy the container? How can I ensure the target host will have the libraries needed to compile the node-gyp modules?The other way I'm looking at is to build the DockerfileFROM ubuntu:version. But I think this would mean installing nodeJS into the Ubuntu image and the whole thing would be much larger.Are there other ways of handling this?
Using Docker with nodejs with node-gyp dependencies
You need the -y parameter for apt:

FROM node:latest

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get -qq update && \
    apt-get -yqq install krb5-user libpam-krb5 && \
    apt-get -yqq clean

COPY / ./

EXPOSE 3000

CMD ["npm", "start"]

And pay attention that each RUN directive creates one additional layer in the image. That means your clean command will create a new layer, but all the package cache will remain in the other layers. So reducing the number of these directives is a good idea; it will help you shrink the image size.
I am trying to createDockerimage by nextDockerfile. It must to installKerberosclient.Dockerfile:FROM node:latest RUN export DEBIAN_FRONTEND=noninteractive RUN apt-get -qq update RUN apt-get -qq install krb5-user libpam-krb5 RUN apt-get -qq clean COPY / ./ EXPOSE 3000 CMD ["npm", "start"]Next commandRUN apt-get -qq install krb5-user libpam-krb5from Dockerfile ask me to enter the value to interactive prompt which looks like:Default Kerberos version 5 realm:The point is that the command does not terminate even if I write value and press enter. Whats wrong and how to fix it?
How to install kerberos client in docker?
You're correct, it's because amazon-linux-extras only works with Python 2. You can modify the RUN instruction to

RUN PYTHON=python2 amazon-linux-extras install epel -y
Here is my (simplified) Dockerfile

# https://docs.aws.amazon.com/lambda/latest/dg/images-create.html#images-create-from-base
FROM public.ecr.aws/lambda/python:3.8

# get the amazon linux extras
RUN yum install -y amazon-linux-extras
RUN amazon-linux-extras install epel -y

When it reaches the RUN amazon-linux-extras install epel -y line during the build, it gets

Step 6/8 : RUN amazon-linux-extras install epel -y
 ---> Running in dbb44f57111a
/var/lang/bin/python: No module named amazon_linux_extras
The command '/bin/sh -c amazon-linux-extras install epel -y' returned a non-zero code: 1

I think that has to do with some python 2 vs. 3 stuff, but I'm not sure
No module named amazon_linux_extras when running amazon-linux-extras install epel -y
You can use a python base image

FROM python:2.7

This base image will have python pre-configured, so you don't need to install python separately. Hope it helps.

Here is the list of available images.

For quick reference please check https://blog.realkinetic.com/building-minimal-docker-containers-for-python-applications-37d0272c52f3
I have node app and in one use case I am calling python script from node usingpython-shell. I am trying to setup this app on docker and my Dockerfile looks something like this:FROM debian:latest # replace shell with bash so we can source files RUN rm /bin/sh && ln -s /bin/bash /bin/sh # update the repository sources list # and install dependencies RUN apt-get update \ && apt-get install -y curl \ && apt-get -y autoclean # nvm environment variables ENV NVM_DIR /usr/local/nvm ENV NODE_VERSION 10.15.3 # install nvm # https://github.com/creationix/nvm#install-script RUN curl --silent -o-https://raw.githubusercontent.com/creationix/nvm/v0.31.2/install.sh | bash # install node and npm RUN source $NVM_DIR/nvm.sh \ && nvm install $NODE_VERSION \ && nvm alias default $NODE_VERSION \ && nvm use default # add node and npm to path so the commands are available ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH # confirm installation RUN node -v RUN npm -v RUN apt-get -y install python2.7 COPY package.json . RUN npm install COPY . . CMD ["npm","run","start"]after building and running this container when I try to invoke use case where python script gets called from node I am getting this error.null events.js:174 throw er; // Unhandled 'error' event ^ Error: spawn /usr/lib/python2.7 EACCES at Process.ChildProcess._handle.onexit (internal/child_process.js:240:19) at onErrorNT (internal/child_process.js:415:16) at process._tickCallback (internal/process/next_tick.js:63:19) Emitted 'error' event at: at Process.ChildProcess._handle.onexit (internal/child_process.js:246:12) at onErrorNT (internal/child_process.js:415:16) at process._tickCallback (internal/process/next_tick.js:63:19) npm ERR! code ELIFECYCLE npm ERR! errno 1Help on setting up just python2.7 in a docker container?
How do I setup only python 2.7 in a docker container?
Turns out there wasn't anything wrong with my files. I created a new directory on my pc, created new files and copied the contents of the start.sh and Dockerfile and my app there. The error was gone. This has to be some serious bug; my friend just got the same error with other files that work on my pc as well. Maybe some issue with Docker and Windows 10.

EDIT: couldn't fix it for my friend and I ran into the same issue again. Someone an idea how to fix?

SOLUTION: It is an incompatibility between the start.sh created under Windows (CRLF line endings) and the line endings Linux needs (LF). To solve this, add this to the dockerfile after you copy the start.sh:

RUN dos2unix /start.sh

If dos2unix is not installed, you have to install it first:

RUN apt-get install dos2unix
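Putting the pieces together, a sketch of the relevant Dockerfile lines for a Debian/Ubuntu base where dos2unix comes from apt (order and file name match the question's setup, the rest is illustrative):

COPY start.sh /
RUN apt-get update && apt-get install -y dos2unix \
 && dos2unix /start.sh \
 && chmod +x /start.sh
ENTRYPOINT ["/start.sh"]

An alternative that avoids installing anything is stripping the carriage returns with sed, e.g. RUN sed -i 's/\r$//' /start.sh.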
I'm trying to build a docker container, but it doesn't seem to find my start.sh. It copies it to the container, but it somehow doesnt work.This is my dockerfile:FROM ubuntu:16.04 # Install Meteor RUN apt-get update RUN apt-get install -y curl RUN curl https://install.meteor.com/ | sh RUN meteor npm install --save highcharts # Entypointscript COPY start.sh / RUN chmod u+x /start.sh # Copy App COPY /app /app # UI Expose EXPOSE 80 ENTRYPOINT /start.shAnd this is my start.sh:#!/bin/bash sleep 20 /app/meteor run # don't exit /usr/bin/tail -f /dev/nullAlso I'm not sure about that meteor run command in the start.sh. How do I tell meteor run to be executed in a specific directory, without being able to cd into it?I'm using Windows 10. I have my meteor app in the \app\ directory and the Dockerfile and start.sh in the same directory as the app folder.I build the container using: docker build -t meteorapp .The error when I'm trying to run using:docker run -p 80:80 --net docker-network --name meteorapp meteorappis:/bin/sh: 1: /start.sh: not foundThank you very much!
Trying to build a docker container, start.sh not found
The docker0 network interface is associated with the default docker network bridge.

You can access info about it with docker network inspect bridge.

You can use the --format option to get a specific value:

$ docker network inspect bridge --format='{{json .IPAM.Config}}'
[{"Subnet":"172.17.0.0/16","Gateway":"172.17.0.1"}]
I'm building an application which needs to have the ip address of docker0 without using commands like ip addr show dev docker0. Is there any way to get it from docker itself, maybe using a docker command or something else? At least docker info doesn't seem to show it.
How to get docker0 ip address platform independently
You could use cAdvisor, which provides container users an understanding of the resource usage and performance characteristics of their running containers.

A very good article about setting up Prometheus to monitor Docker uses this architecture: briefly, the idea is to collect information about containers using cAdvisor and put it into the Prometheus database. Grafana will then query the Prometheus database and render monitoring charts / values.

To collect data from cAdvisor into Prometheus, you will have to edit a configuration file (prometheus.yml):

scrape_configs:
  - job_name: 'cadvisor'
    scrape_interval: 5s
    static_configs:
      - targets: ['cadvisor:8080']

When you have some data in Prometheus, you then have to use Grafana to view it. A (short) example of monitoring json queries you could import into Grafana is as follows:

Get % of user CPU:

"targets": [
  {
    "expr": "sum(rate(container_cpu_user_seconds_total{image!=\"\"}[1m])) / count(node_cpu{mode=\"system\"}) * 100",
    "interval": "10s",
    "intervalFactor": 1,
    "legendFormat": "",
    "refId": "A",
    "step": 10
  }
]

Get % of RAM used:

"targets": [
  {
    "expr": "(sum(node_memory_MemTotal) - sum(node_memory_MemFree+node_memory_Buffers+node_memory_Cached) ) / sum(node_memory_MemTotal) * 100",
    "interval": "10s",
    "intervalFactor": 2,
    "legendFormat": "",
    "refId": "A",
    "step": 20
  }
]

For complete json data (too long to be posted here), you can clone this repository:

git clone https://github.com/stefanprodan/dockprom

and try to import its Grafana json.

I'm currently using this architecture to monitor a docker swarm mode cluster in production; the output of the monitoring can be seen in the github repository.
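As a rough sketch of how the three pieces can be wired together on a single host (image tags and published ports here are illustrative, not taken from the article), a compose file could start cAdvisor, Prometheus and Grafana so that Prometheus scrapes cAdvisor using the scrape config shown above:

version: '2'
services:
  cadvisor:
    image: google/cadvisor
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    ports:
      - "8080:8080"
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"

With this layout the cadvisor:8080 target in prometheus.yml resolves through the compose network, and Grafana is pointed at http://prometheus:9090 as its data source.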
I want to use Prometheus to monitor my docker containers. I can run Prometheus with Grafana but I do not know how to instruct it to monitor other docker containers. If possible I would like to see some example. For instance, I have an Ubuntu container running on my host as well as a Gentoo container. How can I tell Prometheus to monitor them, or perhaps to monitor some application(s) running inside? Thanks in advance for your time and patience with me.
Prometheus - how to monitor other docker containers
I finally found the solution for mounting my local maven repository in docker. I changed my solution; I am mounting it in therunphase instead ofbuildphase. This is myDockerfile:FROM ubuntu MAINTAINER Zeinab Abbasimazar ADD gwr $HOME RUN apt-get update; \ apt-get install -y --no-install-recommends apt-utils; \ apt-get install -y wget unzip curl maven git; \ echo \ " \ /root/.m2/repository \ true \ false \ false \ " \ > /usr/share/maven/conf/settings.xml; \ mkdir /root/.m2/; \ echo \ " \ /root/.m2/repository \ true \ false \ false \ " \ > /root/.m2/settings.xml WORKDIR . CMD mvn -X clean install -pl components -P profileAt first, I build the image using aboveDockerfile:sudo docker build -t imageName:imageTag .Then, I run a container as below:sudo docker run -d -v /home/zeinab/.m2/:/root/.m2/ --name containerName imageName:imageTag
I am trying to build a Java application and make a package using docker. This builds needs a maven repository which I don't want to include in the image, since it's very large. I wanted to try using volumes and mount my local maven repository to the maven repository in the image. I usedapt-get install -y mavenin order to have maven available, but I can't find the directory.m2in the image$HOME.I usedls -la $HOME,ls -laandls -la /rootto find the maven home, but there is no.m2directory there.EDIT 1:I have these lines inDockerfile:FROM ubuntu MAINTAINER Zeinab Abbasimazar # Install and configure required packages RUN apt-get update; \ apt-get install -y --no-install-recommends apt-utils; \ apt-get install -y dialog; \ apt-get install -y wget unzip curl maven; \ mkdir $HOME/.m2/; \ ls -la /usr/share/maven/conf/; \ echo \ " \ /root/.m2/repository \ true \ false \ false \ " \ > /usr/share/maven/conf/settings.xml VOLUME ["/home/zeinab/.m2/", "/root/.m2/"] # Build RUN mvn -X clean install -pl components -P profileWhich puts local repository configurations in image's maven configuration file, mounts my local maven repository to a directory in the image and finally performs the build. As I can see in the maven build log that it's using the local repository path I expected:[DEBUG] Reading global settings from /usr/share/maven/conf/settings.xml [DEBUG] Reading user settings from /root/.m2/settings.xml [DEBUG] Using local repository at /root/.m2/repositoryBut still can't detect dependencies.
Mounting Maven Repository to Docker
SOLVED: It turns out to be a networking issue. I am behind a corporate firewall at work that leverages TLS packet inspection to break apart SSL traffic. The build process while debugging runs as "me" on my local machine; however, the release build (docker-compose) actually pulls down an aspnetcore-build docker image, copies your code to the docker container, then runs dotnet restore to get fresh nuget packages for your docker image. These actions can be found in the Dockerfile in your project. This dotnet restore inside the container runs under a different security context, and therefore was getting hung up. We traced the network traffic, which was hard for me to get to because of how docker networking works. Fiddler was not catching the traffic. Using wireshark, we were able to catch it at the device level and see the drop. The reason it continued to fail from my home network was due to the configuration of our hypervisor and networking.

RESOLUTIONS:

Add a firewall rule for https://api.nuget.org/v3/index.json (Preferred)
OR build the image from VSTS in the cloud
OR build from a different network.

PS4 please post back if you are able to resolve this the same way? Having spent 3 days on this, I'm curious about your status.
I am getting nugget restore error while building using docker-compose behind proxy. I have set proxy in docker for windows. Nuget restore works for command linedotnet restoreand visual studio debug, but not usingdocker-compose.:\Program Files\dotnet\sdk\2.1.104\NuGet.targets(104,5): error : Unable to load the service index for source https://api.nuget.org/v3/index.json. [C:\src\WebApp.sln] :\Program Files\dotnet\sdk\2.1.104\NuGet.targets(104,5): error : An error occurred while sending the request. [C:\src\WebApp.sln] :\Program Files\dotnet\sdk\2.1.104\NuGet.targets(104,5): error : A connection with the server could not be established [C:\src\WebApp.sln] ERROR: Service 'idenityapi' failed to build: The command 'powershell -Command $ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue'; dotnet restore -nowarn:msb3202,nu1503' r turned a non-zero code: 1
Nuget package restore error in Docker Compose build
Found that the docker run command should contain -e CI=true to exit immediately:

docker run -e CI=true myimage npm run test

From the React documentation about CI=true:

The test command will force Test to run in CI-mode, and tests will only run once instead of launching the watcher.
I want to test my node docker image with "npm run test" as a command overwrite when running my container.My Dockerfile is:FROM node:alpine WORKDIR /app COPY ./package.json ./ RUN npm install COPY ./ ./ CMD ["npm", "run", "start"]The "npm run test" command should be run in my container and exit to the terminal (locally and Travis CI) but the test run is stuck at "Ran all test suites." waiting for input.My docker run command is:docker run myimage npm run test -- --coverageI also tried with:docker run myimage npm run test -- --forceExitBut none of them exits when the test have run (neither locally or in Travis CI).My App.test.js file is the standard test:import React from 'react'; import ReactDOM from 'react-dom'; import App from './App'; it('renders without crashing', () => { const div = document.createElement('div'); ReactDOM.render(, div); ReactDOM.unmountComponentAtNode(div); });What should I do to automatically exit the test when it is finished?
Node in Docker: npm test and exit
The $REDIS_PORT_6379_TCP_ADDR and $REDIS_PORT_6379_TCP_PORT variables are not known at the time the docker run command is executed, so there's no way to construct it from the host.

However, there is a workaround. In the Dockerfile for the firehose/server image there must be a CMD or ENTRYPOINT that dictates what command is executed when the image is run. You can put a wrapper around that command that will construct the REDIS_URL variable. Something like this:

#!/bin/sh
export REDIS_URL="redis://${REDIS_PORT_6379_TCP_ADDR}:${REDIS_PORT_6379_TCP_PORT}/0"
# hand off to the original command so the app starts with REDIS_URL already set
exec "$@"

Use the wrapper script as the CMD or ENTRYPOINT in the Dockerfile.
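A sketch of how the wrapper could be wired into the image; the script and app file names are assumptions, not taken from the question:

# in the Dockerfile for firehose/server (illustrative)
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["node", "server.js"]

With this pattern the exec "$@" at the end of the wrapper runs whatever CMD (or docker run argument) was given, so the same image still works with any command while REDIS_URL is computed inside the container at start time.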
I have an application which uses an environment variable namedREDIS_URL. A typicalREDIS_URLwould beredis://172.17.0.5:6379/0. I'd like to be able to populateREDIS_URLbased on container linking:docker run --name redis -d redis docker run --name firehose --link redis:redis -e REDIS_URL="redis://$REDIS_PORT_6379_TCP_ADDR:$REDIS_PORT_6379_TCP_PORT/0" -d firehose/serverBut depending on how I escape the environment variables, they are either evaluated in my shell at docker run time and are blank (redis://:/0), or passed as literal strings (redis://$REDIS_PORT_6379_TCP_ADDR:$REDIS_PORT_6379_TCP_PORT/0).How can I populate myREDIS_URLapplication environment variable based on conatiner linking?
Using linked container environment variables in application environment?
Something is very odd here. Why do you have the virtualenv content next to your Dockerfile?

The image you are building from creates the virtualenv on /var/app (within the container, yes?) for you. I believe that the ONBUILD command copies it (or parts of it) over and corrupts the rest of the process, making /var/app/bin/pip inoperable.

FROM python:3.4.2                 <-- this is the base image, on top of which the following commands will be applied
WORKDIR /var/app                  <-- this is the working dir (a la 'cd /var/app')
RUN pip3 install virtualenv       <-- using pip3 (installed using the base image I presume) to install the virtualenv package
RUN virtualenv /var/app           <-- creating a virtual env on /var/app
RUN /var/app/bin/pip install --download-cache /src uwsgi   <-- using the recently installed virtualenv pip to install uwsgi
...
ONBUILD ADD . /var/app            <-- add the contents of the directory where the Dockerfile is built from, I think this is where the corruption happens
ONBUILD RUN if [ -f /var/app/requirements.txt ]; then /var/app/bin/pip install -r /var/app/requirements.txt; fi   <-- /var/app/bin/pip has been corrupted

You should not care about externally having /var/app available on the host. You just need (based on the Dockerfile) to have the requirements.txt available on the host, to be copied into the container (or not; if not, it will skip).
Trying to follow a few[1][2] simple Docker tutorials via AWS am and getting the following error:> docker build -t my-app-image . Sending build context to Docker daemon 94.49 MB Step 1 : FROM amazon/aws-eb-python:3.4.2-onbuild-3.5.1 # Executing 2 build triggers... Step 1 : ADD . /var/app ---> Using cache Step 1 : RUN if [ -f /var/app/requirements.txt ]; then /var/app/bin/pip install -r /var/app/requirements.txt; fi ---> Running in d48860787e63 /bin/sh: 1: /var/app/bin/pip: not found The command '/bin/sh -c if [ -f /var/app/requirements.txt ]; then /var/app/bin/pip install -r /var/app/requirements.txt; fi' returned a non-zero code: 127Dockerfile:# For Python 3.4 FROM amazon/aws-eb-python:3.4.2-onbuild-3.5.1Which pip returns the following:> which pip ./bin/pipRelevant file structure:. ├── Dockerfile ├── bin │   ├── activate │   ├── pip │   ├── pip3 │   ├── pip3.5 │   ├── python -> python3 │   ├── python-config │   ├── python3 │   ├── python3.5 -> python3 │ . .Again, noob in all things Docker so I'm not sure what troubleshooting steps to take. Please let me know what other helpful information I can provide.
Docker Build can't find pip
You can set the memory usage of the docker container using -e JAVA_OPTS="-Xmx64M -Xms64M".

docker file:

FROM openjdk:8-jre-alpine
VOLUME ./mysql:/var/lib/mysql
ADD /build/libs/application.jar app.jar
ENTRYPOINT exec java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar

image run:

docker run -d --name container-name -p 9100:9100 -e JAVA_OPTS="-Xmx512M -Xms512M" imagename:tag

Here I set 512MB memory usage. You can set 1g or whatever your requirement is. After running with this, check your memory usage; it will stay at most around 512MB.
I have created some services in spring boot, I have 11 fat jars and I deploy them in docker containers, my doubt was that every jar was consuming between 1 and 1.5 GB of RAM without any use, I check the RAM by running:docker stats containernameAt first I thought that it was the java container and I tried to change to one that uses alpine but nothing changed, so I think the only problem is my jar. Is there a way to change the RAM that the jar is using? Or this behavior is normal because every jar has an embedded tomcat? Or maybe is better to put some jars together and deploy them as war and use only one tomcat for a group of "jars"? Can someone share his/her experience?,Thanks in advance.
Spring boot is consuming too much RAM
You can also docker exec -it container_id bash and then kill -9 the main process. I tested with

docker run -d --restart=always -e DISPLAY=$DISPLAY -v /home/gg/moncontainer:/home/gg -v /tmp/.X11-unix:/tmp/.X11-unix k3ck3c/captvty

I killed the main process (pid 5, Captvty.exe), was logged out of the container, and 2 seconds later it was restarted; the window was created again.
From the Docker documentation, there is a restart policy parameter that can be set. How do I verify that the container indeed restarts when the container exits? How do I trigger the exit of a container manually, and observe whether the container restarts?

My environment is Mac and boot2docker.

Thanks
How to check if the restart policy works of Docker
Yes, the directory name is the default project name for docker-compose:

$ docker-compose --help
...
-p, --project-name NAME    Specify an alternate project name (default: directory name)

Use the -p argument to specify a particular non-default project name.

Alternatively, you can also set the COMPOSE_PROJECT_NAME environment variable (which defaults to the basename of the project directory).

If you are sharing compose configurations between files and projects with multiple compose files, refer to this link for more info.
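For example (directory and project names here are placeholders), either form keeps the two stacks separate even though the parent folders share a name:

cd foo/src/website && docker-compose -p foo up -d
cd bar/src/website && COMPOSE_PROJECT_NAME=bar docker-compose up -d

Stopping one and starting the other with the same project names then reattaches the right containers instead of recreating them under a shared project.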
I have two projects and I need two differents docker environnement (containers). I have twodocker-compose.ymlfiles in two different projects.fooproject andbarproject.foo/src/website/docker-compose.yml#1 (foo)version: '3' services: db: env_file: .env image: mariadb:10.0.23 container_name: foo-db ports: - "42333:3306" restart: always web: image: project/foo container_name: foo-web env_file: .env build: . restart: always command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails server -p 3000 -b '0.0.0.0'" volumes: - .:/webapps/foo ports: - "3000:3000" depends_on: - dbbar/src/website/docker-compose.yml#2 (bar)version: '3' services: db: image: mysql:5.5.50 container_name: bar-db ports: - "42333:3306" env_file: .env restart: always web: image: project/bar container_name: bar-web env_file: .env build: . restart: always command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails server -p 3000 -b '0.0.0.0'" volumes: - .:/webapps/bar ports: - "3000:3000" depends_on: - dbI do this command for myfooprojectdocker-compose buildanddocker-compose up, everything works. In Kitematic I see my two containers with the good names (foo-web).I do this command to stop my imagedocker-compose stop.I go to my second project (bar) and rundocker-compose buildanddocker-compose up. everything works, but my container name in now replaced bybar-web.I stop my second image withdocker-compose stopand I try to performdocker-compose upin myfooproject folder again but it fails.How can I keep two different containers and easily switch from one to the other withdocker-compose stopanddocker-compose up?Edit 1I found the issue, the main folder where mydocker-compose.ymlare located for my two projects have the same folder name. Can I fix this or I need to rename my folders?
docker-compose containers uses wrong container with multiple projects
Docker's "Understand images, containers, and storage drivers" guide details most of this.

From Docker 1.10 onwards, all the layers that make up an image have a SHA256 secure content hash associated with them at build time. This hash is consistent across hosts and builds, as long as the content of the layer is the same.

If any number of images share a layer, only one copy of that layer will be stored and used by all images on that instance of the Docker engine.

A tag like debian can refer to multiple SHA256 image hashes over time as new releases come out. Two images that are built with FROM debian don't necessarily share layers, only if their SHA256 hashes match.

Anything that runs the Docker Engine underneath will use this storage setup.

This sharing also works in the Docker Registry (>2.2 for the best results). If you were to push images with layers that already exist on that registry, the existing layers are skipped. Same with pulling layers to your local engine.
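You can observe the sharing from the command line; for example (output varies per host, image names are just examples):

docker history debian:latest     # lists the layers that make up an image you have pulled
docker system df -v              # shows per-image "SHARED SIZE" vs "UNIQUE SIZE" on this engine

If two local images were built FROM the same base, the base layers show up in both docker history outputs but are counted only once in the shared size.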
My understanding is that Docker creates an image layer at every stage of a dockerfile.

If I have X containers running on the same machine (where X >= 2) and every container has a common underlying image layer (ie. debian), will docker keep only one copy of the base image on that machine, or does it have multiple copies for each container?

Is there a point this breaks down, or is it true for every layer in the dockerfile?

How does this work?

Does Kubernetes affect this in any way?
Does docker reuse images when multiple containers run on the same host?
Update: this answer is no longer relevant; it was for 2016 TP5. AD support has been added in later releases.

Original answer

Quick answer: no, containers are not supported as part of AD, so you can't use AD accounts to run processes within a container or authenticate with it.

This used to be mentioned on the MS Containers site but the original link now redirects.

Original wording (CTP 3 or 4?): "Containers cannot join Active Directory domains, and cannot run services or applications as domain users, service accounts, or machine accounts."

I don't know if that will change in a later release.

Someone tried to hack around it but with no joy.
So I have Windows Server 2016 TP5 and I'm playing around with the containers. I am able to do basic docker tasks fine. I'm trying to figure out how to containerize some of our IIS-hosted web applications.Thing is, we usually use integrated authentication for the DB and use domain service accounts for the app pool. I currently don't have a test VM (that is in a domain) so I can't test if this will work inside a container.If the host is joined to an AD domain, are its containers also part of the domain? Can I still run processes using domain accounts?EDIT: Also, if I specify the "USER" in the dockerfile, does this mean that my app pool will run using that (instead of the app pool identity)?
Active directory accounts inside a windows container (server 2016 TP5)
#Note: image1 and image2 can be the same

FROM image1
.. any commands for image1

FROM image2
.. any commands for image2

It will create two images. It will return the latest image id after the build (as the doc says). So this usage is possible (I haven't seen it used yet), but in my opinion it is only for exceptional cases: it doesn't seem nice to build two different images and then have to dig back for the first image id. Maybe your requirement is to build a large set of applications and be able to build them all at once, so it's up to your requirement. Whether you really need this usage is the main question.
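A tiny illustration of the two-image behaviour described above (the file contents are arbitrary):

FROM alpine
RUN echo "first image" > /marker1

FROM busybox
RUN echo "second image" > /marker2

With the classic builder, running docker build -t myimage . tags only the result of the last FROM block as myimage; the first block still produces an image, but you would have to note its ID from the build output to use it. Note that newer Docker versions treat multiple FROMs as a multi-stage build, where earlier stages mainly exist to COPY --from, so this pattern reads differently today than when the question was asked.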
The Dockerfile reference says the following about the FROM instruction:

FROM can appear multiple times within a single Dockerfile in order to create multiple images. Simply make a note of the last image ID output by the commit before each new FROM command.

I don't understand what they mean by "note the last image ID output by the commit". I'm not really sure I see the point at all in having multiple FROM instructions.

Is there any valid use case of this?
Dockerfile FROM Instruction
There's a per-service dictionary called ulimits:

version: '3'
services:
  my_proj:
    image: image/my_image
    ulimits:
      rtprio: 95
      memlock: -1
    ...

Note that Docker Compose works better with non-interactive services that stay running; I would use it to launch your service proper and not necessarily to get an interactive shell in a temporary container.
I can't find the option in docker-compose.yaml to pass the following 'docker' parameters:

--ulimit rtprio=95 --ulimit memlock=-1

In other words, I wish to wrap the following command with docker-compose:

docker run --rm -it --network host --ulimit rtprio=95 --ulimit memlock=-1 --name my_proj image/my_image bash
docker-compose yaml - option to pass the 'ulimit' parameters 'rtprio' and 'memlock'
It seems like you won't be able to use Docker on the Windows 10 Family edition, since Docker Desktop requires a specific Windows version, as said in the official documentation.

System Requirements
Windows 10 64-bit: Pro, Enterprise, or Education (Build 15063 or later).

What you can try is to run a Linux-based virtual machine on your Windows host, and run Docker inside of it. But even if you succeed, you will lose most of the advantages Docker has in resource consumption.
I am new to Docker. I'm trying to work with it on windows. I have Windows 10 Family so I installed Linux Bash Shell. When I run this command:

$ docker run hello-world

I get:

docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

And when I run

$ systemctl status docker

I get

System has not been booted with systemd as init system (PID 1). Can't operate
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? Linux Bash Shell on windows 10
I'll answer myself, as I was able to call VSCode using the code command in the remote container.

When I looked closely under home, I saw that there was a code binary in the following directory:

$HOME/.vscode-server/bin/<commit-id>/bin/

So I added it to the PATH and it worked.

By the way, <commit-id> is a directory with a hash-like name, which is randomly assigned when you connect to the container remotely. It's different every time, so please check it yourself.

The way to register the path is as follows:

export PATH="$PATH:$HOME/.vscode-server/bin/<commit-id>/bin/"

Thank you very much for your support.
I am using VSCode in my local PC and connecting to a Docker container in a remote server with VSCode's Extensions ofRemote - SSHandRemote - containers. However, when I type$ code on the VSCode's terminal (Bash), I get an error messages saying thatbash: code: command not foundand I can't edit the file on the VSCode's editor.If I click on the file from VSCode's Explorer (Ctrl+Shift+E), the edit screen will appear, but isn't it possible to call it with thecodecommand?Also, I call the command palette (Ctrl+Shift+P) and then search forShell Command: Install 'code' command in PATHbut no matching commands are found.The execution environment is as follows:.Local PC: Windows 10 ProRemote host PC: Ubuntu 18.04.3 LTSDocker container in the remote host PC: Ubuntu 18.04.3 LTSThank you very much for your answer.
The "code" command does not work when connecting to a Docker container remotely with VSCode
I solved my problem:

I was trying to set up my connection (from node) to mongodb before the mongodb server was completely up (it takes 5/6 secs for the first start).

So, I just needed to retry the connection a few times (3/4 times), waiting 1 sec before each request from node, until mongo accepts the request.

var connectWithRetry = function() {
  return mongoose.connect(db, function(err) {
    if (err) {
      console.error('Failed to connect to mongo on startup - retrying in 1 sec', err);
      setTimeout(connectWithRetry, 1000);
    }
  });
};
connectWithRetry();
I try to up a Node.JS container linked with a MongoDB container by docker-compose, but systematically node.js return an ECONNREFUSED error.The errornodejs_1 | /code/node_modules/mongoose/node_modules/mongodb/lib/server.js:228 nodejs_1 | process.nextTick(function() { throw err; }) nodejs_1 | nodejs_1 | Error: connect ECONNREFUSED nodejs_1 | at exports._errnoException (util.js:746:11) nodejs_1 | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1010:19)NodeJS codevar db = 'mongodb://database:27017/wondrapi'; mongoose.connect(db);docker-compose.ymlweb: build: ./web ports: - "8080:80" links: - nodejs volumes: - ./web:/usr/share/nginx/html:ro nodejs: build: ./api ports: - "8081:3000" links: - database command: npm start database: image: mongo volumes: - db:/data/db ports: - 27017Dockerfile (./api)FROM node ADD package.json /code/ WORKDIR /code RUN npm install ADD . /codeHow can I solve the error?
NodeJS Mongodb in docker compose = ECONNREFUSED
It should be this way: myrepo.git#:myfolder

version: "2"
services:
  php:
    build:
      context: https://github.com/wodby/drupal-php.git#:7
      args:
        - BASE_IMAGE_TAG=7.1
        - WODBY_USER_ID=117
        - WODBY_GROUP_ID=111
    volumes:
      - ./:/var/www/html

https://docs.docker.com/engine/reference/commandline/build/#git-repositories
I would like to build a new image in my docker compose project using a git repository as I need to change some ARG vars.My concern is that the Dockerfile is inside a folder of the git repository.How can be specified a folder as build context using a git repository?Repository:https://github.com/wodby/drupal-php/blob/master/7/Dockerfileversion: "2" services: php: build: context: https://github.com/wodby/drupal-php.git dockerfile: 7/Dockerfile args: - BASE_IMAGE_TAG=7.1 - WODBY_USER_ID=117 - WODBY_GROUP_ID=111 volumes: - ./:/var/www/htmlI've tried the dockerfile property: "FOLDER/" + DockerfileBut the repository uses relative paths, and it doesn't find dependencies:---> 6cc2006e9102 Step 7/9 : COPY templates /etc/gotpl/ ERROR: Service 'phpe' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder740707850/templates: no such file or directory
Docker Compose build context from git repository with Dockerfile inside folder
The problem is the COPY command in the Docker file:

COPY build/libs/myproject.jar myproject.jar

The source directory build/libs/ is not where the files for building the Docker container reside. Instead, the directory build/docker/ is used as the Docker build context. When COPY is executed, this directory is the effective working directory.

The correct COPY command is as simple as this:

COPY myproject.jar /

Docker task:

docker {
    dependsOn bootJar
    name "${project.group}/${jar.baseName}:${version}"
    files bootJar.archivePath
}

If you want to copy resources too, you need to add processResources to the files parameter:

files bootJar.archivePath, processResources
If I try to build a Docker container with a Spring Boot application under Windows 10, I get the following error:> Task :docker FAILED COPY failed: stat /var/lib/docker/tmp/docker-builder711841135/myproject.jar: no such file or directoryI'm using Docker Community Edition in version 18.03.0-ce-win59 (16762) and Gradle 4.7 with Java 8.build.gradle(shortened):plugins { id 'java' id 'org.springframework.boot' version '2.0.1.RELEASE' id "com.palantir.docker" version "0.19.2" } version = '2.0.0' sourceCompatibility = 1.8 group = "com.example" repositories { mavenCentral() } bootJar { archiveName 'myproject.jar' } dependencies { ... } docker { dependsOn(build) name "${project.group}/${jar.baseName}" files bootJar }Dockerfile(sibling of build.gradle in the top-level project directory):FROM openjdk:8-jre COPY build/libs/myproject.jar myproject.jar ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/myproject.jar"]If I build the Docker container with Docker only (without Gradle) it works.How can I let Gradle (or Docker?) find the file myproject.jar?
.jar file not found when building a Docker container with Palantir Gradle plug-in
Having experienced this exact scenario, I can confirm that upon reaching the limit, AWS will block you from pushing with this very unhelpful error message:

Error pushing to registry: Server error: 403 trying to push : manifest

You'll need to manage the number of images yourself. As there is currently no built-in garbage collection (nor 'remove oldest') functionality, you have a few options:

Remove the images via the console (which really is just woeful with so many images)
Write your own tool that interfaces with the AWS CLI/SDK using the ecr batch-delete-image command
Request an increase to the maximum number you can store per repository. We've recently done this and it was very easy to get the 1,000 limit increased to 5,000.
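A sketch of the second option with the AWS CLI; the repository name is a placeholder, and it's worth running the list-images step on its own and checking the output before deleting anything:

# collect all untagged image digests in one repository, then delete them in a batch
IMAGES_TO_DELETE=$(aws ecr list-images --repository-name my-repo \
    --filter tagStatus=UNTAGGED --query 'imageIds[*]' --output json)
aws ecr batch-delete-image --repository-name my-repo \
    --image-ids "$IMAGES_TO_DELETE" || true

The same list/delete pair can be driven by a different --query or --filter if your cleanup rule is "oldest first" rather than "untagged".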
According to Amazon ECR Service Limits, the maximum number of images per repository is 1,000. After exceeding this limit, the oldest image won't be removed automatically; instead, pushing to the repository is blocked. So I have to clean old images manually.

Update: AWS introduced ECR Lifecycle Policies. We can now automate the cleanup with this.
What will happen after the maximum number of images pushed to ECR repository
On Linux containers, you can access the host using the IP address 172.17.0.1. So from inside your Java app you should be able to reach the other containers on 172.17.0.1:8081, 172.17.0.1:8082 and 172.17.0.1:8083. That's equivalent to using localhost:8081, localhost:8082 and localhost:8083 on your host machine.
I have an assignment to set up 3 docker containers on localhost:8081, localhost:8082 and localhost:8083, which I've done successfully.

Then there is a last container that is a Java app on localhost:8080 and it needs to send requests to the other containers using HttpClient and HttpRequest. I've done this by creating a bridge with "docker network create web_server --driver bridge" and running the containers with --network web_server; this way they can communicate using the container names and it works. But my teacher told me to send the requests to http://localhost:8081, 8082 etc. Is there a way to make containers access localhost? I'm using docker for linux.
Access localhost from docker container
Step 1: Run registry version 2+ with proxy configuration

You need to run the docker registry with a proxy configuration. To get an initial config.yml:

docker run -it --rm --entrypoint cat registry:2 /etc/docker/registry/config.yml > `pwd`/config.yml

Add the following to config.yml:

proxy:
  remoteurl: https://registry-1.docker.io

Then start the docker registry with config.yml:

docker run -d --restart=always -p 5000:5000 --name docker-registry-proxy -v `pwd`/config.yml:/etc/docker/registry/config.yml registry:2

Step 2: Configure the Docker daemon on the client

If you use Docker for Mac (not Docker Toolbox or boot2docker), just add http://<registry-host>:5000 to the mirrors section under the Advanced tab, then restart Docker for Mac.

Otherwise, you need to run the docker daemon with --registry-mirror=http://<registry-host>:5000, by doing something like the following on the client or Docker Toolbox VM:

docker --registry-mirror=https://<registry-host>:5000 daemon

Step 3: Verify the proxy is working

Try to pull an image you don't have yet:

docker pull nginx

Then verify the proxy catalog has the new image:

curl https://<registry-host>:5000/v2/_catalog

It should return something including the image you have just pulled:

{"repositories":["library/nginx"]}
I have a server (let's name it A) which may have access to internet and from which I'm able to pull images from the officiel docker.io registry.I also have other servers (B, C) which cannot have this same access for security reasons, but are allowed to have access to A.I also have decided to install a private registry on A, which can be used from B and C.Is it possible to have this registry acting as a proxy, in the way that when I want to pull an official image from B, it could be done through A ?
How to set-up a docker registry acting as a Proxy?
Since your database has published ports:, you can access it directly from the host. The application running outside a container on the host and the same application running in a Compose setup are different environments, and it's appropriate to use environment variables to specify this. Do not hard-code a database location in your application.

If you can use the standard PostgreSQL environment variables, then it's fairly easy to specify this.

# To run migrations:
cd app/database
PGUSER=username PGPASSWORD=password PGDATABASE=database \
  alembic revision --autogenerate -m "Description"
# (assumes default PGHOST=localhost)

# To run the application:
version: '3.8'
services:
  db: { ... }
  app:
    build: .
    environment:
      PGHOST: db
      PGUSER: username
      PGPASSWORD: password
      PGDATABASE: database
    ports:
      - "5000:5000"
    depends_on:
      - db
I find the workflow for working with database migrations in a containerized environment confusing. I have a web API with an attached database. The API runs in one container and the database in another. The project file structure is as follows. ├── docker-compose.yml ├── Dockerfile └── app | ├── __init__.py | ├── database | | ├── alembic/ | | ├── __init__.py | | ├── db.py | | └── models.py | ├── other | ├── source | └── files ├── other └── filesIn order for the API container to be able to access the database the sqlalchemy.url in the ini-file is set to:postgresql://{username}:{password}@db:5432/{database}However when I want to do a migration, for example add a table column, I will change the model inapp/database/models.pychange directory toapp/databaseand runalembic revision --autogenerate -m "Description". This is where the problem occurs, I get the error:sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "db" to address: Name or service not knownIf I change the hostname to localhost it works but then the docker-compose breaks since it has to reference the container name.This workflow does not seem right. How do people work with databases in projects which uses containers?Thedocker-compose.ymlfile looks like this:version: "3" services: db: image: postgres ports: - "5432:5432" environment: - POSTGRES_USER=username - POSTGRES_PASSWORD=password - POSTGRES_DB=database app: build: . command: bash -c "cd app/database && alembic upgrade head && cd ../.. && python app/main.py" volumes: - .:/code ports: - "5000:5000" depends_on: - db
How to autogenerate and apply migrations with alembic when the database runs in a container?
So you can use this command to check which versions are in the yum repo:

sudo yum list docker-engine.x86_64 --showduplicates | sort -r

and then use this to install the listed version that you want:

sudo yum -y install docker-engine-<VERSION>

If you simply want to downgrade the docker package (this can be performed multiple times, by the way), you'd do this:

sudo yum downgrade docker-engine

and that will install the version of docker previous to the one you currently have installed, while cleaning up the later version.

You could always keep downgrading until you got the one you want, but that's annoying, so I'd just go with the first method :P
I tried to install docker 1.8.2 on CentOS 7. The docs don't say anything about versioning. Can someone help me? I tried wget -qO- https://get.docker.com/ | sed 's/lxc-docker/lxc-docker-1.8.2/' | sh + sh -c 'sleep 3; yum -y -q install docker-engine' but it didn't work. EDIT: I performed: yum install -y http://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-1.8.2-1.el7.centos.x86_64.rpm That works, but I'm missing options such as docker-storage-setup and docker-fetch
How to install specific version of Docker on Centos?
Yes, answer is "true" to both questions. If you start 2 (or more) containers on the same host, all using the same base image, the whole content of the base image will be shared.What is called as an "image" is, in fact, multiple images called "layers" with parent-child relationships, stacked together.Now, If you start multiple containers with different images, it may happen that these images share some common layers, depending on how they were built.At the system level, Docker mounts each image layer on top of the other up to the final/top image. each layer overwrites its parent content if it overlaps. To do that, it uses what is called an "union filesystem" (Aufs), or even volume snapshots. More informationhere.The images are never modified, they are read-only. On top of the last/upper image, an extra, writeable layer, is added, it will contain changes/additions made by the running container.That means that this writeable layer can also be turned into an image layer, and you can start other containers from this new image.To see layers sharing "with your own eyes", just run the following examples:docker run ubuntu:trusty /bin/bashThen:docker run ubuntu-upstart:trusty /bin/bashDocker will tell you that it already has some layers and will thus download them all.Check the documentation about writing aDockerfile(image build script), that should give you a good vision about how all this works.
While reading about Docker, I have stopped a couple of times on the claim that Docker containers not only share the host kernel, but, if possible, also share common binaries and libraries. What I understand from that is: if I'm running the same docker image twice on the same host, and this image uses some files x, y, z (say libraries / bins... anything), will these files also be shared among the 2 launched containers? What's more, if I'm running two different images, could they still share these common dependencies? What I'm asking for is just two things... 1- Verification / explanation --> Is that true / false + explanation (how does that happen)? 2- If true --> Is there a practical example, where I can run 2 containers (of the same / different images) and verify they are seeing the same files / libs? I hope my question is clear and someone has an answer :)
Docker "Sharing Dependencies"
Containers don't translate instructions. A program running in a container is exactly the same as any other program running on the same machine, except that it has separate ("namespaced") instances of certain things, like the filesystem, the network stack, and the system hostname. The CPU isn't emulated or virtualized (any more than usual, anyway). Virtual machines can support instructions not supported on the host machine, but they do not necessarily do so. If they do, it will usually come at a substantial cost in performance.
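You can check this directly: the CPU flags a container reports are exactly the host's, because nothing sits in between. A small sketch (the alpine image is an arbitrary choice):

# Flags on the host
grep flags /proc/cpuinfo | head -n 1

# Flags seen from inside a container: identical
docker run --rm alpine sh -c 'grep flags /proc/cpuinfo | head -n 1'

For the anecdote in the question, that means the container would crash with the same Illegal instruction on the older Xeons; only a VM that actually emulates the CPU (for example QEMU without KVM) could paper over the missing instruction, at a large performance cost.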
I recently ran into a bug where a python library used a certain CPU instruction which existed on one x86 processor but not on another, resulting in an unexpected crash of the program (Illegal instruction) on one system but not on another. That had me thinking of the benefits of containerization to create a well-defined run-time environment for my software. But my brain ground to a halt when I realized how low level this is, and I could not figure out from reasoning nor from reading on the internet, as to what level the isolation of software like docker goes.QuestionSo my questions is:Would a containerization software, like Docker or LXC, be able to emulate an instruction which does not exist on the physical hardware?And would a full VM be able to deal with it, if a container could not?Anecdotal informationThought I'd fill in the blanks, just because people were curious.The specific scenario I was caught by was when trying to apply Reed-Solomon erasure coding to a data object. I'm using thePyECLiblibrary which implements Vandermonde Reed-Solomon via theliberasurecodelibrary (which in turn usesjerasure, I believe).Minimal Working ExampleThis piece of code runs without errors on a compatible processor, but produces theIllegal instructionexception on some older processors:from pyeclib.ec_iface import ECDriver ec_driver = ECDriver(k=1, m=5, ec_type='liberasurecode_rs_vand') ec_driver.encode(b'foo')EnvironmentI'm using Python 3.6 on multiple Linux platforms. The notable case where things wreak havoc is in an LXC container running Fedora 25 on the processor specified below, but I'd bet LXC and Fedora has little to do with it.I've tried both pyeclib 1.4 and 1.1, and have the same thing happen.These processors makes my program crash:Intel Xeon X5660Intel Xeon X3363Intel Xeon E5405Intel Xeon X3430Intel Xeon E3110Here are some processors which works fine:Intel Xeon E31220Intel Core i7-7500U
How does containerization software like Docker translate CPU instructions?
It doesn't seem possible to start a container connected to multiple networks at once. From the page https://success.docker.com/article/multiple-docker-networks: Docker only allows a single network to be specified with the docker run command. To connect multiple networks, "docker network connect" is used to connect additional networks. If a container needs to be connected to multiple networks before it runs, then it is possible to attach networks to a created container that has not started yet. And to connect to the default network - in the following example, alpine4 is connected to the default network (along with alpine-net) - https://docs.docker.com/network/network-tutorial-standalone/ docker run -dit --name alpine4 --network alpine-net alpine ash docker network connect bridge alpine4
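Applied to the c5 container from the question, a sketch of the full sequence looks like this (names reused from the question; note the default bridge network is called "bridge", not "docker0"):

# Create the container on one network, attach the rest, then start it
docker container create --name c5 --network sample-net alpine:latest ping 127.0.0.1
docker network connect test-net c5
docker network connect bridge c5
docker container start c5

# Confirm the attached networks
docker inspect -f '{{range $net, $cfg := .NetworkSettings.Networks}}{{$net}} {{end}}' c5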
I am getting the following error when running a container attached to the network interfaces test-net, sample-net, bridge0. My requirement is to have a container that connects to different containers that are in different networks. docker network create --driver bridge sample-net docker container run --name c3 -d --network test-net alpine:latest ping 127.0.0.1 docker network create --driver bridge --subnet "10.1.0.0/16" test-net docker container run --name c4 -d --network test-net alpine:latest ping 127.0.0.1 docker container run --name c1 -it --rm alpine:latest sh docker container run --name c5 -d --network sample-net --network test-net --network docker0 alpine:latest ping 127.0.0.1 My intention is to connect "c5" with all the other containers by connecting to their interfaces. However, I am facing the error while executing the command docker container run --name c5 -d --network sample-net --network test-net --network docker0 alpine:latest ping 127.0.0.1 docker: Error response from daemon: Container cannot be connected to network endpoints: sample-net, test-net, docker0.
Docker - Container cannot be connected to network endpoints
(An image I made from a base alpine-golang image plus the git tool runs on my laptop with the -u arg, but if I run it on a Debian virtual machine, it tells me "No user exists for uid 1001".) Ideally, you would make your own image (based on an existing one) with the right expected ID: RUN useradd -r -u 1001 -g appuser appuser USER appuser See "Understanding how uid and gid work in Docker containers", from Marc Campbell. The OP adds: I understand the reason, because this is exactly what I try to avoid: adding a user into the image. If my problem is not solved, I will try to modify the container directly. Actually, there is another alternative, which allows you to not modify an image and not add a user: userns or user remap (since docker 1.10). However, as I mention here, you would need docker 17.06 to avoid some bugs.
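A sketch of the "bake the expected ID into the image" approach, parameterized with build args so each host user can build a variant that matches their own uid/gid (image and program names are placeholders):

cat > Dockerfile.user <<'EOF'
FROM golang:1.22
ARG UID=1000
ARG GID=1000
RUN groupadd -g "$GID" appuser && useradd -m -u "$UID" -g appuser appuser
USER appuser
EOF

docker build -f Dockerfile.user \
  --build-arg UID=$(id -u) --build-arg GID=$(id -g) \
  -t golang-as-me .

# Files written to the mounted volume now belong to the invoking host user
docker run --rm -v "$PWD":/out -w /out golang-as-me go build -o myprog .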
I use the official golang docker image to compile my go program and put the resulting executable on a volume mapped to my host directory. The problem is that the file generated by docker belongs to root:root, which is very annoying because I do not want to run my script via sudo. I searched for solutions to make the docker container run as non-root, but the methods I found need to change the Dockerfile and add a user to the image, e.g. http://gbraad.nl/blog/non-root-user-inside-a-docker-container.html Is there a way to make a docker container run as the CURRENT user on the host machine, i.e., when user A runs this image it generates files belonging to A:A, and when user B runs it, files belonging to B:B, while A and B are both users defined on the host machine (i.e. where the volume resides), without the need to add A and B into the image via the Dockerfile?
docker: set running user while launch container
This feature is now officially supported by VSCode:https://github.com/Microsoft/vscode-node-debug/issues/8
I'm trying to attach the Visual Studio Code debugger to a node.js app that is running inside a Docker container.I start the app like:node --debug-brk app.jsI expose the debugger port indocker-compose.yml:app: build: . working_dir: /code volumes: - .:/code command: npm run debug ports: - "3004:3000" - "5858:5858"Mylaunch.jsonlooks like:{ "version": "0.1.0", "configurations": [ { "name": "Attach", "type": "node", "address": "localhost", "port": 5858 } ] }Now, when I start the application and attach the debugger this will correctly connect (I can see the values flashing in the debugger UI already), but then it will stop, telling me the following:Error opening 'app.js' (File not found: /code/app.js).This is due to the fact that docker will not mount the app in root but in/code(seevolumesindocker-compose.yml) and VS code is confused by the sudden offset.When I run the application outside the container (i.e. locally, without offset) it works just as expected and I can use the debugger as expected.There seems to be acwdoption for the launch configuration but I am not sure if that makes any difference in my case.Can I fix this path offset? Am I missing something else here?
How can I attach VS Code to a node process running in a docker container
Use docker history --no-trunc IMAGE_NAME_OR_ID This will show all commands run in the image building process, in reverse order. It's not exactly a Dockerfile, but you can find all the essential content there.
Is there a way to see the Dockerfile that generated an image I downloaded, to use as a template for my own docker images?
Where to see the Dockerfile for a docker image?
This works for me. Can you try this? Running tomcat: docker run -d -p 8080:8080 --name=tomcat tomcat:8 Running nginx: docker run -d -p 80:80 --link tomcat:tomcat --name=nginx nginx Go inside the nginx container and update the conf: docker exec -it nginx bash Edit /etc/nginx/nginx.conf: server { listen 80 default_server; server_name subdomain.domain.com; location / { proxy_pass http://tomcat:8080; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; } } Restart the nginx service: nginx -s reload Access tomcat through nginx from the host browser. You may need to add an entry to /etc/hosts for http://subdomain.domain.com Complete nginx conf: nginx.conf
In my project, web app is developed using Spring boot with default tomcat server. I am using NGINX as load-balancer and have configured my spring-boot-web-app in NGINX configuration as follows:location /spring-boot-web-app { proxy_pass http://spring-boot-web-app/ } http { upstream /spring-boot-web-app { server : } }Now lets say NGINX IP and port asnginx_ipandnginx_portrespectively. Also working URL for my web app as:http://web_app_ip:web_app_port/rest/echo/hiThe above URL works fine. But when i try to hit same URI via NGINX it throws 404. URL used via NGINX as:http://nginx_ip:nginx_port/spring-boot-web-app/rest/echo/hiIs there something i am missing?
Spring Boot and Nginx integration
The golang:latest image is based on Debian bullseye. You don't need anything other than this image to build your binary so that it can run as-is on Ubuntu. Just start your Dockerfile with this line instead of what you're currently using: FROM golang:latest
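If you prefer not to keep a Dockerfile at all, a throwaway build container based on the same image works too; a sketch (paths and the binary name are hypothetical):

# Compile inside golang:latest (Debian, glibc) and write the binary to the host
docker run --rm -v "$PWD":/src -w /src -e CGO_ENABLED=1 \
  golang:latest go build -o myfusedriver .

# The result links against glibc rather than musl
file myfusedriver

Keep in mind the glibc version inside the image should be no newer than the one on the target Ubuntu, so pinning a specific golang tag that matches your target may be safer than latest.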
So: The official Go build container is based on Alpine. Alpine uses musl as libc instead of glibc. I need to build a Go executable in a container that can be run on Ubuntu, which uses glibc. How do I either make the official GoLang build container use glibc, or build my GoLang project on an Ubuntu-based container? I can't use the "disable CGO" solution, as my Go code is a FUSE driver, which requires CGO.
How to build a go executable that doesn't link to musl libc
The default network is bridged. The 0.0.0.0:49166->443 shows a port mapping of exposed ports in the container to high-numbered ports on your host because of the -P option. You can manually map specific ports by changing that flag to something like -p 8080:80 -p 443:443 to have ports 8080 and 443 on your host map into the container. You can also change the default network to be your host network as you've requested. This removes some of the isolation and protections provided by the container, and limits your ability to configure integrations between containers, which is why it is not the default option. That syntax would be: docker run --name nginx1 --net=host -d nginx Edit: from your comments and a reread I see you're also asking where the 10.0.75.2 IP address comes from. This is based on how you launch the docker daemon. That IP binding is assigned when you pass the --ip flag to the daemon (documentation here). If you're running docker in a VM with docker-machine, I'd expect this to be the IP of your VM.
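For reference, a sketch of both options described above (container names are arbitrary):

# Explicit port mapping: host port 8080 -> container port 80
docker run --name nginx1 -d -p 8080:80 nginx
curl -I http://localhost:8080

# Host networking: no mapping at all, nginx binds port 80 on the host directly
docker run --name nginx2 --net=host -d nginx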
I'm following the following tutorial on how to start a basic nginx server in a docker container. However, the example's nginx docker container runs on localhost (0.0.0.0) as shown here: Meanwhile, when I run it, for some reason it runs on the IP 10.0.75.2: Is there any particular reason why this is happening? And is there any way to get it to run on localhost like in the example? Edit: I tried using --net=host but had no results:
How can I run a docker container on localhost over the default IP?
I ran into the same issue yesterday and I think I've come up with a workable solution. Here are the basic steps I took - using the sshagent plugin to manage the ssh agent within the Jenkins job. You could probably use withCredentials as well, though that's not what I ended up finding success with. The ssh agent (or alternatively the key) can be made available to specific build steps using the docker build command's --ssh flag (feature reference). It's important to note that for this to work (at the current time) you need to set DOCKER_BUILDKIT=1. If you forget to do this, then it seems like it ignores this configuration and the ssh connection will fail. Once that's set, the ssh agent is forwarded into the build. A cut-down look at the pipeline: pipeline { agent { // ... } environment { // Necessary to enable Docker buildkit features such as --ssh DOCKER_BUILDKIT = "1" } stages { // other stages stage('Docker Build') { steps { // Start ssh agent and add the private key(s) that will be needed in docker build sshagent(['credentials-id-of-private-key']) { // Make the default ssh agent (the one configured above) accessible in the build sh 'docker build --ssh default .' } } // other stages } } } In the Dockerfile it's necessary to explicitly give the lines that need it access to the ssh agent. This can be done by including mount=type=ssh in the relevant RUN command. For me, this looked roughly like this: FROM node:14 # Retrieve bitbucket host key RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts ... # Mount ssh agent for install RUN --mount=type=ssh npm i ... With this configuration, the npm install was able to install a private git repo stored on Bitbucket by using the SSH private key within docker build via sshagent.
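Before wiring this into Jenkins, it can help to verify the BuildKit SSH forwarding locally; a minimal sketch (key path and image tag are assumptions):

# Load the key into a local agent
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa

# Forward the default agent socket into RUN --mount=type=ssh steps
DOCKER_BUILDKIT=1 docker build --ssh default -t myapp:test .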
This question is a follow up to this questionHow to pass jenkins credentials into docker build command?I am getting the ssh key file from jenkins credential store in my groovy pipeline and passing it into docker build command via --build-arg so that I can checkout and build artifacts from the private git repos from within my docker containercredentials store id : cicd-user, which works for checking out my private works as expected from my groovy Jenkinsfilecheckout([$class: 'GitSCM', userRemoteConfigs: [[credentialsId: 'cicd-user', url:'ssh://[email protected]:7999/A/software.git']]I access it and try to pass the same to docker build command:withCredentials([sshUserPrivateKey(credentialsId: 'cicd-user', keyFileVariable: 'FILE')]) { sh "cd ${WORKSPACE} && docker build -t ${some-name} --build-arg USERNAME=cicd-user --build-arg PRIV_KEY_FILE=\$FILE --network=host -f software/tools/jenkins/${some-name}/Dockerfile ." }in Dockerfile I doRUN echo "$PRIV_KEY_FILE" > /home/"$USERNAME"/.ssh/id_rsa && \ chmod 700 /home/"$USERNAME"/.ssh/id_rsaRUN echo "Host bitbucket.myorg.co\n\tStrictHostKeyChecking no\n" >> ~/.ssh/configBut I am seeing the following issue"Load key "/home/cicd-user/.ssh/id_rsa" :(invalid format) "[email protected]:Permission denied( Public key) "fatal: could not read from remote repository"In the past I have passed the ssh priv key as --build-arg from outside by cat'ing like below--build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)"Should I do something similar--build-arg PRIV_KEY_FILE="$(cat $FILE)"Any idea on what might be going wrong or where I should be looking for debugging this correctly ?
How to correctly pass ssh key file from Jenkins credentials variable into to docker build command?
3306/tcp -> 127.0.0.1:3666 means that port 3306 inside the container is mapped to port 3666 on the host. More info here. If you find the output of the docker port command confusing, then use the docker inspect command to retrieve the port mapping, as mentioned here in the official doc. docker ps, docker port and docker inspect are useful commands to get info about port mappings. [user@jumphost ~]$ docker run -itd -p 3666:3306 alpine sh Unable to find image 'alpine:latest' locally latest: Pulling from library/alpine 050382585609: Pull complete Digest: sha256:6a92cd1fcdc8d8cdec60f33dda4db2cb1fcdcacf3410a8e05b3741f44a9b5998 Status: Downloaded newer image for alpine:latest 428c80bfca4e60e474f82fc5fe9c1c0963ff2a2f878a70799dc5da5cb232f27a [user@jumphost ~]$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 428c80bfca4e alpine "sh" 3 seconds ago Up 3 seconds 0.0.0.0:3666->3306/tcp fervent_poitras [user@jumphost ~]$ docker port 428c80bfca4e 3306/tcp -> 0.0.0.0:3666 [user@jumphost ~]$ docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' 428c80bfca4e 3306/tcp -> 3666 [user@jumphost ~]$ docker inspect container-id also gives a clear mapping of the ports. $ docker inspect 428c80bfca4e | | "Ports": { "3306/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "3666" } ] }, | | Hope this helps.
I am running a database container. I know that to inspect port mappings, I can use the command docker port <container>. So, I tried that command: $ docker port ea72b2c4ba47 3306/tcp -> 127.0.0.1:3666 I see the output, but which one is the port used by the host machine and which one is the port of the container?
docker host port and container port
I think I found the answer. In order to disable the default bridge network, add "bridge": "none" in daemon.json and restart the docker service. The change takes effect immediately if there are no running containers. In my case there were some containers already running, so the change did not take effect. After inspecting the log, I could see that: info msg="There are old running containers, the network config will not take affect" So I stopped the running containers and restarted the docker service. After that the bridge network was disabled. Hope this helps someone.
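For reference, a sketch of the full sequence on a systemd-based host (paths are the usual defaults and may differ on your distribution):

# Disable the default docker0 bridge permanently
sudo mkdir -p /etc/docker
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "bridge": "none"
}
EOF

# Stop running containers first, otherwise the setting is ignored
docker ps -q | xargs -r docker stop
sudo systemctl restart docker

# docker0 should no longer exist
ip link show docker0 || echo "docker0 is gone"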
Is there any way we can disable the docker0 bridge on docker startup? I tried "bridge": "none" in daemon.json but it's not working. I also removed the default docker bridge using "ip link delete docker0", but when we restart docker it comes up again automatically. So is there any permanent way to disable/delete the default docker bridge on startup? I see the same question here: How to delete interface docker0. But I already tried that, and whenever docker is restarted the docker0 bridge comes back.
How to delete/disable docker0 bridge on docker startup
I had the same problem on Heroku; the error comes from Jinja2 version 2.11.x, and it ran locally but not on Heroku. Just install the latest version of Jinja2 and it will work fine in your case too. pip install Jinja2==3.1.2 or pip install Jinja2 --upgrade
when I use docker-compose to install a fastapi project, I gotAssertionError: jinja2 must be installed to use Jinja2Templatesbut when I use env to install it, that will be run well.my OS:Ubuntu18.04STLmy requirements.txt:fastapi~=0.68.2 starlette==0.14.2 pydantic~=1.8.1 uvicorn~=0.12.3 SQLAlchemy~=1.4.23 # WSGI Werkzeug==1.0.1 pyjwt~=1.7.0 # async-exit-stack~=1.0.1 # async-generator~=1.10 jinja2~=2.11.2 # assert aiofiles is not None, "'aiofiles' must be installed to use FileResponse" aiofiles~=0.6.0 python-multipart~=0.0.5 requests~=2.25.0 pyyaml~=5.3.1 # html-builder==0.0.6 loguru~=0.5.3 apscheduler==3.7.0 pytest~=6.1.2 html2text==2020.1.16 mkdocs==1.2.1DockerfileFROM python:3.8 ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 WORKDIR /server COPY requirements.txt /server/ RUN pip install -r requirements.txt COPY . /server/docker-compose.ymlversion: '3.7' services: figbox_api: build: context: . dockerfile: Dockerfile command: uvicorn app.main:app --port 8773 --host 0.0.0.0 --reload volumes: - .:/server ports: - 8773:8773Do I need to provide some other information?Thanks
when i use docker-compose to install a fastapi project, i got AssertionError:
Among the directives of the Dockerfile, you have SHELL (https://docs.docker.com/engine/reference/builder/#shell). From this doc: The SHELL instruction can also be used on Linux should an alternate shell be required such as zsh, csh, tcsh and others.
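For illustration, a sketch of SHELL switching subsequent RUN instructions to tcsh; the scl case from the question should follow the same pattern (for instance pointing SHELL at an scl enable wrapper), but treat that as an untested assumption. The base image here is only a stand-in, since the original CentOS 6 repositories are archived:

cat > Dockerfile.shell <<'EOF'
FROM fedora:latest
RUN dnf install -y tcsh
# From here on, RUN lines are executed by tcsh instead of /bin/sh -c
SHELL ["/bin/tcsh", "-c"]
RUN echo "current shell is $shell"
EOF

docker build -f Dockerfile.shell -t tcsh-shell-demo .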
I want to update GCC from 4.4.7 to 4.7.2 in a container (CentOS 6.9) following this tutorial: How to upgrade GCC on CentOS. At the end of the tutorial, the author uses scl enable devtoolset-1.1 bash to launch a new shell where all the environment is updated. I wrote the following Dockerfile: Run ... \ && yum install devtoolset-1.1 \ && scl enable devtoolset-1.1 bash However, when I run a container from the image generated by the Dockerfile, I find that the GCC version is still 4.4.7, which indicates that I am in the old shell. Though I succeeded in updating GCC in the container by explicitly defining the CC, CPP, CXX variables, I still want to know how to update GCC with the "scl" command in a Dockerfile. That is to say, how do I start another shell in a Dockerfile? Thank you in advance. ^_^
How to start another bash in Dockerfile
The "Environment Variables" in the Docker Compose task not inject the variables into the containers so the Java application can't read them, but they are will be available in the agent during the process.The variables are for use in thedocker-compose.ymlin this way:${variableName}.So you can define in the Docker Compose task variable:FEATURE_LIST=blablaand in thedocker-compose.ymlinject the variable into the container:image:ubuntu:latest environment: - FEATURE_LIST=${FEATURE_LIST}In this way you can specify environment variables inside Azure Build Pipeline (but you must also define them in thedocker-compose.yml).
I have a build pipeline that runs a docker image with some java program that is run using maven.Selected pipeline stepRun automation testsis starting docker-compose that starts my java program inside docker, as you can see I also set system environment valueFEATURES_LISTwith some test value, now inside my java program, I tried to return value of like I normally do for environment variables:System.getenv("FEATURES_LIST");But it never finds it, If on another hand, I specify environment variable, inside my docker compose file, it finds it (some different env variable set on the bottom of the docker compose file, see below)version: '3.4' services: # SELENIUM GRID selenium-hub: image: selenium/hub ports: - 4444:4444 chrome: image: selenium/node-chrome-debug ports: - 5900:5900 environment: - HUB_PORT_4444_TCP_ADDR=selenium-hub - HUB_PORT_4444_TCP_PORT=4444 depends_on: - selenium-hub # AUTOMATION PROJECT image_name: image: imagepathhere:latest volumes: - ./:/usr/src/app/ network_mode: "host" depends_on: - chrome environment: - TARGET_TEST_ENV=uatTrouble is, it would really make my life easier, if I could specify environment variable inside azure build pipeline, is there something I am doing wrong?
Azure build pipeline, docker compose - set environment variable
I was able to make the docker container run by making the following changes to the Dockerfile: FROM python:3.6.8 COPY . /app WORKDIR /app ENV DEBIAN_FRONTEND noninteractive RUN apt-get update -y RUN apt install libgl1-mesa-glx -y RUN apt-get install 'ffmpeg'\ 'libsm6'\ 'libxext6' -y RUN pip3 install --upgrade pip RUN pip3 install opencv-python==4.3.0.38 RUN pip3 install -r requirements.txt EXPOSE 80 CMD ["python3", "server.py"] The lines required for resolving the libGL error, RUN apt install libgl1-mesa-glx -y RUN apt-get install 'ffmpeg'\ 'libsm6'\ 'libxext6' -y, were not able to run without first updating the package index (apt-get update). Moreover, building the image as noninteractive (DEBIAN_FRONTEND) helped to skip any interactive command-line prompts.
DockerfileFROM python:3.6.8 COPY . /app WORKDIR /app RUN pip3 install --upgrade pip RUN pip3 install opencv-python==4.3.0.38 RUN pip3 install -r requirements.txt EXPOSE 80 CMD ["python3", "server.py"]requirements.txtFlask==0.12 Werkzeug==0.16.1 boto3==1.14.40 torch torchvision==0.7.0 numpy==1.15.4 sklearn==0.0 scipy==1.2.1 scikit-image==0.14.2 pandas==0.24.2The docker build succeeds but the docker run fails with the errorINFO:matplotlib.font_manager:Generating new fontManager, this may take some time... PyTorch Version: 1.6.0 Torchvision Version: 0.7.0 Traceback (most recent call last): File "server.py", line 7, in from pipeline_prediction.pipeline import ml_pipeline File "/app/pipeline_prediction/pipeline.py", line 3, in from segmentation_color import get_swatch_color_from_segmentation File "pipeline_prediction/segmentation_color.py", line 7, in import cv2 File "/usr/local/lib/python3.6/site-packages/cv2/__init__.py", line 5, in from .cv2 import * ImportError: libGL.so.1: cannot open shared object file: No such file or directoryI looked at answerimport matplotlib.pyplot as plt, ImportError: libGL.so.1: cannot open shared object file: No such file or directoryrelating to it and replacedimport matplotlib.pyplot as pltwithimport matplotlib matplotlib.use("Agg") import matplotlib.pyplot as pltbut it is not working for me. Also looked atImportError: libGL.so.1: cannot open shared object file: No such file or directorybut I do not have Ubuntu as base image so this installation would not work for me as listed in the answer.Let me know a way to make this work.
Unable to run docker image due to libGl error
I had the same problem. My image was based on nimmis/alpine-apache-php7/. I found that the image was using supervisor to start processes. supervisor has no knowledge of the Docker environment variables. The convention for telling supervisor that a process needs to be run is to create a run script at /etc/sv/{process}/run. A script like this was used to start Apache. I needed to change the script so that it would import the Docker environment variables before starting Apache. The docs for the base image explain the convention for importing the Docker environment: If you need environment variables from the docker command line (-e, --env=[]), add source /etc/envvars before you use them in the script file. So I created my own custom run script for Apache, adding source /etc/envvars just before the execution of httpd. I overwrote the original run script by adding a simple COPY to my Dockerfile: COPY apache-run.sh /etc/sv/apache2/run This successfully ensured that my $XDEBUG_CONFIG was visible to httpd at the time it was launched. I was able to confirm that this affected my PHP configuration by printing phpinfo(); in a webpage.
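For orientation, a sketch of what such a run script can look like (the exact httpd invocation depends on the base image, so treat the last line as a placeholder):

#!/bin/sh
# /etc/sv/apache2/run -- started by the image's process supervisor

# Import the variables passed with `docker run -e ...`
# (the base image writes them to /etc/envvars at container start)
. /etc/envvars

# Start Apache in the foreground so the supervisor can manage it
exec httpd -D FOREGROUND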
I'm running a PHP application on Docker and I'd like to debug it using XDebug. In my docker-compose I added the following lines in the phpfpm part: environment: XDEBUG_CONFIG: "remote_enable=1 remote_host=192.168.110.29 remote_port=9000 idekey=PHPSTORM remote_autostart=1" PHP_IDE_CONFIG: "serverName=reports.dev" I configured PHPStorm the right way, listening on port 9000, and ran the application. The application works flawlessly but XDebug doesn't seem to be working. If I move the configuration lines inside the php.ini file the debugger works, except for the fact that the Server Name is empty and I cannot debug (that's why I tried the docker-compose configuration way). If, inside the docker container, I run echo $XDEBUG_CONFIG, the output is right, but XDebug seems not to read that env variable.
Xdebug (inside Docker container) ignoring XDEBUG_CONFIG environment variable
You can simply perform a manual commit. This operation is not available within the Dockerfile, but can be done manually. When doing docker inspect <image>, you can retrieve the ID of the container that was used to create this image. You can then do docker commit <container> <new image> and all the ENV and other config will get flushed. If the container has been removed, you can run the image with docker run -d <image>, and then commit the resulting container. If you want to keep some of the configuration, you can use the docker commit -run '{}' syntax. Cf. https://docs.docker.com/engine/reference/commandline/commit/ for more info.
I have a docker image which sets HOME and PATH:[{ ... "config": { "HOME=/", } ...I know I can replace it, but is it possible to remove it (and let the normal bash profile settings be used instead). I'd prefer not to hack the shell profile files to override it.
how to remove an ENV setting from a docker image
If you receive a 403 "Error: Forbidden" error message when accessing your Cloud Run service, it means that your client is not authorized to invoke this service. You can address this by taking one of the following actions:If the service is meant to be invocable by anyone,update its IAM settingsto make the service public.If the service is meant to be invocable only by certain identities, make sure that youinvoke it with the proper authorization token.
I built my container image and then deployed to Cloud Run using the Cloud Console. However, when I open the endpoint URL of my service, I get a403 "Error: Forbidden"page
403 "Error: Forbidden" when opening the URL of my Cloud Run service
You have to create a .pgpass file in the home folder of the user who's going to be running the commands (in this case, postgres). Each line of the file has to be in the format hostname:port:database:username:password and supports wildcards, so you can just set the database to "*" for example. In my case, I have something like this... $ echo "${PRIMARY_IP}:5432:*:${REPL_USER}:${REPL_PASS}" | sudo tee /var/lib/postgresql/.pgpass $ sudo chown postgres:postgres /var/lib/postgresql/.pgpass $ sudo chmod 0600 /var/lib/postgresql/.pgpass $ sudo -u postgres pg_basebackup -h $PRIMARY_IP -D /var/lib/postgresql/9.4/main -U ${REPL_USER} -v -P --xlog-method=stream Those variables (e.g. PRIMARY_IP) are set when I run the docker container with -e PRIMARY_IP=x.x.x.x
I try to set up an PostgreSQL slave using Docker and a bash script (I use Coreos). I have not found any way to supply a valid.pgpass.I know I could create a PGPASSWORD environment variable, but do not wish to do so for security reasons (as stated here,http://www.postgresql.org/docs/current/static/libpq-envars.html),, and because this password should be accessible every time the recovery.conf file is used (for the primary_conninfo variable).Dockerfile# ... # apt-get installs and other config # ... USER postgres # Create role and db RUN /etc/init.d/postgresql start &&\ psql --command "CREATE USER replicator WITH ENCRYPTED PASSWORD 'THEPASSWORD';" &&\ psql --command "CREATE DATABASE db WITH OWNER replicator;" # Set the pg_pass to allow connection to master ADD ./pgpass.conf /home/postgres/.pgpass # pgpass.conf comes my root git folder USER root RUN chmod 0600 /home/postgres/.pgpassIn my bash file# ... pg_basebackup -h host.of.master.ip -D /var/pgbackup/backup_data -U replicator -v -P # ...The problems seems to be that the pgpass file is not read. It seems I should use the password of the user I'm sudoing to (https://serverfault.com/questions/526170/psql-fe-sendauth-no-password-supplied), but in this case the replicator role is naturally not an available bash user. (Note that neither copying the pgpass to /home/root not /home/postgres works).Note: my pgpass file and by remote database conf# pgpass.conf host.of.master.ip:5432:replication:replicator:THEPASSWORD host.of.master.ip:5432:*:replicator:THEPASSWORD # pg_hba.conf host replication replicator host.of.slave.ip/24 md5
.pgpass for PostgreSQL replication in Dockerized environment
Streaming the docker build logs can be done using the low-level APIs given in docker-py as follows (this also needs import os, json, docker and click at the top): here = os.path.dirname(__file__) dockerfile = os.path.join(here, 'app', 'nextdir') docker_client = docker.APIClient(base_url='unix://var/run/docker.sock') generator = docker_client.build(path=dockerfile, tag='app:v.2.4', rm=True) while True: try: output = next(generator) output = output.decode('utf-8').strip('\r\n') json_output = json.loads(output) if 'stream' in json_output: click.echo(json_output['stream'].strip('\n')) except StopIteration: click.echo("Docker image build complete.") break except ValueError: click.echo("Error parsing output from docker image build: %s" % output)
I am building an image from a Dockerfile using the docker python API.import os import sys import os.path import docker client = docker.from_env() try: here = os.path.dirname(__file__) no_cache = False dockerfile = os.path.join(here, 'app', 'nextdir') image = client.images.build(path=dockerfile, tag='app:v.2.4', nocache=no_cache, stream=True)The operation finishes successfully, however I was not able to stream the logs. The API says:Return a blocking generator you can iterate over to retrieve build output as it happenswhen stream=True.How can I get these logs in python?
How to stream the logs in docker python API?
LowerDir: these are the read-only layers of an overlay filesystem. For docker, these are the image layers assembled in order.UpperDir: this is the read-write layer of an overlay filesystem. For docker, that is equivalent to the container specific layer which contains changes made by that container.WorkDir: this is a required directory for overlay, it needs an empty directory for internal use.MergedDir: this is the result of the overlay filesystem. Docker effectively chroot's into this directory when running the container.For more on overlay filesystems (overlay2 is a newer release, but I don't believe there are any user visible changes), see the kernel docs:https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt
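The roles of these directories are easy to see with a bare overlay mount outside Docker; a sketch (needs root and a kernel with overlay support):

mkdir -p lower upper work merged
echo "from an image layer" > lower/base.txt

sudo mount -t overlay overlay \
  -o lowerdir=$PWD/lower,upperdir=$PWD/upper,workdir=$PWD/work \
  $PWD/merged

# merged/ shows the union; writes land in upper/ (the container layer)
# while lower/ (the image layers) stays untouched
echo "written by the container" | sudo tee merged/change.txt
ls lower upper merged
sudo umount merged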
Below is the manifest file entry snippet (docker inspect image redis) of redis image"GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/bd512eb256c8aa56cbe9243d440a311820712d1a245fe6f523d39d19cd6c862d/diff:/var/lib/docker/overlay2/7fa1e90f35c78fc83c3a 4b86e36e45d742383b394adf9ce4cf9b339d919c9cbe/diff:/var/lib/docker/overlay2/2c1869386b5b8542959da4f0173a5272b9703326d619f27258b4edff7a1dbbf9/diff:/var/lib/docker/overlay2 /23ba3955c5b72ec17b9c409bd5233a3d92cbd75543c7d144b364f8188765788e/diff:/var/lib/docker/overlay2/87d8a92919103e8ff723221200acb36e17c611fa499571ab183d0f51458e6f24/diff", "MergedDir": "/var/lib/docker/overlay2/e503ed41978e99fe9b71a4225763a40b7988e9a4f31d4c06ef1ec1af46b0b6ab/merged", "UpperDir": "/var/lib/docker/overlay2/e503ed41978e99fe9b71a4225763a40b7988e9a4f31d4c06ef1ec1af46b0b6ab/diff", "WorkDir": "/var/lib/docker/overlay2/e503ed41978e99fe9b71a4225763a40b7988e9a4f31d4c06ef1ec1af46b0b6ab/work" }, "Name": "overlay2" },whereoverlay2filesystem is used by docker image and container.WithinGraphDriverentry of manifest,what doesLowerDir/MergedDir/UpperDir/WorkDirindicate?
docker image - merged/diff/work/LowerDir components of GraphDriver
Thanks to @Moema's pointer I've come up with a solution to this. The default configuration of lets-nginx needs to be tweaked as follows to make nginx pick up IP changes: resolver 127.0.0.11 ipv6=off valid=10s; set $upstream http://${UPSTREAM}; proxy_pass $upstream; This uses docker swarm's resolver with a TTL and sets a variable, forcing nginx to refresh name lookups in the swarm.
I'm running nginx vialets-nginxin the default nginx configuration (as per the lets-nginx project) in a docker swarm:services:ssl: image: smashwilson/lets-nginx networks: - backend environment: -[email protected]- DOMAIN=api.finestructure.co - UPSTREAM=api:5000 ports: - "80:80" - "443:443" volumes: - letsencrypt:/etc/letsencrypt - dhparam_cache:/cache api: image: registry.gitlab.com/project_name/image_name:0.1 networks: - backend environment: - APP_SETTINGS=/api.cfg configs: - source: api_config target: /api.cfg command: - run - -w - tornado - -p - "5000"apiis a flask app that runs on port 5000 on the swarm overlay networkbackend.When services are initially started up everything works fine. However, whenever I update theapiin a way that makes theapicontainer move between nodes in the three node swarm,nginxfails to route traffic to the new container.I can see in the nginx logs that it sticks to the old internal ip, for instance 10.0.0.2, when the new container is now on 10.0.0.4.In order to make nginx 'see' the new IP I need to either restart the nginx container ordocker execinto it andkill -HUPthe nginx process.Is there a better and automatic way to make the nginx container refresh its name resolution?
nginx does not automatically pick up dns changes in swarm
I found a few issues that mention this exact problem, like these: "Exception patterns in .dockerignore do not support wildcard directories" and "dockerignore does not respect the "special wildcard **"" (comment). And it seems like it's not going to be fixed any time soon :(
I'm having trouble with the.dockerignorefile. This is my project structure:file.sh file.js file.go file.py subdir/ file2.go file2.py .dockerignore DockerfileAccording to the.dockerignoredocumentation:(...) you may want to specify which files to include in the context, rather than which to exclude. To achieve this, specify * as the first pattern, followed by one or more ! exception patterns.And:Lines starting with ! (exclamation mark) can be used to make exceptions to exclusions.Finally:Docker also supports a special wildcard string ** that matches any number of directories (including zero). For example, **/*.go will exclude all files that end with .go that are found in all directories, including the root of the build context.Based on that, this is my.dockerignorefile:# ignore everything * # whitelist # --------- # python files !**/*.pyWhen I build and run the container and inspect the files, I get this:file.pyThesubdirdirectory is missing,subdir/file2.pyshould be included. It works if I add!subdir/**/.pyto my.dockerignore, but the idea is to include any.pyfile in any subdirectory.This is the file structure that should be present in the container:file.py subdir/ file2.pyWhat's going on here?
.dockerignore fails to include files in subdirectories with !**/*.extension pattern
Finally solved the problem. The tcsh shell doesn't handle most of the signals, like SIGTERM, which is the signal sent by docker when stopping the container. So I changed the script to use the bash shell, and whenever I want to run a tcsh command I just do it like this: /bin/tcsh ./my-command So, my docker-entrypoint.sh is like this: #!/bin/bash # SIGTERM-handler: this function will be executed when the container receives the SIGTERM signal (when stopping) term_handler(){ echo "***Stopping" /bin/tcsh ./my-cleanup-command exit 0 } # Setup signal handlers trap 'term_handler' SIGTERM echo "***Starting" /bin/tcsh ./my-command # Running something in foreground, otherwise the container will stop while true do #sleep 1000 - Doesn't work with sleep. Not sure why. tail -f /dev/null & wait ${!} done
I'm having some trouble to understand how I can do some cleanup when the container is stopped.To make it easier, I prepared a sample to reproduce the problem.Here are the contents of my files:DockerfileFROM opensuse:latest # Install tcsh (non-interactive mode) RUN zypper -n in tcsh # Create user RUN useradd -ms /bin/tcsh dummyuser # Set the user USER dummyuser # Change Working Dir WORKDIR /home/dummyuser # Copy entrypoint script COPY docker-entrypoint.sh $HOME # Starter Script ENTRYPOINT ["./docker-entrypoint.sh"]docker-entrypoint.sh#!/bin/tcsh echo "Starting" onintr cleanup # Running something in foreground, otherwise the container will stop while (1) sleep 1000 end exit 0 cleanup: onintr - echo "cleanup on going" exit 0Makedocker-entrypoint.shexecutable:chmod 744 docker-entrypoint.shBuild the image:docker build -t my-dummy-img .Notice that I'm usingtcshshell.If you take a look at thedocker-entrypoint.shyou can see that I'm waiting to cath the interrupt (onintr cleanup) and call a cleanup method.Now, these are the commands I run:mstack/dummy-project> docker run --name my-service -ti -d my-dummy-img ps -eaf da1dc21281a58e384f2ff34aa49a82019214e204e6d7a77ff54e8c96e005f913 mstack/dummy-project> docker logs my-service Starting mstack/dummy-project> docker stop my-service my-service mstack/dummy-project> docker logs my-service Starting mstack/dummy-project>Here is the problem, I would expect that after the seconddocker logs my-servicethe output would be:Starting cleanup on goingInstead of onlyStartingBecause docker is supposed to send a signal when stopping...On the other hand, if I run:docker run --name my-service-attached -ti my-dummy-img ps -eafAnd hitCTRL+C, I can see the expected output.What am I missing here? I hope the question is clear enough.BTW, I used the following to articles as guideline:Gracefully Stopping Docker ContainersTrapping signals in Docker containers
Gracefully Stopping Docker Containers
It isn't necessary to always apk update/upgrade in your Dockerfile. However, it surely isn't a bad idea. Especially if you install packages with apk, you should make sure that the package list is up-to-date, so you always get the latest version of the package you want to install. Installing security updates at build time does matter, especially if your base image is not that recent. But I wouldn't call it necessary, and it also depends on how important it is for your base image to be up-to-date.
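For what it's worth, a common pattern is to skip the separate update entirely by using apk's --no-cache flag, which fetches a fresh index for that one command and leaves no index files in the layer; a sketch (package names are placeholders):

cat > Dockerfile.alpine <<'EOF'
FROM alpine:3.19
# Fetches an up-to-date index just for this command, so no prior `apk update` is needed
RUN apk add --no-cache curl ca-certificates
EOF

docker build -f Dockerfile.alpine -t alpine-demo .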
I'm creating a multi-stage build Dockerfile. In the deployment stage that will actually run the program, I'm running RUN apk update && apk upgrade --no-cache. Should I also have this statement in my build stage?
Is it necessary to RUN apk update && apk upgrade in a docker build stage?
You're missing a setup step for ruby-build: you need to run its install.sh after you clone it.
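A sketch of the missing step, matching the clone path already used in the Dockerfile (the PREFIX value is an assumption; ruby-build's install.sh places rbenv-install on the PATH, which is what makes `rbenv install` start working):

RUN git clone https://github.com/sstephenson/ruby-build.git /usr/local/rbenv/plugins/ruby-build \
 && PREFIX=/usr/local /usr/local/rbenv/plugins/ruby-build/install.sh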
I am trying to setup rbenv with a Dockerfile, but this just fails onrbenv install. I do have ruby-build in there, it just doesn't seem to work.Relevant bits of the Dockerfile (largely lifted fromhttps://gist.github.com/deepak/5925003):# Install rbenv RUN git clone https://github.com/sstephenson/rbenv.git /usr/local/rbenv RUN echo '# rbenv setup' > /etc/profile.d/rbenv.sh RUN echo 'export RBENV_ROOT=/usr/local/rbenv' >> /etc/profile.d/rbenv.sh RUN echo 'export PATH="$RBENV_ROOT/bin:$PATH"' >> /etc/profile.d/rbenv.sh RUN echo 'eval "$(rbenv init -)"' >> /etc/profile.d/rbenv.sh RUN chmod +x /etc/profile.d/rbenv.sh # install ruby-build RUN mkdir /usr/local/rbenv/plugins RUN git clone https://github.com/sstephenson/ruby-build.git /usr/local/rbenv/plugins/ruby-build ENV PATH /usr/local/rbenv/shims:/usr/local/rbenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin # Set to Ruby 2.0.0-p247 RUN rbenv install 2.0.0-p247 RUN rbenv rehash RUN rbenv local 2.0.0-p247Error:Step 21 : RUN rbenv install 2.0.0-p247 ---> Running in 8869fa8f0651 rbenv: no such command `install' Error build: The command [/bin/sh -c rbenv install 2.0.0-p247] returned a non-zero code: 1 The command [/bin/sh -c rbenv install 2.0.0-p247] returned a non-zero code: 1
Using rbenv with Docker
As a general rule, changing the settings or code running inside a container involves deleting and restarting the container. This is totally normal, and docker-compose up will do it for you when necessary. (Remember to make sure any data you care about is stored outside the container.) At a Docker API level, there is only a limited set of things that can be changed in the Update a container call, and labels aren't one of them. That means anything that manages a container, whether direct docker commands or Docker Compose, must always delete and recreate the container to change its labels.
Can I update labels on a container using docker-compose without restarting the container? Ideal scenario: - change labels in docker-compose.yml - save docker-compose.yml - run a command to update the labels without restarting the container
docker-compose - Can I update traefik labels without restarting a container?
At the moment, this is a recurring issue with no simple answer. There are two common approaches I hear of. The first involves chowning the directory before using it: RUN mkdir -p /home/jboss/myhub/logs ; chown -R jboss:jboss /home/jboss/myhub/logs USER jboss In case you need to access the files from your host system with a different user, you can chmod the files that your app created inside the container with your jboss user: $ chmod -R +rw /home/jboss/myhub/logs The second approach involves creating the files with an appropriate chmod in the Dockerfile (or on your host system) before running your application: $ touch /home/jboss/myhub/logs/app-log.txt $ touch /home/jboss/myhub/logs/error-log.txt $ chmod 766 /home/jboss/myhub/logs/app-log.txt $ chmod 766 /home/jboss/myhub/logs/error-log.txt There certainly are more ways to achieve this, but I haven't yet heard of any more "native" solutions. I'd like to find an easier/more practical approach.
I'm trying to mount a volume into my container from the docker run command. It seems like the folder is always created as root instead of the container user. This means I'm lacking rights on the folder (can't create or write files for logging). Doing some testing using this command: docker run -it --entrypoint /bin/bash -v $PWD/logs:/home/jboss/myhub/logs:rw myImage:latest If I now run the command ls -ld /logs I get the result: drwxr-xr-x 2 root root 4096 Jun 12 13:01 logs/ Here we can see that only the owner has write rights, and root is the owner. I would expect (I want) jboss to be the owner of this folder, or at least that all users have read/write rights given the :rw option in the -v parameter. What am I not understanding here? How can I get it to work like I want?
Mounted folder created as root instead of current user in Docker
No, it's not defined. You have overwritten one services block with the other one. You should fix the configuration: version: '3.5' services: apache: build: ./Docker image: apache:latest ports: - "80:80" restart: always db: image: mariadb:latest restart: always environment: MYSQL_ROOT_PASSWORD: example depends_on: - "apache" adminer: image: adminer restart: always ports: - "8080:8080" depends_on: - "db" networks: default: name: frontend-network
I'm having this error:ERROR: Service 'db' depends on service 'apache' which is undefined.Why is it saying that apache is undefined? I check the indentation. Should be the right one.version: '3.5' services: apache: build: ./Docker image: apache:latest ports: - "80:80" restart: always networks: default: name: frontend-network services: db: image: mariadb:latest restart: always environment: MYSQL_ROOT_PASSWORD: example depends_on: - "apache" adminer: image: adminer restart: always ports: - "8080:8080" depends_on: - "db" networks: default: name: frontend-network
Docker-Compose: Service xxx depends on service xxx which is undefined