Docker

Docker is a type of virtualization for GNU/Linux that lets programs run and share resources without a full OS for each image, while still isolating each 'vm'.

It works a lot of the time, but when something doesn't work you will be completely screwed: there will be no logging or normal utilities you'd need to troubleshoot, or it will add complexity that makes some aspect of what is going on undecipherable. In my 5+ years of using it, I can say: Docker is a trap. "How do we solve this software problem? More software."

I use docker in three places where I shouldn't use it, and in the one place where I should use it, I don't. So it's software where you have to know when to use it, but also when NOT to use it. The mistake is to think that Docker will always be a good idea, and this is not so.

==General Tips==

It helps to read a book on a subject, and then keep it as a reference. ''Using Docker'' by Adrian Mouat is a decent book. In fact, you should be reading books on any subject that interests you. Physical paper books, too.

Docker is x86_64 only; ARM has a separate build. There is no i386 build.

You will want 'some' RAM. I had 1GB on a P4 machine, and that was not enough. 4GB was enough.

You should always use docker compose. Docker is possible to run on the command line, but with a compose file you can write everything down in a much simpler fashion. Use compose. It's a separate install, currently. Install it. ''Seriously, just ignore the docker command lines. I consider them useless. More of a red herring for rookies.''

===ARM is SOL===

I tried to use Docker on ARM, and while you might be able to install docker, you will find that images for your applications may not be there, e.g. you will find Nginx, but you won't find Gitlab. This means that docker is pretty much x86-64 only (exception: if you make custom Dockerfiles).

==Docker Commands==

Here are commands you need to know. Just the necessary ones.

{{cmd|docker-compose up -d}}

Starts the containers in the docker compose file, if they aren't already started. The -d detaches from the stdout logging. You don't need to use stdout logging, you can use docker logs, but it's there if you want it.

{{cmd|docker ps}}

Lists running containers. If one fails to start, you'll see it missing from here.

{{cmd|docker logs <containername>}}

Gives you some logging output from the container. Often enough to troubleshoot. (also see tips below).

{{cmd|docker exec -it <containername> /bin/bash}}

This will get you a shell in the docker container. From here you can do what you need to. Most are Debian-based, and need apt-get install less nano or whatever program you are missing. Ping is missing from possibly all containers, so if you want to test via ping, you'll have to apt-get it.

{{cmd|docker-compose restart}}

This will restart all containers. However, I don't recommend it. Initting containers can get corrupted this way, and it's much easier to restart a single faulty container via...

{{cmd|docker restart <containername>}}

This will restart one single container.

{{cmd|docker cp <containername>:/dir/to/file dest}}

You can copy files from the local machine to the container, or vice versa, with this. Extremely useful.
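
A quick sketch of both directions (container and file names hypothetical):
<pre>
# container -> host
docker cp mynginxserver:/etc/nginx/nginx.conf ./nginx.conf.bak
# host -> container
docker cp ./index.html mynginxserver:/var/www/html/index.html
</pre>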

===Deleting Containers===

Less often, you might want to know

<pre>
#!/bin/bash
docker rm $(docker ps -a -q)
docker rmi $(docker images -q) --force
</pre>

This starts over from scratch. This is how easy it is to reboot a docker setup from square one. Note: the second command (rmi) is only needed if you want to remove the base images. Needed when updating to a new stable image. Not needed when changing other parameters not related to the base image. Also, this deletes all containers, which maybe you don't want to do if you either A) only want to delete certain images you are testing or B) have some containers with custom changes saved, and not backed up elsewhere.
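
Hedged aside: newer docker releases also ship a built-in cleanup command that does roughly the same thing in one step (the -a flag also removes unused images; it prompts before deleting):
<pre>
docker system prune -a
</pre>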

===Updating===

<pre>
docker images                     # shows the out of date image
docker pull mysql                 # downloads the new image
docker images                     # shows two versions of mysql: old and new
docker stop some_container_name
docker rmi -f cjklfs23404         # where cjklfs23404 is the old image ID under docker images
docker-compose up -d mysql
</pre>

====Containers don't all Restart?!====

Make sure to have a '''restart: always''' in your docker compose for each container. Otherwise, a container won't necessarily start when docker is restarted.
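
A minimal sketch of where it goes (service and image names hypothetical):
<pre>
myapp:
    image: nginx:1.24
    restart: always
</pre>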

===Search for images===

{{cmd|docker search <imagename>}}

====Search for Versions of an image====

What if you want to see what versions are available for a given image? Say you want php, but don't know which to get. hub.docker.com is broken: requires js, slow, and bad. You can't easily get a list of images on it.

Solution:

<pre>
#!/bin/bash
#https://stackoverflow.com/questions/28320134/how-can-i-list-all-tags-for-a-docker-image-on-a-remote-registry
#$1 means the first parameter you pass to this script will be searched
#e.g. working: debian
#note: some have multiple layers
#e.g. gitea/gitea is literally entered as $1 == gitea/gitea  (tested, working)
# while debian is just debian. so be aware.

wget -q https://registry.hub.docker.com/v1/repositories/$1/tags -O -  \
| sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n'  | awk -F: '{print $3}'
</pre>

Ref: https://stackoverflow.com/questions/54418542/dockerhub-listing-all-available-versions-of-a-given-image

https://stackoverflow.com/questions/28320134/how-can-i-list-all-tags-for-a-docker-image-on-a-remote-registry
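
If the old v1 endpoint stops responding (it has reportedly been retired), here is a sketch against the v2 Hub API. Assumptions: the JSON layout as of writing, and official images need the library/ prefix (e.g. pass library/debian rather than debian):
<pre>
#!/bin/bash
# crude tag lister against the v2 API; only fetches the first 100 tags
wget -q "https://hub.docker.com/v2/repositories/$1/tags?page_size=100" -O - \
| tr ',' '\n' | grep '"name"' | awk -F'"' '{print $4}'
</pre>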

===Save Changes to Docker Image===

{{cmd|docker commit}}

See official documentation.
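
A minimal usage sketch (container and image names hypothetical):
<pre>
docker commit mycontainername myimage:snapshot1
</pre>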

===Set IP Address on docker-compose===

<pre>
mariadb:
    image: mariadb:10
    container_name: mariadb
    command:
      - mysqld
      - --character-set-server=utf8
      - --collation-server=utf8_bin
    environment:
      - MYSQL_ROOT_PASSWORD=admin
      - MARIADB_DATABASE=guy
      - MARIADB_USER=user
      - MARIADB_PASSWORD=passwd
    volumes:
      - ./mariadb/appdata:/var/lib/mysql
      - ./mariadb/config:/etc/mysql
    networks:
      somenamefornetwork:
        ipv4_address: 192.168.12.11
    ports:
      - "10055:3306"

networks:
  somenamefornetwork:
    ipam:
      driver: default
      config:
        - subnet: "192.168.12.0/24"
</pre>

Add the networks section to each container.

===Store /var/lib/docker somewhere else===

i.e. instead of /var/lib/docker, put the files on external storage. Ubuntu/Debian: edit your /etc/default/docker file with the -g option:

<pre>
DOCKER_OPTS="-dns 8.8.8.8 -dns 8.8.4.4 -g /mnt"
</pre>

ref: https://forums.docker.com/t/how-do-i-change-the-docker-image-installation-directory/1169

then:
* stop docker compose
* stop docker service
* start docker service
* start docker compose

The above guide also mentions using a symlink; that's a different approach, but either should work. This is useful e.g. if you are on a VPS/SBC with limited storage (but have external storage).
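
Note: on newer Docker releases the -g daemon flag is deprecated in favour of data-root in /etc/docker/daemon.json. A minimal sketch (the /mnt/docker path is hypothetical), assuming you restart the docker service afterwards:
<pre>
{
  "data-root": "/mnt/docker"
}
</pre>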

==Dockerfile==

Dockerfiles and compose are slightly confusing. Sometimes you see images run an entrypoint script, sometimes not. In any case, Dockerfiles are fundamental to reproducible builds.

Basic usage is:

* docker-compose builds the container from the Dockerfile (say you start with alpine, and install some programs)
* that container you built is used from then on.
* if you want to change the Dockerfile, make your changes and then you must call docker-compose up --build; otherwise it will use the old image.

For a basic apache server, you might call the following in a CMD at the end of the Dockerfile:

<pre>
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
</pre>

When in doubt, work off of existing examples.

An entrypoint.sh is typically called from the Dockerfile, not from docker-compose. Alternatively, the above CMD can be used.
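
To tie it together, a minimal sketch of such a Dockerfile (the base image and paths are assumptions, not from this wiki):
<pre>
FROM debian:bookworm
RUN apt-get update && apt-get install -y apache2 && rm -rf /var/lib/apt/lists/*
COPY ./site/ /var/www/html/
# something must run in the foreground, or the container exits immediately
CMD ["apache2ctl", "-D", "FOREGROUND"]
</pre>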

===Dockerfile with docker-compose===

basically:

get a Dockerfile
<pre>
FROM nginx:latest
COPY ./hello-world.html /usr/share/nginx/html/
</pre>
build the Dockerfile
<pre>
docker build -t my-nginx-image:latest .
</pre>
docker images shows the new image
<pre>
docker images
</pre>
call it in the docker-compose
<pre>
version: '3.9'
services:
  my-nginx-service:
    container_name: my-website
    image: my-nginx-image:latest
    cpus: 1.5
    mem_limit: 2048m
    ports:
      - "8080:80"
</pre>

You usually need an entrypoint.sh as well, or you can call a command in the Dockerfile such as (from before)

<pre>
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
</pre>

so that some command runs when the container starts; otherwise it just closes. The entrypoint or CMD goes at the end of the Dockerfile, not the docker-compose.


Note that my-nginx-image is just the name you gave the image when building. The file itself is just named Dockerfile by default; it can be any name (passed to docker build with -f).

ref: http://web.archive.org/web/20230128053439/https://www.theserverside.com/blog/Coffee-Talk-Java-News-Stories-and-Opinions/Dockerfile-vs-docker-compose-Whats-the-difference
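
For instance, a one-liner sketch (the alternate file name is hypothetical):
<pre>
docker build -f MyDockerfile -t my-nginx-image:latest .
</pre>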

==Volumes==

Volumes can be handled in at least two ways. The simplest is to store files in a local directory which is then mapped to a path inside the container.

e.g. docker-compose:

<pre>
nginx:
    image: nginx:####
    container_name: mynginxserver
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - /etc/letsencrypt/:/etc/letsencrypt/
      - ./local_dir/:/var/www/html/
      - ./local_logs/:/var/log/nginx/
    ports:
      - 80:80
      - 443:443
    restart: always
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "10"
</pre>

Here, in the local dir where docker-compose is run, ./local_dir/ or ./local_logs/ will store the container's html or logs respectively. Simple.


However, if you just have a volume, without a local directory, then it will save in /var/ somewhere (unintuitive... bad). e.g. from Dolibarr:

<pre>
The Dolibarr installation and all data beyond what lives in the database
(file uploads, etc) are stored in the unnamed docker volumes
/var/www/html and /var/www/documents. The docker daemon will store
that data within the docker directory /var/lib/docker/volumes/....
That means your data is saved even if the container crashes, is stopped
or deleted.
</pre>

==Specific Tips==

===YAML is space sensitive===

When you edit the .yml file for docker-compose, you have to hit spaces in a certain pattern (tabs not allowed). This is absurd, but just be aware. The errors are cryptic, and it's often just because the spacing doesn't stick to what it expects.

===If you restart a container's namesake process, it will probably restart / reset the container===

So if you are troubleshooting an apache container, you edit some files, then /etc/init.d/apache2 restart, uh oh... You just undid all the edits you made, if they aren't in a permanent volume. (The namesake process is typically PID 1; when it exits, the container exits, and it comes back up fresh from the image.) You can shell in, make edits, and then exit the shell, but a service restart often resets the container.

===Consider a single reverse proxy, to handle multiple websites===

There are many ways to do this. I use an nginx proxy from scratch. You can also use some containers that are built for this purpose (I personally think it's bloated, but a lot of people use Jason Wilder's proxy): https://web.archive.org/web/https://github.com/jwilder/nginx-proxy - A lot of people swear by this, but I think it strays too far into the high level.

===If you use a single reverse proxy, Lets Encrypt can be done easily===

In this scenario you would have certbot on the host, and a local volume that the proxy has access to, which is the webroot of the Lets Encrypt scripts. The nginx proxy entry looks something like this:

<pre>
location ^~ /.well-known {
    alias /var/www/html/.well-known/;
    autoindex on;
}
</pre>

And this is put in every server declaration of nginx.conf. Real simple, real easy. The docker compose of the nginx proxy is something like:

{{cat|docker-compose.yml|
nginx:
    image: nginx:latest
    container_name: custom_name_for_my_proxy
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - /etc/letsencrypt/:/etc/letsencrypt/
      - ./webroot/:/var/www/html/
}}

The volumes section is extremely simple, don't be scared. There are two entries: local and remote. You specify what folder on the host will be mapped into the container at the remote path you specify. So, the host runs certbot at /etc/letsencrypt, and this folder is cloned into the nginx proxy container at the same location. Finally, webroot must be set in certbot, but it prompts you for this. And if you forget or get it wrong, it can be configured somewhere in /etc/letsencrypt (it's a one-liner text entry).
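
A hedged sketch of the matching certbot call on the host (the domain and webroot path are hypothetical):
<pre>
certbot certonly --webroot -w /path/to/compose_dir/webroot -d example.com
</pre>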

===Give every Container a Containername===

This makes it easier to refer to them later. All you need to do in the compose file is include container_name: something. Much better than the gibberish they give these names if you don't include it.

===Beware of Interrupting Initting Containers===

When you first build a container, it might take 30-60 or more seconds to do whatever it needs to do. If, before then, you restart it... it may get corrupted. This has happened to me more than once. When you are testing a new container and it doesn't seem to work for some inexplicable reason, create a container with a new name (it will create a new one), or delete the first one and start it again.

===Put Apache or Program logs from the Container in a volume that is locally accessible===

This means you want a volume something like ./containerA_files/logs:/var/log/apache2/ so that you can monitor the logs from your host machine easily. docker logs doesn't have everything.

===Only Restart Containers you need to Restart===

You can restart everything with docker-compose restart, but it's faster, and less prone to break initting containers, if you docker restart containername. Do the latter.

===Volumes Mounting Over Existing Directories===

As discussed here: https://web.archive.org/web/https://github.com/moby/moby/issues/4361, if you add a volume to an existing container, it will seem to delete the folder's contents (the mount shadows whatever the image had at that path). I've seen mixed behaviour with this. Sometimes it deletes it even if you start a new container with the folder... other times it has not. In any case, just docker cp the files to the folder, then add the volume mount. This may not be the most graceful solution for upgrades, but it will work. Best practices are here (see: dbxt commented Dec 14, 2014): https://web.archive.org/web/https://github.com/docker-library/wordpress/issues/10

===If you edit a docker-compose file you must restart the container with docker-compose===

If you make a change in docker compose, you must docker stop service, then docker-compose up -d service. Otherwise the changes will not take effect. The mistake here would be thinking that you could just docker stop service, then docker start service... That doesn't work, because start reuses the old container instead of recreating it.
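
Spelled out (the service name is hypothetical):
<pre>
docker stop myservice
docker-compose up -d myservice
</pre>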

===Keep verbose logging off your HDD with RAM only logging===

If you have e.g. apache/nginx write to your HDD with every website visit, it will quickly wear out your HDD (esp. SSD). Put verbose logging in RAM only with a tmpfs mount. https://docs.docker.com/storage/tmpfs/

e.g. (ref: https://stackoverflow.com/questions/41902930/docker-compose-tmpfs-not-working)

<pre>
services:
  ubuntu:
    image: ubuntu
    command: "bash -c 'mount'"
    volumes:
      - cache_vol:/var/cache
      - run_vol:/run

volumes:
  run_vol:
    driver_opts:
      type: tmpfs
      device: tmpfs
  cache_vol:
    driver_opts:
      type: tmpfs
      device: tmpfs
</pre>

This also allows you to share the tmpfs mounts if needed.

Access it from the host via:

{{cmd|docker exec -it nginx_server tail -F /run/shm/access.log}}

Note that you might think you can do

{{cmd|docker container run my_nginx_server tail -F /run/shm/access.log}}

But that starts a new container from an image; it won't run against the running container. To 'execute' a command on a running or existing container, the command you want is exec (also used to interactively start a bash shell above).

EDIT: Use with discretion. Active data in RAM may use power, and tmpfs logs are lost when the container stops.

===Mysql Backup / Restore===

<pre>
# Backup
docker exec CONTAINER /usr/bin/mysqldump -u root --password=root DATABASE > backup.sql

# Restore
cat backup.sql | docker exec -i CONTAINER /usr/bin/mysql -u root --password=root DATABASE
</pre>

ref: https://gist.github.com/spalladino/6d981f7b33f6e0afe6bb

You may not need to do this if you keep the DB (ibdata files) in a local directory/volume via docker-compose.yml, e.g.:

<pre>
mydb:
    image: mysql:9999
    volumes:
      - ./db_files:/var/lib/mysql
    restart: always
</pre>

===Limit docker logs===

Docker will keep lots of logs if you don't manage it.
* https://github.com/docker/compose/issues/1083
* https://docs.docker.com/compose/compose-file/#logging
* https://stackoverflow.com/questions/31829587/docker-container-logs-taking-all-my-disk-space

View (all) logs with

{{cmd|docker-compose logs}}

To limit a service's logs in docker-compose do the following:

* find a container's/service's entry
* append to the end (watch the white space & no tabs)
<pre>
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "10"
</pre>

The rule with logs is: nothing, unless you need it. Then everything. Any unnecessary logging taxes the CPU / HDD. Which leads me to...

===View details about a container===

To view paths, configuration settings, etc., see

{{cmd|docker inspect <name or id>}}

It's easier to grep these; the search in less wasn't working for me to find .LogPath, for example.

<pre>
docker inspect mycontainer | grep log
</pre>

Will show where the log files are located.
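
Hedged aside: docker inspect also has a --format flag (a Go template) that pulls a single field directly, e.g.:
<pre>
docker inspect --format '{{.LogPath}}' mycontainer
</pre>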

===Installing Docker-compose from pip or github binary not Debian===

See the official docker documentation, which gives the URL github.com/docker/compose/releases, or use pip.

For a while pip3 was required; however, as of 2022 or 2023, Debian has a new enough compose, so this is unnecessary.

<pre>
apt-get install python3-pip
pip3 install docker-compose
</pre>

===Root on Alpine Images===

Alpine images: you can't su to root (su: must be suid to work properly). Use the flag --user root when entering the container.

{{cmd|docker exec -it --user root mycontainername bash}}

And alpine images don't have telnet. It's hard to find; it's in busybox-extras:

<pre>
/app # apk add busybox-extras
</pre>


Remember when in a docker container that you (should) always be able to install whatever troubleshooting items you need, to replicate a normal bare metal install, and then troubleshoot from there, e.g. telnet, tcpdump, etc...

e.g.: I tried to connect to an smtp email server (465, 587) from a VPS but was blocked. Either the VPS was blocking outgoing, or the email servers were blocking incoming. Since I tried three email servers which were all blocked, it looked like the VPS. But I didn't want to be a nuisance customer, so I did a reasonable amount of research before contacting support.

It turned out the VPS provider was blocking packets. It was a bit of a phantom block as they would go outbound from each VPS but then just vanish. Even within a private network where they weren't headed for the WAN, but from VPS to VPS. Ugh. Two hours. Now I know...

===Accessing files on the container, when it's not running===

<pre>
/var/lib/docker/volumes/somethinghere/_data
</pre>

In this scenario, I was trying to remove a file on a non-running container (the file was an install.lock file). Note that this is not the full filesystem of the container, just (I think) files that diff from the original. I looked for the file with:

<pre>
tree /var/lib/docker > /tmp/dockertree
cat /tmp/dockertree | grep -C 5 install.lock
</pre>

However, that didn't work. In this case, I had multiple copies of dolibarr set up on the filesystem; this path was probably from a different one. The answer was to just remove the install.lock from the running container (shell into it with docker exec -it <containername> /bin/bash). Another thing to check here is the docker inspect command, which tells you the paths of all files.

==Troubleshooting==

===Cannot start container: [0] Id already in use #20570===

* https://github.com/moby/moby/issues/20570

After an update, I was unable to start a container. The solution was found to be:

{{cmd|docker-compose down -v}} then {{cmd|docker-compose up -d}}

The container was viewable with

{{cmd|docker ps -a}} (not just docker ps)

==Poor Design==

===Creating a docker-compose with a stable or latest tag===

If you make a docker-compose file, you should pin it to a marked version. You can later edit it to use stable or latest, but one of the strengths of Docker is version control. If you create a docker-compose, always test on a specific version, should you need to roll back in the future. Otherwise, your scripts will fail in 2-20-200 years when the conf files are not supported anymore.

e.g. https://github.com/nucreativa/frontaccounting-docker (6th commit). Here, the nginx config is no longer supported as vhosts; it must be in server blocks. This would've been avoided by denoting a fixed nginx release instead of tag:latest.
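
What pinning looks like in the compose file (the version number is hypothetical; check what's current before pinning):
<pre>
# risky: moves under you over time
image: nginx:latest
# reproducible: pinned to a tagged release
image: nginx:1.24
</pre>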

===Repos that remove tagged versions===

I had a repo that removed one of its tagged releases. This would be equivalent to Debian just dropping one of its archives (such as not allowing you to download anything for stretch anymore). "Endless treadmill of software". I don't need to update a tool that is good enough.

==Good Design==

===Testing updates===

It's easy to test updating of containers, since all files can be kept in one folder and described by a docker-compose. So if you simply make a duplicate of the folder, and change the image tag version number in the compose file, then hopefully it will migrate all your data to the new release, or give some kind of walkthrough to do this within the container. And you can easily copy or delete the folder to do these tests without worrying about database exports or config files.

==See Also==

* [[samba_docker_by_dlandon]]
* [[zencart_docker_compose]]

==External Links==

* https://github.com/LeCoupa/awesome-cheatsheets/blob/master/tools/docker.sh Information dense cheatsheet (unfortunately github)
* https://gist.github.com/spalladino/6d981f7b33f6e0afe6bb A million idiots type "thanks". Also the source for the mysql backup command.
* https://linuxhint.com/how-to-use-docker-to-make-local-development-breeze/ http://web.archive.org/web/20221221035042/https://linuxhint.com/how-to-use-docker-to-make-local-development-breeze/ - An example of developing in docker. While this is technically for Visual Studio, you can just repurpose the docker-compose file (albeit this example is for python). todo: find/build other examples for other programming languages. It also gets tricky when you include GUI environments (should still be feasible, though).

{{GNU\Linux}}