<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[RDBT]]></title><description><![CDATA[RDBT]]></description><link>https://rdbt.no/</link><image><url>https://rdbt.no/favicon.png</url><title>RDBT</title><link>https://rdbt.no/</link></image><generator>Ghost 5.39</generator><lastBuildDate>Wed, 22 Apr 2026 10:26:42 GMT</lastBuildDate><atom:link href="https://rdbt.no/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[The Many Routes to Root]]></title><description><![CDATA[<p>There are a number of ways to get to root access, some of them not entirely intuitive. &#xA0;For example, what&apos;s the difference between <code>sudo su</code> and <code>sudo -i</code>? &#xA0;How about <code>su -</code> ?</p><p>Root can be used in a variety of ways, so there are a variety</p>]]></description><link>https://rdbt.no/the-many-routes-to-root/</link><guid isPermaLink="false">6169da60234b8000010513de</guid><dc:creator><![CDATA[patrick]]></dc:creator><pubDate>Fri, 15 Oct 2021 20:50:33 GMT</pubDate><media:content url="https://rdbt.no/content/images/2021/10/root.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://rdbt.no/content/images/2021/10/root.jpg" alt="The Many Routes to Root"><p>There are a number of ways to get to root access, some of them not entirely intuitive. &#xA0;For example, what&apos;s the difference between <code>sudo su</code> and <code>sudo -i</code>? &#xA0;How about <code>su -</code> ?</p><p>Root can be used in a variety of ways, so there are a variety of ways to get root access.</p><p>One thing to consider when obtaining root is the environment that you need to do what you want to do. 
&#xA0;If you just want to run a command, you can just use <code>sudo</code> before the command that requires root privileges, then enter your (not the root) password. &#xA0;This requires that you be part of the group that controls <code>sudo</code> access. &#xA0;In some distributions, this is the <code>wheel</code> group, in some it&apos;s <code>sudo</code>. &#xA0;To get access, someone with root privileges has to run something like <code>usermod -aG wheel username</code>. &#xA0;</p><p>Let&apos;s break that command down a bit:</p><ol><li><code>usermod</code> is the command that allows you to modify a user</li><li><code>-aG</code> means append/add the user to the supplementary group named next</li><li><code>wheel</code> is the name of the group that determines <code>sudo</code> access</li><li><code>username</code> is the username of the user who will be getting <code>sudo</code> access</li></ol><p>You could also drop a file with the user name into <code>/etc/sudoers.d</code>:</p><pre><code class="language-bash">username	ALL=(ALL) ALL</code></pre><p>How do you know what groups you&apos;re part of? &#xA0;Simple - just use the <code>groups</code> command or the <code>id</code> command.</p><p>Now that the basics are out of the way, let&apos;s talk about the different base commands.</p><ol><li><code>su</code> : Surprise! &#xA0;This does NOT stand for &quot;super user&quot;, it stands for &quot;SUBSTITUTE user&quot;. &#xA0;If you enter it with no arguments, the assumption is that you want to switch to the root user. &#xA0;<strong>Requires the root password.</strong><br>NOTE: $HOME and $PATH will remain that of the current user, not the root user. </li><li><code>su -</code> : Functionally the same as <code>su -l</code>(login). This command does log you in as the root user, with the root user&apos;s $HOME, $PATH and id. &#xA0;<strong>Requires the root password.</strong></li><li><code>sudo su -</code> : This seems like it would do the same thing as su -, but it doesn&apos;t. 
&#xA0;Adding <code>sudo</code> before the command tells the shell to use the root privileges of the current user, and as such, only requires the password of the current user, NOT the root user. &#xA0;This gives you a root login shell, and your $HOME, $PATH and id become that of the root user. &#xA0;This only works if you have ALREADY been given root privileges and are part of the sudo group.</li><li><code>sudo -i</code> : Functionally the same as <code>sudo su -</code>.</li></ol><p>And now you know a little bit more about the routes to root!</p>]]></content:encoded></item><item><title><![CDATA[Scheduling and Killing Processes]]></title><description><![CDATA[<p>Sometimes processes crash, or use too many resources, or just need to be gracefully shut down. &#xA0;In this post, I&apos;ll cover the following:</p><ul><li>nice</li><li>renice</li><li>kill</li><li>pkill</li></ul><p>Let&apos;s talk about killing first. There are actually two different commands with &quot;kill&quot; in them, and</p>]]></description><link>https://rdbt.no/process-scheduling-nice-and-renice/</link><guid isPermaLink="false">6158f8a87cb0490001dfb5d2</guid><dc:creator><![CDATA[patrick]]></dc:creator><pubDate>Sun, 03 Oct 2021 00:51:10 GMT</pubDate><media:content url="https://rdbt.no/content/images/2021/10/mostly-dead-2.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://rdbt.no/content/images/2021/10/mostly-dead-2.jpg" alt="Scheduling and Killing Processes"><p>Sometimes processes crash, or use too many resources, or just need to be gracefully shut down. &#xA0;In this post, I&apos;ll cover the following:</p><ul><li>nice</li><li>renice</li><li>kill</li><li>pkill</li></ul><p>Let&apos;s talk about killing first. There are actually two different commands with &quot;kill&quot; in them, and they do different things. 
&#xA0;There are also different ways of killing processes, and some of them aren&apos;t even very murderey!</p><p>The first thing we should cover is how you can even tell if a process is causing problems - it&apos;s not always as easy as something becoming unresponsive. &#xA0;It could be that a process is using too much memory, or too many CPU cycles - but how do you determine that? &#xA0;If you&apos;re even vaguely familiar with Windows, you&apos;ll know about the Task Manager. &#xA0;Linux has something similar, called <code>top</code>. &#xA0;Just type &quot;top&quot; in a shell and you&apos;ll see something like this:</p><figure class="kg-card kg-image-card kg-width-full"><img src="https://rdbt.no/content/images/2021/10/top-overview.png" class="kg-image" alt="Scheduling and Killing Processes" loading="lazy" width="784" height="310" srcset="https://rdbt.no/content/images/size/w600/2021/10/top-overview.png 600w, https://rdbt.no/content/images/2021/10/top-overview.png 784w"></figure><p>There&apos;s a lot to unpack here, but we&apos;re only going to focus on two areas right now - the first line with the text load average: 0.28, 1.05, 1.20, and the %CPU column. &#xA0;</p><p>The load average line shows a running average of system load over 1 minute, 5 minutes and 15 minutes. &#xA0;These numbers will change, and by watching them for a moment, you&apos;ll be able to tell if your system&apos;s load is increasing or decreasing. &#xA0;It&apos;s important to note that this information is for all cores COMBINED, so to determine the real load on your system, you need to divide the number you&apos;re interested in (say, the 15 minute figure of 1.20) by the number of cores in your system. &#xA0;My system has 6 cores, so I would divide 1.20 by 6, which is .20 - that means that over the last 15 minutes, my system has been running at 20% load. &#xA0;Which isn&apos;t bad at all! &#xA0;</p><p>Now let&apos;s assume that your system has suddenly slowed down. 
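Incidentally, you can also read these numbers without <code>top</code> - on Linux they live in /proc/loadavg, and the per-core arithmetic is easy to script (a quick sketch; assumes <code>nproc</code> and <code>awk</code> are available):

```shell
# Read the 1/5/15-minute load averages from /proc/loadavg,
# then divide the 15-minute figure by the core count, as above.
read one five fifteen rest < /proc/loadavg
cores=$(nproc)
awk -v l="$fifteen" -v c="$cores" 'BEGIN { printf "15-min load per core: %.2f\n", l / c }'
```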
&#xA0;You open <code>top</code> and see the following:</p><figure class="kg-card kg-image-card kg-width-full"><img src="https://rdbt.no/content/images/2021/10/top-loaded.png" class="kg-image" alt="Scheduling and Killing Processes" loading="lazy" width="742" height="288" srcset="https://rdbt.no/content/images/size/w600/2021/10/top-loaded.png 600w, https://rdbt.no/content/images/2021/10/top-loaded.png 742w"></figure><p>Whoa! &#xA0;We&apos;ve got load averages of 9.20, 4.28 and 2.18 - that&apos;s 153% over 1 minute, 71% over 5 minutes, and 36% over 15 minutes. &#xA0;So we can tell just from looking at these numbers that something started hammering the system within the last 5 minutes or so. And if we look below, we can see that a process called <code>ghb</code> is using 482.4% of the CPU! That&apos;s actually <a href="https://handbrake.fr/?ref=rdbt.no">Handbrake</a>, which was busy converting a video, so that&apos;s kind of to be expected, since video conversion is a resource-intensive process.</p><p>But what if I want to do more than just sit back and let Handbrake bog my system down until it&apos;s finished? &#xA0;That&apos;s where <code>renice</code> comes in!</p><p>Every process in Linux has a &quot;nice&quot; value. &#xA0;The nice value determines the priority of the process, and the priority determines how much of the system&apos;s resources the process is allowed to use. &#xA0;In <code>top</code>, the nice value is in the &quot;NI&quot; column, and we can see that Handbrake has a nice value of 0, which is essentially neutral, and which was also assigned automatically by the system. &#xA0;Nice values go from -20, which is the LEAST nice, to 19, which is the MOST nice. &#xA0;</p><p>Think of the nice value in terms of people (processes) standing in line to get into a concert. &#xA0;Let&apos;s assume three different scenarios:</p><ol><li>Scott is standing at about the halfway point of the line. 
&#xA0;Scott has a nice value of 0 - he&apos;s not at the front of the line, but he&apos;s not at the back, either.</li><li>Crispin starts out right behind Scott, but he keeps letting people get in line in front of him, which eventually puts him at the back of the line. &#xA0;Crispin has a nice value of 19 - he&apos;s the nicest it&apos;s possible to be.</li><li>Steve arrives right before they&apos;re about ready to let people in and he cuts the line right at the front. &#xA0;Steve has a nice value of -20 - not nice at all!</li></ol><p>Using the magic of <code>renice</code>, we can move these processes/people around. &#xA0;Before we do that, however, it&apos;s important to know that regular users have fewer permissions than the root user when it comes to adjusting priorities, so a regular user can only make a process MORE nice, not LESS. &#xA0;Only the root user (or someone who has permission to <code>sudo</code>) can give a process more resources/make it less nice.</p><p>So let&apos;s assume that Steve is Handbrake/<code>ghb</code>, and type <code>renice -n 10 456911</code>, where <code>456911</code> is Handbrake&apos;s process id (PID - see the left column in <code>top</code> above), and <code>-n 10</code> tells the system what the new nice value should be. &#xA0;Since Handbrake started at a nice value of 0 and was using 482% of available system resources, this new nice value of 10 will cause it to use far fewer resources. &#xA0;</p><p>However, what if I want to make more resources available to Scott (Joplin), now that Steve (Handbrake) isn&apos;t using so much? &#xA0;I would need to use <code>sudo</code>, because giving permission to use more resources requires root privileges. &#xA0;So I would type something like <code>sudo renice -n -10 273996</code> (Joplin&apos;s PID - see <code>top</code> above). &#xA0;This would let Joplin run smoother, as it would have access to more CPU time than it did before, and significantly more than Steve/Handbrake. 
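If you want to experiment with <code>renice</code> safely (no <code>sudo</code> needed), try it on a throwaway process of your own - a quick sketch, remembering that a regular user may only RAISE a nice value:

```shell
# Start a disposable background job, raise its nice value, and verify it.
sleep 60 &
pid=$!
renice -n 10 -p "$pid"   # going from 0 to 10 is an increase, so no root needed
ps -o ni= -p "$pid"      # prints the new nice value: 10
kill "$pid"              # clean up
```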
&#xA0;That&apos;s what you get for cutting, Steve!</p><p>We&apos;re not done with Steve yet, though! &#xA0;We still need to talk about kill and pkill. &#xA0;Now let&apos;s assume that Steve doesn&apos;t like that he had to go almost to the back of the line, and he starts faking a seizure in an attempt to make people feel sorry for him so he can get back to the front of the line. &#xA0;In Linux, this means that Handbrake starts to hang. &#xA0;Oh no! &#xA0;Here&apos;s where it gets interesting, though, because we&apos;ve got a License to (p)Kill (I&apos;m so sorry).</p><p>There are technically 31 different <code>kill</code> signals we could send, but I&apos;m not going to cover them all here. &#xA0;It&apos;s enough to know that each kill signal has a number, and the number is what you use to tell the system exactly how you want to kill a process. &#xA0;By default, if you don&apos;t use a number at all, the system will send signal 15 (SIGTERM), which will basically tell a process to shut itself down gracefully if it can, much like short-pressing the power button on your computer or laptop - when you do that, stuff is going to try to shut down in a way that won&apos;t cause issues like data loss. &#xA0;HOWEVER, if the process doesn&apos;t shut down, and we need it to just die now, immediately, don&apos;t try to save anything, just die, we would use signal 9 (SIGKILL), which causes immediate death. &#xA0;To do this, you simply type <code>kill -9 456911</code> (note that <code>kill</code> takes a PID, not a name) and Handbrake/Steve will die as soon as you hit enter.<br>NOTE: You can&apos;t kill processes owned by others without root privileges.</p><p>Alternatively, if we just want to PAUSE Handbrake while we get some other stuff done, we could use kill signal 19 (SIGSTOP), which will effectively suspend a process, and when we&apos;re ready for it to use more resources, we could use signal 18 (SIGCONT), which will unpause/continue the process. 
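You can watch the pause and resume happen with a disposable process - a quick sketch (Linux-specific, since it reads the process state from /proc):

```shell
# Suspend and resume a background sleep, checking its state in /proc each time.
sleep 60 &
pid=$!
kill -19 "$pid"                      # signal 19 (SIGSTOP) - suspend
sleep 1
grep '^State:' "/proc/$pid/status"   # State: T (stopped)
kill -18 "$pid"                      # signal 18 (SIGCONT) - resume
sleep 1
grep '^State:' "/proc/$pid/status"   # State: S (sleeping)
kill -9 "$pid"                       # signal 9 (SIGKILL) - gone for good
```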
&#xA0;<strong>So sometimes, a process might only be MOSTLY dead, in a state of suspension, waiting for the true love&apos;s kiss of that sweet, sweet signal 18.</strong></p><p>These signals aren&apos;t just for you, the system uses them as well. &#xA0;For example, if your browser crashes due to an invalid memory reference (SIGSEGV), that&apos;s one of the kill signals that will automatically generate a core dump, so you&apos;d be able to check the logs and hopefully determine why the process crashed.</p><p>Sometimes you&apos;ll have processes running that have the same command name. &#xA0;Browsers do this a lot. &#xA0;If you take a look at the <code>top</code> example above you&apos;ll see multiple instances of &quot;brave&quot; in the command column. &#xA0;If I wanted to kill all processes that share the same command name, I would use the <code>killall</code> command: <code>killall brave</code> and the system would send a kill -15 signal to those processes. &#xA0;If that doesn&apos;t work, you could send signal 9 like this: <code>killall -9 brave</code>.</p><p><code>killall</code> leads us nicely to the last command I wanted to cover: <code>pkill</code>. &#xA0;Like <code>killall</code>, <code>pkill</code> can be used to kill multiple processes, but it also includes advanced features like being able to kill processes based on the owning user, the owning group, the child processes of a parent process, or processes running on a specific terminal.</p><p>So if we wanted to terminate all of Steve&apos;s processes, and ONLY Steve&apos;s, we could do this: <code>pkill -U steve</code>. Or, if we know that Steve is logged in to the same server you&apos;re on, and is working in a shell session in tty6, we (as root) could terminate his shell by using this pkill command: <code>pkill -t tty6</code>. 
&#xA0;This will send signal 15 to his shell and all processes in it, and he&apos;d be logged out.</p><p>That&apos;s what you get for cutting in line, Steve!</p>]]></content:encoded></item><item><title><![CDATA[elif: Not a Tolkien Character]]></title><description><![CDATA[<p><code>elif</code> is a portmanteau consisting of the words &quot;else&quot; and &quot;if&quot;. &#xA0;In bash (and Python), it&apos;s a way to test multiple conditions within a single if construct, as opposed to the lone catch-all branch provided by <code>else</code>. &#xA0;Where a typical if/then construct</p>]]></description><link>https://rdbt.no/elif-not-a-tolkein-character/</link><guid isPermaLink="false">6154f2cef699de00016a040e</guid><dc:creator><![CDATA[patrick]]></dc:creator><pubDate>Thu, 30 Sep 2021 00:44:45 GMT</pubDate><media:content url="https://rdbt.no/content/images/2021/09/elif.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://rdbt.no/content/images/2021/09/elif.jpg" alt="elif: Not a Tolkien Character"><p><code>elif</code> is a portmanteau consisting of the words &quot;else&quot; and &quot;if&quot;. &#xA0;In bash (and Python), it&apos;s a way to test multiple conditions within a single if construct, as opposed to the lone catch-all branch provided by <code>else</code>. &#xA0;Where a typical if/then construct uses an if/then/else format, elif allows for if/then/elif/then/else.</p><p>Here is a simple example that will try to start mysql if mariadb is active on the system, or psql if postgresql is active on the system. Below the example is a breakdown of what each line does:</p><pre><code class="language-bash">[user@host ~]$ systemctl is-active mariadb &gt; /dev/null 2&gt;&amp;1 
MARIADB_ACTIVE=$? 
[user@host ~]$ systemctl is-active postgresql &gt; /dev/null 2&gt;&amp;1 
POSTGRESQL_ACTIVE=$? 
[user@host ~]$ if [ &quot;$MARIADB_ACTIVE&quot; -eq 0 ]; then 
&gt; mysql 
&gt; elif [ &quot;$POSTGRESQL_ACTIVE&quot; -eq 0 ]; then 
&gt; psql 
&gt; else 
&gt; echo &quot;Nuts, all outta databases here!&quot; 
&gt; fi</code></pre><ol><li>This line asks <code>systemctl</code> to check if the <code>mariadb</code> service <code>is-active</code>, and to send any output (error or otherwise) to <code>/dev/null</code>, or nowhere.<br><strong><em>Fun Fact</em></strong>: <code>2&gt;&amp;1</code> refers to something very basic about how Unix systems operate: standard input (stdin), standard output (stdout) and standard error (stderr). <br>stdin (0) is the keyboard - what you use to input information<br>stdout (1) is the terminal - where output is displayed<br>stderr (2) is the error stream.<br>So <code>2&gt;&amp;1</code> is saying to send any errors (2) from this command to the same location as standard output (1), which in this case is /dev/null, the black hole of all Linux filesystems.</li><li>Set a variable called <code>MARIADB_ACTIVE</code>, the value of which is the result of <code>$?</code>, which returns the exit code of the immediately previous command. &#xA0;If <code>mariadb</code> was active, the exit code would be <code>0</code>. If it&apos;s not running, or not installed, it would be another number, between 1 and 255. &#xA0;In this instance, <code>mariadb</code> is not installed, so the exit code is <code>3</code>.<br><em><strong>Unfamiliar with exit codes?</strong></em> In bash, you can check the exit code of the previous command by typing <code>echo $?</code>. &#xA0;If the previous command was successful, you always get a 0. &#xA0;Any other number is a fail.</li><li>Ask <code>systemctl</code> to check if the <code>postgresql</code> service <code>is-active</code> and send any output (and errors) to <code>/dev/null</code>.</li><li>Set a variable called <code>POSTGRESQL_ACTIVE</code> with the value of the exit code <code>$?</code> for the command from the previous line. &#xA0;In this instance, <code>postgresql</code> is also not installed, so the exit code is <code>3</code>.</li><li>If/then conditional using a <code>test</code> expression. 
&#xA0;This is saying that <code>if</code> the value of the variable <code>MARIADB_ACTIVE</code> is equal (<code>-eq</code>) to <code>0</code>, <code>then</code></li><li>Run <code>mysql</code> if the previous line&apos;s test passed. &#xA0;In this instance, the value of <code>MARIADB_ACTIVE</code> is <code>3</code>, so the test failed, and we go to the next line:</li><li><code>elif</code> (else if) - if the previous test failed, try another <code>test</code> expression - in this case, <code>if</code> the value of <code>POSTGRESQL_ACTIVE</code> is equal (<code>-eq</code>) to <code>0</code>, then</li><li>Run <code>psql</code> if the <code>test</code> expression succeeded - it didn&apos;t, because the value of <code>POSTGRESQL_ACTIVE</code> is <code>3</code>, so we go to the next line:</li><li><code>else</code> - if <code>elif</code> failed and can&apos;t run <code>psql</code> on the previous line, go to the next line</li><li>Neither mariadb nor postgresql is installed, so we&apos;re just going to output text that says <code>&quot;Nuts, all outta databases here!&quot;</code>.</li><li><code>fi</code> - <code>if</code> spelled backwards, which closes out the if/then block.</li></ol><p>Just for super geeky funsies, let&apos;s write a bash script to cover Bilbo&apos;s thought processes about taking the ring back at Rivendell, using <code>elif</code>:</p><pre><code class="language-bash">systemctl is-active mordor-desire &gt; /dev/null 2&gt;&amp;1
MORDOR_DESIRE=$?
systemctl is-active wanna-turn-into-a-gollum &gt; /dev/null 2&gt;&amp;1
GOLLUM_DESIRE=$?
if [ &quot;$MORDOR_DESIRE&quot; -eq 0 ]; then
&gt; echo &quot;I&apos;m going on an adventure!&quot;
&gt; elif [ &quot;$GOLLUM_DESIRE&quot; -eq 0 ]; then
&gt; echo &quot;REEEEEEEEEEEEEE&quot;
&gt; else
&gt; echo &quot;I&apos;m sorry that you must carry this burden&quot;
&gt; fi</code></pre><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://rdbt.no/content/images/2021/09/gollum_desire.jpg" class="kg-image" alt="elif: Not a Tolkien Character" loading="lazy" width="620" height="330" srcset="https://rdbt.no/content/images/size/w600/2021/09/gollum_desire.jpg 600w, https://rdbt.no/content/images/2021/09/gollum_desire.jpg 620w"><figcaption>elif [ &quot;$GOLLUM_DESIRE&quot; -eq 0 ]; then echo &quot;REEEEEEEEEEEEEE&quot;</figcaption></figure>]]></content:encoded></item><item><title><![CDATA[What is Docker?]]></title><description><![CDATA[<h3 id="and-why-would-you-want-to-use-it">And why would you want to use it?</h3><p>Put simply, a Docker container is like a mini virtual machine. &#xA0;It contains just enough to do what it needs to do, and no more. &#xA0;It&apos;s also isolated from the machine it&apos;s running on, so crashing</p>]]></description><link>https://rdbt.no/what-is-docker/</link><guid isPermaLink="false">614bc141f699de00016a01e5</guid><dc:creator><![CDATA[patrick]]></dc:creator><pubDate>Thu, 23 Sep 2021 01:47:06 GMT</pubDate><media:content url="https://rdbt.no/content/images/2021/09/docker.png" medium="image"/><content:encoded><![CDATA[<h3 id="and-why-would-you-want-to-use-it">And why would you want to use it?</h3><img src="https://rdbt.no/content/images/2021/09/docker.png" alt="What is Docker?"><p>Put simply, a Docker container is like a mini virtual machine. &#xA0;It contains just enough to do what it needs to do, and no more. &#xA0;It&apos;s also isolated from the machine it&apos;s running on, so crashing the host machine is many orders of magnitude more difficult than if the application in the container was running directly on the machine. &#xA0;You can also run a LOT of them at the same time with not very many resources. 
&#xA0;I&apos;m currently running 22 containers on a modern 8 core server with 64 GB of RAM; CPU usage usually doesn&apos;t exceed 3 or 4% per core, and memory usage doesn&apos;t get above 4 or 5 GB. &#xA0;It also works well on older systems with lower specs - I also have an ancient server in a closet running an old 4 core Athlon chip with 8 GB of RAM, and it does just fine running 20 containers. &#xA0;</p><p>Let&apos;s look at an example of how docker can be useful. &#xA0;Assume that you come across an application that you want to install on your Ubuntu system. &#xA0;It&apos;s a document scanning program that requires Python 3. Your main Ubuntu system has Python 2 installed and can&apos;t be updated because other software already installed on the system requires that specific version to work.</p><p>It&apos;s not really practical to install two versions of Python on the same machine - you could <em>technically</em> do it, but it wouldn&apos;t work right without a lot of faffing around. &#xA0;BUT (<em>insert non-denominational angelic choir sounds</em>) you managed to find the application you need in a <strong>Docker container</strong>. &#xA0;This means that the author of the software you want has created a docker image in addition to providing a &quot;bare metal&quot; installer that installs directly on your system.</p><p>Installing the bare metal version of the software requires that you first install any dependencies that are required for the application to work. &#xA0;This includes the correct version of Python and the relevant libraries, some font files, software to convert files to PDF, software to generate thumbnails, software that can scan graphics for text and convert it to editable, searchable text, a database to store all of this information, maybe some search software to enable you to search for documents, and a lot more. &#xA0;You&apos;d have to do that yourself if you want to go the bare metal route. 
&#xA0;However, in addition to the bare-metal version, the author of the software has packaged everything the program needs to run and has put it in a single package for you - a docker image. &#xA0;As long as your system can run docker images, you can install this software quickly and easily.</p><p>There are only two things you need to (optimally) run any docker container:</p><ol><li>Docker Engine</li><li>Docker-compose</li></ol><h3 id="docker-engine">Docker Engine</h3><p>So what is Docker Engine? &#xA0;Very simply, Docker Engine is a set of several pieces of software that allows your system to read and use docker images. &#xA0;A docker image is what is provided by a developer - it contains instructions on how to build a container from the image, like a template. &#xA0;If you&apos;re using Ubuntu, you can install Docker Engine by following the instructions <a href="https://docs.docker.com/engine/install/ubuntu/?ref=rdbt.no">here</a>. &#xA0;Here are the important bits of the Docker Engine:</p><p>The docker daemon (<em>dockerd</em>) hangs out on your system waiting for requests to do stuff, like &quot;give me a list of all currently running containers&quot;, &quot;stop container x&quot;, &quot;start container y&quot;, etc.</p><p>The docker client (<em>docker</em>) is how you, the user, will interact with your containers. &#xA0;When you type in a command like <code>docker ps</code>, you&apos;re telling the client to create a message for the docker daemon, telling it to list all currently running containers. &#xA0;The docker daemon hears this and executes your request.</p><p><em>Docker registries</em> are where you&apos;ll find docker images. &#xA0;The main docker registry is the <a href="https://hub.docker.com/?ref=rdbt.no">Docker Hub</a>. 
&#xA0;There are others, though, like <a href="https://quay.io/?ref=rdbt.no">quay.io</a>.</p><p>To create and run a docker container (without docker-compose), you would send the following as a single command (we&apos;ll assume you want to install the papermerge docker container):</p><pre><code class="language-bash">docker run -d \
  --name=papermerge \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=America/New_York \
  -e REDIS_URL= `#optional` \
  -p 8000:8000 \
  -v &lt;/path/to/appdata/config&gt;:/config \
  -v &lt;/path/to/appdata/data&gt;:/data \
  --restart unless-stopped \
  ghcr.io/linuxserver/papermerge</code></pre><p>Let&apos;s break this down, line-by-line.</p><p><strong><u>Line 1</u></strong>: <code>docker run -d \</code> This is the command to run a docker container - the &quot;-d&quot; means that it should run &quot;detached&quot;, which is a good idea if you don&apos;t want to keep a terminal window open the entire time the container is running. &#xA0;The &quot;\&quot; at the end of the line tells your terminal to go to the next line instead of executing the command on the current line.</p><p><strong><u>Line 2</u></strong>: <code>--name=papermerge \</code> This is simply the name of the container - you want something easy to read, especially if you have a number of containers running at the same time.</p><p><strong><u>Line 3</u></strong>: <code>-e PUID=1000 \</code> This is the ID of the user who will be running the command to execute the container. &#xA0;This is the &quot;owner&quot; of the container. &#xA0;Usually it just needs to match your usual user ID, which is typically &quot;1000&quot; on most flavors of Linux. &#xA0;The &quot;-e&quot; simply means that what you&apos;re defining here is an &quot;environment&quot; variable.</p><p><strong><u>Line 4</u></strong>: <code>-e PGID=1000 \</code> This is the ID of the group that the container will belong to. &#xA0;In Linux, pretty much everything has both an ID and a group ID. &#xA0;This is the ID of the same group the user belongs to, which again is usually 1000.</p><p><strong><u>Line 5</u></strong>: <code>-e TZ=America/New_York \</code> Another environment variable! &#xA0;Software often needs to know what time it is, so &quot;TZ&quot; stands for &quot;Time Zone&quot;, and New York is the time zone this particular computer resides in.</p><p><strong><u>Line 6</u></strong>: <code>-e REDIS_URL= <code>optional</code> \</code> Redis is a kind of database - it&apos;s optional in this case, but if you want to use one for this image, it&apos;s supported! 
&#xA0;We&apos;ll leave it alone.</p><p><strong><u>Line 7</u></strong>: <code>-p 8000:8000 \</code> &quot;-p&quot; here stands for &quot;port&quot;. There are actually a few things going on here. &#xA0;First of all, understand that ports are basically like &quot;portholes&quot; through which computers (and containers) pass data. &#xA0;<br><br>In this example, the first &quot;8000&quot; is the port for the host machine, and the second &quot;8000&quot; is the port for the container. &#xA0;This format is useful because you could perhaps have something else that&apos;s already using port 8000 on your host machine, so you could define a different port here, like 8020, in which case you&apos;d have <code>-p 8020:8000 \</code>. &#xA0;This means that when your container is up and running, you would access it in your browser by tacking that port number to the end of the IP address of the machine the container is running on. &#xA0;For example, if your computer&apos;s IP address is 192.168.1.221, you would access this container by going to 192.168.1.221:8020. &#xA0;That request is then passed to port 8000 in your container.</p><p><strong><u>Line 8</u></strong>: <code>-v &lt;/path/to/appdata/config&gt;:/config</code> The &quot;-v&quot; here stands for &quot;volume&quot;. &#xA0;Containers by their very nature are &quot;contained&quot;, but sometimes there is a need for part of the container to be visible <em>outside</em> the container. &#xA0;In this instance, this container has some editable configuration files. &#xA0;There are several advantages to creating a volume outside of a container. &#xA0;<br><br>First, while it is possible to access the configuration files that are located within the container, doing so has two problems:</p><ol><li>You&apos;d have to use the docker exec command to get a shell for the container, and </li><li>Any changes made inside a docker container only last as long as the container is running - they are not <em>persistent</em>. 
&#xA0;The next time the container is started, docker is going to re-create it from the image, which does not include any changes you made. </li></ol><p>Second, creating a volume outside the container, and <em>defining that volume when you run the container </em>means that you can access and edit anything in that volume using your normal editor at any time. &#xA0;It also means that the contents of that volume can be backed up.</p><p>Let&apos;s look at the format of this command, because it&apos;s pretty important. &#xA0;It follows the same logic as the port command, in that the first part references the host computer, and the second part refers to the container. &#xA0;So instead of &lt;path/to/appdata/config&gt;, you would enter something like <code>/opt/papermerge/appdata/config</code>, which is a directory on your host system. &#xA0;If this directory doesn&apos;t yet exist, docker will create it. &#xA0;The second part says essentially &quot;copy the contents of the config directory (/config in the container) to /opt/papermerge/appdata/config, AND mirror any changes made to the files in /opt/papermerge/appdata/config INTO /config in the container.&quot;</p><p>The creator of the image determines what goes in the config folder; you don&apos;t have to do anything except modify whatever is in there, if you want/need to.</p><p><strong><u>Line 9</u></strong>: <code>-v &lt;/path/to/appdata/data&gt;:/data</code> Same as the /config directory, but for whatever goes in a data directory for this particular image.</p><p><strong><u>Line 10</u></strong>: <code>--restart unless-stopped</code> If you don&apos;t add this line, the docker container will not be restarted when your system is rebooted. &#xA0;Another alternative here is &quot;always&quot;.</p><p><strong><u>Line 11</u></strong>: <code>ghcr.io/linuxserver/papermerge</code> This is the actual location of the image in a repository. 
In this instance, the registry is ghcr.io (the GitHub Container Registry), and the image lives under the linuxserver group&apos;s namespace. &#xA0;This line tells docker to go to that address and pull the latest &quot;papermerge&quot; image.</p><p>Whew! &#xA0;That&apos;s a lot, but it&apos;s all pretty simple and logical. &#xA0;Now, imagine that you want to change the timezone (maybe you moved), or you want to change the restart variable to &quot;always&quot;. &#xA0;You would need to stop and remove this particular container with the commands <code>docker stop papermerge</code> and <code>docker rm papermerge</code>, and then re-enter the entire command above, with your changes, as a single command. &#xA0;This could get tedious, especially if you have a lot of docker containers that might need to change at the same time. &#xA0;This is where docker-compose comes in.</p><h3 id="docker-compose">Docker-compose</h3><p><a href="https://docs.docker.com/compose/install/?ref=rdbt.no">Docker compose</a> allows you to use YAML configuration files instead of commands. &#xA0;For example, instead of the above docker command, a docker-compose.yml file for the papermerge image looks like this:</p><pre><code class="language-yaml">---
version: &quot;2.1&quot;
services:
  papermerge:
    image: ghcr.io/linuxserver/papermerge
    container_name: papermerge
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      - REDIS_URL= #optional
    volumes:
      - &lt;/path/to/appdata/config&gt;:/config
      - &lt;/path/to/appdata/data&gt;:/data
    ports:
      - 8000:8000
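      # to publish container port 8000 on a different host port (if 8000 is taken):
      # - 8020:8000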
restart: unless-stopped</code></pre><p>The beauty of this is that the format is pretty much the same, but this is a text file that is easily editable. &#xA0;When you want to bring the container up, you just run <code>docker-compose up -d</code> in the same directory as the docker-compose.yml file, and docker will create the container based on the parameters in the docker-compose.yml file. &#xA0;If you need to bring the container down because you made changes to the compose file, you&apos;d just run <code>docker-compose down</code>.</p><p>The only thing tricky about YAML files is that the spacing has to be exactly correct, but that&apos;s not very difficult. &#xA0;The other thing to keep in mind when using docker-compose.yml files is the version number. &#xA0;This relates to the file format version that your installed docker-compose supports. &#xA0;If the docker-compose.yml file you&apos;re attempting to run declares version 3.7, but your installed docker-compose only supports up to 3.2, you&apos;ll need to update docker-compose on your system before it&apos;ll work.</p><p>And that&apos;s the scoop on docker!</p>]]></content:encoded></item></channel></rss>