Install fdupes (or other small programs) on FreeNAS

FreeNAS (now TrueNAS) does not ship with FreeBSD’s ports or packages. Normally, extra software is installed in jails. However, sometimes you need software on the host itself. It is possible to copy program binaries from a jail to the host if you take care of a few things first.

Installing packages on an appliance such as FreeNAS is not advised, which is why FreeNAS does not come with FreeBSD packages or ports. Instead, you are supposed to create a jail and install the software you need inside it. However, you might prefer to have some basic utilities on the host itself. In my case, I thought fdupes would be much more useful if it had direct access to all my disks. The following instructions are for fdupes, a duplicate file finder, but they will work for any sufficiently small utility for which there is a package on a similar FreeBSD version. For large programs the dependencies might be too big to handle.

First you need a copy of the binary. You can take it from an existing FreeBSD installation or pkg install it in a FreeNAS jail. Make a directory on FreeNAS such as ‘~/bin/fdupes’ to keep things tidy, and move the binary into it. You can try running it now. If you’re lucky it will just run; that means it either does not need to load any system libraries, or the libraries it needs are already part of FreeNAS.
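If you want to check up front whether a copied binary even needs shared libraries, ldd’s exit status tells you. A minimal sketch, using /bin/sh as a stand-in for the copied binary (adjust BIN for real use):

```shell
#!/bin/sh
# Quick check: does the binary need shared libraries at all?
# /bin/sh stands in here for the binary you copied over.
BIN=/bin/sh

# ldd exits non-zero when the target is statically linked, so it can
# be used as a test as well as for listing the libraries.
if ldd "$BIN" >/dev/null 2>&1; then
    echo "dynamic: it needs shared libraries on the host"
else
    echo "static (or not a dynamic executable): it should just run"
fi
```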

If it didn’t work on the first try, you will see an error message about a missing library. Currently, for fdupes on FreeNAS, the missing library is a Perl-style regular expression library. You can also run ‘ldd ./fdupes’ to list the libraries it needs and see which of them are missing. Here’s some sample output.

[root@freenas ~/bin/fdupes]# ldd fdupes
fdupes:
        => not found (0)
        => /lib/ (0x80025e000)
        => /lib/ (0x8002c0000)
[root@freenas ~/bin/fdupes]#

Identify the missing libraries and copy them from the machine that donated the fdupes binary into the same directory where you placed fdupes on FreeNAS. Running ldd there will help you find their locations without having to chase multiple symlinks pointing to the same library.
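On the donor machine, ldd can also drive the copying itself. A rough sketch, assuming a POSIX shell; /bin/sh stands in for the fdupes binary and DEST is a scratch directory, so adjust both for real use:

```shell
#!/bin/sh
# Copy every shared library a binary links against into one directory,
# ready to be moved next to the binary on FreeNAS.
# /bin/sh stands in for fdupes; adjust BIN and DEST for real use.
BIN=/bin/sh
DEST=$(mktemp -d)

# ldd lines look like "libxyz.so.1 => /lib/libxyz.so.1 (0x...)";
# keep only entries whose third field is a resolved absolute path.
ldd "$BIN" | awk '$2 == "=>" && $3 ~ /^\// { print $3 }' |
while read -r lib; do
    cp "$lib" "$DEST/"
done

ls "$DEST"
```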

To run a binary on Unix systems without installing its required dynamic libraries system-wide, you can pass the location of the libraries in an environment variable. This is all a long way of saying you should do the following:

[root@freenas ~/bin/fdupes]# ls -1
[root@freenas ~/bin/fdupes]# LD_LIBRARY_PATH=/root/bin/fdupes \
                         /root/bin/fdupes/fdupes

If you run this you will see that fdupes does indeed run now. Example:

[root@freenas ~/bin/fdupes/tmp]#  LD_LIBRARY_PATH=/root/bin/fdupes \
                         /root/bin/fdupes/fdupes -r /root/bin/fdupes



That’s pretty cool. To make it a bit more user friendly, we can put the LD_LIBRARY_PATH into a script that you call directly, remembering to have it pass along the command-line arguments meant for the actual binary. You can name the script anything, but if you want to call it just fdupes for ease of use, first rename the fdupes binary to something else, such as fdupes.bin, then create a new script called fdupes. Make sure both have executable permissions. Here are the contents of our new fdupes script.

[root@freenas ~/bin/fdupes/tmp]# cat /root/bin/fdupes/fdupes
#!/bin/sh
LD_LIBRARY_PATH=/root/bin/fdupes /root/bin/fdupes/fdupes.bin "$@"

Now you can call /root/bin/fdupes/fdupes (or put the directory in your PATH and run it directly) from anywhere on the system with your usual fdupes parameters (the "$@" part forwards them all to the real binary).
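The wrapper pattern itself can be tried anywhere with a stand-in binary. Here is a self-contained sketch; the echo script stands in for fdupes.bin, and the temporary directory stands in for /root/bin/fdupes. Note the use of "$@", which, unlike $*, forwards arguments exactly as given, even ones containing spaces:

```shell
#!/bin/sh
# Demonstrate the wrapper pattern with a stand-in binary so it can run
# anywhere; substitute your real fdupes.bin and library directory.
DIR=$(mktemp -d)

# Stand-in for fdupes.bin: just prints the arguments it receives.
printf '#!/bin/sh\necho "args: $@"\n' > "$DIR/fdupes.bin"

# The wrapper: point the loader at the private library directory and
# forward all arguments untouched.
printf '#!/bin/sh\nLD_LIBRARY_PATH=%s %s/fdupes.bin "$@"\n' \
    "$DIR" "$DIR" > "$DIR/fdupes"

chmod +x "$DIR/fdupes" "$DIR/fdupes.bin"
"$DIR/fdupes" -r /some/dir    # prints: args: -r /some/dir
```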


2021-09-13 00:00 +0000

Containerception: It's virtual all the way down

Various virtualization and container technologies nest within each other very well and provide different levels of isolation. Here’s one example from my server.

The main machine is not virtual at all. It’s a pretty beefy dedicated server, or, if we’re being hip, bare metal.

[kvm ~]$ uname -a
Linux 3.10.0-1062.18.1.el7.x86_64 #1 SMP Tue Mar 17 23:49:17 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

I am running qemu-kvm with libvirt on this machine.

[kvm ~]$ virsh list
 Id    Name                 State
 5     UbuntuServer         running
 6     ubuntu16.04server    running
 7     ERPCentos7New        running
 8     virt01.debian8       running

Let’s pick one VM that has nested virtual stuff in it.

[kvm ~]$ virsh console 5
Connected to domain UbuntuServer
Escape character is ^]

login: myuser
Last login: Sat Mar 28 11:35:47 UTC 2020 on ttyS0
Welcome to Ubuntu 19.10 (GNU/Linux 5.3.0-42-generic x86_64)

This is a virtual machine running Ubuntu with lxd installed from snap. I am not really sold on snaps; to me they feel like installing software on OS X, but it’s how lxd is distributed, so I didn’t fight it. (I do remove snapd from lxc images. Yes, even the minimal cloud images have it.)

I have a bunch of lxc containers here. lxc containers are usually bigger than app containers, but they offer a lot more flexibility since they provide a whole OS environment.

myuser@:~$ lxc list -c n,4
|    NAME     |         IPV4         |
| alpine-edge | (eth0)   |
| dockers     | (docker0) |
|             | (eth0)    |
| eaonmin     | (eth0)   |
| grafana     | (eth0)    |
| lamp        | (eth0)    |
| pihole      |                      |
| rundeck     | (eth0)    |

Let’s pick one and keep digging.

myuser@:~$ lxc exec dockers bash
root@dockers:~# docker container ls --format 'table {{.Names}}\t{{.Status}}'
NAMES               STATUS
telegraf            Up 13 hours
influxdb            Up 13 hours
grafana             Up 13 hours
privoxy             Up 13 hours
pihole              Up 14 hours (healthy)
portainer           Up 14 hours

I am in the process of moving some lxc containers into docker containers. That’s why the same names such as portainer and grafana appear in multiple places.

This is a good time to mention when one should use lxc vs. docker. The grafana instance in lxc actually includes influxdb and telegraf as well, installed with OS packages. In docker, I split them up. One app per container is the suggested practice, and it makes sense, although docker does not enforce it.

Moving on. We’re at the bottom of the virtualization staircase. Let’s see what’s going on in a pihole docker container, running in an Ubuntu lxc container, that’s running in an Ubuntu qemu VM, installed on a dedicated server running CentOS 7.

root@dockers:~# docker exec -it pihole bash
root@1215803f4731:/# ps -axw
    1 ?        Ss     0:00 s6-svscan -t0 /var/run/s6/services
   28 ?        S      0:00 s6-supervise s6-fdholderd
 1282 ?        S      0:00 s6-supervise lighttpd
 1284 ?        S      0:00 s6-supervise cron
 1285 ?        S      0:00 s6-supervise pihole-FTL
 1287 ?        Ss     0:00 bash ./run
 1289 ?        Ss     0:00 bash ./run
 1299 ?        S      0:09 lighttpd -D -f /etc/lighttpd/lighttpd.conf
 1303 ?        S      0:00 /usr/sbin/cron -f
 1325 ?        Ss     0:00 /usr/bin/php-cgi
 1326 ?        S      0:02 /usr/bin/php-cgi
 1327 ?        S      0:02 /usr/bin/php-cgi
 1328 ?        S      0:02 /usr/bin/php-cgi
 1329 ?        S      0:02 /usr/bin/php-cgi
 2061 ?        Ss     0:00 bash ./run
 2065 ?        Sl     0:59 pihole-FTL no-daemon
 5565 pts/0    Ss+    0:00 bash
26529 pts/1    Ss     0:00 bash
26604 pts/1    R+     0:00 ps -axw

I am personally happy that the design choices pihole made, the ones I wouldn’t have made quite that way myself, are contained as deep down the stack as possible.