That I keep bearded dragons is well known. Besides bearded dragons, we also breed various feeder animals such as Argentine wood roaches and locusts. In both cases you need food, and for years now we have only used collected or home-grown herbs and flowers for that.
Of course, for laypeople like us it is difficult to judge which plants, or which parts of a plant, can be used. In the beginning we struggled along with Google and Wikipedia. In principle you can find everything online. But as it goes: some say this, others say THAT. Or, as we like to say at TCS: "Das is nix" ("that's no good").
So a reference book was needed, and "Das Handbuch der Futterpflanzen" by Marion Minch fits like a glove. Marion is not just a professional when it comes to herbs, she is a guru. As an IT guy I would call her a plant nerd. It is very impressive how deep the knowledge is that she passes on in the book. At the same time it is well organized and you quickly find what you are looking for, which is especially important for plant identification. The book is written in an understandable way, it is competent, and it keeps both the environment and the health of our animals in mind. What I also like is that it does without superfluous frills. It is about this one thing, and it does that perfectly.
By now we are also regular customers of her shop Samenkiste, where you can order seeds of all the plants covered in the book. Among others, the seeds for my bumblebee meadow came from there. You know what you get: these are not overbred ornamental plants but original wild varieties from her own cultivation. The mere fact that you can order seeds from Marion Minch that she produces herself tells you that her book "Futterpflanzen für Reptilien" was written by an expert.
I released a new version of dbtool, a small tool I wrote over 15 years ago! I reworked the configure script and a couple of bits here and there. The most fascinating thing, however, is that I didn't have to alter the source at all. It still compiles out of the box, with Berkeley DB or GDBM, and with PCRE. It's really great that the APIs of those libraries still support 15-year-old clients; I'm amazed.
Besides, I moved the source to GitHub.
Just in case: with dbtool you can manage key/value data storage in the shell (on the command line or in shell scripts). It's fast and simple to use, doesn't have many dependencies, and even supports encryption (using the AES block cipher).
There are cases when you need to change the source IP address of a client but can't do it in the kernel: either because you don't have a firewall running, because it doesn't work, or because no firewall is available.
In such cases the usual approach is to use a port forwarder. This is a small piece of software which opens a port on the inside, visible to the client, and establishes a new connection to the intended destination, which is otherwise unreachable for the client. In fact, a port forwarder implements hiding NAT or masquerading in userspace. Tools like this are also useful for testing, to simulate requests when the tool you need for the test is unable to set its source IP address (binding address).
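Just to illustrate the idea, here is a minimal Python sketch of a one-shot userspace forwarder. This is my own illustration, not how udpxd is implemented internally, and the function name relay_once is made up: receive a datagram on the inside, re-send it from a socket bound to the source address you want, and pass the reply back.

```python
import socket

def relay_once(listen_sock, bind_addr, target):
    """Forward a single datagram and its reply: the essence of a
    userspace UDP port forwarder, here without any client tracking."""
    data, client = listen_sock.recvfrom(65535)   # packet from the client
    out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    out.bind(bind_addr)                          # pick the source address ourselves
    out.sendto(data, target)                     # re-send to the real destination
    reply, _ = out.recvfrom(65535)               # wait for the answer
    listen_sock.sendto(reply, client)            # hand it back to the client
    out.close()
```

A real forwarder of course loops over this and multiplexes many clients at once.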
For TCP there are LOTS of solutions available, e.g. tcpxd. Unfortunately there are not many solutions if you want to do port forwarding with UDP. One tool I found was udp-redirect, but it is just a dirty hack and didn't work for me. It also doesn't track clients and is therefore unable to handle multiple clients simultaneously.
So I decided to enhance it. The current result is udpxd, a "general purpose UDP relay/port forwarder/proxy". It supports multiple concurrent clients, it is able to bind to a specific source IP address, and it is small and simple. Just check out the repository, enter "make" and "sudo make install", and you're done.
How does it work? I'll explain with a concrete example. On the server where this website is running, multiple IP addresses are configured on the outside interface, because I'm using jails to separate services. One of those jail IP addresses is 18.104.22.168. Now, if I want to send a DNS query to the Hetzner nameserver 22.214.171.124 from the root system (not from inside the jail with the .33 IP address!), the packet will go out with the first interface address of the system. Let's say I don't want that (either for testing, or because the remote end only allows me to use the .33 address). In this scenario I could use udpxd like this:
udpxd -l 172.16.0.3:53 -b 126.96.36.199 -t 188.8.131.52:53
The IP address 172.16.0.3 is configured on the loopback interface lo0. Now I can use dig to send a DNS query to 172.16.0.3:53:
dig +nocmd +noall +answer google.de a @172.16.0.3
google.de. 135 IN A 184.108.40.206
google.de. 135 IN A 220.127.116.11
google.de. 135 IN A 18.104.22.168
google.de. 135 IN A 22.214.171.124
When we look with tcpdump on our external interface, we see:
IP 126.96.36.199.24239 > 188.8.131.52.53: 4552+ A? google.de. (27)
IP 184.108.40.206.53 > 220.127.116.11.24239: 4552 4/0/0 A 18.104.22.168,
A 22.214.171.124, A 126.96.36.199, A 188.8.131.52 (91)
And this is how the same request looks on the loopback interface:
IP 172.16.0.3.24239 > 172.16.0.3.53: 4552+ A? google.de. (27)
IP 172.16.0.3.53 > 172.16.0.3.24239: 4552 4/0/0 A 184.108.40.206,
A 220.127.116.11, A 18.104.22.168, A 22.214.171.124 (91)
As you can see, dig sent the packet to 172.16.0.3:53; udpxd took it, opened a new outgoing socket bound to 126.96.36.199, and sent the packet with that source IP address to the Hetzner nameserver. It also remembered which socket this particular client (source IP and source port) used. When the response came back from Hetzner, udpxd looked up the matching internal client in its cache and sent the response back to it, with the source IP on the loopback interface.
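The client cache can be modeled with a pair of lookup tables. The following Python sketch is my own illustration of the mechanism, not udpxd's actual C code, and the names (relay, by_client, by_sock) are made up: each new client gets its own bound outgoing socket, and a reply arriving on such a socket is routed back to exactly that client.

```python
import select
import socket

def relay(listen_sock, bind_ip, target, n_packets):
    """Multi-client UDP relay. Caches which outgoing socket belongs to
    which client (source IP, source port), so replies can be routed back
    to the right one. n_packets only bounds the loop for demonstration;
    a real relay runs forever."""
    by_client = {}   # (client ip, client port) -> outgoing socket
    by_sock = {}     # outgoing socket -> (client ip, client port)
    handled = 0
    while handled < n_packets:
        ready, _, _ = select.select([listen_sock] + list(by_sock), [], [])
        for s in ready:
            data, peer = s.recvfrom(65535)
            if s is listen_sock:                 # packet from a client
                out = by_client.get(peer)
                if out is None:                  # first packet: new cache entry
                    out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                    out.bind((bind_ip, 0))       # udpxd's -b address
                    by_client[peer] = out
                    by_sock[out] = peer
                out.sendto(data, target)
            else:                                # reply from the target
                listen_sock.sendto(data, by_sock[s])
            handled += 1
```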
As I already said, udpxd is a general purpose UDP port forwarder, so you can use it with almost any UDP protocol. Here's another example using NTP. This time I set up a tunnel between two systems, because udpxd cannot bind to any interface with port 123 while ntpd is running (ntpd binds to *.123). So on system A I have three udpxd instances running:
udpxd -l 172.17.0.1:123 -b 188.8.131.52 -t 184.108.40.206:123
udpxd -l 172.17.0.2:123 -b 220.127.116.11 -t 18.104.22.168:123
udpxd -l 172.17.0.3:123 -b 22.214.171.124 -t 126.96.36.199:123
Here I forward NTP queries on 172.17.0.1-3:123 to the real NTP pool servers, again using the .33 source IP to bind for outgoing packets. On the other system B, which is able to reach 172.17.0.0/24 via the tunnel, I reconfigured ntpd like so:
server 172.17.0.1 iburst dynamic
server 172.17.0.2 iburst dynamic
server 172.17.0.3 iburst dynamic
After restarting ntpd, let's see how it works:
remote refid st t when poll reach delay offset jitter
*172.17.0.1 188.8.131.52 2 u 12 64 1 18.999 2.296 1.395
172.17.0.2 184.108.40.206 2 u 11 64 1 0.710 1.979 0.136
172.17.0.3 220.127.116.11 2 u 10 64 1 12.073 5.836 0.089
Seems to work :). Here's what we see on the tunnel interface:
14:13:32.534832 IP 10.10.10.1.123 > 172.17.0.1.123: NTPv4, Client, length 48
14:13:32.556627 IP 172.17.0.1.123 > 10.10.10.1.123: NTPv4, Server, length 48
14:13:33.535081 IP 10.10.10.1.123 > 172.17.0.2.123: NTPv4, Client, length 48
14:13:33.535530 IP 172.17.0.2.123 > 10.10.10.1.123: NTPv4, Server, length 48
14:13:34.535166 IP 10.10.10.1.123 > 172.17.0.1.123: NTPv4, Client, length 48
14:13:34.535278 IP 10.10.10.1.123 > 172.17.0.3.123: NTPv4, Client, length 48
14:13:34.544585 IP 172.17.0.3.123 > 10.10.10.1.123: NTPv4, Server, length 48
14:13:34.556956 IP 172.17.0.1.123 > 10.10.10.1.123: NTPv4, Server, length 48
14:13:35.535308 IP 10.10.10.1.123 > 172.17.0.2.123: NTPv4, Client, length 48
14:13:35.535742 IP 172.17.0.2.123 > 10.10.10.1.123: NTPv4, Server, length 48
14:13:36.535363 IP 10.10.10.1.123 > 172.17.0.1.123: NTPv4, Client, length 48
And the forwarded traffic on the public interface:
14:13:32.534944 IP 18.104.22.168.63956 > 22.214.171.124.123: NTPv4, Client, length 48
14:13:32.556586 IP 126.96.36.199.123 > 188.8.131.52.63956: NTPv4, Server, length 48
14:13:33.535188 IP 184.108.40.206.48131 > 220.127.116.11.123: NTPv4, Client, length 48
14:13:33.535500 IP 18.104.22.168.123 > 22.214.171.124.48131: NTPv4, Server, length 48
14:13:34.535255 IP 126.96.36.199.56807 > 188.8.131.52.123: NTPv4, Client, length 48
14:13:34.535337 IP 184.108.40.206.56554 > 220.127.116.11.123: NTPv4, Client, length 48
14:13:34.544543 IP 18.104.22.168.123 > 22.214.171.124.56554: NTPv4, Server, length 48
14:13:34.556932 IP 126.96.36.199.123 > 188.8.131.52.56807: NTPv4, Server, length 48
14:13:35.535379 IP 184.108.40.206.22968 > 220.127.116.11.123: NTPv4, Client, length 48
14:13:35.535717 IP 18.104.22.168.123 > 22.214.171.124.22968: NTPv4, Server, length 48
14:13:36.535442 IP 126.96.36.199.24583 > 188.8.131.52.123: NTPv4, Client, length 48
As you can see, the ntpd on system B gets the time through the tunnel via udpxd, which in turn forwards the queries to real NTP servers.
Note: if you leave out the -b parameter, the first IP address of the outgoing interface will be used.
The udpxd source code and documentation are on GitHub.
I made a couple of enhancements to udpxd:
- added IPv6 support (see below)
- udpxd now forks and logs to syslog if -d (or -v) is specified
The most important change is IPv6 support. udpxd can now listen on an IPv6 address and forward to an IPv4 address (or vice versa), or listen and forward on IPv6. Examples:
Listen on the IPv6 loopback and forward to IPv6 Google:
udpxd -l [::1]:53 -t [2001:4860:4860::8888]:53
Listen on the IPv6 loopback and forward to IPv4 Google:
udpxd -l [::1]:53 -t 184.108.40.206:53
Listen on the IPv4 loopback and forward to IPv6 Google:
udpxd -l 127.0.0.1:53 -t [2001:4860:4860::8888]:53
Of course it is possible to set the bind IP address (-b) with IPv6 as well.
And I added a Travis-CI test, which runs successfully. So udpxd is supported on FreeBSD and Linux; I haven't tested other platforms yet.
A couple of days ago I stumbled across Travis-CI, a tool for automated build tests of GitHub projects. It sounded like a great idea to me, since I had been very lax lately with testing builds of PCP on other platforms, so I gave it a try.
Basically, Travis-CI is very simple: you create a .travis.yml file, which contains instructions to configure, build, and test your GitHub project, and you create an account on Travis-CI and activate one of your GitHub projects. Then every time you make a commit, Travis-CI creates a fresh VM instance, checks out your latest code, and runs the instructions in your .travis.yml file.
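For reference, a minimal .travis.yml for an autoconf-style C project can look like this. This is a generic sketch, not PCP's actual file; the script commands are assumptions about your build:

```yaml
language: c
compiler: gcc
script:
  - ./configure
  - make
  - make test
```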
However, I had lots of problems making it work. Admittedly, some of them were caused by code which either didn't compile or didn't run on Linux. That's a big point on the plus side, because I had been totally unaware of those issues. But I had much more trouble getting all the dependencies to work.
According to Travis' documentation, each VM contains everything needed, like gcc, perl, node.js, and whatnot. But in reality you have to dedicate your project to one specific environment, which is C in my case. The unit tests in the PCP source are driven by a Perl script which needs a couple of additional modules. The documentation states that they use perlbrew, which I know and like a lot since I use it almost every day. Unfortunately, perlbrew is not installed in a C VM. And so I had to install the Perl modules "manually" (wget, untar, perl Makefile.PL, make, and sudo make install).
The good thing is that you can do almost anything on the Travis-CI VM, including root installs with sudo. Thanks to this I was able to get the unit tests to work. But PCP also has a Python binding, and this was much more troublesome. Python itself was installed, but no headers, no python-pip, no cffi, and so on. First I tried to install a package "python-cffi", which does exist according to Google, but not on the Travis-CI VM. Instead I had to install the Python headers, python-pip, and libffi by downloading and compiling them. There's also a libffi Ubuntu package available, but it could not be installed because of some unspecified and unresolvable conflict. Such conflicts of binary package management tools are the very reason I left Linux behind many years ago.
Well, now, 60 commits later, I've got it working.
Would I recommend Travis-CI? Definitely yes. However, there are some pros and cons I'd like to point out.
Pros:
- Each commit leads to a test build, and if that fails you get notified.
- Pull requests lead to test builds as well, so you'll see whether a patch breaks something or not.
- It's a free service, and a resource-hungry one at that. So: wow!
- It just works, and every problem can be fixed by yourself, given you know how to do it.
- The Travis-CI documentation is outstanding, with lots of examples.
Cons:
- There's only one platform available for tests: Linux (and only one distribution: Ubuntu). It may be possible to test on MacOSX, but as of this writing only upon request.
- While you can choose between clang and gcc, you cannot test with different versions of the compilers.
- The language-specific environments make it difficult to use with a project that needs multiple languages.
- There's no way to live-test your .travis.yml file, as on a VM where you could log in temporarily, run the test manually multiple times, and tune the .travis.yml file until it works. Instead you've got to commit everything you'd like to try out, wait until the Travis-CI tests are done, look at the site for the results, and repeat. This iterative process consumes a lot of time.
- Travis-CI is a German startup in Berlin, and as always with German companies, they just do not respond to emails. I mailed them because of the mixed-language issue outlined above, but nothing came back and I had to figure it out myself, like a blind man in a foreign country (at least that's how I felt at times during the dependency hunt described above).
- Although they provide a Pro account which you have to pay for, I have my doubts about the business model. Maybe the company will vanish overnight (or, worse, get sold and shut down).