My sincere thanks to all participants of the AXA Campus Days for the invitation, the productive atmosphere and the good conversations. I very much hope to have provided some food for thought, even if certainly not everything I claim is actually applicable in the context of your work and within the corporate environment.
I would be glad to see some of you again at this year's Continuous Lifecycle.
Colleagues have repeatedly admonished me to make the presentations I have given available online. Taking that a step further, perhaps I should put presentations online beforehand.
For the AXA Continuous Campus Days on June 21 I have been invited to sing once more the rant which I have been performing in various forms (Uptimes 2017-1, Continuous Lifecycle 2017) since the Frühjahrsfachgespräche of the German Unix Users Group.
I am very pleased, and of course I will do my best to behave myself. I cannot guarantee the success of that attempt. The accompanying slides can be found here.
When working with docker at smaller scales, it is often not economically feasible to set up performant docker registries. Nevertheless, operators may wish to keep certain parts of their docker images private.
Combining docker containers with data-only docker images as volume mounts offers a way to host public docker images on some public registry or on GitHub. The (often much smaller) private images can then be served from a private registry, eliminating the need for highly performant - and expensive - network connections.
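As a minimal sketch of the pattern, assuming the docker SDK for Python and invented image names (the registry and container names below are placeholders, not taken from the post):

```python
# Sketch: graft a small private data-only container into a large
# public application image via volumes_from.
import docker

client = docker.from_env()

# Create (but do not start) a data-only container whose image ships
# the confidential files, e.g. under /private.
data = client.containers.create(
    "registry.internal/secrets:latest",  # hypothetical private registry
    name="app-secrets",
)

# Run the large public application image and mount the private
# container's volumes into it.
app = client.containers.run(
    "example/app:latest",                # hypothetical public image
    name="app",
    volumes_from=["app-secrets"],
    detach=True,
)
print(app.status)
```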
Having set up a working CoreOS cluster, the intrepid, viz., naive might proceed with installing applications on or even admitting users to their clustered systems.
Having seen developers and users alike wreak havoc on the most beautiful systems, I strongly advise against such folly. However, if you are bribed, coerced or otherwise motivated to do so, it is indeed possible. Employing a combination of docker and systemd, it is rather easy, I daresay.
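As a minimal sketch of that docker-plus-systemd combination, assuming a host where both are present; the unit name, image and file paths are illustrative inventions, not taken from the series:

```python
# Sketch: install a systemd unit that wraps a docker container,
# then enable and start it. Run as root.
import subprocess
from pathlib import Path

UNIT = """\
[Unit]
Description=Example app in a docker container
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f example-app
ExecStart=/usr/bin/docker run --name example-app --rm nginx:alpine
ExecStop=/usr/bin/docker stop example-app

[Install]
WantedBy=multi-user.target
"""

Path("/etc/systemd/system/example-app.service").write_text(UNIT)
subprocess.run(["systemctl", "daemon-reload"], check=True)
subprocess.run(["systemctl", "enable", "--now", "example-app.service"], check=True)
```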
In this first post of my cluster series, I will explain the design constraints of cloud operations and how to use CoreOS and etcd to set up an arbitrary number of nodes.
These nodes then form a cluster, i.e., they know about their cluster membership and communicate their own state using a dedicated and shared key-value store.
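As a minimal sketch of what communicating state through a shared key-value store looks like in practice, the following snippet talks to etcd's v2 HTTP API (the API of CoreOS-era etcd); the endpoint and key layout are assumptions for illustration:

```python
# Sketch: a node announces itself in etcd; peers discover each other
# by listing the shared directory.
import requests

ETCD = "http://127.0.0.1:2379/v2/keys"

# Publish this node's address with a TTL, so the entry vanishes
# if the node stops refreshing it.
requests.put(
    f"{ETCD}/cluster/nodes/node-1",
    data={"value": "10.0.0.11", "ttl": 60},
).raise_for_status()

# Any member can discover its peers by listing the directory.
resp = requests.get(f"{ETCD}/cluster/nodes")
for node in resp.json()["node"].get("nodes", []):
    print(node["key"], "->", node["value"])
```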
I will start with some theoretical remarks on cloud operations, then proceed to the setup of a CoreOS cluster with Vagrant, and conclude with those “special” cases which are actually the ones relevant to production operations.
Without spending too much time on the underlying reasoning, replicating applications in the cloud is the devops thing of the moment. Doing so requires intelligent packaging, automated deployment and service discovery.
The concepts and technologies are simple in themselves, but their consequences are often overlooked. Thus, the “yet another framework in two easy clicks” syndrome leads to procedural tutorials which fail to bring the crucial operational point across.
In the following posts I hope to redress part of that situation. I will not spare you the theory behind it: stumbling along workflow-driven tutorials rather obscures the point. I hope that clustering applications in cloud settings will become clearer by mixing theory with practice.
The aim will be to have a cluster of operating systems, to deploy applications on it, and to auto-discover those applications in a load-balancing mechanism.
SmartOS is an OpenSolaris-based hypervisor consisting of a stripped-down Illumos environment, QEMU and a port of the Linux KVM module to Solaris. With the no longer quite so recent addition of LX-branded zones and docker support to SmartOS, it is possible to conveniently provision docker containers on SmartOS.
To safely experiment and work with ZFS root filesystems on Debian-based systems, it is necessary to have fallback options in place. This includes a Live CD with precompiled ZFS modules. As ZFS is available neither in the standard Linux kernel nor in the standard Linux distributions, I will briefly explain how to build such a Live CD and link to scripts to save the typing.
Performance data gathered by the nagios check_mk agent can be transported to and stored in an ElasticSearch database, preserving the original time resolution which would otherwise be reduced by the round-robin database mechanism.
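A minimal sketch of the transport side, assuming an Elasticsearch instance on localhost; the index name, field names and the helper function are invented for illustration:

```python
# Sketch: index one parsed performance-data sample per document,
# so the full time resolution is retained.
import datetime
import requests

def index_perfdata(host: str, service: str, metric: str, value: float) -> None:
    doc = {
        "@timestamp": datetime.datetime.utcnow().isoformat(),
        "host": host,
        "service": service,
        "metric": metric,
        "value": value,
    }
    requests.post(
        "http://localhost:9200/perfdata/_doc",
        json=doc,
        timeout=5,
    ).raise_for_status()

# e.g. one sample parsed from the agent's output
index_perfdata("web01", "CPU load", "load1", 0.42)
```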
The term monitoring usually refers to a concept better described as availability monitoring, i.e., monitoring whether some service or resource provided by your computing system is available to your users. Examples include checking whether some port is open or whether you have sufficient free space on your block devices, vulgo, hard disks.
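Both examples fit in a few lines; the following sketch, with arbitrary host, port and threshold values, shows the idea:

```python
# Sketch: two classic availability checks - is a TCP port reachable,
# and does a filesystem still have enough free space?
import shutil
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def enough_space(path: str = "/", min_free_ratio: float = 0.1) -> bool:
    usage = shutil.disk_usage(path)
    return usage.free / usage.total >= min_free_ratio

print(port_open("example.org", 443))
print(enough_space("/"))
```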
For operations accustomed to rather constant use or, perhaps better phrased, to a constant rate of service consumption, it is sufficient to monitor the state of the services you supply, and that closes the case.
Another kind of operations is accustomed to varying rates of service use or service consumption. A typical example is some kind of web-service (a website, a portal, a blog, you name it). The use of this service typically varies over time, which necessitates scaling the machines supplying different parts of that web-service up and down according to customer demand.
To be able to do so, it is first necessary to have at least some idea of the load your machines or services are currently subjected to, which in turn makes it necessary to monitor load.
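As a minimal sketch of load monitoring on Linux, assuming the usual /proc/loadavg interface; the scale-up threshold is an invented policy, not a recommendation:

```python
# Sketch: read the 1-minute load average and flag when it exceeds
# the number of CPU cores.
import os

def load_per_core() -> float:
    with open("/proc/loadavg") as f:
        load1 = float(f.read().split()[0])
    return load1 / (os.cpu_count() or 1)

ratio = load_per_core()
if ratio > 1.0:
    print(f"overloaded: {ratio:.2f} load per core, consider scaling up")
else:
    print(f"ok: {ratio:.2f} load per core")
```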
Installing Postfix on a SmartMachine is trivial as long as you do not deviate far from the usual ;-)
Having recently patched my firewall running pfSense from 2.1.5 to 2.2, and having become a tad suspicious after some years of experience in IT, I found out that my VPN setup had stopped working. (To be honest, I do not know exactly when it stopped working. I only learned about it when testing my patching procedures.)
Rolling back the corresponding ZFS dataset (my pfSense instance runs as a KVM-virtualized host inside a SmartOS hypervisor) would have been a quick option. I opted to revisit my IPsec configuration instead, though.
It appears that somehow my earlier configuration tolerated some sloppiness that the more recent one does not.
ARMv6 Raspberry Pi mini-computers offer a power-efficient solution for provisioning networking appliances for small to medium scale networks. This article shows how to circumvent the low computational power of these machines when building software in a cross-compilation setup.
A serious computing infrastructure depends on a set of network services available to all connected clients. Networked computers in a small to medium (home?) office depend on services for network address assignment and address resolution, i.e., naming, as a minimum. Other services such as central authentication or authorization follow.
Addresses can be assigned statically, and naming can be provided by entries in the host database or any other database available to nsdispatch(3), which is configurable in nsswitch.conf(5). With rising numbers of networked computers, the necessity for (manual?) configuration poses a problem. Therefore, name servers and DHCP servers are usually used for naming and address assignment, respectively. The same is true for many other domains of application.
Most small home or small office routers offer basic DHCP and naming services. However, these usually do not offer the configuration options required. Consequently, administrators may wish to run dedicated servers for DHCP and DNS.
Setting up a separate PC for DHCP and DNS is dubious, however. Even when hardware costs are avoided by salvaging some low-end machine from the scrap heap, the cost of electric power can (and will) quickly reach proportions where such a setup is seriously debatable.
With a maximum power intake of 3.5 W, the Raspberry Pi ARMv6 machines offer an economically feasible platform on which to provision low-capacity network appliances. Linux distributions like ArchLinux or Raspbian exist and are able to install and run various network appliances.
For several reasons, I wish to run FreeBSD. On FreeBSD, binary packages are available only for tier-1 architectures, which are i386 and amd64. For armv6, packages need to be compiled from FreeBSD ports. While I like to customize my packages anyway, building packages from source on an ARMv6 machine is a tedious task.
Joyent’s SmartOS is essentially a Solaris-based hypervisor. It employs Solaris Zones as very lightweight OS-level virtualization containers. In addition, Linux KVM has been ported to the (open-sourced, OpenSolaris-derived) Illumos-flavour of Solaris, adding type-I hypervisor virtualization.
SmartOS is canonically booted from USB-sticks and any “work”, i.e., anything requiring persistent state, is done in virtual machines. Irrespective of any discussion of the (non-)merits of such an approach, most of us would be inclined to agree that an operating system on its own is in most circumstances not very useful and typically requires further software to fulfil its function.
For SmartOS, Joyent provides an extensive repository of pre-compiled pkgsrc packages. For many settings, this may well suffice. However, in some situations users or administrators desire to compile their software themselves. A pkgsrc-based system enables them to do precisely that.
Various guides on how to efficiently compile packages using pkgsrc can be found. The main manual http://www.netbsd.org/docs/pkgsrc/ is the canonical source of information. Regarding Illumos/SmartOS specifically, Jonathan Perkin has written a series of blog posts starting with http://www.perkin.org.uk/pages/pkgsrc-binary-packages-for-illumos.html.
While all methods have their valid applications, coming from FreeBSD I would like a mechanism which allows me to build software in a pristine environment, i.e., to completely destroy the environment after a compile and build the next package from a clean slate. I have been nastily bitten by implicit dependencies introduced by autoconf, which happily links against anything it finds on a given system; that can cause quite a headache when things break.
On FreeBSD, Baptiste Daroussin has developed a package called poudriere, which effectively is a testing and compile environment leveraging FreeBSD jails and ZFS. Ignoring many great features of poudriere for the moment, the idea is simple: Populate a chroot directory from a clean base system on a separate dataset. Clone that dataset for n systems, give them ports trees, jail them and compile the packages therein. Afterwards, destroy the clones and start the process anew.
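To illustrate just that clean-slate cycle (not poudriere itself, which does far more), here is a minimal sketch; the dataset names, chroot path and build command are placeholders, and default ZFS mountpoints are assumed:

```python
# Sketch: clone a pristine dataset, build inside it, destroy the clone.
import subprocess

BASE = "zroot/builder/base"      # hypothetical pristine base dataset
SNAP = f"{BASE}@clean"

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

def build_in_clean_clone(pkg: str, n: int) -> None:
    clone = f"zroot/builder/work{n}"
    run("zfs", "clone", SNAP, clone)
    try:
        # Stand-in for "jail it and build the port"; poudriere does this
        # inside a FreeBSD jail on the cloned dataset, with a ports tree
        # already present.
        run("chroot", f"/{clone}", "sh", "-c",
            f"cd /usr/ports/{pkg} && make package")
    finally:
        # Whatever happened, throw the build environment away.
        run("zfs", "destroy", clone)

build_in_clean_clone("ports-mgmt/pkg", 1)
```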