
AlBlue’s Blog

Macs, Modularity and More

Using dnsmasq to block DNS requests


Recently I have noticed that there’s an uptick in spamvertising on my home network, and as such, I decided to spend some time investigating using a Raspberry Pi to host a DNS service which would filter such spam sites out.

I initially looked at Pi-Hole, which provides a really simple out-of-the-box experience for DNS blocking, with a nifty user interface for black- or whitelisting hosts on the fly, and a report of how well the DNS is working. For those with limited technical experience it’s easy to use, and I can recommend it on those points. Version 5 has just been released as I write this, so you may want to look at that.


Pi-Hole has some advantages – for example, it doesn’t just run on a Raspberry Pi – and it has some built-in lists that it uses to do ad blocking, which is a good start.

However, after using it for a while, it became clear to me that in essence it’s a soft fork of dnsmasq, which is available in many upstream repositories already. The Pi-Hole fork of dnsmasq is called pihole-FTL and is, in essence, a dnsmasq process wrapped with a network front end for querying and dynamic reloading of data.

The other problem with Pi-Hole is that it is, in essence, a bunch of PHP web pages (potentially running as root) in a server, communicating via a binary protocol to the local daemon, so there are a lot of potential places where security issues could occur. It also fires up a lightweight HTTP server running on the host (there’s a configuration option to turn it off), which again increases the chance of something going wrong.


As a result, I decided to move away from the Pi-Hole configuration and use a vanilla dnsmasq implementation instead, which has the following advantages:

  • Less custom code running in dnsmasq, so less potential for security issues
  • Supported by upstream repositories natively, rather than an add-on
  • All of the hard work by Pi-Hole is actually handled by dnsmasq anyway
  • Less maintenance to worry about in the future

There are of course disadvantages – it doesn’t provide a helpful webpage which says “This domain is blocked; click here to unblock it”, but I didn’t want that anyway. The other minor disadvantage is that Pi-Hole’s hosts parsing format supports a missing address, which is parsed as NXDOMAIN – this doesn’t apply to the vanilla dnsmasq daemon.


The default configuration for dnsmasq is pretty well commented, and there’s a great manpage with all the configuration options defined in there. However, I thought I would split it up into different sections, so that you can enable or customise what you want, and I’ve made it available on GitHub.

Despite the name, dnsmasq also handles TFTP, PXE and DHCP operations on both IPv6 and IPv4 addresses. I have customised mine to allow for DHCP and DNS on both IPv4 and IPv6 at the moment, but I’m going to investigate the boot options at a later stage.

Some of the defaults have been updated to provide sensible values; they are all commented in the individual files, which you can pick and choose from. For example, there are multiple options for the upstream DNS server.
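As an illustrative sketch, the upstream resolvers could point at one of the well-known public DNS services (pick whichever you prefer):

```conf
# Forward queries to a public upstream resolver of your choice
server=1.1.1.1    # Cloudflare
#server=8.8.8.8   # Google
#server=9.9.9.9   # Quad9
```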

Recently, Cloudflare announced “Cloudflare for Families”, which builds on the 1.1.1.1 name server with 1.1.1.2, a name server that blocks malware attempts, and 1.1.1.3, which also blocks porn. These may be useful for families, and over time I expect the range of blocks to grow rather than fade – which will be especially useful for fast-moving targets like those trying to fool you into handing over your Apple/Facebook logins. I’ve set up a configuration file for those as well.
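Such a configuration file might look like the following sketch, based on the addresses Cloudflare published for the service:

```conf
# Cloudflare for Families: malware blocking
server=1.1.1.2
server=1.0.0.2
# ... or malware and adult content blocking instead
#server=1.1.1.3
#server=1.0.0.3
```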

The idea is that you can clone the example repository, adjust it for your local environment, and then drop the files into your /etc/dnsmasq.d/ directory.

Blocking hosts

The main reason I did this was to block hosts. In dnsmasq, a host is blocked with the address directive. An address entry with no IP address, roughly translated, means ‘for any domain ending in this suffix, return NXDOMAIN’. You can also specify a particular address to return; many block lists use 0.0.0.0 as the address.
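As a sketch, with example.com standing in for the domain to be blocked:

```conf
# Return NXDOMAIN for example.com and all of its subdomains
address=/example.com/
# ... or return a specific (unroutable) address instead
address=/example.com/0.0.0.0
```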


Secondly, there’s a form that reads in a hosts file (man 5 hosts), which can be used to serve addresses (but not NXDOMAIN).
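A minimal sketch, assuming a blocklist stored at the hypothetical path /etc/dnsmasq.hosts:

```conf
# Serve entries from an additional hosts file (same format as /etc/hosts)
addn-hosts=/etc/dnsmasq.hosts
```

The file itself uses the standard hosts format, with one entry per line, e.g. `0.0.0.0 ads.example.com`.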


The advantage of an additional hosts file is that a SIGHUP will ask the dnsmasq daemon to reload the hosts file dynamically without stopping and starting the service.

If you want to ensure that a local domain doesn’t get forwarded out, or you have an alternative server, then you can use the server option instead:
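As a sketch, assuming a local domain of mynet served by a DNS server at 192.168.0.1 (both are placeholders):

```conf
# Forward queries for *.mynet to the local server, and never upstream
server=/mynet/192.168.0.1
```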


This will delegate all mynet domains to that server, but then cache the results and serve them to clients more quickly.

Finally, if you want to blackhole entire TLDs, you can do so with either the local directive (i.e. make the machine responsible for the domain) or with the address=/.biz/ directive. Many newer TLDs are full of spam sites, and countries that you don’t regularly visit or shop from could be on this list.
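As a sketch for the .biz example mentioned above:

```conf
# Make this machine authoritative for the TLD (unknown names return NXDOMAIN)
local=/.biz/
# ... or equivalently, blackhole it with the address form
#address=/.biz/
```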


Dnsmasq also provides a DHCP service, in case you want to use it to allocate addresses to your hosts on demand. One advantage of having this wired into the same server is that it gives you a mapping between DHCP-allocated hosts and DNS names for free, so you only have to maintain them in one place.

The DHCP hosts have a range that you can specify along with a lease time:
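As a sketch, assuming a 192.168.0.0/24 network, a 12-hour lease, and eth0 as the LAN interface (all placeholders):

```conf
# Hand out IPv4 addresses between .100 and .200 with a 12h lease
dhcp-range=192.168.0.100,192.168.0.200,12h
# IPv6 addresses can be constructed from the prefix seen on an interface
dhcp-range=::100,::1ff,constructor:eth0,12h
```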


You can configure the DHCP server to send out the router’s address, for both IPv4 and IPv6, which means that hosts using DHCP will automatically pick up their default gateway:
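As a sketch, assuming the router lives at 192.168.0.1:

```conf
# Tell IPv4 DHCP clients where their default gateway is
dhcp-option=option:router,192.168.0.1
# Send router advertisements so IPv6 hosts learn the route too
enable-ra
```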


You can specify a host-record for binding a particular name, and a dhcp-host for assigning an address to a machine based on its MAC address:
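As a sketch, with laser.mynet, the address and the MAC address all standing in as placeholders:

```conf
# Bind a fixed name/address pair into DNS
host-record=laser.mynet,192.168.0.60
# Always give this MAC address the same lease and name
dhcp-host=11:22:33:44:55:66,192.168.0.60,laser
```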

It is also possible to have a cname record, but only for hosts that are already known to dnsmasq … so it’s not possible, for example, to have a cname pointing to a host elsewhere.
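As a sketch, aliasing printer.mynet to an existing laser.mynet host (both names are placeholders, and the target must already be known to dnsmasq):

```conf
# printer.mynet becomes an alias for a host dnsmasq already knows about
cname=printer.mynet,laser.mynet
```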

There are other configuration options, such as setting an auth-server and auth-zone along with an auth-soa, which will synthesise records for the SOA name type. For those using zeroconf/bonjour/mDNS or SPF, you can also add generic SRV and TXT records to the hosted DNS zones. These are left as an exercise for the reader.
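As a starting point for that exercise, a sketch of the SRV and TXT record forms (the service, host names and SPF policy here are all placeholders):

```conf
# Advertise an LDAP service on server.mynet, port 389
srv-host=_ldap._tcp.mynet,server.mynet,389
# Publish an SPF policy as a TXT record for the mynet domain
txt-record=mynet,"v=spf1 -all"
```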


Pi-Hole is a great out-of-the-box experience, but it has a wider security attack surface and drags in interpreted languages, which may not be appropriate on a DMZ host. Vanilla dnsmasq is perfectly capable, provided that you can curate your own blacklists of domains, or write scripts that do that for you. (An older version of Pi-Hole used native dnsmasq, but it was replaced to give stats that could otherwise be grepped from the lookup logs.)

Hopefully the partitioned configuration files will give you a starting point to run your own DNS/DHCP server, optionally with blocking capabilities.

Understanding CPU Microarchitecture for performance


I recently gave a talk at QCon London entitled “Understanding CPU Microarchitecture for Performance” on the details of CPU internals and how they affect the speed of programs that run on them.

I’ve given this talk twice recently; once at QCon London, and again at a virtual event for the London Java Community (LJC). Although both presentations cover similar content, I slightly updated the slides for the LJC event to include a new release of one tool that I recommended, and a new project that hadn’t been open-sourced at the time.

The QCon London presentation has the advantage that there’s a transcript and synchronised slides, so choose whichever form you find more useful. Here are the links:

The abstract for both is the same:

Microprocessors have evolved over decades to eke out performance from existing code. But the microarchitecture of the CPU leaks into the assumptions of a flat memory model, with the result that equivalent code can run significantly faster by working with, rather than fighting against, the microarchitecture of the CPU.

This talk, given for the (QCon London| London Java Community) in 2020, presents the microarchitecture of modern CPUs, showing how misaligned data can cause cache line false sharing, how branch prediction works and when it fails, how to read CPU specific performance monitoring counters and use that in conjunction with tools like perf and toplev to discover where bottlenecks in CPU heavy code live. We’ll use these facts to revisit performance advice on general code patterns and the things to look out for in executing systems. The talk will be language agnostic, although it will be based on the Linux/x86-64 architecture.

If you have any comments or questions, feel free to reach out to me via Twitter, e-mail or any other means you have at your disposal.

QCon London 2020 Day 3


This morning was kicked off by Katie Gamanji, of American Express and part of the Cloud Native Computing Foundation. The talk was on Kubernetes; there were 23,000 attendees of KubeCon conferences in 2019, and over 2,000 contributors to the project. She introduced the Container Network Interface, which is used to ensure that a pod has its own IP which can be routed to via various mechanisms, and the Container Runtime Interface, which abstracts the container engine away from Docker to allow alternatives such as gVisor, containerd etc. Mostly it seemed to be an overview of what Kube does, rather than anything new; for those attending in the hope of learning more than just the basics, I’m not sure what the benefit was.

The rest of the day I spent hosting the Java track, on behalf of Martijn Verburg, who was unable to assume hosting duties due to other commitments. I’m very glad that he had those commitments, because I had a very enjoyable day, listening to experts in their field as well as playing compere to a captive audience. I’m very much looking forward to being invited back again!

Ben Evans from New Relic was the first speaker in my track, talking about record and sealed types in Java, which are in an experimental phase. Record objects are data types on steroids; essentially, a data type which has an automatically generated equals, hashCode and toString over the properties of a final data structure. Together with sealed types, which are a way of restricting which classes can extend a type, it looks like the various experimental Java projects are really providing the goods.

Andrzej Grzesik, better known as “ags”, talked about the path that Revolut took in migrating to Java 11 over the last year. They moved over to Java 11 as the compile engine and as the runtime engine for all of their apps over the year, following up with various bugs reported against OpenJDK as they went through, and some of the pitfalls of their experience. The main one seems to be updating all of the dependent libraries and build tools; for example, moving from Gradle v3 to Gradle v4 and v5, a step at a time. They are keen to try keeping up with the latest JDK releases, although their use of Gradle and more specifically Groovy is holding them back from migrating at the moment. Making sure that all of the dependencies worked seemed to be the biggest challenge, though most (non-abandoned) Java libraries work at this stage.

David Delabassee gave an overview from Oracle about how best to drive Java applications inside Docker containers. His examples and slides (to be uploaded later) showed how to use a multi-stage Docker build along with jlink and the --no-man-pages and --no-header-files flags, along with compression, to build a custom base image about 20% of its original size. He also highlighted using Alpine as a distribution and musl as a replacement for glibc, but noted this wasn’t officially supported; Project Portola aims to provide a means to do this in the future. By using AppCDS he was able to shrink the launch time down further, and talked about the evolution of Docker and docker rootless, along with a podman blog talking more about enabling this functionality.

Emily Jiang gave a talk and live demo of how to build a 12-factor app with Open Liberty and MicroProfile. The demo code is available on GitHub and since it was a live demo, the screencast from InfoQ will have a lot more detail. In essence, Emily demonstrated having two services, connecting via localhost, to be run inside different Kubernetes pods and with redundant services, and used asynchronous calls to be able to route around delays or failures in the underlying service implementations. One to watch carefully on the replays, I think.

The day was rounded off nicely by James (Jim) Gough, whom I’ve had the pleasure of working with commercially before. He gave a talk on how GraalVM executes code, and showed a number of demonstrations of how Graal can execute code that has been compiled using the Java compiler, by debugging through how the JDK’s JIT (in the form of Graal) works. He also demonstrated the jaotc tool for compiling Java code ahead of time for faster startup and lower memory footprint. I had seen a prior run of this talk, and so almost everything went to plan in this presentation – with the exception of the QCon sign falling off the podium by the end. Oh well, it had been a long day …

This brought to an end my 12th(?) QCon London. I don’t think I’ve been to all 14, but I have certainly been to most of them over the past years, and all in all, it’s just as great as ever. Obviously this year was a little ‘special’ due to all the changes in place, but I thought that the staff (both at QCon and at the QEII conference centre itself) handled everything marvellously. I can’t wait to be back again this time next year, whether as an attendee, speaker or track host!