Home DNS with Unbound and NSD 12 Nov 2023
I recently redid the DNS on my home network, moving from dnsmasq to Unbound and NSD. Unbound acts as the resolver the network uses, and NSD hosts the local zones for my search domains. For reasons, this really spans two sites: our home, and a barn we own a few miles away. Because I am a dork, I have things at both sites :-)
My network controller has a built-in DNS server which assigns a local domain, a la brians-laptop.local, to anything which gets a DHCP lease. I wanted to be able to respect these, but also assign a different domain to statically assigned things on the network, such as printers and our NAS. For these I set up .home and .barn domains, for things in the home and in the barn respectively. Those zones are hosted on NSD, with a config like:
# /usr/local/etc/nsd/nsd.conf
server:
    ip-address: 127.0.0.1
    port: 53530

zone:
    name: home
    zonefile: "home.zone"

zone:
    name: barn
    zonefile: "barn.zone"
Note that NSD is only listening on 127.0.0.1, and on port 53530. It should only ever be queried by the Unbound instance on the same host (which is using port 53).
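Before starting NSD, the config and zone files can be sanity-checked with the tools that ship with NSD. A sketch, assuming the paths from the config above:

```shell
# verify the NSD configuration parses cleanly
nsd-checkconf /usr/local/etc/nsd/nsd.conf

# verify a zone file loads for its zone (NSD 4.x)
nsd-checkzone home /usr/local/etc/nsd/home.zone
```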
The zone files referenced are just stubs, not really correct, but they don't need to be :-)
; /usr/local/etc/nsd/home.zone
$ORIGIN home.   ; 'default' domain as FQDN for this zone
$TTL 600        ; default time-to-live for this zone

home. IN SOA ns.home. noc.dns.icann.org. (
        16         ; Serial
        7200       ; Refresh
        3600       ; Retry
        1209600    ; Expire
        3600       ; Negative response caching TTL
)

; The nameserver that is authoritative for this zone.
        NS      ns.home.

nas.home.       A  192.168.2.114
printer.home.   A  192.168.2.140
m0001.home.     A  192.168.2.101
m0002.home.     A  192.168.2.102
and the barn:
; /usr/local/etc/nsd/barn.zone
$ORIGIN barn.   ; 'default' domain as FQDN for this zone
$TTL 600        ; default time-to-live for this zone

barn. IN SOA ns.barn. noc.dns.icann.org. (
        17         ; Serial
        7200       ; Refresh
        3600       ; Retry
        1209600    ; Expire
        3600       ; Negative response caching TTL
)

; The nameserver that is authoritative for this zone.
        NS      ns.barn.

m0003.barn.     A  192.168.81.101
m0004.barn.     A  192.168.81.102
dvr.barn.       A  192.168.81.110
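With both zones loaded, NSD can be queried directly to confirm it is answering. A sketch using dig against the address and port from the config above:

```shell
# ask NSD (listening on 127.0.0.1:53530) for a record in each zone
dig @127.0.0.1 -p 53530 nas.home A +short
dig @127.0.0.1 -p 53530 dvr.barn A +short
```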
With NSD up and running, I set up Unbound to make use of those local zones. It runs on the same instance as NSD, and is configured to use it for those domains:
# /usr/local/etc/unbound/unbound.conf
server:
    # This is a CARP interface, which apparently
    # requires explicitly listing
    interface: 192.168.2.100
    interface: 0.0.0.0
    do-not-query-localhost: no
    access-control: 192.168.0.0/16 allow

    local-zone: "home" nodefault
    domain-insecure: "home"

    local-zone: "barn" nodefault
    domain-insecure: "barn"

stub-zone:
    name: "home."
    stub-addr: 127.0.0.1@53530

stub-zone:
    name: "barn."
    stub-addr: 127.0.0.1@53530

forward-zone:
    name: "local."
    forward-addr: 192.168.1.1
Unbound is set up to listen on all interfaces, but because I am using a CARP interface as well, it seems to require separately listing that one to avoid confusion. The two local zones are configured to be insecure (no DNSSEC) and to forward to the NSD instance via a stub-zone for each domain, respectively. The dynamically assigned .local domain is forwarded to the router, via the forward-zone block, to pick up the names of things which get DHCP leases.
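End to end, the whole chain can be checked by querying Unbound itself on the normal port. A sketch, assuming a client on the home network and the hostnames used above:

```shell
# resolved via the stub-zone -> NSD path
dig @192.168.2.100 nas.home A +short

# resolved via the forward-zone -> router path
dig @192.168.2.100 brians-laptop.local A +short
```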
Finally, to make it all work, I hand out three search domains over DHCP: .local, .home, and .barn. This is DHCP option code 119 with a value of local,home,barn, for future reference.
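For the curious, option 119 carries the search list in DNS wire format per RFC 3397: each domain becomes length-prefixed labels terminated by a zero byte (name compression is allowed by the RFC but skipped in this sketch):

```shell
# local,home,barn as \005local\000 \004home\000 \004barn\000, shown as hex
printf '\005local\000\004home\000\004barn\000' | od -An -tx1 | tr -d ' \n'
# 056c6f63616c0004686f6d6500046261726e00
```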
Deployment-wise, this is running on two servers in each location, with each server running both NSD and Unbound. The CARP interface is used to provide a single IP address for the DNS servers, and the DHCP server is configured to hand out that IP address as the DNS server for the network.
Fun fact along the way: I learned Emacs has a mode for zone files which automatically increments the serial when you save. Handy!
20 Years of Wasting Time! 11 Nov 2023
I cannot let 2023 go past without writing anything, given it is 20 years of this blog, but an update to Hugo broke things, and getting it to work was just annoying. So I got frustrated enough making Hugo do what I wanted that I ported over to Zola, and here we go: I am able to update again!
Along the way I took a moment to add all the posts from my original blosxom based blog to the archive, which makes me happy. Really, I want to post about the home DNS setup I did, but that will have to wait until tomorrow :-) For now, I want to push this and see if the deploy works!
Happy 2023!
Blog CD Pipeline with AWS CodePipeline 22 Nov 2017
Jumped out of order from my earlier checklist and set up some automagic build and deploy. I'd wanted an excuse to try out CodePipeline, so this was it!
So, how does this blog work? It is deployed to an S3 bucket (skife.org) with CloudFront in front of it. CloudFront is set up to use the free SNI certs to provide TLS. Previously, I pushed manually via s3cmd, which worked well with some incantation fiddling.
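The manual push was something like the following (flags from memory, so treat it as a sketch rather than the exact incantation):

```shell
# build the site, then sync the output to the bucket world-readable,
# removing anything deleted locally
hugo
s3cmd sync --acl-public --delete-removed public/ s3://skife.org/
```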
I won't write a full CodeBuild and CodeDeploy tutorial, Amazon has that well covered, but a couple bits were funny to work out, so I will talk about those.
First, CodePipeline needs to trigger things. This is important as CodeBuild has no mechanism (that I could find) to only care about particular branches. CodeBuild really just does builds (kind of). Conceptually, do everything through CodePipeline; other stuff is just steps which react to the pipeline.
Given this is a static site, the build step just builds a tarball:
version: 0.2

phases:
  install:
    commands:
      - wget https://github.com/gohugoio/hugo/releases/download/v0.31/hugo_0.31_Linux-64bit.deb
      - dpkg -i ./hugo_0.31_Linux-64bit.deb
  build:
    commands:
      - hugo
      - tar -C public -cvzf skife.org.tgz .

artifacts:
  files:
    - skife.org.tgz
Nothing fancy here, but having the artifact for CodePipeline to pass around is important.
My first pass just had the deploy at the end of the build, but I want to be able to insert some basic tests before I deploy new versions: just things like link verification, HTML5 validation, and maybe running stuff like Lighthouse against a test instance before letting it out. The test part :-) Because of this, I wanted to separate build from deploy. It turns out "deploy by copying into an S3 bucket" is not a thing CodeDeploy has any concept of.
So my "deploy" is just another CodeBuild build:
version: 0.2

phases:
  build:
    commands:
      - tar -xf skife.org.tgz
      - rm skife.org.tgz
      - aws s3 sync . s3://skife.org --acl public-read --cache-control public,max-age=600
You can feed one build to another; it doesn't mind. When I configured the second build I had to set it up with an output artifact or CodePipeline wouldn't let me add it. After I saved the CodePipeline changes, I could go back and remove that output. The other decent path is probably to set up a Lambda function that takes apart the tarball and copies things over... but this build approach seems simpler.
I tried to put a CloudFront invalidation into the last step as well, but the version of the aws CLI on the build image is old and does not support it. I'll sort that out later. Once I do, I will change max-age to max-age=31536000 or so and add something like:
- aws cloudfront create-invalidation --distribution-id E1DTTO3T6ZPN9M --paths / /index.html /404.html /archive.html /index.xml
to the build commands, and voila!