Migrating from BIND9 to Knot DNS + Knot Resolver
Introduction
This guide covers migrating a complex DNS infrastructure from BIND9 to Knot DNS (authoritative) and Knot Resolver (recursive). The setup includes split-horizon DNS, DNSSEC signing, dynamic updates from DHCP and ACME clients, and privacy-focused upstream forwarding.
Why Migrate?
Limitations of BIND9
- Monolithic design: BIND9 combines authoritative and recursive functions in a single process, which can be a security concern
- Complex configuration: Views and zones mixed together make configuration harder to reason about
- Modern features: At the time, BIND9 did not support DNS-over-TLS, so an external resolver was needed for encrypted upstream queries
- Performance: Because BIND9 didn't support DNS-over-TLS at the time, forwarding through stubby caused a significant performance penalty
Benefits of Knot
- Separation of concerns: Knot DNS (authoritative) and Knot Resolver (recursive) are separate daemons
- Better DNSSEC tooling: Automatic key management and signing
- Performance: Knot can spawn several instances from systemd and share the load of a single listening socket
- Modern configuration: YAML for Knot DNS and Lua for Knot Resolver, both of which have good editor support
Architecture Comparison
BIND9 Architecture
- Single process handles all DNS functions
- Views separate traffic based on source IP
- Internal view forwards to local resolver for recursion
- External view serves authoritative data only
Knot Architecture
- Separate processes for authoritative and recursive
- Knot Resolver listens on internal IPs for recursive queries
- Knot DNS listens on both internal and external IPs for authoritative data
- Stub zones in Knot Resolver forward local domain queries to Knot DNS
Migration Steps
1. Install Knot Packages
```
# Install both Knot DNS and Knot Resolver
dnf install knot knot-resolver
```
2. Convert TSIG Keys
Both systems use the same TSIG key format. Copy key definitions from BIND to Knot:
BIND format:
key "dhcp-updater" {algorithm hmac-sha256;secret "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";};
Knot format:
```
key:
  - id: dhcp-updater
    algorithm: hmac-sha256
    secret: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
```
3. Configure Knot DNS (Authoritative)
Create /etc/knot/knot.conf:
```
server:
    rundir: "/run/knot"
    user: knot:knot
    automatic-acl: on
    # This hides the server version
    identity: ""
    version: ""
    # Loopback
    listen: 127.0.0.1@53
    listen: ::1@53
    # Internal interface for stub queries from resolver
    listen: 10.250.0.2@53
    # External interfaces for public queries
    listen: 203.0.113.1@53
    listen: 2001:db8::1@53

database:
    storage: "/var/lib/knot"

# Define TSIG keys to be used in ACLs
key:
  - id: dhcp-updater
    algorithm: hmac-sha256
    secret: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
  - id: certbot
    algorithm: hmac-sha256
    secret: BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=

# Define ACLs for use in templates and zones
acl:
  - id: allow_dhcp_update
    key: dhcp-updater
    action: update
  - id: acme_txt_update
    key: certbot
    action: update
    update-type: [TXT]

# Define templates for common configurations
template:
  - id: default
    storage: "/var/lib/knot/global"
    file: "%s.zone"
    semantic-checks: on
    dnssec-signing: on
    acl: allow_dhcp_update
  - id: internal
    storage: "/var/lib/knot/internal"
    file: "%s.zone"
    semantic-checks: on
    acl: allow_dhcp_update

# Define zones
zone:
  - domain: internal.example.com
    acl: acme_txt_update
    acl: allow_dhcp_update
  - domain: dmz.example.com
    acl: acme_txt_update
  - domain: guest.example.com
  - domain: internal
    template: internal
  - domain: 10.100.10.in-addr.arpa
    template: internal
```
4. Configure Knot Resolver
Create /etc/knot-resolver/kresd.conf:
```
-- Network interface configuration
net.listen('10.250.0.1', 53, { kind = 'dns' })
net.listen('10.250.0.1', 853, { kind = 'tls' })
net.listen('fd00:1234:5678::1', 53, { kind = 'dns' })

-- TLS certificate for DNS-over-TLS
net.tls('/etc/letsencrypt/live/resolver.example.com/fullchain.pem',
        '/etc/letsencrypt/live/resolver.example.com/privkey.pem')

-- Load modules
modules = {
    'hints > iterate',
    'stats',
    'predict',
}

-- Cache configuration
cache.open(100 * MB, 'lmdb:///var/cache/knot-resolver')

-- Local domains - forward to Knot DNS
localDomains = policy.todnames({
    'internal.example.com.',
    'dmz.example.com.',
    'guest.example.com.',
    '10.in-addr.arpa.',
})
policy.add(policy.suffix(policy.FLAGS({'NO_EDNS', 'NO_CACHE'}), localDomains))
policy.add(policy.suffix(policy.STUB({'10.250.0.2'}), localDomains))

-- Upstream: forward to DNS provider via DNS-over-TLS
policy.add(policy.all(policy.TLS_FORWARD({
    {'2001:db8::53', hostname='dns.example.net'},
    {'203.0.113.53', hostname='dns.example.net'}
})))
```
5. Migrate Zone Files
Zone files are compatible between BIND and Knot. Copy them to the Knot storage directory.
My old BIND zones were in /var/named/dynamic, with each subdirectory named after the zone it references.
```
# Copy zone files
cp /var/named/dynamic/example.com/zone.db /var/lib/knot/global/example.com.zone

# Fix ownership
chown -R knot:knot /var/lib/knot
```
6. Migrate DNSSEC Keys
For BIND zones with inline-signing, export and import DNSSEC keys:
```
# BIND stores keys as K<zone>+<alg>+<keyid>.key/private files
# Knot can import these or generate new keys automatically

# Option 1: Let Knot generate new keys (easier)
knotc zone-key-rollover example.com ksk
knotc zone-key-rollover example.com zsk

# Option 2: Import existing BIND keys
keymgr example.com import-bind /var/named/dynamic/example.com
```
Advanced Configuration & Edge Cases
1. Split-Horizon DNS: BIND Views vs. Knot Services
The Challenge:
BIND9 uses views to implement split-horizon DNS - serving different answers based on the client's source IP. Knot DNS doesn't have views, requiring a different approach.
BIND9 Approach:
view "internal" {match-clients { 10.0.0.0/8; 192.168.0.0/16; };recursion yes;zone "example.com" {type master;file "internal/example.com.zone";};};view "external" {match-clients { any; };recursion no;zone "example.com" {type master;file "external/example.com.zone";};};
Single BIND process listens on all IPs and routes queries to views.
Knot Approach:
Different architecture entirely:
- Knot DNS (authoritative) listens on:
  - Internal IP (10.250.0.2) - for stub queries from Knot Resolver
  - External IP (203.0.113.1) - for public authoritative queries
- Knot Resolver (recursive) listens on:
  - Internal IP (10.250.0.1) - for internal clients
- Query flow:
  - Internal clients → Knot Resolver (10.250.0.1) → stub to Knot DNS (10.250.0.2) for local zones
  - External clients → Knot DNS (203.0.113.1) directly
Key Differences:
| Aspect | BIND9 | Knot |
|---|---|---|
| Process model | Single process | Two separate processes |
| Traffic separation | Views (source IP matching) | Network interface binding |
| Recursion | Per-view setting | Separate daemon (Knot Resolver) |
| Configuration | Match-clients in views | Listen directives on different IPs |
Migration Gotcha:
BIND's listen-on with negation doesn't work the same way:
```
# BIND: Listen on all interfaces EXCEPT 10.250.0.1
listen-on { !10.250.0.1; any; };
```
With Knot, you must explicitly list all interfaces:
```
# Knot: Explicitly list each interface
server:
    listen: 10.250.0.2@53
    listen: 203.0.113.1@53
    listen: 2001:db8::1@53
```
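A quick way to confirm the listeners took effect after a reload (a minimal sketch; the addresses are the example ones from the config above, and ss may need root):

```
# Show the listen addresses Knot has loaded
knotc conf-read 'server.listen'

# Confirm the sockets are actually bound
ss -lntup | grep -E ':53|:853'
```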
2. DNS-over-TLS to Upstream Resolvers
The Challenge:
ISPs can snoop on DNS queries. Use DNS-over-TLS (DoT) to encrypt queries to upstream resolvers.
BIND9 Approach:
BIND9 does support DNS-over-TLS in version 9.18+, but configuration is complex:
```
options {
    forwarders port 853 tls ephemeral {
        2001:db8::53;
        203.0.113.53;
    };
};
```
Knot Resolver Approach:
Much cleaner with policy.TLS_FORWARD:
```
-- Forward all queries via DNS-over-TLS
policy.add(policy.all(policy.TLS_FORWARD({
    {'2001:db8::53', hostname='dns.provider.com'},
    {'203.0.113.53', hostname='dns.provider.com'}
})))
```
The hostname parameter is critical - it's used for TLS certificate verification.
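To confirm an upstream actually serves DoT and presents a certificate matching the hostname you configured, a hedged check with kdig (shipped with the Knot utilities) might look like this; dns.provider.com and 203.0.113.53 are the placeholder values from the snippet above:

```
# +tls-ca validates against the system CA store, +tls-hostname sets the name to verify
kdig +tls-ca +tls-hostname=dns.provider.com @203.0.113.53 example.com A
```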
Real-World Example: NextDNS
NextDNS provides DNS filtering with unique profile IDs:
```
-- NextDNS with profile ID abc123
policy.add(policy.all(policy.TLS_FORWARD({
    {'2a07:a8c0::', hostname='abc123.dns.nextdns.io'},
    {'2a07:a8c1::', hostname='abc123.dns.nextdns.io'},
    {'45.90.28.0', hostname='abc123.dns.nextdns.io'},
    {'45.90.30.0', hostname='abc123.dns.nextdns.io'}
})))
```
Local Zones Exception:
Local zones should NOT go over TLS to upstream - they should stub to your authoritative server:
```
-- First: Define local domains
localDomains = policy.todnames({
    'internal.example.com.',
})

-- Second: Disable EDNS and caching for local domains
policy.add(policy.suffix(policy.FLAGS({'NO_EDNS', 'NO_CACHE'}), localDomains))

-- Third: Stub local domains to Knot DNS
policy.add(policy.suffix(policy.STUB({'10.250.0.2'}), localDomains))

-- Fourth: Everything else goes to upstream via TLS
policy.add(policy.all(policy.TLS_FORWARD({...})))
```
Order matters! Policies are evaluated in order, so local domain policies must come before the catch-all TLS forward.
3. Selective Routing with EDNS Client Subnet (ECS)
The Challenge:
Some services like YouTube, Netflix, and CDNs benefit from knowing your approximate location to route you to the nearest edge server. EDNS Client Subnet (ECS) sends part of your IP address to authoritative nameservers for better locality. However, this reduces privacy.
You might want:
- Default: ECS disabled for privacy
- Selective: ECS enabled only for specific domains that benefit from it (video streaming, CDNs)
Privacy vs Performance Trade-off:
| Approach | Privacy | Performance | Use Case |
|---|---|---|---|
| No ECS | High - Your IP subnet isn't shared | Lower - May get farther CDN nodes | Privacy-conscious users |
| ECS everywhere | Low - Your subnet shared with all domains | Higher - Optimal CDN routing | Performance-first users |
| Selective ECS | Balanced - Only specific domains get subnet | Balanced - Fast for streaming, private otherwise | Best of both worlds |
BIND9 Approach:
BIND9 doesn't have built-in selective ECS routing. You would need to:
- Run two BIND instances with different forwarding configs
- Use views to route different clients to different instances
This is complex and not practical.
Knot Resolver Approach:
Knot Resolver supports per-policy forwarding, making selective ECS routing straightforward.
Example: Route video streaming domains to ECS-enabled resolver
```
-- Define domains that benefit from ECS (CDNs, video streaming)
ecsDomains = policy.todnames({
    'youtube.com.',
    'googlevideo.com.',
    'ytimg.com.',
    'netflix.com.',
    'nflxvideo.net.',
    'twitch.tv.',
    'cloudflare.com.',
    'cloudfront.net.',
})

-- Forward ECS domains to Quad9 with ECS enabled
policy.add(policy.suffix(policy.TLS_FORWARD({
    {'2620:fe::11', hostname='dns.quad9.net'},     -- IPv6 ECS
    {'2620:fe::fe:11', hostname='dns.quad9.net'},  -- IPv6 ECS
    {'9.9.9.11', hostname='dns.quad9.net'},        -- IPv4 ECS
    {'149.112.112.11', hostname='dns.quad9.net'}   -- IPv4 ECS
}), ecsDomains))

-- Everything else goes to privacy-focused resolver (no ECS)
policy.add(policy.all(policy.TLS_FORWARD({
    {'2620:fe::fe', hostname='dns.quad9.net'},      -- IPv6 no ECS
    {'2620:fe::9', hostname='dns.quad9.net'},       -- IPv6 no ECS
    {'9.9.9.9', hostname='dns.quad9.net'},          -- IPv4 no ECS
    {'149.112.112.112', hostname='dns.quad9.net'}   -- IPv4 no ECS
})))
```
How it works:
For a CDN domain:
- Client queries video.youtube.com
- Knot Resolver matches youtube.com. in ecsDomains
- Query is forwarded to Quad9's ECS endpoint (9.9.9.11)
- Quad9 sends ECS with the client's subnet to YouTube's DNS
- YouTube returns the IP of the geographically closest edge server

For any other domain:
- Client queries example.com
- No match in ecsDomains
- Query is forwarded to Quad9's privacy endpoint (9.9.9.9)
- Quad9 does NOT send ECS
- Privacy preserved for non-CDN domains
Alternative: Per-Client ECS Routing
You can also route based on source IP using Knot Resolver's view module - give some clients ECS, others not:

```
-- Load the view module for source-address based policies
modules.load('view')

-- ECS-enabled forwarding for specific client networks
view:addr('10.100.10.0/24', policy.all(policy.TLS_FORWARD({
    {'9.9.9.11', hostname='dns.quad9.net'}
})))
view:addr('10.100.20.0/24', policy.all(policy.TLS_FORWARD({
    {'9.9.9.11', hostname='dns.quad9.net'}
})))

-- Privacy forwarding for everyone else
policy.add(policy.all(policy.TLS_FORWARD({
    {'9.9.9.9', hostname='dns.quad9.net'}
})))
```
DNS-over-QUIC for Lower Latency:
Some providers support DNS-over-QUIC (DoQ), which can offer lower latency than DNS-over-TLS. Knot DNS currently supports DNS-over-QUIC, but Knot Resolver does not yet; see the DoQ support issue on the cz.nic GitLab for status.
As far as providers go, NextDNS currently supports DNS-over-QUIC and the other major providers do not, which is slightly ironic given that Google invented QUIC.
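If you want to probe DoQ support yourself, newer kdig builds have a +quic flag (Knot DNS 3.2+, compiled with QUIC support); this is only a sketch against the NextDNS anycast address used above and assumes your kdig build includes QUIC:

```
# Requires a kdig built with QUIC support
kdig +quic @45.90.28.0 example.com A
```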
Provider Comparison for ECS:
| Provider | ECS Endpoint | No-ECS Endpoint | Notes |
|---|---|---|---|
| Quad9 | 9.9.9.11, 149.112.112.11 | 9.9.9.9, 149.112.112.112 | Malware filtering on both |
| Cloudflare | 1.1.1.1 | 1.1.1.1 | ECS disabled by default, can't enable (exception is whoami.ds.akahelp.net for debugging) |
| Google | 8.8.8.8 | 8.8.8.8 | ECS enabled by default, can't disable |
| NextDNS | Custom profile | Custom profile | "Anonymized" ECS configurable per profile |
NextDNS with Selective ECS:
NextDNS allows ECS to be configured per profile. Create two profiles on your NextDNS dashboard. Under Settings, the option is called "Anonymized EDNS Client Subnet": when it's off, no ECS is sent; when it's on, ECS is sent with some degree of anonymization. From the "Setup" page, note the ID of each profile.
We'll assume the ECS profile ID is ecs and the non-ECS profile ID is nonecs.
```
-- Profile with ECS enabled for CDN domains
ecsDomains = policy.todnames({
    'youtube.com.',
    'netflix.com.',
})
policy.add(policy.suffix(policy.TLS_FORWARD({
    {'45.90.28.0', hostname='ecs.dns.nextdns.io'}     -- ECS profile
}), ecsDomains))

-- Default profile without ECS
policy.add(policy.all(policy.TLS_FORWARD({
    {'45.90.28.0', hostname='nonecs.dns.nextdns.io'}  -- Privacy profile
})))
```
Testing ECS:
Verify ECS behaviour by comparing answers from the ECS and non-ECS endpoints:
```
# Query an endpoint that sends ECS
dig @9.9.9.11 youtube.com

# Query an endpoint that does not send ECS
dig @9.9.9.9 youtube.com

# Send an explicit client subnet in EDNS and check the response
dig @10.250.0.1 youtube.com +subnet=203.0.113.0/24
```
Recommended Domains for ECS:
```
ecsDomains = policy.todnames({
    -- Google/YouTube
    'youtube.com.',
    'googlevideo.com.',
    'ytimg.com.',
    'ggpht.com.',
    -- Netflix
    'netflix.com.',
    'nflxvideo.net.',
    'nflximg.net.',
    -- Streaming services
    'twitch.tv.',
    'hulu.com.',
    'disneyplus.com.',
    -- CDNs
    'cloudflare.com.',
    'cloudfront.net.',
    'akamaihd.net.',
    'fastly.net.',
    -- Gaming
    'steampowered.com.',
    'valvesoftware.com.',
    'riotgames.com.',
})
```
Security Considerations:
- ECS leaks subnet information - Authoritative servers learn your approximate location
- Cache poisoning - ECS responses are client-specific, complicating cache validation
- Fingerprinting - Combination of queries + ECS can identify users
Best Practice:
- Enable ECS only for trusted, performance-critical domains
- Use privacy-first (no ECS) as default
- Regularly review ECS domain list
- Consider geographic privacy requirements (GDPR, etc.)
4. Systemd Multi-Instance for Multiple Resolver Profiles
The Challenge:
You want different resolver instances with different filtering policies (e.g., filtered for kids, unfiltered for admins). Multiple instances can even share the same listening socket and cache to increase performance.
Solution: Systemd Templates
Knot Resolver supports systemd instances via [email protected]:
```
# Start filtered instances
systemctl start kresd@filtered1
systemctl start kresd@filtered2

# Start bypass instances
systemctl start kresd@bypass1
systemctl start kresd@bypass2

# Start web management instance
systemctl start kresd@webmgmt
```
Configuration Detection:
The Lua config can detect which instance is running:
```
local systemd_instance = os.getenv("SYSTEMD_INSTANCE")
local is_bypass = string.match(systemd_instance, '^bypass')

if is_bypass then
    -- Bypass instance: listen on different IP
    net.listen('10.250.0.10', 53, { kind = 'dns' })
    -- Use unfiltered upstream
    policy.add(policy.all(policy.TLS_FORWARD({
        {'2001:db8::53', hostname='unfiltered.dns.example.net'}
    })))
else
    -- Filtered instance: normal IP
    net.listen('10.250.0.1', 53, { kind = 'dns' })
    -- Use filtered upstream
    policy.add(policy.all(policy.TLS_FORWARD({
        {'2001:db8::53', hostname='filtered.dns.example.net'}
    })))
end
```
Separate Caches:
Each profile (filtered vs. bypass) should use its own cache:
```
if is_bypass then
    cache.open(100 * MB, 'lmdb:///var/cache/knot-resolver/bypass')
else
    cache.open(100 * MB, 'lmdb:///var/cache/knot-resolver/filtered')
end
```
Web Management Instance:
You can run a dedicated instance just for the web UI:
```
if string.match(systemd_instance, '^webmgmt') then
    modules.load('http')
    http.config({
        tls = true,
        cert = '/etc/letsencrypt/live/resolver.example.com/fullchain.pem',
        key = '/etc/letsencrypt/live/resolver.example.com/privkey.pem',
    })
    net.listen('10.250.0.1', 8888, { kind = 'webmgmt' })
end
```
Access at: https://10.250.0.1:8888
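A quick smoke test that the management instance is actually answering (the certificate belongs to resolver.example.com, so -k skips verification when querying by IP):

```
# Expect an HTTP status line back from the webmgmt listener
curl -skI https://10.250.0.1:8888/ | head -n 1
```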
5. Dynamic DNS Updates for DHCP
The Challenge:
DHCP server (ISC KEA) needs to update DNS records when it assigns leases.
BIND9 Configuration:
key "dhcp-updater" {algorithm hmac-sha256;secret "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";};zone "internal.example.com" {type master;file "dynamic/internal.example.com.zone";update-policy {grant dhcp-updater zonesub ANY;};};zone "10.100.10.in-addr.arpa" {type master;file "dynamic/10.100.10.in-addr.arpa.zone";update-policy {grant dhcp-updater zonesub ANY;};};
Knot DNS Configuration:
```
key:
  - id: dhcp-updater
    algorithm: hmac-sha256
    secret: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=

acl:
  - id: allow_dhcp_update
    key: dhcp-updater
    action: update

zone:
  - domain: internal.example.com
    acl: allow_dhcp_update
  - domain: 10.100.10.in-addr.arpa
    acl: allow_dhcp_update
```
KEA DHCP Configuration:
No changes needed! Kea's DHCP-DDNS daemon sends standard RFC 2136 dynamic updates signed with TSIG, which Knot accepts exactly as BIND did:
{"DhcpDdns": {"enable-updates": true,"server-ip": "10.250.0.2","server-port": 53,"ncr-protocol": "UDP","ncr-format": "JSON","tsig-keys": [{"name": "dhcp-updater","algorithm": "HMAC-SHA256","secret": "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="}]}}
Important: Point KEA to Knot DNS's IP (10.250.0.2), not Knot Resolver.
6. ACME DNS-01 Challenge Updates
The Challenge:
Let's Encrypt DNS-01 challenges require updating TXT records. Multiple services need updates, but should only update specific records.
BIND9 Granular Permissions:
zone "internal.example.com" {update-policy {grant dhcp-updater zonesub ANY;grant web-certbot name _acme-challenge.web.internal.example.com. TXT;grant mail-certbot name _acme-challenge.mail.internal.example.com. TXT;grant wildcard-certbot name _acme-challenge.internal.example.com. TXT;};};
Knot DNS Granular Permissions:
```
key:
  - id: web-certbot
    algorithm: hmac-sha512
    secret: BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB==
  - id: mail-certbot
    algorithm: hmac-sha512
    secret: CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC==

acl:
  - id: web_acme_update
    key: web-certbot
    action: update
    update-type: [TXT]
    update-owner: name
    update-owner-match: equal
    update-owner-name: [_acme-challenge.web]
  - id: mail_acme_update
    key: mail-certbot
    action: update
    update-type: [TXT]
    update-owner: name
    update-owner-match: equal
    update-owner-name: [_acme-challenge.mail]

zone:
  - domain: internal.example.com
    acl: web_acme_update
    acl: mail_acme_update
    acl: allow_dhcp_update
```
Certbot Hook Script:
```
#!/bin/bash
# /etc/letsencrypt/renewal-hooks/deploy/update-dns.sh
SERVER="10.250.0.2"
ZONE="internal.example.com"
KEY_NAME="web-certbot"
KEY_FILE="/etc/letsencrypt/dns-update.key"

nsupdate -k "$KEY_FILE" <<EOF
server $SERVER
zone $ZONE
update delete _acme-challenge.web.$ZONE TXT
update add _acme-challenge.web.$ZONE 60 TXT "$CERTBOT_VALIDATION"
send
EOF
```
TSIG Key File Format:
key "web-certbot" {algorithm hmac-sha512;secret "BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB==";};
7. DNSSEC Automatic Signing
The Challenge:
BIND9 requires manual key management and zone signing configuration. Knot automates this.
BIND9 Configuration:
zone "example.com" {type master;file "dynamic/example.com/zone.db";inline-signing yes;dnssec-policy default;key-directory "dynamic/example.com";};
Requires:
- Generating keys manually with dnssec-keygen
- Setting key timings
- Inline signing in BIND (which keeps separate signed/unsigned copies of the zone)
Knot DNS Configuration:
```
template:
  - id: default
    dnssec-signing: on
    dnssec-policy: default

zone:
  - domain: example.com
```
That's it! Knot automatically:
- Generates keys if they don't exist
- Signs the zone
- Rotates keys based on policy
- Publishes DNSKEY records
- Manages NSEC/NSEC3 records
Custom DNSSEC Policy:
```
policy:
  - id: custom
    algorithm: ecdsap256sha256
    ksk-lifetime: 365d
    zsk-lifetime: 90d
    nsec3: on
    nsec3-iterations: 10
    nsec3-salt-length: 8

zone:
  - domain: example.com
    dnssec-policy: custom
```
Key Management:
```
# List keys for a zone
keymgr example.com list

# Generate new key
keymgr example.com generate

# Import BIND keys
keymgr example.com import-bind /path/to/bind/keys

# Trigger key rollover
knotc zone-key-rollover example.com ksk
```
DS Records:
After signing, publish DS records at your registrar:
```
# Get DS records derived from the zone's KSK
keymgr example.com ds
```
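Once the registrar has published the DS, you can verify the delegation and the validation chain from the outside; 1.1.1.1 here is just an example validating resolver:

```
# DS record visible at the parent
dig example.com DS +short @1.1.1.1

# Full chain validates: look for the 'ad' flag in the response header
dig example.com SOA +dnssec @1.1.1.1
```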
8. Catalog Zones for View Synchronization
The Challenge:
In BIND9, you might use catalog zones to automatically provision zones from internal view to external view.
BIND9 Configuration:
```
# Internal view - master catalog
view "internal" {
    zone "catalog.example.org" {
        type master;
        file "dynamic/catalog.example.org/zone.db";
        also-notify { 127.0.0.1 key internal-to-external; };
    };
};

# External view - slave catalog
view "external" {
    zone "catalog.example.org" {
        type slave;
        masters { 127.0.0.1 key internal-to-external; };
    };
    catalog-zones {
        zone "catalog.example.org"
        default-masters { 127.0.0.1 key internal-to-external; }
        zone-directory "catzones";
    };
};
```
Zones added to the catalog in the internal view are automatically provisioned in the external view.
Knot DNS Alternative:
Knot DNS added catalog zone support during the 3.x series (interpretation in 3.0, generation in 3.1). If you are on an older version, or simply want to avoid catalog zones, there are simpler alternatives:
- Option 1: Single Knot instance
  - Don't separate internal/external zones
  - Use different IPs for internal vs. external access
  - Knot Resolver handles the split horizon by stubbing local domains
- Option 2: Zone synchronization script
  - Use knotc to manage zones programmatically
  - Script adding/removing zones on both instances
- Option 3: Configuration management
  - Use Ansible/Puppet/Salt to generate Knot configs
  - Deploy synchronized configs to both instances
Example: Adding zones via script
```
#!/bin/bash
# add-zone.sh
ZONE="$1"
ZONE_FILE="/var/lib/knot/global/$ZONE.zone"

# Add zone to configuration
knotc conf-begin
knotc conf-set "zone[$ZONE]"
knotc conf-commit

# Create empty zone file if needed
if [ ! -f "$ZONE_FILE" ]; then
    cat > "$ZONE_FILE" <<EOF
\$ORIGIN $ZONE.
\$TTL 3600
@   SOA ns1.example.com. admin.example.com. (
        $(date +%Y%m%d01) ; serial
        3600              ; refresh
        1800              ; retry
        604800            ; expire
        86400 )           ; minimum
    NS  ns1.example.com.
EOF
    chown knot:knot "$ZONE_FILE"
fi

# Load the zone
knotc zone-reload "$ZONE"
```
9. DNS64 for IPv6-Only Clients
The Challenge:
Kubernetes clusters with IPv6-only pods need to reach IPv4-only services. DNS64 synthesizes AAAA records from A records.
BIND9 Configuration:
view "internal" {dns64 64:ff9b::/96 {clients { 10.200.0.0/16; 2001:db8::/64; };};};
Knot Resolver Configuration:
```
-- Enable DNS64 module
modules.load('dns64')

-- Configure DNS64
dns64.config({
    prefix = '64:ff9b::/96',
    -- Optional: Restrict to specific clients
})

-- Alternative: Use policy for more control
policy.add(policy.suffix(policy.FLAGS('DNS64_DISABLE'), policy.todnames({
    -- Disable DNS64 for domains that have native AAAA
    'ipv6-capable.example.com.'
})))
```
How it works:
- IPv6-only client queries ipv4-only.example.com for an AAAA record
- Knot Resolver finds no AAAA record
- Knot Resolver looks up the A record: 192.0.2.1
- Knot Resolver synthesizes an AAAA record: 64:ff9b::192.0.2.1 (i.e. 64:ff9b::c000:201)
- Client connects to the NAT64 gateway using that IPv6 address
- NAT64 gateway translates the traffic to IPv4 192.0.2.1
NAT64 Configuration (Linux):
```
# Enable IPv6 forwarding
sysctl -w net.ipv6.conf.all.forwarding=1

# Set up Jool NAT64
modprobe jool
jool instance add "default" --netfilter --pool6 64:ff9b::/96

# Add IPv4 pool
jool -i "default" pool4 add 203.0.113.100 --tcp --udp --icmp
```
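To check DNS64 end to end, the well-known name ipv4only.arpa is handy: it only has A records (192.0.0.170 and 192.0.0.171), so any AAAA answer must have been synthesized. A minimal check, assuming the prefix configured above:

```
# Should return addresses inside 64:ff9b::/96, e.g. 64:ff9b::c000:aa and 64:ff9b::c000:ab
dig @10.250.0.1 ipv4only.arpa AAAA +short
```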
10. Forwarding Specific Zones to Kubernetes DNS
The Challenge:
Kubernetes has its own internal DNS (CoreDNS). You want *.kube.example.com to resolve via Kubernetes DNS.
BIND9 Configuration:
zone "kube.example.com" {type forward;forward only;forwarders { 10.43.0.10; };};
Knot Resolver Configuration:
```
-- Forward Kubernetes domains to CoreDNS
kubeDomains = policy.todnames({
    'kube.example.com.',
})

-- Disable EDNS and caching
policy.add(policy.suffix(policy.FLAGS({'NO_EDNS', 'NO_CACHE'}), kubeDomains))

-- Forward to CoreDNS
policy.add(policy.suffix(policy.STUB({'10.43.0.10'}), kubeDomains))
```
Why NO_EDNS?
Some Kubernetes DNS implementations don't handle EDNS properly. Disabling it ensures compatibility.
Why NO_CACHE?
Kubernetes services can change IPs rapidly. Disabling cache ensures you always get fresh data.
Multiple Kubernetes Clusters:
```
-- Production cluster
prodKube = policy.todnames({'prod.kube.example.com.'})
policy.add(policy.suffix(policy.FLAGS({'NO_EDNS', 'NO_CACHE'}), prodKube))
policy.add(policy.suffix(policy.STUB({'10.43.0.10'}), prodKube))

-- Staging cluster
stagingKube = policy.todnames({'staging.kube.example.com.'})
policy.add(policy.suffix(policy.FLAGS({'NO_EDNS', 'NO_CACHE'}), stagingKube))
policy.add(policy.suffix(policy.STUB({'10.44.0.10'}), stagingKube))
```
11. Hiding Server Version and Identity
The Challenge:
You don't want to leak DNS server information to attackers.
BIND9 Configuration:
options {version "unknown";hostname none;server-id none;};
Knot DNS Configuration:
server:identity: ""version: ""nsid: ""
Knot Resolver Configuration:
Knot Resolver doesn't expose version information by default. You can set a custom NSID if you run many resolver instances and need to identify which one answered a given query.
```
-- Set NSID (useful for debugging from client side)
local systemd_instance = os.getenv("SYSTEMD_INSTANCE")
modules.load('nsid')
nsid.name(systemd_instance)
```
Test:
```
# Query for version (should return nothing or custom value)
dig @10.250.0.2 version.bind chaos txt
dig @10.250.0.2 id.server chaos txt
```
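Since the NSID above is derived from the systemd instance name, you can also ask for it explicitly; dig's +nsid option requests the EDNS NSID option in the reply:

```
# The responding instance's NSID appears in the EDNS section of the answer
dig @10.250.0.1 example.com +nsid | grep -i nsid
```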
12. Statistics and Monitoring
BIND9 Statistics:
```
options {
    statistics-file "data/named_stats.txt";
};
```

```
# Generate statistics
rndc stats

# View statistics
cat /var/named/data/named_stats.txt
```
Knot DNS Statistics:
```
# Real-time statistics
knotc stats

# Specific zone statistics
knotc zone-stats example.com
```
Knot Resolver Statistics:
```
-- Enable statistics module
modules.load('stats')

-- Enable Prometheus metrics
modules.load('prometheus')
net.listen('127.0.0.1', 9145, { kind = 'webmgmt' })
```
Access Prometheus metrics: http://127.0.0.1:9145/metrics
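A quick way to confirm the exporter is answering before wiring it into Prometheus:

```
# Spot-check the metrics endpoint
curl -s http://127.0.0.1:9145/metrics | head -n 20
```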
Grafana Dashboard:
Knot Resolver provides a Grafana dashboard template:
- Import dashboard ID: 13314
- Configure Prometheus data source
- Monitor queries, cache hit rate, latency, etc.
Testing the Migration
1. Test DNSSEC Validation
```
# Query signed zone
dig @10.250.0.2 example.com SOA +dnssec

# Should show RRSIG records
dig @10.250.0.2 example.com DNSKEY +short
```
2. Test Dynamic Updates
```
# Create update file
cat > /tmp/update.txt <<EOF
server 10.250.0.2
zone internal.example.com
update add test.internal.example.com 300 A 10.100.10.99
send
EOF

# Send update with TSIG key
nsupdate -k /etc/dhcp-updater.key /tmp/update.txt

# Verify
dig @10.250.0.2 test.internal.example.com A +short
```
3. Test DNS-over-TLS
```
# Query via DoT
kdig +tls @10.250.0.1 example.com

# Alternative: Use openssl
openssl s_client -connect 10.250.0.1:853 -servername resolver.example.com
```
4. Test Split-Horizon
```
# Query from internal network (should recurse)
dig @10.250.0.1 google.com

# Query Knot DNS directly (should only answer authoritative)
dig @10.250.0.2 google.com              # Should get REFUSED

# Query local zone from both
dig @10.250.0.1 internal.example.com    # Works (via stub)
dig @10.250.0.2 internal.example.com    # Works (authoritative)
```
5. Test Cache
```
# First query (cache miss)
time dig @10.250.0.1 example.com

# Second query (cache hit - should be faster)
time dig @10.250.0.1 example.com

# Check cache statistics
knotc stats | grep cache
```
Performance Comparison
If you want to compare the two head-to-head, you can use the dnsperf program.
Benchmarking with dnsperf
```
# Create query file
cat > queries.txt <<EOF
google.com A
example.com A
internal.example.com A
EOF

# Test BIND9
dnsperf -s 10.250.0.1 -d queries.txt -l 30

# Test Knot Resolver
dnsperf -s 10.250.0.1 -d queries.txt -l 30
```
Common Pitfalls
1. Forgetting to Update Stub Zones
After adding a new zone to Knot DNS, update Knot Resolver:
```
localDomains = policy.todnames({
    'internal.example.com.',
    'dmz.example.com.',
    'new-zone.example.com.',  -- Don't forget this!
})
```
Restart Knot Resolver:
```
systemctl restart kresd@filtered
```
2. TSIG Key Algorithm Mismatches
BIND and Knot support different algorithms. Stick to commonly supported ones:
- ✅ hmac-sha256
- ✅ hmac-sha512
- ❌ hmac-md5 (deprecated, replace ASAP)
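If you need to mint a new key rather than copy an old one, both suites can generate one in their native config format; a small sketch reusing the key name from earlier examples:

```
# Knot: print a ready-to-paste key block for knot.conf
keymgr -t dhcp-updater hmac-sha256

# BIND (for comparison): print a named.conf-style key statement
tsig-keygen -a hmac-sha256 dhcp-updater
```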
3. SELinux Issues
Knot DNS and Knot Resolver have different SELinux contexts:
```
# Fix file contexts
restorecon -Rv /var/lib/knot
restorecon -Rv /var/cache/knot-resolver

# Check denials
ausearch -m avc -ts recent | grep knot
```
4. Systemd Socket Activation
Knot Resolver uses systemd socket activation by default. Disable if you configure net.listen() in Lua:
```
# Disable socket activation
systemctl disable kresd.socket
systemctl stop kresd.socket

# Start service directly
systemctl enable kresd@filtered
systemctl start kresd@filtered
```
5. IPv6 Freebind
If Knot Resolver fails to bind IPv6 addresses:
```
-- Use freebind option
net.listen('2001:db8::1', 53, { kind = 'dns', freebind = true })
```
Requires:
```
# Enable non-local bind in the kernel
sysctl -w net.ipv6.ip_nonlocal_bind=1
```
6. ACME DNS Validation Timing
Certbot deletes TXT records immediately after validation. If you have multiple ACME clients, you might have conflicts.
Solution: Use --dns-rfc2136-propagation-seconds:
```
certbot certonly \
    --dns-rfc2136 \
    --dns-rfc2136-credentials /etc/letsencrypt/rfc2136.ini \
    --dns-rfc2136-propagation-seconds 10 \
    -d example.com
```
DNS Application Firewall for Sandboxed Clients
The Challenge:
You want to restrict certain clients (guest networks, IoT devices, sandbox environments) from accessing specific domains or redirect their queries to filtered upstreams without creating separate resolver instances.
BIND9 Approach:
BIND9 requires separate views with response-policy zones (RPZ):
view "sandbox" {match-clients { 10.200.0.0/24; };recursion yes;response-policy {zone "rpz.blacklist";};zone "rpz.blacklist" {type master;file "rpz.blacklist.zone";};};
This requires maintaining zone files with blocklists, and you need separate views for each policy.
Knot Resolver DAF Approach:
The DNS Application Firewall (DAF) module provides a high-level declarative interface for filtering queries:
```
-- Enable DAF module
modules.load('daf')

-- Block malware/phishing domains
daf.add('qname ~ (malware|phishing).example.com deny')

-- Block queries from guest network to internal domains
daf.add('qname ~ %.internal%.example%.com AND src = 10.200.0.0/24 deny')

-- Reroute sandbox network queries to filtered upstream
daf.add('src = 10.200.0.0/24 mirror 10.250.0.10')
```
DAF Syntax:
Rules follow a "field operator operand action" format:
| Field | Description | Example |
|---|---|---|
| qname | Query name | qname = example.com |
| src | Source IP | src = 10.200.0.0/24 |
| dst | Destination IP | dst = 10.250.0.1 |
| Operator | Description | Example |
|---|---|---|
| = | Exact match | qname = example.com |
| ~ | Regex match | qname ~ %.ads%. |
| AND, OR | Chain filters | qname ~ ads AND src = 10.200.0.0/24 |
| Action | Description | Effect |
|---|---|---|
| deny | Block the query | Returns NXDOMAIN |
| reroute | Rewrite destination IP | reroute 192.0.2.1-127.0.0.1 |
| rewrite | Rewrite answer | rewrite A 127.0.0.1 |
| mirror | Forward to another resolver | mirror 10.250.0.10 |
| forward | Forward with fallback | forward 10.250.0.10 |
| truncate | Force TCP retry | Truncates response |
Real-World Examples:
1. Block ads and tracking for all clients:
```
-- Load blocklist domains
adDomains = {
    'doubleclick.net',
    'googlesyndication.com',
    'googleadservices.com',
    'facebook.com',
    'fbcdn.net',
}

for _, domain in ipairs(adDomains) do
    daf.add('qname ~ %.' .. domain .. ' deny')
end
```
2. Sandbox IoT devices (no internal access, filtered internet):
```
-- Block IoT devices from accessing internal domains
daf.add('qname ~ %.internal%.example%.com AND src = 10.200.0.0/24 deny')
daf.add('qname ~ %.dmz%.example%.com AND src = 10.200.0.0/24 deny')

-- Route IoT queries through NextDNS with strict filtering
daf.add('src = 10.200.0.0/24 forward 45.90.28.0 hostname=strict.dns.nextdns.io')
```
3. Prevent DNS rebinding attacks:
```
-- Block private IP responses
daf.add('answer ~ 10%.%d+%.%d+%.%d+ deny')
daf.add('answer ~ 192%.168%.%d+%.%d+ deny')
daf.add('answer ~ 172%.(1[6-9]|2[0-9]|3[01])%.%d+%.%d+ deny')
```
4. Redirect specific domains to local services:
```
-- Redirect queries for service.example.com to local server
daf.add('qname = service.example.com rewrite A 10.100.10.50')
```
Managing Rules Dynamically:
The DAF provides a control interface for runtime rule management:
```
-- Add rule with tracking ID
local rule_id = daf.add('qname = blocked.example.com deny')

-- List all rules
print(daf.rules())

-- Count how many times a rule matched
print(daf.count(rule_id))

-- Suspend a rule temporarily
daf.suspend(rule_id)

-- Delete a rule
daf.del(rule_id)
```
Web Interface
DAF includes a web UI and RESTful API:
```
-- Enable web management
modules.load('http')
http.config({
    tls = true,
    cert = '/etc/letsencrypt/live/resolver.example.com/fullchain.pem',
    key = '/etc/letsencrypt/live/resolver.example.com/privkey.pem',
})
```
Access at: https://10.250.0.1:8453/daf
Performance Considerations
- DAF evaluates rules in order (first match wins)
- Place most frequently matched rules first
- Use specific matches before regexes when possible
- Monitor rule performance with daf.count()
Combining with Policy Routing
DAF works alongside Knot Resolver's policy framework:
```
-- First: DAF blocks specific domains
daf.add('qname ~ ads%.example%.com deny')

-- Then: Policy handles routing for allowed queries
localDomains = policy.todnames({'internal.example.com.'})
policy.add(policy.suffix(policy.STUB({'10.250.0.2'}), localDomains))
policy.add(policy.all(policy.TLS_FORWARD({...})))
```
Migration from BIND RPZ
If you have BIND RPZ zones, convert them to DAF rules:
```
; BIND RPZ zone
blocked.example.com    CNAME .
*.ads.example.com      CNAME .
```
Becomes:
```
-- Knot Resolver DAF
daf.add('qname = blocked.example.com deny')
daf.add('qname ~ %.ads%.example%.com deny')
```
Logging Blocked Queries
Monitor what DAF is blocking:
```
-- Custom action with logging
local function log_and_deny(req, qry)
    log('[DAF] Blocked: ' .. kres.dname2str(qry:name()) ..
        ' from ' .. req.qsource.addr)
    return policy.DENY
end

-- Apply to specific rule
daf.add('qname ~ ads%.example%.com', log_and_deny)
```
Best Practices
- Test rules before deployment - Use daf.count() to verify matches
- Document rule purposes - Add comments explaining why domains are blocked
- Regular blocklist updates - Automate updates from threat feeds
- Monitor false positives - Check logs for legitimate services blocked
- Layer with upstream filtering - Use DAF for local policies, upstream for global threats
The DNS Application Firewall gives you fine-grained control over DNS traffic without maintaining complex RPZ zone files or running multiple resolver instances. It's particularly powerful for protecting IoT networks, enforcing corporate policies, and implementing parental controls.