K8S controller to update your domain’s DNS records on STRATO servers
For a few years, as I've slowly moved my home lab's workloads from virtual machines to Docker containers, and eventually to Kubernetes, I've been trying to keep my domain's DNS records in sync with the dynamic IP addresses assigned by my ISP, looking for an efficient solution to the problem.
For a long time I used DirectUpdate which, although it costs ~25 EUR and is really worth its money, comes with a downside: it only runs on Windows, and at some point keeping a whole Windows machine alive just for a simple DynDNS-updater client was a waste of resources and an overkill. So I started looking into other solutions like Cloudflare, DigitalOcean, No-IP DynDNS and others, but I still wasn't satisfied. I got very bored of jumping between different dashboards, providers, and panels from time to time to keep an overview of my domain's DNS records and make sure my reverse proxy and my Kubernetes ingress don't get into trouble. I decided I needed my own solution (why not?), and it had to meet three criteria:
- It shouldn't cost me a dime
- It should integrate with Kubernetes so I won’t need to jump from dashboard to dashboard
- It must be completely autonomous, self-healing and periodic
The obvious solution that met all those criteria was to go for a custom Kubernetes controller with a custom CRD, and what better tool to start with than Kubebuilder? Kubebuilder is a framework for building Kubernetes APIs using Custom Resource Definitions (CRDs). It does all the heavy lifting for us, generating the project structure and scaffolding the basic components needed to code, build and deploy our artifacts.
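If you want to follow along, scaffolding a comparable project takes just two commands. The group and domain below match the apiVersion we will use later (dyndns.contrib.strato.com/v1alpha1); the repository path is only a placeholder:

kubebuilder init --domain contrib.strato.com --repo github.com/example/strato-dyndns
kubebuilder create api --group dyndns --version v1alpha1 --kind Domain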
In short, the story is very simple and consists mainly of two parts: you extend the Kubernetes control plane by expressing your artifacts in the form of Custom Resource Definitions (CRDs), and you create a custom controller that, either periodically or by responding to changes on the CRs, tries to adjust the actual observed state of those CRs so it matches the desired one.
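In Kubebuilder terms, that adjustment logic lives in a single Reconcile method. Stripped down to its scaffolded shape, the reconciler we will be filling in throughout this article looks roughly like this:

// DomainReconciler reconciles a Domain object
type DomainReconciler struct {
	client.Client
	Scheme   *runtime.Scheme
	Recorder record.EventRecorder
}

// Reconcile is called whenever a Domain CR changes or a requeue fires; it
// compares the observed state with the desired one and acts on the difference.
func (r *DomainReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// fetch the CR, compare states, update DynDNS, requeue...
	return ctrl.Result{}, nil
}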
In our case, this translates into a CRD called Domain, which is practically a representation of the domain (or subdomain) whose DNS records you want to keep up to date on STRATO, and a custom controller that takes over the Sisyphean task of reconciling the CRs' states and propagating IP changes to the STRATO DynDNS endpoints.
Additionally, we will need a Secret, but its role is purely complementary: it only acts as a safekeeper for the credentials needed to issue requests to the STRATO DynDNS endpoints.
Why Strato in the first place? Simply because this is where I register all my domains.
Strato AG is a German Internet hosting service provider headquartered in Berlin. It is a subsidiary of United Internet AG which bought it from Deutsche Telekom AG in 2016. Strato primarily operates in Germany, the Netherlands, Spain, France, the UK and Sweden, serving over 2 million customers.
This article is not an introduction to building custom controllers with Kubebuilder. If you are new to the topic, consult the authoritative Kubebuilder book or take a look at this very nice article by Stephanie Lai.
With that out of the way, let's now dive into the code!
Domain
Domain has, mainly, two properties (what TypeMeta and ObjectMeta are, you can look up in the Kubebuilder book) which we've briefly discussed before: Spec, of type DomainSpec, is the desired state, and Status, of type DomainStatus, is the actual (observed) state of a Domain Custom Resource (CR) at any given time.
If you notice, the struct is decorated with a set of markers prefixed with +kubebuilder:printcolumn, which determine the columns that will be displayed when we query for an item, or a list of items, of this Kind with kubectl, for example:

kubectl get domains --all-namespaces

The value of each column can be derived either from the desired state (.spec.XXX) or from the observed state (.status.XXX):
// Domain is the Schema for the domains API
// +kubebuilder:printcolumn:name="Fqdn",type=string,JSONPath=`.spec.fqdn`
// +kubebuilder:printcolumn:name="IP Address",type=string,JSONPath=`.status.ipAddress`
// +kubebuilder:printcolumn:name="Mode",type=string,JSONPath=`.status.mode`
// +kubebuilder:printcolumn:name="Successful",type=boolean,JSONPath=`.status.lastResult`
// +kubebuilder:printcolumn:name="Last Run",type=string,JSONPath=`.status.lastLoop`
// +kubebuilder:printcolumn:name="Enabled",type=boolean,JSONPath=`.spec.enabled`
type Domain struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   DomainSpec   `json:"spec,omitempty"`
	Status DomainStatus `json:"status,omitempty"`
}
The desired state, DomainSpec, has five properties. Fqdn is the fully qualified name of the domain or subdomain you want to track. IpAddress is optional: if it is set, we essentially engage manual mode, and when it is empty, our controller will look up the current IP address assigned to us by our ISP (dynamic mode). Enabled doesn't need further explanation. IntervalInMinutes defines the interval between two consecutive reconciliation loops, and Password refers to the Secret resource that will hold the password for the STRATO DynDNS service.
Those properties can additionally be decorated with markers that implement or drive different behavioral aspects of the object. For example, we implement validation via regular expressions for Fqdn, to make sure it is a valid domain name, and for IpAddress, to make sure it is a valid IPv4 address. For IntervalInMinutes we want to make sure that it cannot be less than five minutes, and that, in its absence, a default value is assigned automatically at deployment.
// DomainSpec defines the desired state of Domain
type DomainSpec struct {
	// +kubebuilder:validation:Pattern=`^((([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)+([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]*[A-Za-z0-9]))$`
	Fqdn string `json:"fqdn"`

	// +optional
	// +kubebuilder:validation:Pattern=`^((25[0-5]|2[0-4]\d|[01]?\d?\d)\.?\b){4}$`
	IpAddress *string `json:"ipAddress,omitempty"`

	// +optional
	// +kubebuilder:default:=true
	// +kubebuilder:validation:Type=boolean
	Enabled bool `json:"enabled,omitempty"`

	// +optional
	// +kubebuilder:default=5
	// +kubebuilder:validation:Minimum=5
	IntervalInMinutes *int32 `json:"interval,omitempty"`

	Password *v1.SecretReference `json:"password"`
}
The observed state, DomainStatus, is very simple. Its values are calculated in each reconciliation loop, based either on the outcome of that loop (IpAddress, the IP that was last pushed to the STRATO record; LastReconciliationLoop, when the last update attempt occurred; and LastReconciliationResult, whether that attempt was successful or not) or on the current desired state processed in that loop (Enabled and Mode):
// DomainStatus defines the observed state of Domain
type DomainStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "make" to regenerate code after modifying this file
	Enabled                  bool         `json:"enabled,omitempty"`
	IpAddress                string       `json:"ipAddress,omitempty"`
	Mode                     string       `json:"mode,omitempty"`
	LastReconciliationLoop   *metav1.Time `json:"lastLoop,omitempty"`
	LastReconciliationResult *bool        `json:"lastResult,omitempty"`
}
Once we've finished coding those structs (you will find them under /api/v1alpha1/domain_types.go), we can let Kubebuilder update the rest of the project and install them as CRDs in our development cluster:
make manifests
make install
Generating the manifests will, among other things, create some sample YAML files under /config/samples, based on the structs we coded earlier:
apiVersion: dyndns.contrib.strato.com/v1alpha1
kind: Domain
metadata:
  name: www-example-de
spec:
  fqdn: "www.example.de"
  enabled: true
  interval: 5
  password:
    name: strato-dyndns-password
Change the values so that they point to one of your domains or subdomains.
No scaffold will be generated for the Secret manifest; it is not a CRD but a core Kubernetes resource, so we'll have to write it ourselves. The STRATO DynDNS endpoints require a username and a password, where username is always a domain or subdomain, and password is either the password you created when you activated DynDNS for this (sub)domain or the DynDNS master password of your STRATO customer account. You choose which one to use, but before putting it in the Secret's YAML we need to encode it in base64:
echo -n "password" | base64
Create an empty YAML file under /config/samples and declare, as name, the password.name you used in your Domain YAML and, as data.password, the base64-encoded value you just generated:
apiVersion: v1
kind: Secret
metadata:
  name: strato-dyndns-password
type: Opaque
data:
  password: cGFzc3dvcmQ=
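As a side note: if you'd rather skip the manual base64 step, Kubernetes also accepts plain-text values under stringData and encodes them for you on admission:

apiVersion: v1
kind: Secret
metadata:
  name: strato-dyndns-password
type: Opaque
stringData:
  password: password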
Deploy both YAMLs to your cluster:
kubectl apply -f config/samples
You can verify that everything worked if you get back www-example-de when you query for domains, and strato-dyndns-password when you query for secrets in your cluster:
kubectl get domains --all-namespaces
kubectl get secrets --all-namespaces
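Once the controller has completed a few reconciliation loops, the printcolumn markers we defined earlier pay off and the domains query renders our custom columns; the values below are purely illustrative:

NAMESPACE   NAME             FQDN             IP ADDRESS     MODE      SUCCESSFUL   LAST RUN               ENABLED
default     www-example-de   www.example.de   203.0.113.42   Dynamic   true         2024-01-01T12:00:00Z   true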
As mentioned earlier, it is beyond the scope of this article to explain how a custom controller works in general, so I'll stick to how this particular controller works. If this is a new subject for you, make sure you do your homework first.
First, we want to make sure that our controller has sufficient permissions to view or update the various resources. We want, of course, to have full control over Domains, but we additionally want to be able to get and watch Secrets, and to create or patch Kubernetes Events. We manage that with the +kubebuilder:rbac markers:
//+kubebuilder:rbac:groups=dyndns.contrib.strato.com,resources=domains,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=dyndns.contrib.strato.com,resources=domains/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=dyndns.contrib.strato.com,resources=domains/finalizers,verbs=update
//+kubebuilder:rbac:groups="",resources=events,verbs=create;patch
//+kubebuilder:rbac:groups="",resources=secrets,verbs=get;list;watch;
When you issue make manifests, among other things a bunch of YAML files will be generated under /config/rbac on the basis of those markers.
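For reference, the secrets marker above ends up in the generated ClusterRole (/config/rbac/role.yaml) as a rule roughly like this:

- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch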
The flow of our reconciliation loop is simple. Fetch the Domain first; if it cannot be found, terminate the loop permanently and don't requeue:
var domain dyndnsv1alpha1.Domain
if err := r.Get(ctx, req.NamespacedName, &domain); err != nil {
	if apierrors.IsNotFound(err) {
		logger.Error(err, "finding Domain failed")
		return ctrl.Result{}, nil
	}

	logger.Error(err, "fetching Domain failed")
	return ctrl.Result{}, err
}
Check the desired state (.Spec.Enabled); if the domain is not enabled, update the status of the CR in Kubernetes (.Status.Enabled) accordingly and exit the reconciliation loop permanently:
// update status and break reconciliation loop if is not enabled
if !domain.Spec.Enabled {
	domainCopy.Status.Enabled = domain.Spec.Enabled

	// update the status of the CR
	if err := r.Status().Update(ctx, &domainCopy); err != nil {
		logger.Error(err, "updating status failed")

		requeueAfterUpdateStatusFailure := time.Now().Add(time.Second * time.Duration(15))
		return ctrl.Result{RequeueAfter: time.Until(requeueAfterUpdateStatusFailure)}, err
	}

	return ctrl.Result{}, nil
}
Make sure a valid interval is in place, and decide whether the desired state directs us to proceed in manual or in dynamic mode:
// define interval between reconciliation loops
interval := defaultIntervalInMinutes
if domain.Spec.IntervalInMinutes != nil {
	interval = *domain.Spec.IntervalInMinutes
}

// change mode to manual in presence of an explicit ip address in specs
if domain.Spec.IpAddress != nil {
	mode = Manual
}
If the reconciliation loop starts before the configured interval has elapsed (perhaps due to an external change in the YAML files or an internal Kubernetes event), make sure to skip this loop and wait until the next scheduled execution. Otherwise we would constantly flood STRATO with requests, and we don't want that: we would hit the rate limiter of either Kubernetes or STRATO itself, and nobody wants their API access benched for a while due to misuse.
// is reconciliation loop started too soon because of an external event?
if domain.Status.LastReconciliationLoop != nil && mode == Dynamic {
	if time.Since(domain.Status.LastReconciliationLoop.Time) < (time.Minute*time.Duration(interval)) && wasSuccess {
		sinceLastRunDuration := time.Since(domain.Status.LastReconciliationLoop.Time)
		intervalDuration := time.Minute * time.Duration(interval)
		requeueAfter := intervalDuration - sinceLastRunDuration

		logger.Info("skipped turn", "sinceLastRun", sinceLastRunDuration, "requeueAfter", requeueAfter)
		return ctrl.Result{RequeueAfter: time.Until(time.Now().Add(requeueAfter))}, nil
	}
}
If the mode is Manual, our IP address is defined in the desired state (.Spec.IpAddress). Otherwise, we look up the external IP address that our ISP has assigned to our router:
currentIpAddress := domain.Status.IpAddress

var newIpAddress *string
switch mode {
case Dynamic:
	externalIpAddress, err := r.getExternalIpAddress()
	if err != nil {
		logger.Error(err, "retrieving external ip failed")
		r.Recorder.Eventf(instance, v1core.EventTypeWarning, "RetrieveExternalIpFailed", err.Error())
		success = false
	} else {
		newIpAddress = externalIpAddress
	}
case Manual:
	newIpAddress = domain.Spec.IpAddress
}
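The getExternalIpAddress helper is not listed here. A minimal sketch of such a lookup, assuming a public echo service like api.ipify.org and the standard net/http, io, net, strings and fmt packages (the actual implementation in the repo may differ), could look like this:

// getExternalIpAddress asks a public echo service (an assumption made for
// this sketch) for the router's current external IP address.
func (r *DomainReconciler) getExternalIpAddress() (*string, error) {
	resp, err := http.Get("https://api.ipify.org")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	ip := strings.TrimSpace(string(body))
	if net.ParseIP(ip) == nil {
		return nil, fmt.Errorf("invalid ip address returned: %q", ip)
	}

	return &ip, nil
}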
If the new desired IP address matches the observed state, do nothing (remember: play nice and don't abuse their endpoints for no reason). If not, fetch the Secret, retrieve the password, and propagate the desired changes to the STRATO DNS servers:
// proceed to update Strato DynDNS only if a valid IP address was found
if newIpAddress != nil {
	// if last reconciliation loop was successful and there is no ip change skip the loop
	if *newIpAddress == currentIpAddress && wasSuccess {
		logger.Info("updating dyndns skipped, ip is up-to-date", "ipAddress", currentIpAddress, "mode", mode.String())
		r.Recorder.Event(instance, v1core.EventTypeNormal, "DynDnsUpdateSkipped", "updating skipped, ip is up-to-date")
	} else {
		logger.Info("updating dyndns", "ipAddress", newIpAddress, "mode", mode.String())

		passwordRef := domain.Spec.Password
		objectKey := client.ObjectKey{
			Namespace: req.Namespace,
			Name:      passwordRef.Name,
		}

		var secret v1core.Secret
		if err := r.Get(ctx, objectKey, &secret); err != nil {
			if apierrors.IsNotFound(err) {
				logger.Error(err, "finding Secret failed")
				return ctrl.Result{}, nil
			}

			logger.Error(err, "fetching Secret failed")
			return ctrl.Result{}, err
		}

		password := string(secret.Data["password"])

		if err := r.updateDns(domain.Spec.Fqdn, domain.Spec.Fqdn, password, *newIpAddress); err != nil {
			logger.Error(err, "updating dyndns failed")
			r.Recorder.Eventf(instance, v1core.EventTypeWarning, "DynDnsUpdateFailed", err.Error())
			success = false
		} else {
			logger.Info("updating dyndns completed")
			r.Recorder.Eventf(instance, v1core.EventTypeNormal, "DynDnsUpdateCompleted", "updating dyndns completed")
			success = true
		}
	}
}
Updating STRATO DynDNS is fairly easy: all you need to do is issue a single GET request, and it looks like this:
https://%s:%s@dyndns.strato.com/nic/update?hostname=%s&myip=%s
The first two parameters are the username and the password respectively, hostname is your (sub)domain name, and myip is the new IP address you want the DNS records updated to.
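The updateDns method wraps exactly that call. A simplified sketch, assuming Go's standard net/http and net/url packages ("good" and "nochg" are the success answers of the common DynDNS2 protocol; the repo's actual implementation may differ):

// updateDns issues the DynDNS update request against STRATO, building the
// same URL as the template above; username and hostname both carry the
// (sub)domain, as seen in the Reconcile excerpt earlier.
func (r *DomainReconciler) updateDns(username string, hostname string, password string, ipAddress string) error {
	endpoint := url.URL{
		Scheme:   "https",
		User:     url.UserPassword(username, password),
		Host:     "dyndns.strato.com",
		Path:     "/nic/update",
		RawQuery: fmt.Sprintf("hostname=%s&myip=%s", hostname, ipAddress),
	}

	resp, err := http.Get(endpoint.String())
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}

	// "good" and "nochg" indicate success in the DynDNS2 protocol
	answer := strings.TrimSpace(string(body))
	if !strings.HasPrefix(answer, "good") && !strings.HasPrefix(answer, "nochg") {
		return fmt.Errorf("updating dyndns failed: %s", answer)
	}

	return nil
}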
Finally, we update the status of our CR and schedule the next reconciliation loop:
// update the status of the CR no matter what, but assign a new IP address in the status
// only when Strato DynDNS update was successful
if success {
	domainCopy.Status.IpAddress = *newIpAddress
}

domainCopy.Status.LastReconciliationLoop = &v1meta.Time{Time: time.Now()}
domainCopy.Status.LastReconciliationResult = &success
domainCopy.Status.Enabled = domain.Spec.Enabled
domainCopy.Status.Mode = mode.String()

// update the status of the CR
if err := r.Status().Update(ctx, &domainCopy); err != nil {
	logger.Error(err, "updating status failed")

	requeueAfterUpdateStatusFailure := time.Now().Add(time.Second * time.Duration(15))
	return ctrl.Result{RequeueAfter: time.Until(requeueAfterUpdateStatusFailure)}, err
}

// if Mode is Manual, and we updated DynDNS with success, then we don't requeue, and we will rely only on
// events that will be triggered externally from YAML updates of the CR
if mode == Manual && success {
	return ctrl.Result{}, nil
}

requeueAfter := time.Now().Add(time.Minute * time.Duration(interval))
logger.Info("requeue", "nextRun", fmt.Sprintf("%s", requeueAfter.Local().Format(time.RFC822)))
logger.V(10).Info("finished dyndns update")

return ctrl.Result{RequeueAfter: time.Until(requeueAfter)}, nil
Now we're ready to try out our controller (running it locally, without deploying it to the cluster):
make run
You can find the complete source code on GitHub along with instructions on how to build it as a container and deploy it to your cluster:
Give this controller a try, and feel free to fork the repo and extend it as you see fit, or leave your feedback in the comments below or on GitHub. Till next time…