When all the results of nightly jobs and other automated tasks are sent to your email address,
it quickly becomes a mess. Checking this, checking that, reading through the emails becomes tedious and tiring.
Healthchecks is a dashboard that shows what went well and what went wrong.
How does it work? After a task finishes, you simply add one line of code (a curl call, for example) that calls the “healthchecks” application with a unique identifier and the task's status (the command's return code, for example; in shell, 0 means OK).
This identifier is assigned to the task declared in the “healthchecks” application. Once the task is created there, the application gives you the unique URL to paste into your scripts (several languages are supported).
So in the morning, if everything went well, everything is green; otherwise it's a multicolored garland :)
A task registered in “healthchecks” must have been called within its time limit; if not, the dashboard turns red, and alerts can be sent to several types of media: mail, Matrix, …
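The pattern described above can be sketched as a small wrapper script. The host name and check UUID below are placeholders; use the unique URL shown for your check in the Healthchecks dashboard.

```shell
#!/bin/sh
# Run the task, then report its exit status to Healthchecks.
# hc.example.org and the UUID are placeholders for your own instance.
nightly_task() { true; }   # stand-in for the real nightly job
nightly_task
rc=$?
# Appending the exit status to the ping URL marks the check OK (0) or failed.
ping_url="https://hc.example.org/ping/your-check-uuid/${rc}"
echo "$ping_url"
# On a real system:
# curl -fsS -m 10 --retry 3 "$ping_url" > /dev/null
```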
Install “healthchecks” in the Kubernetes cluster
Following the guidelines of “Pēteris Caune - Healthchecks Team.”
Prerequisites
- An operational Kubernetes cluster
- A PostgreSQL service (for data persistence)
- Knowing how to expose a Kubernetes network service (HAProxy, Traefik, …)
PostgreSQL database
Create a database for the “healthchecks” application. This database must be accessible from all the “Workers” of the Kubernetes cluster (a pg_hba.conf entry is required).
Example:
- Database name: hc
- Postgresql user: hc
- Postgresql user password: mypassword
Connected to postgresql server:
su - postgres
psql
CREATE USER hc WITH PASSWORD 'mypassword';
CREATE DATABASE hc WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
GRANT ALL PRIVILEGES ON DATABASE hc to hc;
\q
exit
Modify the pg_hba.conf file so that the new “healthchecks” database is accessible from all workers in the Kubernetes cluster.
Example: my Kubernetes cluster has two “Workers”, whose respective IP addresses are:
- 192.168.1.20
- 192.168.1.21
Add to the pg_hba.conf file:
host hc hc 192.168.1.20/32 md5
host hc hc 192.168.1.21/32 md5
then run: systemctl reload postgresql
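With more than a couple of workers, those pg_hba.conf lines can be generated rather than typed by hand. A small sketch using the two example worker IPs above:

```shell
#!/bin/sh
# Generate one pg_hba.conf entry per Kubernetes worker IP
# (same database/user/method as the example entries above).
workers="192.168.1.20 192.168.1.21"
entries=""
for ip in $workers; do
  entries="${entries}host hc hc ${ip}/32 md5
"
done
printf '%s' "$entries"
```

Append the printed lines to pg_hba.conf, then reload PostgreSQL as shown above.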
The account information (user/password, IP address/port of the postgresql server) will be needed to perform the “healthchecks” setup.
Installation of “healthchecks”
You are now used to working with Kubernetes “manifests”, which describe the operations to perform in the cluster. We are going to create the deployment manifest for the “healthchecks” application.
The deployment manifest covers:
- Creation of the “Namespace”
- Creation of the “Secret” for the environment variables
- Creation of a “ConfigMap” for the “uwsgi.ini” configuration file
- Creation of an optional “ConfigMap”, needed only if you cannot act on the load balancer configuration; it solves the Django CSRF problem encountered when logging in to the application (see the “TLS Termination” chapter of the official documentation)
- Creation of a “LimitRange” to adapt to your architecture
- Creation of the “Deployment” (read-only container, user/group 999, the container's user ID)
- Creation of the “Service”
Content of the “manifest.yml” file:
apiVersion: "v1"
kind: "Namespace"
metadata:
  name: "healthchecks"
  labels:
    name: "healthchecks"
---
apiVersion: v1
kind: "Secret"
metadata:
  name: "healthchecks-secret-envvars"
  namespace: "healthchecks"
type: "Opaque"
stringData:
  # Needed if impossible to set loadbalancer : https://healthchecks.io/docs/self_hosted_docker/ - see TLS Termination
  CSRF_TRUSTED_ORIGINS: "https://yourdomain"
  SECRET_KEY: "xxxxxxxxxxxxxxxxxxxxx"
  DEBUG: "False"
  SITE_ROOT: "https://yourdomain"
  SITE_NAME: "whatyouwant"
  ALLOWED_HOSTS: "yourdomain"
  DB: "postgres"
  DB_TARGET_SESSION_ATTRS: "read-write"
  DB_USER: "hc"
  DB_HOST: "yourpostgresqlserver hostname or ip address"
  DB_NAME: "hc"
  DB_PASSWORD: "postgres hc user password"
  # Do not allow "self registration", you could have surprises:
  # a self-service healthcheck?
  REGISTRATION_OPEN: "False"
  DEFAULT_FROM_EMAIL: "healthchecks@yourdomain"
  EMAIL_HOST: "SMTP host"
  EMAIL_HOST_USER: "credential login smtp"
  EMAIL_HOST_PASSWORD: "credential password smtp"
  EMAIL_PORT: "Smtp port 465|587"
  EMAIL_USE_TLS: "True"
---
apiVersion: "v1"
kind: "ConfigMap"
metadata:
  name: "uwsgi-ini-configmap-healthchecks"
  namespace: "healthchecks"
data:
  uwsgi.ini: |+
    [uwsgi]
    master
    die-on-term
    http-socket = :8000
    harakiri = 10
    post-buffering = 4096
    # the original version indicates processes = 4
    # Performance ok on raspberry PI4 with 1 and limitrange max cpu 1000m
    processes = 1
    enable-threads
    threads = 1
    chdir = /opt/healthchecks
    module = hc.wsgi:application
    thunder-lock
    disable-write-exception
    hook-pre-app = exec:./manage.py migrate
    attach-daemon = ./manage.py sendalerts
    attach-daemon = ./manage.py sendreports --loop
---
# Needed if impossible to set loadbalancer : https://healthchecks.io/docs/self_hosted_docker/ - see TLS Termination
apiVersion: "v1"
kind: "ConfigMap"
metadata:
  name: "local-settings-py-configmap-healthchecks"
  namespace: "healthchecks"
data:
  local_settings.py: |+
    import os
    CSRF_TRUSTED_ORIGINS = os.getenv("CSRF_TRUSTED_ORIGINS", "").split(",")
---
apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "limitrange-healthchecks"
  namespace: "healthchecks"
spec:
  limits:
    - default:
        cpu: "1000m"
        memory: "300Mi"
      defaultRequest:
        cpu: "150m"
        memory: "200Mi"
      max:
        cpu: "1000m"
        memory: "300Mi"
      min:
        cpu: "150m"
        memory: "200Mi"
      type: Container
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "healthchecks"
  namespace: "healthchecks"
spec:
  revisionHistoryLimit: 1
  strategy:
    type: "Recreate"
  selector:
    matchLabels:
      app: "healthchecks"
  replicas: 1
  template:
    metadata:
      labels:
        app: "healthchecks"
    spec:
      securityContext:
        runAsUser: 999
        runAsGroup: 999
      containers:
        - name: "healthchecks"
          env:
            - name: "CSRF_TRUSTED_ORIGINS"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "CSRF_TRUSTED_ORIGINS"
                  optional: false
            - name: "SECRET_KEY"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "SECRET_KEY"
                  optional: false
            - name: "DEBUG"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "DEBUG"
                  optional: false
            - name: "SITE_ROOT"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "SITE_ROOT"
                  optional: false
            - name: "SITE_NAME"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "SITE_NAME"
                  optional: false
            - name: "ALLOWED_HOSTS"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "ALLOWED_HOSTS"
                  optional: false
            - name: "DB"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "DB"
                  optional: false
            - name: "DB_TARGET_SESSION_ATTRS"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "DB_TARGET_SESSION_ATTRS"
                  optional: false
            - name: "DB_USER"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "DB_USER"
                  optional: false
            - name: "DB_HOST"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "DB_HOST"
                  optional: false
            - name: "DB_NAME"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "DB_NAME"
                  optional: false
            - name: "DB_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "DB_PASSWORD"
                  optional: false
            - name: "REGISTRATION_OPEN"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "REGISTRATION_OPEN"
                  optional: false
            - name: "DEFAULT_FROM_EMAIL"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "DEFAULT_FROM_EMAIL"
                  optional: false
            - name: "EMAIL_HOST"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "EMAIL_HOST"
                  optional: false
            - name: "EMAIL_HOST_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "EMAIL_HOST_PASSWORD"
                  optional: false
            - name: "EMAIL_HOST_USER"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "EMAIL_HOST_USER"
                  optional: false
            - name: "EMAIL_PORT"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "EMAIL_PORT"
                  optional: false
            - name: "EMAIL_USE_TLS"
              valueFrom:
                secretKeyRef:
                  name: "healthchecks-secret-envvars"
                  key: "EMAIL_USE_TLS"
                  optional: false
          image: "healthchecks/healthchecks:v2.1"
          # imagePullPolicy : Always IfNotPresent None
          imagePullPolicy: "Always"
          securityContext:
            readOnlyRootFilesystem: true
          ports:
            - containerPort: 8000
              protocol: "TCP"
              name: "port-8000"
          volumeMounts:
            - mountPath: "/opt/healthchecks/docker/uwsgi.ini"
              name: "configmapvol-uwsgi-ini"
              subPath: "uwsgi.ini"
            # Needed if impossible to set loadbalancer : https://healthchecks.io/docs/self_hosted_docker/ - see TLS Termination
            - mountPath: "/opt/healthchecks/hc/local_settings.py"
              name: "configmapvol-local-settings-py"
              subPath: "local_settings.py"
      volumes:
        - name: "configmapvol-uwsgi-ini"
          configMap:
            name: "uwsgi-ini-configmap-healthchecks"
        # Needed if impossible to set loadbalancer : https://healthchecks.io/docs/self_hosted_docker/ - see TLS Termination
        - name: "configmapvol-local-settings-py"
          configMap:
            name: "local-settings-py-configmap-healthchecks"
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "healthchecks-app-port-0-8000"
  namespace: "healthchecks"
spec:
  type: "NodePort"
  ports:
    - port: 8000
      protocol: "TCP"
      targetPort: 8000
      name: "port-healthchecks-8000"
  selector:
    app: "healthchecks"
---
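One detail before applying: the SECRET_KEY placeholder in the Secret must be replaced with a long random string. A possible way to generate one, assuming Python 3 is available on your workstation:

```shell
#!/bin/sh
# Generate a random string suitable for the SECRET_KEY value in the
# Secret manifest (python3 assumed to be installed).
key=$(python3 -c "import secrets; print(secrets.token_urlsafe(50))")
echo "$key"
```

Paste the printed value into the manifest before running kubectl apply.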
To install, run the following from the Kubernetes master:
kubectl apply -f manifest.yml
Connecting to the “healthchecks” web UI
The application is started and the network service is in place; open your browser and enter the service URL, e.g. https://monurlhealthchecks. Since this is the first connection, you will not be able to log in yet.
You must first create a “superuser”.
To do this, start a shell in the “healthchecks” container on your Kubernetes cluster, then run:
./manage.py createsuperuser
# You will need to enter your email address, a password and its confirmation
# To cancel: CTRL+C
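One way to open that shell, assuming kubectl access to the cluster and the namespace and Deployment names used in the manifest above (a sketch, adapt to your setup):

```shell
#!/bin/sh
# Compose the kubectl command that runs createsuperuser inside a pod of
# the "healthchecks" Deployment created by the manifest above.
ns="healthchecks"
cmd="kubectl -n ${ns} exec -it deploy/healthchecks -- ./manage.py createsuperuser"
echo "$cmd"
# Run the printed command against a live cluster; kubectl is assumed
# to be configured for it.
```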
Return to your web browser and log in to the application with this account and its password.
Alerts
If you have not yet browsed the site, take a look at the Notifications section; for each case you want to cover, you will have to act on the container configuration (not covered here).
Conclusion
This tool is great and makes the system administrator's life much easier…