Kubernetes - Development Tools - SonarQube (RaspberryPI4/arm64)

(**) Translated with www.DeepL.com/Translator

What is Sonarqube?

Source : Wikipedia

SonarQube (formerly Sonar) is an open-source platform developed by SonarSource for continuous inspection of code quality. It performs automatic reviews with static analysis of code to detect bugs and code smells in 17 programming languages. SonarQube offers reports on duplicated code, coding standards, unit tests, code coverage, code complexity, comments, bugs, and security recommendations.

SonarQube can record metrics history and provides evolution graphs. SonarQube provides fully automated analysis and integration with Maven, Ant, Gradle, MSBuild and continuous integration tools (Atlassian Bamboo, Jenkins, Hudson, etc.).

Install “Sonarqube” in the Kubernetes/Mytinydc cluster


  • Operational Kubernetes cluster - Raspberry PI4/arm64/64-bit/8 GB RAM
  • Gitea repository (git repo)
  • CI/CD Drone chain
  • Postgresql service (data persistence)
  • You are familiar with the concepts of Kubernetes manifests, secrets, and volumes.
  • You know how to expose a Kubernetes network service (NodePort, HAProxy, Traefik, …)

Application requirements

This preparation phase allows us to discover the information needed to build the Docker image and the Kubernetes “Manifest”.

  • Arm64 docker image in the registry of the Datacenter

  • the container runs as the sq/sq user (uid 2000 / gid 2000 - arbitrary choice). Elasticsearch, embedded in the Sonarqube application, cannot be run as “root”, and in any case, for security reasons, running containers as “root” is not recommended

  • volumes:

    • data
    • conf
    • logs
    • temp
    • extensions

    These volumes should be owned by the user “sq” (in my case uid/gid : 2000:2000)

  • Access to the postgresql database (postgresql server in the Datacenter)

  • Access secrets to the postgresql database
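
The volume requirement above can be sketched as a small shell script. The base path is hypothetical (on my side the volumes live on the GlusterFs server); adapt it to your storage layout:

```shell
# Sketch: create the five Sonarqube volumes owned by "sq" (uid/gid 2000).
# The base path is an example, not the real GlusterFs path.
base="${SONARQUBE_VOLS:-/tmp/sonarqube-vols}"
for v in data conf logs temp extensions; do
  mkdir -p "$base/$v"
  # chown requires root; outside the storage server this may legitimately fail
  chown 2000:2000 "$base/$v" 2>/dev/null || true
done
ls "$base"
```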

Postgresql database

Start by creating a database for the “Sonarqube” application. This database will have to be accessible from all the “Workers” of the kubernetes cluster (pg_hba.conf setting required).


  • Database name: sonarqube
  • Postgresql user: sonarqube
  • Postgresql user password: mypassword

Connected to postgresql server:

su - postgres
psql
CREATE DATABASE sonarqube;
CREATE USER sonarqube WITH PASSWORD 'mypassword';
GRANT ALL PRIVILEGES ON DATABASE sonarqube TO sonarqube;

Modify the pg_hba.conf file so that the new “sonarqube” database is accessible from all workers in the Kubernetes cluster.

For example, my kubernetes cluster has two “Workers”, whose respective IP addresses are:


Add to the pg_hba.conf file :

host    sonarqube    sonarqube    [worker1_ip]/32    md5
host    sonarqube    sonarqube    [worker2_ip]/32    md5

then run: systemctl reload postgresql.
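
As a sketch, the two pg_hba.conf entries can be generated from the worker addresses (the IPs below are examples; use your workers' real addresses):

```shell
# Sketch: generate one pg_hba.conf entry per Kubernetes worker.
# The IP addresses are examples only.
workers="192.168.1.31 192.168.1.32"
entries=""
for ip in $workers; do
  entries+="host    sonarqube    sonarqube    $ip/32    md5
"
done
printf '%s' "$entries"
```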

The account information (user/password, IP address/port of the postgresql server) will be needed for the “Sonarqube” setup.

Creating the Docker image

Sonarqube does not provide an image for “arm64” platforms, so I created a specific image with the help of this source

PS: Applications are integrated into my Datacenter with the CI/CD tool “Drone”. For technical reasons (an SSL bug in the “docker” plugin for Drone), I do not use the “wget” or “curl” commands in the Dockerfile. Beforehand, I download and place the sources needed to run Sonarqube in my application's repository.


To build and test this image, I have a multi-architecture docker environment (see documentation) on my development VM. This makes it possible to prepare an arm64 image in an x86_64 environment.


file="sonar.properties"
if [ ! -f "/sonarqube/conf/$file" ];then
	cp "/sonarqube/confinit/$file" /sonarqube/conf/.
	# Database host/database name
	echo "Setting Postgresql host/database name"
	sed -i "s;^#sonar.jdbc.url=jdbc:postgresql.*;sonar.jdbc.url=$SONARQUBE_POSTGRES_SERVER;" "/sonarqube/conf/$file"
	# Database user
	echo "Setting Postgresql user"
	sed -i "s;^#sonar.jdbc.username.*;sonar.jdbc.username=$SONARQUBE_POSTGRES_USER;" "/sonarqube/conf/$file"
	# Database password
	echo "Setting Postgresql password"
	sed -i "s;^#sonar.jdbc.password.*;sonar.jdbc.password=$SONARQUBE_POSTGRES_PASSWORD;" "/sonarqube/conf/$file"
fi
# Start sonarqube and keep the container in the foreground
/sonarqube/bin/linux-arm64/sonar.sh start && tail -f /dev/null

These sources will be placed in the “resources” directory of the project.
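
To illustrate what entrypoint.sh does, here is a self-contained sketch of its sed substitutions, applied to a throwaway sample of sonar.properties (the commented defaults shipped by Sonarqube; the connection values are examples, not real credentials):

```shell
# Sketch: the entrypoint's sed substitutions, demonstrated on a temporary file.
f=$(mktemp)
cat > "$f" <<'EOF'
#sonar.jdbc.username=
#sonar.jdbc.password=
#sonar.jdbc.url=jdbc:postgresql://localhost/sonarqube
EOF
# Example values (in the container these come from the Kubernetes secret)
SONARQUBE_POSTGRES_SERVER="jdbc:postgresql://10.0.0.5/sonarqube"
SONARQUBE_POSTGRES_USER="sonarqube"
SONARQUBE_POSTGRES_PASSWORD="mypassword"
sed -i "s;^#sonar.jdbc.url=jdbc:postgresql.*;sonar.jdbc.url=$SONARQUBE_POSTGRES_SERVER;" "$f"
sed -i "s;^#sonar.jdbc.username.*;sonar.jdbc.username=$SONARQUBE_POSTGRES_USER;" "$f"
sed -i "s;^#sonar.jdbc.password.*;sonar.jdbc.password=$SONARQUBE_POSTGRES_PASSWORD;" "$f"
grep '^sonar.jdbc' "$f"
```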


Put very simply: starting from debian:bullseye, I don’t use wget or curl, for the reasons given above. At the time of writing this documentation, I used these versions:

      | wrapper-linux-arm-64-xxxxxx.tar.gz (Java Service Wrapper 3.5.50)
      | sonarqube-xxxxxx.zip (Sonarqube)
      | entrypoint.sh (container initialization shell script)


FROM debian:bullseye AS build
ARG DEBIAN_FRONTEND=noninteractive
ARG WRAPPERVERSION
ARG SONARVERSION
RUN apt-get update \
  && apt-get -y install unzip
COPY resources/wrapper-linux-arm-64-${WRAPPERVERSION}.tar.gz .
RUN tar -xvzf wrapper-linux-arm-64-${WRAPPERVERSION}.tar.gz
COPY resources/sonarqube-${SONARVERSION}.zip .
RUN unzip sonarqube-${SONARVERSION}.zip
RUN cp -r sonarqube-${SONARVERSION}/bin/linux-x86-64 sonarqube-${SONARVERSION}/bin/linux-arm64 \
    && cd sonarqube-${SONARVERSION}/bin/linux-arm64 \
    && cp -r /wrapper-linux-arm-64-${WRAPPERVERSION}/bin/wrapper ./wrapper \
    && cp -r /wrapper-linux-arm-64-${WRAPPERVERSION}/lib/libwrapper.so ./lib/. \
    && cp -r /wrapper-linux-arm-64-${WRAPPERVERSION}/lib/wrapper.jar /sonarqube-${SONARVERSION}/lib/jsw/wrapper-3.2.3.jar

FROM debian:bullseye
ARG DEBIAN_FRONTEND=noninteractive
ARG SONARVERSION
RUN apt-get update \
 && apt-get -y install openjdk-11-jdk procps
COPY --from=build /sonarqube-${SONARVERSION} /sonarqube
RUN groupadd -g 2000 sq \
&& useradd -m -u 2000 -g sq sq
RUN chown -R 2000:2000 /sonarqube/ \
    && cp -r /sonarqube/conf /sonarqube/confinit
COPY resources/entrypoint.sh /
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

Build image in the development environment

uid=$(id -u)
if [ "$uid" != "0" ];then echo "You are not root";exit 1;fi

#opt="--no-cache --force-rm"
docker buildx build --progress plain --platform linux/arm64 $opt -t "$img:$tag" .
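
Since the Dockerfile consumes WRAPPERVERSION and SONARVERSION, they have to be supplied as build arguments. A sketch (the image name, tag, and the Sonarqube version number are examples; the wrapper version is the 3.5.50 mentioned above):

```shell
# Sketch: pass the Dockerfile's ARG values at build time.
# img, tag and SONARVERSION below are examples; adapt them to your setup.
img="sonarqube"
tag="arm64-test"
opt="--build-arg SONARVERSION=9.2.1.49989 --build-arg WRAPPERVERSION=3.5.50"
cmd="docker buildx build --progress plain --platform linux/arm64 $opt -t $img:$tag ."
echo "$cmd"   # echoed rather than executed, so the sketch is side-effect free
```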

Testing the image in the development environment

The shell script to test the image in a docker “multiarch” environment, logged in as root on your development machine, with docker installed:

# Install the qemu packages
apt-get install qemu binfmt-support qemu-user-static
# This step will execute the registering scripts
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# Port
optrun="-p 9000:9000"
# 5 volumes: extensions data conf temp logs
volumes="extensions data conf temp logs"
volumesdocker=""
for v in $volumes;do
  tmpvol="/volumesdocker/sonarqube/vols/$v"
  if [ ! -d "$tmpvol" ];then
    echo "creating directory $tmpvol"
    mkdir -p "$tmpvol"
  fi
  volumesdocker+="-v $tmpvol:/sonarqube/$v "
done
#Test with postgres, set and uncomment, postgres server must be reachable from the development station
#evt="-e SONARQUBE_POSTGRES_SERVER=jdbc:postgresql://[postgreshostname]/sonarqube -e SONARQUBE_POSTGRES_USER=sonarqube -e SONARQUBE_POSTGRES_PASSWORD=[postgresuserpassword]"

## sysctl settings required by elasticsearch (vm.max_map_count must be at least 262144)
sysctl -w vm.max_map_count=262144
sysctl -w fs.file-max=131072

docker run --rm $optrun $volumesdocker $evt -d $img:$tag

Allow about 10 minutes before you can access the Sonarqube web interface (on first use, Sonarqube creates its environment in the volumes). The URL to reach it is: http://localhost:9000. You can follow the startup progress by consulting the logs on the development machine: “/volumesdocker/sonarqube/vols”

Kubernetes cluster internal registry

Now push this new image to your Kubernetes cluster’s internal registry…


# Remove the test volumes and clean up docker
rm -rf /volumesdocker/sonarqube
docker system prune

Kubernetes objects needed for deployment

To prepare for the proper execution of the integration, create the volumes on the Datacenter’s GlusterFs server, and the database on the Datacenter’s Postgresql server.

Then the Kubernetes manifest, including the necessary objects, in this order:

  • Creation of the “namespace”
  • Creation of the “secret” access to the internal registry of the Datacenter
  • Creation of the “secret” access to the Postgresql database
  • Creation of the “PersistentVolume”
  • Creation of the “PersistentVolumeClaim”
  • Creation of the “LimitRange”
  • Creation of the “Deployment”
  • Creation of the “Service”

To install, run, from the kubernetes master: kubectl apply -f manifest.yml

I will not present the final manifest here: it is tedious and depends on your internal organization (use of secrets or not, running containers as “root” or not, volume management, how network services are exposed, …).

For my part, I have developed a tool (bash) allowing the complete management of a Kubernetes application project in the Mytinydc Datacenter.

Example of description for “Sonarqube” (yaml):

# Caution: if the namespace changes, run make drone
appname: "sonarqube"
# image name is the complete image without tag
image: "sonarqube"
# Always - IfNotPresent (default) - None
imagepullpolicy: "Always"
# full image tag ( latest is generally not recommended )
tag: ""
# using private registry : true | false - default is true, so the image will be prefixed with the private docker registry url
useprivatedockerregistry: true
# environment variables passed to the container (section key "env" assumed)
env:
  - "SONARQUBE_POSTGRES_SERVER=jdbc:postgresql://[host]/sonarqube"
# volumes declared for the application (section key "volumes" assumed)
volumes:
  - name: "data"
    size: "1Gi"
    access: "RWO"
    mount: "/sonarqube/data/"
  - name: "conf"
    size: "1Mi"
    access: "RWO"
    mount: "/sonarqube/conf/"
  - name: "logs"
    size: "500Mi"
    access: "RWO"
    mount: "/sonarqube/logs/"
  - name: "temp"
    size: "500Mi"
    access: "RWO"
    mount: "/sonarqube/temp/"
  - name: "extensions"
    size: "100Mi"
    access: "RWO"
    mount: "/sonarqube/extensions/"

# Ports to expose External#internal#PROTO#servicename#url#HAPROXYBALANCE (for kubernetes) - section key "ports" assumed
ports:
  - containerport: 9000
    type: "TCP"
    #HAPROXY internal mechanism to auto-configure the main load balancer
    haproxyexpose: "https://[mydomain]"
    internetexposed: true
    finaltls: false
    maxconnserver: 10
  #readonlyimage: true - impossible, the application writes pid files in its startup directory!
  runasuser: 2000
  runasgroup: 2000
#Limitrange for namespace (section keys "limits"/"requests" assumed)
limits:
  cpu: "2"
  memory: "2500Mi"
requests:
  cpu: "1"
  memory: "1000Mi"

From this description and through a “Makefile”, the tool generates :

  • the complete “manifest” file of the project, executed by the Kubernetes cluster administrator:

    • “Namespace”,
    • “Secret”,
    • Glusterfs : “Service”, “Endpoints”
    • “PersistentVolume”,
    • “PersistentVolumeClaim”,
    • “LimitRange”,
    • “Secret”,
    • “ConfigMap”,
    • “Deployment”,
    • “Service”
  • only the “deployment” manifest, used in the CI/CD process for updating applications in the Datacenter.

  • the volume creation shell and its quotas on the Gluster server,

  • the Docker tags (build and deployment processes),


The steps are :

  • Retrieve the git repository of the sonarqube application project
  • Build the docker image with the Dockerfile described above
  • Push the image to the Datacenter’s registry
  • Update the deployment in Kubernetes.

This process is executed by “Drone”, at each “push” received by “Gitea” on the “sonarqube” application repository.


The call for “energy sobriety” has been heard: running Sonarqube with lower energy consumption is possible.

And in terms of performance:

Reminder: using a Kubernetes Raspberry PI4/8GB cluster, 1 master / 2 workers

  • 2 CPUs max and 2.5 GB RAM allocated
  • Project with 5.1K lines of Javascript code
  • Use of “sonar-scanner” on the developer’s workstation (analysis and upload of results to the Sonarqube server)

Sonarqube’s analysis times (“Analysis report processing”) for the submitted results were:

  • First run: 1m33s
  • Second run: 56 seconds
  • The internal Kubernetes monitoring (metrics) indicates this consumption: RAM: 1.8G, less than 1 CPU on average.

I don’t see any difference in the web GUI response time compared to a classic installation (X86_64).

