
TrustBuilder.Connect installation guide

This section documents the procedure to install TrustBuilder.Connect (TB.Connect) according to the option “Self-managed Docker” as described in the Technical Overview.

The option “Self-managed Docker” is available on request; contact sales.

🚩 Currently, TB.Connect 11 is equivalent to IDHub 10.4.7. This document and the commands still refer to the latter. Over time, the IDHub name and version numbers will disappear.

Appliance installation

The TB.Connect image is delivered as an .ova template and is deployed in the customer's environment. Location of the .ova template:

https://repository.trustbuilder.io/trustbuilder/production/appliances/TB-Connect-20220516.ova

The minimum requirements for the machines are:

  • CPU cores: 4

  • RAM: 8 GB (minimum), or 32 GB if Argon2 password hashing is used

  • HDD/SSD: 150 GB

Complete the wizard to configure the initial settings (passwords, network, hostname and DNS).

Update appliances

Update the packages on the appliances using the following commands as the root user. You will receive the credentials from your designated point of contact at TrustBuilder.

zypper update

reboot

1. Single machine setup

Docker login proxy

Execute the following commands as the trustbuilder user.

Make sure docker is installed:

sudo zypper install docker

Make sure the correct docker-compose version is installed:

sudo curl -L https://repository.trustbuilder.io/.docker/docker-compose -o /usr/local/bin/docker-compose

sudo chmod +x /usr/local/bin/docker-compose

sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
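To confirm the binaries are in place, you can optionally print the installed versions (the exact version numbers depend on your environment):

docker --version

docker-compose --version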

Make sure the user is in the correct group:

sudo usermod -aG docker trustbuilder

Execute a daemon reload and restart docker:

sudo systemctl daemon-reload

sudo systemctl restart docker

Close all sessions for your user and authenticate on the appliance again.
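After logging back in, you can optionally verify that the trustbuilder user is in the docker group and can reach the Docker daemon without sudo:

id -nG

docker info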

Log in to the TB.Connect docker environment:

docker login http://docker.trustbuilder.io/

Prepare Docker installation

Download the trustbuilder.zip package containing installation files:

curl -vk https://repository.trustbuilder.io/.docker/trustbuilder.zip > /tmp/trustbuilder.zip

Extract the downloaded package in the temp folder:

unzip /tmp/trustbuilder.zip -d /tmp

Move the content of the downloaded zip to the docker folder:

mv /tmp/trustbuildercorp-docker-<random-nr>/* /opt/trustbuilder/docker/

Portal modifications

We will externalize the logging (out of the container). Edit the file /opt/trustbuilder/docker/docker-compose.yml and add the logging volume to the volumes section at the top.

CODE
volumes:
  mysql:
    driver: local
  gateway:
    driver: local
  portal:
    driver: local
  tba:
    driver: local
  logging:
    driver: local

The logging volume externalizes the logging folder from the container, so the logs persist when the container is re-created. We also specify the OneSpan property to enable the Digipass functionality (used for VPN). There is also an extra .jar for the connection with the SQL database; this .jar goes in the /opt/trustbuilder/docker/tomcat-libs folder.

Also set the version of all components to 10.4.7 in docker-compose.yml.

CODE
  portal:
    image: docker.trustbuilder.io/orchestrator:10.4.7
    restart: unless-stopped
    depends_on:
      - mysql
      - redis
    environment:
      TB_ORCH_CATALINA_OPTS: ''
      TB_ORCH_DB_USER: idhub
      TB_ORCH_DB_URI: mysql://mysql:3306/IDHUB
      TB_ORCH_JAVA_OPTS: -Xms2048m -Xmx6144m -XX:PermSize=2048m -XX:MaxPermSize=6144m
      TB_ORCH_LOG_HEADERS: X-Request-Id,X-Real-Ip,X-Forwarded-For
      TB_ORCH_REDIS_URI: redis://redis:6379
      TB_ORCH_TIMEZONE: Europe/Brussels
      TB_ORCH_VASCO: 'true'
    volumes:
      - portal:/opt/trustbuilder/IDHUB_CUSTOM_HOME
      - ./logback.xml:/usr/local/tomcat/conf/logback.xml
      - logging:/usr/local/tomcat/logs
      - ./tomcat-libs:/usr/local/tomcat/lib/ext

The logging can be found at the location: /var/lib/docker/volumes/docker_logging/_data

You can inspect the volume using the docker command:

docker volume inspect docker_portal

Migrate Logback

Copy the content of the old appliance file /opt/trustbuilder/tomcat-core/webapps/idhub/WEB-INF/classes/logback.xml to the new appliance /opt/trustbuilder/docker/logback.xml

Change the paths in the logback.xml to point to /usr/local/tomcat/logs/
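As a sketch, the path change can also be scripted with sed; the old path used here (/opt/trustbuilder/tomcat-core/logs) is an assumption, so check the actual paths in your logback.xml first:

sed -i 's|/opt/trustbuilder/tomcat-core/logs|/usr/local/tomcat/logs|g' /opt/trustbuilder/docker/logback.xml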

Add the following to disable some loggers:

XML
<!--TURN OFF LOGGINGS-->
    <logger name="org.springframework.jdbc.core.JdbcTemplate">
        <level value="off"/>
    </logger>
    <logger name="org.springframework.jdbc.core.StatementCreatorUtils">
        <level value="off"/>
    </logger>
    <logger name="org.springframework.transaction.support.TransactionSynchronizationManager">
        <level value="off"/>
    </logger>
    <logger name="org.springframework.jdbc.datasource.DataSourceUtils">
        <level value="off"/>
    </logger>
    <logger name="org.redisson.tomcat.RedissonSessionManager.findSession">
        <level value="off"/>
    </logger>
    <logger name="com.ning.http.client.providers.netty.channel.pool.DefaultChannelPool">
        <level value="off"/>
    </logger>

Migrate SQL jar

Create an additional folder on the new appliance:

mkdir /opt/trustbuilder/docker/tomcat-libs

Copy the file from the old appliance to the new appliance.

scp /opt/trustbuilder/tomcat-core/lib/ext/mssql-jdbc-8.4.1.jre8.jar trustbuilder@<ip>:/opt/trustbuilder/docker/tomcat-libs/

Database connections

Create 2 new files: /opt/trustbuilder/docker/idhub.d/sql.xml and /opt/trustbuilder/docker/idhub.d/db.xml

Copy the jdbc/sql resource from /opt/trustbuilder/tomcat-core/conf/Catalina/localhost/idhub.xml to /opt/trustbuilder/docker/idhub.d/sql.xml

Copy the jdbc/db resource from /opt/trustbuilder/tomcat-core/conf/Catalina/localhost/idhub.xml to /opt/trustbuilder/docker/idhub.d/db.xml
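For illustration only, such a file typically holds the Tomcat <Resource> element copied from idhub.xml. The sketch below writes a hypothetical jdbc/sql resource; every attribute value is a placeholder and must be replaced with the real resource definition from the old idhub.xml:

BASH
# Hypothetical example: replace all placeholder values with the real resource from the old idhub.xml
cat > /opt/trustbuilder/docker/idhub.d/sql.xml <<'EOF'
<Resource name="jdbc/sql"
          auth="Container"
          type="javax.sql.DataSource"
          driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
          url="jdbc:sqlserver://<sql-host>:1433;databaseName=<database>"
          username="<user>"
          password="<password>"/>
EOF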

Workflow configuration

First run the basic install (see the Basic Install section below).

Copy the workflow config from the old appliance to the new appliance:

scp -r /opt/trustbuilder/IDHUB_CUSTOM_HOME/* trustbuilder@<ip>:/tmp/IDHUB_CUSTOM_HOME/

Move it to the portal volume:

mv /tmp/IDHUB_CUSTOM_HOME/* /var/lib/docker/volumes/docker_portal/_data/

Basic Install

Go into the /opt/trustbuilder/docker folder and execute the initial installation as the trustbuilder user:

./init.sh --vhost https://HOSTNAME

After installation you can check if everything is running by using:

docker ps

Migrate database

Copy the encryption key from the file /etc/idhub.password on the old appliance.

Put it in the file /opt/trustbuilder/docker/docker-compose.override.yml.

Also set the correct password for the idhub admin.

CODE
---
version: "3.8"
services:
  mysql:
    environment:
      MYSQL_ROOT_PASSWORD: xxx
      MYSQL_PASSWORD: xxx
  portal:
    volumes:
      - ./idhub.d:/idhub.d
    environment:
      TB_ORCH_IDHUB_ADMIN_PASSWORD: <admin password old environment>
      TB_ORCH_DB_PASSWORD: xxx
      TB_ORCH_IDHUB_ENC_PWD: <value from idhub.password>
      TB_ORCH_STDOUT_LEVEL: INFO
      TB_ORCH_VHOST: https://D-WKG-WEB016
  gateway:
    environment:
      TB_GW_BOOTSTRAP_IDENTIFIER: gateway
      TB_GW_BOOTSTRAP_TOKEN: T001.WuG8

Dump the database on the old appliance (note: use the no-lock-tables option for production and acceptance environments):

mysqldump IDHUB > /tmp/dump.sql
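A possible variant for production/acceptance, assuming a stock mysqldump client, skips table locks and uses a consistent snapshot instead:

mysqldump --single-transaction --skip-lock-tables IDHUB > /tmp/dump.sql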

Copy the database to the new appliance:

scp /tmp/dump.sql trustbuilder@<ip>:/tmp

Move it to the volume that can be reached from the mysql container:

mv /tmp/dump.sql /var/lib/docker/volumes/docker_mysql/_data

Open a mysql session in the mysql container (note: the password can be found in docker-compose.override.yml):

docker-compose exec mysql mysql -uroot -pxxx

Drop the current IDHUB database:

DROP DATABASE IDHUB;

Create a new empty database:

CREATE DATABASE IDHUB;

Go into the newly created DB:

USE IDHUB;

Restore the dump from the volume (note: this will take some time to complete):

source /var/lib/mysql/dump.sql

Recreate the portal container after the dump restore; it will execute the database upgrade commands:

docker-compose up -d --force-recreate portal

Link gateway

If the gateway is not linked (/idhub/admin/#/configserver), you can execute the following commands to link it in the Administrator portal.

Note: first link the default scheme to the application catalog.

Start by removing the current docker gateway and its volume:

docker-compose stop gateway

docker rm -f docker_gateway_1

docker volume rm -f docker_gateway

Use the Administrator user's password in place of the **** placeholder.

docker-compose exec portal init_gateway.sh "default" "****" "portal"

Check the generated token to link the gateway by executing the following command:

docker-compose exec portal cat /bootstrap/default.token

Copy the value and change it in the docker-compose.override.yml file:

CODE
gateway:
    environment:
      TB_GW_BOOTSTRAP_IDENTIFIER: default
      TB_GW_BOOTSTRAP_TOKEN: T001.5aRMvqH2mGsGbOdF…=

Run the following to recreate the containers:

docker-compose up -d

2. Cluster installation

Time settings

Set the timezone and the NTP server on all nodes:

sudo timedatectl set-timezone Europe/Brussels

sudo vi /etc/systemd/timesyncd.conf

NTP=ntp.dika.be

sudo systemctl restart systemd-timesyncd.service
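Optionally verify the timezone and the time synchronisation service afterwards:

timedatectl

systemctl status systemd-timesyncd.service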

Update all nodes

Update all packages on the nodes and reboot them.

sudo zypper update

reboot

Install ansible

Ansible is used to deploy the TB.Connect cluster. Install Ansible on all the nodes in the cluster:

sudo zypper install ansible

Create keypair

Create a keypair that can be used by ansible to complete the docker swarm installation.

Create the keypair on the node where you are going to run the playbook.

ssh-keygen -t rsa

Choose a password and accept the default location.

Copy the public key to the other nodes in the environment:

NOTE: make sure the current node can also ssh to itself

ssh-copy-id trustbuilder@<ip1>

ssh-copy-id trustbuilder@<ip2>

ssh-copy-id trustbuilder@<ip3>

Verify that you can connect to the nodes using the keypair:

ssh trustbuilder@<ip>

Download ansible install files

Download the latest .zip with Ansible installation files from the TB.Connect repository. The .zip contains the installation templates and the folder ‘default’. The files in the ‘default’ folder can be modified to match the environment you are installing.

The hosts file defines all the hosts and the roles they need.

The group_vars/all.yml file can be modified if you are migrating from a previous version.

Extract the .zip to /opt/trustbuilder/ansible
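As a sketch, assuming the downloaded archive was saved as /tmp/ansible-install.zip (the real filename will differ):

unzip /tmp/ansible-install.zip -d /opt/trustbuilder/ansible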

Hosts file

Example hosts file (where NN.NNN.NNN.NNN and HHHHHH denote the IP address and hostname of the actual server). All servers have all roles, except for tba, which is installed on only one node:

BASH
# hosts
# Group names that cannot be deleted from this file, but can be empty
# - swarm_leader
# - swarm_managers
# - swarm_workers
# - swarm_all
# - trustbuilder_sql
# - trustbuilder_redis
# - trustbuilder_orchestrator
# - trustbuilder_gateway
# - trustbuilder_tba
# - trustbuilder_core
# - trustbuilder_all
# - nodes_all
 
[swarm_leader]
# The original leader for the cluster setup. Once deployed, this can change automatically.
# You can leave the original leader here during redeploys, we'll not force the leadership
# of the cluster back to this host.
HHHHHH16 ansible_host=NN.NNN.NNN.NNN ansible_user=trustbuilder
 
[swarm_managers]
# This contains the manager nodes and should ALWAYS be an uneven number
# The maximum is 7
HHHHHH17 ansible_host=NN.NNN.NNN.NNN ansible_user=trustbuilder
HHHHHH18 ansible_host=NN.NNN.NNN.NNN ansible_user=trustbuilder
 
[swarm_workers]
# This contains all the worker nodes in the cluster. This section is optional
# There is no maximum or required rule on this section
 
# Trustbuilder Roles
[trustbuilder_sql]
# If empty you will need to define the SQL servers in the variable
HHHHHH16
HHHHHH17
HHHHHH18
 
[trustbuilder_redis]
# If empty you will need to define the Redis servers in the variable
HHHHHH16
HHHHHH17
HHHHHH18
 
[trustbuilder_orchestrator]
# The nodes that will run the orchestrator
HHHHHH16
HHHHHH17
HHHHHH18
 
[trustbuilder_gateway]
# The nodes that will run the gateway
HHHHHH16
HHHHHH17
HHHHHH18
 
[trustbuilder_tba]
# the maximum is 1
HHHHHH16
 
#############################################################################
## ALL LISTS BELOW ARE COMBINATION GROUPS, NEVER DELETE THE GROUP NAMES    ##
#############################################################################

# Add every new group to this general swarm group
[swarm_all:children]
swarm_leader
swarm_managers
swarm_workers
 
[trustbuilder_core:children]
trustbuilder_orchestrator
trustbuilder_tba
trustbuilder_gateway
 
[trustbuilder_all:children]
trustbuilder_sql
trustbuilder_redis
trustbuilder_core
 
[nodes_all:children]
swarm_all
trustbuilder_sql
trustbuilder_redis
trustbuilder_core

all.yml

Example of the all.yml file:

See the remarks inline.

BASH
timezone: Europe/Brussels
dryrun: false #dry run will create the yml files but will not execute any docker command in the install role
 
docker:
  repository:
    trustbuilder:
      baseurl: http://docker.trustbuilder.io 
#configure the username and password for docker repo access
      username: <username>
      password: <password>
 
trustbuilder:
  environment: production
  passwords:
#If you are migrating from a previous installation, this password should match the Administrator account password
    admin: "<password>"
#If you are migrating from a previous installation, this password should match the /etc/idhub.password for database encryption
    encryption: "<password>"
#Configure the default vhost name
  vhost: "https://HOST"
  cors: "*"
  log_level: TRACE
  radius:
    enabled: true
    port: 1645
  vasco:
    enabled: true
#Point to the correct TB.Connect version
orchestrator:
  image: "{{ docker.repository.trustbuilder.baseurl }}/orchestrator:10.4.7"
  mounts:
    data: /opt/trustbuilder
    logging: /var/log/authentication
  resource:
    enabled: false
    reservation_cpu: 1
    reservation_memory: 1024M
  healthcheck:
    enabled: true
    interval: 10s
    timeout: 5s
    start_period: 60s
    retries: 3
  java_opts: -Xms2048m -Xmx6144m -Dlogback.configurationFile=/opt/trustbuilder/IDHUB_CUSTOM_HOME/logback.xml
 
tba:
  passwords:
    user: YOURUSERNAME
  image: "{{ docker.repository.trustbuilder.baseurl }}/tba:10.4.7"
  resource:
    enabled: false
    reservation_cpu: 1
    reservation_memory: 1024M
  healthcheck:
    enabled: true
    interval: 10s
    timeout: 5s
    start_period: 40s
    retries: 3
 
gateway:
  deploy:
    global: true
  image: "{{ docker.repository.trustbuilder.baseurl }}/gateway:10.4.7"
  name: gateway
  resource:
    enabled: false
    reservation_cpu: 1
    reservation_memory: 1024M
  healthcheck:
    enabled: true
    interval: 10s
    timeout: 5s
    start_period: 10s
    retries: 3
 
mysql:
  passwords:
    backup_user: --
    root: --
    user: --
    replication: --
  user: idhub
  database: IDHUB
  image: "{{ docker.repository.trustbuilder.baseurl }}/bitnami/mariadb-galera:10.10.2"
  mounts:
    data: /opt/mysql/data
  resource:
    enabled: false
    reservation_cpu: 1
    reservation_memory: 1024M
  healthcheck:
    enabled: true
    interval: 10s
    timeout: 5s
    start_period: 10s
    retries: 3
 
redis:
  password: --
  image: "{{ docker.repository.trustbuilder.baseurl }}/bitnami/redis:7.0.7"
  mounts:
    data: /opt/redis/data
  resource:
    enabled: false
    reservation_cpu: 0.2
    reservation_memory: 512M
  healthcheck:
    enabled: false
    interval: 10s
    timeout: 15s
    start_period: 30s
    retries: 3
  sentinel:
    image: "{{ docker.repository.trustbuilder.baseurl }}/bitnami/redis-sentinel:7.0.7"
    resource:
      enabled: false
      reservation_cpu: 0.1
      reservation_memory: 512M
    healthcheck:
      enabled: true
      interval: 10s
      timeout: 5s
      start_period: 10s
      retries: 3
  cluster:
    image: "{{ docker.repository.trustbuilder.baseurl }}/haproxy:2.7.1"
    resource:
      enabled: false
      reservation_cpu: 0.1
      reservation_memory: 512M
    healthcheck:
      enabled: true
      interval: 10s
      timeout: 5s
      start_period: 10s
      retries: 3
 
install:
  path: "/opt/trustbuilder/swarm/{{ trustbuilder.environment }}"
os:
  debian:
    baseurl: https://download.docker.com/linux/debian 
    gpg: https://download.docker.com/linux/debian/gpg
  ubuntu:
    baseurl: https://download.docker.com/linux/ubuntu 
    gpg: https://download.docker.com/linux/ubuntu/gpg
  centos:
    baseurl: https://download.docker.com/linux/centos/7/$basearch/stable
    gpg: https://download.docker.com/linux/centos/gpg
  sles:
    baseurl: https://download.docker.com/linux/sles/{{ ansible_distribution_major_version }}/x86_64/stable
    gpg: https://download.docker.com/linux/sles/gpg

Start the ansible installation

Execute the following command in the folder that contains the docker.yml file:

ansible-playbook docker.yml -vvvv -kK

The first password (the SSH connection password) can be empty.

The second one is the sudo password for the trustbuilder user to become root.

Check the output for any failed tasks.

NOTE (SKIP IF OUTPUT DOES NOT FREEZE):

If the output freezes for no apparent reason, for example during the “Gathering Facts” task, just type “yes”. Ansible does not show a prompt for this, but waits for the response.

NOTE (SKIP IF NO ERRORS OCCUR):

If you skipped straight to the cluster installation without performing the earlier docker installation steps (see the single machine setup), install docker on all nodes and add the user “trustbuilder” to the group “docker”. Here are the commands to copy/paste:

sudo zypper install docker 

sudo usermod -aG docker trustbuilder

sudo systemctl daemon-reload

sudo systemctl restart docker

NOTE (SKIP IF NO ERRORS OCCUR):

Should you get an error about a missing token variable during a task relating to “docker swarm”, execute the following commands on the current node:

sudo su 

docker swarm init

This sets the current node as a manager of a swarm, but you still need to join the other managers and workers as defined in your /opt/trustbuilder/ansible/default/hosts file (the workers section may be empty). Ask the now-manager node for the join token that other managers need:

docker swarm join-token manager

This outputs the command you need to run on the nodes that are supposed to be managers. Copy the output command, go to the other node, become root, and paste the command there.
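For reference, the output is a join command of roughly this shape (the token and address are placeholders; use the values printed by your own swarm):

docker swarm join --token SWMTKN-1-<token> <manager-ip>:2377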

Now, on your main node, ask for the worker join token with this command:

docker swarm join-token worker

Do the same as with the manager nodes, but on the worker nodes.

This should fix the error. Don't forget to switch back to the “trustbuilder” user on your main node and rerun the ansible-playbook command.

Migrate the database

Make a dump of the database on the old environment:

mysqldump IDHUB > /tmp/dump.sql

Transfer the dump.sql to one of the new nodes and put the file at the location /opt/mysql/data/dump.sql.
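A possible way to transfer and place the dump, mirroring the single-machine procedure (adjust the paths if the mysql data mount differs in your environment or is not writable for the trustbuilder user):

scp /tmp/dump.sql trustbuilder@<ip>:/tmp/dump.sql

sudo mv /tmp/dump.sql /opt/mysql/data/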

Connect to the database and drop the default IDHUB database:

docker exec -it <id> mysql -u idhub -p<pwd>

DROP DATABASE IDHUB;

CREATE DATABASE IDHUB;

USE IDHUB;

Restore the dump.sql file:

source /bitnami/mariadb/data/dump.sql;

Change the authentication scheme and authentication method of the application catalog to the defaults:

SELECT * FROM SP;

UPDATE SP SET ... ;

Force recreation of the orchestrator containers:

docker service update --force production_orchestrator

Custom libraries

Put the custom libraries in the path /opt/trustbuilder/tomcat-libs on all appliances (a copy sketch follows the list below):

aal2wrap.jar

mssql-jdbc-8.4.1.jre8.jar
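An example copy, assuming the jars are still on the old appliance under /opt/trustbuilder/tomcat-core/lib/ext (adjust the paths to your setup):

scp /opt/trustbuilder/tomcat-core/lib/ext/aal2wrap.jar trustbuilder@<ip>:/opt/trustbuilder/tomcat-libs/

scp /opt/trustbuilder/tomcat-core/lib/ext/mssql-jdbc-8.4.1.jre8.jar trustbuilder@<ip>:/opt/trustbuilder/tomcat-libs/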

External resources

Create the external resources in the folder /opt/trustbuilder/idhub.d (see the Database connections section above).

Change configuration for TB.Connect

Continue from “Start the ansible installation” to complete the settings for TB.Connect.

3. Resolving possible issues

Issue with docker network

If the TB.Connect .ova templates are installed on VMware and the docker network has issues (problems connecting to services on nodes), the following can be used to solve them:

Install ethtool:

sudo zypper install ethtool

Use ethtool to disable TX checksum offloading on the network cards:

sudo ethtool -K eth0 tx off

sudo ethtool -K docker0 tx off 
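You can optionally check whether the offload setting was applied (lowercase -k prints the current offload parameters):

sudo ethtool -k eth0 | grep -i tx-checksumming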

Issue with orchestrator restarting

(Make sure all other services are running except the orchestrator and the gateway.)

It is possible that, even if everything is correct up to this point, the orchestrator keeps failing its health check and restarts. In this case the health check may be misconfigured or deprecated and needs to be updated. To fix this, do the following:

In the ansible folder:

vi roles/install/templates/trustbuilder/orchestrator.yml.j2

Search for the term “healthcheck” and make sure it looks as follows:

CODE
healthcheck:
      test: ["CMD-SHELL", "curl --fail -s http://localhost:8080/idhub/healthcheck/ || exit 1"]
      interval: {{ orchestrator.healthcheck.interval }}
      timeout: {{ orchestrator.healthcheck.timeout }}
      start_period: {{ orchestrator.healthcheck.start_period }}
      retries: {{ orchestrator.healthcheck.retries }}

After this rerun the ansible-playbook command.

Issue with mysql-servers not starting in the cluster

This requires some patience and occasionally running the following command (replace # with the number of the mysql-server service):

docker service update --force production_mysql-server-#

It may not always work immediately, but eventually it will; the error that occurs within the container is one about WSREP.

Issue with node not connecting to the network even if you used ethtool

You will notice this when the following command does not list any containers:

docker ps

You will also notice on your leader node that none of the services are running on this node.

A possible fix is to go to the disconnected node and restart docker:

sudo systemctl restart docker

4. Useful docker commands

Single machine

Some useful Docker commands to manage the TB.Connect installation (execute as the trustbuilder user from the folder /opt/trustbuilder/docker):

Show docker processes:

docker-compose ps

Restart a container (portal, gateway, crl2db, redis, tba, mysql):

docker-compose restart <container-name>

Open shell to enter a container:

docker-compose exec <container-name> /bin/bash

Tail logfile of a container:

docker-compose logs -f <container-name>

Access database:

docker-compose exec mysql mysql <DB-NAME> -u<username> -p<password>

Cluster setup

Show the docker containers on one machine:

docker ps

Show the docker containers on one machine (including crashed containers):

docker ps -a

Show logs of a docker container:

docker logs <container-id>

Go into a container:

docker exec -it <container-id> /bin/bash

Open shell to database:

docker exec -it <container-id> mysql -u idhub -p<password>

Show services of the complete cluster:

docker service ls

Force recreation of a set of containers (for example orchestrator):

docker service update --force production_orchestrator
