4. Installation for Docker¶
4.1. Installing QKDLite solution via Docker¶
In this section, we will guide you through setting up a pair of virtual QKDLite nodes that can replicate keys from the main node to the remote node. You will need to obtain the installer package from pQCee to install the QKDLite solution on your Virtual Machine (VM) or bare-metal machine.
We will use the following example settings on two VMs to guide you through the installation process.
QKDLite Main Node
Operating System: x64 Ubuntu Server 22.04 LTS (Jammy Jellyfish)
Node Name: QKDLITE_SAE_A
Username: pqcee
SAE Name: SAE_A
Intranet IP: 192.168.1.10
Public Internet IP: 1.2.3.4
QKDLite Remote Node
Operating System: x64 Ubuntu Server 22.04 LTS (Jammy Jellyfish)
Node Name: QKDLITE_SAE_B
Username: pqcee
SAE Name: SAE_B
Intranet IP: 192.168.1.20
Public Internet IP: 4.3.2.1
4.1.1. Set Up the Virtual Machines¶
Create two VM instances in your cloud infrastructure.
Set up one VM for the QKDLite Main Node.
Set up a second VM for the QKDLite Remote Node.
Ensure that you can SSH into the pqcee user account on each VM via its respective public Internet IP address from your administration machine.
ssh -i ~/.ssh/<private_key.pem> pqcee@1.2.3.4
ssh -i ~/.ssh/<private_key.pem> pqcee@4.3.2.1
Update each VM to the latest patches and set the appropriate clock time zone.
sudo apt update
sudo apt upgrade -y
sudo timedatectl set-timezone Asia/Singapore
Note
You may need to reboot the operating system at the end of this step.
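To decide whether a reboot is actually needed, you can check Ubuntu's standard reboot marker. A minimal sketch (the marker file is a stock Ubuntu mechanism, unrelated to QKDLite itself):

```shell
# Ubuntu creates this marker file when an installed update requires a reboot
if [ -f /var/run/reboot-required ]; then
    echo "Reboot required before continuing"
else
    echo "No reboot required"
fi
```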
Copy the QKDLite installer package to the home directory of each VM. Please replace <private_key.pem> below with your SSH private key filename.
scp -i ~/.ssh/<private_key.pem> pQCee_QKDLite_3.2.0_installer_Ubuntu_2204_LTS_x64_docker.tar.gz pqcee@1.2.3.4:~
scp -i ~/.ssh/<private_key.pem> pQCee_QKDLite_3.2.0_installer_Ubuntu_2204_LTS_x64_docker.tar.gz pqcee@4.3.2.1:~
Extract the QKDLite installer package into a directory. The created directory will have the same name as the .tar.gz filename.
tar -xzvf pQCee_QKDLite_3.2.0_installer_Ubuntu_2204_LTS_x64_docker.tar.gz
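Before extracting, you may wish to verify the archive's integrity. This is a hypothetical sketch: the companion .sha256 file is an assumption, as pQCee may publish checksums in a different form (or not at all); adapt it to whatever they provide.

```shell
# Hypothetical: verify the archive against a vendor-supplied .sha256 file
ARCHIVE=pQCee_QKDLite_3.2.0_installer_Ubuntu_2204_LTS_x64_docker.tar.gz
if [ -f "${ARCHIVE}.sha256" ]; then
    sha256sum -c "${ARCHIVE}.sha256"
else
    echo "No checksum file found; skipping verification"
fi
```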
4.1.2. Install QKDLite in the VMs¶
Install the QKDLite solution by executing the installer package.
In each VM, execute the installer package and follow the instructions provided by the installer.
cd ~/pQCee_QKDLite_3.2.0_installer_Ubuntu_2204_LTS_x64
./install.sh
The installer will request the following information to set up Docker Swarm.
Docker Swarm Unlock Key. In this example, we will provide it from a file.
How would you like to provide the swarm unlock key?
1) From a file (default: swarm_unlock_key)
2) From a command (e.g., gpg, vault, pass)
3) disable auto unlock (manual unlock required)
Select option [1-3]: 1
Caution
The key (SWMKEY-1-…) will be output to the console. If you intend to keep the swarm locked, take note of the key. See Docker Swarm Unlocking for more information.
[WARN] ==========================================
[WARN] IMPORTANT: Docker Swarm Unlock Key
[WARN] ==========================================
To unlock a swarm manager after it restarts, run the `docker swarm unlock`
command and provide the following key:

    SWMKEY-1-***************************************/*/*

Remember to store this key in a password manager, since without it you
will not be able to restart the manager.
[WARN] ==========================================
[WARN] SAVE THIS KEY IN A SECURE LOCATION!
[WARN] You will need it to unlock the swarm after Docker restarts
[WARN] ==========================================
[WARN] Enter y to continue...

The installer will then request the following information to set up the QKDLite node.
External public IP address. If you assigned an external public IP address to the VM and the IP address is not detected by the installer, please type in the public IP address. Otherwise, press Enter to skip.
Enter QKDLITE_KME_IP: 1.2.3.4
SAE Name of main node and remote node. The Security Application Entity (SAE) name is required for the proper labelling of quantum keys to be provided via the ETSI server. We can use “SAE_A” for QKDLITE_SAE_A and “SAE_B” for QKDLITE_SAE_B. If you are on QKDLITE_SAE_A, press Enter to accept the defaults of “SAE_A” for the main node and “SAE_B” for the remote node.
Please enter SAE name for this node: [SAE_A]
Please enter SAE name for remote node: [SAE_B]
SSH details of remote node. The username and IP address of the remote node are required for the local node to connect to the remote node for key replication. Press Enter to use the same username as the container user, and enter the IP address of the remote node.
Please enter SSH username for remote node: [qkdliteuser]
Please enter IP address for remote node (e.g., 192.168.1.10): 192.168.1.20
Note
The username for the sshd component in the Docker version should always be qkdliteuser. When in doubt, if the other node is also using the Docker version, go with qkdliteuser.
HSM SO Pin. The Security Officer (SO) Pin is used to perform administration actions on the Local Key Storage. The installer installs the software-based PKCS #11 Local Key Storage out of the box, and the SO Pin allows you to create and delete HSM slots in this Local Key Storage. Enter your SO Pin twice to confirm. Alternatively, press Enter to have it auto-generated.
Please enter an SO Pin for the HSM:
Please enter the SO Pin again:
Note
Due to the sensitive nature of the secrets, characters typed for SO Pin, User Pin, and Transport Key Data will not be shown on screen.
HSM User Pin. The HSM User Pin is used to add, delete, and view keys stored inside the HSM slot. Enter your User Pin twice to confirm. Alternatively, press Enter to have it auto-generated.
Please enter a User Pin for the HSM:
Please enter the User Pin again:
Transport Key Data. The Transport Key Data is your secret used to create a Transport Key in the HSM slot. The Transport Key Data must be the same on both the main node and the remote node, otherwise key replication from the main node to the remote node will fail. Alternatively, press Enter to auto-generate it on the main node, then enter that generated value on the remote node.
Please enter Transport Key data:
Please enter the Transport Key data again:
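If you prefer to choose the Transport Key Data yourself rather than auto-generating it, one way to produce a strong random value is with openssl, which ships with Ubuntu Server. A sketch only; the 24-byte length here is an assumption, so check pQCee's documentation for accepted lengths:

```shell
# Generate 24 random bytes as 48 hex characters; enter the same value
# at the Transport Key data prompt on BOTH the main and remote nodes
TRANSPORT_KEY_DATA=$(openssl rand -hex 24)
echo "$TRANSPORT_KEY_DATA"
```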
Configuration Summary. If the Pins and/or the Transport Key Data are auto generated, they will be shown after the configuration is done.
================================================
Configuration done!
Please take note of the pins, it will not be shown again!
================================================
User Pin: 12345678
SO Pin: 12345678
Transport Key Data: 1234567890123456789012345678901234567890123
================================================
Once you have entered the above information, the installer will proceed to set up QKDLite in Docker Swarm and run the services. A successful setup will end with output like this:
[INFO] Setup complete!
[INFO] To view logs: docker service logs
[INFO] To remove stack: docker stack rm QKDLite
4.1.3. Connect QKDLite Main Node to Remote Node¶
Once QKDLite has been installed on both the main node and the remote node, we will need to grant the right permissions for the main node to connect to the remote node to perform key replication.
In the QKDLite main node, copy the text contents of the qkdlite-edcsa-key.pub SSH client public key displayed on the console.
cd ~/QKDLite/ssh
cat qkdlite-edcsa-key.pub # copy contents of file printed on console output
In the QKDLite remote node, append the copied SSH client public key to the end of the authorized_keys file.
cd ~/QKDLite/ssh
vim authorized_keys # append copied contents to this file
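If you prefer a scripted append over editing interactively, the sketch below guards against a missing trailing newline in authorized_keys so the key always lands on its own line. The PUBKEY value is a placeholder; substitute the contents you copied from the main node:

```shell
# Placeholder key text; replace with the real contents of qkdlite-edcsa-key.pub
PUBKEY="ecdsa-sha2-nistp256 AAAA...example qkdlite"
AUTH_KEYS=authorized_keys   # run from inside ~/QKDLite/ssh
touch "$AUTH_KEYS"
# If the file's last byte is not a newline, add one before appending the key
if [ -s "$AUTH_KEYS" ] && [ -n "$(tail -c 1 "$AUTH_KEYS")" ]; then
    echo >> "$AUTH_KEYS"
fi
printf '%s\n' "$PUBKEY" >> "$AUTH_KEYS"
```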
Caution
Ensure that you append the copied contents on a new line in the file. Otherwise, sshd will be unable to authenticate against your newly-added public key in the authorized_keys file.
Tip
You can use nano to edit files if you are not familiar with vim.
Back in the QKDLite main node, test that the main node can connect to the remote node.
docker exec -u qkdliteuser -it $(docker ps -q -f name=QKDLite_qkdlite-node) sh -l
./replicate_create_keys_qrng.py QKDLITE_SAE_B TestKey 1
./replicate_delete_keys.py QKDLITE_SAE_B TestKey
A successful connection will show a console output as follows:
pqcee@QKDLITE_SAE_A:~/QKDLite$ ./replicate_create_keys_qrng.py QKDLITE_SAE_B TestKey 1
Creating QRNG key(s) locally on QKDLITE_SAE_A...
TestKey,4545aca956bb1f9c808338d5fb4b7f56
Exporting QRNG key(s) locally on QKDLITE_SAE_A...
TestKey, 4545aca956bb1f9c808338d5fb4b7f56, 60CA71133D81E38DBA5872105422EB247B6EF4F9CB3F31153BA151188130C10BD9062C44122BB789
Importing key(s) on remote QKDLite node QKDLITE_SAE_B...
TestKey,4545aca956bb1f9c808338d5fb4b7f56
pqcee@QKDLITE_SAE_A:~/QKDLite$ ./replicate_delete_keys.py QKDLITE_SAE_B TestKey
Deleting keys on remote QKDLite node QKDLITE_SAE_B...
TestKey,4545aca956bb1f9c808338d5fb4b7f56
Deleting keys locally on QKDLITE_SAE_A...
TestKey,4545aca956bb1f9c808338d5fb4b7f56
Note
The above instructions establish a one-way connection from QKDLITE_SAE_A to QKDLITE_SAE_B. If you require another one-way connection in the reverse direction from QKDLITE_SAE_B to QKDLITE_SAE_A, you will need to repeat the above instructions with QKDLITE_SAE_B as the main node.
4.1.4. Docker Swarm Unlocking¶
For extra protection of the Docker Swarm cluster, it is recommended to lock the swarm with an unlock key, which must be supplied after every restart of the Docker daemon.
For automatic unlocking after every restart, the following options are provided to supply the key automatically:
environment variable - pulls the key from an environment variable of the user
file - uses the key stored in a file
command - runs a command that outputs only the key
If the node has been initialized with a lock and the unlock key has not been saved, run the command below:
docker swarm unlock-key
Note
This command is only available while the node is still unlocked. If Docker has been restarted, it will return an error.
To test unlocking, restart the Docker service and run the helper with your preferred option below:
sudo systemctl restart docker
cd ~/QKDLite
./qkdlite_helper.sh autounlock file ./swarm_unlock_key
(or)
./qkdlite_helper.sh autounlock command "echo hello world"
If successful, the output should look like this (using the env option as an example):
[INFO] Swarm is locked, attempting to unlock...
[INFO] Using unlock key from environment...
[INFO] Swarm unlocked successfully
In each VM, execute the script to configure automatic swarm unlocking as a service.
cd ~/pQCee_QKDLite_3.2.0_installer_Ubuntu_2204_LTS_x64
INPUT_TYPE=file SWARM_UNLOCK_KEY_INPUT="./swarm_unlock_key" ./configure_swarm_unlock.sh
(or)
INPUT_TYPE=command SWARM_UNLOCK_KEY_INPUT="echo hello world" ./configure_swarm_unlock.sh