12.27. DD 27: Sandboxing all the Taler services

Note

This design document is currently a draft, it does not reflect any implementation decisions yet.

12.27.1. Summary

This document presents a method of deploying all the Taler services via one Docker container.

12.27.2. Motivation

It is very difficult to build GNU Taler from scratch. It is even more difficult to install, configure and launch it correctly.

The purpose of the sandbox is to have a demonstration system that can be both built and launched, ideally with a single command.

12.27.3. Requirements

  • No external services should be required, the only dependencies should be:

    • podman/docker

    • optionally: configuration files to further customize the setup

  • All services that are used should be installed from package repositories (e.g. Debian repositories or PyPI) and not built from scratch

  • There should be some “admin page” for the whole sandbox that:

    • Shows an overview of all deployed services, a link to their documentation and the endpoints they expose

    • Shows very simple statistics (e.g. number of transactions / withdrawals)

    • Allows generating and downloading the auditor report

  • Developers should be able to launch the sandbox on their own machine (see the sketch after this list)

    • Possibly using nightly repos instead of the official stable repos

  • We should be able to deploy it on $NAME.sandbox.taler.net
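
For illustration, launching the sandbox on a developer machine could then look roughly like this (the image name, port mapping and internal port are placeholders, not decided yet):

    # build the sandbox image locally; packages are installed at build time
    podman build -t taler-sandbox .

    # launch it with the built-in defaults
    podman run --rm -p 8080:80 taler-sandbox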

12.27.4. Design

The container is based on Debian Sid and installs all the services from their Debian packages. During the build process, it creates all the ‘static’ configuration: the .conf files, the database setup and the keying (key setup).
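
A minimal sketch of what this build-time step could look like, assuming the configuration templates are shipped inside the image and using the dbinit tools from the Debian packages (all paths are illustrative, and the PostgreSQL server must already be running during the image build):

    #!/bin/sh
    # build-time setup sketch: generate static configuration and set up databases
    set -eu

    # render the static .conf files (template location is a placeholder)
    mkdir -p /etc/taler
    cp /sandbox/templates/taler.conf /etc/taler/taler.conf

    # create the database schemas for the exchange and the merchant backend
    taler-exchange-dbinit -c /etc/taler/taler.conf
    taler-merchant-dbinit -c /etc/taler/taler.conf

    # key setup (‘keying’) would happen here as well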

Subsequently, at launch time, the system creates all the remaining RESTful resources, such as the merchant instances and the euFin accounts, both at the Sandbox and at the Nexus.
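
A rough sketch of this launch-time step is shown below, where $baseUrl is the sandbox base URL described next; the exact management paths and request bodies are defined by the merchant backend and euFin APIs, and the file names under /sandboxconfig are placeholders:

    #!/bin/sh
    # launch-time setup sketch: create the remaining RESTful resources
    set -eu

    # wait until the merchant backend answers on its config endpoint
    until curl -sf "$baseUrl/merchant/config" >/dev/null; do sleep 1; done

    # create a merchant instance via the merchant backend management API
    curl -sf -X POST "$baseUrl/merchant/management/instances" \
         -H "Content-Type: application/json" \
         -d @/sandboxconfig/merchant-instance.json

    # the euFin accounts at Sandbox and Nexus would be created analogously
    # via their respective REST APIs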

The sandbox will serve one HTTP base URL and make any service reachable at $baseUrl/$service. For example, the exchange base URL will be “$baseUrl/exchange”.
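
For example, with the exchange and the merchant backend deployed this way, one could query them as follows (the /merchant prefix is illustrative):

    # fetch the exchange key material
    curl "$baseUrl/exchange/keys"

    # fetch the merchant backend configuration
    curl "$baseUrl/merchant/config"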

The sandbox allows configuring:

  • which host it binds to, typically localhost+port.

  • which host is being reverse proxied to the sandbox. This helps to generate valid URIs of services.

All the other values will be hard-coded in the preparation.
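
To make this concrete, these two values could be expressed in the meta configuration roughly as follows (shell-style key=value; the key names are placeholders, see the open questions below):

    # host the sandbox binds to inside the container
    BIND_ADDRESS=localhost:8080
    # host that is reverse proxied to the sandbox, used to generate valid service URIs
    EXTERNAL_HOST=sb1.sandbox.taler.net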

The database is launched in the same container alongside the other services.

12.27.5. Open questions

  • How to collect the static configuration values?

    • => Via a configuration file that you pass to the container via a mounted directory (=> -v $MYCONFIG:/sandboxconfig)

    • If we don’t pass any config, the container should have sane defaults

    • This is effectively a “meta configuration”, because it will be used to generate the actual configuration files and do RESTful configuration at launch time.
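
    For instance, passing such a meta configuration could look like this (using the mount from the item above; omitting the -v flag falls back to the built-in defaults):

        podman run --rm -v "$MYCONFIG:/sandboxconfig" -p 8080:80 taler-sandbox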

  • How to persist, at build time, the information needed later at launch time to create the RESTful resources?

    • => The configuration should be done at launch-time of the container.

  • Should we hard-code passwords in this iteration too? With generated passwords, (1) it won’t be possible to manually log in to services, and (2) it won’t be possible to write the exchange password for Nexus into the configuration. Clearly, hard-coded passwords are a problem when the sandbox is served to the outside.

  • How is data persisted? (i.e. where do we store stuff)

    • By allowing a data directory on the host to be mounted into the container (this stores the DB files, config files, key files, etc.; see the sketch after this list)

    • … even for data like the postgresql database

    • future/optional: we might allow connection to an external postgresql database as well
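
    A possible invocation with persistence, where the /sandboxdata path and the $MYDATA variable are placeholders:

        # persist database files, configuration and key material on the host
        podman run --rm \
            -v "$MYCONFIG:/sandboxconfig" \
            -v "$MYDATA:/sandboxdata" \
            -p 8080:80 \
            taler-sandbox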

  • How are services supervised?

    • SystemD? gnunet-arm? supervisord? something else?

      • SystemD does not work well inside containers

    • alternative: one container per service, use (docker/podman)-compose (see the sketch after this list)

      • Either one docker file per service, or one base container that can be launched as different services via command line arg

      • Advantage: It’s easy to see the whole architecture from the compose yaml file

      • Advantage: It would be easy to later deploy this on kubernetes etc.

      • list of containers:

        • DB container (postgres)

        • Exchange container (contains all exchange services, for now) - Split this up further?

        • Merchant container
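
    The “one container per service” alternative could be sketched with a podman pod as follows (image names are placeholders; a compose file would express the same structure declaratively):

        # one pod groups the per-service containers and exposes the HTTP port
        podman pod create --name taler-sandbox -p 8080:80

        # DB container (password chosen only for the sandbox)
        podman run -d --pod taler-sandbox --name db \
            -e POSTGRES_PASSWORD=sandbox docker.io/library/postgres:15

        # exchange and merchant containers
        podman run -d --pod taler-sandbox --name exchange taler-sandbox-exchange
        podman run -d --pod taler-sandbox --name merchant taler-sandbox-merchant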

  • Do we have multi-tenancy for the sandbox? (I.e. do we allow multiple currencies/exchanges/merchants/auditors per sandbox)

    • Might be simpler if we disallow this

  • How do we handle TLS

    • Do we always do HTTPS in the sandbox container?

    • We need to think about external and internal requests to the sandbox

  • How do we handle (external vs internal) URLs

    • If we use http://localhost:$PORT for everything, we can’t expose the services externally

    • Example 1: Sandbox should run on sb1.sandbox.taler.net.

      • What will be the base URL for the exchange in the merchant config?

      • If it’s https://sb1.sandbox.taler.net/exchange, we need some /etc/hosts entry inside the container (see the sketch after this list)

      • Once you want to expose the sandbox externally, you need a proper TLS cert (e.g. from Let’s Encrypt)

      • Inside the container, you can get away with self-signed certificates

      • Other solution: Just require the external nginx (e.g. at gv) to reverse proxy sb1.sandbox.taler.net back to the container. This means that all communication between services inside the sandbox container goes through gv

        • Not great, but probably fine for first iteration

        • Disadvantage: To test the container in the non-localhost mode, you need the external proxy running
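
    The /etc/hosts approach mentioned above could be as simple as the following line run inside the container (assuming the container-internal reverse proxy listens on 127.0.0.1):

        # make the external name resolve to the container-internal reverse proxy
        echo "127.0.0.1 sb1.sandbox.taler.net" >> /etc/hosts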

  • Where do we take packages from?

    • By default, from the stable taler-systems.com repos and PyPI

    • Alternatively, via the nightly gv debian repo

    • Since we install packages at container build time, this setting (stable vs nightly) results in different container base images
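
    For instance, the repository choice could be passed as a build argument; the argument name is a placeholder and the Dockerfile would select the corresponding APT sources based on it:

        podman build --build-arg TALER_PACKAGE_SOURCE=nightly -t taler-sandbox:nightly .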