Chapter 7. Deploying the Application

[Note]

Note that this chapter focuses only on DOC applications deployed in a single-machine Docker environment; it does not apply to bare-metal or cloud deployments.

It describes the architecture and the key points of the deployment, but does not cover networking, rights management, or system administration.

To perform a deployment, you need:

  • A machine compatible with Docker. For more details, refer to Chapter Meeting the Requirements.

  • Docker images of your application available on a (private) Docker registry.

  • Network access to the (private) Docker registry, the DOC registry, and all the external resources (such as LDAP or ERP).
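Before going further, you can quickly check these prerequisites from the target machine. The commands below are a minimal sketch: the registry URL is the example used later in this chapter, and the login step is only relevant for a private registry.

~ ~> docker info --format '{{.ServerVersion}}'                # the Docker daemon is reachable
~ ~> getent hosts docker-registry.internal.some-company.com   # DNS/network access to the registry
~ ~> docker login docker-registry.internal.some-company.com   # only needed for a private registry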

The following diagram describes the development and deployment workflow.

Figure 7.1. Understanding the Development and Deployment Workflow

We assume that the development phase is complete and that the DOC version has been finalized, with all the artifacts built and available.

The following architecture diagram describes a standard setup that can be adapted to your needs.

Figure 7.2. Understanding the Standard Docker Architecture

In this architecture diagram, we can see that the environment needs access to external resources such as the DOC Docker registry and your enterprise Docker registry. Your enterprise registry can be public or private: this has no impact on the deployment, but the target deployment machine needs network access to it.

Depending on your needs and your target architecture, you may want to put in place, for example:

  • Some external storage system to secure your application data.

  • A directory system to easily give access to the application for your users.

  • A data integration system that gives the application access to your business data.

We can also see that a deployed DOC application consists mainly of configuration and data files, with microservices deployed as running Docker containers.

The application also generates log files to facilitate its operation and monitoring.

At the end of the process, users access the application through a service gateway (often a reverse proxy). You will have several environments, depending on the topology of your deployments and your organization. We recommend having at least so-called integration, acceptance testing, and production environments.

The following sections cover:

  1. Copying the necessary files on the target machine;

  2. Configuring the environment variables;

  3. Configuring, if need be, the allowed origins for WebSocket notifications;

  4. Starting the Docker containers; and

  5. Backing up and restoring the application.

1. Copying the Files on the Target Machine

Procedure 7.1. To Copy the Files on the Target Machine
  1. Compress the deployment files gathered in the folder deployment/docker in your DOC application source folder.

    It is structured in three main folders: app, dbos, and infra:

    platform_src ~> tree -L 1 deployment/docker
    deployment/docker
    |- app     # A dedicated docker compose file for all DOC microservices.
    |- dbos    # A dedicated docker compose file for the Optimization server infrastructure.
    |- infra   # A dedicated docker compose file for general infrastructure services.

    These are the files that need to be copied to the target machine.

    You can zip them with the following command, for example:

    platform_src ~> (cd deployment/docker && zip -r deployment_docker.zip .)
    platform_src ~> # We now have a deployment_docker.zip file 
    platform_src ~> # in the deployment/docker folder
  2. Move the deployment files to the target machine. You may use the tool of your choice, WinSCP for example.

    For example, assume that the target machine answers to the hostname environment_host, and that the Unix user hosting your DOC application on that machine is platform:

    platform_src ~> scp deployment/docker/deployment_docker.zip  platform@environment_host:~/deployment/deployment_docker.zip
  3. Decompress the deployment files:

             ~ ~> cd deployment
    deployment ~> unzip deployment_docker.zip -d .

2. Configuring the Environment Variables

A deployment folder is now available in the home directory of the target machine.

deployment ~> tree -L 1 .
.
|- app
|- dbos
|- infra

For this section, we assume that you have released version 1.0.0 of your project with DOC, that all your Docker images are available in your (private) Docker registry, and that this registry is available at the URL docker-registry.internal.some-company.com.

Configuring a DOC application environment amounts to two things:

  • Defining where the Docker registry that hosts the application Docker images is located; and

  • Defining the version of your DOC application.

These pieces of information are held in a .env file in the app folder (app/.env).

Edit this file and ensure the first lines are as follows:

APP_DOCKER_REGISTRY=docker-registry.internal.some-company.com
DOCKER_PULL_REGISTRY=product-dbgene-prod-docker-group.decisionbrain.cloud
PROJECT_VERSION_DOCKER_TAG=1.0.0
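Before starting anything, you can check that these variables are taken into account: docker compose interpolates the .env file, and the resolved configuration can be inspected with docker compose config. A minimal sketch (the exact image names depend on your project):

deployment ~> (cd app && docker compose config | grep 'image:')
deployment ~> # Every image line should point to your registry and version, e.g.
deployment ~> # image: docker-registry.internal.some-company.com/gene-sample-backend-service:1.0.0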

3. Configuring the Allowed Origin for the WebSocket Notifications

DOC relies on WebSockets for notifications; for these to work, the WebSocket endpoints need to specify the allowed web origins.

For more details on cross-origin resource sharing, refer to the Section Configuring Cross-Origin Resource Sharing (CORS).

The allowed origin should be configured in the .env file in the app folder (app/.env).

For an application available at the URL https://my-application.internal.some-company.com/home, the file should be edited as follows:

# Allowing '*' as origin is, generally speaking, a bad practice.
# You should not use it for deployed environments;
# instead, set it to the public URL of your DOC application.
# E.g. - WEBSOCKET_ALLOWEDORIGIN=https://www.my-gene-app
#
WEBSOCKET_ALLOWEDORIGIN=https://my-application.internal.some-company.com
[Note]

Note that, unlike the full URL, the origin is composed only of the URL scheme, host, and port; it does not contain any path information.
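If you need to verify the configured value once the application is running, one way is to replay the WebSocket handshake with an explicit Origin header. The sketch below makes two assumptions: the /ws endpoint path depends on your application, and a successful handshake answers HTTP 101 (Switching Protocols).

~ ~> curl -i -N \
       -H "Connection: Upgrade" \
       -H "Upgrade: websocket" \
       -H "Sec-WebSocket-Version: 13" \
       -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
       -H "Origin: https://my-application.internal.some-company.com" \
       https://my-application.internal.some-company.com/ws
~ ~> # An allowed origin typically answers HTTP/1.1 101;
~ ~> # a rejected one typically answers 403 Forbidden.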

4. Starting the Docker Containers

Starting the Docker containers on the target machine makes the application available to its users. For more details, refer to Chapter Building and Running the Application.

We recommend not having any other Docker containers already running on the target machine, as they may interfere with your application.

Procedure 7.2. To Start the Docker Containers
  1. Display the running containers using the following command (the list should be empty):

    deployment ~> docker ps
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
  2. Start the infrastructure containers using Docker with the following command:

    deployment ~> (cd infra && docker compose up -d)
    deployment ~> # The following command allows you to check that all the infrastructure
    deployment ~> # services are up and running.
    deployment ~> (cd infra && docker compose ps)
    ~/deployment/infra
               Name                         Command               State                                                          Ports                                                        
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    gene-sample-keycloak         /opt/jboss/tools/docker-en ...   Up      0.0.0.0:9090->8080/tcp, 8443/tcp, 9990/tcp                                                                          
    gene-sample-mongo            docker-entrypoint.sh mongod      Up      0.0.0.0:27017->27017/tcp                                                                                            
    gene-sample-postgres         container-entrypoint run-p ...   Up      0.0.0.0:5432->5432/tcp                                                                                              
    gene-sample-rabbitmq         docker-entrypoint.sh /opt/ ...   Up      15671/tcp, 0.0.0.0:15672->15672/tcp, 25672/tcp, 4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, 0.0.0.0:61613->61613/tcp

    This creates and starts all the infrastructure services, as well as the internal Docker network of the DOC application.
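    If one of the services reports a state other than Up, its logs are the first place to look. For example, using a container name from the listing above:

    deployment ~> docker logs --tail 100 gene-sample-rabbitmq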

  3. Start the Optimization server by typing the following command:

    deployment ~> (cd dbos && docker compose up -d)
    deployment ~> # The following command allows you to check that all the Optimization server
    deployment ~> # services are up and running.
    deployment ~> (cd dbos && docker compose ps)
    ~/deployment/dbos
                 Name                           Command               State               Ports             
    --------------------------------------------------------------------------------------------------------
    gene-sample-dbos-documentation   nginx -c /home/optimserver ...   Up      80/tcp, 0.0.0.0:1313->8080/tcp
    gene-sample-dbos-master          sh -c java $JAVA_OPTS -jar ...   Up      0.0.0.0:8088->8080/tcp        
    gene-sample-dbos-web-console     sh -c envsubst < /home/das ...   Up      80/tcp, 0.0.0.0:8089->8080/tcp
  4. To make the whole system available, start your DOC application by typing the following commands:

    deployment ~> # Starting the DOC application
    deployment ~> (cd app && docker compose up -d)
    deployment ~> # Starting the Optimization server workers
    deployment ~> (cd app && docker compose -f docker-compose-workers.yml up -d)
    deployment ~> # Checking the services
    deployment ~> (cd app && docker compose ps)
    
    ~/deployment/app
                  Name                            Command               State           Ports         
    -------------------------------------------------------------------------------------------------------------
    gene-sample-backend-service        java -jar /app.jar               Up       0.0.0.0:8080->8080/tcp, 8443/tcp 
    gene-sample-data-service           java -jar /app.jar               Up       0.0.0.0:8080->8080/tcp, 8443/tcp 
    gene-sample-execution-service      java -jar /app.jar               Up       0.0.0.0:8080->8080/tcp, 8443/tcp 
    gene-sample-gateway-service        java -jar /app.jar               Up       0.0.0.0:8080->8080/tcp, 8443/tcp 
    gene-sample-scenario-service       java -jar /app.jar               Up       8080/tcp, 8443/tcp 
    gene-sample-web                    sh -c envsubst < /home/web ...   Up       8080/tcp 
    gene-sample-checker-worker         java -jar /app.jar               Up       8080/tcp, 8443/tcp   
    gene-sample-engine-worker          java -jar /app.jar               Up       8080/tcp, 8443/tcp 
    gene-sample-python-engine-worker   java -jar /app.jar               Up       8080/tcp, 8443/tcp 

5. Backing Up and Restoring the Application

DOC relies on two main databases to store the application data.

  • A PostgreSQL relational database stores the project scenario data used, for instance, by the Data Service. For more details, refer to Section Accessing PostgreSQL.

    [Note]

    Note that scenarios can be imported and exported individually. For more details, refer to Chapter Managing the Application Data.

  • A MongoDB database stores scenario metadata and specific properties as well as application elements. For more details, refer to Section Accessing MongoDB.

    [Note]

    Note that the application configuration can be imported and exported manually. For more details, refer to Chapter Configuring the Application.

Both the PostgreSQL and MongoDB servers run as Docker containers; it is therefore possible to save and restore the application data using Docker commands. For more details, refer to the official Docker documentation.
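For example, a common pattern from the Docker documentation is to archive a volume through a temporary container. The volume name below is hypothetical; list the actual PostgreSQL and MongoDB volume names with docker volume ls.

deployment ~> docker volume ls        # find the actual volume names
deployment ~> mkdir -p backup
deployment ~> # Back up a volume (here a hypothetical 'deployment_postgres-data') into a tar archive
deployment ~> docker run --rm -v deployment_postgres-data:/data -v "$(pwd)/backup":/backup \
                alpine tar czf /backup/postgres-data.tar.gz -C /data .
deployment ~> # Restore the archive into the same volume later
deployment ~> docker run --rm -v deployment_postgres-data:/data -v "$(pwd)/backup":/backup \
                alpine sh -c "rm -rf /data/* && tar xzf /backup/postgres-data.tar.gz -C /data"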

[Note]
  • Restoring a Docker volume overwrites the database: the restored elements are not added to the ones in the application, they replace them.

  • One must have access to the corresponding files as, by default, the information about Docker containers and volumes is located in the db-gene/deployment folder.

  • It is key that the backup/restore of both the Data and Scenario Services happens at the same time, so that the contents of the two databases remain synchronized, even though the two Docker volumes are processed separately.
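One simple way to satisfy the last constraint is to stop the application services, process both volumes, and then restart. A minimal sketch:

deployment ~> (cd app && docker compose stop)     # quiesce the application services
deployment ~> # ... back up (or restore) the PostgreSQL and MongoDB volumes as shown above ...
deployment ~> (cd app && docker compose start)    # resume the application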