Once you have followed the indications from Chapters Generating the Application Structure and Using the Repositories, you are ready to build and launch your application. During the development phase, this is necessary to test and refine how the application meets the requirements of the project at hand. When deploying the application for production, building and running involve additional parameters to consider. For more details, refer to Section Starting the Docker Containers.
The first step is to build the Spring and Angular components that make up the microservices of your application.
Once the application is running, users can export a scenario template as an Excel file. It can be duplicated and filled with data to be imported into the application. For details, refer to Chapter Managing the Application Data.
Your generated application uses the Gradle build system. The scripts that you need to build the application microservices are generated with the code of your application by the Application Generator tool.
At this point, it is necessary to indicate your installation credentials as described in Chapter Using the Repositories.
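As a quick sanity check, you can ask Gradle to list the tasks defined by the generated build scripts; the tasks used in this chapter (build, docker, dockerPush, updateCode) should be among them (depending on how they are grouped, you may need the --all flag):
./gradlew tasks --all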
The build of the microservices typically includes the following steps.
Generating the data access code from the JDL description of your data model (stored in gene-model/spec/entities.jdl).
Compiling the various libraries and services, and
Running automated unit tests.
The above is performed by entering the following command in the root directory of your project.
./gradlew build
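During quick development iterations, you may not want to run the unit tests on every build. Assuming the generated modules use the standard Gradle test task, you can exclude it explicitly:
./gradlew build -x test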
Note that, if you are developing on Linux, you may want to add your username to the docker group, so that you can run Docker commands without sudo.
The previous step (./gradlew build) generated JARs and Node.js modules that can be used to run the application locally, typically from an IDE. The present step creates Docker images that you can run locally, or push to your organization's registry for further deployment, typically using Kubernetes.
In order to build the Docker images for the microservices in your application, use the following command in the root directory of your project:
./gradlew docker
As a result of this step, you should see that the following Docker images have been created.
$ docker images
REPOSITORY                                                         TAG              IMAGE ID       CREATED             SIZE
docker-registry.internal.some-company.com/app/data-service        0.1.0-SNAPSHOT   65703bf554c9   About an hour ago   145MB
docker-registry.internal.some-company.com/app/scenario-service    0.1.0-SNAPSHOT   2524e6f37093   About an hour ago   128MB
docker-registry.internal.some-company.com/app/gateway-service     0.1.0-SNAPSHOT   6344f811024a   About an hour ago   109MB
docker-registry.internal.some-company.com/app/execution-service   0.1.0-SNAPSHOT   c085834c35b2   About an hour ago   125MB
docker-registry.internal.some-company.com/app/web                 0.1.0-SNAPSHOT   0f2acf2301c7   About an hour ago   395MB
docker-registry.internal.some-company.com/app/backend-service     0.1.0-SNAPSHOT   a711c95d3488   About an hour ago   123MB
docker-registry.internal.some-company.com/app/engine-worker       0.1.0-SNAPSHOT   a711c95d3688   About an hour ago   123MB
docker-registry.internal.some-company.com/app/checker-worker      0.1.0-SNAPSHOT   a711c95d3842   About an hour ago   123MB
As mentioned in Section Pushing Docker Images:
The typical name of a Docker image is of the form registry/optional-path/short-name:version.
The docker task generates Docker images where the registry part of the image names is a dummy registry name.
This registry name is controlled by the DOCKER_PULL_REGISTRY variable in the project gradle.properties file.
Using a dummy registry name works without problems for testing the images locally.
If and when you want to publish the Docker images to a registry, you must change the value of the DOCKER_PULL_REGISTRY variable in the project gradle.properties file and run the ./gradlew updateCode docker command.
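For reference, the corresponding entry in the project gradle.properties file looks like the following; the host shown here is the generated dummy value, to be replaced with your own registry when you decide to publish:
DOCKER_PULL_REGISTRY=docker-registry.internal.some-company.com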
The application is composed of a collection of microservices, which can be gathered in three layers:
Infrastructure services (provided by DOC)
Relational Database: PostgreSQL
NoSQL Database: MongoDB
Messaging Service: RabbitMQ
Authentication Service: Keycloak
Optimization Server services (provided by DOC)
Master Service
Web Console
Documentation
Application services (generated above)
Data Service
Scenario Service
Execution Service
Backend Service
Web Frontend
Gateway
Engine Worker
Checker Worker
The infrastructure and Optimization Server services are generic services that are always run as Docker containers. The Docker images for these services are retrieved from DecisionBrain repositories.
The Application services can be run either as Docker containers or as native processes.
As mentioned above, infrastructure and Optimization Server services are always run as Docker containers. The Application Generator tool generated configurations for these two layers, allowing you to launch them.
Log into the DecisionBrain Docker registry using the installation credentials with the following command:
docker login product-dbgene-prod-docker-group.decisionbrain.cloud
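If you script this step (for example in a CI pipeline), you can pass the credentials non-interactively; the variable names below are placeholders for wherever you store your installation credentials:
echo "$DB_INSTALLATION_PASSWORD" | docker login -u "$DB_INSTALLATION_USERNAME" --password-stdin product-dbgene-prod-docker-group.decisionbrain.cloud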
Launch the infrastructure services by running the following command in the deployment/docker/infra directory.
docker compose up -d
To monitor the progress of the services startup, use the following command:
docker compose logs -f
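Alternatively, you can check the current state of the containers at any time without following the logs:
docker compose ps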
Once all services have finished starting, you can interrupt the log printing with ^C and proceed to the launch of the Optimization Server services. To this end, run the following command in the deployment/docker/dbos directory.
docker compose up -d
Once the infrastructure and Optimization Server services have started, you can launch the microservices of your application using Docker.
Run the following command in the deployment/docker/app directory.
docker compose up -d
In the same folder, run the following command to start the workers.
docker compose -f docker-compose-workers.yml up -d
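If you need more processing capacity, Docker Compose can start several containers for the same worker. The service name below (engine-worker) is an assumption based on the image names; check docker-compose-workers.yml for the actual service names:
docker compose -f docker-compose-workers.yml up -d --scale engine-worker=2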
Once all your application microservices have finished starting, you can open a web browser and point it to http://localhost:8080 to get access to your application. The default credentials are summarized at the end of this page. You can monitor the progress of the microservices startup using the following command in the deployment/docker/app directory:
docker compose logs -f
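To follow the logs of a single microservice rather than all of them, pass its service name as defined in docker-compose.yml, for example (assuming a service named data-service):
docker compose logs -f data-service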
DOC allows you to configure the creation of the Docker containers hosting the application services and workers through the docker-compose.yml and docker-compose-workers.yml files located in deployment/docker/app.
In the same directory, an .env file defines the environment variables that these configuration files use, in particular the registry to which the Docker images are published, should you want to do so. Until you actually want to publish them, there is no need to change the generated configuration.
APP_DOCKER_REGISTRY=docker-registry.internal.some-company.com
This line sets the environment variable APP_DOCKER_REGISTRY to the same value as the DOCKER_PULL_REGISTRY variable in the project gradle.properties file. For more details, refer to Chapter Using the Repositories.
If and when you want to publish the Docker images, follow the procedure below.
Modify the value of the DOCKER_PULL_REGISTRY variable in the project gradle.properties file. For more details, refer to Section Using Gradle Configuration Files.
Run ./gradlew updateCode in the root directory of your project to propagate the above change, in particular to the .env file.
Run ./gradlew docker in the root directory of your project to rebuild the Docker images with the appropriate names.
Run ./gradlew dockerPush to publish them.
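Depending on your registry, the dockerPush task may require you to be authenticated against the target registry first; the host below is the example value used throughout this chapter:
docker login docker-registry.internal.some-company.com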
Instead of launching the application microservices using Docker as just described, you can launch them as simple processes on your machine. A few shell scripts have been generated by the Application Generator for this.
To this end, open eight terminal windows, navigate to the deployment/shell folder in each of them, and run one of the following scripts in each of the terminals:
./start-gateway-service.sh
./start-execution-service.sh
./start-backend-service.sh
./start-data-service.sh
./start-scenario-service.sh
./start-web.sh
./start-engine-worker.sh
./start-checker-worker.sh
You may want to make sure that the scripts are executable (use the chmod +x start-* command if necessary).
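If you prefer not to keep eight terminals open, a simple alternative is to start each script in the background and redirect its output to a log file. This is only a convenience sketch (it assumes the scripts are executable); note that services started this way must be stopped with kill rather than ^C:
cd deployment/shell
for s in start-*.sh; do
  ./"$s" > "${s%.sh}.log" 2>&1 &
done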
You can now open a web browser and point it to http://localhost:8080 to get access to your application. The default credentials are summarized at the end of this page.
You started your application by first launching the Docker containers for the infrastructure services, then those for the Optimization Server, and finally your application microservices. Shutting down is performed in the reverse order.
If you started the microservices of your application through the shell scripts, simply interrupt each script with ^C. If you launched them as Docker containers, run the following command in the deployment/docker/app directory.
docker compose down
Then, stop the Optimization Server containers by running the following command in the deployment/docker/dbos directory.
docker compose down
Finally, stop the infrastructure containers by running the following command in the deployment/docker/infra directory.
docker compose down -v
The -v option instructs Docker Compose to delete the volumes where the PostgreSQL and MongoDB data have been stored. Omit this option if you want to continue working with this data the next time you launch your application.
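If you are unsure which volumes were created and what they contain before deleting them, you can list them first:
docker volume ls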
This section lists the default credentials for the Application web client, Optimization Server, Keycloak, PostgreSQL, MongoDB and RabbitMQ accounts.
For more details on how to configure the access to these endpoints, refer to Chapter Configuring Credentials.
For more details on how to communicate with these access points using DOC APIs, refer to Chapter Understanding the APIs.
Once your application is running, you can access its web client frontend at http://localhost:8080. Some users are created in the default Keycloak configuration, as described in Section Managing User Accounts:
The user gene_admin with the password gene.
The users user1, user2, user3, and user4, each with the password gene.
Application elements available in the web client depend on the user. For more details, refer to Section Setting Permissions.
You can also access the web console of the Optimization Server at http://localhost:8089. One user is created in the default configuration:
The user optimserver with the password optimserver.
You can access the Keycloak administration console at http://localhost:9090. One user is created in the default configuration:
The user keycloak-r00t-us3rn4m3 with the password keycloak-r00t-p4ssw0rd.
You can access PostgreSQL using a dedicated client. One user is created in the default configuration:
The admin user postgres-r00t-us3rn4m3 with the password postgres-r00t-p4ssw0rd.
The PostgreSQL server exposes the following databases:
Database data_server, used by the Data Service with the user data_server and the password data_server.
Database keycloak, used by Keycloak with the user k3cl04k and the password k3cl04k.
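As an illustration, and assuming that the PostgreSQL container publishes the default port 5432 on localhost, you could connect to the Data Service database with psql using the credentials listed above (it prompts for the password):
psql -h localhost -p 5432 -U data_server -d data_server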
The PostgreSQL database is stored in a Docker volume, which can be backed up and restored. For more details, refer to Section Backing Up and Restoring the Application.
You can access MongoDB using a dedicated client. One user is created in the default configuration:
The admin user mongo-r00t-us3rn4m3 with the password mongo-r00t-p4ssw0rd.
The MongoDB instance exposes the following databases:
Database scenario-db, used by the Scenario and Data Services with the user scenario and the password scenario.
Database session-tracking-db, used by the Scenario Service with the user session-tracking and the password session-tracking.
Database execution-db, used by the Execution Service with the user execution and the password execution.
Database permission-db, used by several services with the user permission and the password permission.
Database optimserver-master-db, used by the Optimization Server with the user optimserver and the password optimserver.
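Similarly, assuming that MongoDB publishes the default port 27017 on localhost and that each user is defined on its own database, you could connect with mongosh as follows:
mongosh "mongodb://scenario:scenario@localhost:27017/scenario-db"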
The MongoDB database is stored in a Docker volume, which can be backed up and restored. For more details, refer to Section Backing Up and Restoring the Application.
You can access the RabbitMQ administration console at http://localhost:15672. One user is created in the default configuration:
The admin user rabbit-r00t-us3rn4m3 with the password rabbit-r00t-p4ssw0rd.