A generated application contains a set of projects which will be used to create the different application services.
They are organized as follows:
extensions
    backend-service-extension
    data-service-extension
    execution-service-extension
    gateway-service-extension
    scenario-service-extension
gene-model
    gene-model-dom
    gene-model-dto
    gene-model-jpa
    spec
gene-services
    backend-service
    data-service
    execution-service
    gateway-service
    scenario-service
processing
    checker
    engine
    python-engine
web
workers
    checker-worker
    engine-worker
    python-engine-worker
This directory organization divides your code between libraries and executables (referred to as services). The gene-model, processing, and extensions directories contain the code of libraries; the gene-services, workers, and web directories contain the code of executables. The purpose of this split between libraries and executables is to allow you to unit test the code in your libraries without the overhead of a Spring-based microservices architecture.
The processing/engine and processing/checker libraries are associated with the workers/engine-worker and workers/checker-worker executables, respectively. The extensions/backend-service-extension library is associated with the gene-services/backend-service executable, etc.
This structure is provided as a starting point, and you may need to augment it. If you do, do not forget to update the settings.gradle file in the root folder of your application so that your changes are included in the build process.
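For instance, if you added a new library under processing, the corresponding line in settings.gradle might look like this (my-solver is a hypothetical module name; match it to the directory you actually create):

```groovy
// settings.gradle (root folder) — declare the hypothetical new library module
include 'processing:my-solver'
```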
The gene-model-dom library contains the Java implementation of the data model that you described in JDL. For more details, refer to Section Understanding the Data Object Model (DOM).
This Java code is generated during the Gradle build of the library. It can be found in the build/generated subdirectory of the module. It should not be modified. You can extend this library by adding Java code in the src/main/java directory. Typical extensions include adding helper methods that work on the generated data model.
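As a sketch of such a helper, consider the following. The Resource class here is a hypothetical stand-in for a class the Gradle build would generate under build/generated from your JDL model; the helper itself is the kind of code you would place under src/main/java:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for a class generated from the JDL model.
class Resource {
    private final double capacity;

    Resource(double capacity) {
        this.capacity = capacity;
    }

    double getCapacity() {
        return capacity;
    }
}

// A helper placed under src/main/java: it extends the generated model
// without modifying the generated sources in build/generated.
public class ResourceHelpers {
    public static double totalCapacity(List<Resource> resources) {
        return resources.stream().mapToDouble(Resource::getCapacity).sum();
    }

    public static void main(String[] args) {
        List<Resource> plan = Arrays.asList(new Resource(2.0), new Resource(3.0));
        System.out.println(totalCapacity(plan)); // prints 5.0
    }
}
```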
This library has been added as a dependency of all modules that need to process the data model in Java. This includes the default engine and checker libraries.
Python worker tasks use a different API to access the model. For more details, refer to Section Implementing Python Worker Tasks.
The processing/engine and processing/checker libraries are code skeletons, including templates of unit tests. Their intent is to provide a starting point for you to write the code of an optimization engine, and of a procedure to check the data model, respectively. For more details on how to integrate CPLEX libraries in the build mechanisms, refer to Chapter Integrating CPLEX.
They are typically developed in isolation and unit tested, then integrated in the microservices architecture through workers of the Optimization Server. The workers/engine-worker and workers/checker-worker modules provide skeletons for this integration.
You can also use other custom libraries in addition to, or as a replacement for, the two provided. Similarly, you can generate other workers to integrate some or all of these libraries in the Optimization Server. For more details, refer to Part Getting Started.
Your application will be deployed as a set of microservices: the web frontend service implemented in web, the backend services implemented in the directories under gene-services, and your Optimization Server workers implemented in the directories under workers.
The extension libraries for the backend microservices are placed in the extensions folder. They are automatically integrated during the build process, and they are discovered during the microservices boot phase.
If your tasks invoke backend routines, the extensions/backend-service-extension library is where you will implement them. The logic of the routines may be implemented in this library, or in another custom library, in particular if it makes unit testing easier.
The Data Service provides features on top of a relational database that stores all the scenario data. These features include querying and modifying the data.
There are two different ways to perform reading and editing:
using the GraphQL language; for more details, refer to Section Understanding the Data Service API.
using a set of APIs that rely on the Data Object Model Collector, as described in Chapter Understanding the Data Object Model (DOM).
All interactions between the web client and the Data Service are performed using GraphQL language. All interactions between tasks and the Data Service are performed using a set of APIs.
When developing custom widgets, as described in Chapter Creating Custom Widgets, you may need to define new GraphQL queries, as described in Section Understanding GraphQL Default Queries. The extensions/data-service-extension library is where you will insert these extensions.
The Scenario Service is responsible for the management of the application's settings.
The scenario extension point extensions/scenario-service-extension allows you to provision the web client and security settings. For more details, refer to Sections:
The Execution Service is responsible for executing and managing the tasks that you launch from the web client. Tasks can invoke simple or complex logic statements as well as Java routines implemented in the Backend Service and Optimization Server workers. These tasks are declared as Spring beans in the Execution Service. For more details, refer to Chapter Understanding the Execution Service.
You can implement these beans in the extensions/execution-service-extension library.
Tasks are executed as jobs, of which multiple instances can be run in parallel by the Execution Service. Whether they complete successfully or fail, jobs are stored in a database that is automatically cleaned over time.
When the execution of a task is launched, it runs in a dedicated thread of the Execution Service. The resources that this job requires add to the resource consumption of the microservice. To manage this, there is a limit on the number of jobs that can run simultaneously in the Execution Service, with a default of 5. When this limit is reached and a new job is launched, it is queued until a running job finishes. This queue is unlimited by default.
The parameters that control the above are the following:
services.execution.maxConcurrentJobs controls the maximum number of jobs that can run simultaneously in the Execution Service. It defaults to 5.
services.execution.maxQueuedJobs controls the maximum number of jobs that may be queued waiting for a running job to finish. It defaults to 0, meaning that the queue is not limited. If the queue is limited and full, any new job is rejected and terminates with status FAILED.
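These semantics are analogous to a bounded thread pool. The following is a rough stdlib analogy, not the Execution Service implementation: 2 concurrent "jobs" and a queue of capacity 1 stand in for maxConcurrentJobs=2 and maxQueuedJobs=1, and a rejected submission plays the role of a job terminating with status FAILED:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class JobLimitDemo {
    // Returns true if the pool accepts the job, false if it is rejected.
    static boolean trySubmit(ThreadPoolExecutor pool, Runnable job) {
        try {
            pool.submit(job);
            return true;
        } catch (RejectedExecutionException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // 2 worker threads, queue of capacity 1.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(1));
        Runnable job = () -> {
            try {
                Thread.sleep(500); // simulate a long-running job
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        System.out.println(trySubmit(pool, job)); // true: runs immediately
        System.out.println(trySubmit(pool, job)); // true: runs immediately
        System.out.println(trySubmit(pool, job)); // true: queued
        System.out.println(trySubmit(pool, job)); // false: pool busy, queue full
        pool.shutdownNow();
    }
}
```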
They are Spring properties, and can therefore be overridden by the standard Spring mechanisms, typically environment variables set in the Docker configuration. For more details, refer to Section Using Docker Configuration Files.
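For example, Spring's relaxed binding lets you set both properties through environment variables in a Docker Compose file (the execution-service service name below is an assumption; use the name from your own Docker configuration):

```yaml
services:
  execution-service:
    environment:
      # Overrides services.execution.maxConcurrentJobs (default 5)
      SERVICES_EXECUTION_MAXCONCURRENTJOBS: "10"
      # Overrides services.execution.maxQueuedJobs (default 0 = unlimited)
      SERVICES_EXECUTION_MAXQUEUEDJOBS: "100"
```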
Finished jobs, completed or failed, are automatically removed from the database. You can configure the cleaning criteria by editing the YAML file extensions/execution-service-extension/src/main/resources/application.yml.
job-management:
  history:
    max-age: 30       # Use -1 to disable max-age jobs cleaning
    jobs-limit: 1000  # Use -1 to disable jobs-limit cleaning
    cleaning-cron: 0 0 0 * * *  # Spring cron expression to periodically run jobs auto cleaning; default is every day at 00:00, see https://www.spring.io/blog/2020/11/10/new-in-spring-5-3-improved-cron-expressions
maxAge controls the maximum number of days a finished job should remain in the database. By default, it is set to thirty (30) and can be disabled when set to -1.
jobsLimit controls the maximum number of finished jobs to keep in the database. By default, it is set to a thousand (1000) and can be disabled when set to -1.
cleaningCron allows you to define when the automatic job cleaning task should occur. By default, it is set to run every day at midnight (00:00). As opposed to cron expressions in Unix-based systems, CronExpression in Spring uses six space-separated fields: second, minute, hour, day, month, and weekday. For more details, refer to the Spring CronExpression documentation page.
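For example (the six fields being second, minute, hour, day, month, and weekday):

```
0 0 0 * * *     every day at 00:00:00 (the default above)
0 */15 * * * *  every 15 minutes
0 0 2 * * SAT   every Saturday at 02:00:00
```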
Users with the role APPLICATION_ADMIN can also launch the cleaning process, at all times, through the following REST API call:
POST /api/execution/jobs/clean-jobs?maxAge=xxx&jobsLimit=yyy
The caller can provide a value for the parameters maxAge and jobsLimit. If none is provided, the default values are used.
The Gateway Service is a reverse proxy on top of the backend HTTP and GraphQL APIs that unifies their origin and HTTP routing.
The customization of this service is limited to the application*.yml file that you can place in the extensions/gateway-service-extension/config folder, and to its application-*.yml variants that you will activate through Spring profiles.
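For instance, you could add a hypothetical extensions/gateway-service-extension/config/application-staging.yml variant and activate it by setting the standard Spring environment variable SPRING_PROFILES_ACTIVE=staging; only the values it defines override the base application.yml:

```yaml
# application-staging.yml — loaded only when the "staging" profile is active
logging:
  level:
    org.springframework.cloud.gateway: DEBUG
```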
An application built using DOC can implement CORS to secure and facilitate integration with resources from origins other than its backend. Requests to the DOC API that require CORS usually come from the frontend of a web application other than yours. In contrast, requests to the DOC API that originate from the backend of an application do not involve resource sharing and do not require CORS.
In such a situation, the web browser running the third-party web application automatically makes “preflight requests” to ensure CORS is allowed, and expects the API to answer with a correct response. If the response is not correct, an error is displayed in your browser debug console.
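Schematically, a preflight exchange for a cross-origin GET with a custom header looks like this (standard CORS, simplified; hostnames and paths are examples):

```
OPTIONS /api/service/controller-path HTTP/1.1
Origin: http://third.party.domain.com
Access-Control-Request-Method: GET
Access-Control-Request-Headers: X-Api-Key

HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://third.party.domain.com
Access-Control-Allow-Methods: GET
Access-Control-Allow-Headers: X-Api-Key
```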
For safety reasons, CORS should not accept all routes and origins of your application, but only the ones required for the use case at hand.
To do so, the Gateway Service must be configured by defining the information that can be exchanged through headers. Configurations at the microservice level are ineffective regarding CORS; therefore, the file extensions/gateway-service-extension/config/application.yml should be edited by:
Adding one rule per route; or
Using a pattern-matching rule that matches several routes.
For example, a rule can be added as follows:
spring:
  cloud:
    gateway:
      globalcors:
        corsConfigurations:
          '[/api/service/controller-path]':
            allowedOrigins: "http://third.party.domain.com"
            allowedMethods:
              - GET
            allowedHeaders: "X-Api-Key"
Whether the "X-Api-Key" header is needed in CORS depends on the security requirements and the context in which the API is used. For instance, it might not be required for internal use and same-origin requests, or for public APIs and open resources. However, it may be needed when calling other types of private servers.
For more details, refer to the official Spring Cloud Gateway documentation.
Content security policy (also called CSP) is a layer of security that helps to detect and mitigate certain types of attacks on web applications. It mainly helps against cross-site scripting and code injection, and can enforce HTTPS-only resource loading.
You can find additional information on this standard here:
The functioning of CSP is simple: the server adds a header to the web document response that tells the browser where it can load resources from and how to behave with these resources.
The header has to respect a given structure; you can find the specifications here: CSP header specification
It is composed of a list of policy directives and has the following form:
Content-Security-Policy: <first-policy-directive>; <second-policy-directive>; <etc>
Each policy directive is composed of a directive name and some values.
The following is a basic example of a CSP header:
Content-Security-Policy: default-src "self" https: ; object-src "none"
This header contains two policy directives, default-src "self" https: and object-src "none". In each, default-src and object-src are the directive names, and "self", https:, and "none" are the values.
For an application built on top of DOC, a default configuration is provided for the CSP header. If you need to customize it, you can do so in the Gateway extension.
A typical case in which the CSP header needs to be customized is when you develop a graphical widget that consumes external resources like icons, fonts, etc.
The configuration is done in the following file: extensions/gateway-service-extension/config/application.yml
Note that Safari does not handle the CSP header in the same way as other browsers, like Chrome, Firefox, or Edge. In order for the web notifications to work properly, you will have to add the base web socket URL among the values of the default-src policy directive. For an application accessible via the URL https://my-application.io, it should look like the following pseudo-code:
spring:
  cloud:
    gateway:
      filter:
        secure-headers:
          content-security-policy: "default-src wss://my-application.io 'self' ..."
The frontend service is in charge of the web client.
It can be extended beyond the changes allowed by the application configuration. For more details, refer to Chapter Configuring the Application.
For instance, all web client customizations occur within this service. For more details, refer to Section Using a Web Application Controller as well as Chapters Customizing the Default Widgets and Creating Custom Widgets.
It comes with a dedicated API. For more details, refer to Section Understanding the Web Client Library APIs.
DOC provides a mechanism for implementing translation of the web client using the ngx-translate library. The default language file is web/src/assets/i18n/en.json.
It is possible to customize strings or create a localized version for another language by duplicating this file structure, naming it accordingly, and using it in the AppComponent constructor like this:
export class AppComponent {
  constructor(translate: TranslateService, settingsService: GeneSettingsService) {
    // add the new language
    translate.addLangs(['fr', 'en']);
    // this language will be used as a fallback when a translation isn't found in the current language
    translate.setDefaultLang('en');
    // translations will be taken from web/src/assets/i18n/fr.json
    translate.use('fr');
    // DOC settings
    settingsService.registerDefaultApplicationSettings(DEFAULT_APP_SETTINGS);
  }
}
Translation values missing from the file fall back to the ones provided by default by DOC, and a warning listing the missing keys is emitted in the browser console.
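A minimal sketch of such a fr.json file follows. The "142" key and its message are hypothetical; the COMPONENT.ERROR.MESSAGES group is the one used by the error page mechanism described at the end of this chapter, and the real keys must mirror those in en.json:

```json
{
  "COMPONENT": {
    "ERROR": {
      "MESSAGES": {
        "142": "Une erreur est survenue."
      }
    }
  }
}
```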
DOC uses the AG Grid library for data grids, which uses its own translation file located under web/src/assets/i18n/ag-grid.en.json.
To create a localized version for another language, you can, for example, duplicate this file structure as web/src/assets/i18n/ag-grid.fr.json, with the fr locale containing the key/value pairs. For more details, refer to the Official AG Grid Documentation.
Example:
{
  // Set Filter
  selectAll: '(Select All)',
  selectAllSearchResults: '(Select All Search Results)',
  searchOoo: 'Search...',
  blanks: '(Blanks)',
  noMatches: 'No matches',
  // ...
}
The withGeneAgGridTranslations() DOC function can be used to enable the localization mechanism on any AG Grid instance.
import { withGeneAgGridTranslations } from '@gene/widget-core';
// ...
let options: GridOptions;
let translateService: TranslateService;
// ...
withGeneAgGridTranslations(options, translateService);
Note: Developing with the AG Grid library in your code requires purchasing an AG Grid Enterprise license from www.ag-grid.com.
See www.ag-grid.com/javascript-data-grid/localisation for more information about AG Grid localization.
It is possible to use the DOC error page component to display error messages using routing. In the frontend code that raises the error, change the current location to /error/error/<error-code>, where error-code is an integer greater than 100 (lower numbers are reserved for DOC). If the translation file contains a string with the key COMPONENT.ERROR.MESSAGES.<error-code>, it is displayed on the error page.
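For example, in an Angular component this could be triggered as follows (a sketch: the injected Router comes from @angular/router, MyWidgetComponent is a hypothetical component, and 142 is an arbitrary custom error code above 100):

```typescript
import { Router } from '@angular/router';

export class MyWidgetComponent {
  constructor(private router: Router) {}

  reportFailure(): void {
    // The displayed message comes from the translation key
    // COMPONENT.ERROR.MESSAGES.142, if it is defined.
    this.router.navigateByUrl('/error/error/142');
  }
}
```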