Message "the input device is not a TTY" happens when running some commands on a Windows 10 environment using Git bash
.
A way to avoid this problem is to use the `bash.exe` that is located in your Git installation (`c:\Program Files\Git\bin\bash.exe`) to start your shell environment instead of the standard Git Bash shortcut.
To achieve this, you may define a Windows shortcut whose target points directly at this `bash.exe`.
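As a minimal sketch (the exact path depends on where Git is installed, and the `--login -i` options are only a common way to start an interactive login shell), the shortcut Target field could look like this:

```
"C:\Program Files\Git\bin\bash.exe" --login -i
```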
This problem appears when generating an application in a Linux environment, in a folder where the generator does not have write access. In that case, after entering the project information, the following error message appears:
Error: EACCESS: permission denied, open '/generated/.yo-rc.json' at Object.openSync ...
To solve the issue, change the access rights of the folder where you want to generate the application and then run the generator again:
```
# do this in the folder where you are calling the generator.sh script
$ chmod o+w .

# call the generator
$ ./generator.sh -v 4.5.0
```
Note that the generated files belong to an internal user `501`, which is not the current user. If this is an issue, the generated files can be associated with the current user using the following command:
```
# Change the generated files ownership (replace `you.yourgroup` with your user characteristics)
$ sudo chown -R you.yourgroup .
```
The message "Release file for xxx is not valid yet (invalid for another...). Updates for this repository will not be applied" may be produced during the creation of Docker images on Windows 10.
This problem is related to a date management issue that happens on a Windows 10 Docker environment: when the laptop/desktop running Docker is put in sleep mode, the internal clocks of the Docker processes are stopped. When the laptop/desktop is back on, the clocks restart but without getting synchronized with the actual current time.
To synchronize the Docker clock:

1. Run Windows PowerShell ISE in administrator mode.
2. Execute the following script:
```
Disable-VMIntegrationService -VMName DockerDesktopVM -Name "Time Synchronization"
Enable-VMIntegrationService -VMName DockerDesktopVM -Name "Time Synchronization"
```
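To check whether the container clock has drifted, and that the resynchronization worked, you can print the time as seen inside a throwaway container and compare it with the host clock; the `alpine` image used here is just a convenient example:

```
# Print the current time as seen inside a container and compare it with your host clock
$ docker run --rm alpine date
```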
In the context of a local/development Docker desktop environment, the Docker containers of the application do not start properly or are killed by the Docker system. Messages like "JBossAS process received KILL signal", when starting the `keycloak` service, can be seen in the logs of the application containers.
This problem is related to a lack of memory in the Docker desktop configuration: the Docker containers cannot allocate enough memory, causing the application either to start very slowly or to have some of its services killed by Docker during startup.
To correct this problem, go to the Docker Desktop Settings and increase the Memory allocation to a suitable value. We advise allocating at least 6 GB of RAM, even for simple applications.
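To verify how much memory is actually available to the Docker engine after the change, one quick check (not specific to this product) is to query `docker info`:

```
# Total memory available to the Docker engine, in bytes
$ docker info --format '{{.MemTotal}}'
```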
This message may appear when building the `web` app. It is caused by an inconsistency in the caches used by the tool that compiles the different node modules.
To solve this issue, you have to clean this cache. This can be done using the `yarn` executable located in the `web/.gradle/yarn/yarn-v1.22.17/bin` folder:

```
$ cd web
$ .gradle/yarn/yarn-v1.22.17/bin/yarn cache clean
```
"Once the cache has been cleaned, you should be able to run the gradlew build
command successfully."
This message appears when custom UI code imports a symbol through an explicit internal path instead of using the module path:
```
// Wrong Import
import { GeneContext } from '@gene/web-frontend-base/lib/generated/execution';

// Correct Import
import { GeneContext } from '@gene/web-frontend-base';
```
This message may appear when no `Chrome` environment is found to run the default tests of the `web` service.
A simple workaround consists in disabling the `web` service tests by commenting out the following line in the `gradle/template/yarn.gradle` file of your project:
```
...
apply from: "${rootDir}/gradle/templates/node-common.gradle"
...
assemble.dependsOn yarn_install
check.dependsOn assemble
// check.dependsOn yarn_test   <- Comment this line
...
```
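Alternatively, if you only want to skip the tests for a single build without editing the file, Gradle can exclude the test task explicitly; this assumes the task is named `yarn_test`, as in the snippet above:

```
# Build the project while skipping the web tests for this run only
$ ./gradlew build -x yarn_test
```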
Job has failed and has been redelivered
In the context of an application deployed on Kubernetes/OpenShift, an Optimization Server Worker does not complete its execution and no explicit error is visible in the container logs. In the Job list widget, the job is marked as `failed` with the following message:
java.lang.RuntimeException: Job had failed and has been redelivered, then abandoned
This is usually caused by a worker pod that tries to allocate more memory than it is allowed by the Kubernetes configuration. To solve the problem, change the memory limits in the worker service Helm chart to a more appropriate value:
```
spec:
  containers:
    - env:
        - name: JAVA_TOOL_OPTIONS
          # Configures the JVM memory
          value: -Xmx4000m -Xms500m -XX:+CrashOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/carhartt-mso-checker-worker-heap-dump.hprof
      ...
      resources:
        # Configures the Kubernetes Pod resources
        limits:
          memory: 4256Mi
        requests:
          cpu: 100m
          memory: 1000Mi
      ...
```
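To confirm that the worker container was indeed killed for exceeding its memory limit, one possible check (the pod name below is a placeholder) is to look at the last termination reason reported by Kubernetes, which should read `OOMKilled` in that case:

```
# Show why the previous worker container was terminated (replace the pod name)
$ kubectl get pod <worker-pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
```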
Caused by: org.postgresql.util.PSQLException: ERROR: out of shared memory Hint: You might need to increase max_locks_per_transaction.
Make sure that the PostgreSQL server used by the Data Service has the recommended value for `max_locks_per_transaction`, which should be at least `512`.
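As an illustration of how this setting can be checked and raised, assuming direct `psql` superuser access to the server (connection options will differ in your environment, and the change requires a PostgreSQL restart to take effect):

```
# Check the current value
$ psql -U postgres -c "SHOW max_locks_per_transaction;"

# Raise it to the recommended minimum, then restart the PostgreSQL server
$ psql -U postgres -c "ALTER SYSTEM SET max_locks_per_transaction = 512;"
```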