Development and Deployment process
Olympe is a development platform with SaaS and on-premise offers. It provides specific steps to operate the development process in order to save, build and deploy what is developed by your team.
Here is a diagram that summarizes the process of deploying work (done in CODE and DRAW) from one environment to another:
It is all based on Docker images that can be built from three different bases:
- Nginx to deploy new code for the frontend applications
- NodeJS for the backends (service applications)
- CodeAsData to deploy the compositions and development done in DRAW, i.e. the code as data
Once the Docker images are built, the Docker registry can call the appropriate webhooks to run a deployment workflow on the Olympe cloud side.
When using CI/CD pipelines on git repositories (e.g. GitHub Actions, GitLab CI, Azure DevOps pipelines, etc.), it can be important to avoid building the code as data image automatically, since the update process can cause a brief downtime of the composition database while it is in progress.
Code as data
When developing applications on the Olympe platform, developers combine and compose pieces of code, known as coded bricks, using DRAW to create processes and applications. The system generates data in a graph database, referred to as the composition database. While applications composed of brick assemblies offer many advantages over traditional source code files, they require tools to extract these assemblies, create library packages, and deploy them to other environments.
The term Code as Data designates everything that developers assemble in Draw and store in the database. It is code, but stored as data rather than in text files. Hence, that code must be considered part of the source code of projects built on Olympe.
The code as data is stored in JSON files called patches.
Olympe Toolkit
The Olympe toolkit (the `@olympeio/toolkit` npm package) is the tool provided to handle operations with Olympe environments, in particular operations related to the code as data.
The toolkit is generally used as an npm `devDependency` in Olympe projects to build the appropriate code as data files (also named patches) and install or deploy them to your environment.
The toolkit is also available on DockerHub as a Docker image, to be used directly in jobs on your Kubernetes cloud (for SaaS or on-premise Olympe environments).
Execute toolkit commands
The toolkit is a CLI tool written in JavaScript that can either be installed globally or locally in your project, or be deployed on a Docker/Kubernetes environment.
Use the `help` command to list the existing commands:
- NPM
- Docker
To install the toolkit locally in your project, execute the following command in a terminal:
npm install --save-dev @olympeio/toolkit
or globally on your computer, available as a global binary:
npm install --global @olympeio/toolkit
To execute a command, use the following syntax:
npx olympe <command> [--<param> <value> ...]
For instance, use this to list the available commands:
npx olympe help
The list of commands and parameters can be found in the reference section.
docker run olympeio/toolkit:stable <command> [--<param> <value> ...]
For instance, use this to list the available commands:
docker run olympeio/toolkit:stable help
When using Docker images, you can also provide parameters using environment variables. This is typically used to define jobs to be executed on the instance, with the right parameters and secrets injected. The list of commands and parameters can be found in the reference section.
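For instance, here is a hypothetical invocation that passes parameters through environment variables instead of CLI flags; the variable names come from the parameters reference below, and the values are placeholders to adapt to your setup:
# Illustrative only: adapt the values to your environment.
docker run \
  -e LOG_LEVEL=debug \
  -e ORCHESTRATOR_HOST=orchestrator.example.internal \
  -e ORCHESTRATOR_SERVICE_PORT=8080 \
  -e DRAW_USERNAME=admin \
  -e DRAW_PASSWORD=admin \
  olympeio/toolkit:stable statsDB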
Saving Code as Data in git
After developing libraries or projects in Draw, developers need to package that work to be able to:
- Deploy the project and changes to another environment (e.g.: from development to production environments)
- Package the project with its JavaScript code into an NPM artifact to be pushed to a registry
To achieve that, it is convenient to create a snapshot of the code as data of your project and generate the patch files to be added to your git repository.
To save the code as data from Draw to patch files, use the toolkit `snapshot` command. That command requires a configuration file, which you can store in your git repository. It must contain the list of projects/folders to save and a destination directory.
[
{
"snapshooter": {
"name": "My Projects",
"rootTags": ["10000000000000000000"],
"exclude": ["10000000000000000001"],
"outputDir": "snapshot"
}
}
]
This is a typical example of a snapshooter configuration file: it tells the toolkit to snapshot all the projects from Home (`10000000000000000000`) except what is inside the Olympe folder (`10000000000000000001`), and to store the patches in the `snapshot` folder of your project.
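Running the snapshot command with that configuration then produces the patches in the configured output directory. The configuration file name below and the use of the `--jsonConfig` parameter to supply it are assumptions; check `npx olympe snapshot --help` for the exact way to provide the configuration on your toolkit version:
npx olympe snapshot --jsonConfig "$(cat snapshooter-config.json)"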
Once the code as data files are generated, they can be committed to the git repository like standard source code files.
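For example, a minimal sketch assuming the `snapshot` output directory from the configuration above:
git add snapshot/
git commit -m "Snapshot of the code as data"
git push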
Automatically commit and push snapshot to git
The snapshooter is designed to be used in jobs or cron jobs in your infrastructure to automatically save and commit the patches to the git repository. In order to tell the snapshooter to commit and push the changes to your repository, you need to complete the configuration like this:
[
{
"snapshooter": { ... },
"git": {
"repo": "https://<token>@<repository_url>.git",
"branch": "master",
"commitMessage": "Snapshot at {date} in {folder}"
}
}
]
This tells the snapshooter to clone the specified repository on the given branch, perform the snapshot in the specified `outputDir` (relative to the repository folder), and finish the process by committing and pushing the changes, if any, with the specified commit message.
The commit message can be formatted with two placeholders:
- `{date}`: the date, written with the format `yyyy.MM.dd hh:mm:ss`
- `{folder}`: the `outputDir` value from the configuration
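As an illustration, such a scheduled job could run the toolkit Docker image like this; how the snapshooter configuration itself is provided (mounted file, `--jsonConfig` parameter, etc.) depends on your setup and is not shown here:
docker run \
  -e SNAPSHOT_COMMIT_MESSAGE='Snapshot at {date} in {folder}' \
  -e LOG_LEVEL=info \
  olympeio/toolkit:stable snapshot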
Build your project to an NPM package
Olympe requires NodeJS version 18 or above (download)
Like any modern JavaScript project, you can use the JavaScript code bundler/builder of your choice (e.g. webpack, esbuild, vite, etc.) to build your source code files. There are two possibilities: either your project is aimed to be a library of bricks, serving as an Olympe component to be used in multiple places, or it is a final project that uses libraries and needs to be deployed to an environment.
- Final project
- Library
For the code as data, build the snapshot of your project aggregated with the code as data of all its npm dependencies, using the `buildCodeAsData` command from the toolkit:
npx olympe buildCodeAsData
This is equivalent to using the following command:
npx olympe buildCodeAsData --codeAsData.mode all
This will generate, in the `dist/codeAsData` directory, all the patches and the appropriate configuration files for the project to be deployed on an environment.
That folder must be used as the base directory to build the code as data Docker image, which deploys the code as data to your environment via the webhook.
In the end, you typically end up with the following directories structure:
When building a library, it is important to only include the source code of that library, without bundling the code of its dependencies. This is a standard practice when using JavaScript bundlers like webpack, and the same approach must be applied to the code as data.
Use the `buildCodeAsData` command from the toolkit to build the snapshot folder of your project and the configuration files, but make sure to instruct the toolkit to only consider the patches from the current project and not those from its dependencies:
npx olympe buildCodeAsData --codeAsData.mode project
This will generate all the patches and the appropriate configuration files in the `dist/codeAsData` directory.
The `dist` folder is typically used as the base folder for all the files intended to be packaged and published to your NPM registry.
For an Olympe library, the `package.json` file must contain the specific `codeAsData` extra entry, which indicates the path where the patches of the package can be found:
{
"codeAsData": "codeAsData"
}
This informs the toolkit about which dependencies are Olympe projects and contain code as data, particularly when building a project for deployment on an environment.
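If you need to check which of the installed packages expose such an entry, the toolkit provides the `dependencies` command (see the reference below):
npx olympe dependencies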
Once the library is ready to be published to the NPM registry, the published package typically contains the bundled JavaScript code and the `codeAsData` folder with the patches, and its `package.json` file declares the `codeAsData` entry described above.
Build docker images to be deployed
Olympe requires Docker version 24 or above: Download
As described in the introduction, an Olympe environment deployed on the Olympe cloud relies on three types of Docker images:
- Frontend
- Backend
- Code as Data
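# Frontend image: the built frontend sources are served by Nginx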
FROM olympeio/frontend-base:stable
COPY --chown=nginx:root $SOURCES_PATH /usr/share/nginx/html
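# Backend image: the built backend (service application) sources are run with NodeJS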
FROM olympeio/backend-base:18-stable
COPY $SOURCES_PATH /home/node/app
The code as data image must contain the patches generated by the toolkit with the `buildCodeAsData` command (see Build your project to an NPM package).
The typical Dockerfile used to build that image is:
FROM olympeio/codeasdata-base:stable
COPY --chown=1000:1000 $SOURCES_PATH /root/patches
RUN chmod 775 ~/patches
RUN chown -R 1000:1000 ~/patches
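As an illustration, the three images could be built and pushed as follows. Everything here is an assumption to adapt to your project: the Dockerfile names, registry and image names are placeholders, the tag follows the pattern recommended in the next section, and the `--build-arg` only works if your Dockerfiles declare `ARG SOURCES_PATH`:
# Illustrative sketch only: adapt names, tags and source paths to your project.
REGISTRY=ghcr.io/<registry-name>
TAG=master-$(git rev-parse --short HEAD)
docker build -f Dockerfile.frontend --build-arg SOURCES_PATH=dist/frontend -t $REGISTRY/<project-name>-frontend:$TAG .
docker build -f Dockerfile.backend --build-arg SOURCES_PATH=dist/backend -t $REGISTRY/<project-name>-backend:$TAG .
docker build -f Dockerfile.codeasdata --build-arg SOURCES_PATH=dist/codeAsData -t $REGISTRY/<project-name>-codeasdata:$TAG .
docker push $REGISTRY/<project-name>-frontend:$TAG
docker push $REGISTRY/<project-name>-backend:$TAG
docker push $REGISTRY/<project-name>-codeasdata:$TAG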
Configure deployment triggers
In order to deploy the newly built Docker images to the Olympe Cloud, you must now configure your container registry to trigger the appropriate webhook.
The Olympe Cloud is compatible with any container registry that can authenticate using a standard token in the `Authorization` header of the request.
In order to configure your registry, first contact your Olympe referent and ask for the webhook URL and authorization token. The URL should use the following syntax:
https://<client-name>.webhook.olympe.io/<project-name>/<instance-name>
Your Olympe referent will require some information to configure the webhook and deployment workflow:
- The repository, name and tag of the images to be pulled from the registry. For instance, on the GitHub registry it could look like `ghcr.io/<registry-name>/<project-name>-frontend:master-42bd534f`.
  - In the example, the suffix `frontend` is used in the image name. Any name can be used, but by default we recommend one of the suffixes `frontend`, `backend` or `codeasdata` for the three types of image to deploy: it makes it simpler to identify where each image must be deployed.
  - In the example, the tag `master-42bd534f` is used. It corresponds to the branch name followed by the commit id the image has been built from. We recommend using such a pattern to tag images, to make it simple to know from which branch and commit an image has been built.

    Tip: When the same project has multiple environments (e.g. `codev`, `pre-production` and `production`), this type of image tag can be used with a regular expression to specify on which environment to deploy an image. For instance, images with a `master-*` tag would be configured to only deploy to the `production` environment, and the ones with a `develop-*` tag would be deployed on `codev`.
- Provide the authentication credentials for the system to be able to pull the image from your container registry:
{
"auths": {
"<url>": {
"username": "<username>",
"password": "<password>",
"auth": "<auth>"
}
}
}
  - the `url` field is the repository URL
  - the `username` and `password` fields are the user's credentials
  - `auth` is computed from the `username` and `password` of the user used to pull the image. It corresponds to the base64 representation of `username:password`; use this in your terminal to get it:
echo -n 'username:password' | base64
- (Optional): Provide the patterns and regular expressions you need to apply to trigger the deployment pipelines.
When this is done, try to build and publish the images of your project and validate that the environment gets updated using your monitoring dashboard (see: managing environments).
Update code as data
Once the images are built and deployed, it's crucial to update the code as data to reflect the changes in the environment. This is achieved using the `update` command from the toolkit. The update process requires the new patches to be deployed to the orchestrator volume, allowing it to compare these patches with the current state of the database.
To optimize the update process, the toolkit can utilize a changelog file generated by rsync when the new patches are deployed to the orchestrator. This file contains the list of patches that have been added, modified, or deleted, enabling the `update` command to accelerate the update process. If you prefer not to use the changelog file, you can disable it with the `--no-update.useChangeLog` parameter.
If the update process fails, the toolkit will attempt to rollback the applied patches to restore the database to its previous state. This can happen when patches have some errors or are incompatible with the current database state. The rollback process restores the patches from the last successful update or install, then calculates the differences between the current state and the last successful state to execute the rollback effectively.
The update can be performed in two modes:
Downtime mode (by default)
In Downtime mode, the orchestrator enters maintenance mode during the update, disconnecting all connected clients and temporarily halting application usage. Once the update is complete, the orchestrator returns to normal mode, allowing clients to resume using the application. This mode is recommended for development environments.
No Downtime mode (`--update.noDowntime`)
In No Downtime mode, the orchestrator remains operational during the update, allowing clients to continue using the application. However, during the update, the code as data is in read-only mode, meaning only business data can be modified. After the update, the application prompts users to refresh the page to load the updated code as data. This mode is recommended for production environments as it ensures minimal disruption and provides an opportunity to verify that the deployed code is functioning as expected before it is fully utilized.
This is accomplished using the blue-green deployment strategy, which involves maintaining two sets of pods to separate the active and preview versions of the application. The active version remains accessible via https://$APPLICATION.olympe.io, while the preview version can be accessed at https://preview.$APPLICATION.olympe.io. Once the preview version is validated, it can be approved in the pipeline, promoting it to become the new active version.
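For reference, here are hypothetical invocations of the command; in practice it usually runs as a job on the Olympe cloud, where the new patches are already present on the orchestrator volume:
# Default (downtime) mode:
npx olympe update
# No-downtime mode, recommended for production environments:
npx olympe update --update.noDowntime
# Compare all patches against the database instead of using the rsync changelog:
npx olympe update --no-update.useChangeLog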
Summary - Deployment steps
- Build a project/library in Draw in environment A
- Snapshot from environment A to JSON files (saved in git)
- Build and gather the patches to build and publish the npm package / Docker image
- Deploy the patches to the orchestrator of environment B
- Install/Update the graph database with the patches to set up environment B
Toolkit reference
To run a command using the toolkit installed locally as an npm devDependency, use this in a terminal:
npx olympe <command> [--<parameter name> <parameter value>, ...]
For each command, use the `-h, --help` parameter to see the detailed list of parameters for that command.
Command | Typical use | Description |
---|---|---|
help | N/A | Display the list of available commands with the toolkit, as well as the associated parameters |
snapshot | Docker: scheduled CI/CD | Download and save the code as data into patches (json files) from a deployed environment |
buildCodeAsData | NPM: within CI/CD | From the project snapshot files and its dependencies, build the dist/codeAsData folder containing all the patches required to build/deploy the project to an environment |
update | Docker: Olympe cloud job | Update the graph database containing the code as data of an environment to be aligned with the patches deployed to the orchestrator. |
codeAsDataReadMode | Sub-command of update | Set Orchestrator code as data read mode to true/false |
broadcastRefreshApp | Sub-command of update | Broadcast a message to refresh the application for all connected users |
install | Docker: Olympe cloud job | Clean and Install the content of patches deployed to the orchestrator. /!\ This command starts by clearing the whole composition database, so it must be used with caution. |
snapshotUsers | Sub-command of install | Save users and their Draw permissions of an environment to patches (mainly as sub-command in process of install ) |
restoreUsers | Sub-command of install | Restore patches generated with command snapshotUsers to the database (mainly as sub-command in process of install ) |
snapshotBusinessData | Sub-command of install | Save business data (business objects visible in Runtime Data Set of projects), stored in the "Embedded graph database" of an environment to patches (mainly as sub-command in process of install ) |
restoreBusinessData | Sub-command of install | Restore patches generated with command snapshotBusinessData to the database (mainly as sub-command in process of install ) |
dependencies | NPM: manually | List all npm dependencies from node_modules folder which contain some code as data |
updateUser | Docker: Olympe cloud job | Update the username and password of the admin user |
maintenance | Docker: Olympe cloud job | Activate/Deactivate the maintenance mode of the orchestrator. When activated, ensures that all connected clients to the orchestrator are disconnected. |
getUpgradeConfig | Docker: during deployment | Generate the configuration for the upgrade tool to apply upgrade script on the composition database. Used by the code as data deployment workflow to automatically upgrade the language version from one major version to another. |
startGC | Docker: Olympe cloud job | Request the orchestrator to garbage collect useless files in file and backup services |
statsDB | | Return information about the composition database |
Toolkit command parameters
Shell parameter | Environment variable | Type | Default value | Description |
---|---|---|---|---|
--logLevel, -l | LOG_LEVEL | string | info | logging level, one of debug, verbose, info, warn, or error. (Eg. olympe install --logLevel warn) |
--help, -h | | boolean | false | display help for the current command |
--saveLogs | SAVE_LOGS | boolean | false | save logs to a file under directory logs/ |
--codeAsData.workingDir | | string | ./dist/codeAsData | relative (to the project's root) path to the temporary directory used to store generated code as data |
--codeAsData.mode | | string | all | specify if the code as data to process is aimed to be used in a library package (only the code as data of the current project) or in a project to be deployed (including all the dependencies). Accepted values: dependencies, project, all |
--snapshot.excludeUsers | SNAPSHOTER_EXCLUDE_USERS | boolean | false | whether to exclude users from dump of business data |
--snapshot.dumpBinaries | | boolean | false | whether to dump binary files while dumping business data |
--snapshot.dataRootTag | | string | 10000000000000000000 | what root tag to use to dump business data |
--snapshot.userDataDir | SNAPSHOT_USER_DATA_FOLDER | string | users | snapshot directory sub-folder to dump users |
--snapshot.businessDataDir | SNAPSHOT_BUSINESS_DATA_FOLDER | string | data | snapshot directory sub-folder to dump business data |
--snapshot.repositoryDir | | string | /tmp/repositories | define the directory where the snapshooter can clone repositories if the config contains a git configuration |
--snapshot.commitMessage | SNAPSHOT_COMMIT_MESSAGE | string | Snapshot at {date} in {folder} | define a commit message for snapshot |
--snapshot.dir | | string | ./snapshot | relative (to the project's root) path to the directory containing the project's snapshots |
--versionFileName | | string | version.json | Name of the file containing the project's Draw/Framework versions |
--runningInDocker | | boolean | false | global option to set whether the Olympe Toolkit is running inside a docker image. Primarily used when calling the install command on a local environment to determine whether to copy the patches inside the docker container or not. |
--jsonConfig | | json | | set the commands configuration via a JSON config |
--downloadFolder | | string | /tmp/toolkit | path to the temporary download folder used during the update command. |
--user.tag | DRAW_USERTAG | string | 014831d95fd7d12b8568 | tag of the user to update (command updateUser ) |
--user.name | DRAW_USERNAME | string | admin | new username to set for the specified user.tag (command updateUser ) |
--user.password | DRAW_PASSWORD | string | admin | new password to set for the specified user.tag (command updateUser ) |
--import.debug | LOAD_PATCHES_DEBUG | boolean | false | whether the orchestrator will import patches in debug mode |
--import.ignoreRelationFail | IGNORE_RELATION_FAIL | boolean | false | when importing patches, ignore relations that cannot be created because of the nodes missing |
--import.backupUsers | BACKUP_USERS | boolean | true | whether to backup users before resetting the database when using install command |
--import.backupBusinessData | BACKUP_BUSINESS_DATA | boolean | true | whether to backup business data before resetting the database when using install command |
--import.timeout | RESET_DB_TIMEOUT | number | 3600000 | the timeout for importing all patches when using install command, in ms. |
--orchestrator.host | ORCHESTRATOR_HOST | string | localhost | orchestrator http host |
--orchestrator.port | ORCHESTRATOR_SERVICE_PORT | number | 8080 | orchestrator http port |
--orchestrator.userName | DRAW_USERNAME | string | admin | user for Orchestrator connection |
--orchestrator.password | DRAW_PASSWORD | string | admin | user password for Orchestrator connection |
--orchestrator.importOrderFile | | string | import_order.json | Name of the file that specifies the import order of modules |
--orchestrator.containerUser | | string | vertx | user which is used to copy patches inside of the orchestrator container |
--orchestrator.codeAsDataDir | | string | /patches | the path to copy the code as data folder in the orchestrator container. Used exclusively when the toolkit is not running in a container. |
--bus.protocol | | string | amqp | protocol used for bus communication |
--bus.hostname | RABBITMQ_HOST | string | localhost | host for bus communication |
--bus.port | RABBITMQ_PORT | number | 5672 | port for bus communication |
--bus.username | RABBITMQ_ORCHESTRATOR_USER | string | guest | username used to connect to the bus |
--bus.password | RABBITMQ_ORCHESTRATOR_PASSWORD | string | guest | user password used to connect to the bus |
--bus.vhost | RABBITMQ_ORCHESTRATOR_VHOST | string | / | virtual host used for any communication sent on the bus |
--bus.commandsQueue | RABBITMQ_ORCHESTRATOR_COMMAND_QUEUE | string | COMMAND_QUEUE | rabbitmq queue used for commands sent to the orchestrator |
--bus.commandsExchange | RABBITMQ_ORCHESTRATOR_COMMAND_EXCHANGE | string | "" | rabbitmq exchange used for commands sent to the orchestrator |
--bus.deploymentQueue | RABBITMQ_ORCHESTRATOR_DEPLOYMENT_QUEUE | string | SERVICE_DEPLOYMENT | queue used to send Orchestrator deployment commands |
--bus.deploymentExchange | RABBITMQ_ORCHESTRATOR_DEPLOYMENT_EXCHANGE | string | ORCHESTRATOR_BROADCAST_EXCHANGE | exchange for Orchestrator deployment commands |
--bus.commandTimeout | RABBITMQ_ORCHESTRATOR_COMMAND_TIMEOUT | number | 600000 | timeout applied for standard commands sent to the orchestrator (in ms) |
--update.useChangeLog | UPDATE_USE_CHANGELOG | boolean | true | whether to use a changelog file to accelerate the database update process (can be negated using --no-update.useChangeLog ) |
--update.changeLogName | | string | change_log.txt | name of the file giving the patches change log |
--update.excludeTags | UPDATE_EXCLUDED_TAGS | string | [] | list of tags of projects/folders to exclude from the check of the update command. In a development environment, the appropriate value is the list of tags of the projects that are snapshotted in that environment. |
--update.includeTags | UPDATE_INCLUDED_TAGS | string | [] | list of tags of projects/folders that should be included in the check of the update command, even if one of their ancestors in the Draw hierarchy has been excluded using the --update.excludeTags option. |
--update.noDowntime | UPDATE_NO_DOWNTIME | boolean | false | if true, the update will be done without downtime. This should be activated only for production environments. |