Archiving Workflow Execution History
To relieve the continuous storage growth of the mdworkflow database in MongoDB, you can archive workflow execution history to a separate MongoDB instance. Archived execution records can still be viewed on the page.
Configuration Steps for Archiving
- Deploy a MongoDB instance in advance for storing the archived data.
  - We provide a MongoDB deployment document (single node) for reference.
- Pull the image (offline package download):

  ```bash
  docker pull registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-archivetools:1.0.3
  ```
- Create a `config.json` configuration file with the following example content:

  ```json
  [
    {
      "id": "1",
      "text": "Description",
      "start": "2022-12-31 16:00:00",
      "end": "2023-12-31 16:00:00",
      "src": "mongodb://root:password@192.168.1.20:27017/mdworkflow?authSource=admin",
      "archive": "mongodb://root:password@192.168.1.30:27017/mdworkflow_archive_2023?authSource=admin",
      "table": "wf_instance",
      "delete": true,
      "batchSize": 500,
      "retentionDays": 0
    },
    {
      "id": "2",
      "text": "Description",
      "start": "2022-12-31 16:00:00",
      "end": "2023-12-31 16:00:00",
      "src": "mongodb://root:password@192.168.1.20:27017/mdworkflow?authSource=admin",
      "archive": "mongodb://root:password@192.168.1.30:27017/mdworkflow_archive_2023?authSource=admin",
      "table": "wf_subInstanceActivity",
      "delete": true,
      "batchSize": 500,
      "retentionDays": 0
    },
    {
      "id": "3",
      "text": "Description",
      "start": "2022-12-31 16:00:00",
      "end": "2023-12-31 16:00:00",
      "src": "mongodb://root:password@192.168.1.20:27017/mdworkflow?authSource=admin",
      "archive": "mongodb://root:password@192.168.1.30:27017/mdworkflow_archive_2023?authSource=admin",
      "table": "wf_subInstanceCallback",
      "delete": true,
      "batchSize": 500,
      "retentionDays": 0
    }
  ]
  ```

  - Adjust the configuration above to match your own environment.
  Parameter Description:
  - id: task identifier ID
  - text: description
  - start: start time of the data to archive, in the UTC time zone (ignored when retentionDays is greater than 0)
  - end: end time of the data to archive, in the UTC time zone (ignored when retentionDays is greater than 0)
  - src: connection string of the source database
  - archive: connection string of the target database (if empty, nothing is archived and data is only deleted according to the configured rules)
  - table: data table (collection) to archive
  - delete: fixed to true; after the archiving task completes and the record count is verified, the archived data is cleaned up from the source database
  - batchSize: number of records archived and deleted per batch
  - retentionDays: defaults to 0. If greater than 0, scheduled deletion mode is enabled: data older than that many days is deleted, the dates in start and end are ignored, and the task runs every 24 hours by default (a scheduled-deletion example is sketched after these steps)
- Start the archiving service by executing the following command in the directory where the `config.json` file is located:

  ```bash
  docker run -d -it -v $(pwd)/config.json:/usr/local/MDArchiveTools/config.json -v /usr/share/zoneinfo/Etc/GMT-8:/etc/localtime registry.cn-hangzhou.aliyuncs.com/mdpublic/mingdaoyun-archivetools:1.0.3
  ```
Other:
- Resource Usage: while the program runs, it puts some load on the source database, the target database, and the machine it runs on. It is recommended to run it during off-peak business hours.
- Viewing Logs:
  - Running in the background (default): use `docker ps -a` to find the container ID, then execute `docker logs <container ID>` to view the logs.
  - Running in the foreground: remove the `-d` parameter and the logs are output to the terminal in real time, which makes it easy to track progress.
- In the example `config.json`, the new database is named in the format `source database name_archive_date`. Change the target database name in `archive` before each run. After an archive completes, the program first counts the records in the target table, and the source data is deleted only when the counts match; if the archive database name is not changed for a second run, the target table may end up holding more records than were archived in that run, which prevents the source data from being deleted (a manual count check is sketched after this list).
- Reclaim Disk Space: after archiving completes, the corresponding data in the source database is deleted. The disk space occupied by the deleted data is not released immediately, but it is typically reused by the same table (a generic MongoDB option for returning the space to the operating system is sketched after this list).
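A rough, optional way to inspect the archive target mentioned above is to count its documents with mongosh. This is plain MongoDB tooling rather than part of the archive tool; the connection string and collection name are the examples used earlier in this document:

```bash
# Count the documents currently in the archived collection (example connection string from above).
mongosh "mongodb://root:password@192.168.1.30:27017/mdworkflow_archive_2023?authSource=admin" \
  --quiet --eval 'db.wf_instance.countDocuments()'
# If this archive database already holds records from an earlier run, the count also includes
# those older records, which is why the tool's verification can fail and skip the source cleanup.
```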
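If the freed space must actually be returned to the operating system instead of being reused, MongoDB's standard compact command can be run against the affected collections during a maintenance window. This is ordinary MongoDB administration, not a feature of the archive tool; whether space is released depends on the storage engine and server version, and the user needs the corresponding privileges:

```bash
# Standard MongoDB admin command; run during a maintenance window against the source database.
mongosh "mongodb://root:password@192.168.1.20:27017/mdworkflow?authSource=admin" \
  --eval 'db.runCommand({ compact: "wf_instance" })'
```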
Configure Visualization of Archived Data
- Create the `application-www-ext.properties` configuration file, for example at `/data/mingdao/script/volume/workflow/application-www-ext.properties`:

  ```properties
  spring.data.mongodb.archive.group[0].id=0
  spring.data.mongodb.archive.group[0].text=\u5f52\u6863-2023
  spring.data.mongodb.archive.group[0].uri=mongodb://root:password@192.168.1.30:27017/mdworkflow_archive_2023?authSource=admin
  spring.data.mongodb.archive.group[0].start=2023-01-01
  spring.data.mongodb.archive.group[0].end=2023-12-31
  ```

  Parameter Description:
  - group[0]: group index; defaults to 0 and is incremented for each additional archive (a multi-archive example is sketched at the end of this section)
  - id: defaults to 0 and is incremented for each additional archive
  - text: name displayed on the page; Chinese text must be Unicode-escaped
  - uri: connection string of the archived database
  - start: start date of the archived data
  - end: end date of the archived data
- Mount the configuration file
  - Standalone Mode: add the following line to the `volumes` section of the corresponding microservice application in `docker-compose.yaml`:

    ```yaml
    - ./volume/workflow/application-www-ext.properties:/usr/local/MDPrivateDeployment/workflow/application-www-ext.properties
    ```

  - Cluster Mode: refer to Mount Configuration File to mount the created configuration file into the microservice container at `/usr/local/MDPrivateDeployment/workflow/application-www-ext.properties`.
- Restart the microservice
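A sketch of what the configuration might look like with two archive databases, following the group index and id increment rule described above. The 2022 database name and its date range are assumptions for illustration; the file path matches the example location given earlier:

```bash
# Illustrative only: expose two archive databases on the page.
# The 2022 entries are assumed values; \u5f52\u6863 is the Unicode escape for the Chinese word for "archive".
cat > /data/mingdao/script/volume/workflow/application-www-ext.properties <<'EOF'
spring.data.mongodb.archive.group[0].id=0
spring.data.mongodb.archive.group[0].text=\u5f52\u6863-2022
spring.data.mongodb.archive.group[0].uri=mongodb://root:password@192.168.1.30:27017/mdworkflow_archive_2022?authSource=admin
spring.data.mongodb.archive.group[0].start=2022-01-01
spring.data.mongodb.archive.group[0].end=2022-12-31
spring.data.mongodb.archive.group[1].id=1
spring.data.mongodb.archive.group[1].text=\u5f52\u6863-2023
spring.data.mongodb.archive.group[1].uri=mongodb://root:password@192.168.1.30:27017/mdworkflow_archive_2023?authSource=admin
spring.data.mongodb.archive.group[1].start=2023-01-01
spring.data.mongodb.archive.group[1].end=2023-12-31
EOF
```

After editing the file, mount it and restart the microservice as described in the steps above.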