What's New in RapidMiner AI Hub 9.10.4?
Released: February 7th, 2022
Upgrading to RapidMiner AI Hub 9.10.4 includes database and RapidMiner AI Hub home directory migration steps. Ensure that you do not abort AI Hub startup while migration is in progress!
Before upgrading, it is crucial to follow these steps:
- In your running AI Hub instance, temporarily pause all Schedules
- Verify that all jobs are in a final state like finished, error, stopped or timed out
  - Consider force stopping all currently running or pending jobs, depending on your needs, or
  - Wait for all jobs to finish executing
  - Cross-check the execution state on the Executions web page of AI Hub
- Shut down all Job Agents attached to this AI Hub instance
- Shut down AI Hub
- Create a backup (instructions depend on your setup)
- Upgrade AI Hub to 9.10.4 (instructions depend on your setup)
- Start AI Hub and wait for all migrations to finish
  - Observe startup and migration progress by tracking the logs located inside the RapidMiner AI Hub home directory under the `$rmHomeDir/logs/` folder, e.g. `migration.log`, `migration-eb.log`
- Once everything has succeeded, start your Job Agents and wait for them to show up in the Queues page of AI Hub
- Resume all Schedules which you’ve temporarily paused before
If you accidentally upgraded even though not all executions had finished beforehand (see the instructions above), non-final executions such as pending or running ones might show up in the Job Archive view of the Executions page. Please head over to the Troubleshooting section, which outlines more details regarding “Job Archive contains pending or running jobs”.
Enhancements
- Added Job Archive and improved periodic Job Cleanup mechanism to archive and clean archived jobs more efficiently
  - Added database migration for moving existing job related tables to archive tables prefixed with `a_`:
    - `jobservice_job_context_out` renamed to `a_jobservice_job_context_out`
    - `jobservice_job_context_in` renamed to `a_jobservice_job_context_in`
    - `jobservice_job_context_macro` renamed to `a_jobservice_job_context_macro`
    - `jobservice_job_context` renamed to `a_jobservice_job_context`
    - `jobservice_job_error` renamed to `a_jobservice_job_error`
    - `jobservice_job_log` renamed to `a_jobservice_job_log`
    - `jobservice_operator_progress` renamed to `a_jobservice_operator_progress`
    - `jobservice_job` renamed to `a_jobservice_job`
  - (Re-)Creation of the following tables is handled automatically after the renaming migration succeeded:
    - `jobservice_job_context_out`
    - `jobservice_job_context_in`
    - `jobservice_job_context_macro`
    - `jobservice_job_context`
    - `jobservice_job_error`
    - `jobservice_job_log`
    - `jobservice_operator_progress`
    - `jobservice_job`
  - Unique and foreign constraints now have proper identifiers
  - Added a migration step for Job Cleanup (see the example configuration below)
    - Changed how cleanup is enabled to `jobservice.scheduled.archive.jobCleanup.enabled = true`; before, the cleanup was enabled by setting the `jobservice.scheduled.jobCleanup.maxAge` property
    - Changed existing property `jobservice.scheduled.jobCleanup.cronExpression` to `jobservice.scheduled.archive.jobCleanup.jobCronExpression`
    - Added new property `jobservice.scheduled.archive.jobCleanup.jobContextCronExpression` to clean up the job context of deleted jobs separately
    - Added new property `jobservice.scheduled.archive.jobCleanup.jobBatchSize` to set the number of jobs to be deleted at once
    - Added new property `jobservice.scheduled.archive.jobCleanup.jobContextBatchSize` to set the number of job contexts to be deleted at once
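    A minimal sketch of how the renamed and newly added cleanup properties might look together in the AI Hub configuration; the property names and the enablement default follow the list above, while the cron expressions and batch sizes are placeholder values to adapt to your environment:

    ```properties
    # Enable the periodic Job Cleanup (replaces enabling it via jobservice.scheduled.jobCleanup.maxAge)
    jobservice.scheduled.archive.jobCleanup.enabled = true

    # Placeholder schedules and batch sizes - adjust to your environment
    jobservice.scheduled.archive.jobCleanup.jobCronExpression = 0 0 3 * * *
    jobservice.scheduled.archive.jobCleanup.jobContextCronExpression = 0 30 3 * * *
    jobservice.scheduled.archive.jobCleanup.jobBatchSize = 100
    jobservice.scheduled.archive.jobCleanup.jobContextBatchSize = 100
    ```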
- Added mechanism to log `System.out` and `System.err` inside the Job Container (see the example below)
  - Property is `jobcontainer.systemOutLog.enabled` (`false` by default)
  - Enable by setting property `jobagent.container.jvmCustomOptions=-Djobcontainer.systemOutLog.enabled=true` inside the `agent.properties` file
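  For example, the flag is passed to the Job Container JVM through the Job Agent's `agent.properties`; this sketch shows only that single option (if you already set other custom JVM options in this property, you would presumably append the flag to the existing value):

  ```properties
  # agent.properties
  # Forward the system property to the Job Container JVM so that System.out / System.err are logged
  # (jobcontainer.systemOutLog.enabled is false by default)
  jobagent.container.jvmCustomOptions=-Djobcontainer.systemOutLog.enabled=true
  ```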
- Added more logging when Job Agents retrieve a job deletion message via broker
- Added size and checksum mismatch checks for LFS object uploading, which can be used in addition to the LFS `/verify` POST endpoint (see the example configuration below)
  - Change `repositories.lfsEnableUploadSizeCheck` to enable/disable the size check for the LFS PUT endpoint (defaults to `true`)
  - Change `repositories.lfsEnableUploadChecksumCheck` to enable/disable the checksum check for the LFS PUT endpoint (defaults to `true`)
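  A configuration sketch for the two new checks; the property names and `true` defaults come from the notes above, while the exact configuration file they belong to depends on your deployment:

  ```properties
  # Enable the size check for the LFS PUT endpoint (default: true)
  repositories.lfsEnableUploadSizeCheck = true
  # Enable the checksum check for the LFS PUT endpoint (default: true)
  repositories.lfsEnableUploadChecksumCheck = true
  ```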
- Added mechanism to remove unsuccessful LFS uploads directly after failure (`repositories.lfsRemoveUnsuccessfulUploads`, defaults to `true`)
- Added a migration step which checks LFS objects consistency
  - Migration step is warning only and will not prevent startup of AI Hub
  - Overall migration result is written into the log file `migration-eb.log`
  - In addition, there are per-Project `$projectId-consistency-result.json` files inside the `$rmHomeDir/data/repositories/git_lfs_server` folder with more information
- Increased robustness of Job Agent execution status propagation to AI Hub
  - Each job has a `result.json` file inside the respective `$jaHome/data/jobs/jobId` folder
  - The file is used to determine the job state if the executing Job Container has been flagged as unreachable due to high load or other system/environment conditions
  - If creating the `result.json` file fails, a `result.error` file is created inside the directory of that job
- Introduced cleanup mechanism for completed and propagated (sent via broker to AI Hub) job state events inside the Job Agent (`false` by default); see the example below
  - Enable via `jobagent.jobStateEventsCleanup.enabled = true` inside the `agent.properties` file
  - Adjust the interval in which state events are deleted via `jobagent.jobStateEventsCleanup.interval = 120000` inside the `agent.properties` file, defaults to `60000` milliseconds
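  As a sketch, both settings would sit in the Job Agent's `agent.properties`; the interval value below is merely illustrative (the default is `60000` milliseconds):

  ```properties
  # agent.properties
  # Clean up job state events that have already been propagated to AI Hub (disabled by default)
  jobagent.jobStateEventsCleanup.enabled = true
  # Run the cleanup every 120000 ms instead of the default 60000 ms
  jobagent.jobStateEventsCleanup.interval = 120000
  ```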
- Global theme color changes
- Added a safety net for CVE-2021-44228
- Bump `radoop-proxy` to `1.2.3`
- Bump integrated JDBC driver version `postgresql` to `42.3.2`
