Migration¶
12.9.x to 12.10.0¶
Thanks to the upgrade from Quartz Scheduler to JobRunr, the PostgreSQL database has been dropped (it's not needed anymore). JobRunr stores scheduled tasks in Elasticsearch, in indices prefixed by jobrunr_.
This migration is automatically handled by the server after startup.
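If you want to double-check the scheduler data after the upgrade, the JobRunr indices can be listed from the Kibana Console (a minimal check, assuming the default jobrunr_ prefix):
GET _cat/indices/jobrunr_*?v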
11.x.x to 12.0.x¶
A database migration is performed on the octoperf_users indice. This migration is automatically handled by the server after startup. Starting from OctoPerf 12.0.0, a PostgreSQL database is required (along with Elasticsearch) by the test scheduler.
Various settings (defined in application.yml) have been updated / added:
- Database settings:
elasticsearch:
  indices:
    prefix: octoperf_
    include_type_name: true # not used anymore
include_type_name can be safely removed if present within your configuration.
- Default Test Scheduler database settings:
spring:
  datasource:
    url: "jdbc:postgresql://postgres/postgres"
    username: "postgres"
    password: "postgres"
  quartz:
    job-store-type: jdbc
    jdbc:
      initialize-schema: always
      schema: "classpath:tables_@@platform@@.sql"
    properties:
      org.quartz.scheduler:
        instanceId: AUTO
        instanceIdGenerator.class: "org.quartz.simpl.HostnameInstanceIdGenerator"
      org.quartz.jobStore:
        isClustered: true
        driverDelegateClass: "org.quartz.impl.jdbcjobstore.PostgreSQLDelegate"
By default, the test scheduler is configured to use a PostgreSQL database with hostname postgres, database postgres, username postgres and password postgres. The database schema is initialized automatically when starting the OctoPerf On Premise Infra server.
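If your deployment is managed with docker-compose, the PostgreSQL database can be added as an extra service. The snippet below is only a hedged sketch: the service name postgres must match the hostname used in the JDBC URL above, while the image tag and volume path are assumptions rather than values mandated by OctoPerf.
services:
  postgres:
    image: postgres:11            # assumed image tag, pick the version you want to run
    restart: always
    environment:
      POSTGRES_USER: postgres     # matches spring.datasource.username
      POSTGRES_PASSWORD: postgres # matches spring.datasource.password
      POSTGRES_DB: postgres       # matches the database name in the JDBC URL
    volumes:
      - ./postgres-data:/var/lib/postgresql/data # hypothetical host path for persistence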
11.1.0 to 11.2.0¶
A database migration is performed: the octoperf_benchreport and octoperf_benchreporttemplate indices are upgraded. MonitoringMetric, MonitorMetric and ApdexMetric are changed to HitMetric.
This migration is automatically handled by the server after startup.
11.0.0 to 11.1.0¶
A database migration is performed:
- octoperf_dockerprovider: the workspaceId field is removed from each provider,
- octoperf_dockerprovidertoworkspace: new indice to link providers to workspaces.
This migration is automatically handled by the server after startup.
10.6.x to 11.x.x¶
OctoPerf On Premise Infra 11.x.x uses Elasticsearch 7.x.x. Before being able to upgrade, OctoPerf EE must first be upgraded to 10.6.x, which uses Elasticsearch 6.8.x. Only indices created in this version can be upgraded to operate in Elasticsearch 7.x.x.
That's why the upgrade to OctoPerf EE 11.x.x consists of:
- First, if you run any OctoPerf EE 10.x.x lower than 10.6.x, upgrade to the latest OctoPerf EE 10.6.x, which uses Elasticsearch 6.8.x,
- Create an alias to point to your indices and configure OctoPerf EE to use it,
- Reindex all indices using Kibana,
- Delete the old indices.
Each step of the process is detailed below.
Warning
Backup all your data and/or make a snapshot of your database before and after every step involving the database. Should anything go wrong during the migration, the database can then be safely restored from the backup.
Prerequisites¶
Before proceeding to upgrade OctoPerf EE, make sure:
- To know how to back up and restore Elasticsearch data: either using snapshots, or by stopping the database and copying the entire elasticsearch data directory (a snapshot sketch follows this list),
- You have enough disk space: as the reindexing process copies all the data, there must be at least 50% of free disk space available on the disk where Elasticsearch data is stored. 70% or more free disk space is recommended for extra safety.
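For reference, here is a minimal snapshot sketch using the standard Elasticsearch snapshot API. It assumes a filesystem repository whose location (/backups here, a hypothetical path) is already whitelisted via path.repo in elasticsearch.yml; adapt it to your own backup strategy:
# Check the available disk space per node
GET _cat/allocation?v

# Register a filesystem snapshot repository (its location must be listed in path.repo)
PUT _snapshot/pre_upgrade_backup
{
  "type": "fs",
  "settings": {
    "location": "/backups"
  }
}

# Snapshot all indices and wait for completion
PUT _snapshot/pre_upgrade_backup/before_reindex?wait_for_completion=true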
10.x.x To 10.6.x¶
This upgrade is mandatory prior to upgrading to OctoPerf 11.x.x: the database must be reindexed entirely with Elasticsearch 6.8.x.
If your Elasticsearch database is deployed as a cluster on multiple instances, make sure to follow the Elasticsearch guidelines for upgrading between minor versions. A rolling upgrade should be possible.
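After the upgrade, you can confirm that every node reports the expected Elasticsearch version and that the cluster is healthy, using standard Elasticsearch APIs (shown here with curl against localhost:9200, adjust the host to your setup):
curl localhost:9200                                  # the "number" field in the "version" block should read 6.8.x
curl "localhost:9200/_cat/nodes?v&h=name,version"    # one line per node with its version
curl "localhost:9200/_cluster/health?pretty"         # status should be green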
Other components can be upgraded without any prior preparation.
Indices Reindexation¶
Create Alias¶
Indice aliases are pointers to a real indice. The pointer can be atomically switched from one indice to another while the database is operating.
OctoPerf 10.6.x has the following indices (each storing a specific type of data):
octoperf_apm
octoperf_auditlog
octoperf_benchloadgenerator
octoperf_benchreport
octoperf_benchreporttemplate
octoperf_benchresult
octoperf_container
octoperf_correlationframework
octoperf_correlationrule
octoperf_dockerbatch
octoperf_dockercloudinstance
octoperf_dockerlog
octoperf_dockerprovider
octoperf_dockerproviderconfig
octoperf_error
octoperf_hit
octoperf_http
octoperf_httprequest
octoperf_httpresponse
octoperf_httpserver
octoperf_monitor
octoperf_monitorconnection
octoperf_numbercountervalue
octoperf_project
octoperf_scenario
octoperf_slaprofile
octoperf_softwareversion
octoperf_staticip
octoperf_textualcountervalue
octoperf_thresholdalarm
octoperf_user
octoperf_variablewrapper
octoperf_virtualuser
octoperf_webhook
octoperf_workspace
octoperf_workspacemember
The following steps must be repeated for each indice. The steps below use the octoperf_apm indice as an example. All operations on the database are performed using the Kibana Console:
- Create an alias named alias_X pointing to octoperf_X (where X is the indice name, like apm for example):
POST /_aliases
{
  "actions" : [
    { "add" : { "index" : "octoperf_apm", "alias" : "alias_apm" } }
  ]
}
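If you prefer, several aliases can also be created in a single _aliases call by listing one add action per indice (a shortened sketch, only the first few indices are shown):
POST /_aliases
{
  "actions" : [
    { "add" : { "index" : "octoperf_apm", "alias" : "alias_apm" } },
    { "add" : { "index" : "octoperf_auditlog", "alias" : "alias_auditlog" } },
    { "add" : { "index" : "octoperf_benchloadgenerator", "alias" : "alias_benchloadgenerator" } }
  ]
}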
- Once done for each indice, double-check the alias has been defined properly. You should have the same result as:
GET /_cat/aliases?v&s=alias
alias index filter routing.index routing.search
alias_apm octoperf_apm - - -
alias_auditlog octoperf_auditlog - - -
alias_benchloadgenerator octoperf_benchloadgenerator - - -
alias_benchreport octoperf_benchreport - - -
alias_benchreporttemplate octoperf_benchreporttemplate - - -
alias_benchresult octoperf_benchresult - - -
alias_container octoperf_container - - -
alias_correlationframework octoperf_correlationframework - - -
alias_correlationrule octoperf_correlationrule - - -
alias_dockerbatch octoperf_dockerbatch - - -
alias_dockercloudinstance octoperf_dockercloudinstance - - -
alias_dockerlog octoperf_dockerlog - - -
alias_dockerprovider octoperf_dockerprovider - - -
alias_dockerproviderconfig octoperf_dockerproviderconfig - - -
alias_error octoperf_error - - -
alias_hit octoperf_hit - - -
alias_http octoperf_http - - -
alias_httprequest octoperf_httprequest - - -
alias_httpresponse octoperf_httpresponse - - -
alias_httpserver octoperf_httpserver - - -
alias_monitor octoperf_monitor - - -
alias_monitorconnection octoperf_monitorconnection - - -
alias_numbercountervalue octoperf_numbercountervalue - - -
alias_project octoperf_project - - -
alias_scenario octoperf_scenario - - -
alias_slaprofile octoperf_slaprofile - - -
alias_softwareversion octoperf_softwareversion - - -
alias_staticip octoperf_staticip - - -
alias_textualcountervalue octoperf_textualcountervalue - - -
alias_thresholdalarm octoperf_thresholdalarm - - -
alias_user octoperf_user - - -
alias_variablewrapper octoperf_variablewrapper - - -
alias_virtualuser octoperf_virtualuser - - -
alias_webhook octoperf_webhook - - -
alias_workspace octoperf_workspace - - -
alias_workspacemember octoperf_workspacemember - - -
Each alias_X alias must point to its matching octoperf_X indice. It's now time to create new indices and reindex them.
Configure OctoPerf to use Aliases¶
Once aliases for all indices are configured, you can configure OctoPerf EE to use them. In application.yml, define the elasticsearch.indices.prefix property:
elasticsearch:
  indices:
    prefix: alias_
Restart OctoPerf EE for the changes to take effect. The application should work seamlessly.
Create New Indices¶
To prepare for the future upgrade to Elasticsearch 7.x, all indices must be fully reindexed in Elasticsearch 6.8.x. Repeat the process below for each indice:
- First, let's retrieve the indice mapping and settings:
GET octoperf_apm
{
  "octoperf_apm" : {
    "aliases" : {
      "alias_apm" : { }
    },
    "mappings" : {
      "apm" : {
        "dynamic" : "false",
        "properties" : {
          "projectId" : {
            "type" : "keyword"
          }
        }
      }
    },
    "settings" : {
      "index" : {
        "number_of_shards" : "5",
        "number_of_replicas" : "1"
      }
    }
  }
}
The mapping and settings must be kept as is to create the new indice in Elasticsearch 6.8.x.
- Create a new indice named v68_X and pay attention to provide the same settings and mapping as the original octoperf_X indice:
PUT v68_apm?include_type_name=true
{
  "mappings" : {
    "apm" : {
      "dynamic" : "false",
      "properties" : {
        "projectId" : {
          "type" : "keyword"
        }
      }
    }
  },
  "settings" : {
    "index" : {
      "number_of_shards" : "5",
      "number_of_replicas" : "1"
    }
  }
}
In this example, we create v68_apm in Elasticsearch 6.8.x. octoperf_apm is going to be reindexed into v68_apm (which means copying all the documents from the first one to the second one). Repeat this operation for each indice, carefully creating each new indice with the proper mappings and settings.
If your indice mapping contains the following attribute:
"_all" : {
"enabled" : true
},
Make sure to remove the _all mapping attribute: Elasticsearch 7.x.x doesn't support it, and it has been deprecated since 6.0.0.
Reindexing Indices¶
The next step is to reindex the data into the newly created indices. It's recommended to fully stop OctoPerf EE while reindexing all the data. Otherwise, you may lose data being written to the old indice, since it is not reindexed to the new one.
Reindexing indices one by one
- In Kibana, run the following command for each indice:
POST _reindex?slices=auto&wait_for_completion=false
{
  "source": {
    "index": "octoperf_X"
  },
  "dest": {
    "index": "v68_X"
  }
}
Replace octoperf_X with the indice name and v68_X with its equivalent in the new version.
You can check the progress of the reindexing tasks by running:
GET _tasks?detailed=false&actions=*reindex
Once all reindexing tasks are completed for one indice, you can proceed with the next indice.
Info
This operation might take from several minutes to several hours per indice depending on the amount of data to reindex.
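Since the reindex request is submitted with wait_for_completion=false, Elasticsearch answers with a task id. You can also follow that single task directly; the id below is only a placeholder, replace it with the one returned by your own reindex call:
GET _tasks/NODE_ID:TASK_NUMBER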
Redirect Aliases¶
The final step is to point aliases to the new indices which now contain all the reindexed data:
POST /_aliases
{
  "actions" : [
    {
      "remove": {
        "index": "octoperf_X",
        "alias": "alias_X"
      }
    },
    {
      "add": {
        "index": "v68_X",
        "alias": "alias_X"
      }
    }
  ]
}
Repeat this operation for each indice. Restart OctoPerf EE and make sure all the data is there (user login, analysis reports, etc.). Another good way to make sure all the data has been reindexed properly is to check the indices' size:
GET _cat/indices?v&s=pri.store.size
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open v68_correlationframework gyamU2B2RkKCtZXFRfRbPg 1 2 0 0 783b 261b
...
Check docs.count by comparing the octoperf_X and v68_X indices two-by-two. The number of documents should be exactly the same.
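You can also compare document counts directly with the _count API, shown here with the apm indice as an example; both requests should return the same count value:
GET octoperf_apm/_count
GET v68_apm/_count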
You can now proceed to delete the old indices. At this point, if everything is fine, a backup and/or a database snapshot is strongly recommended: deleting indices manually is a dangerous task prone to errors.
Deleting old indices¶
Double-check that all the data has been reindexed properly for each indice to a new indice named v68_X. Double-check that aliases (starting with alias_) are pointing to the newly created v68_X indices. Make sure you have made proper backups before proceeding.
Then, for each indice starting with the octoperf_ prefix, run the Kibana command:
DELETE octoperf_X
Replace octoperf_X with the real name of the indice (example: octoperf_apm). This deletes the indice, along with all the data it contains, from the database. You can't keep the old indices around because Elasticsearch 7.x.x only supports indices created in Elasticsearch 6.8.x.
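Once the deletions are done, you can list any indices still carrying the octoperf_ prefix to confirm the cleanup (the result should be empty):
GET _cat/indices/octoperf_*?v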
Upgrade to OctoPerf EE 11.x.x¶
Make sure you have made proper backups before proceeding. You can now upgrade to OctoPerf EE 11.x.x, which uses Elasticsearch 7.x.x:
- Stop OctoPerf EE by running docker compose down from the directory where it has been launched,
- Keep a backup of your existing docker-compose.yml in case you customized it,
- Replace docker-compose.yml with the one from the latest enterprise-edition.zip,
- Apply any custom settings again to docker-compose.yml,
- Start the new OctoPerf version by running docker compose up -d.
The application should be up and running within a few minutes.
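As a consolidated illustration, the steps above could look like the shell session below. The install directory /opt/octoperf is an assumption, and the archive is expected to contain docker-compose.yml at its root; adjust both to your own installation:
cd /opt/octoperf                                     # hypothetical install directory
docker compose down                                  # stop OctoPerf EE
cp docker-compose.yml docker-compose.yml.bak         # keep a backup of your customized file
unzip -o enterprise-edition.zip docker-compose.yml   # extract the new compose file from the latest release
# re-apply your custom settings to docker-compose.yml before restarting
docker compose up -d                                 # start the new version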
9.x to 10.x¶
As containers are now executed with a non-privileged user, the ownership of the files stored in the default octoperf-data volume must be adjusted accordingly (otherwise the data is not readable / writable).
How to fix file ownership:
- Upgrade to 10.x first,
- Start the Enterprise Edition via docker-compose,
- List containers using docker ps, and get the container id of the enterprise-edition container,
- Execute a shell as root inside this container:
docker exec -it -u root CONTAINER_ID /bin/bash
- Then, chown all octoperf-data volume files to the octoperf user:
chown -R octoperf:octoperf /data
The command ls -al should then list all files with the octoperf user and octoperf group.
root@CONTAINER_ID:~# ls -al
total 104228
drwxr-xr-x 1 octoperf octoperf 4096 .
drwxr-xr-x 1 root root 4096 ..
-rw------- 1 octoperf octoperf 221 .bash_history
drwxrwxr-x 2 octoperf octoperf 4096 config
drwxr-xr-x 4 octoperf octoperf 4096 data
-rw-r----- 1 octoperf octoperf 106689614 enterprise-edition.jar
-rwxr-x--- 1 octoperf octoperf 102 entrypoint.sh
drwxr-xr-x 4 octoperf octoperf 4096 license
8.x.x to 9.x.x¶
As of 9.0.0 and above, Rancher is no longer required to run OctoPerf EE. Rancher was used to manage load generators. Now, load generators connect to OctoPerf EE server directly.
Hosts registered on Rancher must be registered again on OctoPerf's On-Premise Infra using the command-line provided in Private Hosts > On-Premise section:
- Upgrade to OctoPerf EE 9.x.x,
- Deactivate and remove hosts on Rancher UI,
- Stop and remove Rancher Agent containers on each host,
- Login on OctoPerf EE,
- Go to Accounts, then select On-Premise,
- Register again each host using the command-line provided.
8.3.x is the latest version which can be installed using our Rancher Catalog. 9.0.0 and above must be set up using docker-compose.
7.5.x to 8.x.x¶
Prior to upgrading from OctoPerf Enterprise-Edition 7.x.x to 8.x.x, a migration script must be run. The migration script can be DOWNLOADED HERE.
What does this script do? It reindexes the 7.x.x Elasticsearch indexes (analysis, design and monitoring) into smaller indexes compatible with Elasticsearch 6.x.x.
Indices created by On Premise Infra up to 7.x.x contain multiple types per indice. As of Elasticsearch 6, one index can only contain a single type of json documents. OctoPerf EE 8.0.x and above is based on Elasticsearch 6+. For this reason, a migration is required.
Which versions are supported?¶
The upgrade supports migrating Elasticsearch indices created by version 7.5.x. It upgrades the indexes to OctoPerf EE 8.0.x. Make sure to upgrade first to OctoPerf EE 7.5.x, before manually upgrading the database.
Elasticsearch Migration¶
How to migrate OctoPerf On Premise Infra from 7.x.x to 8.0.x
- Download the migration script on the host running OctoPerf EE,
- In Rancher UI, Make sure OctoPerf EE 7.5.x Elasticsearch service is running,
- In Rancher UI, stop all other OctoPerf EE services like frontend and backend to prevent any user interaction while upgrading the database,
- In a Shell Terminal, run the bash migration script on the same machine: ./v800/_v800.sh. This operation may take several minutes / hours depending on the amount of data to reindex,
- The script should have created many indices with names starting with octoperf_,
- In Rancher UI, upgrade On Premise Infra to 8.0.x. The OctoPerf EE server will apply additional data upgrades once started,
- Login on OctoPerf EE and make sure all the previous projects, results are there and readable.
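Before deleting anything, you can also double-check from the command line that the new octoperf_ indices exist and contain documents, using the standard Elasticsearch cat API (assuming Elasticsearch listens on localhost:9200):
curl "localhost:9200/_cat/indices/octoperf_*?v"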
Warning
Make sure all your data is properly accessible through OctoPerf EE Web UI before deleting the old indices. Failing to do so may result in data loss.
Once the migration has completed successfully, delete the old indices:
- Analysis: curl -XDELETE localhost:9200/analysis,
- Design: curl -XDELETE localhost:9200/design,
- Monitoring: curl -XDELETE localhost:9200/monitoring.
The 3 commands above delete the legacy indices.