Abstract
This new command, documented here, was introduced with IBM Informix Dynamic Server (IDS) version 14.10.xC2 in September 2019. In theory, it reduces the effort involved in creating a new active-active replica instance to a single command. It uses Enterprise Replication (ER), which supports participating servers on different Informix versions or operating systems, providing a unique route for platform migration with almost zero downtime. The complexity of achieving this with ER by hand was previously a major disincentive. This article looks at the current reality of “cdr migrate server” (CMS), showing that it can achieve its goal if some pitfalls are avoided.
CMS should not be confused with the earlier “ifxclone” command, which automates the creation of another node in a high-availability cluster and requires that node to be on the same platform and version as the source server.
Content
Links in this article not shown in full are for pages within the new IBM Documentation site:
https://www.ibm.com/docs/en/informix-servers/14.10
Included “cdr” commands are clickable documentation links and not described in detail for brevity.
See also PDF slides and YouTube link for a demo of CMS here:
https://github.com/nagaraju-inturi/informix-er2cloud
The test system used for this article is an IDS 14.10.FC5 Developer Edition Linux Docker container created with the image available from:
https://hub.docker.com/r/ibmcom/informix-developer-database
When you first run a new container with that image, a single instance “informix” is created and started. This was subsequently customised with the following dbspaces (see onspaces commands and/or sysadmin:task equivalents):
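The exact dbspace names and sizes used are not reproduced here, but for illustration, a dbspace can be added in either of the following ways (the dbspace name, path, and size below are placeholders, not the values used in the test):

# Illustrative only: create the cooked chunk file, then the dbspace
touch /opt/ibm/data/spaces/datadbs1.000
chmod 660 /opt/ibm/data/spaces/datadbs1.000
onspaces -c -d datadbs1 -p /opt/ibm/data/spaces/datadbs1.000 -o 0 -s 1024000

-- or a similar call via the sysadmin database:
EXECUTE FUNCTION sysadmin:task('create dbspace', 'datadbs1', '/opt/ibm/data/spaces/datadbs1.000', '1000 MB', '0');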
Note that the IDS version is in fact special build 14.10.FC5X4DE: a patch was applied so that, if “ONSTAT_LEGACY=ON” is set in the environment, the timestamp inserted after the “onstat” heading from 14.10.xC5 onwards is suppressed. This fix will be included in 14.10.xC6. See defect details here:
https://www.ibm.com/support/pages/apar/IT35970
The standard “stores_demo” database was created in the “informix” instance as follows (with unbuffered logging, specifying the dbspace, without Time Series examples):
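Those options map onto a “dbaccessdemo” invocation along these lines (a sketch rather than the exact command used; the dbspace name is a placeholder):

# Creates stores_demo with logging, in the named dbspace, without Time Series tables
dbaccessdemo -log -dbspace datadbs1 -nots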
A second instance “informix2” was needed to demonstrate CMS, which assumes the source and target database names will be the same, so they cannot be in the same instance. If you are evaluating CMS, you probably already have enough experience to create a new similarly configured instance. However, the outline steps were:
- Copy “$ONCONFIG” to “$ONCONFIG.2” and edit ROOTPATH, MSGPATH, SERVERNUM, DBSERVERNAME, and DBSERVERALIASES.
- Add new entries in the “sqlhosts” file similar to those already there but with different names and port numbers.
- Create empty chunks in new directory “/opt/ibm/data/spaces.2” as user “informix” with same names and permissions (“chmod 660”).
- Create shell aliases to switch between the settings of INFORMIXSERVER and ONCONFIG for the two instances.
- Backup “informix” using “ontape -s -L 0 -d” and restore “informix2” as shown below.
Restore commands on the “informix2” instance:
File “chunk_map.0-2” contents:
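In outline, a renamed-chunk restore of this kind follows the documented “ontape -r -rename -f” form. The following is a sketch rather than the exact commands used, and the paths are placeholders:

# Run with the environment switched to the "informix2" instance
ontape -r -rename -f chunk_map.0-2   # full restore, renaming chunks as per the map file
onmode -m                            # bring the restored instance online

# Each line of chunk_map.0-2 maps an old chunk to its new location, in the form:
#   old_chunk_path old_offset new_chunk_path new_offset
# for example (placeholder paths):
#   /opt/ibm/data/spaces/rootdbs.000 0 /opt/ibm/data/spaces.2/rootdbs.000 0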
The stated prerequisites for CMS are:
- To auto create required storage spaces, storage pool is a requirement at source and target servers.
- SQLHOSTS files at both source and target server must be already configured with ER group information.
- Trusted host configuration must be already established between source and target servers.
- Source server must be on version 11.70.xC1 or higher.
In a real scenario, dbspaces on the target server (which must have the same names as in the original instance) are likely to be more precisely tailored to your preferences and infrastructure if created manually. This article has already assumed that approach, so prerequisite 1 above is not relevant here. You will most likely want full control over the sizes and names of chunks, particularly when using raw devices rather than file system chunks (“cooked” files). If you are not already using ER, you will also need to create the two new dbspaces that it needs on both source and target servers, and update configuration parameters accordingly. The following was run on both test instances:
eval cd $(dirname $(onstat -c ROOTPATH))
(umask 006 ; touch cdrdbs.000 cdrsbs.000)
onspaces -c -d cdrdbs -p $(pwd)/cdrdbs.000 -o 0 -s 204800
onspaces -c -S cdrsbs -p $(pwd)/cdrsbs.000 -o 0 -s 512000
cdr change config "CDR_DBSPACE cdrdbs"
cdr change config "CDR_QDATA_SBSPACE cdrsbs"
Dbspaces of the above sizes are created and the parameters set automatically by CMS if not already defined, but the chunk file names are not pretty and will not necessarily be in the intended location. CMS also runs the following on both servers if not already set:
cdr change config -c group2 "CDR_QUEUEMEM 262144"
cdr change config -c group1 "CDR_SUPPRESS_ATSRISWARN 1,2,3"
The second prerequisite is to insert additional ER group information into the “sqlhosts” file, as described here and shown below:
group1     group     -      -     i=1
informix   onsoctcp  *ifx   9088  g=group1
group2     group     -      -     i=2
informix2  onsoctcp  *ifx   9090  g=group2
Each asterisk instructs IDS to listen on all IP addresses available when started, which is not an ER requirement, but is often useful.
The third prerequisite is that the servers running the source and target instances must allow trusted connections between each other. This is most often done by adding the host names to both “/etc/hosts.equiv” files, but using configuration parameters S6_USE_REMOTE_SERVER_CFG and/or REMOTE_SERVER_CFG is more secure. See Trusted-host information. You can test afterwards on each with:
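For example, a passwordless remote connection from each server to the other instance as user “informix” confirms that trust is in place. This is a sketch rather than the exact test used:

# Should connect and return the remote DBSERVERNAME without prompting for a password
echo "select first 1 dbservername from systables" | dbaccess sysmaster@informix2 -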
The above is not an issue in our test as both instances are on the same server, so will trust each other by default.
The fourth documented prerequisite is that the source server must be on IDS 11.70.xC1 or later. Should you wish to use “cdr migrate server” when moving from a version earlier than that, the best option would be, as a first phase, to perform an in-place upgrade on the source server to the latest IDS version supported on that operating system.
A fifth, undocumented, prerequisite is that SHMTOTAL must not be set too low. The Docker test instance had this set to 796000 (KB), which resulted in the following obscure error when “cdr define server” was run via CMS, which then aborted:
The Informix message log gave more information:
After restarting Informix with SHMTOTAL changed to 2097152 (2 GB), CMS ran fine. Not much more shared memory was actually allocated once ER had been started, but the server apparently needs to know that it can allocate up to 655360 KB more memory for ER if necessary. This is unlikely to be a problem on a production system, which would usually have a much larger value set anyway.
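For reference, a minimal sketch of one way to apply such a change on a test instance is:

onmode -ky        # take the instance offline
# edit $INFORMIXDIR/etc/$ONCONFIG and set:  SHMTOTAL 2097152
oninit -v         # bring the instance back online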
Once you have completed all the above or equivalent steps, you should now be ready to test with a command such as this on the IDS 14.10 target server (“export INFORMIXSERVER=informix2” for this test):
cdr migrate server -s informix -t informix2 -p all -d stores_demo
Those option letters can be spelt out in full as “--source”, “--target”, “--phase”, and “--database”. Without “-e” or “--exec”, it generates a shell script without actually running anything. The full syntax is explained here and listed by the program as follows:
Although we are including all phases in one run for this article, some phases may be easier to run separately or in advance in real scenarios, as explained in context. Where possible, if any phase has already been run, CMS will detect that and skip those steps when “all” is specified.
If no database is specified (we have specified one here), CMS will attempt to migrate all non-system databases.
The shell script generated by the CMS command for all phases first runs any necessary “cdr change config” commands as already described. The next section defines ER in both instances for the first time if not already done:
cdr define serv -c group1 -I group1
cdr define serv -c group2 -I group2 -S group1
Next is a “dbaccess” session on the source server to ADD ERKEY to any table without a unique index. For “stores_demo”, the SQL is:
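The statements take this form, with one per table that lacks a unique index (the table name below is a placeholder, not one of the actual generated statements):

ALTER TABLE sometable ADD ERKEY;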
Not surprisingly, some form of unambiguous ID is necessary for replication of individual rows. Three hidden shadow columns are added as described here. These are not “in-place” but “slow” alters that rebuild the whole table and its indexes while the table is exclusively locked. This is the only step in the migration that may well need applications stopped. You will probably want to run these statements separately in advance at convenient times, but CMS can still construct the SQL for you.
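If you want to identify the affected tables in advance yourself, a catalog query along these lines (a sketch of one approach, not part of CMS) lists tables in the current database that have no unique index:

-- User tables (tabid > 99) with no unique index of any kind
SELECT t.tabname
FROM systables t
WHERE t.tabid > 99
AND t.tabtype = 'T'
AND NOT EXISTS (SELECT 1 FROM sysindexes i WHERE i.tabid = t.tabid AND i.idxtype = 'U');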
Next is a “dbschema” command such as the following:
- Full paths actually precede every “dbschema” and “dbimport” occurrence but are not shown in this article, for clarity.
- Output files are named $INFORMIXDIR/tdbschm_PID_counter and are only removed automatically if running with “--exec”.
Options not documented here can be guessed:
-er | ER extended syntax for CMS |
-noindex | Skip indexes and referential constraints |
-raw | Create raw (unlogged) tables |
-C informix | Connect to instance “informix” |
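For orientation only, combining the standard “-d” option with those listed above gives a command roughly like this hypothetical reconstruction (the output file name is a made-up placeholder and the real generated command may differ):

# Hypothetical reconstruction, not copied from the generated script
dbschema -d stores_demo -er -noindex -raw -C informix $INFORMIXDIR/tdbschm_12345_1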
It then creates the empty database(s) to be replicated in the target instance (“stores_demo” in our test) with the same dbspace and logging mode as in the source instance. If the database already exists, “Schema Phase 1” is assumed already to have happened and is skipped. The database therefore cannot be created beforehand as an empty database (for example, in a different dbspace): if it is, the table creation is skipped and the load phase fails.
Empty tables are then created in the target instance with “dbimport” using the “dbschema” output already saved above:
Options not documented here can be guessed:
-exists | Database already exists |
-er | ER extended syntax for CMS |
-C informix | Connect to instance “informix” |
-F … | Read from supplied file name |
The next phase then begins (only the first table is shown; the same is then repeated for the others):
More undocumented options that can be guessed are:
-indexonly | Index definitions only |
-noxdblock | Do not exclusively lock the database |
After all tables have been loaded and indexed, these global ER actions are performed:
#--
#-- Define the replicates
#--
sleep 30 #Wait for control queue to drain
cdr check queue -c group1 --qname=cntrlq --wait=60 --all
cdr check queue -c group1 --qname=sendq --wait=60 --all
cdr define grid --connect=group1 ifx_migrate_grid group1 group2
cdr enable grid --connect=group1 --grid=ifx_migrate_grid --user=informix --node=group1 --node=group2
cdr define replicateset --connect=group1 ifx_rset_migrate_stores_demo
The following is then performed per table (first shown only):
#--
#-- Defining Replicates for Database stores_demo
#--
cdr define repl --connect=group1 ifx_migrate_1_1616253566_call_type --master=group1 \
--ats --ris --conflict=always --scope=row \
" stores_demo@group1:'informix'.call_type" \
"select * from 'informix'.call_type" \
" stores_demo@group2:'informix'.call_type" \
"select * from 'informix'.call_type"
cdr change replicateset --connect=group1 --add ifx_rset_migrate_stores_demo ifx_migrate_1_1616253566_call_type
cdr change replicateset --connect=group1 --add ifx_migrate_grid ifx_migrate_1_1616253566_call_type
cdr start repl --connect=group1 ifx_migrate_1_1616253566_call_type
ER is then synchronised and checked:
#--
#-- Sync data
#--
cdr check queue -c group1 --qname=cntrlq --wait=60 --all
cdr check replicateset --connect=group1 --replset=ifx_rset_migrate_stores_demo --master=group1 \
--name=ifx_rset_migrate_stores_demo --process=2 --repair group2
When the above is actually executed, the output per table will look like this:
The final steps generated are:
That contains one more undocumented option:
-refonly | Referential constraints only |
Much of the detail above is only of interest if you wish to understand more about how CMS works. A quick guide is:
- Create the new IDS 14.10 instance with the same dbspaces but without the databases to be migrated.
- Add ER dbspaces and configuration settings to both old and new systems.
- Add ER group information into both “sqlhosts” files.
- Ensure connections are trusted both ways between the instances.
- Extract “ADD ERKEY” statements from a dummy CMS run and apply in advance on the old instance.
- Run and execute CMS.
- Reconnect applications to the new instance.
Should you need to revert to the original system, simply reconnect applications accordingly: CMS configures ER to be active-active by default, so updates applied to either instance will have been replicated to the other.
At a later time, you can stop and remove ER with:
cdr delete server -c informix2 group2
cdr delete server -c informix group2
cdr delete server -c informix group1
SQL to undo schema changes to the “stores_demo” database has “drop” instead of “add”:
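As with the earlier statements, there is one per affected table, of this form (placeholder table name):

ALTER TABLE sometable DROP ERKEY;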
To repeat the test, you also need to drop the target database:
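One way to do that (a sketch, not necessarily the exact command used) is to connect to “sysmaster” on the target instance and drop it:

echo "drop database stores_demo" | dbaccess sysmaster@informix2 -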
Caveats
The CMS documentation page here lists these restrictions:
- Cannot mix multiple database code-sets in same data migration command.
- Parallel data load using ISTAR query does not support tables with smart large objects (BLOB and CLOB), user-created UDTs, or collection data types.
- Requires customization to unload/load data using external tables or using any other supported unload/load commands.
- Tenant databases and database sharding are not supported.
- Cannot use this tool for database code-set migration.
The following case was raised with IBM/HCL (and documentation updates requested for “dbschema” and “dbimport”):
We are running CMS via this shell script which works around the problem:
Conclusion
Downtime when migrating Informix to a different operating system can be almost eliminated using “cdr migrate server”. It hides the complexity of Enterprise Replication by generating the commands for you. Some steps either must be done in advance or are better done by hand, but everything you need is given in this article. There are some restrictions, most notably that the contents of tables containing “smart blobs” need copying separately via flat files before synchronisation.
Disclaimer
Suggestions above are provided “as is” without warranty of any kind, either express or implied, including without limitation any implied warranties of condition, uninterrupted use, merchantability, fitness for a particular purpose, or non-infringement.