
Five Points to Consider When Preparing for Backup and Recovery

Backup and Recovery

A successful disaster recovery plan identifies and accounts for the data that’s required for system recovery. Batten says many z Systems clients use synchronous or asynchronous data-replication methods to ensure all current data is present at a remote site in the event of a disaster. In other words, clients must know not only which data is necessary for system recovery, but also where it’s stored. For example, a client may have data migrated to tape that’s needed to restore to a particular point in time. In this case, the client’s recovery plan should note that the tape must also be replicated and available at the remote site.

GDPS* is designed to guarantee data consistency for z Systems data and automate the entire recovery process. GDPS utilizes IBM remote copy solutions such as Metro Mirror, Global Mirror and z/OS* Global Mirror. The automation achieved with GDPS is based upon Tivoli* System Automation (SA) for z/OS, which is the only automation product designed to exploit the Parallel Sysplex* environment. Tivoli SA provides the functionality and ability to manage the whole sysplex from a central point.

Data should be backed up at a time that will cause the least amount of disruption to users. Batten says modern technologies have allowed for more dynamic data backup; certain databases or data set types can be backed up while remaining open. In other cases, the backup software could lock the data and cause delays in application programs. “For data sets that are updated perhaps via a batch job, backup should be triggered by a scheduler after satisfactory completion of the job,” he says.
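The job-triggered backup Batten describes can be sketched in a few lines. The function and command names below are illustrative assumptions, not any product’s API; in practice a job scheduler, rather than hand-rolled code, would own this sequencing:

```python
import subprocess

def run_job_then_backup(job_cmd, backup_cmd):
    """Run a batch job, then trigger the backup only if the job exits cleanly.

    Both commands are hypothetical placeholders; a scheduler would normally
    perform this check as part of its job-dependency handling.
    """
    job = subprocess.run(job_cmd)
    if job.returncode != 0:
        # Unsatisfactory completion: skip the backup so a bad state
        # is not captured, and surface the failure to the operator.
        raise RuntimeError(f"batch job failed (rc={job.returncode}); backup skipped")
    # Satisfactory completion: take the backup, failing loudly if it errors.
    subprocess.run(backup_cmd, check=True)
```

The point of the sketch is the ordering: the backup is conditioned on the job’s return code, so a failed update never silently overwrites the last good backup.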

Challenges and Safeguards

Batten notes clients should consider revisiting their backup and recovery plans periodically, but especially after the adoption of new technologies (e.g., encryption) or when new data-compliance regulations are put into effect.

“It’s a good idea even in a static environment to look at what is being backed up, as well as how and where it’s stored on an annual basis at least,” Batten says. “It’s not uncommon for applications to be retired or rewritten and the old data never removed from storage.”

Batten also advises clients to develop robust naming conventions for their data sets, ensuring each backup file has a unique identifier and allowing clients to quickly determine the details of a backup. He recommends publishing a list of existing management classes and their triggers. When a new application is introduced, this list allows clients to determine whether the application fits into an existing class or whether a new one should be created.
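As an illustration only — the qualifier scheme below is a hypothetical convention, not one Batten prescribes — a timestamped naming function makes the uniqueness point concrete. Each z/OS data set qualifier is limited to eight characters, which the two-digit-year date format respects:

```python
from datetime import datetime

def backup_dataset_name(hlq: str, appl: str, when: datetime) -> str:
    """Build a unique, self-describing backup data set name.

    Hypothetical convention: HLQ.APPL.BKUP.Dyymmdd.Thhmmss
    Every qualifier stays within the z/OS eight-character limit.
    """
    for q in (hlq, appl):
        # Simplified validation; real z/OS naming rules are stricter.
        if not (1 <= len(q) <= 8) or not q.isalnum():
            raise ValueError(f"invalid qualifier: {q!r}")
    return ".".join([
        hlq.upper(), appl.upper(), "BKUP",
        when.strftime("D%y%m%d"),   # date qualifier, e.g. D240131
        when.strftime("T%H%M%S"),   # time qualifier gives per-second uniqueness
    ])
```

A name such as PROD.PAYROLL.BKUP.D240131.T023000 tells an operator at a glance which application a backup belongs to and when it was taken.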

According to Batten, z Systems data audits have uncovered a common error clients make while planning for backup and recovery. While the production systems are clean and controlled, he says the development and test environments tend to be messy and consume valuable space. When application programmers develop a new program, for example, they create test data. Because this data will be changed by the application during the test, programmers create another copy from which to recover and test again. Once the testing is complete, all of this test data remains stored. Unless the developer complied with the same standards used to back up production systems, this development data may never be deleted. “It’s imperative that a backup plan extends to these environments as well, even if to just enforce the data life cycle,” Batten explains.

To ensure all backups or migrations are happening when required and the proper data lifecycle is being enforced, Batten suggests using automation tools available for z Systems clients. According to Batten, one of the biggest challenges companies face while developing a backup and recovery plan is not being able to standardize and control their data.
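On z Systems the storage manager would normally enforce expiration through management classes, so the stand-alone sketch below only illustrates the selection step of a retention sweep; the function name and data shape are my own assumptions:

```python
from datetime import datetime, timedelta

def expired_backups(backups, retention_days, now):
    """Return the names of backups older than the retention window.

    `backups` is a list of (name, created_datetime) tuples -- a simplified
    stand-in for a catalog listing. A real automation tool would act on the
    result (delete or migrate) rather than just report it.
    """
    cutoff = now - timedelta(days=retention_days)
    return [name for name, created in backups if created < cutoff]
```

Running such a sweep on the same schedule in development and test as in production is one way to extend the data life cycle enforcement Batten calls for.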

Plan for the Worst

An effective backup and recovery plan assumes nothing is safe from failure: hardware, software and even applications may contribute to unplanned outages. Batten says it’s paramount clients understand their data and how it relates to achieving business continuity. He suggests they try to identify all possible scenarios when it comes to data loss, and then ask themselves what steps they could take to recover.

Caroline Vitse is a freelance writer based in Rochester, Minnesota.


