

Tokenized Encryption: Changing the Lock


The previous article, “Tokenized Encryption: Algorithms and Keys,” discussed periodic change of encryption keys, particularly key generation and file recryption. What steps go into such a change? In a project I led, we had to both invent the necessary programs and identify the numerous steps involved.

As project leader, I got the “privilege” of both creating and executing the procedure, which became two different procedures: a mainframe-only cutover that addressed data recryption and the associated symmetric keys, and a mainframe-to-website cutover involving both public and private keys, where multiple intelligent nodes had to be changed in concert. This article discusses the mainframe-only cutover; a following article will cover mainframe-to-website. Note that the changes discussed here apply to other platforms as well, but in a distributed manner. For example, encryption key changes would occur on an encryption server, program changes on an application server, and re-tokenization on a database server. The process itself would stay essentially the same.

Encryption Key Generation

The encryption key manager, the only person granted authority to create encryption keys, had the flexibility of generating keys two to four weeks before the actual cutover. This time lag posed no security exposure because, until the cutover process was complete, the keys were useless bits. Since multiple key generations were kept for backout and recovery, the creation date was incorporated into the key naming convention, simplifying determination of the time period to which each key applied. Because both Integrated Cryptographic Service Facility (ICSF) and MegaCryption were in use, two different utilities generated keys, and because business-partner keys differed from the mainframe keys, numerous keys were needed.
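
To illustrate how a creation date can be embedded in key names, here is a minimal sketch in Python; the label format, system codes and dates are assumptions for illustration, not the project’s actual standard:

    from datetime import date

    def key_label(system: str, purpose: str, created: date) -> str:
        """Build a key label that embeds its creation date, e.g. PROD.CHD.20190401."""
        return f"{system}.{purpose}.{created:%Y%m%d}"

    # Multiple key generations coexist for backout and recovery; the date
    # suffix shows which cutover period each key belongs to.
    print(key_label("PROD", "CHD", date(2019, 4, 1)))     # PROD.CHD.20190401
    print(key_label("PARTNER", "CAD", date(2019, 4, 1)))  # PARTNER.CAD.20190401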

Planning and Scheduling

Planning the first cutover was daunting because it was unprecedented. Since tokenization had been in place for a year and I’d implemented mainframe encryption, I had the background to know which steps were needed. Just as important was the security staffer who set up the security infrastructure: definitions such as a hot-ID had to be implemented, scenarios considered, and precautions and contingencies established. It started with creation of a plan covering the needed steps: writing a program to decrypt with the old key and recrypt with the new key, which required access to both keys; a full system shutdown; planning for errors; data validation; and the often overlooked but vital aspect of communication.

Scheduling was challenging because the encrypted data, primarily cardholder data (CHD) and checking account data (CAD), affected almost every business process, meaning those processes had to be suspended during cutover. There was a lot of data to convert: hundreds of thousands to millions of records. Further complicating things, this was a 24-7 operation with a narrow batch window. A service company provided hardware, software and operations support; they had to be on board, too. Lastly, encryption this intensive was new, making it very difficult to estimate processing time. But we could measure backup and restore times and how long a normal production restart took, and performance reports provided job execution times, pinpointing the lightest day and the jobs that could be deferred or rescheduled. That revealed the largest time window available.
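
A rough, back-of-the-envelope sketch of that window calculation in Python; all numbers are illustrative, not the project’s actual figures:

    # Estimate the time budget available for the recryption job on the
    # lightest night, after reserving time for backup, a possible restore,
    # production restart and post-cutover validation. Numbers are made up.
    batch_window_min  = 300  # lightest night's batch window
    backup_min        = 45   # measured full-file backup time
    restore_allowance = 45   # reserved in case a restore is needed
    restart_min       = 30   # measured normal production restart time
    validation_min    = 20   # post-cutover tests and key-file updates

    budget = batch_window_min - (backup_min + restore_allowance
                                 + restart_min + validation_min)
    print(f"Recryption job must finish within {budget} minutes")  # 160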

The Cutover Program

Scheduling showed the need to get a working cutover program in place early, because that enabled benchmarking. Its processing steps (sketched in code after this list) were:

  • Create a file containing both the current and new encryption key names
  • Open the files containing encrypted data and encryption key names
  • Read past the first control record to the second record as the beginning of a loop
  • Check whether the CHD fields are binary zeroes; if not, decrypt each field with the old encryption key via a cryptographic call
  • Encrypt the CHD fields with the new encryption key via a cryptographic call
  • Check whether the CAD fields are binary zeroes; if not, decrypt with the old key via a cryptographic call
  • Encrypt the CAD fields with the new encryption key via a cryptographic call
  • Rewrite the record
  • Write desired audit, performance or error information to a print file
  • Repeat the previous steps, record by record, until end-of-file
  • Handle errors, in most cases by abending the program
  • Close files and terminate
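
A minimal Python sketch of that loop follows, using the cryptography package’s Fernet cipher as a stand-in for the ICSF/MegaCryption calls; the record layout, field names and error handling are illustrative assumptions, not the actual RCRPTDTA code:

    from cryptography.fernet import Fernet

    BINARY_ZEROES = b"\x00" * 16  # marker for a field that never held data

    def recrypt_field(value: bytes, old: Fernet, new: Fernet) -> bytes:
        """Decrypt a field with the old key and re-encrypt it with the new key."""
        if value == BINARY_ZEROES:   # field is binary zeroes: leave untouched
            return value
        clear = old.decrypt(value)   # cryptographic call with the old key
        return new.encrypt(clear)    # cryptographic call with the new key

    def run_cutover(records, old_key: bytes, new_key: bytes, audit_log):
        """Recrypt the CHD and CAD fields of every record past the control record."""
        old, new = Fernet(old_key), Fernet(new_key)
        converted = 0
        for rec in records:          # loop begins at the second (first data) record
            try:
                rec["chd"] = recrypt_field(rec["chd"], old, new)  # rewrite record
                rec["cad"] = recrypt_field(rec["cad"], old, new)
                converted += 1
            except Exception as exc: # in production, most errors abended the job
                audit_log.write(f"error on record {rec.get('id')}: {exc}\n")
                raise
        audit_log.write(f"{converted} records recrypted\n")  # audit information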

The Value of a Test System

A test system is usually for application developers’ testing and debugging, but it’s also invaluable for developing and testing system functions. The same homegrown online (CICS Transaction Server) transactions could be used for analysis and debugging. Once the cutover program was compiled and debugged, it could be run against a copy of the production file to produce timing and performance information. Error situations could be created, and recovery, backup and fallback procedures could be tested and validated. Ultimately, an actual cutover could be performed on the test system. Since the test system essentially mirrored production, any fallout could be fixed before it occurred in production. There was also a training system that had to be cut over, doubling the testing before the real thing.

Putting It All Together

Now all the pieces were in place, albeit with some fine-tuning to be done based on how things went with the test and training systems. The following items had been sufficiently defined to create a detailed cutover plan:

  • The best time, day and window size had been determined
  • The token cutover program had been written, debugged and tested
  • JCL had been created and syntax-checked
  • Files containing old and new encryption key names for all three systems had been created and populated
  • Jobs had been created to copy, back up and restore token files
  • A hot-ID infrastructure had been created for checking out a special ID with cutover authority
  • Emails and phone numbers for programmers, operations and management were in place
  • Monitoring tools had been identified and usage documentation provided

Mainframe Cutover Procedure

The resultant cutover procedure involves the following steps.

Prep Work

1. The day before cutover, the project leader updates the partitioned data sets (PDS) with the encryption key names used by the cutover job (RCRPTDTA).

2. The second task, hopefully, wasn’t really a task at all. It contained directions on how to rebuild the real file from a backup (taken in RCRPTDTA’s first step) if an error in the cutover couldn’t be resolved within 30 minutes of occurrence. The most disastrous outcome possible was for the system to come up with a corrupted file full of questionable sensitive data, so this contingency was top priority.

3. The cutover job took substantial time, and one of the dangers was that it could time out. Task three identified how to start the online performance monitor screen that showed when this situation was nearing, provided information on how to contact the operations staff to extend the time limit, and explained how to abort the cutover if the program timed out.

4. Run a prebuilt job to syntax-check all JCL statements that would or could be used.

5. The afternoon of cutover day, notify computer operations of the cutover plans.

6. 4:00 p.m.: Cutover specialist should check out a hot-ID (good for 24 hours).

7. 7:00 p.m.: Notify the order-taking department to shut down the voice response unit that took automated orders.

Cutover

8. 1:00 a.m.: Send email to IT and user management that cutover has begun.

9. 1:00 a.m.: Notify computer operations by phone that cutover is about to begin.

10. Quiesce normal production jobstreams.

11. Submit RCRPTDTA and monitor job with performance monitor screens.

12. Assuming RCRPTDTA completes normally, update the operational encryption key PDSes with the new encryption key names.

13. Using the encryption utility, encrypt, decrypt and recrypt a dummy number, then test predefined production inquiry transactions (a sketch of this check follows the procedure).

14. Resume normal production jobstreams.
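
The new-key check in step 13 can be sketched as follows, using Python and a dummy card number as a hypothetical stand-in for the mainframe encryption utility and real cardholder data:

    from cryptography.fernet import Fernet

    new_key = Fernet.generate_key()     # stands in for the newly cut-over key
    cipher = Fernet(new_key)

    dummy_pan = b"4111111111111111"     # test number, never real cardholder data
    token = cipher.encrypt(dummy_pan)                  # encrypt
    assert cipher.decrypt(token) == dummy_pan          # decrypt
    retoken = cipher.encrypt(cipher.decrypt(token))    # recrypt
    assert cipher.decrypt(retoken) == dummy_pan

    print("New key verified; proceed to production inquiry transactions")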

Cutover Success

The first cutover went smoothly, within the time limits and without any errors. Extensive testing, including by users, verified that all data was consistent, and all business processes resumed without incident. As the years progressed, a certain amount of fine-tuning improved the process further, testing became minimal, and what started as a challenging new function evolved into a standard procedure.

Jim Schesvold can be reached at jschesvold@mainframehelp.com.





