The SWAMI of VSE/VSAM

General Questions and Answers
The Swami's #1 Favorite Batch Performance Hint

Block Big!

If data is being processed sequentially, it is critical to performance to use large Control Interval (CI) sizes, but only sizes that do not force VSAM to divide each CI into multiple physical blocks.

While it is true that the overall elapsed time of a sequential processing job can be shortened by increasing the amount of space VSAM will use for data buffering, and hence the number of data buffers, the CPU time is shortened further by also reducing the number of physical blocks transferred.

Reducing the number of I/O operations by adding buffers, while leaving the CI size smaller than optimal, still costs additional Instruction Processor (CPU) time to build and translate the channel programs for all those extra, smaller blocks. That CPU time would be better spent executing application instructions, both for this application and for others.
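A quick way to see how a file is blocked today is an IDCAMS LISTCAT: the data-component attributes show the CI size and, on most levels, the physical record size, so you can see at a glance whether each CI is being split. The sketch below assumes a VSE batch job and a made-up cluster name; substitute your own.

// JOB LISTCI
* SHOW THE CI SIZE AND PHYSICAL RECORD SIZE OF AN EXISTING CLUSTER
// EXEC IDCAMS,SIZE=AUTO
   LISTCAT ENTRIES(YOUR.KSDS.CLUSTER) ALL
/*
/&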

How Big?

The largest CI size you can specify is 32 KB, but that particular size will cause VSAM to subdivide the CIs into multiple physical blocks on any CKD or ECKD device. For these devices, the ideal CI size will be the size where exactly two physical blocks can fit on one disk track. For 3390 or ECKD type devices, this is 26 KB. For 3380 type devices, it is 22 KB.

The above paragraph does not apply verbatim to DFP/VSAM in the z/OS (or MVS) environment. In VSE/ESA environments, VSE/VSAM permits "odd" block sizes for its files, while DFP/VSAM may not in every case.
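In a VSE/ESA shop, the CI size is fixed when the cluster is defined, on the CONTROLINTERVALSIZE (CISZ) parameter of the data component. Here is a minimal DEFINE CLUSTER sketch for a 3390-resident KSDS; the names, key length, record size, space allocation, volume, and catalog name are invented placeholders, and only the CONTROLINTERVALSIZE value of 26624 bytes (26 KB) reflects the recommendation above.

// JOB DEFCLUS
* DEFINE A KSDS WITH A 26 KB (26624-BYTE) DATA CI FOR A 3390
// EXEC IDCAMS,SIZE=AUTO
   DEFINE CLUSTER ( -
          NAME(YOUR.KSDS.CLUSTER) -
          INDEXED -
          KEYS(12 0) -
          RECORDSIZE(200 200) -
          CYLINDERS(50 10) -
          VOLUMES(SYSWK1) ) -
        DATA ( -
          NAME(YOUR.KSDS.CLUSTER.DATA) -
          CONTROLINTERVALSIZE(26624) ) -
        INDEX ( -
          NAME(YOUR.KSDS.CLUSTER.INDEX) ) -
        CATALOG(VSESP.USER.CATALOG)
/*
/&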

Using the 3390 as an example, if you specify 26 KB for your data CI size, it is unnecessary to specify more than 30 data buffers for sequential access, because VSAM does not read ahead sequentially across Control Area (CA) boundaries, which occur at least once per cylinder.
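If the program does not set its own buffer counts in the ACB, most VSE/ESA levels let you supply them on the DLBL statement at run time, so the same file can get 30 data buffers for the sequential batch step and fewer elsewhere. A sketch, with invented file, cluster, catalog, and program names (check the JCL reference for your VSE level):

// JOB SEQREAD
* GIVE THE SEQUENTIAL STEP 30 DATA BUFFERS AT RUN TIME
// DLBL INFILE,'YOUR.KSDS.CLUSTER',,VSAM,CAT=VSESPUC,BUFND=30
// EXEC YOURPGM,SIZE=AUTO
/*
/&

The DLBL filename (INFILE here) must match the filename in the program's ACB, and VSESPUC is only the conventional label for the VSE user catalog; substitute whatever your installation uses.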

What if the file is processed directly too?

You may be surprised at the relative amounts of sequential and direct processing. Many files see thousands of logical record retrievals and updates during their on-line processing, but many times that number during initial loads, backups, reorganizations, and other batch processing.

Given today's fast disks, with their large cache sizes, the physical I/O operations are substantially the same speed whether the blocksize and CI size are optimum or not. Reducing the CPU cost of the I/O operations, particularly in environments running VSE under VM, can provide significant benefits.

   
 

This entire site -- including all its pages and content --
is the intellectual property of and copyright © 2002-2003 by
Dan Janda, theswami@epix.net