Lee LaFrese - 2 October 2020

IBM announced a code update to their all-flash, high-end storage array, the DS8900F, on August 11, 2020, and it became generally available in early September. This blog will take a closer look at what was announced and what it means.

The IBM DS8000 has been around for over 15 years. Over that time, the hardware and functionality have evolved, with frequent licensed internal code updates delivering continuous enhancements. In August 2020, IBM announced Release 9.1 for its flagship DS8900F storage array, and it became available in September. Although there is nothing revolutionary in this release, there are quite a few product improvements that should interest any shop that either has DS8900F storage installed or is considering it.

Increased Cache and Capacity Flexibility

There are several hardware enhancements that this new code release enables. The top-end DS8950F now supports cache sizes up to 3.4 TB, roughly 70% more than the previous maximum of 2 TB. Note that moving from 2 TB to 3.4 TB of cache does not increase the size of the write cache (NVS), which is still capped at 128 GB.

The major potential benefit of the larger cache is more aggressive storage consolidation. For example, if you have three IBM DS8886 arrays, each with 1 TB of cache, and you want to consolidate to a single IBM DS8950F, you might hesitate if the maximum cache size were only 2 TB, since the cache-to-backing-storage ratio would be reduced. With the larger cache, this is no longer a concern, and there is little risk that the cache hit ratios for your critical workloads would be affected.
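To make the consolidation math concrete, here is a minimal sketch of the ratio comparison from the example above. The backing storage capacity per array is a hypothetical figure chosen purely for illustration:

```python
# Rough cache-to-backing-storage ratio check for the consolidation example:
# three DS8886 arrays, each with 1 TB of cache, consolidated onto a single
# DS8950F. The 100 TB backing capacity per array is a hypothetical figure.

def cache_ratio(cache_tb: float, backing_tb: float) -> float:
    """Cache as a fraction of backing storage capacity."""
    return cache_tb / backing_tb

per_array_backing_tb = 100.0  # hypothetical backing capacity per DS8886
before = cache_ratio(1.0, per_array_backing_tb)           # one DS8886, 1 TB cache
after_2tb = cache_ratio(2.0, 3 * per_array_backing_tb)    # old 2 TB maximum
after_3_4tb = cache_ratio(3.4, 3 * per_array_backing_tb)  # new 3.4 TB maximum

print(f"per-box: {before:.4f}, 2 TB max: {after_2tb:.4f}, 3.4 TB max: {after_3_4tb:.4f}")
```

Under these assumed capacities, a 2 TB cap would shrink the consolidated ratio by a third, while the 3.4 TB cache keeps it at or above the original per-box ratio.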

Another enhancement is support for 1.92 TB High Capacity Flash cards on the DS8950F, providing more capacity flexibility when planning your storage configuration.

Hardware Integration

With Release 9.1, the IBM DS8910F may now be integrated into the same rack as an IBM z15 T02/LT2, taking up 16U of reserved space in the z15 rack. This is especially useful to customers seeking to reduce the physical footprint of their z/OS data center.

The release also supports a new Hardware Management Console (HMC) with added memory and storage that will ship with any new DS8900 orders. The new HMC is not necessary for any existing applications but may be required later to support future functionality.

Flash Drive and RAID Management

IBM incorporated an enhancement to flash drive and RAID management as part of this release. In the past, freed, unallocated, or initialized space on RAID arrays had zeros written to it, which can increase wear on flash storage.

Now, the space can be zeroed using an UNMAP command that returns free pages directly to the free capacity pool on the flash drive. This improves the wear life and endurance of the flash cells and reduces drive overhead from garbage collection and free page management.

Other enhancements deliver faster rebuilds for partially allocated RAID arrays: if a parity stride is not mapped, no rebuild of that stride is necessary. Previously, all such strides would have been written with zeros.
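The idea behind the faster rebuild can be sketched in a few lines. This is a simplified illustration only; the data structures are invented, and the real microcode is far more involved:

```python
# Simplified sketch of unmap-aware RAID rebuild: strides whose space was
# returned via UNMAP are simply skipped, instead of being rewritten with
# zeros. Data structures here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Stride:
    index: int
    mapped: bool  # False if the space was freed via UNMAP

def rebuild(strides):
    """Return the indices of strides that actually need reconstruction."""
    rebuilt = []
    for s in strides:
        if s.mapped:
            rebuilt.append(s.index)  # reconstruct from surviving members
        # unmapped strides need no work: no zero-fill, no parity rebuild
    return rebuilt

array = [Stride(i, mapped=(i % 3 != 0)) for i in range(9)]
print(rebuild(array))  # only the mapped strides are rebuilt
```

The more of the array that is unallocated, the less work a rebuild has to do, which is exactly why partially allocated arrays now rebuild faster.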

Replication Enhancements

Release 9.1 expands zHyperLink functionality to include writes to Global Mirror primary volumes. This is a performance benefit for Global Mirror users that could reduce the response time for Db2 log writes by a factor of four. If you are not currently using zHyperLink, IntelliMagic Vision provides guidance on how much of your workload could potentially exploit it.

Hardware reserves were previously not allowed on z/OS volumes in HyperSwap environments; Release 9.1, together with some z/OS PTFs, enables them. Note that Geographically Dispersed Parallel Sysplex (GDPS) support for this function is planned as a small program enhancement (SPE) on GDPS 4.3. This provides better compatibility between GDPS and applications that rely on hardware reserves.

Before Release 9.1, Fixed Block (FB) volumes were limited to 2 TB when used with any type of replication. Now, FB volumes up to 4 TB work with copy services, and they may be either standard volumes or Extent Space Efficient (ESE). This makes FB storage configurations in replication environments simpler and easier to manage.

Safeguarded Copy

IBM announced a few improvements to Safeguarded Copy (SGC) as part of this release. SGC is typically used to take many frequent copies of a production environment. This can protect your system from accidental or malicious corruption, as the storage may easily be rolled back to an earlier point in time.

The space release function is now supported on both CKD (z/OS) and FB volumes configured in an SGC relationship. Also, it is now possible to dynamically expand the capacity available for SGC; in the past, this was disruptive and required you to delete all backups. This is very handy if you want to retain backups longer or increase the frequency of SGC backups.

Another new feature is an enhanced sizing methodology for SGC using a write monitoring bitmap produced by the DS8900F (also available for the DS8880). This bitmap is also useful for sizing the repository for Extent Space Efficient (ESE) FlashCopy.

The write monitoring bitmap records when each track on a volume has been written but has no other function beyond this. The bitmap may be started, stopped, reset, and queried. These commands are only available through Copy Services Manager (CSM); there is no GUI, DSCLI, or TSO support.
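A toy model may help visualize what the bitmap tracks. The class below is purely illustrative (the names and interface are invented); it mimics the start/stop/reset/query operations described above, with one bit per track set on first write:

```python
# Toy model of a per-volume write monitoring bitmap: one bit per track,
# set on first write while monitoring is active. The class and method
# names are invented for illustration, not an actual CSM interface.

class WriteMonitor:
    def __init__(self, tracks: int):
        self.bits = bytearray((tracks + 7) // 8)
        self.active = False

    def start(self):
        self.active = True

    def stop(self):
        self.active = False

    def reset(self):
        self.bits = bytearray(len(self.bits))

    def record_write(self, track: int):
        if self.active:
            self.bits[track // 8] |= 1 << (track % 8)

    def query(self) -> int:
        """Count of distinct tracks written while monitoring was active."""
        return sum(bin(b).count("1") for b in self.bits)

mon = WriteMonitor(tracks=1000)
mon.start()
for t in [5, 5, 42, 900]:  # repeated writes to a track count once
    mon.record_write(t)
print(mon.query())  # -> 3
```

Because rewrites to the same track set a bit that is already set, the query reports unique tracks touched, which is the figure a capacity sizing exercise needs.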

IBM also provides an ESE Sizer tool that analyzes the query output to do the sizing. At some point, the sizing functionality may be incorporated directly into CSM. This capability should improve sizing for SGC and ESE FlashCopy, limiting disruptions due to running out of capacity.

Transparent Cloud Tiering

IBM made several enhancements in this release that apply to using the DS8900F with Transparent Cloud Tiering (TCT). With TCT, the DS8900F can migrate data to the cloud over Ethernet network connections. Note that the standard Ethernet connection on a DS8900F is 1 Gb, but the optional 10 Gb connection is highly recommended if TCT is being considered. The cloud may be a public or private cloud service such as Amazon Web Services (AWS), or it could be an IBM TS7700 tape library. The enhancements in this release pertain to TCT using the TS7700 as the cloud repository.

The new release enables the DS8900F to use the TS7700 as an object store. With object storage, data is saved with comprehensive metadata, eliminating the tiered file structure used in traditional hierarchical storage and placing everything into a flat address space called a storage pool. This approach is ideal for retaining massive amounts of unstructured data. Please note that IntelliMagic Vision already provides some functionality for monitoring object stores on the IBM TS7700.
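The contrast with a hierarchical file system can be shown in miniature. Everything here is invented for illustration: objects live in one flat pool, keyed by name, with metadata attached, and there are no directories to traverse:

```python
# Minimal illustration of a flat object-store namespace: one pool keyed
# by object name, metadata stored alongside the data. The keys and
# metadata fields are invented for illustration only.

pool = {}  # flat address space: key -> (data, metadata)

def put_object(key: str, data: bytes, **metadata):
    pool[key] = (data, metadata)

def get_object(key: str):
    return pool[key]

# Slashes in the key are just characters, not directory levels:
put_object("prod/db2/volume001", b"...", source="DS8900F", tier="TCT")
data, meta = get_object("prod/db2/volume001")
print(meta["source"])
```

Since a key maps straight to an object, lookup cost does not grow with nesting depth, which is part of why flat namespaces scale well for massive amounts of unstructured data.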

A TS7700 cluster can now support both traditional FICON workloads and TCT object workloads at the same time. The TS7700 uses logical partitions to allocate storage from resident cache specifically dedicated to TCT objects in an object partition. Data redundancy is supported by forking writes to two TS7700 clusters within the same grid.

Compression and Encryption

With the new release, both data compression and in-flight data encryption are supported with TCT. Hardware-accelerated compression is done by the DS8900F using the NX842 compression engine in POWER9. The compression engine is separate from the processor cores, so it will not impact host I/O or Copy Services performance on the DS8900F.

Compression is only available when a TS7700 is used as an object store; it will not work with other cloud repositories at this time. Data is only decompressed when recalled by the DS8900F. TCT Secure Data Transfer uses in-flight encryption to provide an extra layer of security for data transfer: data is encrypted by the DS8900F and decrypted by the TS7700.

The DS8900F uses the POWER9 crypto engine, so encryption does not consume processor cycles and has no effect on DS8900F I/O performance. Note that this is not the same as encrypting data at rest and does not require an external key manager. This function also requires a TS7700 (at the release 5.1 code level) as the object store.

The new release enables DFSMSdss full volume dump support with TCT, which is a new use case. Until now, TCT was primarily used for archiving inactive data; the new use case allows TCT to be used with active data when a dump and restore is required. DFSMSdss enables a command-based dump to cloud and recall directly to production volumes. Since this activity occurs over Ethernet, the potential performance impact of dump-related FICON traffic on the DS8900F is avoided.

Next Steps

I hope this blog helps you understand some of the features and functions included in the latest IBM DS8900F release. If you want more technical details, please see the presentation provided by the IBM Washington Systems Center here.

If you are looking for a complete solution to managing your IBM storage, you need look no further than IntelliMagic Vision. For z/OS environments, IntelliMagic Vision can help you manage performance and capacity for IBM DS8000 (or EMC and HDS) as well as IBM TS7700, and see how they interrelate with your overall system performance. For Open Systems, IntelliMagic Vision gives you deep insight into your storage arrays and SAN.

