About

z/OS V2R2 Redbooks: JES2, JES3, UI, Sysplex, Diagnostics, Performance, Storage, ServerPac, USS, Unix, Availability, Operations and Security

Saturday, May 21, 2016

These IBM® Redbooks® publications help you become familiar with the technical changes that were introduced into IBM z/OS® V2R2.
The publications form a series that takes a modular approach to documenting the updates within z/OS V2R2. This approach has the following goals:
- Provide modular content
- Group the technical changes into a topic
- Provide a more streamlined way of finding relevant information based on the topic
We hope this approach is useful, and we value your feedback.


IBM z/OS V2R2: JES2, JES3, and SDSF, SG24-8287-00
Redbooks, published 15 December 2015

IBM z/OS V2R2: User Interfaces, SG24-8311-00
Redbooks, published 17 December 2015

IBM z/OS V2R2: Unix Systems Services, SG24-8310-00
Redbooks, published 16 December 2015

IBM z/OS V2R2: Sysplex, SG24-8307-00
Redbooks, published 16 December 2015

IBM z/OS V2R2: Diagnostics, SG24-8306-00
Redbooks, published 26 January 2016

IBM z/OS V2R2: Performance, SG24-8292-00
Redbooks, published 17 December 2015

IBM z/OS V2R2: Storage Management and Utilities, SG24-8289-00
Redbooks, published 15 December 2015, last updated 17 December 2015

IBM z/OS V2R2: ServerPac, SG24-8500-00
Redbooks, published 23 February 2016, last updated 2 March 2016

IBM z/OS V2R2: Availability Management, SG24-8290-00
Redbooks, published 15 December 2015, last updated 17 December 2015

IBM z/OS V2R2: Security, SG24-8288-00
Redbooks, published 15 December 2015, last updated 17 December 2015

Mainframe from Scratch Volume 1 Initial Build for z/OS, SG24-8329-00
Draft Redbooks, last update 9 March 2016

IBM z/OS V2R2: Operations, SG24-8305-00
Redbooks, published 17 December 2015

Logical partitions - z13

Saturday, June 13, 2015

PR/SM enables the z13 to be initialized for logically partitioned operation, supporting up to 85 LPARs. Each LPAR can run its own operating system image in any image mode, independently of the other LPARs. An LPAR can be added, removed, activated, or deactivated at any time. Changing the number of LPARs is not disruptive and does not require a power-on reset (POR). Certain facilities might not be available to all operating systems because the facilities might have software corequisites.

Each LPAR has the same resources as a real CPC:

Processors: Called logical processors, they can be defined as CPs, IFLs, ICFs, or zIIPs. They can be dedicated to an LPAR or shared among LPARs. When shared, a processor weight can be defined to provide the required level of processor resources to an LPAR. The capping option can also be turned on, which prevents an LPAR from acquiring more than its defined weight, limiting its processor consumption. LPARs for z/OS can have CP and zIIP logical processors. The two logical processor types can be defined as either all dedicated or all shared. zIIP support is available in z/OS.
The weight and number of online logical processors of an LPAR can be dynamically managed by the LPAR CPU Management function of the Intelligent Resource Director (IRD) to achieve the defined goals of the specific partition and of the overall system. The provisioning architecture of the z13 adds another dimension to the dynamic management of LPARs.

PR/SM is enhanced to support an option to limit the amount of physical processor capacity that is consumed by an individual LPAR when a PU is defined as a general-purpose processor (CP) or an IFL shared across a set of LPARs. This enhancement provides a physical capacity limit that is enforced as an absolute (rather than relative) limit, and it is not affected by changes to the logical or physical configuration of the system. The physical capacity limit can be specified in units of CPs or IFLs. The "Change LPAR Controls" and "Customize Activation Profiles" tasks on the Hardware Management Console have been enhanced to support this function.

For the z/OS Workload License Charges (WLC) pricing metric, and metrics that are based on it, such as Advanced Workload License Charges (AWLC), an LPAR defined capacity can be set. The defined capacity enables the soft capping function. Workload charging introduces the capability to pay software license fees based on the processor utilization of the LPAR on which the product is running, rather than on the total capacity of the system:
- In support of WLC, the user can specify a defined capacity in millions of service units (MSUs) per hour. The defined capacity sets the capacity of an individual LPAR when soft capping is selected. The defined capacity value is specified on the Options tab in the Customize Image Profiles window.
- WLM keeps a 4-hour rolling average of the processor usage of the LPAR. When the 4-hour average processor consumption exceeds the defined capacity limit, WLM dynamically activates LPAR capping (soft capping). When the rolling 4-hour average returns below the defined capacity, the soft cap is removed.

For more information about WLM, see System Programmer's Guide to: Workload Manager, SG24-6472. For a review of software licensing, see 7.16.1, "Software licensing considerations" on page 300.

Memory: Memory, either main storage or expanded storage, must be dedicated to an LPAR. The defined storage must be available during LPAR activation; otherwise, the activation fails. Reserved storage can be defined to an LPAR, enabling nondisruptive memory addition to and removal from an LPAR by using LPAR dynamic storage reconfiguration (z/OS and z/VM).
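The 4-hour rolling average behavior described above can be sketched as follows. This is an illustrative model only, not IBM code: the sampling interval, variable names, and workload values are assumptions for illustration.

```python
# Sketch of how a 4-hour rolling average of MSU consumption drives
# soft capping. All values here are assumed for illustration.
from collections import deque

INTERVAL_MIN = 5                      # assumed sampling interval (minutes)
WINDOW = (4 * 60) // INTERVAL_MIN     # samples in a 4-hour window

def soft_cap_decisions(msu_samples, defined_capacity):
    """Yield (rolling_average, capped) for each MSU sample."""
    window = deque(maxlen=WINDOW)
    for msu in msu_samples:
        window.append(msu)
        avg = sum(window) / len(window)
        # WLM caps when the rolling average exceeds the defined capacity,
        # and removes the cap when it falls back below it.
        yield avg, avg > defined_capacity

# Example: an LPAR with a defined capacity of 100 MSU that runs quietly,
# then spikes, then quiets down again.
samples = [80] * 24 + [150] * 48 + [60] * 48
decisions = list(soft_cap_decisions(samples, defined_capacity=100))
```

Note that the cap engages only after the average over the whole window crosses the limit, so a short spike does not immediately trigger soft capping.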
Channels: Channels can be shared between LPARs by including the partition name in the partition list of a channel-path identifier (CHPID). I/O configurations are defined by the IOCP or the HCD with the CHPID mapping tool (CMT). The CMT is an optional tool that is used to map CHPIDs onto physical channel IDs (PCHIDs). PCHIDs represent the physical location of a port on a card in an I/O cage, I/O drawer, or PCIe I/O drawer. IOCP is available on the z/OS, z/VM, and z/VSE operating systems, and as a stand-alone program on the hardware console. HCD is available on the z/OS and z/VM operating systems. Consult the appropriate 2964DEVICE Preventive Service Planning (PSP) buckets before implementation. Fibre Channel connection (FICON) channels can be managed by the Dynamic CHPID Management (DCM) function of the Intelligent Resource Director. DCM enables the system to respond to ever-changing channel requirements by moving channels from lesser-used control units to more heavily used control units as needed.
Copyright IBM Corporation

Improving Availability with Flash Express on the IBM zEC12

Friday, May 24, 2013

ZSL03189-USEN-01
What is Flash Express?
With the IBM zEnterprise® EC12 (zEC12), IBM is introducing a new feature, Flash Express, designed to help drive system availability and performance to even higher levels. Flash Express is designed to help reduce latency for critical paging that might otherwise impact the availability and performance of your key workloads. For paging flexibility and efficiency, Flash Express is a compelling addition to traditional auxiliary storage.
Now with the zEC12, flash memory has been integrated into the memory hierarchy to provide even higher levels of system availability and performance. Flash Express is designed to offer exceptional performance for paging spikes by reducing paging latency. Transitional workload processing shifts, such as start-of-day processing, changes in load, or collection of SVC or stand-alone dumps, are examples where paging might surge. This is where Flash Express can help.
Flash Express is designed to improve availability through improved paging performance where it matters most to your business, like early morning trading sessions or critical retail shopping days.
Planning and Configuring Flash Express
How is Flash Express packaged? Flash storage is integrated on PCI Express-attached RAID 10 cards that fit in the PCIe I/O expansion drawer. The Flash Express feature is packaged as a two-card pair, and each mirrored card pair holds 1.4 TB of usable memory. Up to four card pairs may be used concurrently, delivering up to 5.6 TB of memory.
Flash memory is assigned to an LPAR, and z/OS® currently supports up to 16 TB of flash memory in a single system image. A flash memory allocation panel on the SE specifies the amount of flash memory initially brought online to a z/OS partition. Incremental flash memory can also subsequently be brought online or taken offline as needed.
Sizing Flash Express
Plan to assign the same amount of memory on Flash Express as is defined for paging datasets on disk. Usually one pair of Flash Express cards provides enough paging space for the entire z/OS partition, so there is no need to perform detailed capacity sizing to plan for Flash Express. Adding Flash Express cards to your auxiliary storage can improve paging performance and availability.
Because Flash Express is not persistent across IPLs, it cannot be used for virtual I/O (VIO) or PLPA data used in warm starts. VIO and PLPA datasets must still be defined on disk.
Resiliency
Flash Express cards are delivered as a RAID 10 mirrored pair for superior resiliency and reliability. In the unlikely event of a problem, Flash Express cards can be concurrently replaced. The cards are designed for superior wear leveling and have a long expected lifetime.
Security
Your Flash Express data remains protected. Data is encrypted on the Flash Express adapter with 128-bit AES encryption. Encryption keys are stored on smart cards that are plugged into the SE. Removing the smart cards renders the data on the flash cards inaccessible.
Benefits from Flash Express
IBM Flash Express helps organizations improve availability and performance, especially during periods of paging spikes.
Improving Availability
Flash Express can improve availability by reducing significant paging delays that might otherwise affect your system performance and impact your mission-critical workloads.
Improving Diagnostic Time
During diagnostic collection, as in SVC or stand-alone dumps, systems can become sluggish, effectively rendering key systems unavailable. When data is transferred into main memory as part of a dump, Flash Express' fast I/O rates and low latency provide decreased first-failure data capture time and faster page-ins of the critical pages needed to create the dump. This allows the system to return to normal workload performance faster, without incurring extra delays.
Improving Paging
z/OS uses both Flash Express and page data sets for auxiliary storage, paging data to the preferred storage medium first based on response times, data set characteristics, and other parameters. Wherever possible, the system pages first to Flash Express, resulting in faster performance. Especially for data-intensive applications, the use of pageable large pages with Flash Express enables the transfer of large amounts of data at faster speeds, which can result in improved performance for DB2 analytic workloads.
Improving Performance at Transition Times
Banks and financial institutions need highly responsive start-of-day performance. When the workload shifts from a transactional workload, say, from prime shift to batch and back to prime shift, response time delays can occur because of the required page-ins of the critical work needed to resume transactional processing. These delays can be dramatically reduced when data for the next shift is transferred from flash into memory. The large number of page-ins could otherwise delay performance at "start of day" or "market open" activities that are vital to operations like trading and banking.
Reduce CPU Cycles
Flash Express works to reduce the CPU cycles associated with page translations. Typically, page translations from virtual to real memory can impact the performance of workloads like DB2® or Java. Paging with small (4 KB) pages is less efficient than paging with fewer, larger 1 MB pages.
Here is how it works: cache buffers are used by the operating system to reduce virtual-to-real address translations. Translation performance improves when a greater number of page entries fit in the cache, which is made possible by using larger 1 MB pages. As a result of improved cache hits, exploiters of pageable large pages and Flash Express should experience performance improvements in both elapsed time and CPU.
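The arithmetic behind the 4 KB versus 1 MB page comparison is worth making concrete. The region size below is an assumed example, not a figure from this document:

```python
# Illustrative arithmetic: how many page translations it takes to map a
# region with 4 KB pages versus 1 MB pageable large pages. Fewer pages
# means fewer translation entries competing for the cache.
KB, MB, GB = 1024, 1024**2, 1024**3

def pages_needed(region_bytes, page_bytes):
    """Number of pages (and hence translation entries) to map a region."""
    return -(-region_bytes // page_bytes)   # ceiling division

region = 2 * GB                         # assumed 2 GB buffer pool
small = pages_needed(region, 4 * KB)    # entries with 4 KB pages
large = pages_needed(region, 1 * MB)    # entries with 1 MB pages
ratio = small // large                  # 1 MB pages need 256x fewer entries
```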
Flash Express Benefits Many Industries
Flash Express is useful for any industry needing improved service levels:
- Applications requiring high availability at the start of the day, like banking, insurance, and trading applications
- Service providers that compete on outstanding service levels
- Retail applications with a web presence
- Public sector applications requiring high availability, like emergency preparedness
- Development and test teams that collect diagnostics frequently
- Any organization that needs high SLAs
Bottom line
- Flash Express is designed to improve availability and can reduce paging latency at critical times, such as during morning transitions or other periods experiencing paging spikes.
- Flash Express can improve performance with pageable large pages, for instance with DB2 and Java workloads.
- Flash Express can also reduce delays from SVC or stand-alone dumps.
- Flash data is automatically encrypted for security and compliance needs.
- Flash Express is easily deployed and easy to configure.

© Copyright IBM Corporation 2012

Design change yields performance benefit for OSA-Express4S 10 Gigabit Ethernet inbound traffic:

Performance using jumbo frames: In laboratory measurements, using an OSA-Express4S 10 Gigabit Ethernet (10 GbE) feature in a z196 defined as CHPID type OSD with an inbound-to-the-host streams workload operating at 10 Gbps, we achieved a maximum user-payload throughput of 1,180 megabytes per second (MBps) compared to a maximum of 680 MBps achieved with an OSA-Express3 10 GbE feature on a z196. This represents approximately a 70% increase for jumbo frames (8000 byte frames).
In measurements with a mixed-direction streams workload in the same jumbo frames environment, we achieved a maximum user-payload throughput of 2,080 MBps with an OSA-Express4S 10 GbE feature compared to a maximum of 1,240 MBps with an OSA-Express3 10 GbE feature on a z196. This represents approximately a 70% increase for jumbo frames.
Performance using standard frames: In laboratory measurements, using an OSA-Express4S 10 GbE feature in a z196 defined as CHPID type OSD with an inbound-to-the-host streams workload operating at 10 Gbps, we achieved a maximum user-payload throughput of 1,120 MBps compared to a maximum of 615 MBps achieved with an OSA-Express3 10 GbE feature on a z196. This represents approximately an 80% increase for standard frames (1492 byte frames).
In measurements with a mixed-direction streams workload in the same standard frames environment, we achieved a maximum user-payload throughput of 1,680 MBps with an OSA-Express4S 10 GbE feature compared to a maximum of 1,180 MBps with an OSA-Express3 10 GbE feature on a z196. This represents approximately a 40% increase for standard frames.
OSA-Express4S 10 GbE performance was measured in a controlled environment using IBM Application Workload Modeler (AWM). The actual throughput or performance that any user may experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the network options and configuration, and the workload processed. One MBps represents 1,048,576 bytes per second.
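The approximate percentages quoted above follow directly from the measured throughputs; a quick check using only the figures stated in this text:

```python
# Verify that the reported percentage increases follow from the measured
# OSA-Express4S vs OSA-Express3 throughputs quoted above.
# (1 MBps here means 1,048,576 bytes per second, as the text states.)

def pct_increase(new_mbps, old_mbps):
    """Rounded percentage increase of new over old throughput."""
    return round(100 * (new_mbps - old_mbps) / old_mbps)

inbound_jumbo    = pct_increase(1180, 680)    # ~74%, reported as ~70%
mixed_jumbo      = pct_increase(2080, 1240)   # ~68%, reported as ~70%
inbound_standard = pct_increase(1120, 615)    # ~82%, reported as ~80%
mixed_standard   = pct_increase(1680, 1180)   # ~42%, reported as ~40%
```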
OSA-Express4S Gigabit Ethernet is already capable of line speed for jumbo frames and standard frames.
Inbound Workload Queuing for Enterprise Extender - for improved scalability and performance: Inbound workload queuing (IWQ) for the OSA-Express features has been enhanced to differentiate and separate inbound Enterprise Extender traffic to a new input queue. The Enterprise Extender separation and processing associated with the Enterprise Extender input queue provides improved scalability and performance for Enterprise Extender.
With each input queue representing a unique type of workload, each having unique service and processing requirements, the IWQ function allows z/OS to use appropriate processing resources for each input queue. This approach allows multiple concurrent z/OS processing threads to process each unique input queue to avoid traditional resource contention. In a heavily mixed workload environment, this "off the wire" network traffic separation provided by OSA-Express IWQ reduces the conventional z/OS processing required to identify and separate unique workloads.
Inbound workload queuing for Enterprise Extender is supported by the OSA-Express4S and OSA-Express3 features when defined as CHPID types OSD or OSX. It is exclusive to zEC12, z196, and z114, and is supported by z/OS and by z/VM for guest exploitation.
Inbound workload queuing – for network performance: z/OS workloads are becoming more diverse; each type of workload may have unique service level requirements. The OSA-Express4S and OSA-Express-3 features support inbound workload queuing (IWQ), which creates multiple input queues and allows OSA to differentiate workloads "off the wire" and then assign work to a specific input queue (per device) to z/OS. With each input queue representing a unique type of workload, each having unique service and processing requirements, the IWQ function allows z/OS to preassign the appropriate processing resources for each input queue. This approach allows multiple concurrent z/OS processing threads to then process each unique input queue (workload), avoiding traditional resource contention. In a heavily mixed workload environment, this "off the wire" network traffic separation provided by an OSA-Express4S or OSA-Express3 feature IWQ is designed to reduce the conventional z/OS processing required to identify and separate unique workloads, which results in improved overall system performance and scalability.
A primary objective of IWQ is to provide improved performance for business-critical interactive workloads by reducing contention created by other types of workloads. The types of z/OS workloads that are identified and assigned to unique input queues are:
  1. z/OS Sysplex Distributor traffic - Network traffic that is associated with a distributed dynamic virtual IP address (DVIPA) is assigned a unique input queue, allowing the Sysplex Distributor traffic to be immediately distributed to the target host.
  2. z/OS bulk data traffic - Network traffic that is dynamically associated with a streaming (bulk data) TCP connection is assigned to a unique input queue, allowing the bulk data processing to be assigned the appropriate resources and isolated from critical interactive workloads.
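Conceptually, IWQ is a classifier in front of per-workload queues. The sketch below is an assumption for illustration only, not OSA firmware: the queue names, packet representation, and classifier rules are invented stand-ins for the behavior described above.

```python
# Conceptual sketch of inbound workload queuing: separate traffic
# "off the wire" into per-workload input queues before the host
# processes it. All names and rules here are illustrative.
from collections import defaultdict

def classify(packet):
    """Pick an input queue for a packet (simplified stand-in rules)."""
    if packet.get("dest_dvipa"):          # distributed DVIPA traffic
        return "sysplex_distributor"
    if packet.get("bulk_connection"):     # streaming (bulk data) TCP
        return "bulk_data"
    return "default"                      # interactive and other traffic

def enqueue_all(packets):
    """Route each packet to its queue; each queue can then be serviced
    by its own processing thread, avoiding contention."""
    queues = defaultdict(list)
    for p in packets:
        queues[classify(p)].append(p)
    return queues

queues = enqueue_all([{"dest_dvipa": True},
                      {"bulk_connection": True},
                      {}, {}])
```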
IWQ is supported on zEC12, z196, z114, and z10 and is exclusive to OSA-Express4S and OSA-Express3 CHPID types OSD and OSX (CHPID type OSX is exclusive to zEC12, z196, and z114). IWQ is also supported by the z/OS operating system and by z/VM for guests.
Query and display OSA configuration (Display OSAINFO) – for network management: Previously, the OSA-Express system architecture introduced the capability for operating systems to dynamically register the Open Systems Adapter (OSA) configuration. This approach significantly improved OSA-Express usability by reducing the burden placed on the system administrator to manually configure OSA-Express for each unique operating system configuration. Traditionally, the Open Systems Adapter Support Facility (OSA/SF) has provided the administrator with the ability to monitor the OSA configuration.
As additional complex functions have been added to OSA, the ability for the system administrator to display, monitor, and verify the specific current OSA configuration unique to each operating system has become more complex.
The OSA-Express4S and OSA-Express3 features support the capability for the operating system to directly query and display the current OSA configuration information (similar to OSA/SF). z/OS exploits this new OSA capability by introducing a new TCP/IP operator command called Display OSAINFO. Display OSAINFO allows the operator to monitor and verify the current OSA configuration, which helps to improve the overall management, serviceability, and usability of an OSA-Express4S or OSA-Express3 feature.
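As a hedged illustration of the command format (the TCP/IP procedure name TCPIP1 and port name GBEPORT1 are placeholder values, not names from this document), an operator might enter:

```
D TCPIP,TCPIP1,OSAINFO,PORTNAME=GBEPORT1
```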
Display OSAINFO is exclusive to zEC12, z196, z114, and z10 and to OSA-Express4S or OSA-Express3 CHPID types OSD, OSM, and OSX, and to the z/OS operating system and z/VM for guest exploitation. CHPID types OSM and OSX are exclusive to zEC12, z196, and z114 servers.

HiperSockets for virtual LANs between logical partitions within a zEnterprise system

The HiperSockets function, also known as internal queued direct input/output (internal QDIO or iQDIO), is an integrated function of the zEC12, z196, and z114 that provides users with attachments to up to 32 high-speed virtual LANs and is designed to minimize system and network overhead.
HiperSockets can be customized to accommodate varying traffic sizes. Because HiperSockets does not use an external network, it can free up system and network resources, helping to eliminate attachment costs while improving availability and performance.
HiperSockets eliminates the necessity of using I/O subsystem operations and traversing an external network connection to communicate between logical partitions in the same System z server. HiperSockets offers significant value in server consolidation by connecting many virtual servers, and can be used instead of certain coupling link configurations in a Parallel Sysplex environment.
HiperSockets internal networks on zEC12, z196, and z114 servers support two transport modes: Layer 2 (link layer) and Layer 3 (network layer). Traffic can be IPv4 or IPv6, or non-IP, such as AppleTalk, DECnet, IPX, NetBIOS, or SNA.
For more information on the Open Systems Adapter features and HiperSockets, refer to the IBM System z Connectivity Handbook, SG24-5444, which can be found at the following website: http://www.redbooks.ibm.com/

System z Integrated Information Processor (zIIP)

Friday, April 26, 2013

To assist in planning for zIIP, components of z/OS have been enhanced to report both projected zIIP usage and actual zIIP usage.
The projected usage function (PROJECTCPU) is intended to gather information about CPU time spent executing code which could potentially execute on zIIPs.
See Capacity Planning for zAAP and zIIP Specialty Engines
Note
  • Setting the IEAOPTxx parmlib member option PROJECTCPU=YES directs z/OS to record the amount of work that is eligible for zIIP (and zAAP) processors. SMF record type 72 subtype 3 is input to the RMF postprocessor. The Workload Activity report lists workloads by WLM service class. In this report, the field APPL% IIPCP indicates what percentage of a processor is zIIP eligible (APPL% AAPCP indicates what percentage of a processor is zAAP eligible). SMF record type 30 provides more detail on specific address spaces. Values as low as 25% for APPL% AAPCP or APPL% IIPCP can make a new zAAP or zIIP processor worthwhile.
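The 25% rule of thumb above can be expressed as simple arithmetic. This sketch is an assumption for illustration, not RMF itself; the function names and sample figures are invented:

```python
# Illustrative sketch: derive an APPL%-style figure from eligible CPU
# seconds over a reporting interval and apply the 25% rule of thumb.
def appl_pct(eligible_cpu_seconds, interval_seconds):
    """Percentage of one processor consumed by eligible work."""
    return 100 * eligible_cpu_seconds / interval_seconds

def ziip_worthwhile(eligible_cpu_seconds, interval_seconds, threshold=25):
    """True when eligible work meets the quoted 25% guideline."""
    return appl_pct(eligible_cpu_seconds, interval_seconds) >= threshold

# Example: 300 CPU seconds of zIIP-eligible work in a 15-minute
# interval is a third of one processor, above the 25% guideline.
worthwhile = ziip_worthwhile(300, 15 * 60)
```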
For customers that have installed zIIPs, the reporting functions can provide current zIIP execution information that can be used for optimizing current configurations or for predicting potential future usage.
To assist in planning for zIIP-Assisted IPSec, see the technical papers under Resources for basic sizing guidance for your workload.

See also: http://www-03.ibm.com/systems/z/hardware/features/ziip/gettingstarted/index.html

 
Stein Performance © 2012