HP P9000 RAID Manager 01.26.02 Release Notes (T1610-96040, November 2011)
HP StorageWorks P9000 RAID Manager Installation and Configuration Guide Abstract This guide describes and provides instructions to install and configure HP StorageWorks P9000 RAID Manager Software on HP StorageWorks P9500 disk arrays. The intended audience is a storage system administrator or authorized service provider with independent knowledge of HP StorageWorks P9000 disk arrays and the HP StorageWorks Remote Web Console.
Contents
1 Installation requirements ....... 5
    System requirements ....... 5
    Supported environments ....... 6
        Supported Business Copy environments ....... 6
        Supported Continuous Access Synchronous environments ....... 7
        Supported Continuous Access Asynchronous environments ....... 8
        Supported Continuous Access Journal environments ....... 9
        Supported Snapshot environments ....... 9
        Supported Data Retention environments ....... 10
        Supported Database Validator environments ....... 11
        Supported guest OS for VM ....... 12
        Supported IPv4 and IPv6 platforms ....... 13
    Requirements and restrictions for z/Linux ....... 14
    Requirements and restrictions for VM ....... 15
    ...
    Removing RAID Manager in a Windows environment ....... 45
    Removing RAID Manager in an OpenVMS environment ....... 46
    Removing the RAID Manager components ....... 46
5 Troubleshooting ....... 47
    Troubleshooting ....... 47
6 Support and other resources ....... 48
    Contacting HP ....... 48
    Subscription service ....... 48
    Related information ....... 48
    HP websites ....... 48
    Conventions for storage capacity values ....... 49
    Typographic conventions ....... 49
    HP product documentation survey ....... 50
A Fibre-to-SCSI address conversion ....... 51
    ...
1 Installation requirements
Unless otherwise specified, the term P9000 in this guide refers to the following disk array: P9500 Disk Array. The GUI illustrations in this guide were created using a Windows computer with the Internet Explorer browser. Actual windows may differ depending on the operating system and browser used. GUI contents also vary with licensed program products, storage system models, and firmware versions.
Host memory:
◦ Static memory capacity: minimum = 300 KB, maximum = 500 KB
◦ Dynamic memory capacity (set in HORCM_CONF): maximum = 500 KB per unit ID
Failover: RAID Manager supports several failover products, including FirstWatch, MC/ServiceGuard, HACMP, TruCluster, and ptx/CLUSTERS. See Table 2 (page 7).
Table 5 Supported platforms for Snapshot (continued)
Columns: Vendor, Operating system, Failover software, Volume Manager, I/O interface
AIX 5.1 – Fibre
Microsoft: Windows 2000, 2003, 2008 – Fibre
Windows 2003/2008 on IA64* – Fibre
Windows 2003/2008 on EM64T
Red Hat: Red Hat Linux 6.0, 7.0, 8.0 – ...
Table 6 Supported platforms for Data Retention (continued)
Columns: Vendor, Operating system, Volume Manager, I/O interface
IRIX64 6.5 – SCSI/Fibre
* IA64: using IA-32EL on IA64
** See "Troubleshooting" (page 47) for important information about RHEL 4.0 using kernel 2.6.9.XX.
Supported Database Validator environments
Table 7 Supported platforms for Database Validator
Columns: Vendor, Operating system, ...
Supported guest OS for VM
Table 8 Supported guest OS for VM
Columns: VM Vendor, Layer, Guest OS, RAID Manager support confirmation, Volume mapping, I/O interface
VMware ESX Server 2.5.1 or later | Guest | Windows 2008 R2 | Confirmed | using RDM* | Fibre
VMware ESX Server 2.5.1 or later | Guest | Windows 2003 SP1 | Confirmed | using RDM* | Fibre
...
Requirements and restrictions for z/Linux In the following example, z/Linux defines the Open Volumes that are connected to FCP as /dev/sd*. Also, the mainframe volumes (3390-xx) that are connected to FICON are defined as /dev/dasd*. Figure 1 Example of a RAID Manager configuration on z/Linux The restrictions for using RAID Manager with z/Linux are: Command device.
In the previous example, the Product_ID C019_3390_0A has the following associations:
◦ C019 indicates the Devno
◦ 3390 indicates the Dev_type
◦ 0A indicates the Dev_model
The following commands cannot be used because there is no PORT information:
raidscan -pd <device>, raidar -pd <device>, raidvchkscan -pd <device>
raidscan -find [conf], mkconf
Requirements and restrictions for VM
Restrictions for VMware ESX Server
About running on SVC. The ESX Server 3.0 SVC (service console) is a limited distribution of Linux based on Red Hat Enterprise Linux 3, Update 6 (RHEL 3 U6). The service console provides an execution environment to monitor and administer the entire ESX Server host. The RAID Manager user can run RAID Manager by installing “RAID Manager for Linux”...
hdisk2 -> NOT supported INQ. [AIX ] [VDASD ]
hdisk19 -> NOT supported INQ. [AIX ] [VDASD ]
The following commands discover the volumes by issuing a SCSI inquiry. These commands cannot be used, because there is no port/LDEV RAID information:
raidscan -pd <device>, raidar -pd <device>, raidvchkscan -pd <device>
raidscan -find [conf], mkconf.sh, inqraid
pairxxx -d[g] <device>, raidvchkdsp -d[g] <device>, raidvchkset -d[g] <device>
Lun sharing between guest OS and console OS. Neither a command device nor a normal LUN can be shared between a guest OS and a console OS.
Running RAID Manager on the console OS. The console OS (management OS) is a limited Windows environment, such as Windows 2008 Server Core, and the Windows standard driver is used.
For HP-UX (PA/IA) systems: /usr/lib/libc.sl
However, RAID Manager may need to use a different PATH for the IPv6 library. RAID Manager therefore also supports the following environment variables for specifying a PATH:
$IPV6_DLLPATH (valid only for HP-UX and Windows): This variable changes the default PATH for loading the IPv6 library.
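As a minimal sketch, the variable is simply exported into the environment before HORCM is started. The library path below is an illustrative assumption, not a value from this guide; substitute the actual IPv6 library location on your host:

```shell
# IPV6_DLLPATH is the RAID Manager variable described above; the path is a
# hypothetical example only -- use your platform's real IPv6 library location.
export IPV6_DLLPATH=/usr/lib/hpux32/libc.so
echo "IPV6_DLLPATH=$IPV6_DLLPATH"
```

HORCM reads the variable at startup, so it must be set in the shell (or startup script) that launches the daemon.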
(3) IPC method using MailBox driver
As an alternative to the UNIX domain socket for IPC (Inter Process Communication), RAID Manager uses the mailbox driver to enable communication between RAID Manager and HORCM. Therefore, if RAID Manager and HORCM are executing in different jobs (different terminals), you must redefine LNM$TEMPORARY_MAILBOX in the LNM$PROCESS_DIRECTORY table as follows:
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$TEMPORARY_MAILBOX LNM$GROUP
(4) Startup method for HORCM daemon
HORCM Shutdown inst 0 !!!
inst 1:
HORCM Shutdown inst 1 !!!
(5) Command device
RAID Manager uses the SCSI class driver to access the command device on the XP1024/XP128 Disk Array, because OpenVMS does not provide raw I/O devices as UNIX does, and defines "DG*,DK*,GK*" ...
-CLI or -CLIWP or -CLIWN or -CM for the inqraid options
Environmental variable names such as HORCMINST ... controlled by CRTL
You also need to define the following logical name in your login.com to distinguish uppercase from lowercase:
$ DEFINE DECC$ARGV_PARSE_STYLE ENABLE
$ SET PROCESS/PARSE_STYLE=EXTENDED
(10) Regarding using the spawn command
You can also start the HORCM process easily by using the spawn command.
After making the S-VOL write-enabled by using the pairsplit or horctakeover command, you must run the mcr sysman command in order to use the S-VOLs for backup or disaster recovery.
$ pairsplit -g CAVG -rw
$ mcr sysman
SYSMAN> ...
(5) Verify the physical mapping of the logical device.
$ HORCMINST := 0
$ raidscan -pi DKA145-151 -find
DEVICE_FILE  S/F  PORT   TARG  SERIAL  LDEV  PRODUCT_ID
DKA145            CL1-H        30009         OPEN-9-CM
DKA146            CL1-H        30009         OPEN-9
DKA147            CL1-H        30009         OPEN-9
DKA148            CL1-H        30009         OPEN-9
DKA149            CL1-H        ...
Using CCI with Hitachi and other storage systems
Table 11 (page 30) shows the two related controls between CCI and the RAID storage system type (Hitachi or HP XP). Figure 6 (page 31) shows the relationship among the application, CCI, and the RAID storage system.
Figure 6 Relationship among application, CCI, and storage system
2 Installing and configuring RAID Manager This chapter describes installing and configuring RAID Manager. Installing the RAID Manager hardware Installation of the hardware required for RAID Manager is performed by the user and the HP representative. To install the hardware required for RAID Manager operations: User: Make sure that the UNIX/PC server hardware and software are properly installed and configured.
be different on your platform. Please consult your operating system documentation (for example, UNIX man pages) for platform-specific command information. To install the RAID Manager software in the root directory: Insert the installation medium into the I/O device. Move to the root directory: # cd /. Copy all files from the installation medium using the cpio command: # cpio -idmu <...
Change the owner of the following RAID Manager files from the root user to the desired user name: /HORCM/etc/horcmgr All RAID Manager commands in the /HORCM/usr/bin directory All RAID Manager log directories in the /HORCM/log* directories Change the owner of the raw device file of the HORCM_CMD command device in the configuration definition file from the root user to the desired user name.
Windows installation
Make sure to install RAID Manager on all servers involved in RAID Manager operations. If a TCP/IP network is not established, install the Windows networking components and add the TCP/IP protocol. To install the RAID Manager software on a Windows system: If a previous version of RAID Manager is installed, remove it according to the instructions in "Removing RAID Manager in a Windows environment" ...
Because the ACL (Access Control List) of the Device Objects is set every time Windows starts up, the Device Objects are also required when Windows starts up. The ACL is also required when new Device Objects are created.
RAID Manager administrator tasks
Establish the HORCM (/etc/horcmgr) startup environment.
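As a sketch, establishing the per-instance startup environment typically amounts to setting the instance variables before invoking the startup script. HORCMINST and HORCM_CONF are RAID Manager environment variables; the configuration file path below is an illustrative assumption, and horcmstart itself is not invoked here:

```shell
# HORCMINST selects the RAID Manager instance number; HORCM_CONF points at
# that instance's configuration definition file (hypothetical path shown).
export HORCMINST=0
export HORCM_CONF=/etc/horcm0.conf
echo "instance $HORCMINST uses $HORCM_CONF"
```

With these variables in place, the administrator would then start HORCM for the instance and verify it with raidqry.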
Execute the following command: $ PRODUCT INSTALL RM /source=Device:[PROGRAM.RM.OVMS]/LOG - _$ /destination=SYS$POSIX_ROOT:[000000] Device:[PROGRAM.RM.OVMS] where HITACH-ARMVMS-RM-V0122-2-1.PCSI exists Verify installation of the proper version using the raidqry command: $ raidqry -h Model: RAID-Manager/OpenVMS Ver&Rev: 01-22-03/06 Usage: raidqry [options] Follow the requirements and restrictions in “Porting notice for OpenVMS”...
Figure 7 System configuration example and setting example of command device and virtual command device by in-band and out-of-band methods Setting the command device RAID Manager commands are issued to the RAID storage system via the command device. The command device is a user-selected, dedicated logical volume on the storage system that functions as the interface to the RAID Manager software on the UNIX/PC host.
Configure the device as needed before setting it as a command device. For example, use Virtual LUN or Virtual LVI to create a device that has 36 MB of storage capacity. For instructions, see the HP StorageWorks P9000 Provisioning for Open Systems User Guide. Launch LUN Manager, locate and select the device, and set the device as a command device.
Example 3 Setting example of virtual command device in configuration definition file (out-of-band method) HORCM_CMD #dev_name dev_name dev_name \\.\IPCMD-192.168.1.100-31001 About alternate command devices If RAID Manager receives an error notification in reply to a read or write request to a command device, the RAID Manager software can switch to an alternate command device, if one is defined.
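For example, listing a second dev_name on the HORCM_CMD line gives RAID Manager an alternate command device to switch to on error. The device paths below are illustrative assumptions, not values from this guide:

```
HORCM_CMD
#dev_name               dev_name
/dev/rdsk/c0t0d1        /dev/rdsk/c1t0d1
```

If a read or write to the first command device fails, RAID Manager can retry the request against the second.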
Creating/editing the configuration definition file The configuration definition file is a text file that is created and edited using any standard text editor (for example, UNIX vi editor, Windows Notepad). The configuration definition file defines correspondences between the server and the volumes used by the server. There is a configuration definition file for each host server.
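A minimal configuration definition file might look like the following sketch. The host names, serial numbers, port, target ID, and device paths are illustrative assumptions only:

```
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HOSTA         horcm     1000         3000

HORCM_CMD
#dev_name
/dev/rdsk/c0t0d1

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
oradb        oradev1    CL1-A   3          1     0

HORCM_INST
#dev_group   ip_address   service
oradb        HOSTB        horcm
```

Each host in the pair relationship carries its own file of this shape, with HORCM_INST pointing at the remote server.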
Table 12 Configuration (HORCM_CONF) parameters (continued)
Parameter                  Default   Type              Limit
(previous entry, cont.)    —         Numeric value     7 characters
Serial#                    None      Numeric value     12 characters
CU:LDEV(LDEV#)             None      Numeric value     6 characters
dev_name for HORCM_CMD     None      Character string  63 characters (recommended value = 8 characters or less)
1: Use decimal notation for numeric values (not hexadecimal).
3 Upgrading RAID Manager
To upgrade the RAID Manager software from CD-ROM, use the RMuninst script. For other media, use the following instructions to upgrade the RAID Manager software. The instructions may differ for your platform. Consult your operating system documentation (for example, UNIX man pages) for platform-specific command information.
When the Run window opens, enter A:\Setup.exe (where A: is the diskette or CD drive) in the Open pull-down list box. InstallShield opens. Follow the on-screen instructions to install the RAID Manager software. Reboot the Windows server, and verify that the correct version of the RAID Manager software is running on your system by executing the raidqry -h command.
4 Removing RAID Manager This chapter explains how to remove RAID Manager. Removing RAID Manager in a UNIX environment To remove the RAID Manager software: If you are discontinuing local and/or remote copy functions (for example, Business Copy, Continuous Access Synchronous), delete all volume pairs and wait until the volumes are in simplex status.
You can remove the RAID Manager software only when RAID Manager is not running. If RAID Manager software is running, shut down RAID Manager using the horcmshutdown command to ensure a normal end to all functions:
One RAID Manager instance: D:\HORCM\etc> horcmshutdown
Two RAID Manager instances: D:\HORCM\etc> ...
5 Troubleshooting This chapter provides troubleshooting information. Troubleshooting If you have a problem installing or upgrading the RAID Manager software, ensure that all system requirements and restrictions have been met. If you need to call HP Technical Support, provide as much information about the problem as possible, including the following: The circumstances surrounding the error or failure The exact content of any error messages displayed on the host system(s)
6 Support and other resources Contacting HP For worldwide technical support information, see the HP support website: http://www.hp.com/support Before contacting HP, collect the following information: Product model names and numbers Technical support registration number (if applicable) Product serial numbers Error messages Operating system type and revision level Detailed questions Subscription service...
WARNING! Indicates that failure to follow directions could result in bodily harm or death. CAUTION: Indicates that failure to follow directions could result in damage to equipment or data. IMPORTANT: Provides clarifying information or specific instructions. NOTE: Provides additional information. TIP: Provides helpful hints and shortcuts.
A Fibre-to-SCSI address conversion
Disks connected via Fibre Channel are displayed as SCSI disks on UNIX hosts and can be fully utilized. RAID Manager converts Fibre Channel physical addresses to SCSI target IDs (TIDs) using a conversion table (see Figure 9 (page 51)).
Figure 10 LUN configuration
RAID Manager uses absolute LUNs to scan a port, whereas the LUNs on a group are mapped for the host system, so the target ID and LUN indicated by the raidscan command differ from the target ID and ...
The conversion table for Windows systems is based on the Emulex driver. If a different Fibre Channel adapter is used, the target ID indicated by the raidscan command may differ from the target ID indicated by the Windows system.
Note on Table 3 for other platforms: Table 3 is used to indicate the LUN without a target ID for an unknown FC_AL conversion table or Fibre Channel fabric (Fibre Channel WWN).
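The table-driven conversion can be sketched as a simple lookup. The AL-PA values below are placeholders for illustration only, not entries from the tables in this appendix; consult Table 16/Table 17 for the real per-platform mappings:

```python
# Hypothetical AL-PA -> SCSI target ID table (illustrative values only).
ALPA_TO_TID = {0xEF: 0, 0xE8: 1, 0xE4: 2, 0xE2: 3}

def fibre_to_scsi_tid(alpa: int) -> int:
    """Convert a Fibre Channel AL-PA to a SCSI target ID via the table."""
    try:
        return ALPA_TO_TID[alpa]
    except KeyError:
        raise ValueError(f"AL-PA 0x{alpa:02X} not in conversion table")

print(fibre_to_scsi_tid(0xE4))  # prints 2
```

This mirrors the conversion RAID Manager performs internally: a fixed per-platform table indexed by the loop physical address.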
Table 16 Fibre address conversion table for Solaris and IRIX systems (Table 1) (continued)
[AL-PA to target ID values not reproduced in this extract.]
Table 17 Fibre address conversion table for Windows systems (Table 2)
Columns: C5 (PhId5), C4 (PhId4), C3 (PhId3), C2 (PhId2), C1 (PhId1)
Poll: The interval for monitoring paired volumes. To reduce the HORCM daemon load, make this interval longer. If set to -1, the paired volumes are not monitored. The value of -1 is specified when two or more RAID Manager instances run on a single machine.
Timeout: The time-out period of communication with the remote server.
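For instance, an HORCM_MON entry that disables monitoring (poll = -1) for a machine running multiple instances might look like this sketch (host name, service, and timeout are illustrative assumptions):

```
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HOSTA         horcm0    -1           3000
```

With poll set to -1, the instance does not poll pair status on its own, which avoids duplicated monitoring load when several instances share one host.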
If Windows has two different array models that share the same serial number, fully define the serial number, LDEV#, port, and host group for the CMDDEV.
For use under a multipath driver. Specifies any port as the command device for Serial#30095, LDEV#250:
\\.\CMD-30095-250
For full specification:
\\.\CMD-30095-250:/dev/rdsk/
Example of a full specification. Specifies the command device for Serial#30095, LDEV#250 connected to Port CL1-A, Host group#1:
\\.\CMD-30095-250-CL1-A-1:/dev/rdsk/
Other examples:
\\.\CMD-30095-250-CL1:/dev/rdsk/
\\.\CMD-30095-250-CL2
\\.\CMD-30095:/dev/rdsk/c1
\\.\CMD-30095:/dev/rdsk/c2
\\.\IPCMD-158.214.135.113-31001
(3) HORCM_DEV
The device parameter (HORCM_DEV) defines the RAID storage system device addresses for the paired logical volume names.
The following ports can be specified only for the XP12000 Disk Array/XP10000 Disk Array and XP24000/XP20000 Disk Array: Basic, Option, Option, Option.
Target ID: Defines the SCSI/Fibre target ID number of the physical volume on the specified port. See "Fibre-to-SCSI address conversion" (page 51) for further information on Fibre address conversion.
MU# for HORC/Continuous Access Journal: Defines the mirror unit number (0 - 3) of one of four possible HORC/Cnt Ac-J bitmap associations for an LDEV. If this number is omitted, it is assumed to be zero (0). The Continuous Access Journal mirror description is described in the MU# column by adding “h”...
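The "h"-prefixed MU# notation can be sketched as a HORCM_DEV entry for a Continuous Access Journal mirror. All values below are illustrative assumptions:

```
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
orajnl       oradev1    CL1-A   3          1     h1
```

The plain numeric form (0-3) describes the HORC bitmap association; the "h" prefix marks the entry as a Continuous Access Journal mirror description.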
# horcctl -ND -g IP46G
Current network address = 158.214.135.106, services = 50060
# horcctl -NC -g IP46G
Changed network address(158.214.135.106,50060 -> fe80::39e7:7667:9897:2142,50060)
For IPv6 only, the configuration must be defined as HORCM/IPv6.
Figure 15 Network configuration for IPv6
It is possible to communicate between HORCM/IPv4 and HORCM/IPv6 using IPv4 mapped to IPv6.
In the case of mixed IPv4 and IPv6, HORCM/IPv4 and HORCM/IPv6 can communicate using IPv4 mapped to IPv6, and HORCM/IPv6 instances can communicate using native IPv6.
Figure 17 Network configuration for mixed IPv4 and IPv6
(5) HORCM_LDEV
The HORCM_LDEV parameter is used for specifying stable LDEV# and Serial# as the physical volumes corresponding to the paired logical volume names.
oradb dev1 30095 02:40
oradb dev2 30095 02:41
Specifying "CU:LDEV" in hexadecimal, as used by the SVP or Remote Web Console. Example for LDEV# 260: 01:04
Specifying "LDEV" in decimal, as used by the RAID Manager inqraid command. Example for LDEV# 260: 260
Specifying "LDEV" ...
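The notations can be mixed in one HORCM_LDEV section; the following sketch shows both the hexadecimal CU:LDEV form and the decimal form (group and device names, serial number, and LDEV values are illustrative assumptions):

```
HORCM_LDEV
#dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
oradb        dev1       30095     02:40            0
oradb        dev2       30095     577              0
```

Here 577 is the decimal equivalent of CU:LDEV 02:41, so both lines address LDEVs on the same array by stable LDEV# and Serial# rather than by host device path.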
The command device is defined using the system raw device name (character-type device file name). For example, the command devices for the following figure would be: HP-UX: HORCM_CMD of HOSTA = /dev/rdsk/c0t0d1 HORCM_CMD of HOSTB = /dev/rdsk/c1t0d1 Solaris: HORCM_CMD of HOSTA = /dev/rdsk/c0t0d1s2 HORCM_CMD of HOSTB = /dev/rdsk/c1t0d1s2 For Solaris operations with RAID Manager version 01-09-03/04 and higher, the command device does not need to be labeled during format command.
Example of RAID Manager commands with HOSTA: Designate a group name (Oradb) with the local host as the P-VOL:
# paircreate -g Oradb -f never
This command creates pairs for all LUs assigned to group Oradb in the configuration definition file (two pairs for the configuration in the above figure). Designate a volume name (oradev1) with the local host as the P-VOL:
where XX = device number assigned by Tru64 UNIX DYNIX/ptx: HORCM_CMD of HOSTA = /dev/rdsk/sdXX HORCM_CMD of HOSTB = /dev/rdsk/sdXX where XX = device number assigned by DYNIX/ptx Windows 2008/2003/2000: HORCM_CMD of HOSTA = \\.\CMD-Ser#-ldev#-Port# HORCM_CMD of HOSTB = \\.\CMD-Ser#-ldev#-Port# Windows NT: HORCM_CMD of HOSTA = \\.\CMD-Ser#-ldev#-Port# HORCM_CMD of HOSTB = \\.\CMD-Ser#-ldev#-Port#...
Example of RAID Manager commands with HOSTA: Designate a group name (Oradb) with the local host as the P-VOL:
# paircreate -g Oradb -f never
This command creates pairs for all LUs assigned to group Oradb in the configuration definition file (two pairs for the configuration in the above figure).
For Solaris operations with RAID Manager version 01-09-03/04 and higher, the command device does not need to be labeled during format command. AIX: HORCM_CMD of HORCMINST0 = /dev/rhdiskXX HORCM_CMD of HORCMINST1 = /dev/rhdiskXX where XX = device number assigned by AIX Tru64 UNIX: HORCM_CMD of HORCMINST0 = /dev/rrzbXXc HORCM_CMD of HORCMINST1 = /dev/rrzbXXc where XX = device number assigned by Tru64 UNIX...
Figure 20 Continuous Access Synchronous configuration example for two instances
Example of RAID Manager commands with Instance-0 on HOSTA: When the command execution environment is not set, set an instance number. For C shell:
# setenv HORCMINST 0
For Windows:
set HORCMINST=0
Designate a group name (Oradb) with the local instance as the P-VOL:
# paircreate -g Oradb -f never
This command creates pairs for all LUs assigned to group Oradb in the configuration definition file (two pairs for the configuration in the above figure). Designate a volume name (oradev1) with the local instance as the P-VOL:
For Solaris operations with RAID Manager version 01-09-03/04 and higher, the command device does not need to be labeled during format command. AIX: HORCM_CMD of HOSTA = /dev/rhdiskXX HORCM_CMD of HOSTB = /dev/rhdiskXX HORCM_CMD of HOSTC = /dev/rhdiskXX HORCM_CMD of HOSTD = /dev/rhdiskXX where XX = device number assigned by AIX Tru64 UNIX: HORCM_CMD of HOSTA = /dev/rrzbXXc HORCM_CMD of HOSTB = /dev/rrzbXXc...
Figure 21 Business Copy configuration example (continues in next figure)
Figure 22 Business Copy configuration example (continued)
Example of RAID Manager commands with HOSTA (group Oradb): When the command execution environment is not set, set the HORCC_MRCF environment variable. For C shell:
# setenv HORCC_MRCF 1
Windows:
set HORCC_MRCF=1
Designate a group name (Oradb) with the local host as the P-VOL:
# paircreate -g Oradb -vl
This command creates pairs for all LUs assigned to group Oradb in the configuration definition file (two pairs for the configuration in the above figure). Designate a volume name (oradev1) with the local host as the P-VOL:
For Windows:
set HORCC_MRCF=1
Designate a group name (Oradb1) with the local host as the P-VOL:
# paircreate -g Oradb1 -vl
This command creates pairs for all LUs assigned to group Oradb1 in the configuration definition file (two pairs for the configuration in the above figure). Designate a volume name (oradev1-1) with the local host as the P-VOL:
For Windows:
set HORCC_MRCF=1
Designate a group name (Oradb2) with the local host as the P-VOL:
# paircreate -g Oradb2 -vl
This command creates pairs for all LUs assigned to group Oradb2 in the configuration definition file (two pairs for the configuration in the above figure). Designate a volume name (oradev2-1) with the local host as the P-VOL:
For Solaris operations with RAID Manager version 01-09-03/04 and higher, the command device does not need to be labeled during format command. AIX: HORCM_CMD of HORCMINST0 = /dev/rhdiskXX HORCM_CMD of HORCMINST1 = /dev/rhdiskXX where XX = device number assigned by AIX Tru64 UNIX: HORCM_CMD of HORCMINST0 = /dev/rrzbXXc HORCM_CMD of HORCMINST1 = /dev/rrzbXXc where XX = device number assigned by Tru64 UNIX...
Figure 23 Business Copy configuration example with cascade pairs
See "Configuration definition for cascading volume pairs" (page 85) for more information on Business Copy cascading configurations.
Example of RAID Manager commands with Instance-0 on HOSTA: When the command execution environment is not set, set an instance number. For C shell:
# setenv HORCMINST 0
# setenv HORCC_MRCF 1
For Windows:
set HORCMINST=0
set HORCC_MRCF=1
Designate a group name (Oradb) with the local instance as the P-VOL:
# paircreate -g Oradb
# paircreate -g Oradb1
# paircreate -g oradb -pvol <Ldevgrp>
These commands create pairs for all LUs assigned to groups Oradb and Oradb1 in the configuration definition file.
For Solaris operations with RAID Manager version 01-09-03/04 and higher, the command device does not need to be labeled during format command. AIX: HORCM_CMD of HOSTA(/etc/horcm.conf) ... /dev/rhdiskXX HORCM_CMD of HOSTB(/etc/horcm.conf) ... /dev/rhdiskXX HORCM_CMD of HOSTB(/etc/horcm0.conf)... /dev/rhdiskXX where XX = device number assigned by AIX Tru64 UNIX: HORCM_CMD of HOSTA(/etc/horcm.conf) ...
Figure 24 Continuous Access Synchronous/Business Copy configuration example with cascade pairs
Example of RAID Manager commands with HOSTA and HOSTB: Designate a group name (Oradb) in the Continuous Access Synchronous environment of HOSTA:
# paircreate -g Oradb
Designate a group name (Oradb1) in the Business Copy environment of HOSTB. When the command execution environment is not set, set HORCC_MRCF.
These commands create pairs for all LUs assigned to groups Oradb and Oradb1 in the configuration definition file (four pairs for the configuration in the above figures). Designate a group name and display the pair status on HOSTA:
# pairdisplay -g oradb -m cas
Group PairVol(L/R) (Port#,TID,LU-M), Seq#, LDEV#.P/S, Status, Seq#, P-LDEV# M
oradb ...
Figure 29 Pairdisplay -d on HORCMINST0
Cascading connections for Continuous Access Synchronous and Business Copy
The cascading connections for Continuous Access Synchronous/Business Copy can be set up by using three configuration definition files that describe the cascading volume entity in a configuration definition file on the same instance.
Figure 31 Pairdisplay for Continuous Access Synchronous on HOST1
Figure 32 Pairdisplay for Continuous Access Synchronous on HOST2 (HORCMINST)
Figure 33 Pairdisplay for Business Copy on HOST2 (HORCMINST)
Figure 34 Pairdisplay for Business Copy on HOST2 (HORCMINST0)
Glossary
AL-PA: Arbitrated loop physical address.
Business Copy: HP StorageWorks P9000 or XP Business Copy. An HP StorageWorks application that provides volume-level, point-in-time copies in the disk array.
Circuit Breaker.
Command-line interface: An interface comprised of various commands which are used to control operating system responses.
to be associated with 1 to 36 LDEVs. Essentially, LUSE makes it possible for applications to access a single large pool of storage.
MCU: Master control unit.
MSCS: Microsoft Cluster Service.
MU: Mirror unit.
Out-of-Band method: This method transfers a command from the client or the server to the virtual command device in the SVP via LAN, assigns a RAID Manager operation instruction to the DKC, and executes it.
P-VOL ...