Sometimes, to speed up the setup of identical disk systems, you might decide to use the sscs CLI rather than the Common Array Manager GUI to configure them.
All syntax mentioned here can be found in http://dlc.sun.com/pdf/820-4192-12/820-4192-12.pdf
First, log in to your CAM server using sscs
root@c14-48 # sscs login -h c14-48 -u root
The command only reports failures; if you get no response, the login succeeded and you're connected to your CAM server until the session times out.
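Since a successful login prints nothing, a script has to rely on the exit status instead. A minimal sketch, assuming sscs exits non-zero on a failed login (worth verifying against your CAM version); the host and user are the ones from this example:

```shell
# Hedged sketch: wrap the login so a script can bail out early.
# Assumes sscs exits non-zero when the login fails.
cam_login() {
    if sscs login -h c14-48 -u root; then
        echo "login ok"
    else
        echo "login failed" >&2
        return 1
    fi
}
```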
Select the profile for the storage you are setting up. You can get a list of profiles on your array (esal-2540-2 in my example) using
root@c14-48 # sscs list -a esal-2540-2 profile
Profile: Oracle_OLTP_HA
Profile: Oracle_DSS
Profile: Oracle_9_VxFS_HA
Profile: Sun_SAM-FS
Profile: High_Performance_Computing
Profile: Oracle_9_VxFS
Profile: Oracle_10_ASM_VxFS_HA
Profile: Random_1
Profile: Sequential
Profile: Sun_ZFS
Profile: Sybase_OLTP_HA
Profile: Sybase_DSS
Profile: Oracle_8_VxFS
Profile: Mail_Spooling
Profile: Microsoft_NTFS_HA
Profile: Microsoft_Exchange
Profile: Sybase_OLTP
Profile: Oracle_OLTP
Profile: VxFS
Profile: Default
Profile: NFS_Mirroring
Profile: NFS_Striping
Profile: Microsoft_NTFS
Profile: High_Capacity_Computing
You can get more detail on a profile using
root@c14-48 # sscs list -a esal-2540-2 profile Oracle_9_VxFS_HA
Profile: Oracle_9_VxFS_HA
Profile In Use: no
Factory Profile: yes
Description:
Oracle 9 over VxFS (High Availability)
RAID Level: 1
Segment Size: 128 KB
Read Ahead: on
Optimal Number of Drives: variable
Disk Type: SAS
Dedicated Hot Spare: no
The next step is to create a pool, 'bt-poc'.
The syntax for the command is
sscs create -a <array-name> -p <profile-name> pool <pool-name>
root@c14-48 # sscs create -a esal-2540-2 -p Oracle_9_VxFS_HA pool bt-poc
Logically, at this point you would want to create a virtual disk. You can't create one directly, but one is created implicitly as part of volume creation.
sscs create -a <array-name> -p <pool-name> -s <volume-size> -n <number-of-disks> volume <volume-name>
root@c14-48 # sscs create -a esal-2540-2 -p bt-poc -s 15gb -n 6 volume vol1
Check the name of your new virtual disk using
root@c14-48 # sscs list -a esal-2540-2 vdisk
Virtual Disk: 1
root@c14-48 # sscs list -a esal-2540-2 vdisk 1
Virtual Disk: 1
Status: Optimal
State: Ready
Number of Disks: 6
RAID Level: 1
Total Capacity: 836.690 GB
Configured Capacity: 15.000 GB
Available Capacity: 821.690 GB
Array Name: esal-2540-2
Array Type: 2540
Disk Type: SAS
Maximal Volume Size: 821.690 GB
Associated Disks:
Disk: t85d01
Disk: t85d02
Disk: t85d03
Disk: t85d04
Disk: t85d05
Disk: t85d06
Associated Volumes:
Volume: vol1
Since I need four more identical volumes on this virtual disk, I can be lazy and script the creation.
root@c14-48 # for i in 2 3 4 5
do
sscs create -a esal-2540-2 -p bt-poc -s 15gb -v 1 volume vol${i}
done
The sscs commands are asynchronous - they return before the action has completed, so long-running tasks like creating a RAID 5 volume will still be running while you create your volumes on it.
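Because of that, a script that needs the virtual disk before continuing has to poll for it. A rough sketch, assuming the "State: Ready" line format shown in the vdisk listing above (the function name, retry count, and sleep interval are mine):

```shell
# Hypothetical helper: wait until a vdisk reports State: Ready.
# Parses the "State:" line from the sscs listing shown earlier.
wait_for_vdisk() {
    vdisk="$1"
    attempt=0
    while [ "$attempt" -lt 10 ]; do
        state=$(sscs list -a esal-2540-2 vdisk "$vdisk" | awk '$1 == "State:" {print $2}')
        if [ "$state" = "Ready" ]; then
            echo "vdisk $vdisk ready"
            return 0
        fi
        attempt=$((attempt + 1))
        sleep 30
    done
    echo "vdisk $vdisk still not ready" >&2
    return 1
}
```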
You can delete any mix-ups simply
root@c14-48 # sscs delete -a esal-2540-2 volume vol3
At this point, you can either map your volumes to the default storage domain, and all hosts connected to the storage will be able to see all the volumes, or you can do LUN mapping and limit which hosts can see which volumes.
Map to the default storage domain
for i in 1 2 3 4 5
do
sscs map -a esal-2540-2 volume vol${i}
done
Create host-based mappings
Create your hosts
root@c14-48 # sscs create -a esal-2540-2 host dingo
root@c14-48 # sscs create -a esal-2540-2 host chief
Create the initiators that map to the World Wide Name (WWN) of the Host Bus Adapter (HBA) ports of each machine.
First find your WWNs - you can do this either by looking on the storage switch, if you have one, or on the hosts that will be accessing the storage.
On the host, issue the command fcinfo hba-port and look for the HBA Port WWN entries associated with the correct Fibre Channel devices.
dingo # fcinfo hba-port
HBA Port WWN: 21000003ba9b3679
OS Device Name: /dev/cfg/c1
Manufacturer: QLogic Corp.
Model: 2200
Firmware Version: 2.01.145
FCode/BIOS Version: ISP2200 FC-AL Host Adapter Driver: 1.15 04/03/22
Type: L-port
State: online
Supported Speeds: 1Gb
Current Speed: 1Gb
Node WWN: 20000003ba9b3679
HBA Port WWN: 210000e08b09965e
OS Device Name: /dev/cfg/c3
Manufacturer: QLogic Corp.
Model: 375-3108-xx
Firmware Version: 3.03.27
FCode/BIOS Version: fcode: 1.13;
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 2Gb
Node WWN: 200000e08b09965e
HBA Port WWN: 210100e08b29965e
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3108-xx
Firmware Version: 3.03.27
FCode/BIOS Version: fcode: 1.13;
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 2Gb
Node WWN: 200100e08b29965e
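Rather than reading the WWNs off the screen, you can pull them out of fcinfo programmatically. A small sketch (the function name is mine) that matches only the "HBA Port WWN:" lines in the output format above:

```shell
# Sketch: list every HBA Port WWN reported by fcinfo, one per line.
# Matches only "HBA Port WWN:" lines, not the "Node WWN:" lines.
list_port_wwns() {
    fcinfo hba-port | awk '$1 == "HBA" && $3 == "WWN:" {print $4}'
}
```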
root@c14-48 # sscs create -a esal-2540-2 -w 210000e08b09965e -h dingo initiator dingo-1
root@c14-48 # sscs create -a esal-2540-2 -w 210100e08b29965e -h dingo initiator dingo-2
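When a host has several ports, the two create commands above generalize to a loop. A sketch (the function and the name-suffixing scheme are mine, following the dingo-1/dingo-2 pattern):

```shell
# Sketch: register each WWN passed on the command line as an
# initiator for the given host, naming them <host>-1, <host>-2, ...
create_initiators() {
    host="$1"
    shift
    i=0
    for wwn in "$@"; do
        i=$((i + 1))
        sscs create -a esal-2540-2 -w "$wwn" -h "$host" initiator "${host}-${i}"
    done
}
```

For example, create_initiators dingo 210000e08b09965e 210100e08b29965e reproduces the two commands above.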
root@c14-48 # sscs map -a esal-2540-2 -v vol1,vol3 host dingo
And there you have it - you've created and mapped your volumes without needing the web interface.