Wednesday 27 January 2010

Formatting a 2540 using SSCS

Sometimes, to speed up the setup of identical disk systems, you might decide to use SSCS rather than the Common Array Manager GUI to configure them.

All syntax mentioned here can be found in http://dlc.sun.com/pdf/820-4192-12/820-4192-12.pdf

First, log in to your CAM server using sscs

root@c14-48 # sscs login -h c14-48 -u root

This only tells you if the login has failed; if you don't get a response, everything is OK and you're now connected to your CAM server until the session times out.

Select the profile for the storage you are setting up. You can get a list of profiles on your array (esal-2540-2 in my example) using

root@c14-48 # sscs list -a esal-2540-2 profile
Profile: Oracle_OLTP_HA
Profile: Oracle_DSS
Profile: Oracle_9_VxFS_HA
Profile: Sun_SAM-FS
Profile: High_Performance_Computing
Profile: Oracle_9_VxFS
Profile: Oracle_10_ASM_VxFS_HA
Profile: Random_1
Profile: Sequential
Profile: Sun_ZFS
Profile: Sybase_OLTP_HA
Profile: Sybase_DSS
Profile: Oracle_8_VxFS
Profile: Mail_Spooling
Profile: Microsoft_NTFS_HA
Profile: Microsoft_Exchange
Profile: Sybase_OLTP
Profile: Oracle_OLTP
Profile: VxFS
Profile: Default
Profile: NFS_Mirroring
Profile: NFS_Striping
Profile: Microsoft_NTFS
Profile: High_Capacity_Computing

You can get more detail on a profile using

root@c14-48 # sscs list -a esal-2540-2 profile Oracle_9_VxFS_HA
Profile: Oracle_9_VxFS_HA
Profile In Use: no
Factory Profile: yes
Description:
Oracle 9 over VxFS (High Availability)
RAID Level: 1
Segment Size: 128 KB
Read Ahead: on
Optimal Number of Drives: variable
Disk Type: SAS
Dedicated Hot Spare: no

The next step is to create my pool 'bt-poc'

The syntax for the command is

sscs create -a <array-name> -p <profile-name> pool <pool-name>
root@c14-48 # sscs create -a esal-2540-2 -p Oracle_9_VxFS_HA pool bt-poc

Logically, at this point you would create a virtual disk; you can't create one directly, but one is created implicitly as part of the volume creation.

sscs create -a <array-name> -p <pool-name> -s <size> -n <number-of-disks> volume <volume-name>

root@c14-48 # sscs create -a esal-2540-2 -p bt-poc -s 15gb -n 6 volume vol1

Check the name of your new virtual disk using

root@c14-48 # sscs list -a esal-2540-2 vdisk
Virtual Disk: 1

root@c14-48 # sscs list -a esal-2540-2 vdisk 1
Virtual Disk: 1
Status: Optimal
State: Ready
Number of Disks: 6
RAID Level: 1
Total Capacity: 836.690 GB
Configured Capacity: 15.000 GB
Available Capacity: 821.690 GB
Array Name: esal-2540-2
Array Type: 2540
Disk Type: SAS
Maximal Volume Size: 821.690 GB
Associated Disks:
Disk: t85d01
Disk: t85d02
Disk: t85d03
Disk: t85d04
Disk: t85d05
Disk: t85d06
Associated Volumes:
Volume: vol1

Since I need another four identical volumes on this virtual disk, I can be lazy and just script the creation.

root@c14-48 # for i in 2 3 4 5
do
sscs create -a esal-2540-2 -p bt-poc -s 15gb -v 1 volume vol${i}
done

The sscs commands are asynchronous - they return before the action is completed, so a long-running task like initialising a RAID 5 virtual disk will still be running while you create your volumes on it.
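
If you want to wait for a virtual disk to finish initialising before carrying on, a rough sketch like this will do - it just polls the Status line from the vdisk listing shown above, and the 30-second interval is an arbitrary choice:

while :
do
    status=`sscs list -a esal-2540-2 vdisk 1 | grep '^Status:'`
    echo "$status"
    echo "$status" | grep Optimal > /dev/null && break
    sleep 30
done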

You can delete any mix-ups simply:

root@c14-48 # sscs delete -a esal-2540-2 volume vol3


At this point, you can either map your volumes to the default storage domain, and all hosts connected to the storage will be able to see all the volumes, or you can do LUN mapping and limit which hosts can see which volumes.

Map to the default storage domain

for i in 1 2 3 4 5
do
sscs map -a esal-2540-2 volume vol${i}
done
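
It's worth a quick sanity check that all the volumes are present - I'm assuming here that listing volumes works the same way as the vdisk and profile listings above:

root@c14-48 # sscs list -a esal-2540-2 volume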

Create host based mappings

Create your hosts

root@c14-48 # sscs create -a esal-2540-2 host dingo 
root@c14-48 # sscs create -a esal-2540-2 host chief 

Create the initiators that map to the World Wide Name (WWN) of the Host Bus Adaptor (HBA) ports on each machine.

First find your WWN - you can do this either by looking on the storage switch if you have one, or on the hosts that will be accessing the storage.

Looking on the host, issue the command fcinfo hba-port and look for the HBA Port WWN entries associated with the correct fibre channel devices - in this example, the two 2Gb N-port entries on c3 and c4.

dingo # fcinfo hba-port
HBA Port WWN: 21000003ba9b3679
OS Device Name: /dev/cfg/c1
Manufacturer: QLogic Corp.
Model: 2200
Firmware Version: 2.01.145
FCode/BIOS Version: ISP2200 FC-AL Host Adapter Driver: 1.15 04/03/22
Type: L-port
State: online
Supported Speeds: 1Gb
Current Speed: 1Gb
Node WWN: 20000003ba9b3679
HBA Port WWN: 210000e08b09965e
OS Device Name: /dev/cfg/c3
Manufacturer: QLogic Corp.
Model: 375-3108-xx
Firmware Version: 3.03.27
FCode/BIOS Version: fcode: 1.13;
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 2Gb
Node WWN: 200000e08b09965e
HBA Port WWN: 210100e08b29965e
OS Device Name: /dev/cfg/c4
Manufacturer: QLogic Corp.
Model: 375-3108-xx
Firmware Version: 3.03.27
FCode/BIOS Version: fcode: 1.13;
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 2Gb
Node WWN: 200100e08b29965e

root@c14-48 # sscs create -a esal-2540-2 -w 210000e08b09965e -h dingo initiator dingo-1 
root@c14-48 # sscs create -a esal-2540-2 -w 210100e08b29965e -h dingo initiator dingo-2 

root@c14-48 # sscs map -a esal-2540-2 -v vol1,vol3 host dingo
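
The same pattern covers the second host. The WWN placeholders and choice of volumes below are purely illustrative - substitute the values from chief's own fcinfo output and whichever volumes it should see:

root@c14-48 # sscs create -a esal-2540-2 -w <chief-wwn-1> -h chief initiator chief-1
root@c14-48 # sscs create -a esal-2540-2 -w <chief-wwn-2> -h chief initiator chief-2
root@c14-48 # sscs map -a esal-2540-2 -v vol2,vol4,vol5 host chief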

And there you have it - you've formatted and mapped your volumes without needing the web interface.


How to find out if you're accessing a RAC database

How can you tell if you're accessing a RAC database? Simple!


You can tell if it is a cluster database by looking to see if the cluster database parameter is set:-

SQL> select name, value from v$parameter where name='cluster_database';

NAME                  VALUE
--------------------- ---------------------
cluster_database      TRUE


or


set serveroutput on
BEGIN
IF dbms_utility.is_cluster_database THEN
dbms_output.put_line('Running in SHARED/RAC mode.');
ELSE
dbms_output.put_line('Running in EXCLUSIVE mode.');
END IF;
END;
/


You can tell how many instances are active by:-

SQL> SELECT * FROM V$ACTIVE_INSTANCES;

INST_NUMBER INST_NAME
----------- -----------------------
          1 c1718-6-45:AXIOSS1
          2 c1718-6-46:AXIOSS2

VIPCA fails when installing Oracle 10g CRS

So, you're onto your last node of a 10.2.0.1 CRS install and you see this message...

Running vipca(silent) for configuring nodeapps
The given interface(s), "nxge0" is not public. Public interfaces should be used to configure virtual IPs.

This can be caused by your public network interface having a 10.x.x.x address - vipca assumes that this is a private network.

Log in as root and run vipca from $CRS_HOME/bin - this will allow you to set the interface types correctly.
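
As a rough sketch of that workaround (vipca is a GUI tool, so you'll need an X display - the DISPLAY value here is just an example):

root@node1 # DISPLAY=my-workstation:0.0; export DISPLAY
root@node1 # cd $CRS_HOME/bin
root@node1 # ./vipca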

Using Oracle 10.2.0.3 on SPARC?

There is a really useful patch if you've got db_block_checksum=TRUE (the default): it reduces the CPU utilization of the checksum routine. The patch number is 6814520 - annoyingly the bug report is hidden in MetaLink, but if you do a 'simple' search for the patch and provide the number, you'll be able to download it.

The patch is rolled into 10.2.0.4, but it's not documented as part of the patch notes.