Today I had to do some work on a storage system where new storage had been installed and the old one had to be unconfigured from the system.
By running
sh#> vxprint -g $DISKGROUP -rtLv $VOLUMENAME
I checked which mirrors (plexes) exist for the volume.
I selected the mirrors located on the old storage and dropped them from the volume with
sh#> vxplex -o rm dis $PLEXNAME
After deleting the older plexes I noticed that the naming convention no longer looked pretty: when a plex is created, VxVM appends a sequential number to the end of the plex name, so I decided to rename the remaining plexes.
This was done with:
sh#> vxedit -g $DISKGROUP -p rename $OLD_PLEX_NAME $NEW_PLEX_NAME
After that, everything looked fine again.
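Put together, the cleanup can be sketched roughly like this. This is a minimal sketch: the diskgroup, volume, and plex names are hypothetical placeholders, and DRYRUN=echo only prints each command so you can review it before running anything for real.

```shell
#!/bin/sh
# Hypothetical names -- substitute what vxprint actually reports.
DISKGROUP=mydg
VOLUME=vol01
OLD_PLEX=vol01-03       # the plex sitting on the old storage
SURVIVOR_PLEX=vol01-02  # the plex that stays
NEW_NAME=vol01-01       # prettier name once the old plexes are gone

DRYRUN=echo   # set DRYRUN= (empty) to actually execute the commands

# 1. Show the volume's plex layout to pick the plexes on the old storage.
$DRYRUN vxprint -g $DISKGROUP -rtLv $VOLUME

# 2. Dissociate and remove the plex that lives on the old storage.
$DRYRUN vxplex -o rm dis $OLD_PLEX

# 3. Rename the surviving plex back to the usual naming scheme.
$DRYRUN vxedit -g $DISKGROUP -p rename $SURVIVOR_PLEX $NEW_NAME
```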
What's going on in Mesiols work day
Tuesday, December 2, 2008
Thursday, November 27, 2008
Today's Storage Day
Every time something new ;)
I like competitions.
So today I had to do some storage stuff: setting up two new StorTeks in a SUN Cluster.
Two SUN StorTek 6140 (to replace 2 old T3's and 2 old 6120)
Two E2900s, Solaris 9, SUN Cluster 3.0, VxVM 3.5
Both hosts directly connected to both storages via two HBAs (QLA26xx).
After running 'devfsadm -Cv' to get the device entries updated, I called 'luxadm probe' to check whether the LUNs were found correctly on the host.
'vxdmpadm listctlr all' showed the newly connected controllers.
I enabled the new paths with 'vxdmpadm enable $CONTROLLER'.
I labeled the new disks with format and they showed up in state online in the 'vxdisk list' output.
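The discovery steps can be sketched as one sequence. The controller name below is a hypothetical placeholder taken from what 'vxdmpadm listctlr all' would report, DRYRUN=echo only prints the commands, and note that some VxVM versions expect the 'ctlr=' form for the enable subcommand.

```shell
#!/bin/sh
DRYRUN=echo        # set DRYRUN= (empty) to actually execute the commands
CONTROLLER=c3      # hypothetical controller name from 'vxdmpadm listctlr all'

# Rebuild the Solaris device tree so the new LUNs get device nodes.
$DRYRUN devfsadm -Cv
# Verify the host actually sees the new LUNs.
$DRYRUN luxadm probe
# List the DMP controllers; the newly connected ones should show up here ...
$DRYRUN vxdmpadm listctlr all
# ... then enable the new path (some VxVM versions want the 'ctlr=' syntax).
$DRYRUN vxdmpadm enable ctlr=$CONTROLLER
```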
By using 'vxdisksetup -i $DEVICE' I set up the private/public regions on the disks.
I added them to the different diskgroups with 'vxdg -g $DISKGROUP adddisk $LOGICALNAME=$DEVICE'.
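With more than a handful of disks, the initialize-and-add steps are easy to loop. A minimal sketch with hypothetical device and diskgroup names; DRYRUN=echo prints instead of executing.

```shell
#!/bin/sh
DISKGROUP=oradg    # hypothetical diskgroup name
DRYRUN=echo        # set DRYRUN= (empty) to actually execute the commands

i=1
for DEVICE in c2t0d0 c2t1d0; do   # hypothetical new LUNs from the 6140s
    # Write the VxVM private/public regions onto the disk.
    $DRYRUN vxdisksetup -i $DEVICE
    # Add it to the diskgroup under a readable logical name, e.g. oradg01.
    $DRYRUN vxdg -g $DISKGROUP adddisk ${DISKGROUP}0$i=$DEVICE
    i=$((i + 1))
done
```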
I began mirroring from the old T3/4s: 'vxassist -g $DISKGROUP mirror $VOLUME $LOGICALNAME'.
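When several volumes have to be mirrored, it can help to start the resyncs in the background with 'vxassist -b' and watch their progress with 'vxtask list'. A sketch with hypothetical diskgroup, volume, and disk names; DRYRUN=echo prints instead of executing.

```shell
#!/bin/sh
DISKGROUP=oradg      # hypothetical names throughout
LOGICALNAME=oradg01  # target disk for the new plexes
DRYRUN=echo          # set DRYRUN= (empty) to actually execute the commands

for VOLUME in vol01 vol02; do
    # -b starts the mirror resync in the background, so several run at once.
    $DRYRUN vxassist -g $DISKGROUP -b mirror $VOLUME $LOGICALNAME
done

# Watch the running resync tasks until they are finished.
$DRYRUN vxtask list
```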
Performance was quite okay; the old storages host 6 Oracle databases with about 450 GB of data.
After around 5 hours all the work was done.
I had to update the SUN Cluster configuration to recognize changes in diskgroups.
This can be done with 'scconf -c -D name=$DISKGROUP,sync', which synchronizes the VxVM volume information with the SUN Cluster configuration.
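With several diskgroups involved, the sync step is a simple loop. The diskgroup names are hypothetical placeholders; DRYRUN=echo prints instead of executing.

```shell
#!/bin/sh
DRYRUN=echo   # set DRYRUN= (empty) to actually execute the command

# Re-sync the VxVM volume information into the SUN Cluster configuration
# for every diskgroup that was touched (hypothetical names).
for DISKGROUP in oradg1 oradg2; do
    $DRYRUN scconf -c -D name=$DISKGROUP,sync
done
```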
After that all was fine.