Disk Repathing and ASM hdisks - Automatic Storage Management

We have 10gR2 + ASM on AIX .
Now the sysadmin wants to do a repathing of the hdisks.
Repathing changes the hdisk numbers.
How do we handle this situation in ASM?
Thanks a lot in advance.

ASM should not have a problem with the hdisks being renamed. ASM scans all of the disks matching the asm_diskstring in the init.ora, so as long as that string is set up correctly ASM will see the disks and mount the diskgroups. The only problem you'll run into is if this is a RAC setup and the OCR and voting devices change names. That takes some manual intervention, but it can be changed.

Thanks a lot.
This is good news.

Look at the ASM best practices paper; it covers that. ASM doesn't use path names.
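As a quick sanity check after the repathing, you can ask the ASM instance what it discovered. This is a hedged sketch; the diskgroup name DATA is an example, not your actual setting:

```
SQL> show parameter asm_diskstring
SQL> -- every repathed hdisk should appear here with header_status MEMBER
SQL> select path, name, header_status from v$asm_disk;
SQL> alter diskgroup DATA mount;
```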


ASM Failure / recovery

We have a two-node RAC with ASM and 2 diskgroups, one for data and one for backups. If we lose ASM on both nodes, will we be able to add the disks/diskgroups to another server and retrieve the files there through a new ASM instance?
Yes, very much. As long as the disks are intact you can recover every bit of it, as the ASM metadata is stored on the disks themselves.
It will simplify the process if you have the ASM_DISKSTRING and ASM disk group values from the old server's ASM init file (but they are not show stoppers).
that sounds reassuring.
Could you please elaborate on what steps are required to do this operation?
Thank you. 
In order to mount the ASM diskgroups on another host, the steps look much like the install steps from when you first configured ASM.
1- Present the SAN/shared-storage to the new host
2- Install drivers if needed
3- chown the raw disks to oracle, chmod them 660
4- Start the ASM instance if it's not already started
5- MOUNT the diskgroups
That's all you have to do. I've been doing it a lot lately and it works great. ASM will do the log replay if needed (at the ASM filesystem level).
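The steps above can be sketched as follows. The device paths, the diskgroup name DATA, and the use of SYSASM (it would be SYSDBA on 10g) are assumptions to adapt to your environment:

```
# as root on the new host: step 3
chown oracle:dba /dev/rhdisk4 /dev/rhdisk5
chmod 660 /dev/rhdisk4 /dev/rhdisk5

# as oracle: steps 4 and 5
export ORACLE_SID=+ASM
sqlplus / as sysasm
SQL> startup
SQL> alter diskgroup DATA mount;
```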

EMC SnapView clone of an ASM LUN on Prod, then bring up the clone LUN on Dev

Hi everyone,
I was able to use EMC SnapView clone to clone one of the ASM LUNs on the production server. This LUN is labeled ASM5 in the disk group. I fractured the clone and assigned the clone LUN to the development server. But when I tried to add the clone LUN to the ASM disk group as ASM11, I got an error saying ASM5 is already assigned to the disk group. Any suggestion on what I'm doing wrong, or a solution to my problem?
First, I think what you are trying to do is a bad idea (unless I misunderstood you). You need to clone ALL of your ASM LUNs; one at a time is going to mess up your database.
But to answer your question: I take it you are using ASMLib? If so, you should try /etc/init.d/oracleasm renamedisk.
Hope this helps. Dave.
davelehr.com
Thanks for your reply, Dave. I understand what you are saying. But each of our databases is assigned to a single ASM disk that sits on an individual LUN. For example, GPROD is on ASM5 and ASM5 is LUN5 on my SAN. Can I just clone LUN5? When I fracture the clone LUN and assign it to my dev server, will that work?
As for renaming the ASM disk, can I rename the ASM disk before adding it to the ASM disk group?
I have worked with organizations that do that. They fracture the mirror and then copy the diskgroup to the DR site. If the LUN comprises an entire diskgroup, this should work. (However, if you are only taking a portion of the diskgroup, I still think you will find that it doesn't work.)
You should be able to rename the disk using ASMLib before the disk is mounted. This is the Linux library that you call from the command line.
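A hedged sketch of the ASMLib rename mentioned above. The device path and the new label are placeholders, on some ASMLib versions the command is force-renamedisk, and the disk must not be in use by a mounted diskgroup while you rename it:

```
# as root on the dev server, once the cloned LUN is visible
/etc/init.d/oracleasm listdisks
/etc/init.d/oracleasm renamedisk /dev/emcpowera1 ASM11
/etc/init.d/oracleasm scandisks
```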
OK, I was able to rename the LUN to the ASM signature I want. But when I tried to add it to the diskgroup I got this error:
"Failed to commit: ORA-15018: diskgroup cannot be created ORA-15072: command requires at least 1 failure groups, discovered only 0"
I haven't tried the process you're following, but just in case you haven't seen them, there are several whitepapers on using EMC to create ASM clones online at http://www.oracle.com/technology/products/database/asm/index.html. Nitin Vengurlekar is Oracle's foremost expert in this area and has had a hand in authoring all of them. He also co-authored a book on ASM. I haven't read it yet, but I would expect it has some helpful hints for this type of operation as well, at least for understanding the things happening behind the scenes.
I would highly appreciate it if you can provide the white paper, as I am not able to find it.
If you cloned all disks (of one diskgroup) to another server, then you don't need to recreate the diskgroup. It should actually be sufficient to simply mount it on the development server.
However, it is important that a diskgroup with the same name does not already exist on the development server (otherwise you need to rename the diskgroup before adding it to the development server; there is a new feature in 11.2 to rename diskgroups).
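For the 11.2 rename feature, a hedged sketch using the renamedg utility. The diskgroup names and the diskstring are placeholders, and the diskgroup must be dismounted on all nodes before renaming:

```
$ renamedg dgname=DATA newdgname=DATA_DEV \
    asm_diskstring='/dev/oracleasm/disks/*' verbose=true
```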
So if you
- Took a snapshot off all disks in a diskgroup and
- cloned and presented the disks to the development server
Then log into the ASM instance and try to see if you can see all (cloned) disks on the development server:
SQL> select * from v$asm_disk;
If not: Is your discovery string set correctly?
If you see all of them, try to mount the diskgroup (alter diskgroup XXX mount).
What message do you get?
Thanks Sebastian for the quick response... I will verify your instructions and revert soon with the outcome.
What I am trying to do is copy the ASM disks from the current server (10.2.0.4 running on Linux 4) as a snap-clone (from EMC Navisphere) and present the clone disks to a new server (11gR2 running on Linux 5). The ASM will be a new 11gR2 install, but the database version will stay the same, using 11gR2 ASM as the storage option. All LUNs are presented with similar configurations.
The only doubt I have is how ASM on the new server will understand which disk group the new clone disks belong to... where is this information stored? (Is it in the ASM disk header, or somewhere else?)
Is it just the disk string, or is some other metadata required to be copied from source to target?
Thanks in advance for the excellent assistance.
All this information is stored in the first few bytes of the disk (the ASM disk header).
What the ASM instance needs is the correct disk string to find the disks and the correct permissions to be able to access them.
All the rest is then discovered/grouped/organized from the disk header information.
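You can inspect that header yourself with the kfed utility shipped in the Oracle home. This is a hedged sketch; the device path is a placeholder, and kfed read only reads the header, it changes nothing:

```
$ kfed read /dev/oracleasm/disks/ASM5 | grep -i name
# kfdhdb.dskname holds the ASM disk name,
# kfdhdb.grpname holds the diskgroup it belongs to
```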

Changing ASM to a different machine

I just had a server go down... I'm wondering if it is possible to rebuild another server and point it at the old diskgroups?
I should also add that the ASM instance was on that same server... So I guess I'm asking whether I can connect to those same diskgroups, or whether I'm hosed?
Hi Luke22,
all ASM disk/diskgroup information is stored in the disk headers of the disks belonging to ASM. The only thing missing is the ASM_DISKGROUPS parameter in the init file (or spfile) for automounting the diskgroups, and maybe the ASM_DISKSTRING parameter for discovering the disks.
So you should be fine reinstalling ASM.
Get physical disk access to all LUNs formerly belonging to ASM (note: access rights oracle:dba, 660).
Then, after starting the ASM instance, you should already be able to see the disks in v$asm_disk and then be able to mount the diskgroups.
Just make sure that if you are running ASM in a non-cluster configuration, you only access it from one ASM instance (well, I never tried accessing a local ASM from multiple nodes...).
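A minimal init.ora sketch for the reinstalled ASM instance; the diskstring and the diskgroup name DATA are assumptions to replace with your own values:

```
# init+ASM.ora (minimal sketch)
instance_type  = asm
asm_diskstring = '/dev/rhdisk*'
asm_diskgroups = 'DATA'
```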
Thank you very much for the reply friend. I hope you have a great week!

Move ASM disks with database from one server to another?

I have an 11.2.0 ASM with an 11.2.0 database on two internal disks on server1. The disks are not in any RAID or volume manager configuration, i.e. they are just two disks. The disks were physically removed from server1 and installed in server2, which has the same hardware, OS, patch level, etc., in the same target positions. I installed the 11.2.0 RDBMS and grid infrastructure binaries on server2, changed the raw disk partition ownership to oracle, and started asmca. asmca does not see the disks.
My question: is this possible, and if so, what am I missing?
First steps:
What OS?
Do you see the disks in the OS (e.g. via "fdisk -l")?
What happens if you try to mount the diskgroup in ASM?
Any errors in the Alert.Log ?
Ronny Egner
My Blog: http://blog.ronnyegner-consulting.de 
Interesting question that I have never faced, but cannot imagine why I did not think of it before. Now I see a number of scenarios where this would be applicable.
Since I have not tried this before and have not really read any such example, I will only be speculating.
Ronny - Will the already-marked ASM headers on the LUNs prevent asmca from showing these disks as available for use? I would think so, to prevent accidental overwrite. Wouldn't just manually creating the ASM instance and adding the LUNs to the asm_diskstring allow this ASM instance to mount the diskgroups?
My OS is Solaris 10 u8.
I can see the disks from the OS side as well as using the GRID_HOME/bin/kfod disks=all command.
Since I don't have an ASM instance yet, I cannot add any diskgroups. I am trying to manually configure things, so that I can bring up the ASM diskgroups and the database on the disks. 
I did not manually create an ASM instance and try to add the diskgroup. My understanding is, if you use asmca then it starts an ASM instance, if one is not already running, and lets you configure your diskgroup. This is what I am trying to do, with no luck (asmca does not see the disks).
1. I am assuming you took a backup of the diskgroup metadata using md_backup on server1. Otherwise, put the disks back into server1 and run md_backup (an asmcmd command).
2. Take the disks out of the first server, put them into the second server, create an ASM instance (if you don't already have one), and run md_restore on the second server.
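A hedged sketch of the asmcmd commands mentioned in the steps above. The backup file name and the diskgroup name DATA are placeholders; note that md_backup/md_restore capture diskgroup metadata (attributes, templates, directories), not the database data itself:

```
# on server1, as the ASM software owner
$ asmcmd md_backup /tmp/data_dg.mdb -G DATA

# on server2, once the ASM instance is running
$ asmcmd md_restore /tmp/data_dg.mdb --full -G DATA
```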
Thanks Gagan for sharing this.
Now it looks like this can be achieved.
Hi Sanjeev,
You can also refer to the note ID below:
ASMCMD - New commands in 11gR1 [ID 451900.1]
"I did not manually create an ASM instance and try to add the diskgroup. My understanding is, if you use asmca then it starts an ASM instance, if one is not already running, and lets you configure your diskgroup. This is what I am trying to do, with no luck (asmca does not see the disks)."
I may be totally wrong here, but AFAIK dbca creates the ASM instance. asmca is just for adding disks or diskgroups...
Did you try to create an ASM instance with dbca? When doing so, do not put your disks in there; that would create a new disk group...
Ronny Egner
My Blog: http://blog.ronnyegner-consulting.de

Voting Disks and OCR on ASM in RAC 11g

Morning everyone,
We've got a new 11.2 RAC implementation and want to take advantage of putting the voting disks and OCR on ASM.
Should we assign them their own disks? Their own diskgroup? Or just store them to the +DATA diskgroup?
Any advice would be greatly appreciated
In R2 you don't have to provide separate volumes for the OCR and voting disks.
Oracle 11g r2 OCR and Vote 
In general I would put them into the same +DATA diskgroup.
However there are sometimes exceptions to the rule. One for example is if you would like to use storage split mirroring technology.
See this post:
Re: How many LUNs
Hi Rup,
Oracle's recommendation can be found in My Oracle Support Note 220970.1 - The RAC FAQ under the following entry:
"Is it recommended that we put the OCR/Voting Disks in Oracle ASM and, if so, is it preferable to create a separate disk group for them?"
In this context, you might also find this entry useful:
"How to efficiently recover from a loss of an Oracle ASM disk group containing the Oracle Clusterware files?"
Both entries discuss each of the items in detail and list the exception Sebastian mentioned.
Hope that helps. Thanks,
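Once the clusterware files are on ASM, you can verify where they ended up with the standard clusterware tools; a short sketch, run from the Grid home as root or the grid owner:

```
$ ocrcheck
$ crsctl query css votedisk
```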
Thanks chaps, much appreciated 
If you store the OCR and voting disks on ASM, then how will Oracle (or you) access them whenever required while ASM is down?
Also, ASM's own cluster configuration is saved in the OCR, i.e. managed through the OCR, so how will you manage ASM if the OCR itself is not accessible?
I think it will not work.
I think it can only work when you use an ASM diskgroup which is not part of your cluster.