Moving a schema from one ASM instance to another ASM instance - Automatic Storage Management

I have a fairly large schema (60 GB) in a RAC database that uses ASM for storage. The database is 10gR2. I need to move a complete copy from one RAC database to another RAC database. The last time I did this, I used transportable tablespaces, but I could not import the tablespace directly into the ASM area. So I lost my indexes and had to rebuild them.
How do I move my entire schema from one ASM instance to an ASM instance in another database, and not lose any indexes? Thanks for the advice! 

I would think RMAN would handle this for you. I moved my database from one ASM diskgroup to another with RMAN copy and it worked flawlessly. I don't know why this wouldn't work in your case as well.

Another alternative is to use dbms_file_transfer to copy the tablespace datafiles and then use transportable tablespaces. 
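A minimal sketch of the dbms_file_transfer approach, assuming directory objects that point into the ASM diskgroups and a database link to the destination; all directory, file, and link names here are illustrative, not taken from the thread:

```
-- Run on the source database. SRC_DIR points into the local ASM
-- diskgroup; DST_DIR must already exist on the remote database and
-- point into its diskgroup. REMOTE_DB is a database link to the
-- destination database.
CREATE DIRECTORY src_dir AS '+DATA1/orcl/datafile';

BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'users01.dbf',
    destination_directory_object => 'DST_DIR',
    destination_file_name        => 'users01.dbf',
    destination_database         => 'REMOTE_DB');
END;
/
```

After pushing all datafiles of the (read-only) tablespace set this way, the usual transportable-tablespace metadata import is run on the destination, so the files land in ASM without an intermediate file-system copy.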

Maybe it is a good idea to give us the steps you used to create the transportable tablespace set (did you use RMAN or not?) and the error you received.
Without that it is difficult to help you; normally it should work.
Check, for example, whether you did in fact include the constraints.


Steps For Migrating OL 11gR2 ASM instance to EXISTING Remote RAC 11gR2 ASM Database

Hello Oracle Experts, I need to migrate an Oracle Linux 11gR2 ASM instance (1.7 TB) to a remote EXISTING Oracle Linux 11gR2 RAC ASM database. From my understanding, the expdp/impdp network-mode transfer would be easiest. However, I would like to take an RMAN approach (e.g. duplicate database). Both databases are on ASM, but I would have to migrate the ASM diskgroups. Does anybody know of a document outlining the steps I must follow? Any suggestions are welcome. Thank you for your assistance.
Hi, the easiest way would be Data Pump, as you wrote. The runtime of this approach will depend on the available CPU and I/O power on both systems, the network bandwidth and latency for the transfer, etc. You can either export to dump files, copy them (e.g. via FTP), and import, or alternatively do a "direct import" via Data Pump over a network link. The second approach saves the extra space needed for the dump files and could be faster; it scales well with the PARALLEL parameter of impdp, unless your bandwidth is saturated, of course.
Cloning the database with RMAN is another way and only a bit more complicated. You can test that approach in advance too, get used to it, and compare the time to the Data Pump approach.
The smallest possible downtime could be achieved by building a temporary Data Guard setup and reducing the downtime to the upgrade itself, or even less by using a transient standby database, but at much higher complexity, especially the latter. Before you decide between the methods, you must first define your available downtime window, test the transfer times of files, etc. If your time frame allows it, I would choose the simple approach. Kind Regards, Thomas
Rather than DUPLICATE DATABASE, I would start with a BACKUP AS COMPRESSED BACKUPSET DATABASE, then copy the backup pieces over and run RESTORE DATABASE and RECOVER DATABASE (of course, the controlfile and archivelog backups also need to be copied over). Hemant K Chitale
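As a rough sketch of this backup-and-restore sequence, assuming a staging directory reachable from both hosts; the paths and formats are placeholders, and the target instance is started NOMOUNT with a prepared parameter file before restoring:

```
-- On the source:
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE FORMAT '/stage/db_%U'
        PLUS ARCHIVELOG FORMAT '/stage/arc_%U';
RMAN> BACKUP CURRENT CONTROLFILE FORMAT '/stage/ctl_%U';

-- Copy /stage/* to the remote host, then on the target instance
-- (STARTUP NOMOUNT):
RMAN> RESTORE CONTROLFILE FROM '/stage/ctl_...';   -- actual piece name
RMAN> ALTER DATABASE MOUNT;
RMAN> CATALOG START WITH '/stage/';
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
```

If the target diskgroup names differ from the source, setting DB_CREATE_FILE_DEST before the RESTORE (as mentioned below in the thread) lets the files land in the new diskgroup.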
Note that you'd have to either DROP the existing database or restore the database with a different name.
>> I would have to migrate the ASM Diskgroups.
You do not have to "migrate ASM diskgroups". You'd just create them, which you already have at the remote site. If the diskgroup names are different, use DB_CREATE_FILE_DEST for your RESTORE. Hemant K Chitale
I suggest you look into into the Transportable Tablespace option. 
Thank you for the suggestion
No one takes the easiest route with 1.7 TB; maybe with 1.7 GB, but not TB. The best is either to restore or to duplicate the database, either from the active database or from a compressed backup. If your company has 1.7 TB of data, I believe they can give you a temporary NFS share that you can mount on both servers. Make the backup there and duplicate from the backup location:
duplicate database to orcl backup location '/nfs/orcl';
Practically, it's that easy. The problem is that you are on 11g: there the duplicate opens directly with RESETLOGS, and you cannot stop it. So if you don't have that much downtime, you can create a standby, configure Data Guard, and then in a short downtime window switch over to the standby.
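The backup-location duplication described above can be sketched as follows; the database name ORCL and the path /nfs/orcl are the poster's examples, and this assumes 11gR2, where DUPLICATE with BACKUP LOCATION can run without connecting to the source (target) database at all:

```
-- On the source, write the backup to the shared NFS mount:
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE FORMAT '/nfs/orcl/db_%U'
        PLUS ARCHIVELOG FORMAT '/nfs/orcl/arc_%U';
RMAN> BACKUP CURRENT CONTROLFILE FORMAT '/nfs/orcl/ctl_%U';

-- On the destination host (auxiliary instance started NOMOUNT):
$ rman auxiliary /
RMAN> DUPLICATE DATABASE TO orcl BACKUP LOCATION '/nfs/orcl';
```

Because no connection to the live source is needed during the duplicate, the load on the production system is limited to the backup itself.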
Thank you for the information.
Another possibility may be to build a physical standby DB (backup, duplicate) and make a switchover, followed by new DB activation. Standby and TTS give the lowest downtime for the primary DB.
Why an existing database? You cannot use RMAN to restore or duplicate "into" an existing database, unless the target, which is the database to be restored, backed up, or duplicated, is the same. Import/export doesn't make sense in your case, unless you want to use RMAN transportable tablespace; you will probably also want to copy the SYSTEM tablespace, etc. There is no need to migrate the ASM storage, unless you really want to. I suggest simply using RMAN active duplicate and then using DBCA to convert the standalone database to RAC.
PS: As Dude wrote, you cannot duplicate into an existing database, but you can duplicate a database to another server, where the database (instance) will be created. 
Dude! wrote: "Why existing database? You cannot use RMAN to restore or duplicate 'into' an existing database, unless the target, which is the database to be restored, backed up or duplicated, is the same."
And, just to pick a nit on semantics, even then you aren't really restoring "into", but overlaying what is there. For the OP: any restore/recovery/duplicate/refresh operation with RMAN backups is actually rebuilding a database at the physical level by reassembling data files block by block. It knows nothing about tables, schemas, etc., because it is working at a much lower level than that.

migration question

We are currently working toward migrating all of our databases, which are currently on HP-UX on HP Itanium. We are looking to move to the AIX platform.
I have a dev hpux database that is comprised of 576 tablespaces and 851 schemas...(yes it's ugly).
The database is 120 GB in total size.
I am trying to come up with the best method to achieve the goal of getting this database onto an instance on an AIX server.
I am thinking that using Data Pump is a bit of a nightmare with the number of tablespaces/schemas they have created, and I do not believe that RMAN duplicate will work between the two different RDBMS versions?
Are there any other options that I could use here?
Why does the number of tablespaces and schemas affect the reasonability of using DataPump? Assuming that your intention is to generate a full export rather than doing a schema-by-schema export (or a tablespace-by-tablespace export), the number of schemas and tablespaces shouldn't matter. If you are intending to change your directory structure as well as moving to a different version, you'd probably want to precreate the tablespaces before doing the import. But that should be relatively easy to automate once you get past a handful of data files.
Is your downtime window large enough to handle doing a DataPump export and import? If it is, that's generally the easiest option.
Datapump is a good option. I believe it will create the tablespaces automatically while importing the full database, as long as you have the same disk layout.
Check this link if you want to use RMAN to restore to a different version:
RMAN Restore of Backups as Part of a Database Upgrade [ID 790559.1]. You can restore if your OS endianness is the same. 
Use Datapump.
If you have the same disk layout, impdp will automatically create all required tablespaces and users. 
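A minimal sketch of the full export/import being discussed; directory objects, dump-file names and credentials are placeholders, not values from the thread:

```
# On the source (HP-UX) database:
expdp system/manager full=y directory=DP_DIR dumpfile=full_%U.dmp \
      logfile=full_exp.log parallel=4

# Copy the dump files to the AIX host, then on the target database
# (DP_DIR must be a directory object created there as well):
impdp system/manager full=y directory=DP_DIR dumpfile=full_%U.dmp \
      logfile=full_imp.log parallel=4
```

With FULL=Y the import replays the CREATE TABLESPACE statements from the dump, which is why the same disk layout (or a remap, see below in the thread) matters.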
Downtime is not really an issue as these are primarily dev and QA databases. I was trying to avoid having to precreate all of those tablespaces because we are trying to use ASM for the storage on the new systems.
I see in some of the posts that Data Pump will automagically create the tablespaces if I do a full export/import. I was not aware that Data Pump could actually do that, so it sounds like it may be possible to just create the same file systems and do a full Data Pump run to move the data...? 
Thanks for your reply. So as long as I have the same file systems, impdp will be able to create the tablespaces? What if we want to use ASM on the target system; is that possible? 
If your source database uses ASM, then yes: by creating the same diskgroup names in your target database, you will be able to use Data Pump and import your database successfully.
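When the source is on a file system and the target on ASM (as in this migration), the CREATE TABLESPACE statements embedded in the dump still reference the old file-system paths; impdp's REMAP_DATAFILE parameter can rewrite them to a diskgroup. A hedged sketch, with the path and diskgroup name as illustrative examples:

```
# Rewrite a file-system datafile path to an ASM diskgroup during
# import; with a bare '+DATA' target, ASM (OMF) picks the file name.
impdp system/manager full=y directory=DP_DIR dumpfile=full_%U.dmp \
      remap_datafile="'/u01/oradata/dev/users01.dbf':'+DATA'" \
      logfile=full_imp.log
```

An alternative is to set db_create_file_dest='+DATA' and precreate nothing: any tablespace the import creates without an explicit usable path then lands in the diskgroup automatically.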
Please mark your question answered if you got your solution 
Thank you I think I have the question answered.

Utilizing a standby database for transportable tablespace export

We have a database on site A ( prod ) and a physical standby for that database on site B ( test ).
I need to periodically refresh my test database with production data - for one of the schemas.
Schema size is ~1.5 TB.
I want to use the standby for a transportable-tablespace kind of export, to expedite the procedure.
However, I am unable to find a way to do it without putting my PRIMARY tablespaces in read-only mode.
I was thinking that since the physical standby is open in read-only mode, I would be able to use TTS directly on it without having to put the primary tablespaces in read-only mode, but that is not working.
Is there any way to use the standby for that task? Without opening the standby read-write, that is...
There are 2 problems in your plan.
1. I'm assuming you are going to be running Data Pump from some read/write database with a network link pointing to the standby. I'm not sure if the transportable tablespace code can run on a read-only database. I know this does not work in 11.1 and greater; I'm not sure about earlier releases. You would have to check that out.
2. Your tablespaces do need to be read-only; there is no workaround for that. If this is a standby, don't the tables, and therefore the tablespaces, get updated when the primary gets updated? I have to admit I'm not that familiar with how standby databases get updated.
You may be better off running a network-mode import without transportable to move your data over. If you have a fast enough network, I have seen customers move 3 TB per hour. At least I seem to remember it being 3 TB per hour; I could double-check if you think this may be a solution for you.
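A network-mode import writes no dump file at all; rows are pulled directly over a database link into the target. A sketch, assuming the suggestion above; the link name, TNS alias, schema, and credentials are placeholders:

```
-- On the target (test) database, create the link to the source:
CREATE DATABASE LINK src_link
  CONNECT TO system IDENTIFIED BY manager USING 'SRC_TNS';

-- Then run the import on the target host:
-- impdp system/manager network_link=src_link schemas=APP_SCHEMA \
--       parallel=8 logfile=net_imp.log
```

Note that NETWORK_LINK requires the source to be readable over the link, so whether it can point at a read-only physical standby is exactly the open question raised in point 1 above.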
You may be able to use RMAN to do a transportable tablespace without needing the read-only option. However...
1) I'm not sure it works in 10.2 (it definitely works in 11.1 and above)
2) I'm not sure it will work on a standby (it possibly works on a snapshot standby but again I'm not sure that was available in 10.2)
If you look at this link, I create a transportable set in two ways: the 'normal' way and then the RMAN way.
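The two approaches differ in whether the live tablespace must go read-only. A sketch of both, with tablespace and path names as placeholders; the RMAN variant builds the set from backups via an automatic auxiliary instance, which is why the source tablespace can stay read/write:

```
-- The 'normal' way: the source tablespace goes read-only first.
ALTER TABLESPACE ts1 READ ONLY;
-- expdp system/manager transport_tablespaces=ts1 \
--       directory=DP_DIR dumpfile=tts.dmp
-- ...copy ts1's datafiles, then:
ALTER TABLESPACE ts1 READ WRITE;

-- The RMAN way: restore the set from backups to a staging area;
-- the live tablespace is never put in read-only mode.
RMAN> TRANSPORT TABLESPACE ts1
        TABLESPACE DESTINATION '/stage/tts'
        AUXILIARY DESTINATION '/stage/aux';
```

The staging area then holds both the converted datafiles and the Data Pump export of the tablespace metadata, ready to be plugged into the target.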
Can you not just do an RMAN restore into the test environment? Is the 1.5 TB schema most of the database anyway?

Can we restore a tablespace or schema from RMAN in a different environment

Hi, we are currently using Oracle 12c. Our production database is running on a 2-node RAC; DR and Test are standalone. We take daily full and archivelog backups in production. We have Staging and Test set up on a single database with different schemas.
My question is: is it possible to restore a schema or tablespace in the Test environment using the RMAN full backup taken in production? I know Data Pump is the option for this, but I would like to know whether we can achieve this using RMAN.
Regards, Asif
Adding to this: the schema name and the tablespace name in the test environment are different from production.
You cannot use RMAN restore and recovery between different databases. You cannot simply perform a binary copy of a database file or tablespace and plug it into another database that has no record or history of that datafile, and hence no information about how to recover the datafile or how to synchronize it with the other tablespaces of the database. As far as I understand, database export/import or RMAN transportable tablespace is your only option. You can also create a TTS set from RMAN backups, which I think was introduced in 11g.
Hi Dude, thanks for your quick response. I have suggested that we create another database, restore the entire database there, then take an export of that particular schema and import it into the test environment. Let me know if this is fine. Also, our environment is built on ASM. Is it possible for you to give some high-level steps to create a new database, considering we have an ASM instance and another DB already running on that server? Anyhow, I will google the steps, but any additional input from your side is appreciated. Thanks a lot.
I don't know if this is fine, because I do not know what kind of data you are trying to export and import. From what I can tell, trying to answer your question based on the information you have given: yes, TTS can use an existing RMAN backup to restore a tablespace to a different database, and it uses Data Pump for the metadata. Nobody can give you exact steps for creating a new database without knowing your database storage layout or requirements. You can try the Database Configuration Assistant (DBCA), which provides a graphical user interface. However, considering the kind of question, you seem unaware of the overall complexity and should learn DBA fundamentals first. A user forum is no substitute for professional Oracle database training classes.
Thanks for the input, Dude. I am aware of the database creation steps, but I am not sure how to create a new database on a server where an ASM instance is already running. I will check out the steps in case we go that way; I am new to ASM, with only some theoretical knowledge of it.
Last question: as you said, "TTS can use an existing RMAN backup to restore a tablespace to a different database, which uses datapump". Can you shed some more light on this? How do we achieve it? My environment is:
- Production: running 12c with ASM and 2-node RAC. I have a full database backup + controlfile backup + spfile backup for some date.
- Test: running 12c with ASM. The data structure is different from production, as we have additional schemas here for Training, Testing and Staging data.
- My tablespace names are different in production than in the test environment.
- Let's say in production I have a schema named SCHEMA1 with tablespaces (A, B, C, D).
- In Test we have a schema named SCHEMA2 with tablespaces (X, Y).
Can I use the full database backup taken on some day and refresh tablespaces (A, B, C, D) into SCHEMA2 on the Test DB? How do I do this? I appreciate your timely response.
I suggest posting questions about how to create a database using ASM in the General Database forum for better topic alignment. In Oracle, users and schemas are essentially the same thing. I have not done it myself, but apparently you can use the REMAP_SCHEMA clause according to a previous post: Transportable Tablespace using REMAP_SCHEMA and REMAP_TABLESPACE
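A sketch of what the plug-in step with a schema remap could look like; SCHEMA1/SCHEMA2 come from the poster's example, while the dump file, directory object, and ASM file path are placeholders:

```
# Metadata import for a transported tablespace set, changing the
# object owner on the way in:
impdp system/manager directory=DP_DIR dumpfile=tts.dmp \
      transport_datafiles='+DATA/test/datafile/a01.dbf' \
      remap_schema=SCHEMA1:SCHEMA2 \
      logfile=tts_imp.log
```

A transported tablespace is plugged in under its original name; the linked thread suggests REMAP_TABLESPACE can be combined with this to address the differing tablespace names, but that should be verified against your release.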
Hi, you can check one more option: try to perform the required tablespace restore (tablespace point-in-time recovery) on a different server. Once that completes, you can use the transportable tablespace method to move the tablespace from the restored location to your test system. Thanks
Hi Dude / Oratig-Oracle, thanks for your valuable responses. At this time I have convinced them to go for Data Pump. I'll try to check all the options we can use to do the restoration test in the future. Thank you, Mohammed Bin Asif

The best methods to upgrade and migrate an 11gR2 file-system database to a 12cR2 ASM database

I am working on a database upgrade and migration project for one organization. We have about 200 databases across Prod, Dev, Test and Implementation environments. The current source database settings: Oracle 11gR2 single instance on Solaris 10 with file-system storage. The future target database settings: Oracle 12cR2 single tenant on Solaris 11 with ASM storage. The servers are different physical servers.
Now I am thinking about the best and fastest methods to migrate these databases from 11g to 12c. Of course, we will test in the DEV environment first, collect all the information and issues and a step-by-step procedure, then apply it to the other environments. So the question is: what is the best method to accomplish this job? Based on Oracle documentation and other gurus' online posts, I have picked some methods, with my concerns and questions. I would like to hear from all the experts in this forum and get feedback to help me choose the best method for this project. Many thanks for your help in advance.
The major methods I think are workable for our situation:
1. Migrate the 11g DB to 12c ASM using RMAN. There are many preparation steps; the major RMAN commands are:
backup as copy incremental level 0 database format '+DATA' tag 'ORA_ASM_MIGRATION';
switch database to copy;
This one makes it easy to convert from file system to ASM, and RMAN is very reliable.
2. Migrate the 11g DB to 12c ASM using RMAN duplicate:
$ rman target / auxiliary sys/xxx@dupdbaux
run {
  allocate channel c1 type disk;
  allocate auxiliary channel c2 type disk;
  duplicate target database to dupdb;
}
This method needs a very good network configuration, and two separate servers and databases may not be efficient.
3. Migrate the 11g DB to 12c ASM with full transportable tablespace expdp/impdp. I do not list the detailed steps here. My concern: we need to convert all the data files from the 11g file system to 12c ASM. We can use RMAN or ASMCMD to convert the files, but for a huge database with hundreds of data files, conversion is a big job.
4. Migrate the 11g DB to 12c ASM using expdp/impdp Data Pump. I also do not list the detailed steps here. My concern: how do we convert the 11g file-system files to 12c ASM files in this method? One way is to create new tablespaces in a 12c ASM diskgroup and expdp/impdp only the user data into the new tablespaces, using a remap in impdp (the metadata is created when we create the CDB/PDB). In this way, no file-system conversion is needed, but the tablespaces on 12c will be different from the tablespaces on 11g. (The application does not care about this, right?) The data will be the same. For a huge database with many tablespaces created in 11g, we would have to create many tablespaces in 12c ASM and remap them during impdp; that is also a lot of work. If I create one bigfile tablespace in 12c ASM to remap the many 11g file-system tablespaces into, is that a good practice?
There are also other methods for database migration and upgrade, but I prefer the methods above. Please feel free to send me your opinions and suggestions. Thanks again.
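For method 3, 12c's full transportable export/import can move the whole database in one job rather than tablespace by tablespace. A hedged sketch, assuming an 11.2.0.3 or later source; credentials, paths, and the diskgroup file name are placeholders:

```
# On the 11g source, with the user tablespaces in read-only mode:
expdp system/manager full=y transportable=always version=12 \
      directory=DP_DIR dumpfile=fulltts.dmp logfile=fulltts_exp.log

# Convert/copy the datafiles into ASM (RMAN CONVERT or asmcmd cp),
# then import the metadata on the 12c side:
impdp system/manager full=y directory=DP_DIR dumpfile=fulltts.dmp \
      transport_datafiles='+DATA/orcl12/datafile/users01.dbf' \
      logfile=fulltts_imp.log
```

The datafile conversion step the poster worries about remains, but the metadata side collapses to a single export/import pair instead of one per tablespace.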
Well, there are many methods with varying downtime, but often the best method is the simplest one. So the simplest is to: 1) shut down the DB to a consistent state and copy the files to the other server, e.g. with scp; 2) upgrade to 12c using the DBUA. You need to decide how much downtime you can afford.
There are multiple factors to consider, the major ones being: a. the size of the databases; b. the downtime available. For example, if it is a 10 GB database, then using Data Pump would be a quick way; on the other hand, if it is a 10 TB system, then expdp would no longer be the best bet. You can explore other options like setting up Data Guard and, during the downtime, failing over and upgrading to 12c. Regarding your questions about expdp: you will need to create a 12c database beforehand, and I would recommend creating all the required tablespaces (on ASM) in the new 12c database. Once this part is done, all you would need to do is take an export of the 11g database, import it into the 12c database, and validate the options. Thanks, Aish
Maybe the options below will help you:
1) If possible, set up an 11g standby on the target; after a switchover or failover, do the upgrade.
2) GoldenGate.
3) You can use transportable tablespaces.
4) You can use export/import (expdp/impdp). If you use expdp/impdp, one good thing is that a reorg is done along with the upgrade and migration.
top.gun, Aish13, Oratig-Oracle and others: thank you so much for your advice. Yes, different methods suit different situations. As for using DBUA: because we install Oracle 12cR2 on the new server, if we create an 11g standby on this new server and then use DBUA to upgrade, it will guarantee the upgrade goes successfully. But with many 11g databases installed on new servers, it would cost a lot of cleanup work after the upgrade, so DBUA is not my favorite. I also need to correct myself: to use Data Pump to expdp/impdp the full database, the DBA can configure database schemas on the target server and does not need to care about the tablespaces or other objects, because using the SQLFILE parameter with impdp you can see all the DDL generated from the expdp dump file. If you do impdp full=y, the same tablespaces and other objects will be created in the target database. So Data Pump is a good approach for small database migrations. Thanks.
Hi, it comes down to your comfort zone and the downtime you have. From my side, I can give the available options and the advantage of each. If you choose expdp/impdp, then you need to test once thoroughly so as not to miss any steps; check whether the application has tables/views/other objects in the SYS/SYSTEM schemas (I have seen this case). With expdp/impdp you can try reducing the time to migrate:
- if you do a database-level expdp/impdp, it takes time;
- if you do it schema by schema, it will be faster, because expdp and impdp can run in parallel;
- if you do it at table level, you can reduce it further;
- if you do it at row level (you will have to scan and find groups of rowids) for big tables, then you can go fast in parallel.
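The schema-by-schema parallelism described above can be sketched as separate jobs launched side by side; the schema names and credentials are placeholders:

```
# Two schema-level exports running concurrently, each itself parallel:
expdp system/manager schemas=HR directory=DP_DIR \
      dumpfile=hr_%U.dmp parallel=4 logfile=hr_exp.log &
expdp system/manager schemas=OE directory=DP_DIR \
      dumpfile=oe_%U.dmp parallel=4 logfile=oe_exp.log &
wait   # both Data Pump jobs finish before the copy/import phase
```

The corresponding impdp jobs can be launched the same way on the target; total wall-clock time is then bounded by the largest single schema rather than the whole database.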