Archived redo log not applied in real time: IN-MEMORY - Data Guard

I created a Data Guard configuration in Oracle 11g so that the archived redo logs are applied in real time using the log writer.
It seems to me that the last one is not applied: when consulting the v$archived_log view, the last entry has the APPLIED column set to "IN-MEMORY":
select THREAD#, SEQUENCE#,APPLIED, deleted from v$archived_log;
......
THREAD# SEQUENCE# APPLIED DEL
---------- ---------- --------- ---
1 1935 YES NO
1 1936 IN-MEMORY NO
The Oracle 11g documentation says that "IN-MEMORY" means the redo has not yet been applied to the datafiles.
Could someone help me fix it?
Why is the last archived redo log always "IN-MEMORY"? Is there a way to change it?

I believe that it gets changed at the next log switch of the primary, or maybe the next checkpoint at the standby. It doesn't mean that the standby is at risk, as far as I know. But I will verify both things and get back to you.
Thanks.
Larry 
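A quick way to test Larry's suggestion is to force a log switch on the primary and then re-check the view on the standby. This is a sketch of standard commands, not the posters' exact sessions:

```sql
-- On the PRIMARY: force the current redo to be archived and shipped;
-- the standby's "IN-MEMORY" entry should move to YES at the next checkpoint.
ALTER SYSTEM ARCHIVE LOG CURRENT;

-- On the STANDBY: re-check the applied status afterwards.
SELECT thread#, sequence#, applied
FROM   v$archived_log
ORDER  BY sequence#;
```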

I encountered the same problem - it looks like the current log file is applied IN-MEMORY only. Is there a way to get out of this?

Try opening a new question. 

Please check the values of the REGISTRAR and APPLIED columns for the logs in the standby database.
standby database:
select sequence#,registrar,applied from v$archived_log;
If REGISTRAR = RFS and APPLIED = NO, then the log file has been received but has not yet been applied.
If REGISTRAR = RFS and APPLIED = IN-MEMORY, then the log file has been applied in memory, but the datafiles have not yet been updated.
If REGISTRAR = RFS and APPLIED = YES, then the log file has been applied and the datafiles have been updated.
Try cancelling the managed recovery process and bouncing the standby database.
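The cancel-and-bounce sequence suggested here might look like the following sketch (it assumes standby redo logs exist for real-time apply; adjust to your environment):

```sql
-- On the STANDBY: stop managed recovery ...
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- ... bounce the instance (SQL*Plus commands) ...
SHUTDOWN IMMEDIATE
STARTUP MOUNT

-- ... and restart real-time apply
-- (USING CURRENT LOGFILE requires standby redo logs).
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```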

Try bouncing the standby database. It might help all the logs get applied.

user11188564 wrote:
"I encountered the same problem - it looks like the current log file is applied IN-MEMORY only. Is there a way to get out of this?"
If you encountered the same problem, open a new thread instead of posting in others' questions. This thread is almost a year old.
Posted: Mar 5, 2010 3:45 AM
-- Please lock this thread


Archive log cannot ship GAP logfiles to standby DB automatically

We have a non-real-time standby database, which receives the archive files from the primary database server most of the time, and applies the logfiles only at one point of time daily.
Sometimes, we need to shut down the standby DB server for a while (3-4 hours).
The logfiles missed during the standby downtime catch up later.
But since a storage incident last week, the primary DB server has stopped catching up the missed logfiles, and we saw this message in the archive trace file:
ABC: tkrsf_al_read: No mirror copies to re-read data
Currently, we have found archive log gaps on the standby server, and have to manually copy those logfiles over and register them.
We saw some tips on the internet to change the parameter "log_archive_max_processes", but they did not help us at all.
Here is the parameter on the primary DB server:
log_archive_dest_2 = SERVICE=Standby_server reopen=300
Hello
Can you tell me exactly what you mean by "non-real-time standby database"? Is this Data Guard?
Also, can you supply the OS and Oracle version?
Since the issue began, are there any new entries in either alert log?
Best Regards
mseberg 
"Non-real-time standby database" means: we don't apply the archive log files on the standby DB server even when the logfile arrives. We only apply them at one specific time (5:00 am daily).
Both OS are: Sun Microsystems Inc. SunOS 5.10
Both Oracle are: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
Here are the lines on the alert logfile:
Error 1034 received logging on to the standby
Errors in file /******/***arc210536.trc:
ORA-01034: ORACLE not available
FAL[server, ARC2]: FAL archive failed, see trace file.
...
Errors in file /******/***arc210536.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
ORACLE Instance **** - Archival Error. Archiver continuing. 
Thank you for the information.
I believe you have a connection issue (there is no way to say this and be profound). Double-check the value of ORACLE_HOME used on the standby site.
Make sure the ORACLE_HOME has no trailing characters like a /.
I would double check the listener.ora file for anything odd too.
I have a couple of notes :
ORA - 03135 : connection lost contact while shipping from Primary Server to Standby server [ID 739522.1]
PHYSICAL: When Data Guard and TCPS ASO Configured receive ORA-3113 and ORA-16055 during Redo Transport [ID 889763.1]
Bug 8842032 - ORA-3113 AND ORA-16055 OCCURS ON FAL REQUESTS
Will double check everything.
Best Regards
mseberg
Edited by: mseberg on Feb 28, 2012 12:03 PM 
"which will receive the archive file from the primary database server most of the time"
Most times from the primary. Then the remaining times? So you copy manually and register?
Then it's Data Guard, not a manual standby.
Error 1034 received logging on to the standby
Errors in file /******/***arc210536.trc:
ORA-01034: ORACLE not available
FAL[server, ARC2]: FAL archive failed, see trace file.
These errors appear on the primary when the standby is down and the primary tries to connect to it, so they are not worth investigating.
When you don't want to apply archives on the standby, there is no need to shut it down. Just set log_archive_dest_state_2='DEFER'.
Once you enable it again, check what errors appear in the primary alert log file.
How is your network bandwidth? Is it capable of holding that much archive data?
It may take some time when you pause and restart.
Also, use LGWR in log_archive_dest_2 for real-time apply, after creating standby redo logs.
So post the alert log information once you enable the standby database.
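The defer/enable approach described above can be sketched like this on the primary (SCOPE=BOTH assumes an spfile; adjust for pfile setups):

```sql
-- On the PRIMARY: pause shipping instead of shutting the standby down ...
ALTER SYSTEM SET log_archive_dest_state_2 = 'DEFER' SCOPE=BOTH;

-- ... and re-enable it later; the ARCn/FAL processes then resolve the backlog.
ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE' SCOPE=BOTH;
```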
"Most times from the primary. Then the remaining times? So you copy manually and register? Then it's Data Guard, not a manual standby."
Before, Oracle would automatically fill in the logfiles missed due to the shutdown of the standby server.
Now, I have to copy and register them manually. This is the reason I am asking this question here.
"these errors appear on the primary when the standby is down and the primary tries to connect, so they are not worth investigating"
"When you don't want to apply archives on the standby, no need to shut down. Just set log_archive_dest_state_2='defer'."
"Once you enable it, check what the errors are in the primary alert log file."
"How is your network bandwidth? Is it capable of holding that much archive data?"
"It may take some time when you pause and restart."
Yes, changing the parameters may solve this. But we have already used this architecture for years, with no problem at all before.
"Now, I have to manually copy and register them. This is the reason I am asking this question here."
There is no need to copy and apply manually.
That's why I asked for the log file information. You have posted it, but that log information is from when the standby was down. I need the log information from after you start the database with MRP. What errors have you seen on the standby?
Also
SQL> show parameter fal
Change your strategy to disable destination instead of shutdown. 
thanks for your help.
Errors in file /****/***stby_ora_18795.trc:
ORA-01009: missing mandatory parameter
...
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 3217860-3721862
DBID 2479581232703 branch 676712346655
FAL[client]: All defined FAL servers have been attempted.
-------------------------------------------------------------
Check that the CONTROL_FILE_RECORD_KEEP_TIME initialization
parameter is defined to a value that is sufficiently large
enough to maintain adequate log switch information to resolve
archivelog gaps.
-------------------------------------------------------------
SQL> show parameter fal
NAME TYPE VALUE
------------------------------------ --------------------------------- ------------------------------
fal_client string
fal_server string PRIMARY_DB 
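If the controlfile has aged out the log switch records, the advice in the trace banner above can be followed roughly as below (the 14-day value is illustrative, not a recommendation from the thread):

```sql
-- On the PRIMARY: check how long archivelog records are kept
-- in the controlfile ...
SHOW PARAMETER control_file_record_keep_time

-- ... and raise it (in days) if gaps older than this window
-- can no longer be resolved automatically.
ALTER SYSTEM SET control_file_record_keep_time = 14 SCOPE=BOTH;
```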
I'm thinking your primary log_archive_dest_n remote destination and your standby fal_client should be using the same TNS alias.
In Oracle 11gR2, FAL_CLIENT is optional.
And this idea of CKPT's:
"Change your strategy to disable destination instead of shutdown." I think this is a good idea too.
Best Regards
mseberg 
The problem is: we have never set this on the standby server before, and it worked well before.
I understand your position, in fact if I were you I would want to know the "why" too.
But I don't see how I can answer that. I think the best I can do is offer another way to get it working again.
My best guess is that the primary tnsnames is not happy; if you want to continue down that path, then trying tnsping and trying to connect to the standby is where I would start.
Best Regards
mseberg 
Thanks for your help.
In fact, I did that test at the very beginning. Nothing is wrong with those TNS connections.
Sure, we have several options, as suggested above, to fix or work around this.
But we don't want to touch the production DB if at all possible, so currently I am writing a shell script to manually send the missed logfiles and register them.
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 3217860-3721862
DBID 2479581232703 branch 676712346655
FAL[client]: All defined FAL servers have been attempted.
It shows there is an archive gap of three archives.
What is the sync status with primary now?
Post from primary
Sql> Select thread#,max(sequence#) from v$archived_log group by thread#;
From standby:
Sql> Select thread#,max(sequence#) from v$archived_log group by thread#;
Sql> Select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#; 
Those gaps were already filled by my manual scripts (copy the logfiles, and register them).
At this moment, here is the info.
-----on the Primary -----
19:17:41 SQL> Select thread#,max(sequence#) from v$archived_log group by thread#;
THREAD# MAX(SEQUENCE#)
---------- --------------
1 3218037
Elapsed: 00:00:00.24
-----on the standby -----
19:18:33 SQL> Select thread#,max(sequence#) from v$archived_log group by thread#;
THREAD# MAX(SEQUENCE#)
---------- --------------
1 3218037
Elapsed: 00:00:00.02
19:18:53 SQL> Select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;
THREAD# MAX(SEQUENCE#)
---------- --------------
1 3218014
Elapsed: 00:00:00.02 
Hi,
Your database version is 10.2. So, do you have FAL_SERVER and FAL_CLIENT set in the standby database initialization file (pfile/spfile)?
These two parameters are very much necessary on the standby side. As mseberg already mentioned, FAL_CLIENT can be omitted on the standby side from 11gR2.
Post from the standby pfile/spfile:
show parameter FAL
OK. From one of your previous posts, I just checked that FAL_CLIENT is unset. Can you set it to the NET service name used for the standby database and update us?
Edited by: Shivananda Rao on Feb 29, 2012 9:53 AM

the stubborn standby redo log

I have a downstream database which has four standby redo log groups. One of these groups is still stuck on a redo from 23/04/2010; this old redo is never archived and is always there.
This means the downstream database only works with three redo log groups, and on some occasions all of these are in ACTIVE status; at that moment the view v$archive_dest returns an error for that (downstream) destination.
The question is:
How can I eliminate that standby redo log which is not being archived?
Thank you very much for your attention.
I ran this query:
SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS,FIRST_TIME FROM V$STANDBY_LOG;
and it returns me
GROUP# THREAD# SEQUENCE# ARC STATUS     FIRST_TIME
------ ------- --------- --- ---------- ----------
     4       1         0 NO  UNASSIGNED
     5       1      2760 YES ACTIVE     04/05/10
     6       1      2328 YES ACTIVE     27/04/10  --> the stubborn redo
     7       1         0 NO  UNASSIGNED
Edited by: user13026590 on 04-may-2010 12:16 
Do you have any error messages in the alert log?.
See this Metalink Note:
Alert.log shows No Standby Redo Logfiles Of Size 153600 Blocks Available [ID 405836.1] 
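For reference, the usual remedy for that note's symptom is adding standby redo logs whose size matches the online redo logs (153600 blocks at an assumed 512-byte redo block size is about 75 MB). The group number and path here are illustrative only:

```sql
-- On the standby/downstream: add a properly sized standby redo log group
-- so log switches are no longer starved of an available group.
ALTER DATABASE ADD STANDBY LOGFILE GROUP 8
  ('/u01/oradata/dwn/srl08.log') SIZE 75M;
```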
Mr. Bocchi,
Thank you very much for answering. What you said was exactly right, and it eradicated the problem.

FAL_CLIENT, FAL_SERVER triggering when applying with delay

Hi,
I'm on 10.2.0.3 RAC with a physical standby, an old-fashioned config with no standby redo logs, so shipping happens only after a log switch.
We've got a 10h delay, and it looks like gap resolving via FAL client/server is triggered only when the actual archive log needs to be recovered and is not found on the standby site, not when it fails to ship.
So, is that a feature or a bug?
In my understanding, gap resolving should trigger as soon as a gap is detected.
It looks like the delay we are using messed things up.
Regards.
Greg 
Dear GregG,
Have you checked the v$archived_log view's latest sequence number on both sites, and can you please post the "archive log list" command's output here?
Gap resolution and the FAL processes are triggered when there is a gap between the primary and standby databases. For instance, the primary database's online redo log sequence is 100 but the standby's is 90. The standby database tries to fetch the gap, and the MRP0 process then tries to apply those 10 logs to the standby database. After that, under normal circumstances, you should not see gap messages in the v$dataguard_status fixed view.
You can try to copy the relevant, finished archivelog from the primary to the standby, register it on the standby database, and wait for MRP to apply it.
Regards.
Ogan 
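Ogan's copy-and-register step could look like the following on the standby; the path and filename are made up for illustration, not taken from the thread:

```sql
-- On the STANDBY: after copying a missing archivelog over by hand,
-- make it known to the controlfile so MRP can pick it up.
ALTER DATABASE REGISTER LOGFILE '/arch/standby/1_1234_987654321.arc';
```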
Hi,
Well, the shipment of archive files is done by the primary database's background processes. There is no involvement of the standby database in the shipment of archive log files.
Hence the standby database cannot detect the received archived log files except while recovering. While trying to recover, if it finds any archive log file missing, it sends a request to the primary for that particular log file, and the primary database sends the file again.
No, it's not a bug, rather a necessity, as the standby database does not know which sequence# has been generated into an archive log.
Yes, identifying the missed archived log while recovering will increase the delay. Below is an example:
Primary Generates seq#: 100
Recovery process at DR : 90
Files transfered at DR:
90
91
92
93
94
95
96
97 missed
98
99
.. and so on.
Here, until the standby recovers sequence 96, it won't be able to know that seq# 97 is missing, hence the delay.
For this, we have to increase the recovery speed by other documented means.
The link below is the Oracle 10g best practices paper on Data Guard redo apply and media recovery:
http://www.oracle.com/technetwork/database/features/availability/maa-wp-10grecoverybestpractices-129577.pdf
Hope this answers to your question.
regards,
Sajjad 
Hi,
There are AFFIRM and NOAFFIRM attributes of the LOG_ARCHIVE_DEST_n parameter, with which the redo transport destination (in our case, most likely the standby destination) acknowledges received redo data before or after writing it to the standby redo log.
You may also try adding these attributes to the LOG_ARCHIVE_DEST_n parameter.
Below link states this:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28294/log_arch_dest_param.htm#i78506
regards,
S Quadri. 
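An illustrative destination setting with AFFIRM might look like the sketch below; the service name "standby_db" and the LGWR SYNC transport are assumptions, not the poster's actual configuration:

```sql
-- Ship redo via the log writer, synchronously, and require the standby
-- to acknowledge only after the redo is written to its standby redo log.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=standby_db LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
  SCOPE=BOTH;
```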
Please do not forget to mark the thread as Helpful or Correct if you find the answer.
regards,
SQuadri

dataguard issue during switchover

I've set up a Data Guard environment consisting of a primary 2-node RAC db and a physical standby 2-node RAC db, both on Solaris and running Oracle 10.2.0.2.
Testing switchover using dgmgrl throws a problem:
ORA-16775: target standby database in broker operation has potential data loss
This message appears to be bogus, because a Verify Configuration confirmed the environment was working properly. I even did a log switch to confirm that the log was successfully shipped to the standby, and it got applied.
The Data Guard configuration shows it's fine, but when doing a switchover it throws the above error.
I have removed the existing configuration and created a new one.
I have rebuilt the standby as well.
But it still throws me the same error.
What is the cause of this problem, and how do we fix it?
Thanks. 
hi,
It happened because the target standby database in a switchover operation did not have all the redo logs from the primary database. The switchover cannot continue in this case.
Possible solution:
1. Check which logs are missing.
2. Make sure the log transport service is functioning correctly.
3. Do some log switches on the primary and wait for the gap-fetching mechanism to re-ship all the missing redo logs.
4. Issue the switchover command again when there are no missing redo logs on the target standby database.
good luck!
regards,
X 
Hi Ahmad,
I have checked v$archive_gap and the result is no rows.
I have done a couple of log switches, and the logs are shipped and applied fine by the MRP process.
I have checked the logs applied in both primary and standby, and it seems to be OK.
Thanks for the suggestions.
Hi,
Below link can help you.
http://www.izzysoft.de/oracle/ifaqmaker.php?id=7;toc=1
Best regards,
Rafi. 
Hi Rafi,
I tried that earlier, but no joy.
I never knew which logs were missing when all of them are applied.
Is there any way to find out which log it is looking for, though I'm pretty sure everything is applied?
Thanks.
hi,
You can determine the most recent archived redo log file at each destination:
SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ# FROM V$ARCHIVE_DEST_STATUS WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';
DESTINATION  STATUS ARCHIVED_THREAD# ARCHIVED_SEQ#
------------ ------ ---------------- -------------
primary-site VALID                 1           947
standby-site VALID                 1           947
The most recently written archived redo log file should be the same for each archive destination listed. If it is not, a status other than VALID might identify an error encountered during the archival operation to that destination.
regards,
X. 
For the primary, VALID: 6620 for thread 1, 12213 for thread 2.
For the standby, VALID: 6620 and 6618 for thread 1, 12213 and 12210 for thread 2.
I ran a query to find the received and applied logs.
in primary
Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
2 12213 12213
1 6621 6621
in standby
Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
1 6621 6621
2 12213 12213
So the primary and standby are in sync with each other.
hi CenterB,
is your issue resolved or not ?
regards,
X 
Hi ahmad,
I have raised an SR, and it looks like a bug.
They have sent me the patch and asked me to apply it and try the switchover again.
I will let you know once I try that.
Thanks for the help.
I also tried to do something along the same lines and found different results on primary and standby.
But the results on the standby were also listed in the results on the primary. What could be the reason?
This was the query I ran on both primary and standby:
select sequence#, applied from v$archived_log where applied='NO' order by sequence#; 
Hi CenterB:
Did you find the solution? 
Just share the output of:
On the primary:
select max(sequence#) from v$archived_log;
On the standby:
select max(sequence#) from v$log_history;
Problem: You issued SWITCHOVER TO <standby> at the DGMGRL prompt. But instead of the switchover being performed, you get an
Error: ORA-16775: target standby database in broker operation has potential data loss
Cause: Recovery has not caught up (yet).
Solution: Make sure your standby database is mounted and recovery is running. If not, issue STARTUP MOUNT to mount the standby database, and start managed recovery with ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;. Wait a few minutes, and try again.
Please see:
data guard role transition problem

Physical Standby Database not opening in Read Only Mode

Hello All,
I have created a physical standby database on a different server by transferring all the archivelogs, datafiles, and directories from the primary server.
The directory structure is the same between the primary and standby databases.
I used a standby controlfile to mount the standby database, and also the pfile from the primary database. I made all the required changes to the pfile taken from the primary database.
Below is the link I followed to create my physical standby database.
http://www.oracle-base.com/articles/11g/data-guard-setup-11gr2.php
Now, the standby and primary databases are in sync. Please see below.
Primary :
SQL> select max(sequence#) from v$archived_log;
MAX(SEQUENCE#)
--------------
366
SQL> select max(sequence#) from v$archived_log where applied='YES';
MAX(SEQUENCE#)
--------------
366
Standby .
SQL> select max(sequence#) from v$archived_log;
MAX(SEQUENCE#)
--------------
366
SQL> select max(sequence#) from v$archived_log where applied='YES';
MAX(SEQUENCE#)
--------------
366
Also, I can see the archive logs transferring in the respective directory from the primary to the standby as soon as I do a log switch over.
The issue I am facing is that I am not able to open the standby database in READ ONLY mode. Whenever I try to open it in read-only mode, the alert log file shows the message below:
"Media Recovery Waiting for thread 1 sequence 367 (in transit)"
When I look in the primary database, this sequence number is not archived; it is the current log file. When I tried archiving it, my standby also got the archive file, but the alert log jumped to the next current sequence number from the primary and again started waiting on the new one. Is this expected behaviour in Data Guard?
But I understand it should allow me to open the database in read-only mode.
The database version I am using is 11.2.0.3. I do not understand why the standby database is waiting on the current logfile from the primary, which is not archived on the primary itself.
However, this is the procedure I am following:
SQL> STARTUP MOUNT
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;
After this it just hangs, and when I kill the process it shows me the error below.
SQL> alter database open read only;
ERROR at line 1:
ORA-10458: standby database requires recovery
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/usr/local/oracle/data/tvapa10/mcada/system01.dbf'
Any help is greatly appreciated and let me know if any other information is needed.
Thanks, 
Hello;
Can you post the COMPATIBLE parameter for both databases?
From my test system
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Database altered.
SQL> alter database open read only ;
Database altered.
SQL>
Also, can you confirm this file exists on the standby side:
/usr/local/oracle/data/tvapa10/mcada/system01.dbf
Best Regards
mseberg
Edited by: mseberg on Feb 28, 2013 9:10 AM 
Thanks for the reply .
Here it is,
Primary database,
SQL> show parameter compatible;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
compatible string 11.2.0.3.0
SQL>
Standby,
SQL> show parameter compatible
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
compatible string 11.2.0.3.0
SQL> 
Hello again;
They match! So much for the simple fix. I would double-check all the datafiles on the standby side and check the standby alert log for additional errors or clues.
If you find them, please post.
Best Regards
mseberg 
Hello,
What happens when you shut down the standby database and then try to open it directly? If you are permitted to do so, then:
1. Make sure that the standby is in sync with the primary.
2. Cancel the MRP on the standby and shut it down.
3. Standby: just issue STARTUP.
4. Later, start the MRP on it.
Regards,
Shivananda 
If your standby is in sync with the primary database, then you must be able to open the database after cancelling recovery.
But ensure you haven't terminated the MRP process unexpectedly.
If you see both primary and standby are in sync, I suggest you perform a couple of log switches. Let them apply completely and then cancel MRP.
All these steps keep the database in mount status; now open the database.
If you still have issues, post the results here. Before that, please tell us: have you performed any restoration of missing/new data files, or any other changes?
And have you ever opened the database before?
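The sequence suggested above - log switches, verifying apply, then opening read-only - can be sketched as:

```sql
-- On the PRIMARY: force the current log to be archived and shipped.
ALTER SYSTEM ARCHIVE LOG CURRENT;

-- On the STANDBY: confirm the latest received sequence has been applied ...
SELECT max(sequence#) FROM v$archived_log;
SELECT max(sequence#) FROM v$archived_log WHERE applied = 'YES';

-- ... then stop recovery and open read only.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
```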
OK. I have checked my datafiles. In fact, I transferred all my datafiles again from primary to standby. But it still does not allow me to open the standby database.
I also tried the method Shivananda was talking about, and when I start up, it just sits on the command below in the alert log and never actually completes opening the database.
Media Recovery of Online Log [Thread=1, Seq=380]
Recovery of Online Redo Log: Thread 1 Group 5 Seq 380 Reading mem 0
Mem# 0: /usr/local/oracle/data/tvapa10/mcada/redo/redo2.log
Thu Feb 28 11:00:50 2013
NSV0 started with pid=58, OS id=12584
Thu Feb 28 11:00:54 2013
RSM0 started with pid=59, OS id=13616
ALTER SYSTEM SET log_archive_trace=0 SCOPE=BOTH SID='mcada';
ALTER SYSTEM SET log_archive_format='%t_%s_%r.arc' SCOPE=SPFILE SID='mcada';
Data Guard: Redo Apply Services will be started after instance open completes
In between, I don't see any errors in the alert log files, which is driving me crazy, and the logs are synchronised between my primary and standby. I am really going blank on this one. One more thing: if I do a switchover of the database, then the standby database opens with no problem.
I am trying to understand why it is waiting on the current logfile from my primary, which is not archived on the primary itself, and just shows me this message in the alert log files:
Media Recovery Waiting for thread 1 sequence 380 (in transit)
And this sequence number changes when I do a log switch on my primary, which makes me feel that both are in sync but somehow I am not able to open the database because of this current log file. I also tried recovering the database, which also complains about a missing archive log which in fact was never present on the primary itself.
I tried transferring the datafiles from the primary to the standby, but I was never able to open this standby database.
Hello;
With all due respect, I have some doubt about this statement:
"and the logs are synchronised between my primary and standby"
Could you confirm this by running this SQL from your primary:
http://www.visi.com/~mseberg/data_guard/monitor_data_guard_transport.html
And posting the results.
I think you may be transferring logs, but I have some doubt they are applying.
Best Regards
mseberg 
No Problem Sir,
Please find the results below.
DB_NAME HOSTNAME LOG_ARCHIVED LOG_APPLIED APPLIED_TIME LOG_GAP
---------- -------------- ------------ ----------- -------------- -------
CADA SHAPAN1 382 382 28-FEB/11:53 0 
OK.
I believe you. Frankly, I'm not sure what to tell you.
Later:
This note offers some hope:
Unable To open Standby Database READ ONLY after Creation [ID 733089.1]
Cannot say I like the solution much.
Best Regards
mseberg
Edited by: mseberg on Feb 28, 2013 11:16 AM 
I will try to get some questions answered so that I can troubleshoot more by understanding it.
Tell me about this:
The primary database is writing all the changes to the redo.dbf files, and I can see they are up to date in my directories.
The same redo.dbf files are not getting updated on my standby database, but the files in my standby log file directories are; I created those while creating the standby database, with the extension redo.log.
For example, I can see redo.dbf on the primary showing an up-to-date time in the directory, and the same up-to-date time on my standby log files on the standby database.
Is this the expected behaviour? Also, my primary database is not using the standby log files; it is just using the redo.dbf files.
I am pretty sure I have something messed up in the redo or standby logfiles, so I am trying to go in that direction.
Yes.
The simple answer is that a database in standby mode does not use the online redo logs; it uses standby redo logs.
A database in primary mode does not use standby redo logs; it only uses online redo.
However, you want both available for switchover or failover.
You saw the MOS note in my prior post right?
Best Regards
mseberg
Edited by: mseberg on Feb 28, 2013 11:49 AM 
Thank you. That makes sense.
I will try to troubleshoot more and post an update when I find something.
Hello All,
I resolved my issue.
Here is the thing:
I DID NOT create online log files in my primary database.
I learnt about this when I did a switchover and could open my new standby database in read-only mode.
Then I noticed that all the current transactions had been written to the online log files.
But in my earlier setup, since I did not have online log files, the primary database was writing changes to the REDO.DBF files. I am going to assume that, since it was writing to REDO.DBF, my standby database was not receiving the current files unless they were archived - and that might be the reason why it was asking me for the current logfile's archive log? I would appreciate it if any one of you could confirm this concept for me.
But as soon as I created my online log files, the primary database started writing transactions to the online log files, and I could open the standby database read-only.
So, in simple words, we need the following:
1) Online redo log files on both primary and standby
2) Standby redo log files on both primary and standby
I say on both primary and standby because you are going to need them on the primary if you switch over - and of course that's the main feature of Data Guard.
Thanks a lot, everyone, for helping out here. This is my first ever OTN discussion, and I can't say in words how much great work you guys are doing by replying on the forum. You guys have inspired me, and I will also try to visit the forum often and help where I can.
Thanks a lot again. Please, anyone, confirm whether my understanding of the online and standby redo log files was right.
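The standby redo logs mentioned in point 2 can be added with statements like the sketch below. Group numbers, paths, and the 50M size are illustrative; standby redo logs should match the online redo log size, with at least one more group than there are online groups.

```sql
-- On the primary (used only after a role switch):
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  ('/u01/oradata/prim/srl04.log') SIZE 50M;

-- On the standby (needed for real-time apply):
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  ('/u01/oradata/stby/srl04.log') SIZE 50M;
```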
