SETSPACE is missing - PeopleSoft OVM Templates

Hi,
While trying to create a new record on the latest PSOVM (HCM 9.1 / PeopleTools 8.51.02) through the delivered PeopleTools binaries, I was quite surprised to see that the tablespace list was empty. A tablespace specification is compulsory on record save; otherwise we get the error "Please set a Tablespace for the <default> or current platform - Oracle. (47,98)".
Consequently, I cannot create a new record in that database. It seems setspace.sqr, which should be run when a database is created (see the PeopleTools installation guide), has never been executed against this database (although I did not check a "regular" demo database to confirm that). Worse, the script is missing from the delivered binaries, so I do not even have a chance to run it myself within the given PSOVM templates.
Nicolas. 

Small correction: while it is not available in the delivered Windows binaries, we can run that SQR from the App/Batch server as follows:
[psadm2@psovmhcm sqr]$ pwd
/opt/oracle/psft/pt/tools/sqr
[psadm2@psovmhcm sqr]$ /opt/oracle/psft/pt/tools/bin/sqr/ORA/bin/sqr ./setspace.sqr sysadm/SYSADM@H91TMPLT -zif./pssqr.ini -i./ -f../log/setspace.log -keep -PRINTER:HT
SQR for PeopleSoft V8.51
Set Table Space Name in PSRECTBLSPC
Table PSRECTBLSPC column DDLSPACENAME have been updated
with the tablespace found in the system catalog table.
Detailed below are those tables that were not updated because they have
not yet been created in the database.
The total number of records updated appears at the bottom of this report.
Recname             Tablespace
---------------     ----------------
...
Ending SQR.
SQR for PeopleSoft: End of Run.
[psadm2@psovmhcm sqr]$
But it would have been nice if that had already been executed against the database by default, as indicated in the PeopleTools Installation Guide, Task 7B-10: Running SETSPACE.SQR (when building a database manually on Unix).
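For reference, a quick way to check whether setspace.sqr still has work to do is to compare the stored tablespace names against the Oracle data dictionary (a minimal sketch, using only the PSRECTBLSPC / RECNAME / DDLSPACENAME names visible in the report above):
-- records whose stored tablespace does not exist in this database
select RECNAME, DDLSPACENAME
from PSRECTBLSPC
where DDLSPACENAME not in (select tablespace_name from user_tablespaces);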
Thanks,
Nicolas. 

This seems to be fixed in the latest PeopleSoft OVM FSCM 9.1 Feature Pack 1 running on PeopleTools 8.51.07, released in April 2011.
Nicolas.


Do we need to run catbundle.sql for ANY new db we create after CPU's?

We're in the middle of reviewing the JAN2009CPU readme, and found a curiosity in the patch notes.
- Our platform is Oracle 10.2.0.4 Enterprise Edition on Solaris 10 64-bit.
In the Post-Patch instructions for the CPU, we see the following clause
{color:#3366ff} ...
3.3.5, "Post Installation Instructions for New and Upgraded Databases"
These instructions are for both non-RAC environments and RAC environments
when a database is created or upgraded after the installation of CPUJan2009.
You must execute the steps in Section 3.3.2.1, "Loading Modified .sql Files
into the Database" and Section 3.3.2.2, "Recompiling Views in the Database"
for *any new database that was created* by any of the following methods:
- Using DBCA (Database Configuration Assistant) to select a sample database (General, Data Warehouse, Transaction Processing)
- Using a script that was created by DBCA that creates a database from a sample database
- Cloning a database that was created by either of the two preceding methods,
and if Section 3.3.2.1, "Loading Modified .sql Files into the Database"
was not executed after CPUJan2009 was applied.
Upgraded databases require no post-installation steps to be executed.
...
{color}
I don't understand -- does this mean we have to do the following any time
we create a new database from now on?
- run dbca
{color:#ff0000} - run '@?/rdbms/admin/catbundle.sql cpu apply'
- check $ORACLE_HOME/cfgtoollogs/catbundle/catbundle_CPU_<SID>_APPLY_<TimeStamp>.log for errors
- run '@?/cpu/view_recompile/view_recompile_jan2008cpu'{color}
I would have thought that the new SQL scripts under $ORACLE_HOME are already
upgraded, so that we do not *have* to do any extra work on top of simply running
dbca to create a new DB.
I see a similar phrase in the CPU2008JUL (patch 7150470):
{color:#0000ff} "You must execute the steps in this section (Section 3.3.2.1,
"Loading Modified .sql Files into the Database") for any new database you create
or any database upgraded to this release since the CPUJul2008 patch was applied."
{color}
.. AND CPU2008OCT (patch 7375644):
{color:#0000ff} You must execute the steps ... for any new database you create or
any database upgraded to this release since the CPUOct2008 patch was applied.
{color}
.. but the April 2008 CPU (patch 6864068) doesn't have these clauses.
Can anybody clarify this issue? Has anyone created DBs after the JAN2009 CPU
without running catbundle.sql, and had any problems?
Edited by: lrp on Feb 4, 2009 2:45 PM 
I believe ML Doc 605795.1 (Introduction To Oracle Database catbundle.sql) answers your questions.
HTH
Srini 
I read that document prior to posting the question. It has helpful background information, explaining what's in the bundle and how it works. However, it doesn't confirm whether I need to run this script for a newly created database after a patch. Here's a scenario:
1) install Oracle 10.2.0.1
2) install Oracle 10.2.0.4 patchset
3) create database ORCL1
4) shutdown database ORCL1
5) apply Jan2009CPU
6) startup database ORCL1
7) @?/rdbms/admin/catbundle cpu apply
8) @?/cpu/view_recompile/view_recompile_jan2008cpu.sql
9) @?/rdbms/admin/utlrp
10) ... a month later, we want to create a brand new db.
11) run dbca and create a new database ORCL2
*12) @?/rdbms/admin/catbundle cpu apply*
*13) @?/cpu/view_recompile/view_recompile_jan2008cpu.sql*
14) @?/rdbms/admin/utlrp
Because of the clause that was just introduced in Jan2009 (if I read this correctly), for any new database install I now have to run this new piece of code that is not explained in the original 10g install documentation. Am I correct in assuming that I now have to remember this crucial piece of additional install information? It seems like a rather bad hack/workaround for a patch: previous patchsets (Oct/Jul 2008 notwithstanding) had us patch existing databases with new dictionary information, but it was understood that any newly created databases would be safe under the new critically patched code.
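For what it's worth, steps 12-14 boil down to a short SQL*Plus session like the one below (a sketch based only on the paths quoted from the readme above, not a verified run; the view_recompile script location can differ per patch):
-- run as SYSDBA in the patched ORACLE_HOME
@?/rdbms/admin/catbundle.sql cpu apply
@?/cpu/view_recompile/view_recompile_jan2008cpu.sql
@?/rdbms/admin/utlrp.sql
-- then check $ORACLE_HOME/cfgtoollogs/catbundle/ for APPLY errors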
The catcpu script calls the catbundle script - so catbundle does not need to be run explicitly for databases being patched. For new databases created in the patched home, catbundle needs to be run explicitly to fix vulnerabilities in the database.
HTH
Srini 
Yes, even after creating a new db with dbca you will have to apply catcpu.sql from the patch,
and view_recompile_jan2008cpu.sql as well.
You can check:
select * from registry$history;
and you will see which patches have been applied to the db.
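For example, in a more readable form (column names as in the 10.2 dictionary; a sketch):
-- show applied CPU/bundle patches, most recent last
select action_time, action, version, id, comments
from registry$history
order by action_time;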
Just to make sure my understanding is correct about this step in the post-install:
Upgraded databases require no post-installation steps to be executed.
So if we take a brand new 10.2.0.1 database, apply the 10.2.0.4 patch, and then apply the 2009CPU patch, we do not need to perform the cpu post-install steps of catbundle, etc.? It is only for new databases created after this via dbca, correct? 
Any ideas on clarifying the above? 
The CPU will no doubt fix PL/SQL in the database, for example OWA and the like.
Since each CPU may be different it will come with its own instructions. These are what must be followed.
It is highly likely you will need to run a .sql script after patch installation (i.e. opatch apply) and you will not be able to skip it. Whether the DB is patched or not would not be relevant IMHO.
Chris Slattery wrote:
The CPU will no doubt fix PL/SQL in the database, for example OWA and the like.
Since each CPU may be different it will come with its own instructions. These are what must be followed.
It is highly likely you will need to run a .sql script after patch installation (i.e. opatch apply) and you will not be able to skip it. Whether the DB is patched or not would not be relevant IMHO.
Agreed -- even then, you would have had to download the correct CPU for your particular platform/patchset (i.e. 10.2.0.3 on Solaris), so those post-install instructions would have been specific to your particular platform.
Going back to Dallas_dba's question, I believe yes, if you install 10.2.0.1, upgrade to 10.2.0.4, and apply a 2009JANCPU, then create a new database -- you would have to run
rdbms/admin/catalog.sql
rdbms/admin/catproc.sql
sqlplus/admin/pupbld.sql
and then catbundle.sql followed by view_recompile, as listed under the JAN2009 CPU post-install notes.
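In SQL*Plus that sequence would look roughly like this (a sketch; pupbld.sql is normally run as SYSTEM, everything else as SYSDBA):
-- dictionary and PL/SQL packages for the new database
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
-- product user profile (run as SYSTEM)
@?/sqlplus/admin/pupbld.sql
-- bring the new database up to the CPU level of the home
@?/rdbms/admin/catbundle.sql cpu apply
@?/cpu/view_recompile/view_recompile_jan2008cpu.sql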
That much was drilled into my head during a phone call to metalink support. I believe it's also reinforced in the following metalink documents:
metalink 422303.1 - Should You Run Post-installation Scripts On A Newly Created Database If The Bundle Patch Or CPU Patch Is Already Applied On The Oracle Home?
metalink 311160.1 - After Applying Cpu April2005 and July2005 Is There A Need To Run Catcpu On A Newly Created Database ? 
On a related note, would we have to run catupgrd.sql (and other post-upgrade steps) on a newly created db after applying a patchset (not just a CPU)?
I.e., if I installed 10.2.0.1, patched to 10.2.0.4, then created a DB (not using the templates).
lrp wrote:
Can anybody clarify this issue? Has anyone created DBs after the JAN2009 CPU without running catbundle.sql, and had any problems?
It's pretty explicit:
*You must execute the steps* in Section 3.3.2.1, "Loading Modified .sql Files
into the Database" and Section 3.3.2.2, "Recompiling Views in the Database"
for any new database that was created *by any of the following methods*:
- Using DBCA (Database Configuration Assistant) to select a sample database (General, Data Warehouse, Transaction Processing)
- Using a script that was created by DBCA that creates a database from a sample database
- Cloning a database that was created by either of the two preceding methods,
and if Section 3.3.2.1, "Loading Modified .sql Files into the Database"
was not executed after CPUJan2009 was applied.
So, if you create a database by any of the explicitly listed methods (all of which create a database from a backup that may have been made of a database that was not patched to that level), then you need to run the scripts. If you create a database by some other method (about all that is left is a command-line CREATE DATABASE), then part of that creation will necessarily include running the necessary cat* scripts, which will already be at the correct level.

Document Upload Size limit in OAE DB

Hi,
I am a new user of Oracle Application Express. Presently, on my company's server we have been assigned a 5 MB workspace.
Thus I was confused as to what the storage capacity limitation would be for documents uploaded into the DB.
If I need to upload a lot of documents and the HTML DB tables do not have sufficient space to store all of them, is there a way, as an alternative, to upload them to our Oracle home directory in our file system and/or host the files online?
Any resources on this would be welcome. This is kind of urgent.
Thanks & Regards,
Bala 
Actually, I have similar problems with local Ora XE - I was trying to import the UN/LOCODE 2006 data (a 3.5 MB CSV file) and the thing crashed every time I tried. Then I created a SQL script (13 MB) and Ora XE still does not want to execute it!
Moreover, when I try to export my schema files, instead of PK I am getting some strange objects like
/
CREATE UNIQUE INDEX "SYS_IL0000013689C00017$$" ON "PROFILE" (
/
This is totally unacceptable behavior for a product claiming to be stable and reliable, and it should be looked at very carefully.
Best,
Peter 
Hello,
I upload large files quite often with no problem, both on local and remote instances. Can you describe a bit more the issues you are running into?
CREATE UNIQUE INDEX "SYS_IL0000013689C00017$$" ON "PROFILE"
I believe these types of objects have to do with your recycle bin and are to be expected; in most cases you would want them: http://orafaq.com/node/968
>>
This is totally unacceptable behavior for a product claiming to be stable and reliable, and it should be looked at very carefully.
>>
Help us help you: many people have success doing these sorts of operations; usually it's your particular setup or configuration, not the product.
Carl 
This is totally unacceptable behavior for a product
claiming to be stable and reliable, and it should be
looked at very carefully.
I have recently uploaded a 16 MB file into our document management system w/o a problem (Oracle 10g, APEX 2.0). It took about 30 sec. to upload. I have not checked the size of other uploaded files, but based on the lack of complaints, I'd say other users have not had any problems either.
It may be something about your PC/server setup?!?
Vojin 
Peter,
when the import crashes in Oracle XE, it might be better to post your problem in the XE forum:
Oracle Database Express Edition (XE)
Actually, I have similar problems with local Ora XE -
I was trying to import the UN/LOCODE 2006 data (a 3.5 MB
CSV file) and the thing crashed every time I tried.
Then I created a SQL script (13 MB) and Ora XE still
does not want to execute it!
1) Where can I download the file? Then I can give it a try.
2) What version of XE are you using, the Western European or the Universal Edition (Unicode)?
3) Are you running on Windows or Linux?
4) Any error messages?
5) Try to enable the error logging and post the result:
###
To see the error log as you see in Apache, logon as SYSTEM via SQL*Plus and execute:
SQL> execute dbms_epg.set_global_attribute('log-level', 3)
The error log will go to the database trace file app/oracle/admin/XE/bdump/xe_s00?_????.trc. Please ignore the bogus error message "Embedded PL/SQL Gateway: Unknown attribute 3" in the error log.
BTW, here are the log levels:
0 - LOG_EMERG
1 - LOG_ALERT
2 - LOG_CRIT
3 - LOG_ERR
4 - LOG_WARNING
5 - LOG_NOTICE
6 - LOG_INFO
7 - LOG_DEBUG
###
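And to turn the logging back down afterwards, the same DBMS_EPG call applies (a sketch; level 0 is the lowest level in the list above, I have not confirmed the shipped default):
SQL> execute dbms_epg.set_global_attribute('log-level', 0)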
Moreover, when I try to export my schema files,
instead of PK I am getting some strange objects like
CREATE UNIQUE INDEX "SYS_IL0000013689C00017$$" ON "PROFILE" (
What do you mean by export? How did you do it?
What do you mean by PK (primary key)? What is missing that should be there?
Please be more specific in your request and use as much code / error messages as reasonably possible.
Regards,
~Dietmar. 
Thanks for all the comments.
I am trying to import the UN/LOCODE 2006-1 table (the CSV or txt version), which is publicly available at the United Nations Economic Commission for Europe site:
http://www.unece.org/etrades/download/downindex.htm
My box has 1 GB RAM, MS Windows Home Edition SP2, with Oracle Database XE installed.
The export problem I mentioned appears when I try to export my schema from Oracle APEX 2.1 (bundled with Ora XE) via Utilities > Export, and there are many lines like that - as you can see, they are not even syntactically correct.
If someone has succeeded in importing the UN/LOCODE data, please let me know. Thanks!
Hi Peter,
I just tried it and didn't have any problem at all. It worked right away.
This is what I did:
- Unzip file 2006-1 UNLOCODE CodeList.csv from zip download
- Change browser language from German to English
- Logged in as user HR at my local XE instance
- Home>Utilities>Data Load/Unload>Load>Load Data
- Load to : new table
- Load from: upload file
- Load data:
- separator: ,
- optionally enclosed by "
- File character set: Western European Windows (1252)
- next
- load data
I use the universal edition of XE on a Windows Media Center Edition, 1.5 GB RAM.
The export problem I mentioned appears when I try to export my schema from
Oracle APEX 2.1 (bundled with Ora XE) via Utilities > Export, and there are many lines
like that - as you can see, they are not even syntactically correct.
Can you post some of these incorrect lines?
Regards,
~Dietmar.

Migration MySQL to Oracle through SQL Developer 1.2

A customer using SQL Developer is migrating MySQL 4.1.22 database data to an
Oracle 9.2.0.6 database. Capture and Convert completed, but it hangs during
the Generate stage.
The customer followed the steps published on otn.oracle.com.
Steps 1-2:
Create a schema and user named "club" on Oracle, the same as the user in the
MySQL database, with sufficient tablespace size.
Capture the MySQL DB to get the Captured Model.
Convert to the Oracle model to get the Converted Model.
Capturing MySQL and converting to the Oracle model were successful.
Step 3:
Generate failed, with no error logs; the new window failed to continue.
Generating Oracle SQL
Object Type     No of Objects Generated
User            1
Schema          1
Sequence        36
Table           1
Generation Failed.
The MySQL DB is on Linux AS 4.
The Oracle DB is on Linux AS 4.
Any suggestions would be appreciated.
Thanks,
Michael. 
Can you tell us which version of SQL Developer you are using? Also, it seems that Generate is not hanging but fails at some stage; hence you are getting the failure message. Please check whether anything is displayed in the Log window or console.
- Rakesh

PeopleSoft Upgrade Fin 9.1 to 9.2

We are having a space issue trying to run the upgrade step Performing Data Conversion Concurrently. We have a 300 GB SQL Server 2012 DB, and when this step runs the log files grow to 500 GB. We have the recovery mode set to "Simple" on SQL Server. How can we make this step more manageable, reduce the log space needed, and still complete it in a reasonable amount of time? Thank you
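In case it helps anyone hitting the same thing, the log growth can at least be watched while the step runs (SQL Server syntax; a sketch, not from the upgrade guide):
-- transaction log size and percent used, per database
DBCC SQLPERF(LOGSPACE);
-- under SIMPLE recovery the log truncates at checkpoints; an occasional
-- manual checkpoint between conversion groups can help bound its growth
CHECKPOINT;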
Hello, this is Henry Ramirez from the Oracle Install and Upgrade Support Team. Any time an upgrade is performed from one apps release to another, we should look to track the initial-pass timings for data conversion. With the latest version of the upgrade we have already made data conversion faster by no longer running it via the command line but instead sending the job to the Process Scheduler, which gives us faster results. For MSS databases we only have the certified minimum release version; other than Chapter 23 of the 8.53 install and admin guide for Microsoft, that is all we have for SQL Server 2012 setup details in combination with 8.53.

ODI-1529 Refresh of variable "BIAPPS.13P_CALENDAR_ID" failed (ORA-00942)

Hi All,
Recently did a Production BI Apps installation. When I execute the initial domain load plan, I am facing the below issue immediately, at the refresh variable step itself:
ODI-1519: Serial step "Start Load Plan (InternalID:390520)" failed because child step "Global Variable Refresh (InternalID:391520)" is in error.
ODI-1529: Refresh of variable "BIAPPS.13P_CALENDAR_ID" failed :
select CASE WHEN 'Global Variable Refresh' in (select distinct group_code from C_PARAMETER_VALUE_FORMATTER_V where PARAM_CODE= '13P_CALENDAR_ID')
THEN (select param_value
from C_PARAMETER_VALUE_FORMATTER_V
where PARAM_CODE= '13P_CALENDAR_ID'
and group_code='Global Variable Refresh'
and datasource_num_id = '#BIAPPS.WH_DATASOURCE_NUM_ID')
ELSE (select param_value from C_GL_PARAM_VALUE_FORMATTER_V where PARAM_CODE= '13P_CALENDAR_ID' and datasource_num_id = '#BIAPPS.WH_DATASOURCE_NUM_ID')
END
from Dual
942:42000:java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist
Probable cause: a few objects (synonyms) are missing in the BIACOMP schema and were not configured properly during the BI Apps installation.
Is there any way to fix this issue without reverting or redoing the installation? Any suggestion would be most helpful to us.
Thanks in advance,
Sekar
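A quick first check for the two views named in the failing query, using standard dictionary views (a sketch, run as a DBA):
-- do the underlying objects exist anywhere, and as what type?
select owner, object_type, object_name
from dba_objects
where object_name in ('C_PARAMETER_VALUE_FORMATTER_V', 'C_GL_PARAM_VALUE_FORMATTER_V');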
Have you completed, or are you in the process of completing, the post-install instructions in the installation guide? If so, open ODI Studio and log in. Go to the Topology Navigator tab and expand Physical Architecture > Technologies > Oracle > XX_BIACOMP (where XX is PROD or whatever name you gave during the RCU installation). Double-click the blue icon and verify that the data server user is XX_BIACM_IO (again, where XX is PROD or whatever) with the password entered during the RCU setup. Go to the JDBC tab, make sure the JDBC URL connection details are correct, and then click the Test Connection text at the top. This will verify whether your connection is successful or whether you have an issue with the actual connection to your DB and schema.
Also, while I'm thinking about it, double-check that all of the perl patching completed successfully and that PSA.sh(bat) completed correctly each time (ATG, FSM, BIApps). For the perl patching, check the logs by following the steps below.
Log file location:
If you remember, you set a WORKDIR path in the apply_patches_import.txt file during installation. In that directory you will find the following log files:
- final_patching_report.log
- biappshiphome_generic_patches.log
- odi_generic_patches.log
- oracle_common_generic_patches.log
- weblogic_patching.log
Open final_patching_report.log first to determine whether all patches were applied and to identify any that were not successful:
cd $WORKDIR
vi final_patching_report.log
Hello Wagner,
Thanks a lot for your reply. As mentioned above:
1. We completed the post-installation steps, and the test connection was successful under the XX_BIACM_IO schema in the topology.
2. We also checked those work_dir log files right after applying the patches, though we will review them again for confirmation.
Examining the SQL query in the error, we found that a few synonyms are missing under BIACOMP, among them:
C_PARAMETER_VALUE_FORMATTER_V
Cross-checking with the Test environment, this is a view in the XX_BIACOMP schema, which in turn is created as a synonym in the XX_BIACM_IO schema by default during the installation process.
We suspect there may be a problem with the dump files used for the schemas (OBIA, ODI, BIACOMP) - they may not have been configured properly. If so, we would need to redo the installation from the beginning.
Kindly suggest a recommended solution at this stage.
Best Regards,
DB
I am thinking maybe just re-running the OBIA RCU to drop and then re-add those schemas would work. I have, on a couple of occasions, had to back out and restart an installation about halfway through. I've successfully dropped and re-added the OBIEE RCU schemas without issue after an install. However, I've not dropped and re-added the OBIA RCU schemas; there could be internal GUIDs in the db tables (I'm thinking of things like the ODI repo SNP tables) that would be broken in the process. It could go a couple of ways. You can start by seeing how many of the synonyms are missing and attempting to create them manually if it is just a couple (see the sketch below). If you want a 100% clean PROD install - and since it is production, I'd probably lean this way - start by backing out the RCU schemas by running the RCU again. There is an excellent Oracle guide to backing out and refinishing the last few steps of the install (config.sh and configApps.sh). I would probably also re-copy the three .dmp files to make sure they aren't the issue, and then follow the steps from this support.oracle.com document: OBIA 11g How Can I Re-run the configApps Process? (Doc ID 1951077.1). Keep us posted!
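To see how many synonyms are actually missing, something like this comparison works (a sketch; the PROD_ prefixes are assumptions - substitute your own RCU prefix):
-- BIACOMP views that have no matching synonym in the _IO schema
select v.view_name
from dba_views v
where v.owner = 'PROD_BIACOMP'
and not exists (select 1
                from dba_synonyms s
                where s.owner = 'PROD_BIACM_IO'
                and s.table_owner = v.owner
                and s.table_name = v.view_name);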
Hi, sorry for the delay in replying! The actual problem in my case was the use of an old BI Apps RCU version. We then manually imported all the objects (without data) using DB scripts from the Test to the Prod environment, which fixed the issue. The suggestions mentioned above would also be possible fixes for this type of error. Thanks a lot, Wagner. Cheers
To be clear, we did not export/import all three schemas, which is not recommended after installation:
- BIACOMP - we created the few missing synonyms
- DW - exported all the tables/sequences to the Prod DW
- ODI REPO - no changes made; everything was fine.
