Saving calculated fields of dynaset - OO4O and Wizards (Archived)

I use Oracle 8.0.6. The problem is that to keep the data in the calculated fields of a dynaset I have to perform an UPDATE. This command writes the data into the database, which I don't like: it takes time and, besides, I don't need to save this temporary data in the database.
If I don't perform the UPDATE, the calculated fields are cleared when I move to the next record.
My question is whether later versions of Oracle behave the same way, or whether this is a glitch only in version 8.0.6.
Thanks in advance.
Vladimir Brodskiy

Related

Insert without update if records already exist

LS,
I'm trying to build a mapping to insert not yet existing locations into the locations-table.
If I use INSERT, it would fail as soon as the location already exists.
If I use UPDATE/INSERT or INSERT/UPDATE, it would overwrite any existing record, which results in unnecessary DML.
Is there a fast way to do something like TEST/INSERT, to first test whether the record already exists (with the offered attributes) and then insert it if it does not yet exist? Nested subqueries are not supported, so that's a no-go anyway.
For now I've built a rather time-consuming mapping which outer-joins the locations table with the incoming table (the (+) on the locations-table side), followed by a filter which selects the records that did not match a record in the locations table.
One other possibility would be to join the incoming table and the locations table and then use a SET operator performing a MINUS, which would return no records if everything already exists, or return any not-yet-existing records. In the end I'd say this solution would not be much different in performance from the aforementioned implementation.
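In rough SQL terms (table and column names are purely illustrative), the two variants boil down to something like:

-- Variant 1: outer join plus filter on the unmatched rows
SELECT inc.location_code, inc.location_name, inc.country
FROM   incoming inc, locations loc
WHERE  inc.location_code = loc.location_code (+)
AND    inc.country       = loc.country (+)
AND    loc.location_code IS NULL;

-- Variant 2: set operator
SELECT location_code, location_name, country FROM incoming
MINUS
SELECT location_code, location_name, country FROM locations;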
What are your thoughts on this?
Cheers, Patrick 
Patrick,
In general, your solution is not bad, but it may be that some indexes are missing (e.g. on a column that you join on), which results in relatively poor performance. You may want to verify the execution plan of the statement.
Anyway, there are some other ways to go if you manage to identify when a record was created on the source. If that information is in a date column then you can use that as your filter column.
If your data source is at least Oracle9i then you may want to use Change Data Capture in order to let the database manage the capture of changes.
Thanks,
Mark. 
Hi Mark,
There will be an index on the main matching pair, which should improve performance sufficiently (there are two possible matches: first location code and country (with index), then location name and country).
Identifying when a record is created is a bit difficult, since it will be inserted into a staging table which is always emptied beforehand. I guess the information you were missing is that the source is not another database, but flat files which are loaded from the Unix environment onto the database using SQL*Loader. Also, warehouses (the actual locations that need to be inserted if they don't exist yet) are integrated with inventory records and do not have a creation date as an attribute in the source file.
Current strategy: every time a source file has been read, it is processed into the Staging Area and - if no errors - into the ODS. This is job-oriented. The refresh of the data warehouse happens in the early morning, therefore only representing yesterday's situation.
Summing up, in staging these locations will always have a 'new' date.
I think this also means that Change Data Capture is out of the question. If that is a misconception, how could it be implemented then?
Cheers, Patrick 
Patrick,
Unless there is something you can change in the strategy, I guess you're on the right track:
- either you want to optimize the lookup,
- or you would want to update everything (even though that may imply unnecessary updates).
Thanks,
Mark.

How to boost migration of large SQL Server Table with CLOB field

I have a SQL Server table with 864,839 rows, one column of which is a CLOB.
Migrating the data across to Oracle is taking a very long time, so far 82% moved in 18 hours.
I increased the Parallel Data Move Streams from 5 to 10; although this seemed to help move all of the other tables across, it looks as if it moves one table per stream?
I also increased the commit interval from the default of 100 to 1000, but I guess I need to make that commit point even bigger for this one particular table.
I'd prefer to use SQL Developer Migration to do this transfer. I realise that the Heterogeneous Option could be a way forward, but that hasn't been tested for this migration (the initial tests used much smaller tables).
Any hints on how I can help SQL Developer migrate this data a lot quicker?
Also how can I tell SQL Developer to migrate data from only one table, rather than the entire SQL Server Database?
Thanks,
Alan... 
Ah, I've worked out how to migrate only one table.
I just need to capture tables one or more at a time rather than the entire database.
The captured model can then be renamed, as can the converted model.
Thanks,
Alan... 
Hi there,
We have exactly the same problem.
DBMS_HS_PASSTHROUGH cannot work with CLOBs.
Do you have a solution already?
Cheers,
Toon 
Hi
If you have created an external database link from Oracle to SQL Server using HSODBC.ORA, for example:
CREATE DATABASE LINK ext_p1 CONNECT TO uid IDENTIFIED BY pwd USING 'iname';
Try using the COPY command and see if it works over the database link ext_p1.
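As an alternative to the SQL*Plus COPY command, a plain insert across the link looks roughly like this (object names are illustrative, exact identifier quoting depends on the gateway configuration, and LOB/TEXT columns are generally not supported over an HS link):

INSERT INTO local_table (id, name)
SELECT "id", "name" FROM "source_table"@ext_p1;
COMMIT;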
garani 
Hi Alan
CLOBs and HS don't mix.
My advice, limited though it may be, is to cast your CLOBs to VARCHAR2, or create a new table comprised of the cast fields and import from that table.
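A rough illustration of that staging-table idea on the SQL Server side (hypothetical table and column names; anything longer than the varchar limit is truncated):

-- SQL Server side: copy with the large text column cast down to varchar
SELECT doc_id,
       CAST(doc_body AS varchar(8000)) AS doc_body
INTO   docs_cast
FROM   docs;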
Also, seriously consider SQLLDR, as the performance will totally outstrip any HS process.
.. Good luck.

Variety of Issues with SQL Developer 2.x

Version 2.1.1.64
Build MAIN-64.45
I have a couple of problems with SQL Developer and wondered if anyone knew whether the issues will be fixed at any point soon; some are Oracle-oriented, some are third-party-oriented.
I wanted to use Georaptor in order to work more easily with the Oracle geometry datatypes that are part of Oracle Locator, but the last version of Georaptor is not compatible with 2.x SQL Developer. Does anyone know if a new version is planned, and if not, is there an alternative product that is either free or can be used within the scope of the standard Oracle Locator licensing?
I also have immense problems with importing data from Excel files. SQL Developer 2.x appears to have serious issues with any Excel file that contains more than a few thousand rows. It will start throwing out rows, claiming that it can't insert null (despite every cell of every field having data of the correct format), or may claim that a PK constraint has been breached when it hasn't. If you take the list of failures, copy/paste them and run them manually, 90% will then insert correctly. If I could put my finger on it and say it's happening because of "x", that would be more helpful for me trying to avoid the issue and for you guys trying to answer/fix it, but the reality is that SQL Developer just feels ultra-flaky, seemingly giving spontaneous errors as and when it feels like it.
Thirdly, I found a specific bug in the insert code. I altered the size of one of the fields in my table. I committed it, I refreshed the table in all visible windows, but when I ran the import wizard, it still remembered the old size. The only workaround was to exit the program and reload.
Georaptor is a third-party extension and so any issues with that extension are best highlighted in a separate thread with a specific title.
However, a team of developers (a true open-source project here) updated this extension and posted the updates in July. Have you tried the latest download with 2.1?
It is called release 2.1.1 (Build 3) and is available from https://sourceforge.net/projects/georaptor/
The team is happy for you to get involved and to post bugs/requests on that site.
For the import/export issues, it's generally best to log bugs via Metalink; however, the team has done a lot of work in this area for SQL Developer 3.0. We will run an early adopter program for 3.0 and you can check out the updates in that release. No dates are available for the early adopter program.
Sue 
Georaptor 2.1.1.5 is available as an 'update' from within SQL Developer, but though it downloads and claims to have installed, it never actually does.
I'm not sure when you last tried using Check for Updates, but I have not included the Georaptor extension in the Check for Updates option for some time. You need to download the jar and then point Check for Updates to that jar, and it should install. I have just installed the latest in my SQL Developer 2.1 build. You'll see the Georaptor preference in the Preferences dialog and the menus on the View menu.
Sue 
Alternatively, you can add their update centre: http://georaptor.sourceforge.net/install.xml to the SQL Developer Check for Updates list, and this will install the latest, without you having to download the jar file.
Sue 
Just downloaded the latest jar and pointed updates to that. It went through the motions of installing and asked to restart SQL Developer, which I duly did. No sign of any Georaptor options in either of the places mentioned.
Are you sure you're using the 64-bit version of SQL Developer that I am?
Hmmm, no mention of 64-bit in your initial query that I can see. Nope, I'm not using 64-bit. You can continue to track the query on the Georaptor site or start a new thread for them on this forum.
(The 64 in the version number is not related to 64-bit; it is just the build number.)
Sue 
Would using the 64-bit version also account for the generalised flakiness of SQL Developer?
It could be. It's always worth mentioning that, so that we can track the issue against the same release and verify whether it's the build. I'm assuming you have the 64-bit download and the correct 64-bit drivers.
Sue 
I don't have the freedom to pick and choose what software is provided for me at work, but the versions that are in use are:
Java(TM) Platform 1.6.0_21
Oracle IDE 2.1.1.64.45
Versioning Support 2.1.1.64.45
running on Windows 7 Professional 64-bit.
Java and Oracle SQL Developer are both 64-bit versions.
I don't think that the errors you are encountering are related to the size of the file, but certainly your problems become more unwieldy as the file gets larger. I have just imported an xls file with 25k rows without any problems. I re-imported after deleting the data in some cells and still had no problems on import.
I am assuming that you are doing a standard import (without selecting the "send to worksheet" option). This means that Java batch inserts are being executed by the importer. There are some differences which may result in different errors being encountered versus executing the SQL in the worksheet.
One important option to know about is the SqlArrayFetchSize preference. This preference controls the number of rows processed in a batch. Note that if an error happens in any row of the batch, the entire batch is rejected. If you are having trouble with errors, it may be helpful to set the SqlArrayFetchSize to 1 before the import (Tools -> Preferences -> Database -> Advanced). This will result in each row being inserted separately, and your bad file will be created with only the rows that are actually in error.
Another option you have is to click "send to worksheet", and an insert script will be generated which can then be executed.
Can you please provide some clarification of the third problem you are having? I am able to change the size of a column (I tried both making it larger and making it smaller). When I import into the table, the new size is displayed correctly on the Target Definition page in the Target Table Columns panel. When I continue with the import, it appears to me that the new size is being used. When I made the column shorter, errors were encountered for rows that had fields in the file that were longer... as expected.
As I described, if you change the size of a table using the SQL:
ALTER TABLE
<tablename>
MODIFY (
<field> char(32)
);
(or similar), then commit, then right-click on the table, go to IMPORT DATA, choose the filename of the Excel file to import, click OPEN, and proceed to step 3 of 4 of the import wizard. When you choose the modified field as the target field, it will show the original size, not the altered one. In order to make it show the new size, you must first close and re-open SQL Developer (or close/reopen/refresh every window/tab that is currently active).
I would note that in a table with 2 fields and 4 rows, and data whose maximum length was 4 characters, it would allow me to make the column size bigger, but not smaller, even if trying to set it back to the original size, giving:
ALTER TABLE
CHAINSPAN_META_YEAR
MODIFY (
YEAR_DISPLAY CHAR(16)
);
Error starting at line 1 in command:
ALTER TABLE
CHAINSPAN_META_YEAR
MODIFY (
YEAR_DISPLAY CHAR(16)
)
Error report:
SQL Error: ORA-01441: cannot decrease column length because some value is too big
01441. 00000 - "cannot decrease column length because some value is too big"
*Cause:   
*Action:
SELECT * FROM META_YEAR;
YEAR_VALUE | YEAR_DISPLAY
6          | 2006
20         | 2020
50         | 2050
80         | 2080
I would note that in a table with 2 fields and 4 rows, and data whose maximum length was 4 characters, it would allow me to make the column size bigger, but not smaller, even if trying to set it back to the original size
Use VARCHAR2 for variable data; CHAR(16) means you always have 16 characters in the database - when you insert '2080' it is stored as 16 characters, and you cannot go back to CHAR(4).
Philip 
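For completeness, a sketch of one way to get back to a shorter column once the values have been blank-padded, using the table and column names from the thread (verify against your own data first):

ALTER TABLE CHAINSPAN_META_YEAR MODIFY (YEAR_DISPLAY VARCHAR2(16));  -- CHAR to VARCHAR2; existing values keep their trailing blanks
UPDATE CHAINSPAN_META_YEAR SET YEAR_DISPLAY = RTRIM(YEAR_DISPLAY);   -- strip the padding
COMMIT;
ALTER TABLE CHAINSPAN_META_YEAR MODIFY (YEAR_DISPLAY VARCHAR2(4));   -- now the shrink succeeds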
When I tried to reproduce your problem, I right-clicked on the table, chose Edit, and made my changes using the editor. This does not show the same problem you described when using the SQL worksheet to edit the table.

MySql command to overwrite the data in a table

Does anyone know a MySQL command that could overwrite the records of a database table? I need to overwrite the content with new records (not referring to UPDATE).
This has nothing to do with JDBC. JDBC is just a Java API which enables interaction with DBs using standard DB languages like SQL. Your problem is in the DB/SQL corner. Refer to the SQL documentation at the website of the DB vendor in question. Good luck.
Delete (or truncate, if the table supports it) followed by insert? Otherwise update.
Edit: With MySQL you can even do "insert ... on duplicate key update ..."
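For example (a hypothetical table t with a primary key on id):

INSERT INTO t (id, val) VALUES (1, 'new value')
ON DUPLICATE KEY UPDATE val = VALUES(val);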
localpott wrote:
Does anyone know a MySQL command that could overwrite the records of a database table? I need to overwrite the content with new records (not referring to UPDATE).
You should have asked this on a MySQL forum. Or Google.
I don't really understand what you want though. What is the difference in your mind between an UPDATE and "overwrite"? MySQL does have a REPLACE query which looks like an INSERT and acts like an INSERT unless a key violation would happen in which case it works like an UPDATE.
But your question seems nonsensical. If not update, then truncate or delete. The only other thing I think you might be talking about (but if so, this should have been a lot clearer) is perhaps how to reset auto-increment counters.
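A minimal illustration of REPLACE (again a hypothetical table t with a primary key on id):

REPLACE INTO t (id, val) VALUES (1, 'new value');
-- behaves like INSERT, except that any existing row with id = 1 is deleted and the new row inserted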
localpott wrote:
Does anyone know a MySQL command that could overwrite the records of a database table? I need to overwrite the content with new records (not referring to UPDATE).
Your question is nonsensical in terms of a database.
Either expand on the reason for why you think this is necessary or read up on how databases are implemented.

Problem in Mapping

My question is that the mapping works correctly up to deployment. When I execute the mapping for the first time, it loads the data successfully no matter how many records there are. But when I execute it a second time, it does not load the data (because it tries to re-insert the same records, which produces an error).
Can anyone help me to append only those records which are new in the source table, or, put another way, to load only those records which were not previously loaded?
Kaleem,
You would have to explicitly design the incremental updates. For example, if the source system contains a column DATE_UPDATED, then you can use that date column in a filter (in conjunction with a parameter if you like) to load the records incrementally.
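In filter terms that amounts to something like this (illustrative names; the parameter holds the timestamp of the previous load):

SELECT *
FROM   src_table
WHERE  date_updated > :last_load_date;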
For the next major release, we are working on a feature to support the database's Change Data Capture feature (which was introduced in the Oracle9i database). In that case, the database takes care of identifying the updated records.
If your source database is currently Oracle9i, then you may want to check out the Change Data Capture feature and leverage it yourself (via Oracle Warehouse Builder). The out-of-the-box support will come in the next release.
Mark. 
If the number of rows is not too big, it is very simple to achieve using a MINUS operator:
Take source minus target and put the resulting rows into the target. We have implemented this many times to our satisfaction.
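Sketched out with illustrative names, that comes down to:

INSERT INTO target_table (id, attr1)
SELECT id, attr1 FROM source_table
MINUS
SELECT id, attr1 FROM target_table;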
Filtering would also be a way of doing it.
Another way to implement it could be truncate/insert, thus effectively emptying the target on every load.
Of course it will depend on your own discretion which implementation you choose, sometimes forced by the possibilities.
Wilco 
You could use a "MINUS" a specified above or you could outer join the source to the target using operational keys.
For example if you are loading clients which are uniquely identified by their SIN you could :
source.sin = target.sin (+)
and target.sin is null
This join condition between your source and target table will only return the source records that do not exist in the target table.
This solution might not be suitable for all your needs though.
This will only return what is new.
You might need to add something that updates records if the key already exists, depending on what type of data you are loading.
With respect to the previous reaction, I would like to bring appendix A of the OWB 9.2 User Guide to your attention.
Perhaps not as clear as it could be, but certainly effective (once you have figured it out yourself).
Have a special look at page A-9: there the source and target table are joined (as in the previous reaction), then split up into two sets (update old row, and insert new row).
Depending on your trigger fields, the update performs an update or a closing of the current record.
After that, all actions are merged with a union, then updated/inserted into the table (or merged in the Oracle 9i RDBMS).
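In Oracle 9i SQL terms, that final merge step looks roughly like this (table, key and column names are illustrative):

MERGE INTO target_clients t
USING (SELECT sin, client_name FROM source_clients) s
ON (t.sin = s.sin)
WHEN MATCHED THEN UPDATE SET t.client_name = s.client_name
WHEN NOT MATCHED THEN INSERT (sin, client_name) VALUES (s.sin, s.client_name);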
Greetings,
Wilco. 
Gurus,
I have read all your responses regarding the above topic but cannot get data loaded after successful execution of my mapping.
Any clues?
Moses Moya 
Just fixed the problem.
