Too many open cursors exception caused by LRS Iterator - weblogic.developer.interest.kodo (Archived)

Using Kodo 4.1.4 with Oracle 10, and Large Result Set Proxies, I encountered
the error "maximum number of open cursors exceeded".
It seems to have been caused by incompletely iterated LRSProxy iterators within
the context of a single PersistenceManager. These iterators were over
collections obtained by reachability, not directly from Queries or Extents.
The Iterator is always closed, but the max-cursors exception still occurs.
Following is a pseudocode example of the case. Note that if the code is
refactored to remove the break; statement, then the program works fine, with
no max-cursors exception.
Any suggestions?
// This code pattern is called hundreds of times
// within the context of a PersistenceManager
Collection c = persistentObject.getSomeCollection(); // LRS collection
Iterator i = c.iterator();
try
{
    while (i.hasNext())
    {
        Object o = i.next();
        if (someCondition)
        {
            break; // if this break is removed, everything is fine
        }
    }
}
finally
{
    KodoJDOHelper.close(i);
}

Related

Strange transaction behaviour on WLS conn pool

What I have done:
1. I developed an EAR and deployed it on BEA WLS.
2. I made a connection pool and a corresponding Data Source.
What I am doing:
1. I am using a select ... for update nowait statement for row locking.
The issue:
1. user-1, as an operator, logs in to the app and selects 1 record (select ... for update nowait), which is supposed to lock the selected row.
2. user-2, as a supervisor, logs in to the app and selects the same record (select ... for update nowait), via a different menu of course.
3. No error is shown. I am expecting a "resource busy" error due to the row lock.
My investigation:
1. When user-1 locks a particular row, I can see the lock session still holding the row.
2. When user-2 does anything, especially committing a transaction, the lock that belongs to user-1 gets lost.
3. When user-2 does not commit any transaction but logs out of the app, the lock that belongs to user-1 gets lost too.
My suspicion:
1. Some kind of shared transaction session or something similar.
How do I solve this issue?
Is it something to do with the conn pool settings on BEA WLS?
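For reference, the row-locking behaviour being expected looks like this in plain JDBC, outside any pool (a sketch; the URL, table name, and credentials are illustrative, not from the app):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class NowaitDemo
{
    public static void main(String[] args) throws Exception
    {
        String url = "jdbc:oracle:thin:@dbhost:1521:ORCL"; // illustrative
        Connection conn1 = DriverManager.getConnection(url, "user1", "pw1");
        Connection conn2 = DriverManager.getConnection(url, "user2", "pw2");
        conn1.setAutoCommit(false); // keep the transaction, and thus the lock, open
        conn2.setAutoCommit(false);

        Statement s1 = conn1.createStatement();
        s1.executeQuery("select id from orders where id = 1 for update nowait");

        Statement s2 = conn2.createStatement();
        try {
            s2.executeQuery("select id from orders where id = 1 for update nowait");
            System.out.println("No error: the row was NOT locked as expected");
        } catch (SQLException e) {
            // ORA-00054: resource busy and acquire with NOWAIT specified
            System.out.println("Got the expected error: " + e.getMessage());
        }

        conn1.rollback();
        conn1.close();
        conn2.close();
    }
}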
Regards
Eric
At first blush this sounds like an application issue, not a
pool issue. You should ensure that your application does
not share a JDBC connection across threads.
Here is the ideal standard for safe WebLogic JDBC. If you can
adopt this, many such problems will go away. Pooling is fast,
and is best used in a quick per-invoke fashion; if
many of these 'top-level' methods are required for a
given transaction, WebLogic pooling/DataSources and the
WebLogic transaction coordinator will cooperate to ensure
that all work is included in the transaction. Multiple
connection closings at the user level will not hurt.
For instance, for a non-XA datasource, we will transparently
ensure that all of these multiple methods will (necessarily)
get the exact same real connection for a given transaction.
/* This is how you should write any of your top-level methods
 * that will do JDBC work for any of your user invokes.
 */
public void myTopLevelJDBCMethod()
{
    Connection c = null; // All JDBC objects should be method-level
                         // objects to ensure thread-safety and to
                         // prevent connection leaking. Define the
                         // connection object before the JDBC 'try' block.
    try {
        // This is the JDBC try block for this method. Do
        // all the JDBC for this method in this block.
        // Get the connection directly from our DataSource
        // in the try block. Do not get it from any method
        // that has kept a connection and is sharing it for
        // repeated use.
        c = myDataSource.getConnection();
        ... do all the JDBC for this method in the scope of this try block ...
        ... you can pass the connection or sub-objects to sub-methods,
        ... but none of these methods must expect to keep or use the
        ... objects they receive after their method call completes ...
        (eg)
        DoSomethingFancyWith(c);
        // Use Prepared/Callable Statements. They are usually faster,
        // especially because we cache them transparently with the pool.
        PreparedStatement p = c.prepareStatement(...);
        ResultSet rs = p.executeQuery();
        ProcessResult(rs);
        // Close JDBC objects in the proper order: result set, then
        // statement, then connection.
        rs.close(); // always close result sets ASAP at the level they were created
        p.close();  // always close statements ASAP at the level they were created
        ...
        // When the JDBC is finished in the try block, close the connection:
        c.close(); // always close the connection ASAP in the same method
                   // and block that created/obtained it.
        c = null;  // set the connection to null so the finally block below
                   // knows it has been taken care of.
    }
    catch (Exception e) {
        ... do whatever, according to your needs ... you do not have to
        ... have a catch block if you don't want one ...
    }
    finally {
        // Always have this finally block. A finally block is crucial
        // to ensure the connection is closed and returned to the pool
        // (not leaked).
        // Failsafe: do every individual thing you want to do in the
        // finally block in its own try-catch-ignore so everything
        // is attempted.
        try { if (c != null) c.close(); } catch (Exception ignore) {}
    }
}
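To illustrate the transaction cooperation described above, here is a minimal sketch of two such per-invoke methods enlisted in one JTA transaction (the JNDI names, SQL, and method names are illustrative, not part of the original post):

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class PerInvokeJdbc
{
    public void doTwoStepsInOneTransaction() throws Exception
    {
        InitialContext ctx = new InitialContext();
        // Standard JNDI name for the JTA transaction in a servlet/EJB context.
        UserTransaction ut = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource ds = (DataSource) ctx.lookup("myDataSource"); // illustrative JNDI name

        ut.begin();
        try {
            // Each method obtains and closes its own connection; with a
            // non-XA DataSource, WebLogic hands both the same underlying
            // connection for the duration of the transaction.
            updateA(ds);
            updateB(ds);
            ut.commit();
        } catch (Exception e) {
            ut.rollback();
            throw e;
        }
    }

    private void updateA(DataSource ds) throws Exception
    {
        Connection c = null;
        try {
            c = ds.getConnection();
            PreparedStatement p = c.prepareStatement("update t set x = ?"); // illustrative SQL
            p.setInt(1, 1);
            p.executeUpdate();
            p.close();
            c.close(); // returns the connection to the pool; the transaction stays open
            c = null;
        } finally {
            try { if (c != null) c.close(); } catch (Exception ignore) {}
        }
    }

    private void updateB(DataSource ds) throws Exception
    {
        // Same shape as updateA(): get, use, and close a connection per invoke.
        updateA(ds);
    }
}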
I pretty much followed the instructions you gave above.
This is snippet of my codes:
protected Forward viewData(ViewDataForm vdForm) {
    Connection conn = null;
    PreparedStatement stmt = null;
    ResultSet rs = null;
    try {
        conn = getDS(httpPort, DSName).getConnection();
        stmt = conn.prepareStatement(query.toString());
        rs = stmt.executeQuery();
        ...looping through the resultset here...
        rs.close();
        stmt.close();
    }
    catch (SQLException sqle) {
        return new Forward("error_update");
    }
    catch (Exception e) {
        return new Forward("error");
    }
    finally {
        try {
            if (rs != null) {
                rs.close();
            }
            if (stmt != null) {
                stmt.close();
            }
            if (conn != null) {
                conn.close();
            }
            rs = null;
            stmt = null;
            conn = null;
        }
        catch (Exception e) {
            e.printStackTrace();
        }
    }
    return new Forward("success");
}
Hi. That's almost perfect. The only important difference is
that in your finally block, if your resultset close or your
statement close throws an exception, you'll skip the all-
important connection close. If you must do multiple things
in a finally block, do each one in its own try-catch-ignore
block so they are all attempted.
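For instance, the finally block from viewData() above could be restructured like this (a sketch of the same three closes, just independently attempted):

finally
{
    // Each close gets its own try-catch-ignore, so a failure closing the
    // ResultSet or Statement can never skip the all-important Connection close.
    try { if (rs != null) rs.close(); } catch (Exception ignore) {}
    try { if (stmt != null) stmt.close(); } catch (Exception ignore) {}
    try { if (conn != null) conn.close(); } catch (Exception ignore) {}
}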
Joe 
Ok. In the finally block, I tried to separate each close into its own try-catch block, but the issue still persists.
This one is starting to frustrate me. :)
Regards 
Are you sure that you are not seeing different
tables, depending on who you log in as? 
Yes, they see the same table.
OK. Then to debug this completely, I ask you to
use the BEA-branded driver for your pool, and
add the spyAttributes property to the list
of driver properties, so the driver generates
a spy log file of all the JDBC done. Then boot
the server, reproduce the problem as quickly
and briefly as possible, and let's look at
the spy log.
Joe

Cache Problem, Please

I have two objects of different types, one referring to the other via a one-to-one mapping.
First I get the parent object, and then I also get the child object using session.executeQuery(). As you know, TopLink will get the child object from the cache. Since my database is modified by another program, I use session.refreshObject(childObject) to refresh the child object, and then I modify the child object successfully. But when I get the parent object again, I find the child object has not been modified: the parent still points to the old child instance, even though the TopLink cache entry for the child object has changed.
I can't use session.refreshObject(parentObject) to refresh the parent object, because I have many parent objects, and my database may be modified by another program. So what can I do?
What identity map type are you using for the child object? NoIdentityMap? Without an identity map TopLink cannot maintain java object references and ensure referential integrity, hence the child object referenced from the parent will not be the same instance returned from a query. If the child object is a highly volatile object then I recommend using the WeakIdentityMap. NoIdentityMap should only be used in those rare cases where there are no relationships between objects or the application does not traverse java object references.
--Gordon
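As an illustration of Gordon's suggestion, a descriptor amendment method might look like this (a sketch; the ClassDescriptor package name varies across TopLink releases, so treat the import as an assumption):

import oracle.toplink.descriptors.ClassDescriptor; // 10.1.3-style package (assumption)

public class ChildDescriptorAmendment
{
    // Register this as the descriptor amendment method for the child class.
    public static void amendDescriptor(ClassDescriptor descriptor)
    {
        descriptor.useWeakIdentityMap();    // volatile child objects can be garbage collected
        descriptor.setIdentityMapSize(500); // illustrative size
    }
}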
This doesn't sound like a cache issue to me; it sounds like a misunderstanding of working copies (aka "clones") from units of work. If you do a refresh, you should absolutely see the changes if you are looking at the correct objects (and assuming you've set the cascade of the refresh properly -- you might want to consider using a Query for the refresh rather than refreshObject() so you can control and experiment with the cascade options).
For a Unit of Work primer, see here: http://www.oracle.com/technology/products/ias/toplink/technical/unitOfWorkWP.pdf
- Don
Thank you for your responses. The following function is my adding and modifying function; please help me check whether it is right, thanks!!!
I used SoftCacheWeakIdentityMap. And must every search function use roq.dontCheckCache() to ensure that the search result is the latest data? If so, my cache is practically worthless.
public static PersistenceObject getObjectByOid(Session session, Class clazz, String oid)
        throws DomainServiceException {
    String funcName = "getObjectByOid()";
    try {
        ReadObjectQuery roq = new ReadObjectQuery();
        roq.setReferenceClass(clazz);
        roq.setSelectionCriteria((new ExpressionBuilder()).get("oid").equal(oid));
        roq.dontCheckCache(); // use the statement instead of session.refreshObject()
        PersistenceObject pObj = (PersistenceObject) session.executeQuery(roq);
        integrator.logDebug(funcName, "session.executeQuery(): clazz: " + clazz + ", oid: " + oid);
        session.printIdentityMap(ProcessingMethod.class);
        if (pObj != null) {
            //session.removeFromIdentityMap(pObj);
            session.printIdentityMap(ProcessingMethod.class);
        }
        return pObj;
    } catch (Throwable e) {
        integrator.logError(funcName, "Input: clazz: " + clazz);
        integrator.logError(funcName, "Input: oid: " + oid);
        throw new DomainServiceException(e);
    }
}
public static String addOrUpdateObject(Session session, PersistenceObject pObj, String userId)
        throws DomainServiceException {
    String funcName = "addOrUpdateObject()";
    String oid = null;
    Produce produce1 = (Produce) session.executeQuery("SelectyByOid", Produce.class, "51");
    session.printIdentityMap(Produce.class);
    if (pObj != null) {
        // try to retrieve the corresponding object from the DB.
        if (pObj != null) {
            //session.removeFromIdentityMap(pObj); // remove pObj from the identity map to make sure the object we get below is from the DB.
            integrator.logDebug(funcName, "removeFromIdentityMap is called");
        }
        session.printIdentityMap(ProcessingMethod.class);
        // note: "this" cannot be used in a static method; call getObjectByOid() directly.
        PersistenceObject pObjFromDB = getObjectByOid(session, pObj.getClass(), pObj.getOid());
        integrator.logDebug(funcName, "QueryService.getObjectByOid() is called");
        // pObjFromDB = (PersistenceObject) session.refreshObject(pObjFromDB);
        session.printIdentityMap(ProcessingMethod.class);
        try {
            UnitOfWork uow = session.acquireUnitOfWork(); // acquire the unit of work.
            integrator.logDebug(funcName, "session.acquireUnitOfWork() is called");
            uow.registerObject(pObjFromDB);
            integrator.logDebug(funcName, "uow.registerObject() is called");
            session.printIdentityMap(ProcessingMethod.class);
            // merge the changes from pObj.
            PersistenceObject pObjClone = (PersistenceObject) uow.mergeCloneWithReferences(pObj);
            integrator.logDebug(funcName, "uow.mergeCloneWithReferences() is called");
            updateObjHistory(uow, pObjClone, userId); // update the changelog.
            integrator.logDebug(funcName, "updateObjHistory() is called");
            uow.commit();
            integrator.logDebug(funcName, "uow.commit() is called");
            Produce produce2 = (Produce) session.executeQuery("SelectyByOid", Produce.class, "51");
            oid = pObj.getOid();
        } catch (DatabaseException dbException) {
            integrator.logError(funcName, dbException.getMessage());
            dbException.printStackTrace();
            if (dbException.getCause() instanceof SQLException) {
                SQLException sqlException = (SQLException) dbException.getCause();
                if (sqlException.getErrorCode() == 1) { // unique constraint violated.
                    throw new DuplicateRecordException(dbException);
                } else {
                    throw new DomainServiceException(dbException);
                }
            }
        }
    }
    return oid;
}
The ServerSession has an identityMapManager attribute, and the identityMapManager has an identityMaps attribute. identityMaps is a HashMap whose key is a Class and whose value is a SoftCacheWeakIdentityMap; the SoftCacheWeakIdentityMap has two attributes, cacheKeys and referenceCache.
My question is: what is the difference between cacheKeys and referenceCache?
dontCheckCache() doesn't mean the query will refresh; it just means that the query will go to the database, but it won't necessarily refresh the cache. There are cases where you'd want to hit the database and not refresh; they're separate issues.
You should use query.refreshIdentityMapResult() and then query.cascadeAllParts() (or whatever cascade is appropriate for what you're trying to do).
- Don
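Applied to the getObjectByOid() helper above, that would look roughly like this (a sketch using the two methods Don names; clazz, oid, and session are the same variables used in that helper):

// clazz, oid, and session come from the surrounding code, as in getObjectByOid().
ReadObjectQuery roq = new ReadObjectQuery();
roq.setReferenceClass(clazz);
roq.setSelectionCriteria((new ExpressionBuilder()).get("oid").equal(oid));
roq.refreshIdentityMapResult(); // overwrite the cached copy with database state
roq.cascadeAllParts();          // cascade the refresh through the object's references
PersistenceObject fresh = (PersistenceObject) session.executeQuery(roq);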
My question is: what is the difference between cacheKeys and referenceCache? These are internal structures, but briefly:
cacheKeys - a Hashtable of all of the objects (wrapped by CacheKeys); these use WeakReferences to allow garbage collection.
referenceCache - a LinkedList of SoftReferences to the <identity-map-size> most recently accessed objects, kept to avoid too much garbage collection and improve caching.
Thank you!!!
We will ask our client not to modify the database using another program; otherwise, they must restart our program.
Thank you again!!!

Issue with using DirectToXMLTypeMapping

I am mapping an entity field directly to an Oracle XMLType column. In my entity I use a String to store the data and a Converter to change the data retrieved from the database from an XML document to a String. When I did this I saw that I always got multiple UPDATE statements for this field, even when it didn't change. In fact, even just loading data from the database would cause an UPDATE to the XML field. When I debugged the code I found 2 issues that need resolution:
1. In the M6 release the DirectToXMLTypeMapping code does not line up in the debugger. This indicates to me that what was compiled is not actually the code in the source bundle. Certainly not a big deal for this class.
2. The real issue is that the method compareObjects() is broken for Strings, reporting a change even when the data is unchanged or null. The problem is that the method thinks it's dealing with Strings, when firstObject and secondObject are actually entity objects. It needs to compare the actual data extracted from the objects, not the objects themselves.
public boolean compareObjects(Object firstObject, Object secondObject, AbstractSession session) {
    if (getAttributeClassification() == ClassConstants.STRING) {
        return firstObject.equals(secondObject);
    } else {
        Object one = getFieldValue(getAttributeValueFromObject(firstObject), session);
        Object two = getFieldValue(getAttributeValueFromObject(secondObject), session);
        if ((one == null) && (two == null)) {
            return true;
        }
        if ((one == null) || (two == null)) {
            return false;
        }
        if (one instanceof Node && two instanceof Node) {
            return xmlComparer.isNodeEqual((Node) one, (Node) two);
        }
        return one.equals(two);
    }
}
The statement 'return firstObject.equals(secondObject);' is comparing the whole objects, not the actual field data. Once I extended the class, overrode the method, and used the new class for my mapping, I was back to only seeing a single INSERT or UPDATE.
Hope this helps... If this is not the right place for bug reports, please let me know.
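For reference, a minimal sketch of the override described above (the subclass name is hypothetical, the helper calls are taken from the quoted snippet, and imports are omitted because the mapping's package differs across TopLink releases):

// Imports omitted: the mapping's package differs across TopLink releases.
public class FixedDirectToXMLTypeMapping extends DirectToXMLTypeMapping
{
    public boolean compareObjects(Object firstObject, Object secondObject,
                                  AbstractSession session)
    {
        // Extract the mapped attribute values from the two entity instances
        // before comparing, instead of comparing the entities themselves.
        Object one = getAttributeValueFromObject(firstObject);
        Object two = getAttributeValueFromObject(secondObject);
        if ((one == null) && (two == null)) {
            return true;
        }
        if ((one == null) || (two == null)) {
            return false;
        }
        return one.equals(two);
    }
}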

TransactionListener issue

I have a 1:N relationship between User and Address. When I attempt to
delete an Address associated with a User via a PersistenceManagerImpl that
has a TransactionListener registered against it, the program hangs when
iterating over the Collection of modified objects in
TransactionListener.beforeCommit(). I finally get a stack trace, however
it just states, "This operation cannont be performed while a Transaction
is active." The line in my code which is referenced in the stack trace is
on close() of the PersistenceManager in the piece of code performing the
delete. It appears that some exception is caused during dirty object
iteration in beforeCommit(), yet my try/catch block in this method doesn't
seem to get the chance to report the exception.
When I don't register a TransactionListener against the
PersistenceManagerImpl, the object is successfully deleted.
Can you post the code to your TransactionListener and the code you are
executing against the PM with the listener on it?
I should have mentioned that I get the Address by OID and call
deletePersistent() on it. I do not get the User and remove the Address
object from the User's addresses Collection.
TransactionListener.beforeCommit():
public void beforeCommit(TransactionEvent evt) {
    try {
        System.out.println("** TransactionListener.beforeCommit():");
        System.out.println("** evt = " + evt);
        Collection objs = evt.getTransactionalObjects();
        Iterator it = objs.iterator();
        while (it.hasNext()) {
            System.out.println("** " + it.next());
        }
        System.out.println("Done iterating dirty objects.");
    } catch (Exception e) {
        // this catch never seems to get the chance to report the exception
        e.printStackTrace();
    }
}
Code which deletes Address:
manager = getPM();
try {
    Object id = manager.newObjectIdInstance(Address.class,
        request.getParameter("addrid"));
    Transaction tx = manager.currentTransaction();
    tx.begin();
    Address addr = (Address) manager.getObjectById(id, false);
    String addrInfo = addr.getStreet() + " " + addr.getCity() + ", " +
        addr.getState().getName();
    // Remove this Address object from its owning User's collection
    // of Addresses. I get the error regardless of whether or not this
    // method is invoked.
    addr.getUser().getAddresses().remove(addr);
    manager.deletePersistent(addr);
    tx.commit();
    pw.println("Deleted address: " + addrInfo + "<hr>");
} catch (Exception ex) {
    ex.printStackTrace();
} finally {
    manager.close();
}
This code executed fine against our internal codebase. Of course, we've
made modifications since the second beta that might be making the
difference, but it seems doubtful. Are you sure your JSP code isn't
throwing an exception? I'm not sure where the result of
"ex.printStackTrace" would go in a JSP, but you might consider some more
obvious way to figure out if that catch block is getting invoked.
All code is running in a servlet, not in JSP. I guess I'll keep playing
around with it to determine if I've done something wrong.
Could you run your test app, comprised of my code, against 3.0b2?
Done. No problems.

maximum open cursors exceeded

Greetings [Kodo 3.3.4, Oracle9i]
I am getting the error "java.sql.SQLException: ORA-01000: maximum open
cursors exceeded" in the following code pattern:
for (Iterator x = reallyBigCollection.iterator(); x.hasNext();)
{
    somePCObject = (SomePCObject) x.next();
    Collection stuff = somePCObject.getSomeCollection();
    // stuff is an LRS collection by reachability
    for (Iterator i = stuff.iterator(); i.hasNext();)
    {
        moreStuff = i.next();
        if (someCondition)
        {
            break;
        }
    }
}
The problem appears to be the execution of the "break" statement, which
leaves each inner Iterator not fully iterated, thus presumably hanging onto a
cursor. Then we run out of cursors long before the garbage collector comes
along to clean up the old Iterators...
So to the question: is there some way for me to close this Iterator and free
up its resources before or after executing the break statement? (i.e. in
this case where the Iterator is not obtained from an Extent...)
Is this a Kodo bug?
Cheers and thanks
.droo.
Relevant part of stack trace:
java.sql.SQLException: ORA-01000: maximum open cursors exceeded
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:125)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:305)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:272)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:623)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:181)
at oracle.jdbc.driver.T4CPreparedStatement.execute_for_describe(T4CPreparedStatement.java:420)
at oracle.jdbc.driver.OracleStatement.execute_maybe_describe(OracleStatement.java:896)
at oracle.jdbc.driver.T4CPreparedStatement.execute_maybe_describe(T4CPreparedStatement.java:452)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:978)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:2888)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:2929)
at com.solarmetric.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:354)
at com.solarmetric.jdbc.PoolConnection$PoolPreparedStatement.executeQuery(PoolConnection.java:341)
at com.solarmetric.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:352)
at com.solarmetric.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:352)
at com.solarmetric.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:352)
at com.solarmetric.jdbc.LoggingConnectionDecorator$LoggingConnection$LoggingPreparedStatement.executeQuery(LoggingConnectionDecorator.java:1106)
at com.solarmetric.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:352)
at kodo.jdbc.runtime.JDBCStoreManager$CancelPreparedStatement.executeQuery(JDBCStoreManager.java:1730)
at com.solarmetric.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:339)
at kodo.jdbc.sql.Select.execute(Select.java:1581)
at kodo.jdbc.sql.Select.execute(Select.java:1533)
at kodo.jdbc.meta.AbstractCollectionFieldMapping$LRSProxyCollection.itr(AbstractCollectionFieldMapping.java:741)
at kodo.util.AbstractLRSProxyCollection.iterator(AbstractLRSProxyCollection.java:294)
at java.util.Collections$1.<init>(Collections.java:993)
at java.util.Collections$UnmodifiableCollection.iterator(Collections.java:992)
at MYCODE
I think this is quite a common Oracle error with O/R mappers.
Here is the solution the Kodo documentation gives (options 1 or 3).
http://solarmetric.com/Software/Documentation/3.3.4/docs/manual.html#sql_errors
Oracle limits the number of statements that can be open at any given time, and the application has made requests that keep open more statements than Oracle can handle. This can be resolved in one of the following ways:
1. Increase the number of cursors allowed in the database. This is typically done by increasing the open_cursors parameter in the initSIDNAME.ora file.
2. Ensure that Kodo query results and extent iterators are being closed, since open results will maintain an open ResultSet on the server side until they are garbage collected.
3. Decrease the value of the MaxCachedStatements parameter in the ConnectionFactoryProperties configuration property.
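For option 3, the setting can be passed with the Kodo bootstrap properties, for example (a sketch; the ConnectionFactoryProperties key is quoted from the manual above, while the factory class name and the value are assumptions about a typical Kodo 3.x setup):

import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManagerFactory;

public class KodoBootstrap
{
    public static PersistenceManagerFactory createFactory()
    {
        Properties props = new Properties();
        props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
            "kodo.jdbc.runtime.JDBCPersistenceManagerFactory"); // assumed Kodo 3.x factory class
        props.setProperty("kodo.ConnectionFactoryProperties",
            "MaxCachedStatements=0"); // illustrative value: disable statement caching
        // ... plus the usual connection properties (driver, URL, user, password) ...
        return JDOHelper.getPersistenceManagerFactory(props);
    }
}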
Similar problem with ColdFusion.
http://jehiah.com/archive/maximum-open-cursors-exceeded
-- Ville 
Thanks for the response. However, my question still remains open...
I understand the solution provided by the Kodo docs, but their option 2 (ensure
that Kodo query results and extent iterators are being closed) is fine if I
have an Extent iterator, but I don't... I have an Iterator obtained by
reachability. So how can I close this Iterator, other than by fully iterating
it?
Incidentally, the other options are not really viable: I would potentially
need millions of cursors, and in my case MaxCachedStatements won't help...
Cheers
.droo.
I think option 2 can only be implemented by creating a query instead of iterating by reachability. This means more complexity in the application, but performance should be equal: Kodo should create only one prepared statement for the inner query, which is then executed multiple times with different parameters.
Here is an example in pseudocode of how this can be done.
for (Iterator x = reallyBigCollection.iterator(); x.hasNext();) {
    somePCObject = (SomePCObject) x.next();
    // Instead of iterating the LRS collection, execute a query method.
    // The query implementation can be inlined here if an external method
    // is not needed.
    Object stuff = executeInnerQuery(somePCObject.getId());
}

/**
 * This method can live, for example, in the service layer.
 * @return some application-specific object
 */
public Object executeInnerQuery(Long id) {
    final String filter = "this.identity == id";
    Query q = this.pm.newQuery(MoreStuff.class, filter);
    q.declareParameters("Long id");
    q.compile();
    Collection stuff = (Collection) q.execute(id);
    Object moreStuff = null;
    for (Iterator i = stuff.iterator(); i.hasNext();) {
        moreStuff = i.next();
        if (someCondition) {
            break;
        }
    }
    // close the query and the Oracle cursor
    q.closeAll();
    return moreStuff;
}
