Version 11: EventSource deprecated? - Complex Event Processing

at the moment I'm going through the examples provided with the new version of the Oracle CEP suite. When I create a class that implements EventSource, Eclipse shows me that this interface (as well as EventSender) is deprecated. How come? Did I misconfigure Eclipse, or what is going on?
Best regards,

11g is the first release to integrate the CQL engine into OCEP, and the CQL processing model is slightly different from that of EPL. In particular, CQL supports the notion of two kinds of event feeds: streams and relations. Streams are insert-only and generally used for filtering fast-moving data. Relations support insert/update/delete and complex querying. To distinguish these in the EPN and to disambiguate from the EPL constructs, we created two new APIs: StreamSource/StreamSink and RelationSource/RelationSink. Thus, if you are using EPL, use the Event* APIs; if you are using CQL, use the Stream*/Relation* APIs.

Thanks for the hint.
But now another question arises. Am I right that it is a good idea to work with CQL, as this is the "new stuff"? At first glance I couldn't find any valuable hints on when to use EPL and when to use CQL. As I plan to work with your suite for a while, I guess it's reasonable to start with the latter, isn't it?

Yes, CQL is based on the emerging ANSI standard so would be the right thing to use if you are starting out with 11g. EPL was the only option in WLEVS 2.0 & OCEP 10.3 and so is still supported in 11g and beyond (but deprecated with the advent of CQL). 

Where can I find the Javadoc for these new classes?

OK I found the Java doc at :
I notice that StreamSender doesn't have a sendEvent(..) method but instead a sendInsertEvent(..). Are these functionally equivalent? Also, the examples in the documentation show an ArrayList collection being passed to sendEvent(...); is that the same for sendInsertEvent(...)?

Martin - javadoc is also available in the IDE for the CEP server's public APIs. You access it just like you would for anything else in the IDE, for example by hovering your mouse over the class or method you're interested in. The javadoc should pop up after a second or so of hovering. 

In CQL streams are insert-only and relations are insert/update/delete, the naming is to disambiguate between these (RelationSender inherits from StreamSender for instance).
StreamSender is similar to EventSender, but not identical - streams are used for filtering rather than ad-hoc querying and there are rules about what you can use where.
sendInsertEvent() only takes a single event.
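To make the stream/relation split concrete, here is a minimal sketch. These are hypothetical interfaces mirroring the semantics described above, not the actual Oracle CEP types (the real ones live in the server's public API packages and have additional methods and checked exceptions):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for the Stream*/Relation* sender APIs described above.
interface StreamSender {
    // Streams are insert-only, so a single insert operation suffices.
    void sendInsertEvent(Object event);
}

interface RelationSender extends StreamSender {
    // Relations additionally support update and delete.
    void sendUpdateEvent(Object event);
    void sendDeleteEvent(Object event);
}

// A toy relation sender that just records what was sent, for illustration.
class RecordingRelationSender implements RelationSender {
    final List<String> log = new ArrayList<>();
    public void sendInsertEvent(Object e) { log.add("insert:" + e); }
    public void sendUpdateEvent(Object e) { log.add("update:" + e); }
    public void sendDeleteEvent(Object e) { log.add("delete:" + e); }
}
```

Note how RelationSender inherits from StreamSender, matching the point above: every relation supports inserts, but only relations support updates and deletes.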


Visualization of DPL Entity Beans?  Has anyone already done this?

Hello All,
So our BDB DPL prototype was a success and we're moving forward in putting out a real app. Now that it's no longer a 1 person effort, I need to document my work for collaborators and stakeholders...and like most programmers, I HATE redundant documentation (really I'm pretty crappy at documenting, in general). I'd rather spend 20 hours writing code to do my documenting than the actual 2 hours needed to write the documentation. I've already written the code to produce HTML diagrams of my models by using reflection and reading the annotations, but it looks pretty amateurish.
I'd like to get something that can be posted to my team's wiki and uses UML instead of my proprietary HTML tree/table structure. I'm thinking of using Linguine Maps to generate something for DPL entity beans that produces a map like their Hibernate map. I'd have the graphing code triggered by the Maven site goal. It's my first time writing graphing code.
1. Has anyone already done this?
2. Does anyone know of any reason why this endeavour or using Linguine Maps would be a bad idea?
If this produces useful code, I'll be sure to share as I work on an open source project.
Hi Steven,
Cool idea! I don't know of anyone who has tried that. It would definitely be generally useful.
I've been thinking about it and talking with Alan Bram, the engineer here who designed and developed our DPL Assistant plug-in for Eclipse:
So far, the plug-in is only for BDB JE, not BDB native, and only for Eclipse, not NetBeans or JDeveloper as yet. Do you use Eclipse?
Alan and I had this thought, that perhaps if what you implement is reusable as a library / component, that we could use it in our Eclipse plug-in to display an entity-relationship diagram. The plug-in currently does primarily source code validation, and adding a diagram capability would be pretty nice.
Do the packages you're using all have Apache or BSD-style licenses? That would be a prerequisite for us to re-package it with our plug-in.
We were also wondering if the inputs to your DPL Schema Grapher (for lack of a better term) could be abstracted in a way that it could be used both inside and outside of Eclipse. And it occurred to us that we already have such an abstraction: the EntityModel class (and related classes) in the c.s.persist.model package. These classes describe the metadata for an application's persistent classes, and are intended to be used for tools that need this information. It seems to be a perfect fit.
One advantage to using EntityModel as the input source for the DPL Schema Grapher is that you don't have to do any extra work to obtain the metadata, when using it as you described in your message (outside of Eclipse). You simply instantiate the AnnotationModel and it does all the parsing of the annotations in the persistent classes. Or, you can call EntityStore.getModel if you have a live store that you're graphing.
Inside Eclipse, we would want to populate the EntityModel from information that Eclipse provides, which comes from the source code, not the compiled classes. That's how Eclipse plug-ins normally work -- they operate on a representation of the source code.
To make this a little more concrete, what I'm imagining is a DPLSchemaGrapher Java class, available as a library, that performs schema graphing. One constructor parameter would be of type EntityModel. Methods for graphing would transform the EntityModel information (using Linguine Maps or whatever you decide on) into something that can be graphed. The output would be an image of some sort.
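A rough sketch of that shape might look like the following. The EntityModelStub here is a deliberately simplified stand-in for the real com.sleepycat.persist.model.EntityModel (which exposes much richer metadata such as primary and secondary keys), and the grapher emits Graphviz DOT text rather than an image, just to keep the example self-contained:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Stand-in for EntityModel: entity names mapped to the names of the
// entities they relate to. Purely illustrative.
class EntityModelStub {
    final Map<String, String> relations = new LinkedHashMap<>();
    void addRelation(String from, String to) { relations.put(from, to); }
}

// Sketch of the proposed DPLSchemaGrapher: constructed from a model,
// it emits something graphable (DOT text here instead of an image).
class DPLSchemaGrapher {
    private final EntityModelStub model;
    DPLSchemaGrapher(EntityModelStub model) { this.model = model; }

    String toDot() {
        StringBuilder sb = new StringBuilder("digraph dpl {\n");
        for (Map.Entry<String, String> e : model.relations.entrySet()) {
            sb.append("  \"").append(e.getKey()).append("\" -> \"")
              .append(e.getValue()).append("\";\n");
        }
        return sb.append("}\n").toString();
    }
}
```

The key design point is the one made above: because the grapher depends only on the model abstraction, the same class works whether the model was populated from annotations, from a live store, or from Eclipse's source-code representation.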
Anyway, these are all just thoughts to consider. I don't know what other requirements you have, and whether this fits. Does this kind of abstraction sound like it is worth pursuing to you?
Thanks a lot for posting your ideas here, and for being willing to share your code!
Hello Mark,
I am an eclipse user, so such a program would be useful. I even imagine that the Eclipse UML project ([]) will render much prettier diagrams. Here are my requirements for the piece I need:
- Use latest JDK and Maven (we want code coverage, static code analysis, and all the other goodies associated with Maven).
- Model the DPL schema.
- Publish results in a format that can be manually pasted into the wiki.
  - Maven plugin
- Link diagram regions to the Javadocs of the classes they model.
- Be able to reuse the code to model other classes.
- Reach:
  - Have the diagrams actually look nice.
  - Create custom annotations to model virtual relationships and constraints (implicit relationships that would negatively impact performance if declared explicitly).
- Ambitious:
  - Automatically publish results to Confluence.
  - Integrate with Javadoc.
I'll write more about the licensing as soon as I get the official answer. My project has a strong interest in diagramming functionality, so I anticipate the project actually going forward as a secondary/tertiary priority.
I'm very happy to make the code reusable, and I'm more than happy to do things the Eclipse way as long as it doesn't require a herculean effort compared to using reflection.
Is the DPL Assistant open source? I'd like to see how Eclipse analyses code.
Thanks for the tip on EntityModel. I'll be sure to check it out.
JavaGeek_Boston wrote:
> Is the DPL Assistant open source? I'd like to see how Eclipse analyses code.

Hi Steven,
When you go to install the DPL Assistant from our update site, you'll see that we have it packaged in a couple of different ways. The "DPL Assistant SDK" includes source code and tests.
I think it should be pretty straightforward. But if not, please let me know if I can help in any way. We are looking forward to the possibility of working with you on this.
Alan Bram
Hi Steven,
Thanks for describing more about your requirements.
The DPL Assistant has the same open-source license as BDB JE, in fact it's really part of the same product.
I forgot to mention one other thing about EntityModel. Using it as the input for graphing will enable your graphing tools to work with DPL apps that do not use annotations. I'm not sure whether you are already aware of this, but annotations are optional. A user can implement some other way of describing the metadata (XML file, naming conventions, etc) and create their own EntityModel.

Toplink with Integration/WebServices

I have to start exposing our application services via Web Services, and am beginning to look at the issues we will face.
Our first issue is we have extensively used indirection, which doesn't serialize.
Has Oracle, or anyone else published any articles on integration best practices when using Toplink? Or do you know of a source of information which will help deal with these issues?
When you are sending data to a client, you need to decide how deep you wish to traverse the object. There are many patterns for doing this, such as using Data Transfer Objects, which define exactly what is sent to the client.
If you intend to send your persistent objects directly to the client, one method is to read the objects in a unit of work and only instantiate the relationships that you desire to be sent to the client for the interaction, and then serialize the object. One easy way to force instantiation of any object is to use the session.copyObject() API.
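The idea of instantiating only the relationships you want before serializing can be sketched generically. The Lazy class below is a hypothetical stand-in for TopLink's indirection (the real mechanism uses ValueHolder/IndirectContainer types), and the fake loader replaces an actual database round trip:

```java
import java.util.function.Supplier;

// Hypothetical lazy holder standing in for O/R indirection: the target
// is only fetched the first time get() is called.
class Lazy<T> {
    private T value;
    private Supplier<T> loader;
    Lazy(Supplier<T> loader) { this.loader = loader; }
    boolean isInstantiated() { return loader == null; }
    T get() {
        if (loader != null) { value = loader.get(); loader = null; }
        return value;
    }
}

class Order {
    // Stands in for a relationship resolved by a database query.
    final Lazy<String> customer = new Lazy<>(() -> "Acme Corp");

    // Touch only the relationships the client should receive,
    // then hand the object to the serializer.
    void prepareForClient() { customer.get(); }
}
```

Anything you do not touch before serializing stays uninstantiated, which is exactly the control over traversal depth described above.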
If you are using web-services and XML the response data should be well defined for each interaction as in the XML schema for the response.
How are you building the XML response for the web service? You may wish to consider using 10.1.3 TopLink XML mapping support for this.
Thanks for input James,
Do you know of any case studies anyone has written on implementing web services supporting both XML and Java RPC, while utilizing TopLink as the O/R layer? I appreciate there are lots of patterns I could use, but I am hoping to skip a couple of steps in terms of which of all the patterns is best suited for us, and what level of flexibility/complexity I need for the web services/remoting layer.
We are going to support xml, I was leaning towards JAXB. What advantages would I see from using the Toplink xml capabilities as opposed to the alternatives?
We haven't upgraded to 10.1.3 yet. Is it production ready yet? Last time I tried the upgrade I was getting some issues and didn't have time to deal with them. I do want the new per-class sequencing abilities. I believe that is in the latest too, so I can probably justify the upgrade depending on the responses to my other concerns.
Little more info about our application.
We use the Spring framework, and it uses Axis for its web services.
I expect to support java-rpc and xml for the web services.
> Do you know of any case studies anyone has written on implementing web services supporting both XML and Java RPC, while utilizing TopLink as O/R?

I'm not aware of any case studies, but there is a "how to" on using TopLink as a custom serializer for web services on OTN.
To return to your earlier question about indirection: if you define how to marshal a set of objects into XML, you shouldn't have to worry about the presence of O/R indirection if you go through getters. If you configure your TopLink O/X project to use getter/setter methods for attribute access, then any objects you want to marshal will be faulted into memory. Just in case it isn't clear, with TopLink it's possible to have both O/X and O/R mappings for the same objects, so you can read and write them with both technologies to do exactly what you are talking about.
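A toy illustration of why getter-based attribute access resolves indirection during marshalling (this is not TopLink's O/X engine, just the principle): the marshaller reads each attribute through its getter, and the getter faults the lazy value in as a side effect.

```java
import java.lang.reflect.Method;

// A lazily loaded attribute: resolved the first time the getter runs.
class Customer {
    private String name; // pretend this comes from the database
    public String getName() {
        if (name == null) { name = "Acme Corp"; } // stands in for a DB fetch
        return name;
    }
}

// Toy marshaller: reads attributes through getters, so lazy values are
// loaded as a side effect of producing the XML.
class GetterMarshaller {
    static String toXml(Object o) {
        StringBuilder sb = new StringBuilder("<" + o.getClass().getSimpleName() + ">");
        try {
            for (Method m : o.getClass().getMethods()) {
                if (m.getName().startsWith("get") && m.getParameterCount() == 0
                        && !m.getName().equals("getClass")) {
                    String tag = m.getName().substring(3).toLowerCase();
                    sb.append("<").append(tag).append(">")
                      .append(m.invoke(o))
                      .append("</").append(tag).append(">");
                }
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
        return sb.append("</" + o.getClass().getSimpleName() + ">").toString();
    }
}
```

If the marshaller read the private field directly instead, it would see null and never trigger the load, which is the failure mode the advice above avoids.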
> We are going to support XML, I was leaning towards JAXB. What advantages would I see from using the TopLink XML capabilities as opposed to the alternatives?

The production release of TopLink 10.1.3 will be 100% JAXB compliant. This means your client code will use the portable JAXB API and not a TopLink-specific one.
What is unique is how TopLink implements JAXB. Instead of being code generation based, TopLink uses mapping meta-data as it does for O/R mapping. If you generate classes from a schema with TopLink you get normal POJOs that you might have written yourself, unlike most JAXB implementations that embed the marshall/unmarshall code right in the generated classes.
What this meta-data approach enables beyond JAXB 1.0 is the ability to map existing classes to XML. It also means you can edit the mappings generated by the schema compiler to do things like not marshall some attributes or to apply transformations. Your client API is still JAXB but the marshalling/unmarshalling can be much more sophisticated, as necessary.
- Shaun 
Hi Shaun, I tried that example but it doesn't work for me.
I wanted to know if you have some example source code?
Since I need to build something very similar, I would appreciate your help.
Thank you.
- Roberto

Is the SDO adapter supported in TP4? 

I want to use the SDO adapter in TP4. Is this now supported? In TP3 it wasn't working. 
In TP4, this can only be used for a very specific and narrow usage: entity variables and ADF-BC services. We use this internally quite a bit but we should probably have removed it from this technology preview to avoid confusion (this is not yet documented nor do we have any simple tutorial we can expose at this time).
May I ask what your use case is? This would help us put something together for the next preview. 
I want to use it with an ADF BC SDO web service. For now only the get operation, and maybe later the update operation. I find this much easier than using a database adapter with a select or update. 
When you write "In TP4, this can only be used for a very specific and narrow usage...", do you mean that the final release will include a full SDO approach in order to propagate and/or access heterogeneous data between and from SCA components?
Will this layer be based on ADF-BC, which is very restrictive from my point of view, or will you open up your approach (and if so, how)?
Hi Dominique (and all SDO users out there),
Well, we are still exploring here. We want to hear from developers on how they plan on using SDO. Expansion of the binding beyond ADF-BC is an option. So:
What do you mean by "full SDO approach"? Do you think the current approach is limited in functionalities or in interop options?
Do you have any specific DAS in mind that you think we should interop with?
Are you already using SDO in your production environments (or planning a deployment in the near-future)? What was the motivation factor for moving to SDO?
Thanks in advance for any feedback. 
Hi Demed
One of the main goals of SDO we have to keep in mind is unifying access to various data sources: not only data sources accessed through JDBC, but also through EJB3, JPA, persistence frameworks, XML data stores, and even JCA or JMS. That gives an idea of the specific DAS I have in mind (this is not theory, this is the real world).
Of course it has also to provide frameworks in order to manipulate data graphs and bind user interface components.
That's what I meant by "full SDO approach".
Considering this, I think ADF-BC is too integrated with Oracle (from both the client and data points of view) and therefore may be quite limited in interoperability (unless you plan to reconsider and/or expand your persistence layer and introduce data graphs with change summaries... which is quite a big deal).
Hope this helps

Transparent vs valueholder performance impact

What are the criteria for using one over the other? I was trying to find some samples of transparent collections; when I looked at the three-tier example shipped with 10g, all the indirection is done through ValueHolder. Any special reason?
Anywhere I can find a sample implementation of transparent collections?
Updates, please 
Probably the example is old and has not been updated to use transparent collections. In general I would always recommend using transparent collections for indirection.
Unfortunately, it seems all the examples are out of date and not using transparent indirection. You can find some examples in the TopLink manual. The implementation is simple, though: just define your classes as normal, as if they were not using indirection. You will need to use the collection interfaces List, Set, and Map, not the implementations.
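A minimal illustration of the idea: the domain class exposes only java.util.List, and the lazy list below is a hypothetical stand-in for TopLink's internal indirect collection (which the runtime would substitute for you), loading its contents on first access.

```java
import java.util.AbstractList;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Stand-in for a transparent indirect collection: delegates to a real
// list that is only built on first access.
class LazyList<E> extends AbstractList<E> {
    private List<E> delegate;
    private Supplier<List<E>> loader;
    LazyList(Supplier<List<E>> loader) { this.loader = loader; }
    private List<E> resolve() {
        if (loader != null) { delegate = loader.get(); loader = null; }
        return delegate;
    }
    public E get(int i) { return resolve().get(i); }
    public int size() { return resolve().size(); }
}

// The domain class sees only the List interface: no ValueHolder in sight.
class Department {
    List<String> employees =
        new LazyList<>(() -> new ArrayList<>(List.of("Ann", "Bob"))); // fake query
}
```

This is the "transparency" being discussed: the lazy loading still happens, but the domain model is written purely against standard collection interfaces.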
-- James 
Just curious: I am trying to understand where we get the performance benefit as against using a ValueHolder. A ValueHolder by itself has a small memory footprint when initialized, which gets replaced at runtime by the appropriate collection instance. By defining them as List or Set, what kind of benefits do we see? Are they initialized at runtime (lazy initialization, so I do not have to initialize the objects during object construction)?
Thanks much 
Performance comparison is meaningless if both implementations generate the same SQL at the same time, which is my understanding. The important difference between the two is that one is transparent and doesn't pollute your domain objects with TopLink classes.
It's rare that the performance of the Java code interacting with the database becomes a priority. The problems with the Java code are more likely that it has bugs, or that it interacts with the database more than it should, adding extra latency.
I would like to see TopLink optimized first to avoid sending the same SQL to the database during the same transaction before we optimize the implementation of a list.

hook java method into xquery engine

Hello everyone,
I'm using XmlBeans to execute dynamic XQuery computations on some XML data - via XmlObject.execQuery() (I'm currently using WL8.1sp3).
Now, I need to query an external DB table, necessarily from within the XQuery source; so I need to configure the engine to call my method upon invocation of an XQuery function. Is this possible? How?
Thank you for your attention,
D. Barra
There's a great deal about this subject that I don't know, but this is exactly what AquaLogic Data Services Platform (formerly Liquid Data) provides. You'll end up with XQuery function libraries that directly access a data source. 
Hmm... I just can't use AquaLogic, since I'm bound to a production environment that won't be changed.
I was figuring out how to interact directly with the XQRL engine, and there seems to be a possibility, but it's totally undocumented, I see...
Anyone already faced this issue?
D. Barra 
What do you mean by "won't be changed"? Obviously, if you can't change it at all, you can't accomplish anything.
By saying this, you may be thinking that "AquaLogic" means WLS 9. That's not the case for ALDSP at this point. It's implemented on WLS 8.1. 
I know what ALDSP is, and I've used it already in some other contexts. I meant, it won't be bought by this customer, so it's not available to me :-(
What I was trying to accomplish is to emulate the 'User Methods' as you see and use them from XQuery in Workshop's transformation controls: you write a method in Java, mark it with the "#dtf:xquery-function" annotation, et voilà, it's magically available in other XQuery transformations.
The problem is that this area is undocumented (maybe intentionally), so I'm stuck trying to work around this problem in some other way.
D. Barra 
There is no good way to do this. You would need information on the xquery engine internals and its private APIs.