When should I abort an instance, and when should I not? - aqualogic.bpm.modeling (Archived)

Hi there.
I'm designing a process that controls an insurance policy sale and that has a deadline for its processing. If this deadline is reached, the process must go through an exception flow which will cancel the sale and decline the insurance policy. My question is: whenever this happens, should I end the process instance within ALBPM with an aborted status, or by reaching the End activity after processing the exception?
In other words, thinking about it generically, one could ask: is it okay, from a modeling point of view, to abort an instance when a business exception occurs? 

Well, that depends on whether the business needs to view reports on the historical data stored in the BAM tables.
Once you abort the instance, the information for that instance will not be available in the BAM tables for reporting purposes.
If you instead let it reach the End activity, you can have a business variable that holds the instance status; in your case it would hold "Success" or "Failed".
As you probably know, business variables get stored in the BAM tables.
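For example, in the exception flow you could set that status just before the instance reaches the End activity. A minimal PBL sketch, assuming a project variable named saleStatus that is marked as a Business Indicator (the variable name and value are just for illustration):

// exception flow: record the outcome so it ends up in the BAM/Data Mart tables
saleStatus = "Failed - processing deadline reached"

The happy path would set it to "Success" in the same way before reaching its End activity.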
Regards
Right Chord 

Dear Right,
I am working on a project where I need to do reporting using Hyperion on ALBPM work list/process instance data. The reports need to be generated from current instance data as well as from completed or rejected instance data. Please confirm whether my approach is correct:
1) Use the BAM or Data Mart database for reporting on current instances.
2) Use the archival data for reporting on historical data for instances that are completed or rejected.
However, I have the following questions for which I didn't find a clear answer. I would appreciate some help here:
+ Our business process might stay open for a long time in some cases, say a year or more, before it gets approved or rejected. As per the BEA documentation for Business Activity Monitoring (BAM) and the Business Activity Data Mart (Data Mart), BAM records information about process instance performance and process workload over a recent time period, usually 24 hours, whereas the Data Mart stores similar data over longer periods of time.
So which of BAM and Data Mart do you/BEA suggest or recommend? Can the Data Mart hold data for that long?
+ For historical reporting:
a) Does this archival happen from the Engine DB or from BAM/Data Mart?
b) As per BEA, process information is copied to the archive database after a process is completed or aborted, based on the archiving schedule configured in the Process Administrator (Services pane of the Edit Engine section). Also, ALBPM business processes do not require data from an instance once that instance has ended, so process instances are discarded upon instance expiration. Does this also flush the process instance data from BAM/Data Mart, or will the BAM/Data Mart data flush out according to the duration configured in ALBPM? Please confirm.
+ There must have been some reason for designing three different ways to hold process instance data:
#1. Engine DB,
#2. BAM/Data Mart,
#3. Archival Database.
Can you please shed some light on why it has been designed this way? 

I too would like to better understand the rationale for the execution DB, the BAM DB, and the process Data Mart.
Our total process executions run on the order of weeks to years (R&D pipeline).
We wish to combine process execution metrics with capacity utilization and forecasting.
The question is whether we should be pulling this data directly from the process execution DB to load into our in-house data warehouse, or go through the BEA process Data Mart first? 

Hi,
1) You could use the BAM or Data Mart database for reporting on current instances. You could also use these databases to track instances that have reached the End activity of a process. Even though instances in the Engine database are sent to the Archive database (if you want them to be) after they've sat in the End activity long enough (based on your Engine's "caducity" setting - default 15 days), the BAM and Data Mart databases still track instances in the End activity. The BAM database default is to store aggregated information for the last 24 hours (this too is an Engine setting). You don't have to do anything to cause rows in the BAM database to roll off after they've aged sufficiently; it is done automatically. The Data Mart database, however, keeps its rows forever.
2) As mentioned above, you could use the Archive database to look at instances that have reached the End activity, but be aware that they are not going to be inserted into this database unless you have selected the option to archive instances and the instances have aged sufficiently in the End activity. Be careful: the default setting is 15 days. You're far better off reporting aggregated information from the BAM and Data Mart databases.
Instances that are active in the process (not in the End activity) stay active in the Engine's database. This might be 20 years and it still won't matter. Instances will not be sent to the Archive database until they reach the End activity and they've sat there for the Engine's caducity setting (default 15 days). Instances added to the Archive database are sent there from the Engine database. Once there, they are removed from the Engine's database.
Archiving instances has no effect on what is stored in the BAM or Data Mart databases.
hth,
Dan

Related

How to Achieve Performance Tuning In BPM Studio

Please tell me how to achieve performance tuning in a BPM project, and let me know if there is any documentation for this.
Thanks in advance. 
Could you clarify this just a bit?
Are you really looking for tips on improving Studio's performance (e.g. increase memory to at least 3gb), as the title suggests, or for ways to improve your project's performance at runtime (e.g. use procedures or "Greedy" execution of automatic activities to reduce the number of transactions)?
Thanks,
Dan 
Could you please tell me how to increase project performance? 
I'm sure others will have other ideas on performance, but here are a couple tips on Oracle BPM Enterprise performance:
*1. Bottleneck for Engine with Automated Processes*
If the Engine is running mostly automated processes, the Engine Database and backend system calls will cause most of the bottlenecks.
- Ensure the network access and throughput to the Engine's database are fast.
- Track backend system latency and ensure it is kept to a minimum.
*2. Bottleneck for Engine with Mostly Interactive Processes*
If the Engine has a mix of Interactive (human) and automated activities in the processes deployed to it, the bottleneck will normally be the Workspace web application:
Consider adding additional Workspace web applications as the number of end users increases. Use hardware/software load balancing with "sticky" sessions to keep the end user load evenly distributed across the different Workspaces.
Dan 
*3. Keep the Maximum Instance Size as Small as Possible*
The maximum size of all the instance variables for a single work item instance defaults to 16k. When the overall instance size exceeds the threshold set on the Engine, you might need to increase the size of this setting. Too often, this setting is changed without realizing the negative impact it will have on the Engine’s performance.
Instance variables are serialized into a single BLOB and committed to the BPM engine database as each activity in the process successfully completes. If the overall size of the instance variables in a process is large, then it will take longer to de-serialize (read from the BLOB) at the beginning of each activity and serialize (combined into the BLOB) at the end of each activity. Having an overall large instance size will slow the overall performance of instances flowing through the process.
Rather than simply incrementally increasing the Engine's instance size setting, examine the number of instance variables and the size of the objects they represent during code reviews. In some cases, instance variables need to be defined as "Separated". Separated instance variables are stored in a separate table in the Engine's database and are only read when the separated instance variable is needed in the activity and written only when it has changed in the activity. 
*4. Keep the Production Engine Log Property Setting at WARNING or Above*
Keep the Production Engine's Log Property setting at WARNING or above. If set to DEBUG, the Oracle BPM Engine log file will be written to in each individual transaction and performance will be degraded.
- Always use the severity argument when using logMessage so that you are conscious of the severity of the message being logged in the Oracle BPM Engine's log (see the sketch after this list).
- It is always better to have more log files of a smaller size than a small number of big log files. The operating system takes more time to write to a big file than to a small file. 
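For example, a quick PBL sketch (the "using severity = ..." named-argument form is how I remember it, so verify it against your catalog; the message text is made up):

// routine tracing: only written to the Engine log when the log level is DEBUG
logMessage "Rating calculation started" using severity = DEBUG
// real problems: still written when the production log level is WARNING
logMessage "Backend rating service did not respond, will retry" using severity = WARNING

With the production Engine set to WARNING, the first message never hits the log file while the second one still shows up.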
*5. Group Automatic Activities in a Single Transactional Boundary*
When you have several automatic activities in a sequence, recognize this as a potential performance improvement opportunity. The default behavior of Oracle BPM during each Automatic activity's execution is:
1. Initiate the transaction
2. Read the work item instance's variable information from the Engine's database
3. Execute the logic in the Automatic activity
4. If no system exception occurs, commit the transaction and write the instance variable information back to the Engine's database
Many times you'll instead want to speed execution when there are several Automatic activities in a sequence. If three Automatic activities are in a sequence, then the four items listed above will occur three times. By grouping these into a single transactional boundary, instead of 12 steps you would have:
1. Initiate the transaction
2. Read the work item instance's variable information from the Engine's database
3. Execute the logic in the first Automatic activity
4. Execute the logic in the second Automatic activity
5. Execute the logic in the third Automatic activity
6. If no system exception occurs, commit the transaction and write the instance variable information back to the Engine's database
This grouping of Automatic activities into a single transactional boundary can be done in one of these three ways:
1. Create a Group around the sequence of Automatic activities (lasso the three activities) -> right mouse click inside the dotted line -> click "Create a Group with Selection" -> click "Runtime" in the upper left corner -> click the checkbox "Is Atomic".
2. Instead of placing the Automatic activities in the process, add them in a Procedure and then call the Procedure from a new Automatic activity in the process.
3. In Oracle BPM 10g you can enable "Greedy" execution for the process by right mouse clicking the process's name in the Project Navigator tab -> click "Properties" -> click the "Advanced" tab -> click the "Enable Greedy Execution" radio button.
Dan 
Hi Dan,
Is there any other way to improve performance? I already applied these tips but the performance is still too low.
Thanks,
Andrea 
Can you elaborate on how the performance is low? Is it automated processing that is slow, or interactive activities? 
One of the tricks I've used to find bottlenecks in automatic activities (especially those that have a good deal of code, calling web services, database transactions etc) is to log the start and end time...
stTime as Time = 'now'
logMessage activity.name + "::Starting.."
... Code Here ...
logMessage activity.name + "::Finished in: " + ('now' - stTime)
Then just check out your logs, see which activity is taking the longest, and you can figure out from there what methods may be problematic.
HTH 
Thanks for the advice.
I have a Global Interactive that uses a screenflow. It has an automatic activity that calls 2 web services, stores info in a database and then creates instances from that info, but it takes too much time, and I need to decrease that time.
Could you advise any best practice to improve this time, or any tips to consider in this case?
Thank you so much 
I would add logs similar to what Kevin has posted above to narrow down where the bottleneck is. You will want to time the 2 web service calls, the DB queries and the instance creation separately. Also time the whole automatic. This will let you see exactly how long each part is taking and why. 
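As a rough PBL sketch of that idea (building on Kevin's snippet above; the comments mark where your real calls would go):

totalStart as Time = 'now'

t as Time = 'now'
// first web service call goes here
logMessage activity.name + "::WS call 1 took: " + ('now' - t)

t = 'now'
// second web service call goes here
logMessage activity.name + "::WS call 2 took: " + ('now' - t)

t = 'now'
// database insert/update goes here
logMessage activity.name + "::DB work took: " + ('now' - t)

t = 'now'
// instance creation goes here
logMessage activity.name + "::Instance creation took: " + ('now' - t)

logMessage activity.name + "::Whole automatic took: " + ('now' - totalStart)

Whichever line dominates the total is where to focus the tuning.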
Do I need to change something on the Weblogic server?
Is there any recommendation for tuning the server? It shows a warning that the ThreadPool has stuck threads.
Thank you so much. 
All great points above. Here's my 2 cents:
Where I typically start is to first figure out whether the slowness is on the engine or the workspace. Each has different tuning tricks and techniques.
The 10g workspace is a memory hog, and terribly written, so most of your optimizations will revolve around JVM tuning (memory, garbage collection, session timeouts, web server caching for images, etc), as well as instance cache configurations.
I've hardly ever found memory issues with the engine, but the database connection pool settings, as well as the number of execution threads available are incredibly important. 
Thank you so much for your advice.

Force Capture to mine specific redo logs

I have been looking for a way to run 2 capture processes where each one works against a different set of redo logs.
The current limitation when using multiple capture processes is that one process will wait while the other is mining the logs;
I need both capture processes to mine all the time.
Is there a way to have say C001 mine all the "a" redo logs (redo01a.dbf,redo02a.dbf,redo03a.dbf) and
C002 mine all the "b" redo logs (redo01b.dbf,redo02b.dbf,redo03b.dbf). This way the capture processes would
not be interfering with each other and each would be continuously mining logs.
The reason I am asking is that we have realtime customers and batch customers. With one capture, the batch users
can back up the Streams processing, causing the realtime customers to complain if the backlog lasts longer than expected.
Unfortunately expectations were not set, so the realtime customers expect data to be processed and accessible in reporting (target) immediately.
I have searched Metalink and in this forum but have not found any solutions.
we are using 10.2.0.3.
Thanks for any help that can be provided.
Reid. 
This sounds, conceptually, like a really bad idea.
What happens if one process is mining the parent table and another the detail table in a foreign key relationship?
What happens if a DELETE occurs before the preceding INSERT? 
We use Streams in our environment to move all records, about 20 tables, related to a patient to a reporting instance (target)
as one transaction. All RI is done during the apply into staging at the source, so table order is taken care of; capture is done off of staging.
The records on the reporting side for the patient are deleted and the new records applied. Each Streams
transaction contains all data for a single patient and only that patient.
Currently everything is working fantastically, but there are occasions when a batch load comes in and causes the queue to back up,
and the return stream from the target, used to notify the front end of completion, is held up behind the batch, causing excessive waits for the realtime users.
Splitting the batch and realtime capture would enable us to have 2 capture/propagation/apply streams that would not impact each other.
If this cannot be done then it looks like shipping the logs to the target and capturing the batch transactions at that end is the way that we will have to go.
Thanks. 
Are you doing synchronous or asynchronous processing?

Best practice for creating a bulk of new instances

Hi,
My customer has a requirement to create a bulk of 30,000 new instances by scanning a database table (an export from a billing system) which contains a list of call center tasks (interactive) to be performed. They will be executed throughout the month.
What is the best way to do it? If the answer is "this is a worst practice" that's a good answer too, but the question is how do I provide a solution to their needs.
When the process was executed as-is, we ran into the maximum limit of 1,000 instances. Configuring the automatic item queue removed the 1,000-limit exception but caused JTA timeouts and other exceptions against the engine database.
It seems to me that some sort of division into batches of a few hundred instances should be the way to do it, instead of just exploding the engine with so many instances. Am I right? If so, what would be the best way to code it? If not, any other suggestions?
Any recommendations would be highly appreciated.
Thanks,
~ronen 
Hi,
Know how you feel.
Others are sure to have other ideas, but here are a few thoughts:
1) You might want to consider only creating 995 in a single transactional boundary at a time. Once 995 have been created, end the transaction so it will commit. You could then fire another transaction that creates another 995. Keep doing this in separate transactions until all have been created. I'm sure you know many ways you could keep track of where each group of 995 ended and where your next group of 995 should begin, but a common technique is to keep track of the last row processed in a separate database table (see the sketch after this list).
2) Sure you've thought of this, but consider running this from a Global Automatic that is scheduled to run at a time when everyone's gone home. The last thing you want is to get complaints from the end users about performance. Their interaction will be degraded and your creations will take longer.
3) If you can, be especially conservative about your instance size when creating instances in a batch. Avoid adding attachments and binary variables as they are being created (both of these can potentially be huge). Get rid of the incoming argument variables that you do not need right away in the process. Many times you can look up the additional information downstream when needed later in the process.
4) Note what is happening immediately downstream of your Begin activity in the process. If, for example, you have more than one automatic activity immediately following the Begin, consider either turning all the automatic activities into a procedure (one transaction) or setting your process to "Greedy" execution. If you have a Multiple activity immediately after the Begin, you might be creating 10x the number of instances (e.g. your Multiple's logic creates 10 copies). If that is the case, you might want to throttle back the number of instances created in one transaction even further.
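To make point 1 a little more concrete, here's a rough PBL sketch of the kind of loop you could run from that scheduled Global Automatic. Everything in it is illustrative: the CALL_CENTER_TASKS table, the "billingDS" external resource, the /CallCenterTask process with its BeginIn argument set, and the readLastId/writeLastId helpers (which you'd implement against your own bookkeeping table). The DynamicSQL.executeQuery and ProcessInstance.create argument names are written from memory, so double-check them against the Fuego catalog before relying on this:

batchSize as Int = 995
lastId as Any = readLastId()   // hypothetical helper: read the last processed id from a bookkeeping table

// grab the next batch of exported rows after the last one already processed
sql as String = "SELECT TASK_ID, CUSTOMER_ID FROM " +
                "(SELECT TASK_ID, CUSTOMER_ID FROM CALL_CENTER_TASKS WHERE TASK_ID > " + lastId +
                " ORDER BY TASK_ID) WHERE ROWNUM <= " + batchSize

for each row in DynamicSQL.executeQuery(sentence : sql, implname : "billingDS") do
    // one new work item instance per exported row
    ProcessInstance.create(processId : "/CallCenterTask",
                           arguments : { taskId : row["TASK_ID"], customerId : row["CUSTOMER_ID"] },
                           argumentsSetName : "BeginIn")
    lastId = row["TASK_ID"]
end

writeLastId(lastId)   // hypothetical helper: persist where this run stopped

Each run of the Global Automatic is one transaction, so each run commits one batch, and the bookkeeping table is what lets the next run carry on where this one stopped.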
Hope this helps,
Dan 
Dan,
Couldn't hope for a more thorough answer!
Many many thanks!
~ronen 
Hi, I'm running into this as well. My client wants a "bulk insert" from the database, creating a new instance for every row in the database. But he doesn't want me to use a counter to read the database (because in his experience a commit sometimes arrives late and an id ends up out of place), so how can I read one row at a time and create a new instance for it?
I have a Global Automatic activity that reads the database and creates a new instance following this approach: create new process instance.
Can you help me?
Thanks

Process reports

Hello,
Not sure if this has already been answered somewhere, but what's the best way to provide process reports? I am looking for answers to the following questions:
1. Is there a way to get all processes (completed/in-progress) for a user-specified period of time? For example, report that for the first week of 2009 there were 10 voting process instances, 9 complete and 1 still pending.
2. Is it possible to access instance variables (custom BPM Objects) for a collection of processes for a user-specified period of time? For example, to build a CSV file with voter information (approx. 30 fields) for voters who participated in the voting process for the last month.
Thanks.
Nick. 
I briefly looked into BAM, and it might be useful for Q1, but this post mentions that BAM DB gets cleaned up regularly by the BPM Engine (not sure what specifically is cleaned from the DB though):
Queries Regarding BAM Database
Also, it doesn't look like the BAM DB includes information about processes that are "in-progress".
For Q2, it looks like the Fuego.Papi API might be useful; however, I am still not sure whether I will be able to access process instance variables. Can anyone tell me if I am on the right track? 
Hi Nick,
Is there a way to get all processes (completed/in-progress) for a user-specified period of time? For example, report that for the first week of 2009 there were 10 voting process instances, 9 complete and 1 still pending.
While you could use the BAM database for both instances that are still active and those that have reached the end, it looks like you'd like to go further back in time than what the BAM database stores. The BAM database typically stores what's either happening right now or what has happened in the last 24 hours. While this time period is an easily modifiable Engine setting, for performance reasons you'd be better off leaving it set to the default (24 hours).
There is a Data Mart database with exactly the same structure as the BAM database. This database has no time limit associated with it. As long as a project variable is defined as a "Business Indicator", its information will automatically be stored in the Data Mart database and will not roll off.
My preference would be (not that I get a vote) to instead just use external tables that you define to store the information. As instances flow through the process, you'd update the table(s) via SQL in PBL.
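As a rough PBL sketch of that approach (the VOTING_REPORT table, its columns, the "reportingDS" external resource and the voter BPM object are all made up for illustration, and the DynamicSQL argument names plus the use of the predefined instance.id are from memory, so verify them against the Fuego.Sql catalog):

// run as the instance flows through the process, e.g. in an automatic activity
// right after the vote has been captured
sql as String = "INSERT INTO VOTING_REPORT (INSTANCE_ID, VOTER_NAME, VOTE_VALUE, VOTED_ON) VALUES ('" +
                instance.id + "', '" + voter.name + "', '" + voter.vote + "', SYSDATE)"
DynamicSQL.executeUpdate(sentence : sql, implname : "reportingDS")

(In real code you would want bind parameters rather than string concatenation, but the idea is the same.) Since the table is yours, your report can query it directly, regardless of whether the instance is still running, has reached the End activity, or has been archived.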
Is it possible to access instance variables (custom BPM Objects) for a collection of processes for a user-specified period of time? For example, to build a CSV file with voter information (approx. 30 fields) for voters who participated in the voting process for the last month.
Neither the BAM nor the Data Mart databases store BPM Object information.
I think you'd be better served using external tables that you define also for this requirement. That way you would not have to clutter the process up with the specifics on the voter information (unless you needed it inside the process) and yet you could still access it at any time in the future from the database. If you don't go this route, the challenge you will have is when you want to extract information from instances that have reached the End activity. The BPM Object information can be retrieved from instances currently running inside the process (would be glad to share how this is done if you'd like), but it's a different story once the instance is archived.
Hope this helps,
Dan 
Thank you so-o-o-o much for the answer.

Recommended value for BAM Data Expiration Time

Hi,
Can anyone tell me what is the recommended value for BAM Data Expiration Time ?
The Enterprise Server default is 24 hours, but I would like to be able to gather the average instance execution time after several months. Is it reasonable to set the expiration time to such a high value? Or will it have an impact on BPM/BAM performance?
Thanks in advance.
Best regards,
CA 
Normally we keep the BAM data expiration time somewhere between 24 and 72 hours. For the historical reporting you are looking for, the Data Mart/Data Warehouse DB makes more sense. This database keeps the data forever and takes snapshots at longer intervals, normally 24 hours. The data normally won't be real time, because a snapshot is only taken once per day, but it will give you the historical reporting you are looking for. The data structure of this database is almost the same as the BAM DB. 
Can we show the collected Data Mart data simultaneously in the BAM unified dashboard by changing the related queries (if so, how)?
Or should we implement a specific graph for viewing data from the Data Mart? 
It probably wouldn't make sense because of the time scales to try and show both sets of data on the same chart, but you could have 2 charts on the same presentation. One would contain your real time BAM information and the other would show your historical trends. You will need to create your own DB external resource configuration to connect to Data Mart though and use the DynamicSql component. The BamQuery component only knows how to connect to the real time BAM database. 
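To give an idea of what the second chart's data source might look like in PBL (everything here is a placeholder: the "datamartDS" external resource is the one you would create, the DM_PROCESS_SNAPSHOT table and its columns stand in for whatever you find in the real Data Mart schema, and the DynamicSQL argument names are from memory):

// query the Data Mart over a longer window than BAM keeps, e.g. the last 90 days
sql as String = "SELECT PROCESS_NAME, AVG_DURATION FROM DM_PROCESS_SNAPSHOT " +
                "WHERE SNAPSHOT_DATE > SYSDATE - 90"
for each row in DynamicSQL.executeQuery(sentence : sql, implname : "datamartDS") do
    // for the sketch just log each row; in the presentation you would add it to
    // whatever structure the historical chart binds to
    logMessage row["PROCESS_NAME"] + " avg duration: " + row["AVG_DURATION"]
end

The BamQuery-based chart next to it keeps showing the real-time BAM data unchanged.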
Ok, I understand. At this time we're just working on a proof of concept, so I would prefer not to have to create and query the Data Mart database. Would it have an impact on performance if we set the BAM expiration time to, say, 30 days? Of course, when we move to the real implementation we'll have to change this; I just want to know if I can have acceptable performance for now, using solely the BAM database. Also, I must point out that we only have 4 processes and about 10-15 users, for now. 
With that number of processes and users, and the fact that it is just a POC, I would say you are fine with 30 days of BAM. We have had customers go much longer than this, even. The only performance decrease would happen if the BAM DB itself became slow; then the dashboards would load more slowly and it would take the updater longer to run. Any decent database should handle the load you will see with no problem.
