Dripfeeding via JMS - weblogic.developer.interest.jms (Archived)

Hi,
          Can someone answer the following questions?
          I have a WebLogic 8.1 system with 2 managed servers, 1 clustered MDB with 4 pooled instances, multiple clients producing thousands of messages, and 1 distributed destination.
          
          The MDB calls out to another system which I do NOT want to overload.
          
          Can anyone tell me a non-kludge way of getting WebLogic to slow down the many producers? I have tried getting flow control to work, but it seems that no matter how low I set the threshold, WebLogic will still send as many messages as it thinks is sensible (or as many as it can currently handle?).
          
          I want WebLogic to buffer the messages and only let, say, two at a time be consumed by the MDB.
          
          Currently I am just using WebLogic flow control; I am not sure that is sufficient.
          
          Any suggestions?
          
          Kwikk
          
          --
          Edited by kwikksilva at 05/29/2008 11:12 AM 

Hi,
          
          Flow-control is enabled according to the backlog of messages in a destination and controls the rate at which senders inject messages. It doesn't directly control the rate at which consumers such as MDBs receive from the destination.
          
          Flow-control and/or even quota-blocking-sends will definitely slow down producers if configured properly, but it seems a little strange to handle things this way.
          
          If you need to reduce the number of concurrent threads in an MDB pool, you can simply tune the MDB. If that still isn't enough, you can look into tuning the system that the MDB is calling (presumably the downstream system already has some sort of flow control).
          
          The "JMS Performance Guide" white-paper on dev2dev explores MDB tuning and flow-control for WL 8.1.
          
          Tom

Related

Distributed Queue: Unable to search message

Hello mates,
          We have a Distributed Queue (DQ) mapping to two physical queues (PQ1 and PQ2).
          
          There are two web services (WS1 and WS2) that dump messages into the Distributed Queue. Due to its very nature, a message goes to either PQ1 or PQ2.
          
          Now the reader of this queue is a standalone client. It obtains a reference to the Distributed Queue (DQ) and uses a QueueBrowser to search for a specific message. If the current DQ session points to PQ1, then messages in PQ2 are skipped, and vice versa.
          
          This is undesirable behavior. The client needs a uniform view of all the messages.
          
          We tried two options to resolve this:
          1. Used Forward-Delay (1 sec)
          2. Enabled "Load Balancing" and Disabled "Server Affinity"
          
          Unfortunately, none of them worked.
          
          
          Could anyone please suggest a possible resolution for this? Or am I missing something here?
          
          
          Thanks
          Yogesh
          --
          Awaiting responses... 
When you connect a client to a distributed JMS queue, the session doesn't connect to both member queues. The distributed destination "pegs" the connection to one of the physical queues in a round-robin fashion.
          
I would suggest bypassing the distributed destination by connecting directly to both queues using two queue browse sessions within a single transaction. 
> I would suggest bypassing the distributed destination by connecting directly to both queues using two queue browse sessions within a single transaction.
          
          Browsing is non-transactional, so there is no need to use transactions.
          
          If you are using version 9.0 or later, an alternate solution is to use WebLogic JMS JMX mbean management APIs to browse each individual queue. These APIs are more capable in that they can optionally view messages that are normally invisible to JMS API browsers...
          
          Tom 
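          
          As a minimal sketch of the direct-browse suggestion above, assuming the member queues are bound at the JNDI names "PQ1" and "PQ2" (placeholders) and that messages are matched by a correlation-ID selector:
          
          import java.util.Enumeration;
          import java.util.Hashtable;
          import javax.jms.*;
          import javax.naming.Context;
          import javax.naming.InitialContext;
          
          public class MemberQueueBrowser {
              public static void main(String[] args) throws Exception {
                  Hashtable env = new Hashtable();
                  env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                  env.put(Context.PROVIDER_URL, "t3://localhost:7001"); // placeholder cluster address
                  Context ctx = new InitialContext(env);
          
                  QueueConnectionFactory cf =
                          (QueueConnectionFactory) ctx.lookup("weblogic.jms.ConnectionFactory");
                  QueueConnection con = cf.createQueueConnection();
                  try {
                      con.start();
                      QueueSession session = con.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                      // Browse each physical member directly, not the distributed destination.
                      String[] members = {"PQ1", "PQ2"}; // assumed JNDI names of the members
                      for (int i = 0; i < members.length; i++) {
                          Queue member = (Queue) ctx.lookup(members[i]);
                          QueueBrowser browser =
                                  session.createBrowser(member, "JMSCorrelationID = 'myId'");
                          for (Enumeration e = browser.getEnumeration(); e.hasMoreElements();) {
                              Message m = (Message) e.nextElement();
                              System.out.println("Found on " + members[i] + ": " + m.getJMSMessageID());
                          }
                          browser.close();
                      }
                  } finally {
                      con.close();
                  }
              }
          }
          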
Thanks for your reply Tom!
          
I have gone through your previous posts in this forum and believe you have a fair understanding of Distributed Queues.
          
          I'll describe my problem statement.
          1. External requests are handled by components residing in clustered managed servers (CM1 & CM2)
          2. These components process the request and put the response in a response queue (PQ1 & PQ2)
          3. Clients invoke a standalone application to retrieve the response message via an identifier (Id). This standalone application should have access to all the response messages
          4. The number of managed servers varies between different environments (Dev, Test, Prod, etc.)
          5. We use WebLogic 8.1.4
          
          Could you suggest which would be the best approach to tackle it?
          
          - Should I use a single Physical Queue?
- Should I use a Distributed Queue mapping to the physical queues and use Forward Delay or a Message Bridge for synchronization? I tried using Forward Delay but it's not working; the WL console always shows "1" active consumer.
          - Any other alternative?
          
          
          Thanks
          Yogesh 
Hi Yogesh,
          
          I'm back from vacation.
          
          Since your client app must have access to all response messages, here are some options:
          
(1) Move to a single response queue (as you already mentioned).
          
          (2) Enable forwarding (as you already mentioned), but ensure that only one destination has consumers (otherwise forwarding will not activate).
          
(3) Simply have the client create consumers on all of the member queues of the response destination. (This is the behavior of 9.x MDBs that receive messages from a 9.x distributed queue in another cluster. In this scenario, not only will the MDB automatically handle reconnects, but it will even automatically detect the creation of new destination members.) See the sketch below.
          
          Tom 
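          
          A minimal sketch of option (3) on 8.1, again assuming member JNDI names "PQ1" and "PQ2" (placeholders); reconnect handling on failure is left to the application:
          
          import java.util.Hashtable;
          import javax.jms.*;
          import javax.naming.Context;
          import javax.naming.InitialContext;
          
          public class AllMembersConsumer implements MessageListener {
              public void onMessage(Message msg) {
                  try {
                      System.out.println("Got response: " + msg.getJMSCorrelationID());
                  } catch (JMSException e) {
                      e.printStackTrace();
                  }
              }
          
              public static void main(String[] args) throws Exception {
                  Hashtable env = new Hashtable();
                  env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                  env.put(Context.PROVIDER_URL, "t3://localhost:7001"); // placeholder cluster address
                  Context ctx = new InitialContext(env);
          
                  QueueConnectionFactory cf =
                          (QueueConnectionFactory) ctx.lookup("weblogic.jms.ConnectionFactory");
                  QueueConnection con = cf.createQueueConnection();
                  MessageListener listener = new AllMembersConsumer();
          
                  // One consumer per physical member, so no member is left unserviced.
                  String[] members = {"PQ1", "PQ2"}; // assumed JNDI names of the members
                  for (int i = 0; i < members.length; i++) {
                      QueueSession session = con.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                      QueueReceiver receiver = session.createReceiver((Queue) ctx.lookup(members[i]));
                      receiver.setMessageListener(listener);
                  }
                  con.start();
                  // ... keep the JVM alive and re-create consumers on failure ...
              }
          }
          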
Hi all,
          
This post seems quite old, but in case someone is still listening: I'm almost in the same situation as Yogesh, except that my client is an MDB deployed in another WebLogic instance (where the JMS distributed destination is declared as foreign).
          
          Tom's solutions (1) and (2) aren't appropriate in my case, because I want high availability on the destination; therefore I can't have consumers on only one physical queue (the instance hosting this queue could be the one that fails).
          
          Solution (3) sounds perfect; however, I'm running WebLogic 8.1 SP2. Do I have any way of emulating the 9.x behavior through custom development?
          
          In case the answer is no, the only alternative I can think of would be to have one MDB per physical destination, but that would make the distributed destination quite useless, and would require re-configuration if I decide later to add more physical queues...
          
          Thanks,
          
          Olivier
          
          --
          Edited by olivier_m at 01/11/2008 5:22 AM
          
          --
          Edited by olivier_m at 01/11/2008 5:24 AM 
The forwarding option mentioned in (2) ensures that all messages will eventually be forwarded to a destination that has active consumers, where multiple destinations can have consumers. That said, it may not scale well in the event of a few failures, as the remote MDB consumers could end up only consuming from a single destination with messages being actively forwarded from all of the other destinations.
          
Periodically redeploying the remote MDB would force it to recreate its consumers, and so help make sure that all distributed destination instances continue to be serviced. This assumes that the MDB is configured to use a custom connection factory with appropriate distributed destination load-balance settings.
          
          Tom
          
PS. Not only would upgrading to 9.x, or even 10.x, address the problem, it would have other advantages: much higher performance, the ability to pause/resume MDBs or destinations, administrative message management, store-and-forward, and the unit-of-order feature come to mind. 
Tom,
          
          Thanks for your quick answer and excellent support.
          
          I'm not that happy with having to redeploy the client MDB periodically, but it seems that the answer to my problem will have to be a compromise anyway, so I'll suggest this solution to my customer.
          
          As for upgrading, you're preaching to the converted :-)
          However, it's not a trivial task as we have a big production site with multiple servers. The JMS improvements will definitely be a good argument to promote the idea.
          
          Cheers,
          Olivier 
I am facing a similar situation. To resolve the issue, I would like to have the client browse all member queues directly. The challenge I am facing is that given a uniform distributed queue, how can I discover its member queues at runtime?
          
I am using WLS 9.2 and have gone through the JMX docs but am unable to determine which attributes should be used to find the member queues.
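          
          One possible approach (an assumption worth verifying, not something confirmed in this thread): query the Domain Runtime MBean server for JMS destination runtime MBeans and filter on the distributed queue's name, since member runtime names embed it. The URL, credentials, and queue name below are placeholders:
          
          import java.util.Hashtable;
          import java.util.Iterator;
          import java.util.Set;
          import javax.management.MBeanServerConnection;
          import javax.management.ObjectName;
          import javax.management.remote.JMXConnector;
          import javax.management.remote.JMXConnectorFactory;
          import javax.management.remote.JMXServiceURL;
          import javax.naming.Context;
          
          public class UdqMemberDiscovery {
              public static void main(String[] args) throws Exception {
                  // Connect to the WLS 9.x Domain Runtime MBean server over t3.
                  JMXServiceURL url = new JMXServiceURL("t3", "localhost", 7001,
                          "/jndi/weblogic.management.mbeanservers.domainruntime");
                  Hashtable env = new Hashtable();
                  env.put(Context.SECURITY_PRINCIPAL, "weblogic");   // placeholder credentials
                  env.put(Context.SECURITY_CREDENTIALS, "password");
                  env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                          "weblogic.management.remote");
                  JMXConnector connector = JMXConnectorFactory.connect(url, env);
                  try {
                      MBeanServerConnection mbs = connector.getMBeanServerConnection();
                      // Destination runtime MBean names embed the JMS server and destination;
                      // adjust the pattern if your ObjectNames differ.
                      Set names = mbs.queryNames(
                              new ObjectName("com.bea:Type=JMSDestinationRuntime,*"), null);
                      for (Iterator it = names.iterator(); it.hasNext();) {
                          ObjectName on = (ObjectName) it.next();
                          String name = (String) mbs.getAttribute(on, "Name");
                          if (name.indexOf("MyUDQ") >= 0) { // "MyUDQ" = assumed UDQ name
                              System.out.println("Member: " + name);
                          }
                      }
                  } finally {
                      connector.close();
                  }
              }
          }
          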

Throttling MDB performance running against MQSeries queue

I have an unusual requirement for an application. We have a non-transactional MQSeries queue that is bound as a foreign JMS destination in client mode (thanks to the folks on the forum who helped me configure that!).
          
          The problem is that the cluster that we are running our application on has limited capacity and I need to make sure that the MDB that will be listening to the queue does not surpass a certain TPS limit (say, 1 TPS). This article explains how to throttle performance using a separate execute thread -- http://e-docs.bea.com/wls/docs81/perform/AppTuning.html#1105201
          
However, this requires you to set the MDB's "dispatch-policy" attribute to the lower-priority queue in ejb-jar.xml, and the documentation on this attribute -- http://edocs.beasys.com/wls/docs81/ejb/DDreference-ejb-jar.html#1113605 -- indicates that it is only honoured if the source queue is transactional in nature.
          
          What to do? A transactional queue is a hard sell to our infrastructure folks. Can I use the extended transactional client and have this work?
          
          FYI - using the max-beans-in-free-pool does not do a sufficiently good job of limiting performance.
          
          Any other ideas? 
As the doc describes, your MDB's "onMessage" method is invoked by a thread created by MQ. That means we have no control over it, or how fast it runs. I don't know of any "throttling" features in MQ either that would help.
          
          The only (ugly!) suggestion I can think of is that you should set "max-beans-in-free-pool" to 1 and periodically "sleep" in your onMessage method so that you don't get messages too fast!
          
          (And even if you were able to use a separate execute queue, or a transactional queue, you still could potentially get messages more than once per second, so you might still need to sleep anyway.) 
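          
          A minimal sketch of that (admittedly ugly) approach, assuming max-beans-in-free-pool is set to 1 so a single instance processes messages serially; process() is a hypothetical helper:
          
          import javax.ejb.MessageDrivenBean;
          import javax.ejb.MessageDrivenContext;
          import javax.jms.Message;
          import javax.jms.MessageListener;
          
          public class ThrottledMDB implements MessageDrivenBean, MessageListener {
              private static final long MIN_INTERVAL_MS = 1000; // roughly 1 TPS
          
              public void onMessage(Message msg) {
                  long start = System.currentTimeMillis();
                  process(msg); // the real work (hypothetical helper)
                  // Sleep away the remainder of the one-second window.
                  long elapsed = System.currentTimeMillis() - start;
                  if (elapsed < MIN_INTERVAL_MS) {
                      try {
                          Thread.sleep(MIN_INTERVAL_MS - elapsed);
                      } catch (InterruptedException ie) {
                          Thread.currentThread().interrupt();
                      }
                  }
              }
          
              private void process(Message msg) { /* ... */ }
          
              // EJB 2.x MDB contract boilerplate.
              public void ejbCreate() {}
              public void ejbRemove() {}
              public void setMessageDrivenContext(MessageDrivenContext ctx) {}
          }
          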
Another possibility might be the following scenario:
          create a local JMS destination, maybe with a JDBC store.
          Then use the WebLogic messaging bridge to transfer messages into the local JMS queue.
          At this point you can configure flow control for the connection factory
          to limit the rate of transferred messages.
          
          
          The problem in this case is that unprocessed messages have to be stored
          on the application server or in the database.
          
          
          But maybe this can help you.
          
          
          --Klaas
          
          
gbrail wrote:
          > As the doc describes, your MDB's "onMessage" method is invoked by a thread created by MQ. That means we have no control over it, or how fast it runs. I don't know of any "throttling" features in MQ either that would help.
          >
          > The only (ugly!) suggestion I can think of is that you should set "max-beans-in-free-pool" to 1 and periodically "sleep" in your onMessage method so that you don't get messages too fast!
          >
          > (And even if you were able to use a separate execute queue, or a transactional queue, you still could potentially get messages more than once per second, so you might still need to sleep anyway.) 
Thanks for the suggestions.
          
I've decided that my only real option is max-beans-in-free-pool mixed with a sleep in the MDB. It is ugly but I don't know what else I can do.
          
          Would the extended transactional client help me out at all here?

Process topic message once by MDB deployed to a cluster

Problem: How to ensure JMS messages are processed only once by an application yet have benefit of the automatic failover provided by a cluster.
          
          Environment: WebLogic 9.2; a cluster of two physical servers each with four managed servers
          
Applications: MDB-based, receiving messages from a topic configured in a SonicESB system.
          
Replies to similar problems posted to the forum include -
          
          Change the destination to a queue - that will not be done for our environment.
          
          Configure the MDBs as durable subscribers with the same client ID - if that will work, that could be done but what are the consequences of periodically retrying to establish sessions with a topic? At what level is the retrying done? At each managed server?
          
          The problem goes beyond how to ensure JMS messages are processed only once. Some of the applications do other processing that should be done only once; e.g. each day at a specified time distribute e-mail with information about events that occurred during the preceding 24 hours; data about the events having been stored in a file shared between the physical servers using NFS.
          
For that reason the question could be broader - What is the best technique on which to base an application of which only one instance should be running, but which, if the managed server hosting it fails, automatically resumes, without intervention, in an operational managed server?
          
          I think this must be a common circumstance and want to know how others have solved the problem.
          
          Thank you. 
Hi,
          
Here are some keywords you can google to understand exactly-once processing concepts: "xa transactions", "ACID", "WebLogic LLR" (but definitely not LRO or LRC), "duplicate elimination", and "compensating transactions".
          
The latter two areas describe how to make an arbitrary non-XA resource work pseudo-exactly-once in conjunction with something like message processing, and generally work by (A) reliably storing a record of successfully processed work in some central history table, (B) detecting if a particular work request was already processed (a duplicate) by checking the history, and finally (C) periodically cleaning older records out of the history table.
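          
          To make steps (A) and (B) concrete, here is a minimal JDBC sketch, assuming a hypothetical MSG_HISTORY table keyed on the JMS message ID; the insert must share the transaction that covers the actual message processing:
          
          import java.sql.Connection;
          import java.sql.PreparedStatement;
          import java.sql.SQLException;
          
          // Assumes a table MSG_HISTORY(MSG_ID VARCHAR PRIMARY KEY, PROCESSED_AT TIMESTAMP).
          // The primary key makes the "already processed?" check atomic: a second insert
          // of the same id fails, so the message is treated as a duplicate.
          public class DupEliminator {
              /** Steps (A)/(B): returns true if the id is new and was recorded, false if a duplicate. */
              public boolean recordIfNew(Connection con, String jmsMessageId) throws SQLException {
                  PreparedStatement ps = con.prepareStatement(
                          "INSERT INTO MSG_HISTORY (MSG_ID, PROCESSED_AT) VALUES (?, CURRENT_TIMESTAMP)");
                  try {
                      ps.setString(1, jmsMessageId);
                      ps.executeUpdate();
                      return true; // first time this id has been seen
                  } catch (SQLException dup) {
                      return false; // assumed unique-constraint violation: already processed
                  } finally {
                      ps.close();
                  }
              }
          
              /** Step (C): periodically purge older history records. */
              public void purgeOlderThanDays(Connection con, int days) throws SQLException {
                  PreparedStatement ps = con.prepareStatement(
                          "DELETE FROM MSG_HISTORY WHERE PROCESSED_AT < ?");
                  try {
                      ps.setTimestamp(1, new java.sql.Timestamp(
                              System.currentTimeMillis() - days * 24L * 60 * 60 * 1000));
                      ps.executeUpdate();
                  } finally {
                      ps.close();
                  }
              }
          }
          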
          
          In addition, with something like "sending out an email summary", there's usually no way to do this exactly once. The problem is the question "What happens if the email request fails?" When there's such an error, the application has no way to determine if the email was actually sent or not (unless it has its own email account that's in the distribution list? that might help most of the time?), and normally must compensate for its lack of knowledge simply by attempting the same email operation again. In this case, users will have to be prepared to get the same email twice...
          
As for which one is "best", the short answer is that ACID via XA/LLR is usually by far the "least evil choice", as in practice it turns out to be the simplest option if you can swing it - dup elim and especially compensating-tx style approaches are infamous for becoming very complex very quickly.
          
          Finally, on a related note, depending on your application you may find it simplifies things to use WebLogic's built-in JMS instead of a third party ESB, as this naturally allows a smoother cooperation between the application running in the MDB and the facilities that the application depends on, and can lead to more scalable/performant/workable clustering and/or HA solutions.
          
          Tom
          
PS. Regarding your questions about scheduling work at specific times: WebLogic JMS has a scheduled messaging feature, and WebLogic Server has a persistent timer feature (I think they're called EJB timers?) 
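          
          For the timer option, a minimal sketch using the standard EJB 2.1 TimerService (the bean name and timer payload are made up); the container persists the timer and invokes ejbTimeout even across a restart:
          
          import java.util.Date;
          import javax.ejb.SessionBean;
          import javax.ejb.SessionContext;
          import javax.ejb.TimedObject;
          import javax.ejb.Timer;
          
          public class DailySummaryBean implements SessionBean, TimedObject {
              private SessionContext ctx;
          
              /** Schedule a one-shot timer for the given time. */
              public void scheduleAt(Date when) {
                  ctx.getTimerService().createTimer(when, "daily-email-summary");
              }
          
              /** Called by the container when the timer fires. */
              public void ejbTimeout(Timer timer) {
                  // ... build and send the summary email here ...
              }
          
              // EJB 2.x session bean contract boilerplate.
              public void setSessionContext(SessionContext sc) { this.ctx = sc; }
              public void ejbCreate() {}
              public void ejbRemove() {}
              public void ejbActivate() {}
              public void ejbPassivate() {}
          }
          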
Tom,
          
          Never having used XA I have only a little understanding of it. I associate XA with updating multiple resources for a transaction to ensure that if all resources aren't updated, none is updated.
          
I googled the keywords you suggested but regret to write that I'm not able to understand how XA solves the problem. Is this statement true? If an MDB that subscribes to a topic in a foreign JMS system is configured to use transactions and is targeted to all WebLogic managed servers in the cluster, the container-managed transaction facility of WebLogic will ensure that every message from the topic is delivered to only one instance of the MDB in the cluster. (You may recall from my first post that our cluster environment is two physical servers each with four managed servers.)
          
          When I first did analysis for the problem I discovered WebLogic's migratable targets facility but, if I understand it, it applies only to WebLogic JMS and its producers/consumers. I think what would solve my problem is a migratable consumers facility; i.e. the MDB would be targeted to only one WebLogic server in the cluster but if that managed server should fail, WebLogic would migrate the MDB to an operational managed server.
          
          I also considered what you refer to as "duplicate elimination" or "compensating transactions" but would rather implement almost any solution but that one.
          
          As you might guess, to use WebLogic JMS instead of SonicESB is not an option. To use SonicESB was decided by our IT Services Architecture for delivery of messages between systems.
          
          Thank you.
          
          Jeff 
Hey Jeff,
          
Regarding exactly once: Unless all resources are XA capable (global-transaction capable), I don't know of any solutions to the exactly-once problem except for some form of either duplicate elimination or compensating transactions implemented at the application level. Without these mechanisms, the choices are at-most-once or at-least-once:
          
          In at-least-once, the app acknowledges a message after it completes processing. The steps for a failure case are:
          - a msg is delivered by JMS to the application
          - system may crash before application ever sees the message (forcing redelivery)
          - application performs related work, and could throw (forcing redelivery)
          - system may crash here before msg is acknowledged (forcing redelivery)
          - msg is acknowledged
In the case of a crash, JMS will then redeliver the message and mark its header field with the "redelivered" flag ("redelivered" can be read as "possible duplicate" - without some sort of application-developed history mechanism it's normally impossible for the application to tell whether the message is a true duplicate that can be discarded or a message that must be processed again).
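          
          For illustration, a minimal sketch of an at-least-once consumer using Session.CLIENT_ACKNOWLEDGE; doWork() is a hypothetical helper:
          
          import javax.jms.JMSException;
          import javax.jms.Message;
          import javax.jms.MessageListener;
          
          // The session is created with Session.CLIENT_ACKNOWLEDGE; acknowledge() is
          // called only after the work completes, so a crash forces redelivery.
          public class AtLeastOnceListener implements MessageListener {
              public void onMessage(Message msg) {
                  try {
                      if (msg.getJMSRedelivered()) {
                          // Possible duplicate: without an application history mechanism
                          // we cannot tell whether the prior attempt completed.
                      }
                      doWork(msg);       // application processing (hypothetical helper)
                      msg.acknowledge(); // only after the work has completed
                  } catch (JMSException e) {
                      // Do not acknowledge; the message will be redelivered.
                      throw new RuntimeException(e);
                  }
              }
          
              private void doWork(Message msg) { /* ... */ }
          }
          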
          
          In at-most-once, the app acknowledges a message regardless of whether the work succeeds:
          - a msg is delivered by JMS to the application
          - system may crash before application ever sees the message
          - application acknowledges the message
          - application performs related work
          - system may crash before msg is acknowledged
          
          You asked "If an MDB that subscribes to a topic in a foreign JMS system is configured to use transactions and is targeted to all WebLogic managed servers in the cluster, the container managed transaction facility of WebLogic will ensure that every message from the topic is delivered to only one instance of the MDB in the cluster. " WebLogic MDBs can either have (A) the same subscriber-id for on all servers, or (B) a different subscriber-id on all servers. The JMS specification normally requires that only one subscriber with the same "id" is allowed, so scenario (A) results in only one server's MDB in processing messages for the subscription (other MDBs will fail to create their subscription, as the JMS provider will throw an exception). Scenario (B) results in each server getting a duplicate of the sent message. I think you're looking for a shared subscription - where the messages are divided between all servers. This requires an extension to JMS, which I beleive Sonic supports, but you'll need to consult their doc. Without such an extension, you'll need to forward the messages to a queue, so that multiple MDBs on different servers can process the message stream.
          
          Note that the "shared subscription exactly once" case will require either full "XA resource" capability on the persistence mechanisms for all participating work (I think most of our customers use XA BTW) or that all MDBs in the cluster will need to have some kind of exactly synchronized shared state maintained by the application for dup detection (so that if a msg is redelivered to a different server, you can still detect whether it was already processed by a different server.)
          
          Tom 
Tom,
          
          I understand "shared subscription" to mean that each message would be delivered to only one instance of the MDB but messages are distributed among the MDBs. I don't need that capability. I simply want an MDB to be targeted to the cluster for purposes of always having an operational MDB should a managed server fail but a message would be delivered to only one instance of the MDB.
          
If an MDB could be targeted to only one managed server (thus ensuring that a message could be delivered to only one instance), but, should that managed server fail, the MDB were dynamically re-targeted to an operational managed server, that would be ideal; i.e., only ever one active MDB.
          
          I previously read a posting about the technique of using the same subscriber-id for all managed servers but wondered about the effect on resource utilization. If the MDB is targeted to the cluster but only one subscriber-id is used, I think there will be periodic attempts to establish sessions at some level. I'm assuming sessions are established at the managed server level. In our cluster that would be - one managed server successfully establishes a session, seven periodically retry. What are the consequences of that for both WebLogic and the foreign JMS system?
          
          Having written that causes me to think I should confirm my understanding of how messages are delivered to an MDB targeted to a cluster. I'm assuming that if an MDB is targeted to the cluster and the cluster has eight managed servers (as does our environment) that every message from the topic will be delivered to eight MDBs. That understanding has led to this exchange of postings. At what level does WebLogic establish a session to a topic for an MDB? At the managed server level? At the cluster level? If at the cluster level, then perhaps my understanding is wrong and there isn't a problem.
          
          There isn't need for the MDB to be a durable subscriber but if the subscriber-id technique will work, it can be configured to be a durable subscriber. I thought that was done through the element IDs jms-client-id, in weblogic-ejb-jar, and subscription-durability, in ejb-jar.xml. I don't know about subscriber-id. Please elaborate on that element ID or refer me to documentation.
          
          Thank you.
          
          Jeff 
Hi,
          
Why don't you try deploying the MDB on your cluster with a client-id (i.e., a durable subscription) and setting max-beans-in-free-pool=1 in weblogic-ejb-jar.xml?
          [http://e-docs.bea.com/wls/docs100/ejb/DDreference-ejb-jar.html#wp1114854]
          This will make sure that there is only one MDB instance active, and if the managed server hosting this MDB fails, the MDB will be migrated to another available managed server.
          
          Thanks,
          Qumar Hussain 
Hi Jeff,
          
          >>> I previously read a posting about the technique of using the same subscriber-id for all managed servers but wondered about the effect on resource utilization. If the MDB is targeted to the cluster but only one subscriber-id is used, I think there will be periodic attempts to establish sessions at some level.
          
          Correct. Each MDB pool on each server has an associated connection, session, and subscriber.
          
          >>> I'm assuming sessions are established at the managed server level.
          
          Correct.
          
          >>> In our cluster that would be - one managed server successfully establishes a session, seven periodically retry. What are the consequences of that for both WebLogic and the foreign JMS system?
          
          The periodic retry is infrequent (every few seconds), and so will have little overhead. The individual failing MDBs will log their failures to the server log - an annoyance that your sys admins would need to ignore. Thankfully, I think newer versions of WebLogic (9.x and later) suppress repeated error logging by the MDBs so that they don't redundantly report the same error message again and again.
          
          >>> Having written that causes me to think I should confirm my understanding of how messages are delivered to an MDB targeted to a cluster. I'm assuming that if an MDB is targeted to the cluster and the cluster has eight managed servers (as does our environment) that every message from the topic will be delivered to eight MDBs.
          
          The use of a connection-id prevents this.
          
          >>> That understanding has led to this exchange of postings. At what level does WebLogic establish a session to a topic for an MDB? At the managed server level?
          
          Yes.
          
          >>> At the cluster level?
          
          No.
          
          >>> There isn't need for the MDB to be a durable subscriber but if the subscriber-id technique will work, it can be configured to be a durable subscriber. I thought that was done through the element IDs jms-client-id, in weblogic-ejb-jar, and subscription-durability, in ejb-jar.xml. I don't know about subscriber-id. Please elaborate on that element ID or refer me to documentation.
          
JMS durable subscriptions require both a connection-id and a subscriber-id. It looks like MDBs use "connection-id" for both purposes. I don't know if MDBs pay attention to the "connection-id" field if the subscription is non-durable - in which case you might need to make the subscription durable or use a work-around. A likely work-around would be to configure the Sonic connection factory (not the MDB) with a connection-id; I assume that they have such a configurable, as it's alluded to in the JMS specification and most vendors therefore have something along those lines. (The JMS specification requires that no two connections with the same connection-id can connect at the same time.)
          
          Tom
          
PS. The fact that you plan to use non-durable messaging implies that your system may already tolerate an "at-most-once" design, as non-persistent messages can be lost on a failure anyway.
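          
          To illustrate the two ids in plain JMS 1.1 terms (the JNDI names and id strings below are assumptions): the connection-id (client id) names the connection, the subscriber-id names the durable subscription on it, and only one live connection may hold a given client id - which is what locks out the other servers:
          
          import javax.jms.Connection;
          import javax.jms.ConnectionFactory;
          import javax.jms.Session;
          import javax.jms.Topic;
          import javax.jms.TopicSubscriber;
          import javax.naming.InitialContext;
          
          public class DurableSubscriberExample {
              public static void main(String[] args) throws Exception {
                  InitialContext ctx = new InitialContext(); // assumes suitable jndi.properties
                  ConnectionFactory cf = (ConnectionFactory) ctx.lookup("MyConnectionFactory");
                  Topic topic = (Topic) ctx.lookup("MyTopic");
          
                  Connection con = cf.createConnection();
                  // connection-id: a second connection with the same id is rejected.
                  con.setClientID("response-processor");
                  Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                  // subscriber-id: names the durable subscription on this client id.
                  TopicSubscriber sub =
                          session.createDurableSubscriber(topic, "response-subscription");
                  con.start();
                  // ... receive from sub ...
              }
          }
          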
Qumar,
          
          Currently, the MDB is configured to have a maximum of one bean in the free pool but, because of the nature of the application, it need not have been a durable subscriber.
          
          I think now that the technique of being a durable subscriber with the required client-id may ensure that only one instance of the MDB in the cluster is active at any time.
          
It may be that, as you wrote, if the managed server in which the active instance is running fails, the next retry attempt by one of the operational managed servers in the cluster to establish a session with the topic will succeed (the failure causing the foreign JMS system to recognize the lost session, thus freeing the client-id), and the single instance of the MDB in that managed server will be used.
          
          Thank you.
          
          Jeff 
Tom,
          
I intend to test the technique of using a durable subscriber to confirm that only one instance of the MDB is active in the cluster and that, if the managed server in which that instance is running fails, the next retry attempt by an operational managed server will succeed and a single instance of the MDB will again be active.
          
          Thank you for your help.
          
          Jeff 
Hey Jeff,
          
I now have the same situation you had, and I was wondering what the results of your test were... Did it work...?
          As I see it, defining the subscription as durable means that you have to set up a persistent store that you don't need, and, as I've read in the documentation:
          
          "Durable subscribers cannot subscribe directly to a WebLogic Server distributed destination topic, but must instead subscribe to the JNDI name of the destination topic physical members."
          
which means there are many more configuration tasks to do... Am I right...?
          
          Thanks,
          
          Fco 
Hi Fco,
          
          It is not possible to configure or create a durable subscription unless the destination supports persistence - the attempt will result in an exception. Also, in 9.0 and later, keep in mind that if you don't configure a store for a JMS server, the JMS server will simply use its host WebLogic server's default store.
          
          If you like, feel free to post a description of your needs and I may be able to help you come up with a solution.
          
          Tom 
Hi Tom, thanks.
          
The situation is this: I have a WLS 8.1 cluster doing asynchronous requests, so an EJB sends messages to a distributed topic running in another cluster of 6 servers. I have an MDB (not a durable subscription) consuming the messages from the topic, and I realized that each MDB gets a copy of the message, processes it, and sends a response back. So I get the answer 6 times...
          
          What I would like (and was expecting) is that only one MDB instance gets and processes the message and returns the answer...
          
          I don't need a durable subscription; that's why I don't want to configure a store and make the subscription durable. So I was wondering if there's a work-around for this...
          
          It seems this situation is very similar to the one reported by Jeff, right? But you never mentioned configuring the store...
          
          Thanks,
          
          Fco 
First, a key question: If only one copy of each message needs to be processed, why not use a distributed queue (point-to-point) instead of a distributed topic (pub/sub)?
          
I'm pretty sure that your answer is going to be "some clients need copies, but the MDBs don't", but sometimes such a question is still worth asking. Assuming I'm on the mark, here are a few possibilities:
          
(1) Configure a distributed queue and a distributed topic. Have senders send each message to both destinations. Have the MDBs only receive from the distributed queue. (See the sketch after this post.)
          
          (2) Restrict things so that only one MDB can have an active subscription - all 5 other MDBs will be locked out. (This is the solution outlined elsewhere in this thread.) There's no need for a durable subscription if you instead choose to modify the MDB descriptor to reference a custom WebLogic JMS connection factory and also ensure that the custom factory is configured with a "connection-id" (also mentioned elsewhere in this thread).
          
          (3) Only target the MDB to a single server. If you need to load balance the work, have the MDB forward messages to a distributed queue in the local cluster, and then have another MDB consume from the local distributed queue.
          
(4) Some use cases can use this approach: Do not use a distributed topic, but instead substitute multiple topics, each with a different "local-jndi-name". Set up each of the 6 remote MDBs to reference a different name. I think more aspects of this type of solution are outlined in the appendix of the "WebLogic JMS Performance Guide" white-paper on dev2dev (emulating distributed destinations).
          
          Tom
          
          --
          Edited by barnes at 06/10/2008 4:43 PM 
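          
          A minimal sketch of option (1), written against the JMS 1.1 unified API (with the JMS 1.0.2 API in WLS 8.1 you would need a QueueSession plus a TopicSession coordinated by a JTA transaction); the JNDI names are assumptions:
          
          import javax.jms.Connection;
          import javax.jms.ConnectionFactory;
          import javax.jms.Queue;
          import javax.jms.Session;
          import javax.jms.TextMessage;
          import javax.jms.Topic;
          import javax.naming.InitialContext;
          
          public class DualSender {
              public static void send(String text) throws Exception {
                  InitialContext ctx = new InitialContext(); // assumes suitable jndi.properties
                  ConnectionFactory cf = (ConnectionFactory) ctx.lookup("MyConnectionFactory");
                  Queue queue = (Queue) ctx.lookup("MyDistributedQueue");
                  Topic topic = (Topic) ctx.lookup("MyDistributedTopic");
          
                  Connection con = cf.createConnection();
                  try {
                      // Transacted session: both sends commit or roll back together.
                      Session session = con.createSession(true, Session.SESSION_TRANSACTED);
                      TextMessage msg = session.createTextMessage(text);
                      session.createProducer(queue).send(msg);  // for the MDBs (processed once)
                      session.createProducer(topic).send(msg);  // for the pub/sub clients
                      session.commit();
                  } finally {
                      con.close();
                  }
              }
          }
          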
Sorry, I didn't make myself clear...
          
I need topics because I have more than one MDB consuming the messages (7 different types, indeed), so for every single message sent to the topic, each MDB type processes it and sends back a reply.
          
          I need 7 different answers, but only one of each; that is, only one MDB instance of every type should get and process the message and return the answer...
          
          All MDB types' descriptors reference the WebLogic default connection factory weblogic.jms.ConnectionFactory...
          
          I think that option 4 could be helpful in this case. I'll check it out...
          
          Thanks,
          
          Fco

1 JMS Server [3 Queues] OR 3 JMS Servers [1 Q each]

          Hi
I am still debating with myself which configuration is better purely from a performance
          perspective: whether to have multiple queues hosted by the same JMS server or to give
          each queue its own JMS server. Just wondering if someone can throw some light on this,
          as I don't see this addressed in the JMS performance guide - is it because
          it doesn't make any difference?
          
I am trying to figure out whether, in the first setup, since all queues will use the
          same store, they will end up using a WHERE clause to get data from the queue, whereas
          in the 2nd case it will be like a SELECT * without a WHERE clause. In that case
          the 2nd config would give better performance. I am using a file store.
          
It's likely that in my configuration one of the queues is going to be pounded big
          time whereas the other 2 queues' message volume will be significantly low.
          
          Any thoughts/pointers are appreciated.
          thanks
          Anamitra
          
Internally, the JDBC store does simple
          inserts/deletes/single-record-selects -
          it has no notion of destinations.
          
          I'm fairly sure that the JMS Performance Guide does cover this topic,
          but I don't have time to comb through it. Anyhow, with DB stores
I think it is likely you will get better performance by using
          multiple stores - provided the introduction of multiple
          stores doesn't start forcing too many transactions to become
          two-phase that were one-phase before. Each store
          counts as an XA resource, so two stores in the same
          transaction forces a 2PC transaction.
          (With file stores one might get worse performance.)
          
          
          Tom
          
          Anamitra wrote:
          
          > Hi
> I am still debating with myself which configuration is better purely from a performance
          > perspective: whether to have multiple queues hosted by the same JMS server or to give
          > each queue its own JMS server. Just wondering if someone can throw some light on this,
          > as I don't see this addressed in the JMS performance guide - is it because
          > it doesn't make any difference?
          >
> I am trying to figure out whether, in the first setup, since all queues will use the
          > same store, they will end up using a WHERE clause to get data from the queue, whereas
          > in the 2nd case it will be like a SELECT * without a WHERE clause. In that case
          > the 2nd config would give better performance. I am using a file store.
          
          SQL has nothing to do with file stores, so I don't understand
          the question. Anyhow, I happen to
          know that no JDBC store SQL references queues - messages
          are handled individually regardless of queue
          or topic.
          
          >
> It's likely that in my configuration one of the queues is going to be pounded big
          > time whereas the other 2 queues' message volume will be significantly low.
          
I think there is little point in separating into separate stores unless
          all stores would be very active.
          
          Anyhow, this is all simple enough to test on your own as it
          only involves configuration changes. I highly
          recommend trying it out.
          
          >
> Any thoughts/pointers are appreciated.
          > thanks
          > Anamitra
          
          

Bean Pool Issue with WLS 7.1 SP2

We have an MDB pool with the following settings
          
          <pool>
          <max-beans-in-free-pool>2</max-beans-in-free-pool>
          <initial-beans-in-free-pool>2</initial-beans-in-free-pool>
          </pool>
          
I have noticed that whenever there are more than 10 messages on the queue, both beans work in parallel, but whenever the message count is less than 10, only 1 bean is working at any given time. 10 is the number I have observed; it may be higher or lower.
          
          Each txn takes about a minute to complete per bean, so I really need these beans running side by side.
          Is there a WLS setting that I can change to make both of the beans run at the same time even if I have 2 messages?
          
          Thanks,
          
          Ashar 
Ashar,
          
          This is expected behavior. Messages are pushed to asynchronous consumers in batches, where the maximum backlog is configurable on the consumer's connection factory via the "MessagesMaximum" setting. If you pushed more than 10 messages at a time, you would start to see the load balance out.
          
          So, to reduce the backlog, create a custom connection factory with
          - "MessagesMaximum" set to its minimum
          - user and XA transactions enabled (to support transactions)
- acknowledge policy set to "previous" (needed if the connection factory is ever used for non-transactional topic-driven MDBs)
          And then modify the MDB's WL descriptor file to refer to the JNDI name of this connection factory.
          
You can find a nice summary of MDB descriptor attributes in the 8.1 docs (these docs still apply to 7.0).
          
          I'm fairly sure that in 7.0 even with MessagesMaximum set to its minimum, there still can be a backlog of one message. If this is a problem, you might try contacting customer support - they may have a solution for this...
          
          Tom 
Tom, Thanks for the update. Really appreciate it.
          
          Ashar
