How to load test JMS Servers in WebLogic - weblogic.developer.interest.jms (Archived)

hi
          
          Can anybody let me know what tool can be used to test the working of JMS servers in a cluster in WLS 8.1? I need to check the performance of the same. I have the script to measure the messages received, but am not able to think of the tool used to load test the JMS servers.
          
          thanks in advance
          
          Preet 

Hi,
          
          We have a benchmark kit that we sometimes hand to customers through our field organization (typically sales), and I think it's actually pretty cool. There's also a generic load generator called "The Grinder" (google to find). That said, since applications and hardware tend to have markedly unique performance characteristics, we usually recommend writing your own JMS traffic generator that closely matches your particular application rather than using a generic 3rd-party tool. Otherwise there is a high risk of getting far different results.
          
          Tom
          
          PS. Just an FYI: WebLogic 9.x and later tend to be far faster than 8.1. 

Thanks for your help. I am using LoadRunner for the same. Hope it works out.

Tom Barnes wrote:
          > <snip> ...we usually recommend writing your own JMS traffic generator that closely matches your particular application rather than using a generic 3rd-party tool. Otherwise there is a high risk of getting far different results.
          
          We did just that when we load tested our system - LoadRunner was used
          for the web side of the application, but to test out JMS, we developed
          some simple multi-threaded clients which just queued messages on JMS
          (a trimmed-down sketch of such a client is below). The queues in the
          first iteration had no consumers... then we ran the tests again with
          consumers added, so you're writing messages and consuming at the
          same time.
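          
          Roughly along these lines - a minimal sketch, not our actual code; the
          URL, the "LoadQueueCF"/"LoadQueue" JNDI names, thread count, and message
          size are all placeholders to tune for your own tests:
          
          import java.util.Hashtable;
          import javax.jms.*;
          import javax.naming.Context;
          import javax.naming.InitialContext;
          
          // Minimal multi-threaded JMS producer load client (sketch).
          public class QueueLoadClient {
              public static void main(String[] args) throws Exception {
                  Hashtable env = new Hashtable();
                  env.put(Context.INITIAL_CONTEXT_FACTORY,
                          "weblogic.jndi.WLInitialContextFactory");
                  env.put(Context.PROVIDER_URL, "t3://localhost:7001");
                  InitialContext ctx = new InitialContext(env);
                  final QueueConnectionFactory qcf =
                      (QueueConnectionFactory) ctx.lookup("LoadQueueCF");
                  final Queue queue = (Queue) ctx.lookup("LoadQueue");
          
                  final int msgsPerThread = 1000;
                  final String payload = makePayload(1024); // crank up for paging tests
          
                  for (int i = 0; i < 10; i++) {            // 10 concurrent producers
                      new Thread(new Runnable() {
                          public void run() {
                              try {
                                  QueueConnection con = qcf.createQueueConnection();
                                  QueueSession session =
                                      con.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                                  QueueSender sender = session.createSender(queue);
                                  for (int m = 0; m < msgsPerThread; m++) {
                                      sender.send(session.createTextMessage(payload));
                                  }
                                  con.close();
                              } catch (JMSException e) {
                                  e.printStackTrace();
                              }
                          }
                      }).start();
                  }
              }
          
              private static String makePayload(int bytes) {
                  StringBuffer sb = new StringBuffer(bytes);
                  for (int i = 0; i < bytes; i++) sb.append('x');
                  return sb.toString();
              }
          }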
          
          The messages were sized so that we used realistic volumes, but we also
          used huge messages to test out the memory usage and paging settings.
          
          We also ran comparisons between JDBC persistence and a file store - the
          file store won that one.
          
          One thing we did find was that one of our distributed queues could get
          loaded so heavily that we ended up running 2 JMS servers on one JVM and
          spreading the destination across both. I can't remember exactly what
          happened, but I think there was contention in the JMS file store or
          something like that.
          
          Developing your own clients allows you to test JMS in isolation and you
          can pretty much do what you want then.
          
          WLS 8.1 JMS is pretty quick, and if 9.x+, as Tom says, is faster, then it
          must really fly!
          
          Pete 

thanks

Related

JMS pass-by-reference

Hello,
          
          I have a question about JMS message passing. I am sure the WebLogic implementation of JMS passes messages by value. I am facing some performance issues and I don't know if I can somehow set it to pass-by-reference. Can it be done? If so, how? And if not, why?
          
          Thanking in Advance
          Nazneen 
Hi Nazneen,
          
          Messages must be immutable according to the JMS specification. The intent is to prevent receivers or senders from modifying a message that has already been put into the system. Note that text message contents are immutable Strings already, which effectively makes them pass-by-reference.
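          
          For example (this is generic JMS behavior, nothing WebLogic-specific):
          a message handed to a receiver arrives in read-only mode, so an attempt
          to modify it fails rather than mutating a message already in the system:
          
          import javax.jms.*;
          
          // Received messages are read-only per the JMS spec: modifying one
          // throws rather than mutating a message already in the system.
          public class ImmutabilityDemo implements MessageListener {
              public void onMessage(Message msg) {
                  try {
                      TextMessage text = (TextMessage) msg;
                      System.out.println(text.getText()); // the String is immutable anyway
                      text.setText("changed");            // throws on a received message
                  } catch (MessageNotWriteableException expected) {
                      // the provider protects messages already handed to the system
                  } catch (JMSException e) {
                      e.printStackTrace();
                  }
              }
          }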
          
          I think this information may already be in the "JMS Performance Guide" white-paper, available here:
          http://dev2dev.bea.com/technologies/jms/index.jsp
          
          Exactly what performance issues are you facing? If you give me an idea of your setup and known bottlenecks, I may be able to help.
          
          Tom, BEA 
Hi Tom,
          
          Well, the set-up I have is a cluster on WebLogic 8.1. The application is JMS-heavy; the messages are XML files. I have looked into the DB and directory figures and they do not seem to be a bottleneck. The CPU utilization hasn't been too high, but occasionally when simulating a load scenario it does touch the range of 30-35%.
          
          The execute threads for each server have been configured to 25 (as per the BEA recommendation for a production env) and the cluster has 3 servers. The machine on which the cluster is deployed has 4 CPUs. But I have just not been able to get a good throughput.
          And from what I have been looking through so far, JMS seems to be the bottleneck. Of course it could be because of the architecture of the application and something there which is causing the problem, but since I do not have any great knowledge about JMS, I am just not able to get my way round it :(
          
          Please Help.
          
          Thanks & Regards,
          Nazneen 
Since your CPU utilization is low, I suspect pass-by-ref wouldn't help. Some easy things to check first to track down bottlenecks:
          --disk utilization
          --database utilization
          --network utilization
          
          Another thing to do is to check the console stats to see if performance is limited by things like:
          --thread pool sizes too small (all threads active) - consider configuring custom thread pools for apps and modifying the apps to run in these thread pools
          --JDBC connection pools too small (all connections active)
          --app pool size too small (max-beans-in-free-pool for EJBs; note that MDBs are also limited by the number of threads)
          
          Finally, if JMS is the bottleneck, it is often persistence that is the reason. To determine the overhead of JMS persistence, run benchmarks where each JMS server is configured without a store and compare the results to runs with a store.
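          
          A rough client-side approximation, without reconfiguring stores, is to
          time the same send loop with non-persistent and then persistent delivery.
          A sketch - "TestCF"/"TestQueue" and the URL are placeholders:
          
          import java.util.Hashtable;
          import javax.jms.*;
          import javax.naming.Context;
          import javax.naming.InitialContext;
          
          // Times the same send loop under both delivery modes to expose the
          // cost of persistence.
          public class PersistenceCompare {
              public static void main(String[] args) throws Exception {
                  Hashtable env = new Hashtable();
                  env.put(Context.INITIAL_CONTEXT_FACTORY,
                          "weblogic.jndi.WLInitialContextFactory");
                  env.put(Context.PROVIDER_URL, "t3://localhost:7001");
                  InitialContext ctx = new InitialContext(env);
                  QueueConnectionFactory qcf =
                      (QueueConnectionFactory) ctx.lookup("TestCF");
                  Queue queue = (Queue) ctx.lookup("TestQueue");
          
                  QueueConnection con = qcf.createQueueConnection();
                  QueueSession session =
                      con.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                  QueueSender sender = session.createSender(queue);
          
                  runLoop(session, sender, DeliveryMode.NON_PERSISTENT, 5000);
                  runLoop(session, sender, DeliveryMode.PERSISTENT, 5000);
                  con.close();
              }
          
              private static void runLoop(QueueSession session, QueueSender sender,
                                          int mode, int count) throws JMSException {
                  sender.setDeliveryMode(mode);
                  long start = System.currentTimeMillis();
                  for (int i = 0; i < count; i++) {
                      sender.send(session.createTextMessage("ping"));
                  }
                  long ms = System.currentTimeMillis() - start;
                  System.out.println("mode=" + mode + " msgs=" + count + " ms=" + ms);
              }
          }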
          
          Tom

Is JMS suitable for my application?

Hello,
          
          Our application is required to send a message to hundreds of thousands (if not millions) of remote listeners. Could JMS publish/subscribe be the best solution for this?
          
          For example, could I use JMS for (say) a news feeder application that sends news update messages to all subscribing (remote) clients, assuming that there could be a very large number of them?
          
          If JMS is not scalable to such an extent, then kindly recommend the best solution for WebLogic that could satisfy this need.
          
          As a bonus, a solution that works well with a Web Services architecture would be preferred (perhaps WS-Notification?). Kindly recommend the best configuration for reliably publishing to a large number of clients.
          
          
          Thanks in advance! 
I think there's some chance JMS may fit your needs, but the answer is highly dependent on the number of WebLogic servers, the power and number of machines, the message volume (size and msgs/sec), performance tuning, app design, message QOS, etc.
          
          As with any benchmarking, and especially in your case, I recommend you benchmark prototypes that match the proposed scenario.
          
          Some notes:
          
          - WebLogic WS-RM uses the same core engine as WebLogic JMS.
          
          - Applications of this type sometimes use a combination of servlet polling to handle clients and server-side JMS, and may leverage the WebLogic JMS "indexed subscriber feature" (a rough sketch of the polling half follows these notes).
          
          - The newly public 9.x "future response" and "asynchronous servlet" extensions may aid scalability (web service internals sometimes implicitly use this feature):
          http://e-docs.bea.com/wls/docs91/webapp/progservlet.html
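          
          A minimal sketch of the servlet-polling half, using only standard servlet
          and JMS APIs - the async extensions and indexed-subscriber APIs are not
          shown, "FeedCF"/"FeedQueue" are placeholder JNDI names, and per-subscriber
          fan-out is glossed over:
          
          import java.io.IOException;
          import javax.jms.*;
          import javax.naming.InitialContext;
          import javax.servlet.ServletException;
          import javax.servlet.http.*;
          
          // Clients poll this servlet periodically; each poll drains at most
          // one pending message from a server-side queue.
          public class FeedPollServlet extends HttpServlet {
              private QueueConnection con;
              private Queue queue;
          
              public void init() throws ServletException {
                  try {
                      InitialContext ctx = new InitialContext(); // server-side: local JNDI
                      QueueConnectionFactory qcf =
                          (QueueConnectionFactory) ctx.lookup("FeedCF");
                      queue = (Queue) ctx.lookup("FeedQueue");
                      con = qcf.createQueueConnection();
                      con.start();
                  } catch (Exception e) {
                      throw new ServletException(e);
                  }
              }
          
              protected void doGet(HttpServletRequest req, HttpServletResponse res)
                      throws IOException {
                  try {
                      // One session/receiver per request keeps the sketch simple;
                      // a real implementation would pool these.
                      QueueSession session =
                          con.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                      QueueReceiver receiver = session.createReceiver(queue);
                      Message msg = receiver.receiveNoWait(); // never block a servlet thread
                      res.getWriter().println(msg instanceof TextMessage
                          ? ((TextMessage) msg).getText() : "NO_NEWS");
                      session.close();
                  } catch (JMSException e) {
                      throw new IOException(e.toString());
                  }
              }
          }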
          
          Tom 
Thanks for the quick response. Can you please clarify on the following:
          
          <i><quote:barnes>
          - Applications of this type sometimes use a combination of servlet polling to handle clients and server-side JMS, and may leverage the WebLogic JMS "indexed subscriber feature".
          </quote></i>
          
          I am not sure if I could use polling or request/response-based messaging where a client passes in a servlet request and the server side produces a response. My requirement is more of a publish/subscribe (as in voluntary, pre-emptive broadcasts to the listeners) by the service. However, if you think the servlet-based approach could work, can you please refer me to a sample that could satisfy our requirement?
          
          We are trying to prototype with our 'feed' service on a single instance of WLS for now that processes a few thousand subscribers, and use that for benchmarking. I understand that this approach comes with a price (i.e., clustering for higher performance etc.).
          
          FYI: Is there someone at BEA who could work with us more closely on some architectural aspects of the application that we wish to develop? We'd at least like to get an hour or so of introduction to the best of the server that we could tap into for our goals. Many things are unclear just from documentation, especially the evolution of the services etc. Kindly advise.
          
          Thanks again. 
The "async servlet + server-side-only JMS" approach doesn't incur the overhead of a polling solution from the server's viewpoint - as it can be designed such that no threads are consumed on the server while clients wait for messages. But I'm jumping ahead by throwing these ideas around; I think it may be more unhelpful than unhelpful without a wealth of information about your application.
          
          Your BEA sales rep can likely help find you resources for a more in-depth consultation. It's good you are exploring a variety of solutions - especially given that scalability is a prime consideration.
          
          Tom 
Thanks. Can you please let me know if there is a sample based on the "async servlet + server-side JMS" architecture that we could use for rapid prototyping?
          
          Does WLS have serverless JMS support? I've read articles suggesting this could scale well irrespective of the number of subscribers.
          
          I could write up and email you a short description of the application so that you could recommend a suitable architecture. Let me know. Thanks. 
>>> Can you please let me know if there is a sample based on the "async servlet + server-side JMS" architecture that we could use for rapid prototyping?
          
          I don't know of a sample.
          
          >>> Does WLS have a serverless JMS support?
          
          If you mean no JMS servers, no. OTOH, WL JMS does support running applications on the same server as the JMS server. 
Thanks for all the information.

JMS real time and acknowledgements

Hi all,
          
          We are trying to set up our JMS to be as near to a real-time system as possible. I've been reading about NO_ACKNOWLEDGE mode and it seems like a good fit for our case.
          
          Basically we are publishing messages to a topic and we are interested in doing it as fast as possible. Duplicates and lost messages are not a problem, as the messages should not be persistent and the system is able to handle duplicates.
          
          However, after doing some tests with NO_ACKNOWLEDGE, I didn't manage to see any noticeable change with respect to AUTO_ACKNOWLEDGE mode. So, some questions come to mind.
          
          Is it possible to use NO_ACKNOWLEDGE with distributed topics?
          How is it done? Currently I'm just passing WLSession.NO_ACKNOWLEDGE as a parameter on session creation (see the snippet below). Do I have to change anything in the WebLogic server configuration?
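          
          Here's roughly what the session setup looks like - a trimmed sketch;
          the JNDI lookup of the factory is omitted:
          
          import javax.jms.JMSException;
          import javax.jms.TopicConnection;
          import javax.jms.TopicConnectionFactory;
          import javax.jms.TopicSession;
          import weblogic.jms.extensions.WLSession;
          
          public class NoAckSessions {
              // NO_ACKNOWLEDGE is a WebLogic extension constant; the session
              // it produces is still a plain TopicSession.
              static TopicSession createNoAckSession(TopicConnectionFactory tcf)
                      throws JMSException {
                  TopicConnection con = tcf.createTopicConnection();
                  con.start();
                  // no acknowledgements flow back to the server for messages
                  // consumed through sessions created this way
                  return con.createTopicSession(false, WLSession.NO_ACKNOWLEDGE);
              }
          }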
          
          In the end, what we want to do is just get rid of any acks, as the system will probably run on high-latency networks and having to ack will slow the whole process. Is there another way to get rid of acks to suit real-time system scenarios?
          
          Thanks for your time,
          Martin 
Some things of note:
          
          - There is performance advice in the Performance and Tuning edocs and the "JMS Performance Guide" white-paper available on dev2dev. The white-paper is meant for 8.1, so it's especially out of date with respect to thread tuning and persistence tuning.
          
          - The "AUTO_ACK" case for non-durable subscribers has been optimized so that it normally yields equivalent performance to "NO_ACK". In essence, there's no ack back to the server in either case.
          
          - And yes, you can use "NO_ACK" with distributed topics.
          
          - Distributed topics internally use XA transactions to forward messages between distributed topic instances. Since XA transactions can be heavy-weight, they may impact performance in your use case.
          
          - WebLogic 9.1 and later include a new optional feature on sends called "one-way sends", which can greatly improve performance depending on the use case (sometimes as much as 5X or 10X!). The option removes an internal ack that occurs during calls to non-transactional "send()" for non-persistent messages, provided the connection factory does not have the "XA transactions enabled" option set. See the Performance and Tuning edoc for details.
          
          Tom 
Tom,
          
          Thanks for the really useful piece of advice.
          
          Actually I was just playing right now with one-way sends. I enabled that in the connection factory and checked that the connection factory was non-XA. I set a one-way window of 15.
          
          However, I cannot see any real improvement on the producer side.
          
          I have just calculated the time it takes to execute the publish() call, and apparently it takes more or less the same amount of time with and without one-way sends enabled for that connection factory. I also used a latency simulator tool to raise the latency of the client and see if there is any improvement with high latency, but I can't really see any noticeable change.
Actually, from what I see in the docs, it seems that if you are using a distributed topic, one-way sends will be disabled.
          
          Is this true? 
I'd forgotten that one-ways were disabled for distributed topics - let me check the edocs - "One-way message sends work with distributed destinations provided the client looks up the physical distributed destination members directly rather than using the logical distributed destination's name." There's also a section that helps work around the problem (it also works around connection host routing, which likewise disables one-ways) - see "One-Way Send Support In a Cluster With Multiple Destinations".
          
          Just run the send in a loop with a small message - you should definitely see a large difference in send performance - otherwise one-way isn't enabled. If you're timing each individual call, you need to use fine-grained (nanosecond) timers - the System.currentTimeMillis() call has a granularity that's too coarse: 1 ms, or even 10 or 20 ms on some operating systems.
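          
          For example (a sketch; assumes 'session' and 'sender' already exist and
          came from a non-XA connection factory with one-way sends enabled):
          
          import javax.jms.JMSException;
          import javax.jms.MessageProducer;
          import javax.jms.Session;
          import javax.jms.TextMessage;
          
          public class SendTimer {
              // Uses JDK 5+ System.nanoTime() for fine-grained timing.
              static void timeSends(Session session, MessageProducer sender, int count)
                      throws JMSException {
                  TextMessage msg = session.createTextMessage("x");
                  long start = System.nanoTime();
                  for (int i = 0; i < count; i++) {
                      sender.send(msg);
                  }
                  long avgMicros = (System.nanoTime() - start) / 1000 / count;
                  System.out.println("sent " + count + " msgs, avg " + avgMicros + " us/send");
              }
          }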
          
          Tom 
Hi again Tom.
          
          Thanks again, you're being very helpful.
          
          I've changed the code to connect directly through the JMS server topic (with a different syntax than the one in the article though, something like jmsserver#topic). I made several tests, enabling and disabling one-way and using cluster and concrete server JNDI names.
          
          Using one-way and pointing to a concrete JMS server gives the best performance, with a 2x improvement. Say, from 1 millisecond to 500 microseconds.
          
          I'm not sure though about one point in the docs that says that RMI affinity should be enabled. I didn't enable this but apparently got some performance improvements. Not sure if I have to do it.
          
          Thanks,
          Martin 
Hi Martin,
          
          I assume your sender client is not running inside one of the cluster servers, in which case the following applies:
          
          If the connection factory is targeted to more than one server, the connection factory "createConnection" call may load balance the client's connection host to a different host than the target destination. When this happens, one-ways are automatically disabled, and all sends go through an extra hop. The sends are routed from the client to the connection host, and then on to the final destination. The JMS performance guide white-paper discusses this routing - I think in its clustering section.
          
          The RMI affinity option disables the connection factory connection load balance algorithm so that the connection host is always the same host as the JNDI context host. The JNDI context host is established when the client creates its JNDI context, and is determined by the URL. If you look up a "local JNDI" destination name in a JNDI context, the context always returns a destination that is local to the JNDI context host...
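          
          Concretely, something like this on the client (a sketch; the URL and
          JNDI names are placeholders - the point is that the context URL names
          one specific managed server rather than the cluster address):
          
          import java.util.Hashtable;
          import javax.jms.Topic;
          import javax.jms.TopicConnectionFactory;
          import javax.naming.Context;
          import javax.naming.InitialContext;
          
          public class PinnedLookup {
              public static void main(String[] args) throws Exception {
                  Hashtable env = new Hashtable();
                  env.put(Context.INITIAL_CONTEXT_FACTORY,
                          "weblogic.jndi.WLInitialContextFactory");
                  env.put(Context.PROVIDER_URL, "t3://server1:7001"); // one member
                  InitialContext ctx = new InitialContext(env);
          
                  // With an affinity-enabled factory, the connection host stays
                  // on server1, and this lookup resolves to server1's local member.
                  TopicConnectionFactory tcf =
                      (TopicConnectionFactory) ctx.lookup("MyAffinityCF");
                  Topic topic = (Topic) ctx.lookup("MyLocalTopic");
                  System.out.println("pinned lookup OK: " + topic);
              }
          }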
          
          Tom 
Tom,
          
          That explains why yesterday, when I was doing my tests on a 2-node cluster with a clustered JMS connection factory, I had very good results in one execution but very bad results in the next one. So if the expected time was 900 microseconds, I had either 300 microseconds or 1200 microseconds, which was really weird.
          
          Anyway, I changed the JMS connection factory settings today and now it works as you stated. Really fast.
          
          With this configuration we lose any failover capabilities; however, in this case it is not a problem to put some failover logic in the client.
          
          Thank you very much indeed,
          Martin 
Just one last question that perhaps somebody may know the answer to:
          
          We're using WebLogic 9.2 MP2 currently. Is there any feature available in WebLogic 10 that allows you to use this one-way messaging mode without having to implement failover and load-balancing yourself? If not, is that planned for future releases?
          
          Thanks,
          Martin 
Martin,
          
          These improvements would need to be added in a future release. Right now, we're focusing all of our efforts on a native .NET client for JMS and improved HA (automatic JMS migration). This whole thread is very good feedback, though. Thanks!
          
          -Dave

JMS Server

          Hi,
          
          Could somebody tell me the drawbacks of running a JMS server in WebLogic (performance,
          resource requirements, etc.) when compared to not running a JMS server?
          
          Thanks in advance,
          Ranjith Pillai.
          
Ranjith Pillai wrote:
          > Could somebody tell me the drawbacks of running a JMS server in
          > WebLogic (performance, resource requirements, etc.) when compared
          > to not running a JMS server?
          
          Short question, long answer. You owe me one. You can
          pay me back by buying more WL licenses, or mailing me a beer
          at 140 Allen Road, Liberty Corner, NJ 07938. ;-)
          
          The short answer for performance is "it depends".
          It depends on the number of concurrent clients, throughput per client,
          use of persistence, message size, and even message type and message
          contents. OK, that's not very helpful, so I'll give it a shot.
          Running the JMS server on the same server as the services that
          use it may yield only marginal or even undetectable performance
          benefits if message rates are low, or if persistence is used
          and concurrency is low. If overall message bytes per
          second is relatively "high" (where high > 200 Kbytes per second
          and more than a few hundred messages a second, at a wild guess),
          then the network and serialization overhead of communicating
          with a remote JMS server becomes dominant, and there are very significant
          gains to be made by running the JMS server on the same
          WL server as the EJBs and servlets that access its destinations.
          
          As for resource requirements, that also depends. Some customers
          like to put JMS on higher end hardware (fast RAID, replicated
          disks, HA framework, etc), which drives the decision to run
          JMS on its own server. Running JMS on the same server as
          the servlets and EJBs may increase the resources needed by
          a server in terms of memory and/or disk space, but there is
          generally a win in terms of performance and reduced network
          usage. Some customers compromise by running JMS on the
          local server and configuring JMS to use a remote
          database message store where the database runs on
          high-end hardware (note that database stores generally
          perform slower than file stores).
          
          As usual, given the number of variables, I highly recommend
          creating your own benchmark to test out the possibilities.
          
          Tom, BEA
          
          

JDBC Store Connection Pool

We have configured a JDBC store to be used by the JMS server as a persistent store. After some performance tests we noticed that only one connection from the JDBC connection pool assigned to the JDBC store is used, despite the fact that we have a number of concurrent message producers continuously producing large volumes of messages. We suspect that we have a serialization bottleneck here. Does anyone know if there is a way to configure the JMS server/JDBC store to use multiple connections from the pool simultaneously in order to improve performance?
          
          We did a workaround test by creating an additional JMS server with a new JDBC store assigned to the same connection pool as the first one. We also moved some of the queues from the first JMS server to the second one. When running the same test again, we noticed that two connections from the connection pool were used. We also noticed a performance improvement in our system of ~10% compared to the first case.
          
          We are running 8.1 SP3.
          
          Regards
          Jim Gustavsson 
Is there no one from BEA who can comment on this?
We have this same issue at my company right now. After dealing with
          tech support, it turns out that, in 7.0 at least, the DB persistence
          is single-threaded and there's no way to change it. We are currently
          trying to cluster and use distributed destinations to scale; you
          might want to try that too. Although, in all honesty, in our initial
          tests we've been having timeouts during the global tx commits...
          Hope this helps.
Up to SP3, the easiest way to scale the JDBC-backed store is to set up multiple JMS servers. SP4 should give much better performance with the JDBC store OOTB.
          
          -Kai 
Below is a snip from the WLS JMS documentation:
          
          All things being equal, file stores generally offer better throughput than a JDBC store.
          Note: If a database is running on high-end hardware with very fast disks, and WebLogic Server is running on slower hardware or with slower disks, then you may get better performance from the JDBC store.
> Below is a snip from the WLS JMS documentation:
          >
          > All things being equal, file stores generally offer
          > better throughput than a JDBC store.
          > Note: If a database is running on high-end hardware
          > with very fast disks, and WebLogic Server is running
          > on slower hardware or with slower disks, then you may
          > get better performance from the JDBC store.
          
          
          This documentation is out of date with respect to 8.1 SP4, as there are significant performance enhancements in the SP4 JDBC store. These enhancements introduce the possibility that the 8.1 SP4 JDBC store may scale better than the file store. Since there are many variables involved, a definitive answer for a particular application requires a custom benchmark that closely models the application's JMS load.
          
          Tom Barnes
