I frequently see confusion about concurrent VUs versus VUs per hour, or what should really be called sessions per hour or transactions per hour. When modeling web traffic on the open Internet, rate-based metrics are better suited to finding out what will really happen. If a site slows down, do your users know that *before* they arrive at your site, or after they get there? The concurrent user model assumes that a new user doesn’t arrive until the previous user leaves. If the last user in the queue isn’t leaving, that is, the user is stuck on the system trying to perform a task, then no new user arrives. This simply doesn’t happen. A user doesn’t know, and frankly doesn’t care, how many other users are on your site until that user gets to the site and discovers that it is about to die, or that the user would rather die than use the site.
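The difference between the two models can be sketched numerically. This is a toy illustration with made-up numbers, not a real load tool: the closed model’s session count is capped by the VU pool, while the open model’s concurrency is a consequence of arrival rate and how long sessions actually take.

```python
# Toy comparison of closed vs. open workload models (illustrative numbers only).

def closed_model_sessions(pool_size, session_seconds, duration_seconds):
    """Closed model: a fixed pool of VUs; a new session starts only
    when a previous one finishes, so throughput is capped by the pool."""
    sessions_per_vu = duration_seconds // session_seconds
    return pool_size * sessions_per_vu

def open_model_concurrency(arrivals_per_second, session_seconds):
    """Open model: users arrive at a fixed rate regardless of load;
    steady-state concurrency = arrival rate x session duration."""
    return arrivals_per_second * session_seconds

# 10 VUs with 20-second sessions over an hour: at most 1800 sessions,
# no matter how much real-world demand there is.
print(closed_model_sessions(10, 20, 3600))

# Open model at 1 session/second: concurrency grows when the site slows
# down and sessions stretch from 20 seconds to 2 minutes.
print(open_model_concurrency(1.0, 20))
print(open_model_concurrency(1.0, 120))
```

Note what the open model captures that the closed model cannot: when the site degrades, arrivals keep coming, so the number of users stuck on the system grows.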
Transaction rate and the number of virtual users concurrently on the system affect the application server differently. Transaction rate primarily consumes CPU to process the delivered pages, while the number of concurrent users primarily consumes memory. Both are important, but they are independent variables. If the site performs well and the scripts are modeled correctly, the transaction rate and the total number of concurrent users will match your web analytics. If the site degrades, CPU stays maxed out, but memory may not be maxed out immediately. As the number of concurrent users climbs, however, memory utilization climbs with it, along with database connections and other per-user resources.
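The independence of the two variables follows from Little’s Law (L = λW: concurrency equals arrival rate times time in the system). The same session rate can produce very different concurrency, and therefore very different memory pressure, depending on session length. The numbers below are assumptions for illustration:

```python
# Little's Law sizing sketch: the same session rate implies different
# concurrency (and memory footprint) depending on session duration.

def concurrent_users(sessions_per_hour, avg_session_minutes):
    """L = lambda * W: arrival rate (per minute) x minutes in the system."""
    return sessions_per_hour / 60 * avg_session_minutes

# 6000 sessions/hour with snappy 3-minute visits...
print(concurrent_users(6000, 3))
# ...versus the same rate when pages slow down and visits take 12 minutes.
print(concurrent_users(6000, 12))
```

Same CPU-side transaction rate in both cases; four times the concurrent sessions, and the memory and connection-pool load that goes with them, in the second.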
This means that driving the load with a rate-based metric, combined with the right scripts and use cases, produces the most realistic load and lets you see how your application actually behaves under stress.
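In practice, a rate-based driver starts sessions on a fixed schedule regardless of whether earlier sessions have finished. A minimal sketch of that idea, with a placeholder session function standing in for your scripted use cases:

```python
import random
import threading
import time

def run_session(session_id):
    # Placeholder for a scripted use case (page flow, think times, etc.).
    time.sleep(random.uniform(0.1, 0.3))

def drive_load(sessions_per_second, duration_seconds):
    """Open-model driver: launch a new session every 1/rate seconds,
    independent of how many earlier sessions are still running."""
    interval = 1.0 / sessions_per_second
    threads = []
    started = 0
    deadline = time.monotonic() + duration_seconds
    while time.monotonic() < deadline:
        t = threading.Thread(target=run_session, args=(started,))
        t.start()
        threads.append(t)
        started += 1
        time.sleep(interval)
    for t in threads:
        t.join()
    return started

# Roughly 10 sessions started: 5 per second for 2 seconds,
# no matter how slowly individual sessions complete.
print(drive_load(5, 2))
```

A commercial tool does the same thing at scale; the point is that the arrival schedule, not the completion of previous sessions, is what paces the load.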