After setting up a small CentOS server with Oracle Express Edition (XE), I wanted to stress test it to see how much load such a barebones installation can handle. In other words, how many users can you serve using this minimal, license-free setup?
The test application
I set up a test application in Apex with a single page that has both static and dynamic (PL/SQL) regions, as well as a query and some processes. I set the page authentication to "No authentication", as the various testing tools need to be able to access the page without logging in (otherwise we would have to create a more complex test script).

Checking the page in a web browser, we can see that it is delivered fairly quickly from the server, in less than 250ms, and that static resources are cached. (Not all automated load testers respect cache settings, though; more about that later.)
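As a rough sketch of what such a page might contain (the table, column and item names below are made up for illustration; this is not the actual test page), the dynamic PL/SQL region and the insert process could look something like this:

-- Hypothetical PL/SQL region source: renders some dynamic content from a query
begin
  htp.p('<p>Rendered at ' || to_char(sysdate, 'DD.MM.YYYY HH24:MI:SS') || '</p>');
  for r in (select product_name, price from demo_products where rownum <= 10) loop
    htp.p('<li>' || apex_escape.html(r.product_name) || ' - ' || r.price || '</li>');
  end loop;
end;

-- Hypothetical page process: a simple insert so the page is not purely read-only
begin
  insert into page_hit_log (app_id, page_id, hit_date)
  values (:APP_ID, :APP_PAGE_ID, sysdate);
end;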
First tests - BlazeMeter.com
For testing, I started with BlazeMeter.com, as they have a free plan that can stress test a site with up to 50 concurrent users. This is probably more concurrent users than most business applications built for small and medium companies will have/need, and so the free test works well for our purposes. (This obviously depends on your definition of small/medium company, but if you have so many users/employees, then why are you using Oracle XE? :-)
Setting up a stress test using BlazeMeter is easy, although the user interface is a bit cluttered with advanced options that can all be ignored for simple tests. Because we only want to load test a single page, we need to create a so-called "URL Test", give it a name, and specify the URL. Select an appropriate location to run the test from, the number of users (50 is the max for the free account) and the duration (the default is 20 minutes, which may be a bit long for simple tests; you can set this to 5 or 10 minutes for a quick test).
Running the first test
Having set up the test page in Apex and the test itself in BlazeMeter, I started the test and watched it... crash and burn! :-( This is what the test results looked like:

As you can see from the chart above, more than a third of the requests resulted in errors returned from the web server, and the average response time was almost 10 seconds! Not good! I checked the Tomcat logs (at /usr/share/tomcat7/latest/logs if you have followed the setup in this series of blog posts) and found lots of this message:
java.sql.SQLException: Exception occurred while getting connection: oracle.ucp.UniversalConnectionPoolException: All connections in the Universal Connection Pool are in use
It turns out that the default ORDS connection pool size is too small. I found this article, which gives some advice. I changed the connection pool settings in the ORDS configuration (/u01/ords/config/ords/conf/apex.xml if you have used the config location suggested in my previous blog posts) and restarted Tomcat.
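The change boils down to raising the JDBC connection pool limits for the apex pool. As a rough sketch of the relevant entries in apex.xml (a jdbc.MaxLimit of 60 matches the value referred to in the loader.io tests below; the jdbc.InitialLimit value here is just an example):

<!-- raise the ORDS connection pool limits (example values) -->
<entry key="jdbc.InitialLimit">10</entry>
<entry key="jdbc.MaxLimit">60</entry>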
Re-running the first test
Re-running the test gave the following result:
Much better! No errors, and the response time is pretty good for this minimal server. Remember that it only has 1 CPU and 1 GB of memory, and is running Oracle XE, Tomcat and Apache. And yet it quite happily serves 50 concurrent users (an average of 3.5 requests per second) without any problems.
Scaling up
One of the nice things about DigitalOcean (and other cloud server providers) is that it is easy to scale the server up (and down) as needed. I decided to re-run the same test using a server with 2 CPUs and 4 GB of memory (Oracle XE is still limited to 1 CPU and 1 GB of memory, but the extra capacity should free up resources for the web server and other OS processes).
The test results on this somewhat bigger server show a flatter response time curve, compared to the 1 GB server, which had a couple of random peaks. The average response time is less than 700ms, compared to 950ms for the smaller server. We are still limited to 50 users (an average of 3.5 requests per second), since this is the maximum that BlazeMeter's free plan allows.
More testing - LoadImpact.com
Next, I ran a test using LoadImpact.com. As the chart indicates, the response time remains more or less flat regardless of the number of concurrent users.
Never mind that the chart indicates that each page view took several seconds to complete. LoadImpact's FAQ page states that "simulated clients in a test will never cache anything (except for cookies). This means that in a test, every client that loads a page from your site will behave like a new visitor to the site and thus be quite 'heavy' on the server." So actual page load times (when client caching is enabled) will be significantly better than the tests indicate, since static files don't have to be downloaded every time.
In other words, the flat response time curve means that the application scales very well, and could probably support many more users. Quite impressive, really, for a setup that costs just USD 10 per month! :-)
Even more tests - loader.io
Finally, I tested with loader.io, which offers a free plan with up to 10,000 (!) clients in one-minute tests. Running a number of different tests (and also bumping up the jdbc.MaxLimit setting from 60 to 100), I found that the "breaking point" for the server with my test page was around 12-15 requests per second, which still gave sub-second response times. If pushed any further, the response times would quickly climb to several seconds.
I also ran some tests against a very barebones Apex page that shows nothing but a static HTML region.
When stress testing against this page, the server could handle up to around 25 requests per second while still maintaining sub-second response times. So obviously scalability depends on what you put on your pages.
Conclusions
Real-world performance will depend on a lot of factors. The test page I used was fairly typical for a business application in that it has a report, some dynamic PL/SQL content, and also does an insert into a table (so it's not just read-only). In a typical application, there will be pages that are more complex than this and pages that are simpler, so it should average out.

So let's assume that the server can handle 15 requests per second with acceptable response times. What does that translate to in real-world terms? Here are some quick calculations, assuming most Apex business apps will be used during an 8-hour office-hours period each day (see the sanity-check query after the list):
- 450 users doing 1,000 page views each in an 8-hour period per day (450*1000/8/60/60 ≈ 15 requests per second)
- 1,500 users doing 250 page views each in an 8-hour period per day (1500*250/8/60/60 ≈ 13 requests per second)
- 10,000 users doing 45 page views each in an 8-hour period per day (10000*45/8/60/60 ≈ 15 requests per second)
- 25,000 users doing 50 page views each in a 24-hour period per day (25000*50/24/60/60 ≈ 14 requests per second)
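Each line is the same formula: users × page views ÷ seconds in the period. As a quick sanity check of the arithmetic (the scenarios themselves are just made-up examples, not measurements):

-- verify the requests-per-second figures for the four scenarios above
select round(450 * 1000 / (8*60*60), 1)  as scenario_1_rps,
       round(1500 * 250 / (8*60*60), 1)  as scenario_2_rps,
       round(10000 * 45 / (8*60*60), 1)  as scenario_3_rps,
       round(25000 * 50 / (24*60*60), 1) as scenario_4_rps
from dual;
-- returns roughly 15.6, 13.0, 15.6 and 14.5 requests per second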
Now remember, the above is based on a single server with 1 CPU and 1 GB of RAM, running Oracle Express Edition (XE), Apache and Tomcat, and costing USD 10 per month! If this is not good value for money, I don't know what is...
6 comments:
I believe this topic is worth submitting as an abstract for APEX connect 2016! Would be nice to see you in Berlin on 7-9 June 2016.
This is the first article I have read on how to stress test an APEX application. Very informative. Thanks.
Very nice post Morten
Hello Morten,
Thank you very much for this valuable information. I have implemented it.
A short one: I really liked that APEX theme/interface (see link: http://2.bp.blogspot.com/-4vKIctYpSM0/VaghHajoJPI/AAAAAAAAAxM/RQI_2wLSs0g/s640/Screen%2BShot%2B2015-05-05%2Bat%2B20.04.28.png)
I'll be a bit rude and ask:
1. Is it possible to get an export of that app/page?
2. If not, what is the theme you used (or the most similar one) so I can begin with it? I really liked that neat design.
Thank you
Etay G
etay.gudai@gmail.com
Thank you very much, Morten, for this very valuable information.
Hi Morten! Just wanted you to know that your blog has helped me many times since 2014, when I got the pleasure of working with APEX for the first time (version 4.1 back then).
This is exactly the info I was looking for and could not find anywhere else!
Thanks!