Monday, March 28, 2011

Amazon S3 API for PL/SQL



Amazon S3 is part of Amazon's Web Services offering, and the name is an abbreviation for Simple Storage Service:
"Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, secure, fast, inexpensive infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers."
A few months ago, Jason Straub published an Oracle whitepaper on how to integrate Oracle Application Express (Apex) with Amazon S3.

As Jason points out, Amazon has a Free Usage Tier which allows you to get started using Amazon S3 for free. If you have ever bought a book from Amazon, they already have your credit card on file, so signing up for Amazon Web Services is quick and easy (and they won't start charging your credit card until the free trial period is over).

Introducing the S3 API for PL/SQL

Inspired by Jason's whitepaper, I decided to write a stand-alone PL/SQL API for Amazon S3. This API can be used in any PL/SQL solution, with or without Apex.

The API supports all common S3 operations, including:

  • Authentication
  • Creating new buckets
  • Listing existing buckets
  • Listing existing objects (with or without filtering)
  • Creating (uploading) new objects (and setting an Access Control List - ACL)
  • Generating download links (with or without expiry dates)
  • Downloading objects
  • Deleting objects


See the examples below for more details.

Use Cases

So what can you do with Amazon's S3 service in combination with a PL/SQL API?

I can think of several interesting use cases, some of which I might explore further in future posts:

  • Backing up your database (use DBMS_DATAPUMP to dump a file to disk, then compress it using ZIP_UTIL_PKG, then encrypt it using CRYPTO_UTIL_PKG, and upload it to S3)
  • Backing up your PL/SQL source code (use data dictionary views or DBMS_METADATA to extract the source code, optionally zip and/or encrypt it, and upload to S3)
  • Backing up your Apex applications (use WWV_FLOW_UTILITIES.EXPORT_APPLICATION_TO_CLOB to generate export file, optionally zip and/or encrypt it, and upload to S3)
  • Cloud storage for file uploads (instead of storing [large] files inside your database, store them in the cloud and download them on demand -- especially relevant for Oracle XE which has a file size limit)
  • Serve static content (generate static [text, CSV, HTML, PDF] files from the database and upload to S3)
  • Replication or shared storage (upload from one machine/database, download to another)
  • Data loading or message processing (set up to poll for new incoming files - uploaded by other S3 clients - and process them)

Remember that all these things can be scheduled to run in the database using DBMS_JOB or DBMS_SCHEDULER.
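For instance, a nightly backup-to-S3 job could be set up like this (a sketch: MY_BACKUP_PKG.BACKUP_TO_S3 is a hypothetical procedure that you would implement yourself using the ideas above):

```sql
begin
  dbms_scheduler.create_job (
    job_name        => 'NIGHTLY_S3_BACKUP',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'MY_BACKUP_PKG.BACKUP_TO_S3', -- hypothetical procedure
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',       -- run every night at 02:00
    enabled         => true);
end;
/
```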


Where to get the Amazon S3 API for PL/SQL

You can download the API as part of the Alexandria Utility Library for PL/SQL.


Getting started

Download and compile the relevant PL/SQL API packages. Then register with Amazon for an S3 account and get your AWS keys (key and secret key), and log in to the AWS Management Console to get familiar with the basic operations.



If you are unfamiliar with Amazon S3, I recommend that you read this short getting started guide that describes the common operations.

In the following examples we shall see how you can do the same operations using PL/SQL.


Authentication

From your Amazon account control panel, you'll get the key strings you need to use the Amazon web services.

Before you call any of the following API methods, you must initialize the authentication package. You only have to do this once per database session (but remember, on the web, every page view is a separate database session, so in Apex you'll need to run this code for every page, typically as a Before Header page process).
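In code, the initialization looks something like this (a sketch based on the Alexandria library naming; the package name AMAZON_AWS_AUTH_PKG and its exact parameters may differ in your version, and the key values are placeholders):

```sql
begin
  -- initialize once per database session with your AWS key and secret key
  amazon_aws_auth_pkg.init ('my-aws-key', 'my-aws-secret-key');
end;
/
```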



Creating new buckets

Buckets are what you use to organize your objects (files) in Amazon S3. Think of them as top-level folders, but note that you cannot create more than 100 buckets in a single account, and the bucket name must be unique across all user accounts on Amazon S3. So creating buckets is not really something you'd do very often, and usually it will be done manually (to resolve any name conflicts with existing buckets).

A bucket is associated with a specific region where your objects will be stored. For reasons of latency/speed and possibly legal issues, it makes sense to select a region that's close to you and your users (although you may actually want to locate it far away if the purpose is backup for a major disaster in your own area).

Here's how to create a new bucket via PL/SQL code:
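A minimal sketch (the bucket name and region are placeholders, and the parameter names are assumptions based on the Alexandria library):

```sql
begin
  -- bucket names must be globally unique across all S3 accounts
  amazon_aws_s3_pkg.new_bucket (
    p_bucket_name => 'my-demo-bucket',
    p_region      => 'EU'); -- store the objects in the European region
end;
/
```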


Checking the AWS management console to verify that the bucket has indeed been created (in the specified region):



Listing existing buckets

With one or more buckets created in your account, you can list the bucket names.

There are two ways to do this, either by retrieving an index-by PL/SQL table using the GET_BUCKET_LIST function:
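Something like this (a sketch; the collection type name T_BUCKET_LIST and its record fields are assumptions based on the Alexandria library):

```sql
declare
  l_buckets amazon_aws_s3_pkg.t_bucket_list;
begin
  l_buckets := amazon_aws_s3_pkg.get_bucket_list;
  for i in 1 .. l_buckets.count loop
    dbms_output.put_line (l_buckets(i).bucket_name);
  end loop;
end;
/
```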



or, alternatively, via SQL using a pipelined function named GET_BUCKET_TAB:
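This makes the bucket list available from plain SQL (assuming the function lives in the AMAZON_AWS_S3_PKG package):

```sql
select *
from   table (amazon_aws_s3_pkg.get_bucket_tab);
```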



Creating (uploading) new objects

An "object" is a file, and this is really what the S3 service is all about, storing files. So let's upload a file or two to our new bucket!

The API lets you upload any BLOB data to S3 using the NEW_OBJECT procedure.
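For example (a sketch: the BLOB is built inline from a string purely for illustration, and the NEW_OBJECT parameter order -- bucket, key, blob, content type -- is an assumption based on the Alexandria library):

```sql
declare
  l_blob blob := to_blob (utl_raw.cast_to_raw ('Hello from the database!'));
begin
  amazon_aws_s3_pkg.new_object (
    'my-demo-bucket',
    'uploads/hello.txt', -- the slash simulates a folder
    l_blob,
    'text/plain');
end;
/
```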



When you upload a file to S3, the default Access Control List (ACL) makes sure that only the owner of the file (you!) can access (download) it.

Others get an "Access Denied" message (but see the "Generating download links" section for how to generate special time-limited download links):



There are a number of predefined ACLs that you can specify if, for example, you want to make the file publicly available.
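For example, uploading with the "public-read" canned ACL (the constant name G_ACL_PUBLIC_READ is an assumption based on the Alexandria library; the ACL string itself is defined by Amazon):

```sql
declare
  l_blob blob := to_blob (utl_raw.cast_to_raw ('This file is public.'));
begin
  amazon_aws_s3_pkg.new_object (
    'my-demo-bucket',
    'public/readme.txt',
    l_blob,
    'text/plain',
    amazon_aws_s3_pkg.g_acl_public_read); -- canned ACL 'public-read'
end;
/
```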



The file can then be freely downloaded by anyone (the use of HTTPS is optional).


A note about "folders": S3 has no concept of "folders", but you can simulate folders by using a forward slash in your file names (as seen in the previous example). Some S3 clients, such as the AWS management console, will present such files in a folder structure. As far as the PL/SQL API is concerned, the slash is simply part of the file name and has no special meaning.

Listing existing objects

Now that we have uploaded a couple of files, we can list the contents of the bucket via the GET_OBJECT_LIST function:
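For example (a sketch; the type name T_OBJECT_LIST and the record field names are assumptions based on the Alexandria library):

```sql
declare
  l_objects amazon_aws_s3_pkg.t_object_list;
begin
  l_objects := amazon_aws_s3_pkg.get_object_list ('my-demo-bucket');
  for i in 1 .. l_objects.count loop
    dbms_output.put_line (l_objects(i).key || ' (' || l_objects(i).size_bytes || ' bytes)');
  end loop;
end;
/
```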



You can also get a list in SQL via a pipelined function named GET_OBJECT_TAB:
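For example (assuming the function lives in the AMAZON_AWS_S3_PKG package):

```sql
select *
from   table (amazon_aws_s3_pkg.get_object_tab ('my-demo-bucket'));
```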



In both cases, you can optionally specify a prefix that acts as a search filter for the file names you want to return, and/or the maximum number of items you want to return.
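For example, returning at most ten file names that begin with "uploads/" (the parameter names p_prefix and p_max_keys are assumptions; check the package specification for the actual names):

```sql
select *
from   table (amazon_aws_s3_pkg.get_object_tab (
         p_bucket_name => 'my-demo-bucket',
         p_prefix      => 'uploads/', -- only keys starting with this string
         p_max_keys    => 10));       -- return at most ten items
```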



Generating download links

You can access a file that has been protected by an ACL by including a special signature parameter in the URL.

The GET_DOWNLOAD_URL function lets you generate the URL needed to access the file. You can specify when the link should expire, so this means you can share a download link that will stop working after a specified amount of time, which can obviously be useful in a number of scenarios.
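For example, generating a link that stops working after 24 hours (a sketch; the third parameter is assumed to be the expiry date):

```sql
declare
  l_url varchar2(4000);
begin
  l_url := amazon_aws_s3_pkg.get_download_url (
    'my-demo-bucket',
    'uploads/hello.txt',
    sysdate + 1); -- link expires 24 hours from now
  dbms_output.put_line (l_url);
end;
/
```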



Pasting the generated URL into the browser allows us to access the file:



Downloading objects

Downloading a file from S3 using PL/SQL is straightforward with a call to the GET_OBJECT function which returns a BLOB:
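A sketch (here the size of the downloaded file is simply printed; in practice you would store or process the BLOB):

```sql
declare
  l_blob blob;
begin
  l_blob := amazon_aws_s3_pkg.get_object ('my-demo-bucket', 'uploads/hello.txt');
  dbms_output.put_line ('Downloaded ' || dbms_lob.getlength (l_blob) || ' bytes');
end;
/
```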



Deleting objects

Removing a file is likewise very simple, just call the DELETE_OBJECT procedure:
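A sketch (bucket and file name are the placeholders used in the earlier examples):

```sql
begin
  amazon_aws_s3_pkg.delete_object ('my-demo-bucket', 'uploads/hello.txt');
end;
/
```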




Summary

The ability to upload and download any file from the Oracle database to "the cloud" (and vice versa) via PL/SQL is extremely useful for a number of purposes.

Let me know if you find this API useful!

Monday, March 14, 2011

How I hacked the Apex 4 Websheets

Or, to be more specific (and less sensationalistic), how I struggled to, and finally succeeded in, using the new Apex 4 Websheets feature in a "Runtime-Only" environment.

What is an Apex runtime environment?

"Oracle recommends that you run any sensitive production Oracle Application Express applications with a runtime installation of Oracle Application Express. A runtime installation does not expose the Web-based application development environment, thus preventing the use of Application Builder, SQL Workshop, and related utilities on a production installation. Additionally, a runtime environment only includes the Oracle Application Express database objects and privileges necessary to run applications, making it a more hardened environment."

So, naturally, I set up Apex production environments using the runtime-only installation. But I hit a problem when I tried to deploy a Websheet application to such a production environment.

The setup

Here is what I did:

  1. Installed Oracle XE
  2. Installed Apex 4 runtime (apxrtins.sql)
  3. Manually created an application schema (SMALL_APPS) in the database
  4. Manually created the Websheet tables (APEX$ tables) in the SMALL_APPS schema (to do this I had to manually export the table scripts from the Apex development instance using TOAD, but I guess it could also have been done through the SQL Workshop in Apex itself)
  5. Manually created an Apex workspace for the SMALL_APPS schema (via the APEX_INSTANCE_ADMIN package)
  6. Exported the Workspace definition from the development instance. The generated workspace script contains the statements to create the workspace users. I removed the part of the script that creates the workspace itself (as I had already done this in the preceding step), and changed the workspace ID to the newly created workspace ID before I ran the script.
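For reference, step 5 above can be sketched like this (the exact ADD_WORKSPACE parameters vary between Apex versions, so check the documentation for your release; the workspace and schema names are from this post):

```sql
begin
  apex_instance_admin.add_workspace (
    p_workspace_id   => null, -- let Apex assign an ID
    p_workspace      => 'SMALL_APPS',
    p_primary_schema => 'SMALL_APPS');
  commit;
end;
/
```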


So far, so good.

The problem

I then proceeded to import the actual Websheet application.

This, however, threw up the following error:

WEBSHEET APPLICATION 112 - WebSheetSandbox
Set Credentials...
Check Compatibility...
API Last Extended:20100513
Your Current Version:20100513
This import is compatible with version: 20100513
COMPATIBLE (You should be able to run this import without issues.)


Set Application ID...
...Remove Websheet Application
...Create Websheet Application
...Create Access Control List
...Create Application Authentication Set Up
...Create Data Grid
Rollback

Error starting at line 163 in command:
declare
  q varchar2(32767) := null;
begin
q := null;
wwv_flow_api.create_ws_worksheet (
  p_id => 1311502709921962+wwv_flow_api.g_id_offset,
  p_flow_id => 4900,
  p_page_id => 2,

(snip...)

ORA-02291: integrity constraint (APEX_040000.WWV_FLOW_WORKSHEETS_FLOW_FK) violated - parent key not found
ORA-06512: at "APEX_040000.WWV_FLOW_API", line 14562
ORA-06512: at line 5
02291. 00000 - "integrity constraint (%s.%s) violated - parent key not found"
*Cause:    A foreign key value has no matching primary key value.
*Action:   Delete the foreign key or add a matching primary key.


From this it was evident that the data grids in the exported application are linked to application 4900, which is the built-in Websheets application. However, the Apex runtime-only installation does not install application 4900, hence the integrity error.

At this point I started to wonder if Websheets are supported in a runtime-only installation of Apex, and I posted the question in the Apex discussion forum on OTN.

But the only answer there was silence, so after a few days I decided to just go ahead and try to install application 4900 in the runtime environment, by running the f4900.sql script (from the apex\builder folder).

More problems

With application 4900 installed successfully, I was able to install my own Websheets application.

However, after login I was greeted with the following error message:

ORA-06550: line 9, column 46:
PLS-00201: identifier 'WWV_FLOW_F4000_PLUGINS.RENDER_SEARCHBOX' must be declared
ORA-06550: line 9, column 1:
PL/SQL: Statement ignored

So, a package is missing. I located the package scripts (in the apex\core folder).

The comments in the package header (wwv_flow_f4000_plugins.sql) actually state:

"RUNTIME DEPLOYMENT: YES"

But evidently the package is not installed by the runtime installation script, so either this is a bug or the comment is wrong.

So I added the missing package by running:

alter session set current_schema = APEX_040000;


@wwv_flow_f4000_plugins.sql
@wwv_flow_f4000_plugins.plb

The final problem

This fixed the PLS-00201 error, but now I got several other errors, all similar to the following:

Unable to bind ":WS_APP_ID"
Unable to bind ":WEBPAGE_ID"
ORA-20001: run_query error q=select id, title, content, section_type, data_grid_id, report_id, data_section_style, nav_start_webpage_id, nav_max_level, nav_include_link, created_by from apex$_ws_webpg_sections where ws_app_id = :WS_APP_ID and webpage_id = :WEBPAGE_ID order by display_sequence
ORA-01003: no statement parsed


Still no feedback in the discussion forum, so I decided to dig deeper into the Apex internals (an interesting exercise in itself).

A solution!

In the end, the solution turned out to be simple. The WWV_FLOW_COMPANY_SCHEMAS table contains the workspace to schema mappings. This table contains a column called IS_APEX$_SCHEMA, and this needs to be set to "Y" (the APEX_INSTANCE_ADMIN.ADD_WORKSPACE procedure leaves the column value as NULL).

So just update the column to enable Websheets:

update wwv_flow_company_schemas
set is_apex$_schema = 'Y'
where schema = 'SMALL_APPS';

Voila! I now have a working Websheet application in my runtime-only Apex environment.


Postscript

While I was typing up this for the blog post, I stumbled across the following statement in the Apex Administration Guide:

"Tip: Websheets are not supported in an Oracle Application Express runtime environment."

I wish somebody could have pointed that out to me in the discussion forum thread. But then again, if they did, I probably wouldn't have discovered how to make it work anyway.

And if anyone from Oracle is reading this, consider this an enhancement request for Apex 4.1: Support Websheets in a runtime-only environment by:

  • Including application 4900 in the runtime installation
  • Including the wwv_flow_f4000_plugins package in the runtime installation
  • Adding a parameter to the ADD_WORKSPACE and ADD_SCHEMA procedures to specify whether Websheets should be enabled or not

Tuesday, March 1, 2011

Stress Testing Oracle XE 10g

Oracle Express Edition (XE) is Oracle's free entry-level database product, currently available only in a 10g version. XE is usually pitched as suitable for personal work or (very) small departmental applications, but I was curious as to what kind of load it can support.

Oracle XE 10g has the following limitations:

  • Up to 1 instance per server
  • Up to 1 CPU (will not use more even if available)
  • Up to 1 GB RAM (will not use more even if available)
  • Up to 4 GB datafiles (not including XE system data)
  • Free to develop, distribute and deploy in production


By the way, several things seem to indicate that Oracle Express Edition (XE) 11g is just around the corner. It's rumored that XE 11g will raise the datafile limit from 4 GB to 11 GB, but the other limits will remain as far as I know (and I don't really know anything about it...!). So these performance tests will probably be representative both for XE 10g and 11g, although I can't know for sure until 11g is available.

Here is the test environment I set up:

Hardware

Note that the "hardware" in this case actually runs virtualized in a VMWare environment.
Here are the resources allocated to the server:

  • 1 Intel Xeon X7350 2.9 GHz CPU
  • 4 GB RAM (but remember that XE won't use more than 1 GB anyway)
  • 30 GB disk space


Software

The software setup was as follows:

  • Windows Server 2003 R2 Standard Edition Service Pack 2
  • Microsoft Internet Information Server (IIS) 6.0
  • Thoth Gateway
  • Oracle Express Edition (XE) 10g
  • Oracle Application Express (Apex) 4.0


Note that I am not using the Embedded PL/SQL Gateway (DBMS_EPG) as the web server, but rather IIS in combination with the Thoth Gateway, a free ASP.NET replacement for mod_plsql and DBMS_EPG that I wrote in C# using ODP.NET.

Everything (database, web server, Apex) was installed using default settings.
The Apex images folder (/i/) was set up with "Expires"-headers (7 days) to allow browsers to cache images.

Both the database and the web server run on the same machine (server), in order to keep the setup as simple as possible (and show what is possible with just one box).

Test page

I set up an Apex test page with a mix of regions, including a report region, a couple of PL/SQL regions and an HTML region. For each page view, a random number is generated (to simulate different users looking at different things). The report query selects up to 200 rows based on this random number (using bind variables, of course), out of a system table/view (DBA_OBJECTS) with around 17,000 total rows.



There is also an after footer page process that inserts a row in a log table, so this is not just a read-only page.

The application was set up with an authorization scheme of "No application authorization scheme required" to allow it to be accessed from the online stress test tool.

Stress test tool

LoadImpact is a user-friendly, web-based stress testing service that allows you to run free tests that simulate up to 50 concurrent clients (you can pay to test with more clients). A key point here is that the load is actually generated from their test servers on the public internet, so this gives a more realistic test than simply running a stress testing tool on your local network.

You don't even need to sign up, just enter the URL of a website to test it (apparently, there are safeguards to prevent this from being used as a denial-of-service attack tool).

A note regarding image caching: The LoadImpact service does not cache anything. This means that all the Apex images, Javascript and stylesheets will be loaded on every page hit, even though in reality they would be cached in the user's browser after the first page view. Since I wanted to test the database's ability to handle page views, rather than IIS's ability to serve up static files, I changed the Apex page so that the standard Javascript and CSS files were not included. I then manually added the core stylesheets (core Apex + theme) back on the page via the page template, just to preserve the look and layout of the page. But even these could have been left out in order to simulate a scenario where most users have the files cached in their browser.

Test results

The free test at LoadImpact runs five different subtests, starting with 10 concurrent clients and ending with 50 concurrent clients. The full test takes 10 minutes, divided into five 2-minute parts with increasing load.

While this was going on, I watched the CPU usage in the Windows Task Manager and grabbed the following screenshots:





While there are some peaks at the start of each subtest, it seems like the database "warms up" and actually does less work at the middle and end of the subtest compared to the start. Even at 40 and 50 clients, the CPU is (on average) not working at more than half capacity.

Here is the summary from LoadImpact at the end of the test:



First of all, note the nice and almost flat curve, with response times of under 300 ms even with 50 clients (and remember that the load generator is actually located in a different country, accessing the Apex page via the public Internet).

Actually, while the test was running, I was also clicking through a separate Apex application (an actual application, not just a test page) on the same server, and I couldn't really notice that the server was under any kind of stress; the response times were excellent.

The total number of requests for the whole test was 4500, but these requests include not only the actual page (the call to the "f" procedure) but also 2 stylesheets and 3 images. So the database was hit once per 6 requests, in other words 16.7% of 4500, or 750 times. Each hit inserted a row into a log table. If we query that log table and group it down to the second, we can get the number of database requests per second:



This shows that the XE database was handling 10 or 11 page views per second. If this can be sustained over time, that is 36,000 page views per hour, or 864,000 page views per day.

By comparison, apex.oracle.com is reported to have 4,800,000 page views per week or 685,000 page views per day. And according to this, Reddit has 10 pageviews per second per server (with 80 servers), while Facebook has just 2 pageviews per second per server (with 30,000 servers!).

I won't read too much into these numbers (as obviously a "page view" can vary wildly between different websites, and only actual use can tell whether your site is performing well or not), but clearly the free XE database is more than a toy.

I'm even tempted to buy a Basic or Professional test at LoadImpact to test with more than 50 clients and see where that curve leads with 60, 80 or 100 concurrent clients...