Amazon Aurora: TCO, Performance, and Migration

Ahh Aurora, beautiful, sweet Aurora. No one can forget the timeless classic tale – Disney, the Grimm fairy tale, or the most recent iteration Maleficent starring Angelina Jolie – of a young princess cursed to prick her finger on a spindle and fall into an endless slumber. Hmmm. I am not sure Amazon’s new relational database offering was created with the express intent to last a short period before succumbing to an uncontrollable urge to prick its proverbial finger and fade out of existence. I think it is safe to assume Bezos and the AWS marketing team were hoping to conjure images of a fantastic light display in the sky with the new RDS offering Aurora, or possibly of the Roman goddess of dawn. In any event, down to business.

In this blog we will take a look at the total cost of ownership (TCO) of Aurora against the enterprise relational database engines on the market, then run a performance stackup, and lastly discuss migration strategies. We are going to cover a fair amount of ground here, so hold onto your seats; or just skip to the section of your interest (TCO/Performance/Migration).

TCO: The Value Proposition

Amazon Aurora is pitched as a cost-effective alternative to enterprise relational database engines, with the added benefit of automated provisioning, patching, backup, recovery, failure detection, and repair. All of which sounds fantastic, but is that really the case? Let’s drill into the numbers and take a look at the TCO of Aurora versus the big players in the relational database market.


Instance Type vCPU Memory (GiB) Instance Cost
db.r3.large 2 15.25 $0.29 / hr
db.r3.xlarge 4 30.5 $0.58 / hr
db.r3.2xlarge 8 61 $1.16 / hr
db.r3.4xlarge 16 122 $2.32 / hr
db.r3.8xlarge 32 244 $4.64 / hr

The grid above is quite frankly astonishing. What this means is that you can run your production SQL workloads on a cloud platform for $6.96/day, or, to put it another way, you can run a fully managed two-node SQL cluster backbone all year round for $5,080.80. Of course I am not factoring in storage and request costs, which can have a significant impact on the bill at 10 cents per GB-month and 20 cents per million requests. Those would drive your bill up to $6,744.80/yr for a database handling 20 million requests a day. However, consider this: even if your business is pushing Twitter numbers at about 500 million requests a day, your bill would be roughly $41K/year on a two-node 2TB cluster. Now Twitter most likely has many more databases, and larger ones at that, and I am not factoring in the egress networking costs, which would add up, but I think you get the picture. Aurora provides a production enterprise SQL platform at a significantly lower price point than any other platform on the market, and it is fully managed with replicated storage and free backups. Let’s look at this from a different angle. What if you wanted to bring your own MSSQL or Oracle license to the cloud and run a pair of r3.large instances for a full year?
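The arithmetic behind those figures is easy to sanity-check. The sketch below reproduces the instance and request math; the leftover ~$204 is my inference about the storage line item, not a published AWS figure:

```python
# Sanity-check the Aurora cost figures quoted above.
HOURLY = 0.29  # db.r3.large, $/hr
NODES = 2

per_day = HOURLY * 24
per_year = HOURLY * NODES * 24 * 365

print(f"one node per day:   ${per_day:.2f}")     # $6.96
print(f"two nodes per year: ${per_year:,.2f}")   # $5,080.80

# 20 million requests/day at $0.20 per million requests
requests_per_year = 20_000_000 * 365 / 1_000_000 * 0.20
print(f"request charges:    ${requests_per_year:,.2f}")  # $1,460.00

# The quoted $6,744.80/yr total leaves about $204, which at
# $0.10/GB-month would cover roughly 170 GB of storage.
storage = 6744.80 - per_year - requests_per_year
print(f"implied storage:    ${storage:,.2f}")
```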

EC2 x2 r3.large + BYOL (per year)
Oracle Enterprise	MSSQL 2012 Enterprise	MySQL Enterprise	AWS Aurora
$52,756.00	$19,004.00	$10,256.00	$6,744.80

The clear winner is Aurora. A move from Oracle to Aurora would see a cost savings of $46,011.20/yr, moving from Microsoft SQL to Aurora would yield less exciting but still moderate savings of $12,259.20/yr, and a move from MySQL would see savings of $3,511.20/yr; and these prices do not even factor in the annual 22% Oracle support/maintenance fee or Microsoft’s 25% Software Assurance. Aurora is clearly cheaper than running two EC2 instances of a comparable RDS engine for a full year, but what about a like-for-like comparison of database engines running on Amazon RDS? Well, the price drops significantly for Oracle and MSSQL, but oddly increases for MySQL (probably storage).

Amazon RDS (per year)
Oracle Enterprise	MSSQL 2012 Enterprise	MySQL Enterprise	AWS Aurora
$14,716.80	$8,322.00[1]	$13,140.00[2]	$6,744.80

Aurora is still less expensive, and it even undercuts MySQL in price. That is great, but I am more of a visual guy, so let’s take a look at the numbers over a 5 year period. The graph below represents the soft cost of each RDS engine running on an AWS EC2 r3.large instance against AWS Aurora. I say soft cost because I did not factor in EBS, network egress, and backups, and I am assuming the instances are running 100% of the year.
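Pulling the deltas out of the two pricing tables above makes the savings concrete:

```python
# Yearly cost deltas vs. Aurora ($6,744.80/yr), from the tables above.
AURORA = 6_744.80

byol = {  # EC2 x2 r3.large + BYOL
    "Oracle Enterprise": 52_756.00,
    "MSSQL 2012 Enterprise": 19_004.00,
    "MySQL Enterprise": 10_256.00,
}
rds = {   # the same engines on Amazon RDS
    "Oracle Enterprise": 14_716.80,
    "MSSQL 2012 Enterprise": 8_322.00,
    "MySQL Enterprise": 13_140.00,
}

for label, table in (("BYOL", byol), ("RDS", rds)):
    for engine, cost in table.items():
        print(f"{label} {engine}: save ${cost - AURORA:,.2f}/yr")
```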

Comparing a PaaS offering to a build-your-own is not truly a fair comparison because the very idea of PaaS is to offer you more for less. Given that knowledge, we need to compare apples to apples and take a look at the TCO on each engine running for a full year on Amazon RDS.


That is more like it. The price of each has leveled out, and yet, as you probably guessed, Aurora is still less expensive than every other relational engine on Amazon RDS. Based on these numbers it is clear that running your workload on Aurora will yield at minimum a cost savings of over $3K a year, and on the high side $46K/yr. All of that, and we still have not mentioned reserved instance pricing for Aurora, which comes in at a massive 44% discount on a one-year term and 63% on a three-year term.
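For completeness, here is what those reserved-instance discounts do to the two-node instance bill, assuming (my reading of the pricing, not a published breakdown) that the discount applies to instance-hours only, not storage or I/O:

```python
# Apply the quoted reserved-instance discounts to the on-demand
# instance cost of two db.r3.large nodes for a year.
ON_DEMAND = 5_080.80  # two nodes, on-demand, per year

print(f"1-yr reserved: ${ON_DEMAND * (1 - 0.44):,.2f}/yr")  # $2,845.25
print(f"3-yr reserved: ${ON_DEMAND * (1 - 0.63):,.2f}/yr")  # $1,879.90
```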

Although the cost savings with Aurora are clear, let’s be honest: if you are running Microsoft SQL or Oracle in your production environment, you have probably already justified the outrageous licensing costs with a very specific feature set or performance metric that MySQL or PostgreSQL simply cannot offer, and you are most likely running those engines on much heavier physical or virtual “metal” than an r3.large (2 vCPU / 15.25 GB RAM). If you do not have a very specific reason for running MSSQL or Oracle, you should definitely consider Aurora. If, however, you do have a specific reason for running the big-boy relational engines, then let’s quickly address the most common ones now.

First off, Aurora is horizontally and vertically scalable, allowing for up to 15 active read replicas of scale-out as well as the ability to scale up to 32 cores and 244 GB of RAM. As for the other commonly sought-after enterprise feature set – seamless disaster recovery and high availability, available with MSSQL and/or Oracle – this is where the real value in Aurora shines. Those 15 active read replicas (seven more than Microsoft SQL 2014 AlwaysOn allows) can be spread across multiple Availability Zones and stand ready for automated failover. This not only creates an automated DR solution, but also spreads read requests across multiple nodes to lighten the load on your primary write node, creating a performant, highly available relational database offering. Of course these replicas are not free. You pay the same rate for your replicas as for your primary write node, but considering the costs are relatively insignificant in the grand scheme of your business, adding another replica is a simple financial decision and a push-button technical implementation.

Given the significant cost savings offered by Amazon Aurora, the only real question is: how does it stack up from a performance perspective against the big dogs in the relational database market like Oracle, Microsoft SQL, and MySQL?

Performance Stackup

In order to derive a fair performance comparison of the major RDS engines on the market in relation to Amazon Aurora, I created an experiment that leverages HammerDB and places the new Amazon RDS offering up against SQL, Oracle, MySQL, and PostgreSQL in multiple scenarios with the addition of a control system. Unfortunately, due to time constraints, this blog only includes the MySQL, SQL, and Aurora data; however, I do plan to publish a more extended blog with additional results. The experiment is broken down into a few different components.

Additionally, please keep in mind that the performance images contained within this section were taken over the course of weeks of testing each platform. (Updated 8/13/2015 from feedback provided by EPS Cloud’s Shashi Raina – Cloud Architect)

The Control:

First, my control system was an m3.medium AWS instance with SQL, Oracle, MySQL, PostgreSQL, and HammerDB installed. HammerDB was configured and run against each platform on this single box to establish performance baselines for both the underlying AWS platform and HammerDB itself. Why test HammerDB, Jon? Great question. The simple answer is because we are creating a control system. By creating a control using a smaller instance class with fewer resources than Amazon Aurora, we obtain a clear baseline of how each RDS engine performs on an m3.medium, and we are then able to compare those HammerDB results to the results of each RDS system running on a larger instance class where resources are roughly equal to Amazon Aurora. The idea is that we will see variation in results between the smaller and larger AWS instance classes. This in turn tells us that HammerDB is doing its job and not just spitting out preconceived numbers based on the RDS engine selected.

HammerDB Test Case Criteria

Test A-1 Build Database
- Warehouses – 1
- Users – 1
- m3.medium and r3.large – used in the control
- r3.large – RDS instance size used to test all platforms

Test A-2 Load Test 50 Users
- Users – 50
- Script – preconfigured HammerDB
- m3.medium and r3.large – used in the control
- r3.large – RDS instance size used to test all platforms


MySQL reached a peak of 91,554 transactions per minute (TPM) and averaged 85,000 TPM during database creation. Running the test script with 50 users on the same instance produced a rough steady state of 70,000 TPM.


After increasing the EC2 instance size to r3.large, TPM rose to a peak of 134,000 with a sustained average of 99,000 during the 50 user load test, an increase of roughly 40,000 TPM.

Microsoft SQL peaked at 119,130 TPM and maintained a steady rate between 100,000 and 118,000 for the duration of database creation. During the 50 user test, MSSQL performance was significantly lower and quite surprising, at a rough average of 16,000 TPM.


Just as the MySQL server did when the EC2 instance was changed from an m3.medium to an r3.large, the MSSQL server saw TPM increase, reaching a peak of 279,144 and maintaining a high average around 255,000. Significant TPM gains also showed in the 50 user load test, where MSSQL climbed from a paltry 16,000 TPM to 60,000 on the r3.large.




This is good. As anticipated, TPM appears to correlate directly with instance size: as resources increase with instance size, we should see an increase in TPM.
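Putting the control numbers side by side makes that correlation plain (figures pulled from the runs described above):

```python
# Control results: (m3.medium, r3.large) TPM for each run described above.
control = {
    "MySQL, 50-user load":   (70_000, 99_000),    # steady state / sustained avg
    "MSSQL, database build": (119_130, 279_144),  # peaks
    "MSSQL, 50-user load":   (16_000, 60_000),    # rough averages
}

for run, (small, large) in control.items():
    assert large > small, run  # bigger instance, higher TPM in every case
    print(f"{run}: {small:,} -> {large:,} TPM ({large / small:.1f}x)")
```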

Now that we have our control, let’s take a look at MSSQL and MySQL Amazon RDS offerings in comparison to Aurora. The second test will use the same HammerDB criteria as used in the control case.

First up in the performance stackup is Microsoft SQL on Amazon RDS.

The first test revealed that MSSQL peaks at around 35,000 TPM and averages 34,000 TPM. The 50 user load test reached a peak of 50,598 TPM and averaged 45,000. What is interesting about MSSQL is that it took significantly more time to reach 49,000 TPM, hovering around 6,000 TPM for roughly 2 minutes before shooting up to its peak.



Next up is MySQL on Amazon RDS. The results from this test were quite interesting given the unexpected, massive gap in TPM between MySQL and MSSQL on Amazon RDS. During database creation, Amazon RDS MySQL reached a max of 109,782 TPM and maintained a steady state of around 106,000. That is an astounding 75,432 TPM gap between MSSQL and MySQL. Now for some of the most interesting results of the first round of testing: MySQL hit a peak of 784,152 TPM and averaged 701,000 over the course of the load test. Frankly, I thought HammerDB was broken or I had not configured something correctly, so I started from scratch by creating a new MySQL RDS instance and reran the tests, with similar results. I can only conclude that either HammerDB is not accurately reporting TPM on Amazon RDS, or MySQL runs brilliantly on AWS RDS.


And we finally arrive at Amazon Aurora. How does Aurora stack up against MSSQL and MySQL? Is the new cloud-built relational database service all that AWS makes it out to be?

The short answer is YES! In single-threaded workloads run against Aurora, only Amazon RDS MySQL was on par. Aurora outperformed MSSQL by roughly 20,000 TPM during database creation, and destroyed MSSQL in the user load test, peaking at 780,000 TPM before settling into an average steady state of 660,000 for the duration of the test. Aurora and MySQL performed roughly the same, with MySQL edging out Aurora by about 30,000 to 40,000 sustained TPM. However, these results were just single-threaded workloads. Where Aurora really shines is in concurrent workloads. The most significant performance finding was that a single Aurora instance scaled linearly: the 660,000 TPM of sustained performance grew to 2,640,000 TPM when running four concurrent workloads of the same type, which means the platform was able to achieve 44,000 transactions per second.
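The linear-scaling arithmetic is simple, but worth writing down, because the per-second figure is what really lands:

```python
# Linear scaling claim: four concurrent workloads at the single-workload
# sustained rate, converted from per-minute to per-second.
SUSTAINED_TPM = 660_000  # single workload, steady state
WORKLOADS = 4

total_tpm = SUSTAINED_TPM * WORKLOADS
print(f"{total_tpm:,} TPM")        # 2,640,000 TPM
print(f"{total_tpm // 60:,} TPS")  # 44,000 TPS
```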


The TPM numbers put out by Aurora are quite compelling with single-threaded workloads, and unbelievable running concurrent workloads against the platform.

My parting Aurora performance note, which makes the Amazon Aurora platform even more impressive, is that all test results were gathered between December 2014 and February 2015, and since that time Amazon has made internal adjustments to improve single-threaded performance. Over the next few months I plan to release an updated post with additional performance metrics to prove out the increase in single-threaded performance, and to really hammer on the platform with a couple dozen concurrent workloads. In the same post I plan to dive into more detail on the performance characteristics of Aurora in comparison to Oracle, MSSQL, MySQL, and PostgreSQL.

Now that we have a firm understanding of the TCO and performance, let’s take the next step and investigate migration to Aurora.

Migration Strategies

Is it possible to migrate your SQL, Oracle, and/or MySQL workloads to Amazon Aurora? Well, I will give the response you are probably expecting from a consultant: it depends on your specific use case. And that is not really evading the question; that is just the reality. If you have a 5TB production database with hundreds of custom stored procedures at the core of your business, I would not say you cannot move to Amazon Aurora, but I would say give me a call to discuss your specific technical and/or business reasons for the move so we can map out a migration strategy. If, on the other hand, you are looking to build a new cloud application and would like to leverage a relational database as part of the backend data processing, Aurora is a no-brainer from both business and technical perspectives. And if you are already on Amazon RDS MySQL, it is even easier: just restore a snapshot of your existing database into Amazon Aurora and you are ready to go. If, however, you are somewhere in the middle of the scenarios above and are curious to test a development database in Aurora, I have created a sample migration below.
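For the RDS MySQL path specifically, the snapshot restore can be scripted as well as clicked. Here is a minimal sketch using boto3; every identifier and the instance class are placeholders I made up for illustration, so treat this as a starting point rather than a recipe:

```python
# Sketch: restore an RDS MySQL snapshot into a new Aurora cluster.
# All identifiers below are placeholders -- substitute your own,
# and run with real AWS credentials configured.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create the Aurora cluster from an existing RDS MySQL snapshot.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="my-aurora-cluster",
    SnapshotIdentifier="my-mysql-snapshot",
    Engine="aurora",
)

# Aurora separates the cluster from its instances: add a writer
# instance to the new cluster so there is something to connect to.
rds.create_db_instance(
    DBInstanceIdentifier="my-aurora-node-1",
    DBInstanceClass="db.r3.large",
    Engine="aurora",
    DBClusterIdentifier="my-aurora-cluster",
)
```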

Now, I could have just followed the Amazon tutorial on restoring an RDS MySQL backup to Aurora and had a simple migration example for this blog; however, that would not have been nearly as interesting or fun as migrating a Microsoft SQL database to Aurora. Consequently, I decided to do some additional research into the new cloud-first relational database, and after studying Aurora and its compatibility with MySQL, I developed a theory about what engine resides under the covers of the new Amazon offering. To test this theory I set up a migration test case that leveraged MySQL Workbench to migrate a Microsoft SQL database directly to Aurora. Although the results validated my theory to some extent, and the MSSQL database was easily migrated to Aurora in a fully functional state, I would not recommend attempting to migrate a production database without thorough testing.



1. Download and install MySQL Workbench and the AdventureWorks 2014 database.

2. Once the database has been restored, open MySQL Workbench and select Database Migration.

3. Fill out the source as Microsoft SQL and the destination as the connection string to your Amazon Aurora instance, then follow the GUI through to completion.

4. Once complete, open MySQL Workbench again, create a connection to your Amazon Aurora database, and you should now see your AdventureWorks database.

5. Run a few basic queries to validate that the data was migrated successfully.

And there we go. A migration from Microsoft SQL to Amazon Aurora with relative ease. Again, this is all development work, and I recommend migrating your Dev/Test databases to Amazon before undertaking a production database migration.


For us, of course, things can change so abruptly, so violently, so profoundly, that futures like our grandparents’ have insufficient ‘now’ to stand on. We have no future because our present is too volatile. … We have only risk management.

The prescient quote above from William Gibson’s fantastic cyberpunk novel Pattern Recognition fits perfectly into any discussion of cloud technology, and of the rate of change and innovation in information technology in general. Given the frenetic pace of technological innovation, change risk mitigation and management remain among the last forms of insulation available to corporations of all sizes.

If you are building and working in a Greenfield, Amazon Aurora is the clear relational database option. The TCO is fantastic, the performance is unreal (did I mention 44,000 transactions a second?), and I cannot express enough the true tranquility of mind that comes with having someone else worry about your database disaster recovery and high availability. If, however, your organization has been around for years, it is likely that whatever database platform was selected at the outset is now entrenched, and migrating it will be a difficult proposition. That said, the Brownfield of Oracle and MSSQL does not have to be a bottomless pit from which the only escape is going farther down the lightless hole of adding cores to accommodate demand, and thus increasing your costs. There is a way out, and it can start with something as simple as checking when your “true-up” licensing period with Microsoft and/or Oracle comes due. I cannot tell you how interested your CFO will become when they see a bill for a few hundred thousand, or a few million, dollars to cover your Oracle or MSSQL licensing costs.

Thank you for checking in, and happy cloud hopping.