Database Technology 3: NewSQL in Plain English
The world has changed
The world has changed massively in the past 20 years. Back in the year 2000, a few million users connected to the web using a 56k modem attached to a PC, and Amazon only sold books. Now billions of people use their smartphones and tablets 24x7 to buy just about everything, and they're interacting with Facebook, Twitter and Instagram. The pace has been unstoppable.
Expectations have also changed. If a web page doesn’t refresh within seconds we’re quickly frustrated, and go elsewhere. If a web site is down, we fear it’s the end of civilisation as we know it. If a major site is down, it makes global headlines.
Instant gratification takes too long! - Ladawn Clare-Panton
Forgive me for interrupting. This is part 3 of a series of articles on Databases and Big Data. If you’re a seasoned Database Architect, then read on, otherwise you may want to start with my previous articles on Scalability and Database Architecture.
What's Changed?
The above leads to a few observations:-
The Internet of Things is sending velocity through the roof! - Dr Stonebraker (MIT)
These demands have led to the truly awful marketing term Translytical Databases, which refers to hybrid solutions that handle both high-throughput transactions and real-time analytics in the same system.
What’s the problem?
The challenge faced by all database vendors is to provide high performance solutions while reducing costs (perhaps using commodity servers). But there are conflicting demands:-
The only realistic way to provide massive incremental scalability is to deploy a Scale Out distributed system. Typically, to maximise availability, changes applied on one node are immediately replicated to two or more others. However, once you distribute data across servers you face trade-offs.
For example:-
Performance Vs. Availability and Durability
Many NoSQL databases replicate data to other nodes in the cluster to improve availability. If, immediately following a write, the database node crashes, the data is still available on other machines, and the change is therefore durable. It's possible, however, to relax this requirement and return immediately, before the change has been replicated. This maximises performance at the risk of losing the change: it may not be durable after all.
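To make the trade-off concrete, here is a minimal sketch in Java of the two acknowledgement styles, assuming a hypothetical distributed key-value client; the class and method names are illustrative rather than taken from any particular product.

```java
// Hypothetical client API: illustrates the durability trade-off only;
// class and method names are not from any specific product.
import java.util.concurrent.CompletableFuture;

public class WriteDurabilityExample {

    enum AckMode {
        LOCAL_ONLY,   // return as soon as the local node has the write (fast, may lose data)
        REPLICATED    // return only after replicas confirm (slower, durable)
    }

    // Stand-in for a distributed key-value client.
    static CompletableFuture<Void> write(String key, String value, AckMode mode) {
        // A real client would send the write to the owning node and, for
        // REPLICATED, wait for acknowledgements from the replica nodes.
        return CompletableFuture.completedFuture(null);
    }

    public static void main(String[] args) {
        // Fast path: acknowledged before replication, so a crash of the
        // receiving node can silently discard the change.
        write("order:42", "{\"status\":\"PAID\"}", AckMode.LOCAL_ONLY).join();

        // Durable path: acknowledged only once the change exists on
        // additional nodes, trading latency for durability.
        write("order:42", "{\"status\":\"PAID\"}", AckMode.REPLICATED).join();
    }
}
```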
Consistency Vs. Availability
NoSQL databases support eventual consistency. For example, in the above diagram, if network connectivity to New York temporarily fails there are two options: either refuse to process reads and writes at the disconnected site until connectivity is restored, or carry on serving requests locally and accept that the data returned may be stale or conflicting.
Clearly we trade consistency for availability.
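This choice is often exposed to developers as a per-request consistency level. The sketch below is purely illustrative: the level names mirror the tunable-consistency idea found in several NoSQL databases, but this is not the API of any specific driver.

```java
// Illustrative only: the ConsistencyLevel names mirror the tunable-consistency
// idea found in several NoSQL databases, but this is not a real driver API.
public class PartitionTradeoffExample {

    enum ConsistencyLevel { ONE, QUORUM }

    static String readBalance(String account, ConsistencyLevel level) {
        // ONE:    answer from any reachable replica -> stays available during a
        //         partition, but may return stale data.
        // QUORUM: requires a majority of replicas -> consistent, but the read
        //         fails if the partition leaves no majority reachable.
        return "...";
    }

    public static void main(String[] args) {
        readBalance("acc-123", ConsistencyLevel.ONE);     // favour availability
        readBalance("acc-123", ConsistencyLevel.QUORUM);  // favour consistency
    }
}
```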
Flexibility Vs. Scalability
Compared to general-purpose relational systems like Oracle and DB2, NoSQL databases are relatively inflexible, and don't (for example) support join operations. In addition to many not supporting the SQL language, some (e.g. Neo4J and MongoDB) are designed for specific problem spaces – graph processing and JSON document structures respectively.
Even databases like HBase, Cassandra and Redis abandon relational joins, and many limit access to lookups by a single primary key, with no support for secondary indexes.
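To see what that restriction means in practice, here is a minimal, hypothetical key-value interface: everything is a get or put by primary key, so any "join" has to be stitched together by the application itself.

```java
// Sketch of the access pattern many key-value NoSQL stores impose: a single
// get/put by primary key, no joins and no secondary indexes. The KVStore
// interface is hypothetical.
import java.util.HashMap;
import java.util.Map;

public class KeyValueAccessExample {

    interface KVStore {
        String get(String key);
        void put(String key, String value);
    }

    public static void main(String[] args) {
        Map<String, String> data = new HashMap<>();
        KVStore store = new KVStore() {
            public String get(String key) { return data.get(key); }
            public void put(String key, String value) { data.put(key, value); }
        };

        store.put("customer:17", "{\"name\":\"Ada\",\"orderIds\":[101]}");
        store.put("order:101", "{\"total\":25.00}");

        // There is no JOIN: to combine customer and order data the application
        // must issue one lookup per key and assemble the result itself.
        String customer = store.get("customer:17");
        String order = store.get("order:101");
        System.out.println(customer + " -> " + order);
    }
}
```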
Many databases claim 100% ACID transactions. In reality few provide formal ACID guarantees. - Dr Peter Bailis (Stanford University)
ACID Vs. Eventual Consistency
One of the major challenges in scaling database solutions is maintaining ACID consistency. Amazon tackled the performance problem with its Dynamo database (the forerunner of DynamoDB) by relaxing consistency constraints in favour of speed, an approach which led to a raft of NoSQL databases.
As an aside, even the most successful databases (including Oracle) don't provide true ACID isolation. Of 18 databases surveyed, only three (VoltDB, Ingres and Berkeley DB) were found to support serializability by default. The primary reason is that serializability is difficult to achieve while maintaining performance.
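In practice this means applications that need serializable behaviour usually have to request it explicitly. The snippet below uses the standard JDBC API; the connection URL and credentials are placeholders, and whether the request is honoured, and at what cost, depends on the database.

```java
// Standard JDBC: most databases default to a weaker isolation level
// (e.g. READ COMMITTED), so serializable behaviour usually has to be
// requested explicitly. URL and credentials below are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;

public class SerializableIsolationExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/bank", "user", "password")) {
            conn.setAutoCommit(false);
            conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);

            // ... run the transaction's statements here ...

            conn.commit();
        }
    }
}
```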
Eventual consistency is a particularly weak model. The system can return any data, and still be eventually consistent. - Dr Peter Bailis (Stanford)
Eventual consistency, on the other hand, provides almost no consistency guarantees. The diagram below illustrates the problem: one user deducts $1m from a bank account, but before the change is replicated, a second user checks the balance and sees the old figure. The only guarantee is that, provided there are no further writes, the system will eventually return a consistent result. How is this even useful, let alone acceptable?
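The following toy simulation (no real database involved) reproduces the anomaly: the withdrawal is applied to one replica, replication lags, and a read against the second replica still returns the old balance.

```java
// Toy simulation of the anomaly described above: the write is applied to one
// replica, replication lags, and a read against the second replica returns
// the old balance. No real database is involved.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class EventualConsistencyExample {

    static final ConcurrentHashMap<String, Long> replicaA = new ConcurrentHashMap<>();
    static final ConcurrentHashMap<String, Long> replicaB = new ConcurrentHashMap<>();
    static final ScheduledExecutorService replicator =
        Executors.newSingleThreadScheduledExecutor();

    // Write to replica A and replicate to B after an artificial delay.
    static void write(String account, long balance) {
        replicaA.put(account, balance);
        replicator.schedule(() -> { replicaB.put(account, balance); },
                            500, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        replicaA.put("acc-1", 1_000_000L);
        replicaB.put("acc-1", 1_000_000L);

        write("acc-1", 0L);                          // user 1 withdraws $1m
        System.out.println(replicaB.get("acc-1"));   // user 2 still sees 1000000

        Thread.sleep(1_000);                         // wait for replication
        System.out.println(replicaB.get("acc-1"));   // eventually consistent: 0
        replicator.shutdown();
    }
}
```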
The OLTP Database Reimagined
Ten years ago Dr Michael Stonebraker wrote the paper The End of an Architectural Era, in which he argued that the 1970s architecture of databases from Oracle, Microsoft and IBM was no longer fit for purpose.
He stated an OLTP database should be:-
- Entirely main-memory resident rather than disk based
- Distributed across a shared-nothing, scale-out cluster of commodity servers
- Free of the overheads of locking, latching and buffer management
- Highly available through built-in replication
To demonstrate the above was feasible, he built a prototype, the H-Store database, which ran the TPC-C benchmark 82 times faster than an unnamed commercial rival on the same hardware. The H-Store prototype achieved a remarkable 70,000 transactions per second, compared with just 850 from the commercial rival, despite significant DBA tuning effort on the latter.
Achieving the impossible!
Dr Stonebraker's achievement is remarkable. The previous TPC-C world record was around 1,000 transactions per second per CPU core, and yet H-Store achieved 35 times that on a dual-core 2.8GHz desktop machine. In his 2008 paper OLTP Through the Looking Glass he went on to explain why commercial databases (including Oracle) perform so badly.
The diagram above illustrates the 93% overhead built into a traditional (legacy?) database, including locking, latching and buffer management. In total just 7% of machine resources are dedicated to the task at hand.
H-Store was able to achieve the seemingly impossible task of full ACID transactional consistency, orders of magnitude faster, simply by eliminating these bottlenecks and using in-memory rather than disk-based processing.
NewSQL Database Technology
First released in 2010, VoltDB is the commercial implementation of the H-Store prototype: a dedicated OLTP platform for web-scale transaction processing and real-time analytics. As this infographic demonstrates, there are 250 commercially available database solutions, of which just 13 are classified as NewSQL technology.
VoltDB
In common with other NewSQL databases, VoltDB aims to run entirely in memory with optional periodic disk snapshots. It runs on 64-bit Linux, on premises or on the AWS, Google and Azure cloud services, and implements a horizontally scalable architecture.
Unlike traditional relational databases, where data is written to disk-based log files, VoltDB applies changes in parallel to multiple machines in memory. For example, a K-Safety of two guarantees no data loss even if two machines fail, as each change is committed to at least three in-memory nodes.
Transactions are submitted as Java stored procedures which can be executed asynchronously in the database, and data is automatically partitioned (sharded) across nodes in the system, although reference data can be duplicated to maximise join performance. Unusually, VoltDB also supports semi-structured data in the form of JSON data structures.
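As a rough illustration, here is a minimal sketch of what such a stored procedure looks like, based on VoltDB's org.voltdb Java API; the orders table and its columns are invented for the example, and a real deployment would also register the class in the schema.

```java
// A minimal sketch of a VoltDB Java stored procedure, assuming the org.voltdb
// client library. The orders table and its columns are invented for illustration;
// the procedure runs inside the database against the partition that owns the
// supplied partitioning key.
import org.voltdb.SQLStmt;
import org.voltdb.VoltProcedure;
import org.voltdb.VoltTable;

public class RecordOrder extends VoltProcedure {

    // SQL is declared up front so VoltDB can pre-compile the statements.
    public final SQLStmt insertOrder =
        new SQLStmt("INSERT INTO orders (order_id, customer_id, amount) VALUES (?, ?, ?);");
    public final SQLStmt customerTotal =
        new SQLStmt("SELECT SUM(amount) FROM orders WHERE customer_id = ?;");

    public VoltTable[] run(long orderId, long customerId, double amount) {
        voltQueueSQL(insertOrder, orderId, customerId, amount);
        voltQueueSQL(customerTotal, customerId);
        // Both statements execute as a single ACID transaction on the owning partition.
        return voltExecuteSQL(true);
    }
}
```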
In terms of performance, a 2015 benchmark showed VoltDB at almost double the processing speed of the NoSQL database Cassandra, while also costing roughly one sixth as much in AWS cloud processing costs.
Finally, VoltDB version 6.4 passed the remarkably stringent Jepsen distributed safety tests.
To put this in context, a previous Jepsen test of the NoSQL database Riak found it dropping 30-70% of writes even with its strongest consistency setting, while Cassandra lost up to 5% of writes using lightweight transactions.
MemSQL
In common with VoltDB, MemSQL is a scale-out, in-memory distributed database designed for fast data ingestion and real-time analytics. It also runs on premises and in the cloud, and provides automatic sharding across nodes, with queries executed in parallel on each CPU core.
While there are many similarities with VoltDB, the diagram above illustrates a key difference. MemSQL attempts to balance the conflicting demands of real-time transactions and data warehouse style processing of historical data. To achieve this, MemSQL organises data in memory as a row store, backed by a column-oriented disk store, combining real-time (recent) data with historical results.
This places it firmly in the OLTP and Data Warehouse space, although both solutions target the real time data ingestion and analytics market.
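Because MemSQL speaks the MySQL wire protocol, a standard MySQL JDBC driver can query it. The sketch below assumes a hypothetical events table and placeholder connection details; as described above, recent rows would typically live in the in-memory row store while older data sits in the column-oriented disk store.

```java
// Because MemSQL speaks the MySQL wire protocol, a standard MySQL JDBC driver
// can query it. Host, database, table and credentials below are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MemSqlQueryExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://memsql-host:3306/analytics", "user", "password");
             Statement stmt = conn.createStatement();
             // A real-time aggregate over recent events, alongside whatever
             // historical data the disk-based columnstore holds.
             ResultSet rs = stmt.executeQuery(
                 "SELECT event_type, COUNT(*) FROM events " +
                 "WHERE event_time > NOW() - INTERVAL 1 HOUR GROUP BY event_type")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + ": " + rs.getLong(2));
            }
        }
    }
}
```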
Which Applications need NewSQL Technology?
Any application which requires very high ingest rates and fast response times (on average 1-2 milliseconds), but also demands the transactional accuracy provided by ACID guarantees – for example anything involving customer billing.
Typical applications include:
While these may initially seem edge cases compared to the majority of OLTP applications, in a 24x7 web-connected world they represent the new frontier for real-time analytics, and with the advent of the Internet of Things, a massive opportunity.
Conclusion
Although Hadoop is more closely associated with Big Data, and has received huge attention of late, database technology is the cornerstone of any IT system.
Likewise, NoSQL databases appear to provide a fast, scalable alternative to the relational database, but despite the lure of licence-free open source software, it really does seem you get what you pay for (consistency). Indeed, as VoltDB demonstrates, NewSQL may actually be cheaper than the NoSQL alternatives in the long run.
In conclusion, if you have a web-scale OLTP and/or real-time analytics requirement, the NewSQL class of databases needs serious consideration.
Disclaimer: The opinions expressed in my articles are my own and will not necessarily reflect those of my employer (past or present) or indeed any client I have worked with.