
Oracle’s ‘In-Memory’ option may be costly

By Ben Woo

On June 10, 2014, Oracle announced an “in-memory” option for its pervasive Oracle Database 12c. Both The New York Times and The Wall Street Journal reported positively on the announcement.

Neuralytix does not share that enthusiasm, at least from a database perspective. There is no question that database performance (relational or otherwise) is key to creating enterprise competitive advantage. But Neuralytix suspects that another motive underlies this announcement.

In-Memory

The concept of “in-memory” computing is not new; it has been around since the early days of mainframe computing. Data and processing are brought “in-memory” and computations take place there. Memory is the fastest storage medium in any computer system.

Some memory is persistent (such as flash memory or NVRAM); other memory, such as DRAM, is not. Either way, memory is up to two orders of magnitude faster than the fastest rotating magnetic hard disk drives. But memory is expensive, especially DRAM, and computer hardware has to be specially designed to take advantage of massive amounts of memory.

Of all the “in-memory” databases, SAP HANA has probably captured the most attention and discussion. SAP launched HANA, its own in-memory database product, back in 2010, giving it an arguable head start in contemporary in-memory computing. But moving from a traditional relational SQL database to SAP HANA required heavy lifting. For some very large enterprises, the benefits gained were worth the effort.

Despite some early misconceptions, SAP HANA does not displace SAP’s Sybase relational SQL databases; it complements them for enterprises that need the real-time, ultra-high performance HANA delivers.

Where SAP has been successful is in adapting SAP HANA to run on distributed x86/64 systems (i.e., the Intel architecture). Most recently, SAP announced that HANA can scale across VMware’s vCloud Hybrid Service, an on-demand, elastic computing service that leverages the distributed nature of “clusters” of x86 servers rather than very expensive RISC-based systems. SAP has also used HANA for many of its own cloud applications, demonstrating its ability to scale. Neuralytix believes, on the other hand, that Oracle’s “in-memory” option will be optimized for Oracle hardware, taking the choice of infrastructure away from customers.

RISC or risk?

Historically, “mini” computers, as they were once known, served the high-compute, high-memory space; these include IBM’s p-Series (and, to an extent, i-Series), HP’s NonStop servers, and Oracle/Sun’s SPARC servers, all of which use RISC-based processors. But these servers carry a hefty price premium. For example, Oracle’s new SPARC M6-32 server can be configured with up to 32TB of memory, perfect for running in-memory databases. Yet deploying RISC-based systems today is a risk: the upfront capital investment is extraordinarily high, and while they can scale up to very large systems, RISC systems tend not to scale out as well.

That said, these systems carry high margins for IBM, HP, and Oracle. They also draw loyalty and commitment from users, given the investments enterprises must make to leverage them.

The underlying motive

Neuralytix believes the Oracle 12c announcement with its “in-memory” option is a very positive step by Oracle. It did not race for first-mover advantage, but instead designed “in-memory” as an option. For many, this stands in stark contrast to the seemingly “rip-and-replace” approach SAP HANA requires (although Neuralytix observes that SAP HANA is not, in fact, a “rip-and-replace” strategy).

However, we believe that “in-memory” is more about providing a path for customers to buy Oracle’s SPARC systems than about the evolution that Larry Ellison, Oracle’s CEO, suggested during the Oracle 12c announcement.

By investing in “in-memory”, Oracle expects its customers to invest in its SPARC systems, which are optimized to support multi-terabyte memory configurations. It also means that Oracle will once again be the sole provider of the complete computing stack: hardware, software, and services. Single-sourcing critical solutions is not necessarily a bad idea; it often results in optimal outcomes.

The question customers need to ask is: how devoted are they to the Oracle vision? Have their competitors, suppliers, and customers also made that commitment? Or are stakeholders up and down the supply chain more interested in data mobility, and likely to adopt newer technologies such as Hadoop, making it a better strategic decision to stay with distributed x86/64 platforms and data management solutions?

While each enterprise will have different requirements, leading to different hardware and software solutions, Neuralytix is not as positive as either The New York Times or The Wall Street Journal. Realistically, we believe the “in-memory” option will give Oracle devotees new opportunities to expand on their investment. But in the end, we believe these enterprises are likely to balk at the suggestion of going back to the “big-iron” days, moving away from the agile, distributed, modular datacenter in which they have been investing over the last several years.

Sources: New York Times, Wall Street Journal
