
OpenStack – and the Race to Connect the Clouds

Author(s)

Jean Bozman

Introduction

The OpenStack 2015 conference in Vancouver showed that OpenStack standards for building clouds – and connecting them – are finding wider use in enterprise and cloud services datacenters. This vision of hybrid clouds linking enterprise datacenters and clouds, and of interconnecting clouds, demands technologies that make application development faster, operations more efficient – and management of workloads across the data center simpler.

One key ingredient for making that happen is improved support for containers, lightweight software environments that package applications and their dependencies so they can run on a shared operating system kernel, with far less overhead than full virtual machines (VMs). Grouping containers into “clusters” or “swarms” makes it easier to schedule them together, and to move them all at once to a new computing resource in the datacenter. An additional benefit: there is less system overhead when containers are used in place of VMs. Keynote speakers at the OpenStack conference showed both the Docker and Kubernetes container technologies at work, along with the orchestration and management software shipping with the OpenStack Kilo release.
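Placing groups of containers onto shared computing resources is the kind of task that orchestration tools such as Kubernetes and Docker Swarm automate at scale. The following is a minimal, illustrative first-fit placement sketch in Python; the host names, capacities and workload sizes are assumed values for the example, not drawn from any real deployment.

```python
# Illustrative sketch only: a first-fit placement loop, a simplified stand-in
# for the scheduling decisions an orchestrator automates across a cluster.

def place(containers, hosts):
    """Assign each (name, mem_gb) container to the first host with room."""
    free = dict(hosts)               # host name -> remaining memory (GB)
    placement = {}
    for name, mem in containers:
        for host, avail in free.items():
            if avail >= mem:
                free[host] = avail - mem
                placement[name] = host
                break
        else:
            placement[name] = None   # no host has capacity left
    return placement

hosts = {"node-1": 8, "node-2": 4}                # assumed capacities
containers = [("web", 3), ("api", 4), ("db", 4)]  # assumed workloads

print(place(containers, hosts))
# → {'web': 'node-1', 'api': 'node-1', 'db': 'node-2'}
```

Real orchestrators weigh far more than memory – CPU, affinity, failure domains – but the basic shape of the problem, packing workloads onto shared hosts, is the same.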

Signs of increasing adoption and relevance for OpenStack include:

  • Strong attendance, with more than 6,000 people at the Vancouver conference – far more than the 4,500 who attended the OpenStack conference in Atlanta, GA, in May, 2014, or the 2,600 who attended in Portland, OR, in April, 2013.
  • Vendor presence, with all major systems OEMs participating
  • Wide ecosystem of software companies supporting OpenStack
  • Rapid adoption of “container” software, encapsulating workloads into more efficient footprints on virtualized server systems.
  • Multiple community projects that are a part of the broad OpenStack ecosystem, now supported with a “big tent” approach.
  • Broader use-cases for customer deployments, providing real-world feedback on the challenges and difficulties, along with the business benefits, of OpenStack in virtualized infrastructure and clouds.
  • A range of use cases, including enterprise datacenters and cloud service provider datacenters with scale-out infrastructure.
  • Support for “federated identity” allowing user access to many clouds

Kilo, the eleventh release of OpenStack, shipped this year with improved container support and support for bare-metal deployments. The next “snapshot” of OpenStack capabilities, in the Liberty release, is set for October, 2015 – just a few months from now – coinciding with an OpenStack conference in Tokyo.

Cross-data-center management

Achieving a unified approach to managing computing on a data-center-wide basis has been a longtime dream of IT managers. Earlier attempts to do so, via grid computing in the early 2000s, fell short, mostly due to a lack of interoperability between platform “silos” within the data center, and to fragmented, inconsistent adoption. But things are changing.

Now, by using Cloud Computing, and a higher degree of abstraction to separate hardware from software, the goal of unified management, orchestration and automation, is much, much closer.

One sign of that is the high degree of industry support for OpenStack application development and deployment initiatives, with more than 500 companies (vendors and OpenStack users worldwide) supporting the release and its software ecosystem.

Drivers for the OpenStack adoption include:

  • Emergence of cloud datacenters with tens of thousands of servers
  • Economic and space utilization concerns within datacenters create pressure to consolidate systems and to simplify IT
  • Demand to link multiple clouds – for hybrid computing, for multi-cloud communications and cloud-enabled data back-up
  • Interoperability and standards are high priorities for customers seeking to simplify IT infrastructure, to leverage previous multi-platform investments

That’s why OpenStack can be seen in the context of a gradual migration of many workloads from on-prem to off-prem hosting – and of an environment that increasingly brings cloud-style computing inside the enterprise datacenter itself.

Interoperability

Today’s datacenters have been shaped by inherited infrastructure – a mix of platforms accumulated through previous IT decisions. With that heavy history reflecting previous investments, business and IT managers are looking to replace aging technology, while virtualizing most of the remaining technology platforms. Abstraction that separates the software from the underlying hardware is also key in this transition, allowing workloads to be re-hosted more easily, if needed, as applications scale up.

An era of datacenter transformation is leading IT organizations to shift workloads to available server and storage resources, as needed. This approach to datacenter-wide IT supports end-to-end management for hybrid clouds that link enterprise and cloud datacenters. This transformation will support Cloud Computing, Big Data/Analytics, Mobility and Social Media – a combination of megatrends.

OpenStack is one way to take that to the next step, to use Cloud Computing technology to provide a more unified mechanism for application deployment and data-management. Neuralytix notes that there are other, competing technologies and ecosystems, so the element of choice is key to customer decisions.

Working with Other Clouds

Importantly, this year, OpenStack is releasing more interoperability testing capabilities, to certify that different clouds, from multiple cloud providers, can be – and will be – connected. New support for “federated identity” will ensure consistent access methods throughout cloud-enabled deployments.
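In OpenStack, federated identity works by translating the attributes asserted by an external identity provider into a local user and group memberships. The snippet below is a much-simplified, illustrative Python model of that mapping idea; the attribute name and group name are made-up example values, and real OpenStack (Keystone) mapping rules are richer JSON documents, not Python code.

```python
# Simplified sketch of the federated-identity mapping idea: attributes
# asserted by an external identity provider (IdP) are translated into a
# local identity. Rule contents here are assumed example values.

RULE = {
    "remote_attr": "REMOTE_USER",      # attribute expected from the IdP
    "local_group": "cloud-operators",  # local group to grant (assumed name)
}

def map_identity(assertion, rule=RULE):
    """Return the local identity derived from an IdP assertion, or None."""
    username = assertion.get(rule["remote_attr"])
    if username is None:
        return None                    # assertion lacks the required attribute
    return {"user": username, "groups": [rule["local_group"]]}

print(map_identity({"REMOTE_USER": "alice"}))
# → {'user': 'alice', 'groups': ['cloud-operators']}
```

The point of the mechanism is that each cloud applies its own mapping rules, so a user authenticated once by a trusted identity provider can be granted consistent access across many clouds.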

That’s going to be increasingly important, as more workloads are migrated from traditional enterprise datacenters into clouds, and as multiple cloud “types” are adopted – including Microsoft Azure, Amazon Web Services (AWS) clouds and Google clouds, and large cloud services in China and Asia/Pacific that will gain more data processing and storage business from global companies.

With all of these technologies available in the marketplace, Neuralytix believes that a pragmatic strategy will emerge at many large datacenters: Individual vendors will retain proprietary technologies, but the computing platforms themselves will be “linked” via open-source technologies that will be supported across the data center.

It’s significant that companies not previously associated with OpenStack are finding ways to work with it. For example, VMware recently announced that its partner program would include links with OpenStack clouds; and OpenStack technologies will allow OpenStack clouds to exchange data with Amazon Web Services, Microsoft Azure and the Google Cloud Platform. This is important to enterprises, which generally tap multiple clouds to provide multiple data services to their end-users and end-customers, across geographic regions.

Software Defined Infrastructure (SDI)

Cloud adoption (private, public and hybrid), and the virtualization that supports it, are the key milestones in the adoption of software-defined infrastructure solutions in the datacenter. For years, achieving datacenter-wide management capabilities has been a kind of “Holy Grail” quest for IT organizations.

Along the way, previous attempts – such as grid computing in the early 2000s – fell short of their lofty goals, up-ended by obstacles such as incompatible technologies, lack of standardization, insufficient levels of abstraction, and the absence of common APIs. Grids worked, but were not as widely adopted as originally expected – and grid environments were not standardized across vendors. Now, customers are focusing on software-defined infrastructure, provisioning and automation across systems.

Progress Made

In this Vancouver edition of the OpenStack conference, with seaplanes landing in-between the tree-sided mountains surrounding the bay, attendees from more than 100 countries strolled the booths in the convention center, sipping strong coffee. Many of the attendees were from the developer and DevOps community, but the crowd included many IT managers and executives from large, well-known companies, resulting in a multi-generational audience.

Many projects had moved beyond the pilot stage – and are already in production. Examples discussed in Vancouver include deployments at Walmart Labs for its cloud-based customer services; Comcast for its Xfinity services; eBay/PayPal for transaction-based services, even during peak holiday seasons; and the Jet Propulsion Laboratory (JPL)/Caltech for processing space-flight tracking data (NASA co-founded the OpenStack project, along with Rackspace). Also presenting were TD Bank of Canada, for transaction-based financial services, and CERN, for high-performance computing (HPC).

Conclusion

OpenStack has moved from the status of a “nice to have” technology into the realm of one of the foundational building blocks for datacenter infrastructure. To be sure, it resides alongside other technologies for building cloud-style workloads. But what is striking is the way that vendors have managed to build interfaces to the OpenStack APIs, leveraging vendor-specific technologies into hybrid cloud systems spanning the enterprise and clouds “outside” the traditional enterprise datacenter.

Neuralytix expects the movement toward OpenStack support to grow in 2015 – and to broaden to include even more platform-specific frameworks. This trend will support Datacenter-to-Cloud, as well as Cloud-to-Cloud connections (clouds based on different frameworks). Most important will be the impact on business outcomes: It will help businesses and organizations to link data-centric and transactional workloads across their networks – and into cloud service provider networks, with end-to-end management for OpenStack-enabled applications and databases.
