Priorities 2024

Neuralytix presents our annual Priorities 2024. This year our outlook is more bullish than in previous years, reflecting the lingering impact of the global pandemic in the years prior.
By Neuralytix
January 1, 2024

Priorities 2024 is Neuralytix’s annual outlook for the upcoming year. The audience for our annual Priorities reports is enterprise customers who consume technology products and solutions.

Four years have passed since the start of the global COVID-19 pandemic. Prior to its onset, it is our opinion that many enterprise customers had not finalized either their strategies or their decisions as to how they would balance their IT architecture, infrastructure, and applications between on-premises infrastructure and public cloud providers. In fact, it is our opinion that there were many enterprise customers that were not even convinced that the public cloud could or would play a strategic role in satisfying their business, regulatory, and strategic needs.

However, the global pandemic gave enterprise customers little choice, and little time, but to move part, or in some cases all, of their applications and services into the public cloud, whether they were convinced or not, and whether they were ready or not.

In the second half of 2022, the major countries of the world began relaxing their pandemic protocols, and by the last quarter of 2022, in many countries masks were no longer required even in crowded settings such as public transportation and cultural and sporting venues. As such, our outlook for 2023 was made on the assumption that it would be the first full post-pandemic year.

But assumptions being what they are, our 2023 outlook was very conservative, and some considered it negative. Our advice to enterprise customers included minimizing or withholding capital investments in technology; minimizing or withholding changes to technology architectures, infrastructures, and/or operations; and we even cautioned enterprise customers to minimize or withhold organic capacity increases.

In their place, we recommended that enterprise customers use 2023 as a year for strategic planning, encompassing an evaluation of the successes or failures of the public cloud in terms of its ability to provide enterprise customers with competitive advantage, cost efficiencies, and/or compliance with any current or future regulatory requirements.

In our opinion, most enterprise customers agreed with our recommendations and indeed minimized and/or withheld technology spending. We also observed that many enterprise customers did engage in a strategic review of the costs and benefits of using public cloud providers, and that they have either concluded that the cost of the public cloud may exceed the cost of maintaining and operating on-premises datacenters, or have concerns that this may be the case.

But enterprise customers find themselves between a rock and a hard place. Having been forced during the pandemic to migrate their infrastructure, platforms, and applications into the cloud, they spent the last several years optimizing for this new architecture. Reestablishing any degree of on-premises deployment, even where it is deemed better by any measure, will be neither straightforward nor free of risk.

Despite the risks, it is our opinion that many enterprise customers have decided that they will undertake some degree of repatriation of infrastructure and data to self-maintained and managed on-premises datacenters. It is with this foundation in mind that we formulated Neuralytix’s Priorities 2024.

Priorities 2024

Strategic Priorities

1. Take a big picture approach to data planning (as opposed to technology planning) that is designed to last through (and beyond) 2030.
2. Be realistic about Artificial Intelligence (AI).
3. Recognize the competitive advantage and proprietary nature of existing data.
4. C-level IT leaders must establish themselves as “first class” C-level executives.
5. Take a globalized view of data as well as globalizing your data.

Tactical Priorities

6. Avoid public cloud vendor lock-in.
7. When reestablishing on-premises operations, leverage economies of scale to reduce operational and management costs.
8. Reintroduce technology differentiators to create competitive advantage.
9. Repatriate data that is not actively needed for customer facing cloud applications.
10. Do not hesitate to use proven and established technologies over new or trending technologies.

Priorities 2024 (Source: Neuralytix, January 2024)

The Challenges Coming Out of 2023

Earlier, we reviewed our recommendations to enterprise customers for 2023 – namely, minimize spending and changes, and instead focus on strategic planning. We noted that in our opinion, most enterprise customers agreed with our recommendations and acted accordingly.

However, the one thing that enterprise customers were not able to slow down during 2023 was the need to continue optimizing their applications and IT operations to extract as much competitive advantage as they could.

The net result is that enterprise customers become further and further locked into a single-vendor strategy. Enterprise customers have fought against vendor lock-in for over 30 years, yet in the space of less than 5 years, enterprise customers suddenly find themselves at the mercy of a single provider of all technologies: compute, storage, and networking infrastructure; database platforms; web hosting; and even application development, all coming from a single vendor, the public cloud provider.

What is worse is that the single vendor essentially sells the same product to all the enterprise customer’s competitors. Public cloud providers have essentially commoditized the entire computing stack. While it can be argued that enterprise customers have the flexibility to adjust performance and capacity, every enterprise customer has access to the same array of offerings.

Competing enterprise customers can observe each other and dynamically adjust accordingly. With each adjustment (which is generally in an upward direction), competing enterprise customers are spending more and more with public cloud providers to simply reach parity with each other or temporarily leapfrog each other. At the same time, the addressable market for each of these competitors has not changed, but their spending continues to rise.

Observations such as these by enterprise customers, and the realization that the opportunity for true technology differentiation can only be achieved by being in control of much of the technology stack, are driving enterprise customers towards reestablishing on-premises datacenters and repatriating their infrastructure stacks and, most importantly, their data.

But optimization in the public cloud is most often achieved through deeper integration with the public cloud provider’s various services and offerings. This makes it much more difficult (and by extension more costly and riskier) to repatriate infrastructure, data, and operations back to an on-premises datacenter.

This is the major challenge coming out of 2023.

The following sections take a deeper look at each of the priorities we posit.

Disclaimer: we caution that the Priorities and arguments presented are based solely on our observations, our perceived views of markets, and our assumptions of the needs of enterprises on a global basis, and represent generalized views. Enterprises must take the Priorities we present and adapt them to their own business needs and priorities; their own business strategies and goals; and their own assumptions, business practices, and culture.

Strategic Priorities

The first five of our priorities deal with the strategic priorities for enterprise customers.

Priority #1 – Take a Big Picture and Long-Term Approach to Data Planning

Neuralytix argues that the most valuable asset of any enterprise today is its data. Knowing what to do with the data and how to leverage it is the key to long term business success. Enterprises collect, protect, and store more data than they use.

In the past, enterprises have either expired (deleted) the data or simply kept it without much specificity in terms of its future value or use. Terms such as data lakes provided some justification for the proliferation of data that an enterprise kept, but what an enterprise did or could do with the data continued to be somewhat nebulous.

The reason this is Priority #1 is that not understanding the opportunity of data will be costly to an enterprise. This priority is closely linked to Priority #3 (understanding the competitive advantage of your data) and Priority #5 (the globalization of data).

Enterprises must align long term business planning with long term data planning. Depending on the enterprise, “long term” can often refer to a time horizon of 5 to 10 years. In other words, what data needs to be available (either collected or retained) to support and drive the success of the enterprise in the next 5 to 10 years?

Data planning goes beyond collecting and retaining data. It also involves what technologies will be required to take advantage of that data. Almost without doubt, some form of machine learning (ML) and training will be required. But will the enterprise leverage a Software-as-a-Service (SaaS) based or “off-the-shelf” offering, or will a proprietary algorithm need to be developed? How can artificial intelligence (AI) assist in taking the learnings of ML and predictively modeling outcomes that the enterprise can use, and importantly, rely upon? We will examine this in Priority #2.

Fortunately, today, business leaders understand the importance and reliance on technology to drive business success. In fact, increasingly, business leaders are the ones initiating the question of what technology or technologies are required to achieve an objective.

The key difference between such a question being asked by a business leader and what a technology leader can initiate is one of knowledge. We will examine this further in Priority #4.

Priority #2 – Be Realistic About Artificial Intelligence (AI)

It seems like every enterprise is using AI for something. AI is a marketing tool. AI is the basis of competitive advantage. AI is what creates differentiation. AI is driving business. AI seems to be invincible in terms of its ability to help consumers and make enterprises profitable.

But although the term AI stands for artificial intelligence, ultimately, AI requires actual intelligence.

The success of AI is a function of real human beings who formulate bases on which AI takes place. If you get the foundation wrong, then everything that AI does is wrong.

Equally, Generative AI (GenAI) is nothing more than an extension of using actual intelligence to filter out what is useful and what is not, so that the AI can generate the next iteration (or “learn” from the previous generation’s good data versus bad data).

For the time being, senior business leaders, including CEOs, have this romantic notion that AI can quickly be integrated into a business process or used as a marketing tool. But the reality is quite different.

Italicized paragraph added January 2, 2024:

Using AI as a marketing tool will be short-lived. Neuralytix estimates that, at best, using AI as a (in many cases, perceived) competitive advantage or differentiator can only be sustained through the last quarter of 2026. Realistically, Neuralytix believes that using AI in marketing will lose its competitive edge by calendar Q2 2025.

Before the term AI was so prolifically used, enterprises relied on focus groups, consumer panels, and other tools such as customer surveys to drive the intelligence needed to help drive revenue, create new products, or improve customer satisfaction.

Neuralytix argues that AI accelerates the time to analysis and exponentially increases the number of combinations, permutations, and iterations in which analyses can be performed to (hopefully, if actual intelligence did its job properly) come up with results that are more accurate, more timely, and ultimately more useful. Outcomes can then be interpreted and learned from, and further programming can be done to create the next generation of AI, resulting in GenAI.

The emphasis on actual intelligence is that AI is only useful if the right people provide the right approaches and the right methodologies, and the right data is available and used to make the AI engine accurate. This brings us full circle back to the need for data planning (Priority #1) and an understanding not only of what data needs to be retained (i.e. data that is already being collected), but also of what additional data needs to be collected. We further extend this to asking what additional, externally available data needs to be incorporated. For example, is there a business partner that the enterprise works with that can provide data that can help enhance, accelerate, or provide better precision to the development of the AI engine?

Given that many of the methodologies used by Neuralytix to help our Clients succeed are based on observations of human behavior (we do not pretend to be psychologists, but simply consider ourselves observers of human nature), actual intelligence goes further than basic algorithmic modeling and must also include psychological factors for an AI engine to be truly useful and applicable to humans.

Priority #3 – The Competitive Advantage of Proprietary (i.e. Your) Data

Earlier, we noted that public cloud providers have essentially commoditized the entire computing stack. We also noted that competing enterprise customers can observe each other and dynamically adjust their computing stack accordingly to match or supersede each other. These two observations make it very difficult for enterprises to use technology to their advantage.

However, in the first two priorities above, Neuralytix notes the need for technology for enterprises to succeed in the long term. If technology stacks are commoditized, how can enterprises gain competitive advantage and differentiation from technology?

While the first two priorities are reliant on technology, they are even more reliant on data. Neuralytix believes that the best differentiator and the best opportunity for competitive advantage is the leveraging of proprietary data – namely, an enterprise’s own data.

Even if two enterprises use the same software as each other, the data one enterprise collects is specific to its customers and the way its customers interact with that enterprise. The larger the enterprise, the larger the database from which it can draw proprietary information.

Simply using proprietary data on its own is insufficient to create competitive advantage. However, the combination of the competitive intelligence that any enterprise undertakes as part of the normal course of business and the knowledge of the behaviors of customers in the enterprise’s market, augmented with proprietary data, provides a very good basis for intelligent and smart business decisions.

Of course, if the data is good, and this can be made programmable, then using a combination of ML and AI will yield faster, and hopefully more accurate, results.

Priority #4 – CIOs Must Elevate Themselves to “First Class” C-Level Executives

Neuralytix believes that of all of our 10 priorities, this one will cause the greatest controversy, as it is the most subjective.

The tide is changing and changing quickly. However, for too long, Neuralytix believes that the role of the CIO has been relegated to a “second class” C-level executive. Far too often, Neuralytix has observed that other C-level executives, in particular, CEOs, consider CIOs simply as the “computer geek” with a C-level title, because at some point, it became de rigueur to give the leader of any given department a C-level title.

At the same time, CIOs have not exactly advocated for themselves. This must change! The chance for CIOs to elevate themselves has never been more opportune – especially given the heavy emphasis we have placed on the importance of data to business success.

What is often forgotten, even by CIOs themselves, is that they hold the keys to the vault that contains an enterprise’s most valuable assets – the data.

Neuralytix has previously argued that data is currency that, if invested wisely, can provide high returns. While a CIO may not be the person who “invests” the data, the CIO understands the potential value of the data the enterprise already owns. If we combine this understanding with the arguments in Priority #3 above, augmenting that with the proper data planning from Priority #1 and the appropriate application of AI from Priority #2, one can quickly see how the CIO can be an instigator of business rather than simply a fulfillment agent for technology and a data provider to other business leaders.

The evolution of the CIO role is such that many (if not most) CIOs in larger enterprises hail from a business background. They understand business opportunities. They understand the business of the enterprise. Neuralytix also observes that perhaps this change has gone too far: some CIOs are now business leaders who do not understand the business of technology, data, or information.

Italicized paragraphs added January 2, 2024:

We believe that CIOs must advocate for their roles as digital asset “bankers”. They need to understand the larger picture and suggest, promote, and innovate around the value of the assets they control. In other words, by understanding what data the business owns and what data they control, CIOs become the ones who bring ideas to traditional business leaders regarding new products and services, and the evolution of and improvements to existing product lines and services, and who bring to the boardroom how a global view and globalization of data will allow the enterprise to gain advantage on a global scale (as we will discuss below in Priority #5).

The time is now. 2023 saw a depressed market for many, if not most, enterprises – CEOs and business leaders are leaning on trending ideas such as AI as a “silver bullet” to gaining some degree of competitive edge, but these (in many cases, perceived) advantages will be short-lived. See Priority #2.

Priority #5 – Take a Globalized View of Data as Well as Globalizing Your Data

Setting regulatory compliance aside for now (with the term regulatory compliance encompassing privacy, which we will discuss later), many enterprises localize (or, in technology terms, silo) their data in country or by business unit.

Furthermore, technology leaders have struggled for many years (Neuralytix argues for over a decade) to get a handle on what data the enterprise actually owns, or data to which the enterprise has access within the enterprise’s control (especially data stored locally on employee PCs and laptops).

IT administrators are often assigned to business units, and while some data, such as financial, HR, ERP, and transactional systems, is common across business units (including localized business units), there is insufficient global understanding of how much data is replicated between business units or, in some cases, how data can be proliferated between business units and between localized business units. (Here, localized business units include those businesses that operate distinct businesses for each country or region, either by regulation or due to opportunistic incremental business in those countries or regions. Again, we note that regulatory and privacy concerns will be addressed later in this Priority.)

Neuralytix recognizes and accepts that each market has different buying patterns, and that consumers of products and services prioritize different aspects of products and services based on regional, cultural, or simply local requirements. A differentiator that drives demand in the U.S. may be (and is likely to be) different from a differentiator that drives demand in Asia.

Referencing Priority #3, where proprietary data is a major source of, and for, competitive advantage, especially for larger enterprises, a global view of variations in buying decisions can often drive the opportunity for changing buying criteria in different countries and regions, resulting in globalized consumer behavior. This globalization of consumer behavior can ultimately lead to economies of scale in distribution; cost reductions in the bill of materials (BOM) and cost of goods sold (COGS) through improved positions for price negotiations; and/or reductions in the cost of sales and marketing and of sales enablement, training, and development. These cost savings and economies of scale can clearly lead to increased revenue and profits, and ultimately shareholder value.

Of course, by no way does the globalization of data guarantee these results. But taking a global view of the (proprietary) data the enterprise owns and controls, at the very least, from an IT perspective, can enable economies of scale in data management, controls, and even regulatory compliance.

Regulatory Compliance and Privacy

In advising that enterprises take a globalized view of their data and actually globalizing the proprietary data they collect, own, and manage, we recognize the need to comply with local regulations and privacy laws.

Recognizing this, we now incorporate the need to comply, and how our recommendations in this Priority can still be achieved while still conforming to regional, country, and in some cases state based regulatory and privacy laws.

As of January 1, 2024, Neuralytix research shows that over 150 countries have some form of privacy laws. Here we are not considering the efficacy of any of these laws. According to a leading international law firm, DLA Piper, which publishes a free and downloadable handbook called “Data Protection Laws of the World”, 163 countries have privacy laws.

Neuralytix compared this to total population figures from an online source that lists 234 nations, their populations, and each population’s percentage of the world total. From a country perspective, over 93% of the world’s population is covered. However, adjusting for the population of the United States, the third most populous nation in the world and one not currently covered by any form of enacted federal privacy law, reduces the coverage to less than 90%.

As another point of reference, Neuralytix also referenced the 193 member states of the United Nations, and found that only 102 of the 193 member states have enacted some form of privacy law; the share of the world’s population covered by privacy laws (adjusted for the United States) is, statistically speaking, the same as in the DLA Piper findings.
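The adjustment we describe is straightforward arithmetic. As a sketch (the population shares below are illustrative placeholders, not the exact figures from the sources cited):

```python
# Illustrative sketch of the coverage adjustment described above.
# The population shares are hypothetical placeholders, not the cited figures.
world_population = 8_000_000_000

# Hypothetical: share of the world population living in countries
# that have enacted some form of privacy law
covered_share = 0.93
covered = world_population * covered_share

# Subtract the U.S. population, since no comprehensive federal
# privacy law has been enacted there
us_population = 335_000_000
adjusted = covered - us_population

adjusted_share = adjusted / world_population
print(f"Adjusted coverage: {adjusted_share:.1%}")  # just under 90%
```

With these placeholder figures, the adjusted coverage lands at roughly 89%, consistent with the "less than 90%" figure above.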

While generic in nature, and by no means comprehensive, we believe that the use of anonymized metadata, working in concert with regional and local enterprise resources, and using qualitative observations adjusted for local culture, can yield enterprises sufficient or “good enough” data to effectively globalize (aggregate) their data sources, comply with regulatory requirements, and take our recommended globalized view of their data.
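As an illustration of the kind of anonymized metadata we have in mind, the sketch below replaces direct identifiers with salted one-way hashes and then aggregates per region. The field names, salt, and records are hypothetical, and note that salted hashing is strictly pseudonymization, which regimes such as the GDPR treat differently from full anonymization:

```python
import hashlib

# Hypothetical salt; in practice it would be managed (and rotated)
# per jurisdiction, not hard-coded.
SALT = b"rotate-me-per-jurisdiction"

def pseudonymize(customer_id: str) -> str:
    # One-way hash so records can be correlated across systems
    # without exposing the underlying identifier.
    return hashlib.sha256(SALT + customer_id.encode()).hexdigest()[:16]

# Hypothetical local records with a direct identifier.
records = [
    {"customer_id": "C-1001", "region": "EU", "purchases": 3},
    {"customer_id": "C-1002", "region": "EU", "purchases": 1},
    {"customer_id": "C-2001", "region": "APAC", "purchases": 5},
]

# Strip identifiers; keep only pseudonyms and aggregable metadata.
metadata = [
    {"pid": pseudonymize(r["customer_id"]),
     "region": r["region"],
     "purchases": r["purchases"]}
    for r in records
]

# Aggregate per region: this is the "globalized" view that travels,
# while the raw identifiers stay local.
by_region: dict[str, int] = {}
for m in metadata:
    by_region[m["region"]] = by_region.get(m["region"], 0) + m["purchases"]

print(by_region)  # {'EU': 4, 'APAC': 5}
```

The aggregated view can then be pooled globally while the identifying records remain within their local jurisdiction.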

Tactical Priorities

Priority #6 – Avoid Public Cloud Vendor Lock-In

A brief history

For almost 60 years, enterprises have had the choice of not being tied to a single vendor for their computing resources.

IBM introduced its first mainframe computer, the System/360, in April 1964. Twelve months later, RCA introduced the Spectra 70, which could run the same software as the IBM System/360. Although RCA withdrew from the mainframe market in 1971, the precedent had been set: enterprises did not have to buy from just one vendor to gain the benefits of commercial computing. In fact, in 1975, Amdahl (with manufacturing by Fujitsu) shipped an IBM-compatible machine to NASA.

The same happened in the disk drive industry. Although the IBM RAMAC drive, introduced in 1956, is most commonly considered the first disk drive in the industry, by 1968 Memorex had shipped the first IBM-compatible disk drive, and since then hundreds of data storage companies have emerged to challenge IBM’s early dominance.

By the 1980s, once interface technologies such as ESDI and SCSI became standards, adherence to these standards allowed disk drive manufacturers to proliferate.

The same can be said of the IBM-compatible PC market. Within a year or two of the original IBM PC’s launch in the summer of 1981, companies including Olivetti and Compaq (famous for its Compaq Portable, often referred to as the “luggable”) gave enterprises choices in hardware to run the same operating system and software as the IBM PC, often for less money and with better performance. For example, the IBM PC XT cost over $7,000 when launched, compared to the Compaq Portable, which claimed full compatibility with the IBM PC XT and launched at roughly $3,000.

The COVID impact

When the global COVID-19 pandemic hit in early 2020, enterprises, whether they believed in the public cloud, were ready for the public cloud, or objected to the public cloud, had very little choice but to adopt it.

During the pandemic, as we noted above in the section entitled “The Challenges Coming Out of 2023”, enterprises had very little choice but to leverage integrations from their public cloud provider to achieve competitive parity or competitive advantage. While competitive advantages were often short lived, deeper and deeper integration with the enterprise’s public cloud provider was nonetheless necessary.

As a quick digression, Neuralytix notes that the global pandemic demonstrated that the idea of multi-cloud was simply untenable, although it did promise the ability to provide vendor diversity.

Returning to the challenges faced by enterprises, the arguably obligatory, ever-increasing (and particularly native) integrations within the public cloud provider ecosystem meant that enterprises were essentially locking themselves further and further into a single vendor.

So the reduced costs, additional benefits (including performance), and innovations that the industry had fought for nearly 60 years to achieve were completely negated within a two to three year period.

Neuralytix also posits that whether through increased integration, simple organic scaling, or the fact that infrastructure hardware has become more reliable (thus reducing the need to replace core infrastructure every three to five years), enterprises’ cost commitments were essentially at the mercy of the public cloud provider, and in many cases some of these commitments would continue ad infinitum.

As our quick digression above noted, the global pandemic also demonstrated the deficiencies of a multi-cloud strategy. Essentially, wherever your data resided, that was the public cloud provider to which you were committed. Moving terabytes or even exabytes of data between public cloud providers is expensive not only in monetary commitments, but also in that the enterprise is responsible for ensuring data integrity, data consistency, and data synchronicity. While these can be achieved within the confines of a single datacenter, doing so over the public Internet or even a leased line is almost impossible, and even if it approached technical viability, it would be cost prohibitive and unjustifiable for most enterprises.
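The monetary side of that commitment is easy to illustrate with a back-of-the-envelope calculation. The per-GB egress rate below is a hypothetical placeholder (actual rates vary by provider, region, and volume tier), but the scaling is the point:

```python
# Back-of-the-envelope egress cost for moving data out of a public cloud.
# The per-GB rate is a hypothetical placeholder, not any provider's
# published price; real rates vary by provider, region, and volume tier.
EGRESS_RATE_PER_GB = 0.09  # USD, illustrative

def egress_cost_usd(terabytes: float,
                    rate_per_gb: float = EGRESS_RATE_PER_GB) -> float:
    gigabytes = terabytes * 1024
    return gigabytes * rate_per_gb

# 10 TB, 1 PB, and 100 PB (approaching exabyte scale)
for tb in (10, 1_000, 100_000):
    print(f"{tb:>7,} TB -> ${egress_cost_usd(tb):>13,.0f}")
```

At these illustrative rates, a petabyte costs on the order of $100,000 to move once, before accounting for the engineering effort of keeping the data consistent during the transfer.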

The only way enterprises can remove single-vendor lock-in is by reestablishing on-premises operations, which also returns to them the opportunity to use different technologies to achieve differentiation and sustainable competitive advantage (Priority #8), as we will outline in Priority #7.

It is important to note that Neuralytix is a supporter and proponent of the public cloud, but only where appropriate, and not on a wholesale basis.

Priority #7 – Reestablish On-Premises Operations, Using Economies of Scale to Reduce Operational and Management Costs

Neuralytix’s position on how public cloud providers make money is that they market essentially the same product range, with certain variations, and sell it in varying sizes; that is, public cloud providers leverage economies of scale. You can buy a small, medium, large, extra large, etc. compute platform.

An analog to this model is the integrated designers, manufacturers, and retailers of casual clothing such as UNIQLO, H&M, and The Gap. You can buy variations of the same product. Take a dress shirt as an example. A dress shirt may come in various colors or designs (stripes, checked, solid), sizes (small, medium, large, etc.), and other basic differences such as collar and cuff styles or whether there is a pocket or not. The quality of each line is the same. The manufacturing is essentially the same. Although the number of combinations and permutations may be quite large, ultimately it is the same product. For a company like Gap Inc. that owns multiple brands, the quality of the shirt may be one other differentiator. We argue that a shirt from Old Navy may be of lesser quality (in our example, the cotton used may be of a lower thread count) compared to a shirt from its higher end brand, Banana Republic, which may market a similar (if not the same) shirt with a higher thread count, but with fewer combinations and permutations to give it a greater sense of exclusivity or prestige. Nonetheless, Gap Inc. may leverage the exact same manufacturer to improve its economies of scale.

In the same way, public cloud providers may not use different brands to differentiate, but use variations such as shared computing resources vs dedicated computing resources; different numbers of cores, memory, or storage capacity; different types of CPUs or GPUs; different types of storage technologies such as block, versus file, versus object; etc.

Another type of differentiator may include managed infrastructure platforms (such as integrated container technologies such as Docker or Kubernetes); managed databases; etc.

We refer to public cloud providers such as Akamai (previously Linode), which has a relatively small number of offerings compared to Amazon AWS but a very clear price list, to illustrate our point.

If an enterprise decided to move its data to Akamai, then for all intents and purposes it would limit itself to the offerings of Akamai; as noted earlier, moving data around would likely be cost prohibitive and would introduce a layer of risk that, Neuralytix argues, most enterprises would find unjustifiable.

However, in reestablishing an on-premises datacenter, enterprise customers can certainly leverage a similar strategy at their own scale. With many applications and platforms now running in hypervisors or distributed containers, enterprises can utilize the promise of Hyper-converged Infrastructure (HCI). In fact, this is the infrastructure architecture called composable infrastructure, or what Neuralytix calls Datacenter 4.0, a term we introduced in March 2017 to distinguish composable infrastructure’s acronym (CI) from continuous integration.

But Datacenter 4.0 does have its limitations. Just like the public cloud providers, in deploying a Datacenter 4.0 strategy, enterprises will run into the same small, medium, large, etc. restrictions as the public cloud provider. The difference is that enterprises can utilize technology differentiators that differ from the more generic, industry-standardized offerings of public cloud providers. The small, medium, large, etc. offerings need not be limited to (say) Intel processors with “some” solid state drives (SSDs) and “some” memory capacity. Instead, the economies of scale may come in the form of higher performing or even lower performing memory technologies, or a combination of SSDs and magnetic hard disk drives (HDDs), consistent with Priority #10. They might also include a mix of trunked high speed 100 Gb/sec Ethernet to deliver 400 Gb/sec of Ethernet speeds and/or much lower cost 10 Gb/sec Ethernet, giving the enterprise greater control over the technologies available for its economies of scale.
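The economics of tailoring node profiles can be illustrated with a toy comparison of a uniform catalog against a mixed fleet. Every cost and throughput figure below is a hypothetical assumption, not a quote from any vendor:

```python
from dataclasses import dataclass

# Hypothetical comparison of a one-size catalog versus a tailored node mix.
# All cost and throughput numbers are illustrative assumptions.
@dataclass(frozen=True)  # frozen so profiles are hashable dict keys
class NodeProfile:
    name: str
    monthly_cost: float   # USD per node, illustrative
    throughput: float     # arbitrary performance units per node

def cost_per_unit(fleet: dict[NodeProfile, int]) -> float:
    """Blended monthly cost per unit of performance across a fleet."""
    total_cost = sum(p.monthly_cost * n for p, n in fleet.items())
    total_perf = sum(p.throughput * n for p, n in fleet.items())
    return total_cost / total_perf

generic = NodeProfile("cloud-large", 2_000, 100)   # uniform offering
hot = NodeProfile("nvme-100gbe", 3_000, 220)       # SSD + trunked 100 GbE
cold = NodeProfile("hdd-10gbe", 800, 40)           # HDD + low-cost 10 GbE

uniform_fleet = {generic: 10}
tailored_fleet = {hot: 4, cold: 6}

print(f"uniform : ${cost_per_unit(uniform_fleet):.2f} per unit")
print(f"tailored: ${cost_per_unit(tailored_fleet):.2f} per unit")
```

Under these assumed numbers, matching node profiles to workload tiers yields a lower blended cost per unit of performance than a single uniform tier; the point is the ability to choose the mix, not the specific figures.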

Additionally, Datacenter 4.0 does not limit enterprises to some generic technology determined by the public cloud provider for servers (i.e., limited to a particular "CPU" or "GPU" brand/technology), networking (which might be limited to, say, non-trunkable 10 Gb/sec or 100 Gb/sec), or storage (the most nebulously defined of all the technologies, in that enterprise customers have no choice over which SSD or HDD brand they can use to create technology advantages). Instead, Datacenter 4.0 simply promotes investment in infrastructure that is reusable.

Having full control over the specific technologies, brands, and even specific models allows enterprise customers to exercise absolute and full control over the technology differentiators that work specifically for their needs and create sustainable competitive advantage, consistent with Priority #8.

Priority #8 – Reintroduce technology differentiators to create competitive advantage.

"In the old days …" the use of specialized technologies was able to give enterprises differentiation that allowed competitive advantage.

Ironically, in today's world SSDs have become more of a norm than a differentiator. But as far back as the late 1990s, some enterprises used SSDs to differentiate. In one case, a stockbroking firm used SSDs (back then, built not from NAND flash but from very expensive DRAM, backed by a magnetic HDD) that allowed it to trade a stock 30 seconds faster than its competitors. At the time, the firm spent around US$500,000 (almost US$1M in today's money) on this technology each and every year. It justified the expense by the multiples in commission revenue it earned as a result. (To be fair, the same firm also used the SSD because it did not want to rewrite its application, and the SSD overcame the increasing burden created by the application.)

As a quick digression, it is equally important to note how the firm also minimized risk by not rewriting its application. Yet it was still able to gain a quantifiable competitive advantage through the use of a technology differentiator, in this case DRAM SSDs over rotating magnetic HDDs, since the SSDs were plug compatible and introduced no discernible incompatibilities into its infrastructure.

This is a perfect example of what Neuralytix asserts the public cloud cannot do. It is too easy for a public cloud customer to buy its way to parity or short-term advantage, because every customer has access to the same technology.

Now, it could be argued that a competing stockbroking firm could also have purchased DRAM SSDs to achieve some degree of parity in performance. But it must be clearly noted that the application was proprietary, and a competitor would have had to replicate the same software and infrastructure combination to achieve this.

However, one could present an opposing argument: platform software (such as databases) has also commoditized, so the degree to which competitors can gain advantage is limited. This is true insofar as competitors can buy the same "size" of the same (or even different) databases to run their software. Any tuning of the databases is somewhat limited, because the promise of the public cloud is that enterprise customers can essentially outsource the management of infrastructure and software (in this case, databases) to the public cloud provider; and the way public cloud providers make money, as has been noted many times in this report, is through economies of scale. If every customer demanded or needed highly specialized customizations to the database platforms, this would take away the advantage of the public cloud and reduce the public cloud provider's ability to maximize its profits.

But this argument fails because it is based on the assumption that the data required for the application is stored in the cloud, which commoditizes the storage and network characteristics of the SaaS. By leveraging an enterprise-specific infrastructure architecture, especially for the storage infrastructure, and having repatriated the relevant data from the public cloud provider per Priority #9, enterprises can return to a position in which they can use technology to create (even the smallest) competitive differentiation that their competitors cannot match.

Priority #9 – Repatriate data that is not actively needed for customer facing cloud applications.

Even before the start of the global pandemic, Neuralytix was an advocate of the considered placement of data and storage as it relates to the public cloud. It is our assertion that all data that is not necessary for the operational aspects of customer facing cloud applications should be protected and controlled (read: managed) by the enterprise itself, and not by a third party, including public cloud providers.

Our advice is based on a very simple premise. You cannot assign away responsibility. Whether you are a storage/system administrator, a CIO, or the CEO, you cannot blame a third party (be it a public cloud provider or some other managed provider) for mismanagement, loss, damage, corruption, or any other kind of event that would lead to something that would compromise the integrity of an enterprise’s data.

Even the largest of the public cloud providers can, and have, experienced outages. Some outages are inconsequential, while others end up on the front page of the financial or business press. Some outages are somewhat contained, and may not impact transactional or customer facing applications, while others do.

For the most part, public cloud providers offer a guaranteed service level agreement (SLA) of 99.9%. When they fail to meet this SLA, the penalties are often limited to reimbursement of the charges incurred in excess of the SLA, either in fact or by way of credits towards other, or future, services.

While this is reasonable from a business perspective (in that, if the public cloud provider failed to deliver what it promised, it prorates its charges to the enterprise customer), it in no way compensates the enterprise customer for real costs and losses, including, but not limited to, opportunities lost, revenues/profits lost, and reputational loss.
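
To put those credit-only penalties in perspective, a 99.9% SLA still permits a meaningful amount of downtime. A minimal sketch (the 30-day month is our simplifying assumption for illustration):

```python
# Downtime budget implied by a 99.9% availability SLA.
# The 30-day month is a simplifying assumption for illustration.
def allowed_downtime_minutes(sla=0.999, days=30):
    """Minutes per period during which the provider can be down
    without breaching the stated SLA percentage."""
    return (1 - sla) * days * 24 * 60

print(f"{allowed_downtime_minutes():.1f} minutes per month")  # 43.2 minutes
```

In other words, roughly three-quarters of an hour of outage per month incurs no penalty at all.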


While it may seem simpler, or at least from one perspective cheaper, for enterprises to pay a third party to manage their data, the reality is often quite different. Today, given the nature of global transactions and the speed with which almost any fault can go viral within a matter of minutes, any outage by an enterprise is likely to be noticed and reported quickly, resulting in the worst of all losses: reputational loss, an incalculable loss to any enterprise, and a cost that may haunt the enterprise for years to come.

Let’s use the example of a US regulated financial firm and the reality of using the public cloud to store its regulated data. For this example, we are going to assume that the data is stagnant and that over the course of the 7 years demanded by the Securities and Exchange Commission (SEC), the data will not actually be required to be used again.

Example of the use of public cloud storage of a regulated financial firm

First, we assume that the data to be stored is stagnant for the duration of the 7 years. We will change this assumption as the example moves along. Let’s also assume that data is stored at the lowest cost storage with some degree of redundancy at AWS. This would be AWS S3 Glacier Deep Archive. The cost is US$0.00099 per GB per month. Furthermore, for the sake of argument and illustration, we will assume that 70TB of data needs to be retained (the seemingly odd 70TB will become obvious shortly).

The cost calculation is as follows:

AWS S3 Glacier Deep Archive (US$ cost per GB per month) US $0.00099
GB in a TB 1024
# TB stored 70
# months in a year 12
# of years necessary 7
Total cost over 7 years US $5,960.91

Cost of storing 70TB for 7 years using AWS S3 Glacier Deep Archive (Source: AWS, Neuralytix, January 2024)
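
The table above reduces to a one-line multiplication: a flat per-GB monthly rate applied to the full capacity for every month of retention. A minimal sketch of the calculation:

```python
# Retention cost model for the table above: a flat per-GB monthly rate
# applied to the full 70TB for all 84 months.
GB_PER_TB = 1024

def retention_cost(tb, years, usd_per_gb_month):
    """Total cost of holding `tb` terabytes for `years` years."""
    return tb * GB_PER_TB * usd_per_gb_month * 12 * years

deep_archive = retention_cost(70, 7, 0.00099)
print(f"US ${deep_archive:,.2f}")  # US $5,960.91
```

The same function can be reused for any other storage class simply by swapping in its per-GB rate.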

Now let's compare the cost of a 10TB enterprise grade 7,200 RPM SATA drive. Since we're using Amazon AWS in our example, we will use Amazon's retail pricing as a reference for the cost of our SATA drives:

Cost of Western Digital 10TB WD Gold Enterprise Class Internal Hard Drive (7200 RPM Class, SATA 6 Gb/s, 256 MB Cache, 3.5″, WD102KRYZ) US $240.20
# drives needed to form a RAID 5 array to store 70TB 8

Cost of 8x 10TB enterprise grade HDDs forming a RAID 5 array providing 70TB of storage (Source: Neuralytix, January 2024)

This already represents a cost saving of 68% on the drive hardware alone.

To make our example more realistic, let's attribute US$1,000.00 for the cost of the RAID 5 controller, based on attributing a share of a static enterprise array with many more terabytes that is shared among other applications. (Neuralytix believes this cost attribution is probably 4x the reality.) We will also add 7% of the cost per year to cover maintenance.

Cost of disk drives US $1,921.60
Attributed cost of RAID 5 controller US $1,000.00
Subtotal US $2,921.60
7% maintenance cost per year for 7 years (i.e. 49% of subtotal) US $1,431.58
Total US $4,353.18

Total cost of owning and maintaining a 70TB RAID 5 shared array (Source: Neuralytix, January 2024)
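
The on-premises side of the comparison can be sketched the same way, using only the figures from the tables above:

```python
# On-premises RAID 5 cost model from the table above, compared with the
# 7-year AWS S3 Glacier Deep Archive figure.
def on_prem_cost(drive_price=240.20, drives=8, controller=1000.00,
                 maint_rate=0.07, years=7):
    hardware = drive_price * drives + controller     # drives + attributed controller
    return hardware + hardware * maint_rate * years  # plus 49% maintenance

aws_cost = 70 * 1024 * 0.00099 * 12 * 7  # US$5,960.91 over 7 years
total = on_prem_cost()
print(f"US ${total:,.2f}, saving {1 - total / aws_cost:.0%}")  # US $4,353.18, saving 27%
```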

So far, we are still saving 27%.

Here is where we go from black and white to grey. Our calculation of US$4,353.18 does not include the cost of operations staff to maintain the RAID 5 array. This is true. We argue, however, that the cost of maintaining an on-premises array with minimal access is not dissimilar to that of maintaining data stored in AWS S3 Glacier Deep Archive over the 7 years, since a premium for management is already built into AWS S3 Glacier Deep Archive, and since the data we're talking about is stagnant.

However, since this is regulated data, there has to be some degree of regular testing of the integrity of the data. In the case of Amazon AWS, to test the integrity of the data, the data must move to a tier that allows the testing to be performed. In this case, the data would have to be promoted from AWS S3 Glacier Deep Archive at a premium (as egress carries a relatively steep cost), stored for a short period of time at higher priced storage, and then demoted back to AWS S3 Glacier Deep Archive (with no cost for ingress).

Assume that we test the integrity of the data once a year (although best practice would be at least twice a year); we then begin to incur costs associated with promoting and demoting data. This will add to the US$5,960.91 cost for Amazon AWS indicated above. For simplicity, we will leave our argument at "more than" US$5,960.91 to persist the data over 7 years using AWS S3 Glacier Deep Archive.

The maintenance cost across the 7 years means we do not have to worry about drives failing. We have also assumed a constant year-by-year maintenance cost, even though maintenance is likely to be included in the price of the array for at least the first 3 years, with increases in the latter 4 years. We maintain that our 49% is sufficient to cover the latter 4 years; indeed, to reach parity with the AWS S3 Glacier Deep Archive cost, the latter 4 years of maintenance would have to cost 104% of the cost of the shared array, which is severely unrealistic.
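
The 104% figure can be checked with quick arithmetic, under the assumption (our reading) that maintenance is effectively bundled for the first 3 years, so the latter 4 years' maintenance alone would have to close the gap to the AWS figure:

```python
# Sanity check of the 104% parity claim, assuming (our reading) that the
# latter years' maintenance alone must close the gap to the AWS cost.
hardware = 240.20 * 8 + 1000.00          # drives plus attributed controller
aws_cost = 70 * 1024 * 0.00099 * 12 * 7  # US$5,960.91 over 7 years

gap = aws_cost - hardware                # extra spend needed to reach parity
print(f"{gap / hardware:.0%} of the shared array cost")  # 104%
```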

An alternative that avoids egress costs and the management costs of promoting and demoting data would be to use AWS S3 Glacier Instant Retrieval. This is admittedly an extreme example, as the data is stagnant and there is really no argument for using this level of service. Nonetheless, let's look at the cost of this service for our 70TB example:

AWS S3 Glacier Instant Retrieval (US$ cost per GB per month) US $0.004
GB in a TB 1024
# TB stored 70
# months in a year 12
# of years necessary 7
Total cost over 7 years US $24,084.48

Cost of storing 70TB for 7 years using AWS S3 Glacier Instant Retrieval (Source: AWS, Neuralytix, January 2024)
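
A minimal sketch comparing the AWS S3 Glacier Instant Retrieval rate to the US$4,353.18 on-premises figure from earlier; all inputs come from the tables in this example:

```python
# Instant Retrieval rate applied to the same 70TB/7-year retention model,
# compared with the on-premises total computed earlier in this example.
instant = 70 * 1024 * 0.004 * 12 * 7   # 70TB at US$0.004/GB-month for 7 years
on_prem = 4353.18                      # on-premises total from earlier

print(f"US ${instant:,.2f}, ~{instant / on_prem:.1f}x on-premises")  # US $24,084.48, ~5.5x
```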

This represents a 5.5X cost over our proposed on-premises solution for a US regulated financial firm.

Example of the use of public cloud storage of an enterprise regulated by HIPAA

Imagine now the cost differential over 100 years for a US HIPAA regulated healthcare firm. For the same 70TB, the AWS S3 Glacier Deep Archive solution would cost US$85,155.86; and assuming we replace the on-premises solution every 7 years, which in our opinion is a reasonable assumption, the on-premises solution would cost US$62,188.29. Furthermore, the on-premises cost does not take into consideration the falling cost of storage over the course of the 100 years.

The difference here is still 27%, not including the cost of promoting and demoting data at least once a year, nor the falling cost per TB of HDD over time. Equally, it does not take into consideration the cost of migrating from array to array every 7 years.
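
Scaling both 7-year totals linearly to the 100-year horizon (our simplifying assumption, ignoring the price declines and migration costs noted above) reproduces the figures just quoted:

```python
# Linear extrapolation of both 7-year totals to a 100-year retention
# horizon; ignores falling storage prices and array-to-array migrations.
aws_7yr, on_prem_7yr = 5960.91, 4353.18
scale = 100 / 7  # array replaced every 7 years; AWS billed continuously

aws_100 = aws_7yr * scale          # US$85,155.86
on_prem_100 = on_prem_7yr * scale  # US$62,188.29
print(f"AWS US ${aws_100:,.2f} vs on-prem US ${on_prem_100:,.2f}")
```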

Other considerations

To be fair, our example excludes the cost of real estate for an on-premises deployment, as well as the environmental costs associated with running the datacenter. While these are real, and not insignificant, costs, it is our considered argument that, with some degree of economies of scale, the attributed costs of these items would still render an on-premises implementation less costly or, in the worst case, at, or perhaps slightly above, parity with a public-cloud option.

Even if the cost of on-premises storage is slightly above that of public cloud storage, an enterprise cannot outsource away the responsibility of integrity, consistency, and availability of the data just because it is stored in the public cloud.

Potential consequences of using public cloud storage

Part of the reason for keeping the data for the required time is regulatory. If there is a perceived breach of regulation, a Court may be asked to determine the innocence or guilt of the enterprise.

If a Court demands that data be produced and, as happens from time to time, the public cloud provider has an outage, or worse still, the data is corrupted, lost, or its integrity otherwise compromised, the enterprise may be unable to produce (reliable) data. Since public cloud providers only guarantee a 99.9% SLA, and since the Courts have set precedents that the enterprise is by default considered at fault, the enterprise cannot argue that the failure is the fault of the public cloud provider. It is the IT equivalent of the "the dog ate my homework" argument.

It is still the responsibility of the enterprise to produce the data. While Neuralytix in no way represents itself as having any legal standing or knowledge, it is not beyond the realm of possibility that a Judge, who may be less technologically savvy, would ask the enterprise why it assumed a third party would be responsible for maintaining the data. The regulatory responsibility is on the enterprise, not the public cloud provider.

This situation is aggravated in certain parts of the world where the use of a foreign (specifically, US owned) public cloud provider, such as Amazon AWS, is considered less desirable than a local public cloud provider. Again, it is not beyond the realm of possibility that a local Court may be further biased by the enterprise's choice of public cloud provider. An example would be certain countries in Europe, such as France.

Priority #10 – Do not hesitate to use proven and established technologies over new or trending technologies.

When it comes to enterprise customers, using the latest and "greatest" technologies is often the wrong approach. Yes, there are many new and emerging technologies that hold a lot of promise. The current prime example is Kubernetes. It promises (and for the most part delivers) much of what VMware offers at the virtual server level, but at the container level.

But let's be honest: if you are an enterprise customer, would you wholesale move from your (relatively) stable VMware environment, or whatever environment you are running, to Kubernetes without at least months, and probably years, of testing? Neuralytix research clearly shows that the answer is NO.

In fact, our research shows that many "new" technologies, no matter their promise, do not get implemented for at least 5 years, and sometimes 10. Testing of all existing applications and their dependencies, plus regression testing, documentation, etc., takes time.

Many enterprise customers have already documented many (our research shows most) of the challenges and problems that may arise. This documentation lists the procedures for remedy, or (often) custom scripts are written to avert such problems in the first place.

Take another example: many large multinational banks still run mainframes with archaic software. Why? It is certainly not to keep IBM in business. It is because no CIO is going to be the first to put up his/her hand and propose replacing a fully operational banking environment with a new one without extensive testing; and often the risk of change, even when testing shows a positive outcome, is simply too great. We noted in Priority #8 the stockbroking firm that invested over US$500,000 per year in code that was clearly out of date, running on servers that were also out of date, simply because it works, and the risk of change far outweighed any marginal benefit gained from upgrades.

The same goes for technologies such as rotating magnetic HDDs. Just because SSDs are plug compatible, and just because they sit behind proven controllers, does not mean that a datacenter needs to move to an all-SSD design.

Enterprise customers must consider the use case for SSDs. Is access to the data so critical that it requires SSDs? Neuralytix research shows that, despite data reduction technologies, the reality is that SSDs still carry a per-terabyte price premium that is an order of magnitude higher than HDDs.

SSDs also have the drawback that they tend to wear out within 3-5 years (although minimal use may extend their lifetimes), while a well-maintained HDD can last 7+ years. For regulated industries such as financial firms, 7 years matches the period for which data must be retained. A lot of this data is stagnant. There is even an argument to be made for migrating this data to streaming tape, although the perception remains, even after 50 years of use, that the cost of managing streaming tape and its ability to persist data are still questionable.

Our argument here is that as enterprises repatriate their data, enterprises should not be afraid to use proven, established technologies, even if such technologies are considered passé and uncontemporary. The adage of “if it ain’t broke, don’t fix it” applies here more than ever.


In concluding our presentation of the Priorities for 2024, we refer to the disclaimer noted earlier:

We caution that the Priorities and arguments presented are based solely on our observations, our perceived views of markets, and our assumptions of the needs of enterprises on a global basis, and represent generalized views. Enterprises must take the Priorities we present and adapt them to their own business needs and priorities; their own business strategies and goals; and their own assumptions, business practices, and culture.

We can sum up our outlook for 2024 in the following way:

  • The CIO needs to be a hero to the enterprise because he/she holds the key to the most valuable asset of an enterprise – its proprietary data. The enterprise's proprietary data is likely to be the primary source of competitive advantage; don't ignore it.
  • Leverage your proprietary data but take a global view of it – literally. Ultimately enterprise customers are global companies, not local companies.
  • Understand that AI requires actual, proprietary data to deliver real value.
  • Define and execute on reestablishing an on-premises strategy and begin repatriating data that is not actively needed for customer facing cloud applications. Your on-premises strategy should use proven technologies, ignoring new, emerging, or trendy technologies.
  • Finally, avoid vendor lock-in.

2024 is not going to be an easy year. Tough decisions need to be made and execution must begin. We believe that our recommendations will allow enterprise customers to establish the foundations that will help them succeed through 2030.

Good luck!
