Unlike Hamlet, a 21st-century CIO isn’t confronting existential questions of mortality and morality. That would be too easy. Alas, poor Yorick: all our IT choices are like selecting wallpaper in a churning Purgatory in which none among the user community is ever happy, because, denied the nirvana they’ve glimpsed, everything you show them on this side of the veil is a pale shadow at best. All life is suffering, said the Buddha, and nobody knows this better than a CIO theoretician forced to mix spiritual metaphors in a desperate quest to capture the angst of the current Cloud/On-Premises zeitgeist. (Except maybe his readers.)

Oh yeah, those end users would love to wallow in the mud of public cloud resources, unfettered by budgetary constraints, security concerns, GDPR or other data-location regulations, and technology lifecycle horizon considerations.  Once a developer or other end user with a credit card and internet access has tasted the instant gratification delivered by agile provisioning and containerized cloud-native services built on Kubernetes, Kafka, RocksDB, and all that jazz, it’s hard to please them with even the best that yesterday had to offer on-prem.  (How ya gonna keep ’em down on the farm when they’ve seen those city lights?)

Increasingly, CIOs want their on-prem environments to resemble the cloud, if not actually be partially cloud-based (though not always public cloud, which is sometimes avoided for data privacy, special security, or other reasons).  You might expect legacy applications to still reside on tried-and-true in-house servers; if it ain’t broke, don’t fix it, right?  But even those are being phased out in favor of cloud-based services.  Some examples:  those legions of MS Exchange servers are going, going, gone, in favor of Office 365 and/or webmail;  back-office business ops suites from J.D. Edwards, Siebel, Oracle, etc., have by now been replaced by Salesforce.com or Oracle Cloud hosted alternatives;  and even SAP customers are increasingly choosing SAP’s own cloud-based options, or running on cloud alternatives like Dell EMC’s Virtustream.  What is left is being rearchitected for agile provisioning that, while perhaps not as instant as public cloud, can still be measured in hours (or a few days) instead of the weeks or months traditional IT departments used to need to procure, configure, deploy, and then adapt a system for actual daily use by employees.

This is sometimes attempted with pre-integrated (occasionally pre-racked) Converged or HyperConverged Infrastructure (“CI” or “HCI”): building blocks for rapid deployment of familiar components in balanced increments of compute, network, and storage elements, sometimes further integrated with one or more hypervisors (VMware being the most prevalent, with others gaining market share, such as Nutanix’s Acropolis, Microsoft’s Hyper-V, and variations on Xen or KVM).  However, even after having racked-and-stacked CI or HCI servers, a CIO will still preside over an innately non-cloud infrastructure.  Unless further steps are taken, that is.

I’m not here to advocate for any particular CI or HCI solution.  In fact, I am specifically pointing out that in their raw state they may only take you halfway to where you want to go, and leave you (or your end-user community) unsatisfied.  One of the hallmarks of a cloud-native environment is that resources are ample and available to be provisioned rapidly.  As with CI or HCI itself, the fundamentals are compute, networking, and storage.  HCI can help you craft a method for allocating compute resources, and possibly even networking if you leverage the network virtualization some flavors provide (especially VMware).  Storage virtualization (e.g., vSAN) looks as though it covers the third leg, but that approach is limited to block storage.  If your goal is to re-create a cloud-like environment on-prem, you need both larger pools of capacity and more robust storage abstractions than raw block.

Consider the seamless object storage layers offered by the largest and most widely used public cloud providers outside China:  AWS offers the effectively inexhaustible S3 (Simple Storage Service), and Azure Blob Storage and Google Cloud Storage play similar roles.  When building an on-prem infrastructure ready to meet the expectations of cloud-native applications, it’s a good idea to provide a scalable, S3-API-compatible object storage solution.  Minio is a widely used open-source S3-API object store that can run on industry-standard hardware.  Other good choices are Cloudian, HGST’s ActiveScale (formerly Amplidata), IBM’s COS (formerly Cleversafe), NetApp’s StorageGRID, and Scality.  There are others; it’s a crowded field.  Note that not all of them can pass Minio’s “MINT” test suite (also open source), which verifies complete compliance with the S3 API.  Beware claims of “S3 compatible” that might turn out to be missing one or more aspects of the S3 API that an application you develop or select may come to rely upon.
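To make that compatibility point concrete, here is a minimal sketch (in Python, with the standard boto3 AWS SDK) of the kind of smoke test an application team might run: the same SDK calls that work against AWS S3 should work unchanged when pointed at an on-prem S3-API endpoint.  The endpoint URL, credentials, and bucket name below are placeholders, not settings for any particular product.

```python
import boto3

# Point the standard AWS SDK at an on-prem S3-compatible endpoint.
# The endpoint URL, credentials, and bucket name are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="http://objectstore.example.internal:9000",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="app-data")

# The very same PUT/GET calls an application would issue against AWS S3.
s3.put_object(Bucket="app-data", Key="reports/q3.csv", Body=b"region,revenue\n")
obj = s3.get_object(Bucket="app-data", Key="reports/q3.csv")
print(obj["Body"].read())
```

If an “S3 compatible” store fails on calls like these (or on the longer tail of the API that MINT exercises), your applications will find out the hard way.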

Another storage resource which is all too often overlooked is file service.  This has been a relatively underwhelming area among the top-tier public cloud service providers for years, but there is now growing interest in providing more and better choices.  NetApp is promoting its Cloud Volumes service on all the major cloud providers.  And our own Matrix, the world’s fastest filesystem, is also available via the AWS Marketplace, making it possible to run it in the cloud the same as you might run it on-prem.  In both scenarios it scales linearly, without performance trade-offs, whether presented as a single global namespace or as many logical filesystems that are not physically segregated (so they all benefit from the performance of the whole cluster), out to as many PetaBytes as any CIO could ever hope to deploy (literally to WekaBytes, hence the name, where 1 WekaByte = 1 billion PetaBytes).
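For application code, the practical upshot is that a mounted Matrix filesystem is just a POSIX path.  As a minimal sketch (the mount point and file names below are placeholders, not a prescribed layout), the same ordinary file I/O runs identically on-prem and on cloud instances that mount the same namespace:

```python
from pathlib import Path

# Placeholder mount point: assume the same Matrix (or any POSIX) filesystem
# is mounted at this path both on-prem and on the cloud instances.
DATA_DIR = Path("/mnt/matrix/projects/genomics")

# Plain POSIX file I/O, with no cloud-specific SDK and no code changes
# between environments that mount the same namespace.
DATA_DIR.mkdir(parents=True, exist_ok=True)
sample = DATA_DIR / "sample-001.dat"
sample.write_bytes(b"\x00" * 4096)
print(sample.stat().st_size, "bytes written to the shared namespace")
```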

But what about “To Cloud or Not To Cloud?”  Well, it’s 2018.  You are going to do cloud.  You might not be able to use a public cloud service provider, or you might choose not to do so for whatever reason(s).  But even in that case, you will find yourself implementing something resembling those cloud offerings in your own local environments, because there are OpEx cost savings, because there are synergies with the new applications your organization needs (or is developing), and because doing otherwise will make it increasingly difficult to attract and retain staff.

And from your own cloud(s), you will look for ways to span among them, to burst applications across them and maybe even up to the public cloud.  But data has gravity.  Significant bodies of data take time to be moved or copied.  Here is my last promotional recommendation, as I wrap up.  For DR (Disaster Recovery) purposes, as well as to facilitate cloud bursting, WekaIO Matrix allows you to direct an opaquely formatted special snapshot of your data up to a public cloud (extra-safely, because it is not only in a unique format but also encrypted in flight and at rest) and then store it there, perhaps taking advantage of the low-cost S3 “infrequently accessed” rate.  You can refresh it from time to time by uploading only the deltas.  Keep it there, available either to restore as needed to an on-prem location, or to re-hydrate in the cloud if you ever want to cloud burst, running your apps on cloud-provided CPU resources and instantly standing up a WekaIO file service cluster which can ingest the special snapshot and then run with it.  Later, the deltas from the cloud-burst events can be re-synced back to the on-prem WekaIO cluster.
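To make the delta-refresh idea concrete, here is a rough, generic sketch of the pattern in Python with boto3.  It is not WekaIO’s actual snapshot-to-S3 mechanism, and the bucket, prefix, and snapshot directory below are placeholders; it simply uploads only the files whose content differs from what is already parked in the bucket, using the S3 Standard-Infrequent Access storage class to keep the parked copy cheap.

```python
import hashlib
from pathlib import Path

import boto3
from botocore.exceptions import ClientError

# Generic illustration only: bucket, prefix, and snapshot directory are
# placeholders, and this is not WekaIO's actual snapshot-to-S3 mechanism.
s3 = boto3.client("s3")
BUCKET = "dr-snapshots"
PREFIX = "site-a/matrix-snap/"
SNAP_DIR = Path("/snapshots/latest")

def md5_of(path: Path) -> str:
    # Plain MD5 matches the S3 ETag only for single-part, non-SSE-KMS uploads.
    return hashlib.md5(path.read_bytes()).hexdigest()

for path in SNAP_DIR.rglob("*"):
    if not path.is_file():
        continue
    key = PREFIX + path.relative_to(SNAP_DIR).as_posix()
    try:
        remote = s3.head_object(Bucket=BUCKET, Key=key)
        if remote["ETag"].strip('"') == md5_of(path):
            continue  # unchanged since the last sync, so skip it
    except ClientError:
        pass  # object not in the bucket yet
    # Standard-Infrequent Access keeps the parked DR copy cheap to store.
    s3.upload_file(
        str(path), BUCKET, key, ExtraArgs={"StorageClass": "STANDARD_IA"}
    )
```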

To Cloud or Not To Cloud?  That is the question, and the answer is Yes.