Software is emerging as the key to our insatiable appetite for, and dependence on, data.
Data has been a hot topic of conversation for years now, as vendors, consultancies, partners and customers have dissected every aspect of the subject - from size, source and location to identification, mining, analytics and value. Or so we thought.
While these discussions have proved insightful and, in many cases, improved business knowledge and value, they have overlooked one critical aspect: our growing dependence on the data around us.
Many of the other aspects of data have been viewed as a choice - a sort of 'take it or leave it'. That is not the case with dependency. We are all increasingly dependent on data, and as more and more devices connect to the internet, that dependency, and its implications for all of us, will only grow.
First, let's clarify dependency: individuals, businesses and governments now rely on storing and accessing data almost minute by minute, if not more often. For the individual, it may simply be email or iCloud photos; for businesses, it's critical information held on public, private and hybrid clouds; and for governments, it's the citizenship and resource data required to keep the country running smoothly and safely.
The loss of an iCloud account and its associated data could obliterate a lifetime of memories; the inability to retrieve customer information on request could fatally wound any enterprise, large or small; and the loss of government data could cause economic turmoil, bringing a country to its knees - with ripples across the global economy. None of these is insignificant, at any level.
The issue is that as demand grows, our dependency begins to look increasingly unsustainable. Currently, only 5 percent of datacentres worldwide have been modernised, meaning 20-year-old legacy architecture is being charged with carrying us into a data-dependent world.
Unless we rapidly adopt new technologies designed to close the gap between the infrastructure hardware layer and the application software layer, we face a number of risks.
Storage: The vast quantity of data available is not a new phenomenon, but quantity has physical limits. We often talk of the cloud as a nebulous, limitless space, and we have become used to simply "topping up" our storage capacity at will. But limitless it is not: physical space is required to store all our relevant, and increasingly irrelevant, material.
Data storage is rapidly coming to resemble landfill. We add more and more, hour after hour, and we keep everything, because there is no need to dispose of anything. As a result, our datacentres grow ever bigger. The world's largest datacentre is The Citadel in Nevada, USA - a staggering 1.62 km², about 20 times the size of Buckingham Palace.
If the volume of data created continues on its current trajectory - quadrupling every five years - how will we manage it?
Access and Latency: Then there is access. How and when do we access our data, and how quickly do we need it? That depends on the type, location and purpose of the data. As we move to autonomous vehicles, AI and machine learning, the demands on data will increase, as will the need to ensure critical applications receive priority. Which brings us to fragmentation.
As more and more devices both centralise and decentralise data consumption, and we spread data across multiple locations for efficiency and cost-effectiveness, how will we ensure all our data is safe, secure, available and accessible when and where we need it?
Environmental Impact: I mentioned The Citadel earlier. While it is the greenest datacentre in the world, it still comes with an environmental impact, as do all the others. The impact of running, cooling and then upgrading these monumental datacentres is significant, and only set to grow.
Cost: And let's not forget cost. We have mentioned the environmental impact of our dependency on data; there is also the financial cost of building and running the infrastructure behind it.
Value: I began this discussion by highlighting the value of data to individuals, businesses and governments. But in a world where the data created quadruples every five years, how do we ensure its value increases too? And let's not forget bad data: poor-quality data already costs US businesses an average of USD 15 million per annum. Cutting through the clutter will only become harder.
Security: And finally, there is security. More and more data, from more and more devices, stored in more and more locations, increases both the physical and the digital risks to the integrity of the data and of the entire connected system.
While this may sound overwhelming, it should not be disheartening. New technologies are narrowing the gap between the physical and virtual worlds, and as that gap shrinks, our ability to access, manage and benefit from our data increases. We are becoming less reliant on the physical and more reliant on the virtual.
Operating systems are emerging that simplify the operating environments of the enterprises entrusted with storing and maintaining our data. This does nothing to reduce our growing dependency, of course, but it does help mitigate the risks and challenges that come with the deluge of data and our dependence on it.
The author is Vice President, ASEAN, India, ANZ at Nutanix.