Kevin Leahy, Group General Manager of the Data Centre Business Unit, shares his views on the top digital infrastructure trends for 2017.
There’s growing acceptance that technology is hybrid. And it’s not a future state. Hybrid IT has arrived. For CIOs, this brings the challenge of ensuring that the various elements of their hybrid environments come together to deliver a single integrated set of services.
I use the term ‘services’ very deliberately. I’ll explain why…
If you were building a data centre a few years ago, your biggest concern would be uniting the different technologies within your facility. So you’d evaluate different tools to help you achieve this. Security would be a key consideration that guided many of the decisions made. Next, you’d turn your attention to middleware: here it was all about integrating tiers of applications to deliver a single, end-to-end application experience.
Fast-forward to today. In the world of hybrid IT, you’re acquiring capabilities from a variety of different sources. Some may be provided by your own IT department, and some might be software-as-a-service (SaaS) applications such as Salesforce. It’s likely that much of it will be infrastructure-as-a-service, delivered from cloud providers.
DevOps has also evolved. Today, developers typically use a variety of portals – some may work in Amazon, others in Azure – and they’ll create applications using a combination of different toolsets. Next, they move the applications into production. Here again, there are many options. Production might take place in the very clouds where development took place, or in an internal VMware, Microsoft, or Xen production environment.
Now the burning question is: ‘How do I bring these services together securely, and deliver a single cohesive experience to my customers and employees?’
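One way to picture that single cohesive experience is a thin abstraction layer that hides where each service actually comes from. The sketch below is purely illustrative – the class and method names are invented for the example, not a real platform API:

```python
from abc import ABC, abstractmethod

# Illustrative sketch: internal IT, SaaS, and IaaS capabilities
# consumed through one common interface. All names are hypothetical.

class ServiceProvider(ABC):
    @abstractmethod
    def provision(self, workload: str) -> str:
        ...

class InternalIT(ServiceProvider):
    def provision(self, workload: str) -> str:
        return f"{workload}: provisioned on internal VMware cluster"

class PublicCloud(ServiceProvider):
    def __init__(self, name: str):
        self.name = name

    def provision(self, workload: str) -> str:
        return f"{workload}: provisioned on {self.name} IaaS"

def provision_all(catalogue: dict[str, ServiceProvider]) -> list[str]:
    # One catalogue, many sources: the caller never sees which cloud
    # (or internal platform) actually fulfils each request.
    return [provider.provision(w) for w, provider in catalogue.items()]

catalogue = {
    "crm-frontend": PublicCloud("Azure"),
    "billing": InternalIT(),
}
for line in provision_all(catalogue):
    print(line)
```

The point of the pattern is that rationalising or swapping providers later only changes the catalogue, not the consumers of the services.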
One way to optimise your hybrid IT environment is to rationalise the number of participants within it, especially the number of clouds you’re using. If you’ve always required a dual-sourcing strategy, relying on a single cloud isn’t an option, but it’s unlikely that you need more than three or four at the infrastructure level, in addition to your SaaS choices.
Optimising the network is also critical. Failure to do so will impact your ability to provide a high-quality user experience. And it will introduce unwelcome costs that can negate the promised value of cloud.
Many organisations see value in enlisting the expertise of a managed services provider to help them integrate the disparate elements within their environments and to deliver the speed and user experience their business requires, while maintaining their security posture.
Over the last year, we’ve seen a number of disruptive data centre technologies coming to market. These have great appeal in terms of speed and ease of use at dramatically improved price performance.
Vendors of these products promise to deliver ‘a cloud-like experience inside your own data centre’. However, the challenge with adopting these technologies is that they impact how you manage your environment.
Hyper-convergence changes all your internal processes because most of them were originally built around the separation of the network, storage, and compute layers. Hyper-convergence allows you to operate all three under a single stack.
But while hyper-convergence vendors have all achieved various levels of integration within their products, most have done very little integration with the network, or with one another’s products. So a process that works for one vendor may not work with another, and as a result you’re limited to very device-specific technical activities.
This limits how many of these products can actually be implemented, and – going back to the point I made earlier about dual-sourcing – nobody wants to be locked in. Yes, the ability to move into production in a short period of time is alluring, but be aware that these products do come with the risk of creating operational islands.
Flash storage is another disruptive technology. When it was introduced a few years ago, it came with a hefty price tag, so businesses only used it in very specific areas where they absolutely needed the speed it delivered. Today, however, its cost is more aligned with that of traditional storage, so it’s getting more attention.
The appeal of using flash is that it delivers high levels of performance, and it dramatically simplifies the management of physical storage devices. Our clients tell us that today, they’re using flash for 15-20% of their estate, but in future they foresee that they’ll use it exclusively, as it makes everything easier and interchangeable.
However, flash can introduce new operational challenges. That’s because it disrupts the processes within your existing environment for managing storage and it affects decisions about where you want your information to be.
So, instead of focusing on managing units of storage, you’re now going to have to address where your information needs to reside in order to get the most business value from it.
In addition, network performance can be a stumbling block for successful flash deployments and connectivity requirements need to be carefully considered.
I was recently asked by the CIO of one of our global automotive clients how we could help them move their applications closer to the cloud and to low-latency points in the network through a co-location arrangement. Many businesses are even questioning whether they want their own data centre at all. Those that do are usually organisations with sufficient scale and business-differentiating processing needs.
Of course this wouldn’t be something that the automotive client would do overnight – they see it as part of a long-term strategic plan, one that would likely coincide with their next technology refresh. Whichever technologies form part of that refresh would be moved into a data centre that’s co-located with, or at least has a low-latency cross-connect to, the major cloud providers.
This conversation is one of many that we’ve had with our clients over the last year. As part of the NTT Group – the largest data centre space provider in the world – we receive many requests from organisations that are looking for help in moving to a data centre that can provide them with a high-speed network to the cloud, as well as to the Internet.
In the world of hybrid IT, it’s important that you make infrastructure purchasing decisions from an applications perspective.
Years ago, when the focus in the data centre was all on technology standardisation, developers would have to write their applications to the standards defined by the IT department.
That’s all changed: in today’s hybrid IT world, major application sets are delivered in a hybrid environment. Developers compose applications, sometimes using in-house resources and environments, sometimes not. The infrastructure therefore needs to support the applications, wherever they’re sourced, with high levels of automation and programmability to ensure an integrated outcome.
With hybrid options increasing through technologies like containers, businesses need an approach that preserves flexibility as application models evolve.
Most importantly, every infrastructure plan needs to consider how to move quickly from DevOps into production.
A few years ago, businesses would typically issue one or two change releases per year. Today, even organisations that are still running mainframes are putting out up to seven changes a day. And the Facebooks of the world release thousands of changes a day.
Achieving this speed and agility calls for a shift in mindset about how you build your applications, and how you move application development into production. Most companies don’t have processes, or the infrastructure software integration skills, that support this new model, so they elect to engage a managed services provider to assist.
Faults on networks managed by Dimension Data are repaired 32% faster than those on non-monitored or non-managed networks (2016 Network Barometer Report).
Increasingly, big data projects are going through multiple updates in a single year – and the Internet of Things (IoT) is largely the reason.
In the past, big data was a somewhat abstract concept. The idea was that if you looked long enough at big data sets, you’d be able to find patterns that would add business value. That approach was time-consuming, and the returns were uncertain.
IoT makes the whole exercise more concrete as it provides you with very specific things to look for. For example, some of our manufacturing clients are looking into their machine maintenance windows so that they can start to predict when they need to perform maintenance, and so keep improving their manufacturing quality standards.
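A predictive maintenance check of the kind those manufacturing clients are exploring can be sketched very simply. This is an illustrative example only – the moving-average approach, threshold, and vibration data below are invented for the sketch, not drawn from any client’s implementation:

```python
# Illustrative sketch: flagging a maintenance window from machine
# sensor readings, assuming vibration rises as parts wear.

def needs_maintenance(readings: list[float], threshold: float, window: int = 3) -> bool:
    """Flag a machine when the moving average of its last `window`
    vibration readings exceeds the threshold."""
    if len(readings) < window:
        return False  # not enough data to judge a trend yet
    recent = readings[-window:]
    return sum(recent) / window > threshold

# Hypothetical vibration readings in mm/s, trending upward over time.
vibration = [0.8, 0.9, 1.1, 1.4, 1.6, 1.9]
print(needs_maintenance(vibration, threshold=1.5))  # → True
```

In practice, the value comes from acting on the flag before a failure: scheduling maintenance in a planned window rather than losing a production run.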
So IoT makes it possible for you to examine specific patterns that will deliver specific business outcomes. This in turn is driving a healthier investment in big data projects and a faster return on them – because now businesses know what they’re looking for.
The network itself is also becoming a source of business intelligence. The information it contains about how people move around and interact can be used to improve services or health and safety. For example, in a retail environment, even without you knowing, the network can track where you’re walking, and how long you dwell in a particular area. This information can be linked back to architectural maps and provide rich insights to retail marketers about the foot traffic through their stores.
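The dwell-time insight described above can be derived from the presence events a Wi-Fi network logs as devices move between access points. The sketch below is hypothetical – the event format and data are invented for illustration:

```python
from collections import defaultdict

# Illustrative sketch: deriving dwell time per store zone from
# (timestamp, device, zone) presence events logged by the network.

def dwell_seconds(events):
    """events: list of (timestamp_s, device_id, zone), sorted by time.
    Returns total seconds each device spent in each zone."""
    dwell = defaultdict(float)
    last_seen = {}  # device -> (timestamp, zone) of its previous event
    for ts, device, zone in events:
        if device in last_seen:
            prev_ts, prev_zone = last_seen[device]
            # Time between two sightings is attributed to the prior zone.
            dwell[(device, prev_zone)] += ts - prev_ts
        last_seen[device] = (ts, zone)
    return dict(dwell)

# Invented event data: one shopper's device moving through a store.
events = [
    (0, "phone-a", "entrance"),
    (30, "phone-a", "electronics"),
    (150, "phone-a", "checkout"),
]
print(dwell_seconds(events))
# → {('phone-a', 'entrance'): 30.0, ('phone-a', 'electronics'): 120.0}
```

Aggregated across many anonymised devices, this is the kind of data that maps onto architectural floor plans to show foot-traffic patterns.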
With the infrastructure in place and the business return in sight, new uses of the data start to emerge and the true promise of big data will be realised.
The marriage of big data and IoT has some significant infrastructure implications. What storage infrastructure do you require? How do you compose your network to support both long-range outdoor Wi-Fi links to battery-powered sensors and the movement of large data sets for analysis? What are the implications for your data governance policy?
And, of course, you need to make your infrastructure decisions based on what will deliver your desired business outcomes fastest.
In order to expedite the returns from their big data investments, many organisations are opting for cloud- or managed services-based infrastructure for IoT and big data. This approach allows them to focus on the value of the data instead of spending months or years building infrastructure themselves.
We believe technology is the key that unlocks potential for businesses, and for the world, in ways we’re only beginning to comprehend. By applying our capabilities in digital infrastructure, hybrid cloud, workspaces for tomorrow, and cybersecurity, we look forward to continuing to help our clients accelerate their journeys to become digital businesses in 2017.