Why automation is key to big data success in 2019 – Data Economy

By Rick Haggart, SVP of Professional Services, Virtual Instruments

Today’s global enterprises are driven towards digital business transformation to become more agile, consolidate IT infrastructure, and reduce costs, often by migrating applications to a public cloud. The public cloud has been hailed as the solution to achieving much of this, but cloud migration is a major initiative that comes with real concerns, and a real price tag, and it shouldn’t be attempted without firm answers to crucial questions.

So how do IT executives and IT operations teams make sure they do it right?

Navigating the stages of cloud migration

Firstly, it is vital to choose the right cloud service provider (CSP) for the business’s unique needs. Each key business application may even have its own CSP, as multi-cloud deployments become increasingly popular. With so many bold claims from cloud vendors, how can one know with certainty which solution is right for each of the business’s particular application workloads, without risking latency or downtime issues?

Then, decisions need to be made as to which data to migrate, as recalling data, and cloud repatriation itself (if a mistake is made), can be prohibitively expensive. There are also compliance and security minefields to navigate; for many organisations, migrating data means potentially exposing the business (and customer data) to security risks. Furthermore, the major cloud service providers don’t currently offer service level agreements (SLAs) that guarantee application performance. After all, the cloud is just another datacentre, subject to the same vulnerabilities as any other, and performance problems there impact customers directly.

So is it still possible to achieve all the benefits the cloud promises, without these costly risks? The answer is yes. There are innovative tools that help organisations understand performance, assess cost-performance trade-offs, and provide insights to de-risk the process prior to migration.

Predicting application workload behaviour in the cloud

The reason cloud migration goes wrong for many organisations is that, until recently, there was no accurate way to know how business-critical applications would perform in a particular cloud. There was no scenario or test to gain insight into the behaviour of their bespoke workloads prior to migration. These insights are especially critical to the success of a digital transformation strategy, and to achieving the business agility, performance scalability and cost-effectiveness that enterprises crave.

A successful cloud migration project starts with understanding on-premises application workload profiles. This workload intelligence, combined with dependency mapping (to analyse the knock-on effect systems can have on one another, e.g. during peak times), adds critical insight to the decision-making process and significantly reduces the time it takes to successfully migrate a large number of varying workloads to the cloud.
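To make the idea of dependency mapping concrete, here is a minimal sketch of the kind of analysis involved: building a map from observed service-to-service traffic and walking it to find the knock-on effect of one system on another. The application names, flow records and functions are hypothetical assumptions for illustration, not the output of any specific product.

```python
# Hypothetical sketch: deriving an application dependency map from
# observed flows (caller app -> called app). All names are illustrative.
from collections import defaultdict

def build_dependency_map(flows):
    """Group observed (caller, callee) pairs into app -> direct dependencies."""
    deps = defaultdict(set)
    for src, dst in flows:
        deps[src].add(dst)
    return deps

def transitive_deps(deps, app, seen=None):
    """Every service `app` relies on, directly or indirectly; if any of
    these degrades (e.g. at peak time), `app` feels the knock-on effect."""
    seen = set() if seen is None else seen
    for dep in deps.get(app, ()):
        if dep not in seen:
            seen.add(dep)
            transitive_deps(deps, dep, seen)
    return seen

flows = [("web", "api"), ("api", "db"), ("api", "cache"), ("batch", "db")]
deps = build_dependency_map(flows)
print(sorted(transitive_deps(deps, "web")))  # → ['api', 'cache', 'db']
```

Even this toy version shows why the mapping matters: "web" never talks to "db" directly, yet a database slowdown still reaches it through "api", which is exactly the kind of indirect dependency that surprises phased migrations.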

As already mentioned, cloud providers don’t offer SLAs for the performance of applications in the cloud, and without these vital insights, organisations have no way of knowing whether their applications will run faster or slower in the cloud than in their current environment. If more capacity or resource is required to alleviate a problem, cloud costs rise. This is why benchmarking application performance prior to migration is an important step in the journey. With the proper tools, enterprises can methodically validate the suitability of the targeted applications based on their on-premises performance requirements.

Application workloads all perform very differently, experiencing peaks and troughs in demand depending on the environment and resource pressures at any given time. All of this information needs to be gathered in advance of cloud migration in order to understand the patterns of behaviour and application requirements in the cloud. Knowing how applications will perform in the public cloud, and exactly how much they will cost to run there, are vital pieces of information that enterprises should be absolutely clear on before migration.

The innovation behind cloud migration planning tools

The right service/solution should have the capability to simulate and validate cloud workload performance before migrating the workloads. It should establish whether migrated workloads are performing adequately, and provide guidance on the next steps if this isn’t the case. With such a solution, IT executives can effectively compare and contrast estimated costs and viability of various cloud platforms. Such cloud migration readiness services must also offer the ability to select the optimal CPU, memory, network and storage configuration for each migrated workload using simulated workloads.

Performance dashboards can now reveal how different applications housed in an organisation’s datacentre will use resources in AWS, for example. IT teams can easily see how much leeway they have in terms of spare storage capacity, or RAM during peak times. Extremely detailed information on historical performance can be captured over both long and short periods of time, which can be synthesised and then scenario-tested against each different cloud service provider configuration, so performance and cost comparisons can be made before migration.

The high resolution of infrastructure information and depth of granular monitoring available today deliver key benefits over the traditional legacy monitoring tools still on the market. Real-time insights from this new generation of tools mean that anomalies lasting only a few milliseconds are easily revealed, while performance spikes like these will be missed entirely by tools only capable of measuring every few minutes.

Mind the gap: the impact of a lack of application workload intelligence

Prior to this technology, organisations didn’t have a detailed understanding of their application workloads or performance requirements. They didn’t understand the peak ‘seasons’ or workload behaviour patterns of older applications, nor did they have any intelligence about the performance of other applications outside their immediate focus that could potentially cause an issue. Without those insights, enterprises had no idea how to take real action for improved performance and cost savings.

If it were revealed that an organisation was significantly overprovisioning, for example, it could move its on-premises configuration to the cloud and right-size it in the process, yielding substantial cost savings with a built-in contingency to ensure correct provisioning. With the right tools, such decisions can be made with confidence, based not on guesswork but on actual historical data.
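One common way to turn historical data into a right-sizing decision is to size against a high percentile of observed demand plus a contingency margin, rather than against the installed capacity. The sketch below assumes this approach; the percentile method, instance sizes and CPU figures are all hypothetical, not drawn from any real provider’s catalogue.

```python
# Hedged sketch of right-sizing from historical utilisation:
# choose the smallest capacity covering p95 demand plus headroom.
import math

def percentile(values, pct):
    """Simple nearest-rank style percentile (illustrative, not statistical)."""
    ordered = sorted(values)
    idx = min(len(ordered) - 1, math.ceil(pct / 100 * (len(ordered) - 1)))
    return ordered[idx]

def right_size(cpu_history, instance_sizes, headroom=1.2):
    """Smallest size covering p95 demand * headroom, or None if none fits."""
    target = percentile(cpu_history, 95) * headroom
    for size in sorted(instance_sizes):
        if size >= target:
            return size
    return None

# Peak-hour CPU samples (cores in use) from a host provisioned with 16 cores.
history = [1.1, 1.4, 2.0, 1.2, 3.6, 1.8, 2.2, 1.5]
print(right_size(history, instance_sizes=[2, 4, 8, 16]))  # → 8
```

Here the historical data justifies halving the 16-core footprint to 8 cores while keeping a 20% contingency, which is the "substantial cost savings with built-in headroom" outcome described above, decided from data rather than guesswork.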

Awareness of application dependencies

Many enterprises are unaware of their applications’ multiple dependencies on services that they weren’t planning to move in phase one of their migration. These scenarios can cause unwelcome, unplanned-for issues after the fact.

Companies tend to spend a lot of time on due diligence for one critical application of concern, then move on to another, only to find the latter has a performance issue of its own. This common occurrence delays the whole process, and can result in only a hundred or so applications being moved instead of the few thousand originally planned.

As every organisation has its own unique infrastructure environment, a detailed map of interdependencies empowers IT teams and CIOs alike with certainty about what they can and need to move to the cloud, and at what exact cost or saving.

The four phases of intelligent cloud migration

This is why the planning phase and pre-work of a cloud migration strategy are so key to its success. In this initial phase, cloud providers should be questioned in advance about the cost of running applications in their cloud.

With the aid of proper migration planning tools, the workload-profiling stage should include testing for the discovery and identification of application dependencies across compute, networking and storage (as mentioned above), using real-time insights and empirical data rather than a best guess as to which services an application uses. This vital step enables the accurate characterisation of workload performance, removing guesswork and providing assurance and peace of mind.

With these accurate insights, IT teams can then take their organisation’s own detailed on-premises dependencies and performance requirements into account to create synthetic workloads to play back in the cloud. In this phase, teams can compare and select the cost-optimal configurations that work best for them.
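The selection step in this phase can be sketched as a simple filter-then-minimise: keep only the candidate configurations whose replayed performance meets the on-premises baseline, then pick the cheapest. The configuration names, prices, IOPS and latency numbers below are made-up assumptions for illustration, not real provider data.

```python
# Illustrative sketch: choosing a cost-optimal cloud configuration from
# synthetic-workload playback results. All figures are hypothetical.
def cost_optimal(configs, required_iops, baseline_latency_ms):
    """Cheapest config meeting both the IOPS floor and the latency ceiling."""
    viable = [c for c in configs
              if c["iops"] >= required_iops
              and c["latency_ms"] <= baseline_latency_ms]
    return min(viable, key=lambda c: c["hourly_cost"], default=None)

candidates = [
    {"name": "small",  "iops": 3000,  "latency_ms": 9.0, "hourly_cost": 0.10},
    {"name": "medium", "iops": 8000,  "latency_ms": 4.5, "hourly_cost": 0.25},
    {"name": "large",  "iops": 20000, "latency_ms": 2.0, "hourly_cost": 0.80},
]

# On-premises profiling showed 6 500 IOPS at peak with a 5 ms latency budget.
best = cost_optimal(candidates, required_iops=6500, baseline_latency_ms=5.0)
print(best["name"])  # → medium
```

The key design point is that cost is minimised only over configurations that already satisfy the performance baseline, so the cheapest option overall ("small") is correctly rejected rather than chosen on price alone.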

And the benefits of this approach don’t end there: this functionality also enables the preservation of performance in the cloud even after the migration is complete, with the ability to monitor for any capacity or performance issues post-migration.

With the right strategy and tools in place, organisations can confidently reap the full benefits of the public cloud. Armed with a full understanding of application performance requirements and dependencies, plus workload simulation, enterprises can finally mitigate the inherent risks and successfully migrate each of their key applications to the most cost-effective public cloud environment.