The procurement, architecture and management of IT services are undergoing dramatic change, placing enormous demands on next-generation IT.
In global competition, the corporate goals companies set themselves often have one thing in common: every department – not least the IT department – is expected to optimise its cost structure. Information technology is not only confronted with growing customer demands and increasingly agile competitors; it also has to cope with growing mobility, the integration of new processes and finding the right responses to big data.
All of this places additional requirements on the infrastructure and the data centre, and puts growing pressure on existing processes, which have to become more efficient and faster. To achieve this, companies have to shift their focus to the three central elements of this change – procurement, architecture and management – and align their next-generation IT accordingly. Doing so allows them to save on investment costs and free up capacity for the strategic projects that serve their corporate goals.
Technology: modularity and automation
In the course of technological change, the way technology is procured is the first thing to change. As purchasers in a mobile world, we expect to enter a (virtual) shop, select an application, pay a fee and use the application straight away. If it meets our expectations, we pay for additional features; if not, we stop using the service or application. This model shifts the risk to the retailer, who must continuously deliver added value to the customer or risk losing them.
In addition, solutions have become more modular. Wherever possible, software is developed independently of the hardware so that it can be scaled up or down. Security and availability have always been at the top of the list of demands on IT services; as the number of business-critical systems rises, companies increasingly expect uninterrupted availability and 24/7 support.
The management of IT operations is also undergoing considerable change. Diagnosing individual systems or applications once they are integrated into running operations is becoming increasingly rare. Instead, companies now favour result-oriented monitoring that identifies the added value an overall process brings to the business, and they want to automate as many as possible of the processes on which the functionality of the infrastructure and applications depends. Customers want “disposable servers”: if a virtual server is no longer needed, or turns out to be defective, it is simply shut down and, where necessary, a new environment with the required applications is started.
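The “disposable server” idea can be pictured as replace-don’t-repair: a defective instance is never diagnosed in place but shut down and re-provisioned. The following is a minimal, self-contained sketch of that pattern; the class and function names are illustrative assumptions, not part of any specific product.

```python
import uuid

class VirtualServer:
    """A minimal stand-in for a virtual server instance (illustrative)."""
    def __init__(self, applications):
        self.server_id = uuid.uuid4().hex[:8]   # fresh identity each time
        self.applications = list(applications)
        self.healthy = True

def provision(applications):
    """Start a fresh environment with the required applications."""
    return VirtualServer(applications)

def replace_unhealthy(pool, applications):
    """'Disposable servers': shut defective instances down and start
    new environments instead of diagnosing them in place."""
    replaced = []
    for server in list(pool):
        if not server.healthy:
            pool.remove(server)                 # shut the server down
            fresh = provision(applications)     # start a new environment
            pool.append(fresh)
            replaced.append(fresh)
    return replaced

# One server in a pool of three becomes defective and is swapped out.
pool = [provision(["web", "db"]) for _ in range(3)]
pool[1].healthy = False
replaced = replace_unhealthy(pool, ["web", "db"])
```

The pool size stays constant while the defective instance is discarded, which is exactly why this style of operation depends on automation rather than manual diagnosis.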
No virtualisation without standardisation
For IT managers, this means that planning becomes more important than maintenance. Virtualisation is already used in many companies today – yet the expected cost savings often fail to materialise because the migration is far too complex and the expertise needed to master it is not available. Integrating various services and tools and embedding processes in existing structures calls for additional administrative effort, which drives up costs and misallocates qualified resources. Companies should instead focus their energy on standardising the data centre infrastructure and simplifying the management of that infrastructure as a whole.
A low degree of standardisation means that IT “haphazardly” develops different configurations for different systems in order to integrate them into running operations. Targeted virtualisation is impossible under these conditions, which in turn drives up the cost of maintaining the status quo – and that is where the lion’s share of the IT budget goes today. Other problems with this makeshift approach include unfulfilled service level agreements and a higher probability of failures.
The next-generation data centre
It is frequently the case that the infrastructure in data centres is obsolete while workload and data volumes grow at rates that were never forecast. Added to this are the questions of power supply and cooling, and the fact that the existing network is often unable to keep pace with the growing number of mobile devices, modern applications, rising storage demand and the analysis of large data volumes.
All the same, the importance of company data is visibly growing, yet the move to cloud models remains slow due to security concerns. At the same time, demand is rising for storage capacity managed by external service providers and for externally operated back-up systems – all the more so as the quantity of data to be processed grows. Solutions like these make it possible to use the cloud for additional capacity or back-up and thus relieve the load on the company’s own data centre.
According to International Data Corporation (IDC), 82 percent of customers who have successfully virtualised their environment want a partner who can manage these workloads and transfer them to the cloud. This applies to computing as well as storage capacity, and not least to the network, which has to meet the higher requirements that come with virtualisation and mobility.
To meet the challenges of next-generation IT, documenting operational best practices is vital, because dependency on optimised processes and clear-cut schemas is growing. IT staff increasingly have to control processes rather than focus on managing individual systems. It is essential that the workflow environment be optimised and sufficiently automated with regard to the existing infrastructure.
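Controlling processes rather than systems means judging an overall workflow by its result – did the business process complete, and how long did it take – rather than by the state of each machine involved. A minimal sketch, with the process steps invented purely for illustration:

```python
import time

def run_workflow(steps):
    """Execute a documented workflow and report on the overall result
    (result-oriented monitoring), not on individual systems.
    `steps` is a list of (name, callable) pairs; each callable
    returns True on success."""
    started = time.monotonic()
    for name, step in steps:
        if not step():
            return {"ok": False, "failed_step": name,
                    "duration_s": time.monotonic() - started}
    return {"ok": True, "failed_step": None,
            "duration_s": time.monotonic() - started}

# Hypothetical steps of one overall business process:
steps = [
    ("receive_order", lambda: True),
    ("reserve_stock", lambda: True),
    ("confirm_to_customer", lambda: True),
]
result = run_workflow(steps)
```

The monitoring output describes the process outcome as a whole; which servers carried out each step is deliberately invisible at this level.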
First published at: Computerwoche.de