Every modern business is a digital business. Whether a company sells food, consultancy services or mining equipment, it’s likely to rely on computing systems and software in many parts of its operations. In recent years, many organizations have suffered from high-profile IT problems – from failed system upgrades and power outages at data centers to large-scale cyberattacks. These incidents have disrupted businesses for days or weeks, leading to high recovery costs, angry customers and even fines from regulators.
The companies that provide enterprise IT equipment – such as servers, network infrastructure and storage devices – know all too well that reliability and high availability are top priorities for their customers. In response, these manufacturers have developed sophisticated service and support operations. They employ teams of skilled field engineers ready to respond to problems at short notice, and they keep stocks of spare parts close to customer sites.
These robust support networks have enabled the technology sector to offer service levels that are the envy of many other industries, but the ability to respond to problems within hours rather than days comes at significant – and increasing – cost.
The jump to hyperscale
The biggest change to hit the computing market in recent years has been the increasing importance of so-called hyperscale data centers. Originally built to meet the demand for computing power created by the giant internet firms, these huge, centralized computing facilities each contain tens of thousands of individual servers. Today, a significant chunk of their capacity is rented out to third-party organizations that choose to run their business applications remotely in the cloud. According to analyst firm Synergy Research Group, the number of hyperscale data centers worldwide passed the 500 mark in 2019, up from around 400 two years earlier.
Hyperscale data centers and cloud computing have brought significant benefits for users. Outsourcing and centralization help to bring the cost of running an IT system down, and new software and hardware techniques make it easier to share computing and storage tasks dynamically across different computers, which boosts the utilization of each machine, cutting costs still further.
Hyperscale data center operators are powerful customers, however, with the ability to negotiate competitively with equipment manufacturers for the best possible price. A shift to fewer, larger infrastructure projects, meanwhile, has made the overall enterprise IT market more cyclical in nature. After growing strongly in 2018, for example, the server market contracted sharply in 2019, as a number of major customers completed a wave of development projects.
Service scrutiny
Facing slower growth and a squeeze on margins, IT equipment makers are putting every part of their costs under renewed scrutiny. That includes the cost of service logistics operations. According to Scott Allison, Chief Customer Officer at DHL Supply Chain, one particular area of concern is the volume and disposition of spare parts inventories.
“Technology companies have made great strides in the management of the inventories in their central and regional distribution centers,” he says. “Most now operate a global inventory management platform that allows them to track demand, see exactly what they have in each location and adjust their inventory levels to balance service and cost.”
“Nevertheless,” says Allison, “these companies often have lots of inventory in places where visibility is poor. That might be parts sitting in the offices, homes and vans of service engineers, or consignment stocks waiting at customer sites.”
That invisible inventory drives up buffer stocks and therefore costs, explains Allison. Companies may overinvest in service parts because they don’t know what is available in the field, or ship parts unnecessarily through their logistics networks because they don’t realize the relevant item is already available much closer to the point of use. And if service inventory isn’t consumed in a timely fashion, it quickly becomes obsolete and must be written off or sold at a loss.
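The effect Allison describes can be sketched in a few lines of Python. The numbers and part quantities below are purely illustrative assumptions, not DHL data: a planner who cannot see stock in engineers’ vans will order more than one who can.

```python
def reorder_quantity(target_level: int, visible_stock: int,
                     field_stock_visible: int = 0) -> int:
    """Order enough to bring available stock back up to the target level.

    Parts sitting in engineers' vans or at customer sites count toward
    available stock only if the planning system can actually see them.
    """
    available = visible_stock + field_stock_visible
    return max(0, target_level - available)


# Hypothetical numbers for illustration.
target = 100          # desired stock position for a given part
warehouse_stock = 60  # visible in the global inventory platform
van_stock = 25        # parts already in engineers' vans

# With no field visibility, the planner over-orders:
blind_order = reorder_quantity(target, warehouse_stock)                # 40
# Once field stock is visible, the order shrinks:
informed_order = reorder_quantity(target, warehouse_stock, van_stock)  # 15
```

The gap between the two orders – 25 units in this toy example – is exactly the invisible inventory, which either sits idle until it is written off or is duplicated by fresh shipments through the network.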
Transparency from end to end
Leading companies in the sector are now taking steps to achieve true end-to-end visibility of their service inventories. That, says Allison, has required changes to inventory management software, and to the hardware used by service engineers and customers in the field. “These companies now treat their service engineers’ homes and vehicles as additional stock locations, for example, so that inventory is visible to everyone. Engineers are provided with dedicated scanners, or software on their personal devices, that allows them to log the details of each item as they receive or install it.” At customer sites, meanwhile, a new generation of smart lockers acts as a satellite warehouse and inventory management system, holding parts securely, logging their removal and even automatically issuing replenishment orders.
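The smart-locker behavior described above – log each removal, then trigger replenishment when stock runs low – can be modeled with a minimal sketch. The class, part number, reorder point and engineer ID below are all hypothetical, chosen only to make the logic concrete.

```python
from dataclasses import dataclass, field


@dataclass
class SmartLocker:
    """Toy model of a smart locker acting as a satellite warehouse.

    It holds parts, logs every removal, and automatically raises a
    replenishment order when stock falls to the reorder point.
    """
    stock: dict
    reorder_point: int = 2
    log: list = field(default_factory=list)
    orders: list = field(default_factory=list)

    def remove(self, part: str, engineer: str) -> None:
        if self.stock.get(part, 0) <= 0:
            raise ValueError(f"{part} not available in this locker")
        self.stock[part] -= 1
        self.log.append((engineer, part))   # removal is logged automatically
        if self.stock[part] <= self.reorder_point:
            self.orders.append(part)        # replenishment issued without human input


locker = SmartLocker(stock={"PSU-450W": 3})
locker.remove("PSU-450W", "engineer-17")
# Stock has dropped to the reorder point, so a replenishment order
# now sits in locker.orders and the removal is recorded in locker.log.
```

Because every removal is captured at the point of use, the same events that secure the parts also feed the global inventory platform Allison mentions, closing the visibility gap at customer sites.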
From hyperintegration to hybrid
If the growth of centralization and cloud computing has put technology companies’ service models under one sort of pressure, the industry is now preparing itself for a swing in the other direction. There is a growing realization that putting critical data and computing tasks in distant data centers isn’t always the ideal solution.
Part of the shift is being driven by privacy and security concerns, with companies keen to retain full control over sensitive information. Changing regulatory attitudes, meanwhile, are encouraging some companies to ensure customer data is held inside the jurisdiction where those customers reside.
In other applications, performance is becoming a critical factor. For artificial intelligence systems used in industrial control or to support networks of autonomous vehicles, the extra time taken to exchange information with a remote data center can be a significant drag on performance. Those applications are driving demand for so-called edge-computing capacity, where powerful servers are located as close as possible to the point of use. And some companies are employing hybrid IT architectures, in which tasks are shared between local and remote computers according to urgency and available capacity.
“In a hybrid world, technology companies will need logistics systems that can support high service levels in a widely distributed environment, while keeping costs under tight control,” concludes Allison. “They need to ensure they are taking action now to be ready for an even more demanding future.” ― Jonathan Ward
Published: April 2020
Images: Adobe Stock; Getty Images/iStockphoto