In 1747 James Lind, ship’s surgeon aboard HMS Salisbury, conducted a pioneering medical experiment on 12 members of his crew. They were all suffering from scurvy, a widespread and often fatal condition that afflicted thousands of sailors on long voyages. Lind divided the group into pairs and offered each pair a different treatment. The options included daily doses of seawater, vinegar and cider, but one pair received two oranges and a lemon each day. After a week, the health of the men who ate the citrus fruit had improved dramatically; one was even well enough to resume normal duties.
Lind’s experiment is now considered the precursor of today’s clinical trials. Although he didn’t understand the root cause of scurvy (a lack of vitamin C in the diet), Lind had discovered an effective treatment and had data to prove it. Unfortunately for sailors, citrus fruit was an expensive luxury in Europe at the time. It would be another 50 years before the British navy included lime juice as standard in crew rations.
Modern medicine is built on data. Novel treatments undergo a rigorous series of trials, and the sector has developed sophisticated experimental protocols and analytical techniques to assess the efficacy – and risks – of drugs and medical devices. Advances in computer modelling and simulation techniques are helping companies to conduct an increasing share of drug discovery and early development work “in silico.”
The quantity of data available to the life sciences industry is growing extremely rapidly. A 2018 analysis by Dell EMC suggested that the volume of data held by the world’s healthcare organizations increased by 878% in just two years. Drug companies are also making greater use of secondary sources of data – for example by mining social media feeds for references to rare adverse reactions to their products.
The sector is also investing heavily in new data analysis tools and approaches. In the third quarter of 2019, for example, U.S. investors poured $1.5 billion into companies that use artificial intelligence (AI) and machine learning techniques for biopharma applications. That sum represents a quarter of total U.S. AI funding over the period.
Minding the gaps
Yet despite their reliance on data for upstream research and development activities, many companies in the pharma and medical device industries have significant blind spots when it comes to the application of data in their downstream operations. Manufacturing and distribution operations in the healthcare sector are complex and highly regulated. That environment has encouraged organizations to develop “siloed” structures, with each division focusing on excellence in its own function at the expense of close collaboration across the wider business.
The sector’s frequent mergers and acquisitions add further complexity. When two companies join forces, they create duplicate siloes in their sales, manufacturing, supply chain and other functions. Connecting and eventually integrating those siloes can take significant time and effort, especially when the merger participants previously used different technologies, processes and organizational structures.
The sector’s complex market channels make the picture even more opaque. Many items pass through a long chain of stakeholders – including distributors, wholesalers and pharmacies – before they reach the medical professionals who prescribe them or the patients who use them. A manufacturer may have little direct contact with its end customers in this scenario, and no real idea how its products are being used.
The integration imperative
The healthcare sector’s “divide and conquer” approach worked reasonably well for decades, but it now faces significant pressure from several directions. On one side, rising healthcare costs mean payers are negotiating harder with their suppliers, pushing prices down. Similarly, middle-income countries striving to improve the provision of health services to their citizens need to keep a close eye on every dollar spent. Tighter margins force companies to look for significant cost savings and operational improvements, something that requires close collaboration between functions and an end-to-end view of business performance.
On the other side, a new generation of complex, costly and increasingly personalized treatments is forcing companies to rethink their operating models. Where products are short-lived, custom-made or just hugely expensive, high inventories are no longer a feasible solution for supply chain shortcomings.
Then there are supply chain shocks. As the COVID-19 pandemic demonstrates all too clearly, supply and demand in the healthcare sector can be subject to extreme volatility. An organization’s ability to respond to such events depends on a clear understanding of the capacity and flexibility available across its operations.
From siloes to data lakes
In response to these challenges, the industry is embarking on a large-scale effort to change the way it uses operational data. Companies are pooling data from different functions and different parts of the supply chain into integrated “data lakes,” then applying new analytical tools to that data to support better, faster decision-making.
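To make the idea concrete, here is a minimal sketch of the pooling step (a toy example with invented product names and numbers, not any company’s actual pipeline): order records from a sales system and stock records from a manufacturing system are joined into one shared table, yielding a cross-functional metric that neither silo could compute on its own.

```python
import pandas as pd

# Toy extracts from two functional silos (all names and values invented).
sales = pd.DataFrame({
    "product_id": ["A", "A", "B", "B"],
    "order_date": ["2020-06-01", "2020-06-02", "2020-06-01", "2020-06-02"],
    "units": [120, 80, 40, 60],
})
stock = pd.DataFrame({
    "product_id": ["A", "B"],
    "on_hand_units": [500, 900],
})

# Pool both feeds into one table keyed on product.
daily_demand = (
    sales.groupby("product_id")["units"].sum() / sales["order_date"].nunique()
)
lake = stock.set_index("product_id").join(daily_demand.rename("daily_demand"))

# An end-to-end metric neither silo could compute alone: how many days
# of average demand the current inventory covers.
lake["days_of_cover"] = lake["on_hand_units"] / lake["daily_demand"]
print(lake)
```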
Merck Group, the world’s oldest chemical and pharmaceutical company, is building a “self-driving supply chain,” using digital technology and AI tools to improve visibility, forecast accuracy and service levels across its operations. Alessandro De Luca, the company’s chief information officer, told Delivered that the first stages of its ambitious program had helped to cut forecast variability by more than a third, paving the way for significant reductions in inventory with no impact on service.
Other organizations are using digital tools to allow seamless data sharing between supply chain participants. GE Healthcare and biopharma company Amgen, for example, have established a digital data exchange system between their respective manufacturing operations. GE manufactures biological materials that Amgen processes into finished products. The new system provides scientists and engineers at Amgen with detailed information from the GE production lines, allowing them to see exactly how variations in raw materials affect downstream manufacturing operations and the quality of end products.
In the medical device sector, companies are taking advantage of internet of things (IoT) technologies to extend their data connections all the way to the point of use. Philips, for example, offers a suite of remote monitoring, upgrade and diagnostic services for its range of medical imaging equipment. The company says that its remote service engineers can continuously analyze equipment to proactively detect any potential issues and take appropriate corrective action.
Data in logistics and transport
Data is also helping life sciences companies to improve the performance of their logistics processes, says Larry St. Onge, President of Life Sciences & Healthcare at DHL. He describes how DHL is using the data generated by its Thermonet network to help customers improve the reliability and cost-effectiveness of their logistics systems.
“We have operated Thermonet for eight years as a dedicated, standardized network for temperature-sensitive life sciences shipments,” explains St. Onge. “The network has multiple elements, including staff trained in suitable operating procedures and certified warehouse facilities around the world, but an important part of the system is its data platform.”
Shipments moving through the Thermonet network are equipped with wireless data logging devices that record the temperature outside a package throughout its journey. Those sensors act as an alarm system, transmitting an alert to a DHL control tower if a package spends too long in an uncontrolled environment. They also allow DHL to track conditions across the network, helping it continually adapt and improve performance. “Thermonet data helps us run an extremely reliable network,” says St. Onge. “But now, with years of data from many thousands of shipments available, we can also apply smart analytics to add value for our customers.”
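The core of that alerting logic can be sketched in a few lines. The example below is purely illustrative: the 2–8 °C band, the 30-minute tolerance and the function itself are assumptions for this sketch, not Thermonet’s actual parameters or code.

```python
from datetime import datetime, timedelta

# Assumed control parameters for a cold-chain shipment (illustrative only).
RANGE_C = (2.0, 8.0)                   # permitted temperature band, Celsius
MAX_EXCURSION = timedelta(minutes=30)  # tolerated cumulative time out of band

def excursion_alert(readings):
    """readings: list of (timestamp, temp_c) tuples in time order.
    Returns True once cumulative time outside RANGE_C exceeds MAX_EXCURSION."""
    outside = timedelta()
    for (t0, temp), (t1, _) in zip(readings, readings[1:]):
        if not (RANGE_C[0] <= temp <= RANGE_C[1]):
            outside += t1 - t0         # this interval began out of range
        if outside > MAX_EXCURSION:
            return True                # would trigger a control-tower alert
    return False

readings = [
    (datetime(2020, 6, 1, 8, 0), 5.1),
    (datetime(2020, 6, 1, 8, 20), 9.4),  # above the 8 °C limit
    (datetime(2020, 6, 1, 9, 0), 9.8),   # still above it
    (datetime(2020, 6, 1, 9, 30), 6.0),
]
print(excursion_alert(readings))         # True: over 30 minutes out of range
```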
Thermonet data is already being used to aid the selection of transport lanes, explains St. Onge, allowing companies to avoid routes where the risk of temperature excursions is highest. “Now we are also able to use our data to make recommendations about the most appropriate packaging systems for a given route,” he says. “If we see that packages in a certain lane only experience high temperatures for two months of the year, we can suggest that the customer adopts lower-cost passive packaging most of the time, only using the most expensive active cooling systems when they are really needed.”
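As a rough illustration of how such a recommendation could be derived (the data, the 25 °C threshold and the decision rule are all invented for this sketch, not DHL’s actual method), the following groups historical shipment records by lane and calendar month and flags the months where active cooling earns its cost:

```python
import pandas as pd

# A toy slice of historical shipment records (values invented).
history = pd.DataFrame({
    "lane":  ["FRA-SIN"] * 4 + ["FRA-JFK"] * 4,
    "month": [1, 1, 7, 7, 1, 1, 7, 7],
    "peak_ambient_c": [18, 21, 31, 33, 12, 15, 24, 26],
})

HOT_THRESHOLD_C = 25.0  # assumed ambient level where passive packaging struggles
RISK_CUTOFF = 0.10      # assumed tolerable share of hot shipments per lane-month

# Share of shipments per lane and month that exceeded the threshold.
hot_share = (
    history.assign(hot=history["peak_ambient_c"] > HOT_THRESHOLD_C)
           .groupby(["lane", "month"])["hot"].mean()
)

# Active cooling only where the seasonal risk is material; cheaper
# passive packaging suffices for the rest of the year.
recommendation = hot_share.apply(
    lambda share: "active cooling" if share > RISK_CUTOFF else "passive packaging"
)
print(recommendation.unstack("month"))
```

In this toy data, both hypothetical lanes warrant active cooling only in the summer month, matching the seasonal logic St. Onge describes.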
The ability to pick the right packaging solution for every shipment can have a significant impact on the overall cost of logistics, says St. Onge. And it can also help companies when supply chains are stretched, enabling them to make logistics decisions that minimize risks while keeping vital supplies moving. “When you combine historical Thermonet data with real-time information on supply chain risk from sources such as DHL’s Resilience360 platform, you have a really powerful set of tools to aid logistics operations in demanding and unpredictable environments,” says St. Onge.
The healthcare sector is grappling with the most significant challenges it has faced for decades. The industry’s mastery of data – in research, in the supply chain and in the wider community – will be a critical part of its response. — Jonathan Ward
Published: June 2020