A direct freight train linking Wuhan in central China and Lyon in France began operation in April this year. The 11,300-kilometer journey now takes 16 days, compared with the 50-60 days required to ship goods from Wuhan to France by sea.
If you’re a supply chain expert managing the China-Europe route, the new rail line potentially opens up cost-reduction opportunities. I’ll explain shortly why only ‘potentially’.
We are proud to announce our new partnership with Atos to build a Big Data appliance aimed at serving banks and other financial institutions.
The appliance combines ActivePivot with a bullion server from Atos as well as the Zing Java Virtual Machine from Azul Systems.
Bullion is an innovative, modular server that can scale up (or down) from 2 CPUs and 1.5 TB of memory to 16 CPUs and 12 TB. It pairs well with the equally scalable nature of ActivePivot, which runs just as comfortably on smaller machines as on the 16 TB juggernauts we demonstrated last year.
Even the best sales and operations planning (S&OP) can only take you so far. Regardless of the investment and brains you put into planning, by the time your blueprints are released into the unpredictable air of the real world, a significant chunk of them (at least 20%, by some accounts) will have to be re-planned.
As noted in Accenture’s study Retail Supply Chain Reboot: Agilely Facing the Unknown, “As more companies increase their investments in supply chain logistics to offer the seamless customer experience consumers crave, some aren’t acknowledging the unusually high levels of uncertainty they face today. Lacking the ability to anticipate multiple scenarios or the agility to deal with them effectively, retailers could find themselves on the wrong side of fast-moving and costly logistics trends.”
The art of online pricing is a delicate one. It takes more than just comparing yourself to the competition and setting the price accordingly.
Instead, pricing algorithms must take into account a large set of parameters, including competitor pricing and product availability, sales margins, price elasticity and indices, current and projected inventory, geography, weather, target groups, service levels, variance between online and physical-store pricing, and more.
Needless to say, no single pricing algorithm fits all businesses. It must be tuned to your unique business rules and environment in order to deliver the desired results.
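To make the idea concrete, here is a minimal, hypothetical sketch of how a few of the signals listed above might be combined into a rule-based price suggestion. The parameter names, thresholds, and weights are illustrative assumptions, not a real pricing model:

```python
# Hypothetical rule-based pricing sketch. All field names, thresholds and
# multipliers are illustrative assumptions, not a production pricing model.
from dataclasses import dataclass


@dataclass
class PricingInputs:
    competitor_price: float   # lowest available competitor price
    base_margin_price: float  # cost plus target margin
    elasticity: float         # own-price elasticity (negative for normal goods)
    inventory_ratio: float    # current stock divided by target stock


def suggest_price(p: PricingInputs) -> float:
    """Combine competitor price, margin, elasticity and inventory signals."""
    # Start from the margin-preserving price, capped near the competition.
    price = min(p.base_margin_price, p.competitor_price * 1.05)
    # Overstocked products get a discount to accelerate sell-through.
    if p.inventory_ratio > 1.2:
        price *= 0.95
    # Highly elastic products are pushed down to the competitor price.
    if p.elasticity < -1.5:
        price = min(price, p.competitor_price)
    return round(price, 2)


print(suggest_price(PricingInputs(
    competitor_price=19.99, base_margin_price=22.50,
    elasticity=-2.0, inventory_ratio=1.4)))  # prints 19.94
```

A real engine would of course learn or calibrate such rules per product and region rather than hard-code them, which is exactly why the same algorithm cannot simply be copied from one business to another.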
About 80% of supply and demand can be planned perfectly in advance. But how do you handle the remaining 20% that can never be predicted? Missed shipments, late material deliveries, or an unexpected demand peak.
The answer is an agile supply chain. But what does that mean? Here are three ways to increase your supply chain agility.
The recent explosions in Tianjin, China have been devastating, killing at least 114 people and injuring hundreds.
Beyond the dreadful loss of life, the explosions also demonstrated how unplanned events can disrupt commercial supply chains. A recent article in Automotive Logistics highlighted the effect on the automotive industry of the 10,000 vehicles destroyed in the blasts. All car manufacturers concerned are trying to assess the damage to their vehicles, identify which customer orders are affected, and evaluate the alternative supply possibilities and their related costs.
Everyone, it seems. Take Procter & Gamble. In a recent talk, Procter & Gamble’s SVP of Product Supply mentioned that they have created a “real-time instrumented supply chain,” which they believe could deliver a 1-2% sales increase, a 2-5% margin improvement, and a 5-10% improvement in asset utilization.
Only a few years ago, companies updated their supply chain plans roughly once a month; today, forecasts and plans for some product categories are adjusted twice a day. Such frequent updates make it possible to respond much faster to changing demand and to resupply stores more accurately.
Nevertheless, achieving a real-time supply chain is not trivial and typically involves revisiting the deployed technology stack.
Two months ago, the Basel Committee decided that banks will have to set aside less capital against trades cleared through central clearing houses, in a bid to encourage them to use these services. The aim is to steer banks toward central counterparties (CCPs), making it easier for regulators to follow the flow of banks’ trades and their exposures to each other.
This followed a joint statement by the European Central Bank and the Bank of England on the City’s clearing houses, which finally agreed that euro-denominated transactions could be cleared outside the Eurozone, while stressing that “CCP liquidity risk management remains first and foremost the responsibility of the CCPs themselves”1.
In the April 2015 edition of its Global Financial Stability Report, the IMF raised concerns about potential financial stability risks posed by the asset management industry, calling for regulatory scrutiny of a sector that intermediates 40% of the world’s financial assets. Whether under regulatory or client pressure, asset managers should consider the technology implications of greater transparency in risk reporting sooner rather than later. This post will delve into the implications of the look-through approach from a data management standpoint, building the case for modern in-memory aggregation technology to process massive amounts of highly granular data.
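To illustrate what look-through means in data terms, here is a minimal sketch: fund positions are decomposed into their underlying holdings before exposures are aggregated by issuer. The fund names, weights, and values are made-up assumptions; a real implementation would run this decomposition over millions of positions, which is where in-memory aggregation engines come in:

```python
# Minimal look-through aggregation sketch. Fund compositions, issuer names
# and position values are illustrative assumptions.
from collections import defaultdict

# Fund composition: fund -> list of (issuer, share of the fund's NAV).
fund_holdings = {
    "FUND_A": [("ACME Corp", 0.60), ("Globex", 0.40)],
    "FUND_B": [("ACME Corp", 0.25), ("Initech", 0.75)],
}

# Portfolio positions: (instrument, market value). Fund positions are
# expanded through their holdings; direct positions are kept as-is.
portfolio = [
    ("FUND_A", 1_000_000.0),
    ("FUND_B", 500_000.0),
    ("Globex", 200_000.0),
]


def look_through_exposures(positions):
    """Aggregate exposures by issuer, looking through fund positions."""
    exposures = defaultdict(float)
    for instrument, value in positions:
        if instrument in fund_holdings:
            for issuer, weight in fund_holdings[instrument]:
                exposures[issuer] += value * weight
        else:
            exposures[instrument] += value
    return dict(exposures)


print(look_through_exposures(portfolio))
# ACME Corp: 600,000 + 125,000; Globex: 400,000 + 200,000; Initech: 375,000
```

Note how a single fund position fans out into several issuer-level records: this multiplication of granular data across a large book is precisely the volume problem the post goes on to address.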
In a recent video blog published on March 18, Satyam Kancharla from Numerix* highlighted some of the issues introduced by the draft proposal of the Fundamental Review of the Trading Book (FRTB) run by the Basel Committee on Banking Supervision (BCBS). Among these challenges are the transition from Value-at-Risk to Expected Shortfall, the use of varying liquidity horizons, and the revisions to the underlying methodologies.
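To see why the move from Value-at-Risk to Expected Shortfall matters, here is a small historical-simulation sketch computing both measures on the same P&L sample. The loss series and the 90% confidence level are illustrative assumptions; actual FRTB calculations use stressed, liquidity-horizon-adjusted scenarios at prescribed confidence levels:

```python
# Illustrative comparison of historical VaR and Expected Shortfall on one
# P&L sample. The data and confidence level are made-up assumptions.
def var_and_es(pnl, level=0.975):
    """Historical VaR and ES at the given confidence level (losses positive)."""
    losses = sorted((-x for x in pnl), reverse=True)  # largest losses first
    n_tail = max(1, int(round(len(losses) * (1 - level))))
    tail = losses[:n_tail]
    var = tail[-1]              # smallest loss inside the tail
    es = sum(tail) / len(tail)  # average of the tail losses
    return var, es


pnl = [-12.0, -8.5, -5.0, -3.2, -1.0, 0.4, 1.1, 2.3, 3.0, 4.5,
       -20.0, -15.0, 0.9, 1.7, 2.2, 2.8, -0.5, 0.1, 3.6, -2.1]

var, es = var_and_es(pnl, level=0.90)
print(var, es)  # prints 15.0 17.5
```

VaR reads off a single quantile of the loss distribution, whereas Expected Shortfall averages everything beyond it, so ES is always at least as large as VaR and reacts to the severity of tail losses that VaR ignores.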