The sudden decision by the SNB to remove the longstanding cap on the Swiss Franc against the Euro took markets by surprise, causing many casualties in the foreign exchange broker community. As the Financial Times put it on January 19, “In one of the most damaging currency swings in the modern trading era, the Swiss Franc soared in value, leaving investment banks across the world with big losses and hitting foreign exchange brokers particularly hard”.
Historical data analysis has traditionally relied on data duplication technologies. But is this approach still valid today, when users need to analyze historical data that is moving fast and changing rapidly throughout the day? At ActivePivot, we practically had to re-invent our core database to support customers who wanted to travel back in time and analyze large volumes of dynamic data.
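To make the idea of traveling back in time over fast-moving data concrete, here is a minimal Python sketch of a multi-version store: every write creates a new global version, and reads can target any past version. The `VersionedStore` class and its API are hypothetical illustrations, not ActivePivot's actual design:

```python
class VersionedStore:
    """Toy multi-version store (a sketch, not a real product design):
    each write creates a new global version, and reads can target any
    past version -- "time travel" without duplicating the data set."""

    def __init__(self):
        self._history = {}   # key -> append-only list of (version, value)
        self._current = 0    # latest committed version

    def put(self, key, value):
        """Record a new value for `key`; returns the version it created."""
        self._current += 1
        self._history.setdefault(key, []).append((self._current, value))
        return self._current

    def get(self, key, as_of=None):
        """Return the value of `key` as of a given version (default: latest)."""
        if as_of is None:
            as_of = self._current
        # Walk the history backwards to the latest write at or before `as_of`
        for version, value in reversed(self._history.get(key, [])):
            if version <= as_of:
                return value
        return None

store = VersionedStore()
v1 = store.put("EURCHF", 1.20)  # before the SNB decision
store.put("EURCHF", 0.98)       # after the cap was removed
print(store.get("EURCHF"))             # latest value
print(store.get("EURCHF", as_of=v1))   # the value as of version v1
```

Because old versions are never overwritten, a query launched against version `v1` keeps seeing a consistent snapshot even while new data streams in.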
In the last post, I explained the difference between SMP and NUMA architectures as we enter the “many-core” era. I also asked the following question: “Is it reasonable to expect massive performance improvements when you run an existing application on new NUMA-enabled hardware?” The short answer is yes. However, improved performance is not automatic: you must be prepared to rewrite parts of your application to get the best out of many-core hardware.
When your business is analyzing big data with the goal of providing answers in split seconds, you find yourself trying to squeeze every bit of speed out of your solution. Among other things, this means finding the optimal processor architecture. This is why we have spent considerable effort studying the two memory architecture alternatives, NUMA (non-uniform memory access) and SMP (symmetric multiprocessing), to see which one could give us the best results.
In our previous post, How In-Memory Computing is Accelerating Business Performance, we explained the disruptive potential of in-memory computing. Performance gains resulting from faster execution of queries were one of the top benefits mentioned. However, in-memory computing goes far beyond performance gains, allowing organizations to do things differently and reach new levels of competitiveness. This post illustrates the point with a few examples.
Countless articles are written about Big Data every day. Beyond the hype, the Big Data phenomenon is a real change agent, delivering capabilities that were previously out of reach. Financial institutions and banks, for example, can now calculate and assess their risk in near real time, throughout the day.
To a large degree, the phenomenal performance and interactive analysis capabilities of Big Data projects are enabled by in-memory computing. In-memory databases have become the foundation of a new generation of business applications that put the power of analytics in the hands of decision makers.
Complex aggregation has become a common requirement for business users looking to analyze sophisticated metrics across multiple dimensions. There are numerous use cases for complex aggregation such as cross-currency aggregation, which was explored in our last post. Dynamic bucketing is another use case example.
This post takes a deep look at the technical considerations for a successful implementation of dynamic bucketing.
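As a taste of what dynamic bucketing means, here is a minimal Python sketch that assigns trades to time buckets defined relative to an as-of date. The bucket boundaries and the sample trades are hypothetical illustrations:

```python
import datetime as dt

# Hypothetical bucket boundaries, in days from the as-of date
BUCKETS = [(7, "1W"), (30, "1M"), (90, "3M"), (365, "1Y")]

def bucket_of(maturity, as_of):
    """Assign a maturity date to a time bucket relative to `as_of`.
    Because buckets are defined relative to the query date, the same
    trade can land in a different bucket tomorrow: the bucketing is
    dynamic and cannot be pre-computed once and for all."""
    days = (maturity - as_of).days
    for limit, label in BUCKETS:
        if days <= limit:
            return label
    return ">1Y"

def bucketed_totals(trades, as_of):
    """Aggregate (maturity, amount) pairs into relative time buckets."""
    totals = {}
    for maturity, amount in trades:
        label = bucket_of(maturity, as_of)
        totals[label] = totals.get(label, 0.0) + amount
    return totals

trades = [(dt.date(2015, 3, 10), 100.0), (dt.date(2015, 6, 1), 50.0)]
print(bucketed_totals(trades, as_of=dt.date(2015, 1, 15)))
print(bucketed_totals(trades, as_of=dt.date(2015, 3, 5)))  # same trades, new buckets
```

Running the same aggregation on two different as-of dates shows the first trade migrating from the 3M bucket into the 1W bucket as its maturity approaches, which is precisely what makes this use case hard for static, pre-aggregated pipelines.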
In previous posts, we’ve delved into the principles of multidimensional databases. Among the benefits a multidimensional database delivers is complex aggregation, a process by which KPIs are written once and become immediately available across any dimension and through any filter, letting users follow their train of thought.
But how does complex aggregation actually work? This post explores a concrete use case, articulates the technology challenges behind complex aggregation and demonstrates why ETL and SQL relational databases are not a fit.
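To make the "written once, available across any dimension" idea concrete, here is a minimal Python sketch of cross-currency aggregation: the measure converts per-currency subtotals at a reference rate, and the same measure is reused along any dimension. The rates, trades, and function names are hypothetical illustrations, not a real engine:

```python
from collections import defaultdict

# Hypothetical reference rates to USD (in a real system these are
# themselves live, fast-moving data)
RATES = {"USD": 1.0, "EUR": 1.10, "CHF": 1.05}

trades = [
    # (desk, currency, notional)
    ("rates", "EUR", 100.0),
    ("rates", "CHF", 200.0),
    ("fx",    "EUR",  50.0),
]

def usd_notional(rows):
    """The KPI, written once: sum native notionals per currency, then
    convert at the reference rate.  A plain SUM over pre-converted rows
    would bake the rates into the data and go stale as rates move."""
    per_ccy = defaultdict(float)
    for _, ccy, notional in rows:
        per_ccy[ccy] += notional
    return sum(amount * RATES[ccy] for ccy, amount in per_ccy.items())

def aggregate(rows, dimension):
    """Reuse the same measure along any dimension (a key function)."""
    groups = defaultdict(list)
    for row in rows:
        groups[dimension(row)].append(row)
    return {k: usd_notional(v) for k, v in groups.items()}

print(aggregate(trades, dimension=lambda r: r[0]))  # by desk
print(aggregate(trades, dimension=lambda r: r[1]))  # by currency
print(usd_notional(trades))                         # grand total
```

In SQL, each of those three views would need its own query with the conversion logic repeated, and an ETL pipeline would have to pre-compute every combination of dimensions; here the measure is defined once and applied wherever the user drills.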
Real-time decision making is commonly linked with Complex Event Processing (CEP). Indeed, CEP systems can detect meaningful events in streams of data and raise alerts. However, for decision makers to have context and turn a notification into a meaningful, actionable event, CEP must be supplemented with mixed-workload and multidimensional capabilities.
Let’s take a look at what it means.
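As a rough illustration of supplementing CEP with analytical context, here is a minimal Python sketch: a simple event rule flags large price moves, and each alert is enriched with an aggregated exposure so the recipient can judge its impact. All names, thresholds, and figures are hypothetical:

```python
# Hypothetical aggregated exposures, maintained by the analytical side
positions = {"EURCHF": 5_000_000, "EURUSD": 1_000_000}

def detect_moves(ticks, threshold=0.02):
    """Minimal CEP-style rule: flag any price that moves more than
    `threshold` (relative) against the previous tick of the same pair,
    and enrich the raw event with the aggregated exposure so the alert
    carries enough context to act on."""
    last = {}
    alerts = []
    for pair, price in ticks:
        prev = last.get(pair)
        if prev is not None and abs(price - prev) / prev > threshold:
            alerts.append({
                "pair": pair,
                "move": (price - prev) / prev,
                "exposure": positions.get(pair, 0),  # the analytical context
            })
        last[pair] = price
    return alerts

ticks = [("EURCHF", 1.20), ("EURUSD", 1.16),
         ("EURCHF", 0.98), ("EURUSD", 1.158)]
for alert in detect_moves(ticks):
    print(alert)  # only the EURCHF move crosses the threshold
```

The CEP rule alone would only say "EURCHF moved"; joined with the aggregated position, the alert says whether a five-million exposure just lost a fifth of its value, which is what a decision maker actually needs.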
In a previous post comparing multidimensional and relational databases, we mentioned that the decision-making imperatives of the Big Data era are disrupting the clear-cut border between OLTP and OLAP, enabling a new type of mixed-workload database that addresses both needs.
This post takes a closer look at mixed-workload systems: what they are, how they work, and what they are useful for.