Part of me wonders why we focus our trend and market predictions at the beginning and end of each year, when we all know these changes are fluid, and don’t happen according to a perfect calendar date. But it’s important to constantly evaluate the changes we are seeing, and understand where they will take us next. And in that spirit, I’d like to share some market predictions.
1. Expectations for database systems have expanded beyond relational to include alternative models
Non-relational database technologies, such as NoSQL stores and Hadoop, have emerged over the last few years. The expectation now, however, is that leading database platforms can provide a wider range of capabilities and address the broader set of use cases and workloads that these non-relational technologies have enabled.
This has resulted in a “new normal” definition of capabilities for a general-purpose database platform: support for new data types and multiple data models, in-memory processing, data virtualization, distributed storage, and extended capabilities such as graph and spatial. Customers are looking for a modern database platform that can natively support these additional workloads and functions.
2. Rising demand for real-time analytics on transactional data
Increasing demand to support operational workloads that incorporate real-time analysis, such as recommendations, targeting, and fraud detection, is driving adoption of hybrid transactional analytic database systems, which perform “transaction window” analytics with a simplified technology architecture. Industry analysts are recognizing this trend: Gartner uses the term “hybrid transaction / analytical processing (HTAP)”, IDC uses “analytic transactional processing (ATP)”, and Forrester uses “translytic data platforms”, for which it recently published a brand-new Wave report. In-memory technology is the key enabler of these systems, and it brings the added benefit of architectural simplicity: one system to maintain, with no data movement.
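To make the HTAP pattern concrete, here is a minimal sketch. SQLite is not an HTAP engine, and the table and values are purely illustrative, but the shape of the idea is the same: analytic queries run directly against the live transactional store, with no ETL step or separate warehouse in between.

```python
import sqlite3

# An in-memory store standing in for the operational database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)"
)

# Transactional writes -- the "T" in HTAP.
with conn:
    conn.executemany(
        "INSERT INTO orders (customer, amount) VALUES (?, ?)",
        [("alice", 120.0), ("bob", 75.5), ("alice", 42.0)],
    )

# Immediate "transaction window" analytics on the same data -- the "A".
# No copy of the data was moved anywhere before this query ran.
top = conn.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "GROUP BY customer ORDER BY 2 DESC LIMIT 1"
).fetchone()
print(top)  # ('alice', 162.0)
```

In a real HTAP system the same single-store property holds, but the engine (typically in-memory and columnar for the analytic side) is built to sustain both workloads concurrently at scale.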
3. Information management is evolving to manage disparate data sources
Companies are swimming in a sea of data right now, and to this point it’s been nearly impossible to make sense of it. And with the Internet of Things and additional sources emerging every day, more data means more problems.
A major challenge is that this data is increasingly fragmented across multiple data lakes and disparate sources. The pressure is compounded by new data regulations, such as GDPR, that mandate enterprise-wide data governance.
A new approach, as part of a modern data architecture, lets data professionals as well as LoB users orchestrate, manage, and create data flow pipelines, with processing pushed down to where the data resides.
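The push-down idea can be sketched in a few lines. This is a hypothetical illustration only: SQLite stands in for a remote source, and the table and column names are invented. The point is the contrast between pulling every row into the pipeline and pushing the predicate down so it executes where the data lives.

```python
import sqlite3

# Stand-in for a remote data source; the schema is illustrative only.
source = sqlite3.connect(":memory:")
source.executescript("""
    CREATE TABLE sensor_readings (device TEXT, reading REAL);
    INSERT INTO sensor_readings VALUES
        ('d1', 10.0), ('d2', 99.0), ('d1', 11.5), ('d3', 101.2);
""")

def extract_all_then_filter(conn, threshold):
    # Naive pipeline: every row crosses the wire, filtering happens afterwards.
    rows = conn.execute("SELECT device, reading FROM sensor_readings").fetchall()
    return [r for r in rows if r[1] > threshold]

def extract_with_pushdown(conn, threshold):
    # Push-down: the predicate runs at the source; only matching rows move.
    return conn.execute(
        "SELECT device, reading FROM sensor_readings WHERE reading > ?",
        (threshold,),
    ).fetchall()

# Same result either way -- the difference is how much data moved.
assert extract_all_then_filter(source, 50) == extract_with_pushdown(source, 50)
```

Commercial data-orchestration products generalize this: the pipeline definition stays central, while filters, joins, and aggregations are translated into the native query language of each source.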
Previously, addressing this challenge required a build-it-yourself approach combined with piecemeal commercial products. Now, however, new commercial solutions are emerging to address the opportunity. This isn’t a nice-to-have product; these days, it’s an absolute necessity.
What do you think? Are we on the right track with these ideas? Do you see these trends playing out differently? Please share your thoughts in the comments section.