Why the 2020s will be dominated by graph technology

This is a guest blog post by Emil Eifrem, CEO of Neo4j. In his view, four major trends driving interest in graph technology are starting to surface.

Let’s look ahead to what could be in store for the next 10 years in graph databases. I believe four key drivers will shape how the graph database category will evolve during this timeframe:

Shift to the cloud

The cloud shift happened first at the application level and then at the virtual machine level, but it has been slower at the data level because of data gravity: moving data simply takes time. Regulatory and compliance concerns have also held things back. But the shift to the cloud is happening, and databases are now the fastest-growing part of the public cloud.

There are two primary drivers for this transition. The first is simplicity: you can focus on building applications rather than managing infrastructure. The second is elasticity: you write your application, your backend grows and shrinks automatically, and you pay only for what you need. On-premises and hybrid cloud deployments will remain important, but overall there will be an inexorable shift to cloud-first graph databases.

Developer-orientated software development, product evolution and bottom-up enterprise software adoption

We will see a strong focus on developers, a trend that will keep graph databases relevant and usable and expand their use in the enterprise. Graph technology vendors will increasingly pay attention to creating software that is highly useful and consistent with the apps and tools developers rely on every day. Ultimately, successful graph databases will feel magical, with the technology disappearing into the background like the clockwork inside a watch.

The continuing rise of the data scientist and graph data science

Gartner states, “Finding relationships in combinations of diverse data, using graph techniques at scale, will form the foundation of modern data and analytics.” A remarkably high 92% of the organisations it polled also say they plan to employ graph techniques within five years. There have also been over 28,000 peer-reviewed scientific publications on graph data science in recent years, an apparently exponential curve, with the machine learning research community treating graphs as a top area of focus for advancing AI and machine learning. You can expect significant strides in advancing state-of-the-art graph data science from vendors in the next decade.
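
To make the idea of graph data science concrete, here is a minimal sketch of the kind of analysis it involves: scoring the entities in a connected dataset by how central they are. It assumes the open-source networkx Python library and an invented toy network of account transfers; it is illustrative only and not tied to any particular vendor's tooling.

```python
# A minimal graph data science sketch: score entities by how central they
# are in a network of relationships, using PageRank.
# Assumes the open-source networkx library (pip install networkx);
# the accounts and transfers below are invented toy data.
import networkx as nx

# Build a small directed graph of money transfers between accounts.
G = nx.DiGraph()
transfers = [
    ("alice", "bob"),
    ("bob", "carol"),
    ("carol", "alice"),
    ("dave", "carol"),
    ("eve", "carol"),
]
G.add_edges_from(transfers)

# PageRank rewards nodes that receive links from other well-connected nodes,
# so 'carol' (who receives transfers from several accounts) scores highest.
scores = nx.pagerank(G, alpha=0.85)

for account, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{account}: {score:.3f}")
```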

Property graph databases are about to come into their own

I contend that for the majority of applications, most domain models are inherently connected. As a test, try an image search on Google for ‘domain model’ and you’ll see how every single one is highly connected. Consider most of the applications you’ve built in the past 20 years and you will realise they are inherently connected. I’m not suggesting you can’t represent these domain models in other data models – you can, but is it a good fit? If we can expose that beautiful data model in an equally beautiful, dare I say magical, product surface, I believe the property graph model will live up to its full potential.
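
As a rough illustration of what such an inherently connected domain model looks like when expressed directly as a property graph, here is a minimal sketch using the official neo4j Python driver against a hypothetical local database. The URI, credentials and the people-and-companies domain are invented for the example; it is a sketch under those assumptions, not a definitive implementation.

```python
# A minimal property graph sketch: a small, inherently connected domain
# (people and companies) modelled as nodes, relationships and properties.
# Assumes the official neo4j Python driver (pip install neo4j) and a local
# database; the URI, credentials and domain model here are illustrative only.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"          # assumed local instance
AUTH = ("neo4j", "your-password")      # placeholder credentials

driver = GraphDatabase.driver(URI, auth=AUTH)

with driver.session() as session:
    # Nodes carry properties; relationships are first-class and can too.
    session.run(
        """
        MERGE (p:Person {name: $person})
        MERGE (c:Company {name: $company})
        MERGE (p)-[:WORKS_AT {since: $since}]->(c)
        """,
        person="Ada", company="Acme", since=2021,
    )

    # Traversing the connections is a direct expression of the domain model.
    result = session.run(
        "MATCH (p:Person)-[w:WORKS_AT]->(c:Company) "
        "RETURN p.name AS person, c.name AS company, w.since AS since"
    )
    for record in result:
        print(record["person"], "works at", record["company"], "since", record["since"])

driver.close()
```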

Ultimately, these trends mean that graphs will become the default database for enterprise data projects. They won't replace the RDBMS, but their richness will increasingly attract the best developers to the best enterprise IT projects at scale.

It is the decade when graphs will be for everyone.