DataStax drives high-performance RAG for AI
DataStax appears to have changed its name.
Once the enterprise database company, the organisation then became the real-time vector database company… then, after a while, it became the real-time AI company, only to now progress into the generative AI data company.
This evolution of moniker and company descriptor can be both baffling and insightful. While many were bemused during IBM’s ‘solutions for a smart planet’ period, the renaming at DataStax is arguably more logical given the shifts we have seen in the data management space in relation to Artificial Intelligence, in particular the use of vector database technologies and, onward, Retrieval Augmented Generation (RAG) for AI validation and veracity.
It is indeed RAG that the firm now turns its attention towards: DataStax is targeting enterprise RAG use cases by integrating the new inference microservices from capitalisation-focused company Nvidia.
The core proposition here hinges on a promise that Nvidia NIM inference microservices and NeMo Retriever microservices, integrated with DataStax Astra DB, will deliver high-performance RAG data solutions for superior customer experiences. Nvidia NeMo is a cloud-native framework for building, customising and deploying generative AI models that includes training and inferencing.
With this integration, users will be able to create instantaneous vector embeddings 20x faster than other popular cloud embedding services and benefit from an 80% reduction in service costs.
Near real-time embeddings
Organisations building generative AI applications clearly face many technological complexities, security and cost barriers that are associated with vectorising both existing and newly acquired unstructured data for seamless integration into Large Language Models (LLMs). The urgency of generating embeddings in near-real time and effectively indexing data within a vector database on standard hardware further compounds these challenges.
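The pattern being described, generating embeddings for unstructured data as it arrives and indexing it immediately for retrieval, can be sketched as follows. This is a minimal, self-contained illustration: the `embed()` function is a toy stand-in for a real embedding service such as NeMo Retriever, and `VectorIndex` stands in for a vector database such as Astra DB.

```python
import hashlib
import math

def embed(text: str, dim: int = 32) -> list[float]:
    """Toy stand-in for a real embedding service (e.g. NeMo Retriever):
    hashes character trigrams into a normalised vector."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class VectorIndex:
    """Toy in-memory vector store: documents are embedded at write time,
    mimicking near-real-time embedding and indexing."""
    def __init__(self):
        self.rows: list[tuple[str, list[float]]] = []

    def ingest(self, doc: str) -> None:
        self.rows.append((doc, embed(doc)))  # embed as data arrives

    def search(self, query: str, k: int = 1) -> list[str]:
        q = embed(query)
        # Rank by dot product (cosine similarity, since vectors are normalised)
        ranked = sorted(self.rows,
                        key=lambda r: -sum(a * b for a, b in zip(q, r[1])))
        return [doc for doc, _ in ranked[:k]]

index = VectorIndex()
index.ingest("Astra DB is a NoSQL vector database")
index.ingest("NeMo Retriever generates embeddings on GPUs")
```

In a production RAG pipeline, the embedding step is the bottleneck this article is concerned with, which is why throughput figures such as embeddings per second per GPU matter.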
DataStax says it is collaborating with Nvidia to help solve this problem.
According to DataStax, “Nvidia NeMo Retriever generates over 800 embeddings per second per GPU, pairing well with DataStax Astra DB, which is able to ingest new embeddings at more than 4000 transactions per second at single-digit millisecond latencies, on low-cost commodity storage solutions/disks.”
This deployment model is said to reduce total cost of ownership for users while delivering fast embedding generation and indexing.
With embedded inferencing built on Nvidia NeMo and Nvidia Triton Inference Server software, DataStax Astra DB achieved 9.48ms latency when embedding and indexing documents in RAG use cases running on Nvidia H100 Tensor Core GPUs, a 20x improvement.
When combined with Nvidia NeMo Retriever, Astra DB and DataStax Enterprise (DataStax’s on-premises offering) provide a fast vector database RAG solution built on a scalable NoSQL database that can run on any storage medium.
RAGStack
Out-of-the-box integration with RAGStack (powered by LangChain and LlamaIndex) makes it easy for developers to replace their existing embedding model with NIM. In addition, using the RAGStack compatibility matrix tester, enterprises can validate the availability and performance of various combinations of embedding and LLM models for common RAG pipelines.
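The “replace your embedding model” claim rests on the embedding backend sitting behind a common interface, as it does in the LangChain abstractions RAGStack builds on. The sketch below illustrates that pattern only; the class and endpoint names are hypothetical stand-ins, not the actual RAGStack or NIM API.

```python
from typing import Protocol

class Embeddings(Protocol):
    """Minimal embedding interface, in the spirit of LangChain's
    Embeddings abstraction (illustrative, not the RAGStack API)."""
    def embed_query(self, text: str) -> list[float]: ...

class LocalEmbeddings:
    """Stand-in for a developer's existing embedding model."""
    def embed_query(self, text: str) -> list[float]:
        return [float(len(text)), 0.0]

class NIMEmbeddings:
    """Stand-in for a NIM-backed client; a real implementation would
    call the NIM microservice endpoint instead of computing locally."""
    def __init__(self, endpoint: str):
        self.endpoint = endpoint
    def embed_query(self, text: str) -> list[float]:
        return [0.0, float(len(text))]

def build_pipeline(embedder: Embeddings):
    """The rest of the RAG pipeline depends only on the interface, so
    swapping backends is a one-line change at construction time."""
    return lambda query: embedder.embed_query(query)

pipeline = build_pipeline(LocalEmbeddings())
# Swap in the NIM-backed embedder without touching the pipeline:
pipeline = build_pipeline(NIMEmbeddings("http://localhost:8000/v1/embeddings"))
```

The compatibility matrix tester mentioned above then amounts to running such pipelines across combinations of embedding and LLM backends and recording which pairings work.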
“In today’s dynamic landscape of AI innovation, RAG has emerged as the pivotal differentiator for enterprises building genAI applications with popular large language frameworks,” said Chet Kapoor, chairman and CEO, DataStax. “With a wealth of unstructured data at their disposal, ranging from software logs to customer chat history, enterprises hold a cache of valuable domain knowledge and real-time insights essential for generative AI applications, but still face challenges. Integrating Nvidia NIM into RAGStack cuts down the barriers enterprises are facing to bring them the high-performing RAG solutions they need to make significant strides in their genAI application development.”
DataStax is also launching, in developer preview, a new feature called Vectorize.
Vectorize performs embedding generation at the database tier, enabling customers to use Astra DB to generate embeddings with its own NeMo microservices instance rather than their own, passing the cost savings directly on to the customer.
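The shift Vectorize represents is architectural: the application inserts raw text and the vector is computed on write, inside the database tier, rather than by the client calling an embedding service first. A minimal sketch of that pattern, with all names illustrative rather than the Astra DB API:

```python
def server_side_embed(text: str) -> list[float]:
    """Toy stand-in for the database's co-located embedding service
    (in DataStax's case, its own NeMo microservices instance)."""
    return [float(ord(c)) for c in text[:4]]

class VectorizeTable:
    """Illustrative table that embeds at the database tier on insert."""
    def __init__(self):
        self.rows: dict[str, tuple[str, list[float]]] = {}

    def insert(self, key: str, text: str) -> None:
        # The client supplies only text; the vector is generated here,
        # at the database tier, not in the application.
        self.rows[key] = (text, server_side_embed(text))

table = VectorizeTable()
table.insert("doc1", "chat transcript snippet")
```

In the client-side model, the application would call an embedding service itself and insert the resulting vector; here that round trip, and its cost, moves to the database provider.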