Kinetica, the real-time database for analytics and generative AI, announced the availability of a Quick Start for deploying natural language to SQL on enterprise data. The Quick Start is aimed at organizations that want ad-hoc analysis of real-time, structured data using an LLM that accurately and securely converts natural language to SQL and returns quick, conversational answers. The offering makes it fast and easy to load structured data, optimize the SQL-GPT Large Language Model (LLM), and begin asking questions of the data in natural language. The announcement follows a series of GenAI innovations that began last May, when Kinetica became the first analytic database to incorporate natural-language-to-SQL capability.
Here is how it works:
- First, sign up for the Kinetica Cloud Free edition.
- Second, load files into Kinetica.
- Third, create context for those tables to help the LLM associate business words and terminology with the actual table and column names (an illustrative sketch of this step follows the list).
- Finally, use the prompt to ask questions and get near-instantaneous answers.
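As a rough illustration of the context step, the sketch below shows the kind of table and column descriptions a context carries and how they might be rendered into the prompt the LLM sees alongside a question. Everything here, the dict keys, `render_context`, and the prompt format, is a hypothetical stand-in for illustration only; Kinetica's actual SQL-GPT context syntax is defined in its own documentation.

```python
# Hypothetical sketch only: these names and the prompt format are
# illustrative assumptions, not Kinetica's actual SQL-GPT API.
context = {
    "table": "retail.orders",
    "comment": "One row per customer order.",
    "column_comments": {
        "ord_ts": "Timestamp when the order was placed",
        "region_cd": "Sales region code, e.g. 'EMEA'",
        "amt": "Order total in USD",
    },
}

def render_context(ctx: dict) -> str:
    """Flatten table context into text supplied to the LLM with each question."""
    lines = [f"Table {ctx['table']}: {ctx['comment']}"]
    lines += [f"  {col}: {desc}" for col, desc in ctx["column_comments"].items()]
    return "\n".join(lines)

question = "What were total sales by region last week?"
prompt = render_context(context) + "\nQuestion: " + question
print(prompt)
```

The value of the step is visible in the output: without the column comments, a question about "sales by region" gives the model no reliable way to land on columns like `amt` or `region_cd` when generating SQL.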
“We’re thrilled to introduce Kinetica’s groundbreaking Quick Start for SQL-GPT, enabling organizations to seamlessly harness the power of Language to SQL on their enterprise data in just one hour,” said Phil Darringer, VP of Product, Kinetica. “With our fine-tuned LLM tailored to each customer’s data and our commitment to guaranteed accuracy and speed, we’re revolutionizing enterprise data analytics with generative AI.”
The Kinetica database converts natural language queries to SQL and returns answers within seconds, even for complex or previously unseen questions. Kinetica also converges multiple modes of analytics, such as time series, spatial, graph, and machine learning, which broadens the types of questions that can be answered. What makes conversational query possible is native vectorization that leverages NVIDIA GPUs and modern CPUs. NVIDIA GPUs, the compute platform behind many of the major AI breakthroughs of recent years, are now extending into data management and ad-hoc analytics. In a vectorized query engine, data is stored in fixed-size blocks called vectors, and query operations are performed on these vectors in parallel rather than on individual data elements. This allows the query engine to process many data elements simultaneously, resulting in radically faster query execution on a smaller compute footprint.
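To make the vectorized-execution idea concrete, here is a minimal Python/NumPy sketch, not Kinetica code, contrasting element-at-a-time processing with block-at-a-time processing for a simple filter-and-aggregate query. The block size, column names, and query are illustrative assumptions.

```python
import numpy as np

BLOCK = 4096  # fixed-size block ("vector") of column values

def filtered_sum_rowwise(price, qty):
    """Element-at-a-time: one comparison and one multiply per row."""
    total = 0.0
    for p, q in zip(price, qty):
        if p > 100.0:
            total += p * q
    return total

def filtered_sum_vectorized(price, qty):
    """Vector-at-a-time: each operation runs across a whole block,
    which the hardware can execute on many values in parallel."""
    total = 0.0
    for start in range(0, len(price), BLOCK):
        p = price[start:start + BLOCK]
        q = qty[start:start + BLOCK]
        mask = p > 100.0                          # one vectorized comparison
        total += float(np.dot(p[mask], q[mask]))  # one vectorized multiply-add
    return total

# Synthetic columns standing in for a table's price and quantity fields.
rng = np.random.default_rng(0)
price = rng.uniform(1.0, 200.0, 1_000_000)
qty = rng.integers(1, 10, 1_000_000).astype(np.float64)

assert np.isclose(filtered_sum_rowwise(price, qty),
                  filtered_sum_vectorized(price, qty), rtol=1e-6)
```

On typical hardware the vectorized version runs far faster because each instruction does useful work on many values at once; pushing the same principle onto thousands of GPU cores is, in broad strokes, the speedup the article attributes to Kinetica's engine.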
SOURCE: GlobeNewswire