Pinecone launches new features to lower the barrier to entry for vector search

Pinecone Systems Inc., a search infrastructure company, today announced the release of new features and enhancements that make it significantly easier for developers — regardless of AI or ML background — to get started with vector search for applications such as semantic search and recommendation systems. New features include up to 10x faster indexes, flexible collections of vector data, and zero-downtime vertical scaling.

“Our vector database makes it easy for engineers to build capabilities like semantic search, AI recommendations, image search, and AI threat detection, but for teams who are new to vector search, some challenges remain,” said Edo Liberty, founder and CEO of Pinecone. “Those challenges center on the limited capacity of indexes, support for high-throughput applications, and resizing indexes to keep up with growing data volumes. Our new release addresses these technical challenges, further simplifying and speeding up vector search.”


With Pinecone’s new vertical scaling, if a company’s index grows beyond the available capacity, pods can be resized on a live index with zero downtime to accommodate more data. Pods are now available in different sizes — 1x, 2x, 4x, and 8x — so engineering teams can start with the exact capacity they need and easily scale their index. Hourly costs for pods scale with the new sizes, so teams still pay only for what they use.
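The pod-sizing logic described above can be sketched as a back-of-the-envelope calculation. The 1x/2x/4x/8x sizes and the size-proportional hourly pricing come from the announcement; the per-pod vector capacity and the base hourly rate below are hypothetical placeholders, not Pinecone's actual figures:

```python
# Sketch of choosing a pod size under the announcement's pricing model.
# POD_SIZES comes from the release; BASE_CAPACITY and BASE_HOURLY_RATE
# are ASSUMED placeholder values for illustration only.
POD_SIZES = [1, 2, 4, 8]
BASE_CAPACITY = 1_000_000   # vectors per 1x pod (assumed)
BASE_HOURLY_RATE = 0.10     # USD per hour per 1x pod (assumed)

def smallest_pod_size(num_vectors: int) -> int:
    """Return the smallest pod size (1x, 2x, 4x, or 8x) that fits the data."""
    for size in POD_SIZES:
        if num_vectors <= size * BASE_CAPACITY:
            return size
    raise ValueError("Data exceeds the largest single-pod size; add more pods.")

def hourly_cost(size: int) -> float:
    """Hourly cost scales linearly with pod size, per the announcement."""
    return size * BASE_HOURLY_RATE

size = smallest_pod_size(3_500_000)
print(f"{size}x pod at ${hourly_cost(size):.2f}/hr")  # 3.5M vectors -> 4x pod
```

Because pods can be resized on a live index, a team that chose a 1x pod and later outgrew it could move to a 2x or 4x pod without downtime, rather than over-provisioning up front.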

Pinecone’s new Collections allow engineers to experiment with and store vector data in one place. Users can save data from an index and create new indexes from any collection. Whether using collections for backing up and restoring indexes, testing different index types with the same data, or moving data to a new index, users can now do it all within Pinecone.

Pinecone is also launching p2 pods that are purpose-built for performance and high-throughput use cases. The new p2 pod type provides blazing-fast search speeds of under 10ms and throughput as high as 200 QPS per replica (throughput can be increased by adding more replicas). That is 10x better than what was previously available in Pinecone. This is achieved with a new graph-based index that trades off ingestion speed and filter performance in exchange for lower latencies and higher throughput.
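Since throughput scales by adding replicas, the capacity planning implied above reduces to simple arithmetic. The 200 QPS-per-replica figure is from the announcement; the helper function is our own illustration:

```python
import math

P2_QPS_PER_REPLICA = 200  # per-replica throughput cited in the announcement

def replicas_for(target_qps: float) -> int:
    """Minimum number of p2 replicas needed to sustain a target throughput."""
    return max(1, math.ceil(target_qps / P2_QPS_PER_REPLICA))

print(replicas_for(750))  # a 750 QPS workload needs 4 replicas
```

Under this model, an application expecting 750 queries per second would provision 4 replicas, with headroom to add more as traffic grows.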

“Our new features make it easier and more cost-effective than ever for engineers to start and scale a vector database in production, furthering our mission of democratizing vector search,” added Liberty.

You can read the full announcement for more information about these features and performance improvements.
