Distributional, the modern enterprise platform for AI testing, announced that it has raised $19 million in Series A funding led by Two Sigma Ventures with participation from Andreessen Horowitz, Operator Collective, Oregon Venture Fund, Essence VC, Alumni Ventures and dozens of angel investors. The new round brings Distributional’s total capital raised to $30 million, less than one year after incorporation. The milestone also coincides with the initial enterprise deployments of its AI testing platform, which gives AI engineering and product teams confidence in the reliability of their AI applications and reduces operational AI risk in the process.
Unlike traditional software testing, AI testing must be performed consistently and adaptively over time on a meaningful amount of data, because AI systems are inherently probabilistic and dynamic. As AI applications grow more powerful and pervasive, so does the need for better AI testing: the operational risks of deploying faulty products carry increasingly significant financial, regulatory and reputational consequences for a business.
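As a concrete illustration of what distribution-level testing on a meaningful amount of data can look like (a generic statistical sketch, not Distributional’s product or API; the metric, sample sizes and threshold below are assumptions), a test can compare the distribution of an output metric between a baseline run and a new run rather than asserting on any single output:

```python
# Generic sketch: a distribution-level test for a non-deterministic AI application.
# Instead of asserting on one output, compare the distribution of an output metric
# (here, a hypothetical "response length") between a baseline run and a new run.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical metric values collected from many runs of the same application.
baseline_lengths = rng.normal(loc=220, scale=30, size=500)   # e.g., last week's outputs
candidate_lengths = rng.normal(loc=240, scale=35, size=500)  # e.g., after a model update

# Two-sample Kolmogorov-Smirnov test: has the metric's distribution shifted?
result = ks_2samp(baseline_lengths, candidate_lengths)

ALPHA = 0.01  # significance level chosen per test; an assumption for this sketch
if result.pvalue < ALPHA:
    print(f"FAIL: distribution shift detected (KS={result.statistic:.3f}, p={result.pvalue:.2e})")
else:
    print(f"PASS: no significant shift (KS={result.statistic:.3f}, p={result.pvalue:.2e})")
```

A failing comparison of this kind flags a behavioral shift for review rather than a single bad output, which is why the volume and consistency of test data matter.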
“Between my previous line of work optimizing AI applications at SigOpt and deploying AI applications at Intel, and through conversations with Fortune 500 CIOs, it became clear that reliability of AI applications is both critical and challenging to assess,” said Scott Clark, co-founder and CEO of Distributional. “With Distributional, we have built a scalable statistical testing platform to discover, triage, root cause, and resolve issues with the consistency of AI/ML application behavior, giving teams confidence to bring and keep these applications in production.”
Distributional is built to test the consistency of any AI/ML application, especially generative AI, which is particularly unreliable because it is prone to non-determinism, producing varying outputs for a given input. Generative AI is also more likely to be non-stationary, with many shifting components outside the control of developers. As AI leaders come under increasing pressure to ship generative AI, Distributional helps automate AI testing by suggesting ways to augment application data, proposing tests, and providing a feedback loop that adaptively calibrates those tests for each AI application being tested.
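One way to picture the adaptive calibration idea (again, an illustrative sketch with assumed names and thresholds, not the company’s actual mechanism) is a feedback loop in which a test’s threshold is re-derived from runs that reviewers have accepted, so the test widens or tightens around the application’s observed behavior:

```python
# Illustrative sketch of adaptive test calibration (not Distributional's implementation):
# a test threshold is re-derived from runs that reviewers have accepted, so the test
# adapts to observed behavior instead of relying on a fixed constant.
import numpy as np

class CalibratedThresholdTest:
    def __init__(self, accepted_samples, quantile=0.95):
        self.accepted = list(accepted_samples)  # metric values from accepted runs
        self.quantile = quantile

    @property
    def threshold(self):
        return float(np.quantile(self.accepted, self.quantile))

    def check(self, value):
        """Return True if the new metric value is within the calibrated range."""
        return value <= self.threshold

    def accept(self, value):
        """Reviewer marks a flagged run as acceptable; recalibrate around it."""
        self.accepted.append(value)

# Hypothetical usage: latency (seconds) of an AI application's responses.
test = CalibratedThresholdTest(accepted_samples=[1.1, 1.3, 0.9, 1.2, 1.4])
print(round(test.threshold, 2))  # calibrated threshold from accepted runs
print(test.check(2.0))           # False -> flagged for triage
test.accept(2.0)                 # reviewer marks the flagged run acceptable
print(round(test.threshold, 2))  # threshold widens to reflect the new baseline
print(test.check(1.8))           # now passes under the recalibrated threshold
```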
“We are inspired by Distributional’s mission of making AI reliable so teams are confident deploying it across their full set of use cases, maximizing the impact of AI on their organizations in the process,” said Frances Schwiep of Two Sigma Ventures. “By building for enterprise scale, precision and flexibility from day one, Distributional occupies a unique position in the broader landscape of AI testing, monitoring and operations. We have strong conviction in the Distributional team’s deep expertise in the field, as evidenced by how the company is already addressing both the complexity and scale of its design partners in finance, technology and industrial sectors.”
Distributional’s platform allows AI product teams to proactively and continuously identify, understand and address AI risk before customer impact. Prominent features include:
- Extensible Test Framework: Distributional’s extensible test framework enables AI application teams to collect and augment data, test on that data, alert on test results, triage those results, and resolve alerts through either adaptive calibration or analysis-driven debugging (a simplified sketch of this collect-test-alert-triage loop follows the feature list below). The framework can be deployed as a self-managed solution in a customer VPC and integrates with existing datastores, workflow systems and alerting platforms.
- Configurable Test Dashboard: Teams use Distributional’s configurable test dashboards to collaborate on test repositories, analyze test results, triage failed tests, calibrate tests, capture test session audit trails and report test outcomes for governance processes. This enables multiple teams to collaborate on an AI testing workflow throughout the lifecycle of the underlying application, and standardize it across AI platform, product, application and governance teams.
- Intelligent Test Automation: Distributional makes it easy for teams to get started and scale AI testing by automating data augmentation, test selection and the calibration of these steps in an adaptive preference-learning process. This intelligence is the flywheel that fine-tunes a test suite to a given AI application throughout its production lifecycle and scales testing across all properties of all components of all AI applications.
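As a rough illustration of the collect-test-alert-triage loop described in the extensible test framework above (a hypothetical sketch with invented names and record schema; this is not Distributional’s SDK or API), the workflow might be organized along these lines:

```python
# Hypothetical sketch of a collect -> test -> alert -> triage loop, illustrating the
# workflow described above. Names and structure are invented for illustration only.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TestResult:
    name: str
    passed: bool
    detail: str

@dataclass
class StatisticalTest:
    name: str
    metric: Callable[[dict], float]        # extracts a metric from one application record
    check: Callable[[List[float]], bool]   # evaluates the metric values collected for a run

    def run(self, records: List[dict]) -> TestResult:
        values = [self.metric(r) for r in records]
        return TestResult(self.name, self.check(values), f"n={len(values)}")

def alert(result: TestResult) -> None:
    # Placeholder for integration with an alerting platform (e.g., paging or chat).
    print(f"[ALERT] test '{result.name}' failed ({result.detail}) -> routed for triage")

# Collect: records logged from the AI application (hypothetical schema).
records = [
    {"prompt": "q1", "response": "a" * 210, "latency_s": 1.2},
    {"prompt": "q2", "response": "a" * 480, "latency_s": 3.9},
    {"prompt": "q3", "response": "a" * 230, "latency_s": 1.1},
]

# Test: simple distribution-level checks over the collected records.
suite = [
    StatisticalTest(
        name="median response length within expected band",
        metric=lambda r: float(len(r["response"])),
        check=lambda vs: 150 <= sorted(vs)[len(vs) // 2] <= 300,
    ),
    StatisticalTest(
        name="max latency under 3 seconds",
        metric=lambda r: r["latency_s"],
        check=lambda vs: max(vs) <= 3.0,
    ),
]

# Alert and triage: failed tests are surfaced for review.
for t in suite:
    result = t.run(records)
    if not result.passed:
        alert(result)
```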
SOURCE: BusinessWire