Overview
Zoro is a B2B eCommerce company in the MRO (maintenance, repair, and operations) space. The company runs analytics on BigQuery and uses Looker to publish human-reviewed official dashboards. Teams still need fast answers to ad hoc questions, which often require one-off SQL and add to the BI backlog. Zoro wanted a governed natural-language analytics layer so teams could self-serve reliable answers faster, without replacing Looker or opening broad access to the full warehouse.
The Challenge
High-volume ad hoc questions created churn and inconsistency: Zoro fielded roughly 20 to 25 analytics questions per day, often in Slack. Many became one-off queries or backlog items, leading to duplicated work and inconsistent answers.
‘Official’ reporting had to stay governed and reviewable: Zoro wanted to keep Looker as the home for human-reviewed official dashboards. Any natural-language layer needed SQL visibility and exportable outputs for validation, and it had to fail safely when context was missing.
Security and data scope had to be tightly controlled: Zoro wanted a clean pilot scoped to a small set of approved BigQuery tables, supported by a data dictionary and curated definitions, plus alignment on SOC 2 Type 2 expectations and deployment options in SaaS or in Zoro’s own GCP environment.
The Solution
WisdomAI connected to Zoro’s BigQuery environment and grounded natural-language answers in Zoro’s existing definitions and documentation, including a data dictionary, selected LookML, and query examples.
What this enabled:
- Self-serve natural-language Q&A on a scoped set of approved BigQuery tables
- SQL visibility and exportable outputs to support validation and sharing (a minimal sketch of this workflow follows the list)
- Consistent metric definitions by reusing curated LookML and data dictionary context
- Admin controls, feedback workflows, and evaluation tests to improve accuracy over time
- Faster answers for product and business leaders without creating BI backlog
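
For illustration only, the sketch below shows how a generated SQL statement might be reviewed, executed against an approved BigQuery table, and exported for validation. It uses the standard google-cloud-bigquery client; the project and table names are hypothetical, and this is not WisdomAI's implementation.

```python
from google.cloud import bigquery

# Hypothetical scoped project for the pilot; not Zoro's actual environment.
PROJECT = "zoro-analytics-pilot"

def run_reviewed_query(sql: str):
    """Run a generated SQL statement after a human has reviewed it.

    A dry run first surfaces the estimated bytes scanned (supporting the
    SQL-visibility step described above) before anything executes.
    """
    client = bigquery.Client(project=PROJECT)

    # Dry run: validates the SQL and estimates cost without running the query.
    dry = client.query(sql, job_config=bigquery.QueryJobConfig(dry_run=True))
    print(f"Estimated bytes processed: {dry.total_bytes_processed}")

    # Execute with a hard cap on bytes billed as a simple guardrail.
    job = client.query(
        sql,
        job_config=bigquery.QueryJobConfig(maximum_bytes_billed=10**10),
    )
    return job.result()

if __name__ == "__main__":
    rows = run_reviewed_query(
        "SELECT order_date, SUM(net_sales) AS net_sales "
        "FROM `curated.orders` GROUP BY order_date ORDER BY order_date"
    )
    # Export for sharing and validation, e.g. alongside an official Looker
    # dashboard. Requires pandas (and db-dtypes) to be installed.
    rows.to_dataframe().to_csv("net_sales_by_day.csv", index=False)
```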
