
Author

Alex Handsaker

Founder, Semarize

Alex spent most of his career in tech sales and revenue enablement, where he saw a consistent gap - the most important data in go-to-market lived in conversations, but couldn't be structured or used.

Semarize is built to change that: turning conversations into something systems can actually operate on.

Posts by Alex Handsaker

RevOps

AI Call Scoring Is Theatre Without a Knowledge Layer

AI call scoring that runs on a good LLM with a well-written rubric can look accurate until you test it against what actually happened. The failure isn't one missing check. Every commercial dimension worth assessing has multiple facets, and each facet requires its own grounded document to evaluate properly. A knowledge layer is what makes scoring checkable across all of them rather than merely plausible across any of them.

Developers

Conversation Intelligence for Developers: Don't Build a Fragile Pipeline, Don't Buy a Black Box

Most teams don't fail to add conversation intelligence because the model is bad; they fail because the integration is fragile and unstructured. The fix isn't a better LLM pipeline or a platform API you can't control. It's a layer that takes a transcript, runs it against a versioned Kit, and returns deterministic typed JSON you can test, version, and route into your product.
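A minimal sketch of what "deterministic typed JSON" can mean downstream - the Kit version, field names, and types here are hypothetical illustrations, not Semarize's actual API shape:

```python
import json
from dataclasses import dataclass

# Hypothetical typed result a versioned evaluation Kit might return
# for one transcript. Field names and types are illustrative only.
@dataclass(frozen=True)
class CallResult:
    kit_version: str
    budget_discussed: bool   # boolean field: easy to test and route
    objection_count: int     # numeric field: aggregatable in BI
    next_step: str           # text field: routable into CRM

def parse_result(raw: str) -> CallResult:
    """Parse a JSON payload into a typed record; raises if the shape drifts."""
    data = json.loads(raw)
    return CallResult(
        kit_version=str(data["kit_version"]),
        budget_discussed=bool(data["budget_discussed"]),
        objection_count=int(data["objection_count"]),
        next_step=str(data["next_step"]),
    )

payload = '{"kit_version": "v3", "budget_discussed": true, "objection_count": 2, "next_step": "security review"}'
result = parse_result(payload)
```

Because the output is a fixed schema rather than freeform text, the parse step itself becomes the test: a missing key or wrong type fails loudly instead of silently drifting.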

Sales Intelligence

Conversation Data Warehouse: Getting Consistent Call Fields Into BigQuery, Snowflake, and Databricks

BI teams can't query transcripts. They can't join AI summaries to CRM objects. To make conversation data useful for analytics, it needs to arrive as consistent typed fields - booleans, scores, text fields, lists - with join keys that connect calls to opportunities, accounts, and contacts. This is the pipeline, the schema, and the governance model that makes sales call analytics possible in your warehouse.

QA & Compliance

100% QA Scoring Without Manual Review: Deterministic Rubrics for Every Call

Manual QA sampling at 2–5% has two problems: coverage and consistency. Automated scoring with deterministic rubrics solves both - every call gets scored the same way, with no reviewer required to generate the result. The shift isn't just efficiency - it changes what coaching is built from and turns compliance verification from sampling into complete coverage.

RevOps

Conversation Intelligence Produces the Signals. Outcomes Depend on What You Build With Them.

CI vendors sell outcomes - better forecasts, improved coaching, higher win rates. The outcome claims are accurate for teams that wire CI signals into their downstream workflows. For teams that don't, the dashboards fill up and the outcomes don't move. The gap between running CI and seeing results is always an implementation gap, not a vendor gap.

Customer Success

Churn Risk Shows Up in CS Calls Before It Shows Up in Health Scores

Most churn detection models catch the consequences - usage drops, NPS decline, support spikes - after the customer has already started disengaging. The predictive signals are in CS call recordings: escalation language, stakeholder engagement changes, absent expansion mentions, deferred follow-through. Knowledge-grounded extraction turns these into signals calibrated to your definitions - making early intervention possible in a way generic extraction can't.

RevOps

Stop Running Win/Loss Surveys. Start Capturing Deal Signals From Calls.

Win/loss surveys have a structural timing problem: they collect buyer memory after the outcome, not the decision inputs during the deal. Competitor mentions, pricing responses, and stakeholder dynamics exist in call recordings as they happen. Extracting them as structured signals makes win/loss real-time - and far more useful for deal coaching and pipeline risk.

Sales Coaching

Conversation Intelligence Isn't Enablement Analytics. Here's What Is.

Sales enablement teams buy conversation intelligence to measure coaching impact, then find the dashboards don't produce what they need: consistent rubric scoring, queryable time-series data, and before-and-after skill lift metrics. Visibility into calls and measurement of skill development are different problems - and most CI tools only solve the first one.

Sales Intelligence

Capacity Planning Lags Because Sales Data Misses the Act of Selling

Sales capacity models built on CRM events are structurally late. Stage labels and activity counts record what happened to deals, not what was happening inside them. The missing ingredient isn't more pipeline data - it's structured signals from selling conversations that show whether buyers actually understood, committed, and progressed.

Sales Coaching

AI Scorecards Don't Disagree. Your Prompt Does.

Inconsistent AI scorecards aren't an AI problem - they're a process failure. Freeform prompts ask the model to re-interpret evaluation criteria on every run, and that interpretation drifts with phrasing, model updates, and context. The fix is an evaluation contract: a locked schema with defined output types that produces the same result on the same call, every time.

Developers

Introducing the Semarize MCP

Today we're shipping the Semarize MCP. Connect Claude, Codex, or any MCP-compatible AI tool to your workspace and build evaluation schemas from inside a conversation: create Bricks, draft Kits, attach knowledge bases, and publish - without leaving the tool you're already working in.

Sales Coaching

Why Conversation Intelligence Doesn't Drive Behavioural Change (and What Does)

Eighteen months into a CI implementation, many teams find that call scores have improved but win rates haven't moved. The data is there. The dashboards are running. The coaching is happening. What's missing is the step where insight becomes a different behaviour in the next conversation - and CI alone doesn't close that gap.

Sales Intelligence

Overhiring Is a Measurement Failure, Not a Hiring Strategy

Sales teams don't overhire because of poor judgment. They overhire because CRM-driven capacity models are built on stage labels and activity counts - data that can't reveal whether buyers are actually progressing. By the time deal reality becomes visible, headcount decisions have already been made.

RevOps

MEDDICC Without the Admin: Deterministic Scoring for Every Discovery Call

Most MEDDICC data is stale before it reaches CRM. Reps update fields from memory after the call, introducing timing gaps and sampling bias that make qualification scores unreliable. Extracting MEDDICC signals directly from transcripts fixes the data freshness problem that better training never will.

RevOps

CRM Enrichment From Sales Calls: The RevOps Data Ops Playbook

Most CRM enrichment stalls at 30% field coverage because the output is unstructured - reps updating from memory, summaries stored as notes. The fix is a structured extraction pipeline: transcript to consistent fields to CRM to automation triggers. This playbook covers the schema, the routing, and the implementation in Salesforce and HubSpot.

RevOps

What Conversation Intelligence Is Actually Missing and How to Fill the Gap

Most teams already have conversation data. The problem isn't volume - it's that transcripts sitting in Zoom Cloud or a shared Drive folder are locked in text no system in your stack can read. Semarize turns what was said into structured JSON your CRM, BI, and automations can consume directly.

Sales Intelligence

Conversation Intelligence Doesn't Fail on Calls. It Fails on Knowledge.

Early CI tools were built on ML classifiers - talk ratios, question counts, keyword detection. LLMs changed what's possible. But they introduced a new risk: model knowledge. When scoring runs against what the AI infers from training rather than your pricing, ICP criteria, and qualification playbooks, outputs are plausible and wrong.

Sales Coaching

AI scorecards are theatre unless they measure customer understanding

Most AI call scorecards measure what the rep did - agenda set, questions asked, next step mentioned. That's measuring inputs. What actually matters is whether the buyer understood anything. The two are not the same thing, and the gap between them is where scorecard theatre lives.

Product

Bricks and Kits: the mechanism for stable conversation evaluation

Freeform prompts produce inconsistent evaluation results - scores drift, output shapes change, and you can't tell whether coaching improved anything or whether the rubric moved. Bricks define a locked evaluation schema: one question, one output type. Kits group them into reusable evaluation workflows. The result is schema-stable conversation analysis you control.
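A rough sketch of the Brick/Kit idea described above - one question paired with one locked output type, grouped into a reusable workflow. The class names and field names are hypothetical modelling, not the product's real schema:

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical model: a Brick pairs one question with one locked output
# type, and a Kit groups Bricks into a schema-stable evaluation workflow.
@dataclass(frozen=True)
class Brick:
    question: str
    output_type: Literal["boolean", "score", "text", "list"]

@dataclass(frozen=True)
class Kit:
    name: str
    version: str
    bricks: tuple[Brick, ...]

    def schema(self) -> dict[str, str]:
        """The locked output shape: same keys and types on every run."""
        return {b.question: b.output_type for b in self.bricks}

discovery = Kit(
    name="discovery-call",
    version="1.0",
    bricks=(
        Brick("Was budget discussed?", "boolean"),
        Brick("Rate discovery depth 1-5", "score"),
    ),
)
```

The point of freezing the schema this way is that a score change can only mean the calls changed, not that the rubric quietly moved under you.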

Sales Intelligence

Sales is human. Sales data is not.

CRM data captures events - stage changes, activity counts, timestamps. It doesn't capture the human act of selling. The evidence that explains why deals move or stall lives in conversations, not in fields a rep updated.

Conversation Intelligence

What is a Conversational Intelligence API?

Conversational intelligence gets applied to three very different things - deal intelligence, note-taking, and pattern-level analysis. Only one produces data your systems can act on. Here's what a CI API actually does and how the shift away from full-platform solutions is changing what's possible.

Thoughts

Why I've built Semarize.

It's about time we looked at our conversations through a more scientific lens. This is why I've built Semarize, what it's for, and what I want it to help people do.