
Stas Kulesh (CPO at Sky AI)
May 15, 2026
AI summaries break down when findings cannot be traced back to exact pages. Bates numbers turn source attribution from a nice-to-have into a defensible review workflow.

AI can summarize a medical file in seconds. That does not mean anyone should trust the summary.
The real problem in medico-legal and claims review is not just speed. It is reference integrity. If a reviewer, adjuster, lawyer, or physician cannot trace a finding back to the exact page where it came from, the output becomes hard to defend and even harder to use.
That is where Bates numbers matter. They look simple. In practice, they solve one of the most important problems in AI-assisted document review.
Most teams evaluate AI by asking whether it can read faster, summarize better, or answer questions across thousands of pages. Those are valid questions. They are not the decisive ones.
The decisive question comes one step later: when the system says the claimant reported radicular pain, when the chart shows a medication change, or when an operative note contradicts a later examiner, can the reviewer jump straight to the source page and verify it immediately?
If the answer is no, the workflow breaks.
That is the hidden cost of generic AI outputs. They may sound useful. But every unsupported sentence creates a second job: manually hunting through the record to confirm whether the claim is true. In legal and claims work, that verification burden can erase most of the productivity gain the AI promised in the first place.
Bates numbers are unique, sequential identifiers stamped on the pages of a document production, typically a short prefix followed by a zero-padded number, such as SMITH000184. In legal and medical record review, they create a stable way to cite source material even when files are large, duplicated, or reorganized.
That matters because page 184 of one PDF is not always page 184 of the full record. Files get merged. Exhibits get reordered. Supplemental productions arrive later. A plain page reference can become ambiguous fast. A Bates-stamped page reference is much harder to dispute.
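To make that concrete, here is a minimal sketch in Python (all names hypothetical) of why a Bates stamp keeps resolving after the record is merged or reordered, while a raw page position does not:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Page:
    bates: str   # immutable stamp applied at production time, e.g. "SMITH000184"
    text: str

def find_page(record: list[Page], bates: str) -> int | None:
    """Return the current position of a Bates-stamped page, even after
    the record has been merged, supplemented, or reordered."""
    for i, page in enumerate(record):
        if page.bates == bates:
            return i
    return None

record = [Page("SMITH000184", "Operative note ..."),
          Page("SMITH000185", "Discharge summary ...")]

# A supplemental production arrives and gets merged in front:
record.insert(0, Page("SMITH000501", "Supplemental records ..."))

# "Page 184 of the PDF" now points somewhere else, but the Bates
# reference still resolves to the same physical page.
assert find_page(record, "SMITH000184") == 1
```

The stamp travels with the page, so the citation survives every reassembly of the record.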
For AI-assisted review, Bates numbers act as the bridge between generated insight and defensible evidence. They turn “the AI says this happened” into “here is the exact page where this appears.”
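One way to picture that bridge is an AI finding that carries its Bates citations as structured data rather than loose prose. A minimal sketch, with hypothetical names:

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    bates: str     # Bates stamp of the cited page, e.g. "SMITH000212"
    snippet: str   # the quoted evidence as it appears on that page

@dataclass
class Finding:
    claim: str
    citations: list[Citation] = field(default_factory=list)

finding = Finding(
    claim="Claimant reported radicular pain at the March follow-up.",
    citations=[Citation("SMITH000212", "radiating pain into the left leg")],
)

# Without citations the claim is just a draft; with them, each
# sentence carries its own proof and can be checked page by page.
for c in finding.citations:
    print(f"{finding.claim} [{c.bates}]")
```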
Medical files are dense, repetitive, and often contradictory. The same diagnosis may appear across intake notes, clinical follow-ups, insurer forms, hospital records, and later IME reports. Dates get copied forward. Histories get paraphrased. Important findings get buried inside long narrative notes.
That is why a clean answer is not enough. The reviewer has to know which source the answer came from.
In a claims or medico-legal workflow, the difference is operational. A summary without page-level proof is just a draft. A summary with precise source attribution can support reserve discussions, expert review, QA, and report writing. That is the standard many AI tools still miss. They generate text well. They do not always preserve trust well.
Many products now claim to provide citations. That sounds good, but not all citations are equal.
A vague footnote, a document title, or a source list at the bottom of an answer still leaves the user doing manual work. The real win is navigable attribution, where a citation takes the reviewer directly to the exact page and location inside the viewer.
That is why Sky AI uses Bates Link citations in chat responses and review workflows. The user does not just see a source reference. They can click through to the exact page and location where the evidence lives. That changes the experience from “interesting AI output” to “usable professional workflow.”
It also changes trust. Instead of asking the user to believe the model, the system makes verification nearly immediate.
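As an illustration of how navigable attribution can work, here is a sketch of turning a Bates citation into a viewer deep link. The URL scheme and query parameters are assumptions for illustration, not Sky AI's actual API:

```python
from urllib.parse import urlencode

def viewer_link(base_url: str, bates: str, highlight: str | None = None) -> str:
    """Build a deep link that opens the viewer on the Bates-stamped page.
    The /view path and parameter names here are hypothetical."""
    params = {"bates": bates}
    if highlight:
        params["highlight"] = highlight
    return f"{base_url}/view?{urlencode(params)}"

print(viewer_link("https://viewer.example.com", "SMITH000212",
                  highlight="radiating pain into the left leg"))
# https://viewer.example.com/view?bates=SMITH000212&highlight=radiating+pain+into+the+left+leg
```

The design point is that the citation itself, not the reader, does the navigation: one click lands on the stamped page with the evidence highlighted.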
This issue appears everywhere serious document review happens: adjusters setting reserves, physicians preparing IME reports, lawyers building chronologies, QA teams checking drafts.
In each case, the same rule applies. If the output cannot be traced back to exact pages fast, the user falls back to manual checking. That means the system may save typing, but it does not truly save review time.
The next generation of document AI should not be judged only by how well it summarizes. It should be judged by how well it preserves evidentiary traceability.
That is where Bates-linked workflows are stronger than generic chat outputs. They narrow the gap between extraction and proof. The model can help organize, compare, and answer. But the human can still click back to the exact source page without losing context.
Sky AI was built with that standard in mind. Across medical, legal, and claims workflows, the platform combines organization, cited chat, and verifiable source navigation in one system. That is part of the broader difference between AI that produces impressive language and AI that supports actual review work. Teams that want more context on the full document workflow can also look at how AI-assisted document analysis changes review operations, and at the company’s Trust Center for privacy and compliance context.
As AI becomes standard in medical record review, the competitive line will move. It will no longer be enough for a system to answer quickly. It will need to answer in a way professionals can verify instantly.
That is why Bates numbers still matter in an AI era. They are not a legacy legal artifact. They are infrastructure for trust.
For teams reviewing medical records, the real question is no longer “can AI summarize this file?” It is “can the summary take me straight to the proof?” When the answer is yes, AI becomes far more useful. When the answer is no, the referencing problem is still unsolved.