
Vijilent Blog

Some ideas we feel would be selfish to keep to ourselves


The Digital Hall of Mirrors: Why AI Can’t Be Trusted for Personal Background Checks


Artificial intelligence promises instant insights and lightning-fast research. It’s tempting to rely on tools like ChatGPT and other large language models (LLMs) to learn more about jury candidates, judges’ rulings, and other details about people.


But when it comes to researching real people, AI can become a persuasive fiction writer instead of a credible investigator.


While AI is transforming the legal field—streamlining document review, contract analysis, and e-discovery—it still falters when accuracy and accountability matter most. Nowhere is that more dangerous than in jury research and personal background checks.


Here’s why:


1.  AI Predicts What Sounds Right — Not What Is Right

LLMs don’t “know” facts. They generate responses based on probability. That means they sometimes fabricate information—a phenomenon researchers call “hallucinations.”


Studies show hallucination rates can range from 20% to 50%, depending on prompt complexity. In practice, that could mean inventing job histories, misattributing political views, or implying criminal records that don’t exist.
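To see why this happens, consider a toy sketch of the underlying mechanism. The names and probabilities below are entirely made up for illustration; real models work over vast vocabularies, but the principle is the same: the output is chosen by likelihood, and nothing in the process checks it against reality.

```python
import random

# Hypothetical continuation probabilities for a prompt about a fictional person.
# A language model picks what is statistically plausible, not what is true.
continuations = {
    "John Smith worked at": [
        ("a law firm", 0.5),   # common, so highly probable
        ("a hospital", 0.3),
        ("NASA", 0.2),         # still a possible output, true or not
    ],
}

def sample_next(prompt: str) -> str:
    """Sample a continuation weighted by probability; no fact is consulted."""
    tokens, weights = zip(*continuations[prompt])
    return random.choices(tokens, weights=weights, k=1)[0]

# Any of the three answers can come out; the model has no way to know
# which one, if any, is accurate.
print("John Smith worked at", sample_next("John Smith worked at"))
```

Every run produces a fluent, confident-sounding sentence; none of them is grounded in a record about the actual person.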


In legal research, even a small error can distort strategy—or damage someone’s reputation.


2.  The Identity Confusion Problem

Search a common name like “John Smith,” and AI may blend multiple individuals into one composite profile. Research from the University of Cambridge highlights how LLMs frequently conflate people with identical or similar names.


The result? Mixed biographies, misattributed social media posts, and inaccurate professional histories.


In law, precision isn’t optional. It’s foundational.


3.  A Real-World Test

I ran a simple experiment: I asked AI to find all publicly available information about me.


It found nothing it could confidently attach to my identity.


Then I conducted my own search—cross-checking social media mentions, tracing digital breadcrumbs, and validating sources. I identified more than 10 accurate links, including social media profiles, academic history, and professional background.


Same subject. Same public internet. Completely different outcomes. The difference wasn’t access to data—it was context and verification.
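That verification step can be sketched in code. This is a minimal, hypothetical illustration (the subject data, attribute names, and three-attribute threshold are assumptions for this example, not a description of any actual research process): a link is attributed to a person only when several independent attributes corroborate it, never on a name match alone.

```python
# Hypothetical sketch: require multiple corroborating attributes before
# attributing a web result to a subject.

def attribute_match_score(candidate: dict, subject: dict) -> int:
    """Count how many known attributes the candidate record confirms."""
    keys = ("name", "city", "employer", "school")
    return sum(1 for k in keys if candidate.get(k) == subject.get(k))

def verified_links(candidates: list, subject: dict, min_matches: int = 3) -> list:
    """Keep only links corroborated by at least `min_matches` attributes."""
    return [c["url"] for c in candidates
            if attribute_match_score(c, subject) >= min_matches]

# Two "John Smith" records: same name, different people.
subject = {"name": "John Smith", "city": "Raleigh",
           "employer": "Acme LLP", "school": "NC State"}
candidates = [
    {"url": "https://example.com/a", "name": "John Smith", "city": "Raleigh",
     "employer": "Acme LLP", "school": "NC State"},
    {"url": "https://example.com/b", "name": "John Smith", "city": "Boston",
     "employer": "Other Co", "school": "MIT"},
]
print(verified_links(candidates, subject))  # only the corroborated link survives
```

A name-only match would have blended both records into one profile, which is exactly the conflation failure described above; requiring corroboration filters the second record out.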

4.  The Legal and Ethical Risks

Beyond technical errors, there are serious compliance concerns:


• Algorithmic bias can skew interpretations.

• Data privacy laws like the GDPR and CCPA create processing risks.

• The Fair Credit Reporting Act requires information that is verifiable and disputable, standards AI cannot currently meet.


If it can’t cite, it shouldn’t decide.


The Bottom Line

AI is a powerful assistant. But it is not a reliable adjudicator of someone’s character or history. When reputations, legal strategy, or compliance decisions are at stake, probability is not proof. Human-led research—grounded in context, accountability, and verification—remains the gold standard. It’s why we rely on human intelligence instead of artificial intelligence to do jury research.


Because in the courtroom, “close enough” isn’t good enough.

