Gresham’s Law is an economic principle from 16th-century England that states “bad money drives out good.” At that time, the monetary system was routinely gamed by people who passed coins worth less than their face value. There were two common techniques: "clipping," where small slivers of metal were shaved off a coin's edge, and "sweating," where bags of coins were shaken together until silver dust rubbed off to be collected and sold.
As a result, people hoarded the “good” money (full-weight coins) while only the “bad” money remained in circulation. By the 1540s, the prevalence of clipping, sweating, and counterfeiting had rendered English currency nearly worthless. When we apply this logic to the digital currency of our era—data and AI—striking parallels emerge. Here are some methods companies are using to prevent “Bad AI” from taking over their ecosystems:
Method 1: Establish "AI Assay Offices" (Centralized Governance)
Just as historical mints used assay offices to test the purity of gold, companies are creating AI Centers of Excellence (CoE) or AI Studios.
- The "Bad AI": Shadow AI (employees using unvetted consumer models with company data).
- The Fix: A centralized hub that provides "gold-standard" reusable components, pre-approved models, and security sandboxes. Instead of 50 different "bad" versions of a customer service bot, the company deploys one trusted and governed version with verified data lineage.
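One way to picture an "AI assay office" is a central registry that only admits pre-approved models into production. The sketch below is illustrative, assuming a hypothetical model-approval workflow; the model names, versions, and registry fields are all made up for the example.

```python
# Hypothetical sketch of a Center of Excellence model registry: only
# models that passed central governance review are allowed to run.
from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovedModel:
    name: str
    version: str
    data_lineage_verified: bool

# The CoE maintains the single source of truth for vetted models.
REGISTRY = {
    ("support-bot", "2.1"): ApprovedModel("support-bot", "2.1", True),
}

def is_sanctioned(name: str, version: str) -> bool:
    """Return True only if the model was approved with verified data lineage."""
    model = REGISTRY.get((name, version))
    return model is not None and model.data_lineage_verified

print(is_sanctioned("support-bot", "2.1"))   # the one governed version
print(is_sanctioned("shadow-gpt", "0.1"))    # unvetted "shadow AI" is rejected
```

In practice the registry would live in a governance platform rather than a dictionary, but the gatekeeping logic is the same: one trusted version in circulation, everything else turned away at the door.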
Method 2: Move from "Prompting" to "Agentic Orchestration"
Gresham’s Law suggests that "cheap" wins if the value is perceived as equal. To stop low-quality AI from being the default, companies are raising the bar on what "Good AI" looks like by focusing on Agentic Workflows.
- The "Bad AI": Single-shot prompts that produce generic text (the "bad money" of communication).
- The Fix: Multi-agent systems that verify their own work. A "Researcher Agent" might find data, but an "Audit Agent" must cross-reference it against the internal knowledge base before the "Writer Agent" ever sees it. This internal verification ensures only high-fidelity outputs "circulate" within the company.
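The researcher-audit-writer flow above can be sketched as a simple pipeline. This is a toy illustration, not a real agent framework; the knowledge base, the agents' outputs, and the deliberately wrong claim are all assumptions made for the example.

```python
# Toy multi-agent verification pipeline: nothing reaches the writer
# until the audit step cross-references it against internal knowledge.
KNOWLEDGE_BASE = {"q3_revenue": "$4.2M", "headcount": "120"}

def researcher_agent() -> dict:
    # Gathers raw claims; one is wrong on purpose to show the audit step.
    return {"q3_revenue": "$4.2M", "headcount": "150"}

def audit_agent(claims: dict) -> dict:
    # Keep only claims that match the internal knowledge base.
    return {k: v for k, v in claims.items() if KNOWLEDGE_BASE.get(k) == v}

def writer_agent(verified: dict) -> str:
    # Drafts output from verified facts only.
    return "; ".join(f"{k}={v}" for k, v in sorted(verified.items()))

report = writer_agent(audit_agent(researcher_agent()))
print(report)  # only the audited claim survives into the report
```

The point of the design is the choke point: the writer never sees unverified claims, so low-fidelity output has no path into circulation.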
Method 3: "Data Guardrails" and Input-Side Governance
To prevent model collapse and hallucinations, enterprises are implementing "Data Guardrails": hard restrictions on what data can enter an AI’s training or inference cycle.
- The "Bad AI": Models trained on public web scrapes or unverified synthetic data.
- The Fix: Prohibiting the use of AI-generated content in training sets for internal mission-critical models. Companies are increasingly paying for "Human-in-the-Loop" (HITL) labeling to ensure the intrinsic value of their data remains high.
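An input-side guardrail can be as simple as a provenance check at the door of the training set. The sketch below assumes each record carries a provenance tag; the field names, tag values, and sample records are hypothetical.

```python
# Sketch of a data guardrail: block AI-generated or unlabeled records
# from entering the training set for mission-critical models.
records = [
    {"text": "Q3 sales summary", "provenance": "human_labeled"},
    {"text": "Synthetic FAQ answer", "provenance": "ai_generated"},
    {"text": "Scraped forum post", "provenance": None},
]

# Only human-verified sources keep their "intrinsic value."
ALLOWED_PROVENANCE = {"human_labeled", "hitl_reviewed"}

def training_ready(record: dict) -> bool:
    """Admit a record only if its provenance is explicitly trusted."""
    return record.get("provenance") in ALLOWED_PROVENANCE

clean_set = [r for r in records if training_ready(r)]
print(len(clean_set))  # only the human-labeled record passes
```

Note the default-deny posture: a record with no provenance tag is treated the same as a known-synthetic one, which is what keeps unverified web scrapes out.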
Method 4: Zero-Trust for AI Outputs
Gresham’s Law thrives when people can't tell the difference between "good" and "bad" currency.
- The "Bad AI": Hallucinations that look like facts.
- The Fix: Implementing AI Observability tools (like Watsonx.governance) that act as real-time "counterfeit detectors." These systems flag drift or bias the moment a model begins to underperform, allowing the company to "pull it from circulation" before it damages the brand.
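At its core, the "pull it from circulation" decision is a comparison of live performance against a baseline. The toy check below is not how any specific observability product works; the metric values and the tolerance threshold are illustrative assumptions.

```python
# Toy "counterfeit detector": flag a model for removal when its recent
# accuracy drifts below its baseline by more than a tolerance.
def should_pull_from_circulation(baseline_acc: float,
                                 recent_accs: list,
                                 tolerance: float = 0.05) -> bool:
    """Return True once average recent accuracy falls past the tolerance."""
    recent_avg = sum(recent_accs) / len(recent_accs)
    return recent_avg < baseline_acc - tolerance

print(should_pull_from_circulation(0.92, [0.91, 0.90, 0.92]))  # healthy model
print(should_pull_from_circulation(0.92, [0.84, 0.82, 0.85]))  # drifting model
```

Production monitors track many more signals (bias metrics, input distribution shift, latency), but they all reduce to the same move: compare against a known-good baseline and retire the model when it stops passing the assay.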
The ultimate defense against an "AI Gresham’s Law" is AI Literacy. When employees can spot the telltale signs of a hallucinated or low-quality AI response, the "bad money" loses its power to circulate.
Are you at a point in your AI journey where you need a policy for AI quality standards? LRS can help. If you are ready to explore AI for your business, LRS can help you select a use case and implement it. Please Contact Us to request a meeting. Don’t have an information architecture for AI you trust yet? LRS can also help you collect, organize, and analyze your data so that it is business-ready.