Large language models (LLMs) turn writing into a live
exchange between humans and software. We characterize this
new medium as a discursive network that treats people and
LLMs as equal nodes and tracks how their statements
circulate. We define the generation of erroneous information
as invalidation, meaning any factual, logical, or structural
breach, and show it follows four hazards: drift from truth,
self-repair, fresh fabrication, and external detection. We
develop a general mathematical model of discursive networks
showing that a network governed only by drift and
self-repair stabilizes at a nonzero error rate. Giving each
false claim even a small chance of peer review shifts the
system to a truth-dominant state. We operationalize peer
review with the open-source Flaws-of-Others (FOO) algorithm:
a configurable loop in which any set of agents critique each
other while a harmonizer merges their verdicts. We identify
ethical transgressions that occur when humans fail to
engage in the discursive network. The takeaway is both
practical and cultural: reliability in this medium comes not
from perfecting single models but from connecting imperfect
ones into networks that enforce mutual accountability.
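
As a minimal sketch of the dynamics summarized above (the symbols $p$, $\lambda$, $\mu$, and $\delta$ are illustrative assumptions, not the paper's notation), let $p$ be the fraction of circulating claims that are invalid, with drift toward error at rate $\lambda$, self-repair at rate $\mu$, and external detection at rate $\delta$:

\[
  \frac{dp}{dt} = \lambda\,(1 - p) - (\mu + \delta)\,p,
  \qquad
  p^{*} = \frac{\lambda}{\lambda + \mu + \delta}.
\]

With $\delta = 0$ the fixed point $p^{*} = \lambda/(\lambda + \mu)$ is strictly positive, so drift and self-repair alone leave a residual error rate; any $\delta > 0$ lowers $p^{*}$, which is the sense in which even a small chance of peer review moves the network toward a truth-dominant state.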
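
A minimal sketch of the kind of critique-and-harmonize loop that FOO describes is given below; the agent interface, the prompts, and the majority-vote harmonizer are illustrative assumptions, not the open-source implementation.

from typing import Callable, List

# An "agent" is anything that maps a prompt string to a text reply
# (e.g., a wrapper around an LLM API call, or a human reviewer).
Agent = Callable[[str], str]

def foo_round(agents: List[Agent], claim: str) -> List[str]:
    """Every agent in the network critiques the same claim."""
    prompt = (
        "Statement under review:\n" + claim + "\n\n"
        "List any factual, logical, or structural flaws. "
        "Reply 'NO FLAWS' if you find none."
    )
    return [agent(prompt) for agent in agents]

def harmonize(critiques: List[str]) -> bool:
    """Toy harmonizer: flag the claim if a majority of critics found flaws."""
    flagged = sum("NO FLAWS" not in c.upper() for c in critiques)
    return flagged > len(critiques) / 2

def foo_loop(agents: List[Agent], claim: str, max_rounds: int = 3) -> str:
    """Critique, harmonize, and revise until the claim survives review."""
    for _ in range(max_rounds):
        critiques = foo_round(agents, claim)
        if not harmonize(critiques):
            return claim          # claim survived peer review
        claim = agents[0](        # ask one agent to repair the claim
            "Revise this statement to fix the flaws below.\n"
            "Statement:\n" + claim + "\n\nFlaws:\n" + "\n".join(critiques)
        )
    return claim

Any set of callables that turns a prompt into a reply can serve as the agents, which mirrors the claim that reliability comes from connecting imperfect models into a network of mutual review rather than from perfecting any single one.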