Models vs. Reality
This is a summary of a conversation with Gemini, incorporating my fundamental definitions, my practitioner’s critique of the practice, and my forward-looking approach to using LLMs.
💎 First Principles
These are the foundational truths we established about the nature and purpose of any model.
Model Defined by Utility: A model is a simplification of reality intended to promote understanding. Its goodness is measured purely by its ability to promote the level of understanding its purpose requires.
The Inevitability of Error: Following Box, all models are flawed (“wrong”) due to the necessary act of simplification and the inherent limitation of human knowledge. The focus must be on usefulness, not objective truth.
The Inescapable Bias: All models are biased because their creation requires subjective choices regarding boundaries, structure, and metrics. This bias is mitigated, but never eliminated, by group processes.
🧠 Core Wisdom
This wisdom stems from the practical implications of those principles, particularly concerning the experienced modeler’s perspective.
Boundary is Emergent, Not Fixed: The system boundary should not be set prematurely but should emerge organically, defined by the limit of causal relevance discovered by iteratively asking: “What influences this?” and “What does this influence?” (A sketch of this iterative expansion follows after this list.)
The Problem of Unknown Unknowns: The most profound flaw in any model stems from what the modeler didn’t know they didn’t know. This limits the model’s predictive power.
Shift from Prediction to Robustness: Because models are inherently flawed by simplification and ignorance, their greatest utility lies in acting as a flight simulator for policy interventions, testing their robustness against hypothetical scenarios and structural gaps rather than generating specific, trusted forecasts. (A robustness sketch also follows below.)
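To make the emergent boundary concrete, here is a minimal Python sketch; all variables, links, and relevance scores are invented for illustration. Starting from a focal variable, it repeatedly asks “What influences this?” and “What does this influence?”, admitting a link only while its judged causal relevance stays above a threshold; the boundary is whatever that expansion reaches, not a line drawn in advance.

```python
from collections import deque

# Hypothetical causal links, each scored by judged causal relevance (0..1).
# influences[x] lists neighbours in either causal direction.
influences = {
    "sales":          [("price", 0.9), ("reputation", 0.7)],
    "price":          [("costs", 0.8), ("sales", 0.9)],
    "reputation":     [("quality", 0.6), ("sales", 0.7)],
    "quality":        [("staff_training", 0.4)],
    "costs":          [("exchange_rate", 0.2)],
    "staff_training": [],
    "exchange_rate":  [],
}

def emergent_boundary(focus, threshold=0.5):
    """Grow the model boundary outward from `focus`, following only
    links whose causal relevance stays above `threshold`."""
    inside = {focus}
    frontier = deque([focus])
    while frontier:
        var = frontier.popleft()
        for neighbor, relevance in influences.get(var, []):
            if relevance >= threshold and neighbor not in inside:
                inside.add(neighbor)
                frontier.append(neighbor)
    return inside

print(emergent_boundary("sales"))
# e.g. {'sales', 'price', 'reputation', 'costs', 'quality'} (set order varies).
# The boundary stops where causal relevance fades, not at a preset line.
```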
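In the same spirit, a sketch of the flight-simulator stance: rather than trusting one forecast, run a candidate policy across many hypothetical scenarios of the parameters we cannot pin down and report the spread of outcomes. The toy stock-flow model and every number below are assumptions for illustration, not a calibrated model.

```python
import random

def simulate(policy_strength, growth, fragility, steps=50):
    """Toy stock-flow model: a single stock grows, erodes with
    fragility, and is stabilised by the policy intervention."""
    stock = 100.0
    for _ in range(steps):
        inflow = growth * stock
        outflow = fragility * stock
        correction = policy_strength * (100.0 - stock)  # policy pulls toward target
        stock += inflow - outflow + correction
    return stock

random.seed(1)
# Sweep the parameters we cannot pin down (stand-ins for unknown unknowns)
# and ask how the policy's outcome is distributed, not what "the" forecast is.
outcomes = []
for _ in range(1000):
    growth = random.uniform(0.00, 0.04)
    fragility = random.uniform(0.00, 0.06)
    outcomes.append(simulate(policy_strength=0.3, growth=growth, fragility=fragility))

outcomes.sort()
print(f"5th-95th percentile of final stock: {outcomes[50]:.1f} .. {outcomes[949]:.1f}")
```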
💡 Leverage Points
These are the strategic actions or tools that maximize a model’s utility despite its inherent flaws.
Embrace Discrepancy: When a model fails to reproduce the real-world behavior (the reference mode), the gap between the model’s output and reality is a powerful cue to search for the missing causal mechanism (the unknown unknown). The model is a map of what’s not yet understood. (See the sketch after this list.)
Aggressive Abstraction: Manage the complexity of the emergent boundary through aggregation and abstraction (chunking) of variables. This is a deliberate, necessary bias that maintains the model’s simplicity, ensuring it remains useful for communication and insight.
LLMs as Epistemological Partners: Large Language Models serve as powerful tools to overcome the limits of individual and group experience. They can be leveraged to:
Expand the Causal Field: Suggesting cross-disciplinary links and concepts to widen the emergent boundary.
Challenge Structure: Acting as a sophisticated devil’s advocate to identify missing loops or flawed assumptions. (A prompt sketch follows below.)
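A minimal sketch of embracing discrepancy, with an invented reference mode and an invented model: simulate the current model, subtract it from the reference behavior, and let the systematic residual point at the missing mechanism.

```python
# Hypothetical reference mode: observed behaviour shows overshoot-and-decline.
reference = [10, 14, 19, 25, 31, 34, 33, 29, 25, 22]

def model_output(growth=0.35, steps=10):
    """Current model: pure growth -- no limiting loop yet."""
    x, series = 10.0, []
    for _ in range(steps):
        series.append(x)
        x *= 1 + growth
    return series

simulated = model_output()
residuals = [ref - sim for ref, sim in zip(reference, simulated)]
print([round(r, 1) for r in residuals])
# Residuals start near zero, then go sharply negative: the model keeps
# growing where reality turns down. That systematic gap is the cue to
# search for a missing balancing loop, not a reason to discard the model.
```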
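Finally, a hedged sketch of the epistemological-partner idea. It only assembles the prompt; how it is sent to a model (API, chat window) is deliberately left open, and the function name and wording are illustrative assumptions, not a prescribed method.

```python
def causal_field_prompt(focal_variable, variables_so_far):
    """Build a prompt asking an LLM to widen the causal field beyond
    the modeller's experience and to challenge the model's structure."""
    listed = ", ".join(sorted(variables_so_far))
    return (
        f"I am building a systems model around '{focal_variable}'. "
        f"It currently includes: {listed}. "
        "1) Suggest cross-disciplinary influences I may have missed. "
        "2) Act as a devil's advocate: which feedback loops or "
        "assumptions in this structure are most likely flawed?"
    )

print(causal_field_prompt("sales", {"price", "reputation", "costs"}))
```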
Source: Gemini Discussion


An important point that many miss is that the boundary is an emergent property. In their need for control, people set boundaries that inevitably become illusions.
Thanks for a timely reminder. Some heretical questions:
1. The first principle, ‘model defined by utility’: should it perhaps be slightly expanded to acknowledge that ‘utility’ includes not just ‘understanding’ but also desired guidance in making plans for changes to the reality the model is representing? I suspect this might lead to a few additions to the following aspects and principles.
2. The ‘Causal Field’ is a useful concept. Is the term ‘cause’ too backward-looking, e.g. tempting people to focus on ‘root causes’ (which I feel is a problem in itself) and to neglect the ‘consequences’ of system interventions, or of the lack of them? Yes, of course, consequences are ‘caused’, but the focus is slightly different, and important.
3. In a planning discourse, isn’t the purpose of generating shared understanding that supports decisions concerned with ‘opening’ the ‘black boxes’ of reasoning and the attributed meaning of participants’ contributions and arguments? Which brings us to the LLMs: for all their amazing ability, aren’t the LLMs the ultimate black boxes, with un-openable lids due to their very size? Are we too willing to trust their machinery, let alone the data they have gobbled up?