The Journal
Essays
Long-form writing on AI, philosophy, psychology, and systems thinking.
-
Phenomenology of the Prompt: Talking to a Machine
When you write a prompt, you translate intention into language shaped by your model of machine processing. Phenomenology reveals this is not conversation but a new form of cognitive labor.
-
The Scope Creep Diagnosis: Why Projects Expand
An analysis of 26 projects with significant scope creep found that 73% shared the same root cause: unclear decision rights over what constituted an acceptable change.
-
Lakatos and the Research Program of Machine Learning
Lakatos distinguished progressive research programs from degenerative ones. ML has a progressive core of genuine prediction surrounded by a degenerative protective belt of scaling assumptions.
-
Existential responsibility in the age of automation: if the machine can do it, should you still?
-
The Dashboard Paradox: More Dashboards, Less Understanding
The median company maintains 340 dashboards but only 38 are viewed weekly. Dashboard proliferation creates the illusion of data-driven culture while fragmenting attention.
-
The Junior Data Engineer Pipeline Is Broken
AI automation has reduced entry-level data engineering postings by 34% since 2024. The traditional training pipeline for developing craft judgment is collapsing.
-
The Ethics of AI Consulting: Selling Responsibility
A review of 23 AI ethics consulting engagements found that 61% delivered documentation that went unused, at an average cost of $185,000 per engagement. Selling responsibility requires operational change.
-
Human-in-the-Loop as Architecture Pattern
Treated as a deliberate architecture pattern, human-in-the-loop review reduced critical errors by 89% while maintaining 74% of throughput. Four production patterns for integrating human judgment.
-
The AI Ethics Officer Role Is a Systems Design Problem
AI ethics officers fail when positioned as compliance gatekeepers. The role succeeds when restructured as a cross-functional architecture position embedded in engineering.