As tensions between Iran, Israel, and the United States escalate, a familiar argument resurfaces in op-eds and think-tank threads: if only the decision-makers had real skin in the game — if generals’ sons were drafted, if politicians stood in the line of fire — they’d be far less eager for war.
It’s an appealing theory. It’s also built on an assumption so large, and so rarely examined, that accepting it uncritically will lead you to make systematic forecasting errors about some of the most dangerous situations in the world.
The Theory and Its Hidden Assumption
Skin in the game (SITG) is a simple mechanism: agents who personally bear the consequences of their decisions make better decisions. Remove the asymmetry between risk and reward, and recklessness follows. Restore it, and rational caution returns.
This works. In the right domain.
When a bank trader risks his own capital rather than the bank’s, he sizes positions more carefully. When a contractor has to live in the building he designed, he doesn’t cut corners on the foundations. The feedback loop is tight, the consequence is personal and immediate, and the agent’s objective — not losing money, not dying in a collapsing structure — aligns with ours.
But notice what the mechanism assumes: that the agent is primarily motivated by self-preservation or material wellbeing, and that personal loss is weighted heavily enough to override other motivations.
This is not a universal property of human beings. It’s a cultural and ideological variable.
The Valuation Function Problem
Every model of incentives contains, usually implicitly, a model of what people want. SITG assumes people want, above all, to avoid personal loss. Change that assumption and the entire mechanism inverts.
Consider a soldier who genuinely believes that dying in battle guarantees paradise. To him, the “cost” you’re trying to impose isn’t a cost — it’s the prize. SITG has nothing to say to him. The more skin he has in the game, the more motivated he becomes.
Or consider a leader who believes his civilization faces extinction. His calculus isn’t war vs. peace — it’s fight now vs. die later. Personal exposure to risk doesn’t dampen aggression; it confirms the urgency of it.
Or consider a political culture where the failure to fight — showing weakness, losing face, betraying ancestors — carries a social cost greater than death itself. In a shame-honor framework, the SITG lever doesn’t just fail to work. It points in the wrong direction.
Or consider the duel. For several centuries across Europe and America, educated men with full knowledge of the consequences — death, serious injury, legal jeopardy — regularly accepted challenges rather than endure the social cost of refusing. SITG was maximally present: the consequence was immediate, personal, and fatal. The mechanism predicts they would have preferred dishonor to death. They didn't. The social cost of backing down outweighed the physical cost of dying, reliably enough that dueling persisted as an institution for generations. It wasn't abolished by raising the stakes. It was abolished when the underlying honor culture that made refusal worse than death finally eroded.
These aren’t exotic edge cases. They describe large portions of recorded human history.
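The inversion can be compressed into a toy expected-utility sketch. Every payoff and probability below is an invented illustration, not an estimate of any real actor; the point is only the sign of the relationship between exposure and the appeal of fighting.

```python
# Toy expected-utility model of the SITG inversion. All payoffs are
# illustrative assumptions, not estimates of any real actor.

def fight_utility(p_death, value_of_death, p_victory=0.5, value_of_victory=1.0):
    """Expected utility of fighting, given how the agent values death itself."""
    return p_death * value_of_death + (1 - p_death) * p_victory * value_of_victory

def chooses_to_fight(p_death, value_of_death, refrain_utility=0.0):
    """The agent fights when fighting beats the utility of refraining."""
    return fight_utility(p_death, value_of_death) > refrain_utility

# Loss-averse agent (death = -10): more exposure makes fighting less attractive.
assert fight_utility(0.5, -10) < fight_utility(0.1, -10)

# Martyrdom agent (death = +5): more exposure makes fighting MORE attractive.
assert fight_utility(0.5, +5) > fight_utility(0.1, +5)

# Shame-honor agent: death is still a cost (-10), but refusing to fight
# carries a social cost (-3) worse than the expected cost of fighting.
assert chooses_to_fight(0.1, -10, refrain_utility=-3)
```

The key line is the first: the derivative of fighting's appeal with respect to personal exposure has the same sign as the agent's valuation of death itself. Adding skin to the game only deters agents for whom that valuation is negative.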
History Didn’t Get the Memo
If SITG reliably suppressed war, you’d expect the eras of maximum skin in the game — when kings led from the front, when warlords sent their sons into battle alongside conscripts, when the decision-maker and the foot soldier shared the same mud — to be the peaceful ones.
They were not. They were the most warlike periods in human history.
Alexander the Great fought in the front rank and was seriously wounded in multiple engagements. He conquered most of the known world. The Mongol khans rode with their armies and suffered casualties in their own families. They prosecuted campaigns that killed roughly ten percent of the world’s population — among the most destructive in pre-modern history.
The counterargument sometimes offered is that these leaders had asymmetric skin in the game — they fought on horseback with bodyguards, insulated from the worst of it. But this escape hatch actually weakens the SITG position. It concedes that even substantial personal exposure didn’t suppress aggression, and raises the bar for “real” SITG to a standard that has never existed and cannot be legislated into existence.
The more honest explanation is simpler: for these men, in these cultures, war was not a cost to be minimized. It was the primary mechanism for acquiring status, legitimacy, wealth, and meaning. Personal risk didn’t deter them. It was the point.
Why This Failure Is Predictable
The domains where SITG works reliably share a structure: consequences are immediate, personal, physical, and unambiguous. A bridge engineer on his own bridge. A trader with his own capital. A pilot in his own aircraft. The feedback loop is tight and not easily reinterpreted.
SITG fails — predictably — when consequences are delayed, statistical, collective, or abstractable. And crucially, when a cultural or ideological frame exists that revalues the consequence. War almost always falls in this second category.
Death in battle can be reframed as martyrdom, sacrifice, honor, a ticket to paradise, a debt to ancestors, proof of manhood, or the cost of survival against existential threat. Each frame converts the cost into something the agent positively values, until it no longer functions as a deterrent.
A related claim deserves attention: that experiencing war makes you anti-war. Sometimes true. Often not — and the exceptions illuminate exactly why the mechanism fails.
World War I produced some of the most powerful anti-war literature in history — Owen, Sassoon, Remarque. It also produced the Freikorps: hardened veterans who embraced violence as identity and became the street-fighting nucleus of early European fascism.
Adolf Hitler served as a frontline infantryman in WWI. He was wounded. He was gassed. He received the Iron Cross for bravery. By every metric, he had maximum skin in the game. He concluded from this experience not that war was hell to be avoided, but that Germany had been stabbed in the back before achieving its rightful victory — and that what was needed was a harder, more committed, more ideologically pure war next time.
This is not an anomaly that breaks the theory. It is the clearest available demonstration of why it fails: prior ideology determines what lessons you extract from experience. The war confirmed his existing worldview. The loss became a grievance narrative. SITG had no mechanism to correct this, because he wasn’t loss-averse in the relevant sense — he was interpreting loss through a cultural frame that converted defeat into motivation.
The same mechanism appeared in 20th-century Japan. A military culture that explicitly revalued death — Bushido, the kamikaze program, the expectation of fighting to the last civilian on the home islands — produced decision-making that Western analysts found literally incomprehensible. Not because the Japanese were irrational. Because they were operating on a different objective function.
The selection problem compounds all of this. SITG environments tend to select for the people on whom the mechanism works least. Who volunteers for the most exposed roles in warfare, ideology, and high-stakes politics? People already culturally primed to discount personal loss. The mechanism selects for zealots, glory-seekers, and true believers — and then wonders why it can’t deter them.
The Forecasting Implication
This matters practically whenever you’re trying to predict the behavior of actors operating under a different valuation function than your own.
If you assume SITG as a universal corrective — that personal exposure to consequences will reliably produce caution — you will systematically underestimate the aggression of actors for whom:
- Death in the cause is a reward, not a cost
- Existential threat perception makes ordinary risk calculus irrelevant
- Cultural frameworks convert restraint into shame or betrayal
- Ideological identity overrides individual self-preservation
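The failure mode in the list above can be sketched as a toy comparison: the analyst scores the actor's options with a default loss-averse valuation, while the actor is running a different one. All numbers here are hypothetical assumptions chosen for illustration.

```python
# Sketch of the forecasting failure: scoring an actor's options with the
# analyst's valuation function instead of the actor's. Illustrative only.

OPTIONS = {
    "escalate": {"p_death": 0.4, "p_victory": 0.3},
    "restrain": {"p_death": 0.0, "p_victory": 0.0},
}

def expected_utility(option, values):
    o = OPTIONS[option]
    eu = (o["p_death"] * values["death"]
          + (1 - o["p_death"]) * o["p_victory"] * values["victory"])
    if option == "restrain":
        # Shame-honor frames attach a cost to restraint itself.
        eu += values.get("restraint_shame", 0.0)
    return eu

def predicted_choice(values):
    return max(OPTIONS, key=lambda opt: expected_utility(opt, values))

analyst_model = {"death": -10.0, "victory": 1.0}   # default loss-averse frame
actor_model = {"death": +3.0, "victory": 1.0,      # martyrdom frame, plus
               "restraint_shame": -2.0}            # a social cost of backing down

print(predicted_choice(analyst_model))  # "restrain": the forecast of caution
print(predicted_choice(actor_model))    # "escalate": the actual behavior
```

With the analyst's frame, escalation scores -3.82 against 0 for restraint, so the model forecasts caution; with the actor's frame, escalation scores +1.38 against -2.0, so the same exposure produces the opposite choice. The divergence is the forecasting error, and it is systematic, not noise.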
These are not fringe conditions. Right now, several of the key actors in the Middle East fit one or more of these descriptions. Applying SITG logic to predict their behavior — expecting that exposure to consequences will moderate their choices — is not analysis. It’s projection.
What Actually Moves Behavior
No mechanism constrains conflict reliably in every context. But the ones that work share something SITG lacks: they operate on what actors actually value, rather than assuming a universal preference for self-preservation.
Economic interdependence. When elites on both sides have material interests that war destroys, the calculus changes — not because of personal risk, but because the prize disappears. The mechanism assumes elites have more to gain from peace than from war, which holds until ideological or existential stakes reorder the preference ranking. It failed in 1914 despite deep European economic integration.
Structural veto players. Institutional constraints — legislative bodies, independent militaries, allied states with credible red lines — can prevent individual decision-makers from converting aggressive preferences into action. The mechanism isn’t SITG; it’s friction. It works until the veto players themselves are captured by the same ideology driving the aggression, or until a leader acquires enough institutional control to neutralize them.
Deterrence at civilizational scale. Nuclear deterrence works not because leaders fear death — many demonstrably don’t — but because the consequence is civilizational annihilation, a scale at which even martyrdom narratives break down. It requires credible capability and credible will, and it deters only actors who believe the threat is real.
Modeling the actual valuation function. This is the hardest and most neglected. Before you can predict or influence behavior, you need an accurate model of what the actor actually wants — not what a rational self-interested agent would want. This requires deep cultural literacy, historical knowledge, and the intellectual honesty to acknowledge that your default assumptions are culturally specific, not universal.
That last point is unglamorous. It doesn’t fit on a bumper sticker the way “skin in the game” does. But it’s the work.
A Closing Note
The current Iran-Israel-USA situation will not resolve according to the logic of rational loss-aversion. Iran’s leadership operates under a theology of resistance that revalues sacrifice; Hezbollah and Hamas under explicit martyr frameworks; Israel’s decision-makers under existential threat perception rooted in historical memory that overrides ordinary risk calculus; domestic politics on all sides rewards projections of toughness and punishes restraint.
This doesn’t mean the situation is unreadable. It means you have to read it correctly: by modeling what each actor actually values, what they believe they’re buying with their risk, and what cultural or ideological frames are converting the costs you see into rewards they feel.
The map that says “give them skin in the game and they’ll calm down” is not wrong because incentives don’t matter. It’s wrong because it assumes everyone is using the same map.
Most of the dangerous actors in history were not. Most of the dangerous actors right now are not.
That’s the thing about skin in the game. It only works on people who share your theory of what skin is worth.