The Trades
One card per trade. The bar at the top of each card shows how often each pick produced the trade's best player across the simulations. The total above each side is the share of simulations in which that side's best beat the other side's.
How this works
The conservative question
Cade Massey and Richard Thaler, in their 2013 paper The Loser's Curse, tested NFL draft skill with about as conservative an instrument as one can use. They paired every pick with the next same-position pick taken in the same draft (the first and second quarterbacks, the first and second tackles, and so on down through every position) and asked whether the earlier pick had the better career. Across twelve years of data, the earlier pick won 52% of the time, statistically indistinguishable from a coin flip.
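A minimal sketch of that pairing test, assuming one row per drafted player and hypothetical column names (year, position, pick, career_value); ties here count against the earlier pick, which is one of several reasonable conventions:

```python
import pandas as pd

def earlier_pick_hit_rate(df: pd.DataFrame) -> float:
    # Pair each pick with the next same-position pick in the same draft,
    # then measure how often the earlier pick had the better career.
    df = df.sort_values(["year", "position", "pick"])
    df["next_value"] = df.groupby(["year", "position"])["career_value"].shift(-1)
    pairs = df.dropna(subset=["next_value"])
    return (pairs["career_value"] > pairs["next_value"]).mean()
```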
That finding shapes this page. When a team trades multiple picks for one, the question we ask is not whether the secondary picks paid for themselves on a trade-value chart. That is a real question, but it is a separate one. The headline here is which side ended up with the best player. (For the longer treatment of how teams have actually fared at ordering the draft over the last fifteen years, see Are NFL Teams Getting Better at Drafting?.)
Why the trade-up bet is hard to score
The familiar way to evaluate a draft trade is the pick-value chart that Jimmy Johnson popularized in the early 1990s. Each pick gets a point value, the values on each side are summed, and the trade is judged fair when the sums roughly match. Better versions of the chart exist now, and the site's pick value curve writeup covers the modern alternatives. The form is the same in all of them. A value per pick, added up across each side.
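As a toy illustration of that arithmetic, here is a sketch using a few approximate points from the classic Johnson chart; the trade itself (a move up to 3, paid for with pick 12 plus two future firsts projected around pick 29) is hypothetical:

```python
# Approximate Johnson-chart values for the picks involved (illustrative only).
CHART = {3: 2200, 12: 1200, 29: 640}

def chart_total(picks):
    return sum(CHART[p] for p in picks)

up = chart_total([3])             # 2200 -- the side that moved up
down = chart_total([12, 29, 29])  # 2480 -- pick 12 plus two firsts projected ~29
print(up, down)                   # sums roughly match, so the chart calls it fair
```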
That arithmetic works for matching capital at the margin, but pVAR does not lend itself to the same operation. Career value sums in raw points, but rosters do not benefit from one elite career and three competent ones in proportion to those raw points. Two pVAR-40 careers sum to the same number as one pVAR-80 career, yet they do not occupy the same place on a roster, do not earn the same paycheck, and do not produce the same number of playoff appearances. The harder a metric tries to add multiple careers across a trade, the further it drifts from how a front office actually experiences the deal.
The conservative version of the question is who ended up with the better single player. That is the question fans argue about in retrospect anyway. When the 49ers gave up three first-round picks for the third overall slot in 2021, the pick-value chart said they paid roughly fair price. When Trey Lance started four games and was traded for a fourth-rounder, no one was satisfied with the chart's verdict. The page is built around the player question because it is the one history is going to settle regardless of how the math is framed at the time of the deal.
Simulating one pick
The 2006–2020 mature window has roughly 3,800 player-picks with stable pVAR. Every player in it has had five or more NFL seasons, which is long enough for the career distribution to settle. To build an outcome distribution for a given draft slot, we weight every historical player by how close his pick was to that slot, using a Gaussian kernel that runs tight near the top of the draft and widens through the late rounds (sigma climbs from 5 to 20). A simulation draws one pVAR from the weighted pool.
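A sketch of that draw, assuming hist_picks and hist_pvar are parallel arrays over the mature window; the linear ramp in sigma is an assumption, since only the endpoints (5 and 20) are given above:

```python
import numpy as np

def slot_sigma(pick: int) -> float:
    # Kernel width: tight (~5) at the top of the draft, wide (~20) by the
    # end; a linear ramp over 224 picks is assumed here.
    return 5.0 + 15.0 * min(pick, 224) / 224

def sample_pvar(pick, hist_picks, hist_pvar, rng, n=1):
    # Weight every historical player-pick by a Gaussian kernel centered
    # on the slot, then bootstrap pVAR draws from the weighted pool.
    w = np.exp(-0.5 * ((hist_picks - pick) / slot_sigma(pick)) ** 2)
    return rng.choice(hist_pvar, size=n, p=w / w.sum())
```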
We resample from history rather than fit a parametric distribution because the empirical data doesn't lend itself to a clean closed form. Quarterback outcomes are bimodal: teams either hit on a quarterback or they don't, and most of the mass sits at the two extremes. The late rounds are zero-inflated: half of the picks return nothing, but every so often an Antonio Brown or Jason Kelce comes out of the sixth round. A fitted distribution would smooth those edges off. Bootstrapping from the actual draws preserves the empirical distribution's shape.
Position-weighted mode tilts the draws toward the drafted player's position. With probability 0.35 a sample comes from the same-position window only. With probability 0.65 it comes from the position-agnostic window. The reason for a soft mixture rather than a hard filter is sample size. A pure-cornerback distribution at pick six has roughly twenty samples across fifteen drafts, too thin to read as signal at the tails. The mixture recovers most of the positional information without giving up the broader pool's stability.
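Continuing the sketch above, the mixture is a single Bernoulli draw in front of the kernel sampler; the array layout of hist is hypothetical:

```python
def sample_pvar_positional(pick, pos, hist, rng):
    # With probability 0.35 restrict the pool to the drafted player's
    # position group; otherwise draw from the position-agnostic pool.
    if rng.random() < 0.35:
        mask = hist["pos"] == pos
    else:
        mask = np.ones(len(hist["pvar"]), dtype=bool)
    return sample_pvar(pick, hist["picks"][mask], hist["pvar"][mask], rng)[0]
```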
Scoring the trade
Each trade runs through 500,000 simulations. Every pick on every side draws a pVAR independently, each side keeps its highest, and the two highs are compared. A side wins the simulation when its best is more than five pVAR ahead of the other's; anything closer counts as a coin flip.
The five-point band is a concession to noise. pVAR is calibrated to about a single point of precision, bootstrapping adds its own sampling variance, and gaps narrower than five pVAR sit inside what most observers would call a draw. Each card carries an italic note beneath its stacked bar, reporting how often the simulation lands inside the band. When that share is high, the trade is too close to call regardless of how the headline percentages tilt.
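Put together, the scoring loop is short. A sketch under the same assumptions as above, with the five-point band applied symmetrically; the loop is written for readability, and a real implementation would vectorize it:

```python
def score_trade(side_a, side_b, hist, rng, n_sims=500_000, band=5.0):
    # side_a, side_b: lists of (pick, position) tuples. Returns the win
    # share for each side plus the share of sims inside the too-close band.
    wins_a = wins_b = ties = 0
    for _ in range(n_sims):
        best_a = max(sample_pvar_positional(p, pos, hist, rng) for p, pos in side_a)
        best_b = max(sample_pvar_positional(p, pos, hist, rng) for p, pos in side_b)
        if best_a - best_b > band:
            wins_a += 1
        elif best_b - best_a > band:
            wins_b += 1
        else:
            ties += 1
    return wins_a / n_sims, wins_b / n_sims, ties / n_sims
```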
Reading the headline bar
The stacked bar at the top of each card answers the conservative question directly. Each segment is one pick, sized by the share of simulations in which it produced the trade's highest pVAR. The percentage above each side is the share of simulations in which the trade's best player came from one of that side's picks. A single-pick side shows up as a single segment; a three-pick side shows exactly how its chances are spread across its picks.
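The segments come from tracking which pick produced the winning draw. A vectorized sketch over the pooled picks of both sides, with position weighting omitted for brevity:

```python
def pick_shares(picks, hist, rng, n_sims=500_000):
    # One column of draws per pick; each row is one simulation. The
    # stacked-bar segment for a pick is the share of rows it wins.
    draws = np.column_stack([
        sample_pvar(p, hist["picks"], hist["pvar"], rng, n=n_sims)
        for p in picks
    ])
    winners = draws.argmax(axis=1)
    return np.bincount(winners, minlength=len(picks)) / n_sims
```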
That decomposition is where the trade-back logic shows itself. Three picks, each with a 10% chance of clearing 60 pVAR, collectively offer roughly a 27% chance that at least one lands (1 − 0.9³ ≈ 0.271). That is about what a single top-20 pick produces on its own, only distributed across more independent swings. When the Bills traded back four times in one night, that was the math they were playing.
The per-pick tier table
Each pick has its own row at the bottom of the card, broken into seven tiers: Bust (<20 pVAR), Solid (20–40), Good (40–60), Great (60–70), Elite (70–80), Star (80–90), and HOF (90+). The cells in a row sum to one and together describe the full distribution of how that pick is likely to resolve. A pick at 141 lands in the Bust tier roughly 70% of the time; a pick at 6 lands in Great or better closer to 35% of the time. The heatmap shifts monotonically: as the pick number rises, probability mass slides out of the upper tiers and toward Bust.
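Each row is just an empirical histogram of the slot's bootstrap draws over the tier boundaries. A sketch, reusing sample_pvar from above:

```python
TIER_NAMES = ["Bust", "Solid", "Good", "Great", "Elite", "Star", "HOF"]
TIER_EDGES = [20, 40, 60, 70, 80, 90]  # boundaries between the seven tiers

def tier_row(pick, hist, rng, n=100_000):
    # Bucket bootstrap draws for one slot into the seven tiers; the
    # resulting shares sum to one and form one row of the card's table.
    draws = sample_pvar(pick, hist["picks"], hist["pvar"], rng, n=n)
    idx = np.digitize(draws, TIER_EDGES)  # 0 = Bust ... 6 = HOF
    shares = np.bincount(idx, minlength=7) / n
    return dict(zip(TIER_NAMES, shares))
```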
What this doesn't capture
The simulation does not model the actual player drafted. The names shown on each card are nominal. They identify which player the slot turned into in the 2026 draft, but the math behind the percentages comes only from the slot itself and, in position-weighted mode, the position group. Whether a particular pick at #6 in 2026 turns out closer to Patrick Mahomes or closer to Mitchell Trubisky is the outcome the simulation is sampling over. It does not know which.
The simulation also does not model 2026 class strength, the drafting team's positional need, coaching or scheme fit, or any other team-specific factor. It pools all 32 teams across fifteen draft classes and treats picks as independent draws.
Future picks, marked with a leading tilde, are modeled at the slot they project to. No future-pick discount is applied, even though the rest of the league does discount them in practice.
Position coding uses the site's canonical groups, so interior offensive line isn't distinguished from tackle and edge rushers are collapsed into defensive end. That limit usually doesn't bite, but it can on position-heavy trades.
Background reading: Introducing pVAR, The Draft Pick Value Curve, Are NFL Teams Getting Better at Drafting?, and the original Massey & Thaler (2013).