

I Removed All Emotion From My Bracket. 32 Games Later, Here's What Happened.

By Collin Lee | March 21, 2026


Two days ago, I published a 2026 NCAA Tournament bracket built entirely by algorithm. No gut feelings. No alma mater bias. No "I just have a feeling about this team." Just an XGBoost model trained on 54,000 games, a logistic regression model for backup, and the cold comfort of probabilities.

The March Madness Round of 64 is now in the books. Time to see whether removing emotion from the bracket was brilliant or delusional.

ROUND OF 64 SCORECARD
Correct picks: 25 / 32 (78.1%)
ESPN points earned: 250 / 320
Upset picks correct: 2 / 3

25 out of 32. Not bad for a system that doesn't watch basketball.

I'll be honest: I didn't expect to care this much. I published the bracket, told myself I was just running an experiment, and then spent two days refreshing scores on my phone like a person with a gambling problem. When Iowa beat Clemson - an upset we called at 84% - I caught myself pumping my fist. For math. I was fist-pumping for math. The whole point of this project was to remove emotion, and here I was, emotionally invested in whether the emotionless system was right.

The full results

Here's every game, showing our predicted winner, our confidence level, and the actual result. When our pick lost, the actual winner's name appears in front of the score in the Actual column (e.g., "TCU 66-64").

East - 7/8 correct

| Our Pick | Opponent | Conf | Actual |
| --- | --- | --- | --- |
| (1) Duke | (16) Siena | 99% | 71-65 |
| (8) Ohio St. | (9) TCU | 52% | TCU 66-64 |
| (5) St. John's | (12) Northern Iowa | 96% | 79-53 |
| (4) Kansas | (13) Cal Baptist | 94% | 68-60 |
| (6) Louisville | (11) South Florida | 78% | 83-79 |
| (3) Michigan St. | (14) N. Dakota St. | 98% | 92-67 |
| (7) UCLA | (10) UCF | 81% | 75-71 |
| (2) UConn | (15) Furman | 99% | 82-71 |

West - 6/8 correct

| Our Pick | Opponent | Conf | Actual |
| --- | --- | --- | --- |
| (1) Arizona | (16) LIU | 96% | 92-58 |
| (9) Utah St. | (8) Villanova | 69% | 86-76 |
| (5) Wisconsin | (12) High Point | 96% | HP 83-82 |
| (4) Arkansas | (13) Hawaii | 96% | 97-78 |
| (6) BYU | (11) Texas | 76% | TEX 79-71 |
| (3) Gonzaga | (14) Kennesaw St. | 98% | 73-64 |
| (7) Miami FL | (10) Missouri | 68% | 80-66 |
| (2) Purdue | (15) Queens | 99% | 104-71 |

South - 6/8 correct

| Our Pick | Opponent | Conf | Actual |
| --- | --- | --- | --- |
| (1) Florida | (16) Prairie View A&M | 99% | 114-55 |
| (9) Iowa | (8) Clemson | 84% | 67-61 |
| (5) Vanderbilt | (12) McNeese | 92% | 78-68 |
| (4) Nebraska | (13) Troy | 97% | 76-47 |
| (6) North Carolina | (11) VCU | 59% | VCU 82-78 OT |
| (3) Illinois | (14) Penn | 98% | 105-70 |
| (7) Saint Mary's | (10) Texas A&M | 64% | A&M 73-50 |
| (2) Houston | (15) Idaho | 99% | 78-47 |

Midwest - 6/8 correct

| Our Pick | Opponent | Conf | Actual |
| --- | --- | --- | --- |
| (1) Michigan | (16) Howard | 99% | 101-80 |
| (8) Georgia | (9) Saint Louis | 50% | SLU 102-77 |
| (5) Texas Tech | (12) Akron | 90% | 91-71 |
| (4) Alabama | (13) Hofstra | 88% | 90-70 |
| (6) Tennessee | (11) Miami OH | 97% | 78-56 |
| (3) Virginia | (14) Wright St. | 98% | 82-73 |
| (10) Santa Clara | (7) Kentucky | 53% | UK 89-84 OT |
| (2) Iowa St. | (15) Tennessee St. | 97% | 108-74 |

What went right

Let's start with the good news, because there's plenty of it.

The upset calls worked

In the original post, I talked about the model's upset threshold. It won't pick an upset for vibes - only when the numbers actually favor the lower seed or come close. The model flagged three upsets in the Round of 64:

  1. 9-seed Iowa over 8-seed Clemson (84% confidence) - Nailed it. Iowa won 67-61. This was the model's strongest upset conviction, and the Hawkeyes delivered.
  2. 9-seed Utah State over 8-seed Villanova (69% confidence) - Nailed it. Utah State won 86-76, and it wasn't even close. The model saw what the seeding committee didn't.
  3. 10-seed Santa Clara over 7-seed Kentucky (53% confidence) - Missed. Kentucky won 89-84 in overtime. A true coin flip that went the other way.

Two out of three. And the one we missed was a toss-up that went to overtime. I'll take that. More than I should, probably - the relief I felt when those upset calls landed was decidedly not the emotion-free experience I signed up for.
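The upset rule from the original post - never for vibes, only when the numbers favor the lower seed or come close - reduces to a one-line threshold. Here's a minimal sketch; the function name and the 3-point margin are hypothetical stand-ins, since the post doesn't publish the exact cutoff:

```python
def pick_winner(higher_seed, lower_seed, p_lower_seed_wins, upset_margin=0.03):
    """Flag the upset only when the model's probability for the lower
    seed is at least (50% - upset_margin). The margin is a guess at the
    'or comes close' part of the rule, not the model's real threshold."""
    if p_lower_seed_wins >= 0.5 - upset_margin:
        return lower_seed
    return higher_seed

# Iowa at 84% clears the bar easily; Santa Clara at 53% just clears it.
print(pick_winner("Clemson", "Iowa", 0.84))          # Iowa
print(pick_winner("Kentucky", "Santa Clara", 0.53))  # Santa Clara
# High Point at 4% never gets flagged, which is why that miss hurts.
print(pick_winner("Wisconsin", "High Point", 0.04))  # Wisconsin
```

Under a rule like this, a 53% lean is enough to flip the pick - which is exactly how the Santa Clara call ended up in the bracket.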

The confidence was honest

This is the part that actually matters more than the raw hit rate. Six of our seven misses came in games where the model had less than 80% confidence. The model essentially said "these are close games" and close games went the other way. That's not a prediction failure - that's probability working exactly as advertised.

Of the 20 games where the model was 90% or more confident, it went 19 of 20. The one miss - Wisconsin at 96% - stings. But a 95% hit rate on high-confidence picks is exactly the kind of reliability you want from a prediction system.
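Both of those claims can be checked mechanically. The snippet below buckets all 32 picks by confidence, with the values transcribed from the result tables above (True means our pick won):

```python
# (confidence, did_our_pick_win) for all 32 Round of 64 games,
# listed in table order: East, West, South, Midwest.
games = [
    (0.99, True), (0.52, False), (0.96, True), (0.94, True),
    (0.78, True), (0.98, True), (0.81, True), (0.99, True),
    (0.96, True), (0.69, True), (0.96, False), (0.96, True),
    (0.76, False), (0.98, True), (0.68, True), (0.99, True),
    (0.99, True), (0.84, True), (0.92, True), (0.97, True),
    (0.59, False), (0.98, True), (0.64, False), (0.99, True),
    (0.99, True), (0.50, False), (0.90, True), (0.88, True),
    (0.97, True), (0.98, True), (0.53, False), (0.97, True),
]

def hit_rate(lo, hi):
    """Wins and total games for picks with confidence in [lo, hi)."""
    bucket = [won for conf, won in games if lo <= conf < hi]
    return sum(bucket), len(bucket)

print(hit_rate(0.90, 1.01))  # (19, 20): only Wisconsin missed above 90%
print(hit_rate(0.50, 0.80))  # (3, 9): the coin-flip zone, home to 6 of 7 misses
```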

What went wrong

The High Point problem

Let's talk about the elephant in the room. We had Wisconsin winning at 96% confidence. High Point - a 12-seed mid-major out of the Big South that wasn't on most people's radar - won 83-82.

The model gave Wisconsin a 96% chance because the underlying numbers were overwhelming. Wisconsin had a top-20 offense by the advanced metrics the model tracks, a top-30 defense, and a strength of schedule that dwarfed High Point's. By every number the model cares about, Wisconsin was the far better team.

And they lost by one point.

This is the kind of result that makes people say "that's why they play the games." And they're right. A 96% win probability still loses roughly 1 time in 25. Across a 32-game round, you'd expect one or two high-confidence picks to fail. We got one. The model isn't broken - it just ran into the 4% scenario.
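A quick back-of-the-envelope makes the same point. For independent picks, the expected number of misses is just the sum of (1 - confidence); the confidence values below are taken from the tables above:

```python
# If all 32 games were called at 96%, how many upsets should slip through?
print(round(32 * (1 - 0.96), 2))  # 1.28 -> "one or two" is about right

# Using the actual confidences of our twenty 90%+ picks:
high_conf = [0.99, 0.96, 0.94, 0.98, 0.99,   # East
             0.96, 0.96, 0.96, 0.98, 0.99,   # West
             0.99, 0.92, 0.97, 0.98, 0.99,   # South
             0.99, 0.90, 0.97, 0.98, 0.97]   # Midwest
expected_misses = sum(1 - p for p in high_conf)
print(round(expected_misses, 2))  # 0.63 expected; we got exactly 1
```

Getting one high-confidence miss when the math expected roughly two-thirds of one is well within normal variance.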

I know that intellectually. But when I saw the final score, what I actually felt wasn't "ah, variance" - it was a knot in my stomach. Not because I care about Wisconsin. Because if the model can be wrong at 96%, what else is it wrong about? That's the thought you're not supposed to have when you've committed to trusting the math.

But try telling that to your bracket.

The real upsets we missed

Eight upsets happened in the Round of 64. We called two of them. Here are the six we missed:

  1. (9) TCU over (8) Ohio St., 66-64 - we had Ohio St. at 52%
  2. (12) High Point over (5) Wisconsin, 83-82 - we had Wisconsin at 96%
  3. (11) Texas over (6) BYU, 79-71 - we had BYU at 76%
  4. (11) VCU over (6) North Carolina, 82-78 OT - we had UNC at 59%
  5. (10) Texas A&M over (7) Saint Mary's, 73-50 - we had Saint Mary's at 64%
  6. (9) Saint Louis over (8) Georgia, 102-77 - we had Georgia at 50%

The honest assessment: three of these were games where the model was basically shrugging (50-59% confidence). The Texas A&M result is the one that bugs me most after Wisconsin - Saint Mary's at 64% felt like a reasonable lean, and the Aggies absolutely destroyed them.

The cascade starts

In the original post, I wrote about one of the most important things we learned building this system: 76.5% of late-round bracket misses aren't the model picking the wrong winner - they're the wrong team being in the game in the first place.

We're already watching that play out.

Our bracket had Wisconsin beating Arkansas in the Round of 32, then losing to Arizona in the Sweet 16. Wisconsin is out. That entire branch of our bracket is now rewritten - not because we got the later picks wrong, but because we missed the first domino. High Point's one-point win cost us the Round of 64 pick itself and killed our Wisconsin-over-Arkansas pick downstream before it could ever be scored.

Same story across the bracket. We had North Carolina losing to Illinois in the Round of 32 - a game that can never happen now because VCU took out UNC. We had Saint Mary's falling to Houston in the Round of 32 - now it's Texas A&M who gets that date with Houston.

Seven first-round misses. Each one rewrites at least one downstream matchup, and the Wisconsin miss kills downstream picks outright. These seven games will ripple through every remaining round of our bracket.

This is exactly what the data predicted. And it's exactly why early-round accuracy matters so much more than people think.
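The cascade itself is mechanical: a pick can no longer score once its team is eliminated, so finding dead picks is just a set lookup. A minimal sketch using the Wisconsin branch described above (the helper name is mine, not the model's):

```python
def dead_picks(picks, eliminated):
    """picks: (round, team) pairs we still need to score.
    A pick is dead the moment its team is knocked out."""
    return [(rnd, team) for rnd, team in picks if team in eliminated]

# Our West branch: Wisconsin winning the R64 and R32, then Arizona
# beating Wisconsin in the Sweet 16.
west_branch = [("R64", "Wisconsin"), ("R32", "Wisconsin"), ("S16", "Arizona")]
print(dead_picks(west_branch, eliminated={"Wisconsin"}))
# [('R64', 'Wisconsin'), ('R32', 'Wisconsin')] -- the Arizona pick survives,
# since Arizona only needs to win, whoever it ends up playing.
```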

What we're still alive for

Here's the good news: the big picks are all intact.

All four of our 1-seeds - Duke, Arizona, Florida, and Michigan - won their Round of 64 games. Our champion pick (Michigan) beat Howard 101-80. Our runner-up (Duke) beat Siena 71-65. Our entire Final Four is still in the tournament.

Seven R64 misses shuffled some opponents around, but the teams we actually picked to go deep are almost all still standing. The only fully dead path is Wisconsin's branch in the West. Everywhere else, our projected Sweet 16 and Elite Eight teams survived.

The bracket's highest-value bets - Michigan as champion, Duke as runner-up - are both still alive. In ESPN scoring, the champion pick alone is worth 320 points and each Final Four pick is worth 160. All four of those picks are still on the table.
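Those point values fall out of the standard ESPN Tournament Challenge structure: each round is worth 320 points in total, so a single pick doubles in value every round (consistent with the 250-of-320 we banked at 10 points per Round of 64 game). A quick sketch of what the Michigan line alone is still worth:

```python
# Standard ESPN Tournament Challenge scoring: every round is worth 320
# points in total, so per-game value doubles each round.
ROUND_VALUE = {"R64": 10, "R32": 20, "S16": 40, "E8": 80, "F4": 160, "NCG": 320}

# Michigan already banked its R64 points; if it runs the table, the
# remaining picks on that line are worth:
michigan_remaining = sum(ROUND_VALUE[r] for r in ["R32", "S16", "E8", "F4", "NCG"])
print(michigan_remaining)  # 620
```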

What would a human have done?

Here's the question I keep coming back to. Would I have done better if I'd gone with my gut?

Honestly? I would have picked Wisconsin over High Point too. So would you. So would everyone. I might have taken North Carolina over VCU because of the name, which would have been wrong. I definitely would have picked Kentucky over Santa Clara because it's Kentucky - and that would have been right. But I also might have talked myself into a 12-over-5 upset somewhere because "it feels like an upset year," and statistically, I would have been wrong more often than right.

The model's 78.1% hit rate is above the historical average for reasonably sharp brackets. It's not perfect, but it's consistent. It doesn't get rattled. It doesn't chase narratives. It doesn't change its Wisconsin pick because some analyst on TV said "High Point has shooters."

Seven misses. Four were coin flips the model acknowledged as coin flips. One was a genuine blind spot (Texas A&M). One was a mid-range miss (BYU). And one was the kind of low-probability upset that happens exactly as often as math says it should.

Re-running the model with real results

Here's where it gets interesting. Instead of just looking at our original bracket, I went back and locked in every actual Round of 64 result, then let the model re-predict the rest of the tournament from scratch. Same 54,000-game training set, same XGBoost model, same logistic regression blend - but now working with the 32 teams that actually survived, not the 32 we predicted.
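The re-run procedure is simple to sketch: seed the bracket with the 32 actual survivors, then let the model advance winners round by round. In the sketch below, `predict_game` is a hypothetical stand-in for the XGBoost/logistic-regression blend:

```python
def replay_bracket(field, predict_game):
    """field: teams in bracket order (field[0] plays field[1], etc.).
    predict_game(a, b) returns P(a beats b). Returns the winners of
    each successive round, ending with the champion."""
    rounds = []
    while len(field) > 1:
        winners = []
        for a, b in zip(field[::2], field[1::2]):
            winners.append(a if predict_game(a, b) >= 0.5 else b)
        rounds.append(winners)
        field = winners
    return rounds

# Toy stand-in model: the alphabetically earlier team always wins.
toy_model = lambda a, b: 1.0 if a < b else 0.0
print(replay_bracket(["A", "D", "C", "B"], toy_model))  # [['A', 'B'], ['A']]
```

Run that loop once with the real 32 survivors and the real model, and you get a full re-predicted bracket.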

The headline: the model's championship prediction is unchanged. Michigan over Duke, 79-78. Same Final Four. Same path. Even with seven first-round results going differently than expected, the model's view of the tournament's endgame didn't budge.

Updated predictions from the Round of 32 onward

Round of 32 - Model predictions with actual R64 results

| Winner | Loser | Conf | Score |
| --- | --- | --- | --- |
| (1) Duke | (9) TCU | 97% | 76-64 |
| (5) St. John's | (4) Kansas | 73% | 78-76 |
| (3) Michigan St. | (6) Louisville | 73% | 78-76 |
| (2) UConn | (7) UCLA | 78% | 75-70 |
| (1) Arizona | (9) Utah St. | 96% | 89-73 |
| (4) Arkansas | (12) High Point | 96% | 93-79 |
| (3) Gonzaga | (11) Texas | 79% | 82-76 |
| (2) Purdue | (7) Miami FL | 91% | 80-69 |
| (1) Florida | (9) Iowa | 91% | 81-72 |
| (5) Vanderbilt | (4) Nebraska | 72% | 78-76 |
| (3) Illinois | (11) VCU | 96% | 82-73 |
| (2) Houston | (10) Texas A&M | 96% | 78-66 |
| (1) Michigan | (9) Saint Louis | 97% | 94-77 |
| (5) Texas Tech | (4) Alabama | 51% | 87-85 |
| (3) Virginia | (6) Tennessee | 73% | 73-72 |
| (2) Iowa St. | (7) Kentucky | 88% | 78-74 |
Sweet 16 through Championship

| Round | Winner | Loser | Conf | Score |
| --- | --- | --- | --- | --- |
| S16 | (1) Duke | (5) St. John's | 83% | 78-72 |
| S16 | (2) UConn | (3) Michigan St. | 75% | 75-74 |
| S16 | (1) Arizona | (4) Arkansas | 91% | 96-83 |
| S16 | (2) Purdue | (3) Gonzaga | 75% | 79-74 |
| S16 | (1) Florida | (5) Vanderbilt | 79% | 88-80 |
| S16 | (2) Houston | (3) Illinois | 72% | 72-72 |
| S16 | (1) Michigan | (5) Texas Tech | 93% | 84-71 |
| S16 | (2) Iowa St. | (3) Virginia | 64% | 74-74 |
| E8 | (1) Duke | (2) UConn | 93% | 77-65 |
| E8 | (1) Arizona | (2) Purdue | 81% | 85-71 |
| E8 | (1) Florida | (2) Houston | 64% | 76-73 |
| E8 | (1) Michigan | (2) Iowa St. | 87% | 84-74 |
| F4 | (1) Duke | (1) Florida | 73% | 80-72 |
| F4 | (1) Michigan | (1) Arizona | 66% | 90-86 |
| NCG | (1) Michigan | (1) Duke | 80% | 79-78 |

What changed (and what didn't)

The model's view of the tournament barely shifted. All four 1-seeds still reach the Final Four. Michigan still beats Duke by a point for the title. The only meaningful changes are in the early rounds, where the actual survivors - TCU, High Point, Texas, VCU, Texas A&M, Saint Louis, and Kentucky - slot into Round of 32 matchups in place of the teams we originally picked.

The stability is striking. Seven first-round results went differently than expected, reshuffling opponents across the bracket, and the model's championship game didn't change by a single point.

The games I'm watching most closely

The model is confident. I'm less sure. Here are the Round of 32 matchups that keep me up at night: Texas Tech over Alabama at 51% is a pure coin flip, and the cluster of 72-73% calls - St. John's over Kansas, Michigan St. over Louisville, Vanderbilt over Nebraska, Virginia over Tennessee - are exactly the kind of games that went the other way in the first round.

That's the tension of this whole project. The model has deep structural conviction about these teams. It sees Michigan 79, Duke 78, and it doesn't flinch. It's either exactly right or about to be very, very wrong - and everyone's watching.

The model would tell you not to get emotionally invested in that prediction.

But between you and me, it's getting harder to take that advice.


© 2026 Collin Lee