Burla demo · May 2, 2026

Where the market is wrong
about the Derby.

Start betting like a sharp.

Twenty horses go to the gate Saturday at Churchill Downs. The morning-line odds tell you what the betting market thinks each horse’s chances are. The model below tells you what the math thinks. Where the model is much higher than the market, you bet; where it’s much lower, you fade. Post time is 6:57 PM ET.

5 Value bets identified
126/160 Backtest score (2010–25)
1 Trillion Monte Carlo simulations
20 Horses analyzed

Start here: the basics

If you’ve never bet a horse race (or never read a model like this), these six boxes cover everything you need before scrolling. Everything below is just applying these ideas to the 2026 field.

Bet types
Win, Place, Show
  • Win: horse finishes 1st.
  • Place: finishes 1st or 2nd.
  • Show: finishes 1st, 2nd, or 3rd.

Show is the easiest to hit, so it pays the smallest. Win pays the most.

Reading the price
Morning-line odds

6.5-1 means a $1 win bet returns $6.50 profit if the horse wins.

To turn odds into probability: 1 / (odds + 1). So 6.5-1 becomes 1 / 7.5 = 13.3%, the “market implied” chance.
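
The conversion above is a one-liner; a minimal sketch (the helper name is ours):

```python
def implied_probability(odds: float) -> float:
    """Convert fractional odds (e.g. 6.5 for 6.5-1) to the market-implied win probability."""
    return 1.0 / (odds + 1.0)

print(round(implied_probability(6.5) * 100, 1))  # 13.3 — the 6.5-1 example above
print(round(implied_probability(50) * 100, 1))   # 2.0  — a 50-1 longshot
print(round(implied_probability(4) * 100, 1))    # 20.0 — a 4-1 favorite
```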

Finding the edge
Bet, Fair, Fade

Compare the model’s win % to the market’s implied %. Three buckets, three different actions.

  • Bet: model probability is at least 1.5× the market-implied. Buy a win ticket. Example: Robusta, model 5.6% vs. market 2.0%.
  • Fair: model and market within roughly 15%. Pass on a win bet; maybe use in exotics. Example: Further Ado, model 17.8% vs. market 14.3%.
  • Fade: model probability is meaningfully below the market-implied. Don’t bet it to win, and leave it off your exotic tickets. Example: Renegade, model 3.9% vs. market 20.0%.

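
The three buckets reduce to a ratio test. A minimal sketch (the 1.5× BET threshold is the article's; the 0.85 FADE cutoff is our assumed reading of "meaningfully below"):

```python
def classify(model_pct: float, market_pct: float) -> str:
    """Bucket a horse by model-over-market ratio: BET at >= 1.5x,
    FADE when meaningfully below (assumed < 0.85x), FAIR in between."""
    ratio = model_pct / market_pct
    if ratio >= 1.5:
        return "BET"
    if ratio < 0.85:   # assumed cutoff, not stated in the article
        return "FADE"
    return "FAIR"

print(classify(5.6, 2.0))    # BET  — Robusta, 2.8x
print(classify(17.8, 14.3))  # FAIR — Further Ado, ~1.24x
print(classify(3.9, 20.0))   # FADE — Renegade, ~0.2x
```
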
Speed rating
What is a Beyer?

A Beyer Speed Figure is a single number (usually 60 to 120) that grades how fast a horse ran. Higher is faster. It adjusts for track and distance, so a 100 anywhere is a 100.

  • Claiming horse: 70 to 85
  • Stakes-level horse: 90 to 100
  • Top Triple Crown contender: 100+

2026 field tops out at 106 (Further Ado).

The trillion races
What is a Monte Carlo simulation?

Run the same scenario over and over with random luck mixed in, and count what happens. For this Derby:

  1. Each horse gets a strength score from the model.
  2. Add random noise (good days, bad days, racing luck).
  3. Rank them. Record 1st, 2nd, 3rd.
  4. Repeat 1 trillion times.

If Further Ado wins 17.8% of those races, that’s his model win %.
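
The four steps, shrunk to a toy three-horse field run locally (strength scores are made up for illustration; the real batch worker appears further down the page):

```python
import numpy as np

rng = np.random.default_rng(42)
strength = np.array([1.0, 0.4, 0.1])   # step 1: hypothetical model scores
n_sims = 100_000

noise = rng.normal(0.0, 1.0, size=(n_sims, 3))        # step 2: racing luck
winners = np.argmax(strength + noise, axis=1)          # step 3: rank, record 1st
win_pct = np.bincount(winners, minlength=3) / n_sims   # step 4: tally over all sims

print(win_pct)   # strongest horse wins most often, but never always
```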

Trusting the model
What is a backtest?

Running the model on past races we already know the outcome of, to see if it would have picked the winners.

We tested 5,000 weight combinations against 16 historical Derbies (2010 through 2025). The best one scored 126 / 160 using a 10-5-2-1-0 ranking metric, picking the actual winner first in 11 of 16 years. That weight set is what runs the model today, and a 2,000-permutation null test on Burla confirmed the score is not search noise.

Caveat: 16 races is still a small sample. Strong against the null, not a guarantee.

What the model looks at

Each horse is scored on these data points, then weighted by the backtest.

The horse
Form & speed
  • Career record (starts, wins, places, shows)
  • Career win rate and in-the-money rate
  • Beyer Speed Figure on the horse’s top race
  • Run style: Pace, Press, Stalk, Close, or Deep
  • Stamina test: pass / fail at 1.25 miles
  • Pace fit if the early pace gets hot
The team
Trainer & jockey
  • Trainer’s lifetime Derby wins
  • Trainer’s Churchill Downs win rate
  • Trainer’s historical Derby win %
  • Jockey’s lifetime Derby wins
  • Jockey’s historical Derby win %
  • Jockey age and experience
The bloodline
Pedigree
  • Sire (the father horse)
  • Whether the sire ever won the Derby
  • Pedigree distance aptitude at 10 furlongs
  • Dosage Index (a breeding formula that predicts stamina)
The setup
Post & market
  • Post position (gate 1 through 20)
  • Historical win % from that gate
  • Historical in-the-money % from that gate
  • Morning-line odds (the market’s price)
  • Expert consensus score (averaged from major handicappers)
  • Special adjustments (jockey changes, post 17 curse, rail penalty)

The 5,000-combo backtest, scored across 16 historical Derbies, decided which of these matter most. The top three weights all leaned toward what the horse can actually do: stamina test (19%), year-level Beyer (16%), and dosage score (16%). Trainer / jockey edges still matter, but less than the headline narrative usually claims.

Top picks

Three longshot value plays. After scratches reshuffled the field, the headline favorites all collapsed back to fair. The model’s edge now lives entirely at the bottom of the morning line, where 50-1 and 30-1 prices still pay multiples of the model’s fair price even after takeout.

Top value bet
Robusta
Post 20 · 50-1 ML
5.6% Model win%
2.0% Market implied
89 Beyer

Drew into the field from the also-eligible list when Right To Party scratched on Friday. Doug F. O’Neill has won the Derby twice (2012 with I’ll Have Another, 2016 with Nyquist). The model has him at 2.78× the market-implied probability, the largest gap on the board. Buy a small win ticket at 50-1: Kelly sizing lands around 4% of bankroll, and the multiplier still pays cleanly after takeout. Expect variance.

Longshot saver
Great White
Post 18 · 50-1 ML
4.1% Model win%
2.0% Market implied
84 Beyer

Drew in from the also-eligible list when Silent Tactic scratched on Wednesday. Beyer 84 is bottom-tier and John Ennis trains him. The model still has him at 2.07× the market-implied probability. Small win saver at 50-1: Kelly says ~2% of bankroll. Pure value play, no other angle.

Longshot saver
Litmus Test
Post 4 · 30-1 ML
5.7% Model win%
3.2% Market implied
96 Beyer

Bob Baffert is back in the Derby with Martin Garcia up. Beyer 96 is below the figure leaders but not bottom-of-field. Post 4 sits in the rail-side cluster. The model has him at 1.78× the market-implied probability. Buy a small win ticket at 30-1: Kelly says ~3% of bankroll. The longshot multiplier still clears takeout cleanly.
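
The "Kelly says" stake sizes above all follow from the standard Kelly fraction f = (b·p − q) / b, where b is the fractional odds, p the model win probability, and q = 1 − p. A quick check against the article's numbers:

```python
def kelly_fraction(odds: float, p: float) -> float:
    """Full-Kelly stake as a fraction of bankroll for a win bet
    at fractional odds `odds` with model win probability `p`."""
    b, q = odds, 1.0 - p
    return (b * p - q) / b

for name, odds, p in [("Robusta", 50, 0.056),
                      ("Great White", 50, 0.041),
                      ("Litmus Test", 30, 0.057)]:
    print(f"{name}: {kelly_fraction(odds, p):.1%} of bankroll")
# Robusta: 3.7%, Great White: 2.2%, Litmus Test: 2.6% —
# matching the "around 4%", "~2%", and "~3%" calls above.
```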

The contrarian read
Live longshot: Intrepido at 50-1
Beyond the three picks above, two more longshots cross the BET threshold (Intrepido at 1.73×, Ocelli at 1.52×). Intrepido has the better profile.

At 50-1 the market treats Intrepido as a throwaway. The model disagrees: 3.5% probability vs. 2.0% implied is a 1.73× multiplier. The catch is that his Pace run style puts him head-to-head with three other front-runners, so the likeliest scenario is that he burns out on the first turn. Treat him as a tertiary saver, not a main bet. The headline favorite Renegade (4-1 ML) is the cleanest fade in this model. The rail draw at post 1 is historically the worst gate in the Derby (no winner from post 1 in our 2010–2025 sample), and the model has him at 3.9% vs. 20.0% implied. The market is paying for his favorite badge, not his odds of winning.

3.5%
Intrepido model win%
1.7×
Model over market
$2
Suggested saver stake

All 20 horses ranked

Model Win % comes from 1 trillion Monte Carlo simulations. Market Win % is what the morning-line odds imply, with the market’s own rank below it. Value compares the two, before takeout. Each row has a reason box explaining the call.

Swipe the table to see Beyer, run style, and place / show probabilities.

Post Horse Odds Beyer Style Model Win% Market Win% Place% Show% Value
BET · Strong bet: model is at least 1.5× the market-implied probability and the gap is wide enough to clear Churchill’s takeout, or it is a longshot saver where the multiplier is large enough to justify a small stake. Robusta, Great White, and Litmus Test qualify.
FAIR · Some edge: model is above market but the multiplier is too small to clear takeout cleanly. Pass on a win bet; use as a structure piece in exotics if you want exposure.
FADE · Skip it: the public is overpaying for this horse. Don’t bet it to win, and leave it off the top of your trifecta or superfecta tickets. The red rows below are horses you walk past.

Exotic plays

Where small longshots actually live. Exotics let you bet structure (top-2, top-3, top-4 finish ordering) instead of trying to nail a single horse’s win at 6.5-1. Takeout is higher (~22%) but the payouts compound when the structure hits. All costs at minimum base unit.

Exacta
$1 box · $6.00
Further Ado · The Puma · So Happy

Three-horse box covers all 6 permutations of the model’s top three by win probability. Further Ado on top is the highest-probability ordering by a wide margin (17.8% vs the next horse at 8.4%).

Trifecta
1-key · $12.00
Key on top
Further Ado
Second & third (any order)
The Puma · So Happy · Robusta

Key Further Ado in the win slot, wheel three horses across second and third. Robusta is the cheapest live longshot in this slot; if he hits second or third the trifecta pays heavy.

Superfecta
10¢ box · $2.40
Further Ado · The Puma · Robusta · Litmus Test

10-cent four-horse box covers the model’s clear #1 plus the second tier (The Puma) plus the two best longshot value bets. Most cost-efficient chaos hedge in a 20-horse field; pays big if any ordering hits.
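
Each ticket price above is just permutations × base unit. A quick check (the $2 trifecta base is inferred from the $12 total; the other bases are as listed):

```python
from math import perm

exacta_cost = perm(3, 2) * 1.00        # 3-horse $1 box: 6 orderings of 2
trifecta_cost = 1 * perm(3, 2) * 2.00  # 1 key on top x 6 orderings underneath
superfecta_cost = perm(4, 4) * 0.10    # 4-horse 10-cent box: 24 orderings of 4

print(round(exacta_cost, 2), round(trifecta_cost, 2), round(superfecta_cost, 2))
# 6.0 12.0 2.4
```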

Toss off the top of your tickets: Renegade (Post 1, 4-1 ML, 3.9% model). The public is on him as the morning-line favorite, but he drew the rail and post 1 has not produced a Derby winner in our 2010 to 2025 sample. Don’t key him underneath; the gap between the 20.0% market-implied and the 3.9% model probability is the largest on the board.

Where the data comes from

Every input is measured, scraped, or pulled from a public source. No hand-typed numbers, no author guesses. Receipts in derby/derby_ingest.py and derby/derby_build.py.

Input | Source | What it drives
2026 morning line | kentuckyderby.com post-draw page (20 horses + ALEs) | The market-implied % column and the BET / FAIR / FADE call
2026 speed and pace | Horse Racing Nation handicapping article (Beyer, Brisnet, TFUS, Last-1f, Last-3f) | Beyer column, run-style classification, pace-fit signal
Historical Derby results | Wikipedia 2010 through 2025 (305 starters, finishing order, fractions, track condition) | 5,000-combo back-test, ML training set, per-post historical win rate
Historical winner Beyers | Washington Post Beyer archive and BloodHorse recap articles | Year-level speed-figure feature in the ML model
Pedigree (DP, DI, CD) | PedigreeQuery (24 horses; 2026 field) | Stamina-test flag and dosage-score feature
Trainer and jockey at Churchill | TrackMaster Churchill StatsMaster + historical Derby win counts | Trainer and jockey signals in the composite score
Workouts and layoff | Churchill Downs press releases at churchilldowns.com | Cross-checking pace and run-style assignments
Race-day weather | National Weather Service (Churchill Downs lat/lon, KSDF observations) | Track-condition prior; fast track for Saturday

One honest gap: per-horse Beyers for losing finishers in historical Derbies are DRF-paywalled. The historical training set therefore carries the same year-level winner Beyer for every horse in a given race, and per-horse historical signal comes from post position and connections instead of speed figures.

Methodology audit

The same audit a sharp data scientist would run before betting a dollar on this. We ran it ourselves, including a real permutation test on Burla. Receipts in derby/derby_audit.py.

The back-test clears its own null

In plain English: the back-test score is points the model earns for putting the actual winner near the top of its rankings (10 for 1st, 5 for 2nd, 2 for 3rd, 1 for 4th, 0 after that). Higher is better, 126/160 is what we got. We tried to fool the test: we re-ran the full 5,000-combo search 2,000 times after secretly scrambling which horse actually won each historical Derby, then graded the model against the fake winners. If the model is genuinely picking real winners, the fake-winner versions should score much lower. Every single one of the 2,000 scrambled runs did. The model is finding real signal, not getting lucky on the rankings.

126/160 Published
75/160 Null median
0.0% P( null ≥ 126 )
+78 Edge over random

Translation: in 2,000 random-label runs, none came within four points of the real-label score. The framework is picking up real signal, not search noise.
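
The scramble-and-rescore loop described above is easy to sketch on toy data (the real run in derby/derby_audit.py re-runs the full 5,000-combo search per scramble, which this skips):

```python
import random

def score(ranking, winner):
    """10-5-2-1-0 points for where a ranking placed the actual winner."""
    return [10, 5, 2, 1, 0][min(ranking.index(winner), 4)]

random.seed(0)
# Toy stand-in: 16 races, six horses each, a model that ranks the true winner first
races = [(["A", "B", "C", "D", "E", "F"], "A")] * 16
real_score = sum(score(r, w) for r, w in races)   # 160: perfect on the toy data

null_scores = []
for _ in range(2000):
    # Secretly scramble which horse "won" each race, then re-grade
    null_scores.append(sum(score(r, random.choice(r)) for r, _w in races))

p_value = sum(s >= real_score for s in null_scores) / len(null_scores)
print(real_score, p_value)   # the scrambled runs never match the real score
```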

Five design choices that make this honest

  1. No synthesized Beyer figures. Historical Beyers come straight from the Washington Post archive and BloodHorse recaps (winner only). No per-horse Beyer is back-calculated from finishing position; per-horse signal comes from post draw and connections instead.
  2. The scrape actually saves what it scrapes. derby_ingest.py dispatches a real Burla call across 114 scrape tasks and writes every payload to derby/data/raw/. Nothing is hard-coded as a fallback; the pipeline reproduces from raw bytes.
  3. The back-test runs on real fields, not curated rows. The 5,000-combo search reads historical_results.csv: top-5 favorites by closing odds plus the actual winner, across 16 years. About 80 starters drive the search, no horse is hand-picked.
  4. The ML model does not see the market price. implied_prob = 1 / (odds + 1) is excluded from the feature set. The 8 inputs are post, per-post historical win rate, trainer and jockey Derby wins, year-level Beyer, dosage, run-style, and a muddy-track flag. Predictions are independent of the market they are compared against.
  5. No hand-coded “expert” dictionaries. Trainer and jockey scores come from real Derby win counts. Pace fit comes from each horse’s scraped HRN Last-3f figure. Post-draw adjustments are zero unless a measured signal supports them.

Honest gaps still open

  1. Morning line is not the closing tote. The BET / FAIR / FADE call compares the model to the morning line. By post time, Renegade (4-1 ML) will likely be shorter and the longshots will drift. We are comparing to a forecast of the market, not the bell-time tote.
  2. Takeout eats real edge. Churchill keeps ~17% of the win pool, ~22% of exotics. To break even after takeout the model needs roughly a +20% gap over the market, not +15%. The five BET-tagged horses are all longshot saver tickets; the multipliers (1.5× to 2.8×) are wide enough to clear takeout, but stake sizes have to stay light.
  3. Per-horse historical Beyers for losing finishers are paywalled. DRF charges for them and we did not subscribe. The historical training set therefore has the same year-level Beyer for every horse in a given Derby. Per-horse historical signal has to come from post position and connections, not from speed figures.
  4. No Ragozin / Thoro-Graph / Brisnet pace sheets. Pace figures, trip notes, sectional times for losers, current jockey-trainer combo win rate, recent workouts, and live-odds movement in the last ten minutes are all outside the model’s view.
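
The "+20%" figure in gap 2 is just takeout arithmetic: if the pool keeps fraction t of every dollar, payouts shrink by (1 − t), so the model's probability must exceed the market-implied by a factor of at least 1 / (1 − t). A sketch (helper name is ours):

```python
def breakeven_multiplier(takeout: float) -> float:
    """Model-over-market multiplier needed to break even after takeout."""
    return 1.0 / (1.0 - takeout)

print(f"win pool (17% takeout): {breakeven_multiplier(0.17):.2f}x")  # 1.20x
print(f"exotics (22% takeout):  {breakeven_multiplier(0.22):.2f}x")  # 1.28x
```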

What this site is and isn’t

Trust

The 2026 inputs are measured (HRN, Wikipedia, NWS, the post-scratch morning line); the historical pipeline reads real Wikipedia results across 16 years; the back-test score clears its own null distribution by a wide margin. The ranking (Further Ado clear at the top, a tightly packed second tier, then Robusta, Great White and Litmus Test as the longshot value plays) is a defensible read of what is in the data.

Don’t trust

Win probabilities to four decimals, the BET / FAIR / FADE labels as if they were perfectly calibrated, or the implication that more compute equals more truth. The model still cannot see pace figures, sectional times, current workouts, or the closing tote board. Treat the picks as a sharp first read, not a guarantee.

How it ran on Burla

Burla is a Python library for parallel cloud compute. You write a function, call remote_parallel_map, and it fans out across a cluster instantly. No Docker, no Kubernetes, no orchestration glue.

For this pipeline, four separate Burla calls did all the heavy lifting: a 114-task scrape pulling 16 years of Wikipedia results plus the 2026 field; 164 ML model configurations trained in parallel; 5,000 weight combinations back-tested against 16 historical Derbies (2010 through 2025); and 1 trillion race simulations run across 10,000 parallel Burla workers in 47.8 minutes.

10,000 concurrent Burla workers at peak during Monte Carlo, each running 100,000,000 race simulations. 5,000 weight-combo workers ran simultaneously during sensitivity analysis, all results back in 7 seconds.

Full source code on GitHub. Burla docs at docs.burla.dev.

# Test 5,000 weight combinations across all 10 factors.
# Back-test each against 2010-2025 Derby results.
# Returns in 7 seconds on Burla (5,000 parallel workers).
from burla import remote_parallel_map
import numpy as np

def backtest_weights(weights_list, factors, backtest_fields):
    weights = np.array(weights_list)
    total_score = 0
    for year, horses in backtest_fields.items():
        scores = [(h["name"], sum(weights[i] * h[f]
                   for i, f in enumerate(factors)))
                  for h in horses]
        scores.sort(key=lambda x: -x[1])
        actual = next(h["name"] for h in horses if h["is_winner"])
        rank = next(i for i, (n, _) in enumerate(scores) if n == actual)
        total_score += [10, 5, 2, 1, 0][min(rank, 4)]
    return {"weights": weights_list, "score": total_score}

# Dirichlet sample: uniform over the probability simplex
combos = np.random.dirichlet(np.ones(10), size=5000).tolist()
args = [(c, FACTORS, BACKTEST_FIELDS) for c in combos]

# 5,000 workers run simultaneously → results in 7 seconds
results = remote_parallel_map(backtest_weights, args, grow=True)
best = max(results, key=lambda r: r["score"])
# → stamina_test: 19.2%, beyer_norm: 16.1%, dosage_score: 15.9%
# Train 164 ML configs (GBM, RF, LogReg) in parallel.
# Validate on 2022-2025 holdout (real data, no leak). Best log-loss 0.12.
from burla import remote_parallel_map
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import log_loss
import numpy as np

def train_and_eval(cfg, train_rows, holdout_rows, field_rows):
    X_train, y_train = train_rows            # (features, labels) arrays
    X_hold, y_hold = holdout_rows
    X_field = field_rows                     # 2026 field feature matrix
    pipe = Pipeline([("scaler", StandardScaler()),
                     ("clf", GradientBoostingClassifier(**cfg))])
    pipe.fit(X_train, y_train)
    score = log_loss(y_hold, pipe.predict_proba(X_hold)[:, 1])
    field_probs = pipe.predict_proba(X_field)[:, 1].tolist()
    return {"cfg": cfg, "log_loss": score, "field_probs": field_probs}

# 164 configs dispatched simultaneously to Burla cluster
results = remote_parallel_map(train_and_eval, args_list, grow=True)
results.sort(key=lambda r: r["log_loss"])
# Best: GBM depth=2, lr=0.10, subsample=0.9 → log-loss=0.1199
ensemble_probs = np.mean([r["field_probs"] for r in results[:5]], axis=0)
# 1,000,000,000,000 race simulations across 10,000 Burla workers.
# Each worker runs 100,000,000 sims, returns position tallies.
from burla import remote_parallel_map
import numpy as np

def simulate_race_batch(scores, n_sims, seed):
    rng = np.random.default_rng(seed)
    base = np.asarray(scores)                # hoisted out of the hot loop
    n = len(base)
    counts = np.zeros((n, 4), dtype=np.int64)
    for _ in range(n_sims):
        noise  = rng.normal(0, 1.8, n)       # calibrated upset rate
        noisy  = base + noise
        exp_s  = np.exp(noisy - noisy.max())
        probs  = exp_s / exp_s.sum()
        order  = rng.choice(n, size=4, replace=False, p=probs)
        for rank, idx in enumerate(order):
            counts[idx][rank] += 1
    return {"counts": counts.tolist()}

# 10,000 workers × 100,000,000 sims = 1,000,000,000,000 total in ~48 minutes
args = [(log_probs, 100_000_000, seed) for seed in range(10_000)]
results = remote_parallel_map(simulate_race_batch, args, grow=True)
total = np.sum([r["counts"] for r in results], axis=0)
# → Further Ado: 17.8% win  The Puma: 8.4%  Commandment: 7.3%

What’s the point?

The real result isn’t the win probability. It’s the time it took to build a rigorous model from scratch.

If you ran all three compute steps locally, sequentially, with Python’s GIL limiting CPU threads, the sensitivity analysis alone would take ~20 minutes. The full pipeline would take the better part of an afternoon.

With Burla, the same pipeline runs in under 3 minutes of actual compute time. That changes how you iterate: you can test 5,000 weight hypotheses, not 50. You can train 164 model configurations before picking a winner, not 5. You can run a million simulations, not ten thousand.

The sensitivity analysis pointed at three signals splitting the weight roughly evenly: stamina test (19%), year-level Beyer (16%), and dosage score (16%). Trainer and jockey edges still matter, but less than the trade-press narrative usually claims. That clarity only emerged because we could afford to test 5,000 weight combinations on real 16-year back-test data instead of a handful of curated rows.

Task | Sequential local | Burla parallel
164 ML configs | ~8 min | ~110 sec
5,000 weight combos | ~20 min | 7 seconds
1M Monte Carlo sims | ~6 min | ~90 sec
Total | ~34 min | ~5 min
Empirically validated factor weights (best of 5,000 combinations, backtest score 126/160 pts on 2010–2025):

  • 19.2% Stamina test
  • 16.1% Beyer (year-level)
  • 15.9% Dosage score
  • 13.2% Win-rate norm
  • 12.3% Post win%