Subscriber Mailbag: Breaking Down The Value Of Draft Models

Image credit: Scouts (Mike Janes/Four Seam)

Chamaco from Mexico asks:

Can you explain what a draft model is, and how it works? I generally understand that data goes in and a projection comes out – but what type of data? What program is the model in? Is it an algorithm, a formula, etc.? Have you ever seen one?


The draft model can probably best be described as an attempt to assign weights to a large number of inputs and then build the programming that takes all those different pieces of information and produces a ranking of the many players teams are considering for the draft. It’s not really, as best I understand, a single formula, but a series of formulas, algorithms and weights meant to predict, as well as possible, how players will perform in the future.
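If it helps to make that concrete, here is a stripped-down sketch of the weighted-inputs idea in Python. Every player, stat and weight below is invented purely for illustration; a real team’s model is vastly larger and more sophisticated than this.

```python
# A stripped-down sketch of the weighted-inputs idea. Every player,
# stat and weight here is invented purely for illustration.

# Hypothetical inputs per player, already normalized to a 0-1 scale.
players = {
    "Player A": {"exit_velo": 0.85, "chase_rate": 0.70, "age_score": 0.60},
    "Player B": {"exit_velo": 0.65, "chase_rate": 0.90, "age_score": 0.95},
}

# Hypothetical weights reflecting how much the team values each input.
weights = {"exit_velo": 0.5, "chase_rate": 0.3, "age_score": 0.2}

def score(inputs: dict) -> float:
    """Collapse a player's weighted inputs into one projection score."""
    return sum(weights[k] * v for k, v in inputs.items())

# Line up the board: highest combined score first.
board = sorted(players, key=lambda name: score(players[name]), reverse=True)
print(board)  # ['Player B', 'Player A']
```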

I haven’t seen one in action, as no team has brought me in to watch a draft and I wouldn’t want to sign the NDA required to see it, but I have talked to a lot of front office officials and scouts about models and the strengths and weaknesses of them.

The first thing I would note with all of these models is that they are tools, much like scouting reports, analytical data, psychological tests and in-person interviews. Teams can opt to draft straight off their model, but more generally, it serves as a guide. There’s still a decision maker (or makers) deciding whether to follow the draft model’s recommendation.

And for some teams, I’ve been told that the model is just one of multiple inputs. The model’s ranking of players is one consideration, but it’s just a piece of information that is weighed along with other inputs.

Nowadays, it’s possible to feed a lot of different types of data into a model. Injury history can be part of a model, as can biomechanical information. If you want, a model can ding a pitcher for certain characteristics in his delivery, or give another pitcher extra credit for ideal mechanics. You can include pitch characteristics (velocity, movement, etc.) and hitting metrics (exit velocity, chase rate, swing percentage, etc.). Demographic information is also generally part of the models, as teams can decide how much to weigh draft age, height, weight and position.
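As a hedged illustration of that bonus-and-penalty layer, the sketch below credits a hypothetical player for a young draft age and dings him for risky delivery traits. The thresholds and adjustment sizes are made up; a real model would derive them from data.

```python
# A hypothetical bonus-and-penalty layer on top of a base projection.
# The thresholds and adjustment sizes are made up for illustration.

def adjusted_score(base: float, draft_age: float, delivery_risk: bool) -> float:
    """Apply illustrative demographic and biomechanical adjustments."""
    score = base
    if draft_age < 18.5:   # credit a young draft age
        score *= 1.10
    if delivery_risk:      # ding a pitcher for risky delivery traits
        score *= 0.90
    return score

print(adjusted_score(base=60.0, draft_age=18.2, delivery_risk=True))
# roughly 59.4 (60.0 * 1.10 * 0.90)
```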

One difficulty with the complexity of today’s models is that they can get so big and so intricate that it’s hard even for the people who program them to fully understand and explain the weighting that determines why one player ranks ahead of another. That’s especially true now that teams may use machine learning in building their models.
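For a sense of why the machine-learning version is hard to explain, here’s a sketch using scikit-learn’s GradientBoostingRegressor on synthetic data. Everything here is invented for illustration; I have no idea which libraries or features any actual team uses.

```python
# Synthetic demonstration of the interpretability problem with a
# machine-learned model. The data and features are entirely invented.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Rows are past draftees; columns might be exit velocity, chase rate
# and draft age. The target is some measure of future MLB value.
X = rng.normal(size=(500, 3))
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.5, size=500)

model = GradientBoostingRegressor().fit(X, y)

# Global feature importances exist, but they can't tell you *why* one
# specific player edged out another across hundreds of trees.
print(model.feature_importances_)
```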

Really, models are the latest effort to bring order to an extremely hard-to-organize subject. If you go back to the 1920s, scouts were largely on their own. If you read old scouting reports from the first half of the 20th century, their lack of specificity is jarring compared to now. For a general manager, comparing an outfielder in Augusta with another one in Tucson seemed nearly impossible.

But you didn’t have to rate one against the other. There was no amateur draft at the time, so a scout’s ability to sign players was a matter of persuasion and making the best financial offer, and there was far less need to compare all the players around the country.

By the time you get to the amateur draft in the 1960s, teams had to start to line up the talent. And that led to the creation of draft boards. The idea was that scouts would file reports on the top players, and then a scouting director (a role that hadn’t become all that common until right around the rise of the draft) would line up the various players onto a preference board.

Not coincidentally, scouting terminology and metrics became more standardized in this era. The 20-to-80 scouting scale (and other variants of scouting grades) became more widespread, because having a grade on a hitter’s bat or a pitcher’s fastball enabled easier comparisons.

Through the 1990s and 2000s, the amount of information teams had to sort through kept growing. And by the 2000s, statistical information, especially for college players, had become a much bigger factor in draft evaluations.

That and the steady growth of computing power meant it was a fertile time for draft models to sprout up. At first, draft models were, as best I understand from conversations, just complex Excel spreadsheets that tried to weight various inputs. By the time PITCHf/x, TrackMan and Statcast arrived, spreadsheets were supplanted by much more complex programs, as the data had simply grown too large for Excel.

Do draft models work? They can, but they can also create problems. 

If you’ll bear with me, think of a draft model as much like trying to communicate with a spaceship orbiting Mars. Depending on where the planets are in their orbits, it can take as many as 20 minutes for a radio signal to travel between Earth and the Mars orbiter. So if you want to order the spaceship to change its orbit, you’re sending a message knowing that by the time it arrives, the ship will be 20 minutes past wherever it was when you sent it. And if you ask the spaceship for a positional update, you know the answer describes where the ship was 20 minutes ago, not where it is now.

A draft model tries to predict who will be the best players in baseball five to seven years from right now. So if you’re right, you won’t find out for roughly 2,000 days. If you’re wrong, you may know a little earlier, but still, it will be several years after draft day before you have a good sense of where you erred.

Ideally, you can test your draft model against past draft years to see how well it would have done in hindsight, but there also has to be a realization that the environment is always changing. What worked five years ago may be the wrong approach next year. You’re reading signals from the past in hopes they will help you figure out where the spaceship will be in the future. In 2019, a number of hitters with modest power had excellent seasons thanks to a lively MLB baseball. If you were building a draft model in 2019, the traits that made those hitters look like excellent picks in the 2012-2017 drafts would have seemed like useful things to build into the model.

But some of those players saw their offensive production plummet when the ball was not as lively in 2020 and beyond. And any models that were based around the lively baseball would suffer because of that.
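A minimal backtesting sketch, assuming hypothetical model ranks and realized-WAR figures for one past draft class, might look like the following; the caveat about shifting environments applies to everything it tells you.

```python
# Backtesting sketch for one past draft class: compare where the model
# had each player against what he actually produced. The ranks and
# realized-WAR figures are hypothetical.
from scipy.stats import spearmanr

model_rank = [1, 2, 3, 4, 5]               # model's preference order
realized_war = [18.0, 3.5, 9.2, 0.0, 1.1]  # career value, years later

rho, _ = spearmanr(model_rank, realized_war)
print(f"rank correlation: {rho:.2f}")  # -0.80
# A strongly negative rho is good here: rank 1 should pair with the
# most WAR. Repeating this across many past drafts tests the model,
# with the caveat that the old run environment may never repeat.
```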

To take an extreme case we’ve written about before, the A’s 2002 Moneyball draft was an early example of a team drafting off a model. The A’s believed that by using college statistics they could outsmart teams that were relying on scouting reports and less objective information.

They would have been better off listening to the scouts. A comparison of the A’s preference list with the top of Baseball America’s board (compiled by reporting on scouts’ views) shows the A’s list fared significantly worse.

That wasn’t proof that models don’t work. It was proof that the A’s model needed more work; for one thing, it didn’t control very well for different offensive environments. A few years later, the Blue Jays, using their own statistical modeling, produced a truly horrific 2005 draft. Picking sixth overall, they landed only two major leaguers in a 50-round draft, and only one (Ricky Romero) who played more than 10 MLB games.

But a few years after that, the Cardinals blended statistical modeling with scouting to produce an outstanding 2009 draft class. Picking 19th, they landed 10 future big leaguers, including Matt Carpenter, Matt Adams, Joe Kelly, Shelby Miller and Trevor Rosenthal.

Many teams became much more model heavy in the following decade. Modeling isn’t going anywhere, but the quest to make those projections better and better will be never-ending.
