Moritz Seider, The Model and how usage affects an NHL player’s on-ice numbers

Dec 21, 2022; Detroit, Michigan, USA; Detroit Red Wings defenseman Moritz Seider (53) handles the puck against Tampa Bay Lightning right wing Nikita Kucherov (86) during the first period at Little Caesars Arena. Mandatory Credit: Brian Bradshaw Sevald-USA TODAY Sports
By Dom Luszczyszyn
Mar 28, 2024

It was Red Wings vs. Lightning, January 21. By many eyewitness accounts, it was a Moritz Seider masterclass in shutdown defense. He went toe-to-toe with Nikita Kucherov for 86 percent of his ice time and lived to tell the tale, keeping the shot and goal count even. 

That’s the job of a No. 1 shutdown defenseman and he aced it.

The postgame reports analyzing his work through raw data weren’t nearly as complimentary, though. That’s been an increasingly common occurrence through the back half of this season: Red Wings fans swear by Seider’s performance, but the numbers don’t back it up. Top of their hearts, bottom of the charts.

Usually when there’s a gap between the two schools of thought the truth lies somewhere in the middle. In this case, there’s something clearly missing in the equation that shifts the balance toward what many saw that night. 

Seider faced off against Kucherov for roughly 15 of his 17 minutes at five-on-five, one of the most difficult defensive assignments any player has had to face this season. What should his numbers look like?


This season Seider has a Defensive Rating of minus-6.9. That means Detroit is estimated to have allowed seven more goals compared to an average player because of Seider, an extremely porous number that ranks 181st out of 189 regular defenders. 

That’s putrid for any defenseman and especially awful for someone many see as a legitimate No. 1 guy. It’s why many analysts view Seider as overrated for what he brings to the table. When he’s on the ice, the Red Wings surrender 0.3 more expected goals against per 60 and 0.1 more goals against per 60. That’s not good on a team that ranks bottom five in both respects. It’s why Seider’s defensive value comes out looking so poor despite his reputation.

What any Red Wings fan will tell you is that Seider’s game against Kucherov was not an isolated incident. He’s fed to the wolves on a nightly basis, more so than any other player in the league. Detroit chases matchups more than any team and that puts Seider immediately behind the eight ball. It’s hard for a player to put up good numbers when they’re constantly facing designed adversity.

The question is just how far Seider’s difficult usage sets him back.

The answer? Roughly four goals per 82 games, by my estimation anyway. Because of his usage, the baseline for someone playing minutes as difficult as Seider's over 71 games is an expected Defensive Rating of minus-3.6. Right off the bat, Seider starts with what would be the 32nd-worst Defensive Rating in the league based entirely on factors outside his control. It's the largest burden any defenseman has faced this season.

Seider’s difficult usage may not excuse his numbers entirely, but it does paint him in a much kinder light. He’s still struggling under the weight of what’s expected of him, but he’s far from being one of the absolute worst defensive defenders in the league, as many models (including my own) make him out to be. Seider is tasked with that role for a reason.

Context matters but has always been extremely difficult to tease out properly. There’s a reason quality of competition has been the white whale of the hockey analytics community since its inception. It’s almost always why there’s a great divide between what people see and what’s measured.

The problem with figuring out the importance of quality of competition has always come down to “quality.” 

Past attempts to suss out quality of competition suggested it didn’t matter as much as many believed because the range between the toughest and easiest minutes appeared trivial. Take Detroit as an example. Seider’s average opponent plays 14.8 even-strength minutes per night while Detroit’s most sheltered defensemen face an average opponent who plays only one minute less per night. It’s a virtually insignificant gap.
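For the technically inclined, here's roughly what that traditional ice-time measure looks like in code. The column names are invented for illustration, not pulled from any specific data source:

```python
import pandas as pd

# Sketch of the traditional ice-time-based quality of competition measure.
# One row per opponent; column names are placeholders for illustration.
def toi_weighted_qoc(matchups: pd.DataFrame) -> float:
    """Average opponent even-strength TOI per game, weighted by how much
    the player was actually matched up against each opponent.

    Expected columns:
      'toi_against'          - minutes spent head-to-head with that opponent
      'opp_es_toi_per_game'  - that opponent's average even-strength minutes
    """
    w = matchups["toi_against"]
    return float((w * matchups["opp_es_toi_per_game"]).sum() / w.sum())
```

By this yardstick, Seider's 14.8-minute average opponent and a sheltered defenseman's roughly 13.8-minute average opponent look nearly interchangeable, which is exactly the dilution described below.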

That narrow divide is misleading about what’s actually happening, though, a product of how quality is being measured.

Ice time doesn’t cut it because it treats very different players equally, especially if forwards and defensemen are lumped together. Other methods aren’t much better. Bucketing players into groups creates the same problem, and anything based on a per-60 rate treats all roles equally, which exacerbates it. Each method is also based only on current-season metrics, which introduces unwanted volatility into the mix. And none of the above separates offense from defense, which doesn’t help matters.

None of it works for its intended goal, which causes the apparent dilution, making it appear as if the range between the toughest and easiest assignments is smaller than it actually is.

In reality, the difference is much larger than previously estimated now that we have a better understanding of player quality. For Seider, the difference over 82 games is worth roughly six goals over what Detroit’s bottom pair faces — or one win in the standings.

That’s a big deal.


A one-win range is a massive deviation from a one-minute range, one that obviously warrants further explanation. Here’s how we got there.

For starters, our measure of quality was each player’s projected Offensive Rating and Defensive Rating — with one separate quality of competition measurement for each. That means looking at how good a player’s average opponent is offensively and how good they are defensively. 

The key here is using projected values — which use three seasons of data weighted by recency — for a more stable measure of quality compared to using just a player’s work this season. The latter would prefer facing Matthew Tkachuk to Sam Reinhart, as one example, with Reinhart having the stronger season. The former would view Tkachuk as the bigger threat thanks to his body of work. It’s also vital that the measure of quality is centered on a player’s total per-game value rather than per-60 efficiency. Players who earn similar per-minute value on the first and third lines are not the same.
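As a rough illustration of the projection idea, something like the sketch below would do. The model's actual weights aren't published here, so the 50/30/20 split is purely an assumption:

```python
# A minimal sketch of a recency-weighted projection. The article says the
# projections use three seasons weighted by recency; the 50/30/20 split
# below is an assumption for illustration, not the model's actual weights.
def projected_rating(season_ratings: list[float],
                     weights: tuple[float, ...] = (0.5, 0.3, 0.2)) -> float:
    """Blend up to three seasons of a rating, most recent season first."""
    ratings = season_ratings[:3]
    w = weights[:len(ratings)]
    return sum(r * wi for r, wi in zip(ratings, w)) / sum(w)

# Hypothetical numbers: a down year on a strong track record still projects
# ahead of a single breakout season.
print(projected_rating([10.0, 22.0, 24.0]))  # 16.4: established star, off year
print(projected_rating([18.0, 6.0, 4.0]))    # 11.6: breakout, thin track record
```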

From that point, it’s a matter of looking at how usage in one direction affects each player’s value at the other end. That means quantifying how the average Offensive Rating faced affects a player’s Defensive Rating and vice versa for a player’s Offensive Rating. 

It’s the same process used for this article about Connor Bedard, only this time it’s for every player on a game-by-game level over the previous two seasons. The idea is to compare any deviation in a player’s game rating relative to his season average to any deviation in his competition quality. That’s nearly 100,000 data points to work with, all aimed at answering how much a change in competition quality influences a change in a player’s value in a single game. How much of a player’s results can be explained by role?
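Conceptually, the regression looks something like the sketch below. The column names are placeholders and the setup is heavily simplified, but the within-player deviation idea is the core of it:

```python
import pandas as pd

# Conceptual sketch of the game-by-game regression. One row per player-game;
# column names are placeholders. For each player, we compare how far a single
# game's rating deviates from his season average against how far that game's
# competition quality deviates from its season average.
def competition_effect(games: pd.DataFrame) -> float:
    means = games.groupby("player_id")[
        ["game_rating", "avg_opp_off_rating"]
    ].transform("mean")
    dy = games["game_rating"] - means["game_rating"]
    dx = games["avg_opp_off_rating"] - means["avg_opp_off_rating"]
    # Least-squares slope of rating deviation on competition-quality deviation
    return float((dx * dy).sum() / (dx ** 2).sum())
```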

That kind of data is difficult to come by which is why it’s taken such a long time for me to look into this. Thankfully, I got a massive assist from Cole Palmer of HockeyStatCards.com who was able to make all of this possible by pulling the necessary data from Natural Stat Trick.

For Seider’s defense in particular, it meant facing an average skater with an Offensive Rating of 3.1. That’s the highest mark in the league.

But that doesn’t necessarily mean the effect on his Defensive Rating is the same size. Based on the regression between the two factors, facing a collection of skaters whose average Offensive Rating is 3.1 equates to around a 0.05 deduction per game for Seider. Over 82 games, that adds up to a deduction of just over four goals relative to an average defenseman. The measured effect is actually larger than the average quality of the players faced; for defensemen, it’s not a one-to-one trade-off (though it is close to that for forwards).
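The arithmetic from per-game effect to season total is simple enough to show directly, using the numbers above:

```python
# Straight from the figures in the text: a 0.05 goal-per-game deduction,
# carried over a full season, relative to an average defenseman.
per_game_deduction = 0.05
print(-per_game_deduction * 82)  # -4.1 goals over 82 games
```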

It’s also worth noting the average Offensive Rating faced for all players is plus-one. That should make sense — better players play the most and every player does get a sniff against them — but also means anything below that would deliver a positive expected Defensive Rating. That’s where most of the league’s third-pair defensemen reside and why it’s important to measure the actual effect. Shayne Gostisbehere may seem like he faces an average group of offensive players with a collective Offensive Rating of 0.1, but that equates to an expected Defensive Rating of plus-1.7 — one of the most sheltered situations in the league.

Seider faces the most grueling minutes at the top along with his frequent partner Jake Walman, but he’s not alone. Every team has a pair being tasked with the toughest assignments each night, enough to drag their Defensive Rating down by a significant margin — a margin that’s larger than the average of who they actually face. That makes for a sizeable adjustment that’s previously gone unaccounted for by my model and many other similar ones.

It’s also not the only adjustment that needs to be made. Who a player plays with also matters.


Teammate quality is a lot trickier to deal with in this instance because it’s not as simple as taking the average and calling it a day. Good players play with other good players and are expected to. On a good team, there are a lot more of those to go around, too. The opposite is true for bad teams. 

For a model that aims to isolate player ability and add it all back together to measure roster quality, using unfiltered teammate quality would bring the best and worst teams a lot closer to average where they don’t belong. There’s a balance to be struck to make sure all the values add together properly, while still giving appropriate credit to whoever drives the bus. Call it “The Michael Bunting Effect.”

To solve that, I created an expected quality of teammate metric based on how good the player is and how strong his team is. 

Here’s how that works for Connor McDavid. His average teammate has an Offensive Rating of plus-9.5, one of the highest marks in the league — but what should we expect from a bunch of guys who get to play with the best player in the world? Based on McDavid’s own Offensive Rating of plus-29.3 and his team’s average of 4.3, McDavid’s expected quality of teammate has an Offensive Rating of plus-8.9. 
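To be clear about what that means mechanically, here's a deliberately simplified version. The weights are invented purely so the McDavid example lands near plus-8.9; they are not the model's actual coefficients:

```python
# A simplified stand-in for the expected-teammate-quality idea. The weights
# are invented so the McDavid example lands near plus-8.9; the model's
# actual formula isn't published in the article.
def expected_teammate_rating(player_rating: float, team_avg_rating: float,
                             w_player: float = 0.2, w_team: float = 0.7) -> float:
    return w_player * player_rating + w_team * team_avg_rating

print(expected_teammate_rating(29.3, 4.3))  # 8.87, near the plus-8.9 example
```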

That’s still lower than where McDavid actually is, meaning he does benefit slightly from strong teammates this season, to the tune of 0.6 goals. The forward who benefits most this year is Matthew Knies, which shouldn’t be surprising. His offensive teammate-quality benefit comes in at 4.3 goals, one more than Bunting’s last season.

What’s interesting here is that while the absolute gaps in the offensive quality a player plays with are larger than the gaps in who he plays against, the effect size isn’t. Using the same methodology as for opponent quality, I found that the effect size for teammate quality is only about a third of what the raw gap in teammate quality would dictate. It’s the opposite of what was measured for competition quality. For Knies, that means the expected boost to his Offensive Rating only comes in at plus-1.5 goals.

That doesn’t necessarily mean who a player plays against is more important than who they play with. It more likely means that for the purposes of adjusting the model, the effect that competition has carried is a much larger blind spot.

The goal of looking into all this was to limit those blind spots as much as possible. While it may not be perfect (no model is!), the hope is that these adjustments are a strong step toward bringing things closer to reality.

That said, if a bigger adjustment were necessary for teammate quality, it would’ve shown up in the regression the way it did for opponent quality. That it didn’t is perhaps an important lesson, especially a few days after we saw Zach Hyman score 50 goals. Of course, a lot of that has to do with playing with Connor McDavid, but if scoring 50 goals were a given based purely on playing with McDavid, Edmonton would’ve had a lot of other 50-goal scorers over the last decade. Jesse Puljujarvi never surpassed 15.

This isn’t a call to blindly trust this model or any other. It’s just to say that when we’re trying to apportion credit between a line or pair, the supporting player is often doing more than many give him credit for — even if he’s not the one driving the bus.

Now let’s put all the pieces together.


To adjust a player’s Offensive Rating, we look at the average Offensive Rating of his teammates and the average Defensive Rating of his opponents. For a player’s Defensive Rating, we look at the average Defensive Rating of his teammates and the average Offensive Rating of his opponents.

In both cases, the average offense is what carries the largest effect, which is why it’s been the primary focus here, but the defensive side matters too, even if its ranges are much smaller (possibly because they don’t include the adjustments made here). Put them both together and each player has an expected Offensive Rating and an expected Defensive Rating based on his usage.
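Put in sketch form, the defensive half of that baseline looks something like the function below, with stand-in slopes where the fitted effect sizes would go (the offensive half is the mirror image):

```python
# A sketch of the usage baseline, not the model's actual functional form.
# Opponents with above-average offense drag the expectation down; teammates
# with above-average defense lift it. The slopes are stand-ins for the
# regression-fitted effect sizes discussed earlier.
def expected_defensive_rating(avg_opp_off: float,
                              avg_tm_def: float,
                              opp_slope: float = 1.9,
                              tm_slope: float = 0.6,
                              league_avg_opp_off: float = 1.0) -> float:
    """Expected Defensive Rating per 82 games from usage alone."""
    return (-opp_slope * (avg_opp_off - league_avg_opp_off)
            + tm_slope * avg_tm_def)

print(expected_defensive_rating(3.1, 0.0))  # about -4, near Seider's minus-4.1
print(expected_defensive_rating(0.1, 0.0))  # about +1.7, the Gostisbehere case
```

As a sanity check, a slope in the neighborhood of 1.9 lands close to both the Seider (minus-4.1) and Gostisbehere (plus-1.7) figures quoted earlier, though the fitted relationship is surely less tidy than a single straight line.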

It’s their starting point, one that creates more equity in player evaluation by acknowledging that some situations are a lot easier than others. That plays a far bigger role than many in the analytics community previously gave it credit for. 

As previously mentioned with Seider, his expected Defensive Rating is minus-4.1 per 82 games. It’s the lowest mark of any player in the league. It’s also nearly two goals lower than the average No. 1 shutdown option on each team whose expected Defensive Rating is minus-2.2. Here’s how it breaks down by lineup slot.

Being the guy with the toughest job doesn’t automatically make him good at that job, though. The range for what to expect from the top and bottom of the lineup is now much wider to contextualize results better — but it still doesn’t excuse those results completely. It’s all about what’s expected of a player and whether he’s delivering above that line.

That was the case with Seider last year when he was also tasked with the league’s most daunting role, this time with an expected Defensive Rating of minus-3.4. The difference is that his results, a minus-2.0 Defensive Rating, were above his expected line, meaning he was succeeding in his extremely challenging role.

That’s not the case this season, even after a very aggressive adjustment for usage. It adds much-needed context that makes Seider’s numbers look a lot more palatable, but it still suggests the burden he faces is too large to handle.

These adjustments may not be perfect, but they’re a step in the right direction toward an aspect of the game that’s too often ignored by analysts. Some jobs are easier than others and that needs to be better acknowledged when comparing numbers. 

This year, no player has it tougher than Seider — and that’s finally being accounted for.


For those who are curious, I’ve provided a chart for each team that shows each player’s expected Offensive and Defensive Ratings this season over 82 games. 

Only players who have played 25 games or more are shown and are grouped with their current teams.

Data via Natural Stat Trick courtesy of Cole Palmer

(Top photo of Moritz Seider and Nikita Kucherov: Brian Bradshaw Sevald / USA Today)

Dom Luszczyszyn

Dom Luszczyszyn is a national NHL writer for The Athletic who writes primarily about hockey analytics and new ways of looking at the game. Previously, he’s worked at The Hockey News, The Nation Network and Hockey Graphs. Follow Dom on Twitter @domluszczyszyn