## The DWTS Overscore Express

I’d been wanting to write a post about the early season DWTS scores for a few seasons now. Thanks to a tweet from @reaganvincennes, I was prompted to finally do it. She said, “Seriously, though the judges were on the overscore express.” I felt the same way, so I wanted to dig into the numbers to see whether the judges were doing something different over the past couple of seasons than they’d done previously.

I started by looking at the range of scores from the first week of each season of Dancing with the Stars:
Season 16 – 12-24
Season 15 – 17-24.5
Season 14 – 20-26
Season 13 – 14-22
Season 12 – 13-24
Season 11 – 15-24
Season 10 – 14-25
Season 9 – 18-27 (Week 1 had a relay so I used week 2)
Season 8 – 13-24
Season 7 – 12-23
Season 6 – 15-27
Season 5 – 16-26
Season 4 – 13-24
Season 3 – 12-26
Season 2 – 12-24
Season 1 – 13-20

The list actually shows that DWTS has been pretty consistent about skewing the scores high in almost every season. In fact, season 14 was skewed abnormally high for all of the dancers. This was a bit of a surprise to me. However, from the above list you can also see that the early season scores have never gone below 12 and never gone above 27.

This illustrates my issue with the early scoring really well (I call it overscoring). The show basically gives the good dancers no room to improve their scoring. The only exception was season 1, where there was at least a little wiggle room for scores to improve as a star’s dance skills improved over the season.

There are a lot of ways to look at the scoring. One is to consider a score of 0 as someone who can’t dance and a score of 10 as a ballroom professional. However, if that were the case, then almost no one on the show should score above a 6 or below a 4. Instead, if you broaden the definition of what a score of 10 or 0 means, then it’s easier to score the dancers. For example, a score of 0 would be the worst dancer that’s ever been on DWTS (We all know this is Master P). A score of 10 would represent the best dancer that’s ever been on DWTS (I’ll let you choose who you think is the best, and it definitely wasn’t their first dance of the season). This shift in perspective would help the scoring make a lot more sense.

It would mean that someone could strive to become a 10 over the arc of the season, as opposed to being nearly perfect at the start. As it is now, where do Amber & Derek have to go with their score of 27? Certainly they could score lower and then come back up, but do we really think their dance deserved 9’s compared to the best who have ever graced the DWTS floor?

Take a look at the scoring from the premiere episode of DWTS Season 17:
Amber & Derek – 27
Elizabeth & Val – 24
Corbin & Karina – 24
Jack & Cheryl – 23
Nicole & Sasha – 23
Brant & Peta – 22
Christina & Mark – 22
Valerie & Tristan – 21
Leah & Tony – 21
Bill E. & Emma – 18
Keyshawn & Sharna – 17
Bill N. & Tyne – 14

There’s a simple way to adjust these scores to better reflect the scoring framework I mentioned above. Let’s imagine that the worst dancer on the premiere (Bill Nye) was scored just above Master P and received a score of 3 instead of his 14. Now we’ll assume the judges’ scores did rank each couple properly based on their performance, but were just skewed too high. Using Bill Nye as the benchmark, we subtract 11 from each score and get the following results:
Amber & Derek – 16
Elizabeth & Val – 13
Corbin & Karina – 13
Jack & Cheryl – 12
Nicole & Sasha – 12
Brant & Peta – 11
Christina & Mark – 11
Valerie & Tristan – 10
Leah & Tony – 10
Bill E. & Emma – 7
Keyshawn & Sharna – 6
Bill N. & Tyne – 3
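The adjustment above is just a constant offset, chosen so the lowest-scoring couple lands on whatever floor you pick. Here’s a minimal sketch in Python (the floor of 3 comes from the Bill Nye benchmark above; the `rescale` function name is mine):

```python
# Premiere scores from DWTS Season 17, as listed above.
scores = {
    "Amber & Derek": 27, "Elizabeth & Val": 24, "Corbin & Karina": 24,
    "Jack & Cheryl": 23, "Nicole & Sasha": 23, "Brant & Peta": 22,
    "Christina & Mark": 22, "Valerie & Tristan": 21, "Leah & Tony": 21,
    "Bill E. & Emma": 18, "Keyshawn & Sharna": 17, "Bill N. & Tyne": 14,
}

def rescale(scores, floor=3):
    # Offset so the lowest score lands on the floor: 14 - 3 = 11,
    # matching the "subtract 11" adjustment in the text.
    offset = min(scores.values()) - floor
    return {couple: s - offset for couple, s in scores.items()}

adjusted = rescale(scores)
print(adjusted["Amber & Derek"])   # 16
print(adjusted["Bill N. & Tyne"])  # 3
```

Because every score moves by the same amount, the judges’ ranking of the couples is untouched; only the headroom changes.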

This provides a much better range of scoring: 3-16. Now every couple on the show has room for their score to grow as they grow over the season. Why the producers don’t see the value of this small change is beyond me. Imagine the storyline if Keyshawn could go from a 6 in the first week to a 27 later in the season. That kind of storyline isn’t even possible with the overscoring that happens on the DWTS premieres.

We all know that late in the season the scoring gets weird. You have two dancers who share the same score even though they clearly aren’t equally skilled. The reason is that the early high scoring leaves the judges no place to go with the good dancers. If they started the scores lower, they could give Bill Nye better scores without the scoring implying he’s as good a dancer as Elizabeth.

I think the reason they don’t do this is that stars have egos and images, and the producers don’t want to damage those. Getting 1’s or 0’s on your first dance would hurt. But is it any worse than getting a 4 when everyone else is getting 7’s, 8’s and 9’s? Getting 1’s when the top couple is getting 5’s isn’t much different. And the scoring could still tell the story of Bill Nye dancing better than Elizabeth if that’s what actually happens (unlikely, but remember how Ty Murray grew over his season?).

Of course, I’m sure Heidi can dig into how the above shift in scoring would impact the value of audience voting. If my math is right, it would give the higher-scoring couples more of an advantage over the lower-scoring ones, making it harder for someone scoring lower to “upset” someone who scored higher. Maybe that’s the other reason producers like the high scoring. Then again, at the end of the season the current scoring makes the judges’ scores basically irrelevant, since there’s no room for dancers to differentiate themselves based on those scores.
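The reason a constant shift favors higher scorers falls out of the arithmetic: if, as is commonly described for DWTS, each couple’s judges’ total is converted into a share of all judges’ points before being combined with audience votes, then subtracting the same number from every score widens the gaps between those shares. A rough sketch using just the premiere’s top and bottom couples (the two-couple simplification is mine, purely for illustration):

```python
# Each couple's fraction of the total judges' points, before and after
# the constant shift. Assumption: shares of total points are what get
# combined with audience votes.
raw = [27, 14]       # Amber & Derek vs. Bill Nye, premiere scores
shifted = [16, 3]    # same couples after subtracting 11

def shares(scores):
    total = sum(scores)
    return [s / total for s in scores]

print([round(x, 3) for x in shares(raw)])      # [0.659, 0.341]
print([round(x, 3) for x in shares(shifted)])  # [0.842, 0.158]
```

The top couple’s slice of the judges’ pie grows from roughly two-thirds to well over four-fifths, so the audience would have to swing harder to overturn the judges’ ranking.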

What do you think? Is this a good idea? I’m not saying the judges’ scoring is wrong (that’s a topic for another discussion). What I am saying is that they’re using the wrong scale. A small shift in the scale would make the judges more relevant and interesting as the season goes on.