D III Women's Volleyball

A source for NCAA Division III women's volleyball info and opinion, with a focus on the Midwest and Central regions

2017 Week 1 regional rankings: Gusties, Titans at the top

The first regional rankings of the 2017 season have been published.

Central Region Week 1
1. Gustavus Adolphus
2. Northwestern
3. Washington-St. Louis
4. St. Thomas
5. Wartburg
6. Saint Benedict
7. Dubuque
8. Luther

Midwest Region Week 1
1. Illinois Wesleyan
2. Chicago
3. Wisconsin-Whitewater
4. Millikin
5. Wisconsin-Eau Claire
6. Aurora
7. Wisconsin-La Crosse
8. Wisconsin-Stevens Point

Top four in other regions
Great Lakes: Wittenberg; Ohio Northern; Otterbein; Calvin.
Mid-Atlantic: Carnegie Mellon; Johns Hopkins; Stevenson; Juniata.
New England: Johnson & Wales; MIT; Wesleyan; Babson.
New York: Vassar; Ithaca; Stevens; Clarkson.
South: Berry; Emory; Averett; Randolph-Macon.
West: Colorado College; Claremont-Mudd-Scripps; Whittier; Trinity (Texas).

Sorry for the late post. In the end I decided to forgo the long response I wrote in regard to these initial rankings.

-Ricky Nelson

Written by Ricky Nelson

October 18, 2017 at 12:08 am

Posted in Uncategorized

14 Responses


  1. If anyone is looking for a long response on the West Region, click my name to get to my site. The West rankings were interesting with Whittier coming in higher than I originally thought. (I had them at 5.) CMS ahead of Trinity and SU was also a surprise.

    Looking forward to more words from Ricky!!!!!


    October 18, 2017 at 5:01 pm

  2. rbrochelle,
    Frustration and disappointment are not great starting blocks for a rational response.

    I’ve written many times what I would write again today.

    I adamantly oppose what I call the Criteria Scorecard. We don’t need committees if the default is the Criteria Scorecard. Committees are tasked with determining criteria valuations.

    There is intent and purpose behind the wording of the criteria. If you are like me and see obvious and implied hierarchies among the criteria, you would expect those hierarchies to show in the rankings. If head-to-head is just another factor, why even play? If Team A beats Team B, to my way of thinking Team B has a lot of ground to make up in that direct comparison. How much ground is up to the committee.

    I understand that there is not a natural, linear progression of results that makes ranking teams simple. And if I hear one more time about an RPI justifying rankings … an RPI in our division would be a redundancy of two of the five primary criteria — strength of schedule, winning percentage — that are nowhere near as important to me as HTH results, results versus common opponents or results versus ranked opponents. However, there are debatable, theoretical threshold numbers in terms of winning percentage and SOS that teams should have to meet, along with the question of how many wins would theoretically cancel out X percent of an SOS deficit.

    I also sense that people may miss the all-important word “results” in the criteria. It’s not “winning percentage” versus ranked opponents or common opponents. It’s much more nuanced than that. Down the road, a 10-0 record versus ranked opponents could be less impressive than, say, a 7-4 mark. Also, a win against the New England No. 9 is not the same as one over the West No. 3. But those specific arguments are for future weeks.

    Anyway, using an RPI would double the impact of winning percentage or, on the flip side, halve the importance of head-to-head and the other — more directly meaningful, in my opinion — criteria.

    See? Those are the same words I’ve written many times before. I simply need someone to tell me why I’m wrong.

    -Ricky Nelson

    Ricky Nelson

    October 18, 2017 at 5:54 pm

  3. I’m with you, brother. I have a post ready for tomorrow to discuss Ranked Wins, and it’s like you stole my example (although I used New England’s #8).

    It seems like every year the head-to-head win is the least valued criterion, and it makes no sense to me. SOS was a commanding force in the West RAC rankings, too.


    October 18, 2017 at 6:49 pm

  4. Winning % and SOS are valuations of a team’s entire body of work. HTH is a snapshot of how teams compared on 1 day. With teams playing 20-40 matches in a season, how would you justify making one match that much more important than all of the others? Also, you often get loops, where Team A beats Team B, who beats Team C, who beats Team A. Then people are outraged that a team is ranked behind a team they beat – no kidding. That was inevitable. It’s just a matter of which fan base gets riled up.


    October 19, 2017 at 12:23 am

  5. Noreaster,
    Winning percentage and SOS are manipulable figures. Results are not. When directly comparing two teams, I’d rather rely on a verifiable truth than a biased equation as a deciding factor.

    An example this week would be Gustavus and Northwestern.
    Similar winning percentages.
    Definite SOS advantage for Gustavus, which is a given due to their conferences.
    GAC beat St. Thomas and Eau Claire, lost to Millikin.
    UNW beat Millikin, split with St. Thomas, lost to Eau Claire.
    GAC signature wins: Stevens Point; Eau Claire, Chicago; La Crosse; Luther; Wartburg; St. Thomas.
    UNW signature wins: Stevens Point; La Crosse; Wartburg; St. Thomas; Millikin; Saint Benedict; Gustavus.

    Seems close. The decisive factor in this case is SOS instead of HTH? I disagree.

    -Ricky Nelson

    Ricky Nelson

    October 19, 2017 at 1:03 am

  6. Noreaster,
    You made me think about a team’s body of work. That is true as a slogan, but we are talking about the elite teams. The body of work for many regionally ranked teams boils down to nonconference play. In fact, Northwestern’s body of work is lessened by its conference. That is true for several regionally ranked teams that don’t get SOS bounces from playing in the UAA, WIAC, etc.

    The SOS is flawed. There have been proposals to separate nonconference SOS from conference SOS in the criteria. I support that. As it stands now, SOS is barely usable as a tool to separate teams within a region. On selection day, raw SOS numbers without layers of context are questionable at best as tools.

    Northwestern’s SOS and winning percentage (“body of work”) include a couple conference testers, a great nonconference slate and about 14 matches that require the Eagles to get off the bus. Northwestern will also have a below-average SOS for an elite team despite playing St. Thomas twice, Stevens Point, La Crosse, Eau Claire, Wartburg, Gustavus, Millikin, Saint Benedict, Whitewater, Bethel and Carthage. Seriously, what else can Northwestern do to improve its overall SOS? The general SOS problem is that Northwestern is not an outlier. Several other elite programs are never going to have upper-tier SOS figures either.

    What else can Northwestern and some others do about their SOS? Change conferences? Have the conference play a single round-robin and fill another eight nonconference dates with Minnesota teams from Duluth to Morris to New Ulm via, I don’t know, winning the lottery? Or … how about the various committees not rely on a biased SOS number, and then weigh results, in a general sense, more highly. One of those options makes more sense than the others.

    So, yes, some matches are merely disregarded for the most part unless it involves a “bad loss.” Does anyone seriously think La Verne’s wins over Caltech this year matter? Yet they do matter, in a net-negative way (and I will debate anyone who says Caltech, Mount Mary, etc., are bad for D III by the way). Some matches are more important than others and, in some specific cases, one match could be the most important. That is nothing new to any sport at any level. Calvin vs. Olivet is not the same as Calvin vs. Hope. Emory vs. Brandeis is not the same as Emory vs. Washington-St. Louis. Cubs vs. Cardinals is not the same as Cubs vs. Rays. Nor should they be.

    SOS also involves an element of luck. I suppose some match results involve luck as well, but not in a systematic way. For example, a team that two years ago scheduled Dubuque for this season must be pleasantly surprised with the SOS boost while regretting the result.

    That’s enough typing for one comment. I sincerely appreciate your sentiment, Noreaster. I just don’t agree with it.

    -Ricky Nelson

    Ricky Nelson

    October 19, 2017 at 3:26 am

  7. First, you are looking too closely at GA and UNW. Look at the data sheet. GA has a higher SOS and a higher Win %, so move on to the next comparison. Honestly, it wouldn’t surprise me if you spent more time on that comparison than the entire committee combined. And it isn’t all about SOS, because UNW is ahead of Wash U despite being 80 points behind them on SOS. Which proves the second point: UNW doesn’t have to have a stronger SOS than GA, but you can’t lose more against a weaker schedule and be upset that you aren’t #1. A weaker schedule means you better win the big ones. And BTW, the UNW head coach is on the committee, so I’m sure she knows exactly how and why she is where she is.


    October 19, 2017 at 8:17 am

  8. Hi Ricky, it is only the first poll, or at least that is what I am trying to tell myself. I did destroy a paper cup when I saw the NE rankings!

    The addition of nonconference SOS shows a number of people agree with you, as do I, that strength of schedule in total can be tricky. They can call it a secondary criterion, but if it is on the data sheet, people will use it from the get-go. The phrase “not listed in priority order” is frustrating.

    I think results versus common opponents should be the starting point for comparing A to B, since it involves more data points and gets away from the “1 day” problem. I also think a “big” win can trump a head-to-head loss. But, that said, I destroyed the paper cup because the head-to-head criterion was ignored!

    Noreaster did you like the NE list? I thought 2 schools were misplaced by several spots.


    October 19, 2017 at 9:58 am

  9. Noreaster,
    Everything you wrote makes me think you did not read anything I wrote. Your first four sentences do not do justice to the ranking process. That should not be how it works. At least I hope the committee takes more care with the task than that. Do you think the committee really just looks at the two most questionable criteria and moves on? Ugh, let’s hope not.

    You start by determining the top team in a region. That comes down to Northwestern and Gustavus. Even if you don’t care about the process and merely use two manipulable criteria among the five primary ones, GAC has SOS for the biased reasons I detailed. Winning percentage is .870 to .867. You are being purposely obtuse if you think those percentages either don’t need much closer examination or are not a wash.

    -Ricky Nelson

    Ricky Nelson

    October 19, 2017 at 10:14 am

  10. NED3VballFan,
    I am not entirely sure why the proposed separated SOS would be controversial. As a division, we should be striving for more useful information. I think a separate SOS is useful information. When I was writing the national column at D3baseball.com, the national chair at the time went on record over the span of years bemoaning the few stringent, sometimes-flawed criteria we use in the division.

    Links to those interview transcripts for anyone interested in a national chair’s and a director of championships’ perspectives:

    I actually like that there is not a stated order of importance for the ranking/selection criteria. I doubt we’d need committees if there were an order of importance. The art of the process is most interesting to me. I can understand not liking it though.

    Send me an invoice, and I’ll make sure my people get you a new cup. We can all use a little extra help when dealing with collateral damage.

    -Ricky Nelson

    Ricky Nelson

    October 19, 2017 at 4:14 pm

  11. NED3VballFan,
    One more point that you did not initiate, but I’ll reply to your comment that touched on it.

    This “one-day” nonsense is based on flawed logic. Of course one match can, and often does, matter more than all others. There are hinge points (HTH, important common-opponent results, etc.) in every set of rankings that has ever been released.

    I would love to be in a gym if a No. 1 seed loses to a No. 4 seed during the semis of a one-bid conference tourney and the No. 1 team says, “But … but what about our body of work?”

    -Ricky Nelson

    Ricky Nelson

    October 19, 2017 at 5:31 pm

  12. Last post for me in this article. I need to pace myself for 17 more days of this. I read the baseball articles. It is a hard job and there is no one answer, as we already know. One technical note I took out of that: he said in one interview that the regional chair does not vote at the regional level. Is that also true in volleyball, I wonder?
    Lastly, something I have mentioned to RR and what I think will be my all-encompassing answer to the rankings/selections we cannot figure out: advocacy. At the end, someone is going to argue better/longer/harder for Team A than someone else is going to do for Team B.


    October 20, 2017 at 12:41 pm

  13. NED3VballFan,
    Chairs do not vote in the weekly rankings. However, chairs have the responsibility to amend rankings that stray from the criteria.

    Your point regarding advocacy is well taken.

    -Ricky Nelson

    Ricky Nelson

    October 20, 2017 at 2:02 pm

  14. NED3VballFan,

    I have no particular issue with the NE rankings. My guess is you think J&W is too high, but that is an instance where they did honor the head-to-head, which probably pumped them up about 4-5 spots versus if they hadn’t played MIT at all. And the New England Challenge, plus NE 1 and 3 at MA 4 and 7, has the potential to shake things up for week 2. It’s early.


    October 21, 2017 at 12:07 am

Comments are closed.
