Captain, My Captain

Each year, the American Wine Society (AWS) sponsors two wine competitions recognizing both amateur and commercial winemaking. The wines are judged in Pittsburgh, with extra bottles sent to the annual conference to be informally evaluated in a mock competition. At the conference, tables are staffed by volunteers who are wine judges, graduates of the Wine Judging Training Program (WJTP), and/or those currently in the program. These table captains lead their table members in an evaluation of the wines. Since I had just begun the WJTP, I volunteered to be a table captain.

Called up to the front of the room, each table captain had to grab an unmarked box of wine and return to his or her table. Mine was a flight of Chardonnays. After carefully reading the instructions and opening the wines, I waited until the participants entered the room and my table filled up. In all, there were five of us at my table, one of about 20 tables altogether.

I welcomed the group and explained how a wine competition is run and what they would be doing that afternoon. Once everyone was fully briefed, we began to pour samples of the first five wines into our glasses, passing each bottle on to the next person. The next step was to taste, evaluate, and score each wine using the AWS 20-point format. After everyone had completed this process, I asked each person for his or her total score on each wine before we discussed the wines one by one. The first wine’s scores ranged from 10 to 16; one person remained an outlier at the low end throughout the scoring process. The following bottle seemed to be flawed, and our scores reflected it. We continued to taste through bottles 3, 4, and 5 in the same manner. After dumping out wines 1 through 5 (we had only five glasses each), we poured and tasted wines 6 and 7, following the same procedure. We agreed that our bottle of wine 6 was faulted and chose not to score it. At the end, the official scores were unearthed from the envelope, and we compared our average scores with the judges’.

Here is how our scores compared (our table’s average vs. the official judges’):
Wine 1 – 13.7 vs. 14.0
Wine 2 – 10.3 vs. 14.17
Wine 3 – 11.7 vs. 12.0
Wine 4 – 12.3 vs. 11.83
Wine 5 – 14.2 vs. 12.67
Wine 6 – Not scored vs. 8.67
Wine 7 – 14.7 vs. 16.0

As the comparison above shows, while our scores didn’t fully match the official judging, we were relatively close, with the exception of Wine 2; there may have been a problem with our particular bottle, since we felt it was flawed.

All in all, I thought it was a very interesting and instructive exercise, and I look forward to volunteering again in November 2010.
