Judging factors are the criteria judges use to evaluate a competitor or entry, with a set of "standards" serving as guidelines for what to judge. A collection of factors is grouped into each class, and each factor is given a relative weighting to reflect its comparative importance within that class.
Sample Class
In the sample class below, you'll see we are using 8 judging factors: the first seven are weighted at 12% each and the last at 16%, so the weights total 100%. The factors used in this class can be reused in other classes.
Judging Factors Explained
You can manage your judging factors here in the Judging Factor Library. Clicking on any factor allows you to make changes and assign scoring standards to it. Most factors need just ONE standard (e.g. SCORE or EXECUTION), but you are free to add nuance by assigning multiple standards (e.g. precision, creativity and performance) to a single factor. Each standard within a factor gets its own judge slider so that judges can assess the entry or contestant accurately.
In the example to the left, we are using a multi-standard approach with the simple judging view. Notice that for the factor "Takeoff & Hover" the judge can award scores for precision, realism and scale speed. Although the event host could have simply assigned ONE standard (e.g. "Score" or "Execution") to the Takeoff & Hover factor, having THREE standards makes the judge's analysis more nuanced and helps guide the judge through the rubric with more specificity.
Notice also that every slider/standard has a range of 0-10. The system enforces a single scoring range for EVERY standard/factor combination in a class so that the normalization calculations remain accurate. To vary the importance of one factor relative to another, use the weighting percentages, which you can assign within the CLASS tool.
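If it helps to picture the structure, here is a minimal sketch (hypothetical names and layout, not LiveJudge's actual data model) of how a class, its weighted factors, and their standard sliders fit together, with every slider sharing one 0-10 range:

```python
# Hypothetical sketch of the class/factor/standard relationship (illustrative only).
from dataclasses import dataclass, field

SLIDER_MIN, SLIDER_MAX = 0, 10  # one shared range for every standard slider in the class

@dataclass
class Factor:
    name: str
    weight_pct: float                                            # relative importance within the class
    standards: list = field(default_factory=lambda: ["Score"])   # most factors need just one standard

@dataclass
class JudgingClass:
    name: str
    factors: list

    def weights_total(self) -> float:
        return sum(f.weight_pct for f in self.factors)

# Example using the "Takeoff & Hover" factor described above
sample = JudgingClass(
    name="Sample Class",
    factors=[
        Factor("Takeoff & Hover", 12, ["Precision", "Realism", "Scale Speed"]),
        # ...the remaining factors, weighted so weights_total() comes to 100
    ],
)
print(sample.weights_total())   # 12 (only one of the class's factors is shown here)
```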
Managing Judging Factors
The Judging Factor Library lets you create, modify and remove judging factors at any time. Your judging factors will always be available for use across events.
For additional guidance on judging factors and standards, review these best practices.
Why Using The Same Point Scale for Different Factors Works Best
In many competitions, judges have a score sheet that instructs them to award, say, "up to 15 points" for one element and "up to 5 points" for another. This methodology evolved because when judging on paper, it's considerably more difficult to assign percent weighting to each element. The problem with this approach lies with the human mind. We intuitively understand a rating scale of 1-10. It's more difficult to conceptualize a 1-15 scale or a 1-5 scale, especially when those scales are interspersed among multiple judging factors. In the LiveJudge system, we allow you to assign a SINGLE scale (say, 1-100) that will apply to EVERY standard slider in any judging factor. This makes it far easier for a judge to assign scores. The software will apply your relative weighting percentages to account for the importance of each factor relative to the others.
The Math Behind It All
Mathematically, here is what happens with a sample cooking event using 3 factors and different standards within each factor:
Factor 1: Appetizer - 25%
Factor 2: Entree - 50%
Factor 3: Dessert - 25%
Let's say you assign the following standards:
Appetizer: Execution (1-10), Appearance (1-10) & Taste (1-10) [30pts possible]
Entree: Execution (1-10), Presentation (1-10), Texture (1-10) & Taste (1-10) [40pts possible]
Dessert: Execution (1-10), Appearance (1-10), Wow Factor (1-10), Originality (1-10) & Taste (1-10) [50pts possible]
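Expressed as plain data (a hypothetical sketch, not how LiveJudge stores it), that configuration looks like this, and the bracketed point totals fall straight out of the number of standards times the 0-10 slider range:

```python
# The cooking example as plain data (illustrative names only).
# Every standard is judged on the same 0-10 slider.
SLIDER_MAX = 10

cooking_class = {
    "Appetizer": {"weight_pct": 25, "standards": ["Execution", "Appearance", "Taste"]},
    "Entree":    {"weight_pct": 50, "standards": ["Execution", "Presentation", "Texture", "Taste"]},
    "Dessert":   {"weight_pct": 25, "standards": ["Execution", "Appearance", "Wow Factor",
                                                  "Originality", "Taste"]},
}

for name, factor in cooking_class.items():
    print(name, len(factor["standards"]) * SLIDER_MAX, "points possible")
# Appetizer 30, Entree 40, Dessert 50
```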
At first glance, you might wonder how the system will maintain the proper weighting (relative importance) of the three factors, given that Dessert has a possible 50 points while its equally weighted counterpart, Appetizer, has only 30 possible points.
Think of the point scales as a way to guide the judges through a precise analysis rather than as a way to accumulate competition points. Let's say a judge awards the following to a contestant:
Appetizer: 27 of 30 possible points
Entree: 32 of 40 possible points
Dessert: 25 of 50 possible points
We convert each award to a ratio that reflects what percentage of the possible points the contestant earned:
Appetizer: 27 of 30 possible points = 90%
Entree: 32 of 40 possible points = 80%
Dessert: 25 of 50 possible points = 50%
Next, we apply the ratio to the weighting:
Appetizer: 90% of 25 = 22.5pts
Entree: 80% of 50 = 40pts
Dessert: 50% of 25 = 12.5pts
Now, total those points:
22.5 + 40 + 12.5 = 75 points of the 100 possible (from your weighting distribution). Now, if you remove all but one standard (keep Execution, for example) and the judge awards the same percentages, the results will be IDENTICAL.
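The same arithmetic as a short sketch (illustrative Python, not the LiveJudge implementation), including a quick check of the single-standard claim:

```python
# Reproduce the weighted total from the example above.
weights  = {"Appetizer": 25, "Entree": 50, "Dessert": 25}   # class weighting, totals 100
possible = {"Appetizer": 30, "Entree": 40, "Dessert": 50}   # number of standards x 10
awarded  = {"Appetizer": 27, "Entree": 32, "Dessert": 25}   # this judge's raw points

raw_score = 0.0
for factor, weight in weights.items():
    ratio = awarded[factor] / possible[factor]     # e.g. 27/30 = 90%
    contribution = ratio * weight                  # e.g. 90% of 25 = 22.5
    print(f"{factor}: {ratio:.0%} of {weight} = {contribution}pts")
    raw_score += contribution

print("Raw score:", raw_score)   # 22.5 + 40.0 + 12.5 = 75.0 of 100 possible

# With a single 0-10 Execution slider per factor and the same percentages awarded
# (9, 8 and 5 out of 10), the weighted total is identical:
single = {"Appetizer": 9, "Entree": 8, "Dessert": 5}
print(sum(w * single[f] / 10 for f, w in weights.items()))   # 75.0
```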
Calculating raw scores using an achievement ratio allows your judging to be extremely precise while giving judges a live roadmap through each factor. If the judge has to move 5 sliders for a single judging factor, they will think about each criterion carefully. More importantly, your contestants will understand, at a granular level, why they received the total score they did.
From here, the software allows you to scale the 75, either with an absolute scale multiplier (like 10x) or with a distribution-type scale (like curving a test grade onto a 1-1000 range, for example). If we scaled by 10, this contestant would have a 750 as their final score from this judge. This example does not account for penalties, bonuses or difficulty multipliers.
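As a last sketch (assuming the 10x multiplier from the example), the absolute-scale step is a single multiplication; a distribution-type scale would instead map each raw score onto the chosen range:

```python
# Apply an absolute scale multiplier to the judge's raw weighted score (illustrative only).
raw_score = 75.0           # the weighted total calculated above
scale_multiplier = 10
print(raw_score * scale_multiplier)   # 750, before penalties, bonuses or difficulty multipliers
```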