Why you got that A (or not)
Most instructors hate giving grades as much as students hate getting them, and grading methods are hardly a standardized set of rules. Methods change from department to department and, in many cases, from faculty member to faculty member. When I was an undergraduate, I asked a former music instructor how he handled grade complaints. "Oh, I just bring out my book with a bunch of number mumbo-jumbo, and it always satisfies them."
Actually, much of his grading was based on his own assessment of a student's performance, and that's inevitably somewhat subjective.

Okay, that's music. What about more concrete, quantifiable fields? Some instructors may be mathematically specific, basing grades solely on numbers from exams. Others may combine exams with letter grades from daily work. Some just fly by the seat of their pants: "Irving was clearly stronger in his group presentation than Raoul, but worse than Philippa." That's not mathematical. But it's not necessarily invalid to put weight behind an experienced and considered opinion.

Most professors at some point set up a grading scale based on one of two systems: norm-referenced or criterion-referenced. A norm-referenced system compares you to everyone else in the class ("the norm").
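If you want to see the arithmetic behind a curve, here's a minimal sketch in Python. The cutoffs (the top 10 percent get the A's, the next 25 percent the B's, and so on) are numbers I made up for illustration, not anybody's official policy.

  # A made-up curve: letters are handed out by class standing, not by a
  # fixed score. The cutoffs below are invented for illustration only.
  def curve_grades(scores):
      ranked = sorted(scores, reverse=True)
      letters = []
      for score in scores:
          standing = ranked.index(score) / len(scores)   # 0.0 = top of the class
          if standing < 0.10:
              letters.append((score, "A"))
          elif standing < 0.35:
              letters.append((score, "B"))
          elif standing < 0.75:
              letters.append((score, "C"))
          else:
              letters.append((score, "D"))
      return letters

  print(curve_grades([93, 88, 81, 77, 74, 70, 62, 55]))
  # The letter attached to that 77 depends entirely on who else took the exam.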
Of course, if you're lucky enough to happen on a dumb bunch of classmates, you'll soar. Get stuck with the best and brightest, and you may bomb. I say "may" because often hanging out with smart people makes you smarter too, and you'll do better than you thought possible. In sports, you don't want to practice against an opponent worse than you are. They drag you down. (Then again, try to play tennis against a serve you can't even touch.)

The other method, criterion-referenced, sets up standards the prof thinks you need to meet to be awarded a certain grade. The more of them you reach, the better the grade.

The criterion approach works best for pure sciences and quantitative subjects (read: math and its ilk), where you either know the material you need to advance or you don't. It's tougher to do that in the humanities or, say, a social science such as communication. The demands of "what every student needs to know" differ from place to place, and assessing your relationship to those demands is more difficult.

The hot topic in assessment nowadays is trying to set up a way to test "higher-order thinking." That is, you don't just ask, "When did Columbus discover America?" Fact, memorized, spit out, forgotten. You ask, "Compare and contrast Columbus' discovery to last week's football game." (Okay, far-fetched, but I bet you could do it.)
According to the classic theory, based on Bloom's taxonomy (try tossing that toward your professor for a reaction), teachers have these objectives:
1. Knowledge.
2. Comprehension.
3. Application.
4. Analysis.
5. Synthesis.
6. Evaluation.
Good exams should be designed primarily to measure 4, 5 and 6, the "higher-level thinking." But it's a lot harder to write those kinds of questions. You'd be expected to take what you learn and relate it in new ways to situations you didn't necessarily discuss in class. You'd also be expected to discover the limits of the information. For instance, test questions that set up a premise ("If...then"), draw an analogy ("An editor is to a newspaper as a ____ is to television"), call for a classification ("Jefferson is classified as a ____") or ask several questions based on a scenario are considered higher-level.

The point behind them is to encourage you to know more than just facts: to know how to use them, how to connect them, how to evaluate them, how to discover new ones and how to spot false ones. Because that's what makes you smart, not just "book learning."
Copyright 2004 by Ross F. Collins <www.ndsu.edu/communication/collins>