New course evaluation forms made their debut last semester in the History, Near Eastern and Judaic Studies, and Theater Arts departments. The surveys showed an increase in the number and specificity of questions, which is consistent with the aim of the evaluations' reformatting, according to Associate Dean of Arts and Sciences Elaine Wong.
According to Wong, the ultimate objective of the new evaluations is to clarify and distill responses and to compile more useful data through a user-friendly design.
While maintaining the three standard write-in questions, the latest version of the assessment contains nearly triple the number of fill-in bubble prompts. The jump from 16 fill-in questions to a more comprehensive 44 is intentionally redundant, according to Course Evaluation Coordinator and Ph.D. candidate Shannon Gent. This is a transitional questionnaire that, after more research and statistical analysis of the results, will be made less cumbersome.
The fundamental difference between the pilot and the standard evaluation is the rewording of the questions and the restructuring of the answer options. For example, to assess the skill of the instructor, the standard form asks students to rate their professor's effectiveness on a scale ranging from "poor" to "excellent." The new survey asks students to respond to the prompt "The instructor was effective as a lecturer and/or discussion leader" by selecting a response ranging from "strongly disagree" to "strongly agree."
NEJS Department Chairman Marc Brettler '78, Ph.D. '86, said he feels that the launch of the pilot surveys was a wise decision and is typical of what happens when survey instruments are changed.
"A small group pilots the new form so we can see if it is useful, and does offer the information we were hoping to get," he said, describing the process. "After a pilot, the form may be tweaked, and then may be used more widely, if there is a sense that it is better or more useful."
Behind the piloting of the revamped course evaluation forms is the Committee for the Support of Teaching (CST), on which Wong, Brettler and Gent serve as members, along with other representatives of Brandeis staff, faculty and the Student Union.
According to its website, the CST "develops policy and procedural recommendations to support the teaching mission of the university and initiates faculty development programs to improve the quality of teaching at Brandeis."
Survey results may be used by individual professors and department chairs to maximize teaching effectiveness; they play a determining role in professors' tenure, promotion and salary adjustments; and they are one of the factors in the selection of recipients of teaching awards (nomination forms are also available through the Brandeis website). The Student Senate also relies heavily on the survey in its compilation of the Course Evaluation Guide (available at http://union.brandeis.edu/ceg/).
In designing the pilot evaluations, the CST consulted focus groups composed of the instrument's primary users, students and faculty, to gain insight into its content and layout.
"I think that the more specific and clear questions will be very helpful to faculty teaching courses," Brettler commented regarding the usefulness of the new evaluations. "Also, as a department chair who is supposed to evaluate the faculty's teaching, I will find many of the more specific questions in the pilot form much more helpful."
A future direction for the evaluation process may include on-line evaluations as opposed to in-class surveys. Some other universities are already experimenting with the on-line methodology.
Problems with an Internet-based approach, as outlined by Gent, include a possible lack of student participation: in-class surveys are thought more likely to be completed than surveys that rely on a student's own initiative. Internet safeguards are also a serious consideration, as it may be possible for students to complete a questionnaire for a course in which they are not enrolled, or to complete more than one questionnaire per course.
Gent said she feels that, although it would be easier to compile the results of online surveys, the written surveys are too valuable to take chances with until more exploration of the efficacy of web-based evaluation has been conducted.