The World Behind Course Evaluations

Charlotte Egger

Our small evaluation team creates thousands of evaluation forms a year for Leiden University. Different questions in these forms are used for different goals, but they are all important to monitor and improve the quality of education. Although other types of evaluation, such as panel discussions, are also used, these evaluation forms remain the main source of information about our education. There are, however, some things university staff can do to use evaluation more effectively.

Different questions for different goals 

Because different goals require different methods, our evaluation forms consist of a range of questions. Every evaluation contains the same seven university-wide questions. On top of these, some faculties have added faculty-specific questions and course coordinators are sometimes allowed to add questions of their own. All evaluations end with a small number of open questions, asking about the course’s strong and weak points. 

The information collected from the evaluations is used on different levels too. For example, by placing the same seven questions in all forms, we are able to consistently monitor educational quality on a university-wide scale. Every year, the results of all evaluations are aggregated. Because these seven questions have featured in every evaluation for years, changes over time and within faculties can be made visible. 

The information from the evaluations is also used by teachers and managers to monitor the quality of the course and improve teaching. For these purposes, the answers to open questions are of vital importance. 

Building the evaluation forms 

After all parties involved have decided on which questions they would like to include in their evaluation, our team creates the evaluation form. This is a process which takes place within the programme Evasys, where we enter the course details and build the evaluations out of different sets of questions. 

It is not, however, quite as simple as that. Large differences exist between the faculties in terms of the questions that are put in the evaluations. For example, Humanities has highly standardised evaluations, whereas Social Sciences asks students to rate the extent to which each learning goal has been reached and lets teachers add extra questions. LUMC sticks mainly to the university-wide questions, while Law provides lots of open space to write comments. 

Once we have created the requested evaluation, the course’s teachers receive a link or paper forms; they are responsible for having their students fill these in. When the forms are returned to us, they are processed and a report is generated. This report is then used by teachers and other involved parties to address concerns and improve the course for next year, as well as to examine faculty and university-wide developments. 

How can you use evaluation more effectively? 

Depending on your role, some information is more useful to you than other information. To collect different kinds of information, it is important to employ a variety of methods. Above all, don’t forget to match your methods to your goals. 

A teacher, for example, often gets the most valuable feedback from qualitative data. In addition to the open questions in an evaluation form, panel discussions with students and peer reviews from other teachers can be valuable tools. At the same time, quantitative average scores can provide a more balanced view of the students’ overall opinion than would be possible through other means. 

Another effective strategy in the creation of evaluations is not asking unnecessary questions. This shows that you appreciate the time students put in to provide you with feedback. In addition, sharing what you changed about the course based on last year’s evaluation, or letting students know you will share the results of the evaluation with them afterwards, can also increase their willingness to share their thoughts. These small changes increase the effectiveness of your evaluation. 

Innovation 

Following this line of thinking, we ourselves are currently working to improve the evaluation process. In an ongoing pilot, in which LLInC is collaborating with ICLON, we are testing an evaluation form that could provide teachers with more qualitative data while also requiring a smaller time investment from the students. 

This is achieved by presenting students with a list of aspects of the course, such as organisation, course materials, study load, and learning goals. From this list they choose one or two aspects they particularly appreciated and one or two they believe could be improved, before commenting on these selected aspects alone. For larger courses, AI is used to generate a summary of the answers, improving the readability of the results for teachers. 

The first results of the pilot look very promising. The teachers who participated in the pilot were enthusiastic about the type of feedback they received and got concrete ideas to improve their course. Providing teachers with this type of specific and qualitative feedback could further improve the quality of education. After all, the teacher designing and teaching the course has the largest influence on its quality. We will continue the pilot and explore upscaling, so more teachers can reap the benefits of this way of evaluating. 

The input of students, teachers, and staff is essential when it comes to gathering useful information in the evaluation process here at Leiden University. The support of all parties involved is crucial in making evaluation a truly efficient and valuable tool to keep improving our education – long may this continue! 
