# Specifying outcomes

Choose tasks that students should be able to do at the end of the course. Here’s an example for an introductory programming course:

Given a data file with a list of transactions, write a program that outputs summary statistics, such as the mean and maximum.

Students have 45 minutes for the task. They can use any information resource, including any Web site, as long as they don’t communicate with another human.

The task is fairly generic. It doesn’t specify the data format, or exactly which statistics to compute.
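A sample solution for this task might look like the sketch below. It assumes one transaction amount per line of the data file; since the task leaves the format open, that’s an illustrative choice, not part of the task.

```python
def summarize(lines):
    """Parse one transaction amount per line; return summary statistics.

    Blank lines are skipped. The one-amount-per-line format is an
    assumption -- the task doesn't specify what the data looks like.
    """
    amounts = [float(s) for s in (line.strip() for line in lines) if s]
    return {
        "count": len(amounts),
        "mean": sum(amounts) / len(amounts),
        "maximum": max(amounts),
    }


# Usage: read the data file and print the statistics.
# with open("transactions.txt") as f:
#     print(summarize(f))
```

A solution of roughly this size is what a student might produce in part of the 45-minute window, leaving time for the rest of the task.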

Here’s a more specific task:

Write a Web application for adding two vectors. The input page has two text fields. Users enter numbers separated by commas. The output page validates the data, showing errors for nonnumeric entries or differing vector lengths. If the data is valid, the output page shows the sum of the vectors.
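The core of a sample solution is the validation and summing logic; the Web plumbing (input page, output page) is omitted here, so this is a sketch of the part the task emphasizes, not a full application.

```python
def parse_vector(text):
    """Parse a comma-separated list of numbers.

    Returns (values, errors); errors are collected rather than raised,
    so the output page can show all of them at once.
    """
    values, errors = [], []
    for i, part in enumerate(text.split(",")):
        part = part.strip()
        try:
            values.append(float(part))
        except ValueError:
            errors.append(f"entry {i + 1} ({part!r}) is not a number")
    return values, errors


def add_vectors(a_text, b_text):
    """Validate both inputs; return (sum_vector, errors)."""
    a, a_errors = parse_vector(a_text)
    b, b_errors = parse_vector(b_text)
    errors = a_errors + b_errors
    if not errors and len(a) != len(b):
        errors.append("vectors have different lengths")
    if errors:
        return None, errors
    return [x + y for x, y in zip(a, b)], []
```

`add_vectors("1, 2, 3", "4, 5, 6")` returns the sum with no errors; nonnumeric entries or mismatched lengths return error messages instead.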

Instructors want to compare student performance across time. That’s easier when they use specific tasks. The problem is that past students pass on information about tests to future students, as with the infamous fraternity test bank. You might offer instructors different task sets they can choose from, to suit their circumstances.

Here are three things to keep in mind when specifying outcomes: skill variance, task difficulty, and what students know when they start the course.

# Skill variance

• More skilled students will produce higher quality solutions. For example, two programs might work, but one might have better indenting, variable names, etc., than the other.
• Given a fixed time, more skilled students can complete more tasks than less skilled students.
• Given a fixed number of tasks, more skilled students will complete them more quickly than less skilled students.

Current practice is to have time limits for exams, so let’s go with that. To be able to detect differences in student performance, we need:

• Tasks where there can be variance in solution quality. To put numbers behind the quality difference, we need rubrics that capture quality dimensions. We can do that with Cyco’s exercise/feedback system.
• Enough tasks (or subparts of larger tasks) so that there’ll be an observable difference in the number completed by more accomplished students.
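A rubric that captures quality dimensions can be as simple as weighted dimensions with per-student ratings. The dimensions and weights below are hypothetical, chosen to illustrate the idea; they are not Cyco’s actual rubric format.

```python
# A hypothetical rubric for the summary-statistics task.
# Weights are maximum points per dimension (illustrative only).
RUBRIC = {
    "correct output": 4,
    "meaningful variable names": 2,
    "consistent indenting": 2,
    "input validation": 2,
}


def score(ratings, rubric=RUBRIC):
    """Compute a total score from per-dimension ratings.

    ratings maps dimension name -> fraction earned (0.0 to 1.0);
    unrated dimensions earn nothing.
    """
    return sum(weight * ratings.get(dim, 0.0)
               for dim, weight in rubric.items())
```

Two working programs can then get different scores: one with correct output alone scores lower than one that also has good names and clean indenting, which is exactly the variance we want to detect.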

# What students know when they start the course

Be conservative. Better to underestimate what students know than to have them lost at the start of the course.

Because of the expert’s blind spot, it’s easy to overestimate what students can learn in a course.

Ask around. Show instructors the tasks you’ve created. Do they think students can learn to do the tasks in a semester?

Create sample solutions for each task. This makes it more obvious what students will have to know to complete each one.

If you want other people to adopt your Cycourse, you want them to look at your outcomes, and think, “That looks about right.”

This isn’t easy. Many instructors pack too much content into skills courses. Their goal is to “cover chapters 1 to 12” rather than “help students become skilled problem solvers.” Students don’t have enough time for deep learning.

Some instructors won’t let go of the “wide and shallow” approach. Some universities might even have social norms of cramming as much content as possible into courses, regardless of the effect on learning.1

You should decide how you’re going to handle this issue. Good luck. If you have any suggestions, please let us know.

# Summary

Choose tasks that students should be able to do at the end of the course. You can use generic or specific tasks. You might offer instructors different task sets they can choose from. Choose tasks where there can be variance in solution quality. Have enough tasks (or subparts of larger tasks) so that there’ll be an observable difference in the number completed by more accomplished students.

Write down your assumptions about what students know at the start of the course. Be conservative.

Many instructors pack too much content into skills courses. Some won’t let go of the “wide and shallow” approach.

1 Kieran: I suspect this is common, but don’t know for sure.

Editors:
kieran