This table allows you to find where problems exist in your course. Each row in the course exercise table represents either a complete exercise or a step in an Iterative or Sequential exercise.
By default, the table is ordered by descending number of feedback messages, then by the position of the exercise within the course. Use the sort and filter functions to sort on different metrics and identify the exercises with the biggest problems for each one.
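The default ordering described above can be sketched as a two-key sort. This is a minimal illustration with made-up rows and field names, not DataCamp's actual table schema:

```python
# Hypothetical rows from the course exercise table; field names are illustrative.
rows = [
    {"exercise": "Ex 3", "position": 3, "messages": 2},
    {"exercise": "Ex 1", "position": 1, "messages": 5},
    {"exercise": "Ex 2", "position": 2, "messages": 5},
]

# Default order: descending message count, then ascending course position.
# Negating the message count lets one ascending sort handle both keys.
rows.sort(key=lambda r: (-r["messages"], r["position"]))
print([r["exercise"] for r in rows])  # ties on messages break by position
```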
Number of Messages:
This is the first metric to look at, as problems reported by students are often the clearest indicators of problems in exercises. Any exercise with 1 or more reported messages should be examined for possible improvement. However, you should use this as a way to isolate exercises that merit further examination, and also focus on the % Asked Hint and % Asked Solution metrics.
% Asked Hint:
This is the first of three difficulty metrics, calculated as the percentage of students who submitted at least one attempt and asked for the hint. For most exercises, this should be less than 35%. Values larger than this are often a symptom of unclear instructions or code comments.
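The calculation above can be sketched in a few lines. The records and field names here are hypothetical, assumed only for illustration:

```python
# Hypothetical attempt records: one entry per student who submitted
# at least one attempt on a single exercise. Field names are illustrative.
attempts = [
    {"student": "a", "asked_hint": True},
    {"student": "b", "asked_hint": False},
    {"student": "c", "asked_hint": False},
    {"student": "d", "asked_hint": True},
    {"student": "e", "asked_hint": False},
]

# % Asked Hint: share of attempting students who asked for the hint.
pct_hint = 100 * sum(a["asked_hint"] for a in attempts) / len(attempts)
print(f"% Asked Hint: {pct_hint:.0f}%")  # 2 of 5 students -> 40%

# Flag exercises above the 35% guideline for review.
needs_review = pct_hint > 35
```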
% Asked Solution:
This is calculated as the percentage of students who submitted at least one attempt and asked for the solution. For most exercises, this should be less than 20%. Values larger than this are often a symptom of an incorrect solution, unclear instructions, or a weak hint.
To judge the power of the hint, it's also worth looking at the difference between these metrics. Ideally, a hint should get the student halfway to the solution, so % Solution should be about half of % Hint.
Keep in Mind: If the % Solution is much lower than the % Hint, then it was a good hint. This is perfectly normal - we want our exercises to be challenging! However, if the % Hint and % Solution rates are similar, that means students asked for a hint and still could not get the answer, and something needs to be fixed.
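The rule of thumb above (% Solution should be about half of % Hint, and similar rates mean a weak hint) can be expressed as a simple check. The function name, example values, and the 0.75 cut-off are all illustrative assumptions, not official thresholds:

```python
# Ideal: the hint gets students about halfway, so % Solution ~ half of % Hint.
# If the two rates are close, the hint did not help and should be improved.
def hint_needs_work(pct_hint, pct_solution, ratio_threshold=0.75):
    """Flag a hint when the solution rate is close to the hint rate.

    ratio_threshold is an illustrative cut-off, not an official one.
    """
    if pct_hint == 0:
        return False  # nobody asked for the hint; nothing to judge
    return pct_solution / pct_hint > ratio_threshold

print(hint_needs_work(30.0, 28.0))  # rates nearly equal: hint likely weak
print(hint_needs_work(30.0, 15.0))  # solution rate is half the hint rate: ideal
```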
% First Attempts:
This is another difficulty metric, calculated as the percentage of first attempts on an exercise that are correct. For most exercises (except video exercises, which are hard to get wrong), this should be between 40% and 80%.
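As a sketch of this calculation, with a hypothetical list of first-attempt results (True meaning the first attempt was correct):

```python
# Hypothetical first-attempt results for one exercise (True = correct).
first_attempts = [True, False, True, True, False, True, False, True]

# % First Attempts: share of first attempts that are correct.
pct_first = 100 * sum(first_attempts) / len(first_attempts)
print(f"% First Attempts correct: {pct_first:.1f}%")  # 5 of 8 -> 62.5%

# Within the 40-80% guideline band for non-video exercises?
in_band = 40 <= pct_first <= 80
```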
% Hint helpful:
This is a secondary metric to judge the quality of hints. This is very useful for high-volume courses, but unreliable for low-volume courses.
% SCT helpful and % Duped:
These relate to the quality of submission correctness tests, and are mostly for DataCamp internal usage. If you have concerns about the % SCT Helpful metric, contact Content Quality through a GitHub issue.
Once you have identified problematic exercises, you can use this column to visit the exercise in Teach and Campus.