Chapter-level metrics give you a finer-grained measure of where problems exist. Use the “sort” feature to order chapters by each metric. Pay attention to the following:

Avg Rating:

A dip in a chapter’s average rating can indicate structural problems with that chapter. For example: is the topic dull, or does it not solve real-world problems? Is the dataset boring? Does the sequence of exercises flow poorly? You’ll need to look closer at the feedback messages in the exercise-level metrics (further down the dashboard) to determine what went wrong, but this metric can help you decide where to focus your efforts.

% Completions:

Typically this is lower for Chapter 1 (because students start the course out of curiosity, or after seeing an advert, and then drop out) and stabilizes from Chapter 2 onward. If you see a big drop in this metric in the final chapters, there is an engagement problem: are there exercises that are impossible to complete? Is the content boring?
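As a rough sketch of this check, you could compare each chapter’s completion rate with the previous one and flag sharp late-course drops. The function, the example rates, and the 15-percentage-point threshold below are all illustrative assumptions, not part of the dashboard itself:

```python
# Hypothetical sketch: flag chapters whose completion rate drops sharply
# compared to the previous chapter. Threshold and data are illustrative.

def completion_drops(completions, threshold=0.15):
    """Return 1-indexed chapter numbers whose completion rate fell by
    more than `threshold` relative to the previous chapter."""
    flagged = []
    for i in range(1, len(completions)):
        if completions[i - 1] - completions[i] > threshold:
            flagged.append(i + 1)  # chapters are 1-indexed
    return flagged

# Chapter 1 churn is expected; the steep drop into Chapter 4 is the red flag.
rates = [0.60, 0.45, 0.44, 0.20]
print(completion_drops(rates))
```

Here only the final chapter is flagged, which matches the guidance above: early attrition is normal, late drops signal an engagement problem.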

Number of Messages:

This gives an indication of where students are struggling. Use the exercise-level metrics to pinpoint the exact location of the problems.

% Hint and % Solution:  

These are the proportion of exercise starts in which a hint or solution was requested, and they are the main metrics for difficulty. Ideally, a course’s difficulty increases gradually as it progresses; that is, these numbers should rise gradually from Chapter 1 to the final chapter. At the chapter level this metric may indicate an issue, but because one bad exercise can be masked by a good one, it is still vital to inspect % Hint and % Solution at the exercise level to determine what needs fixing.

However, a % Hint above 20% indicates there may be an issue, and above 35% is a definite red flag that something needs to be addressed. For % Solution, a rate above 20% is a sign that something definitely needs to be fixed.
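The two metrics and their thresholds can be sketched as a small check. The counts and function below are hypothetical; only the 20% and 35% cutoffs come from the guidance above:

```python
# Hypothetical sketch: compute % Hint and % Solution from exercise-start
# counts and apply the thresholds described above. Data is illustrative.

def difficulty_flags(starts, hints, solutions):
    """Return (% Hint, % Solution, list of warning strings)."""
    hint_rate = hints / starts
    solution_rate = solutions / starts
    flags = []
    if hint_rate > 0.35:
        flags.append("hint: red flag")
    elif hint_rate > 0.20:
        flags.append("hint: possible issue")
    if solution_rate > 0.20:
        flags.append("solution: needs fixing")
    return hint_rate, solution_rate, flags

rate_h, rate_s, flags = difficulty_flags(starts=200, hints=80, solutions=50)
print(f"% Hint = {rate_h:.0%}, % Solution = {rate_s:.0%}, flags = {flags}")
```

In this made-up example, 80 hint requests out of 200 starts gives a 40% hint rate (a red flag), and 50 solution requests gives 25% (above the fix-it threshold).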
