Course Metric Expectations

Brief summary of the overall metrics expected for a well performing DataCamp Course

Written by Amy Peterson
Updated over a week ago

Overall Course Metrics 

When you first log in to the Content Dashboard, you will see an overview of your course metrics. 

Screenshot of course-level metrics for Introduction to SQL, from the Content Dashboard. The four metrics shown are "average rating", "number of completions", "28-day completion rate", and "number of feedback messages".

Here is a breakdown of our expectations for course metrics. 

Average rating: This is the primary rating for course quality, calculated from weighted chapter ratings (1 to 5 stars). 4.6 or higher is considered excellent. 4.2 or lower is indicative of substantial problems with the content.

Number of Completions: Use this to track revshare (not a quality metric).

28 Day Completion Rate: Percentage of learners who started the course in the selected date range and completed it within 28 days. Under 20% is indicative of engagement problems with the content.
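As a rough illustration of how this rate is defined, here is a minimal sketch (the dates and learner records are invented, and the real dashboard computation may differ in details):

```python
# Illustrative computation of a 28-day completion rate.
# Data below is made up; it is not real DataCamp course data.
from datetime import date, timedelta
from typing import Optional

def completed_within_28_days(start: date, completion: Optional[date]) -> bool:
    """Did this learner finish within 28 days of starting the course?"""
    return completion is not None and completion - start <= timedelta(days=28)

# One (start, completion) pair per learner who started in the date range;
# None means the learner never completed the course.
starts = [date(2024, 1, 3), date(2024, 1, 10), date(2024, 1, 15)]
completions = [date(2024, 1, 20), None, date(2024, 3, 1)]

done = sum(completed_within_28_days(s, c) for s, c in zip(starts, completions))
rate = 100 * done / len(starts)
print(f"28-day completion rate: {rate:.0f}%")  # 33%
```

Only the first learner counts as completed: the second never finished, and the third took longer than 28 days.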

Number of Feedback Messages: Number of messages left via the in-exercise "Report Issues" tool. Feedback reporting problems is much more common than praise, so lower is better. For high-traffic courses, the number of feedback messages should be less than 5% of the number of course starts.
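The 5% guideline above can be checked with a one-line calculation. The helper name and the numbers here are made up for illustration:

```python
# Hypothetical check of the "feedback messages < 5% of starts" guideline.
def feedback_ratio_ok(feedback_messages: int, course_starts: int) -> bool:
    """True if feedback volume is under 5% of course starts."""
    return feedback_messages < 0.05 * course_starts

# 300 messages against 10,000 starts is 3% -> within the guideline.
print(feedback_ratio_ok(300, 10_000))  # True
# 700 messages against 10,000 starts is 7% -> worth investigating.
print(feedback_ratio_ok(700, 10_000))  # False
```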

Chapter Level Quality Metrics 

Screenshot of chapter-level metrics for Introduction to SQL, from the Content Dashboard. Each row corresponds to a chapter, and metrics include average rating and number of feedback messages.

DataCamp tracks several quantitative metrics for each course chapter. These can help you begin to isolate which parts of your course may need maintenance. 

Keep in mind that the % completions for Chapter 1 will normally be lower than other chapters, as learners may start a course out of curiosity before discontinuing. This does not indicate an issue that needs to be solved. 

Avg Rating: Average overall chapter rating. If one chapter has a significantly lower rating than the others, this may indicate that the chapter contains problematic exercises. 

% completions: Completion rate for users who started the course in this time period. (Note that we would expect % completions to decrease in recent time periods, since users have had less time to complete the course.) 

# Messages: number of feedback messages left by learners on exercises. These often report bugs, typos, or mistakes in our submission correctness tests, and they should be your first priority when looking to maintain your exercises. 

% Hint: Percentage of exercise starts where a hint was used. 

% Solution: Percentage of exercise starts where the solution was requested.
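Both rates are simple ratios over exercise starts. A minimal sketch, with invented counts:

```python
# Illustrative computation of % Hint and % Solution for one exercise.
# The counts are hypothetical, not real dashboard data.
def usage_rates(exercise_starts: int, hint_uses: int, solution_requests: int):
    """Return (% hint, % solution) as percentages of exercise starts."""
    pct_hint = 100 * hint_uses / exercise_starts
    pct_solution = 100 * solution_requests / exercise_starts
    return pct_hint, pct_solution

# 2,000 starts, 500 hint uses, 120 solution requests:
print(usage_rates(2_000, 500, 120))  # (25.0, 6.0)
```

A high % Solution relative to % Hint can suggest learners are skipping the hint and going straight to the answer.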


See here for more information on interpreting Course Chapter Metrics.

Exercise Level Quality Metrics 

Screenshot of exercise-level metrics for Introduction to SQL, from the Content Dashboard. Each row in the table refers to an exercise or a step of a multi-step exercise. Rows are ordered by decreasing number of feedback messages.

DataCamp tracks several quantitative metrics related to each exercise. Students can also report qualitative feedback for any exercise. See below for the metrics tracked for each Course exercise. (Found in the Dashboard under the “Browse Exercises” tab)

Quantitative: 
Difficulty of the exercise: 

  • Percentage of students who asked for hints

  • Percentage of students who asked for the solution

  • Percentage of students who got the answer right on the first attempt 

Count of students 

  • How many students completed the exercise

Student happiness 

  • Number of feedback messages (issues) reported

  • Percentage of students who volunteered to provide feedback on whether they found a hint or SCT helpful or not 

Qualitative: 

Issues

  • Learner-provided feedback on a problem they had with the exercise 

SCT (Submission Correctness Test) related feedback messages

  • SCTs are shown when a learner submits incorrect code

Incorrect Attempts

  • See what code learners submitted as incorrect answers using the diff view 

See here for more information on Interpreting Course Exercise Metrics

See here for more information on Interpreting User Feedback Messages
