This guide explains how to use the Content Dashboard to make targeted improvements to your course or project.

The dashboard is designed to help you answer three questions:

  1. How good is my course/project?
  2. Which exercises/tasks have problems?
  3. What are learners struggling with in those exercises/tasks?

Before you begin

  1. Create a course or project.
  2. See Accessing the Content Dashboard to get your login credentials.
  3. Log in to the dashboard.

Select the content to maintain

Choose the correct dashboard

In the left-hand navigation bar, under the Quality item, select either Courses or Projects, depending upon the type of content you wish to maintain.

Select your Course or Project

Begin on the "View Course" pane. Use the "Select Course"/"Select Project" dropdown to select the content to maintain. The search box searches titles. It is often easiest to search for a keyword in the middle of your course rather than typing the whole name.

Select the date range

In the left-hand navigation bar, use the "Select date range" dropdown to select the period to calculate metrics over. (This affects most but not all values in the dashboard.)

The value defaults to the last 8 weeks: DataCamp uses this date range for most internal performance goals.

Determine overall performance with top-level metrics

Top-level metrics for Courses:

Top-level metrics for Projects:

The top-level metrics are used to get a sense of the overall performance of your content. 

Average rating is the primary metric for determining the quality of your content. A score of 4.6 or higher is considered excellent. 4.2 or lower indicates problems to resolve. If you have at least 25 ratings during the selected date range, the ranking against other Courses/Projects is also shown.
Number of Completions isn't a quality measure, but is shown here because it is closely linked to revshare payments, and is a useful measure of the popularity of a course.
28-day Completion Rate is the primary measure of how engaging a course is. It counts the learners who started the course during the selected date range, then calculates the percentage of them who completed the course within 28 days of their start date. (All-time completion rate metrics decrease when you look at shorter timescales, because recent starters haven't had time to finish; the 28-day completion rate only decreases when you look at timescales shorter than 28 days and is stable otherwise.) Values under 20% are indicative of engagement problems. A worked sketch of this calculation follows this list.
Completion Rate is used by Projects, though this may be switched to 28-day Completion Rate in the future, if that metric is deemed a success.
Number of Feedback Messages is a rough indicator of the amount of problematic content in a course. Although learners occasionally leave praise via the in-exercise "Report Issues" feature, most feedback indicates a problem, so the total number of feedback messages provides an indicator of how much content you need to urgently maintain.
% Used Hint is the primary metric for difficulty in Projects. (Courses have other metrics for this.)
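
To make the 28-day Completion Rate concrete, here is a minimal sketch in Python. The data and field names are invented for illustration; the dashboard computes this value for you.

    from datetime import date, timedelta

    # Hypothetical start/completion dates for five learners who started the
    # course during the selected date range (None = never completed).
    learners = [
        {"started": date(2024, 3, 1),  "completed": date(2024, 3, 10)},
        {"started": date(2024, 3, 5),  "completed": date(2024, 5, 1)},   # finished, but after 28 days
        {"started": date(2024, 3, 12), "completed": None},               # never finished
        {"started": date(2024, 3, 20), "completed": date(2024, 4, 2)},
        {"started": date(2024, 4, 2),  "completed": date(2024, 4, 20)},
    ]

    window = timedelta(days=28)
    completed_in_window = sum(
        1 for l in learners
        if l["completed"] is not None and l["completed"] - l["started"] <= window
    )

    rate = 100 * completed_in_window / len(learners)
    print(f"28-day completion rate: {rate:.0f}%")  # 3 of 5 learners -> 60%

Note that a learner who finishes after more than 28 days counts toward an all-time completion rate but not toward the 28-day one, which is why the two metrics behave differently as you shrink the date range.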

Determine Chapter-level performance (courses only)

The Course dashboard includes chapter-level metrics. Pay attention to these things:

A dip in Avg Rating can indicate structural problems with that chapter. For example, is the topic dull, or does it fail to solve real-world problems? Is the dataset boring? Does the flow of exercises not make sense? You'll need to look at the exercise-level metrics (see below) to determine what went wrong, but this can help you decide where to focus your efforts.
Typically the % Completions is lower for Chapter 1 (because students start the course out of curiosity, or after seeing an advert, then drop out), then stabilizes from Chapter 2. If you see a big drop in this metric in the final chapters, there is an engagement problem. Are there exercises that are impossible to complete? Is the content boring?
Number of Messages gives an indication of where students are struggling. See the exercise-level metrics (below) to determine the exact location of the problems.
% Hint and % Solution are the proportion of exercise starts where a hint or solution was used. These are the main metrics for difficulty. Ideally, a course should have a gradual increase in the level of difficulty as it progresses; that is, these numbers should gradually increase from Chapter 1 to the final chapter.

Determine where problems exist with Exercise/Task-level metrics

This table allows you to find where problems exist in your content. The columns are slightly different for courses and projects, as is the workflow.

Course exercise-level metrics

Each row in the course exercise table represents either a complete exercise or a step in an Iterative or Sequential exercise. 

By default, the table is ordered by descending number of feedback messages, then by the position of the exercise within the course. Try sorting on different metrics to determine the exercises with the biggest problems for each metric.

Number of Messages is the first metric to look at, as issues reported by students are often the clearest indicators of problems in an exercise. Any exercise with 1 or more reported messages should be examined for possible improvement.
% Hint is the first of three difficulty metrics, calculated as the percentage of students who submitted at least one attempt and asked for the hint. For most exercises, this should be less than 35%. Values larger than this are often a symptom of unclear instructions or code comments.
% Solution is calculated as the percentage of students who submitted at least one attempt and asked for the solution. For most exercises, this should be less than 20%. Values larger than this are often a symptom of an incorrect solution, unclear instructions, or a weak hint.
To judge the power of the hint, it's also worth looking at the difference between these metrics. Ideally, a hint should get the student halfway to the solution, so % Solution should be about half of % Hint.
% First Attempts is another difficulty metric, calculated as the percentage of first attempts on an exercise that are correct. For most exercises (except video exercises, which are hard to get wrong), this should be between 40% and 80%. A short sketch of how these three percentages are computed follows this list.
% Hint helpful is a secondary metric for judging the quality of hints. It is very useful for high-volume courses, but unreliable for low-volume courses.
% SCT helpful and % Duped relate to the quality of submission correctness tests, and are mostly for DataCamp internal usage.
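
To make the three difficulty metrics concrete, here is a minimal sketch in Python. It assumes a simple per-student record of activity on one exercise; the field names are invented, and the dashboard computes these percentages for you.

    # Hypothetical activity of four students on a single exercise.
    # All four submitted at least one attempt.
    students = [
        {"first_attempt_correct": True,  "used_hint": False, "used_solution": False},
        {"first_attempt_correct": False, "used_hint": True,  "used_solution": False},
        {"first_attempt_correct": False, "used_hint": True,  "used_solution": True},
        {"first_attempt_correct": True,  "used_hint": False, "used_solution": False},
    ]
    n = len(students)

    pct_hint = 100 * sum(s["used_hint"] for s in students) / n
    pct_solution = 100 * sum(s["used_solution"] for s in students) / n
    pct_first_attempts = 100 * sum(s["first_attempt_correct"] for s in students) / n

    print(f"% Hint: {pct_hint:.0f}%")                      # 50% -> above the ~35% guideline
    print(f"% Solution: {pct_solution:.0f}%")              # 25% -> above the ~20% guideline
    print(f"% First Attempts: {pct_first_attempts:.0f}%")  # 50% -> within the 40-80% range

In this invented example, % Solution is about half of % Hint, so the hint is pulling its weight, but both difficulty percentages are above the guideline values, suggesting the instructions or hint could be clearer.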

Project task-level metrics

Since learners cannot ask for the solution in Projects, the most important maintenance task is managing difficulty, with % Hint as the primary metric for this. For most tasks, this should be less than 35%.

Deal with student-reported issues

The first step in diagnosing problems in your exercises/tasks is to read what your students have to say about them.

Course reported issues

For courses, reported issues live at the exercise level. Click on a row in the exercise-level metrics table to see the reported issues for that exercise.

Messages are grouped by the solution that the learners provided (so one row can represent multiple messages). By default, rows are ordered by the date of the most recent message in the group, from oldest to newest.

The Message column shows the most recent message for that group. Click the plus icon on the left to see all the messages for that group.

In the Diff column, click View to see the difference between the expected solution and the student's attempt. Red lines are in the solution but not the student's attempt; green lines are in the student's attempt but not the solution.
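
If you want to reproduce this comparison outside the dashboard, a plain line diff of the expected solution against the student's attempt conveys the same information. Below is a minimal sketch in Python with invented code snippets; it assumes the dashboard's diff is an ordinary line-by-line diff, with solution-only lines shown in red and attempt-only lines in green.

    import difflib

    # Invented example: the expected solution vs. a student's attempt.
    solution = 'import pandas as pd\ndf = pd.read_csv("sales.csv")\nprint(df.head())\n'
    attempt = 'import pandas as pd\ndf = pd.read_csv("sales.csv", sep=";")\nprint(df.head())\n'

    # Diffing solution -> attempt: "-" lines are in the solution only
    # (red in the dashboard), "+" lines are in the attempt only (green).
    diff = difflib.unified_diff(
        solution.splitlines(), attempt.splitlines(),
        fromfile="solution", tofile="attempt", lineterm="",
    )
    print("\n".join(diff))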

Project reported issues

For Projects, feedback lives at the whole project level.

Each row represents a single feedback message, and rows are ordered from newest to oldest.
In the nb_url column, click View Notebook to see the notebook submitted by the learner who wrote the feedback. When viewing the notebook, it is useful to look at the code the student wrote and try to determine which tests failed for them.

Update hints (courses only)

The course dashboard includes a tool that matches feedback messages about unhelpful hints and SCTs to the student submissions that triggered them.

Each row in the table represents one student submission. By default, they are sorted by creation date, from most recent to oldest. As an instructor, you mostly care about hints (where the Problem column equals "hint"). However, if you see lots of messages about SCTs, please let your contact at DataCamp know.

For hints, the student asked for a hint, rated it as not helpful, then left a message explaining why.

Below the SCT/Hint tool, the Contents of the exercise are shown. Switch to the hint pane to remind yourself of what you wrote.

Now read the message the student provided to explain why they didn't find the hint helpful. Then, in the Diff column, click "View" to see the difference between what the student submitted and the expected solution. Again, red lines are in the solution but not the student's attempt; green lines are in the student's attempt but not the solution.

Explore incorrect answers (courses only)

The third tool for finding what students struggle with shows incorrect attempts on exercises.

Each row represents one wrong answer (possibly by several students). The top five most common incorrect submissions over the last 6 weeks are shown. The # Submissions and % Submissions columns show the number and percentage of students submitting that particular incorrect answer.
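
As a rough illustration of how such a table could be assembled, here is a minimal sketch in Python that counts the most common incorrect submissions. The submissions are invented, the dashboard does this aggregation for you, and the dashboard's exact percentage base may differ; this sketch uses the share of the listed wrong answers.

    from collections import Counter

    # Invented incorrect submissions for one exercise.
    wrong_answers = [
        'df.head(5)',
        'df.head()',
        'df.head(5)',
        'print(df)',
        'df.head(5)',
    ]

    counts = Counter(wrong_answers).most_common(5)  # top five wrong answers
    total = len(wrong_answers)
    for answer, n in counts:
        print(f"{answer!r}: {n} submissions ({100 * n / total:.0f}%)")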

In the Diff column, click "View" to see the difference between what the student submitted and the expected solution. As before, red lines are in the solution but not the student's attempt; green lines are in the student's attempt but not the solution.

Once you have determined what the student did wrong, take another look at the instructions and code comments to ensure that they are clear about what you wanted the student to type. You can see the exercise contents at the bottom of the dashboard.

Read the SCT Feedback Message to see if it seems appropriate. (Ideally, it should provide some guidance to nudge the student toward the correct answer, but not give everything away.) If not, ask your contact at DataCamp to update it.

Implement the changes

After using the dashboard to locate problems, use the Teach editor to make your changes. Changes made to the master branch are live on datacamp.com, so you will need to create and work in another branch. When you're finished, open a pull request on GitHub to the master branch and tag your Content Quality contact as the reviewer. They'll let you know if they have any questions or concerns about your updates and will merge the changes live.
