This guide explains how to use the Content Dashboard to make targeted improvements to your project.

Please note that the content guidelines for Guided Projects were updated in October 2020 following the introduction of the "Show Answer" button on tasks. Those updated guidelines are a good starting point for maintaining Guided Projects.

The dashboard is designed to help you answer three questions:

  1. How good is my project?
  2. Which tasks have problems?
  3. What are learners struggling with in those tasks?

Before you begin

  1. Create a project.
  2. See Accessing the Content Dashboard to get your login credentials.
  3. Log in to the dashboard.

Select the project to maintain

Choose the correct dashboard

In the left-hand navigation bar, under the "Quality" item, select "Projects".

Select your Project

Begin on the "View Project" pane. Use the "Select Project" dropdown to select the project to maintain. The search box searches titles. It is often easiest to search for a keyword in the middle of your course rather than typing the whole name.

Select the date range

In the left-hand navigation bar, use the "Select date range" dropdown to select the period to calculate metrics over. (This affects most but not all values in the dashboard.)

The value defaults to the last 8 weeks: DataCamp uses this date range for most internal performance goals.

Determine overall performance with top-level metrics

Use the top-level metrics to get a sense of the overall performance of your project.

Average rating is the primary metric for determining the quality of your project. A score of 4.6 or higher is considered excellent; a score of 4.3 or lower indicates problems to resolve. If you have at least 25 ratings during the selected date range, the ranking against other projects is also shown.
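The rating thresholds above can be sketched as a small helper. This is a hypothetical illustration, not part of the dashboard; the function name is invented, and the cutoffs are the ones stated in this guide (4.6 and 4.3).

```python
# Hypothetical helper illustrating the rating thresholds described above:
# 4.6 or higher is excellent, 4.3 or lower indicates problems to resolve.
def interpret_rating(average_rating: float) -> str:
    if average_rating >= 4.6:
        return "excellent"
    elif average_rating <= 4.3:
        return "problems to resolve"
    else:
        return "acceptable"

print(interpret_rating(4.7))  # excellent
print(interpret_rating(4.2))  # problems to resolve
```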

Number of Completions isn't a quality measure, but it is shown here because it is closely linked to revshare payments and is a useful measure of the popularity of a project.

Completion Rate is the primary measure of how engaging a project is. It counts the number of learners who started the project during the selected date range, then calculates the percentage of them who have completed it to date. (Note that this metric decreases when you look at shorter timescales.)

Note that completion rates vary depending on the difficulty of the project, and on whether it is a Guided or an Unguided Project (the latter often exhibit lower completion rates). Currently, we interpret completion rates below 20% for Unguided Projects and below 30% for Guided Projects as reflective of engagement problems.
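As a minimal sketch of the calculation and thresholds above (function names are invented for illustration; the 20%/30% cutoffs are the ones stated in this guide):

```python
# Hypothetical sketch: compute a completion rate and flag engagement
# problems using the thresholds above (below 20% for Unguided Projects,
# below 30% for Guided Projects).
def completion_rate(num_completed: int, num_started: int) -> float:
    """Percentage of learners who started in the date range and have completed."""
    return 100 * num_completed / num_started

def has_engagement_problem(rate: float, guided: bool) -> bool:
    threshold = 30 if guided else 20
    return rate < threshold

rate = completion_rate(120, 500)                   # 24.0%
print(has_engagement_problem(rate, guided=True))   # True: below the 30% cutoff
print(has_engagement_problem(rate, guided=False))  # False: above the 20% cutoff
```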

% Used Hint is the primary metric for difficulty in projects.

You can click the number in each metric panel to see a time series of that metric.

Determine where problems exist with Task-level metrics

This table allows you to find where problems exist in your project. Each row in the task table represents a task.

Since learners cannot ask for the solution in projects, the most important maintenance concern is managing difficulty, with % Hint as the primary metric. For most tasks, this should be below 35%.

% Completions is the number of completions divided by the number of starts. Typically this decreases gradually from the start of the project to the finish. Sudden dips indicate broken tasks or poor instructions.
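The two task-level checks above can be sketched together. This is a hypothetical illustration only: the function name and the task-record shape are invented, and the 10-point dip cutoff is an assumption for the example (only the 35% hint threshold comes from this guide).

```python
# Hypothetical sketch of the task-level checks described above: flag tasks
# whose % Hint exceeds 35%, or whose % Completions drops sharply relative
# to the previous task (a possible broken task or poor instructions).
# dip_threshold=10.0 is an assumed cutoff for illustration, not a DataCamp rule.
def flag_tasks(tasks, hint_threshold=35.0, dip_threshold=10.0):
    """tasks: list of dicts with 'name', 'pct_hint', 'pct_completions'."""
    flagged = []
    prev_completions = None
    for task in tasks:
        reasons = []
        if task["pct_hint"] > hint_threshold:
            reasons.append("high hint rate")
        if (prev_completions is not None
                and prev_completions - task["pct_completions"] > dip_threshold):
            reasons.append("sudden completion dip")
        if reasons:
            flagged.append((task["name"], reasons))
        prev_completions = task["pct_completions"]
    return flagged

tasks = [
    {"name": "Task 1", "pct_hint": 10, "pct_completions": 95},
    {"name": "Task 2", "pct_hint": 40, "pct_completions": 92},
    {"name": "Task 3", "pct_hint": 20, "pct_completions": 70},
]
print(flag_tasks(tasks))
# [('Task 2', ['high hint rate']), ('Task 3', ['sudden completion dip'])]
```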

Note that if your project is an Unguided Project, you will only be shown one task, and the metrics such as completion rate and hint rate will be identical to those shown at the top of the dashboard (i.e. top-level metrics).

Deal with student-reported issues

The first step in diagnosing problems in your tasks is to read what your students have to say about them. 

Feedback lives at the whole project level.

Each row represents a single feedback message, and rows are ordered from newest to oldest.

In the nb_url column, click View Notebook to see the notebook submitted by the learner who wrote the feedback. When viewing the notebook it is useful to look at the code the student wrote, and try to determine which tests failed for them (which will be visible in the output of the notebook).

See Filtering Content Dashboard tables to learn how to filter the tables.
