Assessment at DataCamp

Getting started with assessment

Written by Aimée Gott

Welcome to Assessment at DataCamp! Throughout this training we will introduce you to the world of assessment and how we implement it at DataCamp. You are probably familiar with assessments: you have no doubt taken them at some point, and you may even have written some yourself. But did you know there is a science behind assessment? You don’t need to know it all to write good assessment content, but it forms the basis for many of the guidelines that we use. Before we introduce these guidelines, let’s start with what it is we are trying to achieve.

Note: Over time we have learnt more and more about what makes a good assessment and how we should write items. As a result, you may see live items that do not meet the criteria in this documentation. These items will be removed over time, and we ask that you follow the guidelines set out here. If the guidelines change after your training as we learn more, the team will highlight those changes to you.

The Goal of DataCamp Assessments

At the time of writing, assessments are used across the DataCamp platform to enable learners to measure their current level in a range of data-related domains. This could be an assessment taken after a series of courses, or one taken as part of a certification exam.

When it comes to writing the content for these assessments, our goal is to ensure that all our assessments are valid, reliable, and fair, allowing any individual to measure their current level in the competencies required for data roles.

What do we mean by valid, reliable and fair assessments?

A valid assessment is one that measures what it is intended to measure. This means that if we claim that our assessment tests exploratory analysis, then it really does test exploratory analysis. We go through a long process of test design to make sure that we plan to test the right things. When it comes to writing items, it is really important that you stick to the test specifications so that we can maintain the validity of our tests.

A reliable assessment is one that would give the same result repeatedly for someone with the same ability. From a content perspective, we aim for our tests to be reliable for everyone, regardless of ability. This means that we have to create a sufficiently large number of test items across the whole range of abilities. From time to time you may be asked to focus on items at specific ability levels so that we can fill gaps in our content and ensure reliability.

A fair assessment is one that does not unfairly disadvantage a sub-population of test takers. Unfairness can arise for a range of reasons, from a test taker’s native language or nationality to their unfamiliarity with, say, a particular sport referenced in an item. You will notice that many of our guidelines relate to fairness, and you will be asked to review for it.
