Automatically checking student submissions, providing meaningful feedback, and helping students when their submission is incorrect are central to the learning experience on DataCamp. The lifeblood of the automated system that points out mistakes and guides students to the correct solution is the Submission Correctness Test, or SCT.
The SCT is a script of custom tests that accompanies every coding exercise. These tests access the code a student submitted, the output it produced, and the workspace it created. For every language we teach, there is an open-source library that provides a wide range of functions to verify these elements of a student submission. When an SCT spots a mistake, it automatically generates a meaningful feedback message.
As an example, the following SCT for an R exercise uses functions from the testwhat package to check whether an object m was created and whether its value corresponds to the value of m in the solution code:
# Check that the student created an object m and that its value
# matches the value of m in the solution code
ex() %>% check_object("m") %>% check_equal()
# Message shown when the submission passes all checks
success_msg("Well done!")
As another example, the following SCT for a Python exercise uses functions from the pythonwhat package to check whether the function round() was called with the correct arguments:
# Check that round() is called with the correct arguments
Ex().check_function('round').multi(
    check_args(0).has_equal_value(),         # the first positional argument
    check_args('ndigits').has_equal_value()  # the ndigits argument
)
success_msg("You're a coding rockstar!")
In both examples, notice how success_msg() is used to specify the message we show to students when they successfully complete the exercise. When you are building a DataCamp course with one of our Content Developers, you only need to write these success messages and not worry about the rest of the SCT: DataCamp employees will write the SCTs for your course.
Want to learn more about SCTs? Visit the following article for a deep dive into how SCTs work behind the scenes.