What is an SCT?
To learn about why a Submission Correctness Test (typically referred to as SCT) exists, what it looks like and who writes them, read this article first. As mentioned there, the SCT is a script of custom tests that accompanies every coding exercise. These custom tests have access to the code students submitted, as well as the output and workspace their code produced. For every taught language, there is an open-source library that provides a wide range of functions to verify these elements of a student submission. When the functions spot a mistake, they automatically generate a meaningful feedback message.
Below, you can find a list of all these utility packages used to write SCTs. If you're authoring R exercises, you write your SCT in R. If you're building Python, SQL, Shell, or Spreadsheets exercises, you write your SCT in Python. The documentation pages for each package list all of its functions, with examples and best practices:

- `testwhat`, for R exercises
- `pythonwhat`, for Python exercises
- `sqlwhat`, for SQL exercises
- `shellwhat`, for Shell exercises
- `sheetwhat`, for Spreadsheets exercises
In the remainder of this article, when xwhat is used, the information applies to all of the SCT packages listed above.
How it works
When a student starts an exercise on DataCamp, the coding backend:
Starts a student coding process, and executes the `pre_exercise_code` in this process. This code initializes the process with data, loads relevant packages, etc., so that students can focus on the topic at hand.
Starts a solution coding process at the same time, in which both the `pre_exercise_code` and the solution are executed. This coding process represents the 'ideal final state' of an exercise.
When students click *Submit Answer*, the coding backend:
Executes the submitted code in the student coding process and records any outputs or errors that are generated.
Tells xwhat to check the submitted code, by calling the `test_exercise()` function that is exposed by all of the SCT utility packages. Along with the SCT (the R/Python script with custom tests), the backend also passes the following information: (1) the student submission and the solution as text, (2) a reference to the student process and the solution process, and (3) the output and errors that were generated when executing the student code. If there is a failing test in the SCT, xwhat marks the submitted code as incorrect and automatically generates a feedback message. If all tests pass, xwhat marks the submitted code as correct and generates a success message. This information is relayed back to the coding backend.
Bundles the code output and the correctness information so that it can be shown in the learning interface.
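The steps above can be sketched as a toy backend in Python. This is a minimal illustration, not DataCamp's actual implementation: every name here (`run_process`, `check_exercise`, the feedback strings) is hypothetical, and the real xwhat packages do far more.

```python
# Toy sketch of the submit flow described above. All names and messages
# are hypothetical; the real backend and xwhat APIs are much richer.

def run_process(code, env=None):
    """Execute code in an isolated namespace, capturing any error."""
    env = env if env is not None else {}
    error = None
    try:
        exec(code, env)
    except Exception as exc:
        error = str(exc)
    return env, error

pre_exercise_code = "import math"   # initializes both processes
solution_code = "m = 5"

# 1. Student process: only the pre-exercise code runs up front.
student_env, _ = run_process(pre_exercise_code)

# 2. Solution process: pre-exercise code AND solution are executed,
#    producing the 'ideal final state' of the exercise.
solution_env, _ = run_process(pre_exercise_code)
solution_env, _ = run_process(solution_code, solution_env)

# 3. On "Submit Answer": run the submission, then hand the submission,
#    both processes, and any errors to the checker.
submission = "m = 4"
student_env, error = run_process(submission, student_env)

def check_exercise(student_env, solution_env, error):
    """Return (correct, feedback), mimicking an SCT run."""
    if error:
        return False, "Your code raised an error: " + error
    if "m" not in student_env:
        return False, "Did you define the variable `m` without errors?"
    if student_env["m"] != solution_env["m"]:
        return False, "The contents of the variable `m` aren't correct."
    return True, "Well done!"

correct, feedback = check_exercise(student_env, solution_env, error)
print(correct, feedback)
```

Note how the checker never needs the literal value `5`: it reads the expected state from the solution process, which is exactly what lets real SCTs stay short.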
To understand how SCTs affect the student's experience, consider an R exercise about variable assignment, titled "Create a variable". Its assignment text reads: "In this exercise, you'll assign your first variable. Create a variable `m`, equal to 5." The sample code, solution, and SCT are:

Sample code:

```r
# Create m
```

Solution:

```r
# Create m
m <- 5
```

SCT:

```r
ex() %>% check_object("m") %>% check_equal()
```
Let's look at what this SCT does for different code submissions:
`a <- 4`: A feedback box appears: "Did you define the variable `m` without errors?". This message is generated by `check_object()`, which checks whether `m` was defined in the student coding session.
`m <- 4` (correct variable name, incorrect value): A feedback box appears: "The contents of the variable `m` aren't correct.". This message is generated by `check_equal()`, which compares the value of `m` in the student coding session with the value of `m` in the solution coding session. Notice that there was no need to repeat the value `5` in the SCT; testwhat inferred it.
`m <- 5` (correct answer): All checks pass, and the success message "Well done!" is shown.
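The chained style of the SCT above, and the value inference the walkthrough mentions, can be mimicked with a minimal checker in Python. All names here are illustrative sketches, not the testwhat or pythonwhat implementation:

```python
# Minimal sketch of a chained check_object/check_equal-style API.
# Hypothetical and simplified; the real xwhat packages are far richer.

class State:
    def __init__(self, student_env, solution_env):
        self.student_env = student_env
        self.solution_env = solution_env
        self.name = None

    def check_object(self, name):
        # Fails with the "did you define ...?" message if missing.
        assert name in self.student_env, (
            f"Did you define the variable `{name}` without errors?")
        self.name = name
        return self

    def check_equal(self):
        # The expected value is read from the solution process, so the
        # SCT author never has to repeat it in the SCT itself.
        expected = self.solution_env[self.name]
        actual = self.student_env[self.name]
        assert actual == expected, (
            f"The contents of the variable `{self.name}` aren't correct.")
        return self

# The chain mirrors `ex() %>% check_object("m") %>% check_equal()`:
state = State(student_env={"m": 5}, solution_env={"m": 5})
state.check_object("m").check_equal()
print("Well done!")
```

Each link in the chain either raises a targeted feedback message or passes the (narrowed) state along, which is why the first failing check determines what the student sees.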
Want to learn more? Read up on best practices for writing great SCTs that are robust to the various ways of solving an exercise, yet specific about the mistakes students are making!