What is an SCT?
To learn why a Submission Correctness Test (typically referred to as an SCT) exists, what it looks like, and who writes one, read this article first. As mentioned there, an SCT is a script of custom tests that accompanies every coding exercise. These tests have access to the code a student submitted, along with the output and the workspace that code produced. For every taught language, there is an open-source library that provides a wide range of functions to verify these elements of a student submission. When a function spots a mistake, it automatically generates a meaningful feedback message.
Below, you can find a list of all the utility packages used to write SCTs. If you're authoring R exercises, you write your SCT in R. If you're building Python, SQL, Shell, or Spreadsheets exercises, you write your SCT in Python. The documentation pages for each package list all of its functions, with examples and best practices:
R: testwhat (docs)
Python: pythonwhat (docs)
SQL: sqlwhat (docs)
Shell: shellwhat (docs)
Spreadsheets: sheetwhat (docs)
In the remainder of this article, xwhat is used whenever the information applies to all of the SCT packages listed above.
How it works
When a student starts an exercise on DataCamp, the coding backend:
1. Starts a student coding process and executes the `pre_exercise_code` in this process. This code initializes the process with data, loads relevant packages, etc., so that students can focus on the topic at hand (a hypothetical sketch follows this list).
2. Starts a solution coding process at the same time, in which both the `pre_exercise_code` and the `solution` are executed. This coding process represents the 'ideal final state' of an exercise.
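For instance, the `pre_exercise_code` for an R exercise might look like the sketch below. The package and dataset preparation shown here are hypothetical; any setup code can go in this block:

```{r}
# Hypothetical pre_exercise_code: runs in both the student process
# and the solution process before any student code is executed.
library(dplyr)                  # load a package the exercise relies on
cars_small <- head(mtcars, 10)  # prepare data so students can start right away
```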
When students click Submit Answer, the coding backend:
1. Executes the submitted code in the student coding process and records any outputs or errors that are generated.
2. Tells xwhat to check the submitted code by calling the `test_exercise()` function that is exposed by all of the SCT utility packages. Along with the SCT (the R/Python script with custom tests), the backend also passes the following information: (1) the student submission and the solution as text, (2) a reference to the student process and the solution process, and (3) the output and errors that were generated when executing the student code. If a test in the SCT fails, xwhat marks the submitted code as incorrect and automatically generates a feedback message; if all tests pass, xwhat marks the submitted code as correct and generates a success message. Either way, this information is relayed back to the coding backend (see the SCT sketch after this list).
3. Bundles the code output and the correctness information so that it can be shown in the learning interface.
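For instance, an R SCT usually chains several checking functions. The first check that fails determines the feedback message the student sees, and the success message appears only once every check has passed. The object name below is hypothetical:

```{r}
# Hypothetical SCT: each chained check either passes silently
# or fails with an automatically generated feedback message.
ex() %>%
  check_object("result") %>%  # was a variable `result` defined without errors?
  check_equal()               # does its value match the one in the solution process?
success_msg("Great job!")     # shown only if all checks above passed
```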
Example
To understand how SCTs affect the student's experience, consider the markdown source for an R exercise about variable assignment:
## Create a variable
```yaml
type: NormalExercise
```
In this exercise, you'll assign your first variable.
`@instructions`
Create a variable `m`, equal to 5.
`@sample_code`
```{r}
# Create m
```
`@solution`
```{r}
# Create m
m <- 5
```
`@sct`
```{r}
ex() %>% check_object("m") %>% check_equal()
success_msg("Well done!")
```
Let's look at what this SCT does for different code submissions:
- Student submits `a <- 4`: a feedback box appears with "Did you define the variable `m` without errors?". This message is generated by `check_object()`, which checks if `m` was defined in the student coding session.
- Student submits `m <- 4` (correct variable name, incorrect value): a feedback box appears with "The contents of the variable `m` aren't correct.". This message was generated by `check_equal()`, which compares the value of `m` in the student coding session with the value of `m` in the solution coding session. Notice that there was no need to repeat the value `5` in the SCT; testwhat inferred it from the solution process.
- Student submits `m <- 5` (correct answer): all checks pass, and the message "Well done!" is shown, as specified in `success_msg()`.
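The automatically generated messages can also be replaced with custom ones. As a sketch, assuming `check_equal()` accepts an `incorrect_msg` argument for custom feedback (as recent testwhat versions do), the SCT above could be made more specific:

```{r}
# Sketch: override the auto-generated feedback with a custom message.
# Assumes check_equal() accepts an `incorrect_msg` argument.
ex() %>%
  check_object("m") %>%
  check_equal(incorrect_msg = "Have another look: `m` should equal 5.")
success_msg("Well done!")
```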
Want to learn more? Read up on best practices for writing great SCTs that are robust to the various ways students can solve an exercise, yet specific about the mistakes they make!