Instructors write tests (using nose for Python projects and testthat for R projects) to grade the output of student code.
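For Python projects, a nose-style test is simply a function whose name starts with test_ and that asserts something about the objects the student's code creates. Here is a minimal sketch; the object name clean_df and the expected values are hypothetical, not from a real project:

```python
# Hypothetical nose-style tests for one Python project task.
# nose collects and runs any function whose name starts with "test_".
# These checks assume the student's code has already created an object
# called `clean_df` in the notebook session.
import pandas as pd

def test_clean_df_type():
    assert isinstance(clean_df, pd.DataFrame), \
        "clean_df should be a pandas DataFrame."

def test_clean_df_rows():
    assert len(clean_df) == 150, \
        "clean_df should contain 150 rows after filtering."
```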

Because projects are authored locally, pushed to GitHub, and then reflected on datacamp.com (as described in the project build flow article), your tests may succeed locally but fail on datacamp.com, or vice versa. The usual cause is a mismatch in software versions. For example, your code may produce different output with ggplot2 version 2.3.0 locally than with version 3.0.0 on datacamp.com, and the tests you wrote may only pass against the output of the former.

In theory, if your local software environment matches the one set up by requirements.sh and/or requirements.R, you don't need to check that your tests pass both locally and on datacamp.com. In practice, it's still good to check both, as described below.
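One quick way to catch a version mismatch in a Python project is to print the versions of the packages your tests depend on and compare them with the versions installed by requirements.sh. A minimal sketch; the package names below are only examples:

```python
# Print the locally installed versions of packages the project depends on,
# then compare them with the versions installed by requirements.sh.
import importlib

for package in ["pandas", "numpy", "matplotlib"]:  # example package names
    try:
        module = importlib.import_module(package)
        print(package, getattr(module, "__version__", "unknown"))
    except ImportError:
        print(package, "not installed")
```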

Running and checking your tests locally

To check that your tests pass locally, process the @solution cell and then the @tests cell for each project task in your Jupyter notebook. A demo:

https://www.useloom.com/share/1dc647f7b58340cfa7adcadf5244937b
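Processing the cells in that order means the objects created by the @solution cell exist in the notebook session before the @tests cell checks them. A schematic sketch for a hypothetical Python task, with the cell boundaries shown as comments:

```python
# --- @solution cell: run first so its objects exist in the session ---
import pandas as pd

summary_df = pd.DataFrame({"species": ["setosa", "virginica"],
                           "count": [50, 50]})

# --- @tests cell: run second; nose-style checks grade the objects above ---
def test_summary_df_columns():
    assert list(summary_df.columns) == ["species", "count"], \
        "summary_df should have the columns species and count."
```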

Checking your tests on datacamp.com

To check that your tests pass on datacamp.com, inspect the Teach dashboard, which is linked in the README.md file in your private GitHub repository set up by DataCamp. If your tests pass, you will see "[TESTING] All tests passed" in the latest build log.

If your tests don't pass, you will see something like "[TESTING] Test failed due to a backend error: The project validation failed on task x: tests raised an error" in the latest build log. In that case, compare the packages installed locally with those specified in the requirements.sh and/or requirements.R files. Copying and pasting solution code into the project preview on datacamp.com and debugging there can also be helpful.
