Instructors write tests (using nose for Python projects and testthat for R projects) to grade the output of student code.
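For a Python project, a hypothetical @tests cell might look like the sketch below. The variable name and expected value are assumptions for illustration; nose simply collects and runs functions whose names start with test_.

```python
# Stand-in for a value the student's @solution cell would produce.
total = sum([1, 2, 3])

# nose discovers and runs functions whose names start with "test_".
def test_total_is_correct():
    assert total == 6, "total should equal 6, the sum of 1, 2, and 3."
```

The assertion message is what the student sees when the check fails, so it is worth making it descriptive.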

Because projects are authored locally, then pushed to GitHub, then reflected on DataCamp (as described in the project build flow article), your tests may succeed locally but fail on DataCamp, or vice versa. This discrepancy is usually caused by software version differences. For example, your code may produce different outputs if it uses ggplot2 version 2.3.0 locally vs. version 3.0.0 on DataCamp, and the tests you wrote only pass with the output of the former version.
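One quick way to spot such mismatches in a Python project is to print the locally installed version of each package the project depends on and compare it with the version installed in the DataCamp build. A minimal sketch, using "pip" only as a placeholder package name:

```python
import importlib.metadata

# Print the locally installed version of a package so it can be compared
# with the version on DataCamp ("pip" is a placeholder; substitute the
# package your project actually depends on).
print(importlib.metadata.version("pip"))
```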

If you ensure your local software environment matches the one set up by the project's requirements file(s) (e.g. requirements.R), you shouldn't need to check that your tests pass both locally and on DataCamp. In practice, though, it's good to check both places, as described below.

Running and checking your tests locally

To check your tests locally, run the @solution cell and then the @tests cell for each project task in your Jupyter notebook.
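Mimicking the grader locally amounts to executing the two cells in order: the @solution cell first, then the @tests cell. A minimal sketch of one Python task (the variable and expected value are hypothetical):

```python
# --- @solution cell: code the student is expected to write ---
answer = len("DataCamp")

# --- @tests cell: nose-style check on the solution's output ---
def test_answer():
    assert answer == 8, "answer should be the length of 'DataCamp', i.e. 8."

# Calling the test function by hand confirms the task passes locally.
test_answer()
print("Task tests passed")
```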

Checking your tests on DataCamp

To check that your tests work on DataCamp, inspect the Teach dashboard, which is linked from your private GitHub repository set up by DataCamp. If your tests pass, you will see "[TESTING] All tests passed" in the latest build log.

If your tests don't pass, you will see something like "[TESTING] Test failed due to a backend error: The project validation failed on task x: tests raised an error" in the latest build log. In that case, inspect your software installation both locally and in the project's requirements file(s) (e.g. requirements.R). Copying and pasting the solution code into the project preview on DataCamp and debugging there can also be helpful.
