After you have isolated a low-performing exercise using the Dashboard's exercise metrics, the next step is to look at the code diffs in the Issues/Feedback/Incorrect Attempts tab.

Code diffs show exactly that: the difference between the code the student submitted and the solution code.

Keep three things in mind:

  • Not every diff represents an incorrect answer.
  • Not every difference causes the student's answer to be counted as incorrect.
  • Not every answer that is counted as incorrect is really incorrect.

Not every piece of feedback comes with an incorrect submission.

A student submitted a feedback message on exercise 4.10 in “Introduction to SQL”. Their diff looked like this:

Does that mean that our exercise is case-sensitive, and the student’s answer using lowercase select and from was not accepted?

Not necessarily, because code diffs only show differences in text. They do not -- and cannot -- draw conclusions about the correctness of the submitted code (just as a GitHub diff doesn't show you which part of the code causes a certain test to fail!).

Going to the exercise and checking the student's code shows that their solution was actually accepted.

In other cases, the exercise might indeed be case-sensitive due to an oversight from us. The only way to know is to check the suspect code yourself in the live exercise.
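To see why lowercase keywords alone usually aren't the problem: in virtually every SQL engine, keywords are case-insensitive. A minimal sketch using Python's built-in sqlite3 module (the table name and data are made up for illustration, not taken from the actual exercise):

```python
import sqlite3

# Hypothetical table standing in for the exercise's dataset.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE films (title TEXT, duration INTEGER)")
conn.execute("INSERT INTO films VALUES ('Casablanca', 102)")

# The same query, once with uppercase keywords and once with lowercase.
upper = conn.execute("SELECT title FROM films").fetchall()
lower = conn.execute("select title from films").fetchall()

# SQL keywords are case-insensitive, so both queries return the same rows.
assert upper == lower
```

Whether the SCT itself treats the two submissions the same is a separate question, which is why checking in the live exercise is still necessary.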

Not everything in the diff is the cause of failure.

76 students have submitted the following response to exercise 2.8 in “Introduction to PySpark”:

Does our SCT only accept double quotes, and mark single quotes as incorrect?

If we open the exercise, take the solution, and change the double quotes to single quotes without changing anything else, we'll see that the answer IS accepted. So single quotes are not the problem, even though they are highlighted in the diff.

The real culprit is the incorrect argument to sum(): students don't understand which column they are supposed to use to find the total number of hours in a flight (probably because we didn't do a good job describing the dataset!). This should be resolved by the instructor.
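A quick way to convince yourself that quoting style is a red herring: in Python (and therefore in PySpark code), single- and double-quoted strings are identical, while passing the wrong column name to an aggregation changes the result. A minimal sketch with made-up column names and data (not the actual exercise's dataset):

```python
# Single and double quotes produce identical strings in Python,
# so the quoting style alone cannot make an answer incorrect.
assert "air_time" == 'air_time'

# What does matter is which column is aggregated.
# Hypothetical rows standing in for the flights dataset:
flights = [{"air_time": 120, "distance": 800},
           {"air_time": 90,  "distance": 600}]

total_air_time = sum(row["air_time"] for row in flights)   # intended column
total_distance = sum(row["distance"] for row in flights)   # wrong column

# Summing the wrong column gives a completely different total.
assert total_air_time != total_distance
```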

The only way to know is to check the suspect code yourself in the live exercise.

A good way to start figuring out which part of the code failed is to look at the SCT message attached to the incorrect attempt in the “Incorrect attempts” tab.

Sometimes actual correct answers are counted as incorrect.

2.5% of all incorrect submissions to exercise 3.5 in “Introduction to SQL” have this diff:

Why is the answer counted as incorrect?

  • Is it case sensitivity? No, as we can verify by going into the exercise and submitting the correct solution with all keywords in lowercase.
  • Is it the missing space after avg_duration_hours? No, as we can verify by going into the exercise and removing the space from the solution.
  • Is it the missing semicolon? No, as we can verify in the same way.

The real problem is that students submitted avg(duration/60.0) instead of avg(duration)/60.0. While the results of the two expressions differ slightly due to rounding, both should be counted as correct. In such cases, the instructor should contact Content Quality.
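The two expressions are mathematically equivalent; any discrepancy comes from floating-point rounding. A minimal sketch using Python's built-in sqlite3 module (the table, column, and values are made up for illustration):

```python
import sqlite3
from math import isclose

# Hypothetical films table; duration is stored in minutes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE films (duration INTEGER)")
conn.executemany("INSERT INTO films VALUES (?)", [(102,), (95,), (121,)])

# The two submissions students wrote:
per_row = conn.execute("SELECT AVG(duration / 60.0) FROM films").fetchone()[0]
of_avg  = conn.execute("SELECT AVG(duration) / 60.0 FROM films").fetchone()[0]

# avg(x/60) == avg(x)/60 mathematically; any difference is
# floating-point rounding, so both answers should be accepted.
assert isclose(per_row, of_avg)
```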
