Course "Multiple Choice with Console" exercises

Learn what a Multiple Choice with Console exercise is and how it differs from a Multiple Choice exercise.

Written by Amy Peterson
Updated over 3 years ago

While a Multiple Choice exercise doesn't allow the learner to write any code, a Multiple Choice with Console exercise provides a console that the student can use to test out code before submitting an answer. Typically, in a Multiple Choice with Console exercise, you will ask the learner to write or run some code, look at its output, and then choose an answer from the list provided.

Multiple Choice with Console exercises are ideally suited for debugging exercises and "what if?" questions. For example, you can give the learner code that draws a graph, ask them to log-scale the axes, and then ask a question about the change in the data's appearance.
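
For instance, the code pre-loaded in the console for such an exercise might look something like the sketch below (a hypothetical illustration; the data, the exponential-growth example, and the use of matplotlib are assumptions, not taken from any particular course):

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical sample data showing exponential growth
x = np.arange(1, 101)
y = np.exp(x / 10)

# The learner runs the plot as given, then log-scales the y-axis,
# e.g. with plt.yscale("log"), and compares the two pictures
plt.plot(x, y)
plt.show()

On a log-scaled y-axis the exponential curve appears as a straight line, which is exactly the kind of observation the answer options would then ask about.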

Guidelines

Instructions are the same as for Multiple Choice exercises.

  • A course may contain at most three Multiple Choice or Multiple Choice with Console exercises if it includes other types of conceptual exercises, and at most five if it does not.

  • No back-to-back Multiple Choice or Multiple Choice with Console exercises.

  • Coding exercises should be chosen over Multiple Choice with Console exercises, which should themselves be chosen over Multiple Choice exercises. DataCamp students should spend most of their time on coding.

Writing the success message for Multiple Choice with Console exercises

Multiple Choice with Console exercises require the instructor to write a message for each incorrect answer as well as a success message for the correct answer. In addition, the correct answer is specified in a language-specific function. The syntax differs slightly for Python, SQL, and R courses; an example of each is shown below. Note that only the messages and the function call are shown, not the corresponding instructions.

Python exercise

msg1 = "Although this is listed in the _Zen of Python_, it is not the 7th idiom."
msg2 = "Yes, we prefer beautiful code over ugly code, but this isn't the idiom we are looking for."
msg3 = "That's correct! Python has a design philosophy that emphasizes readability. Throughout the course, we'll see that writing efficient Python code goes hand in hand with writing code that is easy to understand. Faster code is good, but faster & readable code is best!"
msg4 = "We do love Python, but this isn't listed in the idioms of the _Zen of Python_."

# Note: has_chosen() uses one-based indexing, so correct=3 marks msg3 (the success message) as the right answer
Ex().has_chosen(correct=3, msgs=[msg1, msg2, msg3, msg4])


SQL exercise

# Error and success messages
err_msg1 = "Half of this answer is correct."
err_msg2 = "Make sure to count the number of rows and columns in each table to compare."
success_msg = "Great start!  Knowing how much data you have is a first step in exploratory data analysis."
err_msg4 = "Half of this answer is correct."

# Note that, despite being Python code, this function uses one-based indexing!
Ex().has_chosen(
  correct = 3,
  msgs = [err_msg1, err_msg2, success_msg, err_msg4]
)


R exercise

msg1 <- "This is incorrect. Based on the value of the year estimate, we can tell that 5% of the 77 countries experienced a decrease in life expectancy."
msg2 <- "Nope, did you sort your data correctly? Based on the estimate, Oman experienced the fastest growth in life expectancy."
msg3 <- "You got it! Based on these models, we can conclude that 73 of the 77 countries experienced a growth in life expectancy during this time period."
msg4 <- "Nope, there is at least one incorrect answer in this list."
msg5 <- "Nope, there is at least one correct answer in this list."
# check_mc() also uses one-based indexing; the third feedback message is the success message
ex() %>% check_mc(3, feedback_msgs = c(msg1, msg2, msg3, msg4, msg5))