The Code Test widget is designed to assess coding proficiency. Its interface lets learners write code and test it against predefined test cases, and it provides feedback on whether the code passes all of them.

You can design custom test cases to evaluate the learner’s solution, tailored to the specific requirements and objectives of the coding task. By defining test cases that reflect the desired outcomes and edge cases, you can reliably evaluate the learner’s code for correctness.

Edit mode

Here’s the editor view of the widget:

We’ll discuss each component shown in the above illustration, from top to bottom:

  • Language dropdown: This allows you to select the language in which the learner’s code will be evaluated.

  • “Enable Languages”: This is a drop-down menu from which you can select the allowed languages in which the learner can write their code.

  • “Add Sample Code”: This button resets the code to the sample code.

  • “Import Data Structures”: This button opens a view of the import statements required to use the in-built implementations of the linked list or binary tree data structures in the code.

  • “Function”: The name of the function to be evaluated is specified here. The sum function will be evaluated from the code in the above example.

  • “Timeout (sec)”: The time limit before the code execution times out is specified here.

  • “Memory (KB)”: The available memory the learner can utilize in their code is specified here.

  • “Define Class Name”: This feature will be explained in the additional features section.

  • “Define Input”: The data type of your input parameter(s) is selected from these dropdown menus. The “-” and “+” buttons control the expected number of parameters. In the above example, there are two inputs which are both of type “Integer”.

  • “Define Output”: The data type of your return value(s) is selected from these dropdown menus. The “-” and “+” buttons control the expected number of return values. In the above example, there is one output of type “Integer”.

  • “Input”: This opens a view of how the input(s) must be specified in the test cases.

  • “Output”: This opens a view of how the output(s) must be specified in the test cases.

  • “Evaluation Function”: This feature will be explained in the additional features section.

  • “Code Feedback”: This feature will be explained in the additional features section. It is currently not available in the courses by default. You will need to contact an admin to enable this feature in your course.

  • “Visualizer”: This feature will be explained in the additional features section.

  • “Add File”: This button allows you to add a new file. This file can be used for different purposes, such as providing utility functions. 

  • List of files: These are the currently available files. The “👁” button to the left of each file can be toggled on or off, depending on whether the file needs to be displayed to the learner.

  • Code interface: This is the interface on which you write code.

  • “Run”: This button evaluates the learner’s code against all the specified test cases, both public and private (explained below).

  • “Public Test Cases”: This tab shows the test cases visible to the learner.

  • “Private Test Cases”: This tab shows the test cases hidden from the learner.

  • “Results”: This tab shows the number of test cases that our code passed after the “Run” button is clicked.
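
To make this concrete, consider the sum example above, which takes two “Integer” inputs and returns one “Integer” output. Assuming each input parameter is written on its own line (an assumption for illustration; the exact format is shown by the “Input” and “Output” buttons described above), a public test case for it might be defined as:

Input:
2
3

Output:
5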

Additional features

Custom data structures

If the input and/or output data types of a function are not among the available built-in data types, you will need to create a class in the respective language that implements the data type. Common data types that require this include:

  • 2-D array

  • Stack

  • Queue

  • Set

  • Integer Arrays (Java)

Python example

Consider a problem where the learner has to implement a Python function, add_matrices, that takes two 2D matrices as input and returns the matrix resulting from their sum.

Both the input and output types in the problem above are 2-D integer arrays. This data type is not among the code test widget’s built-in data types, so we must create a separate class for it. Here are the steps required to create this class:

  • Under the “Custom Data Structures” heading, add the name of your class in the “Define Class Name” field, and click the “+” button. For this example, we will be naming the class TwoD.

  • Once the class is added, a “Define File Name” text field will appear to its right. Enter the name of the file in which the above class will be implemented. The file name must match the class name, i.e., TwoD.py, as shown in the figure below:

  • Next, select the “TwoD” type when defining the input and output, as shown in the figure below:

  • Below these data type options, you will see the default implementation of your file (TwoD.py in this case). You have to implement the construct and traverse functions in this file in the following way:

    • construct: This function takes two arguments: value and type_of_arr. You can ignore the second argument as it is irrelevant to the code you will write. The value argument is the input defined in the test cases in string form. You must convert value to the required data type (a 2D array, in this case) using a JSON parser.

    • traverse: This function takes two arguments: input and type_of_arr. You can ignore the second argument as it is irrelevant to the code you will write. The input argument is the data structure returned by the function to be evaluated (add_matrices in this case). You must convert input to a string using a JSON parser. This string will be compared to the string defined in the output section of the test cases.

Observe the file related to the class, i.e., TwoD.py in the widget below. Under the “TwoD.py-default” tab is the default code. Follow the code in the “TwoD.py-updated” tab to see how to make the widget work:

import sys
import json
sys.path.append("../ds_v1")
from data_structure import DataStructure

# throw parsing errors for incorrect test case format
def parsingError(element_string, error_message):
    sys.stderr.write(f"Parsing Error: {element_string} {error_message}")
    exit()

# className: MyCustomDS
class MyCustomDS(DataStructure):
    def __init__(self):
        # constructor
        pass

    def __str__(self):
        # override print() for your data structure
        pass

    # return value will be passed to user program
    def construct(self, value: str, type_of_arr: int) -> any:
        # parse test case in json string 'value' to your desired data structure
        pass

    # input param contains return value of user program
    def traverse(self, input: any, type_of_arr: int) -> str:
        # convert your data structure 'input' to a json string representing a test case output
        pass
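
For reference, here is a minimal sketch of what the updated construct and traverse implementations could look like for this example, assuming the JSON-based conversion described above (the “TwoD.py-updated” tab in the widget remains the authoritative version):

import sys
import json
sys.path.append("../ds_v1")
from data_structure import DataStructure

# className: TwoD
class TwoD(DataStructure):
    def __init__(self):
        # constructor
        pass

    def __str__(self):
        # printing is not needed for this example
        return ""

    # return value will be passed to the user program
    def construct(self, value: str, type_of_arr: int) -> any:
        # 'value' holds a test case input as a JSON string, e.g., "[[1, 2], [3, 4]]"
        return json.loads(value)

    # input param contains the return value of the user program
    def traverse(self, input: any, type_of_arr: int) -> str:
        # convert the 2-D array returned by add_matrices back to a JSON string
        return json.dumps(input)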

  • We set up the code interface as follows:
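
For illustration, a reference implementation of add_matrices in the code interface might look like this (a hypothetical solution; the actual interface typically ships with a placeholder return statement instead):

def add_matrices(matrix1, matrix2):
    # Add the corresponding elements of the two input matrices
    rows, cols = len(matrix1), len(matrix1[0])
    return [[matrix1[i][j] + matrix2[i][j] for j in range(cols)] for i in range(rows)]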

  • As with basic testing, you can specify inputs and outputs for custom classes in both the public and private test cases. Here’s an example below:
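
For instance, assuming each input is written on its own line as a JSON string (as in the format example above), a public test case for add_matrices might look like this:

Input:
[[1, 2], [3, 4]]
[[5, 6], [7, 8]]

Output:
[[6, 8], [10, 12]]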


Evaluation function

Suppose you’re designing a problem where the learner has to generate and return an array of all possible permutations of a given string. The order of the elements (permutations) within the returned array does not matter.

In the example above, the widget’s in-built evaluation function would simply compare the expected output defined in the test cases with the learner’s actual output. Suppose that, for the input string “ab”, the learner’s function returns [“ba”, “ab”] while the defined test case expects [“ab”, “ba”]. The learner’s code will fail this test case because the order of elements in the returned array does not match the order defined in the test case.

Therefore, if the output does not follow a strict order, check the “Evaluation Function” option, which creates an “Evaluation function” tab at the bottom of the widget. In this tab, you will modify the existing evaluation function of the widget that compares the expected and actual outputs. The evaluation function will always be in Python, irrespective of the selected language. By doing this, you can skip writing the output in the public/private test cases. An example of this is shown in the figure below:

A common strategy is to sort the expected and actual outputs in the same order. Here's how we will modify the existing evaluation file:

def evaluate_results(self) -> object:
    """
    self.__inputs is the list of inputs that are passed to the function
    self.__actual_outputs is the list of outputs produced by the learner's code
    return: the self.__edu_result object
    """
    # multiple outputs will be added at each index respectively for a single test_case
    expected_outputs = []
    # store the 1st expected output that should be displayed to the user
    expected_outputs.append(self.__inputs[0] + self.__inputs[1])
    if json.dumps(self.__inputs[0] + self.__inputs[1]) == json.dumps(self.__actual_outputs[0]):
        self.__edu_result.store_result(expected_outputs, TestCaseResult.PASS)
    else:
        self.__edu_result.store_result(expected_outputs, TestCaseResult.FAIL)
    return self.__edu_result
  • We write the correct Python solution to the problem in the solution function, which calculates the expected outputs.

  • We then sort and compare the expected and actual outputs for each test case. If they match, the learner’s function passes for that test case; otherwise, it fails. A rough sketch of this approach follows.
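
As a sketch, the sort-based comparison for the permutations problem might look like the following inside evaluate_results. The solution helper is illustrative (assumed to be implemented elsewhere in the evaluation file), and the attribute names follow the template above:

def evaluate_results(self) -> object:
    # Compute the expected permutations with a trusted reference solution
    # (self.solution is a hypothetical helper implemented elsewhere in this file)
    expected_outputs = [self.solution(self.__inputs[0])]
    # Sort both lists so that element order does not affect the comparison
    if sorted(expected_outputs[0]) == sorted(self.__actual_outputs[0]):
        self.__edu_result.store_result(expected_outputs, TestCaseResult.PASS)
    else:
        self.__edu_result.store_result(expected_outputs, TestCaseResult.FAIL)
    return self.__edu_result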

Code feedback

The code feedback feature allows you to leverage generative AI to evaluate the learner’s code.

This is not enabled by default. You will need to contact the admin at authors@educative.io to enable it in your course.


Visualizer

When enabled, this feature allows the learner to view a diagram of each input in preview mode. Here are the data types supported by this feature:

  • Array

  • Linked list

  • Binary tree

  • Hash map

You can check how this feature will be displayed to the learner in the “Viewing the visualizer” section.

Preview mode

Here’s the preview of the widget:

Viewing the test cases

You can view the inputs for each public test case (defined in the “Public Test Cases” tab in the editor mode) by clicking the corresponding “Case” button. In the below example, each test case has two inputs, and there are a total of two public test cases:


Similar to the edit mode of the widget, clicking on the "Run" button evaluates the learner's code against both public and private test cases.

A common convention is to verify the created test cases by pasting the correct solution into the preview mode and checking that it passes all of the defined test cases. If the code fails against one or more test cases, you will see the following messages:

The message in red tells us the number of passed test cases. In this case, none of them pass:

  • “Reason”: This field displays the reason why our code failed against a particular test case. In the above example, “Incorrect Output” means that our code does not return the correct output against the test case.

  • “Input”: This field displays the inputs for the test case that our code failed to satisfy.

  • “Output”: This field displays the returned incorrect output of our code against the test case.

  • “Expected”: This field displays the correct output of the test case.

  • “Console”: This field displays the output of any statements that we print to the console. A common convention is to use it for trace statements while debugging the code.

This indicates that either the solution we pasted is incorrect or the test case in which our solution failed does not have the correct expected output.

Otherwise, the code is correct and passes all the test cases, and you will see the following messages:

The message in green tells us that all test cases (four in this example) have passed:

  • “Input”: This field displays the input(s) of the last private test case that our code passes.

  • “Output”: This field displays the correct output(s) returned by our code for the last private test case.

  • “Expected”: This field displays the correct output of the last private test case.

  • “Console”: This is the same field that was explained above.

Viewing the code feedback

After specifying the prompt-label pairs in the edit mode of the widget, you can view how they’ll be displayed to the user after clicking the “Run” button and then clicking the “Get feedback on your code” button, as shown by the figure below:

Based on the example in the “Code feedback” section above, the learner's code will either pass or fail the test cases.


Viewing the visualizer

After enabling the “Visualizer” option in the widget’s edit mode, the learner can view a diagram of the test case inputs. Below is an example of a test case that contains a binary tree:

  • The “👁” button allows the learner to decide whether to view the visualization or not. If all the buttons are toggled off, the regular view of the test cases shown in the “Viewing the test cases” section will be displayed.

  • The arrow buttons allow the learner to toggle between the inputs.

Published mode

Here’s the published view of the widget:

We’ll discuss the “Run” and “Submit” buttons in the above illustration, as this behavior is not available in the preview mode of the widget:

  • “Run”: This button evaluates the learner’s code against the public test cases.

  • “Submit”: This button evaluates the learner’s code against the private test cases.

Configuration

The purpose of this widget is to present the user with an interface called a challenge, on which they can write the solution to a given problem.

How to create a challenge

Here are the key steps required to create this interface:

  • Select the appropriate language(s).

  • Write the name of the function to be evaluated in the “Function” field.

  • Set the timeout period to 55s in the “Timeout (sec)” field.

  • Set the appropriate input and output types.

  • In the code interface, place a dummy return statement as the last line of the function to be evaluated. This statement should return a value of the same type as the specified output. Above this line, write the comment “Replace this placeholder return statement with your code.” This lets the code run without errors if the “Run” button is clicked before any additional code is written (see the sketch after this list).

  • Create both public and private test cases in their respective tabs:

    • We prefer that you limit the public test cases to five or fewer.

    • Include both the public and private test cases in the “Private Test Cases” tab so that, when the “Submit” button is clicked, the user’s function is comprehensively evaluated against all test cases.

  • Ensure that the data types selected while defining the evaluation criteria match the test cases. While designing the problem, you can verify your code against the test cases by clicking the “Run” button and checking the “Results” tab.

  • Check the correctness of the created test cases by pasting the correct solution in the preview mode.
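
For example, for the sum function used earlier, the placeholder might look like this (an illustrative sketch):

def sum(a, b):
    # Replace this placeholder return statement with your code
    return 0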

Test your knowledge of data structures and algorithms by implementing the solutions in Python for the below challenges.

Challenge 1

Implement the two_sum function that takes an array of integers, arr, and a target, t. Return an array of the two indices whose values add up to the target t. You can’t use the same index twice, and there will be exactly one solution.

Note: The order of the indices in the returned array doesn’t matter.

Python
usercode > main.py
def two_sum(arr, t):
    # Replace this placeholder return statement with your code
    return []
Two Sum