| Title: | An R Autograding Extension for Otter-Grader |
|---|---|
| Description: | An R autograding extension for Otter-Grader (<https://otter-grader.readthedocs.io>). It supports grading R scripts, R Markdown documents, and R Jupyter Notebooks. |
| Authors: | Christopher Pyles [aut, cre] |
| Maintainer: | Christopher Pyles <[email protected]> |
| License: | BSD_3_clause + file LICENSE |
| Version: | 1.5.1 |
| Built: | 2025-02-09 03:29:32 UTC |
| Source: | https://github.com/cran/ottr |
Execute the checks in a test file and return the TestFileResult object produced by running them. Optionally prints the results of the test to the console.
check(test_file, test_env, show_results)
test_file | Path to a test file
test_env | An environment against which to run tests
show_results | Whether to print the results to stdout
The parsed test results for the suite
## Not run:
check("tests/q1.R")
## End(Not run)
A collection of test file results created while grading an assignment
test_file_results
The TestFileResult objects created during grading
new()
Create a CheckCollector.
CheckCollector$new()
add_result()
Add a TestFileResult to this collector.
CheckCollector$add_result(test_file_result)
test_file_result
The TestFileResult to add
get_results()
Retrieve the list of TestFileResult objects stored in this collector.
CheckCollector$get_results()
The list of TestFileResult objects
clone()
The objects of this class are cloneable with this method.
CheckCollector$clone(deep = FALSE)
deep
Whether to make a deep clone.
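A brief sketch of the collector lifecycle using only the methods documented above; the TestFileResult added here is assumed to come from a check() call:

## Not run:
collector = CheckCollector$new()
collector$add_result(check("tests/q1.R"))  # check() returns a TestFileResult
length(collector$get_results())
## End(Not run)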
An environment into which a collector will be initialized (so we don't need to update global variables).
collector_env
An object of class environment of length 0.
The name of the active collector variable in collector_env.
collector_varname
An object of class character of length 1.
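Together, these two objects suggest that the active collector is stored under a fixed name inside collector_env. A minimal, assumed sketch of that lookup (in ordinary use, call get_collector() instead):

## Not run:
get(collector_varname, envir = collector_env)
## End(Not run)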
Execute a string as an R script and return the environment from that execution.
Converts a string to an AST and executes that script in a dummy environment for running test cases against. Transforms all expressions of the form . = ottr::check(...) by replacing the . with an index into a list in the environment with name check_results_{SECRET} to collect the TestFileResult objects generated from those checks. (This helps to handle variable name collisions in tests when grading a script.)
execute_script(script, ignore_errors)
script | The string to be executed
ignore_errors | Whether to ignore errors thrown while executing the script
The global environment after executing the script
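A minimal sketch of executing a script string and reading a variable back out of the returned environment (ignore_errors is passed explicitly, since no default is documented here):

env = execute_script("x = 1 + 1", ignore_errors = FALSE)
env$x  # expected to be 2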
Export a submission to a zip file for submitting. If indicated, a PDF of the submission is generated and included in the zip file. (PDF generation is only supported for Rmd and ipynb files.)
export(
  submission_path,
  export_path = NULL,
  display_link = TRUE,
  pdf = FALSE,
  force_save = FALSE,
  debug = FALSE
)
submission_path | The path to the submission
export_path | The path at which to write the zip file (optional)
display_link | Whether to display a download link
pdf | Whether to include a PDF of the submission (only works for Rmd and ipynb files)
force_save | Whether to attempt to force-save the notebook if running on Jupyter
debug | Whether to stop on PDF generation errors
## Not run:
export("hw01.ipynb")

# with pdf
export("hw01.ipynb", pdf = TRUE)
## End(Not run)
Retrieve the global CheckCollector.
get_collector()
Execute a script, parse check outputs, and run additional tests specified by the glob pattern tests_glob on the test environment.
grade_script(script_path, tests_glob, ignore_errors)
script_path | The path to the script
tests_glob | The pattern to search for extra tests
ignore_errors | Whether to ignore errors thrown while executing the script
The GradingResults object after executing tests referenced in the script and those specified by tests_glob
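An illustrative call (the paths and glob pattern are placeholders), wrapped in the dontrun style used elsewhere in this reference:

## Not run:
results = grade_script("hw01.R", "tests/*.R", ignore_errors = TRUE)
results$to_json()
## End(Not run)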
A collection of test file results produced by grading a single submission.
test_file_results
The TestFileResult objects that make up this grading result
new()
Create a grading result.
GradingResults$new(test_file_results)
test_file_results
The TestFileResult objects that make up this grading result
to_list()
Convert these results to a JSON-like list that can be converted to a GradingResults object by Otter's Python library. The returned list has the JSON format:
{
  "test_file_results": [
    { // output of TestFileResult$to_list }
  ]
}
GradingResults$to_list()
The generated list
to_json()
Export these results to a JSON string.
GradingResults$to_json()
The JSON string
clone()
The objects of this class are cloneable with this method.
GradingResults$clone(deep = FALSE)
deep
Whether to make a deep clone.
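A hand-assembled sketch; normally a GradingResults is produced by grade_script(), and tfr here stands in for a TestFileResult (documented below):

## Not run:
gr = GradingResults$new(list(tfr))
cat(gr$to_json())
## End(Not run)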
Create a new global CheckCollector.
initialize_collector()
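A short sketch pairing initialize_collector() with get_collector(); that the result list starts out empty is an assumption:

initialize_collector()
collector = get_collector()
collector$get_results()  # assumed to be an empty list at this point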
Load test case data from a test file. Executes the file and grabs the global test variable, which should be a list.
load_test_cases(test_file)
test_file | The path to the test file
The test cases
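For illustration, a test file under this scheme defines a global test list. The field names used here (name, cases) are assumptions modeled on the TestCase class below, not a confirmed schema:

# tests/q1.R (illustrative)
test = list(
  name = "q1",
  cases = list(
    ottr::TestCase$new("q1a", {
      testthat::expect_true(q1.ans)
    })
  )
)

## Not run:
cases = load_test_cases("tests/q1.R")
## End(Not run)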
Run the autograder in a Gradescope container and return the results as a properly formatted JSON string.
run_autograder(script_path, ignore_errors, test_dir)
script_path | The path to the script
ignore_errors | Whether to ignore errors thrown while executing the script
test_dir | A directory of tests to glob from
The JSON string
## Not run:
run_autograder("hw01.R", TRUE, "tests")
## End(Not run)
Determine whether this R session is running on Jupyter by checking for a CommManager in IRkernel.
running_on_jupyter()
A boolean indicating whether IRkernel is running.
## Not run:
running_on_jupyter()
## End(Not run)
Attempt to save the notebook by displaying JavaScript if running on Jupyter. This function waits until the modification time of the file has changed or until the specified timeout expires.
save_notebook(nb_path, timeout = 10)
nb_path | The path to the notebook
timeout | Number of seconds to wait for the save
A boolean indicating whether the file was saved successfully. If Jupyter is not running, this function returns TRUE.
## Not run:
save_notebook("foo.ipynb")
## End(Not run)
A test case for Ottr. Contains configurations and code to be executed for the test.
name
The name of the test case
code
The code to be executed as part of the test case
points
The point value of the test case
hidden
Whether the test case is hidden
success_message
A message to show to students if the test passes
failure_message
A message to show to students if the test fails
new()
Create a test case.
TestCase$new(
  name,
  code,
  points = 1,
  hidden = FALSE,
  success_message = NA,
  failure_message = NA
)
name
The name of the test case
code
The code to be executed as part of the test case
points
The point value of the test case
hidden
Whether the test case is hidden
success_message
A message to show to students if the test passes
failure_message
A message to show to students if the test fails
run()
Run the test case against the provided environment.
TestCase$run(env)
env
The environment to run the test case in
to_list()
Convert this test case to a JSON-compatible list with all of its fields.
TestCase$to_list()
The list representation of this test case
clone()
The objects of this class are cloneable with this method.
TestCase$clone(deep = FALSE)
deep
Whether to make a deep clone.
tc = TestCase$new("q1", {
  testthat::expect_true(q1.ans)
})
env = new.env()
env$q1.ans = TRUE
tc$run(env)
Represents the results of running a test case against a global environment. Contains metadata about the passing/failing of the test case as well as a reference to the test case itself.
passed
Whether the test passed
error
An error raised by executing the test, if any
test_case
The TestCase that this result tracks
new()
Create a test case result.
TestCaseResult$new(passed, error, test_case)
passed
Whether the test passed
error
An error raised by executing the test, if any
test_case
The TestCase that this result tracks
get_score()
Get the score earned for this test case, accounting for whether the test passed or failed.
TestCaseResult$get_score()
The score
repr()
Convert this result into a human-readable string for display.
TestCaseResult$repr()
The string representation of this result
to_list()
Convert this result to a JSON-compatible list with all of its fields.
TestCaseResult$to_list()
The list representation of this result
get_message()
Get the message to be displayed to the student based on whether the test case passed or failed, if any.
TestCaseResult$get_message()
The message or NA
clone()
The objects of this class are cloneable with this method.
TestCaseResult$clone(deep = FALSE)
deep
Whether to make a deep clone.
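A short sketch constructing a result by hand; results are normally produced by running a test file, so the values here are purely illustrative:

tc = TestCase$new("q1", { testthat::expect_true(TRUE) }, points = 2)
tcr = TestCaseResult$new(passed = TRUE, error = NULL, test_case = tc)
tcr$get_score()  # assumed to equal the test case's point value when it passed
cat(tcr$repr())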
A collection of test case results that correspond to a single test file.
test_case_results
The TestCaseResult objects that make up this test file
filename
The name of the test file
points
The point value of the test file or a list of test case point values
new()
Create a test file result.
TestFileResult$new(filename, test_case_results, points = NULL)
filename
The name of the test file
test_case_results
The TestCaseResult objects that make up this test file
points
The point value of the test file or a list of test case point values
get_basename()
Get the basename of the file this result corresponds to.
TestFileResult$get_basename()
The basename of the test file
get_score()
Get the total score earned for this test file as a percentage. Uses TestCaseResult$get_score() to determine the points earned for each test case.
TestFileResult$get_score()
The score as a percentage.
repr()
Convert this result into a human-readable string for display.
TestFileResult$repr()
The string representation of this result
to_list()
Convert this result to a JSON-compatible list with all of its fields.
TestFileResult$to_list()
The list representation of this result
clone()
The objects of this class are cloneable with this method.
TestFileResult$clone(deep = FALSE)
deep
Whether to make a deep clone.
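Continuing the sketch above (tcr is the TestCaseResult from the previous example); the commented return values are assumptions:

tfr = TestFileResult$new("tests/q1.R", list(tcr))
tfr$get_basename()  # presumably "q1.R"
tfr$get_score()     # the score as a percentage, per get_score() above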
Update calls to ottr::check in an AST. Traverse an AST (a list of expressions) and change calls of the form . = ottr::check(...) so that they are appended to a list with name list_name. If list_name is check_results_XX, then . = ottr::check(...) becomes check_results_XX[[<int>]] = ottr::check(...), where <int> is an integer.
update_ast_check_calls(tree, list_name)
tree | The tree to traverse
list_name | The quoted name of the list
The tree with substitutions made
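A hedged sketch of the transformation, assuming tree is the list of expressions produced by parse() and list_name is a quoted symbol:

tree = as.list(parse(text = '. = ottr::check("tests/q1.R")'))
new_tree = update_ast_check_calls(tree, quote(check_results_XX))
new_tree[[1]]
# expected to print: check_results_XX[[1]] = ottr::check("tests/q1.R")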
Determine whether a code snippet has any syntax errors.
valid_syntax(script)
script | The code snippet
Whether the code snippet is valid (i.e., whether it can be parsed with parse)
s = " a = TRUE b = c(1, 2, 3) d = function(x) x ^ 2 f = d(b) " valid_syntax(s) #> [1] TRUE s = " if (TRUE) { a = c(1, 2) " valid_syntax(s) #> [1] FALSE s = " a = TRUE b = c(1, 2, 3) d = function(x) x ^ 2 f = d(b) " valid_syntax(s) #> [1] TRUE s = " if (TRUE) { a = c(1, 2) " valid_syntax(s) #> [1] FALSE
s = " a = TRUE b = c(1, 2, 3) d = function(x) x ^ 2 f = d(b) " valid_syntax(s) #> [1] TRUE s = " if (TRUE) { a = c(1, 2) " valid_syntax(s) #> [1] FALSE s = " a = TRUE b = c(1, 2, 3) d = function(x) x ^ 2 f = d(b) " valid_syntax(s) #> [1] TRUE s = " if (TRUE) { a = c(1, 2) " valid_syntax(s) #> [1] FALSE