ReFrame¶
Description¶
ReFrame is a Python-based unit/regression testing framework. It allows developers to write tests as Python classes and then run those tests with a command-line utility.
Environment Modules¶
Run module spider reframe to find out what environment modules are available for this application.
Environment Variables¶
- HPC_REFRAME_DIR - installation directory
- HPC_REFRAME_BIN - executable directory
- RFM_CONFIG_FILES - default configuration files directory
Additional Usage Information¶
For particularly complex topics such as parameters and pipeline hooks, please view the documentation for ReFrame.
Command line usage¶
To run a basic test with ReFrame, load the module and use the following command:
reframe -C <path/to/config> -c <path/to/test.py> -r
Common and useful options¶
Option | Function
---|---
-h, --help | Shows a help message containing a full list of options.
-C <path> | Indicates the configuration file (.py or .yaml) to be used by this run of ReFrame.
-c <path> | Indicates a specific Python file containing ReFrame tests, or a directory to check for ReFrame tests.
-L, --list-detailed | Lists all tests that ReFrame will run with the given options, along with the path to their origin file.
-l, --list | Lists all tests that ReFrame will run, along with their dependencies.
-r, --run | Tells ReFrame to run a test or set of tests.
--dry-run | Performs a "dry run" of the tests (creates the relevant files but does not submit them for execution).
--exec-policy | Determines the execution policy. async, the default, runs tests asynchronously; serial runs tests in sequence.
--skip-sanity-check | Skips the sanity checking phase of a run.
-s <path>, --stage <path> | Sets the test staging directory.
-o <path>, --output <path> | Sets the test output directory.
--report-file <file> | Sets the file path to write the JSON report to.
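As an illustrative sketch, these options can be combined. Assuming a configuration file named hpg_config.py and a directory of tests named my_tests/ (both hypothetical names), the following would first list the matching tests and then perform a dry run without submitting anything for execution:
reframe -C hpg_config.py -c my_tests/ -l
reframe -C hpg_config.py -c my_tests/ --dry-run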
Reading the results¶
With the current intended configuration, ReFrame generates the following files upon a run:
- A report of the run, stored as a timestamped JSON file in the rfm_reports subdirectory of the scratch directory.
- A SQLite database, rfm_results.db, which is created if it does not exist and is updated with the results of each run.
- For each individual test:
    - If the test succeeds, the stdout, stderr, and job script are stored in the output subdirectory of the scratch directory.
    - If the test fails, the staging directory for the test is retained.
When run from the command line, ReFrame will inform the user of the staging and output directories, in addition to the location of the results database.
Python package usage¶
Loading the ReFrame module also loads Python 3.12 and makes the ReFrame Python package available. To use ReFrame in a Python file, add the following header:
import reframe as rfm
import reframe.utility.sanity as sn
from reframe.core.builtins import sanity_function, run_after
Creating tests¶
To create a test, create a Python class that extends rfm.RegressionTest, rfm.RunOnlyRegressionTest, or rfm.CompileOnlyRegressionTest. RunOnlyRegressionTest will likely be what most users need. Remember to decorate the class with @rfm.simple_test so ReFrame registers it as a test.
@rfm.simple_test
class ReFrameSampleTest(rfm.RunOnlyRegressionTest):
    valid_systems = ['default']
    valid_prog_environs = ['default']
    executable = 'echo'
    executable_opts = ['"Hello World!"']
In the above test, executable specifies the main command for the test, with executable_opts representing the provided options. valid_systems and valid_prog_environs represent which system and environment (specified in the configuration file) the test is allowed to run in, and should typically be left as "default".
Additional commands¶
A test can also have commands that run before and after the executable command, represented by lists called prerun_cmds and postrun_cmds. When working with pre- and post-run commands, note that each entry is a complete command string; the options are not separated from the command as they are with executable and executable_opts.
prerun_cmds = ['echo "1"']
executable = 'echo'
executable_opts = '"2"'
postrun_cmds = ['echo "3"']
Sanity checking¶
All ReFrame tests require a function decorated as a @sanity_function that returns a boolean value to perform sanity checking with. ReFrame comes with a host of built-in sanity function helpers within reframe.utility.sanity. A sanity function can contain any number of these helpers.
# This sanity function checks that the exit code of the job == 0.
@sanity_function
def _validate_ec(self):
    return sn.assert_eq(0, self.job.exitcode)
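As another minimal sketch, a sanity function can check the test's standard output using the built-in assert_found helper; the expected string here is hypothetical and should match whatever your test actually prints:
# Checks that the expected string appears in the test's stdout file
@sanity_function
def _validate_output(self):
    return sn.assert_found(r'Hello World!', self.stdout)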
Sources directory¶
ReFrame will use a staging directory to perform the commands within a test. Providing a sourcesdir is important for tests that involve file operations, as it allows ReFrame to copy that directory into the staging directory.
sourcesdir = '/blue/sample-user/test_files'
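As a minimal sketch (the file names below are hypothetical), a test that copies its inputs from sourcesdir can then reference them by relative path, since the commands run inside the staging directory:
executable = 'cat'
executable_opts = ['input.txt']  # input.txt is copied from sourcesdir into the staging directory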
Loading modules¶
ReFrame will load any modules provided in the modules field using Lmod.
modules = ['intel', 'openmpi']
Keeping files post-run¶
Sometimes, a user may want or need to keep files generated during a run of a test. These files can be specified using keep_files.
keep_files = ['keep_this_file.txt']
Scheduler options¶
Do not set the --output or --error scheduler options when working with ReFrame tests. ReFrame relies on these in order to get the stdout and stderr for a test. Additionally, be careful about the usage of --partition.
ReFrame supports setting job options for the Slurm scheduler programmatically, but requires a custom function in order to do so. Create a function that runs after the setup stage of the ReFrame pipeline and set self.job.options
to a list of strings representing the desired scheduler options.
@run_after('setup')
def _set_scheduler_options(self):
    self.job.options = [
        '--mem=8gb',
        '--cpus-per-task=4',
        '--ntasks=1'
    ]
Test template¶
import reframe as rfm
import reframe.utility.sanity as sn
from reframe.core.builtins import sanity_function, run_before, run_after
# Give this test class a unique name
@rfm.simple_test
class TemplateTest(rfm.RunOnlyRegressionTest):
sourcesdir = '<PATH/TO/SOURCE>' # Change this to your source directory for the test
valid_systems = ['default'] # Change this if using a unique config
valid_prog_environs = ['default'] # Change this if needing a specific environment (e.g. bigmem)
modules = [] # Place any environment modules that need to be loaded here
env_vars = {} # Place any environment variables here in 'key':'value' format
prerun_cmds = [] # Place any commands that run before the executable here
executable = '<EXECUTABLE>' # Change this to your main executable
executable_opts = [] # Place options for the main executable here
postrun_cmds = [] # Place any commands that run after the executable here
# Basic pipeline hooking function that sets SLURM scheduler options
@run_after('setup')
def _set_scheduler_options(self):
self.job.options = [] # Place SLURM scheduler options here
# Basic validation checks that the exit code is equal to 0
# Replace with more robust sanity checking if needed
@sanity_function
def _basic_validation(self):
return sn.assert_eq(self.job.exitcode, 0)
Categories¶
performance_analysis, utility