PyNVMe3 Test Suites

Last Modified: November 21, 2025

Copyright © 2020-2025 GENG YUN Technology Pte. Ltd.
All Rights Reserved.


Suite: scripts/conformance

folder: scripts/conformance/01_admin

file: scripts/conformance/01_admin/abort_test

function: scripts/conformance/01_admin/abort_test.py::test_dut_firmware_and_model_name

Log controller identity information by reading the Identify Controller data, then format the namespace so subsequent tests start from a known state.

Reference

  1. NVM Express Revision 1.4a, Figure 249.

Steps

  1. Log controller model number retrieved from Identify data
  2. Log controller firmware revision for traceability
  3. Report additional controller and namespace properties
  4. Format the namespace to ensure subsequent tests start from a clean state

function: scripts/conformance/01_admin/abort_test.py::test_abort_specific_aer_command

Validate abort handling by issuing an AER command, aborting it via CID, and confirming the AER completion reports the aborted status.

Reference

  1. NVM Express Revision 1.4a, Section 5.1.

Steps

  1. Track asynchronous event request abort status for verification
  2. Define an AER callback that records the abort status from the completion entry
  3. Submit an AER command and record the issued command identifier
  4. Issue the abort targeting the AER command and expect an aborted completion status
  5. Confirm that the callback observed the aborted AER command identifier

function: scripts/conformance/01_admin/abort_test.py::test_abort_abort_command

Verify nested abort handling by issuing abort commands against other abort commands and AERs, ensuring each completes successfully or reports status 00h/07h (Command Abort Requested) as expected.

Reference

  1. NVM Express Revision 1.4a, Section 5.1.

Steps

  1. Skip the test if the controller supports fewer than two outstanding abort commands
  2. Issue an abort command and abort it with a second abort, expecting clean completions
  3. Repeat the sequence using an invalid CID to ensure abort handling stays consistent
  4. Abort an AER with two nested abort commands and expect the AER to report an aborted completion
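
The aborted outcomes in these steps are carried in the completion queue entry's Status Field. As a minimal sketch of that field's NVMe 1.4a layout (the decoder itself is a hypothetical helper, not part of the test scripts):

```python
def decode_status(dword3: int) -> dict:
    """Decode the Status Field from completion queue entry DWORD 3 (NVMe 1.4a)."""
    status = (dword3 >> 17) & 0x7FFF      # Status Field occupies bits 31:17
    return {
        "sc":   status & 0xFF,            # Status Code (bits 7:0 of the field)
        "sct":  (status >> 8) & 0x7,      # Status Code Type (bits 10:8)
        "crd":  (status >> 11) & 0x3,     # Command Retry Delay
        "more": bool((status >> 13) & 1), # More info in the Error Information log
        "dnr":  bool((status >> 14) & 1), # Do Not Retry
    }

# Command Abort Requested is Generic Command status (SCT 0h) with SC 07h,
# the "00/07" pairing an aborted command is expected to report.
aborted = decode_status(0x07 << 17)
```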

function: scripts/conformance/01_admin/abort_test.py::test_abort_io_burst

Stress bursty abort behavior by posting many writes plus a flush, delaying briefly, and aborting the flush via SQID and CID.

Reference

  1. NVM Express Revision 1.4a, Section 5.1.

Steps

  1. Create dedicated submission and completion queues for the burst test
  2. Post 100 writes and a flush before ringing the doorbell once
  3. Wait for the configured delay before issuing the abort
  4. Abort the flush command using its CID and queue identifier
  5. Reap all completions to ensure none remain outstanding
  6. Delete the queues now that verification has completed

file: scripts/conformance/01_admin/aer_test

function: scripts/conformance/01_admin/aer_test.py::test_aer_limit_exceeded

Validate the controller enforces the Identify AER limit by submitting one more request than allowed.

Reference

  1. Specification: NVM Express Revision 1.4a.

Steps

  1. Read the AER limit from the Identify Controller data.
  2. Issue the maximum number of AER commands allowed by the controller.
  3. Submit one additional AER command to verify the limit is enforced.
  4. Abort each outstanding AER command and expect a successful completion.
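
The limit exercised in step 2 comes from the Identify Controller data. Per NVMe 1.4a, AERL (byte 259) is a 0's based value, so the number of concurrently outstanding AERs is AERL + 1; a small sketch (hypothetical helper, not from the test scripts):

```python
def aer_limit(identify_ctrl: bytes) -> int:
    """Maximum concurrently outstanding AER commands.

    AERL lives at byte 259 of the Identify Controller data and is 0's
    based (NVMe 1.4a), so a raw value of 3 permits 4 outstanding AERs.
    """
    return identify_ctrl[259] + 1

# A 4096-byte Identify buffer advertising AERL = 3:
buf = bytearray(4096)
buf[259] = 3
limit = aer_limit(bytes(buf))
```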

function: scripts/conformance/01_admin/aer_test.py::test_aer_no_timeout

Confirm an AER command stays outstanding without timeout before being aborted by the host.

Reference

  1. Specification: NVM Express Revision 1.4a.

Steps

  1. Submit one AER command without waiting for completion.
  2. Observe the command for 15 seconds without expecting completion or timeout.
  3. Abort the outstanding AER command and confirm the abort succeeds.

function: scripts/conformance/01_admin/aer_test.py::test_aer_sanitize

Verify sanitize completion events trigger AER by issuing sanitize commands while monitoring notifications.

Reference

  1. Specification: NVM Express Revision 1.4a.

Steps

  1. Skip test execution if sanitize is not supported.
  2. Log the sanitize capabilities so the test report shows controller support.
  3. Post one AER command to capture the sanitize completion notice.
  4. Issue a sanitize command to trigger the sanitize operation.
  5. Poll the sanitize log until completion while watching for the AER notification.
  6. Confirm the sanitize log reports a completed state.
  7. Run a sanitize flow once to ensure the completion triggers an AER.
  8. Repeat the sanitize flow to ensure consistent AER behavior across runs.

function: scripts/conformance/01_admin/aer_test.py::test_aer_mask_event

Ensure masking SMART health events suppresses the AER notification even when a thermal event is forced.

Reference

  1. Specification: NVM Express Revision 1.4a.

Steps

  1. Post an AER command to capture any future asynchronous notifications.
  2. Mask the SMART/Health AER source so temperature changes should not raise notices.
  3. Read the SMART log to capture the current composite temperature.
  4. Program the temperature threshold below the current value to force a warning condition.
  5. Read SMART logs while confirming no AER notification is triggered.
  6. Ensure the SMART critical warning bit reflects the triggered temperature alarm.
  7. Restore the original temperature threshold and AER mask configuration.

function: scripts/conformance/01_admin/aer_test.py::test_aer_fw_activation_starting

Validate Firmware Activation Starting notices raise AERs by enabling the notice and activating firmware.

Reference

  1. Specification: NVM Express Revision 1.4a.

Steps

  1. Skip the test when firmware management commands are unsupported.
  2. Skip when Optional Asynchronous Event Support does not advertise firmware notices.
  3. Configure the controller to unmask Firmware Activation Starting notices.
  4. Skip if the controller rejected the configuration change.
  5. Arm an AER command with a callback to validate the Firmware Activation Starting notice.
  6. Activate an existing firmware slot to trigger the notice.
  7. Read the firmware log page to clear the notice.
  8. Restore the original AER configuration.

function: scripts/conformance/01_admin/aer_test.py::test_aer_event_no_aer

Ensure no notifications occur when no AER command is outstanding even if a thermal threshold is crossed.

Reference

  1. Specification: NVM Express Revision 1.4a.

Steps

  1. Initialize the controller without posting any AER commands while enabling all notices.
  2. Confirm that temperature events are supported before forcing thresholds.
  3. Get the current temperature from the SMART log without generating any notices.
  4. Drop the temperature threshold below the current reading to trigger a warning condition.
  5. Wait briefly and confirm no AER notification is triggered without an outstanding command.
  6. Verify the SMART critical warning bit reflects the condition despite the lack of an AER.
  7. Restore the original temperature threshold.

function: scripts/conformance/01_admin/aer_test.py::test_aer_abort_all_aer_commands

Exercise abort behavior by filling all AER slots plus one and aborting every outstanding command.

Reference

  1. Specification: NVM Express Revision 1.4a.

Steps

  1. Disable all asynchronous event notifications to control the test flow.
  2. Determine the maximum number of concurrently outstanding AER commands.
  3. Define a callback template for verifying completion status if hook support is available.
  4. Fill every available AER slot with outstanding commands.
  5. Submit one more command to ensure the controller flags the exceeded limit.
  6. Abort every command and expect the controller to complete the aborts successfully.
  7. Write to an invalid doorbell register and confirm that no new AER notification appears.

function: scripts/conformance/01_admin/aer_test.py::test_aer_temperature

Confirm both over- and under-temperature thresholds raise AERs by moving the limits around the current reading.

Reference

  1. Specification: NVM Express Revision 1.4a.

Steps

  1. Post an AER command to receive temperature-related notices.
  2. Capture the original over- and under-temperature thresholds for later restoration.
  3. Enable every asynchronous event type so temperature notices can be delivered.
  4. Allocate a SMART log buffer that will hold the composite temperature data.
  5. Sample the current composite temperature from the SMART log.
  6. Force an over-temperature condition by lowering the high threshold below the reading.
  7. Read the SMART log to acknowledge and clear the over-temperature event.
  8. Restore the original high threshold and verify no further AERs occur.
  9. Force an under-temperature condition by raising the low threshold above the reading.
  10. Read the SMART log again to acknowledge the under-temperature event.
  11. Ensure the SMART critical warning still signals the temperature event.
  12. Restore the original low temperature threshold configuration.
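
The threshold moves in these steps map onto CDW11 of Set Features FID 04h. Assuming the NVMe 1.4a layout (TMPTH in bits 15:0 in Kelvin, TMPSEL in bits 19:16, THSEL in bits 21:20 with 00b over- and 01b under-temperature), a hedged sketch of the encoding:

```python
def temp_threshold_cdw11(kelvin: int, tmpsel: int = 0, under: bool = False) -> int:
    """Build CDW11 for Set Features FID 04h, Temperature Threshold (NVMe 1.4a).

    TMPTH  (bits 15:0)  - threshold in Kelvin
    TMPSEL (bits 19:16) - 0 selects the composite temperature sensor
    THSEL  (bits 21:20) - 00b over-temperature, 01b under-temperature
    """
    assert 0 <= kelvin <= 0xFFFF and 0 <= tmpsel <= 0xF
    return kelvin | (tmpsel << 16) | ((1 if under else 0) << 20)

def celsius_to_kelvin(celsius: int) -> int:
    # The SMART log reports the composite temperature in Kelvin.
    return celsius + 273

# Force an over-temperature event by programming the high threshold just
# below the current composite reading, then an under-temperature event by
# raising the low threshold just above it.
reading = celsius_to_kelvin(40)
over_cdw11 = temp_threshold_cdw11(reading - 5)
under_cdw11 = temp_threshold_cdw11(reading + 5, under=True)
```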

function: scripts/conformance/01_admin/aer_test.py::test_aer_doorbell_invalid_register

Check invalid doorbell register writes raise an AER by deleting a queue and writing to its tail doorbell.

Reference

  1. Specification: NVM Express Revision 1.4a.

Steps

  1. Post an AER command so the controller can raise notices.
  2. Create an I/O completion queue and a paired submission queue.
  3. Delete the submission queue to make its doorbell invalid.
  4. Write the deleted SQ doorbell to trigger the invalid doorbell register event.
  5. Read the error information log page to clear the event.
  6. Delete the completion queue to clean up resources.

function: scripts/conformance/01_admin/aer_test.py::test_aer_doorbell_out_of_range

Trigger the invalid doorbell value AER by writing out-of-range SQ tail values after creating admin I/O queues.

Reference

  1. Specification: NVM Express Revision 1.4a.

Steps

  1. Post an AER command so the controller can report invalid doorbell writes.
  2. Create an I/O queue pair sized to expose tail pointer bounds.
  3. Write an out-of-range SQ tail value to force the invalid doorbell value status.
  4. Read the error information log page to clear the notice.
  5. Delete the SQ and CQ to free resources.
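
The doorbell targeted in step 3 sits at a register offset derived from CAP.DSTRD. Assuming the NVMe 1.4a register map (doorbells begin at offset 0x1000 with a stride of 4 << DSTRD bytes), a sketch of the offset arithmetic:

```python
def sq_tail_doorbell(qid: int, dstrd: int) -> int:
    """Byte offset of submission queue qid's tail doorbell (NVMe 1.4a).

    Doorbell registers begin at offset 0x1000; consecutive registers are
    (4 << CAP.DSTRD) bytes apart. SQ qid uses slot 2*qid, the paired CQ
    head doorbell uses slot 2*qid + 1.
    """
    return 0x1000 + (2 * qid) * (4 << dstrd)

def cq_head_doorbell(qid: int, dstrd: int) -> int:
    return 0x1000 + (2 * qid + 1) * (4 << dstrd)

# With DSTRD = 0, I/O queue 1's SQ tail doorbell sits at 0x1008; writing a
# tail value at or beyond the queue size there is what raises the invalid
# doorbell value event this test looks for.
offset = sq_tail_doorbell(1, 0)
```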

file: scripts/conformance/01_admin/dst_test

function: scripts/conformance/01_admin/dst_test.py::test_dst_short_valid_namespace

Validate a short device self-test against a valid namespace and inspect the Device Self-test log for progress.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Issue a short device self-test for the requested namespace
  2. Read the Device Self-test log to confirm the short test is reported as in progress

function: scripts/conformance/01_admin/dst_test.py::test_dst_extended_processing

Exercise an extended device self-test on a valid namespace and verify the log reports the extended run.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Issue an extended device self-test for the namespace
  2. Read the log page to ensure the extended test is reported as in progress

function: scripts/conformance/01_admin/dst_test.py::test_dst_short_time

Measure a short device self-test duration to ensure completion occurs within the mandated two-minute limit.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Launch a short device self-test on all namespaces and record the start time
  2. Poll the Device Self-test log until the operation completes or a timeout is hit
  3. Assert the elapsed wall clock time does not exceed two minutes

function: scripts/conformance/01_admin/dst_test.py::test_dst_invalid_namespace

Send a device self-test to invalid namespace identifiers and confirm the controller reports Invalid Namespace or Format.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Attempt the device self-test with each invalid namespace identifier and expect an Invalid Namespace or Format status

function: scripts/conformance/01_admin/dst_test.py::test_dst_invalid_stc

Issue device self-tests with unsupported STC values and ensure the controller fails them with Invalid Field in Command.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Attempt the DST with invalid STC values and expect an Invalid Field in Command status from the controller

function: scripts/conformance/01_admin/dst_test.py::test_dst_in_progress

Run concurrent device self-tests to confirm later commands fail with Device Self-test in Progress while the first test runs.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Launch the first DST operation and wait for it to start
  2. Reissue the DST and expect Device Self-test in Progress status
  3. Attempt another DST to ensure the busy status persists
  4. Retry once more to confirm every new command fails while the first test is active

function: scripts/conformance/01_admin/dst_test.py::test_dst_in_progress_abort_dst

Abort an in-progress device self-test using STC Fh and verify the log indicates the cancelled operation.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Start the first DST command and read the Device Self-test log
  2. Issue the abort DST command to cancel the running test
  3. Read the Device Self-test log page to confirm the previous run was aborted
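
The aborted verdict in step 3 is read from the newest Self-test Result entry in log page 06h. Assuming the NVMe 1.4a result encoding (result in bits 3:0 of the entry's status byte, the self-test code that ran in bits 7:4), a sketch of the decode:

```python
DST_RESULTS = {
    0x0: "completed without error",
    0x1: "aborted by a Device Self-test command",
    0x2: "aborted by a Controller Level Reset",
    0x3: "aborted due to removal of a namespace",
    0x4: "aborted by a Format NVM command",
    0x5: "fatal or unknown test error",
    0x6: "completed with a failed segment (segment unknown)",
    0x7: "completed with one or more failed segments",
    0x8: "aborted for unknown reason",
    0x9: "aborted by a sanitize operation",
}

def decode_dst_status(status_byte: int):
    """Decode byte 0 of a Self-test Result entry (log page 06h, NVMe 1.4a)."""
    result = status_byte & 0xF       # bits 3:0: self-test result
    stc = (status_byte >> 4) & 0xF   # bits 7:4: code that ran (1h short, 2h extended)
    return DST_RESULTS.get(result, "reserved"), stc

# A short DST (code 1h) cancelled via STC Fh should report result 1h:
desc, stc = decode_dst_status(0x11)
```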

function: scripts/conformance/01_admin/dst_test.py::test_dst_short_abort_by_controller_reset

Trigger a controller-level reset during a short device self-test and verify the log records an abort due to reset.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Confirm no device self-test is running before starting the scenario
  2. Start a short DST and ensure it is reported as running
  3. Issue a controller reset to abort the running short DST
  4. Verify the controller reports no device self-test in progress
  5. Validate the newest result indicates a reset-induced abort
  6. Confirm the log shows no active DST before issuing a cleanup sequence
  7. Start another short DST to clean up and abort it explicitly

function: scripts/conformance/01_admin/dst_test.py::test_dst_abort_by_format

Start a device self-test and then run Format NVM to confirm the self-test log shows the abort due to formatting.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Issue a device self-test for the namespace and verify it remains active when short
  2. Format the namespace to force the device self-test to abort
  3. Read the Device Self-test log to confirm the format aborted the run

function: scripts/conformance/01_admin/dst_test.py::test_dst_abort_by_format_fna_0

Validate that Format NVM aborts any running device self-test when FNA is cleared and runs across nsid combinations.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Skip the scenario when the controller implements a spec version earlier than 1.4
  2. Read the current LBA format identifier for later format commands
  3. Skip when the FNA bit is not 0 so the scenario remains applicable
  4. Issue a short DST on namespace 1 to begin the first scenario
  5. Format namespace 1 while the DST is active and wait briefly
  6. Confirm the log reports the format-induced abort
  7. Run another short DST on namespace 1 to validate formatting all namespaces
  8. Format the entire subsystem to abort the short DST
  9. Check that the abort result matches the previous case
  10. Issue a short DST against all namespaces for cross-checking
  11. Format the entire subsystem while the DST is running
  12. Ensure the abort reason reflects the format action
  13. Run another short DST on all namespaces to test formatting a single namespace
  14. Format namespace 1 while the global DST is active
  15. Verify the device self-test log captures the mixed-namespace abort

function: scripts/conformance/01_admin/dst_test.py::test_dst_abort_by_format_fna_1

Validate the Format NVM abort behavior for both specific and global namespaces when FNA is set.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Skip the scenario when the controller advertises an NVMe revision earlier than 1.4
  2. Skip when the controller reports FNA is clear
  3. Cache the current LBA format identifier for the format command
  4. Issue a short DST using the parametrized namespace ID
  5. Format the specified namespace to abort the running DST
  6. Inspect the Device Self-test log to confirm the abort reason

function: scripts/conformance/01_admin/dst_test.py::test_dst_abort_by_sanitize

Start a sanitize operation while a device self-test is running and ensure the log records the sanitize-induced abort.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Skip when the NVMe spec version is below 1.4
  2. Skip when sanitize operations are not supported by the controller
  3. Define a helper callback to highlight sanitize notification events
  4. Issue a DST command so the sanitize can abort it
  5. Check the Device Self-test log to confirm the DST is active
  6. Issue the sanitize command to force the DST to abort
  7. Wait until the sanitize operation completes while tracking notifications
  8. Clear the sanitize event by reading the log page
  9. Verify the Device Self-test log indicates a sanitize-induced abort

function: scripts/conformance/01_admin/dst_test.py::test_dst_after_sanitize

Attempt to start a device self-test during an in-progress sanitize command and expect Device Self-test in Progress errors.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Skip the scenario when sanitize is not supported
  2. Start a sanitize operation to occupy the controller
  3. Skip when the sanitize operation already finished before issuing DST
  4. Issue a DST command that should be aborted by the in-progress sanitize
  5. Monitor the sanitize status and capture the associated AER
  6. Clear the sanitize event by reading the log page

function: scripts/conformance/01_admin/dst_test.py::test_dst_abort_by_command

Start a short device self-test, issue the abort opcode, and confirm the log reflects the abort result.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Issue a short DST and confirm it is running
  2. Send the abort DST opcode and allow background update
  3. Refresh the Device Self-test log after the abort
  4. Check that the newest result shows an abort while the current status is idle

function: scripts/conformance/01_admin/dst_test.py::test_dst_extended_time_limit

Measure an extended device self-test to ensure it completes within the Extended Device Self-test Time from Identify Controller.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Retrieve and validate the Extended Device Self-test Time from Identify Controller
  2. Force PS0 and disable APST to keep timing deterministic
  3. Verify no device self-test is running prior to issuing the extended test
  4. Issue the extended DST and note the start time
  5. Poll the Device Self-test log until the extended DST finishes
  6. Monitor progress while enforcing a guard timeout
  7. Ensure the elapsed time is under the advertised EDSTT figure
  8. Check that the log records a successful extended DST completion
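
The EDSTT bound checked in step 7 is read from Identify Controller bytes 317:316 and is expressed in minutes (NVMe 1.4a). A sketch of turning it into the guard timeout, with illustrative helper names:

```python
import struct
import time

def edstt_minutes(identify_ctrl: bytes) -> int:
    """Extended Device Self-test Time: Identify Controller bytes 317:316, in minutes."""
    return struct.unpack_from("<H", identify_ctrl, 316)[0]

# e.g. an Identify buffer advertising EDSTT = 30 gives an 1800-second budget
buf = bytearray(4096)
struct.pack_into("<H", buf, 316, 30)
budget_s = edstt_minutes(bytes(buf)) * 60
deadline = time.monotonic() + budget_s  # guard timeout for the polling loop
```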

function: scripts/conformance/01_admin/dst_test.py::test_dst_with_ioworker

Run a short device self-test while applying varying IO workloads to ensure completion occurs under two minutes.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Start the IO stress workload and keep track of the total elapsed time
  2. Issue the short DST while IO traffic is running
  3. Poll the Device Self-test log until the DST finishes or times out
  4. Reset the controller once the DST completes to clear the environment
  5. Warn when the overall runtime exceeds the short DST two-minute requirement

function: scripts/conformance/01_admin/dst_test.py::test_dst_extended_not_abort_by_flr_reset

Perform an FLR during an extended device self-test and ensure the test either continues or finishes normally.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Check the PCIe capability to ensure FLR is supported
  2. Confirm there is no in-progress DST before the test begins
  3. Start an extended DST operation and allow it to progress
  4. Read the log to ensure the extended DST is still running
  5. Perform the FLR and reset to recover the controller
  6. Confirm the extended DST is either ongoing or reported as a normal completion

function: scripts/conformance/01_admin/dst_test.py::test_dst_extended_not_abort_by_controller_level_reset

Reset the controller while an extended device self-test runs and verify the operation persists across the reset.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Start an extended DST operation and allow it to progress
  2. Check the Device Self-test log to verify the DST is running
  3. Reset the controller via CC.EN to introduce a controller-level reset
  4. Confirm the Device Self-test log still reports the DST in progress

function: scripts/conformance/01_admin/dst_test.py::test_dst_short_abort_by_flr_reset

Issue an FLR during a short device self-test and confirm the test is aborted with the correct status.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Check FLR capability support before running the scenario
  2. Ensure there is no device self-test in progress beforehand
  3. Start a short DST and confirm it is running
  4. Poll the log once more prior to issuing FLR
  5. Perform an FLR and reset the controller to apply the effect
  6. Verify the short DST is aborted with the appropriate status code

file: scripts/conformance/01_admin/features_test

function: scripts/conformance/01_admin/features_test.py::test_features_fid_0

Validate Feature Identifier 0 rejects Set Features through an expected error status.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Issue Set Features for Feature Identifier 0 and expect an Invalid Field status

function: scripts/conformance/01_admin/features_test.py::test_features_sel_00

Confirm Select value 0 returns the current attribute by issuing Get/Set Features.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Write and read Feature Identifier 4 with Select 0 to capture the current value
  2. Validate the change took effect and restore the original configuration

function: scripts/conformance/01_admin/features_test.py::test_features_sel_01

Verify Select values 1 and 2 report default and saved attributes and reset restores defaults.

Reference

  1. Source: NVM Express Revision 1.4c.

Steps

  1. Confirm the Temperature Threshold current setting does not persist across power events
  2. Check whether the feature is saveable
  3. Verify the current operating attribute matches the default value
  4. Issue a Get Features command for the saved value and confirm it returns the default value
  5. Issue Set Features to change the current operating attribute
  6. Verify the current operating attribute is set correctly
  7. Verify the default attribute value is unchanged
  8. Confirm the default value is restored after a Controller Level Reset
  9. Restore the current operating attribute to its original value
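
The Select values exercised above live in bits 10:8 of Get Features CDW10. Assuming the NVMe 1.4a encoding (000b current, 001b default, 010b saved, 011b supported capabilities), a small sketch:

```python
SEL_CURRENT, SEL_DEFAULT, SEL_SAVED, SEL_CAPABILITIES = 0b000, 0b001, 0b010, 0b011

def get_features_cdw10(fid: int, sel: int) -> int:
    """Build CDW10 for Get Features: FID in bits 7:0, SEL in bits 10:8 (NVMe 1.4a)."""
    assert 0 <= fid <= 0xFF and 0 <= sel <= 0b111
    return fid | (sel << 8)

# Read the saved value of the Temperature Threshold feature (FID 04h):
cdw10 = get_features_cdw10(0x04, SEL_SAVED)
```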

function: scripts/conformance/01_admin/features_test.py::test_features_sel_01_reserved_bit

Verify reserved bits in Feature Identifier 4 writes are ignored by the controller.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Check the current operating attribute before injecting invalid data
  2. Attempt to program Feature 4 with the reserved bit set in cdw11
  3. Confirm the reserved bit is ignored when reading the current operating attribute
  4. Restore the current operating attribute to its original value

function: scripts/conformance/01_admin/features_test.py::test_features_sel_11

Check Select value 3 reports feature capabilities when Save is supported.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Skip the test if the feature Save capability is not supported
  2. Issue Get Features commands with Select 3 to report capability data
  3. Expect each command to complete successfully

function: scripts/conformance/01_admin/features_test.py::test_features_invalid_sel

Ensure Get Features commands with invalid Select values return errors.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Issue Get Features commands with invalid Select values
  2. Expect the commands to complete with Invalid Field errors

function: scripts/conformance/01_admin/features_test.py::test_features_set_volatile_write_cache

Measure write latency impact when toggling the Volatile Write Cache.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Collect the SMART temperature for additional telemetry
  2. Skip the test if a volatile write cache is not present
  3. Read the original write cache setting
  4. Enable the write cache and verify the feature is set correctly
  5. Prime the queue pair with several writes to obtain a stable latency reading
  6. Measure the write latency with the write cache enabled for comparison
  7. Disable the write cache and verify the feature took effect
  8. Measure the write latency with the write cache disabled
  9. Restore the original write cache setting
  10. Compare the two latency values and confirm caching improves performance

function: scripts/conformance/01_admin/features_test.py::test_features_set_invalid_ncqr

Confirm invalid queue counts in Feature Identifier 7 raise Invalid Field errors.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Set the Number of Queues feature to 0xffff and expect the command to complete with an error
  2. Set the Number of Queues feature to 0xffff0000 and expect the command to complete with an error
  3. Set the Number of Queues feature to 0xffffffff and expect the command to complete with an error
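
All three rejected values put 0xffff into one or both fields of CDW11, which is invalid because NSQR (bits 15:0) and NCQR (bits 31:16) are 0's based. A sketch of the valid encoding, assuming the NVMe 1.4a layout:

```python
def num_queues_cdw11(nsq: int, ncq: int) -> int:
    """Build CDW11 for Set Features FID 07h (Number of Queues, NVMe 1.4a).

    NSQR (bits 15:0) and NCQR (bits 31:16) are 0's based, so requesting
    two I/O SQs and two I/O CQs encodes 1 in each field; a raw 0xffff
    would request 65536 queues and is rejected.
    """
    assert 1 <= nsq <= 0xFFFF and 1 <= ncq <= 0xFFFF
    return ((ncq - 1) << 16) | (nsq - 1)

def num_queues_allocated(dw0: int):
    """Decode NSQA/NCQA (also 0's based) from the completion's DWORD 0."""
    return (dw0 & 0xFFFF) + 1, ((dw0 >> 16) & 0xFFFF) + 1

# Requesting 2 SQs + 2 CQs encodes as 0x00010001:
cdw11 = num_queues_cdw11(2, 2)
```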

function: scripts/conformance/01_admin/features_test.py::test_features_num_of_queues

Verify a controller limited to two queues rejects a third I/O queue creation.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Define a custom initialization routine that limits the controller to two queues
  2. Disable cc.en and wait for csts.rdy to clear
  3. Configure the admin queue registers
  4. Program the CC register with desired queue entries
  5. Enable the controller by setting cc.en
  6. Wait for csts.rdy to assert
  7. Identify the controller and namespaces
  8. Set and query the number of queues via Feature 7
  9. Create a controller instance constrained to two queues
  10. Check the reported number of queues and skip if the limit differs
  11. Create two I/O queue pairs within the configured limit
  12. Ensure creating an additional queue fails
  13. Delete the created qpairs to release resources

function: scripts/conformance/01_admin/features_test.py::test_features_apst_buffer_length

Validate the APST data structure occupies 256 bytes when retrieved by Get Features.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Create a 4 KiB buffer filled with all-one data
  2. Confirm APST is enabled and retrieve the 256-byte data structure via Get Features
  3. Verify the buffer beyond the first 256 bytes still holds the original all-one data
  4. Create a 4 KiB buffer filled with all-zero data
  5. Confirm APST is enabled and retrieve the 256-byte data structure via Get Features
  6. Verify the buffer beyond the first 256 bytes still holds the original all-zero data

function: scripts/conformance/01_admin/features_test.py::test_features_timestamp

Exercise the Timestamp feature across resets and programmed values to ensure expected behavior.

Reference

  1. Source: NVM Express Revision 1.4a.
  2. Source: NVM Express Revision 2.0.

Steps

  1. Check ONCS to confirm Timestamp support
  2. Verify the length of the returned data buffer
  3. Get the current timestamp
  4. Get the timestamp again after 1 second
  5. Get the original timestamp status
  6. Reset the controller and check the status
  7. Set the timestamp and check the status again
  8. Get the current timestamp
  9. Get the timestamp again after 1 second
  10. Set the maximum value 0xffff_ffff_ffff
  11. Set the minimum value 0
  12. Reset and check the timestamp and status according to the Timestamp Origin
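
The values above are interpreted through the 8-byte Timestamp data structure (FID 0Eh). Assuming the NVMe 1.4a layout (bytes 5:0 hold milliseconds since the Timestamp Origin, byte 6 carries the attributes with Synch in bit 0 and Timestamp Origin in bits 3:1), a sketch of the parse:

```python
def parse_timestamp(data: bytes) -> dict:
    """Parse the 8-byte Timestamp data structure (FID 0Eh, NVMe 1.4a)."""
    assert len(data) >= 8
    return {
        "ms": int.from_bytes(data[0:6], "little"),  # 48-bit, max 0xffff_ffff_ffff
        "synch": bool(data[6] & 1),                 # timestamp may have stopped
        "origin": (data[6] >> 1) & 0x7,             # 0: reset value, 1: set by host
    }

# The maximum programmable value from step 10, with a host-set origin:
raw = (0xFFFF_FFFF_FFFF).to_bytes(6, "little") + bytes([0b010, 0])
ts = parse_timestamp(raw)
```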

file: scripts/conformance/01_admin/format_test

function: scripts/conformance/01_admin/format_test.py::test_format_function

Verify namespace and controller-wide Format NVM success by issuing commands with nsid 0xffffffff and nsid 1.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Issue a Format NVM command targeting all namespaces (nsid 0xffffffff) and expect completion
  2. Issue a Format NVM command targeting namespace 1 and expect completion

function: scripts/conformance/01_admin/format_test.py::test_format_secure_erase_function

Validate secure erase settings by issuing Format NVM with SES 1 and SES 2 for both broadcast and namespace-specific scopes.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Issue a Format NVM command with SES 1 for all namespaces and expect completion
  2. Issue a Format NVM command with SES 1 for namespace 1 and expect completion
  3. Issue a cryptographic erase when supported and expect the commands to complete successfully

function: scripts/conformance/01_admin/format_test.py::test_format_with_ioworker

Exercise Format NVM under IO load by running an ioworker during the format and validating completion status.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Define a completion callback that records the Format NVM result status
  2. Run ioworker traffic while issuing a Format NVM command to namespace 1
  3. Check the completion status returned by the Format NVM command

function: scripts/conformance/01_admin/format_test.py::test_format_and_read

Validate read behavior during Format NVM by overlapping a read command with an in-flight format.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Issue a Format NVM command and intentionally leave it outstanding
  2. Issue a read command while the format is outstanding and validate completion status
  3. Wait until the outstanding Format NVM command completes

function: scripts/conformance/01_admin/format_test.py::test_format_invalid_ses

Confirm the controller rejects unsupported secure erase settings by issuing Format NVM with invalid SES values.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Issue Format NVM commands with SES values 3 through 7 and expect an error status
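
SES occupies bits 11:9 of Format NVM CDW10, and values 3h through 7h are reserved, which is what this test exercises. A sketch of the CDW10 encoding under the NVMe 1.4a layout (the helper is illustrative):

```python
SES_NONE, SES_USER_DATA_ERASE, SES_CRYPTO_ERASE = 0b000, 0b001, 0b010

def format_cdw10(lbaf: int, ses: int = SES_NONE, pi: int = 0,
                 pil: int = 0, mset: int = 0) -> int:
    """Build CDW10 for Format NVM (NVMe 1.4a).

    LBAF bits 3:0, MSET bit 4, PI bits 7:5, PIL bit 8, SES bits 11:9.
    SES 3h-7h are reserved, so a compliant host never emits them.
    """
    if not SES_NONE <= ses <= SES_CRYPTO_ERASE:
        raise ValueError(f"reserved SES value {ses:#x}")
    return lbaf | (mset << 4) | (pi << 5) | (pil << 8) | (ses << 9)

# User Data Erase of LBA format 0 encodes as 0x200:
cdw10 = format_cdw10(0, SES_USER_DATA_ERASE)
```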

function: scripts/conformance/01_admin/format_test.py::test_format_not_support_crypto_erase

Verify cryptographic erase is rejected when unsupported by attempting Format NVM with SES 2 on such controllers.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Skip the test when the controller advertises cryptographic erase capability
  2. Issue a Format NVM command requesting cryptographic erase and expect an error response

function: scripts/conformance/01_admin/format_test.py::test_format_invalid_lbaf

Check LBA format validation by issuing Format NVM with invalid LBAF values derived from the Identify data.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Identify the namespace to determine the highest supported LBA format
  2. Issue Format NVM commands with unsupported LBAF values and expect Invalid Format errors
  3. Restore the namespace to the original LBAF and expect success

function: scripts/conformance/01_admin/format_test.py::test_format_invalid_nsid

Exercise Format NVM namespace validation by issuing commands to valid and invalid nsid values.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Issue baseline Format NVM commands with valid nsid parameters
  2. Issue Format NVM commands with nsid zero and expect Invalid Namespace or Format errors
  3. Issue Format NVM commands with nsid 0xfffffffb and expect Invalid Namespace or Format errors
  4. Issue Format NVM commands with nsid 0xff and expect Invalid Namespace or Format errors

function: scripts/conformance/01_admin/format_test.py::test_format_verify_data

Confirm that Format NVM erases user data by writing, reading, formatting, and re-reading specific LBAs.

Reference

  1. Source: NVM Express Revision 1.4a.

Steps

  1. Prepare read and write buffers and initialize data patterns
  2. Write the pattern to LBA 0 and verify it by reading back
  3. Issue a Format NVM command and read the LBA to ensure the pattern is cleared
  4. The data in specified LBA is expected to be erased after the format
  5. Repeat write, verify, format, and validation cycles with SES 1
  6. Perform the same validation for cryptographic erase when supported

file: scripts/conformance/01_admin/fw_download_test

function: scripts/conformance/01_admin/fw_download_test.py::test_fw_download_out_of_order

Validate firmware download accepts reversed ranges by submitting the second chunk before the first.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Allocate firmware download buffer that spans two chunks
  2. Issue firmware download requests out of order to confirm reversed ranges complete
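
The chunk positions behind the out-of-order transfer reduce to dword arithmetic: Firmware Image Download encodes a 0-based dword count (NUMD, CDW10) and a dword offset (OFST, CDW11). A hedged sketch of that encoding (helper name is illustrative):

```python
def fw_download_cdw(offset_bytes: int, length_bytes: int) -> tuple:
    # Firmware Image Download works at dword granularity:
    # CDW10 NUMD is a 0-based dword count, CDW11 OFST a dword offset
    assert offset_bytes % 4 == 0 and length_bytes % 4 == 0
    numd = length_bytes // 4 - 1
    ofst = offset_bytes // 4
    return numd, ofst
```

Sending the 4 KiB chunk at byte offset 4096 before the chunk at offset 0 simply means issuing OFST=1024 ahead of OFST=0 with the same NUMD.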

function: scripts/conformance/01_admin/fw_download_test.py::test_fw_download_overlap

Check overlapping firmware downloads by repeating an already transferred range.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Download adjacent firmware chunks in order as a control
  2. Reissue a previously downloaded chunk to validate overlap handling

function: scripts/conformance/01_admin/fw_download_test.py::test_fw_download_reset

Verify download progress is discarded after a controller reset by resending the final chunk post reset.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Download sequential firmware chunks prior to triggering a controller reset
  2. Reset controller and resend the final chunk to ensure the transfer restarts cleanly

function: scripts/conformance/01_admin/fw_download_test.py::test_fw_download_prp

Exercise firmware download PRP offset handling by issuing transfers with valid and invalid offsets.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Allocate a buffer large enough for valid and invalid PRP offset tests
  2. Program a valid PRP offset covering exactly one image chunk and download it
  3. Program an invalid PRP offset near a page boundary and repeat the download

file: scripts/conformance/01_admin/identify_test

function: scripts/conformance/01_admin/identify_test.py::test_identify_all_nsid

Verifies Identify rejects invalid namespace IDs by issuing requests to both valid and invalid NSIDs.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Send Identify commands with valid namespace IDs to confirm baseline behavior.
  2. Send Identify commands with invalid namespace IDs and expect Invalid Namespace or Format warnings.

function: scripts/conformance/01_admin/identify_test.py::test_identify_namespace_data_structure

Validates the Identify namespace data and the active namespace list by reading and checking key fields.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Retrieve the active namespace ID list with an Identify command.
  2. Confirm that the controller reports exactly one namespace.
  3. Read the Identify Namespace data structure for namespace 1.
  4. Verify the returned buffer contains data.
  5. Check that NSZE equals NCAP as required for single namespace controllers.

function: scripts/conformance/01_admin/identify_test.py::test_identify_reserved_cns

Ensures Identify commands issued with reserved CNS values are aborted with Invalid Field in Command status.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Issue Identify commands using reserved CNS values and expect Invalid Field warnings.

function: scripts/conformance/01_admin/identify_test.py::test_identify_nsze_ncap_nuse

Checks the NSZE, NCAP, and NUSE fields from Identify Namespace data to ensure they follow the mandated ordering.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Read NSZE, NCAP, and NUSE values from namespace 1.
  2. Check that Namespace Size >= Namespace Capacity >= Namespace Utilization.
  3. Force NUSE to be zero when ANA indicates inaccessible or persistent loss conditions.
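
The ordering check can be expressed directly against the raw Identify Namespace buffer: NSZE, NCAP, and NUSE are its first three 8-byte little-endian fields. A minimal sketch, assuming that layout (helper name is illustrative):

```python
import struct

def parse_ns_sizes(idns: bytes) -> tuple:
    # NSZE, NCAP and NUSE occupy bytes 0-23 of the Identify
    # Namespace data structure as little-endian 64-bit fields
    nsze, ncap, nuse = struct.unpack_from("<QQQ", idns, 0)
    # the spec mandates Namespace Size >= Capacity >= Utilization
    assert nsze >= ncap >= nuse
    return nsze, ncap, nuse
```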

function: scripts/conformance/01_admin/identify_test.py::test_identify_controller_with_nsid

Validates Identify Controller behavior when the nsid field is supplied by issuing commands with allowed and disallowed NSIDs.

Reference

  1. NVM Express Revision 1.4c
  2. NVM Express Revision 2.0

Steps

  1. Read Identify Controller data with nsid set to zero.
  2. Issue Identify Controller commands with invalid NSIDs and expect possible Invalid Field status.

function: scripts/conformance/01_admin/identify_test.py::test_identify_new_cns

Confirms Identify supports the CNS values introduced for command-set-specific namespace and controller data.

Reference

  1. NVM Express Revision 2.0a

Steps

  1. Skip the test when the controller advertises an NVMe spec revision earlier than 2.0.
  2. Read the I/O Command Set specific Identify Namespace data structure.
  3. Read the I/O Command Set specific Identify Controller data structure.
  4. Read the active namespace ID list associated with the specified I/O Command Set.

file: scripts/conformance/01_admin/logpage_test

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_page_id

Validate log page identifier handling by issuing Get Log Page commands with valid and invalid IDs.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Send Get Log Page commands with valid Log Page Identifiers and expect successful completion
  2. Send Get Log Page commands with invalid Log Page Identifiers and expect an error status
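
Error expectations in these suites are written as SCT/SC pairs (e.g. 00/02 for Invalid Field in Command). A sketch of how that pair is decoded from the 15-bit status field in CQE dword 3 (helper name is illustrative):

```python
def decode_status(status_field: int) -> str:
    # CQE DW3 bits [31:17] hold the status field: within it the
    # Status Code (SC) is bits [7:0] and the Status Code Type
    # (SCT) is bits [10:8]
    sc = status_field & 0xFF
    sct = (status_field >> 8) & 0x7
    return f"{sct:02x}/{sc:02x}"
```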

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_lid_0

Confirm Log Identifier 00h handling by enumerating supported IDs and reading mandatory log pages.

Reference

  1. NVM Express Revision 2.0a

Steps

  1. Skip the test if the NVMe spec version is below 2.0
  2. Read Log Identifier 00h to confirm controllers expose the reserved entry
  3. Verify the mandatory Log Identifiers 00h, 01h, 02h, 03h, and 12h are reported as supported
  4. If LPA bit 5 is set, verify Log Identifier 13h is also supported

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_different_size

Check partial Get Log Page transfers by reading SMART information with varying lengths.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Read the full SMART log page as a reference
  2. Read a partial SMART log page and check the data against the reference
  3. Read beyond the SMART log page size and check the returned data
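
Like Firmware Image Download, Get Log Page carries a 0-based dword count, but split across two fields: NUMDL (CDW10 bits [31:16]) and NUMDU (CDW11 bits [15:0]). A sketch of the partial-transfer encoding (helper name is illustrative):

```python
def getlogpage_numd(length_bytes: int) -> tuple:
    # Get Log Page transfer length is a 0-based dword count split
    # across NUMDL (low 16 bits) and NUMDU (high 16 bits)
    assert length_bytes % 4 == 0 and length_bytes > 0
    numd = length_bytes // 4 - 1
    return numd & 0xFFFF, (numd >> 16) & 0xFFFF  # (NUMDL, NUMDU)
```

A full 512-byte SMART read encodes as NUMDL=127, NUMDU=0; partial reads just shrink NUMD.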

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_data_unit_read

Verify SMART Data Units Read increments by issuing read, compare, and verify workloads.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Skip the test if the Compare command is not supported
  2. Prepare SMART log buffer and capture namespace geometry for unit conversions
  3. Record the original Data Units Read value
  4. Send 1000 read commands
  5. Check that Data Units Read has increased
  6. Send 1000 compare commands
  7. Check that Data Units Read has increased again
  8. If the controller supports the Verify command, send 1000 verify commands
  9. Check that Data Units Read has increased once more

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_data_unit_write

Verify SMART Data Units Written increments using write workloads and write-uncorrectable commands.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Check whether the controller supports the Write Uncorrectable command
  2. Prepare SMART log buffer and derive namespace geometry for the increments
  3. Record the original Data Units Written value
  4. Send 1000 write commands
  5. Check that Data Units Written has increased
  6. Send 1000 Write Uncorrectable commands
  7. Check that Data Units Written has not changed
  8. Check whether the controller supports the Write Zeroes command
  9. Send a Write Zeroes command
  10. Check that Data Units Written has not changed
  11. Write the LBA again to restore valid data

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_namespace

Validate SMART log namespace scoping by issuing Get Log Page commands with several NSIDs.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Skip the test if the NVMe spec version is below 1.4
  2. Fetch the SMART log with the broadcast NSID to capture controller-wide measurements
  3. Verify the command completes successfully and the composite temperature is non-zero
  4. Send a Get Log Page command with nsid=1 to read the log on a per-namespace basis
  5. Verify the command completes successfully and the composite temperature is non-zero
  6. Send Get Log Page commands with an invalid namespace ID
  7. Check that these Get Log Page commands complete with an error

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_offset

Verify log page offsets by comparing overlapping SMART data and triggering offset errors.

Reference

  1. NVM Express Revision 1.4c

Steps

  1. Read the SMART data as a reference
  2. Read the SMART data again with a non-zero log page offset
  3. Compare the SMART data read at different offsets; the contents shall differ
  4. Issue a read with an offset greater than the log page size and expect an error

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_smart_composite_temperature

Exercise composite temperature thresholds by manipulating features and reading SMART logs.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Get the current composite temperature
  2. Set Features to enable all asynchronous events
  3. Set the composite temperature threshold lower than the current temperature
  4. Check that an AER notification is triggered
  5. Send a Get Log Page command to read the SMART data
  6. Check that Critical Warning bit 1 in the SMART data is set
  7. Confirm the composite temperature exceeds the threshold
  8. Clear the event by reading the log page
  9. Set the composite temperature threshold higher than the current temperature
  10. Revert the feature settings to their defaults

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_after_error

Validate error information log updates by injecting admin command failures.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Record the Log Page Attributes for later conditional checks
  2. Track the command completion M bit used to increment the error count
  3. Capture the M bit from command completions via a callback hook
  4. Send an admin command with opcode 0x6 and cdw10=0xFF to provoke an error
  5. Send a Get Error Log command and record the error count as nerror1
  6. Send another admin command with opcode 0x6 and cdw10=0xFF
  7. Send a Get Error Log command and record the error count as nerror2
  8. Verify the error count value and the number of Error Information Log Entries

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_retain_asynchronous_event

Confirm retained asynchronous temperature events remain latched when RAE is set in Get Log Page.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Get the current composite temperature
  2. Set Features to enable all asynchronous events
  3. Set the composite temperature threshold lower than the current temperature
  4. Check that an AER notification is triggered
  5. Send a Get Log Page command to read the SMART data
  6. Check that Critical Warning bit 1 in the SMART data is set
  7. Confirm the composite temperature exceeds the threshold
  8. Get the log page with the Retain Asynchronous Event (RAE) bit set
  9. Clear the Over Temperature Threshold event
  10. Trigger an Under Temperature Threshold event while the event type is masked

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_not_retain_asynchronous_event

Verify temperature asynchronous events clear when Get Log Page does not set the RAE bit.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Get the current composite temperature
  2. Set Features to enable all asynchronous events
  3. Get the current temperature again
  4. Set the composite temperature threshold lower than the current temperature
  5. Check that an AER notification is triggered
  6. Send a Get Log Page command to read the SMART data
  7. Check that Critical Warning bit 1 in the SMART data is set
  8. Confirm the composite temperature exceeds the threshold
  9. Send a Get Log Page command without RAE to clear the asynchronous event
  10. Clear the Over Temperature Threshold event
  11. Send another Get Log Page command without RAE to clear the asynchronous event
  12. Trigger an Under Temperature Threshold event while the event type is masked
  13. Send a further Get Log Page command without RAE to clear the asynchronous event
  14. Clear the Over Temperature Threshold event
  15. Power cycle the drive

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_persistent_event_log

Generate persistent event log activity and verify format records before and after formatting.

Reference

  1. NVM Express Revision 2.0

Steps

  1. Parse the returned PEL buffer into a list of event IDs for validation
  2. Check the PEL size
  3. Refresh the events in the log page
  4. Check the events and confirm no format event remains
  5. Trigger a format and expect Format NVM Start and Format NVM Completion events (event types 07h and 08h)
  6. Check that the format events, and the earlier reset event, are still present

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_exceed_mdts

Validate MDTS enforcement by issuing Get Log Page commands exceeding the controller transfer size.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Check the MDTS value reported by the controller
  2. Read a small SMART log page as a control transfer
  3. Read another log page below MDTS for reference
  4. Request a log page larger than MDTS and expect an error
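
The pass/fail boundary comes straight from the Identify Controller MDTS field, which is a power of two in units of the minimum memory page size. A sketch of the limit computation (helper name is illustrative):

```python
def mdts_max_bytes(mdts: int, mpsmin: int) -> int:
    # MDTS is expressed as 2^MDTS units of the minimum page size,
    # where the minimum page size is 2^(12 + CAP.MPSMIN) bytes;
    # an MDTS of 0 means no transfer size limit
    if mdts == 0:
        return 0  # unlimited
    return (1 << mdts) * (1 << (12 + mpsmin))
```

For example, MDTS=5 with a 4 KiB minimum page size caps transfers at 128 KiB; anything larger should fail.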

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_host_initiated_telemetry

Exercise host initiated telemetry log capture and walk through all advertised data areas.

Reference

  1. NVM Express Revision 2.0a

Steps

  1. Allocate a telemetry buffer for subsequent log transfers
  2. Skip execution if the controller does not advertise telemetry support
  3. Check if telemetry is supported
  4. Capture the host-initiated telemetry log
  5. Get the Telemetry Host-Initiated Data Generation Number
  6. Capture the host-initiated telemetry log again
  7. Check that the Telemetry Host-Initiated Data Generation Number increments each time
  8. Get the telemetry header information
  9. Record the last block value for telemetry data area 1
  10. Record the last block value for telemetry data area 2
  11. Record the last block value for telemetry data area 3
  12. Check that bit 6 of the Log Page Attributes field is set to 1 in the Identify Controller data structure
  13. Check that the Extended Telemetry Data Area 4 Supported (ETDAS) field is set to 1h in the Host Behavior Support feature
  14. Get the data area 4 last block value
  15. Validate that successive data area last block values are monotonic
  16. Record the host-initiated generation number after the header read
  17. Print the header data
  18. Get all data blocks in the range [1, last]
  19. Read the last block twice and compare the data
  20. Get telemetry data beyond the last block and expect an error (status 00/02)
  21. Check the generation number again
  22. Issue a Get Log Page with an invalid offset and expect an error
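
The data-area bookkeeping above hinges on three header fields. A sketch of extracting them, assuming the Data Area 1-3 Last Block fields sit at bytes 8-13 of the telemetry log header as little-endian 16-bit values (helper name is illustrative):

```python
import struct

def telemetry_last_blocks(header: bytes) -> tuple:
    # Data Area 1-3 Last Block fields at header bytes 8-13,
    # little-endian 16-bit each; every telemetry block is 512 bytes
    da1, da2, da3 = struct.unpack_from("<HHH", header, 8)
    assert da1 <= da2 <= da3  # successive data areas must be monotonic
    return da1, da2, da3
```

Reading data area 2, for instance, then means fetching blocks 1 through da2 at 512-byte offsets.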

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_controller_initiated_telemetry

Exercise controller initiated telemetry retrieval across all populated data areas.

Reference

  1. NVM Express Revision 2.0a

Steps

  1. Allocate a telemetry buffer to read controller-initiated data
  2. Skip execution when telemetry log pages are not supported
  3. Check if controller-initiated telemetry is supported
  4. Get the telemetry header
  5. Check the telemetry header data
  6. Record the last block value for telemetry data area 1
  7. Record the last block value for telemetry data area 2
  8. Record the last block value for telemetry data area 3
  9. Ensure the successive telemetry data areas remain monotonic
  10. Record the last block value for telemetry data area 4 when present
  11. Print the header data
  12. Get all data blocks in the range [1, last]
  13. Read the last block twice and compare the data
  14. Check the generation number again
  15. Issue a Get Log Page with an invalid offset and expect an error

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_telemetry_offset_not_512

Check telemetry offset alignment by issuing misaligned Get Log Page commands.

Reference

  1. NVM Express Revision 2.0a

Steps

  1. Define an offset that is not a multiple of 512 bytes
  2. Skip when telemetry logs are not advertised by the controller
  3. Issue the transfer against the Telemetry Host-Initiated log page (LID = 07h)
  4. Expect an error due to the misaligned offset
  5. Issue the transfer against the Telemetry Controller-Initiated log page (LID = 08h)
  6. Expect an error due to the misaligned offset

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_telemetry_length_not_512

Ensure telemetry transfers require 512-byte multiples by issuing misaligned lengths.

Reference

  1. NVM Express Revision 2.0a

Steps

  1. Use buffers with lengths that are not multiples of 512 bytes
  2. Skip when telemetry logs are not advertised by the controller
  3. Issue the transfer against the Telemetry Host-Initiated log page (LID = 07h)
  4. Expect an error due to the misaligned length
  5. Issue the transfer against the Telemetry Controller-Initiated log page (LID = 08h)
  6. Expect an error due to the misaligned length

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_host_initiated_telemetry_data_change

Verify host initiated telemetry data updates only after issuing a regeneration command.

Reference

  1. NVM Express Revision 2.0a

Steps

  1. Check if telemetry is supported
  2. Skip when telemetry logs are not advertised by the controller
  3. Trigger telemetry regeneration to capture fresh host data
  4. Get the data area 1 size
  5. Get the host-initiated telemetry data
  6. Capture the new host-initiated telemetry data
  7. Get the new data area 1 size
  8. Get another copy of the telemetry data

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_nsid_0

Validate SMART log behavior when NSID is set to 0h by monitoring Data Units Written increments.

Reference

  1. NVM Express Revision 2.0a

Steps

  1. Skip the test if the NVMe spec version is below 2.0
  2. Record the original Data Units Written value
  3. Send 1000 write commands
  4. Check that Data Units Written has increased

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_error_info_cid_ffff

Verify error log entries report FFFFh for SQID/CID when events are not tied to a specific command.

Reference

  1. NVM Express Revision 2.0a

Steps

  1. Issue a Flush command with CID 0xffff
  2. Issue an AER command
  3. Create a CQ and SQ pair
  4. Delete the SQ first
  5. Write the doorbell of the deleted SQ to trigger an Invalid Doorbell Register event
  6. Read the log page to clear the event
  7. Read the Error Information log page
  8. Check the SQID and CID values in the log entry and expect 0xffff
  9. Delete the CQ

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_eye_opening_measurement

Collect Rx Eye Opening Measurement data by driving EOM log commands and parsing the results.

Reference

  1. TP4119a Rx Phy Eye Opening Measurement (EOM)

Steps

  1. Allocate a large buffer for EOM log transfers and cache the controller ID
  2. Read the log data while no measurement has been started
  3. Skip the test if EOM is not supported
  4. Verify the initial measurement header reports the idle state
  5. Start a measurement and read the log data
  6. Check the status after the estimated measurement time
  7. Get the lane descriptor
  8. Retrieve the lane descriptor and eye data for the requested measurement
  9. Abort the measurement and clear the log
  10. Read the log data
  11. Start a measurement and read the log data again
  12. Reset the controller to initialize the EOM log
  13. Issue a reserved action value and check the response

file: scripts/conformance/01_admin/queue_test

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_cq_basic_operation

Validate IO queue creation by issuing write and read traffic and confirming completions are serviced.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Issue a baseline write followed by repeated read commands on the created queue.
  2. Wait for every outstanding read command to complete without errors.

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_cq_with_invalid_id

Verify IO CQ creation rejects invalid queue identifiers by issuing Create IO CQ commands with unsupported IDs.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Create a CQ with ID 5 to confirm the nominal creation path succeeds.
  2. Attempt to create a CQ with ID 0 and expect Invalid Queue Identifier.
  3. Attempt to create a CQ with ID 0xffff and expect Invalid Queue Identifier.
  4. Attempt to create CQs whose IDs exceed the supported count and expect Invalid Queue Identifier.
  5. Attempt to create a CQ with a duplicate ID and expect Invalid Queue Identifier.
  6. Delete the baseline CQ to clean up the resources.
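
The identifier checks above reduce to a simple range test. A sketch of the validity rule being probed, assuming the controller's reported IO queue count (helper name is illustrative):

```python
def is_valid_io_qid(qid: int, nqueues: int) -> bool:
    # IO queue identifiers run from 1 to the controller-reported
    # queue count; ID 0 belongs to the admin queue, and anything
    # above the supported count is Invalid Queue Identifier
    return 1 <= qid <= nqueues
```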

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_sq_with_invalid_id

Ensure IO SQ creation rejects invalid queue identifiers by issuing Create IO SQ commands with unsupported IDs.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Create a CQ with ID 1 to serve as the target for SQ creation.
  2. Create a baseline SQ with ID 5 to confirm successful creation.
  3. Attempt to create an SQ with ID 0 and expect Invalid Queue Identifier.
  4. Attempt to create an SQ with ID 0xffff and expect Invalid Queue Identifier.
  5. Attempt to create SQs with IDs that exceed the supported number and expect Invalid Queue Identifier.
  6. Attempt to create a duplicate SQ ID and expect Invalid Queue Identifier.
  7. Delete the SQ and CQ to release resources.

function: scripts/conformance/01_admin/queue_test.py::test_queue_delete_cq_with_invalid_id

Confirm Delete IO CQ commands fail when issued with invalid queue identifiers by targeting unsupported IDs.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Delete a CQ with ID 5 to confirm the baseline path succeeds.
  2. Attempt to delete CQ ID 0 and expect Invalid Queue Identifier.
  3. Attempt to delete CQ ID 0xffff and expect Invalid Queue Identifier.
  4. Attempt to delete CQ IDs bigger than the supported maximum and expect Invalid Queue Identifier.
  5. Attempt to delete a non-existent CQ ID and expect Invalid Queue Identifier.

function: scripts/conformance/01_admin/queue_test.py::test_queue_delete_sq_with_invalid_id

Ensure Delete IO SQ commands return Invalid Queue Identifier when targeting unsupported IDs.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Create a CQ and SQ pair and confirm the baseline delete succeeds.
  2. Attempt to delete SQ ID 0 and expect Invalid Queue Identifier.
  3. Attempt to delete SQ ID 0xffff and expect Invalid Queue Identifier.
  4. Attempt to delete SQ IDs beyond the supported count and expect Invalid Queue Identifier.
  5. Attempt to delete a non-existent SQ ID and expect Invalid Queue Identifier.

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_cq_with_invalid_queue_size

Validate Create IO CQ rejects invalid queue sizes by exercising legal and illegal QSIZE values.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Create CQs with supported sizes to confirm the reference behavior.
  2. Skip the overflow size checks on hardware that already advertises the maximum MQES.
  3. Attempt to create CQs at oversized values such as 0xffff and expect Invalid Queue Size.
  4. Attempt to create a CQ with size 0x10000 and expect Invalid Queue Size.
  5. Attempt to create a CQ with size 1 and expect Invalid Queue Size.
  6. Attempt to create CQs that exceed the reported MQES and expect Invalid Queue Size.
  7. Attempt to create a CQ with size 0 and expect a failed assertion.
  8. Create a CQ with a supported size as a final sanity check.
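
The size boundary being exercised follows from CAP.MQES being 0-based. A sketch of the acceptance rule, stated in 1-based entry counts (helper name is illustrative):

```python
def is_valid_io_qsize(entries: int, mqes: int) -> bool:
    # CAP.MQES is a 0-based maximum, so the controller supports up
    # to MQES+1 entries per IO queue; a queue must also hold at
    # least 2 entries, so a size of 1 (or 0) is Invalid Queue Size
    return 2 <= entries <= mqes + 1
```

This also explains the skip in step 2: when MQES already advertises the maximum, an "oversized" value like 0x10000 entries is actually legal.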

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_sq_with_invalid_queue_size

Confirm Create IO SQ enforces queue size limits by issuing commands with supported and unsupported QSIZE values.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Create a CQ that the SQs can target.
  2. Create SQs with supported sizes to prove successful operation.
  3. Attempt to create an SQ with size 1 and expect Invalid Queue Size.
  4. Delete the remaining SQ and CQ to clean up.

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_sq_with_invalid_queue_size_mqes

Stress Create IO SQ queue-size validation near MQES limits by issuing oversized QSIZE requests.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Skip the extended tests if the controller already advertises the largest MQES.
  2. Create a CQ that will be referenced by the SQs under test.
  3. Attempt to create an SQ with size 0xffff and expect Invalid Queue Size.
  4. Attempt to create an SQ with size 0x10000 and expect Invalid Queue Size.
  5. Attempt to create SQs larger than MQES and expect Invalid Queue Size.
  6. Attempt to create an SQ with size 0 and expect a failed assertion.
  7. Create a supported SQ to verify cleanup and then delete the CQ.

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_sq_physically_contiguous

Verify Create IO SQ enforces physical contiguity when CAP.CQR requires PC=1 by toggling PC and PRP offsets.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Skip the test if the controller does not require physically contiguous queues.
  2. Attempt to create an SQ with PC cleared and expect Invalid Queue Deletion.
  3. Create an SQ with PC asserted to confirm normal completion.
  4. Set a non-zero PRP offset and expect PRP Offset Invalid.

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_sq_non_physically_contiguous

Validate Create IO SQ behavior when CAP.CQR allows PRP lists by toggling PC and PRP offsets.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Skip the test if the controller mandates physically contiguous queues.
  2. Create SQs with PC asserted and cleared to confirm normal completion with valid PRPs.
  3. Provide a PRP with a non-zero offset and expect PRP Offset Invalid.

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_cq_non_physically_contiguous

Validate Create IO CQ behavior when CAP.CQR allows PRP lists and ensure non-zero offsets fault.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Skip the test if the controller enforces physically contiguous queues.
  2. Create CQs with PC asserted and cleared to establish the reference behavior.
  3. Program a PRP with a non-zero offset and expect PRP Offset Invalid.

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_cq_invalid_interrupt_vector

Ensure Create IO CQ validates MSI/MSI-X interrupt vectors by issuing commands with vectors beyond hardware limits.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Initialize the invalid interrupt vector placeholder.
  2. Query the MSI capability to derive an out-of-range vector number.
  3. Query the MSI-X capability to identify another invalid interrupt vector.
  4. Skip the test if no invalid interrupt vector can be identified.
  5. Create IO CQs with invalid interrupt vectors and expect Invalid Interrupt Vector errors.

function: scripts/conformance/01_admin/queue_test.py::test_queue_delete_cq_before_sq

Verify Delete IO CQ fails with Invalid Queue Deletion when outstanding SQs still reference the CQ.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Create one CQ with multiple associated SQs to prepare the dependency graph.
  2. Attempt to delete the CQ while SQs exist and expect Invalid Queue Deletion.
  3. Delete one SQ to reduce the reference count.
  4. Attempt to delete the CQ again and still expect Invalid Queue Deletion.
  5. Delete the remaining SQs to remove all dependencies.
  6. Delete the CQ after all SQs are removed and expect success.

function: scripts/conformance/01_admin/queue_test.py::test_queue_delete_full_sq

Validate Delete IO SQ behavior when outstanding commands fill every queue and dependencies exist.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Create the maximum number of SQs associated with a single CQ.
  2. Fill every SQ with outstanding commands to block CQ deletion.
  3. Attempt to delete the CQ while SQs have outstanding commands and expect Invalid Queue Deletion.
  4. Delete a subset of SQs to reduce the outstanding load.
  5. Attempt to delete the CQ again and expect Invalid Queue Deletion.
  6. Delete more SQs but still leave outstanding queues.
  7. Attempt to delete the CQ once more and expect Invalid Queue Deletion.
  8. Delete the final SQ so the CQ has no dependencies.
  9. Delete the CQ and expect a successful completion.

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_sq_queue_priority

Verify Create IO SQ accepts each supported queue priority by creating SQs with varying QPRIO values.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Create SQs with different QPRIO values and confirm that each completes successfully.

function: scripts/conformance/01_admin/queue_test.py::test_queue_set_after_create_queues

Ensure Set Features Number of Queues returns Command Sequence Error once IO queues have been created.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Skip the test if the controller does not advertise NVMe 1.4 or later.
  2. Create a CQ and SQ so that IO queues already exist.
  3. Issue Set Features Number of Queues and expect Command Sequence Error.
  4. Delete the SQ and CQ to clean up resources.

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_qpair_exceed_limit

Verify the controller rejects queue creation once the reported Number of Queues limit is exceeded.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Create queue pairs up to the controller-reported maximum.
  2. Attempt to create one more queue pair and expect Invalid Queue Identifier.
  3. Delete all queue pairs to free resources.

function: scripts/conformance/01_admin/queue_test.py::test_queue_setfeature_different_cq_sq_number

Verify Set Features Number of Queues supports different CQ and SQ counts and enforces the advertised limits.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Initialize the controller without programming the Number of Queues feature.
  2. Disable CC.EN and wait for CSTS.RDY to clear.
  3. Initialize the admin queue registers.
  4. Program the controller configuration register without enabling the controller.
  5. Enable CC.EN.
  6. Wait for CSTS.RDY to assert.
  7. Identify the controller and namespaces.
  8. Initialize the controller before programming the Number of Queues feature.
  9. Program different CQ and SQ counts and verify the controller accepts them.
  10. Create every allowed CQ and SQ to reach the configured limits.
  11. Attempt to create an additional CQ and expect failure.
  12. Attempt to create an additional SQ and expect failure.
  13. Delete the created SQs and CQs to clean up resources.

function: scripts/conformance/01_admin/queue_test.py::test_queue_invalid_prp_offset

Verify Create IO CQ requires zero PRP offsets by issuing commands with various misaligned PRP entries.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Initialize a PRP entry with zero offset as the passing reference case.
  2. Figure 149 requires the PRP entry offset to be zero in all cases.
  3. Send PRPs with various invalid offsets and expect PRP Offset Invalid.
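
The offset being validated is simply the low-order bits of the PRP entry within a memory page. A sketch of the check, assuming a 4 KiB memory page size (helper name is illustrative):

```python
def prp_page_offset(prp_entry: int, page_size: int = 4096) -> int:
    # the offset portion of a PRP entry is its position within a
    # memory page; Create IO CQ/SQ requires this offset to be zero,
    # so any non-zero result should fault with PRP Offset Invalid
    return prp_entry & (page_size - 1)
```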

function: scripts/conformance/01_admin/queue_test.py::test_queue_cq_sqhd

Verify SQ Head values in admin CQ entries advance correctly across commands and asynchronous events.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Initialize the controller without enabling AER handling.
  2. Disable CC.EN and wait for CSTS.RDY to clear.
  3. Program the admin queue registers.
  4. Program the controller configuration register without enabling the controller.
  5. Enable CC.EN.
  6. Wait for CSTS.RDY to assert.
  7. Identify the controller and namespaces.
  8. Configure the number of queues used for admin traffic.
  9. Issue a Get Log Page command and capture the reported SQ head value.
  10. Issue another command and ensure SQ head increments by one.
  11. Issue a command after posting an AER and verify its SQ head advances by two.
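
The SQ Head arithmetic the test checks is modular: the pointer advances by the number of SQ entries the controller consumed and wraps at the queue depth. A sketch (helper name is illustrative):

```python
def next_sqhd(sqhd: int, consumed: int, qsize: int) -> int:
    # the SQ Head field in each CQE advances by the number of SQ
    # entries the controller has consumed, wrapping at the queue
    # depth; a posted AER consumes an extra admin SQ slot, which is
    # why the head jumps by two in the final step
    return (sqhd + consumed) % qsize
```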

function: scripts/conformance/01_admin/queue_test.py::test_queue_sq_fuse_reserved_value

Ensure SQEs that set reserved FUSE field values return Invalid Field in Command or are aborted.

Reference

  1. Specification: NVM Express Revision 1.4a

Steps

  1. Create a minimal IO queue pair that can host the fuse experiments.
  2. Set the SQE FUSE field to the reserved value 0x3 and submit one entry.
  3. Poll the CQE and expect Invalid Field in Command to be reported.
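
The FUSE field lives in bits 9:8 of Command Dword 0, alongside the opcode, PSDT, and CID fields. A sketch of how a raw SQE dword 0 with the reserved FUSE value 11b can be assembled (illustrative bit packing, not the PyNVMe3 API):

```python
def sqe_dword0(opcode: int, cid: int, fuse: int = 0, psdt: int = 0) -> int:
    """Command Dword 0 layout: opcode[7:0], FUSE[9:8], PSDT[15:14],
    CID[31:16]."""
    assert 0 <= fuse <= 3
    return ((opcode & 0xFF)
            | ((fuse & 0x3) << 8)
            | ((psdt & 0x3) << 14)
            | ((cid & 0xFFFF) << 16))

# Reserved FUSE value 0x3 on a hypothetical IO opcode:
dw0 = sqe_dword0(opcode=0x02, cid=1, fuse=0x3)
```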

function: scripts/conformance/01_admin/queue_test.py::test_queue_enabled_msix_interrupt_all

Validate MSI-X interrupts assert for a qpair by issuing IO on the highest supported SQID.

Reference

  1. Specification: NVM Express Revision 1.4a

Steps

  1. Create a qpair at the highest supported SQID to exercise the MSI-X wiring.
  2. Clear any existing MSI-X state to start with a clean baseline.
  3. Issue an IO to trigger MSI-X and verify the interrupt fires.
  4. Attempt to create a qpair with an unsupported SQID and expect creation to fail.

file: scripts/conformance/01_admin/sanitize_test

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_operations_basic

Verify the sanitize status log updates by issuing block erase sanitize and polling the log page.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Write sample data into the namespace
  2. Verify data before the sanitize operation
  3. Issue block erase sanitize and monitor sanitize progress via log page and AER
  4. Ensure sanitize completion bit indicates the last sanitize completed successfully
  5. Ensure the most recent sanitize completed successfully flag is set
  6. Confirm SCDW10 reflects the Dword 10 value from the sanitize command
  7. Verify data is altered after sanitize completes
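
The fields the steps above poll come from the first bytes of the Sanitize Status log page (Log Identifier 81h): SPROG in bytes 1:0, SSTAT in bytes 3:2, and SCDW10 in bytes 7:4. A minimal parser sketch (stdlib only, not the PyNVMe3 API):

```python
import struct

def parse_sanitize_status(log: bytes) -> dict:
    """Parse the leading fields of the Sanitize Status log page (LID 81h):
    SPROG (bytes 1:0), SSTAT (bytes 3:2), SCDW10 (bytes 7:4)."""
    sprog, sstat, scdw10 = struct.unpack_from('<HHI', log, 0)
    return {
        'progress': sprog / 65536.0,  # fraction complete while in progress
        'status': sstat & 0x7,        # 1: completed, 2: in progress, 3: failed
        'scdw10': scdw10,             # CDW10 of the most recent Sanitize command
    }
```

Polling repeats until the status field leaves the in-progress state, after which SCDW10 is compared against the Dword 10 value of the sanitize command that was issued.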

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_crypto_erase_progress

Verify Crypto Erase sanitize updates the log and removes user data by issuing crypto erase and reading the namespace.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Check if controller supports Crypto Erase capability
  2. Write sample user data and verify it prior to sanitize
  3. Issue a Crypto Erase sanitize command
  4. Monitor sanitize progress via log page polling and AER notifications
  5. Confirm data is erased after the Crypto Erase operation
  6. Ensure sanitize completion bit indicates success after Crypto Erase
  7. Ensure sanitize success flag is set after Crypto Erase
  8. Confirm SCDW10 records the Crypto Erase command Dword 10 value

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_abort_non_allowed_command

Ensure sanitize in progress aborts disallowed commands by issuing block erase sanitize and running each blocked command, expecting status 00/1d.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Issue block erase sanitize to enter sanitize in progress state
  2. Skip test if sanitize already completed before running disallowed commands
  3. Attempt another Sanitize command and expect sanitize-in-progress abort
  4. Attempt Device Self-test command and expect sanitize-in-progress abort
  5. Attempt firmware download and expect sanitize-in-progress abort
  6. Attempt firmware commit during sanitize and expect sanitize-in-progress abort
  7. Use action=2 for the firmware commit request
  8. Attempt Format NVM command and expect sanitize-in-progress abort
  9. Attempt Flush command during sanitize and handle cache configuration
  10. Attempt Write command and expect sanitize-in-progress abort
  11. Attempt Read command and expect sanitize-in-progress abort
  12. Prepare a read buffer for the command
  13. Monitor sanitize progress until it completes and verify the log page contents
  14. Ensure sanitize completion bit indicates the last sanitize finished successfully
  15. Ensure sanitize success flag indicates the last sanitize completed successfully
  16. Confirm SCDW10 reflects the sanitize command Dword 10 value

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_not_abort_allowed_command

Ensure allowed commands complete during sanitize by issuing block erase sanitize and running permitted admin commands.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Issue block erase sanitize to enter sanitize in progress state
  2. Skip test if sanitize already completed before issuing allowed commands
  3. Create IO CQ and IO SQ while sanitize is running
  4. Delete the temporary IO queues while sanitize is running
  5. Issue Set Features command and expect it to complete successfully
  6. Issue Get Features command and expect it to complete successfully
  7. Issue Identify command and expect it to complete successfully
  8. Monitor sanitize progress until completion and verify the log contents
  9. Ensure sanitize completion bit indicates success for the most recent operation
  10. Ensure sanitize success flag is set for the most recent operation
  11. Confirm SCDW10 captures the sanitize command Dword 10 value

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_not_successful_completion

Ensure reserved sanitize action leaves log and user data unchanged by issuing invalid sanitize and comparing before and after.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Write sample data and verify it before issuing an invalid sanitize
  2. Read sanitize status log before issuing the invalid sanitize command
  3. Issue a sanitize command with a reserved action and expect invalid field status
  4. Confirm data remains unchanged after the invalid sanitize operation
  5. Confirm sanitize status log has not been updated after the invalid command

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_nvme_reset

Verify sanitize continues across controller reset by running block erase sanitize, resetting, and monitoring log page.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Skip the test if sanitize completes too quickly to evaluate reset behavior
  2. Issue a block erase sanitize command to create a long operation
  3. Reset the controller while sanitize is still running
  4. Monitor sanitize progress after reset and verify the log contents
  5. Ensure sanitize completion bit indicates the most recent sanitize succeeded
  6. Ensure sanitize success flag is set for the most recent sanitize operation
  7. Confirm SCDW10 records the sanitize command Dword 10 value

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_not_support_type

Ensure unsupported sanitize actions return Invalid Field by checking capabilities and issuing unsupported options.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read sanitize capabilities from Identify data
  2. Try Crypto Erase when unsupported and expect invalid field status
  3. Try Block Erase when unsupported and expect invalid field status
  4. Try Overwrite when unsupported and expect invalid field status
  5. Issue reserved sanitize actions and expect invalid field status
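
The capability check in step 1 decodes the SANICAP field of the Identify Controller data structure (bytes 331:328), whose low bits advertise the supported sanitize actions. A decode sketch (illustrative, not the PyNVMe3 API):

```python
def sanitize_capabilities(sanicap: int) -> dict:
    """Decode Identify Controller SANICAP: bit 0 Crypto Erase,
    bit 1 Block Erase, bit 2 Overwrite support."""
    return {
        'crypto_erase': bool(sanicap & 0x1),
        'block_erase': bool(sanicap & 0x2),
        'overwrite': bool(sanicap & 0x4),
    }
```

Each sanitize action whose bit is clear must be rejected with Invalid Field in Command, as must the reserved action values.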

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_abort_by_fw_activation

Validate pending firmware activation with reset aborts sanitize by committing firmware and issuing sanitize while activation is pending.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Issue firmware commit that triggers activation with reset
  2. Issue sanitize command while firmware activation with reset is pending
  3. Handle firmware activation asynchronous events and complete reset sequence

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_no_deallocate

Verify sanitize deallocates logical blocks when No Deallocate After Sanitize allows it by configuring features and issuing block erase.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Skip test if controller specification version is below 1.4
  2. Ensure Block Erase sanitize capability is supported
  3. Ensure the No-Deallocate Inhibited bit capability is supported
  4. Configure Sanitize Config feature to set No-Deallocate behavior
  5. Write sample data and verify it prior to running sanitize
  6. Issue sanitize with No Deallocate After Sanitize field set and monitor log/AER
  7. Confirm data is deallocated after sanitize completes
  8. Check sanitize status to ensure completion or proper inhibited state
  9. Clear Sanitize Config setting and ensure command fails when unsupported

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_exit_failure_mode

Confirm exit failure mode action succeeds after a good sanitize by issuing block erase and then action 001b.

Reference

  1. NVM Express Revision 2.0a.

Steps

  1. Ensure block erase sanitize capability is available
  2. Issue block erase sanitize and monitor sanitize status through log and AER
  3. Ensure sanitize completion bit indicates success for the last sanitize
  4. Ensure sanitize success flag indicates the most recent sanitize succeeded
  5. Confirm SCDW10 captured the block erase sanitize Dword 10 value
  6. Verify data after sanitize to confirm the blocks were erased
  7. Issue an Exit Failure Mode sanitize command to clear failure state
  8. Verify sanitize log remains unchanged after Exit Failure Mode

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_and_flush

Ensure flush commands may succeed during sanitize by disabling cache, issuing sanitize, and running flush while monitoring log.

Reference

  1. NVM Express Revision 2.0a.

Steps

  1. Require a controller with a volatile write cache for this scenario
  2. Record the original write cache setting
  3. Disable the write cache and verify the feature is set correctly
  4. Issue block erase sanitize to create a sanitize-in-progress window
  5. Skip the test if sanitize has already completed before flush can be issued
  6. Issue Flush command during sanitize and expect either success or sanitize-in-progress abort
  7. Complete the sanitize cycle and verify the log contents
  8. Ensure sanitize completion bit indicates the sanitize completed successfully
  9. Ensure sanitize success flag indicates the most recent sanitize succeeded
  10. Confirm SCDW10 captures the sanitize command Dword 10 value

folder: scripts/conformance/02_nvm

file: scripts/conformance/02_nvm/compare_test

function: scripts/conformance/02_nvm/compare_test.py::test_compare_lba_0

Validate Compare command boundary behavior by issuing combinations of legal and illegal LBAs and NLBs.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Ensure the namespace advertises Compare support before running the sequence.
  2. Read namespace capacity to establish the maximum valid LBA.
  3. Initialize media content and the comparison buffer to create a passing baseline.
  4. Issue Compare commands against mismatched LBAs and expect miscompare status codes.
  5. Modify the buffer to force data miscompares and ensure the controller reports errors.
  6. Restore the comparison buffer so subsequent tests start from known data.
  7. Vary the NLB field to observe expected errors when the range length does not match.
  8. Send Compare commands that reference invalid LBAs and confirm proper range errors.

function: scripts/conformance/02_nvm/compare_test.py::test_compare_invalid_nsid

Verify Compare commands reject invalid NSIDs by submitting crafted SQEs directly to the hardware queues.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Ensure the controller supports the Compare opcode before injecting invalid NSIDs.
  2. Submit a Compare command targeting each invalid NSID and capture completion status.
  3. Verify the completions signal the invalid or nonexistent namespace condition.

function: scripts/conformance/02_nvm/compare_test.py::test_compare_fused_operations

Check Compare-and-Write fused semantics by mixing legal and illegal fuse values and validating the resulting CQ status codes.

Reference

  1. NVM Express Revision 1.4c.

Steps

  1. Abort the test if the controller does not report Compare-and-Write fused capability.
  2. Collect status codes from every CQE produced by the specified number of commands.
  3. Ring the submission queue doorbell to submit queued commands.
  4. Wait for the completion queue to return the requested number of entries.
  5. Record the status code for each completion.
  6. Advance the completion queue head to release consumed entries.
  7. Return a scalar when a single status is captured, otherwise return the list.
  8. Issue standalone write commands to seed data for future comparisons.
  9. Corrupt the buffer so a standalone compare produces a miscompare status.
  10. Restore the data to prove the compare path works when inputs match.
  11. Apply invalid fuse settings to individual commands and expect generic errors.
  12. Combine commands that use illegal fuse combinations and confirm both fail.
  13. Execute a valid Compare-and-Write pair and expect success for both commands.
  14. Introduce a data mismatch within a fused pair to ensure the write aborts.
  15. Reissue the fused pair with correct data to confirm the happy path.
  16. Target different LBAs inside the fused pair to observe range errors.
  17. Change the write buffer to refresh the comparison baseline for later tests.
  18. Confirm the fused pair succeeds when using the updated buffer.
  19. Run a standalone compare to verify the freshly written data persisted.
  20. Mix a fused pair with a standard compare to ensure the queue handles interleaving.
  21. Interleave a fused pair with a conflicting CID to confirm the controller rejects it.
  22. Launch two legal fused pairs simultaneously to verify both pass.
  23. Combine one passing fused pair with one illegal pair to see split completions.

function: scripts/conformance/02_nvm/compare_test.py::test_compare_write_mixed

Exercise Compare and Write operations, including ioworker mixes, to validate token-sensitive behavior and error reporting.

Reference

  1. NVM Express Revision 1.4c.

Steps

  1. Skip the test if the namespace lacks Compare command support.
  2. Format the namespace to clear previous data and metadata.
  3. Expect compare failures when issuing commands without valid prepared data.
  4. Disable write tokens and validate the basic write-then-compare pass sequence.
  5. Stress the namespace with ioworker mixes that lack token enforcement.
  6. Force miscompare warnings by running ioworker with an unexpected data pattern.
  7. Re-enable write tokens and ensure ioworker traffic reports the expected errors.

file: scripts/conformance/02_nvm/copy_test

function: scripts/conformance/02_nvm/copy_test.py::test_copy_basic

Verify the copy command can duplicate chained source ranges and that every destination matches.

Reference

  1. Spec Reference: NVM Command Set Specification 1.0b

Steps

  1. Initialize LBA 0-32 with a deterministic pattern for comparison.
  2. Issue a copy command that duplicates LBA 0-32 into destination LBA 32-64.
  3. Chain two source ranges to populate higher destination LBAs.
  4. Prepare read buffers and confirm their initial contents differ from the source.
  5. Read back LBA 0-32 to capture the original data.
  6. Read back LBA 32-64 to verify the first copy.
  7. Read back LBA 64-96 to verify the chained copy result.
  8. Read back LBA 96-128 to complete the verification set.
  9. Confirm that every copied region exactly matches the source data.
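
Each source range in the Copy command above is a 32-byte descriptor; in format 0h the SLBA occupies bytes 15:8 and the 0-based block count bytes 17:16, with the leading bytes reserved. A packing sketch (stdlib only, not the PyNVMe3 API; the trailing protection/storage-tag fields are left zero here):

```python
import struct

def copy_source_range_fmt0(slba: int, nlb: int) -> bytes:
    """Pack one 32-byte Copy source range descriptor, format 0h:
    bytes 7:0 reserved, 15:8 SLBA, 17:16 NLB (0-based); remaining
    fields zeroed for this sketch."""
    assert 1 <= nlb <= 0x10000
    return struct.pack('<QQH14x', 0, slba, nlb - 1)

# Two chained source ranges, e.g. LBA 0-32 and LBA 32-64:
ranges = copy_source_range_fmt0(0, 32) + copy_source_range_fmt0(32, 32)
```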

function: scripts/conformance/02_nvm/copy_test.py::test_copy_smart

Ensure copy commands alter SMART host read/write counters without changing data unit counts.

Reference

  1. Spec Reference: NVM Command Set Specification 1.0b

Steps

  1. Capture the baseline SMART accounting values before issuing copy commands.
  2. Issue repeated copy commands so that the SMART host command counters increment.
  3. Retrieve SMART data again to confirm only host command counters increased.

function: scripts/conformance/02_nvm/copy_test.py::test_copy_format_1

Exercise Copy descriptor format 1 handling by mixing matching and mismatched format selections.

Reference

  1. Spec Reference: NVM Command Set Specification 1.0b

Steps

  1. Query the controller to confirm Copy Format 1h is supported.
  2. Send a baseline copy using format 0 for both descriptor and command.
  3. Repeat the copy with format 1 for both descriptor and command.
  4. Mix descriptor format 0 with a format 1 command and expect an error.
  5. Mix descriptor format 1 with a format 0 command and expect an error.

function: scripts/conformance/02_nvm/copy_test.py::test_copy_invalid_lba

Validate boundary handling by issuing copy commands that hit or exceed namespace capacity.

Reference

  1. Spec Reference: NVM Command Set Specification 1.0b

Steps

  1. Read the namespace capacity from Identify data.
  2. Copy a minimal range that ends exactly at the namespace boundary.
  3. Issue the boundary copy with SLBA equal to NCAP-1.
  4. Attempt several out-of-range copies and confirm the controller returns errors.
  5. Submit a Copy whose source starts beyond NCAP and expect LBA Out of Range.
  6. Push destination parameters just over NCAP to confirm additional error handling.

function: scripts/conformance/02_nvm/copy_test.py::test_copy_max_namespace_size

Confirm LBA out-of-range handling when Copy destinations exceed namespace size.

Reference

  1. Spec Reference: NVM Command Set Specification 1.0b

Steps

  1. Retrieve NSZE and NCAP and confirm they match.
  2. Attempt destination LBAs beyond NSZE to ensure LBA out-of-range errors are reported.

function: scripts/conformance/02_nvm/copy_test.py::test_copy_fua

Ensure copy commands succeed when the Force Unit Access bit is asserted.

Reference

  1. Spec Reference: NVM Command Set Specification 1.0b

Steps

  1. Send a copy with FUA enabled and require it to complete successfully.

function: scripts/conformance/02_nvm/copy_test.py::test_copy_invalid_nsid

Inject an invalid NSID via manual SQE programming and expect an aborted Copy command.

Reference

  1. Spec Reference: NVM Command Set Specification 1.0b

Steps

  1. Build a simple Copy range buffer used by the crafted SQE.
  2. Manually submit a Copy command with an invalid namespace identifier.
  3. Read the completion entry and confirm a nonzero error status.

function: scripts/conformance/02_nvm/copy_test.py::test_copy_invalid_nsid_lba

Craft a Copy command that combines an invalid NSID with an out-of-range SLBA.

Reference

  1. Spec Reference: NVM Command Set Specification 1.0b

Steps

  1. Inspect MDTS and NCAP; skip when maximum transfer size already covers the scenario.
  2. Allocate a Copy range buffer used by the malformed SQE.
  3. Issue a Copy command with both an invalid namespace and SLBA beyond NCAP.
  4. Check the completion status for the expected error code family.

function: scripts/conformance/02_nvm/copy_test.py::test_copy_max_nr

Exercise the upper NR limit by issuing a Copy command that uses all MSRC entries.

Reference

  1. Spec Reference: NVM Command Set Specification 1.0b

Steps

  1. Read the MSRC capability to understand how many descriptors are allowed.
  2. Populate descriptors up to the allowed MSRC value.
  3. Issue the Copy command using the maximum supported NR.

function: scripts/conformance/02_nvm/copy_test.py::test_copy_invalid_nr

Verify controllers reject Copy commands whose NR exceeds the MSRC capability.

Reference

  1. Spec Reference: NVM Command Set Specification 1.0b

Steps

  1. Skip if MSRC already advertises the maximum descriptor limit.
  2. Populate one more descriptor than the controller supports to intentionally exceed the allowed NR.
  3. Submit the Copy command and expect Command Size Limit Exceeded.

function: scripts/conformance/02_nvm/copy_test.py::test_copy_mssrl

Confirm controllers enforce MSSRL by rejecting source ranges that exceed the per-entry length.

Reference

  1. Spec Reference: NVM Command Set Specification 1.0b

Steps

  1. Read MSSRL from Identify to know the maximum blocks per entry.
  2. Run a Copy that uses the maximum allowed blocks per entry and expect success.
  3. Exceed MSSRL by one block and expect a Command Size Limit Exceeded status.

function: scripts/conformance/02_nvm/copy_test.py::test_copy_mcl

Validate enforcement of the MCL cumulative length limit across all Copy descriptors.

Reference

  1. Spec Reference: NVM Command Set Specification 1.0b

Steps

  1. Read MSSRL and MCL to determine how many descriptors can be aggregated.
  2. Fill descriptors that exactly sum to MCL and ensure the Copy succeeds.
  3. Define an additional descriptor so the total exceeds MCL.
  4. Submit the oversized Copy and expect Command Size Limit Exceeded.

function: scripts/conformance/02_nvm/copy_test.py::test_copy_multi_source

Validate multi-source Copy behavior by merging four regions into a contiguous destination.

Reference

  1. Spec Reference: NVM Command Set Specification 1.0b

Steps

  1. Ensure the MSRC capability supports the number of requested ranges.
  2. Ensure the MCL capability can accommodate the combined range lengths.
  3. Ensure no single range exceeds the MSSRL limit.
  4. Log the source offsets and lengths for easier troubleshooting.
  5. Write unique patterns to each region and issue a multi-range Copy.
  6. Read the destination region back in four segments.
  7. Compare each segment against its originating buffer.
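
The three capability checks above (MSRC, MSSRL, MCL) can be expressed as one validation over the planned range lengths. A sketch (illustrative, not the PyNVMe3 API):

```python
def copy_limits_ok(range_lengths, msrc: int, mssrl: int, mcl: int) -> bool:
    """range_lengths: per-descriptor lengths in logical blocks.
    MSRC is 0-based (msrc + 1 descriptors allowed); MSSRL caps each
    descriptor; MCL caps the cumulative length across all descriptors."""
    if len(range_lengths) > msrc + 1:
        return False
    if any(nlb > mssrl for nlb in range_lengths):
        return False
    return sum(range_lengths) <= mcl
```

A four-range Copy is only attempted when all three limits hold; the dedicated MSRC, MSSRL, and MCL tests exercise each violation individually.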

function: scripts/conformance/02_nvm/copy_test.py::test_copy_write_uncorrectable

Ensure Copy reports unrecoverable read errors when targeting write-uncorrectable LBAs.

Reference

  1. Spec Reference: NVM Command Set Specification 1.0b

Steps

  1. Skip when the Write Uncorrectable feature is unsupported.
  2. Prepare patterned data that will later be rewritten.
  3. Mark the target LBA range as write uncorrectable.
  4. Attempt a Copy from the uncorrectable range and expect an unrecovered read error.
  5. Rewrite data into the range so subsequent accesses succeed.
  6. Copy the recovered range again to ensure success after rewriting.
  7. Read back each logical block and validate the pattern.

function: scripts/conformance/02_nvm/copy_test.py::test_copy_ioworker

Stress copy operations within a mixed workload generated by ioworker.

Reference

  1. Spec Reference: NVM Command Set Specification 1.0b

Steps

  1. Run ioworker with mixed operations, including Copy commands, across the namespace.

file: scripts/conformance/02_nvm/deallocate_test

function: scripts/conformance/02_nvm/deallocate_test.py::test_deallocate_and_write

Validate deterministic post-deallocation reads by trimming LBAs, writing a pattern, and confirming data transitions.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Initialize the DSM buffer and data pattern for the upcoming operations
  2. Deallocate the target LBA range with the Dataset Management command
  3. Write patterned data to the trimmed LBAs and confirm the reads match
  4. Compute the starting LBA for the trim verification
  5. Issue DSM again on the calculated range
  6. Read the trimmed LBAs to ensure the pattern no longer matches
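
The DSM buffer the steps above program is an array of 16-byte range entries: 4 bytes of context attributes, a 4-byte length in logical blocks, and an 8-byte starting LBA. A packing sketch (stdlib only, not the PyNVMe3 API):

```python
import struct

def dsm_range(slba: int, nlb: int, attrs: int = 0) -> bytes:
    """One 16-byte Dataset Management range entry: context attributes,
    length in logical blocks, starting LBA."""
    return struct.pack('<IIQ', attrs, nlb, slba)

def dsm_buffer(ranges) -> bytes:
    """Concatenate (slba, nlb) pairs; NR is an 8-bit 0-based field,
    so at most 256 ranges fit in one command."""
    assert 1 <= len(ranges) <= 256
    return b''.join(dsm_range(slba, nlb) for slba, nlb in ranges)
```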

function: scripts/conformance/02_nvm/deallocate_test.py::test_deallocate_out_of_range

Ensure Dataset Management aborts cleanly when LBA ranges exceed the namespace.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Retrieve the namespace capacity from the Identify data structure
  2. Deallocate the last valid logical block to confirm a passing completion
  3. Attempt to deallocate LBA ranges beyond capacity and expect an error completion

function: scripts/conformance/02_nvm/deallocate_test.py::test_deallocate_nr_maximum

Exercise the DSM range entry limit by issuing maximum and overflowed range counts.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Populate 256 DSM ranges, the architectural maximum, and execute the command
  2. Attempt to exceed 256 ranges and expect helper-side and controller-side failures

function: scripts/conformance/02_nvm/deallocate_test.py::test_deallocate_correct_range

Verify DSM only affects the specified LBA range by trimming the middle block of three writes.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Write three consecutive logical blocks to seed deterministic content
  2. Deallocate the middle logical block with a DSM command
  3. Read each logical block to ensure only the trimmed block changed state

function: scripts/conformance/02_nvm/deallocate_test.py::test_deallocate_multiple_range

Validate trimming multiple DSM ranges and verify data integrity on affected and unaffected LBAs.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Write four logical blocks to establish baseline data patterns
  2. Program the DSM buffer with several overlapping and contiguous ranges
  3. Read back the LBAs to confirm trimmed ranges return default data and untouched ones remain intact
  4. Rewrite and reread the previously trimmed LBAs to ensure normal IO resumes

function: scripts/conformance/02_nvm/deallocate_test.py::test_deallocate_mixed

Stress DSM operations mixed with other IO types using the ioworker generator.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Launch a mixed workload blending trims with random reads and writes

file: scripts/conformance/02_nvm/flush_test

function: scripts/conformance/02_nvm/flush_test.py::test_flush_with_read_write

Validate that flushing after a write persists cached data by issuing a write, flushing, and reading back.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Prepare read and write buffers and prime the queue pair for IO validation.
  2. Issue a write request to place the pattern in the controller cache.
  3. Issue a Flush command to force the cached data to non-volatile media.
  4. Read back the data to ensure the flushed content matches expectations.

function: scripts/conformance/02_nvm/flush_test.py::test_flush_vwc_check

Check the VWC capability bits by evaluating controller version logic and issuing a namespace broadcast flush.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Skip the validation if the controller implements a pre-1.4 specification.
  2. Read the Volatile Write Cache (VWC) field from the Identify Controller data structure.
  3. Assert that VWC bits 2:1 are not 00b for controllers compliant with revision 1.4 or later.
  4. Verify that a Flush with NSID 0xffffffff is rejected when namespace broadcast semantics are unsupported.
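
The assertion in step 3 decodes the VWC byte of the Identify Controller data structure (offset 525). A decode sketch under that assumption (illustrative, not the PyNVMe3 API): bit 0 reports a volatile write cache, and bits 2:1 describe broadcast-NSID Flush support, where 00b is not permitted for 1.4-compliant controllers, 10b means Flush with NSID FFFFFFFFh is not supported, and 11b means it is supported.

```python
def decode_vwc(vwc: int) -> dict:
    """Decode the Identify Controller VWC byte. Bits 2:1 must be
    nonzero on controllers compliant with NVMe 1.4 or later."""
    broadcast = (vwc >> 1) & 0x3
    return {
        'cache_present': bool(vwc & 0x1),
        'broadcast_flush': broadcast,
        'spec_1_4_compliant': broadcast != 0,
    }
```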

file: scripts/conformance/02_nvm/read_test

function: scripts/conformance/02_nvm/read_test.py::test_read_large_lba

Validate boundary LBAs by issuing reads at and beyond the namespace capacity.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Read namespace capacity from Identify data for boundary calculations.
  2. Issue a single-block read at the last valid LBA and expect it to succeed.
  3. Exercise reads beyond the capacity and require LBA Out of Range responses.
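
The boundary cases reduce to arithmetic over the capacity reported by Identify. A sketch of a case generator for the steps above (illustrative, not the PyNVMe3 API; nlb here is a 1-based block count):

```python
def boundary_read_cases(ncap: int):
    """Yield (slba, nlb, expect_ok) single-range read cases around
    the namespace capacity boundary."""
    yield ncap - 1, 1, True     # last valid logical block
    yield ncap, 1, False        # starts exactly at capacity
    yield ncap - 1, 2, False    # crosses the boundary via the length
    yield ncap + 100, 1, False  # starts far beyond capacity
```

Every case with `expect_ok=False` must complete with LBA Out of Range.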

function: scripts/conformance/02_nvm/read_test.py::test_read_max_namespace_size

Confirm reads fail when the starting LBA exceeds NSZE by issuing varied offsets.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Read NSZE and NCAP from Identify data and ensure they match.
  2. Attempt reads beyond NSZE with multiple offsets and expect LBA Out of Range.

function: scripts/conformance/02_nvm/read_test.py::test_read_fua

Exercise Force Unit Access by issuing repeated FUA-enabled reads and observing completion.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Send repeated reads with the FUA bit set and require successful completion.

function: scripts/conformance/02_nvm/read_test.py::test_read_bad_number_blocks

Ensure transfers larger than MDTS are rejected while valid sizes succeed.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Read MDTS, size the buffer, and skip if the limit is already above 2 MB.
  2. Issue reads with lengths within MDTS to verify the baseline behavior.
  3. Attempt transfers larger than MDTS and expect Invalid Field in Command errors.
  4. Sweep valid NLB values within MDTS and ensure each read completes.
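
MDTS is expressed in units of the minimum memory page size, so the legality of an NLB value follows from two Identify/CAP fields. A sketch of the limit computation (illustrative, not the PyNVMe3 API):

```python
def mdts_bytes(mdts: int, mpsmin: int) -> int:
    """Maximum data transfer size in bytes: 2^MDTS units of the
    minimum memory page size, where page size = 2^(12 + CAP.MPSMIN).
    MDTS == 0 means no limit is reported."""
    if mdts == 0:
        return 0  # no limit
    return (1 << mdts) * (1 << (12 + mpsmin))

def max_nlb(mdts: int, mpsmin: int, lba_size: int) -> int:
    """Largest 1-based block count a single read may transfer."""
    return mdts_bytes(mdts, mpsmin) // lba_size
```

Reads at or below `max_nlb` must succeed; anything larger must fail with Invalid Field in Command.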

function: scripts/conformance/02_nvm/read_test.py::test_read_valid

Verify data consistency by chaining write and read operations with varied IO flags.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Initialize I/O buffers and place a distinguishable payload in the write buffer.
  2. Issue the write and trigger the corresponding read from the callback.
  3. Wait for both commands to complete and check that the payload matches.

function: scripts/conformance/02_nvm/read_test.py::test_read_invalid_nsid

Ensure reads targeting an invalid namespace ID return Invalid Namespace or Format.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Build a read SQE that intentionally references an invalid namespace ID.
  2. Submit the SQE to hardware and allow time for completion.
  3. Verify the CQE status reports Invalid Namespace or Format.

function: scripts/conformance/02_nvm/read_test.py::test_read_invalid_nlb

Ensure reads with transfer lengths beyond MDTS trigger Invalid Field in Command.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Skip the scenario when MDTS is already large enough to mask the overflow.
  2. Program a read SQE whose NLB equals MDTS so the controller must fault it.
  3. Confirm completion reports Invalid Field in Command.

function: scripts/conformance/02_nvm/read_test.py::test_read_invalid_nsid_lba

Check controller behavior when both NSID and SLBA are invalid in the same command.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Gather namespace capacity (and MDTS for skip logic) before crafting the command.
  2. Populate a read SQE with invalid NSID and SLBA values and submit it.
  3. Inspect the completion status and accept any documented error for bad NSID/SLBA.

function: scripts/conformance/02_nvm/read_test.py::test_read_ioworker_consistency

Measure random read IOPS by running an ioworker that records per-second throughput.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Launch the ioworker with random reads and collect the reported IOPS timeline.

function: scripts/conformance/02_nvm/read_test.py::test_read_ioworker_trim_mixed

Exercise mixed read and deallocate workloads by driving ioworker operations.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Run the ioworker with an even mix of read and trim operations for the duration.

function: scripts/conformance/02_nvm/read_test.py::test_read_different_io_size_and_count

Validate reads across multiple IO sizes and queue depths using buffered transfers.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Allocate per-command DMA buffers sized to the requested LBA count.
  2. Issue the reads for each buffer and wait for all completions.

file: scripts/conformance/02_nvm/verify_test

function: scripts/conformance/02_nvm/verify_test.py::test_verify_large_lba

Validate Verify rejects out-of-range accesses by issuing commands across the namespace boundary.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read the namespace capacity to determine the maximum valid LBA.
  2. Issue Verify at the last valid LBA to confirm the happy-path behavior.
  3. Issue Verify starting exactly at namespace capacity and expect an error.
  4. Issue Verify starting past the namespace capacity and expect an error.
  5. Issue Verify covering the capacity boundary through NLB and expect an error.
  6. Issue Verify using an obviously out-of-range SLBA and expect an error.

function: scripts/conformance/02_nvm/verify_test.py::test_verify_valid

Confirm Verify handles supported IO flags by executing callback-driven IO around the command.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Prepare IO buffers that contain a known data pattern.
  2. Program one LBA with the known pattern and confirm it reads back correctly.
  3. Define a Verify completion callback that re-reads the data.
  4. Issue Verify with the requested IO flag and callback.
  5. Wait for outstanding commands and confirm data remained unchanged.

function: scripts/conformance/02_nvm/verify_test.py::test_verify_invalid_nsid

Ensure Verify with an invalid NSID returns Invalid Namespace or Format by submitting a raw SQE.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Build a Verify SQE targeting namespace 0xff, which is unsupported.
  2. Inspect the completion status and ensure it reports namespace errors.

function: scripts/conformance/02_nvm/verify_test.py::test_verify_nlb

Demonstrate Verify ignores MDTS by sweeping NLB around the MDTS-derived limit.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read controller MDTS to establish the comparison baseline.
  2. Build a Verify SQE for namespace 1.
  3. Set NLB relative to the MDTS limit using the provided delta.
  4. Submit the command to the queue and ring the doorbell.
  5. Confirm completion reports success regardless of NLB selection.
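
The MDTS-derived limit referenced in step 1 can be computed as follows (a sketch; MDTS semantics per the NVMe Identify Controller MDTS field and CAP.MPSMIN):

```python
def mdts_max_nlb(mdts, mpsmin, lba_size=512):
    """Maximum blocks per transfer implied by MDTS.

    MDTS expresses the limit as 2**MDTS units of the minimum
    memory page size, 2**(12 + CAP.MPSMIN) bytes; MDTS == 0
    means the controller reports no limit.
    """
    if mdts == 0:
        return None                     # no transfer size limit
    max_bytes = (1 << mdts) * (1 << (12 + mpsmin))
    return max_bytes // lba_size

# e.g. MDTS=5 with 4 KiB minimum pages -> 128 KiB -> 256 512-byte blocks
assert mdts_max_nlb(5, 0) == 256
assert mdts_max_nlb(0, 0) is None
```

Verify is expected to complete successfully even when the selected NLB exceeds this value, which is what the sweep demonstrates.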

function: scripts/conformance/02_nvm/verify_test.py::test_verify_invalid_nsid_lba

Check Verify rejects invalid NSID and SLBA combinations by hand-crafting the SQE and PRP list.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read capacity and MDTS to build an out-of-range request.
  2. Build a Verify SQE that targets namespace 0xff and SLBA = capacity.
  3. Ring the doorbell to submit the crafted command.
  4. Validate the completion status reports an appropriate error.

function: scripts/conformance/02_nvm/verify_test.py::test_verify_uncorrectable_lba

Validate Write Uncorrectable handling by forcing errors on Verify and Read until data is rewritten.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Check controller capabilities and skip if Write Uncorrectable is unsupported.
  2. Issue Write Uncorrectable on the selected LBA range.
  3. Send read commands on the marked range.
  4. Expect Unrecovered Read Error warnings from both Read and Verify.
  5. Overwrite the range to clear the uncorrectable condition.
  6. Re-issue Verify to confirm the range now passes without errors.

file: scripts/conformance/02_nvm/write_test

function: scripts/conformance/02_nvm/write_test.py::test_write_large_lba

Verify boundary handling by writing at, below, and beyond the namespace limit.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read NCAP from identify data
  2. Issue a valid write at the last logical block
  3. Drive explicit writes beyond NCAP and expect LBA out-of-range warnings

function: scripts/conformance/02_nvm/write_test.py::test_write_max_namespace_size

Confirm namespace size enforcement by issuing writes at and beyond NSZE.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read NSZE and NCAP and confirm they match
  2. Attempt writes beyond NSZE and verify LBA out-of-range aborts
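
Step 1's field extraction can be sketched against a raw Identify Namespace buffer (the synthetic buffer below stands in for real identify data):

```python
import struct

def parse_ns_sizes(idns):
    """Extract NSZE, NCAP, NUSE from Identify Namespace data.

    Per NVMe, bytes 0-7 hold NSZE, 8-15 NCAP, and 16-23 NUSE,
    all little-endian 64-bit block counts.
    """
    return struct.unpack_from("<QQQ", idns, 0)

buf = bytearray(4096)                       # synthetic identify buffer
struct.pack_into("<QQ", buf, 0, 1000, 1000)
nsze, ncap, nuse = parse_ns_sizes(buf)
assert nsze == ncap == 1000                 # equal when not thin provisioned
```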

function: scripts/conformance/02_nvm/write_test.py::test_write_fua

Validate force unit access writes by repeatedly issuing commands with FUA set.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Issue many FUA writes and require each command to complete successfully

function: scripts/conformance/02_nvm/write_test.py::test_write_bad_number_blocks

Exercise MDTS limits by issuing writes up to and beyond the advertised limit.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Check MDTS and allocate a buffer that exceeds the transfer size
  2. Submit writes whose nlb is equal to or below MDTS and expect success
  3. Attempt larger writes and ensure the controller reports invalid field
  4. Sweep smaller nlbs to confirm the remaining range continues to pass

function: scripts/conformance/02_nvm/write_test.py::test_write_valid

Check data consistency by writing then reading back blocks across IO flag modes.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Prepare dedicated read and write buffers
  2. Issue a write followed by read to capture the completion callbacks
  3. Wait for both commands to finish and verify the payload matches

function: scripts/conformance/02_nvm/write_test.py::test_write_invalid_nsid

Confirm invalid namespace handling by submitting a write with a bogus NSID.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Build a write SQE targeting an invalid namespace identifier
  2. Ring the submission queue doorbell
  3. Check completion status for Invalid Namespace or Format

function: scripts/conformance/02_nvm/write_test.py::test_write_invalid_nlb

Trigger invalid field errors by advertising an NLB larger than MDTS allows.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Abort early if MDTS is too large for this negative test
  2. Program a write command whose NLB equals MDTS sectors
  3. Submit the SQE to hardware
  4. Inspect completion status for Invalid Field in Command

function: scripts/conformance/02_nvm/write_test.py::test_write_invalid_nsid_lba

Stress combined namespace and LBA validation by issuing writes far past NCAP.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Guard on MDTS and gather NCAP for the overrun
  2. Issue the out-of-range write on an invalid namespace identifier
  3. Post the SQE to the submission queue
  4. Read the completion status and require an error consistent with the violation

function: scripts/conformance/02_nvm/write_test.py::test_write_ioworker_different_op_mixed

Validate mixed I/O capability by running ioworker with read, write, flush, and trim.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Execute an ioworker mix of read/write/flush/trim commands

function: scripts/conformance/02_nvm/write_test.py::test_write_ioworker_consistency

Measure write consistency by recording per-second IOPS during a write-only run.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Run a write-only ioworker and capture the reported per-second IOPS

function: scripts/conformance/02_nvm/write_test.py::test_write_followed_by_read

Validate write-read pairing by cycling writes followed by reads to many LBAs.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Write data to one LBA before each subsequent read
  2. Read data from the same LBA immediately after each write
  3. Repeat the write-read sequence across thousands of LBAs

function: scripts/conformance/02_nvm/write_test.py::test_write_fua_unaligned

Ensure unaligned FUA writes succeed by issuing IOs that misalign the starting LBA.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Perform unaligned FUA writes across the test region
  2. Verify the data region with larger aligned read operations

function: scripts/conformance/02_nvm/write_test.py::test_write_cache_disable

Assess data persistence by disabling write cache and inducing power events.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Fill the drive with background writes to prepare the media
  2. Disable volatile write cache and confirm the feature setting
  3. Run sequential 32 KiB writes totaling fewer than 999 commands
  4. Simulate an unsafe shutdown by removing and restoring power
  5. Reset the controller and verify post-shutdown readback
  6. Run sequential 32 KiB writes totaling more than 1010 commands
  7. Trigger a controlled shutdown and power-cycle the subsystem
  8. Reinitialize the controller after power restoration
  9. Validate there are no miscompares and verify data integrity
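
Step 2 toggles the Volatile Write Cache feature; the Set Features payload is small enough to show directly (FID and bit layout per NVMe; this sketch only builds the dword, it does not issue the command):

```python
FID_VOLATILE_WRITE_CACHE = 0x06

def vwc_dword11(enable):
    """Dword 11 for Set Features FID 0x06.

    Bit 0 (WCE) enables the volatile write cache; the remaining
    bits are reserved and must be zero.
    """
    return 1 if enable else 0

assert vwc_dword11(False) == 0   # disable the cache, as in step 2
assert vwc_dword11(True) == 1
```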

file: scripts/conformance/02_nvm/write_uncorrectable_test

function: scripts/conformance/02_nvm/write_uncorrectable_test.py::test_write_uncorrectable_large_lba

Verify Write Uncorrectable results at the namespace boundary by issuing commands across the final SLBAs.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Retrieve namespace capacity to identify the last accessible LBA.
  2. Issue Write Uncorrectable at the last valid LBA to confirm proper completion.
  3. Push Write Uncorrectable beyond namespace end and expect LBA Out of Range errors.

function: scripts/conformance/02_nvm/write_uncorrectable_test.py::test_write_uncorrectable_deallocate

Confirm that deallocate commands succeed immediately after Write Uncorrectable and allow normal writes.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Verify the controller advertises the deallocate capability before testing.
  2. Prepare deterministic write data used after the uncorrectable/deallocate sequence.
  3. Mark the target range as uncorrectable before any reclaim operations.
  4. Deallocate the same range and require successful completion.
  5. Rewrite the previously uncorrectable range to confirm media usability.

function: scripts/conformance/02_nvm/write_uncorrectable_test.py::test_write_uncorrectable_after_deallocate

Ensure Write Uncorrectable behaves normally when issued after explicit deallocate operations.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Confirm the controller can execute deallocate commands ahead of the test.
  2. Create deterministic IO data and queue resources for follow-on writes.
  3. Deallocate the range so Write Uncorrectable follows an empty logical space.
  4. Issue Write Uncorrectable on the deallocated space and expect success.
  5. Perform a standard write afterward to verify the namespace continues working.

function: scripts/conformance/02_nvm/write_uncorrectable_test.py::test_write_uncorrectable_read

Validate that reads targeting Write Uncorrectable LBAs return Unrecovered Read Error until rewritten.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Allocate IO buffers and deterministic write data for the LBA range.
  2. Tag the LBAs as uncorrectable ahead of the read validation.
  3. Read the uncorrectable range and expect an Unrecovered Read Error from the controller.
  4. Rewrite the same LBAs so subsequent reads can succeed.
  5. Read the recovered range and confirm the expected pattern is returned.

file: scripts/conformance/02_nvm/write_zeroes_test

function: scripts/conformance/02_nvm/write_zeroes_test.py::test_write_zeroes_large_lba

Ensure Write Zeroes rejects ranges exceeding namespace capacity by issuing commands at and beyond the boundary.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Read namespace capacity to determine command boundaries.
  2. Issue Write Zeroes at the last logical block to confirm valid completion.
  3. Issue Write Zeroes starting exactly at namespace capacity and expect a range error.
  4. Issue Write Zeroes starting beyond namespace capacity and expect a range error.
  5. Issue Write Zeroes that overruns the namespace end and expect a range error.
  6. Issue Write Zeroes using an invalid high SLBA to ensure the controller reports a range error.

function: scripts/conformance/02_nvm/write_zeroes_test.py::test_write_zeroes_valid

Validate Write Zeroes with different IO flags by writing data, zeroing it, and reading back asynchronously.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Prepare data buffers and populate the IO queue context.
  2. Write patterned data and confirm the media contains the payload.
  3. Define a completion callback that issues a read after Write Zeroes.
  4. Submit the Write Zeroes command with the requested IO flags.
  5. Wait for completion and confirm the logical block was zeroed.

function: scripts/conformance/02_nvm/write_zeroes_test.py::test_write_zeroes_invalid_nsid

Confirm Write Zeroes returns Invalid Namespace when an out-of-range NSID is used.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Submit a raw SQE targeting invalid namespace 0xff.
  2. Verify the completion status reports Invalid Namespace or Format.

function: scripts/conformance/02_nvm/write_zeroes_test.py::test_write_zeroes_nlb

Validate the Write Zeroes maximum transfer by issuing a request sized to the controller MDTS limit.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Read the controller MDTS to determine the allowed maximum blocks.
  2. Calculate the Write Zeroes Size Limit when the controller exposes it.
  3. Program a Write Zeroes SQE sized to the MDTS limit.
  4. Confirm the completion status indicates success for the MDTS-sized request.

function: scripts/conformance/02_nvm/write_zeroes_test.py::test_write_zeroes_invalid_nsid_lba

Ensure Write Zeroes reports errors for invalid namespace and SLBA combinations exceeding capacity.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Read the namespace capacity and controller MDTS for boundary calculations.
  2. Submit Write Zeroes targeting an invalid namespace and out-of-range SLBA.
  3. Validate the completion status returns an appropriate error code.

function: scripts/conformance/02_nvm/write_zeroes_test.py::test_write_zeroes_data_unit_write

Verify the SMART Data Units Written field remains unchanged when issuing Write Zeroes commands.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Skip this validation on controllers implementing specs earlier than 1.4.
  2. Establish the baseline Data Units Written counter from the SMART log.
  3. Issue a burst of Write Zeroes commands across the namespace.
  4. Re-read the SMART log to confirm the Data Units Written value is unchanged.
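
The SMART comparison in steps 2 and 4 relies on the Data Units Written encoding, sketched here (units per the NVMe SMART / Health log; exact rounding can vary by vendor):

```python
def data_units_written(blocks_512):
    """Convert 512-byte block counts to SMART Data Units.

    The SMART log reports Data Units Written in units of 1000
    blocks of 512 bytes, so small bursts may not move the counter.
    """
    return blocks_512 // 1000

# Write Zeroes transfers no host data, so the counter must not move
baseline = data_units_written(2_000_000)
after = data_units_written(2_000_000)       # block count unchanged
assert after - baseline == 0
assert data_units_written(1000) == 1
```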

folder: scripts/conformance/03_features/hmb

file: scripts/conformance/03_features/hmb/1_basic_test

function: scripts/conformance/03_features/hmb/1_basic_test.py::test_hmb_buffer_alloc_huge

Validate HMB allocations scale cleanly by instantiating progressively larger buffers.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Ramp buffer allocation sizes exponentially to observe allocator limits
  2. Allocate a large contiguous buffer to check high watermark handling
  3. Create multiple medium buffers to confirm repeated allocations succeed

function: scripts/conformance/03_features/hmb/1_basic_test.py::test_hmb_write_read

Verify HMB IO paths by issuing sequential, read-only, and mixed workloads across sizes.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Enable the host memory buffer before issuing IO
  2. Iterate through IO patterns for multiple passes while HMB remains active
  3. Issue a write-only workload to populate the host memory buffer
  4. Issue a read-only workload to verify data retrieval through HMB
  5. Issue a mixed workload to confirm balanced read and write handling

file: scripts/conformance/03_features/hmb/2_protocol_test

function: scripts/conformance/03_features/hmb/2_protocol_test.py::test_hmb_support

Verify the Host Memory Buffer preferred size is greater than or equal to the minimum by parsing Identify data.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read the HMB preferred and minimum sizes from Identify data.
  2. Validate HMPRE is at least HMMIN and log relevant Identify fields.
  3. Warn if HMPRE exceeds 64 MiB of host memory.
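
Steps 2 and 3 reduce to a small check over the Identify fields (HMPRE and HMMIN are expressed in 4 KiB units; the field values below are hypothetical):

```python
def check_hmb_sizes(hmpre, hmmin):
    """Validate HMB size fields from Identify Controller.

    HMPRE (preferred size) and HMMIN (minimum size) are both
    counted in units of 4 KiB.
    """
    assert hmpre >= hmmin, "HMPRE must be at least HMMIN"
    warnings = []
    if hmpre * 4096 > 64 * 1024 * 1024:
        warnings.append("HMPRE exceeds 64 MiB of host memory")
    return warnings

assert check_hmb_sizes(hmpre=8192, hmmin=1024) == []    # 32 MiB preferred
assert check_hmb_sizes(hmpre=32768, hmmin=1024) != []   # 128 MiB: warn
```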

function: scripts/conformance/03_features/hmb/2_protocol_test.py::test_hmb_command_sequence

Ensure enabling HMB while it is already active triggers a Command Sequence Error via the Set Features command.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Enable the Host Memory Buffer feature as the baseline state.
  2. Attempt to re-enable HMB with EHM=1 and expect a Command Sequence Error.
  3. Disable the Host Memory Buffer feature to clear the state.
  4. Issue a subsequent disable to confirm it succeeds when already disabled.

function: scripts/conformance/03_features/hmb/2_protocol_test.py::test_hmb_size_invalid

Validate that HMB Set Features rejects invalid buffer sizes before accepting valid parameters.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Gather HMB capability data and skip when unsupported.
  2. Allocate host memory buffers and descriptor table for hmb validation.
  3. Allocate the next chunk buffer and track remaining size.
  4. Populate the descriptor entry for this chunk.
  5. Record the physical address of the descriptor list for subsequent commands.
  6. Attempt to enable HMB with zero buffer size and expect an Invalid Field in Command status.
  7. Enable HMB with valid size parameters to confirm normal behavior.
  8. Query the HMB feature to confirm the controller adopted the configuration.
  9. Disable the HMB feature to clean up the test state.
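
The descriptor table built in steps 2-5 follows the 16-byte Host Memory Buffer descriptor layout, which can be packed like this (addresses are hypothetical; the sketch only builds the list in host memory):

```python
import struct

def pack_hmb_descriptor(badd, bsize_pages):
    """One HMB descriptor entry: 8-byte buffer address (BADD),
    4-byte size in memory-page units (BSIZE), 4 reserved bytes."""
    assert badd % 4096 == 0, "BADD must be page aligned"
    return struct.pack("<QII", badd, bsize_pages, 0)

def build_descriptor_list(chunks):
    return b"".join(pack_hmb_descriptor(a, s) for a, s in chunks)

# Two hypothetical 1 MiB chunks described as 256 pages of 4 KiB each
desc = build_descriptor_list([(0x10000000, 256), (0x10100000, 256)])
assert len(desc) == 32
assert struct.unpack_from("<Q", desc, 16)[0] == 0x10100000
```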

function: scripts/conformance/03_features/hmb/2_protocol_test.py::test_hmb_entry_count_invalid

Validate that HMB Set Features rejects a zero descriptor count while accepting valid configurations.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Gather HMB capability data and skip when unsupported.
  2. Allocate host memory buffers and descriptor table for entry-count validation.
  3. Allocate the next chunk buffer and account for bytes remaining.
  4. Populate the descriptor entry for this chunk.
  5. Record the physical address of the descriptor list for subsequent commands.
  6. Attempt to enable HMB with zero descriptor entries and expect Invalid Field in Command.
  7. Enable HMB with the valid descriptor count to confirm success.
  8. Query the HMB feature to verify descriptor list information.
  9. Disable the HMB feature and return the controller to default configuration.

function: scripts/conformance/03_features/hmb/2_protocol_test.py::test_hmb_format_sanitize

Confirm that format and sanitize operations succeed while HMB traffic continues to run.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Enable the HMB feature before issuing IO and maintenance commands.
  2. Run ioworker traffic to exercise the data path while HMB is active.
  3. Record the current LBA format ID from Identify data.
  4. Issue a format command while HMB remains enabled.
  5. Run ioworker traffic again to verify IO remains healthy post-format.
  6. Skip the sanitize portion when the controller reports no support.
  7. Launch a block erase sanitize and monitor progress via the sanitize log page.
  8. Run ioworker traffic a final time to validate HMB functionality.

function: scripts/conformance/03_features/hmb/2_protocol_test.py::test_hmb_enable_disable_with_ioworker

Stress enable and disable transitions for HMB while an IO workload runs continuously.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Cycle HMB enable and disable while random IO runs to expose timing issues.

function: scripts/conformance/03_features/hmb/2_protocol_test.py::test_hmb_data_consistency

Verify data consistency across HMB enable and disable transitions by comparing read buffers.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Enable the HMB feature prior to capturing baseline IO data.
  2. Run IO workload to seed media content before comparisons.
  3. Read baseline data into reference buffers while HMB is enabled.
  4. Disable HMB to compare behavior without host memory buffering.
  5. Read data without HMB and confirm that it matches the HMB-enabled baseline.
  6. Run IO workload again between enable and disable phases.
  7. Capture another data snapshot with HMB disabled for later comparison.
  8. Re-enable HMB prior to the final comparison read.
  9. Read data with HMB enabled once more and ensure it matches the disabled snapshot.

file: scripts/conformance/03_features/hmb/3_size_test

function: scripts/conformance/03_features/hmb/3_size_test.py::test_hmb_single_buffer

Validate HMB single-buffer behavior by enabling the entire allocation and running IO traffic.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Configure HMB as a single contiguous chunk covering the full allocation
  2. Issue IO workloads to exercise the configured HMB

function: scripts/conformance/03_features/hmb/3_size_test.py::test_hmb_buffer_size_large

Validate large HMB chunk sizes by programming multiple megabyte chunks and running IO.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Configure HMB using the requested multi-megabyte chunk size
  2. Issue IO workloads to exercise the configured HMB

function: scripts/conformance/03_features/hmb/3_size_test.py::test_hmb_buffer_size_small

Stress small HMB chunk sizes by ensuring descriptor limits are sufficient before exercising IO.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Skip test if the host cannot describe enough chunks for the selected configuration
  2. Configure HMB using the requested small chunk size
  3. Issue IO workloads to exercise the configured HMB

function: scripts/conformance/03_features/hmb/3_size_test.py::test_hmb_buffer_size_tiny

Validate extremely small HMB chunk sizes by checking descriptor capacity prior to executing IO.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Skip test if the host cannot describe enough chunks for the selected configuration
  2. Configure HMB using the requested tiny chunk size
  3. Issue IO workloads to exercise the configured HMB
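
The skip condition in step 1 (here and in the small-chunk test above) is a descriptor-count check, sketched as follows (HMMAXD semantics per the NVMe Identify Controller data; sizes are hypothetical):

```python
def chunks_needed(total_bytes, chunk_bytes):
    # Ceiling division: descriptor entries required for the allocation
    return -(-total_bytes // chunk_bytes)

def hmb_config_feasible(total_bytes, chunk_bytes, hmmaxd):
    """HMMAXD caps the number of HMB descriptor entries the
    controller accepts; a reported value of 0 means no limit."""
    return hmmaxd == 0 or chunks_needed(total_bytes, chunk_bytes) <= hmmaxd

# 32 MiB described in 8 KiB chunks needs 4096 descriptor entries
assert chunks_needed(32 << 20, 8 << 10) == 4096
assert not hmb_config_feasible(32 << 20, 8 << 10, hmmaxd=8)   # skip test
assert hmb_config_feasible(32 << 20, 8 << 10, hmmaxd=0)       # no limit
```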

file: scripts/conformance/03_features/hmb/4_mr_test

function: scripts/conformance/03_features/hmb/4_mr_test.py::test_hmb_mr_reset

Validate MR bit persistence by re-enabling HMB after resets and D3 transitions using the previously recorded buffer.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Enable HMB and verify baseline IO workload
  2. Disable HMB while leaving MR metadata intact before reset
  3. Skip reset operations when no event is requested
  4. Trigger a controller reset when requested by the parameter
  5. Perform subsystem reset followed by controller reset when requested
  6. Cycle through D3hot to D0 and reset the controller after power transition
  7. Re-enable HMB with the same MR buffer parameters
  8. Run IO workload again to ensure buffer reuse works after transitions

function: scripts/conformance/03_features/hmb/4_mr_test.py::test_hmb_mr_d3_without_disable

Confirm IO continues successfully after D3 transitions when HMB re-enable is deferred, with full state clearing controlled by the disable flag.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Enable HMB and execute a baseline IO workload
  2. Disable HMB with MR state saved prior to power change
  3. Enter D3hot and return to D0 before resetting the controller
  4. Optionally clear the HMB state completely when the disable flag is set
  5. Issue IO workload to detect failures when HMB stays disabled
  6. Re-enable HMB using the retained MR settings
  7. Exercise the IO workload again after HMB is re-enabled

function: scripts/conformance/03_features/hmb/4_mr_test.py::test_hmb_mr_with_wrong_buffer

Ensure the controller rejects MR enable attempts with zeroed buffer fields after D3 transitions by expecting an Invalid Field warning.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Enable HMB to capture valid buffer parameters
  2. Run IO workload to confirm baseline behavior
  3. Disable HMB while keeping MR metadata prior to power cycle
  4. Transition through D3hot and reset the controller
  5. Attempt to enable HMB with zeroed buffer values and expect an Invalid Field warning
  6. Run IO workload again to confirm the controller remains operational

function: scripts/conformance/03_features/hmb/4_mr_test.py::test_hmb_mr_with_different_buffer

Check that mismatched MR buffer parameters are rejected after D3 transitions by issuing a Set Features command with altered values.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Enable HMB and confirm IO worker activity
  2. Disable HMB while preserving MR metadata before power cycle
  3. Move to D3hot, return to D0, and reset controller to simulate sleep
  4. Attempt to enable HMB using modified buffer address and size to force rejection
  5. Run IO workload afterward to ensure controller remains responsive

file: scripts/conformance/03_features/hmb/5_memory_test

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_address_non_align

Verify HMB enabling succeeds even with a misaligned Host Memory Descriptor List address by issuing IO afterwards.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Enable HMB using a descriptor pointer that is intentionally misaligned.
  2. Issue IO workload to confirm the controller operates normally.

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_memory

Ensure preexisting host buffers retain data by allocating memory before and after enabling HMB and performing IO.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Allocate the first host buffer before enabling HMB and fill it with a fixed pattern.
  2. Enable HMB to consume host memory for controller-managed caching.
  3. Allocate a second host buffer after HMB enable with another pattern.
  4. Run an IO workload while HMB is in use.
  5. Confirm neither buffer content changed.

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_not_equal

Validate the controller accepts HMB lists where entry sizes differ by enabling HMB with uneven descriptors.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Enable HMB with buffers configured to have unequal sizes per entry.
  2. Run IO traffic to ensure normal behavior continues.

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_out_of_order

Check that HMB remains operational when descriptor entries are intentionally out of order.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Enable HMB while permuting the entry order to mimic improper sorting.
  2. Execute IO to verify HMB accesses despite the shuffled list.

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_bit_flip_in_buffer_list

Introduce bit flips inside the Host Memory Descriptor List to ensure the controller tolerates descriptor corruption.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Enable HMB using a valid descriptor table.
  2. Run IO to populate and exercise HMB.
  3. Corrupt the descriptor list by flipping a bit in the HMB list buffer.
  4. Run IO again to confirm the device handles the corrupted descriptor.

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_bit_flip_data_consistency

Flip random bits inside HMB data buffers and verify read workloads continue to succeed.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Enable HMB before running any IO workload.
  2. Run sequential writes to populate HMB with deterministic data.
  3. Randomly flip bits in the HMB data buffers according to the parameterized count.
  4. Re-run random read IO to ensure data path tolerates the corrupt cache lines.
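
The corruption in step 3 amounts to XOR-ing random bits into the buffers, which can be sketched as follows (`flip_random_bits` is an illustrative helper, not the suite's API):

```python
import random

def flip_random_bits(buf, count, seed=None):
    """Flip `count` randomly chosen bits in-place, one XOR per flip."""
    rng = random.Random(seed)
    for _ in range(count):
        offset = rng.randrange(len(buf))
        bit = rng.randrange(8)
        buf[offset] ^= 1 << bit

data = bytearray(4096)
flip_random_bits(data, 3, seed=7)
popcount = sum(bin(b).count("1") for b in data)
# Three flips leave 1 or 3 bits set, depending on collisions
assert popcount in (1, 3)
```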

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_change_all_buffer_dword

Stress the controller by corrupting one byte in every 32-bit word of the HMB buffers before validation IO.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Enable HMB so the controller can start referencing host memory buffers.
  2. Run sequential writes to populate the namespace and exercise HMB.
  3. Walk each HMB buffer and flip a random bit within every dword.
  4. Perform mixed random reads to observe behavior with widespread corruption.

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_change_all_buffer_bytes

Corrupt every byte stored in HMB before executing a mixed read-write workload.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Enable HMB for the target namespace.
  2. Run sequential writes to populate the namespace with predictable data.
  3. Flip a random bit in every byte across all HMB buffers.
  4. Execute mixed random IO to validate error handling after full corruption.

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_change_all_buffer_interval

Inject periodic corruption into varied buffer intervals before running a mixed IO workload.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Enable HMB with standard descriptors.
  2. Run sequential writes to preload the namespace and fill HMB.
  3. Inject random bit flips within each selected interval across every HMB buffer.
  4. Launch mixed random IO traffic to validate behavior after interval corruption.

folder: scripts/conformance/03_features

file: scripts/conformance/03_features/boot_partition_test

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_write

Validate boot partition programming by streaming Firmware Download chunks and completing with Firmware Commit.

Reference

  1. NVM Express Revision 2.0

Steps

  1. Compute boot image geometry based on the Boot Partition size field.
  2. Prepare deterministic buffer chunks for the image payload.
  3. Transfer each chunk sequentially with the Firmware Download command.
  4. Commit the staged image to the specified boot partition.
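
The geometry in step 1 derives from BPINFO.BPSZ, which reports the partition size in multiples of 128 KiB (the 4 KiB download chunk size below is an assumption for illustration):

```python
def boot_partition_bytes(bpsz):
    # BPINFO.BPSZ gives each boot partition's size in 128 KiB multiples
    return bpsz * 128 * 1024

def chunk_plan(image_bytes, chunk_bytes=4096):
    # (full chunks, trailing partial chunk) for Firmware Download
    return image_bytes // chunk_bytes, image_bytes % chunk_bytes

assert boot_partition_bytes(1) == 128 * 1024
assert chunk_plan(128 * 1024) == (32, 0)    # 32 full 4 KiB downloads
```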

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_load

Ensure boot partition data can be read through the MMIO load registers and matches the expected pattern.

Reference

  1. NVM Express Revision 2.0

Steps

  1. Calculate chunk geometry that aligns with the Boot Partition capacity.
  2. Program the MMIO registers, fetch each chunk, and verify the pattern.
  3. Report the effective boot partition read throughput.

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_load_beyond_end

Confirm the controller reports BPINFO.BRS errors when MMIO loads request data beyond the boot partition boundary.

Reference

  1. NVM Express Revision 2.0

Steps

  1. Allocate a transfer buffer and derive the total number of register-sized chunks.
  2. Program the MMIO registers to read beyond the partition end.
  3. Wait for completion and expect an error BRS status.
  4. Retry with a valid offset and confirm successful completion.

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_verify

Verify the boot partition image via Boot Partition Log Page reads and compare the payload to the programmed pattern.

Reference

  1. NVM Express Revision 2.0

Steps

  1. Skip controllers whose reported specification version predates revision 2.0.
  2. Confirm the Boot Partition log page is supported before accessing it.
  3. Validate the reported Boot Partition size within the log page.
  4. Calculate the total image size based on the Boot Partition descriptor.
  5. Retrieve each chunk via Log Page offset reads and verify the buffer contents.
  6. Report the sustained log page read throughput.

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_load_power_cycle

Ensure MMIO boot partition loads recover correctly when a dirty power cycle interrupts the transfer.

Reference

  1. NVM Express Revision 2.0

Steps

  1. Launch a boot partition load and verify the returned data pattern.
  2. Start another load transaction without waiting for completion.
  3. Force a dirty power cycle while the boot image transfer is active.
  4. Reload the image after power is restored and confirm integrity.

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_load_change_address

Validate that reprogramming the MMIO address register mid-transfer routes the boot image to the updated buffer.

Reference

  1. NVM Express Revision 2.0

Steps

  1. Initiate a boot image load into the first buffer.
  2. Reprogram the DMA address after the transfer has started.
  3. Wait for completion and ensure the image appears in only one buffer.

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_load_write_dword

Ensure MMIO boot partition loads succeed when the DMA address registers are programmed with dword writes.

Reference

  1. NVM Express Revision 2.0

Steps

  1. Program the DMA address via dword writes and start the load.
  2. Wait for completion and validate the buffer contents.

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_load_offset

Verify the boot partition can be loaded into a buffer starting at non-4K aligned offsets without data corruption.

Reference

  1. NVM Express Revision 2.0

Steps

  1. Initiate a boot image load into a buffer with the specified offset.
  2. Validate that the resulting buffer matches the expected image pattern.

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_power_cycle

Validate that Firmware Commit to a boot partition remains atomic when a dirty power cycle interrupts the download or commit flow.

Reference

  1. NVM Express Revision 2.0

Steps

  1. Program a known baseline image into boot partition 0 using the helper routine.
  2. Recompute the total image size and chunking parameters used for downloads.
  3. Stage a replacement image by issuing Firmware Download for each chunk.
  4. Commit the newly downloaded image to the boot partition.
  5. Force a dirty power cycle while the commit is in progress.
  6. Load the boot partition through the MMIO registers after power recovery.
  7. Verify whether each chunk matches the old or new pattern.
  8. Confirm atomicity: Firmware Commit with Commit Action 110b or 111b shall guarantee an atomic boot partition update.
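
The atomicity check above hinges on the Firmware Commit dword layout. As a minimal sketch (field packing only, not the PyNVMe3 API): CDW10 carries the Firmware Slot in bits 02:00, the Commit Action in bits 05:03, and the Boot Partition ID in bit 31.

```python
def fw_commit_cdw10(slot: int, action: int, bpid: int = 0) -> int:
    """Pack Firmware Commit CDW10: FS in bits 02:00, CA in bits 05:03,
    BPID in bit 31."""
    assert 0 <= slot <= 7 and 0 <= action <= 7 and bpid in (0, 1)
    return slot | (action << 3) | (bpid << 31)

# CA=110b replaces the boot partition selected by BPID with the
# downloaded image; CA=111b marks that boot partition as active.
print(hex(fw_commit_cdw10(slot=0, action=0b110, bpid=1)))  # 0x80000030
```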

file: scripts/conformance/03_features/power_management_test

function: scripts/conformance/03_features/power_management_test.py::test_power_state_transition

Exercise read I/O while forcing manual power state transitions between two targets.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Disable APST so the test can manually drive every transition.
  2. Seed the target LBA with known data for later verification.
  3. Sweep multiple transition delays and confirm read correctness.
  4. Log the active transition delay for traceability.
  5. Force the initial power state before the delay window.
  6. Request the target state and pause for the active delay.
  7. Re-read the seeded LBA and verify data integrity.
  8. Consume the completion from the asynchronous Set Features call.
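
The manual transitions above go through the Power Management feature (FID 02h). A minimal sketch of the CDW11 packing, for illustration only: the target Power State sits in bits 04:00 and, from NVMe 1.4 on, a Workload Hint in bits 07:05.

```python
FID_POWER_MANAGEMENT = 0x02

def power_mgmt_cdw11(ps: int, workload_hint: int = 0) -> int:
    """Pack Set Features (Power Management) CDW11: PS bits 04:00,
    WH bits 07:05."""
    assert 0 <= ps <= 31 and 0 <= workload_hint <= 7
    return ps | (workload_hint << 5)

print(power_mgmt_cdw11(ps=3))                    # 3
print(power_mgmt_cdw11(ps=4, workload_hint=2))   # 68
```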

function: scripts/conformance/03_features/power_management_test.py::test_power_state_ps3_simple

Confirm the controller exits PS3 when new work arrives by issuing admin and I/O commands.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Disable APST so the controller holds the requested state.
  2. Place the controller in PS0 and allow it to settle.
  3. Move to PS3 and remain idle long enough for the state to stabilize.
  4. Issue admin identify and read IOs to ensure the device wakes correctly.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_async_with_io

Stress background reads while repeatedly forcing asynchronous transitions into PS3 and PS4.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Disable APST so manual transitions remain in effect.
  2. Prime the controller in the provided operational state before testing.
  3. Fill the namespace so that later reads return meaningful data.
  4. Run a read workload while toggling between non-operational states.
  5. Alternate between PS3 and PS4 while the workload is active.
  6. Gather the results once the worker completes.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_operational_async_with_io

Cycle through operational power states while a read workload runs to confirm stability.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Disable APST so the test can directly select every operational state.
  2. Begin the test from PS0 to establish a reference point.
  3. Precondition the namespace with deterministic data.
  4. Keep issuing reads while hopping among operational states.
  5. Randomize the next operational state every iteration.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_npss

Confirm every advertised power state can be selected and invalid states fail cleanly.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Query the number of supported power states from Identify data.
  2. Disable APST if the controller supports it to maintain manual control.
  3. Set each valid power state and confirm the same value is reported back.
  4. Ensure the Set Features command succeeded.
  5. Try invalid power states and expect the controller to return an error.
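
The valid/invalid split above follows from NPSS being zero-based: a controller reporting NPSS=4 at byte 263 of Identify Controller data supports PS0 through PS4, and any higher value should fail Set Features. A small sketch of that check, using a hypothetical Identify buffer rather than a live device:

```python
def npss_from_identify(identify: bytes) -> int:
    """NPSS is byte 263 of the Identify Controller data, zero-based."""
    return identify[263]

def is_valid_power_state(ps: int, npss: int) -> bool:
    return 0 <= ps <= npss

identify = bytearray(4096)   # hypothetical Identify Controller buffer
identify[263] = 4            # controller supports PS0..PS4
npss = npss_from_identify(bytes(identify))
assert is_valid_power_state(4, npss)
assert not is_valid_power_state(5, npss)   # expect an Invalid Field error
```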

function: scripts/conformance/03_features/power_management_test.py::test_power_state_maximum_power

Verify the maximum power consumption monotonically decreases across power states.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read the number of supported power states and stage storage.
  2. Collect the scaled maximum power value for each state.
  3. Confirm power decreases across operational states.
  4. Confirm power decreases across non-operational states and between groups.
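
The scaled maximum power comes from the power state descriptors: each descriptor is 32 bytes starting at byte 2048 of the Identify Controller data, with MP in bytes 01:00 and the MXPS scale bit in byte 3 bit 0 (0 selects 0.01 W units, 1 selects 0.0001 W units). A minimal decode sketch, not the PyNVMe3 API:

```python
def max_power_watts(identify: bytes, ps: int) -> float:
    """Decode the maximum power of one power state descriptor in watts."""
    d = identify[2048 + 32 * ps : 2048 + 32 * (ps + 1)]
    mp = d[0] | (d[1] << 8)                 # MP, bytes 01:00
    scale = 0.0001 if d[3] & 1 else 0.01    # MXPS, byte 3 bit 0
    return mp * scale

identify = bytearray(4096)      # hypothetical Identify buffer
identify[2048:2050] = (800).to_bytes(2, "little")   # PS0: MP=800, MXPS=0
print(max_power_watts(bytes(identify), 0))          # 8.0 W
```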

function: scripts/conformance/03_features/power_management_test.py::test_power_state_operational_ps_with_ioworker

Run ioworker workloads in every operational power state to ensure IO completes cleanly.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Enable APST when supported so the controller may downshift as needed.
  2. Iterate each operational state and run a brief workload.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_nonoperational_ps_with_io

Issue IO of a single opcode from each non-operational state and confirm the controller resumes the prior operational state.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Disable APST so manual power selections are honored during setup.
  2. Walk through every operational/non-operational pairing and send IOs.
  3. Confirm Set Features landed on the requested non-operational state.
  4. Validate the controller resumes the prior operational power state.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_nonoperational_ps_with_mixio

Issue mixed IO workloads in non-operational states to confirm the controller resumes the previous operational state.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Disable APST so the test can hold each power state pair precisely.
  2. For each pairing, run a mixed workload and check whether the controller returns to the operational state.
  3. Validate the request placed the controller into the non-operational state.
  4. Ensure the controller transitions back to the latest operational state.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_with_admin_cmd

Issue admin commands in every power state and ensure the state does not drift.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Disable APST when present so manual state selection sticks.
  2. Loop through every state, send admin log reads, and verify the state is unchanged.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_temperature_aer

Drive temperature thresholds in every power state and verify AERs trigger without altering the state.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Capture the current temperature configuration for restoration later.
  2. Enable the temperature threshold asynchronous event.
  3. Skip when the device cannot report temperature via AER.
  4. Disable APST if present so the current power state remains fixed.
  5. Allocate a SMART log buffer for repeated temperature sampling.
  6. Iterate through each power state, trigger the warning, and confirm the state does not change.
  7. Raise an over-temperature event.
  8. Restore the threshold and confirm the warning clears without power-state movement.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_nonoperational_ps_with_dst

Run device self-tests from non-operational states to ensure power states remain latched.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Skip the test when device self-test is unsupported.
  2. Enable Non-Operational Power State Permissive Mode to allow DST work.
  3. Disable APST so the controller stays in the requested power state.
  4. Launch DST from each non-operational state and confirm the state remains unchanged.
  5. Verify permissive mode holds the controller in the test state.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_short_dst_duration

Hammer Set Features commands during a short DST and confirm it still finishes within spec.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Skip if the controller lacks device self-test support.
  2. Start a short DST and capture the baseline timestamp.
  3. Flood Set Features requests across power states until DST completes.
  4. Abort the DST if needed and ensure it completed within the two-minute limit.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_different_ps_with_write_register

Write NVMe registers in every power state to ensure MMIO does not disturb the current state.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Disable APST to guarantee Set Features stays applied.
  2. Touch registers repeatedly across all power states and verify they remain fixed.
  3. Ensure the MMIO accesses did not trigger a power transition.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_different_ps_with_write_pcie_register

Write PCIe configuration registers across power states to ensure the NVMe state is maintained.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Disable APST when supported to keep the state fixed.
  2. Perform repeated PCIe writes for each state and verify the state stays locked.
  3. Confirm MMIO/configuration access did not change the power state.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_autonomous_ps_transitions

Verify APST drives the controller into non-operational states after extended idle time.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Ensure Non-Operational Power State Permissive Mode is off when possible.
  2. Enable APST so the controller autonomously transitions after idle.
  3. Idle in each operational state for long enough to trigger APST.
  4. Confirm the controller entered one of the non-operational states.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_invalid_transition

Verify the controller rejects APST entries that specify operational target states.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Skip when the device reports an NVMe version earlier than 1.4.
  2. Skip when APST is not implemented.
  3. Enable APST so the host can program entries for validation.
  4. Determine how many power states to exercise in the APST table.
  5. Program APST entries that incorrectly target operational states and expect errors.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_max_power_pcie

Compare PS0 maximum power against the PCIe slot limit to ensure compliance.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Derive the PCIe slot power limit from the capability registers.
  2. Calculate PS0 maximum power from the Identify descriptor.
  3. Compare the two numbers to ensure the slot can supply PS0.
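
The slot-side number in this comparison comes from the PCIe Slot Capabilities register: Slot Power Limit Value in bits 14:7, multiplied by the scale selected in bits 16:15. A minimal decode sketch (values above EFh use extended encodings not handled here):

```python
def slot_power_limit_watts(slot_cap: int) -> float:
    """Decode the PCIe Slot Capabilities power limit in watts."""
    value = (slot_cap >> 7) & 0xFF            # Slot Power Limit Value
    scale = [1.0, 0.1, 0.01, 0.001][(slot_cap >> 15) & 0x3]
    return value * scale

slot_cap = (75 << 7) | (0 << 15)   # hypothetical 75 W slot
print(slot_power_limit_watts(slot_cap))   # 75.0
```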

function: scripts/conformance/03_features/power_management_test.py::test_power_state_operational_performance

Measure sequential read performance in each operational power state for comparison.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Disable APST if needed and reformat to get consistent workload data.
  2. Capture read IOPS for every operational power state.
  3. Compare the IOPS of each operational state with the next lower-power state.
  4. Warn when read performance does not degrade at the lower power state.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_thermal_throttle_performance

Check read performance while varying host-controlled thermal throttle thresholds.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Reformat the namespace so the ensuing workload has a clean baseline.
  2. Skip this test if the device does not support host controlled thermal management.
  3. Fix the controller in PS0 and disable autonomous power state transitions.
  4. Get the current composite temperature from SMART data.
  5. Helper to fetch the SMART temperature and validate input ranges.
  6. Helper to measure IO performance with the current throttle configuration.
  7. Skip the test if the current temperature is outside the range the host can control.
  8. Measure baseline performance with no thermal throttle active.
  9. Idle to cool down, then re-read the current temperature.
  10. Make the current temperature exceed TMT1 to trigger the light throttle.
  11. Verify light-throttle performance is lower than baseline performance.
  12. Idle to cool down, then re-read the current temperature.
  13. Make the current temperature exceed TMT2 to trigger the heavy throttle.
  14. Verify heavy-throttle performance is lower than light-throttle performance.
  15. Idle to cool down, then re-read the current temperature.
  16. Place the current temperature above TMT1 but below TMT2, while the heavy throttle continues.
  17. Verify light-throttle performance is higher than heavy-throttle performance.
  18. Restore the original TMT settings.
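
The TMT1/TMT2 thresholds above are programmed through the Host Controlled Thermal Management feature (FID 10h). As an illustrative sketch of the CDW11 packing only: TMT1 sits in bits 31:16 and TMT2 in bits 15:0, both in Kelvin, and TMT1 must be below TMT2 when both are non-zero.

```python
def hctm_cdw11(tmt1_k: int, tmt2_k: int) -> int:
    """Pack HCTM CDW11: TMT1 in bits 31:16, TMT2 in bits 15:00 (Kelvin)."""
    assert tmt1_k < tmt2_k, "light-throttle threshold must be below heavy"
    return (tmt1_k << 16) | tmt2_k

def celsius_to_kelvin(c: int) -> int:
    return c + 273

# Hypothetical thresholds: light throttle at 40 C, heavy at 50 C.
print(hex(hctm_cdw11(celsius_to_kelvin(40), celsius_to_kelvin(50))))  # 0x1390143
```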

function: scripts/conformance/03_features/power_management_test.py::test_power_state_with_hot_reset

Force hot resets immediately after power-state selections and ensure the controller recovers to PS0.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Drive each power state just before initiating a hot reset.
  2. After every reset confirm the controller comes back in PS0.
  3. Disable APST when available to keep the target state fixed.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_with_function_level_reset

Issue PCIe FLR operations after selecting various power states and ensure recovery to PS0.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Walk through all power states before triggering FLR cycles.
  2. Validate that each FLR returns the controller to PS0.
  3. Disable APST for deterministic state transitions.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_idle_transition_ps

Program APST idle timers and verify transitions occur when the idle window elapses.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Enable APST and back up the table for later restoration.
  2. Program a three-second idle timeout and probe behavior above/below the threshold.
  3. Program a five-second timeout and repeat the same checks at different delays.
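
Each APST table entry programmed above is a 64-bit value: the Idle Transition Power State in bits 07:03 and the Idle Time Prior to Transition, in milliseconds, in bits 31:08. A minimal packing sketch with hypothetical values:

```python
def apst_entry(itps: int, itpt_ms: int) -> int:
    """Pack one APST entry: ITPS bits 07:03, ITPT bits 31:08 (ms)."""
    assert 0 <= itps <= 31 and 0 <= itpt_ms < (1 << 24)
    return (itps << 3) | (itpt_ms << 8)

# Transition to PS3 after three seconds of idle.
print(hex(apst_entry(itps=3, itpt_ms=3000)))   # 0xbb818
```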

function: scripts/conformance/03_features/power_management_test.py::test_power_state_disable_special_ps_apst

Disable APST for selected power states and confirm they no longer transition automatically.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Enable APST and keep a backup copy of the programmed table.
  2. Helper function that zeroes the APST entry for a specific state.
  3. Clear the APST entry for the targeted power state.
  4. Read the NPSS value so we can iterate through every state.
  5. Disable APST per state and confirm the controller remains in that state while idle.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_break_transition

Verify that issuing IO during APST idle windows resets the transition timer.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Enable APST and keep a backup of the original table.
  2. Inject IO before the idle window expires and confirm APST restarts the timer.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_change_idle_time

Modify APST idle timers on the fly and verify the new delay takes effect.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Enable APST and snapshot the table for restoration.
  2. Update the idle time before it expires and verify the state does not change prematurely.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_host_power

Map host power-state policies onto NVMe power states and validate the resulting behavior under load.

Reference

  1. NVM Express Revision 1.4a.
  2. Microsoft Power Management for Storage Hardware Devices (NVMe).

Steps

  1. Define representative host policies with idle and change times.
  2. Record each policy as its idle time and change time, both in milliseconds.
  3. Derive the device latency numbers from Identify data.
  4. Fill the namespace so later reads validate data already present.
  5. Helper that runs a read workload and then forces a specific NVMe power state.
  6. Randomly select host states and map them to controller states for multiple cycles.
  7. Create a comparison rule that favors states meeting the host latency allowance.
  8. Rank the controller states by the comparison rule.
  9. Default to PS0 if no device latency satisfies the host target.
  10. Otherwise select the mapped power state, handling Modern Standby specially.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_latency

Measure transition latency between every state pair and compare with the advertised limits.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Disable APST when present to keep power states under manual control.
  2. Walk through each source/target combination and validate the reported latency.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_apst_saveable

Validate that APST settings persist across resets when marked as saveable.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Skip the test entirely when APST is unsupported.
  2. Ensure the controller advertises a saveable and changeable APST capability.
  3. Keep the original APST table beside a copy of the effective state.
  4. Program APST with sv=1 and disable transitions.
  5. Reset the controller to check whether the setting persists.
  6. Confirm APST remains disabled after reset via all select options.
  7. Restore the original APST programming.
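
The persistence check relies on the standard Set/Get Features dword encodings. As an illustrative sketch: Set Features CDW10 carries the FID in bits 07:00 and the Save bit in bit 31, while Get Features selects the current, default, saved, or supported value via SEL in bits 10:08.

```python
FID_APST = 0x0C
SEL_CURRENT, SEL_DEFAULT, SEL_SAVED, SEL_SUPPORTED = 0, 1, 2, 3

def set_features_cdw10(fid: int, save: bool) -> int:
    """Set Features CDW10: FID bits 07:00, SV bit 31."""
    return fid | (int(save) << 31)

def get_features_cdw10(fid: int, sel: int) -> int:
    """Get Features CDW10: FID bits 07:00, SEL bits 10:08."""
    return fid | (sel << 8)

print(hex(set_features_cdw10(FID_APST, save=True)))   # 0x8000000c
print(hex(get_features_cdw10(FID_APST, SEL_SAVED)))   # 0x20c
```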

function: scripts/conformance/03_features/power_management_test.py::test_power_state_idle_with_low_speed

Exercise APST behavior while stepping through multiple PCIe link speeds.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Skip test execution entirely when APST is unsupported.
  2. Enable APST so the controller can autonomously downshift while links slow down.
  3. Ensure the requested link speed is within device capability.
  4. Force the PCIe link to the requested speed and confirm the hardware honors it.
  5. Check the actual link speed after the change.
  6. Confirm APST is enabled.
  7. Enable ASPM L1.2 to push the link into the lowest possible idle state.
  8. Fix the controller on PS0 before starting IO.
  9. Write and flush a known block so later reads have context.
  10. Issue reads with increasing idle gaps to observe APST and ASPM combined behavior.
  11. Restore the original link speed before exiting the test.

function: scripts/conformance/03_features/power_management_test.py::test_power_state_format

Issue Format commands from every power state and confirm the controller stays in that state.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Loop through each state, run standard and secure format, and confirm the state persists.

file: scripts/conformance/03_features/reset_test

function: scripts/conformance/03_features/reset_test.py::test_reset_queue_level_reset

Validate queue-level reset while IO remains outstanding by deleting and recreating the queues mid traffic.

Reference

  1. NVM Express Revision 1.4a, Section 7.3.3.

Steps

  1. Format the namespace so data verification starts from a known zero baseline.
  2. Create IO submission and completion queues for manual operations.
  3. Submit patterned write commands to seed data before the reset.
  4. Trigger a queue-level reset while the write commands remain outstanding.
  5. Recreate the queues and issue read commands to validate data.
  6. Verify each completion returns the expected pattern or permissible all-zero data.
  7. Tear down the IO queues used in this test.

function: scripts/conformance/03_features/reset_test.py::test_reset_controller_reset_nvme_registers

Ensure controller reset restores CC and AQA registers after deliberate modifications.

Reference

  1. NVM Express Revision 1.4a, Section 7.3.2.

Steps

  1. Capture the original CC register value for later comparison.
  2. Modify CC so that a reset-induced change can be detected.
  3. Issue a controller-level reset.
  4. Confirm the CC register returns to its original value.
  5. Clear CC.EN to permit updates to admin queue registers.
  6. Adjust the AQA register to a known offset.
  7. Re-enable CC.EN so the controller latches the changed registers.
  8. Ensure the AQA modification is visible before reset.
  9. Verify AQA returns to its saved value after reset.
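
The AQA modification in steps 6-9 touches two zero-based fields: the Admin Submission Queue Size in bits 11:00 and the Admin Completion Queue Size in bits 27:16. A minimal packing sketch:

```python
def aqa_value(asq_entries: int, acq_entries: int) -> int:
    """Pack AQA: ASQS bits 11:00, ACQS bits 27:16, both zero-based."""
    assert 2 <= asq_entries <= 4096 and 2 <= acq_entries <= 4096
    return (asq_entries - 1) | ((acq_entries - 1) << 16)

print(hex(aqa_value(32, 32)))   # 0x1f001f
```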

function: scripts/conformance/03_features/reset_test.py::test_reset_controller_reset_d3hot

Confirm controller reset succeeds after exiting D3hot and returning to D0.

Reference

  1. NVM Express Revision 1.4a, Section 7.3.2.

Steps

  1. Drive the device into D3hot and wait for the power state to stabilize.
  2. Return the device to D0 before resetting.
  3. Issue a controller-level reset.
  4. Confirm the controller reports ready status after reset.

function: scripts/conformance/03_features/reset_test.py::test_reset_controller_reset_aspm

Verify controller reset behavior while toggling ASPM states before issuing the reset.

Reference

  1. NVM Express Revision 1.4a, Section 7.3.2.

Steps

  1. Force ASPM disabled and wait briefly for the link to settle.
  2. Enable ASPM L1 and wait briefly before issuing the reset.
  3. Issue a controller-level reset.
  4. Confirm the controller reports ready status after reset.

function: scripts/conformance/03_features/reset_test.py::test_reset_controller_reset_with_outstanding_io

Validate controller reset during outstanding IO by resubmitting reads and checking data patterns.

Reference

  1. NVM Express Revision 1.4a, Section 7.3.2.

Steps

  1. Format the namespace so data verification starts from a known zero baseline.
  2. Submit a burst of write commands to keep the queue busy.
  3. Reset the controller while IOs remain active.
  4. Recreate the queues and issue read commands to validate data.
  5. Verify each completion returns the expected pattern or permissible all-zero data.

function: scripts/conformance/03_features/reset_test.py::test_reset_controller_reset_ioworker

Check controller reset resiliency while background IO workers generate traffic.

Reference

  1. NVM Express Revision 1.4a, Section 7.3.2.

Steps

  1. Reset the controller while an IO worker continues running.
  2. Confirm the controller reports ready status after reset.

function: scripts/conformance/03_features/reset_test.py::test_reset_controller_with_existed_adminq

Ensure controller reset can reuse existing admin queue register state without reinitialization.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Record the current admin queue register values.
  2. Reset the controller using the preserved admin queue settings.
  3. Issue thousands of admin commands to validate the reused queue.
  4. Verify the admin queue registers match the saved values after reset.

function: scripts/conformance/03_features/reset_test.py::test_reset_flr_d3hot

Confirm function level reset behaves correctly after the device exits D3hot.

Reference

  1. NVM Express Revision 1.4a, Section 7.3.2.

Steps

  1. Drive the device into D3hot and wait for the power state to stabilize.
  2. Issue a PCIe function level reset.
  3. Confirm the controller reports ready status after reset.

function: scripts/conformance/03_features/reset_test.py::test_reset_flr_aspm

Verify function level reset while the link is held in ASPM L1.

Reference

  1. NVM Express Revision 1.4a, Section 7.3.2.

Steps

  1. Enter ASPM L1 and hold the state before the reset.
  2. Issue a PCIe function level reset.
  3. Confirm the controller reports ready status after reset.

function: scripts/conformance/03_features/reset_test.py::test_reset_flr_with_ioworker

Validate function level reset during IO worker activity to ensure recovery.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Run an IO worker and trigger FLR mid workload.

function: scripts/conformance/03_features/reset_test.py::test_reset_flr_with_outstanding_io

Ensure function level reset during outstanding IO preserves data integrity afterward.

Reference

  1. NVM Express Revision 1.4a, Section 7.3.2.

Steps

  1. Format the namespace so data verification starts from a known zero baseline.
  2. Create IO submission and completion queues for manual operations.
  3. Post a burst of commands so IO remains outstanding.
  4. Trigger FLR while writes are still in flight.
  5. Recreate queues after reset and issue reads to verify data.
  6. Validate that completion data patterns meet expectations.

function: scripts/conformance/03_features/reset_test.py::test_reset_pci_hot_reset_d3hot

Check PCIe hot reset behavior immediately after exiting D3hot.

Reference

  1. NVM Express Revision 1.4a, Section 7.3.2.

Steps

  1. Drive the device into D3hot and wait for the power state to stabilize.
  2. Issue a PCIe hot reset followed by a controller reset.
  3. Confirm the controller reports ready status after reset.

function: scripts/conformance/03_features/reset_test.py::test_reset_pci_hot_reset_aspm

Verify PCIe hot reset while toggling ASPM states before the reset.

Reference

  1. NVM Express Revision 1.4a, Section 7.3.2.

Steps

  1. Force ASPM disabled and wait briefly for the link to settle.
  2. Enable ASPM L1 and wait briefly before issuing the reset.
  3. Issue a PCIe hot reset followed by a controller reset.
  4. Confirm the controller reports ready status after reset.

function: scripts/conformance/03_features/reset_test.py::test_reset_pci_hot_reset_with_ioworker

Ensure PCIe hot reset works while IO workers issue commands.

Reference

  1. NVM Express Revision 1.4a, Section 7.3.2.

Steps

  1. Run an IO worker and trigger a PCIe hot reset mid workload.
  2. Confirm the controller reports ready status after reset.

function: scripts/conformance/03_features/reset_test.py::test_reset_pci_hot_reset_with_outstanding_io

Validate PCIe hot reset with outstanding IO by checking post-reset reads.

Reference

  1. NVM Express Revision 1.4a, Section 7.3.2.

Steps

  1. Format the namespace so data verification starts from a known zero baseline.
  2. Create IO submission and completion queues for manual operations.
  3. Issue a burst of write commands to create outstanding IO.
  4. Initiate the selected reset while IO remains pending.
  5. Recreate queues after reset and issue reads to verify data.
  6. Validate that completion data patterns meet expectations.

function: scripts/conformance/03_features/reset_test.py::test_reset_subsystem_reset_aspm

Verify subsystem reset while toggling ASPM L1 state.

Reference

  1. NVM Express Revision 1.4a, Section 7.3.4.

Steps

  1. Force ASPM disabled and wait briefly for the link to settle.
  2. Enable ASPM L1 and wait briefly before issuing the reset.
  3. Verify the drive reports ready status after reset.

function: scripts/conformance/03_features/reset_test.py::test_reset_subsystem_reset_with_ioworker

Check subsystem reset resilience while IO workers run.

Reference

  1. NVM Express Revision 1.4a, Section 7.3.4.

Steps

  1. Run an IO worker and trigger the subsystem reset mid workload.
  2. Verify the drive reports ready status after reset.

function: scripts/conformance/03_features/reset_test.py::test_reset_subsystem_reset_with_outstanding_io

Ensure subsystem reset with outstanding IO maintains acceptable data patterns afterward.

Reference

  1. NVM Express Revision 1.4a, Section 7.3.4.

Steps

  1. Format the namespace so data verification starts from a known zero baseline.
  2. Create IO submission and completion queues for manual operations.
  3. Issue a burst of write commands to create outstanding IO.
  4. Initiate the selected reset while IO remains pending.
  5. Recreate queues after reset and issue reads to verify data.
  6. Validate that completion data patterns meet expectations.

function: scripts/conformance/03_features/reset_test.py::test_reset_timing

Measure NVMe initialization timing from CC.EN enable through first admin and IO completions.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Define a custom NVMe initialization routine.
  2. Disable CC.EN and wait for CSTS.RDY to clear.
  3. Program the admin queue registers.
  4. Program the CC register fields.
  5. Enable CC.EN and capture the timestamp.
  6. Instantiate the controller with the custom init function.
  7. Wait for CSTS.RDY to assert.
  8. Issue the first Identify command and capture latency.
  9. Initialize each namespace and create IO queues.
  10. Issue the first IO read and record its latency.
  11. Release allocated queues and namespace handles.

file: scripts/conformance/03_features/write_protect_test

function: scripts/conformance/03_features/write_protect_test.py::test_write_protect

Validate that namespace write protection is supported before executing the feature test.

Reference

  1. NVM Express Revision 1.4c.

Steps

  1. Check the Identify Namespace data to confirm write protection support.

folder: scripts/conformance/04_registers

file: scripts/conformance/04_registers/controller_test

function: scripts/conformance/04_registers/controller_test.py::test_controller_cap

Validate CAP/CC expose correct page size limits and NVM command set support.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Retrieve Controller Capabilities register to parse limits
  2. Derive MPSMIN and MPSMAX to compute supported page sizes
  3. Verify MPSMAX is not smaller than MPSMIN
  4. Read Controller Configuration Memory Page Size
  5. Ensure configured MPS is within advertised range
  6. Confirm CAP indicates NVM command set support
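
The page-size math behind these checks: CAP.MPSMIN (bits 51:48) and CAP.MPSMAX (bits 55:52) encode memory page sizes as 2^(12 + n), and CC.MPS (bits 10:07) must fall inside that range. A minimal sketch:

```python
def page_size(mps_field: int) -> int:
    """Memory page size implied by an MPS field value: 2^(12 + n) bytes."""
    return 1 << (12 + mps_field)

def cc_mps_in_range(cap: int, cc: int) -> bool:
    """Check CC.MPS against the CAP.MPSMIN..CAP.MPSMAX window."""
    mpsmin = (cap >> 48) & 0xF
    mpsmax = (cap >> 52) & 0xF
    mps = (cc >> 7) & 0xF
    return mpsmin <= mps <= mpsmax

print(page_size(0))   # 4096: MPS=0 is the 4 KiB minimum
```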

function: scripts/conformance/04_registers/controller_test.py::test_controller_crto

Verify CRTO/CRMS timeouts are reported correctly and registers remain read-only.

Reference

  1. NVM Express Revision 2.0c.

Steps

  1. Skip if NVMe spec version is below 2.0
  2. Capture CRTO-related fields from CAP and CRTO
  3. Attempt to modify read-only CRTO register and verify no change
  4. Validate CRWMT value and emit warning when too small
  5. Check the relationship between CRIMT and CRWMT
  6. Validate CRIMT range when supported
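
The timeout relationship checked here follows the NVMe 2.0 CRTO layout: CRWMT in bits 15:00 and CRIMT in bits 31:16, both in 500 ms units. A minimal decode sketch:

```python
def crto_timeouts_ms(crto: int) -> tuple[int, int]:
    """Decode CRTO into (CRWMT, CRIMT) in milliseconds."""
    crwmt = (crto & 0xFFFF) * 500          # ready-with-media timeout
    crimt = ((crto >> 16) & 0xFFFF) * 500  # ready-independent-of-media timeout
    return crwmt, crimt

print(crto_timeouts_ms(0x00040002))   # (1000, 2000)
```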

function: scripts/conformance/04_registers/controller_test.py::test_controller_version

Confirm the Version register reports a supported NVMe major revision.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read Version register value
  2. Validate supported major version field
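
The major-version check above decodes the VS register fields: major in bits 31:16, minor in bits 15:08, tertiary in bits 07:00, so revision 1.4.0 reads back as 0x00010400. A minimal sketch:

```python
def decode_version(vs: int) -> tuple[int, int, int]:
    """Split the VS register into (MJR, MNR, TER)."""
    return (vs >> 16) & 0xFFFF, (vs >> 8) & 0xFF, vs & 0xFF

print(decode_version(0x00010400))   # (1, 4, 0)
print(decode_version(0x00020000))   # (2, 0, 0)
```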

function: scripts/conformance/04_registers/controller_test.py::test_controller_cc

Inspect CC to ensure CQ/SQ entry sizes match NVMe requirements.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read Controller Configuration
  2. Verify IOCQES/IOSQES encode 16B CQ and 64B SQ entries

function: scripts/conformance/04_registers/controller_test.py::test_controller_register_reserved

Ensure reserved controller registers remain zero after write attempts.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Confirm reserved field starts at zero
  2. Attempt to write to reserved field
  3. Re-read reserved field to ensure it stayed zero

function: scripts/conformance/04_registers/controller_test.py::test_controller_csts

Read CSTS to confirm CSTS.RDY indicates controller is ready.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read Controller Status register
  2. Check CSTS.RDY is set to 1

function: scripts/conformance/04_registers/controller_test.py::test_controller_cap_to

Toggle CC.EN and measure CSTS.RDY transitions against CAP.TO limits.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read Controller Capabilities Timeout value
  2. Disable CC.EN to start timing the RDY deassertion
  3. Wait for CSTS.RDY to clear and verify timing
  4. Re-enable CC.EN to restart the controller
  5. Wait for CSTS.RDY to assert and verify timing
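
The timing bound used in this test comes from CAP.TO (bits 31:24), the worst-case CSTS.RDY transition time in 500 ms units. A minimal decode sketch:

```python
def ready_timeout_s(cap: int) -> float:
    """Worst-case CSTS.RDY transition time from CAP.TO, in seconds."""
    return ((cap >> 24) & 0xFF) * 0.5

print(ready_timeout_s(0x1E << 24))   # 15.0: TO=30 means 15 seconds
```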

function: scripts/conformance/04_registers/controller_test.py::test_controller_cap_mqes

Attempt to create IO queues larger than MQES to confirm enforcement of the advertised limit.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read Maximum Queue Entries Supported
  2. Verify minimum queue depth support and skip if maximum already reached
  3. Expect IO CQ creation to fail when queue size exceeds MQES
  4. Expect IO SQ creation to fail when queue size exceeds MQES
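Since CAP.MQES is zero-based, the largest legal IO queue holds MQES + 1 entries; the test drives queue creation one entry past that. A sketch of the limit calculation (hypothetical helper):

```python
def cap_max_queue_entries(cap: int) -> int:
    """CAP.MQES (bits 15:0) is zero-based: largest usable IO queue size."""
    return (cap & 0xFFFF) + 1

# MQES = 0x03FF advertises queues of up to 1024 entries; a queue of
# 1025 entries must be rejected with Invalid Queue Size
assert cap_max_queue_entries(0x03FF) == 1024
```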

function: scripts/conformance/04_registers/controller_test.py::test_controller_ams

Confirm CAP/CC reflect supported and selected arbitration mechanisms.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read Arbitration Mechanism Supported
  2. Read Arbitration Mechanism Selected

function: scripts/conformance/04_registers/controller_test.py::test_controller_intms_and_intmc

Validate INTMS/INTMC mask registers remain unchanged after writes.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read Interrupt Mask Set and Interrupt Mask Clear
  2. Write zeros to INTMS to test immutability
  3. Write zeros to INTMC to test immutability
  4. Ensure INTMS and INTMC retained their original values

function: scripts/conformance/04_registers/controller_test.py::test_controller_cc_iocqes

Compare CC.IOCQES with Identify CQES limits to validate completion entry size.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read I/O Completion Queue Entry Size from CC
  2. Read Identify Completion Queue Entry Size limits
  3. Validate the Identify-reported Completion Queue Entry Size bounds
  4. Ensure CC.IOCQES falls within Identify CQES limits
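The bounds check in step 4 can be sketched as follows (hypothetical helper): Identify Controller byte 513 (CQES) packs the required minimum entry size in bits 3:0 and the maximum in bits 7:4, both log2 encoded like CC.IOCQES. The same shape applies to SQES (byte 512) against CC.IOSQES.

```python
def iocqes_within_identify(cc_iocqes: int, id_cqes: int) -> bool:
    """Check CC.IOCQES against Identify CQES min/max (log2 encoded)."""
    required = id_cqes & 0xF         # smallest entry size the host may select
    maximum = (id_cqes >> 4) & 0xF   # largest entry size supported
    return required <= cc_iocqes <= maximum

# CQES = 0x44 permits only the 16-byte (2**4) completion entry size
assert iocqes_within_identify(4, 0x44)
assert not iocqes_within_identify(5, 0x44)
```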

function: scripts/conformance/04_registers/controller_test.py::test_controller_cc_iosqes

Compare CC.IOSQES with Identify SQES limits to validate submission entry size.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read I/O Submission Queue Entry Size from CC
  2. Read Identify Submission Queue Entry Size limits
  3. Validate the Identify-reported Submission Queue Entry Size bounds
  4. Ensure CC.IOSQES falls within Identify SQES limits

function: scripts/conformance/04_registers/controller_test.py::test_controller_cc_en

Toggle CC.EN while running simple I/O to verify shutdown and reinitialization succeed.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Capture current CC.EN state before exercising transitions
  2. Ensure controller is fully ready
  3. Issue a simple read command through a small IO queue pair
  4. Confirm read completed successfully
  5. Disable cc.en to shut down controller
  6. Wait for CSTS.RDY to deassert
  7. Reinitialize admin queue while controller is disabled
  8. Re-enable cc.en using original configuration
  9. Wait for CSTS.RDY to assert and confirm CC restored

function: scripts/conformance/04_registers/controller_test.py::test_controller_cc_css

Read CSS within CC to ensure the controller reports the expected I/O command set.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read Controller Configuration I/O Command Set Selected

function: scripts/conformance/04_registers/controller_test.py::test_controller_mdts

Exercise I/O sizes around Identify MDTS to confirm errors occur when limits are exceeded.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Skip the test if the controller already advertises a 2MB MDTS
  2. Get Memory Page Size Minimum
  3. Get Maximum Data Transfer Size
  4. Create Submission/Completion Queue
  5. Prepare long data buffer chain for the write command
  6. Issue a write command with maximum valid number of pages
  7. Expect the write command to complete successfully
  8. Issue a write command exceeding MDTS limit
  9. Expect the write command to complete with an error status
  10. Issue a read command exceeding MDTS limit
  11. Expect the read command to complete with an error status
  12. Issue a read command within MDTS limit
  13. Expect the read command to complete successfully
  14. Delete Submission/Completion Queue
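The MDTS boundary driven in steps 6 through 13 follows from two Identify/CAP fields; a sketch of the size calculation (hypothetical helper): MDTS is a power of two in units of the minimum memory page size, which is itself 2^(12 + CAP.MPSMIN), and a value of zero means no limit is reported.

```python
def mdts_bytes(mdts: int, mpsmin: int):
    """Maximum data transfer size in bytes; MDTS = 0 means no limit."""
    if mdts == 0:
        return None
    return (1 << mdts) * (1 << (12 + mpsmin))

# MDTS = 9 with a 4K minimum page size allows 2MB transfers; one page
# more than this must complete with an error status
assert mdts_bytes(9, 0) == 2 * 1024 * 1024
```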

function: scripts/conformance/04_registers/controller_test.py::test_controller_doorbell_invalid

Write an invalid doorbell value to confirm the controller raises an asynchronous event.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Issue one AER command to monitor asynchronous errors
  2. Retrieve supported queue counts from Number of Queues feature
  3. Access invalid doorbell register and expect async error

function: scripts/conformance/04_registers/controller_test.py::test_controller_cc_shn

Send a shutdown notification and observe CC/CSTS fields to ensure shutdown flow follows specification.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read Controller Configuration before triggering shutdown
  2. Observe current SHN bits and CSTS state
  3. Send shutdown notify to DUT and reset afterward

function: scripts/conformance/04_registers/controller_test.py::test_controller_shn_before_commands

Issue admin and I/O commands before shutdown notify to confirm controller recovers cleanly.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Send read IO commands and admin commands prior to shutdown
  2. Send shutdown notify and measure response time
  3. Reset controller without power cycle
  4. Send read IO commands and admin commands after reset

function: scripts/conformance/04_registers/controller_test.py::test_controller_cc_memory_page_size_8k

Reinitialize the controller with an 8K memory page size and verify 4K I/O still succeeds.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Define a helper that reinitializes the controller with an 8K page size
  2. Disable cc.en and wait for CSTS.RDY to deassert
  3. Initialize admin queue registers
  4. Program CC with the desired 8K memory page size
  5. Enable cc.en using the 8K configuration
  6. Wait until CSTS.RDY indicates the controller is ready
  7. Identify the controller and enumerate namespaces
  8. Configure the number of queues via set/get features
  9. Check whether the controller supports an 8K memory page size
  10. Prepare 4K read/write buffer
  11. Send write and read command
  12. Wait for commands to complete and verify data integrity
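The 8K reconfiguration in the helper reduces to encoding CC.MPS and checking it against the CAP bounds; a sketch (hypothetical helpers): the host page size is 2^(12 + CC.MPS), bounded by CAP.MPSMIN (bits 51:48) and CAP.MPSMAX (bits 55:52).

```python
def cc_mps_field(page_size: int) -> int:
    """Encode a host memory page size into CC.MPS (2 ** (12 + MPS) bytes)."""
    mps = page_size.bit_length() - 13   # 4096 -> 0, 8192 -> 1
    assert 1 << (12 + mps) == page_size, "page size must be a power of two >= 4K"
    return mps

def mps_supported(mps: int, cap: int) -> bool:
    """CAP.MPSMIN (bits 51:48) and CAP.MPSMAX (bits 55:52) bound CC.MPS."""
    mpsmin = (cap >> 48) & 0xF
    mpsmax = (cap >> 52) & 0xF
    return mpsmin <= mps <= mpsmax

# An 8K page encodes as MPS = 1; a controller with MPSMAX = 1 accepts it
assert cc_mps_field(8192) == 1
assert mps_supported(1, 1 << 52)
```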

function: scripts/conformance/04_registers/controller_test.py::test_controller_asq

Configure the controller with different ASQ offsets to ensure admin queue processing is unaffected.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Define a helper that reinitializes the controller with a shifted ASQ
  2. Disable cc.en and wait for CSTS.RDY to deassert
  3. Program admin queue registers with the requested ASQ offset
  4. Configure CC before enabling the controller
  5. Enable cc.en to bring the controller online
  6. Wait until CSTS.RDY indicates readiness
  7. Identify the controller and its namespaces
  8. Reset the controller with the admin SQ placed at different locations
  9. Issue enough admin commands to fill the admin SQ

file: scripts/conformance/04_registers/pcie_test

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_identifiers

Check the PCI Identifiers fields by reading configuration space registers.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read Identifiers register from configuration space.
  2. Read Class Code register to capture the device class tuple.
  3. Read Subsystem Identifiers to confirm board specific IDs.

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_command

Verify the PCI Command register contents by issuing configuration reads.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read the Command register snapshot for diagnostics.
  2. Confirm the Memory Space Enable bit is set before exercising BAR access.
  3. Confirm reserved bits remain clear to detect malformed configuration.

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_revision_id

Capture the PCI Revision ID so later tests know which silicon is present.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read Revision ID register for logging.

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_class_code

Validate the PCI Class Code matches a Non-Volatile memory controller.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read Class Code register to capture device type fields.
  2. Check the class tuple matches the NVMe controller class code.
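The class tuple check can be sketched as a decode of the 24-bit class code (hypothetical helper): an NVMe controller reports base class 01h (mass storage), sub-class 08h (non-volatile memory controller), and programming interface 02h (NVM Express).

```python
def decode_class_code(class_code: int) -> tuple:
    """Split the 24-bit PCI class code into (base, sub, prog_if)."""
    return (class_code >> 16) & 0xFF, (class_code >> 8) & 0xFF, class_code & 0xFF

# The expected NVMe controller class code
assert decode_class_code(0x010802) == (0x01, 0x08, 0x02)
```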

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_bist

Read the PCI Built-In Self Test register to make sure no faults are latched.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read Built-In Self Test register value from config space.
  2. Verify completion code fields are zero or indicate test in progress.

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_pmcr

Read PCI Power Management Capabilities to understand supported states.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Locate the PCI Power Management capability offset.
  2. Dump the capability register contents for version and AUX data.

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_pmcsr

Read PCI Power Management Control and Status to track power state.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Locate the PMCSR offset derived from the capability pointer.
  2. Capture PMCSR fields such as current state and software reset bits.
  3. Decode the optional Power Data reporting if the scale indicates support.

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_pcie_cap

Interrogate PCIe capability, device, and status registers over config space.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Locate the PCI Express capability structure.
  2. Read the PCIe capability header for version, type, and slot info.
  3. Read Device Capabilities to learn payload size support.
  4. Read Device Control settings for payload, read request, and ordering.
  5. Read and clear Device Status bits to monitor correctable errors.
  6. Clear the correctable error detected bit to confirm write ability.

Read PCIe link capability, control, and status registers for link metrics.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Gather link capability settings for speed, width, and ASPM support.
  2. Read link control settings before making any ASPM modifications.
  3. Read link status bits for negotiated speed, width, and training state.

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_format

Issue a Format NVM command and confirm the controller completes it.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Submit a format command and wait for completion.

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_write_bandwidth

Measure write bandwidth by running a short sequential workload.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Configure IO worker for sequential write workload.
  2. Calculate throughput based on IO size and operation count.

Toggle ASPM policies and issue repeated reads to observe behavior.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read current Link Control settings before modification.
  2. Program the ASPM control bits to the requested setting.
  3. Create IO submission and completion queues for read traffic.
  4. Loop on 100 blocking reads to stress link power transitions.
  5. Delay slightly between IOs to allow ASPM entry.
  6. Restore ASPM configuration back to L0.

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_mps_256

Confirm Max Payload Size is at least 256 bytes via the PCIe capability.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read the Device Control register to check negotiated MPS.
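The negotiated payload size in Device Control decodes as 128 bytes shifted left by the MPS field (bits 7:5); a sketch of the check (hypothetical helper):

```python
def max_payload_bytes(devctl: int) -> int:
    """Decode PCIe Device Control MPS (bits 7:5): 128 << field bytes."""
    return 128 << ((devctl >> 5) & 0x7)

# Field value 1 encodes a 256-byte Max Payload Size; the test passes
# when the decoded value is at least 256
assert max_payload_bytes(1 << 5) == 256
```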

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_read_write

Perform a write then read on the same LBA to validate data path.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Issue a write to the target namespace and wait for completion.
  2. Issue a read to confirm payload access still works.

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_max_read_request_size

Sweep the Max Read Request Size register and measure IO bandwidth.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Capture the current Device Control register settings.
  2. Program a new Max Read Request Size value.
  3. Re-read Device Control to confirm the update was accepted.
  4. Run the bandwidth workload to gauge the effect.
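The Max Read Request Size sweep in steps 1 through 3 manipulates Device Control bits 14:12, encoded the same way as MPS (128 << field); a sketch of decode and reprogramming (hypothetical helpers):

```python
def max_read_request_bytes(devctl: int) -> int:
    """Decode PCIe Device Control MRRS (bits 14:12): 128 << field bytes."""
    return 128 << ((devctl >> 12) & 0x7)

def with_mrrs(devctl: int, size: int) -> int:
    """Return a Device Control value reprogrammed to the requested MRRS."""
    field = size.bit_length() - 8    # 128 -> 0, 256 -> 1, ... 4096 -> 5
    assert 128 << field == size and field <= 5, "MRRS must be 128B..4096B"
    return (devctl & ~(0x7 << 12)) | (field << 12)

# Re-reading after the write should reflect the new request size
assert max_read_request_bytes(with_mrrs(0, 512)) == 512
```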

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_read_write_post

Repeat the write/read workload after changing payload settings.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Write test data to confirm the namespace is available.
  2. Read test data to ensure integrity after configuration changes.

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_reset

Reset the PCIe link and controller to restore payload defaults.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Reset the PCIe hierarchy before touching the controller.
  2. Reset the controller to reinitialize admin state.

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_read_write_after_reset

Verify read and write commands succeed after the PCIe reset.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Run a write to ensure queues operate post-reset.
  2. Run a read to confirm data-path functionality remains intact.

file: scripts/conformance/04_registers/power_test

function: scripts/conformance/04_registers/power_test.py::test_power_pcie_pmcsr_d3hot

Verify PMCSR-directed D3hot entry and exit while checking admin command behavior across transitions.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read the PCI Power Management capability offset and control register.
  2. Force the controller into D3hot via the PMCSR.
  3. Return the controller to D0 and confirm admin command success.
  4. Re-enter D3hot before sending an admin command.
  5. Expect admin command timeouts in D3hot to confirm loss of service.
  6. Restore the controller to D0 and verify it responds again.

function: scripts/conformance/04_registers/power_test.py::test_power_pcie_capability_d3hot

Validate D3hot transitions through PCIe capabilities while ensuring IO workloads continue to succeed in D0.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Read the PCIe power management registers for the initial state.
  2. Confirm the controller starts in D0.
  3. Drive the controller into D3hot and hold the state briefly.
  4. Bring the controller back to D0 from D3hot.
  5. Run an IO workload to verify D0 services requests.
  6. Explicitly request D0 to ensure the state is stable.
  7. Run another IO workload to confirm continued service.
  8. Verify the power state remains D0 after the workload.

function: scripts/conformance/04_registers/power_test.py::test_power_pcie_aspm_L1

Ensure ASPM transitions between L1 and L0 still allow admin commands to complete successfully.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Drive the link into ASPM L1 state.
  2. Send admin commands while in ASPM L1 and allow the state to settle.
  3. Return to ASPM L0 and log controller information.

function: scripts/conformance/04_registers/power_test.py::test_power_pcie_aspm_l1_and_d3hot

Validate combined ASPM and D3hot transitions by toggling both mechanisms and running IO workloads.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Enable ASPM L1 to reduce link power.
  2. Transition to D3hot while ASPM is active and pause briefly.
  3. Bring the controller back to D0 and reset ASPM to L0.
  4. Confirm ASPM reports L0 and run IO to verify functionality.
  5. Enter D3hot again to validate repeated transitions.
  6. Request ASPM L1 while in D3hot and wait briefly.
  7. Return ASPM to L0 to prepare for D0 operation.
  8. Bring the controller back to D0 after ASPM adjustments.
  9. Run IO while in D0 and ASPM L0 to confirm serviceability.
  10. Verify ASPM remains in L0 after the IO workload.
  11. Transition back to D3hot for an additional cycle and pause briefly.
  12. Exit D3hot to D0 to finish the sequence.
  13. Run IO in D0 one final time to ensure stability.

function: scripts/conformance/04_registers/power_test.py::test_power_pcie_ioworker_aspm

Check ASPM toggling during mixed IO workloads to ensure data consistency.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Start a mixed read and write IO workload.
  2. Toggle ASPM between L1 and L0 during the workload.
  3. Reset the controller to clear any lingering ASPM effects.

folder: scripts/conformance/05_controller

file: scripts/conformance/05_controller/arbitration_test

function: scripts/conformance/05_controller/arbitration_test.py::test_arbitration_weighted_round_robin

Exercise urgent-class Weighted Round Robin by flooding queues with flush commands and verifying urgent completions arrive first.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Confirm controller supports Weighted Round Robin arbitration
  2. Reformat the namespace to remove prior workload artifacts
  3. Program HPW:MPW:LPW ratios of 8:4:2 and an eight-command burst
  4. Measure admin command latency while the device is idle
  5. Create a completion queue and two SQs per priority class
  6. Instantiate eight IO submission queues that cover all priorities
  7. Fill each submission queue with 50 flush commands
  8. Ring submission queue doorbells starting from the lowest priority
  9. Measure admin command latency while IO queues are busy
  10. Sample completion entries to capture submission queue IDs
  11. Confirm urgent queues complete their work before the others
  12. Delete the IO queues to restore the controller state
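The weight programming in step 3 maps onto the Arbitration feature (Feature ID 01h); a sketch of the dword construction (hypothetical helper): HPW, MPW, and LPW are zero-based bytes in bits 31:24, 23:16, and 15:8, and the arbitration burst is log2 encoded in bits 2:0.

```python
def arbitration_dword(hpw: int, mpw: int, lpw: int, burst: int) -> int:
    """Build the Arbitration feature value from 1-based weights and a burst."""
    ab = burst.bit_length() - 1
    assert 1 << ab == burst, "arbitration burst must be a power of two"
    return ((hpw - 1) << 24) | ((mpw - 1) << 16) | ((lpw - 1) << 8) | ab

# The 8:4:2 weights with an eight-command burst used in the steps above
assert arbitration_dword(8, 4, 2, 8) == 0x07030103
```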

function: scripts/conformance/05_controller/arbitration_test.py::test_arbitration_weighted_round_robin_ioworker

Validate Weighted Round Robin proportional control by running IO workers at different priorities and comparing their throughput share.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Confirm controller supports Weighted Round Robin arbitration
  2. Reformat the namespace to ensure deterministic IO ranges
  3. Pre-fill the working region with writes to remove read-modify penalties
  4. Program HPW:MPW:LPW ratios of 8:4:2 and an eight-command burst
  5. Configure one IO worker per priority level with identical transfer parameters
  6. Start the IO workers and store their running handles
  7. Close each worker to collect IO statistics and throughput
  8. Assert that higher priority queues consume more IO bandwidth

function: scripts/conformance/05_controller/arbitration_test.py::test_arbitration_default_round_robin

Verify default round robin arbitration by flooding the controller with flush commands and checking that completions cover all queues evenly.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Verify the controller offers a CQ depth large enough for the test
  2. Ensure there are enough IO queue pairs to exercise round robin scheduling
  3. Set the arbitration burst size to two commands
  4. Create a completion queue and eight IO submission queues
  5. Populate each submission queue with 50 flush commands
  6. Ring the submission queue doorbells in numerical order
  7. Measure admin command latency while IO queues are busy
  8. Inspect completion entries to review submission queue IDs
  9. Assert that round robin arbitration keeps completions evenly distributed
  10. Delete created queues to restore the controller configuration

file: scripts/conformance/05_controller/interrupt_test

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_aggregation_time_threshold

Verify interrupt aggregation time and threshold using Get Features commands.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Retrieve the default interrupt aggregation time and threshold settings

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_qpair_msix_mask

Validate MSIx mask behavior by toggling the mask bits while issuing IO on a queue pair.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Initialize the queue pair and clear any pending MSIx interrupt
  2. Issue a read command to trigger a notification
  3. Confirm the MSIx interrupt asserts for the pending IO
  4. Clear the MSIx pending bit
  5. Issue another read command to confirm the interrupt flow
  6. Confirm the MSIx interrupt asserts for the pending IO
  7. Clear the pending interrupt and set the MSIx mask bits
  8. Issue a read command while the interrupt remains masked
  9. Confirm the MSIx interrupt stays deasserted while masked
  10. Unmask the MSIx interrupt to release the pending completion
  11. Verify the delayed interrupt is finally reported

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_multiple_qpair_msix

Confirm MSIx interrupts are delivered per queue pair by issuing IOs on two vectors.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Create two queue pairs with interrupts enabled on separate vectors
  2. Issue IO on the first queue pair
  3. Confirm the first queue pair asserts its interrupt
  4. Confirm the second queue pair does not receive an interrupt
  5. Drain the completion before deleting the queues
  6. Delete both queue pairs to clean up resources

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_qpair_msix_coalescing

Measure MSIx interrupt coalescing by comparing latency before and after enabling aggregation time.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Clear existing interrupts before measuring latency
  2. Enable interrupt vector coalescing to allow aggregation
  3. Measure baseline interrupt latency for a single IO
  4. Program aggregation time to 20ms and threshold to 6 completions
  5. Issue two IOs to trigger coalescing behavior
  6. Measure the coalesced interrupt latency for multiple IOs
  7. Log a warning if latency does not reflect aggregation
  8. Disable coalescing to restore immediate interrupts
  9. Issue another IO to confirm latency returns to normal
  10. Measure the interrupt latency with coalescing disabled
  11. Warn if the latency remains unexpectedly high
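The 20ms/6-completion configuration in step 4 corresponds to the Interrupt Coalescing feature (Feature ID 08h); a sketch of the encoding (hypothetical helper): the aggregation time sits in bits 15:8 in 100 microsecond increments, and the threshold in bits 7:0 is zero-based.

```python
def coalescing_dword(time_us: int, threshold: int) -> int:
    """Build the Interrupt Coalescing feature value (ID 08h)."""
    ticks = time_us // 100
    assert time_us % 100 == 0 and ticks <= 0xFF and 1 <= threshold <= 0x100
    return (ticks << 8) | (threshold - 1)

# 20 ms aggregation time with a 6-completion threshold, as in step 4
assert coalescing_dword(20_000, 6) == (200 << 8) | 5
```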

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_coalescing

Ensure disabling interrupt coalescing per vector bypasses the controller aggregation settings.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Enable coalescing on interrupt vector 1 as a baseline
  2. Clear pending interrupts before measuring latency
  3. Capture baseline interrupt latency to compare against aggregated runs
  4. Issue preliminary reads to warm up the queue pair if required
  5. Disable coalescing on the interrupt vector under test
  6. Program the controller aggregation time and threshold
  7. Submit IOs that would be aggregated if coalescing were enabled
  8. Measure latency with vector-level coalescing disabled
  9. Confirm latency stays below the aggregation window
  10. Re-enable coalescing for interrupt vector 1
  11. Submit IOs that should now aggregate
  12. Measure latency with coalescing restored
  13. Warn if interrupts still arrive immediately despite coalescing

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_different_coalescing

Validate that queue pairs can independently enable or disable interrupt coalescing.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Instantiate two queue pairs to compare coalescing behaviors
  2. Enable coalescing on both interrupt vectors initially
  3. Measure baseline latency on the first queue pair
  4. Disable coalescing for the second queue pair
  5. Program shared aggregation time and threshold values
  6. Issue IO on qpair1 to exercise aggregated behavior
  7. Measure latency on qpair1 while aggregation is expected
  8. Warn if the first queue pair does not show aggregation delay
  9. Issue IO on qpair2 where coalescing has been disabled
  10. Measure latency on qpair2 and ensure it stays short
  11. Assert qpair2 latency remains below the aggregation window

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_vector_discontiguous

Ensure queue pairs bound to discontiguous interrupt vectors signal on their assigned vector.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Create queue pairs pinned to interrupt vectors 2 and 4
  2. Clear qpair1 interrupt state before issuing IO
  3. Issue IO on qpair1 tied to vector 2
  4. Verify qpair1 raises its own interrupt
  5. Clear qpair2 interrupt state before the next IO
  6. Issue IO on qpair2 tied to vector 4
  7. Verify qpair2 raises its own interrupt
  8. Delete the queue pairs after verification

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_specific_interrupt_vector_coalescing

Demonstrate mixed interrupt vector coalescing states by measuring latency per vector.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Create queue pairs on distinct vectors with coalescing initially disabled
  2. Read different LBAs to avoid LBA lock contention
  3. Drain the expected completions and clear MSIx state
  4. Enable coalescing only on qpair2
  5. Configure aggregation time and threshold values
  6. Measure qpair2 latency to verify aggregation is active
  7. Measure qpair1 latency where coalescing remains disabled
  8. Ensure qpair1 latency stays below the aggregation expectation

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_create_cq_disable

Verify that a Completion Queue with interrupts disabled does not signal MSIx events.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Prepare IO buffers and queue pairs with interrupts enabled and disabled
  2. Clear qpair1 interrupt state before issuing IO
  3. Issue a write on qpair1 to trigger an interrupt
  4. Verify qpair1 reports the interrupt and completion
  5. Clear qpair2 interrupt state even though it is disabled
  6. Issue a read on qpair2 where interrupts are disabled
  7. Confirm qpair2 completes without asserting MSIx
  8. Validate that the transferred data matches expectations
  9. Delete both queue pairs after verification

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_qpair_msix_coalescing_numb

Validate the interrupt aggregation threshold by issuing varying numbers of IOs.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Clear pending interrupts before measuring latency
  2. Disable controller and per-vector coalescing to force immediate interrupts
  3. Issue a few IOs to capture the immediate interrupt latency
  4. Continually issue IO bursts to confirm latency stays low without aggregation
  5. Ensure latency remains below the aggregation threshold when coalescing is disabled
  6. Enable coalescing and configure aggregation time and threshold
  7. Issue bursts below the threshold to observe time-based aggregation
  8. Warn if latency does not show aggregation when threshold is not met
  9. Issue bursts that meet the threshold to confirm aggregation occurs
  10. Assert aggregated latency remains within the expected bounds

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_ioworker_qpair

Observe interrupt behavior while an ioworker exercises a queue pair.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Create a queue pair with interrupts enabled or disabled per parameter
  2. Start an ioworker that continuously issues reads on the queue pair
  3. Repeatedly sample the interrupt status while the worker is running
  4. Delete the queue pair after the worker completes

file: scripts/conformance/05_controller/prp_test

function: scripts/conformance/05_controller/prp_test.py::test_prp_format

Ensure the namespace is formatted before executing subsequent PRP tests.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Issue a namespace format to guarantee a clean state for follow-on tests

function: scripts/conformance/05_controller/prp_test.py::test_prp_write_mdts

Validate PRP write handling across varying data lengths around MDTS limits.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Gather the controller-reported minimum memory page size for later comparison
  2. Retrieve the advertised Maximum Data Transfer Size so we can detect boundary behavior
  3. Create a temporary IO queue pair dedicated to this data length sweep
  4. Build a PRP list chain large enough to describe the requested transfer length
  5. Iterate through the required page count and link additional PRP lists as needed
  6. Submit the write command sized to the target NLBA value
  7. Verify the completion entry reflects either success or MDTS violation as expected
  8. Tear down the temporary IO queue pair created for this test
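The PRP list sizing in steps 4 and 5 follows directly from the page math; a sketch (hypothetical helpers): PRP1 covers the first page, each 8-byte entry covers one more, a list page holds page_size / 8 entries, and when entries remain beyond a full list page its last slot chains to the next list.

```python
import math

def prp_data_pages(xfer_len: int, offset: int = 0, page_size: int = 4096) -> int:
    """Memory pages touched by the transfer; PRP1 covers the first."""
    return math.ceil((offset + xfer_len) / page_size)

def prp_list_pages(npages: int, page_size: int = 4096) -> int:
    """PRP list pages needed; 0 when PRP2 can address the data directly."""
    remaining = npages - 1           # entries beyond PRP1
    if remaining <= 1:
        return 0                     # PRP2 points at the second page itself
    per_page = page_size // 8
    pages = 0
    while remaining > 0:
        pages += 1
        if remaining <= per_page:
            break
        remaining -= per_page - 1    # last slot used as chain pointer
    return pages

# A 2MB transfer with 4K pages spans 512 pages and fits one PRP list page
assert prp_data_pages(2 * 1024 * 1024) == 512
assert prp_list_pages(512) == 1
```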

function: scripts/conformance/05_controller/prp_test.py::test_prp_page_offset

Read data using PRPs with different offsets and confirm the correct LBA data is returned.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Program a known data pattern to the target namespace for later verification
  2. Read the data with varying offsets distributed across two memory pages
  3. Verify each resulting buffer still begins with the known signature

function: scripts/conformance/05_controller/prp_test.py::test_prp_admin_page_offset

Issue Identify commands using PRP buffers aligned at different valid offsets.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Prepare an Identify buffer that starts at the requested valid offset
  2. Issue the Identify command using the offset buffer
  3. Confirm the controller returned valid identify data

function: scripts/conformance/05_controller/prp_test.py::test_prp_admin_page_offset_invalid

Send Identify commands with misaligned PRP offsets and observe controller handling.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Prepare an Identify buffer that intentionally violates dword alignment
  2. Send an Identify command with the invalid offset while tracking completion status
  3. Capture the completion status through a callback for later verification
  4. Check identify data from offset 0 when no error was reported

function: scripts/conformance/05_controller/prp_test.py::test_prp_valid_offset_in_prplist

Issue a read command with valid PRP1 and PRP list offsets to confirm acceptance.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Configure PRP1 with a non-zero offset to emulate partial-page buffers
  2. Build the first PRP list segment with a non-zero offset and constrained size
  3. Allocate a chained PRP list to cover the remaining regions
  4. Fill the PRP list entries before linking to the chained list
  5. Populate the chained PRP list entries that continue the transfer
  6. Issue a read command that consumes both PRP1 and the PRP list
  7. Reap the command and wait for the CQ phase bit flip

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_offset_in_prplist

Fill PRP list entries with invalid offsets and expect a PRP Offset Invalid status.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Configure PRP1 with a non-zero offset to exercise validation logic
  2. Create a PRP list whose base also starts at a non-zero offset
  3. Fill each PRP entry with the same invalid offset to ensure the controller detects it
  4. Submit the command that references the invalid PRP list
  5. Wait for completion and expect the PRP Offset Invalid status

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_buffer_offset

Issue an IO command with a PRP entry that has a non-zero offset and observe controller behavior.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Prepare a write command that intentionally supplies a PRP entry with offset 1
  2. Post the command to the submission queue and capture the resulting status
  3. If no error is reported, read the LBA back to confirm the controller ignored the offset

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_one_qpair

Inject invalid PRP commands through a single queue pair to verify error handling.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Create a dedicated queue pair for the invalid PRP injections
  2. Repeatedly inject commands with invalid PRPs until phase bits wrap
  3. Tear down the temporary queue pair to release resources

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_one_qpair_normal_command

Mix invalid and valid commands on one queue pair to confirm recovery after errors.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Inject an invalid PRP command through the shared queue pair
  2. Issue a normal write followed by a normal read to verify the queue still operates

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_multi_qpair_normal_command

Exercise invalid and valid PRP commands across two queue pairs to check isolation.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Create a second queue pair for concurrent validation
  2. Inject invalid PRP commands into both queue pairs
  3. Issue normal write and read commands on both queue pairs to confirm recovery
  4. Delete the additional queue pair that was allocated for this test

function: scripts/conformance/05_controller/prp_test.py::test_prp_multi_invalid_and_multi_normal_command

Stress one queue pair with many invalid and valid commands to ensure stable completion handling.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Provision an IO queue pair sized to the controller capabilities
  2. Initialize the starting command identifier and completion phase bit
  3. Loop over the requested number of invalid PRP commands
  4. Follow up with the requested number of valid write commands
  5. Tear down the queue pair after completing the stress mix

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_before_ioworker

Submit an invalid PRP command before launching ioworker traffic to verify stability.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Inject an invalid PRP command through the shared queue pair before background IO
  2. Run an IO worker workload after the invalid command is injected

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_multiple

Repeat invalid PRP injections many times to confirm consistent error responses.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Iterate over many cycles to repeatedly exercise the invalid PRP path

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_with_ioworker

Inject invalid PRP commands while an IO worker issues background traffic.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Run an IO worker workload while continuously injecting invalid PRP commands

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_offset_create_sq

Attempt to create IO submission queues with invalid PRP offsets and expect failures.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Provision a valid completion queue that future SQs can target
  2. Create a submission queue using a properly aligned PRP pointer as a baseline
  3. Exercise invalid PRP offsets when creating submission queues and expect failures
  4. Attempt to create a queue with PRP offset 2048 and expect PRP Offset Invalid
  5. Attempt the same sequence with a slightly different misalignment
  6. Cover the extreme case where the pointer is almost at the end of the page
  7. Try another sub-page offset to ensure all cases are rejected

function: scripts/conformance/05_controller/prp_test.py::test_prp_page_offset_invalid

Read using buffers that have invalid offsets and expect either errors or ignored offsets.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Program the namespace with a recognizable pattern at the target LBA
  2. Prepare a read buffer whose offset varies across the invalid values
  3. Capture the completion status through a callback to detect PRP offset errors
  4. Check whether we receive the expected error or the data with ignored offset

function: scripts/conformance/05_controller/prp_test.py::test_prp_identify_prp2

Issue Identify commands with PRP2 that is not contiguous with PRP1 and verify data stitching.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Issue a baseline Identify command with a contiguous buffer
  2. Issue another Identify command that spans PRP1 and PRP2
  3. Check data returned by the two Identify commands to ensure it aligns correctly
  4. Dump the last commands for debug context if verification fails later
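The stitching being verified can be expressed as simple arithmetic: an Identify transfer is exactly 4 KiB, so a non-zero PRP1 offset forces part of the data through PRP2, which may point at a completely separate page. A minimal sketch, assuming the usual 4 KiB page:

```python
PAGE = 4096

def identify_prp_split(prp1_offset, xfer=4096, page=PAGE):
    # With a non-zero PRP1 offset the first page carries only
    # page - offset bytes; PRP2 must supply the remainder from a
    # second, not necessarily contiguous, page.
    part1 = page - prp1_offset
    return part1, xfer - part1
```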

file: scripts/conformance/05_controller/sgl_test

function: scripts/conformance/05_controller/sgl_test.py::test_sgl_format

Formats the namespace to confirm the controller accepts SGL-capable format commands.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Issue a namespace format with a 4 KiB data size

function: scripts/conformance/05_controller/sgl_test.py::test_sgl_buffer

Exercises buffer-backed SGL reads and writes while toggling offsets and descriptor usage.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Ensure the verification helper is available before running IO checks
  2. Write baseline data then read it back to confirm normal PRP operations
  3. Disable SGL usage to validate PRP fallback
  4. Re-enable SGL and ensure basic reads still succeed
  5. Read within a single sector window by adjusting the buffer offset and size
  6. Consume two sectors using a larger offset span
  7. Reset offsets and capture the buffer contents for debugging

function: scripts/conformance/05_controller/sgl_test.py::test_sgl_ioworker

Runs IO workers with varying SGL usage ratios to ensure both PRP and SGL paths succeed.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Reformat the namespace to place metadata in a known state
  2. Run an IO worker that uses only PRP transfers
  3. Run an IO worker that uses SGL transfers exclusively
  4. Mix PRP and SGL transactions in a longer run

function: scripts/conformance/05_controller/sgl_test.py::test_sgl_send_cmd_and_waitdone

Submits manual SGL read commands and validates completions when mixing PRP and SGL descriptors.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Establish IO submission/completion queues for explicit command posting
  2. Define completion callbacks that check phase tags and status fields
  3. Issue short SGL reads to verify normal completions
  4. Send additional commands to exercise invalid CID handling and PRP fallback
  5. Confirm the queue pointers advanced as expected
  6. Tear down the test submission and completion queues
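The manually posted commands above carry 16-byte SGL descriptors. A sketch of the wire layout, using the type codes defined in NVMe 1.4a (the packing helper itself is illustrative, not part of PyNVMe3):

```python
import struct

# SGL descriptor Type codes (byte 15, bits 7:4) per NVMe 1.4a
SGL_DATA_BLOCK, SGL_BIT_BUCKET, SGL_SEGMENT, SGL_LAST_SEGMENT = 0x0, 0x1, 0x2, 0x3

def sgl_descriptor(addr, length, desc_type, subtype=0):
    """Pack one 16-byte SGL descriptor: 8-byte Address, 4-byte Length,
    3 reserved bytes, then Type (high nibble) | Sub Type (low nibble)."""
    return struct.pack("<QI3xB", addr, length, (desc_type << 4) | subtype)
```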

function: scripts/conformance/05_controller/sgl_test.py::test_sgl_send_cmd_size_offset

Validates SGL descriptors honor explicit sizes and offsets across varying read lengths.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Create IO queues for directly issued read commands
  2. Define a callback that ensures completions return success
  3. Submit a default SGL read using a descriptor sized to one sector
  4. Move the SGL window by one sector and read again
  5. Expand the descriptor to cover two logical blocks starting mid-buffer
  6. Ring the submission queue doorbell and wait for completions
  7. Release the temporary queues

function: scripts/conformance/05_controller/sgl_test.py::test_sgl_offset_in_segment

Builds nested SGL segments to confirm offset handling inside a last segment descriptor.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Allocate the top-level last segment descriptor used for chaining entries
  2. Configure the first data block so it trims to a single sector
  3. Configure a second data block that contributes zero bytes
  4. Keep a final data block with default offsets to complete the segment
  5. Populate the segment entries with the prepared data blocks

function: scripts/conformance/05_controller/sgl_test.py::test_sgl_send_cmd_and_waitdone_read_segment_example

Builds a multi-level SGL segment for read and write IO and verifies each section of the transferred data.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Prepare the namespace with known data before issuing segmented reads
  2. Write random data to LBA 100-125 for later comparisons
  3. Set up dedicated IO queues for manually posted read/write commands
  4. Enforce successful completions for each segmented transfer
  5. Build nested segments that mix data, bit-bucket, and last segment entries
  6. Submit the segmented read and wait for completion
  7. Check segmented buffers and the zero-filled hole
  8. Reuse the same segment topology for a write command
  9. Remove the temporary IO queues

function: scripts/conformance/05_controller/sgl_test.py::test_sgl_send_cmd_and_waitdone_read_segment_error

Attempts a recursive SGL segment submission to validate controller error handling.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Create IO queues that will host the invalid segmented command
  2. Define a callback that expects an error status via the completion queue
  3. Build a malformed segment list that references itself
  4. Issue the read expected to fail and wait for completion
  5. Clean up the queues after capturing the error

function: scripts/conformance/05_controller/sgl_test.py::test_sgl_invalid_phys_addr

Submits descriptors with bogus physical addresses to confirm hardware detects the faults.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Open IO queues to host commands that point to invalid addresses
  2. Capture completion statuses for the invalid address tests
  3. Submit reads using SGL and PRP descriptors with fake physical addresses
  4. Ring the queue doorbells and drain all completions
  5. Delete the queues to finish the test

function: scripts/conformance/05_controller/sgl_test.py::test_sgl_invalid_address_and_length

Combines invalid length and address values to ensure the controller rejects malformed SGLs.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Initialize IO queues for submitting intentionally invalid descriptors
  2. Build and submit a descriptor with the requested invalid tuple
  3. Wait for the command log to capture the failing write
  4. Dump the merged command log entries for inspection
  5. Release the queues before exiting the test

function: scripts/conformance/05_controller/sgl_test.py::test_sgl_invalid_type

Sweeps through reserved SGL descriptor types to verify proper error reporting.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Allocate queues used to host writes with unsupported SGL descriptor types
  2. Show how the controller reported the invalid type request
  3. Destroy the queues after the invalid type sweep

function: scripts/conformance/05_controller/sgl_test.py::test_sgl_last_segment_unaligned_size

Verifies that Last Segment descriptors enforce 16-byte alignment by varying lengths.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Build a last segment descriptor that intentionally violates the size requirement
  2. Configure the first data block for a one-sector transfer
  3. Configure the second data block to contribute zero length
  4. Keep a standard data block as the final entry
  5. Load the segment vector and log it for debugging
  6. Create queues and post a write using the malformed segment
  7. Dump the command log for the failing transfer
  8. Delete the queues at the end of the iteration
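The alignment rule being violated is simple to state: a Segment or Last Segment descriptor's Length field counts the bytes of the descriptor array it points to, and every descriptor in that array is 16 bytes. A minimal checker for the rule:

```python
def last_segment_length_valid(length_bytes):
    # The Length of a (Last) Segment descriptor must be a non-zero
    # multiple of 16, since it covers an array of 16-byte descriptors.
    return length_bytes != 0 and length_bytes % 16 == 0
```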

function: scripts/conformance/05_controller/sgl_test.py::test_sgl_invalid_subtype

Sweeps SGL subtype combinations to ensure the controller flags invalid encodings.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Allocate queues shared by the subtype iterations
  2. Capture the controller response for each unsupported subtype
  3. Clean up queues at the end of the sweep

function: scripts/conformance/05_controller/sgl_test.py::test_sgl_all_zero

Checks controller behavior when the SGL descriptor and size fields are all zeros.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Allocate queues for issuing the zeroed descriptor command
  2. Submit a write whose descriptor and size are forced to zero
  3. Print the resulting command log for debugging
  4. Remove the queues after collecting the log

function: scripts/conformance/05_controller/sgl_test.py::test_trim_with_sgl

Triggers a Dataset Management trim using SGL metadata and checks the affected data blocks.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Prepare data buffers and IO queue resources
  2. Send write and read commands to populate reference data
  3. Wait for the commands to finish and verify data integrity
  4. Trim LBA 0 using an SGL descriptor in meta mode
  5. Check data after issuing the trim

file: scripts/conformance/05_controller/sq_cq_test

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_cq_around

Validate that a three-entry CQ wraps correctly by issuing four commands and observing phase behavior.

Reference

  1. NVM Express Revision 1.4a, Data Structures.

Steps

  1. Create a CQ with three entries and an SQ with five entries mapped to that CQ
  2. Issue three SQEs with descending CIDs to approach the wrap condition
  3. Allow the controller to post the completions
  4. Verify the first CQE reports CID 4
  5. Verify the second CQE reports CID 3
  6. Confirm the third CQE is still empty before wraparound
  7. Advance the CQ head to consume the first entry
  8. Verify the first CQE is still CID 4 after re-reading with old phase
  9. Confirm the third CQE now reports CID 2 as the queue wraps
  10. Issue a fourth SQE to force the phase bit toggle
  11. Move the CQ head further to toggle the phase bit
  12. Confirm CQ entry 3 keeps CID 2 with the proper phase
  13. Confirm CQ entry 2 keeps CID 3 with the proper phase
  14. Ensure the wrapped CQ entry now reports CID 1
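The phase behavior the steps above walk through can be modeled in a few lines. This is a toy model of the controller side only: it ignores head-pointer flow control (which a real controller uses to avoid overwriting unconsumed entries), so it only illustrates where each CID lands and when the phase tag inverts.

```python
class CQModel:
    """Toy model of completion-queue posting for a fixed-depth ring."""
    def __init__(self, depth):
        self.entries = [(None, 0)] * depth   # (cid, phase) per slot
        self.depth, self.tail, self.phase = depth, 0, 1

    def post(self, cid):
        self.entries[self.tail] = (cid, self.phase)
        self.tail += 1
        if self.tail == self.depth:          # wraparound inverts the phase tag
            self.tail, self.phase = 0, self.phase ^ 1

# four completions into a 3-entry CQ: the fourth wraps to slot 0
# with the phase bit inverted, matching the sequence in the steps
cq = CQModel(3)
for cid in (4, 3, 2, 1):
    cq.post(cid)
```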

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_overflow

Check that a two-entry SQ wraps safely by issuing two commands and validating CQ contents.

Reference

  1. NVM Express Revision 1.4a, Data Structures.

Steps

  1. Create a CQ with five entries and an SQ with two entries bound to the CQ
  2. Issue the first SQE with CID 4 and ring the tail doorbell
  3. Issue the second SQE with CID 3 and wrap the tail pointer
  4. Verify the first CQE reports CID 4
  5. Verify the second CQE reports CID 3
  6. Confirm no other CQEs become valid after wraparound

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_delete_after_cq

Ensure the controller rejects deleting an IO CQ while an associated SQ still exists.

Reference

  1. NVM Express Revision 1.4a, Section 5.4.

Steps

  1. Create a linked IO CQ/SQ pair
  2. Attempt to delete the CQ first and expect Invalid Queue Deletion

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_doorbell

Validate that writing an SQ tail doorbell without SQEs leaves the CQ idle.

Reference

  1. NVM Express Revision 1.4a, Data Structures.

Steps

  1. Create a linked IO CQ/SQ pair
  2. Ring the SQ tail doorbell and ensure the controller accepts it

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_admin_smaller_cq

Verify that an oversized admin SQ paired with a smaller CQ still produces coherent completions.

Reference

  1. NVM Express Revision 1.4a, Data Structures.

Steps

  1. Define a helper to write SQEs into the admin SQ buffer
  2. Disable the controller before programming admin queue registers
  3. Program the admin queue registers with SQ size 100 and CQ size 10
  4. Re-enable the controller so the new queue settings take effect
  5. Build an Identify Namespace admin command template
  6. Issue a series of commands that use distinct CIDs to stress the small CQ
  7. Inspect the CQ buffer before any manual doorbell moves
  8. Advance the CQ doorbell once and recheck completion data
  9. Advance the CQ doorbell again and verify the entries remain consistent
  10. Advance the CQ doorbell twice to force a phase change and inspect the CQ
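Programming the admin queue sizes in step 3 goes through the AQA register (offset 0x24), where both size fields are encoded 0-based. A sketch of the encoding for the SQ-size-100 / CQ-size-10 configuration used here:

```python
def aqa_value(asqs, acqs):
    # AQA register: Admin SQ Size in bits 11:0, Admin CQ Size in
    # bits 27:16, both stored as (number of entries - 1)
    return ((acqs - 1) << 16) | (asqs - 1)
```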

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_admin_invalid_doorbell

Verify an invalid admin SQ doorbell write triggers the expected asynchronous error.

Reference

  1. NVM Express Revision 1.4a, Figure 146.

Steps

  1. Program an invalid admin SQ tail value and wait for the associated AER

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_doorbell_invalid

Confirm an invalid IO SQ doorbell value raises the Invalid Doorbell asynchronous event.

Reference

  1. NVM Express Revision 1.4a, Figure 146.

Steps

  1. Clear existing asynchronous event records
  2. Create a paired IO CQ/SQ for the test
  3. Program an invalid SQ tail doorbell value and expect an AER

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_cq_another_sq

Validate two IO SQs can target the same CQ without corrupting completion order.

Reference

  1. NVM Express Revision 1.4a, Submission Queue Entry Format.

Steps

  1. Create a CQ and the first SQ, each with three entries
  2. Post commands with CID 4 and 3 to the first SQ
  3. Ring the SQ doorbell twice to submit both commands
  4. Create the second IO SQ that also targets the same CQ
  5. Post a command with CID 2 on the second SQ and ring its doorbell
  6. Verify the first CQE reports CID 4
  7. Verify the second CQE reports CID 3
  8. Confirm the third CQE is still empty
  9. Advance the CQ head to release the first entry
  10. Ensure the third CQE now reports CID 2 coming from SQ2
  11. Confirm the first CQE still shows CID 4 with the old phase
  12. Post a command with CID 1 using SQ2 and ring its doorbell
  13. Advance the CQ head again to consume two entries
  14. Verify the CQ wraps and now reports CID 1 at entry 0

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_create_invalid_cqid

Ensure Create IO SQ fails gracefully when referencing invalid CQIDs from the spec-defined categories.

Reference

  1. NVM Express Revision 1.4a, Section 5.3.

Steps

  1. Query ncqa through Get Features to know the supported CQ range
  2. Create a CQ with CQID 1 for later references
  3. Create a valid SQ bound to the CQ above as control
  4. Expect SQ creation to fail when targeting CQID 0 (the admin CQ)
  5. Expect failure if CQID 0xffff is used
  6. Expect failure when CQID exceeds the supported ncqa range by one
  7. Expect failure when CQID exceeds the supported range by a large offset
  8. Expect failure when CQIDs 2 or 4 do not correspond to created CQs
  9. Delete the valid SQ and CQ used for baseline coverage
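The rejection cases above reduce to one predicate. A sketch, assuming NCQA from Get Features (Number of Queues) is reported 0-based, so NCQA+1 IO completion queues are allocated with IDs 1 upward:

```python
def create_sq_cqid_ok(cqid, ncqa, created_cqs):
    # CQID 0 is the admin CQ and never valid for an IO SQ; the ID must
    # fall inside the allocated range and name a CQ that actually exists.
    return 1 <= cqid <= ncqa + 1 and cqid in created_cqs
```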

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_read_write_burst

Burst IO submissions to ensure the CQ handles near-full conditions without overflowing.

Reference

  1. NVM Express Revision 1.4a, Section 5.3.

Steps

  1. Create paired IO CQ/SQ buffers sized near the queue limit
  2. Define a helper that seeds SQEs with unique PRPs and CIDs
  3. Post 127 write SQEs that use their LBA as data and ring the doorbell once
  4. Wait for all but the last completion and advance the head
  5. Submit one additional write to verify wraparound behavior
  6. Confirm no CQ overflow occurs after consuming the entries
  7. Tear down the write queues
  8. Create fresh CQ/SQ pairs to repeat the burst with reads
  9. Post 127 read SQEs that check the written LBAs
  10. Wait for all read completions to be posted
  11. Validate each buffer contains the expected pattern (cid == data)
  12. Delete the read queues after validation
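The 127-command burst size follows from the ring-buffer full condition: the tail may never advance onto the head, so a depth-N queue holds at most N-1 outstanding entries. A one-line sketch of the free-slot count:

```python
def sq_free_slots(head, tail, depth):
    # full when advancing tail would equal head, so capacity is depth - 1;
    # a 128-entry SQ therefore accepts the 127-SQE burst in one doorbell
    return (head - tail - 1) % depth
```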

function: scripts/conformance/05_controller/sq_cq_test.py::test_cq_doorbell_valid

Confirm a standalone CQ can be created and deleted without an associated SQ.

Reference

  1. NVM Express Revision 1.4a, Section 5.3.

Steps

  1. Create the CQ only and wait briefly for it to become ready
  2. Delete the CQ cleanly without touching any SQ

function: scripts/conformance/05_controller/sq_cq_test.py::test_cq_create_physically_contiguous

Ensure controllers requiring physically contiguous CQs reject queues created without PC set.

Reference

  1. NVM Express Revision 1.4a, Section 5.3.

Steps

  1. Check CAP.CQR and skip the test if PC enforcement is not required
  2. Create an IO CQ with the PC flag to prove the valid path succeeds
  3. Attempt to create the CQ without PC and expect the controller to reject it

function: scripts/conformance/05_controller/sq_cq_test.py::test_cq_sq_diff_id

Confirm SQs may be paired to a CQ even when their queue identifiers differ.

Reference

  1. NVM Express Revision 2.0.

Steps

  1. Create an IO CQ using QID 1
  2. Create an SQ parameterized by sqid that targets the CQ
  3. Post a simple command and ring the SQ doorbell
  4. Wait for the completion and verify it succeeded
  5. Tear down the SQ and CQ

file: scripts/conformance/05_controller/sqe_cqe_test

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_sqhd

Validate SQ head pointer reporting by reusing SQ entries and observing CQ updates.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Create IOCQ depth 3 and IOSQ depth 2 to control the observed head/tail behavior.
  2. Issue the first IO command and advance the SQ tail pointer.
  3. Verify only the first CQ entry reports the SQHD, SQID, and phase bit update.
  4. Issue the second IO command and roll the SQ tail.
  5. Confirm the second CQ entry reports the new command while the third remains untouched.
  6. Issue the third IO command without advancing CQ head.
  7. Ensure the last CQ entry does not update while the CQ head is unchanged.
  8. Move the CQ head to release an entry and allow the pending completion to post.
  9. Submit another command and release an additional CQ entry.
  10. Confirm the next round of CQ entries advances SQHD and toggles the phase bit.
  11. Delete the queues to clean up the environment.

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_p_tag_invert_after_cq_full

Confirm phase tag inversion when the CQ wraps by repeatedly filling and draining a small queue pair.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Create IOCQ depth 2 and IOSQ depth 10 to force CQ wrap quickly.
  2. Build and post the first write command.
  3. Post the second write command and advance the tail.
  4. Verify the first completion reports the expected identifiers and SQ head.
  5. Verify the second completion reports the expected identifiers and SQ head.
  6. Confirm both CQ entries still report a phase of 1 before wrap.
  7. Post the third write command to trigger wrap conditions.
  8. Post the fourth write command.
  9. Ensure the third completion reports the expected metadata with inverted phase.
  10. Confirm the fourth completion appears after freeing a CQ slot.
  11. Confirm both CQ entries now report the cleared phase bit.
  12. Delete the queue pair used for the wraparound test.

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_discontinuous_cid

Verify completion handling by submitting commands with discontinuous CIDs.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Create IOCQ and IOSQ with three entries to observe CID order.
  2. Issue commands using non-sequential CIDs and advance the tail.
  3. Verify CQ entries return the discontinuous CIDs as submitted.
  4. Delete the queues once CID handling is verified.

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_max_cid

Exercise completion reporting by issuing commands that use minimum and maximum allowed CIDs.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Determine the valid CID range for the detected controller.
  2. Create IOCQ and IOSQ with three entries to capture the completions.
  3. Submit commands using maximum and zero CIDs.
  4. Ensure the CQ reports the CIDs without modification.
  5. Delete the queues to conclude the CID range test.

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_cid_conflict

Verify controller behavior by submitting duplicate CIDs simultaneously.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Create IOCQ and IOSQ with twenty entries to host large transfers.
  2. Prepare the PRP buffer chain sized to the controller MDTS limit.
  3. Post identical commands with the same CID into the queue.
  4. Wait for completions and ensure status indicates success or CID conflict.
  5. Delete the queues once duplicate CID handling is verified.

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_reserved

Check controller handling by issuing commands whose reserved SQE fields are non-zero.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Create IOCQ and IOSQ supporting three entries for the reserved field check.
  2. Issue a command with a non-zero reserved field.
  3. Verify the controller completes successfully despite the reserved bits.
  4. Delete the queues after the reserved field validation.

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_fuse_is_zero

Verify the controller accepts commands by issuing IOs with the FUSE field cleared.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Create IOCQ and IOSQ with three entries dedicated to fuse validation.
  2. Issue a command with the fuse bits cleared.
  3. Confirm successful completion status for the zero fuse field.
  4. Delete the queues after verifying fuse handling.

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_opc_invalid_admin_cmd

Validate that invalid admin opcodes are rejected by issuing admin commands with unsupported opcodes.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Send the invalid admin command and expect an error completion.

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_opc_invalid_nvm_cmd

Confirm the controller flags invalid NVM command opcodes by sending IOSQ entries with unsupported opcodes.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Create IOCQ and IOSQ with three entries to capture the invalid opcode response.
  2. Issue a command using the invalid opcode under test.
  3. Check that the status field indicates Invalid Command Opcode.
  4. Delete the queues after capturing the invalid opcode completion.

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_ns_invalid

Verify invalid namespace identifiers are rejected by targeting unsupported namespace IDs.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Create IOCQ and IOSQ with three entries to record the namespace failures.
  2. Issue a command against an invalid namespace value.
  3. Confirm the completion status reports Invalid Namespace or Format.
  4. Delete the queues once the invalid namespace handling is validated.

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_ns_broadcast

Ensure broadcast namespace ID handling matches expectations by issuing IOs to NSID 0xffffffff.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. Create IOCQ and IOSQ with three entries to monitor broadcast namespace behavior.
  2. Issue a command targeting the broadcast namespace identifier.
  3. Confirm the status indicates Invalid Namespace or Format, or Namespace Not Ready
  4. Delete the queues once the broadcast namespace behavior is confirmed.

folder: scripts/conformance/06_tcg

file: scripts/conformance/06_tcg/01_use_case_test

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct01_level0_discovery

Verify Level 0 Discovery by power cycling the controller and issuing discovery commands.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Power cycle the subsystem to force the TCG stack into a known state
  2. Issue the Level 0 Discovery command and verify the COMID count
  3. Check that the reported Number of ComIDs is at least 1
  4. Log the Level 0 discovery feature list for traceability
  5. Validate Admin4 authority access when supported

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct02_properties

Validate Properties negotiation using defined host limits and confirming device acceptance.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Send the Properties command with the mandated host limits
  2. Read the Properties response to confirm each reported value

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct03_take_ownership

Take ownership by retrieving the MSID PIN, replacing the SID credential, and validating revert-to-default.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Start an Admin SP session as Anybody to read the MSID PIN
  2. Retrieve MSID PIN from the C_PIN table
  3. Close the Anybody session
  4. Start an Admin SP session authenticated as SID using the MSID PIN
  5. Program the new SID password to complete ownership
  6. Close the SID-authenticated session
  7. Start an Admin SP session with the newly set SID password
  8. Issue RevertTPer to confirm the credential operates correctly

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct04_activate_locking_sp

Activate the Locking SP from Manufactured-Inactive by setting SID and issuing Activate.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Start an Admin SP Anybody session and fetch the MSID PIN
  2. Replace the SID credential using the retrieved MSID PIN
  3. Authenticate as SID and issue Activate on the Locking SP object
  4. Close the SID session after issuing Activate
  5. Start a Locking SP Admin1 session to verify the lifecycle state
  6. Close the Locking SP session after verification

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct05_configuring_authorities

Configure required authorities by enabling admins and users and validating session logon.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Query the Level 0 discovery data to determine the LAST_REQUIRED_USER value
  2. Start a Locking SP Admin1 session to configure account access
  3. Enable User1 and provision a password for it
  4. Enable Admin4 if Opal 2.0 capabilities are advertised
  5. Enable User1 in the Locking SP and set its password
  6. Enable the LAST_REQUIRED_USER authority and set its password
  7. Close the Locking SP admin session after provisioning users
  8. Open a Locking SP Admin1 session using the newly provisioned credentials
  9. Close the session after confirming access
  10. Open a Locking SP Admin4 session to validate optional administrator access
  11. Close the Admin4 validation session
  12. Authenticate as User1 on the Locking SP to confirm user access
  13. Close the User1 session to end the test
  14. Start a Locking SP session as the LAST_REQUIRED_USER to verify final authority
  15. Close the LAST_REQUIRED_USER session cleanly

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct06_configuring_locking_objects

Configure locking ranges by programming range metadata and verifying host access.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Read Level 0 discovery data to obtain the LAST_REQUIRED_RANGE identifier
  2. Start a Locking SP Admin1 session to configure locking range attributes
  3. Program the LAST_REQUIRED_RANGE boundaries and initial access state
  4. Close the configuration session to persist the locking metadata
  5. Write data through the namespace and verify it can be read back

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct06_configuring_locking_objects_powercycle

Verify locking range configuration across a power cycle by attempting I/O afterward.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Determine the LAST_REQUIRED_RANGE identifier from Level 0 discovery
  2. Open a Locking SP Admin1 session to set up the range prior to power cycling
  3. Configure the range boundaries and lock both read and write access
  4. Close the configuration session to persist the range state
  5. Power cycle the subsystem and reset the controller
  6. Attempt read and write I/O and expect Data Protection errors

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct07_unlocking_range

Verify that a configured locking range can be unlocked for a user through the defined methods.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Start a Locking SP Admin1 session and provision User1 credentials
  2. Query LAST_REQUIRED_RANGE and size it for the test
  3. Configure the locking range boundaries for the unlock test
  4. Close the admin session to proceed with range controls
  5. Start another Locking SP Admin1 session to configure lock state
  6. Grant User1 read and write access to the configured range
  7. Close the admin session after assigning user permissions
  8. Start a User1 session to request unlocking of the assigned range
  9. Clear the ReadLocked and WriteLocked bits within the range
  10. Close the user session cleanly before issuing I/O
  11. Issue write and read traffic to validate the unlocked range

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct08_erasing_range

Validate range erasure by deriving a new key and verifying data is destroyed.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Skip the test on Pyrite devices that do not support erasing ranges
  2. Identify the LAST_REQUIRED_RANGE to target for the erase procedure
  3. Read the AlignmentGranularity so the range sizing honors controller constraints
  4. Establish the locking range boundaries using the calculated size
  5. Write a known pattern and unlock the range before erasure
  6. Verify the pattern prior to erasing the range
  7. Start a session to derive a new wrapping key for the range
  8. Read the range to confirm the data no longer matches the original pattern

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct09_using_datastore

Validate DataStore table read/write permissions via Admin and User sessions.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Enable User1 and program its password through a Locking SP admin session
  2. Start a Locking SP Admin1 session to grant DataStore privileges
  3. Allow User1 to perform DataStore Set operations
  4. Allow User1 to perform DataStore Get operations
  5. Close the admin session so the user can consume the privileges
  6. Start a User1 session to write data into the DataStore
  7. Write the magic pattern into the DataStore table
  8. Close the session after writing
  9. Start another User1 session to read back the stored data
  10. Read and verify the DataStore payload matches what was written

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct10_enable_mbr_shadow

Exercise the MBR shadowing enable-to-done workflow and confirm locking range visibility.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Skip devices that lack Opal 2.0 because MBR shadowing is not supported there
  2. Invoke the Properties method
  3. Read the AlignmentGranularity value
  4. Start a Locking SP Admin1 session
  5. Invoke the Set method on the BooleanExpr column of the ACE_MBRCONTROL_SET_DONE ACE object to include the UIDs of the User1 and LAST_REQUIRED_USER Authority objects
  6. Invoke the Get method on the Rows column of the MBR Table Descriptor object
  7. Determine the namespace geometry to size the MBR range correctly
  8. Invoke the Set method to change the RangeLength column of the LAST_REQUIRED_RANGE to SIZE_OF_MBR_TABLE_DESCRIPTOR_IN_LOGICAL_BLOCKS + 10 LBAs
  9. Write 1s over the entire LAST_REQUIRED_RANGE
  10. Call the Get method on the MBR object in the Table table to retrieve the MandatoryWriteGranularity column value
  11. Invoke the Set method to write the MBR table with the MAGIC_PATTERN
  12. Write any remaining tail bytes that do not fill an entire chunk
  13. Invoke the Set method on the Enable column of the MBRControl table with a value of TRUE
  14. Close the session
  15. Power cycle the subsystem
  16. Write the MAGIC_PATTERN over the entire LAST_REQUIRED_RANGE
  17. Read from LBA 0 to the size of the MBR table
  18. Read the trailing portion beyond the MBR table and confirm it remains erased

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct11_mbr_done

Verify the MBR Done workflow by provisioning user authorities, unlocking the range, setting the MBRDone flag, and confirming subsequent host reads.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Read the LAST_REQUIRED_USER identifier from Level 0 discovery
  2. Enable User1 and set its password
  3. Enable the LAST_REQUIRED_USER authority and set its password
  4. Grant the LAST_REQUIRED_USER authority read and write access to the range
  5. Close the session
  6. Start a LAST_REQUIRED_USER session to clear the lock bits and set MBR done
  7. Call the Set method on the ReadLocked and WriteLocked columns of the LAST_REQUIRED_RANGE Locking object with a value of FALSE
  8. Set the MBRDone flag to release the range to hosts
  9. Close the session
  10. Read the entire LAST_REQUIRED_RANGE
  11. Read the remaining portion and compare it to the expected filler pattern
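The granularity-honoring MBR table write in the shadowing flow above splits the payload into full MandatoryWriteGranularity chunks plus a short tail. A conceptual sketch (pure Python, not PyNVMe3 code; the pattern and granularity values are illustrative):

```python
def chunk_mbr_writes(payload: bytes, granularity: int):
    """Split an MBR table payload into chunks of MandatoryWriteGranularity
    bytes, with any remaining tail bytes emitted as a final short chunk."""
    chunks = []
    for offset in range(0, len(payload), granularity):
        chunks.append(payload[offset:offset + granularity])
    return chunks

pattern = b"\x5a" * 10                          # stand-in for the MAGIC_PATTERN
chunks = chunk_mbr_writes(pattern, 4)
assert [len(c) for c in chunks] == [4, 4, 2]    # full chunks, then the tail
assert b"".join(chunks) == pattern              # nothing lost in the split
```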

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct12_revert_locking_sp

Validate Locking SP revert in Manufactured state and confirm access behavior afterward.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Write a known pattern over 64 logical blocks at LBA 0
  2. Call StartSession method with SPID = Admin SP UID
  3. Call Revert method on Locking SP object
  4. Call StartSession method with SPID = Locking SP
  5. For Pyrite 1.00, do nothing for this step

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct13_revert_admin_sp_lockingsp_inactive

Revert the Admin SP while the Locking SP remains ManufacturedInactive and ensure SID handling.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Determine which SSC feature the DUT advertises
  2. Take ownership of the device
  3. Replace the SID credential to prepare for the revert operation
  4. Write data over 64 logical blocks beginning at LBA 0
  5. Call StartSession method with SPID = Admin SP UID
  6. Call Revert method on Admin SP object
  7. Read the Behavior of C_PIN_SID PIN upon TPer Revert value from Level 0 discovery
  8. Reprogram the SID credential if the revert reset it
  9. Start a session with SID authority
  10. Call StartSession method with SPID = Locking SP
  11. Read 64 logical blocks beginning at LBA 0

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct14_revert_admin_sp_locking_sp_active

Revert the Admin SP while Locking SP is active and confirm SID recovery behavior.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Determine the supported SSC feature set
  2. Start an Anybody Admin SP session to capture the MSID PIN
  3. Call StartSession method with SPID = Admin SP UID
  4. Call Get method on UID 00 00 00 06 00 00 02 02 to determine support
  5. Close the session
  6. Write data over 64 logical blocks beginning at LBA 0
  7. Call StartSession method with SPID = Admin SP UID and HostSigningAuthority = SID authority UID
  8. Call Revert method on Admin SP object
  9. Read the Behavior of C_PIN_SID PIN upon TPer Revert value from Level 0 discovery
  10. Start a session with SID authority
  11. Call StartSession method with SPID = Locking SP
  12. For Pyrite 1.00, do nothing for this step

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct15_revert_admin_sp_locking_sp_active

Revert the Admin SP using Admin1 credentials while the Locking SP is in Manufactured state.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Determine the supported SSC capability before executing Admin1 flows
  2. Check whether Admin1 is supported
  3. Enable Admin1
  4. Write data over 64 logical blocks beginning at LBA 0
  5. Call StartSession method with SPID = Admin SP UID and HostSigningAuthority = Admin1 authority
  6. Call Revert method on Admin SP object
  7. Read the Behavior of C_PIN_SID PIN upon TPer Revert value from Level 0 discovery
  8. Start a session with SID authority
  9. Call StartSession method with SPID = Locking SP
  10. For Pyrite 1.00, do nothing for this step

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct16_psid_revert

Perform a PSID revert to return the device to factory defaults.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Provide the correct PSID credential for the device under test
  2. Power cycle the device to ensure the PSID path is available
  3. Start a PSID session and issue RevertTPer to factory reset the device

file: scripts/conformance/06_tcg/02_specific_functionality_test

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf01_transaction

Exercise SPF-01 transaction flow by issuing sequential sessions that wrap writes in start and end transaction tokens and verifying datastore persistence.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Start a Locking SP Admin1 session
  2. Write zeros to the DataStore table
  3. Close the session
  4. Start a new session to test transactional write sequencing
  5. Send a subpacket that contains a startTransaction token with a status code of 0x00
  6. Write the MAGIC_PATTERN to the DataStore table
  7. Send a subpacket that contains an endTransaction token with a status code of 0x00
  8. Open a verification session to read back the committed data
  9. Read the data from the DataStore table and verify it
  10. Start a Locking SP Admin1 session to clear the data under a transaction
  11. Send a subpacket that contains a startTransaction token with a status code of 0x00
  12. Write zeros to the DataStore table
  13. Open the final session to check the effect of the uncommitted erase
  14. Read the data from the DataStore table and verify it
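The transaction subpackets above bracket a method payload with the TCG Core control tokens 0xFB (startTransaction) and 0xFC (endTransaction), each followed by a status tiny atom. A minimal sketch of the token bracketing only; real traffic also carries ComPacket/packet/subpacket framing, which is omitted here:

```python
START_TRANSACTION = 0xFB  # TCG Core control token
END_TRANSACTION = 0xFC

def wrap_in_transaction(payload: bytes, status: int = 0x00) -> bytes:
    """Bracket a method payload with startTransaction/endTransaction tokens,
    each carrying a status tiny atom of 0x00."""
    return (bytes([START_TRANSACTION, status])
            + payload
            + bytes([END_TRANSACTION, status]))

stream = wrap_in_transaction(b"\x00\x01")  # dummy payload bytes
assert stream[0] == 0xFB and stream[1] == 0x00
assert stream[-2:] == bytes([0xFC, 0x00])
```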

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf02_if_recv_behavior_tests_case1

Verify IF-RECV case 1 by issuing IF-RECV and confirming the ComPacket header indicates no further data.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Issue an IF-RECV command
  2. Check that the ComPacket header reports “All Response(s) returned – no further data”

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf02_if_recv_behavior_tests_case2

Validate IF-RECV case 2 by forcing a large datastore read and observing segmented IF-RECV responses.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Start a Locking SP Admin1 session
  2. Read data from the DataStore table
  3. Issue IF-RECV with a transfer length of 0x100
  4. read the returned compacket header to determine outstanding data and minimum transfer
  5. close the session before checking header fields
  6. validate outstanding data and transfer length indicators
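The outstanding-data bookkeeping checked above lives in the 20-byte ComPacket header. A stdlib `struct` sketch of parsing it (field layout per my reading of the TCG Core specification; the ComID and byte counts below are made-up test values, not captured device data):

```python
import struct

def parse_compacket_header(buf: bytes) -> dict:
    """Unpack the big-endian ComPacket header: reserved, ComID,
    ComID extension, OutstandingData, MinTransfer, Length."""
    _, comid, comid_ext, outstanding, min_transfer, length = \
        struct.unpack(">IHHIII", buf[:20])
    return {"comid": comid, "outstanding": outstanding,
            "min_transfer": min_transfer, "length": length}

# A response saying 0x200 bytes remain after a 0x100-byte IF-RECV transfer
hdr = struct.pack(">IHHIII", 0, 0x07FE, 0, 0x200, 0x200, 0)
fields = parse_compacket_header(hdr)
assert fields["outstanding"] == 0x200   # data still waiting on the TPer
assert fields["min_transfer"] == 0x200  # host should retry with at least this
```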

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf03_trylimit_case_sid

Exercise SPF-03 SID TryLimit by exhausting failed SID authentications, observing lockout, and validating recovery after a power cycle.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Attempt SID login with an invalid PIN until TryLimit attempts are consumed when TryLimit > 0.
  2. Skip the TryLimit exercise when the implementation exposes no limit.
  3. Confirm SID authentication is locked out by expecting AUTHORITY_LOCKED_OUT when opening the Admin SP.
  4. Power cycle the subsystem and reset the controller to clear the SID lockout.
  5. Re-activate the locking SP lifecycle to restore the baseline after the power cycle.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf03_trylimit_case_admin1

Exercise SPF-03 Admin1 TryLimit by exhausting incorrect authentications and confirming lockout behavior.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Attempt Admin1 login with an invalid PIN until the reported TryLimit is met.
  2. Verify Admin1 authentication is locked out by expecting AUTHORITY_LOCKED_OUT.
  3. Power cycle the subsystem and reset the controller for subsequent tests.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf03_trylimit_case_user1

Exercise SPF-03 User1 TryLimit by exhausting failed user authentications and confirming lockout.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Attempt User1 login with an invalid PIN until the reported TryLimit is consumed.
  2. Confirm User1 access is locked out by expecting AUTHORITY_LOCKED_OUT on session start.
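The lockout behavior these three TryLimit cases exercise can be modeled in a few lines. This is a conceptual sketch of the counter semantics, not PyNVMe3 code, and the TryLimit value is an arbitrary example:

```python
class TryCounter:
    """Minimal model of TCG TryLimit/Tries behavior: each failed
    authentication increments Tries; once Tries reaches TryLimit the
    authority reports AUTHORITY_LOCKED_OUT until a power cycle."""

    def __init__(self, try_limit: int):
        self.try_limit = try_limit
        self.tries = 0

    def authenticate(self, pin_ok: bool) -> str:
        if self.try_limit and self.tries >= self.try_limit:
            return "AUTHORITY_LOCKED_OUT"   # even a correct PIN is refused
        if pin_ok:
            self.tries = 0                  # a success resets the counter
            return "SUCCESS"
        self.tries += 1
        return "NOT_AUTHORIZED"

    def power_cycle(self):
        self.tries = 0                      # lockout clears on power cycle

c = TryCounter(try_limit=3)
for _ in range(3):
    assert c.authenticate(False) == "NOT_AUTHORIZED"
assert c.authenticate(True) == "AUTHORITY_LOCKED_OUT"
c.power_cycle()
assert c.authenticate(True) == "SUCCESS"
```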

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf04_tryreset_case_sid

Validate SPF-04 SID tries reset by exhausting failed SID attempts and verifying the tries counter clears.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Exhaust SID authentication attempts to reach TryLimit – 1 when the device advertises a TryLimit.
  2. Skip the reset validation entirely when TryLimit support is absent.
  3. Start an Admin SP session with SID authority to evaluate the tries counter.
  4. Read the SID tries counter after exercising the TryLimit threshold.
  5. Ensure the SID tries counter cleared to zero.
  6. Close the SID session after verification.
  7. Re-enable the locking SP lifecycle for the next test sequence.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf04_tryreset_case_admin1

Validate SPF-04 Admin1 tries reset by consuming TryLimit – 1 failures and confirming the counter clears.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Drive Admin1 authentication failures until the tries counter reaches TryLimit – 1 when supported.
  2. Start a locking SP session with Admin1 authority to inspect tries.
  3. Read the Admin1 tries counter following the attempted resets.
  4. Close the Admin1 session after verification.
  5. Ensure the Admin1 tries counter cleared to zero.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf04_tryreset_case_user1

Validate SPF-04 User1 tries reset by driving failures and confirming the user counter clears.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Drive User1 authentication failures until the tries counter reaches TryLimit – 1 when available.
  2. Start a locking SP session with User1 authority to clear tries.
  3. Close the User1 session after the reset.
  4. Start a locking SP session with Admin1 authority to read the User1 tries counter.
  5. Read the User1 tries counter to confirm it reset.
  6. Close the Admin1 session after gathering the result.
  7. Ensure the User1 tries counter cleared to zero.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf05_tries_reset_on_power_cycle_sid

Verify SPF-05 SID tries reset across a power cycle by exhausting failures, power cycling, and confirming the counter clears.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Drive SID authentication failures through the reported TryLimit when supported; skip when no limit is published.
  2. Verify SID remains locked out prior to the power cycle.
  3. Power cycle the subsystem and reset the controller.
  4. Start an Admin SP SID session after power cycle and read the TryLimit/Try counters.
  5. Call Get method on SID’s C_PIN Object to retrieve the TryLimit Column’s value
  6. Close the SID session after sampling the counters.
  7. Ensure the SID tries counter cleared after the power cycle.
  8. Re-activate the locking SP lifecycle for subsequent tests.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf05_tries_reset_on_power_cycle_admin1

Verify SPF-05 Admin1 tries reset across a power cycle by exhausting failures and confirming the counter clears afterward.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Drive Admin1 authentication failures through the reported TryLimit when available; skip when unsupported.
  2. Verify Admin1 is locked out before issuing the power cycle.
  3. Power cycle the subsystem and reset the controller.
  4. Open a locking SP Admin session to sample Admin1 TryLimit/Try counters.
  5. Call Get method on Admin1’s C_PIN Object to retrieve the TryLimit Column’s value
  6. Close the locking SP session and ensure the tries counter cleared.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf05_tries_reset_on_power_cycle_user1

Verify SPF-05 User1 tries reset across a power cycle by exhausting failures and confirming the counter clears afterward.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Drive User1 authentication failures through the reported TryLimit when available; skip when unsupported.
  2. Power cycle the subsystem and reset the controller.
  3. Open a locking SP session and call Get method on User1’s C_PIN Object to retrieve the TryLimit Column’s value.
  4. Close the locking SP session and ensure the User1 tries counter cleared.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf06_next_case1

Validate SPF-06 Next command Case 1 by iterating locking table entries and confirming returned UID ordering.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Skip the test when the controller does not advertise Opal capability.
  2. Start a locking SP session with Admin1 authority.
  3. Read the MaxRanges field from the LockingInfo table.
  4. Issue Next without a WHERE filter to obtain the initial UID list.
  5. Issue Next starting from the first UID to retrieve one subsequent entry.
  6. Close the session after collecting the iteration data.
  7. Ensure the number of returned UIDs equals MaxRanges + 1 (9 in this test vector).
  8. Verify the UID prefix of each returned entry and confirm the second query overlaps the first list.
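The verification in the last two steps reduces to three checks on the returned UID lists. A pure-Python sketch (the helper, the UID prefix bytes, and the synthetic UID list are illustrative, not values read from a device):

```python
def check_next_results(first, second, prefix: bytes, expected_count: int) -> bool:
    """Check the two Next queries: the unfiltered query yields
    expected_count UIDs carrying the table's UID prefix, and the follow-up
    query (starting at the first UID) overlaps the initial list."""
    if len(first) != expected_count:
        return False
    if not all(uid.startswith(prefix) for uid in first):
        return False
    return all(uid in first for uid in second)

LOCKING_PREFIX = b"\x00\x00\x08\x02"  # illustrative Locking-table UID prefix
uids = [LOCKING_PREFIX + bytes([0, 0, 0, i]) for i in range(9)]
assert check_next_results(uids, uids[1:2], LOCKING_PREFIX, expected_count=9)
```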

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf06_next_case2

Validate SPF-06 Next command Case 2 by iterating Pyrite locking entries and confirming constraints.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Skip the test when the controller does not advertise Pyrite capability.
  2. Start a locking SP session with Admin1 authority.
  3. Issue Next on the MethodID table without a WHERE filter to collect the initial UID list.
  4. Issue Next starting from the first UID to retrieve a single subsequent entry.
  5. Close the session after collecting the iteration data.
  6. Ensure at least seven entries are reported by the first query.
  7. Verify each returned UID has the expected prefix and the second query overlaps the first list.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf07_host_session_number

Verify SPF-07 host session number handling by forcing an arbitrary HSN and ensuring it persists across commands.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Start an Admin SP session with an arbitrary host session number to confirm the controller echoes it.
  2. Read the MSID PIN to ensure the host session number is preserved in the response header.
  3. Close the Admin SP session.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf08_revert_sp_case1

Validate SPF-08 RevertSP Case 1 by writing a known pattern, issuing RevertSP without KeepData, and confirming access is blocked with data wiped.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Write a known pattern across the first 64 logical blocks.
  2. Start a Locking SP session with Admin1 authority to drive RevertSP.
  3. Invoke RevertSP without KeepData so user data and keys are erased.
  4. Attempt to reopen the Locking SP session and expect a TCG error.
  5. For non-Pyrite controllers, verify the previously written pattern was removed.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf08_revert_sp_case2

Validate SPF-08 RevertSP Case 2 by running RevertSP with KeepData cleared and confirming data removal.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Skip devices that only support Pyrite 1.00, which does not exercise this flow.
  2. Write a known pattern across the first 64 logical blocks.
  3. Start a Locking SP session with Admin1 authority to drive RevertSP.
  4. Invoke RevertSP with KeepGlobalRangeKey/KeepData explicitly set to FALSE.
  5. Attempt to reopen the Locking SP session and expect a TCG error.
  6. Read back the first 64 logical blocks to confirm the pattern was erased.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf08_revert_sp_case3

Validate SPF-08 RevertSP Case 3 by running RevertSP with KeepData set and confirming user data persists.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Skip devices that only support Pyrite 1.00, which does not exercise this flow.
  2. Write a known pattern across the first 64 logical blocks.
  3. Start a Locking SP session with Admin1 authority to drive RevertSP.
  4. Invoke RevertSP with KeepGlobalRangeKey/KeepData set to TRUE.
  5. Attempt to reopen the Locking SP session and expect a TCG error.
  6. Read back the first 64 logical blocks to confirm the pattern persists.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf09_range_alignment_verification

Verify SPF-09 locking range alignment by reading alignment parameters and configuring a compliant range.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Start a Locking SP session with Admin1 authority to query alignment requirements.
  2. Exit early when the feature or AlignmentRequired flag is not enabled for this device.
  3. Read block size, alignment granularity, and lowest aligned LBA to determine constraints.
  4. Configure a locking range using the aligned LBA constraint.
  5. Close the session after the configuration command completes.
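A compliant range start in step 4 must satisfy start = LowestAlignedLBA + n × AlignmentGranularity. A sketch of picking the first legal start at or after a candidate LBA (pure Python; the constraint values are illustrative, not read from a real device):

```python
def first_aligned_start(lowest_aligned_lba: int, granularity: int,
                        candidate: int) -> int:
    """Return the smallest LBA >= candidate of the form
    LowestAlignedLBA + n * AlignmentGranularity."""
    if candidate <= lowest_aligned_lba:
        return lowest_aligned_lba
    n = -(-(candidate - lowest_aligned_lba) // granularity)  # ceiling division
    return lowest_aligned_lba + n * granularity

assert first_aligned_start(0, 8, 100) == 104
assert first_aligned_start(1, 8, 100) == 105
assert first_aligned_start(0, 8, 0) == 0
```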

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf10_byte_table_access_granularity

Validate SPF-10 byte table access granularity by enforcing MandatoryWriteGranularity on datastore writes and reads.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Start a Locking SP session with Admin1 authority to query the datastore granularity.
  2. Read the MandatoryWriteGranularity field from the datastore table.
  3. Skip the test when the granularity requirement is trivial.
  4. Write a payload whose length is exactly one granularity unit.
  5. Close the session after writing the datastore.
  6. Start a new session to read back the datastore contents.
  7. Read the datastore entry and confirm the stored pattern matches the original write.
  8. Ensure the granularity is non-zero and the stored value is correct.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf11_stack_reset

Exercise SPF-11 stack reset by enabling a user, issuing STACK_RESET, and ensuring the enable state clears.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Start an Admin1 session to modify User1 state within a transaction.
  2. Begin a transaction for the user enable operation.
  3. Enable User1 to establish state that should be reset later.
  4. Issue STACK_RESET and confirm the command succeeds.
  5. Start another Admin1 session to inspect the User1 enable state.
  6. Retrieve the User1 enabled column value after the reset.
  7. Close the Admin1 session.
  8. Confirm the user enable state cleared.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf12_tper_reset_case1

Validate SPF-12 TPer reset by enabling programmatic reset, configuring locking ranges, and confirming DNR behavior after TPER_RESET.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Enable ProgrammaticReset to allow the TPer reset to clear locks.
  2. Open an Admin1 session to configure locking ranges prior to the reset.
  3. Configure the optional global range for Opal 2.0+ devices.
  4. Lock and enable LockOnReset for the primary global range.
  5. Program the LockOnReset column to include Programmatic.
  6. Start a Locking SP session to issue the TPER_RESET.
  7. Issue the TPER_RESET command
  8. Reopen the Locking SP session to verify range state.
  9. Read the Locking_GlobalRange columns to confirm locks reasserted.
  10. Close the session and ensure both read/write locks are set.
  11. Attempt a write while expecting DNR due to locked range.
  12. Attempt a read under the same conditions and ensure DNR remains asserted.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf13_authenticate

Validate SPF-13 authentication by retrieving MSID credentials, authenticating as SID, and verifying the returned UID.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Read the MSID PIN to obtain the default SID credential.
  2. Start an Admin SP session to perform the Authenticate method.
  3. Authenticate using the SID authority and the MSID credential.
  4. Read the SID C_PIN UID column for confirmation.
  5. Close the session.
  6. Ensure the returned UID matches the SID C_PIN object identifier.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf15_random

Validate SPF-15 Random by requesting multiple 32-byte random blocks and ensuring the data is not all zeros or ones.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Start a Locking SP session to access the Random method.
  2. Request 32-byte random data twice to confirm the interface functions consistently.
  3. Ensure each response demonstrates entropy and does not consist solely of ones or zeros.
  4. Close the locking SP session.
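The degenerate-output check in step 3 is simple to express. A pure-Python sketch of the sanity test applied to each 32-byte block (the helper name is invented for illustration):

```python
import os

def looks_random(block: bytes) -> bool:
    """Reject the two degenerate 32-byte outputs the test guards against:
    all 0x00 bytes or all 0xFF bytes."""
    return len(block) == 32 and block not in (b"\x00" * 32, b"\xff" * 32)

assert not looks_random(b"\x00" * 32)
assert not looks_random(b"\xff" * 32)
assert looks_random(os.urandom(32))  # passes except with negligible probability
```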

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf16_common_name

Validate SPF-16 CommonName by programming Admin1 and locking range CommonName fields and verifying the stored values.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Skip devices that do not advertise Opal 2.x or Ruby CommonName support.
  2. Start an Admin1 session to modify CommonName columns.
  3. Program the Admin1 CommonName using the MAGIC_PATTERN string.
  4. Program the Locking_GlobalRange CommonName fields using the MAGIC_PATTERN.
  5. Read the Admin1 CommonName value back for verification.
  6. Read the Locking_GlobalRange CommonName value.
  7. Close the Admin1 session.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf17_additional_dataStore_tables_case1

Validate SPF-17 additional datastore tables Case 1 by activating the maximum number of tables and confirming reported sizes.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Skip when the controller does not advertise the Additional DataStore Table feature.
  2. Parse the maximum number of DataStore tables from discovery data.
  3. Parse the total DataStore size from discovery data.
  4. Parse the DataStore table alignment requirement.
  5. Take ownership to obtain the default SID password.
  6. Program the SID PIN to the provided test password.
  7. Activate the Locking SP with aligned DataStore table sizes.
  8. Call Activate with the computed table count and per-table size.
  9. Close the Admin SP session.
  10. Start an Admin1 session to query the resulting DataStore table metadata.
  11. Read the DataStore table Rows column to verify the allocated size.
  12. Close the Admin1 session.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf17_additional_dataStore_tables_case2

Validate SPF-17 additional datastore tables Case 2 by activating aligned per-table sizes and confirming each table reports the expected rows.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Skip when the controller does not advertise the Additional DataStore Table feature.
  2. Parse the maximum number of DataStore tables from discovery data.
  3. Parse the DataStore table alignment requirement.
  4. Take ownership to obtain the default SID password.
  5. Program the SID PIN to the provided test password.
  6. Activate the Locking SP using the minimum aligned table size.
  7. Call Activate with the per-table alignment value.
  8. Close the Admin SP session.
  9. Start an Admin1 session to query each DataStore table.
  10. Read each table’s Rows column and ensure the reported size equals the alignment granularity.
  11. Close the Admin1 session.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf18_range_crossing_behavior

Validate SPF-18 range crossing behavior by configuring aligned ranges and issuing I/O that spans or stays within the boundaries.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Determine range crossing capability from Level 0 discovery and skip unsupported devices.
  2. Record the MDTS value to size the test I/O.
  3. Read the alignment granularity from the LockingInfo structure.
  4. Start an Admin1 session to configure the test ranges.
  5. Configure and unlock the Locking_GlobalRange and Locking_Range instances.
  6. Close the Admin1 session.
  7. Issue range-crossing writes and reads and check whether DNR is asserted.
  8. Perform range-crossing I/O and confirm the controller allows it without errors.
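Deciding whether a given I/O straddles a configured range boundary, as the last two steps require, comes down to an interval check. A conceptual sketch (pure Python; the boundary LBA is illustrative geometry, not device data):

```python
def crosses_boundary(slba: int, nlb: int, boundaries) -> bool:
    """Report whether an I/O covering LBAs [slba, slba + nlb) straddles any
    locking-range boundary in `boundaries`."""
    return any(slba < b < slba + nlb for b in boundaries)

# two adjacent ranges meeting at LBA 0x1000 (illustrative geometry)
assert crosses_boundary(0x0FF8, 16, [0x1000])       # straddles the boundary
assert not crosses_boundary(0x1000, 16, [0x1000])   # starts exactly at it
assert not crosses_boundary(0x0FF0, 16, [0x1000])   # ends exactly at it
```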

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf19_block_sid_authentication

Verify SPF-19 block SID authentication by toggling the hardware reset requirement and observing SID session availability.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Skip when the subsystem reset capability is unavailable.
  2. Skip when the Block SID Authentication feature is absent.
  3. Retrieve the MSID PIN for subsequent SID authentication attempts.
  4. Assert the hardware reset block via IF-SEND.
  5. Confirm SID authentication is rejected while the block is asserted.
  6. Check the feature descriptor to ensure the hardware reset status bit is set.
  7. Perform a subsystem reset to satisfy the hardware reset requirement.
  8. Verify SID authentication succeeds once the hardware reset is satisfied.
  9. Clear the hardware reset condition again using IF-SEND.
  10. Confirm SID authentication is blocked until the condition clears.
  11. Verify the hardware reset status bit is now cleared.
  12. Power cycle the subsystem and ensure SID authentication continues to succeed.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf20_data_removal_mechanism

Validate SPF-20 data removal mechanisms by iterating supported mechanisms and confirming the active value updates accordingly.

Reference

  1. TCG Storage Opal Family Test Cases Specification Revision 1.00

Steps

  1. Skip devices that do not advertise the Data Removal Mechanism feature.
  2. Parse the Supported Data Removal Mechanisms descriptor.
  3. Read the currently active data removal mechanism via an Admin SP SID session.
  4. Query the ActiveDataRemovalMechanism column and ensure it is supported.
  5. Set the ActiveDataRemovalMechanism column to one of the supported modes.
  6. Close the SID session to commit the change.
  7. Open an Anybody Admin SP session to read the public view of the active mechanism.
  8. Query the ActiveDataRemovalMechanism column again and ensure it reflects the requested value.
  9. Close the Anybody session and validate the mechanism selection.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_read_datastore

Verify datastore reads by writing a known pattern with MandatoryWriteGranularity enforcement and retrieving nine rows.

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. Start a Locking SP Admin1 session to query MandatoryWriteGranularity.
  2. Read nine rows from the datastore table for verification.
  3. Close the session and verify the amount of data returned.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_getacl

Verify getacl by opening an Admin SP session and retrieving the ACL for a known object.

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. Start an Admin SP session with SID authority.
  2. Invoke getacl for the specified UID and log the returned ACL entries.
  3. Close the Admin SP session.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_set_lock_on_reset

Validate LockOnReset behavior by programming the global range to powercycle mode and verifying it persists through a reset.

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. Start a Locking SP session to program the LockOnReset column.
  2. Set LockOnReset to powercycle and read back the programmed value.
  3. Close the session.
  4. Power cycle the subsystem to ensure the setting persists.
  5. Reopen a session to read back the LockOnReset field.
  6. Retrieve the LockOnReset value and ensure it remains unchanged.
  7. Close the session and validate the value.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_write_longdata_to_datastore

Verify datastore writes with large payloads by honoring MandatoryWriteGranularity and host communication limits.

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. Start a Locking SP Admin1 session to ensure MandatoryWriteGranularity permits the test.
  2. Write a 1 KB payload to the datastore table.
  3. Read back the data written with the default host limits.
  4. Close the session and confirm the last write persisted.
  5. Advertise host communication limits large enough to test 4K writes.
  6. Start a new session to perform a 4K write under the updated limits.
  7. Write a 4K payload to the datastore and ensure the transfer succeeds.
  8. Read back the 4K payload using the same buffer size.
  9. Close the session and verify the read data matches the pattern.
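The granularity and host-limit constraints above can be sketched as a chunking plan. A pure-Python model; the 80-byte framing overhead is an assumed placeholder, not a spec value:

```python
def plan_datastore_write(length, granularity, max_compacket, overhead=80):
    """Split a datastore write into chunks that respect MandatoryWriteGranularity
    and leave room for ComPacket/Packet/SubPacket framing (overhead is assumed)."""
    if length % granularity:
        raise ValueError("length must be a multiple of the write granularity")
    chunk = (max_compacket - overhead) // granularity * granularity
    offsets = range(0, length, chunk)
    return [(off, min(chunk, length - off)) for off in offsets]

# 4K payload, 4-byte granularity, 2 KB ComPacket limit -> three aligned chunks
plan = plan_datastore_write(4096, 4, 2048)
assert sum(n for _, n in plan) == 4096
assert all(n % 4 == 0 for _, n in plan)
```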

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_mbr_table

Validate MBR table functionality by writing known patterns, power cycling, and ensuring expected persistence.

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. Start an Admin1 session to confirm MandatoryWriteGranularity supports the test.
  2. Initialize the MBR table with zeros.
  3. Close the session before the next write.
  4. Start another session to write the MAGIC_PATTERN into the MBR table.
  5. Write the MAGIC_PATTERN into the MBR table.
  6. Power cycle and format the namespace to ensure MBR persistence is evaluated cleanly.
  7. Format the namespace to clear any residual data.
  8. Read the MBR table to ensure the MAGIC_PATTERN persisted.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_mbr_and_revert

Validate MBR persistence across a RevertTPR sequence by writing data, reverting, and confirming the table clears.

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. Start an Admin1 session to confirm MandatoryWriteGranularity supports the test.
  2. Write the MAGIC_PATTERN into the MBR table.
  3. Verify the written data before issuing RevertTPer.
  4. Revert the TPer to factory state.
  5. Reacquire ownership using the MSID credential.
  6. Activate the Locking SP to restore normal operation.
  7. Ensure the MBR table content was cleared by the revert.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_done_on_reset

Validate MBR DoneOnReset behavior by programming powercycle mode and confirming it persists across a reset.

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. Start a session to program the DoneOnReset column.
  2. Set DoneOnReset to powercycle and read back the value.
  3. Power cycle the device to ensure the value persists.
  4. Verify the DoneOnReset column after the reset.

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_write_maxdata_to_datastore

Validate datastore transfers at maximum token sizes by negotiating host properties and performing a boundary-length write and read.

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. Announce host properties to learn the device’s maximum communication sizes.
  2. Swap host/device limits to match the device’s maximum capabilities.
  3. Start an Admin1 session to confirm MandatoryWriteGranularity supports the test.
  4. Compute the maximum atom payload based on the negotiated limits.
  5. Write the maximum supported payload to the datastore.
  6. Read the payload back to confirm integrity.
  7. Close the session and ensure the stored data matches the write buffer.
  8. Verify the read data matches the written pattern.
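Steps 1-4 amount to a min() negotiation followed by an overhead subtraction. A hedged sketch; the framing overhead and long-atom header sizes are assumptions for illustration:

```python
def negotiate(host_props, tper_props):
    """Pick working limits after a Properties exchange: each transfer-size
    limit is capped by whichever side is smaller (simplified model)."""
    keys = ("MaxComPacketSize", "MaxIndTokenSize")
    return {k: min(host_props[k], tper_props[k]) for k in keys}

def max_atom_payload(limits, framing_overhead=56, long_atom_header=4):
    """Largest single long-atom payload that fits (overheads are assumed)."""
    return min(limits["MaxComPacketSize"] - framing_overhead - long_atom_header,
               limits["MaxIndTokenSize"] - long_atom_header)

limits = negotiate({"MaxComPacketSize": 1 << 20, "MaxIndTokenSize": 1 << 20},
                   {"MaxComPacketSize": 66048, "MaxIndTokenSize": 65540})
assert limits["MaxComPacketSize"] == 66048   # device limit wins
assert max_atom_payload(limits) == 65536
```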

file: scripts/conformance/06_tcg/03_error_test_cases_test

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_01_native_protocol_rw_locked_error_responses

Validate that NVMe read/write commands report DNR when Admin1 locks the GlobalRange via the Locking SP.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Establish a Locking SP Admin1 session to manage the global range.
  2. Enable read/write locking on the global range and set both locks to True.
  3. Track the DNR state reported by the NVMe completion callbacks.
  4. Issue a write to the locked range and verify the DNR bit is set.
  5. Issue a read to the locked range and verify the DNR bit is set.
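Checking the DNR bit comes down to decoding dword 3 of the completion queue entry (bit layout per the NVMe base specification; the example status value is illustrative):

```python
def dnr_set(cqe_dword3: int) -> bool:
    """DNR (Do Not Retry) is bit 31 of completion queue entry dword 3."""
    return bool(cqe_dword3 >> 31 & 1)

def status_code(cqe_dword3: int) -> int:
    """Status Code occupies bits 17..24 of dword 3."""
    return (cqe_dword3 >> 17) & 0xFF

# A locked-range access completing with DNR set (status value illustrative)
cqe = (1 << 31) | (0x86 << 17)
assert dnr_set(cqe)
assert status_code(cqe) == 0x86
```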

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_02_general_if_send_if_recv_synchronous_protocol

Ensure synchronous IF-SEND/IF-RECV exchanges return protocol errors when Properties is requested immediately after session start.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Issue an IF-SEND to open an anybody Admin SP session.
  2. Request Properties over the same ComID to trigger the synchronous protocol fault.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_03_invalid_if_send_transfer_length

Validate that IF-SEND commands exceeding MaxComPacketSize are rejected after discovering the device limit.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Query Properties to learn the device MaxComPacketSize.
  2. Prepare a Properties request whose ComPacket header length matches the payload size.
  3. Send the IF-SEND with a Transfer Length greater than MaxComPacketSize to trigger an error.
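The rejection the test provokes reduces to a bounds check, sketched here in pure Python (not the device's actual error path):

```python
def check_if_send(transfer_len: int, max_compacket: int) -> None:
    """A compliant TPer rejects an IF-SEND whose Transfer Length exceeds
    MaxComPacketSize (sketch of the check this test provokes)."""
    if transfer_len > max_compacket:
        raise ValueError("Transfer Length %d > MaxComPacketSize %d"
                         % (transfer_len, max_compacket))

check_if_send(2048, 66048)                 # in-bounds: accepted
try:
    check_if_send(66048 + 512, 66048)
    raise AssertionError("oversized IF-SEND must be rejected")
except ValueError:
    pass                                   # expected error response
```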

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_04_invalid_sessionid_regular_session

Confirm that using an invalid SessionID within a regular session triggers the expected zeroed outstanding/min-transfer response.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Take ownership by opening an anybody Admin SP session.
  2. Start a new Admin SP session using the retrieved password.
  3. Issue a Get against the MSID credential using a mismatched Packet SessionID.
  4. Close the session and verify outstanding data and minimum transfer values remain zero.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_05_unexpected_token_outside_of_method_regular_session

Verify the Locking SP handles injected EndList tokens outside the method body while maintaining User1 state.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Initialize a Locking SP session as Admin1 and enable User1 prior to error injection.
  2. Reopen the Admin1 session and inject an EndList token before the CALL while disabling User1.
  3. Capture the response when EndList is forced before the CALL token.
  4. Evaluate the outstanding data and minimum transfer fields returned in the header.
  5. Retry the injected call if the device reports pending data.
  6. Close the session after handling the injected command.
  7. Repeat the Set method attempt with another injected EndList token ordering under a new session.
  8. Query the Enabled column to confirm User1 remains enabled after the error injection.
  9. Close the session and ensure the User1 state is still enabled.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_06_unexpected_token_in_method_regular_session

Validate that injecting EndList immediately after the CALL token causes the Set method to fail as expected.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Open a Locking SP session as Admin1 to prepare for the token injection test.
  2. Disable User1 while injecting an EndList token immediately after the CALL token.
  3. Close the session after validating the injected command failed.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_07_unexpected_token_outside_of_method_control_session

Ensure control sessions reject StartSession requests when an EndList token is injected before the CALL token.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Attempt to start a Locking SP session with an EndList token inserted ahead of the CALL token.
  2. Start a clean Locking SP session to confirm the device remains in a good state.
  3. Verify the outstanding data and minimum transfer fields remained zero.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_08_unexpected_token_in_method_control_session

Validate that duplicate StartList tokens in a control session parameter list trigger the appropriate error response.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Issue a Properties command while injecting a StartList token immediately after another StartList.
  2. Confirm that the returned header fields indicate zero outstanding data and minimum transfer bytes.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_10_invalid_invoking_id_get_case1

Verify the controller rejects Get requests from Admin1 when the InvokingID is not defined in the table.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Open a Locking SP session as Admin1 to issue an invalid Get request.
  2. Attempt to Get from InvokingID 00 00 08 01 AA BB CC DD and expect an error status.
  3. Close the session after completing the invalid Get request.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_10_invalid_invoking_id_get_case2

Validate that Anybody sessions cannot Get from undefined InvokingIDs such as the DataStore table.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Start a Locking SP session using the Anybody authority to issue the invalid Get.
  2. Attempt to Get from InvokingID 00 00 10 01 00 00 00 00 (DataStore table) and expect an error.
  3. Close the session after observing the invalid InvokingID response.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_10_invalid_invoking_id_get_case3

Confirm that valid Admin1 Get requests against C_PIN_Admin1 return data while the PIN value remains hidden.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Start a Locking SP session as Admin1 to read from the C_PIN_Admin1 object.
  2. Request multiple columns from InvokingID 00 00 00 0B 00 01 00 01 (C_PIN_Admin1).
  3. Retrieve the remaining single-column values to ensure each response contains one item.
  4. Close the session and confirm that the PIN value array is empty.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_10_invalid_invoking_id_get_case4

Ensure Anybody sessions cannot Get from the ThisSP object when the InvokingID is invalid for the caller.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Open a Locking SP session with Anybody authority to access the ThisSP object.
  2. Attempt to Get from InvokingID 00 00 00 00 00 00 00 01 (ThisSP) and expect an error.
  3. Close the session after confirming the invalid InvokingID response.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_11_invalid_invoking_id_non_get

Demonstrate that Set operations targeting invalid InvokingIDs fail during a Locking SP session.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Start a Locking SP session to attempt a Set against an invalid object.
  2. Attempt to Set InvokingID 00 00 08 01 00 00 00 05 and expect the command to fail.
  3. Close the session after receiving the invalid InvokingID error.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_12_authorization

Ensure unauthorized attempts to enable User1 during a Locking SP session raise the expected error.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Start a Locking SP session using Anybody authority to simulate an unauthorized caller.
  2. Attempt to enable User1 without sufficient rights and expect an authorization error.
  3. Close the session after verifying the unauthorized Set was rejected.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_13_malformed_comPacket_header_regular_session

Verify malformed ComPacket headers are rejected during regular sessions while keeping Admin1 tries unchanged.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Query Properties to learn the device MaxComPacketSize limit.
  2. Start a Locking SP session as Admin1 to perform the malformed write.
  3. Write to the datastore with a ComPacket header larger than MaxComPacketSize while keeping Transfer Length within limits.
  4. Issue IF-RECV to capture the response to the malformed command.
  5. Determine whether the malformed header caused a session abort.
  6. Reauthenticate as Admin1 to inspect the Tries column if the session was aborted.
  7. Read the Tries column of Admin1’s C_PIN object to ensure no retries were consumed.
  8. Confirm admin1_tries remains zero after the malformed command.
  9. Close the session if the malformed header was handled without aborting.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_16_overlapping_locking_ranges

Ensure overlapping locking ranges generate errors when configured through the Locking SP.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Skip this test if the device lacks Opal 1.0/2.0 locking range support.
  2. Start a Locking SP session as Admin1 to configure locking ranges.
  3. Configure Locking_Range1 to reserve LBAs 0-63.
  4. Attempt to configure Locking_Range2 with an overlapping definition and expect an error.
  5. Close the session once the overlapping range failure is observed.
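The overlap the device must detect is a standard interval test, sketched below with the LBA values from steps 3-4:

```python
def ranges_overlap(start1, length1, start2, length2):
    """Two LBA ranges overlap when neither ends before the other begins."""
    return start1 < start2 + length2 and start2 < start1 + length1

# Locking_Range1 reserves LBAs 0-63; a second range starting inside it must fail
assert ranges_overlap(0, 64, 32, 64)       # overlaps -> TPer returns an error
assert not ranges_overlap(0, 64, 64, 64)   # adjacent ranges are legal
```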

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_17_invalid_type

Verify that invalid data types written to User1’s Enabled column are rejected.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Open a Locking SP session as Admin1 to modify User1 state.
  2. Attempt to write a 0xAAAA value into the Enabled column to trigger a type error.
  3. Close the session after verifying the invalid type was rejected.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_18_revertsp_globalrange_locked

Validate that RevertSP fails when the global locking range remains locked.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Open a Locking SP session as Admin1 to manipulate the global range.
  2. Lock the GlobalRange by setting both ReadLocked and WriteLocked attributes.
  3. Attempt to RevertSP while keeping the global range key/data and expect an error.
  4. Close the session after observing the RevertSP failure.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_19_ata_security_interaction

Validate ATA security interactions by attempting Locking SP activation after resetting SID credentials.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Retrieve the MSID credential via an anybody Admin SP session to take ownership.
  2. Update SID credentials using the retrieved MSID value.
  3. Attempt to activate the Locking SP and expect the ATA security interaction to trigger error 63.
  4. Close the session after validating the activation failure response.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_20_startSession_on_inactive_locking_sp

Verify StartSession fails when issued against an inactive Locking SP.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Attempt to start a Locking SP session even though the SP remains inactive.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_21_startsession_with_incorrect_hostChallenge

Ensure StartSession rejects Admin1 attempts when the HostChallenge does not match the stored PIN.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Attempt to start a Locking SP session as Admin1 using an incorrect HostChallenge value.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_22_multiple_sessions_case1

Validate the controller enforces the MaxSessions limit when multiple Locking SP sessions are requested.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Query Properties to determine the TPer’s MaxSessions capability.
  2. Start a Locking SP session with write access using Admin1 credentials.
  3. Attempt to open another Locking SP session and expect an error based on MaxSessions.
  4. Close the original session before power cycling the subsystem.
  5. Power cycle the subsystem to return the controller to a known state.
  6. Reset the controller handle after the subsystem power cycle completes.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_23_data_removal_mechanism_set_unsupported_value

Ensure unsupported Data Removal Mechanism values are rejected when written via the Admin SP.

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Skip the test if the device does not advertise the Data Removal Mechanism feature.
  2. Extract the Supported Data Removal Mechanisms descriptor from Level 0 Discovery.
  3. Start an Admin SP session as SID to query the current Active mechanism.
  4. Read the ActiveDataRemovalMechanism column to capture the baseline value.
  5. Iterate through all mechanisms and try to set those not advertised as supported.
  6. Attempt to set ActiveDataRemovalMechanism to an unsupported value and expect an error.
  7. Close the Admin SP session before verifying the current mechanism value.
  8. Read back the ActiveDataRemovalMechanism via Anybody authority to confirm it was not updated.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_data_over_maxcompacketsize

Confirm datastore reads larger than MaxComPacketSize produce an error after provisioning the Locking SP.

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. Determine the device’s MaxComPacketSize limit before programming the SP.
  2. Take ownership by retrieving the MSID credential through the Admin SP.
  3. Program the SID credential with a known password for the remainder of the test.
  4. Activate the Locking SP using the newly provisioned SID credentials.
  5. Perform an in-bounds read from the datastore table as a control case.
  6. Attempt to read beyond MaxComPacketSize and expect an error response.

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_start_session_with_wrong_sp

Ensure StartSession fails when the command references an incorrect SP identifier list.

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. Attempt to start a Locking SP session while providing an incorrect LOCKINGSP parameter.

file: scripts/conformance/06_tcg/04_appendix_test

function: scripts/conformance/06_tcg/04_appendix_test.py::test_active_user_powercycle

Validate that locked user data remains protected across a dirty power cycle by reactivating TCG sessions.

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  2. Provision user1 and grant range access
  3. Close admin session after provisioning
  4. Write pattern to namespace with unlocked state
  5. Flush written data to media
  6. Power cycle subsystem to emulate dirty shutdown
  7. Attempt read without user credential and expect lock enforcement
  8. Reactivate locking SP and unlock range with admin credential
  9. Read data with user credential to confirm integrity

function: scripts/conformance/06_tcg/04_appendix_test.py::test_mbr_read_write

Verify MBR table write and read operations through Opal admin session.

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. Verify device supports Opal 2.0 before running MBR test
  2. Invoke Properties method to negotiate host limits
  3. Start locking SP admin1 session
  4. Retrieve MBR table size from descriptor
  5. Query mandatory write granularity for MBR object
  6. Write MBR table with test pattern
  7. Read back the MBR content for verification
  8. Close the session

function: scripts/conformance/06_tcg/04_appendix_test.py::test_datastore_read_write

Validate datastore table write and read via locking SP session.

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. Start locking SP admin1 session
  2. Query datastore mandatory write granularity
  3. Define datastore offsets and payload
  4. Invoke the Set method to write the MAGIC_PATTERN to the datastore table
  5. Read datastore content for verification
  6. Close the session

function: scripts/conformance/06_tcg/04_appendix_test.py::test_blocksid_and_lock_range

Verify that Block SID authentication state survives power events while locking ranges remain enforced.

Reference

  1. NVM Express Revision 1.4a

Steps

  1. Discover TCG capabilities and ensure Block SID is supported
  2. Power cycle the device to clear existing Block SID state
  3. Enable Block SID
  4. Set power state PS3 via Set Features
  5. Issue a PCIe reset
  6. Power cycle the device
  7. Start a SID session and set a new Admin SP password
  8. Call StartSession method with SPID = Admin SP UID and HostSigningAuthority = SID authority UID
  9. Activate the Locking SP and move its lifecycle to the enabled state
  10. Call Activate method on the Locking SP object
  11. Enable ProgrammaticReset to allow TPer reset
  12. Open a Locking SP session and enforce the lock on the global range
  13. Issue a TPer reset to clear the locking state
  14. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  15. Unlock the global range after the TPer reset
  16. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  17. Perform I/O to confirm access is restored without errors

file: scripts/conformance/06_tcg/05_core_spec_test

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_trylimit

Verify the SID try limit equals 10 by reading the SID C_PIN object in an Admin SP session.

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5

Steps

  1. Start Admin SP session using SID authority credential
  2. Read SID C_PIN TryLimit column
  3. Read SID C_PIN Tries column
  4. Read SID C_PIN persistence attribute
  5. Close session to release resources

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_datastore_size

Verify datastore capacity is at least 10 MB using an Admin SP session query.

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5

Steps

  1. Open Admin SP session
  2. Get datastore size
  3. Keep optional datastore size threshold check for reference
  4. if data_store_rows < (10 * 1024 * 1024):
  5. warnings.warn("DataStore size: %dKB, less than 10MB" % (data_store_rows//1024))
  6. Close session

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_single_user_mode

Confirm Single User Mode support via Level 0 discovery feature list.

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5

Steps

  1. Run Level 0 discovery and check Single User Mode feature flag

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_configurable_namespace_locking

Confirm Configurable Namespace Locking support via Level 0 discovery.

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5

Steps

  1. Run Level 0 discovery and check Configurable Namespace Locking feature flag

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_properties_info

Collect and log TPer property limits using the Properties method.

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5

Steps

  1. Invoke Properties method with host-defined property hints
  2. Get MaxComIDTime value
  3. Get DefSessionTimeout value
  4. Get MaxSessionTimeout value
  5. Get MinSessionTimeout value
  6. Get MaxTransactionLimit value
  7. Get MaxSessions value
  8. Get MaxReadSessions value
  9. Verify that property parameters the TPer does not support are ignored and not returned in its response, as the spec requires
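The ignore-unsupported-parameters rule in the last step can be modeled as a dictionary filter (the made-up host property name below is illustrative):

```python
def tper_properties_response(host_request, tper_supported):
    """Unsupported host parameters are ignored and never echoed back
    (simplified model of the Properties method rule)."""
    return {name: tper_supported[name]
            for name in host_request if name in tper_supported}

supported = {"MaxComPacketSize": 66048, "MaxSessions": 1}
reply = tper_properties_response(
    {"MaxComPacketSize": 1 << 20, "HostMadeUpProperty": 7}, supported)
assert reply == {"MaxComPacketSize": 66048}
assert "HostMadeUpProperty" not in reply   # silently dropped, not an error
```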

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_authentication_time

Measure authentication time for C_PIN start session handshake.

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5

Steps

  1. Start authentication session and capture latency
  2. Optional check: authentication of a C_PIN should be at least 100 ms
  3. if authentication_time < 0.1:
  4. warnings.warn("authentication time: %.02fms, less than 100ms" % (authentication_time*1000))

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_get_comid

Verify GET_COMID returns expected ComID via security receive command.

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01

Steps

  1. Prepare buffer for GET_COMID readback
  2. Issue GET_COMID
  3. Parse feature data to determine number of ComIDs

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_verify_comid_valid

Check VERIFY_COMID_VALID status through security send/receive sequence.

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01

Steps

  1. Check security command capability before issuing TCG command
  2. Retrieve Level 0 discovery data to confirm TCG availability
  3. Issue GET_COMID to obtain active ComID
  4. Issue VERIFY_COMID_VALID to report ComID status

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_comid_and_session

Verify ComID state transitions before, during, and after sessions.

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01

Steps

  1. Power cycle the device to clear ComID state
  2. Retrieve ComID via GET_COMID and validate availability
  3. Fetch ComID status through VERIFY_COMID_VALID
  4. Open Admin SP session to associate ComID
  5. Verify ComID status reflects association
  6. Close session and confirm status assertion
  7. Re-query ComID status after closing session
  8. Power cycle device again to ensure status persistence

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_invalid_comid

Ensure invalid ComID session attempt raises expected warning.

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01

Steps

  1. Attempt session creation with invalid ComID and expect warning

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_syncsession

Validate SyncSession command keeps session active without errors.

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01

Steps

  1. Start anybody session on Admin SP
  2. Issue SyncSession method to maintain session
  3. Close the session

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_data_store_transaction_success

Verify successful datastore transaction commit preserves written data.

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01

Steps

  1. Start locking SP admin1 session and read mandatory write granularity
  2. Write zeros to datastore to set baseline content
  3. Close session after baseline write
  4. Begin transaction before modifying datastore content
  5. Write magic_pattern to datastore table
  6. Commit transaction with success status
  7. Read datastore to confirm committed pattern persists

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_data_store_abort_transaction

Confirm endTransaction failure status discards uncommitted datastore data.

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01

Steps

  1. Start locking SP admin1 session
  2. Begin transaction before writing data
  3. Write magic_pattern to datastore table
  4. End transaction with failure status to discard changes
  5. Close session after transaction failure
  6. Read data to ensure failed transaction did not persist writes
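The commit/abort semantics exercised here can be modeled with a shadow copy (a toy `DataStore` class, not the PyNVMe3 API):

```python
class DataStore:
    """Toy datastore with TCG-style transactions: writes inside a transaction
    become visible only when it ends with success status (illustrative model)."""

    def __init__(self, size):
        self.data = bytearray(size)
        self.pending = None

    def start_transaction(self):
        self.pending = bytearray(self.data)   # shadow copy receives writes

    def write(self, offset, payload):
        target = self.pending if self.pending is not None else self.data
        target[offset:offset + len(payload)] = payload

    def end_transaction(self, success):
        if success:
            self.data = self.pending          # commit the shadow copy
        self.pending = None                   # on failure it is discarded


ds = DataStore(16)
ds.start_transaction()
ds.write(0, b"MAGIC")
ds.end_transaction(success=False)             # failure status discards the write
assert ds.data[:5] == bytearray(5)            # still zeros

ds.start_transaction()
ds.write(0, b"MAGIC")
ds.end_transaction(success=True)              # success status commits
assert ds.data[:5] == b"MAGIC"
```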

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_transaction_trylimit_case

Exercise transaction flow to verify Admin1 try limit enforcement.

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01

Steps

  1. Drive Admin1 authentication retries up to TryLimit using wrong password
  2. Attempt User1 session and expect AUTHORITY_LOCKED_OUT
  3. Power cycle to reset Admin1 Tries counter
  4. Confirm retries cleared after power cycle
  5. Start transaction before authentication attempts
  6. Authenticate with wrong password to increment tries
  7. End transaction with success status
  8. Check Admin1 tries incremented once
  9. Start a second transaction
  10. Authenticate again with wrong password
  11. End transaction with failure status to finalize second try
  12. Check Admin1 tries incremented twice
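The Tries/TryLimit accounting can be modeled as follows (a toy `Authority` class mirroring the lockout and power-cycle reset behavior of steps 1-4; not the PyNVMe3 API):

```python
class Authority:
    """Model of C_PIN Tries/TryLimit: failed authentications increment Tries;
    reaching TryLimit locks the authority until a power cycle (assuming a
    non-persistent Tries counter)."""

    def __init__(self, pin, try_limit=10):
        self.pin, self.try_limit, self.tries = pin, try_limit, 0

    def authenticate(self, challenge):
        if self.tries >= self.try_limit:
            raise PermissionError("AUTHORITY_LOCKED_OUT")
        if challenge != self.pin:
            self.tries += 1
            return False
        return True

    def power_cycle(self):
        self.tries = 0                        # non-persistent counter resets


admin1 = Authority(pin="correct", try_limit=2)
assert not admin1.authenticate("wrong")       # try 1
assert not admin1.authenticate("wrong")       # try 2 -> limit reached
try:
    admin1.authenticate("correct")            # even the right PIN is refused
    raise AssertionError("should be locked out")
except PermissionError:
    pass
admin1.power_cycle()
assert admin1.authenticate("correct")         # cleared after power cycle
```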

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_level0_discovery

Send Level 0 Discovery at multiple phases to observe ComID consistency.

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01

Steps

  1. Issue Level 0 discovery before opening any session
  2. Start Admin SP session for subsequent discovery checks
  3. Issue Level 0 discovery while session active
  4. Issue Level 0 discovery after MSID retrieval
  5. Issue Level 0 discovery after closing Admin SP session
  6. Start SID session with MSID password to update credential
  7. Issue Level 0 discovery after SID start-session send
  8. Issue Level 0 discovery after SID close-session send

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_send_twice_start_session

Verify duplicate start session security sends fail as expected.

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01

Steps

  1. Retrieve baseline ComID from Level 0 discovery
  2. Send start session security send twice
  3. Check the second start-session send fails
  4. Retrieve MSID C_PIN after successful session start

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_send_twice_end_session

Verify duplicate end session commands return expected error status.

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01

Steps

  1. Retrieve baseline ComID from Level 0 discovery
  2. Open Admin SP session prior to duplicate close attempts
  3. Retrieve MSID C_PIN after successful session start
  4. Send close session security send twice
  5. Check the second end-session send fails

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_outstandingdata

Validate outstanding_data handling by limiting datastore receive length.

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01

Steps

  1. Start locking SP admin1 session
  2. Query mandatory write granularity before datastore access
  3. Read 512 rows from the datastore table
  4. Retrieve partial response to capture outstanding data size
  5. Receive remaining outstanding data to complete transfer
  6. Close session

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_max_compacket_size

Check MaxComPacketSize enforcement when reading large datastore ranges.

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01

Steps

  1. Invoke Properties method to identify MaxComPacketSize limits
  2. Start locking SP admin1 session
  3. Query mandatory write granularity prior to large read
  4. Read MaxComPacketSize rows from the datastore to exceed response limit
  5. Close session

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_multi_users_and_ranges

Verify eight users own unique locked ranges and can read/write after unlock.

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01

Steps

  1. Check Opal 2.0 capability and skip if unsupported
  2. Open locking SP admin session
  3. Enable eight users with default passwords
  4. Assign locked ranges to each user with distinct offsets
  5. Close admin session after range setup
  6. Verify locked ranges reject host I/O
  7. Open eight user sessions and unlock assigned ranges
  8. Issue host write/read to confirm unlocked access per range

Suite: scripts/benchmark

folder: scripts/benchmark

file: scripts.benchmark.basic_cdm

CrystalDiskMark-style benchmark that replays a fixed set of sequential and random workloads to sanity-check SSD bandwidth and IOPS under typical client settings.

The suite preconditions the drive and then sweeps through large-block sequential transfers plus 4K random mixes so engineers can compare numbers across firmware revisions with minimal tuning.
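The fixed workload set can be summarized as a small matrix. The sketch below mirrors CrystalDiskMark's default profiles (SEQ1M Q8T1, SEQ1M Q1T1, RND4K Q32T1, RND4K Q1T1); the names and helper functions are illustrative, not the suite's actual configuration.

```python
# Illustrative workload matrix following CrystalDiskMark's default profiles.
# (label, block_size_bytes, queue_depth, threads, is_random)
WORKLOADS = [
    ("SEQ1M Q8T1",  1024 * 1024, 8,  1, False),
    ("SEQ1M Q1T1",  1024 * 1024, 1,  1, False),
    ("RND4K Q32T1", 4096,        32, 1, True),
    ("RND4K Q1T1",  4096,        1,  1, True),
]

def bandwidth_mbps(bytes_moved: int, seconds: float) -> float:
    """Convert raw byte counts into MB/s, the unit CDM-style tools report."""
    return bytes_moved / seconds / 1e6

def iops(io_count: int, seconds: float) -> float:
    """IO operations per second for the random 4K mixes."""
    return io_count / seconds
```

Running the same matrix against each firmware revision is what makes the numbers directly comparable.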

file: scripts.benchmark.idle_stress

Exercises drives in 24×7 standby scenarios with sporadic IO bursts to mimic office-style PCs, ensuring the SSD survives repeated transitions into/out of deep power states without latency spikes.

It measures entry/exit latency into PS3/PS4, watches power draw through a PAM, and validates data with background readbacks so firmware engineers can spot low-power bugs quickly.

file: scripts.benchmark.interval_read_disturb

Stresses fixed LBAs for long periods to expose read-disturb related errors while leaving idle gaps that mimic fleet workloads; JEDEC enterprise patterns are replayed to prepare the media before targeted reads.

SMART, temperature, and UECC counters are captured before/after each sweep, and a full-drive verify pass ensures retention issues surface early.

file: scripts.benchmark.ioworker_stress

Runs a long-haul randomized workload that mixes IO workers with admin commands to validate SSD stability, performance, and error recovery under shifting conditions.

Workloads are repeatedly started/stopped while telemetry (SMART, voltage, error codes) is monitored, and the entire drive is verified at the end to guarantee data integrity.

Assesses NVMe stability when PCIe link speeds hop between generations under a continuous read workload, capturing transition delays and timeouts.

By scripting repeated speed thrashing while IOps remain high, the benchmark reveals LTSSM bugs, PHY margin issues, or firmware throttling triggered by link changes.

file: scripts.benchmark.llm_loading

Simulates LLM-style dataset loading by repeatedly filling regions with mixed IO patterns and then streaming multi-GB “image” chunks to measure read and write latency consistency.

Parameter sweeps change image sizes and loop counts so teams can size caches, validate QoS under bursty ingestion, and ensure data stays intact after each heavy pass.

file: scripts.benchmark.longtime_readwrite

Consumes defined percentages of PE cycles via sustained sequential writes and then checks how read/write performance drifts over time.

file: scripts.benchmark.performance

Evaluates client SSD performance across sequential/random workloads while tracking latency consistency, thermal behavior, and power envelopes.

It automates fill phases, JEDEC traces, cache reclaim studies, restricted IOPS cases, and power-state characterizations so validation teams can build a single report covering QoS, efficiency, and recovery.

file: scripts.benchmark.por_sudden

Automates dirty power-cycle (SPOR) validation by logging controller readiness after abruptly removing power during active workloads. Requires a Quarch PAM for precise control and telemetry.

SPOR emulates unexpected power loss with no shutdown hint so firmware handling of inflight IO, command logs, and rescan latency can be measured repeatably.

Power-on timing checkpoints:

  • BAR Access Time: write BAR registers successfully
  • Admin Ready Time: admin queue commands accepted
  • First IO Completion Time: first read completes
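The three checkpoints above can be captured by polling readiness predicates against a monotonic clock. This is a sketch only: the four callables are hypothetical stand-ins for the suite's PAM and driver hooks, not PyNVMe3 functions.

```python
import time

def measure_power_on_timings(power_on, bar_writable, admin_ready, first_read_done,
                             timeout_s=30.0, poll_s=0.001):
    """Record elapsed time since power-on for each readiness stage.

    All four callables are hypothetical stand-ins: power_on() restores power;
    each predicate returns True once its stage (BAR writable, admin queue
    accepting commands, first read complete) is reached.
    """
    t0 = time.monotonic()
    power_on()
    timings = {}
    for name, ready in (("bar_access", bar_writable),
                        ("admin_ready", admin_ready),
                        ("first_io", first_read_done)):
        while not ready():
            if time.monotonic() - t0 > timeout_s:
                raise TimeoutError(f"{name} not reached in {timeout_s}s")
            time.sleep(poll_s)
        timings[name] = time.monotonic() - t0
    return timings
```

Because the stages are polled in order, the recorded times are monotonically non-decreasing, matching the checkpoint ordering above.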

file: scripts.benchmark.por_typical

Automates clean power-cycle (POR) testing with Quarch PAM control to track when the SSD exposes BAR registers, accepts admin commands, and completes first IOs.

POR simulates shutting down power after the host has notified the device, so it focuses on firmware readiness, cache flush completeness, and command-log scan times.

Power-on timing checkpoints:

  • BAR Access Time: control registers writable
  • Admin Ready Time: admin queue ready
  • First IO Completion Time: first read finishes

file: scripts.benchmark.read_retention

Fills the entire drive, stores CRCs for every LBA, and later compares them after long power-off intervals (e.g., multi-month retention studies).

Recommended workflow:

  1. Create /home/crc as root if it is missing.
  2. Run make test TESTS=...::test_prepare to fill the drive and log CRCs.
  3. Run test_verify once to archive initial logs.
  4. Collect artifacts from results.
  5. Power off DUT and store at room temperature for ~2 months.
  6. Reinsert DUT into same SUT.
  7. Re-run test_verify to compare CRC/SMART.
  8. Collect updated logs/diagrams.
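The workflow hinges on a per-LBA CRC log that survives the power-off interval. A minimal sketch using zlib.crc32 is shown below; the actual on-disk log format under /home/crc is not specified here, so the dictionary layout and 512-byte sector size are assumptions.

```python
import zlib

LBA_SIZE = 512  # assumed sector size

def crc_per_lba(data: bytes, start_lba: int = 0):
    """Compute a CRC32 for every LBA-sized chunk of a data buffer."""
    assert len(data) % LBA_SIZE == 0
    return {start_lba + i: zlib.crc32(data[i * LBA_SIZE:(i + 1) * LBA_SIZE])
            for i in range(len(data) // LBA_SIZE)}

def diff_crc(before: dict, after: dict):
    """Return the LBAs whose CRC changed across the retention interval."""
    return [lba for lba, crc in before.items() if after.get(lba) != crc]
```

test_verify then amounts to recomputing the CRCs after power-on and asserting that diff_crc returns an empty list.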

file: scripts.benchmark.replay_trace

Provides helpers to parse CSV trace descriptions (SLBA, NLB, opcode, timestamp) and replay them against an NVMe namespace at different capacity scales.

test_replay_trace replays the write/trim operations, performs a clean subsystem power cycle, re-enables HMB if present, and finally replays the read-only phase so that power-cycle sensitivity can be evaluated deterministically.
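A minimal parser sketch for the CSV schema named above is shown here; the column order (SLBA, NLB, opcode, timestamp) and the linear capacity-scaling rule are assumptions for illustration, not the suite's exact format.

```python
import csv
import io

def parse_trace(text: str, scale: float = 1.0):
    """Parse trace rows of (slba, nlb, opcode, timestamp) and scale SLBAs.

    The column order and the linear capacity scaling are illustrative
    assumptions; the suite's real CSV schema may differ.
    """
    ops = []
    for row in csv.reader(io.StringIO(text)):
        slba, nlb, opcode, ts = int(row[0]), int(row[1]), row[2], float(row[3])
        ops.append((int(slba * scale), nlb, opcode, ts))
    ops.sort(key=lambda op: op[3])  # replay in timestamp order
    return ops

trace = "2048,8,write,10.0\n0,8,read,5.0\n"
ops = parse_trace(trace, scale=0.5)
```

Scaling the SLBAs lets the same trace exercise namespaces of different capacities, which is what the "different capacity scales" wording above refers to.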

file: scripts.benchmark.reset_double

Injects resets during NVMe initialization to ensure the controller/driver stack recovers cleanly from mid-boot interruptions.

file: scripts.benchmark.saw_diagram

Stresses APST/APSM-driven transitions into PS3/PS4 by injecting IOs at varying idle delays and charting the resulting latency spikes (saw-tooth diagram).

file: scripts.benchmark.wear_leveling

Evaluates static and dynamic wear leveling by writing hot/cold regions with sequential and random IOs, then graphing IOPS trends and verifying data.

Hot sequential vs. hot random phases expose GC aggressiveness, while final power cycle + verify stages confirm that endurance algorithms did not corrupt data.

file: scripts.benchmark.write_latency

Measures long-tail latency while streaming sequential 128KB writes (QD=1) and plots throughput, per-IO latency, and temperature evolution.

Preparation helpers handle formatting, optional sanitization/prefill, HMB/FUA selection, and workload sizing so repeated runs stay consistent.

Pass criteria enforce <1% of IOs exceeding 8ms and 99th percentile latency below 8ms; diagrams and CSV logs assist regression tracking.
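The pass criteria stated above reduce to a few lines of arithmetic over the per-IO latency samples; the sketch below uses a nearest-rank 99th percentile and the 8 ms bound from the text (the function name is illustrative, not a PyNVMe3 API).

```python
def check_latency_criteria(latencies_ms, limit_ms=8.0, max_tail_fraction=0.01):
    """Apply the stated pass criteria: fewer than 1% of IOs over limit_ms,
    and 99th percentile latency below limit_ms."""
    lats = sorted(latencies_ms)
    over = sum(1 for lat in lats if lat > limit_ms)
    # nearest-rank 99th percentile
    p99 = lats[min(len(lats) - 1, int(0.99 * len(lats)))]
    return (over / len(lats)) < max_tail_fraction and p99 < limit_ms
```

Feeding the same sample list into each regression run keeps the pass/fail decision reproducible alongside the diagrams and CSV logs.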

Suite: scripts/management

folder: scripts/management

file: scripts/management/01_mi_inband_test

function: scripts/management/01_mi_inband_test.py::test_mi_vpd_write_and_read

Validate MI VPD write/read using mixed in-band and out-of-band access.

Reference

  1. Based on NVM Express Management Interface Revision 1.2c.

Steps

  1. Allocate buffers for in-band VPD write and readback
  2. Attempt VPD write and fall back to copying existing VPD if write not supported
  3. Read VPD in-band for baseline comparison
  4. Verify written data matches read data
  5. Read VPD via out-of-band path and compare contents
  6. Mix in-band and out-of-band reads to cross-check consistency
  7. Repeat out-of-band reads across size variants

function: scripts/management/01_mi_inband_test.py::test_mi_large_message_inband

Validate MI VPD read padding across varying payload lengths in-band.

Reference

  1. Based on NVM Express Management Interface Revision 1.2c.

Steps

  1. Issue MI VPD read using zeroed pattern to confirm completion status across sizes
  2. Repeat MI VPD read using ones pattern to confirm unused bytes remain unchanged

function: scripts/management/01_mi_inband_test.py::test_mi_inband_header

Validate in-band receive header handling for CIAP, IC, and NVMe context rules.

Reference

  1. Based on NVM Express Management Interface Revision 1.2c.

Steps

  1. Initialize completion status tracking for MI receive path validation
  2. Capture completion status and error location from MI receive completions
  3. Validate invalid command opcode handling in in-band command message
  4. Confirm invalid opcode handling for alternate opcode value
  5. Exercise CIAP=0 flow in an in-band command message
  6. Ensure CIAP=1 in an in-band command triggers error reporting
  7. Ensure IC=1 in an in-band command triggers error reporting
  8. Validate error handling in NVMe context during MI receive
  9. Issue VPD read with non-dword size to confirm padding behavior
  10. Issue VPD read from slot 1 for comparison
  11. Pad buffer tail to dword granularity before comparison
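The dword padding in the final step is a one-line operation; this sketch assumes a fixed fill byte, whereas the test compares against whatever pattern the response buffer was primed with.

```python
def pad_to_dword(data: bytes, fill: int = 0x00) -> bytes:
    """Pad a buffer tail to 4-byte (dword) granularity before comparison.

    The fill value is an assumption; the real comparison uses the pattern
    the receive buffer was primed with.
    """
    return data + bytes([fill]) * (-len(data) % 4)
```

Padding both the expected and received buffers the same way makes the byte-for-byte comparison valid even for non-dword transfer sizes.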

function: scripts/management/01_mi_inband_test.py::test_mi_inband_header_send

Exercise MI send header handling for CIAP, IC, and opcode validation.

Reference

  1. Based on NVM Express Management Interface Revision 1.2c.

Steps

  1. Initialize completion status tracking for MI send path validation
  2. Capture completion status and error location from MI send completions
  3. Issue baseline MI send to confirm successful completion
  4. Validate invalid command opcode handling in in-band command message
  5. Confirm invalid opcode handling for alternate opcode value
  6. Exercise CIAP=0 flow in an in-band command message
  7. Ensure CIAP=1 in an in-band command triggers error reporting
  8. Ensure IC=1 in an in-band command triggers error reporting
  9. Validate error handling in NVMe context during MI send
  10. Read original VPD data for later restoration
  11. Write VPD with non-dword aligned data pattern
  12. Read VPD from slot 0 after partial write
  13. Read VPD from slot 1 for comparison
  14. Restore original VPD contents
  15. Verify VPD data matches padded write data across slots

function: scripts/management/01_mi_inband_test.py::test_mi_inband_control_primitives_prohibited

Ensure in-band tunneling rejects Management Control Primitive opcodes.

Reference

  1. Based on NVM Express Management Interface Revision 1.2c.

Steps

  1. Track completion status returned by the MI tunneling transport
  2. Capture completion status and error location during tunneled operations
  3. Issue receive/send commands for Control Primitive opcodes 0-4 to confirm rejection

function: scripts/management/01_mi_inband_test.py::test_mi_reset

Validate MI reset command using subsystem power cycle callbacks.

Reference

  1. Based on NVM Express Management Interface Revision 1.1b.

Steps

  1. Define MI reset callback that triggers inline MI reset through PCIe transport
  2. Define placeholder completion callback required by the subsystem wrapper
  3. Skip if MI commands are not supported
  4. Create subsystem wrapper using MI callbacks
  5. Issue MI reset sequence and restore controller readiness

function: scripts/management/01_mi_inband_test.py::test_mi_invalid_operation

Verify MI send path reports invalid opcode errors for unsupported commands.

Reference

  1. Based on NVM Express Management Interface Revision 1.1b.

Steps

  1. Send an MI command with an invalid command opcode
  2. Confirm the completion status field is 0x3 (Invalid Command Opcode)

function: scripts/management/01_mi_inband_test.py::test_mi_configuration_get_health_status_change

Retrieve Health Status Change configuration via MI tunneling.

Reference

  1. Based on NVM Express Management Interface Revision 1.1b.

Steps

  1. Issue MI receive with CFG_ID 0x02 to fetch the Health Status Change configuration
  2. Confirm the Configuration Get command completes successfully

function: scripts/management/01_mi_inband_test.py::test_mi_configuration_set_health_status_change

Update Health Status Change configuration via MI tunneling.

Reference

  1. Based on NVM Express Management Interface Revision 1.1b.

Steps

  1. Send MI configuration set for CFG_ID 0x02 and expect successful completion
  2. Confirm the Health Status Change configuration update completes successfully

function: scripts/management/01_mi_inband_test.py::test_mi_read_nvme_mi_data_structure_nvm_subsystem_information

Read MI data structure for NVM subsystem information via receive path.

Reference

  1. Based on NVM Express Management Interface Revision 1.1b.

Steps

  1. Fetch NVM subsystem information and dump the first few dwords

function: scripts/management/01_mi_inband_test.py::test_mi_read_nvme_mi_data_structure_nvm_subsystem_information_wrong_command

Confirm MI data structure read fails when issued through mi_send.

Reference

  1. Based on NVM Express Management Interface Revision 1.1b.

Steps

  1. Attempt MI send for subsystem information and log resulting status

function: scripts/management/01_mi_inband_test.py::test_mi_read_nvme_mi_data_structure_port_information

Read MI data structure for port information via receive path.

Reference

  1. Based on NVM Express Management Interface Revision 1.1b.

Steps

  1. Query port information structure for port 0 and log contents

function: scripts/management/01_mi_inband_test.py::test_mi_read_nvme_mi_data_structure_port_information_wrong_port

Confirm MI port information read fails for unsupported port identifiers.

Reference

  1. Based on NVM Express Management Interface Revision 1.1b.

Steps

  1. Attempt port information read against invalid port value and observe status

function: scripts/management/01_mi_inband_test.py::test_mi_ep_buf_prohibited

Verify Management Endpoint Buffer commands are prohibited in MI tunneling.

Reference

  1. Based on NVM Express Management Interface Revision 1.1b.

Steps

  1. Allocate scratch buffer used for Management Endpoint buffer read/write commands
  2. Test Management Endpoint Buffer Read (0x0A)
  3. Test Management Endpoint Buffer Write (0x0A)
  4. Test Management Endpoint Buffer Read (0x0B)
  5. Test Management Endpoint Buffer Write (0x0B)

function: scripts/management/01_mi_inband_test.py::test_mi_configuration_prohibited

Validate prohibited configuration identifiers return expected tunneled status.

Reference

  1. Based on NVM Express Management Interface Revision 1.2c.

Steps

  1. Exercise Configuration Get/Set commands for prohibited CFG_ID=1
  2. Exercise Configuration Get/Set commands for prohibited CFG_ID=3
  3. Exercise Configuration Get/Set commands for valid CFG_ID=2

file: scripts/management/02_basic_mgmt_cmd_test

function: scripts/management/02_basic_mgmt_cmd_test.py::test_mi_spec_appendix_a_read_drive_status

Verify drive status, SMART warnings, and temperature via SMBus block read with PEC validation.

Reference

  1. Based on Management Interface Specification Revision 1.2c Appendix A.

Steps

  1. Read drive status page from offset 0 over SMBus into an 8 byte buffer
  2. Calculate PEC over the address and payload to validate integrity
  3. Log the retrieved status payload for traceability
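The PEC in step 2 is the standard SMBus CRC-8 (polynomial 0x07, initial value 0x00, no reflection) computed over the whole message, including the address/command bytes. A self-contained sketch:

```python
def smbus_pec(message: bytes) -> int:
    """SMBus Packet Error Code: CRC-8 with polynomial 0x07, init 0x00,
    computed over the entire message including address/command bytes."""
    crc = 0
    for byte in message:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc
```

The test recomputes this value over the address and payload and compares it against the PEC byte appended by the device.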

function: scripts/management/02_basic_mgmt_cmd_test.py::test_mi_spec_appendix_a_read_drive_static_data

Retrieve drive VID and serial number through an I2C static data block read for identification.

Reference

  1. Based on Management Interface Specification Revision 1.2c Appendix A.

Steps

  1. Capture 32 bytes of static identification data through an I2C block read
  2. Log the retrieved static payload for debugging

function: scripts/management/02_basic_mgmt_cmd_test.py::test_mi_spec_appendix_a_reset_arbitration_bit

Reset the arbitration condition by issuing a single byte I2C write to the management endpoint.

Reference

  1. Based on Management Interface Specification Revision 1.2c Appendix A.

Steps

  1. Send byte 0xff to clear the arbitration bit on the management endpoint

function: scripts/management/02_basic_mgmt_cmd_test.py::test_mi_spec_appendix_a_read_drive_status_across_i2c_block_boundaries

Sweep I2C bitrates to confirm block reads cross boundaries while preserving status payload integrity.

Reference

  1. Based on Management Interface Specification Revision 1.2c Appendix A.

Steps

  1. Log the drive serial number for comparison against I2C payloads
  2. Sweep the I2C bitrate and read 100 bytes to ensure seamless access across block boundaries
  3. Validate that the serial number prefix is returned intact

file: scripts/management/03_mi_cmd_set_test

function: scripts/management/03_mi_cmd_set_test.py::test_mi_no_unsolicited_response

Verify that the Management Endpoint never sends unsolicited responses by calling receive() without issuing a command first.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Receive a response without sending a prior command request

function: scripts/management/03_mi_cmd_set_test.py::test_mi_ep_buf_different_length

Exercise Management Endpoint buffer read and write paths at varied data lengths to confirm data integrity.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Read the SMBus port data structure to check for buffer support and size
  2. Write random data into the Management Endpoint buffer
  3. Read back the buffer and verify data integrity

function: scripts/management/03_mi_cmd_set_test.py::test_mi_read_mi_data_structure

Validate MI data structure reads by querying subsystem, port, and parameter records through out-of-band access.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Read the NVM Subsystem Information structure (DTYP=00h)
  2. Read the SMBus port information structure (DTYP=01h) to examine parameters
  3. Enumerate and log the port data for each reported port
  4. Validate Maximum MCTP Transmission Unit Size
  5. Validate Management Endpoint Buffer Size
  6. Validate VPD Address and Maximum Frequency
  7. Validate MI Endpoint Address and Maximum Frequency

function: scripts/management/03_mi_cmd_set_test.py::test_mi_pcie_port_specific_data

Correlate MI-reported PCIe port data against PCIe configuration space registers to confirm accuracy.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Create a helper to locate the requested PCIe port entry within the MI data structures
  2. Locate a PCIe port entry in the MI data structures
  3. Ensure the capability exists on the PCIe side
  4. Parse MI-reported PCIe port fields
  5. Compare Maximum Payload Size (encoded per PCIe Device Control register bits [7:5])
  6. Compare the supported link speed vector
  7. Derive it from the Link Capabilities maximum speed when Link Capabilities 2 is not populated
  8. Compare the current link speed against Link Status register bits [3:0]
  9. Compare the maximum link width against Link Capabilities bits [9:4]
  10. Compare the negotiated link width against Link Status bits [9:4]
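The register bit-fields cited in the steps above decode with simple shifts and masks, following the PCIe encodings: Maximum Payload Size from Device Control bits [7:5] (encoding n means 128 << n bytes), link speed generation from Link Status bits [3:0], and link width from bits [9:4].

```python
def decode_mps(devctl: int) -> int:
    """Maximum Payload Size in bytes from Device Control bits [7:5];
    encoding n means 128 << n bytes."""
    return 128 << ((devctl >> 5) & 0x7)

def decode_link_speed(linksta: int) -> int:
    """Current link speed generation from Link Status bits [3:0]
    (1 = 2.5 GT/s, 2 = 5 GT/s, 3 = 8 GT/s, ...)."""
    return linksta & 0xF

def decode_link_width(reg: int) -> int:
    """Link width in lanes from bits [9:4] of Link Status or
    Link Capabilities."""
    return (reg >> 4) & 0x3F
```

Decoding both the MI-reported fields and the raw config-space registers through the same helpers makes the cross-check a straightforward equality test.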

function: scripts/management/03_mi_cmd_set_test.py::test_mi_read_mi_data_structure_lockdown

Verify the lockdown feature blocks MI data structure reads when commanded via admin lockdown operations.

Reference

  1. NVMe Base Specification Revision 2.2

Steps

  1. Issue a baseline MI data structure read to confirm access prior to lockdown
  2. Prohibit the command via the lockdown admin command
  3. Confirm the MI data structure read is blocked while lockdown is active
  4. Allow the command again through the lockdown admin interface
  5. Prohibit the command through the admin queue that controls a different scope
  6. Reissue the MI data structure read out of band and confirm access still works
  7. Restore the default lockdown settings to reenable the command fully

function: scripts/management/03_mi_cmd_set_test.py::test_mi_read_mi_data_structure_controller

Cross-check controller list and controller information MI data structures with identify controller data.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Read the controller list and compare the CID with the Identify data
  2. Read the controller information structure and confirm the PCI vendor/device IDs

function: scripts/management/03_mi_cmd_set_test.py::test_mi_read_mi_data_structure_command_support

Inspect the command support data structures to list NVMe-MI command opcodes per message type.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Read the target data structure and skip tests when unsupported
  2. Enumerate the reported commands and log their opcodes

function: scripts/management/03_mi_cmd_set_test.py::test_mi_nvm_subsystem_health_status_poll

Exercise the NVM Subsystem Health Status Poll command to confirm field values fall within expected limits.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Send a baseline health status poll with Clear Status cleared
  2. Check the NVM Subsystem Status (NSS) flags for a healthy state
  3. Record the smart warning field for traceability
  4. Confirm the composite temperature stays below the safety threshold
  5. Confirm the percentage drive life used is within the expected range
  6. Log the composite controller status value for reference

function: scripts/management/03_mi_cmd_set_test.py::test_mi_nvm_subsystem_health_status_poll_check

Compare MI health status poll information with the NVMe SMART log to cross-validate telemetry.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Send a health status poll and capture the MI-reported composite temperature
  2. Retrieve the SMART log from the NVMe controller for temperature comparison
  3. Validate that the temperature readings do not diverge beyond the tolerance
  4. Compare the percentage used values reported through MI and SMART
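The two temperature sources use different encodings: the MI health status poll reports composite temperature as a signed byte in degrees Celsius, while the SMART log reports it in Kelvin. A comparison sketch; the 5 degC tolerance is an assumption, as the suite's actual bound is not stated here.

```python
def temps_agree(mi_ctemp: int, smart_temp_k: int, tolerance_c: int = 5) -> bool:
    """Compare the MI composite temperature (signed Celsius byte) against the
    SMART log composite temperature (Kelvin). The tolerance is an assumed
    value, not the suite's actual bound."""
    if mi_ctemp > 127:           # reinterpret the raw MI byte as signed
        mi_ctemp -= 256
    smart_c = smart_temp_k - 273  # Kelvin to Celsius
    return abs(mi_ctemp - smart_c) <= tolerance_c
```

Normalizing both readings to Celsius first avoids a spurious ~273-degree divergence when the raw fields are compared directly.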

function: scripts/management/03_mi_cmd_set_test.py::test_mi_nvm_subsystem_health_status_poll_clear

Verify that Clear Status operates independently from subsystem resets and reflects in MI polling.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Check the Identify Controller data to ensure subsystem reset is supported
  2. Clear the health status and confirm CCS is zero
  3. Issue an NVMe subsystem reset and allow the transition to complete
  4. Poll the health status without clearing after the reset to capture the event bit
  5. Validate that the “NVM Subsystem Reset Occurred” bit becomes set
  6. Clear the status again to remove the latched bit
  7. Re-poll the health status to confirm the reset bit cleared
  8. Validate that the “NVM Subsystem Reset Occurred” bit is now cleared

function: scripts/management/03_mi_cmd_set_test.py::test_mi_configuration_set_health_status_independent

Confirm that MI health status tracking for in-band and out-of-band paths operates independently.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Skip the test when MI send/receive commands are not available
  2. Retrieve the subsystem health status over the in-band path to prime buffers
  3. Read CCS via the in-band path and ensure it reports a non-zero value
  4. Capture CCS via out-of-band access for comparison
  5. Clear the “ready” bit via Configuration Set and re-read CCS
  6. Verify that the in-band CCS view remains unchanged after the OOB clear
  7. Clear the status via the out-of-band interface only
  8. Re-read CCS in-band to ensure it has not yet been cleared
  9. Confirm the out-of-band CCS view cleared as expected
  10. Trigger an in-band reset and ensure CCS ultimately clears

function: scripts/management/03_mi_cmd_set_test.py::test_mi_configuration_set_health_status_change

Drive configuration set operations that alter health status bits and observe CC/CSTS reflections.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Clear the CC register through PCIe config space and confirm it takes effect
  2. Clear the health status and validate CSTS and CCS report ready=0
  3. Set CC.EN and verify CSTS register indicates ready
  4. Set shutdown and verify CSTS register and CCS
  5. Clear “ready” in CCS and verify
  6. Validate final CCS state is cleared

function: scripts/management/03_mi_cmd_set_test.py::test_mi_nvm_subsystem_health_status_poll_temperature

Validate that the composite temperature reported via MI tracks workload-induced heating.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Capture the idle composite temperature using the health status poll
  2. Stress the namespace with an IO worker to elevate device temperature
  3. Read the composite temperature again to confirm it increases
  4. Verify the post-workload temperature is not lower than the idle value

function: scripts/management/03_mi_cmd_set_test.py::test_mi_controller_health_status_poll

Exercise controller health status polling for various filters and selectors to confirm responses.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Report only changed health status without selecting a specific controller
  2. Report changed health status for a specific controller selection field
  3. Report all health statuses across all controllers
  4. Report changed health status for a multi-controller selection
  5. Report only a specific error condition by programming error_select
  6. Report all health statuses (all=1), regardless of changes
  7. Validate critical response data fields

function: scripts/management/03_mi_cmd_set_test.py::test_mi_controller_health_status_poll_with_nvme

Correlate MI controller health status poll data with NVMe Identify and SMART log fields for consistency.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Perform a general health status poll with ALL=1
  2. Validate response data structure
  3. Extract and validate the Controller ID reported in the MI payload
  4. Compare the MI health status data with the NVMe SMART log page
  5. Perform a health status poll targeting the specific controller selection
  6. Validate that controller selection fields still apply when ALL=1
  7. Use error selection filtering to ensure consistent error reporting
  8. Limit the number of returned entries by setting MAXRENT to one
  9. Confirm that specifying 256 entries is rejected as an Invalid Parameter
  10. Confirm the maximum supported entry count is 255
  11. Report only controller status changes (error_select=1)
  12. Generate temperature change by running IO workload

function: scripts/management/03_mi_cmd_set_test.py::test_mi_controller_health_status_poll_clear

Validate Clear Changed Flags behavior and compare OOB results to in-band tunneling.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Perform a health status poll with ALL=1 (report all health data)
  2. Poll again with Clear Changed Flags set to clear the change bits
  3. Perform a subsequent poll with ALL=1 to confirm the cleared flag state persists
  4. Report only controller status changes to ensure no entries are returned
  5. Retrieve the health status via in-band tunneling to demonstrate independent operation

function: scripts/management/03_mi_cmd_set_test.py::test_mi_controller_health_status_poll_filter

Trigger controller health status change flags via temperature events and validate filtered polling.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Clear the controller health status changed flags to start from a clean slate
  2. Poll the controller health status using a filter to confirm no entries are reported initially
  3. Assert that res[7] is zero, confirming no unexpected response entries
  4. Capture the original temperature threshold configuration
  5. Ensure asynchronous temperature events are enabled
  6. Retrieve the current temperature from the SMART log to use as the trigger point
  7. Drop the over temperature threshold below the current temperature to force an event
  8. Restore the original threshold configuration after the event triggers
  9. Poll the controller health status after triggering the event
  10. Clear the controller health status changed flags again to clean up

function: scripts/management/03_mi_cmd_set_test.py::test_mi_ses_prohibited

Confirm SES receive/send opcodes are rejected with Invalid Command Opcode status.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Attempt to send an SES Receive command (0x08) and expect Invalid Command Opcode
  2. Attempt to send an SES Send command (0x09) and expect Invalid Command Opcode

function: scripts/management/03_mi_cmd_set_test.py::test_mi_configuration_get

Exercise the Configuration Get command to confirm default values across identifiers.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Get Health Status Change configuration
  2. Get current SMBus/I2C Frequency configuration
  3. Get MCTP TU size configuration

function: scripts/management/03_mi_cmd_set_test.py::test_mi_configuration_set

Program several configuration identifiers and observe both valid updates and reserved field errors.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Verify a reserved configuration identifier raises an invalid parameter response
  2. Set Health Status Change masks with increasing coverage
  3. Set SMBus/I2C Frequency to 400 kHz
  4. Read the SMBus/I2C Frequency configuration
  5. Set SMBus/I2C Frequency back to 100 kHz
  6. Confirm the SMBus/I2C Frequency returned to 100 kHz

function: scripts/management/03_mi_cmd_set_test.py::test_mi_configuration_set_frequency_outstanding

Attempt to change SMBus/I2C Frequency while other MI command messages are outstanding.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Attempt to set SMBus/I2C Frequency (CFG_ID = 1) while the slot is idle
  2. Issue a VPD read command on slot 1 and receive the response
  3. Issue another VPD read command on slot 1 without waiting for the response
  4. Attempt to set SMBus/I2C Frequency again and verify the response
  5. Recover the default frequency

function: scripts/management/03_mi_cmd_set_test.py::test_mi_configuration_set_frequency_invalid

Drive Configuration Set for SMBus/I2C Frequency with unsupported values to confirm error handling.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Attempt to set an unsupported SMBus/I2C frequency (e.g., 0x05, which is not defined)
  2. Attempt the set with an invalid Port Identifier (a port that is not an SMBus/I2C port)
  3. Attempt to set a reserved frequency (e.g., 0x0, which is not valid)
  4. Attempt to set a frequency that exceeds the defined limits (e.g., 0x10, which is reserved)
  5. Recover the default frequency to ensure no residual impact

function: scripts/management/03_mi_cmd_set_test.py::test_mi_configuration_set_frequency

Cycle through each supported SMBus/I2C frequency and verify it applies cleanly.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Capture the current SMBus/I2C Frequency
  2. Configure the requested frequency value
  3. Verify the Configuration Set succeeded
  4. Exercise the bus by issuing a VPD read
  5. Restore the original SMBus/I2C frequency

function: scripts/management/03_mi_cmd_set_test.py::test_mi_configuration_set_tu_size

Configure various MCTP Transmission Unit sizes and confirm they remain within supported limits.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Read the device-supported maximum TU size
  2. Skip the test if the requested TU exceeds the maximum supported size
  3. Set the MCTP Transmission Unit size
  4. Verify that the TU size was set correctly
  5. Read VPD data to validate device response

function: scripts/management/03_mi_cmd_set_test.py::test_mi_configuration_set_tu_size_outstanding

Modify the TU size while other commands are pending to ensure MI command sequencing is honored.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Set TU size to 100 bytes
  2. Issue a VPD read command on slot 1 and receive the response
  3. Issue another VPD read command on slot 1 but do not wait for response
  4. Attempt to set TU size to 128 bytes while VPD read is outstanding
  5. Receive the VPD response
  6. Recover the default TU size (64 bytes)

function: scripts/management/03_mi_cmd_set_test.py::test_mi_configuration_set_tu_size_invalid

Feed unsupported TU sizes to Configuration Set to confirm invalid parameter handling.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Set an unsupported TU size (e.g., 20 bytes, below the valid range)
  2. Set an unsupported TU size (e.g., 300 bytes, above the valid range)
  3. Set a reserved TU size (e.g., 0 bytes, which is invalid)
  4. Recover the default TU size (64 bytes) after invalid cases

function: scripts/management/03_mi_cmd_set_test.py::test_mi_configuration_set_invalid_port_id

Ensure Configuration Set rejects invalid or reserved port identifiers with the proper error code.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Use a Port Identifier that is not valid (e.g., a non-existent port)
  2. Use a reserved Port Identifier (e.g., 0xFF which is not allowed for valid configurations)
  3. Recover with a valid Port Identifier and TU size

function: scripts/management/03_mi_cmd_set_test.py::test_mi_message_header

Inspect NVMe-MI message headers for length, message type, and slot-specific bits.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Read the VPD (Vital Product Data) from the MI interface in the specified slot
  2. Check the message length
  3. Verify Message Type is 0x4
  4. Verify Integrity Check (IC) bit is set to 1
  5. Verify ROR (Response/Request) bit is set correctly
  6. Verify NVMe-MI Message Type indicates an NVMe-MI Command
  7. Verify the Command Slot Identifier
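The header checks in steps 2 through 7 can be sketched as a small parser; the bit positions below follow the NVMe-MI Message Header layout as understood here and should be verified against the specification figures.

```python
def parse_nmh(byte0: int, byte1: int) -> dict:
    # Assumed field layout: byte 0 carries the MCTP Message Type in
    # bits 6:0 and the Integrity Check (IC) bit in bit 7; byte 1 carries
    # the Command Slot Identifier in bit 0, the NVMe-MI Message Type
    # (NMIMT) in bits 6:3, and ROR (Response/Request) in bit 7.
    return {
        "msg_type": byte0 & 0x7F,     # 0x4 = NVMe-MI over MCTP
        "ic": (byte0 >> 7) & 1,       # 1 = MIC appended to the message
        "csi": byte1 & 1,             # command slot 0 or 1
        "nmimt": (byte1 >> 3) & 0xF,  # 1h indicates an NVMe-MI Command
        "ror": (byte1 >> 7) & 1,      # 1 = response message
    }
```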

function: scripts/management/03_mi_cmd_set_test.py::test_mi_message_header_send_meb

Verify commands that do not support the MEB bit return an invalid parameter when it is set.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Read SMBus Port Information to determine if Management Endpoint Buffer is supported
  2. Verify the Read Management Endpoint Buffer command does not advertise MEB support
  3. Issue a Read Management Endpoint Buffer command without the MEB bit set
  4. Issue a Read Management Endpoint Buffer command with the MEB bit set
  5. Verify the Management Endpoint responds with an Invalid Parameter Error Response with the PEL field indicating the MEB bit

function: scripts/management/03_mi_cmd_set_test.py::test_mi_vpd_read

Sweep VPD read lengths up to the TU limit to validate response sizing and integrity.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Read the SMBus port data structure to determine the TU limit before issuing reads
  2. Send VPD Read commands with different lengths

function: scripts/management/03_mi_cmd_set_test.py::test_mi_vpd_read_length

Probe VPD read offsets and lengths that cross the VPD boundary to confirm error handling.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. A VPD Read command with length 0 and no data is valid
  2. Valid VPD read request within size limits
  3. VPD read request with Data Length exceeding VPD size (invalid)
  4. VPD read request with Data Offset + Data Length exceeding VPD size (invalid)
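The boundary rule behind these four cases can be sketched as a simple predicate (the function name is illustrative):

```python
def vpd_read_is_valid(offset: int, length: int, vpd_size: int) -> bool:
    """Sketch of the VPD Read boundary rule exercised above: a
    zero-length read is valid, and Data Offset + Data Length must not
    exceed the VPD size."""
    if offset < 0 or length < 0:
        return False
    return offset + length <= vpd_size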

function: scripts/management/03_mi_cmd_set_test.py::test_mi_ep_buf_invalid_length

Submit Management Endpoint Buffer writes with mismatched data lengths to observe error returns.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Read SMBus Port Information to determine if Management Endpoint Buffer is supported
  2. Initialize buffer with specified length
  3. Test: Write buffer with length + 1 (mismatch with Data Length)
  4. Test: Write buffer with length – 1 (mismatch with Data Length)
  5. Test: Write buffer with correct length

function: scripts/management/03_mi_cmd_set_test.py::test_mi_ep_buf_offset

Exercise Management Endpoint Buffer writes across different offsets to ensure address handling is correct.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Read SMBus Port Information to determine if Management Endpoint Buffer is supported
  2. Initialize buffer with specified length
  3. Test valid buffer writes at various offsets

function: scripts/management/03_mi_cmd_set_test.py::test_mi_ep_buf_write_read

Verify data integrity across Management Endpoint Buffer write and read sequences at large transfer sizes.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Read SMBus Port Information to determine if Management Endpoint Buffer is supported
  2. Write data to Management Endpoint Buffer
  3. Read back the data from Management Endpoint Buffer
  4. Validate the written data matches the read data
  5. Test invalid buffer length > 4224 bytes
  6. Read back the data from Management Endpoint Buffer

function: scripts/management/03_mi_cmd_set_test.py::test_mi_ep_buf_read_after_sanitize

Confirm sanitize operations clear the Management Endpoint Buffer and trigger appropriate MI status values.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Read SMBus Port Information to determine if Management Endpoint Buffer is supported
  2. Write data to Management Endpoint Buffer
  3. Read back the data from Management Endpoint Buffer
  4. Validate the written data matches the read data
  5. Start a sanitize operation
  6. Check the sanitize status in the log page
  7. Verify the endpoint responds with a Response Message Status of Management Endpoint Buffer Cleared Due to Sanitize

function: scripts/management/03_mi_cmd_set_test.py::test_mi_ep_buf_invalid_offset

Check Management Endpoint Buffer reads and writes at invalid offsets and length combinations.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Retrieve Management Endpoint Buffer size
  2. Valid offset tests
  3. Invalid offset tests
  4. Invalid DOFST + DLEN combination tests
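The two failure classes probed here (payload size disagreeing with the declared Data Length, and a DOFST + DLEN range running past the buffer) can be sketched as one check; the helper name is illustrative:

```python
def ep_buf_write_ok(dofst: int, dlen: int, payload: bytes,
                    buf_size: int) -> bool:
    """Sketch of Management Endpoint Buffer write validation: the
    payload must match the declared Data Length (DLEN), and the
    DOFST + DLEN range must stay inside the buffer."""
    if len(payload) != dlen:
        return False
    return 0 <= dofst and dofst + dlen <= buf_size
```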

function: scripts/management/03_mi_cmd_set_test.py::test_mi_command_servicing_across_controller_reset

Ensure controller-level resets do not interrupt outstanding NVMe-MI command servicing.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Skip the test if the baseline VPD read command is not supported by the endpoint
  2. Issue another MI command and reset the controller before fetching the response.

function: scripts/management/03_mi_cmd_set_test.py::test_mi_reset

Trigger an MI reset and confirm configuration registers and VPD access return to defaults.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Change SMBus/I2C Frequency to 3 and MCTP TU size to 100 bytes
  2. Verify updated SMBus/I2C Frequency and MCTP TU size
  3. Perform a VPD Read and save the response
  4. Define a reset function and initialize the subsystem
  5. Reset the MI interface only
  6. Verify SMBus/I2C Frequency and MCTP TU size are reset to defaults
  7. Verify VPD Read is functional after reset
  8. Admin Command Set is still available even though the NVMe Controller may be disabled or held in reset
  9. Perform NVMe reset and verify VPD Read
  10. Send admin command and MI command at the same time
  11. Ensure VPD response before and after NVMe reset are identical

function: scripts/management/03_mi_cmd_set_test.py::test_mi_shutdown_command

Exercise normal and abrupt MI shutdown types and verify subsystem behavior.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Shut down the controller with each shutdown type
  2. Test a reserved Shutdown Type
  3. Define a reset function and initialize the subsystem
  4. A Management Controller should shut down all NVMe Controllers in an NVM Subsystem prior to resetting the NVM Subsystem
  5. Get Identify data again after the reset

function: scripts/management/03_mi_cmd_set_test.py::test_mi_shutdown_abrupt_completion

Verify abrupt shutdown drives CSTS bits and increments dirty shutdown counter.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. Define helper accessors used to observe CSTS and the dirty shutdown counter
  2. Ensure controller is ready and capture baseline dirty shutdown counter
  3. Issue abrupt shutdown via MI (shutdown_type=0x01)
  4. Wait until CSTS.SHST = 10b and CSTS.ST = 1 (shutdown complete per spec)
  5. Recover controller to ready state for subsequent tests
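The wait condition in step 4 can be sketched as a CSTS decode; the bit positions assumed here (SHST at bits 3:2, ST at bit 6) follow the NVMe Base Specification CSTS layout and should be confirmed against it.

```python
def shutdown_complete(csts: int) -> bool:
    """Sketch of the step-4 wait condition: CSTS.SHST (assumed bits
    3:2) reads 10b for shutdown complete, and CSTS.ST (assumed bit 6)
    reads 1 for an abrupt shutdown type."""
    shst = (csts >> 2) & 0x3
    st = (csts >> 6) & 0x1
    return shst == 0b10 and st == 1
```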

function: scripts/management/03_mi_cmd_set_test.py::test_mi_vpd_read_offset_len

Read overlapping VPD ranges at different offsets to confirm data consistency.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Read 100 bytes from VPD at offset 0 and validate
  2. Read 51 bytes from VPD starting at offset 22
  3. Validate the data buffer against the overlapping range

function: scripts/management/03_mi_cmd_set_test.py::test_mi_vpd_read_while_sanitize

Demonstrate that VPD reads continue to operate while sanitize is in progress.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Skip when the device does not advertise sanitize capabilities
  2. Start a sanitize operation
  3. Verify VPD read is allowed during a sanitize operation
  4. Check the sanitize status in the log page

function: scripts/management/03_mi_cmd_set_test.py::test_mi_vpd_read_while_format

Confirm VPD reads remain functional while a Format NVM command executes.

Reference

  1. NVMe-MI Specification Revision 1.2c

Steps

  1. Capture baseline VPD data prior to issuing the Format command
  2. Start the Format command
  3. Verify VPD read is allowed during a format command
  4. Wait for the Format command to complete

file: scripts/management/04_mi_admin_cmd_test

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_admin_prohibited

Ensure the MI endpoint flags prohibited NVMe Admin opcodes with an Invalid Command warning.

Reference

  1. NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Issue an MI Admin command with a forbidden opcode and expect an Invalid Command warning.

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_admin_get_log_page

Compare SMART log retrieval through MI Get Log Page against NVMe Admin results.

Reference

  1. NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Retrieve the SMART log through the MI transport.
  2. Retrieve the SMART log via the NVMe Admin Get Log Page command.
  3. Compare temperatures and buffers returned by both transports.
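The temperature comparison in step 3 relies on the SMART / Health log layout, where bytes 1:3 hold the Composite Temperature in Kelvin (little-endian); a minimal extractor:

```python
import struct

def composite_temp_celsius(smart_log: bytes) -> int:
    """Extract Composite Temperature from a SMART / Health log page.
    Bytes 1:3 hold the value in Kelvin, little-endian."""
    kelvin = struct.unpack_from("<H", smart_log, 1)[0]
    return kelvin - 273
```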

function: scripts/management/04_mi_admin_cmd_test.py::test_inband_admin_tunneling_prohibited

Verify that NVMe Admin commands cannot be tunneled through the in-band MI mechanism.

Reference

  1. NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Skip the test if the platform lacks MI Send or Receive in-band tunneling support.
  2. Use MI out-of-band transport to capture a SMART log slice for comparison.
  3. Prepare the MI message metadata used for tunneling NVMe Get Log Page.
  4. Track the tunneled completion status provided by the callback.
  5. Capture the MI completion status and error location from the callback path.
  6. Attempt to issue an Admin command via in-band MI tunneling, which is prohibited.
  7. MI message type 2 indicates NVMe Admin Command tunneling.
  8. Validate that the MI transport rejected the tunneled Admin command.

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_admin_get_log_page_without_controller

Confirm MI Get Log Page behavior without controller state and when applying offsets.

Reference

  1. NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Execute MI Get Log Page without a controller context to capture baseline data.
  2. Fetch the same log with an explicit offset applied at the MI layer.
  3. Compare the buffered data between offset and non-offset reads.
  4. Request the log page with an invalid offset to confirm error handling.
  5. Validate equivalent responses when combining SQE offset and MI request offset fields.

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_admin_get_log_page_error_status

Exercise MI Get Log Page success paths and invalid parameter handling versus NVMe Admin.

Reference

  1. NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Define baseline log page IDs and windowing parameters for validation.
  2. Compare MI and NVMe Admin responses for each mandatory log page.
  3. Verify MI error reporting for invalid size and offset combinations.
  4. Confirm payload size for a valid log page ID with regenerated data.
  5. Request an unsupported log page ID to validate NVMe status reporting.

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_admin_get_log_page_requires_rae

Demonstrate that tunneling Get Log Page with RAE cleared produces an Invalid Field status.

Reference

  1. NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Issue a tunneled Get Log Page with RAE cleared to zero.
  2. Check that the NVMe completion status reports Invalid Field in Command.
  3. Ensure aborted Get Log Page operations return no payload data.

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_admin_aer_temperature

Validate MI-observed temperature events generated via NVMe Asynchronous Event Requests.

Reference

  1. NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Issue an Asynchronous Event Request (AER) command
  2. Save original Over and Under Temperature Threshold settings
  3. Enable all asynchronous events
  4. Allocate buffer for SMART log and get the current temperature
  5. Set Over Temperature Threshold to trigger AER
  6. Send MI Get Log Page command to verify the event
  7. Restore original Over Temperature Threshold
  8. Set Under Temperature Threshold to trigger AER
  9. Read SMART log to validate the critical warning
  10. Restore original Under Temperature Threshold

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_admin_identify

Confirm MI Identify responses match NVMe Admin Identify data for partial and full reads.

Reference

  1. NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Retrieve partial Identify data through MI.
  2. Fetch the same Identify structure via NVMe Admin for comparison.
  3. Retrieve the full Identify data through MI.
  4. Compare entire Identify buffers between MI and NVMe Admin transports.

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_admin_ep_buf_identify

Verify MI Identify functionality when routed through the Management Endpoint Buffer.

Reference

  1. NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Check whether the Identify command (opcode 0x0206) supports the Management Endpoint Buffer
  2. Skip the test if Identify does not support the endpoint buffer
  3. Read the Management Endpoint Buffer size from the MI data structure.
  4. Retrieve partial Identify data via MI without using the endpoint buffer.
  5. Compare the MI data against a direct NVMe Admin Identify command.
  6. Exercise endpoint buffer write and read primitives.
  7. Send MI Identify while directing data into the endpoint buffer.
  8. Read back the endpoint buffer contents.
  9. Send an NVMe Identify command in-band for comparison.
  10. Compare the endpoint buffer contents against the NVMe Identify data.

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_admin_timestamp

Use MI Set Features to program the timestamp and poll it while changing power states.

Reference

  1. NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Program the controller timestamp via MI while entering the target power state.
  2. Drive the controller into the requested power state and verify support.
  3. Poll the timestamp through MI and ensure monotonic growth at the expected rate.
  4. Restore the controller to power state 0 after polling completes.
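The monotonic-growth check in step 3 depends on the Timestamp feature layout, in which the first six bytes carry milliseconds since the epoch as a 48-bit little-endian value; a minimal pack/unpack sketch (helper names are illustrative):

```python
import struct

def pack_timestamp_ms(ms: int) -> bytes:
    # Timestamp feature data: bytes 0:6 = milliseconds since the epoch
    # (48-bit little-endian); byte 6 holds attribute flags and byte 7
    # is reserved, both left zero in this sketch.
    return struct.pack("<Q", ms)[:6] + b"\x00\x00"

def unpack_timestamp_ms(data: bytes) -> int:
    return int.from_bytes(data[:6], "little")
```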

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_admin_command_servicing_across_controller_reset

Ensure MI-tunneled Admin commands continue to execute across controller-level resets.

Reference

  1. NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Capture a baseline MI Get Log Page response prior to reset.
  2. Disable the controller to force a controller-level reset.
  3. Issue the same MI Get Log Page request following the reset.
  4. Verify the post-reset data matches the baseline response.

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_admin_identify_diff_slot

Confirm MI Identify responses are consistent across slots and match NVMe Admin data.

Reference

  1. NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Send an NVMe Admin Identify command and capture the complete data structure.
  2. Send an MI Identify command using slot 0 and compare against the Admin data.
  3. Repeat the MI Identify command using slot 1 and compare the buffer again.

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_admin_invalid_input_data_size

Check MI rejects Admin requests whose payload exceeds the declared data length.

Reference

  1. NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Build a payload that is larger than the advertised data transfer size.
  2. Attempt the transfer and expect the MI Invalid Input Data Size warning.

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_fw_download

Transfer firmware chunks through MI Download commands while sweeping link frequency.

Reference

  1. NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Configure the requested link frequency for the MI port.
  2. Read back the configuration to ensure the new frequency is active.
  3. Slice the firmware image into chunks that match the MI transfer unit.
  4. Stream each chunk via MI NVMe Firmware Download and retry on transient errors.
  5. Record the total transfer time and ensure no retries were needed.

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_logpage_lockdown

Read the MI lockdown log page and verify the payload is accessible via Get Log Page.

Reference

  1. NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Fetch the lockdown log page and log the raw payload for review.

function: scripts/management/04_mi_admin_cmd_test.py::test_namespace_delete_all

Remove every namespace on the controller via the MI Delete Namespace command.

Reference

  1. NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Issue the MI Delete Namespace command targeting all namespaces.

function: scripts/management/04_mi_admin_cmd_test.py::test_create_single_ns

Provision a single 4 KiB namespace for subsequent tests via NVMe Admin commands.

Reference

  1. NVM Express Base Specification Revision 1.4b.

Steps

  1. Invoke the shared helper to create one namespace with a 4 KiB LBA format.

file: scripts/management/05_mi_control_primitive_test

function: scripts/management/05_mi_control_primitive_test.py::test_mi_slot_pause_flag

Verify dual-slot states stay independent and Pause flag behaves globally.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. verify both slots start idle
  2. send only the first admin-packet on slot0 so it stays in Receive
  3. verify slot0 active and slot1 idle
  4. send a complete MI command on slot1
  5. verify slot0 still receiving and slot1 processing
  6. pause command servicing and verify global pause flag
  7. resume command servicing and verify pause flag cleared
  8. verify pause flag cleared on both slots

function: scripts/management/05_mi_control_primitive_test.py::test_mi_cmnics_on_new_cmd_when_non_idle

Verify CMNICS is set when a new Command Message starts on a non-Idle slot.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. make slot0 non-idle by sending only the first packet of a firmware download
  2. start another command on the same non-idle slot0 (new SOM while not Idle)
  3. verify CMNICS is set in Get State (Figure 43 error/state flags)
  4. verify the CMNICS bit (mask 0x0008) is set in the returned state data
  5. verify slot remains active servicing (either old continues or new accepted)
  6. send remaining packets and collect the response

function: scripts/management/05_mi_control_primitive_test.py::test_mi_idle_receive_and_replay_transmit

Verify Idle→Receive on first packet and Replay→Transmit when response available.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. attempt to replay with no prior response available
  2. issue a short-response command to generate a single-packet response
  3. replay the last response and drain it to avoid mixing packets
  4. verify slot returns to idle after replay completes

function: scripts/management/05_mi_control_primitive_test.py::test_mi_receive_state_transitions

Verify Receive→Idle and Receive→Process transitions.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. enter Receive by only sending the first packet
  2. abort while receiving → Idle
  3. trigger assembly error by skipping the first packet
  4. trigger MIC error by injecting bad CRC32
  5. check BMICE flag (bit4 indicates integrity error)
  6. try receiving to confirm the bad message was silently discarded
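The MIC error in step 4 relies on the integrity check being a CRC-32C over the message, so flipping any payload bit changes the MIC; a bitwise sketch (assuming CRC-32C is the MIC algorithm, as specified for NVMe-MI):

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli): reflected polynomial 0x82F63B78,
    initial value and final XOR of 0xFFFFFFFF."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF
```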

function: scripts/management/05_mi_control_primitive_test.py::test_mi_process_state_transitions

Verify Process transitions: completion→Transmit (even when paused) and Abort→Idle.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. assemble a short command and pause before completion
  2. verify completion drives Transmit regardless of Pause
  3. abort during processing → Idle (if abortable)

function: scripts/management/05_mi_control_primitive_test.py::test_mi_transmit_state_transitions

Verify Transmit→Idle after full response or Abort, and Transmit→Process for MPR.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. transmit→idle after full response
  2. transmit→idle after abort

function: scripts/management/05_mi_control_primitive_test.py::test_mi_pause_behavior

Verify Pause behavior: global flag, idempotent, CPSR bits, and response suspension.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. pause in idle has no effect
  2. pause in receive and verify global pause flag
  3. pause again and verify global pause flag remains set (idempotent)
  4. verify CSI must be 0: pause with CSI=1 should return Invalid Parameter with PEL (CSI bit)
  5. issue a command while paused; command responses are suspended while CP responses are not
  6. resume and drain the pending response; pause flag clears and slot returns to idle

function: scripts/management/05_mi_control_primitive_test.py::test_mi_abort_behavior

Verify Abort behavior: Idle transition, CPAS/CPSR, pause clear, resume other-slot transmit, and discard replay.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. abort in idle state has no effect
  2. make slot0 busy (Receive) to be a valid abort target
  3. abort slot0 and check CPSR/CPAS and pause clearing (Abort response is success)
  4. pause the slot to enter the pause state
  5. abort while in the pause state

function: scripts/management/05_mi_control_primitive_test.py::test_mi_abort_cpas_by_state

Verify Abort CPAS mapping by servicing state (Idle/Receive/Process/Transmit).

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. idle → CPAS=00b
  2. receive → CPAS=01b
  3. verify slot0 active and slot1 idle
  4. abort while receiving
  5. process → CPAS=01b or 10b (before/after affecting the subsystem)
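The CPAS values listed above can be decoded with a small table; the status wording below is paraphrased rather than quoted from the specification, and the field position within CPSR is left as a parameter because it is not given here.

```python
# Hypothetical CPAS decoding matching the mapping exercised above.
CPAS_BY_STATE = {
    0b00: "no command aborted (slot was Idle)",
    0b01: "command aborted before affecting the NVM Subsystem",
    0b10: "command aborted after affecting the NVM Subsystem",
    0b11: "command could not be aborted",
}

def decode_cpas(cpsr: int, shift: int = 0) -> str:
    """Extract the two-bit CPAS field from a CPSR value; the field
    offset (shift) is an assumption left as a parameter."""
    return CPAS_BY_STATE[(cpsr >> shift) & 0b11]
```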

function: scripts/management/05_mi_control_primitive_test.py::test_mi_umep_flags

Verify UMEP (Unexpected Middle or End of Packet) flags.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. inject unexpected middle/end of packet to trigger UMEP

function: scripts/management/05_mi_control_primitive_test.py::test_mi_reset_flag

Verify NSSRO (NVM Subsystem Reset Occurred) flag reporting with MI reset.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. issue several commands while toggling pause; NSSRO should not be set by normal traffic
  2. Define a reset function and initialize the subsystem
  3. reset mi only
  4. check reset flag

function: scripts/management/05_mi_control_primitive_test.py::test_mi_control_primitives_survive_controller_reset

Controller Level Reset shall not disrupt MI control primitive servicing.

Reference

  1. NVM Express Base Specification, Controller Level Reset behavior.

Steps

  1. Issue Get State with a known tag and trigger controller level reset before reading it.
  2. Control primitives remain available after the reset.

function: scripts/management/05_mi_control_primitive_test.py::test_mi_replay_basic_behavior

Verify Replay clears Pause flag and retransmits the last command response.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. generate a replayable response
  2. pause endpoint to simulate suspended transmissions
  3. replay both slots

function: scripts/management/05_mi_control_primitive_test.py::test_mi_replay_with_rro

Verify Replay with Response Replay Offset (RRO) and RR bit in CPSR.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. generate a replayable response
  2. replay with RRO = 0 → full replay expected, RR=1
  3. replay with RRO within valid range
  4. verify that a response message missing its first packet is invalid
  5. replay again from the beginning
  6. replay after abort and RR shall be cleared to 0
  7. nothing to replay after abort

function: scripts/management/05_mi_control_primitive_test.py::test_mi_nssro_flag

Verify NSSRO (NVM Subsystem Reset Occurred) flag reporting and clearing.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. issue several commands while toggling pause; NSSRO should not be set by normal traffic
  2. verify NSSRO is not command-slot specific (read with slot parameter should match)

function: scripts/management/05_mi_control_primitive_test.py::test_mi_replay_idle_no_response

Verify Replay in Idle with no cached response → RR=0.

Reference

  1. Management Interface Specification, Revision 2.1.

Steps

  1. ensure no cached response
  2. replay when nothing to retransmit → RR=0

function: scripts/management/05_mi_control_primitive_test.py::test_mi_servicing_model_dual_slot

Validate independent operation of dual command slots in MI.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. skip test if VPD Read command not supported
  2. read VPD with complete packets using slot 0 and 1
  3. send a request only packet 0 in slot 0
  4. read again on slot 1 while slot 0 is receiving
  5. try to read on slot 0 while it is still receiving
  6. complete the command on slot 0
  7. read VPD again on both slots to verify normal operation

function: scripts/management/05_mi_control_primitive_test.py::test_mi_control_primitive_tag

Verifies that the Response Message contains the same value in the Tag field as the corresponding Request Message.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. Send a Get State command with the specified tag and receive the response
  2. Verify that the response message has the same tag as the request
  3. send many commands with different tags
  4. check all response tags; only the response associated with the last Control Primitive is guaranteed

file: scripts/management/06_mi_pcie_cmd_test

function: scripts/management/06_mi_pcie_cmd_test.py::test_mi_pcie_cfg_read

Exercise the PCIe Configuration Read command and validate the returned data.

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 7.1 PCIe Configuration Read

Steps

  1. Send PCIe Configuration Read command
  2. check PCIe Configuration Read data

function: scripts/management/06_mi_pcie_cmd_test.py::test_mi_pcie_cfg_read_bar_access

A Management Endpoint that supports PCIe commands must service BAR accesses.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. check if PCIe Config Read is supported
  2. BAR0/1 registers sit at offsets 0x10-0x17; spec requires they return data, not Access Denied.
  3. compare BAR content with direct PCIe view

function: scripts/management/06_mi_pcie_cmd_test.py::test_mi_pcie_cfg_read_offset_overflow

NVMe-MI PCIe Config Read should fail when offset+length exceed config space.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. check if PCIe Config Read is supported
  2. Boundary checks: these requests stay fully within the 4KB PCIe config space.
  3. Offset+length exceeds 4KB boundary, expect Invalid Parameter (PEL -> Offset field).
  4. Length = 1 at the last byte is valid.
  5. Reading 2 bytes from 0xfff straddles the boundary, should fault.
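The boundary rule these steps probe can be sketched as one predicate (the function name is illustrative):

```python
CFG_SPACE_SIZE = 4096  # PCIe extended configuration space, 4 KB

def cfg_read_in_bounds(offset: int, length: int) -> bool:
    """Sketch of the PCIe Config Read boundary rule above: offset +
    length must stay within the 4 KB configuration space, so a 1-byte
    read at 0xFFF is valid while a 2-byte read at 0xFFF straddles the
    end and should fault."""
    return 0 <= offset and length >= 0 and offset + length <= CFG_SPACE_SIZE
```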

function: scripts/management/06_mi_pcie_cmd_test.py::test_mi_pcie_cfg_read_d3hot

PCIe command either succeeds or returns PCIe Inaccessible while controller in non-D0 state.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. Confirm baseline behaviour in D0.
  2. Enter D3hot (non-D0 state) and attempt a PCIe command.
  3. Exit D3hot and verify PCIe commands succeed again.

function: scripts/management/06_mi_pcie_cmd_test.py::test_mi_pcie_command_status_during_controller_reset

PCIe command responses are either serviced or return PCIe Inaccessible.

Reference

  1. N/A

Steps

  1. Confirm baseline behaviour prior to reset.
  2. Start another PCIe command and reset controller before collecting the response.
  3. PCIe port remained accessible, so data must match the config space.
  4. If affected, spec mandates PCIe Inaccessible completion status.
  5. PCIe command servicing resumes after reset.

file: scripts/management/07_mi_feature_test

function: scripts/management/07_mi_feature_test.py::test_mi_feature_configuration_set_and_reset

Set MI configuration and perform an MI reset.

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 8.3.1 NVM Subsystem Reset

Steps

  1. Get the default MCTP Transmission Unit Size
  2. Set MCTP Transmission Unit Size to 128 bytes
  3. Get the current MCTP Transmission Unit Size
  4. Issue an MI reset
  5. Get the current MCTP Transmission Unit Size and verify the default was restored

function: scripts/management/07_mi_feature_test.py::test_mi_feature_set_mctp_tu_size

Test for setting and verifying MCTP Transmission Unit (TU) Size and performing VPD reads.

Reference

  1. NVM Express® Management Interface Specification, Revision 1.2c

Steps

  1. Set MCTP Transmission Unit Size to 64 bytes
  2. Retrieve current MCTP Transmission Unit Size to verify it has been set to 64 bytes
  3. Perform initial VPD read of 100 bytes
  4. Set MCTP Transmission Unit Size to 128 bytes
  5. Confirm new MCTP Transmission Unit Size of 128 bytes
  6. Perform VPD read again and ensure consistency
  7. Compare the two VPD read buffers to ensure data integrity
  8. Reset MCTP Transmission Unit Size back to 64 bytes for cleanup
  9. Confirm the reset to 64 bytes

function: scripts/management/07_mi_feature_test.py::test_mi_feature_ep_buf_full

Fill the Management Endpoint Buffer and verify its contents.

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.5 Management Endpoint Buffer Write

Steps

  1. read Management Endpoint Buffer Size
  2. fill and verify the Management Endpoint Buffer

function: scripts/management/07_mi_feature_test.py::test_mi_feature_ep_buf_reset

Reset Management Endpoint Buffer

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.5 Management Endpoint Buffer Write

Steps

  1. read Management Endpoint Buffer Size
  2. Management Endpoint Buffer Write special data and check
  3. read Management Endpoint Buffer after controller reset
  4. issue MI Reset
  5. read Management Endpoint Buffer after MI reset

function: scripts/management/07_mi_feature_test.py::test_mi_feature_ep_buf_sanitize

Validate that the Management Endpoint Buffer is cleared to 0h during a sanitize operation.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. Read Management Endpoint Buffer Size
  2. Check if sanitize operation is supported
  3. Fill the Management Endpoint Buffer
  4. Sanitize operation
  5. check the error status of Management Endpoint Buffer Cleared Due to Sanitize

function: scripts/management/07_mi_feature_test.py::test_mi_feature_disable_ccen

Disable the NVMe controller and send MI commands

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 6 NVM Express Admin Command Set

Steps

  1. send nvme identify command
  2. set nvme cc.en=0
  3. send mi identify command
  4. check identify data
  5. check the controller is disabled

function: scripts/management/07_mi_feature_test.py::test_mi_feature_d3hot

Enter PCIe D3hot and send MI commands

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 6 NVM Express Admin Command Set

Steps

  1. send nvme identify command
  2. enter pcie d3hot
  3. send mi identify command
  4. enter pcie d0
  5. check identify data

function: scripts/management/07_mi_feature_test.py::test_mi_feature_command_latency

Test MI command latency under concurrent I/O

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.11 VPD Read

Steps

  1. get current SMBus/I2C Frequency
  2. vpd read 256 bytes 100 cycles
  3. vpd read 256 bytes 100 cycles with ioworker running
  4. compare the time
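
Step 4's comparison only needs the average wall-clock latency per command; a sketch of such a measurement helper (the lambda below is a stand-in for the real VPD read, not the PyNVMe3 API):

```python
import time

def measure(fn, cycles=100):
    """Average wall-clock latency of fn over the given number of cycles."""
    t0 = time.perf_counter()
    for _ in range(cycles):
        fn()
    return (time.perf_counter() - t0) / cycles

# In the real test, fn would issue a 256-byte VPD read; the idle average is
# then compared against the average taken while an ioworker runs.
print(measure(lambda: None) >= 0.0)  # → True
```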

function: scripts/management/07_mi_feature_test.py::test_mi_feature_io_with_periodic_oob_reset

Run random-read IO for 60 seconds while issuing MI resets every second.

Reference
N/A

Steps

  1. start read IO workload
  2. issue MI resets periodically during IO
  3. log results

function: scripts/management/07_mi_feature_test.py::test_mi_duplicated_slot

Verify that the Management Controller does not send a new command to a slot until the previous command's response has been received.

Reference

  1. NVM Express® Management Interface Specification, Revision 1.2c

Steps

  1. Read VPD data in different slots to confirm slots 0 and 1 are operational
  2. send two commands in both slots without waiting for previous responses
  3. send another two commands in both slots without waiting for previous responses

function: scripts/management/07_mi_feature_test.py::test_mi_dual_slot_concurrent

Validate VPD read consistency across dual slots (slot 0 and slot 1).

Reference

  1. NVMe Management Interface Specification, Revision 1.2c

Steps

  1. skip if VPD read command is not supported
  2. Perform initial VPD read from slot 0
  3. Perform consecutive VPD reads from both slots
  4. Verify data consistency in both slots across reads

function: scripts/management/07_mi_feature_test.py::test_mi_nvm_subsystem_report

Validate the NVM Subsystem Report (NVMSR) field to verify NVMe Storage Device status.

Reference

  1. NVMe Management Interface Specification, Revision 1.2c

Steps

  1. skip if NVMe spec version is below 2.0
  2. Retrieve NVMSR field from the Identify Controller data structure
  3. Verify that the NVMSR field is set to 1 for an NVMe Storage Device
  4. print FGUID
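
The NVMSR byte packs two indicator bits; a decoding sketch, assuming the NVMe 2.0 layout (Identify Controller byte 253, bit 0 = NVMe Storage Device, bit 1 = NVMe Enclosure):

```python
def parse_nvmsr(id_ctrl: bytes) -> dict:
    """Decode the NVM Subsystem Report byte from Identify Controller data
    (byte 253 per NVMe 2.0, an assumption of this sketch)."""
    nvmsr = id_ctrl[253]
    return {"nvmesd": nvmsr & 1, "nvmee": (nvmsr >> 1) & 1}

buf = bytearray(4096)
buf[253] = 0x01  # storage device, not an enclosure
print(parse_nvmsr(bytes(buf)))  # → {'nvmesd': 1, 'nvmee': 0}
```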

file: scripts/management/08_mi_error_inject_test

function: scripts/management/08_mi_error_inject_test.py::test_mi_unexpected_eom

Validate unexpected EOM handling and confirm UMEP reporting behavior.

Reference

  1. Based on NVM Express Management Interface Specification 1.2c.

Steps

  1. Reset MI error state to start clean
  2. Send two partial commands to create a Receive → Process transition.
  3. Inject an unexpected EOM (missing SOM) and confirm UMEP is reported.

function: scripts/management/08_mi_error_inject_test.py::test_mi_invalid_som_and_eom

Verify SOM/EOM error handling during MI VPD and Identify commands.

Reference

  1. Based on NVM Express Management Interface Specification 1.2c.

Steps

  1. Check VPD read support before exercising error cases
  2. Reset MI error state to isolate injected errors
  3. Send a baseline VPD read command with valid headers
  4. Send a VPD read with correct SOM/EOM values to confirm clean completion
  5. Send a VPD read with SOM only while clearing EOM
  6. Send a VPD read with EOM only while clearing SOM
  7. Send a VPD read with both SOM and EOM cleared
  8. Send a normal MI Identify command to ensure baseline success
  9. Send MI Identify command split with intentionally incorrect EOM marker
  10. The error command shall be discarded
  11. Clear all error state flags after validation
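
SOM and EOM live in the final byte of the MCTP transport header (DSP0236 layout: SOM bit 7, EOM bit 6, PktSeq bits 5:4, TO bit 3, MsgTag bits 2:0), so "clearing EOM" in the steps above means clearing a single bit in that byte. A sketch of assembling it:

```python
def mctp_flag_byte(som: int, eom: int, pkt_seq: int, to: int, tag: int) -> int:
    """Assemble MCTP transport-header byte 3 (DSP0236 layout)."""
    return (som << 7) | (eom << 6) | ((pkt_seq & 3) << 4) | (to << 3) | (tag & 7)

# Single-packet message: SOM and EOM both set.
print(hex(mctp_flag_byte(som=1, eom=1, pkt_seq=0, to=1, tag=0)))  # → 0xc8
# SOM only with EOM cleared, as injected in step 5:
print(hex(mctp_flag_byte(som=1, eom=0, pkt_seq=0, to=1, tag=0)))  # → 0x88
```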

function: scripts/management/08_mi_error_inject_test.py::test_mi_bad_mctp_version

Send MI command with bad MCTP version to ensure BHVS flag is asserted.

Reference

  1. Based on NVM Express Management Interface Specification 1.2c.

Steps

  1. Start from a clean state so only the injected error affects the flags
  2. Force an invalid MCTP version value in the first packet header
  3. Issue a standard Identify command; the corrupted header should be rejected
  4. Check MI state for BHVS flag after processing the invalid header
  5. Attempt to receive response and confirm it is silently discarded, triggering warning

function: scripts/management/08_mi_error_inject_test.py::test_mi_invalid_integrity_check

Check invalid integrity check handling in NVMe-MI admin commands.

Reference

  1. Based on NVM Express Management Interface Specification 1.2c.

Steps

  1. Reset MI error state before injecting the IC fault
  2. Send a valid NVMe Admin command and confirm it completes without error
  3. Verify the MI state is Idle
  4. Inject an invalid integrity check (IC) and send the same admin command again
  5. Attempt to receive the response and confirm it is silently discarded, triggering a warning

function: scripts/management/08_mi_error_inject_test.py::test_mi_invalid_mic_crc32

Inject CRC32 corruption into MI command and verify BMICE reporting.

Reference

  1. Based on NVM Express Management Interface Specification 1.2c.

Steps

  1. Reset MI error state before CRC injection
  2. Inject an error into the CRC32 value of the MCTP header and send an identify command
  3. Wait briefly to allow the error to be processed
  4. Check the state control field for the BMICE flag (bit 4 should be set to ‘1’ to indicate an integrity error)
  5. Attempt to receive the response and confirm it is silently discarded, triggering a warning
  6. Verify again that the MI has returned to the Idle state after handling the error
  7. Clear error state flag after confirming BMICE behavior
  8. Ensure the error flag is cleared following cleanup
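
The MIC is a 32-bit CRC over the message. The sketch below implements CRC-32C (the Castagnoli polynomial used for the message integrity check) bit by bit, showing why any single corrupted bit changes the checksum:

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C: reflected polynomial 0x82F63B78,
    init and final XOR 0xFFFFFFFF."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

assert crc32c(b"123456789") == 0xE3069283  # standard CRC-32C check value

# Flipping one message bit yields a different MIC, so a receiver that
# recomputes the CRC detects the corruption (the BMICE flag in this test).
msg = b"\x84\x08\x3b\x10"
print(crc32c(msg) != crc32c(bytes([msg[0] ^ 1]) + msg[1:]))  # → True
```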

function: scripts/management/08_mi_error_inject_test.py::test_mi_invalid_offset

Issue MI NVMe Admin commands with invalid offsets and expect errors.

Reference

  1. Based on NVM Express Management Interface Specification 1.2c.

Steps

  1. Send MI Identify command with misaligned offset
  2. Validate the reported parameter error location
  3. Send MI Identify command with offset beyond completion data size

function: scripts/management/08_mi_error_inject_test.py::test_mi_invalid_length

Issue MI NVMe Admin commands with invalid lengths and expect errors.

Reference

  1. Based on NVM Express Management Interface Specification 1.2c.

Steps

  1. Send MI Identify command with length shorter than expected
  2. Send MI Identify command with length beyond completion data size

function: scripts/management/08_mi_error_inject_test.py::test_mi_invalid_opcode

Validate MI error reporting for unsupported NVMe Admin and MI opcodes.

Reference

  1. Based on NVM Express Management Interface Specification 1.2c.

Steps

  1. Send MI NVMe Admin command with unsupported opcode
  2. Send MI command with invalid opcode

function: scripts/management/08_mi_error_inject_test.py::test_mi_invalid_nvme_command

Verify MI surfaces NVMe internal error status for bad Identify command.

Reference

  1. Based on NVM Express Management Interface Specification 1.2c.

Steps

  1. Send MI NVMe Admin Identify command expected to trigger internal error
  2. Check NVMe completion status field is non-zero to reflect error

file: scripts/management/09_mi_stress_test

function: scripts/management/09_mi_stress_test.py::test_mi_stress_mix_nvme_cmd

Test MI commands mixed with I/O commands

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 6 NVM Express Admin Command Set

Steps

  1. send different mi command
  2. random read, get io latency
  3. mixing io and mi commands
  4. check if the mi command is correct
  5. check for io latency changes

function: scripts/management/09_mi_stress_test.py::test_mi_stress_cmd_ctrl_mix

Control Primitives Mix

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 4.2.1 Control Primitives

Steps

  1. send a mix of different MI commands

function: scripts/management/09_mi_stress_test.py::test_mi_stress_io_with_mi_reset

Run I/O while issuing MI Reset

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.8 Reset

Steps

  1. issue mi reset
  2. mi reset during io

function: scripts/management/09_mi_stress_test.py::test_mi_stress_control_primitives_survive_controller_reset

Repeated controller resets shall not disrupt Control Primitive servicing.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. Send multiple Get State commands with unique tags, resetting the controller

function: scripts/management/09_mi_stress_test.py::test_mi_stress_command_servicing_across_controller_reset

MI command servicing must stay alive through repeated controller resets.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. Skip test if VPD Read command is not supported
  2. Get baseline VPD data
  3. Repeatedly issue VPD Read commands with controller resets in between

function: scripts/management/09_mi_stress_test.py::test_mi_stress_admin_command_servicing_across_controller_reset

Admin commands tunneled via MI must survive repeated resets.

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. Get baseline Log Page data
  2. Repeatedly issue Get Log Page commands with controller resets in between

function: scripts/management/09_mi_stress_test.py::test_mi_stress_pcie_command_status_during_controller_reset

PCIe Config commands should either complete or return PCIe Inaccessible.

Reference
N/A

Steps

  1. Skip test if PCIe Config Read command is not supported
  2. Prepare expected data
  3. Repeatedly issue PCIe Config Read commands with controller resets in between

function: scripts/management/09_mi_stress_test.py::test_mi_stress_diff_slot

Send MI commands in different slots

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 4 Message Servicing Model

Steps

  1. get response of different mi command in slot 0
  2. send different mi command in slot 0 and slot 1

function: scripts/management/09_mi_stress_test.py::test_mi_stress_basic_management_mix

Mix MI commands with Basic Management commands

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 4 Message Servicing Model

Steps

  1. send a mix of MI commands and Basic Management commands
  2. mi configuration get command
  3. smbus block read of the drive’s status
  4. mi read mi data structure command
  5. smbus block read of the drive’s static data
  6. mi vpd read
  7. issue an I2C master write of byte 0xFF with the I2C_NO_STOP flag (via pyaardvark)
  8. mi identify command

function: scripts/management/09_mi_stress_test.py::test_mi_stress_inband_oob_cmd_mix

Mix out-of-band MI commands with in-band NVMe commands

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. skip if mi command is not supported
  2. send a mix of MI commands and Basic Management commands
  3. mi configuration get command
  4. send mi command to get configuration of Health Status Change
  5. mi read mi data structure command
  6. send mi command to set configuration of Health Status Change
  7. mi vpd read
  8. mi identify command

function: scripts/management/09_mi_stress_test.py::test_mi_large_message

MI commands with different packet lengths

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. send vpd read command with different length
  2. send read mi data structure command with different length

function: scripts/management/09_mi_stress_test.py::test_mi_dual_slot_thread_commands

Send commands in both slots concurrently

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. get response of cmd1 in slot0
  2. get response of cmd2 in slot1
  3. define first thread for cmd1
  4. define second thread for cmd2
  5. start both threads
  6. get and verify all responses in main thread
  7. wait both threads complete

function: scripts/management/09_mi_stress_test.py::test_mi_stress_longtime

Sustain mixed NVMe and MI operations over an extended period

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. format the namespace first
  2. Collect all ioworkers
  3. Start all ioworkers
  4. Mix mi commands and i2c basic management commands
  5. Close all ioworkers

file: scripts/management/10_mi_ocp_test

function: scripts/management/10_mi_ocp_test.py::test_ocp_update_data_byte_write

Update a data byte over SMBus and verify PEC handling

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.6

Steps

  1. Read current data from the device
  2. Calculate and update PEC before writing back
  3. Read data again for verification
  4. Revert to the original value without PEC
  5. Read data again for verification
  6. Validate that the data remains unchanged after the update
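
The PEC appended to an SMBus transaction is a CRC-8 over every byte on the wire (polynomial x^8 + x^2 + x + 1, initial value 0). A self-contained sketch:

```python
def smbus_pec(data: bytes) -> int:
    """SMBus Packet Error Code: CRC-8, polynomial 0x07, init 0,
    computed MSB-first over all wire bytes including the address."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

assert smbus_pec(b"123456789") == 0xF4  # standard CRC-8 check value
```

Step 4 omits the PEC on purpose: a device that enforces PEC should ignore that write, which is what step 6 confirms.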

function: scripts/management/10_mi_ocp_test.py::test_ocp_read_firmware_update_flags

Check Firmware Update Flags field

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.6. SMBUS-4

Steps

  1. send NVMe Basic Management Command opcode 90
  2. check that the Firmware Update Flags field (byte 91) in the SMBus data structure is set to FFh

function: scripts/management/10_mi_ocp_test.py::test_ocp_read_secure_boot_failure_reporting

Check the Secure Boot Failure Reporting Supported field

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.6. SMBUS-5

Steps

  1. send NVMe Basic Management Command opcode 242
  2. The Secure Boot Failure Feature Reporting Supported bit at offset 243 shall be supported and set to 1b.

function: scripts/management/10_mi_ocp_test.py::test_ocp_read_basic_mgmt_delay

Check NVMe Basic Management Command time

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.6. SMBUS-7

Steps

  1. read Controller Capabilities Timeout value
  2. get NVMe Basic Management Command time
  3. The device shall take no longer than the CAP.TO timeout value to produce stable SMBus output through the NVMe Basic Management Command
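
CAP.TO is reported in 500 ms units (NVMe base specification), so the allowed settling time in step 3 is simply the field value halved:

```python
def cap_to_seconds(cap_to: int) -> float:
    """Convert the Controller Capabilities Timeout field (500 ms units)
    into seconds."""
    return cap_to * 0.5

print(cap_to_seconds(30))  # → 15.0
```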

function: scripts/management/10_mi_ocp_test.py::test_ocp_update_data_block_write

Test NVMe-MI data update via I2C.

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.6. SMBUS-10

Steps

  1. Read current data from the device
  2. Calculate and update PEC before writing back
  3. The device shall check the PEC value sent when the host issues a Block Write and only process the message if the PEC value matches the SMBus data format
  4. Read data again for verification
  5. revert to original value
  6. Read data again for verification
  7. Validate that the data remains unchanged after the update

function: scripts/management/10_mi_ocp_test.py::test_mi_level0_discovery

send tcg level0 discovery over smbus

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

  1. send TCG Level 0 Discovery over SMBus

function: scripts/management/10_mi_ocp_test.py::test_mi_device_self_test

Send the Device Self-test command over SMBus

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

  1. send device self test command over smbus

function: scripts/management/10_mi_ocp_test.py::test_mi_log_page_device_self_test

Validate device self-test log page (page ID 0x6) over SMBus.

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5.
  2. Section 11.3: NVMe-MI Requirements.

Steps

  1. Start a short Device Self-Test (DST) and record start time
  2. Retrieve the device self-test log through MI
  3. Issue a format command to abort the DST
  4. Retrieve the device self-test log through MI again

function: scripts/management/10_mi_ocp_test.py::test_mi_log_page_sanitize

Validate sanitize log page (page ID 0x81) over SMBus.

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5.
  2. Section 11.3: NVMe-MI Requirements.

Steps

  1. Send sanitize command over smbus
  2. Issue another Block Erase sanitize command
  3. Monitor sanitize status via MI log page
  4. Retrieve sanitize log page through MI
  5. Extract status and progress from the log data
  6. Validate sanitize completion

function: scripts/management/10_mi_ocp_test.py::test_mi_log_page_extended_smart_log

Validate extended SMART/Health log page (page ID 0xC0) over SMBus.

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5.
  2. Section 11.3: NVMe-MI Requirements.

Steps

  1. Retrieve the extended SMART/Health log through MI
  2. Retrieve the same extended SMART/Health log through NVMe Admin
  3. Compare the data retrieved through MI and NVMe Admin

function: scripts/management/10_mi_ocp_test.py::test_mi_log_page_persistent_event_log

Validate persistent event log page (page ID 0xD) over SMBus.

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5.
  2. Section 11.3: NVMe-MI Requirements.

Steps

  1. Reset the NVMe controller
  2. Retrieve persistent event log (LSP = 1) through MI
  3. Retrieve persistent event log (LSP = 0) through MI

function: scripts/management/10_mi_ocp_test.py::test_mi_log_page_telemetry_host_initiated_log

Send Get Log Page (page ID 0x7, Telemetry Host-Initiated) command over SMBus

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

  1. get the telemetry host-initiated log through MI

function: scripts/management/10_mi_ocp_test.py::test_mi_log_page_telemetry_controller_initiated_log

Send Get Log Page (page ID 0x8, Telemetry Controller-Initiated) command over SMBus

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

  1. get the telemetry controller-initiated log through MI

function: scripts/management/10_mi_ocp_test.py::test_mi_feature_temperature_threshold

Validate temperature threshold feature using MI commands.

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5.
  2. Section 11.3: NVMe-MI Requirements.

Steps

  1. Retrieve original temperature threshold configuration
  2. Set a new temperature threshold using MI
  3. Verify the new configuration through NVMe Admin command
  4. Restore the original temperature threshold configuration

function: scripts/management/10_mi_ocp_test.py::test_mi_diff_host_smbus_frequency

Validate MI command functionality under different SMBus host frequencies.

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5.
  2. Section 11.3: NVMe-MI Requirements.

Steps

  1. Initialize host with different SMBus frequencies
  2. Prepare a buffer for the NVMe firmware download command
  3. Send an NVMe firmware download command via MI

function: scripts/management/10_mi_ocp_test.py::test_smbus_prepare_to_arp

Test for sending the Prepare to ARP command using SMBus protocol.

Reference

  1. System Management Bus (SMBus) Specification Version 3.2, 6.6.3.2 Prepare to ARP

Steps

  1. Send ARP address with command
  2. Send the SMBus command with PEC
  3. send command with wrong PEC

function: scripts/management/10_mi_ocp_test.py::test_smbus_get_udid

Get the UDID of an SMBus device.

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.6. NVMe-MI-6

Steps

  1. send get udid SMBus command
  2. send another get udid SMBus command
  3. verify the two reads match, assuming no random component in the UDID
  4. fetch the UDID from the packet
  5. check UDID fields
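
Step 5's field checks follow the 16-byte UDID layout from the SMBus specification (capabilities, version, then four big-endian ID words and a 32-bit vendor-specific ID, as assumed here); a parsing sketch with illustrative values:

```python
import struct

def parse_udid(udid: bytes) -> dict:
    """Split a 16-byte SMBus ARP UDID into fields (layout assumed per
    the SMBus spec: bytes 2-11 are five big-endian 16-bit words)."""
    assert len(udid) == 16
    vid, did, iface, sub_vid, sub_did = struct.unpack(">5H", udid[2:12])
    return {
        "capabilities": udid[0], "version": udid[1],
        "vendor_id": vid, "device_id": did, "interface": iface,
        "subsys_vendor_id": sub_vid, "subsys_device_id": sub_did,
        "vendor_specific_id": struct.unpack(">I", udid[12:16])[0],
    }

# Hypothetical sample UDID; the IDs below are placeholders, not a real device.
sample = bytes([0x60, 0x20]) + struct.pack(">5H", 0x1234, 0x5678, 0x0001,
                                           0x1234, 0x5678) + bytes(4)
print(hex(parse_udid(sample)["vendor_id"]))  # → 0x1234
```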

function: scripts/management/10_mi_ocp_test.py::test_spdm_get_version

Get SPDM Version

Reference

  1. Security Protocol and Data Model (SPDM) Specification, Version: 1.0.2
  2. 4.9.1.1 GET_VERSION request message and VERSION response message

Steps

  1. send get SPDM version command over smbus

file: scripts/management/11_spdm_test

function: scripts/management/11_spdm_test.py::test_spdm_get_version

Validate SPDM VERSION response (VersionNumberEntry count and entries) for a GET_VERSION request.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Transmit GET_VERSION and receive the corresponding VERSION response.
  2. Check SPDM message header fields in the VERSION response.
  3. Parse VersionNumberEntry count from the payload.
  4. Iterate over VersionNumberEntry array and log each advertised version.
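
Each VersionNumberEntry is a 16-bit field (DSP0274: major [15:12], minor [11:8], update [7:4], alpha [3:0]); a decoding sketch for the per-entry logging in step 4:

```python
def parse_version_entry(entry: int) -> str:
    """Decode a 16-bit SPDM VersionNumberEntry into readable form."""
    major, minor = (entry >> 12) & 0xF, (entry >> 8) & 0xF
    update, alpha = (entry >> 4) & 0xF, entry & 0xF
    return f"{major}.{minor} (update {update}, alpha {alpha})"

print(parse_version_entry(0x1200))  # → 1.2 (update 0, alpha 0)
```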

function: scripts/management/11_spdm_test.py::test_spdm_get_version_invalid_version

Validate GET_VERSION with invalid Version encodings returns an ERROR response.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Send GET_VERSION using invalid Version encodings and expect an ERROR response.

function: scripts/management/11_spdm_test.py::test_spdm_get_version_multiple

After receiving GET_VERSION, the Responder cancels any previous outstanding requests from the same Requester.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Interleave GET_VERSION and GET_CAPABILITIES, then issue GET_VERSION again.
  2. Send GET_VERSION twice; only the latest request should be serviced.
  3. Validate the captured VERSION response header.

function: scripts/management/11_spdm_test.py::test_spdm_get_version_latency

Measure GET_VERSION request latency and verify it is within the expected threshold.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION request.
  2. Start timing prior to receiving the VERSION response.
  3. Receive the VERSION response.
  4. Compute elapsed time in seconds.
  5. Log latency in microseconds.
  6. Threshold check: latency must be < 100 ms.

function: scripts/management/11_spdm_test.py::test_spdm_get_capabilities

GET_CAPABILITIES and verify CAPABILITIES response fields and flags.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Perform version discovery then request CAPABILITIES.
  2. Repeat capability discovery after a fresh GET_VERSION.
  3. Verify CAPABILITIES response header fields per SPDM.
  4. CTExponent → cryptographic timeout, CT = 2^CTExponent µs.
  5. CAPABILITIES Flags.
  6. Selected capability bits per SPDM flags layout.
  7. Reserved bits example check, kept disabled as in the original: assert flags & 0xFFFC0000 == 0, "reserved bits shall be zero"
  8. DataTransferSize and MaxSPDMmsgSize.
  9. PSK_CAP must not use the reserved value 0b11.
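
Step 4's CTExponent conversion is defined by DSP0274 as CT = 2^CTExponent microseconds:

```python
def ct_timeout_us(ct_exponent: int) -> int:
    """SPDM cryptographic timeout CT in microseconds (DSP0274)."""
    return 1 << ct_exponent

print(ct_timeout_us(20))  # → 1048576 (about 1.05 s)
```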

function: scripts/management/11_spdm_test.py::test_spdm_get_capabilities_invalid_flag

GET_CAPABILITIES with an invalid Flags value and verify ERROR handling.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION then GET_CAPABILITIES with an invalid Flags encoding.
  2. Verify Header.Version in the captured response.

function: scripts/management/11_spdm_test.py::test_spdm_get_capabilities_multiple

Multiple GET_CAPABILITIES requests and verification of expected responses.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. GET_VERSION then GET_CAPABILITIES.
  2. GET_CAPABILITIES before GET_VERSION → expect ERROR 0x04.
  3. Refresh connection discovery then request CAPABILITIES again.

function: scripts/management/11_spdm_test.py::test_spdm_get_capabilities_invalid_version

Test GET_CAPABILITIES with invalid SPDM versions and expect an ERROR response.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Discover supported VersionNumberEntry values.
  2. Candidate invalid Version encodings.
  3. For versions not advertised by the Responder, expect ERROR on GET_CAPABILITIES.

function: scripts/management/11_spdm_test.py::test_spdm_get_capabilities_invalid_flags

Send GET_CAPABILITIES with invalid parameter encodings and expect ERROR; verify header fields in the response.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. perform VERSION discovery before capability disclosure
  2. request CAPABILITIES with an invalid Flags bitmap and expect ERROR
  3. request CAPABILITIES with an invalid DataTransferSize and expect ERROR
  4. request CAPABILITIES with invalid MaxSPDMmsgSize relative to DataTransferSize and expect ERROR
  5. verify Header.Version in the captured response

function: scripts/management/11_spdm_test.py::test_spdm_get_capabilities_check_resp_flags

Check CAPABILITIES response flags and cross-field constraints.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. perform VERSION discovery then request CAPABILITIES with a specific Flags mask
  2. if ENCRYPT_CAP == 1 then KEY_EX_CAP == 1 or PSK_CAP in {1,2}
  3. if MEAS_CAP == 1 then KEY_EX_CAP == 1 or PSK_CAP in {1,2}
  4. if KEY_EX_CAP == 1 then ENCRYPT_CAP == 1 or MAC_CAP == 1
  5. PSK_CAP must not be 3
  6. if PSK_CAP != 0 then ENCRYPT_CAP == 1 or MAC_CAP == 1
  7. if MUT_AUTH_CAP == 1 then ENCAP_CAP == 1
  8. if HANDSHAKE_IN_THE_CLEAR_CAP == 1 then KEY_EX_CAP == 1
  9. if PUB_KEY_ID_CAP == 1 then CERT_CAP == 0
  10. DataTransferSize must be >= minimum; MaxSPDMmsgSize must be >= DataTransferSize
  11. if CHAL_CAP == 1 or MEAS_CAP == 1 or KEY_EX_CAP == 1 then CERT_CAP == 1 or PUB_KEY_ID_CAP == 1
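
The cross-field constraints above are plain boolean implications over the CAPABILITIES flags. A sketch that returns the violated rules (flag names mirror the step list; the size checks of step 10 are omitted):

```python
def check_capability_flags(f: dict) -> list:
    """Return the names of violated cross-field rules from the step list."""
    psk = f["PSK_CAP"]  # 2-bit field
    rules = [
        ("ENCRYPT_CAP needs KEY_EX_CAP or PSK_CAP in {1,2}",
         not f["ENCRYPT_CAP"] or f["KEY_EX_CAP"] or psk in (1, 2)),
        ("MEAS_CAP needs KEY_EX_CAP or PSK_CAP in {1,2}",
         not f["MEAS_CAP"] or f["KEY_EX_CAP"] or psk in (1, 2)),
        ("KEY_EX_CAP needs ENCRYPT_CAP or MAC_CAP",
         not f["KEY_EX_CAP"] or f["ENCRYPT_CAP"] or f["MAC_CAP"]),
        ("PSK_CAP must not be the reserved value 3", psk != 3),
        ("nonzero PSK_CAP needs ENCRYPT_CAP or MAC_CAP",
         psk == 0 or f["ENCRYPT_CAP"] or f["MAC_CAP"]),
        ("MUT_AUTH_CAP needs ENCAP_CAP",
         not f["MUT_AUTH_CAP"] or f["ENCAP_CAP"]),
        ("HANDSHAKE_IN_THE_CLEAR_CAP needs KEY_EX_CAP",
         not f["HANDSHAKE_IN_THE_CLEAR_CAP"] or f["KEY_EX_CAP"]),
        ("PUB_KEY_ID_CAP excludes CERT_CAP",
         not f["PUB_KEY_ID_CAP"] or not f["CERT_CAP"]),
        ("CHAL/MEAS/KEY_EX need CERT_CAP or PUB_KEY_ID_CAP",
         not (f["CHAL_CAP"] or f["MEAS_CAP"] or f["KEY_EX_CAP"])
         or f["CERT_CAP"] or f["PUB_KEY_ID_CAP"]),
    ]
    return [name for name, ok in rules if not ok]

flags = dict(ENCRYPT_CAP=1, MAC_CAP=0, KEY_EX_CAP=1, PSK_CAP=0, MEAS_CAP=0,
             MUT_AUTH_CAP=0, ENCAP_CAP=0, HANDSHAKE_IN_THE_CLEAR_CAP=0,
             PUB_KEY_ID_CAP=0, CERT_CAP=1, CHAL_CAP=0)
print(check_capability_flags(flags))  # → []
```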

function: scripts/management/11_spdm_test.py::test_spdm_get_capabilities_invalid_param

Send GET_CAPABILITIES with invalid parameters and expect ERROR responses.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. send GET_VERSION to establish negotiated Version
  2. send GET_CAPABILITIES with Param2=0 as baseline success
  3. send GET_CAPABILITIES with Param2=1 expect ERROR
  4. send GET_CAPABILITIES with CTExponent incremented expect ERROR
  5. send GET_CAPABILITIES with DataTransferSize+1 and MaxSPDMmsgSize+1 expect ERROR

function: scripts/management/11_spdm_test.py::test_spdm_negotiate_algorithms

Send NEGOTIATE_ALGORITHMS and validate ALGORITHMS response fields.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Prerequisite discovery: VERSION, then CAPABILITIES.
  2. NEGOTIATE_ALGORITHMS → ALGORITHMS.
  3. Length check for ALGORITHMS payload.
  4. Header: Version, ResponseCode=ALGORITHMS, reserved/length fields.
  5. MeasurementHashAlgo: exactly one bit set.
  6. BaseAsymSel / BaseHashSel: exactly one bit set each.
  7. Reserved region and extended counts.
  8. RespAlgStruct list: Type/Count/Selected; fixed 2-byte bitmap; no extended; exactly one bit selected.
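
The "exactly one bit set" condition in steps 5, 6, and 8 is the standard nonzero power-of-two test:

```python
def exactly_one_bit(x: int) -> bool:
    """True when x is nonzero with exactly one bit set, the check applied
    to MeasurementHashAlgo, BaseAsymSel, and BaseHashSel."""
    return x != 0 and (x & (x - 1)) == 0

print(exactly_one_bit(0x80), exactly_one_bit(0), exactly_one_bit(0x6))
# → True False False
```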

function: scripts/management/11_spdm_test.py::test_spdm_negotiate_algorithms_invalid_sequence

NEGOTIATE_ALGORITHMS in invalid request orders should return ERROR; only the canonical sequence succeeds.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. NEGOTIATE_ALGORITHMS before GET_VERSION → expect ERROR 0x04
  2. GET_VERSION only (no GET_CAPABILITIES) then NEGOTIATE_ALGORITHMS → expect ERROR 0x04
  3. Canonical order: GET_VERSION → GET_CAPABILITIES → NEGOTIATE_ALGORITHMS → expect ALGORITHMS
  4. Re-issuing NEGOTIATE_ALGORITHMS after negotiation → expect ERROR 0x04
  5. GET_CAPABILITIES after NEGOTIATE_ALGORITHMS (invalid order) → expect ERROR 0x04
  6. NEGOTIATE_ALGORITHMS after the above invalid order → expect ERROR 0x04
  7. Restart from GET_VERSION to allow the canonical sequence again

function: scripts/management/11_spdm_test.py::test_spdm_negotiate_algorithms_invalid_version

NEGOTIATE_ALGORITHMS with unsupported Version encodings should return ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. perform VERSION discovery and collect supported VersionNumberEntry values
  2. candidate invalid Version encodings
  3. for versions not advertised by the responder expect ERROR on NEGOTIATE_ALGORITHMS

function: scripts/management/11_spdm_test.py::test_spdm_negotiate_algorithms_invalid_params

NEGOTIATE_ALGORITHMS with invalid parameters should return ERROR

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. send GET_VERSION to establish the negotiated Version
  2. send GET_CAPABILITIES to disclose CTExponent and Flags before algorithm negotiation
  3. send NEGOTIATE_ALGORITHMS with total Length below minimum, expect ERROR
  4. send NEGOTIATE_ALGORITHMS with ExtAsymSelCount exceeding allowed range, expect ERROR
  5. send NEGOTIATE_ALGORITHMS with ExtHashSelCount exceeding allowed range, expect ERROR
  6. send NEGOTIATE_ALGORITHMS with FixedAlgStruct count too small, expect ERROR
  7. send NEGOTIATE_ALGORITHMS with unsupported FixedAlgStruct count value, expect ERROR

function: scripts/management/11_spdm_test.py::test_spdm_negotiate_algorithms_multiple_times

Send NEGOTIATE_ALGORITHMS more than once and expect only the first negotiation to succeed.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. send GET_VERSION to establish negotiated Version
  2. send GET_CAPABILITIES to disclose CTExponent and Flags prior to negotiation
  3. send first NEGOTIATE_ALGORITHMS and expect ALGORITHMS
  4. send another NEGOTIATE_ALGORITHMS with Param2=1 and expect ERROR

function: scripts/management/11_spdm_test.py::test_spdm_negotiate_algorithms_skip_get_capabilities

Send NEGOTIATE_ALGORITHMS without GET_CAPABILITIES and expect ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. perform VERSION discovery
  2. attempt NEGOTIATE_ALGORITHMS while skipping GET_CAPABILITIES and expect ERROR 0x04

function: scripts/management/11_spdm_test.py::test_spdm_diff_capabilities_algorithms

Exercise CAPABILITIES and ALGORITHMS with differing selections and expect ERROR on invalid overrides.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. VERSION → CAPABILITIES to establish baseline
  2. CAPABILITIES with DataTransferSize override should return ERROR
  3. VERSION → CAPABILITIES → NEGOTIATE_ALGORITHMS to establish baseline
  4. NEGOTIATE_ALGORITHMS with invalid BaseHashAlgo bitmap should return ERROR

function: scripts/management/11_spdm_test.py::test_spdm_get_digest

Send GET_DIGESTS and validate DIGESTS response fields and cert chain digests.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. GET_DIGESTS → DIGESTS
  3. Header: Version and ResponseCode=DIGESTS; Param1 reserved
  4. Param2 slot mask → digest count (number of set bits)
  5. Digest array length = digest_count * digest_length
  6. Log per-slot CertChainHash
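
Steps 4 through 6 reduce to popcounting the Param2 slot mask and slicing the digest array; a sketch that maps populated slots to their CertChainHash:

```python
def split_digests(param2: int, payload: bytes, digest_len: int) -> dict:
    """Map each set bit of the 8-bit slot mask to its digest, taken in
    slot order from the DIGESTS payload."""
    slots = [s for s in range(8) if param2 & (1 << s)]
    assert len(payload) == len(slots) * digest_len, "payload length mismatch"
    return {s: payload[i * digest_len:(i + 1) * digest_len]
            for i, s in enumerate(slots)}

# Two populated slots (0 and 2) with 4-byte digests for illustration.
print(sorted(split_digests(0b101, bytes(range(8)), 4)))  # → [0, 2]
```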

function: scripts/management/11_spdm_test.py::test_spdm_get_digests_with_different_hash_algorithms

Verify GET_DIGESTS response across negotiated BaseHashAlgo variants.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Establish the connection through VERSION discovery and capability disclosure.
  2. Negotiate algorithms by proposing a single candidate BaseHashAlgo derived from the test parameter, then observe the Responder’s BaseHashSel selection in the ALGORITHMS response.
  3. If the Responder does not select any BaseHashAlgo, skip this variant.
  4. Determine HashSize for the selected BaseHashAlgo to interpret the DIGESTS payload.
  5. Request DIGESTS and record the response for this negotiated BaseHashAlgo.
  6. Extract the first CertChainHash from the DIGESTS payload for observation and logging.
  7. Reissue GET_DIGESTS for idempotence; responses should be identical for the same negotiated state.
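
Interpreting the DIGESTS payload in step 5 requires mapping the Responder's BaseHashSel bit to a digest length. A sketch assuming the SPDM 1.2 BaseHashAlgo bit assignments (SHA-256 through SHA3-512):

```python
# Assumed SPDM 1.2 BaseHashAlgo bit positions -> digest size in bytes.
BASE_HASH_SIZES = {
    1 << 0: 32,  # TPM_ALG_SHA_256
    1 << 1: 48,  # TPM_ALG_SHA_384
    1 << 2: 64,  # TPM_ALG_SHA_512
    1 << 3: 32,  # TPM_ALG_SHA3_256
    1 << 4: 48,  # TPM_ALG_SHA3_384
    1 << 5: 64,  # TPM_ALG_SHA3_512
}

def hash_size(base_hash_sel: int) -> int:
    """Digest length for the Responder's BaseHashSel selection."""
    if base_hash_sel == 0:
        raise ValueError("Responder selected no BaseHashAlgo; skip this variant")
    return BASE_HASH_SIZES[base_hash_sel]
```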

function: scripts/management/11_spdm_test.py::test_spdm_get_digests_invalid_requests

Send GET_DIGESTS in invalid connection states; verify ERROR handling and success after proper initialization.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. GET_VERSION then GET_DIGESTS before CAPABILITIES → expect ERROR
  2. After CAPABILITIES, GET_DIGESTS before NEGOTIATE_ALGORITHMS → expect ERROR
  3. Correct order completed; GET_DIGESTS should succeed
  4. Unsupported Version in GET_DIGESTS → expect ERROR
  5. Reissuing CAPABILITIES after negotiation → expect ERROR
  6. Reissuing NEGOTIATE_ALGORITHMS after negotiation → expect ERROR
  7. CAPABILITIES again after prior failures → still expect ERROR
  8. GET_DIGESTS remains valid after correct initialization

function: scripts/management/11_spdm_test.py::test_spdm_get_digests_invalid_version

GET_DIGESTS with unsupported Version encodings should return ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. perform VERSION discovery and collect supported VersionNumberEntry values
  2. prepare candidate invalid Version encodings
  3. for versions not advertised by the responder, expect ERROR on GET_DIGESTS
  4. establish connection state before issuing the request under test

function: scripts/management/11_spdm_test.py::test_spdm_get_certificate

Retrieve the full certificate chain via GET_CERTIFICATE and validate against the DIGESTS CertChainHash.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. Obtain DIGESTS to anchor the expected CertChainHash.
  3. Retrieve the complete certificate chain in portions until RemainderLength becomes zero.
  4. Verify the certificate chain hash matches the DIGESTS value.
  5. Parse the DER-encoded certificate chain into X.509 objects.
  6. Verify the chain from root to leaf using the issuer’s public key on the subject’s tbsCertificate.
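
Steps 3–4 reduce to a portion-by-portion read loop followed by a hash comparison. A sketch, where `get_certificate` is a hypothetical transport helper returning `(data, RemainderLength)` and SHA-384 is an assumed negotiated BaseHashSel:

```python
import hashlib

def read_cert_chain(get_certificate, portion: int = 0x200) -> bytes:
    """Fetch the certificate chain in portions until RemainderLength is 0."""
    chain, offset = b"", 0
    while True:
        data, remainder = get_certificate(offset, portion)
        chain += data
        offset += len(data)
        if remainder == 0:
            return chain

def chain_matches_digest(chain: bytes, cert_chain_hash: bytes) -> bool:
    # Assuming SHA-384 was negotiated as BaseHashSel.
    return hashlib.sha384(chain).digest() == cert_chain_hash
```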

function: scripts/management/11_spdm_test.py::test_spdm_get_cert_from_diff_slots

Send GET_CERTIFICATE for each slot indicated in the DIGESTS slot mask and verify per-slot chain via precomputed CertChainHash.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. request DIGESTS and parse the slot mask and per-slot CertChainHash array
  3. header checks for DIGESTS
  4. derive slot count from Param2 bitmap and slice the digest blob
  5. for each set bit in the slot mask, pull the CERTIFICATE chain and validate against the corresponding CertChainHash

function: scripts/management/11_spdm_test.py::test_spdm_get_leaf_certificate

Retrieve the leaf certificate from the CERTIFICATE chain and log essential fields.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. send GET_VERSION to establish negotiated Version
  2. send GET_CAPABILITIES to disclose CTExponent and Flags
  3. send NEGOTIATE_ALGORITHMS to select MeasurementHashAlgo/BaseAsymSel/BaseHashSel
  4. retrieve the complete CERTIFICATE chain in portions
  5. parse DER certificates into X.509 objects
  6. select the leaf certificate and log subject/issuer/serial/validity

function: scripts/management/11_spdm_test.py::test_spdm_get_certificate_invalid_version

GET_CERTIFICATE with unsupported Version encodings should return ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. perform VERSION discovery and collect supported VersionNumberEntry values
  2. prepare candidate invalid Version encodings
  3. for versions not advertised by the responder, expect ERROR on GET_CERTIFICATE
  4. enter negotiated state then issue the request under test

function: scripts/management/11_spdm_test.py::test_spdm_get_cert_with_invalid_slots

GET_CERTIFICATE with an unset slot number should return ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. obtain DIGESTS to discover the slot mask
  2. issue GET_CERTIFICATE for each slot not set in the mask and expect ERROR

function: scripts/management/11_spdm_test.py::test_spdm_get_challenge

Send CHALLENGE and validate CHALLENGE_AUTH signature against the responder certificate chain.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. Retrieve the complete certificate chain to obtain the ResponderPublicKey from the last certificate.
  3. Send CHALLENGE and capture CHALLENGE_AUTH.
  4. Verify CertChainHash equals the hash of the retrieved certificate chain.
  5. Build the signed message: ResponderChallengeAuthSigningContext followed by Hash of the transcript for CHALLENGE.
  6. Extract the signature and verify using the ResponderPublicKey from the leaf certificate.
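
Step 5's signed message follows the SPDM 1.2 combined-context layout: a 64-byte `dmtf-spdm-v1.2.*` prefix, zero padding up to 100 bytes including the signing context string, then the transcript hash M1. A sketch of the assembly (SHA-384 assumed as the negotiated hash; treat the exact layout as a reading of DSP0274, not PyNVMe3 code):

```python
import hashlib

# 16-byte version label repeated four times gives the 64-byte prefix.
SPDM_PREFIX = b"dmtf-spdm-v1.2.*" * 4
CONTEXT = b"responder-challenge_auth signing"

def challenge_auth_message(transcript: bytes) -> bytes:
    """Build the byte string whose signature CHALLENGE_AUTH carries."""
    m1_hash = hashlib.sha384(transcript).digest()  # assumed SHA-384
    pad = b"\x00" * (100 - len(SPDM_PREFIX) - len(CONTEXT))
    return SPDM_PREFIX + pad + CONTEXT + m1_hash
```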

function: scripts/management/11_spdm_test.py::test_spdm_challenge_with_invalid_version

CHALLENGE with unsupported Version encodings should return ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. perform VERSION discovery and collect supported VersionNumberEntry values
  2. prepare candidate invalid Version encodings
  3. for versions not advertised by the responder, expect ERROR on CHALLENGE

function: scripts/management/11_spdm_test.py::test_spdm_challenge_with_invalid_params

CHALLENGE with invalid parameters should return ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. obtain DIGESTS to learn the slot mask used for CHALLENGE
  3. issue CHALLENGE for each slot not set in the mask and expect ERROR
  4. param2 out of allowed range should return ERROR
  5. param2 near upper reserved boundary should return ERROR

function: scripts/management/11_spdm_test.py::test_spdm_challenge_check_nonce

Check CHALLENGE Nonce quality across repeated exchanges using a chi-square uniformity test.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. collect responder nonces from multiple CHALLENGE flows
  2. evaluate nonce byte distribution for near-uniformity

function: scripts/management/11_spdm_test.py::test_spdm_challenge_a1_b1_c1

Run CHALLENGE flow A1_B1_C1 with full VCA and certificate retrieval prior to CHALLENGE.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. GET_DIGESTS to obtain CertChainHash
  3. GET_CERTIFICATE to obtain leaf certificate
  4. Send CHALLENGE and verify response

function: scripts/management/11_spdm_test.py::test_spdm_challenge_a1_b3_c1

Run CHALLENGE flow A1_B3_C1 with DIGESTS after retrieving the certificate, then CHALLENGE.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. GET_CERTIFICATE to obtain leaf certificate
  3. Re-enter VERSION CAPABILITIES ALGORITHMS before CHALLENGE
  4. GET_DIGESTS to obtain CertChainHash
  5. Send CHALLENGE and verify response

function: scripts/management/11_spdm_test.py::test_spdm_challenge_a1_b4_c1

Run CHALLENGE flow A1_B4_C1 with certificate retrieval after VCA, then CHALLENGE.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. GET_DIGESTS to obtain CertChainHash
  3. Re-enter VERSION CAPABILITIES ALGORITHMS before CHALLENGE
  4. GET_CERTIFICATE to obtain leaf certificate
  5. Send CHALLENGE and verify response

function: scripts/management/11_spdm_test.py::test_spdm_challenge_a1_b2_c1

Run CHALLENGE flow A1_B2_C1 with CHALLENGE directly after VCA steps.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. GET_DIGESTS to obtain CertChainHash
  3. GET_CERTIFICATE to obtain leaf certificate
  4. Re-enter VERSION CAPABILITIES ALGORITHMS before CHALLENGE
  5. Send CHALLENGE and verify response

function: scripts/management/11_spdm_test.py::test_spdm_challenge_a2_b1_c1

Run CHALLENGE flow A2_B1_C1 under CACHE_CAP with DIGESTS and certificate before CHALLENGE.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Check if CACHE CAP is supported
  2. Send CHALLENGE, M1 and M2 shall be set to null
  3. GET_DIGESTS to obtain CertChainHash
  4. GET_CERTIFICATE to obtain leaf certificate
  5. Send CHALLENGE and verify response

function: scripts/management/11_spdm_test.py::test_spdm_challenge_a2_b3_c1

Run CHALLENGE flow A2_B3_C1 under CACHE_CAP with DIGESTS then CHALLENGE using a retrieved certificate.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Check if CACHE CAP is supported
  2. GET_CERTIFICATE to obtain leaf certificate
  3. Send CHALLENGE, M1 and M2 shall be set to null
  4. GET_DIGESTS to obtain CertChainHash
  5. Send CHALLENGE and verify response

function: scripts/management/11_spdm_test.py::test_spdm_challenge_a2_b4_c1

Run CHALLENGE flow A2_B4_C1 under CACHE_CAP with certificate retrieval then CHALLENGE.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Check if CACHE CAP is supported
  2. GET_DIGESTS to obtain CertChainHash
  3. Send CHALLENGE, M1 and M2 shall be set to null
  4. GET_CERTIFICATE to obtain leaf certificate
  5. Send CHALLENGE and verify response

function: scripts/management/11_spdm_test.py::test_spdm_challenge_a2_b2_c1

Run CHALLENGE flow A2_B2_C1 under CACHE_CAP with transcript handling then CHALLENGE only.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Check if CACHE CAP is supported
  2. GET_DIGESTS to obtain CertChainHash
  3. GET_CERTIFICATE to obtain leaf certificate
  4. Send CHALLENGE, M1 and M2 shall be set to null
  5. Send CHALLENGE and verify response

function: scripts/management/11_spdm_test.py::test_spdm_get_measurements_with_invalid_version

Probe GET_MEASUREMENTS with unsupported VersionNumber and expect ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. GET_DIGESTS to obtain CertChainHash
  3. GET_CERTIFICATE to obtain leaf certificate
  4. CHALLENGE to bind CertChainHash and responder nonce
  5. GET_MEASUREMENTS all measurements as baseline
  6. GET_VERSION to enumerate supported versions
  7. GET_CAPABILITIES negotiation baseline
  8. NEGOTIATE_ALGORITHMS negotiate base parameters
  9. GET_MEASUREMENTS with VersionNumber set to each unsupported value expect ERROR

function: scripts/management/11_spdm_test.py::test_spdm_get_measurements_invalid_order

Send GET_MEASUREMENTS before NEGOTIATE_ALGORITHMS and expect ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. GET_VERSION discovery
  2. GET_CAPABILITIES discovery
  3. GET_MEASUREMENTS before NEGOTIATE_ALGORITHMS expect ERROR

function: scripts/management/11_spdm_test.py::test_spdm_get_measurements_invalid_params

Validate GET_MEASUREMENTS parameter handling for index and slot id.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. GET_DIGESTS to obtain CertChainHash
  3. GET_CERTIFICATE to obtain leaf certificate
  4. CHALLENGE to bind CertChainHash
  5. GET_MEASUREMENTS param1=1 param2=0xFF request all measurements
  6. GET_MEASUREMENTS param1=1 param2=0x02 request by valid index
  7. GET_MEASUREMENTS param1=1 param2=num_blocks+1 out of range index expect ERROR
  8. GET_DIGESTS to read slot bitmap
  9. GET_MEASUREMENTS with slot_id not provisioned expect ERROR

function: scripts/management/11_spdm_test.py::test_spdm_get_measurements_check_response

Validate MEASUREMENTS structure length blocks and value types for GET_MEASUREMENTS.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. GET_VERSION discovery
  2. GET_CAPABILITIES discovery
  3. NEGOTIATE_ALGORITHMS select MeasurementHashAlgo
  4. GET_DIGESTS to obtain CertChainHash
  5. GET_CERTIFICATE to obtain leaf certificate
  6. CHALLENGE to bind CertChainHash
  7. GET_MEASUREMENTS param1=1 param2=0xFF request all measurements
  8. Parse MeasurementRecord into MeasurementBlock[]
  9. Check MeasurementSpecification and DMTF value type semantics
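
Step 8's parsing follows the SPDM 1.2 MeasurementBlock layout: Index (1 byte), MeasurementSpecification (1 byte), MeasurementSize (2 bytes little-endian), then the measurement itself. A sketch (the helper name is illustrative):

```python
import struct

def parse_measurement_record(record: bytes) -> list:
    """Walk a MeasurementRecord into its MeasurementBlock entries."""
    blocks, off = [], 0
    while off < len(record):
        index, spec, size = struct.unpack_from("<BBH", record, off)
        off += 4
        blocks.append({"index": index, "spec": spec,
                       "measurement": record[off:off + size]})
        off += size
    return blocks
```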

function: scripts/management/11_spdm_test.py::test_spdm_get_measurements_check_signature

Verify MEASUREMENTS signature for multiple GET_MEASUREMENTS operations.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. GET_DIGESTS to obtain CertChainHash
  3. GET_CERTIFICATE to obtain leaf certificate
  4. CHALLENGE to bind CertChainHash
  5. GET_MEASUREMENTS operation = 0x00 request current measurement

function: scripts/management/11_spdm_test.py::test_spdm_get_measurements_check_signature_in_session

Verify MEASUREMENTS signature for GET_MEASUREMENTS inside an established session.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Start a session
  2. GET_MEASUREMENTS operation = 0x00 within session
  3. Verify MEASUREMENTS signature

function: scripts/management/11_spdm_test.py::test_spdm_get_measurements_out_of_session

Verify MEASUREMENTS signature for GET_MEASUREMENTS without using a session.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Start a session
  2. Operate without using the secure session
  3. GET_MEASUREMENTS operation = 0xFF request all measurements
  4. Verify MEASUREMENTS signature

function: scripts/management/11_spdm_test.py::test_spdm_key_exchange_with_invalid_version

Probe KEY_EXCHANGE with unsupported VersionNumber and expect ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. GET_VERSION to enumerate supported versions
  2. GET_CAPABILITIES then NEGOTIATE_ALGORITHMS as baseline
  3. Prepare KEY_EXCHANGE exchange_data
  4. KEY_EXCHANGE with each unsupported VersionNumber expect ERROR

function: scripts/management/11_spdm_test.py::test_spdm_key_exchange_with_invalid_order

Send KEY_EXCHANGE before NEGOTIATE_ALGORITHMS and expect ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. GET_VERSION then GET_CAPABILITIES
  2. KEY_EXCHANGE attempted before NEGOTIATE_ALGORITHMS expect ERROR

function: scripts/management/11_spdm_test.py::test_spdm_key_exchange_in_session

Send KEY_EXCHANGE within an established session and expect ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Operate within an established secure session
  2. Update application key schedule before sending application-level requests
  3. KEY_EXCHANGE in-session must be rejected

function: scripts/management/11_spdm_test.py::test_spdm_key_exchange_open_two_sessions

Issue KEY_EXCHANGE to create a new session and ensure SessionID differs from existing.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Start a session
  2. Update application key schedule before sending application-level requests
  3. KEY_EXCHANGE to establish a new session

function: scripts/management/11_spdm_test.py::test_spdm_key_exchange_invalid_params

Validate KEY_EXCHANGE parameter handling for slot and measurement selection.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. GET_DIGESTS to read slot bitmap
  2. Prepare KEY_EXCHANGE exchange_data
  3. KEY_EXCHANGE with an unprovisioned slot expect ERROR
  4. KEY_EXCHANGE with Param1 beyond TcbMeasurements expect ERROR
  5. KEY_EXCHANGE with Param1 below AllMeasurements expect ERROR

function: scripts/management/11_spdm_test.py::test_spdm_handshake_in_the_clear

Exercise KEY_EXCHANGE with HANDSHAKE_IN_THE_CLEAR_CAP = 1 and verify responder signature.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. GET_VERSION for discovery
  2. GET_CAPABILITIES with HANDSHAKE_IN_THE_CLEAR_CAP requested then check negotiated flags
  3. NEGOTIATE_ALGORITHMS to select base parameters
  4. Include CertChainHash into transcript
  5. Prepare ephemeral key for KEY_EXCHANGE exchange_data
  6. KEY_EXCHANGE with different Param1 and session_policy = 1

function: scripts/management/11_spdm_test.py::test_spdm_finish_with_invalid_version

Send FINISH with an unsupported VersionNumber and expect ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. Add CertChainHash to transcript
  3. KEY_EXCHANGE to establish session context
  4. Derive handshake secrets
  5. FINISH with invalid VersionNumber expect ERROR

function: scripts/management/11_spdm_test.py::test_spdm_finish_with_invalid_order

Send FINISH without prior KEY_EXCHANGE and expect ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. GET_VERSION for discovery
  2. GET_CAPABILITIES for discovery
  3. FINISH without session establishment expect ERROR

function: scripts/management/11_spdm_test.py::test_spdm_finish_in_session

Complete FINISH after KEY_EXCHANGE then attempt another FINISH within the session and expect ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. Add CertChainHash to transcript
  3. KEY_EXCHANGE to establish session context
  4. Derive handshake secrets
  5. FINISH to transition to application keys
  6. FINISH again within the established session expect ERROR

function: scripts/management/11_spdm_test.py::test_spdm_finish_with_invalid_params

Send FINISH with invalid parameters and expect ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. Add CertChainHash to transcript
  3. KEY_EXCHANGE to establish session context
  4. Derive handshake secrets
  5. FINISH with invalid VersionNumber expect ERROR
  6. End session

function: scripts/management/11_spdm_test.py::test_spdm_finish_handshake_in_the_clear

Complete FINISH when HANDSHAKE_IN_THE_CLEAR_CAP = 1.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. VERSION CAPABILITIES with HANDSHAKE_IN_THE_CLEAR_CAP then ALGORITHMS
  2. Add CertChainHash to transcript
  3. KEY_EXCHANGE to establish session context
  4. Derive handshake secrets
  5. FINISH in clear per negotiated capability
  6. Send END_SESSION request and receive response

function: scripts/management/11_spdm_test.py::test_spdm_finish_outside_session

Send FINISH without a session and expect ERROR 0x0B.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. FINISH without KEY_EXCHANGE or session establishment expect ERROR

function: scripts/management/11_spdm_test.py::test_spdm_heartbeat_with_invalid_version

Send HEARTBEAT with an unsupported VersionNumber and expect ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Start a session
  2. HEARTBEAT with VersionNumber = 0xFF should return ERROR

function: scripts/management/11_spdm_test.py::test_heartbeat_in_handshake

Attempt HEARTBEAT during handshake phase and expect ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. incorporate CertChainHash into transcript to begin handshake
  3. KEY_EXCHANGE to start session establishment
  4. derive handshake keys prior to FINISH
  5. HEARTBEAT during handshake should return ERROR
  6. End session

function: scripts/management/11_spdm_test.py::test_key_update_with_invalid_params

Send KEY_UPDATE with an unsupported VersionNumber and expect ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Start a session
  2. HEARTBEAT to confirm liveness of the session
  3. KEY_UPDATE with VersionNumber = 0xFF should return ERROR

function: scripts/management/11_spdm_test.py::test_end_session_with_invalid_params

Send END_SESSION with an unsupported VersionNumber and expect ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. incorporate CertChainHash into transcript then establish a session
  3. KEY_EXCHANGE to begin session setup
  4. derive handshake keys and complete FINISH
  5. END_SESSION with VersionNumber = 0xFF should return ERROR
  6. End session

function: scripts/management/11_spdm_test.py::test_end_session_before_finish

Send END_SESSION before FINISH.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. incorporate CertChainHash into transcript then establish a session
  3. KEY_EXCHANGE to begin session setup
  4. derive handshake keys but withhold FINISH
  5. send END_SESSION before FINISH and observe the response
  6. End session

function: scripts/management/11_spdm_test.py::test_spdm_end_session_outside_session

Send END_SESSION without a session and expect ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. END_SESSION outside any session should return ERROR 0x0B

function: scripts/management/11_spdm_test.py::test_spdm_get_chunk

Retrieve a complete DIGESTS payload using CHUNK_GET and validate it matches the baseline DIGESTS.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. Establish a baseline DIGESTS for later comparison.
  3. Negotiate a small DataTransferSize to trigger chunked transfer.
  4. Request DIGESTS expecting LARGE_RESPONSE that mandates CHUNK_GET.
  5. Use CHUNK_GET to fetch the full DIGESTS payload in sequence and reassemble it.
  6. Verify the reassembled payload equals the baseline DIGESTS.
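
Step 5's reassembly is a sequential loop over chunk sequence numbers. A sketch, where `chunk_get` is a hypothetical transport helper returning `(chunk_data, last_chunk)`:

```python
def reassemble(chunk_get) -> bytes:
    """Fetch chunks starting at chunk_seq_no 0 until LastChunk is set."""
    payload, seq = b"", 0
    while True:
        data, last = chunk_get(seq)
        payload += data
        if last:
            return payload
        seq += 1
```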

function: scripts/management/11_spdm_test.py::test_spdm_heartbeat_requires_session

Send HEARTBEAT without an established session and expect ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Issue GET_VERSION, GET_CAPABILITIES, and NEGOTIATE_ALGORITHMS to construct VCA
  2. send HEARTBEAT before creating a session and assert the responder returns an ERROR response

function: scripts/management/11_spdm_test.py::test_spdm_heartbeat_multiple_with_seq

Send repeated HEARTBEAT to validate liveness on the SessionId and expect HEARTBEAT_ACK.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Start a session
  2. send HEARTBEAT and expect HEARTBEAT_ACK
  3. send another HEARTBEAT and expect HEARTBEAT_ACK
  4. send a third HEARTBEAT and expect HEARTBEAT_ACK

function: scripts/management/11_spdm_test.py::test_spdm_key_update_flow

Verify KEY_UPDATE basic functionality over an established session.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Start a session
  2. liveness check before updates
  3. update all keys → expect KEY_UPDATE_ACK
  4. verify new keys → expect KEY_UPDATE_ACK
  5. liveness check after update
  6. update single direction → expect KEY_UPDATE_ACK
  7. verify again → expect KEY_UPDATE_ACK
  8. final liveness check

function: scripts/management/11_spdm_test.py::test_spdm_cmds_in_session_invalid

Send GET_CAPABILITIES and NEGOTIATE_ALGORITHMS inside a session and expect ERROR.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Start a session
  2. GET_CAPABILITIES is a connection-state command and must not be accepted in session
  3. NEGOTIATE_ALGORITHMS is a connection-state command and must not be accepted in session

function: scripts/management/11_spdm_test.py::test_spdm_get_measurements_with_io

Continuously issue GET_MEASUREMENTS during host I/O and verify MEASUREMENTS signature in-session.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. Start a session
  2. drive mixed read/write I/O while repeatedly validating GET_MEASUREMENTS signature over the same session

function: scripts/management/11_spdm_test.py::test_spdm_get_chunk_out_of_order

Request CHUNK_GET out of order and expect ERROR; the transfer is accepted only once chunk_seq_no starts at 0.

Reference

  1. SPDM Specification Revision 1.2.3 (DMTF DSP0274 v1.2.3).

Steps

  1. GET_DIGESTS twice to capture baseline size and decide if chunking is needed
  2. VERSION → CAPABILITIES(DataTransferSize=0x40) → NEGOTIATE_ALGORITHMS to enable chunk transfer path
  3. GET_DIGESTS should return ERROR 0x0F indicating chunked transfer required
  4. CHUNK_GET with out-of-order chunkseq_no should return ERROR
  5. CHUNK_GET with chunkseq_no=0 should be accepted to start the transfer

file: scripts/management/12_vdm_test

function: scripts/management/12_vdm_test.py::test_mi_admin_identify_vdm_smbus_concurrent

Exercise MI Identify via VDM and SMBus concurrently and compare with NVMe Admin.

Reference

  1. Based on NVM Express Management Interface Revision 1.2c.

Steps

  1. Send Identify command via MI (partial data retrieval)
  2. Send Identify command via NVMe Admin and compare partial data
  3. Retrieve full Identify data through VDM
  4. Retrieve full Identify data through SMBus
  5. Retrieve full Identify data through VDM and SMBus concurrently

function: scripts/management/12_vdm_test.py::test_mi_ep_buf_differnt_length

Validate Management Endpoint Buffer read/write across varying transfer lengths.

Reference

  1. Based on NVM Express Management Interface Revision 1.2c.

Steps

  1. Read SMBus Port Information to determine if Management Endpoint Buffer is supported
  2. Write Management Endpoint Buffer with deterministic pattern
  3. Read Management Endpoint Buffer and verify payload

function: scripts/management/12_vdm_test.py::test_mi_read_mi_data_structure

Verify NVMe-MI data structure reads for subsystem and port discovery.

Reference

  1. Based on NVM Express Management Interface Revision 1.2c.

Steps

  1. Read NVM Subsystem Information (DTYP = 00h)
  2. Read Port Information (DTYP = 01h) for SMBus port
  3. Log port information for each available port
  4. Validate Maximum MCTP Transmission Unit Size
  5. Validate Management Endpoint Buffer Size

function: scripts/management/12_vdm_test.py::test_mi_vpd_read_length

Confirm VPD read handling when requested length and offset exceed available data.

Reference

  1. Based on NVM Express Management Interface Revision 1.2c.

Steps

  1. A VPD Read command with length 0 and no data is valid
  2. Valid VPD read request within size limits
  3. VPD read request with Data Length exceeding VPD size (invalid)
  4. VPD read request with Data Offset + Data Length exceeding VPD size (invalid)

function: scripts/management/12_vdm_test.py::test_mi_admin_timestamp

Validate MI timestamp set/get consistency across power states.

Reference

  1. Based on NVM Express Management Interface Revision 1.2c.

Steps

  1. Set timestamp via MI command while targeting the requested power state
  2. Set the requested power state and confirm support
  3. Repeat timestamp retrieval and validate monotonicity via MI
  4. Restore controller to PS0

function: scripts/management/12_vdm_test.py::test_mi_admin_identify

Compare MI Identify data against NVMe Admin Identify results.

Reference

  1. Based on NVM Express Management Interface Revision 1.2c.

Steps

  1. Send Identify command via MI (partial data retrieval)
  2. Send Identify command via NVMe Admin and compare partial data
  3. Retrieve the entire Identify data through MI
  4. Compare the entire Identify data from MI and NVMe Admin

function: scripts/management/12_vdm_test.py::test_spdm_get_version

Validate SPDM GET_VERSION response handling.

Reference

  1. Based on SPDM Specification Revision 1.0.2.

Steps

  1. Send GET_VERSION request and receive response

file: scripts/management/13_fru_test

function: scripts/management/13_fru_test.py::test_fru_read

Validate FRU common header integrity via direct SMBus reads.

Reference

  1. Based on NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Read the FRU common header over SMBus
  2. Verify the version and checksum in the FRU header
  3. Read the entire FRU content for baseline comparison
  4. Read the FRU content in two segments to confirm offset reset
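
The header verification in step 2 follows the IPMI FRU convention: an 8-byte common header whose bytes sum to zero modulo 256, with format version 1 in the low nibble of the first byte. A sketch (field layout assumed from the IPMI Platform Management FRU specification):

```python
def fru_header_ok(header: bytes) -> bool:
    """Check the FRU common header: version nibble and zero checksum."""
    return (len(header) == 8
            and header[0] & 0x0F == 0x01   # format version 1
            and sum(header) % 256 == 0)     # all bytes sum to zero mod 256
```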

function: scripts/management/13_fru_test.py::test_fru_read_vpd

Validate that MI VPD reads match SMBus FRU reads for the same offsets.

Reference

  1. Based on NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Prepare buffer placeholder for VPD comparison
  2. Read VPD content via the MI VPD Read command
  3. Read the same VPD region directly over SMBus
  4. Compare MI VPD read data to direct SMBus read
  5. Verify that invalid FRU slave address is rejected

function: scripts/management/13_fru_test.py::test_fru_read_power_cycle

Confirm FRU offset resets after a full power cycle using SMBus reads.

Reference

  1. Based on NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Capture FRU content with initial power applied
  2. Perform a full power cycle of the drive
  3. Read FRU content again to ensure offset reset to zero
  4. The internal offset shall be cleared to 0h following a power cycle of the FRU Information Device

function: scripts/management/13_fru_test.py::test_fru_product_info_area_fields

Inspect Product Info Area fields for completeness and checksum validity over SMBus.

Reference

  1. Based on NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Define expected Product Info labels and required entries
  2. Read the FRU common header to locate Product Info Area
  3. Read the full FRU image for field extraction
  4. Validate Product Info Area checksum and format
  5. Iterate through TLV entries and verify content
  6. Treat zero-length entries as padding or explicitly empty fields
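
The TLV iteration in step 5 can be sketched with the IPMI FRU type/length encoding: the high two bits of each type/length byte carry the type code, the low six bits the length, and 0xC1 terminates the area (helper name is illustrative):

```python
def iter_fru_fields(area: bytes, start: int = 0):
    """Yield (type_code, value) pairs from an IPMI FRU info area."""
    off = start
    while off < len(area):
        tl = area[off]
        if tl == 0xC1:                 # end-of-fields marker
            return
        length = tl & 0x3F             # low six bits: field length
        yield (tl >> 6, area[off + 1:off + 1 + length])
        off += 1 + length
```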

function: scripts/management/13_fru_test.py::test_fru_nvme_multirecord_area

Decode NVMe and PCIe MultiRecord entries from the FRU over SMBus.

Reference

  1. Based on NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Read FRU header to determine MultiRecord offset
  2. Read the full FRU image to access MultiRecord data
  3. Parse each MultiRecord entry and validate checksums
  4. Log NVMe-specific MultiRecord details when present
  5. Log PCIe port MultiRecord details when present

function: scripts/management/13_fru_test.py::test_fru_i2c_write

Ensure VPD writes over SMBus are blocked when MI VPD Write is supported.

Reference

  1. Based on NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Check whether the MI VPD Write command is supported
  2. Prepare data for a negative VPD write attempt
  3. Confirm SMBus VPD write is rejected when MI write capability exists

function: scripts/management/13_fru_test.py::test_fru_read_with_aux_power_only

Validate FRU SMBus readability when only auxiliary power is supplied.

Reference

  1. Based on NVM Express Management Interface Specification Revision 1.2c.

Steps

  1. Skip if auxiliary power rail cannot be controlled
  2. Skip if platform does not provide U.2 style auxiliary rail
  3. Transition into auxiliary-power-only state
  4. Attempt SMBus static-data read while only aux power is applied
  5. Restore main power and wait before resuming access
  6. Verify auxiliary-only data matches after restoring main power
  7. Perform a final sanity check with another SMBus read under full power

Suite: scripts/production

folder: scripts/production

file: scripts/production/01_normal_io_test

This file contains long-duration IO tests aimed at evaluating the reliability, endurance, and performance of NVMe SSDs under sustained workloads. The tests cover various read/write patterns, including random and sequential operations with different block sizes and ratios, running for extended periods from 30 minutes to several days. These tests are designed to ensure that the NVMe SSDs can handle continuous stress, identify potential issues, and verify that the devices meet required performance and stability standards over their expected lifespan.

file: scripts/production/02_mix_io_test

This file contains a series of mixed IO tests aimed at evaluating the performance and reliability of NVMe SSDs under various conditions, including different block sizes, read/write ratios, and IO patterns over extended durations. The tests simulate real-world workloads by varying parameters such as queue depth and block size, switching between random and sequential operations, and collecting performance data. These tests are designed to stress the SSD and ensure it can handle diverse and intensive usage scenarios.

file: scripts/production/03_data_model_test

This file contains a series of data model tests designed to simulate real-world workloads on NVMe SSDs. Each test emulates different application scenarios, such as cloud computing, SQL databases, and content delivery networks, by varying parameters like block size, read/write ratio, and randomness. The purpose of these tests is to assess the SSD’s performance, endurance, and reliability under conditions that mimic actual usage patterns in diverse environments.

file: scripts/production/04_trim_format_test

This file includes a series of tests focused on assessing NVMe SSD performance under various conditions, particularly during and after trim operations. The tests simulate workloads that involve sequential and random writes, followed by trim operations and subsequent performance evaluations. These scenarios help determine how effectively the SSD maintains performance when managing trimmed data and handling mixed IO patterns over extended periods.

file: scripts/production/05_small_range_test

This file contains a set of tests designed to evaluate the performance and reliability of NVMe SSDs by executing various read and write operations on specific LBA ranges and random regions within the drive. The tests focus on stressing the SSD with different workloads, such as repeated reads and writes on the same or multiple LBAs, and small range operations. These scenarios are intended to simulate real-world usage patterns and assess how the SSD manages data across its storage space over extended durations.

file: scripts/production/06_jesd_workload_test

This file includes a test case designed to evaluate NVMe SSD performance and endurance under a JEDEC JESD 219 workload, which simulates a typical client workload for solid-state drives. The test involves a sequence of operations: a full drive sequential write with 128KB block sizes, followed by 4KB random writes, and concluding with a workload distribution that mimics real-world usage scenarios. The purpose is to assess how well the SSD handles sustained writes, mixed workloads, and different IO patterns over an extended period.
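The JESD219-style access pattern described above can be approximated with a weighted LBA picker. A commonly cited client split, used here purely as illustrative weights, sends 50% of accesses to the first 5% of the LBA space, 30% to the next 15%, and 20% to the remainder:

```python
import random

def jesd219_lba(max_lba: int, rng: random.Random) -> int:
    """Pick a target LBA following a hot/warm/cold split often used
    for JESD219-style client workloads (weights are illustrative):
    50% of accesses land in the first 5% of the LBA space,
    30% in the next 15%, and 20% in the remaining 80%."""
    r = rng.random()
    if r < 0.50:
        lo, hi = 0, int(max_lba * 0.05)          # hot region
    elif r < 0.80:
        lo, hi = int(max_lba * 0.05), int(max_lba * 0.20)  # warm
    else:
        lo, hi = int(max_lba * 0.20), max_lba    # cold
    return rng.randrange(lo, hi)
```

The actual test drives this distribution through the framework's I/O workers; the sketch only shows the shape of the workload.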

file: scripts/production/07_power_cycle_test

This script automates power cycling tests on NVMe SSDs to assess their response times and reliability under different power loss conditions. It focuses on simulating both sudden (dirty) and typical (clean) power cycles. These tests help evaluate the SSD’s resilience and ability to maintain data integrity across 1000 cycles, ensuring the device meets stringent durability standards.

file: scripts/production/08_wl_stress_test

This script conducts a wear leveling test on NVMe SSDs to evaluate their endurance and efficiency in managing data distribution across the memory cells. The test involves sequential and random write operations to different regions of the drive, simulating hot and cold data scenarios, and triggers wear leveling and garbage collection processes. The test measures IOPS (Input/Output Operations Per Second) throughout the operations and generates performance diagrams to assess the effectiveness of wear leveling. The script also includes power cycling and full-drive verification steps to ensure data integrity post-testing.

file: scripts/production/09_io_stress_test

This script conducts a comprehensive stress test on NVMe SSDs to validate their stability, performance, and error handling capabilities under prolonged and varied workloads. The test involves running multiple randomized I/O operations on different namespaces concurrently with NVMe Admin commands, MI commands, and events (e.g. power cycle, and reset). This approach helps assess the SSD’s resilience and readiness for deployment in demanding environments.

Suite: scripts/placement

folder: scripts/placement

file: scripts/placement/01_basic_test

function: scripts/placement/01_basic_test.py::test_config_configure_fdp_parameters

Ensure that FDP parameters like the Flexible Data Placement feature are correctly applied when configured.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Delete all namespaces
  2. Enable FDP with default configuration index 0
  3. Verify FDP is enabled
  4. Check Identify Controller for FDP support
  5. Disable FDP
  6. Verify FDP is disabled
  7. Create a namespace
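The enable/disable toggling in steps 2 and 5 comes down to packing CDW11 for the FDP Set Features command. The field positions below are assumed from TP4146 (bit 0 as the FDP enable, bits 15:08 as the configuration index) and should be checked against the ratified text:

```python
def fdp_feature_cdw11(enable: bool, config_index: int) -> int:
    """Pack CDW11 for the FDP feature (layout assumed from TP4146:
    bit 0 = FDP enable, bits 15:08 = FDP configuration index)."""
    assert 0 <= config_index <= 0xFF
    return (config_index << 8) | (1 if enable else 0)

# Step 2: enable FDP with the default configuration index 0
assert fdp_feature_cdw11(True, 0) == 0x0001
# Step 5: disable FDP again
assert fdp_feature_cdw11(False, 0) == 0x0000
```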

function: scripts/placement/01_basic_test.py::test_config_enable_fdp_with_namespace_created

Confirm that a Set Features command specifying the FDP feature is aborted with “Command Sequence Error” if it enables FDP when a namespace is created.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Delete all namespaces
  2. Create a namespace in the Endurance Group
  3. Attempt to set FDP with an existing namespace
  4. Expect a Command Sequence Error
  5. Delete the namespace via nvme0.ns_delete() to clean up

file: scripts/placement/02_logpage_test

function: scripts/placement/02_logpage_test.py::test_config_endurance_group_identifier

Validate the Endurance Group Identifier field to ensure it correctly specifies the Endurance Group.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 5.16.1

Steps

  1. Allocate a buffer for the log page data (16 bytes for header)
  2. Issue a Get Log Page command to retrieve the FDP Configurations log page (Log Page Identifier 0x20)
  3. Assert that the Version field (Byte 02) in the log page header is cleared to 0h, as required by the spec.
  4. Issue another Get Log Page command with an invalid Log Specific Identifier (0 instead of 1).
  5. Expect a warning with “ERROR status: 00/02”, indicating Invalid Field in Command
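The header check in step 3 can be sketched with a plain struct parse. The offsets are assumptions taken from the steps above (count in bytes 1:0, Version in byte 02, total size in bytes 7:4), and the count is treated as 0-based here, which should be confirmed against TP4146:

```python
import struct

def parse_fdp_config_header(hdr: bytes):
    """Parse the 16-byte FDP Configurations log page (20h) header.
    Assumed layout: bytes 1:0 = number of FDP configurations
    (treated as 0-based), byte 02 = Version (shall be 0h),
    bytes 7:4 = total log page size in bytes."""
    assert len(hdr) >= 16
    num_conf, version = struct.unpack_from("<HB", hdr, 0)
    (size,) = struct.unpack_from("<I", hdr, 4)
    assert version == 0, "Version byte shall be cleared to 0h"
    return num_conf + 1, size
```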

function: scripts/placement/02_logpage_test.py::test_config_validate_fdp_configuration_descriptors

Verify the validity of FDP Configuration Descriptors, specifically the FDP Configuration Valid bit and associated fields in the FDP Configuration Descriptor.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 5.16.1

Steps

  1. Retrieve the FDP Configurations log page header (first 16 bytes)
  2. Extract the total size of the log page and the number of FDP configurations
  3. Initialize the offset to start reading FDP Configuration Descriptors
  4. Iterate through each FDP Configuration Descriptor
  5. Retrieve the first 16 bytes of the current configuration descriptor
  6. Extract the size of the descriptor to prepare for full retrieval
  7. Retrieve the full configuration descriptor based on its size
  8. Check the Reclaim Group Identifier Format (RGIF); normally only a single Reclaim Group is reported
  9. Check the FDP Configuration Valid bit (Bit 7 in Byte 02)
  10. Ensure that the required fields have non-zero values
  11. Extract the number of Reclaim Unit Handles (NRUH)
  12. Verify that each Reclaim Unit Handle type is either 1 (Host Specified) or 2 (Controller Specified)
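Steps 4 through 7 walk the descriptor list purely by each descriptor's embedded size field. A minimal sketch of that walk, using only the facts stated in the steps (descriptors start after the 16-byte header, size in bytes 1:0 of each descriptor):

```python
import struct

def iter_fdp_config_descriptors(log: bytes, num_conf: int):
    """Yield each FDP Configuration Descriptor from log page 20h.
    Each descriptor begins with its own size in bytes 1:0, so the
    walk is driven entirely by that field."""
    off = 16                          # descriptors follow the header
    for _ in range(num_conf):
        (desc_size,) = struct.unpack_from("<H", log, off)
        yield log[off:off + desc_size]
        off += desc_size
```

Per-field checks (FDP Configuration Valid bit, NRUH, RUH types) then operate on each yielded slice.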

function: scripts/placement/02_logpage_test.py::test_handle_usage_fdp_disabled

Verify that the controller aborts the Get Log Page command with an FDP Disabled error when FDP is not enabled.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 5.16.1

Steps

  1. Allocate buffer for log page retrieval
  2. Issue Get Log Page command for Reclaim Unit Handle Usage (Log Page Identifier 21h)
  3. Expect an error with status 00/29 if FDP is disabled

function: scripts/placement/02_logpage_test.py::test_handle_usage_fdp_enabled

Verify that the Get Log Page command retrieves the Reclaim Unit Handle Usage page when FDP is enabled.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 5.16.1

Steps

  1. Allocate buffer for log page retrieval
  2. Note that the NSID field is reserved when FDP is enabled
  3. Issue Get Log Page command with NSID left reserved (should succeed)
  4. Verify the Number of Reclaim Unit Handles (NRUH) is non-zero; this field identifies the number of Reclaim Unit Handle Usage Descriptors in the Reclaim Unit Handle Usage Descriptor List

function: scripts/placement/02_logpage_test.py::test_handle_usage_descriptor

Verify that Reclaim Unit Handle Usage descriptors are correctly retrieved and processed.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 5.16.1

Steps

  1. Retrieve the Reclaim Unit Handle Usage log page header (first 8 bytes)
  2. Extract the number of Reclaim Unit Handles (NRUH)
  3. Initialize offset for retrieving descriptors and a counter for Controller Specified entries
  4. Iterate through all Reclaim Unit Handles and validate the attributes
  5. Allocate buffer for the descriptor (8 bytes per descriptor)
  6. Retrieve each Reclaim Unit Handle Usage descriptor
  7. Extract the attributes from the descriptor
  8. Check that the attribute is “not used by a namespace” (0h)
  9. Count the entries marked as “Controller Specified” (2h)
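The attribute tally in steps 7 to 9 can be sketched as a simple pass over the descriptor list. The layout here is an assumption consistent with the steps above (NRUH in the header's first two bytes, 8-byte descriptors, attribute value in byte 0 of each descriptor):

```python
import struct
from collections import Counter

def count_ruh_attributes(log: bytes) -> Counter:
    """Tally Reclaim Unit Handle Usage attributes from log page 21h.
    Assumed layout: header bytes 1:0 hold NRUH; each 8-byte
    descriptor carries its attribute in byte 0 (0h = not used by a
    namespace, 1h = Host Specified, 2h = Controller Specified)."""
    (nruh,) = struct.unpack_from("<H", log, 0)
    counts = Counter()
    for i in range(nruh):
        attr = log[8 + 8 * i] & 0x7
        counts[attr] += 1
    return counts
```

With no namespace attached the test expects only 0h entries; the follow-on test with a namespace expects 1h (Host Specified) entries instead.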

function: scripts/placement/02_logpage_test.py::test_handle_usage_descriptor_with_ns

Verify that Reclaim Unit Handle Usage descriptors are correctly retrieved for a namespace.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 5.16.1

Steps

  1. Retrieve the Reclaim Unit Handle Usage log page header (first 8 bytes)
  2. Extract the number of Reclaim Unit Handles (NRUH)
  3. Initialize offset for retrieving descriptors and a counter for Controller Specified entries
  4. Iterate through all Reclaim Unit Handles and validate the attributes
  5. Allocate buffer for the descriptor (8 bytes per descriptor)
  6. Retrieve each Reclaim Unit Handle Usage descriptor
  7. Extract the attributes from the descriptor
  8. Check that the attribute is “Host Specified” (1h) when associated with a namespace

function: scripts/placement/02_logpage_test.py::test_statistics_fdp_disabled

Verify that when FDP is disabled, the controller aborts the Get Log Page command with an FDP Disabled status.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 5.16.1

Steps

  1. When FDP is disabled, expect an error with FDP Disabled status (00/29)

function: scripts/placement/02_logpage_test.py::test_statistics_written_counter

Verify the FDP Statistics Log Page (Log Page Identifier 22h) for correct reporting of the host and media bytes written counters.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 5.16.1

Steps

  1. Wait for counter updates (SSD stats update interval)
  2. Extract Host Bytes with Metadata Written (HBMW) and Media Bytes with Metadata Written (MBMW)
  3. Ensure the counters are reset to 0 after the FDP feature is modified via Set Features command
  4. Write data to generate Host and Media bytes
  5. Calculate the expected bytes written (based on LBA and sector size)
  6. Retrieve the counters again after the I/O operation
  7. Validate that the Host Bytes and Media Bytes are updated as expected
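The expectation in steps 5 to 7 is plain arithmetic: the Host Bytes with Metadata Written counter should grow by the number of blocks written times the formatted LBA size, while the media counter may grow by at least that much (write amplification adds more). A sketch, with the 512-byte sector size as an illustrative default:

```python
def expected_host_bytes(nlba: int, sector_size: int = 512) -> int:
    """Expected HBMW growth from a write of `nlba` logical blocks:
    blocks times the formatted LBA size."""
    return nlba * sector_size

# A write of 8 blocks at 512 B/block should move HBMW by 4 KiB
hbmw_before = 1_000_000
hbmw_after = hbmw_before + 8 * 512
assert hbmw_after - hbmw_before == expected_host_bytes(8)
```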

function: scripts/placement/02_logpage_test.py::test_statistics_nsid_reserved

Verify that NSID is reserved in the FDP Statistics Log Page (Log Page Identifier 22h) when FDP is enabled.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 5.16.1

Steps

  1. Allocate buffer for log page retrieval
  2. Retrieve the log page with NSID set to reserved value (0)
  3. Invalid NSID values should trigger an error

function: scripts/placement/02_logpage_test.py::test_statistics_after_sanitize

Confirm that a Reclaim Unit Handle is modified to reference a different Reclaim Unit as part of performing a sanitize operation.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Write data to all Placement Identifiers (PIDs) before the sanitize operation
  2. Capture statistics before performing sanitize operation
  3. Perform sanitize operation
  4. Capture statistics after performing sanitize operation
  5. Verify that the Reclaim Unit Handle has been modified
  6. Assert that after sanitize, all Reclaim Unit Handles have the same remaining LBA and time

file: scripts/placement/03_features_test

function: scripts/placement/03_features_test.py::test_setting_default_value

Verify that the default value of the Flexible Data Placement (FDP) feature is 0h.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 5.27.1

Steps

  1. Retrieve the current FDP feature value
  2. Retrieve the default FDP feature value

function: scripts/placement/03_features_test.py::test_setting_default_value_fdp_enabled

Verify that the default value of the Flexible Data Placement (FDP) feature is 0h.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 5.27.1

Steps

  1. Retrieve the current FDP feature value
  2. Retrieve the default FDP feature value

function: scripts/placement/03_features_test.py::test_setting_saveable_bit

Verify that the FDP feature can only be modified when the SV bit is set to 1 in the Set Features command.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 5.27.1

Steps

  1. Check if ONCS bit 4 is set (FDP support)
  2. Try to set FDP feature without SV bit (expecting failure)
  3. Set FDP feature with SV bit set to 1 (should succeed)
  4. Verify FDP feature was updated
  5. Reset FDP feature to default (0h)

function: scripts/placement/03_features_test.py::test_setting_enable_fdp_with_namespace

Verify that a namespace can be created properly, but attempting to modify the FDP feature after namespace creation results in a “Command Sequence Error”.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 8.TBD.1

Steps

  1. Create a single namespace using the provided utility function
  2. Attempt to modify the FDP feature (set it to ‘1’)
  3. Expecting a “Command Sequence Error” (ERROR status: 00/0c)
  4. Attempt to disable the FDP feature (set it to ‘0’)
  5. Delete the namespace to clean up

function: scripts/placement/03_features_test.py::test_setting_double_associated

Verify that a specific Reclaim Unit Handle cannot be associated with more than one Placement Handle per namespace.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Attempt to create a namespace with the same RUH (0) associated with multiple PHs
  2. Attempt to create a namespace where RUH 1 is associated with PH 1 and PH 2
  3. Attempt to create a namespace with multiple RUHs, some being duplicated
  4. Create a valid namespace with unique RUH associations

function: scripts/placement/03_features_test.py::test_setting_resume_controller_reset

Verify that following a controller level reset, Flexible Data Placement (FDP) is properly resumed for an Endurance Group.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Retrieve and store the current FDP feature value and configuration index
  2. Retrieve and store the current Reclaim Unit Handle (RUH) Status data
  3. Perform a controller level reset
  4. Check if FDP feature is properly resumed by verifying the FDP configuration index and status
  5. Verify that the RUH status matches the previous state for each namespace in the Endurance Group

file: scripts/placement/04_io_test

function: scripts/placement/04_io_test.py::test_reclaim_update_on_full_capacity

Verify that when a Reclaim Unit is written to capacity, the controller updates

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Retrieve initial Reclaim Unit Handle mapping and total LBA count
  2. Define I/O parameters for filling the Reclaim Unit
  3. Perform continuous write operations to fill the Reclaim Unit using ioworker
  4. After ioworker completes, get the remaining LBA count
  5. Perform additional writes for 5 seconds to trigger RUH update
  6. Get the updated LBA count after the short write
  7. Verify that the remaining LBA count has increased, indicating RUH update

function: scripts/placement/04_io_test.py::test_management_receive_numd

Verify the behavior of the Number of Dwords (NUMD) field for the I/O Management Receive command (Opcode 0x12).

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 7.TBDIOMR

Steps

  1. Retrieve the full 128 bytes data structure
  2. Retrieve only 32 bytes (8 DWORDs)
  3. Retrieve with a larger NUMD, expecting full data structure (128 bytes)
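The three cases above differ only in the NUMD value. Per the usual NVMe convention, NUMD is a 0-based dword count (an assumption to confirm against TP4146 for this command), so converting a byte length is a one-liner:

```python
def numd_for_bytes(nbytes: int) -> int:
    """NUMD for I/O Management Receive: the transfer length as a
    0-based dword count (assumed NVMe 0-based convention),
    so 128 bytes -> 31 and 32 bytes -> 7."""
    assert nbytes % 4 == 0 and nbytes > 0
    return nbytes // 4 - 1

assert numd_for_bytes(128) == 31   # step 1: full data structure
assert numd_for_bytes(32) == 7     # step 2: 8 dwords only
```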

function: scripts/placement/04_io_test.py::test_management_invalid_operation

Verify the behavior of the Management Operation (MO) field for the I/O Management Receive command (Opcode 0x12).

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 7.TBDIOMR

Steps

  1. Valid operation: No action (MO=00h)
  2. Valid operation: Reclaim Unit Handle Status (MO=01h)
  3. Invalid operation: MO=02h (reserved)

function: scripts/placement/04_io_test.py::test_management_fdp_disabled

Verify that if FDP is disabled in the Endurance Group, the I/O Management Receive command is aborted with “FDP Disabled” status.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 7.TBDIOMR

Steps

  1. Use Namespace ID 1 for this test, assuming FDP is disabled for this namespace
  2. Expect the command to be aborted with a status indicating FDP Disabled (00/29h)
  3. Expect the command to be aborted with a status indicating FDP Disabled (00/29h)
  4. Close the namespace after the test

function: scripts/placement/04_io_test.py::test_management_nsid_invalid

Verify that the I/O Management Receive command is aborted with “Invalid Namespace or Format” when NSID is invalid (0h or FFFFFFFFh).

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 7.TBDIOMR

Steps

  1. Case 1: NSID = 0h, expect the command to be aborted with Invalid Namespace or Format (00/02h)
  2. Case 2: NSID = FFFFFFFFh, expect the command to be aborted with Invalid Namespace or Format (00/02h)
  3. Case 3: Test with an NSID that doesn’t exist (e.g., nsid + 1)

function: scripts/placement/04_io_test.py::test_management_receive_numd_large

Verify that if the host reads beyond the size of the Reclaim Unit Handle Status data structure, zeroes are returned.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement – Section 7.TBDIOMR

Steps

  1. Retrieve the head of the buffer
  2. Get the number of Reclaim Unit Handle Status Descriptors (NRUHSD)
  3. Convert expected size to DWORDs for cdw11 (1 DWORD = 4 bytes)
  4. Request data beyond the actual structure size
  5. Extract data beyond the expected structure size
  6. Verify data beyond the structure size is zero-filled
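The check in steps 5 and 6 reduces to confirming that every byte past the real structure size is zero. A minimal sketch:

```python
def tail_is_zero(buf: bytes, valid_len: int) -> bool:
    """When the host requests more than the Reclaim Unit Handle
    Status structure actually holds, the controller returns zeroes
    for the excess; verify everything past `valid_len` is zero."""
    return all(b == 0 for b in buf[valid_len:])

# 64 valid bytes followed by controller-supplied zero fill
buf = bytes([0xFF] * 64) + bytes(64)
assert tail_is_zero(buf, 64)
```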

function: scripts/placement/04_io_test.py::test_management_update_without_buf

Verify that issuing an I/O Management Send command without a valid buffer or with invalid arguments results in an error.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Send command with MO set to 0 (No action) – this should pass
  2. Issue command with invalid MO (e.g., MO=2, which is reserved) – should trigger error
  3. Issue command with MO=1 (Reclaim Unit Handle Update) but without valid buffer – should trigger error

function: scripts/placement/04_io_test.py::test_management_send_mo_invalid

Verify that I/O Management Send command with MO set to ’00h’ (No action) doesn’t perform any updates.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Write some data to RU 0 and 1
  2. Retrieve the initial Reclaim Unit Handle status
  3. Send I/O Management Send command with MO=00h (No action)
  4. Retrieve the Reclaim Unit Handle status after the command
  5. Assert that the Reclaim Unit Handle remains unchanged
  6. Issue the command with an invalid MO (2h, reserved) and expect an error

function: scripts/placement/04_io_test.py::test_management_update_ruh

Verify that I/O Management Send command performs the Reclaim Unit Handle Update operation correctly with MO set to ’01h’.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Write some data to RU 0 and 1
  2. Prepare valid Placement Identifier list (PIU) in the buffer
  3. Retrieve initial Reclaim Unit Handle status
  4. Issue I/O Management Send command to update Reclaim Unit Handles (MO=01h)
  5. Retrieve Reclaim Unit Handle status after the update
  6. Verify Reclaim Unit Handles have been updated as expected
  7. Note that RU 2 is empty, so no update is needed
  8. Note that RU 3 is intentionally left not updated

function: scripts/placement/04_io_test.py::test_management_update_ruh_with_io

Verify that the I/O Management Send command performs the Reclaim Unit Handle Update operation correctly with MO set to ’01h’ while I/O is in flight.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Prepare valid Placement Identifier list (PIU) in the buffer
  2. Write some data to RU 0 and 1
  3. Issue I/O Management Send command to update Reclaim Unit Handles (MO=01h)

function: scripts/placement/04_io_test.py::test_management_update_invalid_placement_id

Verify that an I/O Management Send command specifying an invalid Placement Identifier is aborted with Invalid Field in Command.

Reference
N/A

Steps

  1. Find the count of Placement Identifiers in the namespace
  2. Fill an invalid Placement Identifier into the buffer
  3. Issue the I/O Management Send command with the invalid Placement Identifier
  4. Verify the controller aborts the command with a status code of Invalid Field in Command

function: scripts/placement/04_io_test.py::test_management_update_invalid_npid

Verify that an I/O Management Send command specifying an invalid Number of Placement Identifiers (NPID) is aborted with Invalid Field in Command.

Reference
N/A

Steps

  1. Find the count of Placement Identifiers in the namespace
  2. Issue the I/O Management Send command with an invalid NPID
  3. Verify the controller aborts the command with a status code of Invalid Field in Command

function: scripts/placement/04_io_test.py::test_write_invalid_pid

Confirm the behavior when a write command specifies an invalid Placement Identifier (PID).

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Retrieve the number of valid Placement Identifiers (PIDs)
  2. Capture statistics before the write operation
  3. Issue a write command with an invalid PID
  4. Capture statistics after the write operation
  5. Verify that the statistics still change, even though the write used an invalid Placement Identifier

function: scripts/placement/04_io_test.py::test_write_without_directive

Confirm the behavior when a write command specifies a Placement Identifier (PID), but no Data Placement Directive is provided. The data should be placed using Placement Handle value 0h.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Capture statistics before the write operation
  2. Issue a write command with a PID (e.g., 1), but without the Data Placement Directive (no io_flags for FDP)
  3. Capture statistics after the write operation
  4. Verify that the write was done using Placement Handle 0, despite specifying PID 1
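The directive fields an FDP write carries sit in the standard NVMe Write command dwords: DTYPE in CDW12 bits 23:20 (2h for Data Placement) and the Placement Identifier in the DSPEC field, CDW13 bits 31:16. A sketch of the packing, illustrating why step 4 lands on Placement Handle 0:

```python
def fdp_write_dwords(dtype_on: bool, pid: int):
    """Pack the directive fields of an NVMe Write for FDP:
    DTYPE in CDW12 bits 23:20 (2h = Data Placement Directive),
    DSPEC (the Placement Identifier) in CDW13 bits 31:16.
    With DTYPE left at 0h, DSPEC is ignored and the data is
    placed using Placement Handle 0h regardless of the PID."""
    cdw12 = (0x2 << 20) if dtype_on else 0
    cdw13 = (pid & 0xFFFF) << 16
    return cdw12, cdw13

# PID 1 without the directive: DSPEC is set, but DTYPE 0h means PH 0 is used
cdw12, cdw13 = fdp_write_dwords(False, 1)
assert cdw12 == 0 and cdw13 == 0x00010000
```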

function: scripts/placement/04_io_test.py::test_write_data_placement_without_directive

Confirm the behavior when a write command does not specify the Data Placement Directive, defaulting to Placement Handle value 0h.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Capture statistics before the write operation
  2. Issue a write command without specifying the Data Placement Directive (no io_flags for FDP)
  3. Write 8 sectors per command at 10,000 IOPS over a 10-second period
  4. Capture statistics after the write operation
  5. Log the before and after statistics for debugging
  6. Validate that only Placement Handle 0 is affected: its usage (Bytes Written) should reflect the write
  7. Validate that all other Placement Handles remain unchanged, confirming only PH 0 absorbed the write operation

function: scripts/placement/04_io_test.py::test_write_different_operations

Compare write operations performed with and without the Data Placement Directive.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Perform write operations without Data Placement Directive
  2. Perform write operations with Data Placement Directive
  3. Repeat the write operations with the Data Placement Directive

function: scripts/placement/04_io_test.py::test_write_read_different_pli

Test FDP by writing data to one Placement Identifier (PLI) and reading it from another PLI.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Write phase uses PLI 1
  2. Read phase uses PLI 2

file: scripts/placement/05_directive_test

function: scripts/placement/05_directive_test.py::test_identify_return_parameters

Retrieve the Identify Directive Return Parameters via a Directive Receive command (dtype=0).

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Issue a Directive Receive command for Identify Directive (dtype=0)

function: scripts/placement/05_directive_test.py::test_identify_directive_immutable

Confirm that the Identify Directive bit is always cleared to ‘0’, ensuring that the host cannot change its state.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Issue a Directive Receive command for Identify Directive (dtype=0)
  2. Extract ‘support’, ‘enabled’, and ‘persistent’ bit fields from the buffer
  3. Check that Directive type 0 (Identify) and type 2 (FDP) are supported and enabled
  4. Check that the Identify Directive is not persistent across controller resets (Identify must remain cleared)

function: scripts/placement/05_directive_test.py::test_directive_enable_invalid_nsid

Verify that if the Directive is the Data Placement Directive and an NSID value of FFFFFFFFh is specified, the controller aborts the command with Invalid Namespace or Format.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. First, enable the directive with a valid NSID (ensure it works correctly)
  2. Now, attempt to enable the directive with an invalid NSID (FFFFFFFFh)
  3. Expecting failure with status code indicating Invalid Namespace or Format

function: scripts/placement/05_directive_test.py::test_directive_data_placement

Verify that any Directive Receive or Directive Send command specifying a Data Placement Directive Type is aborted with Invalid Field in Command.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Attempt to issue a Directive Send command with Data Placement Directive Type (dtype = 2)
  2. Expect the command to fail with Invalid Field in Command status
  3. Attempt to issue a Directive Receive command with Data Placement Directive Type (dtype = 2)
  4. Expect the command to fail with Invalid Field in Command status

file: scripts/placement/06_namespace_test

function: scripts/placement/06_namespace_test.py::test_shared_reclaim_unit_handle_invalid_format

Verify that the command is aborted with a status code of Invalid Format if the Reclaim Unit Handle is shared by other namespaces and the Format Index does not match.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Create a namespace with a specific format
  2. Create another namespace with a different format, but using a common RU handle
  3. Create another namespace with the same format
  4. Attempt to format the first namespace with a different block size (512), expect failure due to shared Reclaim Unit Handle with incompatible format
  5. Perform some I/O operations on the valid namespace with correct format
  6. Clean up and close namespace

file: scripts/placement/07_event_test

file: scripts/placement/08_performance_test

function: scripts/placement/08_performance_test.py::test_fdp_performance_compare

Evaluate FDP performance using large block write operations with and without FDP directive.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Calculate I/O size in sectors
  2. Perform the first I/O workload with FDP directive enabled
  3. Perform the second I/O workload without FDP directive
  4. Compare the performance between FDP-enabled and without FDP
  5. Assert that the performance difference is within acceptable range (less than 1%)
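The acceptance rule in step 5 is a relative-difference check; the same shape (with a wider tolerance) is reused by the latency and mixed-I/O comparisons that follow. A sketch:

```python
def within_tolerance(iops_a: float, iops_b: float, tol: float) -> bool:
    """Pass/fail rule for the FDP comparisons: the relative
    difference between the two runs must stay under `tol`."""
    return abs(iops_a - iops_b) / max(iops_a, iops_b) < tol

# 1% tolerance for the large-block write comparison (illustrative numbers)
assert within_tolerance(100_000, 99_500, 0.01)
```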

function: scripts/placement/08_performance_test.py::test_fdp_latency_compare

Assess FDP performance under a high IOPS workload and compare latency with and without FDP directive enabled.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Set the I/O size to 4KB (expressed in sectors)
  2. Perform high IOPS workload with FDP directive enabled
  3. Perform high IOPS workload without FDP directive
  4. Compare latencies and log the difference
  5. Assert that the latency difference is within acceptable range (e.g., less than 5% difference)

function: scripts/placement/08_performance_test.py::test_fdp_mixed_io_compare

Compare FDP performance under mixed read/write IO workloads with different read percentages (10%, 50%, 90%).

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Set the I/O size to 4KB (expressed in sectors)
  2. Perform mixed read/write workload with FDP directive enabled
  3. Perform mixed read/write workload without FDP directive
  4. Compare read and write IO counts between FDP enabled and disabled runs
  5. Assert the performance difference is within acceptable range (e.g., less than 5%)

function: scripts/placement/08_performance_test.py::test_fdp_multi_ns_performance

Measure the performance of multiple namespaces with FDP enabled under high I/O load.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Set the I/O size to 4KB (expressed in sectors)
  2. Create another namespace with 1 RUH
  3. Run the workload using different RUHs
  4. Compare performance: ns2 (with all RUHs) should perform better than ns1 (with 1 RUH)
  5. Run the workload using the same RUH
  6. Compare performance: ns2 (with 4 RUHs) should perform better than ns1 (with 1 RUH)
  7. Clean up

file: scripts/placement/09_stress_test

function: scripts/placement/09_stress_test.py::test_fdp_enable_disable

Perform a stress test on the FDP feature by repeatedly enabling and disabling it under high load.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Ensure FDP is enabled

function: scripts/placement/09_stress_test.py::test_fdp_io_stress

Perform an intensive I/O workload with FDP enabled to assess system stability and performance.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Iterate through different configurations of I/O sizes and read percentages
  2. Perform I/O with varying io_size, read percentage, and placement identifier (cdw13)
  3. Log the I/O worker result

function: scripts/placement/09_stress_test.py::test_fdp_io_stress_async

Perform an intensive asynchronous I/O workload with FDP enabled to assess system stability and performance.

Reference

  1. NVM Express® Technical Proposal 4146 Flexible Data Placement

Steps

  1. Iterate through different configurations of I/O sizes and read percentages
  2. Perform asynchronous I/O with varying io_size, read percentage, and placement identifier (cdw13)
  3. Wait for all I/O workers to complete and collect the results