PyNVMe3 Test Suites

Last Modified: November 25, 2024

Copyright © 2020-2024 GENG YUN Technology Pte. Ltd.
All Rights Reserved.

Suite: scripts/conformance

folder: scripts/conformance/01_admin

file: scripts/conformance/01_admin/abort_test

function: scripts/conformance/01_admin/abort_test.py::test_dut_firmware_and_model_name

print firmware and model name to the log

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.
  2. Figure 249: Identify – Identify Controller Data Structure

Steps

  1. print Model Number
  2. print Firmware Revision
  3. format namespace
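
A minimal sketch of these steps, assuming a pynvme-style API (the nvme0 fixture and the id_data() helper are assumptions; the actual PyNVMe3 helpers may differ):

    import logging

    def test_dut_firmware_and_model_name(nvme0):
        # Identify Controller data (NVMe 1.4a, Figure 249):
        # Model Number is bytes 63:24, Firmware Revision is bytes 71:64
        logging.info("model number: %s" % nvme0.id_data(63, 24, str))
        logging.info("firmware revision: %s" % nvme0.id_data(71, 64, str))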

function: scripts/conformance/01_admin/abort_test.py::test_abort_specific_aer_command

send the abort command to abort a specific aer command

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.
  2. 5.1 Abort command

Steps

  1. send an aer command and get its cid
  2. send an abort command to abort the AER command by its cid
  3. check if the abort command aborts the AER command

function: scripts/conformance/01_admin/abort_test.py::test_abort_abort_command

send an abort command to abort another abort command

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.
  2. 5.1 Abort command

Steps

  1. check that the Abort Command Limit is larger than 2
  2. send an abort command A to abort command with cid 0
  3. send another abort command to abort command A
  4. both of the above commands shall complete successfully
  5. send an abort command B to abort command with cid 0xffff
  6. send another abort command to abort command B
  7. both of the above commands shall complete successfully
  8. send an AER command
  9. send an abort command C to abort the above AER command
  10. send another abort command to abort command C
  11. the AER command shall be aborted if the first abort command is not aborted

function: scripts/conformance/01_admin/abort_test.py::test_abort_io_burst

abort IO command

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.

Steps

  1. create sq/cq
  2. send 100 write commands and a flush command, and ring the doorbell once for all of them
  3. delay to abort
  4. abort the flush command
  5. reap all IO
  6. delete queue

file: scripts/conformance/01_admin/aer_test

function: scripts/conformance/01_admin/aer_test.py::test_aer_limit_exceeded

check the field Asynchronous Event Request Limit in identify data structure

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 96.
  2. The total number of simultaneously outstanding Asynchronous Event Request commands is limited by the Asynchronous Event Request Limit specified in the Identify Controller data structure in Figure 247.

Steps

  1. get Asynchronous Event Request Limit in identify data structure
  2. send all AER commands defined by the limit
  3. send one more AER command, and it shall be aborted
  4. abort all AER commands, and abort successfully
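
A sketch of the limit check above, assuming pynvme-style aer()/waitdone() calls and a (cdw0, status1) completion callback (these names are assumptions):

    import logging

    def test_aer_limit_exceeded(nvme0):
        # AERL (Identify Controller byte 259) is a 0's based value
        aerl = nvme0.id_data(259) + 1
        for _ in range(aerl):
            nvme0.aer()                      # stays outstanding in the controller
        # one more AER is expected to complete with
        # Asynchronous Event Request Limit Exceeded
        def cb(cdw0, status1):
            logging.info("extra AER status: 0x%x" % status1)
        nvme0.aer(cb=cb)
        nvme0.waitdone(1)                    # reap only the failed extra AER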

function: scripts/conformance/01_admin/aer_test.py::test_aer_no_timeout

send an AER command and check its completion

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 96.

Steps

  1. issue an AER command
  2. wait 15 seconds for the completion
  3. neither completion nor host timeout happens on the AER command
  4. abort the AER command, and abort successfully

function: scripts/conformance/01_admin/aer_test.py::test_aer_sanitize

AER will be triggered when Sanitize Operation Completed event happens

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Figure 149:
  2. Asynchronous Event Information – NVM Command Set Specific Status 01h: Sanitize Operation Completed

Steps

  1. skip test if sanitize is not supported
  2. send one AER command
  3. start sanitize operation and check the AER event when the sanitize operation is completed
  4. start another sanitize operation, wait for it to complete, and check the AER event again

function: scripts/conformance/01_admin/aer_test.py::test_aer_mask_event

mask an AER event and the AER notification shall not be triggered

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 214.
  2. 5.21.1.11 Asynchronous Event Configuration (Feature Identifier 0Bh)

Steps

  1. issue an AER command
  2. mask the SMART/health Asynchronous event
  3. get current temperature
  4. set the composite temperature threshold lower than current temperature
  5. AER notification shall not be triggered
  6. check SMART/health log Critical Warning bit 1 was set
  7. revert to default setting

function: scripts/conformance/01_admin/aer_test.py::test_aer_fw_activation_starting

AER is triggered by Firmware Activation Starting

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 100.
  2. 5.2 Asynchronous Event Request command
  3. Figure 148: Asynchronous Event Information – Notice

Steps

  1. skip if the firmware commands are not supported
  2. skip if firmware slot 1 is read only
  3. skip if OAES is not supported
  4. enable Firmware Activation Starting event
  5. skip if Firmware Activation Starting event is not supported
  6. issue an AER command to check if it is triggered by Firmware Activation Starting later
  7. activate an existing firmware slot and check that the AER notification is triggered
  8. get the logpage to clear the AER event
  9. recover AER configuration setting

function: scripts/conformance/01_admin/aer_test.py::test_aer_event_no_aer

with no outstanding AER command, no AER notification should be triggered

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 96.
  2. 5.2 Asynchronous Event Request command

Steps

  1. initialize controller without sending any AER command
  2. enable all kinds of Asynchronous event
  3. get current temperature
  4. set the temperature threshold below the current temperature
  5. check no AER notification is triggered
  6. check SMART/health log Critical Warning bit 1 was set
  7. recover temperature threshold

function: scripts/conformance/01_admin/aer_test.py::test_aer_abort_all_aer_commands

send all aer commands supported by the controller and abort them

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 96:
  2. 5.1 Abort command: The Abort command is used to abort a specific command previously submitted to the Admin Submission Queue or an I/O Submission Queue.

Steps

  1. disable all aer events
  2. get the maximum number of AER commands supported by the controller
  3. send all AER commands supported by the controller
  4. send one more AER command and should get AER event
  5. send abort command
  6. Write to Invalid Doorbell Register and confirm no AER notification is triggered

function: scripts/conformance/01_admin/aer_test.py::test_aer_temperature

AER will be triggered when the temperature exceeds a threshold.

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Figure 147:
  2. Asynchronous Event Information – SMART / Health Status: 01h: Temperature Threshold: A temperature is greater than or equal to an over temperature threshold or less than or equal to an under temperature threshold (refer to section 5.21.1.4).

Steps

  1. issue an AER command
  2. set feature to enable all asynchronous events
  3. get current temperature
  4. set Over Temperature Threshold to trigger AER
  5. read log page to clear the event
  6. getlogpage to clear events
  7. set Under Temperature Threshold to trigger AER
  8. read log page to clear the event
  9. check smart data for critical warning of the temperature event
  10. recover to original setting
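
A sketch of the temperature-threshold trigger, assuming a pynvme-style Buffer/getlogpage/setfeatures API (module name, fixture, and callback signature are assumptions):

    import nvme as d   # assumed module name

    def test_aer_temperature(nvme0):
        buf = d.Buffer(512)
        # Composite Temperature: SMART/Health log (LID 02h) bytes 2:1, in Kelvin
        nvme0.getlogpage(0x02, buf, 512).waitdone()
        ktemp = buf.data(2, 1)

        # save the original Temperature Threshold (FID 04h) for later restore
        orig = []
        nvme0.getfeatures(0x04, cb=lambda cdw0, status1: orig.append(cdw0)).waitdone()

        nvme0.aer()
        # set the over temperature threshold below the current temperature to trigger the event
        nvme0.setfeatures(0x04, cdw11=ktemp - 10).waitdone()
        nvme0.waitdone(1)                              # reap the AER completion
        nvme0.getlogpage(0x02, buf, 512).waitdone()    # read the log page to clear the event
        nvme0.setfeatures(0x04, cdw11=orig[0]).waitdone()   # restore the original threshold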

function: scripts/conformance/01_admin/aer_test.py::test_aer_doorbell_invalid_register

AER will be triggered when writing invalid doorbell register.

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Figure 146:
  2. Asynchronous Event Information – Error Status 00h: Write to Invalid Doorbell Register: Host software wrote the doorbell of a queue that was not created.

Steps

  1. issue an AER command
  2. create CQ and SQ
  3. delete SQ first
  4. write doorbell of the deleted SQ to cause the event Invalid Doorbell Register
  5. read log page to clear the event
  6. delete cq

function: scripts/conformance/01_admin/aer_test.py::test_aer_doorbell_out_of_range

AER will be triggered when writing an invalid doorbell value

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Figure 146:
  2. Asynchronous Event Information – Error Status 01h: Invalid Doorbell Write Value: Host software attempted to write an invalid doorbell value.
  3. the value written was out of range of the corresponding queue’s base address and size;

Steps

  1. issue an AER command
  2. create CQ and SQ
  3. write an SQ tail doorbell value that is out of the range of the queue size
  4. read log page to clear the event

file: scripts/conformance/01_admin/dst_test

function: scripts/conformance/01_admin/dst_test.py::test_dst_short_valid_namespace

short DST command for valid namespace

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 108.

Steps

  1. start a short DST
  2. check the DST log page to confirm the operation is in progress
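
A sketch of the short DST check, assuming pynvme-style dst() and getlogpage() helpers (names and log parsing are assumptions):

    import nvme as d   # assumed module name

    def test_dst_short_valid_namespace(nvme0):
        # Device Self-test command with STC=1h (short) on namespace 1
        nvme0.dst(1, nsid=1).waitdone()
        # Device Self-test log (LID 06h): byte 0 reports the current operation,
        # 1h means a short device self-test is in progress
        buf = d.Buffer(564)
        nvme0.getlogpage(0x06, buf, 564).waitdone()
        assert buf.data(0) & 0xf == 1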

function: scripts/conformance/01_admin/dst_test.py::test_dst_extended_processing

extended DST command for valid namespace

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 108.

Steps

  1. issue an extended DST
  2. check log page if extended DST operation is in-progress

function: scripts/conformance/01_admin/dst_test.py::test_dst_short_time

a short DST command should complete in two minutes or less

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 348.
  2. A short device self-test operation should complete in two minutes or less.

Steps

  1. start a short DST, and record start time
  2. wait for the DST operation to complete
  3. check if the completion time is less than 2 minutes

function: scripts/conformance/01_admin/dst_test.py::test_dst_invalid_namespace

DST command with invalid namespace ID will be aborted with Invalid Namespace or Format

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.

Steps

  1. issue a DST with invalid namespace ID, it will be aborted

function: scripts/conformance/01_admin/dst_test.py::test_dst_invalid_stc

DST command with invalid stc value will be aborted with Invalid Field in Command

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. issue a DST command with invalid stc value, it will be aborted.

function: scripts/conformance/01_admin/dst_test.py::test_dst_in_progress

a new DST command shall fail with Device Self-test in Progress while a device self-test is in progress

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.
  2. Figure 168: when a device self-test is in progress, abort the new Device Self-test command with status Device Self-test in Progress.

Steps

  1. issue the first DST
  2. issue the second DST, the second DST will be aborted because of the first DST in progress.

function: scripts/conformance/01_admin/dst_test.py::test_dst_in_progress_abort_dst

abort DST command will abort device self-test operation in progress

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.
  2. Figure 168

Steps

  1. issue the first DST command
  2. issue abort DST command to abort the first DST in the background
  3. check log page has been updated, because the first DST was aborted by an abort DST command

function: scripts/conformance/01_admin/dst_test.py::test_dst_short_abort_by_controller_reset

short DST shall be aborted by any Controller Level Reset

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 348.
  2. A short device self-test operation shall be aborted by any Controller Level Reset that affects the controller on which the device self-test is being performed.

Steps

  1. start a short DST
  2. a Controller Reset aborts the short DST operation running in the background
  3. check there is no DST in progress
  4. check if the result is Operation was aborted by a Controller Level Reset
  5. start another short DST, and abort

function: scripts/conformance/01_admin/dst_test.py::test_dst_abort_by_format

DST Operation will be aborted because of the processing of a Format NVM command

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. Figure 202: Operation was aborted due to the processing of a Format NVM command

Steps

  1. issue a DST command
  2. check DST log page: No device self-test operation in progress
  3. check DST log page: Operation was aborted due to the processing of a Format NVM command

function: scripts/conformance/01_admin/dst_test.py::test_dst_abort_by_format_fna_0

DST Operation will be aborted because of the processing of a Format NVM command

Reference

  1. NVM Express Revision 1.4a March 9, 2020, Page 349.
  2. Figure 477: Format NVM command Aborting a Device Self-Test Operation

Steps

  1. skip if NVMe spec version is below 1.4
  2. get current LBA format id
  3. skip if FNA bit is not 0 in the Identify Controller data structure
  4. issue a short DST, nsid = 1
  5. issue a format when FNA is 0, nsid = 1
  6. check if DST is aborted by format command
  7. issue a DST, nsid=1
  8. issue a format when FNA is 0, nsid = 0xffffffff
  9. check if DST is aborted by format command
  10. issue a DST, nsid=0xffffffff
  11. issue a format when FNA is 0, nsid=0xffffffff
  12. check if DST is aborted by format command
  13. issue a DST, nsid=0xffffffff
  14. issue a format when FNA is 0, nsid=1
  15. check if DST is aborted by format command

function: scripts/conformance/01_admin/dst_test.py::test_dst_abort_by_format_fna_1

DST for all namespace will be aborted because of the processing of a Format NVM command when FNA is 1

Reference

  1. NVM Express Revision 1.4a March 9, 2020, Page 349.
  2. Figure 477: Format NVM command Aborting a Device Self-Test Operation

Steps

  1. skip if NVMe spec version is below 1.4
  2. skip if FNA bit is not 1 in the Identify Controller data structure
  3. get current LBA format id
  4. issue DST with a nsid
  5. issue format command with a different namespace id
  6. check if DST is aborted by format command

function: scripts/conformance/01_admin/dst_test.py::test_dst_abort_by_sanitize

DST Operation will be aborted when a sanitize operation starts

Reference

  1. NVM Express Revision 1.4a March 9, 2020, Page 234.
  2. When a sanitize operation starts on any controller, all controllers in the NVM subsystem: shall abort device self-test operations in progress;

Steps

  1. only test in 1.4
  2. skip if sanitize is not supported
  3. issue a DST command
  4. check if the DST is started
  5. issue sanitize command to abort the DST operation
  6. wait till the sanitize completes
  7. clear the event by read log page
  8. check if DST is aborted by sanitize command

function: scripts/conformance/01_admin/dst_test.py::test_dst_after_sanitize

sanitize in progress will abort a DST command

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.24 Sanitize command – NVM Command Set Specific

Steps

  1. skip if sanitize is not supported
  2. start a sanitize operation
  3. if sanitize operation is finished, skip test
  4. issue a DST command, it will be aborted by in-progress sanitize
  5. check if sanitize status is in progress, and AER is triggered
  6. clear the event by read log page

function: scripts/conformance/01_admin/dst_test.py::test_dst_abort_by_command

Verify abort dst command processing.

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 108.
  2. Fh – Abort device self-test. Completes command successfully. The Device Self-test Log is not modified.

Steps

  1. issue short dst
  2. issue Abort dst
  3. check if short dst aborted
  4. check Device Self-test Status in the Newest Self-test Result Data Structure
  5. check the Current Device Self-test Status field

function: scripts/conformance/01_admin/dst_test.py::test_dst_extended_time_limit

Verify the extended DST command completes within the specified time

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 349:
  2. An extended device self-test operation should complete in the time indicated in the Extended Device Self-test Time field in the Identify Controller data structure or less.

Steps

  1. check Extended Device Self-test Time in identify
  2. fix the power state to PS0 and disable APST
  3. check the log page before issuing the DST
  4. issue dst
  5. check dst log page till complete
  6. check the Current Device Self-test Status field in the Device Self-test Log
  7. check the Current Device Self-Test Completion in the Device Self-test Log
  8. check complete time
  9. check Device Self-test Status in the Newest Self-test Result Data Structure

function: scripts/conformance/01_admin/dst_test.py::test_dst_with_ioworker

execute DST operations along with different stress ioworkers

Reference

  1. A short device self-test operation should complete in two minutes or less.

Steps

  1. start stress IO
  2. issue dst command during ioworker work
  3. check dst log page till no dst in progress
  4. check the Current Device Self-test Status field in the Device Self-test Log
  5. check the Current Device Self-Test Completion in the Device Self-test Log
  6. reset controller after DST is completed
  7. check if the completion time is less than 2 minutes
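
A sketch of DST under I/O load, assuming the pynvme-style ioworker interface (module name, fixture names, and ioworker parameters are assumptions):

    import time
    import nvme as d   # assumed module name

    def test_dst_with_ioworker(nvme0, nvme0n1):
        buf = d.Buffer(564)
        # run a sequential write workload in a separate process while the DST runs
        with nvme0n1.ioworker(io_size=8, lba_random=False,
                              read_percentage=0, time=30):
            nvme0.dst(1, nsid=0xffffffff).waitdone()   # start a short DST
        # poll the DST log page (LID 06h) until no self-test is in progress
        nvme0.getlogpage(0x06, buf, 564).waitdone()
        while buf.data(0) & 0xf != 0:
            time.sleep(1)
            nvme0.getlogpage(0x06, buf, 564).waitdone()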

function: scripts/conformance/01_admin/dst_test.py::test_dst_extended_not_abort_by_flr_reset

extended DST should not be aborted by FLR.

Reference

  1. NVM Express Revision 1.4a. Page 350.

Steps

  1. check if FLR is supported
  2. start an extended DST operation
  3. check DST status
  4. FLR the controller
  5. check if DST is still running or finished normally

function: scripts/conformance/01_admin/dst_test.py::test_dst_extended_not_abort_by_controller_level_reset

Verify an extended DST operation will not be aborted by a Controller Level Reset.

Reference

  1. NVM Express Revision 1.4a. Page 349.
  2. An extended device self-test operation shall persist across any Controller Level Reset, and shall resume after completion of the reset or any restoration of power, if any.

Steps

  1. start an extended DST operation
  2. check DST status
  3. cc.en reset
  4. check DST status again

function: scripts/conformance/01_admin/dst_test.py::test_dst_short_abort_by_flr_reset

short DST should be aborted by FLR

Reference

  1. NVM Express Revision 1.4a. Page 349.

Steps

  1. check if FLR is supported
  2. start a short DST operation
  3. check DST status
  4. FLR the controller
  5. check DST status again

file: scripts/conformance/01_admin/features_test

function: scripts/conformance/01_admin/features_test.py::test_features_fid_0

setfeature with feature ID 0

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 206.

Steps

  1. setfeature with feature ID 0, the command shall complete with error

function: scripts/conformance/01_admin/features_test.py::test_features_sel_00

use Select 0 to get the current operating attribute value

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 116.
  2. A Select field cleared to 000b (i.e., current) returns the current operating attribute value for the Feature Identifier specified.

Steps

  1. setfeature and getfeature with Select 0
  2. check the data set and get in above commands
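
A sketch of the Select=000b round trip using the Temperature Threshold feature, assuming pynvme-style setfeatures/getfeatures calls with a (cdw0, status1) completion callback (these are assumptions):

    def test_features_sel_00(nvme0):
        # example threshold value; a real test would save and restore the original
        nvme0.setfeatures(0x04, cdw11=0x0160).waitdone()
        got = []
        def cb(cdw0, status1):
            got.append(cdw0)
        # SEL=000b (current) is the default for Get Features
        nvme0.getfeatures(0x04, cb=cb).waitdone()
        assert got[0] == 0x0160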

function: scripts/conformance/01_admin/features_test.py::test_features_sel_01

use Select 01 to get the default operating attribute value

Reference

  1. NVM Express Revision 1.4c, Page 301.

Steps

  1. the current setting of the feature Temperature Threshold does not persist across power cycles and resets
  2. check if the feature is saveable
  3. check the current operating attribute shall be the same as the default
  4. a Get Features command that reads the saved value returns the default value
  5. setfeature to change the current operating attribute
  6. verify if the current operating attribute is set correctly
  7. verify the default attribute value is not changed
  8. the default value is used after a Controller Level Reset
  9. restore the current operating attribute to original value

function: scripts/conformance/01_admin/features_test.py::test_features_sel_01_reserved_bit

set feature with an invalid data using reserved bit in cdw11

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 184.

Steps

  1. check the current operating attribute shall be the same as the default
  2. setfeature to change the current operating attribute to an invalid data using the reserved bit
  3. verify if the reserved bit is not set to the current operating attribute
  4. restore the current operating attribute to original value

function: scripts/conformance/01_admin/features_test.py::test_features_sel_11

use Select 011b to get the capabilities supported for a feature

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 116.

Steps

  1. skip if feature Save is not supported
  2. send getfeatures commands with Select 011b
  3. the commands shall complete successfully

function: scripts/conformance/01_admin/features_test.py::test_features_invalid_sel

use an invalid Select

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 116.

Steps

  1. send getfeatures command with invalid Select
  2. the command completes with error Invalid Field in Command

function: scripts/conformance/01_admin/features_test.py::test_features_set_volatile_write_cache

set/get feature Volatile Write Cache

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.
  2. 5.21.1.6 Volatile Write Cache

Steps

  1. skip if volatile cache is not present
  2. get the original write cache setting
  3. enable the write cache and verify the feature is set correctly
  4. skip the first several writes to get the real write latency
  5. get the write latency with write cache enabled
  6. disable the cache and verify the feature
  7. get the write latency with write cache disabled
  8. recover original write cache setting
  9. check if the write latency with cache enabled is less than that with write cache disabled

function: scripts/conformance/01_admin/features_test.py::test_features_set_invalid_ncqr

set feature Number of Queues with invalid numbers

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.
  2. If the value specified is 65,535, the controller should return an error of Invalid Field in Command.

Steps

  1. set feature Number of Queues to 0xffff, the command shall complete with error
  2. set feature Number of Queues to 0xffff0000, the command shall complete with error
  3. set feature Number of Queues to 0xffffffff, the command shall complete with error
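
A sketch of the invalid Number of Queues values, assuming the pynvme convention that error completions surface as a UserWarning (an assumption):

    import pytest

    def test_features_set_invalid_ncqr(nvme0):
        # Number of Queues is FID 07h; NSQR/NCQR of 0xffff is invalid
        for cdw11 in (0x0000ffff, 0xffff0000, 0xffffffff):
            with pytest.warns(UserWarning, match="ERROR status"):
                nvme0.setfeatures(0x07, cdw11=cdw11).waitdone()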

function: scripts/conformance/01_admin/features_test.py::test_features_num_of_queues

set only 2 queues, then try to create three queues.

Reference

  1. NVM Express Revision 1.4a Page 212:
  2. 5.21.1.7 Number of Queues

Steps

  1. create controller with only 2 queues
  2. check the number of queues
  3. create two qpairs
  4. cannot create any more queues

function: scripts/conformance/01_admin/features_test.py::test_features_apst_buffer_length

APST feature has a data buffer, check the length of this buffer

Reference

  1. NVM Express Revision 1.4a

Steps

  1. create 4k buffer, pvalue is all-one data
  2. check apst is enabled, the data structure is 256 bytes
  3. check that the buffer beyond the first 256 bytes still holds the original all-one data
  4. create 4k buffer, pvalue is all-zero data
  5. check apst is enabled, the data structure is 256 bytes
  6. check that the buffer beyond the first 256 bytes still holds the original all-zero data

function: scripts/conformance/01_admin/features_test.py::test_features_timestamp

test timestamp features

Reference

  1. NVM Express Revision 1.4a
  2. NVM Express Revision 2.0
  3. timestamp is cleared to 0 due to controller level reset

Steps

  1. check ONCS
  2. verify the length of the data buffer
  3. get current timestamp
  4. get the timestamp again after 1 second
  5. get original timestamp status
  6. reset and check status
  7. set timestamp and check status again
  8. get current timestamp
  9. get the timestamp again after 1 second
  10. set a max value: 0xffff_ffff_ffff
  11. set a min value: 0
  12. reset and check timestamp and status according to the Timestamp Origin

file: scripts/conformance/01_admin/format_test

function: scripts/conformance/01_admin/format_test.py::test_format_function

verify basic format function

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. Figure 329: Format NVM – Operation Scope

Steps

  1. issue a format command with nsid 0xffffffff, the command shall complete successfully
  2. issue a format command with nsid 1, the command shall complete successfully
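
A sketch of the two format commands, assuming a pynvme-style format() helper (parameter names are assumptions):

    def test_format_function(nvme0):
        # Format NVM, SES=0 (no secure erase): broadcast namespace, then namespace 1
        nvme0.format(lbaf=0, ses=0, nsid=0xffffffff).waitdone()
        nvme0.format(lbaf=0, ses=0, nsid=1).waitdone()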

function: scripts/conformance/01_admin/format_test.py::test_format_secure_erase_function

verify secure erase function

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.23 Format NVM command – NVM Command Set Specific

Steps

  1. issue a format command to erase data in nsid 0xffffffff, the command shall complete successfully
  2. issue a format command to erase data in nsid 1, the command shall complete successfully
  3. issue Cryptographic Erase if the controller supports it, and the command shall complete successfully

function: scripts/conformance/01_admin/format_test.py::test_format_with_ioworker

send format command mixed with ioworker

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.23 Format NVM command – NVM Command Set Specific

Steps

  1. send a format command with outstanding IO, command shall complete successfully
  2. check the error code of format command

function: scripts/conformance/01_admin/format_test.py::test_format_and_read

send format command mixed with read io

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.23 Format NVM command – NVM Command Set Specific

Steps

  1. send a format command but do not wait for its completion
  2. send a read command with an outstanding format command
  3. wait for the format command to complete successfully

function: scripts/conformance/01_admin/format_test.py::test_format_invalid_ses

send format command with invalid ses field

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. issue format command with invalid ses field, the command shall complete with error

function: scripts/conformance/01_admin/format_test.py::test_format_not_support_crypto_erase

send format command even if crypto erase is not supported

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. skip if controller supports crypto erase
  2. issue format command to do crypto erase, but the command shall complete with error

function: scripts/conformance/01_admin/format_test.py::test_format_invalid_lbaf

verify format command with invalid LBAF

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. check lbaf value
  2. format command with invalid lbaf, and the command shall complete with error Invalid Format.
  3. format to original format id

function: scripts/conformance/01_admin/format_test.py::test_format_invalid_nsid

verify format command with invalid nsid

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. issue normal format
  2. format with nsid 0 shall be aborted with Invalid Namespace or Format
  3. format with nsid 0xfffffffb shall be aborted with Invalid Namespace or Format
  4. format with nsid 0xff shall be aborted with Invalid Namespace or Format

function: scripts/conformance/01_admin/format_test.py::test_format_verify_data

verify data after format command

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. prepare data buffer and IO queue
  2. write data to specified lba and verify
  3. issue format
  4. check that the data in the specified LBA was erased by the format
  5. repeat write data to specified lba, verify, format, and check data.
  6. with crypto erase, verify that the data is erased

file: scripts/conformance/01_admin/fw_download_test

function: scripts/conformance/01_admin/fw_download_test.py::test_fw_download_out_of_order

download the firmware image out of order

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 113.
  2. Firmware portions may be submitted out of order to the controller.

Steps

  1. send firmware download commands out of order

function: scripts/conformance/01_admin/fw_download_test.py::test_fw_download_overlap

download firmware image portions that overlap each other

Reference

  1. NVM Express Revision 1.4a, March 9, 2020, Page 113.
  2. If ranges overlap, the controller may return an error of Overlapping Range.

Steps

  1. send firmware download commands in order
  2. send the same portion of the image again

function: scripts/conformance/01_admin/fw_download_test.py::test_fw_download_reset

download the image interrupted by controller reset

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 113.
  2. If a reset occurs between a firmware download and completion of the Firmware Commit command, then the controller shall discard all portion(s), if any, of downloaded images.

Steps

  1. send firmware download commands in order
  2. download the same portion of the image again after controller reset, commands shall complete successfully

function: scripts/conformance/01_admin/fw_download_test.py::test_fw_download_prp

Verify download command PRP offset

Reference

  1. NVM Express Revision 1.4a March 9, 2020, Page 113
  2. If ranges overlap, the controller may return an error of Overlapping Range.

Steps

  1. allocate buffer for fw download command
  2. send fw download with valid prp offset
  3. send fw download with invalid prp offset

file: scripts/conformance/01_admin/identify_test

function: scripts/conformance/01_admin/identify_test.py::test_identify_all_nsid

Identify command with invalid namespace will be aborted with Invalid Namespace or Format

Reference

  1. NVM Express Revision 1.4a March 9, 2020

Steps

  1. issue identify with valid namespace
  2. issue identify with invalid namespace will be aborted with Invalid Namespace or Format.

function: scripts/conformance/01_admin/identify_test.py::test_identify_namespace_data_structure

verify Identify Namespace data structure and Active Namespace ID list

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. issue an identify Active Namespace ID list
  2. check if the controller contains one namespace
  3. issue an identify Namespace data structure
  4. check buffer is not null
  5. check NSZE = NCAP

function: scripts/conformance/01_admin/identify_test.py::test_identify_reserved_cns

Identify command with Reserved CNS will be aborted with Invalid Field in Command

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. Identify command with Reserved CNS will be aborted with Invalid Field in Command.

function: scripts/conformance/01_admin/identify_test.py::test_identify_nsze_ncap_nuse

check if Namespace Size, Namespace Capacity and Namespace Utilization are reasonable value in Identify Namespace Data Structure

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 249.
  2. The following relationship holds: Namespace Size >= Namespace Capacity >= Namespace Utilization

Steps

  1. read nsze, ncap, nuse value
  2. check if Namespace Size >= Namespace Capacity >= Namespace Utilization
  3. if ANA reporting supported and in inaccessible or Persistent Loss state, nuse=0

function: scripts/conformance/01_admin/identify_test.py::test_identify_controller_with_nsid

get identify controller data with nsid field

Reference

  1. If the namespace identifier is not used for the command and the host specifies a value from 1h to FFFFFFFFh, then the controller should abort the command with status Invalid Field in Command,
  2. NVM Express Revision 1.4c
  3. NVM Express Revision 2.0

Steps

  1. get identify controller data with nsid 0
  2. get identify controller data with invalid nsid: 1 – 0xffffffff

function: scripts/conformance/01_admin/identify_test.py::test_identify_new_cns

the Identify command has to support the new CNS values to access the I/O Command Set specific Identify data structures

Reference

  1. NVM Express Revision 2.0a, July 23rd, 2021
  2. 5.17 Identify command

Steps

  1. skip if NVMe spec version is below 2.0
  2. issue identify command to read I/O Command Set specific Identify Namespace data structure
  3. issue identify command to read I/O Command Set specific Identify Controller data structure
  4. issue identify command to read Active Namespace ID list associated with the specified I/O Command Set

file: scripts/conformance/01_admin/logpage_test

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_page_id

send get log page command with valid and invalid Log Page Identifier

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 117.
  2. 5.14 Get Log Page command: If a Get Log Page command is processed that specifies a Log Identifier that is not supported, then the controller should abort the command with status Invalid Field in Command.

Steps

  1. send get log page command with valid Log Page Identifier, and commands shall complete successfully
  2. send get log page command with invalid Log Page Identifier, and commands shall complete with error
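
A sketch of the valid/invalid LID checks, assuming pynvme-style getlogpage() and the UserWarning error convention (module name and fixture are assumptions):

    import pytest
    import nvme as d   # assumed module name

    def test_getlogpage_page_id(nvme0):
        buf = d.Buffer(4096)
        # mandatory log pages: Error Information, SMART/Health, Firmware Slot
        for lid in (0x01, 0x02, 0x03):
            nvme0.getlogpage(lid, buf, 512).waitdone()
        # a reserved LID should be aborted with Invalid Field in Command
        with pytest.warns(UserWarning, match="ERROR status"):
            nvme0.getlogpage(0x6f, buf, 512).waitdone()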

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_lid_0

log identifier 00h is valid in NVMe 2.0

Reference

  1. NVM Express Revision 2.0a, July 23rd, 2021
  2. 5.16.1.1 Supported Log Pages (Log Identifier 00h)
  3. 3.1.2.1.2 Log Page Support

Steps

  1. skip if NVMe spec version is below 2.0
  2. LIDs 00h, 01h, 02h, 03h, 12h, and 13h are mandatory

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_different_size

send get log page command with different size

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 117.

Steps

  1. get the full smart log page
  2. read partial smart log page, and check data
  3. read data beyond smart log page, and check data

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_data_unit_read

verify the conditions for Data Units Read in SMART/Health log changes

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 123.
  2. 5.14.1.2 SMART / Health Information (Log Identifier 02h)

Steps

  1. skip if compare command is not supported
  2. get original Data Units Read
  3. send 1000 read commands
  4. check the Data Units Read has increased
  5. send 1000 compare commands
  6. check the Data Units Read has increased
  7. if the controller supports verify command, send 1000 verify commands
  8. the Data Units Read shall be increased
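
A sketch of the Data Units Read check, assuming pynvme-style Qpair/read/getlogpage helpers (module name, fixtures, and Buffer.data parsing are assumptions):

    import nvme as d   # assumed module name

    def test_getlogpage_data_unit_read(nvme0, nvme0n1):
        qpair = d.Qpair(nvme0, 16)
        buf = d.Buffer(4096)
        smart = d.Buffer(512)
        # Data Units Read: SMART/Health log bytes 47:32, in units of 1000 512-byte blocks
        nvme0.getlogpage(0x02, smart, 512).waitdone()
        before = smart.data(47, 32)
        for _ in range(1000):
            nvme0n1.read(qpair, buf, 0, 8).waitdone()   # read 8 LBAs from LBA 0
        nvme0.getlogpage(0x02, smart, 512).waitdone()
        assert smart.data(47, 32) > before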

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_data_unit_write

verify the conditions for Data Units Written in SMART/Health log changes

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 123.
  2. 5.14.1.2 SMART / Health Information (Log Identifier 02h)

Steps

  1. check if the controller supports Write Uncorrectable command
  2. get original Data Units Written
  3. send 1000 write commands
  4. check the Data Units Written has increased
  5. send 1000 Write Uncorrectable commands
  6. check the Data Units Written has not changed
  7. check if the controller supports Write Zeroes command
  8. write the lba

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_namespace

send get log page commands with valid and invalid namespace

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 121.
  2. 5.14.1.2 SMART / Health Information (Log Identifier 02h)

Steps

  1. skip if NVMe spec version is below 1.4
  2. the command completes successfully and the composite temperature is not 0
  3. send get log page command with nsid=1: log page on a per namespace basis
  4. the command completes successfully and the composite temperature is not 0
  5. send get log page commands with invalid namespace
  6. check the get log page command complete with error

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_offset

Log Page Offset specifies the location within a log page to start returning data from.

Reference

  1. NVM Express Revision 1.4c, page 117
  2. 5.14 Get Log Page command

Steps

  1. read smart data
  2. read smart data with log page offset
  3. compare smart data with different offset, shall be different
  4. offset is greater than the logpage size

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_smart_composite_temperature

verify Composite Temperature and Critical Warning in SMART / Health Information

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 122.
  2. 5.14.1.2 SMART / Health Information (Log Identifier 02h)

Steps

  1. get the current composite temperature
  2. set feature enable all asynchronous events
  3. set the composite temperature threshold lower than current temperature
  4. check AER notification is triggered
  5. send a getlogpage command to get the SMART data
  6. check if Critical Warning bit 1 in SMART data was set
  7. clear event by read log page
  8. set the composite temperature threshold higher than current temperature
  9. revert to default

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_after_error

Verify the Error Information log is updated after an error happens.

Reference

  1. NVM Express Revision 1.4a.

Steps

  1. send admin command with opcode=0x6, cdw10=0xFF
  2. send get error log cmd and record nerror1
  3. send admin command with opcode=0x6, cdw10=0xFF
  4. send get error log cmd and record nerror2
  5. verify error count value and number of Error Information Log Entries

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_retain_asynchronous_event

send a get log page command with Retain Asynchronous Event set.

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 117:
  2. Retain Asynchronous Event (RAE): This bit specifies when to retain or clear an Asynchronous Event. If this bit is cleared to ‘0’, the corresponding Asynchronous Event is cleared after the command completes successfully. If this bit is set to ‘1’, the corresponding Asynchronous Event is retained (i.e., not cleared) after the command completes successfully.

Steps

  1. get the current composite temperature
  2. set feature enable all asynchronous events
  3. set the composite temperature threshold lower than current temperature
  4. check AER notification is triggered
  5. send a getlogpage command to get the SMART data
  6. check if Critical Warning bit 1 in SMART data was set
  7. get log page with retain asynchronous event.
  8. clear over Temperature Threshold event
  9. trigger under Temperature Threshold event, but the event type is masked

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_not_retain_asynchronous_event

send a get log page command that clears an Asynchronous Event.

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 117:
  2. Retain Asynchronous Event (RAE): This bit specifies when to retain or clear an Asynchronous Event. If this bit is cleared to ‘0’, the corresponding Asynchronous Event is cleared after the command completes successfully. If this bit is set to ‘1’, the corresponding Asynchronous Event is retained (i.e., not cleared) after the command completes successfully.

Steps

  1. get the current composite temperature
  2. set feature enable all asynchronous events
  3. get current temperature
  4. set the composite temperature threshold lower than current temperature
  5. check AER notification is triggered
  6. send a getlogpage command to get the SMART data
  7. check if Critical Warning bit 1 in SMART data was set
  8. send get log page command to clear an Asynchronous Event.
  9. clear over Temperature Threshold event
  10. send get log page command to clear an Asynchronous Event.
  11. trigger under Temperature Threshold event, the event type is masked
  12. send get log page command to clear an Asynchronous Event.
  13. clear Over Temperature Threshold event
  14. power cycle the drive

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_persistent_event_log

generate and verify new event log

Reference

  1. NVM Express Revision 2.0

Steps

  1. check PEL size
  2. fresh events in the logpage
  3. check events: no format event left
  4. format start and complete: 07/08
  5. check events of format, and reset is still there

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_host_initiated_telemetry

Verify host initiated telemetry

Reference

  1. NVM Express Revision 2.0a, July 23rd, 2021
  2. 5.16.1.8 Telemetry Host-Initiated (Log Identifier 07h)

Steps

  1. check if telemetry is supported
  2. capture host initiated telemetry log
  3. get Telemetry Host-Initiated Data Generation Number
  4. capture host initiated telemetry log
  5. check if Telemetry Host-Initiated Data Generation Number is incremented each time
  6. get telemetry header info
  7. Bit 6 of the Log Page Attributes field is set to ‘1’ in the Identify Controller Data Structure
  8. Extended Telemetry Data Area 4 Supported (ETDAS) field is set to 1h in the Host Behavior Support feature
  9. get area 4 last block
  10. print header data
  11. get all data block: [1, last]
  12. read last block twice and compare data
  13. get telemetry data beyond last block, error expected
  14. check the generate number again
  15. invalid offset, error expected

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_controller_initiated_telemetry

Verify controller initiated telemetry

Reference

  1. NVM Express Revision 2.0a, July 23rd, 2021

Steps

  1. check if telemetry is supported
  2. get telemetry header
  3. check telemetry header data
  4. print header data
  5. get all data block: [1, last]
  6. read last block twice and compare data
  7. check the generate number again
  8. invalid offset, error expected

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_telemetry_offset_not_512

If a Log Page Offset Lower value is not a multiple of 512 bytes then the controller shall return an error with Invalid Field in Command.

Reference

  1. NVM Express Revision 2.0a, July 23rd, 2021

Steps

  1. issue a get Telemetry Host-Initiated Log with an offset that is not a multiple of 512 bytes, and expect the controller to return Invalid Field
  2. issue a get Telemetry Controller-Initiated Log with an offset that is not a multiple of 512 bytes, and expect the controller to return Invalid Field

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_host_initiated_telemetry_data_change

The Host-Initiated Data shall not change until a subsequent Telemetry Host-Initiated Log with this bit set to ‘1’.

Reference

  1. NVM Express Revision 2.0a, July 23rd, 2021
  2. 5.16.1.8 Telemetry Host-Initiated (Log Identifier 07h)

Steps

  1. check if telemetry is supported
  2. get the data area 1 size
  3. get host initiated telemetry data
  4. capture new host initiated telemetry data
  5. get new data area 1 size
  6. get another copy of telemetry data

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_nsid_0

NSID of 0h is supported in SMART/Health log

Reference

  1. NVM Express Revision 2.0a, July 23rd, 2021
  2. 5.16.1.3 SMART / Health Information (Log Identifier 02h)

Steps

  1. skip if NVMe spec version is below 2.0
  2. get original Data Units Written
  3. send 1000 write commands
  4. check the Data Units Written has increased

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_error_info_cid_ffff

The value of FFFFh should not be used as a CID, because the Error Information log page uses this value to indicate that an error is not associated with a specific command

Reference

  1. NVM Express Revision 2.0a, July 23rd, 2021

Steps

  1. issue a flush command with cid 0xffff
  2. issue an AER command
  3. create CQ and SQ
  4. delete SQ first
  5. write doorbell of the deleted SQ to cause the event Invalid Doorbell Register
  6. read log page to clear the event
  7. issue a get Error Information log page command
  8. check sqid and cid value in log page, expect 0xffff
  9. delete cq

function: scripts/conformance/01_admin/logpage_test.py::test_getlogpage_eye_opening_measurement

get EOM data and display

Reference

  1. TP4119a Rx Phy Eye Opening Measurement (EOM)

Steps

  1. read log data, No measurement has been started
  2. skip if EOM is not supported
  3. start measurement and read log data
  4. get the lane descriptor
  5. abort measurement and clear log
  6. read log data
  7. start measurement and read log data
  8. reset to initialize the EOM log
  9. reserved action

file: scripts/conformance/01_admin/queue_test

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_cq_basic_operation

create a queue and send commands on the queue

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. issue read commands on the created queue
  2. the read commands shall complete successfully
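
A sketch of basic I/O on a newly created queue pair, assuming the pynvme-style Qpair object, which issues Create I/O CQ/SQ internally (module name, fixtures, and the delete() helper are assumptions):

    import nvme as d   # assumed module name

    def test_queue_create_cq_basic_operation(nvme0, nvme0n1):
        qpair = d.Qpair(nvme0, 64)     # creates one I/O CQ and one I/O SQ
        buf = d.Buffer(4096)
        for lba in range(8):
            nvme0n1.read(qpair, buf, lba).waitdone()
        qpair.delete()                 # assumed helper: delete SQ, then CQ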

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_cq_with_invalid_id

create IO CQ command with specified invalid Queue Identifier

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 101.
  2. Figure 151: if the value specified is 0h, exceeds the Number of Queues reported, or corresponds to an identifier already in use, the controller should return an error of Invalid Queue Identifier.

Steps

  1. create a cq which queue id is 5, and it shall complete successfully
  2. create a cq which queue id is 0, and it shall complete with error
  3. create a cq which queue id is 0xffff, and it shall complete with error
  4. create a cq whose queue id is larger than supported number of queue, and it shall complete with error
  5. create a cq which queue id is duplicated cqid, and it shall complete with error
  6. delete the CQ

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_sq_with_invalid_id

create IO SQ command specified Invalid Queue Identifier

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 103.
  2. Figure 155: if the value specified is 0h, exceeds the Number of Queues reported, or corresponds to an identifier already in use, the controller should return an error of Invalid Queue Identifier.

Steps

  1. create a cq which queue id is 1
  2. create a sq which queue id is 5
  3. create a sq which queue id is 0, and it shall complete with error
  4. create a sq which queue id is 0xffff, and it shall complete with error
  5. create a sq which queue id is larger than supported number of queue, and it shall complete with error
  6. create a sq which queue id is duplicated sqid, and it shall complete with error
  7. delete SQ and CQ

function: scripts/conformance/01_admin/queue_test.py::test_queue_delete_cq_with_invalid_id

delete IO CQ command specified Invalid Queue Identifier

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 105.
  2. Figure 160: Invalid Queue Identifier: The Queue Identifier specified in the command is invalid. This error is also indicated if the Admin Completion Queue identifier is specified.

Steps

  1. delete a cq, normal case
  2. delete a cq which queue id is 0, and it shall complete with error
  3. delete a cq which queue id is 0xffff, and it shall complete with error
  4. delete a cq which queue id is larger than supported number of queue, and it shall complete with error
  5. delete a cq whose queue id does not exist, and it shall complete with error

function: scripts/conformance/01_admin/queue_test.py::test_queue_delete_sq_with_invalid_id

delete IO SQ command specified Invalid Queue Identifier

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 106.
  2. Figure 162:Invalid Queue Identifier: The Queue Identifier specified in the command is invalid. This error is also indicated if the Admin Submission Queue identifier is specified.

Steps

  1. delete a sq, and it shall complete successfully
  2. delete a sq which queue id is 0, and it shall complete with error
  3. delete a sq which queue id is 0xffff, and it shall complete with error
  4. delete a sq which queue id is larger than supported number of queue, and it shall complete with error
  5. delete a sq whose queue id does not exist, and it shall complete with error

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_cq_with_invalid_queue_size

create IO CQ command with specified Invalid Queue Size

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 101.
  2. Figure 151: Queue Size (QSIZE): This field indicates the size of the Completion Queue to be created. If the size is 0h or larger than the controller supports, the controller should return an error of Invalid Queue Size.

Steps

  1. create cq with valid queue size, and commands shall complete successfully
  2. skip remaining steps if MQES is 64K
  3. create cq which queue size is 0xffff, the command shall complete with error
  4. create cq which queue size is 0x10000, the command shall complete with error
  5. create cq which queue size is 1, the command shall complete with error
  6. create cq which queue size is larger than supported Queue Size, the command shall complete with error
  7. create cq which queue size is 0, the command shall complete with error
  8. create cq with valid queue size, will complete successfully

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_sq_with_invalid_queue_size

create IO SQ command specified Invalid Queue Size, shall fail with error of Invalid Queue Size

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 103.
  2. Figure 155:This field indicates the size of the Submission Queue to be created. If the size is 0h or larger than the controller supports, the controller should return an error of Invalid Queue Size.

Steps

  1. create a cq
  2. create sq with valid queue size
  3. create a sq whose queue size is 1, and the command shall complete with error
  4. delete the CQ

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_sq_with_invalid_queue_size_mqes

create IO SQ command specified Invalid Queue Size, shall fail with error of Invalid Queue Size

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 103.
  2. Figure 155:This field indicates the size of the Submission Queue to be created. If the size is 0h or larger than the controller supports, the controller should return an error of Invalid Queue Size.

Steps

  1. skip remaining steps if MQES is 64K
  2. create a cq
  3. create a sq whose queue size is 0xffff, and the command shall complete with error
  4. create a sq whose queue size is 0x10000, and the command shall complete with error
  5. create a sq whose queue size is larger than the supported Queue Size, and the command shall complete with error
  6. create a sq whose queue size is 0, and the command shall complete with error
  7. delete the CQ

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_sq_physically_contiguous

If CAP.CQR is 1, a Create IO SQ command with CDW11.PC cleared to 0, or with a PRP Entry that has a non-zero offset, shall fail with the correct error code.

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 103.
  2. Figure 154: If CDW11.PC is set to ‘1’, then this field specifies a 64-bit base memory address pointer of the Submission Queue that is physically contiguous. The address pointer is memory page aligned (based on the value in CC.MPS) unless otherwise specified.
  3. If there is a PRP Entry with a non-zero offset, then the controller should return an error of PRP Offset Invalid.

Steps

  1. check CAP.CQR value
  2. issue create io sq with pc cleared to 0; it will be aborted
  3. issue create io sq with pc set to 1; it will pass
  4. if prp offset is non-zero, will be aborted

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_sq_non_physically_contiguous

If CAP.CQR is 0, a Create IO SQ command in which CDW11.PC is 0 and the PRP Entry has a non-zero offset shall fail with error PRP Offset Invalid.

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 103.
  2. Figure 154: If CDW11.PC is cleared to ‘0’, then this field specifies a PRP List pointer that describes the list of pages that constitute the Submission Queue. The list of pages is memory page aligned (based on the value in CC.MPS) unless otherwise specified.

Steps

  1. check if pc is required to be physically contiguous
  2. issue create io sq with valid prp, the command shall complete successfully
  3. if prp offset is non-zero, will be aborted

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_cq_non_physically_contiguous

If CAP.CQR is 0, a Create IO CQ command in which CDW11.PC is 0 and the PRP Entry has a non-zero offset shall fail with error PRP Offset Invalid.

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 101.
  2. Figure 150.

Steps

  1. check if pc is required to be physically contiguous
  2. issue create io cq with valid prp, the command shall complete successfully
  3. if prp offset is non-zero, will be aborted

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_cq_invalid_interrupt_vector

create cq command with a specified Invalid Interrupt Vector

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 102.
  2. Figure 152: In MSI-X, a maximum of 2,048 vectors are used. This value shall not be set to a value greater than the number of messages the controller supports (refer to MSICAP.MC.MME or MSIXCAP.MXC.TS). If the value is greater than the number of messages the controller supports, the controller should return an error of Invalid Interrupt Vector.

Steps

  1. find an invalid MSIx vector
  2. create IO CQ with the invalid MSIx vector, and the command shall complete with error

function: scripts/conformance/01_admin/queue_test.py::test_queue_delete_cq_before_sq

delete IO CQ before deleting its associated IO SQ

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 105.
  2. If there are any associated I/O Submission Queues present, then the Delete I/O Completion Queue command shall fail with a status value of Invalid Queue Deletion

Steps

  1. create 3 IO SQ associated to one CQ
  2. delete CQ first and it shall complete with error
  3. delete one SQ
  4. delete CQ and it shall complete with error
  5. delete all SQ
  6. delete CQ and it shall complete successfully

function: scripts/conformance/01_admin/queue_test.py::test_queue_delete_full_sq

delete IO SQ which has outstanding commands

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. create all possible SQ with a single CQ associated
  2. issue enough commands to fill all SQs
  3. delete CQ first and it shall complete with error
  4. delete some SQ
  5. delete CQ and it shall complete with error
  6. delete some more SQ
  7. delete CQ and it shall complete with error
  8. delete the last SQ
  9. delete CQ and it shall complete successfully

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_sq_queue_priority

check priority in create SQ command

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. create SQ with different QPRIO, and all commands shall complete successfully

function: scripts/conformance/01_admin/queue_test.py::test_queue_set_after_create_queues

set feature Number of Queues shall fail with Command Sequence Error if it is issued after the creation of any I/O Submission Queue

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 212.
  2. 5.21.1.7 Number of Queues: This feature shall only be issued during initialization prior to creation of any I/O Submission and/or Completion Queues. If a Set Features command is issued for this feature after creation of any I/O Submission and/or I/O Completion Queues, then the Set Features command shall fail with status code of Command Sequence Error.

Steps

  1. skip if the controller does not support nvme version 1.4 or above
  2. create CQ and SQ
  3. issue a setfeature to set Number of Queues, and it shall complete with error
  4. delete CQ/SQ

function: scripts/conformance/01_admin/queue_test.py::test_queue_create_qpair_exceed_limit

the number of queues created exceeds the controller’s limit

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 212.
  2. 5.21.1.7 Number of Queues

Steps

  1. create the most queues supported by the controller
  2. creating a new qpair shall fail

function: scripts/conformance/01_admin/queue_test.py::test_queue_setfeature_different_cq_sq_number

set feature Number of Queues with different cq and sq numbers

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 212.

Steps

  1. initialize the controller without setting the number of queues
  2. set feature, number of cq = 2, number of sq = 4
  3. create all possible cq and sq
  4. create one more cq, shall be aborted
  5. create one more sq, shall be aborted
  6. delete queues

function: scripts/conformance/01_admin/queue_test.py::test_queue_invalid_prp_offset

Verify create IO CQ with invalid PRP offset.

Reference

  1. NVM-Express-1_4-2019.06.10-Ratified. Page 101.
  2. If there is a PRP Entry with a non-zero offset, then the controller should return an error of PRP Offset Invalid.

Steps

  1. per Figure 149 of NVM-Express-1_4-2019.06.10-Ratified, the PRP Entry shall have an offset of 0h in both cases
  2. send a create IO CQ command with a non-zero PRP offset, and it shall fail with ERROR status: 00/13 (PRP Offset Invalid)

function: scripts/conformance/01_admin/queue_test.py::test_queue_cq_sqhd

check sqhd in each CQ entry

Reference

  1. NVM-Express-1_4-2019.06.10-Ratified. Page 101.

Steps

  1. send one admin command and get the sqhd in its CQE
  2. send one more admin command, and the sqhd should be increased by 1
  3. send one more admin command after an AER, and the sqhd should be increased by 2

function: scripts/conformance/01_admin/queue_test.py::test_queue_sq_fuse_reserved_value

a reserved FUSE field value shall be rejected

Reference

  1. NVM Express Revision 1.4a March 9, 2020

Steps

  1. set FUSE field to 0x3, a reserved value
  2. check CQE for error Invalid Field in Command

function: scripts/conformance/01_admin/queue_test.py::test_queue_enabled_msix_interrupt_all

verify MSIx interrupt on all qpairs

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 297
  2. MSI-X also allows each interrupt to send a unique message data corresponding to the vector.

Steps

  1. create all qpairs
  2. read LBA0 with each qpair
  3. check MSIx interrupt assertion
  4. create qpair with illegal sqid, error is expected

file: scripts/conformance/01_admin/sanitize_test

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_operations_basic

the controller shall update the Sanitize Status log while the sanitize operation is in progress

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 237.
  2. Shall clear any outstanding Sanitize Operation Completed asynchronous event or Sanitize Operation Completed With Unexpected Deallocation asynchronous event; Shall update the Sanitize Status log (refer to section 5.14.1.16.2);

Steps

  1. write data
  2. verify data before sanitize
  3. issue block erase sanitize
  4. check if Sanitize Progress is updated and AER is triggered
  5. check if bit2:0 in Sanitize Status is 1 (the most recent sanitize operation completed successfully)
  6. check if bit8 in Sanitize Status is 1 (since the most recent successful sanitize operation)
  7. check if SCDW10 is the value of the Command Dword 10 field of the Sanitize command
  8. verify data after sanitize
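
A sketch of the Sanitize Status log checks above, assuming a pynvme-style API; `nvme0.sanitize()` and its `option` argument are assumptions, while the log offsets (SPROG bytes 1:0, SSTAT bytes 3:2, SCDW10 bytes 7:4 of LID 81h) follow the specification.

```python
import logging
import time
import nvme as d  # assumption: pynvme-style driver module


def test_sanitize_operations_basic(nvme0):
    buf = d.Buffer(4096)

    # start a Block Erase sanitize (SANACT = 010b); the call name is an assumption
    nvme0.sanitize(option=2).waitdone()

    # poll the Sanitize Status log page (LID 81h) until the operation finishes
    while True:
        nvme0.getlogpage(0x81, buf, 20).waitdone()
        sstat = buf.data(3, 2)              # Sanitize Status
        if (sstat & 0x7) != 2:              # 010b: sanitize in progress
            break
        time.sleep(1)

    assert (sstat & 0x7) == 1               # most recent sanitize completed successfully
    logging.info("SCDW10 of the last sanitize: 0x%x", buf.data(7, 4))
```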

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_crypto_erase_progress

Crypto Erase Sanitize operation progress

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 359.
  2. The Crypto Erase sanitize operation alters user data by changing the media encryption keys for all locations on the media within the NVM subsystem in which user data may be stored.

Steps

  1. check if controller support Crypto Erase
  2. write data and verify
  3. issue a Crypto Erase sanitize command
  4. check if Sanitize Progress is updated and AER is triggered
  5. check if data is erased
  6. check if bit2:0 in Sanitize Status is 1 (the most recent sanitize operation completed successfully)
  7. check if bit8 in Sanitize Status is 1 (since the most recent successful sanitize operation)
  8. check if SCDW10 is the value of the Command Dword 10 field of the Sanitize command

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_abort_non_allowed_command

a sanitize operation in progress shall abort any command that is not allowed during sanitize

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page237.
  2. Shall abort any command (submitted or in progress) not allowed during a sanitize operation with a status of Sanitize In Progress (refer to section 8.15.1); Firmware Commit, Firmware Image Download, Format NVM, and Sanitize commands will be aborted, and all NVM commands will be aborted.

Steps

  1. issue a Block Erase sanitize command
  2. if sanitize operation is finished, skip test
  3. abort sanitize command
  4. abort dst command
  5. abort fw download command
  6. abort fw commit command
  7. abort format command
  8. abort flush command
  9. abort write command
  10. abort read command
  11. wait sanitize operation complete
  12. check if Sanitize Progress is updated and AER is triggered

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_not_abort_allowed_command

a sanitize operation in progress shall not abort commands that are allowed during sanitize

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 362.
  2. Figure 484: Abort, AER, Create CQ/SQ, Delete CQ/SQ, DST, Get Features, Get Log Page, Identify, Keep Alive, NVMe-MI Receive, and NVMe-MI Send are allowed while a sanitize operation is in progress.

Steps

  1. issue a Block Erase sanitize command
  2. if sanitize operation is finished, skip test
  3. Create IO CQ and IO SQ
  4. Delete IO CQ and IO SQ
  5. issue setfeatures and it shall complete successfully
  6. issue getfeatures and it shall complete successfully
  7. issue identify command and it shall complete successfully
  8. check if Sanitize Progress is updated
  9. check if bit2:0 in Sanitize Status is 1 for the most recent successful sanitize operation
  10. check if bit8 in Sanitize Status is 1 for the most recent successful sanitize operation
  11. check if SCDW10 is the value of the Command Dword 10 field of the Sanitize command

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_not_successful_completion

send an invalid sanitize command and check that the Sanitize Status log page and user data are not altered

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.24 Sanitize command – NVM Command Set Specific

Steps

  1. write data and verify data
  2. read log page value before issue sanitize
  3. send a sanitize command with Reserved Sanitize Action
  4. check data has not been changed
  5. check the log page has not been updated

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_nvme_reset

a Block Erase sanitize operation is not able to be aborted and continues after an NVMe reset

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 358.
  2. Once started, a sanitize operation is not able to be aborted and continues after a Controller Level Reset including across power cycles.

Steps

  1. skip the test if sanitize completes in 3 seconds
  2. issue a Block Erase sanitize command
  3. controller reset
  4. check if Sanitize Progress is updated and AER is triggered

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_not_support_type

Controller shall abort the unsupported sanitize command with a status of Invalid Field in Command.

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 237:
  2. If an unsupported sanitize operation type is selected by a Sanitize command, then the controller shall abort the command with a status of Invalid Field in Command.

Steps

  1. check the Sanitize Capabilities value in identify
  2. if Crypto Erase is not supported, a Crypto Erase sanitize command shall be aborted with Invalid Field in Command
  3. if Block Erase is not supported, a Block Erase sanitize command shall be aborted with Invalid Field in Command
  4. if Overwrite is not supported, an Overwrite sanitize command shall be aborted with Invalid Field in Command
  5. a sanitize command with a reserved Sanitize Action value shall be aborted with Invalid Field in Command

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_abort_by_fw_activation

If a firmware activation with reset is pending, then the controller shall abort any Sanitize command.

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 237:
  2. If a firmware activation with reset is pending, then the controller shall abort any Sanitize command.

Steps

  1. skip if firmware slot 1 is read only
  2. issue fw commit command
  3. issue sanitize command

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_no_deallocate

Verify data will be deallocated after Sanitize if the No Deallocate After Sanitize field is cleared to ‘0’, or is set to ‘1’ while the No-Deallocate Inhibited bit is set to ‘1’.

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 239:
  2. No Deallocate After Sanitize: If set to ‘1’ and the No-Deallocate Inhibited bit (refer to Figure 247) is cleared to ‘0’, then the controller shall not deallocate any logical blocks as a result of successfully completing the sanitize operation.
  3. If: a) cleared to ‘0’; or b) set to ‘1’ and the No-Deallocate Inhibited bit is set to ‘1’, then the controller should deallocate logical blocks as a result of successfully completing the sanitize operation.

Steps

  1. run only on NVMe 1.4 and later
  2. check if support sanitize
  3. set NODRM
  4. write data and verify
  5. issue a sanitize command, and No Deallocate After Sanitize field set to 1
  6. check sanitize status in log page and AER
  7. check if data is deallocated
  8. check sanitize status
  9. clear NODRM

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_exit_failure_mode

A Sanitize command specifying an Action set to 001b shall be successful if the most recent sanitize operation did not fail

Reference

  1. NVM Express Revision 2.0a, July 23rd, 2021. Section 5.24

Steps

  1. check if support sanitize
  2. issue block erase sanitize
  3. check if Sanitize Progress is updated and AER is triggered
  4. check if bit2:0 in Sanitize Status is 1 (the most recent sanitize operation completed successfully)
  5. check if bit8 in Sanitize Status is 1 (since the most recent successful sanitize operation)
  6. check if SCDW10 is the value of the Command Dword 10 field of the Sanitize command
  7. verify data after sanitize
  8. issue an exit failure mode Sanitize command
  9. check log page again, log page is not changed

function: scripts/conformance/01_admin/sanitize_test.py::test_sanitize_and_flush

a Flush command may complete successfully even during a sanitize operation

Reference

  1. NVM Express Revision 2.0a, July 23rd, 2021. Section 5.24

Steps

  1. skip the test if a volatile write cache is not present
  2. get the original write cache setting
  3. disable the write cache and verify the feature is set correctly
  4. issue block erase sanitize
  5. flush during sanitize
  6. check if Sanitize Progress is updated and AER is triggered
  7. check if bit2:0 in Sanitize Status is 1 (the most recent sanitize operation completed successfully)
  8. check if bit8 in Sanitize Status is 1 (since the most recent successful sanitize operation)
  9. check if SCDW10 is the value of the Command Dword 10 field of the Sanitize command

folder: scripts/conformance/02_nvm

file: scripts/conformance/02_nvm/compare_test

function: scripts/conformance/02_nvm/compare_test.py::test_compare_lba_0

verify the Starting LBA and Number of Logical Blocks fields of the compare command

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 255.
  2. The Compare command reads the logical blocks specified by the command from the medium and compares the data read to a comparison data buffer transferred as part of the command.

Steps

  1. check if the compare command is supported
  2. get the maximum number of lba
  3. prepare data to be compared
  4. send compare commands with different LBA, and the command shall complete with error
  5. send compare commands with different data, and the command shall complete with error
  6. recover data buffer to original data
  7. send compare commands with different nlb, and the command shall complete with error
  8. send compare commands with invalid LBA, and the command shall complete with error
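
A minimal sketch of the match/mismatch part of the steps above (pynvme-style API assumed; Compare Failure is reported as status 02/85).

```python
import pytest
import nvme as d  # assumption: pynvme-style driver module


def test_compare_lba_0(nvme0, nvme0n1):
    qpair = d.Qpair(nvme0, 16)
    buf = d.Buffer(4096)
    buf[0] = 0x5a

    # write LBA 0 and compare against the same buffer: shall succeed
    nvme0n1.write(qpair, buf, 0, 1).waitdone()
    nvme0n1.compare(qpair, buf, 0, 1).waitdone()

    # change one byte and compare again: Compare Failure (02/85) expected
    buf[0] = 0xa5
    with pytest.warns(UserWarning, match="ERROR status: 02/85"):
        nvme0n1.compare(qpair, buf, 0, 1).waitdone()

    qpair.delete()
```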

function: scripts/conformance/02_nvm/compare_test.py::test_compare_invalid_nsid

issue compare with invalid nsid

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 247.
  2. An invalid NSID is any value that is not a valid NSID and is also not the broadcast value.

Steps

  1. check if the compare command is supported
  2. issue compare with invalid nsid
  3. check the compare command failed to execute

function: scripts/conformance/02_nvm/compare_test.py::test_compare_fused_operations

compare and write fused operation

Reference

  1. NVM Express Revision 1.4c
  2. 6.2 Fused Operations

Steps

  1. skip test if compare and write fused operation is not supported
  2. send separated write commands
  3. send separated compare commands with wrong data
  4. send separated compare commands
  5. send separated command with illegal fuse
  6. send combined commands with illegal fuse
  7. send correct compare/write fused command
  8. send compare/write fused command with wrong data
  9. send correct compare/write fused command with correct data
  10. send compare/write fused command with wrong lba
  11. send compare/write fused command with wrong lba
  12. compare the new data in fused command
  13. compare the new data in separated command
  14. send fused commands with other normal commands
  15. send fused commands with other wrong commands
  16. send 2 pairs of fused commands
  17. send 2 pairs of fused commands, one pass, another wrong

function: scripts/conformance/02_nvm/compare_test.py::test_compare_write_mixed

test compare and write in ioworker

Reference

  1. NVM Express Revision 1.4c

Steps

  1. check if the compare command is supported
  2. format
  3. test write and compare without token
  4. test write and compare without token in ioworker
  5. enable token and test compare

file: scripts/conformance/02_nvm/copy_test

function: scripts/conformance/02_nvm/copy_test.py::test_copy_basic

send copy command and verify the copy data

Reference

  1. NVM Command Set Specification 1.0b January 6, 2022. Page 22.
  2. The Copy command is used by the host to copy data from one or more source logical block ranges to a
  3. single consecutive destination logical block range.

Steps

  1. write lba 0-32
  2. copy lba 0-32 to 32-64
  3. copy lba 0-32, 32-64 to 64-96, 96-128
  4. create buffer to read data copied
  5. read lba 0-32
  6. read lba 32-64
  7. read lba 64-96
  8. read lba 96-128
  9. compare data

function: scripts/conformance/02_nvm/copy_test.py::test_copy_smart

the copy command affects the SMART Host Read/Write Commands fields, but does not affect Data Units Read/Written

Reference

  1. NVM Command Set Specification 1.0b January 6, 2022. Page 8.
  2. 1.4.2.3 SMART Data Units Read Command: The Compare command, Read command, and Verify command
  3. 1.4.2.4 SMART Host Read Command: The Compare command, Copy command, and Read command

Steps

  1. get original Data Units Read/Written and Host Read/Write Commands
  2. copy lba 0-32, 32-64 to 64-96, 96-128
  3. check that Data Units Read/Written have not increased and Host Read/Write Commands have increased

function: scripts/conformance/02_nvm/copy_test.py::test_copy_format_1

the copy descriptor format type of the Source range entries

Reference

  1. NVM Command Set Specification 1.0b January 6, 2022. Page 24.
  2. Descriptor Format: Specifies the type of the Copy Descriptor Format that is used. The Copy Descriptor
  3. Format specifies the starting LBA, number of logical blocks, and parameters associated with the read
  4. portion of the operation.

Steps

  1. check if copy format 1 is supported
  2. copy lba 0-32 to 32-64, format=0
  3. copy lba 0-32 to 32-64, format=1
  4. copy lba 0-32 to 32-64, set copy range format=0, copy format=1
  5. copy lba 0-32 to 32-64, set copy range format=1 copy format=0

function: scripts/conformance/02_nvm/copy_test.py::test_copy_invalid_lba

verify the copy command of the starting LBA under boundary conditions

Reference

  1. NVM Command Set Specification 1.0b January 6, 2022.

Steps

  1. read ncap value in identify
  2. copy lba 0-1 to ncap
  3. issue a copy command, slba is ncap-1, nlb is 1.
  4. issue a copy command, source range is OOR.
  5. copy with slba and nlb beyond ncap, will be aborted with LBA Out of Range

function: scripts/conformance/02_nvm/copy_test.py::test_copy_max_namespace_size

send copy command with invalid starting LBA will be aborted with LBA Out of Range

Reference

  1. NVM Command Set Specification 1.0b January 6, 2022.

Steps

  1. read nsze and ncap value, and check if both of them are equal.
  2. copy over nsze, will be aborted with LBA Out of Range.

function: scripts/conformance/02_nvm/copy_test.py::test_copy_fua

send copy command with FUA field enabled

Reference

  1. NVM Command Set Specification 1.0b January 6, 2022.

Steps

  1. send copy commands with FUA enabled, all commands shall complete successfully

function: scripts/conformance/02_nvm/copy_test.py::test_copy_invalid_nsid

send copy command with invalid nsid will be aborted with Invalid Namespace or Format

Reference

  1. NVM Command Set Specification 1.0b January 6, 2022.

Steps

  1. issue a copy command with invalid namespace
  2. check if copy command was aborted with Invalid Namespace or Format

function: scripts/conformance/02_nvm/copy_test.py::test_copy_invalid_nsid_lba

copy command with invalid nsid and invalid SLBA

Reference

  1. NVM Command Set Specification 1.0b January 6, 2022.

Steps

  1. read ncap and mdts value
  2. prepare copy range buffer
  3. issue a copy command with invalid namespace and invalid SLBA
  4. check the error code

function: scripts/conformance/02_nvm/copy_test.py::test_copy_max_nr

issue a copy command with the maximum number of Source Range entries, i.e. the NR field equal to the value in the MSRC field (refer to Figure 97)

Reference
  1. NVM Command Set Specification 1.0b January 6, 2022.

Steps

  1. read msrc value in identify, 0 base
  2. issue a copy command with maximum nr=msrc

function: scripts/conformance/02_nvm/copy_test.py::test_copy_invalid_nr

if the number of Source Range entries (i.e., the value in the NR field) is greater than the value in the MSRC field (refer to Figure 97), the copy command shall be aborted

Reference
  1. NVM Command Set Specification 1.0b January 6, 2022.

Steps

  1. read msrc value in identify, and increase to an invalid size
  2. check if msrc supports the maximum entries
  3. send the copy command, and it is expected to be aborted

function: scripts/conformance/02_nvm/copy_test.py::test_copy_mssrl

if a valid Source Range Entry specifies a Number of Logical Blocks field that is greater than the value in the MSSRL field (refer to Figure 97), the copy command shall be aborted

Reference
  1. NVM Command Set Specification 1.0b January 6, 2022.

Steps

  1. read mssrl value in identify
  2. copy 0-mssrl to dest_lba
  3. copy with lba_count=mssrl+1

function: scripts/conformance/02_nvm/copy_test.py::test_copy_mcl

if the sum of all Number of Logical Blocks fields in all Source Range entries is greater than the value in the MCL field (refer to Figure 97), the copy command shall be aborted

Reference
  1. NVM Command Set Specification 1.0b January 6, 2022.

Steps

  1. read mcl value in identify
  2. copy with number of source range is mcl
  3. copy lba 0-32*mcl to dest_lba, range count is mcl.
  4. copy with number of source range is mcl+1
  5. send the copy command, and it is expected to be aborted

function: scripts/conformance/02_nvm/copy_test.py::test_copy_multi_source

copy data from multiple different places

Reference

  1. NVM Command Set Specification 1.0b January 6, 2022.

Steps

  1. check msrc
  2. check mcl
  3. check mssrl
  4. write data to different lba regions, and copy to a single region
  5. read copied data
  6. compare data

function: scripts/conformance/02_nvm/copy_test.py::test_copy_write_uncorrectable

copy an LBA marked uncorrectable; the command will be aborted with Unrecovered Read Error status

Reference

  1. NVM Command Set Specification 1.0b January 6, 2022.
  2. The Write Uncorrectable command is used to mark a range of logical blocks as invalid. When the specified logical block(s) are read after this operation, a failure is returned with Unrecovered Read Error status.

Steps

  1. check if write uncorrectable command is supported
  2. prepare buffer
  3. issue a write uncorrectable command, will complete successfully
  4. send read commands on uncorrectable LBAs, the command shall complete with error Unrecovered Read Error
  5. issue a write command, the command shall complete successfully
  6. issue a read command, the command shall complete successfully
  7. verify data

function: scripts/conformance/02_nvm/copy_test.py::test_copy_ioworker

copy data with ioworker, mixed with read and write

Reference

  1. NVM Command Set Specification 1.0b January 6, 2022.

Steps

  1. copy data mixed with read and write in ioworker

file: scripts/conformance/02_nvm/deallocate_test

function: scripts/conformance/02_nvm/deallocate_test.py::test_deallocate_and_write

the logical block can be written and read after deallocated

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 259.
  2. The value read from a deallocated logical block shall be deterministic; specifically, the value returned by subsequent reads of that logical block shall be the same until a write operation occurs to that logical block.

Steps

  1. prepare data buffer
  2. deallocate logical blocks
  3. write data into deallocated logical block and verify data
  4. calculate the start LBA address of the trim range
  5. trim LBA range
  6. read the LBA range and verify data
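
A sketch of the deallocate-then-write flow above (pynvme-style API assumed; `set_dsm_range` fills one 16-byte DSM range entry and `dsm` sends Dataset Management with the deallocate attribute).

```python
import nvme as d  # assumption: pynvme-style driver module


def test_deallocate_and_write(nvme0, nvme0n1):
    qpair = d.Qpair(nvme0, 16)
    dsm_buf = d.Buffer(4096)

    # deallocate LBA 0..7 with a single DSM range
    dsm_buf.set_dsm_range(0, 0, 8)
    nvme0n1.dsm(qpair, dsm_buf, 1).waitdone()

    # a deallocated block can still be written and read back
    write_buf = d.Buffer(4096)
    read_buf = d.Buffer(4096)
    write_buf[0] = 0x5a
    nvme0n1.write(qpair, write_buf, 0, 1).waitdone()
    nvme0n1.read(qpair, read_buf, 0, 1).waitdone()
    assert read_buf[0] == 0x5a

    qpair.delete()
```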

function: scripts/conformance/02_nvm/deallocate_test.py::test_deallocate_out_of_range

Dataset Management with out of range will be aborted

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. check ncap in identify
  2. deallocate logical blocks of whole drive, and the command shall complete successfully
  3. deallocate logical blocks out of range, and the command shall complete with error

function: scripts/conformance/02_nvm/deallocate_test.py::test_deallocate_nr_maximum

if the number of ranges in the dsm command exceeds the limit, the dsm command will be aborted

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 258.
  2. The definition of the Dataset Management command Range field is specified in Figure 366. The maximum case of 256 ranges is shown.

Steps

  1. deallocate with 256 ranges
  2. deallocate with more than 256 ranges, will be aborted

function: scripts/conformance/02_nvm/deallocate_test.py::test_deallocate_correct_range

the data for logical blocks that are not deallocated are not changed

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 257.
  2. The data and metadata for logical blocks that are not deallocated by the NVM subsystem are not changed as the result of a Dataset Management command.

Steps

  1. write a range of logical blocks
  2. deallocate middle logical block
  3. check if other areas are not deallocated

function: scripts/conformance/02_nvm/deallocate_test.py::test_deallocate_multiple_range

deallocate multiple ranges

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 258.
  2. The data that the Dataset Management command provides is a list of ranges with context attributes.

Steps

  1. write a range of logical blocks
  2. deallocate multiple ranges data
  3. check if the data that are not deallocated are not changed
  4. check if ranges that are deallocated can be written and read

function: scripts/conformance/02_nvm/deallocate_test.py::test_deallocate_mixed

trim mixed with other IO types

Reference

  1. NVM Express Revision 1.4a March 9, 2020, Page 259.

Steps

  1. send trim commands mixed with different read and write IOs in ioworker

file: scripts/conformance/02_nvm/flush_test

function: scripts/conformance/02_nvm/flush_test.py::test_flush_with_read_write

The Flush command is used to request that the contents of volatile write cache be made non-volatile.

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 260.
  2. The Flush command is used to request that the contents of volatile write cache be made non-volatile.

Steps

  1. prepare data buffer and IO queue
  2. issue write command
  3. issue flush command
  4. verify data
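
A sketch of the write-then-flush sequence above (pynvme-style API assumed).

```python
import nvme as d  # assumption: pynvme-style driver module


def test_flush_with_read_write(nvme0, nvme0n1):
    qpair = d.Qpair(nvme0, 16)
    buf = d.Buffer(4096)
    buf[0] = 0x5a

    # write one LBA, then flush the volatile write cache to non-volatile media
    nvme0n1.write(qpair, buf, 0, 1).waitdone()
    nvme0n1.flush(qpair).waitdone()

    # read the LBA back and verify the data
    read_buf = d.Buffer(4096)
    nvme0n1.read(qpair, read_buf, 0, 1).waitdone()
    assert read_buf[0] == 0x5a
    qpair.delete()
```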

function: scripts/conformance/02_nvm/flush_test.py::test_flush_vwc_check

controllers shall not set bits 2:1 in the VWC field to the value of 00b

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 260.
  2. Controllers compliant with versions 1.4 and later of this specification shall not set bits 2:1 in the VWC field to the value of 00b.
  3. If bits 2:1 are set to 11b in the VWC field (refer to Figure 247) and the specified NSID is FFFFFFFFh, then the Flush command applies to all namespaces attached to the controller processing the Flush command. If bits 2:1 are set to 10b in the VWC field and the specified NSID is FFFFFFFFh, then the controller fails the command with status code Invalid Namespace or Format.

Steps

  1. read vwc and vs value
  2. verify bits 2:1 in VWC field is not zero if version is 1.4 and later
  3. check if NSID 0xffffffff is not supported
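
A sketch of the VWC check above (pynvme-style API assumed; VWC is byte 525 of the Identify Controller data, and reading the VS register through `nvme0[0x08]` is an assumption of that style).

```python
import nvme as d  # assumption: pynvme-style driver module


def test_flush_vwc_check(nvme0):
    vwc = nvme0.id_data(525)      # Volatile Write Cache field
    version = nvme0[0x08]         # VS register, e.g. 0x00010400 for NVMe 1.4

    # for NVMe 1.4 and later, bits 2:1 of VWC shall not be 00b
    if version >= 0x00010400:
        assert (vwc >> 1) & 0x3 != 0
```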

file: scripts/conformance/02_nvm/read_test

function: scripts/conformance/02_nvm/read_test.py::test_read_large_lba

verify the read command of the starting LBA under boundary conditions

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. read ncap value in identify
  2. issue read command, slba is ncap-1, nlb is 1, and the command shall complete successfully
  3. read with slba and nlb beyond ncap, will be aborted with LBA Out of Range
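
A sketch of the boundary reads above (pynvme-style API assumed; NCAP is bytes 15:8 of the Identify Namespace data, and LBA Out of Range is status 00/80).

```python
import pytest
import nvme as d  # assumption: pynvme-style driver module


def test_read_large_lba(nvme0, nvme0n1):
    qpair = d.Qpair(nvme0, 16)
    buf = d.Buffer(4096)
    ncap = nvme0n1.id_data(15, 8)   # Namespace Capacity

    # last valid LBA: shall complete successfully
    nvme0n1.read(qpair, buf, ncap - 1, 1).waitdone()

    # beyond the capacity: shall be aborted with LBA Out of Range (00/80)
    with pytest.warns(UserWarning, match="ERROR status: 00/80"):
        nvme0n1.read(qpair, buf, ncap, 1).waitdone()

    qpair.delete()
```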

function: scripts/conformance/02_nvm/read_test.py::test_read_max_namespace_size

send read command with invalid starting lba

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. read nsze and ncap value, and check if both of them are equal.
  2. read over nsze, will be aborted with LBA Out of Range.

function: scripts/conformance/02_nvm/read_test.py::test_read_fua

verify the read command with FUA field enabled

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 261.
  2. Force Unit Access (FUA): If set to ‘1’, then for data and metadata, if any, associated with logical blocks specified by the Read command, the controller shall:
    1. commit that data and metadata, if any, to non-volatile media; and
    2. return the data, and metadata, if any, that are read from non-volatile media.

Steps

  1. read with FUA enabled, and commands shall complete successfully

function: scripts/conformance/02_nvm/read_test.py::test_read_bad_number_blocks

a read exceeding the maximum data transfer size will be aborted with Invalid Field in Command

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. read and check mdts value
  2. issue read command with transfer size not more than mdts, normal case
  3. issue read command with transfer size more than mdts, read command will be aborted with Invalid Field in Command.
  4. issue read command with valid nlb, and commands shall complete successfully

function: scripts/conformance/02_nvm/read_test.py::test_read_valid

verify data consistency

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. prepare data buffer and IO queue
  2. issue write and read command
  3. wait commands complete and verify data
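
A sketch of the basic write/read data-consistency check above (pynvme-style API assumed).

```python
import nvme as d  # assumption: pynvme-style driver module


def test_read_valid(nvme0, nvme0n1):
    qpair = d.Qpair(nvme0, 16)
    write_buf = d.Buffer(4096)
    read_buf = d.Buffer(4096)

    # write a recognizable byte, read the LBA back, and compare
    write_buf[0] = 0x77
    nvme0n1.write(qpair, write_buf, 1, 1).waitdone()
    nvme0n1.read(qpair, read_buf, 1, 1).waitdone()
    assert read_buf[0] == 0x77

    qpair.delete()
```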

function: scripts/conformance/02_nvm/read_test.py::test_read_invalid_nsid

read command with invalid nsid will be aborted with Invalid Namespace or Format

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. issue a read command with invalid namespace
  2. ring doorbell and wait command completes
  3. check if read command was aborted with Invalid Namespace or Format

function: scripts/conformance/02_nvm/read_test.py::test_read_invalid_nlb

read command with invalid nlb will be aborted with Invalid Field in Command

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. skip this test if the MDTS is too large
  2. issue a read command with invalid nlb
  3. check if read command was aborted with Invalid Field in Command.

function: scripts/conformance/02_nvm/read_test.py::test_read_invalid_nsid_lba

read command with invalid nsid and invalid SLBA

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. read ncap and mdts value
  2. issue a read command with invalid namespace and invalid SLBA
  3. check the error code

function: scripts/conformance/02_nvm/read_test.py::test_read_ioworker_consistency

get read iops per second

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. send read IO in ioworker and get the IOPS of every second

function: scripts/conformance/02_nvm/read_test.py::test_read_ioworker_trim_mixed

read mixed with trim

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. send read and trim commands in ioworker

function: scripts/conformance/02_nvm/read_test.py::test_read_different_io_size_and_count

read with different lba and nlb

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. allocate all DMA buffers for IO commands
  2. send and reap all IO command dwords

file: scripts/conformance/02_nvm/verify_test

function: scripts/conformance/02_nvm/verify_test.py::test_verify_large_lba

send a verify command whose LBA range exceeds the size of the namespace; it will be aborted with LBA Out of Range

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 274.

Steps

  1. read Namespace Capacity value
  2. issue a verify command, slba = ncap -1, nlb = 1, it shall complete successfully
  3. issue a verify command, slba = ncap, nlb = 1, it shall complete with error
  4. issue a verify command, slba = ncap + 1, nlb = 1, it shall complete with error
  5. issue a verify command, slba = ncap-1, nlb = 2, it shall complete with error
  6. issue a verify command, slba = 0xffffffff00000000, nlb = 1, it shall complete with error

function: scripts/conformance/02_nvm/verify_test.py::test_verify_valid

issue a valid verify command and check its completion

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 274.
  2. The Verify command verifies the integrity of stored data without transferring any data to the host.

Steps

  1. prepare data buffer and IO queue
  2. write data and verify
  3. define a callback function for verify
  4. issue a verify command
  5. wait commands complete and verify data

function: scripts/conformance/02_nvm/verify_test.py::test_verify_invalid_nsid

verify command with invalid nsid will be aborted with Invalid Namespace or Format

Reference

  1. NVM Express Revision 1.4a March 9, 2020, Page 247.

Steps

  1. issue a verify command with namespace ID 0xff
  2. check if status is 0x000b

function: scripts/conformance/02_nvm/verify_test.py::test_verify_nlb

the verify command is not limited by MDTS

Reference

  1. NVM Express Revision 1.4a March 9, 2020, Page 275.

Steps

  1. check if controller supports verify.
  2. read mdts
  3. issue a verify which namespace is 1
  4. set verify nlb larger than mdts
  5. send command and trigger the doorbell
  6. check if status is success

function: scripts/conformance/02_nvm/verify_test.py::test_verify_invalid_nsid_lba

verify command with invalid namespace and slba will be aborted with Invalid Namespace or Format

Reference

  1. NVM Express Revision 1.4a March 9, 2020, Page 275.

Steps

  1. read ncap and mdts
  2. issue a verify, namespace = 0xff, slba = capacity
  3. write sq.tail
  4. check the error code

function: scripts/conformance/02_nvm/verify_test.py::test_verify_uncorrectable_lba

verify an LBA marked uncorrectable; the command will be aborted with Unrecovered Read Error status

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 273.
  2. The Write Uncorrectable command is used to mark a range of logical blocks as invalid. When the specified logical block(s) are read after this operation, a failure is returned with Unrecovered Read Error status.

Steps

  1. skip if write uncorrectable command is not supported
  2. issue a write uncorrectable command, will complete successfully
  3. send read commands on uncorrectable LBAs
  4. the command shall complete with error Unrecovered Read Error
  5. issue a write command, the command shall complete successfully
  6. issue a read command, the command shall complete successfully

file: scripts/conformance/02_nvm/write_test

function: scripts/conformance/02_nvm/write_test.py::test_write_large_lba

verify the write command of the starting LBA under boundary conditions

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. read ncap value in identify
  2. issue a write command, slba is ncap-1, nlb is 1.
  3. write slba and nlb beyond the ncap, will be aborted with LBA Out of Range

function: scripts/conformance/02_nvm/write_test.py::test_write_max_namespace_size

send write command with invalid SLBA will be aborted with LBA Out of Range

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. read nsze and ncap value, and check if both of them are equal.
  2. write over nsze, will be aborted with LBA Out of Range.

function: scripts/conformance/02_nvm/write_test.py::test_write_fua

send write command with FUA field enabled

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. send write commands with FUA enabled, all commands shall complete successfully

function: scripts/conformance/02_nvm/write_test.py::test_write_bad_number_blocks

send write command longer than MDTS

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. read mdts value
  2. send write commands with transfer size not more than mdts, the commands shall complete successfully
  3. write with transfer size more than mdts, the write command will be aborted with Invalid Field in Command
  4. issue write command which nlb is less than mdts, it shall complete successfully

function: scripts/conformance/02_nvm/write_test.py::test_write_valid

verify read and written data consistency

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. prepare data buffer and IO queue
  2. issue write and read command
  3. wait commands complete and verify data

function: scripts/conformance/02_nvm/write_test.py::test_write_invalid_nsid

send write command with invalid nsid will be aborted with Invalid Namespace or Format

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. issue a write command with invalid namespace
  2. check if the write command was aborted with Invalid Namespace or Format

function: scripts/conformance/02_nvm/write_test.py::test_write_invalid_nlb

write command with invalid nlb will be aborted with Invalid Field in Command

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. issue a write command with invalid nlb
  2. check if write command was aborted with Invalid Field in Command.

function: scripts/conformance/02_nvm/write_test.py::test_write_invalid_nsid_lba

write command with invalid nsid and invalid SLBA

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. read ncap and mdts value
  2. issue a write command with invalid namespace and invalid SLBA
  3. check the error code

function: scripts/conformance/02_nvm/write_test.py::test_write_ioworker_different_op_mixed

mix different operations in one ioworker

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. send read/write/flush/trim commands in one ioworker

function: scripts/conformance/02_nvm/write_test.py::test_write_ioworker_consistency

get write iops per second

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. send write commands in ioworker and get the IOPS of every second

function: scripts/conformance/02_nvm/write_test.py::test_write_followed_by_read

mix write and read commands

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. write data to a lba
  2. read data from the same lba
  3. repeat for 10000 times with different lbas

function: scripts/conformance/02_nvm/write_test.py::test_write_fua_unaligned

write data to unaligned lba with fua mode

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. write data with unaligned FUA writes
  2. verify data

function: scripts/conformance/02_nvm/write_test.py::test_write_cache_disable

write with write cache disabled

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. precondition
  2. write without cache
  3. seq write size is less than 32 KiB * 999 commands
  4. unsafe shutdown occur
  5. reboot
  6. seq write size is more than 32 KiB * 1010 commands
  7. shutdown occur
  8. reboot
  9. verify data

file: scripts/conformance/02_nvm/write_uncorrectable_test

function: scripts/conformance/02_nvm/write_uncorrectable_test.py::test_write_uncorrectable_large_lba

verify the write uncorrectable command of the starting LBA under boundary conditions

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. read ncap value
  2. issue write uncorrectable command, slba is ncap-1, nlb is 1.
  3. issue write uncorrectable command with slba and nlb beyond ncap, will be aborted with LBA Out of Range

function: scripts/conformance/02_nvm/write_uncorrectable_test.py::test_write_uncorrectable_deallocate

verify deallocate after write uncorrectable

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. check if controller supports deallocate
  2. prepare buffer
  3. issue a write uncorrectable command
  4. issue a deallocate command
  5. issue a write command

function: scripts/conformance/02_nvm/write_uncorrectable_test.py::test_write_uncorrectable_after_deallocate

verify deallocate before write uncorrectable

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. check if controller supports deallocate
  2. issue a deallocate command, command shall complete successfully
  3. issue a write uncorrectable command, command shall complete successfully
  4. issue a write command, command shall complete successfully

function: scripts/conformance/02_nvm/write_uncorrectable_test.py::test_write_uncorrectable_read

read an LBA marked uncorrectable; the command will be aborted with Unrecovered Read Error status

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 273.
  2. The Write Uncorrectable command is used to mark a range of logical blocks as invalid. When the specified logical block(s) are read after this operation, a failure is returned with Unrecovered Read Error status.

Steps

  1. prepare buffer
  2. issue a write uncorrectable command, will complete successfully
  3. send read commands on uncorrectable LBAs
  4. the command shall complete with error Unrecovered Read Error
  5. issue a write command, the command shall complete successfully
  6. issue a read command, the command shall complete successfully
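
A sketch of the steps above (pynvme-style API assumed; `write_uncorrectable` on the namespace object is an assumption, and Unrecovered Read Error is status 02/81).

```python
import pytest
import nvme as d  # assumption: pynvme-style driver module


def test_write_uncorrectable_read(nvme0, nvme0n1):
    qpair = d.Qpair(nvme0, 16)
    buf = d.Buffer(4096)

    # mark LBA 0 as invalid; reading it shall fail with Unrecovered Read Error (02/81)
    nvme0n1.write_uncorrectable(qpair, 0, 1).waitdone()
    with pytest.warns(UserWarning, match="ERROR status: 02/81"):
        nvme0n1.read(qpair, buf, 0, 1).waitdone()

    # a normal write clears the condition, and the following read shall succeed
    nvme0n1.write(qpair, buf, 0, 1).waitdone()
    nvme0n1.read(qpair, buf, 0, 1).waitdone()
    qpair.delete()
```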

file: scripts/conformance/02_nvm/write_zeroes_test

function: scripts/conformance/02_nvm/write_zeroes_test.py::test_write_zeroes_large_lba

a write zeroes command whose LBA range exceeds the size of the namespace will be aborted with LBA Out of Range

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 274.

Steps

  1. read Namespace Capacity value
  2. issue a write zeroes command, slba = ncap-1, nlb = 1, it shall complete successfully
  3. issue a write zeroes command, slba = ncap, nlb = 1, it shall complete with error
  4. issue a write zeroes command, slba = ncap + 1, nlb = 1, it shall complete with error
  5. issue a write zeroes command, slba = ncap-1, nlb = 2, it shall complete with error
  6. issue a write zeroes command, slba = 0xffffffff00000000, nlb = 1, it shall complete with error

function: scripts/conformance/02_nvm/write_zeroes_test.py::test_write_zeroes_valid

verify valid write zeroes command

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 274.
  2. The Write Zeroes command is used to set a range of logical blocks to zero.

Steps

  1. prepare data buffer and IO queue
  2. write data and verify
  3. define a callback function for write zeroes
  4. issue a write zeroes command
  5. wait commands complete and verify data

function: scripts/conformance/02_nvm/write_zeroes_test.py::test_write_zeroes_invalid_nsid

write zeroes command with invalid nsid will be aborted with Invalid Namespace or Format

Reference

  1. NVM Express Revision 1.4a March 9, 2020, Page 247.

Steps

  1. issue a write zeroes command with namespace ID 0xff
  2. check if status is 0x000b

function: scripts/conformance/02_nvm/write_zeroes_test.py::test_write_zeroes_nlb

verify the Write Zeroes command with nlb derived from MDTS

Reference

  1. NVM Express Revision 1.4a March 9, 2020, Page 275.

Steps

  1. read mdts
  2. issue a write zeroes command with nlb larger than mdts
  3. check if status is 0x0000

function: scripts/conformance/02_nvm/write_zeroes_test.py::test_write_zeroes_invalid_nsid_lba

write zeroes command with invalid namespace and slba will be aborted with Invalid Namespace or Format

Reference

  1. NVM Express Revision 1.4a March 9, 2020, Page 275.

Steps

  1. read ncap and mdts
  2. issue a write zeroes, namespace = 0xff, slba = mdts
  3. check the error code

function: scripts/conformance/02_nvm/write_zeroes_test.py::test_write_zeroes_data_unit_write

Data Units Written field is not impacted by the Write Zeroes command

Reference

  1. 5.16.1.3 SMART / Health Information (Log Identifier 02h)

Steps

  1. skip if NVMe spec version is below 1.4
  2. get original Data Units Written
  3. send Write Zeroes commands
  4. check the Data Units Written has not changed
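
A sketch of the SMART check above (pynvme-style API assumed; Data Units Written is bytes 63:48 of the SMART / Health log, LID 02h, and `write_zeroes` on the namespace object is an assumption).

```python
import nvme as d  # assumption: pynvme-style driver module


def test_write_zeroes_data_unit_write(nvme0, nvme0n1):
    qpair = d.Qpair(nvme0, 16)
    smart = d.Buffer(512)

    def data_units_written():
        nvme0.getlogpage(0x02, smart, 512).waitdone()
        return smart.data(63, 48)   # SMART Data Units Written

    before = data_units_written()
    nvme0n1.write_zeroes(qpair, 0, 8).waitdone()
    assert data_units_written() == before   # Write Zeroes shall not be counted
    qpair.delete()
```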

folder: scripts/conformance/03_features/hmb

file: scripts/conformance/03_features/hmb/1_basic_test

function: scripts/conformance/03_features/hmb/1_basic_test.py::test_hmb_write_read

io tests with standard HMB configuration

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 218.

Steps

  1. enable hmb
  2. test different kinds of IO with hmb enabled

file: scripts/conformance/03_features/hmb/2_protocol_test

function: scripts/conformance/03_features/hmb/2_protocol_test.py::test_hmb_support

Host Memory Buffer Preferred Size shall be greater than or equal to the Host Memory Buffer Minimum Size

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 180:
  2. Host Memory Buffer Preferred Size (HMPRE): This field indicates the preferred size
  3. that the host is requested to allocate for the Host Memory Buffer feature in 4 KiB units.
  4. This value shall be greater than or equal to the Host Memory Buffer Minimum Size.

Steps

  1. read Host Memory Buffer Preferred Size and Host Memory Buffer Minimum Size value in identify
  2. check HMPRE shall be greater than or equal to HMMIN
  3. check if HMPRE is more than 64M
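
A sketch of the identify checks above (pynvme-style API assumed; HMPRE and HMMIN are bytes 275:272 and 279:276 of the Identify Controller data, in 4 KiB units).

```python
import logging
import nvme as d  # assumption: pynvme-style driver module


def test_hmb_support(nvme0):
    hmpre = nvme0.id_data(275, 272)   # Host Memory Buffer Preferred Size, 4KiB units
    hmmin = nvme0.id_data(279, 276)   # Host Memory Buffer Minimum Size, 4KiB units

    assert hmpre >= hmmin
    if hmpre * 4096 > 64 * 1024 * 1024:
        logging.info("HMPRE is larger than 64MiB: %d", hmpre)
```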

function: scripts/conformance/03_features/hmb/2_protocol_test.py::test_hmb_command_sequence

If hmb is enabled, then a set feature command to enable hmb shall fail with a status code of Command Sequence Error

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 213:
  2. If the host memory buffer is enabled, then a Set Features command to enable the host memory buffer (i.e.,
  3. the EHM bit (refer to Figure 291) set to ‘1’) shall fail with a status code of Command Sequence Error.

Steps

  1. set feature to enable hmb
  2. setting the EHM bit to 1 shall fail with a status code of Command Sequence Error when hmb is already enabled
  3. set feature to disable hmb
  4. a set features command to disable hmb shall succeed when hmb is already disabled
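
A sketch of the enable/enable-again sequence above, assuming a pynvme-style API: the HMB descriptor list is built in a host `Buffer` (BADD in bytes 7:0 and BSIZE in bytes 11:8 of each 16-byte entry) and passed through Set Features 0Dh; `Buffer.phys_addr`, byte indexing, and the cdw12-cdw15 keyword arguments are assumptions of that style, and a real test would size the buffer per HMMIN/HMPRE.

```python
import pytest
import nvme as d  # assumption: pynvme-style driver module


def test_hmb_command_sequence(nvme0):
    # one 1MiB HMB chunk and a descriptor list with a single entry
    hmb_chunk = d.Buffer(256 * 4096)
    hmb_list = d.Buffer(4096)
    addr, bsize = hmb_chunk.phys_addr, 256          # BSIZE in 4KiB units
    for i in range(8):                              # BADD, entry bytes 7:0
        hmb_list[i] = (addr >> (i * 8)) & 0xff
    for i in range(4):                              # BSIZE, entry bytes 11:8
        hmb_list[8 + i] = (bsize >> (i * 8)) & 0xff
    list_addr = hmb_list.phys_addr

    def enable_hmb():
        nvme0.setfeatures(0x0d, cdw11=1, cdw12=bsize,
                          cdw13=list_addr & 0xffffffff,
                          cdw14=list_addr >> 32, cdw15=1).waitdone()

    enable_hmb()
    # enabling again while already enabled: Command Sequence Error (00/0c)
    with pytest.warns(UserWarning, match="ERROR status: 00/0c"):
        enable_hmb()
    # disabling is accepted
    nvme0.setfeatures(0x0d, cdw11=0).waitdone()
```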

function: scripts/conformance/03_features/hmb/2_protocol_test.py::test_hmb_size_invalid

try to enable HMB with invalid parameters in setfeatures command

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer

Steps

  1. allocate host memory
  2. enable hmb
  3. invalid Host Memory Buffer Size, command should fail with error Invalid Field in Command
  4. enable hmb with correct parameters
  5. check hmb is enabled
  6. disable hmb

function: scripts/conformance/03_features/hmb/2_protocol_test.py::test_hmb_entry_count_invalid

try to enable HMB with invalid parameters in setfeatures command

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer

Steps

  1. allocate host memory
  2. enable hmb
  3. invalid Host Memory Descriptor List Entry Count, command should fail with error Invalid Field in Command
  4. enable hmb with correct parameters
  5. check hmb is enabled
  6. disable hmb

function: scripts/conformance/03_features/hmb/2_protocol_test.py::test_hmb_format_sanitize

Run format command and sanitize command without error when the host enables hmb.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb
  2. run ioworker
  3. get current LBA format id
  4. issue a format command
  5. run ioworker
  6. skip if sanitize is not supported
  7. issue block erase sanitize
  8. run ioworker tests

function: scripts/conformance/03_features/hmb/2_protocol_test.py::test_hmb_enable_disable_with_ioworker

Enable and disable hmb repeatedly within ioworker.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable and disable hmb stress within ioworker.

function: scripts/conformance/03_features/hmb/2_protocol_test.py::test_hmb_data_consistency

Verify data consistency between enable and disable hmb

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb
  2. run ioworker
  3. read data
  4. disable hmb
  5. verify data consistency after disable hmb
  6. run ioworker
  7. read data
  8. enable hmb
  9. verify data consistency after enable hmb

file: scripts/conformance/03_features/hmb/3_size_test

function: scripts/conformance/03_features/hmb/3_size_test.py::test_hmb_single_buffer

Verify single buffer when the host enables hmb.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb with single buffer
  2. run ioworker

function: scripts/conformance/03_features/hmb/3_size_test.py::test_hmb_buffer_size_large

Verify large buffer size when the host enables hmb.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb with single buffer
  2. run ioworker

function: scripts/conformance/03_features/hmb/3_size_test.py::test_hmb_buffer_size_small

Verify small buffer chunk size when the host enables hmb.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb with single buffer
  2. run ioworker

function: scripts/conformance/03_features/hmb/3_size_test.py::test_hmb_buffer_size_tiny

Verify tiny buffer chunk size when the host enables hmb.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb with single buffer
  2. run ioworker

file: scripts/conformance/03_features/hmb/4_mr_test

function: scripts/conformance/03_features/hmb/4_mr_test.py::test_hmb_mr_reset

With the MR bit set to 1 and hsize/address the same as before, the host provides the same buffer size and address after returning to D0 from D3.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb and test ioworker
  2. disable hmb before reset
  3. not send any event, or
  4. nvme reset, or
  5. subsystem reset, or
  6. enter and exit D3 hot
  7. enable hmb again with the same buffer
  8. run ioworker after enable hmb again

function: scripts/conformance/03_features/hmb/4_mr_test.py::test_hmb_mr_d3_without_disable

Run ioworker without error when the host does not enable hmb after returning to D0 from D3.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb and test ioworker
  2. disable hmb before reset
  3. enter and exit D3hot
  4. disable hmb
  5. run ioworker
  6. enable hmb again
  7. run ioworker

function: scripts/conformance/03_features/hmb/4_mr_test.py::test_hmb_mr_with_wrong_buffer

With the MR bit set to 1 and hsize/address set to 0, the set features command will be aborted with a status of Invalid Field in Command after returning to D0 from D3.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb
  2. run ioworker
  3. disable hmb before reset
  4. enter and exit D3hot
  5. enable hmb
  6. run ioworker

function: scripts/conformance/03_features/hmb/4_mr_test.py::test_hmb_mr_with_different_buffer

the MR bit is set to 1, but hsize/address differ from the earlier values

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb and test ioworker
  2. disable hmb before reset
  3. enter and exit D3hot
  4. enable hmb with different field value
  5. run ioworker

file: scripts/conformance/03_features/hmb/5_memory_test

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_address_non_align

When Host Memory Descriptor List address is not 16 byte aligned, HMB can be enabled.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb with non-align buffer
  2. run ioworker

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_memory

HMB buffer will not affect other memory.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. allocate host memory buf1 before enabling hmb, and fill it with 0xaa
  2. enable hmb
  3. allocate host memory buf2 after enabling hmb, and fill it with 0xbb
  4. run ioworker after enable hmb
  5. check whether the data in buffer has changed

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_not_equal

configure unequal buffer sizes in the HMB descriptor entries.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb with unequal buffer sizes
  2. run ioworker

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_out_of_order

configure the HMB descriptor entries with out-of-order buffers.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb with out-of-order buffers
  2. run ioworker

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_bit_flip_in_buffer_list

flip a bit in the host memory buffer descriptor list; the controller shall handle the error.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb
  2. run ioworker
  3. a bit flip happens in the hmb buffer descriptor list
  4. run ioworker

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_bit_flip_data_consistency

Flip a bit in a host memory buffer entry; the controller shall handle the error

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb
  2. run ioworker to load data into HMB
  3. a bit flip happens in the HMB buffer
  4. run ioworker and verify data

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_change_all_buffer_dword

Bit flips happen in every dword.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb
  2. run ioworker
  3. bit flips happen in every dword
  4. run ioworker and verify

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_change_all_buffer_bytes

Bit flips happen in every byte.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb
  2. run ioworker
  3. bit flips happen in every byte
  4. run ioworker and verify

function: scripts/conformance/03_features/hmb/5_memory_test.py::test_hmb_change_all_buffer_interval

Bit flips happen at different intervals.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. 5.21.1.13 Host Memory Buffer (Feature Identifier 0Dh), (Optional)

Steps

  1. enable hmb
  2. run ioworker
  3. bit flips happen at intervals of 16K, 128K, and 1M
  4. run ioworker and verify

folder: scripts/conformance/03_features

file: scripts/conformance/03_features/boot_partition_test

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_write

write an image to boot partition

Reference

  1. NVM Express Revision 2.0

Steps

  1. find the boot image size
  2. prepare the buffer chunks of the image
  3. download the image in multiple pieces
  4. commit the download image

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_load

load the image in the boot partition with NVMe registers

Reference

  1. NVM Express Revision 2.0

Steps

  1. find the boot image size
  2. set registers to load and verify boot image
  3. print read speed

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_load_beyond_end

If the host attempts to read beyond the end of a Boot Partition, the controller shall not transfer data and report an error in the BPINFO.BRS field.

Reference

  1. NVM Express Revision 2.0

Steps

  1. find the boot image size
  2. set registers to load image beyond the end
  3. check if the BRS is error completed
  4. load correct image offset and check the BRS

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_verify

verify the image in the boot partition with the getlogpage command

Reference

  1. NVM Express Revision 2.0

Steps

  1. skip if NVMe spec version is below 2.0
  2. check boot partition log page is valid
  3. check boot image size in logpage
  4. find the boot image size
  5. get the image using getlogpage command and verify the data
  6. print read speed

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_load_power_cycle

power cycle while loading the image

Reference

  1. NVM Express Revision 2.0

Steps

  1. load a chunk
  2. load again
  3. dirty power cycle while loading boot image
  4. verify image after power cycle

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_load_change_address

set address register after loading image started

Reference

  1. NVM Express Revision 2.0

Steps

  1. load image
  2. change the buffer address
  3. check load result

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_load_write_dword

set address register in dword writing

Reference

  1. NVM Express Revision 2.0

Steps

  1. load image
  2. check load result

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_load_offset

load the boot partition image into a non-4K-aligned address

Reference

  1. NVM Express Revision 2.0

Steps

  1. load a chunk into a buffer with offset
  2. check load result

function: scripts/conformance/03_features/boot_partition_test.py::test_boot_partition_power_cycle

power cycle during boot partition image download

Reference

  1. NVM Express Revision 2.0

Steps

  1. download the full image in bp0
  2. find the boot image size
  3. download the new image in multiple pieces
  4. commit the image
  5. dirty power cycle while the boot partition image commit is in progress
  6. load image with nvme registers
  7. verify data with old and new image
  8. Firmware Commit with Commit Action 110b or 111b shall guarantee atomic operation

file: scripts/conformance/03_features/power_management_test

function: scripts/conformance/03_features/power_management_test.py::test_power_state_transition

send read command during PS transition

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 208.
  2. 5.21.1.2 Power Management (Feature Identifier 02h): This Feature allows the host to configure the power state. The attributes are specified in Command Dword 11 (refer to Figure 274).

Steps

  1. disable autonomous power state transitions
  2. write data to LBA 0x5a
  3. read after power state change with delay 1us-1ms

function: scripts/conformance/03_features/power_management_test.py::test_power_state_ps3_simple

sending IO commands in the PS3 state

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 208.
  2. 8.4.1 Non-Operational Power States: The controller shall autonomously transition back to the most recent operational
  3. power state when an I/O Submission Queue Tail Doorbell is written.

Steps

  1. disable autonomous power state transitions
  2. start with PS0 and sleep 20s
  3. configure into PS3 and sleep 30s
  4. send identify and read command

function: scripts/conformance/03_features/power_management_test.py::test_power_state_async_with_io

transition to PS3 and PS4 with ioworker

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 319.
  2. 8.4.1 Non-Operational Power States: When in a non-operational power state, regardless of whether autonomous power state transitions are enabled, the controller shall autonomously transition back to the most recent operational power state when an I/O Submission Queue Tail Doorbell is written.

Steps

  1. disable autonomous power state transitions
  2. set NVMe device to operational power state
  3. fill 64GB data for verify
  4. set power state while reading the NVMe device, all commands shall complete successfully

function: scripts/conformance/03_features/power_management_test.py::test_power_state_operational_async_with_io

transition to different operational power state with ioworker

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. disable autonomous power state transitions
  2. set NVMe device to power state 0
  3. fill 64GB data for verify
  4. set power state while reading the NVMe device, all commands shall complete successfully

function: scripts/conformance/03_features/power_management_test.py::test_power_state_npss

set feature to each power state supported

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 317.
  2. 8.4 Power Management: A controller shall support at least one power state and may optionally support up to a total of 32 power states.

Steps

  1. get the number of power states
  2. disable autonomous power state transitions
  3. set feature to each supported power state, and the commands shall complete successfully
  4. set feature to an invalid power state, and the command shall complete with error

function: scripts/conformance/03_features/power_management_test.py::test_power_state_maximum_power

compare the maximum power for each power state

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 317.
  2. 8.4 Power Management: Power states are contiguously numbered starting with zero such that each subsequent power state consumes less than or equal to the maximum power consumed in the previous state.

Steps

  1. get the number of power states
  2. get the maximum power for each power state
  3. compare the maximum power for each power state
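
The per-state maximum power compared in steps 2-3 comes from the power state descriptors in the Identify Controller data: 32 bytes per descriptor starting at byte 2048, with Maximum Power (MP) in bytes 1:0 and the MXPS scale bit in byte 3. A small parsing sketch over a raw identify buffer (pure Python; field offsets follow the spec, the helper itself is illustrative):

    def max_power_per_state(id_ctrl: bytes) -> list:
        """Maximum Power of every supported power state, in watts.

        id_ctrl: raw Identify Controller data (CNS 01h). Power state descriptors
        start at byte 2048, 32 bytes each; MP is bytes 1:0, MXPS is byte 3 bit 0.
        """
        npss = id_ctrl[263]                    # Number of Power States Supported, 0-based
        powers = []
        for ps in range(npss + 1):
            base = 2048 + 32 * ps
            mp = int.from_bytes(id_ctrl[base:base + 2], 'little')
            mxps = id_ctrl[base + 3] & 0x1     # 0: 0.01 W units, 1: 0.0001 W units
            powers.append(mp * (0.01 if mxps == 0 else 0.0001))
        return powers                          # expected to be non-increasing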

function: scripts/conformance/03_features/power_management_test.py::test_power_state_operational_ps_with_ioworker

run ioworker during operational power state

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. enable autonomous power state transitions
  2. run ioworker in operational power state, all IO shall complete successfully

function: scripts/conformance/03_features/power_management_test.py::test_power_state_nonoperational_ps_with_io

send io commands in non-operational power state

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 319.
  2. 8.4.1 Non-Operational Power States: The controller shall autonomously transition back to the most recent operational power state when an I/O Submission Queue Tail Doorbell is written.

Steps

  1. disable autonomous power state transitions
  2. send io commands in each non-operational power state
  3. check the controller puts the power state back to the most recent operational power state

function: scripts/conformance/03_features/power_management_test.py::test_power_state_nonoperational_ps_with_mixio

send mix io commands in non-operational power state

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 319.
  2. 8.4.1 Non-Operational Power States: The controller shall autonomously transition back to the most recent operational power state when an I/O Submission Queue Tail Doorbell is written.

Steps

  1. disable autonomous power state transitions
  2. send mix io commands in each non-operational power state
  3. check the controller puts the power state back to the most recent operational power state

function: scripts/conformance/03_features/power_management_test.py::test_power_state_with_admin_cmd

send admin command in different power state

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 319.
  2. 8.4.1 Non-Operational Power States: In a non-operational power state, memory-mapped I/O (MMIO) accesses, configuration register accesses and Admin Queue commands are serviced.

Steps

  1. disable autonomous power state transitions
  2. send admin commands in each power state, all commands shall complete successfully
  3. check there is no change in power state

function: scripts/conformance/03_features/power_management_test.py::test_power_state_temperature_aer

trigger asynchronous events in different power state

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 319.
  2. 8.4.1 Non-Operational Power States: In a non-operational power state, memory-mapped I/O (MMIO) accesses, configuration register accesses and Admin Queue commands are serviced.

Steps

  1. set feature enable all asynchronous events
  2. get smart log to show disk temperature
  3. trigger asynchronous events in different power state

function: scripts/conformance/03_features/power_management_test.py::test_power_state_nonoperational_ps_with_dst

send device self-test command in non-operational power state

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 319.
  2. 8.4.1 Non-Operational Power States: Processing a command submitted to the Admin Submission Queue and processing background operations, if any, initiated by that command

Steps

  1. check if the device meets the test conditions
  2. enable Non-Operational Power State Permissive Mode
  3. disable autonomous power state transitions
  4. send device self-test command in each non-operational power state, check real power state

function: scripts/conformance/03_features/power_management_test.py::test_power_state_short_dst_duration

send set feature commands while a device self-test is running

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 319.
  2. If a controller has an operation in process (e.g., device self-test operation) that would cause controller power to exceed that advertised for the proposed non-operational power state, then the controller should not autonomously transition to that state.

Steps

  1. check if the device meets the test conditions
  2. start a short DST, and record start time
  3. send set feature commands while the device self-test is running
  4. check if the completion time is less than 2 minutes

function: scripts/conformance/03_features/power_management_test.py::test_power_state_different_ps_with_write_register

modify the NVMe register in different power state

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 319.
  2. 8.4.1 Non-Operational Power States: In a non-operational power state, memory-mapped I/O (MMIO) accesses, configuration register accesses and Admin Queue commands are serviced.

Steps

  1. disable autonomous power state transitions
  2. modify the NVMe register in each power state and power state should not change

function: scripts/conformance/03_features/power_management_test.py::test_power_state_different_ps_with_write_pcie_register

modify the PCIe register in different power state

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 319.
  2. 8.4.1 Non-Operational Power States: In a non-operational power state, memory-mapped I/O (MMIO) accesses, configuration register accesses and Admin Queue commands are serviced.

Steps

  1. disable autonomous power state transitions
  2. modify the PCIe register in each non-operational power state and power state should not change

function: scripts/conformance/03_features/power_management_test.py::test_power_state_autonomous_ps_transitions

idle and check power states

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 319.
  2. 8.4.2 Autonomous Power State Transitions: Autonomous power state transitions provide a mechanism for the host to configure the controller to automatically transition between power states on certain conditions without software intervention.
  3. If NOPPME is set to ‘1’, then the controller may temporarily exceed the power limits of any non-operational power state, up to the limits of the last operational power state
  4. If NOPPME is cleared to ‘0’, then the controller shall not exceed the limits of any non-operational state while running controller initiated background operations in that state (i.e., Non-Operational Power State Permissive Mode is disabled).

Steps

  1. try to disable NOPPME
  2. enable autonomous power state transitions
  3. idle 10s, power state shall change to non-operational power state

function: scripts/conformance/03_features/power_management_test.py::test_power_state_invalid_transition

the controller should abort the command with a status of Invalid Field when an operational power state is specified

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 320.
  2. 8.4.2 Autonomous Power State Transitions: If an operational power state is specified, then the controller should abort the command with a status of Invalid Field in Command.

Steps

  1. skip if NVMe spec version is below 1.4
  2. skip if APST is not supported
  3. enable autonomous power state transitions

function: scripts/conformance/03_features/power_management_test.py::test_power_state_max_power_pcie

verify PS0 max power less than PCI Express slot power limit control value

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 319.
  2. 8.4 Power Management: Hosts that do not dynamically manage power should set the power state to the lowest numbered state that satisfies the PCI Express slot power limit control value.

Steps

  1. get PCI Express slot power limit control value
  2. get PS0 max power
  3. compare the PCI Express slot power limit with PS0 max power

function: scripts/conformance/03_features/power_management_test.py::test_power_state_operational_performance

compare performance in each operational ps

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 318:
  2. 8.4 Power Management: Relative performance values provide an ordering of performance characteristics between power states. Relative performance values may repeat, may be skipped, and may be assigned in any order.

Steps

  1. disable autonomous power state transitions
  2. get performance in each operational ps
  3. compare the performance in each operational ps

function: scripts/conformance/03_features/power_management_test.py::test_power_state_thermal_throttle_performance

check performance with different thermal throttle configuration

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 321.
  2. 8.4.5 Host Controlled Thermal Management
  3. The temperature at which the controller stops being in a lower power active power state or performing vendor specific thermal management actions because of this feature is vendor specific.

Steps

  1. skip this test if the feature is not supported by the device
  2. fix on PS0, and disable autonomous power state transitions
  3. get current temperature from SMART data
  4. skip the test if current temperature is out of scope host can control
  5. get the performance in normal case, no thermal throttle
  6. idle to cool down, and get current temperature
  7. make current temperature higher than TMT1 for the light throttle
  8. light throttle performance should be lower than usual performance
  9. idle to cool down, and get current temperature
  10. make current temperature higher than TMT2 for the heavy throttle
  11. heavy throttle performance should be lower than light throttle performance
  12. idle to cool down, and get current temperature
  13. make current temperature higher than TMT1 and lower than TMT2, but the heavy throttle continues
  14. light throttle performance should be higher than heavy throttle performance
  15. restore the TMT setting
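
Steps 3, 6, 9 and 12 read the Composite Temperature from the SMART / Health Information log page (LID 02h), where it is reported in Kelvin at bytes 2:1. A small conversion helper (pure Python; fetching the log itself would go through the suite's get-log-page call):

    def composite_temperature_celsius(smart_log: bytes) -> float:
        # SMART / Health Information log (LID 02h): Composite Temperature at bytes 2:1, in Kelvin
        kelvin = int.from_bytes(smart_log[1:3], 'little')
        return kelvin - 273.15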

function: scripts/conformance/03_features/power_management_test.py::test_power_state_with_hot_reset

change power state before hot reset

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 319.
  2. 8.4 Power Management

Steps

  1. change power state before hot reset
  2. check power state is ps0 after hot reset

function: scripts/conformance/03_features/power_management_test.py::test_power_state_with_function_level_reset

change power state before function level reset

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 319.
  2. 8.4 Power Management

Steps

  1. change power state before function level reset
  2. check power state is ps0 after function level reset

function: scripts/conformance/03_features/power_management_test.py::test_power_state_idle_transition_ps

configure the settings for autonomous power state transitions

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 216:
  2. 5.21.1.12 Autonomous Power State Transition (Feature Identifier 0Ch): Each entry in the Autonomous Power State Transition data structure is defined in Figure 289.

Steps

  1. enable APST
  2. setup 3-sec transition time in APST table
  3. idle for 2 sec and check power state not change
  4. idle for 4 sec and check power state is changed
  5. setup 5-sec transition time in APST table
  6. idle for 4 sec and check power state not change
  7. idle for 6 sec and check power state is changed
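
The APST table written in steps 2 and 5 is the 256-byte data structure of Set Features FID 0Ch: 32 eight-byte entries, indexed by the current power state, with the Idle Transition Power State in bits 7:3 and the Idle Time Prior to Transition (in milliseconds) in bits 31:8 (Figure 289). A sketch of the encoding (pure Python; how the buffer is handed to Set Features, and the PS4 target below, are assumptions for illustration):

    def apst_entry(itps: int, itpt_ms: int) -> bytes:
        """One Autonomous Power State Transition entry (Figure 289).

        bits 2:0   reserved
        bits 7:3   Idle Transition Power State (ITPS)
        bits 31:8  Idle Time Prior to Transition (ITPT), in milliseconds
        bits 63:32 reserved
        """
        dword0 = ((itps & 0x1f) << 3) | ((itpt_ms & 0xffffff) << 8)
        return dword0.to_bytes(8, 'little')

    def apst_table(entries: list) -> bytes:
        # 32 entries * 8 bytes = 256-byte data buffer for Set Features FID 0Ch
        return b''.join(entries).ljust(256, b'\x00')

    # entry 0: from PS0, transition after 3 seconds of idle (PS4 is only an example target)
    table = apst_table([apst_entry(4, 3000)])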

function: scripts/conformance/03_features/power_management_test.py::test_power_state_disable_special_ps_apst

disable the autonomous power state transition feature for special power state

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 216.
  2. 5.21.1.12 Autonomous Power State Transition (Feature Identifier 0Ch), (Optional)

Steps

  1. enable autonomous power state transitions
  2. disable the autonomous power state transition feature for special power state
  3. the controller shall not autonomously transition to that power state

function: scripts/conformance/03_features/power_management_test.py::test_power_state_break_transition

IO command break autonomous power state transition

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 319.
  2. 8.4.2 Autonomous Power State Transitions

Steps

  1. enable autonomous power state transitions
  2. send io command before satisfying idle time
  3. idle time shall be recalculated

function: scripts/conformance/03_features/power_management_test.py::test_power_state_change_idle_time

change apst idle time before satisfying idle time

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 319.
  2. 8.4.2 Autonomous Power State Transitions

Steps

  1. enable autonomous power state transitions
  2. change apst idle time before satisfying idle time
  3. idle time should be changed

function: scripts/conformance/03_features/power_management_test.py::test_power_state_host_power

change nvme power state according to the power state of the host

Reference

  1. NVM Express Revision 1.4a March 9, 2020.
  2. https://docs.microsoft.com/en-gb/windows-hardware/design/component-guidelines/power-management-for-storage-hardware-devices-nvme

Steps

  1. get host power state
  2. get identify power state transition latency
  3. fill data for verify
  4. randomly enter different host power states

function: scripts/conformance/03_features/power_management_test.py::test_power_state_latency

check the latency of ps transition between each other

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 318:
  2. 8.4 Power Management: The Entry Latency (ENTLAT) field in the power management descriptor indicates the maximum amount of time in microseconds to enter that power state and the Exit Latency (EXLAT) field indicates the maximum amount of time in microseconds to exit that state.

Steps

  1. disable autonomous power state transitions
  2. check the latency of ps switching between each other

function: scripts/conformance/03_features/power_management_test.py::test_power_state_apst_saveable

verify Autonomous Power State Transition feature saveable

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 215:
  2. 5.21.1.12 Autonomous Power State Transition (Feature Identifier 0Ch), (Optional)

Steps

  1. skip the test if Autonomous Power State Transition is not supported
  2. skip the test if Autonomous Power State Transition is not saveable
  3. disable Autonomous Power State Transition with sv enabled
  4. issue nvme controller reset
  5. check Autonomous Power State Transition is disabled
  6. recover to original configuration

function: scripts/conformance/03_features/power_management_test.py::test_power_state_idle_with_low_speed

low power in different pcie speed

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 215:

Steps

  1. skip test if apst is not supported
  2. enable autonomous power state transitions
  3. set pcie speed
  4. check actual link speed
  5. check apst is enabled
  6. enable ASPM L1.2
  7. fix on PS0
  8. write LBA0
  9. send read IO with idle
  10. restore pcie speed

function: scripts/conformance/03_features/power_management_test.py::test_power_state_format

format in different power state

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. set PS and do format
  2. check the PS after format finishes

file: scripts/conformance/03_features/reset_test

function: scripts/conformance/03_features/reset_test.py::test_reset_queue_level_reset

verify queue level reset with outstanding io command

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 293.
  2. 7.3.3 Queue Level

Steps

  1. format the namespace to make all data zero in the namespace
  2. create Submission/Completion Queue
  3. send 100 write commands
  4. issue queue level reset while io are outstanding
  5. send 100 read commands
  6. data read shall be the same as the data written, otherwise the data read shall be all zero
  7. delete SQ and CQ

function: scripts/conformance/03_features/reset_test.py::test_reset_controller_reset_nvme_registers

check nvme register values after controller reset

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 292.
  2. 7.3.2 Controller Level Reset

Steps

  1. get the initial value of CC register
  2. modify CC register value
  3. issue controller reset
  4. check if CC register is reset to original
  5. set cc.en = 0
  6. modify AQA register value
  7. set cc.en = 1 to reset NVMe registers
  8. check AQA register has been modified
  9. check AQA register has been reset to original value
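
CC sits at controller register offset 14h and AQA at offset 24h. A minimal sketch of the read-modify-reset-check flow, assuming pynvme-style 32-bit MMIO access through nvme0[offset] and a controller reset method (both are assumptions about the API, not the suite's exact code):

    CC  = 0x14   # Controller Configuration
    AQA = 0x24   # Admin Queue Attributes

    def registers_restored_after_reset(nvme0):
        cc_orig, aqa_orig = nvme0[CC], nvme0[AQA]   # assumed nvme0[offset] MMIO reads
        nvme0.reset()                               # controller level reset and re-init
        # after re-initialization both registers should hold their original values
        assert nvme0[CC] == cc_orig and nvme0[AQA] == aqa_orig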

function: scripts/conformance/03_features/reset_test.py::test_reset_controller_reset_d3hot

issue controller reset after exiting D3hot

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 292.
  2. 7.3.2 Controller Level Reset

Steps

  1. enable D3hot and sleep 3 seconds
  2. exit D3hot
  3. issue controller reset
  4. check controller status is normal

function: scripts/conformance/03_features/reset_test.py::test_reset_controller_reset_aspm

enter ASPM L1 and issue controller reset

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 292.
  2. 7.3.2 Controller Level Reset

Steps

  1. issue controller reset
  2. check controller status is normal

function: scripts/conformance/03_features/reset_test.py::test_reset_controller_reset_with_outstanding_io

verify controller reset with outstanding io command

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 292.
  2. 7.3.2 Controller Level Reset

Steps

  1. format the namespace to make all data zero in the namespace
  2. send 100 write commands in one shot
  3. issue controller reset while io is active
  4. send 100 read commands
  5. data read shall be the same as the data written, otherwise the data read shall be all zero

function: scripts/conformance/03_features/reset_test.py::test_reset_controller_reset_ioworker

verify controller reset with ioworker

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 292.
  2. 7.3.2 Controller Level Reset

Steps

  1. issue controller reset while ioworker is running
  2. check controller status is normal

function: scripts/conformance/03_features/reset_test.py::test_reset_controller_with_existed_adminq

reset controller with adminq registers unchanged

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. get the adminq registers
  2. reset controller with the existing admin queue and registers
  3. send 10000 admin commands to check the function of the adminq
  4. check if the admin queue registers are the same as before the reset

function: scripts/conformance/03_features/reset_test.py::test_reset_flr_d3hot

issue function level reset after exiting D3hot

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 292.
  2. 7.3.2 Controller Level Reset

Steps

  1. enable D3hot and sleep 3 seconds
  2. issue function level reset
  3. check controller status is normal

function: scripts/conformance/03_features/reset_test.py::test_reset_flr_aspm

enter ASPM L1 and issue function level reset

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 292.
  2. 7.3.2 Controller Level Reset

Steps

  1. enable ASPM L1 and sleep 3 seconds
  2. issue function level reset
  3. check controller status is normal

function: scripts/conformance/03_features/reset_test.py::test_reset_flr_with_ioworker

verify function level reset with ioworker

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. start ioworker and then call FLR reset

function: scripts/conformance/03_features/reset_test.py::test_reset_flr_with_outstanding_io

verify function level reset with io command

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 292

Steps

  1. create CQ and SQ
  2. send commands to the SQ in one shot
  3. FLR reset with active IO
  4. read after reset with outstanding writes
  5. data verify

function: scripts/conformance/03_features/reset_test.py::test_reset_pci_hot_reset_d3hot

issue hot reset after exiting D3hot

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 292.
  2. 7.3.2 Controller Level Reset

Steps

  1. enable D3hot and sleep 3 seconds
  2. issue hot reset
  3. check controller status is normal

function: scripts/conformance/03_features/reset_test.py::test_reset_pci_hot_reset_aspm

enter ASPM L1 and issue hot reset

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 292.
  2. 7.3.2 Controller Level Reset

Steps

  1. issue hot reset
  2. check controller status is normal

function: scripts/conformance/03_features/reset_test.py::test_reset_pci_hot_reset_with_ioworker

verify hot reset with ioworker

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 292.
  2. 7.3.2 Controller Level Reset

Steps

  1. issue hot reset while ioworker is running
  2. check controller status is normal

function: scripts/conformance/03_features/reset_test.py::test_reset_pci_hot_reset_with_outstanding_io

Verify hot reset with io command

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 292.

Steps

  1. create CQ and SQ
  2. send some commands to the SQ in one shot
  3. reset with active IO
  4. read after reset with outstanding writes
  5. data verify

function: scripts/conformance/03_features/reset_test.py::test_reset_subsystem_reset_d3hot

verify subsystem reset and D3hot

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. set PCIe power state to D3hot before Subsystem Reset
  2. check drive status after reset

function: scripts/conformance/03_features/reset_test.py::test_reset_subsystem_reset_aspm

verify subsystem reset and ASPM L1

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. check status after reset

function: scripts/conformance/03_features/reset_test.py::test_reset_subsystem_reset_with_ioworker

Verify subsystem reset with ioworker

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. issue subsystem reset while ioworker is running
  2. check drive status after reset

function: scripts/conformance/03_features/reset_test.py::test_reset_subsystem_reset_with_outstanding_io

Verify subsystem reset and io command

Reference

  1. NVM Express Revision 1.4a March 9, 2020, Page 292
  2. When an NVM Subsystem Reset occurs, the entire NVM subsystem is reset.

Steps

  1. create CQ and SQ
  2. send some commands at one shot
  3. reset with active IO
  4. read after reset with outstanding writes
  5. data verify

function: scripts/conformance/03_features/reset_test.py::test_reset_timing

get the time of nvme init process

Reference

  1. NVM Express Revision 1.4a

Steps

  1. define the nvme init process
  2. reset controller with user
  3. wait csts.rdy = 1
  4. send first identify command and get the latency
  5. init all namespace and queue
  6. send first read IO command and get latency
  7. free resources

file: scripts/conformance/03_features/write_protect_test

function: scripts/conformance/03_features/write_protect_test.py::test_write_protect

check write protect function

Reference

  1. NVM Express Revision 1.4c
  2. 5.21.1.29 Namespace Write Protection Config

Steps

  1. check if write protect is supported

folder: scripts/conformance/04_registers

file: scripts/conformance/04_registers/controller_test

function: scripts/conformance/04_registers/controller_test.py::test_controller_cap

read Controller Capabilities register

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 43.
  2. 3.1.1 Offset 0h: CAP – Controller Capabilities

Steps

  1. read Controller Capabilities
  2. read Memory Page Size Maximum and Memory Page Size Minimum
  3. check mpsmax is greater than mpsmin
  4. read Controller Configuration Memory Page Size
  5. check cc.mps is not smaller than mpsmin and not larger than mpsmax
  6. check the controller supports the NVM command set

function: scripts/conformance/04_registers/controller_test.py::test_controller_crto

Controller Ready Timeouts

Reference

  1. NVM Express Revision 2.0c
  2. 3.1.3.21 Offset 68h: CRTO

Steps

  1. skip if NVMe spec version is below 2.0
  2. check CRTO related registers
  3. Attempt to write to the read-only register and assert no change
  4. check CRWMT
  5. Check CRIMT

function: scripts/conformance/04_registers/controller_test.py::test_controller_version

read Version register

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 45.
  2. 3.1.2 Offset 8h: VS – Version

Steps

  1. read Version register
  2. check the major version

function: scripts/conformance/04_registers/controller_test.py::test_controller_cc

read Controller Configuration register

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 47.
  2. 3.1.5 Offset 14h: CC – Controller Configuration

Steps

  1. read Controller Configuration
  2. check I/O Completion Queue Entry Size is 16 bytes, I/O Submission Queue Entry Size is 64 bytes

function: scripts/conformance/04_registers/controller_test.py::test_controller_register_reserved

read Reserved field in Controller register

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 47.
  2. 3.1.5 Offset 14h: CC – Controller Configuration

Steps

  1. check Reserved field in Controller register is “0”
  2. write “1234” to the reserved field
  3. check Reserved field in Controller register is “0”

function: scripts/conformance/04_registers/controller_test.py::test_controller_csts

read Controller Status register

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 49.
  2. 3.1.6 Offset 1Ch: CSTS – Controller Status

Steps

  1. read Controller Status
  2. check csts.rdy is 1

function: scripts/conformance/04_registers/controller_test.py::test_controller_cap_to

verify CAP.TO is the worst-case time for CSTS.RDY transitions

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 49.

Steps

  1. read Controller Capabilities Timeout value
  2. change cc.en from ‘1’ to ‘0’
  3. wait csts.rdy change from ‘1’ to ‘0’, and check the duration time
  4. change cc.en from ‘0’ to ‘1’
  5. wait csts.rdy change from ‘0’ to ‘1’, and check the duration time
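
CAP.TO (CAP bits 31:24, register offset 0h) gives the worst-case CSTS.RDY transition time in 500 ms units; CSTS is at offset 1Ch with RDY in bit 0. A polling sketch, again assuming pynvme-style nvme0[offset] register reads (an assumption about the API):

    import time

    def wait_csts_rdy(nvme0, expected: int):
        """Wait for CSTS.RDY to become `expected`, bounded by the CAP.TO worst case."""
        to_units = (nvme0[0x0] >> 24) & 0xff          # CAP.TO, in 500 ms units
        deadline = time.time() + to_units * 0.5
        while (nvme0[0x1c] & 0x1) != expected:        # CSTS offset 1Ch, bit 0 = RDY
            assert time.time() < deadline, "CSTS.RDY transition exceeded CAP.TO"
            time.sleep(0.01)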

function: scripts/conformance/04_registers/controller_test.py::test_controller_cap_mqes

verify create IO CQ/SQ will fail when qsize is greater than mqes

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 45.
  2. Offset 0h: CAP – Controller Capabilities

Steps

  1. read Maximum Queue Entries Supported
  2. check controller supports at least 2 entries
  3. check create IO CQ will fail when qsize is greater than mqes.
  4. check create IO SQ will fail when qsize is greater than mqes.

function: scripts/conformance/04_registers/controller_test.py::test_controller_ams

read Arbitration Mechanism Supported and Selected

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 44.
  2. Offset 0h: CAP – Controller Capabilities

Steps

  1. Read Arbitration Mechanism Supported
  2. Read Arbitration Mechanism Selected

function: scripts/conformance/04_registers/controller_test.py::test_controller_intms_and_intmc

read Interrupt Mask Set and Interrupt Mask Clear

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 46.
  2. 3.1.3 Offset Ch: INTMS – Interrupt Mask Set
  3. 3.1.4 Offset 10h: INTMC – Interrupt Mask Clear

Steps

  1. read Interrupt Mask Set and Interrupt Mask Clear
  2. write Interrupt Mask Set “0”
  3. write Interrupt Mask Clear “0”
  4. check intms and intmc do not change value

function: scripts/conformance/04_registers/controller_test.py::test_controller_cc_iocqes

verify I/O Completion Queue Entry Size is valid

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 47.
  2. 3.1.5 Offset 14h: CC – Controller Configuration

Steps

  1. read I/O Completion Queue Entry Size
  2. read identify Completion Queue Entry Size
  3. check identify Completion Queue Entry Size is valid
  4. check I/O Completion Queue Entry Size is valid

function: scripts/conformance/04_registers/controller_test.py::test_controller_cc_iosqes

verify I/O Submission Queue Entry Size is valid

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 47.
  2. 3.1.5 Offset 14h: CC – Controller Configuration

Steps

  1. read I/O Submission Queue Entry Size
  2. read identify Submission Queue Entry Size
  3. check identify Submission Queue Entry Size is valid
  4. check I/O Submission Queue Entry Size is valid

function: scripts/conformance/04_registers/controller_test.py::test_controller_cc_en

change Controller Configuration Enable

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 49.
  2. 3.1.5 Offset 14h: CC – Controller Configuration

Steps

  1. enable cc.en
  2. check csts.rdy is “1”
  3. issue a read command
  4. check read successfully
  5. change cc.en from ‘1’ to ‘0’
  6. wait csts.rdy change from ‘1’ to ‘0’
  7. check fail to init admin queue
  8. change cc.en from ‘0’ to ‘1’
  9. wait csts.rdy change from ‘0’ to ‘1’

function: scripts/conformance/04_registers/controller_test.py::test_controller_cc_css

read Controller Configuration I/O Command Set Selected

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 48.
  2. 3.1.5 Offset 14h: CC – Controller Configuration

Steps

  1. read Controller Configuration I/O Command Set Selected

function: scripts/conformance/04_registers/controller_test.py::test_controller_mdts

issue io command with invalid number of logical blocks

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 173.
  2. 5.15.2.2 Identify Controller data structure (CNS 01h)

Steps

  1. get Memory Page Size Minimum
  2. get Maximum Data Transfer Size
  3. create Submission/Completion Queue
  4. prepare long data for the write command
  5. issue a write command with long data
  6. the command shall complete successfully
  7. issue a write command with invalid number of logical blocks
  8. the command shall complete with error
  9. issue a read command with invalid number of logical blocks
  10. the command shall complete with error
  11. issue a read command with valid number of logical blocks
  12. the command shall complete successfully
  13. delete Submission/Completion Queue
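
The invalid number of logical blocks in steps 7 and 9 is one block beyond the limit implied by MDTS (Identify Controller byte 77, a power of two in units of the minimum memory page size from CAP.MPSMIN). A small calculation sketch (pure Python; the helper name is illustrative):

    def max_transfer_blocks(mdts: int, mpsmin: int, lba_size: int):
        """Largest legal Number of Logical Blocks for one I/O command.

        mdts:   Identify Controller byte 77; transfer limit is 2^MDTS minimum pages
                (0 means no limit is advertised)
        mpsmin: CAP.MPSMIN; minimum page size is 2^(12 + MPSMIN) bytes
        """
        if mdts == 0:
            return None
        max_bytes = (1 << mdts) * (1 << (12 + mpsmin))
        return max_bytes // lba_size

    # e.g. MDTS=5 and MPSMIN=0 allow 128 KiB, i.e. 256 blocks of 512 bytes;
    # a command with one more block is the "invalid number of logical blocks" case
    assert max_transfer_blocks(5, 0, 512) == 256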

function: scripts/conformance/04_registers/controller_test.py::test_controller_doorbell_invalid

AER will be triggered when an invalid doorbell value is written.

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Figure 146

Steps

  1. issue one AER command
  2. get number of queue
  3. access invalid register outside of doorbells

function: scripts/conformance/04_registers/controller_test.py::test_controller_cc_shn

send Shutdown Notification

Reference

  1. NVM Express Revision 1.4a. June 10, 2020. Page 50:
  2. This field indicates the status of shutdown processing that is initiated by the host setting the CC.SHN field.

Steps

  1. write cc.shn=01b and check the value of csts.shst
  2. send shutdown notify to DUT

function: scripts/conformance/04_registers/controller_test.py::test_controller_shn_before_commands

send controller shutdown notification before other commands

Reference

  1. NVM Express Revision 1.4a. June 10, 2020.

Steps

  1. send read IO commands and admin commands
  2. send shutdown notify and get the response time
  3. controller reset without power cycle
  4. send read IO commands and admin commands

function: scripts/conformance/04_registers/controller_test.py::test_controller_cc_memory_page_size_8k

set MPS to 8K and test 4K read/write

Reference

  1. NVM Express Revision 1.4a. June 10, 2020.

Steps

  1. check if MPS supports 8K page size
  2. prepare 4K read/write buffer
  3. send write and read command
  4. wait commands complete and verify data

function: scripts/conformance/04_registers/controller_test.py::test_controller_asq

set ASQ register with different locations

Reference

  1. NVM Express Revision 1.4a. June 10, 2020.

Steps

  1. reset controller with different locations of admin SQ
  2. test with many admin commands to fill-up admin SQ

file: scripts/conformance/04_registers/pcie_test

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_identifiers

read Identifiers register

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 22.
  2. 2.1.1 Offset 00h: ID – Identifiers

Steps

  1. read Identifiers
  2. read Class Code
  3. read Subsystem Identifiers

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_command

read and check Command register

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 22.
  2. 2.1.2 Offset 04h: CMD – Command

Steps

  1. read Command register
  2. check Memory Space Enable bit is “1”. Controls access to the controller’s register memory space.
  3. check reserved field is “0”

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_revision_id

read Revision ID register

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 23.
  2. 2.1.4 Offset 08h: RID – Revision ID

Steps

  1. read Revision ID

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_class_code

read Class Code register

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 23.
  2. 2.1.5 Offset 09h: CC – Class Code

Steps

  1. read Class Code
  2. check the device is a Non-Volatile memory controller

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_bist

read Built-In Self Test register

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 24.
  2. 2.1.9 Offset 0Fh: BIST – Built-In Self Test (Optional)

Steps

  1. read Built-In Self Test
  2. Completion Code (CC) should be zero

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_pmcr

read PCI Power Management Capabilities register

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 27.
  2. 2.2.2 Offset PMCAP + 2h: PC – PCI Power Management Capabilities

Steps

  1. get PCI Power Management Capabilities starting address
  2. read PCI Power Management Capabilities

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_pmcsr

read PCI Power Management Control and Status register

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 27.
  2. 2.2.3 Offset PMCAP + 4h: PMCS – PCI Power Management Control and Status

Steps

  1. get PCI Power Management Control and Status starting address
  2. read PCI Power Management Control and Status
  3. read power consumed or dissipation
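
PMCSR lives at offset 4h of the PCI Power Management capability, which is found by walking the capability list from the pointer at config offset 34h; the current device power state (D0..D3hot) is in its bits 1:0. A parsing sketch over a raw config-space image (pure Python; the helper is illustrative):

    def pm_power_state(cfg: bytes) -> int:
        """Return the PCI PM power state (0 = D0 ... 3 = D3hot) from a config-space image.

        Walks the capability list (pointer at offset 0x34) to the Power Management
        capability (ID 01h); PMCSR is at capability offset + 4, power state in bits 1:0.
        """
        ptr = cfg[0x34]
        while ptr:
            cap_id, nxt = cfg[ptr], cfg[ptr + 1]
            if cap_id == 0x01:
                pmcsr = int.from_bytes(cfg[ptr + 4:ptr + 6], 'little')
                return pmcsr & 0x3
            ptr = nxt
        raise ValueError("PCI Power Management capability not found")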

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_pcie_cap

read PCI Express Device Capabilities register

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 31.
  2. 2.5 PCI Express Capability

Steps

  1. get PCI Express Capability starting address
  2. read PCI Express Capabilities Register
  3. read Device Capabilities Register
  4. read Device Control Register
  5. read PCI Express Device Status register
  6. clear the correctable error detected bit

read PCI Express Link Capabilities register

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 34.
  2. 2.5.6 Offset PXCAP + Ch: PXLCAP – PCI Express Link Capabilities

Steps

  1. read PCI Express Link Capabilities
  2. read PCI Express Link Control register
  3. read PCI Express Link Status register

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_format

send a format command

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 234.
  2. 5.23 Format NVM command – NVM Command Set Specific

Steps

  1. send a format command and complete successfully

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_write_bandwidth

read bandwidth

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 31.

Steps

  1. set io size
  2. get bandwidth

enable aspm and send read command

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 35.
  2. Active State Power Management Control (ASPMC): This field controls the level of ASPM executed on the PCI Express Link

Steps

  1. read PCI Express Link Control register
  2. set different ASPM status
  3. create IO queue for read commands
  4. read lba 0 for 100 times
  5. return ASPM L0

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_mps_256

check MPS register

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. read MPS register from PCIe capability

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_read_write

write and read with a modified payload size setting

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_max_read_request_size

change max payload size setting and run IO test

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. check device control register
  2. set MRR
  3. double check device control register
  4. run test to get the bandwidth

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_read_write_post

write and read with a modified payload size setting

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_reset

reset pcie to restore payload setting

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

function: scripts/conformance/04_registers/pcie_test.py::test_pcie_read_write_after_reset

write and read with a modified payload size setting

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

file: scripts/conformance/04_registers/power_test

function: scripts/conformance/04_registers/power_test.py::test_power_pcie_pmcsr_d3hot

verify the DUT can enter and exit D3hot normally, and can handle Admin commands after returning to D0 from D3hot

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 27.
  2. 2.2.2 Offset PMCAP + 2h: PC – PCI Power Management Capabilities

Steps

  1. get PCI Power Management Capabilities
  2. set D3hot
  3. set D0
  4. set D3hot
  5. check the admin command shall time out
  6. set back to D0

function: scripts/conformance/04_registers/power_test.py::test_power_pcie_capability_d3hot

verify the DUT can enter and exit D3hot normally, and can run an ioworker after returning to D0 from D3hot

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 27.
  2. 2.2.2 Offset PMCAP + 2h: PC – PCI Power Management Capabilities

Steps

  1. get power state
  2. check power state is D0
  3. set D3hot, sleep 1 second
  4. exit D3hot, enter D0
  5. run ioworker in D0
  6. set D0
  7. run ioworker in D0
  8. check power state is D0

function: scripts/conformance/04_registers/power_test.py::test_power_pcie_aspm_L1

verify the DUT can enter and exit L1 normally, and can handle Admin commands after returning to L0 from L1

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 35.
  2. Active State Power Management Control (ASPMC): This field controls the level of ASPM executed on the PCI Express Link

Steps

  1. set ASPM L1
  2. issue admin command in ASPM L1, it shall complete successfully, sleep 1 second
  3. set ASPM L0

function: scripts/conformance/04_registers/power_test.py::test_power_pcie_aspm_l1_and_d3hot

verify behavior when the host sets ASPM and D3hot at the same time

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 21.
  2. 2 System Bus (PCI Express) Registers

Steps

  1. enter ASPM L1
  2. set D3hot, sleep 1 second
  3. exit D3hot, set D0
  4. check ASPM is L0
  5. set D3hot
  6. set ASPM L1, sleep 1 second
  7. set ASPM L0
  8. set D0
  9. run ioworker in D0 ASPM L0
  10. check ASPM is L0
  11. set D3hot, sleep 1 second
  12. set D0
  13. run ioworker in D0, it shall complete successfully

function: scripts/conformance/04_registers/power_test.py::test_power_pcie_ioworker_aspm

set ASPM from the host while an ioworker is running, and verify data consistency.

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 35.
  2. Active State Power Management Control (ASPMC): This field controls the level of ASPM executed on the PCI Express Link.

Steps

  1. start a read/write mixed ioworker
  2. toggle ASPM setting periodically
  3. reset controller

folder: scripts/conformance/05_controller

file: scripts/conformance/05_controller/arbitration_test

function: scripts/conformance/05_controller/arbitration_test.py::test_arbitration_weighted_round_robin

verify handling mechanism of Weighted Round Robin with Urgent Priority Class Arbitration

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 92.
  2. In this arbitration mechanism, there are three strict priority classes and three weighted round robin priority levels. If Submission Queue A is of higher strict priority than Submission Queue B, then all candidate commands in Submission Queue A shall start processing before candidate commands from Submission Queue B start processing.

Steps

  1. check if controller supports Weighted Round Robin
  2. format the DUT
  3. set feature Arbitration with HPW:MPW:LPW = 8:4:2 and Arbitration Burst 011b (eight commands); see the encoding sketch after these steps
  4. check the latency of an admin command
  5. create 1 admin queue, 2 urgent, 2 high, 2 medium and 2 low priority IO SQ queues
  6. create 8 SQ queues
  7. fill 50 flush commands in each queue
  8. fire all sq, low priority first
  9. check the latency of admin command
  10. check sqid of the whole cq
  11. assert all urgent IO completed first
  12. delete all queues
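
Step 3 programs the Arbitration feature (FID 01h); CDW11 packs the Arbitration Burst in bits 2:0 (as a power of two) and the 0's-based Low/Medium/High Priority Weights in bits 15:8, 23:16 and 31:24. A sketch of the encoding (pure Python; passing weights as command counts and converting to the 0's-based fields is an assumption for illustration):

    def arbitration_cdw11(hpw: int, mpw: int, lpw: int, ab: int) -> int:
        """CDW11 for Set Features Arbitration (FID 01h).

        hpw/mpw/lpw: weights as command counts (the fields are 0's based, so 8 -> 7)
        ab: Arbitration Burst exponent (011b -> 2^3 = 8 commands per turn)
        """
        return ((hpw - 1) << 24) | ((mpw - 1) << 16) | ((lpw - 1) << 8) | (ab & 0x7)

    # HPW:MPW:LPW = 8:4:2 with an Arbitration Burst of eight commands, as in step 3
    cdw11 = arbitration_cdw11(8, 4, 2, 0b011)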

function: scripts/conformance/05_controller/arbitration_test.py::test_arbitration_weighted_round_robin_ioworker

verify set feature Arbitration and controller controls command proportion using Weighted Round Robin

Reference

  1. NVM Express Revision 1.4a. Page 207.

Steps

  1. precondition
  2. set feature Arbitration with HPW:MPW:LPW = 8:4:2 and Arbitration Burst 011b (eight commands), encoded as in the sketch above
  3. start ioworker
  4. high priority queue should consume more IO

function: scripts/conformance/05_controller/arbitration_test.py::test_arbitration_default_round_robin

verify handling mechanism of Round Robin Arbitration

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 91.
  2. If the round robin arbitration mechanism is selected, the controller shall implement round robin command arbitration amongst all Submission Queues, including the Admin Submission Queue. In this case, all Submission Queues are treated with equal priority. The controller may select multiple candidate commands for processing from each Submission Queue per round based on the Arbitration Burst setting.

Steps

  1. make sure cq depth is large enough for testing
  2. set feature Arbitration Burst: 2
  3. create 1 completion queue, 8 io submission queues
  4. fill 50 flush commands in each queue
  5. fire all sqs, low priority first
  6. check the latency of an admin command
  7. check sqid of the whole cq
  8. assert all sqs have the same priority
  9. delete all queues

file: scripts/conformance/05_controller/interrupt_test

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_aggregation_time_threshold

get the aggregation time setting

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. get the default interrupt aggregation time and threshold

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_qpair_msix_mask

verify MSIx mask bits

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 297.
  2. 7.5.2 MSIX Based Behavior: If either of the mask bits are set to ‘1’, the corresponding pending bit in the MSIX PBA structure is set to ‘1’ to indicate that an interrupt is pending for that vector. The MSI for that vector is later generated when both the mask bits are reset to ‘0’.

Steps

  1. create a pair of CQ/SQ and clear MSIx interrupt
  2. send a read command
  3. check the MSIx interrupt is set up
  4. clear MSIx interrupt
  5. send a read command
  6. check the MSIx interrupt is set up
  7. clear MSIx interrupt and mask it
  8. send a read command
  9. check the MSIx interrupt is not set up
  10. unmask the MSIx interrupt
  11. check the MSIx interrupt is set up

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_multiple_qpair_msix

check MSIx interrupts on two CQ

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 297.
  2. 7.5.2 MSIX Based Behavior: MSIX allows completions to be aggregated on a per vector basis. Each Completion Queue(s) may send its own interrupt message, as opposed to a single message for all completions.

Steps

  1. create two pairs of CQ/SQ with interrupt enabled
  2. send the read command into the first SQ
  3. MSIx interrupt is triggered in the first CQ
  4. MSIx interrupt is not triggered in the second CQ
  5. delete CQ

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_qpair_msix_coalescing

verify MSIx interrupt coalescing and the Aggregation Time

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 213.
  2. The controller should signal an interrupt when either the Aggregation Time or the Aggregation Threshold conditions are met.

Steps

  1. clear MSIx interrupt
  2. enable Interrupt Vector 1 Coalescing
  3. send one command, get original interrupt latency
  4. set aggregation time: 200*100us=0.02s, aggregation threshold: 6
  5. send two commands
  6. get interrupt latency
  7. the interrupt should be delayed for aggregation
  8. disable Interrupt Coalescing
  9. send one command
  10. get interrupt latency
  11. check the current interrupt latency not delayed
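
Step 4 uses the Interrupt Coalescing feature (FID 08h); CDW11 carries the 0's-based Aggregation Threshold in bits 7:0 and the Aggregation Time in bits 15:8, in 100 microsecond increments. A sketch of the encoding (pure Python; whether the suite passes the raw field or an entry count, as assumed here, is not shown in the listing):

    def interrupt_coalescing_cdw11(threshold_entries: int, time_100us: int) -> int:
        """CDW11 for Set Features Interrupt Coalescing (FID 08h).

        threshold_entries: Aggregation Threshold as an entry count (field is 0's based)
        time_100us:        Aggregation Time in 100 us increments (0 = no delay)
        """
        return ((time_100us & 0xff) << 8) | ((threshold_entries - 1) & 0xff)

    # aggregation time 200 * 100 us = 20 ms with a threshold of 6 entries, as in step 4
    cdw11 = interrupt_coalescing_cdw11(6, 200)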

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_coalescing

verify disable interrupt coalescing

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 214.
  2. If set to 1, then any interrupt coalescing settings shall not be applied for this interrupt vector. If cleared to 0, then interrupt coalescing settings apply for this interrupt vector.

Steps

  1. enable Interrupt Vector 1 Coalescing
  2. clear MSIx interrupt
  3. send one command, get original interrupt latency
  4. send some read as precondition
  5. disable Interrupt Vector 1 Coalescing
  6. set aggregation time: 200*100us=0.02s, aggregation threshold: 10
  7. send two commands
  8. get interrupt latency
  9. check the current interrupt latency is not aggregated
  10. enable Interrupt Vector 1 Coalescing
  11. send two commands
  12. get interrupt latency
  13. check the interrupt is aggregated
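
Enabling or disabling coalescing for a single vector (steps 1, 5 and 10) is the Interrupt Vector Configuration feature (FID 09h): CDW11 bits 15:0 select the vector and bit 16 is Coalescing Disabled. A sketch of the encoding (pure Python; the helper name is illustrative):

    def interrupt_vector_config_cdw11(vector: int, coalescing_disable: bool) -> int:
        """CDW11 for Set Features Interrupt Vector Configuration (FID 09h).

        bits 15:0 Interrupt Vector (IV)
        bit  16   Coalescing Disabled (CD): 1 = coalescing does not apply to this vector
        """
        return (int(coalescing_disable) << 16) | (vector & 0xffff)

    # disable coalescing on interrupt vector 1, as toggled in the steps above
    cdw11 = interrupt_vector_config_cdw11(1, True)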

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_different_coalescing

one qpair enable interrupt coalescing, the other one is disabled

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 214.
  2. If set to ‘1’, then any interrupt coalescing settings shall not be applied for this interrupt vector. If cleared to ‘0’, then interrupt coalescing settings apply for this interrupt vector.

Steps

  1. create two pairs of CQ/SQ
  2. enable Interrupt Vector 1 and 2 Coalescing
  3. get the normal interrupt latency on qpair1
  4. disable Interrupt Vector Coalescing on qpair2
  5. set aggregation time: 200*100us=0.02s, aggregation threshold: 10
  6. send two commands on qpair1
  7. get the interrupt latency on qpair1
  8. interrupt latency should be aggregated
  9. send two read commands on qpair2
  10. get the interrupt latency on qpair2
  11. the interrupt on qpair2 is not delayed for aggregation

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_vector_discontiguous

verify discontiguous interrupting vectors

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 102.
  2. Interrupt Vector (IV): This field indicates interrupt vector to use for this Completion Queue.

Steps

  1. create qpair1 with vector 2, qpair2 with vector 4
  2. clear the MSIx interrupt on qpair1
  3. send a read command on qpair1
  4. check the MSIx interrupt on qpair1 is set up
  5. clear MSIx interrupt on qpair2
  6. send a read command on qpair2
  7. check the MSIx interrupt on qpair2 is set up

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_specific_interrupt_vector_coalescing

enable interrupt vector coalescing on one qpair, but disable on another

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 102.
  2. 5.21.1.8 Interrupt Coalescing (Feature Identifier 08h)
  3. 5.21.1.9 Interrupt Vector Configuration (Feature Identifier 09h)

Steps

  1. create two pairs of CQ/SQ with different interrupt vector, both coalescing disabled
  2. aggregation on qpair2 is enabled
  3. set aggregation time: 200*100us=0.02s, aggregation threshold: 10
  4. check the interrupt of qpair2 is aggregated
  5. aggregation of qpair1 is disabled
  6. check if the interrupt on qpair1 is not aggregated

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_create_cq_disable

create cq with interrupt disabled

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 102.
  2. Interrupts Enabled (IEN): If set to ‘1’, then interrupts are enabled for this Completion Queue. If cleared to ‘0’, then interrupts are disabled for this Completion Queue.

Steps

  1. create qpair1 with interrupt enabled, qpair2 with interrupt disabled
  2. clear the MSIx interrupt on qpair1
  3. send a write command on qpair1
  4. check the MSIx interrupt on the first qpair is set up
  5. clear the MSIx interrupt on qpair2
  6. send a read command on qpair2
  7. check the MSIx interrupt of qpair2 is not set
  8. check the read command has completed
  9. delete qpairs

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_qpair_msix_coalescing_numb

verify MSIx interrupt coalescing and the Aggregation Threshold

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 213:
  2. The controller should signal an interrupt when either the Aggregation Time or the Aggregation Threshold conditions are met.

Steps

  1. clear MSIx interrupt
  2. disable Interrupt Coalescing
  3. send three commands, get interrupt latency
  4. send 10 commands and check interrupt latency
  5. check the interrupt is aggregated
  6. set aggregation time: 200*100us=0.02s, aggregation threshold: 10
  7. send only 9 commands and check interrupt latency
  8. interrupt latency should be aggregated
  9. send 10 commands and check interrupt aggregation
  10. check the interrupt is aggregated

function: scripts/conformance/05_controller/interrupt_test.py::test_interrupt_ioworker_qpair

check interrupt when ioworker running

Reference

  1. NVM Express Revision 1.4a

Steps

  1. create a qpair with interrupt enabled or disabled
  2. start an ioworker with the qpair
  3. repeatedly check that the interrupt is presented when it is enabled
  4. delete the qpair

file: scripts/conformance/05_controller/prp_test

function: scripts/conformance/05_controller/prp_test.py::test_prp_format

format the device before following test cases

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. send a format command

function: scripts/conformance/05_controller/prp_test.py::test_prp_write_mdts

verify write with different data length

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 69.
  2. 4.3 Physical Region Page Entry and List:A physical region page list (PRP List) is a set of PRP entries in a single page of contiguous memory. A PRP List describes additional PRP entries that could not be described within the command itself

Steps

  1. get Memory Page Size Minimum
  2. get Maximum Data Transfer Size
  3. create a pair of io CQ/SQ
  4. create prp and prp list
  5. send a write command with different nlba
  6. check cq entry
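
Step 4 builds PRPs for transfers up to MDTS: PRP1 may carry an offset within the first page, and every further page needs one PRP entry, placed in a PRP list once more than two entries are required. A sketch of the entry-count arithmetic (pure Python; the helper is illustrative, not the suite's PRP builder):

    def prp_entries_needed(xfer_bytes: int, mps: int, first_offset: int = 0) -> int:
        """Number of PRP entries needed to describe one data buffer.

        mps: memory page size in bytes (from CC.MPS). Only PRP1 may carry a
        non-zero offset; every later entry must point to a page-aligned address.
        """
        first = min(xfer_bytes, mps - first_offset)     # bytes covered by PRP1
        remaining = xfer_bytes - first
        return 1 + (remaining + mps - 1) // mps

    # a 128 KiB write with 4 KiB pages and no offset needs 32 entries:
    # PRP1 plus a PRP list carrying the other 31
    assert prp_entries_needed(128 * 1024, 4096) == 32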

function: scripts/conformance/05_controller/prp_test.py::test_prp_page_offset

verify reading the data with different offsets and check the lba

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 69.
  2. 4.3 Physical Region Page Entry and List: Page Base Address and Offset (PBAO):

Steps

  1. fill the data
  2. read the data with different offset
  3. read successfully, check the lba

function: scripts/conformance/05_controller/prp_test.py::test_prp_admin_page_offset

send identify command with different valid offset

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 161.
  2. 5.15.1 Identify command overview: If using PRPs, this field shall not be a pointer to a PRP List as the data buffer may not cross more than one page boundary.

Steps

  1. create buffer for identify command
  2. send identify command with valid offset
  3. check identify data

function: scripts/conformance/05_controller/prp_test.py::test_prp_admin_page_offset_invalid

send identify command with different invalid offset

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 69.
  2. 4.3 Physical Region Page Entry and List: Page Base Address and Offset (PBAO): The Offset shall be dword aligned, indicated by bits 1:0 being cleared to 00b

Steps

  1. create buffer for identify command
  2. send identify command with invalid offset
  3. check identify data from offset 0 when no error reported

function: scripts/conformance/05_controller/prp_test.py::test_prp_valid_offset_in_prplist

verify PRP1 and PRP2 with valid offset

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. prp1 offset is not zero.
  2. prp2 (list) offset is not zero.
  3. fill 8 PRP entries into the PRP list
  4. issue read cmd with PRPs
  5. reap the command
  6. wait CQ pbit flip

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_offset_in_prplist

set PRP entries in PRP List with invalid offset.

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. prp1 offset is not zero.
  2. prp2 (list) offset is not zero.
  3. entry offset in prp list is not zero, which is invalid
  4. issue a command with invalid prp list
  5. reap the command, error 00/13 shall occur: PRP Offset Invalid

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_buffer_offset

fill an invalid PRP in an IO command whose offset is not zero

Reference

  1. NVM Express Revision 1.4a March 9, 2020. Page 260
  2. 4.3. Physical Region Page Entry and List: The controller is not required to check that bits 1:0 are cleared to 00b. The controller may report an error of PRP Offset Invalid if bits 1:0 are not cleared to 00b. If the controller does not report an error of PRP Offset Invalid, then the controller shall operate as if bits 1:0 are cleared to 00b.

Steps

  1. write one LBA with an invalid PRP whose offset is not zero
  2. issue the write command and get the status code in CQE
  3. no error happen, then the controller shall operate as if PRP offset is zero
  4. read the LBA back, and check if the offset is correctly ignored

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_one_qpair

create one qpair, and issue one invalid prp command

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. create one qpair
  2. issue an invalid prp command to the sq
  3. delete qpair

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_one_qpair_normal_command

create one qpair, and issue one invalid prp command and normal write and read command

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. create one qpair
  2. issue an invalid prp command to the sq
  3. issue a normal write and read command to sq
  4. delete qpair

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_multi_qpair_normal_command

create two qpairs, and issue invalid prp commands and normal write and read commands

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. create two qpairs
  2. issue an invalid prp command to the sq
  3. issue a normal write and read command to the sq
  4. delete qpair

function: scripts/conformance/05_controller/prp_test.py::test_prp_multi_invalid_and_multi_normal_command

create one qpair, and issue multiple invalid prp commands and multiple normal commands

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. create one qpair
  2. issue multiple invalid prp commands
  3. issue multiple normal write commands
  4. delete qpair

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_before_ioworker

create one qpair, and issue one invalid prp command

Reference

  1. NVM Express Revision 1.4a March 9, 2020.

Steps

  1. create one qpair
  2. issue an invalid prp command to the sq
  3. run ioworker after inject invalid prp command
  4. delete qpair
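
A minimal sketch of the ioworker portion of this flow, assuming the open-source pynvme-style Namespace.ioworker() API; the invalid-PRP injection uses the driver's low-level SQ facilities and is only indicated by a placeholder comment.

    def test_prp_invalid_before_ioworker_sketch(nvme0n1):
        # (placeholder) inject one command with an invalid PRP into the SQ
        # via the driver's low-level queue interface and reap its error here

        # normal IO should still work afterwards: run a short mixed ioworker
        nvme0n1.ioworker(io_size=8,           # 8 LBAs per IO
                         lba_random=True,
                         read_percentage=50,  # mixed read/write
                         time=2).start().close()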

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_multiple

send commands with invalid PRP

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.

Steps

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_with_ioworker

send commands with invalid PRP while an ioworker is active

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.

Steps

  1. keep injecting invalid PRP commands into the SQ
  2. define a function that creates a qpair and issues one invalid prp command

function: scripts/conformance/05_controller/prp_test.py::test_prp_invalid_offset_create_sq

verify create IO SQ with invalid PRP offset

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.
  2. If there is a PRP Entry with a non-zero offset, then the controller should return an error of PRP Offset Invalid.

Steps

  1. create a valid CQ
  2. create the valid SQ
  3. create SQ with an invalid PRP offset, which should be aborted with an error status

function: scripts/conformance/05_controller/prp_test.py::test_prp_page_offset_invalid

read data with invalid buffer offset

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.

Steps

  1. fill the data
  2. read the data to different offset in the buffer
  3. check if read complete with correct data or expected error code

function: scripts/conformance/05_controller/prp_test.py::test_prp_identify_prp2

define PRP2 not consecutive with PRP1 in admin command

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.

Steps

  1. send an identify command with a contiguous buffer
  2. send an identify command with two separated buffer
  3. check data of two identify commands
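
A sketch of steps 1 and 3, assuming the open-source pynvme-style Controller.identify() and Buffer API; issuing the second Identify with PRP1 and PRP2 pointing at two separate pages requires the driver's low-level SQE/PRP builders, which are not shown here.

    import nvme as d

    def test_identify_contiguous_sketch(nvme0):
        buf = d.Buffer(4096)
        nvme0.identify(buf)        # CNS 01h: Identify Controller
        nvme0.waitdone()
        data_contiguous = buf[0:4096]

        # the second Identify, with PRP1 and PRP2 pointing to two separate
        # (non-consecutive) pages, is issued with the driver's raw SQE/PRP
        # builders; its returned 4KiB should equal data_contiguous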

file: scripts/conformance/05_controller/sq_cq_test

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_cq_around

create cq support 3 entries and issue 4 commands

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 63.
    1. Data Structures

Steps

  1. create a cq support 3 entries and a sq support 5 entries
  2. send 3 commands with CIDs 4, 3, and 2
  3. check the first cq CID is 4
  4. check the second cq CID is 3
  5. check that there is no third cq entry
  6. set cq head = 1
  7. check the first cq CID is 4
  8. check the third cq CID is 2
  9. send 4th command
  10. set cq head = 2, check p-bit is inverted
  11. check the third cq CID is 2
  12. check the second cq CID is 3
  13. check the first cq CID is 1

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_overflow

create sq support 2 entries and issue 2 commands

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 63.
    1. Data Structures

Steps

  1. create a cq support 5 entries and a sq support 2 entries
  2. send command with CID 4
  3. send command with CID 3
  4. check the first cq CID is 4
  5. check the second cq CID is 3
  6. check that there are no third, fourth, or fifth cq entries

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_delete_after_cq

delete the IO SQ prior to deleting the IO CQ

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 105.
  2. Host software shall ensure that any associated I/O Submission Queue is deleted prior to deleting a Completion Queue. If there are any associated I/O Submission Queues present, then the Delete I/O Completion Queue command shall fail with a status value of Invalid Queue Deletion.

Steps

  1. create a pair of io CQ/SQ
  2. check that deleting the cq first is invalid

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_doorbell

write Submission Queue tail doorbell

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 63.
    1. Data Structures

Steps

  1. create a pair of io CQ/SQ
  2. set sq tail = 1 and complete successfully

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_doorbell_invalid

write an invalid sq doorbell

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Figure 146.
  2. Asynchronous Event Information – Error Status. 01h//Invalid Doorbell Write Value: Host software attempted to write an invalid doorbell value.

Steps

  1. clear the associated asynchronous events
  2. create a pair of io CQ/SQ
  3. check invalid sq doorbell value will trigger asynchronous event

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_cq_another_sq

create two SQs linked to the same CQ

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 65.
  2. 4.2 Submission Queue Entry – Command Format

Steps

  1. create a cq and a sq both support 3 entries
  2. send command with CID 4
  3. send command with CID 3
  4. set sq tail = 2
  5. create the second io sq
  6. send command with CID 2 by the second sq
  7. check the first cq CID is 4
  8. check the second CID is 3
  9. check that there is no third cq entry
  10. set cq head = 1
  11. check the third CID is 2
  12. check the first CID is 4
  13. send command with CID 1 by the second sq
  14. set cq head = 2
  15. check the first CID is 1

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_create_invalid_cqid

A Create IO SQ command with an invalid CQID shall fail with the correct error codes.

Reference

  1. NVM Express Revision 1.4a. Page 104.
  2. a) cqid is 0h (i.e., the Admin Completion Queue), then the controller should return an error of Invalid Queue Identifier;
  3. b) cqid is outside the range supported by the controller, then the controller should return an error of Invalid Queue Identifier; or
  4. c) cqid is within the range supported by the controller and does not identify an I/O Completion Queue that has been created, then the controller should return an error of Completion Queue Invalid.

Steps

  1. get the number of queues supported by the device (ncqa)
  2. create a CQ with CQID 1
  3. create a SQ binding to the above CQ
  4. create a SQ binding to CQID 0, which should fail
  5. create a SQ binding to CQID 0xffff, which should fail
  6. create a SQ binding to ncqa+1, which should fail
  7. create a SQ binding to ncqa+0xff, which should fail
  8. create a SQ binding to 2 or 4 (a cqid that does not exist), which should fail
  9. delete created SQ and CQ
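
The CQID 0 case can be illustrated with a raw Create I/O Submission Queue admin command; a hedged sketch assuming the open-source pynvme-style Controller.send_cmd() admin passthrough, with CDW10/CDW11 encoded per the Create I/O Submission Queue command definition.

    import nvme as d

    def test_create_sq_cqid0_sketch(nvme0):
        sq_mem = d.Buffer(4096)               # backing memory for the SQ (PRP1)

        def expect_invalid_qid(cdw0, status1):
            sc = (status1 >> 1) & 0xff
            sct = (status1 >> 9) & 0x7
            # Invalid Queue Identifier: SCT 1h (command specific), SC 01h
            assert sct == 0x1 and sc == 0x01

        # Create I/O SQ (opcode 01h): CDW10 = (QSIZE-1)<<16 | SQID,
        # CDW11 = CQID<<16 | PC; CQID 0 is the admin CQ and is invalid here
        nvme0.send_cmd(0x01, sq_mem,
                       cdw10=((8 - 1) << 16) | 1,
                       cdw11=(0 << 16) | 1,
                       cb=expect_invalid_qid)
        nvme0.waitdone()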

function: scripts/conformance/05_controller/sq_cq_test.py::test_sq_read_write_burst

Create multiple SQEs and update the SQ doorbell once, then check for CQ overflow

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 101.
  2. 5.3 Create I/O Completion Queue command

Steps

  1. create cq and sq
  2. write 10 LBAs, and use LBA as the data pattern
  3. wait all write complete
  4. write one more
  5. cq overflow check
  6. delete sq and cq
  7. create cq and sq again
  8. read 10 LBAs
  9. wait all read complete
  10. check read data pattern
  11. delete sq and cq
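
A simplified high-level equivalent of this flow, assuming the open-source pynvme-style Qpair/Buffer/Namespace API and a 512-byte LBA format; unlike the actual test it does not batch all SQ doorbell updates into a single write.

    import nvme as d

    def test_write_read_lba_pattern_sketch(nvme0, nvme0n1):
        qpair = d.Qpair(nvme0, 16)
        bufs = [d.Buffer(512) for _ in range(10)]

        # write 10 LBAs, tagging the first bytes of each buffer with its LBA
        for lba in range(10):
            for i, byte in enumerate(lba.to_bytes(4, 'little')):
                bufs[lba][i] = byte
            nvme0n1.write(qpair, bufs[lba], lba)
        qpair.waitdone(10)

        # read the LBAs back and verify the tag
        rbuf = d.Buffer(512)
        for lba in range(10):
            nvme0n1.read(qpair, rbuf, lba)
            qpair.waitdone(1)
            assert rbuf[0] == lba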

function: scripts/conformance/05_controller/sq_cq_test.py::test_cq_doorbell_valid

create cq and do not create sq

Reference

  1. NVM Express Revision 1.4a, March 9, 2020. Page 101.
  2. 5.3 Create I/O Completion Queue command

Steps

  1. create cq
  2. delete cq and complete successfully

function: scripts/conformance/05_controller/sq_cq_test.py::test_cq_create_physically_contiguous

If CAP.CQR is 1, creating an IO CQ with CDW11.PC cleared to 0, or with a PRP entry that has a non-zero offset, shall fail with the correct error code

Reference

  1. Page 101. Figure 150: If CDW11.PC is set to ‘1’, then this field specifies a 64-bit base memory address pointer of the Completion Queue that is physically contiguous. The address pointer is memory page aligned (based on the value in CC.MPS) unless otherwise specified.

Steps

  1. check CAP.CQR and skip the test if PC is not required
  2. create CQ with PC flag
  3. create CQ without PC flag, and error is expected
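
Step 1 only needs the CAP register; a short sketch assuming the open-source pynvme-style nvme0.cap property (CQR is bit 16 of CAP).

    import pytest

    def test_cap_cqr_check_sketch(nvme0):
        cqr = (nvme0.cap >> 16) & 0x1      # CAP.CQR: contiguous queues required
        if cqr == 0:
            pytest.skip("physically contiguous queues are not required")
        # otherwise, create the CQ with and without the PC flag using the
        # driver's low-level Create I/O CQ facility and check the status codes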

function: scripts/conformance/05_controller/sq_cq_test.py::test_cq_sq_diff_id

pair sq to cq with a different qid

Reference

  1. NVM Express Revision 2.0

Steps

  1. create cq with qid 1
  2. create sq with a different qid
  3. send a cmd
  4. check the cqe
  5. delete sq and cq

file: scripts/conformance/05_controller/sqe_cqe_test

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_sqhd

verify the SQ Head Pointer, which indicates to the host the Submission Queue entries that have been consumed and may be re-used for new entries

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.

Steps

  1. create an iocq with a depth of 3, and an iosq with a depth of 2
  2. issue one IO command and write sq.tail
  3. check that SQHD, SQID and the P-bit of the first cq entry are updated, while the other cq entries are not
  4. issue one more IO command, and write sq.tail
  5. check that SQHD, SQID and the P-bit of the second cq entry are updated, while the last cq entry is not
  6. issue one more IO command, and write sq.tail
  7. the last cq entry is not updated while cq.head has not been updated
  8. free one cqe before getting the third cqe
  9. issue a command, and free a cqe
  10. check that the SQHD of the first cq entry is updated, and its P-bit changes to 0
  11. delete SQ and CQ
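
The SQHD, SQID, CID, phase-tag, and status checks above read individual CQE fields; the following plain-Python helper (independent of any driver API) shows where each field lives in the 16-byte completion entry.

    import struct

    def decode_cqe(cqe: bytes):
        """Decode a 16-byte NVMe completion queue entry."""
        dw0, dw1, dw2, dw3 = struct.unpack('<4I', cqe)
        return {
            'cdw0': dw0,                     # command specific result
            'sqhd': dw2 & 0xffff,            # SQ head pointer
            'sqid': (dw2 >> 16) & 0xffff,    # SQ identifier
            'cid':  dw3 & 0xffff,            # command identifier
            'p':    (dw3 >> 16) & 0x1,       # phase tag
            'sc':   (dw3 >> 17) & 0xff,      # status code
            'sct':  (dw3 >> 25) & 0x7,       # status code type
        }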

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_p_tag_invert_after_cq_full

verify that the Phase Tag inverts on each pass when the controller wraps around to the top of the Completion Queue

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.

Steps

  1. create an iocq with a depth of 2, and an iosq with a depth of 10
  2. issue a write command into sq, and write sq.tail
  3. issue another write command into sq, and write sq.tail
  4. check cid, sqid and sqhd for the first command updated
  5. check cid, sqid and sqhd for the second command updated
  6. check the p-bit of all cq entries is 1
  7. issue a write command into sq
  8. issue a write command into sq again
  9. check cid, sqid and sqhd for the third command updated
  10. check cid, sqid and sqhd for the last command updated after free a cqe
  11. check the p-bit of all cq entries is 0
  12. delete qpair

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_discontinuous_cid

verify two commands with discontinuous cids

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.
  2. Figure 104

Steps

  1. create a cq and a sq both support 3 entries
  2. issue two commands with discontinuous cid
  3. check its cid updated correctly
  4. delete qpair

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_max_cid

issue commands whose cid is the maximum or minimum value

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.
  2. Figure 104

Steps

  1. in NVMe 2.0, 0xffff is not a valid cid
  2. create a cq and a sq both support 3 entries
  3. send max/min cid commands
  4. check its cid updated correctly
  5. delete qpair

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_cid_conflict

two commands have conflicting cid

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.
  2. Figure 104

Steps

  1. create a cq and a sq both support 20 entries
  2. issue two same command to one sq
  3. check p-bit updated to 1, status should be Successful Completion or Command ID Conflict
  4. delete qpair

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_reserved

verify command behavior when the Reserved field is non-zero

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.
  2. Figure 104

Steps

  1. create a cq and a sq both support 3 entries
  2. issue a command, reserved field is non-zero.
  3. check status should be Successful Completion
  4. delete qpair

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_fuse_is_zero

verify fuse field is zero

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.
  2. Figure 104

Steps

  1. create a cq and a sq both support 3 entries
  2. issue a command, FUSE field is zero.
  3. check status should be Successful Completion
  4. delete qpair

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_opc_invalid_admin_cmd

verify admin command with invalid command opcode

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.
  2. Figure 104

Steps

  1. issue an admin command with an invalid command opcode, which shall be aborted
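
A minimal sketch of this check, assuming the open-source pynvme-style Controller.send_cmd() raw admin passthrough and cb(cdw0, status1) callback.

    def test_invalid_admin_opcode_sketch(nvme0):
        def expect_invalid_opcode(cdw0, status1):
            sc = (status1 >> 1) & 0xff
            sct = (status1 >> 9) & 0x7
            # Invalid Command Opcode: SCT 0h (generic), SC 01h
            assert sct == 0x0 and sc == 0x01

        nvme0.send_cmd(0xff, cb=expect_invalid_opcode)  # 0xFF: undefined admin opcode
        nvme0.waitdone()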

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_opc_invalid_nvm_cmd

verify nvm command with invalid command opcode

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.
  2. Figure 104

Steps

  1. create a cq and a sq both support 3 entries
  2. issue command with invalid command opcode.
  3. check status should be Invalid Command Opcode
  4. delete qpair

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_ns_invalid

verify command with invalid Namespace.

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.
  2. 6.1.5 NSID and Namespace Relationships

Steps

  1. create a cq and a sq both support 3 entries
  2. issue a command, Namespace field is invalid.
  3. check status should be Invalid Namespace or Format
  4. delete qpair
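
A sketch of the invalid-NSID case, assuming the open-source pynvme-style Namespace.send_cmd(opcode, qpair, buf, nsid, ...) raw IO passthrough; the NSID 0xFFFFFFFE is only an example of an identifier assumed not to exist on the DUT.

    import nvme as d

    def test_io_invalid_nsid_sketch(nvme0, nvme0n1):
        qpair = d.Qpair(nvme0, 4)
        buf = d.Buffer(512)

        def expect_invalid_ns(cdw0, status1):
            sc = (status1 >> 1) & 0xff
            sct = (status1 >> 9) & 0x7
            # Invalid Namespace or Format: SCT 0h (generic), SC 0Bh
            assert sct == 0x0 and sc == 0x0b

        # raw Read (opcode 02h) carrying a namespace ID that does not exist
        nvme0n1.send_cmd(0x02, qpair, buf, nsid=0xfffffffe,
                         cdw10=0, cdw12=0, cb=expect_invalid_ns)
        qpair.waitdone(1)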

function: scripts/conformance/05_controller/sqe_cqe_test.py::test_sqe_cqe_ns_broadcast

verify command with broadcast Namespace ID 0xffffffff.

Reference

  1. NVM Express Revision 1.4a, March 9, 2020.
  2. 6.1.5 NSID and Namespace Relationships

Steps

  1. create a cq and a sq both support 3 entries
  2. issue a command with the broadcast Namespace ID 0xffffffff
  3. check status should be Invalid Namespace or Format
  4. delete qpair

folder: scripts/conformance/06_tcg

file: scripts/conformance/06_tcg/01_use_case_test

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct01_level0_discovery

UCT-01: Level 0 Discovery

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. power cycle the device
  2. issue level 0 discovery
  3. check Number of ComIDs >= 1
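
PyNVMe3 presumably provides a TCG helper layer for this; as a protocol-level illustration only, the same request can be built from a raw Security Receive admin command (opcode 82h) per the TCG SIIS mapping, assuming the open-source pynvme-style send_cmd(). The Number of ComIDs field lives in the SSC feature descriptor of the returned data and is not parsed in this sketch.

    import nvme as d

    def test_level0_discovery_sketch(nvme0):
        buf = d.Buffer(2048)
        # Security Receive: SECP = 01h (TCG), SPSP = 0001h (Level 0 Discovery),
        # CDW11 = allocation length
        nvme0.send_cmd(0x82, buf,
                       cdw10=(0x01 << 24) | (0x0001 << 8),
                       cdw11=2048)
        nvme0.waitdone()

        # Level 0 Discovery header, bytes 0-3: length of parameter data (big endian)
        length = int.from_bytes(bytes(buf[0:4]), 'big')
        assert length > 0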

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct02_properties

UCT-02: Properties

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call Properties method with the following HostProperties values: MaxComPacketSize = 4096 bytes, MaxPacketSize = 4076 bytes, MaxIndTokenSize = 4040 bytes

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct03_take_ownership

UCT-03: Taking Ownership of an SD

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. take ownership
  2. Call StartSession method with SPID = Admin SP UID
  3. Call Get method to retrieve MSID’s PIN column value from the C_PIN table
  4. CLOSE_SESSION
  5. Call StartSession method with SPID = Admin SP UID and HostSigningAuthority = SID authority UID
  6. SET_PASSWORD_FOR SID
  7. CLOSE_SESSION
  8. revert tper
  9. start session with SID

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct04_activate_locking_sp

UCT-04: Activate Locking SP when in Manufactured-Inactive State

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call StartSession method with SPID = Admin SP UID and HostSigningAuthority = SID authority UID
  2. Call Activate method on Locking SP object
  3. CLOSE_SESSION
  4. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  5. close session

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct05_configuring_authorities

UCT-05: Configuring Authorities

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. read LAST_REQUIRED_USER
  2. take ownership
  3. enable user1 and set passwd for it
  4. enable last_required_user and set passwd for it
  5. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  6. close session
  7. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = User1 authority UID
  8. close session
  9. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = LAST_REQUIRED_USER authority UID
  10. close session

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct06_configuring_locking_objects

UCT-06: Configuring Locking Objects (Locking Ranges)

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. get LAST_REQUIRED_RANGE
  2. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  3. close session
  4. write and verify

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct06_configuring_locking_objects_powercycle

UCT-06: Configuring Locking Objects (Locking Ranges)

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00
  2. Power cycle the SD, and read locking range data

Steps

  1. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  2. close session
  3. power cycle
  4. issue read and write commands, which will return Data Protection Error

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct07_unlocking_range

UCT-07: Unlocking Ranges

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. For Opal, Call Set method on LAST_REQUIRED_RANGE
  2. close session
  3. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  4. close session
  5. Call the Set method on the ReadLocked and WriteLocked columns of the LAST_REQUIRED_RANGE Locking object with a value of FALSE
  6. close session
  7. issue write and read command

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct08_erasing_range

UCT-08: Erasing Ranges

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the device supports Pyrite, skip the test
  2. For Opal, Call Set method on LAST_REQUIRED_RANGE
  3. read AlignmentGranularity
  4. setup range
  5. unlock range
  6. verify data before erasing
  7. erasing range
  8. verify data

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct09_using_datastore

UCT-09: Using the DataStore Table

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. enable user1
  2. start locking sp admin1 session
  3. Call Set method on the BooleanExpr column of the ACE_DataStore_Set_All ACE object
  4. Call Set method on the BooleanExpr column of the ACE_DataStore_Get_All ACE object
  5. close session
  6. user1 auth session
  7. write magic_pattern to datastore table
  8. user1 auth session
  9. read data from datastore table and check it

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct10_enable_mbr_shadow

UCT-10: Enable MBR Shadowing and UCT-11: MBR Done

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Invoke Properties method
  2. read AlignmentGranularity
  3. start locking sp admin1 session
  4. Invoke the Set method on the BooleanExpr column of the ACE_MBRCONTROL_SET_DONE ACE object to include the UIDs of the User1 and LAST_REQUIRED_USER Authority objects
  5. invoke Get method on the Rows column of the MBR Table Descriptor Object
  6. invoke the Set method to change the RangeLength column of the LAST_REQUIRED_RANGE to SIZE_OF_MBR_TABLE_DESCRIPTOR_IN_LOGICAL_BLOCKS + 10 LBAs
  7. write 1s over the entire LAST_REQUIRED_RANGE
  8. call Get method on the MBR object in the Table table to retrieve the MandatoryWriteGranularity column value
  9. invoke Set method to write the MBR table with the MAGIC_PATTERN
  10. invoke Set method on the Enable column of the MBRControl table with a value of TRUE
  11. close session
  12. powercycle
  13. write the MAGIC_PATTERN over the entire LAST_REQUIRED_RANGE
  14. read from LBA 0 to the size of the MBR Table
  15. test_uct11_mbr_done
  16. read LAST_REQUIRED_USER
  17. enable user1 and set passwd for it
  18. enable last_required_user and set passwd for it
  19. close session
  20. Call the Set method on the ReadLocked and WriteLocked columns of the LAST_REQUIRED_RANGE Locking object with a value of FALSE
  21. close session
  22. read the entire LAST_REQUIRED_RANGE

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct12_revert_locking_sp

UCT-12: Revert the Locking SP using SID, with Locking SP in Manufactured state

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. write data over 64 logical blocks beginning at LBA 0
  2. Call StartSession method with SPID = Admin SP UID
  3. Call Revert method on Locking SP object
  4. Call StartSession method with SPID = Locking SP
  5. For Pyrite 1.00, do nothing for this step

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct13_revert_admin_sp_lockingsp_inactive

UCT-13: Revert the Admin SP using SID, with Locking SP in ManufacturedInactive state

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. get the tcg feature of the DUT
  2. take ownership
  3. write data over 64 logical blocks beginning at LBA 0
  4. Call StartSession method with SPID = Admin SP UID
  5. Call Revert method on Admin SP object
  6. read Behavior of C_PIN_SID Pin upon TPer Revert value in level0 discovery
  7. Call StartSession method with SPID = Locking SP
  8. Read 64 logical blocks beginning at LBA 0

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct14_revert_admin_sp_locking_sp_active

UCT-14: Revert the Admin SP using SID, with Locking SP in Manufactured state

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. get the tcg feature of the DUT
  2. Call StartSession method with SPID = Admin SP UID
  3. Call Get method on UID 00 00 00 06 00 00 02 02 to determine support
  4. close session
  5. write data over 64 logical blocks beginning at LBA 0
  6. Call StartSession method with SPID = Admin SP UID and HostSigningAuthority = SID authority UID
  7. Call Revert method on Admin SP object
  8. read Behavior of C_PIN_SID Pin upon TPer Revert value in level0 discovery
  9. Call StartSession method with SPID = Locking SP
  10. For Pyrite 1.00, do nothing for this step

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct15_revert_admin_sp_locking_sp_active

UCT-15: Revert Admin SP using Admin1, with Locking SP in Manufactured state

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. get the tcg feature of the DUT
  2. check whether admin1 is supported
  3. Enable admin1
  4. write data over 64 logical blocks beginning at LBA 0
  5. Call StartSession method with SPID = Admin SP UID and HostSigningAuthority = Admin1 authority
  6. Call Revert method on Admin SP object
  7. read Behavior of C_PIN_SID Pin upon TPer Revert value in level0 discovery
  8. Call StartSession method with SPID = Locking SP
  9. For Pyrite 1.00, do nothing for this step

function: scripts/conformance/06_tcg/01_use_case_test.py::test_uct16_psid_revert

UCT-16: Revert the Locking SP using PSID

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. fill the correct PSID in the key here
  2. power cycle the device
  3. revert the device by psid

file: scripts/conformance/06_tcg/02_specific_functionality_test

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf01_transaction

SPF-01: Transaction Case 2:

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. start locking sp admin1 session
  2. write zero to datastore table
  3. close session
  4. send a subpacket that contains a startTransaction token with a status code of 0x00
  5. write magic_pattern to datastore table
  6. send a subpacket that contains an end transaction token with a status code of 0x00
  7. read data from datastore table and check it
  8. start locking sp admin1 session
  9. send a subpacket that contains a startTransaction token with a status code of 0x00
  10. write zero to datastore table
  11. read data from datastore table and check it

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf02_if_recv_behavior_tests_case1

SPF-02: IF-RECV Behavior Tests Case1

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Issue an IF-RECV command
  2. check a ComPacket header value of “All Response(s) returned – no further data”

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf02_if_recv_behavior_tests_case2

SPF-02: IF-RECV Behavior Tests Case2

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. start locking sp admin1 session
  2. read data from datastore table
  3. IF-Recv transfer length = 0x100

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf03_trylimit_case_sid

SPF-03: TryLimit SID

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the SID C_PIN object has a TryLimit column value > 0, repeatedly attempt to start a session with a non-matching SID C_PIN until the SID C_PIN object's Tries value = the SID C_PIN object's TryLimit value
  2. else do not perform this test step and the Test Suite SHALL mark the result of this step as NA
  3. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = SID authority UID, will return AUTHORITY_LOCKED_OUT
  4. power cycle

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf03_trylimit_case_admin1

SPF-03: TryLimit Admin1

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the Admin1 C_PIN object has a TryLimit column value > 0, repeatedly attempt to start a session with a non-matching Admin1 C_PIN until the Admin1 C_PIN object's Tries value = the Admin1 C_PIN object's TryLimit value
  2. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID, will return AUTHORITY_LOCKED_OUT
  3. power cycle

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf03_trylimit_case_user1

SPF-03: TryLimit User1

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the User1 C_PIN object has a TryLimit column value > 0, repeatedly attempt to start a session with a non-matching User1 C_PIN until the User1 C_PIN object's Tries value = the User1 C_PIN object's TryLimit value
  2. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = User1 authority UID, will return AUTHORITY_LOCKED_OUT

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf04_tryreset_case_sid

SPF-04: Tries Reset

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the SID C_PIN object has a TryLimit column value > 1, repeatedly attempt to start a session with a non-matching SID C_PIN until the C_PIN object's Tries value = the SID C_PIN object's TryLimit value - 1
  2. Call StartSession method with SPID = Admin SP UID and HostSigningAuthority = SID authority UID
  3. Call Get method on the Tries Column of the SID Authority’s C_PIN Object
  4. Check if current sid_tries is zero
  5. CLOSE_SESSION

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf04_tryreset_case_admin1

SPF-04: Tries Reset

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the Admin1 C_PIN object has a TryLimit column value > 1, repeatedly attempt to start a session with a non-matching Admin1 C_PIN until the C_PIN object's Tries value = the Admin1 C_PIN object's TryLimit value - 1
  2. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  3. Call Get method on the Tries Column of the Admin1 Authority’s C_PIN Object
  4. CLOSE_SESSION
  5. Check if current admin1_tries is zero

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf04_tryreset_case_user1

SPF-04: Tries Reset

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the User1 C_PIN object has a TryLimit column value > 1, repeatedly attempt to start a session with a non-matching User1 C_PIN until the C_PIN object's Tries value = the User1 C_PIN object's TryLimit value - 1
  2. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = User1 authority UID.
  3. CLOSE_SESSION
  4. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  5. Call Get method on the Tries Column of the User1 Authority’s C_PIN Object
  6. CLOSE_SESSION
  7. Check if current user1_tries is zero

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf05_tries_reset_on_power_cycle_sid

SPF-05: Tries Reset on Power Cycle

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the SID C_PIN object has a TryLimit column value > 1, repeatedly attempt to start a session with a non-matching SID C_PIN until the C_PIN object's Tries value = the SID C_PIN object's TryLimit value
  2. power cycle
  3. Call StartSession method with SPID = Admin SP UID and HostSigningAuthority = SID authority UID
  4. Call Get method on SID’s C_PIN Object to retrieve the TryLimit Column’s value
  5. CLOSE_SESSION

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf05_tries_reset_on_power_cycle_admin1

SPF-05: Tries Reset on Power Cycle

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the Admin1 C_PIN object has a TryLimit column value > 1, repeatedly attempt to start a session with a non-matching Admin1 C_PIN until the C_PIN object's Tries value = the Admin1 C_PIN object's TryLimit value
  2. power cycle
  3. open locking sp admin session
  4. Call Get method on Admin1’s C_PIN Object to retrieve the TryLimit Column’s value

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf05_tries_reset_on_power_cycle_user1

SPF-05: Tries Reset on Power Cycle

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the User1 C_PIN object has a TryLimit column value > 1, repeatedly attempt to start a session with a non-matching User1 C_PIN until the C_PIN object's Tries value = the User1 C_PIN object's TryLimit value
  2. power cycle
  3. Call Get method on User1’s C_PIN Object to retrieve the TryLimit Column’s value

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf06_next_case1

SPF-06: Next

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the device does not support Opal, skip the test
  2. Call StartSession method with SPID = Locking SP UID
  3. Call Get method on the LockingInfo Table’s MaxRanges Column
  4. Call Next method on the Locking Table with an empty parameter list
  5. Call Next method on the Locking Table with the Where parameter set to the first UID from the list of UIDs, returned in step #3, and the Count parameter set to 1
  6. CLOSE_SESSION
  7. check a list of UIDs where the number of values = the MaxRanges value + 1
  8. check the first four bytes of each UID returned are 0x00000802

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf06_next_case2

SPF-06: Next

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the device does not support Pyrite, skip the test
  2. Call StartSession method with SPID = Locking SP UID
  3. Call Next method on the MethodID Table with an empty parameter list
  4. Call Next method on the MethodID Table with the Where parameter set to the first UID from the list of UIDs returned in step #3, and the Count parameter set to 1
  5. CLOSE_SESSION
  6. check returns a list of UIDs where the number of values >= 7
  7. check the first four bytes of each UID returned are 0x00000006

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf07_host_session_number

SPF-07: Host Session Number (HSN)

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call StartSession method with HostSessionID = ARBITRARILY_VARYING HSN, SPID = Admin SP UID, and HostSigningAuthority = SID authority UID
  2. Call Get method on MSID C_PIN credential’s PIN Column
  3. CLOSE_SESSION

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf08_revert_sp_case1

SPF-08: RevertSP

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Write the MAGIC_PATTERN over 64 logical blocks beginning at LBA 0
  2. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  3. Call RevertSP method with the KeepGlobalRangeKey/KeepData omitted
  4. Call StartSession method with SPID = Locking SP UID
  5. For all SSCs supported by this specification other than Pyrite 1.00, read 64 logical blocks beginning at LBA 0,

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf08_revert_sp_case2

SPF-08: RevertSP

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Write the MAGIC_PATTERN over 64 logical blocks beginning at LBA 0
  2. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  3. Call RevertSP method with the KeepGlobalRangeKey/KeepData present and set to FALSE
  4. Call StartSession method with SPID = Locking SP UID, will return error
  5. Read 64 logical blocks beginning at LBA 0

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf08_revert_sp_case3

SPF-08: RevertSP

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Write the MAGIC_PATTERN over 64 logical blocks beginning at LBA 0
  2. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  3. Call RevertSP method with the KeepGlobalRangeKey/KeepData present and set to TRUE
  4. Call StartSession method with SPID = Locking SP UID, will return error
  5. Read 64 logical blocks beginning at LBA 0

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf09_range_alignment_verification

SPF-09: Range Alignment Verification

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  2. This test case only applies to Opal 2.00, Opal 2.01, and Ruby 1.00 if the AlignmentRequired column value in the LockingInfo table = TRUE
  3. Call Get method on the LockingInfo Table to retrieve the LogicalBlockSize, AlignmentGranularity and LowestAlignedLBA column values
  4. setup lockingrange1
  5. close session

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf10_byte_table_access_granularity

SPF-10: Byte Table Access Granularity

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  2. Call Get method on the DataStore object in the Table table to retrieve the MandatoryWriteGranularity column value
  3. if MandatoryWriteGranularity is 1, skip the test
  4. Call Set method to write the DataStore table with a number of 0s = a non-zero multiple of the MandatoryWriteGranularity column value
  5. CLOSE_SESSION
  6. read data from datastore table and check it

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf11_stack_reset

SPF-11: Stack Reset

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. open admin1 session
  2. Send a subpacket that contains a StartTransaction token with a status code of 0x00
  3. Call Set method on the Enabled Column of User1 Authority with a value of TRUE
  4. Issue STACK_RESET command
  5. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  6. Call Get method to retrieve the value of the Enabled Column of User1 Authority
  7. CLOSE_SESSION
  8. check returns a value of FALSE

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf12_tper_reset_case1

SPF-12: TPer Reset

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. ProgrammaticResetEnable set to TRUE
  2. open admin1 session
  3. Locking_GlobalRange has ReadLocked and WriteLocked columns set to FALSE
  4. Locking_GlobalRange has ReadLockEnabled and WriteLockEnabled columns are set to TRUE
  5. LockOnReset column value includes Programmatic
  6. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  7. Issue the TPER_RESET command
  8. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  9. Call Get method on the Locking_GlobalRange columns
  10. close session
  11. issue write command
  12. issue read command

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf13_authenticate

SPF-13: Authenticate

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. take ownership
  2. Call StartSession method with SPID = Admin SP UID
  3. Call Authenticate method with Authority = SID Authority UID and Proof = C_PIN_SID PIN column value
  4. Call Get method on UID Column of SID C_PIN
  5. CLOSE_SESSION
  6. returns the C_PIN_SID PIN object’s UID column value

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf15_random

SPF-15: Random

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call StartSession method with SPID = Locking SP UID
  2. Call Random method with a Count = 32
  3. Call Random method with a Count = 32
  4. CLOSE_SESSION

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf16_common_name

SPF-16: CommonName

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the device does not support Opal 2.00, Opal 2.01, or Ruby 1.00, skip the test
  2. open admin1 session
  3. Call the Set method on the CommonName column of the Admin1 authority object using the MAGIC_PATTERN
  4. Call the Set method on the CommonName column of Locking_GlobalRange using the MAGIC_PATTERN
  5. Call Get method on the CommonName column of the Admin1 authority object
  6. Call Get method on the CommonName column of the Locking_GlobalRange
  7. CLOSE_SESSION

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf17_additional_dataStore_tables_case1

SPF-17: Additional DataStore Tables

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the device does not support the Additional DataStore Tables feature, skip the test
  2. get Maximum Number of DataStore Tables value
  3. get the DataStore Table Size Alignment value
  4. take ownership
  5. activate locking sp
  6. Call Activate method on the Locking SP with a DataStoreTableSize
  7. CLOSE_SESSION
  8. open admin1 session
  9. Call Get method to retrieve the DataStore table’s Rows column value from the Table table
  10. CLOSE_SESSION

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf17_additional_dataStore_tables_case2

SPF-17: Additional DataStore Tables

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the device does not support the Additional DataStore Tables feature, skip the test
  2. get Maximum Number of DataStore Tables value
  3. get the DataStore Table Size Alignment value
  4. take ownership
  5. activate locking sp
  6. Call Activate method with a DataStoreTableSize
  7. CLOSE_SESSION
  8. open admin1 session
  9. Call Get method to retrieve each DataStore table’s Rows column value from the Table table
  10. CLOSE_SESSION

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf18_range_crossing_behavior

SPF-18: Range Crossing Behavior

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the device does not support Opal, skip the test
  2. read mdts value
  3. read AlignmentGranularity
  4. open admin1 session
  5. setup locking range
  6. unlock Locking_GlobalRange and Locking_Range
  7. close session
  8. Issue a Write and a Read command, which will return an error

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf19_block_sid_authentication

SPF-19: Block SID Authentication

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the device does not support the Block SID Authentication feature, skip the test
  2. get MSID
  3. Issue IF-SEND with Hardware Reset bit in Clear Events field = 1
  4. Invoke StartSession method with SPID = Admin SP UID and HostSigningAuthority = SID authority UID
  5. Invoke StartSession method with SPID = Admin SP UID and HostSigningAuthority = SID authority UID
  6. Trigger a TCG Storage Hardware Reset on the SD
  7. Invoke StartSession method with SPID = Admin SP UID and HostSigningAuthority = SID authority UID
  8. Issue IF-SEND with Hardware Reset bit in Clear Events field = 0
  9. Invoke StartSession method with SPID = Admin SP UID and HostSigningAuthority = SID authority UID
  10. Power cycle the SD
  11. Invoke StartSession method with SPID = Admin SP UID and HostSigningAuthority = SID authority UID

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_spf20_data_removal_mechanism

SPF-20: Data Removal Mechanism

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the device does not support the Data Removal Mechanism feature, skip the test
  2. Get Supported Data Removal Mechanisms Feature Descriptor in Level 0 Discovery
  3. Invoke the StartSession method with SPID = Admin SP UID and HostSigningAuthority = SID authority UID
  4. Invoke the Get method on the ActiveDataRemovalMechanism column of the DataRemovalMechanism table

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_read_datastore

Read 9 rows from the datastore table

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. start locking sp admin1 session
  2. read 9 rows from the datastore table
  3. close session

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_getacl

Verify the basic functionality of getacl Method

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. start session
  2. invoke getacl method
  3. close session

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_set_lock_on_reset

Set lba range lock on reset

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. start session
  2. set LockOnReset=powercycle
  3. close session
  4. disk power cycle
  5. start session
  6. get lockonreset value
  7. close session

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_write_longdata_to_datastore

Write long data to datastore table

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. start locking sp admin1 session
  2. write 1k bytes to datastore table
  3. close session
  4. write 4k bytes to datastore table
  5. close session

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_mbr_table

Verify the basic functionality of mbr table

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. start locking sp admin1 session
  2. write zero to mbr table
  3. close session
  4. write magic_pattern to datastore table
  5. powercycle
  6. format data
  7. read data from mbr table and check it

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_mbr_and_revert

write data to mbr table and revert

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. start locking sp admin1 session
  2. write magic_pattern to mbr table
  3. read and check the data in mbr
  4. revert tper
  5. take ownership
  6. activate locking sp
  7. read data from mbr table and check it

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_done_on_reset

Set mbr table doneonreset

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. start session
  2. set DoneOnReset=powercycle
  3. powercycle
  4. check DoneOnReset value

function: scripts/conformance/06_tcg/02_specific_functionality_test.py::test_write_maxdata_to_datastore

Write max data length to datastore

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. Invoke Properties method to identify the MaxComPacketSize and MaxResponseComPacketSize
  2. start locking sp admin1 session
  3. limit the max token size
  4. write max data size to datastore
  5. read max data size from datastore
  6. close session
  7. check data

file: scripts/conformance/06_tcg/03_error_test_cases_test

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_01_native_protocol_rw_locked_error_responses

ETC-01: Native Protocol Read/Write Locked Error Responses

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. start locking sp admin1 session
  2. set Locking_GlobalRange ReadLockEnabled, WriteLockEnabled, ReadLocked and WriteLocked column values =TRUE
  3. issue write command
  4. issue read command

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_02_general_if_send_if_recv_synchronous_protocol

ETC-02: General – IF-SEND/IF-RECV Synchronous Protocol

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. issue an IF-SEND command
  2. Call Properties method using the ComID from the previous step

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_03_invalid_if_send_transfer_length

ETC-03: Invalid IF-SEND Transfer Length

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call Properties method to determine SD’s MaxComPacketSize
  2. Call Properties method with the correct ComPacket Header Length field to match the required ComPacket payload size, but with the IF-SEND Transfer Length set to a value > MaxComPacketSize

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_04_invalid_sessionid_regular_session

ETC-04: Invalid SessionID – Regular Session

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. take ownership
  2. Call StartSession method with SPID = Admin SP UID
  3. Call Get method on MSID’s credential object in C_PIN table with a Packet SessionID value <> the current SessionID value

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_05_unexpected_token_outside_of_method_regular_session

ETC-05: Unexpected Token Outside of Method – Regular Session

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  2. Call Set method on the Enabled Column of User1 Authority with a value of FALSE and EndList Token before the Call Token
  3. Call Set method on the Enabled Column of User1 Authority with a value of FALSE and EndList Token before
  4. Invoke Get method on the Enabled Column of User1 Authority
  5. CLOSE_SESSION

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_06_unexpected_token_in_method_regular_session

ETC-06: Unexpected Token in Method Header – Regular Session

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  2. Call Set method on the Enabled Column of User1 Authority with a value of FALSE and an EndList Token immediately after the Call Token
  3. CLOSE_SESSION

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_07_unexpected_token_outside_of_method_control_session

ETC-07: Unexpected Token Outside of Method – Control Session

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call StartSession method with SPID = Locking SP UID and an EndList Token before the Call Token
  2. Call StartSession method with SPID = Locking SP UID

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_08_unexpected_token_in_method_control_session

ETC-08: Unexpected Token in the Method Parameter List – Control Session

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call Properties method with StartList immediately after the Parameter StartList

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_10_invalid_invoking_id_get_case1

ETC-10: Invalid Invoking ID – Get_case1

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  2. Call Get method on Invoking UID of 00 00 08 01 AA BB CC DD

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_10_invalid_invoking_id_get_case2

ETC-10: Invalid Invoking ID – Get_case2

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Anybody authority UID
  2. Call Get method on Invoking UID of 00 00 10 01 00 00 00 00 (DataStore Table)

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_10_invalid_invoking_id_get_case3

ETC-10: Invalid Invoking ID – Get_case3

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call the StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  2. Call the Get method on the InvokingID 00 00 00 0B 00 01 00 01 (C_PIN_Admin1) to get the PIN, CharSet, TryLimit, and Tries columns
  3. CLOSE_SESSION

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_10_invalid_invoking_id_get_case4

ETC-10: Invalid Invoking ID – Get_case4

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call the StartSession method with SPID = Locking SP UID and HostSigningAuthority = Anybody authority UID
  2. Call the Get method on the InvokingID 00 00 00 00 00 00 00 01 (ThisSP)

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_11_invalid_invoking_id_non_get

ETC-11: Invalid Invoking ID – Non-Get

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call StartSession method with SPID = Locking SP UID
  2. Call Set method on Invoking UID of 00 00 08 01 00 00 00 05

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_12_authorization

ETC-12: Authorization

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call StartSession method with SPID = Locking SP UID
  2. Call Set method on the Enabled column of the User1 Authority

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_13_malformed_comPacket_header_regular_session

ETC-13: Malformed ComPacket Header – Regular Session

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Invoke Properties method to identify the MaxComPacketSize
  2. Invoke StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  3. Invoke Set method on the Datastore Table such that the Length field in the ComPacket header exceeds the TPer’s MaxComPacketSize – 20, and the IF-SEND Transfer Length set to a value <= MaxComPacketSize
  4. Issue IF-RECV

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_16_overlapping_locking_ranges

ETC-16: Overlapping Locking Ranges

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  2. Call Set method on Locking_Range1
  3. Call Set method on Locking_Range2

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_17_invalid_type

ETC-17: Invalid Type

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  2. Call Set method on the Enabled column of the User1 Authority to value of 0xAAAA

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_18_revertsp_globalrange_locked

ETC-18: RevertSP – GlobalRange Locked

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  2. Call Set method on GlobalRange
  3. Call RevertSP method on the Locking SP with KeepGlobalRangeKey/KeepData = TRUE

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_19_ata_security_interaction

ETC-19: ATA Security Interaction

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. check if ATA password is supported
  2. set ATA password
  3. take ownership
  4. activate locking sp

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_20_startSession_on_inactive_locking_sp

ETC-20: StartSession on Inactive Locking SP

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call StartSession method with SPID = Locking SP UID

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_21_startsession_with_incorrect_hostChallenge

ETC-21: StartSession with Incorrect HostChallenge

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call StartSession method with SPID = Locking SP UID, HostSigningAuthority = Admin1 authority UID, and HostChallenge = a value that is different from the C_PIN_Admin1 PIN column value

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_22_multiple_sessions_case1

ETC-22: Multiple Sessions

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. Call Properties method to identify the MaxSessions
  2. Call StartSession method with SPID = Locking SP UID and Write = TRUE
  3. Call StartSession method with SPID = Locking SP UID and Write = TRUE
  4. close session
  5. power cycle

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_etc_23_data_removal_mechanism_set_unsupported_value

ETC-23: Data Removal Mechanism – Set Unsupported Value

Reference

  1. TCG Storage Opal Family Test Cases Specification, Revision 1.00

Steps

  1. if the device does not support the Data Removal Mechanism feature, skip the test
  2. Get Supported Data Removal Mechanisms Feature Descriptor in Level 0 Discovery
  3. Invoke the StartSession method with SPID = Admin SP UID and HostSigningAuthority = SID authority UID

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_data_over_maxcompacketsize

Read datastore table over MaxComPacketSize

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. take ownership
  2. activate locking sp
  3. read data from the datastore table
  4. read data over MaxComPacketSize, and error is expected

function: scripts/conformance/06_tcg/03_error_test_cases_test.py::test_start_session_with_wrong_sp

start session with wrong sp

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. start session with wrong sp, and error is expected

file: scripts/conformance/06_tcg/04_appendix_test

function: scripts/conformance/06_tcg/04_appendix_test.py::test_active_user_powercycle

dirty power off during active sessions

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = Admin1 authority UID
  2. create user1
  3. close session
  4. write data
  5. flush data
  6. power cycle
  7. read data without user1
  8. activate locking sp
  9. read data with user1

function: scripts/conformance/06_tcg/04_appendix_test.py::test_mbr_read_write

write and read mbr table

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. Invoke Properties method
  2. start locking sp admin1 session
  3. invoke Get method on the Rows column of the MBR Table Descriptor Object
  4. call Get method on the MBR object in the Table table to retrieve the MandatoryWriteGranularity column value
  5. invoke Set method to write the MBR table with the MAGIC_PATTERN
  6. close session

function: scripts/conformance/06_tcg/04_appendix_test.py::test_datastore_read_write

write and read datastore table

Reference

  1. TCG Storage Security Subsystem Class: Opal Specification Version 2.01

Steps

  1. start locking sp admin1 session
  2. invoke Set method to write the datastore table with the MAGIC_PATTERN
  3. close session

file: scripts/conformance/06_tcg/05_core_spec_test

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_trylimit

Check if SID trylimit is equal to 10

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 12.5 TCG Implementation Requirements

Steps

  1. Call StartSession method with SPID = Admin SP UID and HostSigningAuthority = SID authority UID
  2. Call Get method on SID’s C_PIN Object to retrieve the TryLimit Column’s value
  3. Call Get method on SID’s C_PIN Object to retrieve the try value
  4. Call Get method on SID’s C_PIN Object to retrieve the persistence value
  5. close session

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_datastore_size

Check if datastore size is equal to 10MB

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 12.5 TCG Implementation Requirements

Steps

  1. Open Admin SP session
  2. Get datastore size
  3. close session

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_single_user_mode

Check if support TCG Single User Mode feature

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 12.5.1 TCG Version and Features

Steps

  1. level0 discovery get tcg features, check if support TCG Single User Mode feature

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_configurable_namespace_locking

Check if support TCG Configurable Namespace Locking (CNL) feature

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 12.5.1 TCG Version and Features

Steps

  1. level0 discovery get tcg features, check if support TCG Configurable Namespace Locking (CNL) feature

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_properties_info

Collect different properties

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 12.5.2 TCG Communication Layer

Steps

  1. Invoke Properties method
  2. Get MaxComIDTime value
  3. Get DefSessionTimeout value
  4. Get MaxSessionTimeout value
  5. Get MinSessionTimeout value
  6. Get MaxTransactionLimit value
  7. Get MaxSessions value
  8. Get MaxReadSessions value

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_defsessiontimeout

Check session timeout against DefSessionTimeout

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 12.5.2 TCG Communication Layer

Steps

  1. Invoke Properties method to get tper DefSessionTimeout
  2. Start session
  3. idle DefSessionTimeout+10s time
  4. the session shall time out; start session again
  5. close session

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_authentication_time

Check authentication of a C_PIN time

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 12.5.5 TCG Authentication

Steps

  1. Start a session
  2. check that authentication of a C_PIN is delayed by at least 100ms

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_get_comid

Test GET_COMID tcg command

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01
  2. 3.3.4.3.1 GET_COMID

Steps

  1. issue GET_COMID
  2. get the number of comid

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_verify_comid_valid

Test VERIFY_COMID_VALID tcg command

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01
  2. 3.3.4.7.4 VERIFY_COMID_VALID

Steps

  1. issue GET_COMID
  2. issue VERIFY_COMID_VALID, get comid status

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_comid_and_session

Verify different comid states

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01
  2. 3.3.4.7.4 VERIFY_COMID_VALID

Steps

  1. power cycle the device
  2. get current comid status
  3. open admin sp session
  4. current comid status shall be Associated
  5. close session
  6. get current comid status
  7. power cycle the device

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_invalid_comid

Open session with invalid comid

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01
  2. 3.3.3 ComID Management

Steps

  1. open session with invalid comid

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_syncsession

Test SyncSession method

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01
  2. 3.3.7.1.3 Session Manager Protocol Layer

Steps

  1. start anybody session
  2. issue SyncSession method
  3. close session

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_data_store_transaction_success

Transaction with data store writing

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01
  2. 3.3.7.3 Transactions

Steps

  1. start locking sp admin1 session
  2. write zero to datastore table
  3. close session
  4. send a subpacket that contains a startTransaction token with a status code of 0x00
  5. write magic_pattern to datastore table
  6. send a subpacket that contains an end transaction token with a status code of 0x00
  7. read data from datastore table and check it

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_data_store_abort_transaction

Interrupt the transaction with data store writing

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01
  2. 3.3.7.3 Transactions

Steps

  1. start locking sp admin1 session
  2. send a subpacket that contains a startTransaction token with a status code of 0x00
  3. write magic_pattern to datastore table
  4. send a subpacket that contains an endTransaction token with a status code of 0x1
  5. close session
  6. read data from datastore table and check that it is not magic_pattern

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_transaction_trylimit_case_sid

Test transaction and trylimit

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01
  2. 3.3.7.3 Transactions

Steps

  1. If the User1 C_PIN Object has a TryLimit Column value > 0, repeatedly start a session with a non-matching User1 C_PIN until the User1 C_PIN object’s Tries value equals its TryLimit value
  2. Call StartSession method with SPID = Locking SP UID and HostSigningAuthority = User1 authority UID, will return AUTHORITY_LOCKED_OUT
  3. power cycle
  4. check current user1 tries == 0
  5. send a subpacket that contains a startTransaction token with a status code of 0x0
  6. authentication with wrong password
  7. send a subpacket that contains an endTransaction token with a status code of 0x0
  8. check current user1 tries == 1
  9. send a subpacket that contains a startTransaction token with a status code of 0x0
  10. authentication with wrong password again
  11. send a subpacket that contains an endTransaction token with a status code of 0x1
  12. check current user1 tries == 2

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_level0_discovery

send level 0 discovery command at different stages

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01
  2. 3.3.6 Level 0 Discovery

Steps

  1. issue level 0 discovery before starting session
  2. issue level 0 discovery in session
  3. issue level 0 discovery after get msid security send
  4. issue level 0 discovery after close session
  5. issue level 0 discovery after start session security send
  6. issue level 0 discovery after close session security send

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_send_twice_start_session

Send start session security send twice

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01
  2. 3.3.10.2 Interface Commands

Steps

  1. Send start session security send twice
  2. Check the second security send failed

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_send_twice_end_session

Send close session security send twice

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01
  2. 3.3.10.2 Interface Commands

Steps

  1. Call StartSession method with SPID = Admin SP UID and HostSigningAuthority = SID authority UID
  2. Send close session security send twice
  3. Check the second security send failed

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_outstandingdata

Test security receive outstandingdata

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01
  2. 3.3.10.2 Interface Commands

Steps

  1. start locking sp admin1 session
  2. read 512 rows from the datastore table
  3. security receive with length=128
  4. security receive with length=outstanding_data
  5. close session

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_max_compacket_size

Test MaxComPacketSize and MaxResponseComPacketSize limits

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01
  2. 3.3.10.2 Interface Commands

Steps

  1. Invoke Properties method to identify the MaxComPacketSize and MaxResponseComPacketSize
  2. start locking sp admin1 session

function: scripts/conformance/06_tcg/05_core_spec_test.py::test_multi_users_and_ranges

Verify that 8 users are assigned different ranges

Reference

  1. TCG Storage Architecture Core Specification, Revision 2.01
  2. 5.7 Locking Template

Steps

  1. if the device does not support Opal 2.00 or Opal 2.01, skip the test
  2. open locking sp admin session
  3. enable 8 users
  4. assign different ranges to the 8 users, with each range locked
  5. close session
  6. check ranges are locked
  7. open 8 users sessions and unlock range
  8. issue write and read command, compare data

Suite: scripts/benchmark

folder: scripts/benchmark

file: scripts/benchmark/idle_stress

Tests disks in 24×7 standby scenarios with sporadic read/write IO, reflecting some PC office environments. It’s designed to check the disk’s ability to handle massive transitions in and out of low-power states.

file: scripts/benchmark/interval_read_disturb

Read specific Logical Block Addresses (LBAs) repeatedly over a long duration and check the impact of read disturb on the device. The test script uses multiple workers to perform different read operations on the device while idle times between the reads simulate a real-world usage pattern. The script uses a JEDEC enterprise workload and writes to the drive to clear it before the read operations start. The script also checks the device’s SMART health information, temperature, and UECC errors before and after the test.

This test script is designed to validate the device’s stability and error handling under heavy and repeated workloads.
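
Below is a minimal sketch of one such read-disturb worker, written against an open-source pynvme-style ioworker interface; the import, PCIe address, region size, IOPS cap, and duration are illustrative assumptions rather than values taken from the script.

    import nvme as d  # pynvme-style driver interface; an assumption, not necessarily PyNVMe3's import

    nvme0 = d.Controller(d.Pcie("01:00.0"))   # PCIe address is illustrative
    nvme0n1 = d.Namespace(nvme0)

    # Repeatedly read the same small LBA region; the iops cap leaves idle time
    # between reads to mimic a real-world access pattern.
    nvme0n1.ioworker(io_size=8,               # 8 * 512B LBAs = 4KB per IO
                     lba_random=True,
                     region_start=0,
                     region_end=8 * 1024,     # confine reads to a small LBA range
                     read_percentage=100,     # reads only
                     iops=100,                # throttle to insert idle gaps
                     time=3600).start().close()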

file: scripts/benchmark/ioworker_stress

This script conducts a comprehensive stress test on NVMe SSDs to validate their stability, performance, and error handling capabilities under prolonged and varied workloads. The test involves running multiple randomized I/O operations concurrently with critical NVMe commands, such as SMART data retrieval, feature management, and abort operations. The script simulates a real-world usage scenario by continuously starting and stopping I/O workers while ensuring data integrity through the verification of the entire drive at the end of the test. This approach helps assess the SSD’s resilience and readiness for deployment in demanding environments.
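
A minimal sketch of the mixing idea, again in an open-source pynvme style (the import, addresses, and parameters are assumptions, not the script's actual code): background random IO runs inside an ioworker context while admin commands such as SMART log reads are issued in parallel.

    import nvme as d  # pynvme-style driver interface; an assumption, not necessarily PyNVMe3's import

    nvme0 = d.Controller(d.Pcie("01:00.0"))   # PCIe address is illustrative
    nvme0n1 = d.Namespace(nvme0)
    smart = d.Buffer(512)

    # Randomized mixed IO runs in the background while admin commands are issued.
    with nvme0n1.ioworker(io_size=8, lba_random=True,
                          read_percentage=70, time=60):
        for _ in range(100):
            nvme0.getlogpage(0x02, smart, 512)  # SMART / Health Information log
            nvme0.waitdone()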

file: scripts/benchmark/llm_loading

This test case evaluates the performance of an NVMe SSD under high-load conditions, particularly in scenarios requiring rapid loading of large amounts of data into memory, such as data handling for large language models (LLMs). It simulates realistic data filling and image loading operations, repeatedly executed for specified iterations with varying block sizes, to assess the read and write performance and data integrity of the SSD. The test starts with formatting the device, then fills a designated namespace area with a mix of random and sequential data, followed by writing and reading fixed-size data blocks. Parameterized test inputs include the size of the images and the number of test loops, allowing the test to run under multiple configurations to cover a broader range of use cases. Performance logs are recorded after each iteration to analyze in detail how the device performs under sustained, intense operations.

file: scripts/benchmark/longtime_readwrite

This script tests the impact of long-term SSD usage on read and write performance by consuming a large number of program/erase (PE) cycles. The script writes extensively to the SSD to consume a specified percentage of PE cycles and monitors the degradation of read/write performance over time.

The script is divided into stages, with each stage testing a different percentage of PE cycle consumption and space allocation. The stages are as follows:

  • Stage 2.1: Consumes 3% of PE cycles and allocates 30% of drive space.
  • Stage 2.2: Consumes 6% of PE cycles and allocates 60% of drive space.
  • Stage 2.3: Consumes 9% of PE cycles and allocates 90% of drive space.
  • Stage 2.6: Consumes 12% of PE cycles and allocates 100% of drive space.

There are adjusted stages for testing with QLC drives.
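
As a rough illustration of what each stage implies in host writes, here is a minimal sizing sketch; the capacity, rated PE-cycle count, and write amplification factor are assumed values, not parameters of the script.

    # Host bytes needed to consume a given share of PE cycles (illustrative numbers only).
    TB = 1000 ** 4

    capacity_bytes = 1 * TB      # assumed drive capacity
    rated_pe_cycles = 3000       # assumed rated PE cycles of the NAND
    waf = 2.0                    # assumed write amplification factor

    def host_bytes_for_pe_share(pe_share):
        # NAND bytes programmed ~= host bytes * WAF, and one full-capacity program
        # pass consumes one PE cycle, so pe_cycles_used = host_bytes * waf / capacity.
        return pe_share * rated_pe_cycles * capacity_bytes / waf

    for stage, share in [("2.1", 0.03), ("2.2", 0.06), ("2.3", 0.09), ("2.6", 0.12)]:
        print(f"stage {stage}: ~{host_bytes_for_pe_share(share) / TB:.0f} TB of host writes")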

file: scripts/benchmark/performance

Evaluate the performance of a client SSD under various working conditions, such as IOPS, latency, and performance consistency. The impact of temperature and power consumption on the SSD’s performance is also taken into account.

file: scripts/benchmark/por_sudden

This script automates power cycling tests on NVMe SSDs, measuring response times in critical states post-power cycle.
It focuses on simulating both dirty (SPOR) and clean (POR) power cycles to evaluate SSD readiness and durability.
Quarch Power Analysis Module is required.

SPOR: Simulates an unexpected power loss without shutdown notification.

Phases of power-on timing (a timing sketch follows this list):

  • BAR Access Time: Time for a successful Controller Register write post-BAR access.
  • Admin Ready Time: Time until the SSD is ready for admin commands post-reset.
  • First I/O Completion Time: Time for completing the SSD’s first Read command post-reset.
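
A minimal sketch of how the three phases can be measured as cumulative times from the power-on trigger; the four callables are hypothetical hooks, not PyNVMe3 APIs.

    import time

    def measure_poweron_phases(power_on, wait_bar_writable, wait_admin_ready, first_read_done):
        # All four callables are hypothetical hooks supplied by the caller:
        #   power_on()          - turns main power back on
        #   wait_bar_writable() - returns once a Controller Register write succeeds
        #   wait_admin_ready()  - returns once the first admin command completes
        #   first_read_done()   - returns once the first Read command completes
        t0 = time.perf_counter()
        power_on()

        wait_bar_writable()
        bar_access = time.perf_counter() - t0

        wait_admin_ready()
        admin_ready = time.perf_counter() - t0

        first_read_done()
        first_io = time.perf_counter() - t0

        return {"bar_access_s": bar_access, "admin_ready_s": admin_ready, "first_io_s": first_io}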

file: scripts/benchmark/por_typical

This script automates power cycling tests on NVMe SSDs, measuring response times in critical states post-power cycle.
It focuses on simulating both dirty (SPOR) and clean (POR) power cycles to evaluate SSD readiness and durability.
Quarch Power Analysis Module is required.

Typical POR: Simulates a power loss with prior shutdown notification.

Phases of power-on timing:

  • BAR Access Time: Time for a successful Controller Register write post-BAR access.
  • Admin Ready Time: Time until the SSD is ready for admin commands post-reset.
  • First I/O Completion Time: Time for completing the SSD’s first Read command post-reset.

file: scripts/benchmark/read_retention

Fill the entire disk with data and record the CRC of all LBAs to a disk file.
After a period of time with power off (e.g., 2 months), check whether the CRC
of the entire disk data matches the previous one.

  1. create a folder /home/crc with root privilege if it does not exist
  2. make test TESTS=scripts/benchmark/read_retention.py::test_prepare (see the CRC sketch after this list)
  3. make test TESTS=scripts/benchmark/read_retention.py::test_verify
  4. collect test log and diagram in folder results
  5. power off, and keep the DUT at room temperature for 2 months
  6. after 2 months, insert the DUT into the same SUT used in step 2
  7. make test TESTS=scripts/benchmark/read_retention.py::test_verify
  8. collect test log and diagram in folder results
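
The CRC bookkeeping behind test_prepare and test_verify can be sketched as follows; read_lba(lba) is a hypothetical helper that returns one logical block as bytes, and the file name under /home/crc is illustrative.

    import os
    import pickle
    import zlib

    CRC_FILE = os.path.join("/home/crc", "lba_crc.pkl")   # folder from step 1; file name is illustrative

    def record_crcs(read_lba, lba_count):
        # The "prepare" half: one CRC32 per LBA, saved to the CRC file.
        crcs = [zlib.crc32(read_lba(lba)) for lba in range(lba_count)]
        with open(CRC_FILE, "wb") as f:
            pickle.dump(crcs, f)

    def verify_crcs(read_lba, lba_count):
        # The "verify" half: re-read every LBA and compare against the stored CRCs.
        with open(CRC_FILE, "rb") as f:
            expected = pickle.load(f)
        mismatches = [lba for lba in range(lba_count)
                      if zlib.crc32(read_lba(lba)) != expected[lba]]
        assert not mismatches, f"CRC mismatch on {len(mismatches)} LBAs, e.g. {mismatches[:10]}"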

file: scripts/benchmark/replay_trace

The script contains a function collect_cmd_sequence that collects operations from a simple CSV-formatted trace file describing a specific IO sequence. Each collected operation includes the (SLBA, NLB, opcode, timestamp) information. The function rescales the operations to different drive capacities and collects them into a list.
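
A minimal sketch of such a collection routine, assuming a CSV header of slba,nlb,opcode,timestamp and a simple linear rescale of SLBA to the DUT capacity; the real script's column names and scaling rules may differ.

    import csv

    def collect_cmd_sequence(trace_path, trace_max_lba, dut_max_lba):
        # Read (slba, nlb, opcode, timestamp) rows and rescale SLBA to the DUT capacity.
        ops = []
        with open(trace_path, newline="") as f:
            for row in csv.DictReader(f):
                slba = int(row["slba"]) * dut_max_lba // trace_max_lba
                ops.append((slba, int(row["nlb"]), row["opcode"], float(row["timestamp"])))
        return ops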

The test script test_replay_trace uses the collect_cmd_sequence function to collect commands from the trace file and replay them on the NVMe device. The script first collects a sequence of write and trim operations and executes them. Then it performs a clean power cycle by shutting down the subsystem and powering it off. After a delay, the script powers the subsystem back on and re-enables the HMB if it was enabled before. Finally, the script collects a sequence of read-only operations from the trace file and replays them on the NVMe device.

file: scripts/benchmark/reset_double

Perform a reset during the NVMe initialization process.

file: scripts/benchmark/saw_diagram

This test script aims to stress test the power state transitions of an NVMe device by
issuing read/write I/Os during the transition to low-power states such as PS3/PS4. The
test checks the robustness of power switching and the latency of exiting low-power states.

The test script involves the following steps:

  1. Checks if APST is enabled.
  2. Sets up test parameters.
  3. Enables ASPM L1.2.
  4. Fixes the device on PS0.
  5. Formats the drive.
  6. Fills the drive with data.
  7. Interrupts power state transitions by sending I/Os at increasing idle times (see the scheduling sketch after this list).
  8. Measures the latency of I/Os during power state transitions.
  9. Collects and logs the latency data for analysis.
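
The scheduling idea in step 7 can be sketched as follows; read_once() is a hypothetical single-IO helper, and the idle-time range and step are illustrative.

    import time

    def saw_latency_sweep(read_once, start_ms=1, stop_ms=2000, step_ms=50):
        # Sleep for an increasing idle time, then time one read issued right after the idle.
        results = []
        for idle_ms in range(start_ms, stop_ms + 1, step_ms):
            time.sleep(idle_ms / 1000.0)            # let the drive drift toward PS3/PS4
            t0 = time.perf_counter()
            read_once()                              # hypothetical: issue one small read IO
            results.append((idle_ms, (time.perf_counter() - t0) * 1000.0))
        return results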

Notes for Dell DR Test:

  • Connect the USB cable to the PC with QPS installed
  • Set sampling time to 100 microseconds in QPS

file: scripts/benchmark/wear_leveling

This script conducts a wear leveling test on NVMe SSDs to evaluate their endurance and efficiency in managing data distribution across the memory cells. The test involves sequential and random write operations to different regions of the drive, simulating hot and cold data scenarios, and triggers wear leveling and garbage collection processes. The test measures IOPS (Input/Output Operations Per Second) throughout the operations and generates performance diagrams to assess the effectiveness of wear leveling. The script also includes power cycling and full-drive verification steps to ensure data integrity post-testing.

file: scripts/benchmark/write_latency

This script tests the long-tail latency of an NVMe drive by writing data sequentially with a 128K block size. It includes functions to prepare the test environment, such as formatting the drive, prefilling data, and enabling/disabling the Host Memory Buffer (HMB).

The main test function, write_128k_latency_diagram, writes a specified amount of data sequentially with a 128K block size and QD=1 and generates various diagrams to visualize the results, such as IOs per second, temperature, latency per IO, and latency distribution. The function is executed multiple times with a delay between each run.

The script checks if the total latency per IO of more than 8ms is not more than 1% of the total number of IOs and that the 99th percentile latency is less than 8ms (criteria_latency). If these conditions are not met, the test fails.
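
A minimal sketch of that pass/fail check over a list of per-IO latencies in milliseconds, using criteria_latency = 8 ms as in the text.

    def check_latency_criteria(latencies_ms, criteria_latency=8.0, max_slow_ratio=0.01):
        # Pass if at most 1% of IOs exceed 8 ms and the 99th percentile is below 8 ms.
        ordered = sorted(latencies_ms)
        slow_ratio = sum(1 for x in ordered if x > criteria_latency) / len(ordered)
        p99 = ordered[int(0.99 * (len(ordered) - 1))]   # simple nearest-rank estimate
        return slow_ratio <= max_slow_ratio and p99 < criteria_latency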

The test environment can be customized by changing the global variables in the script.

Suite: scripts/management

folder: scripts/management

file: scripts/management/01_mi_inband_test

function: scripts/management/01_mi_inband_test.py::test_mi_vpd_write_and_read

write, read and verify VPD contents

Reference

  1. NVM Express Management Interface Revision 1.1b, October 5, 2020.
  2. 5.11 VPD Read
  3. 5.12 VPD Write

Steps

  1. write VPD
  2. read VPD
  3. verify data

function: scripts/management/01_mi_inband_test.py::test_mi_reset

MI reset command

Reference

  1. NVM Express Management Interface Revision 1.1b, October 5, 2020.
  2. 5.8 Reset

Steps

  1. mi_reset
  2. skip if MI is not supported
  3. create subsystem
  4. send MI reset command

function: scripts/management/01_mi_inband_test.py::test_mi_invalid_operation

MI command with invalid opcode

Reference

  1. NVM Express Management Interface Revision 1.1b, October 5, 2020.

Steps

  1. send MI with an invalid command opcode

function: scripts/management/01_mi_inband_test.py::test_mi_configuration_get_health_status_change

send MI command to get configuration of Health Status Change

Reference

  1. NVM Express Management Interface Revision 1.1b, October 5, 2020.
  2. 5.1.2 Health Status Change (Configuration Identifier 02h)

Steps

  1. send MI command to get configuration of Health Status Change

function: scripts/management/01_mi_inband_test.py::test_mi_configuration_set_health_status_change

send MI command to set configuration of Health Status Change

Reference

  1. NVM Express Management Interface Revision 1.1b, October 5, 2020.
  2. 5.2.2 Health Status Change (Configuration Identifier 02h)

Steps

  1. send MI command to set configuration of Health Status Change, it shall complete successfully

function: scripts/management/01_mi_inband_test.py::test_mi_read_nvme_mi_data_structure_nvm_subsystem_information

send MI command

Reference

  1. NVM Express Management Interface Revision 1.1b, October 5, 2020.

Steps

function: scripts/management/01_mi_inband_test.py::test_mi_read_nvme_mi_data_structure_nvm_subsystem_information_wrong_command

send MI command

Reference

  1. NVM Express Management Interface Revision 1.1b, October 5, 2020.

Steps

function: scripts/management/01_mi_inband_test.py::test_mi_read_nvme_mi_data_structure_port_information

send MI command

Reference

  1. NVM Express Management Interface Revision 1.1b, October 5, 2020.

Steps

function: scripts/management/01_mi_inband_test.py::test_mi_read_nvme_mi_data_structure_port_information_wrong_port

send MI command

Reference

  1. NVM Express Management Interface Revision 1.1b, October 5, 2020.

Steps

file: scripts/management/02_basic_mgmt_cmd_test

function: scripts/management/02_basic_mgmt_cmd_test.py::test_mi_spec_appendix_a_read_drive_status

SMBus block read of the drive’s status (status flags, SMART warnings, temperature)

Reference

  1. Management Interface Specification, Revision 1.2c. Appendix A.

Steps

function: scripts/management/02_basic_mgmt_cmd_test.py::test_mi_spec_appendix_a_read_drive_static_data

I2C block read of the drive’s static data (VID and serial number)

Reference

  1. Management Interface Specification, Revision 1.2c. Appendix A.

Steps

function: scripts/management/02_basic_mgmt_cmd_test.py::test_mi_spec_appendix_a_reset_arbitration_bit

I2C send byte to reset Arbitration bit

Reference

  1. Management Interface Specification, Revision 1.2c. Appendix A.

Steps

function: scripts/management/02_basic_mgmt_cmd_test.py::test_mi_spec_appendix_a_read_drive_status_across_i2c_block_boundaries

I2C read of status and vendor content; I2C allows reading across I2C block boundaries

Reference

  1. Management Interface Specification, Revision 1.2c. Appendix A.

Steps

function: scripts/management/02_basic_mgmt_cmd_test.py::test_mi_aux_power_only_read_static_data

I2C block read of the drive’s static data with aux power only

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. main power off
  2. read drive’s static data
  3. main power on and verify static data
  4. reset controller and verify static data

file: scripts/management/03_mi_cmd_set_test

function: scripts/management/03_mi_cmd_set_test.py::test_mi_read_mi_data_structure

read mi data structure

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.7 Read NVMe-MI Data Structure

Steps

  1. read NVM Subsystem Information
  2. read SMBus Port Information

function: scripts/management/03_mi_cmd_set_test.py::test_mi_nvm_subsystem_health_status_poll

send NVM Subsystem Health Status Poll command

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.6 NVM Subsystem Health Status Poll

Steps

  1. send NVM Subsystem Health Status Poll with Clear Status=0
  2. send nvme get log page command and compare temperature
  3. send NVM Subsystem Health Status Poll with Clear Status=1

function: scripts/management/03_mi_cmd_set_test.py::test_mi_nvm_subsystem_health_status_poll_clear

NVM Subsystem Health Status Poll command clear status

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.6 NVM Subsystem Health Status Poll

Steps

  1. issue nvme subsystem reset
  2. send NVM Subsystem Health Status Poll with Clear Status=0
  3. send NVM Subsystem Health Status Poll with Clear Status=1
  4. check NVM Subsystem Reset Occurred bit cleared

function: scripts/management/03_mi_cmd_set_test.py::test_mi_nvm_subsystem_health_status_poll_temperature

test if temperature changes are available through mi

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.6 NVM Subsystem Health Status Poll

Steps

  1. get composite temperature through mi
  2. run ioworker to heat the device
  3. get composite temperature through mi again
  4. Check the temperature changes

function: scripts/management/03_mi_cmd_set_test.py::test_mi_controller_health_status_poll

send Controller Health Status Poll command

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.3 Controller Health Status Poll

Steps

  1. send Controller Health Status Poll with Report All=0
  2. send Controller Health Status Poll with Report All=1

function: scripts/management/03_mi_cmd_set_test.py::test_mi_controller_health_status_poll_filter

Controller Health Status Poll command filter by Controller Health Status Changed Flags

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.3 Controller Health Status Poll

Steps

  1. send Controller Health Status Poll Clear Changed Flags
  2. send Controller Health Status Poll filter by Controller Health Status Changed Flags
  3. issue an AER command
  4. set feature to enable all asynchronous events
  5. get current temperature
  6. set Over Temperature Threshold to trigger AER
  7. send Controller Health Status Poll filter by Controller Health Status Changed Flags
  8. send Controller Health Status Poll Clear Changed Flags

function: scripts/management/03_mi_cmd_set_test.py::test_mi_configuration_get

Get Mi configuration

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.1 Configuration Get

Steps

  1. get current SMBus/I2C Frequency
  2. get current MCTP Transmission Unit Size

function: scripts/management/03_mi_cmd_set_test.py::test_mi_configuration_frequency

config different frequency

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.2 Configuration Set: 100KHz, 400KHz

Steps

  1. get current SMBus/I2C Frequency
  2. config frequency value
  3. check configuration set success
  4. vpd read 256 bytes
  5. config orig frequency

function: scripts/management/03_mi_cmd_set_test.py::test_mi_configuration_set

Set Mi configuration

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.2 Configuration Set

Steps

  1. set SMBus/I2C Frequency=1
  2. set Health Status Change
  3. set MCTP Transmission Unit Size=64

function: scripts/management/03_mi_cmd_set_test.py::test_mi_vpd_read

VPD read

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.11 VPD Read

Steps

function: scripts/management/03_mi_cmd_set_test.py::test_mi_ep_buf_write_read

Management Endpoint Buffer Write

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.5 Management Endpoint Buffer Write

Steps

  1. read SMBus Port Information
  2. Management Endpoint Buffer Write special data
  3. send Management Endpoint Buffer Read
  4. check data

function: scripts/management/03_mi_cmd_set_test.py::test_mi_reset

MI Reset

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.8 Reset

Steps

  1. issue mi reset
  2. issue nvme identify

file: scripts/management/04_mi_admin_cmd_test

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_admin_get_log_page

MI get log page command

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 6 NVM Express Admin Command Set

Steps

  1. get smart-log through mi
  2. get smart data from nvme admin command
  3. compare data

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_admin_aer_temperature

mi nvme get log page command should retain asynchronous event.

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 6 NVM Express Admin Command Set

Steps

  1. issue an AER command
  2. set feature to enable all asynchronous events
  3. get current temperature
  4. set Over Temperature Threshold to trigger AER
  5. send mi get log page command
  6. assert res[29] & 0x2
  7. set Under Temperature Threshold to trigger AER
  8. read log page to clear the event
  9. check smart data for critical warning of the temperature event
  10. recover to original setting

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_admin_identify

MI identify command

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 6 NVM Express Admin Command Set

Steps

  1. send identify through mi
  2. send nvme identify command
  3. check identify data

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_admin_timestamp

send mi command to set/get feature of timestamp

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 6 NVM Express Admin Command Set

Steps

  1. set timestamp with MI cmd in either PS3 or PS4
  2. set the PS
  3. repeat get and check timestamp with MI
  4. restore to PS0

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_admin_identify_diff_slot

send mi command in different slot

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 6 NVM Express Admin Command Set

Steps

  1. send nvme identify command
  2. send mi identify command with slot0 and check data
  3. send mi identify command with slot1 and check data

function: scripts/management/04_mi_admin_cmd_test.py::test_mi_fw_download

send mi nvme fw download command

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 6 NVM Express Admin Command Set

Steps

  1. config frequency value
  2. check configuration set success
  3. slice fw image
  4. send mi nvme fw download command

file: scripts/management/05_mi_control_primitive_test

function: scripts/management/05_mi_control_primitive_test.py::test_mi_control_primitive_pause

Control Primitives Pause

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 4.2.1.1 Pause

Steps

function: scripts/management/05_mi_control_primitive_test.py::test_mi_control_primitive_resume

Control Primitives Resume

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 4.2.1.2 Resume

Steps

function: scripts/management/05_mi_control_primitive_test.py::test_mi_control_primitive_abort

Control Primitives Abort

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 4.2.1.3 Abort

Steps

function: scripts/management/05_mi_control_primitive_test.py::test_mi_control_primitive_get_state

Control Primitives Get State

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 4.2.1.4 Get State

Steps

function: scripts/management/05_mi_control_primitive_test.py::test_mi_control_primitive_replay

Control Primitives Replay

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 4.2.1.5 Replay

Steps

file: scripts/management/06_mi_pcie_cmd_test

function: scripts/management/06_mi_pcie_cmd_test.py::test_mi_pcie_cfg_read

PCIe Configuration Read

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 7.1 PCIe Configuration Read

Steps

  1. Send PCIe Configuration Read command
  2. check PCIe Configuration Read data
  3. Send PCIe Configuration Write command

file: scripts/management/07_mi_feature_test

function: scripts/management/07_mi_feature_test.py::test_mi_feature_configuration_set_and_reset

Set Mi configuration and mi reset

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 8.3.1 NVM Subsystem Reset: “This includes all NVM Subsystem ports (PCIe and SMBus/I2C), Management Endpoints, and Controller Management Interfaces. All state is returned to its default condition.”

Steps

  1. get default MCTP Transmission Unit Size
  2. set MCTP Transmission Unit Size=128 bytes
  3. get current MCTP Transmission Unit Size
  4. issue mi reset
  5. get current MCTP Transmission Unit Size

function: scripts/management/07_mi_feature_test.py::test_mi_feature_set_mctp_unit_size

Set MCTP Transmission Unit Size and verify with VPD reads

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.2 Configuration Set

Steps

  1. vpd read 100 bytes and receive twice
  2. set MCTP Transmission Unit Size=128 bytes
  3. vpd read 100 bytes and receive twice; the second receive times out
  4. set MCTP Transmission Unit Size=64
  5. get current MCTP Transmission Unit Size

function: scripts/management/07_mi_feature_test.py::test_mi_feature_disable_ccen

disable nvme and send mi command

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 6 NVM Express Admin Command Set

Steps

  1. send nvme identify command
  2. set nvme cc.en=0
  3. send mi identify command
  4. check identify data
  5. check the controller is disabled

function: scripts/management/07_mi_feature_test.py::test_mi_feature_d3hot

enter pcie d3hot and send mi command

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 6 NVM Express Admin Command Set

Steps

  1. send nvme identify command
  2. enter pcie d3hot
  3. send mi identify command
  4. enter pcie d0
  5. check identify data

function: scripts/management/07_mi_feature_test.py::test_mi_feature_check_seq

check package sequence

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 3 Message Mi

Steps

function: scripts/management/07_mi_feature_test.py::test_mi_feature_command_latency

test mi command latency with io

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.11 VPD Read

Steps

  1. get current SMBus/I2C Frequency
  2. vpd read 256 bytes 100 cycles

file: scripts/management/08_mi_error_inject_test

function: scripts/management/08_mi_error_inject_test.py::test_mi_invalid_som_and_eom

send mi command with invalid som and eom

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. send normal mi identify command
  2. send mi identify command with eom=0

function: scripts/management/08_mi_error_inject_test.py::test_mi_invalid_integrity_check

send mi command with invalid integrity check field

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. send mi nvme admin command without any error
  2. send mi nvme admin command with invalid integrity check field
  3. check parameter error location

function: scripts/management/08_mi_error_inject_test.py::test_mi_invalid_crc32

send mi command with invalid crc32

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. send mi identify command with invalid crc32

function: scripts/management/08_mi_error_inject_test.py::test_mi_invalid_offset

send mi nvme admin command with invalid offset

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 6 NVM Express Admin Command Set

Steps

  1. send mi identify command with invalid offset
  2. check parameter error location
  3. send mi identify command with offset over the size of the NVMe Admin Command completion data.

function: scripts/management/08_mi_error_inject_test.py::test_mi_invalid_length

send mi nvme admin command with invalid length

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 6 NVM Express Admin Command Set

Steps

  1. send mi identify command with invalid length
  2. check parameter error location
  3. send mi identify command with invalid length over the size of the NVMe Admin Command completion data.
  4. check parameter error location

function: scripts/management/08_mi_error_inject_test.py::test_mi_invalid_opcode

send mi command with invalid opcode

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. send mi nvme admin command with invalid opcode
  2. send mi command with invalid opcode

function: scripts/management/08_mi_error_inject_test.py::test_mi_invalid_nvme_command

send mi command with nvme command internal error

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. send mi nvme admin command with nvme command internal error
  2. with pytest.warns(UserWarning, match="mi status: 0x4"):
  3. check nvme cq entry status field != 0

function: scripts/management/08_mi_error_inject_test.py::test_mi_invalid_crc8

send mi command with invalid crc8

Reference

  1. Management Interface Specification, Revision 1.2c.

Steps

  1. send mi identify command with eom=0
  2. inject invalid crc8
  3. the next command should be corrupted
  4. but the following shall be accepted

function: scripts/management/08_mi_error_inject_test.py::test_mi_cmd_mixed

commands sent during the mi command sending process

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 4 Message Servicing Model

Steps

  1. send vpd read only 15 bytes
  2. send identify through mi
  3. send nvme identify command
  4. check identify data

file: scripts/management/09_mi_stress_test

function: scripts/management/09_mi_stress_test.py::test_mi_stress_mix_nvme_cmd

test mi command mix io command

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 6 NVM Express Admin Command Set

Steps

  1. send different mi command
  2. random read, get io latency
  3. mixing io and mi commands
  4. check for io latency changes

function: scripts/management/09_mi_stress_test.py::test_mi_stress_cmd_ctrl_mix

Control Primitives Mix

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 4.2.1 Control Primitives

Steps

function: scripts/management/09_mi_stress_test.py::test_mi_stress_io_with_mi_reset

IO and mi reset

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 5.8 Reset

Steps

  1. issue mi reset
  2. mi reset during io

function: scripts/management/09_mi_stress_test.py::test_mi_stress_power_change

main power cycle and i2c command

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 8 Management Architecture

Steps

  1. get the test data when power is on
  2. power off the main power
  3. start a thread to power cycle main power
  4. read vpd data and verify
  5. stop power cycle thread
  6. power on the SSD

function: scripts/management/09_mi_stress_test.py::test_mi_stress_diff_slot

Send mi commands with different slots

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 4 Message Servicing Model

Steps

function: scripts/management/09_mi_stress_test.py::test_mi_stress_basic_management_mix

mi command and basic management mixed

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 4 Message Servicing Model

Steps

  1. mix send mi command and basic management command

function: scripts/management/09_mi_stress_test.py::test_mi_stress_inband_oob_cmd_mix

mi command and out of band command mixed

Reference

  1. Management Interface Specification, Revision 1.2c.
  2. 4 Message Servicing Model

Steps

  1. skip if mi command is not supported
  2. mix send mi command and basic management command

file: scripts/management/10_mi_ocp_test

function: scripts/management/10_mi_ocp_test.py::test_mi_read_firmware_update_flags

Check Firmware Update Flags field

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.1 NVMe Basic Management Command (Appendix A) Requirements

Steps

  1. send NVMe Basic Management Command opcode 90
  2. check that the Firmware Update Flags field (byte 91) in the SMBus data structure is set to FFh

function: scripts/management/10_mi_ocp_test.py::test_mi_read_secure_boot_failure_feature_reporting

Check Secure Boot Failure Feature Reporting Supported

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.1 NVMe Basic Management Command (Appendix A) Requirements

Steps

  1. send NVMe Basic Management Command opcode 242
  2. check that the Secure Boot Failure Feature Reporting Supported bit at offset 243 is supported and set to 1b

function: scripts/management/10_mi_ocp_test.py::test_mi_read_basic_mgmt_data_time

Check NVMe Basic Management Command time

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.1 NVMe Basic Management Command (Appendix A) Requirements

Steps

  1. read Controller Capabilities Timeout value
  2. get NVMe Basic Management Command time
  3. check that the NVMe Basic Management Command takes no longer than the CAP.TO timeout value (a conversion sketch follows this list)
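
For the comparison in step 3, CAP.TO is expressed in 500 ms units, so a minimal conversion sketch looks like this (elapsed_s is assumed to be measured around the SMBus command by the caller).

    def basic_mgmt_time_ok(elapsed_s, cap_to):
        # CAP.TO is reported in units of 500 ms; the command must finish within that window.
        timeout_s = cap_to * 0.5
        return elapsed_s <= timeout_s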

function: scripts/management/10_mi_ocp_test.py::test_mi_invalid_smbus_addr

send mi command with invalid smbus address

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

  1. send vpd read cmd
  2. inject invalid i2c addr
  3. the next command should be corrupted
  4. but the following shall be accepted

function: scripts/management/10_mi_ocp_test.py::test_mi_level0_discovery

send tcg level0 discovery over smbus

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

  1. send tcg level0 discovery over smbus

function: scripts/management/10_mi_ocp_test.py::test_mi_sanitize

send sanitize command over smbus

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

  1. send sanitize command over smbus

function: scripts/management/10_mi_ocp_test.py::test_mi_device_self_test

send device self test command over smbus

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

  1. send device self test command over smbus

function: scripts/management/10_mi_ocp_test.py::test_mi_log_page_device_self_test

send get log page pageid(0x6) command over smbus

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

  1. start a short DST, and record start time
  2. get device self test log through mi
  3. idle 1s
  4. get device self test log through mi
  5. issue format command to abort dst
  6. get device self test log through mi

function: scripts/management/10_mi_ocp_test.py::test_mi_log_page_sanitize

send get log page pageid(0x81) command over smbus

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

  1. issue a Block Erase sanitize command
  2. get sanitize log through mi
  3. idle 1s
  4. get sanitize log through mi

function: scripts/management/10_mi_ocp_test.py::test_mi_log_page_extended_smart_log

send get log page pageid(0xc0) command over smbus

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

  1. get SMART / Health Information Extended log through mi

function: scripts/management/10_mi_ocp_test.py::test_mi_log_page_persistent_event_log

send get log page pageid(0xd) command over smbus

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

  1. nvme controller reset
  2. get persistent event log through mi

function: scripts/management/10_mi_ocp_test.py::test_mi_log_page_telemetry_host_initiated_log

send get log page pageid(0x7) command over smbus

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

  1. get telemetry host-initiated log through mi

function: scripts/management/10_mi_ocp_test.py::test_mi_log_page_telemetry_controller_initiated_log

send get log page pageid(0x8) command over smbus

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

  1. get telemetry controller-initiated log through mi

function: scripts/management/10_mi_ocp_test.py::test_mi_feature_temperature_threshold

send get feature (Temperature Threshold) command over smbus

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

  1. get temperature threshold through mi

function: scripts/management/10_mi_ocp_test.py::test_mi_diff_host_smbus_frequenceise

send mi command with different host smbus frequencies

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

  1. send vpd read cmd

function: scripts/management/10_mi_ocp_test.py::test_mi_diff_host_mctp_unit_size

send mi command with different mctp unit size

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

  1. send mi nvme fw download cmd

function: scripts/management/10_mi_ocp_test.py::test_mi_with_pcie_reset

send mi cmd with pcie reset

Reference

  1. Datacenter NVMe SSD Specification, Revision 2.5
  2. 11.3 NVMe-MI Requirements

Steps

function: scripts/management/10_mi_ocp_test.py::test_smbus_prepare_to_arp

Test for sending the Prepare to ARP command using SMBus protocol.

Reference

  1. System Management Bus (SMBus) Specification Version 3.2, 6.6.3.2 Prepare to ARP

Steps

  1. Send the ARP address with the Prepare to ARP command code
  2. Combine the data with the PEC byte (see the PEC sketch after this list)
  3. Send the SMBus command with PEC
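
The PEC in step 2 is the standard SMBus CRC-8 (polynomial 0x07, initial value 0x00) computed over every byte on the wire; a minimal sketch follows, using the SMBus ARP default address 0x61 and the Prepare to ARP command code 0x01 from the SMBus specification.

    def smbus_pec(data):
        # SMBus Packet Error Code: CRC-8, polynomial 0x07, initial value 0x00.
        crc = 0
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc

    # PEC covers every byte on the wire, including the (7-bit address << 1 | R/W) byte.
    frame = bytes([(0x61 << 1) | 0, 0x01])   # ARP default address 0x61, Prepare to ARP command 0x01
    print(hex(smbus_pec(frame)))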

function: scripts/management/10_mi_ocp_test.py::test_smbus_get_udid

Get the device UDID over SMBus

Reference

  1. I2C-bus specification and user manual, 3.1.17 Device ID

Steps

file: scripts/management/samsung_test

Suite: scripts/production

folder: scripts/production

file: scripts/production/01_normal_io_test

This file contains long-duration IO tests aimed at evaluating the reliability, endurance, and performance of NVMe SSDs under sustained workloads. The tests cover various read/write patterns, including random and sequential operations with different block sizes and ratios, running for extended periods from 30 minutes to several days. These tests are designed to ensure that the NVMe SSDs can handle continuous stress, identify potential issues, and verify that the devices meet required performance and stability standards over their expected lifespan.

file: scripts/production/02_mix_io_test

This file contains a series of mixed IO tests aimed at evaluating the performance and reliability of NVMe SSDs under various conditions, including different block sizes, read/write ratios, and IO patterns over extended durations. The tests simulate real-world workloads by varying parameters such as queue depth and block size, switching between random and sequential operations, and collecting performance data. These tests are designed to stress the SSD and ensure it can handle diverse and intensive usage scenarios.

file: scripts/production/03_data_model_test

This file contains a series of data model tests designed to simulate real-world workloads on NVMe SSDs. Each test emulates different application scenarios, such as cloud computing, SQL databases, and content delivery networks, by varying parameters like block size, read/write ratio, and randomness. The purpose of these tests is to assess the SSD’s performance, endurance, and reliability under conditions that mimic actual usage patterns in diverse environments.

file: scripts/production/04_trim_format_test

This file includes a series of tests focused on assessing NVMe SSD performance under various conditions, particularly during and after trim operations. The tests simulate workloads that involve sequential and random writes, followed by trim operations and subsequent performance evaluations. These scenarios help determine how effectively the SSD maintains performance when managing trimmed data and handling mixed IO patterns over extended periods.

file: scripts/production/05_small_range_test

This file contains a set of tests designed to evaluate the performance and reliability of NVMe SSDs by executing various read and write operations on specific LBA ranges and random regions within the drive. The tests focus on stressing the SSD with different workloads, such as repeated reads and writes on the same or multiple LBAs, and small range operations. These scenarios are intended to simulate real-world usage patterns and assess how the SSD manages data across its storage space over extended durations.

file: scripts/production/06_jesd_workload_test

This file includes a test case designed to evaluate NVMe SSD performance and endurance under a JEDEC JESD 219 workload, which simulates a typical client workload for solid-state drives. The test involves a sequence of operations: a full drive sequential write with 128KB block sizes, followed by 4KB random writes, and concluding with a workload distribution that mimics real-world usage scenarios. The purpose is to assess how well the SSD handles sustained writes, mixed workloads, and different IO patterns over an extended period.

file: scripts/production/07_power_cycle_test

This script automates power cycling tests on NVMe SSDs to assess their response times and reliability under different power loss conditions. It focuses on simulating both sudden (dirty) and typical (clean) power cycles. These tests help evaluate the SSD’s resilience and ability to maintain data integrity across 1000 cycles, ensuring the device meets stringent durability standards.

file: scripts/production/08_io_stress_test

This script conducts a comprehensive stress test on NVMe SSDs to validate their stability, performance, and error handling capabilities under prolonged and varied workloads. The test involves running multiple randomized I/O operations concurrently with critical NVMe commands, such as SMART data retrieval, feature management, and abort operations. The script simulates a real-world usage scenario by continuously starting and stopping I/O workers while ensuring data integrity through the verification of the entire drive at the end of the test. This approach helps assess the SSD’s resilience and readiness for deployment in demanding environments.

file: scripts/production/09_wl_stress_test

This script conducts a wear leveling test on NVMe SSDs to evaluate their endurance and efficiency in managing data distribution across the memory cells. The test involves sequential and random write operations to different regions of the drive, simulating hot and cold data scenarios, and triggers wear leveling and garbage collection processes. The test measures IOPS (Input/Output Operations Per Second) throughout the operations and generates performance diagrams to assess the effectiveness of wear leveling. The script also includes power cycling and full-drive verification steps to ensure data integrity post-testing.