PyNVMe3 Developer’s Guide

Last Modified: September 5, 2025

Copyright © 2020-2025 GENG YUN Technology Pte. Ltd.
All Rights Reserved.


1. Test Platform

PyNVMe3™ is an independent, third-party PCIe/NVMe SSD test platform built for speed. Write test scripts in Python and leverage the full ecosystem; run them on off-the-shelf desktops and servers, then replay the same scripts in your automation pipelines. No expensive, dedicated hardware is required, enabling low-cost, large-scale deployment in NVMe SSD development and manufacturing.

Before installing PyNVMe3, ensure your platform meets the following requirements:

  • CPU: x86_64 architecture.
  • Memory: 16 GB or more.
  • OS: Ubuntu LTS (e.g., 24.04). It is recommended to install the OS on a SATA drive.
  • RAID mode (Intel® RST): Must be disabled in the BIOS.
  • Secure Boot: Must be disabled in the BIOS.
  • IOMMU (Intel® VT-d, Virtualization Technology for Directed I/O): Must be disabled in the BIOS.

For server platforms, additional considerations apply:

  • VMD: Must be disabled in the BIOS.
  • NUMA: Must be disabled in the BIOS.

1.1 Root Privileges

Root privileges are required to install and run PyNVMe3. We recommend enabling passwordless sudo on test platforms.

  1. Open the sudoers editor:
    sudo visudo
    
  2. Add the following line at the end of the file (replace your_username with your username):
    your_username        ALL=(ALL)       NOPASSWD: ALL
    
  3. Press Ctrl-O and Enter to save the file, then Ctrl-X to exit the editor.

1.2 SSH Server

Ubuntu Desktop does not have the SSH service installed and enabled by default. You need to manually install and configure it, as well as adjust the firewall settings. Follow these steps:

  1. Install the SSH service:
    sudo apt update
    sudo apt install -y openssh-server
    
  2. Start the SSH service and enable it to start on boot:
    sudo systemctl start ssh
    sudo systemctl enable ssh
    
  3. Check the status of the SSH service:
    sudo systemctl status ssh
    
  4. Configure the firewall to allow SSH connections:
    sudo ufw allow ssh
    sudo ufw enable
    sudo ufw status
    

After completing these steps, you can connect to your Ubuntu Desktop via SSH.

2. Install PyNVMe3

PyNVMe3 is installed from the command line, and most steps are automated for convenience. The process is as follows:

  1. Update Ubuntu:
    sudo apt update
    sudo apt upgrade
    
  2. On Ubuntu 24.04, remove the PEP 668 marker file to allow Python packages to be installed into the system environment:
    sudo rm -f /usr/lib/python3.12/EXTERNALLY-MANAGED
    
  3. Install pip3, as PyNVMe3 relies on many Python libraries:
    sudo apt install -y python3-pip
    
  4. (Optional) Change the pip3 source for faster downloads. Create or edit ~/.pip/pip.conf and add:
    [global]
    index-url=https://pypi.tuna.tsinghua.edu.cn/simple/
    [install]
    trusted-host=pypi.tuna.tsinghua.edu.cn
    
  5. If you have previously installed PyNVMe3, uninstall it first:
    sudo pip3 uninstall PyNVMe3
    sudo rm -rf /usr/local/PyNVMe3
    
  6. Install PyNVMe3 using pip3. If you do not have the installation file, please contact sales@pynv.me.
    sudo pip3 install PyNVMe3-xx.yy.z.tar.gz
    

    PyNVMe3 is installed in the folder /usr/local/PyNVMe3.

3. Configuration

By default, the make setup command reserves 10 GB of hugepage memory (2 MB hugepages). Some scenarios require more memory (e.g., testing high-capacity SSDs or multiple SSDs). In such cases, additional 1 GB hugepages should be configured during kernel initialization. The steps are:

  1. Edit /etc/default/grub as root and update the GRUB_CMDLINE_LINUX_DEFAULT line as shown below.
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash default_hugepagesz=2M hugepagesz=1G hugepages=18 iommu=off intel_iommu=off amd_iommu=off modprobe.blacklist=nvme pcie_aspm=off"
    

    For details on the number of hugepages to reserve, please see the next section (Hugepage Memory).

  2. Apply GRUB changes:
    sudo update-grub
    
  3. Create a mount entry in /etc/fstab (match the size to GRUB):
    none /mnt/huge hugetlbfs pagesize=1G,size=18G 0 0
    
  4. Restart OS to activate the new configuration:
    sudo reboot
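
    After rebooting, the reservation can be verified by reading /proc/meminfo. The sketch below is illustrative (hugepage_summary is not part of PyNVMe3) and relies only on the kernel's standard meminfo fields:

```python
import re

def hugepage_summary(meminfo_text):
    """Extract hugepage counters from /proc/meminfo content."""
    fields = {}
    for key in ("HugePages_Total", "HugePages_Free", "Hugepagesize"):
        m = re.search(rf"{key}:\s+(\d+)", meminfo_text)
        if m:
            fields[key] = int(m.group(1))
    return fields

if __name__ == "__main__":
    try:
        with open("/proc/meminfo") as f:
            print(hugepage_summary(f.read()))
    except FileNotFoundError:
        pass  # not a Linux host
```

    Note that /proc/meminfo reports counters for the default hugepage size only; per-size pools (e.g., the 1 GB pages configured above) appear under /sys/kernel/mm/hugepages/.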
    

4. Hugepage Memory

PyNVMe3 relies on hugepages to allocate memory that is physically contiguous and locked in RAM. This is essential for creating DMA buffers and shared data structures, which are critical for achieving high IOPS, low latency, and stable performance when working with NVMe SSDs.

In Linux, memory is managed in units called pages, typically 4 KB in size. However, for applications requiring large, contiguous memory regions—such as high-performance I/O—the kernel provides hugepages. Hugepages are larger memory blocks (e.g., 2 MB or 1 GB) that reduce the overhead of managing many small pages and improve memory access efficiency by minimizing TLB (Translation Lookaside Buffer) misses.

One important characteristic of hugepages is that they are non-swappable. Unlike regular memory pages, hugepages cannot be moved to disk when the system runs low on RAM. This ensures that large, contiguous memory regions remain available for direct memory access (DMA) operations, making hugepages ideal for frameworks like SPDK (Storage Performance Development Kit) and DPDK (Data Plane Development Kit).

PyNVMe3 adopts hugepages to ensure the integrity and performance of its DMA buffers and shared data structures. By leveraging hugepages, PyNVMe3 guarantees that memory remains physically contiguous and locked in RAM, enabling reliable and efficient high-speed communication with NVMe devices. Hugepages are also used to store the CRC table and cmdlog, which are critical for data integrity and debugging.

On x86-64 platforms, there are two hugepage sizes: 2 MB and 1 GB. PyNVMe3 reserves a fixed 10 GB of 2 MB hugepages by default. For scenarios requiring additional hugepage memory—such as testing high-capacity SSDs—1 GB hugepages must be configured in the GRUB bootloader. Refer to the steps in the “Configuration” section for details on how to enable 1 GB hugepages.

The majority of hugepage memory is consumed by the CRC table, especially when testing large-capacity SSDs (e.g., 8 TB or greater). The CRC table memory requirement can be estimated using the following formula:

CRC Table Size (bytes) = SSD Capacity (bytes) ÷ LBA Size (bytes) × CRC Entry Size (1 byte)

Each LBA (Logical Block Address) has a 1-byte entry in the CRC table. For example:

  • A 16 TB SSD formatted with 512-byte LBAs requires approximately 32 GB of memory for the CRC table:
    (16 × 1024⁴ bytes) ÷ 512 bytes × 1 byte = 32 GB

  • The same SSD formatted with 4 KB LBAs requires only 4 GB of memory:
    (16 × 1024⁴ bytes) ÷ 4096 bytes × 1 byte = 4 GB
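
    These estimates are easy to reproduce in Python; crc_table_size below is an illustrative helper, not part of the PyNVMe3 API:

```python
def crc_table_size(capacity_bytes, lba_size_bytes, entry_size_bytes=1):
    """Estimate CRC table memory: one entry per LBA."""
    return capacity_bytes // lba_size_bytes * entry_size_bytes

TB = 1024 ** 4  # the guide uses binary units (1 TB = 1024^4 bytes)
GB = 1024 ** 3

print(crc_table_size(16 * TB, 512) // GB)   # 32 GB for 512-byte LBAs
print(crc_table_size(16 * TB, 4096) // GB)  # 4 GB for 4 KB LBAs
```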

In addition to reserving hugepage memory, ensure that sufficient physical memory remains available for the operating system and Python3 scripts. We recommend leaving at least 5 GB of free memory for system use. For multi-SSD or multi-socket server environments, refer to the subsequent sections for detailed memory planning and configuration.

By carefully estimating and configuring hugepage memory, PyNVMe3 ensures stable and efficient operation across a wide range of NVMe SSD testing scenarios.

4.1 Single High-Capacity SSD

Consider testing a 16 TB SSD formatted with 512-byte LBAs. The drive contains 32G LBAs (16 TB ÷ 512 B = 32G LBAs). PyNVMe3 therefore requires approximately 32 GB of memory to store the CRC table.

To meet this requirement, reserve 32 GB of 1 GB hugepages, plus the fixed 10 GB of 2 MB hugepages. The host system should provide at least 48 GB of physical memory.

If the SSD is instead formatted with 4 KB LBAs, the number of LBAs is reduced to 4G (16 TB ÷ 4 KB = 4G LBAs). In this case, the CRC table requires only 4 GB of memory.

4.2 Multiple SSDs on a Server

Consider testing 12 SSDs in a server: 6 × 4 TB SSDs formatted with 512-byte LBAs and 6 × 4 TB SSDs formatted with 4 KB LBAs.

The CRC table memory requirement is as follows:

  • For 6 SSDs at 512-byte LBA: each requires 8 GB (4 TB ÷ 512 B = 8G LBAs).
  • For 6 SSDs at 4 KB LBA: each requires 1 GB (4 TB ÷ 4 KB = 1G LBAs).
  • The total CRC table memory is therefore 54 GB (6 × 8 GB + 6 × 1 GB).

In addition to the CRC table, allocate 2 GB per SSD for overhead, giving 24 GB for 12 drives. Reserve another 5 GB for the operating system and background processes.

The total memory requirement is approximately 83 GB (54 GB for CRC tables + 24 GB of per-SSD overhead + 5 GB for the OS). To meet this requirement, configure the host with at least 96 GB of physical memory and reserve 80 × 1 GB hugepages, which covers the 78 GB of hugepage demand with headroom.
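
The planning arithmetic above can be scripted; plan_total_memory below is an illustrative helper (not part of the PyNVMe3 API) that applies this section's rules of thumb: one CRC byte per LBA, 2 GB of overhead per SSD, and a 5 GB OS reserve.

```python
GB = 1024 ** 3
TB = 1024 ** 4

def plan_total_memory(ssds, per_ssd_overhead_gb=2, os_reserve_gb=5):
    """Estimate total memory (GB) for a test configuration.

    ssds: list of (capacity_bytes, lba_size_bytes) tuples, one per drive.
    """
    crc_gb = sum(cap // lba // GB for cap, lba in ssds)  # CRC tables
    overhead_gb = per_ssd_overhead_gb * len(ssds)        # per-SSD overhead
    return crc_gb + overhead_gb + os_reserve_gb

# 6 x 4 TB SSDs at 512 B LBA plus 6 x 4 TB SSDs at 4 KB LBA
config = [(4 * TB, 512)] * 6 + [(4 * TB, 4096)] * 6
print(plan_total_memory(config))  # 83
```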

4.3 Multi-Socket Server Environments

Multi-socket servers introduce additional considerations because of NUMA (Non-Uniform Memory Access). Hugepage memory is allocated separately for each CPU socket, and CPU cores are physically closer to specific PCIe slots. To achieve stable performance, both memory and CPU affinity must be planned carefully.

  • Hugepage allocation across sockets. For example, in the case of 12 SSDs (6 formatted with 512 B LBAs and 6 formatted with 4 KB LBAs), if all six 512 B SSDs are attached to one socket, that socket alone requires about 48 GB of hugepage memory, while the other socket (with six 4 KB SSDs) requires only 6 GB. This imbalance can lead to allocation failures even if free hugepages remain on the second socket. To avoid this, distribute SSDs evenly across sockets so that each socket has sufficient hugepages for its assigned drives.
  • CPU-to-DUT affinity. Assign CPU cores that are physically close to the PCIe devices (DUTs) to handle their workloads. This reduces latency and maximizes throughput. Core-to-device affinity can be configured in the slot.conf file.

⚠️ Note: Testing on multi-socket servers is not recommended, as it is difficult to configure NUMA memory and CPU affinity correctly in all cases. For predictable and repeatable results, use a single-socket platform whenever possible.

5. Run Tests

PyNVMe3 can be executed in several ways:

  • In VSCode, mainly used for debugging new scripts.
  • In a command-line environment.
  • In CI systems such as Jenkins.

We first introduce test execution through the command line.

5.1 Setup

  1. Enter PyNVMe3 directory
    cd /usr/local/PyNVMe3/
    
  2. Switch to root
    sudo su
    
  3. Configure the runtime environment. This step replaces the kernel NVMe driver on the device with the PyNVMe3 user-space driver and reserves hugepage memory for testing.
    make setup
    

    By default, PyNVMe3 will try to reserve 10 GB of hugepage memory, which meets the test needs of a 4 TB drive (512-byte LBA). It is recommended that the test machine be equipped with 16 GB or more of memory. For more details, see the Hugepage Memory section.

5.2 Test

  1. Use the following command to execute the test:
    make test
    

    This command executes all tests in the folder scripts/conformance by default. The conformance test suite contains comprehensive test scripts against the NVMe specification and normally completes in 1-2 hours.

  2. Specify the test cases to run. Additional tests reside in the folder scripts/benchmark. Benchmark tests usually take longer to execute, from a few hours to days or even weeks. Specify the file name on the command line:
    make test TESTS=scripts/benchmark/performance.py
    
  3. If there are multiple NVMe DUTs on the test platform, specify the pciaddr of the DUT on the command line. Alternatively, specify the slot of the DUT, and PyNVMe3 will look up its BDF address in the slot.conf file.
    make test pciaddr=0000:03:00.0
    make test slot=3
    

    If there is only one NVMe SSD on the platform, this parameter is not needed; PyNVMe3 will automatically find the BDF address of the drive.

  4. For NVMe drives with multiple namespaces, specify the namespace under test with the nsid parameter; the default nsid is 1.
    make test nsid=2
    

The TESTS, pciaddr (or slot), and nsid parameters can be combined on the make test command line.

5.3 Results

After the test starts, the test log will be printed in the terminal, and a test log file will be saved in the results folder where you can find more information for debugging. Each test item may have one of the following results:

  • SKIPPED: The test was skipped because some required condition was not met.
  • FAILED: The test failed. The log file shows the specific reason for the test failure, usually an assertion that was not satisfied. When an assertion fails, the test exits immediately and does not continue with subsequent test items.
  • PASSED: The test passed.
  • ERROR: The test could not be executed. The DUT may have been lost and cannot be detected by PyNVMe3. If you encounter ERROR, it is recommended to check the log of the previous FAILED test case, which may have caused the DUT problem.

Regardless of the test results, warnings may be generated during the test. The test log contains a list of all warnings. Most warnings are related to error status codes in CQEs returned by the DUT, or to completed AER (Asynchronous Event Request) commands. Warnings do not stop test execution, but we recommend double-checking all warning information.

The results directory contains not only the test log file, but also files (Excel, CSV, or PNG) generated by the test script, such as raw data and diagrams of the test record.

6. Pytest

PyNVMe3 provides a high-performance, proven user-space NVMe driver that sustains millions of IOPS at low latency. The Python API enables development of test scripts that integrate with the broader Python ecosystem, with pytest serving as the primary test framework.

pytest is a general-purpose testing framework that scales from smoke checks to large, parameterized suites. This chapter summarizes the essential concepts and describes how PyNVMe3 integrates with pytest. For comprehensive guidance, refer to the official documentation.

6.1 Overview

PyNVMe3 integrates with pytest in a straightforward manner: test modules are standard Python files that pytest automatically discovers and executes. The make test target is a thin wrapper around pytest with laboratory defaults, ensuring consistent behavior across hosts. When finer control is required, PyNVMe3 provides additional command-line parameters so that tests can be executed with greater flexibility.

To run specific tests, use one of the following lines:

make test TESTS=scripts/test_folder
make test TESTS=scripts/test_folder/test_file.py
make test TESTS=scripts/test_folder/test_file.py::test_function

When multiple NVMe DUTs are attached to the SUT, select a device by its BDF address:

make test pciaddr=0000:BB:DD.F

6.2 Test function

Pytest automatically collects functions whose names begin with test_ and executes them individually. The following file is a minimal, complete example:

import pytest
from nvme import Controller, Buffer, Qpair, Pcie, Subsystem, Namespace, QpairCreationError
from nvme import IOCQ, IOSQ, PRP, PRPList, SQE, CQE, NvmeEnumerateError

def test_format(nvme0n1):
    nvme0n1.format()

Although it contains only a few lines, this is a valid pytest test module: it imports pytest, defines one test function, and invokes the PyNVMe3 API. If a function should not be collected temporarily, rename it with a leading underscore so that pytest ignores it:

import pytest
from nvme import Controller, Buffer, Qpair, Pcie, Subsystem, Namespace, QpairCreationError
from nvme import IOCQ, IOSQ, PRP, PRPList, SQE, CQE, NvmeEnumerateError

def _test_format(nvme0n1):
    nvme0n1.format()

This approach keeps the code in place without executing it.

6.3 Assert

In pytest, the standard Python assert statement is used to validate test conditions. The following example defines a small function and a test that asserts the expected result:

def inc(x):
    return x + 2

def test_inc():
    assert inc(1) == 2, f"inc wrong: {inc(1)}"

This assertion fails because inc is implemented incorrectly. pytest reports the test as FAILED and prints a detailed failure explanation, including the message provided in the assert statement. The information captured in the logs facilitates debugging of the test code and firmware. Assertions should include sufficient context—both the expected value and the actual value—so failures are diagnosable and reproducible.

6.4 Fixture

In pytest, a fixture is a function that prepares and disposes of objects required by test functions. It is defined with the @pytest.fixture decorator and injected into tests by name. The following example defines two fixtures, nvme0 and nvme0n1, and a test that consumes them:

import pytest
import logging
from nvme import Controller, Namespace

@pytest.fixture()
def nvme0(pcie):
    return Controller(pcie)

@pytest.fixture(scope="function")
def nvme0n1(nvme0):
    ns = Namespace(nvme0, 1)
    yield ns
    ns.close()

def test_dut_firmware_and_model_name(nvme0: Controller, nvme0n1: Namespace):
    # print Model Number
    logging.info(nvme0.id_data(63, 24, str))
    # format namespace
    nvme0n1.format()

Fixture names must not begin with test, which is reserved for tests. To use a fixture, include its name in a test function’s parameter list. During collection, pytest resolves dependencies, calls each fixture in the required order, and passes the returned object to the test. If the fixture uses yield, the statements following yield execute after the test completes and typically release resources.

Fixtures can depend on one another. In the example, nvme0n1 depends on nvme0. If the test references both, pytest creates them in dependency order, regardless of their position in the parameter list.

A fixture’s lifetime is controlled by its scope parameter. With scope="function" (the default), pytest creates a fresh object for each test, isolating failures. If the scope is changed to session, the object is created once for the entire test session.

Fixtures can be overridden. A fixture defined in the same test file has the highest precedence. Shared fixtures reside in conftest.py; pytest searches per directory and uses the nearest definition. PyNVMe3 provides common fixtures for NVMe testing—pcie, nvme0, nvme0n1, qpair, and subsystem—so setup code does not need to be repeated in every test.

6.5 Parameterize

Test cases are often parameterized, such as writing data of different LBA lengths at different starting LBAs. Pytest provides a simple decorator to generate all combinations of the input values. In the example below, the test will run with all combinations of lba_start and lba_count, for a total of 4 × 4 = 16 cases. This approach keeps the test logic in one place while improving coverage.

import pytest
from nvme import Buffer

@pytest.mark.parametrize("lba_start", [0, 1, 8, 32])
@pytest.mark.parametrize("lba_count", [1, 8, 9, 32])
def test_write_lba(nvme0, nvme0n1, qpair, lba_start, lba_count):
    buf = Buffer(512 * lba_count)
    nvme0n1.write(qpair, buf, lba_start, lba_count).waitdone()
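
For reference, the two stacked decorators expand into the Cartesian product of their value lists; the snippet below (plain Python, independent of pytest) confirms the 16-case count:

```python
from itertools import product

# pytest stacks the two parametrize decorators into a Cartesian product
lba_starts = [0, 1, 8, 32]
lba_counts = [1, 8, 9, 32]
cases = list(product(lba_starts, lba_counts))

print(len(cases))  # 4 x 4 = 16 test invocations
```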

6.6 Configuration

pytest.ini defines project-wide discovery rules, logging, and default options to ensure consistent behavior across hosts. Keep the file at the repository root. The configuration below reflects the current defaults: tests reside under scripts; console logging is enabled at INFO; driver verbosity is controlled by a numeric level; selected warnings are suppressed; and conservative defaults are applied so output remains readable in terminals and CI.

[pytest]
norecursedirs = .git spdk
junit_family=legacy
testpaths = scripts

# python/pytest log level
log_cli=true
log_cli_level=INFO
log_cli_format=[%(asctime)s.%(msecs)03d] %(levelname)s %(funcName)s(%(lineno)d): %(message)s
log_cli_date_format=%Y-%m-%d %H:%M:%S

# driver/spdk log level
# ERROR = 0
# WARN = 1
# NOTICE = 2
# INFO = 3
# DEBUG = 4
log_driver_level=2

filterwarnings = ignore::DeprecationWarning
                 ignore::FutureWarning
                 ignore:AER notification
# uncomment below to report fail on data miscompare
#                 error:(.*)ERROR(.*)0(2|7)/81

addopts = -q -s -v -r Efsx
          --cache-clear
          -p no:cacheprovider
# uncomment below to stop at first fail
#         -x

7. Summary

PyNVMe3 is a comprehensive, flexible framework for NVMe SSD validation. It enables in‑depth testing across a wide range of NVMe functionality—from basic read/write operations to advanced power management, error injection, and out‑of‑band commands—while supporting efficient, automated workflows within the Python ecosystem.

For additional examples, refer to the Python source code included with PyNVMe3. For assistance, contact: