PyNVMe3 Platforms
Last Modified: September 29, 2025
Copyright © 2020-2025 GENG YUN Technology Pte. Ltd.
All Rights Reserved.
- 1. Visual Studio Code (VSCode)
- 2. Laptop Test Platform
- 3. Desktop Test Platform
- 4. Server Test Platform
- 5. Summary
PyNVMe3 provides a comprehensive suite of test platforms purpose-built for NVMe SSD validation, combining a powerful software framework with flexible hardware environments. It supports the entire workflow from development and debugging to large-scale production testing, covering protocol compliance, performance benchmarking, power efficiency, and out-of-band management. To meet diverse testing requirements, PyNVMe3 defines three representative platforms: the laptop platform, ideal for precise power consumption analysis of Client NVMe SSDs; the desktop platform, optimized for performance, stress, and management interface testing; and the server platform, designed for high-density, parallel validation at scale.
1. Visual Studio Code (VSCode)
To complement these hardware platforms, PyNVMe3 integrates seamlessly with modern development tools, enabling engineers to build and debug test scripts with efficiency. Visual Studio Code (VSCode) is the recommended integrated development environment (IDE) for PyNVMe3. With its rich ecosystem of extensions and support for remote development over SSH, VSCode allows developers to write scripts on their local workstation while executing them directly on remote test machines. The PyNVMe3 VSCode extension further enhances this workflow by providing real-time visibility into queues, command logs, and performance metrics, ensuring that script development and debugging are both intuitive and closely aligned with the behavior of the underlying SSD under test.
⚠️ Note: Use a native terminal or SSH for day‑to‑day, long‑running test execution. Use VSCode primarily for authoring, quick runs, and debugging.
1.1 Prerequisites
- Work PC: any laptop or desktop computer (Windows/macOS/Linux) with VSCode installed
- Test Machine: the platform with Ubuntu and PyNVMe3 installed
- Network access: SSH login from Work PC to Test Machine
1.2 Steps
- Download the VSCode installer from the official website and install it on the Work PC (Windows, Linux, or macOS).
- Install the Remote‑SSH extension.

- In VSCode, open the Remote Explorer (left sidebar) and add the test host as a target. Use the full SSH command (add `-p <port>` if you use a custom port).

- When the remote window opens, install the PyNVMe3 extension. Choose Install from VSIX and select the VSIX package under `PyNVMe3/.vscode`.

- The PyNVMe3 extension displays the current queue state, commands, and performance information in VSCode.

- Select the PyNVMe3 directory in the remote window.

- Open a terminal and run the initial setup:
  ```shell
  make setup
  ```
  From the `make setup` output, note the PCIe BDF of the DUT.

- Edit `.vscode/settings.json` and set the BDF field.
  ⚠️ Note: Remember to check and update the BDF address in `.vscode/settings.json`.

- The Python Test Explorer extension discovers PyNVMe3 tests automatically. Open the Testing view to see collected cases. Click the Run (triangle) icon next to a test to execute it. For parameterized tests, right‑click the icon to select a parameter set.

- During execution, logs stream to the terminal.

- VSCode runs tests in Debug mode when launched from the editor. Breakpoints and watch windows are supported, and the PyNVMe3 extension remains active while paused.
⚠️ Note: The PyNVMe3 driver enforces I/O timeouts. If you pause execution while commands are in flight, timeouts may occur. Avoid breakpoints immediately before or after I/O submissions or waits. If you need to pause for debugging, consider setting a longer timeout to prevent test failures.
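The BDF noted during `make setup` goes into `.vscode/settings.json`. A minimal sketch is shown below; the exact key name may differ between PyNVMe3 releases, so treat the field here as a placeholder and edit the `settings.json` shipped in the package:

```json
{
    // hypothetical key name — check the settings.json shipped with PyNVMe3
    "pynvme3.pciaddr": "0000:01:00.0"
}
```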
1.3 Codex Workflow
Codex is available in the VSCode remote workspace to help draft or edit PyNVMe3 tests quickly; use it as a coding teammate rather than a replacement for validation.
- Install OpenAI Codex (Codex extension) in VSCode.
- Open the integrated terminal in VSCode (connected to the remote host) and start a Codex session.
- Describe the task clearly; Codex will use AGENTS.md as the system prompt.
- Review and run the generated test.
- If required, copy logs from terminal windows to Codex, and Codex will update the script.
2. Laptop Test Platform
PyNVMe3 can run on a wide range of laptops and is ideal for low-power characterization and rapid script development. When paired with the Quarch Power Analysis Module (PAM), it delivers high-resolution, repeatable power measurements without requiring a full desktop test bench. For a complete walkthrough of setup and operation, see the User Guide.
3. Desktop Test Platform
The Desktop Test Platform is the most widely used. It is a high-performance, cost-effective testing environment designed for NVMe SSD validation. It provides a flexible solution for developers, enabling comprehensive protocol, performance, power management, and out-of-band (OOB) management interface testing. It is fully integrated with PyNVMe3.
We do not sell this desktop test platform. Instead, users can purchase all necessary hardware components from the market, including:
- Desktop PC Motherboard
- PMU2 Interposer (Email Inquiry)
- Total Phase Aardvark I2C Host Adapter (Product Page)
This approach provides users with the flexibility to build a cost-effective and customizable testing environment.
3.1 Motherboard
The desktop PC motherboard is the foundation of the PyNVMe3 Desktop Test Platform, providing PCIe connectivity for NVMe SSD testing. A suitable motherboard should meet the following criteria:
- PCIe Gen5 support to ensure compatibility with the latest high-speed NVMe SSDs
- Stable power delivery for accurate power consumption and performance measurements
- Proper ventilation to maintain system stability during extended testing
One recommended motherboard for the PyNVMe3 Desktop Test Platform is:
- Model: Asus X670E ROG Crosshair Gene
- CPU: AMD Ryzen 5 7600X
- Memory: DDR5 16GB × 2
- System drive: 2.5-inch SATA SSD (running Ubuntu OS)
This configuration offers strong single-core performance, PCIe Gen5 support, and DDR5 compatibility, making it well-suited for enterprise NVMe SSD testing.
3.2 PMU2: Power Management Unit
PMU2 is a power management and monitoring unit designed for enterprise NVMe SSD testing. It provides precise power control, real-time power consumption monitoring, and support for out-of-band (OOB) management interface testing. It is compatible with PCIe Gen5 and supports both U.2 and M.2 SSDs, making it a versatile solution for SSD validation.
| Category | Features |
|---|---|
| PCIe | Supports PCIe Gen5 |
| | Supports both U.2 and M.2 SSDs |
| | Future support planned for E1.S SSDs |
| Power | Controls DUT power on/off |
| | Monitors DUT power consumption |
| OOB | Enables NVMe-MI OOB testing with Aardvark |
| | Independent AUX power control for enhanced OOB tests |
| | Built-in voltage level shifting on the M.2 version |

PMU2 is easy to set up and integrates seamlessly with the PyNVMe3 framework.
- Insert the DUT into the PMU2 test slot.
- Insert PMU2 into the motherboard.
- Connect PMU2 to the desktop test platform via the Type-C USB port (white USB cable in the image above).
- Connect PMU2 to the Total Phase I2C host adapter and then to the desktop test platform (black USB cable in the image above).
- Power on the desktop test platform.
PMU2 will be automatically recognized and is ready for PyNVMe3 tests.
3.3 I2C/I3C Adapter
The Aardvark I2C/SPI Host Adapter from Total Phase is a USB-to-I2C adapter that serves as the physical layer device for NVMe-MI out-of-band (OOB) testing over I2C/SMBus. Unlike traditional setups that rely on a baseboard management controller (BMC) in server environments, PyNVMe3 leverages this compact adapter to enable MI testing on standard desktop platforms. This eliminates the need for expensive server-grade infrastructure while maintaining full testing capabilities.

The Aardvark adapter plays a crucial role in NVMe-MI testing by facilitating SMBus communication between the test machine and the NVMe SSD. It enables sending and receiving NVMe-MI messages, as well as interacting with other protocols such as SPDM. This allows developers to construct and transmit custom packets, monitor responses, and inject errors at different layers, including the physical, transport, and protocol layers. Additionally, the adapter supports configurable controller settings, allowing adjustments to frequency and transfer unit sizes to match specific testing scenarios.
One of the key advantages of using Aardvark is its seamless integration with PMU2. Together, these components provide complete NVMe-MI test coverage, enabling a wide range of management interface validations. PyNVMe3 also allows simultaneous testing of both out-of-band (OOB) and in-band NVMe commands, making it possible to evaluate MI interactions while performing standard NVMe operations.
By using the Aardvark I2C/SPI Host Adapter, PyNVMe3 delivers a reliable and flexible infrastructure for NVMe-MI testing, eliminating the need for dedicated server systems while providing all the essential capabilities required for thorough management interface validation.
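MCTP-over-SMBus packets end with an SMBus Packet Error Code (PEC) byte: a CRC-8 with polynomial 0x07 and initial value 0x00, computed over the whole frame including the slave address. When hand-crafting packets for error-injection tests, this checksum is easy to compute by hand. The sketch below is illustrative only, not a PyNVMe3 API:

```python
def smbus_pec(data: bytes) -> int:
    """SMBus PEC: CRC-8 with polynomial x^8 + x^2 + x + 1 (0x07), init 0x00."""
    crc = 0x00
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift left; XOR in the polynomial whenever the MSB falls out
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

# Standard CRC-8 check value for "123456789"
print(hex(smbus_pec(b"123456789")))  # → 0xf4
```

Flipping the PEC byte before transmission is a simple way to exercise physical-layer error handling in tests like `08_mi_error_inject_test.py`.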
3.4 VDM Adapter

The VDM (Vendor Defined Message) Adapter provides an MCTP transport over the PCIe VDM path, enabling NVMe-MI and SPDM management interface validation on a desktop platform without external cabling.
Setup and usage:
- Insert the VDM adapter into the motherboard AIC slot; no additional cables are required.
- Use an AMD desktop platform.
- Run the dedicated VDM test:
  ```shell
  make test TESTS=./scripts/management/12_vdm_test.py
  ```
Supported platforms (we will add more models later):
| CPU | Motherboard |
|---|---|
| Ryzen 5 7600X | Asus X670E ROG Crosshair |
To route all MI tests over VDM, switch the transport fixture in `PyNVMe3/scripts/management/conftest.py` from SMBus to VDM:

```python
@pytest.fixture(scope="function")
def transport(smbus):
    return smbus
```

Change to:

```python
@pytest.fixture(scope="function")
def transport(vdm):
    return vdm
```
3.5 Management Test Suite
The scripts/management directory contains specialized test scripts for validating management functionalities, including basic management commands, MI, and SPDM.
| Test Script | Description |
|---|---|
| 01_mi_inband_test.py | Tests in-band management interface commands for NVMe devices. |
| 02_basic_mgmt_cmd_test.py | Validates basic NVMe management commands for legacy compatibility. |
| 03_mi_cmd_set_test.py | Provides comprehensive testing of the NVMe Management Interface Command Set. |
| 04_mi_admin_cmd_test.py | Focuses on administrative commands within the Management Interface. |
| 05_mi_control_primitive_test.py | Evaluates control primitives in NVMe devices via the Management Interface. |
| 06_mi_pcie_cmd_test.py | Targets PCIe-specific commands in the NVMe Management context. |
| 07_mi_feature_test.py | Tests features including endpoint buffer configurations and MCTP Transport Unit sizes. |
| 08_mi_error_inject_test.py | Injects errors into MCTP and MI packet headers to assess error handling. |
| 09_mi_stress_test.py | Stresses the device by interweaving MI with various commands for robust testing. |
| 10_mi_ocp_test.py | Validates OCP (Open Compute Project) NVMe-MI compliance. |
| 11_spdm_test.py | Tests SPDM (Security Protocol and Data Model) functionality over MCTP. |
| 12_vdm_test.py | Tests MCTP over PCIe VDM. |
We can start the management tests on the Desktop Test Platform using the following command:

```shell
make test TESTS=./scripts/management
```
3.6 Fixtures and API
PyNVMe3 provides the i2c, mi, and spdm fixtures, allowing users to send various types of request messages. These include basic management commands, NVMe-MI commands, NVMe admin commands, PCIe commands, control primitives, and SPDM commands. Below are examples demonstrating how to write management-related test scripts using PyNVMe3. For more details, please check the source code under PyNVMe3/scripts/management.
Example 1: Reading Drive Static Data
```python
def test_mi_spec_appendix_a_read_drive_static_data(i2c, buf):
    """Read drive static data (VID and serial number) via I2C."""
    i2c.i2c_master_write(i2c.ENDPOINT, [8], flags=pyaardvark.I2C_NO_STOP)
    buf[:] = i2c.i2c_master_read(i2c.ENDPOINT, 24)[:]
    logging.info(buf.dump(64))
```
Example 2: Retrieving SMART Log Page
```python
def test_mi_admin_get_log_page(mi, nvme0):
    """Retrieve SMART log page via NVMe-MI admin command."""
    resp = mi.nvme_getlogpage(2, length=20).receive()
    resp_data = resp.response_data(12)
    ktemp = resp_data.data(2, 1)
    logging.info("Temperature (via MI): %d Kelvin" % ktemp)
```
Example 3: SPDM Get Capabilities Command
```python
def test_spdm_get_capabilities_multiple(spdm):
    """Test SPDM GET_CAPABILITIES request with an invalid sequence."""
    spdm.get_version().receive()
    spdm.get_capabilities().receive()
    with pytest.warns(UserWarning, match="SPDM ERROR response, code: 0x04"):
        spdm.get_capabilities().receive()
```
3.7 SRIOV Testing
PyNVMe3 provides comprehensive support for testing the Single Root Input/Output Virtualization (SRIOV) feature. SRIOV allows a device, such as an NVMe SSD, to separate access to its resources among various PCIe hardware functions. These hardware functions consist of one Physical Function (PF) and one or more Virtual Functions (VFs).
In terms of testing, PyNVMe3 facilitates the creation of VFs and namespaces, and the attachment of namespaces to VFs. This is done through a demo script file located in the scripts/features folder. It’s important to note that before any SRIOV testing can be performed, virtual functions need to be enabled in the Device Under Test (DUT).
⚠️ Note: For SRIOV testing, the Linux kernel inbox NVMe driver must be enabled in the GRUB configuration.

```shell
make setup numvfs=5
make test TESTS=./scripts/features/sriov_test.py pciaddr=0000:01:00.0
```
PyNVMe3’s testing capabilities extend to executing tests on a single VF or multiple VFs simultaneously. This is carried out by using existing scripts and specifying the ID of the VF to be tested. Before running tests on VFs, we must create namespaces, attach them to VFs, and bring all VFs online.
```shell
make setup numvfs=5
make test TESTS=./scripts/test_utilities.py::test_create_ns_attach_vf
make test TESTS=./scripts/test_utilities.py::test_vf_init
```
Then we can run multiple tests in different Terminals:
```shell
make test TESTS=./scripts/benchmark/ioworker_stress.py pciaddr=0000:01:00.0 vfid=0 nsid=2
make test TESTS=./scripts/benchmark/ioworker_stress.py pciaddr=0000:01:00.0 vfid=1 nsid=1
make test TESTS=./scripts/benchmark/ioworker_stress.py pciaddr=0000:01:00.0 vfid=2 nsid=3
make test TESTS=./scripts/conformance/02_nvm pciaddr=0000:01:00.0 vfid=4 nsid=4
```
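Each `vfid` above corresponds to a PCIe function whose routing ID is derived from the PF's routing ID using the SR-IOV capability's First VF Offset and VF Stride fields: VF *n* sits at PF RID + offset + (n−1) × stride, per the PCIe SR-IOV specification. The sketch below illustrates that arithmetic; the offset/stride values in the example are illustrative, not read from real hardware:

```python
def vf_bdf(pf_bdf: str, vf_index: int, first_vf_offset: int, vf_stride: int) -> str:
    """Compute VF n's BDF from the PF's BDF (SR-IOV routing-ID arithmetic).

    vf_index is 1-based, as in the SR-IOV specification.
    """
    dom, bus, devfn = pf_bdf.split(":")
    dev, fn = devfn.split(".")
    # A routing ID packs bus[15:8], device[7:3], function[2:0]
    rid = (int(bus, 16) << 8) | (int(dev, 16) << 3) | int(fn, 16)
    rid += first_vf_offset + (vf_index - 1) * vf_stride
    return f"{dom}:{(rid >> 8) & 0xFF:02x}:{(rid >> 3) & 0x1F:02x}.{rid & 0x7}"

# Example: PF at 0000:01:00.0 with offset 1, stride 1 → first VF at function 1
print(vf_bdf("0000:01:00.0", 1, first_vf_offset=1, vf_stride=1))  # → 0000:01:00.1
```

On a live system the actual offset and stride can be read from the PF's SR-IOV capability (e.g. via `lspci -vvv`), and the kernel enumerates the resulting VF BDFs automatically.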
All these features make PyNVMe3 a powerful tool for validating and exploring the capabilities of SRIOV.
4. Server Test Platform
As customers scale up PyNVMe3 testing, they face challenges in exercising many SSDs efficiently while maintaining accuracy under heavy workloads. To address this, we developed the F6 Test Server—a high‑density NVMe SSD validation system engineered for peak performance, rock‑solid reliability, and long‑duration stress testing. Housed in a compact 3U chassis, F6 provides 12 independent PCIe Gen5 x4 slots with independent power control, hot‑swap capability, and precise power monitoring, enabling parallel testing at scale. Each DUT slot includes an independent power‑control module, so any test validated on the Desktop platform runs unchanged on F6. Hardened by years of production use, PyNVMe3 includes a comprehensive library of enterprise‑grade test scripts, making it a trusted solution for SSD R&D and QA, delivering scalable, efficient, and accurate NVMe SSD testing.

4.1 System Overview
The F6 Test Server features a well-balanced hardware configuration to ensure that no single component becomes a bottleneck. With high-performance computing, ample memory, high-bandwidth PCIe 5.0 slots, optimized airflow and cooling, and precise power delivery, the system allows each DUT to reach its own performance limits without constraints from the testing platform.

| Category | Label | Specification |
|---|---|---|
| **Chassis** | | |
| Form Factor | | 3U rack-mounted chassis |
| Dimensions | | 482.00mm (W) × 133.00mm (H) × 639.77mm (D) |
| Weight | | 14.8 kg |
| Cooling System | | 5 × 4-wire fans for high-efficiency cooling |
| **System** | | |
| Processor | | Intel® Xeon® Gold 6421N |
| Memory | | DDR5 ECC, 128GB or 256GB, 4400 MT/s, 2DPC |
| System Storage | (5), (6) | 2 × 2.5″ SATA SSDs (480GB, RAID 1 for OS, non-removable) |
| **Power** | | |
| Input Voltage | (7) | 100–240V AC, single-phase |
| Power Ratings | | 220V: 4.5A; 110V: 9.0A |
| Power Switch | (8) | On/off switch for system power |
| **Connectivity** | | |
| Video Output | (3) | 1 × VGA |
| USB Ports | (4) | 4 × USB 2.0 |
| Network | (1) | 1 × LAN |
| Management | (2) | 1 × IPMI (reserved) |
| **Compliance & Safety** | | |
| Certification | | CE safety certified |
| **Environmental Conditions** | | |
| Operating Temperature | | 5°C to 50°C |
Standard servers limit true SSD performance due to shared resources, inefficient cooling, and software bottlenecks. The F6 Test Server fixes this—every part works together to remove limits and unleash innovation.
| Did you know? | Standard servers often struggle with large-scale SSD testing, leading to inefficiencies in performance, cost, and time! |
|---|---|
| PCIe Bandwidth | Standard servers commonly use PCIe switches to connect multiple SSDs, which can introduce bandwidth contention and prevent each SSD from achieving full PCIe Gen5 x4 performance. This limitation becomes more severe in high-throughput testing scenarios. |
| CPU Resources | When multiple SSDs share limited CPU cores, resource contention can impact testing efficiency and accuracy. Adding more CPU sockets may increase total processing power but also introduces inter-socket communication overhead, which can degrade performance consistency. Managing this overhead requires precise test parameter configuration, making it a complex and inefficient approach that is generally not recommended. |
| Memory Bandwidth | While standard servers can be configured to provide sufficient memory bandwidth and capacity, achieving this requires precise calculations to ensure memory does not become a system bottleneck. PyNVMe3 relies on fast memory access and large CRC tables for real-time data integrity checks, necessitating well-optimized memory configurations to maintain test accuracy. |
| Thermal and Mechanical | High-performance SSDs generate significant heat under stress testing, but standard server cooling systems are typically designed for general-purpose workloads rather than continuous, high-power SSD operation. Insufficient cooling can lead to thermal throttling, affecting test consistency and reliability. |
| Software Overhead | Many SSD testing tools depend on kernel-based drivers, which introduce context-switching overhead and consume CPU resources. This can reduce the system’s ability to fully saturate SSD performance, particularly in high-concurrency testing environments. |
The F6 Test Server is designed to address these challenges with a purpose-built hardware and software solution, ensuring optimal PCIe bandwidth, dedicated CPU resources, precisely managed memory performance, advanced cooling, and a low-overhead user-space software.
4.2 DUT Slots
Each DUT slot in the F6 Test Server is independently controlled and optimized for PCIe Gen5 x4 performance, enabling high-speed SSD testing.

| Feature | Label | Description |
|---|---|---|
| **DUT Slots** | | |
| U.2 Form Factor | (1) | U.2 PCIe 5.0 slots (M.2 supported via adapter; E1.S support under development). |
| Eject Button | (2) | Allows SSDs to be inserted or removed without affecting other slots. Slot numbers are marked on the buttons. |
| **LED Indicators** | | |
| Power | (3) | Indicates SSD power status (green). |
| Activity | (4) | Displays SSD activity (blue). |
| Status | (5) | Testing in progress (yellow). |
| **Power Management** | | |
| Monitoring | | Real-time voltage and current monitoring with high accuracy (voltage: ±0.5%, current: ±1.5%). |
| Control | | Adjustable voltage range from 0.6V to 14.5V in 1mV increments. |
| Protection | | Overcurrent protection up to 6A per slot, preventing power surges. |
| System Power Button | (6) | Controls system power and provides status indication. |
4.3 Production Tests
The F6 Test Server streamlines enterprise SSD validation with pre‑built test scripts in PyNVMe3/scripts/production. These suites are ready to run as‑is or to customize, so teams can adapt workflows to product‑specific requirements while preserving repeatability and scale.
F6 supports long‑duration, large‑scale evaluations that sustain realistic, high‑pressure workloads and drive DUTs to their operational limits—without sacrificing measurement accuracy.
Included tests:
- `01_normal_io_test.py` – long‑duration sequential and random read/write baselines with multiple block sizes.
- `02_mix_io_test.py` – mixed IO size and read/write ratio sweeps with sequential/random phases.
- `03_data_model_test.py` – application data‑model workloads (cloud, SQL, CDN style mixes).
- `04_trim_format_test.py` – performance and stability with trim/format sequences and mixed IO.
- `05_small_range_test.py` – hot‑spot IO on constrained LBA ranges and small random regions.
- `06_jesd_workload_test.py` – JEDEC JESD219 client workload (full‑drive prep plus mixed IO).
- `07_power_cycle_test.py` – dirty/clean power‑cycle resilience across 1000 iterations.
- `08_wl_stress_test.py` – wear‑leveling stress with hot/cold data distribution.
- `09_io_stress_test.py` – multi‑namespace IO stress blended with admin/MI events and resets.
⚠️ Note: Production runs can enable and exercise multiple namespaces—each provisioned with distinct LBA formats (LBAF) and Protection Information (PI) settings—for broader coverage. Pre‑create the required namespaces on the DUT and format each to the target LBAF/PI profile before starting the production test; the suite will discover them and apply I/O‑stress across all namespaces.
Typical production runs last 1–3 weeks; per‑slot parallelism across multiple DUTs shortens wall‑clock time. F6 also enables cross‑vendor benchmarking in a single server, ensuring a consistent environment and eliminating variables across disparate setups.
4.4 Slot Configuration
The F6 Test Server uses a configuration file, slot.conf, to manage its 12 NVMe SSD test slots, allowing flexible and efficient test execution. Each slot configuration defines parameters for resource allocation, including:
- BDF Address – Specifies the PCIe bus, device, and function of the NVMe SSD, ensuring correct device mapping.
- CPU Affinity – Assigns a dedicated CPU core to each slot, minimizing resource contention and maximizing I/O efficiency.
- Test Case – Defines the specific Python test script and test case to be executed, enabling automated and repeatable SSD validation.
Example slot.conf configuration:

```
# slot_N = BDF, CPU, TESTS
slot_0 = 0000:6f:00.0, 0, ./scripts/production/01_normal_io_test.py::test_case1_16k_randrw_1day
slot_1 = 0000:70:00.0, 2, ./scripts/production/01_normal_io_test.py::test_case2_64k_seqrw_1hour
slot_2 = 0000:46:00.0, 4, ./scripts/production/02_mix_io_test.py::test_case4_mixrw_stress
```
In this example, slot 0 is mapped to PCIe address 0000:6f:00.0, assigned to CPU core 0, and runs a 16K random read/write test for one day. By specifying test parameters and cases for each slot, the F6 enables parallel execution of independent workloads, maximizing testing efficiency and resource utilization.
⚠️ Note: Users typically only need to modify the TESTS field to assign test cases.
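Each slot.conf line is a simple comma-separated record of the form `slot_N = BDF, CPU, TESTS`. The parser below is a hypothetical helper (not part of PyNVMe3) that illustrates the format:

```python
def parse_slot_conf(text: str) -> dict:
    """Parse slot.conf lines of the form: slot_N = BDF, CPU, TESTS."""
    slots = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        # TESTS may itself contain commas only in theory; split at most twice
        bdf, cpu, tests = (field.strip() for field in value.split(",", 2))
        slot_id = int(key.strip().removeprefix("slot_"))
        slots[slot_id] = {"bdf": bdf, "cpu": int(cpu), "tests": tests}
    return slots

conf = """\
# slot_N = BDF, CPU, TESTS
slot_0 = 0000:6f:00.0, 0, ./scripts/production/01_normal_io_test.py::test_case1_16k_randrw_1day
"""
print(parse_slot_conf(conf)[0]["cpu"])  # → 0
```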
With the slot.conf configuration file, we can simplify the test execution command line as follows:
```shell
make test slot=1
```
This command locates the BDF address of the test drive, the corresponding CPU core number, and the test case to be executed based on the slot.conf configuration. We can even use the following command line to start tests on all 12 SSDs simultaneously as background tasks.
```shell
for slot in {0..11}; do nohup make test slot=$slot & done
```
Since starting tests on 12 drives simultaneously will create a large number of log files in the results directory, we recommend clearing the results directory before testing.
4.5 Hot-Swap
The F6 Test Server fully supports hot-swap operations, allowing users to remove and replace NVMe SSDs in specific slots without disrupting other tests. Two dedicated commands, make pop and make push, streamline this process.
- Terminate the ongoing test: before removing an SSD, use the `make pop` command to pause any running tests on the slot, ensuring data integrity. Example, to stop tests on slot 3: `make pop slot=3`
- Replace the SSD: remove the SSD and insert a new drive.
- Register the new SSD: use `make push` to initialize the slot and detect the new SSD. Example, to register the SSD in slot 3: `make push slot=3`
- Restart the test: once registered, restart testing as usual: `make test slot=3`
Hot-swap support ensures minimal downtime by allowing independent management of each slot.
4.6 WebUI
When testing a single drive, we can monitor the log files to get the current test status. However, when testing many SSDs on multiple platforms, this method becomes inefficient. Therefore, we provide a professional WebUI to get the test status, performance, and other information of a specified platform and slot in the test cluster, as shown below.

This WebUI offers several significant benefits:
- Real-Time Monitoring: Through the WebUI, users can monitor all active DUTs (Devices Under Test) in real time, including their slot numbers, BDF (Bus, Device, Function) addresses, and model names. This allows users to quickly obtain the current status and location of each SSD, eliminating the need for manual searching and recording.
- Performance Charts: The WebUI provides detailed performance charts showing IOPS (Input/Output Operations Per Second) and throughput (in MB/s or GB/s) over a specified time range. Users can use these charts to rapidly assess SSD performance and identify potential bottlenecks or anomalies.
- Comparison Feature: The WebUI allows users to select and compare the performance of two DUTs. This is particularly useful for evaluating performance differences between SSD models, or the same model under different test conditions.
- Detailed Information: In addition to performance metrics, the WebUI provides other details about the selected DUT, such as firmware version, temperature, and health status. This gives users a comprehensive view of each SSD's operating condition and potential issues.
- Custom Monitoring Functions: The WebUI allows users to define their own monitoring functions. PyNVMe3 periodically collects data based on these custom functions and displays it on the WebUI, so users can tailor monitoring to their specific needs and keep the most relevant data at their fingertips.
- No Impact on Existing Tests: Importantly, the WebUI does not impact the functionality or performance of existing tests. Users can continue to run their tests as usual, with the added benefit of enhanced monitoring and data visualization.
By utilizing this professional WebUI, users can significantly improve the efficiency and accuracy of SSD testing, reduce the time spent on manual operations and monitoring, and ensure comprehensive and visualized test data. This is especially important in environments where many SSDs are tested across multiple platforms, aiding in centralized management and optimization of the testing process.
PyNVMe3’s WebUI uses Grafana to visualize data stored in QuestDB. This enables real-time monitoring of all DUT SSDs’ performance through customizable dashboards.
1. QuestDB
The F6 Test Server integrates QuestDB to enable real-time monitoring and logging of SSD test metrics through PyNVMe3.
PyNVMe3 automatically collects core SSD metrics, e.g. IOPS. Users only need to add custom monitors for vendor-specific data.
Example: Add Temperature Monitoring
```python
def get_temperature(nvme0):
    smart_log = Buffer(4096)
    nvme0.getlogpage(0x02, smart_log, 512).waitdone()  # SMART log page 0x02
    return int(k2c(smart_log.data(2, 1)))  # Kelvin → Celsius

# Monitor temperature every 5 seconds
nvme0.add_monitor("temperature", get_temperature, interval=5)

# Run test workload
nvme0n1.ioworker(time=15, read_percentage=100).start().close()
```
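The `k2c` helper above is PyNVMe3's Kelvin-to-Celsius conversion (the NVMe SMART composite temperature is reported in Kelvin). Outside the framework, an equivalent sketch is one line:

```python
def k2c(kelvin: float) -> float:
    """Convert a Kelvin temperature (NVMe SMART format) to Celsius."""
    return kelvin - 273.15

print(round(k2c(300), 2))  # → 26.85
```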
The F6 Test Server includes a pre-installed QuestDB (8.2.1) that automatically stores all monitoring data. No manual setup or configuration is required. Refer to the QuestDB official website for more details.
After a system reboot, start the service with:

```shell
./questdb/bin/questdb.sh start
```
2. Grafana
- Install Grafana: follow the official installation guide to install Grafana on your PC/Mac. Grafana runs locally on the Work PC to avoid resource contention on the test machines.
- Connect to QuestDB: in the Grafana web interface at `http://localhost:3000`, navigate to Menu > Connections > Data sources, then add a new data source by selecting PostgreSQL.
- Edit data source: use the following configuration:
  ```
  Name: PyNVMe3
  Host URL: <questdb-ip>:8812   # QuestDB in F6 Test Server
  Username: admin
  Password: quest
  TLS/SSL Mode: disable
  ```
- Save data source: click the "Save & Test" button to verify that the connection to QuestDB is successful.
- Build dashboards: import pre-configured dashboard templates for rapid deployment and visualization.
4.7 Application Scenarios
The F6 Test Server is built to tackle real-world SSD testing challenges, offering higher efficiency, better accuracy, and faster results than traditional servers or PC-based platforms. Whether for firmware development, troubleshooting, or performance benchmarking, F6 helps streamline the testing process and accelerate product validation.
Daily Build Testing – Keeping Up with Rapid Development
In fast-paced development cycles, every firmware or hardware update needs thorough validation. The F6 enables testing of multiple SSDs simultaneously, covering different capacities, over-provisioning settings, and configurations within a single system. This improves test coverage and speeds up CI workflows. With PyNVMe3’s automation capabilities, test execution is consistent and repeatable, reducing manual effort.
Problem Analysis – Faster Debugging and Performance Benchmarking
Diagnosing SSD issues often requires testing different firmware versions, models, and even products from multiple vendors. The F6 enables side-by-side comparisons, helping engineers quickly isolate problems and validate solutions. It is also ideal for performance benchmarking and competitor analysis, ensuring SSDs meet expected standards.
Bug Fix Verification – Ensuring Stability Before Release
Fixing one issue should not introduce another. The F6 allows for rigorous validation by running multiple SSDs in parallel, verifying fixes across different configurations. Regression testing ensures that new updates do not disrupt existing functionality, providing confidence before release.
5. Summary
PyNVMe3 spans three complementary platforms—Laptop, Desktop, and F6 Server—covering the arc from low‑power bring‑up to high‑density production validation. All platforms support power cycling and precise power monitoring; choosing the right platform ensures accurate, repeatable results and efficient use of lab resources.
- Laptop platform — When paired with the Quarch Power Analysis Module (PAM), it delivers high‑resolution power data, ideal for low‑power profiling, script prototyping, and reproducing field issues.
- Desktop platform — Provides stable thermals and ample CPU/memory for protocol, performance, and stress testing. With dedicated fixtures, it enables comprehensive out‑of‑band (SMBus, MI, SPDM) validation.
- F6 Server platform — Designed for high‑density, long‑duration runs with per‑slot power control and hot‑swap, enabling parallel validation at scale and consistent cross‑vendor benchmarking.
Recommended platform by goal
| Test Goal | Recommended Platform | Why it fits |
|---|---|---|
| Low‑power characterization | Laptop | High‑resolution measurements with Quarch PAM |
| Performance & stress | Desktop | Dedicated resources and better cooling for stable, repeatable throughput |
| OOB management (SMBus/MI/SPDM) | Desktop | Purpose‑built fixtures and MI tooling |
| High‑density / long‑duration | F6 Server | 12× PCIe Gen5 x4 slots, per‑slot power, hot‑swap |
| Cross‑vendor benchmarking | F6 Server | Consistent environment across many DUTs |
| CI / daily regression | F6 Server | Parallelism shortens wall‑clock and increases coverage |


