
Question: Proxmox + TrueNAS + 10GbE - VM Storage Performance Much Lower Than Expected

harmano46

Junior Member
Hello,

I’m in the process of refining my home lab setup and I’ve hit a performance issue that I can’t quite explain.

Current layout:
  • Host: Ryzen 7 5700X, 128GB DDR4 ECC UDIMM
  • Motherboard: B550 chipset
  • Hypervisor: Proxmox VE 8.x
  • Storage HBA: LSI 9300-8i (IT mode)
  • Pool: 6× 8TB SATA HDD (RAIDZ2)
  • Special vdev: 2× NVMe (mirrored, used as a metadata special device)
  • Network: Mellanox ConnectX-3 10GbE (direct attach)
TrueNAS SCALE is running as a VM under Proxmox. The HBA is passed through via PCIe passthrough (IOMMU enabled, no grouping issues). TrueNAS pool health is good, no SMART errors.

The issue:
When testing directly inside the TrueNAS VM (local pool), I see expected sequential speeds (900–1000 MB/s read, 600–700 MB/s write).

However, when accessing the same storage over SMB from another 10GbE machine, I’m only getting 350–400 MB/s reads and 250–300 MB/s writes.

Things I’ve already checked:
  • MTU 9000 enabled on both ends
  • Verified 10Gb link is negotiated
  • CPU usage during transfer is low (<20%)
  • No packet loss or drops on interface
  • SMB multichannel disabled (testing simple case first)
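For context on how far below the wire those SMB numbers sit, here is a quick back-of-the-envelope check (the 400 MB/s figure is the top of the read range reported above):

```shell
# 10 Gbit/s = 10 * 1000 / 8 = 1250 MB/s theoretical line rate; SMB over TCP
# typically tops out around 1.1 GB/s once protocol overhead is accounted for.
line_rate_mbs=$(awk 'BEGIN { printf "%.0f", 10 * 1000 / 8 }')
observed_read=400   # MB/s, best-case SMB read from the tests above
util=$(awk -v o="$observed_read" -v l="$line_rate_mbs" \
         'BEGIN { printf "%.0f", o / l * 100 }')
echo "line rate: ${line_rate_mbs} MB/s, observed: ${observed_read} MB/s (~${util}% of line rate)"
```

Sitting at roughly a third of line rate, well below both the link and the pool's local numbers, suggests a protocol- or stack-level limit rather than raw bandwidth; an iperf3 run between the two hosts would confirm whether bare TCP can fill the link before SMB enters the picture.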
I’m trying to determine whether this is:
  1. Virtualization overhead
  2. SMB tuning issue
  3. ZFS sync / recordsize mismatch
  4. Something related to Proxmox bridge configuration
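On hypothesis 2, a few Samba options are commonly experimented with via TrueNAS SCALE's SMB "Auxiliary Parameters". These are illustrative starting points to try one at a time, not verified fixes for this particular setup:

```ini
# Illustrative smb.conf experiments (TrueNAS SCALE: SMB service > Auxiliary Parameters).
# Test one change at a time and re-benchmark between changes.
server multi channel support = yes
socket options = TCP_NODELAY
# Async I/O thresholds; recent Samba releases already default both to 1.
aio read size = 1
aio write size = 1
```

Re-enabling multichannel only makes sense once the single-stream case is understood, since it can mask a per-stream bottleneck rather than explain it.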
Has anyone seen this type of drop when virtualizing TrueNAS under Proxmox with HBA passthrough? I’m trying to understand where the bottleneck most likely is before I start re-architecting.

Appreciate any insight.
 
Is that overall or per-core? You could easily be hitting a single-core limit on some software not designed for multi-threading.

That is overall CPU usage across all cores. I also checked per-core usage during transfers and did not see any single core staying at 100% consistently. Most cores remain fairly balanced.

That said, I understand some parts of the network or storage stack can still hit a single-thread limit, so I will monitor per-core load more closely during the next round of testing.
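For that next round, per-core load can be sampled without extra tools by diffing /proc/stat twice, a rough stand-in for `mpstat -P ALL 1` from sysstat (assumes a Linux shell on the host or inside the TrueNAS VM):

```shell
# Per-core busy% over a one-second window, computed from two /proc/stat samples.
before=$(grep '^cpu[0-9]' /proc/stat)
sleep 1
after=$(grep '^cpu[0-9]' /proc/stat)
percore=$(paste <(echo "$before") <(echo "$after") | awk '{
    # Fields 2-11 are the first sample (user..guest_nice), 13-22 the second;
    # fields 5/6 and 16/17 are idle+iowait, everything else counts as busy.
    busy1 = 0; busy2 = 0
    for (i = 2; i <= 11; i++)  if (i != 5 && i != 6)   busy1 += $i
    for (i = 13; i <= 22; i++) if (i != 16 && i != 17) busy2 += $i
    idle1 = $5 + $6; idle2 = $16 + $17
    total = (busy2 + idle2) - (busy1 + idle1)
    if (total > 0) printf "%s %.0f%% busy\n", $1, (busy2 - busy1) * 100 / total
}')
echo "$percore"
```

Any single core pinned near 100% during a transfer while the overall average stays low would point at a single-threaded bottleneck (for example one smbd process, or the interrupt handling for a single NIC queue).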

Thanks for pointing that out.
 