Many people don’t actually understand how Fibre Channel (FC) works. That’s why we hosted the FCIA webcast Fibre Channel Fundamentals. If you missed the live event, it’s available on-demand. Some great questions came up during the webcast and our experts have provided the answers here.

Q: Do the ASICs in the switches that handle FC frames for SCSI packets also handle FC frames that have NVMe packets, or is it separate hardware?
A: It is the same ASIC; there is no separate hardware. When a device logs into the switch, it registers with a specific FC-4 type (FICON, FCP/SCSI, or NVMe), and the switch passes the traffic over the same hardware.
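To make the point concrete, here is a minimal software sketch (illustrative only; this is not how switch ASICs are implemented): forwarding is driven by the destination ID in the FC frame header, while the FC-4 TYPE byte merely identifies the upper-layer protocol. The TYPE code values below reflect the common FC-4 type assignments as we understand them; treat them as illustrative.

```python
# Illustrative only: a software model, not switch firmware. Forwarding uses
# the 24-bit destination ID (D_ID); the FC-4 TYPE byte (FICON, FCP/SCSI, or
# NVMe) identifies the upper-layer protocol but does not change the path.

FC4_TYPES = {
    0x08: "FCP (SCSI)",  # SCSI over Fibre Channel
    0x1B: "FICON",       # FC-SB, mainframe channel traffic
    0x28: "NVMe",        # FC-NVMe
}

def forward(header: bytes) -> tuple[int, str]:
    """Inspect a 24-byte FC frame header and return (D_ID, FC-4 protocol)."""
    d_id = int.from_bytes(header[1:4], "big")  # D_ID occupies bytes 1-3
    fc4 = header[8]                            # TYPE field is byte 8
    return d_id, FC4_TYPES.get(fc4, f"unknown (0x{fc4:02x})")
```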

Q: Does FC-NVMe require Gen6 (32GFC) HBAs and storage connectivity?
A: Yes, FC-NVMe requires Gen6 HBAs, but the fabric can be either 16GFC or 32GFC/128GFC (Gen6).

Q: Can NVMe over FC Fabrics and SCSI over FCP network traffic coexist? Does the support depend on HBA implementation? What are latency goals for NVMe over FC fabrics?
A: Yes, both FCP/SCSI and NVMe can coexist coming from the same server, on the same wire, and on the same fabric. And yes, support starts at the HBA level. The expectation is that FC-NVMe will decrease latency by orders of magnitude compared to FCP/SCSI.
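As a hedged illustration of HBA-level support: on a Linux host with the Emulex lpfc driver, a single module parameter selects which FC-4 types the HBA registers at fabric login. The sysfs path and values below reflect our reading of the lpfc documentation (1 = FCP only, 3 = FCP and NVMe); other HBA vendors expose equivalent knobs, so check your vendor’s documentation.

```python
# Hedged example: reads the lpfc module parameter that selects the FC-4
# types an Emulex HBA registers on fabric login. Values per lpfc docs:
# 1 = FCP (SCSI) only, 3 = FCP and NVMe on the same port.
from pathlib import Path

PARAM = Path("/sys/module/lpfc/parameters/lpfc_enable_fc4_type")

def enabled_fc4_types() -> str:
    value = int(PARAM.read_text().strip())
    return {1: "FCP only", 3: "FCP + NVMe"}.get(value, f"unrecognized ({value})")

if __name__ == "__main__":
    print("lpfc FC-4 types:", enabled_fc4_types())
```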

Q: The main slow-drain issue I have seen is speed mismatch. If we are migrating to NVMe-oF from iSCSI, then the 4/8/16GFC devices will become the issue, slowing down NVMe. Those 4/8/16GFC and even 2GFC devices will be running SCSI, so shouldn’t we keep the two environments separate?
A: Yes. A speed mismatch, such as a 4GFC server talking to 16GFC storage, has been known to cause congestion leading to backpressure on ISLs, simply because the 4GFC device asks for more data than it can handle from the faster 16GFC storage.
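A toy model of that scenario (assumed round numbers, not a measurement) shows how the slow host ends up throttling the fast storage port once the egress queue pins at its credit limit:

```python
# Toy, assumption-laden model: a 16GFC storage port feeding a 4GFC host
# through one switch egress queue. Rates and credit count are illustrative.

STORAGE_RATE = 16        # frames the storage can source per tick
HOST_RATE = 4            # frames the slow host can drain per tick
EGRESS_CREDITS = 32      # buffer-to-buffer credits on the egress port

queue = 0
for tick in range(10):
    # Storage sends only while the egress port still grants credits.
    arrivals = min(STORAGE_RATE, EGRESS_CREDITS - queue)
    queue += arrivals
    queue -= min(HOST_RATE, queue)   # slow host returns R_RDYs at 4G pace
    print(f"tick {tick}: queued={queue}, admitted={arrivals}")
# Once the queue pins at the credit limit, the storage port is throttled to
# the host's rate, and the backpressure spreads upstream onto shared ISLs.
```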

Mixing speeds in a fabric isn’t necessarily a bad thing, but best practice is to get older, slower servers off the faster fabrics, or to section them off so they communicate only with devices of comparable speed. As for migrating from FCP/SCSI to NVMe, Fibre Channel is best suited for that, since most high-performing SCSI implementations are on Fibre Channel.

Q: How does the latency of FC on Brocade/Cisco SAN switches compare to the latency of LAN switches running Ethernet, or of InfiniBand?
A: InfiniBand is known to have the lowest latency, whereas FC and Ethernet may achieve similar latencies depending on the architecture and on what the application is doing. As it relates to storage, however, Fibre Channel is purpose-built for storage traffic, with a rich history and an ecosystem of vendors best positioned to carry FC-NVMe.

Q: Do you have any benchmarks comparing NVMe-FC (32GFC) and RDMA (40G & 100G)?
A: We don’t have any benchmarks at this time comparing Gen6 (32/128GFC) to RDMA (40G and 100G).

Q: You are still constrained by the FC protocol (BB credits, etc.) on ports and ISLs, so the bottleneck becomes the number of BB credits, not latency, right?
A: Note that buffer-to-buffer (BB) credits are what make Fibre Channel lossless and the most deterministic, resilient, and high-performing storage protocol for the world’s most mission-critical applications. While misbehaving end devices can cause BB credit delays or loss of BB credits, leading to latency bottlenecks, there are tools built into the ASICs, firmware, and software to identify BB credit issues. For instance, some Fibre Channel switches can detect slow-drain edge devices and modify traffic behavior to mitigate issues in the fabric. In summary, yes, a lack of BB credits due to a slow-drain or misbehaving device can cause a latency bottleneck, but there are built-in tools to detect and automatically remedy these issues before they impact performance.
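A simplified sketch of the detection idea: fabric telemetry samples how long each port sits at zero transmit credits, and ports exceeding a threshold are flagged as slow-drain suspects. Real switches implement this in the ASIC and firmware; the port names, telemetry shape, and threshold below are invented for illustration.

```python
# Simplified sketch, not vendor software: flag ports whose zero-credit
# dwell time in a telemetry interval exceeds a (hypothetical) threshold.

ZERO_CREDIT_THRESHOLD_MS = 2.0   # hypothetical per-interval budget

def flag_slow_drain(port_stats: dict[str, float]) -> list[str]:
    """Return ports whose zero-credit dwell time exceeds the threshold."""
    return [port for port, zero_ms in port_stats.items()
            if zero_ms > ZERO_CREDIT_THRESHOLD_MS]

# Example interval: port "2/14" spent 5.1 ms unable to transmit.
print(flag_slow_drain({"1/3": 0.1, "2/14": 5.1, "2/15": 0.4}))  # ['2/14']
```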

Beyond slow drain, the question assumes exhaustion of BB credits per link. It’s useful to remember that once a frame reaches its destination and its receive buffer is freed, an R_RDY primitive is sent back to the sender. Lower latency in this case means that the BB credit pool is replenished sooner. BB credits buy you distance, not speed, and in normal operations (slow-drain situations are not considered “normal” operations) it is unusual to exhaust BB credits in edge-core environments inside a data center.
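To put numbers on “credits buy you distance”: a sender needs enough BB credits to cover the frames in flight during one round trip. The sketch below uses rule-of-thumb assumptions (approximate line rates, ~5 µs/km propagation in fiber, full-size 2,148-byte frames), not vendor sizing guidance.

```python
# Worked example under stated assumptions: credits needed to keep a link
# streaming equal the data in flight over one round trip / frame size.

RATES_MBPS = {"8GFC": 800, "16GFC": 1600, "32GFC": 3200}  # ~payload MB/s
FRAME_BYTES = 2148            # maximum Fibre Channel frame size
RTT_S_PER_KM = 2 * 5e-6       # ~5 microseconds per km, each way

def credits_needed(speed: str, km: float) -> int:
    in_flight = RATES_MBPS[speed] * 1e6 * RTT_S_PER_KM * km
    return max(1, round(in_flight / FRAME_BYTES))

for speed in RATES_MBPS:
    print(speed, "over 10 km needs ~", credits_needed(speed, 10), "credits")
# 32GFC at 10 km works out to roughly 150 credits, but at typical in-room
# distances the same math yields only a handful, which is why edge-core
# links rarely exhaust credits in normal operation.
```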

If you’re interested in learning more about Fibre Channel, check out these on-demand FCIA webcasts: Introducing Fibre Channel NVMe and How to Use the Fibre Channel Speedmap. You can also register for our next FCIA webcast on August 29th, Deep Dive into NVMe over Fibre Channel.