Our FCIA live webcast, Introducing Fibre Channel NVMe, provided attendees with a solid foundation on how Fibre Channel (FC) and NVMe work together and why someone might want to consider using FC for NVMe-based solutions. If you missed the live event, you can watch it on-demand at your convenience. We answered a lot of great questions during the live webcast, but we didn’t have time to get to them all, so as promised, here they all are:

Q. Has NVMe over fabrics been run over SAN extension networks?  Are there issues with the additional WAN latency of a FCIP network with respect to NVMe fabrics?
A. NVMe is a latency-sensitive, block-based protocol, which means its initial deployments should be as close to the hosts as possible; the additional WAN latency of an FCIP link works against that. SAN extension for NVMe over fabrics is currently a work in progress.

Q. What’s the ballpark % latency reduction that has been observed using NVMe vs. SCSI?
A. At this time, none of the vendors has published official comparison numbers.

Q. Can you talk about how NVMe boot works and if that is something that is supported or can NVMe disks only be used as data LUNs?
A. That’s a vendor support issue. Some vendors support boot from SAN under Fibre Channel (SCSI) today, and FC-NVMe boot support will be best determined by individual vendors’ features.

Q. Does NVMe benefit more than SCSI from multiple paths? Do the number of queues on the node need to be in sync with the number of paths?
A. Theoretically, yes. Because of NVMe’s multi-queue design, it’s possible to have more robust path management. Queue counts are negotiated per connection, so they do not need to be in sync with the number of paths.
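
To make the queue/path relationship concrete, here’s a toy Python sketch (purely illustrative, not a real multipath driver): each path carries its own negotiated set of queues, the counts need not match, and commands can be spread across all of them.

```python
# Conceptual sketch only: each path to an NVMe controller carries its
# own set of I/O queues, negotiated per connection, so the counts do
# not have to line up across paths.
import itertools

class Path:
    def __init__(self, name, num_queues):
        self.name = name
        # Each path/association negotiates its own queue count.
        self.queues = [f"{name}-ioq{i}" for i in range(1, num_queues + 1)]

def all_queues(paths):
    """Flatten every queue on every path into one scheduling pool."""
    return [q for p in paths for q in p.queues]

paths = [Path("pathA", 4), Path("pathB", 8)]  # unequal counts are fine
rr = itertools.cycle(all_queues(paths))

# Round-robin six commands across all queues on all paths.
for cmd in range(6):
    print(f"command {cmd} -> {next(rr)}")
```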

Q. What are the OS vendors supporting NVMe-oF / NVMe today? RedHat, Novell-SuSE, Microsoft-Windows?
A. NVMe-oF support is in the upstream Linux kernel. Individual Linux distributions and Microsoft will release their own supported versions of NVMe-oF on their own, independent schedules, so those vendors should be watched individually. FC-NVMe support should also be included in the most recent Linux kernels.
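
As a practical illustration, here’s a rough sketch of how a Linux host with FC-NVMe support might use the nvme-cli tool’s FC transport to discover and connect. The WWN values and subsystem NQN below are made up, and the exact steps depend on your kernel, HBA driver, and distribution.

```python
# Sketch: driving the Linux nvme-cli tool (nvme-cli package) over the
# FC transport. Requires a kernel and HBA driver with FC-NVMe support;
# the WWN values and NQN below are hypothetical.
import subprocess

host = "nn-0x20000090fa942779:pn-0x10000090fa942779"    # local HBA port (hypothetical)
target = "nn-0x203900a098cbcac6:pn-0x203d00a098cbcac6"  # target port (hypothetical)

# Query the discovery controller for subsystems behind this target port.
subprocess.run(["nvme", "discover", "-t", "fc",
                "-a", target, "-w", host], check=True)

# Connect to one advertised subsystem by its NQN (hypothetical name).
subprocess.run(["nvme", "connect", "-t", "fc",
                "-n", "nqn.1992-08.com.example:subsystem-01",
                "-a", target, "-w", host], check=True)
```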

Q. Can you explain more about the discovery / connectivity? Particularly: When an FC host discovered the NVMe discovery controller from fabric SNS, is there a secondary discovery happening to discover further lists of FC-NVMe N-Ports? Or is the discovery controller the only N-port on the fabric?
A. The Name Server does the primary discovery for Fibre Channel. Individual targets do further discovery for NVMe subsystems.
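
Conceptually, the two stages look something like the sketch below (illustrative Python only; real discovery happens in the HBA driver and kernel). Stage one asks the FC Name Server which N_Ports have registered the NVMe FC-4 feature; stage two connects to each target’s discovery controller (the well-known NQN nqn.2014-08.org.nvmexpress.discovery) and reads its Discovery Log Page to learn the actual subsystems.

```python
# Conceptual model of the two-stage discovery described above.
# All port IDs, WWPNs, and NQNs are fabricated for illustration.

# Stage 1: the FC Name Server returns target N_Ports that have
# registered the NVMe FC-4 feature.
fabric_name_server = {
    "0x010200": {"wwpn": "10:00:00:00:c9:00:00:01", "fc4_nvme": True},
    "0x010300": {"wwpn": "10:00:00:00:c9:00:00:02", "fc4_nvme": False},  # SCSI-only
}
nvme_ports = [pid for pid, e in fabric_name_server.items() if e["fc4_nvme"]]

# Stage 2: the host connects to each target's discovery controller and
# reads the Discovery Log Page to find the NVM subsystems it exposes.
discovery_log = {
    "0x010200": ["nqn.1992-08.com.example:subsys-a",
                 "nqn.1992-08.com.example:subsys-b"],
}
for port in nvme_ports:
    for subsys_nqn in discovery_log.get(port, []):
        print(f"N_Port {port} exposes subsystem {subsys_nqn}")
```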

Q. What is the relationship between FC Name server and NVMe address scheme?
A. There isn’t a direct relationship between the NVMe address scheme and the FC Name Server. The NVMe subsystem ports are associated with World Wide Names, which in turn are registered with the FC Name Server. The NVMe address scheme relates to how the NVMe controller maps to namespaces and Namespace IDs (NSIDs).
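
A rough data model may help (illustrative Python with hypothetical names and WWNs): the fabric addresses subsystem ports by their WWNs, while NSIDs only have meaning inside the subsystem once you’re connected.

```python
# Rough data model of how the identifiers relate (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Namespace:
    nsid: int          # Namespace ID, meaningful inside the subsystem
    size_blocks: int

@dataclass
class SubsystemPort:
    wwpn: str          # World Wide Port Name: what the FC Name Server sees
    wwnn: str

@dataclass
class NvmSubsystem:
    nqn: str                                        # NVMe Qualified Name
    ports: list = field(default_factory=list)       # addressed via FC WWNs
    namespaces: list = field(default_factory=list)  # addressed via NSIDs

subsys = NvmSubsystem(
    nqn="nqn.1992-08.com.example:subsys-a",         # hypothetical NQN
    ports=[SubsystemPort(wwpn="10:00:00:90:fa:94:27:79",
                         wwnn="20:00:00:90:fa:94:27:79")],
    namespaces=[Namespace(nsid=1, size_blocks=2_000_000)],
)
# The fabric routes on the WWPN; NSIDs only matter once connected.
print(subsys.ports[0].wwpn, "->", [ns.nsid for ns in subsys.namespaces])
```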

Q. Do blade servers like Cisco UCS support NVMe? Can I run bi-directionally? (e.g., SCSI FC devices to FC-NVMe and FC-NVMe to SCSI FC devices)
A. If you’re talking about bridging between NVMe and SCSI, there is no standard capability for that. Some vendors may support such a solution, however, outside the FC and NVMe technical standards.

Q. Will FC-NVMe improve bandwidth to an NVMe array vs. FC to a standard array of SSDs?
A. Individual bandwidth between any two devices would not see much of an increase. Because of the nature of NVMe queuing, however, FC-NVMe may result in higher aggregate bandwidth in the system.

Q. Can NVMe initiator talk to FC Target?
A. This is similar to the bridging question above. NVMe initiators that communicate via a FC connection, using FC-NVMe, can “talk to” a FC-NVMe target.

Q. Better bandwidth vs. non-NVMe SSDs?
A. Point-to-point bandwidth between devices is independent from the upper-layer protocol, whether you’re using NVMe or SCSI. The bandwidth will always be limited by the network topology.

Q. I frequently see FC ports transmitting at line rate (8Gbps or even 10Gbps), so if the SCSI protocol can send/receive data at line rate, where does the performance improvement come from?
A. This is a question that involves the system under test as a whole, and the answer is a combination of bandwidth, latency, and SCSI/NVMe protocol efficiencies. The line rate of a protocol transport is only one aspect of performance gains.

Q. Is Identify command in NVMe the same as SCSI Inquiry command in SCSI to get the disk size+other information or does Identify command do more than that?
A. There are similar mechanisms in NVMe. The Identify command returns controller and namespace data structures, including namespace size and capacity, so it plays a role comparable to SCSI INQUIRY (and READ CAPACITY), and it reports considerably more than size information.
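
For example, the Identify Namespace data structure begins with the namespace size (NSZE), capacity (NCAP), and utilization (NUSE) fields, per the NVMe base specification. Here’s a small Python sketch of pulling those out of the raw 4 KB structure; the buffer below is fabricated for illustration.

```python
# Sketch: extracting size fields from an NVMe Identify Namespace data
# structure (4096 bytes, little-endian), roughly the information a SCSI
# INQUIRY / READ CAPACITY pair would give. Offsets follow the NVMe base
# specification: bytes 0-7 NSZE, 8-15 NCAP, 16-23 NUSE (logical blocks).
import struct

def parse_identify_ns(buf: bytes):
    nsze, ncap, nuse = struct.unpack_from("<QQQ", buf, 0)
    return {"size_blocks": nsze,
            "capacity_blocks": ncap,
            "utilized_blocks": nuse}

# Fake 4 KiB buffer standing in for a real Identify Namespace response.
fake = struct.pack("<QQQ", 2_000_000, 2_000_000, 500_000).ljust(4096, b"\0")
print(parse_identify_ns(fake))
```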

Q. Does NVMe support different QoS policies?
A. QoS policies are an aspect of the underlying network technology rather than of the NVMe protocol itself.

Q. NVMe is in the FC payload, right?
A. Correct.

Q. Does FC-NVMe improve bandwidth, or just increase IOPS and lower latency?
A. NVMe helps with latency and IOPS more than bandwidth.

Q. Do you have to zone the NVMe Discovery with all the NVMe hosts as well?
A. Zoning relates to the relationship between a host/initiator and storage/target, and remains the same regardless of using a SCSI or NVMe upper layer protocol.
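
In other words, a zone is simply a grouping of port WWNs, and the same zone passes SCSI or NVMe traffic unchanged. A toy Python illustration (with hypothetical WWPNs):

```python
# Illustrative only: a zone is just a grouping of port WWNs. The same
# zone carries SCSI or NVMe traffic; nothing NVMe-specific is added.
zone_cfg = {
    "zone_host1_array1": [
        "10:00:00:90:fa:94:27:79",  # host HBA WWPN (hypothetical)
        "20:3d:00:a0:98:cb:ca:c6",  # array target WWPN (hypothetical)
    ],
}

def can_talk(cfg, wwpn_a, wwpn_b):
    """Two ports can communicate if some zone contains both."""
    return any(wwpn_a in members and wwpn_b in members
               for members in cfg.values())

print(can_talk(zone_cfg,
               "10:00:00:90:fa:94:27:79",
               "20:3d:00:a0:98:cb:ca:c6"))  # True, for SCSI or NVMe alike
```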

Q. FC B2B credits are still in play, right?  Nothing changes there.  However, does NVMe have any way to allow the initiator to tell the target to “slow down” the delivery of data, due to FC Network congestion or slowdrain?  Are there any “unused bits” in the protocol to allow for future customization?
A. You are correct. B2B credits are handled on the Fibre Channel layer below the FCP/NVMe layer. There is flow control at the protocol layer, but that wouldn’t have a direct impact on inter-switch congestion. With respect to “unused bits,” there is plenty of room for future expansion.
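
To illustrate the protocol-layer flow control: each NVMe completion entry carries the controller’s current submission queue head pointer, and the host cannot submit past a full queue. The toy Python model below (illustrative only) shows that pacing; note it is per-queue back-pressure on the host, not fabric congestion management.

```python
# Toy model of NVMe's per-queue flow control: the host may only submit
# while the submission queue has free slots, and each completion carries
# the controller's current SQ head pointer, which is what frees slots.

QUEUE_DEPTH = 4  # a circular queue of depth N holds N-1 usable entries

class SubmissionQueue:
    def __init__(self, depth):
        self.depth, self.head, self.tail = depth, 0, 0

    def free_slots(self):
        return self.depth - ((self.tail - self.head) % self.depth) - 1

    def submit(self, cmd):
        if self.free_slots() == 0:
            raise RuntimeError("queue full: host must wait for completions")
        self.tail = (self.tail + 1) % self.depth
        print(f"submitted {cmd}, tail={self.tail}")

    def complete(self, new_head):
        # The completion entry's SQ Head field tells the host which
        # slots the controller has consumed.
        self.head = new_head
        print(f"completion: head advanced to {self.head}")

sq = SubmissionQueue(QUEUE_DEPTH)
for i in range(3):
    sq.submit(f"cmd{i}")  # fills the queue
sq.complete(2)            # controller reports it consumed two slots
sq.submit("cmd3")         # now there is room again
```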

Q. Where is the NVMe discovery controller living?
A. For FC, the standard defines the discovery controller as part of the FC-NVMe target. For other NVMe-oF technologies, the discovery process will be handled differently.

Q. So they support all speeds like 4/8/16/32G?
A. As part of the standard, yes, but it will depend on vendor implementation and qualification of individual product lines.

Q. Is this the same for FCoE?
A. There’s no difference between NVMe in an FC frame, whether the transport is Ethernet (FCoE) or “native” Fibre Channel.

Q. What does it mean PRACTICALLY that FCP is hardware accelerated by an HBA? If we claim that, we can also say that Ethernet is hardware accelerated by a NIC, and so on…
A. That’s true, and there are hardware-accelerated NICs as well.

Q. So no special HBAs or FC-switches required to support FC-NVMe?  Please confirm.
A.  FC-NVMe requires HBAs that can process NVMe commands into the FCP protocol. There are no additional requirements for Fibre Channel switches, however.

Q. I think it would be good to see advanced content, and not high-level only.
A.  You’re in luck! Watch this space. We’ll be offering an Advanced FC-NVMe webinar very soon. Follow us on @FCIAnews for info on dates. And if you want to learn more about Fibre Channel Fundamentals, register here for our FCIA webcast on June 15th.