Experts Answer Questions on All Things FC-NVMe
As FC-NVMe (NVMe over Fibre Channel) prepares for its official launch, there have been numerous questions about how the technology works, how it gets implemented, and what makes it special compared to traditional, SCSI-based Fibre Channel. That’s why the FCIA hosted a live webcast, “Deep Dive into NVMe over Fibre Channel,” which is now available on-demand. As promised, we’ve compiled an extensive Q&A from the questions we received from attendees. Here are our experts’ answers:
Q: What latency should we expect with NVMe over FC on 32GFC, excluding the drive latency itself?
A: Many factors come into play for latency and will vary by product. This question should be directed to the individual equipment vendors.
Q: NVMe-oF has its CONNECT fabric command that creates an association with a “controller” in an NVMe-oF subsystem. FC-NVMe, for its FC-4 link service, has its own Create Association and Create I/O Connection NVMe LS requests. What is the thinking behind FC-NVMe needing its own association/connection link service semantics?
A: There are two layers of connect: an NVMe over Fabrics layer and a transport layer. The FC-4 link service connect is the transport-layer (Fibre Channel, in this case) connect and sets up the transport-specific connection requirements. The NVMe over Fabrics layer connect still takes place at the NVMe layer.
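To illustrate the ordering of the two layers, here is a hypothetical sketch; the function names below are made up for illustration and do not come from the FC-NVMe specification or any real driver:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical stubs -- illustrative names only, not a real API. */
static void fc_ls_create_association(uint64_t target_wwpn)
{ printf("FC-4 LS: Create Association to 0x%llx\n", (unsigned long long)target_wwpn); }
static void fc_ls_create_io_connection(unsigned qid)
{ printf("FC-4 LS: Create I/O Connection for queue %u\n", qid); }
static void nvme_fabrics_connect(const char *subnqn, unsigned qid)
{ printf("NVMe-oF: Connect to %s, queue %u\n", subnqn, qid); }

int main(void)
{
    /* 1. Transport layer (Fibre Channel): FC-4 link services set up the
     *    transport-specific association and per-queue connection. */
    fc_ls_create_association(0x21000024ff123456ULL);
    fc_ls_create_io_connection(0);

    /* 2. NVMe over Fabrics layer: the NVMe Connect command still happens
     *    at the NVMe layer, associating the host with a controller. */
    nvme_fabrics_connect("nqn.2014-08.org.example:subsystem1", 0);
    return 0;
}
```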
Q: How are multiple queues supported in an NVMe-oF environment? Do HBA vendors have to implement SQ and CQ support in their drivers? Do target vendors also have to implement SQ and CQ support on their side?
A: Yes, multiple queues are supported; this is a basic feature of NVMe. The location of SQ and CQ support depends on the implementation; it could be in the driver or in the HBA firmware. And yes, SQ and CQ support is required on the target controllers; otherwise it is not an NVMe device.
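As a rough structural sketch of what a queue pair looks like (field layouts abbreviated; these are not the exact 64-byte SQE / 16-byte CQE encodings from the NVMe specification):

```c
#include <stdint.h>

/* Simplified sketch of NVMe queue pairs. */
struct sq_entry {            /* submission queue entry (64 bytes in the spec) */
    uint8_t  opcode;
    uint16_t command_id;
    uint32_t nsid;           /* namespace this command targets */
    /* ... data pointers, command dwords ... */
};

struct cq_entry {            /* completion queue entry (16 bytes in the spec) */
    uint16_t command_id;     /* matches the submitted command */
    uint16_t status;
};

struct queue_pair {          /* one SQ/CQ pair, commonly one per CPU core */
    struct sq_entry *sq;
    struct cq_entry *cq;
    uint16_t depth;
};
```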
Q: Can you explain where the lower latency comes from?
A: I am assuming the question is, “What makes NVMe lower latency than SCSI?” (The answer also applies when these protocols are run over Fibre Channel.) The latency reduction comes in the server-side protocol stack and is the result of multiple factors:
1) SCSI carries backward-compatibility “baggage” from a) inherently slow media (tape and spinning disk), b) being initially architected for single (small computer system) masters, and c) early operating systems. The NVMe protocol stack is optimized for solid state devices (tape and spinning disk are not intended targets) and modern operating systems, and as a new protocol it sheds that backward-compatibility baggage.
2) SCSI did not explicitly integrate queuing into the protocol, so queues had to be implemented on top of the SCSI protocol, which added context-switching delays. NVMe integrates queuing into the protocol, reducing that overhead (see the sketch below).
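To illustrate the second factor, here is a loose structural sketch (not real driver or SCSI stack code) contrasting a single shared, lock-protected command queue layered on top of the protocol with NVMe’s per-core queue pairs built into the protocol:

```c
#include <pthread.h>

/* Illustrative only -- not real driver code. */

struct layered_queue_path {          /* queuing bolted on above the protocol */
    pthread_mutex_t lock;            /* every CPU contends on this shared lock */
    void *shared_request_queue;
};

struct nvme_queue_path {             /* queuing integrated into the protocol */
    void *sq_per_core[64];           /* one submission queue per core ...    */
    void *cq_per_core[64];           /* ... paired with its completion queue */
};                                   /* each core submits without cross-core locking */
```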
Q: This may not be specific to NVMe-oF or FC-NVMe, but is there support for ANA (Asymmetric Namespace Access) in the NVMe standard?
A: This is currently in progress in the NVM Express technical working group.
Q: Is there any support for IBM AIX servers?
A: Unfortunately, specific support for products can only be announced by each vendor.
Q: Zoning is based on FC WWNN and WWPN (it is at the port level). However, access to individual namespaces will be managed just like LUN access is managed: by the target endpoint device. That device will recognize the hosts and allow a particular host to access a particular namespace, or prohibit that host from accessing that namespace. This is commonly known as LUN mapping or masking, and in NVMe it will be known as namespace mapping or masking.
A: Thanks for the update 🙂
Q: Does the protocol support NVMe Multi-Queue?
A: Yes, this is a basic feature of NVMe.
Q: How will zoning be implemented on the switch side? Will it use the NVMe namespace ID?
A: From a Fibre Channel fabric perspective, zoning for NVMe over Fibre Channel is just like zoning for SCSI over Fibre Channel: fabric zoning is done at the Fibre Channel layer based on either ports or World Wide Names, not on NSIDs or LUNs. (Sometimes target-based functionality is referred to as LUN zoning, and targets will likely implement similar functionality based on namespace IDs.)
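To illustrate the split, here is a hypothetical data-model sketch in C with made-up WWPN and NQN values; this is not a real switch or array configuration:

```c
#include <stdint.h>

/* Fabric zoning operates at the Fibre Channel layer: members are ports or
 * World Wide Names. The fabric knows nothing about NSIDs or LUNs. */
struct fc_zone {
    const char *name;
    uint64_t    member_wwpns[2];
};

static const struct fc_zone zone_host1_array1 = {
    .name         = "host1_array1_nvme",
    .member_wwpns = { 0x10000090fa123456ULL,    /* host HBA WWPN (hypothetical) */
                      0x5001438012345678ULL },  /* target port WWPN (hypothetical) */
};

/* Namespace masking lives in the target: the subsystem decides which host
 * may see which namespace IDs -- the NVMe analogue of LUN masking. */
struct ns_mask_entry {
    const char *host_nqn;
    uint32_t    allowed_nsids[4];
};

static const struct ns_mask_entry masking[] = {
    { "nqn.2014-08.org.example:host1", { 1, 2, 0, 0 } },  /* host1 sees NSID 1 and 2 */
};
```

The fabric never sees the masking table; it only enforces which ports or WWNs may talk to each other.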
Q: Is there a specific requirement of SGL descriptor types that can be used with FC-NVMe?
A: Yes, it is descriptor type 05 (Transport SGL Data Block).
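For context, an NVMe SGL descriptor is a 16-byte structure whose last byte identifies the descriptor type. A simplified sketch follows (field names paraphrased; this is not the exact spec encoding):

```c
#include <stdint.h>

/* Simplified sketch of a 16-byte NVMe SGL descriptor. For the Transport SGL
 * Data Block descriptor (type 5h), the data itself is moved by the transport
 * (FCP IUs in the case of FC-NVMe), so only the length is meaningful. */
struct nvme_sgl_descriptor {
    uint64_t address;        /* not used by the transport data block type */
    uint32_t length;         /* number of bytes the transport will move */
    uint8_t  reserved[3];
    uint8_t  sgl_identifier; /* upper nibble = descriptor type, 0x5 here */
};
```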
Q: FC-NVMe is the name of the spec that defines the NVMe protocol over the Fibre Channel fabric. NVMe over Fabrics is a generic term for the NVMe protocol over any fabric, of which there are multiple types: NVMe over an RDMA fabric, NVMe over an FC fabric, and NVMe over a TCP network.
A: Correct, with the exception of NVMe-TCP, which (as of this writing) is still under development.
Q: Any difference between NVMe-oF and FC-NVMe?
A: NVMe-oF is the part of the NVM Express specification that outlines the message-passing parameters between the NVMe protocol and the underlying fabric (whichever that may be). FC-NVMe is the Fibre Channel standard that outlines the FCP parameters that connect to the NVMe-oF structure. In short, NVMe-oF is the umbrella term for the NVMe specification that works over transports, and FC-NVMe is the Fibre Channel-specific transport standard that accomplishes this.
Q: On one of the slides you mention the NVMe Discovery protocol and an NVMe Discovery service. Where does the NVMe Discovery service live, which device does it run on?
A: For FC-NVMe, we are expecting each target to have a Discovery Controller that indicates which Subsystems are supported by that target.
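Conceptually, a host asks that Discovery Controller for a log of the subsystems it advertises and the transport addresses used to reach them. Here is a simplified, abbreviated sketch of one discovery log entry (the real NVMe-oF entry is a fixed 1024-byte structure; only a few fields are shown):

```c
#include <stdint.h>

/* Abbreviated sketch of an NVMe-oF discovery log page entry -- a few of the
 * fields a Discovery Controller returns for each subsystem it advertises. */
struct discovery_log_entry {
    uint8_t trtype;          /* transport type, e.g. Fibre Channel */
    uint8_t subtype;         /* NVM subsystem vs. another discovery service */
    char    traddr[256];     /* transport address; for FC, the target WWNN/WWPN */
    char    subnqn[256];     /* NVMe Qualified Name of the subsystem */
};
```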
Q: Where can I get a solid foundation on FC-NVMe?
A: Earlier this year, the FCIA hosted “Introducing Fibre Channel NVMe.” This should provide you with a “101” lesson on the topic, explaining how FC-NVMe works, how it differs from traditional Fibre Channel and why someone might want to consider using Fibre Channel for NVMe-based solutions.