In our most recent FCIA webcast, we asked “Is FC-NVMe Ready for Prime Time?” Of course, the answer was a resounding “Yes.” During the live event, our FCIA experts gave a snapshot of where the industry stands with FC-NVMe, its key differentiators, and the breadth of solutions available today. If you missed the live event, you can watch it on-demand here or on the FCIA YouTube channel.
Our audience during the live event asked several great questions. Here are our experts’ answers to all of them:
Q: I have Dell servers with NVMe installed in a PCIe slot in the server, and also (newer) servers where the NVMe drive is installed in the disk area, connecting to a storage controller on the motherboard. What is the fundamental difference between these two types?
A: While this question is not directly related to Fibre Channel or NVMe over Fibre Channel, it does come up in broader conversations around NVMe, so it makes sense to answer here. As you have noted, NVMe drives come in various form factors: the MD2 (HHHL) form factor that you see installed in PCIe slots at the back of the server, and the M.2 and U.2 form factors found in the front (disk area) of the server. All of these drives, if NVMe, use PCIe as the bus connecting the media to the CPU; the most significant difference between them is the form factor itself. As you would assume, it is much easier to replace a failed M.2 NVMe drive than an HHHL card.
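To make the “same bus, different form factor” point concrete, here is a minimal Python sketch of our own (not part of the webcast, and assuming a Linux host with local PCIe NVMe drives and the standard sysfs layout) that lists each NVMe controller alongside the PCIe address it is attached through:

```python
import glob
import os

# On a Linux host, every NVMe controller appears under /sys/class/nvme/.
# For a locally attached drive, its "device" symlink resolves to the PCIe
# function the drive sits on, regardless of the physical form factor
# (HHHL add-in card, M.2, or U.2).
for ctrl in sorted(glob.glob("/sys/class/nvme/nvme[0-9]*")):
    pci_addr = os.path.basename(os.path.realpath(os.path.join(ctrl, "device")))
    try:
        with open(os.path.join(ctrl, "model")) as f:
            model = f.read().strip()
    except OSError:
        model = "unknown"
    print(f"{os.path.basename(ctrl)}: model={model}, PCIe address={pci_addr}")
```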
Q: It was mentioned earlier that you can transport both traditional FCP/SCSI and NVMe concurrently; can you comment on how FC-NVMe also benefits access to new media such as SCM (Storage Class Memory)? Furthermore, is it important to have end-to-end NVMe if you’re using SCM drives in a storage array?
A: While the context of this presentation is NVMe over FC solutions that are available today in the general marketplace, the possibility of networking SCM using FC-NVMe is an exciting prospect. In general, NVMe is a memory access protocol, which marries well with SCM. FC-NVMe is a storage-optimized protocol that leverages the existing storage infrastructure, which would make it ideal for networking SCM. We feel that Fibre Channel has advantages over all other NVMe-oF transport solutions due to its proven history of resiliency, which is required for storage solutions.
Q: We are using an EMC VMAX All-Flash Array over Brocade fabric switches to SAN boot Dell R740 servers with QLogic FC HBAs, running ESXi 6.5 and Windows Server 2016 and 2019. I have never faced any serious issues, but I’m trying to understand whether we are already using FC-NVMe (without knowing it)?
A: FC-NVMe is available on EMC PowerMax, and VMware FC-NVMe support is available with vSphere 7. In your environment, with the VMAX All-Flash array and ESXi 6.5, you will be running SCSI.
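As a side note, if any of your hosts were Linux rather than ESXi, one quick way to confirm which transport your NVMe controllers are using is to read the transport attribute the kernel exposes in sysfs. The Python sketch below is purely our own illustration and assumes that Linux sysfs layout; it does not apply directly to ESXi hosts:

```python
import glob
import os

# Each NVMe controller under /sys/class/nvme/ reports its transport:
# "pcie" for local drives, "fc" for FC-NVMe, "tcp"/"rdma" for other fabrics.
# If nothing reports "fc", the host is not using NVMe over Fibre Channel.
fc_controllers = []
for ctrl in glob.glob("/sys/class/nvme/nvme[0-9]*"):
    transport_path = os.path.join(ctrl, "transport")
    if os.path.exists(transport_path):
        with open(transport_path) as f:
            if f.read().strip() == "fc":
                fc_controllers.append(os.path.basename(ctrl))

if fc_controllers:
    print("FC-NVMe controllers:", ", ".join(sorted(fc_controllers)))
else:
    print("No FC-NVMe controllers found on this host.")
```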
Q: I have a compatibility question; it might be a bit off topic, but maybe you can answer it too. What if I have an application that is already working on top of RDMA? How difficult might it be to migrate this application from RDMA to NVMe over FC?
A: It really depends on your application. Where RDMA is simply used for storage transport, you can likely migrate to FC-NVMe; if the application is doing memory sharing across nodes, you will not be able to migrate to FC-NVMe.
Q: On the host side, specific to ESXi, how native is NVMe? Is it full NVMe or NVMe over SCSI?
A: We are not privy to all the details of the internal architecture of the VMware ESXi stack, nor can we talk about them publicly. However, based on publicly available information from VMware and other credible industry sources, it is safe to say that the VMware ESXi stack (from the VM through the hypervisor and out through FC-NVMe) is not entirely NVMe; we believe there are a couple of SCSI-to-NVMe (and vice versa) translations that happen as an I/O traverses from the VM to the wire. To our knowledge, however, these translations have very limited to no performance impact. Such SCSI-to-NVMe and NVMe-to-SCSI translations are also not unheard of in parallel implementations of storage stacks on other enterprise operating systems. It must be noted, though, that an FC-NVMe capable HBA is a native NVMe Fibre Channel device and does not do translations at the device level.
Q: Any comparisons (pros/cons) vs. RoCE, or any published data that you can advise us to look into?
A: Different technologies apply to different use cases. If you already have a Fibre Channel SAN, you can readily use that SAN for FC-NVMe. If you are evaluating RoCEv2, you will first need to deploy and configure a DCBx network and ensure your servers have RNICs. We recommend taking a look at the Demartek IBTA RoCE Deployment Guide to get an idea of what is involved in getting NVMe over RoCE deployed. Second, with a RoCE deployment you will not have fabric services equivalent to those of Fibre Channel, so the operational tasks to deploy, maintain and operate the network are very different from a Fibre Channel SAN, which requires minimal operational interaction. Read this blog for an example.
Q: When you say FC-NVMe in the server, where does the disk sit? On the PCIe bus (like standard NVMe), or does it connect to a storage subsystem?
A: The actual disk or flash storage would sit in a remote storage subsystem. A host server would discover the NVMe subsystem over a Fibre Channel network; once an NVMe namespace is connected, it would appear similar to a local NVMe block device.
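To illustrate what “similar to a local NVMe block device” means in practice, the short Python sketch below (our own illustration, assuming a Linux host) enumerates NVMe namespaces the same way whether they are local PCIe drives or namespaces connected over an FC-NVMe fabric:

```python
import glob
import os

# Once an NVMe namespace is connected -- locally over PCIe or remotely over
# FC-NVMe -- it shows up as an ordinary block device such as /dev/nvme0n1.
for ns in sorted(glob.glob("/sys/block/nvme*n*")):
    name = os.path.basename(ns)
    with open(os.path.join(ns, "size")) as f:
        sectors = int(f.read().strip())  # sysfs reports size in 512-byte sectors
    print(f"/dev/{name}: {sectors * 512 / 2**30:.1f} GiB")
```

From the application’s point of view, the fabric-attached namespace is consumed exactly like the local one.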
Q: What about IBM Power/AIX support?
A: Please refer this question to an IBM representative. We are not aware of FC-NVMe support in AIX.
Q: When is a Microsoft native driver expected?
A: Please refer this question to a Microsoft representative. Microsoft has not specified a timeline for a native NVMe stack to the best of our knowledge.
Q: The slide says the application does not need to change. While true, this will not expose the performance benefit of NVMe/FC, since the application will still make SCSI calls while the OS makes FC-NVMe calls toward the HBA, so a translation needs to happen, reducing performance. Can you elaborate?
A: (This question is in reference to slide #19.) The I/O efficiencies and performance improvements from FC-NVMe come from the NVMe transport’s ability to bypass the traditional low-level and mid-layer drivers as well as the single-queue I/O scheduler. Host applications are unaffected because the operating system utilizes a multi-queue block layer, and the application’s I/O interactions with that layer are unchanged. A handy reference for visualizing these I/O stacks is linked here.
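For readers who want to see the multi-queue block layer for themselves, the Python sketch below (our own illustration, assuming a Linux host; the device name is hypothetical) reports how many hardware queues the block layer has set up for an NVMe namespace and which I/O scheduler is active:

```python
import glob
import os

DEVICE = "nvme0n1"  # hypothetical device name; adjust for your host
sysfs = f"/sys/block/{DEVICE}"

# blk-mq exposes one directory per hardware queue under <dev>/mq/,
# and the active I/O scheduler is shown in brackets in queue/scheduler.
hw_queues = glob.glob(os.path.join(sysfs, "mq", "*"))
with open(os.path.join(sysfs, "queue", "scheduler")) as f:
    scheduler = f.read().strip()  # e.g. "[none] mq-deadline kyber"

print(f"{DEVICE}: {len(hw_queues)} hardware queue(s), scheduler: {scheduler}")
```

The application never sees these queues directly; it keeps issuing I/O through the same block interface, which is why no application changes are required.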