At our recent live FCIA webinar, “Fibre Channel Data Center Interconnects: 64GFC and More,” attendees gave the presentation by Gene Cannella (Adtran), David Rodgers (EXFO) and Andy Adams (Adtran) a 4.9 rating! The detailed session provided an in-depth look at Fibre Channel Extension via Dense Wavelength Division Multiplexing (DWDM) and Optical Transport Network (OTN), explaining why Data Center Interconnect is now table stakes and mission-critical. If you missed this presentation, you can watch it on demand on the FCIA YouTube Channel or on BrightTALK.

The audience was very engaged with the content and asked several interesting questions, which Gene Cannella has answered here.

Q) I typically run into problems with setting a fixed speed (on an N_Port or E_Port) when attaching to DWDM for extension. Do modern DWDM systems or DWDM with OTN have capability to auto-negotiate Fibre Channel speeds? (i.e., 64/32/16).

A) Modern OTN/DWDM systems conforming to FC specifications typically support up to three Fibre Channel rates on each port: the highest rate available plus the next one or two lower legacy rates. These rates are supported on the same module hardware, with the same client optics. For example, an OTN muxponder port rated for 64GFC generally will also be configurable for 32GFC and 16GFC without changing module hardware or inserting different optical plugs. You will also typically see the OTN 64/32/16GFC port supporting the auto-negotiation protocol (i.e., link speed negotiation), but using the protocol to advertise operation at only one rate: the rate for which it has been configured in the OTN user interface.
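
To make this concrete, here is a minimal sketch in Python (a hypothetical model, not any vendor's API or the standard's state machine) of an OTN client port that participates in link speed negotiation but advertises only the single rate provisioned in the OTN user interface:

    # Hypothetical model: an OTN/DWDM client port that supports FC link speed
    # negotiation but offers only the one rate provisioned through the network.
    SUPPORTED_RATES_GFC = (64, 32, 16)     # rates the module hardware can carry

    class OtnClientPort:
        def __init__(self, configured_rate_gfc):
            if configured_rate_gfc not in SUPPORTED_RATES_GFC:
                raise ValueError("rate not supported on this module")
            self.configured_rate = configured_rate_gfc

        def advertised_rates(self):
            # Negotiation is supported, but only the provisioned rate is offered.
            return {self.configured_rate}

    def negotiate(switch_rates, otn_port):
        # The link settles on the highest rate common to both ends, if any.
        common = set(switch_rates) & otn_port.advertised_rates()
        return max(common) if common else None

    # A 64/32/16GFC-capable switch port attached to a port provisioned at 32GFC:
    print(negotiate({64, 32, 16}, OtnClientPort(configured_rate_gfc=32)))   # -> 32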

The reason for this is not that the OTN/DWDM module or its optical plugs are unable to adapt to multiple rates, but rather that bandwidth must be allocated all the way through the OTN/DWDM network for the negotiated rate. The entire OTN/DWDM network would be forced to react to the results of the auto-negotiation on a single link. This would create numerous challenges; a simple example is the case where a link initially auto-negotiates to 16GFC, but later auto-negotiates to 64GFC. The OTN/DWDM network would then need to find and allocate an additional 48Gbps of network bandwidth when all available bandwidth might already have been allocated. It might also be necessary for the network management system to confirm that business rules even allow the additional bandwidth for that service, particularly in a multi-tenant environment. The complexity simply explodes. It is important to remember that the link speed negotiation protocol resolves the rate for only a single link carrying a single service that owns that entire link, while the OTN/DWDM network allocates shared bandwidth for multiple services entering the network from multiple links, each of which might have separately negotiated its own rate, if allowed.
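
As a back-of-the-envelope illustration of that reallocation problem, using the nominal GFC numbers (an assumption for simplicity; the actual mapped OTN payload rates differ somewhat):

    # If a link renegotiates from 16GFC to 64GFC, the OTN/DWDM network must find
    # roughly this much additional capacity end to end (nominal rates assumed).
    initially_negotiated_gfc = 16
    renegotiated_gfc = 64
    additional_gbps = renegotiated_gfc - initially_negotiated_gfc
    print(f"Extra network bandwidth to locate and allocate: {additional_gbps} Gbps")   # 48 Gbps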

By supporting the automatic link speed negotiation protocol on the OTN/DWDM port, even when limited to only one rate, the configuration of the SAN switch port is made as convenient as possible for this application.

Q) Does the FC SAN Switch also need to support encryption if the DWDM is configured for it?

A) While it might not be strictly necessary to apply encryption in both systems, a good Defense-in-Depth security practice is to encrypt at every layer possible. If the traffic is leaving the building, end-to-end encryption is usually mandatory, and so at least one of those systems must apply encryption. At the same time, many end users are interested in double encryption, even at a single layer, to secure against a scenario where one of the encryption systems has been compromised. Encrypting in both the SAN switch and the DWDM network can provide that extra measure of security. Here are some of the many reasons that encryption might be done in both systems:

  • Encryption is required in a multi-tenant situation. The DWDM system will encrypt the services of all tenants without relying on those tenants to deliver an already encrypted signal.
  • With many encryption systems, only the frame payload is encrypted, while certain headers or metadata remain in the clear to support link functions and signal routing. Encrypting first in the SAN switch and then again in the DWDM system protects that metadata when a single pass of encryption might not (see the sketch after this list).
  • In the SAN environment, it is possible that some switch ports will be applying encryption while others are not. In such cases, an inadvertent configuration mistake could result in data being transmitted in the clear from the SAN to the DWDM system.
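
As a toy illustration of the second bullet above, the sketch below uses AES-GCM from Python's cryptography package; the frame layout, keys and field names are invented for the example and are not FC-SP or OTN Layer-1 encryption:

    # Toy illustration: payload-only encryption leaves the header readable, while
    # a second pass over the whole frame hides that metadata as well.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    header = b"SRC=0x010200 DST=0x010300"    # stand-in for routing/link metadata
    payload = b"block data"
    san_key, dwdm_key = AESGCM.generate_key(256), AESGCM.generate_key(256)

    # Pass 1 ("SAN switch"): encrypt only the payload; the header stays in the clear.
    pass1_frame = header + AESGCM(san_key).encrypt(os.urandom(12), payload, None)
    print(b"SRC=" in pass1_frame)            # True: metadata still visible

    # Pass 2 ("DWDM system"): encrypt the entire frame, metadata included.
    pass2_frame = AESGCM(dwdm_key).encrypt(os.urandom(12), pass1_frame, None)
    print(b"SRC=" in pass2_frame)            # False: metadata now protected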

Q) What is best practice for ISL (Inter-Switch Link) trunking and DWDM?

A) First, we acknowledge that “trunking” has different meanings in different vendor contexts. Generally speaking, the DWDM system should be transparent to all PCS characters on the ISL in order to have the best chance of supporting not only standard Fibre Channel traffic, but all vendor-specific implementations and applications as well. Total latency and differential latency are qualities of the DWDM system that should be minimized in general, again to provide a quality transport solution that supports the widest range of application deployments. Care should be taken in planning the DWDM fiber paths, with an understanding of application requirements. The DWDM protection scheme should similarly be informed by application requirements.
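
As a simple planning sketch of the differential-latency point (the route lengths and skew budget below are assumed values for illustration, not a standard or a vendor requirement):

    # Compare two DWDM fiber routes carrying members of an ISL trunk group
    # against a hypothetical differential-latency (skew) budget.
    US_PER_KM = 5.0                          # ~5 microseconds per km of fiber, one way

    working_km, protect_km = 80.0, 95.0      # assumed route lengths
    skew_us = abs(protect_km - working_km) * US_PER_KM
    print(f"Differential latency between paths: {skew_us:.0f} us")    # 75 us

    BUDGET_US = 100.0                        # hypothetical application skew budget
    print("Within budget" if skew_us <= BUDGET_US else "Replan the fiber routes")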

Q) Why use FC interfaces rather than Ethernet interconnection? Do we expect FC to be replaced by Ethernet in the long run?

A) It is unlikely FC will be replaced by Ethernet any time soon. It is relied upon for mission-critical systems in banking, government and other industries where reliability and performance are paramount. The Fibre Channel SAN has been optimized for storage applications for 25+ years. The requirements for lossless delivery and in-order delivery of frames are the most obvious differentiators from Ethernet. FC Fabrics deliver low latency for block data transfers. FC Fabrics natively use their own addressing scheme and do not use IP addressing, making the FC Fabric unreachable from an IP routed network. This provides significant protection against any security penetration that might originate on the Internet.

Q) Can you please give a max distance in miles between the two data centers?

A) The distance between data centers is limited by the applications being extended. There is no distance limit for Layer 3 extension, and so you see data centers connected by IP routed networks being distributed globally. For storage applications, FCIP may be used for asynchronous replication over practically any distance, using the FCIP protocol to tunnel through the IP routed network. At the optical layer, the DWDM system itself may be used to transport signals over essentially unlimited distance. The distance limits noted in this presentation concern the ability to perform synchronous transactions within a maximum latency budget. Typically, a synchronous transaction requires two roundtrip communications: 1) the transmission of a command and the return of its acknowledgement, and 2) the transmission of the data and the return of its acknowledgement. The synchronous transaction may not have a hard limit on latency, or it may vary per application, but if those two roundtrips consume more than 3 milliseconds, synchronous operations may begin to become impractical. At 5 microseconds of latency per kilometer of fiber, two roundtrips over a 150 km link will add 3 ms of transport latency to the latencies contributed within the data center. So, SAN extension for synchronous replication is probably not appropriate beyond 150 km.
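
A quick worked version of that arithmetic, assuming the ~5 microseconds per kilometer figure and two round trips per synchronous transaction:

    # Added transport latency for a synchronous transaction over an extended ISL.
    US_PER_KM = 5.0                          # one-way fiber latency per km

    def added_latency_ms(distance_km, round_trips=2):
        one_way_us = distance_km * US_PER_KM
        return round_trips * 2 * one_way_us / 1000.0   # out and back, per round trip

    for km in (50, 100, 150, 200):
        print(f"{km:>3} km -> {added_latency_ms(km):.1f} ms added")
    # 150 km -> 3.0 ms, the rule-of-thumb ceiling discussed above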

Q) What would happen if the fiber is cut? How does the fabric recover?

A) This depends on the protection scheme employed in conjunction with the DWDM system. According to the OTN standard, if the network fiber is cut, an OTN device should forward the NOS primitive to the downstream device (generally a switch port for an ISL). This indicates to the switch that its link partner is not operational, and the switch port will perform the Link Init protocol. If no redundancy or protection switching is supplied with the DWDM (not recommended!), then eventually the switch will assume Link Failure and will recalculate the fabric topology. If the DWDM system is deployed with redundancy, then it should restore the ISL within some number of milliseconds, perhaps 50 to 100 ms. At that point, Link Init will commence and the fabric will resume operation on that link. Some DWDM systems may provide differentiating features that can restore the ISL even before Link Init becomes necessary. Alternatively, when there is no DWDM protection scheme in place to restore the ISL, the DWDM systems may be configured to shut off the laser on the port facing the SAN switch, rather than transmitting NOS, in order to indicate a hard failure. The fabric would then immediately begin to recalculate topology. Again, this is not a recommended approach, as redundancy of the DWDM fiber with line protection is an affordable and effective way to restore the ISL.
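
As a rough sketch of that tradeoff (the timer values below are illustrative assumptions, not FC standard defaults or vendor specifications):

    # Compare an assumed optical protection-switch time against the point where
    # the fabric would give up on the ISL and recalculate its topology.
    PROTECTION_SWITCH_MS = 75      # assumed: line protection restores in 50-100 ms
    FABRIC_GIVE_UP_MS = 2000       # assumed: order of seconds before rerouting

    def fiber_cut(protected):
        if protected and PROTECTION_SWITCH_MS < FABRIC_GIVE_UP_MS:
            return (f"ISL restored in ~{PROTECTION_SWITCH_MS} ms; "
                    "Link Init runs and the topology is unchanged")
        return "Link Failure assumed; the fabric recalculates topology and reroutes"

    print(fiber_cut(protected=True))
    print(fiber_cut(protected=False))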

For additional great, vendor-neutral content on Fibre Channel, please check out the extensive library of FCIA webcasts on the FCIA Website. For upcoming webcasts and other FCIA initiatives and news, please follow us on LinkedIn and X/Twitter.