
Comparing Nexus VPC to Catalyst VSS

VSS (Virtual Switching System) and vPC (virtual Port-Channel)

VSS allows two physical Catalyst switches to appear as a single logical upstream device to the downstream devices (Nexus switches in this case). With VSS, one switch acts as active and the other as standby, while both share the forwarding plane. Note that line cards on 6500 series switches support distributed forwarding, so both chassis forward packets actively. Similarly, vPC allows two Nexus switches to appear as a single logical device, and all member links of a virtual Port-Channel forward actively.

By design, vPC is a Layer-2 technology: a Nexus vPC pair appears as a single logical switch at Layer 2 only. VSS, by contrast, is a Layer-2-and-up technology: a Catalyst VSS pair appears as a single logical switch at every layer, with one switch Active and the other Standby (Member). This explains why specific network design is required for vPC to support Layer-3 interconnectivity, while no HSRP is needed at all in VSS.

Comparing the contributing factors to vPC and VSS, I would say that VSS implementation is simpler, especially for a new deployment. For those who have worked with JUNOS-based Juniper products, a VSS implementation may resemble Virtual Chassis switches or High-Availability Clusters.

VSS vs VPC (Difference between VSS and vPC)

I know many of you have been looking for an answer to the question "what are the differences between VSS and vPC?". Here they are, laid out simply; you only need to read it once.

Both are used primarily to support multi-chassis EtherChannel: we can create a port-channel with one end on device A, while the other end is physically connected to two different physical switches that logically appear as one switch.
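
From the downstream device's perspective, nothing special is configured; it sees an ordinary port-channel and has no awareness that its two uplinks land on different physical chassis. A minimal hedged sketch in IOS syntax (interface and channel-group numbers are made up for illustration):

```
! Downstream device A: a standard LACP bundle whose two members
! are cabled to the two different physical switches of the pair
interface Port-channel10
 switchport mode trunk
!
interface TenGigabitEthernet1/1
 description Uplink to physical switch 1
 switchport mode trunk
 channel-group 10 mode active
!
interface TenGigabitEthernet1/2
 description Uplink to physical switch 2
 switchport mode trunk
 channel-group 10 mode active
```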



There are certain differences as listed below:

• vPC is a Nexus-specific feature, while VSS is created using Catalyst switches; VSS was originally available only on the 6500 series, but newer generations such as the 4500-X and 6800 models also support it

• Once switches are configured in VSS, they merge logically and become one logical switch from the control-plane point of view: a single control plane controls both switches in an active/standby manner, similar to switch stacking. When we put Nexus switches into vPC, however, their control planes remain separate. Each device is controlled individually by its own supervisor, and the two are only loosely coupled with each other.



• In VSS, there is only one logical switch to manage and configure, again similar to switch stacking. Once the switches are put into VSS, a single IP address is used to access the pair; they are not managed as separate switches, and all configuration is done on the active switch, much like a stack of, say, 3750 switches. In vPC, by contrast, the switches are managed separately: each has its own IP address by which it is accessed, monitored, and managed. They appear as a single logical switch only from the port-channel point of view of downstream devices.

• Since a VSS pair is a single point of management and configuration, the two chassis cannot serve as HSRP active and standby, because they are no longer two separate boxes. In fact, HSRP is not needed at all.

• In VSS, a single IP address can be given to an L3 interface and used as the gateway for the devices in that particular VLAN, and we still have redundancy because the one address lives on a pair of two switches; if one switch fails, the other takes over. In vPC, where the switches are separately configured and managed, gateway redundancy must be configured in the traditional manner.
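
To make the contrast concrete, here is a hedged sketch (the VLAN number and addresses are made up). On a VSS pair, a single SVI is the gateway; on a vPC pair, each Nexus keeps its own SVI, and an FHRP such as HSRP provides the shared virtual gateway:

```
! VSS: one SVI on the single logical switch serves as the gateway
interface Vlan10
 ip address 10.1.10.1 255.255.255.0

! vPC: each Nexus has its own SVI, with HSRP for the virtual gateway
! Nexus Switch 1
feature hsrp
interface Vlan10
 ip address 10.1.10.2/24
 hsrp 10
  priority 110
  ip 10.1.10.1
! Nexus Switch 2
feature hsrp
interface Vlan10
 ip address 10.1.10.3/24
 hsrp 10
  ip 10.1.10.1
```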



Illustration 1

We have two switches as in the diagram above, say Switches 1 and 2, with the following show interface status output.

Switch 1

Switch 2


When we put the switches in VSS, they will be accessed by a single logical name, say X. If all ports are Ten Gig, the interfaces will appear as Te1/1/1, Te1/1/2 ... Te2/1/1, Te2/1/2 and so on, where Te1/1/1-16 and Te1/2/1-8 map to Switch 1's physical ports Te1/1-16 and Te2/1-8, while Te2/1/1-16 and Te2/2/1-8 map to Switch 2's physical ports Te1/1-16 and Te2/1-8, as shown below.

Switches 1 and 2 integrated as single logical switch


However, if these switches are configured in vPC, they will NOT be accessed by a single logical name. The switches are accessed and managed separately: Switch 1 keeps its own ports, and likewise Switch 2.

• Similarly, in VSS the same instances of STP, FHRP, IGP, BGP, etc. are used. In vPC there are separate control-plane instances of STP, FHRP, IGP, and BGP, just as on two independent switches

• In VSS, the switches are always primary and secondary in all aspects: one works as active and the other as standby. In vPC, they are elected primary and secondary only from the virtual port-channel point of view; for everything else they work individually, and the primary/secondary role does not apply.

• vPC is also, most of the time, not an active/standby scenario, except in particular failure situations. For example, if the peer-link goes down in vPC, only then does the secondary switch act, bringing down the vPC on all its member ports.

• VSS can support L3 port-channels across the two chassis, while vPC is used for L2 port-channels only.

• VSS supports both PAgP and LACP, while vPC supports only LACP.

• In VSS, control messages and data frames flow between active and standby over the VSL. In vPC, control messages are carried by CFS over the Peer Link, and a separate Peer Keepalive link is used to exchange heartbeats and detect a dual-active condition.

Illustration 2

A simple and interesting topology illustrates this. Here, Nexus and Catalyst use different multichassis technologies (vPC and VSS respectively), forming a back-to-back virtual port-channel. The effective logical topology becomes greatly simplified (shown on the right side), with benefits including use of the full bisectional bandwidth, a stable all-forwarding STP topology, high resiliency, and ease of adding or removing physical members.



A VSS Domain ID is very much like a vPC Domain ID. It is a unique identifier in the topology, representing the logical virtual switch formed by the two physical chassis. Only one VSS pair is associated with a particular domain.

Consequently, the VSS Domain ID (1-255) is used in protocol negotiations and therefore must be unique in the network. To illustrate: a pair of 6500s forms a VSS. Since VSS is a fully consolidated logical device, it operates as one device in the network, so a common system MAC is needed to represent the VSS system for protocols such as STP and LACP. That system MAC must be unique and not tied to either physical device.

As shown below, the VSS system MAC is derived from a predefined base address (0200.0000.00xx) combined with the VSS Domain ID. Since the Domain ID in this case is 100, which is 0x64 in hex, 64 becomes the last octet.


The use of "0200.0000.00xx" may seem curious, since it is not assigned to any manufacturer. Here it serves only as a system identifier, and its uniqueness is assured by the uniqueness of the domain ID, so it is perfectly acceptable. But if another vendor adopted a similar scheme, conflicts could arise.

Another subtlety is the relationship between VSS and vPC domain IDs. Because vPC and VSS derive their system MACs from different MAC pools, the domain IDs can overlap in a common topology. This is another reason for Cisco to preserve its assigned MAC address ranges, so that future platforms and technologies can be developed.

Looking under the hood at the MAC level can be surprising. Still on the topic of preserving MACs: both Catalyst and Nexus use the same MAC for all SVI interfaces (show interface vlan). In other words, the MAC addresses on all VLAN interfaces are identical, even though the IP addresses differ.

To support this, the switch maintains its CAM and MAC address table per VLAN. As shown in the display, MAC address 0026.8888.7ac2 is used for all SVI interfaces. The switch automatically creates a static MAC entry pointing to the supervisor (MSFC), where per-VLAN resolution occurs.


Hopefully this look at system MACs has provided a glimpse into the inner workings of two important data center technologies.

Behavior of VSS-based Switches

Similarity and Differences between Stack, Redundant Supervisor, and VSS Technology

Similar to stack-based switches, VSS-based switches have the concept of an Active and a Member switch. On VSS-based switches, any configuration changes you save are automatically replicated to the Member switch's configuration file in addition to the local Active switch's configuration file, as follows.


Compared to stack technology, VSS-based switches behave more like redundant-supervisor-based switches in that you can only administer through the Active switch. When you console into and log in on the Member switch, you get a prompt to log out and log back in from the Active switch. With a stacked switch, you can still administer from a console session on a Member switch.

This similarity to redundant-supervisor-based switches makes certain commands available. On 4500-X switches, for example, you can issue show bootflash: and show slavebootflash: to display the contents of each switch's flash.

Here is the show redundancy output.


Switch Role: Active or Member

The following command shows the role of each switch, either Active or Member.

Typically you want to designate one switch as the preferred Active. As with a stack-based switch, you configure that switch with a higher Priority value. In the example above, the local switch (Switch 1) has a Priority of 150, making it the Active switch, while Switch 2 is the Standby (Member) switch.

If you make configuration changes to the switch priority, the changes only take effect after you save the running configuration to the startup configuration file and perform a reload. The show switch virtual role command shows the operating and configured priority values. You can manually set the VSS standby switch to VSS active using the redundancy force-switchover command. This behavior mimics the dual-supervisor-based switches.
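
As a hedged recap of the commands mentioned above (the domain ID and priority value follow the example used elsewhere in this article):

```
! Raise Switch 1's priority so it becomes the preferred Active
switch virtual domain 100
 switch 1 priority 150
! The new priority takes effect only after saving config and reloading
! Verify operating vs. configured priorities and roles
show switch virtual role
! Manually make the current VSS standby switch the VSS active
redundancy force-switchover
```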

VSS-based Configuration Review

In this illustration, Switches 1 and 2 form a VSS by interconnecting four ports of each switch; ports 1 and 2 of module 1, also ports 1 and 2 of module 2.

Following is a snippet of show running-config related to the dual-supervisor-mimic behavior.

Configuration part that is manually added:
* Assigning Etherchannel interfaces 60 and 61
* Interface description as "VSL trunk port"
* Switch virtual domain 100
* Switch 1 priority 150

Since this configuration is added manually, it is optional. You can choose any EtherChannel interface ID the switch supports; on the 4500-X model running the cat4500e-universalk9.SPA.03.06.05.E.152-2.E5.bin firmware, the EtherChannel interface ID range is 1 to 255.

I decided to put "VSL trunk port" in the interface description for clarity; you can leave the description blank or use a different name per your organization's standard.

Picking ID 100 for the switch virtual domain is likewise optional, since you can use any allowed ID. The Switch 1 Priority value of 150 is a personal choice, as long as it is higher than the other switch's to ensure that Switch 1 is the preferred Active. We discuss this aspect more deeply later.

The rest of the configuration (QoS, switch mode virtual, mac-address, redundancy mode SSO, and power redundancy-mode redundant) is there by default.

VSS-based Switch Etherchannel Implementation

There are three types of Etherchannel (bundle) ports in VSS-based switch.
• Traditional
• MEC (Multichassis EtherChannel)
• VSL (Virtual Switch Link)

Traditional Etherchannel means one switch terminates two or more identical ports to form a bundle (Etherchannel or Port Channel). When two identical switches form a VSS and become one logical switch, the Etherchannel ports can span the physical switches, which makes the bundle a MEC (Multichassis EtherChannel). The VSL (Virtual Switch Link) is a special bundle needed to form the VSS, similar to the Nexus vPC Peer Link.
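
Configuration-wise, a MEC looks exactly like a traditional bundle; the only difference is that its member ports come from both chassis. A hedged sketch (port and channel-group numbers are hypothetical):

```
! Traditional Etherchannel: both members on the same chassis (Switch 1)
interface range TenGigabitEthernet1/1/1 - 2
 channel-group 50 mode active
!
! MEC: one member from Switch 1, one from Switch 2
interface TenGigabitEthernet1/1/3
 channel-group 51 mode active
interface TenGigabitEthernet2/1/3
 channel-group 51 mode active
```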

While the bundle protocol on traditional and MEC can be either LACP or PAgP, VSL uses neither. Following is a description.

Cisco Documentation
Virtual Switch Link (VSL)

As illustration, here is some show etherchannel summary command output.

Port Channels 50 and 51 are two-port MEC bundles to hosts. Port Channel 255 in this case is a four-port MEC bundle to the upstream switch. Port Channels 60 and 61 are the VSL, consisting of four-port bundles using neither LACP nor PAgP, while the other Port Channel ports are LACP-based.

Similar to the Nexus vPC Peer Link, the VSL is also a Port Channel trunk. Regarding VLAN passing, by default all VLANs are allowed to pass through the VSL, as follows.

Here is how one VLAN's Spanning Tree topology looks.

The following is how Spanning Tree looks on regular Port Channels (to the uplink switch or hosts) and on the VSL.


Looking Deeper into Catalyst VSS

VSS Terms and Technology

Some terms and technology understanding around VSS

Cisco documentation

NSF with SSO Supervisor Engine Redundancy
Virtual Switching System 1440 Architecture
Cisco Catalyst 6500 Virtual Switching System Deployment Best Practices

A Few VSS Terms

• Virtual Switch Link (VSL): A special port channel required to bundle two physical switches into one virtual switch.

• VSL Protocol (VSLP): Runs between active and standby switch over the VSL, and has two components: LMP and RRP
* Link Management Protocol (LMP): Runs over each individual link in VSL
* Role Resolution Protocol (RRP): Runs on each side (each peer) of the VSL port channel

• The LMP heartbeat, also referred to as the LMP hello timer, plays a key role in maintaining the integrity of the VSS by checking peer switch availability and connectivity. Both VSS members execute independent, deterministic SSO switchover actions if they fail to detect an LMP hello message within the configured hold-timer setting on the last bundled VSL link. A set of LMP timers is used in combination to determine the hello transmission interval that maintains the healthy status of the VSL links.

VSS Dual-Active Detection

Dual-Active Detection is an important function of VSS because it prevents both supervisors from becoming active in the event of a VSL link failure.

A VSS pair is connected by a VSL (virtual switch link). If the standby switch detects a complete loss of the VSL, it assumes the active chassis has failed and will take over as the active chassis. However, if the link has failed but the active chassis is still functioning, this can result in both chassis being in the active state. With both chassis routing packets and connected to upstream or downstream switches, black holes can occur.

Dual-Active Detection can be configured to prevent this from happening (in other words, highly recommended.) To accomplish this, a means of communication between both VSS chassis outside the VSL link is established. If the standby switch were to go active (typically by loss of the VSL), the active switch will be informed and will go into recovery mode. In this mode, all ports except the VSL ports are shut down. Upon seeing the VSL ports come active again, the switch will reload and come back as the standby chassis with all its ports up.

Note that while in recovery mode it is possible to have some ports excluded from being shut down. However, we won't be covering that feature.

In release 12.2(33)SXI there are three different forms of Dual-Active Detection.
• Enhanced PAgP
• IP BFD
• Dual-Active Fast Hello Packets (This was not available in prior releases)

I will be covering Enhanced PAgP and Fast Hello. Having only worked with releases that support Fast Hello, I've never had a need to configure IP BFD.

Following is what the 4500-X VSS Dual-Active Detection default setup looks like with the cat4500e-universalk9.SPA.03.06.05.E.152-2.E5.bin firmware version.

Here is what the VSL looks like by default, using the same assumption that Port Channels 60 and 61 form the VSL.


Enhanced PAgP

Take a look at the following diagram.



The VSS pair would be a Data Center pair to which servers are dual connected (not shown). The top switches are a distribution pair which is not running VSS.

Each distribution switch is connected to both VSS chassis using an etherchannel. From the perspective of the distribution switch, it is a standard etherchannel; on the VSS pair it is a MEC (Multichassis Etherchannel), since it spans both chassis. Configuration-wise, traditional etherchannel and MEC are identical; no special configuration is needed.

As mentioned earlier, Dual-Active Detection needs to speak with both chassis "outside" the VSL. A MEC connected to an upstream switch can provide that connectivity.

An enhanced version of PAgP is used on the etherchannel and provides the Dual-Active Detection. Note that the IOS on the upstream switch must support enhanced PAgP, such as 6500 releases 12.2(33)SXH or SXI, for this to work.

Enhanced PAgP Dual-Active Configuration

Once a MEC is operational, PAgP Dual-Active configuration is quite simple. Identify the port channel between the VSS pair and the upstream switch. The port channel should be a MEC, including a port from both Switch 1 and Switch 2.

Dual-Active Detection is enabled by default on an etherchannel running enhanced PAgP. However, it does not provide the functionality until the port channel is put in trust mode under the switch virtual domain.

Note that the port channel must be shut down before it can be trusted, or an error occurs. Of course, remember to do a no shut afterwards.


interface Port-channel10
shutdown

switch virtual domain 9
dual-active detection pagp
dual-active detection pagp trust channel-group 10

interface Port-channel10
no shutdown


That's it! You've got PAgP Dual-Active Detection Configured.

Note that in the example above, you'd want to configure it on both etherchannels for redundancy.

To display the PAgP status and Dual-Active state, issue either of the following commands. Both give the same output.

With show switch virtual dual-active pagp command, here is the output.



Take note that in this example, channel group 11 is not trusted and would not provide Dual-Active Detection.

Fast Hello Dual-Active Detection

When a PAgP etherchannel is not available for Dual-Active Detection, Fast Hello Dual-Active Detection can be configured on any pair of ports connected between the two VSS chassis. For this example, I show an RJ45 connection between two Gig ports, G1/9/48 and G2/9/48.



Fast Hello Dual-Active Detection Configuration

With the Fast Hello configuration, we start by telling the switch virtual domain that dual-active detection is fast-hello; then we configure the ports used for fast-hello.


switch virtual domain 9
dual-active detection fast-hello
exit

interface GigabitEthernet1/9/48
shutdown
dual-active fast-hello
no shutdown
exit

interface GigabitEthernet2/9/48
shutdown
dual-active fast-hello
no shutdown
exit


And that's it. Fast Hello Dual-Active Detection is configured.

Something worth mentioning: any pair of ports can be used, up to 4 on each chassis, including fiber. It would hardly be practical to waste 10G X2 ports on dual-active detection, but there might be a reason to use 1G fiber. If fiber is used, UDLD is disabled.

When a port is configured as a fast hello port, it cannot be used for anything else. In fact, no other interface commands are available per the docs, although I haven't personally confirmed it.

To display the Fast Hello Dual-Active state, issue the following command.



Catalyst 6500 VSS - Nexus vPC Interoperability

Cisco documentation
Cisco Catalyst 6500 VSS and Cisco Nexus 7000 vPC Interoperability and Best Practices White Paper

VSS - vPC Interoperability Sample Configuration

Following is an illustration.



Hardware

• Two Cisco Catalyst 6509 switches with VS-C6509VE-SUP2T (supervisor), WS-X6848-TX-2T (48 port 1G line card) and a WS-X6816-10G-2T (16 port 10G line card). They are running s2t54-ipservicesk9-mz.SPA.150-1.SY1.

• Two Cisco Nexus 5548UP switches with 32 unified 1/10G ports. They are running NX-OS version 5.1(3)N1(1a).

VSS Configuration

Cisco Documentation
Catalyst 4500 Series Switch Software Configuration Guide, Release IOS XE 3.4.xSG and IOS 15.1(2)SGx: Configuring VSS
Catalyst 6500 Release 12.2SX Software Configuration Guide: Virtual Switching Systems (VSS)

Some quick notes before we begin

* It is imperative that both 6500s have a similar config before you convert them to switch virtual mode.
* Run mode sso under redundancy and nsf under OSPF on each standalone 6500. Stateful Switchover and Non-Stop Forwarding together considerably reduce the time the network is unavailable during a failover.

On both switches


redundancy
mode sso
!
router ospf 100
nsf


Initiate the switch virtual domain and assign a switch ID to each standalone switch. Ensure the domain ID is consistent on both standalone switches.

On Switch 1


switch virtual domain 100
switch 1
switch 1 priority 150


On Switch 2


switch virtual domain 100
switch 2
switch 2 priority 100


Configure the switch Virtual link

This link carries all state information between the two 6509 chassis. I used one TenGigabitEthernet interface on the Sup2T and another on the 6816 line card; this way I have redundancy in case either the Sup or the line card fails. Also, when you combine them into a port-channel, the virtual link ID on Switch 1 is 1 and on Switch 2 is 2. Just personal preference.

On Switch 1


interface Port-channel10
description Virtual Switch Link 1
no switchport
no ip address
switch virtual link 1
!
interface TenGigabitEthernet4/1
description VSL 1 member
no switchport
no ip address
channel-group 10 mode on
!
interface TenGigabitEthernet5/4
description VSL 1 member
no switchport
no ip address
channel-group 10 mode on


On Switch 2


interface Port-channel20
description Virtual Switch Link 2
no switchport
no ip address
switch virtual link 2
!
interface TenGigabitEthernet2/4/1
description VSL 2 member
no switchport
no ip address
channel-group 20 mode on
!
interface TenGigabitEthernet2/5/4
description VSL 2 member
no switchport
no ip address
channel-group 20 mode on


Convert both Standalone switches to Virtual mode

On both switches
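
The conversion itself is a single privileged-exec command (shown here as a sketch; consult the VSS configuration guide for your platform before running it, since it triggers a reload):

```
switch convert mode virtual
```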

The switches now reload and convert from standalone to virtual mode. At this stage, all interface numbers change to the format:
x/y/z, where x = switch number, y = module/slot, and z = port number
For example, Te4/1 becomes Te1/4/1 on the first switch and Te2/4/1 on the second switch.

Configure Dual-Active Detection

You do not want both switches to become active during a failure, with the same IP address live on two active 6500 chassis. So also configure interfaces for dual-active detection, which exchange fast hellos. I used a GigabitEthernet interface on the Sup2T and another on the 6848 line card, again for redundancy.

On Switch 1


switch virtual domain 100
dual-active detection fast-hello

interface GigabitEthernet1/3/1
no switchport
no ip address
dual-active fast-hello
!
interface GigabitEthernet1/5/1
no switchport
no ip address
dual-active fast-hello


On Switch 2


switch virtual domain 100
dual-active detection fast-hello

interface GigabitEthernet2/3/1
no switchport
no ip address
dual-active fast-hello
!
interface GigabitEthernet2/5/1
no switchport
no ip address
dual-active fast-hello


VSS Verification

VSS is now configured and running. Issue the show switch virtual command to confirm. There are additional show switch virtual commands for verifying dual-active detection, VSL status, etc.


vPC Configuration

Both NX5K switches are vPC peer switches. They should be in the same vPC domain, and there should be a vPC keepalive link and a vPC peer link between the vPC peers.

Enable the feature set

On both switches


feature lacp
feature vpc


Configure the management vrf and interface.

On Switch 1


vrf context management
ip route 0.0.0.0/0 1.1.1.2
!
interface mgmt 0
vrf member management
no ip redirects
ip addr 1.1.1.1/30


On Switch 2


vrf context management
ip route 0.0.0.0/0 1.1.1.1
!
interface mgmt 0
vrf member management
no ip redirects
ip addr 1.1.1.2/30


Configure the vPC domain, which must be consistent on both switches. The mgmt0 port is now used as the vPC keepalive link via the command peer-keepalive destination 1.1.1.2 source 1.1.1.1 vrf management.

On Switch 1


vpc domain 100
peer-switch
role-priority 2000
system-priority 2000
peer-keepalive destination 1.1.1.2 source 1.1.1.1 vrf management
peer-gateway
auto-recovery


On Switch 2


vpc domain 100
peer-switch
role-priority 6000
system-priority 2000 !! The system-priority has to be same on both vPC peers
peer-keepalive destination 1.1.1.1 source 1.1.1.2 vrf management
peer-gateway
auto-recovery


Configure the vPC Peer-Link. I usually use two Ethernet interfaces (the last two in the module) for this.

On Switch 1


int port-channel100
description vPC peer-link
switchport
switchport mode trunk
vpc peer-link
spanning-tree port type network
!
int eth1/31
descr vPC peer-link to 5548-2
switchport
switchport mode trunk
channel-group 100
spanning-tree port type network
no shut
!
int eth1/32
descr vPC peer-link to 5548-2
switchport
switchport mode trunk
channel-group 100
spanning-tree port type network
no shut


On Switch 2


int port-channel100
description vPC peer-link
switchport
switchport mode trunk
vpc peer-link
spanning-tree port type network
!
int eth1/31
descr vPC peer-link to 5548-1
switchport
switchport mode trunk
channel-group 100
spanning-tree port type network
no shut
!
int eth1/32
descr vPC peer-link to 5548-1
switchport
switchport mode trunk
channel-group 100
spanning-tree port type network
no shut
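
One piece not shown in the snippets above is the vPC member port-channel itself, i.e. the MEC facing the 6509 VSS pair. A hedged sketch (interface, port-channel, and vPC numbers are made up), applied identically on both Nexus switches:

```
! On both NX5Ks: the port-channel toward the 6509 VSS pair
interface port-channel50
 description vPC 50 to 6509 VSS
 switchport mode trunk
 vpc 50
!
interface Ethernet1/1
 description Member of vPC 50
 switchport mode trunk
 channel-group 50 mode active
```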


vPC Verification


Discussion

»VSS or not
»VSS and L2 trunks



by aryoba See Profile
last modified: 2019-10-24 15:52:16