
Thursday, July 28, 2011

Quick Nortel MLT (Link Aggregation) Reference

These excerpts and images were taken from the Nortel reference manual for the Passport 8600. These notes are very helpful for anyone configuring a LAG (MLT) on a Nortel switch for the first time.



IEEE 802.3ad-based link aggregation, through the Link Aggregation Control Protocol (LACP), provides a dynamic link aggregation function: LACP detects when links can be aggregated into a link aggregation group (LAG) and adds them to the trunk group as they become available. LACP also provides link integrity checking at Layer 2 for all links within the LAG.




Supported Link Aggregation Types:
- MLT with LACP
- MLT: statically configured link bundling
- SMLT

Virtual LACP (VLACP) is a Nortel modification that provides end-to-end failure detection. VLACP is not a link aggregation protocol; you can run it on single ports or on ports that are part of an MLT.
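
If you want to enable VLACP, a minimal per-port sketch in the Passport 8600 CLI looks like the following; the reserved multicast MAC and timer values are typical examples from Nortel documentation, so verify them against your software release:

config vlacp enable
config ethernet 1/1 vlacp macaddress 01:80:c2:00:00:0f
config ethernet 1/1 vlacp fast-periodic-time 500
config ethernet 1/1 vlacp timeout short
config ethernet 1/1 vlacp enable

Both ends of the link must agree on these values for VLACP to mark the link as up.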

MLT provides module redundancy via Distributed MultiLink Trunking (DMLT). DMLT allows you to aggregate similar ports from different modules.
- Nortel recommends that LACP not be configured on the IST MLT.
- Nortel recommends that you do not configure VLACP on LACP-enabled ports. VLACP does not operate properly with LACP. You can configure VLACP with any SMLT configuration.
- Nortel recommends always using DMLT when possible.

MLT and MLT with LACP configuration rules:
• MLT is supported on 10BASE-T, 100BASE-TX, 100BASE-FX, Gigabit Ethernet, and 10 Gigabit Ethernet module ports.
• All MultiLink trunk ports must have the same speed and duplex settings, even when auto-negotiation is set.
• The media type of MLT ports can be different; a mix of copper and fiber is allowed.
• All MultiLink trunk ports must be in the same STG unless the port is tagged. Tagging allows ports to belong to multiple STGs, as well as multiple VLANs.
• MLT is compatible with Spanning Tree Protocol (STP), Multiple Spanning Tree Protocol (MSTP) (IEEE 802.1s), and Rapid Spanning Tree Protocol (RSTP) (IEEE 802.1w).
• Tagging (IEEE 802.1Q) is supported on a MultiLink trunk.
• MLT ports can span modules, providing module redundancy.
• Apply filters individually to each port in a MultiLink trunk.
• If identical BPDUs are received on all ports, the MultiLink trunk mode is forwarding. You can disable the Nortel STP group (ntstg) if you do not want to receive BPDUs on all ports.
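
As a rough illustration of these rules, a plain (static) MultiLink trunk is built on the Passport 8600 CLI along the lines below; the MLT ID and ports are arbitrary examples, so confirm the exact syntax in the reference manual:

config mlt 1 create
config mlt 1 add ports 1/1,2/1
config mlt 1 perform-tagging enable
config mlt 1 ntstg disable

Here perform-tagging turns on IEEE 802.1Q tagging for the trunk members, and ntstg disable switches off the Nortel STP behavior described above.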

LAG rules:
• All LAG ports operate in full-duplex mode.
• All LAG ports operate at the same data rate.
• Assign all LAG ports in the same VLANs.
• Link aggregation is compatible with the STP, MSTP, and RSTP.
• Assign all ports in an LAG to the same STP groups.
• Ports in an LAG can exist on different modules.
• For Gigabit and 10 Gigabit ports, you can use link aggregation groups 1 to 31.
• For Fast Ethernet ports, you can use link aggregation groups 1 to 7 only.
• Each LAG supports a maximum of eight active links.
• Each LAG supports a maximum of eight standby links.
• After a MultiLink trunk is configured with LACP, you cannot add or delete ports or VLANs manually without first disabling LACP.

SMLT is an option that improves Layer 2 and Layer 3 resiliency. Two SMLT switches form a Switch Cluster and are referred to as an IST Core Switch pair.
Switch Clusters are always formed as a pair, but pairs of clusters can be combined in either a square or full-mesh fashion to increase the size and port density of the Switch Cluster.
When configured in a Layer 3 or routed topology, the configuration is referred to as Routed SMLT (RSMLT).

• Before you reboot a switch that is the LACP master, you must configure the LACP system ID globally to prevent an RSMLT failure.
• A properly designed SMLT network inherently does not have any logical loops.
• SMLT solves the spanning tree problem by combining two aggregation switches into one logical MLT entity, thus making it transparent to any type of edge switch. In the process, it provides quick convergence, while load sharing across all available trunks.
Single Port SMLT rules:
• The dual-homed device that connects to the aggregation switches must support MLT.
• Single-port SMLT is supported on Ethernet ports.
• Each single-port SMLT is assigned an SMLT ID from 1 to 512.
• You can designate Single Port SMLT ports as Access or Trunk (IEEE 802.1Q tagged or not); changing the type does not affect behavior.
• You cannot change a Single Port split MultiLink trunk to an MLT-based split MultiLink trunk by adding additional ports. You must delete the single port split MultiLink trunk and reconfigure the port as SMLT/MLT.
• You cannot change an MLT-based split MultiLink trunk into a single port split MultiLink trunk by deleting all ports except one. You must remove the SMLT/MLT and reconfigure the port as Single Port SMLT.
• You cannot configure a port as an MLT-based SMLT and as single-port SMLT at the same time.
• Two or more aggregation switches can have single-port split MultiLink trunks with the same IDs. You can have as many single-port split MultiLink trunks as there are available ports on the switch.
• LACP is supported on single port SMLT.

Simple Loop Prevention Protocol (SLPP) is used to prevent loops in an SMLT network. SLPP is focused on SMLT networks but works with other configurations. Nortel recommends that you always use SLPP in any SMLT environment. SLPP requires software release 4.0.x or higher.
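
A minimal SLPP sketch, assuming the 4.x Passport 8600 CLI (the VLAN ID, port, and threshold are illustrative; Nortel's general guidance is a low receive threshold on the primary aggregation switch and a higher one on the secondary):

config slpp add 10
config slpp operation enable
config ethernet 1/1 slpp packet-rx enable
config ethernet 1/1 slpp packet-rx-threshold 5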

MLT with LACP configuration considerations
When you configure standard-based link aggregation, you must enable the aggregation parameter. After you enable the aggregation parameter, the LACP aggregator is mapped one-to-one to the specified MultiLink trunk.

Perform the following steps to configure an LAG:
1. Assign a numeric key to the ports you want to include in the LAG.
2. Configure port aggregation to true.
3. Enable LACP on the port.
4. Create a MultiLink trunk and assign it the same key as in step 1. The MultiLink trunk/LAG only aggregates ports whose key matches its own.

The newly created MultiLink trunk or LAG adopts the VLAN membership of its member ports when the first port attaches to the aggregator associated with this LAG. When a port detaches from an aggregator, the associated LAG port deletes the member from its list. After a MultiLink trunk is configured with LACP, you cannot add or delete ports or VLANs manually without first disabling LACP. To enable tagging on ports belonging to a LAG, disable LACP on the port, then enable tagging and LACP on the port.
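
Mapped onto the Passport 8600 CLI, the four steps look roughly like this; the key value, MLT ID, and ports are arbitrary examples, so verify the command forms against your release:

config ethernet 1/1,2/1 lacp key 10
config ethernet 1/1,2/1 lacp aggregation enable
config ethernet 1/1,2/1 lacp enable
config mlt 10 create
config mlt 10 lacp key 10
config mlt 10 lacp enable

The port key and the MLT key must match for the ports to aggregate into that LAG.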

MLT with LACP and SMLT configuration considerations
Split MultiLinkTrunks (SMLT) can be configured with MLT or MLT with LACP. Follow these guidelines when you configure SMLT with LACP:
• When you set the LACP system ID for SMLT, configure the same LACP SMLT system ID on both aggregation switches to avoid data loss. Nortel recommends that you configure the LACP SMLT system ID to be the base MAC address of one of the aggregation switches, and that you include the SMLT ID. Ensure that the same system ID is configured on both of the SMLT core aggregation switches (see the sketch after this list).
• If you use LACP in an SMLT square configuration, the LACP ports must have the same keys for that SMLT LAG; otherwise, the aggregation can fail if a switch fails.
• If an SMLT aggregation switch has LACP enabled on some of its MultiLink trunks, do not change the LACP system priority. If some ports do not enter the desired MultiLink trunk after a dynamic configuration change, enter the following CLI command:
config mlt lacp clear-link-aggregate
• When you configure SMLT links, Nortel recommends that you set the multicast packets-per-second value to 6000 pps.
• Nortel recommends that you do not enable LACP on interswitch trunks to avoid unnecessary processing. Use VLACP if a failure detection mechanism is required when there is an optical network between the SMLT core switches.
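
For the first bullet, the SMLT system ID is set globally; the command takes a form like the one below (the exact syntax is an assumption — check the reference manual for your release), where the placeholder is the base MAC address of one of the aggregation switches and must be identical on both:

config lacp smlt-sys-id <base-mac-of-aggregation-switch>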







Sunday, July 24, 2011

Cisco Nexus 1000v Gotchas!

Have you deployed the new Cisco N1KV yet, or are you thinking about doing it?

There are many online tutorials, and of course Cisco documentation is great and very useful as always for installing and setting up the Nexus 1000v distributed virtual switch. However, nothing beats first-hand experience of testing, playing with, and implementing a new technology in a production environment. That's why I thought I should share some important points and guidelines from my experience installing N1KV in our VMware environment. It was a hassle to get it right, but once done it turned out to be the next greatest thing in virtual networking for us.

1. VLANs: As you have probably read, you need several new dedicated VLANs (Control, Packet, and Management) for N1KV, and these have to exist on the system uplink. However, you also need to put the vCenter and vMotion VLANs on the system uplink port-profile as well. To do so, do the following:

N1KV84(config)# port-profile type ethernet system-uplink
N1KV84(config-port-prof)# switchport mode trunk
N1KV84(config-port-prof)# switchport trunk allowed vlan 111,113,249,261-262

Here,

VLAN-111 is the vCenter management VLAN
VLAN-113 is the vMotion VLAN
VLAN-249 is for the Nexus 1000v management IP (this VLAN can be the same as the vCenter VLAN, but we chose it to be different since we have a dedicated management VLAN in our environment)
VLAN-261 is the Control VLAN
VLAN-262 is the Packet VLAN
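
Those VLANs also have to exist on the VSM itself. Creating them is standard NX-OS; the names below are just labels I made up:

N1KV84(config)# vlan 111
N1KV84(config-vlan)# name vcenter-mgmt
N1KV84(config-vlan)# vlan 113
N1KV84(config-vlan)# name vmotion
N1KV84(config-vlan)# vlan 249
N1KV84(config-vlan)# name n1kv-mgmt
N1KV84(config-vlan)# vlan 261
N1KV84(config-vlan)# name n1kv-control
N1KV84(config-vlan)# vlan 262
N1KV84(config-vlan)# name n1kv-packet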
 
2. How many N1KV dvSwitches do I need for my VMware environment?
First of all, you should know that you need 2 VSM (Virtual Supervisor Module) virtual machines (VMs) per N1KV. This will vary from environment to environment, but for ours we created 2 N1KV switches across two datacenters, with each datacenter hosting 3-4 clusters.
Also, our datacenters are separated at physical boundaries, so it made more sense for us to have 2 dvSwitches. Otherwise, if we had chosen 1 per cluster, we would be creating 16 VSM VMs, which I think is overkill.

3. NLB - Network Load Balancing:
If you currently have Windows Network Load Balancing in your VMware environment, you will have to disable IGMP snooping in Nexus 1000v on the VLANs to which the NLB VIP (Virtual IP) is bound, or to which the vNICs (port groups) of NLB-enabled VMs are connected. Further, remember that only multicast and IGMP multicast modes are supported on the Nexus 1000v distributed virtual switch. Unicast mode is not supported on Nexus 1000v.

4. LACP (no static LAGs): You can't create a static LAG (link aggregation) between a physical switch (or stack) and Nexus 1000v to achieve higher bandwidth and port redundancy. To achieve more than 1 Gbps, you must enable the LACP feature in Nexus using the following command:

 N1KV84(config)# feature lacp

Then configure / activate LACP on ethernet port-profile like this:

 N1KV84(config)# port-profile type ethernet system-uplink
 N1KV84(config-port-prof)# vmware port-group
 N1KV84(config-port-prof)# channel-group auto mode active

Make sure LACP is also enabled (as passive) on the physical switch/stack ports connected to the Nexus 1000v.
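
Once both sides are configured, you can sanity-check the channel from the VSM with the standard NX-OS show commands:

 N1KV84# show port-channel summary
 N1KV84# show lacp neighbor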
 
5. Persistent Connections across Host (Server) reboots:
To make sure that upstream connectivity stays intact during normal reboots or server failures, you need to define certain VLANs as system VLANs for uplinks configured as trunks. These include:
 
  A. vMotion, vCenter, Control, Packet, and Management for system uplink
 
        N1KV84(config)# port-profile type ethernet system-uplink
        N1KV84(config-port-prof)# vmware port-group
        N1KV84(config-port-prof)# system vlan 111,113,249,261-262

 
 B. Storage VLAN on storage uplink(s) for iSCSI or NFS
 
   N1KV84(config)# port-profile type ethernet storage-uplink-iscsi
   N1KV84(config-port-prof)# vmware port-group
   N1KV84(config-port-prof)# switchport mode access
   N1KV84(config-port-prof)# switchport access vlan 321
   N1KV84(config-port-prof)# mtu 9000
   N1KV84(config-port-prof)# no shutdown
   N1KV84(config-port-prof)# system vlan 321

   N1KV84(config)# port-profile type ethernet storage-uplink-nfs
   N1KV84(config-port-prof)# vmware port-group
   N1KV84(config-port-prof)# switchport mode access
   N1KV84(config-port-prof)# switchport access vlan 320
   N1KV84(config-port-prof)# mtu 9000
   N1KV84(config-port-prof)# channel-group auto mode active
   N1KV84(config-port-prof)# no shutdown
   N1KV84(config-port-prof)# system vlan 320
   N1KV84(config-port-prof)# max-ports 32
   N1KV84(config-port-prof)# state enabled
 
  C. Any VLAN(s) used for data uplinks
 
  D. You don't need to define system VLANs for access ports.
 
6. VSM Management IP Address - Make sure you assign the same management IP address during the installation of both VSMs.
 
7. L2/L3 (Layer 2 or Layer 3): During the setup you will be asked to configure N1KV for L2 or L3 mode. If the upstream physical switch to which the Nexus 1000v directly connects runs in L2 mode (no routing), configure N1KV in L2 mode; if the upstream switch runs in Layer 3 mode (switching as well as routing), configure Nexus in L3 mode. Ours was L2.
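
For reference, in L2 mode the domain configuration on the VSM ends up looking roughly like this (the domain ID is arbitrary, and the control/packet VLANs are the ones from point 1 above):

N1KV84(config)# svs-domain
N1KV84(config-svs-domain)# domain id 84
N1KV84(config-svs-domain)# control vlan 261
N1KV84(config-svs-domain)# packet vlan 262
N1KV84(config-svs-domain)# svs mode L2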

8. Finally, start with the latest version of Nexus 1000v, because earlier releases are buggy. If your ESX/ESXi version prevents you from installing the latest version, read the release notes and install any available patches.

Hope after reading this post your experience with N1KV won't be as rocky as mine. :-)

Saturday, July 23, 2011

How to disable IGMP snooping on Nexus 1000v when using Microsoft NLB in IGMP multicast mode?

If you have Cisco Nexus 1000v distributed virtual switch in your VMware virtual environment and you have virtual machines (VMs) running Microsoft NLB (Network Load Balancing) in IGMP multicast mode, then you will need to disable IGMP snooping on N1KV to allow multicast traffic to pass through on the VLAN for those NLB VMs. By default IGMP snooping is enabled on all VLANs in Cisco Nexus 1000v, which is a good thing.

Here is the command to disable IGMP snooping on N1KV on, say, VLAN 100.

N1KV84# config t
N1KV84(config)# vlan 100
N1KV84(config-vlan)# no ip igmp snooping
N1KV84(config-vlan)# exit
N1KV84(config)# copy run start
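
You can confirm that snooping is off for the VLAN with:

N1KV84# show ip igmp snooping vlan 100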

Note: Nexus 1000v doesn't support MS NLB in Unicast mode.

Friday, July 22, 2011

Dell OpenManage 6.3 on ESXi 4.1

You will need to set the following advanced option and reboot the ESXi host after installing OpenManage 6.3:

UserVars.CIMoemProviderEnabled to 1
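
If you prefer the command line over the vSphere Client, the same option can be set from the ESXi Tech Support Mode shell (a sketch; the option path is taken from the setting name above):

esxcfg-advcfg -s 1 /UserVars/CIMoemProviderEnabled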


Friday, July 15, 2011

NetApp 3140 FilerView Not Working with Java 1.6 and SSL (HTTPS) - Workaround

I recently got my new NetApp 3140 FAS series unit running Data ONTAP 8.0.1 and wanted to try the different ways of managing it (CLI, NSM, and browser). Everything worked with web management except the Java applets over SSL (HTTPS). It turned out that the Java 6 JRE didn't like the code, since the FilerView applets are written using Java 1.4 JRE classes. After some trial and error with JRE 1.6.x and 1.5.x, I finally got it working with the Java 1.4 JRE. Here is the exception stack trace that JRE 1.6 kept throwing because it couldn't load the applet class, apparently failing to handle the SSL connection correctly.

Java Plug-in 1.6.0_07
Using JRE version 1.6.0_07 Java HotSpot(TM) Client VM
User home directory = C:\Documents and Settings\
----------------------------------------------------
c: clear console window
f: finalize objects on finalization queue
g: garbage collect
h: display this help message
l: dump classloader list
m: print memory usage
o: trigger logging
p: reload proxy configuration
q: hide console
r: reload policy configuration
s: dump system and deployment properties
t: dump thread list
v: dump thread stack
x: clear classloader cache
0-5: set trace level to
----------------------------------------------------
load: class com.netapp.meter.HealthMonitor not found.
java.lang.ClassNotFoundException: com.netapp.meter.HealthMonitor
at sun.applet.AppletClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.applet.AppletClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.applet.AppletClassLoader.loadCode(Unknown Source)
at sun.applet.AppletPanel.createApplet(Unknown Source)
at sun.plugin.AppletViewer.createApplet(Unknown Source)
at sun.applet.AppletPanel.runLoader(Unknown Source)
at sun.applet.AppletPanel.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
 
Here is a screenshot of the error message in Internet Explorer.

So, in my opinion, the CLI is the best way to manage all your NetApp filers, but if you are not command-line savvy and want to use a browser, make sure you install Java 1.4 on the system from which you will access NetApp FilerView. I hope NetApp fixes this in later releases of Data ONTAP.

Thursday, July 14, 2011

Suppress Redundant NIC Warning for ESX Service Consoles

After you add a second service console on an ESX box, you may receive a warning in vCenter about not having redundant NIC connections for your Service Console. This can happen when you want two service consoles for redundant access to your ESX box but only want to dedicate a single NIC to each, since burning 4 NICs on service consoles wastes those precious NICs that we all know become so valuable in the virtual world. Here is how you can suppress the dual-NIC warning for the service console port group.

1. Go to the VMware cluster in vCenter.
2. Click Edit Settings and go to VMware HA.
3. Click Advanced Options.
4. In the box, enter das.ignoreRedundantNetWarning with the value true, so that das.ignoreRedundantNetWarning appears under the Option column and true under the Value column.

Hope this helps those who need two service consoles in their environment.