I’ve been doing a lot of system design work lately, building virtualization infrastructure for places where there is no pre-existing infrastructure (also known as the revered “green field” deployment). One of the biggest issues I’ve had is that 10 Gbps switches can fall back to 1 Gbps when the proper transceiver is installed, but they cannot negotiate down to 10 or 100 Mbps.
“So what?” you ask. “Nobody in their right mind uses 10 or 100 Mbps anymore.”
Management interfaces do, because the manufacturers haven’t bothered to update them to triple-speed NICs (10/100/1000 Mbps). The Dell PowerVault 124T tape library can only do 10/100 Mbps. Brocade Fibre Channel switches, including their newest models, only have 10/100 Mbps capability on their management NICs.
Because of this, when I’m designing a new environment, instead of putting two 10 Gbps switches out in the field I now need at least three: two 10 Gbps switches and something that can do 10/100 Mbps.
“Again, so what?” you say. “Switches like that are a dime a dozen, and everybody uses old 10/100 switches for management.”
Yes, but I don’t have a 10/100 switch available at that site. So now I have to spend money to acquire one, and spend money to pay someone to configure it, maintain it, keep it on a service contract, monitor it, have it consume 1U of space, etc. If it had a NIC that could do 10/100/1000, I could plug it right into a leftover port on my nice big, monitored, already-there-and-configured 10 Gbps switches and move on with my life. Even the cheap Cisco Linksys desktop switches, available from Best Buy for $99, offer 10/100/1000. Why doesn’t my $40,000 Fibre Channel switch?
On top of all that, why isn’t there redundancy for the management NIC on some of my equipment? My day is bad enough when I lose or misconfigure a switch. Not being able to reach other equipment during a crisis limits my options. I don’t like limited options, especially when the equipment is five hours away.
I’ve singled out Dell and Brocade a bit, both here and in my comments on Twitter, but remember that I know their products very well. They are not the only ones with this problem. Vendors, if you have a copper management NIC on your device, please upgrade it to redundant, gigabit-capable NICs.
Sounds like your network design is not quite best practice.
Most data centres implement a physically separate network for server, firewall, storage, and other management interfaces. This:
1) ensures the physical security of those management ports.
2) lets you use the older 1G/100M/10M switches that aren’t part of the core.
3) ensures that your management is always available.
May I suggest that you don’t need redundant NICs; you need a smarter network design in the data centre.
That is an ancient & expensive way of thinking about data center infrastructures. There is no “the older switches”: there are no old switches at all in these designs, and I’m loath to deploy new old switches when I have spare ports on highly redundant, managed & monitored switches I can use. I just need the rest of the equipment to be able to use them, too.
Greg, I think you’ve made a fairly sweeping, and perhaps overly optimistic, description of most data centres there 🙂
But regardless of that, devices like the Brocade 6510 SAN switch (which is brand new, state-of-the-art 16Gb FC kit) have only one management Ethernet port, and that can’t be made redundant short of a permanently attached console cable as well, no matter how clever your network design is…