
Mar 20, 2015

Cost Assumption for Tooth Replacement

All of us are aware of the importance of maintaining good oral hygiene. Regularly flossing the teeth, brushing the teeth twice a day, and visiting the dentist every six months can certainly lower the risk of tooth decay and various dental problems. Unfortunately, many fail to follow these basic guidelines. As a result, they develop tooth problems. To add to that, they might visit the dentist at a stage where the damage is so extensive that nothing can be done to save the tooth. Under such circumstances, when the tooth cannot be saved by dental fillings or root canal treatment, the affected tooth has to be extracted. The next step that one needs to consider is getting a dental implant. Though dental implants are expensive, they can last a lifetime. Besides these implants, there is also the option of dental bridges. The following sections provide information regarding the options and cost of tooth replacement.

Cost of Dental Implants

A dental implant is one of the best options when it comes to tooth replacement. In the case of a dental implant, a screw is fitted into the jawbone, followed by the placement of a prosthesis on top of the screw. The prosthesis acts as the tooth. This procedure requires a lot of skill. The cost of a single dental implant ranges between USD 1500 and USD 5000. The cost could vary, depending on the type of materials used for making the prosthesis. Also, the cost might depend on the location of the clinic and the qualifications of the dentist. If the bone is not in a good condition, and a pre-implant surgical correction is required, the cost is likely to increase. Since dental implants are expensive, it's advisable to check with your insurance company to find out if they will cover the implant cost.

Cost of Dental Bridge

In the case of a missing front tooth, one has the option of getting a dental bridge. A bridge is a prosthesis that uses the support of adjacent teeth. The two adjacent teeth (abutment teeth) need to be contoured, which involves the removal of a portion of the enamel. This creates space for the placement of a crown over these teeth. Thereafter, the dentist takes impressions of the teeth, which are then used for making the dental bridge, the pontic (the false tooth that takes the place of the missing tooth), and the crowns. A temporary bridge is used by the patient to protect the exposed teeth and gums while the bridge is being made. The crowns that are placed on the abutment teeth provide support to the pontic tooth. The cost of a dental bridge for a single tooth ranges between USD 700 and USD 1500. The cost will vary, depending on the type of material (metal or ceramic) used to fabricate the bridge. At times, an acrylic or porcelain-fused-to-metal bridge may be used, with porcelain facing over the areas that are visible. A pure ceramic bridge will cost more. Since a pure ceramic bridge is preferred for missing front teeth, the cost of front tooth replacement is often higher than the cost of replacing missing molars.

Post and Core

In this treatment option, the root of the missing tooth needs to be intact even if the crown is missing. In this case, the missing tooth structure is replaced before making a new dental crown. The core can be made from dental amalgam (metal filling material) or dental composite (tooth bonding). If more than half of the tooth structure is missing, a post is needed to anchor the core to the tooth. The cost of a post and core treatment for a single tooth ranges between USD 300 and USD 500, which is less than the cost of a bridge or an implant. However, there are many factors that need to be considered when going in for a post and core. For example, the cost would increase if one has to undergo root canal treatment. Also, a post and core is a relatively delicate option, so if you are reckless with your prosthesis, you will invariably land back in the dental chair for repair work, which will again add to the overall cost of tooth replacement.

On a concluding note, a dental implant costs more than other options. However, it could last for a longer duration, provided you get the procedure done by an experienced dentist/periodontist. So, ask your dentist about the options available, so that you can make an informed decision. Opt for a prosthesis that suits your requirements and budget the best.

Mar 4, 2015

OpenMediaVault

OpenMediaVault (OMV) is a free and open-source software (FOSS) network-attached storage (NAS) operating system (OS), developed and designed primarily for home use. Developer Volker Theile began development of OpenMediaVault in 2009; previously, he worked on the FreeNAS project.


OpenMediaVault is based on the Debian Linux distribution. It is licensed under the GNU General Public License v3 as published by the Free Software Foundation. OpenMediaVault uses Debian's official standard for package management, the Advanced Packaging Tool (APT). OpenMediaVault is designed to be configured and administered via its web interface, which is written in Ext JS, and it is currently compatible with 32- and 64-bit hardware.

Features

Through an application programming interface (API), OpenMediaVault allows features to be added to the web interface via its Plug-in system. The developer provides a group of core Plug-ins that can be installed via the web interface, while others are developed by the community. Many of the community-supported Plug-ins are currently hosted in an unofficial plug-in repository.
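
Because the Plug-ins are ordinary Debian packages managed through APT, they can also be installed from a shell on the OMV host. A minimal sketch, assuming root access and using the conventional core plug-in package name openmediavault-clamav as an example (exact package names may differ by release):

# Refresh the package index and install a core plug-in from the configured repositories
apt-get update
apt-get install openmediavault-clamav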

Other features include:
  • Multi-language, web-based graphical user interface (GUI)
  • Protocols: CIFS (Samba), FTP, NFS (versions 3 and 4), SSH, rsync, iSCSI, AFP and TFTP
  • Software RAID with RAID levels 0, 1, 4, 5, 6 and 10, plus JBOD
  • Monitoring: Syslog, Watchdog, S.M.A.R.T., SNMP (v1/2c/3) (read-only)
  • Statistics reports via e-mail
  • Statistics graphs for CPU workload, LAN transfer rates, hard disk usage and RAM allocation
  • GPT/EFI partitioning for disks larger than 2 TB; ext4 up to a maximum of 16 TiB
  • Filesystems: ext2, ext3, ext4, XFS, JFS, NTFS, FAT32
  • Quota
  • User and group management
  • Access controls via ACL
  • Link aggregation (bonding), Wake-on-LAN
  • Plug-in system
Plug-ins

Core Plug-ins, developed by Volker Theile:
  • ClamAV - antivirus software
  • Digital Audio Access Protocol - serves audio files on a local network (also for iTunes)
  • SAN and iSCSI - block-based access to datastores over the network
  • Lightweight Directory Access Protocol - queries and changes to a directory service
  • Logical Volume Manager - enables creating and administering dynamic partitions
  • Netatalk - file, time, and print server for Apple Macintosh
  • Plug-in to support the use of an uninterruptible power supply (UPS)
  • Easy changes to the routing tables
  • Plug-in that allows (automatic) backups to external USB hard disks
External Plug-ins are available via additional package repositories. The majority of those Plug-ins are developed by a group called OpenMediaVault Plugin Developers. The status of all Plug-ins can be viewed online.

Minimum System requirements
  • IA-32 (i386/i486) or AMD64 platform
  • 1 GiB RAM
  • 2 GiB hard drive, solid-state drive, or USB flash drive with static wear leveling support (NOTE: The entire disk is used as the system disk and cannot be used to store user data.)
  • 1 hard drive, solid-state drive, or USB flash drive for storing user data
  • 1 network card

Jan 16, 2015

Bonding instead of load balancing: Never be offline again!

The philosophy behind Viprinet: "Bonding instead of load balancing: Never be offline again!"

Viprinet – Always online, always broadband!

Today, business transactions require enterprises to have an Internet connection with 100% uptime. However, most network solutions already fail at the basics: as soon as an Internet line drops out or mobile radio cells are overbooked, the availability of the connectivity solution decreases significantly. Learn in this video why Viprinet is independent of individual Internet links and can thus help businesses achieve 100% uptime – at even lower cost.


The data stream from the LAN is encrypted by the Multichannel VPN Router and distributed onto the Internet connections (here: 2x DSL, 1x 3G / UMTS). The encrypted and fragmented data passes the networks of the utilized ISPs and reaches the Multichannel VPN Hub in the data center, which in turn decrypts the data stream and reassembles it correctly. Afterwards, the data stream is forwarded to its actual destination on the Internet. The same goes for the opposite direction: Here, the Hub encrypts the data stream, while the VPN Router decrypts it.

Feel free to visit our downloads section and download our whitepaper on the topic "Always online – wherever and whenever needed", providing detailed information on the Viprinet principle and its possibilities.

We are the bonding inventors!

We've invented the principle enabling the bonding of different WAN technologies. For us, bonding means real aggregation of bandwidth of all WAN media to be bonded.

The Multichannel VPN Router is the core of the Viprinet technology. With this device, several broadband lines can be combined into a single, highly available joint line. Unlike load balancing, which can only distribute load across several WAN links, real bonding of all available connections is realized here.

Viprinet can combine all different types of access media, be they ADSL, SDSL, UMTS / HSPA+ / 3G, or LTE / 4G. The LAN sees these connections as one single line providing the accumulated up- and downstream of the different links even for single downloads.

The remote station principle

Viprinet uses an exceptional VPN tunnel technique with a star topology for secure and fast site, facility and vehicle linkage. For this purpose, the integration of two different devices is needed: A Multichannel VPN Router establishes an encrypted VPN tunnel to a single central remote station, the Multichannel VPN Hub, via each Internet line available. These VPN tunnels are then bundled into one tunnel through which the data is then transferred.

The Multichannel VPN Hub is usually located in a highly reliable data center and acts as an exchange: Data targeted at another company site will be forwarded through the respective VPN tunnel; data targeted at the public Internet will be decrypted and forwarded to its destination. The VPN Hub provides secure and quick communication between different Multichannel VPN Routers but it also serves as pivotal exchange point between the encrypted VPN and the public Internet.

Highest reliability and maximum bandwidth

With bonding using the Viprinet principle, a fast virtual WAN connection featuring almost 100 percent reliability can be established. This reliability increases further the more different WAN media are bonded together. The ability to aggregate the bandwidths of all WAN media guarantees maximum transfer rates.

Wired and wireless bonding: Viprinet's magic

The magic of the Viprinet principle lies in bonding wired WAN media like DSL or cable together with wireless media like UMTS / 3G, HSPA+ or even LTE / 4G. Providers able to bond all these media always offer the optimum mix of cost-effective WAN media ensuring highest bandwidth and reliability. Especially the ability to combine all common 3G and 4G mobile phone technologies like LTE, HSPA+, UMTS and CDMA via bonding with satellite and/or DSL or cable is unique worldwide.

Truffle: Mushroom Networks MultiWAN Bonding Review

Aggregated point-to-multipoint capacity with Virtual Leased Line - Truffle is a load balancing router with packet-level WAN aggregation. Truffle can peer over the Internet with a Truffle device that has the VLL server module to create a bonded pipe between the two locations (such as a headquarters office and the branch offices). In this peered mode, all uplink and downlink traffic between the headquarters/data center office and the branch office location(s), including VPNs, can utilize the aggregated bandwidth of the combined Internet access links.


Acceleration - All HTTP downlink sessions are aggregated for faster transfer via the Broadband Bonding WAN aggregation technology. Truffle is a load balancing router with packet-granularity aggregation. Even in the case of a single HTTP session (for example, a single file download), all Internet access lines are simultaneously and intelligently combined to provide faster data transfer for that single session.


High 9s network reliability - Automatic failover will protect against failure of one or more Internet access lines as long as at least one access line is still active. Additionally, Automated Domain Name Service (ADNS) optimization is used to maximize the utility of all active access lines, with automated email, syslog or SNMP alerts. This translates to both better performance and less downtime for your network.

Cellular data WAN connection - Truffle supports 2 USB ports for cellular data cards. The cellular data card dongle can be configured as a fail-over-only or always-on WAN connection. In fail-over mode, if all the wired Internet access lines fail, the cellular data card will take over in less than 30 seconds.

Application Armor with Session Keep Alive - When peered to a head-office Truffle Master unit, the Truffle Master unit monitors and intelligently reacts in real time to mitigate any performance degradation caused by the WAN links at the branch offices. Managed parameters and network problems include packet loss, latency, jitter, cross-traffic, buffer management, MTU problems, black holes, and others. In case of packet loss, a spike in latency, or any other degradation on any of the WAN links at the branch office, the VLL tunnel between Truffle and Truffle Master maintains the ongoing IP sessions without loss of performance by shielding them from the effects of a dropped WAN link, lost packets, or high latency on any of the links. 2G/3G/4G cellular cards can be added as standby WAN access links for additional reliability.

Advanced Quality of Service (QoS) - Various adaptive quality of service features enable dynamic bandwidth reservation for your selected applications and traffic types, reserving bandwidth only when that traffic type is detected. You can also limit inbound/outbound traffic to defined bit rates, bind certain traffic types to specified WAN links, manipulate traffic based on the TOS identifier, block certain traffic types, and much more.


Traffic Monitoring - A history of your traffic usage based on type, protocol, interface, or layer 7 deep packet inspection identification is presented in multi-color graphs with time scales ranging from seconds and minutes to hours, days, and months.


DNS load balancing for inbound requests - Truffle can easily be configured to provide Dynamic DNS load-balancing for inbound requests for internally hosted servers such as web-servers, ftp-servers, mail-servers etc.

Intelligent session-based load balancing - In peered mode, Truffle will bond all types of traffic in downlink and uplink. Without peering, non-HTTP downlink sessions and all uplink sessions initiated from the Local Network, will be session based load balanced intelligently across the WAN lines. The application and cookie semantics will be preserved.

Pass-through installation – For installing Truffle into your existing network, no changes are required at your firewall or network. Simply slide in the Truffle between your existing network/firewall and your existing modem and add the additional WAN links as you need. All the installation and configuration can be done through the web-based management interface locally or remotely.

No coordination with ISP - With the Truffle, no new equipment or software is necessary from your Internet Service Provider(s) and all ISPs are supported. A user-friendly web-based management interface is provided for quick and easy configuration and system monitoring, either locally or remotely over the Internet.

Additional features include: DHCP server (can be turned off), stateful firewall (can be turned off), port forwarding, DMZ, UPnP support, and others. More at Mushroom Networks.

LACP/Etherchannel Algorithms & Linux Bonding Modes

The LACP mode in Enterasys and the port-channel mode in Cisco have their own algorithms for selecting the slave interfaces involved in the bonding.
As I am a Linux guy, I am more familiar with bonding in the Linux environment.
We can create the bonding in /etc/sysconfig/network/ifcfg-bond0; here we can define the master interface with the IP address, the slave interfaces involved in the bonding process, and the bonding mode. A sample configuration is sketched below.
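
A minimal sketch of such a file, using the SUSE-style ifcfg syntax that matches the path above (the interface names, address, and 802.3ad mode are assumptions; field names differ on other distributions such as RHEL):

# /etc/sysconfig/network/ifcfg-bond0 (SUSE-style syntax; adjust for your distribution)
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.1.10/24'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=802.3ad miimon=100'
BONDING_SLAVE0='eth0'
BONDING_SLAVE1='eth1'
# each slave also needs its own ifcfg file (e.g. ifcfg-eth0) with BOOTPROTO='none'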

The switch connected for the bonding also has its own aggregation algorithm, which must match the mode set on the server.

There are 7 modes present in the Linux kernel.

Refer to the bonding documentation shipped with the Linux kernel sources; it is available at the following path:
less /usr/src/linux-2.6.38/Documentation/networking/bonding.txt

More verbose information can be found at
http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding

Enterasys :
In Enterasys switches such as the N-Series, the LACP LAG outport algorithm can be set to one of three modes:
DIP-SIP - destination IP address/source IP address; slave interfaces are assigned on the basis of source or destination IP addresses.
DA-SA - destination MAC address/source MAC address; slave interfaces are assigned on the basis of source or destination MAC addresses.
Round-robin - equal distribution across all slaves, starting from the first slave, in round-robin fashion.

To check the LACP algorithm, use the following on the Enterasys switch:
Matrix N3 Platinum(su)->sh lacp ?
Specifies the lag port(s) to display
outportAlgorithm Shows lacp current ouport algorithm
flowRegeneration Shows lacp flow regeneration state
singleportlag Show single port lag setting
state Show global lag enable state
Matrix N3 Platinum(su)->sh lacp outportAlgorithm
dip-sip
Matrix N3 Platinum(su)->

To set the LACP outport algorithm to a different mode:
Matrix N3 Platinum(su)->set lacp outportAlgorithm ?
dip-sip Use sip-dip algorithm for outport determination
da-sa Use da-sa algorithm for outport determination
round-robin Use round-robin algorithm for outport determination
Matrix N3 Platinum(su)->set lacp outportAlgorithm round-robin
Matrix N3 Platinum(su)->

Hence, in accordance with the mode set on the switch, we can set the matching mode in Linux (see the sketch after this paragraph).
After doing this, the LAG groups present will use the round-robin algorithm for flow distribution.
Remember, this is a global configuration that will change the algorithm of all LAG ports present.
By default, the dip-sip algorithm is configured on Enterasys switches.
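
For example, to pair the switch's round-robin hashing with the corresponding Linux bonding mode (balance-rr), a minimal sketch might look like this (bond0 and the miimon value are assumptions):

# Load the bonding driver in round-robin mode to match the switch's round-robin hashing
modprobe bonding mode=balance-rr miimon=100
# Alternatively, on an existing bond that is down and has no slaves enslaved:
# echo balance-rr > /sys/class/net/bond0/bonding/mode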

Cisco :
On Cisco Catalyst switches, the port-channel can be operated in LACP mode.
The default load-balancing method used is src-mac (source MAC address).
Cisco allows us to perform a dry run of the implemented algorithm using the test command.
I have all interfaces configured in LACP mode (not in PAgP).

To check the current algorithm:
Cisco#sh etherchannel load-balance
EtherChannel Load-Balancing Configuration:
src-mac
EtherChannel Load-Balancing Addresses Used Per-Protocol:
Non-IP: Source MAC address
IPv4: Source MAC address
IPv6: Source MAC address
Cisco#

To test the EtherChannel algorithm used:
Cisco#test etherchannel load-balance interface port-channel 1 mac 00:18:17:F1:F9:C4 E4:9F:16:C5:11:56
Would select Gi1/0/1 of Po1
Cisco#

With an IP-based algorithm, we can use IP addresses to test the EtherChannel, as sketched below.
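
When an IP-based method (such as src-dst-ip) is configured, the same dry-run command accepts the ip keyword; a sketch with placeholder addresses (the switch replies with the member link it would select):

Cisco#test etherchannel load-balance interface port-channel 1 ip 10.0.0.1 10.0.0.2
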
To see the EtherChannel load-balancing options available:
Cisco(config)#port-channel load-balance ?
dst-ip Dst IP Addr
dst-mac Dst Mac Addr
src-dst-ip Src XOR Dst IP Addr
src-dst-mac Src XOR Dst Mac Addr
src-ip Src IP Addr
src-mac Src Mac Addr
Cisco(config)#port-channel load-balance

Here we can see that there are also src-dst-ip and src-dst-mac options, which induce additional randomization using the XOR logical operation.

Hence, the load balancing can be done using the destination IP address or the source IP address; the same goes for the MAC addresses.

To set the New Algorithm
Cisco(config)#port-channel load-balance dst-mac
Cisco(config)#

Now the load balancing will happen based on the destination MAC address. I will do some more research on this and update the post.

Dec 10, 2014

Interface Bonding 802.3ad (LACP) with Mikrotik and Cisco

Bonding (also called port trunking or link aggregation) can be configured quite easily on RouterOS-Based devices.

With 2 NICs (ether1 and ether2) in each router (Router1 and Router2), it is possible to get the maximum data rate between the two routers by aggregating port bandwidth.

To add a bonding interface on Router1 and Router2:

/interface bonding add slaves=ether1,ether2

(bonding interface needs a couple of seconds to get connectivity with its peer)

Link Monitoring:
Currently, bonding in RouterOS supports two schemes for monitoring the link state of slave devices: MII and ARP monitoring. It is not possible to use both methods at the same time due to restrictions in the bonding driver.

ARP Monitoring:
ARP monitoring sends ARP queries and uses the response as an indication that the link is operational. This also gives assurance that traffic is actually flowing over the links. If balance-rr or balance-xor mode is set, then the switch should be configured to evenly distribute packets across all links; otherwise all replies from the ARP targets will be received on the same link, which could cause other links to fail. ARP monitoring is enabled by setting three properties: link-monitoring, arp-ip-targets, and arp-interval. The meaning of each option is described later in this article, and a configuration sketch follows below. It is possible to specify multiple ARP targets, which can be useful in high-availability setups. If only one target is set, the target itself may go down; having additional targets increases the reliability of the ARP monitoring.
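
A minimal sketch of creating a bond with ARP monitoring enabled (the mode, target addresses, and interval are assumptions; the targets should be hosts reachable through the bonded links):

/interface bonding add slaves=ether1,ether2 mode=balance-rr \
   link-monitoring=arp arp-ip-targets=192.168.1.1,192.168.1.254 \
   arp-interval=100ms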

MII Monitoring:
MII monitoring monitors only the state of the local interface. In RouterOS it is possible to configure MII monitoring in two ways:

MII Type 1: the device driver determines whether the link is up or down. If the device driver does not support this option, the link will always appear as up.
MII Type 2: deprecated calling sequences within the kernel are used to determine whether the link is up. This method is less efficient but can be used on all devices. This mode should be set only if MII type 1 is not supported.

The main disadvantage is that MII monitoring cannot tell whether the link can actually pass packets, even if the link is detected as up.

MII monitoring is configured by setting the desired link-monitoring mode and the mii-interval.

Configuration Example: 802.3ad (LACP) with Cisco Catalyst GigabitEthernet Connection.

/interface bonding add slaves=ether1,ether2 \
   mode=802.3ad lacp-rate=30secs \
   link-monitoring=mii-type1 \
   transmit-hash-policy=layer-2-and-3


Configuration for the other end (assuming the aggregation switch is a Cisco device, usable in an EtherChannel / L3 environment):

!
interface range GigabitEthernet 0/1-2
   channel-protocol lacp
   channel-group 1 mode active
!
interface Port-channel 1
   no switchport
   ip address XXX.XXX.XXX.XXX XXX.XXX.XXX.XXX
!

Or for EtherChannel / L2 environment:

!
interface range GigabitEthernet 0/1-2
   channel-protocol lacp
   channel-group 1 mode active
!
interface Port-channel 1
   switchport
   switchport mode access
   switchport access vlan XX
!

Ethernet bonding with Linux and 802.3ad

Nowadays, most desktop mainboards provide more than one Gigabit Ethernet port. Connecting them both to the same switch causes most Linux distros, by default, to assign an individual IP to each device and to route traffic only over the primary device (based on device metric) or round-robin. A single connection always originates from one IP, so all of its traffic goes through one device, limiting the maximum bandwidth to 1 GBit/s.

This is where bonding (sometimes called (port) trunking or link aggregation) comes into play. It combines two or more Ethernet ports into one virtual port with only one MAC address and thus, usually, one IP address. Whereas earlier only two hosts (running the same OS) or two switches (from the same vendor) could be connected this way, nowadays there is a standard protocol that makes it easy: LACP, which is part of IEEE 802.3ad. Linux supports different bonding mechanisms, including 802.3ad. To enable bonding at all, some kernel settings are needed:

Device Drivers  --->
[*] Network device support  --->
<*>   Bonding driver support

After compiling and rebooting, we need a userspace tool for configuring the virtual interface. It's called ifenslave and is provided with the Linux kernel sources. You can either compile it by hand:

cd /usr/src/linux/Documentation/networking
gcc -Wall -O -I/usr/src/linux/include ifenslave.c -o ifenslave
cp ifenslave /sbin/ifenslave

or install it by emerge if you run Gentoo Linux:

emerge -va ifenslave

Now we can configure the bonding device, called bond0. First of all, we need to set the 802.3ad mode and the MII link monitoring frequency:

echo "802.3ad" > /sys/class/net/bond0/bonding/mode
echo 100 >/sys/class/net/bond0/bonding/miimon

Now we can bring up the device and add some Ethernet ports:

ifconfig bond0 up
ifenslave bond0 eth0
ifenslave bond0 eth1

Now bond0 is ready to be used. Run a DHCP client or set an IP:

ifconfig bond0 192.168.1.2 netmask 255.255.255.0
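
To confirm that the bond actually negotiated 802.3ad with the switch, the bonding driver exposes its status under /proc; the output lists the bonding mode, the state of each slave, and the 802.3ad aggregator information:

cat /proc/net/bonding/bond0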

These steps are needed on each reboot. If you're running gentoo, you can use baselayout for this. Add

config_eth0=( "none" )
config_eth1=( "none" )
preup() {
 # Adjusting the bonding mode / MII monitor
 # Possible modes are : 0, 1, 2, 3, 4, 5, 6,
 #     OR
 #   balance-rr, active-backup, balance-xor, broadcast,
 #   802.3ad, balance-tlb, balance-alb
 # MII monitor time interval typically: 100 milliseconds
 if [[ ${IFACE} == "bond0" ]] ; then
  BOND_MODE="802.3ad"
  BOND_MIIMON="100"
  echo ${BOND_MODE} >/sys/class/net/bond0/bonding/mode
  echo ${BOND_MIIMON}  >/sys/class/net/bond0/bonding/miimon
  einfo "Bonding mode is set to ${BOND_MODE} on ${IFACE}"
  einfo "MII monitor interval is set to ${BOND_MIIMON} ms on ${IFACE}"
 else
  einfo "Doing nothing on ${IFACE}"
 fi
 return 0
}
slaves_bond0="eth0 eth1"
config_bond0=( "dhcp" )

to your /etc/conf.d/net. I found this nice preup part in the Gentoo Wiki Archive.

Now you have to configure the other side of the link. You can either use a Linux box and configure it the same way, or an 802.3ad-capable switch. I used an HP ProCurve 1800-24G switch. You have to enable LACP on the ports you're connected to:


Now everything should work and you can enjoy a 2 GBit/s (or more) link. Further details can be found in the kernel documentation.

EtherChannel vs LACP vs PAgP

What is EtherChannel?

An EtherChannel is formed when two or more links are bundled together for the purposes of aggregating available bandwidth and providing a measure of physical redundancy. Without EtherChannel, only one link will be available while the rest of the links are disabled by STP to prevent loops.
P/S: EtherChannel is a term normally used by Cisco; other vendors might call this by a different term such as port trunking, trunking (not to be confused with Cisco's trunk port definition), bonding, teaming, aggregation, etc.


What is LACP?

A standards-based negotiation protocol, known as the IEEE 802.1AX Link Aggregation Control Protocol (originally standardized as IEEE 802.3ad), is simply a way to dynamically build an EtherChannel. Essentially, the "active" end of the LACP group sends out special frames advertising the ability and desire to form an EtherChannel. It's possible, and quite common, that both ends are set to an "active" state (versus a passive state). Once these frames are exchanged, and if the ports on both sides agree that they support the requirements, LACP will form an EtherChannel.

What is PAgP?

Cisco's proprietary negotiation protocol, used before LACP was introduced and endorsed by the IEEE. EtherChannel technology was invented in the early 1990s by Kalpana, a company acquired by Cisco Systems in 1994. In 2000, the IEEE passed 802.3ad (LACP), an open-standard version of EtherChannel.

EtherChannel Negotiation

An EtherChannel can be established using one of three mechanisms:
  • PAgP - Cisco’s proprietary negotiation protocol
  • LACP (IEEE 802.3ad) – Standards-based negotiation protocol
  • Static Persistence (“On”) – No negotiation protocol is used

Any of these three mechanisms will suffice for most scenarios, however the choice does deserve some consideration. PAgP, while perfectly able, should probably be disqualified as a legacy proprietary protocol unless you have a specific need for it (such as ancient hardware). That leaves LACP and “on“, both of which have a specific benefit.

PAgP/LACP Advantages over Static

a) Prevent Network Error

LACP helps protect against switching loops caused by misconfiguration; when enabled, an EtherChannel will only be formed after successful negotiation between its two ends. However, this negotiation introduces an overhead and delay in initialization. Statically configuring an EtherChannel (“on”) imposes no delay yet can cause serious problems if not properly configured at both ends.

b) Hot-Standby Ports

If you add more than the supported number of ports to an LACP port channel, it has the ability to place these extra ports into a hot-standby mode. If a failure occurs on an active port, the hot-standby port can replace it.

c) Failover

If there is a dumb device sitting in between the two end points of an EtherChannel, such as a media converter, and a single link fails, LACP will adapt by no longer sending traffic down this dead link. Static doesn’t monitor this. This is not typically the case for most vSphere environments I’ve seen, but it may be of an advantage in some scenarios.

d) Configuration Confirmation

LACP won’t form if there is an issue with either end or a problem with configuration. This helps ensure things are working properly. Static will form without any verification, so you have to make sure things are good to go.

To configure an EtherChannel using LACP negotiation, each side must be set to either active or passive; only interfaces configured in active mode will attempt to negotiate an EtherChannel. Passive interfaces merely respond to LACP requests. PAgP behaves the same, but its two modes are referred to as desirable and auto.


3750X(config-if)#channel-group 1 mode ?
  active     Enable LACP unconditionally
  auto       Enable PAgP only if a PAgP device is detected
  desirable  Enable PAgP unconditionally
  on         Enable Etherchannel only
  passive    Enable LACP only if a LACP device is detected

Conclusion

EtherChannel/port trunking/link bundling/bonding/teaming all refer to combining multiple network interfaces.
PAgP/LACP is just a protocol to form the EtherChannel link. You can have an EtherChannel without a negotiation protocol, but it is not advisable.

Sources:

http://en.wikipedia.org/wiki/EtherChannel
http://packetlife.net/blog/2010/jan/18/etherchannel-considerations/
http://wahlnetwork.com/2012/05/09/demystifying-lacp-vs-static-etherchannel-for-vsphere/

Dec 9, 2014

How-To Configure NIC Teaming on Windows for HP Proliant Server

NIC Teaming means you are grouping two or more physical NICs (network interface controller cards) so that they act as a single NIC. You may call it a virtual NIC. The minimum number of NICs which can be grouped (teamed) is two, and the maximum number of NICs which you can group is eight.

HP servers are equipped with redundant power supplies, fans, hard drives (RAID), etc. As we have redundant hardware components installed on the same server, the server will remain available to its users even if one of these components fails. In a similar manner, by doing NIC Teaming (network teaming), we can achieve network fault tolerance and load balancing on your HP ProLiant Server.

HP ProLiant Network Adapter Teaming (NIC Teaming) allows a server administrator to configure network adapter, port, network cable, and switch-level redundancy and fault tolerance. Server NIC Teaming also allows receive load balancing and transmit load balancing. Once you configure NIC Teaming on a server, server connectivity will not be affected when a network adapter fails, a network cable disconnects, or a switch failure happens.

To create NIC Teaming on the Windows 2008/2003 operating system, we need to use the HP Network Configuration Utility. This utility is available for download on the Driver & Download page for your HP server (HP.com). Please install the latest version of the network card drivers before you install the HP Network Configuration Utility. In Linux, the teaming (NIC bonding) function is already available, and there are no HP tools you need to use to configure it. This article will focus only on Windows-based NIC Teaming.

The HP Network Configuration Utility (HP NCU) is a very easy-to-use tool available for the Windows operating system. HP NCU allows you to configure different types of network team; here are a few:

1. Network Fault Tolerance Only (NFT)
2. Network Fault Tolerance Only with Preference Order
3. Transmit Load Balancing with Fault Tolerance (TLB)
4. Transmit Load Balancing with Fault Tolerance and Preference Order
5. Switch-assisted Load Balancing with Fault Tolerance (SLB)
6. 802.3ad Dynamic with Fault Tolerance

Network Fault Tolerance Only (NFT)

In an NFT team, you can group two to eight NIC ports, and they will act as one virtual network adapter. In NFT, only one NIC port transmits and receives data; it is called the primary NIC. The remaining adapters are non-primary and do not participate in receiving and transmitting data. So if you group 8 NICs and create an NFT team, only 1 NIC will transmit and receive data, while the remaining 7 NICs are in standby mode. If the primary NIC fails, the next available NIC is treated as primary and continues transmitting and receiving data. NFT supports switch-level redundancy by allowing the teamed ports to be connected to more than one switch in the same LAN.

Network Fault Tolerance Only with Preference Order:

This mode is identical to NFT; however, here you can select which NIC is the primary NIC. You can configure NIC priority in the HP Network Configuration Utility. This team type allows the system administrator to prioritize the order in which teamed ports should fail over if any network failure happens. This team type supports switch-level redundancy.

Transmit Load Balancing with Fault Tolerance (TLB):

TLB supports load balancing (transmit only). The primary NIC is responsible for receiving all traffic destined for the server; however, the remaining adapters participate in transmitting data. Please note that the primary NIC will both transmit and receive, while the rest of the NICs will only transmit data. In simpler words, when TLB is configured, all NICs transmit data, but only the primary NIC does both transmit and receive operations. So if you group 8 NICs and create a TLB team, only 1 NIC will transmit and receive data, while the remaining 7 NICs will only transmit data. TLB supports switch-level redundancy.

Transmit Load Balancing with Fault Tolerance and Preference Order:

This mode is identical to TLB; however, you can select which NIC is the primary NIC. This option helps the system administrator design the network in such a way that one of the teamed NIC ports is preferred over the other NIC ports in the same team. This mode also supports switch-level redundancy.

Switch-assisted Load Balancing with Fault Tolerance (SLB):

SLB allows full transmit and receive load balancing. In this team, all the NICs transmit and receive data, so you have both transmit and receive load balancing. So if you group 8 NICs and create an SLB team, all 8 NICs will transmit and receive data. However, SLB does not support switch-level redundancy, as we have to connect all the teamed NIC ports to the same switch. Please note that SLB is not supported on all switches, as it requires EtherChannel, MultiLink Trunking, etc.

802.3ad Dynamic with Fault Tolerance

This team type is identical to SLB, except that the switch must support the IEEE 802.3ad Link Aggregation Control Protocol (LACP). The main advantage of 802.3ad is that the aggregation is negotiated dynamically with the switch, so you do not have to manually configure a static channel on it; a sketch of a matching switch-side configuration follows below. 802.3ad does not support switch-level redundancy but allows full transmit and receive load balancing.
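
For illustration only, here is a hedged sketch of what the switch side of such an 802.3ad team could look like if the teamed ports land on a Cisco Catalyst (the interface range, channel-group number, and VLAN are placeholders; other LACP-capable switches have equivalent settings):

!
interface range GigabitEthernet 0/3-4
   channel-protocol lacp
   channel-group 2 mode active
!
interface Port-channel 2
   switchport mode access
   switchport access vlan 10
!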

How to team NICs on HP Proliant Server:

To configure NIC Teaming on your Windows-based HP ProLiant Server, you need to download the HP Network Configuration Utility (HP NCU). This utility is available for download at HP.com. Once you have downloaded and installed NCU, please open it. To learn how to open NCU on your HP server, please check my guide provided below.

Guide: Different ways to open HP NCU on your server

If you are using the Windows Server 2012 operating system on your HP server, then you cannot use the HP Network Configuration Utility. You need to use the built-in network teaming feature of Windows instead. Please check the article below about Windows Server 2012 network teaming to learn more.

Guide: NIC Teaming in Windows Server 2012

Let us continue with our Windows 2008/2003-based HP NCU here. Once you open NCU, you will find that all the installed network cards are listed in it. As you can see from the screenshot below, we have 4 NICs installed. Here, we will team the first two NICs in NFT mode.

Let’s start

1. The HP Network Configuration Utility Properties window will look like the one provided below.


2. Select 2 NICs by clicking on them, and then click the Team button.

3. HP Network Team #1 will be created as shown below.
4. Select HP Network Team #1 and click the Properties button to change the team properties.

5. The Team Properties Window will open now.

6. Here you can select the type of NIC team you want to implement (see the screenshot below).


7. Here, I will select NFT from the Team Type Selection drop-down list.
8. Click OK once you have selected the desired team type.


9. You will now be at the screen shown below. Click OK to close HP NCU.


10. You will see a confirmation window prompting you to save the changes; click Yes.

11. HP NCU will now configure NIC Teaming; the screen may look like the one provided below.

12. This may take some time. Once teaming is done, the window shown below will appear.

13. Open HP NCU, and you will find that the HP Network Team is shown in green. Congrats!

Working with NIC Teaming in Windows Server 2012

Of the many networking features introduced in Hyper-V 3.0 on Windows Server 2012, several were added to enhance the overall capability for networking virtual machines (VMs). One of the features introduced in Hyper-V 3.0 is a collection of components for configuring NIC teaming on virtual machines and the Windows operating system.

Originally designed for Windows Server 2012, NIC Teaming can also be used to configure teamed adapters for Hyper-V virtual machines. Since our primary focus in this article is to provide an overview of NIC Teaming in Windows Server 2012 and later versions, we will not cover in detail the steps needed to configure NIC Teaming for operating systems and virtual machines.

In earlier versions of Hyper-V (version 1.0 and version 2.0), the Windows operating system did not provide any utility to configure NIC Teaming for physical network adapters, and it was not possible to configure NIC teaming for virtual machines. A Windows administrator could configure NIC teaming on Windows by using third-party utilities but with the following disadvantages:
  • Support was provided by the vendor and not by Microsoft.
  • You could only configure NIC Teaming between physical network adapters of the same manufacturer.
  • There were also separate management UIs for each third-party teaming solution if you had configured more than one team.
  • Most third-party teaming solutions did not have options for configuring teaming remotely.
Starting with Hyper-V version 3.0 on Windows Server 2012, you can easily configure NIC Teaming for Virtual Machines.

This article expounds on the following topics:
  • NIC Teaming Requirements for Virtual Machines
  • NIC Teaming Facts and Considerations
  • How NIC Teaming works
NIC Teaming Requirements for Virtual Machines

Before you can configure NIC Teaming for virtual machines, ensure the following requirements are in place:
  • Make sure you are running at least Windows Server 2012 as the guest operating system in the virtual machine.
  • Available physical network adapters that will participate in the NIC Teaming.
  • Identify the VLAN number if the NIC team will need to be configured with a VLAN number.
NIC Teaming Facts and Considerations

It is necessary to follow several guidelines while configuring NIC Teaming, and there are also some facts you should keep in mind that are highlighted in bullet points below:
  • Microsoft implements a protocol called "Microsoft Network Adapter Multiplexor" (explained shortly) that helps in building the NIC Teaming without the use of any third-party utilities.
  • Microsoft's teaming protocol can be used to team network adapters of different vendors.
  • It is recommended to always use physical network adapters of the same type and configuration, including speed, drivers, and other network functionality, when setting up NIC Teaming between two physical network adapters.
  • NIC teaming is a feature of Windows Server, so it can be used for any network traffic, including virtual machine networking traffic.
  • NIC teaming is set up at the hardware level (physical NIC).
  • By default, a Windows Server can team up to 32 physical network adapters.
  • Only two physical network adapters in teaming can be assigned to a virtual machine. In other words, a network teamed adapter cannot be attached to a virtual machine if it contains more than two physical network adapters.
  • NIC Teaming can only be configured if there are two or more 1 Gbps or two or more 10 Gbps physical network adapters.
  • Teamed network adapters will appear in the "External Network" configuration page of Virtual Machine settings.
  • NIC Teaming can also be referred to as NIC bonding, load balancing and failover or LBFO.
How Does NIC Teaming Work?

Microsoft developers have designed a new protocol for NIC Teaming specifically. The new protocol, known as Microsoft Network Adapter Multiplexor, assists in routing packets from physical network adapters to NIC teaming adapters and vice versa. This protocol is responsible for diverting the traffic from a teamed adapter to the physical NIC. The protocol is installed by default as part of the physical network adapter initialization for the first time.

The Microsoft Network Adapter Multiplexor protocol is checked in the teamed network adapter and unchecked in the physical network adapters that are part of the NIC Teaming. For example, if there are two physical network adapters in a team, the Microsoft Network Adapter Multiplexor protocol will be disabled for these two physical network adapters and checked in the teamed adapter as shown in the below screenshot:


As shown in the above screenshot, the Microsoft Network Adapter Multiplexor protocol is unchecked in the properties of the physical network adapter named "PNIC5," and checked in the properties of the teamed network adapter named "Hyper-VTeaming."

Any network traffic generated from the teamed adapter will be received by one of the physical NICs participating in the Teaming. The teamed adapter talks to the Microsoft Network Adapter Multiplexor protocol bound in the physical NIC.

If this protocol is unchecked on one of the physical network adapters, then the teamed adapter will not be able to communicate with the physical network adapters participating in the teaming. Third-party teaming utilities might have a different protocol designed for this, but the one offered by Microsoft can be used with network cards from any vendor; this protocol is thus vendor- and network-adapter-independent.

Dec 4, 2014

ZyXEL P-663H-51 ADSL2+ 4-port Bonding Gateway Review

ZyXEL's new P-663H-51 ADSL2/2+ modem/router supports speeds of up to 48 Mbps downstream and 4 Mbps upstream, and includes four 10/100 Ethernet LAN ports. It also provides the TR-069 protocol for remote management, an SPI firewall and DoS protection for security, and advanced QoS and multicasting features for triple-play services.

Features at a Glance
  • ADSL2/2+, Annex L and Annex M
  • 2 ADSL2+ port bonding
  • Stateful Packet Inspection
  • Anti Denial-of-Service attack and port scanning
  • IGMP proxy/snooping for IP multicast
  • Port-based VLAN to support triple-play services
ZyXEL's P-663H-51 is an all-in-one ADSL2+ gateway for home, SOHO, and SMB applications. Featuring two ADSL2+ WAN ports and four 10/100 Mbps Ethernet LAN ports, the P-663H-51 provides SPI (Stateful Packet Inspection), anti-DoS (Denial of Service), and many other firewall security features to protect against network intrusion and attacks.

In addition, advanced features such as IP multicasting, IGMP proxy/snooping, fast leave, and IP QoS fulfill the needs of triple-play services, while the G.bond-based port bonding feature groups the 2 ADSL2+ physical ports into a logical link. The link not only provides VDSL-equivalent bandwidth at a much longer loop length; its load-balancing feature between the two ports also makes the P-663H-51 the best choice for business and high-end market applications.

ZyXEL P-663H-51 Features

Higher-speed Broadband Access

The ZyXEL P-663H-51 has two ADSL2/2+ WAN ports. With the ATM-based multi-pair bonding feature, the two ports can be grouped into a logical link boasting twice the bandwidth of a single ADSL2/2+ port, and the bit rate of each individual port can be freely and independently changed by its respective PHY layer. If one of the member ports fails, the conveyed traffic will be moved to the other port. When the failed port recovers, it will seamlessly return to the logical link and share the transmission/reception of the upper-layer traffic.

Compliant with all standard ADSL2/ADSL2+ features

In addition to delivering increased data rates over greater distances than basic ADSL2/ADSL2+, the P-663H-51 also supports traditional ADSL2+ standards and functions such as Annex L, Annex M, DELT, SRA, and dying gasp.

Robust, State-of-the-Art IP Security

The ZyXEL Prestige 663H-51 provides state-of-the-art standard firewall features, including Stateful Packet Inspection, anti-DoS (Denial of Service), and IP/MAC address spoofing protection, for basic defense against hackers, network intruders, and other hazardous threats.

Sophisticated QoS for Triple-Play Services

The P-663H-51 comes with complete integrated ATM and Ethernet QoS mechanisms, as well as various IP QoS features (packet classification, rate limitation, queue scheduling). The seamless QoS mapping not only allows consistent and appropriate treatment of packets but also enables the fulfillment of triple-play services. The IGMP proxy/snooping and fast leave (v1, v2) features also support IP multicasting services.



ADSL Layer Features
  • ADSL2/2+, Annex L and Annex M
  • DELT (dual-ended loop test) support
  • Seamless rate adaptation (SRA)
  • Dying Gasp
ATM Layer Features
  • Multiple PVC support
  • RFC 1483/2684 multiprotocol over AAL5
  • RFC 2516 PPPoE
  • VC and LLC multiplexing
  • Traffic shaping: UBR, CBR, VBR-nrt
  • OAM F4/F5 end-to-end loopback
  • ATM-based multi-pair bonding (G.998.1) support
Security Features
  • Three-level management login
  • WAN and LAN service access control
  • Service access control based on source IP address
  • Anti-denial-of-service: SYN flooding, IP smurfing, Ping of Death, Fraggle, Teardrop, Land
  • Anti-port scanning
  • TCP/IP/port/interface filtering rules; protection against IP and MAC address spoofing
  • Stateful Packet Inspection
Logging Features
  • User-selectable levels
  • Local display and/or send to remote syslog server
  • ADSL up/down, PPP up/down
  • Intrusion alert
  • Primary DNS server status monitor
  • XML config file failures
Network Protocols
  • IP routing
  • TCP, UDP, ICMP, ARP
  • VPN (IPSec, PPTP, L2TP) pass-through *
  • DHCP server/relay/client
  • RADIUS client
  • DNS relay/proxy
  • Dynamic DNS
  • RIP/RIP v2 routing functions
  • NAT/PAT/NAPT
  • IGMP proxy/snooping and fast leave (v1, v2 and v3)
  • IP QoS
  • UPnP IGD 1.0
Ethernet L2 Features
  • Default bridging for user traffic
  • ARP
  • 802.1Q tag-based VLAN
  • 802.1p CoS with priority queuing
Hardware Specifications
  • Power input and consumption: 12 VDC (1.5 A), 15 W
  • Power adaptor: input 100~240 VAC, 0.5 A, 50~60 Hz, 40~60 VA; output 12 VDC, 1.5 A, 18 W
  • LAN: 4-port RJ-45 connectors for 10/100 Mbps with Auto MDI/MDI-X, supporting both half and full duplex
  • ADSL: one RJ-11 connector for 2 ADSL2+ ports
Physical Specifications
  • Dimensions: 205 (W) x 145 (D) x 32 (H) mm
Environmental Specifications
  • Temperature: operating 0 ~ 40 °C, storage -30 ~ 60 °C
  • Humidity: operating 20 ~ 85% (non-condensing), storage 10 ~ 95% (non-condensing)
Certification
  • RoHS & WEEE
  • Safety: UL 1950, CSA C22.2 No. 950
  • EMC: FCC Part 15 & Part 68, Class B

The ZyXEL P-663H-51 ADSL2+ 4-port Bonding Gateway review can be read on this forum.