Dec 12, 2014

FreeNAS : How-To Setup Home File Server For Free

I download a lot of music. My wife takes a lot of digital photos. My kids also like to save music and photos. Between all of us, we have a lot of media that quickly accumulates on our home PCs. The task of sharing this media between us is a challenge. My wife didn't know how to burn data CDs and my kids didn't have a CD burner. What we needed was a home file server: a dedicated computer used for storing and sharing our files. My research found a ton of products available that would do the job. There are several dedicated Network Attached Storage (NAS) devices that I could purchase, but even the cheapest ones are still several hundred US dollars. Then there is the server software to consider. Microsoft has its Windows Storage Server software that is also several hundred US dollars. There are also many different Linux solutions that require a working knowledge of the Linux file system and command line.


In the end I settled on a free product called FreeNAS. As the title suggests, FreeNAS is free network attached storage software, but that is not all. It also has numerous features that make it extremely easy to set up, manage and expand. Plus it has features that allow you to use it as a media server for various devices. Since its hardware requirement is very minimal, this seemed like an ideal product for me to use. With FreeNAS, I was able to use my old desktop PC (a Pentium 4 with 256 MB RAM), as my file server.

Installation and setup:

To set up FreeNAS as a home file server, you must make sure you have all the proper hardware first. This means you need a multi-port router or switch to connect your file server to, as well as a network cable for the server. For the actual server, you will need a PC with at least one hard drive (I started with 2) and a CD-ROM drive.

The setup process was very easy. I downloaded the FreeNAS ISO file and created a Live CD which I inserted into my old PC. If I wanted to, I could have started using it as a file server right there (by simply changing the IP address of the server), but I wanted something that I could use in the long term... something that could auto restart with no user intervention in the event of a power failure. This meant installing it to the hard drive. FreeNAS setup made this easy to do. I simply selected which hard drive to install to, and that was it. After a reboot, I had to set up the network interface. FreeNAS auto-detects which network adapter you have, so selecting it was simple. Next I had to assign an IP address. FreeNAS setup has a default address you can use if you want, but it may not work on your home network. It's best to find out your workstation's IP address (typically assigned by your router through DHCP) and set up your FreeNAS server on a similar address. Once this is done, you are pretty much finished working directly with that machine and can access all your other options through the web interface, which I found very easy to use.
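To pick a sensible address, check what your workstation is using first; a quick sketch (the addresses shown are only examples):

```
C:\> ipconfig        <- Windows: note the "IPv4 Address" and "Subnet Mask"
$ /sbin/ifconfig     <- Linux/BSD equivalent

If your workstation reports 192.168.1.23 with mask 255.255.255.0, give the
FreeNAS box an unused address in the same subnet (e.g. 192.168.1.250),
ideally outside your router's DHCP pool so it never collides.
```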

Setting up file shares:

This is probably the most challenging part of the entire setup, but it was still relatively easy to do. Setting up the server to share files is done in 4 steps: adding a drive, formatting the drive, adding a mount point, then setting up the share. At first the task was a bit daunting, but after grasping the basic concept, it was really quite straightforward. When I added 2 more hard drives to my server, it was simple to configure them for file sharing, and within 15 minutes I had easily tripled my file server's storage capacity.

Additional Features:

Even though storage is its primary feature, there is much more that really makes this product shine. It has the ability to support multiple network protocols, including AppleTalk, NFS, FTP, Unison, and iSCSI. It also comes bundled with many extra services like the Transmission BitTorrent client, a UPnP server, an iTunes server and a basic web server. This means that it is capable of more than just storage. It can be used as part of your home entertainment setup, serving your media to your Home Theater PC, PSP, iPod, or other network devices.

Conclusion:

I'm happy to say that FreeNAS does a great job storing and sharing my files. Since my initial installation of the product, I have added and updated 3 hard drives on my server, and the process was very easy and straightforward. FreeNAS easily recognized my new hard drives and allowed me to add and share them for storage with no problems. I use the Transmission BitTorrent client to download my media, so I am not tying up my workstation with a separate BitTorrent client. If I decide later to add a Linux PC to my home network, I can simply enable the appropriate protocol on my server and have instant access to all my files. Ultimately my goal is to build a home theater PC, so when that is ready, I will already have the media server ready to serve up my media.

I heartily recommend FreeNAS if you are looking for a free (or very inexpensive) solution for a file server. You will need to know some basic technical information about your home network, like your IP address setup, and you will need to have a multiple port router or switch on your home network, but beyond that, it is relatively easy to manage and expand.

Resources:

Website: http://www.freenas.org/
Download: http://sourceforge.net/projects/freenas/files/
Installation instructions: http://www.installationwiki.org/Installing_FreeNAS
FreeNAS Blog: http://blog.freenas.org/
FreeNAS Knowledgebase: http://www.freenaskb.info/kb/
FreeNAS Support Forum: http://sourceforge.net/apps/phpbb/freenas/index.php

Yet Another AoE vs. iSCSI Opinion

That’s right, folks! Yet another asshole blogger here, sharing his AoE (ATA over Ethernet) vs. iSCSI (Internet SCSI) opinion with the world!

As if there wasn’t already enough discussion surrounding AoE vs. iSCSI in mailing lists, forums and blogs, I am going to add more baseless opinion to the existing overwhelming heap of information on the subject. I’m sure this will be lost in the noise, but after having implemented AoE with CORAID devices, iSCSI with an IBM (well, LSI) device, and iSCSI with software targets in the past, I feel I finally have something to share.

This isn’t a technical analysis. I’m not dissecting the protocols nor am I suggesting implementation of either protocol for your project. What I am doing is sharing some of my experiences and observations simply because I can. Read on, brave souls.

Background

My experiences with AoE and iSCSI are limited to fairly small implementations by most standards: multi-terabyte, mostly file serving with a little bit of database thrown in there for good measure. The reasoning behind all the AoE and iSCSI implementations I’ve set up is basically to detach storage from physical servers to achieve:
  1. Independently managed storage that can grow without pain
  2. High availability services front-end (multiple servers connecting to the same storage device(s))
There are plenty of other uses for these technologies (and other technologies that may satisfy these requirements), but that’s where I draw my experiences from. I’ve not deployed iSCSI or AoE for virtual infrastructure, which does seem to be a pretty hot topic these days, so if that’s what you’re doing, your mileage will vary.

Performance

Yeah, yeah, yeah, everyone wants the performance numbers. Well, I don’t have them. You can find people comparing AoE and iSCSI performance elsewhere (even if many of the tests are flawed). Any performance numbers I may accidentally provide while typing this up in a mad frenzy are entirely subjective and circumstantial… I may not even end up providing any! Do your own testing; it’s the only way you’ll ever be sure.

The Argument For or Against

I don’t really want to be trying to convince anyone to use a certain technology here. However, I will say it: I lean towards AoE for the types of implementations I mentioned above. Why? One reason: SIMPLICITY. Remember the old KISS adage? Well, kiss me AoE because you’ve got the goods!

iSCSI has the balls to do a lot, for a lot of different situations. iSCSI is routable in layer 3 by nature. AoE is not. iSCSI has a behemoth sized load of options and settings that can be tweaked for any particular implementation needs. iSCSI has big vendor backing in both the target and the initiator markets. Need to export an iSCSI device across a WAN link? Sure, you can do it, never mind that the performance might be less than optimal but the point is it’s not terribly involved or “special” to route iSCSI over a WAN because iSCSI is designed from the get-go to run over the Internet. While AoE over a WAN has been demonstrated with GRE, it’s not inherent to the design of AoE and never will be.

So what does AoE have that iSCSI doesn’t? Simplicity and less overhead. AoE doesn’t have a myriad of configuration options to get wrong; it’s really so straightforward that it’s hard to get it wrong. iSCSI is easy to get wrong. Tune your HBA firmware settings or software initiator incorrectly (and the factory defaults can easily be “wrong” for any particular implementation) and watch all hell be unleashed before your eyes. If you’ve ever looked at the firmware options provided by QLogic in their HBAs and you’re not an iSCSI expert, you’ll know what I’m talking about.

Simplicity Example: Multipath I/O

A great example of AoE’s simplicity vs. iSCSI is when it comes to multipath I/O. Multipath I/O is defined as utilizing multiple paths to the same device/LUN/whatever to gain performance and/or redundancy. This is generally implemented with multiple HBAs or NICs on the initiator side and multiple target interfaces on the target side.

With iSCSI, every path to the same device provides the operating system with a separate device. In Linux, that’ll be /dev/sdd, /dev/sde, /dev/sdf, etc. A software layer (MPIO) is required to manage I/O across all the devices in an organized and sensible fashion.
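For a sense of the extra moving parts, a minimal /etc/multipath.conf for device-mapper-multipath might look roughly like this (the WWID and alias are hypothetical):

```
defaults {
    user_friendly_names yes
}
multipaths {
    multipath {
        # hypothetical WWID of the LUN seen via /dev/sdd, /dev/sde, /dev/sdf
        wwid    3600a0b80000f7e9f00000000499a5bd1
        alias   data01
        path_grouping_policy multibus   # spread I/O across all paths
    }
}
```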

While I’m a fairly big fan of the latest device-mapper-multipath MPIO layer in modern Linux variants, I find AoE’s multipath I/O method much, much better for the task of providing multiple paths to a storage device because it has incredibly low overhead to setup and manage. AoE’s implementation has the advantage that it doesn’t need to be everything to every storage subsystem, which fortunately or unfortunately device-mapper-multipath has to be.

The AoE Linux driver totally abstracts multiple paths in a way that iSCSI does not, by handling all the multipath logic internally. The host is only provided with a single device in /dev that is managed identically to any other non-multipath device. You don’t even need to configure the driver in any special way; just plug in the interfaces and go! That’s a far cry from what is necessary with MPIO layers and iSCSI.
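As a sketch of how little is involved on Linux (assuming the aoetools package is installed; the shelf.slot numbers and device sizes here are examples):

```
# modprobe aoe          # load the AoE driver
# aoe-discover          # probe all interfaces for AoE targets
# aoe-stat               # list what was found
e0.0     2000.398GB   eth0,eth1 up
# mount /dev/etherd/e0.0 /mnt/storage    # one device, however many paths
```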

There’s nothing wrong with device-mapper-multipath, and it is quite flexible, but it certainly doesn’t have the simplicity of AoE’s multipath design.

Enterprise Support

Enterprise support is where iSCSI shines in this comparison. Show me a major storage vendor that doesn’t have at least one iSCSI device, even if they are just rebranded. Ok, maybe there are a few vendors out there without an iSCSI solution, but for the most part all the big boys are flaunting some kind of iSCSI solution. NetApp, EMC, Dell, IBM, HDS and HP all have iSCSI solutions. On the other hand, AoE has only a single visible company backing it at the commercial level: CORAID, a spin-off company started by Brantley Coile (yeah, the guy who invented the now-Cisco PIX and AoE). I’m starting to see some Asian manufacturers backing AoE at the hardware level, but when it comes to your organization buying rack-mount AoE-compatible disk trays, CORAID is the only vendor I would suggest at this time.

This isn’t so fantastic for getting AoE into businesses, but it’s a start. With AoE in the Linux kernel and Asian vendors packing AoE into chips, things will likely pick up for AoE from an enterprise support point of view: it’s cheap, it’s simple and performance is good.

Conclusion

AoE rocks! iSCSI is pretty cool too, but I’ve certainly endured much worse pain working with much more expensive iSCSI SAN devices than with the CORAID devices, and with no performance benefit that I could realize under moderate to heavy file serving and light database workloads. I like AoE over iSCSI, but there are plenty of reasons not to like it as well.

ATA-over-Ethernet vs iSCSI

Every so often someone voices interest in ATAoE support for Solaris or tries to engage in an ATAoE versus iSCSI discussion. There isn't much out there in the way of information on the topic so I'll add some to the pot...

If you look just at the names of these two technologies you can easily start to equate them in your mind and start a running mental dialog regarding which is better. But most folks make a very common mistake: ATA-over-Ethernet is exactly that, over Ethernet, whereas iSCSI is Internet SCSI, or as some people prefer to think of it, SCSI over IP. So we've got two differentiators just given the names of these technologies alone: the ATA vs SCSI command set, and Ethernet vs the IP stack. The interesting thing is the latter distinction.

There is a natural give and take here. The advantage of ATAoE is that you don't have the overhead of translating ATA to SCSI and back to ATA if you're using ATA drives, so there is a performance pickup there. Furthermore, because we don't have the girth of the TCP/IP stack underneath, we don't burden the system with all that processing, which adds even more performance. In this sense, ATAoE strips away all the stuff that gets in the way of fast storage over Ethernet. But, naturally, there is a catch. You can't route Ethernet; that's what TCP/IP is for. That means that with ATAoE you're going to be building very small and localized storage networks on a single segment. Think of a boot server which operates without TCP/IP: you've got to have one per subnet so that it sees the requests.

iSCSI, on the other hand, might be burdened by the bulk of the TCP/IP stack, but it has the ability to span the Internet because of it. You can have an iSCSI target (server) in New York and an iSCSI initiator (client) in London connected across a VPN and it's not a problem. Plus, iSCSI is an open and accepted standard. ATAoE, on the other hand, is open, but it was created and developed by Coraid, which also happens to be the only supplier of ATAoE enclosures. That may change, but we'll see how well it catches on.

ATAoE promises to be smaller and faster than the industry-standard iSCSI, and it is, but unless you are using it for a very local application you're going to be in trouble. Not to mention the lack of enclosure and driver support for non-Linux systems.

The question then becomes: should OpenSolaris support ATAoE? Personally, I don't think we should ever be against the idea of anything new; if someone wants to do it, we should all get behind it. But looking at Solaris, I doubt the idea would stick. First and foremost, Solaris is an OS that adheres to the standards and plays by the rules, even when it's painful. Linux doesn't always play by those rules and often gains from breaking them. Linux is a great experimental platform, no doubt, but I just don't think the ideals of ATAoE mesh well with the goals of Solaris. Furthermore, ATAoE doesn't offer the level of scalability, flexibility, and manageability that we get with iSCSI. The performance hit of TCP/IP is definitely a downside, but I think the advantages it brings to the table far outweigh the downsides.

Here are some links to help you explore the subject more on your own:

ATA over Ethernet a ‘strict no’ in Data Center Networks

While exploring storage networking technologies, you may come across ATA over Ethernet (ATAoE): the ATA command set transported directly within Ethernet frames. The approach is similar to Fibre Channel over Ethernet (FCoE), but in reality ATAoE has gained far less acceptance from the industry.

As a matter of fact, ATAoE is limited to a single vendor (vendor lock-in), and its specification is only 12 pages long, compared with the iSCSI specification's 257 pages.

Although ATA over Ethernet was once regarded as a lean, fast technology, it was overshadowed by the virtues of iSCSI in the long run.

Storage networking specialists generally hold the opinion that the ATAoE protocol is broken, and so it is not recommended for deployment in data centers. The following details cement this statement:
  • No sequencing - ATAoE has no sequence numbers that would let storage arrays and servers match responses to requests or split a single request across numerous Ethernet frames. As a result, a server can have only a single request outstanding with a particular storage array.
  • No retransmission - the protocol has no packet loss detection or recovery mechanism.
  • No fragmentation - ATA over Ethernet requests must fit directly into Ethernet frames, so fragmenting a single request into multiple frames is not possible, and the achievable data flow per request is tiny. Without jumbo frames, only two sectors fit in each request.
  • No authentication - the protocol has no authentication mechanism, so there is no network security beyond the inherent non-routability of AoE.
  • Weak support for asynchronous writes - due to the absence of retransmissions and sequencing, asynchronous writes cannot be handled safely.
The final word is that this protocol design might have passed almost 30 years ago, when TFTP (Trivial File Transfer Protocol) was designed. Today it is simply treated as a textbook example of broken protocol design.

According to the analysis of industry specialists, the ATAoE protocol is fine for building a home network. For mission-critical data center applications, ATA over Ethernet is a ‘strict no’.

Dec 11, 2014

Understanding ADSL Technology

An acronym for Asymmetric Digital Subscriber Line, ADSL is the technology that allows high-speed data to be sent over existing POTS (Plain Old Telephone Service) twisted-pair copper telephone lines. It provides a continuously available data connection whilst simultaneously providing a continuously available voice-grade telephony circuit on the same pair of wires.

ADSL technology was specifically designed to exploit the "one-way" nature of most internet communications, where large amounts of data flow downstream towards the user and only a comparatively small amount of control/request data is sent by the user upstream. As an example, MPEG movies require 1.5 or 3.0 Mbit/s downstream but need only between 16 kbit/s and 64 kbit/s upstream. The protocols controlling Internet or LAN access require somewhat higher upstream rates, but in most cases can get by with a 10-to-1 ratio of downstream to upstream bandwidth. The ADSL specification supports data rates of 0.8 to 3.5 Mbit/s when sending data (the upstream rate) and 1.5 to 24 Mbit/s when receiving data (the downstream rate). This difference between upstream and downstream speeds is the reason for including "asymmetric" in the technology's name.
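To make the asymmetry concrete, a rough back-of-the-envelope calculation (using the ADSL2+ rates of 24 Mbit/s down and 1.0 Mbit/s up from the table below, and ignoring protocol overhead):

```python
def transfer_seconds(size_mb: float, rate_mbit_s: float) -> float:
    """Time in seconds to move size_mb megabytes at rate_mbit_s megabits/s."""
    return size_mb * 8 / rate_mbit_s

# A 700 MB file over ADSL2+:
down = transfer_seconds(700, 24.0)  # ~233 s (about 4 minutes)
up = transfer_seconds(700, 1.0)     # 5600 s (about 93 minutes)
print(f"download: {down:.0f} s, upload: {up:.0f} s, ratio: {up / down:.0f}x")
```

The same file takes 24 times longer to send than to receive, which is exactly the ratio of the two line rates.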

ADSL Standard              Common Name         Downstream rate   Upstream rate
ANSI T1.413-1998 Issue 2   ADSL                8 Mbit/s          1.0 Mbit/s
ITU G.992.1                ADSL (G.DMT)        8 Mbit/s          1.0 Mbit/s
ITU G.992.1 Annex A        ADSL over POTS      8 Mbit/s          1.0 Mbit/s
ITU G.992.1 Annex B        ADSL over ISDN      8 Mbit/s          1.0 Mbit/s
ITU G.992.2                ADSL Lite (G.Lite)  1.5 Mbit/s        0.5 Mbit/s
ITU G.992.3/4              ADSL2               12 Mbit/s         1.0 Mbit/s
ITU G.992.3/4 Annex J      ADSL2               12 Mbit/s         3.5 Mbit/s
ITU G.992.3/4 Annex L      RE-ADSL2            5 Mbit/s          0.8 Mbit/s
ITU G.992.5                ADSL2+              24 Mbit/s         1.0 Mbit/s
ITU G.992.5 Annex L        RE-ADSL2+           24 Mbit/s         1.0 Mbit/s
ITU G.992.5 Annex M        ADSL2+              24 Mbit/s         3.5 Mbit/s

The downstream and upstream rates displayed in the above table are theoretical maximums. The actual data rates achieved in practice depend on the distance between the DSLAM (in the telephone exchange) and the customer's premises, the gauge of the POTS cabling and the presence of induced noise or interference.

Broadband is generally defined as a connection which is faster than 128 kbit/s (kilobits per second).

Voice-grade telephony uses a bandwidth of 300Hz to 3.4kHz. The sub 300Hz bandwidth can be used for alarm-system data-transfer/monitoring. Bandwidth above 3.4kHz can be used to carry ADSL traffic.

Analogue voice circuits have a nominal 600 ohms impedance at the VF frequency range but exhibit an impedance of around 100 ohms at the frequency range used by ADSL.

DMT (Discrete MultiTone) modulation technology is used to superimpose the ADSL bandwidth on top of the telephony bandwidth. ADSL typically uses frequencies between 25 kHz and around 1.1 MHz. The lower part of the ADSL spectrum is used for upstream transmission (from the customer) and the upper part of the spectrum for downstream transmission (towards the customer).

The ADSL standard allows for several spectrum divisions, but the upstream band is typically from 25 to 200 kHz and the downstream band typically from 200 kHz to 1.1 MHz. In an FDM (Frequency Division Multiplexed) system, different frequency ranges are used for upstream and downstream traffic. Echo-cancelled ADSL allows the downstream band to overlap the upstream band, significantly extending the available downstream bandwidth, and extends the upstream band to provide faster upstream data rates.

POTS/ADSL spectrum allocation is represented in the following diagram.


A DSLAM (Digital Subscriber Line Access Multiplexer) is installed at the telephone exchange; it contains a modem for each customer plus network interface equipment. A POTS splitter rack is used to separate voice traffic and data traffic on the customer's telephone line.

ADSL filters and filter/splitters are used in the customer's premises to separate ADSL data from analogue speech signals and prevent interference between the two types of service. It's important to check the specifications of the filters and filter/splitters you use to ensure that effective filtering, equipment isolation and protection are achieved.

The ADSL standard (G.99x.x series) covers several xDSL systems, protocols and tests. They encompass a framework for operation with individual networks and providers free to adapt their system within the framework guidelines. The standards provide the boundaries for equipment manufacturers.

ADSL Physical (PHY) Layer Parameters

Downstream
  Overall symbol rate                  4 kHz
  Carriers per DMT symbol              256
  Subcarrier spacing                   4.3125 kHz
  Cyclic prefix length                 32 samples
  Operational modes                    FDM or echo-cancelled
  FDM mode frequency range             64 to 1100 kHz
  Echo-cancelled mode frequency range  13 to 1100 kHz
  Bits assigned per subcarrier         0 to 15 (no bits assigned to 64k QAM)*
  Synchronisation                      Pilot tone at subcarrier 64, f = 276 kHz

Upstream
  Subcarriers per DMT symbol           32
  Cyclic prefix length                 4 samples
  FDM mode frequency range             11 to 43 kHz
  Echo-cancelled mode frequency range  11 to 275 kHz
  Synchronisation                      Pilot tone at subcarrier 16, f = 69 kHz
  Handshake/initialisation             Per G.994.1

* The lowest three to six subcarriers are set to a gain of "0" (turned off) to permit the simultaneous operation of a POTS service, provided that a filter/splitter is installed at the customer's premises telephone line entry point.
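The figures above are mutually consistent: with a subcarrier spacing of 4.3125 kHz, subcarrier n sits at n × 4.3125 kHz, which a quick Python check confirms:

```python
SPACING_KHZ = 4.3125  # DMT subcarrier spacing from the table above

def subcarrier_khz(n: int) -> float:
    """Centre frequency (kHz) of DMT subcarrier n."""
    return n * SPACING_KHZ

print(subcarrier_khz(256))  # 1104.0 -> top of the downstream band (~1.1 MHz)
print(subcarrier_khz(64))   # 276.0  -> downstream pilot tone
print(subcarrier_khz(16))   # 69.0   -> upstream pilot tone
```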

pcbuzzcenter : Diskless iCafe LANshop

Brand New PC Buzz Internet Cafe (AMD) *Diskless* 30 Units - 1 Timer - 1 Server
Brand new with factory warranty
Price: factory price
Location: Quezon City
The current setup is enough to run the latest games at modest settings.
We only provide top quality brands and models suited for Internet Cafe and Lanshops.
Gskill 1600 RAMs are well suited to improve gaming performance.
Spec:
Client Specifications: (30 Units)
AMD Trinity A6-5400K Processor
4GB Gskill Ripjaws X DDR3-1600 RAM
Gigabyte F2A55M DS2 Motherboard
18.5 LED Philips or AOC Monitor
Core Elite Casing w/ 600 Watts PSU
Genius Keyboard and Mouse Combo PS2
Soncm Headset w/ Mic
Timer Specifications: (1 Unit)
AMD Trinity A6-5400K Processor
4GB Gskill Ripjaws X DDR3-1600 RAM
Gigabyte F2A55M DS2 Motherboard
18.5 LED Philips or AOC Monitor
Western Digital 500GB SATA 3.0 HDD
Core Elite Casing w/ 600 Watts PSU
HP DVD-RW Drive
Genius Keyboard and Mouse Combo PS2
Fortress USB Speakers
Server Specifications: (1 Unit)
AMD Trinity A8-5600K Processor
16GB Gskill Ripjaws X DDR3-1600 Dual Channel RAM
Gigabyte F2A55M DS2 Motherboard
ADATA or Crucial 128GB SSD (Server / Client OS)
Western Digital 1TB Black SATA (Game Disk)
ADATA or Crucial 128GB SSD (Write Back)
Aerocool VS3 Casing
Corsair VS 550 Watts PSU
Genius Keyboard and Mouse Combo PS2
Bosline 650VA UPS
Network:
(2) TP Link 24-Port Gigabit Switch
DLink Original CAT5E Cable 305 Meters *Boxed*
80 Pieces RJ45 *Free*
Optional and Add Ons:
TP-Link 300 Mbps Wireless N Router
Epson L210 (All In One) w/ CISS
(2) Broadcom Server LAN Card
CC Boot License Only
(Configure Your Own Diskless Setup, Tweakable and Legitimate Diskless Program)
QQ Diskless Setup and Service
(Network Cable Crimping, Server Imaging, Timer Setup and Overall Client Setup)
ALL parts are top-quality branded
ADD on:
Windows 7 license, Antivirus license, Microsoft Office license
We can deliver or you can pick up at our warehouse
We can custom-build your PC depending on your specs
We can give a quotation for your PC spec
For more info you can contact me Nino
Sun 09331998650 call, txt, apps available Viber, Line, Wechat, Tango
Globe 09054992358 call and txt
Smart 09982582976 call and txt
Facebook PC buzz
Email or ym: buzzpc@yahoo.com.ph

WDS Overview : Wireless Distribution System

A wireless distribution system (WDS) is a system enabling the wireless interconnection of access points in an IEEE 802.11 network. It allows a wireless network to be expanded using multiple access points without the traditional requirement for a wired backbone to link them. The notable advantage of WDS over other solutions is it preserves the MAC addresses of client frames across links between access points.


An access point can be either a main, relay, or remote base station.
  • A main base station is typically connected to the (wired) Ethernet.
  • A relay base station relays data between remote base stations, wireless clients, or other relay stations; to either a main, or another relay base station.
  • A remote base station accepts connections from wireless clients and passes them on to relay stations or to main stations. Connections between "clients" are made using MAC addresses.
All base stations in a wireless distribution system must be configured to use the same radio channel, method of encryption (none, WEP, WPA or WPA2) and the same encryption keys. They may, however, be configured with different service set identifiers (SSIDs). WDS also requires every base station to be configured to forward to others in the system.

WDS may also be considered a repeater mode because it appears to bridge and accept wireless clients at the same time (unlike traditional bridging). However, with the repeater method, throughput is halved for all clients connected wirelessly. This is because Wi-Fi is an inherently half-duplex medium, and therefore any Wi-Fi device functioning as a repeater must use the store-and-forward method of communication.

WDS may be incompatible between different products (even occasionally from the same vendor) since the IEEE 802.11-1999 standard does not define how to construct any such implementations or how stations interact to arrange for exchanging frames of this format. The IEEE 802.11-1999 standard merely defines the 4-address frame format that makes it possible.

Technical

WDS may provide two modes of access point-to-access point (AP-to-AP) connectivity:
  • Wireless bridging, in which WDS APs (AP-to-AP on sitecom routers AP) communicate only with each other and don't allow wireless stations (STA) (also known as wireless clients) to access them
  • Wireless repeating, in which APs (WDS on sitecom routers) communicate with each other and with wireless STAs
Two disadvantages to using WDS are:
  • The maximum wireless effective throughput may be halved after the first retransmission (hop). For example, suppose two APs are connected via WDS, and a computer plugged into the Ethernet port of AP A communicates with a laptop connected wirelessly to AP B. The throughput is halved, because AP B has to retransmit the information during the communication between the two sides. However, between a computer plugged into the Ethernet port of AP A and a computer plugged into the Ethernet port of AP B, the throughput is not halved, since there is no need to retransmit the information. Dual-band/radio APs may avoid this problem by connecting to clients on one band/radio and making the WDS network link with the other.
  • Dynamically assigned and rotated encryption keys are usually not supported in a WDS connection. This means that dynamic Wi-Fi Protected Access (WPA) and other dynamic key assignment technology in most cases cannot be used, though WPA using pre-shared keys is possible. This is due to the lack of standardization in this field, which may be resolved with the upcoming 802.11s standard. As a result only static WEP or WPA keys may be used in a WDS connection, including any STAs that associate to a WDS repeating AP.
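The throughput cost of repeating can be sketched numerically; in an idealised model (ignoring protocol overhead), each wireless store-and-forward hop halves the airtime available to a client:

```python
def effective_throughput(link_rate_mbit: float, wireless_hops: int) -> float:
    """Idealised client throughput after each half-duplex WDS repeater
    retransmission halves the available airtime."""
    return link_rate_mbit / (2 ** wireless_hops)

# Nominal 54 Mbit/s 802.11g link:
print(effective_throughput(54.0, 0))  # 54.0 - client on the root AP
print(effective_throughput(54.0, 1))  # 27.0 - one WDS repeater in the path
print(effective_throughput(54.0, 2))  # 13.5 - two repeaters in the path
```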
OpenWRT, a universal third party router firmware, supports WDS with WPA-PSK, WPA2-PSK, WPA-PSK/WPA2-PSK Mixed-Mode encryption modes. Recent Apple base stations allow WDS with WPA, though in some cases firmware updates are required. Firmware for the Renasis SAP36g super access point and most third party firmware for the Linksys WRT54G(S)/GL support AES encryption using WPA2-PSK mixed-mode security, and TKIP encryption using WPA-PSK, while operating in WDS mode. However, this mode may not be compatible with other units running stock or alternate firmware.
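On OpenWRT, for example, WDS with WPA2-PSK is enabled per wireless interface in /etc/config/wireless, roughly as below (the SSID and key are placeholders):

```
config wifi-iface
        option device     'radio0'
        option mode       'ap'
        option ssid       'ExampleNet'          # placeholder SSID
        option encryption 'psk2'                # WPA2-PSK
        option key        'example-passphrase'  # placeholder key
        option wds        '1'                   # enable 4-address (WDS) frames
```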

Dec 10, 2014

Port Switching using Switch Chip on RouterOS

Switch Chip features are implemented in RouterOS (complete set of features implemented starting from version v4.0).

Command line config is located under /interface ethernet switch menu.
This menu contains a list of all switch chips present in the system, as well as some sub-menus.

/interface ethernet switch print
Flags: I - invalid
 #   NAME     TYPE         MIRROR-SOURCE   MIRROR-TARGET
 0   switch1  Atheros-8316 ether2          none

Port Switching
The switching feature allows wire-speed traffic to pass among a group of ports, as if the ports were a regular Layer 2 Ethernet switch.
This feature is configured by setting a master-port property on one or more ports in the /interface ethernet menu.
The master-port will be the port through which RouterOS communicates with all ports in the group.
Interfaces for which a master-port is specified become inactive – no traffic is received on them and no traffic can be sent out.

For example consider a router with five ethernet interfaces:

/interface ethernet print
Flags: X - disabled, R - running, S - slave
 #    NAME    MTU   MAC-ADDRESS       ARP      MASTER-PORT SWITCH
 0 R  ether1  1500  XX:XX:XX:XX:XX:AB enabled
 1    ether2  1500  XX:XX:XX:XX:XX:AC enabled  none        switch1
 2    ether3  1500  XX:XX:XX:XX:XX:AD enabled  none        switch1
 3    ether4  1500  XX:XX:XX:XX:XX:AE enabled  none        switch1
 4 R  ether5  1500  XX:XX:XX:XX:XX:AF enabled  none        switch1

Configuring a switch containing three ports: ether3, ether4 and ether5.
ether3 is now the master-port of the group.

/interface ethernet set ether4,ether5 master-port=ether3
 
/interface ethernet print
Flags: X - disabled, R - running, S - slave
 #    NAME    MTU   MAC-ADDRESS       ARP      MASTER-PORT SWITCH
 0 R  ether1  1500  XX:XX:XX:XX:XX:AB enabled
 1    ether2  1500  XX:XX:XX:XX:XX:AC enabled  none        switch1
 2 R  ether3  1500  XX:XX:XX:XX:XX:AD enabled  none        switch1
 3  S ether4  1500  XX:XX:XX:XX:XX:AE enabled  ether3      switch1
 4 RS ether5  1500  XX:XX:XX:XX:XX:AF enabled  ether3      switch1

Note: previously a link was detected only on ether5 (R flag); as ether3 becomes the master-port, the running flag of the slaves is propagated to their master-port.



A packet received by one of the ports always passes through the switch logic first. The switch logic decides to which ports the packet should go. Passing the packet up to RouterOS is also called sending it to the switch chip's CPU port.

That means that only when the switch forwards a packet to the CPU port does RouterOS start processing it as an incoming packet on some interface. As long as a packet does not have to go to the CPU port, it is handled entirely by the switch logic, requires no CPU cycles, and happens at wire speed for any frame size.
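Conversely, a port can be taken back out of the switch group so that its traffic is handled by RouterOS again. A minimal sketch, reusing the interface names from the example above:

```
# Remove ether4 and ether5 from the switch group; they become
# independent routed interfaces handled by the RouterOS CPU again
/interface ethernet set ether4,ether5 master-port=none
```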

Interface Bonding 802.3ad (LACP) with Mikrotik and Cisco

Bonding (also called port trunking or link aggregation) can be configured quite easily on RouterOS-Based devices.

With 2 NICs (ether1 and ether2) in each router (Router1 and Router2), it is possible to maximize the data rate between the 2 routers by aggregating port bandwidth.

To add a bonding interface on Router1 and Router2:

/interface bonding add slaves=ether1,ether2

(bonding interface needs a couple of seconds to get connectivity with its peer)

Link Monitoring:
Currently bonding in RouterOS supports two schemes for monitoring the link state of slave devices: MII and ARP monitoring. It is not possible to use both methods at the same time due to restrictions in the bonding driver.

ARP Monitoring:
ARP monitoring sends ARP queries and uses the responses as an indication that the link is operational. This also gives assurance that traffic is actually flowing over the links. If the balance-rr or balance-xor mode is set, the switch should be configured to evenly distribute packets across all links; otherwise all replies from the ARP targets will be received on the same link, which could cause the other links to fail. ARP monitoring is enabled by setting three properties: link-monitoring, arp-ip-targets and arp-interval. The meaning of each option is described later in this article. It is possible to specify multiple ARP targets, which can be useful in High Availability setups. If only one target is set, that target itself may go down; having additional targets increases the reliability of the ARP monitoring.
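As a sketch (the bonding interface name and target addresses here are assumptions, not defaults), ARP monitoring could be enabled like this:

```
# Monitor bond slaves by querying two ARP targets every 100 ms
/interface bonding set bond1 link-monitoring=arp \
   arp-interval=100ms arp-ip-targets=10.0.0.1,10.0.0.2
```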

MII Monitoring:
MII monitoring monitors only the state of the local interface. In RouterOS it is possible to configure MII monitoring in two ways:

MII Type 1: the device driver determines whether the link is up or down. If the device driver does not support this option, the link will appear as always up.
MII Type 2: deprecated calling sequences within the kernel are used to determine whether the link is up. This method is less efficient but can be used on all devices. This mode should be set only if MII Type 1 is not supported.

The main disadvantage is that MII monitoring can't tell whether the link can actually pass packets, even when the link is detected as up.

MII monitoring is configured by setting the desired link-monitoring mode and mii-interval.
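A minimal sketch, again assuming a bonding interface named bond1:

```
# Use MII type 1 link monitoring, polled every 100 ms
/interface bonding set bond1 link-monitoring=mii-type1 mii-interval=100ms
```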

Configuration Example: 802.3ad (LACP) with Cisco Catalyst GigabitEthernet Connection.

/interface bonding add slaves=ether1,ether2 \
   mode=802.3ad lacp-rate=30secs \
   link-monitoring=mii-type1 \
   transmit-hash-policy=layer-2-and-3


Configuration for the other side (assuming the aggregation switch is a Cisco device, usable in an EtherChannel / L3 environment):

!
interface range GigabitEthernet 0/1-2
   channel-protocol lacp
   channel-group 1 mode active
!
interface PortChannel 1
   no switchport
   ip address XXX.XXX.XXX.XXX XXX.XXX.XXX.XXX
!

Or for EtherChannel / L2 environment:

!
interface range GigabitEthernet 0/1-2
   channel-protocol lacp
   channel-group 1 mode active
!
interface PortChannel 1
   switchport
   switchport mode access
   switchport access vlan XX
!

Ethernet bonding with Linux and 802.3ad

Nowadays, most desktop mainboards provide more than one gigabit Ethernet port. When both are connected to the same switch, most Linux distros by default assign an individual IP to each device and route traffic only over the primary device (based on device metric) or round-robin. A single connection always originates from one IP, so all of its traffic goes through one device, limiting the maximum bandwidth to 1 Gbit/s.

This is where bonding (sometimes called (port) trunking or link aggregation) comes into play. It combines two or more Ethernet ports into one virtual port with a single MAC and thus, usually, one IP address. Whereas earlier only two hosts (running the same OS) or two switches (from the same vendor) could be connected this way, nowadays there is a standard protocol which makes it easy: LACP, part of IEEE 802.3ad. Linux supports different bonding mechanisms, including 802.3ad. To enable bonding at all, some kernel settings are needed:

Device Drivers  --->
[*] Network device support  --->
<*>   Bonding driver support

After compiling and rebooting, we need a userspace tool to configure the virtual interface. It's called ifenslave and is shipped with the Linux kernel sources. You can either compile it by hand

cd /usr/src/linux/Documentation/networking
gcc -Wall -O -I/usr/src/linux/include ifenslave.c -o ifenslave
cp ifenslave /sbin/ifenslave

or install it by emerge if you run Gentoo Linux:

emerge -va ifenslave

Now we can configure the bonding device, called bond0. First of all, we need to set the 802.3ad mode and the MII link monitoring frequency:

echo "802.3ad" > /sys/class/net/bond0/bonding/mode
echo 100 >/sys/class/net/bond0/bonding/miimon

Now we can bring up the device and add some Ethernet ports:

ifconfig bond0 up
ifenslave bond0 eth0
ifenslave bond0 eth1

Now bond0 is ready to be used. Run a dhcp client or set an IP by

ifconfig bond0 192.168.1.2 netmask 255.255.255.0

These steps are needed on each reboot. If you're running Gentoo, you can use baselayout for this. Add

config_eth0=( "none" )
config_eth1=( "none" )
preup() {
 # Adjusting the bonding mode / MII monitor
 # Possible modes are : 0, 1, 2, 3, 4, 5, 6,
 #     OR
 #   balance-rr, active-backup, balance-xor, broadcast,
 #   802.3ad, balance-tlb, balance-alb
 # MII monitor time interval typically: 100 milliseconds
 if [[ ${IFACE} == "bond0" ]] ; then
  BOND_MODE="802.3ad"
  BOND_MIIMON="100"
  echo ${BOND_MODE} >/sys/class/net/bond0/bonding/mode
  echo ${BOND_MIIMON}  >/sys/class/net/bond0/bonding/miimon
  einfo "Bonding mode is set to ${BOND_MODE} on ${IFACE}"
  einfo "MII monitor interval is set to ${BOND_MIIMON} ms on ${IFACE}"
 else
  einfo "Doing nothing on ${IFACE}"
 fi
 return 0
}
slaves_bond0="eth0 eth1"
config_bond0=( "dhcp" )

to your /etc/conf.d/net. I found this nice preup part in the Gentoo Wiki Archive.
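On distributions with a modern iproute2, the same bond can be sketched without ifenslave/ifconfig (interface names are assumptions; commands require root):

```
# Create the bond in 802.3ad mode with 100 ms MII monitoring
ip link add bond0 type bond mode 802.3ad miimon 100
# Slaves must be down before they can be enslaved
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
# Bring the bond up and assign an address
ip link set bond0 up
ip addr add 192.168.1.2/24 dev bond0
```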

Now you have to configure the other side of the link. You can either use a Linux box configured the same way, or an 802.3ad-capable switch. I used an HP ProCurve 1800-24G switch. You have to enable LACP on the ports you're connected to:


Now everything should work and you can enjoy a 2 Gbit/s (or more) link. Further details can be found in the kernel documentation.

EtherChannel vs LACP vs PAgP

What is EtherChannel?

An EtherChannel is formed when two or more links are bundled together to aggregate the available bandwidth and provide a measure of physical redundancy. Without EtherChannel, only one link is available while the rest of the links are disabled by STP to prevent a loop.
P/S: EtherChannel is a term normally used by Cisco; other vendors might call this by a different name such as port trunking, trunking (not to be confused with Cisco's trunk port definition), bonding, teaming, aggregation, etc.


What is LACP?

A standards-based negotiation protocol, known as IEEE 802.1ax Link Aggregation Control Protocol, is simply a way to dynamically build an EtherChannel. Essentially, the "active" end of the LACP group sends out special frames advertising the ability and desire to form an EtherChannel. It's possible, and quite common, that both ends are set to an "active" state (versus a passive state). Once these frames are exchanged, and if the ports on both sides agree that they support the requirements, LACP will form an EtherChannel.

What is PAgP?

Cisco's proprietary negotiation protocol, used before LACP was introduced and endorsed by the IEEE. EtherChannel technology was invented in the early 1990s by Kalpana, a company Cisco Systems acquired in 1994. In 2000 the IEEE passed 802.3ad (LACP), which is an open-standard version of EtherChannel.

EtherChannel Negotiation

An EtherChannel can be established using one of three mechanisms:
  • PAgP - Cisco’s proprietary negotiation protocol
  • LACP (IEEE 802.3ad) – Standards-based negotiation protocol
  • Static Persistence (“On”) – No negotiation protocol is used

Any of these three mechanisms will suffice for most scenarios; however, the choice does deserve some consideration. PAgP, while perfectly able, should probably be disqualified as a legacy proprietary protocol unless you have a specific need for it (such as ancient hardware). That leaves LACP and "on", both of which have a specific benefit.

PAgP/LACP Advantages over Static

a) Prevent Network Error

LACP helps protect against switching loops caused by misconfiguration; when enabled, an EtherChannel will only be formed after successful negotiation between its two ends. However, this negotiation introduces an overhead and delay in initialization. Statically configuring an EtherChannel (“on”) imposes no delay yet can cause serious problems if not properly configured at both ends.

b) Hot-Standby Ports

If you add more than the supported number of ports to an LACP port channel, it has the ability to place these extra ports into a hot-standby mode. If a failure occurs on an active port, the hot-standby port can replace it.

c) Failover

If there is a dumb device sitting in between the two end points of an EtherChannel, such as a media converter, and a single link fails, LACP will adapt by no longer sending traffic down this dead link. Static doesn’t monitor this. This is not typically the case for most vSphere environments I’ve seen, but it may be of an advantage in some scenarios.

d) Configuration Confirmation

LACP won’t form if there is an issue with either end or a problem with configuration. This helps ensure things are working properly. Static will form without any verification, so you have to make sure things are good to go.

To configure an EtherChannel using LACP negotiation, each side must be set to either active or passive; only interfaces configured in active mode will attempt to negotiate an EtherChannel. Passive interfaces merely respond to LACP requests. PAgP behaves the same, but its two modes are referred to as desirable and auto.
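The negotiation rules above can be illustrated with a small model (this is our own toy code, not anything from Cisco): a channel forms only when both ends speak the same mechanism and, for LACP/PAgP, at least one end actively initiates.

```python
# Toy model of EtherChannel negotiation outcomes.
# LACP modes: active/passive; PAgP modes: desirable/auto; "on" is static.
PROTO = {"active": "lacp", "passive": "lacp",
         "desirable": "pagp", "auto": "pagp", "on": "static"}
INITIATOR = {"active", "desirable"}   # modes that send negotiation frames

def channel_forms(a: str, b: str) -> bool:
    if PROTO[a] != PROTO[b]:
        return False                  # mismatched mechanisms never bundle
    if PROTO[a] == "static":
        return True                   # "on" + "on": no negotiation at all
    return bool({a, b} & INITIATOR)   # at least one end must initiate
```

For example, active+passive and desirable+auto form a channel, while passive+passive (or auto+auto) never negotiates one.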


3750X(config-if)#channel-group 1 mode ?
  active     Enable LACP unconditionally
  auto       Enable PAgP only if a PAgP device is detected
  desirable  Enable PAgP unconditionally
  on         Enable Etherchannel only
  passive    Enable LACP only if a LACP device is detected

Conclusion

EtherChannel/port trunking/link bundling/bonding/teaming all describe combining multiple network interfaces.
PAgP/LACP is just a protocol used to form the EtherChannel link. You can have an EtherChannel without a negotiation protocol, but it is not advisable.

Sources:

http://en.wikipedia.org/wiki/EtherChannel
http://packetlife.net/blog/2010/jan/18/etherchannel-considerations/
http://wahlnetwork.com/2012/05/09/demystifying-lacp-vs-static-etherchannel-for-vsphere/

Dec 9, 2014

VDSL2 vectoring explained

Several system vendors including Adtran, Alcatel-Lucent and ZTE have announced vectoring technology that boosts the performance of very-high-bit-rate digital subscriber line (VDSL2) broadband access technology. Vectoring is used to counter crosstalk - signal leakage between the telephony twisted wire pairs that curtails VDSL2's bit rate performance – as is now explained.

Technology briefing

There is a large uncertainty in the resulting VDSL2 bit rate for a given loop length. With vectoring this uncertainty is almost removed

Paul Spruyt, Alcatel-Lucent

Two key characteristics of the local loop limit the performance of digital subscriber line (DSL) technology: signal attenuation and crosstalk.

Attenuation is due to the limited spectrum of the telephone twisted pair, designed for low frequency voice calls not high-speed data transmission. Analogue telephony uses only 4kHz of spectrum, whereas ADSL uses 1.1MHz and ADSL2+ 2.2MHz. The even higher speed VDSL2 has several flavours: 8b is 8.5MHz, 17a is 17.6MHz while 30a spans 30MHz.

The higher frequencies induce greater attenuation and hence the wider the spectrum, the shorter the copper loop length over which data can be sent. This is why higher speed VDSL2 technology requires the central office or, more commonly, the cabinet to be closer to the user, up to 2.5km away - although in most cases VDSL2 is deployed on loops shorter than 1.5km.

The second effect, crosstalk, describes the leakage of the signal in a copper pair into neighbouring pairs. “All my neighbours get a little bit of the signal sent on my pair, and vice versa: the signal I receive is not only the useful signal transmitted on my pair but also noise, the contributed components from all my active VDSL2 neighbours,” says Paul Spruyt, xDSL technology strategist at Alcatel-Lucent.

Typically, a cable bundle comprises several tens to several hundreds of copper pairs. The signal-to-noise ratio on each pair dictates the overall achievable data rate to the user, and on short loops crosstalk is the main noise culprit.

Vectoring boosts VDSL2 data rates to some 100 megabits-per-second (Mbps) downstream and 40Mbps upstream over 400m. This compares to 50Mbps and 20Mbps, respectively, without vectoring. There is a large uncertainty in the resulting VDSL2 bit rate for a given loop length. "With vectoring this uncertainty is almost removed," says Spruyt.


Vectoring

The term vectoring refers to the digital signal processing (DSP) computations involved to cancel the crosstalk. The computation involves multiplying pre-coder matrices with Nx1 data sets – or vectors – representing the transmit signals.

The crosstalk coupling into each VDSL2 line is measured and used to generate an anti-noise signal in the DSLAM to null the crosstalk on each line.

To calculate the crosstalk coupling between the pairs in the cable bundle, use is made of the 'sync' symbol that is sent after every 256 data symbols, equating to a sync symbol every 64ms, or about 16 per second.

Each sync symbol is modulated with one bit of a pilot sequence. The length of the pilot sequence is dependent on the number of VDSL2 lines in the vectoring group. In a system with 192 VDSL2 lines, 256-bit-long pilot sequences are used (the next highest power of two).
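The sync-symbol timing and pilot-sequence sizing above can be checked with a few lines of arithmetic (assuming the standard VDSL2 rate of 4000 DMT symbols per second):

```python
SYMBOL_RATE = 4000                         # DMT symbols per second in VDSL2

# One sync symbol per 256 data symbols
sync_period_ms = 256 / SYMBOL_RATE * 1000  # 64.0 ms between sync symbols
syncs_per_second = 1000 / sync_period_ms   # ~15.6, i.e. about 16 per second

def pilot_length(lines: int) -> int:
    """Pilot sequences use the next power of two >= the number of lines."""
    p = 1
    while p < lines:
        p *= 2
    return p

# 192 vectored lines -> 256-bit pilot sequences
```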

Moreover, each twisted pair is assigned a unique pilot sequence, with the pilots usually chosen such that they are mutually orthogonal. "If you take two orthogonal pilot sequences, multiply them bit-wise and take the average, you always find zero," says Spruyt. "This characteristic speeds up and simplifies the crosstalk estimation."
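The zero-average property Spruyt describes is easy to verify with two short ±1 sequences (a toy 4-bit example using rows of a Hadamard matrix, far shorter than the 256-bit pilots actually used):

```python
# Two mutually orthogonal +-1 pilot sequences (rows of a 4x4 Hadamard matrix)
p1 = [1,  1,  1,  1]
p2 = [1, -1,  1, -1]

# Bit-wise product, then average: orthogonal sequences average to zero,
# while a sequence correlated with itself averages to one
product = [a * b for a, b in zip(p1, p2)]
average = sum(product) / len(product)
self_corr = sum(a * b for a, b in zip(p1, p1)) / len(p1)
```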

A user's DSL modem expects to see the modulated sync symbol, but in reality sees a modulated sync symbol distorted with crosstalk from the modulated sync symbols transmitted on the neighbouring lines. The modem measures the error – the crosstalk – and sends it back to the DSLAM. The DSLAM correlates the received error values on the ‘victim’ line with the pilot sequences transmitted on all other ‘disturber’ lines. By doing this, the DSLAM gets a measure of the crosstalk coupling for every disturber – victim pair.

The final step is the generation of anti-noise within the DSLAM.

This anti-noise is injected into the victim line on top of the transmit signal such that it cancels the crosstalk signal picked up over the telephone pair. This process is repeated for each line.

VDSL2 uses discrete multi-tone (DMT) modulation where each DMT symbol consists of 4096 tones, split between the upstream (from the DSL modem to the DSLAM) and the downstream (to the user) transmissions. All tones are processed independently in the frequency domain. The resulting frequency domain signal including the anti-noise is converted back to the time domain using an inverse fast Fourier transform.

The above describes the crosstalk pre-compensation or pre-coding in the downstream direction: anti-noise signals are generated and injected in the DSLAM prior to transmission of the signal on the line.
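In matrix notation (a standard linear-precoding formulation; the symbols here are ours, not the article's), the downstream pre-compensation on one tone can be sketched as:

```latex
% Received vector over N pairs: y = H x + n, where H_{ii} is the direct
% channel of line i and H_{ij} (i \neq j) the crosstalk coupling.
% The DSLAM transmits the precoded vector
%   \tilde{x} = P x, \qquad P = H^{-1}\,\mathrm{diag}(H),
% so that the users receive
%   y = H P x + n = \mathrm{diag}(H)\, x + n,
% i.e. each modem sees only its own (attenuated) signal plus noise.
```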

For the upstream, the inverse occurs: the DSLAM generates and adds the anti-noise after reception of the signal distorted with crosstalk. This technique is known as post-compensation or post-coding. In this case the DSL modem sends the pilot modulated sync symbols and the DSLAM measures the error signal and performs the correlations and anti-noise calculations.



Challenges

One key challenge is the amount of computations to be performed in real-time. For a fully-vectored 200-line VDSL2 system, some 2,600 billion multiply-accumulates per second - 2.6TMAC/s - need to be calculated. A system of 400 lines would require four times as much processing power, about 10TMAC/s.
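The 2.6 TMAC/s figure can be plausibly reconstructed (our own arithmetic, not Alcatel-Lucent's published breakdown) by assuming a full N x N precoder applied to every tone of every DMT symbol, with one complex MAC counted as four real multiply-accumulates:

```python
LINES = 200
TONES = 4096            # DMT tones per VDSL2 symbol
SYMBOL_RATE = 4000      # DMT symbols per second
REAL_MACS_PER_COMPLEX = 4

# Full NxN precoding: N^2 complex MACs per tone, per symbol
macs = LINES**2 * TONES * SYMBOL_RATE * REAL_MACS_PER_COMPLEX
# ~2.6e12 real MAC/s; doubling the line count quadruples the load (N^2)
```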

Alcatel-Lucent's first-generation vectoring system, released at the end of 2011, could process 192 lines. At the recent Broadband World Forum show in October, Alcatel-Lucent unveiled its second-generation system, which doubles the capacity to 384 lines.

For larger cable bundles, the crosstalk contributions from certain more distant disturbers to a victim line are negligible. Also, for large vectoring systems, pairs typically do not stay together in the same cable but get split over multiple smaller cables that do not interfere with each other. “There is a possibility to reduce complexity by sparse matrix computations rather than a full matrix,” says Spruyt, but for smaller systems full matrix computation is preferred as the disturbers can’t be ignored.

There are other challenges.

There is a large amount of data to be transferred within the DSLAM associated with the vectoring. According to Alcatel-Lucent, a 48-port VDSL2 card can generate up to 20 Gigabit-per-second (Gbps) of vectoring data.

There is also the need for strict synchronization – for vectoring to work the DMT symbols of all lines need to be aligned within about 1 microsecond. As such, the clock needs to be distributed with great care across the DSLAM.

Adding or removing a VDSL2 line also must not affect active lines which requires that crosstalk is estimated and cancelled before any damage is done. The same applies when switching off a VDSL2 modem which may affect the terminating impedance of a twisted pair and modify the crosstalk coupling. Hence the crosstalk needs to be monitored in real-time.



Zero touch

A further challenge that operators face when upgrading to vectoring is that not all of the users' VDSL2 modems may support vectoring. Crosstalk from such lines can't be cancelled, which significantly reduces the vectoring benefit for users with vectoring-capable DSL modems on the same cable.

To tackle this, certain legacy VDSL2 modems can be software upgraded to support vectoring. Others, that can't be upgraded to vectoring, can be software upgraded to a ‘vector friendly’ mode. Crosstalk from such a vector friendly line into neighbouring vectored lines can be cancelled, but the ‘friendly’ line itself does not benefit from the vectoring gain.

Upgrading the modem firmware is also a considerable undertaking for the telecom operators especially when it involves tens or hundreds of thousands of modems.

Moreover, not all CPEs can be upgraded to friendly mode. To this end, Alcatel-Lucent has developed a 'zero-touch' approach that allows cancelling the crosstalk from legacy VDSL2 lines into vectored lines without a CPE upgrade. "This significantly facilitates and speeds up the roll-out of vectoring," says Spruyt.

How-To Configure NIC Teaming on Windows for HP Proliant Server

NIC teaming means grouping two or more physical NICs (network interface controller cards) so that they act as a single NIC; you may call it a virtual NIC. The minimum number of NICs that can be grouped (teamed) is two and the maximum is eight.

HP servers are equipped with redundant power supplies, fans, hard drives (RAID), etc. With redundant hardware components installed in the same server, the server remains available to its users even if one of these components fails. In a similar manner, by configuring NIC teaming (network teaming), we can achieve network fault tolerance and load balancing on your HP ProLiant server.

HP ProLiant Network Adapter Teaming (NIC teaming) allows a server administrator to configure network adapter, port, network cable and switch level redundancy and fault tolerance. Server NIC teaming also allows receive load balancing and transmit load balancing. Once you configure NIC teaming on a server, server connectivity will not be affected when a network adapter fails, a network cable is disconnected or a switch failure happens.

To create a NIC team on the Windows 2008/2003 operating systems, we need to use the HP Network Configuration Utility. This utility is available for download on the Driver & Download page for your HP server (HP.com). Please install the latest version of the network card drivers before you install the HP Network Configuration Utility. On Linux, teaming (NIC bonding) is available natively and there is no HP tool needed to configure it. This article will focus only on Windows-based NIC teaming.

HP Network Configuration Utility (HP NCU) is a very easy-to-use tool available for Windows Operating System. HP NCU allows you to configure different types of Network Team, here are the few: 

1. Network Fault Tolerance Only (NFT)
2. Network Fault Tolerance Only with Preference Order
3. Transmit Load Balancing with Fault Tolerance (TLB)
4. Transmit Load Balancing with Fault Tolerance and Preference Order
5. Switch-assisted Load Balancing with Fault Tolerance (SLB)
6. 802.3ad Dynamic with Fault Tolerance

Network Fault Tolerance Only (NFT)

In an NFT team, you can group two to eight NIC ports that act as one virtual network adapter. In NFT, only one NIC port transmits and receives data; it is called the primary NIC. The remaining adapters are non-primary and do not participate in receiving or transmitting data. So if you group 8 NICs into an NFT team, only 1 NIC will transmit and receive data while the remaining 7 NICs stay in standby mode. If the primary NIC fails, the next available NIC becomes primary and continues transmitting and receiving data. NFT supports switch-level redundancy by allowing the teamed ports to be connected to more than one switch in the same LAN.

Network Fault Tolerance Only with Preference Order:

This mode is identical to NFT, however here you can select which NIC is Primary NIC. You can configure NIC Priority in HP Network Configuration Utility. This team type allows System Administrator to prioritize the order in which teamed ports should failover if any Network failure happens. This team supports switch level redundancy.

Transmit Load Balancing with Fault Tolerance (TLB):

TLB supports load balancing for transmitted traffic only. The primary NIC is responsible for receiving all traffic destined for the server, while the remaining adapters participate in transmitting data. Note that the primary NIC both transmits and receives, while the rest of the NICs only transmit. In simpler words, when TLB is configured, all NICs transmit data but only the primary NIC performs both transmit and receive operations. So if you group 8 NICs into a TLB team, 1 NIC will transmit and receive data while the remaining 7 NICs only transmit. TLB supports switch-level redundancy.

Transmit Load Balancing with Fault Tolerance and Preference Order:

This model is identical to TLB, however you can select which one is the Primary NIC. This option will help System Administrator to design network in such a way that one of the teamed NIC port is more preferred than other NIC port in the same team. This model also supports switch level redundancy.

Switch-assisted Load Balancing with Fault Tolerance (SLB):

SLB allows full transmit and receive load balancing: in this team, all the NICs transmit and receive data. So if you group 8 NICs into an SLB team, all 8 NICs will transmit and receive data. However, SLB does not support switch-level redundancy, because all the teamed NIC ports must be connected to the same switch. Please note that SLB is not supported on all switches, as it requires switch support such as EtherChannel, MultiLink Trunking, etc.

802.3ad Dynamic with Fault Tolerance

This team type is identical to SLB except that the switch must support the IEEE 802.3ad Link Aggregation Control Protocol (LACP). The main advantage of 802.3ad is that you do not have to manually configure the channel on your switch. 802.3ad does not support switch-level redundancy but allows full transmit and receive load balancing.

How to team NICs on HP Proliant Server:

To configure NIC teaming on your Windows based HP Proliant Server, you need to download HP Network Configuration Utility (HP NCU). This utility is available for download at HP.com. Once you download and install NCU, please open it. To know how to open NCU on your HP Server, please check my guide provided below.

Guide: Different ways to open HP NCU on your server

If you are using the Windows Server 2012 operating system on your HP server, you cannot use the HP Network Configuration Utility; you need to use the built-in NIC teaming feature of Windows instead. Please check the article linked below about Windows Server 2012 NIC teaming to learn more.

Guide: NIC Teaming in Windows Server 2012

Let us continue with our Windows 2008/2003-based HP NCU. Once you open NCU, you will find all the installed network cards listed in it. As you can see in the screenshot below, we have 4 NICs installed. Here, we will team the first two NICs in NFT mode.

Let’s start

1. The HP Network Configuration Utility Properties window will look like the one shown below.


2. Select 2 NICs by clicking on them, then click the Team button.

3. HP Network Team #1 will be created as shown below.
4. Select HP Network Team #1 and click the Properties button to change the team properties.

5. The Team Properties window will open.

6. Here you can select the type of NIC team you want to implement (see the screenshot below).


7. Here, I will select NFT from the Team Type Selection drop-down list.
8. Click OK once you have selected the desired team type.


9. You will now be at the screen shown below. Click OK to close HP NCU.


10. You will receive a confirmation window prompting you to save changes; click Yes.

11. HP NCU will now configure NIC teaming; the screen may look like the one below.

12. This may take some time; once teaming is done, the window shown below will appear.

13. Open HP NCU and you will find that HP Network Team is shown in green. Congrats!

Windows 7 Link aggregation / NICs Teaming


Intel NIC’s 802.3ad Link Aggregation in Windows 7? – [H]ard|Forum

http://hardforum.com/showthread.php?t=1762818

If anyone else is trying to do this, I figured it out. Follow these directions for Intel NIC’s. The feature is not included in Windows 7, so the NIC drivers have to support it. You have to be logged…


Network Connectivity — How do I use Teaming with Advanced Networking Services (ANS)?

http://www.intel.com/support/network/sb/cs-009747.htm

Adapter teaming with Intel® Advanced Network Services (ANS) uses an intermediate driver to group multiple physical ports. Teaming can be used to add fault tolerance, load balancing, and link…

Working with NIC Teaming in Windows Server 2012

Of the many networking features introduced in Hyper-V 3.0 on Windows Server 2012, several were added to enhance the overall capability for networking virtual machines (VMs). One of the features introduced in Hyper-V 3.0 is a collection of components for configuring NIC teaming on virtual machines and the Windows operating system.

Originally designed for Windows Server 2012, NIC Teaming can also be used to configure teamed adapters for Hyper-V virtual machines. Since our primary focus in this article is to provide an overview of NIC Teaming in Windows Server 2012 and later versions, we will not cover in detail the steps needed to configure NIC Teaming for operating systems and virtual machines.

In earlier versions of Hyper-V (version 1.0 and version 2.0), the Windows operating system did not provide any utility to configure NIC Teaming for physical network adapters, and it was not possible to configure NIC teaming for virtual machines. A Windows administrator could configure NIC teaming on Windows by using third-party utilities but with the following disadvantages:
  • Support was provided by the vendor and not by Microsoft.
  • You could only configure NIC Teaming between physical network adapters of the same manufacturer.
  • There are also separate management UIs for managing each third-party teaming solution if you have configured more than one team.
  • Most of the third-party teaming solutions do not have options for configuring teaming options remotely.
Starting with Hyper-V version 3.0 on Windows Server 2012, you can easily configure NIC Teaming for Virtual Machines.

This article expounds on the following topics:
  • NIC Teaming Requirements for Virtual Machines
  • NIC Teaming Facts and Considerations
  • How NIC Teaming works
NIC Teaming Requirements for Virtual Machines

Before you can configure NIC Teaming for virtual machines, ensure the following requirements are in place:
  • Make sure you are running at least Windows Server 2012 as the guest operating system in the virtual machine.
  • Available physical network adapters that will participate in the NIC Teaming.
  • Identify the VLAN number if the NIC team will need to be configured with a VLAN number.
NIC Teaming Facts and Considerations

It is necessary to follow several guidelines while configuring NIC Teaming, and there are also some facts you should keep in mind that are highlighted in bullet points below:
  • Microsoft implements a protocol called "Microsoft Network Adapter Multiplexor" (explained shortly) that helps in building the NIC Teaming without the use of any third-party utilities.
  • Microsoft's teaming protocol can be used to team network adapters of different vendors.
  • It is recommended to always use the same physical network adapter with the same configuration, including configuration speed, drivers, and other network functionality, when setting up NIC Teaming between two physical network adapters.
  • NIC teaming is a feature of Windows Server, so it can be used for any network traffic, including virtual machine networking traffic.
  • NIC teaming is set up at the hardware level (physical NIC).
  • By default, a Windows Server can team up to 32 physical network adapters.
  • Only two physical network adapters in teaming can be assigned to a virtual machine. In other words, a network teamed adapter cannot be attached to a virtual machine if it contains more than two physical network adapters.
  • NIC Teaming can only be configured if there are two or more 1 GB or two or more 10 GB physical network adapters.
  • Teamed network adapters will appear in the "External Network" configuration page of Virtual Machine settings.
  • NIC Teaming is also referred to as NIC bonding, or load balancing and failover (LBFO).
How Does NIC Teaming Work?

Microsoft designed a new protocol specifically for NIC Teaming. The protocol, known as the Microsoft Network Adapter Multiplexor, routes packets between the physical network adapters and the teamed adapter: it is responsible for diverting traffic from the teamed adapter down to the physical NICs. The protocol is installed by default when a physical network adapter is initialized for the first time.

The Microsoft Network Adapter Multiplexor protocol is checked (enabled) on the teamed network adapter and unchecked on the physical network adapters that are part of the team. For example, if a team contains two physical network adapters, the protocol will be unchecked on those two physical adapters and checked on the teamed adapter, as shown in the screenshot below:


As the screenshot shows, the Microsoft Network Adapter Multiplexor protocol is unchecked in the properties of the physical network adapter named "PNIC5" and checked in the properties of the teamed network adapter "Hyper-VTeaming."

Any network traffic generated from the teamed adapter is received by one of the physical NICs participating in the team; the teamed adapter talks to the Microsoft Network Adapter Multiplexor protocol bound on each physical NIC.

If this protocol is unchecked on one of the physical network adapters, the teamed adapter will no longer be able to communicate with that adapter. Third-party teaming utilities may use a different protocol for this, but the one Microsoft offers works with network cards from any vendor, so it is vendor- and adapter-independent.

Dec 5, 2014

LMMC header on DLink router config file: decoding the zlib stream with zpipe to reveal the plaintext password

Tested on a DLink DSL-G604T

Downloading the config file from the router gives you a config.bin file. The file begins with an "LMMC" magic string, a 20-byte header that precedes a zlib stream.


Convert the file to a .Z file by stripping the 20-byte LMMC header:
dd if=config.bin of=test.config.bin.z bs=20 skip=1

Download the zlib source from http://zlib.net/zlib-1.2.7.tar.gz and extract it.
Go to the examples folder and compile zpipe.c with:
gcc -o zpipe zpipe.c -lz
You will now have a binary called zpipe.

Copy zpipe to the directory containing the config file and run:
./zpipe -d < test.config.bin.z > config.txt

Now open config.txt and view it in plaintext!
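The dd + zpipe steps can also be done in a few lines of Python with the standard zlib module. This is just a sketch based on the format described here (a 20-byte header starting with "LMMC", followed by a zlib stream); the function name and the synthetic demo payload are my own, and the meaning of header bytes 4–19 is unknown.

```python
import zlib

HEADER_LEN = 20  # "LMMC" magic plus unknown fields, per the hexdump

def decode_lmmc(data: bytes) -> bytes:
    """Strip the 20-byte LMMC header and inflate the zlib stream."""
    if data[:4] != b"LMMC":
        raise ValueError("not an LMMC config dump")
    return zlib.decompress(data[HEADER_LEN:])

# Demo with a synthetic file, since no real config.bin is at hand:
payload = b"username=admin\npassword=secret\n"
fake_config = b"LMMC" + bytes(16) + zlib.compress(payload)
print(decode_lmmc(fake_config).decode())
```

With a real dump you would pass the contents of config.bin to decode_lmmc() and write the result out as config.txt.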


I contacted ACA and TT (through their website forms) about the Internet Filtering plan that the Australian Government is pushing through.

I’ve been really vocal about this previously, but now I think it’s time to commit myself to writing letters and to helping others get theirs written and sent, both to the people responsible and to the people letting this happen.

Click on the article to see the full text I submitted.

LMMC and Router Configs
October 22nd, 2008

Source code attached, see end of article.

I had to pull the password for the internet connection out of a router at work recently and stumbled upon a problem that didn’t seem to have much of a solution: the router lets you save a binary copy of the config, but it didn’t appear to be in a known format.

kosh@aerith ~ $ file config.bin
config.bin: data

So after a little digging I found a zlib header in the file, along with a resource on the internet that had a Windows-only decoder (which failed for me :( ), so I proceeded to figure it out for myself.

kosh@aerith ~ $ hexdump -C config.bin | head -n 2
00000000 4c 4d 4d 43 00 03 00 00 c9 1a 00 00 8d 0e 8d cb |LMMC............|
00000010 e0 a2 00 00 78 9c ed 3d 6b 73 db 38 92 9f ef 7e |....x..=ks.8...~|

You can see the Zlib style magic at the 20-byte mark (0x14, “78 9c”). I tested my theory by grabbing zpipe.c from the zlib website and using dd to decode it.
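The header hunt can be sketched in Python: scan the blob for the common zlib stream marker 0x78 0x9c and try to inflate from that offset. This is my own illustration, not code from the original post; the function name and the synthetic stand-in for config.bin are made up.

```python
import zlib

def find_zlib_stream(data: bytes):
    """Locate the 0x78 0x9c zlib magic and inflate from there."""
    offset = data.find(b"\x78\x9c")
    if offset < 0:
        return None, None
    return offset, zlib.decompress(data[offset:])

# Synthetic stand-in for config.bin: a 20-byte LMMC-style header
# followed by a default-level zlib stream (which starts with 78 9c).
blob = b"LMMC" + bytes(16) + zlib.compress(b"sample config")
offset, plain = find_zlib_stream(blob)
print(offset)  # prints 20: the stream starts right after the header
```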

kosh@aerith ~ $ dd if=config.bin of=test.bin.Z bs=20 skip=1
342+1 records in
342+1 records out
6857 bytes (6.9 kB) copied, 0.0165227 s, 415 kB/s
kosh@aerith ~ $ ./zpipe -d < test.bin.Z
....


But considering I was five minutes away from a simple working setup, I hacked zpipe.c down and made zlmcc.c from it. I've made zlmcc.c available for anyone else who wants to inflate these files quickly.

The usual guarantee applies: if it blows up the world, it's not my fault. I only tested it on my system with my single config file; using the above steps you should be able to figure it out if they change the format a little (offset, etc.).

via Kosh's