Dec 12, 2014

HP ProLiant DL380p Gen8 Server Review

As StorageReview expands our enterprise test lab, we're finding a greater need for additional latest-generation servers like the HP DL380p Gen8; not just from a storage perspective, but from a broader enterprise environment simulation perspective as well. As we test larger arrays and faster interconnects, we need platforms like the DL380p that can deliver the workload these arrays and related equipment require. Additionally, as PCIe storage matures, the latest application accelerators rely on third-generation PCIe for maximum throughput. Lastly, there's a compatibility element we're adding to enterprise testing, ensuring we can provide results across a variety of compute platforms. To that end, HP has sent us their eighth-generation (Gen8) DL380p ProLiant, a mainstream 2U server that we're using in-lab for a variety of testing scenarios.


While some may wonder about the relevance of reviewing servers on a storage website, it's important to realize how vital the compute platform is to storage performance, both directly and indirectly. When testing the latest PCIe application accelerators for maximum throughput, for example, it's critical to make sure compute servers are ready in areas ranging from hardware compatibility to performance scaling and saturation, down to often-overlooked elements like how a server manages cooling.

Case in point, most 2U servers use riser boards for PCIe expansion, and knowing what drives those slots is just as important as the slots themselves. If one 16-lane PCIe link is shared across three riser slots, those slots may under-perform compared to a design that splits two 16-lane links between three riser slots. We also have an eye toward how well manufacturers use the cramped real estate inside 1U and 2U servers, as not all are created equal. Items in this category range from cable management to how many features are integrated versus requiring add-on cards, leaving PCIe expansion entirely open to the end-user instead of consuming those slots with RAID cards or additional LAN NICs. Even the way server vendors handle the traditional SATA/SAS bays can vary widely, which can be the difference between an ideal server/storage relationship and one that is less desirable.
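To put rough numbers on why slot wiring matters, here is a quick back-of-the-envelope sketch (our own arithmetic, not from HP documentation) using the usual approximation of about 985MB/s per PCIe 3.0 lane after 128b/130b encoding overhead:

    # Rough per-slot bandwidth when an upstream PCIe 3.0 link feeds multiple riser slots.
    PCIE3_MB_PER_LANE = 985  # approximate usable MB/s per PCIe 3.0 lane

    def per_slot_bandwidth(upstream_lanes, num_slots):
        """Worst-case MB/s available to each slot if all slots are saturated at once."""
        return upstream_lanes * PCIE3_MB_PER_LANE / num_slots

    # One x16 link shared by three riser slots vs. two x16 links behind the same three slots
    print(per_slot_bandwidth(16, 3))   # ~5,253 MB/s per slot
    print(per_slot_bandwidth(32, 3))   # ~10,507 MB/s per slot

The point is simply that a riser fed by a single x16 link can become the bottleneck long before the cards plugged into it do.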

The HP ProLiant DL380p Gen8 Server series is comprised of 2U, 2-socket compute servers that feature a Smart Array P420i RAID controller with up to 2GB Flash Backed Write Cache (FBWC), up to five PCIe 3.0 expansion slots and one PCIe 2.0 expansion slot, and extensive built-in management capabilities. Our server accepts small form factor (SFF) 2.5-inch SAS, SATA, or SSD drives, while other configurations of the ProLiant DL380p Gen8 accept large form factor (LFF) 3.5-inch drives.

Our HP ProLiant DL380p Gen8 Specifications:
  • Intel Xeon E5-2640 (6 core, 2.50 GHz, 15MB, 95W)
  • Windows Server 2008 R2 SP1 64-Bit
  • Intel C600 Chipset
  • Memory - 64GB (8 x 8GB) 1333MHz DDR3 Registered RDIMMs
    • 768GB (24 x 32GB dual-rank DIMMs) max
  • PCI-Express Slots
    • 1 x PCIe 3.0 x16
    • 1 x PCIe 3.0 x8
    • 1 x PCIe 2.0 x8 (x4 electrical)
  • Ethernet - 1Gb 331FLR Ethernet Adapter 4 Ports
  • Boot Drive - 600GB 10,000RPM SAS x 2 (RAID1)
  • Storage Bays - 8 x 2.5" SAS/SATA hot swap
    • Smart Array P420i Controller
  • I/O Ports
    • 7 x USB 2.0 (2 front, 4 rear and 1 internal)
    • 2 x VGA connector (front/rear)
    • Internal SD-Card slot
  • Management
    • HP Insight Control Environment
    • HP iLO 4; hardware-based power capping
  • Form Factor - 2P/2U Rack
  • Power
    • 460W Common Slot Platinum Hot Plug
  • HP Standard Limited Warranty - 3 Years Parts and on-site Labor, Next Business Day
  • Full HP ProLiant DL380p Specifications
Hardware Options

The DL380p Gen8 series features configurations with up to two Intel Xeon E5-2600 family processors, up to five PCI-Express 3.0 expansion slots, and one PCI-Express 2.0 slot (three slots with a single CPU, six with dual CPUs). The standard riser configuration per CPU includes one x16 PCIe 3.0 slot, one x8 PCIe 3.0 slot, and one x8 PCIe 2.0 slot. HP offers different configuration options, including an optional riser that supports two x16 PCIe 3.0 slots. The unit can also support up to two 150W single-width graphics cards in a two-processor, two-riser configuration with an additional power feed.


Each Intel Xeon E5-2600 processor socket has four memory channels that support three DIMMs each, for a total of 12 DIMMs per installed processor or a grand total of 24 DIMMs per server. The ProLiant DL380p Gen8 supports HP SmartMemory RDIMMs, UDIMMs, and LRDIMMs at speeds up to 1600MHz, for a maximum capacity of 768GB (24 x 32GB).
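As a quick sanity check of that maximum capacity (our arithmetic, simply multiplying the figures above):

    # Maximum memory: 2 sockets x 4 channels x 3 DIMMs per channel, populated with 32GB DIMMs
    sockets, channels_per_socket, dimms_per_channel = 2, 4, 3
    dimm_capacity_gb = 32

    total_dimms = sockets * channels_per_socket * dimms_per_channel   # 24 DIMM slots
    max_memory_gb = total_dimms * dimm_capacity_gb                    # 768 GB
    print(total_dimms, max_memory_gb)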


HP FlexibleLOM provides bandwidth options (1Gb and 10Gb) and a choice of network fabric (Ethernet, FCoE, InfiniBand), with an upgrade path to 20Gb and 40Gb when the technology becomes available. The HP ProLiant DL380p Gen8 provides a dedicated iLO port and the iLO Management Engine, including Intelligent Provisioning, Agentless Management, Active Health System, and embedded Remote Support. This layout lets users manage the DL380p without taking over one of the four on-board 1GbE ports.

Monitoring and Management

HP Active Health System provides health and configuration logging, with HP's Agentless Management handling hardware monitoring and alerts. Automated Energy Optimization analyzes and responds to the ProLiant DL380p Gen8's array of internal temperature sensors and can signal self-identification, location, and inventory to HP Insight Control. The HP ProLiant DL380p Gen8 is Energy Star qualified and supports HP's Common Slot power supplies, allowing commonality of power supplies across HP solutions. If you configure a ProLiant DL380p Gen8 with HP Platinum Plus common-slot power supplies, the power system can communicate with the company's Intelligent PDU series so that redundant supplies can be plugged into redundant power distribution units.


HP also offers three interoperable management solutions for the ProLiant DL380p Gen8: Insight Control, Matrix Operating Environment, and iLO. HP Insight Control provides infrastructure management to deploy, migrate, monitor, remote-control, and optimize infrastructure through a single management console. Versions of Insight Control are available for Linux and Windows central management servers. The HP Matrix Operating Environment (Matrix OE) infrastructure management solution includes automated provisioning, optimization, and recovery management capabilities for HP CloudSystem Matrix, HP's private cloud and Infrastructure as a Service (IaaS) platform.


HP iLO management processors virtualize system controls for server setup, health monitoring, power and thermal control, and remote administration. HP iLO functions without additional software installation regardless of the server's state of operation. Basic system board management functions, diagnostics, and essential Lights-Out functionality ship standard across all HP ProLiant Gen8 rack, tower, and blade servers. Advanced functionality, such as graphical remote console, multi-user collaboration, and video record/playback, can be activated with optional iLO Advanced or iLO Advanced for BladeSystem licenses.


Some of the primary features enabled with advanced iLO functionality include remote console support beyond BIOS access and advanced power monitoring, which shows how much power the server is drawing over a given period of time. Our system shipped with basic iLO support, which gave us the ability to remotely power the system on or off and provided remote console support (which ended as soon as the OS started to boot). Depending on the installation, many users can probably get by without the advanced features, but when tying the server into large scale-out environments, the advanced iLO feature set can really streamline remote management.
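As an illustration of basic out-of-band power control, here is a minimal sketch (not an HP utility, just our own wrapper) that drives the iLO through standard IPMI-over-LAN with ipmitool; it assumes IPMI access is enabled on the iLO, and the hostname and credentials shown are placeholders:

    import subprocess

    ILO_HOST = "ilo-dl380p.example.com"   # placeholder iLO address
    ILO_USER = "Administrator"            # placeholder credentials
    ILO_PASS = "changeme"

    def ilo_power(action):
        """Run 'status', 'on', or 'off' against the server's iLO via IPMI over LAN."""
        cmd = ["ipmitool", "-I", "lanplus",
               "-H", ILO_HOST, "-U", ILO_USER, "-P", ILO_PASS,
               "chassis", "power", action]
        return subprocess.run(cmd, capture_output=True, text=True).stdout

    print(ilo_power("status"))   # e.g. "Chassis Power is on"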

Design and Build

Our DL380p Gen8 review model came with a sliding rack rail kit and an ambidextrous cable management arm. The rail kit offers tool-free installation into racks with square or round mounting holes, an adjustment range of 24-36 inches, and quick-release levers. Installation into telco racks requires a third-party option kit. The sliding rails and cable management arm work together, allowing IT to service the DL380p by sliding it out of the rack without disconnecting any cables from the server. Buyers opting for a more basic approach can still buy the DL380p without rails, or with a basic non-sliding friction mount.


The front of the DL380p features one VGA out and two USB ports. Our unit features eight small form factor (SFF) SAS hot-plug drive bays. There is space for an optional optical drive to the left of the hot-plug bays. With a quick glance at the status LEDs on the front, users can diagnose server failures or make sure everything is running smoothly. If no failures have occurred, the system health LEDs are green. If a failure has occurred but a redundant feature has enabled the system to continue running, the LED turns amber. If the failure is critical and causes a shutdown, the LED illuminates red. If the issue is serviceable without removing the server hood, the External Health LED illuminates; if the hood must be removed, the Internal Health LED illuminates.


The level of detail that HP put into the DL380p is fairly impressive at times, with items as simple as drive trays getting all the bells and whistles. The drive tray includes rotating disk activity LEDs, indicators to tell you when a drive is powered on, and even an indicator warning you not to eject a drive. At a time when most hard drives or SSDs get a simple blinking activity LED, HP goes the extra mile to provide users with as much information as they can absorb just by looking at the front of the server.


Connectivity is handled from both the front and rear of the DL380p. VGA and USB ports are found at both ends of the server for easy management, although both VGA ports can't be used simultaneously. Additional ports such as a serial interface and more USB ports can be found on the back of the server, along with the FlexibleLOM ports (four 1GbE in our configuration) and the iLO LAN connector. To get the ProLiant DL380p Gen8 up and running immediately, HP ships these servers standard with a 6-foot C14-to-C13 power cord for use with a PDU.


Internally, HP put substantial effort into making the ProLiant DL380p Gen8 easy to service while packing as many features as possible into the small 2U form factor. The first thing buyers will notice is the cabling, or lack thereof, inside the server chassis. Many of the basic features are routed on the motherboard itself, including what tends to be cluttered power cabling. Other tightly-integrated items include the on-board FlexibleLOM 4-port 1GbE NIC and the Smart Array P420i RAID controller, which add network and drive connectivity without taking over any PCIe slots. In a sense this allows buyers to have their cake and eat it too, packing the DL380p with almost every feature while still leaving room for fast PCIe application accelerators or high-speed aftermarket networking interconnects such as 10/40GbE or 56Gb/s InfiniBand.


When it comes time to install new hardware or quickly replace a faulty component, buyers or their IT departments will enjoy the tool-free serviceable sections of the DL380p. Whether you are swapping out system memory, replacing a processor, or installing a new PCIe add-on card, you don't need to break out a screwdriver. HP also includes a full hardware diagram on the inside of the system cover, making it easy to identify components when it comes time to replace them.

Cooling

Inside most server chassis, cooling and cable management go hand in hand. While you can overcome some issues with brute-force cooling, a more graceful approach is to remove intrusive cabling that can disrupt the airflow needed for efficient and quiet cooling. HP went to great lengths to integrate most of the cabling found in servers, including power cabling, and used flat cables tucked against one side for data connections. You can see this with the on-board Smart Array P420i RAID controller, which connects to the front drive bay with flat mini-SAS cables.


Keeping a server cool is one task; making sure the cooling system is easily field-serviceable is another. All fans on the HP DL380p are held in with quick-connects and can be swapped out in seconds after removing the top lid.

On the cooling side of things, the DL380p does a great job of providing dedicated airflow for all the components inside the server chassis, including add-on PCIe solutions. Through the BIOS, users can change the amount of cooling applied, including overriding all automatic cooling options to force maximum airflow if the need arises. If you do, make sure no loose paperwork is nearby, as it will surely be sucked against the front bezel by the tornado of airflow. In our testing with PCIe application accelerators installed and stressed, stock cooling or slightly increased cooling was enough to keep everything operating smoothly.

Power Efficiency

HP is making a big push into higher-efficiency servers, which can be seen across the board in lower power-draw components. The ProLiant DL380p includes a high-efficiency power supply; our model is equipped with the 94% efficient Common Slot Platinum PSU.


Less power is wasted as heat in the AC-to-DC conversion process, which means that for every 100 watts you send to the power supply, 94 watts reach the server, instead of 75 watts or less with older models.
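To make the difference concrete, here is a rough sketch comparing wasted energy for a 94% and a 75% efficient supply; the 300W load and $0.10/kWh electricity price are our own illustrative assumptions, not measured figures:

    def annual_waste_kwh(load_watts, efficiency):
        """Energy lost as heat per year for a given DC load and PSU efficiency."""
        input_watts = load_watts / efficiency
        wasted_watts = input_watts - load_watts
        return wasted_watts * 24 * 365 / 1000.0

    load = 300  # assumed steady DC load in watts
    for eff in (0.94, 0.75):
        waste = annual_waste_kwh(load, eff)
        print(f"{eff:.0%} efficient PSU wastes ~{waste:.0f} kWh/year "
              f"(~${waste * 0.10:.0f} at $0.10/kWh)")

Under those assumptions, the Platinum supply loses roughly 170kWh per year as heat versus roughly 880kWh for an older 75% unit, before even counting the extra cooling load.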

Conclusion

We've logged hands-on time with just about every major server brand, and even some not-so-major brands. The one thing that resonates with the HP Gen8 ProLiants is just how tightly they're put together. The interior layouts are clean, the cabling is tucked away (or integrated entirely into the motherboard) and thoughtfully done, and even the PCIe riser boards support the latest-generation PCIe storage cards. From a storage perspective, the latter is certainly key: if an enterprise is going to invest in the latest and greatest storage technology, the server had better support the expected throughput.

While this first part of our HP ProLiant DL380p review gives a comprehensive overview of the system itself, part two will incorporate performance and compatibility testing with a wide array of storage products. While most SATA and SAS drives will perform roughly the same in any system, the latest PCIe storage solutions have a way of separating the men from the boys in the server world. Stay tuned for our second installment, which will cover these storage concerns and other key areas such as multi-OS performance variability.

Availability

HP ProLiant DL380p Gen8 servers start at $2,569 and are available now.

MSI MS-9A58 Quad LAN Review

MSI IPC has launched the MS-9A58 industrial system, a compact, fanless embedded IPC powered by an Intel® Atom™ D525 processor with DDR3 support and an integrated display interface. It delivers strong power savings while providing solid performance and rich I/O capability.


The MS-9A58 is powered by the latest Intel® Atom™ D525 dual-core processor and supports up to 4GB of DDR3 memory. With integrated graphics and memory controllers, the processor delivers graphics core rendering speeds from 200 to 400MHz while maintaining excellent power efficiency, combining higher speeds with lower power consumption. The Intel® GMA 3150 graphics engine is built into the chipset to provide fast graphics performance, high visual quality, and flexible display options without the need for a separate graphics card. With a compact mini-ITX system size, system developers get the freedom to design small embedded applications.


The MS-9A58 provides four Intel 82574L gigabit LAN ports, including one pair with support for an auto-bypass function. For storage, it supports two SATA ports. To satisfy growing demands for connecting peripheral devices, the MS-9A58 is equipped with abundant I/O, including two COM ports (one RS-232 and one RS-232/422/485 with auto flow control) and six USB 2.0 ports. Expansion capabilities include two PCI slots, one PCIe x1 slot, and one mini-PCIe slot. For wireless networking, the MS-9A58 can include a built-in WiFi 802.11b/g/n module. The MS-9A58 supports ATX power or wide-range DC 12V/19V/24V input, depending on the BOM option.


Key Features:
1. Intel® Pineview D525 Dual Core CPU
2. DDR3 SoDIMM for better memory supply
3. 2 SATA Ports for Storage Application
4. 4 Intel 82574L Gb LAN Ports, including one pair with auto-bypass support
5. Built-in WiFi 802.11b/g/n module
6. Wide Range Voltage Input for DC Sku (12/19/24V)
7. Supports DirectX 10, Shader Model 4.0 and Intel® Clear Video Technology

With its compact mini-ITX size, the MS-9A58 is designed with rich I/O functionality and new levels of performance and graphics for network security and storage applications such as small business VPN (Virtual Private Network), VoIP (Voice over Internet Protocol), SAN (Storage Area Network), and NAS (Network Attached Storage).

The MSI MS-9A58 Quad LAN is best suited to embedded systems like OpenWrt, pfSense, MonoWall, SmoothWall, DD-Wrt, and ZeroShell, not to mention other Linux-based network security operating systems. It also works well as a home file server running FreeNAS or SimplyNAS.

CCBoot 3.0 : Server Hardware Requirements

Here is the recommended server hardware for diskless boot with CCBoot.

1. CPU: Intel or AMD processor with 4 cores or more.
2. Motherboard: server motherboard that supports 8GB or more RAM and 6 or more SATA ports.
3. RAM: 8GB DDR3 or more.
4. Hard Disks: first, some terminology.
Image disk: the hard disk that stores the client OS boot data; we call this the "image".
Game disk: the hard disks that store the game data.
Writeback disk: the hard disks that store the clients' write data. In diskless booting, all data is read from and written to the server, so writeback disks are needed to save the clients' writes. Other products call this a "write cache".

1) One SATA HDD is used for the server OS (C:\) and the image disk (D:\). Some users put the image file on an SSD, but that isn't necessary: CCBoot keeps a RAM cache for the image, so image data is ultimately served from RAM and an SSD adds little.

2) Two SATA HDDs are set up in RAID0 for the game disk.
We recommend using the Windows Server 2008 disk manager to set up RAID0 instead of the motherboard's hardware RAID, and setting the SATA mode in the BIOS to AHCI, because AHCI gives better write performance for the writeback disks (see the AHCI article on Wikipedia for more information). In the BIOS, the SATA mode can only be either AHCI or RAID; if it is set to AHCI, the motherboard's RAID function becomes unavailable, which is why we build the RAID0 volume in the Windows disk manager instead. The performance is the same as hardware RAID0. Note: if you skip RAID0, game read speeds may drop, but with fewer than 50 clients and an SSD cache it is fine to skip it. (A scripted example of building the striped volume is sketched after this list.)

3) One SSD is used for the SSD cache (120GB or larger).

4) Two SATA/SAS/SSD drives are used for the client writeback disks. We do NOT recommend RAID for writeback disks: if one disk fails, the other can still be used, whereas with a RAID volume a single failed disk stops all clients. CCBoot can also load-balance across writeback disks, and two independent disks deliver better write performance than one RAID volume. SSDs make better writeback disks than SATA HDDs because of their higher IOPS. Conventional wisdom says heavy write activity shortens an SSD's lifetime, but in our experience an SSD used as a writeback disk lasts at least three years, which makes it well worth it.

Conclusion: you normally need to prepare six drives for the server, five SATA HDDs and one SSD: one SATA for the system OS, two SATA for the game disks, two SATA for the writeback disks, and one SSD for the cache.
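As referenced above, the RAID0 game volume can be scripted rather than clicked through in Disk Management. The sketch below is our own illustration, with assumed disk numbers (1 and 2) and drive letter (E:); verify the disk numbers on your own server before running anything like it, since striping destroys existing data on those disks:

    import subprocess, tempfile, os

    # Assumed layout: disks 1 and 2 are the two empty SATA HDDs destined for the game volume.
    DISKPART_COMMANDS = [
        "select disk 1",
        "convert dynamic",
        "select disk 2",
        "convert dynamic",
        "create volume stripe disk=1,2",
        'format fs=ntfs label="GameDisk" quick',
        "assign letter=E",
    ]

    def create_stripe():
        """Build a striped (RAID0) game volume across disks 1 and 2 via diskpart."""
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
            f.write("\n".join(DISKPART_COMMANDS) + "\n")
            script = f.name
        try:
            subprocess.run(["diskpart", "/s", script], check=True)
        finally:
            os.remove(script)

    if __name__ == "__main__":
        create_stripe()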

For 25 - 30 client PCs, the server should have 8GB DDR3 RAM and two writeback disks.
For 30 - 70 client PCs, the server should have 16GB DDR3 RAM and two writeback disks.
For 70 - 100 client PCs, the server should have 32GB DDR3 RAM and two writeback disks.
For 100+ client PCs, we recommend using 2 or more servers with load balancing (a rough sizing helper is sketched below).
Network: 1000Mb Ethernet, or 2 x 1000Mb Ethernet in a teamed configuration. We recommend Intel and Realtek 1000M series adapters.
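The sizing guidance above can be condensed into a small helper. This is our own sketch of the table, not a CCBoot tool; it simply mirrors the thresholds listed, with writeback disks staying at two per server:

    def ccboot_server_plan(clients):
        """Return (RAM in GB, servers needed) per the sizing table above."""
        if clients > 100:
            # Above 100 clients the recommendation is multiple load-balanced servers.
            servers = (clients + 99) // 100
            return 32, servers
        if clients > 70:
            return 32, 1
        if clients > 30:
            return 16, 1
        return 8, 1

    for n in (25, 60, 90, 150):
        ram, servers = ccboot_server_plan(n)
        print(f"{n} clients -> {ram}GB RAM, {servers} server(s)")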

FreeNAS : How-To Setup Home File Server For Free

I download a lot of music. My wife takes a lot of digital photos. My kids also like to save music and photos. Between all of us, we have a lot of media that quickly accumulates on our home PCs, and the task of sharing this media between us is a challenge. My wife didn't know how to burn data CDs and my kids didn't have a CD burner. What we needed was a home file server: a dedicated computer used for storing and sharing our files. My research found a ton of products that would do the job. There are several dedicated Network Attached Storage (NAS) devices that I could purchase, but even the cheapest ones are still several hundred US dollars. Then there is the server software to consider. Microsoft has its Windows Storage Server software, which is also several hundred US dollars. There are also many different Linux solutions that require a working knowledge of the Linux file system and command line.


In the end I settled on a free product called FreeNAS. As the name suggests, FreeNAS is free network attached storage software, but that is not all. It also has numerous features that make it extremely easy to set up, manage, and expand, plus features that allow you to use it as a media server for various devices. Since its hardware requirements are minimal, this seemed like an ideal product for me to use. With FreeNAS, I was able to use my old desktop PC (a Pentium 4 with 256MB of RAM) as my file server.

Installation and setup:

To set up FreeNAS as a home file server, you must make sure you have all the proper hardware first. This means you need a multi-port router or switch to connect your file server to, as well as a network cable for the server. For the actual server, you will need a PC with at least one hard drive (I started with two) and a CD-ROM drive.

The setup process was very easy. I downloaded the FreeNAS ISO file and created a live CD, which I inserted into my old PC. If I wanted to, I could have started using it as a file server right there (by simply changing the IP address of the server), but I wanted something that I could use in the long term... something that could restart automatically with no user intervention in the event of a power failure. This meant installing it to the hard drive. FreeNAS setup made this easy to do: I simply selected which hard drive to install to, and that was it. After a reboot, I had to set up the network interface. FreeNAS auto-detects which network adapter you have, so selecting it was simple. Next I had to assign an IP address. FreeNAS setup has a default address you can use if you want, but it may not work on your home network. It's best to find out your workstation's IP address (typically assigned through DHCP) and set up your FreeNAS server on a similar address. Once this is done, you are pretty much finished working directly with that machine and can access all your other options through the web interface, which I found very easy to use.
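If you'd rather not dig through ipconfig/ifconfig output to find your workstation's address, a tiny script can report it. This is just a convenience sketch; the public DNS address is used only as a dummy destination to discover which interface your traffic leaves on:

    import socket

    def local_ip():
        """Return the workstation's LAN IP by opening (not sending on) a UDP socket."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            s.connect(("8.8.8.8", 80))   # dummy destination; no packets are actually sent
            return s.getsockname()[0]
        finally:
            s.close()

    print(local_ip())   # e.g. 192.168.1.42, so put the FreeNAS box on another 192.168.1.x address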

Setting up file shares:

This is probably the most challenging part of the entire setup, but it was still relatively easy to do. Setting up the server to share files is done in four steps: adding a drive, formatting the drive, adding a mount point, then setting up the share. At first the task was a bit daunting, but after grasping the basic concept, it was really quite straightforward. When I added two more hard drives to my server, it was simple to configure them for file sharing, and within 15 minutes I had easily tripled my file server's storage capacity.
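Once a share exists, clients just point at the server's IP address. As a minimal sketch (the server address and share name are placeholders, and the CIFS/SMB share is assumed to allow guest access), this is how a Linux client could mount it:

    import subprocess

    SERVER = "192.168.1.50"      # placeholder FreeNAS address
    SHARE = "media"              # placeholder share name
    MOUNTPOINT = "/mnt/freenas"

    def mount_share():
        """Mount the FreeNAS CIFS/SMB share at MOUNTPOINT (requires root and mount.cifs)."""
        subprocess.run(["mkdir", "-p", MOUNTPOINT], check=True)
        subprocess.run(["mount", "-t", "cifs",
                        f"//{SERVER}/{SHARE}", MOUNTPOINT,
                        "-o", "guest"], check=True)

    if __name__ == "__main__":
        mount_share()

Windows clients can simply map the same share as a network drive through Explorer.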

Additional Features:

Even though storage is its primary feature, there is much more that really makes this product shine. It supports multiple network protocols, including AppleTalk, NFS, FTP, Unison, and iSCSI. It also comes bundled with many extra services like the Transmission BitTorrent client, a UPnP server, an iTunes server, and a basic web server. This means it is capable of more than just storage: it can be part of your home entertainment setup, serving your media to your home theater PC, PSP, iPod, or other network devices.

Conclusion:

I'm happy to say that FreeNAS does a great job storing and sharing my files. Since my initial installation of the product, I have added and updated three hard drives on my server, and the process was very easy and straightforward. FreeNAS easily recognized my new hard drives and allowed me to add and share them for storage with no problems. I use the Transmission BitTorrent client to download my media, so I am not tying up my workstation with a separate BitTorrent client. If I decide later to add a Linux PC to my home network, I can simply enable the appropriate protocol on my server and have instant access to all my files. Ultimately my goal is to build a home theater PC, so when that is ready, I will already have the media server ready to serve up my media.

I heartily recommend FreeNAS if you are looking for a free (or very inexpensive) file server solution. You will need to know some basic technical information about your home network, like your IP address setup, and you will need a multi-port router or switch on your home network, but beyond that, it is relatively easy to manage and expand.

Resources:

Website: http://www.freenas.org/
Download: http://sourceforge.net/projects/freenas/files/
Installation instructions: http://www.installationwiki.org/Installing_FreeNAS
FreeNAS Blog: http://blog.freenas.org/
FreeNAS Knowledgebase: http://www.freenaskb.info/kb/
FreeNAS Support Forum: http://sourceforge.net/apps/phpbb/freenas/index.php

Yet Another AoE vs. iSCSI Opinion

That’s right, folks! Yet another asshole blogger here, sharing his AoE (ATA over Ethernet) vs. iSCSI (Internet SCSI) opinion with the world!

As if there wasn't already enough discussion surrounding AoE vs. iSCSI in mailing lists, forums and blogs, I am going to add more baseless opinion to the existing overwhelming heap of information on the subject. I'm sure this will be lost in the noise, but after having implemented AoE with CORAID devices, iSCSI with an IBM (well, LSI) device, and iSCSI with software targets in the past, I feel I finally have something to share.

This isn’t a technical analysis. I’m not dissecting the protocols nor am I suggesting implementation of either protocol for your project. What I am doing is sharing some of my experiences and observations simply because I can. Read on, brave souls.

Background

My experiences with AoE and iSCSI are limited to fairly small implementations by most standards: multi-terabyte, mostly file serving, with a little bit of database thrown in for good measure. The reasoning behind all the AoE and iSCSI implementations I've set up is basically to detach storage from physical servers to achieve:
  1. Independently managed storage that can grow without pain
  2. High availability services front-end (multiple servers connecting to the same storage device(s))
There are plenty of other uses for these technologies (and other technologies that may satisfy these requirements), but that's where I draw my experiences from. I've not deployed iSCSI or AoE for virtual infrastructure, which does seem to be a pretty hot topic these days, so if that's what you're doing, your mileage will vary.

Performance

Yeah, yeah, yeah, everyone wants the performance numbers. Well, I don't have them. You can find people comparing AoE and iSCSI performance elsewhere (even if many of the tests are flawed). Any performance numbers I may accidentally provide while typing this up in a mad frenzy are entirely subjective and circumstantial... I may not even end up providing any! Do your own testing; it's the only way you'll ever be sure.

The Argument For or Against

I don’t really want to be trying to convince anyone to use a certain technology here. However, I will say it: I lean towards AoE for the types of implementations I mentioned above. Why? One reason: SIMPLICITY. Remember the old KISS adage? Well, kiss me AoE because you’ve got the goods!

iSCSI has the balls to do a lot, for a lot of different situations. iSCSI is routable at layer 3 by nature; AoE is not. iSCSI has a behemoth-sized load of options and settings that can be tweaked for any particular implementation's needs. iSCSI has big vendor backing in both the target and initiator markets. Need to export an iSCSI device across a WAN link? Sure, you can do it. Never mind that the performance might be less than optimal; the point is it's not terribly involved or "special" to route iSCSI over a WAN, because iSCSI was designed from the get-go to run over the Internet. While AoE over a WAN has been demonstrated with GRE, it's not inherent to the design of AoE and never will be.

So what does AoE have that iSCSI doesn't? Simplicity and less overhead. AoE doesn't have a myriad of configuration options to get wrong; it's really so straightforward that it's hard to get it wrong. iSCSI is easy to get wrong. Tune your HBA firmware settings or software initiator incorrectly (and the factory defaults can easily be "wrong" for any particular implementation) and watch all hell be unleashed before your eyes. If you've ever looked at the firmware options provided by QLogic in their HBAs and you're not an iSCSI expert, you'll know what I'm talking about.

Simplicity Example: Multipath I/O

A great example of AoE’s simplicity vs. iSCSI is when it comes to multipath I/O. Multipath I/O is defined as utilizing multiple paths to the same device/LUN/whatever to gain performance and/or redundancy. This is generally implemented with multiple HBAs or NICs on the initiator side and multiple target interfaces on the target side.

With iSCSI, every path to the same device provides the operating system with a separate device. In Linux, that’ll be /dev/sdd, /dev/sde, /dev/sdf, etc. A software layer (MPIO) is required to manage I/O across all the devices in an organized and sensible fashion.

While I'm a fairly big fan of the latest device-mapper-multipath MPIO layer in modern Linux variants, I find AoE's multipath I/O method much, much better for the task of providing multiple paths to a storage device because it has incredibly low overhead to set up and manage. AoE's implementation has the advantage that it doesn't need to be everything to every storage subsystem, which, fortunately or unfortunately, device-mapper-multipath has to be.

The AoE Linux driver totally abstracts multiple paths in a way that iSCSI does not, by handling all the multipath logic internally. The host is only presented with a single device in /dev that is managed identically to any other non-multipath device. You don't even need to configure the driver in any special way; just plug in the interfaces and go! That's a far cry from what is necessary with MPIO layers and iSCSI.
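To illustrate how little ceremony is involved on the initiator side, here is a minimal sketch (my own, assuming the aoetools package and the aoe kernel module are installed and run as root) that loads the driver, triggers discovery, and lists the /dev/etherd devices that appear:

    import subprocess

    def discover_aoe():
        """Load the AoE driver, scan for targets, and print what the kernel sees."""
        subprocess.run(["modprobe", "aoe"], check=True)        # load the AoE initiator driver
        subprocess.run(["aoe-discover"], check=True)           # broadcast a discovery query
        result = subprocess.run(["aoe-stat"], check=True,
                                capture_output=True, text=True)
        # Each line names a device such as e1.0, which shows up as /dev/etherd/e1.0,
        # already aggregated across however many interfaces can reach the target.
        print(result.stdout)

    if __name__ == "__main__":
        discover_aoe()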

There’s nothing wrong about device-mapper-multipath and it is quite flexible, but it certainly doesn’t have the simplicity of AoE’s multipath design.

Enterprise Support

Enterprise support is where iSCSI shines in this comparison. Show me a major storage vendor that doesn't have at least one iSCSI device, even if it's just rebranded. OK, maybe there are a few vendors out there without an iSCSI solution, but for the most part all the big boys are flaunting some kind of iSCSI offering. NetApp, EMC, Dell, IBM, HDS, and HP all have iSCSI solutions. On the other hand, AoE has only a single visible company backing it at the commercial level: CORAID, a spin-off company started by Brantley Coile (yeah, the guy who invented the now-Cisco PIX and AoE). I'm starting to see some Asian manufacturers backing AoE at the hardware level, but when it comes to your organization buying rack-mount AoE-compatible disk trays, CORAID is the only vendor I would suggest at this time.

This isn't so fantastic for getting AoE into businesses, but it's a start. With AoE in the Linux kernel and Asian vendors baking AoE into chips, things will likely pick up for AoE from an enterprise support point of view: it's cheap, it's simple, and performance is good.

Conclusion

AoE rocks! iSCSI is pretty cool too, but I've certainly suffered much worse pain working with much more expensive iSCSI SAN devices than with the CORAID gear, and with no performance benefit that I could realize under moderate-to-heavy file serving and light database workloads. I like AoE over iSCSI, but there are plenty of reasons not to like it as well.