The new 2.5 TB ioDrive Octal from Fusion-io was recently on display at Supercomputing 2009 (SC09) in Portland, Oregon. The ioDrive is a solid-state drive (SSD) that fits into an x16 PCI Express 2.0 slot. The beauty of using the PCI Express 2.0 slot is that Fusion-io can obtain great performance: the company claims the new Octal can saturate an x16 PCIe 2.0 slot, delivering a bandwidth of 6.4 GBytes/sec.
It accomplishes this by using 1,600 flash dies. Samsung is the provider of the memory chips, and is also an investor in Fusion-io. With that many chips, one might wonder how Fusion-io handles the inevitable chip failures. To see how they handle that, let’s take a look at a simpler ioDrive.
If you take a look at a regular 160 GB ioDrive, as in the above image, you will see 24 pads spread across the front and rear of the card. Each of these pads holds 8 chips. You can think of it as 8 rows, where each row has 24 chips. There is also a 25th pad near the front of the ioDrive with another 8 chips on it. That 25th pad provides the redundancy in case one or more of the other chips die: in the worst case, one chip could fail in each of the 8 rows, and the ioDrive could still recover. The same idea is used in the ioDrive Duo, which is really just two ioDrives, with one inverted. Both the ioDrive and the Duo consume 25 Watts, which they draw from the PCIe connector. Fusion-io has written software to handle the rare cases where the Duo would exceed the available 25 Watts.
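Fusion-io has not disclosed its exact redundancy scheme, but the "one spare chip per row" layout is exactly what XOR parity (as in RAID-5) buys you. Here is a minimal sketch, with toy 4-byte blocks standing in for flash pages, showing how the 25th pad's chip could rebuild any single failed chip in its row:

```python
from functools import reduce

# Hypothetical sketch: XOR parity across one row of flash chips.
# Fusion-io has not published its actual scheme; this only illustrates
# how one spare chip per row can recover a single chip failure.

CHIPS_PER_ROW = 24  # data chips in one row of pads

def parity(blocks):
    """XOR all blocks together to produce the parity block."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# 24 data blocks, one per chip in the row (toy 4-byte blocks).
row = [bytes([i, i + 1, i + 2, i + 3]) for i in range(CHIPS_PER_ROW)]
spare = parity(row)  # what the 25th pad's chip would store for this row

# Simulate chip 7 failing, then rebuild it from the survivors plus parity.
survivors = row[:7] + row[8:]
rebuilt = parity(survivors + [spare])
assert rebuilt == row[7]
print("recovered chip 7:", rebuilt.hex())  # 0708090a
```

Because XOR is its own inverse, XORing the 23 surviving blocks with the parity block cancels everything except the missing chip's data, which is why one failure per row is the recoverable worst case.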
The ioDrive Octal follows the same pattern and, as one might expect, can be thought of as 8 ioDrives placed together (it is not literally 8 ioDrives, but you can think of it that way). At the end of the Octal is a connector, which is a key difference from a normal ioDrive. This connector is an I/O link capable of 6.4 GB/sec: Fusion-io has created a PCIe 2.0 link to connect its drives to other computers, and has written its own software to handle data integrity during read and write operations. One of the benefits of this connector is high availability. If one server is connected to the Octal and that server dies, the file system can be failed over to a redundant server, and the data on the Octal remains available.
The image above shows a mock-up of a data center that can achieve 1 Terabyte per second of sustained bandwidth. The clear-colored racks represent what it would take in conventional hardware to achieve 1 TB/s sustained: approximately 55,440 disk drives in 132 racks of equipment. The black-colored racks represent what it would take with ioDrive Octals: 220 of them, occupying six racks. If you look closely, you will notice that there are actually twelve such racks; that is because the customer wants to double the size of the installation. Who is this customer? All Fusion-io will say is "two presently undisclosed government organizations." If you read their press release, you might get the idea that the two locations are part of the Advanced Simulation and Computing (ASC) Program and that one of the sites might be Lawrence Livermore. Apparently it depends on how the funding plays out.
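The quoted figures are easy to sanity-check with a back-of-the-envelope sketch (all inputs taken from the text above):

```python
# Back-of-the-envelope check of the 1 TB/s data center figures.

TARGET_GB_S = 1_000  # target sustained bandwidth: 1 TB/s, in GB/s

# Conventional route: 55,440 disks in 132 racks.
disks, disk_racks = 55_440, 132
per_disk_mb = TARGET_GB_S / disks * 1_000   # MB/s each disk must sustain
print(f"{per_disk_mb:.1f} MB/s sustained per conventional disk")

# Octal route: 220 cards at 6.4 GB/s each, in six racks.
octals, octal_bw = 220, 6.4
aggregate = octals * octal_bw               # GB/s
print(f"{aggregate:.0f} GB/s aggregate from {octals} Octals")
```

Each conventional disk only has to sustain about 18 MB/s, which is why so many of them are needed, while 220 Octals aggregate to 1,408 GB/s, leaving comfortable headroom over the 1 TB/s target.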
When will the Octal be available? Fusion-io is not saying. However, rumors on the floor at SC09 have it being introduced in 1Q2010. Next year they are planning to take the Octal from 2.5 TBytes up to 5.0 TBytes. Fusion-io has partnered with companies like DDN, IBM, and HP, so it is likely that you will see the Octal end up in their products in the near future.
Fusion-io also showed off their media wall at SC09. This was the same setup that was shown at Siggraph 2009: sixteen diskless servers, all booting off a single ioDrive. To prove the point, all the disks had been pulled out of each server.
Once booted, each server ran 256 instances of the VLC media player. Each VLC instance loaded a standard-definition video file and displayed it on a screen. For those of you doing the math, that is 4,096 video streams coming from one ioDrive. Impressive.
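The arithmetic behind the demo, plus a rough feel for the aggregate bandwidth involved (the 2 Mbit/s per-stream bitrate is an assumption for illustration; the demo's actual encoding was not stated):

```python
# Stream count and aggregate bandwidth for the media wall demo.
# The per-stream bitrate is an assumed figure, not from the demo.

servers = 16
streams_per_server = 256
streams = servers * streams_per_server
print(streams, "streams")  # 4096

sd_bitrate_mbit = 2.0  # assumed standard-definition bitrate per stream
aggregate_mb = streams * sd_bitrate_mbit / 8  # MB/s
print(f"~{aggregate_mb:.0f} MB/s aggregate at {sd_bitrate_mbit} Mbit/s per stream")
```

At that assumed bitrate the wall would be pulling on the order of a gigabyte per second of video, all served from one card.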