PLX to Chair SSD Track at Flash Memory Summit, Demonstrate Its PCIe Gen3 Switches with Marvell
In the session titled “Designing Storage Systems in the SSD Era” (Track 202-B), experts from PLX, STEC, Marvell, Samsung, SolidFire, and Skyera will offer insight and advice to designers of hard disk drive (HDD)-based systems on how to enhance their platforms to make the best use of high-performance SSDs. The session -- chaired by Larry Chisvin, PLX® vice president of strategic initiatives -- will help designers choose the optimal interface for specific applications, maximize system bandwidth, modify their systems to combine SSDs and HDDs, and avoid the common bottlenecks and errors encountered when using SSDs; attendees will also receive recommendations on how best to determine a real-world return on investment in their new systems. Track 202-B will be held on Wednesday, August 22, from 9:45 a.m. to 11:00 a.m.
The PLX and Marvell demonstration focuses on the industry-leading performance of Marvell’s 88NV9145 PCIe-to-NAND controllers, enabled by PLX ExpressLane™ PCIe Gen3 switches, in an enterprise storage application. The PLX switch’s multi-host feature allows the device to be partitioned into two virtual switches, with data streaming to and from multiple servers, reducing cost by eliminating the need for a second switch. The PLX PCIe x16 port-to-server connection offers 16 GB/s of data transfer capacity in each direction.
“With key support enabled by PLX, the Marvell demonstration features eight virtual machines running on a standard two-socket server ultimately sharing our native PCI Express SSD reference card,” said Shawn Kung, director of product marketing, enterprise storage at Marvell Semiconductor, Inc. “Each virtual machine is capable of achieving I/O performance up to an astounding 90K IOPS.”
The PLX PCIe Gen3 switch portfolio today comprises 14 devices, ranging from 12 lanes and three ports to 96 lanes and 24 ports. Designers choosing the PEX8796 switch -- with 96 lanes, each carrying 8 gigatransfers per second (GT/s) in each direction -- can achieve aggregate throughput of 1,536 gigabits per second (192 GB/s), delivering performance that challenges all other interconnect technologies.
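The throughput figures quoted above follow from straightforward lane arithmetic. A quick sketch of that calculation, assuming the raw line rate commonly quoted in marketing materials (PCIe Gen3's 128b/130b encoding reduces the effective rate by only about 1.5%, and the function name here is illustrative, not a PLX API):

```python
# PCIe Gen3: 8 GT/s is roughly 8 Gb/s of raw bandwidth per lane, per direction.
GEN3_RATE_GBPS = 8

def aggregate_gbps(lanes: int, directions: int = 2) -> int:
    """Raw aggregate bandwidth in gigabits per second for a given lane count."""
    return lanes * GEN3_RATE_GBPS * directions

# PEX8796: 96 lanes, both directions counted.
print(aggregate_gbps(96))             # 1536 Gb/s
print(aggregate_gbps(96) // 8)        # 192 GB/s

# A single x16 port, one direction: 16 lanes * 8 Gb/s = 128 Gb/s = 16 GB/s.
print(aggregate_gbps(16, directions=1) // 8)  # 16 GB/s
```

This confirms the 1,536 Gb/s (192 GB/s) aggregate figure for the PEX8796 and the 16 GB/s per-direction figure for an x16 port cited earlier.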