Three Ways Flash Memory Can Accelerate Data Center Architecture

By Josh Miner   |   January 4, 2013

Josh Miner of Fusion-io


More than 25 years ago, when Dr. Fujio Masuoka invented flash, he intended it to be used as a new type of high-capacity, low-cost memory. Flash has the added benefit of being persistent: unlike random access memory (RAM), it does not lose data in the event of a power loss.

As data storage companies began creating flash products, they lost sight of Dr. Masuoka's vision. Instead of using flash as memory, they implemented it like a hard disk, and as such it has replaced hard disks in many of the devices we use. This made flash drives backward compatible with existing computer architectures, but it also saddled them with 30-year-old storage protocols that could not exploit the true potential of flash to serve as a new memory tier.


Today, there are three main ways flash can accelerate modern data center architecture: First, as high-performance, low-latency acceleration devices that place data within servers, close to the applications and databases that consume the data. Second, as a high-capacity cache for centralized storage to keep “hot” or active data ready in high-performance flash devices instead of disk. And third, as a shared storage target that can accelerate multiple applications. The next sections discuss each approach in more detail.

1. Bypass Network Bottlenecks with Flash

Adding flash to the server is the most effective way to accelerate performance, as this is where the CPU resides and where data is processed. Implementing flash memory here eliminates the network bottlenecks that slow data processing. Server-side flash is sometimes implemented as a drop-in hard disk replacement accessed through SAS (Serial Attached SCSI) or SATA (Serial ATA) ports. While these devices eliminate network latency, the path between the data and the application is still long, and latency remains high. Consequently, many companies have been developing flash products that connect data to applications over the system bus through a PCI Express slot, which greatly reduces latency. However, the majority of these products simply build RAID arrays of SSDs on a PCI Express card. They still suffer the latency of legacy storage protocols, the bottlenecks of RAID controllers, and the relative slowness of embedded onboard processors compared to today's multi-core servers.

It is possible to write software that gives applications direct and native access to the flash memory, essentially fulfilling Dr. Masuoka’s original vision of flash as a new memory tier. This provides applications with terabytes of memory that can extend the capabilities of DRAM and greatly improve data processing speeds. This also allows much larger datasets to be processed very rapidly, which is a key enabler to big data applications.
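As a loose illustration of that idea (this is not Fusion-io's actual API; the file name and sizes are hypothetical), memory-mapping a file that lives on a flash device lets an application address its contents like ordinary memory, rather than issuing block reads and writes through the storage stack:

```python
import mmap
import os

# Hypothetical file on a flash-backed filesystem; the size is tiny for
# the demo, whereas real deployments would map far larger extents.
PATH = "flash_backed.dat"
SIZE = 1 << 20  # 1 MiB

# Create a file of the desired size to back the mapping.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), SIZE)
    mem[0:5] = b"hello"        # write through the mapping, as if it were RAM
    result = bytes(mem[0:5])   # read it back directly, no read() call
    mem.flush()                # ask the OS to persist the pages to the device
    mem.close()

os.remove(PATH)
```

Because the mapping is backed by persistent flash, data written this way survives a power loss once flushed, which is exactly the property that distinguishes this tier from DRAM.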

Companies like Microsoft and Oracle have been quick to recognize that external storage is no longer necessary for the capacity and performance many applications require, and they have developed software features to enable server-based computing. For example, Microsoft SQL Server AlwaysOn and Oracle Data Guard deliver server-based replication, so companies can achieve high levels of availability and uptime without the need for backend storage.

2. Deploy Flash as Cache in Centralized Storage

For some companies, an all-flash solution is not practical. Datasets might be too large to move into servers. Availability requirements might exceed what server-side features like Microsoft SQL Server AlwaysOn or Oracle Data Guard can provide. They might be operating in a virtualized environment that requires VMware vMotion or Hyper-V Live Migration. Or it could be a financial consideration: too much time might remain on the storage payback period to budget an all-flash solution, so performance must be improved in the interim. And in big data environments, storage is increasingly treated as a source to mine, where nothing is truly archived.

In all of these cases, a flash cache is a great option. With the right caching software, companies can accelerate physical or virtualized storage by using terabytes of high-performance flash as a cache. Most flash caches are write-through (read) caches: they accelerate reads while also reducing the performance burden on SANs, extending SAN life and ROI.
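A minimal sketch of the write-through behavior described above, with a plain dictionary standing in for the SAN and an LRU-bounded dictionary standing in for the flash device (all names here are illustrative, not any vendor's caching software):

```python
from collections import OrderedDict

class WriteThroughCache:
    """Write-through cache sketch: writes go to both the cache and the
    backing store, so the backing store is always current; reads are
    served from the cache when possible, sparing the backing store.
    LRU eviction bounds the cache, standing in for a flash device's
    fixed capacity."""

    def __init__(self, backing_store, capacity=4):
        self.backing = backing_store
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = 0
        self.misses = 0

    def write(self, key, value):
        self.backing[key] = value        # write-through: SAN stays current
        self._fill(key, value)

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)  # refresh LRU position
            return self.cache[key]
        self.misses += 1
        value = self.backing[key]        # slow path: fetch from the SAN
        self._fill(key, value)
        return value

    def _fill(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used

san = {}                                  # stand-in for centralized storage
cache = WriteThroughCache(san, capacity=2)
cache.write("row:1", "alpha")
cache.read("row:1")                       # served from the cache, not the SAN
```

Because every write also lands on the SAN, a cache failure never loses data; the trade-off is that writes see no acceleration, which matches the read-heavy workloads these caches target.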

3. Share Flash Within the Data Center

Many environments require shared or clustered architectures for applications that access a dataset from multiple servers. To meet the data demand for applications requiring shared storage, companies can house all their shared data on flash memory using specialized software.

In the past year or so, a trend toward software-defined storage has been gaining momentum. Software-defined storage solutions can allow flash in any server to appear as a high-performance shared storage target. This allows companies to maintain the shared or clustered functionality they need while also improving performance. And because the solution is software-based, companies deploying this option do not have to worry about vendor lock-in: they simply install the flash and software on the server hardware of their choice.

Speed Things Up

For years, enterprises have been treating flash-based solutions like disk, which limits the potential of flash. Imagine upgrading your Honda Civic to a Porsche 911 Turbo. Your Porsche will go much, much faster, but if it has a governor that caps its speed at 80 mph, you’re not going to enjoy the ride. Removing the governor will allow your Porsche to more than double that speed.

Disk protocols, RAID controllers, and embedded processors are like the governor on that Porsche: they keep flash from reaching top speed. No matter where you implement flash in your data center, overcoming these legacy protocols is essential to maximizing performance.

Today, new application programming interfaces are providing the building blocks to optimize applications to take advantage of flash capabilities. Over time, native access to flash is expected to become increasingly common as companies develop previously impossible features to differentiate their applications from the competition. Innovators in the open source community like Percona, which built an extension to InnoDB for Percona MySQL, are using these APIs to achieve significant performance improvements while also doubling the life of flash memory by cutting writes to the flash tier in half. Similarly, companies in vertical industries like financial services, whose profits hinge on millisecond computing advantages, are integrating more tightly with high-performance, high-capacity, persistent flash to gain an edge.

With these three deployment schemes, data centers can give all their applications a flash-powered performance boost. Not only will this help keep end users happy, it will allow businesses to speed ahead in the data center race for performance.

Josh Miner is director of product marketing at Salt Lake City-based Fusion-io, a storage company that offers a software-defined storage platform based on NAND flash memory.




