Intel, a company famous for its computer processing hardware, has jumped into the Hadoop software business, asserting that it wants to spur growth of big data analytics deployments in large data centers.
Intel announced plans on Feb. 26 to make the third version of its distribution of Hadoop and its accompanying Intel Manager for Apache Hadoop software available globally in the second quarter. The Manager software supports system configuration, monitoring and management, said Boyd Davis, vice president and general manager of Intel’s datacenter software division. The Intel version of Hadoop will go to the open source community and include performance optimizations for speedier queries and secure data access.
Why Intel and Hadoop? Davis said that even with ongoing innovation happening in the Hadoop open source community, there is still a perception that the technology to access huge datasets and perform computations on them suffers from lagging performance. He added that Intel wants to participate in the Hadoop community while offering enterprises the support of a big technology player that in recent years has been devoting more development resources to software-based technologies than hardware.
“Ideally as organizations store data in a Hadoop cluster, they want it to be a foundational layer and build a variety of applications on top of it,” he said. “Many organizations are looking for a large, stable company [to support them] so they can invest for the long term. We feel we’re a good bet for a variety of players.”
Intel joins a Hadoop pool already crowded with high-profile players—companies like IBM, Microsoft, Oracle, SAP and Teradata—as well as relative newcomers such as Cloudera and Hortonworks that have emerged from the Hadoop community. All of these companies are part of a growing field of database, analytics, application and system developers offering ways to tap into new technologies and build bridges to existing IT infrastructures.
At a news conference in San Francisco made available via webcast, Davis emphasized the collaborative nature of Intel’s entry, and included executives from technology partners Cisco, Red Hat, SAP and Savvis, a data center hosting company, to discuss the prospective benefits of Intel’s entry into the Hadoop market. Intel named an additional 30 companies as partners in its announcement.
An early Intel customer is NextBio of Santa Clara, Calif., a provider of analytics services to pharmaceutical and biotech companies for such projects as cancer drug discovery and molecule analysis. Satnam Alag, vice president and CTO of NextBio, said his company—located “at the intersection of genomics, big data and medicine”—saw both software and hardware performance gains on its computing resources using the Intel Hadoop distribution, including support for its HBase database, which holds 10 billion rows of data.
Intel’s involvement in the Apache Hadoop world started with customer demand from China, Davis said. Telecommunications providers China Mobile and China Unicom were among the first to seek help for their groaning systems. “We started this out based on customer demand, and as we built our business in China, it became apparent to us that this was indeed global,” he said, adding that Intel decided in 2012 to build a Hadoop business. Proof-of-concept customers like NextBio, publisher McGraw-Hill and the Texas Advanced Computing Center gave Intel the confidence to unveil its Hadoop plans now.
Davis said Intel is committed to keeping its version of Hadoop compliant with the open source community, and not going its own way. “We view this as a building block,” Davis said. “We will continue to focus on open source. … The goal is not to fork the code or have any dissension in the community. We want this to go forward.”
Michael Goldberg is editor of Data Informed. Email him at email@example.com.