Pivotal Aims to Disrupt Cloud, Big Data Analytics
Updated · May 10, 2013
A decade or so back, each enterprise application sat on its own dedicated server. The rule of one app to one server was strictly observed. That changed with the arrival of virtualization, as pioneered by VMware. All of a sudden a single physical server was transformed into 10 or more virtual servers, each running one or more apps.
This brought about a revolution in enterprise IT, and thoroughly disrupted the landscape. Microsoft, once the controller of all applications via its Windows operating system, was relegated to a junior role while VMware effectively became the OS for this brave new virtual world.
Now we have the advent of cloud computing, with applications running in a nebulous cloud. And a startup named Pivotal aims to become the OS for the cloud and the facilitator of a whole new class of applications.
In addition, Pivotal is building Big Data analytics into its new platform to allow application developers to add analytics to whatever cloud software they create. In other words, watch out business intelligence (BI) and analytics vendors.
Pivotal could be characterized as a platform or OS that sits on top of the cloud, Hadoop-based data stores and the applications housed in the data center, handling Big Data storage and analytics across all of them.
“Pivotal will facilitate storage and reasoning over Big Data to drive value from the cloud,” said Paul Maritz, Pivotal’s CEO and the former CEO of VMware. “It will also automate rapid application development.”
Not a Typical Startup
Many startups are formed by a couple of guys in a garage or a dorm room with almost no resources. Such was the case for the likes of HP, Facebook, Microsoft, Apple, Google and Dell. Pivotal, on the other hand, begins its existence with 1,259 employees — of which more than 500 are application engineers and developers.
It is also well funded. The company is owned mostly by storage giant EMC as well as VMware. But General Electric (GE) has also taken a 10 percent stake.
“GE put in $110 million along with EMC and VMware,” said Jeremy Burton, executive vice president, Product Operations and Marketing at EMC.
Why the interest from GE, a company that has made its name manufacturing power generation equipment, refrigerators and a host of other industrial and consumer goods? The company is very interested in the design of the next generation of enterprise applications, those that will drive what it calls the industrial Internet.
The concept is to harness Big Data to drive down costs by analyzing information in real time. For example, the sensors on a jet engine generate several terabytes of data every flight. GE wants to use this information during the journey to optimize fuel consumption, improve engine efficiency and drastically reduce maintenance costs.
Computing Shifts Result in Vendor Casualties
Speaking at this week’s EMC World, EMC Chairman and CEO Joe Tucci laid out the historical trends driving change. The mainframe, which had millions of users and ran thousands of applications, was the first computing platform. Client-server, the second platform, increased the number of users and apps considerably. This was the period when paper-based processes were automated by applications such as ERP, CRM and email.
But now the third platform, located in the cloud, is emerging. “The new mobile platform has billions of users and millions of apps,” said Tucci.
Each wave created a major shake-up in the vendor landscape. Out of thousands of mainframe vendors, for instance, IBM is about the only one to survive into the second platform. Now we see enterprise application giants like Oracle and SAP trying madly to transition into the third platform while being assailed by a host of nimble startups. The old guard has systems solidly based on the second platform, and the rise of unstructured data took them by surprise.
“Unstructured data is three times larger and is growing five times faster than structured, yet most organizations still base the business on decisions made from structured data only,” said Tucci, who foresees big casualties in the not-too-distant future. “Many well-known names will fade.”
Pivotal’s introduction was one of the key happenings at EMC World. Other EMC World highlights included the launch of a software-defined storage platform called ViPR and EMC’s confirmation that object storage will not completely replace file and block storage.
Fast Data, not Big Data
While the world is rushing to prepare for Big Data, Pivotal is looking at what it terms fast data.
“I was intrigued by the discussion around Pivotal and what it means for the development of new applications,” said Greg Schulz, an analyst at StorageIO Group. “Analyzing Big Data at rest is too slow. You have to be able to process it as it comes in, which is what is known as fast data.”
Maritz explained how today’s Big Data analytics applications would deal with the jet engine example mentioned earlier. They would first ingest the data, store it and then analyze it. But the coming approach is to analyze it in real time. Supporting that is the fact that 40 percent of data will come in from telemetry within 10 years, he said.
“As telemetry is added into all devices, it will ramp up the data volume by two orders of magnitude,” said Maritz. “There will be a need for applications that can perform real time analysis.”
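The contrast Maritz draws — ingest, store, then analyze versus analyzing each reading as it arrives — can be sketched in a few lines. This is a generic illustration of the streaming pattern, not Pivotal's platform or API; the fuel-flow sensor values are hypothetical.

```python
def batch_average(readings):
    """Batch approach: wait until all data is stored, then analyze."""
    return sum(readings) / len(readings)

class StreamingAverage:
    """Fast-data approach: update the statistic on every reading,
    so the answer is available mid-flight with no storage step."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def update(self, value):
        self.count += 1
        # Incremental (Welford-style) mean update per arriving reading.
        self.mean += (value - self.mean) / self.count
        return self.mean

# Simulated fuel-flow readings arriving one at a time.
sensor = StreamingAverage()
for reading in [810.0, 820.0, 815.0, 790.0]:
    current = sensor.update(reading)  # usable immediately

# Both approaches converge on the same answer; only the timing differs.
assert abs(current - batch_average([810.0, 820.0, 815.0, 790.0])) < 1e-9
```

The point is the shape of the computation, not the arithmetic: the streaming version never holds the full dataset, which is what makes real-time analysis of high-volume telemetry feasible.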
Is BI in Jeopardy?
Does the roll out of Pivotal mean the end of BI and analytics as we know it? Will these vendors be cast aside as their tools are supplied free of charge to any developer? Maritz thinks that scenario is unlikely. While Pivotal will provide basic analytics tools, there will continue to be a need for analytics and BI expertise.
“We can provide basic analytics as a service to make it easy for developers to include analytics capabilities within their apps,” said Maritz. “But BI vendors can also take advantage of the Pivotal platform while supplying more sophisticated tools.”
Pivotal, he said, will be designed to interact with legacy applications and infrastructure to enable them to operate better in the cloud. The goal is a vast in-memory set-up that will go beyond what currently exists in the likes of SAP HANA. This will help rapidly ingest the data coming in and query it.
Drew Robb is a freelance writer specializing in technology and engineering. Currently living in California, he is originally from Scotland, where he received a degree in geology and geography from the University of Strathclyde. He is the author of Server Disk Management in a Windows Environment (CRC Press).