Cisco Tidal Takes on Big Data
Updated · Oct 22, 2012
Cisco is expanding its Tidal Enterprise Scheduler technology in a new release that takes aim at Big Data workloads. The latest version of the software, Tidal Enterprise Scheduler 6.1, includes Hadoop capabilities, a self-service portal for scheduling jobs and mobile apps for accessing the workload scheduling system.
Cisco in 2009 acquired Tidal Software, a provider of application automation and application management solutions, for $105 million. While the Tidal Enterprise suite has been enhanced to work with Cisco’s UCS server gear, it works with other kit as well.
Wayne Greene, Director of Product Management & Business Development, Cloud & System Management Technology Group at Cisco, said that Tidal has a wide array of customers.
“In order to be a credible software vendor, you have to support everybody’s hardware,” Greene said. “While this new release has a strong UCS linkage, I imagine that we will have customers that will use our Hadoop adapters on other hardware platforms.”
While Greene said Tidal is agnostic in terms of working on server vendor platforms, he stressed that the Cisco UCS platform offers customers additional benefits for scalability and manageability of the server hardware.
“It’s a better together story, but we have many of our customers that haven’t moved to UCS that will continue to leverage Tidal software to run workloads,” he said.
Easing the Data Flow
In a classic Hadoop workload, data is first gathered and loaded, then analyzed with analytics tools.
“That is the job stream that we’re automating,” Greene said. “We have a GUI with drag-and-drop to connect jobs and also manage where the job is executed.”
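The job stream Greene describes is essentially a dependency chain: each step (gather, load, analyze) runs only after its prerequisites finish. As a minimal sketch — not Tidal's actual API, with hypothetical job names — that ordering can be computed with a standard topological sort:

```python
# Minimal sketch of scheduling a Hadoop-style job stream as a dependency
# chain. This is NOT Tidal's API; job names and structure are hypothetical.
from collections import deque

def topological_order(jobs, deps):
    """Return job names in an order that respects dependencies.

    jobs: iterable of job names.
    deps: dict mapping a job to the list of jobs it depends on.
    """
    indegree = {j: 0 for j in jobs}       # number of unmet prerequisites
    dependents = {j: [] for j in jobs}    # jobs unblocked when j finishes
    for job, prereqs in deps.items():
        for p in prereqs:
            indegree[job] += 1
            dependents[p].append(job)
    ready = deque(j for j in jobs if indegree[j] == 0)
    order = []
    while ready:
        j = ready.popleft()
        order.append(j)
        for d in dependents[j]:
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    return order

# Hypothetical three-step stream: gather raw data, load it, then analyze.
jobs = ["gather", "load", "analyze"]
deps = {"load": ["gather"], "analyze": ["load"]}
print(topological_order(jobs, deps))  # ['gather', 'load', 'analyze']
```

A drag-and-drop GUI like the one Greene mentions would build the `deps` graph visually; the scheduler then dispatches each job to a target host once its prerequisites complete.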
Cisco is trying to solve several Big Data workload challenges with the new Tidal update. According to Greene, many enterprises now realize that Big Data is going to become a mainstream enterprise service. As such, there is a need for enterprise workload scheduling and management, which is what the Tidal solution aims to deliver.
“We want to make sure people don’t create a silo of automation, and that they integrate with existing workload automation,” Greene said.
Another key challenge is scalability. As Big Data usage moves from single node deployment to a larger scale, automating cluster deployment becomes more complex.
“There are a lot of automation opportunities here,” Greene said.
Sean Michael is a writer who focuses on innovation and how science and technology intersect with industry, covering topics including WordPress, VMware, Salesforce, and application technology. TechCrunch Europas shortlisted him for its best tech journalist award. He enjoys finding stories that open people's eyes. He graduated from the University of California.