BlueData's latest version of EPIC allows the separation of compute and storage for Big Data applications.
Last week, big data software provider BlueData announced that its EPIC software platform now allows users to run their compute functions on public clouds such as Amazon Web Services (AWS) while keeping their data either on-premises or in a cloud environment. The software leverages Docker containers to provide flexible hybrid deployments that can separate compute and storage functions, creating what BlueData is calling the first Big-Data-as-a-Service (BDaaS) offering.
“One of the challenges for organizations thinking about deploying big data workloads in a public cloud is that their data may already be on-premises, and moving it all to the cloud can be challenging, time-consuming and expensive,” says Jason Schroedl, VP of marketing at BlueData. With the latest EPIC release, end users can run big data applications such as Hadoop and Spark on any infrastructure, whether on-premises, in a public cloud or in a hybrid deployment. Initially, the offering will be a direct availability program running on AWS, but over time the company plans to make the platform available on Microsoft Azure, Google Cloud and other public cloud services.
The user interface and experience remain constant whether customers are using BlueData on-premises or in the cloud, giving them the same security and the same control over how many resources are allocated to different groups for individual use cases.
EPIC allows users to leverage a mix of various frameworks, applications and tools with just a few clicks. The BDaaS announcement was made in conjunction with the summer 2016 release of EPIC, which provides one-click install for preconfigured Docker images within the BlueData App Workbench. In addition, the latest version provides a “bring your own app” capability that allows admins to create new images for their preferred applications and tools.
Schroedl says this new capability empowers end users and gives enterprises flexibility of choice. “If there's a preferred business intelligence tool, or visualization tool, or any other big data analytics tool that they know their data scientists and analysts want to use, they can upload that through the App Workbench capability, create their own Docker image for that new application, and then make that available within their own app store for those data scientists and analysts to spin up new clusters with just a few clicks,” he explained.
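To make the “bring your own app” idea concrete, a minimal Dockerfile for packaging a tool as a container image might look like the sketch below. This is a generic illustration of how any custom application gets containerized; the base image, tool name, paths and port are all hypothetical assumptions, not BlueData's actual App Workbench image format.

```dockerfile
# Generic sketch: package a hypothetical analytics/visualization tool
# as a Docker image. Names, paths and the port are illustrative only,
# not BlueData App Workbench conventions.
FROM ubuntu:22.04

# Install the runtime the (hypothetical) tool needs
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Copy the tool's files into the image
COPY ./my-viz-tool /opt/my-viz-tool

# Expose the tool's web UI port (illustrative)
EXPOSE 8080

# Start the tool when a cluster node launches the container
CMD ["python3", "/opt/my-viz-tool/server.py"]
```

An admin would then build the image with `docker build -t my-viz-tool .` and publish it wherever their platform expects custom images, after which end users could launch it like any other catalog entry.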
Anant Chintamaneni, BlueData’s VP of products, gave a use case with Hadoop, which is typically deployed on rigid, complex bare metal servers. “It's a distributed platform that's extremely complex, so there's no flexibility, and then you have to copy all the data onto the disks on the machines of that cluster. It's slow, it's time-consuming,” he says. “What a lot of people have to deal with today is thousands of clicks to get that Hadoop cluster up and running, and that takes weeks or months.”
BlueData’s mission, as Chintamaneni puts it, is to virtualize that infrastructure using Docker containers, which lets customers get a Hadoop cluster up and running in five clicks or less.
Ultimately, EPIC’s new capabilities are all about offering the enterprise choices, says Chintamaneni, not only between applications but also between cloud environments. “These enterprises want to have control of which cloud they go to, for what use case and when, and they want to abstract themselves away from the actual individual services that each cloud offers,” he says. “It's all about migrating that workload from on-premises to the public cloud in a thoughtful manner and having the choice and flexibility on the cloud itself.”