Apervi announced that its flagship product, Apervi Conflux, a full-featured big data integration and orchestration platform, now offers strong streaming support and solution accelerators that address key enterprise use cases.
Most organizations are dissatisfied with their existing big data integration technologies, citing slow performance and legacy systems that cannot keep pace. Users also struggle to keep up with the ever-growing Hadoop ecosystem. Apervi Conflux is specifically designed to accelerate the development of data lakes, ETL/ELT/CEP workflows, and IoT applications through a drag-and-drop visual development studio.
“Apervi Conflux is the industry’s first web-based big data integration platform providing full support for batch, streaming, and micro-batch data pipelines,” said Uday Sagi, SVP Engineering at Apervi.
“The platform ensures low cost of ownership by fully leveraging a customer’s existing Hadoop investment, whether native Apache or from any major vendor such as Hortonworks or Cloudera,” said Siddu Tummala, CEO of Apervi.
Apervi Conflux eliminates the need to write code in technologies such as MapReduce, Pig, or Scala. Along with pre-built connectors for databases, messaging systems, CRM systems, and sensor and social data, users can deploy custom connectors through Java-based API extensions. Users can reuse or redesign workflows and run them on different stacks (for example, the same streaming workflow can run on Storm or Spark, and a batch workflow on any supported Hadoop distribution).
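The announcement does not document the extension API itself. As a purely hypothetical illustration of the pattern, a Java-based custom connector might implement a source interface like the sketch below; every interface, class, and method name here is an assumption for illustration, not Apervi's actual API.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical connector SPI (not Apervi's real interface): a source
// that any workflow step could pull records from.
interface SourceConnector {
    List<Map<String, String>> readRecords();
}

// Illustrative custom connector that parses an in-memory CSV payload.
// A real connector would wrap a database, message queue, or sensor feed.
public class CsvSourceConnector implements SourceConnector {
    private final String payload;

    public CsvSourceConnector(String payload) {
        this.payload = payload;
    }

    @Override
    public List<Map<String, String>> readRecords() {
        String[] lines = payload.split("\n");
        String[] headers = lines[0].split(",");
        List<Map<String, String>> records = new ArrayList<>();
        for (int i = 1; i < lines.length; i++) {
            String[] fields = lines[i].split(",");
            Map<String, String> record = new LinkedHashMap<>();
            for (int j = 0; j < headers.length; j++) {
                record.put(headers[j], fields[j]);
            }
            records.add(record);
        }
        return records;
    }

    public static void main(String[] args) {
        SourceConnector c = new CsvSourceConnector("id,name\n1,alice\n2,bob");
        System.out.println(c.readRecords());
    }
}
```

The appeal of such an extension point is that once a connector satisfies the shared interface, the platform's workflows can consume it like any pre-built connector.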
“Using Apervi Conflux, in less than 2 months and with a 5X reduction in implementation cost, we built over 25 data workflows to update our campaign management solution to target mobile subscribers in real time,” said a telecom solutions provider. “In a few weeks, Apervi built a solution with Storm, using social and location data to incentivize customers based on sentiment,” said a retail customer.
Strong Streaming Support – The platform’s differentiated feature set for Storm-based processing includes event- and time-based windowing, in-memory and database caching, real-time lookups, multi-stream processing, and event injection. Users can also design high-throughput streaming data flows in micro-batch mode and deploy them on Spark Streaming.
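Time-based windowing, one of the streaming features listed above, groups events into fixed intervals before aggregation. The release does not show how Conflux implements it, so the following plain-Java sketch of a hypothetical tumbling window is only meant to illustrate the concept; the class and method names are assumptions, not Conflux or Storm APIs.

```java
import java.util.Collections;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

// Hypothetical tumbling-window aggregator: each event carries a
// timestamp and is assigned to a fixed-size, non-overlapping bucket.
public class TumblingWindow {
    private final long windowMillis;
    // window start time (ms) -> count of events in that window
    private final SortedMap<Long, Integer> counts = new TreeMap<>();

    public TumblingWindow(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Assign the event to its window by truncating the timestamp
    // down to the nearest window boundary, then bump that count.
    public void onEvent(long timestampMillis) {
        long bucket = (timestampMillis / windowMillis) * windowMillis;
        counts.merge(bucket, 1, Integer::sum);
    }

    public Map<Long, Integer> windowCounts() {
        return Collections.unmodifiableSortedMap(counts);
    }

    public static void main(String[] args) {
        TumblingWindow w = new TumblingWindow(1000); // 1-second windows
        w.onEvent(100);   // falls in window [0, 1000)
        w.onEvent(900);   // falls in window [0, 1000)
        w.onEvent(1500);  // falls in window [1000, 2000)
        System.out.println(w.windowCounts()); // {0=2, 1000=1}
    }
}
```

Event-based windowing works the same way except the bucket closes after a fixed number of events rather than a fixed span of time; streaming engines such as Storm and Spark Streaming expose both styles.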
Business Use Case Accelerators – The platform offers ready-to-deploy solutions to help enterprises jumpstart their big data initiatives. They include: EDW offloading to Hadoop with data governance and built-in support for incremental updates, retries, and late-arriving data; OLTP replication to Hadoop for real-time operational reporting, with offline storage to enable deep analytics; and real-time log analytics, including aggregation in Hadoop with support for analysis and alerting.
Apervi, Inc. is a data engineering company. Apervi’s platform and expertise help reduce operational costs, drive faster results from data discovery to decision-making, accelerate development of data-based products across verticals, and manage integrations effectively through monitoring and intelligent insights. For more information, visit http://www.apervi.com.