LogiCrunch

LogiCrunch is a sophisticated product built from scratch primarily to process real-time streaming events from hospitals. It currently receives and processes events in a variety of shapes and forms. The processing pipeline is metadata-driven: tenant-specific patient messages are correlated and features are generated dynamically before pathway predictions are made and instant notifications are sent.
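
The sketch below is a minimal, hypothetical rendering of that stage chain; the stage names, payload shape and placeholder scoring rule are illustrative stand-ins, not the product's API.

```python
# Minimal, hypothetical sketch of the stage chain described above; stage names,
# payload shape and the placeholder scoring rule are illustrative, not the product's API.

def normalize(raw: dict, tenant_id: str) -> dict:
    # tenant-specific payload -> canonical form
    return {"tenant": tenant_id, "patient_id": raw.get("pid"), "vitals": raw.get("vitals", {})}

def correlate(msg: dict) -> dict:
    # the real pipeline correlates messages per patient/episode here
    return msg

def derive_features(episode: dict) -> dict:
    # dynamic feature generation from the correlated episode
    return {"heart_rate": episode["vitals"].get("hr", 0)}

def predict(features: dict) -> float:
    # pathway prediction; a trivial rule stands in for the real model
    return 1.0 if features["heart_rate"] > 120 else 0.0

def notify(score: float) -> None:
    # instant notification on elevated risk
    if score > 0.5:
        print("alert: pathway risk score", score)

def process_event(raw: dict, tenant_id: str) -> None:
    notify(predict(derive_features(correlate(normalize(raw, tenant_id)))))

process_event({"pid": "12345", "vitals": {"hr": 132}}, tenant_id="tenant-a")
```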

Interoperability - The platform handles and transforms HL7 messages across versions and accommodates tenant-specific semantics. All tenant-specific encoded messages are transformed into a canonical form that is received and processed by all downstream processing components.
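
As an illustration of the tenant-to-canonical transformation, the sketch below hand-parses a pipe-delimited HL7 v2 ADT message and maps a few fields into a canonical dictionary; the segment and field positions follow standard HL7 v2 layout, but the canonical schema itself is an assumption.

```python
# Hypothetical sketch: hand-parse an HL7 v2 message and map it to a canonical form.
# The canonical field names are assumptions; segment/field positions follow HL7 v2.

def parse_hl7(raw: str) -> dict:
    segments = {}
    for seg in raw.strip().split("\r"):          # HL7 v2 segments are CR-delimited
        fields = seg.split("|")
        segments.setdefault(fields[0], fields)
    return segments

def to_canonical(raw: str, tenant_id: str) -> dict:
    seg = parse_hl7(raw)
    msh, pid = seg["MSH"], seg["PID"]
    return {
        "tenant": tenant_id,
        "message_type": msh[8],                  # e.g. ADT^A01
        "hl7_version": msh[11],                  # e.g. 2.3 or 2.5
        "patient_id": pid[3],
        "patient_name": pid[5],
    }

sample = "MSH|^~\\&|HIS|HOSP|LC|LC|202401010101||ADT^A01|123|P|2.5\rPID|1||4567||DOE^JOHN"
print(to_canonical(sample, tenant_id="tenant-a"))
```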

Multi-Tenancy - The platform is broadly divided into two sets of processing components. One set sits closer to the source feeds and receives and processes device, ADT, Lab and other feeds that involve tenant-specific profiling and processes, while the tenant-agnostic components receive and process the transformed canonical payloads.
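
One way to express that split (with hypothetical names, not the product's API): tenant-specific adapters are registered per feed type and emit canonical payloads, which a tenant-agnostic handler consumes without knowing anything about the tenant's encoding.

```python
# Hypothetical sketch of the two-tier split: tenant-specific adapters close to the
# source feeds emit canonical payloads; downstream handlers stay tenant-agnostic.

from typing import Callable, Dict, Tuple

ADAPTERS: Dict[Tuple[str, str], Callable[[dict], dict]] = {}

def register(tenant: str, feed: str):
    def wrap(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        ADAPTERS[(tenant, feed)] = fn
        return fn
    return wrap

@register("tenant-a", "lab")
def tenant_a_lab(raw: dict) -> dict:
    # tenant-specific profiling/decoding happens here
    return {"feed": "lab", "patient_id": raw["PatientID"], "result": raw["Rslt"]}

def handle_canonical(payload: dict) -> None:
    # tenant-agnostic processing: works only with canonical fields
    print("processing", payload["feed"], "for patient", payload["patient_id"])

raw_event = {"PatientID": "4567", "Rslt": "lactate=3.1"}
handle_canonical(ADAPTERS[("tenant-a", "lab")](raw_event))
```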

RealTime-Streaming - The platform listens to clustered, tenant-specific receptors. Events for high-acuity patients are received in real time at sub-second latencies, while events for low-acuity patients arrive at comparatively longer intervals. The platform receives, processes and predicts on continuously streaming data in real time, and relies heavily on clustered messaging brokers for handshakes.
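
The document does not name the broker; the sketch below assumes an Apache Kafka cluster (via the kafka-python client), with per-tenant topics standing in for the tenant-specific receptors and an explicit commit standing in for the broker handshake.

```python
# Hypothetical sketch, assuming Kafka (kafka-python) as the clustered messaging
# broker and per-tenant topics as the tenant-specific receptors; names are illustrative.

import json
from kafka import KafkaConsumer

def process_event(event: dict) -> None:
    # downstream CEP / prediction stages would be invoked here
    print("received", event.get("type"), "for patient", event.get("patient_id"))

consumer = KafkaConsumer(
    "tenant-a.device", "tenant-a.adt", "tenant-a.lab",   # tenant-specific topics
    bootstrap_servers=["broker1:9092", "broker2:9092"],  # clustered brokers
    group_id="logicrunch-pipeline",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    enable_auto_commit=False,                            # commit only after processing
)

for record in consumer:
    process_event(record.value)
    consumer.commit()                                    # explicit handshake with the broker
```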

RealTime-Prediction - Cardiac and sepsis pathway predictive models are built offline on historical data and injected into the streaming pipeline. Device, ADT and Lab events received in real time are passed through CEP processes that transform, derive features and predict in real time. The models are segmented broadly by high- and low-acuity care areas.
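
A minimal sketch of injecting offline-trained models into the streaming path, assuming scikit-learn classifiers serialized with joblib; the model paths, feature names and acuity routing rule are all assumptions.

```python
# Hypothetical sketch: offline-trained models (one per acuity segment) loaded into
# the streaming path and scored per event. joblib/scikit-learn, the model paths and
# the feature names are assumptions; the product's CEP engine may differ.

import joblib

MODELS = {
    "high_acuity": joblib.load("models/sepsis_high_acuity.joblib"),
    "low_acuity": joblib.load("models/sepsis_low_acuity.joblib"),
}

def derive_features(event: dict) -> list:
    # features derived from device/ADT/lab events; names are illustrative
    return [event.get("heart_rate", 0.0), event.get("temp_c", 0.0), event.get("lactate", 0.0)]

def score(event: dict) -> float:
    # route to the high- or low-acuity model, then score the single event
    model = MODELS["high_acuity" if event.get("unit") == "ICU" else "low_acuity"]
    return float(model.predict_proba([derive_features(event)])[0][1])

print(score({"unit": "ICU", "heart_rate": 118, "temp_c": 38.9, "lactate": 3.4}))
```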

Dynamic Metadata Generation - The platform generates and stores tenant-specific metadata as it is discovered dynamically during ingestion, while enriching and transforming tenant-specific payloads into canonical ones. The metadata is also loaded into a clustered cache for reads by the pipeline processes.
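
A sketch of registering discovered metadata and publishing it to the cache; the document names only a clustered cache, so Redis (via redis-py) is an assumption, and the key layout is illustrative.

```python
# Hypothetical sketch: tenant-specific metadata discovered during ingestion is
# pushed to a cache for pipeline reads. Redis is an assumption (the platform only
# specifies a clustered cache); the key layout is illustrative.

import json
import redis

cache = redis.Redis(host="cache-node1", port=6379)

def register_metadata(tenant: str, feed: str, field_map: dict) -> None:
    key = f"metadata:{tenant}:{feed}"
    cache.set(key, json.dumps(field_map))        # cached for reads by pipeline processes
    # a durable copy would also be written to the metadata store here

def lookup_metadata(tenant: str, feed: str) -> dict:
    raw = cache.get(f"metadata:{tenant}:{feed}")
    return json.loads(raw) if raw else {}

register_metadata("tenant-a", "lab", {"PatientID": "patient_id", "Rslt": "result"})
print(lookup_metadata("tenant-a", "lab"))
```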

Data Governance - The pipeline stores raw data, exactly as received, in tenant-specific data lakes. It also maintains use-case-specific data ponds per tenant, inclusive of model features and predictions. Apache Atlas, Ranger and Solr are configured over the data fabric to support data access control, instrumentation, search and retrieval scenarios. The entire data fabric layer of the product is metadata-driven.
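
A sketch of the tenant-specific data-lake and data-pond layout; the directory structure and file formats are assumptions, and the Atlas/Ranger/Solr configuration itself lives outside application code.

```python
# Hypothetical sketch of the tenant-specific storage layout: raw payloads land in a
# per-tenant data lake, curated features/predictions land in per-tenant data ponds.
# Paths and file layout are assumptions; Atlas/Ranger/Solr sit over this fabric.

import json
from datetime import datetime, timezone
from pathlib import Path

LAKE_ROOT = Path("data/lake")
POND_ROOT = Path("data/ponds")

def store_raw(tenant: str, feed: str, payload: bytes) -> Path:
    day = datetime.now(timezone.utc).strftime("%Y/%m/%d")
    path = LAKE_ROOT / tenant / feed / day
    path.mkdir(parents=True, exist_ok=True)
    out = path / f"{datetime.now(timezone.utc).timestamp()}.raw"
    out.write_bytes(payload)                      # raw data stored exactly as received
    return out

def store_prediction(tenant: str, use_case: str, record: dict) -> Path:
    path = POND_ROOT / tenant / use_case
    path.mkdir(parents=True, exist_ok=True)
    out = path / "predictions.jsonl"
    with out.open("a") as f:
        f.write(json.dumps(record) + "\n")        # features + prediction per event
    return out

store_raw("tenant-a", "lab", b"PID|1||4567||DOE^JOHN")
store_prediction("tenant-a", "sepsis", {"patient_id": "4567", "risk": 0.82})
```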

Auto Scale - LogiCrunch is pluggable, extensible and scalable. It has several moving parts: the streaming engine, messaging brokers, CEP engine, in-memory cache, and the data fabric in HBase and PostgreSQL. All of these components are clustered and hosted on isolated and/or shared nodes. Each pipeline component can be scaled dynamically based on resource usage.
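
A simplified sketch of the scale-on-resource-usage idea; the thresholds, metric source and scaling hook are hypothetical stand-ins for whatever orchestration the product actually uses.

```python
# Hypothetical sketch of per-component scaling on resource usage; thresholds, the
# metric source and the scaling hook stand in for the product's orchestration
# (each clustered component scales independently).

THRESHOLDS = {"streaming-engine": 0.75, "cep-engine": 0.70, "cache": 0.80}

def desired_replicas(component: str, current: int, cpu_util: float) -> int:
    limit = THRESHOLDS.get(component, 0.75)
    if cpu_util > limit:
        return current + 1              # scale out under pressure
    if cpu_util < limit / 2 and current > 1:
        return current - 1              # scale in when idle
    return current

print(desired_replicas("cep-engine", current=3, cpu_util=0.82))   # -> 4
```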