Semi- and Unstructured Data
Preparation, Processing, Protection, and Presentation
Data comes in diverse forms from diverse sources, including an explosion in machine-generated data. Semi-structured data formats with flexible schemas, such as JSON and XML, are now standard formats for sending and storing data. Conventional DBs and DWs built on fixed schemas, however, cannot readily store or process such data. Thus it must either be stored and carried around in its raw form (compromising performance), or transformed before it is loaded (losing information while adding complexity).
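The information-loss side of that trade-off is easy to see in a generic sketch. The Python snippet below (an illustration only, not IRI code) flattens a semi-structured JSON record into a fixed set of columns before loading; the record and field names are hypothetical:

```python
import json

# A semi-structured record: the schema can vary from record to record.
record = json.loads("""
{
  "id": 101,
  "name": "Acme",
  "contacts": [
    {"type": "email", "value": "info@acme.example"},
    {"type": "phone", "value": "555-0100"}
  ]
}
""")

# Flattening into a fixed schema (id, name, email) keeps only the
# columns the target table expects...
flat = {
    "id": record["id"],
    "name": record["name"],
    "email": next(
        (c["value"] for c in record.get("contacts", [])
         if c.get("type") == "email"),
        None,
    ),
}

# ...so any fields outside that schema (here, the phone contact)
# are silently dropped -- the "losing information" trade-off.
print(flat)  # {'id': 101, 'name': 'Acme', 'email': 'info@acme.example'}
```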
IRI software is designed to process semi-structured data without these trade-offs. The CoSort data manipulation engine in the IRI Voracity big data management platform now handles semi-structured formats natively, so you can process this data without converting its format or imposing a new external schema.
In this direct mode, Voracity users can leverage the "unparalleled parallel performance" of the CoSort engine without sacrificing functionality or flexibility. That's why, in addition to big structured data, IRI software can process certain classes of static and streaming semi-structured data, for example:
- ASN.1 call detail record (CDR) files
- IDMS, IMS, and other legacy sources
- MF-ISAM and Vision index files
- MongoDB (BSON), Cassandra, JSON, and XML
- NoSQL, Hive, and cloud / SaaS sources (e.g., AWS S3)
- IoT and message queues via MQTT, Kafka, MQSeries, etc.
You can now also search, extract, and structure data from unstructured text file sources in the IRI Workbench GUI -- and then perform any downstream operation on the flat-file results in that same environment. That means with Voracity, you're getting something of a textual ETL tool as well. In addition, it is possible to find PII in unstructured data files and mask it in place, or in new targets with the same file names.
Specifically, using the Dark Data Discovery wizard in IRI Workbench, users of the IRI Voracity data management platform or the IRI DarkShield data masking product can simultaneously find, mask (replace or delete), and extract -- and then further process -- strings matched by patterns, explicit or lookup-table values, machine-learned NLP models, path filters, or defined bounding-box areas. Supported sources include: email repositories; NoSQL DBs like Cassandra and MongoDB; .pdf, .rtf, and MS Office (.doc/x, .ppt/x, .xls/x) documents; .txt, .html, .hl7 / x12, JSON, XML, and other unstructured text and log files -- as well as image files and faces -- all at once.
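The pattern-matching side of that workflow can be sketched generically. The Python snippet below is an illustration of combined find-and-mask over text (it is not DarkShield's implementation, and the regex patterns and `find_and_mask` helper are hypothetical examples):

```python
import re

# Illustrative search patterns only -- a real deployment would use
# validated patterns, lookup tables, or NLP models as described above.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_and_mask(text, mask="****"):
    """Return (masked_text, findings); each finding is (label, matched value)."""
    findings = []
    for label, pattern in PATTERNS.items():
        # Record each match for extraction/auditing...
        for match in pattern.finditer(text):
            findings.append((label, match.group()))
        # ...then replace it in the text itself.
        text = pattern.sub(mask, text)
    return text, findings

masked, found = find_and_mask("Contact jo@ex.org, SSN 123-45-6789.")
print(masked)  # Contact ****, SSN ****.
print(found)   # [('EMAIL', 'jo@ex.org'), ('SSN', '123-45-6789')]
```

Capturing the findings alongside the masked output mirrors the dual goal described above: the data is protected in place while the matched values remain available for further processing or auditing.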
And from that same Eclipse GUI, IRI software users can operate on the flat-file extracts and their metadata for:
- Data integration and transformation
- Data migration and replication
- Data masking (encryption, de-ID, redaction, etc.)
- DB load and query optimization
- Reporting, or wrangled hand-offs to KNIME and other BI/analytics platforms
- Population of CRM, DB, ETL, and external apps
- Forensic auditing of the searched or masked data via value extracts and/or source file attributes
The Bottom Line
The total trove of big data -- whether analyzed in batch or in real-time feeds -- is of great interest to business and government service providers. IRI software -- and its Voracity total data management platform in particular -- is the fastest, easiest, and most affordable way to blend and prepare (package, protect, and provision) structured, semi-structured, and unstructured data sources within your existing IT infrastructure.