


BDM Data Control

Empowering your business owners to control their data

What is Bluemetrix Data Manager Control?

BDM Control is a suite of modules that automates ingestion, governance, compliance, schema evolution, masking, data completeness and transformations. With a fully interactive, easy-to-use interface, BDM Control allows non-technical users to create dynamic workflows, adding value to your business as it grows.

Key Benefits

Immediate commercial payback after a successful deployment

Ingestion of multiple diverse data sources

BDM Control ingests structured and unstructured data of diverse types (data warehouses, tables, files, streaming, IoT) from all major data sources (DB2, Oracle, Teradata, JSON, CSV, Kafka, Spark Structured Streaming and more). This allows new data sources to be deployed in minutes rather than days or weeks.
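For illustration, the sketch below shows the kind of PySpark code such an ingest would otherwise require by hand: a batch load from a relational source over JDBC and a streaming load from Kafka. All connection details, table names, topics and paths are placeholders, not BDM configuration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingest-example").getOrCreate()

# Batch ingest from a relational source (e.g. Oracle) over JDBC.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")  # placeholder
    .option("dbtable", "SALES.ORDERS")                      # placeholder
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)
orders.write.mode("append").parquet("/datalake/raw/orders")

# Streaming ingest from Kafka via Spark Structured Streaming.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")       # placeholder
    .option("subscribe", "clickstream")                     # placeholder
    .load()
)
query = (
    events.writeStream.format("parquet")
    .option("path", "/datalake/raw/clickstream")
    .option("checkpointLocation", "/datalake/_chk/clickstream")
    .start()
)
```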

Reduce scripting time and costs 

By providing clean, factored data, BDM Control can increase your data scientists' productivity by 2-10x, and hence speed up the delivery of your Big Data solution to your end customers.

Governance is automated and applied by default  

There is no need to write any governance code or rules: the governance of all pipeline activities is created and recorded by default in Atlas. Tags can be added easily at table or column level, and these in turn can be used to enforce and control access to the data.
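As a rough illustration of the underlying mechanism (not BDM's own code), a tag can be attached to a column entity through the Apache Atlas v2 REST API; tag-based access policies, for example in Apache Ranger, can then key off that classification. The host, entity GUID and credentials below are placeholders.

```python
import requests

ATLAS_URL = "http://atlas-host:21000"                    # placeholder
COLUMN_GUID = "00000000-0000-0000-0000-000000000000"     # placeholder entity GUID

# Attach a "PII" classification (tag) to the column entity.
resp = requests.post(
    f"{ATLAS_URL}/api/atlas/v2/entity/guid/{COLUMN_GUID}/classifications",
    json=[{"typeName": "PII"}],
    auth=("admin", "admin"),                             # placeholder credentials
)
resp.raise_for_status()
```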

Data Security & Data Access 

All data stored on your data lake can be masked and anonymized in memory as it is moved onto the lake. This ensures there are no unmasked copies of the data lying around the lake, and that any sensitive elements of the data are secured and protected from the moment they land on the lake.
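A minimal PySpark sketch of the pattern, assuming a customer table with an `ssn` column: the sensitive value is hashed in memory before the write, so no unmasked copy ever lands on the lake. This is illustrative only, not BDM's implementation.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("mask-in-flight").getOrCreate()

customers = (
    spark.read.format("jdbc")
    .option("url", "jdbc:db2://dbhost:50000/SAMPLE")  # placeholder
    .option("dbtable", "CRM.CUSTOMERS")               # placeholder
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# The raw SSN is replaced in memory; only the masked value is written.
masked = customers.withColumn("ssn", F.sha2(F.col("ssn").cast("string"), 256))
masked.write.mode("append").parquet("/datalake/secure/customers")
```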

BDM Control Functionality

Security Masking
  • Over a dozen masking algorithms are available by default 

  • Custom masking algorithms can be developed and deployed as required (a sketch follows this list). 

  • Delivers a consistent, scalable and data-centric approach 

  • This allows you to achieve compliance with GDPR, PCI, HIPAA and other data privacy regulations 
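To illustrate the custom-algorithm point above, the sketch below registers a hand-written masking routine as a Spark UDF that partially redacts an email address. The logic and column names are purely illustrative, not a BDM-supplied algorithm.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("custom-mask").getOrCreate()

def mask_email(email):
    # Keep the first character and the domain; redact the rest of the local part.
    if email is None or "@" not in email:
        return None
    local, domain = email.split("@", 1)
    return local[0] + "***@" + domain

mask_email_udf = F.udf(mask_email, StringType())

customers = spark.createDataFrame([("alice@example.com",)], ["email"])
customers.withColumn("email", mask_email_udf("email")).show()
```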

Validation Completeness
  • Guarantees the accuracy of all data as it is moved, ensuring there is no corruption in the data transmission/movement process 

  • Checksum, Row Count and Null Value checks are available by default (see the sketch after this list) 

  • Custom Completeness tests can be developed and deployed as required 

  • Carries out consistency and integrity checks ensuring your data was not corrupted in transit and that the value of the data being moved is as expected 
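The sketch below illustrates the default checks named above, comparing row count, null key count and a simple CRC-based checksum between the staged source and the landed target. Paths and column names are assumptions for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("completeness-check").getOrCreate()

source = spark.read.parquet("/staging/orders")      # placeholder paths
target = spark.read.parquet("/datalake/raw/orders")

def profile(df, key_col="order_id"):
    """Row count, null-key count and an order-independent checksum."""
    return df.agg(
        F.count("*").alias("row_count"),
        F.sum(F.col(key_col).isNull().cast("int")).alias("null_keys"),
        F.sum(F.crc32(F.col(key_col).cast("string"))).alias("checksum"),
    ).first()

src, tgt = profile(source), profile(target)
assert src == tgt, f"Completeness check failed: source={src} target={tgt}"
```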

Transformations
  • All data maps/flows can be created using a simple interface, with a dramatic reduction in code developed and deployed

  • Significantly less Spark, SQL or Hive knowledge is required to transform the data

  • Data is cleansed, formatted and factored (see the sketch after this list) 

  • Accelerates, simplifies and consolidates the deployment process
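As an example, the hand-coded equivalent of a simple cleanse-and-format flow might look like the PySpark below; column names and paths are placeholders, and the point of BDM Control is that the interface generates such logic without the user writing it.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("transform-example").getOrCreate()

raw = spark.read.parquet("/datalake/raw/orders")    # placeholder path

cleansed = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("country", F.upper(F.trim("country")))
       .filter(F.col("amount").isNotNull())
)

cleansed.write.mode("overwrite").parquet("/datalake/curated/orders")
```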

Ingestion
  • Automates the ingestion of data from all your data sources

  • No need to develop any ingest code

  • Customize all templates on a data source basis

  • Deploy new data sources in hours

  • Supports all standard ingest platforms 

Governance
  • Compliance and governance are built in, offering transparency across the data environment

  • Capabilities offered include auditing, change tracking, data snapshots and lineage 

  • No requirement for the end user to possess any knowledge of Atlas or Navigator

Schema Evolution
  • Maintains a central record of the schemas of all data sources that are feeding into the Data Lake

  • A versioning system records all changes to these schemas, ensuring corresponding schemas are kept consistent (a sketch of such a drift check follows this list)
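As a sketch of what such a central record enables, a drift check might compare the last recorded schema against an incoming batch. The JSON file here is only a stand-in for the central store, and all paths and field layouts are assumptions.

```python
import json
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-evolution").getOrCreate()

incoming = spark.read.parquet("/staging/orders")            # placeholder path

# Last recorded schema version, stored as {"fields": [{"name": ..., "type": ...}, ...]}.
with open("/metadata/orders_schema_v3.json") as fh:         # placeholder store
    recorded = {f["name"]: f["type"] for f in json.load(fh)["fields"]}

current = {f.name: f.dataType.simpleString() for f in incoming.schema.fields}

added   = set(current) - set(recorded)
removed = set(recorded) - set(current)
changed = {c for c in set(current) & set(recorded) if current[c] != recorded[c]}

if added or removed or changed:
    print(f"Schema drift detected: added={added} removed={removed} changed={changed}")
    # Here a new schema version would be written to the central record.
```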

What data do you need to control?