There can be additional levels, depending on the condition of the raw data or on whether data enhancement is needed along the way. The levels work like this:
Level 1: This mirrors the structure of the Source data, but with friendly field names and simple data typing. So if we had a text field named "annual_salary", we would typically rename it "AnnualSalary" and import it as a numeric field. Some records might fail if they contain values such as "25k pa" or "30k and a car"; these become data cleanse issues.
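As a rough illustration of this Level 1 step, here is a minimal Python sketch. The field names and the `to_number` helper are hypothetical, not our actual tooling: it renames the field, attempts the numeric conversion, and routes unparseable values to a cleanse list.

```python
# Hypothetical Level 1 load: rename "annual_salary" to "AnnualSalary"
# and type it as a number, routing bad values to a cleanse list.

def to_number(raw):
    """Return the value as a float, or None if it cannot be parsed."""
    try:
        return float(str(raw).replace(",", "").strip())
    except ValueError:
        return None

source_rows = [
    {"annual_salary": "42000"},
    {"annual_salary": "25k pa"},         # will fail typing
    {"annual_salary": "30k and a car"},  # will fail typing
]

level1_rows, cleanse_issues = [], []
for row in source_rows:
    salary = to_number(row["annual_salary"])
    if salary is None:
        cleanse_issues.append({"field": "annual_salary", "value": row["annual_salary"]})
    level1_rows.append({"AnnualSalary": salary})

print(level1_rows)     # [{'AnnualSalary': 42000.0}, {'AnnualSalary': None}, ...]
print(cleanse_issues)  # the values that become data cleanse issues
```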
Level 2: This mirrors the structure of the Target database. We work with fully normalised databases, so there is no data duplication. For example, the Source might hold a company and a person in a single record; we would split this into a separate company record with a mapping to the person. This can surface issues in the source data, such as one person with company = "Accenture" and another with "Acenture". Data exceptions are reported back so the Business can decide what to do.
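Below is a small, hypothetical sketch of this normalisation step in Python. It splits combined person/company records into a company table plus a mapping, and uses a crude string-similarity check (not our actual matching logic) to flag suspected duplicates like the "Acenture" example.

```python
# A hypothetical sketch of the Level 2 split: person/company records are
# normalised into a company table plus a person-to-company mapping, and a
# crude similarity check flags suspected duplicate company names.
import difflib

level1_rows = [
    {"PersonName": "A. Jones", "CompanyName": "Accenture"},
    {"PersonName": "B. Smith", "CompanyName": "Acenture"},  # likely a typo
]

companies = {}       # normalised table: company name -> company id
person_company = []  # mapping table: (person, company id)
exceptions = []      # data exceptions reported back to the Business

for row in level1_rows:
    name = row["CompanyName"].strip()
    companies.setdefault(name, len(companies) + 1)
    person_company.append((row["PersonName"], companies[name]))

names = list(companies)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if difflib.SequenceMatcher(None, a, b).ratio() > 0.85:
            exceptions.append(f"Possible duplicate companies: {a!r} vs {b!r}")

print(exceptions)  # reported back so the Business can decide
```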
Target: This has the data structure of the final application. Records with inconsistent structure will have failed processing in Level 2, such as insurance policies with no policy holder, Direct Debit Instructions with no bank, or Claims with no claimant. These records are excluded from the final export.
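A minimal sketch of this Target-level gate, assuming a simple in-memory representation (the table and field names here are invented for illustration):

```python
# Hypothetical Target-level gate: exclude structurally incomplete records,
# e.g. insurance policies with no policy holder.

policies = [
    {"PolicyId": 1, "HolderId": 101},
    {"PolicyId": 2, "HolderId": None},  # no policy holder: excluded
]
holders = {101}  # ids of policy holders that exist at this level

exportable, excluded = [], []
for p in policies:
    if p["HolderId"] in holders:
        exportable.append(p)
    else:
        excluded.append(p)  # reported for review, not silently dropped

print(f"Exporting {len(exportable)} policies; {len(excluded)} excluded")
```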
Just as important is the data dictionary. This audits the path of each field of source data through to the target, which is vital when testing identifies records that don't migrate correctly.
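To make this concrete, here is one hypothetical shape a dictionary entry might take; in practice it would live in a table or spreadsheet rather than code, and all the names below are invented:

```python
# A hypothetical data dictionary entry, tracing one field from source to target.

data_dictionary = [
    {
        "source": "payroll.annual_salary",  # raw extract field
        "level1": "Staging.AnnualSalary",   # friendly name, numeric type
        "level2": "Person.AnnualSalary",    # normalised structure
        "target": "Employee.Salary",        # final application field
        "notes":  "Values like '25k pa' raised as cleanse issues",
    },
]

def trace(target_field):
    """Find where a mis-migrated target field came from."""
    return [e for e in data_dictionary if e["target"] == target_field]

print(trace("Employee.Salary"))
```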
The tools that move the data from one level to the next are automated, so we can re-run the process as many times as needed.
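A minimal sketch of why re-running is safe, assuming each level is rebuilt from scratch on every run (the step functions here are placeholders, not our actual tools):

```python
# Hypothetical re-runnable pipeline: each level is rebuilt from scratch on
# every run, so upstream data fixes flow through automatically.

def build_level1(source_rows):
    return [dict(r) for r in source_rows]  # rename + type in practice

def build_level2(level1_rows):
    return [dict(r) for r in level1_rows]  # normalise in practice

def build_target(level2_rows):
    return [dict(r) for r in level2_rows]  # validate + exclude in practice

def run_migration(source_rows):
    """End-to-end run: safe to repeat as many times as needed."""
    return build_target(build_level2(build_level1(source_rows)))

print(run_migration([{"annual_salary": "42000"}]))
```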
All issues found during testing are marshalled through our online issue-tracking tool, which also serves as an excellent communication channel between the client and the development team.
To talk to us more about data migrations, use our Contact Form.