The Great Migration: 40 Million Records Migrated in 10.5 Hours

Recently one of our largest clients went live. This was a massive project made possible by the partnership we developed with the client's team. Part of the process included transferring tens of millions of records in a precise and timely manner. Altai's Business Consultant, John Klemetsrud, has written about this part of the process in today's blog post. Readers, this is a lengthy one; however, we felt the process was important enough to share in its entirety at once.

Introduction

My name is John Klemetsrud and I work for Altai Systems Inc., a Microsoft partner. We primarily focus on providing Association Management System (AMS) solutions to non-profit organizations. Recently one of our clients, with our help, needed to migrate large amounts of data from multiple sources into Microsoft Dynamics CRM On-Premise in a reasonably short amount of time. This document serves as a historical account of the process, along with a great deal of technical information on the hardware, software, and configurations needed to make it happen. I hope it is helpful. Note: the 10.5 hours includes the time it took to get the source data "staged," so actual migration time was less.

Hardware

The hardware needed to accomplish this was quite impressive, and I'm sure other variations of comparable hardware would work. This was a VMware setup the client already had in place for the most part. It consisted of five "host" rack servers, with one or more "guests" on each host, in a stand-alone environment used solely for migrating the data; once the migration completed, it was backed up and moved to a production environment. For that reason, no Dynamics CRM Back End component servers were needed (other than the Deployment Manager), only Front End. Below are the host servers' hardware configurations. Note: all hard disk space was allocated on a separate NetApp, and all memory and processor affinities were allocated.

| Name | Server Type | Memory | CPU | Purpose | CRM Relationship |
| --- | --- | --- | --- | --- | --- |
| Host1 | Dell R820 | 256 GB | (x4) 8 core, hyper-threaded | Database Server | SQL for CRM DB Destination |
| Host2 | Dell R820 | 256 GB | (x4) 8 core, hyper-threaded | Database Server | Source SQL Server Database |
| Host3 | Dell R820 | 256 GB | (x4) 8 core, hyper-threaded | SSIS Data Migration | (x3) SSIS Servers w/ CRM Front End |
| Host4 | Dell R820 | 256 GB | (x4) 8 core, hyper-threaded | SSIS Data Migration | (x3) SSIS Servers w/ CRM Front End |
| Host5 | Dell R820 | 256 GB | (x4) 8 core, hyper-threaded | SSIS Data Migration and AD | (x2) SSIS Servers w/ CRM Front End |

As you can see, we have a single database server VM on one host using all of that host's memory and processors for the Microsoft CRM database itself, and the same for the database server housing the "source" data we were going to process: 32 cores plus 32 hyper-threads each. The other eight VMs each had both an SSIS server and the CRM application server (API) on them; additionally, Host5 had two Active Directory "guests." These were used to migrate the data, with their memory and processors shared out roughly equally between guest VMs.

Software

This was done for a Microsoft Dynamics CRM 2011 RU12 installation, but initial tests show it should work for Microsoft Dynamics CRM 2015 as well. The client's SSIS adapter/destination of choice was the "Dynamics CRM Destination" by CozyRoc. An additional component we used was the Microsoft Balanced Data Distributor, or "BDD," which can be downloaded free from Microsoft. Below is a small table of software for each guest (virtual server). For the purposes of this article, we'll prefix each database guest with DB and each SSIS/CRM Front End guest with SSIS_CRM. The chart ignores the Active Directory portion, as it is not relevant to this document.

| Guest/Name | Description | Software Version Info |
| --- | --- | --- |
| DB1 | Microsoft Dynamics Database Server | Microsoft SQL Server 2012 (SP1) x64 |
| DB2 | Source & Data Transformation Server | Microsoft SQL Server 2008 R2 (SP2) x64 Data Center Edition |
| SSIS_CRM1 | SQL Server Integration Services & Dynamics CRM Front End | Business Intelligence Development Studio (BIDS) VS 2008, CozyRoc SSIS+ SR-3, MS Dynamics CRM Front End components, BDD |
| SSIS_CRM2 | SQL Server Integration Services & Dynamics CRM Front End | Business Intelligence Development Studio (BIDS) VS 2008, CozyRoc SSIS+ SR-3, MS Dynamics CRM Front End components, BDD |
| SSIS_CRM3 | SQL Server Integration Services & Dynamics CRM Front End | Business Intelligence Development Studio (BIDS) VS 2008, CozyRoc SSIS+ SR-3, MS Dynamics CRM Front End components, BDD |
| SSIS_CRM4 | SQL Server Integration Services & Dynamics CRM Front End | Business Intelligence Development Studio (BIDS) VS 2008, CozyRoc SSIS+ SR-3, MS Dynamics CRM Front End components, BDD |
| SSIS_CRM5 | SQL Server Integration Services & Dynamics CRM Front End | Business Intelligence Development Studio (BIDS) VS 2008, CozyRoc SSIS+ SR-3, MS Dynamics CRM Front End components, BDD |
| SSIS_CRM6 | SQL Server Integration Services & Dynamics CRM Front End | Business Intelligence Development Studio (BIDS) VS 2008, CozyRoc SSIS+ SR-3, MS Dynamics CRM Front End components, BDD |
| SSIS_CRM7 | SQL Server Integration Services & Dynamics CRM Front End | Business Intelligence Development Studio (BIDS) VS 2008, CozyRoc SSIS+ SR-3, MS Dynamics CRM Front End components, BDD |
| SSIS_CRM8 | SQL Server Integration Services & Dynamics CRM Front End | Business Intelligence Development Studio (BIDS) VS 2008, CozyRoc SSIS+ SR-3, MS Dynamics CRM Front End components, BDD |


Configurations

There were various software configurations to take into account, and most were learned over time in response to issues we hit or inconsistent data flow into the target. Below we'll go over each software component and some of the changes we made to get everything flowing as smoothly as possible. Later in the article I'll explain some of the issues we still ran into even with these configurations, and how we worked around them.

Configurations - Dynamics CRM 2011 Front End

For Dynamics CRM 2011, after much trial and error, we really didn’t need too many changes. The items we did change (NOTE: On all 8 CRM Front Ends) were:

1.        We added the OLEDBTimeout and ExtendedTimeout settings to the registry per this article: http://support.microsoft.com/kb/918609. This allowed us to avoid some SQL timeouts we were getting at different points in the migration due to the large record width of some entities.

2.       Since every one of these servers had the CRM Front End installed alongside the SSIS server that would be delivering it data, the hosts file was changed so that the CRM URL (let's say http://crm.companyname.org) pointed to localhost (127.0.0.1). This allows SSIS to source the data and write to itself, avoiding an additional network hop from source to destination.

3.       Disabled ALL plug-ins and workflows (SdkMessageProcessingSteps). This ensures that no errors will be raised by any process that is not inherent in Microsoft Dynamics CRM. This can be done through the standard customizations area of CRM.

4.       Disabled CRM auditing. This ensures that CRM is not trying to write audit logs during the migration; the setting can be found in the standard "Settings" area of CRM.

5.       Removed all file type restrictions for attachments and set the maximum attachment size to the largest value allowed. This is optional; we only did it because of the large number of file attachments we would be migrating. It can also be found in the standard "Settings" area of CRM.
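As a read-only sanity check before each load, you can query the MSCRM database to confirm no custom plug-in or workflow steps are still active. This is a sketch against the CRM 2011 schema as I recall it; the table name, StateCode convention (0 = enabled), and CustomizationLevel flag are assumptions you should verify against your own database, and the customizations UI remains the supported route for any changes:

```sql
-- Count custom SDK message processing steps that are still enabled.
-- Read-only, so safe to run at any point in the migration window.
SELECT COUNT(*) AS ActiveCustomSteps
FROM   dbo.SdkMessageProcessingStepBase s
WHERE  s.StateCode = 0             -- 0 = enabled
  AND  s.CustomizationLevel = 1;   -- 1 = custom step (0 = built-in Microsoft)
```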

Configurations – SQL Server 2012 (CRM 2011 Database)

Again, since this would ultimately be a temporary resting spot for the migrated data, we were able to make changes here that I would not recommend in a production environment.

1.       Set the Dynamics CRM database to the Simple recovery model. This stops the database server from fully logging all transactions and reduces overhead.

2.       Disabled all non-unique, non-clustered indexes, ONLY in the destination CRM database. Since we knew (and I'll cover this later) that every single record we were pushing would be an insert, this reduces the overhead of writing to the indexes along with the tables. It appears to have fixed an issue where we were seeing fluctuations in write throughput (ups and downs). Note: you can write a script for this, and there are dynamic SQL examples online that will disable them all and re-enable them all as needed.

3.       Enabled replication out to the "source" SQL Server 2008 database server. We replicated all the tables, with the few select columns we needed, in order to stage our data and perform lookups at the data level instead of through SSIS. (In replication you can specify which tables, and which columns from those tables, you wish to replicate; you can even filter out rows you do not want.)

     a.        Example: create a staging table of all the contacts we want to load.

     b.       Load all the contacts, each with a unique foreign key id from the source.

     c.        Replication pushes each loaded contact's unique foreign key, along with its now "real" GUID, back out to the source server.

     d.       Stage the contacts' invoices, using the foreign key to look up the GUID (contactid) from the replicated data for customerid.

     e.       Load the invoices directly, without having to do lookups.

4.        A comment on replication: the same thing could be accomplished with a linked server, or by putting the source data on the same SQL Server as the destination CRM database, but we do not know how those options perform compared to replication. Also, other tables besides Contact have to be replicated for Invoice to work, e.g. TransactionCurrency.

5.       Although replication worked well and saved us from manually copying the CRM data to the migration side before staging the next entity load, some entity loads did overwhelm the replication process. Rather than wait for replication to catch up after a load completed, we opted to take replication down during those segments and restart it afterward, so we could validate the load and then stage the next entity. This was particularly true for narrow custom entities, which tend to load more rows per minute than the built-in entities.
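The database-side settings and the replicated-lookup pattern above can be sketched in T-SQL. All object names here (ORG_MSCRM, InvoiceStage, the ContactExtensionBase synonym, new_UniqueForeignKey) are illustrative assumptions, not the client's actual schema:

```sql
-- Migration-only settings; not recommended for production.
ALTER DATABASE ORG_MSCRM SET RECOVERY SIMPLE;

-- Generate DISABLE statements for all non-unique, non-clustered indexes.
-- Run the generated output; re-enable later with ALTER INDEX ... REBUILD
-- (a disabled index can only be brought back by rebuilding it).
SELECT 'ALTER INDEX ' + QUOTENAME(i.name) + ' ON '
     + QUOTENAME(SCHEMA_NAME(t.schema_id)) + '.' + QUOTENAME(t.name) + ' DISABLE;'
FROM   sys.indexes i
JOIN   sys.tables  t ON t.object_id = i.object_id
WHERE  i.type_desc = 'NONCLUSTERED'
  AND  i.is_unique = 0;

-- Lookup at the data level (step d above): resolve each staged invoice's
-- customerid by joining to the replicated contact data on the foreign key
-- carried through the contact load.
UPDATE inv
SET    inv.CustomerId = c.ContactId
FROM   dbo.InvoiceStage inv
JOIN   dbo.ContactExtensionBase c      -- synonym to the replicated table
       ON c.new_UniqueForeignKey = inv.new_UniqueForeignKey;
```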

Configurations – SQL Server Integration Services (SSIS)

The changes needed for SSIS are important, as they help both the development process and SSIS's ability to write quickly regardless of the adapter/destination component being used. I must give credit to some websites I came across regarding these configurations, including:
  1. http://blog.sonomapartners.com/2013/09/multi-threading-your-ssis-crm-integration-for-performance.html
  2. http://danielcai.blogspot.com/2012/02/improve-crm-data-load-performance-by.html
  3. http://danielcai.blogspot.se/2012/04/other-half-story-about-bdd-and-ssis-for.html

You will see several of my comments in there, and I want to make it clear, we do not endorse one adapter/destination over another.

As those articles describe, we added the changes to the DTSExec.exe.config and DtsDebugHost.exe.config files in both the C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn and C:\Program Files\Microsoft SQL Server\100\DTS\Binn locations; one is for 32-bit and the other for 64-bit (may as well cover all the bases). DTSExec.exe.config affects jobs run from the command line, and DtsDebugHost.exe.config affects jobs run through BIDS (Visual Studio). I recommend you put the change in all of them.

The addition looks as follows:

    <system.net>
        <settings>
            <servicePointManager expect100Continue="false" />
        </settings>
        <connectionManagement>
            <add address="*" maxconnection="500"/>
        </connectionManagement>
    </system.net>

This is added anywhere between the starting <configuration> and ending </configuration> sections of each of those files.

Setting expect100Continue="false" tells the HTTP client not to wait for a "100 Continue" response from the server before sending each request body, saving a round trip. The maxconnection="500" means that instead of the default of two concurrent connections per endpoint, up to 500 can be made. Other articles use 100, and I'm not sure we ever reached 500, but with the servers we were using I left it at that value; I'd start with 100. This is the more important of the two settings and is an absolute must.

Data Source and SSIS Details

Data Source Details
The first part of this process was to get all the data together. The client took all relevant databases (there were multiple) and, as part of their process, backed them up and restored them to a server we'll just call "Migrate" for this article. Remember, I mentioned earlier that we also had the CRM database replicated to this Migrate server to perform our lookups as needed. Stored procedures were written to pull from the multiple source databases and generate a "staging" table that would ultimately become the real source of data. Beyond the standard data mapping from the multiple real sources to the destination, there are a few additional things to take away:

1.       We made the staging table look EXACTLY like the CRM destination table. We literally made every column name, length, and type identical, to make mapping easy, serve as a mapping document of sorts, and spare SSIS any unnecessary work.

2.       We created a bit column called "RowExists" in the staging table for each entity. Just in case there was a problem, we could run a script that looked at the replicated data and set the flag to 1 if the row already existed in CRM. This prevented us from having to do lookups in SSIS and kept the load insert-only.

3.       We used the T-SQL NTILE function to split the data into equal parts, populating the returned NTILE value into a column we called "Stream" in each entity's staging table. In our case (and I'll get into this later) we used NTILE(80) OVER (ORDER BY UniqueForeignKeyValue), which broke each staging table into 80 equal parts as it executed. So if the table had only 240 rows, the first 3 rows would get "Stream" value 1, the next 3 would get "Stream" value 2, and so on up to 80.

4.       We created SQL synonyms for each of the replicated tables we used, giving them their real names from CRM. So where the XXX_MSCRM database replicated on this server has dbo.ContactBase in it, in our Migrate database we created synonyms with three-part identifiers back to it, so we could join to them in our queries to update lookup GUIDs or reset the RowExists flag.

5.       All the staging stored procedures would reset RowExists accordingly (always checked) and would always repopulate the NTILE (Stream) value based only on records where RowExists = 0, to make sure that only the rows we hadn't already loaded were split up equally for the next run.

6.       All databases on this server, including the replication control database, were also set to simple recovery.
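The NTILE staging described in item 3 can be sketched like this (ContactStage and UniqueForeignKeyValue are placeholder names, not the client's actual schema):

```sql
-- Re-number the not-yet-loaded rows into 80 equal "streams" so that
-- eight SSIS servers x ten control flows can each take one bucket.
;WITH numbered AS (
    SELECT Stream,
           NTILE(80) OVER (ORDER BY UniqueForeignKeyValue) AS NewStream
    FROM   dbo.ContactStage
    WHERE  RowExists = 0             -- item 5: only rows not already loaded
)
UPDATE numbered
SET    Stream = NewStream;
```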

All in all, the staging of each future "entity" was fairly quick, thanks to intense optimization and having time allotted in the project to make any necessary adjustments. These staging times were all written down and monitored every time we performed a test load, so that we could plan for go-live.

SSIS Details
This part, along with the above, took many hours of trial and error. We started with fewer SSIS guest servers and grew to the eight we ended up using. We tried BDD with 10 destinations versus 10 control flows each with a single OLE DB source; we tried many other variations until we arrived at what ended up being the best. For the purposes of this article, let's say the only thing we were migrating was Contacts (it wasn't, but you would just repeat the process for each entity), and let's also say it was just 1,000,000 contacts.

This means that when we staged the data using the stored procedure with NTILE(80) in it, each "Stream" held exactly 12,500 rows.

In the end, we had eight SSIS servers, each with a package called "Contact Load" on it. Each entity had its own separate package file.

We created our source and lookup data sources. The "SOURCE" pointed at our staging tables, the "LOOKUP" (we didn't use it, but kept it) pointed to the real CRM database server, and the "DESTINATION" was configured to use CozyRoc's Dynamics CRM Destination. We then created 10 Int32 variables named "Stream1", "Stream2", and so on up to "Stream10".

To standardize the packages, we used the "Package Configurations" option to export the connection info and our variables (only Stream1 through Stream10). We placed a copy of this XML in the same location on each of the eight servers (let's just say C:\ssisxml), so that if the package was opened from ANY server, it would use the file. We then edited the file and hard-coded the credentials for the CRM destination, along with the variable values for that server: SSIS server #1 had 1-10 for its stream variables, SSIS server #8 had 71-80, and the servers in between had theirs. This gave us the flexibility to develop on one package but deploy and run it on any of the servers. Note: after hard-coding the credentials for the CRM "DESTINATION" in the XML, you must set the package's protection level to "DontSaveSensitive". This stops the package from encrypting the user/password and causing authentication problems when the connection info is read from the XML file. Our "SOURCE" and "LOOKUP" just used Windows Authentication.
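A per-server configuration file along the lines described above might look like this. This is a sketch of the SSIS 2008 dtsConfig format; the variable paths assume package-scoped variables named Stream1 through Stream10, and only the first two entries (for server #1) are shown:

```xml
<?xml version="1.0"?>
<DTSConfiguration>
  <!-- SSIS server #1 holds streams 1-10; server #8 would hold 71-80. -->
  <Configuration ConfiguredType="Property" ValueType="Int32"
                 Path="\Package.Variables[User::Stream1].Properties[Value]">
    <ConfiguredValue>1</ConfiguredValue>
  </Configuration>
  <Configuration ConfiguredType="Property" ValueType="Int32"
                 Path="\Package.Variables[User::Stream2].Properties[Value]">
    <ConfiguredValue>2</ConfiguredValue>
  </Configuration>
  <!-- ...Stream3 through Stream10, plus the connection strings... -->
</DTSConfiguration>
```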

Additional SSIS Settings that helped make development easier and also helped with the overall speed OR “Removed Errors” were:

1.       In the "DESTINATION" Connection Manager using CozyRoc:

     a.        [Compression] was set to "True" to reduce size across the network.

     b.       [Connection Sharing] was set to "False".

     c.        [RetainSameConnection] was set to "False".

     d.       [ServerTimeout] was set to 300.

     e.       [DelayValidation] was set to "True". This is important to set pretty much everywhere to make the package load faster for editing.

2.       Both the “SOURCE” and the “LOOKUP” Connection Manager items had changes.

     a.        [PacketSize] set to the max of 32767.

     b.       [DelayValidation] set to “True”.

     c.        [RetainSameConnection] to “False”.

3.       The package properties themselves, from which each Control Flow inherits its values, were also set:

     a.        [DelayValidation] set to "True".

     b.       [MaxConcurrentExecutables] set to 128, to avoid any limits.

     c.        [LoggingMode] set to "Disabled". Logging slows the process.

     d.       [IsolationLevel] set to "ReadUncommitted". The thought process here is that we were the only ones using the source or destination data, so there wouldn't be any dirty reads anyway. This will not be appropriate in all situations.

4.       Any sequence containers and control flows also had their settings changed:

     a.        [DelayValidation] set to "True".

     b.       [LoggingMode] set to "Disabled".

     c.        [IsolationLevel] set to "ReadUncommitted".

Now, remembering that there are eight SSIS servers, the source is split into 80 equal parts, and each server has its own custom XML config file with its own SSIS variable values, we were able to:

1.       Create a single Control Flow called "Contact1":

     a.        Add an OLE DB source to read from our staging table: "select * from contactstage where RowExists=0 and stream=@Stream1" (the variable from the XML).

     b.       Add the Microsoft Balanced Data Distributor component, which equally distributes source rows to multiple destinations.

     c.        Add (in our case) 5 CozyRoc Dynamics CRM destinations, all set to bulk insert 1,000 contacts each. This boils down to sending a large XML payload to the CRM SDK, which parses through it and loads it.

     d.       Open the CozyRoc destinations and let them auto-map all the columns, which we could do since our source column names exactly matched our destination's.

     e.       Change the [DefaultBufferMaxRows] value to approximately the number of rows in any given stream divided by the 5 CozyRoc CRM destinations; so at 12,500 rows per stream, 12,500 / 5 = 2,500. This is super important because it makes sure the Balanced Data Distributor distributes the incoming rows very evenly across all the destinations. It doesn't have to be exact for every run; just know and use the number you think you'll be migrating at go-live.

     f.         We also did some experimenting with the [DefaultBufferSize] property, but this depends on the average row width of the data you are processing. It will be different for everyone; you can try increasing it by 10 MB at a time and see how it impacts performance during testing. Generally speaking, the bigger your server allows the buffer to be before writing to disk, the faster the data will move.

2.       Copy and paste the Control Flow "Contact1" nine times on the first SSIS server, so there are a total of 10 Control Flow items on the screen in that single package, and rename each to "Contact2", "Contact3", etc.

3.       Inside each of the copied Control Flows, change the stream variable the source looks at: for "Contact2" the select statement was changed to "and stream=@Stream2", for "Contact3" to "and stream=@Stream3", and so on.

4.       Now we could easily copy and paste this one package onto all eight servers, where each one looks at a different portion of the source data!
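Before hitting play, it is worth confirming the streams really are balanced. A quick check along these lines (dbo.ContactStage is a placeholder name for the staging table):

```sql
-- Row count per stream; with NTILE(80) the counts can differ by at most 1.
SELECT Stream, COUNT(*) AS RowsInStream
FROM   dbo.ContactStage
WHERE  RowExists = 0
GROUP  BY Stream
ORDER  BY Stream;
```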

We were initially going to run these packages from the command line, because in theory that is quicker, but we ran into some issues where we really needed to be able to catch a failure quickly, stop and restart the package, or decide to cancel all the packages. Because of that, and since this was a one-time event to migrate the data, we elected to run all the packages through BIDS (Visual Studio). As you can imagine, it was fun opening eight remote desktop windows and hitting "play" on each one, but it definitely worked out OK.

Again, you would repeat this same process for each entity, giving each its own package file. If you try to cram some, or all, of the entities into one package file, you will end up with two issues:

1.       The package takes forever to load.

2.       If you need to make a change to “something” you risk adversely affecting other items in the package unrelated to what you are working on.

In the end, each package looks the same at the control flow level: ten parallel control flows, "Contact1" through "Contact10". Inside each control flow, one OLE DB source feeds the Balanced Data Distributor, which fans out to five CozyRoc CRM destinations. (The original post's screenshots are not reproduced here.)

Issues Encountered

We had a lot of trial and error throughout the process, but set up as documented, we were left with only two minor issues.

1.       Every once in a while, when we started a package on any given server, in the very first seconds of execution it would fail to reach the CRM SDK. It happened so fast that you could hit stop before anything was written (it takes about 20 seconds for each item to validate), hit play again, and 99% of the time it would then run just fine. This is the main reason we decided to keep running the packages from BIDS rather than the command line: I could catch the failure and replay as needed. It could have been caused by almost anything, and I'm not willing to speculate on the exact cause; it was one of the few things we didn't have time to diagnose fully.

2.       During some of the larger loads (one entity was nearly 12 million rows), we would sometimes, but not always, end up with a few hundred thousand duplicated rows and some timeout entries in the log (we were pushing it hard). We dealt with this with a report, run at the end of each load, that verified against the replicated data that every row was loaded, verified the StatusCode/StateCode of each row in the (CRM) target entity, and checked for duplicates. If we had duplicates we would:

     a.        Delete all the base and extensionbase rows that were duplicated

     b.       Reset the RowExists for those rows in the staging table.

     c.        Run all the packages again to have it load any that were missing.

For go-live, this only happened on the one entity that had nearly 12 million rows and ran the longest.
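The duplicate check and cleanup above can be sketched in T-SQL. The names here (a ContactExtensionBase synonym, a hypothetical custom column new_UniqueForeignKey, staging table ContactStage) are illustrative assumptions:

```sql
-- Capture the source keys that ended up in CRM more than once.
SELECT new_UniqueForeignKey
INTO   #dups
FROM   dbo.ContactExtensionBase        -- synonym to the replicated CRM table
GROUP  BY new_UniqueForeignKey
HAVING COUNT(*) > 1;

-- (Step a: delete every duplicated base/extensionbase row in CRM.)

-- Step b: reset RowExists in staging so the next package run (step c)
-- reloads the rows that were removed.
UPDATE s
SET    s.RowExists = 0
FROM   dbo.ContactStage s
JOIN   #dups d ON d.new_UniqueForeignKey = s.UniqueForeignKey;
```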

Data Speed and Conclusion

Data Speed

While I'm not at liberty to share the exact numbers of customers, invoices, and "other items" this client had to migrate, I am willing to share the speeds at which some of them were migrated using the above method. I can tell you the SSIS servers all peaked at 100% usage, and the database servers hovered around 80-90% usage during the process, so the resources seemed to be used fairly well.

It's important to say that our initial estimates, based on running a single standard ETL for each entity at an average load rate of 2,000 rows/minute, put the migration time at 19 days. It took a lot of back and forth to whittle the time down from there.

Additionally, the smaller the column width of the data you are migrating, the faster it will be. Narrow custom entities migrated much faster than wider out-of-the-box entities like Contacts, for example.

Another thing to consider is the CRM internal plug-ins: you know, the ones that calculate the totals for invoices and things of a similar nature. You cannot shut these off, so loads like Invoice and InvoiceDetail are going to be exceptionally slower. We monitored InvoiceDetail loads, and for every row, the SDK does the insert as well as multiple lookups and an update to the parent invoice. The more writes, the slower.

Additionally, one needs to remember that some entities consist of more writes than others. A custom entity, for example, only writes to its base and extensionbase tables. The "Contact" entity, on the other hand, requires writes to the contactbase, contactextensionbase, customeraddressbase, and emailsearchbase tables, which means that for every contact inserted there are four actual writes.

Below is a chart of some of the speeds we were able to achieve. While I can't show you the total record count for each entity for customer privacy reasons, I can say the actual total number of records sourced was approx. 39,333,174 rows (which I would have put in the title, but it doesn't have the same ring to it). The total size of those nearly 40 million records was approx. 135,027.514 MB, or just over 135 GB, as recorded in the source. The total time to extract everything (by entity) was about 250 minutes (4.2 hours), and the actual time to load all the data was 561 minutes (9.35 hours). Those are the totals if every step had run serially, but several of them ran in parallel, both at the staging level and the SSIS load level; the "real clock" time to complete everything was approximately 10.5 hours. Add on some time for double-checking loads, back-filling some values based on loaded data, and things of that nature, and the entire process took about 12.5 hours.

*Entity = name of the entity in CRM.
*Minutes to Extract = time in minutes it took to extract the data from its source databases into a flat staging table.
*Records per Minute = actual full CRM records per minute inserted through the SDK calls, not true DB records, which would be a higher number since they include the base and extensionbase tables.
*Time per Million = you could figure this on your own; just sharing.

| Entity | Minutes to Extract | Approx. Records per Minute | Time per Million |
| --- | --- | --- | --- |
| Contacts | 26 minutes | 120,966 | 8 ½ minutes |
| Invoices | 16 minutes | 58,535 | 17 minutes |
| Inv. Detail | 28 minutes | 53,377 | 18 ½ minutes |
| Payments* | 6 minutes | 221,766 | 4 ½ minutes |
| Memberships* | 5 minutes | 86,167 | 11 ½ minutes |

*Payments and Memberships were custom entities, and I wanted to show speeds on those, as they tend to be faster than out-of-the-box entities unless they have larger column widths, which is the case with Memberships.

Conclusion
As I've said at different points throughout the article, every client has different needs, budgets, available hardware, time constraints, and other variables. I feel that what was done here could be scaled up or down accordingly and still be beneficial in some way to some of you out there. The experience for me personally was a great opportunity to work with some highly intelligent individuals and to do things I may, or may not, get to do again in my life. I was very fortunate to have been put on this project and to have a client so willing to invest the time and money into getting every detail "just so." This will be one of the stories I'll share for some time to come over the virtual water cooler. I would like to extend a special thank you to our client and their great employees who helped make this possible.

P.S. Don't forget to re-enable all the disabled indexes, plug-ins, and workflows at the end!
