The number one question asked by customers preparing for an upcoming Microsoft 365 migration project is, “How fast can the content be migrated?” I get it! It’s an important question to ask a vendor who is about to move your data from one platform to another.
In this blog post, I’ll make the argument for an alternative question to ask before deciding on a vendor to assist with your data migration project: “How efficient is your migration process?”
While migration throughput is an important consideration, the overall project timeline — the start and end dates that encompass every activity in the project — is a far more important gauge to plan for and optimize.
Before moving on, allow me to establish some background. Here at AvePoint we’ve run many load tests to establish baseline performance expectations for all sorts of scenarios: SharePoint 2010 to SharePoint Online, SharePoint Online to SharePoint Online, file share to Microsoft Teams, and so on. In fact, with every major release of DocAve or FLY, such tests give us a starting frame of reference. However, they are not absolute and cannot be applied uniformly from project to project. The variability across data is frankly too great, and it’s difficult to overlay lab performance results onto different environments.
We know from years of experience that there are a number of common characteristics that affect the speed of a given data migration project, such as:
- Basic information about the content that needs to be moved (we’ll call this primary information)
- Supporting attributes of migrated content like metadata, permissions, etc. (we’ll call this secondary information)
- Hardware available to run the migration jobs
- Throttling restrictions placed on the source and destination endpoints by vendors to ensure platform reliability
- The architecture and baseline performance of the migration tool selected to perform the migration
- Migration process efficiency and job uptime
I won’t dive deeper into the above characteristics, having summarized them in previous posts. Instead, I’ll talk about the number one factor that is often overlooked and will have a higher impact on your overall project duration than most of us would imagine: process efficiency and job uptime.
Simply put, you want to make sure that your migration platform is running at full capacity with high job concurrency. You also want to ensure that there are no gaps between jobs and a low job failure rate. Read on for three focus areas you need to be aware of during a Microsoft 365 migration.
The first focus area is platform capacity: the idea that you have tailored your migration platform to the goals of the project and the amount of content that needs to move. This includes selecting the right infrastructure and maximizing the number of agents to add job concurrency across your migration platform. Keep in mind that at a certain point you will be throttled by Microsoft if you have too many jobs running at once.
It’s difficult to predict where this line may be, which is why we advocate incrementally boosting your available capacity while monitoring job error codes for throttling. It would be counterproductive to run a migration with 10 agents only for throttling to kick in and delay everything. This line is fluid and must be adjusted throughout the project.
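The ramp-up-then-back-off approach above can be sketched as a simple control loop. This is a minimal illustration, not any vendor’s actual algorithm: `adjust_concurrency` is a hypothetical helper that looks at the status codes from the last batch of jobs and halves the agent count when throttling responses (such as HTTP 429 or 503) appear, or ramps up by one agent while things stay healthy.

```python
# Hypothetical status codes a migration endpoint might return when
# throttling; Microsoft 365 services commonly use 429 and 503.
THROTTLE_CODES = {429, 503}

def adjust_concurrency(agents, status_codes, max_agents=10):
    """Return the agent count for the next batch of jobs.

    Halve concurrency as soon as any job in the last batch was throttled;
    otherwise ramp up gradually toward max_agents. This mirrors the
    'incrementally boost capacity while watching error codes' advice.
    """
    if any(code in THROTTLE_CODES for code in status_codes):
        return max(1, agents // 2)  # back off hard when the platform pushes back
    return min(max_agents, agents + 1)  # probe for more headroom slowly
```

Backing off aggressively and ramping up slowly (similar to congestion control) keeps you near the fluid throttling line without repeatedly crossing it.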
Once you’ve invested in standing up your migration architecture, you want to maximize the number of jobs running at any given time. The investment in your infrastructure resources is too high to leave servers sitting idle. When running migrations, we typically script jobs into job groups to make sure that as jobs wind down, the next jobs in the queue are automatically picked up and executed without the need for human interaction.
This enables the technical group to focus more on troubleshooting and job reporting and less on scheduling and kicking off jobs, which can occur at any time of the day. Again, you want to make sure you have every available machine hour dedicated to running jobs.
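As a rough sketch of that scripted job-group pattern, the snippet below uses a thread pool sized to the number of agents: as soon as one job finishes, the next queued job starts automatically, so no machine hour goes unused. `run_job` is a hypothetical placeholder for a call into your migration tool’s API, not a real vendor function.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_job(job):
    """Placeholder for invoking the migration tool on one job (hypothetical)."""
    return {"job": job, "status": "succeeded"}

def drain_job_groups(job_groups, agents=4):
    """Run queued job groups back-to-back with no gaps between jobs.

    The executor keeps exactly `agents` jobs in flight; when one winds
    down, the next job in the queue is picked up without human interaction.
    """
    results = []
    with ThreadPoolExecutor(max_workers=agents) as pool:
        futures = [pool.submit(run_job, job)
                   for group in job_groups for job in group]
        for future in as_completed(futures):
            results.append(future.result())
    return results
```

In practice you would also capture failures here so the reporting described below has structured data to work with.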
While impressive, a 20 GB/hour job becomes less rosy if it is riddled with errors that require you to evaluate, troubleshoot, and schedule re-runs. Certain sources are more prone to higher object failure rates, and timeouts and thresholding contribute to the need to re-run jobs. In our experience, this is where projects lose the most valuable time. Consider what can happen if your migration platform sits idle while you’re spending all of your time troubleshooting job error codes and re-running migration jobs.
While some error reporting is unavoidable, we recommend spending time upfront and segregating sites that our discovery tools indicate may have higher than normal error rates into separate containers from other source containers that don’t show symptoms of errors. Moreover, it’s worthwhile to point out that certain errors can be remediated, while others cannot.
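The segregation step above amounts to partitioning your source inventory on a discovery signal. Here is a minimal sketch, assuming a hypothetical discovery output where each site carries a `predicted_error_rate` fraction; the field name and threshold are illustrative, not any tool’s actual schema.

```python
def segregate_sites(sites, risk_threshold=0.05):
    """Split discovered sites into 'clean' and 'at_risk' migration containers.

    Sites whose discovery-phase predicted error rate (a fraction, assumed
    field) exceeds the threshold are quarantined into their own container
    so they don't slow down or pollute the low-risk migration waves.
    """
    clean, at_risk = [], []
    for site in sites:
        bucket = at_risk if site["predicted_error_rate"] > risk_threshold else clean
        bucket.append(site["url"])
    return {"clean": clean, "at_risk": at_risk}
```

Running the clean container first keeps throughput high while the at-risk sites get remediated (where possible) in parallel.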
It’s therefore important to ensure that you have a way to look at your error reporting at a macro level and filter out the error codes that must be dealt with by users. You should also make sure you have an efficient mechanism to report object failures to users in the event that they need to remediate files directly. Sending emails to hundreds or thousands of users with individual job reports is not an easy task and must be considered during your design stage.
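One way to make that per-user reporting tractable is to aggregate object failures by owner so each user receives a single consolidated remediation list rather than one email per job. The sketch below assumes a hypothetical failure record with `owner`, `path`, and `error` fields; adapt the shape to whatever your migration tool actually exports.

```python
from collections import defaultdict

def build_user_reports(failures):
    """Group object-level failures by owner into one report per user.

    `failures` is a list of dicts with 'owner', 'path', and 'error' keys
    (an assumed export format). Returns {owner: [line, ...]} suitable for
    rendering into a single consolidated email per user.
    """
    reports = defaultdict(list)
    for failure in failures:
        reports[failure["owner"]].append(f"{failure['path']}: {failure['error']}")
    return dict(reports)
```

Feeding this into a mail-merge step turns thousands of job-level reports into one actionable message per affected user.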
In the end, if you want your migration project to run smoothly, focusing on process efficiency and job uptime is key. By keeping your downtime low, avoiding throttling, and managing your error rate appropriately, you’ll be able to have a relatively seamless Microsoft 365 migration every time. Have any insights of your own? Feel free to share them with us below!