Reducing Impact of Large Updates on Merge Replication


One key parameter to consider when performing batch updates against tables enabled for Merge Replication is 'generation_leveling_threshold'.  This setting controls how many data changes are grouped into one generation (or batch).  The Merge Agent then collects and sends those batches of changes based on the Merge Agent [-DownloadGenerationsPerBatch] parameter.  The default is to group 1000 data changes into 1 generation and to send 100 generations at a time.

How Merge Replication Tracks and Enumerates Changes
http://msdn.microsoft.com/en-us/library/ms151789.aspx
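If you want to see what a publication is currently using before changing anything, you can read the merge metadata directly.  The query below is a minimal sketch run in the publication database; it assumes the generation_leveling_threshold column is present in sysmergepublications on your build.

-- Publication database: current leveling threshold for each merge publication
SELECT name, generation_leveling_threshold
FROM dbo.sysmergepublications with (nolock)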

If you update 10 million rows in a single transaction, the Merge tracking triggers will record this as 1 generation containing 10 million data changes.  The next time the Merge Agent runs, it will try to divide that single 10-million-change generation into batches of 1000 commands.  This can take quite a while and may give the impression the Merge Agent is hung.  To improve Merge synchronization performance:

1) Increase the batch size of the generation.  For 10 million changes, try a setting of 10,000.

exec sp_changemergepublication @publication = '<publication name>',
    @property = 'generation_leveling_threshold',
    @value = '10000'

2) Execute sp_MSmakegeneration to kick off the generation leveling process.
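sp_MSmakegeneration is an internal replication procedure, so treat the call below as a sketch rather than a documented API; on the builds I have worked with it can be run in the publication database without parameters, but verify the behavior on your version first.

-- Publication database: force the generation leveling process to run now
-- (internal procedure; undocumented, behavior may vary by SQL Server build)
exec dbo.sp_MSmakegeneration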

You can watch the leveling process by running:

SELECT top 100 *
  FROM [dbo].[MSmerge_genhistory] with (nolock)
  order by generation desc

You’ll now have 1,000 generations of 10,000 commands each.  The Merge Agent will send these along in batches of 100 generations, providing more frequent updates to the Agent history shown in Replication Monitor.
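The number of generations moved per batch is a Merge Agent parameter, not a publication property.  The fragment below is a hypothetical Merge Agent job-step command line showing where -DownloadGenerationsPerBatch (and its upload counterpart) would be specified; all server, database, and publication names are placeholders.

-Publisher [MyPublisher] -PublisherDB [MyPubDB] -Publication [MyMergePub]
-Subscriber [MySubscriber] -SubscriberDB [MySubDB]
-DownloadGenerationsPerBatch 100 -UploadGenerationsPerBatch 100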

Special Considerations (Expert Level)

The sp_changemergepublication command can only be executed on the Publisher.  The new value is then distributed to the Subscribers during the next synchronization.  However, if a critical situation occurs and you need to update the Subscriber immediately, you can change the value directly in the Subscriber Merge metadata tracking tables.  Get it wrong and it may cost you a loss of data and/or a reinitialization, so execute this only after backing up all replicated databases.  In other words, if you modify Merge tables directly, be very, very careful; if you’re not comfortable making the change, don’t do it.

sp_configure 'allow updates', 1
go
reconfigure with override
go
use <Subscriber_db>
go
update sysmergepublications
set generation_leveling_threshold = 10000
where name = '<publication_name>'
go
sp_configure 'allow updates', 0
go
reconfigure with override
go

For testing, I updated 100 million rows on the Subscriber, then started the Merge Agent.  About 1 hour into leveling the 100,000,000 changes into batches of 1,000, the Merge synchronization failed with “Unable to decide if MakeGeneration is needed.”


I used the code above to set generation_leveling_threshold to 100,000 (a large value, since I had made 100 million data changes), then called sp_MSmakegeneration.  In about 6 minutes all the changes were nicely grouped into generations of 100,000.  All I needed to do was start the sync and watch the data changes flow.
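To confirm the leveling produced the grouping you expect, you can look at the per-generation change counts.  This is just a verification sketch against the merge metadata; it assumes the changecount column exists in MSmerge_genhistory on your build.

-- How many changes each recent generation carries after leveling
SELECT top 100 generation, changecount, coldate
  FROM [dbo].[MSmerge_genhistory] with (nolock)
  order by generation desc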





2 thoughts on “Reducing Impact of Large Updates on Merge Replication”

  1. Merge Replication was blocking all subscribers.  Blocking SPID is executing sp_MSlevelcontents

    This occurs during "generation leveling."  The first running Agent "levels" and places a BLOCK on all subscribers.

    Currently running default of 1000 changes per generation.  Increased to 10,000

    Manually executed sp_MSmakegeneration -> completed in 5 minutes

    Started Merge Subscribers -> only short-term blocking

  2. For SQL Server 2008 Standard edition, do we stop the Merge Agent before step 1?  I think we should… and start it again after step 2.  Also, once this is over, do we revert the generation leveling setting?
