If you encounter a high volume of data during periods of high latency, explore one of these possible solutions:

    1. Spread the workload over a larger time window.
    2. Commit changes in smaller batches (harder to roll back).
    3. Use the Log Reader Agent -MaxCmdsInTrans parameter (can break transactional consistency for large batches, leading to the same problem as 2).
    4. Replicate the stored procedure "execution", not the results.
    5. Unpublish the table, make the large batch changes, then republish the table.
    6. Move the published database transaction log to as fast a drive as possible.  Consider using solid-state RAM drives.
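As a sketch of option 3, -MaxCmdsInTrans is an optional parameter appended to the Log Reader Agent's command line (in the agent's job step). The server and database names below are placeholders, not from this article:

```
-- Illustrative only: append -MaxCmdsInTrans to the Log Reader Agent command.
-- A value of 10000 tells the Log Reader to split large transactions into
-- batches of 10,000 commands when writing to the distribution database,
-- trading transactional consistency for smaller, faster-to-apply batches.
logread.exe -Publisher [MyPublisher] -PublisherDB [MyPublishedDB]
    -Distributor [MyDistributor] -MaxCmdsInTrans 10000
```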

-- ROW COUNTS

-- Look for high row counts (>1 million), which indicate cleanup is not running
--     or large pending transactions.

SELECT name, rowcnt, STATS_DATE(id, indid) AS 'Last Update Stats'
FROM distribution.dbo.sysindexes
WHERE name IN ('ucMSrepl_transactions', 'ucMSrepl_commands')

Are the row counts expected, or do the tables now contain millions of rows?  High row counts (>1 million) may indicate a large transaction is being processed or the cleanup procedure is not running.
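If cleanup appears not to be running, one way to verify is to execute the distribution cleanup procedure manually (normally run by the "Distribution clean up: distribution" SQL Agent job). A sketch, assuming the default retention settings; adjust the values to match the retention configured on your Distributor:

```sql
-- Sketch: run the distribution cleanup manually with the default
-- retention values (minimum 0 hours, maximum 72 hours).
USE distribution
EXEC dbo.sp_MSdistribution_cleanup
    @min_distretention = 0,
    @max_distretention = 72
```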

When troubleshooting latency, a review of pending commands by day and hour may uncover a helpful pattern.  A breakdown of the commands stored in the Distribution database can be retrieved by running the following queries.

-- Check the Time associated with those Transaction Counts into temp table

SELECT t.publisher_database_id, t.xact_seqno,
      MAX(t.entry_time) AS EntryTime, COUNT(c.xact_seqno) AS CommandCount
INTO #results
FROM MSrepl_commands c WITH (NOLOCK)
LEFT JOIN MSrepl_transactions t WITH (NOLOCK)
      ON t.publisher_database_id = c.publisher_database_id
      AND t.xact_seqno = c.xact_seqno
GROUP BY t.publisher_database_id, t.xact_seqno

-- Show each hour and number of commands per Day:

SELECT publisher_database_id
      ,DATEPART(year, EntryTime) AS Year
      ,DATEPART(month, EntryTime) AS Month
      ,DATEPART(day, EntryTime) AS Day
      ,DATEPART(hh, EntryTime) AS Hour
      --,DATEPART(mi, EntryTime) AS Minute
      ,SUM(CommandCount) AS CommandCountPerTimeUnit
FROM #results
GROUP BY publisher_database_id
      ,DATEPART(year, EntryTime)
      ,DATEPART(month, EntryTime)
      ,DATEPART(day, EntryTime)
      ,DATEPART(hh, EntryTime)
      --,DATEPART(mi, EntryTime)
--ORDER BY publisher_database_id, SUM(CommandCount) DESC
ORDER BY publisher_database_id, Month, Day, Hour

In the sample output below, a large batch of transactions was being replicated as a result of table updates, causing a slowdown in the Distribution Agent.

publisher_database_id Year        Month       Day         Hour        CommandCountPerTimeUnit

--------------------- ----------- ----------- ----------- ----------- -----------------------

2                     2009        5           14          10          132

2                     2009        5           14          11          656

2                     2009        5           14          12          880

2                     2009        5           14          13          4379

2                     2009        5           14          14          152

2                     2009        5           14          15          1478

2                     2009        5           14          20          161

2                     2009        5           14          21          145

2                     2009        5           15          6           1700

2                     2009        5           15          7           3672

2                     2009        5           15          8           6266

2                     2009        5           15          9           329

2                     2009        5           15          10          5678715

2                     2009        5           15          11          5637959

2                     2009        5           15          12          5281732

2                     2009        5           15          13          5020950

2                     2009        5           15          14          1252

2                     2009        5           16          11          732

2                     2009        5           16          12          178

2                     2009        5           16          13          725

2                     2009        5           16          14          186

2                     2009        5           16          16          72

Resolution:

Large batches of transactions place heavy IO demands on the Distribution Agent Reader-Thread against the distribution database.  A fast disk subsystem, with the transaction log and database on separate drives/LUNs, may help improve IO performance. If this will be an ongoing pattern, consider replicating the stored procedure EXECUTION instead of the RESULTS.
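As a sketch of that approach, a stored procedure's execution can be published as an article of type 'serializable proc exec' via sp_addarticle; the publication and procedure names below are placeholders:

```sql
-- Sketch only: publish the EXECUTION of a stored procedure so Subscribers
-- replay the single EXEC call instead of receiving every resulting row change.
-- 'serializable proc exec' replicates the execution only when it occurs
-- within a serializable transaction; names below are placeholders.
EXEC sp_addarticle
    @publication   = N'MyPublication',
    @article       = N'usp_LargeBatchUpdate',
    @source_object = N'usp_LargeBatchUpdate',
    @type          = N'serializable proc exec'
```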

Publishing Stored Procedure Execution in Transactional Replication

http://msdn.microsoft.com/en-us/library/ms152754(SQL.90).aspx