Merge agent failing with "Cannot insert duplicate rows into table 'tablename'. Violation of PRIMARY KEY constraint" during initialization.


Sateesh Yele
Microsoft SQL Server Support

Last week, I worked on a replication case where the merge agent was failing with the above error message. Here is the merge agent verbose log with the error message.

Message: Batch send failed

2011-04-20 13:53:00.546 [39%] [346 sec remaining] Bulk copied data into table 'MSmerge_tombstone' (0 rows)

2011-04-20 13:53:00.843 Category:NULL

Source:  Microsoft SQL Server Native Client 10.0

Number:  2627

Message: Violation of PRIMARY KEY constraint 'PK__table__321442E2C9'. Cannot insert duplicate key in object 'dbo.usertable'.

2011-04-20 13:53:00.843 [39%] [346 sec remaining] Bulk copying data into table 'usertable'

2011-04-20 13:53:01.171 Category:NULL

Source:

Number:  20253

Message: To obtain an error file with details on the errors encountered when initializing the subscribing table, execute the bcp command that appears below. Consult the BOL for more information on the bcp utility and its supported options.

2011-04-20 13:53:01.171 [39%] [346 sec remaining] Bulk copied data into table 'sysmergesubsetfilters' (43 rows)

2011-04-20 13:53:01.468 Category:NULL

Source:

Number:  20253

Message: bcp "databasename"."dbo"."usertable" in "C:\DOCUME~1\SVRFPR~1\LOCALS~1\Temp\1867ea6e9e1944bd9eee9d70f490133b\usertable_22.bcp" -e "errorfile" -t"\n<x$3>\n" -r"\n<,@g>\n" -m10000 -SServername -T -w

Based on the error message, it appears that the merge agent is hitting duplicate data in the bcp file while inserting rows into the table at the subscriber.

The bcp file was in native mode, so we could not read it directly to check for duplicates. We queried the user table at the publisher and did not find any duplicates there. To confirm that the bcp file contained duplicate data, we generated the snapshot in character mode, inserted the data into a temp table, and did indeed see duplicate rows. So the question became: how is the snapshot agent generating the bcp file, and why is it pulling duplicates from the table?
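
For illustration, here is roughly the kind of check involved. This is only a sketch: usertable and its key column PKCol are placeholders for the real table and key, the staging file path is made up, and the terminators are simply copied from the bcp command in the log above, so adjust everything to your own snapshot.

-- Look for duplicate key values in the published table (PKCol is a placeholder for the real key column)
SELECT PKCol, COUNT(*) AS cnt
FROM dbo.usertable
GROUP BY PKCol
HAVING COUNT(*) > 1;

-- Create an empty staging table with the same structure, load the character-mode
-- snapshot file into it, and repeat the same duplicate check
SELECT * INTO #usertable_staging FROM dbo.usertable WHERE 1 = 0;

BULK INSERT #usertable_staging
FROM 'C:\Temp\usertable_22.bcp'   -- placeholder path to the character-mode .bcp file
WITH (DATAFILETYPE = 'char',
      FIELDTERMINATOR = '\n<x$3>\n',
      ROWTERMINATOR = '\n<,@g>\n');

SELECT PKCol, COUNT(*) AS cnt
FROM #usertable_staging
GROUP BY PKCol
HAVING COUNT(*) > 1;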

I started looking at the publication properties and found that the publication uses parameterized row filters. It struck me that the join logic the snapshot agent uses might be causing the duplicates. To figure out what query the snapshot agent uses to generate the bcp file, we have to look at the views for each table in the join filter.
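
As a side note, you can list the join filters defined on a publication with sp_helpmergefilter; a quick sketch, where the publication name is a placeholder:

-- List the join filters defined on the publication (run in the publication database)
EXEC sp_helpmergefilter @publication = N'MyMergePub';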

I executed the following query on the publisher database.

SELECT * FROM sys.objects WHERE type = 'V' AND name LIKE 'MSmerge%'

This query returns all the merge replication views in the database. We should be looking at the views that follow this naming convention:

Msmerge_publicationname_objectname_view
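
Once you know which view you need, you can script out its SELECT statement directly; for example, assuming a hypothetical view name Msmerge_MyMergePub_usertable_view:

-- Return the full definition of a merge filter view (view name is a placeholder)
SELECT OBJECT_DEFINITION(OBJECT_ID(N'dbo.Msmerge_MyMergePub_usertable_view')) AS view_definition;

-- Or list it line by line
EXEC sp_helptext N'dbo.Msmerge_MyMergePub_usertable_view';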

The join logic in this case was very complex: the customer had six hierarchy levels of join filters. For example, let's say we are replicating the SalesOrderHeader and SalesOrderDetail tables, and the join filter on the SalesOrderDetail article is defined as SalesOrderHeader.SalesOrderID = SalesOrderDetail.SalesOrderID. The view for the SalesOrderDetail table would then refer to the view on the SalesOrderHeader table, so Msmerge_publicationname_SalesOrderDetail_view references Msmerge_publicationname_SalesOrderHeader_view.
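
For reference, a join filter like the one in this example is normally created with sp_addmergefilter; a sketch along these lines, where the publication and filter names are assumptions:

-- Add a join filter so SalesOrderDetail rows are replicated along with their SalesOrderHeader parent
EXEC sp_addmergefilter
    @publication = N'MyMergePub',
    @article = N'SalesOrderDetail',
    @filtername = N'SalesOrderDetail_SalesOrderHeader',
    @join_articlename = N'SalesOrderHeader',
    @join_filterclause = N'[SalesOrderHeader].[SalesOrderID] = [SalesOrderDetail].[SalesOrderID]',
    @join_unique_key = 1;

Note that @join_unique_key = 1 asserts that the join column is unique in the parent article; whatever the setting was in this case, the snapshot join clearly depends on the parent side not containing duplicate rows for the join key.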

After constructing the complete query from the views, we had the SELECT statement the snapshot agent uses to generate the bcp data. When we executed it on the publisher, we could see that it did indeed return duplicate data. Examining the data showed that the same child row matched three rows in the parent table, so the join between the two tables was producing the duplicates. The customer deleted the duplicate rows from the parent table, we reran the SELECT query to confirm it no longer returned duplicates, and then a new snapshot was generated and the merge agent completed successfully.
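
A check like the following can be run before regenerating the snapshot to catch this condition; it reuses the SalesOrderHeader/SalesOrderDetail names from the example above rather than the customer's actual tables:

-- Parent rows that share the same join key value
SELECT SalesOrderID, COUNT(*) AS cnt
FROM dbo.SalesOrderHeader
GROUP BY SalesOrderID
HAVING COUNT(*) > 1;

-- Child rows that match more than one parent row through the join filter
SELECT d.SalesOrderDetailID, COUNT(*) AS parent_matches
FROM dbo.SalesOrderDetail AS d
JOIN dbo.SalesOrderHeader AS h
  ON h.SalesOrderID = d.SalesOrderID
GROUP BY d.SalesOrderDetailID
HAVING COUNT(*) > 1;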

posted by: Chris Skorlinski

