Msg 2576, Level 16, State 1, Line 1 or Index Allocation Map (IAM)

There is a project currently hosted at another location within our infrastructure. In the not-too-distant future it will be moved to our data center and fall under my support. In preparation for the move, servers have been set up, and I am using a copy of the database to configure the database(s) and the various components required to support the project.

The project requirements call for a primary database server and a secondary one. The secondary will be used for reporting purposes. I was told they currently use transactional replication to keep the two in sync so reports can be generated with near-real-time information. I ran into a little problem while setting things up and wanted to share my experience.

The Error


Msg 2576, Level 16, State 1, Line 1

The Index Allocation Map (IAM) page (0:0) is pointed to by the previous pointer of IAM page (24:40) in object ID 16719112, index ID 1, partition ID 72057594152419328, alloc unit ID 72057594158579712 (type In-row data), but it was not detected in the scan.

CHECKDB found 44 allocation errors and 0 consistency errors in database 'MyReport'.

repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (MyReport).

Note: this error was being thrown on the secondary/report database server.
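For reference, the error above surfaced when running a standard consistency check against the report database, along these lines (the database name comes from the error message; the options are just the usual ones for getting the full error list):

```sql
-- Full consistency check of the report database; show every error, skip informational messages
DBCC CHECKDB ('MyReport') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```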


First, I called up my trusty DBA partner, the Internet, and looked up the error message and what it meant. I found some information, but most of it was on forums. After reading up on the error and its potential causes, I looked at the primary database again, thinking maybe I had somehow missed doing something. But the error was not being reported there.

Since there were no errors in the primary database, I thought maybe it had to do with something in the replication that I had either missed or set up wrong. So I dropped the replication and the reporting database and started over. After restoring a brand-new backup on the report server, I ran CHECKDB and did not get any errors. O.K., some progress.

Next, I set up the replication again, and once it was running, I ran CHECKDB. BAM! The error appeared right away. More progress, but what in the replication process could be causing the corruption?

I then asked my other DBA partner, #SQLHelp on Twitter. Robert Davis ( blog | twitter ) was instrumental in helping me focus back on the actual cause: indexes. More specifically, the metadata for them was missing. I had moved away from the actual error when I determined (or thought I did) that something in the replication was causing the corruption. Wrong.
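One quick check worth doing at this point is resolving the object ID reported in the error message to a name. The ID below is the one from my error output; a NULL result in the affected database is itself a hint that the object's metadata is missing:

```sql
-- Map the object ID from the Msg 2576 output to an object name
-- (run inside the affected database; NULL suggests missing metadata)
USE MyReport;
SELECT OBJECT_NAME(16719112) AS object_name;
```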


Looking at the indexes, I found the ones causing the error were the Full-Text indexes. Further investigation showed I did not have the Full-Text component installed on two of the secondary/report servers. Doh! After installing the feature and redoing the entire replication setup, the error was gone.
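If you want to verify the same two things up front, a sketch like this covers both: whether the Full-Text component is installed on the instance, and which tables in the database carry full-text indexes (both queries use standard catalog views and server properties):

```sql
-- 1 = Full-Text component is installed on this instance, 0 = it is not
SELECT SERVERPROPERTY('IsFullTextInstalled') AS FullTextInstalled;

-- List the tables that have full-text indexes defined in the current database
SELECT t.name AS table_name, fi.object_id
FROM sys.fulltext_indexes AS fi
JOIN sys.tables AS t
    ON t.object_id = fi.object_id;
```

Running the first query on each secondary before configuring replication would have caught my missing-feature problem immediately.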


Lesson learned? Focus on the error and its root cause, and try not to lose that focus. Make a plan and then follow it. I have check sheets to follow, but sometimes mistakes happen. We can either learn from them or repeat them. Hopefully others can learn from my mistakes.



