Channel: SQL Server Database Engine forum
Viewing all 6624 articles

DBCC CHECKDB


I have a stored procedure that gets all the database names on an instance. It then inserts the results of the DBCC execution into a table.

INSERT INTO tblErrors(
	Error,[Level],[State],MessageText,RepairLevel,[Status],[DbId],Id, IndId,
   [File],Page,Slot,RefFile,RefPage,RefSlot,Allocation
   )
  EXEC ('dbcc checkdb(''' + @db_name + ''') with tableresults, all_errormsgs')

I then scheduled a job to run the SP.

The problem is my job kept failing with

"Syntax error converting the nvarchar value 'repair_allow_data_loss' to a column of data type int."

When I run DBCC CHECKDB ('ABC'), I get:

Msg 2576, Level 16, State 1, Line 1

IAM page (0:0) is pointed to by the previous pointer of IAM page (1:80) object ID 1858105660 index ID 0 but was not detected in the scan.

CHECKDB found 1 allocation errors and 0 consistency errors in table '(Object ID 1858105660)' (object ID 1858105660).

DBCC results for 'tblBCD'.

There are 2966 rows in 9 pages for object 'tblBCD'.

I understand the table (tblBCD) has issues; however, is there a way to let the job continue (without erroring out) and just insert the problem records into tblErrors?

Thanks.

Edited: This is for SQL Server 2000.
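If it helps with the job failure: DBCC CHECKDB ... WITH TABLERESULTS returns RepairLevel as a string (e.g. 'repair_allow_data_loss'), so the conversion error suggests the tblErrors column is declared as int. A sketch of a receiving table with string-typed text columns (these types are assumptions, not the poster's actual schema):

```sql
-- Hypothetical sketch: RepairLevel and MessageText declared as character types
-- so DBCC CHECKDB ... WITH TABLERESULTS output inserts without conversion errors.
CREATE TABLE tblErrors (
    Error       int,
    [Level]     int,
    [State]     int,
    MessageText varchar(7000),
    RepairLevel varchar(100),   -- receives strings such as 'repair_allow_data_loss'
    [Status]    int,
    [DbId]      int,
    Id          int,
    IndId       int,
    [File]      int,
    Page        int,
    Slot        int,
    RefFile     int,
    RefPage     int,
    RefSlot     int,
    Allocation  int
)
```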





Suggestions For Handling Bulk Updates Without Blocking Local User Updates


Hi,

This is a request for general implementation suggestions.

We have a CRM database used by a call-center application that lets reps update customer info during business hours. Outside of business hours we receive data feeds from another source that are bulk-uploaded into the database to refresh the data. This has worked fine until now, but we are expanding the app to offices in other countries and are beginning to encounter more blocking during the bulk upload, because the time difference means the app is now being used outside our local business hours.

It seems this would be a common problem, but I haven't been able to identify a good source of information on methods to overcome this. 

What suggestions do people have to complete bulk loads while still allowing updates by local users?

Ideas I have been considering include duplicating the database and performing merge replication, using Service Broker to queue updates during the bulk load, and using snapshot isolation or isolation levels with row versioning.
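As one concrete illustration of the row-versioning idea above, read-committed snapshot isolation can be enabled per database so readers see versioned rows instead of blocking on writers (the database name is a placeholder):

```sql
-- Sketch: enable row versioning so readers no longer block on the bulk load's
-- writers. WITH ROLLBACK IMMEDIATE needs a brief window of exclusive access.
ALTER DATABASE CrmDb SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE CrmDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```

Note this addresses reader-versus-writer blocking; writer-versus-writer conflicts during the load would still need batching or queuing.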

Any ideas would be greatly appreciated.

Thanks,

Reinis

After Piecemeal Primary FileGroup Restoration, Database Property Window Shows other FileGroups also


Hi,

I have a SQL Server 2008 database of about 90 GB.

It consists of one huge table of about 80 GB, and that table's data is entirely static.

We need to restore backups of this database into the development environment frequently, so we moved the static data into a secondary filegroup.

The database now consists of two filegroups, PRIMARY and SECONDARY_FG, and the entire static table lives in SECONDARY_FG.

We took the backup of the primary filegroup using the following command:

Backup Database MYDB FILEGROUP = 'PRIMARY' to disk = 'E:\SQLBackup\MYDB_Primary_FG_Only.bak'
WITH NOFORMAT, NOINIT, NAME = N'MYDB - Full Filegroup backup', SKIP, NOREWIND, NOUNLOAD

The backup went fine, with the expected size of about 10 GB.

I restored this backup into a new database, MYDB_PRIMARY_ONLY, using the following command:

RESTORE DATABASE [MYDB_PRIMARY_ONLY] FILE = 'MYDB_DATA', FILEGROUP = 'PRIMARY'
FROM DISK = 'E:\SQLBackup\MYDB_Primary_FG_Only.bak'
WITH FILE = 1,
MOVE 'MYDB_DATA' TO 'E:\SQLDATA\MYDB_PRIMARY_ONLY_DATA.mdf',
MOVE 'MYDB_LOG' TO 'E:\SQLDATA\MYDB_PRIMARY_ONLY_LOG.ldf',
NOUNLOAD, PARTIAL
GO

The restore also went fine, and the newly restored database MYDB_PRIMARY_ONLY is about 10 GB.

But here's the gotcha: when checking the database properties of this newly restored database, the Files & Filegroups pages also show the files and filegroup belonging to SECONDARY_FG.

My actual question:

How does the secondary filegroup come into the picture when only the primary filegroup was restored?

Moreover, when I tried to remove the secondary filegroup and its data file, it threw the following error.

Command used to remove the file and filegroup:

USE [MYDB_PRIMARY_ONLY]
GO
ALTER DATABASE [MYDB_PRIMARY_ONLY]  REMOVE FILE [SECONDARY_DATA_FILE.ndf]
GO
ALTER DATABASE [MYDB_PRIMARY_ONLY] REMOVE FILEGROUP [SECONDARY_FG]
GO

Error :

Msg 5056, Level 16, State 2, Line 1
Cannot add, remove, or modify a file in filegroup 'SECONDARY_FG' because the filegroup is not online.

Can anyone explain this error message? Thanks in advance for any guidance.
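For context, a partial restore keeps the metadata of every filegroup, and files not yet restored sit in a RESTORING/offline state, which is consistent with the error above. One way to see the state each file was left in is to query the catalog views:

```sql
-- Sketch: inspect file and filegroup state after the partial (piecemeal) restore.
USE [MYDB_PRIMARY_ONLY];
SELECT f.name AS file_name, fg.name AS filegroup_name, f.state_desc
FROM sys.database_files AS f
LEFT JOIN sys.filegroups AS fg ON f.data_space_id = fg.data_space_id;
```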

"Fetch Next from" very slow


Hi all ~ There is a cursor inside a stored procedure, and in the middle of the loop it became extremely slow.

When I look into "sysprocesses", the "Fetch Next from cursor_name" statement suddenly shows a huge number of logical reads.

I am using SQL 2008 R2 SP2 .

Has anyone met this kind of case before?

Understanding Scale in perfmon charts


Hello,

I have collected the perfmon data and I can read it when it is in excel spreadsheets. I want to create charts over this data but when I see the scale, I don't understand what range of data is getting displayed. Can someone help me in understanding the scale setting?

Regards.

Cumulative update patch 9 for SQL 2012 SP1 doesn't update SQL version number from 11.0.3000 to 11.0.3412


I have applied CU9 to the SQL 2012 SP1 server. The update went through successfully and I restarted the server. However, the SQL version didn't change to 11.0.3412 as expected; it still shows 11.0.3000.

Installed Updates in Programs shows "Hotfix 3412 for SQL Server 2012 (64-bit)" installed on the server.

Has anybody else experienced this, or does anyone have any insight? How can I be sure that CU9 is actually installed?

I appreciate any response!

Deadlock waitsource page db_id:1:some number


Does that mean the contention is on the PFS page?

Any guidance here?

There is more than one data file, but I guess that wouldn't make a difference, since the same object will still be on the same data file?

Thanks!


Paula

Linked Server Error


EXEC sp_addlinkedserver
    @server = 'ExcelServer2',
    @srvproduct = 'Excel', 
    @provider = 'Microsoft.ACE.OLEDB.12.0',
    @datasrc = 'E:\Emp_database.xlsx',
    @provstr = 'Excel 12.0; IMEX=1; HDR=YES';


After creating the linked server, I executed the query below:

    SELECT * FROM OPENQUERY
    (ExcelServer2, 'SELECT * FROM [Sheet1$]')

Error Message :

OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "ExcelServer2" returned message "Unspecified error".

Msg 7303, Level 16, State 1, Line 1

Cannot initialize the data source object of OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "ExcelServer2".
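One configuration step that is often involved with this provider (a sketch of one thing to check, not a guaranteed fix): the ACE OLE DB provider generally has to be allowed to run in-process for linked-server queries to initialize it.

```sql
-- Sketch: set provider options for Microsoft.ACE.OLEDB.12.0 so it can run
-- inside the SQL Server process and accept parameters.
EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'AllowInProcess', 1;
EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'DynamicParameters', 1;
```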


Please help me with this; it's urgent.

Thanks in advance.

Recovery from SQL Server 2005


I am using SQL Server 2005. My database is incremental; new transactions are added to it daily. By mistake I deleted some transactions from a table in that database. How can I recover them? I don't have any database backup. Please help me.

Lock granularity


Hi,

I have a situation where I don't understand why one stored procedure is blocking another. The procedure issues an UPDATE according to this pattern:

UPDATE [T] SET Col1 = Col1 + 1 WHERE Col2_FK = @id

Col1 here is a regular attribute and Col2_FK is a foreign key. Thanks to a mismatch between data model and application model this update will always affect exactly one row. (Col2_FK has a constraint and is NOT NULL and I've furthermore done a count(*) grouped by Col2_FK having count(*) > 1 to verify there are no two rows in T with the same value in Col2_FK.)

If I open up two tabs in Management Studio, start a transaction in each, and then run the procedure first in tab #1 and then in tab #2 with different @id values, the second session is always blocked by the first (until I commit or roll back).

I would expect an X lock at row level only, with IX on the page and table. But then the two transactions would run concurrently, since they do not in fact share any data. In other words, I would expect #1 to block #2 only when the value of @id is the same in both transactions.

Table T has a clustered index (of type int identity(1,1)), and so does the table referenced by the FK. There's no index for the FK itself.

I have tried to use SQL Profiler to understand what's going on, but the trace contains only mysterious IDs (that look suspiciously like addresses - it seems to always be valid hex values) and I don't understand if the locks are row locks or page locks.

If the update is done based on PK instead of FK, it appears only row locks are held and both transactions can run concurrently. 

There's so little data in the table it's probably all on the same page (data space used, as viewed in object explorer details in management studio, is 8KB = 1 page). I therefore suspect an X lock exists at the page level, but I don't understand WHY the database engine would go ahead and lock in practice the entire table just because I update a single row in it (out of ~100 small rows).
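Since the post notes there is no index on the FK, one hypothesis worth testing is that the UPDATE has to scan the clustered index and takes locks on every row it examines; an index on Col2_FK would let it seek directly to the single matching row (the index name below is illustrative):

```sql
-- Sketch: give the UPDATE's WHERE clause a seekable access path so it only
-- has to lock the one row it actually modifies.
CREATE NONCLUSTERED INDEX IX_T_Col2_FK ON [T] (Col2_FK);
```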


The affinity mask specified does not match the CPU mask on this system.


We are having a problem with one of our SQL servers, and in comparing it to the backup server which is working fine, I noticed some differences. I attempted to correct the differences, but no luck.

The Dell server has 4 dual-core processors, and at one point hyper-threading was enabled. One of our DBAs recommended that it be turned off. We didn't have any major problems until recently, and it seems that getting this setting right is the linchpin. Any suggestions?

John

EXEC sys.sp_configure N'show advanced options', N'1'
RECONFIGURE WITH OVERRIDE
GO
EXEC sys.sp_configure N'affinity mask', N'0'
GO
EXEC sys.sp_configure N'affinity I/O mask', N'0'
GO
RECONFIGURE WITH OVERRIDE
GO
EXEC sys.sp_configure N'show advanced options', N'0'
RECONFIGURE WITH OVERRIDE
GO

 

-----------------------------------------

 

Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.

Msg 5832, Level 16, State 1, Line 1

The affinity mask specified does not match the CPU mask on this system.

Msg 15123, Level 16, State 1, Procedure sp_configure, Line 51

The configuration option 'affinity mask' does not exist, or it may be an advanced option.

Msg 15123, Level 16, State 1, Procedure sp_configure, Line 51

The configuration option 'affinity I/O mask' does not exist, or it may be an advanced option.

Msg 5832, Level 16, State 1, Line 1

The affinity mask specified does not match the CPU mask on this system.

Configuration option 'show advanced options' changed from 1 to 0. Run the RECONFIGURE statement to install.

Msg 5832, Level 16, State 1, Line 1

The affinity mask specified does not match the CPU mask on this system.

Cannot change SQL 2008 R2 Service account from local System to any account


Windows 7 64 Bit Developer Edition of SQL Server 2008 R2

Successfully changed SQL Server Agent, SQL Server Reporting Services, SQL Server Analysis Services, SQL Server Integration Services, and SQL Full-text Filter Daemon Launcher from the Local System account to a domain account. However, I cannot change the SQL Server account. SQL Server Configuration Manager generates the error below:

WMI Provider ERROR (in window title bar)

Big red X followed by "The parameter is incorrect. [0x80070057]".

I have tried many things with no luck:

Tried using a different local administrator account

Tried putting the Domain account I want to change to in the local admin group

Tried adding the Domain account I want to change to in all of the SQL created local groups

I think I'm going to have to reinstall to change the account. What's up with that?!

-thanks for any help in advance. It's probably something dumb I did or did not do.

scott

Cannot log back in after deleting default database


Hi,

I deleted my default database on my local machine with SQL Server 2008 R2 in order to restore a new copy. But after deleting the database I cannot log back in, neither as 'sa' nor with Windows authentication.

The following is the error I am getting:

"Can not connect to 'SQLSERVERNAME'. Login failed for user 'sa'. (Microsoft SQL Server, Error: 18456)"

Server Name: 'SERVERNAME'

Error Number: 18456

Severity: 14

State: 1

Line Number: 65536

This happened right after deleting the default database. Is there a way to bring back the deleted default database, or to somehow log back in to my local SQL Server?

MSSQLSERVER stopped after reboot for domain account


Hi everyone,

I installed SQL Server 2012 Enterprise under a domain account: domain\username. During installation, my domain account was temporarily granted admin rights and everything ran ok. But after rebooting the PC (the domain account loses its admin rights), the MSSQLSERVER service stopped (actually all SQL Server services stopped: MSSQLFDLauncher, MSSQLServerOLAPService, etc.).

Can anyone tell me whether this is a SQL Server configuration problem (i.e., does the domain user need specific settings) or a Windows access-rights problem? I use the default service setting, e.g. NT Service\MSSQLSERVER.

Regards

SQL Server Domain password

Is there any way I can recover the domain password of the account the SQL Server service runs under? Is there any way I can decrypt it? Please help me. Thanks.

backup and restore


I have two databases on the same mssql instance - testdb and mirrordb.

testdb has some data. mirrordb has no data.

I want to copy the data of testdb into mirrordb.

so I want to backup testdb and restore mirrordb from the backup file of testdb.

I tried :

BACKUP DATABASE "testdb" TO DISK='c:\mssql\backup\testdb.bkup' WITH NOFORMAT,INIT,STATS=10
restore DATABASE "mirrordb" from DISK='c:\mssql\backup\testdb.bkup' WITH recovery,STATS=10

but get the error :

Msg 3154, Level 16, State 4, Line 1

The backup set holds a backup of a database other than the existing 'mirrordb' database.

Msg 3013, Level 16, State 1, Line 1

RESTORE DATABASE is terminating abnormally.

Can somebody tell me how to do this right?

I appreciate any feedback.
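Msg 3154 is the safety check against overwriting a different database; a restore of this shape usually needs REPLACE, plus MOVE clauses so mirrordb's files don't collide with testdb's. A sketch (the logical file names 'testdb' and 'testdb_log' are assumptions; check them with RESTORE FILELISTONLY first):

```sql
-- Sketch: list the logical file names inside the backup, then restore with
-- REPLACE and MOVE so the restored files get their own physical paths.
RESTORE FILELISTONLY FROM DISK = 'c:\mssql\backup\testdb.bkup';

RESTORE DATABASE mirrordb
FROM DISK = 'c:\mssql\backup\testdb.bkup'
WITH REPLACE, RECOVERY, STATS = 10,
     MOVE 'testdb'     TO 'c:\mssql\data\mirrordb.mdf',
     MOVE 'testdb_log' TO 'c:\mssql\data\mirrordb_log.ldf';
```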

Add IBM.Data.DB2.iSeries.Dll Assembly To SQL CLR Causes Errors


I'm writing CLR procs against DB2. The code works standalone, but when I try to add IBM.Data.DB2.iSeries.dll as an assembly to my SQL Server database, I get the error below. I can't find the assembly in the GAC on my server. What's the dependency chain for this DLL?


DJ Baby Anne's Biggest Fan................

query hint to force SQL to use a temporary table in a CTE query?


Hi,

Is it possible to tell SQL Server to materialize a CTE into a temporary table by itself?

I have a query that starts with a CTE in which I group my records; a second, recursive CTE then uses the first one, and finally my SELECT statement, like:

with cte as (select a, b, c, row_number() ... from mytable group by a, b, c),
cte2 as (
    select ... from cte A where rownum = 1
    union all
    select ... from cte B inner join cte2 C on ...
)
select * from cte2

This query is very, very slow, but if I store the first CTE in a temporary table and have cte2 consume the temp table rather than the CTE, the query is very fast: creating the temp table takes 10 seconds and the select takes 20 seconds, while the initial query didn't return anything after 2 minutes!

So what can I do to get the query to run in under 30 seconds without creating the temp table first?

Is there a query hint that tells SQL Server to convert the CTE into a temp table?

Since I have a lot of queries to manage, I want to keep my model simple without relying on temporary tables every time I hit this issue.

thanks.
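As far as I know there is no hint that forces a CTE to be materialized; the temp-table workaround the post describes, written out as a sketch (the column names are the post's own placeholders, and the join condition is purely illustrative):

```sql
-- Sketch: materialize the grouping CTE once, then let the recursive part read
-- the temp table instead of re-evaluating the grouping on every reference.
SELECT a, b, c,
       ROW_NUMBER() OVER (PARTITION BY a ORDER BY b) AS rownum
INTO #grouped
FROM mytable
GROUP BY a, b, c;

WITH cte2 AS (
    SELECT a, b, c, rownum FROM #grouped WHERE rownum = 1
    UNION ALL
    SELECT g.a, g.b, g.c, g.rownum
    FROM #grouped AS g
    INNER JOIN cte2 AS c2 ON g.a = c2.a AND g.rownum = c2.rownum + 1  -- illustrative
)
SELECT * FROM cte2;
```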

Filetable Query Performance


Hi,

I have a small web app that provides a search-engine-like keyword entry to search approx. 50k documents stored in directories, which I access through a FileTable query on SQL Server 2012.

I get great performance on this - typically less than half a second. Except for the first query of the day - it takes over 30 seconds. Then same or different keywords will be back under half a second.

What could SQL Server be initializing? Does anyone else see this kind of behaviour?

I added a schedule to populate the Full-text catalog at 06:00 every day but that made no difference.

Any help appreciated.

Sql DB transaction log and DB backups


Dear All,

Is it possible to back up a SQL database to a network drive? And when performing database and log backups, does the log backup truncate the log?

My SQL Server disk got full; the log is 1 TB in size. What is the best way to shrink the database and log file?
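For what it's worth, a database backup by itself does not free log space; in the FULL recovery model it is the log backup that marks log space reusable, after which the physical file can be shrunk. A hedged sketch (database name, UNC path, and logical log file name are placeholders):

```sql
-- Sketch: back up to a network share, back up the log so its space can be
-- reused, then shrink the physical log file to a target size.
BACKUP DATABASE MyDb TO DISK = '\\BackupServer\Share\MyDb.bak';
BACKUP LOG MyDb TO DISK = '\\BackupServer\Share\MyDb_log.trn';
DBCC SHRINKFILE (MyDb_log, 10240);  -- target size in MB
```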

Regards

Rabbani


RaSa
