
C# / SQL Server: optimizing deletion of a large number of rows

Tags: C#, SQL, SQL Server, Stored Procedures, NHibernate

Suppose there is a table with the following columns:

Id               int primary key
Date             datetime
Value            double
Fund_id          reference
FundModel_id     reference
FundDataField_id reference
The table has 3,700,000 rows, roughly 4,000 rows per fund. What is the fastest way to delete rows from this table? I need to delete about 700,000 rows at a time, but it takes around 10 minutes, which is far too long for me.

Currently I delete rows by fund id like this:

Delete from FundYearDetail where Fund_id In (2054,2056,2058,2059,2061,2063,2064,2065,2066,2067,2069,2072,2073,2076,2078,2079,2080,2081,2082,
2086,2088,2090,2093,2095,2096,2097,2099,2101,2102,2103,2104,2105,2106,2107,2109,2110,2114,2115,2116,2117,2118,2119,2342,2125,2126,2127,2128,2129,2130,2131)
This statement affects about 200,000 rows and takes a very long time to complete. By splitting it into two queries, each taking about 4 seconds, I get better performance.

Does anyone know a better solution?

Note: I am using Fluent NHibernate for data access; if anyone knows a better solution using NHibernate, please let me know. Would executing a stored procedure improve performance? Thanks.
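As a rough illustration of the stored-procedure idea raised in the question, the batched-delete pattern could be wrapped in a procedure like the sketch below. The procedure name, the `@BatchSize` parameter, and the `FundIdsToDelete` staging table are hypothetical, not part of the original question; the batch size would need tuning against the real workload.

```sql
-- Hypothetical sketch: delete FundYearDetail rows for the funds listed in a
-- staging table, in small batches so each statement stays short-lived.
CREATE PROCEDURE dbo.DeleteFundYearDetailBatched
    @BatchSize int = 10000
AS
BEGIN
    SET NOCOUNT ON;

    WHILE 1 = 1
    BEGIN
        DELETE TOP (@BatchSize) t1
        FROM dbo.FundYearDetail t1
        JOIN dbo.FundIdsToDelete t2 ON t1.Fund_id = t2.Id;  -- staging table of fund ids

        IF @@ROWCOUNT = 0 BREAK;  -- nothing left to delete
    END
END
```

The caller would populate `dbo.FundIdsToDelete` (for example via `SqlBulkCopy`, which the asker already uses for the import) and then execute the procedure, keeping the whole operation inside the database rather than hydrating entities through NHibernate.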

How about this:

declare @tableIds table (Id int)

insert into @tableIds
select 2054 as Id union all
...
-- other ids here
...
select 2131 as Id

while exists (select 1 from FundYearDetail t1 join @tableIds t2 on t1.Fund_id = t2.Id)
begin
    delete top (10000) t1
    from FundYearDetail t1 join @tableIds t2 on t1.Fund_id = t2.Id
end

You can do a batched delete like this:

SELECT 'Starting'  -- sets @@ROWCOUNT
WHILE @@ROWCOUNT <> 0
    DELETE TOP (50000) dbo.FundYearDetail  -- change TOP value as needed
    WHERE Fund_id IN (2054,2056,2058,2059,2061,2063,2064,2065,2066,2067,2069,2072,2073,2076,
                      2078,2079,2080,2081,2082,2086,2088,2090,2093,2095,2096,2097,2099,2101,
                      2102,2103,2104,2105,2106,2107,2109,2110,2114,2115,2116,2117,2118,2119,
                      2342,2125,2126,2127,2128,2129,2130,2131)

Is this a one-time operation, or a feature you need to add to the application for end users to run repeatedly? (Either way I probably wouldn't use NH for it, assuming you are hydrating all those entities just to delete them.)

Well, I use NH for data access under normal circumstances. I want to delete all these rows because I'm doing a large data import from a CSV file using SqlBulkCopy. Before importing, I want to delete all affected rows rather than update them, since I think updating that many rows would take longer than inserting them with SqlBulkCopy. Yes, I'm looking for a repeatable solution, but if you have a one-off solution as well, please share it. I assume you're not going to suggest recreating the table...

As with large updates, I would try a WHILE loop that deletes 50,000 records or fewer at a time.

What @TabAlleman says is the best way to do large deletes in SQL. This has been answered here before in slightly different forms: use a loop, GOTOs, and most importantly TOP clauses, to make it work.

I've run into problems again and again doing any bulk operation through NH. Your best bet is to push the whole operation into the database (stored procedures, SSIS, etc.). Then you can do the updates more efficiently, and perhaps avoid the deletes entirely. Deletes are expensive, especially if you have foreign keys.

200,000 rows on SQL Server Express deleted in 43 seconds. Not great.

That doesn't mean the solution is invalid; the performance problem could be caused by log file growth, blocking, rollbacks, and so on in your database.

@Greg, well, I can do it in 10 seconds. I'm not looking for a solution that merely works, sorry, because a plain DELETE already works; I'm looking for the best solution.

@MDDDC I'm sorry to hear my earlier answer didn't help performance much. I've updated my answer with another approach (somewhat risky), but do give it a try.

@MDDDC, my point is that experiencing bad performance doesn't mean this isn't the best solution. Other factors can cause poor performance; log file growth is one of them, and you can address it by sizing the log file appropriately. Transactional replication uses batched deletes to clean up its metadata: it deletes 5K rows at a time, in the same way, in seconds. It can delete millions of rows very quickly.
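The comments above point at log growth as a likely culprit, and at replication's 5K-row cleanup batches as a model. A minimal sketch of that idea, committing each batch in its own transaction so the log only has to retain a few thousand rows' worth of records at a time (the id list is abbreviated here for illustration; the real list from the question would go in its place):

```sql
-- Batched delete with per-batch commits, as discussed in the comments.
-- Small transactions keep log usage and lock footprint bounded.
WHILE 1 = 1
BEGIN
    BEGIN TRANSACTION;

    DELETE TOP (5000) dbo.FundYearDetail
    WHERE Fund_id IN (2054, 2056, 2058);  -- abbreviated id list

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION;
        BREAK;  -- no rows left to delete
    END

    COMMIT TRANSACTION;  -- each batch commits independently
END
```

Whether this beats one large DELETE depends on recovery model, indexing on Fund_id, and foreign keys; the batch size is a tuning knob, not a fixed answer.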