A table [tbl_old] has about 3 billion rows and occupies about 120 GB of disk space. I created [tbl_new] with the same schema (I scripted out [tbl_old], renamed the table in the script to [tbl_new], and ran it).
But when I ran this SQL script:

insert into [tbl_new]
select * from [tbl_old]

the transaction log file grew to about 780 GB, filled the disk, and the statement hung.
My questions are:
Is it reasonable that a single insert of about 120 GB of data needs more than 780 GB of transaction log? Why isn't 780 GB enough?
What exactly is being written to the log file?
Background:
- The database is in the simple recovery model, and there are no indexes on [tbl_old] or [tbl_new].
- SQL Server 2008 R2 Enterprise on Windows Server 2008 64-bit.
- Intel Xeon CPU E5645 @ 2.40 GHz (2 processors, 24 logical CPUs), 64 GB memory.
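For what it's worth, I have been reading that on 2008 R2, an INSERT...SELECT into an empty heap can be minimally logged under the simple recovery model if a table lock is taken. This is the variant I am considering trying instead (untested sketch, so I may have the prerequisites wrong):

```sql
-- Hoped-for minimally logged copy: simple recovery model,
-- target is an empty heap with no indexes, and TABLOCK takes
-- a table-level lock, which I believe is required for minimal logging.
INSERT INTO [tbl_new] WITH (TABLOCK)
SELECT * FROM [tbl_old];
```

Would this keep the log from growing to several times the size of the data, or am I misunderstanding what gets logged?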