Here is the background on this issue.
I am inserting slightly over 1 million records into a table from within a C++ program using ADO. One row at a time is read in from a flat file and converted into a format that ADO will accept, then added/updated in SQL Server.
Now, before you ask: this is a legacy application and it will not be converted to .NET, per the client's request.
I have an i7-5960X CPU overclocked to 4.4 GHz, 32 GB of memory, and no spinning disks (all Samsung SSDs).
I have SQL Server 2019 (Developer Edition) with MAXDOP set to 8, on a system with 8 physical cores and 8 logical cores.
When running the add/update process, total CPU use is 10-20%, with 1-2 cores pegged at 100% (memory use is 10 GB out of 32 GB), which slows the add/update process drastically. SQL Server shows 132 threads and accounts for 75-80% of the total CPU use. Even increasing MAXDOP to 14 doesn't stop SQL Server from maxing out individual cores at 100%.
If I comment out the function that writes to SQL Server, the app converts the flat-file data (37 fields, 387 characters per record) into the format needed for the add/update at a rate of at least 10,000 records per second. When I enable the write to SQL Server, it starts out at about 600 records per second, then slowly degrades to about 220 records per second after roughly 100,000 records.
So the question is: can I cap SQL Server's use of any single core (not its total CPU use), or force it to spread the threads/processing across all MAXDOP cores? When it maxes out one core, it does so for about 5 minutes, then switches to another core and maxes that one out at 100%, while the previous core drops to almost nothing.
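In case the exact settings matter: here is how I've been changing MAXDOP, along with the processor-affinity option I was considering as an alternative. This is just a sketch of the standard T-SQL; the specific values are examples, not what I'd necessarily run in production:

```sql
-- Show and change the instance-wide MAXDOP (an advanced option)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;  -- example value
RECONFIGURE;

-- Processor affinity: bind the SQL Server process to a CPU range.
-- This restricts which cores it runs on, but as I understand it,
-- it does not cap per-core utilization.
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 7;
```

If there's a setting I'm missing that limits per-core use rather than which cores are used, that's exactly what I'm after.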
Thanks for reading this far; any help you can offer would be really appreciated.