I have a very simple table:
- ID (integer primary key, identity increment on)
- field1 (large text field about 4KB per record)
- field2 (large text field about 4KB per record)
I have about 1 million rows in this table.
I tried a very simple SELECT COUNT(*) FROM table, with no filters and no WHERE clause. This query takes about 1 minute to execute.
Is this normal? I don't think 1 million rows is a lot for a database.
How can I figure out what is taking so long?
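In case it helps, this is how I've been measuring the query (assuming SET STATISTICS is the right tool for this; the table name is from my schema below):

```sql
-- Show I/O and timing details in the Messages tab
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(*) FROM [dbo].[Keypairs];

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

I assume the logical/physical read counts would tell me whether the whole table (roughly 8 GB of text data) is being scanned just to count the rows.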
EDIT: one strange thing I noticed is that when I run SELECT TOP(10) * FROM table, the returned rows are NOT ordered by ID. That's strange to me because I expected these queries to come back ordered by the primary key, since the clustered index should be the default on-disk ordering of the records. Not sure if this is a symptom of the problem or not.
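To be concrete, this is the pair of queries I'm comparing (my understanding, which may be wrong, is that only the second one actually guarantees the order):

```sql
-- Without ORDER BY, SQL Server makes no guarantee about row order
SELECT TOP (10) * FROM [dbo].[Keypairs];

-- With an explicit ORDER BY on the clustered primary key
SELECT TOP (10) * FROM [dbo].[Keypairs] ORDER BY [Id] ASC;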
EDIT2: here is the database table in question as requested by one comment:
USE [bfcn_keys]
GO
/****** Object: Table [dbo].[Keypairs] Script Date: 03/05/2022 19:49:09 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Keypairs](
[public] [nvarchar](max) NOT NULL,
[private] [nvarchar](max) NOT NULL,
[nonce] [decimal](20, 0) NOT NULL,
[Id] [int] IDENTITY(1,1) NOT NULL,
CONSTRAINT [PK_Keypairs] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
This table was generated by EF Core; I didn't do anything special.
The database is SQL Server running on an AWS t3.medium instance. Nothing else is running on it.
(This is for load testing, not production, btw; I'm not storing private keys on the server.)
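One workaround I'm considering, based on my reading (not verified): pulling the row count from partition metadata instead of scanning the table. My understanding is that this avoids touching the data pages entirely, though the count may be approximate under concurrent writes:

```sql
-- Row count from partition metadata, no table scan
SELECT SUM(p.row_count) AS approx_rows
FROM sys.dm_db_partition_stats AS p
WHERE p.[object_id] = OBJECT_ID(N'dbo.Keypairs')
  AND p.index_id IN (0, 1);  -- heap (0) or clustered index (1) only
```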