I am using a SQL trigger to update the updatedAt column whenever a row is inserted or updated, on PostgreSQL 9.6.
Here is the function that is called by the trigger:
CREATE FUNCTION sp_updatedatstamp() RETURNS trigger AS $updatedat_stamp$
BEGIN
    -- Stamp the row being written with the current time
    NEW."updatedAt" := now();
    RETURN NEW;
END;
$updatedat_stamp$ LANGUAGE plpgsql;
Here is the trigger script:
CREATE TRIGGER tg_de_installment_cdcupdatedat
BEFORE INSERT OR UPDATE ON public."table"
FOR EACH ROW EXECUTE PROCEDURE sp_updatedatstamp();
Here is the table definition (the name "table" is quoted because it is a reserved word):
CREATE TABLE "table" (
    id serial NOT NULL,
    "type" text NULL,
    "transactionDate" timestamptz NULL,
    remark text NULL,
    "createdAt" timestamptz NULL,
    "updatedAt" timestamptz NULL,
    "deletedAt" timestamptz NULL,
    CONSTRAINT table_pkey PRIMARY KEY (id)
);
This setup works fine for small volumes of data, but when an application executes a bulk update that touches 100K+ rows and takes around 30 minutes, the trigger gives every one of those rows the same updatedAt as the first row, so the timestamps on the remaining rows are wrong. As a result, our data pipeline, which runs on a 5-minute schedule, ends up missing this data. Is there a way to make the trigger use the actual per-row timestamp when huge amounts of data are updated?
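For reference, this minimal experiment (run in psql) seems to reproduce the underlying behavior; my assumption is that now() is pinned to the transaction start time, while clock_timestamp() follows the wall clock:

BEGIN;
SELECT now() AS tx_time, clock_timestamp() AS wall_time;
SELECT pg_sleep(2);  -- simulate a long-running statement
SELECT now() AS tx_time, clock_timestamp() AS wall_time;
-- tx_time is identical in both results; wall_time has advanced ~2 seconds
COMMIT;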