
I am using a SQL trigger to update the updatedAt column whenever a row is inserted or updated, on PostgreSQL 9.6.

Here is the function that the trigger calls:

CREATE FUNCTION sp_updatedatstamp() RETURNS trigger AS $updatedat_stamp$
  BEGIN
    -- Stamp the row with the current time before it is written
    NEW."updatedAt" := now();
    RETURN NEW;
  END;
$updatedat_stamp$ LANGUAGE plpgsql;

Here is the trigger definition:

CREATE TRIGGER tg_de_installment_cdcupdatedat
  BEFORE INSERT OR UPDATE ON public."table"
  FOR EACH ROW EXECUTE PROCEDURE sp_updatedatstamp();

Here is the table definition:

CREATE TABLE "table" (
    id serial NOT NULL,
    "type" text NULL,
    "transactionDate" timestamptz NULL,
    remark text NULL,
    "createdAt" timestamptz NULL,
    "updatedAt" timestamptz NULL,
    "deletedAt" timestamptz NULL,
    CONSTRAINT table_pkey PRIMARY KEY (id)
);

This code works fine with small amounts of data, but a problem appears when an application executes a bulk update (100K+ rows) that takes about 30 minutes.

The trigger gives all 100K+ rows the same updatedAt as the first row, so the timestamps of the remaining rows are wrong. As a result, our data pipeline, which runs on a 5-minute schedule, misses that data. Is there a way to make the trigger use the actual per-row timestamp for such a large update?

1 Answer

As stated in the manual (https://www.postgresql.org/docs/current/functions-datetime.html):

Since these functions return the start time of the current transaction, their values do not change during the transaction. This is considered a feature: the intent is to allow a single transaction to have a consistent notion of the “current” time, so that multiple modifications within the same transaction bear the same time stamp.

(emphasis mine)
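
You can observe this directly in a session (a minimal illustration; pg_sleep() is only there to let some wall-clock time pass):

BEGIN;
SELECT now() AS txn_time, clock_timestamp() AS wall_time;
SELECT pg_sleep(2);
-- now() still returns the transaction start time; clock_timestamp() has advanced
SELECT now() AS txn_time, clock_timestamp() AS wall_time;
COMMIT;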

So if you want a different value for each call, use clock_timestamp() instead.
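
Applied to your case, that means swapping now() for clock_timestamp() in the trigger function (a sketch based on your sp_updatedatstamp(); CREATE OR REPLACE lets you change the function in place, and the existing trigger keeps calling it unchanged):

CREATE OR REPLACE FUNCTION sp_updatedatstamp() RETURNS trigger AS $updatedat_stamp$
  BEGIN
    -- clock_timestamp() returns the actual wall-clock time of this call,
    -- so each row in the bulk update gets its own timestamp
    NEW."updatedAt" := clock_timestamp();
    RETURN NEW;
  END;
$updatedat_stamp$ LANGUAGE plpgsql;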