
I run PostgreSQL 9.3.5 on Windows 7, 64-bit.

My data arrives quarterly, in multiple tables (table1, ..., tableN) that are linked, intra-period, by cross-table constraints based on key identifiers. Among other columns, each table has identifiers that persist over time: pfi - persistent feature identifier and ufi - universal feature identifier.

pfi is unique per table (and it's exceedingly rare that table1.pfi = table2.pfi).
ufi is unique across all tables and across all time. It's not a hash of the row data, but you could think of it as such.

Each period, in each table, some new pfi are brought into being and some old pfi are retired. Some pfi change attributes. ufi tracks any change to any attribute for a given pfi (row), so fetching changed (and new) rows for table1 is simply a matter of:

-- 1st query
select a.*
into vm201512.property_d
from vm201512.property a
where not exists (select 1 from vm201412.property where ufi = a.ufi);

This selects all rows which are either new (new pfi) or changed in at least one column.

About 96% of each table remains unchanged in every respect. Accordingly, in analysing the cross-period changes I build a table that only includes changed and new data. This reduces the table size from ~3.5m rows to ~225k rows: that's a BIG reduction if you subsequently do spatial comparisons with relatively-complex polygons and multiple (spatial and non-spatial) JOINs.
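As an aside (not part of my original workflow): when both periods share an identical column list, the same delta could be sketched with a whole-row EXCEPT, which uses NULL-safe row comparison, unlike plain =:

```sql
-- Alternative sketch: whole-row difference between periods.
-- Assumes both tables have identical column lists in identical order.
-- EXCEPT compares rows NULL-safely (NULLs compare equal).
CREATE TABLE vm201512.property_d AS
SELECT * FROM vm201512.property
EXCEPT
SELECT * FROM vm201412.property;
```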

The property table has relatively few columns, so I can identify which elements of the data have changes as follows:

-- 2nd query
create table vm201512.property_d_changes as 
select pfi, 
   case when a.view_pfi=b.view_pfi then 0::int else 1::INT end as view_pfi,
   case when a.status=b.status then 0::int else 1::INT end as status,
   case when a.property_type=b.property_type then 0::int else 1::INT end as property_type,
   -- ... more columns
from vm201512.property_d a -- table created with first query
join vm201412.property b using (pfi);

This gives me a nice table where I can determine precisely what changes happened to a changed (not new) row. I can figure out that pfi 123456 had changes to its propnum and its status; I can figure out how many pfi had changes to their view_pfi - that sort of thing.
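Since the flags are 0/1 integers, per-column change counts fall out of simple sums over the change table (a sketch using the column names above):

```sql
-- How many changed rows touched each column (flags are 0/1, so sum() counts)
SELECT sum(view_pfi)      AS view_pfi_changes,
       sum(status)        AS status_changes,
       sum(property_type) AS property_type_changes
FROM   vm201512.property_d_changes;
```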

Several of the other tables have >50 columns, which makes the CASE statement unwieldy (I realise it only has to be coded once, but what if the data structure changes?).

Question

With two rows in 2 different tables new.table1, old.table1 where new.table1.pfi = old.table1.pfi and one or more columns different, is there a parsimonious, elegant PostgreSQL statement to figure out the changed columns? Or am I stuck with CASE?

I realise I could write a dynamic function to loop through all columns for a given table, and build the query with CASE statements.

GT.

2 Answers


Clarifications

Your comment needs addressing first:

numeric data almost always takes 0 (and text types take '')

The key word here is "almost". As long as it is not "never" (as in "never ever!"), you need to take NULL into account anyway.

no risk of testing NULL=NULL, which would return 1 inappropriately

No, it wouldn't. Anything compared to NULL with = yields NULL, even NULL = NULL. Try it. You need to understand NULL comparison.
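A quick demonstration (not part of the original answer) of both behaviours:

```sql
SELECT NULL = NULL                    AS plain_equals,  -- yields NULL, not true
       NULL IS NOT DISTINCT FROM NULL AS null_safe;     -- yields true
```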

I think I just need to change sum(col1) to sum(col1::int) to get the number of rows where col1 changed.

If you want to count every case of a.col1 IS DISTINCT FROM b.col1, then you need to work with NULL-safe comparison to begin with. Apart from that, your expression would work. There are many alternatives, depending on the situation:
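For example, a NULL-safe per-column count could look like this (col1 is a placeholder column; in Postgres a boolean casts cleanly to int):

```sql
-- Counts rows where col1 differs, treating NULL vs. non-NULL as a change
SELECT sum((a.col1 IS DISTINCT FROM b.col1)::int) AS col1_changes
FROM   vm201512.property_d a
JOIN   vm201412.property   b USING (pfi);
```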

You use select a.* into vm201512 ... in your 1st query. Don't. SELECT INTO is discouraged; use the superior CREATE TABLE AS, like in your 2nd query.

Postgres provides pivot functionality in the tablefunc module, but nothing is pivoted here.

The core problem is the dynamic nature of the query due to varying input tables.

Solution

Assuming no NULL values. Where NULL values are possible, use IS NOT DISTINCT FROM instead of =.
Tested in Postgres 9.5. Should work for Postgres 9.1 or later.

You can build your queries like this:

CREATE OR REPLACE FUNCTION f_build_query(_t1 regclass
                                       , _t2 regclass
                                       , _join_col text = 'pfi')
  RETURNS text
  LANGUAGE sql AS
$func$
SELECT format('SELECT %I, %s FROM %s a JOIN %s b USING (%1$I);'
            , _join_col
            , string_agg(format('a.%1$I = b.%1$I AS %1$I', attname), ', ' ORDER BY attnum)
            , _t1, _t2)
FROM   pg_attribute
WHERE  attrelid = _t1        -- compare all columns from 1st table
AND    NOT attisdropped      -- no dropped (dead) columns
AND    attnum > 0            -- no system columns
AND    attname <> _join_col  -- exclude 'pfi'
$func$;

Call:

SELECT f_build_query('vm201512.property_d', 'vm201412.property');

Returns a query like this (which you can execute in turn):

SELECT pfi, a.a = b.a AS a, a."weird NaMe" = b."weird NaMe" AS "weird NaMe"  -- more ...
FROM vm201512.property_d a JOIN vm201412.property b USING (pfi);

Result:

 pfi | a | b | weird NaMe
-----+---+---+------------
   1 | t | f | t
   2 | f | t | f

Works for arbitrary input tables, and deals with identifiers safely. You can optionally schema-qualify passed table names.

Simple dynamic solution

The difficulty is to return varying row types. SQL demands to know the return type at call time. To avoid difficulties, you could return a simple array instead. You get values in the original order of columns, but you don't get column names like in the first query:

CREATE OR REPLACE FUNCTION f_diff_matrix(_t1 regclass
                                       , _t2 regclass
                                       , _join_col text = 'pfi')
  RETURNS TABLE (pfi int, change_matrix bool[])  -- adapt type of pfi as needed
  LANGUAGE plpgsql AS
$func$
BEGIN
   RETURN QUERY EXECUTE (
   SELECT format('SELECT %I, ARRAY[%s] FROM %s a JOIN %s b USING (%1$I)'
               , _join_col
               , string_agg(format('a.%1$I = b.%1$I', attname), ', ' ORDER BY attnum)
               , _t1, _t2)
   FROM   pg_attribute
   WHERE  attrelid = _t1        -- compare all columns from 1st table
   AND    NOT attisdropped      -- no dropped (dead) columns
   AND    attnum > 0            -- no system columns
   AND    attname <> _join_col  -- exclude 'pfi'
   );
END
$func$;

Call (note the difference!):

SELECT * FROM f_diff_matrix('vm201512.property_d', 'vm201412.property');

Result:

 pfi | change_matrix
-----+---------------
   1 | {t,f,t}  -- one element per column
   2 | {f,t,f}
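To tally changes per column position from the array, a sketch (assuming no NULL elements, per the caveat above; generate_subscripts() is available in 9.3, but note that column names are lost, only positions remain):

```sql
SELECT i AS col_position,
       sum((NOT d.change_matrix[i])::int) AS rows_changed  -- false = changed
FROM   f_diff_matrix('vm201512.property_d', 'vm201412.property') d,
       generate_subscripts(d.change_matrix, 1) i  -- implicit LATERAL (9.3+)
GROUP  BY i
ORDER  BY i;
```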



You could even make the same function return a dynamic result set for various tables, but I doubt it's worth the complication.

If you really need dynamic pivot functionality (not the case here), look to crosstab() in the tablefunc module mentioned above.

Erwin Brandstetter

In general, SQL isn't great at treating fields in columns like sets. Especially PostgreSQL, which lacks any kind of generic pivot/unpivot functionality.

Since you're on 9.3.x I'd use hstore. Performance may be less than stellar.

Simplest form:

test=# CREATE EXTENSION IF NOT EXISTS hstore;
CREATE EXTENSION
test=# CREATE TABLE t1 (pfi integer, a text, b text);
CREATE TABLE
test=# CREATE TABLE t2 (pfi integer, a text, b text);
CREATE TABLE
test=# insert into t1(pfi, a, b) values (1, 'a', 'b');
INSERT 0 1
test=# insert into t2(pfi, a, b) values (1, 'a', 'z');
INSERT 0 1

test=# select hstore(t1) - hstore(t2), hstore(t2) - hstore(t1) from t1 inner join t2 on (t1.pfi = t2.pfi);
 ?column? | ?column? 
----------+----------
 "b"=>"b" | "b"=>"z"
(1 row)

More sophisticated, using hstore only as a hack to "pivot" a single row into key/value pairs:

select
  t1.pfi,
  t1h."key",
  t1h."value" AS oldval,
  t2h."value" AS newval
from t1
  inner join t2 on (t1.pfi = t2.pfi)
  cross join lateral each(hstore(t1)) t1h
  inner join lateral each(hstore(t2)) t2h on (t1h."key" = t2h."key") 
where t1h."value" <> t2h."value";

 pfi | key | oldval | newval 
-----+-----+--------+--------
   1 | b   | b      | z

On 9.4 I'd probably use jsonb instead, but the effect is much the same.
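A jsonb sketch of the same "pivot" for 9.4, using the t1/t2 tables from above (to_jsonb() only arrived in 9.5, hence row_to_json()::jsonb here):

```sql
-- Unpack each row into key/value pairs and keep only differing values
SELECT t1.pfi, j1.key, j1.value AS oldval, j2.value AS newval
FROM   t1
JOIN   t2 USING (pfi)
CROSS  JOIN LATERAL jsonb_each_text(row_to_json(t1)::jsonb) j1
JOIN   LATERAL jsonb_each_text(row_to_json(t2)::jsonb) j2 ON j2.key = j1.key
WHERE  j1.value IS DISTINCT FROM j2.value;
```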

A built-in row_each function that returned text representations of each identifier and value would be handy, to save on the conversions.

Craig Ringer