
Is it possible to improve how data is delivered to the client when it runs select bytea from table limit 150;? It takes one and a half minutes, but in pg_activity I see the "client_write" wait event. We use a 10 Gbit network. The DB version is PostgreSQL 13.

When I create a table from that result, it is 285 MB in size.

Oddly, select count(*) from (select bytea from table) only takes 10 ms.

Are there any tips or ways to improve this?

Melih

1 Answer


Seems like network throughput is not the only limiting factor here. There is one thing that might help to some extent: compression.

The huge bytea values are obviously compressed and stored out of line ("TOASTed"). By default, the compression method pglz is used, which tries hard to reduce storage size. But compression, and just as importantly decompression, incurs substantial cost for large amounts of data.
Since Postgres 14, the alternative compression method lz4 is supported, which is typically much faster but compresses a little less. You can set it per column, at any time:

ALTER TABLE tbl ALTER COLUMN bytea SET COMPRESSION lz4;
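To verify which method was actually used for stored values, Postgres 14 also provides the function pg_column_compression(). A minimal sketch, assuming the table and column are named tbl and bytea as above:

    -- Postgres 14+: see which compression method each stored value uses
    SELECT pg_column_compression(bytea) AS method, count(*)
    FROM   tbl
    GROUP  BY 1;

    -- Optionally make lz4 the default for newly created TOAST-able columns
    -- (requires a server built with lz4 support)
    SET default_toast_compression = 'lz4';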

Be aware that existing data is not re-compressed automatically; only new row versions that actually rewrite the value are stored with the new method. Read up here:
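If you want existing rows to pick up the new method right away, one commonly suggested approach is to force every value to be detoasted and rewritten, for example with a self-concatenating UPDATE. A hedged sketch, assuming a maintenance window is acceptable, since this rewrites all rows and bloats the table until the next VACUUM:

    -- Force each bytea value to be detoasted and re-compressed
    -- with the column's current compression method (lz4 above).
    -- Caution: rewrites every row; run VACUUM afterwards.
    UPDATE tbl
    SET    bytea = bytea || ''::bytea;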

Since speed is your predominant concern, lz4 should be the better choice for you. You have to test how it affects your data, of course. If you find that your data hardly compresses anyway, consider disabling compression for the column altogether. (Works in Postgres 13, too!) Read instructions here:
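Disabling compression is done by switching the column to the EXTERNAL storage mode, which keeps values out of line but uncompressed. A sketch for the same assumed table, and this one works in Postgres 13 as well:

    -- Store bytea out of line but uncompressed (Postgres 13 compatible).
    -- Like SET COMPRESSION, this only affects newly written values.
    ALTER TABLE tbl ALTER COLUMN bytea SET STORAGE EXTERNAL;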

Aside

Consider storing huge payloads outside the database, if possible. Related:
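For illustration, a minimal sketch of that pattern with a hypothetical table (all names invented here) that keeps only a checksum and a location in the database while the payload lives on a file system or object store:

    -- Hypothetical schema: metadata in the database, payload elsewhere
    CREATE TABLE payload_ref (
      id       bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
      sha256   bytea NOT NULL,   -- checksum to detect drift from external copy
      location text  NOT NULL    -- file path or object-store key
    );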

Erwin Brandstetter