What are the common approaches to boost read/write performance of a table with up to 100 million rows?
Not running it on a phone? I mean, seriously, hundreds of millions of rows are not particularly large on modern mid-range hardware. That would mean - hm, let's see - dual socket, 16 cores (I just go by the minimum Windows Standard licensing here, but it matches the low end of an AMD EPYC, for example), possibly 128GB RAM and an all-SSD setup, or at least a heavily SSD-cached one.
I mean, my age-old VM (SQL Server, using 48GB memory, 6 cores and around 10 dedicated SSDs) handles 64 million row insert/delete jobs in less than a second WITHOUT anything particular.
The most obvious approach is creating a new table per SEGMENT_ID
This is one thing where professional databases have something called partitioning. A short Google search actually tells me Postgres also has it - https://www.postgresql.org/docs/current/static/ddl-partitioning.html - are you aware of that? From what I see it is a little less elegant than in SQL Server (you seem to have to create indices on each partition yourself, instead of this being handled transparently by the database).
It will not make reads or writes faster, but deletes of WHOLE partitions can speed up significantly. No need to do this dynamically either - well, though you sort of can - the main point is that you never WORK with the sub-tables, so ORM and queries stay the same.
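For illustration, here is a minimal sketch of what list partitioning on SEGMENT_ID looks like with PostgreSQL's declarative syntax (the measurements table and its columns are made up for this example; adapt to your schema):

    -- parent table, partitioned by the segment column
    CREATE TABLE measurements (
        segment_id  int         NOT NULL,
        recorded_at timestamptz NOT NULL,
        payload     jsonb
    ) PARTITION BY LIST (segment_id);

    -- one partition per segment; applications and ORMs keep querying the parent
    CREATE TABLE measurements_seg_1 PARTITION OF measurements FOR VALUES IN (1);
    CREATE TABLE measurements_seg_2 PARTITION OF measurements FOR VALUES IN (2);

    -- dropping a whole segment is a cheap metadata operation, not a row-by-row DELETE
    DROP TABLE measurements_seg_1;
    -- or detach it first if you want to archive it
    ALTER TABLE measurements DETACH PARTITION measurements_seg_2;

Queries and INSERTs still go against measurements; routing to the right partition is done by the database.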
You can also use sharding, right?
Which you possibly should - once you hit hundreds of billions of rows.
It really is partitioning, but only if your insert/delete scenarios make it efficient. Otherwise the answer really is hardware, particularly because 100 million rows are not a lot. And partitioning is pretty much the only solution that works nicely with ORMs.
And really, why create the partitions dynamically? Pregenerate them - see the sketch below.
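If the set of SEGMENT_ID values is known or predictable, you can simply create the partitions ahead of time; a small sketch, reusing the hypothetical measurements table from above and assuming segment ids 1 to 100:

    -- pregenerate one partition per expected segment id
    DO $$
    DECLARE
        seg int;
    BEGIN
        FOR seg IN 1..100 LOOP
            EXECUTE format(
                'CREATE TABLE IF NOT EXISTS measurements_seg_%s
                     PARTITION OF measurements FOR VALUES IN (%s)',
                seg, seg);
        END LOOP;
    END $$;

Oh, and...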
I need good performance for SELECT * FROM table WHERE SEGMENT_ID = ?
Partitions do NOT help here. Ok, here is the issue - partitions help you search less data, but an index with SEGMENT_ID as the first field, filtered on that column, does exactly the same thing. Enough RAM and FAST IO are the only solution to reading data in fast. Partitions are basically a "delete one partition fast" thing - anything else gets at best a small gain.
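Put differently: if SEGMENT_ID is what you filter on, an ordinary index leading with that column prunes just as well as partitioning does. A hedged sketch against a plain, non-partitioned table with the same hypothetical columns (on a partitioned table, depending on your PostgreSQL version, you may have to create the index on each partition):

    -- plain-table variant of the hypothetical schema above
    CREATE TABLE measurements_flat (
        segment_id  int         NOT NULL,
        recorded_at timestamptz NOT NULL,
        payload     jsonb
    );

    -- index leading with segment_id so the WHERE clause can seek straight to one segment
    CREATE INDEX idx_measurements_flat_segment
        ON measurements_flat (segment_id, recorded_at);

    -- the query from the question then only touches rows of that segment
    SELECT * FROM measurements_flat WHERE segment_id = 42;

Whether that comes back fast is then mostly a question of RAM and IO, exactly as above.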