Postgres and its ilk were written as general-purpose OLTP database servers. Can they handle many concurrent inserts and reads? Yes, but they saturate at a much lower level than a purpose-built analytics engine does. ClickHouse is still a relational database - it presents its public API as tables of pre-defined tuples and is queried via SQL - it is just one that is heavily optimized internally for answering aggregate queries over large datasets. Postgres could answer the same queries, but it would take longer and require more disk, memory and CPU.
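To make the aggregate-query case concrete, here is a rough sketch in ClickHouse SQL. The table and column names (`page_views`, `site_id`, `duration_ms`) are invented for illustration; the point is that the same SELECT would run on Postgres largely unchanged, but a column-oriented engine only has to read the few columns the query actually touches.

```sql
-- Illustrative only: a hypothetical page-view table and the kind of
-- aggregate query a columnar engine is built to answer quickly.
CREATE TABLE page_views
(
    event_date  Date,
    site_id     UInt32,
    user_id     UInt64,
    duration_ms UInt32
)
ENGINE = MergeTree
ORDER BY (site_id, event_date);

-- Scans millions of rows, but touches only the three columns involved.
SELECT
    site_id,
    count()          AS views,
    avg(duration_ms) AS avg_duration_ms
FROM page_views
WHERE event_date >= today() - 30
GROUP BY site_id
ORDER BY views DESC
LIMIT 10;
```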
A good analogy is with cars. Postgres is like the family car. It can comfortably get the kids, the dog and all the gear to a camping trip without using too much fuel or costing too much to purchase and maintain. Depending on the model, it can go faster or slower, carry more gear or less, be more fuel-efficient or thirstier. It will not, however, win an F1 race. Nor will it carry 12 tonnes of ore out of a quarry. Nor does it come with a stretcher and bottled oxygen for ambulance work. If your workload has those needs, you talk to a niche vendor to acquire a very specific product that does one job and does it really well, and you pay the corresponding purchase and maintenance costs.
The complementary question is: why not use ClickHouse exclusively? Because it is not good at the mixed random read/write workload where Postgres excels. An ecommerce site run off ClickHouse would be awful!
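For contrast, here is a sketch of the kind of statement an ecommerce backend issues constantly (table and column names are hypothetical). Postgres treats this as a cheap indexed point update inside a transaction; ClickHouse stores data in large immutable column parts, so per-row updates of this sort are expensive and awkward there.

```sql
-- Typical OLTP work: update one row and record an event, atomically.
BEGIN;
UPDATE orders
   SET status = 'paid'
 WHERE order_id = 42;

INSERT INTO order_events (order_id, event)
VALUES (42, 'payment_received');
COMMIT;
```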