How many rows of data are generated in a month's worth of 2 TB of data? A year from now, how many months back will still be queried? In 5 years, how many years of history will still need to be queried? ...in 10 years, and in 20 years? What's the largest number of months (or rows of data) that will need to be queried at one time? What type of analytics will be done: quantitative (aggregation and arithmetic based), or more qualitative?
These are just a subset of the important questions to weigh when deciding on a database solution, especially at the scale of data you're planning for. Unfortunately there's no direct answer to "which database system should you choose?" because so many factors are in play. And to be honest, most mainstream systems can handle even the scale of data you're talking about.
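To put a rough number on that first question yourself, you can measure your average row size and divide it into your monthly volume. Here's a minimal T-SQL sketch of that arithmetic (assuming SQL Server; `dbo.Events` is a hypothetical table name standing in for your own):

```sql
-- Rough rows-per-month estimate: average bytes per row from the catalog,
-- divided into the expected 2 TB/month. dbo.Events is a placeholder name.
SELECT
    SUM(ps.used_page_count) * 8096.0
        / NULLIF(SUM(ps.row_count), 0) AS avg_bytes_per_row,
    2.0 * 1024 * 1024 * 1024 * 1024
        / (SUM(ps.used_page_count) * 8096.0
            / NULLIF(SUM(ps.row_count), 0)) AS est_rows_per_month
FROM sys.dm_db_partition_stats AS ps
WHERE ps.object_id = OBJECT_ID('dbo.Events')
  AND ps.index_id IN (0, 1);  -- count the heap or clustered index only
```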
PostgreSQL should be able to handle that much data (I haven't personally worked with it at that scale, but that's my understanding from research). I know Microsoft SQL Server can definitely handle a data warehouse at that scale (again, depending on your specific use cases per the questions above), as I've worked with one not far off from it, and the analytical queries we ran took seconds to process.
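If you lean toward PostgreSQL, the standard technique for keeping tables this large manageable is declarative range partitioning by month, so queries only touch the months they filter on and old months can be detached or dropped cheaply. A minimal sketch, with hypothetical table and column names:

```sql
-- Hypothetical fact table, range-partitioned by month so queries that
-- filter on event_time only scan the relevant partitions.
CREATE TABLE events (
    event_id   bigint      NOT NULL,
    event_time timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (event_time);

CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
CREATE TABLE events_2024_02 PARTITION OF events
    FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

-- Indexing the parent cascades the index to every partition (PG 11+)
CREATE INDEX ON events (event_time);
```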
Although throwing money at hardware isn't the primary fix for a database's performance problems (proper schema design, index design, and process design will go the longest way), it does help, and it's worth making sure your non-cloud server has decent hardware behind it. NVMe drives go a long way these days toward relieving what is generally the biggest hardware bottleneck (the disk). A decent amount of RAM and CPU cores will be helpful too. (E.g. the Microsoft SQL Server data warehouse I mentioned above ran on 32 GB of RAM and an 8-core CPU. It probably would've benefited from doubling both, but it worked quite well as-is.)
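On the schema and index design point: for aggregation-heavy analytical queries on SQL Server, the usual starting point for a fact table at this scale is a clustered columnstore index, since it compresses heavily and reads only the columns a query touches. A minimal sketch (table and column names are hypothetical):

```sql
-- Hypothetical warehouse fact table
CREATE TABLE dbo.FactSales (
    SaleDate  date          NOT NULL,
    StoreId   int           NOT NULL,
    ProductId int           NOT NULL,
    Quantity  int           NOT NULL,
    Amount    decimal(18,2) NOT NULL
);

-- Columnstore storage: compressed, column-at-a-time scans, which suits
-- aggregative queries over billions of rows.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

-- Typical aggregative query that benefits from batch-mode execution
SELECT SaleDate, SUM(Amount) AS TotalAmount
FROM dbo.FactSales
WHERE SaleDate >= '2024-01-01' AND SaleDate < '2024-02-01'
GROUP BY SaleDate;
```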
Another benefit of Microsoft SQL Server (though other database systems likely have some equivalent) is that you can set up an AlwaysOn Availability Group to ensure constant uptime through redundancy. It's easy to set up, and you can also query off your replicas as a way to horizontally scale the read load.
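For reference, offloading reads to a secondary is a small configuration on the replicas; here's a minimal sketch assuming an availability group already exists (the AG and server names are placeholders):

```sql
-- Allow read-only connections on a secondary replica of an existing
-- availability group (MyAG, SQLNODE1, SQLNODE2 are hypothetical names).
ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'SQLNODE2'
    WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

-- Optional: read-only routing so connections asking for read-only work
-- land on the secondary automatically (the secondary also needs a
-- READ_ONLY_ROUTING_URL configured for this to take effect).
ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'SQLNODE1'
    WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'SQLNODE2')));
```

Clients then add `ApplicationIntent=ReadOnly` to their connection string to get routed to a readable secondary instead of the primary.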