Pages are read into memory as required. If there is no free memory available, the oldest unmodified page is replaced with the incoming page.
This means if you execute a query that requires more data than can fit in memory, many pages will live a very short life in memory, resulting in a lot of I/O.
You can see this effect by looking at the "Page Life Expectancy" counter in Windows Performance Monitor. Look at https://sqlperformance.com/2014/10/sql-performance/knee-jerk-page-life-expectancy for some great details about that counter.
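If you'd rather check that counter from T-SQL than from Performance Monitor, it is also exposed through sys.dm_os_performance_counters. A minimal sketch (the object_name prefix differs between default and named instances, hence the LIKE):

    -- Current Page Life Expectancy, in seconds, straight from the DMV;
    -- a low value means pages are being evicted soon after being read in
    SELECT [object_name],
           counter_name,
           cntr_value AS page_life_expectancy_seconds
    FROM   sys.dm_os_performance_counters
    WHERE  counter_name = 'Page life expectancy'
      AND  [object_name] LIKE '%Buffer Manager%';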
In the comments, you asked specifically what happens when the results of the query are larger than the available buffer space. Take the simplest example, select * from some_very_big_table; - assume the table is 32GB, and max server memory (MB) is configured at 24GB. All 32GB of table data will be read into pages in the page buffer one at a time, latched, formatted into network packets, and sent across the wire. This happens page by page; you could have 300 such queries running at the same time, and assuming no blocking was happening, the data for each query would be read into page buffer space, a page at a time, and put onto the wire as fast as the client can request and consume it. Once all the data from a page has been sent onto the wire, the page is unlatched, and will very quickly be replaced by some other page from disk.
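You can watch this churn for yourself with sys.dm_os_buffer_descriptors, which lets you count how many of a table's pages are resident at any instant. A rough sketch using the hypothetical table from the example above; run it from a second session while the scan is going, and the resident count should stay far below the table's full 32GB:

    -- Pages of dbo.some_very_big_table currently resident in the buffer
    -- pool (data pages are 8KB each, hence the arithmetic for MB)
    SELECT COUNT(*)            AS cached_pages,
           COUNT(*) * 8 / 1024 AS cached_mb
    FROM   sys.dm_os_buffer_descriptors AS bd
    JOIN   sys.allocation_units AS au
           ON bd.allocation_unit_id = au.allocation_unit_id
    JOIN   sys.partitions AS p
           ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
           OR (au.type = 2 AND au.container_id = p.partition_id)
    WHERE  bd.database_id = DB_ID()
      AND  p.[object_id] = OBJECT_ID('dbo.some_very_big_table');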
In the case of a more complex query, say one aggregating results from several tables, pages are pulled into memory exactly as above, as the query processor requires them. If the query processor needs temporary work space to calculate results, it estimates that upfront when it compiles a plan for the query, and requests that work space (a memory grant) from SQLOS. SQLOS will at some point (assuming the request doesn't time out) grant that memory to the query processor, at which point query processing resumes.

If the query processor's estimate of how much memory to ask for turns out to be too low, it may need to perform a "spill to disk" operation, where data is temporarily written to tempdb in an intermediate form. Once those pages have been written to tempdb, they are unlatched, making room for other pages to be read into memory. Eventually the query processor returns to the data stored in tempdb, paging it back in (using latching) into buffer pages that are marked free.
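You can watch those memory grants live through sys.dm_exec_query_memory_grants - a quick sketch; a row where grant_time is still NULL is a query queued up waiting for SQLOS to grant it workspace:

    -- Sessions currently holding or waiting for a workspace memory grant
    SELECT session_id,
           requested_memory_kb,
           granted_memory_kb,
           grant_time,      -- NULL while the query is still waiting
           wait_time_ms
    FROM   sys.dm_exec_query_memory_grants
    ORDER BY requested_memory_kb DESC;

The spills themselves are easy to spot too: they surface as sort/hash warnings in the actual execution plan.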
I'm undoubtedly missing a load of very technical details in the above summary, but I think that captures the essence of how SQL Server can process more data than can fit in memory.