There is usually not a significant difference between the two, from a SQL Server point of view.
The worry is that with a DataReader, queries will run longer, since the client performs some work while fetching results, instead of fetching all the results into memory and then processing them.
But typically, even when using a DataReader, the application processes the result rows quickly and doesn't significantly extend the lifetime of the query. It's simply unusual for an application to sequentially process a large result set, performing expensive work as it goes. And even in applications where that does happen, it probably isn't happening very frequently.
And in those scenarios where it does happen, it's probably because it's more efficient or convenient to not buffer the results on the client. Copying a large result set into client memory is not free, and not always possible.
Even so, having sessions blocked in an ASYNC_NETWORK_IO wait shouldn't be a big deal. If the results are spooled (either into a temp table or by a spool in the query plan), any S locks will already have been released by the time the client is fetching. But hopefully, you're using READ COMMITTED SNAPSHOT isolation and queries don't require S locks at all.
It's possible that you could use more concurrent connections and more memory on SQL Server if clients fetch slowly, but IMO that doesn't rise to the level of justifying a general recommendation about how applications should read data. Any problems caused by slow fetching can be identified and remediated on a case-by-case basis.