XFS can take that, but it'll handle it better if you plan for your use case. Accessing that data (stat() and open() operations) will go faster if the OS has fewer inodes to romp through to get at it. If you're going to have fewer than, say, 30K files/directories in any given directory, you don't need to bother with this optimization.
But if you're going larger, you might want to consider using the -i size=512 option to mkfs.xfs to get a larger inode size. That allows more directory entries to be stored in the inode itself, so the OS has to thumb through fewer of them to traverse a tree. With SSDs these days this improves things less than it did back in the spinning-rust days, but it's still an optimization worth considering.
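For reference, inode size is fixed at filesystem-creation time and can't be changed afterward, so this has to be decided up front. A minimal sketch of the invocation (the device name is just a placeholder, and any other mkfs options are up to you):

    # Inode size is set at mkfs time and cannot be changed later.
    # /dev/sdXN is a placeholder for whatever block device you're formatting.
    mkfs.xfs -i size=512 /dev/sdXN

Once it's mounted, xfs_info on the mountpoint will report the isize actually in use, so you can confirm the setting took.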
I once managed an XFS-based filesystem with north of 20 million files in it and an average file size of about 100KB. I designed that particular filesystem to handle well over 100 million, and it was on track to get there when I left that company. That's the prod version of the system I described here: The impact of a high directory-to-file ratio on XFS
Is XFS the best choice for this? Hard to say. But I trust it more than ext4 for large filesystems like the one you describe. btrfs might be able to take it, but the more conservative faction of system operators doesn't consider it production-ready yet.