In C# (and other languages), we can define a numerical variable as a short, an int, or a long (among other types), mostly depending on how big we expect the numbers to get. Arithmetic operations (e.g., addition with +) on two shorts evaluate to an int, and so require an explicit cast to store the result back in a short. It is much easier (and arguably more readable) to simply use ints, even if we never expect the numbers to exceed the storage capacity of a short. Indeed, I'm sure most of us write for loops with int counters rather than short counters, even when a short would suffice.
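For reference, a minimal snippet illustrating the promotion I'm describing (the variable names are just placeholders):

```csharp
// C# top-level statements (C# 9+)
short a = 1;
short b = 2;

// short sum = a + b;        // CS0266: cannot implicitly convert 'int' to 'short'
short sum = (short)(a + b);  // the addition is performed as int, so a cast is needed

int total = a + b;           // storing the result in an int needs no cast
System.Console.WriteLine($"{sum}, {total}");
```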
One could argue that using an int is simply future-proofing, but there are certainly cases where we know the short will be big enough.
Is there a practical benefit to using these smaller data types that compensates for the additional casting and the decrease in readability? Or is it more practical to just use int everywhere, even when we are sure the values will never exceed the capacity of a short (e.g., the number of axes on a graph)? Or is the benefit only realized when we absolutely need the space or performance of those smaller data types?
Edit to address dupes:
This is close, but it is too broad, and it speaks more to CPU performance than to memory performance (though there are a lot of parallels).
This is close, too, but it doesn't get to the practical aspect of my question. Yes, there are times when using a short is appropriate, which that question does a good job of illuminating. But appropriate is not always practical, and any increase in performance may not actually be realized.