I would like to better understand how AWS CPU credit usage can give me some insight into potential cost savings.
AWS costs can be extremely complicated to understand: instance sizes, CPU credits, and so on.
I try to keep my service running stably, quickly, and smoothly, so from time to time I test different cloud providers. AWS costs are usually 2 to 4 times higher than those of providers like DigitalOcean.
In this example, I have a t3.large instance with the monitoring charts shown below. Since I am always earning more CPU credits than I consume...
Is it correct to say that my instance is over-provisioned? In other words, would I save money and get the same performance if I had chosen a t3.medium instance?
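To make the comparison concrete, here is a rough back-of-the-envelope sketch (Python) of how I understand the credit model: one CPU credit is one vCPU at 100% for one minute, a t3.medium earns 24 credits/hour and a t3.large earns 36 (numbers as I read them from the AWS t3 table; please correct me if they are off). The idea is that if the credits my workload burns per hour stay below the smaller instance's earn rate, a t3.medium should also never drain its balance:

```python
# Back-of-the-envelope check of the t3 credit model (my understanding;
# earn rates are taken from the AWS t3 table and worth double-checking).
# One CPU credit = one vCPU at 100% for one minute, so the instance keeps
# a stable or growing balance when credits earned/hour >= credits spent/hour.

INSTANCE_SPECS = {
    # instance type: (vCPUs, credits earned per hour)
    "t3.medium": (2, 24),  # 20% baseline per vCPU
    "t3.large":  (2, 36),  # 30% baseline per vCPU
}

def credits_spent_per_hour(avg_cpu_percent: float, vcpus: int) -> float:
    """Credits burned per hour, assuming avg_cpu_percent is the CloudWatch
    CPUUtilization metric (average across all vCPUs of the instance)."""
    return (avg_cpu_percent / 100.0) * vcpus * 60.0

def sustains(instance_type: str, avg_cpu_percent: float) -> bool:
    """True if the instance earns at least as many credits as it spends."""
    vcpus, earned_per_hour = INSTANCE_SPECS[instance_type]
    return credits_spent_per_hour(avg_cpu_percent, vcpus) <= earned_per_hour

# Example: a workload averaging ~10% CPU over the week
for itype in INSTANCE_SPECS:
    print(itype, "sustains 10% average CPU:", sustains(itype, 10.0))
```

With an average around 10% CPU, both types earn more than they spend, which is what makes me suspect the t3.large is more than I need.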
Here are the weekly charts for CPU usage, credit consumption, and credit balance: