As with most things, it depends...
The longer a transaction is open against a set of Tables, the longer those Tables are locked for. Locking on a Table causes contention with other queries trying to read from and write to that Table. So breaking a transaction up into multiple smaller transactions reduces the continuous time those Tables are locked for. It can also isolate the locking better, so each transaction only locks the Tables that part of the code absolutely needs.
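As a rough sketch of that isolation point (T-SQL, with hypothetical `Orders` and `AuditLog` Tables), compare one transaction that holds locks on both Tables until the end versus two smaller transactions that each lock only what they touch:

```sql
-- One big transaction: locks on Orders AND AuditLog are held
-- until the final COMMIT, blocking other queries on both Tables.
BEGIN TRANSACTION;
    UPDATE Orders SET Status = 'Shipped' WHERE OrderId = 42;
    INSERT INTO AuditLog (OrderId, Note) VALUES (42, 'Shipped');
COMMIT TRANSACTION;

-- Two smaller transactions: each Table is only locked for the
-- brief moment its own statement runs.
BEGIN TRANSACTION;
    UPDATE Orders SET Status = 'Shipped' WHERE OrderId = 42;
COMMIT TRANSACTION;

BEGIN TRANSACTION;
    INSERT INTO AuditLog (OrderId, Note) VALUES (42, 'Shipped');
COMMIT TRANSACTION;
```

Of course, splitting them up means the two changes are no longer atomic, so this only works when partial completion is acceptable.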
But a lot of relational problems are best solved in a set-based manner (as opposed to iteratively or procedurally), and creating, committing (or rolling back), and acquiring locks for multiple transactions carries its own overhead, which a single transaction can sometimes minimize best.
For example, if you're trying to update 100 rows in a Table, in most cases updating all 100 rows in a single transaction / single UPDATE statement will be a lot quicker than opening 100 separate transactions and looping through the Table row by row.
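To make that concrete (a sketch assuming a hypothetical `Products` Table with an `Id` column), the set-based version touches the Table once, while the iterative version pays the transaction and locking overhead 100 times:

```sql
-- Set-based: one transaction, one statement, one pass over the Table.
UPDATE Products
SET    Price = Price * 1.10
WHERE  Id BETWEEN 1 AND 100;

-- Iterative: 100 transactions, one per row (usually much slower).
DECLARE @i INT = 1;
WHILE @i <= 100
BEGIN
    BEGIN TRANSACTION;
        UPDATE Products SET Price = Price * 1.10 WHERE Id = @i;
    COMMIT TRANSACTION;
    SET @i += 1;
END;
```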
Generally the tradeoff is: the fewer transactions you use, the quicker the query itself will run, but the more contention it will potentially cause on the Tables involved for the rest of the server / other queries accessing those Tables. By breaking the query / logic up into multiple transactions, you likely reduce the overall contention on those Tables for the rest of the server, but may cost that specific query some performance.
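A common middle ground is batching: break a large update into chunks so each transaction commits, and releases its locks, quickly. A sketch, again assuming a hypothetical `Products` Table, here with a `NeedsReprice` flag column:

```sql
-- Update in batches of 5000 rows; each short transaction releases
-- its locks at COMMIT, giving other queries a chance to get in.
DECLARE @RowsAffected INT = 1;

WHILE @RowsAffected > 0
BEGIN
    BEGIN TRANSACTION;
        UPDATE TOP (5000) Products
        SET    Price = Price * 1.10,
               NeedsReprice = 0
        WHERE  NeedsReprice = 1;

        SET @RowsAffected = @@ROWCOUNT;
    COMMIT TRANSACTION;
END;
```

The batch size is the tuning knob: larger batches finish the overall job sooner, smaller ones hold locks for less time each.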
So the decision comes down to how important it is for that specific query to run quickly vs how busy the rest of the server is with the same Tables that query uses, and how long those other operations can afford to sit waiting.