What scientific/empirical evidence is there that the mental separation between (business) analysis and (technical) design in software development leads to better results than considering both together?
Background: I'm talking about separation on a small scale here, i.e., within a single user story: separating the "acceptance criteria", which document what shall be done from a requirements-centric, non-technical point of view, from the "design + implementation", which determines how the acceptance criteria are realized in a specific solution. I'm NOT talking about "first do all the analysis, then do all the design" (big design up front), which I assume to be bad. I'm also NOT talking about "don't change the analysis once you have started design" (pure waterfall), which I assume to be bad as well.
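To make the distinction concrete, here is a toy sketch (the refund rule, names, and numbers are invented purely for illustration, not taken from any real project): the acceptance criterion stays in the problem space, while the code is just one of many possible designs in the solution space.

```python
from datetime import date, timedelta

# Acceptance criterion (the "what": requirements-centric, solution-agnostic):
#   "A customer who returns an item within 30 days of purchase
#    receives a full refund."

RETURN_WINDOW = timedelta(days=30)  # design decision: model the window as a timedelta

def refund_amount(purchase_date: date, return_date: date, price: float) -> float:
    """One specific design/implementation that satisfies the criterion."""
    if return_date - purchase_date <= RETURN_WINDOW:
        return price  # full refund inside the window
    return 0.0        # outside the window: no refund (a choice the criterion does not dictate)
```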
I always considered it rather obvious that such a separation between analysis and design is beneficial. And there is plenty of advice that says it is a good thing, sometimes using different words such as "requirements", "opportunity space vs. solution space", "domain-driven design", ... but essentially meaning the same idea: don't mix up problem and solution; keep a clear mental separation between them.

But recently a co-worker questioned why we should do this, and I realized that apart from "common sense" and logically deductive arguments, I don't know of any real empirical evidence that such a separation is good. Here, "good" could be anything from "less effort" through "better solutions" to "more maintainable software". Do you know of any empirical studies?