9

Does code review typically involve UAT/QA testing? We are trying to establish a meaningful code review process, and we have come to a disagreement among our development team. Some of us hold that a reviewer should check out a development branch from TFS and actually perform UAT based on the acceptance criteria of the User Story (our unit of review), while others say that responsibility for whether the code is functional is, and should remain, on the developer.

What is your experience? Is there an industry standard regarding this question?

7 Answers

10

Typically, no. In my experience, code review happens after any developer-driven testing. The responsible developer makes the changes to the code, updates any unit or integration tests, and usually runs the code in a local environment if that's possible. Once the changes are made and at least some basic tests around the changes pass (either manual or automated), the changes can be submitted for a code review.
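As a rough illustration of what that developer-driven safety net might look like, here is a minimal sketch in Java, assuming JUnit 5; the DiscountCalculator class and its discount rule are hypothetical, invented for the example:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Hypothetical class under test, kept inline so the sketch is self-contained.
    class DiscountCalculator {
        // 10% off orders over 50.00; smaller orders pass through unchanged.
        double discountedTotal(double total) {
            return total > 50.00 ? total * 0.90 : total;
        }
    }

    class DiscountCalculatorTest {
        @Test
        void appliesTenPercentDiscountOverThreshold() {
            assertEquals(90.00, new DiscountCalculator().discountedTotal(100.00), 0.001);
        }

        @Test
        void leavesSmallOrdersUnchanged() {
            assertEquals(40.00, new DiscountCalculator().discountedTotal(40.00), 0.001);
        }
    }

Tests like these passing give the reviewer evidence that the change behaves as intended, so the review itself can concentrate on the diff and on whether the coverage is appropriate.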

Should your process allow for quality assurance or user acceptance testing as part of a code review? That's up to you and your organization to decide, but it seems like it would add a lot of overhead, and there is a difference in what the different types of testing look for. It's relatively easy to run some automated unit tests, see that they pass, look at a diff of the code, and confirm that the automated tests provide appropriate coverage, only turning the change over to quality assurance or users once two developers understand it and agree that it meets the need.

Thomas Owens
5

I am not a developer but an agile coach, so my answer comes from that perspective. There is a logical way to think about your question that should answer it without needing a development background; I hope you will humor me.

Typically, we see a developer work on a piece of functionality, let's say in Java. He or she writes the code and tests (unit tests, perhaps via TDD) and validates that the code works and meets the requirements (or acceptance criteria). At that point, another developer, commonly more senior, performs a code review. The analysis at this stage is of Structural Quality, enforced and validated via the code review: the review looks for deviations from coding standards, or even "transcendental" perspectives on code quality, which are hard to define and get a bit "artsy". The developers agree on changes to improve the code based on these considerations; once those are made, the developer again runs his or her tests and validates that the changes resulting from the review didn't break the functionality. Only at that point is the Functional Quality of the code validated through QA and UAT.

If you change the order, and have QA and UAT spend their time on testing, and then perform code review and make changes to the code, wouldn't you need to have QA and UAT yet again validate that everything meets their specs?

You could take a shortcut and assume nothing changed, but if you ever broke something you didn't detect in your tests, it could be a big problem. And if you ask them to repeat their testing, that's highly inefficient and would not be appreciated.

So no, I have never seen any of my teams do it that way, and I've been working with at least a dozen or two agile teams for 8 years.

3

To add to the 'no's, I'd point out that the name 'code review' rather gives it away: you are reviewing the code, not the application. Adding testing to the brief can actually be detrimental: if I, as a developer, have to include testing as part of reviewing the code, a lot of my focus will be on the test results rather than on properly checking the code from the code's perspective. This means any currently non-symptomatic problems are far more likely to be missed.

1

Typical process (for us, anyway) is:

  1. Developer codes the feature point or bug fix, as well as any automated tests
  2. Developer performs a local build
  3. Developer performs local unit testing
  4. Developer submits for code review
  5. Lead developer and/or peer developer reviews the code
  6. Developer checks code and automated tests into code repo
  7. CI server detects code changes and pulls latest code
  8. CI server generates a build and executes the automated tests again
  9. At regular intervals, builds are installed on a QA environment
  10. QA performs functional testing

Since the code review occurs before the code check-in, it would be unusual for QA to perform any testing concurrent with the code review, since they would have to test an unofficial build. Generally speaking, they are going to want to stick to testing actual builds.

John Wu
1

I think the purpose of code review is just to get extra pairs of eyes on the written code.

Reviewers assume that the developer who wrote the code has enough skill for the task. If that is not so, then something is wrong in your organisation, and no process or tool will help you.

So, to keep reviews fast and simple, reviewers check for the kinds of mistakes a developer can easily overlook: typos, code styling, whether the code follows your team's programming guidelines and best practices, and, depending on the size of the task, design decisions.
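To make that concrete, here is an illustrative sketch in Java of the kind of quick, shallow findings such a review surfaces; the OrderService class and the reviewer remarks are invented for the example:

    class OrderService {

        private int maxretries = 3;   // reviewer: our guideline treats this as a
                                      // constant; make it static final MAX_RETRIES

        void recieveOrder(String orderId) {   // reviewer: typo; rename to receiveOrder
            for (int i = 0; i < maxretries; i++) {
                try {
                    submit(orderId);
                    return;
                } catch (Exception e) {
                    // reviewer: silently swallowing the exception hides failures;
                    // at least log it before retrying
                }
            }
        }

        private void submit(String orderId) { /* hypothetical downstream call */ }
    }

None of these findings require running the application, which is what keeps this style of review fast.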

It is important that a review is not big. In your case, where the User Story is the unit of review, one review can take days, or, if you organise review meetings, hours of unproductive developer time.

Fabio
0

No. Typically, code review is just a coding-style check.

However, I think your question cuts to the heart of the problem ie. what should a code review cover?

When I review code, I want to be able to say 'yes, I certify that this code works', but in reality that would mean fully testing the code, effectively doing a full QA job on it.

In reality, all I can do is say 'yes, this code matches our coding standards document'.

Maybe if I have been working on the software long enough I will know a few 'gotchas' that I can check for.

Also, I can do a mental check for edge cases and missed requirements, e.g. 'when this is null it will error' and 'the code works, but won't scale'.
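For instance, here is a hedged sketch in Java of that null case; the ReportFormatter class and its methods are made up for illustration:

    class ReportFormatter {

        // The reviewer's mental check: 'when customerName is null it will error';
        // trim() throws a NullPointerException before the fallback can apply.
        String header(String customerName) {
            String trimmed = customerName.trim();
            return trimmed.isEmpty() ? "Report" : "Report for " + trimmed;
        }

        // A null-safe variant the review might suggest instead.
        String headerSafe(String customerName) {
            if (customerName == null || customerName.trim().isEmpty()) {
                return "Report";
            }
            return "Report for " + customerName.trim();
        }
    }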

But those are really failings in the requirements or prioritisation of technical debt. Using a code review to address them isn't very effective.

So I would say DON'T do this kind of code review. Instead, expand it into a check that the requirements cover performance, technical concerns (automated tests, logging, etc.), and edge cases, and then QA against those requirements.

Ewan
0

I would say no, but that is based on a few assumptions. The developers are probably some of the highest-paid employees on the team, which makes their time important and costly. If there are automated tests and/or people already testing the code, why would a developer waste time reviewing for problems that could potentially be caught by those processes? Sorry, I'm not running unit tests on someone else's code.

There are benefits to both the writer and the reviewer in the code review process. The review process doesn't always address the things the other types of tests should catch, so there's no reason to do a code review before the other tests. I just don't see the benefit of having the reviewer do the extra work. Getting buy-in and setting aside time for reviews is tough enough without heaping on additional responsibilities.

JeffO