Which (if any) extreme-programming techniques would be appropriate in a research environment, where the goal is to produce prototypes and patentable and/or publishable work?
2 Answers
Speaking from a background of algorithm research:
- Keep a long backlog of ideas
- Re-prioritize aggressively and frequently (e.g. every day)
- Mark backlog items that are no longer viable instead of deleting them
- Maintain an up-to-date picture of inter-dependencies between backlog items
  - Unlike regular software development, research work has many more dependencies between items.
- Always measure, visualize and track algorithm performance (accuracy, etc)
- Don't work alone.
- Discuss, collaborate and share frequently.
- Keep a wiki, and spend lots of effort to extract "wisdom" from your work.
- Use version control. However, keep good algorithm candidates in the current system, even if they are not actively used.
- It allows you to tinker with an older algorithm at the spur of the moment.
  - Stale performance data can be misleading.
- For example, the old data may be based on a less accurate metric
- To get fresh performance data, re-run the algorithm(s).
- Prefer dynamic typing and flexibility.
- Use the right language.
- If almost all successful researchers in the field use one particular language, then use it. Don't fight the wisdom of the crowd.
  - Where needed, integrate smaller components into that language, whether the components are developed in a language suited to computation such as C/C++ or taken from existing open-source code.
- Ask fellow researchers for their source code.
  - Many researchers are actually quite friendly to such requests, given proper credit and data sharing.
  - This will save a lot of trouble, because published papers cover only the high-level picture, yet the devil is in the details.
- Always push yourself, but don't timebox.
  - Timeboxes don't work because research work is unpredictable.
An example of how to use backlog in research: Suppose in the beginning there are items A, B, C, ..., X, Y, Z.
- A
- B
- C
- ...
Over time, you work on a number of items and develop a sense of how promising each one is, not just the items you have worked on but also those you haven't. The updated backlog becomes:
- A (promising: 90, progress: 70% done)
- B (promising: 70, progress: 60% done)
- Z (promising: 65, not started)
- ...
- C (seems it won't work, don't bother)
Notice how item C sank to the bottom because of research insights gained from working on A and B, and how Z floated to the top. Learning what other researchers are doing also helps float items to the top.
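A re-prioritization pass like the one above can be sketched in a few lines; the promise scores and status notes are the illustrative values from the example, not a prescribed scheme:

```python
# Each backlog item carries a promise score (0-100) and a status note.
# Non-viable items sink to the bottom rather than being deleted, so the
# insight behind demoting them is preserved.
backlog = [
    {"item": "A", "promise": 90, "status": "70% done"},
    {"item": "B", "promise": 70, "status": "60% done"},
    {"item": "C", "promise": 0,  "status": "won't work, don't bother"},
    {"item": "Z", "promise": 65, "status": "not started"},
]

def reprioritize(items):
    """Sort by promise, highest first; non-viable items end up last."""
    return sorted(items, key=lambda it: it["promise"], reverse=True)

for it in reprioritize(backlog):
    print(it["item"], it["promise"], it["status"])
```

Re-running this after each working session (or each day, per the advice above) is what keeps C at the bottom and lets Z float up as your estimates change.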
At the end of one semester, do a backlog cleanup.
- A (done, working)
- B (done, working)
- Z (done, some bugs)
- -----
- Y (50% coded, kept in the system, not actively used)
- X (10% coded, removed from the system in revision 123)
- -----
- C (dropped)
The items that are working become the results you publish.
You have to be Agile to do research programming.
You have to be willing to throw away a lot of prototypes.
You have to be willing to think outside the box, so software patterns are not going to help you that much.
I think you have to be willing to learn new languages, and even create some new ones.
Other than that, research programming is basically the same as any other. :) You still have to write unit tests. You still have to write documentation. And you still have a boss.
Your deadlines may be a bit more fluid.