Wednesday, September 1, 2010

DARPA and Program Evaluation

Another topic of great interest to others is DARPA evaluations. Some view them as rigidly constraining program researchers and actually tending to make progress merely incremental. Most DARPA Directors ask to see a built-in evaluation process before approving a program, and at times these evaluations have served as a tool for Directors to micromanage programs.

This post focuses on a different aspect of DARPA evaluations, one that differentiates the DARPA programs that used them from NSF programs and that has a large, but possibly not widely recognized, impact. The DARPA programs I ran had evaluations that were community competitions. Program fund recipients were required to attend and present their results, but outsiders could attend and present results as well. The only rule was that outsiders had to follow the same rules and allow their results to be independently judged, usually by NIST. The motivation for engaging in this way, without funding, was that those who performed well could usually expect to receive DARPA funding in the future. This was not always the case, but it happened often enough to be a strong motivator. Even without funding, being able to publish that one was a leader in a NIST evaluation generally attracted interest and improved the chances of funding, sometimes from other sources.

When a formal, NIST-led evaluation was not used, the DARPA Principal Investigators' meetings sometimes served as a venue for a similar competition. Not many DARPA programs invite non-funded entities to present, but potential bidders were occasionally allowed to make their case by showing their results. Even when they could not present, potential bidders loved being able to attend, hear how high the bar was, and learn what approaches were being taken to reach it. This process was a great way for a program manager to weed out potential bidders who could not muster the resources to compete well.

As mentioned in the previous post, running a DARPA program this way requires that the program manager have good knowledge of the community of researchers working in the field and their relative capabilities. Otherwise, the process could easily get out of hand. In a less organized discipline, as many of the social sciences are, this process would probably not work. The social and behavioral science communities are diffuse and not familiar enough with one another to work well in a competitive fashion. Nor could a program manager easily define who is or is not a reasonable bidder for the program.

Back to the criticism that DARPA program competitions tend to foster incremental rather than ground-breaking progress: competition tends to circle the wagons rather than send them out on a broad search for solutions. Countering this tendency is possible in a DARPA context, but the agency best positioned to address this need is the National Science Foundation. NSF tries to fund good science on the cutting edge and often finds researchers who propose diverse approaches. After some success under NSF funding, we used to encourage proposers to "graduate" to DARPA for more intense funding and direction. Of course, this requires a program there to receive the effort and fund it, which in turn requires strong interaction between NSF and DARPA program officers.
