Thursday, September 30, 2010

Darwinian Science?

Since progress in science tends to be competitive and occurs in a population of scientists (see the post below), it is tempting to think of it as a Darwinian process. That is, science proceeds by way of a large variety of approaches tried by a population of scientists, and selection acts on those approaches according to their success in experimentation. It is an attractive way to think about scientific progress because it depends upon the generation of variety through broad government investment and upon repeatable experimentation to prove the approaches that work. It would be wrong, however, to label this a Darwinian view of science.

Since science is a phenomenon of ideas and not of genes, it is a Lamarckian process rather than a Darwinian one. In other words, progress is achieved by passing on adaptations through learning within one's lifetime rather than through any increased fitness of successful scientists, although successful scientists do tend to attract and train more new scientists than unsuccessful ones. The ideas and techniques that lead to successful experiments are published and passed on in a much shorter interval than the one required to produce new scientists. In that way, scientific progress is like any other cultural evolution.

That said, how should this recognition of scientific progress as a Lamarckian evolution affect science funding programs? Clearly, one should encourage a diversity of approaches in order to improve the chances of "covering" the search space of possible solutions to any scientific problem. One should also encourage rapid and widely distributed publication of all results. These lessons are not new.

Frequently lost on program managers, however, is the fact that, in order for the evolutionary process to proceed, one must also develop a competitive process among the scientists working to solve the same problem. Selection of a scientific "solution" to a problem is relatively meaningless without a corresponding set of attempts that failed to bring about a successful result. This means that program managers must not only expect failures, but they must also be willing to fund a sufficient variety of approaches, that is, to take enough risk, that failures are generated and published. In examining failures, scientists learn valuable lessons about the causes of success and about where, within a success, the causality can be attributed. Without them, it is not possible to know which aspects of the successful approach actually led to the desired result.
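As a purely illustrative sketch (the post contains no code, and every name, size, and probability below is an assumption of mine), a toy Monte Carlo makes the argument concrete: a funded portfolio of hypothetical approaches, each defined by a few design choices, only one of which actually drives success. It is the contrast between successes and published failures that exposes the driver.

```python
# Toy sketch: why funded (and published) failures help attribute the cause of success.
# All names, sizes, and probabilities here are assumptions made for illustration.
import random

random.seed(0)
CHOICES = ["method_A", "big_dataset", "new_sensor"]  # hypothetical design choices
DRIVER = "method_A"                                  # the one choice that actually matters

def run_experiment(approach):
    """Success depends mostly on the driver choice, plus noise."""
    return random.random() < (0.8 if approach[DRIVER] else 0.2)

# A funded portfolio with enough variety that both successes and failures occur.
portfolio = [{c: random.random() < 0.5 for c in CHOICES} for _ in range(200)]
results = [(a, run_experiment(a)) for a in portfolio]
successes = [a for a, ok in results if ok]
failures = [a for a, ok in results if not ok]

for c in CHOICES:
    rate_s = sum(a[c] for a in successes) / len(successes)
    rate_f = sum(a[c] for a in failures) / len(failures)
    # Only the real driver shows a large gap between success and failure rates;
    # without published failures there is nothing to contrast against.
    print(f"{c:12s} present in {rate_s:.2f} of successes, {rate_f:.2f} of failures")
```

Run under these assumptions, only the driving choice shows a large success/failure gap; the other choices hover near the same rate in both groups.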

The propensity of program managers to seek and publicize "winners" is only part of the job of effective program management, because the wider the net is cast, the more accurately one can not only recognize success but also understand why an approach succeeded.

Friday, September 17, 2010

Pasteur's Quadrant

Stokes' book Pasteur's Quadrant (1997) describes how Vannevar Bush got it wrong when he postulated a continuum from basic research to applied research. Stokes argues that these are actually two separate dimensions along which research can be characterized. Pasteur is a good example of work high on both the basic and the applied dimension: he was not only looking for a cure for a specific disease, but he was also doing research on a basic mechanism of disease.

While NSF is best characterized as funding research high on the basic dimension, it is not likely to fund significant efforts that are high on the applied dimension, at least in comparison to other funding agencies. DARPA, for example, tends to fund research that is high on the applied dimension but not significantly so on the basic dimension. In fact, the "color of money" at DARPA tends to prohibit this: defense research dollars are categorized as either basic or applied, but not both.

A particularly important challenge for a country seeking to maximize the return on its research investment is whether to spend funds on basic research at all, because the return is so risky and so far in the future. This may be a false dilemma if one takes Stokes' view: research topics can be identified that are both basic and applied, and there may be a way to do this intentionally.
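A minimal sketch of Stokes' framework (my own illustration; the example projects and their labels are assumed, not taken from the book) shows the two dimensions treated as independent axes rather than as opposite ends of Bush's single continuum:

```python
# Illustrative only: classify research along two independent dimensions, per Stokes.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    seeks_fundamental_understanding: bool  # the "basic" dimension
    seeks_use: bool                        # the "applied" dimension

def quadrant(p: Project) -> str:
    if p.seeks_fundamental_understanding and p.seeks_use:
        return "Pasteur's quadrant (use-inspired basic research)"
    if p.seeks_fundamental_understanding:
        return "Bohr's quadrant (pure basic research)"
    if p.seeks_use:
        return "Edison's quadrant (pure applied research)"
    return "neither dimension"

# Hypothetical assignments, for illustration only.
for p in (Project("mechanisms of disease plus a cure for rabies", True, True),
          Project("structure of the atom", True, False),
          Project("a practical electric lighting system", False, True)):
    print(f"{p.name}: {quadrant(p)}")
```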

Reviews of proposals from other countries have led me to believe that there is an emphasis on targeting research areas that will create industrial partnerships and quick wins in new applications. Unfortunately, the topics often involve the creation of an engineering artifact with new capabilities rather than deep investigation into the fundamentals of the science behind the topic. At the end of the project, a payoff might develop, signaling success for the funder in demonstrating increased world market share in some area. At the rate at which competition drives engineering applications these days, however, that success is likely to be short-lived unless the principles involved are deeply understood as well. With that deeper scientific understanding, one can continue to create new artifacts and even understand what made them successful in the first place.

Saturday, September 11, 2010

Critical Mass in Science

In my experience, a lot of program managers in the Federal Government view the development of new ideas in science as the result of invention by individuals. It is a view not unlike historical accounts of Edison or other famous inventors who experimented in a lab on a variety of materials until something they tried worked in some amazing, new way. Certainly, their search space was informed by concepts and beliefs about the nature of the things they were working on, so it wasn't random. Nevertheless, it was individual.

I believe, from my observation of science projects across a broad spectrum of disciplines, that this view is not just quaint, but fundamentally wrong. Science progresses in large part because it is a social process. Philosophers of science and many others share this view, so this statement is nothing new, except maybe to some government program managers. Beyond this, however, I believe that the search process conducted by scientists working in a specific area is not only non-random, that is, informed by concepts and beliefs about the nature of their area, but also nonlinear: the chances of discovery do not increase simply in proportion to the number of scientists working in the area.

Scientific progress is influenced by the communication among scientists in a field: their papers, conference talks, peer reviews of each other's work, and informal communications. More than that, this communicative feedback induces rapid convergence when progress seems imminent. A research project in a center I once managed was focused on this phenomenon and on ways in which it could be detected in networks of scientists. The phenomenon acts like a "social attention" mechanism that rapidly heightens communication among scientists around a specific topic, in a manner not unlike the formation of critical mass in a nuclear reactor.

Effective program managers not only pay attention to the social processes of science in making and overseeing government funding, but they also get involved in sampling the social network of science for potential critical-mass phenomena surrounding potential new discoveries. One way this was done at the National Science Foundation was through the funding of workshops in which 25 or more scientists were invited to discuss a particular topic and share concerns and interests. Many, if not most, such workshops did not produce a sudden boom of interest, but some did. Such workshops are a way of sampling for nascent areas in which a modest additional investment by the Foundation could facilitate a new discovery. The process encourages critical mass to develop in a nascent area, if the concepts and ideas to support it are already there. Without being fueled by such activities, the process might take a lot longer, inhibited by the normal inertia of scientific communication, such as publication and proposal review cycles that can take months, if not years.
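A toy branching-process model (entirely my own sketch; the recruitment numbers are assumed) illustrates the critical-mass analogy: below a critical "reproduction number" a topic fizzles out, while just above it attention cascades through the network.

```python
# Toy sketch of "social attention" spreading among scientists as a branching process.
# The mean number of colleagues recruited per engaged scientist (r) is an assumed parameter.
import random

random.seed(1)

def attention_cascade(r, n_seeds=5, max_size=10_000):
    """Each engaged scientist recruits a random number of colleagues with mean r."""
    active, total = n_seeds, n_seeds
    while active and total < max_size:
        new = 0
        for _ in range(active):
            # geometric draw with mean r: keep recruiting with probability r / (1 + r)
            while random.random() < r / (1 + r):
                new += 1
        active, total = new, total + new
    return total

for r in (0.6, 0.9, 1.3):  # subcritical, near-critical, supercritical (assumed values)
    sizes = sorted(attention_cascade(r) for _ in range(21))
    print(f"mean recruits r = {r}: median cascade size = {sizes[10]}")
```

The qualitative jump once r exceeds 1 is the point of the analogy: workshops and similar "fueling" activities are attempts to push a nascent topic's effective r past that threshold.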

Tuesday, September 7, 2010

Program Pathologies

Government-funded research programs can sometimes develop pathologies, even in prestigious, peer-reviewed agencies like the National Science Foundation, though the problem is definitely not limited to NSF. Program pathologies can be insidious and are not necessarily the intentional result of a biased program manager.

The most serious pathology, in terms of the overall health of science and the use of taxpayer dollars, is what I call the "self-fulfilling prophecy" program. When an agency requires peer review of proposals for research, the program manager must find capable experts who are both willing and able to do reviews. This should be a learning experience as well as a community-service effort that scientists look forward to, but the load can also be onerous when set against the pressures of teaching, research, and maintaining continuity of funding to support one's graduate students and laboratory facilities. The closer the request to review is to the actual program that a scientist might apply to, or already be funded from, the more likely they are to agree to serve. There is also an incentive to find out what is new in a program and to survey a sample of the competition. Involvement like this, however, is a slippery slope.

Reviewers tend to like proposals along the lines of the kind of research they are doing and tend to dislike newer, unproven approaches. Part of this is because familiar research is easier to judge, the references are familiar, and reviewers expect to (and probably should) see their own work referenced in the proposal. Untested approaches are a harder case to make under almost any circumstances. Reviewers are also more tolerant of program managers who share their opinions and approaches than of managers who want to try something new once in a while. In fact, program managers are not immune to the positive feedback they receive from world-class reviewers who treat them as equals in some sense. The ego needs a boost once in a while in any government job, and kind words from a big-name researcher are certainly nice to receive.

Over time, the tendencies discussed above lead to a "self-fulfilling prophecy" program, one in which proposal funding decisions are based more on similarity to work already being done than on truly objective judgment. It could be said that reviewers are choosing to fund their "friends," knowing that their friends will, in turn, choose to fund them. This pathology may not be intentional, but it sometimes works out that way anyway. Program managers who should be forcing the centroid of the program to shift constantly may resist doing so for fear of displeasing those who have heaped praise on them for running such a fine program. Again, this pathology may be more latent than overt, but the bias is often there anyway.
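A toy simulation (my illustration only; the one-dimensional "topic space" and all numbers are assumptions) shows how similarity-biased review locks a program in place: incremental proposals crowd out novel ones, and the program's centroid stops tracking the moving frontier of the field.

```python
# Toy sketch of the "self-fulfilling prophecy" program: reviewers fund whatever
# most resembles the currently funded portfolio. All parameters are assumptions.
import random

random.seed(2)

def centroid(xs):
    return sum(xs) / len(xs)

funded = [random.gauss(0.0, 0.2) for _ in range(20)]  # funded work in a 1-D "topic space"
frontier = 0.0                                        # where genuinely new ideas appear

for year in range(1, 9):
    frontier += 0.5                                   # the field keeps moving
    c = centroid(funded)
    incremental = [random.gauss(c, 0.2) for _ in range(25)]    # resembles funded work
    novel = [random.gauss(frontier, 0.2) for _ in range(25)]   # new directions
    proposals = incremental + novel
    # Similarity-biased review: fund the 20 proposals closest to current work.
    proposals.sort(key=lambda x: abs(x - c))
    funded = proposals[:20]
    print(f"year {year}: field frontier = {frontier:.1f}, program centroid = {centroid(funded):.2f}")
```

Under these assumptions the program centroid stays near its starting point while the frontier drifts away, which is exactly the drift a program manager who forces the centroid to keep shifting is supposed to prevent.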

While programs in all agencies are subject to such pathologies, the National Science Foundation has a mission to maintain the health of science on its own principles and must constantly seek to identify pathological programs and change them. Staffing more than one-third of its program manager positions with scientists serving short terms on loan from universities is one way to counter these tendencies. Other ways have also been used, such as creating initiatives from time to time that synthesize areas that have not previously been considered in the same program. In a sense, the best NSF programs are those that constantly change and seek to sit on the border between comfortable science and risk-taking.

Friday, September 3, 2010

Flavors of Interagency Collaboration

For many reasons, such as avoiding duplication, leveraging other agencies' investments, or discovering advances already underway, agencies of the Federal Government engage in various forms of collaboration. There is a wide range of "flavors" of such collaboration: Executive Office National Science and Technology Council committees, Office of Science and Technology Policy working groups, national initiatives such as the High Performance Computing Initiative, special calls for science funding data from the Office of Management and Budget, or special efforts started by one agency head trying to engage others in a focused area.

None of these "top-down" efforts is as effective as direct program manager to program manager interaction. Top-down initiatives usually devolve into high-level posturing in order to win a larger portion of the budget pie in a particular funding area. For this reason, most agencies are unwilling to reveal details of their efforts or to invite hands-on collaboration in funding. The significant results from such interactions are usually seen in rearrangements of the President's budget request.

Most significant interagency collaboration results from the direct interaction and hands-on teamwork of program managers from different agencies. Even this type of interaction has many "flavors". Program managers can agree in a formal way to joint calls for proposals with agreed-to procedures on how to handle review and funding of proposals received. They can also attend each other's PI meetings to learn of funded efforts, track results, and agree on how to handle efforts of joint interest.

One of the least utilized, but probably most useful, forms of interaction among agencies is the detailing of program managers from one agency to another for two to three years. Interagency details usually involve the host agency transferring to the home agency the loaded salary of the manager during the period of the detail. Only when program managers actually sit and function in an agency for an extended period of time can they appreciate the dynamics, and especially the politics, of that agency's processes. Each agency is like a separate culture with little overlap in processes and procedures. My own direct experience as a program manager was in NSF, DARPA, and DHS, but I have had extended exposure to NIH, NSA, NASA, NGA, and several others. In addition, I have observed all science-funding agencies from the position of several different Executive Office committees. Yet I have never observed any two agencies' science funding processes to be essentially the same. Each one practices the funding of science in a very different way.

Perhaps this diversity of approaches is appropriate for the health of US science, and perhaps the differing missions of the agencies are the principal reason for it, but it calls for even greater attention to interagency collaboration. Since the ways in which agencies "cover" science differ so much, agencies are not likely to share the same understandings and are therefore likely to benefit from collaboration with one another.

Thursday, September 2, 2010

Contracts, Grants, and Bias

One especially troublesome observation over the past 15 years of directing the funding of science has been the government's misuse of contracts for research. Science must proceed, ultimately, on the basis of peer review and appreciation (or not) by what Polanyi calls the process of mutualism. (This is discussed in an exceedingly insightful chapter titled "Society of Explorers" in his book The Tacit Dimension.) The establishment of truth in science is accomplished by review and by independent duplication of effort by others. History is filled with examples where the opposite occurred, often with serious consequences.

Peer review, as the National Science Foundation conducts it, is perhaps the ideal implementation of Polanyi's process. NSF also funds research mostly through grants because, after the award, it does not believe the agency has the right to interfere in the conduct of the research, except in the most gross circumstances of fraud, misrepresentation, or failure to attempt the proposed goals.

Some of the other agencies of the Federal Government have recognized that the NSF process tends to select the top researchers in the country and have piggybacked on NSF as a mechanism for providing funding to researchers on specific topics. Most of this funding was from agencies of the Intelligence Community, which, I believe, do not have granting authority; they can only provide contracts for research.

While I certainly do not believe agencies must all follow the NSF model (NIH does not operate in exactly the same way and yet has a stellar record in this regard), some have pursued general research by awarding contracts. This may seem workable, but I believe the term "research contract" is an oxymoron. Contracts are awarded to achieve specific milestones and are often of time-and-materials form, so that tight controls can be, and usually are, exerted by the sponsor. When such contracts are successful, I believe they are better termed development contracts, not research contracts. For this reason, most universities will not accept time-and-materials contracts to do research, preferring to accept grants for that purpose.

The problem of research contracts appears most egregious in cases where the government sponsor wants to control the details of the research rather than let it proceed as it might. There may be many reasons for this; one possibility is that it tends to create a virtual government laboratory where Congress did not approve one. Since indirect costs are part of contract fees, a few universities and most of industry will accept this situation despite the severe limitations it imposes on the directions and uses of the research.

While much more could be said about this topic, the main point, in summary, is that, in my own experience, some agencies of the Federal Government have sought to circumvent Congressional approval for expanding their research activities by diverting research dollars and using contracts as the vehicle.

Wednesday, September 1, 2010

DARPA and Program Evaluation

Another topic of great interest to others is DARPA evaluations. Some view them as rigidly constraining program researchers and as actually tending to make progress merely incremental. Most DARPA Directors ask to see a built-in evaluation process before approving a DARPA program, and sometimes these have been a tool for Directors to micromanage programs.

This post is focused on a different aspect of DARPA evaluations, one that differentiates the DARPA programs that used them from NSF programs and has a large, but possibly not widely recognized, impact. The DARPA programs I ran for the agency had evaluations that were community competitions. Program fund recipients were required to attend and present their results, but others could attend and present results as well. The only rule was that outsiders also had to follow the same rules and allow their results to be independently judged, usually by NIST. The motivation for engaging in this way, without funding, was that participants who performed well could usually expect to receive DARPA funding in the future. Of course, this was not always the case, but it occurred often enough to be a strong motivator. Even without funding, being able to publish the claim that one was a leader in a NIST evaluation generally attracted interest and improved the chances of funding, sometimes from other sources.

When a formal, NIST-led evaluation was not used, the DARPA Principal Investigators' meetings were sometimes also a venue for a similar competition. Not many DARPA programs invite non-funded entities to present, but potential bidders were occasionally allowed to present their case by showing their results. Even when they were not allowed to do this, potential bidders loved the idea of being able to attend and hear how high the bar was and what approaches were being taken to reach it. This process was a great way for a program manager to weed out potential bidders who could not muster the resources to compete well.

As mentioned in the previous post, running a DARPA program in this way requires that the program manager have good knowledge of the community of researchers working in the field and their relative capabilities. Otherwise, this process could easily get out of hand. In a less organized discipline, as many of the social sciences are, this process would probably not work. The social and behavioral science communities are diffuse and not familiar enough with each other to work well in a competitive fashion. Nor is it possible for a program manager to easily define who is or is not a reasonable bidder to the program.

Back to the criticism that DARPA program competitions tend to foster incremental rather than ground-breaking progress: competition tends to circle the wagons rather than send them out on a broad search for solutions. Countering this tendency is possible in a DARPA context, but the agency best set up to address this need is the National Science Foundation. NSF tries to fund good science on the cutting edge and often finds researchers who propose diverse approaches. After some success under NSF funding, we used to encourage proposers to "graduate" to DARPA for more intense funding and direction. Of course, this requires a program there to receive the effort and fund it, and that requires strong interaction between NSF and DARPA program officers.