Monday, December 6, 2010

Political Appointees

In this era of Federal Employees being denied their raises as a token gesture toward easing the budget deficit, I am surprised that no one has addressed the role of political appointees in the Federal Government. In my experience, political appointees are usually placed in relatively high positions reporting to the head of the agency, who perhaps should be the only political appointee in any agency. Nevertheless, these other placements do occur and are rampant in some agencies. In all the cases I've observed, and they were many, these appointees were not qualified for the positions they held, had no understanding of the mission or the culture of the agency, and usually only obstructed activities rather than trying to create new methods or processes to improve government. While I am unaware of the salaries of these appointees, I would be very surprised if they were not sufficiently high to satisfy the recipients' expectation of payment for their support during a past campaign.

Painting Federal Employees with a broad brush and treating them all the same, as the salary freeze did, is wrong. Most civil servants are hard-working and deserve a just wage for their level of experience and education. Critics who compare Federal Employees' wages with wages in the civilian workforce usually do not take into account that the more menial tasks of government work are carried out not by government employees but by contractors.

There is fat in the Federal Employee workforce, and it is the political appointee. Ridding the government of all such appointees, except for agency heads who are cabinet members, would not only reduce the Federal budget for salaries but would in all likelihood increase the efficiency of the Federal government and improve morale among employees, who might then have an upward position to aspire to that is not "taken" by a political hack. I strongly believe that the total Federal employee budget and the number of Federal "employees" can be reduced in one move, with a stronger, more efficient government as the result: simply get rid of the political spoils process that allows the sometimes rampant hiring of political employees into non-agency-head positions.

Tuesday, November 16, 2010

How to Down-Select

Program managers need to be aware of a potential disconnect between the desire to use mid-term reviews as an opportunity to “down-select” some of the projects in a program and the desire to keep hands off funded research projects. It is important not to micro-manage research from the funding organization; micro-management by the funding agency is never a good way to keep projects “on track”.

When a review panel understands that a review will result in the termination (“down-select”) of some of the projects in a program, it will take a far different view of the project report than an NSF panel would, for example. In fact, the nature of the review will be similar to that of an original proposal being reviewed for initial funding. A common concern is the level of direction that projects receive regarding what should be in a mid-term report and what reviewers need in order to advise on whether the program should continue a project or not.

In many instances, the mid-term report is written in a perfunctory manner, failing to provide the information that panel reviewers need in order to judge whether or not the project should be continued. Information that was often lacking included:

  • Honest discussion of what actually was already accomplished versus what was proposed;
  • A discussion of any changes in objectives, along with the reasons for the changes;
  • Information on papers in progress as well as listing of those already accepted for publication, if any; and
  • Listing of all project participants, including students, and how they interact to accomplish the overall goals of the project – i.e., information that would show that the project is being managed appropriately and effectively.

The lack of effective management, and of a reasonable management plan behind it, is most often the primary cause of failed research projects involving more than one primary researcher. Collaboration is very difficult to achieve without such a plan. Most researchers have their own agendas and prefer to use funds to continue those agendas rather than support the needs of a larger collaboration.

One of the most frequently discussed issues during a review is the number and quality of publications made available for review. For projects that have been underway less than two years, it is probably inappropriate to require a listing of published journal articles, since it normally takes at least a year for an article to be reviewed and published in a major scientific journal. Papers that do appear that early tend to be based on work done prior to the funded project. It is often more useful in mid-term reviews to see a list of conferences attended and papers presented at international conferences, as a measure of progress as well as a measure of international stature.

With appropriate guidance from a program, mid-term reviews of projects can be conducted fairly and effectively because such guidance ensures that the reviewers will have the information they need to give useful advice to the program.

Saturday, October 9, 2010

The Tenure Issue

Tenure is an issue mostly associated with academic freedom, and the achievement of tenure in an American university is often linked to the ability to teach and do research on topics that, prior to tenure, might have negatively affected one's academic career. The question of whether or not tenure remains an appropriate system for the United States is not one we're addressing here. Instead, we will relate a few ways in which the tenure system affects the progress of science through government funding. Even though the US Government has no official position on tenure in relation to reviewing proposals or awarding research funding, there are still strong relationships to be recognized between tenure and the overall progress of science.

Prior to being awarded tenure, junior faculty seek to publish as much as possible in order to build a strong case for being awarded tenure. This effect of the tenure process is good for the progress of science for all the obvious reasons already mentioned earlier in this blog. What someone publishes, however, is as important as how much they publish. If junior faculty members publish too far outside the scope of the discipline of their potential tenure committee, it will likely hurt their tenure case. Tenure committees normally seek to strengthen the field they represent rather than have it change, or even evolve, by granting tenure to someone who may be considered to be on the "fringe" of the field or, worse, outside it. In this respect, the tenure process is political. Tenure committees are not unlike small political parties in this sense, seeking to increase the influence of the direction they already represent.

For funding programs seeking to diversify a field or to generate interdisciplinary fields, this is a strong negative influence. Rarely will you find a junior faculty member willing to step outside the bounds of their potential tenure committee to seek funding for a revolutionary interdisciplinary idea. When this does happen, it is usually a natural combination, such as a combination of the field with teaching, with computation, or with the collection and sharing of large data sets. Rarely is it a combination of two core sciences. Bioinformatics, a combination of biology and information science, is one major exception. Even in this case, however, it wouldn't have happened had it not become obvious in the genomic era that biology is in large part an information science anyway.

There are also influences on government funding of science after the award of tenure. While one would expect newly-tenured faculty members to begin to diversify and take more risk in their approaches, that is not normally observed. Possibly the reason is that, after spending seven years keeping tightly within the bounds of a field, they have become fully enculturated in the field and no longer aspire to change it. Innovation becomes more of a challenge when a faculty member already has PhD students to supervise, classes to teach, and now service on the tenure committee that keeps them focused on the field.

One outcome of tenure that has to be mentioned, because it is so noticeable from the point of view of government funders, is that of ego. Achievement of tenure is a difficult and highly political process. When it succeeds, the faculty member is treated differently both by those within the university and by those in the field. Such treatment is almost like that of royalty. That may seem too strong a label, but the actual fealty shown by junior, not-yet-tenured faculty can be a strong and potentially negative influence on one's personality. A tenured faculty member will not only serve on the tenure committee in their own university but will be asked for letters of reference by tenure committees at other universities representing the field. They will serve as editors or reviewers of major journals in the field. And, unfortunately, they also tend to serve more often on review committees for funding. While program managers should try to include junior faculty as much as possible, to teach them how to write fundable proposals, they also like to "score points" with the field by selecting reviewers who have recognition in the field, and those people are likely to be tenured. For tenured faculty with weak egos to begin with, all this recognition can create an egotist. I have worked equally in industry, academia, and government, and I have never encountered stronger egos, in the negative sense, than I have in academia. This should be of concern for government funders.

Government funding of science is aimed at progress in the field, but that progress can be hindered not only by the inertia of the tenure process but even more by the influence of very strong egos. Big names in a field are likely to influence outcomes simply because of their names rather than because of reasoned argumentation. Ego-tainted big names tend to make matters worse by influencing outcomes to increase their own standing and entourage in the field. Only the competitive nature of government funding can control this since, having tenure, faculty are protected in their university positions. Program managers must immunize themselves against these processes when they direct funding.

Saturday, October 2, 2010

US Government Corruption in Funding of Science?

A question I have been asked is whether or not there is corruption in US Government funding of science. It is an expected question in an era of public scrutiny of government and its spending. I can only comment on what I have actually observed, since this sort of thing is not usually published, if it exists at all. My experiences along these lines are with the National Science Foundation, DARPA, DHS, and the Intelligence Community. The answer is that I have seen wrongful activity in the funding of science in the US, but that answer requires elaboration.

In the National Science Foundation, when corruption occurs, which I believe is rare there, it is intensely pursued by an independent Inspector General's office. For the one serious case I observed, there was an investigation in which I was interviewed as a witness. I don't know what the outcome was, but I believe that, if wrongful acts were found, they were dealt with appropriately. Program Managers at the NSF are required to attend annual workshops where they are given case studies to consider. Most such case studies are, on the surface, open to interpretation but involve, at their core, either a criminal act or at least an ethical violation. I trust the NSF system because there are a large number of ways in which the NSF Inspector General learns about potential problems, and its investigations are thorough, detailed, and unbiased.

Matters in the Department of Defense and the Intelligence Community are very different. It's not that there are more violations, or that there is no Inspector General whose job it is to carry out investigations; there is an IG in every US Government agency. The problem, I believe, stems from the nature of the business. Almost all staff in these organizations, including program managers, are required to hold active security clearances due to the nature of the research they fund and its potential impact on National Security. This means that only those who "need to know" actually learn about research projects or their outcomes until, or unless, they are published in the open literature. The circle of those who "need to know" is usually tightly controlled, creating a structural problem for detecting and dealing with corruption: the number of those involved is far smaller, resulting in a much smaller sampling of people from various disciplines or points of view. This means that those who are involved have to be much more vigilant and willing to report potential problems than, say, those in the National Science Foundation.

Does it work? Are these people more vigilant, such that wrongful acts are detected, investigated, and dealt with properly? In the cases I've observed, I'd have to say no, unfortunately. The same system that protects National Security also provides a shield that prevents disclosure, and humans being what they are, there is always some tendency, even if small, to step over the line in cases in which one is personally involved. Sure, we all go over the speed limit at times, but these cases were more than being a few miles over a posted limit; they were serious, in my opinion. Those involved justified them with self-rationalizations: the importance of the work, going with a research performer they "trust" rather than following required procedure, or simply the need to take such risks in order to get important work done at all, since it may be of a type few others wish to engage in.

The US system of security works well where it has to, but it must be recognized that it has unintended side effects such as these. Some might say there are whistle-blower protections and that observed cases must be reported. At what cost? Is someone to risk not just their career but a potential criminal prosecution just to provide this information? I don't think so. The risks are far too great for anyone I know of to make statements regarding potential wrongdoing they've observed. I suppose if a case were to involve loss of life or flagrant criminality, the result might be different, but funding of science usually does not rise to that level of seriousness. It is, however, misuse of taxpayer dollars, and that in itself may be reason enough for a serious reconsideration of how the US funds research in agencies having to do with National Security.

Thursday, September 30, 2010

Darwinian Science?

Since progress in science tends to be competitive and occurs in a population of scientists (see the post below), it is tempting to think of it as a Darwinian process. That is, science proceeds by way of a large variety of approaches tried by a population of scientists, and selection acts on those approaches according to their successes in experimentation. It is an attractive way to think about scientific progress because it depends upon the generation of variety through broad government investment and upon repeatable experimentation to prove approaches that work. It would be wrong, however, to label this a Darwinian view of science.

Since science is a phenomenon of ideas and not of genes, it is a Lamarckian process rather than a Darwinian one. In other words, progress is achieved by passing on adaptations through learning within one's lifetime rather than through any increased fitness of successful scientists, although successful scientists do tend to attract and train more new scientists than unsuccessful ones. The ideas and techniques that lead to successes in scientific experiments are published and passed on in a much shorter interval than that required to produce new scientists. In that way, scientific progress is like any other cultural evolution.

That said, how should this recognition of scientific progress as a Lamarckian evolution affect science funding programs? Clearly, one should encourage a diversity of approaches in order to improve the chances of "covering" the search space of options for a solution to any scientific problem. One should also encourage rapid and widely-distributed publication of all results. These lessons are not new.

Frequently lost on program managers, however, is the fact that, in order for the evolutionary process to proceed, one must also develop a competitive process among the scientists working to solve the same problem. Selection of a scientific "solution" to a problem is relatively meaningless without a corresponding set of attempts that failed to bring about a successful result. This means that program managers must not only expect failures, but must be willing to fund a sufficient variety of approaches (that is, take sufficient risk) that failures are generated and published! In examining failures, scientists learn valuable lessons about where the causality in a success can actually be attributed. Without failures, it is not possible to know what aspects of the successful approach actually led to the desired result.
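To make the portfolio argument concrete, here is a minimal Monte Carlo sketch in Python. It is not from the original post: the one-dimensional search space, the number of workable approaches, and every other parameter are invented purely for illustration. It estimates the chance that at least one project in a ten-project portfolio lands on a workable approach, as a function of how widely the portfolio is spread:

```python
import random

# Toy model: a problem has a few "workable" approaches hidden in a space
# of 100 possible approaches. Each funded project tries one approach;
# a narrow portfolio clusters its projects near a single starting point,
# a diverse one spreads them out. All parameters are illustrative.

SPACE = 100
WORKABLE = set(random.sample(range(SPACE), 5))  # 5 approaches succeed

def portfolio(n_projects, spread):
    """Sample n_projects approaches within `spread` of a random center."""
    center = random.randrange(SPACE)
    return [(center + random.randint(-spread, spread)) % SPACE
            for _ in range(n_projects)]

def success_rate(n_projects, spread, trials=10_000):
    """Estimate P(at least one project hits a workable approach)."""
    hits = sum(any(a in WORKABLE for a in portfolio(n_projects, spread))
               for _ in range(trials))
    return hits / trials

for spread in (2, 10, 50):
    print(f"spread={spread:2d}: P(at least one success) = "
          f"{success_rate(10, spread):.2f}")
```

Under this toy model, a tightly clustered portfolio tends to succeed or fail as a block, telling the funder little about causality, while a widely spread one reliably produces the mix of successes and informative failures described above.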

The propensity of program managers to seek and publicize "winners" addresses only part of the job of effective program management, because the wider the net is cast, the more accurately one can recognize not only a success, but why the approach succeeded.

Friday, September 17, 2010

Pasteur's Quadrant

Stokes' book Pasteur's Quadrant (1997) describes how Vannevar Bush got it wrong when he postulated a continuum from basic research to applied research. Stokes claims that these are actually two separate dimensions along which research can be characterized. Pasteur is a good example of research high on both dimensions: he was not only looking for a cure for a specific disease but was also doing research on a basic mechanism of disease.

While NSF is best characterized as funding research high on the basic dimension, it is not likely to fund significant efforts that are also high on the applied dimension, at least in comparison to other funding agencies. DARPA, for example, tends to fund research that is high on the applied dimension but not significantly so on the basic dimension. In fact, the "color of money" at DARPA tends to prohibit it: Defense research dollars are categorized as either basic or applied, but not both.

A particularly important challenge for a country seeking to maximize return on investment of research dollars is whether or not to spend funds on basic research at all, because the return is so risky and so far in the future. This may be a false dilemma if one takes Stokes' view: research topics can be identified that are both basic and applied, and there may be a way to do this intentionally.

Reviews of proposals from other countries have led me to believe that there is an emphasis on targeting research areas that will create industrial partnerships and quick wins in new applications. Unfortunately, the topics often involve the creation of an engineering artifact with new capabilities rather than deep investigation into the fundamentals of the science behind the topic. At the end of the project, a payoff might develop, signaling success for the funder by demonstrating increased world market share in some area. At the rate at which competition drives engineering applications these days, however, that success is likely to be short-lived unless the principles involved are deeply understood as well. With that deeper scientific understanding, one can continue to create new artifacts and even understand what makes them successful in the first place.

Saturday, September 11, 2010

Critical Mass in Science

In my experience, many program managers in the Federal Government hold a view of the development of new ideas in science as the result of invention by individuals. It is a view not unlike historical accounts of Edison or other famous inventors who experimented in a lab on a variety of things until something they tried worked in some amazing new way. Certainly, their search space was informed by concepts and beliefs about the nature of the things they were working on, so it wasn't random. Nevertheless, it was individual.

I believe, from my observation of science projects across a broad spectrum of disciplines, that this view is not just quaint but fundamentally wrong. Science progresses in large part because it is a social process. Philosophers of science and many others share this view, so the statement is nothing new, except maybe to some government program managers. Beyond this, however, I believe that the search process conducted by scientists working in a specific area is not only not random (it is informed by concepts and beliefs about the nature of the area) but is also not a linear process in which the chances of discovery increase in proportion to the number of scientists working in the area.

Scientific progress is influenced by the communication among scientists in a field: their papers, conference talks, peer reviews of each other's work, and informal communications. More than that, this communicative feedback induces rapid convergence when progress seems imminent. A research project in a center I once managed was focused on this phenomenon and on ways in which it could be detected in networks of scientists. The phenomenon acts like a "social attention" mechanism that rapidly heightens communication among scientists around a specific topic, in a manner not unlike the formation of critical mass in a nuclear reactor.

Effective program managers not only pay attention to the social processes of science in making and overseeing government funding, but also get involved in sampling the social network of science for potential critical-mass phenomena surrounding potential new discoveries. One way this was done at the National Science Foundation was through the funding of workshops in which 25 or more scientists were invited to discuss a particular topic and share concerns and interests. Many, if not most, such workshops did not produce a sudden boom of interest, but some did. The process amounts to sampling for new areas in which, if the Foundation invested some additional funds, a new discovery could be facilitated. It encourages critical mass to develop in a nascent area, if the concepts and ideas to support it are already there. Without being fueled by such activities, the process might take much longer, inhibited by the normal inertia of scientific communication, such as the publication and proposal review processes, which can take months, if not years.
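The critical-mass analogy can be sketched with a toy adoption model. The following Python sketch is illustrative only (the network size, average degree, adoption threshold, and seed counts are invented, not taken from any real study): scientists sit in a random communication network, and each adopts a new topic once a quarter of their contacts have adopted it. Seeding a few more initial adopters, as a funded workshop might, can tip the system from a fizzle into a field-wide cascade:

```python
import random

# Toy "critical mass" model: scientists in a random communication
# network each adopt a new topic once a threshold fraction of their
# contacts has adopted it. All parameters are illustrative.

def make_network(n=200, k=8, seed=1):
    """Build an undirected random network of n nodes."""
    rng = random.Random(seed)
    nbrs = [set() for _ in range(n)]
    for i in range(n):
        for j in rng.sample(range(n), k):
            if i != j:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs

def cascade_size(nbrs, n_seeds, threshold=0.25, seed=2):
    """Seed n_seeds adopters, then spread until no one else tips."""
    rng = random.Random(seed)
    active = set(rng.sample(range(len(nbrs)), n_seeds))
    changed = True
    while changed:
        changed = False
        for i in range(len(nbrs)):
            if i in active or not nbrs[i]:
                continue
            if len(nbrs[i] & active) / len(nbrs[i]) >= threshold:
                active.add(i)
                changed = True
    return len(active)

nbrs = make_network()
for seeds in (5, 20, 40, 60):
    print(f"{seeds:2d} initial adopters -> {cascade_size(nbrs, seeds)} total")
```

In threshold models like this one, cascade size typically jumps sharply once the seed count crosses a tipping point; below it, extra communication simply fizzles, which is the qualitative behavior the workshop strategy tries to exploit.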

Tuesday, September 7, 2010

Program Pathologies

Government-funded research programs can sometimes develop pathologies, even in prestigious, peer-reviewed agencies like the National Science Foundation, but definitely not limited to the NSF. Program pathologies can be insidious and not necessarily the intentional result of a biased program manager.

The most serious pathology, in terms of the overall health of science and the use of taxpayer dollars, is what I call the "self-fulfilling prophecy" program. When an agency requires peer review of proposals for research, the program manager must find capable experts who are both willing and able to do reviews. This should be a learning experience as well as a community-service effort that scientists look forward to, but the load can also be onerous given the pressures of teaching, research, and maintaining continuity of funding to support one's graduate students and laboratory facilities. The closer the request to review is to the actual program that a scientist might apply to, or already be funded from, the more likely they are to agree to serve. There is also an incentive to find out what's new in a program and to survey a sampling of the competition. Involvement like this, however, is a slippery slope.

Reviewers tend to like to see proposals along the lines of the kind of research they are doing and tend to dislike newer, unproven approaches. Part of this is because familiar research is easier to judge, the references are familiar, and reviewers expect to (and probably should) see their own work referenced in the proposal. Untested approaches are a harder case to make under almost any circumstances. Reviewers are also more tolerant of program managers who share their opinions and approaches than of managers who try out something new once in a while. In fact, program managers are not immune to the positive feedback they receive from world-class reviewers who treat them as equals in some sense. The ego needs a boost once in a while in any government job, and kind words from a big-name researcher are certainly nice to receive.

Over time, these tendencies lead to a "self-fulfilling prophecy" program: one in which proposal funding decisions are based more on similarity to work already being done than on truly objective judgment. It could be said that reviewers are choosing to fund their "friends," knowing that their friends will, in turn, choose to fund them. This pathology may not be intentional, but it sometimes works out that way anyway. Program managers who should be forcing the centroid of the program to shift constantly may resist doing so for fear of displeasing those who have heaped praise on them for running such a fine program. Again, the pathology may be more latent than overt, but the bias is often there anyway.
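A toy simulation can illustrate how this drift happens without anyone intending it. In the hypothetical sketch below (Python; the one-dimensional "research direction" axis, the scoring rule, and all numbers are invented for illustration), reviewers score each proposal by its closeness to the current portfolio's centroid, plus noise, and the top scorers get funded:

```python
import random
import statistics

# Toy model of the "self-fulfilling prophecy": proposals are points on
# a 0-100 axis of "research direction". Reviewers score each proposal
# by its closeness to the current portfolio centroid (plus noise), and
# the 20 best-reviewed proposals are funded each round. All numbers
# are invented for illustration.

rng = random.Random(0)
portfolio = [rng.uniform(0, 100) for _ in range(20)]  # initial awards

for round_no in range(1, 6):
    centroid = statistics.mean(portfolio)
    proposals = [rng.uniform(0, 100) for _ in range(100)]
    # Similarity-biased review: smaller distance to the centroid
    # means a better score; noise stands in for honest disagreement.
    scored = sorted(proposals,
                    key=lambda p: abs(p - centroid) + rng.gauss(0, 5))
    portfolio = scored[:20]  # fund the 20 "best-reviewed" proposals
    print(f"round {round_no}: centroid={statistics.mean(portfolio):5.1f}, "
          f"stdev={statistics.pstdev(portfolio):5.1f}")
```

In this sketch the standard deviation of the funded portfolio shrinks round after round: the program converges on its own centroid even though no reviewer acts in bad faith, which is exactly the latent bias described above.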

While programs in all agencies are subject to such pathologies, the National Science Foundation has a mission to maintain the health of science on its own principles and must constantly seek to identify pathological programs and change them. Having over one-third of its program managers serve short terms on loan from universities is one way it tries to overcome these tendencies. Other ways have also been used, such as creating initiatives from time to time that are syntheses of different areas never before considered in the same program. In a sense, the best NSF programs are those that constantly change and seek to sit on the border between comfortable science and risk-taking.

Friday, September 3, 2010

Flavors of Interagency Collaboration

For many reasons, such as avoiding duplication, leveraging other agencies' investments, or discovering advances already underway, agencies of the Federal Government engage in various forms of collaboration. There is a wide range of "flavors" of such collaboration: Executive Office National Science and Technology Council committees, Office of Science and Technology Policy working groups, national initiatives such as the High Performance Computing Initiative, special calls for science funding data from the Office of Management and Budget, or special efforts started by one agency head trying to engage others in a focused area.

None of these "top-down" efforts are as effective as direct program manager to program manager interaction. Top down initiatives usually devolve into high-level posturing in order to win a larger portion of the budget pie in a particular funding area. For this reason, most agencies are unwilling to reveal details of efforts nor invite hands-on collaboration in funding. The significant results from such interactions are usually seen in rearrangements of the President's budget request.

Most significant interagency collaboration results from the direct interaction and hands-on teamwork of program managers from different agencies. Even this type of interaction has many "flavors". Program managers can agree in a formal way to joint calls for proposals with agreed-to procedures on how to handle review and funding of proposals received. They can also attend each other's PI meetings to learn of funded efforts, track results, and agree on how to handle efforts of joint interest.

One of the least utilized but probably most useful forms of interaction among agencies is the detailing of program managers from one agency to another for two to three years. Interagency details usually involve the host agency transferring to the home agency the loaded salary of the manager for the period of the detail. Only when program managers actually sit and function in an agency for an extended period can they appreciate the dynamics, and especially the politics, of the processes in that agency. Each agency is like a separate culture with little overlap in processes and procedures. My own direct experience as a program manager was in NSF, DARPA, and DHS, but I have had extended exposure to NIH, NSA, NASA, NGA, and several others. In addition, I have observed all the science-funding agencies from the position of several different Executive Office committees. Yet I have never observed any two agencies' science funding processes to be essentially the same. Each one practices the funding of science in a very different way.

Perhaps this diversity of approaches is appropriate for the health of US science, and perhaps the differing missions of the agencies are the principal reason for it, but the diversity calls for even greater attention to interagency collaboration. Since the ways in which agencies "cover" science differ so much, they are not likely to have the same understandings and are therefore likely to benefit from collaboration with each other.

Thursday, September 2, 2010

Contracts, Grants, and Bias

One especially troublesome observation over the past 15 years of directing the funding of science has been government misuse of contracts for research. Science must proceed, ultimately, on the basis of peer review and appreciation (or not) by what Polanyi calls the process of mutualism. (This is from an exceedingly insightful chapter titled "Society of Explorers" in his book The Tacit Dimension.) The establishment of truth in science is accomplished by review and by independent duplication of effort by others. History is filled with cases where the opposite occurred, often with serious consequences.

Peer review, as the National Science Foundation conducts it, is perhaps the ideal implementation of Polanyi's process. NSF also provides grants for research for the most part because it does not believe that, after the award, the agency has the right to interfere in the process of the research, except in the most gross circumstances of fraud, misrepresentation, or failure to attempt the proposed goals.

Some of the other agencies of the Federal Government have recognized that the NSF process tends to select the top researchers in the country and have piggy-backed on the NSF as a mechanism to provide funding to researchers on specific topics. Most of this funding was from agencies of the Intelligence Community which, I believe, do not have granting authority. They can only provide contracts for research.

While I certainly do not believe all agencies must follow the NSF model (NIH does not operate in exactly the same way and yet has a stellar record in this regard), some have pursued general research by awarding contracts. This may seem workable, but I believe the term "research contract" is an oxymoron. Contracts are awarded to achieve specific milestones and are often of time-and-materials form, such that tight controls can be, and usually are, exerted by the sponsor. When such contracts succeed, I believe they are better termed development contracts, not research contracts. For this reason, most universities will not accept time-and-materials contracts to do research, preferring grants for that purpose.

The problem of research contracts appears most egregious in cases where the government sponsor wants to control the details of the research rather than let it proceed as it might. There may be many reasons for this; one possibility is that it tends to create a virtual government laboratory where Congress did not approve one. Since indirect costs are part of contract fees, a few universities and most of industry will accept this situation despite the severe limitations it imposes on the directions and uses of the research.

While much more could be said about this topic, the main point, in summary, is that it appears, from my own experience, that some agencies of the Federal Government have sought to circumvent Congressional approval for expansion of research by diverting research dollars and using contracts as the vehicle.

Wednesday, September 1, 2010

DARPA and Program Evaluation

Another topic of great interest to others is DARPA evaluations. Some view them as rigidly constraining program researchers and actually tending to make progress merely incremental. Most DARPA Directors ask to see a built-in evaluation process before approving a DARPA program, and sometimes these have been a tool for Directors to micromanage programs.

This post is focused on a different aspect of DARPA evaluations, one that differentiated the DARPA programs that used them from NSF programs and had a large, but possibly not widely recognized, impact. The DARPA programs I ran for the agency had evaluations that were community competitions. Program fund recipients were required to attend and present their results, but others could attend and present results as well. The only rule was that outsiders had to follow the same rules and allow their results to be independently judged, usually by NIST. The motivation for engaging in this way, without funding, was that those who performed well could usually expect to receive DARPA funding in the future. Of course, this was not always the case, but it occurred often enough to be a strong motivator. Even without funding, being able to publicize that one was a leader in a NIST evaluation generally attracted interest and improved the chances of funding, sometimes from other sources.

When a formal, NIST-led evaluation was not used, the DARPA Principal Investigators' meetings were sometimes also a venue for a similar competition. Not many DARPA programs invite non-funded entities to present, but potential bidders were occasionally allowed to present their case by showing their results. Even when they weren't allowed to do this, potential bidders loved being able to attend and hear how high the bar was and what approaches were being taken to achieve it. This process was a great way for a program manager to weed out potential bidders who could not muster the resources to compete well.

As mentioned in the previous post, running a DARPA program in this way requires that the program manager have good knowledge of the community of researchers working in the field and their relative capabilities. Otherwise, this process could easily get out of hand. In a less organized discipline, as many of the social sciences are, this process would probably not work. The social and behavioral science communities are diffuse and not familiar enough with each other to work well in a competitive fashion. Nor is it possible for a program manager to easily define who is or is not a reasonable bidder to the program.

Back to the criticism that DARPA program competitions tend to foster incremental rather than ground-breaking progress: competition tends to circle the wagons rather than send them out on a broad search for solutions. Countering this tendency is possible in a DARPA context, but the agency best set up to address the need is the National Science Foundation. NSF tries to fund good science on the cutting edge and often finds researchers who propose diverse approaches. After some success under NSF funding, we used to encourage proposers to "graduate" to DARPA for more intense funding and direction. Of course, this requires a program there to receive the effort and fund it, and that requires strong interaction between NSF and DARPA program officers.


Tuesday, August 31, 2010

DARPA and NSF

The one topic that others have focused on most in conversations with me has been the comparison between DARPA and NSF funding of science. As scientists know, NSF is peer-review oriented, and its programs tend to cover a traditional area of science and have a relatively long life-span (except for term-limited special programs).

What some would call the "traditional DARPA" is quite different. DARPA programs are approved by the DARPA Director, so they take on the character of that person and their interests. Nevertheless, there has been a typical model whereby programs are "sold" to the Director using a presentation that follows the Heilmeier Catechism (easily found with a web search on the term). Because NSF does a good job of covering most of science for the US, DARPA can afford to pursue riskier but higher-payoff ideas. The Internet is the best example; no need to recount that history, but you get the idea. Others would like to create programs with that kind of impact, hoping it wasn't just a fluke. Therefore, they want to know how DARPA works (or worked, in that case).

When I hear what other countries are interested in creating, I get the strong feeling that they are less interested in the general health and progress of science than in quick-win programs that could have high payoff. Hence the interest in hearing how DARPA programs are created and run. My main contribution to that discussion has been to emphasize that there are two really important factors beyond meeting the goals of the Heilmeier Catechism: a program manager with a strong vision, and a DARPA Director who buys into that vision without re-directing or mis-directing it.

DARPA used to be known as an agency of 100 program managers united by a common travel office. The point this captures is that DARPA PMs seem to work best when they survey in person the research capabilities in their interest area and create a vision from what they learn. You can't create a program without knowing who would be able to do the required research, and you also need feedback on your own ideas to help clarify and focus them, specifically to answer the Catechism's questions.

From my point of view, probably the hardest lesson for other countries learning about this process has been to appreciate the type of person who would make a good program manager in a DARPA-like program, and then to actually find people with the skills needed. Such people are rare. In my own experience, I have been surprised to find that my cultural anthropology training has been more important than my computer science training, because it allowed me to understand the state of the art in an area culturally: who the main players are, what the social structure is (conferences, universities, research labs, etc.), and what the main problems of the field are that keep people up at night. Answering these questions requires being truly objective, so as not to bias your observations with your own opinions or desires. This kind of objectivity is what ethnologists in the field of cultural anthropology really try to achieve, although not all succeed even there.

I would be very interested in reading comments by anyone else co-trained in both cultural anthropology and a science.

Initial

After retiring from a career in science funding and management (see my LinkedIn posting under "Gary W. Strong"), I decided to start a blog because of a felt need to share my experiences, warts and all.

While the US government science funding program is remarkable in its successes, anything can be improved. Comments from anyone on the subject are more than welcome. We are looking to create a conversation rather than simply do a brain dump.

Teaching a couple of workshops on research management in Saudi Arabia also clarified for me some of the lessons I had learned but taken for granted. When you explain to someone else how you did your job, you quickly find that you actually have strong opinions, based on experience, that could be valuable to others.

While my very best job was at the National Science Foundation, I also worked at DARPA as a PM and at the Department of Homeland Security, in charge of behavioral research program plans and budgeting. Most positions also allowed me to interact with nearly all of the other Federal agencies in one way or another on matters of research funding cooperation.

This blog won't be a life story. I will post items as they occur to me in probably a more-or-less random order. If you just found me here, thanks, and please let me know what you think.