Negotiating in the Middle Ground

Editor’s note: With our 200th issue, three longtime BEMS members offer their perspectives on some of the important issues facing our Society, past, present, and future. We invite other members to send their ideas about the Society to bemsnewsletter@gmail.com for publication in future issues.

Ben Greenebaum, Ph.D.
  University of Wisconsin–Parkside

Who can argue with either the idea of a “scientific” basis for standards or a “biological” one? Both terms are good public-relations strategies, but they obscure real differences that deserve serious consideration. In fact, proponents of each approach base their arguments on the scientific literature, and both are concerned with the potentially hazardous implications of biological responses to electromagnetic field exposure. The crux of the matter, obscured by these shorthand labels, is that the two viewpoints disagree on how to assign, interpret, and act on uncertainties. Whether this disagreement is ultimately rooted in different motivations as well is a matter for another column. My goal here is to explore whether either of these strategies provides a useful starting point for standards setting.

Setting protection standards always starts with assessing the science. Ideally, this assessment objectively sorts out what is well established, or known, by identifying important results that are supported by adequate, repeatable, and reliable research. In doing this job well, the assessment also outlines the fuzzy boundaries of what is not known. Key to this work is identifying, in some objective way, what constitutes sufficient evidence to establish an effect, and then assessing the impact of that effect on an organism. Work that produces results that seem inconsistent with previous work should be reviewed carefully for differences in experimental technique that might be relevant to the observed outcome. Standards cannot be based on spurious experimental results, nor should they reflect a willingness to overlook or hide evidence of potentially biologically significant impacts.

At one extreme in present-day work, the “scientific” approach demands many flawless experiments, all with highly consistent, highly significant results that include both observed and theoretical links to illness, as proof of a hazard needing regulation. We know from work in related disciplines that this is a nearly impossible standard, given our fundamentally incomplete knowledge of most normal and abnormal biological pathways.

At the other extreme, a “biological” approach would ban any fields that ever showed the slightest effect in any experiment. Again, from work in related disciplines, we know that small variations in experimental technique can sometimes lead to large differences in results. Without a critical assessment of the true cause of an effect, we may build support for banning what is not actually the causative agent.

Few would deny that in an ideal world regulations should protect against hazards that are certain while not precluding what clearly has no effect. What is at issue is, in essence, deciding where to draw the line. It should be obvious that in practice neither extreme is viable. Unfortunately, there is no universally agreed-upon point of demarcation between the two options to help us navigate this territory easily. Partisans for each extreme have their own criteria for what constitutes sufficient evidence; that is, how to assess the certainty or uncertainties associated with any particular experimental result and how to weigh the full set of results that may be relevant to any particular possible effect.

Disputes arise in the middle ground where the observed effects are potentially significant, but may not be immediate or clearly a result of a single factor, yet they appear with sufficient robustness to preclude ignoring them altogether. These issues prompt committees, governments, groups and individuals to consider what, if any, “safety factors” or “precautionary measures” are appropriate and to take or advocate any “prudent avoidance” measures that are dictated by their assessment of the uncertainties. To this extent, the “scientific” and the “biological” paradigms are not so different in principle, although in practice they may come up with quite different results. The differences arise from using different concepts of how much, how consistent, and how certain the results must be to be considered sufficient.

Talking past each other in hopes of drowning out the other point of view benefits no one. Social as well as individual factors are important here. To begin the critically important dialog, each side must first show it is willing to hear and understand the other, independently of any actual agreement on any point. One recent sign of hope in our professional community is that the “scientific” ICNIRP and IEEE standards-setting bodies now note that their assessments of the research do not draw any definite conclusions about relatively small-incidence effects from long-term exposures to weak RF fields. They add that the safety factors they started with are indeed somewhat arbitrary and are intended to cover all uncertainties. From the “biological” side, some advocates acknowledge that many of the experimentally observed biological changes may not directly harm health, though they note that the changes can imply long-term possibilities of such harm. They contend that these possibilities are not being taken seriously enough and advocate for stronger precautionary measures, also set arbitrarily.

While I would hope that these acknowledgments might indicate that each side is starting to examine the other’s evidence and viewpoints, for some partisans the acknowledgments may unfortunately be more formal than real. Thoughtful discussion of these important issues should begin with clarifying underlying assumptions and criteria. I submit that talking past each other may only be replaced by real discussion if we take seriously the recognition that the differences stem from approaching the weighting and interpretation of the evidence from opposite directions. That implies being clearer about the underlying motivations, assumptions, and approaches: it means openly identifying and airing the differences with the goal of resolving them rather than obliterating the other side. I noted above that social and individual factors play an important and usually unspoken role behind these differences; these underlying values also need to be recognized and brought to the foreground, without insisting that some are “better” than others. If the scientific community can seriously evaluate these differences among ourselves, we can move toward a compromise that most can live with. In doing so, we will have performed a great non-scientific service for society in general. Shorthand phrases like “scientific” and “biological” can jog emotions, but they do not help clarify difficult choices that have significant social and human consequences.