CSA News Magazine - Features

Moving science forward through meta-analysis

 

doi:10.2134/csa2015-60-5-1

Madeline Fisher

This article in CSA News: Vol. 60 No. 5, p. 4-9
Open Access | Published: May 1, 2015



Before Fernando Miguez began running experiments as a University of Illinois master’s student, like any good scientist, he dove first into the research literature. His subject was the effect of winter cover crops on summer corn yields, and by the time Miguez entered grad school, a healthy body of work already existed. So, he sat down to review a stack of studies, thinking, naturally enough, that he’d soon hit upon a knowledge gap to target in his trials.

He thought wrong. “To be honest, it seemed like the more papers I read, the more confused I was,” says the ASA and CSSA member, now an assistant professor at Iowa State University. Yields varied widely by year and with local climate and soil conditions, leaving him unable to discern any clear trends. Eventually, he gave up and chose a different tack. “I thought, ‘Let’s try to do a meta-analysis on this topic,’ ” he says, “because reading more papers is not helping.”

Meta-analysis—a statistical technique for combining and analyzing the results from 10 or 20 to hundreds of studies—has been practiced for decades, and in some fields, such as medicine, its use is routine. The principle behind it is that scientific debates, even small ones, are never resolved by a few experiments. Instead, “it’s the collection of results from many sources that move science forward and inform our decision-making,” says Ohio State University plant pathologist and meta-analysis expert, Larry Madden. “Science is meant to be a cumulative process.”

Done right, meta-analysis is simply the most robust, objective means to conduct this process, Madden adds, particularly when studies say different things, as in Miguez’s case. “It’s a way to look at an entire collection of published papers and try to make general sense of them,” agrees Chris van Kessel, a University of California-Davis agronomist, experienced meta-analyst, and Fellow of ASA, CSSA, and SSSA. “It gives you a bigger picture of everything that has been done.”

Yet, the agricultural and soil sciences have historically focused on local problems, driven by the needs of farmers and other land stewards to manage their immediate surroundings—the small scale rather than the large. “What I hear a lot from farmers is, ‘Well, in my area, this is what we do,’ ” Miguez says. “And their area is maybe four to five counties around them.”

So, where does this leave a tool meant to aggregate and distill information, one that synthesizes rather than separates? Can it usher in an era of better data stewardship and “evidence-based management,” as some suggest? Or is it best left to sciences, like medicine, where commonality in experimental purpose and design makes pooling data much more natural?

To begin answering those questions, a team led by Miguez and USDA-ARS statistician Kathy Yeater is planning a symposium and workshop on meta-analysis at this year’s Annual Meeting in Minneapolis, MN. In the meantime, the events’ organizers and presenters—some old hands at meta-analysis, some new—share their perspectives on the technique and what it can bring to our sciences.

The Power of Meta-analysis

Meta-analysis requires gathering two essential pieces of information from each selected study: the result, such as a mean or correlation coefficient, and some measure of variability around that result, like a standard error or confidence interval. These values are then pooled in one big analysis. To what end?
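
In code, that pooling step is essentially a precision-weighted average. Below is a minimal sketch on invented numbers, not data from any study mentioned here: each study contributes an effect estimate and a standard error, and the summary weights each study by the inverse of its variance, the standard fixed-effect approach. (In practice, meta-analysts usually fit a random-effects model that also estimates between-study variance; this shows only the weighting idea.)

```python
# Minimal fixed-effect pooling sketch; the effects and standard errors
# below are hypothetical.
import numpy as np

effects = np.array([0.8, 1.2, 0.3, 1.5, 0.9])  # per-study results
se = np.array([0.5, 0.6, 0.4, 0.7, 0.5])       # per-study standard errors

weights = 1.0 / se**2                           # precision weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))      # SE of the summary effect

print(f"Pooled effect: {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
```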

“Perhaps the greatest reason for meta-analysis is the high statistical power it gives you to test hypotheses,” Madden says.

This can be especially useful in the agricultural sciences, he adds, where individual experiments tend to be underpowered. The classic field experiment, for example—a randomized, complete block design with four or five blocks—usually has a low number of replicates and, thus, low power to detect treatment differences. Effects must be large, in other words, to be found significant. By pooling results from many studies, meta-analysis, in contrast, boosts the sample size and the power, allowing even subtle differences to be uncovered.

To further explain, Madden compares meta-analysis with what often happens in a qualitative research review: what he and others refer to as “vote counting.” Say an author examines 50 studies of an herbicide’s effect on weeds, finding that only 20 of them report a significant result. She then concludes—as authors in such situations often do—that the herbicide isn’t terribly effective because it kills weeds less than half the time.

“Well, if those individual studies have low power, and all you’re doing is counting up how often you get a significant effect—that can be very misleading. A meta-analysis on those results may show, in my hypothetical example, that there really is an overall, positive effect,” Madden says. “And it’s all related to power. That’s the big advantage.”
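
Madden’s hypothetical is easy to simulate (with invented numbers, not his herbicide data): generate 50 low-powered studies of a real but modest effect, count how many reach significance on their own, then pool them all.

```python
# Simulate 50 small two-arm trials of a true effect of 0.4 (in standard
# deviation units) with only 4 replicates per arm, then compare vote
# counting with inverse-variance pooling. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect, n_studies, n_reps = 0.4, 50, 4

sig_count, effects, ses = 0, [], []
for _ in range(n_studies):
    treat = rng.normal(true_effect, 1.0, n_reps)
    ctrl = rng.normal(0.0, 1.0, n_reps)
    _, p = stats.ttest_ind(treat, ctrl)
    sig_count += p < 0.05
    effects.append(treat.mean() - ctrl.mean())
    ses.append(np.sqrt(treat.var(ddof=1) / n_reps + ctrl.var(ddof=1) / n_reps))

effects, ses = np.array(effects), np.array(ses)
w = 1.0 / ses**2
pooled = np.sum(w * effects) / np.sum(w)
p_pooled = 2 * stats.norm.sf(abs(pooled) / np.sqrt(1.0 / np.sum(w)))
print(f"Individually significant: {sig_count} of {n_studies}")
print(f"Pooled effect: {pooled:.2f}, p = {p_pooled:.1e}")
```

A vote counter would call this treatment a dud, since only a minority of studies reach significance; the pooled analysis recovers the true effect clearly.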

Obtaining that advantage, though, “requires special care,” cautions David Makowski of the National Institute of Agronomic Research (INRA) in France. “It has been shown in medical science and ecology that the use of inappropriate techniques can decrease the value of meta-analysis.” After seeing this in other fields, Makowski grew curious to know how meta-analysts in agronomy were faring. So, a few years ago, he and his colleagues devised a set of nine criteria for a quality meta-analysis. They then examined 73 published meta-analyses in the agronomic sciences to see how often those criteria were met.

What they reported in Agriculture, Ecosystems, and Environment in 2011 is that some criteria were satisfied nearly all the time and others hardly at all. For example, 92% of authors presented a reference list of all the studies they included in their analyses. But only 22% described the search procedure and standards they used to choose those studies. Moreover, the datasets and software code used in the analyses were almost never provided.

“I think this is an important finding,” Makowski says. “[It means] that other scientists cannot repeat the meta-analysis because they don’t know exactly how the individual studies were selected and they don’t have access to the datasets.” Making these datasets widely available is something he very much wants to see happen (see “Data Accessibility” below).

Makowski also says it’s useful to check whether the conclusion of the meta-analysis rests on any key assumptions made in the statistical model. But, again, that’s hard to do when few authors publish their code. It’s also imperative to see how sensitive the meta-analysis is to any particular set of studies. “Sometimes the conclusions of the meta-analysis turn on only a few studies amongst all the studies in the dataset,” Makowski says. When this happens, it essentially negates the point of applying the technique in the first place.
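
One common way to run the sensitivity check Makowski describes is a leave-one-out analysis: re-pool the studies with each one dropped in turn and see how far the summary moves. Here is a sketch on hypothetical values, with one deliberately influential study:

```python
# Leave-one-out sensitivity check on invented data; the last study is an
# outlier, and dropping it shifts the pooled estimate noticeably.
import numpy as np

effects = np.array([0.8, 1.2, 0.3, 1.5, 0.9, 3.0])
se = np.array([0.5, 0.6, 0.4, 0.7, 0.5, 0.6])

def pool(eff, s):
    """Inverse-variance-weighted mean of the effects."""
    w = 1.0 / s**2
    return np.sum(w * eff) / np.sum(w)

full = pool(effects, se)
for i in range(len(effects)):
    keep = np.arange(len(effects)) != i
    print(f"Without study {i + 1}: {pool(effects[keep], se[keep]):.2f} "
          f"(all studies: {full:.2f})")
```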


Photo: David Makowski

 

The Need for Data Standards

After hearing incoming ASA president and ASA and SSSA Fellow, Paul Fixen, speak two years ago about the value of meta-analysis for establishing what is known and where to focus next, Rachel Cook was intrigued. A new assistant professor at Southern Illinois University, she thought the approach could help her spot holes in the literature and hone her questions as she launched her research program. So after acquiring a grant and assembling a small team, the SSSA member jumped into meta-analysis—and landed on a steep learning curve. “Once I got into it,” she says, “I realized, ‘whoa, this is hard.’ ”


Photo: Rachel Cook

 

What made it hard wasn’t so much the statistics, Cook adds (she got excellent help from a colleague), but assembling the database on which to run them. Many of the papers her team acquired on its topic—enhanced efficiency fertilizers in the U.S. Midwest—failed to report standard errors or any other measure of variability, forcing the group to contact the authors about them. Another key variable, latitude and longitude, was likewise left out of many papers. Still other authors calculated means or reported methods and results in inconsistent ways.

It all led to considerable frustration. “Every week we were scratching our heads: How are we going to put this study in the spreadsheet? How are we going to put this study in?” Cook says. “So that was a big part of the project: just figuring out how to get all the data together.”

It’s a big part of any meta-analysis, Madden says. “There are a bunch of textbooks on meta-analysis now, and many of them spend over half the book on subjects other than actually doing the analysis itself.” The analysis’s objective, the publications and other data sources to be mined, the criteria for including or not including particular studies, the information from each study that will be added to the database—all these decisions must be carefully made.

However, it’s also true that improved data stewardship and reporting would ease the process considerably. Many now hope that a push toward meta-analysis and systematic review will force some change in this area. “As we begin to discuss what’s needed to do effective systematic reviews downstream, we’ll have to go back upstream and say, ‘Here are the minimum datasets and information about methodologies that must appear in our journals,’ ” says Purdue University’s Jeff Volenec, an ASA and CSSA Fellow.

“I think meta-analysis can drive some of the standardization,” agrees Cook. “Because unless you’re doing meta-analysis, you don’t care as much what the standard errors are, for example.” Her struggles with the literature haven’t put her off the approach, though, and she says she’s eager to learn more. In the meantime, her first attempt uncovered what she hoped for: an interesting knowledge gap. “We couldn’t do a meta-analysis on nitrate leaching” and enhanced efficiency fertilizers, she says. “There weren’t enough studies.”

 

Translating Science into Practice

Of course, a global average, like a 5.7% yield decline under no-till, is just that: an average. Some farmers will achieve the same or higher yields with no-till, while others will see losses greater than 6%. The question then becomes: What’s the probability of experiencing a yield decline on any particular farm? Fortunately, meta-analysis can assist here as well. It can provide not only the global expected value (e.g., a 5.7% yield reduction) but also the distribution of variability around that value, along with moderator variables, such as soil moisture or farm practices, that explain the variation in response.


Photo: Nicolas Tremblay

 

What this means, in effect, is that “you can [calculate] the probability that an individual grower will achieve a certain change in yield: a 5% decrease, a 10% decrease, or a 5% increase,” Madden explains. “You can give that value to them. We have done this in a number of our investigations.”
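
A sketch of that calculation: in a random-effects meta-analysis, an individual response can be modeled as normally distributed around the overall mean, with the estimated between-study standard deviation (often called tau) as the spread. The mean below is the article’s minus 5.7%; the tau value is assumed purely for illustration, not reported anywhere here.

```python
# Probability that an individual farm sees a given yield change, assuming
# true responses are Normal(mu, tau^2). tau = 6 is an assumed value.
from scipy.stats import norm

mu, tau = -5.7, 6.0   # overall mean yield change (%); illustrative tau

p_worse_than_10 = norm.cdf(-10, loc=mu, scale=tau)  # decline beyond 10%
p_any_increase = norm.sf(0, loc=mu, scale=tau)      # any yield gain

print(f"P(yield drops more than 10%): {p_worse_than_10:.0%}")
print(f"P(yield increases): {p_any_increase:.0%}")
```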

Someone who is also using meta-analysis to inform farmer decision-making is Nicolas Tremblay of Agriculture and Agri-Food Canada. Having conducted fertilization trials since “forever,” the ASA, CSSA, and SSSA member jokes, he long suspected weather was behind the year-to-year differences he and his colleagues observed in crop responses to fertilizers. “But because we were looking at so few years and in such a defined location, we could not really understand what was going on,” Tremblay says. “So our conclusions were always partial.”

To try to understand things better, Tremblay eventually joined 11 scientists from Canada, the U.S., and Mexico who were likewise grappling with sizable inconsistencies between trials in nitrogen availability and crop yield. They set up a joint experiment looking at corn response to nitrogen in several North American regions, agreeing upfront to use the same experimental protocols, fertilizer application rates, and so on.

They figured the standardization would reduce the overall variation, allowing the important factors to emerge. But when they pooled their data after four years and charged a colleague with making sense of them, the differences among the experimental locations still defied explanation. Finally, Tremblay turned to meta-analysis, using it to methodically sort through the variability and its causes. And when he did, he says, “we soon figured out everything. All the explanations for the differences were popping out like magic.”

The main conclusion of their paper, which appeared in Agronomy Journal in 2012, is that corn’s changeable response to nitrogen is largely dictated by the interaction between rainfall and soil. Not a surprising finding, necessarily, “but it was never really formalized and quantified,” Tremblay says. Now that it has been quantified, he adds, his group has taken a vital next step. They’re developing a web-based application where farmers will enter certain characteristics of their farms. The tool will then calculate, based on rainfall, soil type, and other parameters, a suitable nitrogen rate.

“And this is all based on the results of the meta-analysis because it opened our eyes to the key parameters” for predicting the rate, he says. “So it really started the whole process of transfer to the user.”
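
Sorting out moderators such as rainfall and soil is typically done with meta-regression: study-level responses are regressed on candidate explanatory variables, with each study weighted by its precision. The sketch below is illustrative only, using invented data and a generic weighted-least-squares fit, not Tremblay’s actual model.

```python
# Toy meta-regression: does a rainfall-by-soil interaction explain
# variation in study-level nitrogen responses? All data are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40
rainfall = rng.uniform(300, 900, n)   # growing-season rainfall (mm)
heavy_soil = rng.integers(0, 2, n)    # 1 = fine-textured soil
se = rng.uniform(0.3, 0.8, n)         # per-study standard errors
# Invented "true" relationship, including the interaction
response = (2 + 0.004 * rainfall - 1.5 * heavy_soil
            + 0.003 * rainfall * heavy_soil + rng.normal(0, se))

X = sm.add_constant(np.column_stack(
    [rainfall, heavy_soil, rainfall * heavy_soil]))
fit = sm.WLS(response, X, weights=1.0 / se**2).fit()
print(fit.params)  # intercept, rainfall, soil, and interaction terms
```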

But Tremblay also wants to see information flow the other way: from the user/farmer to the researcher. Scientists have traditionally eschewed on-farm trials and farmer-generated data in favor of highly controlled experiments, he explains—the idea being that the former contain too much local variation, making treatment effects difficult to detect. The unintended consequence of controlled experimentation, however, is that findings become so divorced from the changeable conditions of the real world that farmers can’t apply them in their own settings.

What’s needed, then, to make scientific results more relevant to the practitioner is to embrace variability in a systematic way, so that it informs rather than confounds. And Tremblay knows just the tool. “Meta-analysis,” he says, “can be very instrumental for bridging this gap.”

 
