Tuesday, March 8, 2011

Testimony of Dr. John Christy at House Subcommittee on Energy and Power Hearing

Written Statement of John R. Christy, The University of Alabama in Huntsville

Subcommittee on Energy and Power Committee on Energy and Commerce
8 March 2011

I am John R. Christy, Distinguished Professor of Atmospheric Science, Alabama’s State Climatologist and Director of the Earth System Science Center at The University of Alabama in Huntsville. I have served as a Lead Author and Contributing Author of IPCC assessments. It is a privilege for me to offer my view of climate change based on my experience as a climate scientist. My research area might be best described as building climate datasets from scratch to advance our understanding of what the climate is doing and why. This often involves weeks and months of tedious examination of paper records and digitization of data for use in computational analysis. I have used traditional surface observations as well as measurements from balloons and satellites to document the climate story. Many of my datasets are used to test hypotheses of climate variability and change. In the following I will address six issues that are part of the discussion of climate change today, some of which will be assisted by the datasets I have built and published.

EXTREME EVENTS
Recently it has become popular to try to attribute certain extreme events to human causation. The Earth, however, is very large and the weather very dynamic, especially at local scales, so extreme events of one type or another will occur somewhere on the planet in every year. Since there are innumerable ways to define an extreme event (e.g. record high/low temperatures, number of days above or below a certain threshold, precipitation over 1, 2, 10 … days, snowfall amounts, etc.), there will by necessity be numerous “extreme events” in every year. The following assesses some of the recent “extreme events” and the explanations that have been offered as to their cause.

Australia

The tragic flooding in the second half of 2010 in NE Australia was examined in two ways: (1) in terms of financial costs and (2) in terms of climate history. First, when one normalizes the flood costs year by year, meaning that one imagines the infrastructure now in place had been unchanged throughout the entire study period, the analysis shows there are no long-term trends in damages. An update of Crompton and McAneney (2008) on normalized disaster losses in Australia, which includes an estimate for 2010, shows absolutely no trend since 1966. Secondly, regarding the recent Australian flooding as a physical event in the context of climate history (with the estimated 2010 maximum river height added to the chart below), one sees a relative lull in flooding events after 1900. Only four events reached the moderate category in the past 110 years, while 14 such events were recorded in the 60 years before 1900. Indeed, the recent flood magnitude has been exceeded six times in the last 170 years, twice by almost double the level of flooding observed in 2010. Such history charts indicate that severe flooding is an extreme event that has occurred from natural, unforced variability. There is also a suggestion that emergency releases of water from the Wivenhoe Dam upstream of Brisbane caused “more than 80 per cent of the flood in the Brisbane River. … Without this unprecedented and massive release ... the flooding in Brisbane would have been minimal.” (The Australian, 18 Jan 2011.) (See http://rogerpielkejr.blogspot.com/2011/02/flood-disasters-and-human-caused.html where Roger Pielke Jr. discusses extreme events and supplies some of the information used here.)

England Floods

Svensson et al. 2006 discuss the possibility of detecting trends in river floods, noting that much of what they find relates to “changes in atmospheric circulation patterns” such as the North Atlantic Oscillation (i.e. natural, unforced variability), which affects England. For the Thames River, there has been no trend in floods since records began in 1880 (their Fig. 5), though multi-decadal variability indicates a lull in flooding events from 1965 to 1990. The authors caution that analyses of flooding events that start during this lull will create a false positive trend with respect to the full climate record. Flooding events on the Thames since 1990 are similar to, but generally slightly less than, those experienced prior to 1940. One wonders, if there are no long-term increases in flood events in England, how a single event (Fall 2000) could be pinned on human causation, as in Pall et al. 2011, while previous, similar events obviously could not. Indeed, on a remarkable point of fact, Pall et al. 2011 did not even examine the actual history of flood data in England to understand where the 2000 event might have fit. As best I can tell, this study compared models with models. Indeed, studies that use climate models to make claims about precipitation events might benefit from the study by Stephens et al. 2010, whose title sums up the issue: “The dreary state of precipitation in global models.” In mainland Europe as well, there is a similar lack of increased flooding (Barredo 2009). Looking at a large, global sample, Svensson et al. found the following.

A recent study of trends in long time series of annual maximum river flows at 195 gauging stations worldwide suggests that the majority of these flow records (70%) do not exhibit any statistically significant trends. Trends in the remaining records are almost evenly split between having a positive and a negative direction.
Russia and Pakistan

An unusual weather situation developed in the summer of 2010 in which Russia experienced a very long stretch of high temperatures while a basin in Pakistan was inundated with flooding rains. NOAA examined the weather pattern and issued this statement indicating this extreme event was a part of the natural cycle of variability (i.e. natural, unforced variability) and unrelated to greenhouse gas forcing. "...greenhouse gas forcing fails to explain the 2010 heat wave over western Russia. The natural process of atmospheric blocking, and the climate impacts induced by such blocking, are the principal cause for this heat wave. It is not known whether, or to what extent, greenhouse gas emissions may affect the frequency or intensity of blocking during summer. It is important to note that observations reveal no trend in a daily frequency of July blocking over the period since 1948, nor is there an appreciable trend in the absolute values of upper tropospheric summertime heights over western Russia for the period since 1900. The indications are that the current blocking event is intrinsic to the natural variability of summer climate in this region, a region which has a climatological vulnerability to blocking and associated heat waves (e.g., 1960, 1972, 1988)."

Snowfall in the United States

Snowfall in the eastern US reached record levels in 2009-10 and 2010-11 in some locations. NOAA’s Climate Scene Investigators committee issued the following statement regarding this, indicating again that natural, unforced variability explains the events. Specifically, they wanted to know if human-induced global warming could have caused the snowstorms due to the fact that a warmer atmosphere holds more water vapor. The CSI Team’s analysis indicates that’s not likely. They found no evidence — no human “fingerprints” — to implicate our involvement in the snowstorms. If global warming was the culprit, the team would have expected to find a gradual increase in heavy snowstorms in the mid-Atlantic region as temperatures rose during the past century. But historical analysis revealed no such increase in snowfall.
In some of my own studies I have looked closely at the snowfall records of the Sierra Nevada mountains, which include data not part of the national archive. Long-term trends in snowfall (and thus water resources) in this part of California are essentially zero, indicating no change in this valuable resource to the state (Christy and Hnilo, 2010.)

Looking at a long record of weather patterns

A project which seeks to generate consistent and systematic weather maps back to 1871 (the 20th Century Reanalysis Project, http://www.esrl.noaa.gov/psd/data/20thC_Rean/) has taken a look at the three major indices which are often related to extreme events. Dr. Gil Compo of the University of Colorado, leader of the study, noted to the Wall Street Journal (10 Feb 2011) that “… we were surprised that none of the three major indices of climate variability that we used show a trend of increased circulation going back to 1871.” (The three indices were the Pacific Walker Circulation, the North Atlantic Oscillation and the Pacific-North America Oscillation, Compo et al. 2011.) In other words, there appears to be no supporting evidence over this period that human factors have influenced the major circulation patterns which drive the larger-scale extreme events. Again we point to natural, unforced variability as the dominant feature of events that have transpired in the past 140 years. What this means today should be considered a warning – the climate system has always had within itself the capability of causing devastating events, and these will certainly continue with or without human influence. Thus, societies should plan for their infrastructure projects to be able to withstand the worst that we already know has occurred, and should recognize that, in such a dynamical system, even worse events should be expected. In other words, the set of measured extreme events from the small climate history we have, since about 1880, does not represent the full range of extreme events that the climate system can actually generate. The most recent 130 years is simply our current era’s small sample of the long history of climate. There will certainly be events in this coming century that exceed the magnitude of extremes measured in the past 130 years in many locations. To put it another way, a large percentage of the worst extremes over the period 1880 to 2100 will occur after 2011 simply by statistical probability, without any appeal to human forcing at all. Going further, one would assume that about
10 percent of the record extremes that occur over a thousand-year period ending in 2100 should occur in the 21st century. Are we prepared to deal with events even worse than we’ve seen so far? Spending resources on creating resiliency to these sure-to-come extremes, particularly drought/flood extremes, seems rather prudent to me.
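To illustrate the statistical point just made, the following is a minimal sketch (not part of the original testimony): if a stationary, unforced climate is represented by independent annual values at a large number of hypothetical stations, the year in which a 1000-year record occurs is equally likely to fall anywhere in the period, so roughly 10 percent of such records are expected in the final century purely by chance. The station count and random-number setup below are illustrative assumptions only.

import numpy as np

# Sketch: for a stationary climate with no trend, about 10% of the all-time
# records over a 1000-year window should fall in the final 100 years by chance.
rng = np.random.default_rng(0)
n_years, n_stations = 1000, 10_000                      # hypothetical stations and years
annual_values = rng.normal(size=(n_stations, n_years))  # stationary, unforced "climate"

record_year_index = annual_values.argmax(axis=1)        # year of each station's all-time extreme
frac_in_last_century = np.mean(record_year_index >= n_years - 100)
print(f"Fraction of 1000-year records set in the final 100 years: {frac_in_last_century:.3f}")
# Expected value is 0.10, matching the ~10 percent figure cited above.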

A sample study of why extreme events are poor metrics for global changes


In the examples above, we don’t see alarming increases in extreme events, but we must certainly be ready for more to come as part of nature’s variability. I want to illustrate how one might use extreme events to conclude (improperly, I believe) that the weather in the USA is becoming less extreme and/or colder. For each of the 50 states, records are kept of the extreme high and low temperatures back to the late 19th century. In examining the years in which these extremes occurred (and depending on how one deals with “repeats” of events), we find that about 80 percent of the states recorded their hottest temperature prior to 1955, and about 60 percent of the states experienced their record cold temperatures prior to that date too. One could conclude, if one were so inclined, that the climate of the US is becoming less extreme because the occurrence of state extremes of hot and cold has diminished dramatically since 1955. Since 100 of anything is a fairly large sample (2 values for each of 50 states), this on the surface seems a reasonable conclusion. Then, one might look at the more recent record of extremes and learn that no state has achieved a record high temperature in the last 15 years (though one state has tied theirs.) However, five states have observed their all-time record low temperature in these past 15 years (plus one tie.) This includes last month’s record low of 31°F below zero in Oklahoma, breaking the previous record by a rather remarkable 4°F. If one were so inclined, one could conclude that the weather that people worry about (extreme cold) is getting worse in the US. (Note: this lowering of absolute cold temperature records is nowhere forecast in climate model projections, nor is a significant drop in the occurrence of extreme high temperature records.) I am not using these statistics to prove the weather in the US is becoming less extreme and/or colder. My point is that extreme events are poor metrics to use for detecting climate change. Indeed, because of their rarity (by definition), using extreme events to bolster a claim about any type of climate change (warming or cooling) runs the risk of setting up the classic “non-falsifiable hypothesis.” For example, we were told by the IPCC that “milder winter temperatures will decrease heavy snowstorms” (TAR WG2, 15.2.4.1.2.4). After the winters of 2009-10 and 2010-11, we are told the opposite by advocates of the IPCC position: “Climate Change Makes Major Snowstorms More Likely” (http://www.ucsusa.org/news/press_release/climate-change-makes-snowstorms-more-likely-0506.html).
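As a purely illustrative sketch of the bookkeeping described above (the state names and record years below are placeholders, not the actual U.S. state records), one could tabulate the year of each state's record high and low and compute the fraction set before a chosen cutoff:

# Illustrative sketch only; the values below are hypothetical, not the real records.
record_years = {
    # state: (year of record high, year of record low); hypothetical values
    "State A": (1936, 1899),
    "State B": (1913, 1971),
    "State C": (1954, 2011),
    "State D": (1994, 1933),
    "State E": (1930, 1996),
}

cutoff = 1955
highs = [hi for hi, lo in record_years.values()]
lows = [lo for hi, lo in record_years.values()]

pct_high_before = 100 * sum(y < cutoff for y in highs) / len(highs)
pct_low_before = 100 * sum(y < cutoff for y in lows) / len(lows)
print(f"Record highs set before {cutoff}: {pct_high_before:.0f}%")
print(f"Record lows set before {cutoff}:  {pct_low_before:.0f}%")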

The non-falsifiable hypothesis works this way: “whatever happens is consistent with my hypothesis.” In other words, there is no event that would “falsify” the hypothesis. As such, these assertions cannot be considered science or in any way informative, since the hypothesis’ fundamental prediction is “anything may happen.” In the example above, if winters become milder or if they become snowier, the hypothesis stands. This is not science. As noted above, there are innumerable types of events that can be defined as extreme events – so for the enterprising individual (unencumbered by the scientific method), weather statistics can supply an almost unlimited set of targets in which to discover a “useful” extreme event. Thus, when such an individual observes an unusual event, it may be tempting to define it as a once-for-all extreme metric to “prove” a point about climate change. This works both ways with extremes. If one had been prescient enough to predict in 1996 that over the next 15 years five states would break record cold temperatures while zero states would break record high temperatures, would that evidence prove CO2 emissions have no impact on climate? No. Extreme events happen, and their causes are intricately tied to semi-unstable dynamical situations that can occur out of an environment of natural, unforced variability. Science checks hypotheses (assertions) by testing specific, falsifiable predictions implied by those hypotheses. The predictions are to be made in a manner that, as much as possible, is blind to the data against which they are evaluated. It is the testable predictions derived from climate model output that run into trouble. Before going on, the main point here is that extreme events do not lend themselves to being rigorous metrics for convicting human emissions of causing them.

THE UNDERLYING TEMPERATURE TREND

As noted earlier, my main research projects deal with building climate datasets from scratch to document what the climate has done and to test assertions and hypotheses about climate change.
In 1994, Nature magazine published a study of mine in which we estimated the underlying rate at which the world was warming by removing the impacts of volcanoes and El Niños (Christy and McNider 1994.) This was important to do because in that particular 15-year period (1979-1993) there were some significant volcanic cooling episodes and strong El Niños that obscured what would have been the underlying trend. The result of that study indicated the underlying trend for 1979-1993 was +0.09 °C/decade, which at the time was one third of the rate of warming that should have been occurring according to estimates by climate model simulations.

Above: update of Christy and McNider 1994. Top curve (TLT): monthly global atmospheric temperature anomalies 1979-2010. 2nd (SST): the influence of tropical sea surface temperature variations on the global temperature. 3rd (TLT-SST): global temperature anomalies without the SST influence. 4th (VOL): the effect of volcanic cooling on global temperatures (El Chichon 1982 and Mt. Pinatubo 1991). Bottom (TLT-SST-VOL): the underlying trend once the SST and VOL effects are removed. The average underlying trend of TLT-SST-VOL generated from several parametric variations of the criteria used in these experiments was +0.09 °C/decade. Lines are separated by 1°C.

I have repeated that study for this testimony with data which now cover 32 years (1979-2010), as shown above. In an interesting result, the new underlying trend remains a modest +0.09 °C/decade for the global tropospheric temperature, which is still only one third of the average rate the climate models project for the current era (+0.26 °C/decade.)
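The following is a minimal sketch of the general signal-removal idea summarized in the caption above, using synthetic inputs rather than the actual TLT, SST and volcanic series; the joint least-squares fit shown here is an illustrative stand-in, not the published procedure of Christy and McNider (1994).

import numpy as np

rng = np.random.default_rng(1)
months = np.arange(384)                                   # 32 years of monthly data, 1979-2010

# Synthetic stand-ins for the real series (illustration only).
sst = np.sin(2 * np.pi * months / 45) + 0.3 * rng.normal(size=months.size)   # ENSO-like index
vol = np.zeros(months.size)
vol[40:70], vol[150:190] = -1.0, -1.5                     # two volcanic cooling episodes
true_trend = 0.09 / 120                                   # +0.09 C/decade, expressed per month
tlt = true_trend * months + 0.25 * sst + 0.3 * vol + 0.1 * rng.normal(size=months.size)

# Fit the trend jointly with the SST and volcanic terms, then report the trend
# that remains once those influences are accounted for (analogous to TLT-SST-VOL).
X = np.column_stack([np.ones(months.size), months, sst, vol])
coef, *_ = np.linalg.lstsq(X, tlt, rcond=None)
print(f"Underlying trend after removing SST and VOL: {coef[1] * 120:+.2f} C/decade")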

There is no evidence of acceleration in this trend. This evidence strongly suggests that climate model simulations on average are simply too sensitive to increasing greenhouse gases and thus overstate the warming of the climate system (see below under climate sensitivity.) This is an example of a model simulation (i.e. hypothesis) which can provide a “prediction” to test: that “prediction” being the rate at which the Earth’s atmosphere should be warming in the current era. In this case, the model-average rate of warming fails the test (see next.)

PATTERNS OF WARMING

Through the years there have been a number of publications which have specifically targeted two aspects of temperature change in which observations and models can be compared. The results of both comparisons suggest there are significant problems with the way climate models represent the processes which govern the atmospheric temperature. In the first aspect of temperature change, we have shown that the pattern of change at the surface does indeed show warming over land. However, in very detailed analyses of localized areas in the US and Africa we found that this warming is dominated by increases in nighttime temperatures, with little change in daytime temperatures. This pattern of warming is a classic signature of surface development (land cover and land use change) by human activities. The facts that (a) the daytime temperatures do not show significant warming in these studies and (b) the daytime temperature is much more representative of the deep atmospheric temperature where the warming due to the enhanced greenhouse effect should be evident, lead us to conclude that much of the surface temperature warming is related to surface development around the thermometer sites. This type of surface development interacts with complexities of the nighttime boundary layer which leads to warming not related to greenhouse warming (Christy et al. 2006, 2009, see also Walters et al. 2007, Pielke, Sr. 2008.)
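As a schematic of the daytime/nighttime comparison described above, one would simply fit separate trends to the annual-mean daily maximum and minimum temperatures at a site. The station data below are synthetic, with an imposed nighttime-dominated warming; the numbers are not from the actual studies.

import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1910, 2010)                       # hypothetical 100-year station record
tmax = 30 + 0.002 * (years - 1910) + rng.normal(scale=0.5, size=years.size)  # small daytime trend
tmin = 15 + 0.020 * (years - 1910) + rng.normal(scale=0.5, size=years.size)  # larger nighttime trend

tmax_trend = np.polyfit(years, tmax, 1)[0] * 10     # deg C per decade
tmin_trend = np.polyfit(years, tmin, 1)[0] * 10
print(f"Daytime (Tmax) trend:   {tmax_trend:+.2f} C/decade")
print(f"Nighttime (Tmin) trend: {tmin_trend:+.2f} C/decade")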

The second set of studies investigates one of the clearest signatures or fingerprints of greenhouse gas warming as depicted in climate models. This signature consists of a region of the tropical upper atmosphere which in models is shown to warm at least twice as fast as the surface rate of warming. We, and others, have tested this specific signature, i.e. this hypothesis, against several observational datasets and conclude that this pervasive result from climate models has not been detected in the real atmosphere. In addition, the global upper atmosphere is also depicted in models to warm at a rate faster than the surface. Again, we did not find this to be true in observations (Klotzbach et al. 2010.)
The following are quotes from three of the recent papers which come to essentially the same conclusion as earlier work published in Christy et al. 2007 and Douglass et al. 2007.

“Table 2 displays the new per decade linear trend calculations [of difference between global surface and troposphere using model amplification factor] … over land and ocean. All trends are significant[ly different] at the 95% level.” (Klotzbach et al. 2010)

“[Our] result is inconsistent with model projections which show that significant amplification of the modeled surface trends occurs in the modeled tropospheric trends.” (Christy et al. 2010)

“Over the interval 1979-2009, model-projected temperature trends are two to four times larger than observed trends in both the lower and mid-troposphere and the differences are statistically significant at the 99% level.” (McKitrick et al. 2010)

Again we note that these (and other) studies have taken “predictions” from climate model simulations (model outputs are simply hypotheses), have tested these predictions against observations, and found significant differences.
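A hedged sketch of the kind of amplification test these studies describe is shown below; all trend values are placeholders chosen only to illustrate the calculation, not the published results.

import numpy as np

# Sketch of the amplification comparison: ratio of tropical tropospheric to
# surface warming in observations versus a set of model runs (placeholder numbers).
obs_surface_trend, obs_troposphere_trend = 0.12, 0.10            # deg C/decade, placeholders
model_surface_trends = np.array([0.18, 0.21, 0.24, 0.20, 0.27])  # placeholders
model_troposphere_trends = np.array([0.30, 0.36, 0.40, 0.33, 0.46])

obs_amplification = obs_troposphere_trend / obs_surface_trend
model_amplification = model_troposphere_trends / model_surface_trends

print(f"Observed amplification factor: {obs_amplification:.2f}")
print(f"Model amplification factors:   mean {model_amplification.mean():.2f}, "
      f"range {model_amplification.min():.2f}-{model_amplification.max():.2f}")
# The testimony's argument is that the observed ratio (and the observed trends
# themselves) fall below the model-predicted range.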

CLIMATE SENSITIVITY AND FEEDBACKS

One of the most misunderstood and contentious issues in climate science surrounds the notion of climate sensitivity. Climate sensitivity is a basic variable that seeks to quantify the temperature response of the Earth to a particular forcing, for example answering the question, how much warming can be expected if the warming effect of doubling CO2 acts on the planet? The temperature used in this formulation is nearly always the surface temperature, which is a rather poor metric to serve as a proxy for the total heat content of the climate system, but that is the convention in use today. In any case, it is fairly well agreed that the surface temperature will rise about 1°C as a modest response to a doubling of atmospheric CO2 if the rest of the component processes of the climate system remain independent of this response. This is where the issue becomes uncertain: the complexity and interrelatedness of the various components of the climate system (e.g. clouds) mean they will not sit by independently while CO2 warms the planet a little, but will get into the act too. The fundamental issue in this debate is whether the net response of these interrelated actors will add to the basic CO2 warming (i.e. positive feedbacks) or subtract from the basic CO2 warming (i.e. negative feedbacks.)

Since climate models project a temperature rise on the order of 3 °C for a doubling of CO2, it is clear that in the models, positive feedbacks come into play to increase the temperature over and above the warming effect of CO2 alone, which is only about 1°C. However, given such observational results as noted earlier (i.e. warming rates of models being about three times that of observations) one can hypothesize that there must be negative feedbacks in the real world that counteract the positive feedbacks which dominate model processes.
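For readers unfamiliar with the arithmetic, the standard textbook feedback bookkeeping (a generic illustration, not a calculation from the testimony) shows how a roughly 1°C no-feedback response becomes about 3°C with a strong net positive feedback, or stays at or below 1°C with a net negative feedback:

# Generic feedback bookkeeping: dT = dT0 / (1 - f), where f is the net feedback factor.
forcing_2xco2 = 3.7          # W/m^2, commonly quoted radiative forcing for doubled CO2
planck_response = 3.2        # W/m^2 per K, no-feedback (Planck) restoring term
dT0 = forcing_2xco2 / planck_response   # about 1.2 C, the "modest response" cited above

for f in (-0.5, 0.0, 0.3, 0.6):         # net feedback factor (negative, none, positive)
    dT = dT0 / (1.0 - f)
    print(f"feedback factor {f:+.1f}  ->  warming for doubled CO2: {dT:.1f} C")
# f near +0.6 reproduces the ~3 C model-type sensitivity; negative f keeps the
# response at or below the ~1 C no-feedback value.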
My colleague at UAHuntsville, Dr. Roy Spencer, has searched tediously for a way to calculate climate sensitivity from satellite observations which at the same time would reveal the net response of the feedbacks that is so uncertain today. NASA and NOAA have placed in orbit some terrific assets to answer questions like this. Unfortunately, the best observations to address this issue are only about 10 years in length, which prevents us from directly calculating the sensitivity to 100 years of increasing CO2. However, the climate sensitivity over shorter periods to natural, unforced variability can be assessed, and this is what Dr. Spencer has done. To put it simply, Spencer tracks large global temperature changes over periods of several weeks. It turns out the global temperature rises and falls by many tenths of a degree over such periods. Spencer is able to measure the amount of heat that accumulates in (departs from) the climate system as the temperature rises (falls). When all of the math is done, he finds the real climate system is dominated by negative feedbacks (probably related to cloud variations) that work against changes in temperature once that temperature change has occurred. When this same analysis is applied to climate model output (i.e. apples-to-apples comparisons), the result is very different, with all models showing positive feedbacks, i.e. helping a warming impulse to warm the atmosphere even more (see figure below.) Thus, the observations and models are again inconsistent. On this time scale in which feedbacks can be assessed, Spencer sees a significant difference between the way the real Earth processes heat and the way models do. This difference is very likely found in the way models treat cloudiness, precipitation and/or heat deposition into the ocean. This appears to offer a strong clue as to why climate models tend to overstate the warming rate of the global atmosphere.
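The following is a schematic of the general regression idea described above, not the published Spencer and Braswell analysis: relate short-term anomalies in net top-of-atmosphere radiative flux to short-term global temperature anomalies, and read the slope as a feedback parameter (W/m² per K), with larger positive values indicating a more strongly stabilizing, less sensitive system. The feedback value and noise levels below are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(3)
n_months = 120                                       # roughly a decade of monthly anomalies

# Synthetic illustration: temperature anomalies plus a net outgoing-flux response
# governed by an assumed feedback parameter (positive flux anomaly = energy loss).
assumed_feedback = 4.0                               # W/m^2 per K, illustrative value only
temp_anom = 0.3 * np.sin(2 * np.pi * np.arange(n_months) / 18) + 0.1 * rng.normal(size=n_months)
net_flux_anom = assumed_feedback * temp_anom + 1.0 * rng.normal(size=n_months)  # plus radiative noise

# Estimate the feedback parameter from a simple least-squares fit of flux on temperature.
estimated_feedback = np.polyfit(temp_anom, net_flux_anom, 1)[0]
print(f"Estimated feedback parameter: {estimated_feedback:.1f} W/m^2 per K")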
Below: Climate feedback parameter from observations (blue, top line) and IPCC AR4 model simulations (other lines, derived from results in Spencer and Braswell 2010.) Model parameters cluster in a grouping that indicates considerably more sensitivity to forcing than indicated by observations.

The bottom line of this on-going research is that over time periods for which we are able to determine climate sensitivity, the evidence suggests that all models are characterized by feedback processes that are more positive than feedback processes measured in nature.

CONSENSUS SCIENCE

The term “consensus science” will often be appealed to in arguments about climate change. This is a form of “argument from authority.” Consensus, however, is a political notion, not a scientific notion. As I testified to the Inter-Academy Council last June, the IPCC and other similar Assessments do not represent for me a consensus of much more than the consensus of those who already agree with a particular consensus. The content of these reports is actually under the control of a relatively small number of individuals - I often refer to them as the “climate establishment” – who through the years, in my opinion, came to act as gatekeepers of scientific opinion and information, rather than brokers. The voices of those of us who object to various statements and emphases
in these assessments are by and large dismissed rather than acknowledged.

I’ve often stated that climate science is a “murky science.” We do not have laboratory methods of testing our hypotheses as many other sciences do. As a result, opinion, arguments from authority, dramatic press releases, and notions of consensus tend to pass for science in our field when they should not.
I noticed the House has passed an amendment to de-fund the Intergovernmental Panel on Climate Change (IPCC.) I have a proposal here. If the IPCC activity is ultimately funded by US taxpayers, then I propose that ten percent of the funds be allocated to a group of well-credentialed scientists with help from individuals
experienced in creating verifiable reports, to produce an assessment that expresses alternative hypotheses that have been (in their view) marginalized, misrepresented or minimized in previous IPCC reports. We know from the climategate emails and many other sources of information that the IPCC has had problems with those who take different positions on climate change. Topics to be addressed in this assessment, for example, would include (a) evidence for a low climate sensitivity to increasing greenhouse gases, (b) the role and importance of natural, unforced variability, (c) a rigorous evaluation of climate model output, (d) a thorough discussion of uncertainty, (e) a focus on metrics that most directly relate to the rate of accumulation of heat in the climate system (which, for example, the problematic surface temperature record does not represent), (f) analysis of the many consequences, including benefits, that result from CO2 increases, and (g) the importance that accessible energy has for human health and welfare. What this proposal seeks to accomplish is to provide to Congress and other policymakers a parallel, scientifically based assessment of the state of climate science which addresses issues that have heretofore been un- or under-represented in previous taxpayer-funded, government-directed climate reports.

IMPACT OF EMISSION CONTROL MEASURES
The evidence above suggests that climate models overestimate the response of temperature to greenhouse gas increases. Even so, using these climate model simulations we calculate that the impact on global temperature of the legislative actions being considered is essentially imperceptible. These actions will not result in a measurable climate effect that can be attributed or predicted with any level of confidence, especially at the regional level.

When I testified before the Energy and Commerce Oversight and Investigations subcommittee in 2006, I provided information on an imaginary world in which 1,000 1.4-GW nuclear power plants would be built and operated by 2020. This, of course, will not happen. Even so, this Herculean effort would result in at most a 10 percent reduction in global CO2 emissions, and thus exert a tiny impact on whatever the climate is going to do. Indeed, with these most recent estimates of climate sensitivity, the impact of these emission control measures will be even tinier since the climate system doesn’t seem to be very sensitive to CO2 emissions. (Note: we have not considered the many positive benefits of higher concentrations of CO2 in the atmosphere, especially for the biological world, nor the tremendous boost to human health, welfare, and security provided by affordable, carbon-based energy. As someone who has lived in a developing country, I can assure the subcommittee that without energy, life is brutal and short.)

Coal use, which generates a major portion of CO2 emissions, will continue to rise as indicated by the Energy Information Administration’s chart below. Developing countries in Asia already burn more than twice the coal that North America does, and that gap will continue to expand. The fact that our legislative actions will be inconsequential in the grand scheme of things can be seen by noting that these actions attempt to bend the blue, North American curve, which is already fairly flat, down a little. So, downward adjustments to North American coal use will have virtually no effect on global CO2 emissions (or the climate), no matter how sensitive one thinks the climate system might be to the extra CO2 we are putting back into the atmosphere.

Thus, if the country deems it necessary to de-carbonize civilization’s main energy sources, sound and indeed compelling reasons beyond human-induced climate change need to be offered. Climate change alone is a weak leg on which to stand for such a massive undertaking. (I’ll not address the fact there is really no demonstrated technology except nuclear that can replace large portions of the carbon-based energy production.)

Thank you for this opportunity to offer my views on climate change.

References

Barredo, J.I., 2009: Normalized flood losses in Europe: 1970-2006. Nat. Hazards Earth
Syst. Sci., 9, 97-104.
Christy, J.R. and J.J. Hnilo, 2010: Changes in snowfall in the southern Sierra Nevada of
California since 1916. Energy & Env., 21, 223-234.
Christy, J.R., W.B. Norris and R.T. McNider, 2009: Surface temperature variations in
East Africa and possible causes. J. Clim. 22, DOI: 10.1175/2008JCLI2726.1.
Christy, J. R., W. B. Norris, R. W. Spencer, and J. J. Hnilo, 2007: Tropospheric temperature change since 1979 from tropical radiosonde and satellite measurements, J. Geophys. Res., 112, D06102, doi:10.1029/2005JD006881.
Christy, J.R., W.B. Norris, K. Redmond and K. Gallo, 2006: Methodology and results of
calculating central California surface temperature trends: Evidence of human-induced climate change? J. Climate, 19, 548-563.
Christy, J.R. and R.T. McNider, 1994: Satellite greenhouse signal? Nature, 367, 325.
Compo, G.P. et al., 2011: The Twentieth Century Reanalysis Project. Q. J. R. Meteorol. Soc., 137, 1-28.
Crompton, R. and J. McAneney, 2008: The cost of natural disasters in Australia: the case for disaster risk reduction. Australian J. Emerg. Manag., 23, 43-46.
Douglass, D.H., J.R. Christy, B.D. Pearson and S.F. Singer, 2007: A comparison of tropical temperature trends with model predictions. International J. Climatology, DOI: 10.1002/joc.1651.
Klotzbach, P. J., R. A. Pielke Sr., R. A. Pielke Jr., J. R. Christy, and R. T. McNider
(2009), An alternative explanation for differential temperature trends at the
surface and in the lower troposphere, J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.
Pall, P., T. Aina, D. A. Stone, P. A. Stott, T. Nozawa, A. G. J. Hilberts, D. Lohmann and M. R. Allen, 2011: Anthropogenic greenhouse gas contribution to flood risk in England and Wales in autumn 2000. Nature, 470, 382-385.
Spencer, R.W. and W.D. Braswell, 2010: On the diagnosis of radiative feedback in the presence of unknown radiative forcing. J. Geophys. Res., 115, D16109.
Stephens, G. et al. 2010: The dreary state of precipitation in global models. J. Geophys.
Res., 115, doi:10.1029/2010JD014532.
Svensson, C., J. Hannaford, Z. Kundzewicz and T. Marsh, 2006: Trends in river floods: why is there no clear signal in the observations? In: Frontiers in Flood Research (eds. Tchiguirinskaia, I., Thein, K., Hubert, P.), International Association of Hydrological Sciences, International Hydrological Programme, Publ. 305, 1-18.
Walters, J.T., R.T. McNider, X. Shi, W.B. Norris and J.R. Christy, 2007: Positive
surface temperature feedback in the stable nocturnal boundary layer. Geophys. Res. Lett. doi:10.1029/2007GL029505.

2 comments:

  1. I have often asked those who favor human-caused-by-CO2 global climate change to explain their science. Although opposed to their view, this statement does much to "clarify" the science. True to real science it is a little bland, even at times slightly boring, but well worth studying for the deeper meaning. My thanks to Dr. Christy.

  2. Clear, concise and unequivocal, my congratulations to Dr Christy.
