A Friendly General Discussion with ATTP

After last week's acrimonious exchange with ATTP over Lomborg, I think we are now having a more fruitful discussion of temperature adjustments and homogenisation. At least I feel that I am getting a better understanding of how difficult it is to build temperature anomalies from data that is of poor quality, not well distributed, and that seems to contain levels of complexity which homogenisation often erases.
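To make that concrete, here is a minimal Python sketch of the basic anomaly step: each station's readings are compared against that station's own baseline-period average before any combining is done. The function name, the baseline years and the data layout are assumptions for illustration only, not anyone's actual code.

```python
# Minimal sketch: per-station monthly anomalies relative to a baseline
# period.  All names and choices here are illustrative assumptions.
import numpy as np

def monthly_anomalies(temps, years, months, base=(1961, 1990)):
    """temps, years, months: equal-length 1-D sequences for one station.
    Returns temps minus that station's baseline mean for each calendar
    month; months with no baseline data are left as NaN, not guessed."""
    temps = np.asarray(temps, dtype=float)
    years = np.asarray(years)
    months = np.asarray(months)
    anoms = np.full(temps.shape, np.nan)
    for m in range(1, 13):
        in_month = months == m
        in_base = in_month & (years >= base[0]) & (years <= base[1])
        if in_base.any():
            anoms[in_month] = temps[in_month] - temps[in_base].mean()
    return anoms
```

Even this toy version shows where the difficulty starts: a station with gaps, or with no readings in the baseline period at all, gives NaNs or a biased baseline, and everything downstream depends on how those cases are handled.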

May 6, 2015 at 12:00 AM | Unregistered CommenterKevin Marshall

I don't think anyone assumed it was simple. The problem is the black-box nature of it. Every station adjustment should be documented, with the justification for any changes. Software shouldn't be making any automatic changes at all. Stations with very tricky records should be excluded altogether. Once each station has been assessed it should stay unchanged unless there is a good reason to review it. This constant jiggling of hundred-year-old records is ludicrous. Ultimately, anything in the distant past should be viewed as a guideline, not a precision result.
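Something like the sketch below (Python, purely illustrative: the class, field names and example entry are all invented) is what documenting every adjustment could look like. Each change is an explicit record with a reason and a named author, and anything not listed stays exactly as recorded.

```python
# Illustrative only: an explicit, human-readable log of station adjustments
# in place of silent automatic changes.  Every name here is invented.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Adjustment:
    station_id: str
    effective_from: date
    offset_c: float       # degrees C added to readings from this date on
    reason: str           # justification for the change
    documented_by: str

adjustments = [
    Adjustment("EXAMPLE001", date(1987, 4, 1), -0.3,
               "station moved 300 m uphill; overlap readings show a 0.3 C warm bias",
               "J. Smith"),
]
# Anything not in this list is left exactly as recorded: no automatic changes.
```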

Homogenisation doesn't appear to be a good practice because it potentially mixes good stations with bad. Better to drop the bad ones. Also, you don't need to travel far to get a discrepancy between stations that one might assume would give similar results. For instance, I live in a place that gets very different weather from the places on all four sides. It's warmer and drier on the whole. Changes in wind direction mean we might get a taste of what the other places are getting (which might last weeks, months or even years), but that doesn't mean an amalgam of those places, or picking one of them, would match our temperature or rainfall.
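To illustrate dropping rather than blending, a crude test might compare a doubtful station against a composite of its neighbours and flag it when the difference drifts. The threshold and data layout below are assumptions, not any network's actual procedure.

```python
# Rough sketch: flag a station whose difference from its neighbours drifts,
# so it can be dropped rather than homogenised in.  Threshold is invented.
import numpy as np

def looks_discrepant(station, neighbours, max_drift_c=1.0):
    """station: 1-D array of annual means; neighbours: 2-D array of shape
    (n_neighbours, n_years).  Returns True if the station drifts away from
    the neighbour composite by more than max_drift_c, after removing any
    constant offset (a constant offset is fine -- one place can simply be
    warmer than its neighbours)."""
    station = np.asarray(station, dtype=float)
    neighbours = np.asarray(neighbours, dtype=float)
    composite = np.nanmean(neighbours, axis=0)
    diff = station - composite
    diff = diff - np.nanmean(diff)
    return bool(np.nanmax(np.abs(diff)) > max_drift_c)
```

Of course, as the example of a locality with genuinely different weather shows, a real local difference and a bad record can look the same in such a test, which is an argument for documenting the decision rather than automating it.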

The temperature records are used as the raw material for computer models. Even if they were perfect, they're very limited, and claiming that the end product is good is wrong. You can't make a silk purse out of a sow's ear.

May 6, 2015 at 10:21 AM | Unregistered CommenterTinyCO2

As a bystander, my impression is that much of the data fiddling ('homogenisation') has been undocumented or, at best, the documentation is now lost or inaccessible. Plus much of the original unfiddled data has been lost (Phil Jones, if I remember correctly, said it was lost as the result of an office move).

It will be interesting to see what, if anything, comes out of the GWPF enquiry into temperature records.

May 6, 2015 at 12:10 PM | Registered CommenterMartin A

It seems amazing that Phil Jones didn't want the likes of Steve McIntyre to look at his 'life's work' in detail because he might find something wrong with it, yet he seems less troubled by the fact that he has actually lost a lot of the underlying data on which that life's work is based.

May 6, 2015 at 4:12 PM | Unregistered Commentermichael hart

TinyCO2 May 6, 2015 at 10:21 AM
What you describe in your locality - being between different weather systems - is probably what happened in Paraguay at the end of the 1960s. A shift in the weather systems led to a drop of about one degree over an area roughly the size of mainland Britain. I think there may be layers of complexity in surface temperatures, and in studying temperature variation the key is to find the limits of what the data can tell us. Station coverage is highly uneven and the data are of variable quality; this gets worse for the pre-1950 data.

I think it might help to separate the elimination of measurement biases (UHI, TOBS, station moves) from the homogenisation process. Homogenisation - smoothing out the data by blending the elements together - should neither add to nor subtract from the average. The point, I feel, is not homogenising each temperature station or region individually, but using a consistent method, like BEST, and then running various data checks to see whether the result makes sense. It is this last stage that does not happen. By identifying limitations and anomalies in the process, we may develop a deeper understanding from the limited data. What is more, we may be able to better distinguish system noise from actual trends, or at least evaluate which hypotheses are better or worse than others.
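As a rough illustration of the sort of check I mean (Python, with invented array names and an arbitrary tolerance - not anything BEST or anyone else actually runs): over a region, the adjustments made by homogenisation should more or less cancel, leaving the regional mean unchanged.

```python
# Sketch of a sanity check: homogenisation should neither add to nor
# subtract from the regional average.  Names and tolerance are invented.
import numpy as np

def mean_preserved(raw, homogenised, tolerance_c=0.05):
    """raw, homogenised: 2-D arrays of shape (stations, years) for one
    region.  Returns True if homogenisation leaves every year's regional
    mean within tolerance_c of the raw regional mean."""
    raw = np.asarray(raw, dtype=float)
    homogenised = np.asarray(homogenised, dtype=float)
    raw_mean = np.nanmean(raw, axis=0)
    hom_mean = np.nanmean(homogenised, axis=0)
    return bool(np.all(np.abs(hom_mean - raw_mean) <= tolerance_c))
```

A check like this says nothing about whether any individual adjustment is right, only that the blending has not quietly moved the regional average - which is the minimum one would want to verify.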

May 6, 2015 at 11:45 PM | Unregistered CommenterKevin Marshall