Discussion > Place Your Betts
geronimo
At a guess, I would say they ..... guess!
In an engagement with raff on the thread below, I reminded him of something I have said before: the difference in temperature between one end of my kitchen windowsill and the other is greater than the "anomaly" which allegedly makes 2015 the warmest year ever.
The only proper reaction to both Cowtan and Betts and all the others is, "You're 'avin' a larf, mate, aintcha?!"
During my discussion with raff I pointed him towards a WUWT post which puts the temperature and the anomalies in some sort of perspective. I was especially interested in a graph in the comments under that article which appears to show, as some people have long argued, that increasing maximum temperatures are a myth and that the anomalies are the result solely of increased minima.
If this is correct then the global warming panic is indeed a scam.
No doubt the good Dr Betts will be able to pooh-pooh that idea as well but it really is time to start calling out those who are trying to keep this gravy train on the rails at the expense of those who can least afford it — as Andrew's post this morning on energy prices demonstrates.
The comments in The Times below their report on this disappointed me, because that readership ought to be better informed than the average "below the article" punter. Most of them, though, were mindlessly lining up to bash the electricity companies, which shows just how far this hoax has still to run before common sense returns.
Ground models vs satellite measurements? ("The graph label is ensembles. IOW: model runs")

Richard Betts @richardabetts Jan 12
Kevin Cowtan on whether satellite data are more reliable than surface measurements for global temperature link to SkS… (Spoiler: No)

Richard Betts @richardabetts Jan 12
@RogTallbloke "The known uncertainties in the satellite trend .. are five times the uncertainties in the thermometer record trend"
"The satellite record is valuable for its uniform geographical coverage and ability to measure different levels in the atmosphere, but it is not our best source of data concerning temperature change at the surface."

Replies:
@richardabetts Kevin Cowtan and Richard Betts in the new comedy "Carry On Adjusting". Also starring Gavin Schmidt and Sid Levitus
... Will Kevin Cowtan be submitting this for peer review?
... Of course, the large spread in surface data over time is hidden off the left side of the graph #hockeyjockeys
... bigger swings in 'air' vs 'surface' mean bigger std error

GerryMorrow @GerryMorrow Jan 12
@richardabetts @Foxgoose He doesn't deal with the paucity of weather stations at the surface; shouldn't that be a source of uncertainty?

Ed Hawkins @ed_hawkins Jan 12
@etzpcm Graph description ....... early baseline used to show uncertainty in *trend*, not *anomalies*

...blah blah, and then they get to the point where Derek Sorensen clicks "ah". Twitter thread:

Derek Sorensen @th3Derek 17h
It's above my pay grade, but the paper mentions the word "model" 35 times.

Derek Sorensen @th3Derek 17h
@AndyMeanie @andrw100 @HG54 However, what I was going to go on to say is that the more data points you have, the more it smooths out.

Derek Sorensen @th3Derek 17h
... A temperature time series isn't a random walk, and to treat it as one is deluded. I think that's what /SS/ did.

Andy Mac @AndyMeanie 17h
@th3Derek I can understand uncertainty increasing in a model, as you have no obs basis for it, but not for a series of obs.

Derek Sorensen @th3Derek 17h
@AndyMeanie @andrw100 @HG54 /me facepalms. Of course, that's what I missed; that's what they did. The graph is of ensembles. IOW: model runs.

Derek Sorensen @th3Derek 17h
@AndyMeanie @andrw100 @HG54 /SS/ took the uncertainty, ran some red noise, and plotted the results. Monte Carlo simulation.

Derek Sorensen @th3Derek 17h
@AndyMeanie @andrw100 @HG54 I am ashamed I fell for it; amazed that @richardabetts did, though.

Andy Mac @AndyMeanie 17h
@th3Derek If this is so, then it makes no sense, as HadCRUT and RSS (and UAH) are measurements, not models?

Richard Betts @richardabetts 16h
@AndyMeanie @th3Derek @andrw100 @HG54 It's explained in this paper on uncertainties in the RSS dataset http://onlinelibrary.wiley.com/doi/10.1029/2010JD014954/full

Derek Sorensen @th3Derek 16h
@HG54 @richardabetts @AndyMeanie Hang on: Richard linked to this to refute the idea that this is a simulation. Look at the paper's title. How is a "Monte-Carlo estimation technique" not a simulation?
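For what it's worth, the "ran some red noise, Monte Carlo simulation" idea being argued over in that thread can be sketched in a few lines of Python. This is purely illustrative: the record length, AR(1) autocorrelation and noise amplitude below are invented for the sketch, not taken from Cowtan's paper or the RSS uncertainty paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_runs = 37, 10_000   # e.g. a 1979-2015 satellite-era record
phi, sigma = 0.6, 0.1          # assumed AR(1) autocorrelation and noise size (C)

trends = []
for _ in range(n_runs):
    # generate AR(1) "red" noise: x[t] = phi * x[t-1] + white noise
    x = np.zeros(n_years)
    for t in range(1, n_years):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    # least-squares trend of the noise-only series (C/year)
    trends.append(np.polyfit(np.arange(n_years), x, 1)[0])

# the spread of these noise-only trends is the Monte Carlo estimate
# of how uncertain a fitted trend is, given that noise model
print(f"2-sigma trend uncertainty: {2 * np.std(trends):.4f} C/yr")
```

The point the thread circles around is exactly this: the uncertainty band comes from many synthetic noise realisations (an ensemble of runs), not from the single observed series itself.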
Thanks Mike. I have problems with much that the clisci community puts out by way of numbers, but when they start taking seriously a paper from the University of SkS purporting to show that there is more uncertainty in satellite measurements of global temperature than in a system of around 8,000 measurement devices, mostly in the USA and Europe, which purports to measure the global temperature with less uncertainty, it piques my curiosity.
Is it my ignorance, or have they somehow forgotten that the temperature-measuring instruments are spatially imbalanced, and that there are few of them? I think I know what Cowtan purports to have proved: that individual temperature measurement stations have less uncertainty than satellite measurements. That may be true, but there is a bigger picture, one which puts the terrestrial measurement stations in the context of their numbers and distribution. In that context the resultant temperature measurements cannot have less uncertainty than satellite temperature measurements. Surely?
Anyway, I've asked; maybe someone will enlighten me.
geronimo
As I understand these things, part of the thinking behind anomalies, as opposed to actual temperatures, is that once you have established what the global temperature is (nb: nobody has yet succeeded in doing that AFAIK), then the number of reporting stations becomes less relevant, the assumption being that the anomalies from those stations will replicate across the globe .... or something like that!
I may well have that wrong but I've certainly had that or a similar argument put forward as justification for reducing the number of stations.
I think Cowtan is simply the latest in a lengthening line of scientivists who are clinging by their finger nails to their models and their favourite (and we know why they're their favourites, don't we?) datasets.
This paper, like Karl, looks like a response to the order from above: "FFS find something that keeps this thing going." More and more the stuff coming out from the usual suspects and their cohorts smacks of desperation.
To be fair, there are big issues with satellites, such as orbital decay, stitching together records from different sources (where have we heard that before?), permanent and intermittent power failures, instrumental drift (who checks the sensors once they are in orbit?), etc. They are not unimpeachable sources of fact. See Climate Audit for Steve Mc's reservations.
http://climateaudit.org/2016/01/05/update-of-model-observation-comparisons/#comment-765688
diogenes: I accept there are big issues with satellite records; I think I said that above. But it's quite another story to imply that the thermometer/Argo records have fewer uncertainties, given the geography and coverage of those measurement stations and Argo buoys (roughly one per 200,000 sq miles).
Thanks for the link though it may educate me.
Does anyone really care what the temperature is over most of the ocean? Surely the land temperatures where people live are the most relevant, plus perhaps where glaciers lurk, if you have been worried about the general increase in sea level over the last few thousand years.
Rob Burton, I think that depends on what point is being discussed. In the context of weather and the science behind global-warming climate claims, it does matter. Or ought to. Just as the temperature measured by satellites up in the troposphere matters for the same reasons.
But to the man or woman on the street, no, it probably doesn't matter. Which is why the global-warmers will always be struggling to get people interested, never mind continuously frightened, about such things. And this is also why they desperately need an ongoing supply of claimed disasters and "records": people can see the real weather and climate every day when they walk out of the front door, and they are not alarmed. And most of them are still not alarmed when the BBC tells them to be alarmed.
michael hart
People can see the real weather and climate every day when they walk out of the front door, and they are not alarmed. And most of them are still not alarmed when the BBC tells them to be alarmed.
In fact, most people over about 18 think, "What's the fuss? I've seen much worse"; those under 18 think, "If this is as bad as it gets, what's the fuss?"
Phil Clarke posted some interesting links on the Adjustocene cartoon thread, and I for one am now better educated than I was regarding the nature, extent and reasoning behind the land-based temperature adjustments.
Having read the articles he linked to, I was left with the impression that the people concerned are almost certainly sincere in their attempts to produce meaningful temperature runs with some claim to accuracy, but I was also left wondering if the whole exercise isn't ultimately futile. For all the effort put into making the adjustments sensible, accurate and realistic, I still couldn't help feeling there was a "finger in the air" element to it all.

Adjusting for changes in the time of day when measurements were recorded (in the past, mostly in the afternoon; today, mostly in the morning) sees today's temperatures adjusted up by about 0.5C to compensate, while adjusting for the UHI sees temperatures go down by about 0.2C. The 0.5C I can accept, though the figure seems a little arbitrary; the 0.2C for UHI strikes me as inadequate, but who knows? And that's a large part of the problem: nobody really knows, for all their fancy algorithms. Much the same can be said for their (smaller) adjustments the other way for sea-based temperatures.

Are they sincere? In my view, almost certainly. Have they got it right? In my view, almost certainly not. What would they have to do to get it right? I haven't a clue, but I doubt if anyone else truthfully has, either.
Which leads into your point about satellites. I accept without reservation that there are problems with temperatures suggested by satellites too. They are also adjusted, and no doubt the adjustments are made with equal sincerity. As a layman, it seems to me that the need for the adjustments to the satellite temperatures is less, as they have been around for a much shorter time, and the issues requiring adjustments aren't so dramatic as things like relocating sites, and changing the time of the day when the measurement is recorded. So, on balance, I trust the satellite records more than the land and sea-based ones, but if I'm honest, that's more a gut feeling than anything else.
I think the point you make is a very important one, however. Satellites offer much wider coverage of the planet, whatever their limitations. Land-based measurements are heavily biased towards the USA, so yet more adjustments are required to compensate for that. Land-based measurements are (outside of the USA and parts of Europe) sparse, which introduces the concept of in-fill. Ocean-based measurements strike me as even more sparse, for such a huge area. And, as you point out, there's little if anything beyond 80° north and south. Which ultimately makes me wonder: what's the point? At least until we are able to find a much more comprehensive, accurate and reliable way of measuring the temperature world-wide. Isn't that where satellites were supposed to come in?
Mark Hodgson
You're forgetting to keep your eye on that pea!
Start from the basis that whoever was taking the temperature readings 50, 60 or 100 years ago was an honest man (or woman) and that the reading was accurate to within half a degree. That's the best you could hope for, I would suggest.
Add to that the fact that regardless of how many weather stations there are/were, there is not now and has never been any way of producing a "global temperature" figure that can in any way reflect the actual global temperature, except possibly by accident.
But you can pretend that there is a meaningful figure by taking an average of all those readings over the years and assuming that any variation from that average is an accurate assessment of what the temperature is doing now; ie, you deal in what the climate community calls "anomalies" rather than real temperatures.
The problem arises when you start pretending that the original temperature readings actually mean anything. The temperature here at this moment is either 12.0°, 11.7° or 11.8°, depending on which of the three thermometers scattered about my property you choose. I know those readings are accurate because I made sure the equipment was calibrated, but who is to say the calibration figure was correct?
And remember that where you don't have a reading you don't have a reading! You can "interpolate" or "extrapolate" or "smear" to your heart's content. Every 'infilled' reading is guesswork, no matter how you dress it up, and every adjustment is a personal opinion no matter how honest the guesser may be.
And at the end of the day, what are we talking about? Less than 1° over the course of more than a century; a figure barely one-fifth of the difference between my local temperature now and at seven o'clock this morning, or 22° below last year's maximum and much the same above last year's minimum.
Put it all into perspective and then tell me we should be beggaring ourselves to prevent a speculative reduction of 0.001° by the end of this century!
Jan 31, 2016 at 9:28 PM | Unregistered Commenter michael hart
Michael, my main point was that in order to get a "global" temperature using thermometers, we have to guess at a huge area of the Earth that doesn't have them and that we largely do not care about. At least the satellites should be pretty consistent at getting temperatures truly globally. But if you were concerned about climate change in Northern Ireland, say, I would just look at the very long record in Armagh, say that basically covers climate change for the last few hundred years, and leave it at that.
Mark, for an idea of how difficult it is to get a satellite record of atmospheric temperatures, see http://www.drroyspencer.com/2015/04/version-6-0-of-the-uah-temperature-dataset-released-new-lt-trend-0-11-cdecade/
If sufficiently interested, a numerate person can download the thermometer data from around the world and, armed with no more than Excel, compute her own temperature index. It is not too difficult - at least one skeptical blogger has done this. The result will be a graph showing rising temperatures. If she wanted to do the same for satellite data... she probably couldn't. It is too difficult.
It would also probably be too difficult for her to produce the planet-wide grid of surface temperatures from the thermometers by the opaque kriging procedures. No one, or few people, pretends the process is trivial. (Least of all the 'guardians' of the surface data: otherwise they might get asked if what they are doing really needs all that money. Roy Spencer, however, produces the satellite data on a shoestring budget.)
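The "compute her own temperature index" exercise above really is that simple in outline. A minimal sketch in Python (synthetic stand-in data invented here, since the real downloaded station files and their formats are not specified in this thread):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for downloaded station data: monthly means for a handful of
# stations, each with its own climatology plus a shared slow warming drift.
n_stations, n_months = 50, 480                        # 40 years of monthly data
climatology = rng.normal(10.0, 5.0, n_stations)       # per-station baseline (C)
drift = 0.001 * np.arange(n_months)                   # shared trend, C per month
temps = climatology[:, None] + drift + rng.normal(0.0, 1.0, (n_stations, n_months))

# The usual anomaly recipe: subtract each station's own baseline-period mean,
# then average the anomalies across stations to get a single index series.
baseline = temps[:, :120].mean(axis=1, keepdims=True)  # first 10 years
index = (temps - baseline).mean(axis=0)                # simple "global" index
```

Stations with wildly different climatologies can still be averaged this way, because each contributes only its departure from its own baseline; that is essentially the anomaly argument geronimo relays further down the thread.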
Mike Jackson
I think I did keep my eye on the pea - my ultimate conclusion was: "Which ultimately makes me wonder, what's the point?"
Raff
Thanks for the link - very interesting; I'm adding to my knowledge all the time. Having read about all the adjustments required for satellites, land-based measurements, and ocean-based measurements, however, I think my conclusion remains unchanged: "Which ultimately makes me wonder, what's the point?"
michael hart, yes, the better an answer you want the more work you have to do. But you can get a long way with the basic temperature data and Excel, which is not true of satellite data.
Mark, in a way you are right. We know CO2 will cause rising temperatures and that we need to reduce emissions. So measuring what is happening is just an academic exercise. Is that what you meant?
Raff
Not really, no! I accept that we have GW (not really a surprise, as we emerge from the Little Ice Age). I am sceptical about AGW, and disbelieve entirely in CAGW. Thus I don't accept that we need to reduce CO2 emissions.
However, even if I did, I never cease to be amazed at the illogical and counter-productive policy prescriptions of the "green" activists. I always put "green" in inverted commas, by the way, as I see nothing green about the rush to blight our beautiful landscape with wind turbines and solar panels. I'm in Athelstan's camp on the other thread, where he said he can put up with the ugliness of nuclear power stations, reluctantly, because at least they achieve something positive, or words to that effect. Living in Cumbria, I think Windscale/Sellafield is a hideous eyesore, but it's as nought to the ring of steel destroying the beauty of this wonderful county.
However, I had better stop there, as I'm going O/T and am in danger of derailing this thread, which is not my intention.
Mike Jackson has it right. What we are looking at is a set of temperature measuring devices over the globe, say around 4000 on land and 3000 at sea (ignoring SST made with buckets). The 4000 are heavily skewed to the USA and Europe.
If, over the last 120 years, they had all been measured in the same way, at the same time of day, by the same people, then there may, just may, be a signal in the time series telling us how the Earth's temperature is performing. If we aren't using the same weather stations measured in the same way at the same time of day, and have to make adjustments to bring them together, then in my view you cannot produce a time series with any meaningful signal in it, and to pretend to be able to do so is disingenuous.
I'll leave it to you each to take on board that the Met Office and its fellow temperature-series supporters claim that the system they're using can measure temperatures past and present to hundredths of one degree Celsius.
I wonder if anyone else feels that allowing the people who forecast the temperatures to check whether the forecasts are right or not is a quality control issue?
Feb 2, 2016 at 6:00 PM | geronimo
Yes, testing of software, in this case models, should be done by people not involved in creating it, people who are paid to break it. Dr Jones of UEA had it right about breaking things: he should have had faith in the basic "rightness" of his data. The fact that he didn't speaks volumes, though it doesn't mean he knew it was wrong.
Geronimo
I'll leave it to you each to take on board that the Met Office and its fellow temperature-series supporters claim that the system they're using can measure temperatures past and present to hundredths of one degree Celsius.
I discussed this earlier with Radical Rodent and Micky H Corbett.
The annual average is based, if your figures are correct, on 7000*365=2,555,000 measurements.
d ~ 1/√n, where d is the precision of the mean and n is the sample size.
A thermometer can be read to the nearest 1C. n=1, d=1C
The mean of 100 measurements (n=100) can be expressed to d=0.1C
For n=10000 d=0.01C.
For n=1 million d=0.001C.
For n=2555000 d=0.0006C
The Met Office monthly and annual figures are expressed to 0.01C, which is statistically quite conservative.
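EM's arithmetic can be checked directly; the sketch below simply encodes the 1/√n scaling he quotes. Note the loud caveat baked into the docstring: this scaling only holds for independent readings of the same quantity, which is exactly the assumption disputed in the replies.

```python
import math

def precision_of_mean(n, d_single=1.0):
    """Naive standard-error scaling: precision of the mean of n
    INDEPENDENT readings of the SAME quantity, each readable to
    d_single degrees C. Does not hold for correlated readings of
    different quantities (e.g. different places and times)."""
    return d_single / math.sqrt(n)

for n in (1, 100, 10_000, 1_000_000, 7_000 * 365):
    print(f"n = {n:>9,}: d ~ {precision_of_mean(n):.4f} C")
```

Run as-is, this reproduces the sequence in the comment above: 1C, 0.1C, 0.01C, 0.001C, and about 0.0006C for 2,555,000 readings.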
EM, they are not all measuring the "same" temperature in the same place at the same time. Averaging them is not statistically meaningful or appropriate in the way you think it is.
If I rapidly took a million digital measurements of a cow's fundament, and then one from the South Pole, and averaged them to obtain a number I expressed to several decimal places, what would that tell me?
Nothing.
Try reading Essex, McKitrick and Andresen again: "Does a Global Temperature Exist?"
Richard Betts, late of this diocese, posted a tweet the other day referring to the SkS member Kevin Cowtan, who has produced yet another paper trying to disprove the pause. The approach he took this time was to look at the uncertainties in the satellite and the terrestrial temperature measurements and, surprise, surprise, he came to the conclusion that the satellite data had more uncertainty than the terrestrial data. Because of his overt activism I have every reason to doubt Cowtan's scholarship, but I accepted at face value the general premise that satellite measurements have more uncertainty than terrestrial measurements. I also don't have a problem with the homogenisation methodology, and I assume that the scientists are doing their best to get to an accurate assessment of the global temperature anomalies.
My problem is this. As far as I know there are about 4800 weather stations in the HadCRUT4 data set; I'm assuming 3000 Argo buoys and a miscellaneous amount of data extracted from different forms of measuring sea surface temperatures, with nothing above 80N or below 80S. In terms of uncertainty I've assumed that measuring the temperature of a bucket of seawater is, in real terms, little short of useless, so I'm assuming that the HadCRUT4 measurements come from 4800 weather stations and 3000 Argo buoys, each of which represents a pin-prick of space on the Earth's surface. Put another way, we have one measuring station for every 65,400 km^2 of the Earth's surface area.
I tweeted back to Richard and asked whether this was taken into consideration when calculating the likely uncertainties in the HadCRUT4 data set, but, not for the first time, I don't believe he understood what I was getting at. (If there's fault, it's mine, not his.)
What are the chances of getting a robust view of the Earth's average global temperature anomaly out of a network of 7800 measuring stations over a land and sea mass of 510 million km^2, and how do the Met Office and CRU deal with the uncertainty of a signal extracted from 7800 measuring devices over 510 million km^2? Anyone know?
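The coverage arithmetic behind that question is easy to verify. A quick sketch, using the station and buoy counts assumed in the comment above (4800 plus 3000; the counts themselves are the commenter's assumptions, not official figures):

```python
EARTH_SURFACE_KM2 = 510_000_000   # approximate total surface area of the Earth
stations = 4_800 + 3_000          # assumed HadCRUT4 land stations plus Argo buoys

area_per_station = EARTH_SURFACE_KM2 / stations
print(f"{area_per_station:,.0f} km^2 per measuring device")

# for scale: the side of a square with that area
side = area_per_station ** 0.5
print(f"equivalent to one device per {side:.0f} km x {side:.0f} km square")
```

This gives roughly 65,000 km^2 per device, consistent with the figure quoted above; each device then stands in for a square a few hundred kilometres on a side.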