Discussion > I'll bet not a lot of people know this
aTTP, what part of you running your blog as you see fit do you not understand? That is what I had said. You are so efficient at it that you do not generate much discussion, and have to come here instead.
Clearly you consider your site to be perfection which is why you want this site to adopt your standards.
Yes, I know you have been snipped at this site; ironically, so have I, responding to your posts.
Why not use the satellites? Perhaps this is a statement of the bleedin’ obvious, but those few satellites that are in existence have the benefit of monitoring the entire planet, without the hindrance of politics or petty human failings. Not only do they provide us with continuous observations, but the small number of instruments involved means less instrument variability.
Oh, yes. Sorry… just realised that the satellites do not seem to be giving data that fit the narrative. Silly me.
Radical Rodent, satellite data has been one of the most expensive and accurate means of measuring global warming. Except when climate scientists decide it isn't.
You would have thought it would be easy to program the satellites to produce data in accordance with the computer-generated models. It is hardly rocket science for NASA.
Phil
That's an interesting paper. It's only valid though if the data points are representative of the areas that they cover. Which would mean quite extensive characterisation of an area that size. I don't think having 1 temperature sensor per 1200 miles quite cuts it. However as a paper it's simply another idea in scientific literature. Doesn't have to be correct, just out there.
Maybe the authors should have read Benoit Mandelbrot's paper on the length of the British coastline. That's one of the best ways to look at measuring large-scale distances and surfaces.
To repeat, if you're calculating a global average and you have areas of 'missing' coverage then you either give up and go home or calculate the average with the data you have, which effectively means you're assuming the missing region is at the average for the globe.
Which you don't have.
Translation: You make up something on an arbitrary basis. Par for the 'climate science' course.
Micky H Corbett - yes. Calculating 'degrees of freedom' from data which has already been reduced in dimensionality is circular reasoning.
Geronimo started the thread off with his reference to information theory. It's a basic principle of information theory that, before you can sub-sample a field, you have to reduce its dimensionality by a smoothing (lowpass filtering) process so that the remaining spatial frequency components are all of lower spatial frequency than half of the spatial sub-sampling rate. If you are trying to measure a physical field by taking values at spatially discrete points, you have no option but to do the spatial discretisation at an adequately high spatial frequency, dictated by the spatial characteristics of the field itself.
To imagine that you can capture the reality behind this (the reality, not the Met Office plot) by measuring the SAT on a 10 × 12 ("60 per hemisphere") matrix is, as michael hart (2:57 AM) implied, just laughable.
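For anyone who wants to see the aliasing point in miniature, here is a toy sketch (mine, not anything from the paper or from Martin A): with 60 evenly spaced sample points, a fine-scale wave at spatial wavenumber 70 is indistinguishable from a large-scale wave at wavenumber 10, because 70 = 10 + 60.

```python
# Toy aliasing demo (illustrative only): 60 evenly spaced samples cannot
# distinguish wavenumber 70 from wavenumber 10, because 70 = 10 + 60.
import numpy as np

n = 60                                  # number of evenly spaced "stations"
x = np.arange(n) / n                    # their positions around a unit domain

coarse = np.sin(2 * np.pi * 10 * x)     # large-scale field (resolvable)
fine = np.sin(2 * np.pi * 70 * x)       # fine-scale field, above the Nyquist limit of 30

print(np.allclose(coarse, fine))        # True: identical at every sample point
```

The Nyquist criterion Martin A states is just the condition that stops this happening: smooth the field first so that nothing above half the sampling rate survives, and only then sub-sample.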
Geronimo - I'm quite familiar with Shannon theory (source entropy, channel capacity, coding theory) but I'm not sure what you mean when you say "... the data sets look entropic...". Care to elucidate? Or perhaps I should ask Entropic man?
"It's a basic principle of information theory that, before you can sub-sample a field, you have to reduce its dimensionality by a smoothing (lowpass filtering) process so that the remaining spatial frequency components are all of lower spatial frequency than half of the spatial sub-sampling rate. If you are trying to measure a physical field by taking values at spatially discrete points, you have no option but to do the spatial discretisation at an adequately high spatial frequency, dictated by the spatial characteristics of the field itself."

Martin A, I don't have a clue what that means, and that's fine. If ever I need to do what you are talking about I will know where to come.
I'm prepared to bet that 97% of climate scientists haven't a clue either. What a shame that they aren't prepared to admit it!
Your response to Phil Clarke demonstrates another problem with climate science which appears also to be a characteristic of Brandon's post as reported by Josh last night. They seem to have started arguing in circles.
I’m with you there, MJ. Martin A could be talking double-Dutch, as far as I am concerned, but it is useful to know who I could turn to if I ever need a translation from double-Dutch. (Also, this is one reason you will never make it onto my hate list, by the way.)
That's an interesting paper. It's only valid though if the data points are representative of the areas that they cover.
So, how many spatial degrees of freedom do you believe the climate field has? That paper came up with an estimate of 50-90; others have calculated it as 60. So I was indeed wrong to say 60 thermometers were sufficient to provide a good estimate of global temperature.
I should have said 60 well placed thermometers.
We do, of course, have rather more than 60.
Phil
Even a well placed thermometer won't do it. You need to sample the space accordingly; Martin A describes this very thing above. They also don't seem to account for bias in the samples, just running Monte Carlo variations, but I may be wrong.
Martin A could be talking double-Dutch, as far as I am concerned...
RR - I am sorry, I thought I had expressed the use of the Nyquist spatial sampling criterion as simply and straightforwardly as possible. Which part of
"It's a basic principle of information theory that, before you can sub-sample a field, you have to reduce its dimensionality by a smoothing (lowpass filtering) process so that the remaining spatial frequency components are all of lower spatial frequency than half of the spatial sub-sampling rate. If you are trying to measure a physical field by taking values at spatially discrete points, you have no option but to do the spatial discretisation at an adequately high spatial frequency, dictated by the spatial characteristics of the field itself."

did you not understand?
You might also like to ask Entropic man to explain how he 'knows' that the global average temperature is measured to a precision of ± 0.1°C when the measurement points are as sparse (or as dense, if you prefer) as they are.
http://rankexploits.com/musings/2011/a-monte-carlo-approach-to-estimating-sampling-error/
https://noconsensus.wordpress.com/2011/07/15/subsampled-confidence-intervals-zeke/
http://moyhu.blogspot.co.uk/2010/05/just-60-stations.html
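For what it's worth, the flavour of the exercise in those links is easy to reproduce. Here is a minimal sketch (the real versions use gridded reanalysis or station data; the field below is entirely synthetic, so the numbers mean nothing beyond illustration): build a gridded anomaly field, repeatedly pick 60 grid cells at random, and see how the 60-point mean scatters around the full area-weighted mean.

```python
# Minimal sketch of a sub-sampling experiment of the kind those links describe.
# The real exercises use gridded reanalysis/station data; this field is
# synthetic, so the numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

# Fake "global" anomaly field on a 72 x 36 grid: a smooth large-scale
# pattern plus small-scale noise.
lon = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)
lat = np.linspace(-np.pi / 2, np.pi / 2, 36)
LON, LAT = np.meshgrid(lon, lat)
field = 1.5 * np.cos(LAT) * np.sin(2.0 * LON) + rng.normal(0.0, 0.5, LAT.shape)

weights = np.cos(LAT)                               # grid cells shrink towards the poles
true_mean = np.average(field, weights=weights)

n_stations = 60
estimates = []
for _ in range(5000):
    picks = rng.choice(field.size, size=n_stations, replace=False)
    estimates.append(field.ravel()[picks].mean())   # plain 60-point mean

err = np.array(estimates) - true_mean
print(f"true area-weighted mean : {true_mean:+.3f}")
print(f"60-station bias         : {err.mean():+.3f}")
print(f"60-station spread (1σ)  : {err.std():.3f}")
```

Whether 60 stations give ±0.1 °C on the real Earth then depends entirely on how much of the real field's variability a sketch like this leaves out, which is rather the point being argued here.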
Most of the signatories to the Oregon Petition are engineers. Go figure.
Yes, Martin A; simple for you, perhaps, but not so simple for me – I had no idea there was such a thing as “information theory”, for a start! I am slowly translating it into Rodent-speak (or should that be “Rodent-squeak”?); give me some time, and I will catch up.
"I thought I had expressed the use of the Nyquist spatial sampling criterion as simply and straightforwardly as possible."

I'm sure you did, and I'm sure RR is sure you did. Just goes to show how reliant we little people have to be on those who claim to be experts.
And while I have every reason to be confident that you know what you are talking about when it comes to spatial sampling criteria, I doubt I could say the same about (for example) Phil Clarke ....
.... who has just told us that
"Most of the signatories to the Oregon Petition are engineers."

Well, gee thanks Phil. And I have a son who is an expert on energy costs, a fact which is probably about as relevant.
But there again, perhaps you're making a worthwhile point. I would certainly trust an engineer on most things before I would trust a 'climate scientist' — even on climate.
Just for clarification, is this Phil Clarke, who claims that 60 thermometers are sufficient to measure the global temperature, the same person who, on another thread, debunked the CET record because it incorporates readings taken in Utrecht (or so he claims, no doubt because someone on SkS made it up)?
The thermometers have to be where they say they are. Not a difficult idea, surely?
And re: CET, I was also concerned with indoor readings and the fact that the mercury-in-glass thermometer was not invented until 1714-ish. Here's what Parker et al - not me or SkS - say about their data:
Manley (1953) published a time series of monthly mean temperatures representative of central England for 1698-1952, followed (Manley 1974) by an extended and revised series for 1659-1973. Up to 1814 his data are based mainly on overlapping sequences of observations from a variety of carefully chosen and documented locations. Up to 1722, available instrumental records fail to overlap and Manley needs to use non-instrumental series for Utrecht compiled by Labrijn (1945), in order to make the monthly central England temperature (CET) series complete. Between 1723 and the 1760s there are no gaps in the composite instrumental record, but the observations generally were taken in unheated rooms rather than with a truly outdoor exposure. Manley (1952) used a few outdoor temperatures, observations of snow or sleet, and likely temperatures given the wind direction, to establish relationships between the unheated room and outdoor temperatures: these relationships were used to adjust the monthly unheated room data. Daily temperatures in unheated rooms are, however, not reliably convertible to daily outdoor values, because of the slow thermal response of the rooms. For this reason, no daily series truly representative of CET can begin before about 1770.
Remember the noise the Wattites make around badly-sited stations?
LOL
And again, it is not me who calculated the 60 degrees of freedom; I rely on Sam Shen et al.:
Another example is the question of how many stations are needed to measure the global average annual mean surface temperature. Researchers previously believed that an accurate estimate required a large number of observations. Jones et al. (1986a,b), Hansen and Lebedeff (1987), and Vinnikov et al. (1990) used more than 500 stations. However, researchers gradually realized that the global surface temperature field has a very low dof. For observed seasonal average temperature, the dof are around 40 (Jones et al. 1997), and one estimate for GCM output is 135 (Madden et al. 1993). Jones (1994) showed that the average temperature of the Northern (Southern) Hemisphere estimated with 109 (63) stations was satisfactorily accurate when compared to the results from more than 2000 stations. Shen et al. (1994) showed that the global average annual mean surface temperature can be accurately estimated by using around 60 stations, well distributed on the globe, with an optimal weight for each station.
Estimation of Spatial Degrees of Freedom of a Climate Field
The 1994 paper is here
You say the funniest things, Phil:
"The thermometers have to be where they say they are. Not a difficult idea, surely?" Does it matter so much when 60 theremometers can tell you the global temperature?
Also, what is the real relevance of taking the temperature inside a dwelling in an era when central heating did not exist and internal temperatures would probably have followed the same trends as external ones? Stevenson screens were invented to protect thermometers from the undue influence of extraneous factors. An 18th-century house was probably akin to a Stevenson screen. But, in any case, it would probably allow a reliable anomaly to be calculated, since internal temperatures probably track external ones once the Stevenson screen convention is established, given no central heating and little in the way of draught exclusion, ill-fitting windows etc.
Given the track record of climate science, surely you can permit these assumptions. So much of climate science is just assumptions piled on assumptions, backed by scientific papers without any real proof, e.g. the under-sampling of global temperatures. How many thermometers used in compiling the global indices are sited in Central Asia, the Himalayas, the Hindu Kush, South America, Africa etc.? If the resulting index is good enough for your purposes, then the CET qualifies as an outstandingly good case.
120 thermometers is adequate to tell the mean surface temperature of the Earth to within ±0.1 °C ?
Let's see... surface area of the Earth = 500 M sq km [round figures]
Land area of USA = 10 M sq km. So that's 120 × 10/500 = 2 thermometers for the USA. An adequate number of measurement points to assess the average temperature for that country to within ±0.1 °C?
Yeah, yeah, if you say so. But, it's just a joke, right?
I'm sorry, I got it wrong. Here's the correction.
60 (not 120) thermometers is adequate to tell the mean surface temperature of the Earth to within ±0.1 °C ?
Let's see... surface area of the Earth = 500 M sq km [round figures]
Land area of USA = 10 M sq km. So that's 60 × 10/500 ≈ 1 (not 2) thermometer for the USA. An adequate number of measurement points to assess the average temperature for that country to within ±0.1 °C?
If Sam the Sham (and the Pharaohs presumably) say so, it must be so..
Which part of (blah blah blah) did you not understand?
Mike J, RR - Sorry. It was meant to be funny..
Martin A
You estimate that 60 thermometers is too small a sample to calculate a global average.
How many do you estimate are necessary? Please show your calculation.
The reason a global average can be obtained at all is that what the paper describes is the Central Limit Theorem. Hence 60 samples will do, provided they have low uncertainty.
What's not included is the uncertainty in each sample that would match real life, which in turn will result in a global anomaly with a much larger uncertainty. Most probably one so large that it is useless.
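To put rough numbers on that (these figures are my assumptions, not anything from the paper): if each station value carries its own independent uncertainty, the Central Limit Theorem shrinks that contribution by √60, but it still sits on top of the sampling spread, and correlated or systematic station errors do not shrink at all.

```python
# Back-of-envelope sketch with assumed numbers (not from any paper):
# how per-station uncertainty adds to the sampling spread of a 60-station mean.
import math

n = 60
sampling_spread = 0.10          # assumed 1-sigma error from having only 60 points (deg C)
station_sigma = 0.5             # assumed 1-sigma uncertainty of each station value (deg C)

measurement_term = station_sigma / math.sqrt(n)   # independent errors average down (CLT)
combined = math.sqrt(sampling_spread**2 + measurement_term**2)

print(f"sampling term    : {sampling_spread:.3f} degC")
print(f"measurement term : {measurement_term:.3f} degC")
print(f"combined 1-sigma : {combined:.3f} degC")
# A systematic bias common to the stations would add on directly,
# with no 1/sqrt(n) reduction at all.
```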
Martin A
You estimate that 60 thermometers is too small a sample to calculate a global average.
How many do you estimate are necessary? Please show your calculation.
Apr 14, 2016 at 5:50 PM | Unregistered CommenterEntropic man
EM - you can calculate some kind of 'global average' with one, three, ten, or any other number of thermometers. What number would you like?
You yourself have stated (if I remember correctly) that the global average temperature is known to within ±0.1 °C.
I am not sure whether you meant "the mean of all the actual thermometer readings" or whether you meant "the actual average over the Earth's surface spatially, and one year temporally, of the Earth's SAT field". (Care to clarify?)
If the latter is what you meant, then would *you* care to explain how that is known?
Once you reveal that information, it should be a relatively straightforward computation to estimate what spatial density of measurements would be needed to give a desired precision in the final global average.
Until then, it's a bit like asking "how many altitude measurements do I need if I want to estimate the average height of the surface within a given area of 100 km by 100 km?". Impossible to answer without a lot of detail of the nature of the terrain and what precision you require. If your chosen area is in the Rocky Mountains and you require a precision of ±1 m, the answer will be quite different from if your chosen area is in Holland and you are happy with a precision of ±1 km.
I think there is every reason to believe that, in the case of global average temperature, nobody has the necessary information since, over vast areas of the Earth's surface, detailed temperature measurements have never been made.
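Putting the altitude analogy into crude numbers: the textbook sample-size formula n ≈ (z·σ/E)² tells you how many independent random samples you need for the mean to land within ±E. The σ values below are my guesses purely for illustration, and real stations are neither independent nor randomly placed, so treat this as nothing more than a sketch of why the answer depends on the terrain.

```python
# The textbook sample-size estimate n ~ (z * sigma / E)^2, assuming independent
# random sampling. Real stations are neither independent nor random, and the
# sigma values here are my guesses, so this is an illustration only.
import math

def n_required(sigma, half_width, z=1.96):
    """Samples needed for the mean to land within +/- half_width at ~95% confidence."""
    return math.ceil((z * sigma / half_width) ** 2)

# Rocky Mountains: hugely variable terrain, and +/- 1 m precision wanted.
print(n_required(sigma=600.0, half_width=1.0))       # of order a million points

# Holland: nearly flat, and +/- 1 km would satisfy us anyway.
print(n_required(sigma=30.0, half_width=1000.0))     # one point will do
```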
Martin A:
"It was meant to be funny.."

And you didn’t think my response was also meant to be funny?! Dang!
lol