Discussion > GHG Theory step by step
The Mann, Bradley and Hughes (MBH) hemispheric temperature reconstructions, published in 1998 and 1999 (that is, 20 years ago) and nicknamed the hockey stick, have been challenged, vindicated and reproduced.
Really? Last time I looked this has been debunked by McIntyre and McKitrick.
McIntyre and McKitrick, Energy & Environment, 2003 MM03
McIntyre and McKitrick, GRL 2005a
McIntyre and McKitrick, E&E 2005b
Full list & discussions here: https://climateaudit.org/multiproxy-pdfs/
And for any later work by Mann et al., use the same site with the search term 'Mann' and of course 'Climategate' ...
For follow-up studies attempting to reproduce it, such as PAGES2K, just use the search terms 'Gergis' and 'PAGES'.
A few examples of the latter:
Data Torture in Gergis2K
PAGES2017: New Cherry Pie
Phil Clarke, you referred to Dessler 2009. Were you relying on standard RealClimate or Skeptical Science denials?
Those able to be open-minded about clouds, feedbacks, etc. would be better off reading the full post here:
https://wattsupwiththat.com/2011/09/07/the-good-the-bad-and-the-ugly-my-initial-comments-on-the-new-dessler-2011-study/
Introduction
"NOTE: This post is important, so I’m going to sticky it at the top for quite a while. I’ve created a page for all Spencer and Braswell/Dessler related posts, since they are becoming numerous and popular to free up the top post sections of WUWT."
"UPDATE: Dr. Spencer writes: I have been contacted by Andy Dessler, who is now examining my calculations, and we are working to resolve a remaining difference there. Also, apparently his paper has not been officially published, and so he says he will change the galley proofs as a result of my blog post; here is his message:
“I’m happy to change the introductory paragraph of my paper when I get the galley proofs to better represent your views. My apologies for any misunderstanding. Also, I’ll be changing the sentence “over the decades or centuries relevant for long-term climate change, on the other hand, clouds can indeed cause significant warming” to make it clear that I’m talking about cloud feedbacks doing the action here, not cloud forcing.”
[Dessler may need to make other changes, it appears Steve McIntyre has found some flaws related to how the CERES data was combined: http://climateaudit.org/2011/09/08/more-on-dessler-2010/
As I said before in my first post on Dessler’s paper, it remains to be seen if “haste makes waste”. It appears it does. -Anthony]
Update #2 (Sept. 8, 2011): Spencer adds: I have made several updates as a result of correspondence with Dessler, which will appear underlined, below. I will leave it to the reader to decide whether it was our Remote Sensing paper that should not have passed peer review (as Trenberth has alleged), or Dessler’s paper meant to refute our paper"
Phil Clarke, does your reference to Dessler 2009 still stand as an honest assessment? Spencer criticised Dessler, and even notes Dessler's corrections and responses.
Climate Audit and Bishop Hill have references within the article.
This is excellent evidence of how Science works, and why Climate Science doesn't.
Clouds, water vapour, etc. have been minimised by taxpayer-funded Hockey Team peer-reviewed climate scientists. All the more reason for excluding them from more funding.
Thank you for drawing it to everyone's attention by referring to Dessler 2009.
Just a bit McIntyre-heavy there JJ. We all know the dangers of relying on a 'single source'
Energy and Environment is a comedy journal, of course, produced by a Reader in Geography at the University of Hull, and serving her own political purposes (although I believe her husband has a science A-level). M&M had to go there when proper journals rejected their offerings.
The MM 'debunkings' were shown in the literature either to have merit but negligible impact or no merit. See
Huybers (2005),
Rutherford et al (2005),
Wahl and Ammann (2007),
Ammann and Wahl (2007).
To quote the Rutherford paper:
It should be noted that some falsely reported putative “errors” in the Mann et al. (1998) proxy data claimed by McIntyre and McKitrick (2003) are an artifact of (a) the use by these latter authors of an incorrect version of the Mann et al. (1998) proxy indicator dataset, and (b) their misunderstanding of the methodology used by Mann et al. (1998) to calculate PC series of proxy networks over progressively longer time intervals.
And so on. It is not the case that the MBH algorithm produces 'hockey sticks' from random noise: M&M used unrealistically auto-correlated red noise, their versions were a fraction of the size of the real thing, and for presentation they had to mine for the top 1% most hockey-stick-looking graphs. MBH's use of decentred PCA was arguably a poor choice, but it makes a tiny difference to the outcome, and the shape of the reconstruction does not depend on a few proxies.
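The persistence point is easy to check numerically. Here is a minimal sketch (not anyone's published code; the network size, 79-year calibration window, and AR(1) coefficients are illustrative assumptions) showing that the "blade" short-centred PCA extracts from pure red noise grows with the assumed autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(2)

def red_noise(n_series, n_years, phi, rng):
    """AR(1) red noise, one row per pseudo-proxy."""
    x = np.zeros((n_series, n_years))
    e = rng.standard_normal((n_series, n_years))
    for t in range(1, n_years):
        x[:, t] = phi * x[:, t - 1] + e[:, t]
    return x

def pc1_decentred(data, n_cal):
    """Leading temporal PC, centred on the last n_cal years only
    (the short-centring convention attributed to MBH98)."""
    centred = data - data[:, -n_cal:].mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[0]

def blade(pc, n_cal):
    """Gap between the calibration-period mean and the long-term mean."""
    return abs(pc[-n_cal:].mean() - pc[:-n_cal].mean())

# 70 series x 580 years, 79-year calibration window (assumed figures)
for phi in (0.2, 0.9):
    gaps = [blade(pc1_decentred(red_noise(70, 580, phi, rng), 79), 79)
            for _ in range(50)]
    print(f"phi={phi}: mean hockey-stick 'blade' = {np.mean(gaps):.3f}")
# The excursion is much larger at phi=0.9, which is why the realism of
# the noise model matters so much to the M&M claim.
```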
As for PAGES, well your first link is back to McIntyre writing about Gergis et al which was published later. McIntyre insists that it was 'identical' to PAGES 2K while stating that it has only 20 out of 27 proxies in common. What!? Later he discovers he's been using the wrong dataset and again, insists it doesn't matter.
But it is only a blog post, and McIntyre has such a history of exaggeration and error that I really can't be that bothered. Wake me when he gets it past peer review.
Phil:
Just a bit McIntyre-heavy there JJ. We all know the dangers of relying on a 'single source'
Nah, his site is hardly a single source, as you should know.
Phil:
Ammann and Wahl (2007),
Hmm. Again, search them on CA and you will find debunkings of A&W and W&A, and the others, even ones you haven't yet mentioned.
Don't believe the hype, read the history and the papers and check for yourself; I know I did.
Phil:
It is not the case that the MBH algorithm produces 'hockey sticks' from random noise (M&M used unrealistically auto-correlated red noise, their versions were a fraction of the size of the real thing, and for presentation they had to mine for the top 1% most HS-looking graphs), MBH's use of decentred PCA was arguably a poor choice but makes a tiny difference to the outcome, and the shape of the reconstruction is not dependent on a few proxies.
Yes, MBH's approach does create hockey sticks from random noise, as do PAGES2K and Gergis.
The approach is essentially the same, as I have already explained many posts ago. It does not matter whether you drop series you do not like, or keep them but give them a very low weight. The issue in every case is that they choose which proxies of the exact same nature to keep, based on their correspondence to the instrumental temperature record or some similar indicator of goodness. That is goal-seeking, and the opposite of unbiased random selection in statistics.
It does not matter whether you do this with simple time series and a simple method, or with a grid network and principal components.
See the cherry pie post [https://climateaudit.org/2017/07/11/pages2017-new-cherry-pie/] and the wonderful picture at the bottom.
If (1) you start with an extended dataset half of which goes up in the 20th century and half of which goes down and (2) from that extended dataset, select only those series which go up, one trivially will get a hockey stick with simple composite methods (which do not assign negative coefficients i.e. flip the underlying series), [added] as illustrated by the following cartoon (h/t CTM):
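The cartoon itself isn't reproduced here, but the selection effect it illustrates can be simulated in a few lines. A minimal sketch, with an assumed network size, noise model, and screening window (none of these figures are from any actual study):

```python
import numpy as np

rng = np.random.default_rng(0)
n_series, n_years, n_screen = 1000, 300, 50  # assumed, for illustration
phi = 0.4                                     # assumed AR(1) persistence

# Pseudo-proxies: pure red noise, containing no climate signal at all
e = rng.standard_normal((n_series, n_years))
proxies = np.zeros_like(e)
for t in range(1, n_years):
    proxies[:, t] = phi * proxies[:, t - 1] + e[:, t]

# "Screening": keep only series that trend upward over the last 50
# years, mimicking selection against a rising instrumental record
slopes = np.polyfit(np.arange(n_screen), proxies[:, -n_screen:].T, 1)[0]
composite = proxies[slopes > 0].mean(axis=0)

print(f"kept {int((slopes > 0).sum())} of {n_series} series")
print(f"composite, first 250 yrs: {composite[:-n_screen].mean():+.3f}")
print(f"composite, last   50 yrs: {composite[-n_screen:].mean():+.3f}")
# Flat shaft, rising blade: a hockey stick produced entirely by the
# screening step, exactly the trivial result described above.
```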
Phil:
As for PAGES, well your first link is back to McIntyre writing about Gergis et al which was published later.
LOL. Indeed, Gergis 2012 was pulled after the errors were pointed out to them, and yes, those errors were first pointed out on Climate Audit. Then in 2016 the paper was finally published. But that was not the end of the story.
In 2012, the then much ballyhoo-ed Australian temperature reconstruction of Gergis et al 2012 mysteriously disappeared from Journal of Climate after being criticized at Climate Audit. Now, more than four years later, a successor article has finally been published. Gergis says that the only problem with the original article was a “typo” in a single word. Rather than “taking the easy way out” and simply correcting the “typo”, Gergis instead embarked on a program that ultimately involved nine rounds of revision, 21 individual reviews, two editors and took longer than the American involvement in World War II. However, rather than Gergis et al 2016 being an improvement on or confirmation of Gergis et al 2012, it is one of the most extraordinary examples of data torture (Wagenmakers, 2011, 2012) that any of us will ever witness.
Phil:
McIntyre insists that it was 'identical' to PAGES 2K while stating that it has only 20 out of 27 proxies in common. What!?
Quotes below from Data Torture in Gergis2K
First that post mentions: "The PAGES2K Australasian network is the product of the same authors."
and later
Its network is substantially identical to the Gergis 2012 network: 20 of 27 Gergis proxies carry forward to the P2K network. Several of the absent series are from Antarctica, covered separately in P2K. The new P2K network has 28 series, now including 8 series that had been previously screened out. The effort to maintain continuity even extended to keeping proxies in the same order in the listing, even inserting new series in the precise empty spaces left by vacating series.
'Substantially identical' is not the same as identical; it just means there is a large overlap, and there is. That should not surprise anyone, because it is 'the product of the same authors'.
And of course, as we will see later, loads of their proxies then turn out to be rejected by those same authors (...).
And all kinds of other 'wonderful' changes ... :
The next table provides an inventory of changes between 2013 and 2017. Of the 146 North American tree ring series used in PAGES2K, 84% (!?!) were discarded because they had a “negative” relation to temperature. Only 23 series were carried forward (nearly half of which were stripbark bristlecones or foxtails). Replacing the 123 discarded series were 125 new series, all of which were said to have a “positive” relation to temperature (though many were admitted in the SI to have “no low-frequency signal”).
Phil:
Later he discovers he's been using the wrong dataset
Really? Where? Must have missed that :)
and again,
Again? How so?
insists it doesn't matter.
And I haven't found where he says it doesn't matter, but really it doesn't, because
a) these people (2K et al.) have made all kinds of data errors; if you read the series you'll see many examples, and
b) no matter the data they will derive the same answer (due to data torture, massage and selection).
Same story all the time.
Phil:
But it is only a blog post, and McIntyre has such a history of exaggeration and error that I really can't be that bothered. Wake me when he gets it past peer review.
You keep on dreaming.
Rhoda and geronimo, Monckton explicitly states his feedback to be negative, at around 0.8.
He goes on about CO2 because that is what climateers go on about. Of course all active gases in the air are involved.
Read the post.
If 288 K has an effect then so would 255 K.
Hindsight is wonderful.
….Later he discovers he's been using the wrong dataset
One of the central accusations made against Gergis by McIntyre is the 'dubious' exclusion of the Law Dome series, strongly implying it was done because it does not show a hockey stick.
For Law Dome d18O over 1931-1990 for the central gridcell at lag zero i.e. without any Gergian data mining or data torture, using the HadCRUT3v version on archive, I obtained a detrended correlation of 0.529, with a t-statistic of 4.71 (for 37 degrees of freedom after allowing for autocorrelation using the prescribed technique). This was one of the highest t-statistics in the entire network, higher than 24 of 28 proxies selected into the screened network and higher than both long proxies included in the network. It also met any plausible criterion of statistical significance.
But it is based on the wrong dataset: in his calculations McIntyre used a different dataset, HadCRUT3 (no 'v'), which has substantive differences in that particular gridcell. Using the correct dataset reduces the t-stat by 20%. On Planet Audit, 'the key point of the post is completely unaffected by this issue' (just as two reconstructions with different proxies are 'identical').
As Steve Mosher wryly notes in the comments:
You have to watch some people.. they will savage others for making mistakes, while excusing themselves for the same thing.
Which is just one reason why putting your faith uncritically in any blog or author is unwise. But tell you what: you carry on treating McIntyre's blog science as credible while ignoring the reviewed literature, and I'll carry on doing the opposite.
Rhoda
Five settings. Think of them as strange attractors.
The upper stop, seen only once since the Precambrian, was 24C, during the massive GHG release of the PETM.
The Eemian was typical of what might be called the hothouse Earth, 19C.
Ice Age interglacials come in around modern temperatures, 14C.
Ice Age glacial periods come in around 9C.
The lower stop is the snowball Earth around 4C.
Let's turn, now, to MM’s claim that the “Hockey Stick” arises simply from the application of non-centered PCA to red noise. Given a large enough “fishing expedition” analysis, it is of course possible to find “Hockey-Stick like” PC series out of red noise. But this is a meaningless exercise. Given a large enough number of analyses, one can of course produce a series that is arbitrarily close to just about any chosen reference series via application of PCA to random red noise. The more meaningful statistical question, however, is this one: given the “null hypothesis” of red noise with the same statistical attributes (i.e., variance and lag-one autocorrelation coefficients) as the actual North American ITRDB series, and applying the MBH98 (non-centered) PCA convention, how likely is one to produce the “Hockey Stick” pattern from chance alone?
Precisely that question was addressed by Mann and coworkers in their response to the rejected MM comment through the use of so-called “Monte Carlo” simulations that generate an ensemble of realizations of the random process in question (see here) to determine the “null” eigenvalue spectrum that would be expected from simple red noise with the statistical attributes of the North American ITRDB data. The Monte Carlo experiments were performed for both the MBH98 (non-centered) and MM (centered) PCA conventions. This analysis showed that the “Hockey Stick” pattern is highly significant in comparison with the expectations from random (red) noise for both the MBH98 and MM conventions. In the MBH98 convention, the “Hockey Stick” pattern corresponds to PC#1, and the variance carried by that pattern (blue circle at x=1: y=0.38) is more than 5 times what would be expected from chance alone under the null hypothesis of red noise (blue curve at x=1: y=0.07), significant well above the 99% confidence level (the first 2 PCs are statistically significant at the 95% level in this case). For comparison, in the MM convention, the “Hockey Stick” pattern corresponds to PC#4, and the variance carried by that pattern (red ‘+’ at x=4: y=0.07) is about 2 times what would be expected from chance alone (red curve at x=4: y=0.035), and still clearly significant (the first 5 PCs are statistically significant at the 95% level in this case).
So the facts deal a death blow to yet another false claim by McIntyre and McKitrick.
From <http://www.realclimate.org/index.php/archives/2005/01/on-yet-another-false-claim-by-mcintyre-and-mckitrick/>
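For readers who want to see the shape of such a test, here is a minimal sketch of a Monte Carlo "null eigenvalue spectrum" of the kind described; this is not the actual MBH/RealClimate code, and the network size (70 series x 580 years) and lag-one coefficient are placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def red_noise(n_series, n_years, phi, rng):
    """AR(1) red noise matching an assumed lag-one autocorrelation."""
    x = np.zeros((n_series, n_years))
    e = rng.standard_normal((n_series, n_years))
    for t in range(1, n_years):
        x[:, t] = phi * x[:, t - 1] + e[:, t]
    return x

def eigen_spectrum(data):
    """Fraction of variance carried by each PC (conventional centring)."""
    centred = data - data.mean(axis=1, keepdims=True)
    vals = np.linalg.eigvalsh(centred @ centred.T / data.shape[1])
    return np.sort(vals)[::-1] / vals.sum()

# Null distribution: eigenvalue spectra of many pure red-noise networks
null = np.array([eigen_spectrum(red_noise(70, 580, 0.3, rng))
                 for _ in range(200)])
null99 = np.percentile(null, 99, axis=0)

print("99% null level, PC#1..5:", np.round(null99[:5], 3))
# A PC from the real proxy network is judged significant when its
# variance fraction exceeds the corresponding red-noise null level,
# which is the comparison behind the quoted 0.38-versus-0.07 figure.
```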
Finally, I’ll return to the central claim of Wegman et al – that McIntyre and McKitrick had shown that Michael Mann’s “short-centred” principal component analysis would mine “hockey sticks”, even from low-order, low-correlation “red noise” proxies. But both the source code and the hard-wired “hockey stick” figures clearly confirm what physicist David Ritson pointed out more than four years ago, namely that McIntyre and McKitrick’s “compelling” result was in fact based on a highly questionable procedure that generated null proxies with very high auto-correlation and persistence. All these facts are clear from even a cursory examination of McIntyre’s source code, demonstrating once and for all the incompetence and lack of due diligence exhibited by the Wegman report authors.
From <https://deepclimate.org/2010/11/16/replication-and-due-diligence-wegman-style/>
EM, I can't see those numbers representing absolute limits, rather the limits (possibly) of what external influences (orbital, insolation, whatever) produced at that time. We don't actually know we've ever hit a limit for sure. Are any of your quoted limits associated with extreme CO2? And the corollary: is extreme CO2 associated with any limit at all?
Steve Richards
What are the 255K feedbacks?
I've been going through the possibilities.
No GHGs (by definition).
No water vapour (too cold).
No low cloud (too cold).
No absorption at the sea surface (no sea surface).
Ice albedo 0.6 (therefore a strong negative feedback).
The only positive feedback I can think of would be cirrus cloud, which is far too weak to produce the effect Monckton's group claims.
You are talking about this post:
https://climateaudit.org/2016/08/03/gergis-and-law-dome/
And this is what it says now:
For Law Dome d18O over 1931-1990 for the central gridcell at lag zero i.e. without any Gergian data mining or data torture, using the HadCRUT3v version on archive, I obtained a detrended correlation of 0.529, with a t-statistic of 3.65 (4.71 struck through) for 37 degrees of freedom after allowing for autocorrelation using the prescribed technique [updated Sep 10, 2016]. This was one of the highest t-statistics in the entire network, higher than 19 (24 struck through) of 28 proxies selected into the screened network and higher than both long proxies included in the network. It also met any plausible criterion of statistical significance. So how (and why) did Gergis screen out Law Dome?
See, he corrected it. And indeed it does not matter: a 20% lower t-stat is in this case a t-stat of 3.65, which is still quite significant, and all the other conclusions stay the same.
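Taking the quoted figures at face value, the significance claim is easy to verify; a quick sketch using nothing from the Climate Audit post beyond the two t-statistics and the 37 degrees of freedom:

```python
from scipy import stats

dof = 37  # degrees of freedom quoted in the Climate Audit post

# Two-sided p-values for the original and the corrected t-statistic
for t in (4.71, 3.65):
    p = 2 * stats.t.sf(t, dof)
    print(f"t = {t}: two-sided p = {p:.1e}")
# Both are far below 0.05, so the correction does not change the
# conclusion that Law Dome passes any plausible significance screen.
```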
So he corrects himself in a blog. Contrast this with some of the buddy-reviewed garbage produced by (one of) the Hockey Team (and some of their buddies): no matter what error is pointed out, no matter how clear the error, the standard position is to deny, deny, deny; to block letters to the editor that point errors out; to take action against a journal that publishes such letters; to attempt to prevent publication of counter-papers; to get a journal editor fired; etc. etc.
Rhoda
The PETM is associated with a rapid release of GHGs to a peak around 1500 ppm.
http://www.pnas.org/content/113/28/7739
The Eocene (sorry, the Eemian was an interglacial) had CO2 above 500 ppm.
https://www.nature.com/articles/nature17423
A normal interglacial such as the Holocene has CO2 around 280ppm.
http://cdiac.ess-dive.lbl.gov/trends/co2/lawdome.html
A glacial period has about 190ppm CO2.
https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2010GL044499
The last snowball Earth had about 30 ppm.
http://www.snowballearth.org/week8.html
Regarding limits, it would be hard to exceed the PETM maximum until the cloud negative feedback is removed along with all the water in a couple of billion years.
The lower limit, the snowball Earth, may no longer be possible. The Sun has warmed 6% since the last one, so it may not be possible to cool the planet enough, even with no GHGs.
Rhoda
Somebody did the sums on a full runaway greenhouse effect leading to Venus conditions.
https://www.scientificamerican.com/article/fact-or-fiction-runaway-greenhouse/
It is possible but not at the moment. You would need 30,000ppm (3%) CO2, way beyond anything we might release.
Perhaps when the Sun becomes hot enough to bake CO2 out of rocks in a billion years.
And indeed it does not matter. 20% lower t-stat is in this case a t-stat of 3.65 and that is still quite significant and all other conclusions stay the same.
Maybe, maybe not. It's about more than the t-stat. The actual version used by Gergis has missing data; in fact no single year has a complete summer-season dataset for the relevant gridcell, so it would probably fail screening for that reason alone. McIntyre mashes up his own infilling technique, which results in him retaining years with a single data point. Just like the reduced t-stat, this 'makes no difference'. Some Audit.
As the papers I cited show beyond a doubt, every one of the McKitrick/McIntyre criticisms of MBH falls into the category he claims for himself: an error that 'makes no difference'. An error, a debatable methodological choice, novel statistics, correcting the MBH 'errors' or applying the McIntyre-approved methodology made damn-all difference to the conclusion; with a pleasing symmetry, exactly what is claimed here.
Those short of time can just read the IPCC report
So what is the actual issue at the heart of this? A single line in the IPCC AR4 report (p466) which correctly stated that “Wahl and Ammann (2006) also show that the impact [of the McIntyre and McKitrick critique] on the amplitude of the final reconstruction [by MBH98] was small (~0.05C)”. This was (and remains) true. During the drafting Keith Briffa corresponded with Eugene Wahl and others to ensure that the final text was accurate (which it was). Claims from McIntyre that this was not allowed under IPCC rules are just bogus – IPCC authors can consult with anyone they like at any time. However, this single line, whose inclusion made no effective difference to the IPCC presentation, nonetheless has driven continuing harassment of everyone involved for no good purpose whatsoever. Wahl and Ammann did show that MM05 made no substantial difference to the MBH reconstruction, whether it got said in the IPCC report or not.
That this inconvenient fact has driven hundreds of blog posts, dozens of fevered accusations, a basket load of FOI requests, and stoked multiple fires of manufactured outrage is far more a testimony to personal obsession, rather than to its intrinsic importance. The science of paleo-reconstructions has moved well beyond this issue, as has the interest of the general public in such minutiae. We can however expect the usual suspects to continue banging this drum, long after everyone else has gone home.
Written in 2011. How prescient.
From <http://www.realclimate.org/index.php/archives/2011/03/wahl-to-wahl-coverage/>
If snowball Earth has 30 ppm CO2 and 4C, and we are now at 400 ppm and 15C, that's about 3.7 doublings for 11 degrees, so CS = 3ish. All well and good, right in the IPCC range. But we have 6% more sun and half the albedo, so CS with these numbers must be far less. If I had a piece of paper I'd have a guess. First hack says it's negative.
OK, I have Earth with 6% less insolation and albedo 0.6 at 246 K. Now what is the snowball temp again? I can't believe 4C.
And I suspect the Snowball Earth link's 30 ppm CO2 is an informed guess; I don't think there is anything but an inference from other geological clues.
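Rhoda's back-of-envelope arithmetic can be written down directly. A sketch assuming the round numbers quoted above (30 ppm / 4C snowball, 400 ppm / 15C today, 6% fainter sun, albedo 0.6); the effective-temperature formula here is the standard zero-greenhouse emission temperature, which may not match whatever assumptions produced the 246 K figure:

```python
import numpy as np

SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S_NOW = 1361.0     # present-day solar constant, W m^-2

# Implied sensitivity: 30 ppm / 4C (snowball) to 400 ppm / 15C (today)
doublings = np.log2(400 / 30)
print(f"{doublings:.2f} doublings for 11 C -> CS = {11 / doublings:.1f} C")

def t_eff(solar, albedo):
    """Zero-greenhouse emission temperature of a planet, in kelvin."""
    return (solar * (1 - albedo) / (4 * SIGMA)) ** 0.25

# 6% fainter sun and snowball albedo 0.6, per the comment above
print(f"T_eff = {t_eff(0.94 * S_NOW, 0.6):.0f} K")
```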
Mann’s fraudulent misrepresentation of his credentials and academic standing later earned him a rebuke from Geir Lundestad, director of the Nobel Institute in Oslo. One can well understand why the exposure of Mann’s fraudulent claim should cause him embarrassment but it should surely not justify resetting the procedural clock back to the beginning on this case, which is what in effect happened.
Somebody did the sums on a full runaway greenhouse effect leading to Venus conditions.
Except… er… they didn’t do any sums at all, just give us uncorroborated stuff like this: “As the world warms not from a brightening sun but from fossil fuel–burning humans…” Someone who has done the sums is studiously ignored, as his work shows the whole “greenhouse effect” to be utter hokum.
Answer this question, Entropic man: if CO2 is such a greenhouse gas as is claimed, why is the atmosphere on Venus (11+ “doublings” of Earth CO2 concentration) at altitudes where it is Earth-pressure, exactly the temperature that the Earth would be, were it the same distance from the Sun, and not somewhere between 86 and 186°C?
This offers a very interesting idea about natural variation. Even if only partly true, we should not be holding back on burning the oil!
Perhaps you can find a similar blog post on a Brian Soden paper, so you can dismiss his entire career?
Mar 21, 2018 at 1:33 PM | Phil Clarke
Do you deny everything produced by Steve McIntyre, whether you have read it or not?
Using the Peer Reviewed Climate Smearology of Harvey et al 2017, co-authored by M Mann, are you aware that Soden has co-authored with Phil Jones in another Emergency Hockey Stick Repair Paper?
Your link to a PNAS Paper by Soden included:
"Climate models predict that as the climate warms from the burning of fossil fuels, the concentrations of water vapor will also increase in response to that warming. This moistening of the atmosphere, in turn, absorbs more heat and further raises the Earth's temperature."
Read more at: https://phys.org/news/2014-07-vapor-global-amplifier.html#jCp
Does Climate Science make predictions or not?
Meanwhile, Soden is no stranger at the IPCC, and the Hockey Team:
Coordinating Lead Authors:
Kevin E. Trenberth (USA), Philip D. Jones (UK)
Lead Authors:
Peter Ambenje (Kenya), Roxana Bojariu (Romania), David Easterling (USA), Albert Klein Tank (Netherlands), David Parker (UK), Fatemeh Rahimzadeh (Iran), James A. Renwick (New Zealand), Matilde Rusticucci (Argentina), Brian Soden (USA), Panmao Zhai (China)
Phil Clarke, you proposed Brian Soden. Is he worth taking seriously under the rules of Climate Smearology as defined by Mann in Harvey et al 2017?
Rhoda
All of the paleoclimate temperatures are inferred from geological clues. Nobody was there to wield thermometers.
I tried my own calculation for the snowball Earth temperature using the CO2 forcing equation and an interglacial baseline temperature of 14C.
∆T = 5.35 ln(C/C0) × CS/3.7, where CS is the climate sensitivity in C per doubling and 3.7 W/m² is the forcing per doubling of CO2.
∆T = 5.35 × ln(30/280) × 3/3.7 = −9.7C
14 − 9.7 = 4.3C.
That agrees with the estimate I linked at a CS of 3.
I plotted CO2 on the X axis against temperature on the Y axis and got a smooth logarithmic curve.
Finally I tried the forcing calculation on the other temperatures, and the calculations all underestimated the amount of warming. To get a match required CS values between 4.1 and 8.9.
Conclusion? There's extra complexity here, and CS is not necessarily a constant.
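The same arithmetic, run over all five "settings" listed earlier, reproduces both the 4.3C snowball figure and the 4.1 to 8.9 range quoted above. A sketch under the stated assumptions (5.35 ln(C/C0) forcing, 3.7 W/m² per doubling, 14C / 280 ppm baseline):

```python
import numpy as np

F2X = 3.7                       # W m^-2 per CO2 doubling
BASE_T, BASE_C = 14.0, 280.0    # interglacial baseline, C and ppm

states = [                      # (name, temperature C, CO2 ppm), from above
    ("PETM",         24.0, 1500.0),
    ("hothouse",     19.0,  500.0),
    ("interglacial", 14.0,  280.0),
    ("glacial",       9.0,  190.0),
    ("snowball",      4.0,   30.0),
]

for name, t, c in states:
    forcing = 5.35 * np.log(c / BASE_C)        # W m^-2 relative to baseline
    predicted = BASE_T + forcing * 3.0 / F2X   # temperature with CS = 3
    # CS needed to reproduce the inferred temperature exactly
    implied = (t - BASE_T) / forcing * F2X if c != BASE_C else float("nan")
    print(f"{name:>12}: CS=3 gives {predicted:5.1f} C, implied CS = {implied:4.1f}")
```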
Written in 2011. How prescient.
realclimate
Mar 21, 2018 at 8:01 PM | Phil Clarke
deepclimate
Mar 21, 2018 at 6:41 PM | Phil Clarke
If Mann believes in the reliability and integrity of those sources, why has he delayed the Legal action that he instigated?
After being embarrassed by Mann's claims about winning a Nobel Prize, perhaps his Lawyers are not confident about Mann's judgement?
Radical Rodent
Your question says more about your confusion than anything else. Your mistake is to assume that Venus takes up more energy than Earth.
For Earth, a square metre at the subsolar point receives 1361 W/m². With an albedo of 0.3, 409 W is reflected back into space and 952 W is absorbed into the planetary energy budget.
Venus receives 2601 W/m². With an albedo of 0.77, 2002 W is reflected back into space and 598 W is absorbed.
Venus absorbs about 37% less energy than Earth.
The question should be, why is Venus warmer at the 1 bar level than Earth when Venus absorbs less energy than Earth?
This is not a rhetorical question. My own answer is, of course, a stronger CO2 greenhouse effect. I look forward to your thermodynamically valid answer.
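A quick sketch of the arithmetic behind those figures, using the subsolar solar constants and Bond albedos from the comment above (the Venus figure is restored to 2601 W/m², since 2002 W reflected plus 598 W absorbed totals 2600 W):

```python
S_EARTH, S_VENUS = 1361.0, 2601.0   # solar constant at each planet, W/m^2
A_EARTH, A_VENUS = 0.30, 0.77       # Bond albedos

def absorbed(solar, albedo):
    """Flux absorbed per square metre at the subsolar point."""
    return solar * (1.0 - albedo)

e = absorbed(S_EARTH, A_EARTH)      # ~953 W/m^2
v = absorbed(S_VENUS, A_VENUS)      # ~598 W/m^2
print(f"Earth absorbs {e:.0f} W/m^2, Venus {v:.0f} W/m^2")
print(f"Venus absorbs {(1 - v / e):.0%} less than Earth")
```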
ACLU and National Media Intervene in Mann v Steyn et al
Mar 21, 2018 at 9:56 PM | clipe
From your link to Steve McIntyre's honest and reliable Climate Audit:
Nick Stokes Posted Aug 13, 2014 at 9:44 PM | Permalink
“Where do you side on this issue, Nick? With the ACLU or with Mann?”
"The ACLU brief primarily addresses immediate appeal. I have no opinion there, but expect that they will prevail.
They also defend the right to criticise Mann’s science. So do I. But if the court rules that folks should do so without accusing Mann of being a fraud and the Sandusky of climate science, then I can live with that."
McIntyre responds to "Nick Stokes"
Steve: "Nick, for a Canadian, I am surprised at the apparent scope of permissible rhetoric under American libel law and at the degree to which it appears to permit defamation of public figures. As the subject of such defamation – notably by Mann himself – I don’t like it very much. But it is hard to contemplate a suit that fits more squarely into anti-SLAPP criteria since Mann has publicly said that he wanted to silence his critics and it does not appear that he suffered any actual damages from the blog posts. As I’ve written before, I think that Mann was unwise to include in his pleadings claims which Steyn calls “fraudulent” about winning a Nobel prize and about having been exonerated by the University of East Anglia, NOAA and the UK Government, particularly in a case where he claims to have been damaged by use of the word “fraudulent”. But, in the past, Mann has made some bizarre lies, so it’s not easy understanding his rationale for anything."
Since 2014, Mann has co-authored Harvey et al 2017, Climate Smearology, peer reviewed by those that approve of Climate Smearology.
Steyn's Lawyers can present Harvey et al 2017 as evidence of Mann's scientific integrity and honesty. His double standards and hypocrisy now form part of Peer Reviewed Science.
This is not a rhetorical question. My own answer is, of course, a stronger CO2 greenhouse effect. I look forward to your thermodynamically valid answer.
Mar 22, 2018 at 12:44 AM | Entropic man
Can you explain the lack of warming, if your maths is correct?
Are you sure you ought to be relying on Dessler 2009?
Who said I was? There are other studies which confirm that one, e.g. Soden et al.
And the Dessler paper was published nearly a decade ago, and nobody in that time (so far as I know) has published a criticism. What you've dug up is a post, on the blog of an engineer, discussing the sensitivity of a later Dessler paper, on another area of research, to the choice of dataset.
One would need rather more than that …
Perhaps you can find a similar blog post on a Brian Soden paper, so you can dismiss his entire career?