Discussion > The Sceptical General Circulation Model

Having seen how badly academic models are doing, and since the algorithms and data are all kept secret, I was thinking of starting an open source project to write my own general circulation model (or suite of them).

I have a lot of experience of computer models, having spent about seven years working in a network modelling research lab for a large telco. I have a good working knowledge of numerical computing and adaptive networks (my Master's dissertation was on performant Hopfield networks) and a general interest in all things computery (I run a small software development company).

I know nothing about GCMs, I hasten to add. I am deliberately not going to research current GCMs before I start, so that I come to it with a fresh outlook and perhaps a fresh approach.

The idea would be to make everything open. Designs, approaches, data, tests, code. All failures out in the open. Who knows, perhaps we'd even manage to do it better than the 'pros' :)

Anyone interested in helping?

Jan 11, 2014 at 9:32 PM | Unregistered Commenter TheBigYinJames

What sort of computing resources do you have at your disposal? A supercomputer in the basement?

Jan 11, 2014 at 11:46 PM | Unregistered Commenter Chandra

Hi Chandra, I forgot to add that I also specialize in simultaneous distributed algorithms (parallel computing), so I'm not too bothered about resources, although I get your implication that you share the Met Office's view that you need bigger to get better.

It's a view I don't share. If you hold with Moore's Law, today's climate models run on hardware millions of times faster than the machines the original models ran on in the 1980s, yet it's obvious that raw speed alone has done nothing to improve the models' fitness for purpose - they almost all fail miserably, and they have been getting worse, not better. The problem with the models is obviously (to me) not the power of the hardware, although in my industry it's common to see people blame the hardware for failures in the software, so it's no surprise.
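
As a back-of-the-envelope check on that "millions of times" figure (illustrative only, and it assumes an eighteen-month doubling time - not a measurement of any actual machine):

# Rough Moore's Law arithmetic, purely illustrative: how much faster is
# commodity hardware after ~30 years if capability doubles every 18 months?
years = 2014 - 1984
doubling_period = 1.5                  # assumed doubling time, in years
doublings = years / doubling_period    # 20 doublings
speedup = 2 ** doublings
print(f"{doublings:.0f} doublings -> roughly {speedup:,.0f}x")   # about a million-fold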

I wasn't attempting to beat them on numerical resolution anyway; the idea that I might come up with something more powerful than an academic model was slightly tongue in cheek. I was hoping to come up with something cleverer rather than beefier. The idea was to create an open source model - or suite of models - rather than the legion of secret models we are lumbered with at the moment.

Thanks for your interest Chandra, it's nice to receive encouragement in the scientific endeavour, instead of the negative, defensive knee-jerk finger pointing that we have come to expect from the believers.

Jan 12, 2014 at 9:31 AM | Unregistered Commenter TheBigYinJames

What sort of computing resources do you have at your disposal? A supercomputer in the basement?
Jan 11, 2014 at 11:46 PM Chandra

It's an interesting question (I don't know the answer) how far back you have to go in time for the MFLOPS rating of a supercomputer of the day to match that of a current top-of-the-range PC.

Just guessing, I'd say that a lot of us, by the standards of the 1990s (or at least the 1980s), have a supercomputer in the basement or the bedroom.
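
Some very rough peak-performance figures (from memory, so treat them as order-of-magnitude guesses rather than benchmarks) suggest that's about right:

# Very rough peak-performance figures, order of magnitude only; the
# supercomputer numbers are from memory and worth double-checking.
systems_gflops = {
    "Cray X-MP (early 1980s)": 0.8,
    "Cray-2 (mid 1980s)": 2.0,
    "2014 quad-core desktop CPU": 100.0,
    "2014 mid-range gaming GPU": 2000.0,
}
desktop = systems_gflops["2014 quad-core desktop CPU"]
for name, gflops in systems_gflops.items():
    print(f"{name:28s} ~{gflops:7.1f} GFLOPS")
print(f"Desktop vs Cray-2: roughly {desktop / systems_gflops['Cray-2 (mid 1980s)']:.0f}x")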

Jan 12, 2014 at 11:07 AM | Unregistered Commenter splitpin

For Chandra:

http://en.wikipedia.org/wiki/OpenCL

Jan 12, 2014 at 11:31 AM | Unregistered Commenter not banned yet

TBYJ - I agree with your thinking but I do not have the time to usefully up my skills and contribute. I think I have seen other people mention similar ideas but I don't recall them progressing the issues. Do you read at the Science of Doom? I've not visited recently but, if you do go ahead, I think that would be a useful venue to seek critical input.

http://scienceofdoom.com

Jan 12, 2014 at 11:47 AM | Unregistered Commenter not banned yet

Good idea TBYJ.

This chap might guide the design past some errors in existing climate models.

http://rocketscientistsjournal.com/2009/03/_internal_modeling_mistakes_by.html

Jan 12, 2014 at 11:57 AM | Unregistered Commenter Alan Reed

TBYJ - ps: I also think that MartinA's assertion that a "GCM" is essentially an impossible proposition would need to be addressed (not quite how he worded it!). I suspect somewhere there is a theoretical framework which provides for problem analysis and enables one to decide if a problem is tractable or not. I think Paul Matthews is likely to be a good person to involve.

Jan 12, 2014 at 12:00 PM | Unregistered Commenter not banned yet

[ "...having spent about seven years working in a network modelling research lab for a large telco"
That's interesting - I worked for a while in the Performance Analysis Department of a well-known research lab operated by a large US telecoms company. ]

My suggestion would be to look for an approach different from GCMs - it's my conviction that they are doomed from the start for a bunch of fundamental reasons. In another thread recently, I used a rude word to categorise them, based on that conviction about the futility of their creators' efforts.

As soon as you start trying to solve dynamic systems (you know all this, I'm sure) that have a wide range of time constants (or eigenvalues, or whatever you want to call them), you run into numerical problems of ill-conditioning and the like. That is quite apart from the scale of the computation needed being beyond any conceivable computing system if you were to use a discretisation grid with a resolution appropriate to the detail in the system.
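
A toy illustration of the stiffness problem - nothing to do with real GCM code, just two variables with wildly separated time constants (it assumes SciPy is available):

# Two decoupled modes with time constants of ~0.001 and ~100 "time units".
# An explicit solver must keep its step tiny for stability long after the
# fast mode has died away; an implicit (stiff-friendly) solver does not.
from scipy.integrate import solve_ivp

def rhs(t, y):
    return [-1.0e3 * y[0], -1.0e-2 * y[1]]

explicit = solve_ivp(rhs, (0.0, 100.0), [1.0, 1.0], method="RK45")   # explicit
implicit = solve_ivp(rhs, (0.0, 100.0), [1.0, 1.0], method="Radau")  # implicit

print("explicit RK45 steps:", explicit.t.size)   # step count dictated by the fast mode
print("implicit Radau steps:", implicit.t.size)  # orders of magnitude fewer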

What's needed is some way of eliminating the fine detail in space and time but retaining the relevant behaviour, before any attempt at a numerical solution.

I was always impressed by methods for analysing dynamic systems that did not require complete solution of the differential equations but gave the answers needed through the use of conservation principles - conservation of energy, principle of least action. My impression was that such methods might be specially useful for large and complicated systems.

Some fundamental advance in system analysis is needed. One thing that is certain is that such a new approach is not going to come from any of the current climate modelling community.

Jan 12, 2014 at 12:00 PM | Registered Commenter Martin A

MartinA - we cross posted! Your comment reminds me of the work of Demetris Koutsoyiannis:

http://itia.ntua.gr/dk/

I recall he is a supporter of open dialogue and I think he would be another useful critic/contributor.

Jan 12, 2014 at 12:06 PM | Unregistered Commenter not banned yet

MartinA - "Some fundamental advance in system analysis is needed. One thing that is certain is that such a new approach is not going to come from any of the current climate modelling community."

This is worth a look if you haven't already seen it:

http://judithcurry.com/2013/06/16/what-are-climate-models-missing/

I read the paper at the time and IIRC it had some damning figures in it showing how wrong the models get things "behind the curtains".

Jan 12, 2014 at 12:13 PM | Unregistered Commenter not banned yet

Big one, I wasn't really being negative, just teasing. When you talk of distributed algorithms, you put me in mind of those protein-folding and alien-seeking screensaver projects. Is that what you have in mind? I guess some readers will be willing to lend you some megaflops on their idling machines. Or you could create and use a botnet...

When you talk of current models failing miserably, what do you have in mind? Do climate scientists agree with that analysis (real question, curious to know)?

Jan 12, 2014 at 7:21 PM | Unregistered Commenter Chandra

TheBigYinJames

I would try computer modelling based on historical empirical data; it gives much better results!

Jan 12, 2014 at 8:01 PM | Unregistered Commenter Ross Lea

I knew you were teasing Chandra, which is why I teased back :)

Yes, I did have some sort of subscription-based distribution in mind, like the SETI project of the 90s, or indeed any of the ones which followed - perhaps not a screensaver, since many companies don't allow them these days. Since I haven't designed it yet, I don't know whether massive processing is going to be required; it depends on the balance of smart/dumb cells that I use, I suppose - my instinct is not to simply repeat the many-dumb-cell methods used elsewhere.
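
Very roughly, the sort of work-partitioning I have in mind - a toy sketch only, with an invented cell-update rule and a local process pool standing in for volunteers' machines:

# Toy sketch: split grid cells across worker processes for one update step.
# The "relaxation" update rule is invented purely for illustration; in a
# SETI-style scheme the pool would be remote volunteers, not local processes.
from multiprocessing import Pool

def update_cell(cell):
    """Advance one cell by a single made-up relaxation step."""
    cell_id, value, neighbour_mean = cell
    return cell_id, value + 0.1 * (neighbour_mean - value), neighbour_mean

def step(cells, workers=4):
    """Distribute one time step over a pool of workers."""
    with Pool(workers) as pool:
        return pool.map(update_cell, cells)

if __name__ == "__main__":
    cells = [(i, float(i % 7), 3.0) for i in range(1000)]   # 1,000 fake cells
    cells = step(cells)
    print(cells[:3])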

As for climate scientists not agreeing with my synopsis of their work, well, they would hardly verbalise it to the lay public, would they? When Mario makes me a turd-pizza, I don't need the Vatican to agree with me to know it tastes bad, capiche? I'm sure their notes are caveated to distraction, but the shortcomings and known errors are never propagated to the end-product documentation or indeed to the mainstream media. I'm not laying the blame solely at academia's door, but more on the machine which presents the models as the last word in prediction, then fails to report when they depart from reality.

Jan 13, 2014 at 9:01 AM | Unregistered Commenter TheBigYinJames

Thanks for all the other help, folk, I'll compile a list.

Jan 13, 2014 at 9:02 AM | Unregistered Commenter TheBigYinJames

When you talk of current models failing miserably, what do you have in mind? Do climate scientists agree with that analysis (real question, curious to know)?
Jan 12, 2014 at 7:21 PM Chandra

Well, if I may take it upon myself to answer a question addressed to BigY....

- Climate science failed to predict 'the pause' in global temperature (which I prefer to term 'the halt' until it starts to change again in one direction or the other)

- The Met Office (which I am inclined to take as the authoritative voice of 'climate science') has said that climate models are the only way to predict future climate. If so, it was the models that failed to predict the halt.


Climate science is clearly perplexed and nonplussed by the halt. Chandra recently pointed to a paper apparently explaining it by taking into account effects such as solar variation and volcanoes, but it's clear that that paper has not been acclaimed by climate science in general as the definitive explanation for the halt - otherwise ideas such as 'the missing heat hiding in the deep ocean' would not be offered as explanations. Whether it's termed a failure of climate science or a failure of its models, it amounts to much the same thing.

I don't think all climate scientists (or even 97%) will agree that their models have failed - but I think they are denying the reality. I've seen comments claiming that the halt/pause was in fact predicted, that the Earth is still warming (and so the models are in fact correct) and other things that seem to make no sense at all.

Jan 13, 2014 at 10:20 AM | Registered Commenter Martin A

Even if the pause is only a delay in warming, and the heat has gone elsewhere, and global warming starts apace again in a few years... and all the dire predictions of AGW come to pass.

Even if ALL THAT IS TRUE... then it is still fair to say the models failed to predict the behaviour of temperature over decadal periods, because erm... they didn't predict it.

Climate science itself is entering a period of its own 'denial', which will end either in the deniers recanting (eventually they will have to admit it's not just a pause) or in warming resuming.

None of that changes the fact that the models (which were, what, 85%, 95% accurate?) were completely wrong.

Jan 13, 2014 at 10:29 AM | Unregistered Commenter TheBigYinJames

Martin
> Climate science failed to predict 'the pause' in global temperature.

As we've discussed, to the extent that any 'pause' is the result of departures of forcings from their historical averages, predicting the unpredictable is clearly not possible. To the extent that the 'pause' is synthetic, starting, as sceptics claim, around an El Nino peak, I'd say it is a red herring.

> The Met Office ... has said that climate models are the only way to predict future climate.

That seems uncontroversial, unless you know of another way.

> If so, it was the models that failed to predict the halt.

See above.

Also, regarding the paper I mentioned, it is not clear to me why the idea that variations in ENSO produce your apparent 'halt' should be incompatible with heat going into the deep oceans instead of into the atmosphere. ENSO is, after all, an ocean-atmosphere phenomenon.

As I said on the other thread, as far as I'm aware the projections are still within error bounds and so by your definition the models have not yet failed.

--

Big, how do you propose to predict the unpredictable? In other words, current climate models are unable to predict ENSO, solar variability or volcanic activity and so have to assume these forcings follow their historical averages. This makes them 'fail' in the short term in your eyes, even if they might be correct in the long term. Assuming you want your model to 'fail' neither in the short nor the long term, how do you square this circle?
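
To make that concrete, here is a toy ensemble - a fixed trend plus unpredictable year-to-year noise, all numbers invented - in which short-window trends scatter widely while the long-window trend is recoverable:

# Toy ensemble: a fixed 0.02 C/yr trend plus unpredictable year-to-year noise.
# Ten-year trends scatter widely; fifty-year trends cluster near the true rate.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(50)
members = 0.02 * years + rng.normal(0.0, 0.15, size=(20, years.size))

short = [np.polyfit(years[:10], m[:10], 1)[0] for m in members]   # 10-yr trends
full = [np.polyfit(years, m, 1)[0] for m in members]              # 50-yr trends
print(f"10-year trends: {min(short):+.3f} to {max(short):+.3f} C/yr")
print(f"50-year trends: {min(full):+.3f} to {max(full):+.3f} C/yr")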

Jan 13, 2014 at 5:03 PM | Unregistered Commenter Chandra

Chandra - yes, I think we are agreed that, if the observations are still within the error bounds of what was predicted, then the models have not failed.

However, a Met Office spokesperson, who had previously led the Met Office's climate model development and so presumably was fully aware of the range of possible error and the limitations of their models, said in 2007:

"2014, we are predicting, will be 0.3 degrees warmer than 2004". The Met Office spokesperson then went on to make this sound very alarming (and quite specific) by adding: "just to put that into context, the warming over the past century and a half has only been 0.7 degrees".

Statements like that imply, to any ordinary person who understands the notion of error bounds, a precision of around ±0.1 degrees. Something failed somewhere.

If it was not the models that failed, then it was something else. There were no caveats along the lines of "of course, short-term climate prediction is essentially impossible because climate models are unable to predict ENSO, solar variability or volcanic activity, so nobody should take seriously what I have just said, as the error bounds are quite a lot wider than the increase we are predicting".

Jan 13, 2014 at 5:47 PM | Registered Commenter Martin A

I'm getting mixed messages from you and others. You confirm that a model hasn't failed if it is within its error bounds and you understand the limitations caused by the unpredictability of some forcings, yet models are routinely castigated by you and other sceptics (except for economic models of gas prices of course, the Bishop likes those). Big James wants to write his own model to fix the flaws you all see but he hasn't yet identified how to overcome the unpredictable.

And you seem upset that the MO spokesman made some inaccurate or unqualified statements 5 years ago. Were the technical publications he/she referred to not more equivocal? It is as if the presentation is what really bothered you.

Have the models "failed" or haven't they? If they haven't, why do I keep reading here that they have?

Jan 14, 2014 at 12:13 AM | Unregistered Commenter Chandra

I predict this thread will drown in the trolling of Chandra.

I thought as well that not looking into the present-day models would be a good start. They are making mistakes, so looking there will not help and may even harm independent thinking.

My own thoughts: climate models had their start in two-dimensional physical models that would emulate circulation patterns. Since these emulated phenomena observed in the real world, it was thought that better models could approximate more and more features, and that at some point the models would start resembling the real world.

This thought-syllogism is the basis for present-day GCMs. The syllogism is wrong.

The recapitulation of small facets of physical reality cannot be used as the basis to generate predictive behaviour. The emergent properties and their inter-relationship cannot be predicted, even if one understands sub-components well.

If I were able, I would build a model (or models) that reproduces patterns seen in the temperature record. For instance, can I build a temperature model that shows spontaneous variations at the 0.5-1 C level on the multidecadal scale? Or a model that recapitulates the climate system's property of temperature variability at all timescales? It doesn't even have to be a globe, or have an atmosphere, or anything. Just something that swings the way the earth system does.
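
Even something as crude as a red-noise (AR(1)) series shows the kind of spontaneous swings I mean - a toy sketch with invented parameters, not a claim about mechanism:

# Toy red-noise (AR(1)) "temperature" series: no forcing at all, yet the
# 30-year means drift by tenths of a degree. Parameter values are invented.
import numpy as np

rng = np.random.default_rng(0)
n_years, phi, sigma = 150, 0.9, 0.1    # persistence and shock size (assumed)
temps = np.zeros(n_years)
for year in range(1, n_years):
    temps[year] = phi * temps[year - 1] + sigma * rng.normal()

for start in range(0, n_years, 30):
    window = temps[start:start + 30]
    print(f"years {start:3d}-{start + 29:3d}: mean {window.mean():+.2f} C")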

(note that climate science offers 'reasons' for these swings, but almost always retrospectively, and almost always cannot explain why a certain 'forcing' should produce exactly such-and-such a change and no other)

Jan 14, 2014 at 1:49 AM | Registered Commenter shub

Chandra - you seem more interested in scoring debating points than in understanding, so I'll keep it relatively brief and I'll refrain from writing a textbook on the validation of simulation models.


And you seem upset that the MO spokesman made some inaccurate or unqualified statements 5 years ago. Were the technical publications he/she referred to not more equivocal? It is as if the presentation is what really bothered you.

"5 years ago" does not make the Met Office misinformation less serious. It's specially relevant now, as the predictions were for a period up to now, 2014. Plus, this was at the time that the Climate Change Act was being conceived. The presenter was the person in charge of the Met Office's advice to the government.

The accompanying Met Office press release made no reference to error ranges.

Over the 10-year period as a whole, climate continues to warm and 2014 is likely to be 0.3 °C warmer than 2004. At least half of the years after 2009 are predicted to exceed the warmest year currently on record.

These predictions are very relevant to businesses and policy-makers.

Have the models "failed" or haven't they? If they haven't, why do I keep reading here that they have?


The discrepancy between what was predicted and what was subsequently observed means that, in any reasonable sense, the models have failed. The fact that some things were unpredictable from the start does not lessen their failure.

Anyone asked to certify simulation software whose outcomes matter will start by applying a range of 'sanity checks'. Failure of a single sanity check means there is no point in further work to certify the software - its predictions must be considered unusable. GCMs fail numerous sanity checks, from the way that distributed systems are represented numerically to the software development methodology (or lack of one). By those measures, they are failures too. [If not convinced, think of vaccine production that fails basic hygiene checks as an analogy.]
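
As a concrete, if trivial, example of the sort of sanity check I mean (hypothetical output and an invented tolerance - nothing specific to any real model):

# Trivial conservation sanity check on hypothetical model output: with no
# change in external forcing, total energy should not drift beyond a small
# tolerance. The function name, data and tolerance are all invented.
import numpy as np

def energy_conservation_check(total_energy, rel_tolerance=1e-6):
    """True if total energy stays within rel_tolerance of its starting value."""
    total_energy = np.asarray(total_energy, dtype=float)
    drift = np.abs(total_energy - total_energy[0]) / np.abs(total_energy[0])
    return bool(drift.max() <= rel_tolerance)

steps = np.arange(1000)
energy = 1.0e9 * (1.0 + 1.0e-8 * steps)     # a slow spurious drift
print(energy_conservation_check(energy))    # False -> no point certifying further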

If it is true that formal validation criteria were produced at the time the models were produced (and their development was frozen*), then, provided those criteria have not yet been violated, the models have not yet failed in the formal sense. It would be a legalism, but I'd agree that, formally, they have not yet failed.

* something that should happen if proper validation is to be done. I have never seen it referred to for GCMs.

Jan 14, 2014 at 10:42 AM | Registered Commenter Martin A

Martin, debating points? Not at all. There are none available to trolls.

What I find odd is that you can agree that the unpredictability of natural forcings makes short-term model projections unreliable, and yet still consider that the public use of such unreliable forecasts invalidates the model itself. This seems quite illogical.

Clearly a press release that omits the caveats is not the full truth, but isn't that so of all press releases, whatever the subject? To get the details (which are of no interest to the press) one normally has to dig deeper. That the spokesman was also an advisor to the Government doesn't mean that the press release constituted the entirety of MO advice.

> GCMs fail numerous sanity checks, from the way that distributed systems are represented numerically to the software development methodology...

Do you have any academic references (I'm interested)?

Shub

> The recapitulation of small facets of physical reality cannot be used as the basis to generate predictive behaviour. The emergent properties and their inter-relationship cannot be predicted, even if one understands sub-components well.

Is that a generally accepted truth?

> ...can I build a temperature model that shows spontaneous variations at the 0.5-1C level at the multidecadal scale?

Is that how modelling works? It seems back to front; you define what the model must project and the task is to modify the code/parameters until it obliges. That sounds like the sort of accusation I've seen levelled at existing climate models. And if you do achieve this for the recent period, do you then consider that you have explained the recent rise in temperatures? How do you then account for the additional forcing from the rise in CO2? Or are you saying that it ceases to exist? As I understand things, it has not been possible to model the recent rise in temperatures without taking anthropogenic forcings into account. Presumably you consider that untrue.

Jan 14, 2014 at 6:52 PM | Unregistered Commenter Chandra

Chandra

Is that how modelling works? It seems back to front; you define what the model must project and the task is to modify the code/parameters until it obliges.

That's exactly how climate models work. They partition the available historical data into 'training' and 'verification' periods. The training data is used to tune the input parameters until the model matches the trend in the training data.

Then the model is run against the verification data to make sure that it still performs outside the training range, with no further tweaks made to the parameters. After that, it moves into hindcasting and forecasting. Hindcasting can be used as further verification.

Unfortunately, with such a short historical record at decadal detail, training periods tend to be long and verification periods shorter than you would want.

It's no surprise to me that most models are starting to wander.
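
In toy form, the procedure I described above looks something like this (the 'model' is a one-parameter linear response to a made-up forcing series, purely to show the shape of the train/verify split, not how any real GCM is tuned):

# Toy train/verify split: tune one parameter on the training period, then
# check the fit on the held-back verification period with parameters frozen.
# All data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2014)
forcing = 0.01 * (years - 1900)                                  # made-up forcing
observed = 0.5 * forcing + 0.05 * rng.normal(size=years.size)    # synthetic "record"

train = years < 1980
verify = ~train

# "Tuning": least-squares fit of a single sensitivity parameter on training data.
sensitivity = np.sum(observed[train] * forcing[train]) / np.sum(forcing[train] ** 2)

# Verification: run the tuned model on the held-back period, no further tweaks.
residual = observed[verify] - sensitivity * forcing[verify]
print(f"tuned sensitivity: {sensitivity:.3f}")
print(f"verification RMS error: {np.sqrt(np.mean(residual ** 2)):.3f} C")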

Jan 14, 2014 at 7:11 PM | Unregistered Commenter TheBigYinJames

There is a fundamental problem: This

"2014, we are predicting, will be 0.3 degrees warmer than 2004". The Met Office spokesperson then went on to make this sound very alarming (and quite specific) by adding: "just to put that into context, the warming over the past century and a half has only been 0.7 degrees".

is meaningless.

These figures are not temperatures; they are so far removed from the data from which they are derived as to bear no meaningful relationship to it.

The very concept is nonsense. There is no such thing as the temperature for 2014 - not for the world, the UK or anywhere else. What they have got is a series of historic max/min readings of questionable accuracy, which tell you nothing about what happened between the max and the min; you apply corrections to try to remove local effects, then apply more dubious maths and statistics to smear the thing all over the area to be considered. What you get is not a temperature.

All the Met Office does is use their models to produce forecasts with a huge error band, match them to the "actual" within another large error band, and note that the two agree most of the time. This they call validation. The percentage match is directly related to the size of the error bands.
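
A toy illustration of that last point, with two completely unrelated random series standing in for forecast and "actual": the wider the band, the better the 'match'.

# Widen the error band and the "match" rate rises, even when forecast and
# "actual" are unrelated random series. All numbers here are invented.
import numpy as np

rng = np.random.default_rng(3)
forecast = rng.normal(0.0, 0.3, size=1000)
actual = rng.normal(0.0, 0.3, size=1000)

for band in (0.1, 0.3, 0.5, 1.0):
    match = np.mean(np.abs(forecast - actual) <= band)
    print(f"error band +/-{band:.1f} C: {100 * match:.0f}% 'match'")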

Jan 14, 2014 at 7:24 PM | Unregistered Commenter NW