Discussion > Do computer models provide 'evidence'?
I gave you the chance to take it up with the author, Raff... but you are just a troll and not even amusing.
Maybe the discussion should have been "Does raff (aka Chandra) provide evidence?".
According to WUWT
From Scripps: Global climate models fail to simulate key dust characteristics
In that case, to all the other inputs that models are already known to have issues with (volcanoes, clouds, ocean currents and ocean oscillations, to name a few) we can now add desert dust. How can the answer be anything but NO?
I worked in IT for 35 years. I have been a Computer Operator, Programmer, Analyst, Ops Manager, Project Manager and Consultant, so I know a little bit about how computers actually work, and I don't think that people who believe a computer model of the world climate can tell them anything at all know anything at all about how a computer works. I read about senior people in government and at the Met Office going on about how super-powerful their new machines are, as if that means anything much!

All computers function by obeying the instructions presented in the controlling program, processing data according to those instructions. A very powerful computer will process the instructions more quickly than a less powerful computer; that is the only difference. An unkind person might say that all you will get is the same wrong answer a little earlier.

How many variables are there in the world climate system? What are the interactions between all these variables? I think it is reasonable to say that nobody knows the answer to either of these questions, and therefore that no program can 'model' the world climate system. Who writes these model programs? Who writes the program specifications? What quality assurance is done on the tested results?
Do computer models provide evidence? Don't be silly.
Are the computer models reliable?
Computer models are an essential tool in understanding how the climate will respond to changes in greenhouse gas concentrations, and other external effects, such as solar output and volcanoes.

Computer models are the only reliable way to predict changes in climate. Their reliability is tested by seeing if they are able to reproduce the past climate, which gives scientists confidence that they can also predict the future.

But computer models cannot predict the future exactly. They depend, for example, on assumptions made about the levels of future greenhouse gas emissions.
Met Office publication
A computer model is an embodiment of a theory. As such it cannot be evidence; it is the thing which the evidence may support or contradict. In this case, climate models are the embodiment of certain theoretical relationships. The calculations are defined by the theory. Observed measurements are the evidence which confirm or contradict the theory expressed by the model's calculations. The theory itself, or the physical embodiment of it (a computer program), is evidence of nothing; it is the thing which is awaiting evidence against it.
The deification of modelling into some sort of Oracular sage has been one of the science's biggest errors.
ANH1, you are missing the wood for the trees. Yes, computer architectures haven't changed that much and the advances are largely down to speed, storage density, comms speeds and so on. But the big difference is that a computer such as that used by the Met Office can execute programs that a less powerful computer could not in a hundred years. That is not a trivial difference. It opens up completely new possibilities.
TheBigYinJames, if what you said were true, the nuclear test ban treaty would never have been implemented. If a model of a nuclear explosion could not give evidence that weapons are safe (or whatever), then we'd still be doing big bangs. Evidence, and good evidence, mind you, but not proof. Moreover, if what you said were true, that the model is just an embodiment of the theory, then why would we need to execute the nuclear model at all? It would be giving us nothing the theory doesn't already tell us.
diogenes, why would I take it up with the author? The truth is staring you in the face. He's an amateur grapher, taking two data sets he probably doesn't understand and sticking them together as if they belong. Yet he understands enough to adjust the label, as he knows what he has plotted are probably not compatible. And he is also sure not to plot the whole of MBH98, perhaps because it would give the game away.
"He's an amateur grapher," says Raff.
That's really convinced me as that comment has obviously come from such a serious intellect.
Sarc off
Your comments have no content at all, they are vacuous in the extreme and best ignored.
But computer models cannot predict the future exactly. They depend, for example, on assumptions made about the levels of future greenhouse gas emissions.
Are they the only assumptions in the models? I doubt it very much. What about clouds, precipitation, wind, volcanic eruptions, sand storms, tropical storms, etc.? Talk about glossing over the difficulties!
Even I don't think it was worth saying twice. Don't know where it went wrong.
[Extra one removed. TM]
Jolly Farmer, see the link below and the lack of any complaint from the regulars about cutting research. Oh the lucky country...
http://www.bishop-hill.net/blog/2014/6/4/hitting-back-at-scientivists.html
Jul 9, 2014 at 9:51 AM | Unregistered CommenterRaff
Raff, once again you appear to be suffering from facultative dyslexia. On the thread you refer to, the very first comment by "omnologos" draws a clear distinction between funding "greenie dreams" and "real science".
It was the first comment.
It was the first sentence of the first comment.
It was the only sentence of the first comment.
I don't know if Raff is capable of making a good point; s/he seems to be a content-free zone. But if the point is that we are to reduce funding of climate science research, I think that would be a good thing. The scientists have identified the problem, they've made projections of disaster, and they've told us the only solution is a drastic reduction in the output of CO2. If we accept this, and our government does, then we have no more need for their input: they're not engineers (or statisticians for that matter) and don't have the necessary skillset to come up with an engineered solution. We should now divert the money we're spending on climate science into engineering research that will provide us with low-CO2 energy.
But computer models cannot predict the future exactly. They depend, for example, on assumptions made about the levels of future greenhouse gas emissions.
Are they the only assumptions in the models? I doubt it very much. What about clouds, precipitation, wind, volcanic eruptions, sand storms, tropical storms, etc.? Talk about glossing over the difficulties!
Jul 18, 2014 at 3:14 AM geronimo
Well, in saying "for example", they imply that there may be other unknown factors.
But it's Met Office Bullshit (MOB™), clearly intended to make readers think that, were it not for unknowns such as future greenhouse gas levels, computer models would be able to predict future climate exactly (to use the MO's own word).
Notice how carefully the two sentences were wordcrafted, and how they avoided simply saying "Because things such as future greenhouse gas levels have to be assumed, climate models cannot predict future climate exactly".
Raff, you have, probably unwittingly, agreed with what I wrote. The Met Office computer will execute programs at a greater speed; your hundred years figure is rather a large exaggeration. As I said, you will get the same wrong answer earlier than before. I think you have been taken in by the Met Office public relations guff, which was necessary to justify the expense.
Raff, if what I said were true then.. blah, something completely unrelated which actually proves my point.*
What I said is true. I worked in modelling for the R&D laboratory department of a very large company for many years, modelling very complex systems. Computer models are mathematical formulae expressed in computational form. The formulae express the theories about posited relationships between input and output parameters. When the model is run, a set of inputs is fed into the formulae and, if it's a numerical model, iterative calculation steps are taken, with output parameters recorded along the way. At the end of a run, the model has produced a set of outputs which are the computational results of the formulae.
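A minimal sketch of what such a run looks like in practice (the formula, the parameter names and the values below are all invented purely for illustration; they stand in for whatever relationships the real theory posits):

```python
# Toy iterative model: the "theory" is a single posited relationship,
#   new_state = state + sensitivity * forcing - damping * state,
# applied step by step to a series of inputs.
def run_model(forcing, sensitivity=0.5, damping=0.1, start=0.0):
    state = start
    outputs = []
    for f in forcing:                # one iterative calculation step per input
        state = state + sensitivity * f - damping * state
        outputs.append(state)        # record output parameters along the way
    return outputs

# Feed a set of inputs into the formulae; the result is just
# the computational consequence of the assumed relationship.
results = run_model([1.0] * 10)
```

Whatever numbers come out, they are the arithmetic consequence of the assumed formula and nothing more.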
These outputs are not evidence of anything. The model which produced them is not evidence of anything. The evidence is in the comparison of what the model produced with real empirical measurement. If the model has produced something which approximates reality, then the model can be said to have been validated across that set of input parameters. This does not mean that it accurately models the physical relationships the theory is based on, only that the computational model is skilful enough to fit the curve of the real measurements. 'Skilful' in the modelling sense simply means the combination of computational steps has faithfully reproduced whatever short series or trend the theory was attempting to model; it does NOT mean the model is accurately modelling the underlying phenomenon.
In reality, the model is trained or 'tweaked' so that outputs match expectations. This fudging is post-hoc; it is not driven by the physical theory itself, but by a series of fudge factors, a subset of the input parameters which are unknown or unknowable at the time of modelling and so have to be guessed. At the end, it's hardly surprising that the model fits the training data, since the training data has been driving the fudging. This is evidence of nothing. All you have proved is that you have engineered an algorithm that matches a desired trend. The only evidence you have is that the model reproduces f(n) over that particular range of input data.
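As a sketch of that post-hoc fudging (the 'theory', the data and the parameter range here are all invented for illustration), the free parameter below is chosen purely because it makes the output match the training data, not because theory dictates its value:

```python
# The "theory": output is proportional to input, with one free parameter
# whose true value is unknown, so it becomes a fudge factor.
def model(x, fudge):
    return fudge * x

train_x = [1, 2, 3, 4]
train_y = [2.1, 3.9, 6.0, 8.0]   # invented training data

# Grid-search the fudge factor to minimise squared error on the training set.
def fit(xs, ys, candidates):
    def sse(f):
        return sum((model(x, f) - y) ** 2 for x, y in zip(xs, ys))
    return min(candidates, key=sse)

best = fit(train_x, train_y, [i / 100 for i in range(100, 301)])
```

Unsurprisingly, the fitted model matches the training data well; that match demonstrates only that the tuning worked, not that the posited relationship is physically right.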
This is a distinction which seems to have been lost on some, including the copywriters in the Met Office.
If such a model, having been trained on, say, historical temperature trends, and found to skilfully match those trends, is then let loose on forecasting, that is the true meaning of model validation. If your model then performs skilfully for 10 or 20 years (or however long the error bars on the input data deem it accurate), you may claim that the algorithm you have created adequately models the underlying physical phenomenon, and this, by implication but not deduction, may lend weight to the validity of the original theory expressed in the formulae. It's supporting evidence, but not proof.
Unfortunately for the models referred to by the Met Office, none of them has adequately forecast the observed trend since they were created. Almost all of them run too hot. This could be for many reasons: perhaps the original training runs failed to match the training data, so certain fudge factors were increased; whilst this made them match reality at the time, it has made them run too hot ever since. Perhaps the training data periods represent a range of data which were unusual in terms of normal climatic function: the 90s had an unusually steep temperature rise, so any models tuned on this data may encapsulate an undetected trend acceleration which is no longer operating in the real climate. Who knows; I'm speculating. But they are all, with one or two exceptions, running hot.
This means that the original models - no matter how closely they hindcast the training data - have been invalidated.
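A sketch of that failure mode, using invented numbers throughout: a model tuned to a steep training period keeps projecting that steepness, while the (equally invented) later observations flatten off, so the model 'runs hot' against every out-of-sample point:

```python
# Model tuned on a steep 1990-2000 training period: it simply carries
# that trend forward (0.02 per year, an invented figure).
def tuned_model(year):
    return 0.02 * (year - 1990)

# Invented "observations" that flatten off after the training period.
observed = {2005: 0.22, 2010: 0.24, 2015: 0.25}

# The model runs hot if it overshoots every later observation.
runs_hot = all(tuned_model(year) > obs for year, obs in observed.items())
```

A perfect hindcast over 1990-2000 would not rescue this model; the forecast misses are what invalidate it.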
* Your point about nuclear models actually demonstrates my point ably. Climate modelling and nuclear modelling both serve the same purpose: they are an attempt by scientists to provide something scientific-sounding to support an extant policy desired by politicians. Your idea that a model of a nuclear explosion is good evidence BECAUSE otherwise we'd still be exploding them is a standard cart-before-horse fallacy. The decisions about the use of nuclear weapons are all political, and the scientific evidence to support whichever political position commissions it has been generated post-hoc. You're not really so naïve as to believe that politicians were only too happy to keep going but were talked out of it by the excellent evidential models produced by scientists, are you, Raff?
Raff (again), You made no attempt to counter my main point which was contained in the questions at the end of my first post.
Who writes the program, and who writes the specification for it? What quality assurance is performed in the testing cycle?
The machine will only provide results based on what it is told are the relationships between the variables and this will be set by the program specification.
I found by far the most interesting part of 'Climategate' was the diary of the computer programmer who apparently fiddled around with his code until it produced results in line with what his superiors required.
BYIJ, ANH1 and others.
Don't waste bandwidth.
Somebody appears to believe that current nuclear weapons may not be identical in every detail (so far as the so-called 'physics package' is concerned) to weapons that were tested before the cessation of test explosions.
As I commented on unthreaded, I watched Horizon on BBC last night. Computer models were being hyped up as the greatest thing and, like nuclear fusion, on the verge of greatness. A Golden Age, in fact. Mrs S threatened to turn the TV off if I didn't stop making comments. It did contain a lot of stuff I'd first read on sceptic blogs, given an impending-doom slant.
It's a bit off topic for this thread, but the programme claimed the Jetstream had changed, causing extreme weather in the UK, all down to (Anthropogenic) Climate Change™. This seems to be the latest theory put to the public as a reason for going back to the Stone Age. By coincidence, at the same time as the loopy Jetstream was being shown on the BBC, Steve Goddard had posted a piece showing how in 1977 the very same phenomenon was being put down to global cooling.
Steve Goddard's article source here
http://news.google.com/newspapers?nid=1899&dat=19830325&id=cAUgAAAAIBAJ&sjid=9WQFAAAAIBAJ&pg=1417,3960641
splitpin, I think it was Geronimo elsewhere who said that sometimes answering a troll isn't so much for the troll's benefit, but it's so that an answer is given, in case anyone else reading (present or future) thinks we don't have an answer and thus have conceded the point.
TheBigYinJames, you and others seem to care deeply that computer models do not produce evidence. I think you are just playing with words, but that seems to be the nature of 'scepticism' (as opposed to scepticism). However, if you really believe what you say, can I extract a promise from you that you will:
- never, ever, write a little model for yourself or others to investigate an idea or theory.
Or that if you do write such a model:
- you will represent every physical and logical entity in the model to the finest level of detail possible;
- you will use no approximations to improve efficiency, even if you believe they make no difference;
- you will carry on writing the model until it is fully verified and validated;
- you will draw absolutely no conclusions from the results of the model until that time;
- you will tell nobody of those early results (for fear that they might draw inappropriate conclusions), even if they are paying for the model;
- you will not, under any circumstances, cease writing the model even if it seems to give 'evidence' that your idea or theory was wrong;
- neither will you, under any circumstances, cease writing the model if it seems to give sufficient 'evidence' that your idea or theory has some legs;
- instead, in all cases, you will carry on until the model is complete and validated in every aspect.
Thanks.
ANH1, a hundred years is reasonable. An iPhone is thousands of times faster than a 1980s IBM PC. Anything that takes around a month to run now would have taken hundreds of years then. That performance increase makes things *possible*, not just faster. Unless you think carrying thousands of PCs wired up together in your pocket to make 4G phone calls would be possible.
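The arithmetic behind that claim can be sketched as follows (the speedup ratio is an assumed round number for illustration, not a benchmark of any actual machine):

```python
# If a job takes one month on today's machine, a machine 10,000x slower
# would need roughly 10,000 months, i.e. many centuries, for the same job.
SECONDS_PER_MONTH = 30 * 24 * 3600
assumed_speedup = 10_000                 # illustrative ratio, not measured

old_runtime_seconds = SECONDS_PER_MONTH * assumed_speedup
old_runtime_years = old_runtime_seconds / (365 * 24 * 3600)
```

At that assumed ratio the older machine would need on the order of 800 years, which is why a month-long run today was simply out of reach, not merely slow, on older hardware.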
michael hart, and who gets to decide what is "real science" in your world? Know any politicians who know *anything* about it?
geronimo, if what I say is content-free, then ignore it. The fact that people don't implies that what I say hits a nerve. If you want low-CO2 energy, the best way to get it is to make a market for it. Not cancelling research programs (like Abbott) helps too.
raff, I can see you're into strawmen. I didn't say models weren't useful, I said they weren't evidence. Nothing in your reply answers that point.
Of course he's into strawmen. He said
" But the big difference is that a computer such as that used by the Met Office can execute programs that a less powerful computer could not in a hundred years. "
He did not say what "a less powerful computer" was.
1982 IBM PC (0.2 MIPS)
2014 AMD R9 295X2 11.6 TFlops?
He did not say how long the Met Office's programs take on its supercomputer.
1 second?
6 months?
He's one of those people who haven't a clue how profound their ignorance is. His questions and comments have the appearance of saying something but if you look closely, there's nothing there. It's scribble.
Don't waste bandwidth.
Raff,
Your last reply shows to me that you do not know anything about commercial computing. PCs and iPhones are irrelevant here. I suggest you confine yourself to subjects of which you have some real knowledge.
I'm fully aware that it is your typical out-of-date BBC news story that gets rated on the current site, but I remember it annoying me at the time...