GHG Theory step by step
Jennifer Marohasy, down under, has been doing interesting work on weather and rainfall prediction using artificial neural networks (ANNs) - quite important in Australia.
Deep-learning ANNs are currently difficult to handle, and much research is taking place.
Re: surrounding hour 2 and 4 data.
If the ANNs have been trained with 1-hour snapshots, then they can only give 1-hour predictions. Interpolation would not be recommended.
I wonder how many previous 1-hour snapshots will be required to get a reasonable response from the system?
10, 100, 1,000, 10,000 or millions?
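Purely as a sketch of how one might answer that empirically - the synthetic hourly series, the window sizes and the small network below are my own assumptions, not anything from Marohasy's work:

    # Compare validation error for different history lengths (numbers of
    # previous 1-hour snapshots) fed to a small neural network.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    hours = np.arange(20_000)
    series = 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)  # fake hourly temps

    def windowed(data, n_lags):
        """Build (X, y) pairs: n_lags past hours -> the next hour."""
        X = np.array([data[i:i + n_lags] for i in range(len(data) - n_lags)])
        return X, data[n_lags:]

    for n_lags in (10, 100, 1000):
        X, y = windowed(series, n_lags)
        split = int(0.8 * len(y))
        model = MLPRegressor(hidden_layer_sizes=(32,), random_state=0)
        model.fit(X[:split], y[:split])
        err = np.mean(np.abs(model.predict(X[split:]) - y[split:]))
        print(f"{n_lags:5d} previous hours -> mean abs error {err:.2f}")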
Dec 23, 2017 at 11:27 PM | Rob Burton
Computers, programming etc. are not my "thing" at all. As a country bumpkin, yottie etc., I do appreciate accurate weather forecasting.
It was my understanding that UK weather forecasts traditionally looked at where High and Low Pressures were, and made forecasts based on records of similar patterns having occurred before, and what happened next. I accept that a Low tracking 100 miles further North or South than predicted will make a lot of difference.
Would a neural network "just" be a fast way of processing live data, seeking best fit matches with past data, and capable of learning from its own experience, to increase the accuracy of forecasts? If so, are Climate Models incapable of any self-education?
Rob Burton, Steve Richards
Since we started getting satellite data in 1979, that is about 300,000 hours of global data.
Climate modelling uses mostly well understood classical physics. The model solves the Navier-Stokes equations and equations of state for 100,000 grid squares, calculates the energy flows into, out of and between squares, and updates to new states after a set time interval. The process then repeats as often as necessary.
You are dealing with a well understood set of laws, and a lot of repetitious number crunching solving partial differential equations. This is better done by algorithms rather than neural networks.
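A minimal caricature of that update cycle, for anyone curious - a single diffusive field standing in for the energy flows, nothing like a real GCM, and every number below is made up:

    # Toy grid update loop: compute exchanges with neighbouring cells,
    # update every cell, advance the clock, repeat.
    import numpy as np

    n_lat, n_lon = 36, 72                      # coarse 5-degree grid, 2,592 cells
    state = np.random.default_rng(1).normal(288.0, 5.0, (n_lat, n_lon))  # "temperatures", K
    k = 0.1                                    # arbitrary exchange coefficient per step

    def step(field):
        # exchange with the four lat/lon neighbours; longitude wraps around,
        # and wrapping latitude pole-to-pole is wrong but harmless in a toy
        north = np.roll(field, -1, axis=0)
        south = np.roll(field, 1, axis=0)
        east = np.roll(field, -1, axis=1)
        west = np.roll(field, 1, axis=1)
        return field + k * (north + south + east + west - 4 * field)

    for hour in range(24):                     # repeat as often as necessary
        state = step(state)
    print(round(state.mean(), 2), round(state.std(), 2))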
EM: Seasons greetings.
Your post says it all really!
"mostly well understood" is your first key phrase, followed by "100,000 grid squares".
Join those two phrases together and you have all of the not fit for purpose climate models.
We need to understand every reaction relating to this globe of ours. It will be a long time coming.
ANNs as pattern matchers, using poorly defined and imprecise data, may do a much better job at predicting future weather and climate.
Since our input data is very ragged and the physical relations are not fully defined (clouds anyone?), formal 'big' simulation should be a low-key research activity. Not the stuff that press releases should be made of.....
Steve Richards
You are using the impossible standards straw man.
Two questions:
Do we have complete understanding? No.
Do we have enough understanding to do useful research? Yes.
Empirical observations of cloud cover indicate that overall cloud cover is gradually increasing as we warm. Increased low cloud has a cooling effect. Increased high cloud has a warming effect.
The net effect is a slight warming feedback. I hate to spoil Christmas Day for you, but increasing cloud cover is not a Deus ex Machina which will rescue you from climate change.
I read that wind/energy/weather can only flow in four directions between adjacent grid squares: N, S, E, W. No diagonals. Could that be true of the models? If so...!
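For what it's worth, the difference between four-neighbour and eight-neighbour exchange is easy to see in a toy stencil (an illustration of the numerics only, not a claim about any particular model's code):

    # Spread of a single "blob" after one step, with and without diagonals.
    import numpy as np

    field = np.zeros((5, 5))
    field[2, 2] = 1.0

    def spread(f, diagonals=False):
        out = f.copy()
        shifts = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # N, S, W, E
        if diagonals:
            shifts += [(-1, -1), (-1, 1), (1, -1), (1, 1)]   # the four diagonals
        for dy, dx in shifts:
            out += 0.1 * (np.roll(f, (dy, dx), axis=(0, 1)) - f)
        return out

    print(spread(field))                   # mass moves only N/S/E/W
    print(spread(field, diagonals=True))   # mass also leaks into the corners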
Rhoda, in Climate Science, data and energy can jump over squares, like a Knight in chess.
"You are dealing with a well understood set of laws, and a lot of repititious number crunching solving partial differential equations. This is better done by algorithms rather than neural networks." Dec 24, 2017 at 11:52 PM | Entropic man
How many inaccurate computer models do we need, before we try a different approach? As Consensus Climate Scientists can't admit their mistakes, why should their logic and assumptions be accepted for more taxpayer funding?
Steve Richards
Science of Doom has a good post on current thinking regarding clouds.
Rhoda, golf Charlie
You do not have to have square grid cells.
How about triangles? Each cell connects to three others.
How about hexagons? Each cell connects to six others.
Each has advantages and disadvantages. Triangles have fewer edges to calculate, but tend to have distortions due to corner effects.
Hexagons are likely to show more uniform conditions within each cell, but six edges add considerably to the calculation workload.
How about Penrose tiling? You can vary the size and shape of the tiles to produce irregular patterns.
In practice, each grid element is based on latitude and longitude, so most grid "squares" are actually trapezoids. Their area decreases as you move further from the Equator, so the resolution of the models improves at higher latitudes.
This gives a compromise with each grid element half a degree of longitude deep and half a degree of latitude wide. Point measurements come as a value, lat/long coordinates and a time. A lat/long based grid makes it easy to manage data, which is probably why it was chosen.
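To put a number on that shrinkage (only the half-degree cell size comes from the post above; the rest is standard spherical geometry):

    # Approximate area of a 0.5 x 0.5 degree grid cell at various latitudes.
    import numpy as np

    R = 6371.0                                 # Earth radius, km
    dlat = dlon = np.radians(0.5)

    def cell_area(lat_deg):
        """Area (km^2) of a half-degree cell centred at lat_deg."""
        return R**2 * dlat * dlon * np.cos(np.radians(lat_deg))

    for lat in (0, 30, 60, 85):
        print(f"{lat:2d} deg latitude: ~{cell_area(lat):5.0f} km^2")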
This is better done by algorithms rather than neural networks." Dec 24, 2017 at 11:52 PM | Entropic man
You think neural networks aren't implemented with algorithms?
Entropic Man, you seem determined to maintain the status quo, throwing even more money at failed dogma.
If all computer climate models have evolved from the same sources, and failed, what is wrong with weeding out the worst performers, and redirecting the money to a fresh approach? The current model programmers have no incentive to find their own mistakes, so what is the point in taxpayers funding them?
Oh dear EM:
You are using the impossible standards straw man.
Two questions:
Do we have complete understanding? No.
Do we have enough understanding to do useful research? Yes.
Have you no concept of rounding errors?
This is why you do not 'round' intermediate answers when performing a sequence of calculations. You get an incorrect answer if you do.
I cannot imagine a more involved computer program than a climate model using many thousands of cells, the outputs of each calculation being fed into the inputs of many other calculations; increment the time clock and repeat the calculations again.
The possibilities of errors creeping in are many, even *IF* we had all of the correct scientific understanding which led to correct programming.
We do not know how major processes work, so by definition, the formulas are wrong.
By definition all climate model outputs are wrong.
EM, I do not request impossible standards, I would accept normal engineering design standards and methods.
A chartered engineer will tell you if the system you have requested is currently impossible to create, climate scientists do not.
EM, if you want the world to change its behaviour, and for poor people to remain poor, then you need exceedingly good evidence. Climate models do not provide evidence, they are faulty.
EM, you ask for a little research, *YES* I think everyone here would welcome basic research in to the physics of atmospheric reactions. Experiments etc.
Running faulty climate models is no evidence at all, other than that.......
See Fortran precision problems here: http://people.ds.cam.ac.uk/nmm1/fortran/paper_07.pdf
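A two-line demonstration of the problem that paper describes (the numbers are arbitrary, but the 2**24 ceiling is a genuine property of single precision):

    # Accumulating in single precision: adding 1.0 twenty million times
    # stalls at 2**24 = 16,777,216, because above that the gap between
    # representable float32 values is bigger than 1.
    import numpy as np

    ones = np.ones(20_000_000, dtype=np.float32)
    print("float32 running sum:", np.cumsum(ones)[-1])         # 16777216.0
    print("float64 sum        :", ones.sum(dtype=np.float64))  # 20000000.0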
Michael hart
All neural networks are algorithms. Not all algorithms are neural networks.
Neural networks tend to be used for signal processing and pattern recognition problems.
You do not need a neural network for a climate model, which is just repetitive number crunching. The extra computation to simulate a neural network on top is just wasted effort.
Golf Charlie
"climate models have failed,"
You keep saying that as though it is an article of faith.
When you look at the output of climate models and compare them with observation you get this match.
Kindly explain on the basis of the data how you come to your conclusion that models which correctly forecast past and present have failed.
Steve Richards
The models are good enough. The models hindcast accurately and forecast the near future. If rounding errors were a disabling problem then model runs given the physics and forcings which reflect actual conditions would not match reality (see the link I gave Golf Charlie.)
I have very little confidence in engineering practice applied outside engineering. No engineer has been able to produce anything which accurately describes what happens when seven billion people try to drive a planet.
Lots of experiments have been done at laboratory level, plus observations of insolation, OLR and DWLR which match expectations from the lab results.
Proper trials of the effect of increasing CO2 use past climates as controls. To do them in real time would require duplicate planets. Unless you have other experimental design options?
BH is cantankerous at the moment. I had to rewrite both posts to get them accepted and had to leave out the link.
Try Climate Lab Book Comparing CMIP5 & observations.
Dec 27, 2017 at 11:34 PM | Entropic man
Which Models predicted the pause?
Golf Charlie
"Which Models predicted the pause?"
All of them.
The ensemble using real world forcings to 2005 put the observed temperatures near the bottom of their 95% confidence range. (That is the pale grey band on the second graph.)
The updated ensemble using real world forcings to 2011 put the end of the pause near the middle of their 95% confidence range.(That is the darker grey band on the second graph.)
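For anyone unfamiliar with how such a band is produced, it is simply a percentile taken across the ensemble members at each time step - a sketch with synthetic runs (the trend and spread below are invented, not CMIP5 output):

    # 95% band from an ensemble: 2.5th and 97.5th percentiles across members.
    import numpy as np

    rng = np.random.default_rng(2)
    years = np.arange(1980, 2018)
    trend = 0.017 * (years - 1980)                          # invented warming rate, C/yr
    runs = trend + rng.normal(0, 0.12, (100, years.size))   # 100 fake ensemble members

    lower = np.percentile(runs, 2.5, axis=0)
    upper = np.percentile(runs, 97.5, axis=0)
    for yr in (1998, 2011):
        i = int(np.where(years == yr)[0][0])
        print(f"{yr}: {lower[i]:+.2f} to {upper[i]:+.2f} C")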
Entropic Man, Climate Models did not predict the pause. Or are you saying Phil Jones was wrong?
“The scientific community would come down on me in no uncertain terms if I said the world had cooled from 1998. OK it has but it is only 7 years of data and it isn’t statistically significant….”
Dr. Phil Jones – CRU emails – 7th May, 2009
‘Bottom line: the ‘no upward trend’ has to continue for a total of 15 years before we get worried.’
So Jones was being honest, whilst knowing that dubious adjustments relating to UHI had been incorporated:
https://climateaudit.org/2010/11/03/phil-jones-and-the-china-network-part-1/
Jones et al 1990
"In 1988, Tom Karl had published a study purporting to show that UHI didn’t “matter” in the US. In 1990, Jones decided to extend the results to the rest of the world and sought out data for networks in Russia, Australia and China in order to compare “urban” and “rural” sites as a supposed way of estimating the urbanization impact on global temperature. They concluded that there was “no indication of significant urban influence” in any of the three networks and that an upper limit of ~0.05 deg could be set on the contribution of urbanization to 20th century land temperatures, an order of magnitude less than observed warming."
"Entropic Man, Climate Models did not predict the pause."
On the contrary, they did indeed predict it, after it happened. And EM lectures US on articles of faith. Dear EM, anybody can make a much-parameterised model fit after the fact. It doesn't mean a thing. Any problem with repeated iterations causing runaways can be fixed if the code itself puts its hand up or if some results are just thrown away.
Golf charlie
I don't think we are discussing the same thing.
The consensus view is that a reduction in albedo forcing led to a reduced rate of warming during the 2000s. This never exceeded 0.2C from the long term trend, so never became statistically significant. It may have just been noise in the data.
I suspect that your interpretation of the pause is :-
"Global warming stopped for 13 years"
That sounds good as a propaganda meme, but as science it is bullshit.
Take a look at Zeke Hausfather's analysis of forcings affecting the BEST temperature record. Note particularly the -0.4C negative forcing of aerosols since 2000. If the pause existed, that would be its smoking gun.
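On the "never became statistically significant" point, this is the kind of test being alluded to, run here on invented annual anomalies (the trend and noise level are illustrative only, not HadCRUT or BEST):

    # Fit a straight line to ~13 years of noisy annual anomalies and check
    # whether the slope is distinguishable from zero.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    years = np.arange(1998, 2011)
    anoms = 0.012 * (years - 1998) + rng.normal(0, 0.1, years.size)

    fit = stats.linregress(years, anoms)
    print(f"trend: {fit.slope * 10:+.3f} C/decade, p-value: {fit.pvalue:.2f}")
    print("significant at 95%?", fit.pvalue < 0.05)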
Rhoda
So cynical!
That sounds good as a propaganda meme, but as science it is bullshit.
Take a look at Zeke Hausfather's analysis of forcings affecting the BEST temperature record. Note particularly the -0.4C negative forcing of aerosols since 2000. If the pause existed, that would be its smoking gun.
Dec 28, 2017 at 2:27 PM | Entropic man
Entropic Man, I think it is appropriate to quote you as follows:
"That sounds good as a propoganda meme, but as science it is bullshit."
Zeke Hausfather quotes Karl 2015 to support BEST
https://www.carbonbrief.org/factcheck-mail-sundays-astonishing-evidence-global-temperature-rise
http://notrickszone.com/2017/12/28/7-new-2017-papers-forecast-global-cooling-another-little-ice-age-will-begin-soon/
"During 2017, 120 papers linking historical and modern climate change to variations in solar activity and its modulators (clouds, cosmic rays) have been published in scientific journals.
It has been increasingly established that low solar activity (fewer sunspots) and increased cloud cover (as modulated by cosmic rays) are highly associated with a cooling climate.
In recent years, the Earth has unfortunately left a period of very high solar activity, the Modern Grand Maximum. Periods of high solar activity correspond to multi-decadal- to centennial-scale warming.
Solar scientists are now increasingly forecasting a period of very low activity that will commence in the next few years (by around 2020 to 2025). This will lead to climate cooling, even Little Ice Age conditions.
Thirteen recently-published papers forecasting global cooling are listed below."
Basically off the topic of the GHG global warming hypothesis.
Does anyone see a neural network/machine learning approach to weather forecasting being far better than our current model-based approach? It would be curve fitting to some extent, but the neural network would probably see important factors that we haven't yet.
Also, if say the neural network/machine learning program sees the 6 previous observations leading into the current forecast, then having 6-hourly, slightly poor resolution observations would lead to a 'back-dated' higher-resolution view of the data when spliced together in model/forecast form, i.e. some of the data could be shown to be impossible at, say, hour 3 given all the surrounding hour 2 and 4 data.
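A rough sketch of that consistency idea (synthetic data and a tiny network - a real version would use actual station observations and a proper architecture):

    # Learn what hour t "should" look like given hours t-1 and t+1, then
    # flag observations that fall far outside the learned relationship.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    hours = np.arange(5_000)
    temps = 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.5, hours.size)

    X = np.column_stack([temps[:-2], temps[2:]])   # hours t-1 and t+1
    y = temps[1:-1]                                # hour t
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500,
                         random_state=0).fit(X, y)

    resid = np.abs(model.predict(X) - y)
    suspect = np.where(resid > 3 * resid.std())[0] + 1
    print("hours worth a second look:", suspect)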