The biggest advantage of using CFD is that I don’t have to make a prototype in order to test a design, and I can refine that design to a level I just couldn’t with pen and paper or statistical/empirical calculations. I can make all the little tweaks and refinements I like, and then go make a prototype. It’s not a magic bullet, to be sure, but it cuts the amount of time it takes to analyze and develop something.
After a model was constructed, everyone got the opportunity to shit all over it through a review process, and as part of that process all of the assumptions I made were scrutinized. The software lets me “ignore,” for lack of a better term, phenomena that don’t apply to my model, or should I say phenomena that I think have a negligible impact on the results. The software also lets me set the resolution of the model. I could, for example, include every physical, chemical, and thermal phenomenon in my model, but in my opinion the ones I chose to ignore are of tertiary importance, and including them would waste man-hours and make the model too complicated to run on my puny machine.
Couple this with the fact that I have years of experimental and empirical data to validate my initial methodology. For example, Dr. Smith from the University of Complicadia figured out in 1967 that the exothermic reaction of a minor chemical intermediary in my process contributes only 0.001% to the overall energy balance, and therefore I can safely say BFD.
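That kind of justification boils down to a back-of-the-envelope check: quantify the term, compare it to the total, and drop it only if it falls below a stated threshold. A minimal sketch in Python (every number here is made up for illustration, including the threshold; the “minor intermediary” figure echoes the hypothetical Dr. Smith example above):

```python
# Back-of-the-envelope check: is a term small enough to neglect?
# All values are hypothetical illustrations, not real process data.

TOTAL_ENERGY_BALANCE_J = 5.0e9   # overall energy balance of the process, joules
INTERMEDIARY_TERM_J = 4.0e4      # exothermic contribution of the minor intermediary

NEGLIGIBLE_THRESHOLD = 1e-5      # drop anything under 0.001% of the total

contribution = INTERMEDIARY_TERM_J / TOTAL_ENERGY_BALANCE_J
if contribution < NEGLIGIBLE_THRESHOLD:
    print(f"Contribution is {contribution:.4%} of the balance -- safe to neglect")
else:
    print(f"Contribution is {contribution:.4%} of the balance -- keep it in the model")
```

The point is that the simplification is defensible only because the magnitude of the neglected term has actually been measured, not merely assumed.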
In addition to all this, the one thing I don’t have to worry about is the soundness of the tool I am using. Its makers have put it through a rigorous verification and validation procedure (like ISO/IEC 90003:2004 or something), and that procedure conforms to a recognized international standard.
At the end of my review process, changes were made until everyone was happy and a prototype was built. That prototype was then tested, and discrepancies between the real-world results and the model were explained as within the margin of error, a poor assumption in the model, or whatever. These real-world results were then taken into consideration the next time a model was constructed. They were also good to give to management to allay their fears, since they are naturally skeptical of all that oogah-boogah black-box magic and don’t want to spend $40 million just to find out that the computer was wrong.
This CFD tool that I used on boilers, pumps, solid fuel injection, and all that good stuff is also being used to model the global climate. There are statistically based models as well, but I know very little about them, so my comments will only touch on general circulation models, which use a methodology similar to the CFD models I worked with.
A lot of big claims are being made about where the global climate is going. The IPCC (Intergovernmental Panel on Climate Change) says it is 90% certain that global temperatures will rise by 1.8°C to 4.0°C. My goodness, 90% certainly sounds pretty freaking sure, and after all, they have all those global warming models to back these claims up. But now the question becomes: how reliable are the models?
Now, these climate modelers are smart guys, to be sure; they know their shit, and I would never accuse them of intentionally passing off what they knew to be bad data. That said, they seem to be a rather insular group of people who are a little too thin-skinned, clannish, and not open to criticism of their work. Many of them have never had to work in the private sector, where good verifiable results are a must and poor performance is severely punished. If they want to satisfy people like me, they should allow their work to be checked by people with no connection to it or conflict of interest.
They claim that they have validated their models using historic data, but there are some gaping holes in that explanation.
The explanation goes like this: we have validated the models by setting them up to reflect the conditions of the year 1900, and when advanced to 2008, they show a good match with what actually happened to the climate, or at least they did after we tweaked them.
And by tweaking they mean that they modified the code, and all those constants whose values they arrived at via assumptions, to get the models’ results to match the historical data. And that’s OK, isn’t it? Isn’t that what I do when performing my modeling work?
Not exactly. As I mentioned before, I actually get to build what I modeled and see whether it works as I predicted; they don’t have that luxury, so they should be extra careful. On top of that, there are significant problems with the methodology they use for their fancy curve fit.
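The “fancy curve fit” worry is essentially the overfitting problem: with enough tunable constants, a model can be made to match any historical record and still say nothing trustworthy about the future. A toy sketch with entirely synthetic data (this is not a climate model, just an illustration of hindcast tuning):

```python
# Toy illustration of hindcast tuning: a model with many free constants
# can match history closely and still extrapolate poorly.
# All data here is synthetic noise around a simple trend.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
true_trend = 0.005 * (years - 1900)                       # slow linear warming
observed = true_trend + rng.normal(0.0, 0.1, years.size)  # plus measurement noise

# "Tweak" a model with ten free constants until it fits the record.
x = (years - 1950) / 50.0            # rescale to [-1, 1] for a stable fit
coeffs = np.polyfit(x, observed, deg=9)
hindcast = np.polyval(coeffs, x)
hindcast_rms = np.sqrt(np.mean((hindcast - observed) ** 2))

# Now run the tuned model 50 years past the data it was fit to.
future = np.arange(2001, 2051)
forecast = np.polyval(coeffs, (future - 1950) / 50.0)
forecast_rms = np.sqrt(np.mean((forecast - 0.005 * (future - 1900)) ** 2))

print(f"hindcast RMS error: {hindcast_rms:.3f}")   # fits the record tightly
print(f"forecast RMS error: {forecast_rms:.3f}")   # degrades out of sample
```

My real-world loop closed that gap by building the prototype. Without a prototype, the only honest substitute is out-of-sample validation: holding back data the constants were never tuned against.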
First, how reliable are temperature readings that are 150 or even 100 years old? Was the instrumentation properly calibrated? How accurate were these instruments? It might sound like a trivial point, but instrumentation reliability has killed more than one experiment. Remember, we are talking about differences of just a couple of degrees Celsius; were instrument resolution and accuracy in 1905 good enough to make the data gathered useful?
Second, what happens when the models appear to work fine at describing some phenomenon, like the global average mean temperature from 1850 to 2000, but work very poorly at describing related phenomena:
global weather models predict that as carbon dioxide increases, it should affect the temperatures of higher elevations more than it does at ground level. Douglass’s analysis suggests that while the models do roughly match ground temperatures as carbon dioxide increased over the last 20 years, the mid- to high-tropospheric levels of the atmosphere actually cooled.
“The models are relatively accurate at predicting the temperatures at the Earth’s surface,” says Douglass, “but when you go a few miles up, they diverge dramatically. The models are really challenged to explain these results.”
The standard response to this is that the modelers will incorporate this data into their next revision, and that will make the models even more accurate. Except that the climate models continually predict the same warming trend regardless of how many revisions have been made to them:
A detailed analysis of black carbon -- the residue of burned organic matter -- in computer climate models suggests that those models may be overestimating global warming predictions.
I have had conversations with people about this issue in the past, and they have responded with two answers: you can’t predict the future, and there is no cost to being wrong about this; it’s a win-win.
It’s not about predicting the future here; it’s about verifying that the methodology you used to construct your climate model is sound and has been rigorously examined by people who have no personal or professional interest in its failure or success.
There is a cost to being wrong. If legislation to halt global warming is enacted, it could very well result in the greatest transfer of wealth from the first world to the third world, and it could also result in the greatest voluntary surrender of individual freedom.
Don’t believe me? The developing world will be written out of any treaty on CO2 emissions. Industry will move there and make them wealthy. Legislation here will determine in no small part what we eat, where we live, how large our homes are, how we get places, how many children we can have, where we work, what temperature the thermostat is set to, where we vacation, et cetera.
I don’t know that they are wrong, but they have yet to demonstrate that they are right. I know, it sounds like a cop-out, but it’s not. They have not demonstrated to any accepted global standard that the tools they use are reliable, and they have not demonstrated to any accepted global standard that their methodology is reliable.
Make your case before you legislate away my freedom in the name of saving the Earth. If the science is truly settled, conform to the same burden of proof I would have to.
Show me the money.