
Interview with Judy Curry

...um, it's in the paper.
Please show an example of that. Link please. All I have seen is pundits lying about what papers say. Did you actually read such a paper, or read what some pundit claimed the paper said?
 
Unrelated.

You asked about confidence intervals. I provided you examples of distributions.
I was familiar with the paper, but the instrument data in that paper is cited as being from
Wigley, et al
It is clear from Fig. 5 that the best fit is obtained for a sensitivity below the 1.5°C lower bound of the 90% confidence range.
 
I disagree with that perspective. My perspective on science is that we take a hypothesis and do our best to disprove it. When all attempts to disprove it fail, then we are getting somewhere.

That is precisely what inferential statistics is all about: Type I and Type II errors, rejecting or failing to reject the null hypothesis. All of this falls out of solid statistical analyses.
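To make that concrete, here is a minimal sketch of a one-sample t-test done by hand with only the Python standard library (the sample values and the null mean of 0 are invented for illustration):

```python
import math
import statistics

# Hypothetical sample of measurements (values invented for illustration)
sample = [0.8, 0.4, 0.9, 0.3, 0.7, 0.6, 0.2, 0.5, 0.8, 0.4, 0.6, 0.7]

n = len(sample)
mean = statistics.mean(sample)
sd = statistics.stdev(sample)

# One-sample t statistic against the null hypothesis that the true mean is 0
t_stat = mean / (sd / math.sqrt(n))

# Approximate two-sided critical value for alpha = 0.05 with df = 11
t_crit = 2.201
if abs(t_stat) > t_crit:
    print("reject the null (risking a Type I error if the null was actually true)")
else:
    print("fail to reject (risking a Type II error if the null was actually false)")
```

The decision rule is exactly the reject/fail-to-reject logic described above; the only judgment call is the significance level you pick before looking at the data.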


Why are people constantly trying to deny the works of others that they don't like? Do you believe that to be science?

It is perfectly fine to deny the musings of interested amateurs just piddling around with pen and paper if they cannot make the case that they are more likely to be correct.

Is it bad to deny someone selling a "perpetual motion machine" the right to take advantage of others?

The sad part for most skeptics and especially denialists is that the science is pretty settled among the folks who know it the best. It is insufficient to simply feel bad about what the results say.
 
Please show an example of that. Link please. All I have seen is pundits lying about what papers say. Did you actually read such a paper, or read what some pundit claimed the paper said?

I already did. I said look at Fig. 3.
 
I was familiar with the paper, but the instrument data in that paper is cited as being from
Wigley, et al

...and AGAIN I cannot stress this enough, that 90% confidence figure is not just based on the "opinion" of the researchers. It is calculated from the distribution itself.
 
...and AGAIN I cannot stress this enough, that 90% confidence figure is not just based on the "opinion" of the researchers. It is calculated from the distribution itself.
Which Wigley found was below 1.5C for the best fit.
 
That's their conclusion based on the information they used. Do you think they believe that's absolute?

I get the feeling I'm slamming my head against a brick wall here. What do you mean "absolute"? Do you have no familiarity with statistics and data distributions? I don't understand your question.

When you have a distribution of data you can, from that, calculate estimates of the mean, median, standard deviation, etc. It is calculated based on the data.

Statistics is a critical skill to have for a scientist.
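For instance, a minimal sketch with Python's standard `statistics` module (the data values are invented for illustration):

```python
import statistics

# A hypothetical distribution of measurements (values invented for illustration)
data = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2, 2.3, 1.7, 2.5, 2.0]

mean = statistics.mean(data)      # arithmetic average
median = statistics.median(data)  # middle value of the sorted data
sd = statistics.stdev(data)       # sample standard deviation

print(f"mean={mean:.2f}  median={median:.2f}  stdev={sd:.2f}")
```

All three numbers come straight from the data; no opinion is involved anywhere in the calculation.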
 
I get the feeling I'm slamming my head against a brick wall here. What do you mean "absolute"? Do you have no familiarity with statistics and data distributions? I don't understand your question.

When you have a distribution of data you can, from that, calculate estimates of the mean, median, standard deviation, etc. It is calculated based on the data.

Statistics is a critical skill to have for a scientist.
As a guide only.
 
No, the Figure 3 instrument section came from Wigley, or at least that is what the citation said.

Fig. 3 has MANY different methods of measurement. The entire article goes over a wide variety of methods. Fig 3 shows confidence intervals.
 
Huh? I'm not following. So, barring statistics, what is the best way to determine the most likely accurate hypothesis based on the data in your experience?
That is too general a question. A hypothesis has to be able to be challenged. I'm not sure where you are going with this statistics idea, except to think you must rely too much on statistics.

A reminder. Facts make statistics. Statistics do not make facts.
 
Fig. 3 has MANY different methods of measurement. The entire article goes over a wide variety of methods. Fig 3 shows confidence intervals.
Only the instrument data counts with any accuracy; the rest are models and opinions.
PS: expert elicitation is the synthesis of opinions of authorities on a subject where there is uncertainty due to insufficient data, or when such data is unattainable because of physical constraints or lack of resources.

In short, their opinion!
 
I get the feeling I'm slamming my head against a brick wall here. What do you mean "absolute"? Do you have no familiarity with statistics and data distributions? I don't understand your question.

When you have a distribution of data you can, from that, calculate estimates of the mean, median, standard deviation, etc. It is calculated based on the data.

Statistics is a critical skill to have for a scientist.
Look, man. These guys are ‘sort of engineers’. They obviously sort of know what they’re talking about.
 
That is too general a question. A hypothesis has to be able to be challenged. I'm not sure where you are going with this statistics idea, except to think you must rely too much on statistics.

A reminder. Facts make statistics. Statistics do not make facts.

You keep saying that, but I sense it is because you are not one who utilizes statistics. Statistics is the BEST and, as of now, the ONLY tool we have to estimate error and eliminate bias.

I don't know what kind of magical thinking you are undertaking about the nature of a hypothesis, but when you work with data, statistics becomes VERY important.

I've seen people take graphs, draw best-fit lines through the data, wave their hands, and say "Look, here's a trend!" But in reality, unless an F-test is done on that trend to show with some confidence that there is actually a REAL trend and not just a random noise artifact, one would be best NOT to accept the claim that there is a trend. That's just sloppy.
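To sketch what that test looks like, here is a hand-rolled simple linear regression with its F statistic, in pure Python (the yearly values are invented, and in practice one would use a statistics package):

```python
# Hypothetical yearly anomalies: is the apparent upward trend real?
x = list(range(10))
y = [0.1, 0.3, 0.2, 0.5, 0.4, 0.6, 0.5, 0.8, 0.7, 0.9]

n = len(x)
mx = sum(x) / n
my = sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

slope = sxy / sxx
intercept = my - slope * mx

# Residual and regression sums of squares
ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
ss_reg = slope * sxy

# F statistic with (1, n-2) degrees of freedom
f_stat = ss_reg / (ss_res / (n - 2))

# Critical value for F(1, 8) at alpha = 0.05 is about 5.32
print("significant trend" if f_stat > 5.32 else "could be noise")
```

Only if the F statistic clears the critical value is the fitted slope worth calling a trend rather than a noise artifact.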
 
Only the instrument data counts with any accuracy; the rest are models and opinions.
PS: expert elicitation is the synthesis of opinions of authorities on a subject where there is uncertainty due to insufficient data, or when such data is unattainable because of physical constraints or lack of resources.

In short, their opinion!

-sigh-

THE CONFIDENCE INTERVALS IN FIGURE 3 are NOT, repeat, NOT based on "opinion".

Here's how confidence intervals are calculated on normal distributions: http://www.stat.yale.edu/Courses/1997-98/101/confint.htm


And here's how one works with non-gaussian distributions (CI on the median): https://online.stat.psu.edu/stat415/lesson/19/19.1

(There is a whole host of non-parametric statistics as well.)
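As a rough sketch of the normal-distribution case from the first link (the data is invented, and it assumes an approximately normal sampling distribution for the mean):

```python
import math
import statistics

# Hypothetical sample; compute a 90% confidence interval for its mean
data = [1.2, 1.5, 1.1, 1.8, 1.4, 1.6, 1.3, 1.7, 1.5, 1.4]

n = len(data)
mean = statistics.mean(data)
se = statistics.stdev(data) / math.sqrt(n)  # standard error of the mean

z90 = 1.645  # two-sided 90% critical value for the standard normal
lower, upper = mean - z90 * se, mean + z90 * se
print(f"90% CI: ({lower:.3f}, {upper:.3f})")
```

The interval falls out of the data and the chosen confidence level; there is no step where anyone's opinion enters.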
 
You keep saying that, but I sense it is because you are not one who utilizes statistics. Statistics is the BEST and, as of now, the ONLY tool we have to estimate error and eliminate bias.

I don't know what kind of magical thinking you are undertaking about the nature of an hypothesis but when you work with data statistics becomes VERY important.

I've seen people take graphs, draw best-fit lines through the data, wave their hands, and say "Look, here's a trend!" But in reality, unless an F-test is done on that trend to show with some confidence that there is actually a REAL trend and not just a random noise artifact, one would be best NOT to accept the claim that there is a trend. That's just sloppy.
I am guessing that you do not understand how statistics are applied to merging climate data and climate model results.
The empirical climate data has an actual confidence interval and standard deviation; the model outputs do not really, because each simulation can have very different input variables.
So while the empirical data has a range of ~1°C to 2.5°C, the model ranges are all over the place depending on the input assumptions.
The IPCC said no best estimate of ECS could be found, because the instrument record and the model output could not be matched.
 
-sigh-

THE CONFIDENCE INTERVALS IN FIGURE 3 are NOT, repeat, NOT based on "opinion".

Here's how confidence intervals are calculated on normal distributions: http://www.stat.yale.edu/Courses/1997-98/101/confint.htm


And here's how one works with non-gaussian distributions (CI on the median): https://online.stat.psu.edu/stat415/lesson/19/19.1

(There is a whole host of non-parametric statistics as well.)
Most of the ones not under the instrument listing are so subjective as to be opinion.
 
I am guessing that you do not understand how statistics are applied to merging climate data and climate model results.
The empirical climate data has an actual confidence interval and standard deviation; the model outputs do not really, because each simulation can have very different input variables.

You are saying that model outputs cannot have variability in successive runs? That does not sound right. I believe Monte Carlo methods are used in some aspects of climate science, do you think their output is not statistically describable? Why not?

In fact there are stochastic climate models which, pretty much by definition, are described by statistics.
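A toy sketch of that idea (the "model" here is a made-up one-liner, not a real climate model): repeated Monte Carlo runs with internal noise produce a distribution that ordinary statistics describes perfectly well.

```python
import random
import statistics

random.seed(1)

def toy_model(sensitivity):
    """Hypothetical toy 'model': a warming response plus random internal variability."""
    forcing = 1.0
    noise = random.gauss(0.0, 0.1)  # stand-in for internal variability
    return sensitivity * forcing + noise

# Monte Carlo: many runs of the same model yield a statistically describable output
runs = [toy_model(sensitivity=1.8) for _ in range(1000)]

print("mean response:", round(statistics.mean(runs), 3))
print("std deviation:", round(statistics.stdev(runs), 3))
```

The spread across runs is itself a well-defined distribution, which is the sense in which stochastic model output is "described by statistics."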

 
Most of the ones not under the instrument listing are so subjective as to be opinion.

Umm... I don't know where you get that from, considering that even undergrads know how statistics are used to characterize distributions.
 
Please show an example of that. Link please. All I have seen is pundits lying about what papers say. Did you actually read such a paper, or read what some pundit claimed the paper said?

That's the "pundits lying" talking point.
 
Agree or disagree...

Do you agree with the Orwellian-type censorship?

This is off topic, but I will respond once. Censorship is normally seen as a function of an official entity, normally a government. In a democratic society, the government has no “right” to censor the media, for instance, as is normal in a more tyrannical society.

However, we also live in a capitalist society whereby individual entities of capitalism do indeed have the right to make certain that their platform is not used to spread nefarious and incorrect information. That is why the “fraud” lies of the Trumpies were “cancelled” from some media, and why right wing white supremacists are not given voice on these platforms.

The solution, of course, is competition through capitalism by providing alternate outlets for disinformation regarding, for instance, climate change or election results, or hate speech such as white supremacists engage in. And it is clearly working, as shown by the availability of the disinformation videos of Stossel and the fact that white supremacists do indeed have ways to communicate on the “dark web”.
So your question basically becomes moot for two reasons: that our government is prevented from censorship by the Constitution, and that capitalism basically prevents censorship by providing alternatives.
 
You are saying that model outputs cannot have variability in successive runs? That does not sound right. I believe Monte Carlo methods are used in some aspects of climate science, do you think their output is not statistically describable? Why not?

In fact there are stochastic climate models which, pretty much by definition, are described by statistics.
If they used exactly the same initialization conditions, combined with exactly the same variable selection, they would likely get very close to the same results.
The problem is that the initialization conditions and the variables are almost all different: different sensitivity levels, different emission scenarios, different starting states.
So to say the output of, say, the CMIP5 models produces X, you would have to understand all the starting assumptions.
One thing that seems common is that the delay between emission and maximum warming for small pulses consistently averages about 10 years.
 