Uncertainty in early temp records

Discussion in 'Science' started by yguy, Mar 3, 2019.

  1. yguy

    yguy Well-Known Member

    Joined:
    Feb 4, 2010
    Messages:
    18,423
    Likes Received:
    886
    Trophy Points:
    113
    Gender:
    Male
    If the biosphere is warming, anthropogenically or otherwise, obviously its heat content must be greater than at some point in the past. Accordingly I asked:
    I never got an answer, and until I do, that number is worthless as far as I'm concerned; the uncertainty could be more than the alleged increase.
     
    Dispondent, Sunsettommy and vman12 like this.
  2. Hoosier8

    Hoosier8 Well-Known Member Past Donor

    Joined:
    Jan 16, 2012
    Messages:
    107,541
    Likes Received:
    34,488
    Trophy Points:
    113
    Those readings were taken from glass thermometers at different times during the day. Simple reading errors alone can introduce an error of 0.5 to 1 degree.

    Stevenson screens were introduced about a century ago. Studies have found that the reading error can range from -0.5 to 2.5 degrees. Radiative heating of the screen can add as much as 0.5 degrees, because on sunny days it can be warmer inside the screen than outside. Even with current readings the error can be around 0.7 degrees.

    The USCRN is a network of new temperature stations set up specifically for accuracy; it started in 2005. The USCRN record shows slight cooling in the US since 2005.
     
    Last edited: Mar 3, 2019
  3. iamanonman

    iamanonman Well-Known Member

    Joined:
    Dec 2, 2016
    Messages:
    4,826
    Likes Received:
    1,576
    Trophy Points:
    113
    That includes all uncertainty.

    The interesting thing about systematic uncertainty is that it cancels out when doing anomaly analysis.

    ΔT = (Tm + B) - (Tb + B) where Tm is the measured temperature, Tb is the baseline temperature, and B is the systematic bias.

    Watch what happens.

    ΔT = (Tm + B) - (Tb + B)

    rearrange...

    ΔT = (Tm - Tb) + (B - B)

    and because B - B = 0

    ΔT = Tm - Tb

    Berkeley Earth sets Tb = 14.183C ±0.051, so B = 0.051C. Then for the 5yr mean ending in 1900 the measurement error was about 0.08C. The error around 2000 is about 0.05C. So if you wanted to know the error of a 2000 - 1900 warming amount, it would be at most 0.08 + 0.05 = 0.13C (simple addition slightly overestimates, since independent errors combine in quadrature), because the systematic bias of 0.051C cancels out as per the above.

    http://berkeleyearth.lbl.gov/auto/Global/Land_and_Ocean_complete.txt
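
    A minimal sketch of that cancellation in Python, using the numbers quoted above (treating the ±0.051 as a shared systematic bias B is an assumption made purely for illustration):

    Code:
    # A systematic bias shared by the measurement and the baseline cancels
    # in the anomaly. Values are the Berkeley Earth numbers quoted above;
    # treating 0.051 as the shared bias B is illustrative only.
    Tb = 14.183   # baseline absolute temperature (C)
    B = 0.051     # assumed systematic bias (C), identical in both readings
    Tm = 15.050   # a measured absolute temperature (C)

    anomaly_biased = (Tm + B) - (Tb + B)   # anomaly from biased readings
    anomaly_true = Tm - Tb                 # anomaly from unbiased readings

    print(anomaly_biased, anomaly_true)    # both ~0.867; the bias cancels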

    Pretty much everyone including NASA, NOAA, HadCRUT, Cowtan & Way, etc. has roughly the same amount of error. I just used Berkeley Earth because they were the institution founded and funded by skeptics/deniers specifically to bring public attention to all of the mistakes and errors they believed existed at the time. Notable skeptics like Judith Curry were on their advisory board.
     
  4. Hoosier8

    Hoosier8 Well-Known Member Past Donor

    Joined:
    Jan 16, 2012
    Messages:
    107,541
    Likes Received:
    34,488
    Trophy Points:
    113
    The anomaly still relies on reliable temperature readings, and that is what the OP is asking about.
     
    Dispondent, Blaster3 and drluggit like this.
  5. iamanonman

    iamanonman Well-Known Member

    Joined:
    Dec 2, 2016
    Messages:
    4,826
    Likes Received:
    1,576
    Trophy Points:
    113
    Yes, I know. Berkeley Earth lists the 5yr running global mean error at the time at about 0.08C. That's the instrumental error; it's what the error of the mean of all of the instrumental measurements is computed to be.
     
    Last edited: Mar 4, 2019
  6. Hoosier8

    Hoosier8 Well-Known Member Past Donor

    Joined:
    Jan 16, 2012
    Messages:
    107,541
    Likes Received:
    34,488
    Trophy Points:
    113
    Computed based on which assumptions?
     
  7. iamanonman

    iamanonman Well-Known Member

    Joined:
    Dec 2, 2016
    Messages:
    4,826
    Likes Received:
    1,576
    Trophy Points:
    113
    The error of the mean formula: E = S/sqrt(N), where S is either the standard deviation of the samples (if you want the data to self-describe) or the actual RMS error of the samples (if it's known), and N is the number of samples.

    For example, if the RMS error of a hypothetical set of temperature instruments is 5.0C, then the error of the mean for 10,000 such readings is E = 5/sqrt(10000) = 0.05C. That's the power of having a lot of data. Each individual data point might have a lot of noise by itself, but when you aggregate many data points into a mean, the noise averages down with each new data point you add.
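
    Here's that formula as a quick Python sketch, using the hypothetical 5.0C / 10,000-reading example:

    Code:
    import math

    def error_of_mean(rms_error, n):
        # Standard error of the mean: E = S / sqrt(N)
        return rms_error / math.sqrt(n)

    # Hypothetical instruments with 5.0C RMS error, averaged over 10,000 readings
    print(error_of_mean(5.0, 10_000))   # 0.05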

    Take a look at the BE dataset. Some of those monthly mean errors are as high as 0.50C back in the 1800s, because the instrumental error of the constituent samples was quite large back then. But around 2000 it is pretty steady at about 0.07C. And when you aggregate and average 60 months at a time for a 5yr running mean, the error goes way down because N goes way up.

    And, like I said, the way scientists deal with systematic errors is to reframe the problem in terms of anomaly analysis instead of absolute analysis, so that the systematic bias cancels out when you do the subtraction with the baseline value. This is why almost all global mean surface temperature datasets are published as anomalies instead of absolutes: it's more accurate that way.

    Most datasets including BE publish the absolute baseline temperature they use, in case someone wants to know what the actual temperature was in any given month. BE's baseline is 14.183C ±0.051, so, for example, the Jan 2019 mean temperature was (14.183 + 0.867) ± (0.051 + 0.054) = 15.050C ±0.105. Note that adding the individual RMS errors of 0.051 and 0.054 to get 0.105 isn't actually correct; RMS errors aren't additive like that. The combined RMS error is actually a bit lower, but it's tricky to compute the right way, so I just added them together knowing that I was overestimating the error. But you get the idea all the same.
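
    For what it's worth, here's a sketch of the usual quadrature (root-sum-of-squares) rule for combining errors, assuming the two error values above are independent:

    Code:
    import math

    base_err = 0.051   # baseline uncertainty (C), per Berkeley Earth
    anom_err = 0.054   # Jan 2019 anomaly uncertainty (C)

    # Simple addition, as in the post (overestimates the combined error)
    print(base_err + anom_err)                    # 0.105
    # Quadrature sum: the standard rule for independent errors
    print(math.sqrt(base_err**2 + anom_err**2))   # ~0.074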
     
    Last edited: Mar 4, 2019
  8. yguy

    yguy Well-Known Member

    Joined:
    Feb 4, 2010
    Messages:
    18,423
    Likes Received:
    886
    Trophy Points:
    113
    Gender:
    Male
    So besides instrumental error, what other uncertainty is included in that number?
    No it isn't. A temp reading can be perfectly accurate, but if it's the only one in 10,000 square miles, what the hell good is it?
     
    Sunsettommy likes this.
  9. Hoosier8

    Hoosier8 Well-Known Member Past Donor

    Joined:
    Jan 16, 2012
    Messages:
    107,541
    Likes Received:
    34,488
    Trophy Points:
    113
    How can you cancel out bias with a homogenization radius of 1,200 km? How do you end up, through homogenization, with hotter temperatures where there are no stations than at the surrounding stations?
     
    Last edited: Mar 4, 2019
    Blaster3, vman12 and drluggit like this.
  10. iamanonman

    iamanonman Well-Known Member

    Joined:
    Dec 2, 2016
    Messages:
    4,826
    Likes Received:
    1,576
    Trophy Points:
    113
    I think this is one of the biggest challenges for the conventional datasets, because they only incorporate surface temperature readings, which were really sparse 100 years ago, especially in the polar regions. Even reanalysis datasets, which incorporate far more information, have problems with this time period. We're probably just going to have to live with the fact that global mean surface temperature errors around 1900 are really high compared to today.
     
  11. Hoosier8

    Hoosier8 Well-Known Member Past Donor

    Joined:
    Jan 16, 2012
    Messages:
    107,541
    Likes Received:
    34,488
    Trophy Points:
    113
    If the errors are low, how come the record changes with every iteration? The uncertainty tail widens the further back you go, so the uncertainty is much larger in 1900 than today.
     
    Sunsettommy likes this.
  12. drluggit

    drluggit Well-Known Member

    Joined:
    Nov 17, 2016
    Messages:
    31,067
    Likes Received:
    28,516
    Trophy Points:
    113
    Which has been, and continues to be, the primary criticism of the data points and of the reanalysis estimates that are now being included in datasets and substantively changing the average temp maps. It's tantamount to guessing. It's what is wrong with the way temp data is collected today, where monitoring and collection are methodologically different, with different error sets. It's why my local "official" temp for our location is ALWAYS more than 2F warmer than other collectors not at the airport... The temp record is littered with this kind of error. It makes the data quality highly unreliable, and generally makes it difficult to apply social or economic policy based on it.
     
    Last edited: Mar 4, 2019
    Blaster3 and vman12 like this.
  13. iamanonman

    iamanonman Well-Known Member

    Joined:
    Dec 2, 2016
    Messages:
    4,826
    Likes Received:
    1,576
    Trophy Points:
    113
    It changes on each iteration because, as problems are identified, they are fixed, and because new observations are constantly being added, even for time periods 100 years in the past.

    And yes. The uncertainty is much larger for older time periods.
     
  14. Hoosier8

    Hoosier8 Well-Known Member Past Donor

    Joined:
    Jan 16, 2012
    Messages:
    107,541
    Likes Received:
    34,488
    Trophy Points:
    113
    Which particular errors? I hope you realize the readings are run through a model, and that a new model will apply the same correction to every reading. Also, the homogenization often doesn't make sense, like adjusting more isolated locations up in temperature based on stations located in heat islands.
     
    Sunsettommy and drluggit like this.
  15. iamanonman

    iamanonman Well-Known Member

    Joined:
    Dec 2, 2016
    Messages:
    4,826
    Likes Received:
    1,576
    Trophy Points:
    113
    And yet all of the world's leading experts, including skeptics, agree that the temperature record is reliable enough to draw definitive conclusions. The published margin of error makes that pretty clear, I think.

    Remember, just because YOU don't know or understand something doesn't mean expert scientists are equally clueless. And that's not meant as a personal attack; the same statement applies to me as well, because I'm no expert, nor am I as knowledgeable as the world's leading experts on the topic.
     
  16. iamanonman

    iamanonman Well-Known Member

    Joined:
    Dec 2, 2016
    Messages:
    4,826
    Likes Received:
    1,576
    Trophy Points:
    113
    Who adjusts isolated temperatures up and what effect does this have on the warming trend?

    Have you let Berkeley Earth know about your concerns? What was their response?
     
  17. Hoosier8

    Hoosier8 Well-Known Member Past Donor

    Joined:
    Jan 16, 2012
    Messages:
    107,541
    Likes Received:
    34,488
    Trophy Points:
    113
    Every iteration has made the past cooler, exaggerating the warming trend. All you have to do is look at the differences.
     
    Blaster3, Sunsettommy and drluggit like this.
  18. drluggit

    drluggit Well-Known Member

    Joined:
    Nov 17, 2016
    Messages:
    31,067
    Likes Received:
    28,516
    Trophy Points:
    113
    And you don't see that as a problem... huh? When the published record suggests that it comes with a confidence rating of <40%, and that is being expressed as "reliable enough", I'd suggest you have issues. More, when it comes to developing public policy, the low confidence must make policy makers reluctant to follow any recommendations based on the data.

    And the flip side of that is that where the data is in play, the public policy employed has made environmental instability much more of an issue than ever. See Kalifornia...

    I had to laugh today, though. Volvo has "committed to lowering the top speed of their vehicles". Read: putting less powerful engines into their cars, because they never figured out how to compete in that space... So why bother trying to make better, faster cars when you can simply use the excuse that, by lowering the output of their cars, they are somehow being more "green"? It's laughable.
     
    Blaster3 likes this.
  19. drluggit

    drluggit Well-Known Member

    Joined:
    Nov 17, 2016
    Messages:
    31,067
    Likes Received:
    28,516
    Trophy Points:
    113
    This has been a trend for a number of years now. It's why the 1930s are far cooler now than they were 30 years ago in the data sets. Of course, the 30s saw temps that are still higher than today's, but if you admit that, then what we see today has no uniqueness or impact.
     
    Blaster3 and Hoosier8 like this.
  20. iamanonman

    iamanonman Well-Known Member

    Joined:
    Dec 2, 2016
    Messages:
    4,826
    Likes Received:
    1,576
    Trophy Points:
    113
    Here are the differences between the conventional datasets, with the raw data plotted as well. It's pretty well known that the net effect of all of the adjustments is to broadly lessen the warming trend prior to WWII, not exaggerate it. And after WWII, when the anthropogenic component of the warming began dominating, the adjustments don't make any meaningful difference at all. Everybody, including the skeptic-founded institution Berkeley Earth, agrees the planet is warming.

    [image: global mean surface temperature from 7 datasets, raw and adjusted]
     
    drluggit likes this.
  21. iamanonman

    iamanonman Well-Known Member

    Joined:
    Dec 2, 2016
    Messages:
    4,826
    Likes Received:
    1,576
    Trophy Points:
    113
    The global mean surface temperature was not higher in the 30's than today. Like, not even close.

    And if you don't agree with the necessary adjustments that the conventional datasets use prior to WWII then you're going to have to accept that the warming is even more pronounced than what they are publishing.
     
  22. drluggit

    drluggit Well-Known Member

    Joined:
    Nov 17, 2016
    Messages:
    31,067
    Likes Received:
    28,516
    Trophy Points:
    113
    You've said this before, yet the evidence of the heat records, the number of records, etc. all supports this observation. It's why on the North American continent, for example, we have half the number of 90 degree days that we did then. The ONLY reason anyone has suggested this isn't the case is those datasets that have been artificially manipulated to remove those records and lower the observable data. So you don't get to have it both ways. The 30s were warmer. Period. As far as global indicators, we don't have sufficient records elsewhere to have ANY credibility suggesting this was isolated or otherwise reflective of only the North American continent.
     
    Blaster3 likes this.
  23. iamanonman

    iamanonman Well-Known Member

    Joined:
    Dec 2, 2016
    Messages:
    4,826
    Likes Received:
    1,576
    Trophy Points:
    113
    Record highs outnumber record lows 2-to-1 in the United States. Do you have an official source for the 90 degree days that I can look at?

    It's actually the opposite. The net adjustment made to 1930s temperature records has been to increase the readings and thus decrease the warming rate. I take it you don't agree with these "artificially manipulated" datasets? If so, then you're just going to have to grin and bear the fact that the amount of warming that has occurred since the 1930s is even higher than what scientists say it is.

    And yet all of the world's leading experts, including institutions founded and funded by skeptics, disagree with you. Why is it that you think you're so much smarter than the world's leading experts on this issue? And why don't you publish your concerns and explain to them how they got it wrong and how to get the right answer?
     
    Last edited: Mar 4, 2019
  24. drluggit

    drluggit Well-Known Member

    Joined:
    Nov 17, 2016
    Messages:
    31,067
    Likes Received:
    28,516
    Trophy Points:
    113
    Record low highs outnumbered record highs. The heatwave in the 1930s has never been eclipsed.
    [image: record temperature chart]

    The temp record clearly doesn't reflect this; it has been artificially removed from the data models.

    See the latest NOAA chart:

    [image: latest NOAA temperature chart]

    compared to other historic records...

    [image: historic temperature record]
    Where the 1930s are accurately listed.

    Of course the first NOAA representations look different, and clearly they aren't equal. The observation is that NOAA has consistently underreported the temps during the 1930s. So, yes, they NEED to be more accurately reflected, and clearly the record would require the modeled data to be corrected to reflect the actual collected temp data from the era, which would bring it in line with the last graph.
     
    Blaster3 likes this.
  25. iamanonman

    iamanonman Well-Known Member

    Joined:
    Dec 2, 2016
    Messages:
    4,826
    Likes Received:
    1,576
    Trophy Points:
    113
    It does reflect this. You are aware that the United States is NOT the same thing as the whole Earth, right? Remember, the contiguous United States represents only 1.5% of the Earth.

    That may be true. I don't doubt their temperature reconstruction, because it has been peer reviewed. Just know that it is for the western part of North America only, and primarily the western United States, so it covers even less than 1.5% of the Earth.

    They look different because they are not the same measurement. The top graph isn't even a temperature graph. And while the bottom graph is at least plotting temperatures, it is only doing so for the 30-55N, 95-135W part of the Earth.

    Refer to post #20 for a graph of the global mean surface temperature from 7 different datasets. This is a like-to-like comparison because all 7 plots are of the global mean surface temperature.

    And yes. The dust bowl years in the United States were very warm. Many of the heat records achieved back then still stand today. Do you know why the dust bowl years were so warm in the United States?
     
