
Global Surface Air & Sea Temperatures: Current Conditions and Future Prospects


BornFromTheVoid

Recommended Posts

Posted
  • Location: Rochester, Kent

 

And two days ago was the warmest day of the year, so far.


Posted
  • Location: Camborne

Global highlights: Year-to-date (January–March 2015)

  • During January–March, the average temperature across global land and ocean surfaces was 1.48°F (0.82°C) above the 20th century average. This was the highest for January–March in the 1880–2015 record, surpassing the previous record of 2002 by 0.09°F (0.05°C).
  • During January–March, the globally-averaged land surface temperature was 2.86°F (1.59°C) above the 20th century average. This was the highest for January–March in the 1880–2015 record, surpassing the previous record of 2002 by 0.09°F (0.05°C).
  • During January–March, the globally-averaged sea surface temperature was 0.95°F (0.53°C) above the 20th century average. This was the third highest for January–March in the 1880–2015 record.
  • For extended analysis of global temperature and precipitation patterns, please see our full March report.


Posted
  • Location: Napton on the Hill Warwickshire 500ft
  • Weather Preferences: Snow and heatwave

Global highlights: Year-to-date (January–March 2015)

  • During January–March, the average temperature across global land and ocean surfaces was 1.48°F (0.82°C) above the 20th century average. This was the highest for January–March in the 1880–2015 record, surpassing the previous record of 2002 by 0.09°F (0.05°C).
  • During January–March, the globally-averaged land surface temperature was 2.86°F (1.59°C) above the 20th century average. This was the highest for January–March in the 1880–2015 record, surpassing the previous record of 2002 by 0.09°F (0.05°C).
  • During January–March, the globally-averaged sea surface temperature was 0.95°F (0.53°C) above the 20th century average. This was the third highest for January–March in the 1880–2015 record.
  • For extended analysis of global temperature and precipitation patterns, please see our full March report.

 

 

Déjà vu, anyone?

 

""The March 2015 global temperature was the third highest monthly departure from average on record for any month, just 0.01°C (0.02°F) lower than the monthly anomalies for February 1998 and January 2007

 

Re 2014... Yet the Nasa press release failed to mention this, as well as the fact that the alleged ‘record’ amounted to an increase over 2010, the previous ‘warmest year’, of just two-hundredths of a degree – or 0.02C. The margin of error is said by scientists to be approximately 0.1C – several times as much.

Read more: http://www.dailymail.co.uk/news/article-2915061/Nasa-climate-scientists-said-2014-warmest-year-record-38-sure-right.html#ixzz3XsbG1pr4 

 

Edited by stewfox

Posted
  • Location: Near Newton Abbot or east Dartmoor, Devon

Déjà vu, anyone?

 

""The March 2015 global temperature was the third highest monthly departure from average on record for any month, just 0.01°C (0.02°F) lower than the monthly anomalies for February 1998 and January 2007

 

Re 2014... Yet the Nasa press release failed to mention this, as well as the fact that the alleged ‘record’ amounted to an increase over 2010, the previous ‘warmest year’, of just two-hundredths of a degree – or 0.02C. The margin of error is said by scientists to be approximately 0.1C – several times as much.

Read more: http://www.dailymail.co.uk/news/article-2915061/Nasa-climate-scientists-said-2014-warmest-year-record-38-sure-right.html#ixzz3XsbG1pr4 

What do you want them to do? Not tell it how it was? If Lewes Hamilton qualifies first by 0.001 of a second, does he not come first according to the sceptic mindset, then?


Posted
  • Location: Rochester, Kent

What do you want them to do? Not tell it how it was? If Lewes Hamilton qualifies first by 0.001 of a second, does he not come first according to the sceptic mindset, then?

 

Lewis, even.

 

It depends on error. With the best will in the world, all measurements contain error. It could be instrumental, it could be bias; it doesn't matter what the cause is, the error is there.

 

Essentially, with climate series, all the producers homogenise (is that a word?) the series so that it fulfils certain basic statistical assumptions, the primary one being that it is modified so that it is normally distributed. The process of doing this in and of itself introduces errors, too. If you take the time to read how HadCRUT is produced, a vast amount of the published work by Jones et al. is given over to describing how to homogenise and compute the error. It is a first-class piece of work.

 

If you fail to say 'this is the figure, plus or minus this error', then that's a failure; it's not anything else. It is selective data bias, on a par with, say, picking the start date before slapping a trend on an Excel spreadsheet. Failure it is.

 

As for your metaphor, when does it become ridiculous? A millionth of a second? A trillionth? At what point do you make the assumption that the figure is tosh? Most thermometers around the world only measure, accurately, to 1/10th of a degree. The 100th of a degree is extra information designed to show which way the underlying data is biased, or to allow for truncation and/or rounding error.

 

To categorically state that the measured world temperature is a record to 1/100th of a degree, particularly without specifying the error, is pure hyperbole. Of course, it has to be a figure of something, and a record by 0.02 might well be a computational record, but in the real world it is meaningless.

 

All of the climate models/computations use IEEE 64-bit floating-point types (last time I looked). Does it surprise you that this datatype can't even represent 0.1 exactly on the underlying hardware?

 

Several different representations of real numbers have been proposed, but by far the most widely used is the floating-point representation. Floating-point representations have a base β (which is always assumed to be even) and a precision p. If β = 10 and p = 3, then the number 0.1 is represented as 1.00 × 10⁻¹. If β = 2 and p = 24, then the decimal number 0.1 cannot be represented exactly, but is approximately 1.10011001100110011001101 × 2⁻⁴.

 

http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
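
The point in that excerpt is easy to check for yourself in C# (the language used further down this thread). A minimal sketch: the "G17" round-trip format asks .NET for enough digits to expose the value a double actually stores, rather than the shortest pretty string.

using System;

class PointOneDemo
{
    static void Main()
    {
        // "G17" reveals the stored value rather than the rounded display value
        double tenth = 0.1;
        Console.WriteLine(tenth.ToString("G17")); // 0.10000000000000001

        // Summing ten of them drifts measurably away from 1.0
        double sum = 0.0;
        for (int i = 0; i < 10; i++) sum += tenth;
        Console.WriteLine((sum - 1.0).ToString("G17")); // about -1.1102230246251565E-16
    }
}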

 

Some, without a real grasp of the underlying computational fundamentals, will gamble the world's future on approximate arithmetic. You'd better just be glad that your financial computations aren't done using floating-point arithmetic. In fact, any computer programmer who used it at any company in the world would find themselves fired in short order. When it comes to money, I'm afraid, people are prepared to invest and take the time to get it right at the most fundamental level.
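
This is, for what it's worth, exactly why C# ships a base-10 decimal type intended for money. A minimal sketch of the difference:

using System;

class MoneyMath
{
    static void Main()
    {
        // double is base-2: 0.1 and 0.2 are both stored inexactly
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);          // False
        Console.WriteLine(d.ToString("G17")); // 0.30000000000000004

        // decimal is base-10 floating point: the same sum is exact
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m);         // True
    }
}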

Edited by Sparkicle

Posted
  • Location: Mytholmroyd, West Yorks.......
  • Weather Preferences: Hot & Sunny, Cold & Snowy

Hi Sparks, thanks for that! Does the 'error' remain constant over the series, or is it variable over time? I only ask as I am unsure of the outcome.

 

If this year's global temp sets a record that is again smaller than the error bar, will it still be statistically warmer than last year's final figure?


Posted
  • Location: Rochester, Kent

 

Hi Sparks, thanks for that! Does the 'error' remain constant over the series, or is it variable over time? I only ask as I am unsure of the outcome.

 

If this year's global temp sets a record that is again smaller than the error bar, will it still be statistically warmer than last year's final figure?

 

Not sure if it's constant throughout the series.

Out of interest, 2014 on the final figure came out top. Right at the bottom end of the error range it would've been 27th and, clearly, at the top end of the error range it's 1st.
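
To see how a ranking can slide across an error bar like that, here is a rough C# sketch. The anomaly figures are the January–April values quoted later in this thread; the ±0.05 margin is purely illustrative, not any dataset's published uncertainty.

using System;
using System.Linq;

class RankWithinError
{
    // Rank = 1 + how many competitors sit strictly above the value tested
    static int RankOf(double value, double[] others) =>
        1 + others.Count(o => o > value);

    static void Main()
    {
        // Anomalies (°C) for 2010, 2007, 2002 and 1998, as quoted below
        double[] others = { 0.78, 0.73, 0.72, 0.69 };
        double candidate = 0.79; // 2015
        double margin = 0.05;    // illustrative error bar only

        Console.WriteLine(RankOf(candidate, others));          // 1: record on the central estimate
        Console.WriteLine(RankOf(candidate - margin, others)); // 2: no longer a record at the lower bound
    }
}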


Posted
  • Location: Solihull, West Midlands. - 131 m asl
  • Weather Preferences: Sun, Snow and Storms

   

Not sure if it's constant throughout the series.

Out of interest, 2014 on the final figure came out top. Right at the bottom end of the error range it would've been 27th and, clearly, at the top end of the error range it's 1st.

 

Sparks

 

Do you have any numbers to report for last year, in terms of the error bars compared to the actual recorded figures?

Also, any other years would be of interest.

Just if you are aware; for God's sake don't do any work on it! 

It certainly would be interesting to see the error being recorded, compared with that quoted.

 

MIA


Lewis, even.

 

It depends on error. With the best will in the world, all measurements contain error. It could be instrumental, it could be bias; it doesn't matter what the cause is, the error is there.

 

Essentially, with climate series, all the producers homogenise (is that a word?) the series so that it fulfils certain basic statistical assumptions, the primary one being that it is modified so that it is normally distributed. The process of doing this in and of itself introduces errors, too. If you take the time to read how HadCRUT is produced, a vast amount of the published work by Jones et al. is given over to describing how to homogenise and compute the error. It is a first-class piece of work.

 

If you fail to say 'this is the figure, plus or minus this error', then that's a failure; it's not anything else. It is selective data bias, on a par with, say, picking the start date before slapping a trend on an Excel spreadsheet. Failure it is.

 

As for your metaphor, when does it become ridiculous? A millionth of a second? A trillionth? At what point do you make the assumption that the figure is tosh? Most thermometers around the world only measure, accurately, to 1/10th of a degree. The 100th of a degree is extra information designed to show which way the underlying data is biased, or to allow for truncation and/or rounding error.

 

To categorically state that the measured world temperature is a record to 1/100th of a degree, particularly without specifying the error, is pure hyperbole. Of course, it has to be a figure of something, and a record by 0.02 might well be a computational record, but in the real world it is meaningless.

 

All of the climate models/computations use IEEE 64-bit floating-point types (last time I looked). Does it surprise you that this datatype can't even represent 0.1 exactly on the underlying hardware?

 

Several different representations of real numbers have been proposed, but by far the most widely used is the floating-point representation. Floating-point representations have a base β (which is always assumed to be even) and a precision p. If β = 10 and p = 3, then the number 0.1 is represented as 1.00 × 10⁻¹. If β = 2 and p = 24, then the decimal number 0.1 cannot be represented exactly, but is approximately 1.10011001100110011001101 × 2⁻⁴.

 

http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

 

Some, without a real grasp of the underlying computational fundamentals, will gamble the world's future on approximate arithmetic. You'd better just be glad that your financial computations aren't done using floating-point arithmetic. In fact, any computer programmer who used it at any company in the world would find themselves fired in short order. When it comes to money, I'm afraid, people are prepared to invest and take the time to get it right at the most fundamental level.

 

A bit off topic, but 80-bit extended-precision floating points, also known as long doubles or REAL10 values, have been the native on-chip standard for Intel FPUs since the early 1980s (Intel's work led to the IEEE standards). To put it into perspective, with correct rounding a decimal value of up to 18 significant digits can be converted to a binary float and back without loss of precision, and an FPU-calculated value converts to decimal accurately to 21 significant places.

For when every trillionth of a degree matters!
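
C# doesn't expose the x87 80-bit format directly, but the analogous round-trip guarantee for ordinary 64-bit doubles - 17 significant decimal digits - is easy to check empirically. A rough sketch, assuming the "G17" format specifier (.NET's documented round-trip format):

using System;

class RoundTripCheck
{
    static void Main()
    {
        var rng = new Random(1);
        int failures = 0;

        // Format each double with 17 significant digits, parse it back,
        // and count any value that fails to survive the round trip
        for (int i = 0; i < 100000; i++)
        {
            double x = (rng.NextDouble() - 0.5) * Math.Pow(10.0, rng.Next(-300, 301));
            if (double.Parse(x.ToString("G17")) != x) failures++;
        }

        Console.WriteLine(failures); // expected: 0
    }
}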

Edited by Interitus

Posted
  • Location: Rochester, Kent

A bit off topic, but 80-bit extended-precision floating points, also known as long doubles or REAL10 values, have been the native on-chip standard for Intel FPUs since the early 1980s (Intel's work led to the IEEE standards). To put it into perspective, with correct rounding a decimal value of up to 18 significant digits can be converted to a binary float and back without loss of precision, and an FPU-calculated value converts to decimal accurately to 21 significant places.

For when every trillionth of a degree matters!

 

 

It doesn't matter how many bits you use for representation; the fact remains that 0.1 is not representable, exactly, in binary, which is why no one in their right mind would consider using any floating-point value for any financial calculation whatsoever (because some values are not representable, exactly, in binary).

 

Edited by Sparkicle

It doesn't matter how many bits you use for representation; the fact remains that 0.1 is not representable, exactly, in binary, which is why no one in their right mind would consider using any floating-point value for any financial calculation whatsoever (because some values are not representable, exactly, in binary).

 

 

Well, explain the difference from representing fractions, a third for example, in decimal. It is not an issue with binary; it is an issue with any base and its factors, and as long as one is aware of possible rounding errors and of coding options such as denormalising values, extending precision further, or continued fractions, then it is not a problem. It depends whether we are considering purely financial or general calculations, but binary floats can represent and differentiate between values to a precision of 0.1 with absolute ease (unless the exponent is astronomically enormous); it is nonsense to suggest otherwise, IMO.
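
In the spirit of "being aware of possible rounding errors", one common coding pattern is to compare doubles against a tolerance scaled to the operands rather than for exact equality. A minimal sketch; the 1e-12 tolerance is an arbitrary illustrative choice, not a universal constant:

using System;

class ToleranceCompare
{
    // True when a and b differ by less than a tolerance scaled to their magnitude
    static bool NearlyEqual(double a, double b, double relTol = 1e-12)
    {
        double scale = Math.Max(Math.Abs(a), Math.Abs(b));
        return Math.Abs(a - b) <= relTol * Math.Max(scale, 1.0);
    }

    static void Main()
    {
        double sum = 0.1 + 0.7;                   // stored as 0.79999999999999993...
        Console.WriteLine(sum == 0.8);            // False: exact comparison trips on the residue
        Console.WriteLine(NearlyEqual(sum, 0.8)); // True: the tolerance absorbs it
    }
}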

Edited by Interitus

Posted
  • Location: Rochester, Kent

Well, explain the difference from representing fractions, a third for example, in decimal. It is not an issue with binary; it is an issue with any base and its factors, and as long as one is aware of possible rounding errors and of coding options such as denormalising values, extending precision further, or continued fractions, then it is not a problem. It depends whether we are considering purely financial or general calculations, but binary floats can represent and differentiate between values to a precision of 0.1 with absolute ease (unless the exponent is astronomically enormous); it is nonsense to suggest otherwise, IMO.

 

No, 0.1 cannot be represented in binary, in the same way that 1/3 cannot be represented in decimal - they both have infinite fractional expansions.

 

Consider the following C# code excerpt:

using System;

namespace Representation
{
    class MainClass
    {
        public static void Main(string[] args)
        {
            double d = 1.0 * (0.5 - 0.4 - 0.1);
            Console.WriteLine(d.ToString());
        }
    }
}

What's the answer to the double-precision computation? Is it really -2.77555756156289E-17? No, it isn't - it should be zero, and this is of particular importance to dynamical systems, where tiny changes in the inputs can lead to vast changes in the outputs. The exponent doesn't have to be astronomically large, as you suggest - actually, the exponents only need to be significantly different, not necessarily large - and the calculation is faulty because 0.1 cannot be represented in binary. One wouldn't dream of thinking that 0.333 + 0.333 + 0.333 = 1.0.

 

My view is that calculations of importance, such as climate calculations, should be computed using rational arithmetic over infinite-precision integers, but then, of course, one quickly runs into the problem of the transcendental numbers and the approximation that would necessarily be required there. It would be interesting to do a simple climate model using interval arithmetic, where a byproduct of the computation is the range of error. Indeed, that's all you get: the true value must lie between the end points, therefore the end points are the error range.
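
The interval idea can be sketched in a few lines of C#. This is an illustration only - a real interval library would also direct the rounding of each bound outward, which plain C# arithmetic cannot do - and the anomaly figures are borrowed from the NOAA quote earlier in the thread, with an assumed ±0.05 error bar:

using System;

// Each value carries a lower and upper bound; every operation widens
// the bounds so that the true result always lies inside them
struct Interval
{
    public double Lo, Hi;
    public Interval(double lo, double hi) { Lo = lo; Hi = hi; }

    public static Interval operator +(Interval a, Interval b)
        => new Interval(a.Lo + b.Lo, a.Hi + b.Hi);

    public static Interval operator -(Interval a, Interval b)
        => new Interval(a.Lo - b.Hi, a.Hi - b.Lo);

    public override string ToString() => $"[{Lo}, {Hi}]";
}

class IntervalDemo
{
    static void Main()
    {
        // 0.82C anomaly with an assumed +/-0.05C error bar (illustrative)
        var thisYear = new Interval(0.77, 0.87);
        var lastYear = new Interval(0.72, 0.82);

        // The end points of the result are the error range, for free
        Console.WriteLine(thisYear - lastYear); // about [-0.05, 0.15]
    }
}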

Edited by Sparkicle

Posted
  • Location: Edmonton Alberta(via Chelmsford, Exeter & Calgary)
  • Weather Preferences: Sunshine and 15-25c

No, 0.1 cannot be represented in binary, in the same way that 1/3 cannot be represented in decimal - they both have infinite fractional expansions.

 

Consider the following C# code excerpt:

using System;

namespace Representation
{
    class MainClass
    {
        public static void Main(string[] args)
        {
            double d = 1.0 * (0.5 - 0.4 - 0.1);
            Console.WriteLine(d.ToString());
        }
    }
}

What's the answer to the double-precision computation? Is it really -2.77555756156289E-17? No, it isn't - it should be zero, and this is of particular importance to dynamical systems, where tiny changes in the inputs can lead to vast changes in the outputs. The exponent doesn't have to be astronomically large, as you suggest - actually, the exponents only need to be significantly different, not necessarily large - and the calculation is faulty because 0.1 cannot be represented in binary. One wouldn't dream of thinking that 0.333 + 0.333 + 0.333 = 1.0.

 

My view is that calculations of importance, such as climate calculations, should be computed using rational arithmetic over infinite-precision integers, but then, of course, one quickly runs into the problem of the transcendental numbers and the approximation that would necessarily be required there. It would be interesting to do a simple climate model using interval arithmetic, where a byproduct of the computation is the range of error. Indeed, that's all you get: the true value must lie between the end points, therefore the end points are the error range.

Bored now!


Posted
  • Location: Camborne

  GWPF inquiring into temperature adjustments

 

Well, this is an interesting one. The GWPF has announced an inquiry into temperature adjustment practices. It is being boosted by Booker at the Telegraph ("Top scientists start to examine fiddled global warming figures"), which is a bad dent in its credibility. However, it has a reasonably qualified panel, so it may be interesting to see what they develop. They have asked for submissions by June 30, so I might try to come up with something. But the remit sounds like they have been advised by Paul Homewood. For now, I'll just review that:


Posted
  • Location: Ireland, probably South Tipperary
  • Weather Preferences: Cold, Snow, Windstorms and Thunderstorms

NCEP/NCAR reanalysis has April 2015 as the 6th warmest on record (though 3rd down to 8th place are separated by less than 0.1C).

 

[Chart: NCEP/NCAR reanalysis ranking of April global temperatures]

 

January to April 2015 is now the 3rd warmest on record.

 

[Chart: NCEP/NCAR reanalysis ranking of January–April global temperatures]


Posted
  • Location: Camborne

Extending the temperature record of southeastern Australia

 

This is a guest post by Linden Ashcroft. She did her PhD studying non-climatic changes in the early instrumental period in Australia and now works at the homogenization powerhouse, the Centre on Climate Change (C3) in Tarragona, Spain. She blogs weekly about homogenization and life in Spain.

This guest post was originally written for the Climanrecon blog. Climanrecon is currently looking at the non-climatic features of the Bureau of Meteorology’s raw historical temperature observations, which are freely available online. As Neville Nicholls recently discussed in The Conversation, the more the merrier!

 

http://variable-variability.blogspot.co.uk/2015/05/homogenization-southeastern-Australia.html


Posted
  • Location: Camborne

Addicted to global mean temperature

 

“Everything should be made as simple as possible, but not simpler.” There is evidently no record of Einstein having actually used these words, and a quote of his that may be the source of this aphorism has a somewhat different resonance to my ear. In any case, I want to argue here that thinking about the global mean temperature in isolation or working with simple globally averaged box models that ignore the spatial structure of the response is very often “too simple”. I am reiterating some points made in earlier posts, especially #5, #7, and #44, but maybe it is useful to gather these together for emphasis.

 

http://www.gfdl.noaa.gov/blog/isaac-held/2015/03/31/58-addicted-to-global-mean-temperature/

Edited by knocker

Posted
  • Location: Napton on the Hill Warwickshire 500ft
  • Weather Preferences: Snow and heatwave

 

One thing is for sure: such folk won't be winning any marketing awards. I read it three times, then gave up on working out what it shows. I thought labelling graphs was fundamental?

 

""""There is a color key, which is not labelled, but you can query any color on the plot just by clicking it there"   ??

Edited by stewfox

Posted
  • Location: Napton on the Hill Warwickshire 500ft
  • Weather Preferences: Snow and heatwave

 

Interesting, particularly around the satellite changes over the last 30 years.

 

Given these adjustments, can I ask why some folk have such confidence in comparing 1880 data with 2015 data?


Posted
  • Location: Ireland, probably South Tipperary
  • Weather Preferences: Cold, Snow, Windstorms and Thunderstorms
GISS has updated with the 2nd warmest April on record, +0.75C, behind 2010 at +0.81C.

 

[Chart: GISS April 2015 global temperature anomalies]

 


 

 

The year to date is now the warmest on record:

 

2015 +0.79C

2010 +0.78C

2007 +0.73C

2002 +0.72C

1998 +0.69C

 

