
Tracy Flick

Members
  • Posts

    104
  • Joined

  • Last visited

Profile Information

  • Location
    DL

Tracy Flick's Achievements

Contributor (5/14)

Recent Badges

  • Conversation Starter
  • Reacting Well
  • Very Popular
  • First Post
  • Collaborator

Reputation

216

  1. ... and the A68 (yes, an 'A' road) near Tow Law http://www.durham.gov.uk/article/4530/Road-weather-station-camera-details?action=detail&station=12
  2. Pretty nasty conditions up near Forest-in-Teesdale School: you'd definitely need a 4x4 to get to school today(!)
  3. Seeing as you seem interested, I'll just add the following if I may: as well as the accuracy limits from the coding approximations, there is also a fundamental limit introduced because the system is essentially chaotic: starting from 2 data points (or 2 ensemble members) that differ only by a tiny amount, the differences between the 2 will stay small for a short while, then suddenly diverge. Even if there were NO approximations in the coding, this sudden divergence of the ensemble members can still occur. It will happen at a somewhat different time and with a somewhat different outcome, but it can occur all the same. This is best shown by the Lorenz attractor in this video: note how the 3 slightly different starting points suddenly diverge at around 55 seconds. This is not a coding error, but a feature of the exact mathematical solution. So if the current, unusual atmospheric state that we have right now is close to one of these areas where divergence suddenly occurs, then uncertainty in the forecast could be very high not only because we've made approximations, but also because two very close starting points will very soon rapidly diverge. (If we are near the place of sudden divergence, we would say Shannon entropy is currently high.) A minimal numerical sketch of this divergence follows at the end of this list.
  4. NWP is traditionally purely Newtonian physics (no historical data used), but some companies are trying Artificial Intelligence ('neural networks', the self-learning way). How I see these 2 approaches:
Using pure Newtonian physics (the traditional MetO way): in the case of fluid dynamics, the balance of forces and the conservation of momentum, energy and mass are expressed through the Navier-Stokes partial differential equations (mass, energy and momentum are conserved in everything, even car crashes... read about Newton's laws for more). But these NS equations are difficult to 'solve' both numerically and analytically... in fact we don't even know how to prove in general that physical solutions to them exist at all. The best we can do is therefore create a simplified version of the NS equations and then use computational techniques such as 'finite difference methods' (see Wikipedia, and the sketch after this list) to solve the many partial differential equations simultaneously: these iterate the evolution over increments of time and space (the space increments are the grid spacings that are often referred to). The starting point (t=0) is observed data combined with reanalysis data. Obviously smaller increments of time and space give better results but are more computationally expensive.
These Newtonian NWP models should in theory be fine with unusual atmospheric states because the laws of physics are always true... however, the approximations introduced by the physicists and programmers may mean that accuracy is only guaranteed for certain ranges of atmospheric states. By making observations of unusual states we can test the validity of these approximations and improve them if necessary. An example: if there's an experimental observation such as 'high sea surface temperatures in an ocean lead to warmer summers', this will first be checked for good statistical correlation. Then it will be checked against the computer model to see if the simulation output matches the observation. If it does, great; if not, it means something is either missing from the code or in error in the code. Then the possible sources of error are identified, e.g. maybe evaporation needs to be modelled less simplistically, so they will try that. This will mean increasing the complexity of the approximate partial differential equations, or adding new ones.
Nonlinear systems (e.g. the atmosphere) require exponential increases in computing power for linear increases in forecasting accuracy... I suspect there is therefore no real desire anymore to improve results simply by buying better computers, because the forecasting improvement is increasingly tiny for each extra dollar spent. It is surely more efficient to improve the formulation of the differential equations that have to be simultaneously solved... a question that could be asked is e.g. "how can I better model turbulence on the edges of tropical storms?"... answering it will lead to modifications of the set of partial differential equations. The GFS I think uses finite difference methods to solve the partial differential equations, but they want to advance to finite volume methods. ECMWF use spectral methods... these involve writing the solution as a sum of basis functions, as is done in Fourier analysis.
The AI neural-network way: some private companies, it seems, are now trying to get forecast results somewhat blindly using Artificial Intelligence. See https://blogs.microsoft.com/ai/hows-the-weather-using-artificial-intelligence-for-better-answers/ Basically this method I think (or at least one possible implementation) uses historic observations with known outcomes to 'train' the software. Then, given enough training, it should be able to spot patterns and give a correct output (the forecast); a toy sketch of this training idea also appears after this list. This is similar to spectral methods in that we are trying to find weights to give to each element of a set of functions... BUT this time the desired weighting on a function is evolved towards using nothing but historic observations as a guide, whereas ECMWF find the weights by 'simply' plugging the sets of functions into the partial differential equations. For an introduction to weather forecasting with neural networks, and to practise using them yourself: https://www.amazon.co.uk/Neural-Network-Programming-Java-Souza/dp/178588090X
As computing power increases AI might become more important in forecasting, BUT weather is sensitive to initial conditions, so perfect AI forecasting can surely only be achieved with an infinite amount of training data(!)... that's to say, AI methods must have at least the same fundamental limitation as classical physics methods. I suspect it is with unusual atmospheric states that AI will seriously struggle. Maybe combining AI and traditional NWP in some way is where some will go in the future. TLDR of my take: AI uses purely historic weather observations, whereas traditional NWP uses purely Newton's laws of physics, with approximations. I'm sure someone at the MetO is playing with AI, but they surely aren't using it yet.
  5. The GEM goes for The Scream option. Even Scotland would struggle to stay snowy with this one.
  6. For those who want the snow to survive for the one weekend (which surely isn't asking too much, is it?) the GFS 18z is not a concern. What is a concern is the UKMO and now the latest shift in the ECM run set. I don't know why they have to run the GFS 4 times a day to 384h: maybe it's an attempt to buy love and attention by compensating for the lack of quality with excess quantity. What's a pro forecaster meant to do, publish a new in-depth analysis every 6 hours? It's an overload of information that muddies the waters rather than making them clearer. Anyway FWIW, the JMA + GEM ensemble means still have promise out to a week on Tuesday for much of the country. I don't expect a shift in tone from the MetO just yet. If the snow survives one weekend, rather than just wrecking the drive to work then vanishing Friday night, I'll be happy.
  7. I wouldn't be surprised if a few rural locations in the southern half of the UK get scenes like this at some point in the next 6-12 days, as the lows try to push in from the south-west.
  8. Worth noting that morning and evening rush hours are in daylight in March, which should help. Personally I hated 2010 much more than 2013 because I was on a Pennine road every day at 6 am in the dark. Scary as hell, I couldn't see the front of my car let alone the nearest farmhouse.
  9. If by 'Next week' she means 'day 0 to day 7' it's probably just about a reasonable assessment. (I didn't hear or watch it though.)
  10. There is good agreement now between the GEFS mean and the ECM mean: the models seem to be converging on a slack, dry easterly for the 6-10 day range.
  11. The ECM mean evolution: 3 days ago it predicted blocking heaven for next Friday. That prediction then adjusted to blocking purgatory as geopotential heights lowered near Greenland. But the latest run has the block to the east fighting back. So the ECM hasn't quite returned to what it was showing 3 days ago, because of differences around S Greenland, but the block to the east is nonetheless fighting westward. A JMA mean for the same day, and a few days later, shows the blocking starting to win to the north as battleground/undercut scenarios develop. The ECM mean is similar to the JMA mean, but slightly more easterly. The differences between JMA and EC now seem to be narrowing down to speed of evolution more than anything else. No doubt everything will change again this evening.
  12. The evolution of the GEFS 12z mean over the last 4 days, for 1300h on 23rd Feb, alongside the EC means from the 13th and 14th for the same day: the Atlantic's significance is increasing with each run. Tonight's EC mean will be interesting.
  13. March easterlies need a northerly component for lasting settled snow. By April the requirement has become a full-on northerly. The most statistically likely outcome is the worst of both worlds: no memorable snow and no pleasant warmth.
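
A minimal numerical sketch of the sudden divergence described in post 3 (my own illustration, not anyone's model code): it integrates the Lorenz equations with a fixed-step RK4 scheme for two starting points differing by one part in a hundred million, and prints their separation as time goes on. The parameter values are Lorenz's classic ones; the step size and run length are arbitrary choices for demonstration.

```python
# Lorenz '63 system: two runs from nearly identical starting points.
# Plain Python, fixed-step 4th-order Runge-Kutta.

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, state, dt):
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

dt = 0.01
a = (1.0, 1.0, 1.0)            # 'ensemble member' 1
b = (1.0 + 1e-8, 1.0, 1.0)     # 'ensemble member' 2: tiny initial difference

for step in range(4001):
    if step % 500 == 0:
        sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * dt:5.1f}   separation = {sep:.3e}")
    a = rk4_step(lorenz, a, dt)
    b = rk4_step(lorenz, b, dt)
```

The separation stays tiny for a while, grows roughly exponentially, then saturates once the two trajectories are on opposite wings of the attractor: exactly the kind of divergence that is a feature of the mathematics, not a coding error.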
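
To make the 'finite difference methods' part of post 4 concrete, here is a minimal sketch, assuming the simplest possible stand-in PDE: 1D linear advection, du/dt + c du/dx = 0, solved with a first-order upwind scheme on a periodic grid. The grid size, time step and initial bump are all illustrative choices of mine; a real NWP core solves a far larger coupled set of Navier-Stokes-derived equations, but the iterate-over-increments-of-time-and-space idea is the same.

```python
# 1D linear advection by first-order upwind finite differences.
# dx plays the role of the 'grid spacing' often referred to in model talk.

import math

nx, dx, dt, c = 100, 1.0, 0.5, 1.0   # grid points, spacings, wind speed
assert c * dt / dx <= 1.0            # CFL stability condition

# t = 0: an initial 'observed' bump (stand-in for observations + reanalysis)
u = [math.exp(-0.01 * (i - 20) ** 2) for i in range(nx)]

for step in range(100):                          # march forward in time
    u = [u[i] - c * dt / dx * (u[i] - u[i - 1])  # upwind difference; u[-1]
         for i in range(nx)]                     # gives a periodic boundary

print("bump peak is now near grid point", max(range(nx), key=lambda i: u[i]))
```

Note the CFL condition c*dt/dx <= 1: shrinking the grid spacing forces a smaller time step too, which is one reason halving the grid spacing far more than doubles the computational cost.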
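
And a toy sketch of the 'train on historic observations with known outcomes' idea from post 4, with everything fabricated for illustration: a single logistic neuron learns weights mapping two made-up predictors (temperature anomaly, pressure anomaly) to a snow/no-snow outcome by gradient descent. Real systems use vastly richer inputs and deep networks; the point is only that the weights are evolved from historical data alone, with no Newtonian physics anywhere.

```python
# Toy 'neural network' (one logistic neuron) trained on fabricated history.

import math, random

random.seed(0)
# Fabricated 'historic' cases: colder + higher pressure -> snow more likely.
history = [((t, p), 1 if (-2 * t + p + random.gauss(0, 1)) > 0 else 0)
           for t, p in [(random.gauss(0, 1), random.gauss(0, 1))
                        for _ in range(500)]]

w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid -> probability of snow

for epoch in range(200):                 # 'training' = evolving the weights
    for x, label in history:
        err = predict(x) - label         # gradient of cross-entropy w.r.t. z
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print("learned weights:", [round(v, 2) for v in w])
print("P(snow | cold, high pressure) =", round(predict((-1.5, 1.0)), 2))
```

Given enough cases the learned weights recover the hidden rule used to generate the data; for an unusual state far outside the training history there is no such guarantee, which is exactly where post 4 suspects AI will struggle.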