
SnowBallz


Everything posted by SnowBallz

  1. An excellent post, Ruben. One of the great imponderables climatology faces is weighing up how man's unquestionable environmental effect influences the broader cyclical processes. Whilst it is possible to analyse glaciated isotopes, these are isotopically poor analogues to use as any basis for comparison, as they will lack the considerable catalyst for atmospheric change which has been witnessed since the dawn of industrialisation. That's not to say that they're useless, more that care and thought should be taken before blindly drawing 1:1 comparisons to 10,000 years ago. My own view is that nature does - in many ways - seem to have this welcome habit of finding equilibrium, even when prima facie it might appear impossible to do so. It could be that, for example, global warming counteracts the cooling effect of less UV emitted from the sun. Whilst I grant that to be extraordinarily simplistic, there is definitely a counterbalance which will result in a net climatological effect. It's the resulting net effect which will determine severity, in my opinion. Here's an interesting thought to ponder: could it be that man's view of CFCs changes - influenced by our now greater understanding - owing perhaps to the northern hemisphere drastically cooling? Introducing CFCs into the atmosphere could be seen as a way to mitigate against such a cool-down. Of course, it's difficult to envisage that ever happening, but then modern times haven't had to countenance mini ice ages; how does an intelligent, scientifically aware species react to that? It's a very interesting discussion. Probably not within the context of this thread though!
  2. If you take prior analogues of similar synoptic pattern, a signal for amplification of the Scandinavian high is plausible. But whether it's likely is another matter, and at this stage there's far too much uncertainty to make a confident call either way.
  3. Hi Shedhead, What I would say, as the strongest possible response to your post, is that NOAA themselves are not happy with the performance of either the GFS or its ensemble compatriot, GEFS. To that end, it is undergoing a major overhaul in April, in order to correct a number of errors which have been identified in the physical model. In addition, it is also worth stating that all mathematical models generate bias, and the GFS is no different in that regard; it will - for example - display a predisposition to explosive cyclogenesis, and this is mainly due to an issue around resolution depth, coupled with a data issue regarding SST. If you read the narrative that Nick often references from NOAA discussion, you will - quite often - see commentary to this effect. With respect to the verification statistics, I would wholly disagree with your approach. The statistical basis for verification is an accepted industry standard, and it is through that that respective model performance is objectively assessed. There is no agenda at play; the statistical results are derived from the output of the models, so - if they don't verify - then it is the model that is at fault, not the statistics. I think an important point to make here is that - in a scientific environment - there is no room, or for that matter value, in assessing using subjective interpretation; there need to be benchmarks and criteria, and that is what a standard reanalysis model delivers. The ECMWF - in tandem with UKMO - are world-leading pioneers in meteorology, and this isn't lost on NOAA or other worldwide counterparts. There is actually a fairly broad consensus right across the board, and I think you'd be surprised that there really isn't this competitive nature - it's a lot more scientific and progressive than that.
For example, I know that there are (at least) two ECM colleagues over in Reading who are currently advising NOAA on how best to transition from the current iteration of GFS over to the new one in April. That's so that NOAA have a seamless implementation of the operational model which, in time, the ECM can also make use of and naturally compare itself against. My personal - and I stress personal - view of the GFS is that it contains too many consistent and permanent flaws for me to attribute it too much credit. That is not to say that I completely discount it - that would be foolish - but it's about weighing it up relative to its peers; there will be times when the GFS performs better, and there will be other times when it performs quite obviously worse. But that's mathematical modelling. There is one model, for instance, that is nigh-on useless through the first third of winter, and that is because its physics engine relies so heavily on stratospheric data. Thereafter, its performance appears to be quite exceptional. It is not within my gift to say which one, but I'm sure Ian knows which one I'm referring to. Equally, it is evident that some models perform better through different seasons, and this can be understood through how every model will have different elements of teleconnective reasoning. So, it isn't really the case that there is one model that is in any way to be considered a panacea - that really isn't the approach that is used by the Met Office (or NOAA, for that matter). The approach is to objectively assess all outputs, and to then apply probabilistic reasoning against them, in order to draw a forecast. That approach guards against the sort of bias which is often found to be at the source of most known human errors. I hope that helps, or at least serves to perhaps dispel a myth or two? SB
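To give a flavour of what 'objective verification' means in practice, here's a minimal sketch of the anomaly correlation coefficient (ACC), one of the standard scores used to compare model output against a verifying reanalysis. The grid values and climatology below are made-up illustrative numbers, not real fields.

```python
import numpy as np

def anomaly_correlation(forecast, analysis, climatology):
    """Anomaly correlation coefficient (ACC): correlate forecast and
    analysis anomalies relative to a shared climatology. 1.0 is a
    perfect forecast; below ~0.6 is conventionally considered unusable."""
    f = np.asarray(forecast, dtype=float) - np.asarray(climatology, dtype=float)
    a = np.asarray(analysis, dtype=float) - np.asarray(climatology, dtype=float)
    return float(np.sum(f * a) / np.sqrt(np.sum(f ** 2) * np.sum(a ** 2)))

# Toy 500 hPa height values (dam) on a four-point grid - made-up numbers.
clim     = [552, 540, 528, 516]
analysis = [556, 543, 525, 512]   # the verifying reanalysis
forecast = [555, 544, 526, 513]   # the model's earlier prediction
print(round(anomaly_correlation(forecast, analysis, clim), 3))  # 0.964
```

Because everything is measured against the same reanalysis baseline, the score is the same yardstick for every model - which is why 'the statistics have an agenda' doesn't hold up.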
  4. Hi BA, MOG-15 is a stochastic suite of 24 members which runs twice-daily, at 60km and across 70 vertical levels, on one of our machines sited at ECMWF. This ensemble suite complements: MOG-G - 33km, and 70 vertical; (the now defunct) MOG-R - 18km, and 70 vertical, and finally MOG-UK - 2.2km, and again 70 vertical. For completeness, GloSea5 (seasonal EPS) is another stochastic - but (as opposed to MOGs) coupled - suite of 42 members (21-day staggered creation) which runs at 60km atmospheric, and 0.25° oceanographic, out to a time-span of 6mths. Hope that's helpful? SB
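For anyone who finds a table easier to digest, the same suite specs can be laid out as a small config structure. All figures are exactly as quoted in the post above, not from any official documentation, and the dictionary layout itself is just an illustration.

```python
# Suite specs as quoted in the post above (not official documentation),
# laid out as a config table purely for readability.
MO_GLOBAL_SUITES = {
    "MOG-15":  {"members": 24, "grid_km": 60.0, "levels": 70, "runs_per_day": 2},
    "MOG-G":   {"grid_km": 33.0, "levels": 70},
    "MOG-R":   {"grid_km": 18.0, "levels": 70, "status": "defunct"},
    "MOG-UK":  {"grid_km": 2.2,  "levels": 70},
    "GloSea5": {"members": 42, "grid_km": 60.0, "ocean_deg": 0.25,
                "coupled": True, "range_months": 6},
}

# Example query: which suite runs at the finest horizontal resolution?
finest = min(MO_GLOBAL_SUITES, key=lambda name: MO_GLOBAL_SUITES[name]["grid_km"])
print(finest)  # MOG-UK
```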
  5. Loosely model-related - I suppose - but I thought I'd share with you all that I was talking to a colleague a few weeks ago about, amongst other things (like the mid-seasonal prognosis), rather bland stuff like NWP model performance. That, in part, came from discussion around what the likely improvements would be when GFS receives its major overhaul in April 2014. For quite some time it has been acknowledged that the performance of the GFS has been - relatively speaking - disappointing, and there have been a lot of discussions around why - for instance - verification is consistently behind, primarily, the ECMWF suite. I wasn't aware until recently, but an experiment was carried out not too long ago into why this under-performance was evident. It is worth noting that NOAA attribute a great level of professional respect to their peers at ECMWF, and so enlisted their help to review and break down GFS. What was found was extremely interesting. The model itself, ie: GFS and its ensemble compatriot, GEFS, behaves in very similar ways to the ECMWF model. So, in that case, why do they differ? Well, interestingly, it all depends on the initialisation data - and not so much on the algorithms. How was this proven? Well, in very basic terms, ECMWF masked across a set of their initialisation data - so that it would fit with the GFS criteria - and waited to see what would happen to the output. Unexpectedly, the verification of the output was consistently almost 10% higher than the GFS running average. In short, it proved that the performance of GFS isn't so much about the model in terms of its mathematics or physics, but more the depth and variety of the data which is currently being applied to it. It will be interesting to see how the 'new GFS' operates in the Spring. It will enact a considerable upgrade to the definition and layering within the current model.
It's taken them 18 months to synchronise initialisation of the 'new' model with the 'old' one, but as far as I'm aware, they both run completely in parallel now. This is ahead of NOAA's complete refresh of their supercomputing capability in 2015, which I hear will break new ground, with some talk about harnessing the cloud; pooled computing, that sort of thing. I'm sure more will leak out through 2014, so keep your eyes peeled. Finally, I would like to wish a very Happy New Year to you all! I do hope that, while extreme weather brings interest to us all, we are saved the disastrous effects that we've all witnessed, especially over the last couple of months. And with that, it's time for a drink (...or two, possibly three) SB
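The masking experiment described above can be illustrated with a toy sketch: the same 'model' (here a made-up nonlinear map, nothing like real NWP physics) run from a well-observed initial state and from a noisier one. The only point being made is that, with identical physics, richer initialisation data verifies better.

```python
import numpy as np

rng = np.random.default_rng(42)

def advance(state, steps=20):
    """A toy nonlinear 'model' - identical physics for every run."""
    x = np.array(state, dtype=float)
    for _ in range(steps):
        x = x + 0.05 * np.sin(x) * (1.0 - x / 10.0)
    return x

truth0 = rng.uniform(0.0, 8.0, size=100)   # the 'real' initial atmosphere
truth = advance(truth0)                    # the 'real' outcome

# Same model, two initialisations: dense/accurate obs vs sparse/noisy obs.
rich_init = truth0 + rng.normal(0.0, 0.05, 100)
poor_init = truth0 + rng.normal(0.0, 0.50, 100)

rmse_rich = np.sqrt(np.mean((advance(rich_init) - truth) ** 2))
rmse_poor = np.sqrt(np.mean((advance(poor_init) - truth) ** 2))
print(rmse_rich < rmse_poor)  # True: better data in, better forecast out
```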
  6. Hi, With that thought, conceptually you're not far wrong. A common misconception is that variance implies the data holds no discernible trend or pattern. With stochastic mathematical modelling the opposite can often be the case, as you are using parameterisation to resolve a lot of dynamic processes. Computing uses an inherently deterministic method of calculation, and will therefore follow certain patterns. If you mix non-deterministic mathematical methods with that, though, then you can produce patterning which infers a high degree of variance. The underlying trend, though, is that the variables which are calculated through stochastic parameterisation are displaying that the 'end result' is not determined. Okay, so why is that important? Well, you can infer from it that the atmospheric state is non-determinable; the base variables are such that their propensity to change state is statistically high. To coin a phrase, the atmosphere looks finely balanced, and the stochastic elements of NWP are displaying this base state within the output. Tighter grouping is often evidence of a more settled atmospheric state, and of confidence in using the underlying laws of motion to hypothesise its change of state. How 'major' a change is, is very hard - if not impossible - to judge. A lot of atmospheric change is consequential on other variables; and variables that can amplify or suppress in equal measure. Amplification is probably the hardest calculation to apply within NWP, as its consequences bear disproportionate influence on the rest of the calculation process. I think this is where the human eye is still the most discerning expert, when determining plausibility of NWP output. Anyway, I hope that helps? It's a bit detailed, but in layman's terms I suppose the bottom line is that lots of variance or 'scatter' can - as you posed - reflect that a change is in the offing. It doesn't by any means guarantee it, more that the underlying conditions seem far more conducive to it.
SB
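The 'tighter grouping = more settled state' idea boils down to ensemble spread. A minimal sketch, using hypothetical 850 hPa temperature members (the numbers are invented for illustration):

```python
import statistics

def ensemble_spread(members):
    """Standard deviation across ensemble members at one grid point and
    lead time - a crude proxy for forecast confidence."""
    return statistics.pstdev(members)

# Hypothetical 850 hPa temperatures (degC) from two ensemble runs.
settled   = [-1.2, -0.8, -1.0, -1.1, -0.9, -1.3]  # tight clustering
unsettled = [-6.0, 1.5, -3.2, 4.1, -8.0, 2.2]     # large scatter

print(ensemble_spread(settled) < ensemble_spread(unsettled))  # True
```

Low spread suggests the members agree on the evolution; high spread suggests the base state is finely balanced and small perturbations are diverging.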
  7. Hi, The idea that runs should only be compared relative to the previous output (0z to 0z, 12z to 12z), due to infinitesimally small (magnitudes of 10th/%) idiosyncrasies in data, doesn't stack up. Even if there were data blind spots (it often happens), you either run algorithms to blend and normalise the data, or you back-fill with prior cleansed 'control' data. The overriding error correction is the updated observational data, which is the precursor to every initialisation. Therein, to discount any run even though it contains perhaps 98% of all operational data is not advisable. I could sympathise with such a view if data blind spots brought the scope down to <85-90%, but that simply isn't the case. I often find intra-run variance to be, in the main, anecdotal; for example, verification against the GFS suite (0z, 6z, 12z, 18z) doesn't actually tend to favour any one particular initialisation - they all have, more or less, periods of better performance over each other - which, to be fair, is exactly what you'd expect from a stochastic model. The matter of data density is only a considerable factor when a critical mass is breached, insofar as there is a lack of initialisation variables. For that to happen, there would need to be a serious lack of data (balloons, buoys, aircraft, nautical), and it's very unlikely that would ever happen. I'm not sure what the operational parameters are to initialise GFS, but ECM is >= 98.25%. To give context around that, I think I've read in papers that the average over the last three years is about 99.6%. So, it's a very stable base state model. I think it dropped to just below 99% when there was disruption to air traffic after the eruption of Eyjafjallajokull. So to summarise: yes, some runs lack data relative to others, but it's their weight relative to the overall depth of data that matters. Equally, any such data gaps can be, and often are, rendered over with algorithms, to help with calculation smoothing.
You tend to get a value spat out at the end of an initialisation which will evaluate confidence in the starting parameter values, and if that's within a tolerance then you can generally have confidence in the run. I know that's how the developers parse layer code into the stack, and judge its consequential effect. Hope that helps, SB
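The coverage gate described above might be sketched like this. The 98.25% threshold is simply the ECM figure quoted in the post, not an official number, and the function name and observation counts are entirely hypothetical:

```python
def initialisation_ok(received_obs, expected_obs, threshold=0.9825):
    """Gate a run on the fraction of expected observations received.
    The 98.25% default is the figure quoted in the post, not an
    official number; the counts used below are hypothetical."""
    coverage = received_obs / expected_obs
    return coverage >= threshold, round(coverage, 4)

ok, coverage = initialisation_ok(996_000, 1_000_000)
print(ok, coverage)  # True 0.996
```

A run at 99.6% coverage sails through; one down at 90% would fail the gate and warrant scepticism, which is roughly the distinction being drawn above.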
  8. Hi Timbo, To be honest, it doesn't really work like that. The c-40 upgrade is a major change to the operational programme that performs various calculations against data, and the design of it has been taking place in the developmental environment over in Reading since earlier this year. The current ECM ENS Op programme is only able to perform calculations against data which it has been written to accept. The point here is: yes, there is a greater depth of data available from satellites (mainly), but the current programme just doesn't even recognise it. Major upgrades like this lid increase are worked up in Dev, because it's not only the level increase that needs to be modelled into the algorithm; you also need to fine-tune ALL of the interdependent tolerance values. That takes time; runs and runs, using test data and verifying that the expected consequences are within boundaries. I think I read somewhere that in excess of 10,000 runs have been run against c-40 in Dev - which is almost continuous cycling. I did drop in many months ago that the ECM guys were working on a field that specifically focused on SSW. This was being done in Dev, and I do know that it was parsed into the programme to utilise the lid-level data. Going forward, this upgrade will - amongst other things - allow NWP to begin to understand and calculate the variables and level interactions between the strat and trop. That's a very exciting area of understanding; embryonic at the moment, but significant potential wrt LRF.
  9. Hi, The colloquial term "slider" refers to the behaviour of a low pressure system as it interacts with an entrenched area of high pressure. To contextualise, for our region (NW Europe): if there was a large area of continental high pressure (perhaps a 'Scandi High') and a low pressure system was tracking towards it from the Atlantic, a "slider" would be if the low pressure system undercut and tracked south of the high pressure. This specific interaction of the two pressure systems - with its associated warm/cold gradient - often creates the conditions for extreme winter weather. 'Sliders' can be difficult, as - in the main - they are dependent on strong, stable and favourably orientated high pressure systems - characteristics which are often rare in conjunction, if not in isolation. The opposite to a 'slider' is when the low pressure system tracks north of the high pressure system; less mixing, less gradient, and more likely to result in the breakdown of an entrenched pattern, ie: blocking. That is not to say that high pressure cannot re-emerge, just that there is a correction to the general pattern of W - E flow (as opposed to a retrogressive E - W, when blocking is present). In respect of 'cold into Europe', that's really about depth of thermal gradient. If the continent is cold - and at this time of the year it is still relatively warm, albeit cooling rapidly - then the high pressure will be sourcing very dry, very cold air from the depths of Russia and Eastern Europe. As that bitter and dry air smashes into the warm and moist air within a low pressure system, this is where thermodynamical fireworks take place, or - as Steve might say - 'BOOM!'. Basically battlegrounds, where cold meets warm. Hope that's helpful? SB
  10. Hi, The AO (Arctic Oscillation) is one of the indices used to derive a measure of variation from what would be considered normal surface level pressure. In general, a +ve reading is indicative of low pressure atop the pole, which invariably tightens the PV (Polar Vortex) and makes it less likely for that really cold arctic-sourced air to drop to the lower latitudes; it remains bottled up in the higher northern regions instead. Conversely, a -ve reading is indicative of higher pressure around the polar region. Higher pressure would tend to suggest either a weakened or displaced PV, so less oscillation means that colder arctic air is more able to filter south and into abnormal latitudes. It's one of a number of indices, and it's important to understand that there is no one index which - in isolation - foresees cold weather; it's usually a combination, and - even then - a number of factors all have to become conducive. Hope that helps, SB
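As a crude illustration of reading the AO sign, here's a toy classifier along the lines described above. The +/-0.5 thresholds are purely illustrative, not operational values:

```python
def ao_outlook(ao_index):
    """Toy reading of the AO index along the lines described above.
    The +/-0.5 thresholds are illustrative only, not operational."""
    if ao_index > 0.5:
        return "tight vortex: cold bottled up at high latitudes"
    if ao_index < -0.5:
        return "weak/displaced vortex: cold outbreaks more likely"
    return "near neutral: look to the other indices"

print(ao_outlook(-2.1))  # weak/displaced vortex: cold outbreaks more likely
```

In practice, of course, you'd never call it on the AO alone - it gets weighed alongside the other indices, as the post says.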
  11. I'm not sure how the MO can win on this one? As ever, the warnings issued reflect confidence within the current OBS attendant to, largely, regionalised UKV fields. OBS can only issue public messages when confident justification is well grounded and heavily evidence-based. There also has to be due consideration of population density; wherein - to be admittedly rather brief and simplistic - a judgement needs to be taken as to the comparative impact of storms hitting some areas, as opposed to others. When you are weighing up the balance of consequential loss of life, and all of the services and contingency plans that automatically find catalyst from warnings, you need to be balanced and sure as to your call. I note that some are asking 'where are the warnings', yet others - predictably - pour scorn on the MO having issued too many warnings for St Jude: 'hyped-up', 'media circus', etc. I can confidently state that - in both instances - the warnings are issued on heavily evidence-based analysis. It's not that the MO are, for example, sitting on their thumbs; this is what we do on a daily basis, and we take any evidence and risk of extreme weather very seriously. It is also important that, while all around you are busy losing their heads and getting carried away with hype, focus and perspective are maintained. For example, I know some journalists and news organisations who - quite literally - pester the office on an hourly basis, 'looking for updates' - usually for something that hasn't even formed yet. St Jude was, so I'm told, a classic example of this. Warnings have been issued, and their timings largely reflect uncertainty around projection and intensity; there is little public value in issuing warnings like confetti, and changing them every hour. Yes, an eye is constantly kept on developing data, and all of that is taken on measure and entered into discussion.
But, just because a model run might indicate a different development, that does not mean that there will be a consequential and immediate reflection in the broader view. I think this is where it is hard to manage expectations, in light of the fact that there is public access to some data - but not, by any stretch, all data. In essence, it's easy to react when you've only seen a partial side of what is usually quite a complex story - particularly true of rapidly intensifying cyclones, which are notoriously difficult to align a forecast against. Anyway, I hope that helps. OBS isn't my area by any means, but I do happen across some of my colleagues who operate there, and I don't think I'm doing them a disservice by retelling the (hopefully understandable) frustration. I can confidently state that OBS will be a very frantic area today and they'll be keeping a very keen eye on all developments with that cyclone. Should the evidence suggest a change to the operational warnings, then those will be delivered; but it's important not to knee-jerk. Thankfully my area is a lot less fraught! (...not to mention a lot less time-specific) and I don't have to worry so much about the criticism which my other colleagues frequently have to contend with. I don't envy them personally, even though it's a very exciting and thoroughly rewarding area to work in.
  12. Wow, congrats on the 10,000 - excellent dedication!! Bit random, but a few of my colleagues are just beginning to take a keen look at the data trends for DJF, and the current leaning - albeit from very small sampling at the moment - is towards another anomalously colder winter season. It's a very small degree of confidence though, so not - at this stage - worthy of any great thoughts. Data churn with reanalysis will continue through to around the 3rd week in September, by which time there'll be a far greater depth of data available. The experts in this field say that the early data is always prone to anomalies, as they tend to run various different sample code against it. By the way, an SSW field is being parsed into the GRID for this coming winter, so it will be very interesting to see results against that. I'm not sure they're using it in production though, possibly just development. Anyway, enough of my waffling - still waiting for winter here!
  13. Is it winter yet? Really looking forward to being back on the model thread in a few months' time - last winter was exceptional and I think we all learned a great deal. As advised last winter, I started work at the MetO early last month, so I shall post what I am able to get away with! Working in research fields, but I've already had a play around with some of the operational toys. Anyway, see you lot soon and I hope you're all well. Best, SB x
  14. Purga is bang on. Relative to the previous run(s), the 12z is very much a stark moderation of the previously advertised sharply milder theme, and more towards a flatter recovery. The direction may remain the same, but it's the trajectory and rate of decay which implies a continuation of the current pattern, for a little while longer yet anyway.
  15. Ha! Well, if pjl isn't misrepresenting Matt Hugo - and Matt indeed thinks a dull summer is in the offing "because of now" - then I have to say I'm a bit surprised. I'm surprised because there is absolutely no evidence whatsoever to suggest a correlation between - for instance - a late-ending winter and a consequentially (relatively) poor summer. Plenty of anecdotal 'evidence', but nothing of scientific relevance to give it any credence. The reality is that - as we've seen all this last winter season - patterns change frequently, and even when some suggested 'winter was over' in mid-February, quite the opposite has borne out. Therein, it's somewhat futile to hypothesise too far forward, especially using a highly tenuous methodology like 'well, it's really cold now, ergo it must mean a cold/crap summer'. Not at all. Just as likely is that we'll get a scorcher of a summer, with blazing heat, droughts and hose pipe bans.
  16. This is the thing, Stew: when people obsess over an IMBY perspective, they lose focus of the bigger picture. Case in point: the last 10 days or so for me have been characterised by frequent snow showers - not heavy, not persistent - but frequent snow, aligned with a biting and penetrating cold wind. It hasn't - by any stretch of the imagination - felt anything Spring-like, or remotely mild. Unfortunately, heavy commitments of late mean that I can only afford cursory glances at the models at the moment, but I do know that all of that episode was progged within the models - so for anyone to suggest that there is an issue of verification, I'm afraid they are very much mistaken, or applying too much of an IMBY viewpoint. I've looked at the current outputs and - again, much like the last episode - some will be experiencing cold, and very un-spring-like temperatures over the next few days. Not everyone, but some will. How foolish would it be of me, for instance, to dismiss such output in a week's time should I, in contrast, happen to have a milder few days? The UK's weather simply isn't determined or judged on the local effects in one small pocket of the entire British Isles. It's a lot wider than that, and any discussion which doesn't appreciate that is - I'm afraid to say - pretty much dead in the water; in fact, (to me anyway) it comes across as pretty amateur, and I find it very easy to ignore.
  17. Disagree 100%. I posted yesterday that the CFS was the first model to advertise - with conviction - the broad blocked pattern that we're currently experiencing, and long before any other model. The detail is different - which is to be expected - but, importantly, it remained resolute as to a blocked pattern forming. The next model to pick up the pattern was BOM-ACCESS. The CFS has been very consistent with regards to its prognosis for March. Now, as I posted a while back, consistency isn't necessarily accuracy, but it certainly demands more than cursory dismissal. Usually, the CFS displays huge intra-run variance - as one would come to expect from an LR NWP model - but it's noticeable how this has very much been lacking, in favour of this consistent signal.
  18. Incidentally, CFS (the much maligned pariah of meteorology) just seems to love this idea of a rather bonkers cold weather pattern prevailing into March. I try not to attach too much attention to the model, but it's hard to ignore when it's persistent. Moreover, I'm inclined to give CFS some respect as - long before the other models - it proposed the blocking pattern we now see before us; not in absolute detail (obviously) but the overriding signal was there, nonetheless. It's been chewing on this March biscuit for a while now and can't seem to spit it out. Must've been a good 10+ days ago when I first posted (slightly tongue-in-cheek) regarding what it was proposing for March, but - to be quite frank - it hasn't budged much. Anyway, let's get through February first - March can wait
  19. Ahhh! I remember those; ping-pong-esque. Interesting times ahead indeed...
  20. Presumably Stewart, that would lay decent conditions for heights to (finally) build into Greenland area and/or retrogressive Scandinavian link-up, thereafter inducing a heavily blocked NW Europe quadrant?
  21. And, believe it or not, you do make a critical observation. Indeed, a mathematical model which seems to show incredible consistency isn't necessarily to be trusted prima facie; it may, at a deterministic level, have a very strong error, which is recursive enough to deliver consistency. A good example of such an error would be if, for instance, there was a data blind spot. You can run an algorithm to normalise the data set, thus 'smoothing over' the lack of true values. However, if the output of the normalised data is inherently flawed - and especially if the data has a heavy weighting - then you'll tend to see recursion, which can appear to be consistency, and which may belie accuracy. It's easy enough to spot though, and - in principle - you'd run a data governance parameter, concurrently, which will - at the end of the run - give you a score as to how 'true' or reliable the initial parameters were. It would also spit out a report identifying data blind spots or smoothing issues, and then a forecaster can determine how influential those variables would be in the overall model. So, in summary: consistency isn't always what it appears to be - accuracy.
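The 'smooth over the blind spot, then score how much of the run rested on real data' idea can be sketched in a few lines. The linear interpolation and the governance score here are deliberately simplistic stand-ins for what a real assimilation system does:

```python
import numpy as np

def fill_and_score(values):
    """Linearly interpolate across missing values (NaN), and return the
    filled series plus a crude 'governance' score: the fraction of
    points that are genuine observations rather than reconstructions."""
    v = np.array(values, dtype=float)
    missing = np.isnan(v)
    idx = np.arange(len(v))
    v[missing] = np.interp(idx[missing], idx[~missing], v[~missing])
    score = 1.0 - missing.mean()
    return v, score

series = [10.0, float("nan"), float("nan"), 13.0, 14.0]
filled, score = fill_and_score(series)
print(np.round(filled, 6).tolist(), round(score, 2))  # [10.0, 11.0, 12.0, 13.0, 14.0] 0.6
```

A low score flags that much of the 'consistency' in the output may simply be the smoothing algorithm echoing itself, rather than genuine signal - which is exactly the trap described above.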
  22. Steve, That would be the GloSea4 model. If any members would like to get a better understanding of GloSea4, this is a very good presentation: http://www.ecmwf.int/newsevents/meetings/workshops/2011/MOS13/presentations/Hewson.pdf SB
  23. If we take the UKMO @ 144hrs, and compare with the BOM also @ 144hrs, we have some broad - and I stress broad - similarity. So, if we roll the BOM forward and see where it goes thereafter, we find it transitions towards this @ 240hrs: a rather large, retrogressing continental high pressure, making its way towards a progressively blocked Atlantic, with heights just starting to nudge into Greenland. I do sense that we're approaching somewhat of a juncture, in terms of heights, and I don't think winter is quite done with us just yet. Quite how much of a sting is left in the tail is the question, but I think there are enough bases loaded for cold to come knocking once more.