Everything posted by zdlawrence

  1. Also, it's worth noting that the FV3-GFS has a polar upper stratosphere cold bias that develops over the forecast period, so this type of temperature change at those levels will be there almost every day (barring significant disturbances).
  2. I'm not trying to take away from the great work you've done, but (admittedly non-operational) 3D animations of the polar vortex have been made at least as far back as the early 90s. I know this because my PhD advisor, Dr. Gloria Manney, made some of them: Figures 13-15 from the first (1994) paper below are associated with 3D animations of isosurfaces of the vortex edge and parcel trajectories that have been archived on ... VHS tape! This was back when 3D animations like these essentially required supercomputing resources (in this case, provided by the JPL supercomputing project). Figure 5 of the second (2004) paper is also associated with an animation that is supposed to be archived online ... but unfortunately it points to a dead link (see the text that says: "An animation of these isosurfaces for the entire simulation is given in the supplemental electronic material (http://dx.doi.org/10.1175/JAS3313.s1).") Just realized these embedded links don't actually work, so: here's the first paper, and the second paper
  3. Sorry for the delay. Interannual variability in the polar strat is very low this time of year, but I received such a large number of requests to put up the NH charts that I went ahead and switched stratobserve back to NH mode. Cheers
  4. Yes, this. Please make the ECMWF, UKMO, etc. models as free, unrestricted, and easy to access as the GFS, and we would use them.
  5. Apologies if this is off topic now; I wrote the post that follows yesterday, but found I couldn't reply in this thread for whatever reason.

Interitus is correct that caution is required when interpreting standard deviations in terms of probability without knowing the underlying distribution. However, the discussion of standard deviations in Interitus's post is based on historical raw values of heat fluxes (either grouped year-round or within individual months), not deseasonalized ones. The plots I have on StratObserve are standardized anomalies, which are based on deseasonalized values (i.e., observed values minus climatology).

To explain a bit more, standardized anomalies are calculated by dividing anomalies by their standard deviations. For me, it helps to walk through the process, so I'll explain with some data I have (a code sketch of this calculation appears after this list of posts). With MERRA-2 data, I have 38 full years of data, 365 days long (ignoring leap days), on 33 pressure levels (size of raw_dat = 38 x 365 x 33). To get anomalies, you first calculate the mean across the years to get a single time series (the climatology) for each pressure level; 365 days on 33 pressure levels means size of climatology = 365 x 33. Then, you subtract the climatology from each year of raw data to get an anomaly time series for each year and pressure level, which is back to the original size (size of anom_dat = size of raw_dat = 38 x 365 x 33). Finally, you calculate the standard deviation of the anomalies for each pressure level by grouping together all the years and the specific parts of the year you want to keep; for now, we'll just assume we're keeping the whole year. So for each pressure level, you find the standard deviation of the 38 years x 365 days = 13870 days of anomalies to get just 33 standard deviations, one for each pressure level (size of std_dev_anoms = 33). Now you can divide the raw anomalies by these standard deviations, which tells you the number of standard deviations the anomalies are from the climatology; i.e., std_anoms = anom_dat / std_dev_anoms.

Standardized anomalies can be useful for a few reasons. Sometimes they're a useful abstraction, especially when the underlying quantity has weird units. Eddy heat fluxes are a fairly good example: their units are Kelvin meters per second (K * m/s). Kelvin and m/s are meaningful to us because we have innate understandings of how temperature and velocity feel/look, but when you multiply them together, the result doesn't mean much to us physically. Since standardized anomalies are unitless (because the anomalies and standard deviations have the same units), they give us an easy way to look at a quantity and say "okay, regardless of what the raw value is, it is higher/lower than normal because it is positive/negative".

They can also be useful for quantities displayed on multiple vertical levels (whether that be altitude, pressure, etc.), since many quantities vary non-linearly with height. For example, while it may be normal to get a value of 10 for some quantity (not necessarily heat flux, just speaking generally) in the troposphere, in the stratosphere the normal value could be 100. When looking at anomalies, it might be normal to get a value of 5 in the troposphere, but a value of 50 in the stratosphere. If you didn't know this and saw simultaneous anomaly values of 50 in both the troposphere and stratosphere, you would never know how unusual it is to get a value of 50 in the troposphere!
If these were instead turned into standardized anomalies, the standardized anomaly value in the troposphere would be much higher than the value in the stratosphere, which would tell you "this value in the troposphere is further from the climatology than the value in the stratosphere".

Now back to eddy heat fluxes. I'm going to show a few plots; they're quick and dirty and not too pretty, but they'll do the job. If we instead look at a histogram of the heat flux *anomalies* at 10 hPa using full years, we get: Similar to the chart in Interitus's post, it's highly peaked at 0 because of the portions of the year when there is no vertical wave activity (when the circulation in the stratosphere is easterly). We can look at the climatological time series for different pressure levels in the stratosphere to confirm this: In reality, what I use for the quantities on StratObserve are smoothed versions created by removing harmonics with frequencies higher than ~half a month (see the smoothing sketch after this list of posts); this is what that looks like: As you can see, they look very similar (and it turns out that using the raw versus smoothed version doesn't really matter much for calculating the standardized anomalies). The heat flux values in the stratosphere are about 0 between day of year 110-120 (mid to late April) and day of year 260-270 (mid to late September). For this reason, the standardized heat flux anomalies on StratObserve were calculated using only the September through May period. The histogram of these anomalies looks like so: It is still highly peaked around 0, but this time it's due more to a combination of including times when the heat flux is 0 in the months of September and April/May, and the fact that the variability in heat flux is quite low (i.e., anomalies are usually close to 0) at the beginning and end of the seasons -- see, e.g., the ozonewatch heat flux plots. But unlike the histogram of raw values, the anomaly histogram is much more symmetric about 0, albeit with a longer tail on the positive side (meaning heat flux anomalies tend to be highly positive more often than highly negative).

So the 6 SD values on the StratObserve charts are "real" and "unusual" in the sense that the upcoming heat fluxes are going to be very high for the time of year, but that should be the extent of the conclusions drawn. For example, based on my calculations, the largest 10 hPa heat flux anomaly is from 2009-01-19 (as mentioned in Interitus's post), which was about 373 K * m/s. Depending on whether you use the full year of anomalies, just September through May, or an even more limited period of December through March, the SD values for this date are 10.6, 9.2, and 6.3, respectively; in other words, it was big no matter how you look at it! And that's usually the best way to interpret standardized anomalies - just as relative magnitudes. Hopefully this post is a little bit useful more generally, since many folks plot standardized anomalies on maps or in other charts.
For what it's worth, writing this post made me go back and change the climatology period for those StratObserve charts to October through April, which I think is a bit better, but it doesn't change things too much. If I form the distribution of heat flux anomalies at 10 hPa for the Oct - Apr period and estimate the distribution as shown in this plot: then events with heat flux anomalies greater than 230 K * m/s (which is about 5 SDs) only have about a 0.25% chance of occurring -- but keep in mind this is estimated from only 38 years of data (i.e., events with heat flux anomalies greater than 230 K * m/s make up about 0.25% of the 38 years of Oct - Apr data). Compare this to a normal distribution, where values greater than 5 SDs have only about a 0.000032% chance of coming up (a small sketch of this comparison appears after this list of posts).
  6. Yeah, this was just a mistake that was posted on Twitter. The Charlton & Polvani 2007 paper specifically says: "... cases where the zonal mean zonal winds become easterly but do not return to westerly for at least 10 consecutive days before 30 April are assumed to be final warmings, and as such are discarded. This criterion ensures that following SSWs, a coherent stratospheric vortex is reestablished." The upcoming event will classify as a mid-winter SSW. How quickly the stratospheric circulation "recovers" afterward remains to be seen, but forecasts (ensembles included) show it taking anywhere from 4-10+ days before the circulation returns to westerly.
  7. Just want to address two points here.

First, Amy Butler agreed that the Mar 2016 "major final warming" was a wave-driven event, but since the circulation never recovered, it just cannot be included on her table of CP07 events. Amy Butler has never said that the CP07 definition for SSWs is the "best" or "standard" definition -- it's just one of many definitions, but it is commonly used in the literature (see her paper on defining SSWs, http://journals.ametsoc.org/doi/abs/10.1175/BAMS-D-13-00173.1). I think her compendium of CP07 SSW dates is most helpful as a way to prevent errors in research & the literature; there are some small intricacies in the CP07 definition that do matter (see my next point below). I don't think her compendium was ever advertised as or intended to be the "definitive" list of events.

Second, regarding the Feb 2017 event, I think the reason it's not an event in MERRA-2 is that the CP07 classification uses daily mean zonal mean zonal winds at 10 hPa and 60N (a small sketch of this criterion appears after this list of posts). They specifically say: "The first day on which the daily mean zonal mean zonal wind at 60°N and 10 hPa is easterly is defined as the central date of the warming." (see http://journals.ametsoc.org/doi/abs/10.1175/JCLI3996.1). If I weren't traveling I would confirm, but since I can't, I would bet that averaging U1060 over the full 8 times per day of MERRA-2 would give a positive value. If anything, the above goes to show how any definition can be inadequate in specific situations, but that's just a side effect of being consistent (i.e., true to the definition no matter what).
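
The standardized-anomaly recipe walked through in post 5 maps directly onto a few array operations. Below is a minimal NumPy sketch of that calculation, using random numbers in place of MERRA-2 heat fluxes; the array names (raw_dat, climatology, anom_dat, std_dev_anoms, std_anoms) follow the post, but the data and everything else here are placeholders, not the actual StratObserve code.

```python
import numpy as np

n_years, n_days, n_levels = 38, 365, 33   # 38 full years, leap days ignored, 33 pressure levels
rng = np.random.default_rng(0)

# Stand-in for the raw eddy heat flux data (38 x 365 x 33); purely illustrative.
raw_dat = rng.normal(size=(n_years, n_days, n_levels))

# 1. Climatology: mean across years -> one annual cycle per pressure level (365 x 33).
climatology = raw_dat.mean(axis=0)

# 2. Anomalies: subtract the climatology from every year (back to 38 x 365 x 33).
anom_dat = raw_dat - climatology

# 3. Standard deviation of the anomalies per pressure level, pooling all years and
#    (here) all days of the year; restricting to, e.g., Sep-May would just mean
#    slicing the day axis first. Result has 33 values, one per level.
std_dev_anoms = anom_dat.reshape(-1, n_levels).std(axis=0)

# 4. Standardized anomalies: anomalies expressed in numbers of standard deviations.
std_anoms = anom_dat / std_dev_anoms

print(std_anoms.shape)   # (38, 365, 33)
```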
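Post 5 also mentions that the StratObserve climatologies are smoothed by removing harmonics with periods shorter than about half a month. Here is one plausible way to do that kind of smoothing with an FFT; the choice of 24 retained harmonics (roughly 365 days / 15 days) is my assumption, since the post only gives an approximate cutoff.

```python
import numpy as np

def smooth_climatology(clim_1d, max_harmonic=24):
    """Low-pass a 365-day climatology by zeroing Fourier harmonics above max_harmonic.

    max_harmonic ~ 24 roughly corresponds to dropping periods shorter than half a
    month; the exact cutoff is an assumption, not necessarily what StratObserve uses.
    """
    coeffs = np.fft.rfft(clim_1d)
    coeffs[max_harmonic + 1:] = 0.0              # drop the high-frequency harmonics
    return np.fft.irfft(coeffs, n=clim_1d.size)

# Example with a noisy synthetic annual cycle (placeholder for a real climatology):
days = np.arange(365)
noisy_clim = 50 * np.sin(2 * np.pi * (days - 270) / 365) \
    + np.random.default_rng(1).normal(0, 5, 365)
smooth_clim = smooth_climatology(noisy_clim)
```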
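The comparison at the end of post 5 (an empirical ~0.25% chance of exceeding roughly 5 SDs in the Oct-Apr anomalies, versus the far smaller tail probability of a normal distribution) can be reproduced along these lines. The anomaly array below is a random placeholder with a made-up standard deviation; with the real Oct-Apr heat flux anomalies, `empirical` would be the fraction of days above the 230 K*m/s threshold quoted in the post.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
anoms = rng.standard_normal(38 * 212) * 46.0   # placeholder Oct-Apr anomalies in K*m/s

threshold = 230.0                              # K*m/s, roughly 5 SDs per the post

# Empirical exceedance probability: the fraction of days above the threshold.
empirical = np.mean(anoms > threshold)

# What a normal distribution would give for the same number of standard deviations.
n_sds = threshold / anoms.std()
gaussian = norm.sf(n_sds)                      # one-sided tail probability P(X > n_sds)

print(f"empirical: {empirical:.6%}   normal: {gaussian:.6%}")
```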
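Finally, the CP07 criteria quoted in posts 6 and 7 (the central date is the first day the daily-mean zonal-mean zonal wind at 10 hPa, 60°N turns easterly; cases where the wind does not return to westerly for at least 10 consecutive days before 30 April are discarded as final warmings) can be sketched roughly as below. This only illustrates the parts of the definition quoted in the posts; the function name and inputs are hypothetical, and the rest of the CP07 bookkeeping (e.g., separating consecutive events) is ignored.

```python
from datetime import date
import numpy as np

def cp07_central_date(dates, u1060, final_warming_cutoff):
    """Hypothetical sketch: find a CP07-style central date, or None for a final warming.

    dates  -- list of datetime.date, one per day
    u1060  -- daily-mean zonal-mean zonal wind at 10 hPa, 60N (m/s), same length as dates
    final_warming_cutoff -- the 30 April date for that winter, e.g. date(2019, 4, 30)
    """
    u1060 = np.asarray(u1060)
    easterly_days = np.flatnonzero(u1060 < 0)
    if easterly_days.size == 0:
        return None                      # the wind never reversed
    central = easterly_days[0]           # first easterly day = candidate central date

    # Require at least 10 consecutive westerly days after the central date and
    # before 30 April; otherwise CP07 discards the event as a final warming.
    westerly_run = 0
    for i in range(central + 1, len(u1060)):
        if dates[i] >= final_warming_cutoff:
            break
        westerly_run = westerly_run + 1 if u1060[i] > 0 else 0
        if westerly_run >= 10:
            return dates[central]
    return None                          # treated as a final warming
```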