I've had a kick-yourself simple idea about the summer thinning in the calculated thickness series...
First off: a couple of weeks ago I posted "How Low Can it Go". In a nutshell I pointed out that in 2007 and 2011, the previous two record years, area losses ground to a near halt in the last week of August as the sun set on the central polar region. So I suggested that the same would happen this year.
The measure I use is Cryosphere Today's area metric, dataset here. It's still dropping precipitously and now stands at 3.722M km^2, 0.533M km^2 below the previous record set last year. That's a staggering -18% from the last record. Not as large as the loss between 2006 and 2007, which was -27%, but still massive.
It seems I was wrong on my minor prediction, but why? I can't see anything in the weather that screams out as the reason for the continuing losses, which brings to mind a question that's been bothering me all year. Just what state is the ice in?
The main reason for posting though is that in considering a reply to Jim Williams at Neven's sea ice blog (thread on my recent posts here), I had an idea so simple I can't believe it didn't occur to me before.
The issue is why the seasonal cycle would change after the volume loss of 2010 such that there is thinning in the calculated thickness series over the summer, when previously there wasn't. I need to think about it some more, but here's my initial stab at an answer.
Thickness of a given region, be it the whole ice pack, or a grid box, is the average of all the thickness categories of ice in that region. So what happens when the ratios of thick ice and thin ice change?
I've spreadsheeted a simple toy model. I assign different ratios of ice totalling 100%; for example, first year ice (FYI) may be 60%, with multi-year ice (MYI) making up the remaining 40%. The graph below gives the percentage contribution of MYI. FYI is assumed to melt at a higher rate than MYI: in the model FYI loses 0.15 units per time period, while MYI loses only 0.05. The ratio of these loss rates is exaggerated to make the effect easy to see, and the loss rates don't change as the 'ice' moves from mainly MYI to mainly FYI. It's just a toy model to show an idea.
I then calculate the average thickness for the total amount of ice, taking into account the relative proportions of the MYI and FYI. Assuming a constant linear rate of loss for each type of ice, what happens when I vary the proportion of MYI and FYI?
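The calculation above can be sketched in a few lines of Python. This is my own reconstruction of the spreadsheet, using the numbers stated in the post (MYI starts at 4m losing 0.05 units per step, FYI starts at 2m losing 0.15 units per step); the function name and step count are my choices, not the author's.

```python
def average_thickness(myi_fraction, steps=10):
    """Trace of proportion-weighted average thickness over time.

    myi_fraction: fraction of the region that is MYI (FYI is the rest).
    Each category thins linearly at its own rate; category thickness
    is clamped at zero once it melts out.
    """
    myi, fyi = 4.0, 2.0          # starting thicknesses (m), as in the post
    trace = []
    for _ in range(steps):
        avg = myi_fraction * myi + (1 - myi_fraction) * fyi
        trace.append(avg)
        myi = max(myi - 0.05, 0.0)   # MYI melts slowly
        fyi = max(fyi - 0.15, 0.0)   # FYI melts three times as fast
    return trace

for f in (1.0, 0.6, 0.2, 0.0):
    t = average_thickness(f)
    print(f"MYI {f:.0%}: start {t[0]:.2f} m, loss per step {t[0] - t[1]:.3f} m")
```

Running this shows the pattern in the graph: as the MYI fraction drops from 100% to 0%, the starting average falls from 4m to 2m, and the per-step loss of the average climbs from 0.05 towards 0.15, i.e. the summer slope steepens.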
Time on horizontal axis, thickness on vertical.
Because the ice categories are defined with different starting thicknesses (MYI 4m, FYI 2m), the starting average thickness varies from 4m when the ice is 100% MYI down to 2m when it is 100% FYI. So the starting points of the average thickness traces drop as the ice moves from 100% MYI to 0% MYI.
But what's interesting is that the slope of the lines steepens as the MYI proportion falls. Which is just what we've seen in calculated thickness during the summer period after 2010. Indeed, even before then the more recent decadal averages show a tendency in the same direction.
So is the answer to the post-2010 summer thinning conundrum in calculated thickness simply that the ratio of thicker MYI has decreased relative to FYI? Previously, as the FYI proportion declined over the summer relative to the thicker MYI off the Canadian Arctic Archipelago, that MYI biased calculated thickness upward. Has the MYI been so denuded by the loss event of 2010 that it no longer has this biasing effect?
Notice too: keep the initial thickness the same, i.e. no thinning in winter, and simply remove increasing amounts of MYI, and you get open water... This is only a toy model, and shouldn't be used for anything but examining an idea, but to me that's an interesting outcome.