Wednesday, January 30, 2013

The End of a Japanese Tradition

The Japanese tradition of taking care of one's own in old age is fast headed towards extinction. The cause, as with so many of Japan's current and future woes, is demographic. Using statistics kindly provided by Professor Noriko Tsuya of the Department of Economics at Keio University, Beacon Reports paints a grim picture of what we, as residents of Japan, will most likely experience firsthand. Professor Tsuya chairs the subcommittee on population change and economy of the Science Council of Japan, the subcommittee on population and social statistics of the Cabinet Office's statistics committee, and the committee on population projection of the Social Security Council of the Ministry of Health, Labour and Welfare.

In many Asian cultures, including that of Japan, it is tradition for the eldest son to look after aging parents. That burden, of course, falls largely on the spouse of the eldest son. Therefore, as a group, women have been de facto primary care givers to the elderly in Japanese society. Will there be enough of them to continue this time-honored tradition given Japan's changing demography?

We all know the headline demographics — Japan's rapid aging and low birth rate mean the population will shrink by one-third, falling on average by about one million people per annum over the coming decades. The ratio of those of working age (15 - 64) to the elderly (seniors, aged 65+) will shrink from roughly 3:1 to 1:1, so that only about half the population will be in paid employment by 2060, when those aged 65+ will represent 40% of the total population.

Figures for 2010 show about 11% of Japan's total population is aged 75+. For every 100 persons aged 75+, there are 117 primary care givers (Japanese women aged 40 - 59). And only 3% of Japan's total population is aged 85+.

At first glance, then, it would appear that the supply of and demand for informal care are in balance — the ratio of care givers to those aged 75+ is roughly 1:1. But that's not quite the case, because on average 72.3% of women aged 40 - 59 hold jobs outside the home. The burden of holding two jobs, one formal and one informal, puts an added strain on care givers.
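To make the arithmetic concrete, here is a minimal back-of-envelope sketch of the caregiver ratio and the employment adjustment described above. The absolute head-counts in it are illustrative assumptions, not Professor Tsuya's figures; only the 117-per-100 ratio and the 72.3% employment rate come from the article.

```python
# Back-of-envelope caregiver arithmetic (illustrative head-counts only).
women_40_59 = 17.5e6       # hypothetical number of women aged 40-59 (primary care givers)
persons_75_plus = 15.0e6   # hypothetical number of persons aged 75+
employment_rate = 0.723    # share of women aged 40-59 holding jobs outside the home (from the article)

caregivers_per_100 = 100 * women_40_59 / persons_75_plus
print(f"Primary care givers per 100 persons aged 75+: {caregivers_per_100:.0f}")   # ~117

# Crude availability adjustment: count only women without an outside job as fully available.
available_per_100 = caregivers_per_100 * (1 - employment_rate)
print(f"Of whom not also in paid employment: {available_per_100:.0f} per 100 elderly")  # ~32
```

On this crude measure, the apparent 1:1 balance looks a good deal thinner once outside employment is taken into account.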

While currently stressed, the tradition of informal care giving could survive if not for rapidly changing demographics.

Sixty years ago, life expectancy at birth averaged 61.3 years (59.6 for males and 63 years for females). With better sanitation, healthcare and increased living standards, those today aged 65 can look forward to an average 21.4 additional years of life (18.9 for males and 23.9 for females).

Yet the fertility rate has fallen steadily since 1947. The average number of births per woman in Japan is currently 1.4, below America's 1.9 and well below the replacement level of 2.1 — the level of fertility at which a population exactly replaces itself from one generation to the next.

Simply put, the society is aging. As a result, the percentage of the population aged 75+ is expected to increase from 11% to 20% by 2035. Likewise, the percentage of the population aged 85+ is expected to increase from 3% to 9% over that same period.

By 2060, 27% of Japan's total population will be aged 75+, with more than 13% aged 85+.

The fact is, those aged 85 - 89 form the most rapidly growing age segment of Japanese society. That's important, because it is generally accepted that, as a group, health declines rapidly after age 85. By 2060, one-third of all seniors will be aged 85+, and by 2015 (very soon) more than 50% of all seniors will be aged 75+.


The speed of change is frightening and difficult to comprehend. As the chart below shows, Japan's population is aging faster than that of any other developed nation. France's elderly (aged 65+) doubled from 7% to 14% of the total population over the course of 126 years; Japan achieved the same doubling in 24 years. France's elderly are expected to double from 10% to 20% of the total population over the 77 years ending in 2020; Japan had already accomplished that feat in the 20 years ending in 2005.


The impact of rapid demographic change on the tradition of primary care giving will be devastating. The graph below shows the number of primary care givers per 100 elderly, broken out by age group.


In 2010 there were 117 and 435 primary care givers, respectively, for every 100 elderly aged 75+ and 85+.

Roll the calendar forward to 2035 and the story darkens: there are projected to be only 61 and 134 primary care givers, respectively, for every 100 persons aged 75+ and 85+.

Roll the calendar forward even further to 2060 and there will be, respectively, 43 and 87 primary care givers for every 100 persons aged 75+ and 85+.

To add to the troubles, the historic trend of increasing female employment is expected to continue. This chart shows that between 2012 and 2030 a further 7.4% of primary care givers will likely join the workforce, taking the share of women aged 40-59 in paid employment to almost 80%.


Informal care givers will therefore be far outnumbered and overworked. Beacon Reports asks: who will take care of the elderly? By 2060, perhaps much earlier, the tradition of primary care giving will be a fading memory.

Professor Tsuya

Professor Noriko Tsuya of the Department of Economics at Keio University. The projections presented in this report are based on the most likely expected outcomes. The degree of statistical error increases with the length of the projection, so short-term projections are likely to be more accurate than longer-term ones. All opinions based on the statistics provided by Professor Tsuya are those of Beacon Reports alone.

Sunday, January 27, 2013

This Time is Different

The 2008 crash resulted from the bursting of the biggest bubble in financial history, a ‘credit super-cycle’ that spanned more than three decades. How did this happen?
As Carmen Reinhart and Kenneth Rogoff have demonstrated in their magisterial book This Time Is Different, asset bubbles are almost as old as money itself. The Reinhart and Rogoff book tracks financial excess over eight centuries, but it would be no surprise at all if the Hittites, the Medes, the Persians and the Romans, too, had bubbles of their own. All you need for a bubble is ready credit and collective gullibility.
Some might draw comfort from the observation that bubbles are a long established aberration, arguing that the boom-and-bust cycle of recent years is nothing abnormal. Any such comfort would be misplaced, for two main reasons: first, the excesses of recent years have reached a scale which exceeds anything that has been experienced before; and second, and more disturbing still, the developments which led to the financial crisis of 2008 amounted to a process of sequential bubbles, a process in which the bursting of each bubble was followed by the immediate creation of another.
Though the sequential nature of the pre-2008 process marks this as something that really is different, we can, nevertheless, learn important lessons from the bubbles of the past.
  • First, bubbles follow an approximately symmetrical track, in which the spike in asset values is followed by a collapse of roughly similar scale and duration. If this holds true now, we are in for a very long and nasty period of retreat.
  • Second, easy access to leverage is critical, as bubbles cannot happen if investors are limited to equity.
  • Third, most bubbles look idiotic when seen with hindsight.
  • Fourth – and although institutional arrangements are critical – the real driving dynamic of bubbles is a psychological process which combines greed, the willing suspension of disbelief and the development of a herd mentality.
“tulips from Amsterdam”
One of the most famous historical bubbles is the tulip mania which gripped the United Provinces (the Netherlands) during the winter of 1636-37. Tulip bulbs had been introduced to Europe from the Ottoman Empire by Ogier de Busbecq in 1554, and found particular favour in the United Provinces after 1593, when Carolus Clusius proved that these exotic plants could thrive in the harsher Dutch climate.
The tulip was a plant whose beauty and novelty had a particular appeal, but tulip mania would not have occurred without favourable social and economic conditions. The Dutch had been engaged in a long war for independence from Spain since 1568 and, though final victory was still some years away, the original Republic of the Seven Provinces of the Netherlands declared independence from Spain in 1581. This was the beginning of the great Dutch Golden Age. In this remarkable period, the Netherlands underwent some fundamental and pioneering changes which included the establishment of trading dominance, great progress in science and invention, and the creation of corporate finance, as well as the accumulation of vast wealth, the accession of the Netherlands to global power status, and great expansion of industry.
This was a period in which huge economic, business, scientific, trading and naval progress was partnered by remarkable achievements in art (Rembrandt and Vermeer), architecture and literature. The prosperity of this period created a wealthy bourgeoisie which displayed its affluence in grand houses with exquisite gardens. Enter the tulip.
For the newly-emergent Dutch bourgeoisie, the tulip was the “must have” consumer symbol of the 1630s, particularly since selective breeding had produced some remarkably exotic new plants. Tulips cannot be grown overnight, but take between seven and twelve years to reach maturity. Moreover, tulips bloom for barely a week during the spring, meaning that bulbs can be uprooted and sold during the autumn and winter months. A thriving market in bulbs developed in the Netherlands even though short-selling was outlawed in 1610. Speculators seem to have entered the tulip market in 1634, setting the scene for tulip mania.
The tulip bubble did not revolve around a physical trade in bulbs but, rather, involved a paper market in which people could participate with no margin at all. Indeed, the tulip bubble followed immediately upon the heels of the creation by the Dutch of the first futures market. Bulbs could change hands as often as ten times each day but, because of the abrupt collapse of the paper market, no physical deliveries were ever made.
Price escalation was remarkable, with single bulbs reaching values that exceeded the price of a large house. A Viceroy bulb was sold for 2,500 florins at a time when a skilled worker might earn 150 florins a year. Putting these absurd values into modern terms is almost impossible because of scant data, but the comparison with skilled earnings suggests values of around £500,000, which also makes some sense in relation to property prices. In any event, a bubble which began in mid-November 1636 was over by the end of February 1637.
Though tulip mania was extremely brief, and available data is very limited, we can learn some pertinent lessons from this strange event.
For a start, this bubble looks idiotic from any rational perspective – how on earth could a humble bulb become as valuable as a mansion, or equivalent to 17 years of skilled wages? Second, trading in these ludicrously overvalued items took place in then-novel forms (such as futures), and was conducted on unregulated fringe markets rather than on the recognised exchanges.
Third, participants in the mania lost the use of their critical faculties. Many people – not just speculators and the wealthy, but individuals as diverse as farmers, mechanics, shopkeepers, maidservants and chimney-sweeps – saw bulb investment as a one-way street to overnight prosperity. Huge paper fortunes were made by people whose euphoria turned to despair as they were wiped out financially.
The story that a sailor ate a hugely valuable bulb, which he mistook for an onion, is probably apocryphal (because it would have poisoned him), but there can be little doubt that this was a period of a bizarre mass psychology verging on collective insanity.
all at sea
The South Sea Bubble of 1720 commands a special place in the litany of lunacy that is the history of bubbles.
The South Sea Company was established in 1711 as a joint government and private entity created to manage the national debt. Britain’s involvement in the War of the Spanish Succession was imposing heavy costs on the exchequer, and the Bank of England’s attempt to finance this through two successive lotteries had not been a success. The government therefore asked an unlicensed bank, the Hollow Sword Blade Company, to organise what became the first successful national lottery to be floated in Britain. The twist to this lottery was that prizes were paid out as annuities, thus leaving the bulk of the capital in government hands.
After this, government set up the South Sea Company, which took over £9m of national debt and issued shares to the same amount, receiving an annual payment from government equivalent to 6% of the outstanding debt (£540,000) plus operating costs of £28,000. As an added incentive, government granted the company a monopoly of trade with South America, a monopoly which would be without value unless Britain could break the Spanish hegemony in the Americas, an event which, at that time, was wildly implausible.
The potentially-huge profits from this monopoly grabbed speculator attention even though the real likelihood of any returns ever actually accruing was extremely remote. Despite very limited concessions secured in 1713 at the end of the war, the trading monopoly remained all but worthless, and company shares remained below their issue price, a situation not helped by the resumption of war with Spain in 1718.
Even so, shares in the company, effectively backed by the national debt, began to rise in price, a process characterised by insider dealing and boosted by the spreading of rumours.
Between January and May 1720, the share price rose from £128 to £550 as rumours of lucrative returns from the monopoly spread amongst speculators. What, many argued, could be better than a government-backed company with enormous leverage to monopolistic profits in the fabled Americas? Legislation, passed under the auspices of Company insiders and banning the creation of unlicensed joint stock enterprises, spurred the share price to a peak of £890 in early June. This was bolstered by Company directors, who bought stock at inflated prices to protect the value of investments acquired at much lower levels. The share price peaked at £1,000 in August 1720, but the shares then lost 85% of their inflated market value in a matter of weeks.
Like the Dutch tulip mania, the South Sea Bubble was an example which fused greed and crowd psychology with novel market practices, albeit compounded by rampant corruption in high places. Even Sir Isaac Newton, presumably a man of common sense, lost £20,000 (equivalent to perhaps £2.5m today) in the pursuit of the chimera of vast, but nebulous, unearned riches.
Any rational observer, even if unaware of the insider dealing and other forms of corruption in which the shares were mired, should surely have realised that an eight-fold escalation in the stock price based entirely on implausible speculation was, quite literally, ‘too good to be true’.
In his Extraordinary Popular Delusions and the Madness of Crowds, Charles Mackay ranked the South Sea Company and other bubbles with alchemy, witch-hunts and fortune-telling as instances of collective insanity. Whilst other such foibles have tended to retreat in the face of science, financial credulity remains alive and well, which means that we need to know how and why these instances of collective insanity seem to be hard-wired into human financial behaviour.
made in Japan
In some respects, the Japanese asset bubble of the 1980s provided a ‘dry run’ for the compounded bubbles of the super-cycle. Japan’s post-war economic miracle was founded on comparatively straightforward policies. Saving was encouraged, and was channelled into domestic rather than foreign capital markets, which meant that investment capital was available very cheaply indeed. Exports were encouraged, imports were deterred by tariff barriers, and consumption at home was discouraged. The economic transformation of Japan in the four decades after 1945 was thus export-driven, and led by firms which had access to abundant, low-cost capital.


By the early 1980s, Japan’s economic success was beginning to lead to unrealistic expectations about future prosperity. Many commentators, abroad as well as at home, used the ‘fool’s guideline’ of extrapolation to contend that Japan would, in the foreseeable future, oust America as the world’s biggest economy. The international expansion of Japanese banks and securities houses was reflected in the proliferation of sushi bars in New York and London. Boosted by the diversion of still-cheap capital from industry into real estate, property values in Japan soared, peaking at $215,000 per square metre in the prized Ginza district of Tokyo.
Comforted by inflated property values, banks made loans which the borrowers were in no position to repay. The theoretical value of the grounds of the Imperial Palace came to exceed the paper value of the entire state of California. Meanwhile, a soaring yen was pricing Japanese exports out of world markets.
Though comparatively gradual – mirroring, in true bubble fashion, the relatively slow build-up of asset values – the bursting of the bubble was devastating. Properties lost more than 90% of their peak values, and the government’s policy of propping up insolvent banks and corporations created “zombie companies” of the type that exist today in many countries. Having peaked at almost 39,000 at the end of 1989, the Nikkei 225 index of leading industrial stocks deteriorated relentlessly, bottoming at 7,055 in March 2009.
The Japanese economy was plunged into the “lost decade” which, in reality, could now be called the ‘lost two decades’. In 2011, Japanese government debt stood at 208% of GDP, a number regarded as sustainable only because of the country’s historic high savings ratio (though this ratio is, in fact, subject to ongoing deterioration as the population ages).
2008 – the biggest bust
With hindsight, we now know that the Japanese asset bust was an early manifestation of the ‘credit supercycle’, which can be regarded as ‘the biggest bubble in history’. The general outlines of the super-cycle bubble are reasonably well understood, even if the underlying dynamic is not. To understand this enormous boom-bust event, we need to distinguish between the tangible components of the bubble and its underlying psychological and cultural dimensions.
Conventional analysis argues that tangible problems began with the proliferation of subprime lending in the United States. Perhaps the single biggest contributory factor to the subprime fiasco was the breaking of the link between borrower and lender. Whereas, traditionally, banks assessed the viability of the borrower in terms of long-term repayment, the creation of bundled MBSs (mortgage-backed securities) severed this link.
Astute operators could now strip risk from return, pocketing high returns whilst unloading the associated high risk. The securitisation of mortgages was a major innovative failing in the system, as was the reliance mistakenly placed on credit-rating agencies which, of course, were paid by the issuers of the bundled securities. Another contributory innovation was the use of ARM (adjustable rate mortgage) products, designed to keep the borrower solvent just long enough for the originators of the mortgages to divest the packaged loans.
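To illustrate the ARM mechanics described above, here is a minimal sketch of a teaser-rate reset using the standard amortising-payment formula; the loan size and rates are hypothetical, chosen only to show the scale of the payment jump.

```python
# Hypothetical ARM reset: a low teaser rate keeps the borrower current at first,
# then the reset pushes the payment sharply higher.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortising mortgage payment."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

loan = 200_000.0
teaser = monthly_payment(loan, 0.04, 30)   # 4% teaser rate during the introductory period
reset = monthly_payment(loan, 0.09, 30)    # 9% after the reset (ignoring principal already repaid)

print(f"Teaser-rate payment: ${teaser:,.0f}/month")   # ~$955
print(f"Post-reset payment:  ${reset:,.0f}/month")    # ~$1,609, roughly a 68% jump
```

Whether a real subprime borrower could absorb a jump of that order depended almost entirely on house prices continuing to rise, which is exactly the point the originators were relying on.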
The authorities (and, in particular, the Federal Reserve) must bear a big share of culpability for failing to spot the mispricing of risk which resulted from the on-sale of mortgage debt. The way in which banks were keeping the true scale of potential liabilities off their balance sheets completely eluded regulators, and Alan Greenspan’s belief that banks would always act in the best interests of shareholders was breathtakingly naive. In America, as for that matter in Britain and elsewhere, central banks’ monetary policies were concentrated on retail inflation (which had for some years been depressed both by benign commodity markets and by the influx of ever-cheaper goods from Asia), and ignored asset price escalation.
Meanwhile, banks’ leverage had expanded, in part because of ever-looser definitions of capital and assets and in part because of sheer regulatory negligence. Just as Greenspan’s Fed believed that bankers were the best people to determine their shareholders’ interests, British chancellor Gordon Brown took pride in a “light touch” regulatory system which saw British banks’ total risk assets surge to more than £3,900bn on the back of just £120bn of pure loss-absorbing capital or TCE (tangible common equity).
It does not seem to have occurred to anyone – least of all to the American, British and other regulatory authorities – that a genuine capital reserve of less than 2% of assets could be overwhelmed by even a relatively modest correction in asset prices.
Both sides of the reserves ratio equation were distorted by regulatory negligence. On the assets side, banks were allowed to risk-weight their assets, which turned out to be a disastrous mistake. Triple-A rated government bonds were, not unnaturally, regarded as AFS (‘available for sale’) and accorded a zero-risk rating, but so, too, in practice, were the AAA portions that banks, with the assistance of the rating agencies, managed to slice out of MBSs (mortgage-backed securities) and CDOs (collateralised debt obligations).
Mortgages of all types were allowed to be risk-weighted downwards to 50% of their book value which, at best, reflected a nostalgic, pre-subprime understanding of mortgage risk on the part of the regulators. In the US, banks were allowed to net-off their derivatives exposures, such that J.P. Morgan Chase, for example, carried derivatives of $80bn on its balance sheet even though the gross value of securities and derivatives was close to $1.5 trillion. The widespread assumption that potential losses on debt instruments were covered by insurance overlooked the fact that all such insurances were placed with a small group of insurers (most notably AIG) which were not remotely capable of bearing system-wide risk.
Meanwhile, innovative definitions allowed banks’ reported capital to expand from genuine TCE to include book gains on equities, and provisions for deferred tax and impairment. Even some forms of loan capital were allowed to be included in banks’ reported equity.
Together, the risk-weighting of assets, and the use of ever-looser definitions of capital, combined to produce seemingly-reassuring reserves ratios which turned out to be wildly misleading. Lehman Brothers, for example, reported a capital adequacy ratio of 16.1% shortly before it collapsed, whilst the reported pre-crash ratios for Northern Rock and Kaupthing were 17.5% and 11.2% respectively.
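A purely illustrative calculation shows how risk-weighting of assets and loose definitions of capital can combine to make a thinly capitalised bank look well reserved. The balance-sheet figures below are hypothetical; they are not those of Lehman, Northern Rock or Kaupthing.

```python
# Hypothetical bank balance sheet (all figures in $bn, illustrative only).
total_assets = 1000.0         # gross, un-weighted assets
tce = 25.0                    # tangible common equity: genuinely loss-absorbing capital
reported_capital = 60.0       # TCE plus looser items (book equity gains, deferred tax, some loan capital)

# Regulators allowed assets to be risk-weighted (e.g. AAA tranches at 0%, mortgages at 50%).
risk_weighted_assets = 375.0  # hypothetical result of risk-weighting the $1,000bn

capital_adequacy = reported_capital / risk_weighted_assets   # the headline, reported ratio
leverage = tce / total_assets                                # the ratio that matters when prices fall

print(f"Reported capital adequacy ratio: {capital_adequacy:.1%}")    # 16.0%
print(f"Loss-absorbing capital vs total assets: {leverage:.1%}")     # 2.5%
# With these numbers, a fall of only a few percent in asset values wipes out
# the tangible equity, whatever the reported ratio suggests.
```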
Well before 2007, the escalation in the scale of indebtedness had rendered a crash inevitable. Moreover, the two triggers that would bring the edifice crashing down could hardly have been more obvious. First, the resetting of ARM mortgage interest rates made huge subprime default losses inevitable unless property prices rose indefinitely, which was a logical impossibility. Subprime defaults would in turn undermine the asset bases of banks holding the toxic assets that the sliced-and-diced mortgage-based instruments were bound to become as soon as property price escalation ceased.
The second obvious trigger was a seizure in liquidity. The escalation in the scale of debt had far exceeded domestic depositor funds, not least because savings ratios had plunged as borrowing and consumption had displaced saving and prudence in the Western public psyche. Unlike depositors – a stable source of funding, in the absence of bank runs – the wholesale funding markets which had provided the bulk of escalating leverage were perfectly capable of seizing up virtually overnight. For this reason, a liquidity seizure crystallised what was essentially a leverage problem.


At this point, three compounding problems kicked in.
  • The first was the termination of a long-standing ‘monetary ratchet’ process – low rates created bubbles, and the authorities countered each ensuing downturn by cutting rates still further, but, this time around, prior rate reductions left little scope for further relaxation.
  • Second, economies had become dependent upon debt-fuelled consumption, and any reversal in debt availability was bound to unwind the earlier (and largely illusory) ‘growth’ created by debt-fuelled consumer spending. As figs. 2.2 and 2.3 show, the relationship between borrowing and associated growth had been worsening for some years, such that the $4.1 trillion expansion in nominal US economic output between 2001 and 2007 had been far exceeded by an increase of $6.7 trillion in consumer debt, and the growth/borrowing equation had slumped (see the sketch after this list).
  • Third, some countries – most notably the United Kingdom – had compounded consumer debt dependency by mistaking illusory (debt-fuelled) economic expansion for ‘real’ growth, and had expanded public spending accordingly, a process which created huge fiscal deficits as soon as leverage expansion ceased. Ultimately, the leverage-driven ‘great bubble’ in pan-Western property values had created the conditions for a deleveraging downturn, something for which governments’ previous experience of destocking recessions had provided no realistic appreciation.
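The deteriorating growth-versus-borrowing relationship mentioned in the second bullet can be expressed very simply using the figures quoted there; this is a minimal sketch, and the "growth per dollar borrowed" metric is just one convenient way of stating it.

```python
# Growth obtained per dollar of new consumer debt, 2001-2007 (figures quoted above).
gdp_expansion = 4.1e12            # increase in nominal US output, 2001-2007, in dollars
consumer_debt_increase = 6.7e12   # increase in consumer debt over the same period, in dollars

growth_per_dollar_borrowed = gdp_expansion / consumer_debt_increase
print(f"Growth per $1 of new consumer borrowing: ${growth_per_dollar_borrowed:.2f}")   # ~$0.61
# Each incremental dollar of borrowing was buying well under a dollar of
# (largely consumption-driven, and arguably illusory) growth.
```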


familiar features
Though, as we shall see, the bursting of the super-cycle in 2008 had some novel aspects, the process nevertheless embraced many features of past bubbles.
A number of points are common to these past bubbles, factors which include easy credit, low borrowing costs, financial innovation (in the form of activities which take place outside established markets, and/or are unregulated, and/or are outright illegal), weak institutional structures, opportunism by some market participants, and the emergence of some form of mass psychology in which fear is wholly ousted by greed.
Often, the objects of speculation are items which can seem wholly irrational with the benefit of hindsight (how on earth could tulip bulbs, for instance, have become so absurdly over-valued?) A further important point about bubbles is that they can inflate apparent prosperity, but the post-burst effects include the destruction of value and the impairment of economic output for an extended period. In reality, though, the bursting of a bubble does not destroy capital, but simply exposes the extent to which value has already been destroyed by rash investment.
Of course, the characteristics of earlier excesses have not been absent in contemporary events. As with tulip bulbs, South Sea stock and Victorian railways, recent years have witnessed the operation of mass psychologies in which rational judgement has been suspended as greed has triumphed over fear. Innovative practices, often lying outside established markets, have abounded. Examples of such innovations have included subprime and adjustable-rate mortgages, and the proliferation of an ‘alphabet soup’ of the derivatives that Warren Buffett famously described as “financial weapons of mass destruction”. Credit became available in excessive amounts, and the price of credit was far too low (a factor which, we believe, may have been exacerbated by a widespread under-reporting of inflation).
why this time is different
Whilst it shared many of the characteristics of previous such events, the credit super-cycle bubble which burst in 2008 differed from them in at least two respects, and arguably differed in a third dimension as well.
The first big difference was that the scale and scope of the 2008 crash far exceeded anything that had gone before. Though it began in America (with parallel events taking place in a number of other Western countries), globalisation ensured that the crash was transmitted around the world. The total losses resulting from the crash are almost impossible to estimate, not least because of notional losses created by falling asset prices, but even a minimal estimate of $4 trillion equates to about 5.7% of global GDP, with every possibility that eventual losses will turn out to have been far greater than this.
The second big difference between the super-cycle and previous bubbles lay in timing. A gap of more than 80 years elapsed between the tulip mania of 1636-37 and the South Sea bubble of 1720, though the latter had an overseas corollary in the Mississippi bubble of the same year. The next major bubble, the British railway mania of the 1840s, followed an even longer time-gap, and a further interval of about seven decades separated the dethroning of the crooked “railway king” (George Hudson) in 1849 from the onset of the ‘roaring twenties’ bubble which culminated in the Wall Street Crash. Though smaller bubbles (such as Poseidon) occurred in between, the next really big bubble did not occur until the 1980s, when Japanese asset values lost contact with reality.
In recent years, however, intervals between bubbles have virtually disappeared, such that the decade prior to the 2008 crash was characterised by a series of events which overlapped in time. Property price bubbles were the greatest single cause of the financial crisis, but there were complementary bubbles in a variety of other asset categories.
The dot-com bubble (1995-2000) reflected a willing suspension of critical faculties where the potential of supposedly ‘high tech’ equities was concerned, and historians of the future are likely to marvel at the idiocy which attached huge values to companies which lacked earnings, cash flow or a proven track record, and were often measured by the bizarre metric of “cash-burn”. Other bubbles occurred in property markets in the United States, Britain, Ireland, Spain, China, Romania and other countries, as well as in commodities such as uranium and rhodium. Economy-wide bubbles developed in countries such as Iceland, Ireland and Dubai. Perhaps the most significant bubble of the lot – for reasons which will become apparent later – was that which carried the price of oil from an average of $25/b in 2002 to a peak of almost $150/b in 2008.
This rash of bubbles suggests that recent years have witnessed the emergence of a distinctive new trend, which is described here as a credit super-cycle, a mechanism which compounds individual bubbles into a broader pattern.
This report argues that a third big difference may be that the super-cycle bubble coincided with a weakening in the fundamental growth dynamic. What we need to establish is the ‘underlying narrative’ that has compressed the well-spaced bubble-forming processes of the past into the single, compounded-bubble dynamic of the credit super-cycle.
It is suggested here that this narrative must include:
  • A mass psychological change which has elevated the importance of immediate consumption whilst weakening perceptions both of risks and of longer-term consequences.
  • Institutional weaknesses which have undermined regulatory oversight whilst simultaneously facilitating the provision of excessive credit through the creation of high-risk instruments.
  • Mispricing of risk, compounded by false appreciation of economic prospects and by the distortion of essential data.
  • A political, business and consumer mind-set which elevates the importance of the immediate whilst under-emphasising the longer term.
  • A distortion of the capitalist model which has created a widening chasm between ‘capitalism in principle’ and ‘capitalism in practice’.
Before we can put the credit super-cycle into its proper context, however, we need to appreciate three critical issues, each of which is grossly misunderstood.
  1. The first of these is the vast folly of globalisation. This has impoverished and weakened the West whilst ensuring that few countries are immune from the consequences of the unwinding of a world economy which has become a hostage to future growth assumptions at precisely the same time that the scope for generating real growth is deteriorating.
  2. The second critical issue is the undermining of official economic and fiscal data, a process which has disguised many of the most alarming features of the super-cycle.
  3. Third, there has been a fundamental misunderstanding of the dynamic which really drives the economy. Often regarded as a monetary construct, the economy is, in the final analysis, an energy system, and the critical supply of surplus energy has been in seemingly-inexorable decline for at least three decades.

Thursday, January 10, 2013

How the World Will Look In 2030


What will the world look like two decades from now? Obviously, nobody knows, but some things are more likely than others. Companies and governments have to make informed guesses, because some of their investments today will last longer than 20 years. In December, the United States National Intelligence Council (NIC) published its guess: Global Trends 2030: Alternative Worlds.
The NIC foresees a transformed world, in which “no country – whether the US, China, or any other large country – will be a hegemonic power.” This reflects four “megatrends”:
[These trends exist today, but during the next 15-20 years they will deepen and become more intertwined, producing a qualitatively different world. For example, the hundreds of millions of entrants into the middle classes throughout all regions of the world create the possibility of a global “citizenry” with a positive effect on the global economy and world politics. Equally, absent better management and technologies, growing resource constraints could limit further development, causing the world to stall its engines.]

1. Individual Empowerment and the growth of a global middle class;

2. Diffusion of Power from states to informal networks and coalitions;

3. Demographic changes, owing to urbanization, migration, and aging;

4. Increased demand for food, water, and energy.

Each trend is changing the world and “largely reversing the historic rise of the West since 1750, restoring Asia’s weight in the global economy, and ushering in a new era of ‘democratization’ at the international and domestic level.” The US will remain “first among equals” in hard and soft power, but “the ‘unipolar moment’ is over.”
It is never safe, however, to project the future just by extrapolating current trends. Surprise is inevitable, so the NIC also identifies what it calls “game-changers,” or outcomes that could drive the major trends off course in surprising ways.
First among such sources of uncertainty is the global economy: will volatility and imbalances lead to collapse, or will greater multipolarity underpin greater resilience? Similarly, will governments and institutions be able to adapt fast enough to harness change, or will they be overwhelmed by it?
Moreover, while interstate conflict has been declining, intrastate conflict driven by youthful populations, identity politics, and scarce resources will continue to plague some regions like the Middle East, South Asia, and Africa. And that leads to yet another potentially game-changing issue: whether regional instability remains contained or fuels global insecurity.
Then there is a set of questions concerning the impact of new technologies. Will they exacerbate conflict, or will they be developed and widely accessible in time to solve the problems caused by a growing population, rapid urbanization, and climate change?
The final game-changing issue is America’s future role. In the NIC’s view, the multi-faceted nature of US power suggests that even as China overtakes America economically – perhaps as early as the 2020s – the US will most likely maintain global leadership alongside other great powers in 2030. “The potential for an overstretched US facing increased demands,” the NIC argues, “is greater than the risk of the US being replaced as the world’s preeminent political leader.”
Is this good or bad for the world? In the NIC’s view, “a collapse or sudden retreat of US power would most likely result in an extended period of global anarchy,” with “no stable international system and no leading power to replace the US.”
The NIC discussed earlier drafts of its report with intellectuals and officials in 20 countries, and reports that none of the world’s emerging powers has a revisionist view of international order along the lines of Nazi Germany, Imperial Japan, or the Soviet Union. But these countries’ relations with the US are ambiguous. They benefit from the US-led world order, but are often irritated by American slights and unilateralism. One attraction of a multipolar world is less US dominance; but the only thing worse than a US-supported international order would be no order at all.
The question of America’s role in helping to produce a more benign world in 2030 has important implications for President Barack Obama as he approaches his second term. The world faces a new set of transnational challenges, including climate change, transnational terrorism, cyber insecurity, and pandemics. All of these issues require cooperation to resolve.
Obama’s 2010 National Security Strategy argues that the US must think of power as positive-sum, not just zero-sum. In other words, there may be times when a more powerful China is good for the US (and for the world). For example, the US should be eager to see China increase its ability to control its world-leading greenhouse-gas emissions.
US Secretary of State Hillary Clinton has referred to the Obama administration’s foreign policy as being based on “smart power,” which combines hard and soft power resources, and she argues that we should not talk about “multipolarity,” but about “multi-partnerships.” Likewise, the NIC report suggests that Americans must learn better how to exercise power with as well as over other states.
To be sure, on issues arising from interstate military relations, understanding how to form alliances and balance power will remain crucial. But the best military arrangements will do little to solve many of the world’s new transnational problems, which jeopardize the security of millions of people at least as much as traditional military threats do. Leadership on such issues will require cooperation, institutions, and the creation of public goods from which all can benefit and none can be excluded.
The NIC report rightly concludes that there is no predetermined answer to what the world will look like in 2030. Whether the future holds benign or malign scenarios depends in part on the policies that we adopt today.
The upper chart below shows US share of real global GDP under four 'alternate' scenarios. The lower chart illustrates patterns in the shift in global economic clout across regions (measured in terms of regions’/countries’ share of global GDP) in 2010 and in our four scenarios for 2030. The four scenarios are:
  • Stalled Engines–a scenario in which the US and Europe turn inward and globalization stalls.
  • Fusion–a world in which the US and China cooperate, leading to worldwide cooperation on global challenges.
  • Gini-Out-of-the-Bottle–a world in which economic inequalities dominate.
  • Nonstate World–a scenario in which nonstate actors take the lead in solving global challenges.
Europe will become just a tourist land while Asia will be where growth and profits are. 

Wednesday, January 2, 2013

Japan's debt bomb 2013


Speaking of "wrong," let's move on to the bond market, shall we? "Wrong" doesn't even come close to describing where the prices of sovereign debt instruments are currently trading, and in 2013 that, I believe, will be the biggest story to unfold. If, as I foresee, a sweeping wave of reality begins to wash over the investment world, then sovereign bond holders (and the institutions that issue those instruments) are in for a world of hurt. Where this cascade begins is anybody's guess, though. It could be in Japan, now that the era of "Abenomics" seems to be upon us.

To recap, Japan has the world's most outrageous debt-to-GDP ratio at roughly 240%, and as that super-smart guy Kyle Bass has so eloquently pointed out recently, its debt will shortly reach 1 quadrillion yen—a hard-to-fathom number which he simplifies thus: "If you were to try and count to a quadrillion and every number took you one second to get there, how long do you think it would take you to count to a quadrillion? Thirty-one million years." Kyle's assessment of the ramifications of that? "There is no chance the Japanese can ever repay their debts. Plain and simple."

(Chart source: Bloomberg)

Kyle is right. In fact, Kyle has been right for a couple of years. But Kyle has been a victim of his own peerless capacity for clear thought—he has been early. No matter. 2013 will be the year Kyle is proven oh-so-right.

(Chart source: Bloomberg)

Adding to Japan's woes is its demographic situation, which will now lurch from problematic to perilous almost overnight:

(Chart source: The Housing Time Bomb)

Now, charts are great for outlining the problems facing Japan, but what about "Abenomics" as the possible solution to Japan's woes? Well, according to one of the smartest Japan-watchers I know, BAML's Pawan Kalia, far from being Japan's saviour, Abenomics could in fact be the final straw that pushes the Land of the Rising Sun towards sunset in a hurry.

Pawan's logic? Simple. Abe won election on a platform of aggressive fiscal expansion, and although ¥3-4 trillion is priced into the market, the final number may well be closer to ¥10 trillion, or 2% of GDP (in fact, if you listen very carefully, you can even hear numbers like ¥200 trillion over 10 years being waved around with abandon in certain circles). If that is even close to what eventually transpires, it will require massive new bond issuance. Ironically, just as fixing the confidence problem will sound the Fed's death knell, in Japan, generating the much-hyped "2-3% inflation" will also bring the bond market crashing down around the government's and the BoJ's ears. Be careful what you wish for.

In Europe, that cute little blonde girl couldn't be more hopelessly wrong about everything being fine and the EU putting all its problems behind it in 2013. Let's begin with Greece. Beware Alexis Tsipras. The charismatic and combative leader of the left-wing, anti-austerity Syriza party came from nowhere last year to almost sweep into power on a wave of anti-European sentiment, and, though Antonis Samaras' New Democracy Party (the original architects of Greece's cooked books) narrowly won the election, Tsipras is not lying down quietly:

(UK Guardian): Public opinion surveys have repeatedly put his party in the lead since the summer. He has his sights on power. Demands for fresh elections are likely to be heard frequently over the course of 2013.
"This government does not have a long lifeline," he says, waving his arms for emphasis as he lists the measures adopted by his "dogmatic neo-liberal" political enemies that, he continues, have been tried and failed miserably.