Wednesday, December 17, 2008

Point-Counterpoint: Bear Trader talks on Risk; Hedging: The "Law" of Large Numbers

The Observer wrote to Bear Trader:

> (Insurance folks, option traders, and "hedge folks") describe with some reverence the "law of numbers" that insurance, or the passing of risk, depends on.
> In reflecting on this, I have looked up the law on Wikipedia:
> http://en.wikipedia.org/wiki/Law_of_large_numbers
> and have thought back to the October 1987 plunge, when a broker asked a technical guru in New York what the charts looked like, to which the guru replied: in a time of panic, the charts are meaningless and do not apply....


> The law of large numbers thus, it seems to me, says that risk can be determined in a large set if one assumes that only a few items of the set are losses and the pool can effectively distribute the loss among everyone. The law does not say that in a time of generalized depression and deflation the total risk can be hedged; it cannot be. Indeed, the history of mutual funds in bear markets has shown that distributing money across many plunging funds produces plunging results.


> Indeed, when one considers that the life insurance industry uses commercial real estate investments matched to the life expectancy of its insured pool, one would surmise that all the reporting of A.M. Best etc. is simply bogus at this time.
> So: does the, did the, will the law of large numbers bail us out if simple faith and proper ratios of indebtedness do not?
> Talk to me.
> The Observer
>----------------------------
Dear Observer,

Yeah, this "law of numbers" being pushed at you is an argument insurance companies use to show that they are cool. They claim an algorithmic model, a computer model, a mathematical model, all the same thing, based on this "law of large numbers". And, as with so much computer modeling, people use the model wrongly, or don't understand anything about it, or the model is hopelessly screwed up at its foundation. I think what we are seeing is a wrong understanding of the idea of a "random number".

I have had some really frustrating exchanges on the web about the nature of "random numbers" with people who have more than enough math to know better. They are just wildly screwed up.

My interest in random numbers arose in cryptography, in the idea that a cryptographic system can be shown to be harder to attack by analysis than by a brute force exhaustive password search. Essentially, if this is true, then a large bulk of encrypted material, examined as the elements of an array (value #1, value #2, value #3, ..., value the last), has to be indistinguishable from an array of random numbers. And it has to stay that way as the array grows: say a million new values come into your analysis system every day and maybe a billion values sit in your database, so "value the last" is some "value the billionth".
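To make "indistinguishable from an array of random numbers" concrete, here is a minimal sketch of one simple distinguisher, a chi-squared test on byte frequencies. This is my illustration, not part of the original letter: the function name and sample data are invented, and real test batteries run dozens of checks like this one.

```python
# One simple "distinguisher": a chi-squared test on byte frequencies.
# Uniform random bytes should use all 256 values about equally often;
# a statistic far above the 255 degrees of freedom flags non-randomness.
import os
from collections import Counter

def chi_squared_byte_stat(data: bytes) -> float:
    expected = len(data) / 256              # expected count per byte value
    counts = Counter(data)                  # iterating bytes yields ints 0..255
    return sum((counts.get(v, 0) - expected) ** 2 / expected
               for v in range(256))

random_bytes = os.urandom(1_000_000)        # stand-in for good ciphertext
plain_text = b"the quick brown fox jumps over the lazy dog " * 20_000
print(chi_squared_byte_stat(random_bytes))  # hovers near 255
print(chi_squared_byte_stat(plain_text))    # enormous: nothing like random
```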

(A brute force attack on the password can be a chore. Say the password is 500 characters long and each character can take any of 256 byte values; the total possibilities are then 256^500, about one followed by 1204 zeros. At one million guesses per second, the attack exhausts the possibilities in about one followed by 1191 zeros years. If you had started at the beginning of time with a trillion processors each making a trillion guesses every second, you would still have about one followed by 1173 zeros years to go. Numbers this big are most easily handled with logarithms, as the sketch below shows.)
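Here is that arithmetic carried out in log10, the only sane way to handle numbers of this size. A small Python sketch; the 256-values-per-character assumption and the attack rates are the ones from the paragraph above.

```python
# The brute-force arithmetic above, carried out in log10 so the numbers
# stay manageable. Assumptions: 500 characters, 256 possible values each,
# and the attack rates quoted in the paragraph.
import math

log_keyspace = 500 * math.log10(256)        # ~1204.1, i.e. 256**500 guesses
log_secs_per_year = math.log10(3.156e7)     # ~7.5

log_years_one_cpu = log_keyspace - 6 - log_secs_per_year   # 10**6 guesses/s
log_years_cluster = log_keyspace - 24 - log_secs_per_year  # 10**12 CPUs x 10**12/s

print(f"keyspace:     about 10**{log_keyspace:.0f}")              # 10**1204
print(f"one machine:  about 10**{log_years_one_cpu:.0f} years")   # 10**1191
print(f"huge cluster: about 10**{log_years_cluster:.0f} years")   # 10**1173
```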

So, back to random numbers. The "law of large numbers" is the idea that for a string, an array, a series of independent random numbers existing within an upper and lower value bound, the running average tends toward a constant value, and as the array gains more elements with time the average of ALL the elements will tend to be "closer" to that constant (the expected value). The theorem itself is provable, by the way; what looks unprovable to me is that any real-world series actually satisfies its assumptions.
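As a quick sketch of what that convergence looks like in the textbook case, here is a toy example of my own, assuming independent, uniformly distributed draws:

```python
# Textbook case: independent draws bounded between 0 and 100.
# The running mean settles toward the expected value, 50.
import random

random.seed(1)
running_sum = 0.0
for i in range(1, 1_000_001):
    running_sum += random.uniform(0.0, 100.0)
    if i in (10, 1_000, 100_000, 1_000_000):
        print(f"n={i:>9}: running mean = {running_sum / i:.3f}")
# Output drifts quickly toward 50.000 as n grows.
```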

The problem, of course, is whether the data values in the array are random in the sense the theorem requires: independent draws from one fixed distribution. If they are not, the law of large numbers does not apply. Further, in the cryptographic sense of the word, if the data values "cluster" around the median in a bell curve sort of way, then the series is not uniform, and it is not and never has been random.
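And here is the flip side, again a toy example of my own: a bounded but drifting series. The draws are not identically distributed, so the running mean never settles.

```python
# Counter-example: draws still bounded at any moment, but the distribution
# drifts upward over time. Not identically distributed, so the running
# mean keeps climbing instead of settling on a constant.
import random

random.seed(1)
running_sum = 0.0
for i in range(1, 1_000_001):
    drift = 50.0 * i / 1_000_000            # distribution shifts as time passes
    running_sum += random.uniform(0.0, 50.0) + drift
    if i in (1_000, 10_000, 100_000, 1_000_000):
        print(f"n={i:>9}: running mean = {running_sum / i:.2f}")
# The printed means rise steadily; there is no constant to converge to.
```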

Proving a number series is random, by the way, is impossible. Sometimes it is possible to prove that a number series is NOT random, though; Google "tests of randomness". It is therefore bogus to invoke the "law of large numbers" when writing insurance.

More accurate, and more like what actuaries actually use, are trend-following, curve-fitted models. You take the birth rates, death rates, astrology charts, interest rates, whatever-you-like time series as data and force a curve fit using matrix algebra (a bare-bones sketch of that step follows below), and you hope you have found the independent variables, which you won't have. Independent variables are hard to find, and whether variables happen to correlate over some time interval is not much of a clue. Sometimes, when you do find good independent variables, "The Brass" don't like the models built on them (much as with the recent credit crash). My own experience is that people fear, and therefore hate, the truth generally. After all, reality is just so darn INTRUSIVE!!
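For the curve-fitting step, here is a bare-bones sketch of the matrix algebra: ordinary least squares via the normal equations. The predictors and data are made up and numpy is assumed; this illustrates the technique, not any actuary's actual model.

```python
# Ordinary least squares by matrix algebra: solve (X'X) b = X'y.
# The solver happily returns coefficients whether or not the chosen
# columns are genuinely independent variables, which is the point above.
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.arange(n, dtype=float)

# Made-up "whatever-you-like" time series used as candidate predictors.
X = np.column_stack([np.ones(n), t, np.sin(t / 12.0)])
true_b = np.array([2.0, 0.05, 3.0])
y = X @ true_b + rng.normal(0.0, 1.0, n)    # observed series plus noise

beta = np.linalg.solve(X.T @ X, X.T @ y)    # the forced curve fit
print("fitted coefficients:", beta)
# A tight in-sample fit like this one says nothing about whether the
# predictors are independent variables out of sample.
```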
