I was wrong. I depended too much on inference. Wikipedia is incorrect.
Fibonacci Coding does not compress information. I should have done the empirical testing BEFORE I broadcast, but I did not. Now a sadder but wiser blogger, I will update the heading on my previous entry after I hit <send> here.
What you may learn, if you test it out, is that although the vocabulary is very simple, the sheer length of the output defeats the purpose. I didn’t publish without ANY preliminary experimentation (otherwise I would never have developed such high hopes), but more thorough efforts did not bear out my early suppositions. No scheme to convert ASCII to numbers can overcome the problem that Fibonacci Coding is longer than binary.
Hexadecimal notation beats it, character for character. Each hexadecimal character conveys four bits of information, while each character of a Fibonacci codeword conveys at most one bit, so sending the coding as text cannot improve communications transmission.
Even if one were to fall back and pack the Fibonacci codewords directly into binary on the bus, the result is still often close to twice as long as the number being abbreviated: the codeword for n runs to roughly 1.44 × log2(n) bits plus a terminating bit, against log2(n) bits for plain binary. Furthermore, there is an unavoidable overhead in processing.
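The length penalty is easy to check for yourself. Here is a minimal sketch (my own quick encoder, not anything from the earlier entry) of Fibonacci coding via the greedy Zeckendorf decomposition, comparing codeword length against the plain binary width of the same number:

```python
def fib_encode(n):
    """Fibonacci codeword for a positive integer n, as a string of '0'/'1'."""
    fibs = [1, 2]                        # Fibonacci numbers 1, 2, 3, 5, 8, ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    fibs.pop()                           # drop the first Fibonacci number > n
    bits = []
    for f in reversed(fibs):             # greedy Zeckendorf decomposition
        if f <= n:
            bits.append('1')
            n -= f
        else:
            bits.append('0')
    # Least-significant position first; the appended '1' makes the
    # terminating '11' pair that ends every codeword.
    return ''.join(reversed(bits)) + '1'

for n in (1, 11, 1000):
    code = fib_encode(n)
    print(n, code, len(code), n.bit_length())
```

For 1000, the codeword is 16 characters against a 10-bit binary representation; the gap only grows with the size of the number.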
Representing the entire message as one numeric value is no improvement. The nature of the scheme (the same mathematics that guarantees no adjacent ones, "11", occur within a codeword before its terminator) means that there are no vast expanses of 0’s with which to economize on notation.
It is true that Fibonacci code is VERY much more compressible than numeric data, and I still have to satisfy myself that it cannot compress encrypted data. My hypothesis is that anything you do to Fibonacci Coded data that results in compression isn’t Fibonacci Coding. The reality is this:
Expanding is not compressing!
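That distinction can be probed empirically. A minimal sketch, assuming Python’s standard zlib as a stand-in for a general-purpose compressor and a made-up stream of the integers 1 through 1000: the Fibonacci-coded text compresses readily, but it is being measured against a stream the coding had already expanded.

```python
import zlib

def fib_encode(n):
    """Fibonacci codeword for a positive integer n, as a string of '0'/'1'."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    fibs.pop()
    bits = []
    for f in reversed(fibs):
        if f <= n:
            bits.append('1')
            n -= f
        else:
            bits.append('0')
    return ''.join(reversed(bits)) + '1'

values = range(1, 1001)
coded = ''.join(fib_encode(v) for v in values).encode('ascii')
raw = b''.join(v.to_bytes(2, 'big') for v in values)   # plain 16-bit binary

squeezed = zlib.compress(coded, 9)
print(len(raw), len(coded), len(squeezed))
```

Whatever ratio zlib reports on the coded text, it is recovering expansion the coding introduced, not compressing the underlying numbers.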
I was gullible, and I allowed Wikipedia to lead me to premature conclusions.