Multi-scale information content measurement method based on Shannon information
Zsolt Pocze
Volgyerdo Nonprofit Kft., Nagybakonak, Hungary, 2023
Abstract
In this paper, we present a new multi-scale information content calculation method based on Shannon information (and Shannon entropy). The original method, described by Claude E. Shannon and based on the logarithm of the probability of elements, gives an upper limit on the information content of discrete patterns, but in many cases (for example, for repeating patterns) it is inaccurate and does not approximate the true information content of the pattern well. The new mathematical method presented here provides a more accurate estimate of the (internal) information content of any discrete pattern based on Shannon's original function. The method is tested on different data sets and the results are compared with those of other methods, such as compression algorithms.

Introduction

Traditionally, Shannon's information theory [13] has been used to measure the information content of samples. Shannon information, as defined by Claude E. Shannon, is the degree of uncertainty or surprise associated with a given outcome in a set of possible outcomes. Shannon entropy, which is the expected value of Shannon information, is used to quantify the average information content of a discrete sample or message. It serves as a basic concept in information theory and is widely used in communication systems and data compression.
In some situations, such as repeated patterns, Shannon's original information measurement method does not give sufficiently accurate results, because it does not take the structure of the patterns into account; it only considers certain statistical characteristics of them. To solve this problem, this paper presents a new multi-scale information content calculation method based on Shannon's original principles. By refining the computational approach, our method offers a more accurate estimate of the internal information content of discrete samples, regardless of their nature.
There are several other methods for measuring the information content of patterns, such as Kolmogorov complexity [8], randomness [9], and compression complexity. The common property of these methods is that they are all suitable for determining and understanding the information content of patterns with some accuracy, and therefore they provide a suitable basis of comparison for validating newer methods.
To verify the effectiveness of our new method, it is applied to various data sets and compared with compression algorithms. The results show that our proposed method based on Shannon information closely approximates the results measured by other methods while taking a completely different approach.

Patterns

In this study, we deal with the calculation of the internal quantitative information content of discrete patterns. From the point of view of calculating the information content, the nature of the object of the measurement is irrelevant. The information content of events, signals, system states, or data sequences can be calculated, since their models (with finite precision) can all be represented as discrete patterns. By moving along a spatial pattern we obtain a temporal pattern, and vice versa; therefore, we do not distinguish between spatial and temporal patterns. The basic notation is as follows.
Denote by $M(R)$ the set of finite sequences that can be generated from the set $R$:
$$M(R) = \{\, X : \mathbb{N}^+ \to R \,\} \qquad (1)$$
Let us call the finite sequence $X \in M(R)$ a pattern:
$$X = [x_1, \dots, x_N] \qquad (2)$$
Denote the length of the series $X$:
$$n(X) = N \qquad (3)$$
Denote the set of possible values of the series $X$:
$$R = R_X = \{\, r_1, r_2, \dots, r_K \,\} \qquad (4)$$
Let $f(x)$ denote the number of occurrences of $x \in R_X$ in the series $X$:
$$f(x) = \sum_{i=1}^{N} [\, x_i = x \,] \qquad (5)$$
Let the relative frequency of any element $x \in R_X$ of the pattern $X$ be:
$$p(x) = \frac{f(x)}{N} \qquad (6)$$
Denote the concatenation of the patterns $X_1, X_2, \dots, X_K$ as:
$$X_1 X_2 \dots X_K = \big\Vert_{i=1}^{K} X_i \qquad (7)$$

Information content

The information content can be interpreted intuitively when only the interpretable information content is examined [1]. In this study we examine the amount of the entire internal information content without interpreting it or considering the context.
The information content of a pattern can be characterized by the improbability of individual elements of the pattern (Shannon information [13]), the length of the most concise description of the pattern (Kolmogorov complexity [8]), or the degree of randomness of the pattern [9].
A fundamental difference between Shannon's and Kolmogorov's viewpoints is that Shannon considered only the probabilistic characteristics of the random source of information that created the pattern, ignoring the pattern itself. In contrast, Kolmogorov focused only on the pattern itself [5]. In their definitions, Kolmogorov and Chaitin (somewhat inaccurately) called the pattern with maximum information content random [10].
Information, complexity and randomness have such similar properties that we can reasonably assume that they are essentially approaching the same thing with different methods. It is sufficient to consider that the Shannon information, Kolmogorov complexity and randomness of a pattern consisting of identical elements are all minimal, while in the case of a true random pattern all three values are maximal, and they all assign the highest information value to data sets with maximum entropy [1].
The concepts of entropy and information are often confused [3], so it is important to mention that entropy can be understood as the average information content per element.
Approached intuitively, the amount of information is a function for which the following conditions are met:

 

  1. The information content of a pattern with zero length or consisting of identical elements is zero.
  2. The information content of the pattern consisting of repeating sections is (almost) identical to the information content of the repeating section.
  3. A pattern and its reflection have the same information content.
  4. The sum of the information content of patterns with disjoint value sets is smaller than the information content of the concatenated pattern.
  5. The information content of true random patterns is almost maximal.
Let the information content be the function I that assigns a non-negative real number to any arbitrary pattern X ∈ M ( R ) :
$$I : M(R) \to \mathbb{R}^+ \qquad (8)$$
In addition, the following conditions are met:

 

  1. $I(X) = 0 \;\leftrightarrow\; |R_X| < 2$
  2. $I\!\left(\big\Vert_{i=1}^{K} X\right) = I(X)$
  3. $I\!\left(\big\Vert_{i=1}^{K} X_i\right) = I\!\left(\big\Vert_{i=K}^{1} X_i\right)$
  4. $\bigcap_{i=1}^{K} R_{X_i} = \emptyset \;\Rightarrow\; I\!\left(\big\Vert_{i=1}^{K} X_i\right) > \sum_{i=1}^{K} I(X_i)$
  5. $I(X) \le I(X_{TR})$ for all $X \in M(R)$ with $|X| = |X_{TR}|$, where $X_{TR} \in M(R)$ is a true random pattern.
Since any pattern can be described in non-decomposable binary form, the unit of information content should be the bit.
It can be seen that for any pattern $X \in M(R)$, if $N = n(X)$ and $K = |R|$, then the maximum information content of $X$ is:
$$I_{MAX}(X) = N \cdot \log_2(K) \qquad (9)$$
That is, $I(X) \le I_{MAX}(X)$ for any pattern $X \in M(R)$. In the case of a binary pattern, $I_{MAX}(X) = N$, the length of the pattern, which means that at most $N$ bits of information (decisions) are required to describe the pattern.
If the maximum information content is known, the relative information content can be calculated:
$$I^{(rel)}(X) = \frac{I(X)}{I_{MAX}(X)} \qquad (10)$$

Shannon information

In theory, Kolmogorov complexity would provide a better approximation of the information content of patterns, but it has been proven that it cannot be computed [5]; in contrast, Shannon information [13] can be computed efficiently, but approximates the actual information content less well. Shannon information calculates the information content of the pattern based on the expected probability of occurrence (relative frequency) of the elements of the pattern.
The Shannon information of an arbitrary pattern $X \in M(R)$:
$$I_S(X) = \sum_{i=1}^{N} \log_2\!\left(\frac{1}{p(x_i)}\right) \qquad (11)$$
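For illustration, the following minimal Python sketch computes $I_S(X)$ of Eq. (11) and $I_{MAX}(X)$ of Eq. (9) for an arbitrary discrete pattern (a string or a list of hashable symbols). The function names are ours and not part of the paper.

```python
from collections import Counter
from math import log2

def shannon_information(pattern):
    """I_S(X) = sum_i log2(1 / p(x_i)) per Eq. (11), in bits."""
    n = len(pattern)
    counts = Counter(pattern)
    # Each of the f(x) occurrences of symbol x contributes log2(1/p(x)) = log2(n/f(x)) bits.
    return sum(f * log2(n / f) for f in counts.values())

def max_information(pattern):
    """I_MAX(X) = N * log2(K) per Eq. (9), with K the size of the value set."""
    k = len(set(pattern))
    return len(pattern) * log2(k) if k > 1 else 0.0
```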
Since the relative frequency (expected occurrence) of the elements is only one statistical characteristic of the pattern and does not take the order of the elements into account, Shannon information often gives a very inaccurate estimate of the information content.
The value of the Shannon information is the same for all patterns of the same length whose elements have the same relative frequencies. If $X \in M(R)$, $Y \in M(Q)$ and $|R| = |Q| = K$, then it holds that:
$$I_S(X) = I_S(Y), \quad \text{if } \{\, p(r_1), p(r_2), \dots, p(r_K) \,\} = \{\, p(q_1), p(q_2), \dots, p(q_K) \,\} \qquad (12)$$
Shannon information ignores the structure of the patterns at different scales, the laws encoded in them, and therefore overestimates the information content of patterns consisting of repeating sections.
The problem can be illustrated with a simple example. Let us calculate the Shannon information of the following three patterns:

 

  1. X_A: 001101101010111001110010001001000100001000010000
  2. X_B: 101010101010101010101010101010101010101010101010
  3. X_C: 111111110000000011111111000000001111111100000000
In all three cases, the value set is $R = \{0, 1\}$ and the relative frequencies of the two symbols are (approximately) equal, $p(0) \approx p(1) \approx 0.5$, so the Shannon information $I_S(X) = \sum_{i=1}^{N} \log_2\!\left(\frac{1}{p(x_i)}\right) \approx 48$ bits (see Table 1), although it is obvious that the information content of the three data series differs significantly. Due to its randomness, the information content of $X_A$ is indeed close to this value, while the information content of the other two series is much smaller, as they contain repeated sections. In $X_B$, for example, the 2-bit section $[10]$ is repeated, which means that its information content is closer to 2 bits.
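The shannon_information sketch above makes the problem concrete: per Table 1 the three patterns measure roughly 46, 48 and 48 bits, even though their structures differ greatly.

```python
# The three example patterns; shannon_information() is the sketch given earlier.
XA = "001101101010111001110010001001000100001000010000"
XB = "101010101010101010101010101010101010101010101010"
XC = "111111110000000011111111000000001111111100000000"

for name, x in [("XA", XA), ("XB", XB), ("XC", XC)]:
    # Prints roughly 46, 48 and 48 bits (cf. Table 1), although the patterns
    # clearly differ in structure and therefore in true information content.
    print(name, round(shannon_information(x)))
```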
The problem is that in the example above we examine the data sets at the elementary level, and the Shannon entropy function does not take into account the larger-scale structure of the data set, such as the presence of repeating segments longer than one symbol. It is therefore natural to develop methods that are based on Shannon entropy but analyze the data series over the entire range of resolutions, i.e. over the entire frequency range, and thus provide a more accurate approximation of the information content. Numerous such solutions have already been published, see for example [2] and [6]. This article presents an additional method.

SSM information

5.1 Shannon information spectrum

Let the pattern $X$ be partitioned into sections of length $r$, where $m = \lfloor N/r \rfloor$:
$$X^{(r)} = \left[\, x_1 \dots x_r,\; x_{r+1} \dots x_{2r},\; \dots,\; x_{(m-1) \cdot r + 1} \dots x_{m \cdot r} \,\right] \qquad (13)$$
Let the following series be denoted as the Shannon information spectrum (SP) of the pattern $X$:
$$I_{SP}^{(r)}(X) = I_S\!\left(X^{(r)}\right), \quad r = 1, \dots, \lfloor N/2 \rfloor \qquad (14)$$
From the sequences $X^{(r)}$ we omit truncated partitions, i.e. those shorter than $r$. For $r > \lfloor N/2 \rfloor$, $I_{SP}^{(r)}(X) = 0$, so these values are also omitted from the spectrum.
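A minimal sketch of the partitioning of Eq. (13) and of the information spectrum of Eq. (14), reusing the shannon_information helper sketched earlier; the truncated tail section is dropped as described above.

```python
def partition(pattern, r):
    """X^(r): the sequence of the m = floor(N/r) whole sections of length r (Eq. 13)."""
    m = len(pattern) // r
    return [tuple(pattern[i * r:(i + 1) * r]) for i in range(m)]

def information_spectrum(pattern):
    """I_SP^(r)(X) = I_S(X^(r)) for r = 1 .. floor(N/2) (Eq. 14)."""
    n = len(pattern)
    # shannon_information() is the sketch from the previous section.
    return [shannon_information(partition(pattern, r)) for r in range(1, n // 2 + 1)]
```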
Figure 1. Diagram A shows the Shannon information spectrum of the random pattern X_A, and diagram B that of the repeating pattern X_C (Appendix I). It can be seen that in case B a lower value appears at certain frequencies.

5.2 Maximal Shannon information spectrum

The Shannon information spectrum is maximal in the case of random data sets. Let the following formula be denoted as the maximal Shannon information spectrum (SMS):
$$I_{SMS}^{(r)}(X) = m \cdot \log_2\!\big(\min(K^r, m)\big), \quad r = 1, \dots, \lfloor N/2 \rfloor \qquad (15)$$
$I_{SMS}^{(r)}(X)$ is a supremum of all information spectra having the same value set and pattern length. If $K^r < m$, then in the case of a random pattern the value set of the partitioning most likely contains all possible partitions, so the information content is approximately $m \cdot \log_2(K^r)$. If $K^r > m$, then the partitioning cannot contain all possible partitions; each partition will most likely be unique, so the information content will be $m \cdot \log_2(m)$. If $r$ is small enough, then the series $X^{(r)}$ most likely contains all possible partitions, therefore for random data sets the measured amount of information approximately equals the maximum possible information content of the pattern, i.e. if $r$ is small, then $I_{SP}^{(r)}(X) \approx I_{MAX}(X) = N \cdot \log_2(K)$.
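A corresponding sketch of Eq. (15); as above, the helper name is ours.

```python
from math import log2

def max_information_spectrum(pattern):
    """I_SMS^(r)(X) = m * log2(min(K^r, m)) for r = 1 .. floor(N/2) (Eq. 15)."""
    n = len(pattern)
    k = len(set(pattern))
    spectrum = []
    for r in range(1, n // 2 + 1):
        m = n // r
        # A partitioning of m sections over a K-symbol alphabet can contain
        # at most min(K^r, m) distinct sections.
        spectrum.append(m * log2(min(k ** r, m)))
    return spectrum
```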
Figure 2. Comparison of the maximal Shannon information spectrum (I_SMS) and the Shannon information spectrum (I_SP) of the repeating pattern X_C.

5.3 Shannon normalized information spectrum

If we are interested in how large the information content appears relative to the maximum value at each resolution, we can normalize the spectrum with the maximum spectrum to the range $[0,\, N \cdot \log_2(K)]$. Let the following sequence be denoted as the Shannon normalized information spectrum (SNS):
$$I_{SNS}^{(r)}(X) = \begin{cases} \dfrac{I_{SP}^{(r)}(X)}{I_{SMS}^{(r)}(X)} \cdot I_{MAX}(X), & \text{if } |R_{X^{(r)}}| > 1 \\[1ex] r \cdot \dfrac{I_{SP}^{(1)}(X)}{N}, & \text{if } |R_{X^{(r)}}| = 1 \end{cases} \qquad \text{where } r = 1, \dots, \lfloor N/2 \rfloor \qquad (16)$$
If the value set of the partitioning has only one element, i.e. $|R_{X^{(r)}}| = 1$, the normalized value would be 0. In this case the information content should instead be the information content of the repeating partition, so the average Shannon entropy per element at the elementary resolution is multiplied by the length of the partition: $r \cdot \frac{I_{SP}^{(1)}(X)}{N}$.
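Combining the sketches above, Eq. (16) can be written as follows; the fallback branch handles resolutions where every section is identical.

```python
from math import log2

def normalized_spectrum(pattern):
    """I_SNS^(r)(X) per Eq. (16), built on the helpers sketched above."""
    n = len(pattern)
    k = len(set(pattern))
    i_max = n * log2(k) if k > 1 else 0.0
    i_sp = information_spectrum(pattern)
    i_sms = max_information_spectrum(pattern)
    result = []
    for r in range(1, n // 2 + 1):
        sections = partition(pattern, r)
        if len(set(sections)) > 1:
            result.append(i_sp[r - 1] / i_sms[r - 1] * i_max)
        else:
            # Only one distinct section: fall back to the per-element entropy
            # of the elementary resolution times the section length.
            result.append(r * i_sp[0] / n)
    return result
```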
Figure 3. Comparison of the Shannon normalized information spectra (SNS) of patterns from very different sources. The vertical axis represents the amount of information in bits measured at the given resolution. The spectra of the different patterns differ considerably, but in most cases there is a resolution where the information content shows a definite minimum. The minima are marked with an arrow. A: random binary pattern, B: binary pattern with repeating sections, C: DNA section, D: English text, E: ECG signal, F: audio recording containing speech, G: evolution of the number of sunspots between 1700-2021, H: seismogram, I: Lena's photo.
The figures show that different types of patterns have very different and characteristic spectra. This suggests that the type or source of the pattern may be inferred from the nature of the spectrum, but we do not deal with this in this study.

5.4 SSM information

We know that the Shannon information gives an upper estimate in all cases, so we get the most accurate approximation of the information content from the normalized spectrum if we take the minimum. Let the information content calculated from the normalized spectrum denoted as Shannon spectrum minimum information (SSM information):
$$I_{SSM}(X) = \min_{r = 1, \dots, \lfloor N/2 \rfloor} \left( I_{SNS}^{(r)}(X) \right) \qquad (17)$$
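A sketch of Eq. (17) on top of the previous helpers; with this implementation the repeating example patterns above score values in line with Table 1.

```python
def ssm_information(pattern):
    """I_SSM(X): the minimum of the normalized Shannon information spectrum (Eq. 17)."""
    spectrum = normalized_spectrum(pattern)
    return min(spectrum) if spectrum else 0.0

# Example: the repeating pattern XB scores about 2 bits and XC about 13 bits,
# while the random XA stays much higher (Table 1 reports 40 bits for it).
```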
Shannon information, SSM information, and compression complexity of different patterns (Appendix I), in bits:

| Pattern | Source | I_MAX(X) | I_S(X) | I_SSM(X) | I_ZIP(X) | I_7Z(X) | I_ZPAQ(X) |
|---|---|---|---|---|---|---|---|
| X_A | Random binary pattern | 48 | 46 | 40 | - | - | - |
| X_B | Repeating binary pattern | 48 | 48 | 2 | - | - | - |
| X_C | Repeating binary pattern | 48 | 48 | 13 | - | - | - |
| X_D | Repeating text | 362 | 343 | 58 | - | - | - |
| X_E | Duplicate text with one character error | 374 | 347 | 116 | - | - | - |
| X_F | Random DNA pattern | 471 | 422 | 409 | - | - | - |
| X_G | DNA segment of COVID virus | 471 | 405 | 388 | - | - | - |
| X_H | Random string (0-9, a-z, A-Z) | 1209 | 1174 | 1174 | - | - | - |
| X_I | English text (James Herriot's Cat Stories) | 1104 | 971 | 971 | - | - | - |
| X_J | Solar activity between 1700-2021 (A-Z) | 1495 | 1349 | 1295 | - | - | - |
| X_K | Isaac Asimov: True love | 50901 | 37266 | 32649 | 30904 | 29968 | 25248 |
| X_L | Binary ECG signal | 80000 | 79491 | 47646 | 52320 | 41032 | 36968 |
| X_M | Binary seismic data | 313664 | 312320 | 171546 | 83920 | 66064 | 45824 |
| X_N | Speech recording | 325472 | 325342 | 277489 | 286760 | 257856 | 251408 |
| X_O | Lena | 524288 | 524216 | 422085 | 443096 | 371360 | 337408 |

Table 1. Comparison of SSM information and compression complexity of different patterns.
Relative Shannon information, SSM information, and compression complexity of different patterns (Appendix I), as a percentage of the maximum information:

| Pattern | Source | I_S(rel)(X) % | I_SSM(rel)(X) % | I_ZIP(rel)(X) % | I_7Z(rel)(X) % | I_ZPAQ(rel)(X) % |
|---|---|---|---|---|---|---|
| X_K | Isaac Asimov: True love | 73 | 64 | 61 | 59 | 50 |
| X_L | Binary ECG signal | 99 | 60 | 65 | 51 | 46 |
| X_M | Binary seismic data | 100 | 55 | 27 | 21 | 15 |
| X_N | Speech recording | 100 | 85 | 88 | 79 | 77 |
| X_O | Lena | 100 | 81 | 85 | 71 | 64 |

Table 2. Comparison of relative SSM information and relative compression complexity of different patterns.
It can be seen from the tables that SSM information gives results similar to those of the compression algorithms. In general, the more computationally demanding a compression or information measurement procedure is, the closer its result is to the Kolmogorov complexity. In the examined examples, the results of the SSM information usually lie between the results of ZIP and 7Z, suggesting that the computational complexity of SSM information is similar to that of ZIP and 7Z.
Figure 4. Comparison of the results of different information measurement methods.
Figure 5. Comparison of the average results of different information measurement methods.

5.5 Comparison with computational complexity

If we do not know the value set of the signal sequence, the first step is to determine the number of distinct signals occurring in the sequence, which has an asymptotic complexity of $O(N \cdot \log N)$.
Determining the Shannon information consists of two steps. In the first step we determine the frequency of the signals, which has a complexity of $O(N)$; in the second step we sum the entropy contribution of each signal, so the total complexity of the Shannon information is $O(N \cdot \log N) + O(N) = O(N \cdot \log N)$.
For the ZIP, 7Z and ZPAQ algorithms used to calculate the compression complexity, the complexity is usually between $O(N)$ and $O(N \cdot \log N)$, but for ZPAQ it may be greater [7] [12] [11].
In the case of the SSM information, the first step is likewise to determine the frequency of the signals, which has a complexity of $O(N)$. In the second step, the Shannon information spectrum is calculated with $O(N) + O(N/2) + O(N/3) + \dots + O(2) = O(N \cdot \log N)$ complexity, and finally the minimum of the spectrum is determined with $O(N)$ complexity. The complexity of calculating the SSM information in the worst case is therefore $O(N \cdot \log N) + O(N) + O(N \cdot \log N) + O(N) = O(N \cdot \log N)$, which is identical to that of the compression algorithms.

5.6 Known issues

All methods of calculating the amount of information have inaccuracies. One of the problems with SSM information is that if the repetition in a repeating pattern is not perfect, the value of the SSM information is larger than expected, as shown in the example below.
| X | I_SSM(X) [bit] |
|---|---|
| 123456789 123456789 123456789 | 29 |
| 223456789 123456789 123456789 | 50 |

Table 3. One element change can cause a notable difference in the SSM information.
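As a quick check, the ssm_information sketch above can be applied to the two strings of Table 3. The ordering of the two values matches the table, although the absolute values of such a simplified sketch may differ somewhat from the ones reported in the paper.

```python
# Comparing the two strings from Table 3 with the ssm_information() sketch:
# the perfectly repeating string scores markedly lower than the one with a
# single changed character. The paper reports 29 and 50 bits; the absolute
# values of this simplified sketch may differ somewhat.
perfect = "123456789 123456789 123456789"
damaged = "223456789 123456789 123456789"
print(round(ssm_information(perfect)), round(ssm_information(damaged)))
```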

Conclusion

It can be shown that SSM information determines the information content of patterns with an accuracy comparable to that of compression algorithms, while remaining conceptually simple. In addition, the information spectrum presented here provides a useful visual tool for studying the information structure of patterns in the frequency domain.

References

John Scoville, "Fast Autocorrelated Context Models for Data Compression" (2013).
Laszlo Lovasz, Complexity of Algorithms (Boston University, 2020).
Arieh Ben-Naim, "Entropy and Information Theory: Uses and Misuses", Entropy (2019).
Pieter Adriaans, "Facticity as the amount of self-descriptive information in a data set" (2012).
Juha Karkkainen, "Fast BWT in small space by blockwise suffix sorting", Theoretical Computer Science (2007).
A. N. Kolmogorov, "On tables of random numbers", Mathematical Reviews (1963).
Laszlo Lovasz, "Information and Complexity (How To Measure Them?)", The Emergence of Complexity in Mathematics, Physics, Chemistry and Biology, Pontifical Academy of Sciences (1996).
Anne Humeau-Heurtier, "The Multiscale Entropy Algorithm and Its Variants: A Review", Entropy (2015).
Benjamin Allen, Blake Stacey and Yaneer Bar-Yam, "Multiscale Information Theory and the Marginal Utility of Information", Entropy (2017).
A. Goldberger, L. Amaral, L. Glass, J. Hausdorff, P. C. Ivanov, R. Mark and H. E. Stanley, "PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals", Circulation (2000).
Markus Mauer, Timo Beller and Enno Ohlebusch, "A Lempel-Ziv-style Compression Method for Repetitive Texts" (2017).
Peter Grunwald and Paul Vitanyi, "Shannon Information and Kolmogorov Complexity", CoRR (2004).
Claude E. Shannon, "A Mathematical Theory of Communication", Bell System Technical Journal (1948).
Ervin Laszlo, Introduction to Systems Philosophy (Routledge, 1972).
Olimpia Lombardi, Federico Holik and Leonardo Vanni, "What is Shannon information?", Synthese (2015).

Appendix
I. Example patterns
Each entry gives the notation, the length, an explanation, and the pattern or a detail of the pattern.

X_A (48 bit) - Random binary pattern.
001101101010111001110010001001000100001000010000

X_B (48 bit) - Repeating binary pattern.
101010101010101010101010101010101010101010101010

X_C (48 bit) - Repeating binary pattern.
111111110000000011111111000000001111111100000000

X_D (101 characters) - Repeating text.
The sky is blue. The sky is blue. The sky is blue. The sky is blue. The sky is blue. The sky is blue.

X_E (101 characters) - Duplicate text with one character error.
The sky is blue. The sky is blue. The sky is blue. The sky is blue. The sky is glue. The sky is blue.

X_F (200 characters) - Random DNA pattern.
cagtttctagctatattagcgggcacgactccactgcgcctatgcggaag
cttgatcaaattttgaccagatcttaggtaacctgaacaagtcagttcgt
aggcgtcgattggccgacgggtgcgaagaaaaaagtgatcgttgtccaac
atctctagtacccaccgttgtgatgtacgttatacggacacgagcatatt

X_G (200 characters) - DNA segment of COVID virus.
cggcagtgaggacaatcagacaactactattcaaacaattgttgaggttc
aacctcaattagagatggaacttacaccagttgttcagactattgaagtg
aatagttttagtggttatttaaaacttactgacaatgtatacattaaaaa
tgcagacattgtggaagaagctaaaaaggtaaaaccaacagtggttgtta

X_H (200 characters) - Random string (0-9, a-z, A-Z).
EK8Pi5sv2npTfzoaMNp87QtT5kbIUQkTJzHwICCstSmg4aksHT
MwztgHFg3j8AoIobN3FycCLidGeyROiNyG5itB9kxyez1LZjFF
HIBjipE7hidZyiJmilXM0mwnxzlzWSfQ0xP1OuFpWosMwS1cjY
t4nyv4ONx1FceWkAf8SdvDGZVzeVzq2EmOqRF6Im2iudcYRswj

X_I (221 characters) - English text (James Herriot's Cat Stories).
I think it was the beginning of Mrs. Bond's unquestioning faith in me when she saw me quickly enveloping the cat till all you could see of him was a small black and white head protruding from an immovable cocoon of cloth.

X_J (321 characters) - Solar activity between 1700-2021 (A-Z).
ABCDFIEDBBAAAABEHJJGEEDBDGMSPLHFBACFKMRPLGDCA[...]

X_K (8391 characters) - Isaac Asimov: True love.
My name is Joe. That is what my colleague, Milton Davidson, calls me. He is a programmer and I am a computer program. [...]

X_L (80000 bit) - Binary ECG signal [4].
1011000100110011101110111011001100110011[...]

X_M (313664 bit) - Binary seismic data.
110000101000000011000010100000001100001010000[...]

X_N (325472 bit) - Speech recording.
0101001001001001010001100100011011100100[...]

X_O (524288 bit) - Lena (256x256 pixel, grayscale).
1010001010100001101000001010001010100011[...]
