Entropy

When I first got into image registration, entropy (in the context of information theory) became a small obsession. Since that summer, I have been taking whatever data I have at hand (my email correspondence, key logger data, and so on) and computing its entropy. Without training in information theory, I do not necessarily understand the full implications of the computed values, but I find the exercise of simply determining the entropy quite amusing.

The entropy (or, more specifically, the Shannon entropy) of a given stream of information can be determined with the following equation:
$$ H = - \sum_{i} {p_i \cdot \log_b(p_i)} $$
where $p_i$ is the probability of the $i$-th symbol in the stream and $b$ is the base of the logarithm (base 2 gives the result in bits).
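As a minimal sketch of this formula in Python, assuming the stream is simply a sequence of discrete symbols and the probabilities are estimated from relative counts:

```python
import math
from collections import Counter

def shannon_entropy(symbols, base=2):
    """H = -sum(p_i * log_b(p_i)), with p_i estimated from symbol counts."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((count / total) * math.log(count / total, base)
                for count in counts.values())

# Example: entropy of a short character stream, in bits (base 2)
print(shannon_entropy("abracadabra"))
```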

Since music is a form of language, I computed an entropy value for each variation from the processed note data (frequency information) of that variation; a sketch of the computation and the resulting plots are shown below.
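A rough sketch of that per-variation computation might look like the following; `notes_by_variation` is a hypothetical placeholder for the actual extracted note data, and `shannon_entropy` is the function sketched above:

```python
# Hypothetical per-variation note data, e.g. pitch names extracted from a score.
notes_by_variation = {
    "variation_01": ["G4", "A4", "B4", "G4", "D5", "B4"],   # placeholder notes
    "variation_02": ["B3", "G3", "A3", "B3", "C4", "A3"],
    # ... one entry per variation
}

# Entropy of the note distribution within each variation
entropy_by_variation = {
    name: shannon_entropy(notes)
    for name, notes in notes_by_variation.items()
}

for name, h in sorted(entropy_by_variation.items(), key=lambda kv: kv[1]):
    print(f"{name}: {h:.5f}")
```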

[Figure: entropy per group]

[Figure: entropy per group, normalized]

Which variation has the highest/lowest entropy?

Highest entropy (3.49121): variation 15 (one of the longer ones, along with variation 25)

Lowest entropy (3.03972): variation 5


One interesting feature of the Goldberg Variations is that the variations are bundled in groups of three: the aria is followed by ten groups of three, each group carrying certain recurring elements. In this article, Schiff describes this scheme:

Each group contains a brilliant virtuoso piece, a gentle character piece and a strictly polyphonic canon. ... Thus the three main elements are physical, emotional and intellectual.

Since these elements repeat, I was wondering how the entropy values change as the work progresses:

In this figure, each variation's entropy was grouped by its position within the repeating pattern of three (the first, second, and third variation of each group):

However, a quick ANOVA (p = 0.324) indicates that the differences are not statistically significant. Since the group sizes are small, I also performed the non-parametric Kruskal-Wallis test, which does not require the assumption of normality:
Kruskal-Wallis result: p-value = 0.293
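For reference, both tests are available in scipy.stats; the sketch below uses placeholder values rather than the actual entropies, grouped by position within each group of three:

```python
from scipy import stats

# Placeholder entropy values grouped by position within each group of three
# (first / second / third variation of each group); substitute the real values.
first  = [3.21, 3.35, 3.18, 3.40, 3.29, 3.12, 3.31, 3.25, 3.37, 3.20]
second = [3.10, 3.28, 3.33, 3.22, 3.15, 3.41, 3.19, 3.27, 3.30, 3.24]
third  = [3.26, 3.17, 3.38, 3.11, 3.34, 3.23, 3.29, 3.16, 3.32, 3.21]

# One-way ANOVA across the three positions
f_stat, p_anova = stats.f_oneway(first, second, third)

# Kruskal-Wallis H-test: rank-based, no normality assumption
h_stat, p_kruskal = stats.kruskal(first, second, third)

print(f"ANOVA:          F = {f_stat:.3f}, p = {p_anova:.3f}")
print(f"Kruskal-Wallis: H = {h_stat:.3f}, p = {p_kruskal:.3f}")
```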