May 28, 2016

Beyond Reliability

I've been reviewing NT scholarship on the historical Jesus, social memory theory, and Gospel traditions in preparation for my trip to the Jesus and Memory conference in London next month. Today that included re-watching Chris Keith's inaugural lecture on YouTube for probably the seventh or tenth time. Partly to brag that I'm going, partly to explain my blog-hiatus from the ongoing project, and partly because this is really too awesome to resist, I'm posting here one of my favorite parts from this video - which begins at 53:32 and runs to 56:30.

Note: a slightly modified and heavily footnoted version of Dr. Keith's talk appears in the journal Early Christianity, Vol. 6, No. 4, with the update of this excerpt beginning on p. 536. 

As the previous discussion has already revealed, undoubtedly the greatest source of contention between critics and supporters of social memory theory in Gospels scholarship has been the employment of theory in arguments for the historical reliability of the Gospels. [Richard] Bauckham, [Markus] Bockmuehl, [Craig] Keener, [Robert] McIver, and others have appealed to memory studies in arguments for the general historical reliability of the Jesus tradition, or at least the fact that it stems from eyewitness testimony. Foster, Crook, and others have countered that memory studies either fail to favor the historical reliability of the Gospels or, in fact, favor the historical unreliability of the Gospels. They have thus characterized appropriations of social memory theory in Gospels scholarship in general as “assertions that social memory validates the historicity of the events it purports to communicate."  
The foregoing discussion should suffice for demonstrating that such portraits of social memory theory’s presence in Gospels scholarship are so narrow as to be caricatures. The majority of scholars applying the theory do not use it to those ends. And I suggest here that this to-and-fro over the reliability of memory has obscured social memory theory’s genuine contributions to Gospels scholarship, which reside in its challenges to prior and particularly form-critical tradition models. 
First, and perhaps most importantly, social memory theory as a theory does not establish the Gospels as historically reliable or unreliable. It is not the business of theory to do the work of the theorist. There seems to be a logic to which both sides of this debate adhere. It runs like this: If the Jesus tradition is memory, and if memory is inherently reliable or unreliable, then the Jesus tradition is inherently reliable or unreliable. This logic is flawed, however, because “memory is a process, not a thing, and it works differently at different points in time.” Stated otherwise, memory can be both reliable and unreliable. Social memory theory is a tool for understanding the process by which groups conceptualize their individual and communal pasts from the position of the present. And – importantly – historically accurate and historically inaccurate social memories were subject to the same mnemonic processes. Social memory theory is not, therefore, in itself, a tool that establishes or pronounces memory as historically accurate or inaccurate.  
As we saw earlier, this doesn't mean that social memory theory is irrelevant for questions of historical accuracy. But it does serve to underscore that the analytical categories of “memory” and “social memory” do not function like a wall socket into which one plugs the Jesus tradition, automatically granting it currency as generally reliable or generally unreliable. Theorizing historical accuracy is more difficult than stating generalizations of memory.

Well said, Chris. Yes, Amen, and Howdydooyah.

I am so stoked about this conference, and I'll hope to post reflections here afterward.

Those of you waiting patiently for part 7 of Remembering Life Stories, please check back in July.

Anon, then...

May 8, 2016

Remembering Life Stories (6): Narrative Redundancy

Today’s post can be summarized in six sentences. Brace yourself. They're a mouthful.

Because memory is constructive, a storyline (a mnemonic fabula in chronological sequence) must be reassembled from preserved bits of story content. When that content is informationally “redundant”, the story structure can be reconstructed more efficiently, which explains why some types of narrative sequences typically seem more coherent than others: Chronicles (type 1) are incoherent because their informational content is predominantly random; Biographies (type 2) find modest coherence by representing familiar patterns; Emplotments (type 3) maximize coherence with content that seems entirely predictable, when viewed in retrospect. By examining all three of these types together, we may say that informational redundancy is generally low for Chronicles, intermediate for Biographies, and high for Emplotments. This broad comparison suggests that coherence is not only relative, but hypothetically measurable, if only by theorizing a continuum of redundancy against which all possible storylines might collectively form a standard. If coherence is rememberability, and mnemonic reconstruction depends on informational redundancy, then the ideal coherence of a given storyline can be estimated according to the predictive (statistical) regularities in the data stream which an audience must recall and mnemonically (re)sequence. When the ideal remembering of a linear fabula is successful, that storyline’s level of coherence will correspond to the degree of “Narrative Redundancy” provided by story content. 

Today’s post will begin to unpack that paragraph by comparing Chronicles versus Emplotments, and my next post will more fully explain the statistical nature of this “redundancy continuum” with particular respect to Biographies. So now, without further ado, we begin...


Coherence is a property of memory, not a feature of literature. We say a storyline seems coherent if it holds together in our minds, and since remembering is constructive this "holds together" really means "comes back together quickly and easily”. Ergo, coherence depends on reconstructive efficiency. In this series, I have repeatedly said that self-sequencing content is the primary efficiency for remembering storylines, and I stand by that statement, but self-sequencing content alone cannot explain why Emplotments are more easily remembered than Chronicles. An Aristotelian Plot is self-sequencing because of narrativized causality, but a Chronicle is also self-sequencing if you memorize dates.

Consider the U.S. Presidents. The sub-sequence “McKinley, Roosevelt, Taft” is not implicitly ordered, but “McKinley 1897, Roosevelt 1901, Taft 1909” is effectively self-sequencing. Technically, using a timeline to impose structure requires twice as much information to be recalled (a point we’ll address later), but the resulting constructive advantage is the same. Thus, given that Chronicles and Plots can both allow content to dictate its own structure, there must be some other mnemonic advantage at work which explains the high level of reconstructive efficiency that we find in a Plot.

This larger advantage is Informational Redundancy. The efficiency of reconstructing a storyline depends on the redundancy of its informational content. (So what the heck does that mean?) In statistics, redundancy means that some of your data can be deduced from other parts of the data. In information theory, redundancy most often refers to predictability in the flow of a data transmission. The practical advantage of redundancy is that it creates opportunities for efficiency. A high degree of informational redundancy provides optimal efficiency in constructive remembering. Incidentally, this principle holds for non-linear data streams as well as linear ones.

Let’s illustrate this with an everyday morning experience. If you always have eggs and bacon for breakfast, then your typical breakfast is quite simply “eggs and bacon”. Such consistency enables that very simple description. Alternatively, if you prefer more variety in your breakfast experience then you cannot describe your own “typical breakfast” without summarizing a larger quantity of data, such as “usually eggs and bacon but sometimes fruit with a bagel, and other times whatever I can grab”. Notice how the reduced redundancy in your breakfast data equates to a reduced efficiency in your description of things. Now, for a very different example, consider the algebraic function “twice X plus 3”. This rule generates an infinite table of paired values [(1,5), (2,7), (3,9), etc…] but since each listing follows the formula, it’s possible to summarize that infinite listing just by re-stating the rule, “twice X plus 3”. What these examples illustrate is a key statistical axiom of information theory. Redundancy enables efficiency. A long description can shrink when a small portion of data implies the whole situation.
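For readers who like to see the statistics made concrete, the breakfast example can be sketched numerically. Below is a rough Python illustration using Shannon entropy as a stand-in for informational redundancy; the `redundancy` function, the 1 - H/H_max formula, and the breakfast proportions are my own illustrative assumptions, not part of the original argument.

```python
import math
from collections import Counter

def redundancy(outcomes):
    """Toy redundancy score: 1 - H / H_max, where H is the Shannon
    entropy of the observed outcomes and H_max is the entropy of a
    maximally varied (uniform) distribution over the same outcomes.
    A score of 1.0 means fully predictable; near 0.0 means maximally varied."""
    counts = Counter(outcomes)
    n = len(outcomes)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    h_max = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return 1 - h / h_max

# A perfectly consistent month of breakfasts: one outcome, zero entropy.
same = ["eggs and bacon"] * 30
# A varied month: three outcomes in uneven (assumed) proportions.
varied = ["eggs and bacon"] * 15 + ["fruit and bagel"] * 10 + ["grab bag"] * 5

print(redundancy(same))    # 1.0: summarizable as simply "eggs and bacon"
print(redundancy(varied))  # well below 1.0: the summary must carry more data
```

The exact numbers matter less than the direction: the more regular the outcomes, the shorter the description needed to regenerate them.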

Now bring this back into memory. If a bit of trace content implies temporal context, then remembering a storyline becomes slightly more efficient because that content is self-sequencing. However, if that same bit of trace content implies both temporal context and additional pieces of pre-structured content, then remembering a storyline becomes incrementally more efficient because your self-sequencing content is also highly redundant.

As an example of potentially optimal coherence, Homer’s Iliad is probably as close to ideal as anything. If “Helen of Troy” reminds you that Paris stole her away, which reminds you that Agamemnon declared war, which reminds you that Achilles sailed to Troy, which reminds you of his feud with Agamemnon, which reminds you that Patroclus died, which reminds you that Hector was called out, which reminds you of Priam in mourning, which reminds you of the pretext for the Horse, which reminds you that Troy was conquered and burned… then “Helen of Troy” was a single piece of recalled information that sparked a chain reaction of associated content with structural implications. In other words, “Helen of Troy” can prove both self-sequencing and also highly redundant. In such a manner, the entire plot of the Iliad can be mentally reconstructed from a single trace bit of content.

By any measure, when one bit of trace memory indirectly implies all the requisite content and structure for remembering a whole, that’s a fully optimized reconstructive process. That’s ideal mnemonic coherence. But today’s key is recognizing that this only occurs when the information itself conveys a high degree of redundancy.

Aristotle said we recognize “Unity” in Homer’s Iliad because it forms “a single action”, in that each event triggers the next. From incident to incident the causality plays out like a string of falling dominoes. In hindsight the entire chain of events seems to have been inevitable. If we believe Homer, we are 100% certain that Troy was doomed to burn the very moment Helen left Sparta. Critically speaking, the fictive absurdity of such “post hoc” causality does not inhibit our cognitive processing in the slightest. We encode the information into our minds with a deterministic perspective. The Trojan situation does not appear to convey “probabilities”, but certainties. The entire string of outcomes is presented as a statistical lock. Therefore, the whole data stream seems completely predictable, in hindsight.(*) 

Note: I will sometimes use the paradoxical phrase “predictable, in hindsight”, because we are talking about Memory. There are many cases in which predictability and redundancy are one and the same, but in some of those cases it will obviously make more sense to use one term than the other. At any rate, informational redundancy is always about statistical regularity - a.k.a. probabilities, frequencies, regular occurrences, or simply “a set of outcomes that displays patterns to some degree or another”. Supposing absolute causality is the way our minds make sequenced outcomes seem absolutely “predictable in hindsight”.

When Homer’s discourse is finished, his story’s sequence seems to have been utterly inevitable. Each particular cause brings about exactly one unique effect. As soon as Paris steals Helen, Agamemnon must declare war on Troy. When Achilles agrees to fight, the Greeks are going to win. The story dictates such outcomes absolutely, as if things could not have been otherwise. The fictional content is mnemonically sequenced with an absolute probability. The percentage chance of the Iliad’s ending, given the Iliad’s beginning, is one-hundred percent. If pairings of cause and effect are preserved and recalled, each “next step” forms a 1-to-1 correlation. Each piece of data evokes its partner in causality. To recall one particular cause evokes its absolute effect. To recall a particular effect evokes its absolute cause. One trace memory evokes another, until all the plot points, each in turn, like falling dominoes, cascade back vividly into working memory. The sequence becomes unified because it reconstructs itself so effortlessly.

If we could measure the statistical redundancy of such a narrative sequence, in terms of probability, it would approximate a decimal number approaching “1.0”. That is, 100%.

So much for Emplotment. Let’s scroll down this narrative spectrum to its absolute opposite end.

Think about Chronicles. A list of the 44 U.S. Presidents has very little redundancy. At first glance, it appears to have no internal predictability whatsoever. The list does not reconstruct itself in our minds because none of its data can be deduced from anything else on the list. There is not a single name, nor any date, that tells an ignorant reader what name or date will appear next or any time afterward. Even with a thorough knowledge of American History, there is nothing about George Washington’s administration (1789-1797) that tells you his Vice President would be elected to succeed him. If we are trying to remember in the “forward direction”, reciting each name from 1 to 44, we find no informational clues to trigger the necessary recall. For instance, there is nothing about Thomas Jefferson (1801-1809) that prepares us to guess that James Madison would serve next (1809-1817). For that matter, there is no piece of information that predicts John Adams would serve only a single term (1797-1801). There is no predictability about any of this, even in hindsight.

Sometimes we do get a tiny clue in advance. For instance, since you know Abraham Lincoln was killed, naming Lincoln automatically “predicts” that his vice president Andrew Johnson will be next on the list. That does not, of course, help at all in assigning their dates (1861-1865; 1865-1869). Likewise, if you know Nixon resigned, and you know his V.P. was Gerald Ford, then it is easier to remember that Ford follows Nixon. However, for almost all of the 44 points in this chronicle, we do not find this kind of implicit redundancy. No point of data helps us deduce any piece of subsequent information. Even with the great benefit of hindsight, we cannot observe any clues or implications from which any President pre-determined which names would follow in order.

In fairness, we can find some redundancy when reconstructing in “backwards order”. For instance, remembering George H. W. Bush might remind you he was previously V.P. under Reagan. Likewise, the examples above about Adams or Johnson or Ford become more workable if you reconstruct the sequence in reverse. These deductive implications do reflect a measure of redundancy in the content of this chronicle, even though “predictability” loses all meaning in this case. Even “in hindsight”, George Washington is not “predictable” based on your knowledge about John Adams. This is one reason I prefer to call data “redundant”. However, for the rest of today’s post I’m going to stick with “forward” reconstruction and the analogy of “predictability”, which provides enough justification for theorizing a spectrum of mnemonic coherence, based on narrative redundancy. Once we establish the basic concept, it will be easy enough later to bring in the issue of doing backwards deductions. In particular, we’ll revisit backwards reconstruction in post #8, about Teleological Biographies.

Since the names come with dates, a list of the 44 U.S. Presidents provides self-sequencing content. Despite that, the bare bones “narrative sequence” bears a measure of redundancy that approximates a decimal number approaching “0.0”. That is, Zero percent.
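The two endpoint estimates (“approaching 1.0” and “approaching 0.0”) can be sketched as step-by-step reconstruction probabilities. A toy Python comparison; the three-event plot, the lambda transition models, and the 44-candidate guessing pool are all illustrative assumptions of my own:

```python
def sequence_probability(seq, transition):
    """Chance of reconstructing `seq` one step at a time, given the
    probability that each recalled item evokes the next one."""
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        p *= transition(a, b)
    return p

events = ["the king dies", "the queen grieves", "the queen dies"]

# Emplotment: each cause evokes exactly one effect, so every step is certain.
emplotted = sequence_probability(events, lambda a, b: 1.0)

# Chronicle: no event implies the next; the rememberer guesses among,
# say, 44 equally likely candidates at each step (an arbitrary pool size).
chronicle = sequence_probability(events, lambda a, b: 1 / 44)

print(emplotted)  # 1.0, the top of the proposed spectrum
print(chronicle)  # a fraction of a percent, approaching the bottom
```

The sketch also hints at the middle range to come: transition probabilities between 1/44 and 1.0 would place a storyline somewhere between Chronicle and Emplotment.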

The profound similarity between Chronicles and Emplotments is self-sequencing content, which provides the primary efficiency for constructively remembering any series of purported events. The observable difference between Chronicles and Emplotments is their degree of informational redundancy. When content implies other content (in addition to sequence), it renders much greater coherence. Whether these associative implications arise from the encoding of causality or from recognition of less predictive sequences (such as familiar serial patterns), the stronger mnemonic advantage is provided by narrative redundancy.

This is what it takes for a series of changes to be remembered as one single unit.


Now, to ask the big question: Should this distinction be considered a categorical polarity or might it be reenvisioned as two ends of a graduated continuum? The answer may already seem somewhat self-evident, especially if we consider that many Emplotments are not encoded with quite as much certainty as Homer’s Iliad, and by definition there are many possible chronicles (written or unwritten) which may include some degree of predictability in at least some portion of their event pairings. Thus, rather than two categorical absolutes - either a string of 100% predictable outcomes or a string of 0% predictable outcomes - we already seem to have shades of predictability near the top and bottom. On principle alone we might presume these ranges stretch a bit further than one or two points towards the middle from either extreme. However, to work towards a rigorous consideration of this question will require more than stretching both ends towards the middle. It requires asking ourselves what types of narrative sequences will populate the bulk of the middle itself.

The answer, by now, should be obvious. The statistical patterns which make up most life stories convey strings of outcomes that reflect a variety of relative probabilities for describing sequential chains of events that are common across human experience. But then, to do more than assume such an answer will require demonstrating in more detail HOW these terms of statistical analysis can be applied to our understanding of Remembering Life Stories. 

In my last two posts we discussed how Biographical Expertise can enable our remembering minds to “chunk” a Familiar Serial Pattern as one single unit - i.e., one single trace memory. In that discussion, we described any such chunked serial pattern as a self-contained sequence. Now, consider all that in terms of informational redundancy. If a “unit-ized” temporal pattern is recalled as a whole, its contents can be immediately de-unit-ized. To illustrate with time patterns offered by Friedman: I say “seasons” and you say “Winter, Spring, Summer, Fall”. I say “Calendar” and you name all twelve Roman months in order. I say “John Adams” and you say “lawyer, representative, ambassador, vice president, president”. Thus, in all of these cases, one piece of recognized data triggers the immediate recall of many other pieces, which happen to structure themselves - and without the domino-effect of any previously narrativized causalities. 

What these examples indicate is that serial patterns provide a measure of redundancy, in that recalling one unit of trace content can trigger the recall of a larger volume of content, which is also pre-structured. In other words, the remembering of patterns is more efficient than the remembering of Chronicles, and obviously less efficient than the remembering of Emplotments. Thus, if Remembering Life Stories begins with encoding biographical patterns as unitized chunks, then the relative coherence of Life Stories falls somewhere in between the more heavily analyzed extremes. But now, can all this be considered the same type of “narrative redundancy”, and can the degree of redundancy in this case be comparatively “measured” in terms of statistical probability? 

The answer is yes, and for one simple reason. Biographical redundancy comes from pattern recognition, and these serial patterns can helpfully be redefined in statistical terms. However, because that’s a complicated statement to try and explain, let’s save it for next time. 

Before we wrap up today’s post, let’s further illustrate the coherence of Chronicles vs Emplotments.


Let’s revisit E.M. Forster’s famous distinction between “story” and “plot”. 

His first example merely lists two events. If we add numbers to these, we have a short, simple chronicle: “(1) The king died. (2) Then the Queen died.” Apart from the word “then”, this event sequence provides no clues, no deductive implications between consequent and subsequent. In order to make this a “story”, the reader must recall point (1) and point (2), as well as (3) which was first, and (4) which was second. It is only with successful recall of these four separate pieces of information that a reader is able to constructively remember the entire “story”.

Now, in Forster’s second example, we inject causality. The queen died because the king died. If the reader encodes this information properly, then (2) implies (1), and (1) can also imply (2). Thus, recalling either point automatically evokes the other, so that a second occurrence of successful recall is made unnecessary. To remember either one of these deaths is to remember the other. In addition to this informational redundancy, the connection between (1) and (2) also implies the order of the deaths, and the predictive causality removes the need to recall further that these two points go together to comprise one single “story”. In the second example, we get all five bits of information by recalling a single piece of content. If recalling the Queen’s death reminds you why she died, coherence is automatic.

We can make the same comparison more mathematically distinct with a longer sequence. “The king died, an old man died, a young lady died, the queen died, her lover died, another student died, then the prince died.” That’s seven deaths, which makes fourteen points of information if we add numbers. Memorize the numbers and you get the sequence for free, but that “free” benefit costs a lot of recall. However, if I now tell you we’re speaking about the main characters in Shakespeare’s Hamlet, you can reconstruct the seven point sequence more easily by remembering how the death of Hamlet’s father sets the plot into motion. Rather than fourteen points of separate recall for a seemingly random list of unassociated events, you only need one point of initial recall and all the indirect implications of content and structure which that single plot point evokes by association. The out-of-context listing offers zero redundancy. The major deaths in Hamlet are all contextually related to one another. The Plot offers maximum redundancy.
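The fourteen-versus-one comparison reduces to simple arithmetic. A trivial sketch, in which the cost model (two recall points per unassociated event, one seed for a fully emplotted chain) is my own simplification of the argument above:

```python
def recall_cost(n_events, redundant):
    """Toy recall-cost count: a bare chronicle needs each event plus its
    sequence number recalled separately (2 points per event), while a
    fully emplotted chain needs only one initial trace, with the rest
    evoked by association."""
    return 1 if redundant else 2 * n_events

print(recall_cost(7, redundant=False))  # 14: the out-of-context death list
print(recall_cost(7, redundant=True))   # 1: the same deaths recalled as Hamlet's plot
```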

By featuring narrativized causality, a good literary History can direct a reader’s mind to encode each trace memory with sequential associations. When particular “causes” and “effects” evoke one another in sequence, a single trace of information can trigger a rapid reconstruction of the entire plot line. Recalling any one link indirectly evokes the whole chain. If you’re constructively remembering Homer’s Iliad, your initial spark might be Achilles’ feud with Agamemnon. If you know Polybius’ Histories, you might remember his emplotted series of causes by starting from Hannibal in the Alps, or Flamininus’ “liberation” of Greece. Aristotle’s “single action” governs popular histories (e.g., Constantine made the empire Christian, Medieval Europe rediscovered the classics, Rosa Parks started a movement) as well as it rules epic fictions (e.g., Achilles feuds with Agamemnon, Grendel’s mother takes revenge, Dorothy can’t get her new shoes off). A tightly emplotted history is sequenced information that an audience can grasp, retain, and repeat as one rememberable whole.  

By the way, in discursive practice, the fact that a storyteller narrates events in relation to one another puts this “narrative redundancy” to work immediately and repeatedly, mnemonically reinforcing the story’s connectedness as each new plot point develops. Each new effect evokes the audience’s fresh memory of that effect’s recent causes. And when narrating in medias res, a revelation of some original cause can reinforce recent memories of the previously narrated effects. Each new domino falling immediately strengthens the mnemonic connectedness of all the dominoes in the causal chain of events. The material is encoded and re-encoded, repeatedly. The implications are first useful when forming associations, and they are useful again later when relying upon those same connections in constructive remembering. This is one more layer of the process by which Emplotments maximize informational redundancy.

Naturally, at the other extreme we find chronicles, which Hayden White accepted as referential representations of temporality but diminished them for having “no suggestion of any necessary connection between one event and another” (The Content of the Form, p.6). He was, of course, quite correct on both counts. In the narration or reading of a chronicle, there is no repeat reinforcement of content previously mentioned. The traditional chronicle is completely non-redundant because no point implies any other, and no point can be deduced from any other. A list of raw chronological data typically achieves zero degrees of informational redundancy.

Consider this excerpt from the Anglo-Saxon Chronicle (12th century):

A.D. 16 This year [sic] Tiberius succeeded to the empire.
A.D. 26 This year Pilate began to reign over the Jews.
A.D. 30 This year [sic] was Christ baptized…
A.D. 33 This year was Christ crucified…

Technically, all this material is chronologically self-sequencing. Once you’ve memorized dates, whether accurate or inaccurate, every dated event finds its own place on a timeline. For a diligent student, this excerpt arguably represents four pieces of information, rather than eight, and thus requires only four actions of recall to reassemble the entire sequence. Even so, the reassembled excerpt is not unified as a whole because each data point evokes only itself, and sequences only itself. We have no rhyme or reason for leaping from 16 to 26. We can only try to memorize these particular dates. There are no links in the chain, no dominoes, no direct mnemonic associations between bits of story content. There is no available efficiency here by which the mind should connect multiple parts into one unified whole. Instead, we take on the added difficulty of memorizing where the list ends and begins. Why are these four events the only ones worth remembering in an 18-year span? And how should we know whether or not we’ve forgotten an entry for A.D. 23?

While dates enable the reconstruction of sequence, eliminating the need to memorize sequence, there is no built-in associative network to optimize mnemonic reconstruction of the entire sequence as a whole. Because no single event evokes any other events, each piece of content must be preserved separately and recalled separately. Without informational redundancy, there cannot be mnemonic coherence. In the Anglo-Saxon Chronicle, each point of content evokes one point of structure, so the chronicle is always precisely the sum of its parts. Without diligent study and deliberate cognitive chunking, these random bits of data cannot be one whole.

In fairness, if some chronicle listed a single event for each year, then a capable mind could perhaps memorize by the numbers, and the number line itself would then provide “unity” of a sort, but of course this is not how chronicles typically work in actual practice (cf. Hayden White’s distinction between Chronicles and Annals in TCotF chp.1).

In order to remember that Jesus, Tiberius, and Pilate belong together in one list of dates, you must first recall each figure separately and then also remember that each figure for some reason belongs on that particular list. That’s not a unified process, it’s an aggregation of distinctly individual mnemonic constructions. Rebuilding the Anglo-Saxon Chronicle in your mind would demand a prohibitive volume of effort. Or, to put all this in one sentence: if wholeness is about reconstruction, then some reconstructions are simply more efficient than others. Students of this particular time period can achieve a sense of coherence about listings on a timeline, but that is a peculiar form of redundancy earned by rehearsal, rather than the kind of redundancy we find automatically when Life Stories present recognizably predictable sequences in familiar patterns of biographical sequence. 

A high degree of narrative redundancy strengthens connectedness and makes reconstruction fairly effortless. When all of the data can potentially evoke all the rest of the data, the storyline can be reassembled with maximum efficiency. This is what actually gives us a sense that a storyline is “coherent”. It holds together well because it reassembles easily.

A low degree of narrative redundancy precludes connectedness and prohibits reconstruction without extreme studiousness. When each point of data evokes only itself, the sequence of outcomes has no reassuring implications to forge a sense of coherence. This allows chronicles to be written efficiently, but prevents them from cohering mnemonically.

Most written narratives that are actually published provide a degree of redundancy that is somewhere in between Hayden White’s ideal categories of Emplotments versus Chronicles. Although White was justified in observing that historical narrativizations are driven to impose moral meaning through authorial bias, that is not the only motivating factor. At whatever point histories first started attempting to imitate fiction, they were only returning the favor. There is something much older than these sophisticated narrativizations, something which even predates Aristotle and Epic. The most fundamental reason why Emplotments are driven to maximize informational redundancy in their narrative sequence is because storytellers desire to convey coherence. There is no point in spreading one’s bias about sequential events if the audience cannot hold on to that message after it’s been received.

Above all else, a storyteller desires to spin narratives that an audience will remember.

 ~~~~~ Conclusion ~~~~~

Crafting a memorable narrative is not “hit or miss”. Even when successfully conveyed, and successfully received, literary coherence is a mnemonic construction, and mnemonic coherence is relative. The primary factor for determining the relative rememberability of any discoursed storyline is to “measure” the amount of informational redundancy implicit in story content. In informational terms, a story’s content may be more or less redundant. By “redundant” we do not mean identically repetitive, but implicitly predictive. Later events in a storyline can be deduced from earlier ones, and (as noted above) sometimes earlier events can be deduced from later ones. 

In observing these things, we must begin to explore a largely unanalyzed field of narrative poetics in the range of structural variation which stretches along a spectrum of coherence between Chronicles and Emplotments. Most notably, in the mid-section of this new narrative continuum, we will undoubtedly find a great many Life Stories.

The proposal now is that written narratives should not be rigidly classified into one of three categories. While it remains fair to state in broad terms that the content of a story determines whether its structure will be random, or patterned, or predictable, it would be more accurate to see individual storylines as being uniquely more or less structured, as being relatively coherent, along a spectrum of informational redundancy (or, if you prefer, “retroactive predictability”). 

The concept of Narrative Redundancy provides a hypothetical means for “measuring” the coherence of storylines - albeit measurement must be merely comparative for the moment, in lieu of developing calculable standards. In theory, every possible storyline might be charted at some point along a statistical spectrum, somewhere between random and predictable, between chaotic and ordered, between complicated and simplified. If Chronicles are generally random, Life Stories are generally patterned, and Emplotments are generally predictable (“predictable in hindsight”), the entire continuum of coherence might be reasonably visualized by employing the following graphic.

In today’s post we have primarily considered the top end and bottom end of this proposed spectrum.

In my next post I will attempt to justify the middle range of this continuum by considering the statistical nature of familiar serial patterns. All that biographical expertise may have chunked these patterns as singular units, but the “unpacking” of those unit-ized chunks is still a serial pattern…

And a pattern can be understood as a collection of probabilities.

