December 28, 2015

The Inevitability of Integration

Michael Pahl has written a real gem on his blog, about Jesuses in the New Testament. More than most Pastor/Scholar types, Michael has a real gift for maintaining a high level of intellectual rigor *and* a sharp focus on spiritual depth. So when I say I am thrilled by 99% of what he just said in this post, I mean that sincerely as high praise. But obviously, it's that last 1% that I'm blogging about here.

Tell you what... ***Spoilers!***

Read my critique first, before you read his post at this link. After that, please add your voice. The more, the Jesuses-ier.

This is what I said to Michael tonight, on Facebook:

Ok, Michael, I'm 100% with you except for one phrase in your last paragraph. I agree we oppose "harmonizing". I agree we need multiple perspectives. I suppose I can even agree with not "blurring", but if by that last word you also mean that we must not blend or combine or integrate at all... 

Then to which Jesus do you pray?

Do you take turns? Do you pray to Luke's Jesus on Thursdays? Do you pray to your own imagined Jesus? Or do you pray to "the real Jesus"? And if the latter, then how does He relate to these others? 

Have you not, inevitably, in your mind, built your view of Jesus from all of these views?

I think you have. And I think that's good. I think we must *both* keep in mind the distinct perspectives of scripture *and* blend (if not "blur") aspects of those into an integrated view. To even think about "Jesus", we cannot possibly do anything else!

December 11, 2015

Jesus Research: Hermeneutics or Hypotheses?

At one point in the Syndicate Symposium for Jesus and the Chaos of History (which I am currently enjoying), in responding to a critique from Brent Driggers (scroll through both here), the inimitable James Crossley engages the topic of reconstruction, and yet quickly reverts to the problem of authentication.
For Driggers we are effectively doing two different things (where Crossan is more reconstruction of the historical Jesus, I am more the earliest traditions). Here I would partly disagree with Driggers. On one level, this is obviously a fair assessment. Yet, on another, I do not think it is really that easy to do the kind of precise historical reconstruction of the figure of Jesus because we can only go back to the earliest perceptions. Thus, I would say that all we can realistically do is a reconstruction of the ideas present in the earliest tradition (with the qualification that such perceptions could, theoretically, have been present during the life of Jesus)...
The remainder of that paragraph drives the point home (rather heroically, I must add - although, James, please trust I make that remark with no satire intended!) by briefly surveying the work and impact of 2012's (earth-shattering) Jesus, Criteria, and the Demise of Authenticity.

All well said. All good so far. And now we come to my takeaway.

Whether curiously or not, this passage began as Crossley's comparison of his own reconstructive work against Crossan's reconstructive work, and yet Crossley defends his comparison by recourse not to any methodology of hypothesizing based upon data, but to the perils of assessing historical data.

Regular readers should know I'm fond of discussing the relationship between these two factors -- the one being authenticity, a.k.a. historicity, the other being reconstruction, a.k.a. hypothesis -- but since returning from SBL last month I've been reflecting a lot about their relative importance NOT with regard to historiographical practice, BUT with regard to scholarly discussion within the guild of historical Jesus research. A question had begun weighing on me, first before, and then heavily during, the H.J. session with Paul Foster, Jordan Ryan, and Brant Pitre. Given the fact that many H.J. scholars have long performed reconstruction, why has the scholarly debate fixated so predominantly upon historicity?

If this sounds like an old question, let me clarify. It's obvious why conservatives do this. "The liberal construction is obviously wrong because it doesn't line up with the Gospels. End of argument."

What I'm starting to wonder more recently is different. Why do so many non-evangelical, or "non-conservative" scholars so often do the same thing?

As of this week I am also (for the first time, at long last) currently working through Albert Schweitzer's The Quest of the Historical Jesus, and I keep wondering this question anew. Schweitzer had various problems with the 19th century's liberal lives of Jesus, but what stands out most among his critiques are his stringent objections to insufficient judgment about historicity. What is accepted or not accepted sometimes seems to be - de facto - the only substantial consideration. Thither also came, evidently, Schweitzer's own search for (in Michael J. Thate's recent phrase) "a single hermeneutical key through which to read all the material in order to reconstruct a tidy profile". (H/T Rafael Rodriguez!)

What is this strange affinity for equating hermeneutics and hypothesizing? How we read the text is how we see the past? There is no putting together? No building? Is this strictly positivism? And last month, why did Jordan Ryan's vigorous review of Collingwood's 1950s admonition (against "scissors and paste" history, in favor of a "question and answer" approach) seem so startling, so foreign, so refreshing, so electric?

Now, it's unfair of me to go picking on James Crossley, who is valiant, noble, and handsome (or so I've been assured). He is also, quite sincerely, a gifted, talented, and insightful historian. To this injustice, therefore, I offer as my only defense that James is a very good sport! And, James, my only apology is that I've here only engaged with the meta-Chaos. Your Chaos well deserves further attention in its own right. Anon, to that. Anon. Anon.

But to wrap up today's post, let's return to the Symposium, where it seems to me Crossley's point (here at top) stands most convincingly with specific regard to the reconstruction of *ideas* and *material conditions*. In turn, perhaps Crossan's approach (or at least Driggers' view of it) is most justified with regard to the reconstruction of an individual's response to the ideas and conditions of his time. But perhaps this is largely to repeat what Driggers had originally offered:
So, whatever one makes of Crossan’s historical conclusions, we are really looking at two different questions, with Crossley assessing the ideologies of the earliest Palestinian traditions and Crossan reconstructing the mission and teachings of the historical Jesus
To that statement, I would editorially emend "assessing". Crossley is not assessing ideologies. Properly speaking, he is assessing the texts of the Gospels; the earliest traditions and their corresponding ideologies, Crossley is reconstructing. He's reconstructing very effectively. My concern at the moment is simply that James himself may not be quite sure where his own reading ends and his reconstructing begins. If so, then it may partly be that confusion which motivated his slip, in the blockquote at top, and not simply the expediency of dispatching Crossan via the easier method (which, let's be honest, is incredibly tempting to all of us, really).

I'll close with another quote, serendipitously from the very same Syndicate Symposium. In her response to Crossley's book, the illustrious scholar, historiographically proficient and methodologically erudite, Helen Bond offered up quite a mouthful, which I treat here as regarding all of this in general. I mean, it's really something, this paragraph. Although I'm not Pentecostal (and I don't suppose Helen is, either) I suspect she might well have prophesied, somewhere in here.
It has to be said, of course, that a critique of the “Great Men” view of history is nothing new, and it has been challenged in other disciplines. Crossley is right, however, to highlight its survival, even vitality, within contemporary Jesus scholarship. Although he doesn’t speculate much on why this might be, two reasons spring immediately to mind. The first is that most historical Jesus critics are not really “historians”; an analysis of the figure of Jesus is often a thinly veiled way to comment on theology, contemporary politics, or both. Second, although Jesus critics nowadays claim to treat Jesus in exactly the same way as figures such as Alexander the Great or Socrates, this is rarely the case. Jesus might have been an “ordinary” first-century Jew, but for many critics he wasn’t that ordinary (to adapt a favourite Crossley formulation). Holding on to a rather outdated way of doing history (whether consciously or not) is a useful sleight of hand for a critic who wants to claim rather more for Jesus and his legacy than strict evidence allows.
As for me, I continue to read, to ask questions, and to engage these, my most esteemed, and (thank God!) my most patient scholarly friends.

Dear readers, put together your own conclusions. Or better yet, educate me!

In professional Jesus studies, in actual practice, how often is there a difference between Hermeneutics and Hypotheses?

And more to the point, how often is that difference publicly recognized?

********** UPDATE (24 hours later) **********

In all sincerity, I continue trying to figure this out for myself. Where does reading end and reconstruction begin?

What is recontextualization? If recontextualization is the final stage in a process of reading, then it's hermeneutical. But if recontextualization is the beginning of constructing a new storyworld that best befits the (non-fiction) text, then it's hypothetical. But then again, to whatever extent that reconstruction (storyworld) is simply a "context for reading", then we're back to hermeneutics.

And yet, James Crossley, your work offers quite a bit more than simply a list of selections from texts, passages which you've deemed to be worthy of use. Your craft is well honed, and highly effective, but it's not strictly methodology, I don't think. There's something more than art but less than science in the way you discern patterns and put certain pieces together (but not others). It's not pure subjectivity. It's more than discernment. It's imagination, but it's highly structured. It's architecture, but critiques reduce it to accounting.

There is no uncontextualized reading, just as there are no uninterpreted facts, and there is no reconstruction that's not based on data. Calculus depends on statistics, but the reason we plot points is to examine the curve. So why does critique so often attack points, instead of the curve?

Is the answer staring me in the face? Is it simply that critiquing Scholar A's reconstruction relies on Scholar B's imagination, because B must follow A's argument and correctly put the data together, while Scholar A's data is more objectively available for critique?

Maybe it is just the expediency of attainable refutation, but I still think there's something deeper worth getting at in all of this. It's not just easier to dispute data. It's precisely that we cannot stop our brains from continuously re-blending data and interpretation back into one whole, again.

I see your data. I follow your arguments. And then a moment later I have trouble distinguishing where one began and one ended. Maybe this is why we slip so easily back and forth?

Perhaps that is all we do. But it's not all that we're doing...

Anon.

**********

I was wrong. I'm not done. I have one last thought to play out in this thread. Then I'll quit. If we follow the metaphor of Calculus (plotting points and sketching curves), then the parallel to my question would be, How does Scholar B critique Scholar A's curve without critiquing Scholar A's points? And the only possible answer - keeping strictly within this particular metaphor - is that Scholar B can INTRODUCE ADDITIONAL POINTS. I think this is a self-critique of my own position, up to now.
If S.B. adds points of data, that would challenge S.A.'s sketch of the curve; e.g., "Your sketch shows development flattening out between H and K, but what if I rose and J fell again?" But from where should S.B. get such data? In Calculus, we can generate new data. In Jesus Research, there are no new pieces of data. The introduction of theories, hunches, conceptions, and plausible conjectures - while these can often be valid - is not typically the best way for Scholar B to refute the presentation of Scholar A. That leaves data from the Gospels which has been previously rejected by Scholar A.

And this, finally, may be where I conclusively answer my own question. Why do histories present reconstructions, while critiques focus on data? Because the most objective, valid, and available means of introducing new data is - as I just said - to salvage data previously rejected. Technically, this is a positive addition, but it plays straightforwardly as the critique of a negative. Thus, even when B grants A's positive assumptions, the second most available means of critiquing A's representation is to refute A's negative assumptions. This is particularly acute in examining the Gospels because (1) there is such limited data, and (2) there has been so much disagreement about it all.
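For readers who want the metaphor made literal, here is a minimal sketch with invented numbers (nothing here comes from the actual scholarship). Scholar A fits a least-squares line through the data points she accepts; Scholar B, without disputing any of A's points, salvages one previously rejected point, and the resulting curve changes:

```python
# A toy version of the "points and curves" metaphor. The data values are
# hypothetical, chosen only to illustrate how restoring one rejected point
# alters the trend without contesting any of the accepted points.

def fit_line(points):
    """Ordinary least-squares line fit: returns (slope, intercept)."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Scholar A's accepted data: the "development" appears to flatten out.
accepted = [(1, 2.0), (2, 2.9), (3, 3.1), (4, 3.2)]

# Scholar B salvages a point that A had previously rejected.
salvaged = accepted + [(5, 5.0)]

slope_a, _ = fit_line(accepted)
slope_b, _ = fit_line(salvaged)
print(f"A's trend: {slope_a:.2f}; with B's salvaged point: {slope_b:.2f}")
```

The design point of the sketch matches the argument above: B's critique is technically a positive addition (one more point), yet its whole force is to refute A's negative judgment about that point.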

Thus, it may be unavoidable. But a change in tone and awareness (at least!) is still needed. In Annette Merz's response to Foster, Ryan and Pitre (at the SBL session mentioned above), she discussed the rise of Mythicism in the U.K. and stressed the ethical responsibility of the Academy to combat this misinformation. At one point in her discussion, however, she mused a while about whether scholars' debates about historicity were also somewhat to blame. Mythicism, that is, can be considered the extreme form of treating the Gospels as "fictions". To that end, and especially if this "reversion to data" is unavoidable in our critiques, we might at least do a better job of emphasizing HOW our disputes about data inclusion AFFECT our constructions of past history. Not fiction. Not myth. History. And on that. At last. I will say good night...

Clarification: In case it hasn't been obvious enough, my entire post until this clarification was focused on critiques of histories, not on producing histories. This post was all about criticism, not historiography.

December 3, 2015

My NT/History Manifesto for 2016

Stories help us remember chronological change. Biographies, cause & effect, widely noted events, and perceived 'turning points' are the best ways for human memory to reconstruct the dynamic passing of time. If we think about first century chronology informally, as a collection of stories, we can more easily apply four-dimensional contexts to our readings* of the New Testament's narrative content.

*The validity of any such reading would depend of course on (1) the plausibility of an original audience being familiar with that historical context (as the inferred background to EITHER historical fiction OR historical nonfiction), and/or (2) the potential for historicity of the reconstructed scenario.

November 11, 2015

The Logos and The Eidos

Tonight, as I'm still working out a few kinks in my ongoing series, I remembered this little reflection I made in my iPhone's notepad, apparently in April of 2014. The contrast between words and things and ideas has always been captivating, hasn't it? Anyway, I'll just leave this here as some spark-worthy ramblings, which hopefully some of you might enjoy! 

The Thing about Plato's cave is not that normal people are actually living like shadow puppets. It's that guys like Plato prefer to see it that way because they can't really get into it. If you don't understand how to interact with normal life then you wind up creating a higher plane to imagine within. Or perhaps that's chicken and egg in reverse. Perhaps your imagination is so strong that there's nothing left in your brainpower for apprehending normal physical & social interactions. But whichever one sparks the other first, these two trends reinforce one another after a while and spiral ever stronger. 

So how does this really begin? It might have something to do with words. Words are algebraic. Language is the abstraction of things into symbols. In algebra we let X stand for a number, and then, having abstracted the concept of value, we begin talking about X without reference to any particular number. For certain minds, all of language has this same power. The potential for variability of language is sometimes called irony or ambiguity, but what it is, at bottom, is abstract variability. It's algebraic.

So, that helps explain Plato and people like him. Maybe it happens whenever a child is profoundly immersed in literature from an early age and happens to find it enjoyable. But being attracted to a story world which stands apart from the real world is only one part of the captivating mystique. Consider simply the way that I can use the word tree and have that simple signifier conjure things inside your mind - images, experiences, perhaps even sounds and smells - memories of places you've been and sights you've overlooked. If focused upon strongly enough, the word tree has a magical power to conjure up an imaginary world inside your head. ...and how can reality ever compete?

But it is words especially that seem to have power. Although other things can have the same effect of stimulating imagination and memory, they are far less common. I can walk in the park and stare at one tree and let it remind me of other countries or imaginary trees, but there is always a particular tree physically, visibly before me, in front of which I am presently sitting. That cannot have the power of "tree". And so there is something fascinating about words. And just as some children become captivated by the motion of a ball or the appearance of pretty things or by making other children smile or even cry, there were some of us who became captivated very early on with the power of words. This can draw you in as fully as anything else.

So eventually you get guys like Plato('s Socrates) who see the entire world as insufficient to account for the potential of words. It's a complete fabrication when they develop this concept about a higher plane of pure ideas but it isn't an accident. What they came up with was the natural progression of focusing on words and finding out what words can do. Let me say that again. The "idea plane" (or something very much like it) was every bit as inevitable among brainy wordsmiths as the Super Bowl (or something very much like it) was inevitable once guys began playing rugby.

Eventually... What this leads to is simultaneously a great benefit and a tremendous drawback for Christian theology, because this higher plane of being seems to mimic the higher plane of God...

Jesus is truly deeply and metaphorically the Word, not in human ideology, but Jesus is the actual signifier of God as a word is an actual signifier of things...  This does not mean that God is a higher form of human intellect, "the Word". No, this means that God became communicable by incarnating in Jesus. It does not mean Jesus is the words that God would speak to us, or thoughts that God would speak - although Jesus may be those things - but it means Jesus himself *IS* the speech-like effort of God to explain himself to human beings. He is what best references, signifies, represents, or stands for God. "The Word" is the opening metaphor of one idea that stays consistently expressed through the rest of the chapter. As one translator rendered, "He came to show us what God was like."

And so, whether we think about Logos or Eidos or simply higher levels of thinking in general, we do well to remember that words should always, in some way or another, point us back to a real experience of things...

Experience always trumps. Period.

Anon, then...

August 10, 2015

The Reason I Do What I Do

To do history is nothing more than to be a conscientious reader of history: to extrapolate ramifications from texts and other artifacts, to hypothesize the logistics implied by a literature, to imaginatively construct a non-fiction storyworld, to encompass the world in one's mind. It can be nothing else.

The history one "does" (or "produces") may ultimately take the form of yet another literary synopsis, or it may become yet another detailed critical analysis, or perhaps it will exist merely within one's own memory of the experience of studying "the past". In all these cases, past history always exists presently as nothing more than the narrative residue informing mnemonic reconstruction. The "actual past" is remembered not as a text, but as the memory of whatever you imagined that world and that era to have been like, as you were reading.

To say "Jesus wept." is not necessarily to say that Jesus wept. The difference is all in the mind. If we do not imagine a narrative storyworld, we are not reading about actual persons in past situations. If we're just mouthing the words, our default mental setting is to remain focused on self. To dig in, to imagine the story, is a valuable step towards focusing elsewhere.

We do not have to write history to do history. We do not have to do history to read literature. But to read literature without doing history is to treat the contents of that text as nothing more than a bit of light fiction, perhaps with a moral attached, and this makes any story as irrelevant to the present world as it seems non-existent to us, as it remains for us only in the non-envisioned past - words never dwelt upon, a non-situation, even less than forgotten.

I am not suggesting that we need to declare how much of a story we believe really happened. 

I'm suggesting we need to show care for important historical stories by reading and thinking about their contents AS IF those things actually happened.

Otherwise, I don't see how any portion or detail of a supposedly non-fiction story could ever, actually, presently, personally matter...

August 5, 2015

Peer Review, Peon Review

The illustrious and effervescent NT professor known (to human folk) as Christopher Skinner generated some great conversation among academics last week with his blog post about the academic citation of blogs, and another one about peer review. The first topic seemed simple to me: blogs should not be cited as scholarship but they are fair game to cite in scholarship. The second topic required some thought, and here's what I have to say about that. 

There's nothing wrong with peer review. It was, albeit flawed, the best possible system for the old media world. And although the way forward now is NOT to abandon the old way and embrace cacophonous ignorance (perish, forbid!) it would behoove the academy to re-apply the purpose of peer review in the new media world. The old model was a gatekeeper, and a good one, but now barbarians are flooding the court and no one is going to get them out again. The old model was "Filter, then publish." The new model is, and must be, "Publish, then filter."

As academic publishing goes more and more online, it needs a new revenue stream. Coincidentally, we'd need a huge budget increase for academics to begin offering a thorough and systematically comprehensive review of all the gibberish (and other stuff) being posted on the interwebs. With less and less administrative burden from the dead-tree pipeline, and with a noble cause sure to elicit massive donations, we might repurpose our best curators to start providing regular feedback and constructive criticism of the most prominent stuff that desperately needs a professional redressing, and perhaps occasionally showcasing a few modest voices whose ideas are worth fostering.

We don't need to end peer review. 

We need to establish peon review.

July 19, 2015

Remembering Caligula's Life Story

a theoretical application, a personal exercise, and a methodological reflection
Predictable sequences are easier to reconstruct, and correlation is sometimes as good as causation for remembering storylines. In a nutshell, that’s the theory I’ll attempt to describe in my next post. For today, here’s a personal exercise in remembering, and an immediate reflection on my own mnemonic sequencing methods. [Note: remembering an entire life story can be compressed into the memory of an outline or storyline, to which other details may be attached for a more completely reconstructed life story. In today's post, as in all my recent discussion, I will mention lots of details, but the theory and method here will remain focused on emphasizing the construction of a sequenceable timeline.]

I will now actively remember the basic outline from one biographical storyline: Caligula grew up in the Roman imperial household, became a favorite nephew of the emperor Tiberius, succeeded to power with Tiberius’ chief advisor Macro, oversaw one year of stable government and then ruled maniacally for another two years, before finally being assassinated by his own elite bodyguards.

As I’ve just reconstructed it, from beginning to end, this sequence proceeds on the conditioning powers of probability. Other mnemonics could have been employed, but I wanted to start in early childhood and build forward through time. The six points in the finished outline had each been preserved somewhere in my brain, but I was able to sequence them all together (through “constructive remembering”) by paying attention to content itself.

Summing up what I remember about the early childhood of “Little Boot”/Gaius Caesar is what gets me the first point, which allows but does not require the second point. Other children grew up in Augustus’ household, but searching my memory for the particular outcome of Caligula’s family situation reminds me (whether directly or indirectly) that he did somehow worm his way into favor. Next, remembering him from that vantage point on the Isle of Capri reminds me that Tiberius died there. Caligula’s life story content is divided neatly in two by Tiberius’ death. That passing, however, did not require Caligula’s succession. In fact, other heirs were passed over (Claudius) or killed off (Agrippa) so that Gaius Caligula could become Emperor. Nevertheless, I do not have to remember those details to recall the sequence “death, then succession”. Like everything else in this chain of events, the links aren’t forged by causality but probability. The death of a ruler doesn’t always result in an immediate succession. This is simply the most frequent outcome.

Next, it’s obvious that I remember Caligula did become emperor - it’s the only reason we’re talking about him - but how do I remember WHEN Caligula became Emperor? And more importantly, why does that mnemonic content rise to the surface at this point? My argument is, because it was specifically evoked at this point in the chain of conditioning - not by causality but by correlation. Even if it’s only because I was holding this key point in reserve until I had constructively re-collected and re-sequenced all the “early life” material I could muster, this is the same process repeating, as it did before. What else do I remember, and how best does it fit? I cycle through things I remember until finding something that fits a familiar pattern - not necessarily a logical or reasonable or necessary progression, but a progression that feels like a recognizable sequence.

Of all the content I can manage to remember from his life story, which bit happened next?

I remember all the anecdotes where he seems like a crazy tyrant, and I remember he obviously died at the end. But do I remember anything else? Do I remember anything before that?

As it so happens, personally, I remember Sutorius* Macro, Tiberius’ chief advisor who took over the reins of empire after deposing the infamous prefect Sejanus in AD 31. From my personal studies of history, I remember Macro as a part of the smooth transition of power before Caligula got rid of him about a year later. (*His first name is the only thing I looked up, while writing this post.)

Was Macro’s influence causative? Does his removal explain the brevity of Caligula’s good year? I’ve never heard anyone say so, but to be honest, I do remember thinking this during my reconstruction. I’ll even admit I re-typed the italicized storyline (at top) to avoid giving the impression that causality was responsible for this sequence in my own mnemonic reconstruction. I have therefore admitted that causality did at least enter my thinking. However, I do not believe causality was responsible for my reconstruction of sequence, and after much reflection tonight I believe I can prove it.

Years ago, I was working on the background to Paul’s letter to Galatia, and trying to debunk a gerrymandered chronology which depended on supposing that Caligula gave Damascus to Aretas shortly after Tiberius died (AD 37). Of the many arguments I mounted against this, my most original point was the continuity of Macro. Since Macro had been running the empire for Tiberius, and Macro continued under Caligula initially, there was no cause for supposing that Rome suddenly reversed its position on Aretas just because the figurehead ruler was no longer enjoying his extravagantly debauched retirement on the Isle of Capri. That was my argument, and that’s undoubtedly why Macro still exists as a significant part of my memories about Caligula. Macro also happens to correspond with related research on Caligula’s gifting of Trachonitis to Herod Agrippa in that same time period. (For more about all that, see the bottom part of this very old, very long blogpost.)

Now, I have also heard some claim that Caligula’s worst years might be explained by an illness that caused brain damage. I don’t remember (and for the purposes of today’s exercise, I am purposely not looking it up) in what month and year that illness was supposed to have been, but I think I remember hearing it was early in AD 38. Thus, even though I have always doubted the illness/insanity narrative, I did remember it and it did preserve the idea of a good year followed by two bad years. Next, I suspect due to a bit of unrelated borrowing, I also drew on some narratives about Nero that claim his horrible phase begins with his mother’s and Seneca’s deaths - good advisers, good ruler; dead advisers, horrifying ruler. It was into this paradigm, tonight, that I believe I attached Macro. Altogether, then, I may have conflated my memory of the illness timeline with the Nero narrative, and succinctly narrated Macro as the dividing line between good and bad years of Caligula’s rule.

Perhaps most intriguingly, however, I cannot recall ever thinking this about Macro before now.

If that’s true, then causality here was not a memory previously encoded but a brand new distortion, something I generated during tonight’s constructive remembering. Actually, with further research and debate, it could eventually prove a happy accident of historical imagination, but that’s beside the point. For our purposes tonight, it doesn’t matter whether the recollection is true, and it doesn’t matter how the recollection was distorted. It matters when the recollection was distorted. Instead of using a previously encoded sense of causality to reconstruct temporal content, I took what were simply encoded (and conflated) associations and I inflated that correlation into a narratable causation - and perhaps this itself was partly because I was remembering in real time while typing on a keyboard. But none of this has yet answered the question at hand: how did I remember the sequence in question?

Causality was neither available nor needed. Thanks to my old research on an ancillary topic, I remembered that Macro advised Tiberius before advising Caligula briefly. Macro was present and then he was absent, and the overlap lasted about one year, as best I recall. [Edit: Make that one year give or take a few months; from Tiberius’ death (March of 37) to Macro’s demotion to prefect of Egypt, which dates either from appointment (as early as January 38) or physical transfer (as early as June 38).]

In the act of constructive remembering, I adjusted my encoding of Macro’s significance. However, before I could do that I first had to recall Macro as having some other significance. And this was due to a simple correlation. When I asked myself, what’s the very next thing to include after Caligula’s succession, the trace memory I selected was Macro. It was not his effect I was originally thinking about, but his origin. Macro served Tiberius, Caligula succeeded Tiberius, and so then Macro served Caligula. That’s a temporal correlation, a mnemonic association of overlapping continuities, a conditioning of probabilities with no sense of causation.

My placement of Macro as the next bit of the storyline was purely because he corresponds in multiple ways with my memory of the regime transition.

Of course causality can serve the same self-sequencing purpose, for remembering chronologies. For instance, a different reader could mnemonically reconstruct Caligula’s timeline without Macro, and they might sequence the good and bad years by recalling (however dubiously) a sense of causality they’d encoded from reading about the brain injury. Technically, this same causality did (admittedly) inform my own exercise here, on some deep level. Even though I rejected the causality as non-factual, that memory of rejected causality is what initially reminded me that there was a good year (or so) before the insanity began to run rampant.

Nevertheless, I believe I have shown that remembering causality was not absolutely required. Sometimes sequences depend on probability and/or pure correlation. For part of my process, a collection of memories all coincided with the mnemonic “time period” defined by Macro’s association with Caligula, all of which I summed up in one single point on my remembered storyline. That point, summing up the collective “phase” of the storyline, thereby sequenced itself.

So there’s one biographical storyline I constructively remembered today, and that’s my most honest reflection on my actual reconstructive process.  But whether you believe me or not, this illustrates the ways in which probability and correlation help the mind reconstruct sequences while remembering story content.

Other children grew up in Augustus’ household; Tiberius had other favorites; other heirs were killed off or passed over so Caligula could be chosen; nothing mandated a good start or a slide into horrifying insanity; and the praetorian assassins weren’t absolutely obligated to arrive at that choice. At each “gap” between these remembered “phases” or “turning points”, the storyline could have moved in some alternative direction. The fact of the actual sequence is what the naive refer to as “history”. The way I construct my remembered sequence is by comparing trace memories against recognizable time periods. The important thing isn’t logical necessity. It’s familiar frequencies.

The most rememberable sequences in Caligula’s life story may seem natural or “logical”, but they aren’t “logically necessary” by any stretch of the mind. What they actually are can be explained on a more basic level. The rememberable sequences are rememberable mainly because they’re predictable.

Growing up is necessarily first and assassination is necessarily last. That’s not causality. That’s statistics. That’s a common pattern we observe among life stories.

Succeeding Tiberius comes after earning favor and before ruling. That’s not logical necessity. That’s a pair of correlations. The context of story content either includes Tiberius being Emperor or it doesn’t.

And finally, since the insane years precede the assassination, the good years - by virtue of existing - must precede the bad years. That’s not karma. That’s two distinct data sets. Assassination is more likely to follow a tyrannical rule than a good rule, and in this case (as far as I remember) it did.

Assuming I remember each of these six points, they sequence themselves. More deeply, their self-sequencing property may be what reinforces their preservation as trace memories. The structural value of these six points as an outline is a survival advantage. Having already established that content dictates sequence, we may now observe the converse, that timelines “select” (in a Darwinian sense) their own content.
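The claim that such points “sequence themselves” can be pictured computationally: if each remembered point encodes only which earlier points it presupposes, a simple topological ordering recovers the storyline with no stored dates at all. Here is a minimal sketch; the six point labels and their prerequisite links are my own illustrative paraphrase of the storyline discussed above, not a quotation of it.

```python
# Toy sketch: remembered "points" that encode only their prerequisites.
# The sequence falls out of the content itself, via topological ordering.
prereqs = {
    "raised in the imperial household": [],
    "earned Tiberius' favor": ["raised in the imperial household"],
    "succeeded Tiberius": ["earned Tiberius' favor"],
    "a good first year": ["succeeded Tiberius"],
    "years of tyranny": ["a good first year"],
    "assassinated by praetorians": ["years of tyranny"],
}

def self_sequence(points):
    """Order points so every prerequisite precedes its dependents."""
    ordered, placed = [], set()
    while len(ordered) < len(points):
        for p, needs in points.items():
            if p not in placed and all(n in placed for n in needs):
                ordered.append(p)
                placed.add(p)
    return ordered

print(self_sequence(prereqs))
```

The point of the sketch is that no explicit sequence is stored anywhere; each trace carries only its own preconditions, and the timeline reassembles itself whenever the traces are recalled together.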

Our need to remember storylines privileges temporal content.

The more rememberable stories, over time, may by default become the more memorable as well.

This is one reason biography remains a perennially popular and a relatively reliable tool for remembering the past.

July 4, 2015

Remembering Life Stories (3): Biographical Temporality

Temporal content appears in biographical narratives not just through "sufficient causality" but in material that aligns with common patterns of human growth and personal development. Such content structures stories not by Plot but by Character. We identify this material in biographical literature according to three types: "necessary causality", statistical probability, and correlation with recognized "time patterns". Extensive illustrations are included. (This is post 3 of 6.)


Having previously identified self-sequencing memory in two areas (narrativized causality, as in the “post hoc” stylings of an Aristotelian Plot, and human mortality, by which birth and death provide every life story’s chronological Beginning and End), my last post adapted the more generally applicable work of William Friedman, who showed that we remember chronology (“the time of events”) by reconstructing temporal context from the informational content of preserved bits of memory (and associated knowledge). Thanks to Friedman, I feel justified in defining “temporal content” as “mnemonic content that structures itself” or “implies its own sequence”.

With all this in mind we come to the practical question of literary biography.

What kinds of temporal content are typically found in narrated life stories? What helps readers remember a biographical storyline? Obviously all biographies feature human mortality, and many biographies feature narrativized causality, but what other types of raw narrative material typically convey the kind of temporal content readers use to mnemonically re/construct a sequence of life story events? In short, what kinds of information help the mind chronologize life stories according to character development, without relying on narrative causality?

Obviously, that exclusion has been the tricky part.

Building story structure around character development can depend largely on identifying a sequence of conditional prerequisites. But how can we identify life story progressions based on “necessary causality” when we’ve disallowed the narrativizations of “sufficient causality”? This requires something more than a categorical distinction. We need to distinguish these two causalities differently than historians or physicists (determining relative measures of actual causation) and differently than literary critics (categorizing ways in which composition supplies that which narrative requires). To maintain our focus on cognitive memory theory, we need to distinguish “necessary causality” and “sufficient causality” in terms of how they function as tools of constructive remembering.

Things happened. Storytellers purported causalities. But an audience member’s ability to remember that story is affected by which type of narrative “causality” is employed.

When memories are encoded with the narrative structure of “sufficient causality”, cause and effect may each evoke the other. Besides logical deduction, there is the original encoding of the mnemonic association. Each trace memory has been created in such a way that its own informational context requires (implies, evokes, triggers) a necessary remembering of the associated trace. The mnemonic associations are mutually reinforcing and they imply one another reciprocally. When encoded as such, the cause evokes the effect and the effect evokes the cause. Thus, if both bits of information are preserved (and theoretically, even if only one of them is) either one will necessarily trigger a reconstruction of their temporal relationship as prior and subsequent.

Note how the meta-level of memory flips our terminology. “Sufficient causality” implies sequence necessarily. “Necessary causality” doesn’t necessarily imply anything.

When memories are encoded with the narrative structure of “necessary causality”, the cause and effect do not have an equal ability to evoke one another. The outcome is encoded to reflect its necessary precondition, but the precondition encoded as such does not embed enough information to require (imply, evoke, trigger) the memory of any particular outcome. Even if both memory traces are preserved, their associations are not mutually reinforcing and the temporal implication is non-reciprocal. The effect evokes its cause but the cause does not automatically evoke its effect. The “necessary” association is absolute when remembering “backward”, but triggering that association is not absolutely necessary when remembering “forward”.

As Friedman demonstrated, it’s all about the information. Preserving information about a development reflects information about preconditions, but preserving information about preconditions doesn’t embed information which necessitates later developments. As long as the mnemonic content is encoded differently for these types of priors and subsequents, the remembering of “necessary causality” works differently in the forward and backward directions. If the mind works to reconstruct time in a backward direction, the “necessary effect” absolutely implies its own “necessary cause”. But to reconstruct time in a forward direction, the remembering mind must make a logical leap to link prior to its subsequent - and this is true even when both bits of information are preserved!

Surprisingly, however, this logical leap is not completely bereft of assistance. Although necessary prerequisites do not imply necessary effects, prerequisite causes do imply possible outcomes.  In fact, prerequisites often imply one or more probable outcomes. Despite lacking absolute causality, we do not fall all the way back to a completely freestyle “fill in the blanks” model of constructive remembering (as we often do in situations addressed by schema theory). Rather, when leaping forward in time from the memory of a prerequisite, the remembering mind is provisionally enabled to “connect dots” from within a selection of probable outcomes.

Cue inspirational music. We just passed through causation and drilled down to correlation.

Near the end of post #1, I said “our familiarity with certain predictive regularities of human growth and development enables various algorithms for the efficient compression and reliable reconstruction of biographical storylines”. These “predictive regularities” involve what I have just been explaining, and some algorithms for efficient compression of storylines will be the focus of post #4. Also in post #1, in the very next paragraph, I said that developmental storylines “may be informationally compressed into backward chains of ‘necessary causality’”. We will look closely at those types of “backward” compressions in post #5.

What remains for today (here in post #3) is to identify instances of temporal content in life stories that build upon Character rather than Plot. We have just observed two types of material that mark such content, and we can add a third type more directly in line with William Friedman’s research (post #2).

The first type is “necessary causality” as reflected in biographical development. The second type is probability as conditioned by developmental prerequisites. The third type is any statistical correlation which conforms to what Friedman called “time patterns”. This third pattern can be redefined more purely in terms of statistical correlations (familiar frequencies and/or regular occurrences of conditioned outcomes), but that’s enough stats talk for the moment.

The pertinent issues should become more understandable as we begin to look at examples.


Let’s begin with the most distinctive of our three patterns - probability as conditioned by developmental prerequisites. If some life experience or narrative material is seen as a “necessary cause”, and encoded within trace memory as a precondition, then there remains a good chance the mind will successfully reconstruct a timeline by trying to recall trace memories that may be associated with one or more of the given precondition’s probable outcomes.

Consider the familiar experience of autobiographical memory.

Let’s say you have an old high school friend who joined the army after graduation. Now let’s further suppose that at some moment you happen to recall that this friend did join the army, and you find yourself trying to remember when this occurred. Because turning eighteen is a prerequisite to joining the army, both logic and general knowledge should assist you in triggering the preserved memory that your old friend was indeed eighteen at the time. (The conditional outcome has implied its own precondition, not just logically but informationally.) Again, if you first remember that your high school friend joined the army, and this reminds you about standing with her/him at graduation, then you’ve just constructively remembered temporal content in the backward direction.

In the forward direction, however, if you first recall the old high school friend and then find yourself trying to remember what she/he did after graduation, the fact that turning eighteen is a prerequisite to enlistment does not ensure you'll recall what happened next. If your memory of that later development is lost, you would merely be guessing, as anyone could. Probability alone tells us that military service is often one of the top four or five career choices among high school graduates, but of course we are talking about probability-assisted remembering. If your memory of your friend's later development is not utterly lost, if the trace of that information has merely become faded over time, then probability offers more assistance to you than it would anyone else. If the memory is still accessible then your familiarity with the common patterns of human experience helps you access it more easily by “narrowing the search”. Rather than trying to “fill in the blank” by searching your mind aimlessly, your awareness of the most likely outcomes allows you to search your mind specifically for traces of any previously encoded memory that happens to match one of those probable scenarios. Again, if a previously encoded memory trace is accessible, the chances are good you’ll succeed in remembering it. (And not just "remembering" it.)
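This “narrowing the search” idea can be sketched as a toy retrieval model: a faded trace surfaces only if its trace strength plus an expectation boost (from familiar frequencies) clears a retrieval threshold. All the names and numbers below are invented purely for illustration; this is a cartoon of the mechanism, not a claim about actual cognitive parameters.

```python
# Toy model of probability-assisted recall. A faded trace is accessible
# only if its activation (trace strength + expectation boost) clears a
# retrieval threshold. All values are arbitrary, for illustration only.
THRESHOLD = 1.0

# Faded trace strengths for "what my friend did after graduation".
traces = {"joined the army": 0.6, "opened a bakery": 0.6}

# Familiar frequencies of post-graduation outcomes ("time patterns").
expected = {"joined the army": 0.5, "went to college": 0.7}

def recallable(memory):
    """A trace surfaces when strength plus expectation clears threshold."""
    return traces.get(memory, 0.0) + expected.get(memory, 0.0) >= THRESHOLD

assert recallable("joined the army")      # faded trace + strong expectation
assert not recallable("opened a bakery")  # same trace strength, no expectation
assert not recallable("went to college")  # expectation alone is not a memory
```

Note how the last line echoes the caveat above: if the trace is utterly lost, probability alone yields only a guess, not a recollection.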

We pause to note three key points briefly. First, this scenario is not an outworking of “natural logic” or “causality” because the mnemonic assistance comes from your personal familiarity with statistical frequencies - which, themselves, are not comprehensively dictated by natural logic or direct causation. Second, this example obviously seems similar to situations of "remembering" in which familiar schemas help the mind construct false memories, and similar to situations where the reconstruction "successfully" recalls a previously encoded false memory. Indeed, probability can assist in recalling false traces and inventing new memories, as easily as probability can assist in recalling a "true" trace memory (as it did in the given scenario). However, as I said in post #2, the reliability of content must be held apart from the “reliability” (plausibility) of structure. Even false memory illustrates that probability can enable the constructive remembering of timelines. But this brings up our third point. Concerns about “false memory” in such scenarios can be largely alleviated when we leave autobiographical memory and focus on remembering literature. My memory of content from a literary narrative can be looked up and verified.

I think I remember that Lincoln served in Congress before he became President. I know most U.S. Presidents had first been Senators, Congressmen, or Governors, and a few had been Generals. But I think Lincoln was a Congressman. Notice how this is not strictly a guess. It’s not even mostly a guess. The odds help in two ways; they limit my options and they boost my general confidence, both of which reinforce my specific confidence, which is that I happen to feel strongly that I’ve remembered correctly. We could look it up. But you already know that in this case I did remember correctly.

Not incidentally, this Lincoln example belongs to category three. It’s a purely non-causal statistical correlation. Note the backward reconstruction (from president to one of a few likely prior positions) and yet there is no necessary employment prerequisite before “president”. This is probability without a prerequisite, which makes it neither category one, nor two. This is a statistical pattern of common temporal progression. We reconstructed backwardly, assisted by known frequencies. For remembering temporality, conventional sequences are as helpful as causative influences.

We’ll come back to that point in a moment. Here’s another example.

Encoding information that “The queen died of grief” associates cause and effect so mutually (through the narrative distortion of “sufficient causality”) that recalling either point can trigger a recall of the other, and recalling in either direction reinforces the selfsame mutual association. Remembering the queen’s death reminds us of her grief, and remembering her grief reminds us that it killed her; or at least, that is what we have so long believed. Note how the prior implies its own subsequent as strongly as the subsequent implies its own prior. This is what I meant about the mnemonic association of narrativized causality being “reciprocal”.

Contrast this with encoding information that “The queen became a great-grandmother”, which implies she had previously become a grandmother, and also previously a mother. Note how this implication is not reversible. Encoding a memory that “The queen birthed a child” or that “The queen became a grandmother” embeds no absolute implication about later developments. Becoming a great-grandmother reflects (implies, evokes, embeds) information about her earlier states. Observe once more, however, this is not merely “natural logic”. I presume we have all heard of Elizabeth, Charles, William, and George.

In this example, the subsequents imply their own necessary priors, not merely because logic requires it but because logic assists us in remembering information about this woman, Queen Elizabeth, and her next three successors. This is what Friedman called a “time pattern” - a previously established familiarity with the frequently observed (and often narrated) pattern in human experience - in this case, the succession of generations. Again, each subsequent in this chain implies its own priors, but each prior can only imply possible subsequents. If you did not remember that Elizabeth has a son, then you might not remember that Elizabeth has a grandson. But if you do recall the prerequisite event, then probability assists you in recalling whether the next likely outcome (in a familiar sequence or “time pattern”) may have occurred.
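The two queen examples can be pictured as a tiny association graph: “sufficient causality” is encoded as a two-way link between traces, while “necessary causality” is encoded as a backward-only link. A minimal sketch, where the trace labels and the `evokes` helper are my own illustrative shorthand:

```python
# Mnemonic associations as directed edges between trace memories.
# "Sufficient causality" (died of grief) encodes both directions;
# "necessary causality" (great-grandmother implies mother) encodes
# only the backward direction.
links = {
    ("queen's grief", "queen's death"),               # sufficient: forward...
    ("queen's death", "queen's grief"),               # ...and backward
    ("became great-grandmother", "became a mother"),  # necessary: backward only
}

def evokes(a, b):
    """Does recalling trace `a` automatically trigger trace `b`?"""
    return (a, b) in links

# Reciprocal association: either trace triggers the other.
assert evokes("queen's grief", "queen's death")
assert evokes("queen's death", "queen's grief")

# Non-reciprocal: the outcome evokes its precondition...
assert evokes("became great-grandmother", "became a mother")
# ...but the precondition does not automatically evoke the outcome.
assert not evokes("became a mother", "became great-grandmother")
```

The asymmetry in the edge set is the whole point: remembering “forward” across a necessary link requires the probability-assisted leap described above, whereas remembering “backward” does not.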

Remembering a prerequisite makes it easier to recall probable outcomes.

These are biographical patterns of life story development and they enable the efficient rememberability of story structure by focusing on Character rather than Plot. (Ta-da!)

All grandmothers had previously become mothers, and many mothers go on to become grandmothers. Every adult survived infancy, and most infants live to adulthood. All army soldiers must have reached age eighteen, and some eighteen year olds join the army. Every doctoral candidate was once a lowly undergraduate, but few college freshmen pursue Ph.D. status.

That's probably just enough to make my point. Let's wrap up this argument and I'll append tons more examples at the bottom of this post. Look for additional insights there as well. There is much more to note about all of this, as we go on with our study.


We have looked at examples of story content in which outcomes necessarily imply preconditions, examples of story content in which preconditions partially assist in implicating probable outcomes, and examples of story content in which other statistical patterns enable similarly assisted reconstructions from preserved trace memory content. All three types of content are common in biographical narratives for multiple reasons, including that they enable readers to attain greater efficiency in remembering a lengthy, elaborate storyline. These examples of temporal content have implied their own sequence without recourse to the narrativizations of post hoc causality. They imply story structure by focusing on Character rather than Plot.

With your approval, we might label these types of temporal content as biographical temporality.

So that's all very impressive, you say, but is that all there is to remembering a life story? And if not, where do we go from here in continuing this study? I'm so delighted you asked.

Focusing on any type of temporal content is one level of mnemonic efficiency. Bits of temporal content can easily imply their own sequence, but the mnemonic challenge increases when we attempt to string together several sequenceable data points as one coherent timeline. The next level of efficiency involves remembering a whole story.

The remembering mind can repeatedly reconstruct any timeline, refreshing a memory of the overall sequence by focusing on bits of temporal content in chains of association, either by linking multiple pairs or by aligning individual data points with one or more pre-determined “time patterns”. In any of these cases, remembering a storyline involves once more renewing the work of constructive remembering. In practice, this probably accounts for the common experience of many readers who attempt to actively remember how story content fits into its most appropriate chronological (that is, its historical and/or logical and/or probable and/or authorially intended) sequence. But some minds also find more efficient ways to remember a sequence.

There are further levels of efficiency to attain by compressing a chain of temporal content into a more rememberable sequence. An entire biographical narrative can be remembered coherently, as a unified whole in the sense of its storyline, and not just in the ubiquity of its featured subject.

Come back for posts 4, 5, 6, and 7, in which we consider compression algorithms of information theory.

Come back for post 8, in which we consider teleology as a reflection of nested preconditions.

And come back for post 9, in which I will try to summarize and conclude.



I promised more examples of temporal content in life stories - fully or partially self-sequencing information which aligns with “time patterns” through necessity, probability, and correlation. To identify these, we can largely focus on illustrations of human growth and personal development.

Two predominant trajectories are biological growth and psychological development.

Childhood, for example, is chock-full of biological prerequisites. Babies roll over before they can stand up, crawl before they can walk, gnaw before they can bite, and babble before they can properly form words. Children do not mature sexually before surviving a dozen or more solar cycles. Teenage mothers are far less likely to die in childbirth, and elderly women can no longer conceive. Neither boys nor old men may capably plow a field. And so on. Most of this data suggests “causal necessity” if employed in backward construction, and probability if employed in forward construction. Not every child survives to reach the next “stage” in progressing toward adulthood, but most growing children survive each successive prerequisite, and every adult was obviously once a child.

As these physiological “beginnings” help to sequence a life story’s “middle”, most remembering minds inevitably come to some degree of envisioning “stages”. Being subjective, this obviously varies a lot. Some of us might be content to perceive broad developmental phases (childhood, adulthood, infirmity), or construct narrower time periods (infancy, childhood, adolescence, parenthood, empty nest, retirement), while others might go by decades (twenties, thirties, forties, etc), and some rare minds might even insist on a more meticulous accounting by exact years of age. These are all Friedman-esque “time patterns” and any one of them can regulate temporal content, because the natural trajectory of biological development is apparent at literally any rate of subjective periodization. Some of us don’t distinguish between eighteen and twenty year olds, and some older folks don’t distinguish between thirty and forty year olds, but nobody confuses a five year old with a twenty year old, and no one overlaps the “life stages” of a young thirty-something and her elderly grandfather.

I said in post #1 that we cannot rely on this or that particular paradigm of “life stages”, but I am saying now that an individual mind can rely on any subjective paradigm that serves the purpose well enough. We may admit Richard Burridge's categories of ancient biographical narrative structure as one possible paradigm for a hypothetical reader. We just can't assign those categories hypothetically to all remembering readers. But in any case, however we slice it, define it, or label it, the natural progression of biological development is self-evidently sequential, as evidenced by prerequisites. We speak before we can write, become parents before we are grandparents, and if we die of old age then we must decline somewhat in general health prior to that. Etc, etc, etc.

In Friedman’s terms, a broken hip can be associated with the “time pattern” of old age and finger painting can be associated with the “time pattern” of kindergarten. The options are literally as unlimited as any statistical patterns that may happen to be familiar to some remembering mind.

Familiarity is huge, by the way. For probability to enable more efficient remembering, frequency and familiarity have to align. In fact, that’s the only reason biography is a special category of remembering temporality. 

Human development isn’t the only category of earthly experience in which probability implies temporality (that is, not the only area in which we might often note frequent progress within statistically observable patterns of change). That's not remotely the case. Actually, human development is simply the most popular category in which statistical frequency aligns with an intense familiarity that we all share for a single subject. Change happens. Dynamic systems develop. People watch people. And statistical patterns mount gradually. It’s only when all of these factors combine that literature finds a broad platform for conveying temporality through content that an audience can recognize.

((***The same exact thing is what accounts for narrativized causality. Purporting causation literally depends on statistical correlation. That is, it depends on inflating a claim based on some degree of relative correlation. In the whole history of our species, we've paid close attention to causality! But perhaps I digress.***))

Compared to biological growth, psychological development is probably less helpful - because it’s less frequently observable - but in terms of statistical correlations the progressions of cognitive growth are potentially just as helpful as anything. If the mind preserves temporal content, an awareness of probability can enable mnemonic reconstruction of temporality.

I won’t keep belaboring the memory theory after this point, but we should continue to identify biographical examples of probable outcomes.

Not many biographies linger on early childhood, or puberty and adolescence, which is when cognitive development tends to evidence itself in observable “leaps”, but the research of Jean Piaget does qualify here, technically. The vast majority of 9 month olds have developed object permanence (things still exist once removed from their sight). A predominant majority of two year olds have developed symbolic awareness (early language development). A typical eight year old can usually demonstrate logical thinking (such as cause & effect). And many twelve year olds can begin to engage and utilize abstract concepts (literary techniques, scientific method, basic algebra, political bias).

This covers the bulk of a human population through the 8th or 9th grade, in that most post-pubescent adolescents have already passed over these cognitive thresholds. We can also note very generally some adult patterns. Most adults develop mature thinking (a common outcome) through overcoming challenges (a statistically common prior, although not a prerequisite). Emotional instability by an adult (as a later development) can be a likely indicator of a difficult childhood (again, not strictly a prerequisite). Declining mental capacity is not unlike the onset of physical infirmities, in that it usually appears during the later decades of an average lifespan. And so on.

Adults also display cognitive development by acquiring and increasing in particular knowledge. We can often estimate an adult’s years of experience in some area of skill (though not necessarily their age, obviously) by comparing the extent of acquired knowledge and skills. An adept mechanic most likely has years of experience. An inept mechanic is most likely a rookie. However, one complication of this acquired knowledge category is that all statistical likelihoods vary according to social demographics. If a character is discussing retirement planning, they would most likely be older than 50, but in some social sets college students and young professionals can often be found advising one another on retirement. On the positive side, this example does illustrate that temporal content can potentially help sequence all sorts of material, although the likely usefulness of such content can scale quickly towards zero.

Again, psychological change is less observable and less chronologically definite. Like mortality, it offers a broad trajectory in between early growth and later decline, but it is merely a probable and non-necessary trajectory. The recognizable milestones (what Friedman calls "landmarks" and "locations in temporal patterns") are less plentiful in this area. But perhaps they are more apparent to specialists.

Some identifiable cognitive milestones can be found near the end of a lifetime. Memory loss is common but not necessarily typical. Social disorders can deteriorate towards extreme dysfunction. There can be various indications of a stroke or some other impairment which may indicate that a person (or biographical character) whose timeline we are trying to remember had almost certainly, by that point, reached a period of naturally declining health. Clearly, these types of indicators imply that related story content belongs to a time period after biographical phases when good health would have been a necessary prerequisite. And here’s one for history buffs. In ancient times, before the advent of pharmaceuticals, people were rarely known to recover from insanity. Thus, historians debate when Caligula’s apparent madness may have begun, but when we remember story content from the craziest episodes of Suetonius’ account, we instinctively place those anecdotes near the end of his life.

So much for biology and psychology of human growth and deterioration.

We must also consider the recognizable conventions that occur within social, cultural, and political patterns of personal development.

These developmental sequences (or “stages”) are based in shared experiences that are less universal than biological or psychological growth & decay, but as we have noted, any statistical frequency is fair game as long as it’s familiar to the remembering mind. More broadly, of course, we prefer to identify familiar frequencies which are well-recognized enough that they could benefit an entire audience through their temporal indications. But in this exercise we’ll note whatever we can note. For instance, I confess my own sights here have mostly settled on the following generalization: It is axiomatic of literary practice that authors and audiences will quite often share a similar if not identical context for cultural traditions, enforced social standards, standard customs and behavioral norms. When these patterns bear temporal implications, it may be only within these smaller statistical samples - a literary audience that shares a demographic subpopulation. The employees of Google, for example, must have unique ways to spot the newbies on their campus, and these temporal indicators are undoubtedly quite different than whatever helps veteran stock brokers spot the rookies on Wall Street.

That said, progressions across cultures may appear in the same general areas. Education and apprenticeship most often imply adolescence and early adulthood, while advanced positions in organized institutions are typically not earned until later in life. Likewise, a recent marriage most commonly reflects two people at an early stage of adulthood, while in some cultures an older husband and younger bride are more customary, and in some populations marriage may imply little more than the legal age of consent. Despite differing specifics, these general categories of human experience can inform story content that becomes explicitly self-sequencing, to any audience in the know.

Many social and cultural examples have a lot to do with family and career. Besides marriage, the typical age of child bearing can be a trend based as much if not more upon societal expectation as upon biological limits; indeed, in some circles pregnancies are scheduled according to social pressure, and that scheduling itself varies widely from group to group. Likewise, the tell-tale signs of an “empty nest” household indicate a parent’s biological age only insofar as that pattern is frequent (and familiar) within particular sub-cultures. Again, however, as often as such knowledge is familiar to both author and audience, this content can also self-sequence.

For probabilities of political development, there’s no better example than Suetonius’ Lives of the Caesars, which exhibits a more detailed story structure for readers in the know (readers who are familiar with the customary sequence of advancement in Roman public life and military appointment) than for readers who do not happen to be acquainted with ancient Italian political norms.

With these same caveats, we might also consider a few culturally conditioned examples of the accumulation of money, possessions, and accomplishments, or even the social accumulation of friends and family members. In general, more time usually equates to more accumulation. We can likewise estimate age - of endeavors, not persons - by observing the extent of accumulated damages, physical wear and tear, incremental social or economic decline, or other accumulated defeats and personal losses. As above, the qualification of any such case will depend not on whether we judge the time pattern necessary, probable, or merely typical, but on whether that pattern is familiar to the remembering mind. At this point, I shouldn’t need to suggest more specific examples, much less defend them.

As ought to be clear by now, the remembering mind can build a time pattern from any perceived phenomena that frequently (and familiarly) display the same temporal sequence. Further, any statistical pattern can assist the remembering mind in the process of sequencing story content.

Finally, whenever such frequencies are familiar to a large number of remembering minds, so that their inclusion helps many minds remember the storyline of a particular biographical narrative, then those particular indicators of temporality in human growth and development will provide an advantage towards the survival of that biographical narrative among a popular audience.

These are just a few basic examples of how biographical material displays temporal transition through character development.

This concludes the bonus section below post #3, identifying types of content which self-sequence in life stories.

Come back for three more posts in this series, as outlined in my conclusion above.

Anon, my friends...

"If I have ever made any valuable discoveries, it has been owing more to patient observation than to any other reason."

-- Isaac Newton