February 12, 2013

Cross-referencing Ambiguities: towards Algorithms for Writing and Reading

My working theory and methodology of literature continues to develop...

Is it too strict to say that language is representational in its denotative function and evocative in its connotative function? That is, the denotation(s) within a word are referential, and the connotation(s) within a word are contextual. "Cow" gives you both a thing to envision and a pre-loaded collection of typical places to put it, people it typically works with, and things a cow would typically make and do. Like chewing its cud, giving milk and, on rare occasions, parachuting into stadiums.

As any sentence progresses, each word offered in sequence introduces vast ambiguities, whole fields of potential meanings, which our mind processes at nigh infinite speed. For example, just look back two lines: "As", "any", "sentence", "progresses"; even that phrase has no coherent meaning until the possibilities of those first three words are tied together into one meaning by the fourth word in its turn. Likewise, "progresses" by itself conveys many possible meanings, but, as the fourth word in this particular phrase, its potential meanings have been reduced to a single meaning by cross-referencing against the ambiguities of "As any sentence".

Likewise, the pool of uncertain meanings for "As" and "As any" and "As any sentence" becomes gradually smaller, by association, and thus more clear. The first three words restrict the fourth word to its intended meaning, and although this addition of words is a constructive process, the work being done is actually a reductive enterprise. In order to write with clarity, the proliferation of meanings from individual words must be cancelled out by juxtaposition with other words. In order to be clear, the writer does not encode specific meanings so much as cancel out extraneous ones.
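For the programming-minded, here is one way to sketch that reduction in Python. The mini-lexicon and its 'readings' are pure inventions for the sake of the example; the only point is that combining words works by intersection, not addition.

    # Toy model: each word is mapped to the phrase-level readings it can support.
    # Combining words intersects those sets, so ambiguity shrinks as the phrase grows.
    # The readings and the mini-lexicon are invented for illustration only.

    COMPATIBLE_READINGS = {
        "as":         {"while-something-happens", "in-the-manner-of", "in-the-role-of", "because"},
        "any":        {"while-something-happens", "in-the-manner-of"},
        "sentence":   {"while-something-happens", "in-the-manner-of"},
        "progresses": {"while-something-happens"},
    }

    def narrow(phrase):
        """Intersect the reading-sets of each word in sequence, reporting the shrinkage."""
        remaining = None
        for word in phrase.lower().split():
            readings = COMPATIBLE_READINGS.get(word, set())
            remaining = readings if remaining is None else remaining & readings
            print(f"after {word!r}: {sorted(remaining)}")
        return remaining

    narrow("As any sentence progresses")
    # after 'as': ['because', 'in-the-manner-of', 'in-the-role-of', 'while-something-happens']
    # after 'any': ['in-the-manner-of', 'while-something-happens']
    # after 'sentence': ['in-the-manner-of', 'while-something-happens']
    # after 'progresses': ['while-something-happens']

Notice that nothing new gets created at the last step; "progresses" simply eliminates whichever readings the first three words had left standing.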

Eventually - early on, actually - the human computer learns to process whole phrases as units, so frequent combinations don't require reprocessing each time. Consider, as a unit, "And they're off." Does that refer to horse racing, or something else? Consider these familiar standards, each three words long: "Can I have", "Did they really", "How do you" and "Would you like". Each phrase, as a unit, conveys a normal set of referential and evocative potential. Now, consider that "Would you like a" presents a vastly different set of meanings from "Would you like to". Different, and yet, smaller.

Observe that "Would you like" contains all the potential of "Would you like a" plus all the potential of "Would you like to" as well as several other possible variations. ("Would you like several" of something; "Would you like not" anticipating a gerund; Etc.) The variation of meanings appears to multiply, but in practice it actually divides. Comprehensively, it is not the vast difference of "a" versus "to" that somehow 'creates' a new set of thoughts. Rather, it's the combination of potential meaning sets that strategically reduces ambiguity until one meaning is clear. Potential meanings are reduced by cross-referencing against one another.

This explains both why and how the last word in a phrase often causes re-evaluation of the first word in a phrase, and of the entire phrase. The process has been going on all along. It isn't magic, it's an algorithm! What feels like magic is when a particularly surprising combination appears, just at the end. The common suddenly twists to become something uncommon. But! There isn't a different process going on when the last word is surprising. In fact, this process of detecting such "hidden meanings" - whether symbolism, irony, sarcasm, or punch lines - is always precisely the same.

A connection of two or three words (meanings) doesn't create a new meaning; it cross-checks, or 'triangulates', their trajectories from among all possible meanings. As those vectors start to converge in a general area of thought, a new laser beam joins the rest from an unexpected angle and shouts 'hey, over here'. Now the semantic search area gets smaller. The combining of words is what provides more precise meaning, but the eventual meaning we're given (*or, the one that we 'take') was actually there all along, waiting to be discovered, once we knew where to look. (*Unless the reader gets truly inventive; on which, see below.)

Remember Mark Twain's famous dictum on word choice: "The difference between the almost right word and the right word is really a large matter—’tis the difference between the lightning-bug and the lightning." Actually, that's the original quote, according to Bartleby.com, but the famous line has been popularly rephrased, so that "bug" now tends to be the last word of the quote. This collectively approved revision slightly improves on the quotation, if not the idea, because, to a broad audience, it better illustrates the point being made. While Twain's original emphasis moved from weak (an insect) to impactful (a storm), and thus encouraged authors to work for poetic effect, the "bug" ending (while more pedantic) emphasizes the most basic aspect of what's being discussed in the first place. What better way to illustrate the power of word choice than by employing the ever popular 'twist'! 

Just as the last scene in a story can cause reevaluation of the entire plot line, the last word in a sentence has a well-known ability to provide this same counter-interpretative effect. My point today is to observe that there's nothing especially magical about the last word, at least, not apart from all the words that preceded it. As we all know, 'the twist' doesn't change things. It reveals things that were already there.

The best writers have known this for eons. The real power behind a punch line is all in the setup. For instance, here's a groaner that I happen to adore. Did you hear the one about the golden retriever, in the old west? He limped into town one day and said, "I'm looking for the man who shot my paw." (Cue groan.) I enjoy telling that one mostly for the brevity and efficiency. Set the stage. Load the twist. Pull the trigger. Paw!

It's not the funniest joke, but the minimalism of construction is beautiful. Telling that joke is like a social experiment. The phrases pile up, the world of infinite possibility is slowly whittled down, and the search for understanding is visible on your listener's face. A positive subject (lovable dog, must be our protagonist!), a setting (time and place, probably visualizing the clichéd main street or ghost town), an odd detail (the limp), more familiar clichés ('into town', 'looking for a man', together evoking the well-worn pastiche of the main street showdown), and the punch line, which evokes one last familiar 'old west' cliché, replete with the pun ("shot my Pa").


The clichés and the pun certainly undermine the joke's quality, but the efficiency is breathtaking. A whole world is built - actually not built, but evoked - fleshed out and then made unique. The uniqueness comes in the surprise juxtaposition. We've heard all these phrases before, but never in this particular combination. Again, meaning is not so much constructed as restricted, with fine-tuned precision. A series of denotations and evocations in sequence systematically reduces the listener's ambiguity, as they process rapidly, and the potential meanings coalesce into one particular world, denoting one particular event, including the twist. (Note: the most work this joke has to do, linguistically, may be in the opening. I've tried variations on this one dozens if not hundreds of times, and when I leave out "Did you hear the one about", the punch line sometimes leaves them hanging. In other words, you have to set up that this is going to be a joke! Apparently golden retrievers and cowboy movies aren't well known for being used in comedy. However, with the first line included, or perhaps with people who know me as a jokester, the punch line rarely fails to deliver - laughs and/or groans, that is!)

In many ways none of this is news to our understanding of human communication, but the innovation in terms of literary and language theory is that instead of looking for "the loaded word" which connects with the twist, we recognize that *all* words in an effective composition are designed to contribute, not just to the 'punch line' but to a strategic, even a systematic, sentence-wide program of reducing ambiguity by cross-referencing ambiguities.


All the words must be checked against one another, while considering meanings, in sequence, before the last word can fly in and take all the glory. Even with normal sentences that don't appear to have such a big 'twist', the last word can be fairly predictable, but it still ties up the meaning. Thus, all last words in sentences (or phrases, or clauses) perform this type of function, but some last words get less glory than others.

There's a grammatical corollary here, also. Punctuation doesn't so much indicate a pause for breath or style as signal when to pause and compile the most immediate unit of meaning, or when to stop and re-compile several units as one. Alas, the period will never get as much glory as that crucial last word!

Here's a common experience summed up in a well-known sarcastic saying. It goes, "How come you always find something in the last place you look?" We recognize the absurdity alongside the familiar emotion. Finding something after much exasperated searching does feel that way, producing that Aha! moment in a way that feels more dramatic than if you hadn't spent so much time looking fruitlessly in all those places at first. Except that's just it precisely.


You didn't look fruitlessly in all those other places. You concluded, sequentially, that each of those other places was not the desired location. Thus, revelation arrives not by a sudden discovery, but by a gradual process of elimination, which can quickly approach exhaustive proportions. Whatever the proportions, this much is true. In general, the more work goes on during that elimination process, the more profoundly one feels that satisfying surprise at the end. You thought it was going to be in all these other places, but it's here, and you didn't see it, but it seems so obvious now.

So goes the twist sentence. You thought the meaning was going to be all of these other things, but it's this, and you didn't see it, but it seems so obvious now. Like the 'fruitless search', the more work being done by all the words being cross-referenced, the greater the impact of a twist at the end. But - and this is vitally important - the twist both is and isn't the thing, at least not like we think of twists. That is, the twist may almost always be there, but it rarely has to be something incredibly special.

Observe:
Jack and Jill go up a hill.
Jack and Jill go up a ladder.
Jack and Jill go up to bed.
Jack and Jill go up the org-chart.
Jack and Jill go up the meter.
Jack and Jill go up the ante.
Jack and Jill go up in flames.

Here, it's easier to see how the clarifying power of the final word always takes effect in reverse. Again, the period is a pause to compile. In these elementary examples, note how the meaning of "go" and "up" changes based on whatever comes next. Even the context of going up a ladder conjures a dramatically different situation than going up a hill. Further, if we add "go up the ladder" you might mentally insert 'corporate' before 'ladder'. This is not merely elementary.
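If it helps to see that 'pause to compile' idea written out, here is a trivial Python sketch; the sense labels are inventions of mine for this example, nothing more.

    # Toy illustration: the sense of "go up" is only settled retroactively,
    # once the words after it arrive. The sense labels are invented for this example.

    SENSE_OF_GO_UP = {
        "a hill":        "physically climb",
        "a ladder":      "physically climb (with 'corporate ladder' hovering nearby)",
        "to bed":        "retire for the night",
        "the org-chart": "get promoted",
        "the meter":     "increase a measured amount",
        "the ante":      "raise the stakes",
        "in flames":     "be destroyed",
    }

    def compile_clause(clause):
        """At the period, look back and resolve what 'go up' meant this time."""
        for ending, sense in SENSE_OF_GO_UP.items():
            if clause.endswith(ending):
                return f"go up {ending} -> {sense}"
        return "sense of 'go up' still ambiguous"

    for ending in SENSE_OF_GO_UP:
        print(compile_clause(f"Jack and Jill go up {ending}"))

The lookup only happens at the end of the clause, which is the whole point: the final words reach backwards and settle what "go up" was doing all along.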

What's most instructive is recognizing what all this implies. All language begins in ambiguity, and the progress of working towards clarity is actually negative, rather than positive. All language works together in varying juxtapositions, constricting meanings both ahead and behind, meta the linear sequence, but in order to communicate more precise meanings the work being done is not constructive so much as reductive.

Sentences are built but meaning is sculpted.

In theory, this implies a heavy role for the writer. In practice, of course, the reader's role is as important as the writer's, if not much more so. As they say, "Ginger Rogers did everything Fred Astaire did, going backwards in high heels." In all the strategy of composition, it is ultimately the reader who does the hard work of reducing ambiguities. There is much more to be said here about the reader's positive role.

On the flip side, however, an overly subjective approach to reader-theory destroys the whole game. If readers create new meanings after the writer has finished composing, then those new meanings were not available to the writer (as part of the collective pool of all meanings shared by their culture or sub-group of language users), and thus a fully reader-centric approach to "meaning" is, by definition, a deliberate sabotage of authorial intent.


On the other hand, both writer and reader know that language is always evolving. Both writer and reader know that the writer is capable of coining new phrases. Indeed, an enjoyable writer will often invent neologisms and neophrasisms as well, although the experienced reader knows that this type of surprise meaning construction will generally be rare in most compositional efforts. Either way, the reader who leans heavily toward creative interpretation in meaning "construction" has definitively dropped all respect for the writer as strategist, and for the dynamic of composition itself.

Composition itself, as this theory now holds, works by strategy. Without strategic reduction of ambiguities in language, there is no possibility of communication between two persons. Thus, overly subjective or creative readings can be valid as interpretive exercises, or perhaps even, quixotically, as defiantly personal affirmations, but when the reader divorces the writer she destroys the text as composition. In a real sense, it remains true that "All meaning is constructed", but strong-minded readers should also grapple with "All text is composition" and "All communication is reductive."

In short: please do not attach any additional meanings to my words, because the whole point is that I was busily trying to whittle them down, for your sake.

And yet, there is a fundamental problem remaining.

Creative readings become inevitable whenever compositions are less than completely effective at reducing ambiguity. Of course, this describes all writing, at times.

Here is where the rubber finally meets the road.

So far, this theory has implicitly described the way in which "good" writers communicate effectively and the way in which "good" readers follow the appropriate cues in "making meaning" successfully. Ah, but who is a "good" writer? Everyone sometimes, but nobody always. Therefore, in practical terms, the real challenge is not what to do when ambiguity persists. The real challenge is what to do when a writer is unclear. Technically, that should say "when writing is unclear"; although this does not often describe whole works of literature, it does often describe portions and snippets and phrases within literary works.

Quite often, compositions show consistent patterns in their attempted strategy, however inconsistently effective that strategy may be. The most basic axiom this 'Ambiguity Theory' has to offer about Literature is that writers attempt to be clear by reducing ambiguity, and that any persistent ambiguity may indicate a point where the composition needed additional work, but it also indicates a moment when the composer expected the opposite, that is, expected the reader to need no help there. In short, patterns of persistent ambiguity may, themselves, suggest the readers' path towards clarity.

To underscore the point a bit: Just as there is no Santa's list of naughty or nice little children, for all are both at times, so also there is no way to divide writers between "good" and "bad", and there is no way to judge units of language as objectively "clear" or "unclear" - at least, not in a Boolean sense. If this theory only worked for "good" writing, then it would be no theory at all. Rather, perhaps it would not even be necessary. (!) To illustrate, we may recall that the most frequently misconstrued book in western civilization is widely believed to have been written by God, and there is probably no theory of "authorial intent" which can square that paradox objectively. (!!)

Where, then, is the "good writing", and how do we judge portions of it to be relatively clear or unclear? In one sense, there is none and we cannot. In a more practical sense, however, we may have some graspable handles on this problem, right in front of our faces.

What we need is a method for measuring - comparatively, if not independently (although, according to Physics, all measurement is technically comparative and none is technically independent; I mean here to draw a contrast with the conventional sense of how people measure things, in practice, versus, say, how we measure people, which is by comparison to other people) - just how often any given writer appears to be clear or unclear.

This, at last, may present a practical algorithm for readers. Are there any patterns to notice in the way a text leaves some terms unexplained while providing other references with (alternately) minimal or excessive amounts of expository attention?

Given that all writers vary somewhat in terms of how effectively they provide readers with clarity, or 'reduce ambiguity' as we can now say, the best way of understanding a given writer should be to study their most ambiguous elements first of all, gather observations and draw tentative conclusions if possible, and then apply those discoveries as a comparative standard for recognizing and interpreting less ambiguous elements within the same work.
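As a first crude pass at that kind of comparison, here is a Python sketch. Everything in it is an assumption made for illustration: the sample sentences are invented, average sentence length stands in (very roughly) for 'amount of exposition', and real work would need real texts and far better tools than a regex sentence-splitter.

    import re

    def exposition_scores(text, references):
        """For each reference, average the length (in words) of the sentences
        that mention it -- a crude proxy for how much explaining it receives."""
        sentences = re.split(r"(?<=[.!?])\s+", text)
        scores = {}
        for ref in references:
            lengths = [len(s.split()) for s in sentences if ref.lower() in s.lower()]
            scores[ref] = sum(lengths) / len(lengths) if lengths else None
        return scores

    sample = ("She took the train down from Poughkeepsie, a mid-sized city on the "
              "Hudson about seventy miles north of Manhattan. In New York, everything "
              "was loud that week. Back in Poughkeepsie, the river had been quiet.")

    print(exposition_scores(sample, ["New York", "Poughkeepsie"]))
    # {'New York': 8.0, 'Poughkeepsie': 13.5}

A higher score for "Poughkeepsie" than for "New York" would suggest, comparatively, that the writer felt readers needed more help with the former; that kind of differential, repeated across a whole work, is the pattern being sought.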

Wherever meanings can be exhaustively catalogued - which may not be very often - then exhaustive cross-referencing may be possible, perhaps by computer. In all fairness, a full application of this seems completely unattainable for most words/meanings in any language, but a moderate application may be somewhat more feasible for certain categories of meaning than others.

For starters, historical information may be of some use here; if a writer shows by greater ambiguity which historical references he expects his readers to need no help in remembering, then we might ask - Where by comparison does the writer spend more labor, attempting to help the reader recall, or (alternatively) attempting to help the reader reframe particular facts and to suggest opinions to her? Where does he work less, and where does he work more? In the more laboring passages (note: I do not mean 'laborious'), we will have to judge: is this verbal labor sufficient to identify and introduce, or does it seem more characteristic of what is modernly called 'spin'? Is the amount of explanation being provided for some historical reference unduly dissimilar to the amount provided for a related reference, which was left completely ambiguous (i.e., with total confidence of reader recognition)?

Depending on how we answer these questions, we might well discover what the reader "knows" (or perhaps, remembers) and when the writer is trying to reframe in some fashion, to clear up popular misconceptions or to push an agenda (whether personally or narratively driven), as opposed to when the writer is merely trying to inform ignorance. (The greater bulk of all literature, one suspects, pursues by far the less noble endeavor. I don't merely want you to know what I know. I want you to see as I see. If I have to inform, it becomes harder to spin. Spinning works best when there is a shared experience to start from. Thus, we should expect writers to assume that readers know a great deal. It's only how they know, and what they know, that are questions for us.)

Still with regard to historical references: Even in places where we lack external corroboration (or lack additional information that bears against some apparent non-information in the composition being studied) we may be able to delineate patterns that show what is substantially explained, versus what is substantially unexplained, versus what perhaps seems more "spun" than explained. In turn, all of this might begin to show how the writer's compositional mind was working, strategically, at least some of the time.

If we find success via this method for discerning historical meanings, we might then proceed to more esoteric meanings that convey 'themes', ideologies and so forth. The kind of trope (irony, metaphor, etc.) should not be the determinative difference; the accessibility of the meanings should.

For another example, let's consider geographical references. If a modern writer says "New York" it may remain completely ambiguous, unless he wishes to draw out particular aspects of New York, to highlight or refresh particular connotations in the popular awareness of "New York". Alternatively, if a writer says, "Yonkers" or "Poughkeepsie" or "Oneonta", the burden of necessary explanation would probably rise. Naturally, the most efficient writers would find ways both to inform and to spin simultaneously, which also enhances engagement for differently informed readers all at once, and the more pedantic writers (or those writers deliberately aiming at lower levels of readers) might explain before proceeding to spin. Nevertheless, researchers in some post-apocalyptic library in the far future would likely be able to determine, comparatively, that the burden of reducing ambiguity fell disproportionately on the less familiar locations. Even if the state and island of New York were completely obliterated (in this hypothetical future), the ubiquity of that term, "New York", and (more importantly) the high levels of ambiguity that various writers felt comfortable allowing for that term, would naturally testify as to the familiarity that pre-apocalyptic readers were assumed to have had with the term, "New York". The post-apocalyptic critic could then proceed to consider how much literal exposition "Yonkers" and "Poughkeepsie" and "Oneonta" received, comparatively. And so forth.

This results in the kinds of observation that have been obvious in ancient studies, at the times when they've been obvious.

What I am wondering about in this theory is whether this can be made systematic, algorithmically. 

Now, let's try and pull this all together.

Instead of focusing primarily on looking for 'unknown unknowns' (or, more accurately, worrying about not knowing when we're missing a hidden meaning and thus a hidden connection) we might gain more ground by beginning with 'known unknowns', that is, identifying the most blatant ambiguities across one piece of literature and using those as a sort of 'meaning map', detailing what types of information the writer assumed (whether thoughtfully or tacitly) that the reader would also assume. 

Such a catalog, or meaning map, built on the most ambiguous aspects of a text, could be helpful in discerning the strategic purpose of less ambiguous phrase work, whether that might be to introduce completely new information, or to redirect the readers' thoughts about familiar information, or perhaps to do both at one time. Again, it should be the comparative patterns of one writer within one work (or across multiple works) that can reveal what a writer most likely assumed readers to recognize, to know, to remember, to varying degrees.

The basic idea is to begin with a text, analyze it all throughout, and consider what types of reader knowledge (or memory) this writer went about assuming, in general, before finally going back to review individual statements. The basic hope is that we might determine, at least, whether some phrase of dubious clarity has any parallel in linguistic construction or in topical similarity, elsewhere, that can reveal the more likely angle of the phrase under scrutiny, whether: to inform afresh, to explain known curiosities, to reframe the familiar, or to ironize (play on) the familiar. Note that all of these angles can be for various purposes, whether: rhetorical, narrative or ideological.
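To show what 'systematic' might look like, here is a skeleton in Python. It leans on the same crude 'exposition score' idea sketched earlier, and every number, threshold and label in it is an assumption made up for illustration, not a finished method.

    # Skeleton of the reading procedure described above, under the (large) assumption
    # that "amount of exposition" can stand in for "assumed familiarity".
    # Scores, thresholds and labels are invented for illustration.

    def build_meaning_map(scored_references):
        """scored_references: {name: average exposition score, or None if unmentioned}.
        Split the names into what the writer seems to assume vs. what gets explained."""
        present = {k: v for k, v in scored_references.items() if v is not None}
        if not present:
            return {"assumed_familiar": [], "explained": []}
        baseline = sum(present.values()) / len(present)
        return {
            "assumed_familiar": sorted(k for k, v in present.items() if v <= baseline),
            "explained":        sorted(k for k, v in present.items() if v > baseline),
        }

    def likely_angle(reference, meaning_map):
        """Guess the writer's likely angle for one reference, relative to their own habits."""
        if reference in meaning_map["assumed_familiar"]:
            return "probably reframing, or playing on, something readers already know"
        if reference in meaning_map["explained"]:
            return "probably informing afresh (or spinning while explaining)"
        return "no comparative evidence either way"

    scores = {"New York": 4.0, "Poughkeepsie": 19.0, "Yonkers": 23.0, "Oneonta": None}
    meaning_map = build_meaning_map(scores)
    print(meaning_map)
    print(likely_angle("New York", meaning_map))

The output is only as good as its inputs, of course; the hope is simply that the writer's own comparative habits, rather than the critic's speculation, set the baseline.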

All of this contrasts with the opposite method: speculate, fill in perceived "gaps", and then put it all together with a semblance of objectivity.

Clear writing reduces ambiguities through precise cross-referencing. Unclear writing perhaps attempts this but fails at reducing precisely enough, for whatever reason. The critical problem of bad writing is assuming too much. The critical problem of good writing is assuming just enough. No one writer is perfectly "good" or "bad", but many writers display a consistency of technique and ability across individual works, for the most part. Comparing the relative ambiguities allowed to remain in a single literary piece may be the best way to determine precisely how much is being left "in between the lines".

This is all I can say without further experiment.

Look for an application of this theory to the Gospel of Matthew, as soon as I'm able.


Thanks for reading. I know this was somewhat repetitive, but I sure hope it was clear!

"If I have ever made any valuable discoveries, it has been owing more to patient observation than to any other reason."

-- Isaac Newton