Augmentology "...is a concise manual of reality for our digital age."

Mark Hancock, _Augmentology: Interfaccia Tra Due Mondi_

[Sponsored by The Ars Virtua Foundation/CADRE Laboratory for New Media]

In 2008, several articles here at Augmentology examined the concepts of Synthetic Presencing and Synthapticism. Both concepts are part of a theoretical framework that attempts to explain developing cultural > augmentological patterns. Presencing embodies a rethink of conventional entertainment modes:

Fiction and non-fiction classifications are designed to map to boundaries of known forms [think: cinema, literature, television and music]. They are designed to provoke audience responses both introspectively and externally. Current synthetic practices are refashioning this entertainment base via the perpetuation of types of unintentional and deliberately augmented recreation. These recreation types are reliant on immediacy of response, play, and Pranksterism. They employ Sandboxing, Gonzoism and spontaneous engagement. This type of entertainment is termed _Presencing_. Presencing involves loose clusters of pursuits that evolve in, or are associated with, synthetic environments. Examples include the Streisand Effect, Supercutting, Flashmobbing, the Slashdot Effect, Geohashing, Image macro generation and Internet meme threading…Presencing showcases accidental or reflexive entertainment elements where the fictional/non-fictional divide is erased; associated validity qualifiers are also removed and reconceptualised. Amateur production is equated with valued expression. Presencing also offers adaptive potential for augmented attempts at mediating geophysical constraints.

The complementary concept of Synthapticism involves:

…Crowdsourcers [who] produce clusters of user-mediated data through surges of concentrated attention… Synthetics display attentional surges appropriate to synthaptic shiftings. Synthetic environments operate in accordance with this surge potentiality, with users adopting platforms that offer a contemporary catering for the relevant surge…Synthapticism produces unprecedented connections between synthetic participants. Adjunctive relationships are constructed via Identity interfacing and cushioned by support networks with a comparable emotional weighting to those found in traditional sociocentric structures [acquaintance>friendship>family>community]. Synthaptic communication may appear as fractured or trivial to those not connected synthaptically…

One contemporary example of a Presencing/Synthaptic Campaign centres on a PR-created character called “The Old Spice Guy” or @OldSpice. This campaign, which makes extended use of social media > network dynamics, initialized with a Synthaptic threading system directly developed from conventional advertising:

Anything is possible when you smell like an Old Spice man and our hero, Isaiah Mustafa, is back to illustrate just a few of the amazing things that an Old Spice man can do. The latest effort is a fully integrated campaign with TV, print and digital executions, targeted at both men and women.

On July 13th 2010 [USA Portland time] the Old Spice brand extended this “personalised” social presence/character via synthetically dependent platforms including YouTube, Twitter, Reddit, and Facebook. “The Old Spice Guy” character urged cross-platform users to AMA [a popular thread format on several boards, short for "Ask Me Anything"]. The humorous > quirky responses included almost instantaneous @OldSpice micro-video answers to selected users, including meta-referencing by Isaiah Mustafa:

One response that encapsulates the Synthaptic aspects of this campaign began with the user @Jsbeals asking @OldSpice to make a marriage proposal on his behalf:

@Jsbeals later tweeted that his girlfriend had accepted the proposal:

@Jsbeals: @OldSpice SHE SAID YES!!!! #OldSpice @Jsbeals

…and then changed his Twitter biography to:

The Old Spice proposal was real. Thank you Old Spice for helping me with this.

A second response set resulted in the sending of roses [referenced in a micro-video response] to Alyssa Milano. A third instance that highlights the episodic > cross-platform > Synthetic Presencing aspect is summed up by Twitter user @rob_sheridan:

Presencing In Action: @OldSpice + Reddit Contributors

Each episodic response illustrates the flattening of traditional entertainment factors [think: @OldSpice responding to "everyday" users as well as more established Hollywood/Internet celebrities]. The campaign realigns passive entertainment construction and distanced absorption via real-time Immediation and Regenerative Comprehension. The Old Spice Guy Synthaptic threading remains ongoing, with replies continually being posted via YouTube.

Part 3: The Crystal Ball

Film Still, The Wizard of Oz.

<continued from Part 2: Infinite Summer Afternoons>

In the 1939 film version of The Wizard of Oz, Dorothy visits Professor Marvel and has him read her fortune from his crystal ball. He asks her to close her eyes and takes the opportunity to “read” the belongings in her basket. From these artifacts, Professor Marvel pieces together a story based on his intuition of the meaning of the objects and the context of Dorothy’s visit. Professor Marvel reads Dorothy’s aura by diving into her metadata, and delivers his observations in dramatic and persuasive tones.

Now imagine if Dorothy visited Professor Marvel in the 21st century. His crystal ball is a web-ready mobile device capable of scanning Dorothy’s possessions, clothes, face – maybe even her DNA. This cloud of data is cross-referenced and interlinked with Dorothy’s online profiles and he’s able to quickly conjure up an extremely detailed impression of Dorothy’s past, present and future. At the very least, he’d spot Auntie Em in Dorothy’s Flickr account and come to similar conclusions about Dorothy’s family situation as he does in the film.

As aurec technology improves, it will know more and more about us; it will become better at predicting what we do and how we prefer to do it. It will enable us to customize our interactions with everything that surrounds us while also allowing us to share these preferences with others. Search is the essential experience of the web (witness Google). The web asks us “what are you looking for?” every time we use it. To understand the potential of aurec, we should recognize that it will reduce the importance of the question/answer relationship posed by the web and open up an environment of ambient data.

It is my hope that shared aurec experiences will have positive effects on our relationships with other people, allowing us new degrees of emotional intimacy and mutual understanding. Aurec has the potential to change our relations with natural and urban environments by revealing otherwise hidden information on a bespoke basis. This could lead to increased corporate and governmental transparency/accountability as the norm shifts to a sharing paradigm as opposed to hiding data. The more we shift our attention away from gimmicky iPhone apps and focus on the broader ontological implications of aura recognition, the better aurec's chances of actualization will be.

Special thanks to NotThisBody for brilliant insights and reflections while writing this article.

Part 2: Infinite Summer Afternoons

Images from Initiations-Studies II by Panos Tsagaris
Images from Initiations-Studies II by Panos Tsagaris with Kimberley Norcott

Having summarily rejected the term augmented reality for the reasons listed here, I’ll now propose alternate terminology to describe the phenomenon. The following elements contribute to this formation:

  • The mobile web will enable us to become aware of metadata that was previously obscured in day-to-day life.
  • Many current AR applications pride themselves on exposing metadata relationships that are not as readily apparent as traditional urban indicators (think: fashion).
  • Contemporary visions of AR as something that will merely allow us to hold up our smartphones and look through an AR “window”.

This process of metadata revealing is termed “aura recognition” (or aurec for short). In a future post I will address what I see as shortcomings of visual interfaces for aurec.

In his essay The Work of Art in the Age of Mechanical Reproduction (1935), Walter Benjamin makes the following observations regarding aura:

If, while resting on a summer afternoon, you follow with your eyes a mountain range on the horizon or a branch which casts its shadow over you, you experience the aura of those mountains, of that branch. This image makes it easy to comprehend the social bases of the contemporary decay of the aura. It rests on two circumstances, both of which are related to the increasing significance of the masses in contemporary life. Namely, the desire of contemporary masses to bring things “closer” spatially and humanly, which is just as ardent as their bent toward overcoming the uniqueness of every reality by accepting its reproduction. Every day the urge grows stronger to get hold of an object at very close range by way of its likeness, its reproduction.

Certainly – since 1935 – these two “social bases” identified by Benjamin have reached their apex in contemporary digital life. Never before have we had as much convenience in bringing things – whether physical objects or information – into our immediate proximity (think: Amazon, Ebay, Google). Neither have we had the experience of such widespread meme and brand propagation in our physical environment (e.g. shopping malls, international airports, and fast food franchises). Benjamin continues:

Unmistakably, reproduction as offered by picture magazines and newsreels differs from the image seen by the unarmed eye. Uniqueness and permanence are as closely linked in the latter as are transitoriness and reproducibility in the former. To pry an object from its shell, to destroy its aura, is the mark of a perception whose “sense of the universal equality of things” has increased to such a degree that it extracts it even from a unique object by means of reproduction. Thus is manifested in the field of perception what in the theoretical sphere is noticeable in the increasing importance of statistics. The adjustment of reality to the masses and of the masses to reality is a process of unlimited scope, as much for thinking as for perception.

This “sense of the universal equality of things” is the hallmark of the web. All searches are, ostensibly, equal before Google. Yet, among the ruins of this auric destruction, the web is simultaneously imbuing our lives with all kinds of unique and permanent phenomena. These phenomena make up the essence of our digital auras; auras created less by physical objects than by the specificity of context, relationship and juxtaposition. Aura Recognition is the means by which we access these phenomena.

Consider for instance how unique it is to geophysically meet someone you’ve only previously known online. In the best case scenario, aurec will help us make sense of the emotional significance of digital phenomena in ways which are meaningful and helpful. Location based services (think: GPS technology) provoke new experiences which are just as dependent on proximity as Benjamin’s proverbial summer afternoon.

<to be continued in _Part 3: The Crystal Ball_>

Part 1: Absurd Assumptions


As many opinion leaders have noted, Augmented Reality (AR) may very well be the next evolutionary step in bringing the metadata of the web into our day-to-day lives. Some suggest that AR technology may even surpass the Web in its sustained impact on culture.

While I whole-heartedly agree with this observation, the use of the term “Augmented Reality” may actually impede any progress forged by these technologies, especially in terms of broad/mainstream acceptance.

The first reason why the phrase “Augmented Reality” may impede the cultural uptake of associated technologies is its use of the word “augmented” – meaning to increase or make larger. AR enthusiasts seem comfortable implying that this new technology is somehow the first to augment or enhance our reality. This seems absurd, as human societies have a well-documented history of using biochemical technology to augment reality, in the tradition of psychotropic plant-aided shamanism. The innovation of written language was a concrete visualization of reality-augmenting metadata. The city may also be considered an extension of reality, given that cities are highly constructed frameworks of architecture, roads, sewers, and electrical and telephone lines. It seems more relevant to use a word that accurately describes the idiosyncratic peculiarities of a mobile web-ready experience.

My second reason for objecting to the AR term stems from the use of the word “reality” in relation to what are (in most cases) mobile-web applications. This usage implies that other computer applications are not affecting reality, or at least are not affecting it sufficiently to be labeled accordingly. This also seems an absurd assumption; the host of software that has prevailed throughout the history of computing has had an effect on reality too (this, of course, is a total understatement). If it were not for preceding software which has already changed our reality, these so-called “augmented reality” applications would not even exist. Furthermore, the use of “reality” in this context indicates that there is one concrete reality which we are in the process of altering with specific technology. Yet each of us has our own subjective “reality” experience, with some physicists even postulating theories of a holographic reality. While standards for augmented reality ought to be open to ensure accessibility by any mobile web-enabled device, it is a fallacy to interpret these standards as a consensus on reality itself. This new technology is poised to allow us to customize and tweak our own experience of reality like never before, as well as the “reality” we share with others.

<to be continued in _Part 2: Infinite Summer Afternoons_>

“Augmented Reality”. It doesn’t quite roll off the tongue in a manner that could be described as euphonious. The term sounds lopsided and clunky. Definitely not two words that I find compelling or evocative. Those two words are the literary equivalent of a blunt instrument: slow, heavy, and strong. In fact, the term feels like a badly written movie that goes straight to DVD (and that you’ll eventually find in the bottom bin at a Wal-Mart sale for $2.99). You can’t even make a workable abbreviation out of it; if you say “AR” in the wrong crowd, they will think you are referring to Accounts Receivable or Arkansas.

While we shouldn’t judge a book by its cover (or even by the movie “based on the book”), we likewise shouldn’t judge a technology based on its name. In a similar vein, we shouldn’t be quick to discount augmented reality based on early examples/demonstrations that appear gimmicky. It is easy to miss the full earth-shaking, mind-rattling, jaw-dropping, paradigm-shifting potential of future AR as both the technology and the industry mature. We are at the dawn of something new: it is almost impossible to understand the full scope and impact of what is coming. In many respects, it’s as if we have discovered a new country full of promise and hope. This “AR country” offers enormous potential for change, as well as many associated risks.

And just what is this augmented reality stuff anyway?

Augmented Reality in its most basic form is the blend of the real and the virtual. Beyond this, there is some contention as to what AR is or isn’t. There’s also the issue of whether any given example could fall under the categories of Mixed Reality, Virtuality, or something else entirely. We could construct various models and/or other litmus tests to determine if something should be referred to as AR, or we could easily adopt any of the more common definitions.

For now, let’s just keep it simple and a little broad. AR is the blend of the real and the virtual, which can be experienced through a number of modes or modalities. It usually requires a digital video camera, a monitor, and either a printed marker or a pre-defined image that is tracked (effectively replacing the marker). This definition is particularly suited to the past and present state of AR technology.
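The marker-based setup described above rests on one geometric step: once the four corners of a printed marker are located in a camera frame, a 3×3 homography maps points from the marker's canonical square into the image, which is how virtual content gets anchored to the marker. The sketch below illustrates that step in pure Python under invented corner coordinates; a real AR system would use a tracking library such as OpenCV for detection, but the underlying math is the same.

```python
# Minimal sketch of marker-based AR anchoring: estimate the homography
# from a marker's canonical corners to its detected image corners, then
# use it to place a virtual point. Pure Python, no tracking library.

def solve_linear(a, b):
    """Solve a·x = b (n×n system) by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def homography(src, dst):
    """Direct Linear Transform: 3×3 homography from four point pairs (h33 = 1)."""
    a, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve_linear(a, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def project(h, x, y):
    """Map a point from marker space into image space via homography h."""
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)

# Canonical unit-square marker corners, and where the camera "saw" them
# (hypothetical pixel coordinates for illustration):
marker = [(0, 0), (1, 0), (1, 1), (0, 1)]
seen = [(120, 80), (260, 95), (250, 230), (115, 215)]
H = homography(marker, seen)
# Anchor a virtual object at the marker's centre in the image:
cx, cy = project(H, 0.5, 0.5)
```

A pre-defined tracked image works the same way: its known reference corners simply stand in for the printed marker's square.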

In the near future, AR will incorporate geolocative, spatial, contextual, interactive, semantic, mobile, massively multi-user, and pervasive technologies. In the long term, AR will evolve into a platform that is extraordinarily dynamic and immersive. The popular/primary interface will include a pair of wearable displays with transparent lenses, similar to a heads-up display. The form of these wearable displays will be nearly identical to a contemporary pair of Ray-Bans or Oakleys.

This interface will be linked (hopefully wirelessly) to a mobile internet device that is likely to be clipped to a belt or sewn into clothes.

So what does all this mean? Why am I constantly going on about the blue sky potential of mobile augmented reality? With all combined AR elements, we will effectively be able to create an experience that is like a rudimentary Star Trek Holodeck. Interactive virtual objects, information, and life sized avatars will blend with the world around us:

…and will appear like semi-transparent holograms or digital ghosts. We will own virtual pets. Data visualizations will exist for everything from directional floating arrows to information tags anchored to every object (including us). 3D movies will be completely redefined. MMORPGs will be played in public parks. Doctors will see patients overlaid with X-Ray and MRI information. Education will come alive in the classroom….

There are thousands of potential applications and mobile AR experiences that will change nearly every aspect of our lives. A media revolution will occur; we will be thrust into a new information age where we are no longer chained to bulky PCs, heavy laptops and/or power hungry monitors.

This vision is one that I am pursuing through my company, Neogence Enterprises. Although augmented reality has existed for some time – the list of true early pioneers, innovators, and academics is long – Neogence wants to be at the forefront of taking AR to a new functional level. It may be a few years before our full vision is realized. There are plenty of technical hurdles still to overcome; in the meantime Neogence will aggressively push ahead one step at a time, building up piece by piece. If all goes well, we will be launching the first commercial version of a global mobile augmented reality network on October 10th, 2010 at 10:10am Eastern. We plan on releasing bits and pieces along the way, with some closed beta testing in the Spring. We want to build this emergent technology correctly and create something that is infinitely extensible and expandable. We intend to focus on the end-user experience and empower you (the user) to create wonderfully original applications and content.

Join us on our journey and help us build the future. In the next week or so, Neogence will open mirascape.com. We will allow for closed beta registration in the Spring. I have some special plans for the first 100,000 unique sign ups when we launch. The future awaits…