Part 3: The Crystal Ball
Film Still, The Wizard of Oz.
<continued from Part 2: Infinite Summer Afternoons>
In the 1939 film version of The Wizard of Oz, Dorothy visits Professor Marvel and has him read her fortune from his crystal ball. He asks her to close her eyes and takes the opportunity to “read” the belongings in her basket. From these artifacts, Professor Marvel pieces together a story based on his intuition of the meaning of the objects and the context of Dorothy’s visit. In effect, Professor Marvel reads Dorothy’s aura by diving into her metadata, delivering his observations in dramatic and persuasive tones.
Now imagine if Dorothy visited Professor Marvel in the 21st century. His crystal ball is a web-ready mobile device capable of scanning Dorothy’s possessions, clothes, face – maybe even her DNA. This cloud of data is cross-referenced and interlinked with Dorothy’s online profiles, and he is able to quickly conjure up an extremely detailed impression of Dorothy’s past, present and future. At the very least, he’d spot Auntie Em in Dorothy’s Flickr account and come to conclusions about Dorothy’s family situation similar to those he reaches in the film.
As aurec technology improves, it will know more and more about us; it will become better at predicting what we do and how we prefer to do it. It will enable us to customize our interactions with everything that surrounds us while also allowing us to share these preferences with others. Search is the essential experience of the web (witness Google); the web asks us “what are you looking for?” every time we use it. To understand the potential of aurec, we need to recognize that it will reduce the importance of the question/answer relationship posed by the web and open up an environment of ambient data.
It is my hope that shared aurec experiences will have positive effects on our relationships with other people, allowing us new degrees of emotional intimacy and mutual understanding. Aurec has the potential to change our relations with natural and urban environments by revealing otherwise hidden information on a bespoke basis. This could lead to increased corporate and governmental transparency/accountability as the norm shifts to a paradigm of sharing data rather than hiding it. The more we shift our attention away from gimmicky iPhone apps and toward the broader ontological implications of aura recognition, the better aurec’s chances of actualization.
Special thanks to NotThisBody for brilliant insights and reflections while writing this article.
Part 2: Infinite Summer Afternoons
Images from Initiations-Studies II by Panos Tsagaris with Kimberley Norcott
Having summarily rejected the term augmented reality for the reasons listed here, I’ll now propose alternate terminology to describe the phenomenon. The following elements contribute to this formation:
- The mobile web will enable us to become aware of metadata that was previously obscured in day-to-day life.
- Many current AR applications pride themselves on exposing existing metadata relationships which are not as readily apparent as traditional urban indicators (think: fashion).
- Contemporary visions of AR imagine something that will merely allow us to hold up our smartphones and look through an AR “window”.
This process of metadata revealing is termed “aura recognition” (or aurec for short). In a future post I will address what I see as shortcomings of visual interfaces for aurec.
In his essay The Work of Art in the Age of Mechanical Reproduction (1935), Walter Benjamin makes the following observations regarding aura:
If, while resting on a summer afternoon, you follow with your eyes a mountain range on the horizon or a branch which casts its shadow over you, you experience the aura of those mountains, of that branch. This image makes it easy to comprehend the social bases of the contemporary decay of the aura. It rests on two circumstances, both of which are related to the increasing significance of the masses in contemporary life. Namely, the desire of contemporary masses to bring things “closer” spatially and humanly, which is just as ardent as their bent toward overcoming the uniqueness of every reality by accepting its reproduction. Every day the urge grows stronger to get hold of an object at very close range by way of its likeness, its reproduction.
Certainly – since 1935 – these two “social bases” identified by Benjamin have reached their apex in contemporary digital life. Never before have we had as much convenience in bringing things – whether physical objects or information – into our immediate proximity (think: Amazon, eBay, Google). Neither have we had the experience of such widespread meme and brand propagation in our physical environment (e.g. shopping malls, international airports, and fast food franchises). Benjamin continues:
Unmistakably, reproduction as offered by picture magazines and newsreels differs from the image seen by the unarmed eye. Uniqueness and permanence are as closely linked in the latter as are transitoriness and reproducibility in the former. To pry an object from its shell, to destroy its aura, is the mark of a perception whose “sense of the universal equality of things” has increased to such a degree that it extracts it even from a unique object by means of reproduction. Thus is manifested in the field of perception what in the theoretical sphere is noticeable in the increasing importance of statistics. The adjustment of reality to the masses and of the masses to reality is a process of unlimited scope, as much for thinking as for perception.
This “sense of the universal equality of things” is the hallmark of the web. All searches are, ostensibly, equal before Google. Yet, among the ruins of this auric destruction, the web is simultaneously imbuing our lives with all kinds of unique and permanent phenomena. These phenomena make up the essence of our digital auras; auras created less by physical objects than by the specificity of context, relationship and juxtaposition. Aura Recognition is the means by which we access these phenomena.
Consider, for instance, how unique it is to geophysically meet someone whom you’ve previously known only online. In the best case scenario, aurec will help us make sense of the emotional significance of digital phenomena in ways which are meaningful and helpful. Location-based services (think: GPS technology) provoke new experiences which are just as dependent on proximity as Benjamin’s proverbial summer afternoon.
<to be continued in _Part 3: The Crystal Ball_>
Part 1: Absurd Assumptions
As many opinion leaders have noted, Augmented Reality (AR) may very well be the next evolutionary step in bringing the metadata of the web into our day-to-day lives. Some suggest that AR technology may even surpass the Web in its sustained impact on culture.
While I wholeheartedly agree with this observation, the use of the term “Augmented Reality” may actually impede any progress forged by these technologies, especially in terms of broad/mainstream acceptance.
The first reason why the phrase “Augmented Reality” may impede the cultural uptake of associated technologies is its use of the word “augmented” – meaning to enlarge or increase. AR enthusiasts seem comfortable implying that this new technology is somehow the first to augment or enhance our reality. This seems absurd, as human societies have a well-documented history of using biochemical technology to augment reality in the tradition of psychotropic plant-aided shamanism. The innovation of written language was a concrete visualization of reality-augmenting metadata. The city may also be considered an augmentation of reality: cities are highly constructed frameworks of architecture, roads, sewers, and electrical and telephone lines. It seems more relevant to use a word that accurately describes the idiosyncratic peculiarities of a mobile, web-ready experience.
My second objection to the AR term stems from the use of the word “reality” in relation to what are (in most cases) mobile-web applications. This usage implies that other computer applications are not affecting reality, or at least are not affecting it sufficiently to be labeled accordingly. This also seems an absurd assumption; the host of software that has prevailed throughout the history of computing has had an effect on reality too (this, of course, is a total understatement). If it were not for preceding software that has already changed our reality, these so-called “augmented reality” applications would not even exist. Furthermore, the use of “reality” in this context suggests that there is one concrete reality which we are in the process of altering with specific technology. Yet each of us has our own subjective experience of “reality”, with some physicists even postulating theories of a holographic reality. While standards for augmented reality ought to be open to ensure accessibility by any mobile web-enabled device, it is a fallacy to interpret these standards as a consensus on reality itself. This new technology is poised to allow us to customize and tweak our own experience of reality like never before, as well as the “reality” we share with others.
<to be continued in _Part 2: Infinite Summer Afternoons_>
“There are many worlds and many realities in our universe. When one reality, or one world-view is superimposed on another, it is inevitable that social, economic and cultural problems arise. Hierarchies of worlds are constructs of a bygone era. Ecologies of worlds should guide us in considering our future… We can begin by designing environments that can respond to physical, environmental, or social needs. Not only the needs of human beings, but also of the organisms and elements with whom we share the Biosphere.” – fo.am
Rezzing occurs in the space in-between worlds. Rezzing happens in the moment we switch from one reality to another: where the structure of synthetic worlds is unveiled. We see these spaces appear gradually – textures, alpha channels and audio appear in layers. Forms start as simple grey patterns that morph and evolve via emergent detail. These patterns resolve as final forms that adhere to in-game physics and flop into “place”.
When I begin rezzing – and am between avatars – my body disappears. Then the simple base shape beneath is exposed, with pitch-black skin and bizarre proportions. Finally, my body parts materialise.
I stand naked, staring ahead as my clothes begin to appear, one piece at a time. As the textures of my skin are downloaded, my blurry body is redrawn in photorealistic detail. In Second Life, Linden Lab has added a feature whereby rezzing avatars are surrounded by a cloud whilst forming. This cloud presumably covers the moments of nakedness while an avatar’s clothes are appearing and bare pixel genitalia are exposed.
There is no geophysical equivalent to the act of rezzing. The closest phenomenon is the act of awakening from – or falling into – dreams. When an object or avatar is rezzed in a synthetic environment, its data representation is downloaded from the database into the local client. On screen, a visual “something” is created from synthetic “nothing” – an ontological novelty out of the pure void. This act reveals a flaw in the materiality and persistence of these worlds – or suggests a type of virtual ontology similar to Deleuze’s Spinozan plenum without void.
After encountering the whooshing sound that indicates teleporting, I am dropped into an incomplete world. Often during this phase, my avatar manifests in a falling animation. First, all is sky and water which faithfully glistens with the sun (according to environment settings). Then, distant objects appear. In complex areas this can take minutes as particle scripts initialise and begin to swirl and glow before the details of architecture appear.
During the rezzing process, as the user’s body begins to form, they step into a swirl of affect. This affect may induce feelings of identification with the avatar or revulsion from it. This emotional polarisation may produce a sense of pleasure in seeing or a sense of disjunctive discomfort. The activity of the database creates its own unreproducible order, dependent on the speed of the bytes transferred. Hair or pants/skirt may take minutes to download, the avatar blinking into space with a bald head or exposed thong in the meantime. At this juncture, the avatar hover-stands in an unfolding environment and waits for the expected transactions of the “normal” synthetic world to begin.
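The layered resolution described above can be pictured as a simple streaming simulation. The sketch below is only a hedged illustration in Python – the asset names, byte sizes, and per-tick bandwidth are invented for the example, and real viewers interleave downloads by priority and network conditions rather than draining a fixed queue.

```python
# A minimal sketch of progressive rezzing: assets stream from a
# (hypothetical) server queue, and the avatar's visible state
# resolves layer by layer as the download budget allows.

def rez(assets, bytes_per_tick):
    """Simulate progressive download; yield the list of visible
    layers after each tick. `assets` is a list of (name, size)."""
    visible = []
    budget = 0
    pending = list(assets)
    while pending:
        budget += bytes_per_tick          # bandwidth arrives each tick
        while pending and pending[0][1] <= budget:
            name, size = pending.pop(0)   # next layer finishes downloading
            budget -= size
            visible.append(name)
        yield list(visible)               # what the viewer can draw now

# Invented avatar layers, in download order, with invented sizes:
avatar = [("base mesh", 10), ("skin texture", 40),
          ("hair", 25), ("clothes", 50)]
for frame in rez(avatar, 30):
    print(frame)
# The first frame shows only the bare base mesh; the final frame
# lists all four layers - the "fully rezzed" avatar.
```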
How do we come to understand the resonances, affects and effects of rezzing into synthetic environments? With augmented reality making headline news, can we think of other ways of entering other realities which are not limited to visual modes? What about pain? Sound? Smell? Can Mixed Reality Performances be used to develop and explore these methods of reality-shifting? If we can think of ways of finding spaces between realities, then can we think of the space between realities as similar to the space between genders and sexualities? Could entering a space between realities free us of certain rules, and be a strategy of liberation and transformation? Part 2 will explore these questions.
“Augmented Reality”. It doesn’t quite roll off the tongue in a manner that could be described as euphonious. The term sounds lopsided and clunky. Definitely not two words that I find compelling or evocative. Those two words are the literary equivalent of a blunt instrument: slow, heavy, and strong. In fact, the term feels like a badly written movie that goes straight to DVD (and that you’ll eventually find in the bottom bin at a Wal-Mart sale for $2.99). You can’t even make a workable abbreviation out of it; if you say “AR” in the wrong crowd, they will think you are referring to Accounts Receivable or Arkansas.
While we shouldn’t judge a book by its cover (or even by the movie “based on the book”), we likewise shouldn’t judge a technology based on its name. In a similar vein, we shouldn’t be quick to discount augmented reality based on early examples/demonstrations that appear gimmicky. It is easy to miss the full earth-shaking, mind-rattling, jaw-dropping, paradigm-shifting potential of future AR as both the technology and the industry mature. We are at the dawn of something new: it is almost impossible to understand the full scope and impact of what is coming. In many respects, it’s as if we have discovered a new country full of promise and hope. This “AR country” offers enormous potential for change, as well as many associated risks.
And just what is this augmented reality stuff anyway?
Augmented Reality in its most basic form is the blend of the real and the virtual. Beyond this, there is some contention as to what AR is or isn’t. There’s also the issue of whether any given example could fall under the categories of Mixed Reality, Virtuality, or something else entirely. We could construct various models and/or other litmus tests to determine if something should be referred to as AR, or we could easily adopt any of the more common definitions.
For now, let’s just keep it simple and a little broad. AR is the blend of the real and the virtual which can be experienced through a number of modes or modalities. It usually requires a digital video camera, a monitor, and either a printed marker or a pre-defined image which is tracked (which effectively replaces the marker). This definition is particularly suited to the past and the present state of AR technology.
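The marker-tracking idea above can be sketched in a few lines: once a detector has found the four corners of a printed marker in a camera frame, the virtual content is scaled and translated to sit on top of it. This is a hedged, minimal illustration – the corner coordinates and the `overlay_virtual` helper are invented for the example, and a real pipeline would estimate a full homography or 3D pose from camera intrinsics rather than a simple 2D fit.

```python
# A toy version of marker-anchored AR: fit a 2D similarity transform
# (translation + uniform scale) to a detected marker, then map a
# virtual shape into image coordinates so it "sits" on the marker.

def marker_anchor(corners):
    """Return centre and scale of a detected square marker.
    `corners` is a list of four (x, y) points in image coordinates."""
    cx = sum(x for x, _ in corners) / 4.0
    cy = sum(y for _, y in corners) / 4.0
    # Use the length of the first edge as the marker's scale.
    (x0, y0), (x1, y1) = corners[0], corners[1]
    scale = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return (cx, cy), scale

def overlay_virtual(corners, unit_shape):
    """Map a virtual shape (unit coordinates, centred on the origin)
    into the image, scaled and translated to rest on the marker."""
    (cx, cy), scale = marker_anchor(corners)
    return [(cx + scale * x, cy + scale * y) for x, y in unit_shape]

# A 100 px marker detected with its top-left corner at (200, 150):
marker = [(200, 150), (300, 150), (300, 250), (200, 250)]
# A unit square standing in for rendered virtual content:
ghost = [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]
print(overlay_virtual(marker, ghost))
# The virtual square lands exactly on the marker, centred at (250, 200).
```

As the marker moves in the video frame, re-running the same fit each frame is what keeps the virtual object visually anchored to it.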
In the near future, AR will incorporate geolocative, spatial, contextual, interactive, semantic, mobile, massively multi-user, and pervasive technologies. In the long term, AR will evolve into a platform that is extraordinarily dynamic and immersive. The popular/primary interface will be a pair of wearable displays with transparent lenses, similar to a heads-up display. The form of these wearable displays will be nearly identical to a contemporary pair of Ray-Bans or Oakleys:
This interface will be linked (hopefully wirelessly) to a mobile internet device that is likely to be clipped to a belt or sewn into clothes.
So what does all this mean? Why am I constantly going on about the blue sky potential of mobile augmented reality? With all combined AR elements, we will effectively be able to create an experience that is like a rudimentary Star Trek Holodeck. Interactive virtual objects, information, and life sized avatars will blend with the world around us:
…and will appear like semi-transparent holograms or digital ghosts. We will own virtual pets. Data visualizations will exist for everything from directional floating arrows to information tags anchored to every object (including us). 3D movies will be completely redefined. MMORPGs will be played in public parks. Doctors will see patients overlaid with X-Ray and MRI information. Education will come alive in the classroom….
There are thousands of potential applications and mobile AR experiences that will change nearly every aspect of our lives. A media revolution will occur; we will be thrust into a new information age where we are no longer chained to bulky PCs, heavy laptops and/or power hungry monitors.
This vision is one that I am pursuing through my company, Neogence Enterprises. Although augmented reality has existed for some time – the list of true early pioneers, innovators, and academics is long – Neogence wants to be at the forefront of taking AR to a new functional level. It may be a few years before our full vision is realized. There are plenty of technical hurdles still to overcome; in the meantime, Neogence will aggressively push ahead one step at a time, building up piece by piece. If all goes well, we will launch the first commercial version of a global mobile augmented reality network on October 10th, 2010, at 10:10 a.m. Eastern. We plan on releasing bits and pieces along the way, with some closed beta testing in the spring. We want to build this emergent technology correctly and create something that is infinitely extensible and expandable. We intend to focus on the end-user experience and empower you (the user) to create wonderfully original applications and content.
Join us on our journey and help us build the future. In the next week or so, Neogence will open mirascape.com. We will open closed beta registration in the spring. I have some special plans for the first 100,000 unique sign-ups when we launch. The future awaits…