Immersive Sight Within the Third Space: Augmentation and Spatial Interface in Exhibition Space
Introduction: Immersive Sight
Our field of vision performs continual, multi-tiered number crunching. Bicameral sight is constantly being processed, interpreted, reacted to, and adjusted for focus, with comparisons made throughout. It simply is always running as an immersive, multi-layered interaction of information and movement in a space.
The logical progression of virtual reality is into augmented reality, with smaller lenses and data fit more discreetly and logically layered into one’s natural field of vision.
This has many applications in the traditional museum space. The eye, through the cerebral cortex, processes and contextualizes constantly and at a rapid rate. The key is to allow spatial augmentation to do the same. A fast engine can adjust levels of data and visuals quickly as the participant moves and adjusts their desired experience. This allows another sense of sight.
This concept of immersive sight is key to an integration of elements of virtual reality, augmented reality and exhibition. A viewer can move through a physical location with 3d graphic objects overlaid spatially at key points. These works can also play with the architecture of each wall and space of a specific museum as one gazes at real-world objects in exhibition, at vr graphic works, and especially at a new field of collaborative works designed as a fusion of a real-world work and a vr one, a fusion that only comes into true depth of focus, context and semiotics upon overlay. The viewer can select from layered options for deeper layers of audio, text, triggered site-specific video, and further information on context, history and more. This applies both to creative work and as a way to fashion deeply interactive historical and natural history exhibitions.
This integration will allow 2d and 3d images to trigger at key locations, creating a true integration of vr and location, a new “museum”. It also allows a person to adjust the experience to their desires and interests, personalizing it and allowing a new level of immersion, interactivity and freedom.
These concepts have many applications beyond that of exhibition, but this essay will focus on them in terms of exhibition spaces specifically.
Spatial relationships and context
The immediate example of immersion is that of dipping into a body of water such as a pool.
Once one is under the water, another space or room has emerged and one moves within it. A literal interpretation would also suggest that the deeper one dives, the deeper the sense of immersion, and that it occurs incrementally as one moves. This is not false, but it is a gross oversimplification, and it misses the resonance of space and interaction in the example. The diver is aware that this is a created space (the regular world and its skin of entry/exit remain visible, however abstracted, at the surface of the water). The diver also knows this intuitively, by semantics and by experience, unless they not only have never swum but have never heard of swimming or pools in any capacity. This, therefore, is a sort of spatial immersion similar to augmented reality: the diver is at all times aware of being in two spaces at once, the larger space of the pool, its walls and the physical world, and the space of the water’s sounds, images and sensations.
The “third space”: vr+ar+locative media tropes as integrated immersive spatial interface
There is a third space. It is not the physical world and it is not vr; it is both at once. The use of augmented reality to project information directly into the field of vision as one moves allows this immersive third space. Lens-fed data, delivered into the field of vision based on one’s position in a space, opens an area of spatial data, immersion and experiential interface. Locative media concepts offer methods for developing a spatial interface and spatial augmentation at discrete locations in a space.
There is a great deal of buzz currently about spaces like Second Life, where avatars move through a user-built architectural graphic world. There is an active dialog about the site-specific information and experiential interface of locative media. There is growing development of prototypes and explorations placing virtual objects and recorded audio and images in one’s field of vision as one moves (ar). Something very interesting happens when one considers the possibilities and applications of combining the primary elements of each at once.
As we move in physical space we are already using a vastly complex and intuitive spatial/semantic/linguistic interface. It crunches massive amounts of data at once across different processes, tracking spatial relationships; the meanings and boundaries of what things are in their semantics, past experiences and memory; and changes in perspective and scope, overlaying all these elements seamlessly in a live environment, interface and navigation engine (sight).
This is a key analog to augmentation in a physical space: to how this works in Second Life-type virtual environments, to locative media’s attachment of location-specific added information and context, and to augmented reality’s layering of virtual and stored data into one’s field of vision as one moves.
Locative media is often associated only with gps and the plays it allows on annotation and mapping. Locative media discourse has also touched on other, less obvious areas: space, interaction, immersion, non-linearity in a physical interface, the individual as end author while observing, site specificity as annotation, and the “reading” of a space both overall and at discrete singular locations. These elements can be brought indoors (without gps signal), and the discourse can transpose these key elements and issues into interior augmentation. The points that trigger information in the third space are site-specific trigger points (key to locative media) and thus can be developed as location “hot spots” that form a system and progression for the augmentation of a physical interior space as one moves. Augmentation can be sequenced in a space by increments of distance as well as by specific location and its context.
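As a rough illustration, the “hot spot” idea above can be sketched as a set of trigger zones checked against the viewer’s position. The room coordinates, radii and content names below are hypothetical, not part of any existing system:

```python
import math

# Hypothetical indoor "hot spots": each trigger point pairs a location in
# the room (measured in feet from a chosen origin, since no gps signal is
# assumed indoors) with the content layer it activates.
HOT_SPOTS = [
    {"name": "intro text", "pos": (5.0, 10.0), "radius": 6.0},
    {"name": "archival audio", "pos": (20.0, 4.0), "radius": 4.0},
    {"name": "site video", "pos": (32.0, 18.0), "radius": 5.0},
]

def active_layers(viewer_pos):
    """Return the content layers whose trigger zone contains the viewer."""
    x, y = viewer_pos
    active = []
    for spot in HOT_SPOTS:
        sx, sy = spot["pos"]
        if math.hypot(x - sx, y - sy) <= spot["radius"]:
            active.append(spot["name"])
    return active
```

Calling `active_layers` once per position update is enough to sequence augmentation through a room; overlapping radii would naturally layer several triggers at once.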
Distance as context and immersive interpretation
A sense of perspective and sequence is always shifting in one’s field of vision, as are multiple hierarchical relationships in the visual field. This is true in architecture, in composition, in design, and quite simply in bicameral sight and vision. Compositional balance is established by a sense of “weight” of forms in inter-relationship within a space (room, piece of paper, canvas, web site, photo, film frame, etc.).
In a more esoteric sense, all objects are partially defined by the blank or negative space around them. It allows a form as much as its texture, mass, dimensions, surface, density and material do. A room is a field of charged negative space in the same way.
Even an empty room has the tension of a space between walls, between floor and ceiling, of scale, light, how one could move within it and place things in its space, and the aesthetics of its basic materials.
Immersive sight and computational speed/the importance of server clusters
The analogy can be made not only to how the eye-to-mind “sees” and processes, but also to its amazing computational speed and storage. The brain is increasingly understood as processing and storing through individual nodes in an interconnected system. This makes sense as a logical way to operate a high-speed processing system, its data, its interpretation and run speed, while minimizing system crash due to individual error.
The concern in a model such as a fast ar engine doing multiple tasks is whether it will crash and whether it can run fast enough. The mind is a good example to consider. It is optimal to have several different servers and programs running different aspects of immersive sight systems, to minimize system failure and to maximize run speed, fluidity and the data density of rich content. The less lag time, the more natural the immersion will feel as one moves through the space. Greater capacity for content and a stronger sense of spatial mapping and interaction allow optimal fluidity and range of options spatially, as well as deeper content. The multi-server context also lessens system failure, as the system runs as a “hive”: a problem can be more easily located and isolated.
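The hive idea can be sketched minimally: if each subsystem of such an engine runs as its own worker, a failure can be caught, located and isolated without stalling the rest. The subsystem names and bodies below are illustrative placeholders, assuming a simple Python task runner rather than a real ar engine:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative subsystems of an "immersive sight" engine. One is made to
# fail deliberately, to show isolation of the fault.
def tracking():
    return "position updated"

def content_lookup():
    raise RuntimeError("content server down")  # simulate one node failing

def render_prep():
    return "overlay composed"

def run_hive(subsystems):
    """Run each subsystem in its own worker; collect results and failures."""
    results, failures = {}, {}
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(task) for name, task in subsystems.items()}
        for name, fut in futures.items():
            try:
                results[name] = fut.result()
            except Exception as exc:
                failures[name] = str(exc)  # fault located and isolated
    return results, failures
```

The design point is that the failing node is reported by name while the healthy nodes still return their work, which is the essay’s argument for a multi-server context in miniature.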
Immersive interaction and architecture of an interior space
A museum room may have been built in 1924 with high ceilings and arching doorways. Another space may be a former storage facility built in 1960, with simple small square rooms interconnected along two long hallways. Their architecture presents primary forms. In the museum, there are the curved lines of the doorway’s arch and the two rectangles of two shuttered windows. In the alternate space, there is the shape of each room, slightly different from the others; its sequential relationship along the primary hallway and the exit doors; and its particular light and feeling of spatial enclosure in relation to the building as a whole. These elements can be interpreted spatially, as design elements, and metaphorically. The arch can be translated into a primary arch form in a design layout alongside text and images. Its tension in relation to the windows establishes in the visual field a compositional tension and inter-relationship of forms akin to balance (symmetrical or asymmetrical). This allows the architectural forms and feel of a physical space to be deconstructed into a design template. It also acts as architectural interpretation (an interpretation and “read” of materials, aesthetics, and spatial and formal inter-relationships).
The interesting thing is that this “layout” and tension changes with one’s perspective as one moves in the space as these primary forms and their relationships to the “room” change. This is akin to taking an abstract painting and shifting its forms and composition live in physical space or a Calder mobile that changes its primary forms and balance based on point of view. This is already hybridizing elemental architecture and elemental design and art concepts of composition and visual tension, balance and weight dispersion.
With augmentation this can be made visual and also create new possibilities for interface design, spatial triggers of information beyond audio tours and booklets and their set information.
Imagine moving through the room: the arch shifts its visual weight, as do the windows, all in shifts of perspective, yet there is a tension in the open space between them as one approaches a room and then moves among and toward its individual elements of exhibition.
A hypothetical example is an exhibition on engineering feats of the 19th and 20th centuries. At certain physical intervals, layers of augmentation data (text, image, audio, video) trigger in the field of vision with a design composition (from abstract to more formal) built from augmentation information (blueprints, related audio clips, sketches, virtual objects, free-floating kiosk-like text and image information). As one moves from, say, a starting point 50 feet away, that layer fades; at 30 feet another triggers, then another at 15. The architectural forms of the space are used in the layout of these informational overlay points; they shift in a design interpretation of the architecture of the place (the physical), of the tension in the space between these forms, and of the shifting context of the exhibition as a whole, in its subsections and individual displays, as one moves within the exhibition and its space.
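The 50/30/15-foot sequence described above could be modeled as a simple ordered threshold check; the distances and layer names are this essay’s hypothetical exhibition, not fixed values:

```python
# Layers ordered farthest threshold first; a layer becomes active once the
# viewer is inside its distance, and closer layers take over from farther ones.
LAYERS = [
    (50.0, "overview blueprint"),
    (30.0, "related audio clip"),
    (15.0, "detail sketches and text"),
]

def layer_for_distance(feet):
    """Return the augmentation layer active at this distance from the display."""
    current = None
    for threshold, layer in LAYERS:
        if feet <= threshold:
            current = layer  # closer thresholds override farther ones
    return current
```

A real engine would crossfade between layers rather than switch them, but the trigger logic per distance sample is this small.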
Immersive interactive spatial graphic design
The graphic designer can now work not only with the material and concepts of the show in developing supplemental materials (brochures, posters, introductions to the exhibition, and wall information for its sections) but with the space itself and how one interacts within it and the exhibit. The design becomes malleable graphic design: functional, spatial and immersive at once. The previously supplementary and introductory materials, their content, context and design aesthetics, instead become organically integrated into the physical space and the direct, immediate experience of the viewer. This is a logical and rich cohesion.
As the viewer approaches a particular work, they move through a series of points of visual and spatial reference. The larger context and overview may be seen upon initial entry and approach and this moves toward a singularity of focus as the viewer moves toward a specific work or display. The spaces from entry to the room to right before a specific work are incremental shifts in context, focus and immersion. The viewer is both immersed in a physical space, its overall theme and information and the content of each individual piece of the exhibition.
The design can work with this to create an interactive supplemental set of information that is malleable, shifts based on location, builds and peels away as one moves closer to a work, and plays with the forms of the works and the elements of the space itself. The sequence can contain many different elements and their interplay, both in the field of vision and in terms of context and layers of information. This is the model of sections of augmentation turning on and off at key points, as individual spatial and conceptual moments and nodes.
Another interesting possibility is that individual points of augmentation do not turn off, but instead are designed to build as one moves toward a specific part of the exhibit. The design can work in sequence, both in content and visually, as a delay-powered compositional development in which each discrete layer of text and image does not fade out but builds on the others into a final composition. This could form paintings similar to Mondrian’s, perhaps, if it is a show of works from that era, or something much more metaphorical and open in its interpretation of the space and content, while utilizing a sense of spatial emergence in the composition (pieces laid bare until the final approach, for effect).
Each section will be well designed, but the sections build in layers as one moves, finally forming the complete composition, both visually and in scope of information or building immediacy. The effect can be akin to taking a painting and slicing it into onion-skin layers laid out in the air at intervals, each of the same dimensions but each only one compositional section of the greater whole. This has many semiotic applications beyond its aesthetic potential, and as spatialized information it possesses a sense of inter-relationship as one moves.
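The build-up mode differs from the fade sequence in one line of logic: crossed layers persist rather than being replaced. A minimal sketch, with illustrative thresholds:

```python
# "Onion skin" layers: once a viewer crosses a layer's distance threshold
# (in feet), that layer stays in view, so the composition accumulates.
SKINS = [
    (40.0, "ground shapes"),
    (25.0, "mid-tone forms"),
    (10.0, "line work"),
]

def visible_skins(feet):
    """All layers whose threshold has been crossed remain in view."""
    return [name for threshold, name in SKINS if feet <= threshold]
```

At the final approach every skin is visible at once, which is the moment the full composition resolves.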
Immersive interactive spatial augmentation
A singular work in exhibition has layers of unseen satellite contextualizations. What is its relationship in time to other works? What is the timeline leading up to it, and back from it? What are its components that can be seen and interacted with as 3d objects? What schematic or blueprint either already exists or could be made as commentary of some kind? What would be further information about its semiotics, semantics and context within larger issues of history, culture, science, and other strands of discourse? What is the relevant back story on it, its creator, or its subject? What school in art is it from? Where does it sit within the larger record and discourse of science or natural history? What alternate view of it could be realized in new media and/or vr as augmentation (a micro-scale detail view, an interpretation in another form or a hybrid such as text over image, an image or object of it transposed into another larger view)? What section could be shown isolated, to be rotated for detailed analysis and a view of the otherwise static? What options can be given to the viewer in choosing these layers and their order as they approach and ultimately stand before it? What about letting it change between a first approach and a return, eliminating the bowling-pin sense of reset in favor of a greater sense of immersive interactivity?
New media immersion in third space exhibition and its possibilities
New media art has long occupied a complex position, often outside traditional exhibition and its spaces, whether by choice or, often, by exclusion. Part of the issue is the screen versus the blank-walled room. In the third space, new media can maintain its integrity as individual, self-contained works yet exist in the physical world with a greater sense of place and resonance. Some new media works can be shown free-floating in the field of vision as one moves through open space, with all the interactivity of screen-based interaction and yet a place in the white room. Other new media works can appear in shows developed specifically for new media artists to comment on these spaces, their semiotics, and themes selected for an exhibition. New media artists can also be invited into shows working with items from a museum’s permanent collection, exhibitions specifically about the dialog that emerges as the artists create commentary on the works and forms. Multiple artists or artist teams can make different individual works commenting on the theme and on one specific work, allowing the viewer multiple augmented choices and a range of voices in one spot.
Perhaps the most interesting of all would be exhibitions of art about science or natural history that specifically are created with fine artists in physical works, new media artists, ar artists and practitioners and scientists in tandem. This allows a new type of exhibition designed as deeply immersive, interactive, content rich in layers and commenting on physicality and augmentation. This space can be developed with an as yet unseen richness of interaction among those creating it as well as of the end users who experience it. What commentary and depth of information can be laid out in development with voices of expertise in physical art and space, new media and space, ar and vr and space, and of location specificity (locative media)? This could work quite well in larger scale shows as well as in shows exploring complex themes.
Examples of range of possible augmentation (snap to model form)
An intriguing subset of augmenting a specific object is the possibility of its augmentation only “snapping into place” after one has already moved close and observed it. This deploys the classic narrative strategy of delay, first immersing the viewer in a study of the object itself and its singular interpretation. It can then shift into an “aha” moment: a new context suddenly and unexpectedly appears, immediately adding a whole new context and scope to the singular object, its resonance and its inter-relationships.
Possible usage options:
- object, then its environs
- object, then a layer of a later event that affected it and/or changed its interpretation
- object, then a key juxtaposition for resonance
- object, then its innards: x-rays, spec drawings, plans, blueprints
- object, then a larger image of its physical context around it (diorama, part-to-whole, overlay of its scale, a larger object overlaid for a scale shift that emphasizes its relative size and role)
- object, then a radical questioning or reinterpretation
- object, then a partial erasure formed by ar overlay of negative space, signal noise, distortion, etc., to make a point
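A snap-to trigger of this kind might be sketched as a small dwell timer: the overlay appears only after the viewer has stayed close to the object for a set time, and walking away resets it. The distance and dwell values below are assumptions for illustration:

```python
SNAP_DISTANCE = 8.0  # feet; assumed "close observation" radius
SNAP_DWELL = 5.0     # seconds of dwelling before the overlay snaps in

class SnapToOverlay:
    """Delay-then-snap augmentation for a single exhibited object."""

    def __init__(self):
        self.close_since = None  # time the viewer entered the close radius
        self.snapped = False

    def update(self, distance_ft, now_s):
        """Call once per tracking sample; returns whether the overlay shows."""
        if self.snapped:
            return True  # once snapped, the new context stays
        if distance_ft <= SNAP_DISTANCE:
            if self.close_since is None:
                self.close_since = now_s
            if now_s - self.close_since >= SNAP_DWELL:
                self.snapped = True  # the "aha" moment: overlay locks in
        else:
            self.close_since = None  # walking away resets the dwell timer
        return self.snapped
```

Each of the usage options above would simply swap in different overlay content at the moment `snapped` becomes true.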
Avatar as artifact and Mapping the ephemeral
This concept came from a discussion with graphic designer/artist Paul Wehby about spatial interaction, visual representation, and design as artifact after the event.
As the viewer moves in a space with ar, layers of information and video display the augmentation in the field of vision. An interesting possibility emerges when one considers that this can also be captured for playback. After leaving, the viewer can have an artifact saved of their individual navigation, gestures and paths. This forms a gestural archive. The option can be offered to select a 2d or 3d playback.
In 3d playback, the person is represented by an avatar. This icon is a place marker of moments otherwise lost in time. It can be viewed in an on-line version of the exhibition and archive. The information on how to view it, and a reminder of one’s selected avatar and form, can be emailed individually, and only if the person wishes. The person can view animations or specific snapshots of the playback as they choose (otherwise it veers uncomfortably into surveillance and monitoring).
The playback mode can show what gestures were made (turns, pauses, abrupt stops); in 3d mode these are shown by the avatar, and in flattened 2d mode the playback becomes a sort of active map as animation. Each person can select a color and texture form as their representation if they wish. In this mode, the person can also select what time frame of their visit they wish to view. They are then represented in the space as a thick graphic ribbon moving from point to point, with gestures shown as map icon forms (a turn as a rounded curve, an abrupt shift as a triangular point); the intensity of each gesture can shape the malleable architecture of the forms along the line, making them sharper and larger or subtler and smaller.
The ribbon fades back along a section of its length into transparency, showing movement and sequence without clouding the space into confusion with overlap. This is a sort of fusion of mapping and its iconography, motion studies and animation, and real-world interaction replayed as artifacts of the otherwise ephemeral.
Overall playback interface possibilities include selecting the 2d or 3d view; setting the length of a desired playback by time stamp or by section of the exhibit; and choosing whether to view oneself individually, with no other avatars or lines from that time of visit, or to see oneself among everyone else from that time as avatars or differently colored lines of movement and gesture in the space. The last option allows a divergent avatar space or interactive map, either of one’s own movements, gestures, choices and their aesthetics, or of oneself among many, showing what emerges from all the gestural movement and interactions, or non-interactions, in that time and space.
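Reading gestures out of a recorded path for the 2d map could work roughly as follows: heading changes beyond a threshold become “turn” icons, and sharp drops in speed become “abrupt stop” markers. The thresholds and the equal-time-step sampling are assumptions for illustration:

```python
import math

def classify_gestures(path, turn_deg=45.0, stop_ratio=0.3):
    """Classify gestures along a path of (x, y) samples at equal time steps.

    Returns (index, label) events: "abrupt stop" where speed collapses,
    "turn" where heading changes by more than turn_deg degrees.
    """
    events = []
    for i in range(1, len(path) - 1):
        ax, ay = path[i - 1]
        bx, by = path[i]
        cx, cy = path[i + 1]
        v1 = (bx - ax, by - ay)   # motion into the sample point
        v2 = (cx - bx, cy - by)   # motion out of it
        s1, s2 = math.hypot(*v1), math.hypot(*v2)
        if s1 > 0 and s2 <= s1 * stop_ratio:
            events.append((i, "abrupt stop"))
        elif s1 > 0 and s2 > 0:
            # signed angle between the two motion vectors
            angle = math.degrees(math.atan2(
                v1[0] * v2[1] - v1[1] * v2[0],
                v1[0] * v2[0] + v1[1] * v2[1]))
            if abs(angle) >= turn_deg:
                events.append((i, "turn"))
    return events
```

Each event would then be rendered as the corresponding map icon (rounded curve, triangular point) along the ribbon, scaled by the magnitude of the angle or the speed drop.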
This is interesting because it plays on the concept of the artifact and on the usual roles of virtual and real-world movement, and it can serve as a means to archive and translate the physical and otherwise ephemeral.
Education in a third space (additional immersion indoors and outdoors with gps and ar)
The potential for education in the third space spans age groups and disciplines. There is potential for greater layering and interactivity in exhibition as education, but that is only a piece of what is possible. Students can work while moving through the space with an optional augmentation layer, or even a module in a series for educators, that moves beyond simply interacting, reading and seeing. The space will be overlaid with gridded spatial awareness and measurement and with specific augmentation points, or “spatial roots”.
Assignments can be interactive portions of the display and augmentation, and can be emailed on-site upon completion for later evaluation. Students can work with great autonomy yet with direction, as spatialized prompts and explorations open up a new paradigm: the non-linear completion of an otherwise linear task. Students can also work in teams, communicating through instant messaging, headsets, or visual systems that take a portion of the field of vision (semi-peripheral, to preserve focus) and show the position and retrieved or sent/shared data of a teammate live. This can allow a great sense of collaborative exploration, and assignments can be designed as team-based and shared-role-based for emphasis and greater exploration. The education module will bring additional layers of augmentation into the space for this purpose, layers that otherwise would not be triggered by a viewer. This allows modules to be made malleable for the education of different age groups and topics in the same space and exhibition.
Hypothetical example 1
A show at a museum about artists, such as Escher, who utilize mathematics in their works.
The students can see the works with overlays about the math involved in each, its history, possible similarities in arcs, lines, plotted points and geometries, and vr and new media work based on the same mathematical theorem, with explanation, for comparison. After finishing the exhibition, the students can move outside with either ar headwear or pdas and form teams of two. The teams go to starting points set to locations on the grass outside the museum cafeteria. One student moves along the x axis, both on-screen and by their movements in real space, while their teammate moves along the y axis and its plotted points, the two working in tandem in the space of a geometric problem. This exercise can show students, through spatialization and play, that the x and y axes are indeed elements of a 3d space, better elucidating the equation and the steps of computation, and showing that this is not simply math in a world of books and numbers but something truly parallel to elements of the real world. As each team of two moves, they can see on-screen the positions and movements of their partner on the other axis, as well as their calculations on part of the screen, input by keypad as a team effort (communication can also be by talking or by headset if at a greater initial distance).
A second assignment can be a visual word problem. The students see, at key locations, the elements of a specific word problem and, at spatialized intervals in between, the gridded distances between them. This takes the classic math problem, “two trains, 15 miles apart, leave their respective stations at 3pm; one goes south at 25 miles an hour, the other north at 15 miles an hour; at what point will they cross?”, and spatializes it. At one spatial root location (a key location outdoors, set spatially by gps) appears train one, with visuals and audio; at another key spatial root location, on the other side of the field of grass, appears train two. The student can see the trains and, in between, elements of track and the spatial grid of the distance between them. The readout relays the student’s position as they move, and they can calculate the distance from the starting points with the trains. This takes the calculations and distances within a word problem and spatializes them physically. The student still must do the same calculations (the trains won’t give the answer away), but the task is now immersive and far less abstract. The spatial immersion works to bring the student into the dynamics of mathematical computation and space.
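The arithmetic the student is asked to spatialize works out as follows, assuming the two trains are moving toward each other along the same line:

```python
# Worked example of the classic two-trains problem as spatialized above.
separation = 15.0              # miles between the two spatial root points
speed_a, speed_b = 25.0, 15.0  # mph, the two trains approaching each other

closing_speed = speed_a + speed_b         # combined rate of approach (mph)
meet_time_h = separation / closing_speed  # hours until they cross
meet_point = speed_a * meet_time_h        # miles from train one's station
```

The trains close at 40 mph, so they cross after 0.375 hours (22.5 minutes), 9.375 miles from train one’s starting point; the student pacing the gridded field recreates exactly this computation.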
Hypothetical example 2
Exhibition by artists and scientists looking at “ecosystems”
Students in the exhibition rooms experience combined physical and ar/vr works about ecosystems in nature. The augmentation layers explore the inter-relationships between larger and smaller ecosystems in natural habitats, as the information and augmentation shift in progressions while one moves through the space and from open areas toward specific points of exhibition. Snap-to works play with elements of the evolution of ecosystems over time, unexpected symbiotic relationships, and 3d rotatable elements within an ecosystem.
Outside in the open area, the module has an open lab set to spatial points. Audio and text at certain locations create the immersive effect of a particular area of nature in the world, its ambient sounds and information, while other locations add images, both moving and static. The larger effect is the transposition of an immersive environment and its ecosystem into the physical space of the grassy lawns of the museum. One student exercise is to care for certain living creatures within the ecosystem, which requires an understanding of their predators, prey and diet. The student must complete a series of related tasks by moving about the area to ensure the safety of the animal, and in doing so also completes a review assignment on all the elements studied within the exhibition and its augmented lesson.
Hypothetical example 3
Natural History Exhibition on prehistoric animals
Students can observe snap-to visuals of different examples (static or moving animation, 2d or 3d) of several of the primary scientific theories as to these animals’ demise. The delay effect would allow the student first to become immersed in the space and the animal itself, and the delay could lend more visceral impact to the shock of what may become of them in time. Students can learn, through additional education-module layers of augmentation, greater specifics about each animal, its habitat and its time frame in the fossil record. Younger students can learn through visual games and animations overlaid as they move through the exhibition. Games can include the classic tropes of “which one does not belong in this picture?”, “what would be the next picture in the story?” and “find all the objects on the list hidden in the picture” (in this case a permafrost or dig site). Older students can work more with overlays of rotatable images of fossils, or of multiple habitats for review after visiting one section and moving toward another, as immersive visual multiple choice.
Jeremy Hight is a locative media/new media artist and a writer. He is credited with inventing locative spatial narrative in the first locative narrative project 34 north 118 west. His essay Narrative Archaeology was recently named one of the 4 primary texts in locative media. A retrospective look at his work and a look at "reading" the landscape is in volume 14 issue 08 of Leonardo. He has published over 20 theoretical essays and exhibited work in festivals and museums internationally.