“There is a special place in virtual reality—we call it Near-Field VR,” says Mark Bolas. “It is the place that is within arm’s reach of a user, and it is magical, as it provides the best stereoscopic and motion cues of VR.” In a WIRED article, Bolas discusses his recent work coming out of the MxR Studio at USC’s School of Cinematic Arts, produced in cooperation with students from the Animation Department.
This time I’m going to talk about one of my projects. It was the second project in Andreas’ CTIN 543 class. The requirement was, briefly, to tell a non-linear story. Generally, our understanding of “non-linear” is that different choices lead to different consequences. But due to technical restrictions, we were not able to give players unlimited freedom; what we could provide were prefabricated options.
That was not what I wanted. In my opinion, although artists can create a great story by providing options and corresponding endings, as soon as players feel “I can’t find what I really want to do among these options,” the system is broken. So how do we fix this? How do we provide players with infinite freedom, or at least make them feel it’s infinite? Fortunately, I had been reading about postmodernism. One postmodernist view of audience participation holds that the audience’s understanding of an artwork is a vital part of the work itself. I agree completely, and that gave me this idea: why not create linear stories, but let audiences watch them in a non-linear way?
I decided to make an interactive film. I combined four films, each mimicking a surveillance camera monitoring a different room of the same house. Three of the rooms were bedrooms and one was a living room. All four films started at the same time and ran on parallel timelines. Audiences could switch among the four cameras to watch the residents move around the house, minding their own business. They could follow a character from one room to another, or focus on one room to see what was going on. If they thought they had missed anything, they could rewind. In this project, I wanted to give players “freedom” in both time and space.
To make it believable, all footage was shot from a 45-degree angle and rendered in black and white, as if it really came from a surveillance camera. All sound was muted.
Also, Julian helped me build the interactive part. We used HTML5 to manage playback, because I wanted audiences to be able to switch from one video to another at the exact same time point, continuously, and no existing video player could do that. The interface looked like this:
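The core trick is keeping all four videos seekable and synchronized, so that switching cameras lands at the same moment on the other timeline. Here is a minimal sketch of that logic; the function and element names are illustrative assumptions, not the actual project code:

```javascript
// Sketch: switch among four synchronized <video> elements while
// preserving the current time point. Only one camera plays at a time;
// the others stay paused (and therefore frozen at the last sync point),
// so the furthest-ahead timeline acts as the master clock.
function switchCamera(videos, activeIndex) {
  // The playing video has advanced past the paused ones, so its
  // position is the maximum currentTime across all four.
  const masterTime = Math.max(...videos.map(v => v.currentTime));
  videos.forEach((video, i) => {
    video.currentTime = masterTime;   // re-sync every timeline
    if (i === activeIndex) {
      video.style.display = 'block';  // show the chosen camera
      video.play();
    } else {
      video.style.display = 'none';   // hide the rest, keep them seekable
      video.pause();
    }
  });
}
```

In a page with four `<video>` tags (one per room), each camera button would simply call `switchCamera(videos, n)`; rewinding is a matter of setting every video’s `currentTime` to the same earlier value.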
These films were shot separately. To make sure there were no continuity errors when a person walked from one room to another, I used a sketch to plan the timelines (please forgive my terrible handwriting):
You can see there are two gaps on each timeline. They represent power outages, which meant I could pause the shooting at those moments. Each video was about five minutes long; if I had shot each one in a single take, any mistake would have meant a lot of rework. So I split each of them into three parts.
Faking CCTV footage had another advantage: I didn’t need to record dialogue. In fact, in the original video, you can hear me loudly reminding the actors and actresses what they should do.
Speakers: Joshua McVeigh-Schultz, Julian Bleecker, Flint Dille
Time: Wednesday, April 29, 4-6pm
Location: USC’s School of Cinematic Arts Interactive Media Building (SCI), Room 108
For the final seminar of this semester, we will experiment with a prototype learning experience designed to flip the idea of Massive Open Online Courses (MOOCs). Whereas MOOCs bring students from outside the university into the academy through a virtual learning experience, we are developing a platform that enables students within the university to venture off-campus and into the most exciting, demanding and provocative worlds of professionals in their field. The prototype uses available technology to create a series of two-way conversations between students on campus and professionals in the field using Google Glass and a customized interface designed to facilitate clarity and intimacy between the students and the professional. The seminar will start with a brief history of Telematics and Telepresence by iMAP PhD candidate Joshua McVeigh-Schultz and then connect to the viewpoint of a guide who will tour us through the remote location.
In 1959, the British scientist and novelist C. P. Snow delivered a lecture at Cambridge called “The Two Cultures and the Scientific Revolution,” which was later published in book form. Snow worried that the supposed gulf between the two cultures, namely the sciences and the humanities, would damage both individual cultivation and social well-being. He asserted that “the rigid division between disciplines, the lack of mutual comprehension, the misplaced feelings of superiority or disdain in different professional groups - these should be seen as problems, not fatalistically accepted as part of the immutable order of things.”9
However, as human knowledge has developed, it has become impossible for any one person to be encyclopedic. At every moment, new discoveries and inventions may be added to the store of human knowledge (and skills). Moreover, today, half a century after Snow’s diagnosis, the Internet addresses the human condition in ways Snow did not anticipate. It is not at all surprising that, in the early 1990s, for instance, according to China’s Ministry of Education, there were 58 academic disciplines, 573 sub-fields within those disciplines, and nearly 6,000 majors under the sub-fields.
Despite the fact that many newborn subjects are the result of collaboration and amalgamation among multiple subjects, the labels that claim where each part of a new subject comes from have never been so clear, and the gaps between subjects have never been so wide.
Art provides a means of “training perception” for new cultural changes. Art should not merely provide a “consumer commodity” but should also function as an “early alarm system” or the “antennae of the race.” The artist, as a pioneer, always studies the present. Many artists now work at sites of science. An important goal is to open the black box of science and other technology. Instead of assuming that “facts” are objective and immune from debate, they look for sites of disagreement and discussion.
“We study science in action before the facts and machines are black-boxed, or we follow the controversies that reopen them.”10 In his book Reassembling the Social, the French sociologist Bruno Latour introduced Actor-Network Theory, whose goal is to trace the associations between diverse actants.
An actant is an intermediary that often has a program or script (a script is a typical situation, a sequence of actions). For instance, a bomb is programmed to explode; the patron of a restaurant follows a typical script. People program things to follow certain scripts, and things program people to follow other scripts as well. One actant can sometimes encourage or force another actant to change its script. For instance, a road bump changes the driver’s script; an unusually heavy key can compel guests to deposit it at the hotel counter. New affordances often change our scripts. This idea has been applied in fields other than science studies. Experimental artists often produce new affordances that destabilize our usual scripts. They sometimes also change the scripts of objects.
An actant that follows its script in a predictable and regular way often becomes a black box. “When a machine runs efficiently, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become.”11
People often treat a black box as a single thing rather than as a complex system. For instance, people think of “the government,” but it is in fact a complex combination of many actants: the people working for it, and the networks of people (scientists, businessmen, technicians, etc.) and organizations related to it. In fact, every object is a network.
The network can serve as a site for art practice, scientific research, and social critique. Through the analysis of We Feel Fine, we see it as a space for creative expression. In addition to focusing on the technical functioning of networks (topologies, protocols, software, and hardware), art critics (or artists) should explore the symbolic and aesthetic dimensions of the genre as a means of cultural production by asking questions like: Why would this work be considered “network art” or intermedia art? Why is it interesting and/or important? How do specific design elements and aesthetic choices relate to core concepts? How do the pieces work, technically? How do they use networks and media? And how does the work relate to the essence of the technology it applies?
Artists should be encouraged to gain a technical background in the structure and function of computer networks, or in whatever technology relates to the area they want to explore; engaging critically with networks, media, and other black boxes as elements of culture facilitates the creation of novel works of art.
Conclusion

Programming, as an interface between man and machine, is usually considered a logical and clean discipline with clearly defined aims, while art, in the conventional sense, is a more emotional subject, defying definition and highly subjective. However, network art and many other media artworks show that the notion of art and science working separately is nonsense. Artists are using technology and scientific experimental methods as inspirations and subject matter. “Science must no longer give the impression it represents a faithful reflection of reality. What it is, rather, is a cultural system, and it exhibits to us an alienated, interest-determined image of reality specific to a definite time and space.”12 The gap between the two cultures can be filled with creative expressions from either side.
Computer graphics and visualization technologies have reached a stage where realistic, real-time rendering of images is possible on high-end consumer hardware. This push towards bringing real-time visualization technologies into the consumer sector was largely an effort by the video game industry, whose products were targeted solely at consumers. However, real-time computer graphics has never been the sole domain of video games. Scientific research labs and military labs have been experimenting with this technology since its inception, creating simulations and visualizing data sets. For example, the military uses this technology to build flight simulators that train pilots through virtual dry runs. These simulators can be thought of as video games catering to a specific purpose: training. Much of the underlying technology and mechanics are similar to those of video games.
Such custom simulators for non-game purposes have always required a lot of financial investment, time, and a dedicated software development team to design and implement the various elements that comprise the application. Each piece of software was built from the ground up, and even though most projects overlapped considerably in features and functionality, they were constructed independently of one another, using different development environments and programming languages. There was no common ground, and in effect each project reinvented the wheel where certain technologies and operations were concerned.

Video game development works a little differently. Game designers use a game engine as the foundation of the game they design. The game engine is a complex software system that contains the building blocks of a game: displaying the graphics and visual effects, managing the audio, handling interaction and events, and processing the mathematical and physics-based calculations required to simulate the game world. Rather than scripting each of these components from the ground up, game developers simply call on the various functions that comprise these blocks, concentrating instead on the mechanics of the game and on refining and upgrading each of the core building blocks with new features and optimizations. With each new game, new features are incorporated into the existing game engine, which in turn are available to use in the next production. Working in this way, video game developers have kept improving upon their own and each other’s previous work rather than creating the components from scratch for each project. A handful of game engines (Unreal Engine, CryEngine, Valve’s Source Engine, the Unity game engine) are used by the majority of developers in the industry.
Based on the needs of the industry, and with the help of the game developers, these game engines are constantly updated and maintained by a dedicated team of engineers. Thus, these game engines form a very solid foundation for creating a simulated interactive environment.
The video games industry has seen massive growth in the last ten years, both in game content and in technology. With the demand for content constantly increasing, and the competition between developers being fierce, it is the game engines that have benefited dramatically from constant innovation. The competition drove the advancement of both the game engines and the hardware required to run the games. Graphics processing hardware has made enormous leaps of progress, enabling the graphics chip on a computer to perform many tasks beyond rendering images, and so many new possibilities for real-time visualisation have opened up. The video games available today are capable of simulating a real-world environment, complete with near-photorealistic visuals and accurate physics. It is the game engines that have acted as a common development environment across a range of developers, so new features added to an engine are immediately available to all developers using it.
Developers of non-game simulations have upgraded and improved their existing development systems with every project. However, there was no collaboration between developers at the level seen in the games industry, and no dedicated team working on upgrading and maintaining a common development system or engine. Due to this gap in production pipeline technology, the non-gaming simulation industry has largely remained a highly specialised cottage industry. It is understandable that non-gaming simulators may not feel as refined as video games in some respects, partly due to the lack of significant demand, and partly because each developer worked in their own way, using their own proprietary software, rather than a commonly used development environment.
To a large extent, this has been the primary production workflow for mainstream medical training developers. Those who have been invested in this industry since its early days have no doubt formed their own methodologies, tools, and workflows. However, breaking into this industry as a newcomer was no easy task. One would need to assemble a large enough production team and then begin creating the application from scratch, which would involve a lot of initial investment in cost and time to flesh out the foundation upon which the application would be built. Modern game development technologies and workflows have eliminated this time-consuming step. The game engine acts as the common foundation layer and a library of operations, allowing developers to begin creating game content right from the get-go, instead of first setting up the foundation layer.
This landscape is now changing as more and more games break the mold of standard gameplay and wander off in many experimental directions. The rise of indie games and mobile games has led to many innovative new types of gameplay, and this in turn has fueled innovation in game engine technology. This has been noticed by developers looking to create interactive experiences that may not fit into the category of games, or even entertainment. More and more developers with similar intentions are looking to game engines as a starting point for creating interactive experiences, largely because game engines have become so advanced that they allow rapid prototyping, creation, and testing of interactive systems and simulations of many different types.
Vast improvements in consumer hardware have put a significant amount of computing power into the hands of individual consumers, while the indie games boom has driven down the price of software, making game engines and development environments more affordable and accessible to individual developers. This has enabled a vast number of individual artists and programmers to participate in the creation of interactive digital experiences, and the fruits of their labor are clearly visible on the many popular gaming platforms, from consoles to mobile phones to physical interactive entertainment. Similarly, there has been a great surge in experimentation with serious games and other serious applications of this widely accessible game engine technology. Many of these serious games target the field of medicine, an industry where information visualisation and simulated training software are proving instrumental in improving the quality of training received by aspiring doctors.
Sound healing and the use of sacred or natural frequencies have played an important role in many cultures. The idea of manipulating the human body, and to an extent human consciousness, through sound or vibration has always sparked a keen interest in me. Given that the human body is essentially a tight mass of vibrating particles, we ourselves are generators of vibration. We may not hear the vibrational sound of our own bodies in our normal conscious state, but many people have reported hearing an internal ‘humming’ sound that seems to originate from within their bodies, or a faint ring in their head or ears. This phenomenon is most commonly observed during heightened states of awareness caused by meditation, yoga, breathing exercises, physical exertion, fatigue, sickness, and intoxication with certain mind-expanding substances. The one aspect common to all these states is that we are more aware of our beings than we usually are, which is why we are much more sensitive to outside stimuli in these states. This heightened awareness allows us to focus our minds easily on this internal hum or vibration.
The deeper we go into the meditative state, the more intense and pleasurable the vibration becomes, until at one point it feels like the entire body is going to explode with the build-up of vibrational energy. It is a state of pure bliss. One generally comes out of this state feeling extremely positive, calm, and relaxed. Having personally experienced this state during a 10-day meditation retreat, I know that it is extremely difficult to get to this level of awareness; one must usually disconnect from the outside world and put a lot of effort and time into meditation. It is certainly not something to be done casually, which makes the experience extremely rare and something to be cherished.
In certain cultures, rhythmic drumming is incorporated into spiritual ceremonies to bring about a state of heightened awareness. The synchronised vibrations from all the drums create a resonance in the players’ bodies. Another method involves rhythmic drumming on a person’s body: an even number of players sit on either side of a person lying on his or her stomach and play on the body as if it were a drum table. Again, the synchronised drumming creates resonance in the person lying down, causing, at times, an out-of-body experience.
The use of entheogens in Native American and Amazonian cultures provided a temporary way to experience a similar state of intense vibration and feeling of bliss. These ancient cultures also make use of chants, sacred songs, rhythmic drumming, and various musical instruments in their entheogenic ceremonies to further amplify the vibrational experience. In the state of heightened awareness brought about by the entheogens, users experience “feeling” sounds from instruments that they would otherwise only hear, not feel throughout their bodies. In the case of rhythmic drumming, the vibrations were further compounded by the intricate expanding or contracting drumming patterns, and would bring up all sorts of reactions in the participants.
The common factor in most of the spiritual techniques mentioned above is the amplification of vibrations, whether by focusing the mind through meditation, expanding it through entheogens, or physically vibrating the body through rhythmic drumming. Music is made up of vibrations, and in many cultures, music originated through spiritual ceremony. At its core, music has always had a spiritual connection. Over the last half-century, music has been produced primarily for entertainment rather than spiritual purposes. In recent times, however, there has been a lot of interest in the spiritual power of music, sparked by a vast array of sound healing and sound bath workshops, binaural frequencies and brainwave entrainment media, and a shift in underground electronic music towards contemporary music with a firm grounding in spirituality. This has been achieved through the sampling of chants and sacred songs from ancient cultures, the use of spiritual musical instruments such as temple bells, gongs, and medicine bowls, and the direct creation of sounds or frequencies that are in tune with the natural vibrational frequency of the human body, causing the body to resonate with the sound. Such music has been given labels such as “sacred bass,” “temple bass,” and “shamanic.” There is a lot of potential for the spiritual aspects of music to be interwoven with its entertainment aspects, creating a unique synergy of awareness and enjoyment.
Digital sound design techniques allow producers to replicate real-world sounds as well as create brand-new ones. They can also create specific frequencies and effects and manipulate them in great detail. This allows sound designers to shape sounds so that they create a particular effect in listeners, much like the rhythmic drumming seen in some cultures. Trance music, for example, uses repetitive drumming and entrainment to lock listeners into a particular groove. Faster trance music has been used in several racing games due to its high energy: the listener’s heartbeat synchronizes with the beat of the music, rendering the listener more alert and active. On the other hand, the music played at yoga studios is usually a sonic tapestry of Tibetan singing bowls and oriental musical instruments, played through a hi-fi system. All of these sounds and their effects can be created digitally, allowing the sound designer to manipulate a user’s state of mind. Some of these techniques, such as entrainment, have already been used in games, going all the way back to Space Invaders. This field is ripe for exploration, as there is a lot of untapped ancient knowledge of sound that can be implemented digitally with current-generation technology to push the boundaries of what is possible in sound design.
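As an illustration of how such frequency effects can be synthesized digitally, here is a small sketch that generates a binaural beat: two pure tones a few hertz apart, one per ear, whose difference frequency the brain perceives as a slow pulse. The function name and parameter values are illustrative examples, not a prescription:

```javascript
// Sketch: generate stereo sample buffers for a binaural beat.
// With baseHz = 200 and beatHz = 10, the left ear hears 200 Hz and the
// right ear hears 210 Hz; the perceived 10 Hz difference falls in the
// alpha brainwave range often cited in entrainment media.
function binauralBeat({ baseHz = 200, beatHz = 10, seconds = 1, sampleRate = 44100 } = {}) {
  const n = Math.floor(seconds * sampleRate);
  const left = new Float32Array(n);
  const right = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    const t = i / sampleRate;
    left[i] = Math.sin(2 * Math.PI * baseHz * t);             // base tone, left ear
    right[i] = Math.sin(2 * Math.PI * (baseHz + beatHz) * t); // detuned tone, right ear
  }
  return { left, right };
}
```

In a browser, these buffers could be copied into the two channels of a Web Audio `AudioBuffer` and played back over headphones, where each ear must receive its own tone for the binaural effect to occur.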
I recently watched a mind jam panel involving media theorist Douglas Rushkoff, in which he discussed virtual reality technologies and the underlying intent behind the exploration of this new medium. The discussion briefly touched upon the human desire to chase highs or blissful states of consciousness, and the tools that have been used to achieve these mental states.
Before the invention of digital technology, human beings used religious practices and rituals, meditation, sound healing and therapy, substance consumption, dreaming, and so on as tools to achieve different kinds of mental highs. These tools have been used to expand consciousness, experience reality from a different perspective, and temporarily escape reality itself. These altered states of mind allowed humans to have mystical experiences, in which they experienced everything from ecstasy and bliss to pain and suffering. Expanded consciousness brought with it a loss of identity of the self and a realization of our deeper connection to the world around us. The insights and knowledge that came through these experiences were processed and integrated into the individual’s life. The temporary yet profound nature of these experiences created a sense of awe, which in turn enhanced wellbeing and boosted life satisfaction. This was incentive enough for humans to continue chasing them and searching for different types of highs.
The digital age presented a new medium, or tool, for achieving a technology-induced high. Every iteration of technological progress promised and delivered a higher-definition, higher-fidelity version of the digital experience, aimed at fully immersing the spectator. Graphics and audio technologies are now so advanced that they have the power to temporarily transport the spectator to another reality. The next big technology push is virtual reality, which promises to be more immersive than the largest of projection screens.
Just like the tools from the past, these digital tools can be engineered to help us look at ourselves and our present reality, or used as a means of temporarily escaping reality. In both film and video games, there are examples of the technology being used to educate and inform, as well as examples of technology used to transport the user to a different reality. Virtual reality promises to take the immersive qualities of the experience to a whole new level, and is arriving in the spotlight at a time when the world is facing many different problems.
In the discussion, Rushkoff questions the intent behind the development of these technologies. He talks about many new media startups that began life with the intention of changing the world and its systems but, upon reaching popularity, started dealing with Wall Street, which represents the old system. He believes the opportunity of moving into the digital age is not to build upon the systems of the industrial age but to challenge them and retrieve the values that got left behind. Peer-to-peer networking, for example, allows people to connect with others in their locality, and he feels that technologies such as this should be harnessed to form an entirely new, refined way of living.
Similarly, his thoughts on virtual reality technologies draw on various mystical experiences such as shamanism and altered states of mind. With the level of immersion that virtual reality can potentially provide, technologists can harness the illusion of reality to give the user a powerful experience. Just as shamans in indigenous cultures take the participants of a ritual through an introspective, mind-altering journey, technologists and artists can create experiences that take users through a contemplative audiovisual journey. While each participant in a shamanic ritual goes through a unique personal experience influenced by their state of mind, the same is not true of a virtual reality experience, as there is no connection to the participant’s subconscious mind. These experiences, while bearing no connection to any individual, can still be designed as tools to observe and inspect ourselves, our culture, and our connection to the planet from a different perspective.
As discussed earlier, mystical experiences create a sense of awe. This is a feeling that is different from happiness. Awe is a powerful emotion with two defining features. One involves perceptual vastness, which is the sense that one has come upon something immense in size, number, scope, complexity, ability or social bearing. The other involves stimulating a need for accommodation, meaning it alters one’s understanding of the world. These features of awe are intertwined, whereby events that expand one’s usual frame of reference, such as natural events, personal transitions, or unfathomable structures, stimulate new mental models. A research study from Stanford University and University of Minnesota which was published in Psychological Science explores how awe expands the perception of time.
Big-screen formats such as IMAX can reproduce the sense of awe to some degree. For the most part, the viewer is conscious of looking at a screen, but for a few moments does get transported into the reality projected on it. When the technical issues and kinks in current-generation virtual reality technologies get sorted out, and when the graphics and spatial audio are well integrated, virtual reality has the potential to increase that simulated sense of awe by transporting the user into a digital lucid dream. With added immersion and interaction, coupled with high-fidelity graphics and audio, this technology could be used therapeutically to provide wellness, a sense of wellbeing, and life satisfaction.
The link to the discussion between Douglas Rushkoff and Jason Silva: https://www.youtube.com/watch?v=Ggmiljn8LH0
The link to the Stanford University research on awe: http://faculty-gsb.stanford.edu/aaker/pages/documents/TimeandAwe2012_workingpaper.pdf
Speaker: John Craig Freeman
Time: Wednesday, April 22, 4-6pm
Location: USC’s School of Cinematic Arts Interactive Media Building (SCI), Room 108
John Craig Freeman is a public artist with over twenty-five years of experience using emergent technologies to produce large-scale public work at sites where the forces of globalization are impacting the lives of individuals in local communities. His work seeks to expand the notion of public by exploring how digital networked technology is transforming our sense of place. Freeman is a founding member of the international artists collective Manifest.AR and he has produced work and exhibited around the world, including at the Los Angeles County Museum of Art, the San Francisco Museum of Modern Art, FACT Liverpool, Kunsthallen Nikolaj Copenhagen, Triennale di Milano, the Institute of Contemporary Art Boston, and the Museum of Contemporary Art Beijing. He has had work commissioned by ZERO1, Rhizome.org
In April 2014, I interviewed Andrew Chambers in Beijing. Andrew used to be a senior game designer at Blizzard, where he was a member of the Diablo III: Reaper of Souls development team. He designed the new “Crusader” class and the crafting system. He is now the creative director at Funplus. This piece was originally written in Chinese and published on Game Grapes, a Chinese game media outlet. I thought his development experience and opinions about mobile games could be useful to some people at IMGD. I lost the original recording file, so I translated the published piece into English.
Q: In the latest expansion of Diablo III, Reaper of Souls, you designed the new Crusader class. It was a successful design and became very popular after the expansion was released. Could you tell us about the original idea behind the Crusader?
A: At the very beginning, we had already decided on the three major design principles of the Crusader: he is a tank, he uses holy fire to attack, and the class stands for order. Then we started to discuss what kind of gameplay could be combined with these class features. We decided that he would be melee and mid-range, which could better reveal his power.
After this, we spent a long time implementing these ideas. Unfortunately, we couldn’t implement everything we wanted. During the development process, we kept revising our ideas. That’s also something we are doing at Funplus: starting with good ideas, and trying out different ways to implement them.
Speakers: IMGD 2nd Year MFA Students
Time: Wednesday, April 15, 4-5:50pm
Location: USC’s School of Cinematic Arts Interactive Media Building (SCI), Room 206
Please join us for a special event this Wednesday April 15 at 4:00-6:00PM in SCI 206 when the second year MFA students will be presenting their experiments, prototypes and previews of next year’s IMGD Thesis projects. This will be a dynamic, hands-on opportunity to experience and critique a variety of interactive experiences at a crucial moment in their development. Don’t miss this preview of things to come.