A toothpick might be the simplest device known to man, unless that man is artist Scott Weaver.
After 35 years and roughly 100,000 little wooden toothpicks, the third-generation San Francisco resident has created an incredibly intricate and interactive replica of The City by the Bay.
Titled "Rolling Through the Bay," Weaver's project chronicles life in his home city with a kinetic tour through San Francisco's sights, monuments, and history. Ping-pong balls roll down winding toothpick tracks, visiting the Golden Gate Bridge, Alcatraz Island, and The Palace of Fine Arts, which contains a heart that, of course, was "left in San Francisco."
"I have used different brands of toothpicks depending on what I am building," Weaver explains on his website. "I also have many friends and family members that collect toothpicks in their travels for me. For example, some of the trees in Golden Gate Park are made from toothpicks from Kenya, Morocco, Spain, West Germany and Italy."
Weaver's masterpiece also features wooden models of a Rice-A-Roni cable car, a dragon for Chinatown, a psychedelic tribute to the Haight-Ashbury neighborhood, the Painted Ladies (think "Full House" homes), and a miniature of the World Series trophy.
"Rolling Through the Bay" will be on exhibit at San Francisco's Tinkering Studio until June 19.
From Hunch Blog.
For more info, check out the article.
"Our latest data project was to analyze how self-described Mac and PC people are different. The infographic below, designed by the talented folks at Column Five Media, breaks it down. Keep reading after the Infographic for more background and analysis, including some comparisons to findings from 18 months ago when we first looked at this issue."
An elderly woman is now facing a prison sentence in her native Georgia, after she reportedly did with a spade what a legion of hackers could only dream of - took down all access to the internet in neighbouring Armenia for up to 12 hours.
The country was knocked offline on 28 March after cables linking Armenia to Georgia were cut through by the woman's shovel.
A Georgian interior ministry spokesman said a 75-year-old woman - dubbed the 'spade-hacker' locally - had admitted she damaged the fibre-optic cables while she was scavenging for copper wire in the village of Ksani, not far from the capital Tbilisi.
She reportedly faces up to three years in prison after being charged with damaging property.
'Taking into account her advancing years, she has been released pending the end of the investigation and subsequent trial,' interior ministry spokesman Zura Gvenetadze told the AFP news agency.
The cables were owned by the Georgian Railway Telecom company, and serve eastern Georgia, Armenia and Azerbaijan. Giorgi Ionatamishvili, a spokesman for the firm, said customers suffered 'massive and catastrophic' damage. Parts of Georgia and Azerbaijan were also cut off.
Georgia provides 90 percent of Armenia’s Internet.
'We don't know how she found the optic cable, which was secure,' Ionatamishvili said, adding that severe weather and mudslides may have exposed the cable to copper hunters.
All three internet providers in Armenia - ArmenTel, FiberNet Communication and GNC-Alfa - were unable to provide their usual service on the evening of 28 March, Armenia’s Arka news agency reported.
Services were eventually restored after midnight.
For the next day and change, until March 24th, 5pm Japan time (March 24th, 3am EST), Tokyoflash is donating 100% of the value of all purchases, including shipping and handling, to the Japanese Tsunami Relief Fund. Tokyoflash is a unique company that manufactures "wearable art" that only slightly resembles a normal watch in how you wear it.
Link to Tokyoflash Official Website:
"By hooking a Kinect sensor up to one of those funky spherical Pufferfish displays, The Technology Studio in the U.K. has built itself a desktop version of the unblinking Eye of Sauron, which follows you around with its gaze. It's almost creepy enough to make you want to turn invisible.
Pufferfish displays, called PufferSpheres, use an internal projector and some kind of totally sweet savory lens called an Umami lens that can toss the projection onto a nearly seamless 360 degree sphere. By wiring the projector up to a Kinect sensor, the Great Lidless Eye can follow you as you move around the room:"
Link to website:
"..March 5 at dawn, National Geographic Channel and a team of scientists, engineers, and two world-class balloon pilots successfully launched a 16' x 16' house, 18' tall, with 300 8' colored weather balloons from a private airfield east of Los Angeles, setting a new world record for the largest balloon cluster flight ever attempted. The entire experimental aircraft was more than 10 stories high, reached an altitude of over 10,000 feet, and flew for approximately one hour.
The filming of the event, from a private airstrip, will be part of a new National Geographic Channel series called How Hard Can it Be?, which will premiere in fall 2011."
Further info and pictures:
Today’s entry is our first guest blog. It follows naturally from the last entry on how our eyes scan and sample images. Tim Smith is a psychological researcher particularly interested in how movie viewers watch. You can follow his work on his blog Continuity Boy and his research site.
I asked Tim to develop some of his ideas for our readers, and he obliged by providing an experiment that takes off from my analysis of staging in one scene of There Will Be Blood, posted here back in 2008. The result is almost unprecedented in film studies, I think: an effort to test a critic’s analysis against measurable effects of a movie. What follows may well change the way you think about visual storytelling.
Tim’s colorful findings also suggest how research into art can benefit from merging humanistic and social-scientific inquiry. Kristin and I thank Tim for his willingness to share his work.
Tim Smith writes:
David’s previous post provided a nice introduction to eye tracking and its possible significance for understanding film viewing. Now it is my job to show you what we can do with it.
Continuity errors: How they escape us
Knowing where a viewer is looking is critical to beginning to understand how a viewer experiences a film. Only the visual information at the centre of attention can be perceived in detail and encoded in memory. Peripheral information is processed in much less detail and mostly contributes to our perception of space, movement and general categorisation and layout of a scene.
The incredibly reductive nature of visual attention explains why large changes can occur in a visual scene without our noticing. Clear examples of this are the glaring continuity errors found in some films. Lighting that changes throughout a scene, cigarettes that never burn down, and drinks that instantly refill plague films and television but we rarely notice them except on repeated or more deliberate viewing. In my PhD thesis I created a taxonomy of continuity errors in feature films and related them to various failings during pre-production, filming, and post-production.
Our inability to detect continuity errors was elegantly demonstrated in a study by Dan Levin and Dan Simons. In their study continuity errors were purposefully introduced into a film sequence of two women conversing across a dinner table. If you haven’t seen it before, watch the video here before continuing, and see how many continuity errors you can spot.
Two frames from the clip used by Levin and Simons (1997). Continuity errors were deliberately inserted across cuts (e.g., the disappearing scarf), and viewers were asked after watching the video whether they noticed any.
The short clip contained nine continuity errors, such as a scarf that changed colour and then disappeared, plates that changed colour, and hands that changed position. During the first viewing, viewers were told to pay close attention but were not informed about the continuity errors. When asked afterwards if they noticed anything change, only one participant reported seeing anything, and that was a vague sense that the posture of the actors had changed. Even during a second viewing in which they were instructed to detect changes, viewers detected an average of only 2 out of the 9 changes, and tended to notice changes closest to the actors' faces, such as the scarf.
Although Levin and Simons did not record viewer eye movements, my own experiments investigating gaze behaviour during film viewing indicate that our eyes are mostly focussed on faces and spend virtually no time on peripheral details. If you as a viewer don't fixate a peripheral object such as the plate, you are unable to represent the colour of the plate in memory and therefore cannot detect the change in colour when you later refixate it.
To see how reductive and tightly focused our gaze is whilst watching a film, consider Paul Thomas Anderson’s There Will Be Blood (TWBB; 2007). In an earlier post, David used a scene from this film as an example of how staging can be used to direct viewer attention without the need for editing.
The scene depicts Paul Sunday describing the location of his family farm on a map to Daniel Plainview, his partner Fletcher Hamilton, and his son H.W. The entire scene is treated in a long, static shot (with a slight movement in at the beginning). Most modern film and television productions would use rapid editing and close-up shots to shift attention between the map and the characters within this scene. This frenetic style of filmmaking–which David termed intensified continuity in his book The Way Hollywood Tells It (2006)–breaks a scene down into a succession of many viewpoints, rapidly and forcefully presented to the viewer.
Intensified continuity is in stark contrast to the long-take style used in this scene from TWBB. The long-take style, which was common in the 1910s and recurred at intervals after that period, relies more on staging and compositional techniques to guide viewer attention within a prolonged shot. For example, lighting, colour, and focal depth can guide viewer attention within the frame, prioritising certain parts of the scene over others. However, even without such compositional techniques, the director can still influence viewer attention by co-opting natural biases in our attention: our sensitivity to faces, hands, and movement.
In order to see these biases in action during TWBB we need to record viewer eye movements. In a small pilot study, I recorded the eye movements of 11 adults using an Eyelink 1000 (SR Research) eyetracker. This eyetracker uses an infrared camera to accurately track the viewer’s pupil every millisecond. The movements of the pupil are then analysed to identify fixations, when the eyes are relatively still and visual processing happens; saccadic eye movements (saccades), when the eyes quickly move between locations and visual processing shuts down; smooth pursuit movements, when we process a moving object; and blinks.
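The fixation/saccade parsing described above can be approximated with a simple velocity-threshold classifier. The sketch below is a minimal illustration, not the Eyelink's proprietary parser: the 1000 Hz sampling rate matches the tracker mentioned in the text, but the pixel-velocity threshold and the "saccade"/"fixation" labels are my own assumptions for demonstration.

```python
def classify_gaze(samples, hz=1000, threshold_px_per_s=1000.0):
    """Label each gaze sample as 'fixation' or 'saccade'.

    samples: list of (x, y) pixel positions, one per sample (here, 1000 Hz).
    A sample counts as a saccade when point-to-point velocity exceeds
    the (illustrative) threshold; otherwise it is part of a fixation.
    """
    labels = []
    for i, (x, y) in enumerate(samples):
        if i == 0:
            labels.append("fixation")
            continue
        px, py = samples[i - 1]
        dist = ((x - px) ** 2 + (y - py) ** 2) ** 0.5  # pixels moved since last sample
        velocity = dist * hz                           # pixels per second
        labels.append("saccade" if velocity > threshold_px_per_s else "fixation")
    return labels

# Ten stable samples, a rapid 50-pixel jump, then stable again:
gaze = [(100, 100)] * 10 + [(150, 100)] * 11
print(classify_gaze(gaze)[9:12])  # → ['fixation', 'saccade', 'fixation']
```

Real parsers work in degrees of visual angle and also handle smooth pursuit and blinks, but the velocity principle is the same: visual processing happens in the slow stretches, and shuts down during the fast jumps.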
Eye movements on their own can be interesting for drawing inferences about cognitive processing, but when thinking about film viewing, where a viewer looks is of most interest. As David demonstrated in his last post, analysing where a viewer looks whilst viewing a static scene, such as Repin's painting An Unexpected Visitor, is relatively simple. The gaze of a viewer can be plotted on to the static image, and the time spent looking at each region, such as a character's face or an object in the scene, can be measured.
However, when the scene is moving, it is much more difficult to relate the gaze of a viewer on the screen to objects in the scene. To overcome this difficulty, my colleagues and I developed new visualisation techniques and analysis tools. These efforts were part of a large project investigating eye movement behaviour during film and TV viewing (Dynamic Images and Eye Movements, what we call the DIEM project). These techniques allow us to capture the dynamics of gaze during film viewing and display it in all its fascinating, frenetic glory.
To begin, the gaze location of each viewer is placed as a point on the corresponding frame of the movie. The point is represented as a circle with the size of the circle denoting how long the eyes have remained in the same location, i.e. fixated that location. We then add the gaze location of all viewers on to the same frame. Although the viewers watched the clip at different times, plotting all viewers together allows us to look for similarities and differences between where people look and when they look there. This figure shows the gaze location of 8 viewers at one moment in the scene. (The remaining 3 viewers are blinking at this moment.)
A snapshot of gaze locations of 8 viewers whilst watching the “map” sequence from There Will Be Blood (2007). Each green circle represents the gaze location of one participant, with the size of the circle indicating how long the eyes have been in fixation (bigger equals longer).
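The overlay just described, one circle per viewer with size standing for dwell time, can be sketched in a few lines. The grouping rule below (a fixation ends once gaze drifts more than 5 pixels from where it started) and the 1000 Hz rate are illustrative assumptions, not the actual DIEM visualisation pipeline.

```python
def gaze_to_circles(samples, hz=1000, max_drift_px=5.0):
    """Group consecutive near-stationary gaze samples into fixations.

    Returns (x, y, duration_ms) triples; when plotting, duration_ms
    would set the radius of the circle drawn on the movie frame.
    """
    circles = []
    start = 0
    for i in range(1, len(samples) + 1):
        ended = (
            i == len(samples)  # ran out of samples: close the last fixation
            or abs(samples[i][0] - samples[start][0]) > max_drift_px
            or abs(samples[i][1] - samples[start][1]) > max_drift_px
        )
        if ended:
            x, y = samples[start]
            duration_ms = (i - start) * 1000 // hz
            circles.append((x, y, duration_ms))
            start = i
    return circles

# 300 ms at one location, then 200 ms at another:
print(gaze_to_circles([(100, 100)] * 300 + [(200, 150)] * 200))
# → [(100, 100, 300), (200, 150, 200)]
```

Running this per viewer and drawing the resulting circles on the corresponding frame gives the kind of snapshot shown above: the bigger the circle, the longer that viewer's eyes have rested there.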
You have a roving eye
Plotting static gaze points onto a single frame of the movie allows us to see what viewers were looking at in a particular frame, but we don’t get a true sense of how we watch movies until we animate the gaze on top of the movie as it plays back. Here is a video of the entire sequence from TWBB with superimposed gaze of 11 viewers.
You can also see it here. The main table-top map sequence we are interested in begins at 3 minutes, 37 seconds.
The most striking feature of the gaze behaviour when it is animated in this way is the very fast pace at which we shift our eyes around the screen. On average, each fixation is about 300 milliseconds in duration. (A millisecond is a thousandth of a second.) Amazingly, that means that each fixation of the fovea lasts only about 1/3 of a second. These fixations are separated by even briefer saccadic eye movements, taking between 15 and 30 milliseconds!
Looking at these patterns, our gaze may appear unusually busy and erratic, but we’re moving our eyes like this every moment of our waking lives. We are not aware of the frenetic pace of our attention because we are effectively blind every time we saccade between locations. This process is known as saccadic suppression. Our visual system automatically stitches together the information encoded during each fixation to effortlessly create the perception of a constant, stable scene.
In other experiments with static scenes, my colleagues and I have shown that even if the overall scene is hidden 150 milliseconds into every fixation, we are still able to move our eyes around and find a desired object. Our visual system is built to deal with such disruptions and to perceive a coherent world from fragments of information encoded during each fixation.
The second most striking observation you may have about the video is how coordinated the gaze of multiple viewers is. Most of the time, all viewers are looking in a similar place. This is a phenomenon I have termed Attentional Synchrony. If several viewers examine a static scene like the Repin painting discussed in David’s last post, they will look in similar places, but not at the same time. Yet as soon as the image moves, we get a high degree of attentional synchrony. Something about the dynamics of a moving scene leads to all viewers looking at the same place, at the same time.
The main factors influencing gaze can be divided into bottom-up involuntary control by the visual scene and top-down voluntary control by the viewer’s intentions, desires, and prior experience. As part of the DIEM project we were able to identify the influence of bottom-up factors on gaze during film viewing using computer vision techniques. These techniques allowed us to dissect a sequence of film into its visual constituents such as colour, brightness, edges, and motion. We found that moments of attentional synchrony can be predicted by points of motion within an otherwise static scene (i.e. motion contrast).
You can see this for yourself when you watch the gaze video. Viewers’ gazes are attracted by the sudden appearance of objects, moving hands, heads, and bodies. The greater the motion contrast between the point of motion and the static background, the more likely viewers will look at it. If there is only one point of motion at a particular moment, then all viewers will look at the motion, creating attentional synchrony.
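The prediction described here, that gaze lands on the point of greatest motion contrast, can be illustrated with simple frame differencing. This toy sketch stands in for the computer-vision features the DIEM project used; the 4x4 brightness grid and the "brightening pixel" stimulus are invented for demonstration.

```python
def motion_map(prev_frame, frame):
    """Absolute per-pixel brightness change between consecutive frames."""
    return [
        [abs(a - b) for a, b in zip(prev_row, row)]
        for prev_row, row in zip(prev_frame, frame)
    ]

def predicted_gaze(prev_frame, frame):
    """Predict gaze as the (x, y) pixel with the highest motion contrast,
    i.e. the lone moving point in an otherwise static scene."""
    motion = motion_map(prev_frame, frame)
    return max(
        ((x, y) for y in range(len(motion)) for x in range(len(motion[0]))),
        key=lambda p: motion[p[1]][p[0]],
    )

# A static 4x4 scene in which a single pixel (a moving hand, say) changes:
prev = [[10] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
curr[2][1] = 200
print(predicted_gaze(prev, curr))  # → (1, 2)
```

With one point of motion, every viewer's predicted gaze collapses onto the same pixel, which is exactly the attentional synchrony described above; with several competing points of motion, the prediction (and the viewers) would split.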
This is a powerful technique for guiding attention through a film. But it is of course not unique to film. Noticing points of motion is a natural bias we have evolved by living in the real world. If we were not sensitive to peripheral motion, then the tiger in the bushes might have killed our ancestors before they had a chance to pass their genes down to us.
But points of motion do not exist in film without an object executing the movement. This brings us to David’s earlier analysis of the staging of this sequence from TWBB. This might be a good time to go back and read David’s analysis before we begin testing his hypotheses with eyetracking. Is David right in predicting that, even in the absence of other compositional techniques such as lighting, camera movement, and editing, viewer attention during this sequence is tightly controlled by staging?
All together now
To help us test David’s hypotheses I am going to perform a little visualisation trick. Making sense of where people are looking by observing a swarm of gaze points can often be very tricky. To simplify things we can create a “peekthrough” heatmap. A virtual spotlight is cast around each gaze point. This spotlight casts a cold, blue light on the area around the gaze point. If the gazes of multiple viewers are in the same location their spotlights combine and create a hotter/redder heatmap. Areas of the frame that are unattended remain black. By then removing the gaze points but leaving the heatmap we get a “peekthrough” to the movie which allows us to clearly see which parts of the frame are at the centre of attention, which are ignored and how coordinated viewer gaze is.
Here is the resulting peekthrough video; also available here. The map sequence begins at 3:38.
Here is the image of gaze location I showed above, now matched to the same frame of the peekthrough video.
The gaze data from multiple viewers is used to create a “peekthrough” heatmap in which each gaze location shines a virtual spotlight on the film frame. Any part of the frame not attended is black, and the more viewers look in the same location, the hotter the color.
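The heatmap construction just described can be sketched directly: each gaze point casts a Gaussian "spotlight", overlapping spotlights sum, and unattended regions stay at zero (black). The grid size and spotlight radius below are arbitrary illustrative choices, not the parameters of the actual peekthrough videos.

```python
import math

def peekthrough_heatmap(gaze_points, width, height, radius=3.0):
    """Sum a Gaussian 'spotlight' around each viewer's gaze point.

    Where several viewers' spotlights overlap, the map runs hotter;
    pixels far from every gaze point stay near zero. Multiplying each
    movie-frame pixel by its (clipped) heat value would black out the
    unattended parts of the frame, giving the peekthrough effect.
    """
    heat = [[0.0] * width for _ in range(height)]
    for gx, gy in gaze_points:
        for y in range(height):
            for x in range(width):
                d2 = (x - gx) ** 2 + (y - gy) ** 2
                heat[y][x] += math.exp(-d2 / (2 * radius ** 2))
    return heat

# Two viewers fixating the same spot, one viewer elsewhere:
h = peekthrough_heatmap([(5, 5), (5, 5), (15, 5)], width=20, height=10)
print(h[5][5] > h[5][15] > 0)  # True: the shared location is hotter
```

This is the sense in which the hotter/redder regions of the videos below measure attentional synchrony: heat is literally a count of how many spotlights overlap at that point.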
David’s first hypothesis about the map sequence is that the faces and hands of the actors command our attention. This is immediately apparent from the peekthrough video. Most gaze is focused on faces, shifting between them as the conversation switches from one character to another.
The map receives a few brief fixations at the beginning of the scene but the viewers quickly realise that it is devoid of information and spend the remainder of the scene looking at faces. The only time the map is fixated is when one of the characters gestures towards it (as above).
We can see the effect of turn-taking in the conversation on viewer attention by analyzing a few exchanges. The sequence begins with Paul pointing at the map and describing the location of his family farm to Daniel. Most viewers’ gazes are focused on Paul’s face as he talks, with some glances to other faces and the rest of the scene. When Paul points to the map, our gaze is channeled between his face and what he is gazing/pointing at.
Such gaze prompting and gesturing are powerful social cues for attention, directing attention along a person's sightline to the target of their gaze or gesture. Gaze cues form the basis of many editing conventions, such as the match on action, shot/reverse-shot dialogue pairings, and point-of-view shots. However, in this scene gaze cuing is used in its more natural form, to cue viewer attention within a single shot rather than across cuts.
As Paul finishes giving directions, Daniel asks him a question which immediately results in all viewers shifting the gaze to Daniel’s face. Gaze then alternates between Daniel and Paul as the conversation passes between them. The viewers are both watching the speaker to see what he is saying and also monitoring the listener’s responses in the form of facial expressions and body movement.
Daniel turns his back to the camera, creating a conflict between where viewers want to look (Daniel's face) and what they can see (the back of his head). As David rightly predicted, removing the current target of our attention increases the probability that we attend to other parts of the scene, such as H.W., who up until this point has not played a role in the interaction. Viewers begin glancing towards H.W. and then quickly shift their gaze to him when he asks Paul how many sisters he has.
Gaze returns to Paul as he responds.
Gaze shifts from Paul to Daniel as he asks a short question, and then moves to Fletcher as he joins the conversation.
The quick exchanges of dialogue ensure that viewers only have enough time to shift their gaze to the speaker and then shift to the respondent. When gaze dwells longer on a speaker, such as during the exchange between Fletcher and Paul, there is an increase in glances away from the speaker to other parts of the scene such as the other silent faces or objects.
An object that receives more fixations as the scene develops is Paul’s hat, which he nervously fiddles with. At one point, when responding to Fletcher’s question about what they grow on the farm, Paul glances down at his hat. This triggers a large shift of viewer gaze, which slides down to the hat. Likewise, a subtle turn of the head creates a highly significant cue for viewers, steering them towards what Paul is looking at while also conveying his uneasiness.
The most subtle gesture of the scene comes soon after as Fletcher asks about water at the farm. Paul states that the water is generally salty and as he speaks Fletcher shifts his eyes slightly in the direction of Daniel. This subtle movement is enough to cue three of the viewers to shift their gaze to Daniel, registering their silent exchange.
This small piece of information seems critical to Daniel and Fletcher’s decision to follow up Paul’s lead, but its significance can be registered by viewers only if they happened to be fixating Fletcher at the time he glanced at Daniel. The majority of viewers are looking at Paul as he speaks and they miss the gesture. For these viewers, the significance of the statement may be lost, or they may have to deduce the significance either from their own understanding of oil prospecting or other information exchanged during the scene.
The final and most significant gesture of the scene is Daniel’s threatening raised hand. As Paul goes to leave, Daniel stalls him by raising his hand centre frame in a confusing gesture hovering midway between a menacing attack and a friendly handshake. In David’s earlier post he predicted that the hand would “command our attention.” Viewer gaze data confirm this prediction. Daniel draws all gazes to him as he abruptly states “Listen….Paul,” and lifts his hand.
Gaze then shifts quickly; the raised hand becomes a stopping off point on the way to Paul’s face. . .
. . . finally following Daniel’s hand down as he grasps Paul’s in a handshake.
We like to watch
The rapid sequence of actions clearly guides our attention around the scene: Daniel – Hand – Paul – Hand. David's analysis of how the staging in this scene tightly controls viewer attention was spot-on and can be confirmed by eyetracking. At any one moment in the scene there is a principal action signified either by dialogue or motion. By minimising background distractions and staging the scene in a clear sequential manner using basic principles of visual attention, P. T. Anderson has created a scene which commands viewer attention as precisely as a rapidly edited sequence of close-up shots.
The benefit of using a single long shot is the illusion of volition. Viewers think they are free to look where they want, but, due to the subtle influence of the director and actors, where they want to look is also where the director wants them to look. A single static long shot also creates a sense of space, a clear relationship between the characters, and a calm, slow pace which is critical for the rest of the film. The same scene edited into close-ups would have left the viewer with a completely different interpretation of the scene.
I hope I’ve shown how some questions about film form, style, practice, and spectatorship can be informed by borrowing theory and methods from cognitive psychology. The techniques I have utilised in recording viewer gaze and relating it to the visual content of a film are the same methods I would use if I were conducting an experiment on a seemingly unrelated topic such as visual search. (See this paper for an example.)
The key difference is that the present analysis is exploratory and simply describes the viewing behaviour during an existing clip. What we cannot conclude from such a study is which aspects of the scene are critical for the gaze behaviour we observe. For instance, how important is the dialogue for guiding attention? To investigate the contribution of individual factors such as dialogue we need to manipulate the film and test how gaze behaviour changes when we add or remove a factor. This type of empirical manipulation is critical to furthering our understanding of film cognition and employing all of the tools cognitive psychology has to offer.
But I expect an objection. Isn’t this sort of empirical inquiry too reductive to capture the complexities of film viewing? In some respects, yes. This is what we do. Reducing complex processes down to simple, manageable, and controllable chunks is the main principle of empirical psychology. Understanding a psychological process begins with formalizing what it and its constituent parts are, and then systematically manipulating and testing their effect. If we are to understand something as complex as how we experience film we must apply the same techniques.
As in all empirical psychology, the danger is always that we lose sight of the forest whilst measuring the trees. This is why the partnership between film theorists and empiricists like myself is critical. The decades of film theory, analysis, practice and intuition provide the framework and “Big Picture” to which we empiricists contribute. By joining forces and combining perspectives, we can aid each other’s understanding of the film experience without losing sight of the majesty that drew us to cinema in the first place.
5' x 3' x 2'
50k - 60k pieces
Colors used: black, white, dark gray, bluish gray, trans-clear, and trans-black.
No foreign materials (wood, glue, paint or otherwise) were used – this is pure Lego.
Photo retouching was used only for adding contrast, color correction, and the background.
Approx 450 hours to build
Second in my series of Abandoned Houses
Abandoned houses offer unique opportunities from a visual point of view. The deterioration transforms materials. Texture on top of texture. New patterns overtaking old ones. Nature repossessing. This textural aspect to deterioration and the patterns that it creates can be rich and fascinating to look at.
I also find the experience of seeing a deteriorated house (or any familiar object) interesting. When looking at the image we see a dual image of the house – one as it is, and one as it was. You see a huge hole in the side of the house not just as a hole, but also as an interruption of the known. And so the mind seeks to recreate the known. We fill in the holes. We project. Our eyes follow the angle of the broken awning to a point, now destroyed, and we can feel the mass that was the front of the third floor. The same with the porch covering. This visual duality – the mind flipping between destruction and pre-destruction – is magic. It's entertaining and engaging.
Many ask me how I go about planning and building these pieces. Sadly, I tend to be a 'messy' planner, so I do not make any blueprints or basic construction drawings. Rather, I just get to work. I start by researching photos I find online. Generally, I find a house with a feel I would like to recreate. I also look for others that have specific moments of deterioration that I find interesting. In this case, I also researched houses that had been smashed by fallen trees. Next, I take a look at other MOCs to see if there are any special techniques I can use based on the subject matter.
Now for the size. I look at the buildings for objects that I would like to recreate with a particular piece. In this case, the scale was determined by the size of the bricks. One real-life brick is almost the same size as a 1 x 2 tile – the 1 x 2 tile being a little bigger, but not by much. From there, I count out the bricks on the building to determine width and height, and use rudimentary measuring tools, like a pencil or thumb held up to gauge relative proportions between the real thing and my work. In this way, I can make sure all is on track. I've tried plotting everything out on paper and using measurements, but inevitably I mess up somewhere along the line with the numbers and then have to start over again. Thus, I tend to just 'wing it'.
In this series, I am most interested in textures and the effect of layering textures over each other. To this end, the absence of color helps the viewer to focus on just this. Lego colors tend to be pretty harsh and unrealistic for my tastes, so I stick to black/white and grays. Without color, we dive right into form, which is where I want you to be.
The tree was the most difficult texture to determine. I had thought that reversing the bricks – to show their backs – worked best (you can see this in my previous post with the detail of the tree trunk). But very late into the process, a friend advised me that it didn't look as real as everything else. What to do? Spend a week rebuilding the tree, and perhaps money for more bricks, or leave well enough alone? In the end, I found that hinge cylinders worked well to describe bark texture. Strung together, they conform to all sorts of organic configurations. Additionally, they could be skinned onto the trees that I had already built, so I would not have to rebuild or spend much more money. Whew! It's not perfect, and I hope to try something similar but different in future, but for now it seemed pretty effective. The branches were created with ridged 3mm hose and a variety of droid arms, as well as other Technic connectors, to give the appearance of branches.
|Cylinder hinges were used to give the tree texture and a more organic form than bricks were offering me.|
I also had difficulty creating the burned-out area coming from the middle floor's window. Lego does not provide a good variety of grays to blend, so I ended up using some trans-black tiles to help smooth out the difference.
|A lever (control stick) on left used for grass and droid arm used for weeds and branches. Thousands of each of these were used in the landscape of this piece.|
From the start, the ground texture was of primary importance. I wanted to create a dense textural experience here that would dazzle and sparkle. I ended up using levers for grass and droid arms for branches and weeds. There are thousands of each, to hopefully capture the unevenness of an unattended yard. This wild growth also allowed for some nice irregularity to break free of the mass of the base and into the background void. In this way, they soften the piece a bit. The bushes on the left and right of the foreground were also much fun to make. Very quick (perhaps 30 minutes each) and effective. Each one of the four bushes must have a hundred or more droid arms!
The hardest technical aspect of the piece was the roof – in particular, dealing with the seams where the four sides meet. For the photo it is fine, as the shot does not show the imperfections of the joints. Still, it would be nice to understand how to better manage it. In addition to 2 x 2 tiles, I used diver flippers as a second shingle type. It's not original, but it is nonetheless effective.
As I've mentioned before, I love looking at things through other things. So, I seek out opportunities to set up situations where there is a sort of layering and openness to structure, giving the viewer a peek into another space. An instance of this is the way the tree overlaps the porch, and then the porch contains a door which is open, looking into another space. One enters, then enters, and then enters again.
Hope you enjoy!
You can see more pictures and other projects from Mike at http://mikedoylesnap.blogspot.com/
A samurai/artist/performer choreographically interacts with a projection, resulting in a performance where it appears that the samurai is fighting a 'spirit' and, eventually, another samurai. Quite an interesting idea! (I could not find an article or any further information on this piece.)
From Mr. Beam:
"We created a unique physical 3D video mapping experience by turning a white living room into a spacious 360° projection area.
This technique allowed us to take control of all colors, patterns and textures of the furniture, wallpapers and carpet. All done with 2 projectors."
Be sure to check out the video of the room and other projects at Mr. Beam's official website:
<iframe src="http://player.vimeo.com/video/18460233?title=0&byline=0&portrait=0&color=ffca00" width="400" height="225" frameborder="0"></iframe><p><a href="http://vimeo.com/18460233">Living Room</a> from <a href="http://vimeo.com/mrbeam">Mr.Beam</a> on <a href="http://vimeo.com">Vimeo</a>.</p>