Shooting and Editing Abstract Video

Click here for the link to the video.

The shoot

The shoot was difficult because I wasn’t quite sure what was expected, nor did I have the time to experiment properly. My biggest issue was achieving good white balance: despite repeated attempts, I struggled to capture a well color-balanced image. However, because of previous chances to use the Z5, I had little trouble setting up the camera in other respects.

I also had trouble capturing abstract footage. This was further impeded by having to leave early to film for another subject, and by the aforementioned white balance issue. In the end I had only about ten minutes to choose and shoot footage, most of it around the vicinity of the classroom. Looking at footage from other classmates, I could see how I had boxed myself into camera angles close to my eye level, mostly wide shots containing lots of different movement within the frame; some of the other groups used close-ups or extremely low angles that capture one small movement, rather than trying to have a dozen things happening at once.

The edit

Editing abstractly was a new process for me, one that required more lateral thinking. It was helpful to have the framework of a haiku to work with; however, I found it easier to create abstract meaning between images, and between images and sound. For example, when the climber is seen climbing, I overlaid the text ‘Climb Mount Fuji’, and images of traffic stuck at the lights are juxtaposed with water flowing freely through a grate. In this way I let the relationship between the images (that is, the closure created by the viewer) tell the narrative instead of forcing a story onto them. This way of storytelling also allowed each individual part to tell its own story through the clips and audio around it, rather than the whole piece telling just one story.

Video pw: industrialmedia

(Please note: at the time of publishing this post, I was having an issue with Vimeo where it showed the video as still uploading, despite my having begun the upload two days prior. I am aiming to try re-uploading the video when I am on campus again on Wednesday. I’m not quite sure why this happened, as when I was uploading from uni it said the upload was complete, so I’d left it as done and dusted. I apologize, and I hope you don’t consider it a late submission.)

The Director and the Actor/Notes to the Cinematographer

While previously I’ve focused on the technical aspects of pre-production and production, this time I will examine the importance of actors during production. I will also engage with the creative and technical differences between focusing on sound and focusing on sight.

In my reading of Mackendrick’s ‘The Director and the Actor’ (2004), two points stood out to me as interesting: the idea that actors should have a general technical understanding of how production works, and the use of props to draw a more natural performance from the actors.

The idea that an actor should have “the unselfconscious and automatic ability to adjust to the position of the camera, a sense of its place…and an understanding of continuity” (180) is a concept I have only now read in print, but something I’d always thought should be the case. Just as a stage performer needs to understand voice projection or blocking, a screen actor’s repertoire should include this awareness of camera angles. While there are obviously dedicated roles for these factors on set – the camera operator, the continuity person and so on – the process would be much easier if the actor were, as Mackendrick suggests, like an athlete in the automatic physical responses of their acting. Similarly, because the screen allows the editor to cut closer or intercut between visual aids, actors need to be aware of constant scrutiny across every facet of their visual presentation; they have to not only be “imagining” their characters (179-180) but also perform every part of that character in relation to the camera and, more importantly, to the possibilities the editor may create from their acting. As Mackendrick notes, subtext and nuance created in post-production editing rely on the actor understanding that a meaningful glance may be conveyed through a series of cuts, not necessarily a moment of silence as the action is performed (181); the actor would therefore have to understand not only camera angles, but also how their action could be used to maximum effect when cut between different angles. In this sense, Mackendrick’s assertion that an actor cannot afford to be uninformed about the technical facets of production is incredibly insightful.

However, if an actor is not skilled, or, as Mackendrick puts it, too “self-conscious” (or, inversely, too overconfident) (186), there are ways in which the director can create a natural frame for the actor in relation to the camera. With props, the director can essentially direct the actor towards a certain mark and have them stay there for the duration while using or interacting with the prop. The prop, Mackendrick explains, has no inherent significance to the narrative, yet can occupy the actor’s focus and overall body language in a more natural way (186). For example, when fiddling with a bottle top, or doing something like dressing themselves, the actor can much better pace and organize the way they shift focus between their co-star and the prop or action at hand. It can also negate any over-the-top or unnaturally dedicated focus the actors may have on each other while on screen, thereby avoiding a “pretentious” cinema moment (187). This is a point I will take with me when I watch TV or film from now on. Especially in important dramatic moments, or even in inane dialogue, I will be watching for props that have no narrative meaning – drinking coffee in the morning may carry some narrative meaning in the sense that the character is tired, but picking at their toast probably has none – yet help create a paced and natural way for the actors to speak to each other. This is also something I wish I had read sooner, because in my short film for another subject, which featured dialogue-heavy chunks, I didn’t direct the actors to focus on a prop (or, for that matter, provide many props), which will prove a challenge over the next few weeks of editing.

In Bresson’s ‘Notes on the Cinematographer’ (1986), a clear distinction is drawn between the virtues of sound and those of vision. “What is for the eye must not duplicate what is for the ear” (50) in particular draws attention to the separate but equally important roles each plays in a screenplay. This mirrored something from my screenwriting lectures, where I was told not to say what can be acted. This in turn translates to sound and sight respectively: we do not need to hear what can be seen (dialogue versus action), yet we do not have to see what can be conveyed much more clearly with sound (a series of actions where hearing alone suffices). Having not had much experience dealing with sound, I missed out on thinking about the visual and the sonic as separate parts of a greater whole in my productions this semester.

Bresson also makes the point that “the ear goes more towards the within, the eye towards the outer” (51), which extends the idea that sight and sound speak to different parts of the mind. By this logic, if a scene calls for a lot of emotion, whether to reflect the character on screen or to evoke it in the audience, it is more effective to focus on a richer soundtrack than, for example, to show a lot of emotion-evoking imagery. However, the two should also work in sync: to establish that a character is energetic and cheerful, it’s better to show the character being upbeat, using dynamic shots and cuts, alongside an upbeat soundtrack with a fast tempo or a series of quick sounds (that is, not long, slow ones).

These ideas, while new, are ones which in some ways I’ve already implemented in my work. However, in future works I plan to pay even closer attention to these factors.

Nostalgia for the Light – An Analysis

Watching a small clip from Guzmán’s Nostalgia for the Light (2010), the most captivating moment for me was when the high-contrast black and white images of the craters on the moon fade to a black and white kaleidoscope of the shadows of leaves on a kitchen window. It was at this point that I properly understood the power of visual juxtaposition in this piece.

Nostalgia for the Light relies very heavily on montaging juxtaposed clips with a voice-over narrative. The images are carefully composed and are mostly of still subjects, with hints of movement only when the subjects in the background actually move, such as the shifting of light and shadow, or the leaves of a tree swaying in the wind. This lends a stillness to the film’s aesthetic: while the cuts are not long, they are not rapid or dynamic either. The framing is extremely intimate, with close to extreme close-ups of the inanimate subjects. The lighting on these subjects (all either everyday household items or astronomy instruments) is very warm, and even where there are strong shadows, the subjects are never rendered alienating or intimidating. The colors are also very vibrant and saturated. This contrasts strikingly with the stark, monochrome images of the moon at the beginning of the clip.

The audio is extremely minimalistic, the focal sound being the narration or, when there is no narration, diegetic foreground noises – e.g. the grinding of the gears on the telescope, the squeaking of the hatch opening on the roof. Beneath this, as ground sound, there is the sound of nature, complementing the environment of the frame during the montage of household items. These nature sounds include the chirping of birds and light breeze noises; they may or may not be diegetic, but are more likely to have been mixed in during post-production.

Narratively speaking, it was a little confusing trying to understand the content of the narration in relation to the visuals. Logically, there is little correlation between what the narrator discusses (Chile, astronomy and social revolution) and the images we are shown (household items such as furniture, pictures on the wall and so on). It isn’t until the montage moves to the dusty remains of the observatory that a more coherent correlation between visual and audio begins to form. Alternatively, it is possible that the visuals and the audio aren’t meant to correlate immediately, but exist to complement each other’s tone: the intimate, comfortable visuals complement the reminiscent nature of the narration, and the calm, slow tones of the speaker complement the softness of the images.

One other interesting moment is when specks of light and dust become superimposed onto a shot of a tree blowing in the wind. The colorful image (blue, yellow and green) fades into an almost monochromatic and impossibly detailed shot of specks of dust and bokeh floating in the air, which leads me to think the dust and bokeh were treated with special effects. It is also a beautiful transition from the urban setting of the house to the more foreign setting of the observatory. In a clip heavy on cuts between changing visuals, changing physical location with a fade seems a conscious effort to minimize a jarring transition from home life to astronomy.

Nostalgia for the Light contains many layers of deeper and inferred meaning, drawn from both aesthetics and content. It beautifully combines photographic cinematography and slow camera movements to create a sense of calm and stillness, and mixes this with a continuous but non-discordant narration and other background audio, both diegetic and non-diegetic to the visuals.

Conventions of Sound

Sound production, manipulation and mixing are areas in which I do not have much prior experience. Ruoff (1993) outlines some interesting and important factors to consider when working with sound in media, which I explore below.

Coming from a linguistics background, I found familiar Ruoff’s outline of how conventional spoken conversation differs greatly from scripted or written dialogue: spoken dialogue will always contain interjections and “hemming and hamming” as a natural and necessary part of lingual semiotics. It was very interesting to then expand this knowledge to the scope of documentaries, within the discourse of audience understanding of conventions. For example, if a conversation that was supposed to be candid, observational, ‘fly on the wall’ footage sounded too clean, too organized and orderly, the audience would immediately feel it was put on, regardless of whether the camera frames the subjects in a ‘hidden camera’ manner or gives any other visual cues. By the same token, if a face-to-face interview were too disorderly, with interjections of “uh huh” and other verbal prompts, it would be distracting to the viewer or listener. This explains the importance of recording through separate channels (and using directional microphones – shotgun mics), especially during an interview, so as to minimize any interjections, and have full control over them in post, should they occur.

It’s also interesting to learn interview techniques specific to the screen or to audio productions. My background is in written and published journalism, where interview technique mostly meant avoiding leading yes/no questions and allowing the subject to tell their own story in their own words rather than through your questions. This remains true for video and audio interviews, but more importantly, you are recording and interviewing so that the answers can be used in the final product. The practice of asking the interviewee to answer in a full sentence, for example, allows the editors in post to leave out the interviewer’s question. This obviously goes against a ‘natural’ conversational flow, but this is a discourse unique to interviews, where the staged theatrics of a ‘conversation’ are how we expect a successful and coherent interview to go.

The above points are hugely important to the projects I am working on this semester, but are also widely applicable when creating any kind of media product that includes sound.

Basic Audio Editing

When it comes to editing audio, the most important thing for newcomers like me is to know how the sounds are supposed to work with one another. For example, if you are editing a scene where a person is speaking and an ambulance drives by, it is important to know how the dialogue should interact with the ambulance.

On a very beginner level, and using basic programs such as Audacity, the main things to look out for in this situation would be the volume and the direction.

If you had a simple synthesized ambulance sound, then to create the audio space of an oncoming ambulance you would edit the sound to grow steadily louder. You would also need to make sure that the sound starts from one direction and is slowly panned towards the center as it becomes louder.
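For anyone curious how this works under the hood, the fade-and-pan idea can be sketched numerically. This is purely an illustrative sketch in Python with NumPy, my own stand-in for what Audacity’s envelope and pan tools do for you: a crude synthesized two-tone siren is faded up while being panned from hard left towards the center using a constant-power pan law. The sample rate, duration, frequencies and gain values are all arbitrary choices for the example.

```python
import numpy as np

SR = 44100  # sample rate in Hz (arbitrary but conventional)
DUR = 4.0   # clip length in seconds

def siren(sr=SR, dur=DUR):
    """A crude two-tone 'siren': alternates between two pitches twice a second."""
    t = np.arange(int(sr * dur)) / sr
    freq = np.where((t * 2).astype(int) % 2 == 0, 960.0, 770.0)
    # integrate frequency to get phase, so the pitch switches are click-free
    return np.sin(2 * np.pi * np.cumsum(freq) / sr)

def approach(mono, start_gain=0.05, end_gain=1.0, start_pan=-1.0, end_pan=0.0):
    """Fade a mono sound up while panning it from one side toward the centre.

    pan: -1 = hard left, 0 = centre, +1 = hard right (constant-power law).
    Returns a (samples, 2) stereo array.
    """
    n = len(mono)
    gain = np.linspace(start_gain, end_gain, n)   # gradual volume ramp
    pan = np.linspace(start_pan, end_pan, n)      # gradual pan sweep
    theta = (pan + 1) * np.pi / 4                 # map pan to 0..pi/2
    left = mono * gain * np.cos(theta)
    right = mono * gain * np.sin(theta)
    return np.stack([left, right], axis=1)

stereo = approach(siren())
```

Writing `stereo` out to a WAV file would let you audition the result; in Audacity itself, the equivalent is drawing a volume envelope on the track and adjusting the pan control over time.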

On a more advanced level, there is obviously a lot more to creating the sound of an ambulance driving past a speaker, because you would also need to edit in ground sounds, or sounds that speak to the context (such as other voices, the rush of cars on a road), as well as field sounds, or sounds in the atmosphere (such as wind or light rain).

(Ground and Field sounds are discussed in van Leeuwen’s 1999 Speech, Music and Sound, which offers a great way to think about the context and meaning of each sound depending on whether it is Ground, Field, or Figure – the Figure being the sound that is being actively listened to.)
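As a rough illustration of how Figure, Ground and Field layers might sit together in a mix, here is a hypothetical Python/NumPy sketch. The gain values and the stand-in signals (a tone for the Figure, noise beds for Ground and Field) are my own illustrative choices, not values taken from van Leeuwen or any mixing standard.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def mix(figure, ground, field, gains=(1.0, 0.4, 0.15)):
    """Mix three mono layers at descending levels: the Figure (actively
    listened to), the Ground (context, e.g. passing traffic) and the Field
    (atmosphere, e.g. wind). The gain values are illustrative only."""
    n = max(len(figure), len(ground), len(field))
    out = np.zeros(n)
    for track, g in zip((figure, ground, field), gains):
        out[:len(track)] += g * np.asarray(track)
    # normalise only if the summed signal would clip
    peak = np.abs(out).max()
    return out / peak if peak > 1.0 else out

# one second of example material: a tone standing in for dialogue,
# over two noise beds standing in for traffic and wind
t = np.arange(SR) / SR
voice = 0.8 * np.sin(2 * np.pi * 220 * t)
traffic = 0.5 * np.random.default_rng(0).standard_normal(SR)
wind = 0.3 * np.random.default_rng(1).standard_normal(SR)
bed = mix(voice, traffic, wind)
```

The point of the sketch is simply that the Figure dominates the mix while the Ground and Field sit underneath it at progressively lower levels, which is what a listener expects of a coherent audio space.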

Audacity is a fantastic beginner’s tool for audio editing. Because sound editing can be trickier and more nuanced than visual and graphic editing, a simple workspace like Audacity makes it easy to understand how to cut, copy and mix levels of sound. As a beginner in sound editing, I used Audacity for a podcast assignment in my undergraduate classes. It was extremely simple to import the audio materials I’d collected and recorded, and the menus follow the conventions of familiar programs such as Photoshop or MS Word, so buttons like export or save are where you expect them. It is probably simpler for beginners to use Audacity as their first sound-editing tool, and only then bring the edited sounds into a video editor such as Adobe Premiere, to lessen the jumble of materials on the ‘cutting board’.

Media Objects Audio Project

In response to the theme ‘catalyst’, the sounds that I chose to record and/or find online are ones that happen as a result of something else, and often create a change in the listener as well. Of the 7 sounds, the ones I recorded are ‘scream’, ‘bark’, ‘car horn’, and ‘drop’; the ones from the internet are ‘siren’, ‘smoke alarm’ and ‘car skid’.

Alten notes that ‘sound is a force…it can excite feeling, convey meaning’ (4), and iconic sounds are especially useful in doing this. All 7 sounds have instantly recognizable meanings, and their narrative context is easy for the listener to infer.

To break these down: listeners can discern a sense of urgency or agitation from ‘smoke alarm’ and ‘siren’ because of the high pitch and quick tempo of both sounds (10), whereas the sudden attack to a very high volume in ‘car horn’, ‘scream’ and ‘car skid’ creates sharpness, as well as noting danger or suddenness (11).

Finally, ‘drop’ and ‘bark’ are both very organic noises with uneven rhythm. ‘Drop’ has a very sudden attack followed by a slow decay as the item comes to rest on the floor. ‘Bark’, on the other hand, has an irregular rhythm, in which any pattern can be broken by the dog deciding to bark differently. An uneven rhythm denotes erraticism (10), as is the pattern with animals and dropped objects, whereas a slow decay denotes uncertainty (11).

I recorded using a personal note-taker, which means the button presses are audible. Those sounds aside, I took care to record each sound in its natural environment in order to create the proper audio space.

For example, ‘bark’ was recorded inside a house containing hard and soft objects that both absorb and reflect sound, creating a familiar indoor texture. I stayed stationary while the dog moved, which created perspective and direction (271). ‘Car horn’ was recorded inside a garage, where the echo from the metal walls created a very sharp timbre. ‘Drop’ was done in the kitchen using an aluminium bowl and tiles; the kitchen is full of hard surfaces, so the timbre was extremely cold. ‘Scream’ was intentionally done in a very large open area without many trees, to best create a big echo and a large distance between listener and sound.

I intended to avoid as much ground or field sound as possible, but this was not possible in ‘scream’, where there were factors beyond my control, such as other people.

The sounds I chose from elsewhere reflect the same intentions. ‘Siren’ is taken from a Japanese ambulance and contains several layers of siren in varying rhythms. I cannot know for certain, but I would guess that ‘smoke alarm’ was recorded in a very quiet room, and that ‘car skid’ was recorded in an open and empty car park with enough surroundings to mask the echo.

References:

Alten, S., Audio in Media, Belmont: Wadsworth, 1994, pp. 5-12, 266-286.

Sounds:

‘Sirens’ by Trinity101 is available at FreeSound.org under a Creative Commons Attribution-NonCommercial 3.0 license.

‘Smoke Alarm Piep Piep’ by Jan18101997 is available at FreeSound.org under a Creative Commons Public Domain 1.0 license.

‘Car Breaking Skid’ by Iberian_Runa is available at FreeSound.org under a Creative Commons Attribution 3.0 license.