
Transcript of Sound of Space Symposium presentation Dec 2019




This presentation focuses on a project undertaken in 2017 in collaboration with Prof. Jim Barbour, Prof. Jane Burry and Prof. Mark Burry from Swinburne University of Technology in Australia, and generously funded by the Australian Research Council.


My research, and especially the project that I will present today, is concerned with discovering and exploiting creative reciprocities between music as constructed sound, and architecture as constructed space. You will hear me use the word “spatiosonic”, a self-coined term referring to any work that is both interdisciplinary and transdisciplinary, spanning the interconnected fields of architecture, acoustic engineering, music composition and performance.


Historically, architectural space has played a highly active role in influencing the experience and composition of music, as evidenced in the “spatialised choirs” of the Franco-Flemish Renaissance composer Adrian Willaert. Conversely, architects and engineers have long acknowledged the demands of music and sound in space. Many of us are familiar with the notion of Vitruvian vessels, or pots embedded in the walls of theatres and performance spaces in an attempt to “improve” their acoustics.

Despite these historic examples, only a small handful of practitioners have managed to rigorously explore creative, interactive parallels between music and architectural space in their work. The 20th-century American composer Henry Brant produced many spatialised compositions, which explored the idea that aspects of physical space (particularly distance and direction) can be as compositionally active as the explicitly musical elements of tone and timbre.

Architecture, acoustical engineering, music composition and performance all present methods for controlling interactions between sound and space, whether these interactions are explicitly and deliberately calculated in advance or emerge as a consequence of speculation and experimentation. Currently, however, each of these practices places a strong reliance on highly reductive abstractions and representations of these interactions in order to anticipate, record and recall spatiosonic works.

Many people in this room are probably familiar with the formula that physicist Wallace Clement Sabine developed in the 1890s for predicting the time it takes for sound to decay in a space, based on room volume, surface area and the absorption coefficients of the materials which constitute the room. This formula can only describe a very limited set of behaviours with regard to how sound and space interact. However, as one of the most conceptually accessible aspects of acoustic phenomena, the calculation of reverberation time has become a central focus in the practice of architectural acoustics and, as such, a key parameter in the design of spaces for music.

The problem is that this privileging of reductive abstractions of the behaviour of sound in space is coupled with a desire to determine a “standard” target for reverb times, to reassure clients and designers that their spaces will sound “good” for “music” as a generic idea that somehow caters for all of music. As such, spaces dedicated to hosting musical performance are most often governed by acoustic standards that are rarely carefully questioned outside of what is quickly becoming a set of conventional expectations. Though the resultant concert halls are generally able to host a reasonably wide range of types of music, the attitude this design methodology fosters largely results in a disconnect between the practices of music composition and performance and the design of spaces for music. These are two worlds that used to enjoy a tremendously rich, albeit less explicitly “measurable”, exchange, before their relationship was diagrammatised.
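For reference, here is a minimal sketch of Sabine’s formula (in its metric form) in Python. The room volume and absorption coefficients are purely illustrative values I’ve chosen for the example, not measurements of any real hall:

```python
# A minimal sketch of Sabine's reverberation formula (metric form).
# RT60 = 0.161 * V / A, where A = sum of (surface area * absorption coefficient).
def sabine_rt60(volume_m3, surfaces):
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A hypothetical, hard-surfaced shoebox hall (illustrative values only).
surfaces = [
    (1200.0, 0.03),  # walls: plaster (area in m^2, absorption coefficient)
    (400.0, 0.02),   # ceiling: plaster
    (400.0, 0.10),   # floor: wood
]
print(f"RT60 = {sabine_rt60(6000.0, surfaces):.1f} s")  # ~11.5 s
```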

This is by no means a new observation; it’s one that other practitioners and researchers, some of whom will present later, are also frustrated with. Composers and musicians are also partly to blame for this disconnect: architecture is only very rarely interrogated seriously as a compositional element, rather than merely as a container for music.

My presentation today focuses on a project where I was able to challenge these relationships as both an architect AND a composer. In this project, a site-specific duet written for violin and cello to be performed at the Sagrada Familia, I used tools typically employed by architects and engineers for designing spaces as tools for rehearsing compositional, musical ideas: particularly for predicting the effect of the positioning of musicians around the space, and for understanding which frequencies (or instrumental ranges) were particularly reflective in the space. The Sagrada Familia is possibly the most acoustically reflective space that I have ever worked with, or ever will, so in this case it’s rather convenient that the tools available for modelling acoustic behaviour focus on reflected sound as opposed to other behaviours.


Previous research conducted by Professors Mark and Jane Burry and Pantea (who will speak next) revealed that the interior of the Sagrada Familia has a reverberation time of between 9 and 12 seconds. Such long reverberation times had previously posed serious problems with regard to speech intelligibility in the delivery of sermons, to the point where teams of engineers and designers proposed fixing the problem by retrofitting the interior with absorbent acoustic treatments. From a music composition point of view, however, this extreme reflective condition provides a fantastic creative opportunity in the form of a highly site-specific and acoustically-aware musical project.


Despite a distant memory of a visit to the Sagrada Familia as a tourist, several years prior to this project, I wasn’t able to make a visit to “double check” the sound of the space before starting the composition. At this stage it’s worth introducing this work-in-progress diagram, which attempts an initial identification and categorisation of “spatial typologies” in relation to sonic media. I use this diagram as a roadmap for new projects, as a means of understanding the terms by which characteristics of both real and virtual space can be more precisely calibrated with musical ideas. Admittedly, the diagram is slightly problematic: there will inevitably be examples that don’t fit neatly into one category, and the organisation of these categories might even shift throughout the development of a project. So I need to caveat this diagram by saying that it should be treated as a navigation aide rather than as an absolute truth! It’s also worth noting that this diagram is made for practitioners, as opposed to audiences or performers.

According to this diagram, the Sagrada Familia project addresses four categories. The first three sit on the right-hand side of the diagram: starting with remembered space (from my visit several years ago); followed by imagined space (evoked by looking at photos and drawings of the space, and influenced by a lifetime of listening when visiting similar, though admittedly not as extreme, buildings); and finally metaphorical space (very much related to imagined space, but specifically dealing with discussions of space that are “like” the Sagrada Familia, accessed by listening to generic, non-site-specific reverb effects and visiting spaces that are materially and volumetrically similar, though admittedly there aren’t many of those…). The other category relevant to the composition process is “simulated space”.

In order to test the effect of the positions of musicians around the space, I used the “SoundSpace” at Max Fordham, developed by Pedro Novo. A spherical array of speakers (hidden in the floor and ceiling) is able to position sounds in space around the listener, so I was able to carefully consider, in advance, which areas of the Sagrada Familia to place the musicians in.

I also used various pieces of software (Pachyderm for Rhino, EAR Acoustic for Blender and CATT acoustic) to make ray-traced geometric acoustic models and generate impulse response files, which were able to give me an “impression” or “acoustic representation” of what I could expect: very much the equivalent of an architectural render or physical model. In an essay titled “Space within Space: Artificial Reverb and the Detachable Echo”, McGill researcher Jonathan Sterne acknowledges, on acoustic simulation: “so many things are happening in so many different ways that they cannot be calculated or captured by any modern computing device…” Sound doesn’t always neatly travel in straight lines, and many simulation tools don’t account for complex wave behaviours, so these tools can only ever provide a representation of the sound in the space which is certainly not as rich or as complex as the reality; but it is a useful representation, with which it was possible for me to make musical and spatial decisions. The Sagrada Familia project presented an opportunity to both use and critique this acoustic simulation technology, but from the viewpoint of a composer with a set of architecturally sensitive expectations, as opposed to an architect with a keen ear looking for an audible representation of a space not yet realised, which is how these simulation tools are typically used.
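For anyone curious about the mechanics: once an impulse response file exists, auralisation is essentially a convolution of the dry signal with the room’s (simulated or measured) response. A minimal sketch in Python, assuming mono WAV files with matching sample rates; the filenames are placeholders, not files from this project:

```python
# A minimal auralisation sketch: convolve a dry recording with an impulse response.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, dry = wavfile.read("dry_anechoic.wav")     # hypothetical filename
ir_rate, ir = wavfile.read("simulated_ir.wav")   # hypothetical filename
assert rate == ir_rate, "resample first if the sample rates differ"

dry = dry.astype(np.float64)
ir = ir.astype(np.float64)

wet = fftconvolve(dry, ir)      # the convolution applies the room's response
wet /= np.max(np.abs(wet))      # normalise to avoid clipping
wavfile.write("auralised.wav", rate, (wet * 32767).astype(np.int16))
```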

The visualisations of the ray tracing models (from Pachyderm) made it possible to geometrically spot areas where reflections might become particularly intense. And visualisations of simulated impulse response files (using EAR Acoustic and CATT acoustic) were able to provide information as to which frequencies the space is particularly reflective to and, importantly, in response to the specific positions of the musicians and listener (or microphone). EAR for Blender, though sadly no longer supported, and perhaps the least “accurate” in terms of its ability to replicate the sound of the space, was particularly useful in what was quite a sculptural process of quickly moving musicians around “virtually” to get a sketch of the “right” sort of sound. This is perhaps the digital equivalent of what musicians and composers used to do centuries earlier through trial and error, before acoustics became a more exclusive realm and before it was possible to calculate this stuff at a distance from its physical site.

The spectrogram visualisation also provides verification of what I’m hearing when trying to understand things like which overtones are strongest when playing an A on an open string, compared to playing the same note on the next string, further up the fingerboard. Or what happens when an A in the same octave is played on the cello compared to the violin: which one is more frequency-rich and therefore contains more “interactive potential” with the architecture, in light of the architecture acting as a compositional element.
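This kind of spectrogram check is straightforward to reproduce. A minimal sketch in Python, assuming a mono WAV recording; the filename is a placeholder:

```python
# A minimal spectrogram sketch for comparing the overtone content of a note.
import matplotlib.pyplot as plt
from scipy.io import wavfile

rate, audio = wavfile.read("open_string_A.wav")  # hypothetical filename

plt.specgram(audio, NFFT=4096, Fs=rate, noverlap=2048)
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.ylim(0, 4000)  # the fundamental and the first several overtones
plt.show()
```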

The conceptual focus of this project primarily attempts to elevate architecture to become not just a container for sound, but a compositional tool in itself, on the same level as aspects of musical tone, timbre and rhythm, and therefore capable of rendering or transforming a musical idea. I hypothesised that “Construction 002: Tracing” would be a different piece when played in an extremely acoustically dry location, such as an anechoic chamber, than when it’s played in the extremely reverberant setting of the Sagrada Familia. Conceptually this raises some pretty big questions as to the nature of “the piece”, as some would argue that any musical idea that possesses portability will maintain its integrity no matter who is playing it and where it’s played. What I’m proposing is by no means a watertight argument (yet), but if we conceptualise the score for “Construction 002” as the facilitator of a site-specific musical “event”, as opposed to a musical “thing” that tours different locations, then it’s easier to accept the idea that the architecture in which the musical event takes place can be as compositionally active as the other musical elements of tone, timbre and so on.


In order to test this theory in the context of this project, we performed the piece in another extreme acoustic space: a very small (and extremely absorbent) anechoic chamber, where the room provided absolutely no acoustic response. Unfortunately I have tinnitus, so all I could hear in there was fizzing and ringing, but apparently with good ears you are supposed to be able to hear someone in the opposite corner of the room rubbing their fingers together. From this space we gained a very dry recording, which provided a useful comparison to the incredibly reverberant recording that we made at the Sagrada Familia.


By contrast, when the piece was performed in the Sagrada Familia, the 12-second reverberation time and the vast possible distances between musicians were enough to transform an explicitly melodic idea (as heard in the anechoic chamber) into a harmonic construction (when blended by the reflections of the Sagrada Familia). Here’s a quick A/B comparison of the dry and wet recordings.

(play anechoic version and recorded version)

Again, as well as hearing the difference, this can be seen on the spectrogram and waveform visualisation, which compares the anechoic recording to one of the Sagrada recordings. As you heard in those opening bars, the piece contains a number of sharp “stabs” which get muted and are then allowed to ring out for a while before the start of the next note, to let the space assert its opinions. Here, you can clearly see that in the anechoic chamber these stabs stop immediately, but in the Sagrada Familia the presence of the reflected stabs continues right into the next note. This is exactly the “blending” effect that I was hoping would happen...


Apart from the more obvious transformation of short (unreflected) sounds into elongated (reflected) ones, there were also subtle differences in the presence of overtones and the ability to achieve harmonics. In the anechoic chamber, our cellist Theo was most disturbed to realise that he wasn’t able to achieve harmonics as easily as when he is in a “normal” room with even minimal reflections. At first he thought he had somehow forgotten how to play, but (after some gentle counselling and discussion) we decided that the anechoic room was simply not “feeding” the cello with the amount of reflection necessary for exciting the instrument and producing and sustaining harmonics. By contrast, in the Sagrada, the same harmonics were “easy peasy” to achieve. A more subtle observation concerns the presence of overtones as they either get reflected or disappear. In the Sagrada Familia performances, after the initial attack of some of the more frequency-rich notes (usually those played on open strings), it sounded like some of the higher-frequency overtones disappeared quite quickly, while some of the mid-to-low-frequency overtones were still bouncing around even after the fundamentals had stopped reflecting! Again, I’m looking at the spectrogram to verify what I’m hearing. Here’s the first stab: you can hear that the G continues, but the other frequencies tail off fast.


For example, in the stabs, the cello is playing a low C# (not quite an open string, but nearly…), which emits a fundamental tone of around 69 Hz, though the overtones between 100 and 400 Hz are also pretty rich on a low C#. The violin is simultaneously playing a double-stop on a low G (at 196 Hz) on the open string and an F (at around 350 Hz) positioned low down on the D string, both of which involve a long length of vibrating string and are similarly rich in overtones around the 400 Hz mark. So it’s possible that the overtones within the 100-400 Hz frequency band were bouncing around for longer than the higher-frequency ones, because of the acoustic potential of the space in terms of material absorption and spatial volume, but also perhaps due to the large distances between the interior surfaces? One of the acoustic engineers in the room might be able to help unpack this later on!
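As a quick sanity check of those figures, here is a small Python sketch listing which members of each note’s harmonic series land in that 100-400 Hz band (using equal-tempered fundamentals, so the numbers are approximate):

```python
# Which harmonics of each fundamental fall within a given frequency band?
def harmonics_in_band(fundamental_hz, low=100.0, high=400.0, n=12):
    series = [fundamental_hz * k for k in range(1, n + 1)]  # harmonic series
    return [round(f, 1) for f in series if low <= f <= high]

print("C#2 (69.3 Hz):", harmonics_in_band(69.3))   # [138.6, 207.9, 277.2, 346.5]
print("G3 (196 Hz):  ", harmonics_in_band(196.0))  # [196.0, 392.0]
print("F4 (349.2 Hz):", harmonics_in_band(349.2))  # [349.2]
```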


Such reflective behaviour creates some rather delightful tonal ambiguities in parts of the music, which is quite a fun thing to work with as a compositional element, and one which is of course incredibly site-specific. As the cello and violin were sharing overtones, and these shared overtones were reflected for longer than expected in the space, the sound had an almost overwhelmingly immersive quality, as the directionality of the sound was pretty hard to pinpoint: in the case of both the slow swelling notes AND, sometimes, the sharp stabs too!


As “rehearsed” previously in the digital models, we tried the musicians in a number of different locations, while a static, ambisonic microphone array was placed near the altar, at the position where the impulse response files were generated. By this logic, we could say that the position of the mic is the “optimum” listening position, as this is the theoretical “standing point” from which the music was constructed. This is much like a measured perspective, where you set your horizon line and vanishing points and then decide on a standing point as the “viewer” (or in this case, the listener). As with any perspectival drawing or sound recording, you (as the viewer or listener) are unavoidably implicated in the scene, and the same could be said for this “spatialised” construction. I’m generally in agreement with composer Henry Brant (whom we met earlier), who states that there is no optimal listening position in his spatialised constructions. Conversely, some architects and engineers design performance spaces to deliver an “optimum” visual and audible experience for EVERYONE, as in Hans Scharoun’s Berlin Philharmonie, where there are apparently no “bad” seats. In the context of my own work, I personally think it’s a little more complicated than this, especially when the music is written for a specific space and the individual sounds are widely spatialised.


I should mention that the microphone that we used to record the performance was invented by Dr. Jim Barbour, who collaborated with us on this project. Jim’s research is concerned with capturing the verticality of the space as well as registering lateral energy in ambisonic recordings, so he devised a method using a tall stack of 12 microphones, also arranged radially in plan, to capture the spatiality of the sound under one of the tallest parts of the Sagrada Familia’s interior.


In a way, this project is a measure of the interior space of the Sagrada Familia. It’s doing the audible equivalent of a 3D scan: a signal is emitted (in the case of a scanner, a laser beam; in the case of this project, a sound emitted from an instrument); the signal travels outward and hits a surface, at which point it is reflected back, and the presence of this reflection is detected by a receiver (in this case, a microphone, or our ears). The information we get back gives us clues as to how reflective (or absorbent) the surface of interaction is and, to some degree, how far away it is. In order to sonically “scan” the space effectively, “Construction 002” was performed a number of times, with the musicians in different locations…
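To put the scanner analogy in numbers, a minimal sketch (with purely illustrative figures) of how the arrival time of an echo relates to the distance of the reflecting surface:

```python
# Estimate the distance to a reflecting surface from the delay of its echo.
SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def reflection_distance(echo_delay_s):
    # The sound travels out and back, hence the division by two.
    return SPEED_OF_SOUND * echo_delay_s / 2

# e.g. an echo arriving 0.25 s after the direct sound implies a surface
# roughly 43 metres away (illustrative numbers only).
print(f"{reflection_distance(0.25):.1f} m")  # 42.9 m
```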


Firstly: with the cello and violin around 10 metres apart horizontally, and 17 metres in the air, on a balcony.


Then: staying at 17 metres in the air, but 50 metres across from each other.


After this: we went down to the ground floor, where the cello was close to the altar and the violin was all the way towards the other end of the nave.


Then: we tried a slightly more sensible position, with the cello and violin opposite each other on the ground floor.


And finally: we separated them again, at 20 metres in the air and slightly over 60 metres apart!

I should also mention that, unusually, we had the space to ourselves during the recording session: with the help of Mark and Jane, we were able to access the basilica from 1 in the morning until 6 in the morning, so we were able to avoid most of the noise from traffic outside and from visitors. In some of the recordings from later in the session you do start to hear some noise from the cleaners! But we were largely uninterrupted...


As well as the space reflecting the sound in interesting ways, it was also affecting the musicians, particularly when the distances between them were larger. For instance, the tempo that we heard in the Sagrada Familia was noticeably slower than in the anechoic chamber. In this comparison you can see that in the anechoic chamber, at 80 BPM, the piece should finish just before 5 minutes, and it does finish at roughly 5 minutes. In the Sagrada, however, it runs to nearly 6 minutes 20 seconds, meaning they were playing closer to 60 BPM!
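As a rough sanity check of that figure, the implied tempo can be back-calculated from the two durations (which are approximate, as stated above):

```python
# Back-calculate the tempo implied by the longer performance duration.
nominal_bpm = 80.0
anechoic_s = 5 * 60        # ~5 minutes, as written
sagrada_s = 6 * 60 + 20    # ~6 minutes 20 seconds, as performed

implied_bpm = nominal_bpm * anechoic_s / sagrada_s
print(f"implied tempo ~ {implied_bpm:.0f} BPM")  # ~63 BPM
```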

This could be because the musicians were waiting for the reflections to finish, or perhaps because they just couldn’t see each other; it could also be a result of the delay due to distance. At 64 metres apart, with sound travelling at 343 m/s, there would be a delay of around 0.19 of a second. At a tempo of 80 beats per minute, one crotchet lasts 0.75 of a second, so the delay due to distance is roughly equivalent to a semiquaver. That’s definitely enough to have a noticeable effect on rhythmic precision!! When I asked at the end of the session why this might have happened, they both said that they hadn’t noticed and that it wasn’t a conscious decision, but the violinist did liken playing in the Sagrada Familia to walking underwater, and said that the sound at times behaved in very unpredictable ways. Of course, the simulations that I ran beforehand didn’t pick up on these sorts of discrepancies, as these tools can’t predict human behaviour or more complex wave behaviours!
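The same arithmetic, written out as a small Python check:

```python
# Delay due to distance versus note durations at the written tempo.
SPEED_OF_SOUND = 343.0  # m/s
distance_m = 64.0
tempo_bpm = 80.0

delay_s = distance_m / SPEED_OF_SOUND   # ~0.187 s between the two players
crotchet_s = 60.0 / tempo_bpm           # 0.75 s per beat at 80 BPM
semiquaver_s = crotchet_s / 4           # 0.1875 s

print(f"delay = {delay_s:.3f} s, semiquaver = {semiquaver_s:.4f} s")
```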


We’ve still got some time to listen to some examples, which I’ll keep short in order to make a few A/B comparisons easier to follow, and maybe do a show of hands to see which you guys think is which.

The part that I’m going to select is the start of the piece, which we’re already familiar with, starting with the initial “stab” and then slowly swelling into a chord.

We’re going to play a game of simulated or real!!

I’ll play the 3 short clips through once; then on the second play-through I’ll ask for a show of hands for who thinks it’s the real thing (not the simulation), so listen carefully:

(response to show of hands)

The first one is the dry recording, convolved with an impulse response that was generated from a CATT acoustic model that Pedro Novo from Max Fordham helped me with.

The second one is convolved with an IR which was captured by Jim Barbour, directly in the space.

The last one is the real thing, with the musicians in the 3rd position: on the ground floor, but a good 60 metres apart.

Left = CATT

Mid = IR from Jim

Right = REAL Pos 1


Now that we’re familiar with how the real thing sounds, I’ll play a quick comparison of the 3 most “different” positions, so you can get a very rough sense of how the positioning affected the sound (from the perspective of the microphone).

That’s all I’ve got time for, but thanks for listening and don’t forget to save your questions for the discussion at the end of this session, following Trevor Cox’s presentation!

I’m now going to hand over to Mark Burry:

Professor Mark Burry is a registered architect and has been the Founding Director of Swinburne University of Technology’s Smart Cities Research Institute (SCRI) since May 2017. His role is to lead the development of a whole-of-university research approach to ‘urban futures’, helping ensure that our future cities anticipate and meet the needs of all: smart citizens participating in the development of smart cities.

Mark is also a practising architect who has published internationally on two main themes: putting theory into practice with regard to procuring ‘challenging’ architecture, and the life, work and theories of the architect Antoni Gaudí. From 1979 until 2016 he was Senior Architect to the Sagrada Família Basilica Foundation, pioneering distant design collaboration with the team based on-site in Barcelona. Over to Mark…
