Someone on the VI Forum/SFX asked a question about sound library formatting, specifically about the choice between [1 take per file] and [multiple takes per file]:
“One oddity I’ve run into is that sometimes, a wav file in a pack may offer multiple variations of a sound/oneshot/effect within the same file with maybe a second of spacing. Is this usual? The only reason to make use of such a file is to choose a starting point programmatically in software, but I don’t see why you wouldn’t just cut up your variations into multiple files.”
As I had to think through all angles of this question ten years ago when deciding how to deliver the first HISSandaROAR sound library, I wrote a stream of consciousness reply and figured I’d post it here as it may be useful to others… And also so it exists in my own archive….
The short answer is: who is your target user and how do they prefer it?
You mention ‘one-shots’, which is a music term and not a sound FX/sound design term, so maybe you are talking about music samples and not SFX? I am referring to SFX, since music samples are usually used either via VIs (where individual sounds are not even accessible) or via auditioning & loading single sounds into a sampler etc, which is a totally different use case to SFX.
The longer answer: in my experience as both a user and a library developer, the reason for not delivering SFX libraries as [1 take per file, when it is a multi-take/variations example] comes down to a couple of important reasons. First, and very important: that approach does not scale. Second: the typical workflow of how SFX are used. So you need to be very clear on the use case. For example, if I search my music sample library in SoundMiner, all the ‘808 kick’ hits are single takes per file, because that is how a musician uses them. But if I search my SFX library for ‘punch’, none of the punches are delivered as a single take per file. A “one shot” is a music term and I would expect it to be one take per file.
So why does a separate take per file not scale for sound effects? A simple example: my personal SFX and AMB library has over 500k sounds in it. If those sounds were broken out into separate files for every take, my library would not be 500k sounds, it would be more like 500 million, and when I searched for ‘METAL IMPACT’ in SoundMiner I would get 100,000 hits – auditioning my way through all of those is simply not viable, just imagine it! This problem won’t be apparent while you work on your own library, but as soon as your library is added to a user’s personal library containing hundreds of thousands of other sound files, it will become very, very apparent. It’s for a similar reason that file names and metadata are so important: on its own, a single library is no problem, but add it to a larger collection with thousands of other libraries, and if your sounds can’t be efficiently found and identified, they will not be used. But again, I mean SFX, not music samples.
Second, the workflow of professional sound editors & sound designers (i.e. those most likely to buy your libraries, and not people primarily looking for free sounds) usually runs through a sound library app, which makes it very easy to transfer part of a file. So for example, if you audition a file of 20 punch takes and only want take 3, simply select take 3, transfer & done! And in SoundMiner’s case, the silence between takes can be used to auto-split and load discrete takes into the Radium sampler (the same would apply to many samplers).
When working in a linear sound FX editor fashion, say you import a single composite file of 20 punch takes that you like and want to use into your edit session for a fight scene. As soon as you have used the first punch, you will want a different punch sound for the next occurrence (there is nothing more cringey than repeating identical sound FX). Rather than going back and importing another very short sound file, you can simply stay in your edit session and move to the next take(s) within the composite file. It is a much more efficient way to work as a sound editor than dealing with Punch01.wav, Punch02.wav … Punch20.wav, and that’s just your first particular punch. There might be 20 variations of every other punch too… It again depends on common sense: e.g. for AMB libraries, if the recordings are different locations (e.g. AMB city skyline 1, AMB city skyline 2) then they would be separate files, because they are not ‘take variations’, they are entirely different locations.
This isn’t to say it is the only way or method. Some people (especially in game audio) may prefer one file per sound, especially when implementing them. But unless you are going to deliver both options, you are going to frustrate one group or the other. With a composite file (X takes in a single file, separated by silence), if someone does prefer 1 take per file, they can very, very easily split & output it as they wish, thanks to the silence between takes: e.g. Pro Tools’ Strip Silence, export, done. Every DAW has such options. But if the reverse is delivered – one take per file – they would have to import 20 separate files, space them a second apart, combine them into a composite file and export it as a single file, likely losing all the metadata along the way.
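Incidentally, the “split on silence” trick that Strip Silence, SoundMiner et al. rely on is conceptually very simple. Here is a minimal Python sketch of the idea, not any tool’s actual algorithm: it assumes the audio has already been decoded to a list of 16-bit sample values (in real use you would read these from a WAV file, e.g. via the stdlib `wave` module), and the `threshold` and `min_gap` values are arbitrary placeholders you would tune to your material.

```python
def split_on_silence(samples, threshold=200, min_gap=4):
    """Return (start, end) index pairs for each non-silent take.

    samples:   list of signed sample values (e.g. 16-bit PCM)
    threshold: absolute amplitude below which a sample counts as silence
    min_gap:   consecutive silent samples needed to end a take
    """
    takes = []
    start = None       # index where the current take began, if any
    silent_run = 0     # how many silent samples we've seen in a row
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            if start is None:
                start = i          # a new take begins
            silent_run = 0
        elif start is not None:
            silent_run += 1
            if silent_run >= min_gap:
                # gap is long enough: close the take before the gap
                takes.append((start, i - silent_run + 1))
                start = None
                silent_run = 0
    if start is not None:          # file ended mid-take
        takes.append((start, len(samples)))
    return takes


# Two bursts of signal separated by runs of (near) silence:
composite = [0, 0, 1000, 1200, 0, 0, 0, 0, 900, 800, 0, 0, 0, 0]
print(split_on_silence(composite))  # [(2, 4), (8, 10)]
```

Each returned pair can then be written out as its own file, which is exactly why delivering the composite version loses nothing for the 1-take-per-file crowd, while the reverse conversion is far more tedious.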
Considering the various likely use cases, and also thinking about how you as a user prefer to work, is what should inform your thinking. While some people might think there isn’t much difference between a music sample library and a sound FX library, some very important differences are illustrated by the very question you ask. Also, the use of metadata (absolutely crucial for sound FX/design use), along with consistent file naming, bit depths & sample rates etc., differs vastly between the two use cases…