Rather a slow and lost couple of days, perhaps tiredness due to a busy January, but also a change in direction from working on music production to the more tedious work of graphic arts. I finished the album artwork yesterday for Heart of Snow, and recorded new vocals for the last of the new Burn of God tracks, the rather lovely Garden of Spring.
Late at night, I started to re-listen to Heart of Snow and found myself in the mental spaghetti of wanting to remix the whole album! I have always aimed never to apply equalisation, and certainly no compression, to an album at the mastering stage, but last night I found myself doing exactly that, and the results were much improved. So I wondered whether I should be fixing these problems at the mixing stage instead. If the mix sounds better brighter, then perhaps, ideally, the brighter instruments need to be louder. But I'm unsure.
The main Heart of Snow track, for example, has vocals with pretty much the same balancing as on every track; I tend to use the same filtering and the same sort of reverb, in spectral space. So why does the whole track sound more like the others, more in keeping, when it is given more space in the higher frequencies? Perhaps there is some science to an overall mix which defies this simple logic. Perhaps bass sounds make higher ones harder to perceive, even when they are, on a technical level, the same.
Perhaps I need to consider mastering a separate skill and entity from studio mixing, again something I've resisted. But not entirely: I do, for example, enjoy playing with relative volumes and the segueing of tracks. Perhaps I should see balancing the equalisation in those terms, as a last polish.
One difference is that EQ is a lot more complex; there are a lot more variables. I like the 10-band EQ in CD Architect, but at times I wish for simpler presets... of course I can make some, but they, annoyingly, don't seem to work as well as tweaking the dials for each song! I don't like this organic factor.
Today, in something of a slump, I decided to program a spectrum analyser and see if this would help. It's relatively simple: it splits the signal into 10 bands, filtering each and measuring the result. Here is an example of the output:
This isn't finished. I had hoped that the Boost values would identify in an instant where to push the song to make it sound ideal; this may still help, but it is very much in development at the moment. The relative RMS values (relative in that all numbers are scaled between 0 and 1) help identify the dominant frequencies. I'm unsure what use this tool is, but it may help and was relatively easy to code.
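For the curious, the gist of the analyser is something like the sketch below (shown in Python for illustration, assuming NumPy and SciPy; the band centres here follow a conventional 10-band graphic EQ, and the Boost shown is just a placeholder, the distance of each band below the loudest, rather than whatever a finished version would use to decide where to push):

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Centre frequencies of a conventional 10-band graphic EQ (Hz) -
# an assumption for illustration, not necessarily the tool's bands.
CENTRES = [31, 62, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]

def band_rms(samples, sample_rate):
    """Relative RMS of each band, scaled so the loudest band is 1.0."""
    rms = []
    for centre in CENTRES:
        # One-octave band-pass around each centre frequency.
        low = centre / np.sqrt(2)
        high = min(centre * np.sqrt(2), sample_rate / 2 * 0.99)
        sos = butter(4, [low, high], btype="bandpass", fs=sample_rate, output="sos")
        filtered = sosfilt(sos, samples)
        rms.append(np.sqrt(np.mean(filtered ** 2)))
    rms = np.array(rms)
    return rms / rms.max() if rms.max() > 0 else rms

def boost_hints(relative_rms):
    """Placeholder 'Boost': how far each band sits below the loudest one."""
    return 1.0 - relative_rms

if __name__ == "__main__":
    # Synthetic example; a real run would load a mixed-down track instead.
    sr = 44100
    t = np.linspace(0, 5, 5 * sr, endpoint=False)
    signal = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 3000 * t)
    rel = band_rms(signal, sr)
    for centre, r, b in zip(CENTRES, rel, boost_hints(rel)):
        print(f"{centre:>6} Hz  RMS {r:.2f}  Boost {b:.2f}")
```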
I really need to complete the Spotify Canvases for Heart of Snow next. I feel this week has been a bit wasted. I've been busy, but not busy enough to be satisfied with it.
Tomorrow I'll visit the museum.