This should in theory prevent Nimble Kick from triggering a stack
overflow when the event loop timer fires before `IPlugView::attached()`
has been called and the plugin hasn't been registered yet. I haven't
seen any other VST3 plugins trigger a race condition here.
This should fix #118 without breaking our _other_ workaround from
yabridge 3.4.0 for the issue where a plugin would freeze if it tried to
resize itself while simultaneously sending parameter changes from the
audio thread. (Both of these issues are of course caused by the same
JUCE bug.)
This is super difficult to trigger on purpose, but I did run into it at
least once just now, so it seems like a good idea to make sure that
this doesn't happen.
REAPER initializes the plugin's editor before reparenting the parent
window to the FX window, so our `topmost_window` didn't actually refer
to the FX window.
JUCE incorrectly calls `IComponentHandler::performEdit()` from the audio
thread instead of using the output parameter changes queue, so this
would also cause GUI resizes to be handled from the audio thread if they
come in at the same time as such a parameter change.
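The intended pattern is to stage parameter changes and hand them off to
the host during the next processing cycle instead of calling back into
the host from the audio thread. A minimal sketch of that handoff,
assuming a simple mutex-guarded staging buffer (the real VST3 mechanism
goes through `ProcessData::outputParameterChanges`; the names here are
illustrative):

```cpp
#include <cstdint>
#include <mutex>
#include <utility>
#include <vector>

// Hypothetical sketch: rather than invoking the equivalent of
// `IComponentHandler::performEdit()` directly from the audio thread,
// changes are staged here and emitted once per process() call through
// the output parameter changes queue.
struct PendingParameterChanges {
    // Called from wherever the change originates.
    void push(uint32_t param_id, double normalized_value) {
        std::lock_guard<std::mutex> guard(mutex_);
        changes_.emplace_back(param_id, normalized_value);
    }

    // Called once per processing cycle; in a real plugin the result
    // would be written to `ProcessData::outputParameterChanges`.
    std::vector<std::pair<uint32_t, double>> drain() {
        std::lock_guard<std::mutex> guard(mutex_);
        return std::exchange(changes_, {});
    }

  private:
    std::mutex mutex_;
    std::vector<std::pair<uint32_t, double>> changes_;
};
```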
This is needed as a workaround to support Waves VST3 plugins.
Right now this does not actually fix the issue because the arguments
are not updated in the subclasses. The next commit will fix this.
It seems weird to need this specifically so we can use the map overload
that creates a new instance when the key doesn't exist in the map, but
this seems safer.
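The map overload in question is presumably `std::map::operator[]`,
which default-constructs the mapped value when the key is missing
(unlike `.at()`, which throws). A small sketch of that behavior, with
hypothetical names:

```cpp
#include <map>
#include <string>

// Illustrative only: a per-key counter map. `operator[]` creates a
// zero-initialized entry on first access, which is why the mapped type
// needs to be default-constructible.
std::map<std::string, int>& counters() {
    static std::map<std::string, int> instance;
    return instance;
}

int bump(const std::string& key) {
    // Default-constructs the entry if `key` is new, then increments.
    return ++counters()[key];
}
```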
Since there isn't any public documentation for VST2, I saw JUCE and a
couple of other plugins and bridges use this, but they all redefined
the symbol to `main`.
At some point Doom Emacs broke on-save formatting with lsp-mode in
certain circumstances, and I made these changes with wgrep so apparently
they were never formatted.
Ardour apparently always calls `effMainsChanged()` with a value argument
of 0 when unloading the plugin, regardless of whether it has actually
initialized audio processing before that point.
The sizes were wrong, and Blue Cat Audio's VST3 plugins seem to use the
upper bits to store the channel configuration, which thus got read out
incorrectly.
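Reading a packed field with the wrong width picks up the configuration
bits along with the value. A minimal sketch of the masking involved,
assuming a layout where the count lives in the low 32 bits and the
configuration bitmap in the upper 32 bits (this layout is an
illustration, not Blue Cat Audio's actual format):

```cpp
#include <cstdint>

// Hypothetical packed field: low 32 bits hold the channel count, the
// upper 32 bits hold a channel configuration bitmap. Without masking,
// the configuration bits would be read as part of the count.
constexpr uint64_t kChannelCountMask = 0xFFFFFFFFull;

constexpr uint32_t channel_count(uint64_t packed) {
    return static_cast<uint32_t>(packed & kChannelCountMask);
}

constexpr uint64_t channel_config(uint64_t packed) {
    return packed >> 32;
}
```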
In the same way as 50c25c1cf0 did for
VST2 plugins. Input and output audio data is now stored in a shared
memory buffer instead of being sent over the sockets. This reduces the
bridging overhead to a minimum, since copying data was the most
expensive operation we were doing and we now only need to copy the
entire buffer once per processing cycle.
We now use shared memory to store the input and output audio buffers.
This means that we have to copy less data every processing cycle, since
a single copy to and a single copy from the shared memory object
suffices now. This should reduce the DSP load for VST2
plugins (especially when used in a plugin group) anywhere from
marginally to significantly, depending on the plugins used and the
system configuration.
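The copy-in/copy-out pattern described above can be sketched as
follows. yabridge's real implementation uses named shared memory
objects shared between the native and Wine processes; an anonymous
shared mapping keeps this example self-contained, and the function
names are illustrative:

```cpp
#include <cstddef>
#include <cstring>
#include <sys/mman.h>
#include <vector>

// Create a buffer both sides of the bridge could access. With named
// shared memory this would be shm_open() + mmap(); MAP_ANONYMOUS is
// used here only to keep the sketch self-contained.
float* create_shared_audio_buffer(size_t num_samples) {
    void* mapping = mmap(nullptr, num_samples * sizeof(float),
                         PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    return mapping == MAP_FAILED ? nullptr : static_cast<float*>(mapping);
}

// One copy in: host input -> shared buffer.
void write_input(float* shared, const std::vector<float>& input) {
    std::memcpy(shared, input.data(), input.size() * sizeof(float));
}

// One copy out: shared buffer -> host output, after the other side has
// processed the audio in place.
void read_output(std::vector<float>& output, const float* shared) {
    std::memcpy(output.data(), shared, output.size() * sizeof(float));
}
```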
Every DAW will just send all events in one go (and I think that's the
only way you should do it, but the VST2 spec is a bit leaky, so who
knows). It wouldn't make much sense to preallocate more capacity,
because when DAWs do send all of those events individually they might
end up sending more than four of them anyways.
On the Wine side. Instead of always having it enabled and disabling it
when it could potentially hurt (i.e. when handling GUI related things),
we'll now only enable it when it's potentially beneficial. This way we
don't have to constantly switch scheduling policies on the GUI thread.
While a VST2 plugin is being initialized in a plugin group, all host
callbacks would go over that new plugin instance's sockets. This would
cause a race condition if the host processes audio while loading
plugins. This issue has been there since the introduction of plugin
groups, but it's only noticeable in Bitwig Studio, and only when using
over thirty instances of the same plugin in a plugin group.
PG-8X in REAPER has the same mutual recursion limitation the Voxengo
plugins had in Renoise, but with `effGetProgramName()` instead of
`effGetProgram()`.
This basically changes the default size of the small vectors used
during VST2 event processing from 256 bytes to the size of a
`DynamicVstEvents` object (which also includes a `small_vector` to hold
MIDI events without allocating) and makes them thread local. We already
have a similar optimization for VST3. There it's a bit neater since we
had already separated the audio processing functions from the
non-time-critical functions. Here we don't have that separation, so we
just make these buffers thread local and large enough to hold our
predefined number of events, and we then shrink them to fit if they
grow even more (which can only happen after reading or writing chunk
data).
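The thread-local buffer strategy can be sketched like this, assuming a
hypothetical 16 KiB default capacity (illustrative, not yabridge's
actual constant or types):

```cpp
#include <cstddef>
#include <vector>

// Illustrative default: large enough for the common case, so regular
// event processing never allocates.
constexpr size_t kDefaultCapacity = 16384;

// Each thread keeps its own reusable scratch buffer, so audio threads
// never contend with the GUI thread for it.
std::vector<char>& scratch_buffer() {
    thread_local std::vector<char> buffer(kDefaultCapacity);
    return buffer;
}

// Use the buffer for a request of `size` bytes, then restore the
// default size so a rare oversized request (e.g. chunk data) doesn't
// pin memory for the rest of the thread's lifetime.
void with_scratch(size_t size) {
    auto& buffer = scratch_buffer();
    if (size > buffer.size()) {
        buffer.resize(size);
    }
    // ... serialize the request into `buffer` here ...
    if (buffer.size() > kDefaultCapacity) {
        buffer.resize(kDefaultCapacity);
        buffer.shrink_to_fit();
    }
}
```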
The change doesn't specifically target `effProcessEvents()`, but that's
where you would see the differences. This is also relevant for
`audioMasterProcessEvents()`.
I once read, years ago, somewhere on Stack Overflow that
`std::vector`s that are preinitialized to a default size would allocate
their initial capacity on the stack. This of course doesn't make any
sense (run-time sized stack allocations can cause all kinds of issues),
so we were still heap allocating with our default 64-byte sized
buffers, just not as often.
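That allocation is easy to make observable with a counting allocator: a
`std::vector` constructed with an initial size always allocates on the
heap, since vectors have no small-buffer optimization. A minimal
demonstration:

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

// Counts every allocation the vector makes.
static int allocation_count = 0;

template <typename T>
struct CountingAllocator {
    using value_type = T;
    CountingAllocator() = default;
    template <typename U>
    CountingAllocator(const CountingAllocator<U>&) {}
    T* allocate(std::size_t n) {
        ++allocation_count;
        return static_cast<T*>(std::malloc(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { std::free(p); }
};

template <typename T, typename U>
bool operator==(const CountingAllocator<T>&, const CountingAllocator<U>&) {
    return true;
}
template <typename T, typename U>
bool operator!=(const CountingAllocator<T>&, const CountingAllocator<U>&) {
    return false;
}

int count_allocations_for_sized_vector() {
    allocation_count = 0;
    // The 64 bytes live on the heap, not on the stack.
    std::vector<char, CountingAllocator<char>> buffer(64);
    return allocation_count;
}
```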