As per Boost.Asio's manual, an explicit `socket.shutdown()` is needed
before calling `close()`. For some reason this worked fine in almost
every situation, but when a single process hosted both a plugin running
inside a group host process and a normal, individually hosted plugin,
and those two plugins were then removed in that order, the
`host_vst_dispatch` socket of the first plugin never got closed. This
would cause the entire shutdown sequence to hang on the
`dispatch_handler` jthread.
First discovered in #45
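As a rough sketch of the pattern (not yabridge's exact code), shutting a local stream socket down before closing it looks something like this; the `error_code` overloads are used so an already disconnected socket doesn't throw:

```cpp
#include <boost/asio/local/stream_protocol.hpp>
#include <boost/system/error_code.hpp>

// Shut the socket down before closing it so blocking reads on the
// other side return instead of hanging indefinitely.
void close_socket(boost::asio::local::stream_protocol::socket& socket) {
    boost::system::error_code ec;
    socket.shutdown(
        boost::asio::local::stream_protocol::socket::shutdown_both, ec);
    socket.close(ec);
}
```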
Boost.Process's `boost::process::environment::at` throws when the
environment variable does not exist, as opposed to `operator[]`, which
falls back to an empty value.
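A minimal sketch of reading an optional environment variable with this in mind, assuming Boost.Process's map-like `count()`/`at()` interface (the helper name is made up, not yabridge's actual code):

```cpp
#include <optional>
#include <string>
#include <boost/process/environment.hpp>

// Check for the variable's presence first so the throwing `at()`
// accessor is only used when the entry actually exists.
std::optional<std::string> get_env(const std::string& name) {
    auto env = boost::this_process::environment();
    if (env.count(name) == 0) {
        return std::nullopt;
    }
    return env.at(name).to_string();
}
```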
This is not ideal since it requires the user to know about this option
and to create a config file, but I think it's the best we can do without
compromising on yabridge's transparency and 'zero hacks' philosophy.
See #29 and #32.
This significantly reduces the latency with no real drawbacks that I've
noticed. Wineserver is still run using the normal scheduling policies
because in my testing, running it with realtime priority can actually
increase latencies, although doing so does greatly reduce the variance
in processing time.
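For reference, a minimal sketch of what requesting realtime scheduling looks like on Linux (the function name and priority value are purely illustrative):

```cpp
#include <sched.h>

// Request SCHED_FIFO realtime scheduling for the calling process.
// Returns false when the user lacks the rtprio privileges, in which
// case the normal scheduling policy is simply kept.
bool set_realtime_priority(int priority) {
    sched_param params{};
    params.sched_priority = priority;
    return sched_setscheduler(0, SCHED_FIFO, &params) == 0;
}
```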
When creating copies with yabridgectl, this should at least give an
advance warning that some additional steps are required when first
setting up yabridge.
It's a bit clearer this way. I would prefer using jthreads here, but we
would still need this try-catch block since there's no way to cancel
synchronous Boost.Asio socket operations other than closing the socket.
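As an illustration of why the try-catch stays, here is a sketch of such a handler loop under assumed names (not the actual yabridge handler): closing the socket from another thread makes the pending blocking read throw, which is the only way to get the loop to terminate.

```cpp
#include <array>
#include <boost/asio/buffer.hpp>
#include <boost/asio/local/stream_protocol.hpp>
#include <boost/asio/read.hpp>
#include <boost/system/system_error.hpp>

// Keep handling messages until the socket is closed from another
// thread, at which point the blocking read throws and the loop ends.
void dispatch_loop(boost::asio::local::stream_protocol::socket& socket) {
    try {
        std::array<char, 1024> buffer;
        while (true) {
            // Blocks until data arrives or the socket gets closed
            boost::asio::read(socket, boost::asio::buffer(buffer));
            // ...handle the received message here...
        }
    } catch (const boost::system::system_error&) {
        // The socket was closed, so the thread can terminate cleanly
    }
}
```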
Not sure how this got in, and I'm even less sure why it has not caused
any issues before. In the particular case that was causing a crash, the
host was sending buffers of 138 samples. This error likely only became
visible because the lack of memory alignment caused writes to parts of
the vector objects themselves.
I did not know that `std::optional::value()` does checked access. I
still prefer the more explicit `.has_value()` over the boolean
conversion, but the latter seems to be the accepted way to do this.
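For reference, a small self-contained illustration of the accessors in question:

```cpp
#include <iostream>
#include <optional>

int main() {
    std::optional<int> empty;

    if (!empty) {  // boolean conversion, same as !empty.has_value()
        std::cout << "empty\n";
    }

    try {
        empty.value();  // checked access, throws on an empty optional
    } catch (const std::bad_optional_access&) {
        std::cout << "bad_optional_access\n";
    }
}
```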
This cleans up the PluginBridge significantly by getting rid of all the
fields and handling that were only needed for connecting to plugin
groups. This was also the last thing I wanted to refactor before
releasing the plugin groups feature with yabridge 1.2.
If two yabridge instances start at the same time and both try to launch
a group host for the same group, and the second plugin then connects to
the first group host, we should of course use that process's PID to
check whether the group host is still running.
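A minimal sketch of that kind of liveness check on Linux (the helper is illustrative, not the actual implementation):

```cpp
#include <cerrno>
#include <signal.h>
#include <sys/types.h>

// Signal 0 performs the existence and permission checks without
// actually delivering a signal, so this tells us whether a process
// with the given PID is still around.
bool is_process_running(pid_t pid) {
    return kill(pid, 0) == 0 || errno == EPERM;
}
```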