My problem haunted me until today, but I found a way to do exactly what I want!
Reminder: I am outputting only via encoders, no devices involved. After the delay measurements were working, the only missing piece was a way to buffer data before it is sent to an encoder.
Device/playback buffers don't help here because encoders are fed by DSPs, which run before the data reaches any playback/device buffers.
For anyone interested in the results, here is the process I went through and how I finally managed to do it the most flexible way:
My first idea was to use BASS_StreamCreate with the STREAMPROC_DEVICE flag. Attach encoders to this channel and they will get fed by the device/playback buffers! The only problem: this doesn't seem to work with the 'No Sound' device, which makes sense.
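For reference, that dead-end attempt looked roughly like this (a sketch only; BASS_Encode_Start comes from the BASSenc add-on, and the encoder command line is a made-up example):

```c
#include "bass.h"
#include "bassenc.h"

// Sketch of the first (dead-end) idea: wrap the device's final output
// mix in a stream and attach the encoder to that. This works for real
// output devices, but not for the 'No Sound' device.
void encode_device_mix(void)
{
    // freq/chans/flags are ignored with STREAMPROC_DEVICE; the stream
    // taps the current device's output mix
    HSTREAM devstream = BASS_StreamCreate(0, 0, 0, STREAMPROC_DEVICE, NULL);

    // hypothetical encoder command line
    BASS_Encode_Start(devstream, "lame -r - out.mp3", 0, NULL, 0);
}
```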
The right path for me was either a custom buffer or a stream created with STREAMPROC_PUSH, which does exactly what I want once its playback buffer is disabled: queue and play data!
The Push Stream is fed by a
DSP on the source channel (just remove the encoder(s) and attach a
DSP that calls BASS_StreamPutData with the received data).
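In code, the feeding arrangement could look something like this (a minimal sketch; the handle names are mine, and I'm assuming a BASS version where the playback buffer can be disabled via BASS_ATTRIB_BUFFER):

```c
#include "bass.h"

static HSTREAM pushStream; // created with STREAMPROC_PUSH

// DSP callback on the source channel: copy whatever passes through
// into the Push Stream's queue.
static void CALLBACK feed_dsp(HDSP handle, DWORD channel, void *buffer, DWORD length, void *user)
{
    BASS_StreamPutData(pushStream, buffer, length);
}

void setup_push_buffer(DWORD sourceChannel, DWORD freq, DWORD chans)
{
    // Push Stream with the same format as the source
    pushStream = BASS_StreamCreate(freq, chans, BASS_SAMPLE_FLOAT, STREAMPROC_PUSH, NULL);

    // Disable its playback buffer so queued data goes straight to the
    // DSP chain / encoders
    BASS_ChannelSetAttribute(pushStream, BASS_ATTRIB_BUFFER, 0);

    // Feed the queue from the source channel instead of encoding it directly
    BASS_ChannelSetDSP(sourceChannel, feed_dsp, NULL, 0);
}
```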
Now, if you only play the source channel, the queue in the Push Stream will grow. Attach the encoders to the Push Stream and play it shortly after the source, and you have a buffer whose size corresponds to the time difference between the two play calls.
The final touch was to remove the time delay between starting playback of the two channels. I managed this by making the source channel a DECODE stream. As a DECODE stream cannot be played directly, I created another stream with a user-defined writing function (use the 'proc' parameter of BASS_StreamCreate for this), which just returns the result of BASS_ChannelGetData on the source channel and thus "plays" it.
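That user-defined writing function could be sketched like this (names are mine; the source handle is smuggled in via the 'user' parameter):

```c
#include <stdint.h>
#include "bass.h"

// User-defined STREAMPROC that "plays" a DECODE source channel by
// pulling from it on demand. Decoding here also runs the source's
// DSPs, which keep the Push Stream's queue topped up.
static DWORD CALLBACK pull_proc(HSTREAM handle, void *buffer, DWORD length, void *user)
{
    DWORD source = (DWORD)(uintptr_t)user;
    return BASS_ChannelGetData(source, buffer, length);
}

HSTREAM make_player(DWORD decodeSource, DWORD freq, DWORD chans)
{
    return BASS_StreamCreate(freq, chans, BASS_SAMPLE_FLOAT,
                             pull_proc, (void *)(uintptr_t)decodeSource);
}
```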
Be careful: you cannot discard the data (by passing buffer=NULL) when requesting data from a non-recording stream with BASS_ChannelGetData! I created a "garbage" buffer that I pass to such calls to make it work.
On startup, BASS_ChannelGetData is called on the source channel to fill the Push Stream with as much data as you want. Now play both the source and the user-defined stream (I linked them), and you have a buffer of exactly the size you want, without waiting for it to fill up initially.
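The pre-fill step, with the garbage buffer mentioned above, might look roughly like this (a sketch under my naming; 'playerStream' is the stream with the user-defined writing function, and I'm assuming it was linked to the Push Stream with BASS_ChannelSetLink so both start together):

```c
#include "bass.h"

// BASS_ChannelGetData on a non-recording channel will not accept
// buffer=NULL, so use a throwaway buffer when the data itself is
// not needed.
static float garbage[4096];

// Decode 'bytes' of the source up front: the DSP on the source pushes
// everything it decodes into the Push Stream's queue, so this pre-fills
// the buffer. Then start playback immediately.
void start_with_prefill(DWORD decodeSource, HSTREAM playerStream, DWORD bytes)
{
    while (bytes) {
        DWORD want = bytes < sizeof(garbage) ? bytes : sizeof(garbage);
        DWORD got = BASS_ChannelGetData(decodeSource, garbage, want);
        if (got == (DWORD)-1) break; // error or end of source
        bytes -= got;
    }
    // The linked Push Stream starts along with the pulling stream
    BASS_ChannelPlay(playerStream, FALSE);
}
```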
I can elaborate more if someone else needs advice on a similar issue.
And if you are still here, Ian: thank you for your help along the way, much appreciated! And maybe you can make it possible to discard data in non-recording streams in the future. The current behaviour is not very user-friendly: during my testing, BASS_ChannelGetData did not return -1 when NULL was wrongly passed but just hung forever. (Yes, I know, RTFM.)