Author Topic: Measure delay to encoder - is my logic correct?

konnex

  • Posts: 14
Hello folks,

I am trying to include custom data in an encoder's output with precise timing. No sound card involved.

The processing chain looks as follows: Multiple File streams (decode) -> Mixer (played) -> Encoder1 -> ENCODEPROC calling BASS_Encode_Write on Encoder2 -> Encoder2 -> ENCODEPROC (where the custom data is included)

The timed event is implemented via a SYNCPROC on the mixer with BASS_SYNC_MIXTIME (to prevent delays adding/removing sources due to playback buffer).

Inside that SYNCPROC, my current logic is to sum all the buffers in the chain (behind the mixer) and count output frames of Encoder2 until that data has "passed" the encoder. The next encoded frame is then manipulated.

Calculation of buffer level in the chain is the sum of:
- Mixer Buffer: BASS_ChannelGetData with BASS_DATA_AVAILABLE flag (Playback Buffer has to be considered because the SYNCPROC is fired at mixtime = decode position)
- Encoder1 Buffer: BASS_ENCODE_COUNT_QUEUE + BASS_ENCODE_COUNT_IN - BASS_ENCODE_COUNT_OUT = bytes currently in Encoder1
- Encoder2 Buffer: BASS_ENCODE_COUNT_QUEUE + BASS_ENCODE_COUNT_IN - Encoded Seconds (this is calculated by counting frames) = bytes currently in Encoder2

This all is evaluated to a value in seconds which is translated to the frames that need to pass before manipulating the data in the final output.
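For clarity, the conversion from the summed byte counts to seconds is just the usual PCM formula (a standalone sketch, assuming 16-bit samples; the hypothetical `pcm_bytes_to_seconds` mirrors what BASS_ChannelBytes2Seconds does with the channel's actual format):

```c
// Convert a PCM byte count to seconds. Assumes 16-bit samples (2 bytes each);
// in the real code the rate and channel count come from BASS_ChannelGetInfo,
// or BASS_ChannelBytes2Seconds does the whole conversion.
static double pcm_bytes_to_seconds(unsigned long bytes, unsigned rate, unsigned chans)
{
    return (double)bytes / ((double)rate * chans * 2); // 2 bytes per 16-bit sample
}
```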

So far so good, but the result is inconsistent. My only idea right now is that BASS_ENCODE_COUNT_IN may already be incremented when data is queued for the encoders.

But I would like to ask the experts for an opinion on this and maybe it will help others as well.

Thank you and best regards
Konnex

Ian @ un4seen

  • Administrator
  • Posts: 26026
Re: Measure delay to encoder - is my logic correct?
« Reply #1 on: 7 Jul '23 - 11:47 »
I am trying to include custom data in an encoder's output with precise timing. No sound card involved.

The processing chain looks as follows: Multiple File streams (decode) -> Mixer (played) -> Encoder1 -> ENCODEPROC calling BASS_Encode_Write on Encoder2 -> Encoder2 -> ENCODEPROC (where the custom data is included)

I'm not sure I entirely understand what you're doing there, in particular the 2 encoders part; it looks like you're feeding the output from the 1st encoder into the 2nd encoder? If so, I guess the 2nd encoder must be a PCM encoder, ie. not actually doing any encoding? To confirm, please show how you are starting both encoders, ie. the calls/parameters.

Calculation of buffer level in the chain is the sum of:
- Mixer Buffer: BASS_ChannelGetData with BASS_DATA_AVAILABLE flag (Playback Buffer has to be considered because the SYNCPROC is fired at mixtime = decode position)
- Encoder1 Buffer: BASS_ENCODE_COUNT_QUEUE + BASS_ENCODE_COUNT_IN - BASS_ENCODE_COUNT_OUT = bytes currently in Encoder1
- Encoder2 Buffer: BASS_ENCODE_COUNT_QUEUE + BASS_ENCODE_COUNT_IN - Encoded Seconds (this is calculated by counting frames) = bytes currently in Encoder2

This all is evaluated to a value in seconds which is translated to the frames that need to pass before manipulating the data in the final output.

That Encoder1 calculation looks troublesome, because BASS_ENCODE_COUNT_OUT isn't usually dealing with the same data format as BASS_ENCODE_COUNT_QUEUE and BASS_ENCODE_COUNT_IN (and BASS_DATA_AVAILABLE) are, ie. it's dealing with encoded output while the others are PCM input.

So far so good, but the result is inconsistent. My only idea right now is that BASS_ENCODE_COUNT_IN may already be incremented when data is queued for the encoders.

The BASS_ENCODE_COUNT_IN count increases straight after the data has been fed to the encoder, ie. after leaving the queue when BASS_ENCODE_QUEUE is enabled.

konnex

  • Posts: 14
Re: Measure delay to encoder - is my logic correct?
« Reply #2 on: 7 Jul '23 - 15:10 »
I'm not sure I entirely understand what you're doing there, in particular the 2 encoders part; it looks like you're feeding the output from the 1st encoder into the 2nd encoder? If so, I guess the 2nd encoder must be a PCM encoder, ie. not actually doing any encoding? To confirm, please show how you are starting both encoders, ie. the calls/parameters.

The first encoder is doing sound processing with a binary (stdin-stdout), so the sample format for input and output stays the same (DSP is not possible in my case).
The second encoder is ffmpeg outputting mp3 (also stdin-stdout). Both encoders are started with BASS_Encode_Start using BASS_ENCODE_NOHEAD and BASS_ENCODE_QUEUE.

This should also explain why I am using BASS_ENCODE_COUNT_OUT in the calculation for Encoder1.

The BASS_ENCODE_COUNT_IN count increases straight after the data has been fed to the encoder, ie. after leaving the queue when BASS_ENCODE_QUEUE is enabled.

Okay, thanks for confirming.

Maybe the root of my problem is that I am misunderstanding the playback buffers. Is it correct that BASS_ChannelPlay just calls BASS_ChannelGetData periodically and feeds that data (coming from the "old" end of the playback buffer) into the attached encoder?

Sorry for not clarifying what the encoders are doing right away.

Ian @ un4seen

  • Administrator
  • Posts: 26026
Re: Measure delay to encoder - is my logic correct?
« Reply #3 on: 7 Jul '23 - 17:26 »
The encoder feeding (or queuing with BASS_ENCODE_QUEUE) happens during a stream's DSP processing (see BASS_CONFIG_ENCODE_PRIORITY), which means it won't be delayed by the stream's playback buffer, ie. the data is encoded when it's put in the playback buffer, not when it's taken out.

Are you ultimately trying to calculate the time of the MP3 encoder's output? BASS_ENCODE_COUNT_IN will tell you how much has gone into the encoder, but the encoder is quite likely to have its own buffering, which means the output is a bit behind the input. You could try using the BASSenc_MP3 add-on instead of FFMPEG to see whether that helps in your case, eg. perhaps it has less buffering or more consistent buffering.

konnex

  • Posts: 14
Re: Measure delay to encoder - is my logic correct?
« Reply #4 on: 7 Jul '23 - 17:35 »
The encoder feeding (or queuing with BASS_ENCODE_QUEUE) happens during a stream's DSP processing (see BASS_CONFIG_ENCODE_PRIORITY), which means it won't be delayed by the stream's playback buffer, ie. the data is encoded when it's put in the playback buffer, not when it's taken out.

This points me in the right direction, I think. To achieve what I want I cannot take the playback buffers into account. I will try it without and report back.

The encoder buffer will not make that much difference but I will look at that once I get the timing to the ~50ms precision.

Thank you! Now I understand that the playback buffer only delays the audio on an audio device, correct? If so, I can deactivate it since I am using no audio devices but only feed the samples into encoders.
Is there an example of how to feed the encoder without using BASS_ChannelPlay at all? I am not sure how I should construct a loop that calls BASS_ChannelGetData to get a robust chain.

Ian @ un4seen

  • Administrator
  • Posts: 26026
Re: Measure delay to encoder - is my logic correct?
« Reply #5 on: 7 Jul '23 - 17:52 »
If you don't need the encoding to run at realtime/playback speed, then you could make the mixer a "decoding channel" (using BASS_STREAM_DECODE) and repeatedly call BASS_ChannelGetData on it (instead of BASS_ChannelPlay). For example, you could have a worker thread with a processing loop like this:

Code: [Select]
for (;;) {
    BYTE buf[20000]; // processing buffer
    int got = BASS_ChannelGetData(mixer, buf, sizeof(buf)); // process the mixer
    if (got < 0) break; // exit if failed (ended or freed)
}

You should also disable queuing (remove BASS_ENCODE_QUEUE) then, as it isn't really needed and would probably result in dropped data or use a lot of memory (depending on whether the queue is limited via BASS_CONFIG_ENCODE_QUEUE).

konnex

  • Posts: 14
Re: Measure delay to encoder - is my logic correct?
« Reply #6 on: 7 Jul '23 - 21:49 »
A decode channel would work in theory; I am just worried about stability because the application should be capable of running for a very long time.
I suppose there is a similar implementation inside BASS_ChannelPlay. How is BASS_ChannelGetData called there, in a timer? Is there any compensation if the timer fires too late? And is the interval just the buffer update period?
Sorry for asking 3 questions at once.

konnex

  • Posts: 14
Re: Measure delay to encoder - is my logic correct?
« Reply #7 on: 7 Jul '23 - 21:50 »
Sorry I missed the “if you do not need it to be real time”. I am probably better off with using BASS_ChannelPlay and disabling the playback buffer because I don’t need it.

konnex

  • Posts: 14
Re: Measure delay to encoder - is my logic correct?
« Reply #8 on: 9 Jul '23 - 01:06 »
I got the accuracy near perfect now by ignoring the playback buffer in the calculation, thank you!
One last question though: When a MIXTIME Sync on a mixer is called, are the samples already sent to the encoder or are they about to be sent? Because it would explain the last little inaccuracy if they are sent to the encoder before the sync is called.

Ian @ un4seen

  • Administrator
  • Posts: 26026
Re: Measure delay to encoder - is my logic correct?
« Reply #9 on: 10 Jul '23 - 12:16 »
Sorry I missed the “if you do not need it to be real time”. I am probably better off with using BASS_ChannelPlay and disabling the playback buffer because I don’t need it.

If you want the mixer to be processed at realtime speed but don't want to hear it, then you could perhaps play it on the "No Sound" device. Unless it's messing up your timing, I would suggest keeping the playback buffer enabled, as it gives more margin for processing delays, ie. processing extra to catch-up after a delay rather than skipping update cycles.

I got the accuracy near perfect now by ignoring the playback buffer in the calculation, thank you!
One last question though: When a MIXTIME Sync on a mixer is called, are the samples already sent to the encoder or are they about to be sent? Because it would explain the last little inaccuracy if they are sent to the encoder before the sync is called.

A mixtime BASS_SYNC_POS sync will be called after the channel's processing (decoding/DSP/FX/encoding) reaches that position, before any more processing. For example, if a sync is set at 10000, then all data up to 10000 has been sent to the encoder (or at least queued with BASS_ENCODE_QUEUE) when the sync is called. A possible exception to that is when using the BASS_ATTRIB_GRANULE option (please see the documentation for details).

konnex

  • Posts: 14
Re: Measure delay to encoder - is my logic correct?
« Reply #10 on: 10 Jul '23 - 15:51 »
If you want the mixer to be processed at realtime speed but don't want to hear it, then you could perhaps play it on the "No Sound" device. Unless it's messing up your timing, I would suggest keeping the playback buffer enabled, as it gives more margin for processing delays, ie. processing extra to catch-up after a delay rather than skipping update cycles.

Sorry, I do not understand the point of having a playback buffer if nothing is fed by it. If I understand correctly, the mixing/decoding works the same without it. And about the update cycles: if there are no playback buffers, aren't there also no update cycles, which would be even better for performance?
I do understand that there is a point in having a buffer, but in this case I would have to implement one myself since the only consumer of the stream is an encoder which is fed independent of the playback buffer.
Please let me know if I am missing something here or if there is a possibility to feed the encoder from the playback buffer for added stability.

A mixtime BASS_SYNC_POS sync will be called after the channel's processing (decoding/DSP/FX/encoding) reaches that position, before any more processing. For example, if a sync is set at 10000, then all data up to 10000 has been sent to the encoder (or at least queued with BASS_ENCODE_QUEUE) when the sync is called. A possible exception to that is when using the BASS_ATTRIB_GRANULE option (please see the documentation for details).

I understand, that explains why I cannot factor in the contents of the encoder+queue - because at the moment the sync is called the data until that point has reached the encoder already.

Ian @ un4seen

  • Administrator
  • Posts: 26026
Re: Measure delay to encoder - is my logic correct?
« Reply #11 on: 10 Jul '23 - 17:10 »
Sorry, I do not understand the point of having a playback buffer if nothing is fed by it. If I understand correctly, the mixing/decoding works the same without it. And about the update cycles: if there are no playback buffers, aren't there also no update cycles, which would be even better for performance?
I do understand that there is a point in having a buffer, but in this case I would have to implement one myself since the only consumer of the stream is an encoder which is fed independent of the playback buffer.
Please let me know if I am missing something here or if there is a possibility to feed the encoder from the playback buffer for added stability.

The difference with and without playback buffering is which thread the stream's processing happens in. With playback buffering enabled, the processing happens in a playback buffer update thread (see BASS_CONFIG_UPDATETHREADS), and then the buffered result is eventually mixed with the buffers of other playing streams (to produce the final mix) by a device output thread. With playback buffering disabled, both the processing and mixing happen in the device output thread. The playback buffers and device buffers have independent lengths and update cycles, determined by the BASS_CONFIG_BUFFER/UPDATEPERIOD and BASS_CONFIG_DEV_BUFFER/PERIOD settings. A playback buffer is usually bigger than a device output buffer, which means it has more chance of seamlessly recovering from any delayed update cycles. For example, if the playback buffer is 500ms and the device buffer is 50ms and there's a delay of 100ms, then the playback buffer can cover it (the buffer isn't emptied), while the device buffer cannot (there's a 50ms gap).

konnex

  • Posts: 14
Re: Measure delay to encoder - is my logic correct?
« Reply #12 on: 10 Jul '23 - 20:47 »
The difference with and without playback buffering is which thread the stream's processing happens in. With playback buffering enabled, the processing happens in a playback buffer update thread (see BASS_CONFIG_UPDATETHREADS), and then the buffered result is eventually mixed with the buffers of other playing streams (to produce the final mix) by a device output thread. With playback buffering disabled, both the processing and mixing happen in the device output thread. The playback buffers and device buffers have independent lengths and update cycles, determined by the BASS_CONFIG_BUFFER/UPDATEPERIOD and BASS_CONFIG_DEV_BUFFER/PERIOD settings. A playback buffer is usually bigger than a device output buffer, which means it has more chance of seamlessly recovering from any delayed update cycles. For example, if the playback buffer is 500ms and the device buffer is 50ms and there's a delay of 100ms, then the playback buffer can cover it (the buffer isn't emptied), while the device buffer cannot (there's a 50ms gap).

See the attached picture; this is probably easier. Is the order of buffers correct there? If yes, how does the encoder benefit from a playback/device buffer?

What would give me more stability is buffering between the update thread and the encoder (this could probably be done by setting the encoder to realtime and filling the queue a bit before unpausing the encoder, but this is not my preferred option)

Ian @ un4seen

  • Administrator
  • Posts: 26026
Re: Measure delay to encoder - is my logic correct?
« Reply #13 on: 11 Jul '23 - 12:10 »
The flow looks like this:


stream (BASSmix mixer) 🡆 DSP 🡆 playback buffer 🡆 BASS mixer 🡆 device buffer 🡆 device
                          🡇
                       encoder


With playback buffering enabled, everything before the BASS mixer is in a playback update thread, while it's all in a device thread with playback buffering disabled. Note that it is the device that controls the rate in either case. Playback buffering means that data is processed further in advance, and means the BASS mixer can still have data available to it when there are any delays in that processing. Although it doesn't sound like a concern in your case because nothing else is playing, also note that disabling playback buffering means that any processing delays will also delay the BASS mixer, affecting everything that's being played by it.

If you don't want to have playback buffering enabled for any reason (eg. it's complicating your calculations) then you could increase the length of the device buffer via BASS_CONFIG_DEV_BUFFER instead for similar effect.

konnex

  • Posts: 14
Re: Measure delay to encoder - is my logic correct?
« Reply #14 on: 11 Jul '23 - 12:34 »
Thank you very much for the visual explanation!

In my case I am playing to "0" device though, how do the buffers influence how data is sent to the encoder? Or does the "0" device do anything special that I do not understand?

The calculation does not have to be easy, it just has to be accurate and reliable.

What if I feed the playing mixer into another BASSmix Mixer and attach the encoder to that one? That should technically feed the encoder from the playback buffer of the first mixer, correct?

Sorry for the repeated questions and thank you for taking this deep dive into how everything works with me!

Ian @ un4seen

  • Administrator
  • Posts: 26026
Re: Measure delay to encoder - is my logic correct?
« Reply #15 on: 11 Jul '23 - 15:21 »
In my case I am playing to "0" device though, how do the buffers influence how data is sent to the encoder? Or does the "0" device do anything special that I do not understand?

Things are a bit different with the "No Sound" device (device 0), as there is no device buffer. The current BASS release (2.4.17) will allow up to one update cycle (BASS_CONFIG_DEV_PERIOD) of delay before skipping updates, ie. it'll process up to one extra update cycle to catch up after a delay. The latest build instead uses the BASS_CONFIG_DEV_BUFFER setting to determine the maximum allowed delay:

   www.un4seen.com/stuff/bass.zip

It may be that a separate config option is added for this (instead of piggybacking BASS_CONFIG_DEV_BUFFER) in the next release, but this update should be fine to use in the meantime. You would just need to make sure to check the next release's changelog.

What if I feed the playing mixer into another BASSmix Mixer and attach the encoder to that one? That should technically feed the encoder from the playback buffer of the first mixer, correct?

Mixer sources must be "decoding channels" (have BASS_STREAM_DECODE set), which can't be played directly (with BASS_ChannelPlay). So it isn't possible to plug a playing mixer directly into another mixer. You could perhaps use splitters (see BASS_Split_StreamCreate) to both play and mix a mixer, but doing that doesn't seem like it would help in your case?

konnex

  • Posts: 14
Re: Measure delay to encoder - is my logic correct?
« Reply #16 on: 11 Jul '23 - 19:45 »
Thank you for the explanation of the update mechanisms of the "No sound" device.

I think the options for me to get any buffering are:
1. an option is introduced to send samples to the encoder after passing the playback buffer
2. a callback is introduced to receive samples from the playback buffer as the channel is played
3. I implement a loop that mimics BASS_ChannelPlay with BASS_ChannelGetData and takes care of all timing issues related to realtime playback

Since 1. and 2. are probably not likely to happen here is how I would try to do 3.:

Look for a way to call BASS_ChannelGetData repeatedly and compensate for any delays (there must be others who have done this, or you could share how this works, in principle, inside BASS_ChannelPlay).

One "cheap" way could be to limit the encoder to realtime (either with ffmpeg -re flag or with BASS_ENCODE_LIMIT - although that has its own limitations according to the documentation)
- BASS_ENCODE_LIMIT: call BASS_ChannelGetData with a reasonable size as fast as possible and let BASS_Encode_Write delays handle the limiting
- ffmpeg -re flag: set a reasonable BASS_CONFIG_ENCODE_QUEUE size and fill the empty space with data from BASS_ChannelGetData e.g. every 20-100ms
By doing that I could always fill up the free space in that queue and that should compensate for any delays.
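To make the queue-filling idea concrete, the per-cycle amount would be something like this (a sketch with a hypothetical helper; in practice 'queued' would come from BASS_Encode_GetCount with BASS_ENCODE_COUNT_QUEUE, and the result would be read via BASS_ChannelGetData and written with BASS_Encode_Write):

```c
// How many PCM bytes to pull on this cycle: the free space left in the
// encoder queue, capped at one read's worth so a single call stays small.
static unsigned long bytes_to_feed(unsigned long queue_limit, unsigned long queued,
                                   unsigned long max_chunk)
{
    unsigned long free_space = queued >= queue_limit ? 0 : queue_limit - queued;
    return free_space < max_chunk ? free_space : max_chunk;
}
```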

What are your thoughts on this?

Ian @ un4seen

  • Administrator
  • Posts: 26026
Re: Measure delay to encoder - is my logic correct?
« Reply #17 on: 12 Jul '23 - 13:50 »
Please clarify what it is that you ultimately want to achieve. From the original post, it sounds like you want to insert pre-encoded data into the encoder output, eg. perhaps ads in a live stream? If so, won't that break the realtime speed requirement, ie. the inserted data takes it above realtime speed? Wouldn't it be better to feed the pre-encoded data (via a stream) into the mixer? It seems to me like doing so would simplify timing for you, as you only need to be concerned with the mixer's position, not the encoder's position.

konnex

  • Posts: 14
Re: Measure delay to encoder - is my logic correct?
« Reply #18 on: 13 Jul '23 - 00:57 »
Your guess is pretty close. But I am not inserting audio. I only flip a bit in the frame header and the timing stays the same. Ad insertion happens server side.

Ian @ un4seen

  • Administrator
  • Posts: 26026
Re: Measure delay to encoder - is my logic correct?
« Reply #19 on: 13 Jul '23 - 15:51 »
OK. I think the best way to get the MP3 encoder output position would be to count the MP3 frames received by your ENCODEPROC, and multiply that by 1152 (or 576 with sample rates below 32000), which is the number of samples per frame. As you're already locating the frame headers while counting them, that makes it pretty simple to modify them too.
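In sketch form, that counting looks like this (simplified: a real ENCODEPROC should read the frame length from each header and jump frame-to-frame, because the sync pattern can also occur inside frame data, and frames may be split across ENCODEPROC calls):

```c
#include <stddef.h>

// Count MP3 frame sync patterns (11 set bits) in an encoded buffer.
static unsigned count_mp3_syncs(const unsigned char *buf, size_t len)
{
    unsigned frames = 0;
    for (size_t i = 0; i + 1 < len; i++)
        if (buf[i] == 0xFF && (buf[i + 1] & 0xE0) == 0xE0)
            frames++;
    return frames;
}

// Encoder output position in seconds after 'frames' frames:
// 1152 samples per frame, or 576 with sample rates below 32000.
static double mp3_position(unsigned frames, unsigned samplerate)
{
    unsigned spf = samplerate < 32000 ? 576 : 1152; // samples per frame
    return (double)frames * spf / samplerate;
}
```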

konnex

  • Posts: 14
Re: Measure delay to encoder - is my logic correct?
« Reply #20 on: 14 Jul '23 - 16:17 »
That is exactly what I am doing :)

The only thing left unclear is a good way to have a buffer before the encoder. I will try the things I outlined before, but more suggestions on this are welcome of course.

Ian @ un4seen

  • Administrator
  • Posts: 26026
Re: Measure delay to encoder - is my logic correct?
« Reply #21 on: 17 Jul '23 - 12:30 »
Oh yes, looking at the original post again now, I see you mentioned counting frames. But it isn't clear to me why you want to add a buffer. If it's just to cover any processing delays then I think the previously mentioned playback and/or device buffers should already do that.

If you need your encoder output position to be as accurate as possible then one thing to be aware of is that MP3 encoding introduces a delay in the data, ie. the MP3 output contains some silence at the start. Some encoders (eg. LAME used by BASSenc_MP3) store the length of the delay in a header, so you could parse that and subtract it for improved accuracy. If you don't want to bother parsing it, skipping 1 frame (plus the frame containing the LAME header) should be a decent estimate. Of course, it won't be possible for you to get more precise than within a frame anyway (which is 26ms @ 44100hz).

konnex

  • Posts: 14
Re: Measure delay to encoder - is my logic correct?
« Reply #22 on: 9 Aug '23 - 23:28 »
My problem haunted me until today, but I found a way to do exactly what I want!

Reminder: I am outputting only via encoders, no devices involved. After the delay measurements were working the only missing piece was a way to buffer data before it is sent to an encoder. Device/Playback buffers don't help here because encoders are fed by DSPs which run before data reaches any playback/device buffers.

For anyone interested in the results, here is the process I went through and how I finally managed to do it the most flexible way:

My first idea was to use BASS_StreamCreate with the STREAMPROC_DEVICE flag. Attach Encoders to this channel and they will get fed by the device/playback buffers! Only problem: This doesn't seem to work with the 'No Sound' device which makes sense.

The right path for me was to either use a custom buffer or a stream created with STREAMPROC_PUSH - which does exactly what I want when its playback buffer is disabled: Queue and play data!
The Push Stream is fed by a DSP on the source channel (just remove the encoder(s) and attach a DSP that calls BASS_StreamPutData with the received data).

Now if you only play the source channel the queue in the Push Stream will grow. Attach the encoders to the Push Stream and play it shortly after the source and you have a buffer which contains the time difference between the play calls.

The final touches were to remove the time delay between the playback of both channels.
I managed to do this by making the source channel a DECODE stream. As this needs to be played (you cannot play a DECODE stream directly), I created another stream with a user-defined writing function (use the 'proc' parameter of BASS_StreamCreate to do this) which just returns the result of BASS_ChannelGetData on the source channel and thus "plays" it.
Be careful, you cannot discard the data (set buffer=NULL) when requesting data from a non-recording stream with BASS_ChannelGetData! I created a "garbage" buffer that I pass to calls like this to make it work.

On startup BASS_ChannelGetData is called on the source channel to fill the Push Stream with as much data as you want. Now play both the source and the user-defined stream (I linked them) and you have a buffer of exactly the size you want, without waiting for it to fill up initially.
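Conceptually, the push stream here is nothing more than a byte FIFO between the source's DSP and the encoder side. A toy, self-contained sketch of that buffering idea (the real mechanism is BASS_StreamPutData feeding a STREAMPROC_PUSH stream; this is only to illustrate the "queue and play" behaviour):

```c
#include <stddef.h>

#define FIFO_SIZE 65536

// Toy byte FIFO: the DSP "puts" data in, the encoder side "gets" it out later.
// head/tail grow monotonically; indices are taken modulo the buffer size.
typedef struct { unsigned char buf[FIFO_SIZE]; size_t head, tail; } Fifo;

static size_t fifo_put(Fifo *f, const unsigned char *data, size_t len)
{
    size_t space = FIFO_SIZE - (f->head - f->tail);
    if (len > space) len = space; // drop what doesn't fit
    for (size_t i = 0; i < len; i++)
        f->buf[(f->head + i) % FIFO_SIZE] = data[i];
    f->head += len;
    return len; // bytes actually stored
}

static size_t fifo_get(Fifo *f, unsigned char *out, size_t len)
{
    size_t avail = f->head - f->tail;
    if (len > avail) len = avail;
    for (size_t i = 0; i < len; i++)
        out[i] = f->buf[(f->tail + i) % FIFO_SIZE];
    f->tail += len;
    return len; // bytes actually read
}
```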

I can elaborate more if someone else needs advice on a similar issue.

And if you are still here, Ian: thank you for your help along the way, much appreciated! And maybe you can make it possible to discard data in non-recording streams in the future. The behaviour is not too user friendly: during my testing BASS_ChannelGetData did not return '-1' when NULL was wrongly passed but just hung forever. (yes, I know, RTFM  ;D)

Ian @ un4seen

  • Administrator
  • Posts: 26026
Re: Measure delay to encoder - is my logic correct?
« Reply #23 on: 10 Aug '23 - 18:00 »
My first idea was to use BASS_StreamCreate with the STREAMPROC_DEVICE flag. Attach Encoders to this channel and they will get fed by the device/playback buffers! Only problem: This doesn't seem to work with the 'No Sound' device which makes sense.

Were you playing anything on the "No Sound" device at the time? A STREAMPROC_DEVICE stream should work on the "No Sound" device, but only when there are other channels playing on the device. If you have the BASS_CONFIG_DEV_NONSTOP option enabled, that doesn't actually apply to the "No Sound" device currently, so it won't keep the device producing output when nothing is playing (like it would with other devices).

And if you are still here, Ian: thank you for your help along the way, much appreciated! And maybe you can make it possible to discard data in non-recording streams in the future. The behaviour is not too user friendly: During my testing BASS_ChannelGetData did not return '-1' if you wrongly passed NULL but just hanged forever. (yes, I know, RTFM  ;D)

One way I think you can achieve the same result now is to instead use BASS_ChannelGetLevelEx (which internally uses BASS_ChannelGetData). Something like this:

Code: [Select]
float level[1];
BASS_ChannelGetLevelEx(handle, level, BASS_ChannelBytes2Seconds(handle, discard_bytes), BASS_LEVEL_MONO);

Of course, the level measurement part of that is a bit unnecessary/wasteful, but if you're using a managed language then perhaps it's less wasteful than allocating/marshalling an array for BASS_ChannelGetData.

Anyway, good to see that you've got things working well now, even if it is a bit convoluted! :)

Ian @ un4seen

  • Administrator
  • Posts: 26026
Re: Measure delay to encoder - is my logic correct?
« Reply #24 on: 11 Aug '23 - 16:02 »
I forgot to mention that another way you can discard data is to use BASS_ChannelSetPosition with the BASS_POS_DECODETO and BASS_POS_RELATIVE flags, like this:

Code: [Select]
BASS_ChannelSetPosition(handle, discard_bytes, BASS_POS_DECODETO | BASS_POS_RELATIVE);

This method is slightly different to using BASS_ChannelGetData/GetLevelEx in that it won't apply DSP/FX to the discarded data, while they will (if there are any set). Whether that's a good or bad thing depends on what you want. When using BASS_ChannelGetData to discard data from a recording channel, DSP/FX aren't applied to it, so this method is more like that.