Author Topic: Monitoring a microphone recording produces static only on Linux  (Read 2713 times)

rileythefox

  • Posts: 30
I'm developing a rhythm game which includes singing, and I'm trying to monitor a microphone recording so that the user can hear themselves sing. I've had trouble getting this to work on Linux previously, as there are some limitations on Linux compared to Windows (simultaneous recordings are hit or miss, and recording device info such as supported formats and channel count can't be retrieved), but after some redesign I'm finally able to get microphones working on Linux.

But now I've run into a problem with the monitoring stream. When the user is not singing or producing any meaningful sound input, a loud static crackling sound starts coming from the monitoring channel. This behavior can only be observed on Linux; Windows systems are not affected. I've narrowed the problem down to 2 things: a reverb effect applied to the monitoring channel, and fetching data (BASS_ChannelGetData) from a secondary decoding push stream.

The setup for this system goes as follows:
1. Create a push stream for the monitoring channel.
2. Apply BASS_FX Freeverb to the monitoring channel, and a DSP to apply gain.
3. Begin playing the monitoring channel.
4. Initialise a recording device and start recording from it.
5. Create a decoding push stream for batching data for pitch detection.
6. Apply 2 BASS_FX PeakEQ effects to the pitch detection channel, one to cut off low frequencies and the other to cut off high frequencies.

The RECORDPROC callback function for the recording stream works as follows:
1. Push the sample data to the monitoring channel so it is immediately played back to the user.
2. Push the sample data to the pitch processing decoding channel.
3. Check if 40ms has passed by accumulating the recording period until it reaches 40ms.
4. If 40ms has passed, all the data buffered in the pitch processing channel is fetched with BASS_ChannelGetData and then passed into a pitch detection algorithm.

I "batch" the data every 40ms because the pitch detection algorithm is ran 25 times a second. That is why a decoding push stream is used here, to place data in ready to be decoded every 40ms (and to pass it through the EQ effects)

Here's a sample of the code. It's written in C# using the ManagedBass wrapper library but it should make sense from a C perspective.
The stream setup code:
Code: [Select]
int monitorHandle = 0;
int monitorGainHandle = 0;

int recordingHandle = 0;
int pitchProcessingHandle = 0;

int timeAccumulated = 0;
int processingBufferLength = 0;

void Setup()
{
    monitorHandle = Bass.CreateStream(44100, 1, BassFlags.Default, StreamProcedureType.Push);
    ReverbParameters reverbParameters = new()
    {
        fDryMix = 0.3f, fWetMix = 1f, fRoomSize = 0.4f, fDamp = 0.7f
    };

    // Problematic effect. If this effect is not added, crackling is not heard on the monitoring stream.
    int reverbHandle = Bass.ChannelSetFX(monitorHandle, EffectType.Freeverb, 1);
    Bass.FXSetParameters(reverbHandle, reverbParameters);

    monitorGainHandle = Bass.ChannelSetDSP(monitorHandle, ApplyGain);

    Bass.ChannelPlay(monitorHandle);

    // Recording setup
    Bass.RecordInit(1);

    recordingHandle = Bass.RecordStart(44100, 1, BassFlags.Default, 10, ProcessRecordingData, IntPtr.Zero);

    pitchProcessingHandle = Bass.CreateStream(44100, 1, BassFlags.Decode, StreamProcedureType.Push);

    PeakEQParameters lowEqParameters = new()
    {
        fBandwidth = 2.5f, fCenter = 20f, fGain = -10f
    };
    PeakEQParameters highEqParameters = new()
    {
        fBandwidth = 2.5f, fCenter = 10_000f, fGain = -10f
    };

    int lowEqHandle = Bass.ChannelSetFX(pitchProcessingHandle, EffectType.PeakEQ, 0);
    int highEqHandle = Bass.ChannelSetFX(pitchProcessingHandle, EffectType.PeakEQ, 0);

    Bass.FXSetParameters(lowEqHandle, lowEqParameters);
    Bass.FXSetParameters(highEqHandle, highEqParameters);
}

And the code for the RECORDPROC callback:
Code: [Select]
bool ProcessRecordingData(int handle, IntPtr buffer, int length, IntPtr user)
{
    // Copies the data from the recording buffer to the monitor playback buffer.
    Bass.StreamPutData(monitorHandle, buffer, length);

    // Copy the data to the pitch processing handle to apply EQ FX
    Bass.StreamPutData(pitchProcessingHandle, buffer, length);

    timeAccumulated += 10;

    processingBufferLength += length;

    // 40ms has passed so get the data and run pitch detection
    if (timeAccumulated >= 40)
    {
        unsafe
        {
            // Allocate a buffer on the stack
            byte* procBuff = stackalloc byte[processingBufferLength];

            // This call produces crackling on the monitoring stream (only on Linux)
            Bass.ChannelGetData(pitchProcessingHandle, (IntPtr) procBuff, processingBufferLength);

            int shortLength = processingBufferLength / sizeof(short);
            var readOnlySpan = new ReadOnlySpan<short>(procBuff, shortLength);

            // Calculate pitch
            CalculatePitchAndAmplitude(readOnlySpan);
        }

        timeAccumulated = 0;
        processingBufferLength = 0;
    }

    return true;
}

I've done some testing and narrowed down where the crackling comes from. In the RECORDPROC callback, commenting out this line gets rid of the crackling sound:
Code: [Select]
// This call produces crackling on the monitoring stream (only on Linux)
Bass.ChannelGetData(pitchProcessingHandle, (IntPtr) procBuff, processingBufferLength);

As well as this, in the stream setup, commenting out the lines which add the Freeverb effect to the monitoring channel also removed the static sound.
Code: [Select]
// Problematic effect. If this effect is not added, crackling is not heard on the monitoring stream.
int reverbHandle = Bass.ChannelSetFX(monitorHandle, EffectType.Freeverb, 1);
Bass.FXSetParameters(reverbHandle, reverbParameters);

I've been trying to figure this out for around a day now and I'm stumped as to what could be causing the static. It's even weirder that this behavior does not happen on Windows, but only Linux. I'm not sure why the call to BASS_ChannelGetData on the pitch processing push stream can cause static crackling on the monitoring channel, nor why removing the Freeverb effect makes it go away.

I've attached a short mp4 clip of what the static sounds like in a zip file. Not heard in the clip is the sound going away when you begin to talk or sing into the microphone. When the user does that, they can hear themselves normally but quickly after they go quiet the static comes back.

Ian @ un4seen

  • Administrator
  • Posts: 26083
Does the recorded data become pure silence, ie. all 0s? If so, and the problem only happens after that, then perhaps it's denormals in the reverb and/or CalculatePitchAndAmplitude processing, increasing CPU usage and causing delays.
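If it does turn out to be denormals, a common workaround is to mix a tiny constant offset into the signal ahead of the reverb, so that its feedback tail never decays into the denormal range. A minimal sketch in ManagedBass, assuming floating-point data (BASS_SAMPLE_FLOAT or BASS_CONFIG_FLOATDSP), with a DSP priority above the Freeverb's so it runs first:

Code: [Select]
// Priority 2 places this ahead of the Freeverb (priority 1) in the DSP chain.
// The offset is far below audibility but keeps values out of denormal range.
unsafe void PreventDenormals(int handle, int channel, IntPtr buffer, int length, IntPtr user)
{
    float* samples = (float*)buffer;
    int count = length / sizeof(float);
    for (int i = 0; i < count; i++)
        samples[i] += 1e-15f;
}

Bass.ChannelSetDSP(monitorHandle, PreventDenormals, IntPtr.Zero, 2);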

What BASS and BASS_FX versions are you currently using, ie. what do BASS_GetVersion and BASS_FX_GetVersion report? If you aren't already doing so, please try these latest BASS and BASS_FX builds:

   www.un4seen.com/stuff/bass-linux.zip
   www.un4seen.com/stuff/bass_fx-linux.zip

If the problem persists, please also confirm which architecture's libraries you're using, eg. x86 or x86_64.

rileythefox

  • Posts: 30
The BASS version I'm using is 2.4.17.10 and the BASS_FX version is 2.4.12.6. The recorded input always has some minimal ambient background noise, so I would assume it's not all 0s, but the static sound still comes through. It seems that the louder the input, the less prominent the static is.

I tried the latest builds you linked and the problem still persists. The architecture I'm using is x86_64.

I also initially thought that the pitch detection algorithm could be slowing it down and causing delays, but I removed it and just kept the pushing/data fetching and the problem was still present. And when I remove the data fetching by removing the call to BASS_ChannelGetData, the static sound is gone entirely. Same as if I remove the Freeverb effect from the monitoring channel. I benchmarked the pitch detection algorithm and it takes roughly 250 microseconds to run, so I don't think it was causing any significant delays when enabled anyway.

I measured the CPU usage of BASS using BASS_GetCPU and the value averaged around 0.1 (10%), never going above 0.12 (12%)

Ian @ un4seen

  • Administrator
  • Posts: 26083
OK, that doesn't look like it's denormals then. BASS_GetCPU returns a percentage (eg. 100 = 100%), so it looks like the "CPU usage" (processing timeliness) in your case is actually only 0.1-0.12%, but BASS_GetCPU only covers playback, not recording (so doesn't include your RECORDPROC).

To perhaps narrow down what/where the problem is, please try writing the recording and monitoring channels to WAV files, and then listen/look at those WAV files for any problems. You can use the BASSenc add-on to write the WAV files:

Code: [Select]
BASS_Encode_Start(recordingHandle, "recording.wav", BASS_ENCODE_PCM | BASS_ENCODE_QUEUE | BASS_ENCODE_AUTOFREE, 0, 0);
BASS_Encode_Start(monitorHandle, "monitor.wav", BASS_ENCODE_PCM | BASS_ENCODE_QUEUE | BASS_ENCODE_AUTOFREE, 0, 0);

rileythefox

  • Posts: 30
I'm having trouble getting the encoder to work. When I start the encoder for the recording channel, my application crashes as soon as I begin playing the recording channel. When I start the encoder for the monitoring channel, the application crashes as soon as I call BASS_StreamPutData on the monitoring channel from the RECORDPROC callback.

I've debugged the application and I'm getting a stack overflow in a thread belonging to bassenc.dll (I'm writing the application on Windows and testing the monitoring problem on Linux).

This is the code I wrote for starting the encoding:
Code: [Select]
monitorHandle = Bass.CreateStream(44100, 1, BassFlags.Default, StreamProcedureType.Push);
ReverbParameters reverbParameters = new()
{
    fDryMix = 0.3f, fWetMix = 1f, fRoomSize = 0.4f, fDamp = 0.7f
};

// Problematic effect. If this effect is not added, crackling is not heard on the monitoring stream.
int reverbHandle = Bass.ChannelSetFX(monitorHandle, EffectType.Freeverb, 1);
Bass.FXSetParameters(reverbHandle, reverbParameters);

monitorGainHandle = Bass.ChannelSetDSP(monitorHandle, ApplyGain);

// Recording setup
Bass.RecordInit(1);

recordingHandle = Bass.RecordStart(44100, 1, BassFlags.Default | BassFlags.RecordPause, 10, ProcessRecordingData, IntPtr.Zero);

BassEnc.EncodeStart(recordingHandle, "recording.wav", EncodeFlags.PCM | EncodeFlags.Queue | EncodeFlags.AutoFree, null, IntPtr.Zero);
BassEnc.EncodeStart(monitorHandle, "monitor.wav", EncodeFlags.PCM | EncodeFlags.Queue | EncodeFlags.AutoFree, null, IntPtr.Zero);

Bass.ChannelPlay(monitorHandle);
Bass.ChannelPlay(recordingHandle);

Everything else such as the code from the RECORDPROC is identical to the original post.

Ian @ un4seen

  • Administrator
  • Posts: 26083
ManagedBass has a few BassEnc.EncodeStart overloads and perhaps the wrong one is getting called? You could try forcing the right one like this:

Code: [Select]
BassEnc.EncodeStart(recordingHandle, CommandLine:"recording.wav", EncodeFlags.PCM | EncodeFlags.Queue | EncodeFlags.AutoFree, null, IntPtr.Zero);
BassEnc.EncodeStart(monitorHandle, CommandLine:"monitor.wav", EncodeFlags.PCM | EncodeFlags.Queue | EncodeFlags.AutoFree, null, IntPtr.Zero);

If it still doesn't work then you could try writing FLAC files instead with BASSenc_FLAC, like this (I think):

Code: [Select]
BassEnc_Flac.Start(recordingHandle, null, EncodeFlags.Queue | EncodeFlags.AutoFree, "recording.flac");
BassEnc_Flac.Start(monitorHandle, null, EncodeFlags.Queue | EncodeFlags.AutoFree, "monitor.flac");

rileythefox

  • Posts: 30
I managed to get the encoding to work after specifying the right overload. Thanks for pointing that out.

As for what can be heard in the wav files:
- "recording.wav" is completely clean, no static sounds.
- "monitor.wav" contains the static sounds just like it was heard from the channel in realtime

I encoded a 3rd file, "processed.wav", which is from the channel that I pass the sample data through for pitch detection (and which applies the EQ effects). This file is also completely clean of any static sounds. This adds more confusion, because the monitoring channel and the pitch processing channel have identical settings (sample rate, channels, both push streams). The only difference is that the pitch processing channel is a decoding channel. To reiterate: the crackling sounds only occur when fetching data with BASS_ChannelGetData from the pitch stream, or when the Freeverb is added to the monitoring channel.

I opened the "monitor.wav" file in Audacity and looked at the waveform and spectogram view and this is what it looks like.



As you can see in the spectogram the frequency is spiking to 20KHz+. Not sure what could be causing this.

I have gathered a little bit more information, however. One of the people helping me test this issue tried 2 audio devices, a Focusrite Scarlett Solo audio interface and a RODE NT USB microphone. When they tried the RODE mic, the static sounds were heard on the monitoring. When they tried the Focusrite Scarlett, there was no static at all. I asked them to provide some details about the audio devices' setups, such as sample rate and channel count, and this is what I got back:

Code: [Select]
117     alsa_output.usb-Focusrite_Scarlett_Solo_USB-00.3.HiFihw_USBsink     PipeWire        s32le 2ch 48000Hz SUSPENDED
513     alsa_output.usb-RODE_Microphones_RODE_NT-USB-00.analog-stereo       PipeWire        s16le 2ch 48000Hz SUSPENDED

At first I was a bit stumped, as I initially thought the mismatch between the devices' sample rate (48,000Hz) and the rate the streams are created with (44,100Hz) could be the problem; maybe it was not being converted properly by BASS automatically on Linux. My own microphone on Windows is set to 48,000Hz and is converted fine when using 44,100Hz streams. However, when I asked them to set the devices to 44,100Hz, there was no difference in the results.

After a while, however, I noticed that the Focusrite Scarlett is using signed 32-bit little-endian PCM sample data, while the RODE mic is using signed 16-bit little-endian data. Could this be a cause of the problem?

Ian @ un4seen

  • Administrator
  • Posts: 26083
From your screenshot, it looks like there are some invalid numbers in the sample data. I don't think the device's sample format (eg. 16-bit vs 32-bit) could be causing it because that doesn't come into play until later (after the data is in the final mix), so it seems strange that one device would be affected while another (on the same system?) isn't. Please try setting the BASS_SAMPLE_FLOAT flag on the channels for floating-point data and see what it looks like then. Also try adding the same Freeverb effect (and parameters) to your pitch detection channel and see if the problem appears in its data too then.
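For the latter, that means mirroring the monitor channel's effect setup onto the pitch channel, eg. something like:

Code: [Select]
// Same effect and parameters as on the monitoring channel
int pitchReverbHandle = Bass.ChannelSetFX(pitchProcessingHandle, EffectType.Freeverb, 1);
Bass.FXSetParameters(pitchReverbHandle, reverbParameters);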

rileythefox

  • Posts: 30
Hi Ian.

I added the BASS_SAMPLE_FLOAT flag to all the channels I could. This includes the recording channel, the monitoring channel and the pitch processing channel. I also added BASS_ENCODE_FP_AUTO to the encoders. This seems to have fixed the issue entirely. 3 people tried the change and all of them reported that the static sounds were gone. I don't know if adding the Freeverb to the pitch channel made any difference, as I think the floating-point sample data may have fixed it entirely.
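Roughly, the changes were as follows (not the exact diff; the buffer maths also change since float samples are 4 bytes instead of 2, so the RECORDPROC reads ReadOnlySpan<float> instead of ReadOnlySpan<short>):

Code: [Select]
// BASS_SAMPLE_FLOAT == BassFlags.Float in ManagedBass
monitorHandle = Bass.CreateStream(44100, 1, BassFlags.Float, StreamProcedureType.Push);
recordingHandle = Bass.RecordStart(44100, 1, BassFlags.Float, 10, ProcessRecordingData, IntPtr.Zero);
pitchProcessingHandle = Bass.CreateStream(44100, 1, BassFlags.Decode | BassFlags.Float, StreamProcedureType.Push);
// plus BASS_ENCODE_FP_AUTO on each encoder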

Is it worth trying to investigate it further to determine the root cause of why this issue is only present on Linux? Or should I just stick with using the floating point channels? I've never had to use floating point to work around an issue like this.

I'm also a little concerned because I think I've had some problems in the past trying to use floating point on channels with certain systems. I believe I have gotten BASS_ERROR_FORMAT errors when specifying BASS_SAMPLE_FLOAT before, but I can't remember exactly. ManagedBass states that the BASS_STREAM_DECODE flag or WDM drivers are required to use BASS_SAMPLE_FLOAT, but the BASS documentation makes no mention of this or of any limitations on where you can use BASS_SAMPLE_FLOAT, and it's now working fine on Linux, where WDM drivers would not be present, and I put the flag on channels which don't have BASS_STREAM_DECODE.

I'm glad that a solution has been found, still confuses me that this is a Linux only issue. Is there anything else that can be done?

Ian @ un4seen

  • Administrator
  • Posts: 26083
Quote from: rileythefox
I added the BASS_SAMPLE_FLOAT flag to all the channels I could. This includes the recording channel, the monitoring channel and the pitch processing channel. I also added BASS_ENCODE_FP_AUTO to the encoders. This seems to have fixed the issue entirely. 3 people tried the change and all of them reported that the static sounds were gone. I don't know if adding the Freeverb to the pitch channel made any difference, as I think the floating-point sample data may have fixed it entirely.

Is it worth trying to investigate it further to determine the root cause of why this issue is only present on Linux? Or should I just stick with using the floating point channels? I've never had to use floating point to work around an issue like this.

Interesting. Did you also try setting the Freeverb effect on your pitch detection channel without adding the BASS_SAMPLE_FLOAT flag everywhere? If not, please try that to check whether the problem may have been related to the Freeverb effect. Another thing that may be worth checking is whether the problem also happens when using the x86 arch (not only x86_64).

Quote from: rileythefox
I'm also a little concerned because I think I've had some problems in the past trying to use floating point on channels with certain systems. I believe I have gotten BASS_ERROR_FORMAT errors when specifying BASS_SAMPLE_FLOAT before, but I can't remember exactly. ManagedBass states that the BASS_STREAM_DECODE flag or WDM drivers are required to use BASS_SAMPLE_FLOAT, but the BASS documentation makes no mention of this or of any limitations on where you can use BASS_SAMPLE_FLOAT, and it's now working fine on Linux, where WDM drivers would not be present, and I put the flag on channels which don't have BASS_STREAM_DECODE.

WDM drivers were required for floating-point playback on Windows back when DirectSound handled the mixing, but BASS always handles the mixing itself these days (even when using DirectSound output). Floating-point support was always available on other platforms (inc. Linux), except for the old non-FPU ARM builds (no longer provided).

rileythefox

  • Posts: 30
I removed the float flag from all the channels and just kept the added Freeverb effect on the pitch detection channel. No static can be heard in the pitch processing channel's output, but it is still present on the monitoring channel.

Really strange how no other channel is affected here and only the monitoring channel. And the static only occurs if I call BASS_ChannelGetData on the pitch processing channel like I do in the RECORDPROC:
Code: [Select]
public bool ProcessRecordData(int handle, IntPtr buffer, int length, IntPtr user)
{
    // Copies the data from the recording buffer to the monitor playback buffer.
    if (Bass.StreamPutData(monitorHandle, buffer, length) == -1)
    {
        Console.WriteLine("Error pushing data to monitor stream: {0}", Bass.LastError);
    }

    // Copy the data to the pitch processing handle to apply EQ FX
    Bass.StreamPutData(pitchProcessingHandle, buffer, length);

    // Need to do it based on device period
    _timeAccumulated += 10; //RecordingHandle.CLEAN_RECORD_PERIOD_MS;

    _processedBufferLength += length;

    // Enough time has passed for pitch detection
    if (_timeAccumulated >= RECORD_PERIOD_MS)
    {
        unsafe
        {
            byte* procBuff = stackalloc byte[_processedBufferLength];

            // This is what produces the static sound. Removing this call fixes the problem.
            Bass.ChannelGetData(pitchProcessingHandle, (IntPtr)procBuff, _processedBufferLength);
        }

        _timeAccumulated = 0;
        _processedBufferLength = 0;
    }

    return true;
}

Any ideas as to why calling BASS_ChannelGetData can malform the data in the monitoring channel? The two channels should be isolated from one another, right? That's the part that confuses me the most. The same data is pushed to both channels, but one is fetched manually as it is a decoding channel, and that fetch then causes the data pushed to the other channel to get transformed in some way. At least that's what I think I'm observing, but I'm not entirely sure.

Unfortunately I can't test the x86 architecture because .NET does not have a runtime for x86 Linux. I would rewrite my test application in C and compile it for x86 through Windows Subsystem for Linux, but I'm not familiar enough with the compilation toolchain and I can't figure out how to link against BASS with gcc on Linux.

Ian @ un4seen

  • Administrator
  • Posts: 26083
It shouldn't be possible for the channels to directly affect each other's data like that, but one possibility is that some processing is changing the FPU control word, and that's affecting other processing. That seems unlikely on x86_64 (where SSE is usually used), but who knows. Judging by your latest code, the problem happens even if you don't actually do any pitch processing; just calling BASS_ChannelGetData is enough? If so, please try removing all effects/DSP (so there's no processing) from that channel (pitchProcessingHandle) and see if the problem still happens then.
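Regarding building a C test for x86: the example makefiles in the BASS Linux package effectively just point gcc at libbass.so, something like this (with libbass.so in the source directory; add -m32 and the x86 libbass.so for a 32-bit build, which needs a multilib gcc):

Code: [Select]
gcc test.c -o test -L. -lbass
LD_LIBRARY_PATH=. ./test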

rileythefox

  • Posts: 30
I removed all FX/DSP from the pitch processing channel and the problem still persists on the monitoring. You're right in saying that the problem happens without any pitch processing. I removed the call to that a while ago to eliminate any extra variables, so for pretty much this entire post it has just been the call to BASS_ChannelGetData.

I too thought it would be unlikely for channels to affect another channel's data like this (because pushing data to a stream should be making a separate copy into its own buffer) but mysteriously that's what appears to be happening.

Reminder that if I remove the call to BASS_ChannelGetData on the pitch processing channel (and just keep pushing data but never fetching it), the problem goes away. If I remove the Freeverb effect from the monitoring channel, the problem also goes away. Something to do with decoding the data from the pitch channel (it's a decode push stream) using BASS_ChannelGetData somehow affects the Freeverb processing on the monitoring channel. That's what I think I'm seeing at least, because without either the BASS_ChannelGetData call on the pitch channel OR the Freeverb effect on the monitoring channel (I don't have to remove both, just one or the other), the problem disappears.

And again, only a Linux issue (at least on x86_64, I can't test x86)

Ian @ un4seen

  • Administrator
  • Posts: 26083
The only way I can see that BASS_ChannelGetData call affecting other things is by changing the timing, ie. delaying them slightly. But even that seems unlikely, especially if there's no DSP/FX on it. Perhaps it's the "procBuff" allocation? When you try removing the BASS_ChannelGetData call, do you also remove that? Does the problem happen if you use an invalid handle in the BASS_ChannelGetData call with the other parameters left the same?

To confirm whether the problem is being introduced by the Freeverb effect, please try setting another WAV writer before it in the DSP chain, ie. so you have WAV files of before and after it. You can adjust the encoder DSP chain position via the BASS_CONFIG_ENCODE_PRIORITY option, something like this:

Code: [Select]
BASS_SetConfig(BASS_CONFIG_ENCODE_PRIORITY, 1); // set encoder "priority" higher than the freeverb effect
BASS_Encode_Start(monitorHandle, "monitor-before.wav", BASS_ENCODE_PCM | BASS_ENCODE_QUEUE | BASS_ENCODE_AUTOFREE, 0, 0);
BASS_SetConfig(BASS_CONFIG_ENCODE_PRIORITY, -1000); // reset to default
BASS_Encode_Start(monitorHandle, "monitor-after.wav", BASS_ENCODE_PCM | BASS_ENCODE_QUEUE | BASS_ENCODE_AUTOFREE, 0, 0);

rileythefox

  • Posts: 30
I set the encoders up like you showed and got results back from testing. The "monitor-before.wav" file has no static sound, while the "monitor-after.wav" file does have the static. So it looks like it's being caused by the Freeverb effect, but only when calling BASS_ChannelGetData on that other channel?

I've still got no other effects/DSP on the other channels to rule out everything else. The microphone recording channel is clean and so is the pitch processing channel. The only stream with effects on it now is the monitoring channel.


Ian @ un4seen

  • Administrator
  • Posts: 26083
Did you try using an invalid handle (eg. 0) in the BASS_ChannelGetData call? If not, please do so (with no other changes) and see if the problem still happens then.

Also, does the same/similar problem happen if you use a different BASS_FX effect instead of Freeverb?

rileythefox

  • Posts: 30
I changed the call to BASS_ChannelGetData to use 0 as the handle and the problem persisted. However I tried a different BASS_FX effect (Chorus) and the problem went away. Looks to be a problem with the Freeverb effect on Linux? Not too sure why removing the BASS_ChannelGetData call entirely earlier in this post also seemed to fix it though.

Ian @ un4seen

  • Administrator
  • Posts: 26083
Quote from: rileythefox
I changed the call to BASS_ChannelGetData to use 0 as the handle and the problem persisted.

Does the problem still happen if you also remove the "procBuff" allocation and use buffer=null in the BASS_ChannelGetData call?

Quote from: rileythefox
Looks to be a problem with the Freeverb effect on Linux?

Do the effect's parameters make any difference, eg. does it happen if you leave the defaults? And does it happen if you use stereo instead of mono? Also, to confirm, it doesn't happen if you use the BASS_SAMPLE_FLOAT flag?

rileythefox

  • Posts: 30
I removed the buffer allocation and used null for the call, problem still happens.

I tried using default parameters for the reverb and the problem still happens.

I couldn't get stereo to work properly for some reason (Just changing the monitoring to stereo made it fail to push to the stream and changing the recording channel to stereo caused the sound to be really slow and delayed?)

And the problem doesn't occur if I use BASS_SAMPLE_FLOAT on all the channels.

Ian @ un4seen

  • Administrator
  • Posts: 26083
Quote from: rileythefox
I removed the buffer allocation and used null for the call, problem still happens.

And it never happens if you remove the BASS_ChannelGetData call? It's strange if that call is having such influence because with handle=0 it simply returns immediately. I'm not familiar with what the "unsafe" line does, but does the problem still happen if you remove that?

Quote from: rileythefox
I couldn't get stereo to work properly for some reason (Just changing the monitoring to stereo made it fail to push to the stream and changing the recording channel to stereo caused the sound to be really slow and delayed?)

You would need to set chans=2 in the BASS_RecordStart and BASS_StreamCreate calls. Is that what you did? If you only change one of them then the output/monitor will indeed sound the wrong speed.
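ie. based on your earlier code, something like:

Code: [Select]
// chans=2 in all of them, so the recorded data and both push streams agree
recordingHandle = Bass.RecordStart(44100, 2, BassFlags.Default, 10, ProcessRecordingData, IntPtr.Zero);
monitorHandle = Bass.CreateStream(44100, 2, BassFlags.Default, StreamProcedureType.Push);
pitchProcessingHandle = Bass.CreateStream(44100, 2, BassFlags.Decode, StreamProcedureType.Push);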

Is the problem happening everywhere or only on particular systems? If everywhere, would it be possible to create a minimal test (containing only the required code) to reproduce it with here?

rileythefox

  • Posts: 30
I removed the BASS_ChannelGetData call again to confirm, but now I'm being told the problem still happens without it?

I've decided to cross-check the previous iterations of the code I've tried over the past 2 weeks, and the only difference I can see is that previously, when I removed the BASS_ChannelGetData call and that resolved it, I had the BASS_CONFIG_FLOATDSP config option enabled. In the test I tried just before that one, the problem occurred with the BASS_ChannelGetData call and that config option enabled.

In the code I have now, I don't have BASS_CONFIG_FLOATDSP enabled and removing the call to BASS_ChannelGetData did not resolve it this time. However in previous tests from a couple weeks ago, when setting every channel to use BASS_SAMPLE_FLOAT the problem went away entirely.

So this is what I'm noticing:
BASS_CONFIG_FLOATDSP Enabled -> Fetch channel data -> Problem occurs
BASS_CONFIG_FLOATDSP Enabled -> DONT fetch channel data -> No static
BASS_CONFIG_FLOATDSP Disabled -> Problem occurs regardless

BASS_SAMPLE_FLOAT Enabled -> No problem at all.

I read the documentation to double check and it says that BASS_CONFIG_FLOATDSP affects both user-created DSPPROCs and built-in effects. So maybe the problem is something to do with using/not using floating-point sample data on the channel directly and having it converted as it's being passed to DSPs/effects?
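For reference, this is how that option gets set in ManagedBass (before any DSP/FX are added):

Code: [Select]
// BASS_CONFIG_FLOATDSP: DSPPROCs and built-in effects receive 32-bit float data,
// converted from/to the channel's actual sample format around the DSP chain
Bass.Configure(Configuration.FloatDSP, true);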

To answer what "unsafe" does, in C# thats a way of accessing C-style pointers (int*, void* etc) for variables. By default C# is "safe" and doesn't allow raw pointer access and instead provides "safer" wrappers around them. But if that extra leverage is needed you can write what is called "unsafe" code which uses pointers. The reason it's being used in my code is I'm allocating a buffer on the stack instead of the heap to avoid garbage collection, and using "stackalloc" returns a pointer to the array that was allocated, so unsafe code is required.

Ian @ un4seen

  • Administrator
  • Posts: 26083
Looking at the original post, I see a BASS_ChannelSetDSP call. Is that still present, and if so, what sample format is the ApplyGain callback function expecting? That would be affected by the BASS_CONFIG_FLOATDSP setting, so you wouldn't be able to just change that setting without also changing the callback's code. If enabling BASS_CONFIG_FLOATDSP or using BASS_SAMPLE_FLOAT fixes the problem then perhaps that code is expecting floating-point data and it's otherwise corrupting the data?

rileythefox

  • Posts: 30
In the original code from my game, BASS_CONFIG_FLOATDSP was enabled and the ApplyGain function is expecting floating-point data.

However, all the channels in this environment (recording, monitoring and pitch processing) are set up to use the 16-bit sample format (i.e. no BASS_SAMPLE_FLOAT flag). Does BASS_CONFIG_FLOATDSP affect the data received in a RECORDPROC? The documentation only specifies that DSPPROCs are affected by it. If that is the case, it could perhaps explain this problem, as floating-point data would be getting pushed to the monitoring stream, which uses 16-bit samples. But then wouldn't all of the samples be corrupted, not just samples where there is no speaking/singing?

Even weirder, though, is that this is again a Linux-only issue; if it were something like that, I'd expect it to affect all platforms.

Ian @ un4seen

  • Administrator
  • Posts: 26083
BASS_CONFIG_FLOATDSP should only affect DSP/FX (including encoders), not RECORDPROCs. Make sure you're setting it before any DSP/FX, ie. don't change it while there are existing DSP/FX. To confirm whether the ApplyGain function may be somehow causing the problem, does it still happen if you remove that? Regarding why the problem only happens sometimes, does the ApplyGain function always modify the data or is it sometimes bypassed, eg. when gain=0dB?

Is the problem happening on all Linux systems or only specific ones? If specific ones, is there a pattern?

rileythefox

  • Posts: 30
From previous testing, removing the ApplyGain DSP made no change to the outcome. The order of the effects/DSPs added is the Freeverb first, then the ApplyGain DSP. So the reverb comes before it, and removing the reverb gets rid of the problem. The DSP function always modifies the data, multiplying the samples by 1.3 to increase the volume.
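Roughly, ApplyGain looks like this (simplified; with BASS_CONFIG_FLOATDSP enabled it receives float samples):

Code: [Select]
unsafe void ApplyGain(int handle, int channel, IntPtr buffer, int length, IntPtr user)
{
    // With BASS_CONFIG_FLOATDSP on, "buffer" holds 32-bit floats here
    float* samples = (float*)buffer;
    int count = length / sizeof(float);
    for (int i = 0; i < count; i++)
        samples[i] *= 1.3f; // fixed volume boost
}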

I don't have a large sample size of systems but I believe they're all using PipeWire on Linux. I did mention previously that one user tried a different audio device/interface and it went away without making any changes to the code, but I couldn't determine why that was the case.