BASS.NET API 2.4.17.7

Started by radio42

Ian @ un4seen

BASS_CONFIG_HLS_DELAY is defined as 0x10902 (in BASSHLS.H), so you would use (BASSConfig)0x10902 in this case.
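For example, a minimal sketch (the value passed is only illustrative; the option's actual meaning is defined by the BASSHLS documentation):

Bass.BASS_SetConfig((BASSConfig)0x10902, 1); // BASS_CONFIG_HLS_DELAY has no named BASSConfig member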

sirnaysayer

#1526
I am using the .NET Core version with .NET Core 3.1, but the Visuals class is not showing up in IntelliSense. Is the Visuals functionality not included in the .NET Core version of the API?

Edit: I was able to decompile Visuals.cs from the 4.5 version of the library with ILSpy. I simply replaced the Windows.Forms reference with compatible code and it seems to be working. Editing my message in case anyone else has a similar idea.

radio42

No, .NET Core doesn't support such functionality - thus it had to be excluded.

toob

I'm trying to play various .ogg files, but none of them play with my code. I get BASS_ErrorGetCode 41 (unsupported file format), yet I can open the files in other players like GoldWave, where they play fine. What am I doing wrong? Note that .mp3 files play fine using the same code.
res = Bass.BASS_StreamCreateFile(fname, 0, 0, BASSFlag.BASS_STREAM_DECODE Or BASSFlag.BASS_SAMPLE_SOFTWARE)
res is 0 when the file is an .ogg.
fname is the full path and file name.

Hex from the beginning of the file, if that helps:
00000000h: 56 44 4A 00 3E 03 00 00 23 01 00 00 02 F6 00 00 ; VDJ.>...#....ö..
00000010h: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ; ................
00000020h: 4F 04 B9 3E E2 8C 1A 3C 00 00 00 00 00 00 00 00 ; O.¹>âŒ.<........
00000030h: EC 51 B8 1E 85 EB 07 40 EC 51 B8 1E 85 EB 07 40 ; ìQ¸....ë.@ìQ¸....ë.@
00000040h: EC 51 B8 1E 85 EB 07 40 E1 AC 96 3F 00 00 00 00 ; ìQ¸....ë.@ᬖ?....
00000050h: 00 00 00 00 25 F7 00 00 49 2D 02 00 70 00 00 00 ; ....%÷..I-..p...
00000060h: B3 00 00 00 F4 1C 0C 00 00 00 00 00 00 00 00 00 ; ³...ô...........
00000070h: 43 3A 5C 55 73 65 72 73 5C 52 75 6E 65 5C 44 6F ; C:\Users\Rune\Do
00000080h: 63 75 6D 65 6E 74 73 5C 56 69 72 74 75 61 6C 44 ; cuments\VirtualD
00000090h: 4A 5C 73 6F 75 6E 64 20 73 61 6D 70 6C 65 72 20 ; J\sound sampler
000000a0h: 73 6F 75 72 63 65 73 5C 43 79 6D 61 74 69 63 73 ; sources\Cymatics
000000b0h: 46 72 65 65 48 61 6C 6C 6F 77 65 65 6E 56 6F 63 ; FreeHalloweenVoc
000000c0h: 61 6C 73 2D 56 31 2D 69 38 72 5C 43 79 6D 61 74 ; als-V1-i8r\Cymat
000000d0h: 69 63 73 20 2D 20 46 72 65 65 20 48 61 6C 6C 6F ; ics - Free Hallo
000000e0h: 77 65 65 6E 20 56 6F 63 61 6C 73 5C 50 68 72 61 ; ween Vocals\Phra
000000f0h: 73 65 73 5C 43 79 6D 61 74 69 63 73 20 2D 20 56 ; ses\Cymatics - V
00000100h: 61 72 69 6F 75 73 20 50 68 72 61 73 65 20 2D 20 ; arious Phrase -
00000110h: 48 61 70 70 79 20 48 61 6C 6C 6F 77 65 65 6E 2E ; Happy Halloween.
00000120h: 77 61 76 4F 67 67 53 00 02 00 00 00 00 00 00 00 ; wavOggS.........
00000130h: 00 5E 88 7A BA 00 00 00 00 6E EE B4 9F 01 1E 01 ; .^ˆzº....nî´Ÿ...
00000140h: 76 6F 72 62 69 73 00 00 00 00 02 44 AC 00 00 00 ; vorbis.....D¬...
00000150h: 00 00 00 00 EE 02 00 00 00 00 00 B8 01 4F 67 67 ; ....î......¸.Ogg
00000160h: 53 00 00 00 00 00 00 00 00 00 00 5E 88 7A BA 01 ; S..........^ˆzº.
00000170h: 00 00 00 EC 7E 79 17 10 36 FF FF FF FF FF FF FF ; ...ì~y..6ÿÿÿÿÿÿÿ
00000180h: FF FF FF FF FF FF FF 71 03 76 6F 72 62 69 73 0D ; ÿÿÿÿÿÿÿq.vorbis.
00000190h: 00 00 00 4C 61 76 66 35 37 2E 37 31 2E 31 30 30 ; ...Lavf57.71.100
000001a0h: 01 00 00 00 15 00 00 00 65 6E 63 6F 64 65 72 3D ; ........encoder=
000001b0h: 4C 61 76 66 35 37 2E 37 31 2E 31 30 30 01 05 76 ; Lavf57.71.100..v
000001c0h: 6F 72 62 69 73 2B 42 43 56 01 00 08 00 00 00 31 ; orbis+BCV......1
000001d0h: 4C 20 C5 80 D0 90 55 00 00 10 00 00 60 24 29 0E ; L ŀАU.....`$).
000001e0h: 93 66 49 29 A5 94 A1 28 79 98 94 48 49 29 A5 94 ; "fI)¥"¡(y˜"HI)¥"
000001f0h: C5 30 89 98 94 89 C5 18 63 8C 31 C6 18 63 8C 31 ; Å0‰˜"‰Å.cŒ1Æ.cŒ1

Ian @ un4seen

That isn't a standard OGG file, which should begin with "OggS". From the "VDJ" signature, it appears to be a VirtualDJ file. It looks like the DWORD at offset +8 tells where the OGG data begins: in your dump it is 23 01 00 00 = 0x123, which is exactly where the "OggS" marker appears. If so, you could read that from the file and set the BASS_StreamCreateFile "offset" parameter to it.
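Something like this (untested; assumes the DWORD at +8 is the offset, as above) should do it:

long oggOffset;
using (var br = new System.IO.BinaryReader(System.IO.File.OpenRead(fname)))
{
    br.BaseStream.Seek(8, System.IO.SeekOrigin.Begin); // skip the "VDJ\0" signature + the DWORD at +4
    oggOffset = br.ReadUInt32(); // little-endian DWORD, 0x123 for this file
}
int res = Bass.BASS_StreamCreateFile(fname, oggOffset, 0,
    BASSFlag.BASS_STREAM_DECODE | BASSFlag.BASS_SAMPLE_SOFTWARE);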

toob

Yes, I think these files contain DJ samples and VDJ has adjusted the headers like you say. GoldWave is clever enough to detect that when reading in the file and states it's OGG, and they play without issue. I've tried manually editing the header without luck, so perhaps VDJ do something else later in the file. So for now I'm duplicating the files as mp3 to get round it; not ideal, but it's a solution. Thanks  ;D

jasona

#1531
Having some marshaling issues when targeting netcoreapp3.1 with the following code:
Bass.LoadMe();
BassFx.LoadMe();
Bass.BASS_Init(0, 44100, BASSInit.BASS_DEVICE_DEFAULT, IntPtr.Zero, Guid.Empty);

var channel = Bass.BASS_StreamCreateFile(@"music.mp3", 0, 0, BASSFlag.BASS_STREAM_DECODE);

var fx = Bass.BASS_ChannelSetFX(channel, BASSFXType.BASS_FX_BFX_VOLUME_ENV, 1);
var success = Bass.BASS_FXSetParameters(fx, new BASS_BFX_VOLUME_ENV(2));

BASS_FXSetParameters throws with
Unhandled exception. System.NotSupportedException: Operation is not supported. (0x80131515)
   at System.StubHelpers.StubHelpers.FmtClassUpdateNativeInternal(Object obj, Byte* pNative, CleanupWorkListElement& pCleanupWorkList)
   at System.StubHelpers.AsAnyMarshaler.ConvertLayoutToNative(Object pManagedHome, Int32 dwFlags)
   at System.StubHelpers.AsAnyMarshaler.ConvertToNative(Object pManagedHome, Int32 dwFlags)
   at Un4seen.Bass.Bass.BASS_FXSetParametersExt(Int32 handle, Object par)
   at Un4seen.Bass.Bass.BASS_FXSetParameters(Int32 handle, Object par)

This doesn't happen for the BASS_FX_BFX_ECHO4 type, for example. Looking at the source, there appears to be additional memory pinning before calling BASS_FXSetParametersExt.

Tried with both BASS 2.4.14 and 2.4.15. This works fine when targeting .NET Framework such as net471.

Are there any workarounds for this, or a fix?

EDIT:
Worth clarifying that I am using the BASS.NET 'standard' DLL.

radio42

Internally the following is used:
private static extern bool BASS_FXSetParametersExt(int handle, [In][MarshalAs(UnmanagedType.AsAny)] object par);

However, for BASS_BFX_VOLUME_ENV a special trick must be used in order to support dynamic arrays.
Thus internally a pinned pointer to that array is created using the following code:
hgc = GCHandle.Alloc(pNodes, GCHandleType.Pinned);
ptr = hgc.AddrOfPinnedObject();
So I don't see any reason for .NET Core to complain about this, as all functions should be fully supported.
However, in the past there have been various issues with this kind of marshalling.
Maybe there is still a bug in .NET Core 3.1?!
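For reference, the full pinning pattern looks like this (a simplified sketch, not the actual internal code):

GCHandle hgc = GCHandle.Alloc(pNodes, GCHandleType.Pinned); // pin the managed array
try
{
    IntPtr ptr = hgc.AddrOfPinnedObject(); // stable native address of the array
    // ... pass ptr to the native function ...
}
finally
{
    hgc.Free(); // always release the handle, or the array stays pinned forever
}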

But I am wondering why you don't use the special .NET Core build I am providing?
Please try that to see if it fixes the issue!

emco

#1533
Is the .NET Standard version able to run on Linux (Raspberry Pi with the .NET Core 3.1 runtime)?

When running Bass.LoadMe("libbass.so"); I get

Unhandled exception. System.DllNotFoundException: Unable to load shared library 'kernel32.dll' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: libkernel32.dll: cannot open shared object file: No such file or directory
   at Un4seen.Bass.Utils.LIBLoadLibrary(String fileName)
   at Un4seen.Bass.Utils.LoadLib(String moduleName, Int32& handle)
   at Un4seen.Bass.Bass.LoadMe(String path)

On Linux, shouldn't it look for kernel32.dll.so?

radio42

Yes, but there is a special .NET Core version available which also runs fine on Linux!
But not all internal methods are available/supported on all OSs.
E.g. the LoadMe methods only work on Windows, but they are essentially not needed, as each BASS lib is automatically loaded without any special call (even on Windows) when you invoke any of its methods for the first time.
Typically you might use e.g. the xxxx_GetVersion method as a first call to load the related lib.
Plugins can be loaded via the BASS_PluginLoad method.
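E.g. on Linux (a minimal sketch; libbassflac.so is just an example plugin name):

int version = Bass.BASS_GetVersion();                 // first call implicitly loads libbass.so
int plugin = Bass.BASS_PluginLoad("libbassflac.so");  // add-on plugins are loaded explicitly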

emco

#1535
Thanks for the help, radio42!
I was able to successfully start my application now. Turns out on the Raspberry Pi you have to use the hardfp version of libbass.so ;)

Unfortunately the sound I get is very distorted (tried both the core and the standard DLL).
The same code plays fine on a Windows machine.

Also, a simple C program using just libbass.so plays the same sound fine.

Interestingly, once the .NET application has played that distorted sound, all following sounds played on the machine are also distorted, even when played with a completely different application (e.g. omxplayer). Only a reboot brings back normal sound.

Ian @ un4seen

That sounds strange. Is the problem happening even if you only call BASS_Init + BASS_StreamCreateFile + BASS_ChannelPlay? And are you using PulseAudio or only ALSA? One thing you could try is increasing the output buffer size via the BASS_CONFIG_DEV_BUFFER option (before BASS_Init). The default on Linux is 40ms, so try going up from there.
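E.g. (100ms is just an arbitrary value to try):

Bass.BASS_SetConfig(BASSConfig.BASS_CONFIG_DEV_BUFFER, 100); // device buffer length in ms (Linux default: 40)
Bass.BASS_Init(-1, 44100, BASSInit.BASS_DEVICE_DEFAULT, IntPtr.Zero);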

emco

#1537
Thanks for your answer, Ian.

In the meantime I have tried more things, and in fact the BASS_Init call alone already causes the distorted sound to be played.

In fact, any subsequent call to a BASS.NET function changes the frequency of the distortion somehow. It sounds a bit like a late-90s modem dialing up.

The output of Version and Info is:
Bass Version: 2040F00
Speakers=2, MinRate=0, MaxRate=0, DX=0, EAX=False

I am not sure about PulseAudio or ALSA, but I would assume it's ALSA, as it's a fresh install of Raspbian.

Here is my simple code; already the first line causes the distorted sound.

Bass.BASS_Init(-1, 44100, BASSInit.BASS_DEVICE_DEFAULT, IntPtr.Zero);
var stream = Bass.BASS_StreamCreateFile("Data/Sounds/sample.ogg", 0L, 0L, BASSFlag.BASS_DEFAULT);
Bass.BASS_ChannelPlay(stream, false);

The same 3 calls in the C version play fine on the same Raspberry Pi:

BASS_Init(-1, 44100, 0, NULL, NULL);
HSTREAM stream = BASS_StreamCreateFile(FALSE, "Data/Sounds/sample.ogg", 0, 0, 0);
BASS_ChannelPlay(stream, FALSE);

Ian @ un4seen

I'm not sure how .Net would be affecting the output. Very strange indeed. I will send you a debug BASS version to get more info.

radio42

.NET is not doing anything with this. It behaves exactly like native BASS, except for callbacks, where a GC might interrupt things if not carefully coded.
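The classic pitfall there is letting the GC collect a callback delegate that native BASS still holds. A minimal sketch of the safe pattern (MyStreamProc is just an example name):

private STREAMPROC _streamProc; // keep the delegate in a field, not a local!

void Setup()
{
    _streamProc = new STREAMPROC(MyStreamProc);
    int stream = Bass.BASS_StreamCreate(44100, 2, BASSFlag.BASS_DEFAULT, _streamProc, IntPtr.Zero);
}

private int MyStreamProc(int handle, IntPtr buffer, int length, IntPtr user)
{
    // write up to 'length' bytes into 'buffer' and return the amount written
    return 0;
}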

emco

The distorted sound turned out to be from a PWM signal which got activated by my .NET application. Apparently on the Raspberry Pi one chip (the BCM2835) is used for both audio and PWM signals to the GPIO pins.
Disabling the PWM part of the application solved the issue and sound is playing fine.

zippo227

I have been working to convert my Unity project to use their new IL2CPP system. I have some code that returns the volume levels on Windows with IL2CPP, and it also runs on macOS Mono in the Unity editor. However, when running on macOS IL2CPP, the levels are always 0. The call to BASS_ChannelGetLevel is returning true, so I'm not sure why it is not returning something > 0. I'm using Soundflower to create an aggregate device, which is what I'm inputting into the client. Thanks for taking a look.

private const int levelSamplingIntervalMS = 1000;
private const float levelSamplingIntervalF = 1f;

private void MonitorLevels()
{
    while (Started == true)
    {
        float[] data = new float[2];

        if (RecordChannel != 0)
        {
            //Do stuff with PCM
            bool success = Bass.BASS_ChannelGetLevel(RecordChannel, data, levelSamplingIntervalF, BASSLevel.BASS_LEVEL_STEREO);
            if (success == false)
            {
                TextError = Utils.GetBassError("Could not extract level");
            }
        }

        //TextLog = "Levels " + data[0] + " " + data[1];

        iBroadcastMonitor.LevelsCallback(data);

        Thread.Sleep(levelSamplingIntervalMS);
    }

    TextLog = "Exiting MonitorLevels";
}

Ian @ un4seen

From its name, I guess "RecordChannel" was created with BASS_RecordStart. If so, is it using a RECORDPROC callback? Without a RECORDPROC, each BASS_ChannelGetLevel call will be taking data out of the recording buffer, and so if you call it often and/or with a high "levelSamplingIntervalF" value, there may sometimes be no data for it to measure (so it returns a 0 level).

zippo227

Quote from: Ian @ un4seen
From its name, I guess "RecordChannel" was created with BASS_RecordStart. If so, is it using a RECORDPROC callback? Without a RECORDPROC, each BASS_ChannelGetLevel call will be taking data out of the recording buffer, and so if you call it often and/or with a high "levelSamplingIntervalF" value, there may sometimes be no data for it to measure (so it returns a 0 level).

Hi Ian, that's interesting. Would you recommend I add a RECORDPROC instead of running a separate thread that tries to measure the levels? My only concern with the RECORDPROC previously was that it seemed like I would need to process the data there. However, if I can simply make the call to GetLevel there instead of in my thread, that might do the trick?

I've attached the code I'm using to start the recording. You're right, I am using Bass.BASS_RecordStart with a null RECORDPROC.

/// <summary>
/// Begins recording from the selected device. The device
/// will then be enabled for monitoring and encoding.
/// </summary>
/// <param name="selectedDevice">Index of the recording device to initialize.</param>
protected override void StartRecordingChannel(int selectedDevice)
{
    Bass.BASS_RecordFree();

    if (Bass.BASS_RecordInit(selectedDevice) == false)
    {
        TextError = Utils.GetBassError("Couldn't init recording");
        return;
    }

    // Assign the return value (the original discarded it and printed a default)
    BASSInputType inputType = Bass.BASS_RecordGetInputType(selectedDevice);
    Console.WriteLine(string.Format("InputType {0}", inputType));

    BASS_RECORDINFO recordInfo = new BASS_RECORDINFO();
    if (Bass.BASS_RecordGetInfo(recordInfo) == false)
    {
        TextError = Utils.GetBassError("Couldn't get recordInfo");
        return;
    }

    NumChannels = recordInfo.Channels;
    TextLog = string.Format("Supported Channels {0}", recordInfo.Channels);

    // start recording @ 44100hz 16-bit stereo (paused to set up the encoder first)
    // Returns a recording handle
    // m_recordingCallbackProc is null to decrease latency
    RecordChannel = Bass.BASS_RecordStart((int)SampleRate, 2, BASSFlag.BASS_RECORD_PAUSE, null, IntPtr.Zero);

    // Try recording mono if this failed
    if (RecordChannel == 0)
    {
        TextLog = "Could not initialize two recording channels. Attempting Mono.";
        RecordChannel = Bass.BASS_RecordStart((int)SampleRate, 1, BASSFlag.BASS_RECORD_PAUSE, null, IntPtr.Zero);
    }

    if (RecordChannel == 0)
    {
        TextError = Utils.GetBassError("Couldn't start recording");
        return;
    }

    if (Bass.BASS_ChannelPlay(RecordChannel, false) == false)
    {
        TextError = Utils.GetBassError("Couldn't play recording");
        return;
    }
}

Ian @ un4seen

Yes, you should use a RECORDPROC callback in that case, to stop BASS_ChannelGetLevel removing data from the recording buffer. You don't need to move the BASS_ChannelGetLevel calls into the RECORDPROC though - it can simply "return true".

zippo227

Quote from: Ian @ un4seen
Yes, you should use a RECORDPROC callback in that case, to stop BASS_ChannelGetLevel removing data from the recording buffer. You don't need to move the BASS_ChannelGetLevel calls into the RECORDPROC though - it can simply "return true".

Hi Ian. That worked ;D! Attaching the relevant code for anyone who wants to see it.

//...
RecordChannel = Bass.BASS_RecordStart((int)SampleRate, 2, BASSFlag.BASS_RECORD_PAUSE, RecordProc, IntPtr.Zero);
//...

[AOT.MonoPInvokeCallback(typeof(RECORDPROC))]
private static bool RecordProc(int handle, IntPtr buffer, int length, IntPtr user)
{
    return true;
}

santanutosh

Hi,
I have been trying the Utils.GetNormalizationGain function to check the peak and gain factor of an audio file. Then, if required, I would apply this gain factor to the same audio and save it as a new audio file. My code is like this:
' (declarations of _stream, gain, peak and encHandle omitted)
Dim buffer(65535) As Byte
Dim length As Integer = buffer.Length ' must start > 0, or the While loop never runs

_stream = Bass.BASS_StreamCreateFile(filename, 0, 0, BASSFlag.BASS_STREAM_DECODE)
gain = Utils.GetNormalizationGain(filename, 0.5, -1, -1, peak)
Bass.BASS_ChannelSetAttribute(_stream, BASSAttribute.BASS_ATTRIB_VOL, gain)

encHandle = BassEnc.BASS_Encode_Start(_stream, "Normalized.wav", BASSEncode.BASS_ENCODE_PCM Or BASSEncode.BASS_ENCODE_FP_16BIT, Nothing, 0)
If encHandle <> 0 Then
    While length > 0
        length = Bass.BASS_ChannelGetData(_stream, buffer, buffer.Length)
    End While
    BassEnc.BASS_Encode_Stop(encHandle)
End If

The new audio file is created, but there is no change in audio level, although the gain factor is mostly 1.xxx, i.e. greater than 1.
What am I doing wrong?

radio42

This function (as the documentation describes) doesn't calculate a ReplayGain (i.e. the 'perceived' loudness value), but the gain value needed to normalize the audio, so that the peak level is adjusted to 0dB.
So if the audio is already normalized (the peak level is already at max), there would of course be no change.
BASS.NET doesn't contain any function to calculate the ReplayGain.

santanutosh

Thanks for replying.
I tested with an audio file which gives Peak = 0.4068 and Gain = 2.4580.
Correct me if I'm wrong: my understanding is that the peak value is much lower than 0dB (i.e. a peak of 1.0). So if I apply the gain factor 2.4580 to this stream, I should get a stream which has its peak at 0dB (0.4068 × 2.4580 ≈ 1.0):
Bass.BASS_ChannelSetAttribute(_stream, BASSAttribute.BASS_ATTRIB_VOL, gain)
But after applying the above code, all I get is a duplicate copy of the original audio. And on testing the new file, again I get Peak = 0.4068 and Gain = 2.4580.
The waveform also hasn't changed.

radio42

Yes, that is correct.
But BASS_ChannelSetAttribute wouldn't change any waveform!
I am not sure how you are testing things, but if you set a volume factor above 1.0 via BASS_ChannelSetAttribute, then the volume of that stream will be amplified.
But that doesn't change the data sent to the encoder; it is only applied to the output!
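If you want the gain applied to the decoded data itself, one option (a sketch in C#; requires the BASS_FX add-on, Un4seen.Bass.AddOn.Fx) is to set it as an effect on the decoding channel, since effects are processed when BASS_ChannelGetData pulls the data:

// Apply the gain as a BFX effect on the decoding channel, so the
// amplified samples actually reach the encoder via BASS_ChannelGetData.
int fx = Bass.BASS_ChannelSetFX(_stream, BASSFXType.BASS_FX_BFX_VOLUME, 0);
var vol = new BASS_BFX_VOLUME { lChannel = BASSFXChan.BASS_BFX_CHANALL, fVolume = gain };
Bass.BASS_FXSetParameters(fx, vol);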