I might as well ask something just to be sure.

Does the sample rate affect the tonal range of the audio? I was assuming that 48000 Hz gives a wider range with more bass, while 24000 Hz is limited to the range used for voice communication.

But when I calculate all the tone frequencies over 5 octaves, changing the sample rate doesn't seem to change anything at all here. The algorithm seems to be working fine, unless I'm missing something.
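For reference, the only hard ceiling a sample rate puts on frequency content is the Nyquist limit, half the sample rate; a minimal Python check (the function name is mine):

```python
# Nyquist limit: the highest frequency a given sample rate can represent
# is half the rate. Lowering the rate cuts off treble, not bass.
def nyquist(sample_rate):
    return sample_rate / 2

for rate in (48000, 44100, 24000):
    print(f"{rate} Hz -> frequencies up to {nyquist(rate):.0f} Hz")
```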

```csharp
/// <summary>
/// Returns a cached array of 62 tone frequencies (piano keys 20 through 81:
/// 5 octaves, plus one tone before and one after).
/// </summary>
protected static float[] ToneFreq
{
    get
    {
        if (_toneFreq == null)
        {
            _toneFreq = new float[62];
            for (var i = 0; i < 62; i++)
            {
                // Equal temperament: key 49 = A4 = 440 Hz.
                _toneFreq[i] = (float)Math.Pow(2, (20 + i - 49) / 12.0) * 440;
            }
        }
        return _toneFreq;
    }
}
```
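For what it's worth, the same table in Python shows where the endpoints land: roughly 82.4 Hz (key 20, E2) and 2793.8 Hz (key 81, F7). Both are far below even a 24000 Hz rate's 12000 Hz Nyquist limit, which would be consistent with the results not changing with the sample rate:

```python
# Equal-temperament frequencies for piano keys 20..81 (key 49 = A4 = 440 Hz),
# mirroring the C# loop above.
tone_freq = [2 ** ((20 + i - 49) / 12.0) * 440 for i in range(62)]

print(f"lowest:  {tone_freq[0]:.1f} Hz")   # ~82.4 Hz (E2)
print(f"highest: {tone_freq[-1]:.1f} Hz")  # ~2793.8 Hz (F7)
```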

The code also determines which tone offset (shifting the above array in increments of 0.1 Hz) most closely matches the FFT peaks, by summing all the matching FFT values. The highest sum wins.

Then I can run the calculation at various sample rates and take the most confident value (the highest sum).
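A minimal Python sketch of that search, under my own assumptions about the shape of the data: `magnitudes` is an FFT magnitude array, bins are `sample_rate / fft_size` Hz wide, and each tone is matched to its nearest bin (all names here are mine, not from the original code):

```python
def best_pitch(magnitudes, sample_rate, fft_size, tone_freq, step=0.1, span=10.0):
    """Shift the whole tone grid by an offset in 0.1 Hz steps; for each
    offset, sum the FFT magnitudes at the bin nearest each shifted tone.
    Returns the offset with the highest sum, and that sum."""
    bin_width = sample_rate / fft_size
    best_offset, best_sum = 0.0, float("-inf")
    steps = int(span / step)
    for k in range(-steps, steps + 1):
        offset = k * step
        total = 0.0
        for f in tone_freq:
            idx = round((f + offset) / bin_width)
            if 0 <= idx < len(magnitudes):
                total += magnitudes[idx]
        if total > best_sum:
            best_offset, best_sum = offset, total
    return best_offset, best_sum
```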

One weird thing, though: at lower sample rates the sums are lower, which could be explained by the narrower frequency bands. But to make up for it and be able to compare the sums side by side, I need to multiply them by this factor. I had to find the 0.2577 constant by trial-and-error testing, and I don't understand it. Got any explanation here?

`var adjust = (44100f / sampleRate - 1) * .2577f + 1;`
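As a data point, the factor is 1.0 at 44100 Hz by construction and grows as the rate drops; a quick Python check of the empirical formula (the 0.2577 constant is the trial-and-error value from above, not something I can derive):

```python
def adjust(sample_rate):
    # Empirical correction from the question: scales sums measured at lower
    # sample rates so they can be compared against sums at 44100 Hz.
    return (44100.0 / sample_rate - 1) * 0.2577 + 1

for rate in (44100, 24000, 22050):
    print(f"{rate} Hz: x{adjust(rate):.4f}")
```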