ADC render sizes vs AudioContext #9
I think it's best to do something like http://www.grame.fr/ressources/publications/CallbackAdaptation.pdf. This works both ways (the ADC with a bigger buffer size than 128, and the opposite).
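The adaptation idea from the linked paper can be sketched as a small FIFO that lets a device callback of any size pull from a render callback with a fixed quantum (128 here). This is a minimal illustration, not an implementation from the paper; all names (`QuantumAdapter`, `pull`) are made up for the example.

```typescript
type RenderCallback = (numFrames: number) => Float32Array;

class QuantumAdapter {
  // Simple FIFO; a real-time implementation would use a preallocated
  // ring buffer to avoid allocation in the audio callback.
  private fifo: number[] = [];

  constructor(
    private render: RenderCallback,
    private renderQuantum: number = 128,
  ) {}

  // Called by the device with its native buffer size, e.g. 64 or 192.
  pull(deviceFrames: number): Float32Array {
    // Top up the FIFO with whole render quanta until the request is covered.
    while (this.fifo.length < deviceFrames) {
      const block = this.render(this.renderQuantum);
      for (const s of block) this.fifo.push(s);
    }
    return Float32Array.from(this.fifo.splice(0, deviceFrames));
  }
}

// Usage: a render callback producing a ramp, so continuity is easy to check.
let frame = 0;
const adapter = new QuantumAdapter((n) => {
  const out = new Float32Array(n);
  for (let i = 0; i < n; i++) out[i] = frame++;
  return out;
});

const a = adapter.pull(64); // triggers one 128-frame render
const b = adapter.pull(64); // served entirely from the FIFO, no render
```

Note the cost of this scheme: when the device quantum is smaller than 128, some device callbacks trigger a render and some don't, which is exactly the uneven CPU load discussed below.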
Yes! Then we can assign this issue to @sletz. :)
Well, this old work was done in the context of adding ASIO support to the PortAudio API. On Windows some drivers were using quite exotic buffer size values, so we had to adapt those values to more standard ones (like powers of two) while minimizing latency.
Right, I think we generally agree here. Authors will be able to pick the best buffer size for the platform (and I expect most of them to do that), but they are allowed to pick something else, in which case we'll need an adaptation scheme such as the technique described in your paper.
"The main drawback of this kind of approach is that DSP CPU usage is not 'homogeneously distributed in time' anymore." Take something like the ADC running at 64 frames and the AudioContext at 128 frames: the AudioContext callback is called every 2 device buffers, but it still has to complete within the duration of 64 frames, right? So you only get 50% of the DSP CPU bandwidth. If the AudioContext callback takes more than 50%, it will not finish in time. Or are you thinking of an even more complex buffering scheme?
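The 50% figure follows from simple arithmetic (48 kHz is assumed here, but the ratio is independent of sample rate):

```typescript
const sampleRate = 48000;
const deviceQuantum = 64;  // frames per ADC device callback
const renderQuantum = 128; // frames per AudioContext render

// The render callback fires on every other device callback, but its output
// must be ready before the next device callback. So 128 frames' worth of DSP
// must finish inside one 64-frame device period:
const devicePeriodMs = (deviceQuantum / sampleRate) * 1000; // ~1.33 ms deadline
const renderPeriodMs = (renderQuantum / sampleRate) * 1000; // ~2.67 ms of audio
const cpuBudget = deviceQuantum / renderQuantum;            // 0.5
```

In other words, rendering 2.67 ms of audio inside a 1.33 ms deadline halves the usable DSP bandwidth, exactly as stated above.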
This issue came up in the past. I think the proper buffer size can be calculated, but the non-homogeneous processing load is a problem. The most intuitive way to handle this is to allow the AudioContext to render fewer than 128 frames. The alternative would be a ring buffer between the context and the device client, with each running independently on its own clock. But that would have a drift problem.
I agree. We should update the Web Audio AudioContext (and OfflineAudioContext) to allow a new option that specifies the render size. I'll file an issue for that.
If the Web Audio API gains the capability to change its internal buffer processing size, then the ADC proposal has no advantages over using an …
It seems like there's a lot to be said for implementing a configurable buffer size in the current Web Audio API regardless of where the ADC eventually ends up. Existing apps running on devices with eccentric buffer sizes like 192 stand to benefit from a configurable render quantum as long as developers make use of the new option - a significantly easier change than switching to an ADC-based implementation. See related: https://bugs.chromium.org/p/chromium/issues/detail?id=924426
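To make the 192-frame case concrete, here is a small simulation (names are illustrative) of how uneven the render schedule gets when a fixed 128-frame quantum has to feed a device wanting 192 frames per callback:

```typescript
const renderQ = 128; // fixed AudioContext render quantum
const deviceQ = 192; // device's native callback size

let buffered = 0;
const rendersPerCallback: number[] = [];
for (let cb = 0; cb < 6; cb++) {
  let renders = 0;
  // Render whole 128-frame quanta until this device callback is covered.
  while (buffered < deviceQ) {
    buffered += renderQ;
    renders++;
  }
  buffered -= deviceQ;
  rendersPerCallback.push(renders);
}
// The schedule alternates: callbacks needing 2 renders, then 1, then 2, ...
```

In the worst callbacks, 256 frames of DSP must complete within a 192-frame deadline, capping sustained load around 75%; with a configurable render quantum of 192, every device callback would trigger exactly one render and the full budget would be available.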
As currently proposed, the ADC includes an AudioContext and provides a callback that invokes the audio graph and returns the data the graph produces.
The ADC can also support different render sizes. Let's say 64 is the suggested HW size. How does that work with an AudioContext that must render 128 frames? Especially if there's an input to the AudioContext that is supposed to be captured by the ADC and sent to the AudioContext?
Perhaps the solution is to allow the AudioContext to work at different block sizes? Then it can match the optimum (or selected) value for the ADC.