Audio Recording in Node.js - Node Core Audio

Published by yicheng

Node Core Audio


A C++ extension for Node.js that gives JavaScript access to audio buffers and basic audio processing functionality.

It is now a cross-platform audio library for Node.js.


Installation

npm install node-core-audio

Basic Usage

Below is the most basic use of the audio engine. We create a new instance of node-core-audio, and then give it our processing function. The audio engine will call the audio callback whenever it needs an output buffer to send to the sound card.

// Create a new instance of node-core-audio
var coreAudio = require("node-core-audio");
    
// Create a new audio engine
var engine = coreAudio.createNewAudioEngine();
    
// Add an audio processing callback
// This function accepts an input buffer coming from the sound card,
// and returns an output buffer to be sent to your speakers.
//
// Note: This function must return an output buffer
    
function processAudio( inputBuffer ) {    
    console.log( "%d channels", inputBuffer.length );    
    console.log( "Channel 0 has %d samples", inputBuffer[0].length );    
    return inputBuffer;
}
engine.addAudioCallback( processAudio );

// Alternatively, you can read/write samples to the sound card manually

var engine = coreAudio.createNewAudioEngine();

// Grab a buffer
var buffer = engine.read();

// Silence the 0th channel
for( var iSample=0; iSample<buffer[0].length; ++iSample )
    buffer[0][iSample] = 0.0;
    
// Send the buffer back to the sound card
engine.write( buffer );
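The channel-silencing loop above can be pulled out into a small helper. This `silenceChannel` function is not part of the library, just a hypothetical convenience that works on any deinterleaved buffer shaped like buffer[channel][sample]:

```javascript
// Hypothetical helper: zero out one channel of a deinterleaved buffer.
// Pure JavaScript, so it works on plain arrays and on buffers returned
// by engine.read() alike.
function silenceChannel( buffer, channel ) {
    for( var iSample = 0; iSample < buffer[channel].length; ++iSample )
        buffer[channel][iSample] = 0.0;
    return buffer;
}
```

You could then write `engine.write( silenceChannel( engine.read(), 0 ) )` to mute the 0th channel in one line.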

Performance

When you are writing code inside your audio callback, you are operating on the processing thread of the application. This high-priority environment means you should think about performance as much as possible. Allocations and other complex operations are possible, but dangerous.

IF YOU TAKE TOO LONG TO RETURN A BUFFER TO THE SOUND CARD, YOU WILL HAVE AUDIO DROPOUTS


The basic principle is that you should have everything ready to go before you enter the processing function. Buffers, objects, and functions should be created in a constructor or static function outside of the audio callback whenever possible. The examples in this readme are not necessarily good practice as far as performance is concerned.

The callback is only called once all buffers have been processed by the sound card.
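The preallocation principle can be sketched as follows. This is a hedged illustration, not library code: the buffer sizes and the volume-halving step are arbitrary choices, and the only assumption about the library is the deinterleaved buffer[channel][sample] callback shape shown earlier.

```javascript
var NUM_CHANNELS = 2;
var FRAMES_PER_BUFFER = 256;

// Done once, at startup -- never inside processAudio()
var scratch = [];
for (var ch = 0; ch < NUM_CHANNELS; ++ch)
    scratch.push(new Float32Array(FRAMES_PER_BUFFER));

function processAudio( inputBuffer ) {
    // No allocation here: only fill the preallocated scratch buffer
    for (var ch = 0; ch < inputBuffer.length; ++ch)
        for (var i = 0; i < inputBuffer[ch].length; ++i)
            scratch[ch][i] = inputBuffer[ch][i] * 0.5; // e.g. halve the volume
    return scratch;
}
```

Registering it with `engine.addAudioCallback( processAudio )` works exactly as before; the difference is that the callback itself never allocates.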

Audio Engine Options

  • sampleRate  [default 44100]

    • Sample rate - number of samples per second in the audio stream

  • sampleFormat  [default sampleFormatFloat32]

    • Bit depth - number of bits used to represent sample values

    • Available formats: sampleFormatFloat32, sampleFormatInt32, sampleFormatInt24, sampleFormatInt16, sampleFormatInt8, sampleFormatUInt8

  • framesPerBuffer  [default 256]

    • Buffer length - number of samples per buffer

  • interleaved  [default false]

    • Interleaved / Deinterleaved - determines whether samples are given to you as a two-dimensional array (buffer[channel][sample]) (deinterleaved) or as one buffer with samples from alternating channels (interleaved)

  • inputChannels  [default 2]

    • Input channels - number of input channels

  • outputChannels  [default 2]

    • Output channels - number of output channels

  • inputDevice  [defaults to Pa_GetDefaultInputDevice]

    • Input device - id of the input device

  • outputDevice  [defaults to Pa_GetDefaultOutputDevice]

    • Output device - id of the output device
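A configuration might look like the sketch below. The keys are the ones documented above, but the specific values are illustrative choices, not library defaults, and the exact form of the sampleFormat value is not shown here since the README lists it only by name.

```javascript
// Hypothetical engine configuration using only the documented option keys.
// Values are example choices, not defaults.
var options = {
    sampleRate: 48000,      // 48 kHz instead of the default 44100
    framesPerBuffer: 512,   // larger buffer: more latency, fewer dropouts
    interleaved: false,     // keep the buffer[channel][sample] layout
    inputChannels: 1,       // mono microphone
    outputChannels: 2       // stereo output
};

// In a real program you would apply it with:
//   var coreAudio = require("node-core-audio");
//   var engine = coreAudio.createNewAudioEngine();
//   engine.setOptions(options);  // restarts the engine with the new settings
```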

API

First things first

var coreAudio = require("node-core-audio");

Create an audio processing function

function processAudio( inputBuffer ) {    
    // Just print the value of the first sample on the left channel
    console.log( inputBuffer[0][0] );
}

Initialize the audio engine and set up the processing loop

var engine = coreAudio.createNewAudioEngine();
engine.addAudioCallback( processAudio );

General functionality

// Returns whether the audio engine is active
bool engine.isActive();

// Updates the parameters and restarts the engine. All keys from getOptions() are available.
engine.setOptions({inputChannels: 2});

// Returns all parameters
array engine.getOptions();

// Reads a buffer from the input of the sound card and returns it as an array.
// Note: this is a blocking call, don't take too long!
array engine.read();

// Writes the buffer to the output of the sound card. Returns false if it underflowed.
// Note: blocking I/O
bool engine.write(array input);

// Returns the name of a given device
string engine.getDeviceName( int inputDeviceIndex );

// Returns the total number of audio devices
int engine.getNumDevices();
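The two device functions can be combined to enumerate every audio device, for example when picking an id for the inputDevice / outputDevice options. The `listDevices` helper below is hypothetical, not part of the library; it relies only on `getNumDevices()` and `getDeviceName()` from the API above, so it works with any object exposing those two methods.

```javascript
// Hypothetical helper: list every audio device as "index: name" strings.
function listDevices( engine ) {
    var names = [];
    for (var i = 0; i < engine.getNumDevices(); ++i)
        names.push(i + ": " + engine.getDeviceName(i));
    return names;
}

// In a real program you would pass it the engine from
// coreAudio.createNewAudioEngine() and log the result to choose a device id.
```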

Known Issues / TODO

  • Add FFTW to the C++ extension, so you can get fast FFTs from JavaScript, and also register for the FFT of incoming audio rather than the audio itself

  • Add support for streaming audio over sockets

Original post: 茜文博客 >> Audio Recording in Node.js - Node Core Audio
