Writing bytes to audio file using AUHAL audio unit
I am trying to create a WAV file from the sound input I get from the default input device of my MacBook (the built-in mic). However, the resulting file, when imported into Audacity as raw data, is complete garbage.
First I initialize the audio file reference so I can later write to it in the audio unit input callback.
// struct contains audiofileID as member
MyAUGraphPlayer player = {0};
player.startingByte = 0;
// describe a PCM format for audio file
AudioStreamBasicDescription format = { 0 };
format.mBytesPerFrame = 2;
format.mBytesPerPacket = 2;
format.mChannelsPerFrame = 1;
format.mBitsPerChannel = 16;
format.mFramesPerPacket = 1;
format.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsFloat;
format.mFormatID = kAudioFormatLinearPCM;
CFURLRef myFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, CFSTR("./test.wav"), kCFURLPOSIXPathStyle, false);
//CFShow (myFileURL);
CheckError(AudioFileCreateWithURL(myFileURL,
                                  kAudioFileWAVEType,
                                  &format,
                                  kAudioFileFlags_EraseFile,
                                  &player.recordFile),
           "AudioFileCreateWithURL failed");
Here I malloc some buffers to hold the audio data coming in from the AUHAL unit.
UInt32 bufferSizeFrames = 0;
UInt32 propertySize = sizeof(UInt32);
CheckError(AudioUnitGetProperty(player->inputUnit,
                                kAudioDevicePropertyBufferFrameSize,
                                kAudioUnitScope_Global,
                                0,
                                &bufferSizeFrames,
                                &propertySize),
           "Couldn't get buffer frame size from input unit");
UInt32 bufferSizeBytes = bufferSizeFrames * sizeof(Float32);
printf("buffer num of frames %u\n", bufferSizeFrames);
if (player->streamFormat.mFormatFlags & kAudioFormatFlagIsNonInterleaved) {
    int offset = offsetof(AudioBufferList, mBuffers[0]);
    int sizeOfAB = sizeof(AudioBuffer);
    int chNum = player->streamFormat.mChannelsPerFrame;
    int inputBufferSize = offset + sizeOfAB * chNum;
    // malloc buffer lists
    player->inputBuffer = (AudioBufferList *)malloc(inputBufferSize);
    player->inputBuffer->mNumberBuffers = chNum;
    for (UInt32 i = 0; i < chNum; i++) {
        player->inputBuffer->mBuffers[i].mNumberChannels = 1;
        player->inputBuffer->mBuffers[i].mDataByteSize = bufferSizeBytes;
        player->inputBuffer->mBuffers[i].mData = malloc(bufferSizeBytes);
    }
}
To check that the data is actually sensible, I render the audio unit and then log the first 4 bytes of each set of frames (4096) in each callback. The aim was to check that the values were in keeping with what was going into the mic. As I talked into the mic, I noticed the logged values at this location in memory corresponded to the input, so things seem to be working in that regard:
// render into our buffer
OSStatus inputProcErr = noErr;
inputProcErr = AudioUnitRender(player->inputUnit,
                               ioActionFlags,
                               inTimeStamp,
                               inBusNumber,
                               inNumberFrames,
                               player->inputBuffer);
// copy from our buffer to ring buffer
Float32 someDataL = *(Float32*)(player->inputBuffer->mBuffers[0].mData);
printf("L2 input: % 1.7f\n", someDataL);
And finally, in the input callback I write the audio bytes to the file.
UInt32 numOfBytes = 4096 * player->streamFormat.mBytesPerFrame;
AudioFileWriteBytes(player->recordFile,
                    FALSE,
                    player->startingByte,
                    &numOfBytes,
                    &ioData[0].mBuffers[0].mData);
player->startingByte += numOfBytes;
I have not figured out why the data comes out glitchy, distorted, or missing altogether. One encouraging sign is that the resulting audio file is about as long as I actually recorded for (hitting return stops the audio units and closes the audio file).
I'm not sure what to look at next. Has anyone attempted writing to an audio file from the AUHAL callback and had similar results?
For the sake of simplicity and staying on-topic, which seems to be writing bytes to an audio file using an AUHAL audio unit, I'll avoid discussing assumptions that are overly complex or too broad, and therefore difficult to trace and debug, within the scope of this reply.
To make things work, as asked in the question, one doesn't need an AUGraph. A single HAL audio component does the job.
When using ExtAudioFileWriteAsync() to write linear PCM to a file, it is irrelevant whether one counts in bytes or packets, AFAIK: it simply writes inNumberFrames elements from the buffer list.
For simplicity, I presume one input channel per frame, Float32 data format, and no format conversion.
Assuming everything is properly declared and initialized (which is well covered in documentation, textbooks, tutorials, and sample code), the following 30-line plain-C render callback on a single AU of kAudioUnitSubType_HALOutput does the job, and I'm sure it can be made even simpler:
static OSStatus inputRenderProc(void                       *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp       *inTimeStamp,
                                UInt32                      inBusNumber,
                                UInt32                      inNumberFrames,
                                AudioBufferList            *ioData)
{
    Float64 recordedTime = inTimeStamp->mSampleTime / kSampleRate;
    Float32 samples[inNumberFrames];
    memset(&samples, 0, sizeof(samples));

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mData = samples;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(Float32);

    myPlayer *player = (myPlayer *)inRefCon;

    CheckError(AudioUnitRender(player->mAudioUnit,
                               ioActionFlags,
                               inTimeStamp,
                               kInputBus,
                               inNumberFrames,
                               &bufferList),
               "Couldn't render from device");

    // Write samples from bufferList into file
    ExtAudioFileWriteAsync(player->mAudioFileRef, inNumberFrames, &bufferList);

    return noErr;
}
An additional user-level tip is to create and select a so-called Aggregate Device, consisting of the internal microphone and the built-in output, in Audio MIDI Setup.app, to make a single HAL AU render properly.
Once sure that such simple code behaves as expected, one can build more complex programs, including graphs, but a graph is not a prerequisite for low-level audio processing on OS X. Hope this helps.
If you can confirm that synchronous file writing is your issue, you can use GCD to write your file asynchronously. It is a queue, so it preserves ordering and processes one item at a time. You can also check how many items are left, be notified when it finishes, and so on:
dispatch_queue_t fileWritingQueue;

- (void)setup
{
    // be sure to initialize this
    fileWritingQueue = dispatch_queue_create("myQueue", NULL);
}

YourInputCallback
{
    // replace this with your synchronous file writer
    dispatch_async(fileWritingQueue, ^{
        // write file chunk here
    });
}

- (void)cleanUp
{
    dispatch_release(fileWritingQueue);
}
I think there's a problem with your AudioStreamBasicDescription: You have:
format.mBytesPerFrame = 2;
format.mBytesPerPacket = 2;
format.mChannelsPerFrame = 1;
format.mBitsPerChannel = 16;
format.mFramesPerPacket = 1;
format.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsFloat;
But you should have:
format.mBytesPerFrame = format.mBytesPerPacket = 4;
and
format.mBitsPerChannel = 32;
when using floats: a Float32 sample occupies 4 bytes, i.e. 32 bits, per channel. I remember having trouble with AudioStreamBasicDescriptions because you never get a meaningful error back when the description doesn't make sense, so it's always worth double-checking.