Error converting AudioBufferList to CMBlockBufferRef

I am trying to take a video file, read it in using AVAssetReader, and pass the audio off to Core Audio for processing (adding effects and stuff) before saving it back out to disk using AVAssetWriter. I would like to point out that if I set the componentSubType on the AudioComponentDescription of my output node to RemoteIO, things play correctly through the speakers. This makes me confident that my AUGraph is set up properly, since I can hear things working. I am setting the subType to GenericOutput, though, so I can do the rendering myself and get the adjusted audio back.
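For reference, the output node description looks roughly like this (a minimal sketch; swapping the subtype is the only difference between the speaker path and the offline path):

AudioComponentDescription outputDescription = {0};
outputDescription.componentType         = kAudioUnitType_Output;
// kAudioUnitSubType_RemoteIO here plays through the speakers; GenericOutput lets me pull audio manually with AudioUnitRender.
outputDescription.componentSubType      = kAudioUnitSubType_GenericOutput;
outputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;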

I am reading in the audio and passing each CMSampleBufferRef off to copyBuffer. This puts the audio into a circular buffer that will be read later.

- (void)copyBuffer:(CMSampleBufferRef)buf {  
    if (_readyForMoreBytes == NO)  
    {  
        return;  
    }  

    AudioBufferList abl;  
    CMBlockBufferRef blockBuffer;  
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(buf, NULL, &abl, sizeof(abl), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);  

    UInt32 size = (unsigned int)CMSampleBufferGetTotalSampleSize(buf);  
    BOOL bytesCopied = TPCircularBufferProduceBytes(&circularBuffer, abl.mBuffers[0].mData, size);  

    if (!bytesCopied){  
        // The circular buffer is full; stop accepting new bytes and stash this frame in the rescue buffer.
        _readyForMoreBytes = NO;  

        if (size > kRescueBufferSize){  
            NSLog(@"Unable to allocate enought space for rescue buffer, dropping audio frame");  
        } else {  
            if (rescueBuffer == nil) {  
                rescueBuffer = malloc(kRescueBufferSize);  
            }  

            rescueBufferSize = size;  
            memcpy(rescueBuffer, abl.mBuffers[0].mData, size);  
        }  
    }  

    CFRelease(blockBuffer);  
    if (!self.hasBuffer && bytesCopied > 0)  
    {  
        self.hasBuffer = YES;  
    }  
} 
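The circular buffer itself is set up elsewhere in the class; a rough sketch of that initialization is below (the capacity here is an assumption, not my actual value):

#import "TPCircularBuffer.h"

static const int32_t kCircularBufferLength = 256 * 1024; // assumed capacity in bytes

// ivar used by copyBuffer above
TPCircularBuffer circularBuffer;

// called once, before any sample buffers arrive
TPCircularBufferInit(&circularBuffer, kCircularBufferLength);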

Next I call processOutput. This does a manual render on the outputUnit. When AudioUnitRender is called it invokes the playbackCallback below, which is what is hooked up as the input callback on my first node. playbackCallback pulls the data off the circular buffer and feeds it into the audioBufferList passed in. As I said before, if the output is set to RemoteIO this causes the audio to play correctly through the speakers. AudioUnitRender finishes and returns noErr, and the bufferList object contains valid data. When I call CMSampleBufferSetDataBufferFromAudioBufferList, though, I get kCMSampleBufferError_RequiredParameterMissing (-12731).

-(CMSampleBufferRef)processOutput  
{  
    if(self.offline == NO)  
    {  
        return NULL;  
    }  

    AudioUnitRenderActionFlags flags = 0;  
    AudioTimeStamp inTimeStamp;  
    memset(&inTimeStamp, 0, sizeof(AudioTimeStamp));  
    inTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;  
    UInt32 busNumber = 0;  

    UInt32 numberFrames = 512;  
    inTimeStamp.mSampleTime = 0;  
    UInt32 channelCount = 2;  

    AudioBufferList *bufferList = (AudioBufferList*)malloc(sizeof(AudioBufferList)+sizeof(AudioBuffer)*(channelCount-1));  
    bufferList->mNumberBuffers = channelCount;  
    for (int j=0; j<channelCount; j++)  
    {  
        AudioBuffer buffer = {0};  
        buffer.mNumberChannels = 1;  
        buffer.mDataByteSize = numberFrames*sizeof(SInt32);  
        buffer.mData = calloc(numberFrames,sizeof(SInt32));  

        bufferList->mBuffers[j] = buffer;  

    }  
    CheckError(AudioUnitRender(outputUnit, &flags, &inTimeStamp, busNumber, numberFrames, bufferList), @"AudioUnitRender outputUnit");  

    CMSampleBufferRef sampleBufferRef = NULL;  
    CMFormatDescriptionRef format = NULL;  
    CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };  
    AudioStreamBasicDescription audioFormat = self.audioFormat;  
    CheckError(CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, NULL, 0, NULL, NULL, &format), @"CMAudioFormatDescriptionCreate");  
    CheckError(CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numberFrames, 1, &timing, 0, NULL, &sampleBufferRef), @"CMSampleBufferCreate");  
    CheckError(CMSampleBufferSetDataBufferFromAudioBufferList(sampleBufferRef, kCFAllocatorDefault, kCFAllocatorDefault, 0, bufferList), @"CMSampleBufferSetDataBufferFromAudioBufferList");  

    return sampleBufferRef;  
} 
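For context, I drive processOutput from a loop along these lines when writing the file back out (writerInput and the surrounding AVAssetWriter setup are assumptions; they are not shown in this question):

while (writerInput.isReadyForMoreMediaData) {
    CMSampleBufferRef sample = [self processOutput];
    if (sample == NULL) {
        break;
    }
    // Hand the rendered audio to the AVAssetWriterInput and release our reference.
    [writerInput appendSampleBuffer:sample];
    CFRelease(sample);
}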


static OSStatus playbackCallback(void *inRefCon,  
                                 AudioUnitRenderActionFlags *ioActionFlags,  
                                 const AudioTimeStamp *inTimeStamp,  
                                 UInt32 inBusNumber,  
                                 UInt32 inNumberFrames,  
                                 AudioBufferList *ioData)  
{  
    int numberOfChannels = ioData->mBuffers[0].mNumberChannels;  
    SInt16 *outSample = (SInt16 *)ioData->mBuffers[0].mData;  

    // Zero the output first so we return silence if there is nothing queued.
    memset(outSample, 0, ioData->mBuffers[0].mDataByteSize);  

    MyAudioPlayer *p = (__bridge MyAudioPlayer *)inRefCon;  

    if (p.hasBuffer){  
        int32_t availableBytes;  
        SInt16 *bufferTail = TPCircularBufferTail([p getBuffer], &availableBytes);  

        int32_t requestedBytesSize = inNumberFrames * kUnitSize * numberOfChannels;  

        int bytesToRead = MIN(availableBytes, requestedBytesSize);  
        memcpy(outSample, bufferTail, bytesToRead);  
        TPCircularBufferConsume([p getBuffer], bytesToRead);  

        if (availableBytes <= requestedBytesSize*2){  
            [p setReadyForMoreBytes];  
        }  

        if (availableBytes <= requestedBytesSize) {  
            p.hasBuffer = NO;  
        }    
    }  
    return noErr;  
} 
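For completeness, playbackCallback is hooked up as the input callback on the first node roughly like this (processingGraph and firstNode are placeholder names for my graph and node variables):

AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc       = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void *)self;
// Attach the callback to input bus 0 of the first node in the graph.
CheckError(AUGraphSetNodeInputCallback(processingGraph, firstNode, 0, &callbackStruct),
           @"AUGraphSetNodeInputCallback");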

The CMSampleBufferRef I pass in looks valid (below is a dump of the object from the debugger):

CMSampleBuffer 0x7f87d2a03120 retainCount: 1 allocator: 0x103333180  
  invalid = NO  
  dataReady = NO  
  makeDataReadyCallback = 0x0  
  makeDataReadyRefcon = 0x0  
  formatDescription = <CMAudioFormatDescription 0x7f87d2a02b20 [0x103333180]> {  
  mediaType:'soun'  
  mediaSubType:'lpcm'  
  mediaSpecific: {  
  ASBD: {  
  mSampleRate: 44100.000000  
  mFormatID: 'lpcm'  
  mFormatFlags: 0xc2c  
  mBytesPerPacket: 2  
  mFramesPerPacket: 1  
  mBytesPerFrame: 2  
  mChannelsPerFrame: 1  
  mBitsPerChannel: 16 }  
  cookie: {(null)}  
  ACL: {(null)}  
  }  
  extensions: {(null)}  
}  
  sbufToTrackReadiness = 0x0  
  numSamples = 512  
  sampleTimingArray[1] = {  
  {PTS = {0/1 = 0.000}, DTS = {INVALID}, duration = {1/44100 = 0.000}},  
  }  
  dataBuffer = 0x0  

The buffer list looks like this:

Printing description of bufferList:  
(AudioBufferList *) bufferList = 0x00007f87d280b0a0  
Printing description of bufferList->mNumberBuffers:  
(UInt32) mNumberBuffers = 2  
Printing description of bufferList->mBuffers:  
(AudioBuffer [1]) mBuffers = {  
  [0] = (mNumberChannels = 1, mDataByteSize = 2048, mData = 0x00007f87d3008c00)  
}  

I'm really at a loss here and hoping someone can help. Thanks.

In case it matters, I am debugging this in the iOS 8.3 simulator, and the audio comes from an MP4 that I shot on my iPhone 6 and then saved to my laptop.

I have read the following questions, but still to no avail; things are not working.

How to convert AudioBufferList to CMSampleBuffer?

Converting an AudioBufferList to a CMSampleBuffer Produces Unexpected Results

CMSampleBufferSetDataBufferFromAudioBufferList returning error 12731

core audio offline rendering GenericOutput

UPDATE

I poked around some more and noticed that my AudioBufferList, right before AudioUnitRender runs, looks like this:

bufferList->mNumberBuffers = 2,
bufferList->mBuffers[0].mNumberChannels = 1,
bufferList->mBuffers[0].mDataByteSize = 2048

mDataByteSize is numberFrames * sizeof(SInt32), which is 512 * 4 = 2048. When I look at the AudioBufferList passed into playbackCallback, the list looks like this:

bufferList->mNumberBuffers = 1,
bufferList->mBuffers[0].mNumberChannels = 1,
bufferList->mBuffers[0].mDataByteSize = 1024

I'm not really sure where that other buffer is going, or where the other 1024 bytes went...

If, when AudioUnitRender finishes, I do something like this:

AudioBufferList newbuff;
newbuff.mNumberBuffers = 1;
newbuff.mBuffers[0] = bufferList->mBuffers[0];
newbuff.mBuffers[0].mDataByteSize = 1024;

and pass newbuff off to CMSampleBufferSetDataBufferFromAudioBufferList, the error goes away.

If I instead set up bufferList with mNumberBuffers = 1, or set its mDataByteSize to numberFrames * sizeof(SInt16), I get a -50 when calling AudioUnitRender.

UPDATE 2

I hooked up a render callback so I can inspect the output when I play the sound over the speakers. I noticed that the output that goes to the speakers also has an AudioBufferList with 2 buffers, and that mDataByteSize is 1024 during the input callback and 2048 in the render callback, which matches what I have been seeing when manually calling AudioUnitRender. When I inspect the data in the rendered AudioBufferList I notice that the bytes in the two buffers are the same, which means I can just ignore the second buffer. But I am not sure how to handle the fact that the data is 2048 bytes in size after being rendered instead of the 1024 it was taken in as. Any ideas on why that could be happening? Is it in a more raw form after going through the audio graph, and is that why the size is doubling?


Sounds like the issue you're dealing with is a discrepancy in the number of channels. The reason you're seeing data in blocks of 2048 instead of 1024 is that the graph is handing you back two channels (stereo). Check that all of your audio units are configured for mono throughout the entire audio graph, including the pitch unit and every audio format description.
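If it helps, here is a hedged sketch of what forcing mono on a unit might look like (someUnit, the element numbers, and the 16-bit integer format are placeholders; parts of your graph may require a different sample format):

AudioStreamBasicDescription monoFormat = {0};
monoFormat.mSampleRate       = 44100.0;
monoFormat.mFormatID         = kAudioFormatLinearPCM;
monoFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
monoFormat.mChannelsPerFrame = 1;   // mono
monoFormat.mBitsPerChannel   = 16;
monoFormat.mBytesPerFrame    = 2;
monoFormat.mFramesPerPacket  = 1;
monoFormat.mBytesPerPacket   = 2;

// Apply the same format on both scopes of each unit in the graph.
CheckError(AudioUnitSetProperty(someUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input, 0,
                                &monoFormat, sizeof(monoFormat)),
           @"set mono format on input scope");
CheckError(AudioUnitSetProperty(someUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output, 0,
                                &monoFormat, sizeof(monoFormat)),
           @"set mono format on output scope");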

One thing to especially watch out for is that calls to AudioUnitSetProperty can fail, so be sure to wrap those in CheckError() as well.
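A CheckError helper along these lines would match how it's being called above (a sketch, not necessarily the exact implementation in the question):

static void CheckError(OSStatus error, NSString *operation)
{
    if (error == noErr) {
        return;
    }
    // Log the failing operation and the raw OSStatus so misconfigured units are easy to spot.
    NSLog(@"Error: %@ failed (status %d)", operation, (int)error);
}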
