Android Vision Face Detection with Video Stream

I am trying to integrate the face detection api in a video stream I am receiving from a parrot bebop drone.

The stream is decoded with the MediaCodec class (http://developer.android.com/reference/android/media/MediaCodec.html) and this is working fine. Rather than rendering the decoded frame data to a surface view, I can successfully access the ByteBuffer with the decoded frame data from the decoder.

I can also access the decoded Image objects (class https://developer.android.com/reference/android/media/Image.html) from the decoder; each has a timestamp, and I can read the following information:

  • width: 640
  • height: 368
  • format: YUV_420_888
The first thing I tried was to generate Frame objects for the vision API (com/google/android/gms/vision/Frame) via the Frame.Builder (com/google/android/gms/vision/Frame.Builder):

    ...
    ByteBuffer decodedOutputByteBufferFrame = mediaCodec.getOutputBuffer(outIndex);
    Image image = mediaCodec.getOutputImage(outIndex);
    ...
    decodedOutputByteBufferFrame.position(bufferInfo.offset);
    decodedOutputByteBufferFrame.limit(bufferInfo.offset + bufferInfo.size);
    frameBuilder.setImageData(decodedOutputByteBufferFrame, 640, 368, ImageFormat.YV12);
    frameBuilder.setTimestampMillis(image.getTimestamp());
    Frame googleVisFrame = frameBuilder.build();

This code does not throw any error and the googleVisFrame object is not null, but when I call googleVisFrame.getBitmap(), I get null. Consequently, face detection does not work either (I suppose because there is an issue with my vision Frame objects...).
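One detail worth noting is that the decoder reports YUV_420_888 while the code above declares the buffer as YV12 to setImageData. For illustration only, the three planes of a YUV_420_888 image (as exposed by Image.getPlanes(), each with its own row and pixel stride) could be repacked into a single NV21 buffer, which Frame.Builder also accepts. The helper below is a hypothetical sketch in plain Java, not part of any Android API; the plane byte arrays and strides would have to be copied out of the Image first.

```java
public class Yuv420ToNv21 {
    // Repack planar YUV 4:2:0 data (separate Y, U, V planes with row/pixel
    // strides) into a single NV21 buffer: the full-resolution Y plane
    // followed by interleaved V/U samples at quarter resolution.
    public static byte[] toNv21(byte[] y, int yRowStride,
                                byte[] u, byte[] v,
                                int uvRowStride, int uvPixelStride,
                                int width, int height) {
        byte[] out = new byte[width * height + 2 * (width / 2) * (height / 2)];
        int pos = 0;
        // Copy the Y plane row by row, honoring the row stride
        // (decoder rows may be padded beyond the visible width).
        for (int row = 0; row < height; row++) {
            System.arraycopy(y, row * yRowStride, out, pos, width);
            pos += width;
        }
        // Interleave the chroma planes; NV21 stores V before U.
        for (int row = 0; row < height / 2; row++) {
            for (int col = 0; col < width / 2; col++) {
                int uvIndex = row * uvRowStride + col * uvPixelStride;
                out[pos++] = v[uvIndex];
                out[pos++] = u[uvIndex];
            }
        }
        return out;
    }
}
```

The NV21 byte array could then be wrapped in a ByteBuffer and passed to setImageData with ImageFormat.NV21 instead of ImageFormat.YV12.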

Even if this worked, I am not sure how to handle the video stream with the vision API, as all the example code I have found demonstrates its use with the internal camera.
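For what it is worth, the camera examples use CameraSource only as a convenience; a decoded stream can bypass it and call the detector synchronously on each frame instead. A minimal sketch, assuming a valid Frame built as above (not tested here, since it needs an Android context and Google Play services):

```java
import android.content.Context;
import android.util.SparseArray;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;

public class StreamFaceDetection {
    private final FaceDetector detector;

    public StreamFaceDetection(Context context) {
        // Build the detector once; creation is expensive.
        detector = new FaceDetector.Builder(context)
                .setTrackingEnabled(true)   // frames form a continuous stream
                .setMode(FaceDetector.FAST_MODE)
                .build();
    }

    // Call this for every Frame built from a decoded video frame.
    public void processFrame(Frame frame) {
        if (!detector.isOperational()) {
            return; // native detector library not yet available
        }
        SparseArray<Face> faces = detector.detect(frame);
        for (int i = 0; i < faces.size(); i++) {
            Face face = faces.valueAt(i);
            // use face.getPosition(), face.getWidth(), face.getHeight(), ...
        }
    }

    public void release() {
        detector.release();
    }
}
```

detect(Frame) is the synchronous path; the asynchronous Detector.receiveFrame() route would also work if a Processor pipeline is attached.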

If you could point me in the right direction, I would be very thankful.
