
Video Capture for Real-Time Mobile Video Communication

1. Introduction

A complete real-time network video communication system consists of video capture, video encoding, video transmission, video decoding, and playback. For the capture stage, most video encoders expect the raw input video in YUV420 format. YUV420 is one member of the YUV family of formats; YUV has three components: Y carries luminance (the grayscale value), while U and V carry chrominance and describe the color and saturation of the image. In the planar YUV420 layout the components are arranged as YYYYYYYYUUVV. If the YUV data delivered by the capture stage uses a different layout, it must first be rearranged into this layout before it is fed to the encoder. On Android phones, the YUV data captured from the camera is in NV21 format, whose layout is YYYYYYYYVUVU; on the iPhone, the camera delivers NV12, whose layout is YYYYYYYYUVUV. Therefore, on both Android and iPhone, the captured data has to be converted to the planar YUV420 layout before it can be encoded and transmitted.
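As a concrete illustration of that layout conversion, here is a minimal Java sketch that rearranges an Android NV21 frame (Y plane followed by interleaved V/U bytes) into the planar YUV420 layout (all Y, then all U, then all V). The method name and the caller-supplied buffers are illustrative, not part of the article's code:

// Minimal sketch: convert NV21 (YYYYYYYYVUVU) to planar YUV420/I420 (YYYYYYYYUUVV).
// nv21 and i420 must each hold width * height * 3 / 2 bytes.
public static void convertNV21ToI420(byte[] nv21, byte[] i420, int width, int height) {
    int ySize = width * height;      // number of luma bytes
    int uvPairs = ySize / 4;         // number of U/V sample pairs (one per 2x2 block)
    // The Y plane is identical in both layouts
    System.arraycopy(nv21, 0, i420, 0, ySize);
    int uOffset = ySize;             // U plane starts right after Y in I420
    int vOffset = ySize + uvPairs;   // V plane follows the U plane
    for (int i = 0; i < uvPairs; i++) {
        // NV21 stores chroma as V,U,V,U,... after the Y plane
        i420[vOffset + i] = nv21[ySize + 2 * i];     // V sample
        i420[uOffset + i] = nv21[ySize + 2 * i + 1]; // U sample
    }
}

The same idea applies to NV12 on the iPhone, except that the interleaved chroma bytes come in U,V order instead of V,U.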
 
2. Android Video Capture
 
On Android, each frame of the video stream is captured in real time through the onPreviewFrame callback of Camera.PreviewCallback. A Camera object offers three different ways to install this callback:
  • setPreviewCallback(Camera.PreviewCallback): onPreviewFrame is invoked every time a new preview frame is displayed on the screen.
  • setOneShotPreviewCallback(Camera.PreviewCallback): onPreviewFrame is invoked once, when the next preview frame becomes available.
  • setPreviewCallbackWithBuffer(Camera.PreviewCallback): introduced in Android 2.2; it works like setPreviewCallback, but the caller must supply a byte array that is used as the buffer for the preview image data. This gives better control over the memory used while processing preview frames and avoids repeatedly allocating and freeing buffers.

 
Sample code
 
import java.io.IOException;

import android.app.Activity;
import android.graphics.ImageFormat;
import android.hardware.Camera;
import android.hardware.Camera.Parameters;
import android.os.Bundle;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.Window;

public class VideoActivity extends Activity implements SurfaceHolder.Callback, Camera.PreviewCallback {
 
static int screenWidth = 0;
static int screenHeight = 0;
private SurfaceView mSurfaceview = null;
private SurfaceHolder mSurfaceHolder = null;
private Camera mCamera = null;
private byte yuv_frame[];
private Parameters mParameters;
static int mwidth = 320;
static int mheight = 240;
 
// Setup: bind the SurfaceView and size the preview surface
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
requestWindowFeature(Window.FEATURE_NO_TITLE);
setContentView(R.layout.activity_video_capture);
mSurfaceview = (SurfaceView) findViewById(R.id.surfaceview);
mSurfaceHolder = mSurfaceview.getHolder();
mSurfaceHolder.addCallback(this);
screenWidth = getWindowManager().getDefaultDisplay().getWidth();
screenHeight = getWindowManager().getDefaultDisplay().getHeight();
mSurfaceHolder.setFixedSize(screenWidth, screenHeight);//size the preview surface to the full screen
}
 
void startcapture()
{
try {
if (mCamera == null) {
mCamera = Camera.open();
}
mCamera.stopPreview();
mParameters = mCamera.getParameters();
mParameters.setPreviewSize(mwidth, mheight);//set preview resolution
mParameters.setPreviewFrameRate(15); //set preview frame rate
mCamera.setParameters(mParameters);
int mformat = mParameters.getPreviewFormat();
int bitsperpixel = ImageFormat.getBitsPerPixel(mformat);
yuv_frame = new byte[mwidth * mheight * bitsperpixel / 8];//buffer for one NV21 preview frame (NV21 uses 12 bits per pixel)
mCamera.addCallbackBuffer(yuv_frame);
mCamera.setPreviewDisplay(mSurfaceHolder);
mCamera.setPreviewCallbackWithBuffer(this);//set callback for camera
mCamera.startPreview();//trigger callback onPreviewFrame
}catch (IOException e) {
e.printStackTrace();
}
}
 
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
//process the captured frame here: the layout is NV21 (YYYYYYYYVUVU) and must be
//converted to planar YUV420 (YYYYYYYYUUVV) before it is passed to the encoder
camera.addCallbackBuffer(yuv_frame);
}
protected void onPause() {
super.onPause();
}
public void onResume() {
super.onResume();
}
protected void onDestroy() {
if (mCamera != null) {
mCamera.setPreviewCallback(null);
mCamera.stopPreview(); //stop capture video data
mCamera.release();
mCamera = null;
}
super.onDestroy();
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width_size,int height_size) {
startcapture();//start capture NV21 video data
}
}
 
3. iOS Video Capture
 
On iOS, real-time video capture starts by initializing an AVCaptureSession object; AVCaptureSession coordinates the flow of data from AV input devices to outputs. Next, initialize an AVCaptureDeviceInput object and add it to the AVCaptureSession with the addInput method. Then initialize an AVCaptureVideoDataOutput object and add it to the session with the addOutput method. Once the AVCaptureVideoDataOutput has been set up, video frames are obtained through the delegate method captureOutput:didOutputSampleBuffer:fromConnection:, and the delegate must adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol.
 
Sample code
 
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#import <CoreVideo/CoreVideo.h>

int frame_rate = 15;
int mWidth = 640;
int mHeight = 480;
AVCaptureDeviceInput *videoInput = nil;
AVCaptureVideoDataOutput *avCaptureVideoDataOutput =nil;
AVCaptureSession* mCaptureSession = nil;
AVCaptureDevice *mCaptureDevice = nil;
 
- (void)startCapture
{
if(mCaptureDevice || mCaptureSession)
{
NSLog(@"Already capturing");
return;
}
NSArray *cameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in cameras){
if (device.position == AVCaptureDevicePositionFront){
mCaptureDevice = device;//prefer the front-facing camera
}
}
if(mCaptureDevice == nil)
{
mCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];//fall back to the default camera
}
if(mCaptureDevice == nil)
{
NSLog(@"Failed to get valide capture device");
return;
}
NSError *error = nil;
videoInput = [AVCaptureDeviceInput deviceInputWithDevice:mCaptureDevice error:&error];
if (!videoInput)
{
NSLog(@"Failed to get video input");
mCaptureDevice = nil;
return;
}
mCaptureSession = [[AVCaptureSession alloc] init];
mCaptureSession.sessionPreset = AVCaptureSessionPreset640x480;//set capture resolution
[mCaptureSession addInput:videoInput];
avCaptureVideoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
NSDictionary *settings = [[NSDictionary alloc] initWithObjectsAndKeys:
[NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], (id)kCVPixelBufferPixelFormatTypeKey,
[NSNumber numberWithInt:mWidth], (id)kCVPixelBufferWidthKey,
[NSNumber numberWithInt:mHeight], (id)kCVPixelBufferHeightKey,
nil];//request NV12 output (Y plane followed by interleaved UV)
avCaptureVideoDataOutput.videoSettings = settings;
[settings release];
avCaptureVideoDataOutput.minFrameDuration = CMTimeMake(1, frame_rate);//set video frame rate
avCaptureVideoDataOutput.alwaysDiscardsLateVideoFrames = YES;
dispatch_queue_t queue_ = dispatch_queue_create("www.easemob.com", NULL);
[avCaptureVideoDataOutput setSampleBufferDelegate:self queue:queue_];
[mCaptureSession addOutput:avCaptureVideoDataOutput];
dispatch_release(queue_);
[mCaptureSession startRunning];//start capturing; frames are delivered to the delegate
}
- (void)stopCapture
{
 
if(mCaptureSession){
[mCaptureSession stopRunning];
[mCaptureSession removeInput:videoInput];
[mCaptureSession removeOutput:avCaptureVideoDataOutput];
[avCaptureVideoDataOutput release], avCaptureVideoDataOutput = nil;
videoInput = nil;//returned autoreleased by deviceInputWithDevice:, so not released here
[mCaptureSession release], mCaptureSession = nil;
mCaptureDevice = nil;//returned autoreleased by devicesWithMediaType:, so not released here
}
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
/* lock the buffer so its base address can be read */
if(CVPixelBufferLockBaseAddress(imageBuffer, 0) == kCVReturnSuccess)
{
UInt8 *bufferPtr = (UInt8 *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer,0);//Y plane
UInt8 *bufferPtr1 = (UInt8 *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer,1);//interleaved UV plane
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t bytesrow0 = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer,0);
size_t bytesrow1 = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer,1);
UInt8 *yuv420_data = (UInt8 *)malloc(width * height * 3 / 2);//buffer for planar YUV420 with layout YYYYYYYYUUVV
 
/* convert NV12 (Y plane + interleaved UV plane) to planar YUV420 */

UInt8 *pY = bufferPtr;
UInt8 *pUV = bufferPtr1;
UInt8 *pU = yuv420_data + width*height;
UInt8 *pV = pU + width*height/4;
for(int i = 0; i < height; i++)
{
memcpy(yuv420_data + i*width, pY + i*bytesrow0, width);//copy each Y row (plane stride may exceed width)
}
for(int j = 0; j < height/2; j++)
{
for(int i = 0; i < width/2; i++)
{
*(pU++) = pUV[i<<1];      //even bytes of the UV plane are U
*(pV++) = pUV[(i<<1) + 1];//odd bytes are V
}
pUV += bytesrow1;
}
//add code to push yuv420_data to video encoder here
 
free(yuv420_data);
/* unlock the buffer*/
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
}
 
Author: 彭祖元
