How to Capture and Preview Video
Video capture uses the Camera API. Google introduced Camera2 in Android 5.0; to cover every device, I will walk through both Camera and Camera2.
The preview can be rendered with either a SurfaceView or a TextureView.
Both APIs require the camera permission in AndroidManifest.xml:
<uses-permission android:name="android.permission.CAMERA" />
Capture
Basic usage of Camera
1. Open the camera
mCamera = Camera.open(mCameraId);
mCameraId is an int: 1 selects the front camera and 0 the back camera (matching Camera.CameraInfo.CAMERA_FACING_FRONT and CAMERA_FACING_BACK).
2. Set the preview target
mCamera.setPreviewDisplay(mHolder);
mCamera.setPreviewTexture(mTexture);
The first call previews into a SurfaceView (via its SurfaceHolder); the second into a TextureView (via its SurfaceTexture). Use one or the other.
3. Configure the parameters
Camera.Parameters parameters = mCamera.getParameters();
parameters.setPreviewFormat(ImageFormat.NV21);        // preview frame format
parameters.setPictureFormat(ImageFormat.JPEG);        // captured picture format
parameters.setPreviewSize(mSize.width, mSize.height); // preview size
parameters.setPictureSize(nSize.width, nSize.height); // captured picture size
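Note that mSize and nSize cannot be arbitrary dimensions; they must come from parameters.getSupportedPreviewSizes() and getSupportedPictureSizes(). A minimal, framework-free sketch of picking the supported size whose pixel area is closest to a requested target (the SizePicker class and the int[][] representation of the size list are illustrative stand-ins for List<Camera.Size>):

```java
// Pick the index of the supported size whose pixel area is closest
// to the requested target. Sizes are {width, height} pairs.
public class SizePicker {
    public static int bestSizeIndex(int[][] sizes, int targetW, int targetH) {
        int best = 0;
        long bestDiff = Long.MAX_VALUE;
        long targetArea = (long) targetW * targetH;
        for (int i = 0; i < sizes.length; i++) {
            long diff = Math.abs((long) sizes[i][0] * sizes[i][1] - targetArea);
            if (diff < bestDiff) {
                bestDiff = diff;
                best = i;
            }
        }
        return best;
    }
}
```

A fuller implementation would also prefer sizes matching the target aspect ratio, but area alone is enough to avoid requesting an unsupported resolution.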
4. Set the preview and picture orientation

// Set the rotation of captured pictures, otherwise they come out sideways
if (mCameraId == 0) {
    parameters.setRotation(90);  // back camera
} else {
    parameters.setRotation(270); // front camera
}
mCamera.setParameters(parameters);
mCamera.setDisplayOrientation(90); // the preview defaults to 0°, which is landscape; rotate 90° for a portrait activity
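The hardcoded 90/270 values work for a portrait activity on typical phones, but the general rule, given in the Camera.setDisplayOrientation documentation, combines the sensor orientation (Camera.CameraInfo.orientation) with the current display rotation; the front camera additionally compensates for its mirrored preview. As pure arithmetic:

```java
// Compute the clockwise rotation to pass to setDisplayOrientation,
// following the formula from the Camera.setDisplayOrientation docs.
// sensorOrientation: CameraInfo.orientation (0/90/180/270)
// displayDegrees: current display rotation in degrees (0/90/180/270)
public class CameraOrientation {
    public static int displayOrientation(int sensorOrientation,
                                         int displayDegrees,
                                         boolean frontFacing) {
        if (frontFacing) {
            int result = (sensorOrientation + displayDegrees) % 360;
            return (360 - result) % 360; // compensate for the mirrored preview
        }
        return (sensorOrientation - displayDegrees + 360) % 360;
    }
}
```

With a typical back sensor orientation of 90° and a portrait display (0°), this yields the 90° used above.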
5. Start the preview
mCamera.setPreviewCallback(this); // receive each preview frame
mCamera.startPreview();
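Each NV21 frame delivered to onPreviewFrame holds a full-resolution Y plane plus a half-resolution interleaved VU plane, i.e. 12 bits per pixel. If you later switch to setPreviewCallbackWithBuffer to avoid per-frame allocations, the buffer handed to addCallbackBuffer must be exactly this size:

```java
// Size in bytes of one NV21 frame:
// w*h luma bytes plus w*h/2 interleaved chroma bytes.
public class Nv21 {
    public static int bufferSize(int width, int height) {
        return width * height * 3 / 2;
    }
}
```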
Basic usage of Camera2
Camera2 is a major departure from the old Camera API.
Let's start with two diagrams (found online).
(Figure 1: the request/response model between the app and the camera)
(Figure 2: the key classes involved in the connection)
Figure 1 explained
Android Device corresponds to our app; Camera Device corresponds to the phone's camera.
For the app to use the camera, a connection must first be established between the two.
Once the connection is up, the app issues data (capture) requests to the camera.
The camera fulfils each request and returns data to the app.
The app then consumes the data through whatever Surfaces it has attached.
Figure 2 explained
Figure 2 shows the key classes involved in establishing the connection.
CameraManager
Manages all of the device's cameras; used, for example, to open one.
CameraDevice.StateCallback
Callbacks for camera-device state, e.g. to learn whether the camera has opened.
CaptureRequest.Builder
Builder.build() creates a CaptureRequest, e.g. for preview, still capture, or recording.
addTarget(Surface outputTarget) attaches to each CaptureRequest the Surface where its data will land, i.e. the data's destination.
CameraCaptureSession.StateCallback
mCameraDevice.createCaptureSession creates a session; only once that session is configured are the app and the camera truly connected.
The Camera2 flow, step by step
Preview
1. Open the camera through CameraManager (on API 23+ the CAMERA runtime permission must already be granted)
mCameraManager = (CameraManager) mContext.getSystemService(Context.CAMERA_SERVICE);
mCameraManager.openCamera(String.valueOf(mCameraId),
        new CameraStateCallback(), mCameraHandler);
2. In the onOpened callback of CameraDevice.StateCallback, keep a reference to the CameraDevice, then build a preview request
@Override
public void onOpened(@NonNull CameraDevice camera) {
    mCameraDevice = camera;
    startPreview();
}

CaptureRequest.Builder builder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
Surface surface = getSurface();
builder.addTarget(surface);
mCaptureRequest = builder.build();
3. Create a capture session through the CameraDevice (note that both the preview surface and the picture reader's surface are registered as possible outputs)
mCameraDevice.createCaptureSession(Arrays.asList(surface, mPictureReader.getSurface()),
new CameraSessionCallback(), mCameraHandler);
4. In the session's onConfigured callback, submit the preview request built earlier
@Override
public void onConfigured(@NonNull CameraCaptureSession session) {
    try {
        mCameraCaptureSession = session;
        // start the repeating preview request
        mCameraCaptureSession.setRepeatingRequest(mCaptureRequest, null, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
That completes the preview.
Taking pictures
A still capture is just another CaptureRequest: with the preview flow already in place, you only need to build a new request and send it through the existing session.
In Camera2 a single session can carry many requests, so the session only needs to be created once.
public void takePicture(String path) {
    mPicturePath = path;
    try {
        CaptureRequest.Builder builder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        builder.addTarget(mPictureReader.getSurface());
        // capture a single still frame
        mCameraCaptureSession.capture(builder.build(), null, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
How mPictureReader is set up
// 2 means the ImageReader can hold at most two frames at a time
mPictureReader = ImageReader.newInstance(mCaptureSize.getWidth(), mCaptureSize.getHeight(),
        ImageFormat.JPEG, 2);
mPictureReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        // Turn this frame into a byte array, much like the preview frames
        // delivered to Camera1's PreviewCallback
        savePicture(reader);
    }
}, mCameraHandler);
Since the reader was configured for JPEG, the byte array obtained in the onImageAvailable callback can be written straight out as a .jpg file.
private void savePicture(ImageReader reader) {
    Image image = reader.acquireLatestImage();
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    byte[] data = new byte[buffer.remaining()];
    buffer.get(data);
    try {
        FileOutputStream fos = new FileOutputStream(mPicturePath);
        fos.write(data);
        fos.close();
        Toast.makeText(mContext, "Picture saved to: " + mPicturePath, Toast.LENGTH_SHORT).show();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
        Log.d(TAG, "Camera2 FileNotFoundException");
    } catch (IOException e) {
        e.printStackTrace();
        Log.d(TAG, "Camera2 IOException");
    } finally {
        image.close();
    }
}
Recording
Recording, too, is just a request: create a recording request and send it through the previously created session.
Like preview, recording submits its request with setRepeatingRequest from inside the CameraCaptureSession.StateCallback callbacks.
The difference between the two:
Preview renders the frames into a SurfaceView or TextureView.
Recording saves the frames into a video file.
Saving the recorded data
Option 1: obtain the raw NV21 frames through an ImageReader
ImageReader.newInstance(width, height, ImageFormat.NV21, 1);
Note, however, that Camera2 no longer supports NV21: the framework source for ImageReader shows that requesting this format throws an exception.
protected ImageReader(int width, int height, int format, int maxImages, long usage) {
    mWidth = width;
    mHeight = height;
    mFormat = format;
    mMaxImages = maxImages;
    if (width < 1 || height < 1) {
        throw new IllegalArgumentException(
                "The image dimensions must be positive");
    }
    if (mMaxImages < 1) {
        throw new IllegalArgumentException(
                "Maximum outstanding image count must be at least 1");
    }
    if (format == ImageFormat.NV21) {
        throw new IllegalArgumentException(
                "NV21 format is not supported");
    }
    mNumPlanes = ImageUtils.getNumPlanesForFormat(mFormat);
    nativeInit(new WeakReference<>(this), width, height, format, maxImages, usage);
    mSurface = nativeGetSurface();
    mIsReaderValid = true;
    // Estimate the native buffer allocation size and register it so it gets accounted for
    // during GC. Note that this doesn't include the buffers required by the buffer queue
    // itself and the buffers requested by the producer.
    // Only include memory for 1 buffer, since actually accounting for the memory used is
    // complex, and 1 buffer is enough for the VM to treat the ImageReader as being of some
    // size.
    mEstimatedNativeAllocBytes = ImageUtils.getEstimatedNativeAllocBytes(
            width, height, format, /*buffer count*/ 1);
    VMRuntime.getRuntime().registerNativeAllocation(mEstimatedNativeAllocBytes);
}
The fix is to request YUV_420_888 instead:
mPreviewReader = ImageReader.newInstance(1280, 720,
        ImageFormat.YUV_420_888, 1);
Then convert the YUV_420_888 frames to NV21; conversion code is easy to find online.
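The conversion itself is mostly buffer bookkeeping: copy the Y plane row by row (the row stride can exceed the width), then interleave the V and U samples while honoring the chroma planes' row and pixel strides. A framework-free sketch over plain byte arrays (in real code the arrays come from Image.getPlanes()[i].getBuffer() and the strides from getRowStride()/getPixelStride(); the YuvConverter class name is illustrative):

```java
// Flatten YUV_420_888 plane data into a single NV21 array
// (full-resolution Y followed by interleaved V/U at half resolution).
public class YuvConverter {
    public static byte[] toNv21(byte[] yPlane, int yRowStride,
                                byte[] uPlane, byte[] vPlane,
                                int uvRowStride, int uvPixelStride,
                                int width, int height) {
        byte[] nv21 = new byte[width * height * 3 / 2];
        int pos = 0;
        // Y plane: copy `width` bytes per row, skipping any row padding.
        for (int row = 0; row < height; row++) {
            System.arraycopy(yPlane, row * yRowStride, nv21, pos, width);
            pos += width;
        }
        // Chroma: NV21 stores V first, then U, for each 2x2 pixel block.
        for (int row = 0; row < height / 2; row++) {
            for (int col = 0; col < width / 2; col++) {
                int index = row * uvRowStride + col * uvPixelStride;
                nv21[pos++] = vPlane[index];
                nv21[pos++] = uPlane[index];
            }
        }
        return nv21;
    }
}
```

On many devices the U and V buffers of a YUV_420_888 image are already interleaved (pixelStride 2), in which case a bulk copy is possible; the per-sample loop above is the safe general case.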
Option 2: let MediaRecorder write an already-encoded, compressed file directly
mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
The difference between the two:
The ImageReader route is more flexible: you get the raw frames and can process them yourself.
The MediaRecorder route is simpler: encoding and compression are handled for you.
Wrapping it all up
The previous sections covered the usage of both Camera and Camera2; after working through the code you should have a grasp of both.
So the question becomes: how do we use the two together?
Camera2 is a 5.0 API, so to support older devices we still have to keep Camera around, yet the two APIs are used completely differently.
To bridge this I wrote a SofarCamera class that gives Camera and Camera2 one unified calling interface; a single parameter lets the caller switch freely between the old Camera and the 5.0 Camera2.
The wrapper also separates the data path from the UI: you design the UI yourself and simply hand over the SurfaceView's holder or the TextureView's texture.
protected SurfaceHolder mHolder;   // SurfaceView preview
protected SurfaceTexture mTexture; // TextureView preview
The unified entry point, SofarCamera
public class SofarCamera {

    public static final int CAMERA_FRONT = 1; // front camera
    public static final int CAMERA_BACK = 0;  // back camera
    public static final int CAMERA1 = 1;      // old API
    public static final int CAMERA2 = 2;      // new API

    private Context context;
    private int cameraId;
    private int cameraApi;
    private SurfaceHolder holder;
    private SurfaceTexture texture;
    private BaseCamera baseCamera;

    private SofarCamera(Builder builder) {
        this.context = builder.context;
        this.cameraId = builder.cameraId;
        this.cameraApi = builder.cameraApi;
        this.holder = builder.holder;
        this.texture = builder.texture;
        initCamera();
    }

    private void initCamera() {
        if (cameraApi == CAMERA1) {
            baseCamera = new Camera1();
        } else if (cameraApi == CAMERA2) {
            baseCamera = new Camera2();
        }
        baseCamera.setContext(context);
        baseCamera.setCameraId(cameraId);
        baseCamera.setDisplay(holder);
        baseCamera.setDisplay(texture);
    }

    public void openCamera() {
        baseCamera.openCamera();
    }

    public void destroyCamera() {
        baseCamera.destroyCamera();
    }

    public void switchCamera() {
        if (cameraId == CAMERA_FRONT) {
            cameraId = CAMERA_BACK;
        } else {
            cameraId = CAMERA_FRONT;
        }
        destroyCamera();
        initCamera();
        openCamera();
    }

    public void takePicture(String path) {
        baseCamera.takePicture(path);
    }

    public Builder newBuilder() {
        return new Builder(this);
    }

    public static final class Builder {
        private Context context;
        private int cameraId;
        private int cameraApi;
        private SurfaceHolder holder;
        private SurfaceTexture texture;

        public Builder() {
        }

        public Builder(SofarCamera camera) {
            this.context = camera.context; // copy the context too, or cameras built via newBuilder() lose it
            this.cameraId = camera.cameraId;
            this.cameraApi = camera.cameraApi;
            this.holder = camera.holder;
            this.texture = camera.texture;
        }

        public Builder context(Context context) {
            this.context = context;
            return this;
        }

        public Builder cameraId(int cameraId) {
            this.cameraId = cameraId;
            return this;
        }

        public Builder cameraApi(@CameraApi int cameraApi) {
            this.cameraApi = cameraApi;
            return this;
        }

        public Builder holder(SurfaceHolder holder) {
            this.holder = holder;
            return this;
        }

        public Builder texture(SurfaceTexture texture) {
            this.texture = texture;
            return this;
        }

        public SofarCamera build() {
            return new SofarCamera(this);
        }
    }

    @IntDef({CAMERA1, CAMERA2})
    @Retention(RetentionPolicy.SOURCE)
    public @interface CameraApi {
    }
}
The Camera abstraction
Camera1 and Camera2 simply extend this class and implement the abstract methods.
public abstract class BaseCamera {

    private static final String TAG = "BaseCamera";

    protected Context mContext;
    protected int mCameraId = 1;       // 1 = front, 0 = back
    protected SurfaceHolder mHolder;   // SurfaceView preview
    protected SurfaceTexture mTexture; // TextureView preview
    protected String mPicturePath;     // where captured pictures are stored

    // Preview into a SurfaceView
    public void setDisplay(SurfaceHolder holder) {
        mHolder = holder;
    }

    // Preview into a TextureView
    public void setDisplay(SurfaceTexture texture) {
        mTexture = texture;
    }

    // Select the front or back camera
    public void setCameraId(int cameraId) {
        if (cameraId != 1 && cameraId != 0) {
            Log.d(TAG, "error cameraId:" + cameraId + " BaseCamera cameraId only supports 0 or 1");
            return;
        }
        mCameraId = cameraId;
    }

    public void setContext(Context context) {
        mContext = context;
    }

    public abstract void openCamera();

    public abstract void destroyCamera();

    public abstract void takePicture(String path);
}
How to call it
With Camera:
mSofarCamera = new SofarCamera.Builder()
        .context(this)
        .cameraApi(SofarCamera.CAMERA1)
        .cameraId(SofarCamera.CAMERA_BACK)
        .holder(holder)
        .build();
With Camera2:
mSofarCamera = new SofarCamera.Builder()
        .context(this)
        .cameraApi(SofarCamera.CAMERA2)
        .cameraId(SofarCamera.CAMERA_BACK)
        .holder(holder)
        .build();
A final word
For space reasons I won't paste the full Camera1 and Camera2 implementations here.
You can write them yourself from the walkthrough above, or from other resources.
My implementation is on GitHub:
https://github.com/hustersf/SofarMusic
The camera wrapper lives in the video package under libplayer.
The calling code lives under app/demo/media/video.