因?yàn)樽罱男枨螅洪_啟相機(jī),實(shí)時(shí)獲取相機(jī)的照片并提取其中的A4紙矩形區(qū)域,獲取滿足條件的矩形區(qū)域后就拍照,把A4紙區(qū)域摳出來(lái)。
android官方好像是沒有這方面的封裝的,需要自己擼,IOS聽同事說(shuō)官方有這方面功能的封裝,可以直接拿來(lái)用,所以下面就介紹下我在項(xiàng)目中對(duì)opencv提取矩形的處理步驟。
1. Download the SDK version you need from the OpenCV website; each release has a corresponding SDK.

Older SDKs support the armeabi ABI; later versions do not. I will still cover how to build for armeabi at the end of this article. armeabi really is old: it emulates floating-point arithmetic, so image-processing performance is a concern, and since my project processes camera frames in real time I recommend avoiding armeabi if your project allows. It is somewhat slower, though not unbearably so.
2. Building OpenCV
After downloading, open the samples folder in Android Studio; it looks like the figure below.

Note that building OpenCV requires the NDK. If you do not normally do NDK development you may need to download it, along with the CMake tool. This article does not cover setting up an NDK environment; if you are unsure, search online.
2.1 Building the .so libraries
If the project opens successfully you will see several sample projects. (Note: each is a standalone app project that builds on its own, not a set of interdependent modules.) Select "tutorial-2" and build it.
Whether you build an APK or a bundle, the .so files are generated when the build finishes, as shown below.

Keep in mind that in production you need the release .so files (build the release variant to get them). What we build now is debug, which is fine while debugging.
2.2 Running the APK
Connect a phone and run the app on it to see the result. Switching through the menu shows different output images; select "Canny" to see the contours. These contours are nowhere near what we need, so the code has to be modified. The project uses two official OpenCV classes, CameraBridgeViewBase and JavaCameraView; these are the two classes we will modify to extract the contours.
3. Running OpenCV in your own project
3.1
Create a new project in Android Studio, then create a module, say myopencv, and make app depend on it. First copy the .so files we just built into myopencv, then add a reference to the .so files in myopencv's Gradle file.
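The article does not show the Gradle snippet itself; as a sketch, assuming the .so files were copied into myopencv/src/main/jniLibs/&lt;ABI&gt;/, the reference can look like this:

```groovy
android {
    // ...
    sourceSets {
        main {
            // pick up the prebuilt OpenCV .so files copied from the sample build
            jniLibs.srcDirs = ['src/main/jniLibs']
        }
    }
    defaultConfig {
        ndk {
            // only package the ABIs you actually built
            abiFilters 'armeabi-v7a', 'arm64-v8a'
        }
    }
}
```

The directory names under jniLibs must match the ABI names exactly (armeabi-v7a, arm64-v8a, etc.), or the libraries will not be packaged.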
3.2
With that configured, copy the OpenCV Java code from the sample project into myopencv (shown below; this is the official OpenCV Java code). We avoid editing the SDK in place so that if a change goes wrong we can re-copy and replace, and so we always have the original to compare against.

4. Modifying the code
4.1 Modifying the CameraBridgeViewBase class
Find the method protected void deliverAndDrawFrame(CvCameraViewFrame frame);
this is the main method we modify in this class.
Tips:
The camera data shown on screen is drawn via getHolder().unlockCanvasAndPost(canvas); so before that call we need to have detected the rectangle, and outside it we draw a semi-transparent overlay to show how well the detected rectangle overlaps the A4 sheet. (This depends on your needs: if you do not need the distinction, skip the overlay and simply draw the detected rectangle; handle it however your project requires.)
Utils.matToBitmap(modified, mCacheBitmap); converts the Mat to a Bitmap, so our job is to process the Mat before the canvas draws the Bitmap and obtain the rectangle as an org.opencv.core.Rect. The goal is therefore clear: one, process the Mat; two, obtain the Rect.
Material online about rectangle extraction is all over the map, each approach with its own strengths, but after trying several with poor results I went back to OpenCV's official rectangle-detection sample as the basis for our code. Create a contour helper class:
/**
 * Contour helper class
 */
public class CountersAuxiliary {

    public CountersAuxiliary() {
    }

    private Mat image;
    private Mat originalImage;
    private List<MatOfPoint> contours;
    private Mat hierarchy;
    private int HEIGHT;
    private int WIDTH;
    private static List<Rect> rects = new ArrayList<Rect>();

    public static void setFilter(Mat image) {
        // Apply gaussian blur to remove noise
        Imgproc.GaussianBlur(image, image, new Size(5, 5), 0);
        // Threshold
        Imgproc.adaptiveThreshold(image, image, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, 7, 1);
        // Invert the image
        Core.bitwise_not(image, image);
        // Dilate
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_DILATE, new Size(3, 3), new Point(1, 1));
        Imgproc.dilate(image, image, kernel);
    }
    public static void findRectangle(Mat originalImage, Mat image) {
        List<MatOfPoint> contours = new ArrayList<>();
        Mat hierarchy = new Mat();
        List<Rect> rects = new ArrayList<Rect>();
        long startTime0 = System.currentTimeMillis();
        Imgproc.cvtColor(originalImage, image, Imgproc.COLOR_BGR2GRAY);
        setFilter(image);
        rects.clear();
        // Find contours
        Imgproc.findContours(image, contours, hierarchy, Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE, new Point(0, 0));
        // For conversion later on
        MatOfPoint2f approxCurve = new MatOfPoint2f();
        long startTime = System.currentTimeMillis();
        // For each contour found
        for (int i = 0; i < contours.size(); i++) {
            // Convert contours from MatOfPoint to MatOfPoint2f
            MatOfPoint2f contour2f = new MatOfPoint2f(contours.get(i).toArray());
            // Processing on mMOP2f1 which is in type MatOfPoint2f
            double approxDistance = Imgproc.arcLength(contour2f, true) * 0.02;
            if (approxDistance > 1) {
                // Find polygons
                Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);
                // Convert back to MatOfPoint
                MatOfPoint points = new MatOfPoint(approxCurve.toArray());
                // Rectangle checks - points, area, convexity
                if (points.total() == 4 && Math.abs(Imgproc.contourArea(points)) > 1000 && Imgproc.isContourConvex(points)) {
                    double cos = 0;
                    double mcos = 0;
                    for (int sc = 2; sc < 5; sc++) {
                        // find the maximum cosine among the corner angles
                        cos = Math.abs(angle(points.toList().get(sc % 4), points.toList().get(sc - 2), points.toList().get(sc - 1)));
                        if (cos > mcos) {
                            mcos = cos;
                        }
                    }
                    if (mcos < 0.3) {
                        // Get bounding rect of contour
                        Rect rect = Imgproc.boundingRect(points);
                        rects.add(rect);
                        // Imgproc.rectangle(originalImage, rect.tl(), rect.br(), new Scalar(255, 0, 0), -1, 4, 0);
                        // Imgproc.drawContours(originalImage, contours, i, new Scalar(0, 255, 0, .8), 2);
                    }
                }
            }
        }
    }
    // Helper function: finds the cosine of the angle between vectors
    // from pt0->pt1 and from pt0->pt2
    public static double angle(Point pt1, Point pt2, Point pt0) {
        double dx1 = pt1.x - pt0.x;
        double dy1 = pt1.y - pt0.y;
        double dx2 = pt2.x - pt0.x;
        double dy2 = pt2.y - pt0.y;
        return (dx1 * dx2 + dy1 * dy2) / Math.sqrt((dx1 * dx1 + dy1 * dy1) * (dx2 * dx2 + dy2 * dy2) + 1e-10);
    }
}
This class is then used in the method below:
private static int N = 16;

/**
 * This method shall be called by the subclasses when they have valid
 * object and want it to be delivered to external client (via callback) and
 * then displayed on the screen.
 *
 * @param frame - the current frame to be delivered
 */
protected void deliverAndDrawFrame(CvCameraViewFrame frame) { // some leftover debug code remains in this method; delete it if you like
    Mat modified;
    if (mListener != null) {
        modified = mListener.onCameraFrame(frame);
    } else {
        modified = frame.rgba();
    }
    boolean bmpValid = true;
    if (modified != null) {
        try {
            Utils.matToBitmap(modified, mCacheBitmap);
        } catch (Exception e) {
            e.printStackTrace();
            bmpValid = false;
        }
    } else {
        isCompleted = true;
        return;
    }
    /*****************************************************/
    Mat src = new Mat();
    Mat src1 = new Mat();
    Mat src2 = new Mat();
    Mat src3 = new Mat();
    Mat des = new Mat();
    // Imgproc.resize(modified, src, new Size(modified.width() / N, modified.height() / N));
    // Each pyrDown call outputs a Mat at half the input size, so the four calls
    // below yield a thumbnail at 1/16 of the original. On armeabi this makes the
    // processing hundreds of times faster; without shrinking, armeabi cannot
    // keep up in real time at all (expect stalls of 5+ seconds per frame).
    Imgproc.pyrDown(modified, src);
    Imgproc.pyrDown(src, src1);
    Imgproc.pyrDown(src1, src2);
    Imgproc.pyrDown(src2, src3);
    long startTime = System.currentTimeMillis();
    modified = src3.clone();
    CountersAuxiliary.findRectangle(modified, des);
    Log.i("GAFR", "CCC333_old=" + (System.currentTimeMillis() - startTime));
    /*****************************************************/
/*****************************************************/
Mat mat = new Mat();//mSource.clone();
Imgproc.Canny(modified, mat, 75, 200);
// Mat tmp = mSource.clone();
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
// 尋找輪廓
Mat hierarchy = new Mat();
Imgproc.findContours(mat, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
Bitmap bitmap = Bitmap.createBitmap(mat.cols(), mat.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(mat, bitmap);
int index = 0;
double perimeter = 0;
// 找出匹配到的最大輪廓
for (int i = 0; i < contours.size(); i++) {
// 最大面積
double area = Imgproc.contourArea(contours.get(i));
// double length = Imgproc.arcLength(source, true);
if (area > perimeter) {
perimeter = area;
index = i;
}
}
List<org.opencv.core.Rect> rects = new ArrayList<org.opencv.core.Rect>();
MatOfPoint2f approxCurve = new MatOfPoint2f();
for (int i = 0; i < contours.size(); i++) {
//Convert contours from MatOfPoint to MatOfPoint2f
MatOfPoint2f contour2f = new MatOfPoint2f(contours.get(i).toArray());
//Processing on mMOP2f1 which is in type MatOfPoint2f
double approxDistance = Imgproc.arcLength(contour2f, true) * 0.02;
if (approxDistance > 1) {
//Find Polygons
Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);
//Convert back to MatOfPoint
MatOfPoint points = new MatOfPoint(approxCurve.toArray());
//Rectangle Checks - Points, area, convexity
if (points.total() == 4 && Math.abs(Imgproc.contourArea(points)) > 1000 && Imgproc.isContourConvex(points)) {
double cos = 0;
double mcos = 0;
for (int sc = 2; sc < 5; sc++) {
// TO-DO Figure a way to check angle
cos = Math.abs(CountersAuxiliary.angle(points.toList().get(sc % 4), points.toList().get(sc - 2), points.toList().get(sc - 1)));
if (cos > mcos) {
mcos = cos;
}
}
if (mcos < 0.3) {
// Get bounding rect of contour
org.opencv.core.Rect rect = Imgproc.boundingRect(points);
if (Math.abs(rect.height - rect.width) < 1000) {
System.out.println(i + "| x: " + rect.x + " + width(" + rect.width + "), y: " + rect.y + "+ width(" + rect.height + ")");
rects.add(rect);
Imgproc.rectangle(mat, rect.tl(), rect.br(), new Scalar(255, 0, 0), -1, 4, 0);
}
}
}
}
}
Utils.matToBitmap(mat, bitmap);
String str = "";
for (int m = 0; m < rects.size(); m++) {
str += ",rects=" + rects.get(m).toString();
}
String rectStr = "";
// Imgproc.drawContours(tmp, contours, index, new Scalar(0.0, 0.0, 255.0), 9, Imgproc.LINE_AA);
if (contours.size() != 0) {//只拍A4紙,所以默認(rèn)面積最大的就是A4紙區(qū)域,下面的多邊擬合比求最大面積誤差要小些,所以rect被覆蓋了(看選擇哪種,其中一種可以刪除,只保留一個(gè)方案)
rect = Imgproc.boundingRect(contours.get(index));
// Imgproc.rectangle(tmp, rect, new Scalar(0.0, 0.0, 255.0), 4, Imgproc.LINE_8);
// mRect = new Rect(rect.x, rect.y, rect.x + rect.width, rect.y + rect.height);
rect.x = rect.x * N;
rect.y = rect.y * N;
rect.width = rect.width * N;
rect.height = rect.height * N;
}
if (rects.size() > 0) {
rect.x = rects.get(0).x * N;
rect.y = rects.get(0).y * N;
rect.width = rects.get(0).width * N;
rect.height = rects.get(0).height * N;
}
    /*****************************************************/
    mat.release();
    hierarchy.release();
    src.release();
    src1.release();
    src2.release();
    src3.release();
    des.release();
    // tmp.release();
    if (mCacheBitmap == null || isExist) {
        isCompleted = true;
        return;
    }
    if (bmpValid) {
        Canvas canvas = getHolder().lockCanvas();
        if (canvas != null) {
            // From here on, handle the drawing yourself; rect has already been obtained.
            ......
        }
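One detail worth calling out: detection ran on a thumbnail produced by four pyrDown calls, each halving both dimensions, so the resulting rect lives at 1/16 scale and is multiplied back by N = 16 above before any drawing on the full-resolution frame. The bookkeeping in isolation, with plain ints standing in for org.opencv.core.Rect:

```java
public class RectScaleDemo {
    static final int N = 16; // four pyrDown calls: 2^4 = 16

    // Scale a rect (x, y, width, height) found at 1/16 scale back to
    // full-frame coordinates. Plain ints stand in for org.opencv.core.Rect.
    static int[] scaleBack(int x, int y, int width, int height) {
        return new int[]{x * N, y * N, width * N, height * N};
    }

    public static void main(String[] args) {
        // a rect detected on the 1/16-scale thumbnail
        int[] full = scaleBack(10, 5, 60, 40);
        System.out.println(full[0] + "," + full[1] + "," + full[2] + "," + full[3]);
        // prints "160,80,960,640"
    }
}
```

Forgetting this multiplication is the most common reason the drawn rectangle ends up tiny and pinned to a corner of the preview.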
Next, change OpenCV's default landscape output to portrait.
In the CameraBridgeViewBase class:
protected void AllocateCache() {
    // mCacheBitmap = Bitmap.createBitmap(mFrameWidth, mFrameHeight, Bitmap.Config.ARGB_8888);
    /********************************* landscape-to-portrait change **********************************************/
    // To get the orientation right, mCacheBitmap stores the camera frame rotated by 90 degrees.
    // After the 90-degree rotation, mFrameWidth and mFrameHeight swap roles.
    int portraitWidth = mFrameHeight;
    int portraitHeight = mFrameWidth;
    mCacheBitmap = Bitmap.createBitmap(portraitWidth, portraitHeight, Bitmap.Config.ARGB_8888);
    /********************************* landscape-to-portrait change **********************************************/
}
protected Size calculateCameraFrameSize(List<?> supportedSizes, ListItemAccessor accessor, int surfaceWidth, int surfaceHeight) {
    int calcWidth = 0;
    int calcHeight = 0;
    // int maxAllowedWidth = (mMaxWidth != MAX_UNSPECIFIED && mMaxWidth < surfaceWidth) ? mMaxWidth : surfaceWidth;
    // int maxAllowedHeight = (mMaxHeight != MAX_UNSPECIFIED && mMaxHeight < surfaceHeight) ? mMaxHeight : surfaceHeight;
    /********************************* landscape-to-portrait change **********************************************/
    // Maximum allowed width and height.
    // #Modified step4
    // The camera frame's mMaxWidth must be compared against the surface's surfaceHeight,
    // and its mMaxHeight against the surface's surfaceWidth.
    int maxAllowedWidth = (mMaxWidth != MAX_UNSPECIFIED && mMaxWidth < surfaceHeight) ? mMaxWidth : surfaceHeight;
    int maxAllowedHeight = (mMaxHeight != MAX_UNSPECIFIED && mMaxHeight < surfaceWidth) ? mMaxHeight : surfaceWidth;
    /********************************* landscape-to-portrait change **********************************************/
    Collections.sort((List<android.hardware.Camera.Size>) supportedSizes, new Comparator<android.hardware.Camera.Size>() {
        @Override
        public int compare(Camera.Size o1, Camera.Size o2) {
            return o2.height - o1.height;
        }
    });
    for (Object size : supportedSizes) {
        int width = accessor.getWidth(size);
        int height = accessor.getHeight(size);
        Log.d(TAG, "trying size: " + width + "x" + height);
        if (width <= maxAllowedWidth && height <= maxAllowedHeight) {
            if (width >= calcWidth && height >= calcHeight) {
                calcWidth = width;
                calcHeight = height;
                break;
            }
        }
    }
    if ((calcWidth == 0 || calcHeight == 0) && supportedSizes.size() > 0) {
        Log.i(TAG, "fallback to the first frame size");
        Object size = supportedSizes.get(0);
        calcWidth = accessor.getWidth(size);
        calcHeight = accessor.getHeight(size);
    }
    return new Size(calcWidth, calcHeight);
}
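The selection logic above (sort the supported preview sizes by height, descending, then take the first one that fits the allowed bounds) can be sketched in isolation. Size here is a hypothetical stand-in for android.hardware.Camera.Size, for illustration only:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SizeSelectDemo {
    // Hypothetical stand-in for android.hardware.Camera.Size.
    static class Size {
        final int width, height;
        Size(int width, int height) { this.width = width; this.height = height; }
    }

    // Mirrors the loop above: sort by height descending, then return the
    // first supported size that fits inside the allowed bounds.
    static Size pick(List<Size> supported, int maxAllowedWidth, int maxAllowedHeight) {
        supported.sort((a, b) -> b.height - a.height);
        for (Size s : supported) {
            if (s.width <= maxAllowedWidth && s.height <= maxAllowedHeight) {
                return s;
            }
        }
        // fall back to the first (tallest) size, like the original fallback branch
        return supported.isEmpty() ? null : supported.get(0);
    }

    public static void main(String[] args) {
        List<Size> supported = new ArrayList<>(Arrays.asList(
                new Size(1920, 1080), new Size(1280, 720), new Size(640, 480)));
        Size s = pick(supported, 1280, 720);
        System.out.println(s.width + "x" + s.height); // prints "1280x720"
    }
}
```

Because of the descending sort plus the break, the method picks the tallest preview size that still fits the (swapped) surface bounds.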
In the JavaCameraView class, modify:

protected boolean initializeCamera(int width, int height) {
    ......
    List<String> FocusModes = params.getSupportedFocusModes();
    if (FocusModes != null && FocusModes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO)) {
        params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
    }
    mCamera.setParameters(params);
    params = mCamera.getParameters();
    mFrameWidth = params.getPreviewSize().width;
    mFrameHeight = params.getPreviewSize().height;
    if ((getLayoutParams().width == LayoutParams.MATCH_PARENT) && (getLayoutParams().height == LayoutParams.MATCH_PARENT))
        // mScale = Math.min(((float)height)/mFrameHeight, ((float)width)/mFrameWidth);
        /********************************* landscape-to-portrait change **********************************************/
        /* So that the scale is applied when drawing onto the canvas in deliverAndDrawFrame,
           give the <JavaCameraView>
               android:layout_width="match_parent"
               android:layout_height="match_parent"
           If you also want to control the scaled size, put the <JavaCameraView> inside a
           LinearLayout with fixed dimensions. In portrait orientation the ratios are
               surface width  / camera frame mFrameHeight
               surface height / camera frame mFrameWidth
           If you do not want to configure the <JavaCameraView>, simply removing this if
           statement should also work. */
        mScale = Math.min(((float) width) / mFrameHeight, ((float) height) / mFrameWidth);
        /********************************* landscape-to-portrait change **********************************************/
    else
        mScale = 0;
    if (mFpsMeter != null) {
        mFpsMeter.setResolution(mFrameWidth, mFrameHeight);
    }
    int size = mFrameWidth * mFrameHeight;
    size = size * ImageFormat.getBitsPerPixel(params.getPreviewFormat()) / 8;
    mBuffer = new byte[size];
    // only the sections between the ****** markers are modified
}
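The buffer sizing at the end of initializeCamera follows directly from the preview format's bits per pixel; NV21 and YV12 are both 4:2:0 YUV formats at 12 bpp, which is what ImageFormat.getBitsPerPixel() returns for them. A quick check of the arithmetic:

```java
public class BufferSizeDemo {
    // NV21/YV12 previews use 12 bits per pixel, matching the value
    // ImageFormat.getBitsPerPixel() returns for those formats on Android.
    static int bufferSize(int frameWidth, int frameHeight, int bitsPerPixel) {
        return frameWidth * frameHeight * bitsPerPixel / 8;
    }

    public static void main(String[] args) {
        // a 640x480 NV21 preview frame
        System.out.println(bufferSize(640, 480, 12)); // prints "460800"
    }
}
```

If the buffer is sized with the wrong bpp, addCallbackBuffer silently delivers no frames, so this calculation is worth getting right.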
Then there is the inner class:
private class JavaCameraFrame implements CvCameraViewFrame {
    @Override
    public Mat gray() {
        // return mYuvFrameData.submat(0, mHeight, 0, mWidth);
        /********************************* landscape-to-portrait change **********************************************/
        Core.rotate(mYuvFrameData.submat(0, mHeight, 0, mWidth), portrait_gray, Core.ROTATE_90_CLOCKWISE);
        return portrait_gray;
        /********************************* landscape-to-portrait change **********************************************/
    }

    @Override
    public Mat rgba() {
        if (mPreviewFormat == ImageFormat.NV21)
            Imgproc.cvtColor(mYuvFrameData, mRgba, Imgproc.COLOR_YUV2RGBA_NV21, 4);
        else if (mPreviewFormat == ImageFormat.YV12)
            Imgproc.cvtColor(mYuvFrameData, mRgba, Imgproc.COLOR_YUV2RGB_I420, 4); // COLOR_YUV2RGBA_YV12 produces inverted colors
        else
            throw new IllegalArgumentException("Preview Format can be NV21 or YV12");
        // return mRgba;
        /********************************* landscape-to-portrait change **********************************************/
        Core.rotate(mRgba, portrait_rgba, Core.ROTATE_90_CLOCKWISE);
        // Debug leftover: this per-frame Bitmap conversion is unused and can be removed.
        // Bitmap bitmap = Bitmap.createBitmap(portrait_mWidth, portrait_mHeight, Bitmap.Config.ARGB_8888);
        // Utils.matToBitmap(portrait_rgba, bitmap);
        return portrait_rgba;
        /********************************* landscape-to-portrait change **********************************************/
    }

    public JavaCameraFrame(Mat Yuv420sp, int width, int height) {
        super();
        mWidth = width;
        mHeight = height;
        mYuvFrameData = Yuv420sp;
        mRgba = new Mat();
        /********************************* landscape-to-portrait change **********************************************/
        portrait_mHeight = mWidth;
        portrait_mWidth = mHeight;
        portrait_gray = new Mat(portrait_mHeight, portrait_mWidth, CvType.CV_8UC1);
        portrait_rgba = new Mat(portrait_mHeight, portrait_mWidth, CvType.CV_8UC4);
        /********************************* landscape-to-portrait change **********************************************/
    }

    public void release() {
        mRgba.release();
    }

    private Mat mYuvFrameData;
    private Mat mRgba;
    private int mWidth;
    private int mHeight;
    /********************************* landscape-to-portrait change **********************************************/
    private int portrait_mHeight;
    private int portrait_mWidth;
    private Mat portrait_gray;
    private Mat portrait_rgba;
    /********************************* landscape-to-portrait change **********************************************/
}
5. That about covers it. Now the steps for building armeabi.
Download version 3.4.8 from the OpenCV website; that SDK still includes the armeabi build files,
as shown below.

Again, build "tutorial-2", but before building, configure the ABIs for the .so in Gradle, as shown.

Special note:
Older versions build libopencv_java3, newer versions libopencv_java4, and the Java code in the corresponding OpenCV SDKs cannot be mixed. For example, if you build the .so from 3.4.8, you must also copy the Java code from the 3.4.8 SDK, because the C/C++ code in the .so corresponds one-to-one with the Java code; versions differ, and mixing them crashes at runtime.
The remaining steps are the same as above.
Finally, here is how to control the flashlight.
1. Add the flashlight permission to the Manifest.
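The article does not show the Manifest entries; a minimal sketch (the torch is driven through the Camera API, so CAMERA is required; FLASHLIGHT is traditionally declared alongside it):

```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.FLASHLIGHT" />
<!-- declare the flash as optional so devices without one can still install -->
<uses-feature android:name="android.hardware.camera.flash" android:required="false" />
```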
// Turn the flashlight on
public void turnLightOn() {
    if (mCamera == null) {
        return;
    }
    Camera.Parameters parameters = mCamera.getParameters();
    if (parameters == null) {
        return;
    }
    List<String> flashModes = parameters.getSupportedFlashModes();
    // Check if a camera flash exists
    if (flashModes == null) {
        // Use the screen as a flashlight (next best thing)
        return;
    }
    String flashMode = parameters.getFlashMode();
    Log.i(TAG, "Flash mode: " + flashMode);
    Log.i(TAG, "Flash modes: " + flashModes);
    if (!Camera.Parameters.FLASH_MODE_TORCH.equals(flashMode)) {
        // Turn on the flash
        if (flashModes.contains(Camera.Parameters.FLASH_MODE_TORCH)) {
            parameters.setFlashMode(Camera.Parameters.FLASH_MODE_TORCH);
            mCamera.setParameters(parameters);
        } else {
            Log.e(TAG, "FLASH_MODE_TORCH not supported");
        }
    }
}
// Turn the flashlight off
public void turnLightOff() {
    if (mCamera == null) {
        return;
    }
    Camera.Parameters parameters = mCamera.getParameters();
    if (parameters == null) {
        return;
    }
    List<String> flashModes = parameters.getSupportedFlashModes();
    String flashMode = parameters.getFlashMode();
    // Check if a camera flash exists
    if (flashModes == null) {
        return;
    }
    Log.i(TAG, "Flash mode: " + flashMode);
    Log.i(TAG, "Flash modes: " + flashModes);
    if (!Camera.Parameters.FLASH_MODE_OFF.equals(flashMode)) {
        // Turn off the flash
        if (flashModes.contains(Camera.Parameters.FLASH_MODE_OFF)) {
            parameters.setFlashMode(Camera.Parameters.FLASH_MODE_OFF);
            mCamera.setParameters(parameters);
        } else {
            Log.e(TAG, "FLASH_MODE_OFF not supported");
        }
    }
}