iOS 11 introduces some genuinely futuristic features: AR and machine learning. Having some machine-learning experience, I decided to give the latter a try.
Prerequisites

- Xcode 9
- A Core ML model (Inceptionv3)
Main steps
- Open Xcode and create an empty template project.
- Drag the downloaded `.mlmodel` file into the project, and make sure it shows up under Target → Build Phases → Compile Sources. **Otherwise it cannot be used.**
- To get started quickly (read: lazily), create a UIButton directly in ViewController and wire it to a method that presents the photo album:
```swift
lazy var btn: UIButton = { [unowned self] in
    let btn = UIButton(type: .custom)
    btn.setTitle("select", for: .normal)
    btn.setTitleColor(.black, for: .normal)
    btn.frame.size = CGSize(width: 100, height: 100)
    btn.center = self.view.center
    btn.addTarget(self, action: #selector(handle), for: .touchUpInside)
    return btn
}()

@objc private func handle() {
    showAblum()
}
```
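The snippet above never actually adds the button to the view hierarchy; a minimal `viewDidLoad` sketch, assuming it lives in the same ViewController:

```swift
override func viewDidLoad() {
    super.viewDidLoad()
    // Without this, the lazily created button never appears
    // on screen and handle() can never fire.
    view.addSubview(btn)
}
```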
- Adopt the relevant protocols and implement the (optional) delegate method to obtain the selected image.

Here a custom protocol `canShowAblum` gives every UIViewController a default presentation method:
```swift
protocol canShowAblum: class, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
}

extension canShowAblum where Self: UIViewController {
    func showAblum() {
        let picker = UIImagePickerController()
        picker.delegate = self
        picker.sourceType = .photoLibrary
        picker.allowsEditing = true
        present(picker, animated: true, completion: nil)
    }
}
```
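For `showAblum()` to be callable from ViewController (and for `picker.delegate = self` to type-check), the class has to adopt the protocol; a sketch of the assumed declaration:

```swift
class ViewController: UIViewController, canShowAblum {
    // btn, handle() and the delegate method from the
    // other snippets live here.
}
```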
```swift
extension ViewController {
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
        if let img = info["UIImagePickerControllerOriginalImage"] as? UIImage {
            _ = PictrueLearning(with: img)
        }
        dismiss(animated: true, completion: nil)
    }
}
```
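Because `allowsEditing` is true, the picker also returns a cropped version under the `UIImagePickerControllerEditedImage` key; a variant of the lookup above that prefers it and falls back to the original might look like:

```swift
// Prefer the user-cropped image when editing is enabled.
if let img = (info["UIImagePickerControllerEditedImage"]
           ?? info["UIImagePickerControllerOriginalImage"]) as? UIImage {
    _ = PictrueLearning(with: img)
}
```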
- Pass the image to the trained model and get the result.

The model call is wrapped in the struct `PictrueLearning` (the `VN*` APIs require `import Vision`):
```swift
import Vision

struct PictrueLearning {
    private init() {}

    init(with image: UIImage) {
        self.init()
        train(with: image) //.scale(to: CGSize(width: 299, height: 299)))
    }

    fileprivate func train(with image: UIImage) {
        // Make sure the model initializes successfully.
        // Inceptionv3 is the class Xcode generates from the .mlmodel file.
        guard let img = image.cgImage, let model = try? VNCoreMLModel(for: Inceptionv3().model) else {
            return
        }
        // Create the request; $0 is the finished request, $1 the error.
        let request = VNCoreMLRequest(model: model) {
            guard $1 == nil, let resArr = $0.results, let res = resArr.first as? VNClassificationObservation else { return }
            print(res.identifier, res.confidence)
        }
        DispatchQueue.global().async {
            // Perform the request off the main thread.
            try? VNImageRequestHandler(cgImage: img).perform([request])
        }
    }
}
```
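The completion handler above prints only the single best guess. To see how confident the model really is, a sketch that dumps the top five `VNClassificationObservation` results instead:

```swift
let request = VNCoreMLRequest(model: model) { request, error in
    guard error == nil,
          let observations = request.results as? [VNClassificationObservation] else { return }
    // Results come back sorted by confidence, highest first.
    for obs in observations.prefix(5) {
        print(obs.identifier, obs.confidence)
    }
}
```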
```swift
extension UIImage {
    func scale(to size: CGSize) -> UIImage {
        // Draws into a scale-1.0 context, so size is in pixels.
        UIGraphicsBeginImageContext(size)
        draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        let scaleImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaleImage ?? UIImage()
    }
}
```
Result analysis

1. I tried several pictures, and the overall results were poor. Since the model's internals are opaque — we only know it is a Neural Network Classifier — and many parameters affect a neural network's accuracy, I plan to try again later with a model of my own.
2. The model's input requirement is:

(screenshot omitted: the model description in Xcode shows the expected input as a 299 × 299 color image)

Yet feeding in a picture pre-scaled to 299 × 299 and feeding in the original (sourced from the iPhone simulator's photo library) produce different output confidences. This needs further investigation.
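One likely explanation: Vision scales and crops the input to the model's expected size on its own, and the default strategy is `centerCrop`, which discards the edges of non-square images — so a manually pre-scaled image and the original can reach the network as different pixels. The strategy can be changed on the request; a sketch:

```swift
// .centerCrop is the default; .scaleFill squeezes the whole image
// into 299 × 299 instead of cropping it.
request.imageCropAndScaleOption = .scaleFill
```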