Starting with iOS 14 and macOS 11, Vision gains powerful human body pose detection. It can recognize 19 key points on the body, as shown below:

人體關(guān)鍵點(diǎn).png
Implementation
1. Make a request
The Vision framework provides body pose detection through VNDetectHumanBodyPoseRequest. The code below shows how to detect body key points in a CGImage.
// Get the CGImage on which to perform requests.
guard let cgImage = UIImage(named: "bodypose")?.cgImage else { return }

// Create a new image-request handler.
let requestHandler = VNImageRequestHandler(cgImage: cgImage)

// Create a new request to recognize a human body pose.
let request = VNDetectHumanBodyPoseRequest(completionHandler: bodyPoseHandler)

do {
    // Perform the body pose-detection request.
    try requestHandler.perform([request])
} catch {
    print("Unable to perform the request: \(error).")
}
2. Handle the results
When the request finishes, Vision calls the completion handler, which receives the detection results and any error. If body key points are detected, they are returned as an array of VNHumanBodyPoseObservation. Each observation contains the recognized key points along with a confidence score; the higher the confidence, the more reliable the detection.
func bodyPoseHandler(request: VNRequest, error: Error?) {
    guard let observations =
            request.results as? [VNHumanBodyPoseObservation] else {
        return
    }

    // Process each observation to find the recognized body pose points.
    observations.forEach { processObservation($0) }
}
3. Get the key points
You can look up the coordinates of individual key points by VNHumanBodyPoseObservation.JointName. Note that the points returned by recognizedPoints(_:) are normalized to the range [0, 1] with the origin in the lower-left corner, so in practice they must be converted before use.
func processObservation(_ observation: VNHumanBodyPoseObservation) {
    // Retrieve all torso points.
    guard let recognizedPoints =
            try? observation.recognizedPoints(.torso) else { return }

    // Torso joint names in a clockwise ordering.
    let torsoJointNames: [VNHumanBodyPoseObservation.JointName] = [
        .neck,
        .rightShoulder,
        .rightHip,
        .root,
        .leftHip,
        .leftShoulder
    ]

    // Retrieve the CGPoints containing the normalized X and Y coordinates.
    let imagePoints: [CGPoint] = torsoJointNames.compactMap {
        guard let point = recognizedPoints[$0], point.confidence > 0 else { return nil }

        // Translate the point from normalized-coordinates to image coordinates.
        // imageSize is the pixel size of the source image, defined elsewhere.
        return VNImagePointForNormalizedPoint(point.location,
                                              Int(imageSize.width),
                                              Int(imageSize.height))
    }

    // Draw the points onscreen.
    draw(points: imagePoints)
}
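VNImagePointForNormalizedPoint scales a normalized point into pixel coordinates, but the result is still in Vision's image space, whose origin is the lower-left corner, while UIKit places the origin at the upper-left. A minimal sketch of the remaining flip (the helper name flipToUIKitCoordinates is hypothetical, not part of Vision):

```swift
import Foundation

// Hypothetical helper: convert a point in Vision's image space
// (origin at the lower-left) into UIKit's coordinate space
// (origin at the upper-left). The point is assumed to already be
// in pixel coordinates, e.g. from VNImagePointForNormalizedPoint.
func flipToUIKitCoordinates(_ point: CGPoint, imageHeight: CGFloat) -> CGPoint {
    // Only the y-axis needs flipping; x is unchanged.
    CGPoint(x: point.x, y: imageHeight - point.y)
}
```

For example, in a 200-pixel-tall image, a Vision point 30 pixels above the bottom edge maps to y = 170 in UIKit coordinates.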
Going further
Besides Vision's VNDetectHumanBodyPoseRequest, body pose detection can also be implemented with Core ML. See Apple's official sample, Detecting Human Body Poses in an Image, which runs on iOS 13 and later.