1. The Backing Image (Bitmap)
I had previously read the article 內(nèi)存惡鬼drawRect ("drawRect, the memory demon") and verified its claims myself. They hold up, though it took me a long time to really understand why.
Every UIView instance has a default backing layer, which the UIView creates and manages. It is actually this CALayer that gets displayed on screen; UIView is merely a wrapper around it that implements the CALayer delegate and provides the concrete event-handling and interaction behavior.
CALayer itself is just an ordinary class, and it cannot render to the screen directly either, because everything you see on screen is ultimately a series of bitmap images. So why can we see a CALayer's content? Because CALayer has a contents property. contents is typed as id, so you can assign it any object, but the layer only displays correctly when you assign a CGImage. contents is also known as the backing image; besides assigning a CGImage to it, we can also draw into it directly. If UIView detects that -drawRect: is implemented, it allocates a backing image for the view, whose pixel dimensions equal the view's size multiplied by contentsScale.
Why is it designed this way?
My guess is that it makes display more efficient. The article iOS 保持界面流暢的技巧 ("Tips for Keeping iOS Interfaces Smooth") points out that image decoding is CPU-intensive; to keep the UI smooth, images should be decoded ahead of time, i.e. the displayed image should be created directly from an already-decoded backing bitmap.
When you create an image with UIImage or the CGImageSource functions, the image data is not decoded immediately. Only when the image is set on a UIImageView or assigned to CALayer.contents, and the CALayer is about to be committed to the GPU, does the data in the CGImage actually get decoded. This step happens on the main thread and cannot be avoided. To work around this mechanism, the common technique is to first draw the image into a CGBitmapContext on a background thread, then create the image directly from that bitmap. Popular networking image libraries all ship with this feature.
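A minimal sketch of this force-decode trick (the helper name is mine, not from any particular library): draw the CGImage into a bitmap context, which triggers the decode, then build a new UIImage from the decoded bitmap. This would typically be called on a background queue.

```objectivec
#import <UIKit/UIKit.h>

// Hypothetical helper: force-decode an image off the main thread by drawing it
// into a CGBitmapContext, so the main thread receives an already-decoded bitmap.
static UIImage *ForceDecodedImage(UIImage *image) {
    CGImageRef imageRef = image.CGImage;
    if (!imageRef) return image;
    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0,
        colorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    if (!context) return image;
    // Drawing here is what actually pays the (expensive) decode cost.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef decodedRef = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    UIImage *decoded = [UIImage imageWithCGImage:decodedRef
                                           scale:image.scale
                                     orientation:image.imageOrientation];
    CGImageRelease(decodedRef);
    return decoded;
}
```

Call it inside a dispatch_async to a background queue, then assign the result to the image view back on the main queue.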
2. UIGraphicsBeginImageContext
The article iOS 微信內(nèi)存監(jiān)控 ("iOS WeChat Memory Monitoring") mentions using this method when processing large images:
- (UIImage *)scaleImage:(UIImage *)image newSize:(CGSize)newSize {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
But when processing high-resolution images this way, OOM crashes are common: -[UIImage drawInRect:] first decodes the image, then generates a bitmap at the original resolution, which is very memory-hungry. The fix is to drop down to the lower-level ImageIO API, which avoids producing that intermediate bitmap:
+ (UIImage *)scaleImageWithData:(NSData *)data
                       withSize:(CGSize)size
                          scale:(CGFloat)scale
                    orientation:(UIImageOrientation)orientation {
    CGFloat maxPixelSize = MAX(size.width, size.height);
    CGImageSourceRef sourceRef = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
    NSDictionary *options = @{(__bridge id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
                              (__bridge id)kCGImageSourceThumbnailMaxPixelSize : @(maxPixelSize)};
    CGImageRef imageRef = CGImageSourceCreateThumbnailAtIndex(sourceRef, 0, (__bridge CFDictionaryRef)options);
    UIImage *resultImage = [UIImage imageWithCGImage:imageRef scale:scale orientation:orientation];
    CGImageRelease(imageRef);
    CFRelease(sourceRef);
    return resultImage;
}
// Or, alternatively:
- (UIImage *)imageByCropToRect:(CGRect)rect {
    rect.origin.x *= self.scale;
    rect.origin.y *= self.scale;
    rect.size.width *= self.scale;
    rect.size.height *= self.scale;
    if (rect.size.width <= 0 || rect.size.height <= 0) return nil;
    CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, rect);
    UIImage *image = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return image;
}
The downside of both methods is that the resulting image is only decoded when it is actually displayed after being committed, which shifts the decode cost onto the CPU at display time.
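One hedged way to get both the ImageIO downscaling and an eager decode (so the display-time cost goes away) is to pass kCGImageSourceShouldCacheImmediately when creating the thumbnail. The helper name here is mine; the keys are real ImageIO options.

```objectivec
#import <ImageIO/ImageIO.h>
#import <UIKit/UIKit.h>

// Sketch: ask ImageIO to decode the downscaled thumbnail immediately, so the
// cost is paid here (ideally on a background queue) rather than at display time.
static UIImage *DecodedThumbnail(NSData *data, CGFloat maxPixelSize) {
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
    if (!source) return nil;
    NSDictionary *options = @{
        (__bridge id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (__bridge id)kCGImageSourceThumbnailMaxPixelSize : @(maxPixelSize),
        (__bridge id)kCGImageSourceShouldCacheImmediately : @YES, // eager decode
    };
    CGImageRef imageRef = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
    CFRelease(source);
    if (!imageRef) return nil;
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return image;
}
```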
The following is excerpted from the SDK headers:
// UIImage context
// The following methods will only return a 8-bit per channel context in the DeviceRGB color space.
// Any new bitmap drawing code is encouraged to use UIGraphicsImageRenderer in lieu of this API.
UIKIT_EXTERN void UIGraphicsBeginImageContext(CGSize size);
UIKIT_EXTERN void UIGraphicsBeginImageContextWithOptions(CGSize size, BOOL opaque, CGFloat scale) NS_AVAILABLE_IOS(4_0);
UIKIT_EXTERN UIImage* __nullable UIGraphicsGetImageFromCurrentImageContext(void);
UIKIT_EXTERN void UIGraphicsEndImageContext(void);
/* Create a bitmap context. The context draws into a bitmap which is `width'
pixels wide and `height' pixels high. The number of components for each
pixel is specified by `space', which may also specify a destination color
profile. The number of bits for each component of a pixel is specified by
`bitsPerComponent'. The number of bytes per pixel is equal to
`(bitsPerComponent * number of components + 7)/8'. Each row of the bitmap
consists of `bytesPerRow' bytes, which must be at least `width * bytes
per pixel' bytes; in addition, `bytesPerRow' must be an integer multiple
of the number of bytes per pixel. `data', if non-NULL, points to a block
of memory at least `bytesPerRow * height' bytes. If `data' is NULL, the
data for context is allocated automatically and freed when the context is
deallocated. `bitmapInfo' specifies whether the bitmap should contain an
alpha channel and how it's to be generated, along with whether the
components are floating-point or integer. */
CG_EXTERN CGContextRef __nullable CGBitmapContextCreate(void * __nullable data,
size_t width, size_t height, size_t bitsPerComponent, size_t bytesPerRow,
CGColorSpaceRef cg_nullable space, uint32_t bitmapInfo)
CG_AVAILABLE_STARTING(__MAC_10_0, __IPHONE_2_0);
UIGraphicsBeginImageContext: in testing, creating a context with a 5000 × 5000 size makes memory surge by roughly 5000 × 5000 × scale² × 4 bytes.
CGBitmapContextCreate: the same test with a 5000 × 5000 size also makes memory surge, by roughly 5000 × 5000 × bytesPerPixel bytes (4 bytes per pixel for an 8-bit-per-component RGBA context). Note that no scale factor applies here, because width and height are already specified in pixels.
If the goal of a screenshot is to save it to the photo album rather than to display it, prefer the lower-level alternatives to UIGraphicsBeginImageContext; if it is for display, UIGraphicsBeginImageContext is fine, but every UIGraphicsBeginImageContext must be paired with a matching UIGraphicsEndImageContext.
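The header comment quoted earlier also recommends UIGraphicsImageRenderer (iOS 10+) for new bitmap drawing code. A sketch of the earlier scaling method rewritten on top of it; the renderer owns the context's lifetime, so there is no begin/end pair to forget:

```objectivec
#import <UIKit/UIKit.h>

// Sketch: scaling via UIGraphicsImageRenderer instead of the begin/end API.
static UIImage *ScaledImage(UIImage *image, CGSize newSize) {
    UIGraphicsImageRenderer *renderer =
        [[UIGraphicsImageRenderer alloc] initWithSize:newSize];
    return [renderer imageWithActions:^(UIGraphicsImageRendererContext *context) {
        [image drawInRect:(CGRect){CGPointZero, newSize}];
    }];
}
```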
3. Screenshot Approaches
A commonly used method looks like this:
+ (UIImage *)snapshottingWithView:(UIView *)inputView {
    UIGraphicsBeginImageContextWithOptions(inputView.frame.size, inputView.opaque, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [inputView.layer renderInContext:context];
    UIImage *targetImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return targetImage;
}
This made me curious how others implement very long (full-page) screenshots, because with high-resolution content, applying the method above directly can easily lead to OOM under certain conditions.
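A hedged sketch of one common long-screenshot strategy (my own reconstruction, not from a specific library): render a scroll view page by page into a single context sized to the full contentSize, so only one large bitmap is ever allocated. For truly huge content even that context can OOM, so production implementations often write tiles straight to disk instead.

```objectivec
#import <UIKit/UIKit.h>

// Sketch: snapshot a scroll view's full content by scrolling each page into
// view and drawing it at its slot in a contentSize-sized context.
static UIImage *SnapshotLongScrollView(UIScrollView *scrollView) {
    CGSize contentSize = scrollView.contentSize;
    CGPoint savedOffset = scrollView.contentOffset;
    UIGraphicsBeginImageContextWithOptions(contentSize, scrollView.opaque, 0);
    CGFloat pageHeight = scrollView.bounds.size.height;
    for (CGFloat y = 0; y < contentSize.height; y += pageHeight) {
        // afterScreenUpdates:YES waits for layout, giving reusable cells a
        // chance to load their content before the tile is captured.
        scrollView.contentOffset = CGPointMake(0, y);
        [scrollView drawViewHierarchyInRect:CGRectMake(0, y, contentSize.width, pageHeight)
                         afterScreenUpdates:YES];
    }
    scrollView.contentOffset = savedOffset;
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
```

In practice asynchronous cell loading may need an extra run-loop turn per page; this sketch ignores that for brevity.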
4. SDWebImage, a Popular Image Framework
(version 4.3.0)
/**
* By default, images are decoded respecting their original size. On iOS, this flag will scale down the
* images to a size compatible with the constrained memory of devices.
* If `SDWebImageProgressiveDownload` flag is set the scale down is deactivated.
*/
SDWebImageScaleDownLargeImages = 1 << 12,
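For reference, opting in to this flag from client code might look like the following sketch (the function, image view, and URL are placeholders of mine):

```objectivec
#import <SDWebImage/UIImageView+WebCache.h>

// Sketch: ask SDWebImage 4.x to scale down large images during decoding.
static void LoadLargeImage(UIImageView *imageView, NSURL *url) {
    [imageView sd_setImageWithURL:url
                 placeholderImage:nil
                          options:SDWebImageScaleDownLargeImages];
}
```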
- (UIImage *)incrementallyDecodedImageWithData:(NSData *)data finished:(BOOL)finished {
    if (!_imageSource) {
        _imageSource = CGImageSourceCreateIncremental(NULL);
    }
    UIImage *image;
    // The following code is from http://www.cocoaintheshell.com/2011/05/progressive-images-download-imageio/
    // Thanks to the author @Nyx0uf
    // Update the data source, we must pass ALL the data, not just the new bytes
    CGImageSourceUpdateData(_imageSource, (__bridge CFDataRef)data, finished);
    if (_width + _height == 0) {
        CFDictionaryRef properties = CGImageSourceCopyPropertiesAtIndex(_imageSource, 0, NULL);
        if (properties) {
            NSInteger orientationValue = 1;
            CFTypeRef val = CFDictionaryGetValue(properties, kCGImagePropertyPixelHeight);
            if (val) CFNumberGetValue(val, kCFNumberLongType, &_height);
            val = CFDictionaryGetValue(properties, kCGImagePropertyPixelWidth);
            if (val) CFNumberGetValue(val, kCFNumberLongType, &_width);
            val = CFDictionaryGetValue(properties, kCGImagePropertyOrientation);
            if (val) CFNumberGetValue(val, kCFNumberNSIntegerType, &orientationValue);
            CFRelease(properties);
            // NOTE (mine): this is the spot where one could add custom logic to
            // inspect the image and repeatedly halve _width and _height until they
            // fall within a reasonable range. Unverified; a more elegant approach
            // would be welcome.
            // When we draw to Core Graphics, we lose orientation information,
            // which means the image below born of initWithCGIImage will be
            // oriented incorrectly sometimes. (Unlike the image born of initWithData
            // in didCompleteWithError.) So save it here and pass it on later.
#if SD_UIKIT || SD_WATCH
            _orientation = [SDWebImageCoderHelper imageOrientationFromEXIFOrientation:orientationValue];
#endif
        }
    }
    if (_width + _height > 0) {
        // Create the image
        CGImageRef partialImageRef = CGImageSourceCreateImageAtIndex(_imageSource, 0, NULL);
#if SD_UIKIT || SD_WATCH
        // Workaround for iOS anamorphic image
        if (partialImageRef) {
            const size_t partialHeight = CGImageGetHeight(partialImageRef);
            CGColorSpaceRef colorSpace = SDCGColorSpaceGetDeviceRGB();
            CGContextRef bmContext = CGBitmapContextCreate(NULL, _width, _height, 8, 0, colorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
            if (bmContext) {
                CGContextDrawImage(bmContext, (CGRect){.origin.x = 0.0f, .origin.y = 0.0f, .size.width = _width, .size.height = partialHeight}, partialImageRef);
                CGImageRelease(partialImageRef);
                partialImageRef = CGBitmapContextCreateImage(bmContext);
                CGContextRelease(bmContext);
            } else {
                CGImageRelease(partialImageRef);
                partialImageRef = nil;
            }
        }
#endif
        if (partialImageRef) {
#if SD_UIKIT || SD_WATCH
            image = [[UIImage alloc] initWithCGImage:partialImageRef scale:1 orientation:_orientation];
#elif SD_MAC
            image = [[UIImage alloc] initWithCGImage:partialImageRef size:NSZeroSize];
#endif
            CGImageRelease(partialImageRef);
        }
    }
    if (finished) {
        if (_imageSource) {
            CFRelease(_imageSource);
            _imageSource = NULL;
        }
    }
    return image;
}
- (nullable UIImage *)sd_decompressedImageWithImage:(nullable UIImage *)image {
    if (![[self class] shouldDecodeImage:image]) {
        return image;
    }
    // autorelease the bitmap context and all vars to help system to free memory when there are memory warning.
    // on iOS7, do not forget to call [[SDImageCache sharedImageCache] clearMemory];
    @autoreleasepool {
        CGImageRef imageRef = image.CGImage;
        CGColorSpaceRef colorspaceRef = [[self class] colorSpaceForImageRef:imageRef];
        size_t width = CGImageGetWidth(imageRef);
        size_t height = CGImageGetHeight(imageRef);
        // kCGImageAlphaNone is not supported in CGBitmapContextCreate.
        // Since the original image here has no alpha info, use kCGImageAlphaNoneSkipLast
        // to create bitmap graphics contexts without alpha info.
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     width,
                                                     height,
                                                     kBitsPerComponent,
                                                     0,
                                                     colorspaceRef,
                                                     kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast);
        if (context == NULL) {
            return image;
        }
        // Draw the image into the context and retrieve the new bitmap image without alpha
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
        CGImageRef imageRefWithoutAlpha = CGBitmapContextCreateImage(context);
        UIImage *imageWithoutAlpha = [[UIImage alloc] initWithCGImage:imageRefWithoutAlpha scale:image.scale orientation:image.imageOrientation];
        CGContextRelease(context);
        CGImageRelease(imageRefWithoutAlpha);
        return imageWithoutAlpha;
    }
}
This decode path never checks the image dimensions, so if the downloaded image is large and high-resolution (for example a tens-of-megabytes photo straight from a digital camera), memory spikes and the app can be killed with an OOM.
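A hedged sketch of the kind of guard one could bolt on (not part of SDWebImage 4.3.0; the function name and the 60 MB budget are my own placeholders): estimate the decoded bitmap's size before decompressing, and skip or downscale when it exceeds a byte budget.

```objectivec
#import <UIKit/UIKit.h>

// Hypothetical guard: refuse full decompression when the decoded RGBA bitmap
// would exceed a byte budget (e.g. 60 MB); callers can fall back to the
// ImageIO thumbnail path shown earlier instead.
static BOOL ShouldDecompressImage(UIImage *image, size_t byteLimit) {
    CGImageRef imageRef = image.CGImage;
    if (!imageRef) return NO;
    // 4 bytes per pixel for an 8-bit-per-component RGBA context.
    size_t bytes = CGImageGetWidth(imageRef) * CGImageGetHeight(imageRef) * 4;
    return bytes <= byteLimit;
}
```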