Android Performance Optimization in Practice: UI Jank
Today was an odd day: three people came to me with UI jank problems and asked whether I could help. Performance optimization touches on a lot of topics, so I couldn't give a thorough answer on the spot. As it happens, I ran into a jank issue myself while working on a feature not long ago, so tonight I'm writing this article on jank optimization in the hope that it helps. Let's start by looking at what the jank looks like:

Jank while refreshing data
1. Finding the Cause of the Jank
From the behaviour above, it looks like the main thread is doing time-consuming work: normal scrolling is smooth, and the jank only appears when the data is refreshed. What exactly causes jank was covered in detail in the custom View articles, so I won't repeat it here. We suspect a time-consuming operation, but we can't be 100% sure, and we don't know which method is responsible either, so we turn to the usual tools for analysis. Let's open Android Device Monitor.

Opening Android Device Monitor

Locating the time-consuming method
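If you prefer to start the trace from code rather than from the Android Device Monitor UI, a minimal sketch is shown below. The method name `refreshData` and the helper `loadAndBlurHeaderImage` are hypothetical stand-ins for the code under suspicion; only `Debug.startMethodTracing`/`stopMethodTracing` are real framework APIs.

```java
import android.os.Debug;

// Wrap the suspected refresh path in a method trace. The resulting .trace file
// can be pulled from the device and opened in Android Device Monitor or the
// Android Studio profiler to see exactly which calls eat the time.
public void refreshData() {
    Debug.startMethodTracing("refresh_trace"); // trace file name is arbitrary
    try {
        loadAndBlurHeaderImage(); // hypothetical: the work we suspect is slow
    } finally {
        Debug.stopMethodTracing(); // always stop, even if the refresh throws
    }
}
```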
2. Switching Threads with RxJava
We found that the time-consuming Gaussian blur was what froze the UI. So let's move the Gaussian blur onto a background thread and switch back to the main thread once it's done; here we use RxJava to do the switching.
```java
Observable.just(resource.getBitmap())
        .map(bitmap -> {
            // Gaussian blur on a background thread
            Bitmap blurBitmap = ImageUtil.doBlur(bitmap, 100, false);
            blurBitmapCache.put(path, blurBitmap);
            return blurBitmap;
        })
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(blurBitmap -> {
            if (blurBitmap != null) {
                recommendBgIv.setImageBitmap(blurBitmap);
            }
        });
```
For reactive programming concepts and how RxJava works under the hood, see these articles:
- Third-Party Open Source Libraries: RxJava - Basic Usage and Source Code Analysis
- Third-Party Open Source Libraries: RxJava - Writing Your Own Event Transformation
- Third-Party Open Source Libraries: RxJava - Practical Android Development Scenarios
3. Analyzing the Gaussian Blur Algorithm
Moving the heavy work onto a background thread does fix the jank, but it treats the symptom rather than the cause: the image is still processed painfully slowly, and memory stays high, which can occasionally lead to an out-of-memory crash. So let's analyze how the Gaussian blur algorithm is actually implemented:

Images from Wikipedia
Look at the images above: what operation turns the first image into the two below it? It's simply blurring. So how do we make an image blurry? Let's walk through how the Gaussian blur algorithm works. Two more images first:

Before processing

After processing
"Blurring" can be understood as every pixel taking the average of its neighbouring pixels. In the image above, 2 is the centre point and the surrounding points are all 1. When the centre point takes the average of its neighbours, it becomes 1. Numerically this is a kind of "smoothing"; visually it produces a "blur", with the centre point losing its detail.
To get different blur strengths, Gaussian blur introduces the notion of weights. The images above are, respectively, the original, a blur with a 3-pixel radius, and a blur with a 10-pixel radius: the larger the blur radius, the blurrier the image, i.e. the smoother the values. The next question is: if every point takes the average of its neighbours, how should the weights be distributed? A simple uniform average is clearly not ideal, because images are continuous; nearby points are more closely related than distant ones. A weighted average therefore makes more sense, with nearer points weighted more heavily and farther points less. The normal (Gaussian) distribution fits this requirement perfectly. But an image is two-dimensional, so we need to derive the two-dimensional Gaussian from the one-dimensional one:

One-dimensional Gaussian (normal distribution) function

Two-dimensional Gaussian (normal distribution) function
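Written out, these should correspond to the standard forms below (assuming zero mean and standard deviation σ; the 2D Gaussian is simply the product of two independent 1D Gaussians):

```latex
% One-dimensional Gaussian (zero mean, standard deviation sigma)
G(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{x^{2}}{2\sigma^{2}}}

% Two-dimensional Gaussian: product of two independent one-dimensional Gaussians
G(x, y) = \frac{1}{2\pi\sigma^{2}}\, e^{-\frac{x^{2} + y^{2}}{2\sigma^{2}}}
```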

Weight calculation
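To make the weight step concrete, here is a minimal sketch (mine, not the article's original code) that builds and normalizes a (2r+1)×(2r+1) Gaussian kernel of weights:

```java
/** Minimal sketch: build a normalized (2r + 1) x (2r + 1) Gaussian weight kernel. */
static double[][] gaussianKernel(int radius, double sigma) {
    int size = 2 * radius + 1;
    double[][] kernel = new double[size][size];
    double sum = 0;
    for (int y = -radius; y <= radius; y++) {
        for (int x = -radius; x <= radius; x++) {
            // 2D Gaussian; the constant factor is omitted because we normalize below
            double w = Math.exp(-(x * x + y * y) / (2 * sigma * sigma));
            kernel[y + radius][x + radius] = w;
            sum += w;
        }
    }
    for (int y = 0; y < size; y++) {
        for (int x = 0; x < size; x++) {
            kernel[y][x] /= sum; // weights now sum to 1
        }
    }
    return kernel;
}
```

Convolving a W×H image with such a kernel costs on the order of W·H·(2r+1)² operations, which is why the implementation below takes a faster, stack-blur style approach rather than applying the kernel directly; the idea of distance-based weights is the same.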
```java
if (radius < 1) { // blur radius less than 1
    return (null);
}
int w = bitmap.getWidth();
int h = bitmap.getHeight();
// Get the image's pixel array via getPixels
int[] pix = new int[w * h];
bitmap.getPixels(pix, 0, w, 0, 0, w, h);
int wm = w - 1;
int hm = h - 1;
int wh = w * h;
int div = radius + radius + 1;
int r[] = new int[wh];
int g[] = new int[wh];
int b[] = new int[wh];
int rsum, gsum, bsum, x, y, i, p, yp, yi, yw;
int vmin[] = new int[Math.max(w, h)];
int divsum = (div + 1) >> 1;
divsum *= divsum;
int dv[] = new int[256 * divsum];
for (i = 0; i < 256 * divsum; i++) {
    dv[i] = (i / divsum);
}
yw = yi = 0;
int[][] stack = new int[div][3];
int stackpointer;
int stackstart;
int[] sir;
int rbs;
int r1 = radius + 1;
int routsum, goutsum, boutsum;
int rinsum, ginsum, binsum;
// Loop over the rows
for (y = 0; y < h; y++) {
    rinsum = ginsum = binsum = routsum = goutsum = boutsum = rsum = gsum = bsum = 0;
    // Handle the radius window
    for (i = -radius; i <= radius; i++) {
        p = pix[yi + Math.min(wm, Math.max(i, 0))];
        sir = stack[i + radius];
        // Extract the RGB channels
        sir[0] = (p & 0xff0000) >> 16;
        sir[1] = (p & 0x00ff00) >> 8;
        sir[2] = (p & 0x0000ff);
        rbs = r1 - Math.abs(i);
        rsum += sir[0] * rbs;
        gsum += sir[1] * rbs;
        bsum += sir[2] * rbs;
        if (i > 0) {
            rinsum += sir[0];
            ginsum += sir[1];
            binsum += sir[2];
        } else {
            routsum += sir[0];
            goutsum += sir[1];
            boutsum += sir[2];
        }
    }
    stackpointer = radius;
    // Loop over each column
    for (x = 0; x < w; x++) {
        r[yi] = dv[rsum];
        g[yi] = dv[gsum];
        b[yi] = dv[bsum];
        rsum -= routsum;
        gsum -= goutsum;
        bsum -= boutsum;
        stackstart = stackpointer - radius + div;
        sir = stack[stackstart % div];
        routsum -= sir[0];
        goutsum -= sir[1];
        boutsum -= sir[2];
        if (y == 0) {
            vmin[x] = Math.min(x + radius + 1, wm);
        }
        p = pix[yw + vmin[x]];
        sir[0] = (p & 0xff0000) >> 16;
        sir[1] = (p & 0x00ff00) >> 8;
        sir[2] = (p & 0x0000ff);
        rinsum += sir[0];
        ginsum += sir[1];
        binsum += sir[2];
        rsum += rinsum;
        gsum += ginsum;
        bsum += binsum;
        stackpointer = (stackpointer + 1) % div;
        sir = stack[(stackpointer) % div];
        routsum += sir[0];
        goutsum += sir[1];
        boutsum += sir[2];
        rinsum -= sir[0];
        ginsum -= sir[1];
        binsum -= sir[2];
        yi++;
    }
    yw += w;
}
for (x = 0; x < w; x++) {
    // Similar to the code above
    ......
```
Some readers may find the formulas and code above hard to follow, so let's put it plainly. First, the bigger the image, the more pixels it has, and the more work the Gaussian blur has to do. Second, the bigger the radius, the blurrier the result and the more expensive the weight calculation becomes. So we can attack the problem from either side: shrink the image's width and height, or shrink the radius. If the radius is too small the blur looks poor, so the better option is to work on the width and height. With that in mind, let's look at Glide's source code:
```java
private Bitmap decodeFromWrappedStreams(InputStream is,
    BitmapFactory.Options options, DownsampleStrategy downsampleStrategy,
    DecodeFormat decodeFormat, boolean isHardwareConfigAllowed, int requestedWidth,
    int requestedHeight, boolean fixBitmapToRequestedDimensions,
    DecodeCallbacks callbacks) throws IOException {
  long startTime = LogTime.getLogTime();

  int[] sourceDimensions = getDimensions(is, options, callbacks, bitmapPool);
  int sourceWidth = sourceDimensions[0];
  int sourceHeight = sourceDimensions[1];
  String sourceMimeType = options.outMimeType;

  // If we failed to obtain the image dimensions, we may end up with an incorrectly sized Bitmap,
  // so we want to use a mutable Bitmap type. One way this can happen is if the image header is so
  // large (10mb+) that our attempt to use inJustDecodeBounds fails and we're forced to decode the
  // full size image.
  if (sourceWidth == -1 || sourceHeight == -1) {
    isHardwareConfigAllowed = false;
  }

  int orientation = ImageHeaderParserUtils.getOrientation(parsers, is, byteArrayPool);
  int degreesToRotate = TransformationUtils.getExifOrientationDegrees(orientation);
  boolean isExifOrientationRequired = TransformationUtils.isExifOrientationRequired(orientation);

  // The key is these two lines: if no target size was requested, or the image size
  // could not be read, the original full-size image is decoded
  int targetWidth = requestedWidth == Target.SIZE_ORIGINAL ? sourceWidth : requestedWidth;
  int targetHeight = requestedHeight == Target.SIZE_ORIGINAL ? sourceHeight : requestedHeight;

  ImageType imageType = ImageHeaderParserUtils.getType(parsers, is, byteArrayPool);

  // Calculate the downsampling ratio
  calculateScaling(
      imageType,
      is,
      callbacks,
      bitmapPool,
      downsampleStrategy,
      degreesToRotate,
      sourceWidth,
      sourceHeight,
      targetWidth,
      targetHeight,
      options);
  calculateConfig(
      is,
      decodeFormat,
      isHardwareConfigAllowed,
      isExifOrientationRequired,
      options,
      targetWidth,
      targetHeight);

  boolean isKitKatOrGreater = Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT;
  // Prior to KitKat, the inBitmap size must exactly match the size of the bitmap we're decoding.
  if ((options.inSampleSize == 1 || isKitKatOrGreater) && shouldUsePool(imageType)) {
    int expectedWidth;
    int expectedHeight;
    if (sourceWidth >= 0 && sourceHeight >= 0
        && fixBitmapToRequestedDimensions && isKitKatOrGreater) {
      expectedWidth = targetWidth;
      expectedHeight = targetHeight;
    } else {
      float densityMultiplier = isScaling(options)
          ? (float) options.inTargetDensity / options.inDensity : 1f;
      int sampleSize = options.inSampleSize;
      int downsampledWidth = (int) Math.ceil(sourceWidth / (float) sampleSize);
      int downsampledHeight = (int) Math.ceil(sourceHeight / (float) sampleSize);
      expectedWidth = Math.round(downsampledWidth * densityMultiplier);
      expectedHeight = Math.round(downsampledHeight * densityMultiplier);

      if (Log.isLoggable(TAG, Log.VERBOSE)) {
        Log.v(TAG, "Calculated target [" + expectedWidth + "x" + expectedHeight + "] for source"
            + " [" + sourceWidth + "x" + sourceHeight + "]"
            + ", sampleSize: " + sampleSize
            + ", targetDensity: " + options.inTargetDensity
            + ", density: " + options.inDensity
            + ", density multiplier: " + densityMultiplier);
      }
    }
    // If this isn't an image, or BitmapFactory was unable to parse the size, width and height
    // will be -1 here.
    if (expectedWidth > 0 && expectedHeight > 0) {
      setInBitmap(options, bitmapPool, expectedWidth, expectedHeight);
    }
  }
  // Decode the Bitmap from the stream is, using options
  Bitmap downsampled = decodeStream(is, options, callbacks, bitmapPool);
  callbacks.onDecodeComplete(bitmapPool, downsampled);

  if (Log.isLoggable(TAG, Log.VERBOSE)) {
    logDecode(sourceWidth, sourceHeight, sourceMimeType, options, downsampled,
        requestedWidth, requestedHeight, startTime);
  }

  Bitmap rotated = null;
  if (downsampled != null) {
    // If we scaled, the Bitmap density will be our inTargetDensity. Here we correct it back to
    // the expected density dpi.
    downsampled.setDensity(displayMetrics.densityDpi);

    rotated = TransformationUtils.rotateImageExif(bitmapPool, downsampled, orientation);
    if (!downsampled.equals(rotated)) {
      bitmapPool.put(downsampled);
    }
  }

  return rotated;
}
```
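The takeaway is that when we request a specific size, Glide downsamples the source while decoding it, so the bitmap we hand to the blur is already small. A minimal sketch of what such a request could look like follows; it assumes a recent Glide 4.x, and the quarter-screen target size and the `blurAndShow` helper are illustrative assumptions, not the article's original code:

```java
// Hypothetical usage (Glide 4.x): request a bitmap at roughly a quarter of the
// screen size so the blur has far fewer pixels to process.
int targetW = getResources().getDisplayMetrics().widthPixels / 4;
int targetH = getResources().getDisplayMetrics().heightPixels / 4;

Glide.with(this)
        .asBitmap()
        .load(url)
        .override(targetW, targetH) // triggers the downsampling path shown above
        .into(new CustomTarget<Bitmap>(targetW, targetH) {
            @Override
            public void onResourceReady(@NonNull Bitmap small,
                                        @Nullable Transition<? super Bitmap> transition) {
                // Hand the already-downsampled bitmap to the RxJava blur + cache
                // pipeline from section 2 instead of blurring the full-size image.
                blurAndShow(small); // hypothetical helper
            }

            @Override
            public void onLoadCleared(@Nullable Drawable placeholder) { }
        });
```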
4. LruCache Caching
Finally, there is more we can do: skip the refresh entirely when the data hasn't changed, and use an LruCache so that an already-blurred image is fetched straight from the cache. One word of caution: before using a class, it pays to understand its implementation. I once saw a colleague write it like this:
```java
/**
 * Size of the Gaussian blur cache: 4 MB
 */
private static final int BLUR_CACHE_SIZE = 4 * 1024 * 1024;

/**
 * Gaussian blur cache, to avoid flicker when refreshing
 */
private LruCache<String, Bitmap> blurBitmapCache = new LruCache<String, Bitmap>(BLUR_CACHE_SIZE);

// Pseudocode
......
// If it is cached, set it directly
Bitmap blurBitmap = blurBitmapCache.get(item.userResp.headPortraitUrl);
if (blurBitmap != null) {
    recommendBgIv.setImageBitmap(blurBitmap);
    return;
}
// Otherwise fetch from the backend, apply the Gaussian blur, then cache it
...
```
Written this way there are two problems. The first is that the whole app can run out of memory while the cache keeps happily accepting data: by default LruCache's sizeOf() counts every entry as 1, so the 4 * 1024 * 1024 passed to the constructor means roughly four million entries, not 4 MB of bitmaps. The second is that LruCache allows fairly fine-grained control, but the maximum size is a trade-off: set it too large and memory is wasted, set it too small and images are constantly evicted. The first problem solves itself once you understand the internal implementation (see the sketch below); the tricky part is deciding how big the cache should be. If we can't come up with a good answer ourselves, we can look at how Glide's source code does it.
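A minimal sketch of the fix for the first problem; the 1/8-of-max-heap sizing is a common convention from the Android documentation, not something mandated by the article:

```java
// Measure the cache in kilobytes and size it at 1/8 of the app's max heap.
int maxMemoryKb = (int) (Runtime.getRuntime().maxMemory() / 1024);
int cacheSizeKb = maxMemoryKb / 8;

LruCache<String, Bitmap> blurBitmapCache = new LruCache<String, Bitmap>(cacheSizeKb) {
    @Override
    protected int sizeOf(String key, Bitmap value) {
        // Without this override every entry counts as 1 and the "4 MB" limit is meaningless.
        return value.getByteCount() / 1024;
    }
};
```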
```java
public Builder(Context context) {
  this.context = context;
  activityManager = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
  screenDimensions =
      new DisplayMetricsScreenDimensions(context.getResources().getDisplayMetrics());

  // On Android O+ Bitmaps are allocated natively, ART is much more efficient at managing
  // garbage and we rely heavily on HARDWARE Bitmaps, making Bitmap re-use much less important.
  // We prefer to preserve RAM on these devices and take the small performance hit of not
  // re-using Bitmaps and textures when loading very small images or generating thumbnails.
  if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O && isLowMemoryDevice(activityManager)) {
    bitmapPoolScreens = 0;
  }
}

// Package private to avoid PMD warning.
MemorySizeCalculator(MemorySizeCalculator.Builder builder) {
  this.context = builder.context;

  arrayPoolSize =
      isLowMemoryDevice(builder.activityManager)
          ? builder.arrayPoolSizeBytes / LOW_MEMORY_BYTE_ARRAY_POOL_DIVISOR
          : builder.arrayPoolSizeBytes;
  int maxSize =
      getMaxSize(
          builder.activityManager, builder.maxSizeMultiplier, builder.lowMemoryMaxSizeMultiplier);

  int widthPixels = builder.screenDimensions.getWidthPixels();
  int heightPixels = builder.screenDimensions.getHeightPixels();
  int screenSize = widthPixels * heightPixels * BYTES_PER_ARGB_8888_PIXEL;

  int targetBitmapPoolSize = Math.round(screenSize * builder.bitmapPoolScreens);
  int targetMemoryCacheSize = Math.round(screenSize * builder.memoryCacheScreens);
  int availableSize = maxSize - arrayPoolSize;

  if (targetMemoryCacheSize + targetBitmapPoolSize <= availableSize) {
    memoryCacheSize = targetMemoryCacheSize;
    bitmapPoolSize = targetBitmapPoolSize;
  } else {
    float part = availableSize / (builder.bitmapPoolScreens + builder.memoryCacheScreens);
    memoryCacheSize = Math.round(part * builder.memoryCacheScreens);
    bitmapPoolSize = Math.round(part * builder.bitmapPoolScreens);
  }

  if (Log.isLoggable(TAG, Log.DEBUG)) {
    Log.d(
        TAG,
        "Calculation complete"
            + ", Calculated memory cache size: " + toMb(memoryCacheSize)
            + ", pool size: " + toMb(bitmapPoolSize)
            + ", byte array size: " + toMb(arrayPoolSize)
            + ", memory class limited? " + (targetMemoryCacheSize + targetBitmapPoolSize > maxSize)
            + ", max size: " + toMb(maxSize)
            + ", memoryClass: " + builder.activityManager.getMemoryClass()
            + ", isLowMemoryDevice: " + isLowMemoryDevice(builder.activityManager));
  }
}
```
As you can see, Glide computes a reasonable initial cache size from each app's available memory together with the device's Android version and screen resolution. For a full source walkthrough of Glide, see: Third-Party Open Source Libraries: Glide - Source Code Analysis (Supplement)
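If you want the same device-aware sizing for your own blur cache, one option (a sketch assuming Glide 4.x is on the classpath; the 1/4 factor is an arbitrary illustrative choice) is to let MemorySizeCalculator do the math and take a fraction of its result:

```java
// Sketch: reuse Glide's device-aware calculation instead of hard-coding "4 MB".
MemorySizeCalculator calculator = new MemorySizeCalculator.Builder(context).build();

// Use a fraction of Glide's recommended memory cache size for the blur cache.
int blurCacheSize = calculator.getMemoryCacheSize() / 4;

LruCache<String, Bitmap> blurCache = new LruCache<String, Bitmap>(blurCacheSize) {
    @Override
    protected int sizeOf(String key, Bitmap value) {
        return value.getByteCount(); // cache measured in bytes, matching blurCacheSize
    }
};
```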
5. Summary
Using the tools is not the hard part; read a few articles, practise a little, and you will be able to track down the cause fairly quickly. The real difficulty lies in understanding the Android framework source code and how third-party libraries are implemented. My advice is to spend time reading the Android Framework source, studying the internals of popular open source libraries, and brushing up on data structures and algorithms, so that you truly fix the root cause rather than just the symptom.