Results
The demo phone is a Redmi 10X Pro; it runs both models, face detection plus landmark recognition, in real time.
Main optimizations
A while ago I saw a discussion of face detection and facial-landmark recognition built on OpenCV, and I wanted to see where the deep-neural-network side of this has got to. I picked ncnn as the inference framework, since it supports Windows, Android and several other platforms, and then looked at a number of ncnn + face detection / landmark projects on GitHub. Most of them follow the same pattern: ncnn does the pre-processing (resizing the image and converting it to the planar RGB format), ncnn runs the inference, and OpenCV then draws the rectangles and points.
On my PC I first profiled one of those face-detection demos. In a Release build, ncnn's pre-processing alone took about half as long as the ncnn (Vulkan) inference itself, which seemed odd given how small the input resolution is. I don't know of a more efficient way to do that pre-processing on the CPU, so the only option I could think of was to move it to the GPU, which is what this optimization does: both ncnn's pre-processing and OpenCV's drawing of rectangles and points are replaced with Vulkan compute shaders. In the ideal case the whole pipeline then runs on the GPU, with the CPU-GPU transfer in the middle as the main remaining cost, and every OpenCV call can be dropped along the way.
ncnn pre-processing
ncnn's pre-processing boils down to three steps: resize the image, convert the interleaved pixel data to planar format, and normalize the values. That whole process is replaced by the following Vulkan compute shader.
#version 450
layout (local_size_x = 16, local_size_y = 16) in;
// source camera frame; sampling it in normalized coordinates performs the resize
layout (binding = 0) uniform sampler2D inSampler;
// destination: three tightly packed planes of outWidth*outHeight floats for ncnn
layout (binding = 1) buffer outBuffer{
    float dataOut[];
};
layout (std140, binding = 2) uniform UBO {
    int outWidth;
    int outHeight;
    float meanX;
    float meanY;
    float meanZ;
    float meanW;
    float scaleX;
    float scaleY;
    float scaleZ;
    float scaleW;
} ubo;
void main(){
    ivec2 uv = ivec2(gl_GlobalInvocationID.xy);
    if(uv.x >= ubo.outWidth || uv.y >= ubo.outHeight){
        return;
    }
    // sample at the pixel centre of the output grid -> resize
    vec2 suv = (vec2(uv)+vec2(0.5f))/vec2(ubo.outWidth,ubo.outHeight);
    vec4 inColor = textureLod(inSampler,suv,0)*255.0f;
    int size = ubo.outWidth*ubo.outHeight;
    int index = uv.y*ubo.outWidth+uv.x;
    // normalization
    vec4 mean = vec4(ubo.meanX,ubo.meanY,ubo.meanZ,ubo.meanW);
    vec4 scale = vec4(ubo.scaleX,ubo.scaleY,ubo.scaleZ,ubo.scaleW);
    inColor = inColor*scale-mean;
    // interleaved -> planar, channel order selected at compile time
#if NCNN_BGR
    dataOut[index] = inColor.b;
    dataOut[index+size] = inColor.g;
    dataOut[index+2*size] = inColor.r;
#endif
#if NCNN_RGB
    dataOut[index] = inColor.r;
    dataOut[index+size] = inColor.g;
    dataOut[index+2*size] = inColor.b;
#endif
}
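One note on the parameters: the shader computes inColor*scale - mean, so to reproduce ncnn's usual (value - mean) * norm normalization you would pass norm as the scale values and mean*norm as the mean values. On the host side, the storage buffer at binding = 1 is currently a host-visible staging buffer (see the note at the end about direct device-memory interop), so its mapped contents can be wrapped by ncnn without a copy. A minimal sketch, assuming a float pointer already mapped from that buffer; the helper and the blob names "data"/"output" are illustrative and depend on the concrete model:

#include "net.h"  // ncnn

// 'mapped' points at the float buffer written by the pre-processing shader:
// three tightly packed planes of outWidth*outHeight floats (B/G/R or R/G/B,
// depending on which of NCNN_BGR / NCNN_RGB was defined).
void runDetect(ncnn::Net& net, float* mapped, int outWidth, int outHeight)
{
    // Wrap the mapped memory without copying. ncnn::Mat expects each channel to
    // start at a 16-byte-aligned cstep; for the usual input sizes
    // (outWidth*outHeight a multiple of 4) the tightly packed planes written by
    // the shader already satisfy that, otherwise a repack would be needed.
    ncnn::Mat in(outWidth, outHeight, 3, mapped);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);       // blob name depends on the model
    ncnn::Mat out;
    ex.extract("output", out);  // likewise
    // ... decode 'out' into face boxes / landmarks as the model defines.
}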
The landmark model has to run on the rect region returned by face detection, so the shader is changed as follows.
#version 450
layout (local_size_x = 16, local_size_y = 16) in;
layout (binding = 0) uniform sampler2D inSampler;
layout (binding = 1) buffer outBuffer{
    float dataOut[];
};
layout (std140, binding = 2) uniform UBO {
    int outWidth;
    int outHeight;
    float meanX;
    float meanY;
    float meanZ;
    float meanW;
    float scaleX;
    float scaleY;
    float scaleZ;
    float scaleW;
    float x1;
    float x2;
    float y1;
    float y2;
} ubo;
void main(){
    ivec2 uv = ivec2(gl_GlobalInvocationID.xy);
    if(uv.x >= ubo.outWidth || uv.y >= ubo.outHeight){
        return;
    }
    // map the output pixel into the face rect [x1,x2] x [y1,y2] before sampling,
    // so the crop and the resize happen in one step
    vec2 isize = vec2(ubo.x2-ubo.x1,ubo.y2-ubo.y1);
    vec2 suv = (vec2(uv)+vec2(0.5f))/vec2(ubo.outWidth,ubo.outHeight);
    vec2 isuv = suv*isize+vec2(ubo.x1,ubo.y1);
    vec4 inColor = textureLod(inSampler,isuv,0)*255.0f;
    int size = ubo.outWidth*ubo.outHeight;
    int index = uv.y*ubo.outWidth+uv.x;
    vec4 mean = vec4(ubo.meanX,ubo.meanY,ubo.meanZ,ubo.meanW);
    vec4 scale = vec4(ubo.scaleX,ubo.scaleY,ubo.scaleZ,ubo.scaleW);
    inColor = inColor*scale-mean;
#if NCNN_BGR
    dataOut[index] = inColor.b;
    dataOut[index+size] = inColor.g;
    dataOut[index+2*size] = inColor.r;
#endif
#if NCNN_RGB
    dataOut[index] = inColor.r;
    dataOut[index+size] = inColor.g;
    dataOut[index+2*size] = inColor.b;
#endif
}
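The extra x1/x2/y1/y2 fields are the face rect in normalized [0,1] texture coordinates. A minimal sketch of filling them from a detection box given in pixels; the struct is only an illustrative mirror of the UBO above, not an actual aoce type:

// Illustrative mirror of the landmark pre-processing UBO above (std140 with
// scalar members only, so the packed C++ layout matches field for field).
struct CropUbo {
    int   outWidth, outHeight;            // landmark model input size
    float meanX, meanY, meanZ, meanW;
    float scaleX, scaleY, scaleZ, scaleW;
    float x1, x2, y1, y2;                 // face rect, normalized to [0,1]
};

// left/top/right/bottom is a detected face box in pixels on a
// frameWidth x frameHeight camera frame.
void setCropRect(CropUbo& ubo, float left, float top, float right, float bottom,
                 int frameWidth, int frameHeight)
{
    ubo.x1 = left   / frameWidth;
    ubo.x2 = right  / frameWidth;
    ubo.y1 = top    / frameHeight;
    ubo.y2 = bottom / frameHeight;
}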
Replacing OpenCV's rectangle and point drawing
I already thought about drawing rectangles and points when porting the GPUImage filters: with the graphics pipeline integrated this would be easy, but the graphics pipeline itself, plus a general design for how it interoperates with the compute pipeline, is a lot of work on its own.
For now I decided to keep it simple. Drawing the rectangle like this certainly wastes some compute.
#version 450
layout (local_size_x = 16, local_size_y = 16) in;// gl_WorkGroupSize
layout (binding = 0, rgba8) uniform readonly image2D inTex;
layout (binding = 1, rgba8) uniform image2D outTex;
layout (binding = 2) uniform UBO {
    int radius;    // border thickness in pixels
    float x1;      // rect in normalized [0,1] coordinates
    float x2;
    float y1;
    float y2;
    float colorR;
    float colorG;
    float colorB;
    float colorA;
} ubo;
void main(){
    ivec2 uv = ivec2(gl_GlobalInvocationID.xy);
    ivec2 size = imageSize(inTex);
    if(uv.x >= size.x || uv.y >= size.y){
        return;
    }
    int xmin = int(ubo.x1 * size.x);
    int xmax = int(ubo.x2 * size.x);
    int ymin = int(ubo.y1 * size.y);
    int ymax = int(ubo.y2 * size.y);
    // distances of the current pixel to the four rect edges
    ivec4 xx = ivec4(uv.x, xmax, uv.y, ymax);
    ivec4 yy = ivec4(xmin, uv.x, ymin, uv.y);
    ivec4 xy = abs(xx - yy);
    // sum > 0 when the pixel is within radius of at least one edge line
    float sum = step(xy.x, ubo.radius) + step(xy.y, ubo.radius) + step(xy.z, ubo.radius) + step(xy.w, ubo.radius);
    // lr equals rl only for pixels lying inside the rect, so the test below
    // keeps the border and rejects pixels away from the rect
    vec2 lr = vec2(xy.x + xy.y, xy.z + xy.w);
    vec2 rl = vec2(xmax - xmin, ymax - ymin);
    vec4 color = imageLoad(inTex,uv);
    if (sum > 0 && length(lr - rl) < ubo.radius) {
        vec3 drawColor = vec3(ubo.colorR,ubo.colorG,ubo.colorB);
        // alpha-blend the border color over the source pixel
        color.rgb = color.rgb*(1.0f - ubo.colorA) + drawColor*ubo.colorA;
    }
    imageStore(outTex,uv,color);
}
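Both this shader and the pre-processing ones use a 16x16 workgroup with an early return for pixels past the image edge, so dispatching just rounds the image size up to whole workgroups. A minimal sketch, assuming the pipeline and descriptor sets are already bound on the command buffer:

#include <vulkan/vulkan.h>

// Cover a width x height image with 16x16 workgroups; the shaders' early
// return discards invocations that fall past the image edge.
void dispatchImageShader(VkCommandBuffer cmd, uint32_t width, uint32_t height)
{
    uint32_t groupsX = (width  + 15) / 16;
    uint32_t groupsY = (height + 15) / 16;
    vkCmdDispatch(cmd, groupsX, groupsY, 1);
}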
Drawing the points would also be easy with a graphics pipeline. Here it is still manageable because the number of points is fixed: for each point, mark the pixels at the corresponding UV position in the image and blend them with the original.
#version 450
layout (local_size_x = 240, local_size_y = 1) in;
// landmark positions in normalized [0,1] full-frame coordinates
layout (binding = 0) buffer inBuffer{
    vec2 points[];
};
layout (binding = 1, rgba8) uniform image2D outTex;
layout (binding = 2) uniform UBO {
    int showCount;   // number of valid points
    int radius;      // side of the block stamped for each point
    float colorR;
    float colorG;
    float colorB;
    float colorA;
} ubo;
void main(){
    int index = int(gl_GlobalInvocationID.x);
    ivec2 size = imageSize(outTex);
    if(index >= ubo.showCount){
        return;
    }
    // point position in pixels
    ivec2 uv = ivec2(points[index] * size);
    vec4 drawColor = vec4(ubo.colorR,ubo.colorG,ubo.colorB,ubo.colorA);
    int radius = max(1,ubo.radius);
    // stamp a radius x radius block of pixels around the point
    for(int i = 0; i< radius; ++i){
        for(int j= 0; j< radius; ++j){
            int x = uv.x - 1 + j;
            int y = uv.y - 1 + i;
            // REPLICATE border
            x = max(0,min(x,size.x-1));
            y = max(0,min(y,size.y-1));
            imageStore(outTex, ivec2(x,y), drawColor);
        }
    }
}
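The points[] buffer holds the landmarks in normalized full-frame coordinates. Assuming the landmark model outputs coordinates normalized to the cropped face region, they have to be mapped back through the same x1/x2/y1/y2 rect that fed the landmark pre-processing shader before being uploaded; a minimal sketch:

#include <vector>

struct Point2 { float x, y; };

// Map landmark outputs that are normalized to the face crop back into
// full-frame normalized coordinates, using the same x1/x2/y1/y2 rect that was
// passed to the landmark pre-processing shader. (Assumes the model reports
// crop-relative coordinates in [0,1]; adjust if it reports pixel positions.)
std::vector<Point2> toFramePoints(const std::vector<Point2>& cropPoints,
                                  float x1, float y1, float x2, float y2)
{
    std::vector<Point2> framePoints(cropPoints.size());
    for (size_t i = 0; i < cropPoints.size(); ++i) {
        framePoints[i].x = x1 + cropPoints[i].x * (x2 - x1);
        framePoints[i].y = y1 + cropPoints[i].y * (y2 - y1);
    }
    return framePoints;
}

With local_size_x = 240, a single workgroup covers up to 240 points, so one vkCmdDispatch(cmd, 1, 1, 1) is enough; showCount cuts off the unused invocations.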
If anyone has a better idea, suggestions are welcome.
Building and running
The wrapping and composition logic for the GLSL above lives mainly in aoce_ncnn. The Windows test demo is mainly in ncnntest; the CMakeLists.txt in that directory provides the option NCNN_VULKAN_WINDOW, which selects whether the result is drawn with Vulkan or with OpenCV. The wrapping logic for the Android demo is mainly in aocencnntest.
You can download and build ncnn yourself to debug and inspect the details, or use the pre-configured aoce_thirdparty directory: put the files from the downloaded thirdparty folder into the thirdparty folder under the aoce directory, and if the location is correct CMake will automatically find and link the relevant DLLs.
On Android, the interfaces exposed by aoce first have to be converted to Java automatically with SWIG; see android build for details. At the moment the phone has to be held in landscape for detection to work well, which should be adjusted later.
Finally, the one regret: the original plan was to hand the buffer produced by the Vulkan pre-processing directly to ncnn in device memory, instead of staging it through a VK_MEMORY_PROPERTY_HOST_COHERENT_BIT buffer as it does now. I tried a few approaches, none of which has worked so far; if anyone has done something similar, pointers are welcome.
References:
Face-Detector-1MB-with-landmark
Ultra-Light-Fast-Generic-Face-Detector-1MB
Face detection: Ultra-Light-Fast-Generic-Face-Detector-1MB