
cocos2dx high-performance Gaussian blur (with Lua interface)

A Gaussian blur of the current screen contents, implemented following the official cocos2d-x CaptureScreen post, with two optimizations:

1. The screenshot is downscaled before blurring, which cuts the number of pixels to sample. By default it is shrunk to 1/4 of the original width and height (i.e. 1/16 of the pixels).

2. The blur runs once on the CPU in C++, avoiding the per-frame rendering cost (and dropped frames) of a blur shader.

Here is the C++ part:

	/*
	* Gaussian blur interface. Scale factor iScale: the screenshot is
	* shrunk to 1/iScale of the full-screen size before blurring.
	*/
	static void gaussianBlur(const std::function<void(bool, cocos2d::Image*)>& afterCaptured, int iScale = 4);

// The Stack Blur Algorithm was invented by Mario Klingemann,
// mario@quasimondo.com and described here:
// http://incubator.quasimondo.com/processing/fast_blur_deluxe.php

// This is C++ RGBA (32 bit color) multi-threaded version
// by Victor Laskin (victor.laskin@gmail.com)
// More details: http://vitiy.info/stackblur-algorithm-multi-threaded-blur-for-cpp

// This code is using MVThread class from my cross-platform framework
// You can exchange it with any thread implementation you like

// -------------------------------------- stackblur ----------------------------------------->

static unsigned short const stackblur_mul[255] = {
    512, 512, 456, 512, 328, 456, 335, 512, 405, 328, 271, 456, 388, 335, 292, 512,
    454, 405, 364, 328, 298, 271, 496, 456, 420, 388, 360, 335, 312, 292, 273, 512,
    482, 454, 428, 405, 383, 364, 345, 328, 312, 298, 284, 271, 259, 496, 475, 456,
    437, 420, 404, 388, 374, 360, 347, 335, 323, 312, 302, 292, 282, 273, 265, 512,
    497, 482, 468, 454, 441, 428, 417, 405, 394, 383, 373, 364, 354, 345, 337, 328,
    320, 312, 305, 298, 291, 284, 278, 271, 265, 259, 507, 496, 485, 475, 465, 456,
    446, 437, 428, 420, 412, 404, 396, 388, 381, 374, 367, 360, 354, 347, 341, 335,
    329, 323, 318, 312, 307, 302, 297, 292, 287, 282, 278, 273, 269, 265, 261, 512,
    505, 497, 489, 482, 475, 468, 461, 454, 447, 441, 435, 428, 422, 417, 411, 405,
    399, 394, 389, 383, 378, 373, 368, 364, 359, 354, 350, 345, 341, 337, 332, 328,
    324, 320, 316, 312, 309, 305, 301, 298, 294, 291, 287, 284, 281, 278, 274, 271,
    268, 265, 262, 259, 257, 507, 501, 496, 491, 485, 480, 475, 470, 465, 460, 456,
    451, 446, 442, 437, 433, 428, 424, 420, 416, 412, 408, 404, 400, 396, 392, 388,
    385, 381, 377, 374, 370, 367, 363, 360, 357, 354, 350, 347, 344, 341, 338, 335,
    332, 329, 326, 323, 320, 318, 315, 312, 310, 307, 304, 302, 299, 297, 294, 292,
    289, 287, 285, 282, 280, 278, 275, 273, 271, 269, 267, 265, 263, 261, 259
};

static unsigned char const stackblur_shr[255] = {
    9, 11, 12, 13, 13, 14, 14, 15, 15, 15, 15, 16, 16, 16, 16, 17,
    17, 17, 17, 17, 17, 17, 18, 18, 18, 18, 18, 18, 18, 18, 18, 19,
    19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 20, 20, 20,
    20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 21,
    21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
    21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 22, 22, 22, 22, 22, 22,
    22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22,
    22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 23,
    23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
    23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
    23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23,
    23, 23, 23, 23, 23, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
    24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
    24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
    24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
    24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24
};

/// Stackblur algorithm body
void stackblurJob(unsigned char* src,   ///< input image data
                  unsigned int w,       ///< image width
                  unsigned int h,       ///< image height
                  unsigned int radius,  ///< blur intensity (should be in 2..254 range)
                  int cores,            ///< total number of working threads
                  int core,             ///< current thread number
                  int step,             ///< step of processing (1,2)
                  unsigned char* stack  ///< stack buffer
                  )
{
    unsigned int x, y, xp, yp, i;
    unsigned int sp;
    unsigned int stack_start;
    unsigned char* stack_ptr;

    unsigned char* src_ptr;
    unsigned char* dst_ptr;

    unsigned long sum_r;
    unsigned long sum_g;
    unsigned long sum_b;
    unsigned long sum_a;
    unsigned long sum_in_r;
    unsigned long sum_in_g;
    unsigned long sum_in_b;
    unsigned long sum_in_a;
    unsigned long sum_out_r;
    unsigned long sum_out_g;
    unsigned long sum_out_b;
    unsigned long sum_out_a;

    unsigned int wm = w - 1;
    unsigned int hm = h - 1;
    unsigned int w4 = w * 4;
    unsigned int div = (radius * 2) + 1;
    unsigned int mul_sum = stackblur_mul[radius];
    unsigned char shr_sum = stackblur_shr[radius];

    if (step == 1)
    {
        int minY = core * h / cores;
        int maxY = (core + 1) * h / cores;

        for (y = minY; y < maxY; y++)
        {
            sum_r = sum_g = sum_b = sum_a =
            sum_in_r = sum_in_g = sum_in_b = sum_in_a =
            sum_out_r = sum_out_g = sum_out_b = sum_out_a = 0;

            src_ptr = src + w4 * y; // start of line (0,y)

            for (i = 0; i <= radius; i++)
            {
                stack_ptr = &stack[4 * i];
                stack_ptr[0] = src_ptr[0];
                stack_ptr[1] = src_ptr[1];
                stack_ptr[2] = src_ptr[2];
                stack_ptr[3] = src_ptr[3];
                sum_r += src_ptr[0] * (i + 1);
                sum_g += src_ptr[1] * (i + 1);
                sum_b += src_ptr[2] * (i + 1);
                sum_a += src_ptr[3] * (i + 1);
                sum_out_r += src_ptr[0];
                sum_out_g += src_ptr[1];
                sum_out_b += src_ptr[2];
                sum_out_a += src_ptr[3];
            }

            for (i = 1; i <= radius; i++)
            {
                if (i <= wm) src_ptr += 4;
                stack_ptr = &stack[4 * (i + radius)];
                stack_ptr[0] = src_ptr[0];
                stack_ptr[1] = src_ptr[1];
                stack_ptr[2] = src_ptr[2];
                stack_ptr[3] = src_ptr[3];
                sum_r += src_ptr[0] * (radius + 1 - i);
                sum_g += src_ptr[1] * (radius + 1 - i);
                sum_b += src_ptr[2] * (radius + 1 - i);
                sum_a += src_ptr[3] * (radius + 1 - i);
                sum_in_r += src_ptr[0];
                sum_in_g += src_ptr[1];
                sum_in_b += src_ptr[2];
                sum_in_a += src_ptr[3];
            }

            sp = radius;
            xp = radius;
            if (xp > wm) xp = wm;
            src_ptr = src + 4 * (xp + y * w); // img.pix_ptr(xp, y);
            dst_ptr = src + y * w4;           // img.pix_ptr(0, y);
            for (x = 0; x < w; x++)
            {
                dst_ptr[0] = (sum_r * mul_sum) >> shr_sum;
                dst_ptr[1] = (sum_g * mul_sum) >> shr_sum;
                dst_ptr[2] = (sum_b * mul_sum) >> shr_sum;
                dst_ptr[3] = (sum_a * mul_sum) >> shr_sum;
                dst_ptr += 4;

                sum_r -= sum_out_r;
                sum_g -= sum_out_g;
                sum_b -= sum_out_b;
                sum_a -= sum_out_a;

                stack_start = sp + div - radius;
                if (stack_start >= div) stack_start -= div;
                stack_ptr = &stack[4 * stack_start];

                sum_out_r -= stack_ptr[0];
                sum_out_g -= stack_ptr[1];
                sum_out_b -= stack_ptr[2];
                sum_out_a -= stack_ptr[3];

                if (xp < wm)
                {
                    src_ptr += 4;
                    ++xp;
                }

                stack_ptr[0] = src_ptr[0];
                stack_ptr[1] = src_ptr[1];
                stack_ptr[2] = src_ptr[2];
                stack_ptr[3] = src_ptr[3];

                sum_in_r += src_ptr[0];
                sum_in_g += src_ptr[1];
                sum_in_b += src_ptr[2];
                sum_in_a += src_ptr[3];

                sum_r += sum_in_r;
                sum_g += sum_in_g;
                sum_b += sum_in_b;
                sum_a += sum_in_a;

                ++sp;
                if (sp >= div) sp = 0;
                stack_ptr = &stack[sp * 4];

                sum_out_r += stack_ptr[0];
                sum_out_g += stack_ptr[1];
                sum_out_b += stack_ptr[2];
                sum_out_a += stack_ptr[3];
                sum_in_r -= stack_ptr[0];
                sum_in_g -= stack_ptr[1];
                sum_in_b -= stack_ptr[2];
                sum_in_a -= stack_ptr[3];
            }
        }
    }

    // step 2
    if (step == 2)
    {
        int minX = core * w / cores;
        int maxX = (core + 1) * w / cores;

        for (x = minX; x < maxX; x++)
        {
            sum_r = sum_g = sum_b = sum_a =
            sum_in_r = sum_in_g = sum_in_b = sum_in_a =
            sum_out_r = sum_out_g = sum_out_b = sum_out_a = 0;

            src_ptr = src + 4 * x; // x,0

            for (i = 0; i <= radius; i++)
            {
                stack_ptr = &stack[i * 4];
                stack_ptr[0] = src_ptr[0];
                stack_ptr[1] = src_ptr[1];
                stack_ptr[2] = src_ptr[2];
                stack_ptr[3] = src_ptr[3];
                sum_r += src_ptr[0] * (i + 1);
                sum_g += src_ptr[1] * (i + 1);
                sum_b += src_ptr[2] * (i + 1);
                sum_a += src_ptr[3] * (i + 1);
                sum_out_r += src_ptr[0];
                sum_out_g += src_ptr[1];
                sum_out_b += src_ptr[2];
                sum_out_a += src_ptr[3];
            }

            for (i = 1; i <= radius; i++)
            {
                if (i <= hm) src_ptr += w4; // +stride
                stack_ptr = &stack[4 * (i + radius)];
                stack_ptr[0] = src_ptr[0];
                stack_ptr[1] = src_ptr[1];
                stack_ptr[2] = src_ptr[2];
                stack_ptr[3] = src_ptr[3];
                sum_r += src_ptr[0] * (radius + 1 - i);
                sum_g += src_ptr[1] * (radius + 1 - i);
                sum_b += src_ptr[2] * (radius + 1 - i);
                sum_a += src_ptr[3] * (radius + 1 - i);
                sum_in_r += src_ptr[0];
                sum_in_g += src_ptr[1];
                sum_in_b += src_ptr[2];
                sum_in_a += src_ptr[3];
            }

            sp = radius;
            yp = radius;
            if (yp > hm) yp = hm;
            src_ptr = src + 4 * (x + yp * w); // img.pix_ptr(x, yp);
            dst_ptr = src + 4 * x;            // img.pix_ptr(x, 0);
            for (y = 0; y < h; y++)
            {
                dst_ptr[0] = (sum_r * mul_sum) >> shr_sum;
                dst_ptr[1] = (sum_g * mul_sum) >> shr_sum;
                dst_ptr[2] = (sum_b * mul_sum) >> shr_sum;
                dst_ptr[3] = (sum_a * mul_sum) >> shr_sum;
                dst_ptr += w4;

                sum_r -= sum_out_r;
                sum_g -= sum_out_g;
                sum_b -= sum_out_b;
                sum_a -= sum_out_a;

                stack_start = sp + div - radius;
                if (stack_start >= div) stack_start -= div;
                stack_ptr = &stack[4 * stack_start];

                sum_out_r -= stack_ptr[0];
                sum_out_g -= stack_ptr[1];
                sum_out_b -= stack_ptr[2];
                sum_out_a -= stack_ptr[3];

                if (yp < hm)
                {
                    src_ptr += w4; // stride
                    ++yp;
                }

                stack_ptr[0] = src_ptr[0];
                stack_ptr[1] = src_ptr[1];
                stack_ptr[2] = src_ptr[2];
                stack_ptr[3] = src_ptr[3];

                sum_in_r += src_ptr[0];
                sum_in_g += src_ptr[1];
                sum_in_b += src_ptr[2];
                sum_in_a += src_ptr[3];

                sum_r += sum_in_r;
                sum_g += sum_in_g;
                sum_b += sum_in_b;
                sum_a += sum_in_a;

                ++sp;
                if (sp >= div) sp = 0;
                stack_ptr = &stack[sp * 4];

                sum_out_r += stack_ptr[0];
                sum_out_g += stack_ptr[1];
                sum_out_b += stack_ptr[2];
                sum_out_a += stack_ptr[3];
                sum_in_r -= stack_ptr[0];
                sum_in_g -= stack_ptr[1];
                sum_in_b -= stack_ptr[2];
                sum_in_a -= stack_ptr[3];
            }
        }
    }
}

class MVImageUtilsStackBlurTask
{
public:
    unsigned char* src;
    unsigned int w;
    unsigned int h;
    unsigned int radius;
    int cores;
    int core;
    int step;
    unsigned char* stack;

    inline MVImageUtilsStackBlurTask(unsigned char* src, unsigned int w, unsigned int h,
                                     unsigned int radius, int cores, int core, int step,
                                     unsigned char* stack)
    {
        this->src = src;
        this->w = w;
        this->h = h;
        this->radius = radius;
        this->cores = cores;
        this->core = core;
        this->step = step;
        this->stack = stack;
    }

    inline void run()
    {
        stackblurJob(src, w, h, radius, cores, core, step, stack);
    }
};

/// Stackblur algorithm by Mario Klingemann
/// Details here:
/// http://www.quasimondo.com/StackBlurForCanvas/StackBlurDemo.html
/// C++ implementation base from:
/// https://gist.github.com/benjamin9999/3809142
/// http://www.antigrain.com/__code/include/agg_blur.h.html
/// This version works only with RGBA color
void stackblur(unsigned char* src,   ///< input image data
               unsigned int w,       ///< image width
               unsigned int h,       ///< image height
               unsigned int radius,  ///< blur intensity (should be in 2..254 range)
               int cores = 1         ///< number of threads (1 - normal single thread)
               )
{
    if (radius > 254) return;
    if (radius < 2) return;

    unsigned int div = (radius * 2) + 1;
    unsigned char* stack = new unsigned char[div * 4 * cores];

    // Only the single-threaded path is kept here; the original
    // MVThread-based multi-threaded branch was dropped in this port.
    if (cores == 1)
    {
        // no multithreading
        stackblurJob(src, w, h, radius, 1, 0, 1, stack); // horizontal pass
        stackblurJob(src, w, h, radius, 1, 0, 2, stack); // vertical pass
    }

    delete[] stack;
}

/**
 * Capture screen implementation, don't use it directly.
 */
void onCaptureScreen(const std::function<void(bool, Image*)>& afterCaptured, int iScale)
{
    static bool startedCapture = false;
    if (startedCapture)
    {
        CCLOG("Screen capture is already working");
        if (afterCaptured)
        {
            afterCaptured(false, nullptr);
        }
        return;
    }
    else
    {
        startedCapture = true;
    }

    auto glView = Director::getInstance()->getOpenGLView();
    auto frameSize = glView->getFrameSize();
#if (CC_TARGET_PLATFORM == CC_PLATFORM_MAC) || (CC_TARGET_PLATFORM == CC_PLATFORM_WIN32) || (CC_TARGET_PLATFORM == CC_PLATFORM_LINUX)
    frameSize = frameSize * glView->getFrameZoomFactor() * glView->getRetinaFactor();
#endif

    int width = static_cast<int>(frameSize.width);
    int height = static_cast<int>(frameSize.height);

    do
    {
        std::shared_ptr<GLubyte> buffer(new GLubyte[width * height * 4], [](GLubyte* p){ CC_SAFE_DELETE_ARRAY(p); });
        if (!buffer)
        {
            break;
        }

        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer.get());

        // glReadPixels returns the image bottom-up; flip it row by row.
        std::shared_ptr<GLubyte> flippedBuffer(new GLubyte[width * height * 4], [](GLubyte* p) { CC_SAFE_DELETE_ARRAY(p); });
        if (!flippedBuffer)
        {
            break;
        }
        for (int row = 0; row < height; ++row)
        {
            memcpy(flippedBuffer.get() + (height - row - 1) * width * 4,
                   buffer.get() + row * width * 4,
                   width * 4);
        }

        /*------------- downscale start ------------*/
        unsigned long dst_width = width / iScale;
        unsigned long dst_height = height / iScale;
        std::shared_ptr<GLubyte> zipFlippedBuffer(new GLubyte[dst_width * dst_height * 4], [](GLubyte* p) { CC_SAFE_DELETE_ARRAY(p); });
        if (!zipFlippedBuffer)
        {
            break;
        }
        // 16.16 fixed-point steps for nearest-neighbour sampling
        unsigned long xrIntFloat_16 = (width << 16) / dst_width + 1;
        unsigned long yrIntFloat_16 = (height << 16) / dst_height + 1;
        unsigned long srcy_16 = 0;
        unsigned long byte_width = 4; // bytes per pixel (RGBA)
        unsigned long byte_shift = 2; // log2(byte_width)
        auto beginPos = zipFlippedBuffer.get();
        for (unsigned long y = 0; y < dst_height; ++y)
        {
            auto pSrcLine = flippedBuffer.get() + (width << 2) * (srcy_16 >> 16);
            unsigned long srcx_16 = 0;
            for (unsigned long x = 0; x < dst_width; ++x)
            {
                memcpy(beginPos + (x << 2), pSrcLine + ((srcx_16 >> 16) << 2), byte_width);
                srcx_16 += xrIntFloat_16;
            }
            srcy_16 += yrIntFloat_16;
            beginPos += (dst_width << byte_shift);
        }
        /*------------- downscale end ------------*/

        // Blur the downscaled image once with the stack blur algorithm.
        stackblur(zipFlippedBuffer.get(), dst_width, dst_height, 5);

        Image* image = new (std::nothrow) Image;
        if (image)
        {
            image->initWithRawData(zipFlippedBuffer.get(), dst_width * dst_height * 4,
                                   dst_width, dst_height, 8);
            image->autorelease();
            if (afterCaptured)
            {
                afterCaptured(true, image);
            }
        }
        else
        {
            CCLOG("Malloc Image memory failed!");
            if (afterCaptured)
            {
                afterCaptured(false, nullptr);
            }
        }
    } while (0);

    // Reset the flag here so an early break above can't leave it stuck at true.
    startedCapture = false;
}

/*
 * Gaussian blur interface. Scale factor iScale: the screenshot is
 * shrunk to 1/iScale of the full-screen size before blurring.
 */
static EventListenerCustom* s_captureScreenListener;
static CustomCommand s_captureScreenCommand;
void Util::gaussianBlur(const std::function<void(bool, Image*)>& afterCaptured, int iScale /*= 4*/)
{
    if (s_captureScreenListener)
    {
        CCLOG("Warning: CaptureScreen has been called already, don't call more than once in one frame.");
        return;
    }
    s_captureScreenCommand.init(std::numeric_limits<float>::max());
    s_captureScreenCommand.func = std::bind(onCaptureScreen, afterCaptured, iScale);
    s_captureScreenListener = Director::getInstance()->getEventDispatcher()->addCustomEventListener(Director::EVENT_AFTER_DRAW, [](EventCustom* event) {
        auto director = Director::getInstance();
        director->getEventDispatcher()->removeEventListener((EventListener*)s_captureScreenListener);
        s_captureScreenListener = nullptr;
        director->getRenderer()->addCommand(&s_captureScreenCommand);
        director->getRenderer()->render();
    });
}


Here is the exported Lua binding:

#include "base/ccConfig.h"
#ifndef __game_custom_h__
#define __game_custom_h__

#ifdef __cplusplus
extern "C" {
#endif
#include "tolua++.h"
#ifdef __cplusplus
}
#endif

int register_all_game_custom(lua_State* tolua_S);

#endif // __game_custom_h__

static int tolua_pf_common_gaussianBlur(lua_State* tolua_S)
{
	LUA_FUNCTION callbackHandler = toluafix_ref_function(tolua_S, 2, 0);
	if (callbackHandler == 0)
	{
		CCLOG("tolua_pf_common_gaussianBlur : toluafix_ref_function , error");
		return 0;
	}

	auto capture_callback = [=](bool succeed, Image* img){
		auto luastack = LuaEngine::getInstance()->getLuaStack();

		luastack->pushBoolean(succeed);
		if (succeed){
			luastack->pushObject(img, "cc.Image");
		}
		else{
			luastack->pushNil();
		}
		luastack->executeFunctionByHandler(callbackHandler, 2);
	};

	// argc excludes the implicit self from the pf.Common:GaussianBlur(...) colon call
	int argc = lua_gettop(tolua_S) - 1;
	if (argc == 2)
	{
		int q = 4;
		if (!luaval_to_int32(tolua_S, 3, &q))
		{
			CCLOG("tolua_pf_common_gaussianBlur : luaval_to_int32 , error");
			return 0;
		}
		Util::gaussianBlur(capture_callback, q);
	}
	else
	{
		Util::gaussianBlur(capture_callback);
	}
	return 0;
}

TOLUA_API int register_all_game_custom(lua_State* tolua_S)
{
	tolua_open(tolua_S);

	tolua_module(tolua_S, "pf", 0);
	tolua_beginmodule(tolua_S, "pf");

		tolua_module(tolua_S, "Common", 0);
		tolua_beginmodule(tolua_S, "Common");
		{
			tolua_function(tolua_S, "GaussianBlur", tolua_pf_common_gaussianBlur);
		}
		tolua_endmodule(tolua_S);

	tolua_endmodule(tolua_S);
	return 1;
}

Usage:

            local function onFinishCapture(ret,img)
                if ret then
                    local texture = cc.Director:getInstance():getTextureCache():addImage(img, "capriteadu")
                    local spriteBlur = cc.Sprite:createWithTexture(texture)
                    local wSize = cc.Director:getInstance():getWinSize()
                    spriteBlur:setPosition(cc.p(wSize.width/2, wSize.height/2))
                    self:addChild(spriteBlur)
                    PF.UIEx.nodeToScaleForFixedSize(spriteBlur, wSize)
                end
            end
            pf.Common:GaussianBlur(onFinishCapture, 4)


