Implementing an E-Mail Collection Plugin in Python
The __import__ function

We all know that import is used to bring in modules, but import actually works by calling the builtin function __import__. In some programs we want to call functions dynamically: when we know the name of a module as a string, dynamic invocation makes this very convenient.
```python
def getfunctionbyname(module_name, function_name):
    module = __import__(module_name)
    return getattr(module, function_name)
```
With this code we can easily call any function of a module by name.
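As a quick illustration (not part of the scanner itself), getfunctionbyname can resolve a standard-library function purely from strings:

```python
# Look up math.sqrt purely by its string names and call it
sqrt = getfunctionbyname('math', 'sqrt')
print(sqrt(16))  # 4.0
```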
Plugin System Development Workflow
For a plugin system to work, it mainly has to perform the following operations (a minimal sketch of this mechanism follows the list):

- discover plugins by scanning a given directory for .py files
- add the plugin directory to sys.path so the plugins can be imported
- pass the crawled URL and page source to each plugin
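Before looking at the full implementation, here is a minimal sketch of that mechanism, assuming the plugins live in a script/ directory (the spiderplus class below packages exactly these steps):

```python
import os
import sys

plugin_dir = os.path.join(os.getcwd(), 'script')   # directory holding the .py plugins
sys.path.append(plugin_dir)                        # make the plugins importable by name

# discover plugin module names: every .py file except __init__.py
names = [f[:-3] for f in os.listdir(plugin_dir)
         if f.endswith('.py') and not f.startswith('__init__')]

for name in names:
    module = __import__(name)   # import the plugin from its string name
    print('loaded plugin:', name, module)
```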
Plugin System Code
Create a spiderplus class in lib/core/plugin.py that implements the behavior we need:
```python
# __author__ = 'mathor'
import os
import sys

class spiderplus(object):
    def __init__(self, plugin, disallow=[]):
        self.dir_exploit = []
        self.disallow = ['__init__']
        self.disallow.extend(disallow)
        self.plugin = os.getcwd() + '/' + plugin
        sys.path.append(plugin)

    def list_plusg(self):
        # keep only .py files that are not on the disallow list
        def filter_func(file):
            if not file.endswith('.py'):
                return False
            for disfile in self.disallow:
                if disfile in file:
                    return False
            return True
        dir_exploit = filter(filter_func, os.listdir(self.plugin))
        return list(dir_exploit)

    def work(self, url, html):
        for _plugin in self.list_plusg():
            try:
                m = __import__(_plugin.split('.')[0])
                spider = getattr(m, 'spider')
                p = spider()
                s = p.run(url, html)
            except Exception as e:
                print(e)
```
The work function needs to be passed url and html; these are exactly what the scanner hands to the plugin system. Through the code

```python
spider = getattr(m, 'spider')
p = spider()
s = p.run(url, html)
```

we require that every plugin be invoked through the run method of a class named spider.
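Put differently, every plugin dropped into the plugin directory is expected to follow this minimal skeleton (a bare template for illustration, not an actual check):

```python
# Minimal plugin template: the loader imports the module, fetches the
# `spider` class with getattr and calls its run(url, html) method.
class spider:
    def run(self, url, html):
        # analyse the URL / page source here and report whether anything was found
        return False
```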
Calling the Plugins from the Scanner
The plugins are driven mainly by the crawler: since each plugin needs the URL and the page source as arguments, we simply add the plugin-system code at the point where the crawler has obtained both.
First open Spider.py and add the following at the top of the file:

```python
from lib.core import plugin
```
Then add the following at the end of the file:

```python
disallow = ['sqlcheck']
_plugin = plugin.spiderplus('script', disallow)
_plugin.work(_str['url'], _str['html'])
```
disallow is the list of plugins that are not allowed to run; to make testing easier, we can put sqlcheck in it for now.
Integrating the SQL Injection Check into the Plugin System
This is actually very simple: just change script/sqlcheck.py to the code below. As for the Download module, it is really just the Downloader module; copying Downloader.py and naming the copy Download.py is enough.
```python
import re, random
from lib.core import Download

class spider:
    def run(self, url, html):
        if "?" not in url:  # Pseudo-static page, no query string to inject into
            return False
        Downloader = Download.Downloader()
        BOOLEAN_TESTS = (" AND %d=%d", " OR NOT (%d=%d)")
        DBMS_ERRORS = {
            # regular expressions used for DBMS recognition based on error message response
            "MySQL": (r"SQL syntax.*MySQL", r"Warning.*mysql_.*", r"valid MySQL result", r"MySqlClient\."),
            "PostgreSQL": (r"PostgreSQL.*ERROR", r"Warning.*\Wpg_.*", r"valid PostgreSQL result", r"Npgsql\."),
            "Microsoft SQL Server": (r"Driver.* SQL[\-\_\ ]*Server", r"OLE DB.* SQL Server", r"(\W|\A)SQL Server.*Driver", r"Warning.*mssql_.*", r"(\W|\A)SQL Server.*[0-9a-fA-F]{8}", r"(?s)Exception.*\WSystem\.Data\.SqlClient\.", r"(?s)Exception.*\WRoadhouse\.Cms\."),
            "Microsoft Access": (r"Microsoft Access Driver", r"JET Database Engine", r"Access Database Engine"),
            "Oracle": (r"\bORA-[0-9][0-9][0-9][0-9]", r"Oracle error", r"Oracle.*Driver", r"Warning.*\Woci_.*", r"Warning.*\Wora_.*"),
            "IBM DB2": (r"CLI Driver.*DB2", r"DB2 SQL error", r"\bdb2_\w+\("),
            "SQLite": (r"SQLite/JDBCDriver", r"SQLite.Exception", r"System.Data.SQLite.SQLiteException", r"Warning.*sqlite_.*", r"Warning.*SQLite3::", r"\[SQLITE_ERROR\]"),
            "Sybase": (r"(?i)Warning.*sybase.*", r"Sybase message", r"Sybase.*Server message.*"),
        }
        _url = url + "%29%28%22%27"  # append )("' (URL-encoded) to provoke database errors
        _content = Downloader.get(_url)
        # error-based detection: look for any known DBMS error message in the response
        for (dbms, regex) in ((dbms, regex) for dbms in DBMS_ERRORS for regex in DBMS_ERRORS[dbms]):
            if re.search(regex, _content):
                return True
        content = {}
        content['origin'] = Downloader.get(_url)
        # boolean-based detection: a true condition should leave the page unchanged,
        # a false one should alter it
        for test_payload in BOOLEAN_TESTS:
            # Right Page
            RANDINT = random.randint(1, 255)
            _url = url + test_payload % (RANDINT, RANDINT)
            content["true"] = Downloader.get(_url)
            _url = url + test_payload % (RANDINT, RANDINT + 1)
            content["false"] = Downloader.get(_url)
            if content["origin"] == content["true"] != content["false"]:
                return "sql found: %s" % url
```
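The plugin only assumes that Download.Downloader exposes a get(url) method returning the response body as a string. Since Downloader.py itself is not shown in this article, a minimal stand-in with that interface might look like the sketch below (built on requests; this is an assumption, not the original module):

```python
# Hypothetical minimal Download.py exposing the interface sqlcheck.py relies on
import requests

class Downloader:
    def get(self, url):
        # fetch the page and return its body; return an empty string on network errors
        try:
            return requests.get(url, timeout=10).text
        except requests.RequestException:
            return ''
```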
E-Mail Search Plugin
As a final simple example, let's search pages for e-mail addresses. Since the plugin system passes in the page source, we can use the regular expression ([\w-]+@[\w-]+\.[\w-]+)+ to find all the addresses. Create the file script/email_check.py:
```python
# __author__ = 'mathor'
import re

class spider():
    def run(self, url, html):
        # print(html)
        pattern = re.compile(r'([\w-]+@[\w-]+\.[\w-]+)+')
        email_list = re.findall(pattern, html)
        if email_list:
            print(email_list)
            return True
        return False
```
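To sanity-check the plugin outside the scanner, you can feed it a hand-written HTML snippet; the addresses and the import path below are made up for the test and assume the project root is the working directory:

```python
# Standalone test of the e-mail plugin (sample data is invented)
from script.email_check import spider

html = '<p>Contact: admin@example.com or support@example.org</p>'
result = spider().run('http://example.com', html)
# run() prints ['admin@example.com', 'support@example.org'] and returns True
print(result)
```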
Run python w8ay.py and you will see that the e-mail addresses found in the pages have all been collected.