Notes on Skipfish, Google's lightweight security tool
[Abstract]
Pre
Skipfish is an active web application security reconnaissance tool. It prepares an interactive sitemap for the targeted site by carrying out a recursive crawl and dictionary-based probes. The resulting map is then annotated with the output from a number of active (but hopefully non-disruptive) security checks. The final report generated by the tool is meant to serve as a foundation for professional web application security assessments.
Source
https://github.com/spinkham/skipfish
http://code.google.com/p/skipfish/
Install
Install the required libraries:
sudo apt-get install libssl0.9.8
sudo apt-get install libssl-dev
sudo apt-get install openssl
sudo apt-get install libidn11-dev
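The same packages can also be installed in a single command. On newer Debian/Ubuntu releases the versioned libssl0.9.8 package may no longer be available; in that case libssl-dev alone is usually enough (this is an assumption about the build environment, not something the original notes state):
# one-line equivalent of the four installs above; drop libssl0.9.8 if your release no longer ships it
sudo apt-get install libssl-dev openssl libidn11-dev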
Install skipfish:
wget http://skipfish.googlecode.com/files/skipfish-1.69b.tgz
tar zxvf skipfish-1.69b.tgz
mv skipfish-1.69b skipfish
cd skipfish
make    # when compilation finishes, the skipfish executable is generated in this directory
cp dictionaries/default.wl skipfish.wl
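As a quick sanity check that the build succeeded, the freshly built binary can be run with no arguments; it should refuse to start a scan and print its version banner and usage text (the same text reproduced in the SomeParams section below). This assumes the build left the skipfish executable in the current directory:
# prints version/usage information and exits; no scan is performed
./skipfish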
Use
# one of the bundled dictionaries was copied above (skipfish.wl) to use for the scan
./skipfish -o data http://mall.midea.com/detail/index
# data is the output directory; when the scan finishes, open data/index.html to view the results
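Note that the help text quoted under SomeParams below is from skipfish 2.10b, where both -W (wordlist) and -o (output directory) are required. With that version the same scan would be written roughly as follows, reusing the skipfish.wl dictionary copied during installation (the target URL is simply the example used above):
./skipfish -W skipfish.wl -o data http://mall.midea.com/detail/index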
SomeParams
skipfish web application scanner - version 2.10b
Usage: /home/admin/workspace/skipfish/skipfish [ options ... ] -W wordlist -o output_dir start_url [ start_url2 ... ]

Authentication and access options:
  -A user:pass      - use specified HTTP authentication credentials
  -F host=IP        - pretend that 'host' resolves to 'IP'
  -C name=val       - append a custom cookie to all requests
  -H name=val       - append a custom HTTP header to all requests
  -b (i|f|p)        - use headers consistent with MSIE / Firefox / iPhone
  -N                - do not accept any new cookies
  --auth-form url   - form authentication URL
  --auth-user user  - form authentication user
  --auth-pass pass  - form authentication password
  --auth-verify-url - URL for in-session detection

Crawl scope options:
  -d max_depth      - maximum crawl tree depth (16)
  -c max_child      - maximum children to index per node (512)
  -x max_desc       - maximum descendants to index per branch (8192)
  -r r_limit        - max total number of requests to send (100000000)
  -p crawl%         - node and link crawl probability (100%)
  -q hex            - repeat probabilistic scan with given seed
  -I string         - only follow URLs matching 'string'
  -X string         - exclude URLs matching 'string'
  -K string         - do not fuzz parameters named 'string'
  -D domain         - crawl cross-site links to another domain
  -B domain         - trust, but do not crawl, another domain
  -Z                - do not descend into 5xx locations
  -O                - do not submit any forms
  -P                - do not parse HTML, etc, to find new links

Reporting options:
  -o dir            - write output to specified directory (required)
  -M                - log warnings about mixed content / non-SSL passwords
  -E                - log all HTTP/1.0 / HTTP/1.1 caching intent mismatches
  -U                - log all external URLs and e-mails seen
  -Q                - completely suppress duplicate nodes in reports
  -u                - be quiet, disable realtime progress stats
  -v                - enable runtime logging (to stderr)

Dictionary management options:
  -W wordlist       - use a specified read-write wordlist (required)
  -S wordlist       - load a supplemental read-only wordlist
  -L                - do not auto-learn new keywords for the site
  -Y                - do not fuzz extensions in directory brute-force
  -R age            - purge words hit more than 'age' scans ago
  -T name=val       - add new form auto-fill rule
  -G max_guess      - maximum number of keyword guesses to keep (256)
  -z sigfile        - load signatures from this file

Performance settings:
  -g max_conn       - max simultaneous TCP connections, global (40)
  -m host_conn      - max simultaneous connections, per target IP (10)
  -f max_fail       - max number of consecutive HTTP errors (100)
  -t req_tmout      - total request response timeout (20 s)
  -w rw_tmout       - individual network I/O timeout (10 s)
  -i idle_tmout     - timeout on idle HTTP connections (10 s)
  -s s_limit        - response size limit (400000 B)
  -e                - do not keep binary responses for reporting

Other settings:
  -l max_req        - max requests per second (0.000000)
  -k duration       - stop scanning after the given duration h:m:s
  --config file     - load the specified configuration file
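As a sketch of how several of these options combine in a single run (the credentials, cookie value, and target URL below are made-up placeholders, not values from the original notes):
# -A sends HTTP auth credentials, -C attaches a custom cookie to every request,
# -b f uses Firefox-like headers, -d 8 caps crawl depth, -m 5 caps per-IP connections,
# -X /logout keeps the crawler away from URLs containing "/logout"
./skipfish -W skipfish.wl -o report_dir -A admin:secret -C "SESSIONID=abc123" -b f -d 8 -m 5 -X /logout http://example.com/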
Reprinted from the Epubit community (异步社区).
Original article: https://www.epubit.com/articleDetails?id=N2ac06b30-4da0-42b8-b8c3-9e5ddd579590