
Scrapy Study Notes (1)

Scrapy


Scrapy is an application framework written for crawling websites and extracting structured data. It can be used in a wide range of programs, such as data mining, information processing, and archiving historical data. It was originally designed for page scraping (more precisely, web scraping), but it can also be used to fetch data returned by APIs (such as Amazon Associates Web Services) or as a general-purpose web crawler. Scrapy has broad uses and can be applied to data mining, monitoring, and automated testing.

Scrapy uses the Twisted asynchronous networking library to handle network communication. The overall architecture is outlined below.

Scrapy consists mainly of the following components:

  • Engine (Scrapy)
    Handles the data flow of the whole system and triggers events (the core of the framework).
  • Scheduler
    Accepts requests from the engine, pushes them onto a queue, and returns them when the engine asks again. Think of it as a priority queue of URLs (the addresses of the pages to crawl): it decides which URL to crawl next and removes duplicate URLs.
  • Downloader
    Downloads page content and hands it back to the spiders (the downloader is built on Twisted, an efficient asynchronous model).
  • Spiders
    Spiders do the real work: they extract the information you need from specific pages, the so-called items. You can also extract links from pages and let Scrapy go on to crawl the next page.
  • Item Pipeline
    Processes the items the spiders extract from pages; its main jobs are persisting items, validating them, and removing unneeded data. Once a page has been parsed by a spider, the items are sent to the item pipeline and processed by several components in a specific order.
  • Downloader Middlewares
    A hook framework between the Scrapy engine and the downloader that processes the requests and responses passing between them.
  • Spider Middlewares
    A hook framework between the Scrapy engine and the spiders that processes the spiders' response input and request output.
  • Scheduler Middlewares
    Middleware between the Scrapy engine and the scheduler that processes the requests and responses sent from the engine to the scheduler.

The Scrapy crawl flow is roughly as follows:

    1. The engine takes a URL from the scheduler for the next crawl.
    2. The engine wraps the URL in a Request and passes it to the downloader.
    3. The downloader fetches the resource and wraps it in a Response.
    4. A spider parses the Response.
    5. If an item is parsed out, it is handed to the item pipeline for further processing.
    6. If a URL is parsed out, it is handed to the scheduler and waits to be crawled.


1. Installation:

Linux:
    pip3 install scrapy

Windows:
    1. Direct install: pip3 install scrapy
    2. If that fails, Twisted needs to be installed first:
       2.1 Download Twisted from http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted
       2.2 In the download directory, run: pip3 install Twisted‑17.1.0‑cp35‑cp35m‑win_amd64.whl
    3. Install pywin32: pip install pywin32

2. Basic usage

2.1 Basic commands

1. scrapy startproject <project_name>
   - Create a project in the current directory (similar to Django).
2. scrapy genspider [-t template] <name> <domain>
   - Create a spider, e.g.:
     scrapy genspider -t basic oldboy oldboy.com
     scrapy genspider -t xmlfeed autohome autohome.com.cn
   Note: list available templates: scrapy genspider -l
         show a template:          scrapy genspider -d <template_name>
3. scrapy list
   - List the spiders in the project.
4. scrapy crawl <spider_name>
   - Run a single spider.

2.2 Project structure and spiders

project_name/
    scrapy.cfg
    project_name/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            spider1.py
            spider2.py
            spider3.py

File descriptions:

  • scrapy.cfg   The project's top-level configuration. (The settings that actually matter to the crawler live in settings.py.)
  • items.py     Data-storage templates for structured data, similar to Django models.
  • pipelines    Data-processing behaviour, e.g. persisting the structured data.
  • settings.py  Configuration file: recursion depth, concurrency, download delay, and so on.
  • spiders      Spider directory: create the spider files here and write the crawling rules.

Note: spider files are usually named after the website's domain.

import scrapy

class XiaoHuarSpider(scrapy.spiders.Spider):
    name = "xiaohuar"                        # spider name *****
    allowed_domains = ["xiaohuar.com"]       # allowed domains
    start_urls = [
        "http://www.xiaohuar.com/hua/",      # start URLs
    ]

    def parse(self, response):
        # callback invoked once a start URL has been fetched and its response is available
        pass
import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request


class DigSpider(scrapy.Spider):
    # spider name, used to launch the crawl command
    name = "dig"

    # allowed domains
    allowed_domains = ["chouti.com"]

    # start URLs
    start_urls = [
        'http://dig.chouti.com/',
    ]

    has_request_set = {}

    def parse(self, response):
        print(response.url)

        hxs = HtmlXPathSelector(response)
        page_list = hxs.select('//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        for page in page_list:
            page_url = 'http://dig.chouti.com%s' % page
            key = self.md5(page_url)
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = page_url
                obj = Request(url=page_url, method='GET', callback=self.parse)
                yield obj

    @staticmethod
    def md5(val):
        import hashlib
        ha = hashlib.md5()
        ha.update(bytes(val, encoding='utf-8'))
        key = ha.hexdigest()
        return key

Demo: a spider that follows the pagination links on dig.chouti.com

To run this spider, cd into the project directory in a terminal and run:

scrapy crawl dig --nolog  # --nolog: suppress log output

The important points in the code above:

  • Request is a class that wraps a user request; yielding it from the callback tells Scrapy to keep crawling.
  • HtmlXPathSelector structures the HTML and provides selector functionality.

yield scrapy.Request(url, callback=self.parse)

Explanation:

Because yield is used instead of return, the parse function is treated as a generator. Scrapy consumes the results produced by parse one by one and checks what type each of them is: a Request is added to the crawl queue, an Item is handed to the pipelines, and anything else raises an error. See the source code for details.
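As a concrete illustration, here is a minimal sketch of a parse method that yields both kinds of objects; the spider name, item field, and URLs below are made up for illustration:

import scrapy


class ExampleItem(scrapy.Item):
    title = scrapy.Field()


class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["http://example.com/"]

    def parse(self, response):
        # an Item is handed to the item pipelines
        item = ExampleItem()
        item["title"] = response.xpath("//title/text()").extract_first()
        yield item

        # a Request is pushed back onto the scheduler's queue
        yield scrapy.Request(response.urljoin("/page/2"), callback=self.parse)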
 

2.3 URL deduplication

Scrapy's Request defines a number of parameters in its __init__.

One of them is dont_filter=False: by default Scrapy deduplicates requests with scrapy.dupefilter.RFPDupeFilter (see the RFPDupeFilter source for details). The related settings are:

DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'
DUPEFILTER_DEBUG = False
JOBDIR = "directory where the seen-requests log is stored, e.g. /root/"  # the final path is /root/requests.seen
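If a particular request should bypass deduplication, Request accepts a dont_filter flag; a small illustrative example (the URL is just a placeholder):

yield scrapy.Request(url='http://dig.chouti.com/', callback=self.parse, dont_filter=True)  # this request is not filtered by the dupefilter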

We can also implement deduplication ourselves, which gives more flexibility in how the data is handled: the visited URLs can be kept in memory, a database, a file, a cache, and so on, and the storage can be swapped easily.

1) First, define a custom dedup class, RepeatFilter, and point DUPEFILTER_CLASS in settings.py at it instead of Scrapy's default (the module path below assumes RepeatFilter lives in a dupefilter.py inside your project package):

# settings.py
DUPEFILTER_CLASS = 'project_name.dupefilter.RepeatFilter'

2) Then create a dupefilter.py file, define the RepeatFilter class in it, and override a few methods of RFPDupeFilter:

  • def from_settings(cls, settings): defined as a classmethod; it is the first method called, and its job is to instantiate RepeatFilter.
  • def request_seen(self, request): checks whether the current request has already been seen; returns True if it has, False if it has not.
  • def open(self): called when crawling starts.
  • def close(self, reason): called when the spider finishes crawling.
  • def log(self, request, spider): logging.

Call order: 1. from_settings → 2. __init__ → 3. open → 4. request_seen → 5. close

# dupefilter.py / RepeatFilter
class RepeatFilter(object):

    def __init__(self):
        self.visited_url = set()

    @classmethod
    def from_settings(cls, settings):
        """
        Called at initialisation time
        :param settings:
        :return:
        """
        return cls()

    def request_seen(self, request):
        """
        Check whether the current request has already been visited
        :param request:
        :return: True if it has been visited; False if it has not
        """
        if request.url in self.visited_url:
            return True
        self.visited_url.add(request.url)
        return False

    def open(self):
        """
        Called when crawling starts
        :return:
        """
        print('open replication')

    def close(self, reason):
        """
        Called when the spider finishes crawling
        :param reason:
        :return:
        """
        print('close replication')

    def log(self, request, spider):
        """
        Log a duplicate request
        :param request:
        :param spider:
        :return:
        """
        print('repeat', request.url)

2.4 Pipelines

In Scrapy, items give the scraped data a structure; the items are then passed to the pipelines for processing.

Yielding an item, as below, hands it over to the pipelines: the process_item method of each registered pipeline class is called to persist the data. The execution order is determined by the numbers registered under ITEM_PIPELINES in settings.py; the smaller the number, the earlier the pipeline runs.

yield item_example  # as soon as this runs, the classes in pipelines.py take over: process_item is called to persist the data, and open_spider is called when the spider starts crawling
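For context, a minimal sketch of the items.py side is shown below; the ArticleItem class name is made up, but the field names match the pipeline example that follows:

# items.py (illustrative)
import scrapy


class ArticleItem(scrapy.Item):
    title = scrapy.Field()
    front_image_url = scrapy.Field()
    front_image_path = scrapy.Field()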

The pipeline classes are written in pipelines.py. For them to run, the classes you want to use must be registered in settings.py, for example:

# pipelines.py / ArticleImagePipeline
from scrapy.pipelines.images import ImagesPipeline
from scrapy.exceptions import DropItem


class ArticleImagePipeline(ImagesPipeline):

    def item_completed(self, results, item, info):
        if "front_image_url" in item:
            for ok, value in results:
                image_file_path = value["path"]
            item["front_image_path"] = image_file_path
        return item  # return item passes the item on to the next class; if you do not want other classes to process it, raise DropItem() to discard the item

Register it in settings.py:

ITEM_PIPELINES = {
    'ArticleSpider.pipelines.ArticleImagePipeline': 2,  # custom ArticleImagePipeline; the pipeline path must be configured correctly
}
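To make the ordering concrete, here is a hypothetical registration with two pipelines (both class names are made up); the entry with the smaller value runs first for every item:

ITEM_PIPELINES = {
    'ArticleSpider.pipelines.CleanItemPipeline': 100,   # runs first: validate/clean the item
    'ArticleSpider.pipelines.JsonWriterPipeline': 300,  # runs second: persist the cleaned item
}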

The pipeline classes also have a few methods, quite similar to the dedup class above (a sketch that puts them together follows the list):

  • def process_item(self, item, spider): called to process and persist each item
  • def from_crawler(cls, crawler): called at initialisation; reads settings and instantiates the pipeline object
  • def open_spider(self, spider): called when the spider starts
  • def close_spider(self, spider): called when the spider finishes
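A minimal sketch of a pipeline implementing these hooks; the class name, file path, and the JSON_EXPORT_PATH setting key are made up for illustration:

# pipelines.py (illustrative)
import json


class JsonWriterPipeline(object):

    def __init__(self, path):
        self.path = path
        self.file = None

    @classmethod
    def from_crawler(cls, crawler):
        # read the output path from settings; JSON_EXPORT_PATH is a made-up key
        path = crawler.settings.get('JSON_EXPORT_PATH', 'items.jl')
        return cls(path)

    def open_spider(self, spider):
        self.file = open(self.path, 'a', encoding='utf-8')

    def process_item(self, item, spider):
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item  # pass the item on to the next pipeline

    def close_spider(self, spider):
        self.file.close()

Like any other pipeline, it only runs once it has been registered in ITEM_PIPELINES.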


2.5 Cookies

# -*- coding: utf-8 -*-
import scrapy
from scrapy.http.response.html import HtmlResponse
from scrapy.http import Request
from scrapy.http.cookies import CookieJar


class ChoutiSpider(scrapy.Spider):
    name = "chouti"
    allowed_domains = ["chouti.com"]
    start_urls = (
        'http://www.chouti.com/',
    )

    def start_requests(self):
        url = 'http://dig.chouti.com/'
        yield Request(url=url, callback=self.login, meta={'cookiejar': True})

    def login(self, response):
        print(response.headers.getlist('Set-Cookie'))
        req = Request(
            url='http://dig.chouti.com/login',
            method='POST',
            headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
            body='phone=8613121758648&password=woshiniba&oneMonth=1',
            callback=self.check_login,
            meta={'cookiejar': True}
        )
        yield req

    def check_login(self, response):
        print(response.text)

Handling cookies

# -*- coding: utf-8 -*-
import scrapy
import sys
import io
from scrapy.http import Request
from scrapy.selector import Selector, HtmlXPathSelector
from ..items import ChoutiItem

sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')
from scrapy.http.cookies import CookieJar


class ChoutiSpider(scrapy.Spider):
    name = "chouti"
    allowed_domains = ["chouti.com", ]
    start_urls = ['http://dig.chouti.com/']
    cookie_dict = None

    def parse(self, response):
        cookie_obj = CookieJar()
        cookie_obj.extract_cookies(response, response.request)
        self.cookie_dict = cookie_obj._cookies  # the cookies

        # send the username and password along with the cookies
        yield Request(
            url="http://dig.chouti.com/login",
            method='POST',
            body="phone=8615131255089&password=woshiniba&oneMonth=1",
            headers={'Content-Type': "application/x-www-form-urlencoded; charset=UTF-8"},
            cookies=cookie_obj._cookies,
            callback=self.check_login
        )

    def check_login(self, response):
        print(response.text)
        yield Request(url="http://dig.chouti.com/", callback=self.good)

    def good(self, response):
        id_list = Selector(response=response).xpath('//div[@share-linkid]/@share-linkid').extract()
        for nid in id_list:
            print(nid)
            url = "http://dig.chouti.com/link/vote?linksId=%s" % nid
            yield Request(
                url=url,
                method="POST",
                cookies=self.cookie_dict,
                callback=self.show
            )
        # page_urls = Selector(response=response).xpath('//div[@id="dig_lcpage"]//a/@href').extract()
        # for page in page_urls:
        #     url = "http://dig.chouti.com%s" % page
        #     yield Request(url=url, callback=self.good)

    def show(self, response):
        print(response.text)

Example: log in to chouti.com automatically and upvote posts in bulk


2.6 Scrapy extensions: using signals to register custom behaviour at specific points

Scrapy provides extension points (hooks) that custom behaviour can be attached to. In settings.py:

EXTENSIONS = {
    'scrapy.extensions.telnet.TelnetConsole': None,
}

If you look at the TelnetConsole source, you can see that by overriding the __init__ and from_crawler methods we can define our own hooks.

scrapy.signals provides a number of signals that custom extension hooks can connect to:
engine_started = object()       # when the engine starts
engine_stopped = object()       # when the engine stops
spider_opened = object()        # when a spider is opened
spider_idle = object()          # when a spider goes idle
spider_closed = object()        # when a spider is closed
spider_error = object()         # when a spider raises an error
request_scheduled = object()    # when a request reaches the scheduler
request_dropped = object()      # when a request is dropped
response_received = object()    # when a response is received
response_downloaded = object()  # when a response is downloaded
item_scraped = object()         # when an item is scraped
item_dropped = object()         # when an item is dropped

Defining a custom hook:

Create an extensions.py file and define a MyExtension class:

from scrapy import signals


class MyExtension(object):

    def __init__(self, value):
        self.value = value

    @classmethod
    def from_crawler(cls, crawler):
        val = crawler.settings.getint('MMMM')
        ext = cls(val)

        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)

        return ext

    def spider_opened(self, spider):
        print('open')

    def spider_closed(self, spider):
        print('close')

Register it in settings.py:

EXTENSIONS = {
    # 'scrapy.extensions.telnet.TelnetConsole': None,
    'articlespider.extensions.MyExtension': 300,
}

2.7 settings.py configuration:

# -*- coding: utf-8 -*-

# Scrapy settings for step8_king project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

# 1. Bot name
BOT_NAME = 'step8_king'

# 2. Spider module paths
SPIDER_MODULES = ['step8_king.spiders']
NEWSPIDER_MODULE = 'step8_king.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# 3. Client User-Agent header
# USER_AGENT = 'step8_king (+http://www.yourdomain.com)'

# Obey robots.txt rules
# 4. Whether to obey robots.txt
# ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# 5. Number of concurrent requests
# CONCURRENT_REQUESTS = 4

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# 6. Download delay in seconds
# DOWNLOAD_DELAY = 2

# The download delay setting will honor only one of:
# 7. Number of concurrent requests per domain; the download delay is also applied per domain
# CONCURRENT_REQUESTS_PER_DOMAIN = 2
#    Number of concurrent requests per IP; if set, CONCURRENT_REQUESTS_PER_DOMAIN is ignored
#    and the download delay is applied per IP instead
# CONCURRENT_REQUESTS_PER_IP = 3

# Disable cookies (enabled by default)
# 8. Whether cookies are enabled (handled through a cookiejar)
# COOKIES_ENABLED = True
# COOKIES_DEBUG = True

# Disable Telnet Console (enabled by default)
# 9. The Telnet console can be used to inspect and control the running crawler:
#    connect with "telnet <ip> <port>" and issue commands
# TELNETCONSOLE_ENABLED = True
# TELNETCONSOLE_HOST = '127.0.0.1'
# TELNETCONSOLE_PORT = [6023,]

# 10. Default request headers
# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#     'Accept-Language': 'en',
# }

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
# 11. Item pipelines
# ITEM_PIPELINES = {
#    'step8_king.pipelines.JsonPipeline': 700,
#    'step8_king.pipelines.FilePipeline': 500,
# }

# 12. Custom extensions, invoked via signals
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#     # 'step8_king.extensions.MyExtension': 500,
# }

# 13. Maximum crawl depth; the current depth can be read from the request's meta; 0 means unlimited
# DEPTH_LIMIT = 3

# 14. Crawl order: 0 means depth-first, LIFO (the default); 1 means breadth-first, FIFO

# Last in, first out - depth-first
# DEPTH_PRIORITY = 0
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'

# First in, first out - breadth-first
# DEPTH_PRIORITY = 1
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'

# 15. Scheduler queue
# SCHEDULER = 'scrapy.core.scheduler.Scheduler'
# from scrapy.core.scheduler import Scheduler

# 16. URL dedup filter
# DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
"""
17. AutoThrottle algorithm
    from scrapy.contrib.throttle import AutoThrottle
    AutoThrottle settings:
    1. minimum delay: DOWNLOAD_DELAY
    2. maximum delay: AUTOTHROTTLE_MAX_DELAY
    3. initial download delay: AUTOTHROTTLE_START_DELAY
    4. when a request finishes downloading, take its latency, i.e. the time from opening
       the connection until the response headers are received
    5. target concurrency used in the calculation: AUTOTHROTTLE_TARGET_CONCURRENCY
        target_delay = latency / self.target_concurrency
        new_delay = (slot.delay + target_delay) / 2.0  # slot.delay is the previous delay
        new_delay = max(target_delay, new_delay)
        new_delay = min(max(self.mindelay, new_delay), self.maxdelay)
        slot.delay = new_delay
"""
# Enable AutoThrottle
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 10
# The average number of requests Scrapy should be sending in parallel to each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0

# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = True

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
"""
18. HTTP cache
    Cache requests/responses that have already been sent so they can be reused later.
    from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
    from scrapy.extensions.httpcache import DummyPolicy
    from scrapy.extensions.httpcache import FilesystemCacheStorage
"""
# Enable the HTTP cache
# HTTPCACHE_ENABLED = True

# Cache policy: cache every request; subsequent requests are served straight from the cache
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
# Cache policy: honour HTTP response headers such as Cache-Control and Last-Modified
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"

# Cache expiration time (seconds)
# HTTPCACHE_EXPIRATION_SECS = 0

# Cache directory
# HTTPCACHE_DIR = 'httpcache'

# HTTP status codes that should not be cached
# HTTPCACHE_IGNORE_HTTP_CODES = []

# Cache storage backend
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

"""
19. Proxies: set via environment variables or a custom downloader middleware
    from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware

    Option 1: use the default HttpProxyMiddleware, which reads os.environ:
        {
            http_proxy:  http://root:woshiniba@192.168.11.11:9999/
            https_proxy: http://192.168.11.11:9999/
        }

    Option 2: use a custom downloader middleware:

        import random
        import base64
        import six

        def to_bytes(text, encoding=None, errors='strict'):
            if isinstance(text, bytes):
                return text
            if not isinstance(text, six.string_types):
                raise TypeError('to_bytes must receive a unicode, str or bytes '
                                'object, got %s' % type(text).__name__)
            if encoding is None:
                encoding = 'utf-8'
            return text.encode(encoding, errors)

        class ProxyMiddleware(object):
            def process_request(self, request, spider):
                PROXIES = [
                    {'ip_port': '111.11.228.75:80', 'user_pass': ''},
                    {'ip_port': '120.198.243.22:80', 'user_pass': ''},
                    {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
                    {'ip_port': '101.71.27.120:80', 'user_pass': ''},
                    {'ip_port': '122.96.59.104:80', 'user_pass': ''},
                    {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
                ]
                proxy = random.choice(PROXIES)
                if proxy['user_pass'] is not None:
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
                    encoded_user_pass = base64.b64encode(to_bytes(proxy['user_pass']))
                    request.headers['Proxy-Authorization'] = to_bytes('Basic ') + encoded_user_pass
                    print("**************ProxyMiddleware have pass************" + proxy['ip_port'])
                else:
                    print("**************ProxyMiddleware no pass************" + proxy['ip_port'])
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])

        DOWNLOADER_MIDDLEWARES = {
            'step8_king.middlewares.ProxyMiddleware': 500,
        }
"""

"""
20. HTTPS access
    There are two situations:
    1. The site being crawled uses a trusted certificate (supported by default):
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"
    2. The site being crawled uses a custom certificate:
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"

        # https.py
        from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
        from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)

        class MySSLFactory(ScrapyClientContextFactory):
            def getCertificateOptions(self):
                from OpenSSL import crypto
                v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
                v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())
                return CertificateOptions(
                    privateKey=v1,   # PKey object
                    certificate=v2,  # X509 object
                    verify=False,
                    method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                )

    Other related classes:
        scrapy.core.downloader.handlers.http.HttpDownloadHandler
        scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
        scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
    Related settings:
        DOWNLOADER_HTTPCLIENTFACTORY
        DOWNLOADER_CLIENTCONTEXTFACTORY
"""

"""
21. Spider middlewares

    class SpiderMiddleware(object):

        def process_spider_input(self, response, spider):
            '''
            Called once the download has finished, before the response is handed to parse()
            :param response:
            :param spider:
            :return:
            '''
            pass

        def process_spider_output(self, response, result, spider):
            '''
            Called with the result returned by the spider
            :param response:
            :param result:
            :param spider:
            :return: must return an iterable containing Request or Item objects
            '''
            return result

        def process_spider_exception(self, response, exception, spider):
            '''
            Called when an exception is raised
            :param response:
            :param exception:
            :param spider:
            :return: None to let later middlewares keep handling the exception, or an iterable
                     containing Response or Item objects to hand to the scheduler or the pipelines
            '''
            return None

        def process_start_requests(self, start_requests, spider):
            '''
            Called with the spider's start requests when the crawl starts
            :param start_requests:
            :param spider:
            :return: an iterable containing Request objects
            '''
            return start_requests

    Built-in spider middlewares:
        'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
        'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
        'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
        'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
        'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,
"""
# from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
    # 'step8_king.middlewares.SpiderMiddleware': 543,
}

"""
22. Downloader middlewares

    class DownMiddleware1(object):

        def process_request(self, request, spider):
            '''
            Called for every request that needs to be downloaded, by each downloader middleware
            :param request:
            :param spider:
            :return:
                None: continue with the remaining middlewares and download the request
                Response object: stop calling process_request and start calling process_response
                Request object: stop the middleware chain and send the Request back to the scheduler
                raise IgnoreRequest: stop calling process_request and start calling process_exception
            '''
            pass

        def process_response(self, request, response, spider):
            '''
            Called with the downloaded response on its way back to the spider
            :param request:
            :param response:
            :param spider:
            :return:
                Response object: handed on to the next middleware's process_response
                Request object: stop the middleware chain; the request is rescheduled for download
                raise IgnoreRequest: Request.errback is called
            '''
            print('response1')
            return response

        def process_exception(self, request, exception, spider):
            '''
            Called when the download handler or a process_request() (downloader middleware) raises an exception
            :param request:
            :param exception:
            :param spider:
            :return:
                None: let later middlewares keep handling the exception
                Response object: stop calling the remaining process_exception methods
                Request object: stop the middleware chain; the request is rescheduled for download
            '''
            return None

    Default downloader middlewares:
        {
            'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
            'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
            'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
            'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
            'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
            'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
            'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
            'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
            'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
            'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
            'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
            'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
            'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
            'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
        }
"""
# from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#    'step8_king.middlewares.DownMiddleware1': 100,
#    'step8_king.middlewares.DownMiddleware2': 500,
# }
settings.py

2.8 Scrapy configuration: AutoThrottle and HTTP caching

"""
17. 自动限速算法from scrapy.contrib.throttle import AutoThrottle自动限速设置1. 获取最小延迟 DOWNLOAD_DELAY2. 获取最大延迟 AUTOTHROTTLE_MAX_DELAY3. 设置初始下载延迟 AUTOTHROTTLE_START_DELAY4. 当请求下载完成后,获取其"连接"时间 latency,即:请求连接到接受到响应头之间的时间5. 用于计算的... AUTOTHROTTLE_TARGET_CONCURRENCYtarget_delay = latency / self.target_concurrencynew_delay = (slot.delay + target_delay) / 2.0 # 表示上一次的延迟时间new_delay = max(target_delay, new_delay)new_delay = min(max(self.mindelay, new_delay), self.maxdelay)slot.delay = new_delay
"""# 开始自动限速
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# 初始下载延迟
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# 最大下载延迟
# AUTOTHROTTLE_MAX_DELAY = 10
# The average number of requests Scrapy should be sending in parallel to each remote server
# 平均每秒并发数
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
setting.py 动态限速

"""
18. 启用缓存目的用于将已经发送的请求或相应缓存下来,以便以后使用from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddlewarefrom scrapy.extensions.httpcache import DummyPolicyfrom scrapy.extensions.httpcache import FilesystemCacheStorage
"""
# 是否启用缓存策略
# HTTPCACHE_ENABLED = True# 缓存策略:所有请求均缓存,下次在请求直接访问原来的缓存即可
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
# 缓存策略:根据Http响应头:Cache-Control、Last-Modified 等进行缓存的策略
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"# 缓存超时时间
# HTTPCACHE_EXPIRATION_SECS = 0# 缓存保存路径
# HTTPCACHE_DIR = 'httpcache'# 缓存忽略的Http状态码
# HTTPCACHE_IGNORE_HTTP_CODES = []# 缓存存储的插件
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
缓存


2.9 Scrapy proxies: default and custom

"""
19. 代理,需要在环境变量中设置from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware方式一:使用默认os.environ{http_proxy:http://root:woshiniba@192.168.11.11:9999/https_proxy:http://192.168.11.11:9999/}方式二:使用自定义下载中间件def to_bytes(text, encoding=None, errors='strict'):if isinstance(text, bytes):return textif not isinstance(text, six.string_types):raise TypeError('to_bytes must receive a unicode, str or bytes ''object, got %s' % type(text).__name__)if encoding is None:encoding = 'utf-8'return text.encode(encoding, errors)class ProxyMiddleware(object):def process_request(self, request, spider):PROXIES = [{'ip_port': '111.11.228.75:80', 'user_pass': ''},{'ip_port': '120.198.243.22:80', 'user_pass': ''},{'ip_port': '111.8.60.9:8123', 'user_pass': ''},{'ip_port': '101.71.27.120:80', 'user_pass': ''},{'ip_port': '122.96.59.104:80', 'user_pass': ''},{'ip_port': '122.224.249.122:8088', 'user_pass': ''},]proxy = random.choice(PROXIES)if proxy['user_pass'] is not None:request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])encoded_user_pass = base64.encodestring(to_bytes(proxy['user_pass']))request.headers['Proxy-Authorization'] = to_bytes('Basic ' + encoded_user_pass)print "**************ProxyMiddleware have pass************" + proxy['ip_port']else:print "**************ProxyMiddleware no pass************" + proxy['ip_port']request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])DOWNLOADER_MIDDLEWARES = {'step8_king.middlewares.ProxyMiddleware': 500,}"""
View Code


2.10 Scrapy custom HTTPS certificates

"""
20. Https访问Https访问时有两种情况:1. 要爬取网站使用的可信任证书(默认支持)DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"2. 要爬取网站使用的自定义证书DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"# https.pyfrom scrapy.core.downloader.contextfactory import ScrapyClientContextFactoryfrom twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)class MySSLFactory(ScrapyClientContextFactory):def getCertificateOptions(self):from OpenSSL import cryptov1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())return CertificateOptions(privateKey=v1,  # pKey对象certificate=v2,  # X509对象verify=False,method=getattr(self, 'method', getattr(self, '_ssl_method', None)))其他:相关类scrapy.core.downloader.handlers.http.HttpDownloadHandlerscrapy.core.downloader.webclient.ScrapyHTTPClientFactoryscrapy.core.downloader.contextfactory.ScrapyClientContextFactory相关配置DOWNLOADER_HTTPCLIENTFACTORYDOWNLOADER_CLIENTCONTEXTFACTORY"""
View Code

2.11 Scrapy middlewares

1) Downloader middlewares

"""
22. 下载中间件class DownMiddleware1(object):def process_request(self, request, spider):'''请求需要被下载时,经过所有下载器中间件的process_request调用:param request::param spider::return:None,继续后续中间件去下载;Response对象,停止process_request的执行,开始执行process_responseRequest对象,停止中间件的执行,将Request重新调度器raise IgnoreRequest异常,停止process_request的执行,开始执行process_exception'''passdef process_response(self, request, response, spider):'''spider处理完成,返回时调用:param response::param result::param spider::return:Response 对象:转交给其他中间件process_responseRequest 对象:停止中间件,request会被重新调度下载raise IgnoreRequest 异常:调用Request.errback'''print('response1')return responsedef process_exception(self, request, exception, spider):'''当下载处理器(download handler)或 process_request() (下载中间件)抛出异常:param response::param exception::param spider::return:None:继续交给后续中间件处理异常;Response对象:停止后续process_exception方法Request对象:停止中间件,request将会被重新调用下载'''return None默认下载中间件{'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,}"""
View Code

2) Spider middlewares

"""
21. 爬虫中间件class SpiderMiddleware(object):def process_spider_input(self,response, spider):'''下载完成,执行,然后交给parse处理:param response: :param spider: :return: '''passdef process_spider_output(self,response, result, spider):'''spider处理完成,返回时调用:param response::param result::param spider::return: 必须返回包含 Request 或 Item 对象的可迭代对象(iterable)'''return resultdef process_spider_exception(self,response, exception, spider):'''异常调用:param response::param exception::param spider::return: None,继续交给后续中间件处理异常;含 Response 或 Item 的可迭代对象(iterable),交给调度器或pipeline'''return Nonedef process_start_requests(self,start_requests, spider):'''爬虫启动时调用:param start_requests::param spider::return: 包含 Request 对象的可迭代对象'''return start_requests内置爬虫中间件:'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,"""
View Code

Reposted from: https://www.cnblogs.com/Eric15/articles/9733824.html
