1. Introduction
Scrapy is a classic crawling framework that developers love: its clean, efficient design has made it the go-to choice for building serious crawling projects. In this article I'll walk through the framework end to end, using a forum site as the example.
People used to start from the low-level building blocks; nowadays many reach for a framework right away, so they know how things work but not why, and the underlying principles get skipped.
Target site (feel free to practice on it):
aHR0cHM6Ly9mb3J1bS5heGlzaGlzdG9yeS5jb20v
It's an overseas BBS forum, a case I picked at random from crawlers I had written before. A few years back, while working on public-opinion monitoring projects, I wrote a great many crawlers for domestic and overseas social media, forums, and news sites.
2. Capturing and Analyzing the Login Request
First, open the site. It requires login, so let's deal with that first: build a simple login request and capture it for analysis:
The capture above shows the parameters submitted with the login request. Next, we need to build and implement the login flow in the Spider of our Scrapy project.
3. Submitting the Login Request with Scrapy
The parameters are all plaintext and fairly simple; the only interesting one, sid, is not produced by any encryption either and can be taken straight from the HTML.
In general, a parameter that looks like ciphertext is not necessarily generated by an encryption algorithm; very often you can find it directly in the HTML or in the response of another interface.
The sid is obtained as follows:
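As a minimal sketch (assuming, as phpBB usually does, that the login form carries a hidden <input name="sid"> field), it can be pulled with a one-line XPath:

    # Hedged sketch: read the sid from the login form's hidden input
    # (assumes a phpBB-style <input type="hidden" name="sid" value="..."> exists)
    sid = response.xpath("//form[@id='login']//input[@name='sid']/@value").get(default="")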
Now let's write the login part of the Scrapy spider. The implementation is shown below:
# Inside the Spider class; requires: import re  and  from scrapy import Request, FormRequest
def parse(self, response):
    # The sid is echoed back in the phpbb3_*_sid cookie of the first response
    text = b";".join(response.headers.getlist('Set-Cookie')).decode()
    pa = re.compile("phpbb3_lzhqa_sid=(.*?);")
    sid = pa.findall(text)[0]
    response.meta['sid'] = sid
    login_url = 'https://forum.axishistory.com/ucp.php?mode=login'
    yield Request(login_url, meta=response.meta, callback=self.parse_login)

def parse_login(self, response):
    sid = response.meta['sid']
    username = 'your_username'   # your forum username
    password = 'your_password'   # your forum password
    formdata = {
        "username": username,
        "password": password,
        "sid": sid,
        "redirect": "index.php",
        "login": "Login",
    }
    yield FormRequest.from_response(response, formid='login', formdata=formdata, callback=self.parse_after_login)
Here, the parse callback pulls the sid from the response to the start_urls request and then hands off to parse_login, which performs the simulated login.
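The parse_after_login callback referenced above isn't shown in the original snippet; a minimal sketch of it (the logout-link check and the section URL below are my own assumptions, not taken from the original project) could verify the login and then kick off the list-page crawl:

def parse_after_login(self, response):
    # Rough success check: a logged-in phpBB page exposes a logout link
    if not response.xpath("//a[contains(@href, 'mode=logout')]"):
        self.logger.error("Login appears to have failed, check credentials and sid")
        return
    # Placeholder section URL; point this at the board you actually want to crawl
    list_url = "https://forum.axishistory.com/viewforum.php?f=1"
    yield Request(list_url, meta=response.meta, callback=self.parse_page_list)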
A quick note on the formid argument: in an HTML document a form is usually defined with a <form> tag, which can carry an id attribute; that id is the form's ID. Here is an HTML example:
<form id="login" method="post" action="/login">
    <!-- other form fields -->
    <input type="text" name="username">
    <input type="password" name="password">
    <!-- other form fields -->
    <input type="submit" value="Login">
</form>
In this example the <form> tag has an id attribute whose value is "login". The formid argument is what tells FormRequest.from_response which form to use when building the login submission.
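To make that concrete, from_response locates the form (here via formid='login'), collects the form's existing fields, overlays your formdata, and posts to the form's action URL. Doing the same thing by hand would look roughly like this (purely an illustrative sketch, with field handling simplified):

# Manual equivalent of FormRequest.from_response (illustrative sketch)
hidden = {
    inp.xpath("@name").get(): inp.xpath("@value").get(default="")
    for inp in response.xpath("//form[@id='login']//input[@type='hidden']")
}
hidden.update(formdata)  # our username/password/sid/redirect/login fields win
action = response.xpath("//form[@id='login']/@action").get()
yield FormRequest(response.urljoin(action), formdata=hidden, callback=self.parse_after_login)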
4. Parsing the List and Detail Pages
Once login is handled, the spider can go on to request the list and detail pages and parse the data. This part is mostly about writing XPath rules and doesn't demand much technically. Below, an XPath testing tool is used to work out the rule that extracts the post links from the list page:
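If you don't have an XPath testing tool at hand, scrapy shell works just as well for trying selectors against a live page before committing them to the spider (the URL below is only a placeholder section page):

# In a terminal:  scrapy shell "https://forum.axishistory.com/viewforum.php?f=1"
# then experiment with the list-page selectors interactively, e.g.:
response.xpath("//div[@class='inner']/ul/li//a[@class='topictitle']/@href").getall()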
The Scrapy list-page code looks like this:
def parse_page_list(self, response):
    pagination = response.meta.get("pagination", 1)
    details = response.xpath("//div[@class='inner']/ul/li")
    for detail in details:
        replies = detail.xpath("dl/dd[@class='posts']/text()").extract_first()
        views = detail.xpath("dl/dd[@class='views']/text()").extract_first()
        meta = response.meta
        meta["replies"] = replies
        meta["views"] = views
        detail_link = detail.xpath("dl//div[@class='list-inner']/a[@class='topictitle']/@href").extract_first()
        detail_title = detail.xpath("dl//div[@class='list-inner']/a[@class='topictitle']/text()").extract_first()
        meta["detail_title"] = detail_title
        yield Request(response.urljoin(detail_link), callback=self.parse_detail, meta=response.meta)
    next_page = response.xpath("//div[@class='pagination']/ul/li/a[@rel='next']/@href").extract_first()
    if next_page and pagination < self.pagination_num:
        meta = response.meta
        meta['pagination'] = pagination + 1
        yield Request(response.urljoin(next_page), callback=self.parse_page_list, meta=meta)
self.pagination_num is a cap on the maximum number of list pages to crawl; set it to whatever suits your needs.
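For reference, a minimal way to make that cap configurable (the spider class name and the PAGINATION_NUM settings key below are my own choices, not from the original project) is to read it from the crawler settings:

import scrapy

class ForumSpider(scrapy.Spider):
    name = "forum"
    pagination_num = 10  # default cap on how many list pages to follow

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        # allow e.g. `scrapy crawl forum -s PAGINATION_NUM=20` to override the default
        spider.pagination_num = crawler.settings.getint("PAGINATION_NUM", cls.pagination_num)
        return spider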
From the list page we collect the links to every post, and at the end of the loop we yield a request for each one, handing the response to the parsing function via callback=self.parse_detail to extract the data.
First, define the Item data structures in the project's items.py, mainly for posts, comments, and accounts, as shown below:
# items.py
from scrapy import Item, Field

class AccountItem(Item):
    account_url = Field()            # account URL
    account_id = Field()             # account id
    account_name = Field()           # account name
    nick_name = Field()              # nickname
    website_name = Field()           # forum name
    account_type = Field()           # account type, fixed to "forum"
    level = Field()                  # account level
    account_description = Field()    # account description
    account_followed_num = Field()   # number of accounts followed
    account_followed_list = Field()  # list of followed account ids
    account_focus_num = Field()      # number of followers
    account_focus_list = Field()     # list of follower ids
    regist_time = Field()            # registration time
    forum_credits = Field()          # forum credits / experience
    location = Field()               # region
    post_num = Field()               # number of posts
    reply_num = Field()              # number of replies
    msg_type = Field()
    area = Field()

class PostItem(Item):
    type = Field()            # "post"
    post_id = Field()         # post id
    title = Field()           # post title
    content = Field()         # post content
    website_name = Field()    # forum name
    category = Field()        # board/section the post belongs to
    url = Field()             # post URL
    language = Field()        # language, zh_cn|en|es
    release_time = Field()    # publish time
    collect_time = Field()    # crawl time
    account_id = Field()      # poster id
    account_name = Field()    # poster account name
    page_view_num = Field()   # post view count
    comment_num = Field()     # post reply count
    like_num = Field()        # post like count
    quote_from = Field()      # id of the post this one was quoted from
    location_info = Field()   # geolocation of the post
    images_url = Field()      # post image links
    image_file = Field()      # local storage paths of post images
    msg_type = Field()
    area = Field()

class CommentItem(Item):
    type = Field()            # "comment"
    website_name = Field()    # forum name
    post_id = Field()
    comment_id = Field()
    content = Field()         # comment content
    release_time = Field()    # comment time
    account_id = Field()      # commenter id
    account_name = Field()    # commenter name
    comment_level = Field()   # comment nesting level
    parent_id = Field()       # id of the post or comment being replied to
    like_num = Field()        # comment like count
    comment_floor = Field()   # comment floor number
    images_url = Field()      # comment image links
    image_file = Field()      # local storage paths of comment images
    msg_type = Field()
    area = Field()
Next comes the code that parses the post content itself. The parsing function is implemented as follows:
def parse_detail(self, response):
    dont_parse_post = response.meta.get("dont_parse_post")
    category = " < ".join(response.xpath("//ul[@id='nav-breadcrumbs']/li//span[@itemprop='title']/text()").extract()[1:])
    if dont_parse_post is None:
        # the first 'inner' block on the page is the post itself; the rest are comments
        msg_ele = response.xpath("//div[@id='page-body']//div[@class='inner']")[0]
        post_id = msg_ele.xpath("div//h3/a/@href").extract_first(default='').strip().replace("#p", "")
        post_item = PostItem()
        post_item["url"] = response.url
        post_item['area'] = self.name
        post_item['msg_type'] = u"贴文"  # i.e. "post"
        post_item['type'] = u"post"
        post_item["post_id"] = post_id
        post_item["language"] = 'en'
        post_item["website_name"] = self.allowed_domains[0]
        post_item["category"] = category
        post_item["title"] = response.meta.get("detail_title")
        post_item["account_name"] = msg_ele.xpath("div//strong/a[@class='username']/text()").extract_first(default='').strip()
        post_item["content"] = "".join(msg_ele.xpath("div//div[@class='content']/text()").extract()).strip()
        post_time = "".join(msg_ele.xpath("div//p[@class='author']/text()").extract()).strip()
        post_item["release_time"] = dateparser.parse(post_time).strftime('%Y-%m-%d %H:%M:%S')
        post_item["collect_time"] = time.strftime('%Y-%m-%d %H:%M:%S')  # crawl time
        user_link = msg_ele.xpath("div//strong/a[@class='username']/@href").extract_first(default='').strip()
        account_id = "".join(re.compile(r"&u=(\d+)").findall(user_link))
        post_item["account_id"] = account_id
        post_item["comment_num"] = response.meta.get("replies")
        post_item["page_view_num"] = response.meta.get("views")
        images_urls = msg_ele.xpath("div//div[@class='content']//img/@src").extract() or []
        post_item["images_url"] = [response.urljoin(url) for url in images_urls]
        post_item["image_file"] = self.image_path(post_item["images_url"])
        response.meta["post_id"] = post_id
        response.meta['account_id'] = post_item["account_id"]
        response.meta["account_name"] = post_item["account_name"]
        yield post_item  # hand the post item to the item pipeline
        full_user_link = response.urljoin(user_link)
        yield Request(full_user_link, meta=response.meta, callback=self.parse_account_info)
    for comment_item in self.parse_comments(response):
        yield comment_item
    comment_next_page = response.xpath("//div[@class='pagination']/ul/li/a[@rel='next']/@href").extract_first()
    if comment_next_page:
        response.meta["dont_parse_post"] = 1
        next_page_link = response.urljoin(comment_next_page)
        yield Request(next_page_link, callback=self.parse_detail, meta=response.meta)
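parse_detail also calls self.image_path, which the original code doesn't show; a plausible minimal sketch (the storage layout is entirely an assumption, and it presumes hashlib and os are imported) simply maps each image URL to a deterministic local path and leaves the downloading to an image pipeline:

def image_path(self, image_urls):
    # Map each image URL to a stable local path; actual downloading is expected
    # to be handled elsewhere (e.g. a Scrapy ImagesPipeline).
    paths = []
    for url in image_urls or []:
        name = hashlib.md5(url.encode("utf-8")).hexdigest()
        paths.append(os.path.join("images", self.name, name + ".jpg"))
    return paths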
The comments sit right below the post content. In the code above we also grabbed the comment pagination link comment_next_page, so we simply keep requesting and parsing the comment content:
def parse_comments(self, response):
    comments = response.xpath("//div[@id='page-body']//div[@class='inner']")
    if response.meta.get("dont_parse_post") is None:
        comments = comments[1:]  # skip the first block, which is the post itself
    for comment in comments:
        comment_item = CommentItem()
        comment_item['type'] = "comment"
        comment_item['area'] = self.name
        comment_item['msg_type'] = u"评论"  # i.e. "comment"
        comment_item['post_id'] = response.meta.get("post_id")
        comment_item["parent_id"] = response.meta.get("post_id")
        comment_item["website_name"] = self.allowed_domains[0]
        user_link = comment.xpath("div//strong/a[@class='username']/@href").extract_first(default='').strip()
        account_id = "".join(re.compile(r"&u=(\d+)").findall(user_link))
        comment_item['comment_id'] = comment.xpath("div//h3/a/@href").extract_first(default='').strip().replace("#p", "")
        comment_item['account_id'] = account_id
        comment_item['account_name'] = comment.xpath("div//strong/a[@class='username']/text()").extract_first(default='').strip()
        comment_time = "".join(comment.xpath("div//p[@class='author']/text()").extract()).strip()
        if not comment_time:
            continue
        comment_level_text = comment.xpath("div//div[@id='post_content%s']//a[contains(@href,'./viewtopic.php?p')]/text()" % comment_item['comment_id']).extract_first(default='')
        comment_item['comment_level'] = "".join(re.compile(r"\d+").findall(comment_level_text))
        comment_item['release_time'] = dateparser.parse(comment_time).strftime('%Y-%m-%d %H:%M:%S')
        comment_content_list = "".join(comment.xpath("div//div[@class='content']/text()").extract()).strip()
        comment_item['content'] = "".join(comment_content_list)
        response.meta['account_id'] = comment_item["account_id"]
        response.meta["account_name"] = comment_item["account_name"]
        yield comment_item  # hand the comment item to the item pipeline
        full_user_link = response.urljoin(user_link)
        yield Request(full_user_link, meta=response.meta, callback=self.parse_account_info)
Comment collection also includes gathering the commenting user's profile, which is done by calling the parse_account_info function, implemented as follows:
def parse_account_info(self, response):
    about_item = AccountItem()
    about_item["account_id"] = response.meta["account_id"]
    about_item["account_url"] = response.url
    about_item["account_name"] = response.meta["account_name"]
    about_item["nick_name"] = ""
    about_item["website_name"] = self.allowed_domains[0]
    about_item["account_type"] = "forum"
    about_item["level"] = ""
    account_description = "".join(response.xpath("//div[@class='inner']/div[@class='postbody']//text()").extract())
    about_item["account_description"] = account_description
    about_item["account_followed_num"] = ""
    about_item["account_followed_list"] = ""
    about_item["account_focus_num"] = ""
    about_item["account_focus_list"] = ""
    regist_time = "".join(response.xpath("//dl/dt[text()='Joined:']/following-sibling::dd[1]/text()").extract())
    about_item["regist_time"] = dateparser.parse(regist_time).strftime('%Y-%m-%d %H:%M:%S')
    about_item["forum_credits"] = ""
    location = "".join(response.xpath("//dl/dt[text()='Location:']/following-sibling::dd[1]/text()").extract())
    about_item["location"] = location
    post_num_text = response.xpath("//dl/dt[text()='Total posts:']/following-sibling::dd[1]/text()[1]").extract_first(default='')
    post_num = post_num_text.replace(",", '').strip("|").strip()
    about_item["post_num"] = post_num
    about_item["reply_num"] = ""
    about_item["msg_type"] = 'account'
    about_item["area"] = self.name
    yield about_item
Finally, from the post to its comments to the account profiles, the layered collection and callbacks produce a complete piece of structured JSON data for each record, which is then yielded to the database.
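The original post doesn't show the pipeline that does the persisting; a minimal sketch (MongoDB is just my choice for illustration, and the database/collection layout is an assumption) could look like this:

# pipelines.py -- minimal persistence sketch
import pymongo

class ForumPipeline:
    def open_spider(self, spider):
        self.client = pymongo.MongoClient("mongodb://localhost:27017")
        self.db = self.client["forum"]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # Route each record by its Item class: PostItem / CommentItem / AccountItem
        self.db[type(item).__name__].insert_one(dict(item))
        return item

Remember to enable it in settings.py with ITEM_PIPELINES = {'forum.pipelines.ForumPipeline': 300} (the module path assumes the project is named forum, matching the middleware path below).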
5. Middleware Configuration
Since the target is an overseas forum, requests need to go out through a proxy, and we solve that with our own downloader middleware:
# middlewares.py
import logging
import os
import sys

class ProxiesMiddleware:
    logfile = logging.getLogger(__name__)

    def process_request(self, request, spider):
        self.logfile.debug("entry ProxyMiddleware")
        try:
            # Attach a proxy only when the spider defines a proxy attribute
            proxy_addr = spider.proxy
            if proxy_addr:
                if request.url.startswith("http://"):
                    request.meta['proxy'] = "http://" + proxy_addr   # HTTP proxy
                elif request.url.startswith("https://"):
                    request.meta['proxy'] = "https://" + proxy_addr  # HTTPS proxy
        except Exception as e:
            exc_type, exc_obj, exc_tb = sys.exc_info()
            fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
            self.logfile.warning(u"Proxies error: %s, %s, %s, %s" %
                                 (exc_type, e, fname, exc_tb.tb_lineno))
Enable the middleware in the settings file:
DOWNLOADER_MIDDLEWARES = {
'forum.middlewares.ProxiesMiddleware': 100,
}
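One more thing: the middleware reads spider.proxy, so the spider itself has to define that attribute (the address below is just a placeholder):

# on the spider class
proxy = "127.0.0.1:7890"   # host:port; ProxiesMiddleware prepends http:// or https:// itself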
That's all for this one; time to say goodbye again. Writing these takes real effort, so please leave a like before you go. Your support is what keeps me creating, and I hope to bring you more quality articles.