
JD.com Crawler


At first, looking at products on JD.com, I noticed that a lot of the information sits right in the page source, so I assumed large-scale crawling would be a bit easier than Taobao. JD then fooled me countless times, and the spider ended up taking nearly six hours to write. First, the GitHub repository: https://github.com/xiaobeibei26/jingdong


A few words about the site first. Type any product category into the search box on the home page and you will usually get 100 pages of results, except for fairly rare products, as shown below.

[Screenshot: a search results page showing 100 pages of products]

Now for the analysis. By default, when you search for a product from the home page, the results page initially contains only 30 items. Scrolling the page down triggers an AJAX request that renders the remaining 30 items. Of the 60 items on a page, roughly three are ads, so there are usually about 57 valid products. This AJAX request is the hard part of the crawl; let's take a look at it.

[Screenshot: the AJAX request captured in the browser's developer tools]

Looking at the request, my first instinct was that most of the parameters could be dropped to leave a much simpler URL.

Without thinking much, I deleted a lot of parameters and sent the request directly. After a long debugging session, the items left in the database after deduplication amounted to only half of each page. On closer inspection, the URLs were different but the products they returned were identical. I went back over the AJAX request, fiddled with it for a long time, and finally realized that each number at the end of the URL is the ID of one product, and those IDs are hidden in the items loaded when the page first opens, as shown below.

[Screenshot: product IDs (data-pid) embedded in the initial page source]

Compare this with the parameters of the AJAX request:

[Screenshot: the AJAX request parameters, including show_items]
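Outside Scrapy, the same two-step flow can be sketched in a few lines. This is only an illustration under assumptions: it uses the requests and lxml libraries, and the URL parameters simply mirror the ones the spider below uses (JD may change them at any time).

# A minimal sketch of the two-step request flow (not the author's code).
# Assumes the requests and lxml packages are installed.
import requests
from lxml import etree

headers = {'User-Agent': 'Mozilla/5.0', 'Referer': 'https://www.jd.com/'}
key, page = '裤子', 1

# Step 1: the normal search page returns the first 30 products.
first_half = requests.get('https://search.jd.com/Search',
                          params={'keyword': key, 'enc': 'utf-8', 'page': page},
                          headers=headers)
tree = etree.HTML(first_half.text)
pids = tree.xpath('//*[@id="J_goodsList"]/ul/li/@data-pid')  # IDs of those 30 products

# Step 2: the s_new.php AJAX endpoint returns the other 30 products,
# but only if show_items carries the IDs collected in step 1.
second_half = requests.get('https://search.jd.com/s_new.php',
                           params={'keyword': key, 'enc': 'utf-8', 'page': page + 1,
                                   'scrolling': 'y', 'pos': 30,
                                   'show_items': ','.join(pids)},
                           headers=headers)
print(len(pids), 'IDs sent, status', second_half.status_code)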

So I had to rework the crawler logic and rewrite the code, which cost another two hours.

After that the spider could finally extract a full page of products in one pass. One last note: on JD, the first results page uses page values 1 and 2, the second page uses 3 and 4, and so on, which is a little unusual. Here is the main spider:

# -*- coding: utf-8 -*-
import scrapy

from jingdong.items import JingdongItem


class JdSpider(scrapy.Spider):
    name = "jd"
    allowed_domains = ["www.jd.com"]
    start_urls = ['http://www.jd.com/']
    # First half of a results page (rendered server-side).
    search_url1 = 'https://search.jd.com/Search?keyword={key}&enc=utf-8&page={page}'
    # Second half of a results page (the AJAX request triggered by scrolling).
    # search_url2 = 'https://search.jd.com/s_new.php?keyword={key}&enc=utf-8&page={page}&scrolling=y&pos=30&show_items={goods_items}'
    search_url2 = 'https://search.jd.com/s_new.php?keyword={key}&enc=utf-8&page={page}&s=26&scrolling=y&pos=30&tpl=3_L&show_items={goods_items}'
    shop_url = 'http://mall.jd.com/index-{shop_id}.html'

    def start_requests(self):
        key = '裤子'
        for num in range(1, 100):
            # JD numbers the two halves of logical page n as 2n-1 and 2n.
            page1 = str(2 * num - 1)
            page2 = str(2 * num)
            yield scrapy.Request(url=self.search_url1.format(key=key, page=page1),
                                 callback=self.parse, dont_filter=True)
            yield scrapy.Request(url=self.search_url1.format(key=key, page=page1),
                                 callback=self.get_next_half,
                                 meta={'page2': page2, 'key': key}, dont_filter=True)
            # dont_filter=True is required: the same URL is requested twice (once to
            # parse it, once to collect the product IDs), and search.jd.com is not
            # covered by allowed_domains, so Scrapy would otherwise drop the requests.

    def get_next_half(self, response):
        try:
            # The data-pid of each of the first 30 products; the AJAX request
            # needs them in its show_items parameter.
            items = response.xpath('//*[@id="J_goodsList"]/ul/li/@data-pid').extract()
            key = response.meta['key']
            page2 = response.meta['page2']
            goods_items = ','.join(items)
            yield scrapy.Request(url=self.search_url2.format(key=key, page=page2, goods_items=goods_items),
                                 callback=self.next_parse, dont_filter=True)
            # dont_filter=True is needed here as well: search.jd.com is not in
            # allowed_domains (only www.jd.com is), so the offsite middleware would
            # drop this request; the earlier requests get through for the same reason.
        except Exception as e:
            print('no data')

    def parse(self, response):
        all_goods = response.xpath('//div[@id="J_goodsList"]/ul/li')
        for one_good in all_goods:
            item = JingdongItem()
            try:
                data = one_good.xpath('div/div/a/em')
                item['title'] = data.xpath('string(.)').extract()[0]  # all text inside the tag
                item['comment_count'] = one_good.xpath('div/div[@class="p-commit"]/strong/a/text()').extract()[0]  # comment count
                item['goods_url'] = 'http:' + one_good.xpath('div/div[4]/a/@href').extract()[0]  # product link
                item['shops_id'] = one_good.xpath('div/div[@class="p-shop"]/@data-shopid').extract()[0]  # shop ID
                item['shop_url'] = self.shop_url.format(shop_id=item['shops_id'])
                goods_id = one_good.xpath('div/div[2]/div/ul/li[1]/a/img/@data-sku').extract()[0]
                if goods_id:
                    item['goods_id'] = goods_id
                price = one_good.xpath('div/div[3]/strong/i/text()').extract()  # price
                if price:
                    # Some items have zero comments and no price in the page source;
                    # they seem to be temporary promoted items (three or four per page), so skip them.
                    item['price'] = price[0]
                    yield item
            except Exception as e:
                pass

    def next_parse(self, response):
        # The AJAX response is an HTML fragment made of bare <li> elements.
        all_goods = response.xpath('/html/body/li')
        for one_good in all_goods:
            item = JingdongItem()
            try:
                data = one_good.xpath('div/div/a/em')
                item['title'] = data.xpath('string(.)').extract()[0]  # all text inside the tag
                item['comment_count'] = one_good.xpath('div/div[@class="p-commit"]/strong/a/text()').extract()[0]  # comment count
                item['goods_url'] = 'http:' + one_good.xpath('div/div[4]/a/@href').extract()[0]  # product link
                item['shops_id'] = one_good.xpath('div/div[@class="p-shop"]/@data-shopid').extract()[0]  # shop ID
                item['shop_url'] = self.shop_url.format(shop_id=item['shops_id'])
                goods_id = one_good.xpath('div/div[2]/div/ul/li[1]/a/img/@data-sku').extract()[0]
                if goods_id:
                    item['goods_id'] = goods_id
                price = one_good.xpath('div/div[3]/strong/i/text()').extract()  # price
                if price:
                    # Same as above: skip promoted items without a price.
                    item['price'] = price[0]
                    yield item
            except Exception as e:
                pass
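The JingdongItem imported at the top is defined in items.py, which the post does not show. A minimal version consistent with the fields the spider fills in (the field names come from the code above; everything else is an assumption) would be:

# items.py -- a minimal sketch; the version in the GitHub repository may differ
import scrapy

class JingdongItem(scrapy.Item):
    title = scrapy.Field()          # product title
    comment_count = scrapy.Field()  # number of comments
    goods_url = scrapy.Field()      # product page link
    shops_id = scrapy.Field()       # shop ID
    shop_url = scrapy.Field()       # shop page link
    goods_id = scrapy.Field()       # product SKU ID
    price = scrapy.Field()          # price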

The pipeline code:

import pymysql


class JingdongPipeline(object):
    # MongoDB version, kept for reference:
    # def __init__(self):
    #     self.client = MongoClient()
    #     self.database = self.client['jingdong']
    #     self.db = self.database['jingdong_infomation']
    #
    # def process_item(self, item, spider):
    #     # Upsert keyed on goods_id: update the record if it exists, insert otherwise.
    #     self.db.update({'goods_id': item['goods_id']}, dict(item), True)
    #     return item
    #
    # def close_spider(self, spider):
    #     self.client.close()

    def __init__(self):
        self.conn = pymysql.connect(host='127.0.0.1', port=3306, user='root', passwd='root',
                                    db='jingdong', charset='utf8')
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        try:  # some titles repeat and some fields may be missing, hence the exception handling
            title = item['title']
            comment_count = item['comment_count']  # comment count
            shop_url = item['shop_url']            # shop link
            price = item['price']
            goods_url = item['goods_url']
            shops_id = item['shops_id']
            goods_id = int(item['goods_id'])
            # sql = 'insert into jingdong_goods(title,comment_count,shop_url,price,goods_url,shops_id) VALUES (%(title)s,%(comment_count)s,%(shop_url)s,%(price)s,%(goods_url)s,%(shops_id)s,)'
            try:
                self.cursor.execute(
                    "insert into jingdong_goods(title,comment_count,shop_url,price,goods_url,shops_id,goods_id) "
                    "values (%s,%s,%s,%s,%s,%s,%s)",
                    (title, comment_count, shop_url, price, goods_url, shops_id, goods_id))
                self.conn.commit()
            except Exception as e:
                pass
        except Exception as e:
            pass
        return item
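The insert statement assumes a jingdong_goods table already exists; the post does not show its schema. One possible layout that matches the columns used above is sketched below. The UNIQUE key on goods_id is an assumption: with it, duplicate products make the insert fail, and the pipeline's bare except silently skips them.

# A sketch of the target table (the original schema is not shown in the post).
import pymysql

conn = pymysql.connect(host='127.0.0.1', port=3306, user='root', passwd='root',
                       db='jingdong', charset='utf8')
with conn.cursor() as cursor:
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS jingdong_goods (
            id            INT AUTO_INCREMENT PRIMARY KEY,
            title         VARCHAR(255),
            comment_count VARCHAR(32),
            shop_url      VARCHAR(255),
            price         VARCHAR(32),
            goods_url     VARCHAR(255),
            shops_id      VARCHAR(64),
            goods_id      BIGINT,
            UNIQUE KEY uniq_goods_id (goods_id)
        ) DEFAULT CHARSET = utf8
    """)
conn.commit()
conn.close()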

The run results:

[Screenshot: rows inserted into the MySQL database, shown a thousand per page]

After running for a few minutes the crawler had collected tens of thousands of trouser listings. JD really does sell a lot of trousers.

Author: 蜗牛仔
Link: https://www.jianshu.com/p/e938a78b2f75
Source: 简书 (Jianshu)

