How to Scrape City Rental Listings with Python
Source: 亿速云
2024-04-06 13:06:30
There are already many Python tutorials on golang学习网, and reading them is a good way to pick up new approaches. This article, "How to Scrape City Rental Listings with Python", aims to do the same. If it helps your learning, please leave a comment and share it.
Approach: write a single-threaded crawler first and confirm it scrapes successfully, then optimize it into a multithreaded crawler, and finally store the results in a database.
The example below scrapes rental listings for Zhengzhou (郑州) from zz.zu.fang.com.
Note: this hands-on project is for learning purposes only. To avoid putting too much load on the website, set num in the code to a small value and keep the thread pools small, as sketched below.
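For instance, a minimal sketch of such throttling, reusing the getHtml/getNum/getLink helpers and init_url defined in the scripts below (MAX_PAGES and REQUEST_DELAY are our own illustrative names, not part of the original code):

```python
import time

MAX_PAGES = 3        # illustrative cap on how many listing pages to crawl
REQUEST_DELAY = 1.0  # seconds to pause between page fetches

# crawl at most MAX_PAGES listing pages, pausing between requests
num = min(getNum(getHtml(init_url)), MAX_PAGES)
for i in range(num):
    text = getHtml(f'https://zz.zu.fang.com/house/i3{i+1}/')
    getLink(text)
    time.sleep(REQUEST_DELAY)  # throttle so the site is not hammered
```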
1. Single-threaded crawler
```python
# Use a requests Session instead of bare requests
# Parsing library: BeautifulSoup (bs4)
import requests
# from lxml import etree  # alternative: parse with XPath
from bs4 import BeautifulSoup
from urllib import parse
import re
import time

headers = {
    'referer': 'https://zz.zu.fang.com/',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36',
    'cookie': 'global_cookie=ffzvt3kztwck05jm6twso2wjw18kl67hqft; city=zz; integratecover=1; __utma=147393320.427795962.1613371106.1613371106.1613371106.1; __utmc=147393320; __utmz=147393320.1613371106.1.1.utmcsr=zz.fang.com|utmccn=(referral)|utmcmd=referral|utmcct=/; __utmt_t0=1; __utmt_t1=1; __utmt_t2=1; ASP.NET_SessionId=aamzdnhzct4i5mx3ak4cyoyp; Rent_StatLog=23d82b94-13d6-4601-9019-ce0225c092f6; Captcha=61584F355169576F3355317957376E4F6F7552365351342B7574693561766E63785A70522F56557370586E3376585853346651565256574F37694B7074576B2B34536C5747715856516A4D3D; g_sourcepage=zf_fy%5Elb_pc; unique_cookie=U_ffzvt3kztwck05jm6twso2wjw18kl67hqft*6; __utmb=147393320.12.10.1613371106'
}
data = {'agentbid': ''}

session = requests.session()
session.headers = headers

# Fetch a page
def getHtml(url):
    try:
        res = session.get(url)  # renamed from `re`, which shadowed the re module
        res.encoding = res.apparent_encoding
        return res.text
    except requests.RequestException as e:
        print(e)

# Get the total number of result pages
def getNum(text):
    soup = BeautifulSoup(text, 'lxml')
    txt = soup.select('.fanye .txt')[0].text
    # extract the digits from "共**页" (the total page count)
    num = int(re.search(r'\d+', txt).group(0))  # cast to int so it works with range()
    return num

# Collect the detail-page links from a listing page
def getLink(text):  # parameter was misspelled `tex` in the original
    soup = BeautifulSoup(text, 'lxml')
    links = soup.select('.title a')
    for link in links:
        href = parse.urljoin('https://zz.zu.fang.com/', link['href'])
        hrefs.append(href)

# Parse a detail page
def parsePage(url):
    res = session.get(url)
    if res.status_code == 200:
        res.encoding = res.apparent_encoding
        soup = BeautifulSoup(res.text, 'lxml')
        try:
            title = soup.select('div .title')[0].text.strip().replace(' ', '')
            price = soup.select('div .trl-item')[0].text.strip()
            block = soup.select('.rcont #agantzfxq_C02_08')[0].text.strip()
            building = soup.select('.rcont #agantzfxq_C02_07')[0].text.strip()
            try:
                address = soup.select('.trl-item2 .rcont')[2].text.strip()
            except IndexError:
                address = soup.select('.trl-item2 .rcont')[1].text.strip()
            detail1 = soup.select('.clearfix')[4].text.strip().replace('\n\n\n', ',').replace('\n', '')
            detail2 = soup.select('.clearfix')[5].text.strip().replace('\n\n\n', ',').replace('\n', '')
            detail = detail1 + detail2
            name = soup.select('.zf_jjname')[0].text.strip()
            buserid = re.search(r"buserid: '(\d+)'", res.text).group(1)
            phone = getPhone(buserid)
            print(title, price, block, building, address, detail, name, phone)
            house = (title, price, block, building, address, detail, name, phone)
            info.append(house)
        except Exception:
            pass  # skip pages whose layout doesn't match
    else:
        print(res.status_code, res.text)  # was `re.status_code` in the original

# Get the agent's (virtual) phone number
def getPhone(buserid):
    url = 'https://zz.zu.fang.com/RentDetails/Ajax/GetAgentVirtualMobile.aspx'
    data['agentbid'] = buserid
    res = session.post(url, data=data)
    if res.status_code == 200:
        return res.text
    else:
        print(res.status_code)
        return

if __name__ == '__main__':
    start_time = time.time()
    hrefs = []
    info = []
    init_url = 'https://zz.zu.fang.com/house/'
    num = getNum(getHtml(init_url))
    for i in range(0, num):
        url = f'https://zz.zu.fang.com/house/i3{i+1}/'
        text = getHtml(url)
        getLink(text)
    print(hrefs)
    for href in hrefs:
        parsePage(href)
    print("Fetched %d records in total" % len(info))
    print("Total time: {}".format(time.time() - start_time))
    session.close()
```
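In practice a crawl like this occasionally hits timeouts or connection resets. One way to harden the fetching step is a small retry wrapper around the shared session; the sketch below is our own addition (retry_get and its parameters are not part of the original script) and assumes the session object defined above:

```python
import time
import requests

def retry_get(url, retries=3, backoff=2.0):
    """Fetch url via the shared session, retrying transient failures; returns None if all attempts fail."""
    for attempt in range(retries):
        try:
            res = session.get(url, timeout=10)
            if res.status_code == 200:
                res.encoding = res.apparent_encoding
                return res.text
            print(res.status_code)
        except requests.RequestException as e:
            print(f"attempt {attempt + 1} failed: {e}")
        time.sleep(backoff * (attempt + 1))  # back off longer after each failure
    return None
```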
2. Optimizing into a multithreaded crawler
```python
# Use a requests Session instead of bare requests
# Parsing library: BeautifulSoup (bs4)
# Concurrency: concurrent.futures
import requests
# from lxml import etree  # alternative: parse with XPath
from bs4 import BeautifulSoup
from concurrent.futures import ThreadPoolExecutor
from urllib import parse
import re
import time

headers = {
    'referer': 'https://zz.zu.fang.com/',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36',
    'cookie': 'global_cookie=ffzvt3kztwck05jm6twso2wjw18kl67hqft; integratecover=1; city=zz; keyWord_recenthousezz=%5b%7b%22name%22%3a%22%e6%96%b0%e5%af%86%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a014868%2f%22%2c%22sort%22%3a1%7d%2c%7b%22name%22%3a%22%e4%ba%8c%e4%b8%83%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a014864%2f%22%2c%22sort%22%3a1%7d%2c%7b%22name%22%3a%22%e9%83%91%e4%b8%9c%e6%96%b0%e5%8c%ba%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a0842%2f%22%2c%22sort%22%3a1%7d%5d; __utma=147393320.427795962.1613371106.1613558547.1613575774.5; __utmc=147393320; __utmz=147393320.1613575774.5.4.utmcsr=zz.fang.com|utmccn=(referral)|utmcmd=referral|utmcct=/; ASP.NET_SessionId=vhrhxr1tdatcc1xyoxwybuwv; g_sourcepage=zf_fy%5Elb_pc; Captcha=4937566532507336644D6557347143746B5A6A6B4A7A48445A422F2F6A51746C67516F31357446573052634562725162316152533247514250736F72775566574A2B33514357304B6976343D; __utmt_t0=1; __utmt_t1=1; __utmt_t2=1; __utmb=147393320.9.10.1613575774; unique_cookie=U_0l0d1ilf1t0ci2rozai9qi24k1pkl9lcmrs*4'
}
data = {'agentbid': ''}

session = requests.session()
session.headers = headers

# Fetch a page
def getHtml(url):
    res = session.get(url)
    if res.status_code == 200:
        res.encoding = res.apparent_encoding
        return res.text
    else:
        print(res.status_code)

# Get the total number of result pages
def getNum(text):
    soup = BeautifulSoup(text, 'lxml')
    txt = soup.select('.fanye .txt')[0].text
    # extract the digits from "共**页" (the total page count)
    num = int(re.search(r'\d+', txt).group(0))  # cast to int so it works with range()
    return num

# Collect the detail-page links from a listing page
def getLink(url):
    text = getHtml(url)
    soup = BeautifulSoup(text, 'lxml')
    links = soup.select('.title a')
    for link in links:
        href = parse.urljoin('https://zz.zu.fang.com/', link['href'])
        hrefs.append(href)

# Parse a detail page
def parsePage(url):
    res = session.get(url)
    if res.status_code == 200:
        res.encoding = res.apparent_encoding
        soup = BeautifulSoup(res.text, 'lxml')
        try:
            title = soup.select('div .title')[0].text.strip().replace(' ', '')
            price = soup.select('div .trl-item')[0].text.strip()
            block = soup.select('.rcont #agantzfxq_C02_08')[0].text.strip()
            building = soup.select('.rcont #agantzfxq_C02_07')[0].text.strip()
            try:
                address = soup.select('.trl-item2 .rcont')[2].text.strip()
            except IndexError:
                address = soup.select('.trl-item2 .rcont')[1].text.strip()
            detail1 = soup.select('.clearfix')[4].text.strip().replace('\n\n\n', ',').replace('\n', '')
            detail2 = soup.select('.clearfix')[5].text.strip().replace('\n\n\n', ',').replace('\n', '')
            detail = detail1 + detail2
            name = soup.select('.zf_jjname')[0].text.strip()
            buserid = re.search(r"buserid: '(\d+)'", res.text).group(1)
            phone = getPhone(buserid)
            print(title, price, block, building, address, detail, name, phone)
            house = (title, price, block, building, address, detail, name, phone)
            info.append(house)
        except Exception:
            pass  # skip pages whose layout doesn't match
    else:
        print(res.status_code, res.text)  # was `re.status_code` in the original

# Get the agent's (virtual) phone number
def getPhone(buserid):
    url = 'https://zz.zu.fang.com/RentDetails/Ajax/GetAgentVirtualMobile.aspx'
    data['agentbid'] = buserid
    res = session.post(url, data=data)
    if res.status_code == 200:
        return res.text
    else:
        print(res.status_code)
        return

if __name__ == '__main__':
    start_time = time.time()
    hrefs = []
    info = []
    init_url = 'https://zz.zu.fang.com/house/'
    num = getNum(getHtml(init_url))
    # 5 workers collect the detail-page links
    with ThreadPoolExecutor(max_workers=5) as t:
        for i in range(0, num):
            url = f'https://zz.zu.fang.com/house/i3{i+1}/'
            t.submit(getLink, url)
    print("Collected %d links" % len(hrefs))
    print(hrefs)
    # 30 workers parse the detail pages
    with ThreadPoolExecutor(max_workers=30) as t:
        for href in hrefs:
            t.submit(parsePage, href)
    print("Fetched %d records in total" % len(info))
    print("Time elapsed: {}".format(time.time() - start_time))
    session.close()
```
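A note on the shared hrefs and info lists: CPython's list.append is effectively atomic under the GIL, so the code above usually works, but any read-modify-write on shared state from worker threads should be guarded explicitly. A sketch of a lock-guarded variant of getLink (hrefs_lock is our own addition; the other names come from the script above):

```python
import threading

hrefs_lock = threading.Lock()

def getLink(url):
    text = getHtml(url)
    soup = BeautifulSoup(text, 'lxml')
    for link in soup.select('.title a'):
        href = parse.urljoin('https://zz.zu.fang.com/', link['href'])
        with hrefs_lock:  # serialize writes to the shared list
            hrefs.append(href)
```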
3. Further optimization with asyncio
```python
# Use a requests Session instead of bare requests
# Parsing library: BeautifulSoup (bs4)
# Concurrency: concurrent.futures + asyncio
import requests
# from lxml import etree  # alternative: parse with XPath
from bs4 import BeautifulSoup
from concurrent.futures import ThreadPoolExecutor
from urllib import parse
import re
import time
import asyncio

headers = {
    'referer': 'https://zz.zu.fang.com/',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36',
    'cookie': 'global_cookie=ffzvt3kztwck05jm6twso2wjw18kl67hqft; integratecover=1; city=zz; keyWord_recenthousezz=%5b%7b%22name%22%3a%22%e6%96%b0%e5%af%86%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a014868%2f%22%2c%22sort%22%3a1%7d%2c%7b%22name%22%3a%22%e4%ba%8c%e4%b8%83%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a014864%2f%22%2c%22sort%22%3a1%7d%2c%7b%22name%22%3a%22%e9%83%91%e4%b8%9c%e6%96%b0%e5%8c%ba%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a0842%2f%22%2c%22sort%22%3a1%7d%5d; __utma=147393320.427795962.1613371106.1613558547.1613575774.5; __utmc=147393320; __utmz=147393320.1613575774.5.4.utmcsr=zz.fang.com|utmccn=(referral)|utmcmd=referral|utmcct=/; ASP.NET_SessionId=vhrhxr1tdatcc1xyoxwybuwv; g_sourcepage=zf_fy%5Elb_pc; Captcha=4937566532507336644D6557347143746B5A6A6B4A7A48445A422F2F6A51746C67516F31357446573052634562725162316152533247514250736F72775566574A2B33514357304B6976343D; __utmt_t0=1; __utmt_t1=1; __utmt_t2=1; __utmb=147393320.9.10.1613575774; unique_cookie=U_0l0d1ilf1t0ci2rozai9qi24k1pkl9lcmrs*4'
}
data = {'agentbid': ''}

session = requests.session()
session.headers = headers

# Fetch a page
def getHtml(url):
    res = session.get(url)
    if res.status_code == 200:
        res.encoding = res.apparent_encoding
        return res.text
    else:
        print(res.status_code)

# Get the total number of result pages
def getNum(text):
    soup = BeautifulSoup(text, 'lxml')
    txt = soup.select('.fanye .txt')[0].text
    # extract the digits from "共**页" (the total page count)
    num = int(re.search(r'\d+', txt).group(0))  # cast to int so it works with range()
    return num

# Collect the detail-page links from a listing page
def getLink(url):
    text = getHtml(url)
    soup = BeautifulSoup(text, 'lxml')
    links = soup.select('.title a')
    for link in links:
        href = parse.urljoin('https://zz.zu.fang.com/', link['href'])
        hrefs.append(href)

# Parse a detail page
def parsePage(url):
    res = session.get(url)
    if res.status_code == 200:
        res.encoding = res.apparent_encoding
        soup = BeautifulSoup(res.text, 'lxml')
        try:
            title = soup.select('div .title')[0].text.strip().replace(' ', '')
            price = soup.select('div .trl-item')[0].text.strip()
            block = soup.select('.rcont #agantzfxq_C02_08')[0].text.strip()
            building = soup.select('.rcont #agantzfxq_C02_07')[0].text.strip()
            try:
                address = soup.select('.trl-item2 .rcont')[2].text.strip()
            except IndexError:
                address = soup.select('.trl-item2 .rcont')[1].text.strip()
            detail1 = soup.select('.clearfix')[4].text.strip().replace('\n\n\n', ',').replace('\n', '')
            detail2 = soup.select('.clearfix')[5].text.strip().replace('\n\n\n', ',').replace('\n', '')
            detail = detail1 + detail2
            name = soup.select('.zf_jjname')[0].text.strip()
            buserid = re.search(r"buserid: '(\d+)'", res.text).group(1)
            phone = getPhone(buserid)
            print(title, price, block, building, address, detail, name, phone)
            house = (title, price, block, building, address, detail, name, phone)
            info.append(house)
        except Exception:
            pass  # skip pages whose layout doesn't match
    else:
        print(res.status_code, res.text)  # was `re.status_code` in the original

# Get the agent's (virtual) phone number
def getPhone(buserid):
    url = 'https://zz.zu.fang.com/RentDetails/Ajax/GetAgentVirtualMobile.aspx'
    data['agentbid'] = buserid
    res = session.post(url, data=data)
    if res.status_code == 200:
        return res.text
    else:
        print(res.status_code)
        return

# Thread pool that collects the detail links
async def Pool1(num):
    loop = asyncio.get_event_loop()
    task = []
    with ThreadPoolExecutor(max_workers=5) as t:
        for i in range(0, num):
            url = f'https://zz.zu.fang.com/house/i3{i+1}/'
            task.append(loop.run_in_executor(t, getLink, url))
        # added: wait for the workers here (the original never awaited these
        # futures and relied on the executor's shutdown to block)
        await asyncio.gather(*task)

# Thread pool that parses the detail pages
async def Pool2(hrefs):
    loop = asyncio.get_event_loop()
    task = []
    with ThreadPoolExecutor(max_workers=30) as t:
        for href in hrefs:
            task.append(loop.run_in_executor(t, parsePage, href))
        await asyncio.gather(*task)  # added, same reason as in Pool1

if __name__ == '__main__':
    start_time = time.time()
    hrefs = []
    info = []
    task = []
    init_url = 'https://zz.zu.fang.com/house/'
    num = getNum(getHtml(init_url))
    loop = asyncio.get_event_loop()
    loop.run_until_complete(Pool1(num))
    print("Collected %d links" % len(hrefs))
    print(hrefs)
    loop.run_until_complete(Pool2(hrefs))
    loop.close()
    print("Fetched %d records in total" % len(info))
    print("Time elapsed: {}".format(time.time() - start_time))
    session.close()
```
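This version still issues blocking requests calls and merely runs them in threads. A fully asynchronous alternative would use an async HTTP client such as aiohttp; below is a minimal self-contained sketch of that pattern only (it is not the article's method, and the URLs are placeholders):

```python
import asyncio
import aiohttp

async def fetch(session, url):
    # request one page and return its body
    async with session.get(url) as resp:
        return await resp.text()

async def fetch_all(urls):
    async with aiohttp.ClientSession() as session:
        # fan out all requests on a single event-loop thread
        return await asyncio.gather(*(fetch(session, u) for u in urls))

if __name__ == '__main__':
    pages = asyncio.get_event_loop().run_until_complete(
        fetch_all(['https://example.com'] * 3))  # placeholder URLs
    print(len(pages))
```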
4. Storing the data in a MySQL database
(1) Creating the table
```python
from sqlalchemy import create_engine
from sqlalchemy import String, Integer, Column, Text
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm import scoped_session  # avoids thread-safety issues in a multithreaded crawler
from sqlalchemy.ext.declarative import declarative_base

BASE = declarative_base()  # instantiate the declarative base

engine = create_engine(
    "mysql+pymysql://root:root@127.0.0.1:3306/pytest?charset=utf8",
    max_overflow=300,  # connections allowed beyond pool_size
    pool_size=100,     # connection pool size
    echo=False,        # suppress debug output
)

class House(BASE):
    __tablename__ = 'house'
    id = Column(Integer, primary_key=True, autoincrement=True)
    title = Column(String(200))
    price = Column(String(200))
    block = Column(String(200))
    building = Column(String(200))
    address = Column(String(200))
    detail = Column(Text())
    name = Column(String(20))
    phone = Column(String(20))

BASE.metadata.create_all(engine)
Session = sessionmaker(engine)
sess = scoped_session(Session)
```
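Before wiring the model into the crawler, it can be smoke-tested by inserting and reading back one row. A sketch, assuming the code above is saved as mysqldb.py (the module name the next script imports); the sample values are made up:

```python
from mysqldb import sess, House

row = House(title='sample listing', price='1500/month', block='sample block',
            building='sample building', address='sample address',
            detail='2 bedrooms, 1 living room', name='agent', phone='123****4567')
try:
    sess.add(row)
    sess.commit()                      # write the row
    first = sess.query(House).first()  # read one row back
    print(first.title, first.price)
except Exception as e:
    sess.rollback()  # undo the failed transaction
    print(e)
```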
(2) Writing the scraped data to the database
```python
# Use a requests Session instead of bare requests
# Parsing library: BeautifulSoup (bs4)
# Concurrency: concurrent.futures + asyncio
import requests
from bs4 import BeautifulSoup
from concurrent.futures import ThreadPoolExecutor
from urllib import parse
from mysqldb import sess, House
import re
import time
import asyncio

headers = {
    'referer': 'https://zz.zu.fang.com/',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36',
    'cookie': 'global_cookie=ffzvt3kztwck05jm6twso2wjw18kl67hqft; integratecover=1; city=zz; __utmc=147393320; ASP.NET_SessionId=vhrhxr1tdatcc1xyoxwybuwv; __utma=147393320.427795962.1613371106.1613575774.1613580597.6; __utmz=147393320.1613580597.6.5.utmcsr=zz.fang.com|utmccn=(referral)|utmcmd=referral|utmcct=/; __utmt_t0=1; __utmt_t1=1; __utmt_t2=1; Rent_StatLog=c158b2a7-4622-45a9-9e69-dcf6f42cf577; keyWord_recenthousezz=%5b%7b%22name%22%3a%22%e4%ba%8c%e4%b8%83%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a014864%2f%22%2c%22sort%22%3a1%7d%2c%7b%22name%22%3a%22%e9%83%91%e4%b8%9c%e6%96%b0%e5%8c%ba%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a0842%2f%22%2c%22sort%22%3a1%7d%2c%7b%22name%22%3a%22%e7%bb%8f%e5%bc%80%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a014871%2f%22%2c%22sort%22%3a1%7d%5d; g_sourcepage=zf_fy%5Elb_pc; Captcha=6B65716A41454739794D666864397178613772676C75447A4E746C657144775A347A6D42554F446532357649643062344F6976756E563450554E59594B7833712B413579506C4B684958343D; unique_cookie=U_0l0d1ilf1t0ci2rozai9qi24k1pkl9lcmrs*14; __utmb=147393320.21.10.1613580597'
}
data = {'agentbid': ''}

session = requests.session()
session.headers = headers

# Fetch a page
def getHtml(url):
    res = session.get(url)
    if res.status_code == 200:
        res.encoding = res.apparent_encoding
        return res.text
    else:
        print(res.status_code)

# Get the total number of result pages
def getNum(text):
    soup = BeautifulSoup(text, 'lxml')
    txt = soup.select('.fanye .txt')[0].text
    # extract the digits from "共**页" (the total page count)
    num = int(re.search(r'\d+', txt).group(0))  # cast to int so it works with range()
    return num

# Collect the detail-page links from a listing page
def getLink(url):
    text = getHtml(url)
    soup = BeautifulSoup(text, 'lxml')
    links = soup.select('.title a')
    for link in links:
        href = parse.urljoin('https://zz.zu.fang.com/', link['href'])
        hrefs.append(href)

# Parse a detail page and write the record to MySQL
def parsePage(url):
    res = session.get(url)
    if res.status_code == 200:
        res.encoding = res.apparent_encoding
        soup = BeautifulSoup(res.text, 'lxml')
        try:
            title = soup.select('div .title')[0].text.strip().replace(' ', '')
            price = soup.select('div .trl-item')[0].text.strip()
            block = soup.select('.rcont #agantzfxq_C02_08')[0].text.strip()
            building = soup.select('.rcont #agantzfxq_C02_07')[0].text.strip()
            try:
                address = soup.select('.trl-item2 .rcont')[2].text.strip()
            except IndexError:
                address = soup.select('.trl-item2 .rcont')[1].text.strip()
            detail1 = soup.select('.clearfix')[4].text.strip().replace('\n\n\n', ',').replace('\n', '')
            detail2 = soup.select('.clearfix')[5].text.strip().replace('\n\n\n', ',').replace('\n', '')
            detail = detail1 + detail2
            name = soup.select('.zf_jjname')[0].text.strip()
            buserid = re.search(r"buserid: '(\d+)'", res.text).group(1)
            phone = getPhone(buserid)
            print(title, price, block, building, address, detail, name, phone)
            house = (title, price, block, building, address, detail, name, phone)
            info.append(house)
            try:
                house_data = House(
                    title=title,
                    price=price,
                    block=block,
                    building=building,
                    address=address,
                    detail=detail,
                    name=name,
                    phone=phone
                )
                sess.add(house_data)
                sess.commit()
            except Exception as e:
                print(e)         # print the error
                sess.rollback()  # roll back the failed transaction
        except Exception:
            pass  # skip pages whose layout doesn't match
    else:
        print(res.status_code, res.text)  # was `re.status_code` in the original

# Get the agent's (virtual) phone number
def getPhone(buserid):
    url = 'https://zz.zu.fang.com/RentDetails/Ajax/GetAgentVirtualMobile.aspx'
    data['agentbid'] = buserid
    res = session.post(url, data=data)
    if res.status_code == 200:
        return res.text
    else:
        print(res.status_code)
        return

# Thread pool that collects the detail links
async def Pool1(num):
    loop = asyncio.get_event_loop()
    task = []
    with ThreadPoolExecutor(max_workers=5) as t:
        for i in range(0, num):
            url = f'https://zz.zu.fang.com/house/i3{i+1}/'
            task.append(loop.run_in_executor(t, getLink, url))
        await asyncio.gather(*task)  # added: wait for the workers here

# Thread pool that parses the detail pages
async def Pool2(hrefs):
    loop = asyncio.get_event_loop()
    task = []
    with ThreadPoolExecutor(max_workers=30) as t:
        for href in hrefs:
            task.append(loop.run_in_executor(t, parsePage, href))
        await asyncio.gather(*task)  # added: wait for the workers here

if __name__ == '__main__':
    start_time = time.time()
    hrefs = []
    info = []
    task = []
    init_url = 'https://zz.zu.fang.com/house/'
    num = getNum(getHtml(init_url))
    loop = asyncio.get_event_loop()
    loop.run_until_complete(Pool1(num))
    print("Collected %d links" % len(hrefs))
    print(hrefs)
    loop.run_until_complete(Pool2(hrefs))
    loop.close()
    print("Fetched %d records in total" % len(info))
    print("Time elapsed: {}".format(time.time() - start_time))
    session.close()
```
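If you would rather skip the database dependency, the same info tuples can be dumped to a CSV file instead; a small sketch (save_csv and the file name are our own additions):

```python
import csv

def save_csv(rows, path='houses.csv'):
    """Write (title, price, block, building, address, detail, name, phone) tuples to CSV."""
    with open(path, 'w', newline='', encoding='utf-8-sig') as f:
        writer = csv.writer(f)
        writer.writerow(['title', 'price', 'block', 'building',
                         'address', 'detail', 'name', 'phone'])
        writer.writerows(rows)

# e.g. call save_csv(info) just before session.close()
```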
5. Final results
(The original post shows screenshots of the scraped data here, with phone numbers redacted; the images are not reproduced in this text version.)
We hope this introduction to Python helps your learning! If you found it useful, bookmark "How to Scrape City Rental Listings with Python", and follow the golang学习网 official account for more technical articles.
Copyright notice
This article is reproduced from 亿速云. If it infringes your rights, contact study_golang@163.com for removal.
