A Training Corpus of Metadata from a Million arXiv Papers! ChatGenTitle Generates Paper Titles in One Click
Project name:
Open-source repository:
Project Background
Introduction to the arXiv Dataset
The paper metadata we collected covers all subject categories, including:
Computer Science
Mathematics
Physics
Statistics
Electrical Engineering and Systems Science
Economics
Quantum Physics
Materials Science
Biology
Quantitative Finance
Information Science
Interdisciplinary
{
  "id": "0704.0001",
  "submitter": "Pavel Nadolsky",
  "authors": "C. Bal\'azs, E. L. Berger, P. M. Nadolsky, C.-P. Yuan",
  "title": "Calculation of prompt diphoton production cross sections at Tevatron and LHC energies",
  "comments": "37 pages, 15 figures; published version",
  "journal-ref": "Phys.Rev.D76:013009,2007",
  "doi": "10.1103/PhysRevD.76.013009",
  "report-no": "ANL-HEP-PR-07-12",
  "categories": "hep-ph",
  "license": null,
  "abstract": " A fully differential calculation in perturbative quantum chromodynamics is presented for the production of massive photon pairs at hadron colliders. All next-to-leading order perturbative contributions from quark-antiquark, gluon-(anti)quark, and gluon-gluon subprocesses are included, as well as all-orders resummation of initial-state gluon radiation valid at next-to-next-to-leading logarithmic accuracy. The region of phase space is specified in which the calculation is most reliable. Good agreement is demonstrated with data from the Fermilab Tevatron, and predictions are made for more detailed tests with CDF and D0 data. Predictions are shown for distributions of diphoton pairs produced at the energy of the Large Hadron Collider (LHC). Distributions of the diphoton pairs from the decay of a Higgs boson are contrasted with those produced from QCD processes at the LHC, showing that enhanced sensitivity to the signal can be obtained with judicious selection of events. ",
  "versions": ...
}
id: arXiv ID (can be used to access the paper, see below)
submitter: Who submitted the paper
authors: Authors of the paper
title: Title of the paper
comments: Additional info, such as number of pages and figures
journal-ref: Information about the journal the paper was published in
doi: [Digital Object Identifier](https://www.doi.org)
abstract: The abstract of the paper
categories: Categories / tags in the arXiv system
versions: A version history
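To turn this metadata into fine-tuning data, each record can be paired into an (instruction, input, output) example, with the abstract as the input and the title as the target output. A minimal sketch is shown below; the file paths and helper names are illustrative (not from the project's code), and the field names follow the Alpaca instruction-tuning convention:

```python
import json

# The instruction used by the project for title generation.
INSTRUCTION = (
    "If you are an expert in writing papers, please generate a good paper "
    "title for this paper based on other authors' descriptions of their abstracts."
)

def record_to_example(record):
    """Turn one arXiv metadata record into an Alpaca-style training example."""
    return {
        "instruction": INSTRUCTION,
        "input": record["abstract"].strip(),
        "output": record["title"].strip(),
    }

def build_dataset(metadata_path, out_path):
    """Read a JSON-lines metadata file (one record per line) and write
    an Alpaca-style train.json. Paths here are placeholders."""
    examples = []
    with open(metadata_path, encoding="utf-8") as f:
        for line in f:
            examples.append(record_to_example(json.loads(line)))
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(examples, f, ensure_ascii=False, indent=2)
```

The resulting train.json is what the fine-tuning commands below consume via --data_path.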
Fine-tuning LLMs
# Clone the project
git clone https://github.com/tloen/alpaca-lora.git
# Install dependencies
pip install -r requirements.txt
# Convert the LLaMA weights to the Hugging Face format
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir ../model/ \
--model_size 7B \
--output_dir ../model/7B-hf
# Fine-tune on a single machine with a single GPU
python finetune.py \
--base_model '../model/7B-hf' \
--data_path '../train.json' \
--output_dir '../alpaca-lora-output'
# Fine-tune on a single machine with multiple GPUs (4x A100)
WORLD_SIZE=4 CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node=4 --master_port=3192 finetune.py \
--base_model '../model/7B-hf' \
--data_path '../train.json' \
--output_dir '../alpaca-lora-output' \
--batch_size 1024 \
--micro_batch_size 128 \
--num_epochs 3
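In the multi-GPU run, the effective batch size of 1024 is reached through gradient accumulation rather than in a single step. A sketch of how alpaca-lora's finetune.py derives the accumulation steps from these flags (variable names are illustrative):

```python
# Values taken from the torchrun command above.
batch_size = 1024        # --batch_size: effective global batch size
micro_batch_size = 128   # --micro_batch_size: per-step batch on each GPU
world_size = 4           # WORLD_SIZE: number of GPUs

# Accumulate gradients until the effective batch size is reached.
gradient_accumulation_steps = batch_size // micro_batch_size  # 8
# Under distributed data parallel, the work is split across GPUs,
# so each GPU accumulates proportionally fewer steps.
gradient_accumulation_steps //= world_size  # 2
```

So each of the 4 GPUs processes 128 abstracts per step and accumulates over 2 steps, for 4 x 128 x 2 = 1024 examples per optimizer update.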
Online Access
More models will be released soon!
# Run inference
python generate.py \
--load_8bit \
--base_model '../model/7B-hf' \
--lora_weights '../alpaca-lora-output'
If you are an expert in writing papers, please generate a good paper title
for this paper based on other authors' descriptions of their abstracts.
Enter the following in the Input field:
<your paper's abstract>: Waste pollution is one of the most important environmental problems in the modern world.
With the continuous improvement of the living standard of the population and the increasing richness of the consumption structure,
the amount of domestic waste generated has increased dramatically and there is an urgent need for further waste treatment of waste.
The rapid development of artificial intelligence provides an effective solution for automated waste classification.
However, the large computational power and high complexity of algorithms make convolutional neural networks (CNNs) unsuitable for real-time embedded applications.
In this paper, we propose a lightweight network architecture, Focus-RCNet, designed with reference to the sandglass structure of MobileNetV2, which uses deeply separable convolution to extract features from images.
The Focus module is introduced into the field of recyclable waste image classification to reduce the dimensionality of features while retaining relevant information.
In order to make the model focus more on waste image features while keeping the amount of parameters computationally small, we introduce the SimAM attention mechanism.
Additionally, knowledge distillation is used to further compress the number of parameters in the model.
By training and testing on the TrashNet dataset, the Focus-RCNet model not only achieves an accuracy of 92%, but also has high mobility of deployment.
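Under the hood, the instruction and the abstract entered in Input are combined into a single prompt before generation. A minimal sketch, assuming the standard Alpaca prompt template used by alpaca-lora (the helper name is our own):

```python
# The standard Alpaca template for instruction + input pairs.
PROMPT_TEMPLATE = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
"""

def build_prompt(instruction, abstract):
    """Fill the Alpaca template with the title-generation instruction
    and the paper abstract supplied as Input."""
    return PROMPT_TEMPLATE.format(instruction=instruction, input=abstract)
```

The model's completion after "### Response:" is the generated paper title.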
References
[1] https://github.com/tatsu-lab/stanford_alpaca
[2] https://github.com/tloen/alpaca-lora