Linux大棚 – a tech blog that stays true to its roots, a quiet corner in a restless age
  Tag: Efficient
  • Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality

    The title of this paper is "Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality". It mainly explores…
    Models Efficient Generalized Transformers SSMS
    admin · 3 months ago
    37 0
  • CVPR2022学习-人脸识别:An Efficient Training Approach for Very Large Scale Face Recognition

    Paper: https://arxiv.org/pdf/2105.10375.pdf Code: GitHub - tiandunx/FFC: Official code for fast face classification. A rough take from the title: …
    Training Approach Efficient face Recognition
    admin · 4 months ago
    39 0
  • 论文阅读:HybridAlpha: An Efficient Approach for Privacy-Preserving Federated Learning

    Paper title: HybridAlpha: An Efficient Approach for Privacy-Preserving Federated Learning. Venue: the 12th ACM Workshop. Year: 20…
    paper Efficient HybridAlpha Approach learning
    admin · 4 months ago
    56 0
  • 论文阅读 [CVPR-2022] An Efficient Training Approach for Very Large Scale Face Recognition

    Paper reading: [CVPR-2022] An Efficient Training Approach for Very Large Scale Face Recognition, an efficient training approach for very-large-scale face recognition. studyai paper search: …
    paper Efficient Training CVPR Recognition
    admin · 4 months ago
    49 0
  • 【Paper Reading】Communication-Efficient Distributed Deep Learning A Comprehensive Survey

    Communication-Efficient Distributed Deep Learning: A Comprehensive Survey. Source: [Arxiv 2020] Communication-Effi…
    Communication Efficient Paper reading Distributed
    admin · 6 months ago
    93 0
  • Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey

    Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey. PDF: https://arxiv.org/pdf/2403.14608.pdf 1. Overview: large…
    Fine tuning parameter Efficient Comprehensive
    admin · 6 months ago
    100 0
  • Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey (reading notes)

    Reading notes on the survey Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey; only the parts of personal interest are recorded. Basics: the three categories of PEFT: a…
    notes Fine tuning parameter Efficient
    admin · 6 months ago
    105 0
  • Study notes: Comprehensive and Delicate: An Efficient Transformer for Image Restoration (CVPR 2023)

    Code and references: https://www.zhihu.com/question/339499743/answer/3207458947 …
    study notes Delicate Comprehensive Efficient Restoration
    admin · 6 months ago
    114 0
  • PEFT for LLMs: "Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey" translation and commentary

    PEFT for LLMs: "Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey" translation and commentary. Introduction: this paper is…
    Efficient Fine parameter LLMs PEFT
    admin · 6 months ago
    122 0
  • Training data-efficient image transformers & distillation through attention

    This vision Transformer (86M parameters) reaches 83.1% top-1 accuracy on ImageNet, and the distilled version up to 84.4%, outperforming ViT, RegNet, ResNet, and others; the code has just been open-sourced. Note: a [Transformer] study group is linked at the end. Train…
    Efficient Image Training DATA distillation
    admin · 7 months ago
    123 0
  • Paper reproduction: Learning Efficient Convolutional Networks through Network Slimming

    Core idea: the paper proposes a structured pruning strategy that prunes channels; channel importance is measured by the scaling factor of the Batch Normalization layer, which adds no extra overhead to the network. Detailed reading: with L1 …
    paper Efficient learning Convolutional Network
    admin · 7 months ago
    123 0
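The Network Slimming entry above describes its channel selection rule: rank channels by the absolute value of the BatchNorm scaling factor (gamma) and keep the most important ones. A minimal sketch of that step, where the function name and the example gamma values are illustrative and not from the paper's code:

```python
def channel_keep_mask(gammas, keep_ratio):
    """Network-Slimming-style channel selection (sketch).

    gammas: per-channel BatchNorm scaling factors.
    Channels whose |gamma| falls in the top `keep_ratio`
    fraction are kept; the rest are pruning candidates.
    """
    scores = [abs(g) for g in gammas]                # |gamma| = channel importance
    k = max(1, int(len(scores) * keep_ratio))        # number of channels to keep
    threshold = sorted(scores, reverse=True)[k - 1]  # k-th largest score
    return [s >= threshold for s in scores]          # True = keep this channel

# Illustrative gamma values for an 8-channel layer
gammas = [0.9, 0.1, 0.5, 0.05, 0.7, 0.02, 0.3, 0.6]
mask = channel_keep_mask(gammas, keep_ratio=0.5)
print(sum(mask))  # 4 channels survive
```

In the actual method, the L1 penalty mentioned in the excerpt is applied to the gamma factors during training, pushing unimportant channels toward zero before this thresholding step.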
  • 【论文阅读】DeiT | Training data-efficient image transformers & distillation through attention

    This post reads and analyzes the DeiT model recently proposed by Facebook. 1. Motivation: what problem does DeiT solve? Existing Transformer-based classification models such as ViT need massive data (JF…
    paper Training DATA Efficient DeiT
    admin · 7 months ago
    125 0
  • DeiT: Training data-efficient image transformers & distillation through attention

    This paper mainly uses training strategies and knowledge distillation to improve the model's training speed and performance. Original link: Training data-efficient image transformers & distillation thro…
    DATA Efficient DeiT Training Image
    admin · 7 months ago
    101 0
  • Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials

    Here we treat each column as a pixel and each row as a different label; there are four labels. Then at each point we need to compute, e.g. for the point in column 1, row 2, the value Q_1(x_1 = label 2) …
    Fully Connected Efficient Inference Edge
    admin 2025-1-31
    86 0
Copyright © 2022 All Rights Reserved 豫ICP备2021025688号-21