70 posts in total
2025
[Paper Reading] ByteScale: Efficient Scaling of LLM Training with a 2048K Context Length on More Than 12,000 GPUs
[Megatron-LM Source Code Analysis (4)] - DDP Data Parallelism
[Megatron-LM Source Code Analysis (3)] - Performance Analysis
[Paper Reading] ScheMoE: An Extensible Mixture-of-Experts Distributed Training System with Tasks Scheduling
[Megatron-LM Source Code Analysis (2)] - The GPT Model pretrain Workflow
[Megatron-LM Source Code Analysis (1)] - Environment Setup and Running the Training Example
[Paper Reading] The Llama 3 Herd of Models (Section 3: Pre-Training)
[Paper Reading] Reducing Activation Recomputation in Large Transformer Models
[Paper Reading] Megatron-LM Paper Notes
[k8s APIServer Source Code Reading (1)] - Object Caching