Description
Seventy3: Turning papers into podcasts with NotebookLM, so everyone can learn alongside AI.
Today's topic: Scaling Laws for Precision
Summary
This research paper investigates how the precision used in training and inference affects the performance of large language models. The authors show how precision changes a model's effective parameter count and propose scaling laws that predict the performance degradation caused by low-precision training and post-training quantization. They find that overtrained models are more sensitive to post-training quantization, and that training larger models in lower precision may be compute-optimal. Their unified scaling law accounts for both training and post-training effects and predicts loss across varied precision settings, ultimately suggesting that the standard practice of training models in 16-bit may be suboptimal.
Paper link: https://arxiv.org/abs/2411.04330
Analysis link: https://www.jiqizhixin.com/articles/2024-11-13-9
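The summary above describes a unified scaling law in which low training precision shrinks a model's effective parameter count. As a rough illustration only (not the paper's fitted law), the sketch below assumes a Chinchilla-style loss where an exponential saturation term models weight precision; the function names effective_params and predicted_loss, the saturation constant gamma, and the loss constants are all placeholder assumptions, and the post-training-quantization term is omitted.

# Illustrative sketch only: the functional form and constants are assumptions
# for demonstration, not the law fitted in the paper.
import math

def effective_params(n_params: float, weight_precision_bits: float,
                     gamma: float = 2.5) -> float:
    """Assumed form: low training precision reduces the 'effective'
    parameter count, saturating as precision grows (gamma is a placeholder)."""
    return n_params * (1.0 - math.exp(-weight_precision_bits / gamma))

def predicted_loss(n_params: float, n_tokens: float,
                   train_precision_bits: float,
                   A: float = 406.4, B: float = 410.7, E: float = 1.69,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """Chinchilla-style loss evaluated at the effective parameter count.
    A, B, E, alpha, beta are Chinchilla-like placeholder constants,
    not values from the precision paper."""
    n_eff = effective_params(n_params, train_precision_bits)
    return A / n_eff**alpha + B / n_tokens**beta + E

if __name__ == "__main__":
    # Compare a hypothetical 1B-parameter model trained on 100B tokens
    # at 16-bit, 8-bit, and 4-bit weight precision.
    for bits in (16, 8, 4):
        print(bits, round(predicted_loss(1e9, 1e11, bits), 4))

Under these assumed constants, lowering training precision raises the predicted loss by shrinking the effective parameter count, which mirrors the qualitative claim in the episode summary; only the training-precision side of the paper's unified law is sketched here.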
Seventy3: Turning papers into podcasts with NotebookLM, so everyone can learn alongside AI.
Today's topic: DINO-WM: World Models on Pre-trained Visual Features enable Zero-shot Planning
Summary
This research paper presents DINO World Model (DINO-WM), a new method for building task-agnostic world models for visual reasoning and control in...
Published 11/21/24
Seventy3: Turning papers into podcasts with NotebookLM, so everyone can learn alongside AI.
Today's topic: Does your LLM truly unlearn? An embarrassingly simple approach to recover unlearned knowledge
Summary
This research paper investigates a critical flaw in current machine unlearning methods for large language models (LLMs). The authors discover that...
Published 11/20/24