LoRA (Low-Rank Adaptation)
- An important paradigm in natural language modeling
- Large-scale pre-training on general-domain data, followed by adaptation to specific downstream tasks
- As the scale of training grows, fully updating all model parameters for each task becomes increasingly inefficient
Ex) GPT-3 175B
- Deploying an independent fine-tuned instance, each with 175B parameters, is prohibitively expensive
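To see why a low-rank update is so much cheaper, a quick back-of-the-envelope calculation (illustrative numbers, not from the source): for one weight matrix W of shape d×k, full fine-tuning trains d·k parameters, while LoRA trains only r·(d+k) for the rank-r pair B·A.

```python
# Hypothetical layer shape and LoRA rank, chosen only for illustration
d, k, r = 4096, 4096, 8

full = d * k          # trainable params when updating W directly
lora = r * (d + k)    # trainable params for the low-rank factors B (d x r) and A (r x k)

print(full, lora, full // lora)  # 16777216 65536 256
```

With these numbers LoRA trains roughly 1/256 of the parameters of full fine-tuning for this layer, which is what makes storing and deploying many per-task adapters feasible.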


Code LoRA from Scratch - a Lightning Studio by sebastian
LoRA (Low-Rank Adaptation) is a popular technique to finetune LLMs more efficiently. This Studio explains how LoRA works by coding it from scratch, which is an excellent exercise for looking under …
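A minimal sketch of the idea behind such a from-scratch implementation (the class and parameter names here are assumptions, not the Studio's actual code): the pretrained weight W stays frozen, and only a low-rank update B·A, scaled by alpha/r, is trained and added to the frozen path.

```python
import numpy as np

class LoRALinear:
    """Frozen linear layer augmented with a trainable low-rank update (sketch)."""

    def __init__(self, W, r=4, alpha=8, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = W                                   # frozen pretrained weight, shape (out_dim, in_dim)
        out_dim, in_dim = W.shape
        self.A = rng.normal(0, 0.01, (r, in_dim))    # trainable, small random init
        self.B = np.zeros((out_dim, r))              # trainable, zero init: no change at start
        self.scale = alpha / r                       # common LoRA scaling convention

    def forward(self, x):
        # x: (batch, in_dim). The low-rank path adds scale * x A^T B^T to the frozen output.
        return x @ self.W.T + self.scale * (x @ self.A.T @ self.B.T)
```

Because B is initialized to zero, the layer's output at the start of training is exactly that of the frozen pretrained layer, so fine-tuning begins from the unmodified model.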
