Efficient LLMs Finetuning (ELF)
The rise of large language models (LLMs), pretrained on vast and diverse datasets, has revolutionized the field of artificial intelligence (AI). Finetuning has emerged as the critical next step for adapting these models to downstream applications, serving as the “last mile” for a wide range of use cases. Compared to pretraining LLMs from scratch, finetuning open-source models offers a more accessible and practical alternative for small and medium-sized businesses and academic researchers who may lack extensive computational resources. Despite its promise, the broad applicability of finetuning also introduces several challenges.
This workshop focuses on efficiency in finetuning LLMs, aiming to lower the cost and barriers to entry so that even users with consumer-grade GPUs can harness reasonably large models (e.g., 7B parameters). Our goals are twofold: (i) to enable scalable development of LLMs, and (ii) to empower individuals and organizations with limited resources to benefit from modern AI. The workshop will bring together recent advances in methods and tools, and foster the exchange of best practices across research and industry.
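
As an illustrative sketch of the kind of parameter-efficient finetuning the workshop targets, the snippet below attaches LoRA adapters to a 7B-class causal language model using the Hugging Face `transformers` and `peft` libraries. The model name, adapter rank, and target modules are placeholder choices for illustration, not recommendations from the workshop.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder 7B-class model; any causal LM from the Hugging Face Hub works similarly.
model_name = "meta-llama/Llama-2-7b-hf"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# LoRA: freeze the base weights and train small low-rank adapter matrices instead.
lora_config = LoraConfig(
    r=8,                                   # adapter rank (placeholder)
    lora_alpha=16,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt (model-dependent)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Only the adapter parameters are trainable, typically well under 1% of the model.
model.print_trainable_parameters()
```

The adapted model can then be trained with a standard `transformers` Trainer or a custom loop; combined with quantization or mixed precision, such setups often fit on a single consumer-grade GPU.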





