A brief story of LLaMA

The LLaMA base model was released in February 2023 and has since spawned a number of fine-tuned derivatives. We will compare the LLaMA base model with the Alpaca, Vicuna, Koala-distill, GPT4-x-Alpaca, and WizardLM models.

Model size: The LLaMA base model comes in four sizes: 7B, 13B, 33B, and 65B. The Alpaca, Vicuna, Koala-distill, and GPT4-x-Alpaca models each come in two sizes: 7B and 13B. The WizardLM model is available only at 7B.

Training data: The LLaMA base model was trained on publicly available text data, including web crawls, Wikipedia, books, and code. The Alpaca model was fine-tuned on 52k GPT-3-generated instructions. The Vicuna model was fine-tuned on 70k user-shared ChatGPT conversations. The Koala-distill model was fine-tuned on 117k cleaned ChatGPT conversations. The GPT4-x-Alpaca model was fine-tuned on 20k GPT-4 instructions. The WizardLM model was fine-tuned on 70k instructions synthesized with ChatGPT/GPT-3. Finally, the OpenAssistant LLaMA model was fine-tuned on 600k human interactions collected through OpenAssistant Conversations.

Tools: The OpenAssistant team released software that lets users run LLaMA models locally. This is a great way to get hands-on experience with LLaMA models for natural language processing tasks.


To Read/Watch

https://www.youtube.com/watch?v=VMj-3S1tku0

Written by

Albert Oplog

Hi, I'm Albert Oplog. I would humbly like to share my tech journey with people all around the world.