Meta is stepping up its game in the artificial intelligence (AI) race with the introduction of its new open-source AI model, Llama 3, alongside a new version of Meta AI. Powered by Llama 3, this virtual assistant is now available across all Meta platforms.
Here's all you need to know about Meta's newest large language model (LLM) and AI assistant.
What is Llama 3?
Meta launched Llama 3, the latest in its Llama series of open-source AI models. Llama 3 comes in two variants: one with 8 billion parameters and another with 70 billion parameters.
Parameters are essentially the 'knowledge' the model acquires during its training, with more parameters typically leading to better performance due to increased contextual understanding.
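For a concrete, if simplified, sense of what a parameter is, the PyTorch snippet below counts the trainable weights in a toy two-layer network. It is purely illustrative and not Meta's code; the layer sizes are arbitrary assumptions. The 8 billion and 70 billion figures quoted for Llama 3 are counts of exactly this kind of value, at vastly larger scale.

```python
import torch.nn as nn

# A toy two-layer network; every weight and bias below is a "parameter".
# Layer sizes (4096, 11008) are illustrative assumptions, not Llama 3's config.
toy_model = nn.Sequential(
    nn.Linear(4096, 11008),  # weight: 4096 * 11008 values, plus 11008 biases
    nn.ReLU(),
    nn.Linear(11008, 4096),  # weight: 11008 * 4096 values, plus 4096 biases
)

total = sum(p.numel() for p in toy_model.parameters())
print(f"{total:,} parameters")  # roughly 90 million; Llama 3 scales this to 8B/70B
```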

Meta claims that Llama 3 sets a new standard for large language models at these parameter scales. The company says it has improved its pretraining and post-training processes, resulting in lower false-refusal rates, better alignment, and more diverse responses from the model. Notably, Llama 3 boasts enhanced capabilities in reasoning, code generation, and instruction following.

Llama 3 uses a tokenizer with a vocabulary of 128,000 tokens, which Meta says encodes language more efficiently and boosts model performance. To speed up inference, Meta adopted grouped query attention (GQA) for both the 8B and 70B parameter models. The models were trained on sequences of 8,192 tokens, with masking used to ensure self-attention does not cross document boundaries.
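Grouped query attention cuts the memory needed for the key/value cache at inference time by letting several query heads share a single key/value head. The PyTorch sketch below is a minimal illustration of that idea, not Meta's implementation; the head counts and tensor shapes are assumptions for demonstration, and causal masking is omitted for brevity.

```python
import torch

def grouped_query_attention(q, k, v, n_kv_heads):
    """Toy grouped query attention: groups of query heads share one K/V head.

    q: (batch, n_heads, seq, head_dim)
    k, v: (batch, n_kv_heads, seq, head_dim)
    """
    batch, n_heads, seq, head_dim = q.shape
    group = n_heads // n_kv_heads  # query heads per shared K/V head

    # Repeat each K/V head so it lines up with its group of query heads.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)

    scores = q @ k.transpose(-2, -1) / head_dim**0.5
    weights = scores.softmax(dim=-1)
    return weights @ v

# Example shapes (illustrative assumptions, not Llama 3's actual config):
q = torch.randn(1, 32, 16, 128)  # 32 query heads
k = torch.randn(1, 8, 16, 128)   # 8 shared key/value heads
v = torch.randn(1, 8, 16, 128)
out = grouped_query_attention(q, k, v, n_kv_heads=8)
print(out.shape)  # torch.Size([1, 32, 16, 128])
```

Because only 8 key/value heads are stored instead of 32, the cache kept around during generation shrinks proportionally, which is where the inference speedup comes from.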
The training data for Llama 3 consists of over 15 trillion tokens, sourced from publicly available data, making it seven times larger than the dataset used for Llama 2. This expanded dataset includes four times more code and over 5 percent high-quality non-English data covering more than 30 languages.
What's next for Llama 3?
The 8 billion and 70 billion parameter models are just the beginning for Llama 3. Meta plans to release additional models with extended context windows, multilingual support, and multi-modal capabilities, enabling the model to process various formats like text, code, audio, image, and video simultaneously.

Additionally, Meta is developing an even larger dense model with over 400 billion parameters.
Meta AI: Powered by Llama 3
Currently, Meta AI is accessible via the search bar across Meta's apps, including Facebook, Instagram, WhatsApp, and Messenger, in over a dozen countries including the United States, Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia, and Zimbabwe.
However, it is notably not available in India, despite Meta previously pilot testing the AI assistant with select Indian users across its platforms.

Meta AI works as a general-purpose assistant that can answer questions using real-time information from Google and Bing. It can also generate text and images, translate languages, write different kinds of creative content, and summarise information.
Users will also be able to access the chatbot on the web through a new meta.ai website.
What sets Meta AI apart?
One standout feature of Meta AI is its Imagine feature, which allows real-time image generation.

Users can watch images form dynamically as they type, with the image evolving with each keystroke.
Currently, this feature is in beta on WhatsApp and the Meta AI web experience in the United States.

Users can also ask the chatbot to animate an image, iterate on it in a new style, or turn it into a GIF to share with friends.