Meta’s decision to release its AI as open source caused unrest not only among its own investors but also among competitors like Google and OpenAI. Meta justified the step with the aim of advancing AI technology while opening access to modern tools and models to a broad public, a move it expects to pay off in the long run. For users, Meta AI is accessible via the web and Facebook and is also integrated into WhatsApp and Instagram, already giving it immense reach. Let’s look at what Meta AI with Llama 3 achieves, in which areas it may surpass other large language models, and what strategy Meta is pursuing in the long term.

The Llama Series: A Fixture in the AI World

Llama stands for “Large Language Model Meta AI” and is Meta’s family of large language models. The series, launched in February 2023, aims to compete with other top models like OpenAI’s GPT-4 or Google’s Gemini.

The latest version, Llama 3, was introduced on April 18, 2024. Llama 3 can generate texts and answer questions clearly, coherently, and relevantly. Additionally, it offers the ability to create images and animations, similar to ChatGPT, and is freely accessible via Facebook.

Mark Zuckerberg himself said: “We are updating Meta AI with our new, advanced Llama 3 model, which we are openly sharing. With this new model, we believe that Meta AI is now the smartest AI assistant that can be used for free.”

Currently, Llama 3 is available in English in 14 countries. A small portion of the training data already consists of other languages, but other language variants are not yet mature. It is not yet known when Meta AI will come to Europe.

What’s New in Llama 3?

A key difference between Llama 3 and its predecessor Llama 2 is the expanded training dataset, which is seven times larger and encompasses 15 trillion tokens. For comparison: GPT-4 is reported to have been trained on around 13 trillion tokens. Llama 3 currently comes in two main variants, which differ in the number of their parameters: 8B and 70B. The “B” stands for billion; the parameter count is a rough measure of a model’s size and capacity.
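To get a feel for what these parameter counts mean in practice, a back-of-envelope calculation of the memory needed just to hold the weights is instructive. The sketch below assumes 16-bit floating point (2 bytes per parameter); actual runtime requirements are higher because of activations and the key-value cache.

```python
# Rough memory footprint of the Llama 3 variants, counting only the
# model weights at 16-bit precision (2 bytes per parameter).
# Runtime usage is higher due to activations and the KV cache.

BYTES_PER_PARAM_FP16 = 2

def weight_memory_gb(params_billions: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * BYTES_PER_PARAM_FP16 / 1e9

for name, params in [("Llama 3 8B", 8), ("Llama 3 70B", 70), ("Llama 3 400B (planned)", 400)]:
    print(f"{name}: ~{weight_memory_gb(params):.0f} GB of weights in fp16")
```

This arithmetic is why the 8B model is the variant that realistically runs on consumer hardware (especially once quantized further), while the 70B model requires one or more high-end GPUs.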

The 70B variant is the more extensive one and was trained with data collected up to December 2023. In contrast, the smaller 8B version only knows data up to March 2023. These differences in the training dataset affect the timeliness and relevance of the answers the models can generate.

Currently, Meta is also working on an even more extensive version of the model with 400 billion parameters (400B). This expanded version is expected to offer even higher accuracy and performance and is scheduled to be available later in 2024.

According to Meta, Llama 3 learns more efficiently than its predecessors and competitors, adapting better to new information and executing tasks more precisely. Another difference is that Llama 3’s openly available weights allow the AI to be run locally without a web service, unlike ChatGPT, where GPT-4 is only accessible through OpenAI’s hosted services.

Meta is pursuing an “open by default” approach with Llama 3 to promote an open AI ecosystem. Historically, this approach is not new in the AI environment. OpenAI, the company behind ChatGPT, also began with an open and collaborative vision. The first significant model, GPT-2, was partially made openly accessible in 2019. With GPT-3, OpenAI’s strategy changed. GPT-3 was not released as open-source but made accessible via an API. This decision was based on safety concerns and the desire to maintain control over the technology.

When ChatGPT launched in 2022, it was released as a freely accessible research preview, meaning users could try it out for free. However, the overwhelming popularity and rapid spread led OpenAI to switch to a freemium model, offering a free basic version and paid subscriptions for advanced features. Originally founded as a non-profit organization, OpenAI now aims to become profitable.

Of course, it’s not fundamentally different with Facebook. The saying “if you’re not paying for the product, you are the product” is well-known. Meta, the parent company of Facebook, pursues several commercial and strategic goals with the release of Llama 3. Llama 3 is advertised as one of the most advanced open AI models, helping Meta position itself as a leading innovator in the industry and promoting the use of its platforms by developers and companies.

By providing Llama 3 as open-source, Meta fosters the innovative power of the open-source community. This not only leads to a positive image but also allows Meta to benefit from contributions and improvements by external developers. In the long term, this can reduce development costs and increase the speed of technological advances.

And although Llama 3 is available as open-source, Meta can still generate revenue through accompanying services and partnerships. For example, companies that want to deploy Llama 3 on a large scale can rely on Meta’s cloud services or other specialized support services. These services can be fee-based and offer Meta additional sources of income.

But the most important asset at Facebook remains data. By providing a widely used AI model, Meta can gain valuable insights into user usage and needs. This data can be used to specifically further develop and adapt products and services. Additionally, the widespread use of Llama 3 strengthens Meta’s market power, as it gains greater control over the underlying AI infrastructure and its development.

Meta’s Collaboration with Startups

The collaboration with Hugging Face, known for its open-source Transformers library, and the French cloud service provider Scaleway is certainly prestigious. Meta has teamed up with these two heavyweights to launch the “AI Startup Program,” which aims to accelerate the adoption of open-source AI solutions among French entrepreneurs. Based at STATION F in Paris, the program supports five startups in the acceleration phase from January to June 2024. The initiative aims to bring the economic and technological advantages of open models into the French ecosystem and strengthen innovation in Europe.

Even if Meta publicly professes its commitment to the open-source community, this initiative is, of course, not a charity project by the internet giant. Rather, Meta benefits in many ways from the cooperation: besides positive PR, it gains access to talent and know-how. Collaborating with talented researchers, engineers, and PhD students from Meta’s FAIR lab, as well as experts from Hugging Face and Scaleway, can strengthen internal innovation capacity and promote new ideas and approaches. Additionally, Meta can, in the long term, tap into new business opportunities and partnerships, contributing to new revenue streams.

And last but not least: every blow against the competitor Google is a step in the right direction for Facebook. Meta’s open-source strategy aims to undermine the business model of competitors like Google and OpenAI. These companies have difficulty charging money for their AI services when equivalent open alternatives are available. Meta, on the other hand, makes its money indirectly through knowledge about its users and targeted advertising.

In the advertising market, Facebook and Google are relentless competitors. The two corporations compete intensively in areas like online advertising, social media, and technology development. Google dominates the search engine market with over 8.5 billion searches per day. Google Ads offers versatile ad formats and remarketing options based on user behavior. Facebook, on the other hand, scores with granular audience targeting through detailed user profiles, which is particularly advantageous for visually appealing advertising content.

Security and Ethics

With the release of Llama 3, Meta has also introduced new AI security tools, including Llama Guard and CyberSec Eval. These tools are designed to classify risks and assess potential misuse. Another tool, Code Shield, filters insecure code suggestions during use.

Llama Guard analyzes interactions and applications of the AI to detect potential security gaps and misuse early on. Through continuous monitoring and analysis, Llama Guard helps ensure that AI models retain their integrity and are used in a safe and controlled environment.

CyberSec Eval complements Llama Guard by conducting an in-depth assessment of security risks that can arise when using AI models. This tool examines the security of the data used in the models and assesses potential threats that could arise from unauthorized access or manipulation. CyberSec Eval is designed to provide a comprehensive security assessment and recommend risk mitigation measures to ensure the protection of AI models and their data.

Another important tool is Code Shield, which filters insecure code suggestions during use. It checks the code generated by the AI for potential security gaps and vulnerabilities. By identifying and filtering insecure code in real time, Code Shield helps developers create more secure applications and minimize the risk of security incidents. It is particularly useful in security-critical applications, where it helps ensure that generated code meets high security standards.
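Meta has not detailed Code Shield’s internals here, but the basic idea of screening generated code for risky patterns can be illustrated with a toy filter. Everything below, including the pattern list and the `screen_snippet` helper, is a hypothetical sketch for illustration, not Meta’s implementation; a production tool would use proper static analysis rather than regular expressions.

```python
import re

# Hypothetical sketch: a pattern-based screen for obviously risky Python
# constructs, illustrating the *kind* of check a tool like Code Shield
# performs. This is NOT Meta's implementation.
RISKY_PATTERNS = {
    r"\beval\s*\(": "use of eval() on dynamic input",
    r"\bpickle\.loads\s*\(": "unpickling untrusted data",
    r"subprocess\.\w+\([^)]*shell\s*=\s*True": "shell=True enables command injection",
    r"""(password|api_key)\s*=\s*["'][^"']+["']""": "hard-coded credential",
}

def screen_snippet(code: str) -> list:
    """Return a list of warnings for risky constructs found in the snippet."""
    warnings = []
    for pattern, reason in RISKY_PATTERNS.items():
        if re.search(pattern, code):
            warnings.append(reason)
    return warnings

snippet = 'password = "hunter2"\nresult = eval(user_input)'
for warning in screen_snippet(snippet):
    print("warning:", warning)
```

The interface is the point: code goes in, warnings come out, and flagged suggestions can be suppressed or rewritten before they ever reach the developer.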

Meta emphasizes that open initiatives like Llama 3 encourage review and collaboration, but the real impact depends on a comprehensive approach to AI governance and the integration of ethical principles throughout the AI systems’ lifecycle.

So much for the theory. Facebook is one of the largest technology companies in the world and a driver of AI development. However, Facebook has often been criticized, especially regarding AI ethics. One of Meta’s biggest ethical challenges is the spread of misinformation and hate speech. Facebook’s algorithms tend to promote content that evokes strong emotional reactions. This often leads to the spread of controversial and extremist content that can exacerbate political tensions. It remains to be seen whether a freely accessible AI is best placed in these hands.

How is Facebook’s AI Generally Received?

The public reaction to Meta AI and especially to the Llama 3 model is mixed. On the one hand, the advanced technology and openness of the model are praised; on the other hand, there is clear criticism and concern.

A clear plus point is the openness of the model. As mentioned, Llama 3 is freely available and can be used by developers and researchers worldwide. This openness promotes innovation and collaboration in the AI community, which many see as a positive step.

On the user side, however, there has been some criticism of Llama 3. Interaction with the chatbot has, in some cases, been confusing and inappropriate. For example, a Meta AI chatbot in a Facebook group for mothers falsely claimed to have a child, leading to irritation. In direct comparison with ChatGPT, Meta AI performs worse. Users report more frequent errors in complex tasks and fewer comprehensive functions compared to ChatGPT.

Furthermore, there are concerns regarding privacy and security aspects of the open use of AI. Meta extensively uses its users’ data to train its AI models. This includes public Facebook posts, Instagram photos, and even interactions with chatbots. Critics argue that the use of this data without sufficient transparency and control endangers user privacy.

Conclusion: Wait and See

One thing is certain: Meta’s commitment to open-source AI, especially with the Llama 3 model, shows the company’s ambition to establish itself as a leading innovator in the AI industry. The partnerships with Hugging Face and Scaleway, along with the AI Startup Program, underline its efforts to promote innovation and entrepreneurship in Europe. While the openness and accessibility of Llama 3 are viewed positively and contribute to fostering a collaborative ecosystem, significant ethical and security concerns remain.

The risks of spreading misinformation, which were already problematic in the past, could be amplified by open access to powerful AI models. Additionally, data protection issues and the ethical use of the collected data remain central points of criticism. Despite these challenges, Meta’s approach offers the potential for significant advancements in AI technology and could, in the long term, contribute to creating a more robust and innovative digital ecosystem.

Ultimately, the future of Meta’s AI initiatives will depend on how effectively the company can master these ethical and security challenges while simultaneously advancing innovation and open access to AI technologies. Balancing technological progress with responsible handling of AI remains the key to success. So far, Facebook has not exactly distinguished itself positively in terms of AI ethics. Let’s hope that the future looks brighter.