October 4, 2023

It is up to developers — not builders — to make AI safe, says Mistral AI founder

Speaking at Sifted Summit, Mistral AI cofounder Arthur Mensch says it’s developers who should ensure that AI applications are safe — not those building large language models

Foundational AI models are a tool for developers, and it is those developers, not the startups building the models, who are responsible for ensuring that the applications they create are safe, Mistral AI cofounder Arthur Mensch says.

“What we make, our models, are to be seen as middleware, as a tool — almost as a programming language,” says Mensch, speaking at Sifted Summit. 

“And a programming language can be used to make malware and software.”

Mistral released its first model, Mistral 7B, late last month: a large language model (LLM) with seven billion parameters that the company says outperforms comparable alternatives on the market, such as Meta’s Llama 2.


But the model attracted criticism when it appeared that it could be prompted to generate harmful content that competitors’ models filter out.

For example, Mistral’s LLM could give detailed instructions on how to make a bomb — a query that Meta’s Llama, OpenAI’s ChatGPT and Google’s Bard refuse to answer.

“It was not a mistake,” says Mensch, adding that the best way to ensure safety is to create a model that can answer any prompt, and then instruct it on what it can and cannot answer.

Mistral 7B, therefore, did not include any moderation mechanisms.

“We think that the responsibility for the safe distribution of AI systems lies with the application maker,” says Mensch.

The founder adds, however, that companies like Mistral AI should be responsible for providing the developer community with tools to guardrail models and ensure they are safe.

The darling of French tech

Mistral was founded just a few months ago and quickly rose to fame when it closed a €105m round four weeks after launching, days after hiring its first employees.

The startup, founded by DeepMind and Meta alumni, has become a darling of French tech, alongside companies such as Dust and Nabla, which are helping to make Paris an emerging hub for AI.

According to Mensch, there is space for European players to seize the opportunity of AI — and to compete against huge US players such as Meta and Google. “It is a market where, in Europe, many actors won’t be willing to rely on American providers,” says Mensch. “There is a geographical stake here that we are willing to exploit.”

Mistral says that its technology is competitive because it is more efficient, but also because of the startup’s open-source approach to building foundational models. This generates more trust in the company, it argues, because application developers can fully access the model and adapt it.

The founder says that the developer community has already started building with Mistral 7B and that new products are emerging as the model is used to power applications such as enterprise chatbots.


Daphné Leprince-Ringuet

Daphné Leprince-Ringuet is a reporter for Sifted based in Paris and covering French tech. You can find her on X and LinkedIn