Mistral-7B v0.3 TESTED: Uncensored, Function Calling, LET THE FINETUNING GAMES begin!

The recent release of Mistral-7B v0.3, an open-source language model from Mistral AI, offers hope for the continued development of powerful open models amid changing regulations. This uncensored model features function calling and an extended vocabulary, and it invites fine-tuning and community input to improve its performance and applicability across a range of fields.

In the past week, there have been significant developments in open-source language models. Meta's decision not to open-source its Llama 3 400B model raised questions about the future of powerful open LLMs. However, Mistral has just released a new model, Mistral-7B v0.3, offering hope that such models will keep coming. The release is a welcome development for the open-source community amid changing regulations on compute usage for AI models.

The Mistral-7B v0.3 model comes in both base and instruct variants, featuring a 32,000-token context length and a vocabulary extended to 32,768 tokens. Notably, this model supports function calling and ships completely uncensored, a departure from previous models that included safety mechanisms. Mistral plans to work with the community to fine-tune the model and add moderation mechanisms later, showing trust in the community's input.
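To make the function-calling feature concrete, here is a minimal sketch of how an application might define a tool and route the model's emitted call back to local code. The tool schema follows the common OpenAI-style JSON convention that Mistral's function-calling models consume; the weather tool, its fields, and the dispatcher are hypothetical illustrations, not part of the release.

```python
import json

# Hypothetical tool definition in the OpenAI-style JSON schema
# used for function calling; not an official Mistral example.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Paris"},
            },
            "required": ["city"],
        },
    },
}

def dispatch_tool_call(call_json: str, registry: dict) -> str:
    """Parse a model-emitted tool call and run the matching local function."""
    call = json.loads(call_json)
    fn = registry[call["name"]]
    return json.dumps(fn(**call["arguments"]))

# Stand-in implementation the model's call would route to.
registry = {"get_current_weather": lambda city: {"city": city, "temp_c": 21}}
result = dispatch_tool_call(
    '{"name": "get_current_weather", "arguments": {"city": "Paris"}}',
    registry,
)
```

In a real pipeline the model, given the user question plus the tool schema, emits the JSON call string; the application executes it and feeds the result back for a final answer.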

The model’s Hugging Face page provides instructions for downloading and demoing the model, along with usage guidelines. Mistral emphasizes that the instruct model can be fine-tuned to achieve compelling performance and invites community input on moderating outputs. The base model’s performance may benefit from strong system prompts, and the model can be run either through Mistral’s own mistral_inference library or through Hugging Face Transformers.
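Since a strong system prompt matters for the base model, here is a rough sketch of Mistral's `[INST]` instruct markup with a system prompt folded into the first instruction block. Folding the system text in this way is a common community convention rather than an official API, and in practice `tokenizer.apply_chat_template` handles the formatting for you.

```python
def build_prompt(system: str, user: str) -> str:
    """Wrap a request in Mistral-style [INST] chat markup.

    Prepending the system text inside the first instruction block is a
    common convention, not a documented requirement; with Transformers,
    tokenizer.apply_chat_template produces the canonical formatting.
    """
    return f"<s>[INST] {system}\n\n{user} [/INST]"

prompt = build_prompt(
    "You are a concise coding assistant.",
    "Write a one-line Python expression that reverses a string.",
)
```

The returned string would then be tokenized and passed to the model for generation.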

Initial tests with the Mistral-7B v0.3 base model show mixed results, with the model struggling to generate code accurately. Fine-tuning the model is expected to improve its performance, offering opportunities for diverse applications such as role-playing assistants or content generation. User feedback will play a crucial role in shaping the model’s future development and potential use cases.
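For anyone preparing such a fine-tune, a sketch of packaging role-play dialogues as instruct-style JSONL records is shown below. The `messages` field with role/content pairs follows the widespread chat-dataset convention used by many fine-tuning tools; it is an assumption here, not a format mandated by Mistral.

```python
import json

# Hypothetical role-play training example: (system, user, assistant).
examples = [
    ("Stay in character as a medieval blacksmith.",
     "What do you sell?",
     "Fine blades and horseshoes, traveler. Forged them myself this morn."),
]

def to_jsonl(rows):
    """Serialize (system, user, assistant) triples as chat-style JSONL."""
    lines = []
    for system, user, assistant in rows:
        record = {"messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
            {"role": "assistant", "content": assistant},
        ]}
        lines.append(json.dumps(record))
    return "\n".join(lines)

dataset = to_jsonl(examples)
```

One record per line keeps the dataset streamable, and most fine-tuning frameworks can consume this layout directly or with a small mapping step.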

Overall, the release of Mistral-7B v0.3 presents an exciting opportunity for the AI community to explore and leverage a new open-source language model. The model’s capabilities, along with community-driven fine-tuning and moderation mechanisms, hold promise for a wide range of applications and advancements in the field of natural language processing.