The newest challenger to OpenAI’s ChatGPT comes from the company that makes the popular AI image generator Stable Diffusion. Known as StableLM, Stability AI developed this open-source chatbot to democratize access to advanced language models.
Stability AI recently announced an alpha version of StableLM, a smaller and more efficient solution than most others. StableLM uses only 3 billion to 7 billion parameters, roughly 2% to 4% of the size of ChatGPT’s 175-billion-parameter model.
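To put that comparison in perspective, the percentages follow directly from the parameter counts. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the parameter-size comparison in the text.
stablelm_small = 3_000_000_000    # 3 billion parameters
stablelm_large = 7_000_000_000    # 7 billion parameters
gpt3_model = 175_000_000_000      # ChatGPT's 175-billion-parameter base model

low = stablelm_small / gpt3_model * 100   # ≈ 1.7%
high = stablelm_large / gpt3_model * 100  # = 4.0%
print(f"StableLM is {low:.1f}% to {high:.1f}% of the 175B model's size")
```

So the smallest StableLM checkpoint is under 2% of the size of ChatGPT’s base model, and even the largest alpha checkpoint is just 4%.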
Just as Stable Diffusion is a more accessible image generator that third-party developers can extend, StableLM provides a free, open-source alternative to commercial AI chatbots that is available to everyone.
Thanks to training on a new, experimental dataset built on EleutherAI’s “The Pile”, StableLM can carry on conversations and write code with high performance. Stability AI notes that this dataset contains 1.5 trillion tokens, three times larger than the dataset used to train most AI models. ChatGPT was trained on “The Pile” but later underwent further refinement, including reinforcement learning to help reduce erroneous results. ChatGPT has advanced a lot since its public release and is considered by most to be the AI chat leader.
A highly efficient AI model matters to Stability AI because it allows StableLM to run on low-cost systems and less powerful GPUs. You can install and run the alpha version of StableLM today. Instructions are available in the GitHub repository, along with a notebook detailing how to use it on computers with limited GPU capability.
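For those who want to try it locally, a minimal sketch of loading an alpha checkpoint with the Hugging Face `transformers` library might look like the following. The model id (`stabilityai/stablelm-base-alpha-3b`) and generation settings here are assumptions based on the standard `transformers` workflow, not taken from Stability AI’s notebook; check the repository for the exact instructions.

```python
# Minimal sketch (not Stability AI's official example) of running a StableLM
# alpha checkpoint with Hugging Face transformers.
MODEL_ID = "stabilityai/stablelm-base-alpha-3b"  # assumed id of the smallest alpha model

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a completion for `prompt` with a StableLM alpha model."""
    # Imported inside the function so the sketch can be read without the
    # (large) dependencies installed; the model download itself is several GB.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # On machines with limited GPU memory, half precision (torch_dtype) and
    # device_map="auto" may help; the repository's notebook covers this case.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens,
                             do_sample=True, temperature=0.7)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Write a short apology for breaking a friend's phone."))
```

Note that as a base (not instruction-tuned) model, the alpha checkpoints work best with plain text-completion prompts rather than chat-style instructions.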
The easiest way to try StableLM is to visit the Hugging Face demo page. Since it just launched and is likely in high demand, response times may be slow, and as an alpha release, the results may not be as good as those of the final version.
For example, when I asked StableLM to help me write an apology letter for breaking someone’s phone, it told me I had done the right thing. The AI somehow misunderstood and thought I had gifted the phone rather than damaged it.
Stability AI includes a disclaimer about the results because StableLM is a pre-trained large language model without any additional fine-tuning. It does not use reinforcement learning, as ChatGPT does, so responses “can be of varying quality and may include potentially offensive language and ideas.”
It is unknown whether the upcoming advanced StableLM models can compete with ChatGPT. For the time being, it is clearly a work in progress. The same was true of another open-source challenger, ColossalGPT.
However, this is not the end of the story. Stability AI says larger models with 15 billion, 30 billion, and 65 billion parameters are in progress and should help refine the results, and a 175-billion-parameter model is planned for the future. StableLM is off to a good start given the limited model sizes currently available.
The open-source nature and lightweight implementation of StableLM’s alpha version let developers start building applications now. With ample potential for growth and improvement, this new AI chatbot is worth keeping an eye on.