HUGE LM Studio Update | Multi-Models with AutoGen ALL Local

Tyler AI
27 Mar 2024 · 04:15

TLDR: The video introduces a significant LM Studio update: multi-model sessions, which let users run multiple models on a single server by adjusting the model property in the config list. It demonstrates how to download and install two models, Microsoft's Phi-2 and Zephyr, and how to distinguish them using separate LLM config definitions. It then shows the two models interacting with each other through LM Studio, emphasizing the ease of use and the benefits of this open-source, free tool.


  • 🚀 Introduction of multi-model support in LM Studio, allowing more than one model to run on a single server.
  • 🔧 Adjusting the model property in the config list is the key to the multi-model functionality.
  • 📈 LM Studio's latest update enables simultaneous use of multiple local models through a multi-model session feature.
  • 🎥 A video tutorial is linked in the description for downloading, installing, and using LM Studio.
  • 🖥️ Users are guided through selecting and loading different models in LM Studio's playground tab.
  • 🔄 The demonstration runs two distinct models, Phi-2 and Zephyr, concurrently.
  • 📝 Separate LLM config definitions are created for each model, named for Zephyr and Phi-2, each with its model identifier, base URL, and API key.
  • 👥 Two agents, Phil and Zep, represent the Phi-2 and Zephyr models respectively and interact with each other.
  • 🗂️ Explanation of cache settings: setting the cache to 'none' ensures a unique response every time.
  • 🎭 A practical example of the agents in action, with Phil initiating a conversation with Zep and receiving a joke in response.
  • 💡 Emphasis on LM Studio's open-source nature, free usage, and privacy, since no information is stored.

Q & A

  • What is the main update in LM Studio that the video discusses?

    - The main update discussed in the video is the introduction of multi-model sessions in LM Studio, which allows users to run multiple local models simultaneously on one server.

  • How does the multi-model feature work in LM Studio?

    - The multi-model feature works by adjusting the model property in each config list entry, so that several config lists can be defined, each pointing at a different model loaded in LM Studio.

  • What is required to use the multi-model feature in LM Studio?

    - To use the multi-model feature, users need to download and install at least two different models from the LM Studio homepage and then adjust the config list to include the models for the multi-model session.

  • How do you distinguish between different models in the multi-model session?

    - Different models are distinguished by their model identifiers and base URLs in the config list. Each agent (or assistant) is assigned a specific model identifier, which determines the model it uses for interactions.
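That distinction can be sketched as two AutoGen-style config lists. The identifiers `phi-2` and `zephyr-7b-beta`, the port `1234`, and the placeholder key are assumptions for illustration; in practice you copy the exact identifier from the model's page and the base URL from LM Studio's server tab:

```python
# Two separate config lists, one per model, both pointing at the same
# local LM Studio server. The "model" field is what tells the server
# which loaded model should answer a given request.
config_list_phi = [{
    "model": "phi-2",                        # assumed identifier; copy from the model's page
    "base_url": "http://localhost:1234/v1",  # LM Studio's default local endpoint
    "api_key": "lm-studio",                  # placeholder; the local server does not check it
}]

config_list_zephyr = [{
    "model": "zephyr-7b-beta",               # assumed identifier for the second model
    "base_url": "http://localhost:1234/v1",  # same server, different model
    "api_key": "lm-studio",
}]

# Same server, different model identifiers: that is the whole distinction.
print(config_list_phi[0]["model"], config_list_zephyr[0]["model"])
```

Each agent is then handed one of these lists, so everything it says goes to its own model.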

  • What is the purpose of the 'playground' tab in LM Studio?

    - The 'playground' tab in LM Studio lets users load and test multiple models in a multi-model session environment, providing a space to experiment and see how the models interact with each other.

  • How can you find the API model identifier in LM Studio?

    - The API model identifier can be found on the model's individual page in LM Studio. It is a unique string used to distinguish the model when setting up the config list for multi-model sessions.

  • What is the cache setting in the config list, and when might you set it to 'none'?

    - The cache setting in the config list determines whether the results of interactions are stored for reuse. Setting it to 'none' means results are never cached, so each interaction produces a freshly generated response.
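In AutoGen this is typically expressed through the llm_config handed to an agent. A minimal sketch, assuming recent pyautogen naming where the parameter is `cache_seed` (older releases called it `seed`); the model identifier and endpoint are illustrative:

```python
# llm_config for one agent; cache_seed=None disables response caching,
# so repeated runs of the same prompt get freshly generated answers.
llm_config_zephyr = {
    "config_list": [{
        "model": "zephyr-7b-beta",               # assumed model identifier
        "base_url": "http://localhost:1234/v1",  # assumed LM Studio endpoint
        "api_key": "lm-studio",                  # placeholder key
    }],
    "cache_seed": None,  # the 'none' cache setting: never reuse cached results
}
print(llm_config_zephyr["cache_seed"])
```

Setting a fixed integer instead of `None` would make runs reproducible, which is the opposite trade-off to the one the video wants here.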

  • How does the video demonstrate the functionality of the multi-model feature?

    - The video demonstrates the functionality by walking through the setup of LM Studio for multi-model sessions: downloading the models, configuring the settings, and then creating an AutoGen script to run the models simultaneously. It also shows an example interaction between two agents, Phil and Zep, each using a different model.
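Under the hood, both agents talk to LM Studio's OpenAI-compatible endpoint. The sketch below builds the two requests with the standard library but does not send them; the model identifiers, port, and message text are assumptions for illustration:

```python
import json
import urllib.request

# Assumed LM Studio chat endpoint (OpenAI-compatible)
ENDPOINT = "http://localhost:1234/v1/chat/completions"

def build_request(model: str, user_message: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,  # selects which loaded model answers on the shared server
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer lm-studio"},  # placeholder key
    )

# One request per agent: same URL, different "model" field.
req_phil = build_request("phi-2", "Hey Zep, tell me a joke.")
req_zep = build_request("zephyr-7b-beta", "Sure, Phil, here goes...")
# urllib.request.urlopen(req_phil)  # would only work with LM Studio serving locally
```

Seeing both requests hit the same URL makes it concrete why a single server can host the whole conversation.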

  • What are the benefits of using LM Studio with the multi-model feature?

    - The benefits include the ability to run multiple models on one server, easy setup and configuration, and model-to-model interaction without separate API keys or any storage of user information.

  • What is the significance of the open-source nature of LM Studio?

    - According to the video, LM Studio's open-source nature means it is freely available to download, use, and modify without licensing fees, and it encourages community involvement and collaboration, leading to continuous improvements and updates to the software.



🚀 Introducing Multi-Model Support in LM Studio

The video begins with an exciting announcement about the latest update in LM Studio, which now supports multi-model functionality. This means users can run more than one model on a single server by adjusting the model properties in the config list. The video creator briefly mentioned this feature in the previous day's video and encourages viewers to check out LM Studio, providing a link in the description for downloading, installing, and getting started with the software. The multi-model sessions allow for simultaneous use of multiple local models, enhancing the user experience and offering more flexibility in working with different AI models.



💡LM Studio

LM Studio is an open-source platform mentioned in the video that allows users to run multiple language models on a single server. It is highlighted as a tool that simplifies the process of working with different models and is noted for its recent update that enables multi-model functionality. The video provides a tutorial on how to use LM Studio, including downloading and installing it, and running multiple models simultaneously.


💡multi-model

The term 'multi-model' refers to the capability of LM Studio to run more than one language model at the same time on a single server. This feature is significant as it allows users to switch between different models and compare their outputs without the need for separate servers or installations for each model. It enhances the flexibility and efficiency of working with language models.

💡config list

A 'config list' in the context of the video is a set of configurations that define how LM Studio interacts with the loaded language models. It includes details such as the model base URL, API key, and model name, which are essential for the proper functioning of the multi-model setup. The config list is crucial for distinguishing between different models running on the same LM Studio server.

💡API key

An 'API key' is a unique code that is used to authenticate requests from a software application to an API (Application Programming Interface). In the context of LM Studio, the API key is necessary for the software to access and use the language models. The video mentions the use of the API key in the config list to ensure that LM Studio recognizes the models being used.

💡model identifier

A 'model identifier' is a unique name or code that is assigned to a specific language model. It is used to distinguish between different models in a multi-model setup, such as within LM Studio. The model identifier is crucial for the config list, as it tells the system which model to use for a particular task or interaction.

💡autogen file

An 'autogen file' in this video is the Python script written with the AutoGen framework that ties the models together. It defines an LLM config for each model (model identifier, base URL, API key) and the agents that use them, and running it against the LM Studio server is what drives the multi-model interaction.
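A minimal shape for such a script might look like the following. The agent construction lines are shown as comments because they require the pyautogen package and LM Studio serving both models; all identifiers and values are illustrative assumptions:

```python
# Skeleton of an AutoGen script wiring two agents to two local models.
llm_config_phil = {
    "config_list": [{"model": "phi-2",             # assumed identifier
                     "base_url": "http://localhost:1234/v1",
                     "api_key": "lm-studio"}],     # placeholder key
    "cache_seed": None,                            # no cached responses
}
llm_config_zep = {
    "config_list": [{"model": "zephyr-7b-beta",    # assumed identifier
                     "base_url": "http://localhost:1234/v1",
                     "api_key": "lm-studio"}],
    "cache_seed": None,
}

# With pyautogen installed and the server running, the agents would be:
# import autogen
# phil = autogen.ConversableAgent("Phil", llm_config=llm_config_phil)
# zep = autogen.ConversableAgent("Zep", llm_config=llm_config_zep)
# phil.initiate_chat(zep, message="Tell me a joke.")
```

The two configs differ only in their model identifier, which is all the server needs to route each agent's messages to the right model.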


💡agents

In the context of the video, 'agents' refer to the virtual entities or users that interact with the language models through LM Studio. Each agent is associated with a specific model configuration, allowing them to utilize the features and capabilities of the chosen language model. The agents facilitate the demonstration of how multiple models can work together and communicate within the same software environment.

💡server logs

Server logs are records of activities and events that occur on a server. They provide detailed information about system operations, including errors, user interactions, and system performance. In the context of LM Studio, server logs are used to monitor the interactions between different models and agents, ensuring that the multi-model setup is functioning correctly.

💡open source

Open source refers to a type of software whose source code is made available to the public, allowing anyone to view, use, modify, and distribute the software. The video emphasizes LM Studio as an open-source platform, which means it is freely accessible and customizable, offering users the freedom to experiment and contribute to its development.

💡local models

Local models refer to language models that are stored and run on a user's own server or local machine, as opposed to being accessed remotely through an online service. The video discusses the ability to load and run multiple local models within LM Studio, which provides users with control over their data and the flexibility to use models without relying on external APIs.


LM Studio now supports multi-model sessions, allowing for the simultaneous use of multiple models on one server.

The introduction of multi-model sessions is a significant update to LM Studio, enhancing its capabilities and flexibility.

To utilize multi-model sessions, users need to adjust the model property in the config list within LM Studio.

LM Studio's multi-model functionality enables the loading of different models through the config list, each with unique characteristics.

The video provides a tutorial on downloading, installing, and running LM Studio, as well as navigating its interface.

In the latest update, LM Studio allows for the simultaneous loading of multiple local models, expanding its application potential.

The demonstration showcases how two different models can be distinguished and utilized within a single LM Studio session.

Separate LLM config definitions are created for each model, each containing the model identifier, base URL, and API key.

The video explains how to identify and use the model name and base URL for each LLM config, crucial for the multi-model setup.

The cache setting can be adjusted to 'none', ensuring that results are not cached and are different each time the model is run.

The demonstration includes two agents, Phil and Zep, each using a different model (Phi-2 and Zephyr) and interacting with each other.

The server logs for both models are visible, showing the successful interaction and response generation between the two models.

The practical application of multi-model sessions in LM Studio is showcased through a simple conversation between two agents.

LM Studio's open-source and free nature is emphasized, along with the fact that it processes everything locally and retains no user data.

The video encourages users to download and try LM Studio, highlighting its innovative features and user-friendly interface.

The update is seen as a significant improvement for LM Studio, potentially beneficial for users interested in exploring or working with AI models.