JAN: This 100% LOCAL AI ASSISTANT is better than ChatGPT (works w/ RAG, Local Models, Groq & OpenAI)
TLDR
Jan is a versatile desktop application that lets users run open-source AI models locally for enhanced privacy, or connect to online models via API keys from platforms like OpenAI or Groq for broader capabilities. It's open-source, user-friendly, and runs cross-platform on Mac, Linux, and Windows. Jan offers the flexibility to switch between local and remote models, stores conversations offline, and exposes API endpoints for custom applications. It also supports extensions for additional functionality and has built-in engines for inference. The app is easy to install and operate, with a sleek interface that guides users through model selection, chat interactions, and advanced settings. It's an all-in-one solution for those looking to leverage AI without terminal commands or complex configuration.
Takeaways
- 🌐 Jan is a desktop app that allows running open-source AI models locally and also supports connecting to online models like OpenAI or Groq.
- 🔒 Jan emphasizes enhanced privacy by enabling local model execution, which is beneficial for sensitive conversations.
- 🔌 When an internet connection is unavailable or undesired, Jan's local models keep conversations private and secure.
- 🔀 Flexibility is provided by allowing users to switch between local and remote models according to their needs.
- 📂 All conversations with Jan are stored offline, and it is cross-platform, available on Mac, Linux, and Windows.
- 📡 Jan exposes API endpoints that are compatible with OpenAI, allowing integration with custom applications.
- 🧩 It offers an extension option for adding custom plugins or integrating with other tools and services.
- 📚 Jan can work with various text files, including PDFs and documents, and has two built-in inference engines: llama.cpp and TensorRT-LLM.
- 🚀 Jan's dual engine approach provides flexibility and options for model inference.
- 🔧 Users can connect Jan with LM Studio or other endpoints if needed, and it's easy to install with a one-click process.
- 🔑 Jan allows setting up API keys for services like Groq, which is useful for accessing additional capabilities without advanced hardware.
Q & A
What is Jan?
-Jan is a desktop app that lets users run open-source AI models locally and connect to online models like OpenAI or Groq using API keys, providing a unified platform for interacting with various AI models.
How does Jan enhance privacy for users?
-Jan enhances privacy by running AI models like Llama or Mistral directly on the user's device, eliminating the need for an internet connection. This is particularly useful for sensitive or confidential conversations.
What are the benefits of using local models in Jan?
-Using local models in Jan keeps conversations private and secure, as no data is transmitted over the internet; for capabilities beyond what local hardware can run, online providers can be connected instead.
Can Jan be used across different operating systems?
-Yes, Jan is cross-platform and available on Mac, Linux, and Windows, making it accessible to a wide range of users.
How does Jan's API endpoint feature work?
-Jan exposes API endpoints that can be used by custom applications or other AI applications. These endpoints are OpenAI-compatible, allowing integration with any system that speaks the OpenAI API.
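Because the endpoints follow the OpenAI API shape, talking to Jan's local server looks like talking to OpenAI itself. A minimal sketch of building such a request in Python; the base URL and port below are assumptions (check Jan's Local API Server settings for the actual address), and the model id is illustrative:

```python
import json

# Assumed default address of Jan's local API server -- verify in the app's
# Local API Server settings before use.
JAN_BASE_URL = "http://localhost:1337/v1"

def build_chat_request(model, user_message, temperature=0.7, max_tokens=256):
    """Build an OpenAI-style chat-completion payload for Jan's local server."""
    return {
        "model": model,  # illustrative model id
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

payload = build_chat_request("llama3", "Hello, Jan!")
print(json.dumps(payload, indent=2))
# POST this JSON to f"{JAN_BASE_URL}/chat/completions" with any HTTP client,
# or point the openai Python package at base_url=JAN_BASE_URL.
```

Because the payload is standard OpenAI format, any tool that already supports the OpenAI API can be redirected to Jan by swapping the base URL.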
What is the dual engine approach in Jan?
-The dual engine approach refers to Jan's two built-in inference engines: llama.cpp and TensorRT-LLM. This gives users more flexibility and options for model inference.
How can users install and use a local model in Jan?
-Users can install Jan by visiting the app's website, downloading the installer for their operating system, and following the setup instructions. They can then explore the Hub to find and download models, or paste a Hugging Face link to download a model automatically.
What is the purpose of the 'New Thread' option in Jan?
-The 'New Thread' option lets users create new conversation threads, which is useful for organizing different discussions or inquiries within the app.
How can users integrate Jan with their custom tools and services?
-Users can integrate Jan with custom tools and services through the exposed API endpoints or by adding custom plugins via the extensions option.
What is the significance of the 'Retrieval' feature in Jan?
-The 'Retrieval' feature, part of Jan's RAG (Retrieval-Augmented Generation) capabilities, lets users attach a file and ask questions about its content. This is particularly useful for extracting information from documents or PDFs.
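To make the retrieval idea concrete, here is a toy sketch of the step RAG performs before generation: split the attached document into chunks, pick the chunk most relevant to the question, and prepend it to the prompt. Jan's real retrieval uses embeddings rather than word overlap; this version only illustrates the concept:

```python
def _toks(s):
    """Lowercase word set with trailing punctuation stripped."""
    return {w.strip(".,?!").lower() for w in s.split()}

def best_chunk(document, question, chunk_size=8):
    """Return the chunk of `document` sharing the most terms with `question`."""
    words = document.split()
    chunks = [" ".join(words[i:i + chunk_size])
              for i in range(0, len(words), chunk_size)]
    q = _toks(question)
    return max(chunks, key=lambda c: len(q & _toks(c)))

doc = ("Jan stores all conversations offline. "
       "Jan exposes OpenAI-compatible API endpoints. "
       "The retrieval feature lets you attach a file and ask about it.")
question = "How does Jan handle API endpoints?"
context = best_chunk(doc, question)
# The retrieved chunk is prepended to the prompt the model actually sees.
prompt = f"Context: {context}\n\nQuestion: {question}"
print(prompt)
```

The model then answers from the supplied context rather than from memory alone, which is why RAG works well for questions about attached PDFs.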
How does Jan handle conversations with online models?
-Jan handles conversations with online models by letting users connect their OpenAI or Groq API keys. Once connected, users can select online models like Groq's Llama 3 from the models dropdown and engage in fast-paced conversations.
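Groq also exposes an OpenAI-compatible endpoint, so switching Jan (or any client) from a local model to Groq amounts to changing the base URL and attaching the API key. A hedged sketch; the model identifier is illustrative (check Groq's current model list), and no network call is made here:

```python
import json
import os

# Groq's OpenAI-compatible base URL.
GROQ_BASE_URL = "https://api.groq.com/openai/v1"

def groq_request(api_key, model, message):
    """Build the headers and payload for a Groq chat-completion call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,  # illustrative id -- verify against Groq's model list
        "messages": [{"role": "user", "content": message}],
    }
    return headers, payload

headers, payload = groq_request(
    os.environ.get("GROQ_API_KEY", "sk-demo"),  # placeholder key for the demo
    "llama3-70b-8192",
    "Summarize RAG in one line.",
)
print(json.dumps(payload))
# POST to f"{GROQ_BASE_URL}/chat/completions" with these headers to send it.
```

The request body is identical to the local case; only the destination and credentials differ, which is what makes switching providers in Jan a dropdown choice rather than a rewrite.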
What are the advantages of using Jan over other AI model platforms?
-Jan offers a one-click installation process, the ability to use local and online models, cross-platform accessibility, API endpoints for custom integrations, and the flexibility to switch easily between models and providers.
Outlines
🖥️ Jan App Overview and Local Model Usage
The video introduces Jan, a desktop application for running open-source AI models locally and connecting to online models via API keys. Jan is open-source and user-friendly, supporting local model execution for privacy, cross-platform availability, API endpoint exposure, and extensions. The video demonstrates how to install Jan, select a local model like Llama 3, and interact with it through a chat interface. It also covers how to customize the model's settings and create new threads for different conversations.
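Among the model settings the video customizes, temperature is the least self-explanatory. A small sketch of what it does to next-token probabilities, using made-up logits rather than anything from Jan's internals:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; temperature rescales before softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]            # hypothetical scores for three tokens
cold = softmax(logits, temperature=0.2)  # low temperature: near-greedy
hot = softmax(logits, temperature=2.0)   # high temperature: flatter, more varied
print(cold[0], hot[0])
```

Low temperature concentrates probability on the top token (more deterministic replies); high temperature flattens the distribution (more varied, riskier replies). The max-tokens setting, by contrast, simply caps the length of the generated response.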
🔌 Advanced Settings and Online Model Integration
This paragraph delves into the advanced settings of Jan, including uploading custom models, enabling experimental features, and toggling GPU acceleration. It also explains how to set up API endpoints for models and manage HTTPS proxies. The video guides viewers on how to configure API keys for online models like Groq and demonstrates the process of sending messages using these online models. Additionally, it introduces the Retrieval-Augmented Generation (RAG) feature, which allows users to attach documents and ask questions about them. The video concludes with a call to action for viewers to share their thoughts and support the channel.
Keywords
💡Jan
💡Local Models
💡Open Source
💡API Keys
💡Cross-platform
💡API Endpoints
💡Extensions
💡Inference Engines
💡Llama 3 Model
💡Retrieval-Augmented Generation (RAG)
💡Groq API
Highlights
JAN is a desktop app that allows running open-source AI models locally and connecting with online models via API keys.
It is fully open source, easy to install, and does not require terminal, coding, or configuration.
JAN supports running local AI models like Llama or Mistral for enhanced privacy without an internet connection.
The app offers the ability to switch between local and remote models like OpenAI or Groq for flexibility.
All conversations with JAN are stored offline and the app is cross-platform, available on Mac, Linux, and Windows.
JAN exposes API endpoints that are compatible with OpenAI models for custom applications.
The app has an extensions option for adding custom plugins or integrating with other tools and services.
JAN uses two built-in inference engines: llama.cpp and TensorRT-LLM.
Users can connect JAN with LM Studio or other endpoints if needed.
The interface allows for easy installation of local models and provides options to explore and download models.
JAN enables users to rename threads, change instructions, and select models for conversation.
Advanced settings for models can be adjusted, including temperature and max tokens.
The app provides options to create new threads and install models from the Hub.
JAN allows for setting up API endpoints for the model and checking logs.
Settings include options for managing installed models, experimental features, GPU acceleration, and API configurations.
Users can set up and use JAN with various LLM providers like Gemini, OpenAI, Groq, or local models.
JAN supports RAG (Retrieval-Augmented Generation) features for attaching files and asking questions about the content.
The app is designed to be user-friendly, eliminating the need for complex terminal configurations.
JAN is a one-click install solution that centralizes the management of multiple LLM providers.