
chatless is a lightweight open-source AI chat client for Windows, macOS, and Linux. Its core function is to provide localized AI chat, document parsing, and knowledge base management. Developed by kamjin3086, the project targets users with low-performance computers who want to avoid running complex, resource-hungry clients. chatless is built on the Tauri 2.0 and Next.js 15 stacks, combined with a minimalist interface design, keeping the software small and smooth to run. Users can hold AI conversations through a local or remote Ollama API and quickly summarize and analyze documents in the knowledge base. The project is currently about 85% complete and will be fully open-sourced on GitHub in the future.

 

Feature List

  • AI Chat: Hold AI conversations through a local or remote Ollama API, suitable for reasoning through problems or generating simple code.
  • Local Document Parsing: Parse local documents, extract their content, and query it conversationally.
  • Knowledge Base Management: Build and manage local knowledge bases, with embedding generation for quickly summarizing and analyzing documents.
  • History Management: Save and manage chat logs for easy review and organization.
  • Lightweight Design: The installation package is only about 20 MB, with low resource consumption, suitable for low-performance devices.
  • Cross-Platform Support: Compatible with Windows, macOS, and Linux.
  • Personalized Settings: Flexible options let users adjust the app to suit their personal habits.

Usage Guide

Installation process

chatless is an open source project that requires cloning the code from GitHub and building it locally. Here are the detailed installation steps:

  1. Clone the repository
    Run the following command in a terminal to clone the chatless repository:

    git clone https://github.com/kamjin3086/chatless.git
    

    This downloads the project to your machine. Note that as of July 2025 the code is not yet fully open source, so you may need to wait for the developer to publish it.

  2. Install the development environment
    chatless is built with Tauri 2.0 and Next.js 15, which require the following tools:

    • Node.js: Version 18.18 or later is required by Next.js 15. Run the following command to check whether it is installed:
    node -v
    

    If you don't have it, you can download it from the Node.js website.

    • Rust: Tauri relies on the Rust compiler. Run the following command to install Rust via rustup:
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
    
    • Tauri Dependencies: Install Tauri's system dependencies for your operating system; refer to the official Tauri documentation. An example for Linux follows.
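
    On Ubuntu/Debian, for example, the Tauri 2.0 prerequisites can typically be installed as follows (a sketch based on the Tauri documentation; package names vary by distribution and Tauri version, so check the docs for your platform):

    sudo apt update
    sudo apt install libwebkit2gtk-4.1-dev build-essential curl wget file \
      libxdo-dev libssl-dev libayatana-appindicator3-dev librsvg2-dev
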
  3. Install project dependencies
    Go to the project directory:

    cd chatless
    

    Run the following command to install the Node.js dependencies:

    npm install
    
  4. Build and run
    Build the Tauri app:

    npm run tauri build
    

    Or run it in development mode:

    npm run tauri dev
    

    Once the command succeeds, chatless starts and displays the main interface. With Tauri's default layout, release installers are written under src-tauri/target/release/bundle/. The macOS installation package is roughly 20 MB; the exact size varies slightly by platform.

  5. Configuring the Ollama API
    By default, chatless uses Ollama for AI inference. Make sure Ollama is installed and running:

    • Local Deployment: Download and install Ollama (refer to the Ollama website).
    • Remote Deployment: Enter the remote Ollama API address in the settings screen (in the form http://<ip>:<port>).
      On chatless's AI Model Provider settings page, enter the API configuration and save it. You can verify the endpoint independently, as shown below.
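
    Before pointing chatless at an endpoint, it helps to confirm that Ollama actually responds. A quick sanity check against Ollama's standard REST API (the default port is 11434; llama3 below is only an example model):

    # List the models the Ollama server has available
    curl http://localhost:11434/api/tags

    # Pull a model if the list is empty
    ollama pull llama3

    # For remote use, bind the Ollama server to all network interfaces
    OLLAMA_HOST=0.0.0.0 ollama serve
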

Feature Walkthrough

AI Chat

  1. Open the main chatless interface and go to the "Chat" page.
  2. Select Local or Remote Ollama API (to be configured in Settings).
  3. Enter a question or instruction, such as "Write a Python loop for me" or "Summarize this document".
  4. The AI responds in real time and the answer is displayed in a dialog box. Users can continue to ask questions or view the history.
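
Because chatless delegates inference to Ollama, a stalled or empty reply usually points at the backend rather than the client. You can exercise the same model outside chatless through Ollama's standard chat endpoint (a sketch; replace llama3 with a model you have pulled):

    # Send one non-streaming chat request directly to Ollama
    curl http://localhost:11434/api/chat -d '{
      "model": "llama3",
      "messages": [
        {"role": "user", "content": "Write a Python loop for me"}
      ],
      "stream": false
    }'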

Local Document Parsing

  1. Click on the "Document Parsing" function in the main interface.
  2. Upload local documents (supports common formats such as PDF, TXT, DOCX).
  3. The software automatically parses the document content and generates searchable text.
  4. Enter a question related to the document, such as "Extract key points in the document", and the AI will answer based on the parsed content.
  5. Parsing results can be saved to the knowledge base for subsequent use.

Knowledge base management

  1. Go to the "Knowledge Base" page and click "New Knowledge Base".
  2. Upload a document or enter content manually; the software generates embeddings (local or Ollama-based inference is supported).
  3. Using the Knowledge Base Dialog function, enter a question such as "Summarize the financial data in the Knowledge Base".
  4. View the Knowledge Base Details screen to manage uploaded content or adjust embedding settings.
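
Under the hood, embedding generation turns each piece of content into a numeric vector that can be searched by similarity. If you use Ollama for this step, the same operation can be reproduced manually through its embeddings endpoint (a sketch; nomic-embed-text is an example model you would need to pull first):

    # Fetch an embedding model, then embed a sample sentence
    ollama pull nomic-embed-text
    curl http://localhost:11434/api/embeddings -d '{
      "model": "nomic-embed-text",
      "prompt": "Quarterly revenue grew 12% year over year."
    }'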

History Management

  1. Click on the "History" option on the main screen.
  2. Browse past chats and filter by time or topic.
  3. Supports exporting records to text files or deleting unwanted records.

Personalized Settings

  1. Go to the "General Settings" page to adjust the interface theme, font size, and so on.
  2. On the Knowledge Base settings screen, configure the embedding generation method (local or remote).
  3. On the AI Model Provider Settings screen, switch the API or adjust the model parameters.

Caveats

  • Performance: chatless is designed for low-performance devices, but the Ollama inference service it relies on must be stable.
  • Document support: Common document formats are currently supported; complex formats (such as encrypted PDFs) may require pre-processing, as in the example after this list.
  • Open-source status: The code is not yet fully open source; follow the GitHub repository for updates.
  • Community support: The developer welcomes feedback; submit issues or suggestions on GitHub.
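
As one example of such pre-processing, an encrypted PDF can be decrypted before upload with the qpdf command-line tool (an external utility, not part of chatless; assumes you know the document password):

    # Strip encryption so the parser sees a plain PDF
    qpdf --password=yourpassword --decrypt encrypted.pdf decrypted.pdf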

Application Scenarios

  1. Personal knowledge management
    Students and researchers can import study notes or work documents into the knowledge base to quickly summarize and query their content.
  2. Remote AI inference
    Professionals handling complex tasks can tap the inference power of a high-performance host from a low-performance computer via the remote Ollama API.
  3. Lightweight code assistance
    Programmers can use chatless to generate simple code or debug snippets in a simple, resource-light interface.
  4. Localized office work
    Enterprise employees can parse local documents and interact with AI without data being uploaded to the cloud, protecting privacy.

FAQ

  1. What AI models does chatless support?
    Ollama's local and remote APIs are the current priority; support for other model providers may be added in the future.
  2. How does it run on a low-performance computer?
    The chatless installer is small (about 20 MB), the Tauri and Next.js stack keeps it light, and a remote API can offload computation from the local machine.
  3. How does the knowledge base feature work?
    Upload a document or enter content to generate embeddings; you can then query the knowledge base through a dialog, which suits quick analysis and summarization.
  4. Is an internet connection required?
    Local mode works offline; remote API mode requires a connection to the Ollama service.
