flexigpt-app v0.0.64
Published: Jan 15, 2026 · License: MPL-2.0

FlexiGPT


FlexiGPT is a cross-platform desktop client that lets you chat with multiple Large Language Models (LLMs) from a single, streamlined interface.

⚠ Early Access Notice:

The project is under active development; expect breaking changes and incomplete features between releases. This repository is intended as a standalone application, not a reusable library. Public Go package APIs are not stable and may change between releases.

Quick Start

  • Install FlexiGPT for your OS.

  • Launch FlexiGPT, then open Settings.

  • In Auth Keys, paste the API key for your provider. Example links to provider pages for getting API keys:

    • OpenAI, Anthropic, Google Gemini.
    • At least one provider must be configured with an API key before chatting.
    • FlexiGPT doesn’t bill you; all usage is metered by each provider on your own account.
  • Start chatting!

Key Features

Multi-provider Connectivity with Model Presets
  • First-class connectors for OpenAI, Anthropic, Google, DeepSeek, xAI, Hugging Face, OpenRouter, Local LLama.cpp.

  • Plus any local or remote endpoint compatible with the OpenAI Chat Completions, OpenAI Responses, or Anthropic Messages APIs.

  • API keys are stored securely in your OS keyring - never in plain text.

  • Model Presets bundle a model selection (e.g. gpt-5.1) together with recommended defaults such as temperature, token limits, system prompt and reasoning parameters. Built-in presets for all supported LLM providers, with one-click loading during chat. Tweak values per conversation without altering the saved preset.

  • Feature details
    • Key Concepts

      • Model Preset: Named set of defaults for a single model (temperature, max tokens, reasoning, etc.).
      • Provider Preset: Collection of model presets that belong to one provider; it also defines that provider’s default preset.
      • Default Model Preset: The preset automatically applied when the provider is selected and the user has not chosen a different preset.
    • Presets are organized per provider. A provider may expose many presets and may declare one of them as its default.

    • A preset can be enabled or disabled at any time; existing chat history is not modified.

    • Per-conversation overrides are possible: after selecting a preset the user may fine-tune individual parameters (temperature, reasoning, etc.) without changing the stored preset.

    • Supports models with multiple behaviors: non-reasoning, reasoning with levels (high/medium/low), hybrid reasoning (with reasoning tokens).

    • Selecting a Preset in Chat

      • In the chat input bar, open the dropdown and choose any preset listed.
      • The preset’s parameters appear next to the dropdown; use the slider or number fields to make conversation-specific adjustments if desired.
      • The adjusted parameters affect only the current conversation and are discarded when the session ends.
    • Managing Presets

      • Click the Model Presets icon in the sidebar.
      • Select a provider or create a new one.
      • Built-in presets ship with the app and appear in this list.
      • Create, edit, delete, enable/disable presets as required.
      • Mark one preset as Default for the provider; it loads automatically when a model from that provider is first chosen.
    • Notes

      • Deleting a preset does not alter existing conversations; those sessions keep a snapshot of the parameters that were active when the message was sent.
      • Global defaults are applied for any parameter left blank in the preset.
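As an illustration, the preset hierarchy above might be modeled roughly like this (a minimal Go sketch; all type and field names are assumptions for illustration, not FlexiGPT's actual API):

```go
package main

import "fmt"

// ModelPreset is a hypothetical named set of defaults for a single model.
type ModelPreset struct {
	Name        string
	Model       string
	Temperature float64
	MaxTokens   int
	Reasoning   string // "", "low", "medium", or "high"
	Enabled     bool
}

// ProviderPreset groups a provider's presets and names its default.
type ProviderPreset struct {
	Provider      string
	DefaultPreset string
	Presets       map[string]ModelPreset
}

// Default returns the preset applied when the provider is selected and
// the user has not chosen a different one; a disabled default is skipped.
func (p ProviderPreset) Default() (ModelPreset, bool) {
	m, ok := p.Presets[p.DefaultPreset]
	return m, ok && m.Enabled
}

func main() {
	openai := ProviderPreset{
		Provider:      "openai",
		DefaultPreset: "balanced",
		Presets: map[string]ModelPreset{
			"balanced": {Name: "balanced", Model: "gpt-5.1", Temperature: 0.7, MaxTokens: 4096, Enabled: true},
		},
	}
	if d, ok := openai.Default(); ok {
		fmt.Println(d.Model) // gpt-5.1
	}
}
```

Per-conversation overrides would then copy the chosen preset's values into the session rather than mutating the stored preset.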
Unified Chat Workspace
  • Switch models mid-conversation, chain results from one model into the next, and fine-tune generation parameters on the fly.

    • Previous messages and context are preserved as a full thread in the new model’s API calls too.
  • Attach files, images, and PDFs; use prompt templates; load and resume previous conversations; and export a full conversation, a single message, or a code block/Mermaid diagram (or its rendered image) from within a message, all in the same interface.

  • Productivity tooling inside the chat

    • Streaming responses.
    • Code snippets with copy/export
    • Mermaid diagrams with copy/export/zoom
    • Full Math/LaTeX rendering
    • Readily available chat API requests and response details
Persistent Conversation History
  • Sessions are auto-saved to local files and can be resumed at any time, with fully local full-text search.

  • Feature details
    • Every chat session is persisted as a Conversation containing its title, timestamps and full message sequence.

    • Conversations are stored locally; you can reload a conversation and continue from the last point at any time.

    • A full-text search bar provides instant retrieval across titles and message contents. The search is fully local.

    • Key Concepts

      • Conversation: A saved chat session composed of messages and metadata.
      • Conversation Item: Lightweight record (ID, title, creation time) shown in lists and search results.
      • Message Roles: system, user, assistant; each recorded with timestamp and details.
    • Using Conversation History

      • Click the Search Bar to open the history dropdown.
      • Recent conversations are listed chronologically; type in the search bar to filter by keywords appearing in either titles or message text.
      • Select a row to reload the full conversation and continue chatting.
    • Automatic Saving

      • A new conversation record is started automatically when you open a fresh chat window.
      • The title is generated heuristically from the first user message but can be renamed at any time.
      • Messages are appended in real-time; timestamps are stored for precise ordering.
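The conversation records and title heuristic described above can be sketched as follows (illustrative Go types under assumed names; not the app's actual stored schema):

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// Message is one turn in a conversation.
type Message struct {
	Role    string // "system", "user", or "assistant"
	Content string
	SentAt  time.Time // stored for precise ordering
}

// Conversation is a saved chat session: metadata plus the full message sequence.
type Conversation struct {
	ID        string
	Title     string
	CreatedAt time.Time
	Messages  []Message
}

// heuristicTitle mimics deriving a title from the first user message
// (byte-based truncation, kept deliberately simple for this sketch).
func heuristicTitle(msgs []Message) string {
	for _, m := range msgs {
		if m.Role != "user" {
			continue
		}
		t := strings.TrimSpace(m.Content)
		if len(t) > 40 {
			t = t[:40] + "…"
		}
		return t
	}
	return "New conversation"
}

func main() {
	c := Conversation{
		ID:        "c1",
		CreatedAt: time.Now(),
		Messages:  []Message{{Role: "user", Content: "Explain Go channels", SentAt: time.Now()}},
	}
	c.Title = heuristicTitle(c.Messages)
	fmt.Println(c.Title) // Explain Go channels
}
```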
Attachments
  • Attach local or web-based files (text/code/images/PDFs) to a conversation.

  • Attach a directory (files within it are automatically crawled and attached).

  • Feature details
    • Multiple local files or a single directory can be selected at a time to attach to the message.

      • Code and text files become in-context text.
      • Images are sent as binary blobs.
      • Local extraction and attachment as text is supported for PDFs by default. Users can toggle the mode to send PDFs as blobs if supported by the API.
    • Web pages/URLs can be attached to the conversation.

      • By default readable content will be extracted and attached as in-context text for any web page.
      • Image URLs and PDF URLs are sent as binary blobs, with a user switch available to send them as links, or to extract PDF text and send that instead.
    • Attachments you add to a conversation are available to all subsequent turns in that conversation.

    • You can edit and send any message at any point in time. Attachments can be modified in edit mode too.

    • On each send, the currently attached blob files (images, PDFs in file mode) are re-read for the messages in this turn.

    • Anything attached as text is stored with the conversation and not reread again.

    • Attachments up to 16 MB are supported as of now.
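The text-vs-blob decision and the size limit described above can be sketched in Go (a hedged illustration keyed on file extension; the app's real classification logic may differ):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// maxAttachmentBytes mirrors the 16 MB limit mentioned above.
const maxAttachmentBytes = 16 << 20

// withinLimit reports whether an attachment's size is acceptable.
func withinLimit(size int64) bool {
	return size <= maxAttachmentBytes
}

// attachmentMode sketches the text-vs-blob decision; pdfAsText mirrors
// the user toggle for PDFs. Names here are illustrative assumptions.
func attachmentMode(name string, pdfAsText bool) string {
	switch strings.ToLower(filepath.Ext(name)) {
	case ".png", ".jpg", ".jpeg", ".gif", ".webp":
		return "blob" // images are sent as binary blobs
	case ".pdf":
		if pdfAsText {
			return "text" // extracted locally and attached as in-context text
		}
		return "blob"
	default:
		return "text" // code and text files become in-context text
	}
}

func main() {
	fmt.Println(attachmentMode("report.pdf", true)) // text
	fmt.Println(attachmentMode("photo.png", false)) // blob
	fmt.Println(withinLimit(20 << 20))              // false
}
```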

Prompt Templates
  • Turn complex prompts into reusable templates with variables.

  • If required, group them into toggleable bundles.

  • Feature details
    • Prompt Templates allow you to store complex prompts (with variables) and reuse them.

    • Templates can be organized into Bundles, which act as configurable packs that can be enabled or disabled.

    • Templates and Bundles can be disabled or removed at any time without affecting existing chat history.

    • Key Concepts

      • Prompt Template: A reusable prompt that can contain variables.
      • Prompt Bundle: A collection of templates that can be toggled on or off as a unit and shared with other users.
    • Invoking a Template

      • Invoke the template menu using mouse click or keyboard shortcut.
      • An auto-complete list appears, ranked by relevance.
      • Use ↑/↓ to navigate if necessary, then press Enter.
      • If two bundles provide the same slug, both options are listed.
      • The most relevant match is selected for processing.
    • Managing Prompt Templates

      • Click the Prompts icon in the sidebar.
      • Select a current prompt bundle or create a new one.
      • Built-in bundles and templates ship with the app and appear in this list.
      • Create, edit, delete, enable/disable bundles/templates as required.
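Variable expansion of this kind can be sketched with Go's standard text/template package (the {{.Var}} placeholder syntax here is illustrative, not FlexiGPT's actual template syntax):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render expands a prompt template against a map of variable values.
func render(tmpl string, vars map[string]string) (string, error) {
	t, err := template.New("prompt").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, vars); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, _ := render("Review this {{.Lang}} code for {{.Focus}} issues.",
		map[string]string{"Lang": "Go", "Focus": "concurrency"})
	fmt.Println(out) // Review this Go code for concurrency issues.
}
```

A real template store would additionally track per-bundle enable/disable state, so disabled bundles never surface in the auto-complete list.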
Themes (Light / Dark / Custom)
  • System/Light/Dark Themes supported with auto-detect & manual toggle.
  • An option to use any of the custom DaisyUI themes.

Install

macOS
  • Download the .pkg release package.
  • Click to install the .pkg. It will walk you through the installation process.
  • Local data (settings, conversations, logs) is stored at:
    • ~/Library/Containers/io.github.flexigpt.client/Data/Library/Application\ Support/flexigpt/
Windows
  • Download the .exe release package.
  • Click to install the .exe. It will walk you through the installation process.
  • Note: Windows builds have undergone very limited testing.
Linux
  • Download the .flatpak release package.

  • If Flatpak is not installed, enable it for your distribution

    • Ubuntu/Debian/etc (APT based systems):

      sudo apt update # update packages
      sudo apt install -y flatpak # install flatpak
      sudo apt install -y gnome-software-plugin-flatpak # optional, enables Flathub packages in GNOME Software Center
      flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
      
    • Some additional helper commands can be found in this script

  • Install the package

    • flatpak install --user FlexiGPT-xyz.flatpak
    • flatpak info io.github.flexigpt.client
  • Running the app

    • Using the launcher GUI: launch the app from your distribution’s launcher. E.g., in Ubuntu: press the Super key, type flexigpt, and click the icon.
    • Using terminal: flatpak run io.github.flexigpt.client
    • Note: there is a known issue with Nvidia drivers.
  • Your local data (settings, conversations, logs) will be at:

    • ~/.var/app/io.github.flexigpt.client/data/flexigpt

Built With

Contributing

License

Copyright (c) 2024 - Present - Pankaj Pipada

All source code in this repository, unless otherwise noted, is licensed under the Mozilla Public License, v. 2.0. See LICENSE for details.

Directories

Path Synopsis
cmd
agentgo command
Package main (or your own wrapper package) provides a small helper layer that turns the context-aware APIs exposed by ModelPresetStore into simple “context-less” helpers while adding the panic-to-error recovery middleware.
httpbackend command
internal
modelpreset/store
Package store implements the provider / model-preset storage layer.
prompt/store
Package store implements the prompt template storage and management logic.
tool/store
Package store keeps the read-only built-in tool assets together with a writable overlay that enables or disables individual bundles or tools.
