gptel 
- Description
- Interact with ChatGPT or other LLMs
- Latest
- gptel-0.9.9.4.tar (.sig), 2026-Feb-22, 800 KiB
- Maintainer
- Karthik Chikmagalur <karthik.chikmagalur@gmail.com>
- Website
- https://github.com/karthink/gptel
- Browse ELPA's repository
- CGit or Gitweb
- All Dependencies
- transient (.tar), compat (.tar)
- Badge
To install this package from Emacs, use package-install or list-packages.
Full description
gptel is a simple Large Language Model chat client, with support for multiple models and backends. It works in the spirit of Emacs, available at any time and in any buffer.

gptel supports:
- The services ChatGPT, Azure, Gemini, Anthropic AI, Together.ai, Perplexity, AI/ML API, Anyscale, OpenRouter, Groq, PrivateGPT, DeepSeek, Cerebras, GitHub Models, GitHub Copilot chat, AWS Bedrock, Novita AI, xAI, Sambanova, Mistral Le Chat and Kagi (FastGPT & Summarizer).
- Local models via Ollama, Llama.cpp, Llamafiles or GPT4All.

Additionally, any LLM service (local or remote) that provides an OpenAI-compatible API is supported.

Features:
- Interact with LLMs from anywhere in Emacs (any buffer, shell, minibuffer, wherever).
- LLM responses are in Markdown or Org markup.
- Supports conversations and multiple independent sessions.
- Supports tool-use to equip LLMs with agentic capabilities.
- Supports Model Context Protocol (MCP) integration using the mcp.el package.
- Supports multi-modal models (send images, documents).
- Supports "reasoning" content in LLM responses.
- Save chats as regular Markdown/Org/Text files and resume them later.
- You can go back and edit your previous prompts or LLM responses when continuing a conversation. These will be fed back to the model.
- Redirect prompts and responses easily.
- Rewrite, refactor or fill in regions in buffers.
- Write your own commands for custom tasks with a simple API.

Requirements for ChatGPT, Azure, Gemini or Kagi:
- You need an appropriate API key. Set the variable `gptel-api-key' to the key, or to a function of no arguments that returns the key. (It tries to use `auth-source' by default.)

ChatGPT is configured out of the box. For the other sources:
- For Azure: define a gptel-backend with `gptel-make-azure'.
- For Gemini: define a gptel-backend with `gptel-make-gemini'.
- For Anthropic (Claude): define a gptel-backend with `gptel-make-anthropic'.
- For AI/ML API, Together.ai, Anyscale, Groq, OpenRouter, Cerebras or GitHub Models: define a gptel-backend with `gptel-make-openai'.
- For PrivateGPT: define a backend with `gptel-make-privategpt'.
- For Perplexity: define a backend with `gptel-make-perplexity'.
- For DeepSeek: define a backend with `gptel-make-deepseek'.
- For Kagi: define a gptel-backend with `gptel-make-kagi'.

For local models using Ollama, Llama.cpp or GPT4All:
- The model has to be running on an accessible address (or localhost).
- Define a gptel-backend with `gptel-make-ollama' or `gptel-make-gpt4all'.
- Llama.cpp or Llamafiles: define a gptel-backend with `gptel-make-openai'.

Consult the package README for examples and more help with configuring backends.

Usage:

gptel can be used in any buffer or in a dedicated chat buffer. The interaction model is simple: type in a query and the response will be inserted below. You can continue the conversation by typing below the response.

To use this in any buffer:
- Call `gptel-send' to send the buffer's text up to the cursor. Select a region to send only the region.
- You can select previous prompts and responses to continue the conversation.
- Call `gptel-send' with a prefix argument to access a menu where you can set your backend, model and other parameters, or to redirect the prompt/response.

To use this in a dedicated buffer:
- M-x gptel: Start a chat session.
- In the chat session: Press `C-c RET' (`gptel-send') to send your prompt. Use a prefix argument (`C-u C-c RET') to access a menu. In this menu you can set chat parameters like the system directives, active backend or model, or choose to redirect the input or output elsewhere (such as to the kill ring or the echo area).
- You can save this buffer to a file. When opening this file, turn on `gptel-mode' before editing it to restore the conversation state and continue chatting.
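The backend definitions described above might look like this in an init file. This is a minimal sketch: the backend names, API key, host and model names are placeholders to adapt to your setup.

```emacs-lisp
;; Sketch of a gptel setup; key, host and model names are placeholders.
(setq gptel-api-key "your-openai-key")  ; or a function of no arguments
                                        ; returning the key

;; A remote backend (Anthropic's Claude):
(gptel-make-anthropic "Claude"
  :stream t
  :key "your-anthropic-key")

;; A local backend (Ollama running on localhost):
(gptel-make-ollama "Ollama"
  :host "localhost:11434"
  :stream t
  :models '(mistral:latest))
```

Backends defined this way become selectable from gptel's menu (`gptel-send' with a prefix argument); see the README for the full set of keyword options.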
- To include media files with your request, you can add them to the context (described next), or include them as links in Org or Markdown mode chat buffers. Sending media is disabled by default; you can turn it on globally via `gptel-track-media', or locally in a chat buffer via the header line.

Include more context with requests:

If you want to provide the LLM with more context, you can add arbitrary regions, buffers, files or directories to the query with `gptel-add'. To add text or media files, call `gptel-add' in Dired or use the dedicated `gptel-add-file'. You can also add context from gptel's menu instead (`gptel-send' with a prefix arg), as well as examine or modify context. When context is available, gptel will include it with each LLM query.

LLM Tool use:

gptel supports "tool calling" behavior, where LLMs can specify arguments with which to call provided "tools" (elisp functions). The results of running the tools are fed back to the LLM, giving it capabilities and knowledge beyond what is available out of the box. For example, tools can perform web searches or API lookups, modify files and directories, and so on. Tools can be specified via `gptel-make-tool', or obtained from other repositories, or from Model Context Protocol (MCP) servers using the mcp.el package. See the README for details. Tools can be included with LLM queries using gptel's menu, or from `gptel-tools'.

Rewrite interface:

In any buffer: with a region selected, you can rewrite prose, refactor code or fill in the region. This is accessible via `gptel-rewrite', and also from the `gptel-send' menu.

Presets:

Define a bundle of configuration (model, backend, system message, tools etc) as a "preset" that can be applied together, making it easy to switch between tasks in gptel. Presets can be saved and applied from gptel's transient menu. You can also include a cookie of the form "@preset-name" in the prompt to send a request with a preset applied.
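A preset bundling such settings might be defined along these lines. This is a sketch: the preset name, backend, model and tool shown are hypothetical and would need to exist in your configuration.

```emacs-lisp
;; Sketch of a preset; the backend, model and tool names are hypothetical.
(gptel-make-preset 'coding
  :description "Settings for coding tasks"
  :backend "Claude"                     ; a backend defined earlier
  :system "You are an expert programmer. Be terse."
  :tools '("read_buffer"))              ; a tool defined via `gptel-make-tool'
```

With such a preset defined, starting a prompt with the cookie "@coding" applies it for that request.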
This feature works everywhere, but preset cookies are also fontified in chat buffers.

gptel in Org mode:

gptel offers a few extra conveniences in Org mode:
- You can limit the conversation context to an Org heading with `gptel-org-set-topic'.
- You can have branching conversations in Org mode, where each hierarchical outline path through the document is a separate conversation branch. See the variable `gptel-org-branching-context'.
- You can declare the gptel model, backend, temperature, system message and other parameters as Org properties with the command `gptel-org-set-properties'. gptel queries under the corresponding heading will always use these settings, allowing you to create mostly reproducible LLM chat notebooks.

Finally, gptel offers a general purpose API for writing LLM interactions that suit your workflow. See `gptel-request', and `gptel-fsm' for more advanced usage.
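As an illustration of that API, a one-off query built on `gptel-request' might look like the following sketch, which sends a prompt and shows the response in the echo area. The prompt text is, of course, just an example.

```emacs-lisp
;; Sketch: send a one-off query using the current backend and model.
;; The callback receives the response (a string on success) and an
;; info plist describing the request.
(gptel-request
    "What is a closure in Emacs Lisp? Answer in one sentence."
  :callback (lambda (response info)
              (if (stringp response)
                  (message "gptel: %s" response)
                (message "gptel request failed: %s"
                         (plist-get info :status)))))
```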
Old versions
| File | Date | Size |
|---|---|---|
| gptel-0.9.9.3.tar.lz | 2025-Nov-09 | 144 KiB |
| gptel-0.9.9.tar.lz | 2025-Sep-03 | 134 KiB |
| gptel-0.9.8.5.tar.lz | 2025-Jun-11 | 126 KiB |
| gptel-0.9.8.tar.lz | 2025-Mar-15 | 102 KiB |
| gptel-0.9.7.tar.lz | 2024-Dec-05 | 76.5 KiB |
| gptel-0.9.6.tar.lz | 2024-Oct-17 | 69.2 KiB |
| gptel-0.9.5.tar.lz | 2024-Oct-12 | 66.8 KiB |
| gptel-0.9.0.tar.lz | 2024-Jun-24 | 57.8 KiB |
| gptel-0.8.6.tar.lz | 2024-May-02 | 49.3 KiB |
| gptel-0.8.5.tar.lz | 2024-May-01 | 49.5 KiB |
News
# -*- mode: org; -*-

* 0.9.9.4

** Breaking changes

- The models =gpt-5-codex=, =o3=, =o3-mini=, =o4-mini=, =claude-3.5-sonnet=, =claude-3.7-sonnet=, =claude-3.7-sonnet-thought=, =claude-opus-4= and =gemini-2.0-flash-001= have been removed from the default list of GitHub Copilot models. These models are no longer available in the GitHub Copilot API.
- =gptel-track-media= now controls whether links to media files are tracked /only/ in chat buffers. Previously it also controlled whether media files added to the context explicitly via =gptel-add-file= were sent. This is considered a bug and has now been fixed.

** New models and backends

- GitHub Copilot backend: Add support for =gpt-5.2=, =gpt-5.2-codex=, =gpt-41-copilot=, =claude-opus-4.5=, =claude-opus-4.6=, =gemini-3-pro-preview= and =gemini-3-flash-preview=.
- Anthropic backend: Add support for =claude-opus-4-6= and =claude-sonnet-4-6=.
- Bedrock backend: Add support for =claude-opus-4-5=, =claude-opus-4-6=, =claude-sonnet-4-6= and =nova-2-lite=.
- Add support for =gemini-3.1-pro-preview=, =gemini-3-pro-preview= and =gemini-3-flash-preview=.
- Add support for =gpt-5.1=.

** New features and UI changes

- Running ~gptel-add~ in IBuffer now adds marked buffers or the buffer at point to gptel's context, and running ~gptel-add~ with a negative prefix-arg removes them. This is similar to its behavior in Dired. To add the literal contents of the IBuffer to the context, you can select a text region first.
- When redirecting LLM responses to the kill ring or echo area, gptel now omits tool call results, as these tend to be very noisy. Kill ring redirection now correctly captures the full response from the LLM, including pre- and post-tool-call text.
- =gptel-rewrite= now supports tool calling. If =gptel-tools= is non-nil the LLM can, for instance, read files to fetch more context for the rewrite action.
- If a preset has been applied in a gptel chat buffer, saving the buffer to a file causes the preset to be recorded along with the other metadata (model, backend, tools etc). This makes it possible to associate any collection of gptel settings/preferences with the chat file, and not just the few properties that gptel writes to the file otherwise. But resuming this chat with the preset settings applied requires that the preset be defined, so the chat file will be less self-contained.
- =gptel-send= now works in Vterm buffers in a limited way. Responses will be inserted into Vterm buffers, but without streaming. The respond-in-place option to overwrite queries with responses in Vterm buffers is supported as well, but might be buggy if your shell prompt is "rich" and has many dynamic elements. Support for =gptel-send= in Term/Ansi-Term and Eat buffers is not yet available but planned.

** Notable bug fixes

- Function-valued system messages/directives are now evaluated in the buffer from which the gptel request is sent, so they can use the context of the current buffer correctly. (Previously they were evaluated in a temporary buffer used to construct the query, leading to unexpected behavior.)
- When using OpenAI-compatible APIs (such as DeepSeek), models that call tools within their "reasoning" phase are now correctly handled by gptel.

* 0.9.9.3

** Breaking changes

- The models =gpt-4-copilot= and =o1= have been removed from the default list of GitHub Copilot models. These models are no longer available in the GitHub Copilot API.
- Link handling in gptel chat buffers has changed, hopefully for the better. When ~gptel-track-media~ is non-nil, gptel follows links in the prompt and includes their contents with queries. Previously, links to files had to be placed "standalone", surrounded by blank lines, for the files to be included in the prompt. This limitation has been removed -- all supported links in the prompt will be followed now.
The "standalone" limitation was imposed to make included links stand ... ...