# -*- mode: org; -*-

* 0.9.9.6-pre

** Breaking changes
** New models and backends
** New features and UI changes
** Notable bug fixes

* 0.9.9.5 2026-05-03

** Breaking changes

- gptel's default ChatGPT backend has been removed. ~gptel-backend~ and
  ~gptel-model~ now default to =nil=, and there are no registered
  backends out of the box.  However, gptel remains usable without
  configuration: if ~gptel-send~ is called without a backend set, the
  ChatGPT backend is created on the fly and used.
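
  To set a default backend explicitly, you can register one yourself; a
  minimal sketch (the model name and key source here are placeholders,
  not defaults):

  #+begin_src emacs-lisp
  ;; Sketch: register an OpenAI-compatible backend and make it the
  ;; default.  Replace the model and the key lookup with your own.
  (setq gptel-model 'gpt-4o-mini
        gptel-backend (gptel-make-openai "ChatGPT"
                        :stream t
                        :key (getenv "OPENAI_API_KEY")))
  #+end_src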

- ~gptel-track-media~ affects gptel's link handling in all Org and
  Markdown buffers, not just chat buffers that have ~gptel-mode~ turned
  on.  When calling ~gptel-send~ with ~gptel-track-media~ turned on, and
  the buffer is in Org or Markdown mode, links to supported file types
  will be followed by gptel and included with the request.  Previously
  this behavior applied only in dedicated chat buffers.

  (This is actually how gptel has worked since v0.9.9.3, but this change
  in behavior was undocumented.)

- ~gptel-include-reasoning~ now defaults to =ignore=, meaning that
  reasoning text in LLM responses will be included in buffers but
  ignored by ~gptel-send~ on subsequent conversation turns.  This change
  was made because LLM providers recommend against sending reasoning
  text back as part of the LLM's response on later turns.  Reasoning
  text can also fill up the context window.
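
  A sketch of the available settings, based on the option's documented
  values:

  #+begin_src emacs-lisp
  ;; Possible values for gptel-include-reasoning:
  (setq gptel-include-reasoning 'ignore) ; new default: show reasoning,
                                         ; but don't resend it
  ;; (setq gptel-include-reasoning t)    ; include in buffer and requests
  ;; (setq gptel-include-reasoning nil)  ; discard reasoning text
  ;; A string names a buffer to redirect reasoning text to:
  ;; (setq gptel-include-reasoning "*gptel-reasoning*")
  #+end_src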

- ~gptel-make-tool~ now sets the tool's =:include= slot by default.
  This means that unless =:include nil= is explicitly specified,
  gptel-tools will default to including their results in the buffer when
  using ~gptel-send~.  This is recommended for coherent multi-turn
  conversations involving tool use, as the LLM uses tool results from
  past turns for context.

  (Tool result inclusion can be controlled globally (or buffer-locally)
  for all tools via ~gptel-include-tool-results~, whose default value
  has not been altered.)
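
  A hypothetical tool definition opting out of the new default (the tool
  name, arguments, and body here are illustrative, not part of gptel):

  #+begin_src emacs-lisp
  ;; Sketch: define a tool whose results are NOT included in the buffer.
  ;; Omitting :include nil now means results are included by default.
  (gptel-make-tool
   :name "read_file"
   :description "Read the contents of a file"
   :args (list '(:name "path" :type string
                 :description "Path to the file"))
   :function (lambda (path)
               (with-temp-buffer
                 (insert-file-contents path)
                 (buffer-string)))
   :include nil)
  #+end_src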

- The models =gpt-41-copilot=, =gpt-5= and =claude-opus-41= have been
  removed from the default list of GitHub Copilot models.  These models
  are no longer available in the GitHub Copilot API.

- The models =gpt-3.5-turbo= and =gpt-3.5-turbo-16k= have been removed
  from the default list of OpenAI models.  These models are either
  deprecated or no longer available.

- Breaking change to the ~gptel-backend~ API: When the =header= or =url=
  fields of a backend are specified as functions, they now accept one
  argument: the request context plist (=info=).  This is only relevant
  if you have defined custom header or url functions for your gptel
  backends.

  (This change is required for backends whose request header or URL
  should be modified depending on the state of the request.)
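
  A sketch of a backend using the new one-argument form, assuming a
  placeholder ~my/fetch-fresh-token~ function for per-request
  credentials:

  #+begin_src emacs-lisp
  ;; Sketch: compute request headers per request.  The :header function
  ;; now receives the request context plist (info).
  (gptel-make-openai "my-proxy"
    :host "llm.example.com"   ; placeholder host
    :stream t
    :header (lambda (info)
              ;; `info' is the request context plist; inspect it as
              ;; needed to vary the headers.
              `(("Authorization"
                 . ,(concat "Bearer " (my/fetch-fresh-token))))))
  #+end_src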

** New models and backends

- Deepseek backend: Add support for =deepseek-v4-flash=,
  =deepseek-v4-pro=.

- Anthropic backend: Add support for =claude-opus-4.7=.

- xAI backend: Add support for =grok-4-1-fast-reasoning=,
  =grok-4-1-fast-non-reasoning=, =grok-4-fast-reasoning=, and
  =grok-4-fast-non-reasoning=.

- GitHub Copilot backend: Add support for =claude-opus-4.7=,
  =gpt-5.1-codex=, =gpt-5.1-codex-mini=, =claude-sonnet-4.6=,
  =gemini-3.1-pro-preview=, =gpt-5.3-codex=, =gpt-5.4=,
  =gpt-5.4-mini=, and =gpt-5.5=.

- Gemini backend: Add support for =gemini-3.1-flash-lite-preview=;
  add deprecation notice for =gemini-3-pro-preview=.

- OpenAI backend: Add support for =gpt-5.5=, =gpt-5.5-pro=,
  =gpt-5.3-chat-latest=, =gpt-5.4=, =gpt-5.4-pro=, =gpt-5.4-mini=,
  =gpt-5.4-nano=, =gpt-5.2=, =gpt-5-mini=, =gpt-5-nano= and =o3-pro=.

** New features and UI changes

- gptel now displays token usage in the header-line when using
  gptel-mode.  This includes the tokens sent, cached tokens sent (if
  any) and tokens received.  The displayed totals are per-request or per
  session/buffer, and you can switch between the two by clicking the
  display.  Hovering the mouse over the display will show both in a
  tooltip.

  Displaying costs in currency instead of tokens is not yet supported.
...
...
