Character
A character is the piece of the system that turns all of the underlying context (roles, assets, MCP tools, agents) into an actual copilot that a human experiences. It's where we define how the character behaves and where its inference runs.
In the architecture, characters are one of the Core Objects: they hold the inference endpoint, inference configuration, and system prompt chunks. The LLM assumes a character, like an actor taking on a role in a specific space, so that each user gets a personality and focus matching their role rather than a generic model.
The Bifrost admin tool is where characters are created, edited, and inspected. Each character appears as a configurable unit that can be safely wired into collections.
Character Config Model
The CharacterConfig schema defines how a character is represented in configuration, APIs, and in the Bifrost admin tool:
CharacterConfig {
character_id: string
name?: string
description?: string
inference_endpoint?: string
custom_inference_uri?: string
auth_type?: string
auth_uri?: string
pre_tool_prompt?: string
post_tool_prompt?: string
welcome_prompt?: string
config?: object
created_at?: number
updated_at?: number
customer_id?: string
}
System
character_id: string
UUID for the character. Assigned by the platform and cannot be changed.
name?: string
Name for the character for display and search purposes.
description?: string
Description of the character and its general purpose.
created_at?: number / updated_at?: number
Timestamps for governance and change history.
customer_id?: string
Metadata used by Mimory. Not modifiable.
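To make the schema concrete, here is a hypothetical CharacterConfig instance. Every value below is illustrative only; the field names follow the schema above, but nothing else (the UUID, prompts, timestamps, config keys) comes from the platform.

```typescript
// Hypothetical CharacterConfig instance; all values are illustrative.
const maintenanceCopilot = {
  character_id: "2f6c1c9e-4b7a-4d21-9e58-1a2b3c4d5e6f", // assigned by the platform
  name: "Maintenance Copilot",
  description: "Copilot for packaging-equipment maintenance technicians.",
  inference_endpoint: "OpenAI",
  auth_type: "api_key",
  pre_tool_prompt: "You are the maintenance copilot for packaging equipment...",
  post_tool_prompt: "When diagnosing a sensor anomaly, confirm the asset first...",
  config: { temperature: 0.2, max_tokens: 1024 }, // provider-specific levers
  created_at: 1700000000, // governance timestamps
  updated_at: 1700000100,
};
```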
Inference and Auth
Clearly define where this character's inference runs and how it proves it's allowed to run there, so that authorization and routing breadcrumbs are built into the structure, not bolted on later.
inference_endpoint?: string
A logical identifier for the target inference location. Typically this is OpenAI or Anthropic, matching the providers that AI frameworks such as LangChain and Vercel's AI SDK support most readily. It can also be set to Custom.
custom_inference_uri?: string
Declares where the custom inference endpoint is when inference_endpoint is set to custom. This is expected to be used more frequently as role-specific fine-tuned models, on-prem SLMs, or other custom solutions grow in the marketplace.
auth_type?: string
How the character authenticates to the inference service (for example, "oauth2", "api_key", "jwt", etc.). This ties into the broader authorization and scope model so that every call can be traced and audited.
auth_uri?: string
The URI used to obtain or refresh credentials when auth_type requires a flow (e.g., OAuth token endpoint). This keeps security flows explicit and explainable.
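A minimal sketch of how the endpoint fields above might drive routing. The function name, the lowercase matching, and the fallback behavior are all assumptions for illustration, not platform APIs; the provider base URLs are the providers' public ones but are shown only as examples.

```typescript
// Sketch: resolve where a character's inference calls should be routed,
// based on inference_endpoint and custom_inference_uri.
interface InferenceFields {
  inference_endpoint?: string;
  custom_inference_uri?: string;
}

function resolveInferenceTarget(c: InferenceFields): string {
  switch ((c.inference_endpoint ?? "").toLowerCase()) {
    case "openai":
      return "https://api.openai.com/v1";
    case "anthropic":
      return "https://api.anthropic.com/v1";
    case "custom":
      // Custom requires an explicit URI; fail loudly so routing stays auditable.
      if (!c.custom_inference_uri) {
        throw new Error("inference_endpoint is Custom but custom_inference_uri is unset");
      }
      return c.custom_inference_uri;
    default:
      throw new Error(`unknown inference_endpoint: ${c.inference_endpoint}`);
  }
}
```

Keeping this resolution explicit (rather than defaulting silently) matches the goal of building routing breadcrumbs into the structure.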
Persona (pre-tool) System Prompt
Define the character's identity and priorities before any tools or external context are added.
The Persona is encoded primarily in:
pre_tool_prompt?: string
Think of pre_tool_prompt as the first half of the system prompt that sets the stage:
- Who the character is (role, domain expertise, vantage point).
- How it should speak to this organization's users (tone, level of detail).
Characters are meant to mesh with the user's role and environment - operators vs. technicians vs. foremen all see different behavior, even if the underlying tools are similar.
Recommended structure for pre_tool_prompt:
- Identity
- "You are the maintenance copilot for packaging equipment…"
- Communication style
- "Be concise and concrete. Prefer checklists and stepwise instructions over long prose…"
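The recommended structure above can be assembled programmatically. This is a sketch under assumptions: the helper name and the blank-line join are conventions chosen here, not anything the platform requires.

```typescript
// Sketch: build a pre_tool_prompt from the recommended Identity and
// Communication-style sections, separated by a blank line.
function buildPreToolPrompt(identity: string, style: string): string {
  return [identity.trim(), style.trim()].join("\n\n");
}

const preToolPrompt = buildPreToolPrompt(
  "You are the maintenance copilot for packaging equipment.",
  "Be concise and concrete. Prefer checklists and stepwise instructions over long prose."
);
```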
Procedure (post-tool) System Prompt
Define how the character uses tools and data to execute procedures - the "what to do with information once you have it."
The Procedure portion is encoded in:
post_tool_prompt?: string
The post_tool_prompt is typically the second half of the system prompt. It specifies which of the available tools to use, and how, to accomplish common tasks. This is where core procedures and workflows are embedded into the character.
Typical responsibilities for post_tool_prompt:
- Interpreting tool results
- How to reconcile conflicting data from multiple MCP tools or agents.
- Following domain procedures
- How to turn raw readings, events, or records into stepwise guidance that matches the organization's SOPs.
- Escalation and handoff
- When to stop, summarize, and hand off to a human (e.g., safety-critical thresholds, ambiguous diagnosis-like situations).
Pattern recommendations:
- Encode procedural recipes:
"When diagnosing a sensor anomaly: (1) confirm the asset, (2) check the last N readings, (3) compare to known thresholds from [tool], (4) propose next actions and call out any safety issues explicitly."
- Emphasize clarity in outcomes:
"Always clearly separate facts returned by tools from your own synthesized recommendations."
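One way to picture the "first half / second half" framing is that the pre- and post-tool prompts bracket the tool context in the final system prompt. The assembly order and separator below are assumptions based on that description, not a documented platform behavior.

```typescript
// Sketch: the persona (pre-tool) prompt leads, tool descriptions sit in the
// middle, and the procedure (post-tool) prompt closes the system prompt.
function assembleSystemPrompt(
  preToolPrompt: string,
  toolDescriptions: string[],
  postToolPrompt: string
): string {
  return [preToolPrompt, ...toolDescriptions, postToolPrompt].join("\n\n");
}

const systemPrompt = assembleSystemPrompt(
  "You are the maintenance copilot for packaging equipment.",
  ["Tool: read_sensor - returns the last N readings for an asset."],
  "When diagnosing a sensor anomaly: (1) confirm the asset, (2) check the last N readings..."
);
```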
Config
Tune how the character behaves at run-time: model settings, safety levers, and feature flags that adapt the character to each organization and use case. These are typically specific to the inference endpoint, since each model and framework exposes a different set of tuning levers.
config?: object
A provider- and deployment-specific JSON object used to parameterize inference and behavior. This is intentionally unopinionated so it can adapt to different backends while staying governed at the platform level.
Common uses include:
- Model & decoding parameters
- Model name or family, temperature, max tokens, and other sampling configuration.
- Tooling behavior
- Limits on number of tool calls per turn, timeout settings, retry strategies.
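An illustrative config object covering both categories above. Because config is intentionally unopinionated, every key here is an assumption about what a particular backend might accept (the tooling keys in particular are hypothetical), not a platform schema.

```typescript
// Illustrative config object; keys are backend-specific assumptions.
const exampleConfig = {
  // Model & decoding parameters
  model: "gpt-4o",
  temperature: 0.2,
  max_tokens: 1024,
  // Tooling behavior (hypothetical keys)
  max_tool_calls_per_turn: 3,
  tool_timeout_ms: 10_000,
  retry_strategy: "exponential_backoff",
};
```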
Characters Configuration In Bifrost
When you create or edit a character in Bifrost, you are:
- Choosing where the inference runs and how it authenticates (endpoint URI & auth).
- Defining who it is to your users (Persona / pre-tool system prompt).
- Encoding how it executes real-world procedures using assembled tools and data (Procedure / post-tool system prompt).
- Tuning how it behaves operationally (config).
Together, these make each character a precise, governed expression of "intelligence that understands the moment and turns awareness into action - safely" for a specific role and space in your organization.
