02

Output Rendering

A three-layer pipeline transforms raw terminal output into structured, readable content. Each layer builds on the previous one, and parse failures at any layer fall back to the layer below without losing output.

Three-Layer Pipeline

Layer 0 (ANSI/VT100) is the baseline terminal emulator — colors, cursor movement, text formatting. This is what every SSH client provides. It never breaks.

Layer 1 (standard formats) parses markdown, fenced code blocks, unified diffs, and inline formatting. Because it relies only on universal, decades-old conventions (fenced code blocks, unified diff syntax, markdown headings), it works with any agent that produces text output, with no agent-specific logic required.

Layer 2 (agent-specific) adds rich rendering for a specific agent's proprietary output patterns — tool use cards, thinking blocks, file operation tracking, streaming status. This layer is fragile and will break when agents change formats. When it does, users see Layer 1 rendering (still good), never mangled output.

Layer 2: Agent-specific enhancement
↓ falls back on parse failure
Layer 1: Standard format parsing
↓ falls back on parse failure
Layer 0: Raw terminal rendering

Layer 0 — ANSI/VT100

The foundation layer uses alacritty_terminal for full VT100/xterm-256color emulation. This handles escape sequences, cursor positioning, colors, scrollback, and all standard terminal behavior.

Rendering is GPU-accelerated on both platforms: SurfaceView with OpenGL ES 3.0 on Android, Metal/CoreText on iOS. The scrollback buffer holds 10,000 lines in memory and is cleared on session close.
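
The bounded scrollback can be sketched with a VecDeque that evicts the oldest line once the cap is reached. The 10,000-line figure comes from the text above; the Scrollback type and its methods are hypothetical, not part of alacritty_terminal:

```rust
use std::collections::VecDeque;

/// Illustrative bounded scrollback buffer (hypothetical type;
/// the real emulation is provided by alacritty_terminal).
struct Scrollback {
    lines: VecDeque<String>,
    cap: usize,
}

impl Scrollback {
    fn new(cap: usize) -> Self {
        Scrollback { lines: VecDeque::with_capacity(cap), cap }
    }

    /// Push a line, evicting the oldest once the cap is reached,
    /// so memory use stays bounded regardless of session length.
    fn push(&mut self, line: String) {
        if self.lines.len() == self.cap {
            self.lines.pop_front();
        }
        self.lines.push_back(line);
    }

    /// Drop the whole buffer, e.g. on session close.
    fn clear(&mut self) {
        self.lines.clear();
    }
}
```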

layer 0 — raw terminal
$ ls -la src/
drwxr-xr-x 4 dev dev 128 Feb 14 09:22 .
drwxr-xr-x 8 dev dev 256 Feb 14 09:20 ..
-rw-r--r-- 1 dev dev 2.4K Feb 14 09:22 main.rs
-rw-r--r-- 1 dev dev 1.1K Feb 14 09:18 lib.rs
drwxr-xr-x 3 dev dev 96 Feb 14 09:20 tests/

Layer 1 — Standard Formats

Layer 1 parses markdown, fenced code blocks, and unified diffs from the terminal output stream. Code blocks get syntax highlighting via tree-sitter with 30+ language grammars. Diffs render with file paths, hunk headers, and colored additions/deletions.

These are decades-old standards. Every AI agent uses markdown and diffs — this layer works with all of them out of the box.
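
As a sketch of the kind of scan this layer performs (illustrative only, not the actual tree-sitter-backed parser), fenced code blocks can be pulled out of a text stream like this:

```rust
/// Extract (language, body) pairs for fenced code blocks from
/// markdown-style text. Illustrative sketch: highlighting itself
/// would be handled downstream (e.g. by tree-sitter).
fn extract_code_blocks(text: &str) -> Vec<(String, String)> {
    let mut blocks = Vec::new();
    // While inside a fence, holds the language tag and body lines.
    let mut current: Option<(String, Vec<&str>)> = None;
    for line in text.lines() {
        if let Some(rest) = line.strip_prefix("```") {
            match current.take() {
                // Closing fence: finish the block.
                Some((lang, body)) => blocks.push((lang, body.join("\n"))),
                // Opening fence: remember the language tag.
                None => current = Some((rest.trim().to_string(), Vec::new())),
            }
        } else if let Some((_, body)) = current.as_mut() {
            body.push(line);
        }
    }
    blocks
}
```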

layer 1 — parsed output
## Code Block (syntax highlighted)
 
fn validate(input: &str) -> Result<(), &'static str> {
    if input.is_empty() {
        return Err("empty");
    }
    Ok(())
}
 
## Unified Diff
 
--- a/src/auth.rs
+++ b/src/auth.rs
@@ -12,1 +12,1 @@
- return None;
+ return refresh_token(t);

Layer 2 — Agent Enhancement

Claude Code CLI receives full Layer 2 support: response boundary detection, thinking block rendering, tool use cards with tool name / input / output, file operation tracking, and streaming status indicators.

Other agents (Aider, GitHub Copilot CLI) receive Layer 0 + Layer 1 rendering. A pluggable AgentParser trait allows community contributions for additional agents.

layer 2 — agent cards
# Claude Code response
 
▶ Thinking...
Analyzing the auth middleware for
token refresh logic
 
▶ Tool: Read file
src/middleware/auth.ts
 
▶ Tool: Write file
src/middleware/auth.ts
+6 lines -1 line

AgentParser Trait

Layer 2 parsing is built around the AgentParser trait. Each supported agent implements this trait to define how its output is detected, parsed into OutputEvent values, and how failures are handled. The trait is defined in evern-core/src/intelligence/layer2/mod.rs.

evern-core/src/intelligence/layer2/mod.rs
pub trait AgentParser: Send + Sync {
    /// Returns the agent type this parser handles.
    fn agent_type(&self) -> AgentType;

    /// Detect whether this agent is active based on the
    /// command string and initial output lines.
    fn detect(
        &self,
        command: &str,
        initial_output: &[String],
    ) -> bool;

    /// Attempt to parse a chunk of terminal output into
    /// structured OutputEvents. Returns None on parse
    /// failure, signaling fallback to Layer 1.
    fn parse(
        &mut self,
        output: &str,
    ) -> Option<Vec<OutputEvent>>;

    /// Reset internal parser state (e.g., on session
    /// disconnect or agent restart).
    fn reset(&mut self);
}

The detect method receives the command string the user typed and the first lines of output from the process. If it returns true, the pipeline routes subsequent output through this parser's parse method. The parse method returns Option<Vec<OutputEvent>> — returning None signals a parse failure, causing the pipeline to fall back to Layer 1 for that chunk.
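
The fallback contract can be sketched end to end. Everything here is a minimal stand-in: the trimmed OutputEvent, the ToyParser, and layer1_parse are illustrative, not the real types:

```rust
/// Trimmed-down stand-ins for the real types; illustrative only.
#[derive(Debug, PartialEq)]
enum OutputEvent {
    PlainText { content: String },
    ToolUse { tool: String },
}

trait AgentParser {
    fn parse(&mut self, output: &str) -> Option<Vec<OutputEvent>>;
}

/// Stand-in for Layer 1: always produces renderable output.
fn layer1_parse(chunk: &str) -> Vec<OutputEvent> {
    vec![OutputEvent::PlainText { content: chunk.to_string() }]
}

/// Toy Layer 2 parser: only understands lines starting with "TOOL:".
struct ToyParser;

impl AgentParser for ToyParser {
    fn parse(&mut self, output: &str) -> Option<Vec<OutputEvent>> {
        let tool = output.strip_prefix("TOOL:")?;
        Some(vec![OutputEvent::ToolUse { tool: tool.trim().to_string() }])
    }
}

/// Route a chunk through the active Layer 2 parser; a None from
/// `parse` falls back to Layer 1 for that chunk only.
fn render_chunk(parser: Option<&mut dyn AgentParser>, chunk: &str) -> Vec<OutputEvent> {
    if let Some(p) = parser {
        if let Some(events) = p.parse(chunk) {
            return events;
        }
    }
    layer1_parse(chunk)
}
```

Failing per chunk, rather than per session, is what lets a stale Layer 2 parser degrade line by line instead of all at once.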

Adding a New Agent Parser

To add Layer 2 support for a new AI agent, implement the AgentParser trait and register the implementation in the agent detector module.

Step 1: Implement the trait

Create a new module under intelligence/layer2/ (e.g., intelligence/layer2/my_agent.rs). Implement all four methods of AgentParser. The parse method must handle unexpected input gracefully — return None for any chunk that does not match expected patterns, which causes the pipeline to fall back to Layer 1 rendering for that chunk.

Step 2: Register the parser

Add the new parser to the registry in intelligence/layer2/mod.rs. The agent detector iterates through registered parsers and calls detect on each one when a new process starts.

Step 3: Handle failures

The trait contract requires that parse never panics. If the agent changes its output format, parse should return None for unrecognized chunks. The pipeline treats None as a signal to pass the raw output to Layer 1 (markdown/diff/code block parsing). This ensures users always see formatted output, even when a Layer 2 parser is outdated.

intelligence/layer2/my_agent.rs
pub struct MyAgentParser {
    state: ParserState,
}

impl AgentParser for MyAgentParser {
    fn agent_type(&self) -> AgentType {
        AgentType::Custom("my-agent")
    }

    fn detect(
        &self,
        command: &str,
        _initial_output: &[String],
    ) -> bool {
        command.starts_with("my-agent")
    }

    fn parse(
        &mut self,
        output: &str,
    ) -> Option<Vec<OutputEvent>> {
        // Return None to fall back to Layer 1
        // for any unrecognized output.
        match self.try_parse(output) {
            Ok(events) => Some(events),
            Err(_) => None,
        }
    }

    fn reset(&mut self) {
        self.state = ParserState::default();
    }
}

OutputEvent Enum

evern-core/src/output_parser.rs
pub enum OutputEvent {
    PlainText { content: String },
    CodeBlock { language: String, content: String },
    DiffBlock { file_path: String, hunks: Vec<Hunk> },
    MarkdownBlock { content: String },
    ResponseStart { agent: AgentType },
    ResponseEnd,
    ToolUse {
        tool: String,
        input: String,
        output: Option<String>,
    },
    ThinkingBlock { content: String },
    FileWrite { path: String, operation: FileOp },
}

Every OutputEvent variant generates an accessibility_label for screen readers. A code block becomes "Code block, Python, 15 lines." A diff becomes "Diff, server.rs, 3 additions, 1 deletion." A tool use becomes "Claude read file config.yaml."
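
Label generation of this kind can be sketched as a match over the event. The function name, the trimmed variants, and their fields here are illustrative, chosen to make the example self-contained:

```rust
/// Pluralize a count + noun, e.g. "1 deletion" / "3 additions".
fn plural(n: usize, word: &str) -> String {
    if n == 1 { format!("{n} {word}") } else { format!("{n} {word}s") }
}

/// Trimmed illustrative event; the real enum carries more variants.
enum OutputEvent {
    CodeBlock { language: String, lines: usize },
    DiffBlock { file_path: String, additions: usize, deletions: usize },
}

/// Build a screen-reader label for an event (hypothetical helper).
fn accessibility_label(event: &OutputEvent) -> String {
    match event {
        OutputEvent::CodeBlock { language, lines } => {
            format!("Code block, {language}, {}", plural(*lines, "line"))
        }
        OutputEvent::DiffBlock { file_path, additions, deletions } => {
            format!(
                "Diff, {file_path}, {}, {}",
                plural(*additions, "addition"),
                plural(*deletions, "deletion")
            )
        }
    }
}
```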

OutputEvent Lifecycle

When terminal output arrives from the SSH/Mosh connection, the pipeline processes it through the active layers and produces OutputEvent values. These events follow a defined lifecycle from emission to consumption by the platform UI.

1. Emission

The parser (Layer 1 or Layer 2, depending on whether an agent is detected) examines incoming terminal output. When the output matches a known pattern — a fenced code block, a unified diff, an agent tool use card — the parser emits one or more OutputEvent values representing that structured content.

2. Internal queue

Emitted events are appended to an internal VecDeque<OutputEvent> queue within the Rust core. This queue acts as a buffer between the asynchronous output stream and the platform UI thread. Events remain in the queue until the platform layer consumes them.

3. Consumption via polling

The platform UI (Compose on Android, SwiftUI on iOS) calls drain_output_events() through UniFFI at the display refresh interval. This method moves all queued events out of the internal buffer and returns them to the platform layer for rendering. The queue is empty after each drain call.

event lifecycle
# 1. Terminal output arrives
Server → SSH/Mosh → evern-core
 
# 2. Parser emits OutputEvents
raw bytes → Layer 0 (terminal state)
→ Layer 1 (markdown/diff)
→ Layer 2 (agent cards)
Vec<OutputEvent>
 
# 3. Events queued internally
events → VecDeque<OutputEvent>
 
# 4. Platform UI polls
let events = session
.drain_output_events();
 
# 5. Native rendering
Compose / SwiftUI renders each
OutputEvent as a native view.
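
Step 4 can be sketched as a drain over the internal queue. The Session struct here is a stand-in (the real object crosses UniFFI and buffers OutputEvent values rather than strings):

```rust
use std::collections::VecDeque;

/// Illustrative session holding the internal event queue.
/// Strings stand in for OutputEvent to keep the sketch minimal.
struct Session {
    queue: VecDeque<String>,
}

impl Session {
    /// Move all queued events out of the buffer. The queue is
    /// empty after each call, matching the polling contract.
    fn drain_output_events(&mut self) -> Vec<String> {
        self.queue.drain(..).collect()
    }
}
```

Because `drain(..)` empties the buffer in one pass, each poll from the platform UI observes every event exactly once, in emission order.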

Graceful Degradation

The update strategy is built around the assumption that Layer 2 will break. When Claude CLI changes its output format, Layer 2 parsing fails silently and the output falls back to Layer 1 — markdown, code blocks, and diffs still render correctly.

Layer 1 continues working when any agent changes format. Layer 0 is always there. Users never see mangled output. Parser fixes ship quickly, and the open source model accelerates community contributions.

Agent format changes
Layer 2 fails silently
Layer 1 renders (still good)
User sees formatted output

Agent Detection

Agent detection runs when a new process starts in the terminal session. The detector checks two signals: the command string the user typed, and the first lines of output (the startup banner). Both signals are passed to each registered AgentParser::detect method. If a parser returns true, Layer 2 is activated for that session.

| Agent | Command Heuristic | Banner Heuristic | Rendering |
| --- | --- | --- | --- |
| Claude Code CLI | Command starts with claude | Banner contains "Claude Code" or the Claude Code startup marker | Layer 0 + 1 + 2 |
| Aider | Command starts with aider | Banner matches the Aider startup pattern (version line, model display) | Layer 0 + 1 |
| GitHub Copilot CLI | Command starts with gh copilot | Banner matches the GitHub Copilot CLI output pattern | Layer 0 + 1 |
| Other / Unknown | No match | No match | Layer 0 + 1 |

Detection heuristics are intentionally simple — a command prefix check combined with a banner string match. This keeps false positives low while remaining easy to update when agent CLIs change their startup output. Agents without a Layer 2 parser still receive Layer 0 + Layer 1 rendering (ANSI, syntax-highlighted code blocks, and diffs).

detection pseudocode
fn detect_agent<'a>(
    command: &str,
    banner: &[String],
    parsers: &'a [Box<dyn AgentParser>],
) -> Option<&'a dyn AgentParser> {
    for parser in parsers {
        if parser.detect(command, banner) {
            return Some(parser.as_ref());
        }
    }
    None
}

Accessibility Labels

Parsed output generates semantic announcements for screen readers. Raw terminal output is buffered line-by-line with ~500ms pause grouping. Rapid streaming output is batched into coherent chunks, never read character-by-character.
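
The ~500ms pause grouping can be sketched with explicit timestamps. The threshold comes from the text above; the function and its signature are illustrative:

```rust
use std::time::Duration;

/// Group timestamped lines into announcement batches: a new batch
/// starts whenever the gap since the previous line exceeds `pause`.
/// Each inner Vec is announced as one coherent chunk.
fn group_lines(lines: &[(Duration, String)], pause: Duration) -> Vec<Vec<String>> {
    let mut batches: Vec<Vec<String>> = Vec::new();
    let mut last: Option<Duration> = None;
    for (t, line) in lines {
        let new_batch = match last {
            Some(prev) => *t - prev > pause,
            None => true, // first line always opens a batch
        };
        if new_batch {
            batches.push(Vec::new());
        }
        batches.last_mut().unwrap().push(line.clone());
        last = Some(*t);
    }
    batches
}
```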

| Event Type | Accessibility Label |
| --- | --- |
| CodeBlock | "Code block, Python, 15 lines" |
| DiffBlock | "Diff, server.rs, 3 additions, 1 deletion" |
| ToolUse | "Claude read file config.yaml" |
| ThinkingBlock | "Claude thinking, 4 lines" |
| FileWrite | "Claude wrote file src/auth.ts" |