Architecture Overview
System architecture, module design, and key patterns
High-Level Architecture
StarkFi follows a layered architecture with four entry points that share a common service layer:
- Entry Points (Clients)
- Core Service Layer
- SDK Integration
- Auth Server
- Gas Abstraction
Directory Structure
src/
├── index.ts # CLI entry point (Commander.js)
├── commands/ # 14 command directories
│ ├── auth/ # login, verify, logout
│ ├── wallet/ # address, balance, send, deploy
│ ├── trade/ # trade, multi-swap
│ ├── staking/ # stake, unstake, rewards, pools, validators, stake-status
│ ├── lending/ # lend-pools, lend-supply, lend-withdraw, lend-borrow, lend-repay, lend-close, lend-status, lend-monitor, lend-auto
│ ├── dca/ # dca-preview, dca-create, dca-list, dca-cancel
│ ├── troves/ # troves-list, troves-position, troves-deposit, troves-withdraw
│ ├── lst/ # lst-position, lst-stats, lst-stake, lst-redeem, lst-exit-all
│ ├── confidential/ # conf-setup, conf-balance, conf-fund, conf-transfer, conf-withdraw, conf-ragequit, conf-rollover
│ ├── batch/ # batch multicall
│ ├── portfolio/ # portfolio dashboard
│ ├── config/ # config subcommands
│ ├── chain/ # tx-status
│ └── system/ # status (mcp-start is inline in index.ts)
├── services/ # 17 service modules
│ ├── starkzap/ # SDK client initialization + config
│ ├── fibrous/ # DEX routing, calldata, pricing
│ ├── vesu/ # Lending pool API + on-chain operations
│ ├── staking/ # Validator resolution + pool operations
│ ├── dca/ # DCA order management (create, cancel, list, preview)
│ ├── troves/ # Troves vault strategy operations
│ ├── lst/ # Endur liquid staking operations
│ ├── confidential/ # Tongo Cash config + privacy-preserving transfers
│ ├── tokens/ # Token resolution + balance queries
│ ├── batch/ # Multicall composition via TxBuilder
│ ├── simulate/ # Transaction simulation (fee estimation)
│ ├── portfolio/ # Aggregated DeFi overview
│ ├── auth/ # Session management (JWT persistence)
│ ├── api/ # Backend API client
│ ├── config/ # Configuration persistence
│ ├── price/ # USD pricing via Fibrous
│ └── swap/ # Multi-provider swap aggregation (Fibrous, AVNU, Ekubo)
├── mcp/ # MCP server layer
│ ├── server.ts # Server setup + stdio transport
│ ├── tools/ # 9 domain tool files + 2 utilities (Zod schemas)
│ └── handlers/ # 13 domain handlers + 3 utility files
└── lib/ # 16 shared utilities
├── errors.ts # ErrorCode enum (36 codes) + StarkfiError class
├── parse-starknet-error.ts # Cairo hex decoding + error map (15 patterns)
├── retry.ts # Exponential backoff retry
├── concurrency.ts # Sliding-window concurrent execution
├── fetch.ts # Fetch with AbortController timeout
├── format.ts # Table, result, and error formatting
├── cli-helpers.ts # runCommand wrapper + outputResult
├── command-runner.ts # withAuthenticatedWallet centralized runner
├── validation.ts # Address and input validation
├── types.ts # Shared TypeScript types
├── config.ts # Config file paths (XDG-compliant)
├── brand.ts # CLI branding and colors
├── command.ts # Commander.js helpers
├── send-with-preflight.ts # Mandatory simulation before tx execution
├── tx-progress.ts # Real-time transaction progress tracking
└── resolve-network.ts # Network resolution priority chain
Key Design Patterns
Service Layer Isolation
Each service module is self-contained — the Fibrous service doesn't know about Vesu, and vice versa. Cross-service composition happens only in dedicated orchestrators:
- batch service composes swap + stake + supply + borrow + repay + withdraw + send + dca-create + dca-cancel + troves-deposit + troves-withdraw operations
- portfolio service aggregates data from the tokens, staking, and lending services
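This composition style can be sketched as a pure call-building step per service plus a flattening orchestrator. The `Call` shape below mirrors the common Starknet call structure, but the builder names, addresses, and entrypoints are illustrative assumptions, not the actual StarkFi service APIs:

```typescript
// Hypothetical sketch: each service exposes a pure "build calls" function,
// and the batch orchestrator only concatenates them, so services never
// need to know about one another.
interface Call {
  contractAddress: string;
  entrypoint: string;
  calldata: string[];
}

// Illustrative swap builder (approve + swap); addresses are placeholders.
function buildSwapCalls(tokenIn: string, tokenOut: string, amount: bigint): Call[] {
  return [
    { contractAddress: tokenIn, entrypoint: "approve", calldata: ["0xrouter", amount.toString()] },
    { contractAddress: "0xrouter", entrypoint: "swap", calldata: [tokenIn, tokenOut, amount.toString()] },
  ];
}

// Illustrative stake builder.
function buildStakeCalls(pool: string, amount: bigint): Call[] {
  return [{ contractAddress: pool, entrypoint: "stake", calldata: [amount.toString()] }];
}

// The orchestrator flattens per-operation call lists into one multicall.
function composeMulticall(...ops: Call[][]): Call[] {
  return ops.flat();
}

const calls = composeMulticall(
  buildSwapCalls("0xeth", "0xstrk", 10n ** 18n),
  buildStakeCalls("0xpool", 5n * 10n ** 17n),
);
```

Because each builder is pure (calls in, calls out), the orchestrator can reorder or batch operations without any service holding cross-service state.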
Tool → Handler → Service (MCP)
MCP tools use a three-layer pattern that separates concerns:
Tool Registration (mcp/tools/) → Zod schema + description
↓
Handler (mcp/handlers/) → Input validation + orchestration
↓
Service (services/) → Business logic + SDK calls
This pattern keeps each layer testable and ensures MCP-specific concerns (schema validation, error formatting) don't leak into shared business logic.
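The three layers can be sketched as follows. A hand-rolled input check stands in for the Zod schemas the real mcp/tools/ files use, the real services are async SDK calls rather than synchronous stubs, and the names (getBalanceTool, balanceHandler, balanceService) are hypothetical:

```typescript
// Layer 3 — service (services/): business logic only, no MCP concepts.
// (Synchronous stub standing in for an async SDK/RPC call.)
function balanceService(address: string): string {
  return `balance-of-${address}`;
}

// Layer 2 — handler (mcp/handlers/): validates input, calls the service,
// and shapes the MCP-style response.
function balanceHandler(args: unknown): { content: string } {
  const a = args as { address?: unknown };
  if (typeof a?.address !== "string") {
    throw new Error("INVALID_INPUT: address must be a string");
  }
  return { content: balanceService(a.address) };
}

// Layer 1 — tool registration (mcp/tools/): name + description + schema
// wiring only; it delegates everything else to the handler.
const getBalanceTool = {
  name: "get_balance",
  description: "Fetch a token balance for an address",
  handler: balanceHandler,
};
```

The service layer can be unit-tested with plain values, while schema failures are exercised entirely at the handler boundary.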
Concurrent Sliding Window
Token balance queries and portfolio aggregation use a sliding-window concurrency pattern. Instead of issuing all RPC calls at once (which risks rate limiting) or processing sequentially (which is slow), a pool of workers pulls tasks from a shared queue:
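A minimal sketch of what such a helper might look like; the actual signature and internals of lib/concurrency.ts may differ:

```typescript
// Sliding-window runner: `limit` workers share a cursor into the task list,
// so at most `limit` tasks are in flight at any moment, and a worker picks
// up the next task as soon as its current one resolves.
async function runConcurrent<T, R>(
  items: T[],
  limit: number,
  task: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0; // shared cursor: next index any free worker will claim

  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // claim an index before awaiting
      results[i] = await task(items[i]);
    }
  }

  // Spawn min(limit, items.length) workers that drain the queue together.
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker),
  );
  return results;
}
```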
// Example: Query 50 token balances with max 10 concurrent RPC calls
const balances = await runConcurrent(tokens, 10, async (token) => {
const balance = await wallet.balanceOf(token);
return balance.isZero() ? undefined : { token, balance };
});
Error Recovery
All network operations use withRetry() for automatic retry with exponential backoff. Only retryable error codes (e.g., NETWORK_ERROR) trigger retries — domain errors (e.g., INSUFFICIENT_BALANCE) fail immediately. See the Error Handling page for details.
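The retry behavior described above could be implemented along these lines; the signature, default values, and the retryable code set are illustrative, not the actual lib/retry.ts contract:

```typescript
// Codes that warrant a retry; domain errors are deliberately absent,
// so INSUFFICIENT_BALANCE and friends fail on the first attempt.
const RETRYABLE = new Set(["NETWORK_ERROR", "RPC_TIMEOUT"]); // illustrative codes

async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const code = (err as { code?: string }).code;
      // Rethrow immediately on non-retryable errors or when attempts run out.
      if (attempt >= maxAttempts || !code || !RETRYABLE.has(code)) throw err;
      // Exponential backoff: baseDelay, 2x, 4x, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```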
Transaction Simulation
Every transactional command supports --simulate (CLI) or simulation parameters (MCP). Simulation executes the transaction as a dry run against the Starknet RPC, returning estimated fees and the call count without spending gas.
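The preflight flow might be sketched as follows; the `Account` interface, the result shapes, and the function name are assumptions for illustration, not the real lib/send-with-preflight.ts API:

```typescript
// Hypothetical shapes for the simulation result and account surface.
interface SimResult { feeEstimate: bigint; callCount: number; }
interface Account {
  simulate(calls: unknown[]): Promise<SimResult>;
  execute(calls: unknown[]): Promise<{ txHash: string }>;
}

// Simulation always runs first, so a transaction that would revert fails
// before any gas is spent; with simulateOnly the estimate is the result.
async function sendWithPreflight(
  account: Account,
  calls: unknown[],
  simulateOnly: boolean,
): Promise<SimResult | { txHash: string }> {
  const sim = await account.simulate(calls);
  if (simulateOnly) return sim;
  return account.execute(calls);
}
```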
Tech Stack
| Layer | Technology | Version |
|---|---|---|
| Core SDK | Starkzap | v3.0.0 |
| CLI Framework | Commander.js | v14.0.3 |
| MCP Protocol | @modelcontextprotocol/sdk | v1.29.0 |
| Schema Validation | Zod | v4.3.6 |
| Auth Server | Hono + Privy TEE | v4.12.7 |
| DEX Routing | Fibrous (default), AVNU, Ekubo | — |
| Lending Protocol | Vesu V2 | — |
| Gas Abstraction | AVNU Paymaster | — |
| Terminal UI | Chalk + Ora | v5.6.2 / v9.3.0 |
| TypeScript | TypeScript (strict mode) | v5.9.3 |
Deep Dive
- Security Architecture — TEE isolation, MCP sandboxing, bot encryption, transaction security
- Error Handling — 36 typed error codes, Starknet error parsing, retry logic
- Auth Server — Hono server endpoints, middleware stack, and deployment
- Development & Contributing — Local setup, build scripts, and contribution guidelines