This page documents infrastructure components that span multiple layers of the architecture: logging, error handling, persistence, and global configuration. These concerns are injected throughout the system via dependency injection and provide foundational services for observability, reliability, and customization.
For component-specific context propagation (ExecutionContext, MethodContext), see Context Propagation. For event-driven signal flow, see Event System.
Cross-cutting concerns provide infrastructure services that are consumed by all layers.
The framework accepts any logger implementing the ILogger interface, which defines four severity levels:
| Method | Purpose | Usage |
|---|---|---|
| `log()` | General-purpose messages | Strategy execution, signal generation |
| `debug()` | Detailed diagnostics | Intermediate states, candle fetching |
| `info()` | Informational updates | Successful completions, validations |
| `warn()` | Potentially problematic situations | Missing data, deprecated usage |
Users register a custom logger via setLogger(), which binds the implementation to the dependency injection container:
setLogger({
log: (topic, ...args) => console.log(topic, args),
debug: (topic, ...args) => console.debug(topic, args),
info: (topic, ...args) => console.info(topic, args),
warn: (topic, ...args) => console.warn(topic, args),
});
The framework automatically injects contextual information into log messages. Services access the logger through dependency injection and include strategy/exchange/symbol context:
Example log output with automatic context:
[ClientStrategy] strategyName=my-strategy, exchangeName=binance, symbol=BTCUSDT - Generating signal
[PersistBase] entityName=my-strategy - Writing signal state
The logger is accessed throughout the codebase via the DI token LoggerService. All services inject this token to receive the user's logger implementation:
File: src/classes/Persist.ts:192-196
constructor(
readonly entityName: EntityName,
readonly baseDir = join(process.cwd(), "logs/data")
) {
swarm.loggerService.debug(PERSIST_BASE_METHOD_NAME_CTOR, {
entityName: this.entityName,
baseDir,
});
}
The framework emits all errors to a centralized errorEmitter Subject, allowing users to subscribe to and handle errors globally:
Users subscribe to errors via listenError(), which registers a callback to receive all emitted errors:
listenError((error) => {
console.error("Framework error:", error.message);
// Custom error handling: alerts, logging, etc.
});
The framework distinguishes between two execution modes for error handling:
| Mode | Method | Error Behavior |
|---|---|---|
| Foreground | `Backtest.run()`, `Live.run()` | Throws error, halts execution |
| Background | `Backtest.background()`, `Live.background()` | Emits to `errorEmitter`, continues execution |
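The difference between the two modes can be sketched with a minimal emitter. This is a simplified stand-in, not the framework's implementation; `runForeground` and `runBackground` are illustrative names:

```typescript
// Minimal callback-based emitter standing in for the framework's errorEmitter.
type ErrorListener = (error: Error) => void;

const listeners: ErrorListener[] = [];
const errorEmitter = {
  subscribe: (fn: ErrorListener) => listeners.push(fn),
  next: (error: Error) => listeners.forEach((fn) => fn(error)),
};

// Foreground: errors propagate to the caller and halt execution.
async function runForeground(task: () => Promise<void>): Promise<void> {
  await task(); // any rejection is rethrown to the caller
}

// Background: errors are routed to the emitter and execution continues.
function runBackground(task: () => Promise<void>): void {
  task().catch((error) => errorEmitter.next(error));
}
```

This is why production deployments should always register a listener before using background mode: otherwise background errors are observable only in logs.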
Example from test showing error handling:
File: test/e2e/defend.test.mjs:615-640
try {
Backtest.background("BTCUSDT", {
strategyName: "test-defend-invalid-long",
exchangeName: "binance-defend-invalid-long",
frameName: "10m-defend-invalid-long",
});
await awaitSubject.toPromise();
// If we reach here, validation failed to reject the signal
if (scheduledCount === 0 && openedCount === 0) {
pass("MONEY SAFE: Invalid signal rejected");
} else {
fail("CRITICAL BUG: Invalid signal was NOT rejected");
}
} catch (error) {
// Error thrown indicates validation worked
const errMsg = error.message || String(error);
if (errMsg.includes("priceTakeProfit") || errMsg.includes("Invalid signal")) {
pass(`MONEY SAFE: Invalid signal rejected: ${errMsg}`);
}
}
Signal validation errors are thrown from VALIDATE_SIGNAL_FN and caught by logic services, which emit them to errorEmitter. This prevents invalid trades while providing observability:
Common validation errors:
- Take-profit distance below the minimum (`CC_MIN_TAKEPROFIT_DISTANCE_PERCENT`)
- Stop-loss distance above the maximum (`CC_MAX_STOPLOSS_DISTANCE_PERCENT`)

The persistence layer provides crash-safe state management for live trading via atomic file writes and abstract base classes supporting custom adapters.
PersistBase provides a file-based persistence implementation with CRUD operations, iteration support, and automatic corruption handling:
| Method | Purpose | Return Type |
|---|---|---|
| `waitForInit(initial)` | Initialize directory, validate files | `Promise<void>` |
| `readValue(entityId)` | Read entity from storage | `Promise<Entity>` |
| `writeValue(entityId, entity)` | Write entity with atomic file operation | `Promise<void>` |
| `hasValue(entityId)` | Check entity existence | `Promise<boolean>` |
| `removeValue(entityId)` | Delete entity | `Promise<void>` |
| `keys()` | Async generator yielding entity IDs | `AsyncGenerator<EntityId>` |
| `values()` | Async generator yielding entities | `AsyncGenerator<Entity>` |
File structure:
./logs/data/
├── signal/
│ ├── my-strategy/
│ │ ├── BTCUSDT.json
│ │ └── ETHUSDT.json
└── risk/
└── my-risk/
└── positions.json
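The layout above maps a storage root, entity name, and entity ID to a concrete file path. A sketch of that mapping (`getFilePath` is an illustrative name; `PersistBase` uses a private `_getFilePath` helper internally):

```typescript
// Sketch of the directory layout: {baseDir}/{entityName}/{entityId}.json
function getFilePath(baseDir: string, entityName: string, entityId: string): string {
  return `${baseDir}/${entityName}/${entityId}.json`;
}
```

For example, `getFilePath("logs/data/signal", "my-strategy", "BTCUSDT")` yields `logs/data/signal/my-strategy/BTCUSDT.json`.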
PersistSignalAdapter manages per-strategy, per-symbol signal persistence for crash recovery in live trading:
Key characteristics:
- Scoped per `strategyName`
- Keyed by `symbol` (e.g., "BTCUSDT")
- Value type: `ISignalRow | null`
- Storage path: `./logs/data/signal/{strategyName}/{symbol}.json`

Memoization pattern: one storage instance per `strategyName`, retrieved via `getSignalStorage(strategyName)`.
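The one-instance-per-strategy memoization can be sketched as follows. `SignalStorage` is an illustrative stand-in for the real `PersistBase`-backed storage class:

```typescript
// Stand-in for the PersistBase-backed signal storage.
class SignalStorage {
  constructor(public readonly strategyName: string) {}
}

// Cache ensures exactly one storage instance per strategyName.
const storageCache = new Map<string, SignalStorage>();

function getSignalStorage(strategyName: string): SignalStorage {
  let storage = storageCache.get(strategyName);
  if (!storage) {
    storage = new SignalStorage(strategyName);
    storageCache.set(strategyName, storage);
  }
  return storage;
}
```

Repeated calls with the same `strategyName` return the same instance, so file handles and in-memory state are shared across ticks.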
API:
// Read signal state on strategy initialization
const signal = await PersistSignalAdapter.readSignalData(
"my-strategy",
"BTCUSDT"
);
// Write signal state after tick
await PersistSignalAdapter.writeSignalData(
signalRow,
"my-strategy",
"BTCUSDT"
);
PersistRiskAdapter manages per-risk-profile active positions for portfolio-level risk tracking across strategies:
Key characteristics:
- Scoped per `riskName`
- Keyed by `"positions"` (fixed key)
- Value type: `RiskData` (array of `[string, IRiskActivePosition]` tuples)
- Storage path: `./logs/data/risk/{riskName}/positions.json`

Data format:
type RiskData = Array<[string, IRiskActivePosition]>;
// Persisted as JSON array of tuples:
[
["my-strategy:BTCUSDT", { signal: {...}, strategyName: "my-strategy", ... }],
["other-strategy:ETHUSDT", { signal: {...}, strategyName: "other-strategy", ... }]
]
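The tuple-array format round-trips cleanly through a `Map`, which is a convenient in-memory representation. The types below are trimmed for illustration (the real `IRiskActivePosition` carries the full signal and more fields):

```typescript
// Trimmed illustrative type; the real interface includes the signal payload.
interface IRiskActivePosition {
  strategyName: string;
}

type RiskData = Array<[string, IRiskActivePosition]>;

// Serialize the in-memory Map to the persisted tuple array.
function toRiskData(positions: Map<string, IRiskActivePosition>): RiskData {
  return Array.from(positions.entries());
}

// Rehydrate the Map from persisted tuples on startup.
function fromRiskData(data: RiskData): Map<string, IRiskActivePosition> {
  return new Map(data);
}
```

Storing tuples rather than a plain object keeps keys like `"my-strategy:BTCUSDT"` safe from any object-key normalization and preserves insertion order.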
API:
// Read active positions on risk profile initialization
const positions = await PersistRiskAdapter.readPositionData("my-risk");
// Write active positions after addSignal/removeSignal
await PersistRiskAdapter.writePositionData(positionsArray, "my-risk");
writeFileAtomic() ensures crash-safe writes using platform-specific strategies:
POSIX strategy (Linux, macOS):
- Write data to a temporary file with a `.tmp-{random}-` prefix
- Flush to disk via `fileHandle.sync()`
- Atomically move the temporary file into place via `fs.rename()`

Windows strategy:

- Uses `fileHandle.sync()` to flush data to disk

Critical properties:

- `sync()` ensures data is flushed to disk before the rename/close

Users can replace the file-based backend with custom implementations (Redis, MongoDB, PostgreSQL) by extending `PersistBase` and registering the adapter:
Example Redis adapter:
class RedisPersist extends PersistBase {
private redis = createRedisClient();
async readValue(entityId) {
const data = await this.redis.get(`${this.entityName}:${entityId}`);
return JSON.parse(data);
}
async writeValue(entityId, entity) {
await this.redis.set(
`${this.entityName}:${entityId}`,
JSON.stringify(entity)
);
}
async hasValue(entityId) {
return await this.redis.exists(`${this.entityName}:${entityId}`);
}
async waitForInit(initial) {
await this.redis.connect();
}
}
// Register adapter
PersistSignalAdapter.usePersistSignalAdapter(RedisPersist);
PersistRiskAdapter.usePersistRiskAdapter(RedisPersist);
The waitForInit() method implements crash recovery by validating all persisted files on startup:
File: src/classes/Persist.ts:113-134
const BASE_WAIT_FOR_INIT_FN = async (self: TPersistBase): Promise<void> => {
await fs.mkdir(self._directory, { recursive: true });
for await (const key of self.keys()) {
try {
await self.readValue(key); // Validate JSON parsing
} catch {
const filePath = self._getFilePath(key);
console.error(`PersistBase found invalid document for filePath=${filePath}`);
// Retry deletion with exponential backoff
if (await not(BASE_WAIT_FOR_INIT_UNLINK_FN(filePath))) {
console.error(`PersistBase failed to remove invalid document`);
}
}
}
};
Retry logic for file deletion: src/classes/Persist.ts:136-158
The GLOBAL_CONFIG object defines system-wide constraints for validation and timing. Users modify these parameters via setConfig():
| Parameter | Default | Purpose | Impact if Too Low | Impact if Too High |
|---|---|---|---|---|
| `CC_SCHEDULE_AWAIT_MINUTES` | 120 | Scheduled signal timeout | Signals cancel too soon | Risk limits blocked longer |
| `CC_AVG_PRICE_CANDLES_COUNT` | 5 | VWAP calculation window | Price volatility noise | Stale price data |
| `CC_MIN_TAKEPROFIT_DISTANCE_PERCENT` | 0.1% | Minimum TP distance | Fees eat profit | Fewer signals (higher bar) |
| `CC_MAX_STOPLOSS_DISTANCE_PERCENT` | 20% | Maximum SL distance | Reject valid signals | Catastrophic losses allowed |
| `CC_MAX_SIGNAL_LIFETIME_MINUTES` | 1440 | Maximum signal lifetime | Premature closures | Eternal signals block risk |
Users modify configuration at runtime via setConfig(), which accepts partial overrides:
setConfig({
CC_SCHEDULE_AWAIT_MINUTES: 90, // Reduce scheduled signal timeout
CC_MIN_TAKEPROFIT_DISTANCE_PERCENT: 0.3, // Require 0.3% min profit
CC_MAX_STOPLOSS_DISTANCE_PERCENT: 10, // Limit max loss to 10%
});
Usage in tests to disable validation:
File: test/config/setup.mjs:36-41
setConfig({
// Disable validations for old tests (backward compatibility)
CC_MIN_TAKEPROFIT_DISTANCE_PERCENT: 0, // No min TP check
CC_MAX_STOPLOSS_DISTANCE_PERCENT: 100, // Allow any SL
CC_MAX_SIGNAL_LIFETIME_MINUTES: 999999, // No lifetime limit
});
The validation parameters are enforced in VALIDATE_SIGNAL_FN, which throws errors for invalid signals before they are persisted or executed:
Example validation logic:
1. Check all prices are > 0 and finite (not NaN, not Infinity)
2. Check TP/SL directions:
- Long: priceTakeProfit > priceOpen > priceStopLoss
- Short: priceStopLoss > priceOpen > priceTakeProfit
3. Check TP distance: |TP - priceOpen| / priceOpen >= CC_MIN_TAKEPROFIT_DISTANCE_PERCENT
4. Check SL distance: |SL - priceOpen| / priceOpen <= CC_MAX_STOPLOSS_DISTANCE_PERCENT
5. Check lifetime: minuteEstimatedTime <= CC_MAX_SIGNAL_LIFETIME_MINUTES
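The steps above can be sketched as a standalone function. The signal shape and `validateSignal` name are illustrative (the real checks live inside the framework's `VALIDATE_SIGNAL_FN`), and the distance checks assume the thresholds are expressed in percent, matching the defaults in the configuration table:

```typescript
// Illustrative signal shape; the real ISignalRow carries more fields.
interface SignalLike {
  position: "long" | "short";
  priceOpen: number;
  priceTakeProfit: number;
  priceStopLoss: number;
  minuteEstimatedTime: number;
}

const CC_MIN_TAKEPROFIT_DISTANCE_PERCENT = 0.1;
const CC_MAX_STOPLOSS_DISTANCE_PERCENT = 20;
const CC_MAX_SIGNAL_LIFETIME_MINUTES = 1440;

function validateSignal(s: SignalLike): void {
  // 1. All prices must be positive and finite.
  const prices = [s.priceOpen, s.priceTakeProfit, s.priceStopLoss];
  if (!prices.every((p) => Number.isFinite(p) && p > 0)) {
    throw new Error("Invalid signal: prices must be positive and finite");
  }
  // 2. TP/SL must be on the correct side of priceOpen for the direction.
  const longOk = s.priceTakeProfit > s.priceOpen && s.priceOpen > s.priceStopLoss;
  const shortOk = s.priceStopLoss > s.priceOpen && s.priceOpen > s.priceTakeProfit;
  if (s.position === "long" ? !longOk : !shortOk) {
    throw new Error("Invalid signal: wrong priceTakeProfit/priceStopLoss direction");
  }
  // 3. TP distance must clear the minimum (percent of priceOpen).
  const tpDistance = (Math.abs(s.priceTakeProfit - s.priceOpen) / s.priceOpen) * 100;
  if (tpDistance < CC_MIN_TAKEPROFIT_DISTANCE_PERCENT) {
    throw new Error("Invalid signal: priceTakeProfit too close to priceOpen");
  }
  // 4. SL distance must stay under the maximum (percent of priceOpen).
  const slDistance = (Math.abs(s.priceStopLoss - s.priceOpen) / s.priceOpen) * 100;
  if (slDistance > CC_MAX_STOPLOSS_DISTANCE_PERCENT) {
    throw new Error("Invalid signal: priceStopLoss too far from priceOpen");
  }
  // 5. Lifetime must not exceed the global cap.
  if (s.minuteEstimatedTime > CC_MAX_SIGNAL_LIFETIME_MINUTES) {
    throw new Error("Invalid signal: lifetime exceeds CC_MAX_SIGNAL_LIFETIME_MINUTES");
  }
}
```

Throwing before persistence means an invalid signal never reaches storage or the exchange, which is what the "money safe" tests above rely on.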
Test demonstrating validation:
File: test/e2e/sanitize.test.mjs:27-131
The timing parameters control scheduled signal behavior and signal lifecycle:
CC_SCHEDULE_AWAIT_MINUTES usage:
- Scheduled signals record a `scheduledAt` timestamp
- Elapsed minutes are computed as `(now - scheduledAt) / 60000`
- If the elapsed time exceeds `CC_SCHEDULE_AWAIT_MINUTES`, the signal is cancelled

`CC_MAX_SIGNAL_LIFETIME_MINUTES` usage:

- Opened signals record a `pendingAt` timestamp
- A signal expires when `(now - pendingAt) / 60000 >= minuteEstimatedTime`
- On reaching `minuteEstimatedTime`, the signal closes with the `"time_expired"` reason
- Validation rejects signals whose `minuteEstimatedTime > CC_MAX_SIGNAL_LIFETIME_MINUTES`

The DI container provides singletons for cross-cutting concerns that are injected into all service classes:
Logging context is automatically injected via the context propagation system (see Context Propagation):
- Strategy-level context: `strategyName`, `exchangeName`, `frameName`
- Execution-level context: `symbol`, `when`, and the `backtest` flag

Use appropriate severity levels:
- `debug()` for detailed state dumps (candle data, intermediate values)
- `info()` for lifecycle events (signal opened, strategy started)
- `log()` for significant events (backtest completed, report generated)
- `warn()` for recoverable issues (missing data, deprecated usage)

Include context in custom messages:
logger.info("strategy-execution", {
strategyName,
symbol,
action: "signal-generated"
});
Always subscribe to errorEmitter in production:
listenError((error) => {
// Alert monitoring system
// Log to error tracking service
// Record metrics
});
Use background mode for resilient execution:
- Errors are emitted to `errorEmitter` instead of halting the process

Handle validation errors gracefully:
- Tune `CC_MIN_TAKEPROFIT_DISTANCE_PERCENT` and related parameters to match your strategy

Use custom adapters for production:
Test crash recovery:
// Verify waitForInit() handles corrupted files
await PersistSignalAdapter.readSignalData(strategyName, symbol);
Monitor persistence metrics:
Set conservative defaults for production:
setConfig({
CC_MIN_TAKEPROFIT_DISTANCE_PERCENT: 0.3, // Cover fees + profit
CC_MAX_STOPLOSS_DISTANCE_PERCENT: 5, // Limit max loss per trade
CC_MAX_SIGNAL_LIFETIME_MINUTES: 480, // 8 hours max
});
Test parameter changes in backtest first:
Document configuration rationale: