The Logging System provides a unified interface for recording diagnostic information throughout the backtest-kit framework. It enables debugging, monitoring, and auditing of framework operations across all execution modes (Backtest, Live, Walker, Optimizer).
This document covers the logging interface contract, custom logger configuration, default implementation, and usage patterns throughout the framework. For event-driven monitoring and observability, see Event System. For performance metrics and bottleneck detection, see Performance Metrics.
Diagram 1: Logging System Architecture
The logging system follows a dependency injection pattern where LoggerService is registered globally and injected into all framework services. Custom logger implementations can replace the default via setLogger().
The ILogger interface defines the contract that all logger implementations must satisfy. It provides four log levels with consistent signatures.
| Method | Parameters | Purpose |
|---|---|---|
| log | topic: string, ...args: any[] | General-purpose messages for significant events or state changes |
| debug | topic: string, ...args: any[] | Detailed diagnostic information for development/troubleshooting |
| info | topic: string, ...args: any[] | Informational updates providing high-level system activity overview |
| warn | topic: string, ...args: any[] | Potentially problematic situations that don't prevent execution |
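Based on the method table above, the contract can be sketched as a TypeScript interface, together with an in-memory implementation that records calls (useful in tests). The MemoryLogger class is illustrative, not part of the framework:

```typescript
interface ILogger {
  log(topic: string, ...args: any[]): void;
  debug(topic: string, ...args: any[]): void;
  info(topic: string, ...args: any[]): void;
  warn(topic: string, ...args: any[]): void;
}

// Records every call instead of printing; handy for asserting on log output.
class MemoryLogger implements ILogger {
  public entries: Array<{ level: string; topic: string; args: any[] }> = [];

  private record(level: string, topic: string, args: any[]): void {
    this.entries.push({ level, topic, args });
  }

  log(topic: string, ...args: any[]): void { this.record("log", topic, args); }
  debug(topic: string, ...args: any[]): void { this.record("debug", topic, args); }
  info(topic: string, ...args: any[]): void { this.record("info", topic, args); }
  warn(topic: string, ...args: any[]): void { this.record("warn", topic, args); }
}
```

Any object satisfying these four signatures can be passed to setLogger(), so adapters for console, Winston, pino, or a test recorder all fit the same shape.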
The interface documentation specifies that these methods are used throughout the framework for debugging, monitoring, and auditing of operations across all execution modes.
The setLogger() function allows replacing the default logger with a custom implementation. All internal framework services will route their log messages through the provided logger.
setLogger(logger: ILogger): void
import { setLogger } from "backtest-kit";
// Custom logger using Winston
import winston from "winston";
const winstonLogger = winston.createLogger({
level: "info",
format: winston.format.json(),
transports: [
new winston.transports.File({ filename: "error.log", level: "error" }),
new winston.transports.File({ filename: "combined.log" }),
],
});
setLogger({
log: (topic, ...args) => winstonLogger.info(topic, { args }),
debug: (topic, ...args) => winstonLogger.debug(topic, { args }),
info: (topic, ...args) => winstonLogger.info(topic, { args }),
warn: (topic, ...args) => winstonLogger.warn(topic, { args }),
});
The documentation indicates that custom loggers receive "automatic context injection (strategyName, exchangeName, symbol, etc.)"; in practice, however, context propagation is handled manually, with calling code passing context as additional arguments.
The framework includes a default LoggerService implementation that is registered via dependency injection at initialization time.
Diagram 2: LoggerService Registration Flow
The LoggerService is instantiated once during framework initialization and made available to all services through the dependency injection container.
The logging system integrates with the framework's dependency injection system using symbol-based registration and memoized instances.
The logger service is registered with the symbol TYPES.loggerService:
const baseServices = {
loggerService: Symbol('loggerService'),
};
LoggerService is provided at module initialization:
provide(TYPES.loggerService, () => new LoggerService());
All framework services inject the logger via the inject() function:
const baseServices = {
loggerService: inject<LoggerService>(TYPES.loggerService),
};
Diagram 3: Logger Injection Pattern
Services access the logger through dependency injection, ensuring a single shared logger instance across the entire framework.
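The registration and injection pattern described above can be sketched with a minimal symbol-keyed container. The provide/inject names mirror the snippets above, but the container internals here are an assumption, not the framework's actual implementation:

```typescript
// Factories registered per symbol; instances memoized on first injection.
const registry = new Map<symbol, () => unknown>();
const instances = new Map<symbol, unknown>();

function provide<T>(key: symbol, factory: () => T): void {
  registry.set(key, factory);
}

function inject<T>(key: symbol): T {
  if (!instances.has(key)) {
    const factory = registry.get(key);
    if (!factory) throw new Error(`No provider for ${String(key)}`);
    instances.set(key, factory()); // memoize: construct once
  }
  return instances.get(key) as T;
}

const TYPES = { loggerService: Symbol("loggerService") };

class LoggerService {
  info(topic: string, ...args: any[]): void { console.info(topic, ...args); }
}

provide(TYPES.loggerService, () => new LoggerService());

// Every injection site receives the same memoized instance.
const a = inject<LoggerService>(TYPES.loggerService);
const b = inject<LoggerService>(TYPES.loggerService);
```

Memoization is what guarantees the "single shared logger instance" property: the factory runs once, and every subsequent inject() returns the cached object.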
The framework uses four log levels with distinct purposes:
| Level | Purpose | Typical Use Cases |
|---|---|---|
| debug | Detailed diagnostic information | Intermediate states, candle data inspection, signal validation steps |
| info | Informational updates | Strategy registration, frame generation, successful completions |
| log | General-purpose messages | API method entry points, significant state changes, operation tracking |
| warn | Potentially problematic situations | Missing optional data, unexpected conditions, deprecated usage |
The ILogger interface does not include an error() method. Error handling is performed through the event system's errorEmitter and exitEmitter subjects (see Error Handling).
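To illustrate the split between logging and error propagation, here is a hedged sketch of an emitter-based error channel. The SimpleEmitter class is a stand-in; the framework's actual errorEmitter API (RxJS-style subjects, per the prose) may differ:

```typescript
type Listener<T> = (value: T) => void;

// Minimal subject-like emitter: subscribers receive every emitted value.
class SimpleEmitter<T> {
  private listeners: Listener<T>[] = [];

  subscribe(fn: Listener<T>): () => void {
    this.listeners.push(fn);
    // Return an unsubscribe function.
    return () => {
      this.listeners = this.listeners.filter((l) => l !== fn);
    };
  }

  next(value: T): void {
    this.listeners.forEach((fn) => fn(value));
  }
}

const errorEmitter = new SimpleEmitter<Error>();
const seen: string[] = [];

// Programmatic error handling subscribes to the emitter rather than
// parsing log output.
errorEmitter.subscribe((err) => seen.push(err.message));
errorEmitter.next(new Error("exchange timeout"));
```

Keeping errors out of ILogger means consumers can react to failures programmatically (retry, abort, alert) without scraping log text.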
Logging is used consistently throughout the framework at key operational points.
All public API functions log their invocation:
// Method name constant
const ADD_STRATEGY_METHOD_NAME = "add.addStrategy";
export function addStrategy(strategySchema: IStrategySchema) {
backtest.loggerService.info(ADD_STRATEGY_METHOD_NAME, {
strategySchema,
});
// ... implementation
}
Event subscription functions log their calls:
const LISTEN_SIGNAL_METHOD_NAME = "event.listenSignal";
export function listenSignal(fn: (event: IStrategyTickResult) => void) {
backtest.loggerService.log(LISTEN_SIGNAL_METHOD_NAME);
return signalEmitter.subscribe(queued(async (event) => fn(event)));
}
Diagram 4: Logging Points Across Framework Layers
While the setLogger() documentation mentions "automatic context injection", the actual implementation requires calling code to manually include context information as additional arguments.
The framework uses constant strings to identify log topics:
const ADD_STRATEGY_METHOD_NAME = "add.addStrategy";
const LISTEN_SIGNAL_METHOD_NAME = "event.listenSignal";
const LISTEN_ERROR_METHOD_NAME = "event.listenError";
These constant names provide consistent, searchable identifiers for log filtering and analysis.
Context information is passed as additional arguments to log methods:
// Simple invocation logging
backtest.loggerService.log(LISTEN_SIGNAL_METHOD_NAME);
// With context object
backtest.loggerService.info(ADD_STRATEGY_METHOD_NAME, {
strategySchema,
});
Client implementations (ClientStrategy, ClientExchange, ClientRisk) receive logger instances through their constructor parameters along with execution context:
interface IExchangeParams extends IExchangeSchema {
logger: ILogger;
execution: TExecutionContextService;
}
This allows client implementations to include execution context (symbol, timestamp, backtest flag) in their log messages.
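A sketch of this pattern, assuming a ClientExchange-style class: the logger and execution context arrive via the constructor, and the client folds context into each log call manually. The IExecutionContext fields shown (symbol, backtest) follow the prose above; the full TExecutionContextService shape is not documented here:

```typescript
interface ILogger {
  log(topic: string, ...args: any[]): void;
  debug(topic: string, ...args: any[]): void;
  info(topic: string, ...args: any[]): void;
  warn(topic: string, ...args: any[]): void;
}

// Assumed subset of the execution context passed to clients.
interface IExecutionContext {
  symbol: string;
  backtest: boolean;
}

class ClientExchange {
  constructor(
    private readonly logger: ILogger,
    private readonly execution: IExecutionContext,
  ) {}

  fetchCandles(timeframe: string): void {
    // Context is included manually as an additional argument.
    this.logger.debug("exchange.fetchCandles", {
      symbol: this.execution.symbol,
      backtest: this.execution.backtest,
      timeframe,
    });
    // ... fetching logic would go here
  }
}
```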
The following table shows which services and clients receive logger instances:
| Service/Client | Logger Access | Purpose |
|---|---|---|
| ConnectionServices | Via DI injection | Route to correct client instances |
| CoreServices | Via DI injection | Coordinate core operations |
| LogicServices | Via DI injection | Execute backtest/live/walker logic |
| MarkdownServices | Via DI injection | Generate reports and aggregate statistics |
| CommandServices | Via DI injection | Orchestrate high-level operations |
| ValidationServices | Via DI injection | Validate component registration |
| ClientStrategy | Via constructor params | Log signal lifecycle events |
| ClientExchange | Via constructor params | Log candle fetching operations |
| ClientRisk | Via constructor params | Log risk validation results |
| ClientFrame | Via constructor params | Log timeframe generation |
| ClientOptimizer | Via constructor params | Log strategy generation progress |
The logging system operates independently from the event system but serves complementary purposes:
See Event System for event-driven monitoring.
Performance metrics are emitted through the event system (performanceEmitter) rather than logged. This allows for structured performance data collection without cluttering logs.
See Performance Metrics for performance monitoring.
Errors are propagated through dedicated event emitters (errorEmitter, exitEmitter) in addition to any logging that may occur. This separation allows for both diagnostic logging and programmatic error handling.
See Error Handling for error management.
Best practices:

- Call setLogger() before any other framework operations.
- Pass context as additional arguments (...args: any[]).
- Use info for public API entry points.
- Use log for internal operation tracking.
- Use debug for detailed diagnostic information.
- Use warn for non-critical issues.