ARCHITECTURE
Understand why this library is designed the way it is, and how it works internally.
Table of Contents
- Design Philosophy
- The Big Three Pain Points
- Why Deque is 484x Faster
- Iterator Protocol Design
- Self-Balancing Strategy
- Range Query Structures
- V8 JIT Optimizations
- Method Chaining Architecture
- Memory Efficiency
- Type Safety Architecture
Design Philosophy
Core Principles
- Practicality: Follows ES6 standards, ESNext patterns, simple method names
- Extensibility: OOP inheritance for all data structures
- Modularization: Independent NPM packages, no bloat
- Efficiency: Time/space complexity comparable to native JS
- Maintainability: Open-source standards, TDD, CI/CD
- Testability: Automated unit + performance testing
- Reusability: Fully decoupled, minimized side effects
- Security: Read-write separation, safe member access
API Design: Unified Interface
Instead of learning different APIs for each library:
```ts
// ❌ Different libraries, different APIs
queueA.enqueue(1); // Library A
queueB.push(1);    // Library B
queueC.offer(1);   // Library C (Java style)

// ✅ data-structure-typed: consistent everywhere
const queue = new Queue();
const deque = new Deque();
const stack = new Stack();

// All use THE SAME 4 methods
queue.push(item);    // Add to end
deque.unshift(item); // Add to front
stack.pop();         // Remove from end
queue.shift();       // Remove from front
```
Why? Because developers already know push, pop, shift, unshift from Arrays. Zero new mental models needed.
The Big Three Pain Points
Pain Point 1: Performance Wall
When operations become bottlenecks:
```ts
const games = [
  { id: 'game_001', name: 'Tetris', score: 9850 },
  { id: 'game_002', name: 'Minecraft', score: 9750 },
  { id: 'game_003', name: 'Grand Theft Auto V', score: 9600 },
  { id: 'game_004', name: 'Wii Sports', score: 9500 },
  { id: 'game_005', name: 'PlayerUnknown\'s Battlegrounds', score: 9200 },
  { id: 'game_006', name: 'Fortnite', score: 9100 },
  { id: 'game_007', name: 'League of Legends', score: 8950 },
  { id: 'game_008', name: 'The Legend of Zelda: Breath of the Wild', score: 8850 },
  { id: 'game_009', name: 'Elden Ring', score: 8700 },
  { id: 'game_010', name: 'Super Mario Bros', score: 8600 },
];

// ❌ Repeated sorting slows everything
const scores = [];
for (const game of games) {
  scores.push(game.score);
  scores.sort((a, b) => b - a); // O(n log n) every time!
}

// ✅ Auto-maintained sorted order: key by score, since the
// comparator compares keys (a numeric comparator on string ids would break)
const rankings = new RedBlackTree(games, {
  toEntryFn: (game) => [game.score, game.id],
  comparator: (a, b) => b - a,
}); // O(n log n) in total

// Iteration is always sorted!
```
Real Impact:
- Competitive programming: TLE → AC (Time Limit Exceeded → Accepted)
- Real-time systems: P99 latency 500ms → 5ms
- Message queues: 100 msg/sec → 10,000 msg/sec
Pain Point 2: API Chaos
Different libraries use completely different method names:
| Operation | ArrayList | Queue | ArrayDeque | LinkedList |
|---|---|---|---|---|
| Add end | add() | offer() | push() | add() |
| Remove end | remove() | - | pop() | removeLast() |
| Remove start | remove(0) | poll() | removeFirst() | removeFirst() |
| Add start | add(0, e) | offerFirst() | unshift() | addFirst() |
Result: Developers must memorize N different APIs.
Solution: Use 4 consistent methods everywhere:
```ts
// Every linear structure uses the same API
const deque = new Deque();
const queue = new Queue();
const list = new DoublyLinkedList();

// All support these 4 methods:
structure.push(item);    // Add to end
structure.pop();         // Remove from end
structure.shift();       // Remove from start
structure.unshift(item); // Add to start
```
Pain Point 3: Conversion Hell
Bouncing data between structures wastes cycles:
```ts
// ❌ Painful way: conversions everywhere
const tree = new TreeLibrary(data);
const filtered = tree.toArray()
  .filter(x => x > 5)     // Convert to Array
  .map(x => x * 2)        // Still Array
  .sort((a, b) => b - a); // Array sort is O(n log n)
// Lost the tree's sorted benefits!

// ✅ Clean way: operations work directly
const rbTree = new RedBlackTree<number, number>();
const result = rbTree
  .filter(v => (v ?? 0) > 5)              // Direct on tree
  .map((v, k) => [k, (v ?? 0) * 2])       // Still on tree
  .reduce((sum, v) => sum + (v ?? 0), 0); // Still on tree
// Zero conversions, tree structure maintained!
```
Why Deque is 484x Faster
The Problem: Array.shift()
```ts
// What happens when you shift from an array:
const arr = [1, 2, 3, 4, 5];
arr.shift(); // Remove 1

// The JavaScript engine must:
// 1. Read the element at index 0
// 2. Copy element 2 → index 0
// 3. Copy element 3 → index 1
// 4. Copy element 4 → index 2
// 5. Copy element 5 → index 3
// 6. Decrement the length

// For 100,000 elements: O(n) ≈ 100,000 copies each time!
```
Real benchmark:
```ts
const queue = [];
for (let i = 0; i < 100000; i++) queue.push(i);
for (let i = 0; i < 100000; i++) queue.shift();
// Time: 2829ms for 100,000 shifts ❌
```
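Most of this cost disappears as soon as dequeuing advances an index instead of calling `shift()` — the same idea the Deque below takes further with buckets. A minimal sketch (this `PointerQueue` class is illustrative, not part of the library):

```typescript
// Hypothetical index-pointer queue: shift() just advances `head`,
// so no elements are copied on dequeue.
class PointerQueue<T> {
  private items: T[] = [];
  private head = 0;

  push(item: T): void {
    this.items.push(item);
  }

  shift(): T | undefined {
    if (this.head >= this.items.length) return undefined;
    const item = this.items[this.head++];
    // Occasionally reclaim memory once most slots are dead.
    if (this.head > 1000 && this.head * 2 > this.items.length) {
      this.items = this.items.slice(this.head);
      this.head = 0;
    }
    return item;
  }

  get size(): number {
    return this.items.length - this.head;
  }
}

const q = new PointerQueue<number>();
for (let i = 0; i < 100000; i++) q.push(i);
for (let i = 0; i < 99999; i++) q.shift();
console.log(q.shift()); // → 99999
```

The trade-off is that dead slots linger until the occasional `slice()`, which is why a production deque prefers fixed-size buckets that can be freed wholesale.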
The Solution: Deque's Chunked Rings
```ts
// Deque implementation strategy:
// instead of one physical array, use chunked buckets

//  bucket[0]     bucket[1]      bucket[2]
// [1,2,3,4,5]  [6,7,8,9,10]  [11,12,13]
//  ^
//  head pointer

// When you shift():
// - move the head pointer forward within the bucket
// - no copying until the bucket is empty
// - only then delete the bucket (one batch operation)
```
Benchmark result:
```ts
const deque = new Deque();
for (let i = 0; i < 100000; i++) deque.push(i);
for (let i = 0; i < 100000; i++) deque.shift();
// Time: 5.83ms for 100,000 shifts ✅
// 2829ms / 5.83ms ≈ 485x faster!
```
Why This Works
Batching Operations: Instead of reindexing on every shift, Deque batches operations into chunks. Only when an entire chunk is consumed does it get cleaned up.
Memory Locality: Chunked structure is better for CPU cache than scattered reindexing operations.
Pointer Movement: Shifting is just moving a pointer forward in memory, which is a CPU register operation (nanoseconds).
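The three ideas above can be sketched as a toy chunked deque. This `ChunkedDeque` is illustrative only, not the library's actual implementation (which also supports `unshift`/`pop` and reuses empty buckets):

```typescript
// Illustrative chunked deque: elements live in fixed-size buckets;
// shift() advances an offset and frees a whole bucket only when
// it has been fully consumed (the "batch" cleanup).
class ChunkedDeque<T> {
  private static readonly BUCKET_SIZE = 4096;
  private buckets: T[][] = [];
  private headOffset = 0; // position inside buckets[0]

  push(item: T): void {
    const last = this.buckets[this.buckets.length - 1];
    if (!last || last.length === ChunkedDeque.BUCKET_SIZE) {
      this.buckets.push([item]); // open a fresh bucket
    } else {
      last.push(item);
    }
  }

  shift(): T | undefined {
    const first = this.buckets[0];
    if (!first) return undefined;
    const item = first[this.headOffset++];
    if (this.headOffset === first.length) {
      this.buckets.shift(); // drop the exhausted bucket in one batch
      this.headOffset = 0;
    }
    return item;
  }
}

const dq = new ChunkedDeque<number>();
dq.push(1); dq.push(2);
console.log(dq.shift()); // → 1
```

Note how `shift()` touches one array slot and one counter in the common case; the per-bucket cleanup amortizes to O(1) per element.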
Iterator Protocol Design
The Hidden Superpower
Every data structure implements JavaScript's iterator protocol:
```ts
// Why this matters:
const tree = new RedBlackTree([5, 2, 8]);

// All of these work automatically:
[...tree];                // Spread operator
for (const x of tree) { } // for...of loop
const [a, b, c] = tree;   // Destructuring
new Set(tree);            // Set constructor
Array.from(tree);         // Array.from

// No special methods needed!
// Just implement Symbol.iterator
```
Implementation Strategy
```ts
class CustomStructure<T> {
  private items: T[] = [];

  // One method makes everything work
  *[Symbol.iterator]() {
    for (const item of this.items) {
      yield item;
    }
  }

  push(...items: T[]) {
    this.items.push(...items); // concat() would return a copy; push() mutates
  }
}

// Now the structure works everywhere:
const struct = new CustomStructure<number>();
struct.push(1, 2, 3);

// All of these work automatically:
[...struct];                // [1, 2, 3]
for (const x of struct) { } // Loops 1, 2, 3
```
Why Iterator Protocol?
- Consistency: Uses the same interface as Arrays and Maps
- Zero Learning Curve: Developers already know for...of
- Interoperability: Works with spread, destructuring, and the Set constructor
- Future-Proof: JavaScript standard, not library-specific
Self-Balancing Strategy
The Problem: Unbalanced Trees
```ts
// Create an unbalanced tree:
const bst = new BST();
[1, 2, 3, 4, 5].forEach(x => bst.add(x));

// 1
//  \
//   2
//    \
//     3
//      \
//       4
//        \
//         5
// This becomes a linked list! O(n) instead of O(log n)
```
Red-Black Tree Solution
Rules:
- Every node is either Red or Black
- Root is always Black
- Red nodes have Black children (no consecutive Reds)
- All paths to null have same number of Black nodes
Result: Tree height limited to ~2 * log(n), guaranteeing O(log n)
```ts
const rbTree = new RedBlackTree([1, 2, 3, 4, 5]);

//        3(B)       Balanced!
//       /    \
//     2(B)   4(B)
//     /         \
//   1(R)        5(R)
// Every path to null passes the same number of Black nodes,
// so even with sequential inserts: O(log n) guaranteed!
```
AVL Tree Alternative
Stricter balance requirement: Height difference ≤ 1
```ts
//      3
//    /   \
//   2     4
//  /       \
// 1         5
// More strictly balanced = better search
// Trade-off: slower insertions/deletions (more rebalancing)
```
SkipList: Probabilistic Balancing
Unlike Red-Black Tree or AVL Tree which rebalance deterministically, SkipList uses randomization:
```
Level 3: ──────────────────────────► 50
Level 2: ────────► 20 ──────────────► 50
Level 1: ──► 10 ──► 20 ──► 30 ──► 50 ──► 70
```
Each node is promoted to a higher level with probability 0.5 (configurable). This gives O(log n) average performance — equivalent to Red-Black Tree in practice, but implemented with much simpler code.
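The promotion step is just repeated coin flips; a minimal sketch (the `randomLevel` helper and its parameter names are illustrative, not the library's API):

```typescript
// Illustrative coin-flip promotion for a skip list (p = 0.5):
// a node's level is one plus the number of consecutive "heads",
// so about half the nodes reach level 2, a quarter level 3, etc.
function randomLevel(p = 0.5, maxLevel = 16): number {
  let level = 1;
  while (Math.random() < p && level < maxLevel) level++;
  return level;
}

// Expected distribution: P(level >= k) = p^(k-1),
// giving an expected height of O(log n) for n nodes.
const counts = new Map<number, number>();
for (let i = 0; i < 100000; i++) {
  const lv = randomLevel();
  counts.set(lv, (counts.get(lv) ?? 0) + 1);
}
// counts: roughly half the samples at level 1, a quarter at level 2, ...
```

Because levels are assigned independently at insert time, no rotations or recoloring are ever needed — that is where the "much simpler code" comes from.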
When to use SkipList vs TreeMap:
- Both provide identical APIs (NavigableMap semantics)
- SkipList: simpler codebase, slightly higher constant factors
- TreeMap (Red-Black Tree): guaranteed O(log n) worst-case
Range Query Structures
SegmentTree
Stores aggregated values in a complete binary tree backed by a flat array:
```
Array: [1, 3, 5, 7]

Tree:      16          (root = sum of all)
         /    \
        4      12      (sums of the halves)
       / \    /  \
      1   3  5    7    (leaves = original values)

Internal array: [_, 16, 4, 12, 1, 3, 5, 7] (1-indexed)
```
Query [1,2] (3+5=8): combines nodes covering exactly that range in O(log n).
Supports any associative operation via merger function: sum, min, max, GCD, product, etc.
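A minimal sum-only version of this layout can be sketched as follows (illustrative code, not the library's `SegmentTree` API; it pads the input to a power of two and stores leaves at indices n..2n-1 of a flat 1-indexed array):

```typescript
// Minimal sum segment tree over a power-of-two-padded flat array.
class SumSegmentTree {
  private n: number;
  private tree: number[];

  constructor(values: number[]) {
    this.n = 1;
    while (this.n < values.length) this.n *= 2;
    this.tree = new Array(2 * this.n).fill(0);
    for (let i = 0; i < values.length; i++) this.tree[this.n + i] = values[i];
    // Build internal nodes bottom-up: parent = left child + right child.
    for (let i = this.n - 1; i >= 1; i--) {
      this.tree[i] = this.tree[2 * i] + this.tree[2 * i + 1];
    }
  }

  // Point update in O(log n): fix the leaf, then its ancestors.
  update(index: number, value: number): void {
    let i = this.n + index;
    this.tree[i] = value;
    for (i = Math.floor(i / 2); i >= 1; i = Math.floor(i / 2)) {
      this.tree[i] = this.tree[2 * i] + this.tree[2 * i + 1];
    }
  }

  // Sum over [left, right] inclusive (0-indexed) in O(log n),
  // walking inward from both ends of the leaf range.
  query(left: number, right: number): number {
    let sum = 0;
    let lo = this.n + left;
    let hi = this.n + right;
    while (lo <= hi) {
      if (lo % 2 === 1) sum += this.tree[lo++]; // lo is a right child
      if (hi % 2 === 0) sum += this.tree[hi--]; // hi is a left child
      lo = Math.floor(lo / 2);
      hi = Math.floor(hi / 2);
    }
    return sum;
  }
}

const st = new SumSegmentTree([1, 3, 5, 7]);
console.log(st.query(1, 2)); // → 8, matching the diagram above
```

Swapping the `+` in the three aggregation spots for any associative merger (min, max, GCD) yields the general structure described above.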
BinaryIndexedTree (Fenwick Tree)
A more compact structure for prefix sums only:
```
Array: [1, 3, 5, 7, 9]

BIT tree (1-indexed):
T[1] = 1            (covers [1,1])
T[2] = 1+3 = 4      (covers [1,2])
T[3] = 5            (covers [3,3])
T[4] = 1+3+5+7 = 16 (covers [1,4])
T[5] = 9            (covers [5,5])
```
Prefix sum uses bit tricks (i -= i & -i) to traverse only O(log n) nodes.
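The whole structure fits in a few lines; a minimal illustrative sketch (not the library's `BinaryIndexedTree` API):

```typescript
// Minimal Fenwick (Binary Indexed) tree for prefix sums,
// 1-indexed internally as in the T[] table above.
class FenwickTree {
  private tree: number[];

  constructor(size: number) {
    this.tree = new Array(size + 1).fill(0);
  }

  // Add `delta` at 0-indexed position `index` in O(log n):
  // i += i & -i jumps to the next node covering this position.
  add(index: number, delta: number): void {
    for (let i = index + 1; i < this.tree.length; i += i & -i) {
      this.tree[i] += delta;
    }
  }

  // Sum of the first `count` elements: i -= i & -i strips the
  // lowest set bit, visiting only O(log n) nodes.
  prefixSum(count: number): number {
    let sum = 0;
    for (let i = count; i > 0; i -= i & -i) sum += this.tree[i];
    return sum;
  }
}

const bit = new FenwickTree(5);
[1, 3, 5, 7, 9].forEach((v, i) => bit.add(i, v));
console.log(bit.prefixSum(4)); // → 16 (1+3+5+7)
```

A range sum falls out as the difference of two prefix sums, e.g. `prefixSum(3) - prefixSum(1)` for elements 2 through 3.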
SegmentTree vs BinaryIndexedTree:
| | SegmentTree | BinaryIndexedTree |
|---|---|---|
| Operations | sum / min / max / any associative op | sum only |
| Update | O(log n) | O(log n) |
| Query | O(log n) | O(log n) |
| Space | O(n) (~2n–4n array) | O(n) (n+1 array) |
| Implementation complexity | Higher | Lower |
V8 JIT Optimizations
How V8 Makes This Fast
```ts
// V8's JIT optimizes data structures that:
// 1. Have predictable shapes (hidden classes)
// 2. Use consistent types
// 3. Have stable method calls

// ✅ Good for V8's JIT:
class Node {
  constructor(value) {
    this.value = value; // Always the same type
    this.left = null;   // Always null or a Node
    this.right = null;  // Always null or a Node
  }
}

// ❌ Bad for V8's JIT:
class BadNode {
  constructor(value) {
    this.value = value;
    this.left = value; // Sometimes a Node, sometimes a number
    // this.meta added dynamically later → hidden-class transition
  }
}

// The library uses strict typing for this reason!
```
Performance Benefit
```ts
// Predictable structure → V8 caches optimizations
// First calls: interpreted
// Subsequent calls: JIT-compiled to native code
// Result: typically faster after warm-up (benchmarks should include warm-up runs)

const tree = new RedBlackTree();
for (let i = 0; i < 1000000; i++) {
  tree.set(i, Math.random());
}
// Hot loops like this reach near-native speed through the JIT
```
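A minimal warm-up-aware timing harness might look like this (illustrative sketch; real benchmarks should use many more runs and a high-resolution timer):

```typescript
// Illustrative micro-benchmark harness: run the workload a few times
// untimed so the JIT can warm up, then time only the later runs.
function benchmark(fn: () => void, warmupRuns = 3, timedRuns = 5): number {
  for (let i = 0; i < warmupRuns; i++) fn(); // JIT warm-up, not timed
  const start = Date.now();
  for (let i = 0; i < timedRuns; i++) fn();
  return (Date.now() - start) / timedRuns; // mean ms per timed run
}

const ms = benchmark(() => {
  let sum = 0;
  for (let i = 0; i < 1e6; i++) sum += i;
});
console.log(`~${ms.toFixed(2)} ms per run after warm-up`);
```

Comparing the first (interpreted) run against later runs of the same function is an easy way to see the JIT effect directly.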
Method Chaining Architecture
Design Pattern
```ts
class TreeStructure<K, V> {
  // Methods return `this` for chaining
  filter(predicate: (v: V, k: K) => boolean): this {
    // ... filter logic ...
    return this; // Return the structure, not an Array!
  }

  map(mapper: (v: V, k: K) => V): this {
    // ... map logic ...
    return this; // Chain continues!
  }

  reduce<A>(reducer: (acc: A, v: V, k: K) => A, init: A): A {
    // ... reduce logic ...
    return result; // Terminal operation
  }
}

// Result: chainable on any structure
tree
  .filter(x => x > 5)
  .map(x => x * 2)
  .reduce((a, v) => a + v, 0);
```
Why This Matters
Traditional approach loses structure type:
```ts
// ❌ Loses type information
const result = tree.toArray() // Now it's a plain Array
  .filter(x => x > 5)  // Still Array
  .map(x => x * 2);    // Still Array
// Lost the O(log n) properties!

// ✅ Preserves structure
const total = tree
  .filter(x => x > 5)          // Still RedBlackTree
  .map(x => x * 2)             // Still RedBlackTree
  .reduce((a, v) => a + v, 0);
// Properties maintained!
```
Memory Efficiency
Comparison
| Structure | Overhead | Notes |
|---|---|---|
| Array | Low | Contiguous block |
| LinkedList | High | Pointer per node |
| BST | Medium | 2 pointers per node |
| Deque | Very Low | Chunked, batched |
| Heap | Very Low | Array-based |
Deque Memory Strategy
```ts
// Instead of one big array:
// [ ][ ][ ][ ][ ][ ][ ][ ][ ][ ]

// Use chunks (buckets):
// bucket[0]: [1][2][3][4][5]
// bucket[1]: [6][7][8][9][10]

// Benefits:
// 1. Only allocate the chunks you need
// 2. Reuse empty buckets
// 3. Better cache locality
// 4. Less fragmentation
```
Type Safety Architecture
Generic Type Parameters
```ts
// Custom object types
interface User {
  id: number;
  name: string;
  age: number;
}

const userTree = new RedBlackTree<number, User>();
userTree.set(1, { id: 1, name: 'Alice', age: 30 });

// Type inference works:
const user = userTree.get(1); // Type: User | undefined
user?.name; // Type-safe access
```
Comparator Custom Logic
```ts
type CustomObject = {
  name: string;
  priority: number;
};

// Default comparator (ascending)
const ascTree = new RedBlackTree<number>();

// Reversed comparator (descending)
const descTree = new RedBlackTree<number>([], {
  comparator: (a, b) => b - a, // Reverse comparison
});

// Works with any type
const objectTree = new RedBlackTree<CustomObject>([], {
  comparator: (a, b) => a.priority - b.priority,
});
```
Summary: Design Checklist
- ✅ Unified API across all structures
- ✅ Iterator protocol implementation
- ✅ Method chaining architecture
- ✅ Self-balancing guarantees
- ✅ V8 JIT-friendly code
- ✅ Memory-efficient algorithms
- ✅ Full TypeScript support
- ✅ Production-ready error handling
Next: Check PERFORMANCE.md for benchmarks.
Or: See GUIDES.md for implementation examples.