WebAssembly multithreading and concurrent programming are built on Web Workers and shared memory (SharedArrayBuffer):
1. Web Workers Basics
- WebAssembly itself is single-threaded
- Use Web Workers to implement multithreading
- Each Worker has an independent WebAssembly instance
- Workers communicate through message passing
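The points above can be sketched as a minimal main-thread setup; the worker file name, the wasm URL, and the helper name are illustrative assumptions, not part of any particular API:

```javascript
// main.js — compile the module once, then hand it to each Worker.
// A compiled WebAssembly.Module can be structured-cloned to Workers,
// and each Worker instantiates it independently (one instance per thread).
async function spawnWasmWorkers(wasmUrl, numWorkers) {
  const module = await WebAssembly.compileStreaming(fetch(wasmUrl));
  const workers = [];
  for (let i = 0; i < numWorkers; i++) {
    const worker = new Worker('worker.js');
    worker.postMessage({ module });  // cloned to the Worker, not shared
    worker.onmessage = (e) => console.log(`Worker ${i}:`, e.data);
    workers.push(worker);
  }
  return workers;
}
```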
2. Shared Memory
```javascript
// Create shared memory (in browsers this requires a cross-origin
// isolated page, i.e. COOP/COEP response headers)
const sharedMemory = new WebAssembly.Memory({
  initial: 10,   // pages of 64 KiB
  maximum: 100,
  shared: true
});

// Main-thread view of the shared buffer
const mainBuffer = new Int32Array(sharedMemory.buffer);

// Hand the memory to a Worker. The underlying SharedArrayBuffer is
// shared via structured cloning and must NOT be put in the transfer
// list — transferring a SharedArrayBuffer throws
const worker = new Worker('worker.js');
worker.postMessage({ memory: sharedMemory });
```
3. Worker Code Example
```javascript
// worker.js
self.onmessage = function (e) {
  const { memory, wasmModule } = e.data;

  // Instantiate the WebAssembly module inside the Worker,
  // importing the shared memory
  WebAssembly.instantiate(wasmModule, { env: { memory } })
    .then(results => {
      const { process } = results.instance.exports;

      // Process the shared data
      const buffer = new Int32Array(memory.buffer);
      const result = process(0, buffer.length);

      // Send the result back to the main thread
      self.postMessage({ result });
    });
};
```
4. Main Thread Code
```javascript
// main.js
const sharedMemory = new WebAssembly.Memory({
  initial: 10,
  maximum: 100,
  shared: true
});

// Create multiple Workers; the shared memory is cloned to each
// Worker (no transfer list — SharedArrayBuffers cannot be transferred)
const workers = [];
for (let i = 0; i < 4; i++) {
  const worker = new Worker('worker.js');
  worker.postMessage({ memory: sharedMemory, wasmModule: wasmBinary });
  workers.push(worker);
}

// Receive results from the Workers
workers.forEach(worker => {
  worker.onmessage = function (e) {
    console.log('Worker result:', e.data.result);
  };
});
```
5. Atomic Operations
```javascript
// Use Atomics for synchronization on shared memory
const buffer = new Int32Array(sharedMemory.buffer);

// Atomic add
Atomics.add(buffer, 0, 1);

// Atomic compare-and-exchange: returns the OLD value, not a boolean;
// the exchange succeeded if the return value equals `expected`
const expected = 0;
const newValue = 1;
const oldValue = Atomics.compareExchange(buffer, 0, expected, newValue);
const success = oldValue === expected;

// Block until buffer[0] is no longer 0 (Workers only — browsers
// disallow Atomics.wait on the main thread)
Atomics.wait(buffer, 0, 0);

// Wake up to one agent waiting on buffer[0]
Atomics.notify(buffer, 0, 1);
```
6. Producer-Consumer Pattern
```javascript
// Producer Worker.
// Ring-buffer layout in shared memory: buffer[HEAD] counts items
// produced, buffer[TAIL] counts items consumed, and the CAPACITY data
// slots live at indices SLOTS..SLOTS+CAPACITY-1.
function producerWorker() {
  const buffer = new Int32Array(sharedMemory.buffer);
  const HEAD = 0, TAIL = 4, CAPACITY = 10, SLOTS = 8;

  while (true) {
    // Wait while the ring buffer is full
    let tail = Atomics.load(buffer, TAIL);
    while (Atomics.load(buffer, HEAD) - tail >= CAPACITY) {
      Atomics.wait(buffer, TAIL, tail);
      tail = Atomics.load(buffer, TAIL);
    }

    // Produce one item (Int32Array only holds integers, so a raw
    // Math.random() would truncate to 0)
    const head = Atomics.load(buffer, HEAD);
    Atomics.store(buffer, SLOTS + (head % CAPACITY), (Math.random() * 100) | 0);
    Atomics.add(buffer, HEAD, 1);

    // Wake a consumer waiting for data
    Atomics.notify(buffer, HEAD, 1);
  }
}

// Consumer Worker
function consumerWorker() {
  const buffer = new Int32Array(sharedMemory.buffer);
  const HEAD = 0, TAIL = 4, CAPACITY = 10, SLOTS = 8;

  while (true) {
    // Wait while the ring buffer is empty
    let head = Atomics.load(buffer, HEAD);
    while (Atomics.load(buffer, TAIL) === head) {
      Atomics.wait(buffer, HEAD, head);
      head = Atomics.load(buffer, HEAD);
    }

    // Consume one item
    const tail = Atomics.load(buffer, TAIL);
    const data = Atomics.load(buffer, SLOTS + (tail % CAPACITY));
    console.log('Consumed:', data);
    Atomics.add(buffer, TAIL, 1);

    // Wake a producer waiting for space
    Atomics.notify(buffer, TAIL, 1);
  }
}
```
7. Parallel Computing
```javascript
// Parallel matrix multiplication across several Workers
async function parallelMatrixMultiply(A, B, numWorkers = 4) {
  const sharedMemory = new WebAssembly.Memory({
    initial: 20,
    maximum: 100,
    shared: true
  });

  const workers = [];
  const rowsPerWorker = Math.ceil(A.length / numWorkers);

  // Give each Worker a contiguous band of rows; the shared memory is
  // cloned (no transfer list), while A and B are copied to each Worker
  for (let i = 0; i < numWorkers; i++) {
    const worker = new Worker('matrix-worker.js');
    const startRow = i * rowsPerWorker;
    const endRow = Math.min(startRow + rowsPerWorker, A.length);
    worker.postMessage({ memory: sharedMemory, startRow, endRow, A, B });
    workers.push(worker);
  }

  // Wait for every Worker's band of the result
  const results = await Promise.all(
    workers.map(worker =>
      new Promise(resolve => {
        worker.onmessage = (e) => resolve(e.data.result);
      })
    )
  );

  workers.forEach(worker => worker.terminate());
  return results.flat();
}
```
8. Task Queue
```javascript
// Task queue backed by a fixed pool of Workers
class TaskQueue {
  constructor(numWorkers = 4) {
    this.idleWorkers = [];
    this.taskQueue = [];

    for (let i = 0; i < numWorkers; i++) {
      const worker = new Worker('task-worker.js');
      worker.onmessage = (e) => this.handleWorkerMessage(worker, e);
      this.idleWorkers.push(worker);
    }
  }

  addTask(task) {
    return new Promise((resolve, reject) => {
      this.taskQueue.push({ task, resolve, reject });
      this.processQueue();
    });
  }

  processQueue() {
    // Dispatch queued tasks to idle Workers only; indexing the pool by
    // an active-worker count could hand a task to a busy Worker once
    // results start arriving out of order
    while (this.taskQueue.length > 0 && this.idleWorkers.length > 0) {
      const { task, resolve, reject } = this.taskQueue.shift();
      const worker = this.idleWorkers.pop();
      worker.currentResolve = resolve;
      worker.currentReject = reject;
      worker.postMessage({ task, id: Date.now() });
    }
  }

  handleWorkerMessage(worker, e) {
    const { result, error } = e.data;
    if (error) {
      worker.currentReject(error);
    } else {
      worker.currentResolve(result);
    }
    this.idleWorkers.push(worker);
    this.processQueue();
  }
}
```
9. Performance Optimization
- Balance task allocation: avoid load imbalance across Workers
- Reduce synchronization overhead: keep atomic operations to a minimum
- Batch work: reduce message-passing frequency
- Reuse memory: avoid frequent allocation
- Use a Worker pool: reuse Worker instances instead of spawning new ones
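The batching point can be sketched with a small helper that groups work items so each Worker receives one message per chunk rather than one per item; this is a plain-JS sketch, and the helper name is an illustrative assumption:

```javascript
// Split `items` into at most `numChunks` contiguous chunks so each
// Worker gets a single postMessage instead of one message per item
function chunkTasks(items, numChunks) {
  const size = Math.max(1, Math.ceil(items.length / numChunks));
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// e.g. chunkTasks([1, 2, 3, 4, 5], 2) groups into [[1, 2, 3], [4, 5]]
```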
10. Best Practices
- Use shared memory to avoid copying data between threads
- Use atomic operations judiciously for synchronization
- Avoid deadlocks and race conditions
- Monitor Worker performance and resource usage
- Implement graceful error handling and recovery
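Several of these points (synchronization, avoiding races) come together in a simple lock built from the atomic primitives shown earlier. This is a sketch, not a production mutex, and the function names are illustrative:

```javascript
// A minimal mutex over one Int32 cell in shared memory:
// 0 = unlocked, 1 = locked. Atomics.compareExchange makes the
// acquire attempt race-free across threads.
const UNLOCKED = 0;
const LOCKED = 1;

function tryLock(i32, index) {
  // Returns true if this caller took the lock
  return Atomics.compareExchange(i32, index, UNLOCKED, LOCKED) === UNLOCKED;
}

function lock(i32, index) {
  while (!tryLock(i32, index)) {
    // Sleep until another agent releases the lock
    // (call this from a Worker — browsers disallow Atomics.wait
    // on the main thread)
    Atomics.wait(i32, index, LOCKED);
  }
}

function unlock(i32, index) {
  Atomics.store(i32, index, UNLOCKED);
  Atomics.notify(i32, index, 1); // wake one waiter
}
```

Keeping the critical section short keeps contention low, so waiters rarely reach the `Atomics.wait` sleep at all.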
11. Debugging Multithreaded Code
- Use Chrome DevTools Worker debugging features
- Log message passing between Workers
- Monitor shared memory state
- Use performance analysis tools to identify bottlenecks
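The message-logging point can be sketched as a small wrapper that instruments a Worker's traffic in both directions; the wrapper name and label format are illustrative assumptions:

```javascript
// Wrap a Worker so every outgoing postMessage and incoming message
// is logged with a label, without changing the Worker's behavior
function instrumentWorker(worker, name) {
  const originalPost = worker.postMessage.bind(worker);
  worker.postMessage = (msg, transfer) => {
    console.log(`[${name}] ->`, msg);
    return originalPost(msg, transfer);
  };
  worker.addEventListener('message', (e) => {
    console.log(`[${name}] <-`, e.data);
  });
  return worker;
}
```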
12. Challenges and Limitations
- Increased debugging complexity
- Need to handle synchronization and concurrency issues
- Browser limits on number of Workers
- Mobile device performance may be limited