What is the Ecma Technical Committee 39 (TC39)?

The TC39 Process Stages (https://tc39.es/)

Proposals for new JavaScript features must progress through five distinct stages to be included in the annual ECMAScript standard: 

  • Stage 0: Strawperson – Initial ideas, informal submissions, or discussions regarding potential changes.
  • Stage 1: Proposal – Formal, documented proposals that outline the problem, potential solutions, and identify potential challenges. A champion is usually assigned to drive the proposal forward.
  • Stage 2: Draft – A formal, initial draft of the specification is created, defining the syntax and semantics, often with a working polyfill.
  • Stage 3: Candidate – The proposal is considered almost complete. At this stage, it requires feedback from implementation (browser engine developers) and users to identify final issues.
  • Stage 4: Finished – The feature is ready for inclusion in the next ECMAScript standard. It has shipped in at least two implementations and passed the relevant conformance tests.

Key Aspects of the TC39 Process

  • Consensus: The committee operates by consensus: a proposal advances only when no delegate sustains an objection, so concerns are resolved rather than outvoted.
  • Annual Releases: Since 2015, new ECMAScript versions have been published every June, allowing for faster adoption of new features.
  • Public Repository: Proposals are tracked on the tc39/proposals GitHub repository.
  • Composition: TC39 consists of delegates from major browser vendors (Google, Apple, Microsoft, Mozilla), companies, academic institutions, and invited experts.
  • Signals Proposal: A notable current proposal is "Signals," which would introduce a standardized reactive primitive (state/computed) that JavaScript frameworks could share, improving performance and interoperability.
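To make the Signals idea concrete, here is a minimal userland sketch of the state/computed pattern the proposal standardizes. The function names are illustrative only; the actual proposal exposes Signal.State and Signal.Computed classes:

```javascript
// Minimal state/computed sketch (illustrative; not the proposal's API).
function state(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    get: () => value,
    set(next) {
      value = next;
      subscribers.forEach(notify => notify()); // push updates to dependents
    },
    subscribe: notify => subscribers.add(notify),
  };
}

function computed(source, fn) {
  let cached = fn(source.get());
  source.subscribe(() => { cached = fn(source.get()); }); // recompute on change
  return { get: () => cached };
}

const count = state(1);
const doubled = computed(count, n => n * 2);
count.set(5); // doubled.get() now returns 10
```

A real implementation adds lazy evaluation and dependency tracking; this sketch only shows the state-to-computed data flow the proposal standardizes.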

Recent Developments (2025)

  • New Features: In mid-2025, TC39 advanced 11 proposals, including Math.sumPrecise and improvements for iterator sequencing.
  • Standardization: The 2025 standard (ES2025) was finalized in June, incorporating new, mature features.
  • AI Integration: While the process itself is human-driven, AI-driven development is influencing how tools, such as TypeScript and developer AI assistants, work with the evolving language specification.

The TC39 process is a five-stage system (0–4) used by TC39 (https://tc39.es/), a technical committee of Ecma International, to evolve the JavaScript (ECMAScript) language. While "AI" is not currently a native part of the specification, the committee uses a rigorous consensus-based model to evaluate any new feature, including those that might eventually support AI workloads (like high-performance math or memory management).

The TC39 Staging Process

Every new JavaScript feature must progress through these five maturity levels:

Stage | Name        | Key Requirement                                                      | Status
0     | Strawperson | An initial idea or placeholder.                                      | Exploration
1     | Proposal    | Problem defined; a "champion" (committee member) is assigned.        | Under Consideration
2     | Draft       | Formal spec text (syntax and semantics) is drafted.                  | Likely to be included
3     | Candidate   | Spec is complete; awaits feedback from users and engines.            | Testing & Validation
4     | Finished    | Two compliant implementations exist; ready for the next year's spec. | Ratified

Relevant Proposals for AI/Performance

While there is no "AI proposal" per se, several active TC39 efforts directly improve the language's suitability for AI and machine learning tasks: 

  • Math.sumPrecise (Stage 4): Adds precision-safe summation, critical for numerical accuracy in large datasets (covered in https://socket.dev/blog/tc39-advances-11-proposals-for-math-precision-binary-apis-and-more).
  • Immutable ArrayBuffer (Stage 3): Enables read-only binary data structures, improving performance in zero-copy contexts (like passing data to AI models).
  • Signals (Stage 1): A proposal for reactive state management that could optimize how UI layers react to high-frequency AI model outputs. 
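Math.sumPrecise is not yet available in most engines, but a compensated-summation sketch (Neumaier's algorithm, a simpler relative of the fully precise algorithm the spec requires) shows why a dedicated method matters:

```javascript
// Neumaier compensated summation: tracks the low-order bits that plain
// floating-point addition silently drops. (Sketch; the real Math.sumPrecise
// guarantees an exactly rounded result.)
function sumPreciseSketch(values) {
  let sum = 0;
  let compensation = 0;
  for (const v of values) {
    const t = sum + v;
    if (Math.abs(sum) >= Math.abs(v)) {
      compensation += (sum - t) + v; // low-order bits of v were lost
    } else {
      compensation += (v - t) + sum; // low-order bits of sum were lost
    }
    sum = t;
  }
  return sum + compensation;
}

const data = [1e16, 1, -1e16];
const naive = data.reduce((a, b) => a + b, 0); // the 1 is lost: result is 0
const precise = sumPreciseSketch(data);        // recovers the 1
```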

How Decisions are Made

  • Consensus: Advancement requires consensus; any member (representatives from Google, Apple, Microsoft, etc.) can block a change by sustaining an objection.
  • Open Source: All work happens transparently on GitHub, where each proposal (for example, https://github.com/tc39/proposal-signals) has its own repository. Anyone can file issues or provide feedback via https://github.com/tc39/proposals.

When you mention TC39 and AI, you're looking at the intersection of how the JavaScript language is evolving and the explosion of artificial intelligence technologies.

As of 2026, TC39 (Technical Committee 39) hasn't fundamentally changed its legendary "Stage-based" process, but the types of features being prioritized—and the tools used to build them—are becoming increasingly "AI-aware."

1. The TC39 Process (The "How")

Every new JavaScript feature must survive a 5-stage gauntlet. The committee operates by consensus: if any major stakeholder (Google, Apple, Mozilla, Microsoft) sustains an objection, the proposal does not advance.

Stage   | Name      | Status         | What happens?
Stage 0 | Strawman  | Idea           | Anyone can submit an idea to the committee.
Stage 1 | Proposal  | Discussion     | A "champion" is assigned; the problem is defined.
Stage 2 | Draft     | Design         | Formal spec language (syntax/semantics) is written.
Stage 3 | Candidate | Implementation | The spec is finished; browsers begin testing it.
Stage 4 | Finished  | Inclusion      | The feature is merged into the official ECMAScript standard.

2. AI-Driven Evolution (The "What")

The rise of AI has shifted the focus of TC39 toward performance and data handling—specifically for heavy-duty tasks like running LLMs in the browser.

Key Proposals & Changes for AI:

  • Float16Array (Stage 4): Recently finalized, this is huge for AI. Machine learning models often use 16-bit floats to save memory. This allows JavaScript to handle those weights efficiently without converting them to 32-bit.

  • Shared Structs (Stage 2): To make AI agents faster, we need better multithreading. Shared Structs allow data to be shared across "workers" without the slow process of copying it back and forth.

  • Math.sumPrecise (Stage 4): Crucial for high-precision calculations in data science and AI, ensuring that floating-point errors don't snowball during large summations.
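The memory argument behind Float16Array comes down to element size. A quick sketch of the arithmetic (the parameter count is illustrative):

```javascript
// Memory footprint of model weights, by element size.
const params = 2_000_000_000;                                 // a 2B-parameter model (illustrative)
const float32Bytes = params * Float32Array.BYTES_PER_ELEMENT; // 4 bytes per weight -> 8 GB
const float16Bytes = params * 2;                              // Float16Array stores 2 bytes per weight -> 4 GB
const ratio = float16Bytes / float32Bytes;                    // 0.5, i.e. the "half the memory" claim
```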

3. AI in the Standardization Workflow

The committee itself is beginning to use AI to streamline the "spec-writing" process:

  • Drafting Spec Text: Since ECMAScript is written in a very specific, dense academic style, AI is being used by champions to help draft the initial "prose" and "algorithms" for Stage 2.

  • Test262 Generation: AI tools are helping generate the thousands of conformance tests (Test262) required for Stage 3, ensuring new features don't break the web.

  • Formal Verification: There is ongoing research into using AI/Machine Learning to check for "logic holes" in the specification before they are shipped to billions of devices.

Note: While AI helps draft things, the final word always belongs to the human delegates in the room. TC39 is notoriously cautious; they won't let an AI "hallucinate" a bug into a language that runs the entire internet.

Building on that, the TC39 committee's focus in 2026 has narrowed significantly on making JavaScript a "first-class" environment for data science and edge AI.

Here is a look at the deeper technical shifts and specific proposals driving this AI evolution.

1. Advanced Memory & Data Handling

AI models (like LLMs or Vision Transformers) are massive. To run them in the browser without crashing the tab, JavaScript is evolving to handle binary data like a low-level language.

  • Float16Array (Stage 4): Finalized in early 2025/2026, this is the "gold standard" for AI. It allows JS to store model weights in half the space of a standard Float32Array. This directly translates to faster model loading and lower memory pressure on mobile devices.

  • Immutable ArrayBuffer (Stage 3): This allows developers to "freeze" memory buffers. In an AI context, once you load your model weights, you can mark them as immutable, preventing accidental corruption and allowing engines to optimize how that data is shared across multiple threads (Web Workers).

  • Uint8Array Base64/Hex (Stage 4): Standardizing how we move binary data to and from strings. This is critical for fetching AI model fragments or sending serialized tensor data over the wire.
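Until Uint8Array.fromBase64/toBase64 ship in every engine, the same round-trip can be sketched with Node's built-in Buffer (a Node-specific stand-in, not the TC39 API):

```javascript
// Round-trip binary data through base64, as the Stage-4 proposal enables
// natively on Uint8Array. Buffer here is Node-specific scaffolding.
const bytes = new Uint8Array([72, 105, 33]);       // the bytes of "Hi!"
const b64 = Buffer.from(bytes).toString("base64"); // -> "SGkh"
const back = new Uint8Array(Buffer.from(b64, "base64")); // original bytes restored
```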

2. High-Precision Math for Tensors

Standard JavaScript math can be "fuzzy" due to floating-point rounding errors. While 0.1 + 0.2 not equaling 0.3 is a meme, in AI, these tiny errors can compound into "gradient explosion," ruining a model's accuracy.

  • Math.sumPrecise (Stage 4): A new method that uses a more complex algorithm to sum arrays of numbers with much higher accuracy than a simple loop.

  • Decimal Proposal (Stage 1/2): Aimed at providing a way to perform base-10 arithmetic. While primarily for finance, it’s being looked at for high-precision AI weights where binary floating point (IEEE 754) falls short.
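The drift in question is easy to reproduce: repeatedly adding 0.1 never quite reaches the exact total, which is the gap sumPrecise and Decimal aim to close:

```javascript
// Binary floating point cannot represent 0.1 exactly, so errors compound.
let naiveTotal = 0;
for (let i = 0; i < 10; i++) naiveTotal += 0.1;
const drifted = naiveTotal !== 1; // true: the sum is 0.9999999999999999

// A common workaround today: compute in scaled integers, divide once.
let tenths = 0;
for (let i = 0; i < 10; i++) tenths += 1;
const exact = tenths / 10; // exactly 1
```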

3. The "Agentic" Workflow Tools

Modern AI isn't just a chatbot; it’s an Agent that interacts with files, databases, and APIs. TC39 is adding "ergonomic" features to make this less buggy.

  • Explicit Resource Management (using - Stage 4): This allows you to declare a resource (like a file handle for an AI training log or a connection to a vector database) that automatically closes when the block of code ends.

    {
      // openModel is a hypothetical loader that returns a disposable resource
      using modelFile = await openModel('llama3.bin');
      // Even if the code throws here, modelFile[Symbol.dispose]() still runs
      // when the block exits.
    }
    
    
  • Model Context Protocol (MCP) Server Proposal: There is also a community-driven push to create a TC39 MCP server. This would allow AI coding assistants (like Claude or Copilot) to "read" the most up-to-date TC39 specs directly, helping ensure the code they generate is spec-compliant.
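A `using` declaration desugars, roughly, to a try/finally around the resource's Symbol.dispose method. A manual sketch of that guarantee (openLog and withDisposal are illustrative helpers, not TC39 APIs):

```javascript
// Fall back to a local symbol where Symbol.dispose hasn't shipped yet.
const DISPOSE = Symbol.dispose ?? Symbol.for("example.dispose");

function openLog(state) {
  // A toy disposable resource; real ones might wrap files or sockets.
  return {
    write(line) { state.lines.push(line); },
    [DISPOSE]() { state.closed = true; },
  };
}

function withDisposal(resource, body) {
  try {
    return body(resource);
  } finally {
    resource[DISPOSE](); // runs even when body throws, like `using`
  }
}
```

With `using`, the try/finally is implicit: the engine calls the dispose method when the enclosing block exits, on success or on error.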

4. WebAssembly (Wasm) Synergy

While not strictly TC39, the committee works closely with the Wasm group.

  • WasmGC & SIMD: These allow AI engines (like TensorFlow.js or ONNX) to run at near-native speeds.

  • JS-Promise Integration (JSPI): This makes it easier for synchronous AI code running in Wasm to talk to asynchronous JavaScript APIs without the "colored function" problem.

As the TC39 committee moves through 2025 and into 2026, the JavaScript language is undergoing a strategic "infrastructure upgrade" to support the heavy computational demands of AI and machine learning.

Here is a deeper technical breakdown of the most critical developments:

1. The Binary Revolution: Float16Array

The most significant AI-related achievement recently is the finalization of Float16Array (Stage 4, reaching "Baseline Widely Available" in early 2025).

  • Why it matters for AI: Most modern LLMs (like Llama or Mistral) use 16-bit floating-point weights to balance precision and performance.

  • The Impact: Previously, JavaScript forced developers to convert these weights to 32-bit (Float32Array), doubling the memory usage. By supporting 16-bit natively, JS apps can now load models that are 50% smaller, making it feasible to run sophisticated AI agents on mobile browsers and lower-end hardware.

2. Solving the "Memory Wall": Immutable ArrayBuffers

While the famous "Records and Tuples" proposal (which sought to bring deep immutability to JS objects) was withdrawn in April 2025 due to performance concerns, a more targeted proposal has taken its place.

  • Immutable ArrayBuffers (Stage 3): This allows developers to mark large chunks of memory as read-only.

  • AI Use Case: Once an AI model's weights are loaded into memory, they shouldn't change. Marking them as immutable allows the JavaScript engine to share that data across multiple "Workers" (threads) without costly copying (zero-copy sharing), preventing the browser from locking up during heavy inference.

3. Shared Structs and Multithreading

The Shared Structs proposal (Stage 2) is the committee’s answer to JavaScript's historically weak multithreading.

  • Fixed Layout: Unlike regular JS objects that are flexible and "baggy," Structs have a fixed, sealed layout.

  • Parallel Access: Shared Structs can be accessed by multiple threads at the exact same time without data "tearing." This is crucial for building AI engines where multiple threads might need to read different parts of a neural network simultaneously to speed up processing.

4. Precision Control: Math.sumPrecise and Decimal

  • Math.sumPrecise (Stage 4): Standard addition in JS often introduces tiny rounding errors. When summing millions of AI weights, these errors can "drift." This new method ensures a mathematically accurate sum regardless of the number of elements.

  • The Decimal Proposal (Ongoing): Currently being refined to provide a way to handle base-10 numbers with exact precision. While primarily for finance, it is being looked at for AI "Explainability" tools where human-readable precision is more important than raw binary speed.

5. Signal-Based Reactivity

A new proposal for Signals (Stage 1/2) is gaining traction. While it looks like a UI feature (popularized by SolidJS and Vue), it has a secondary AI benefit: Reactive Data Pipelines. As AI agents stream tokens or data, Signals provide a standardized way for the language to update only the specific parts of an application that need to change, drastically reducing the overhead of real-time AI interfaces.

Summary of AI Infrastructure Status (2025-2026)

Feature                        | Stage | Main AI Benefit
Float16Array                   | 4     | 50% memory reduction for model weights.
Immutable ArrayBuffers         | 3     | Zero-copy model sharing between threads.
Shared Structs                 | 2     | High-performance, thread-safe AI data structures.
Math.sumPrecise                | 4     | Eliminating rounding drift in large tensor sums.
using (Explicit Resource Mgmt) | 4     | Automatic cleanup of heavy AI/GPU resources.

In 2026, the TC39 process is essentially transforming JavaScript into a high-performance engine for Edge AI. The committee has moved past just "syntax sugar" and is now redesigning the language's core memory and threading models to handle billions of parameters directly in the browser.

Here are the specific, high-level shifts occurring in the TC39 pipeline right now:

1. The "Native AI" Proposals (2025–2026)

These features move JavaScript away from being "just for websites" and toward being a viable competitor to Python for local model execution.

  • Float16Array (Stage 4 - Finalized): This is the crown jewel for AI. By supporting 16-bit floats, JS engines can now handle neural network weights with 50% less memory overhead. This allows LLMs that previously crashed a mobile browser to run smoothly.

  • Immutable ArrayBuffers (Stage 3): This allows binary data (like a 2GB model file) to be "frozen." Once frozen, the data can be shared across multiple web workers without copying it, which eliminates the "thundering herd" problem where multiple threads compete for memory.

  • Shared Structs (Stage 2.7): One of the most ambitious changes in JS history. It introduces objects with a fixed memory layout that can be accessed by multiple threads simultaneously. This is the foundation for building high-speed tensor libraries directly in JS.

2. Agentic Workflow Support

As AI shifts from "chatbots" to "agents" that perform tasks, TC39 is adding safety and cleanup features to prevent memory leaks in autonomous systems.

  • Explicit Resource Management (using - Stage 4): Standardizes how agents clean up after themselves. If an AI agent opens a file, a database connection, or a GPU buffer, the using keyword ensures that resource is closed the moment the task is done, even if the code errors out.

  • Array.fromAsync (Stage 4): Simplifies how we handle "streams" of data. Since AI responses are almost always streamed token-by-token, this allows developers to collect those streams into usable arrays without complex boilerplate.
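Array.fromAsync and async iteration both cut streaming boilerplate. A sketch with a hypothetical token generator, using a for-await collector that works even where Array.fromAsync has not shipped:

```javascript
// tokens() is a hypothetical stand-in for an LLM's token-by-token stream.
async function* tokens() {
  yield "Hello";
  yield " ";
  yield "world";
}

// Engines with Array.fromAsync can replace this whole helper with:
//   await Array.fromAsync(tokens())
async function collect(stream) {
  const out = [];
  for await (const token of stream) out.push(token);
  return out;
}
```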

3. The "Signals" Revolution (Stage 1/2)

While often associated with UI frameworks like React or Solid, the Signals Proposal is a game-changer for AI Data Pipelines.

  • Reactive AI: Signals allow the language to track dependencies automatically. If a local AI model updates a "thought" or a "state," only the specific parts of the app relying on that data update.

  • Performance: In 2026, this is being explored to create "fine-grained" reactivity for AI agents, allowing them to react to changing sensor data or user inputs with minimal CPU cycles.

4. Why 2026 is an Inflection Point

The TC39 process is traditionally slow, but the "AI pressure" has led to a more focused Stage 2.7 (a recent addition to the process). This intermediate stage forces "champions" to prove that a feature is actually implementable in major engines like V8 (Chrome) and SpiderMonkey (Firefox) before it gets too far.

Key Trend: In 2026, we are seeing "JavaScript becoming two languages." One for standard web apps (high flexibility) and a "low-level" subset (Shared Structs, WasmGC, Float16) specifically for AI and heavy data processing.

Summary of TC39 AI Features (Feb 2026)

Feature           | Stage | Why AI devs care?
Float16Array      | 4     | Essential for loading LLM weights efficiently.
using keyword     | 4     | Automatic cleanup for AI/GPU resources.
Immutable Buffers | 3     | High-speed, zero-copy model sharing.
Signals           | 1/2   | Efficient, reactive AI data streams.
Shared Structs    | 2.7   | Multi-threaded performance for tensor math.

In 2026, the TC39 committee's work has reached a fever pitch as it pivots to address the "Computational Gap"—the need for JavaScript to handle AI workloads that were previously reserved for Python or C++.

Beyond the Stage 4 features like Float16Array and Math.sumPrecise, the committee is deep into the architecture of multithreaded parallelism and asynchronous stability.

1. The Concurrency Control Proposal (Stage 1/2)

As AI agents become more prevalent, developers face the "Agentic Throttling" problem—running too many concurrent LLM calls can crash a browser tab or exceed API rate limits.

  • The Solution: A new Governor interface is being designed. It allows you to wrap async iterators (the primary way we receive AI tokens) with built-in concurrency limits.

  • Why it matters: Instead of building complex "queue" logic in userland, you'll be able to tell JavaScript: "Run this AI processing loop, but never allow more than 3 active workers at once."
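The Governor interface is still being designed, so its final shape is unknown; a userland limiter sketches the behavior described above, never allowing more than `limit` tasks in flight:

```javascript
// Concurrency limiter sketch (illustrative; not the proposed Governor API).
function createLimiter(limit) {
  let active = 0;
  const waiting = [];
  return async function run(task) {
    if (active >= limit) {
      // Park until a finishing task hands its slot to us.
      await new Promise(resolve => waiting.push(resolve));
    } else {
      active++;
    }
    try {
      return await task();
    } finally {
      const next = waiting.shift();
      if (next) next(); // pass the slot directly to a waiter
      else active--;    // no one waiting: release the slot
    }
  };
}
```

Usage: `const run = createLimiter(3); await Promise.all(jobs.map(job => run(job)));` keeps at most three jobs active at once.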

2. Asynchronous Context (Stage 2/3)

One of the hardest things about AI-driven apps is "Traceability." When an AI agent performs 10 different async steps, it’s easy to lose track of the original user request ID or security context.

  • The Feature: AsyncContext.Variable and AsyncContext.Snapshot provide a way to propagate values across asynchronous boundaries (like await or setTimeout) automatically.

  • AI Impact: It allows for "Invisible Context Tracking," ensuring that an AI sub-routine always knows which user it's helping, without you having to pass a "context" object through every single function call.

3. The "Stage 2.7" Filter

A significant change in the TC39 process itself is the widespread adoption of Stage 2.7.

  • The "Implementability" Check: Previously, many proposals reached Stage 3 only to be blocked because browser engines (V8, WebKit) found them too hard to implement.

  • The AI Rush: Because AI features (like Shared Structs) are so complex, the committee now uses Stage 2.7 to require a "spec-compliant prototype" and early performance data. This ensures that when an AI feature like Immutable ArrayBuffers hits Stage 3, it is actually ready for production-grade AI libraries.

4. Summary of the 2026 "AI-Ready" Spec

Proposal            | Status    | Core Capability for AI
Import Bytes        | Stage 2   | Import raw .bin model weights as a Uint8Array directly.
Iterator Sequencing | Stage 2.7 | Chain multiple AI token streams into a single iterator effortlessly.
Signals             | Stage 1/2 | Native reactive state for high-frequency AI dashboard updates.
AsyncContext        | Stage 2/3 | Securely trace AI agent actions across thousands of async steps.

Comparison: Standard JS vs. AI-Optimized JS

The committee is essentially creating a "Fast Path" for AI:

Traditional JavaScript  | AI-Optimized (2026+)
Flexible objects        | Shared Structs (fixed memory layout)
Single-threaded         | Parallel agents with Mutex/Condition
Binary-to-string copies | Zero-copy immutable buffers
Float32 / Float64       | Native Float16 support

In February 2026, the TC39 committee is finalizing the most significant overhaul of JavaScript's data-handling capabilities in a decade. The focus has moved beyond basic syntax to "The Infrastructure of Intelligence"—making the browser a viable environment for local model inference and autonomous agents.

Here is the "state of the union" for TC39 and AI as of early 2026.

1. The Binary Data Tier (Stage 4)

For years, JavaScript was "slow" at AI because it couldn't handle the memory formats models use. That changed this year with the standardization of features that bridge the gap between JS and the GPU.

  • Float16Array: Now officially part of the language (ES2026). It allows models like Gemma 3 (2B) to run in the browser with half the memory footprint.

  • Math.sumPrecise: Standardizes high-precision summation. In 2026, this is critical for "de-quantizing" AI models without introducing the rounding "drift" that previously plagued browser-based neural networks.

  • Uint8Array Base64/Hex: Simplifies moving raw model weights from the network directly into the engine without the overhead of custom encoders.

2. Shared Memory & Parallelism (Stage 2.7 / 3)

The biggest bottleneck for AI in JS is the "Main Thread" lockup. TC39 is currently solving this with a new memory model.

  • Shared Structs (Stage 2.7): These are the first JS objects with a fixed memory layout. Unlike a standard object {x: 1}, a Shared Struct cannot have properties added later. This allows multiple Web Workers (threads) to read and write to the same AI "state" simultaneously without the risk of "tearing" or slow data cloning.

  • Atomics.Mutex & Condition (Stage 2.7): To support Shared Structs, TC39 is adding low-level thread synchronization. AI developers use these to prevent "race conditions" when multiple threads update a shared memory buffer during model training or inference.

  • Immutable ArrayBuffers (Stage 3): This allows you to "lock" a 2GB model weight file in memory. Once locked, it can be shared with 10 different background threads with zero performance cost, making "multi-agent" browser apps significantly faster.

3. Streaming and Iteration Helpers (Baseline 2026)

AI in 2026 is all about Streaming Tokens. TC39's recent work makes handling these streams feel like standard array manipulation.

  • Iterator Helpers: Now "Baseline Widely Available." You can now use .map(), .filter(), and .take() directly on the iterators coming from LLMs.

    • Example: llmStream.take(50).toArray()—allowing you to stop an AI response exactly when you have enough data, saving GPU costs.

  • Iterator.concat: A new Stage 4 feature that lets you chain multiple AI responses together into a single continuous stream without complex "wrapper" functions.
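Where the helper methods have not shipped yet, a small generator reproduces .take() in userland (the token array stands in for an LLM stream):

```javascript
// A userland .take() for any sync iterable; Iterator helpers build this in.
function* take(iterable, n) {
  for (const item of iterable) {
    if (n-- <= 0) return; // stop consuming once we have enough
    yield item;
  }
}

const tokenStream = ["The", "answer", "is", "42", "..."]; // illustrative "LLM" output
const firstFour = [...take(tokenStream, 4)];
```

With native helpers the same intent reads as `stream.take(4).toArray()`, and the source iterator is closed early, which is what "saving GPU costs" refers to above.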

4. The Security Layer: ShadowRealms & Sandboxing

As AI agents start writing and executing code, security is the top priority for the committee.

  • ShadowRealms (Stage 3): This provides a "clean slate" execution environment. In 2026, developers use ShadowRealms to run code generated by an AI (like a data-viz script) in a sandbox where it cannot access the user's cookies, local storage, or sensitive DOM elements.

  • Explicit Resource Management (using): Now standard in ES2026. It ensures that when an AI agent finishes a task, it automatically releases its memory and GPU buffers. This prevents "AI memory bloat," where a background agent slowly eats up all the device's RAM.

The 2026 TC39 AI Roadmap

Feature Class | Key Proposal       | Status (Feb 2026) | AI Impact
Numeric       | Float16Array       | Stage 4           | 2x better memory efficiency for LLMs.
Concurrency   | Shared Structs     | Stage 2.7         | True multithreaded AI state.
Cleanup       | using keyword      | Stage 4           | Leak-proof AI agent workflows.
Streaming     | Async Iter Helpers | Stage 3           | Simplified "token-by-token" processing.
Memory        | Immutable Buffers  | Stage 3           | Zero-copy model sharing.

In 2026, the TC39 committee's focus has shifted from "web features" to "system-level capabilities." The most profound change is the introduction of Structs, which marks the first time JavaScript has moved away from its dynamic "everything is a baggy object" philosophy toward a rigid, high-performance memory model designed for AI.

1. The "Fixed Layout" Era: Structs & Shared Structs (Stage 2.7)

For decades, JS objects have been dynamic—you can add or delete properties at will. This flexibility is a nightmare for AI performance because the engine (V8, SpiderMonkey) has to constantly check the object's "shape."

  • Structs: These are "sealed" objects with a fixed layout defined at creation. Because the layout never changes, the JIT compiler can generate machine code that accesses data at precise memory offsets, similar to C++ or Rust.

  • Shared Structs: These go a step further. They can be shared across multiple Agents (threads). In 2026, this allows an AI "Coordinator" on the main thread to share a complex state object with "Worker" threads without any data copying.

2. Native Multi-Threading: Mutex & Condition (Stage 2.7)

To make Shared Structs useful, TC39 is adding standard synchronization primitives. Before this, JS developers had to use complex Atomics.wait on raw Int32Arrays.

  • Atomics.Mutex: Allows an AI agent to "lock" a piece of shared data, perform an update, and then unlock it, ensuring no other thread interferes.

  • Atomics.Condition: Allows threads to "sleep" until a specific condition is met (e.g., "Wait until the GPU has finished the next batch of tokens").

Example (2026 Syntax):

// Shared AI state across workers (proposed syntax; details may change)
shared struct AIState {
  status = "processing";
  tokenCount = 0;
}

const mutex = new Atomics.Mutex();

3. The "Zero-Copy" Pipeline: Immutable ArrayBuffers (Stage 3)

In 2026, AI models are often several gigabytes. Moving this data between the main thread and a worker used to require "Transferring," which makes the data inaccessible to the sender.

  • .transferToImmutable(): This new method allows you to turn a buffer of model weights into a read-only, immutable state.

  • The Benefit: Once immutable, the buffer can be mapped into every worker thread simultaneously. Every part of your AI app can "see" the model weights at the same time with zero memory overhead and zero copying.

4. Advanced Binary Interop: "Import Bytes" (Stage 2)

Previously, loading a .bin or .onnx model file required fetch() and then converting it to an ArrayBuffer.

  • import weights from "./model.bin" with { type: "bytes" };

  • This new proposal allows the JavaScript engine to load binary data as a Uint8Array during the module loading phase, making AI model initialization as simple as importing a library.

Comparison of the "Two JavaScripts" in 2026

The TC39 process is essentially splitting the language into two tiers:

Feature     | Standard JS (Web Apps) | High-Performance JS (AI/Games)
Object Type | class / {} (dynamic)   | struct (fixed/sealed)
Memory      | Garbage collected      | Shared & immutable buffers
Threading   | postMessage (cloning)  | Shared Structs (shared memory)
Math        | Number (Float64)       | Float16Array / sumPrecise

In 2026, the TC39 committee's focus has intensified on "The Architecture of Concurrency." As AI models transition from simple cloud APIs to locally-running agents, JavaScript is evolving to handle "True Parallelism"—a major departure from its single-threaded roots.

1. The Concurrency Breakthrough: Structs & Shared Structs (Stage 2.7)

Previously, the only way to do "parallel" work in JavaScript was via Web Workers, which required slow data "cloning." In 2026, Shared Structs have changed the game.

  • Fixed Memory Layout: Unlike standard objects, Structs have a fixed shape that cannot be changed after creation. This allows the JavaScript engine to access data at near-native speeds.

  • Shared Heap: Shared Structs exist in a separate memory heap that can be accessed by multiple threads (Workers) simultaneously. This is the foundation for building local AI "thinking" processes that don't block the UI thread.

2. Thread Safety: Mutex & Condition (Stage 2.7)

With multiple threads touching the same AI data, "race conditions" (where two threads try to update the same model weight at once) are a massive risk. TC39 has introduced low-level synchronization primitives:

  • Atomics.Mutex: Allows an AI agent to "lock" a struct. If another thread tries to access it, that thread will wait until the first one is done.

  • Atomics.Condition: Allows a thread to "sleep" and wait for a signal (e.g., "Wait until the next 10 tokens are processed before updating the UI").

3. The "Standard Library" of AI: Iterator Helpers (Stage 4)

Handling AI is mostly about handling Streams. In 2026, "Iterator Helpers" have reached full baseline support, allowing you to treat AI token streams like simple arrays.

// A typical 2026 AI streaming pattern (ai.stream is a hypothetical API
// that returns an async iterator of tokens)
const response = await ai.stream("Explain quantum physics");

const firstFiveWords = await response
  .filter(token => !token.isWhitespace)
  .take(5)
  .toArray();

4. Summary of the 2026 "AI-First" Spec

The 2026 ECMAScript specification is effectively splitting the language into two tiers:

Feature Category | Traditional Web Dev              | High-Performance AI
Object Model     | Flexible/dynamic objects         | Fixed-layout Structs
Data Types       | Float64 (Numbers)                | Float16Array (model weights)
Threading        | Single-threaded with postMessage | Shared memory with mutexes
Resource Mgmt    | Garbage collection (automatic)   | using (explicit cleanup)

Why this matters: In 2024, running a 7-billion parameter model in the browser was a "tech demo." In 2026, thanks to these TC39 infrastructure changes, it's a standard feature for most enterprise web applications.

The idea of JavaScript splitting into two languages stems from a 2024 proposal to the TC39 committee, not a finalized plan. The proposal suggests a division into: 

  • JS0 (or JS Zero): The core, stable, and secure language implemented directly by JavaScript engines in browsers and other runtimes (like Node.js).
  • JS Sugar: An extended syntax for developers, incorporating new and experimental features that would be compiled down to JS0 by build tools (such as Webpack or Babel). 

Rationale Behind the Proposal

The proposal, largely driven by a Google engineer, aims to address the tension between the need for rapid language evolution and the performance/security constraints of JavaScript engines. By offloading experimental features to the build tooling ecosystem: 

  • Browser engines could focus on security, stability, and core performance.
  • Developers could still use new syntactic sugar and features without waiting for universal engine implementation. 

Controversy and Current Status

The proposal has sparked significant debate within the developer community. Critics argue that introducing two separate "dialects" of JavaScript would add complexity, especially for newcomers, and fragment the existing ecosystem and learning resources. 

It is important to note that this was a proposal and not an official decision by TC39. TC39 (Ecma International's Technical Committee 39) is the committee responsible for evolving the ECMAScript standard (the official name for the core JavaScript language), and it follows a rigorous, multi-stage process for new features. The "two languages" concept remains a topic of discussion about the future direction of the language rather than a current reality. 

The discussion around "JavaScript becoming 2 languages" refers to a recent TC39 proposal primarily driven by Google engineers. The goal is to formally split how JavaScript is defined and executed to balance developer needs with browser performance. 

The Proposed Split

The proposal suggests dividing the language into two distinct tiers: 

  • JS0 (The Core): A "low-level" version of JavaScript. This is what browsers and engines (like V8 or SpiderMonkey) would actually implement. It focuses on high performance, security, and stability.
  • JS Sugar (The Syntax): A "high-level" version containing the fancy features and syntactic sugar developers love. This would not run directly in browsers; instead, it would be compiled down to JS0 by tools like Webpack, Babel, or Vite. 

Why Is This Happening?

  1. Engine Complexity: Modern JS engines are becoming massive and difficult to maintain because every new syntax feature adds significant complexity to the compiler and runtime.
  2. Performance & Security: By stripping the engine down to a core "JS0," browsers can focus on optimizing execution and closing security loopholes without constantly worrying about new syntax.
  3. Faster Evolution: If new features live in "JS Sugar," they can be added and standardized much faster since they only require updates to build tools, not every single browser engine. 

Current Status

  • It's a Proposal: This is not yet a reality. It was presented to TC39 (https://tc39.es/) in late 2024.
  • Controversial: Some developers argue it adds too much complexity, while others point out that most "modern" JavaScript is already being compiled (e.g., TypeScript or Babel), so this proposal just formalizes what we are already doing.
  • Specification: The official ECMAScript specification is maintained at https://tc39.es/ecma262/.


JavaScript has introduced significant features in its ECMAScript 2024 (ES2024/ES15) standard, with more changes expected in 2025 and 2026, focusing on improved functionality for asynchronous operations, data manipulation, and resource management. 

ECMAScript 2024 (ES2024)

The 2024 standard, approved in June 2024, includes several enhancements: 

  • Object.groupBy() and Map.groupBy(): New static methods for grouping elements in an iterable based on the result of a callback function, simplifying data categorization.
  • Promise.withResolvers(): A new static method that simplifies the creation of promises by returning the promise itself along with its resolve and reject functions, making it easier to manage complex asynchronous workflows.
  • Top-Level await: Allows the await keyword to be used at the top level of JavaScript modules, eliminating the need to wrap code in an async function for initial data fetching or resource loading. (Strictly speaking, this feature was standardized earlier, in ES2022, though it is often grouped with the more recent module improvements.)
  • RegExp v flag: An enhancement to regular expressions that enables more sophisticated set notation and improved Unicode property escapes, allowing for more precise pattern matching.
  • Well-formed Unicode strings: New methods String.prototype.isWellFormed() and String.prototype.toWellFormed() help check and correct malformed Unicode strings, ensuring consistent and reliable handling of international text data.
  • ArrayBuffer extensions: Extensions to ArrayBuffer and SharedArrayBuffer constructors to allow in-place resizing and transferring without data copying, improving memory efficiency when working with binary data. 

Expected Changes in 2025 and 2026

Several features are in various proposal stages and are expected to be included in future editions of ECMAScript: 

  • Temporal API: A new global object designed to provide a modern, comprehensive, and user-friendly API for managing dates and times, addressing many long-standing issues with the legacy Date object.
  • Decorators: A proposal to add native support for decorators to JavaScript classes and their elements, providing a standard way to add metadata or change the behavior of declarations.
  • Records and Tuples: Proposals for deeply immutable data structures, similar to objects and arrays but using a leading hash symbol (#), which would allow for reliable comparison by value using the === operator.
  • Explicit Resource Management: Introduction of using and await using keywords to automatically manage and dispose of resources (like file handles or database connections) when they go out of scope, similar to try...finally blocks but more concise.
  • New Set methods: Additions to the built-in Set class such as union(), intersection(), and difference() for more powerful set operations.
  • Iterator Helpers: New utility methods on iterators such as map(), filter(), take(), and drop() to make working with iterables more convenient. 

JavaScript is evolving rapidly from 2024 through 2026, focusing on built-in utilities that reduce the need for third-party libraries (like Moment.js or Lodash) and introducing native support for modern patterns like immutability and decorators. 

ECMAScript 2024 (ES15) – The "Now" Standard 

Approved in June 2024, these features are already widely available in modern browsers and Node.js.

  • Promise.withResolvers(): A new static method that returns a promise along with its resolve and reject functions, simplifying cases where you need to resolve a promise from outside its executor.
  • Object.groupBy() & Map.groupBy(): Native methods to group array elements into an object or map based on a callback function.
  • ArrayBuffer Enhancements: Added support for resizing and transferring ArrayBuffer objects in-place, improving memory efficiency.
  • Unicode Improvements: New methods like isWellFormed() and toWellFormed() for strings, plus the v flag for Regular Expressions, which enables advanced set operations (union/intersection). 

ECMAScript 2025 & 2026 (Upcoming)

Features targeting 2025–2026 focus on functional programming and "missing" standard library features. 

  • Iterator Helpers: Standardizes functional methods like .map(), .filter(), .take(), and .drop() directly on iterators, matching the convenience of arrays.
  • Temporal API: A modern, immutable date/time API designed to fully replace the broken Date object. It handles time zones, durations, and calendars natively.
  • Records and Tuples: Introduces deeply immutable objects (#{}) and arrays (#[]) that can be compared by value using ===, improving predictability and data safety.
  • Decorators: Native syntax for annotating and modifying classes and their members, bringing a feature long-used in TypeScript directly to JavaScript.
  • Pipeline Operator (|>): A proposed syntax to chain function calls in a more readable, linear fashion (e.g., value |> func1 |> func2). 
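
The pipeline operator is still an early-stage proposal, so no engine runs the |> syntax today; its linear style can be approximated with a small pipe() helper (a hypothetical utility for illustration, not part of the proposal):

```javascript
// pipe(value, f, g) reads like the proposed: value |> f |> g
const pipe = (value, ...fns) => fns.reduce((acc, fn) => fn(acc), value);

const double = x => x * 2;
const increment = x => x + 1;

console.log(pipe(5, double, increment)); // 11, i.e. increment(double(5))
```

The proposal's appeal is exactly this left-to-right reading order, compared to the inside-out nesting of increment(double(5)).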

Ecosystem Trends

  • Framework Evolution: React 19 (2024–25) introduces a new compiler to automate performance optimizations, while Next.js 15 pushes "Partial Prerendering".
  • The Rise of TypeScript: TS is now the "standard" for large-scale applications, with many new JS features (like Decorators) being shaped by their success in TypeScript.
  • SSR & Edge Dominance: Continued shift toward Server-Side Rendering (SSR) and Edge Computing (Cloudflare Workers, Vercel) to minimize client-side JavaScript payloads.

The static method Promise.withResolvers() is a recent addition to JavaScript (ECMAScript 2024) that simplifies the creation and control of promises by returning an object containing the promise instance and its associated resolve and reject functions.

Usage

The method returns an object with the following properties, which are typically destructured: 

  • promise: The newly created Promise object.
  • resolve: A function to fulfill (resolve) the promise with a value.
  • reject: A function to reject the promise with a given reason (error). 

This provides a cleaner alternative to the traditional new Promise() constructor pattern, where the resolve and reject functions were only available within the constructor's executor function. 

// Traditional approach (requires declaring variables in an outer scope)
let resolve, reject;
const promise = new Promise((res, rej) => {
  resolve = res;
  reject = rej;
});
// ... use resolve/reject later ...

// New Promise.withResolvers() approach
const { promise, resolve, reject } = Promise.withResolvers();
// ... use resolve/reject anywhere in the current scope ...

Key Benefits and Use Cases

  • Cleaner Syntax: It reduces boilerplate code and avoids the need for declaring let variables in an outer scope just to access the resolvers outside the promise executor.
  • Decoupled Logic: It separates the promise creation from the resolution logic, which is ideal for scenarios where the promise's fate is determined by external events or user interactions that occur later in the code.
  • Event-Based Systems: It is highly useful when integrating promises with event listeners (e.g., in Node.js streams or browser event handlers).
  • Complex Flows: It simplifies the management of complex asynchronous workflows, such as multi-step forms, real-time collaboration features, or game logic, by providing better control over promise state transitions. 

Browser and Node.js Support

Promise.withResolvers() is supported in all modern browsers and in Node.js version 22 or higher. For environments that do not support it, a simple polyfill (https://gist.github.com/lmammino/ef121da874a80d657379a1cd64bf8166) can be used.
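
A minimal polyfill sketch, assuming only the standard semantics (create a promise via the constructor the method was called on, and expose its two resolvers):

```javascript
if (typeof Promise.withResolvers !== "function") {
  Promise.withResolvers = function () {
    let resolve, reject;
    // `this` is the Promise constructor (or subclass) the method was called on,
    // so subclasses like a custom CancelablePromise keep working
    const promise = new this((res, rej) => {
      resolve = res;
      reject = rej;
    });
    return { promise, resolve, reject };
  };
}
```

This works because the executor passed to the Promise constructor runs synchronously, so resolve and reject are guaranteed to be assigned before the function returns.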

Promise.withResolvers() is a static method introduced in ECMAScript 2024 that simplifies creating and managing promises. It allows you to access a promise's resolve and reject functions directly, without needing to wrap your logic inside the traditional promise constructor's executor. 

The Main Difference

Feature     | Traditional Pattern (new Promise)                        | Modern Pattern (withResolvers)
Structure   | Logic must be inside the executor function.              | Resolvers are exposed in the current scope.
Boilerplate | Often requires defining "placeholder" variables outside. | One clean line of destructuring.
Readability | High indentation for complex logic.                      | Flat, linear code structure.

How to Use It

The method returns an object containing the promise, a resolve function, and a reject function. 

const { promise, resolve, reject } = Promise.withResolvers();

// You can now call resolve/reject from anywhere
setTimeout(() => {
  resolve("Success!");
}, 1000);

const result = await promise;
console.log(result); // "Success!"

Common Use Cases

  • Event Listeners: Managing a promise that resolves only when a specific UI event occurs (like a button click).
  • Streams: Coordinating data chunks from a stream that should resolve the final promise once the stream closes.
  • Avoiding "Leaky" Variables: You no longer need to declare let resolve; outside a promise constructor just to capture the function for later use. 

Availability & Support

  • Environments: Supported in Node.js 22.x+ (or v21.7.1 with a flag) and all modern browsers since approximately March 2024.

A polyfill in JavaScript is a piece of code that provides modern functionality to older web browsers or environments that do not natively support those features. The term has no specific connection to Artificial Intelligence (AI); it is a general web development concept used to ensure cross-browser compatibility. 

Purpose and Function

The primary purpose of a polyfill is to allow developers to use the latest JavaScript features and APIs (like Promise, Array.prototype.includes, or the fetch() API) without worrying about compatibility issues in outdated browsers. 

How a polyfill works:

  • Feature Detection: Before running its own code, a polyfill typically checks if the browser already supports the desired feature natively.
  • Fallback Implementation: If the feature is missing, the polyfill provides a custom JavaScript implementation to mimic the standard behavior as closely as possible, effectively "filling the gap" in functionality. 

For example, a polyfill for the Array.prototype.includes() method would look like this:

if (!Array.prototype.includes) {
  Array.prototype.includes = function(element, fromIndex) {
    // Custom implementation using only older, widely supported (ES5) syntax
    // Note: the real spec also handles NaN and negative fromIndex; omitted here
    var start = fromIndex || 0;
    for (var i = start; i < this.length; i++) {
      if (this[i] === element) {
        return true;
      }
    }
    return false;
  };
}

This ensures that the includes() method is available to all arrays, even in environments where it is not natively defined. 

Polyfills and AI

The term "AI" in your query likely refers to the use of AI/Machine Learning libraries within JavaScript. These libraries are typically used in modern environments. If an AI library relies on a very new or experimental web platform feature, a polyfill might be used to make that specific underlying feature work on a slightly older browser. However, polyfills themselves are a foundational web development tool, not an AI technology. 

Key Libraries and Tools

Developers often use established libraries to manage polyfills rather than writing them all from scratch. 

  • https://github.com/zloirock/core-js: A popular and comprehensive library of polyfills for a wide range of ECMAScript features.
  • https://cdnjs.cloudflare.com/polyfill: A secure service that automatically detects and serves the necessary polyfills based on the user's browser. This is a recommended alternative to the now compromised Polyfill.io service.

In the context of JavaScript and web development, a polyfill is a piece of code (typically a script) that provides modern functionality to older browsers or environments that do not natively support it. 

While the query mentions "AI," polyfills are a general JavaScript concept. However, they are increasingly relevant in AI-driven web development to ensure that modern AI/ML libraries (like TensorFlow.js) work across all devices. 

How Polyfills Work

A polyfill typically follows a three-step process:

  1. Feature Detection: The script checks if the browser already supports the feature (e.g., if (!Array.prototype.includes)).
  2. Fallback Implementation: If the feature is missing, the polyfill provides a custom implementation of that function using older, compatible JavaScript.
  3. Seamless Integration: The developer can then write code using the modern API without worrying about browser version differences. 

Key Differences: Polyfill vs. Transpiler

Feature   | Polyfill                                 | Transpiler (e.g., Babel)
Purpose   | Adds missing methods or APIs (functions) | Converts modern syntax (like =>) to older syntax
Execution | Occurs at runtime in the browser         | Occurs at build time (before deployment)
Example   | Adding Promise or fetch() support        | Changing const/let to var
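
A tiny illustration of the contrast (a sketch; the "transpiled" line is representative of what a tool like Babel might emit, not its literal output, and the fallback is deliberately naive):

```javascript
// Build-time: a transpiler rewrites SYNTAX before the code ships
const addModern = (a, b) => a + b;                 // modern source
var addLegacy = function (a, b) { return a + b; }; // possible transpiled form

console.log(addModern(2, 3) === addLegacy(2, 3)); // true

// Run-time: a polyfill patches a missing API onto the environment.
// Math.sumPrecise is a real TC39 proposal; this naive fallback just sums,
// whereas the real method sums without floating-point rounding error.
if (typeof Math.sumPrecise !== "function") {
  Math.sumPrecise = function (values) {
    var total = 0;
    for (var v of values) total += v;
    return total;
  };
}
console.log(Math.sumPrecise([1, 2, 3])); // 6
```

In short: the transpiler changes what the file looks like before it ships, while the polyfill changes what the environment can do after it loads.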

Common Examples

  • Methods: Array.prototype.map, Array.prototype.filter, Object.assign.
  • APIs: fetch(), Promise.all(), IntersectionObserver.
  • Libraries: The most modular and popular standard library for polyfills is https://github.com/zloirock/core-js.

Why is this relevant to AI?

Modern web-based AI tools often rely on the latest browser APIs for tasks like GPU acceleration or asynchronous data processing (e.g., WebGPU or WebAssembly). If an older browser lacks these "cutting-edge" features, a developer might use a polyfill to provide a slower, CPU-based fallback so the AI application doesn't crash.