[Deep Dive] The NAND Gate of Mathematics: One Operator to Rule Them All

A physicist proved a single operator eml(x,y) = exp(x) − ln(y) generates every elementary function. We analyze the result and what it means for AI-based materials simulation.

📄 Paper Reference
Andrzej Odrzywołek, "All elementary functions from a single binary operator," arXiv:2603.21852 [cs.SC], Jagiellonian University, March 2026 (pre-peer-review, v2 April 4 2026)

The NAND Gate of Mathematics

Every computer science student encounters a moment of quiet astonishment when they learn that a single logic gate — NAND — can build every Boolean function. AND, OR, NOT, XOR, every circuit in every processor: all reducible to one primitive operation. It's a foundational result in digital logic, elegant in its economy.

A paper by Andrzej Odrzywołek at Jagiellonian University now claims an analogous result for continuous mathematics. A single binary function, combined with a single constant, can generate every standard elementary function — exponentials, logarithms, trigonometric functions, their inverses, hyperbolic variants, arithmetic, roots, and the fundamental constants of mathematics. The function is disarmingly simple: eml(x, y) = exp(x) − ln(y). The paper, posted to arXiv in March 2026 and revised in early April, methodically constructs over twenty elementary functions from nested applications of this one operator. If the construction holds, it represents a striking new completeness result — a NAND gate for the world of analysis.

What Is eml(x,y) = exp(x) − ln(y)?

The function takes two inputs. It exponentiates the first, takes the natural logarithm of the second, and subtracts. That's it. The name "eml" is a compact abbreviation: exp minus ln. The system starts with this binary operator and a single constant: 1.

Let's trace how the construction begins. The first move is to derive the constant e:

eml(1, 1) = exp(1) − ln(1) = e − 0 = e

Now we have e in our vocabulary. Next, we can obtain exp(e) and build from there. But the critical early milestone is recovering the natural logarithm and exponential as unary functions from this binary operator. Observe:

eml(x, 1) = exp(x) − ln(1) = exp(x)

eml(0, y) = exp(0) − ln(y) = 1 − ln(y)

The first gives us exp directly. The second isn't quite ln — it's 1 − ln(y). But once we can negate and shift by constants, ln becomes accessible. Negation itself is one of the elegant early constructions: since exp(0) = 1 and ln(exp(x)) = x, we have eml(0, exp(x)) = 1 − x. Applying this map twice returns x, recovering the identity, and combining it with the available constants then yields −x.
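These first few moves are easy to check numerically. The sketch below defines the operator with Python's `cmath` (the principal branch of the complex log, as the paper requires) and verifies the seed constructions; note it uses 0 as an input for convenience, a constant the paper derives rather than assumes:

```python
import cmath

def eml(x, y):
    """The paper's binary operator: exp(x) - ln(y), principal branch of log."""
    return cmath.exp(x) - cmath.log(y)

def exp_of(x):
    return eml(x, 1)        # ln(1) = 0, so this is exp(x)

def one_minus_ln(y):
    return eml(0, y)        # exp(0) = 1, so this is 1 - ln(y)

e = eml(1, 1)               # exp(1) - ln(1) = e, the first derived constant
assert abs(e - cmath.e) < 1e-12
assert abs(exp_of(2) - cmath.exp(2)) < 1e-12
assert abs(one_minus_ln(e)) < 1e-12   # 1 - ln(e) = 0
```

This is a numerical illustration of the closure, not the paper's formal derivation chain.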

With exp, ln, negation, and the constant e, arithmetic unfolds naturally. Addition comes from exp and ln: since ln(exp(a) · exp(b)) = a + b, addition reduces to a multiplication in the exponential domain, which is in turn built from repeated eml applications. Multiplication runs the same bridge in reverse, via the identity exp(ln(a) + ln(b)) = a · b.
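The two bridge identities are worth seeing side by side. This tiny check uses plain `math` calls to stand in for the eml-built exp and ln (the paper's actual constructions are deep eml nestings):

```python
import math

a, b = 1.3, 2.7

# Addition recovered from multiplication in the exponential domain:
add = math.log(math.exp(a) * math.exp(b))
assert math.isclose(add, a + b)

# Multiplication recovered from addition in the logarithmic domain:
mul = math.exp(math.log(a) + math.log(b))
assert math.isclose(mul, a * b)
```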

Where things become genuinely surprising is trigonometry. The paper uses Euler's formula: exp(ix) = cos(x) + i·sin(x). To access this, Odrzywołek constructs the imaginary unit i through the complex logarithm: ln(−1) = iπ. Since −1 is constructible from negation and the constant 1, and ln is available, we get iπ. Dividing out π (itself extractable within the system) eventually isolates i. From there, sin(x) = (exp(ix) − exp(−ix)) / (2i) is a composition of already-available operations. It's intricate, but every step is a finite nesting of eml.
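The Euler route can be traced numerically. The sketch below takes a shortcut the paper does not need — dividing ln(−1) by a known π rather than extracting π inside the system — but it shows the shape of the construction and the fact that the intermediates are complex while the result is real:

```python
import cmath
import math

i_pi = cmath.log(-1)      # principal branch: ln(-1) = i*pi
i = i_pi / math.pi        # shortcut; the paper isolates pi within the system

def sin_via_euler(x):
    """sin(x) = (exp(ix) - exp(-ix)) / (2i), built from exp and the derived i."""
    return (cmath.exp(i * x) - cmath.exp(-i * x)) / (2 * i)

val = sin_via_euler(0.5)
assert abs(val.real - math.sin(0.5)) < 1e-12
assert abs(val.imag) < 1e-12   # complex intermediates, real answer
```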

The Proof Strategy: How Do You Show Something Is Universal?

The approach is constructive. Odrzywołek doesn't prove universality through abstract argument — he builds each function explicitly. The paper proceeds in layers: first constants (e, 0, integers, rationals, π, i), then unary functions (exp, ln, negation, reciprocal), then arithmetic (addition, multiplication, integer powers, rational powers), and finally transcendentals (trigonometric, inverse trigonometric, hyperbolic functions and their inverses).

This layered construction is necessary because the claim is strong. Many candidate binary functions fail at some stage. A function like f(x, y) = x + y gives you addition but no pathway to transcendentals. f(x, y) = x^y is powerful, but it struggles with negative numbers and branch cuts, and it does not provide clean logarithmic access. The eml function works precisely because it couples the exponential and logarithmic worlds in a single expression — it bridges the additive and multiplicative structures of mathematics simultaneously. The subtraction between them creates an asymmetry that turns out to be productive: you can isolate either component by fixing the other input.

The paper also addresses a natural objection: isn't this circular? You're defining a function in terms of exp and ln, then "deriving" them. The answer is that the starting point is the abstract binary operator and the constant 1. The question is whether closure under composition of this single operator, seeded with one constant, yields all elementary functions. It does, and the derivations are the proof.

The Complex Domain Catch

Here is where intellectual honesty demands a caveat. The construction of trigonometric functions, π, and i all require the complex logarithm — specifically, the principal branch where ln(−1) = iπ. This means that even if your end goal is computing sin(0.5), a purely real quantity, the intermediate eml expressions pass through complex numbers.

This is not a flaw in the mathematics, but it is a constraint on interpretation. The eml operator is "universal" over the elementary functions only when its domain and range include the complex numbers ℂ. If you restrict to the reals ℝ, you lose the ability to construct trigonometric functions — the bridge through exp(ix) is severed. Odrzywołek is transparent about this: the principal branch of the complex logarithm is a formal requirement of the system.

In practice, this mirrors how complex numbers already permeate real analysis. The fundamental theorem of algebra, Fourier analysis, and quantum mechanics all route through ℂ to obtain real results. The eml construction simply makes this dependency explicit at the operator level.

Why This Matters for AI and Computation

The most immediate application space is symbolic regression — the machine learning task of discovering mathematical expressions that fit data. Current systems search over a grammar of elementary functions: +, ×, exp, ln, sin, and so forth. If a single binary operator suffices, the search grammar shrinks dramatically. Instead of choosing among a dozen operations at each node of an expression tree, the system composes one. This doesn't necessarily reduce computational complexity, but it transforms the search space geometry in ways that could benefit certain optimization strategies.
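The single-operator search space is easy to sketch: starting from the seed constant 1, every expression at the next depth is eml of two already-built expressions. The toy enumeration below (my illustration, not from the paper) closes the set {1} under two rounds of eml nesting, with guards against log(0) and exp overflow:

```python
import cmath

def eml(x, y):
    """exp(x) - ln(y), principal branch."""
    return cmath.exp(x) - cmath.log(y)

# Closure of the seed {1} under eml, truncated to two rounds of nesting.
reachable = {complex(1)}
for _ in range(2):
    frontier = set()
    for x in reachable:
        for y in reachable:
            if abs(y) > 1e-12 and abs(x.real) < 50:   # skip log(0) / overflow
                frontier.add(eml(x, y))
    reachable |= frontier

# e = eml(1, 1) appears after the first round.
assert any(abs(v - cmath.e) < 1e-9 for v in reachable)
print(len(reachable))   # distinct values two rounds generate
```

In a symbolic-regression setting, each expression tree node is the same operator — the search chooses only the wiring, not the operation.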

For neural network architecture, the result suggests a minimal activation paradigm. Instead of networks with heterogeneous activation functions — ReLU here, sigmoid there, sinusoidal elsewhere — a network could theoretically use a single eml-structured bivariate neuron. Whether this is practical depends on trainability and depth requirements, but it provides a theoretical floor for architectural minimalism.
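A minimal sketch of such a bivariate eml neuron is below. Everything here is a hypothetical design of mine, not from the paper: the clip and floor keep the real-valued activation numerically safe, a concession the paper's complex-valued operator does not need, and the two weight matrices `W_x`, `W_y` are arbitrary illustrative projections:

```python
import numpy as np

def eml_act(x, y, eps=1e-6, cap=20.0):
    """Real-valued eml-style activation (hypothetical design).
    Clipping bounds exp; the eps floor keeps log's argument positive."""
    return np.exp(np.clip(x, -cap, cap)) - np.log(np.maximum(y, eps))

# One "layer": two linear projections of the same input feed the two slots.
rng = np.random.default_rng(0)
W_x = rng.normal(size=(4, 8))
W_y = rng.normal(size=(4, 8))
h = rng.normal(size=8)

out = eml_act(W_x @ h, W_y @ h)   # shape (4,)
assert out.shape == (4,) and np.isfinite(out).all()
```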

In computer algebra systems, a single-operator representation could simplify internal canonical forms. And for automated theorem proving in analysis, reducing the function vocabulary means reducing the space of rewrite rules — potentially making proofs in real and complex analysis more tractable for formal verification systems.

What Comes Next?

Several questions emerge naturally. First: efficiency. How deep must the nesting go? Computing sin(x) via eml requires many layers of composition. Is there a meaningful notion of circuit depth or Kolmogorov complexity for eml-expressions, and how does it compare to standard representations? Odrzywołek provides constructions but does not optimize them.

Second: uniqueness. Is eml the simplest such operator? Are there other binary functions with a single seed constant that achieve universality? The paper suggests this territory is largely unexplored. A systematic classification of "functionally complete" operators over elementary functions — analogous to Post's lattice for Boolean functions — would be a significant contribution.

Third: information-theoretic connections. The fact that one operator and one constant suffice to describe all elementary functions invites questions about the information content of mathematical operations. Is there a deep reason that exp and ln — the core of eml — are the right primitives, or would other pairings work equally well?

Finally: implementation. No one has, to this author's knowledge, built a working computer algebra system or neural architecture on eml-primitives. Doing so would be the real test — not of the mathematics, which appears sound, but of whether theoretical minimalism translates to practical utility. Sometimes a NAND gate is beautiful precisely because you don't build everything from it. But knowing that you can changes how you think about what's fundamental.


🔬 EML Operator: Can We Apply This to Our Simulations?

The EML operator defined as eml(x, y) = exp(x) - ln(y) (arXiv:2603.21852) presents an intriguing question: does it map onto the mathematical structures already living inside our computational materials pipelines? The short answer is yes, structurally — and not yet, practically.

1. McMillan/Allen-Dynes Already Speaks EML

Consider the core of our Tc estimator:

Tc = (ωlog / 1.2) × exp(−1.04(1+λ) / (λ − μ*(1+0.62λ)))

This is literally a prefactor times an exponential of a rational function. If we set x = −1.04(1+λ)/(λ − μ*(1+0.62λ)) and note that ωlog often enters our code via ln(ω) averages over the Eliashberg spectral function — i.e., ωlog = exp(⟨ln(ω)⟩) — then Tc is naturally expressible as compositions of exp() and ln(). The EML operator captures exactly this dual structure: an exponential activation competing against a logarithmic damping. The BCS gap equation, Nernst equation (E = E⁰ − (RT/nF)ln(Q)), and van't Hoff relation (ln(K) = −ΔH/RT + ΔS/R) all share this exp/ln duality.
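The structure described above can be written out directly. The sketch below computes ωlog as exp of the α²F(ω)/ω-weighted average of ln(ω) and feeds it to the McMillan/Allen-Dynes formula; the spectral function and grid are toy illustrative values, not our production inputs:

```python
import numpy as np

def allen_dynes_tc(lam, mu_star, omega_log):
    """McMillan/Allen-Dynes Tc, in the same units as omega_log."""
    return (omega_log / 1.2) * np.exp(
        -1.04 * (1 + lam) / (lam - mu_star * (1 + 0.62 * lam))
    )

# omega_log = exp(<ln omega>), weighted by a2F(omega)/omega.
omega = np.linspace(1.0, 50.0, 500)            # meV grid (hypothetical)
a2F = np.exp(-((omega - 20.0) / 8.0) ** 2)     # toy Gaussian spectral function
w = a2F / omega
omega_log = np.exp(np.sum(w * np.log(omega)) / np.sum(w))

tc = allen_dynes_tc(lam=1.0, mu_star=0.13, omega_log=omega_log)
assert 0.0 < tc < omega_log   # the exponential suppression keeps Tc well below omega_log
```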

2. Does This Simplify Our Python Code?

Honestly, probably not in any meaningful way. Wrapping np.exp(x) - np.log(y) into an eml() helper function adds a layer of abstraction without reducing computational cost or improving numerical stability. Our existing implementations are already vectorized and clear. Replacing explicit calls with eml() would obscure domain-specific meaning — a reviewer reading eml(coupling_term, frequency_avg) loses the physics that exp(-1.04*(1+lam)/...) immediately communicates.

3. The Interesting Angle: Physics-Aware Neural Activation

Where EML becomes genuinely compelling is as a neural network activation function for surrogate models learning Tc from composition. Standard activations (ReLU, GELU) encode no physical prior. But if we know the target function lives in exp/ln space — as McMillan guarantees — then an EML-based activation σ(x, y) = exp(x) - ln(y) bakes in the correct functional form as an inductive bias. For our planned graph neural network surrogate models predicting superconducting properties from crystal graphs, this could accelerate convergence and improve extrapolation to unseen chemistries.
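The representability claim can be made concrete: since Tc = exp(x + ln(ωlog/1.2)) for the McMillan exponent x, a single eml unit whose first slot receives that linear combination of log-space features, with the second slot pinned to 1, reproduces McMillan exactly. The sketch below hand-sets the pre-activation rather than learning it — a proof of functional form, not a trained model:

```python
import numpy as np

def eml(x, y):
    return np.exp(x) - np.log(y)

def mcmillan_tc(lam, mu_star, omega_log):
    return (omega_log / 1.2) * np.exp(
        -1.04 * (1 + lam) / (lam - mu_star * (1 + 0.62 * lam))
    )

lam, mu_star, omega_log = 1.0, 0.13, 300.0   # illustrative values

# One eml "neuron": u combines the coupling exponent with ln(omega_log / 1.2);
# y = 1 zeroes the log slot, leaving pure exp -- exactly McMillan's form.
u = -1.04 * (1 + lam) / (lam - mu_star * (1 + 0.62 * lam)) + np.log(omega_log / 1.2)
tc_eml = eml(u, 1.0)

assert np.isclose(tc_eml, mcmillan_tc(lam, mu_star, omega_log))
```

A trained network would learn `u` from composition features; the point is that the target lives inside the activation's expressible family.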

4. Verdict

The theoretical fit is elegant and real — our core equations are native inhabitants of EML space. The practical benefit today is negligible for direct simulation code. The future-facing value is genuine: as we move toward ML-accelerated screening, encoding exp/ln physics into model architecture via EML-type operators is a principled design choice worth prototyping.


By Lucas Oriens Kim