-
Mar 19, 2026 · 10 min read · Technical Blog Post · Lossfunk Letters
We present EsoLang-Bench, a benchmark built on esoteric programming languages, for which training data is virtually nonexistent. Five frontier models that score 85-95% on standard benchmarks reached at most 11.2% on EsoLang-Bench, with most below 5%. All models scored exactly 0% beyond the "Easy" difficulty tier, a uniform failure that suggests fundamental limitations rather than gradual degradation.
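For context on why training data is so scarce here: Brainfuck is a classic example of the genre (the summary does not name which languages the benchmark actually uses, so treat this as illustrative only). A minimal Python interpreter sketch shows how far such a language sits from mainstream code corpora:

```python
def run_bf(code: str, tape_len: int = 30_000) -> str:
    """Minimal Brainfuck interpreter (no ',' input support), for illustration."""
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    # Precompute matching-bracket positions for the two loop instructions.
    stack, jumps = [], {}
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == ">":
            ptr += 1
        elif c == "<":
            ptr -= 1
        elif c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".":
            out.append(chr(tape[ptr]))
        elif c == "[" and tape[ptr] == 0:
            pc = jumps[pc]  # skip the loop body when the current cell is zero
        elif c == "]" and tape[ptr] != 0:
            pc = jumps[pc]  # jump back to the loop start while nonzero
        pc += 1
    return "".join(out)

# "++++++++[>++++++++<-]>+." builds 8 * 8 + 1 = 65 in a cell and prints "A".
print(run_bf("++++++++[>++++++++<-]>+."))
```

Eight single-character instructions, no identifiers, no keywords: very little of what a model absorbs from mainstream repositories transfers directly.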
-
Technical Blog Post
A deep dive into the Engram module, exploring its architecture, memory mechanisms, and the design principles behind continual learning in neural networks.
-
Technical Blog Post
A technical walkthrough of importance sampling, covering how to estimate expectations under one distribution using samples from another, and why it matters for reinforcement learning and probabilistic inference.
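The identity the whole technique rests on is E_p[f(x)] = E_q[f(x) · p(x)/q(x)]: draw samples from a proposal q you can actually sample, then reweight them by the density ratio. A minimal NumPy sketch (the specific distributions and test function here are placeholders, not taken from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target p: standard normal N(0, 1). Proposal q: N(2, 1.5), easy to sample.
# We estimate E_p[f(x)] for f(x) = x**2 using samples drawn from q only.
def log_p(x):
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_q(x):
    return -0.5 * ((x - 2.0) / 1.5) ** 2 - np.log(1.5) - 0.5 * np.log(2 * np.pi)

x = rng.normal(2.0, 1.5, size=100_000)  # samples from the proposal q
w = np.exp(log_p(x) - log_q(x))         # importance weights p(x)/q(x)
print(np.mean(w * x**2))                # ~1.0, the true E_p[x^2] for N(0, 1)
```

The same ratio-reweighting is what appears in off-policy RL, with q as the behavior policy and p as the policy being evaluated.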
-
Nov 6, 2025 · 8 min read · Technical Blog Post · Lossfunk Letters
Sequential reasoning beats parallel sampling in 95.6% of configurations at matched compute, with accuracy gains of up to 46.7%. On AIME-2025 with Qwen3-235B: 76.7% versus 30.0% for parallel. We introduce inverse-entropy weighted voting, a training-free aggregation method that achieved the best performance in 97% of sequential runs.
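The summary names the method but not its formula, so the sketch below is one plausible reading, with every detail (the per-chain entropy input, the 1/H weighting, the epsilon) an assumption rather than the post's actual recipe: each reasoning chain votes for its final answer with weight inversely proportional to its entropy, so confident chains count for more.

```python
from collections import defaultdict

def inverse_entropy_vote(answers, entropies, eps=1e-8):
    """Weighted majority vote: each chain's vote counts as 1 / entropy.

    answers:   final answer extracted from each reasoning chain
    entropies: average token entropy of each chain (lower = more confident)
    """
    scores = defaultdict(float)
    for ans, h in zip(answers, entropies):
        scores[ans] += 1.0 / (h + eps)  # low-entropy chains get large weights
    return max(scores, key=scores.get)

# A high-entropy (unconfident) dissenting chain barely moves the tally.
print(inverse_entropy_vote(["42", "42", "41", "42"], [0.5, 0.7, 2.0, 0.6]))
# -> "42"
```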
-
Oct 29, 2025 · 6 min read · Technical Blog Post · Lossfunk Letters
We demonstrate that post-trained models can recognize correct solutions through output entropy analysis. Sequence-level entropy cleanly separates correct from incorrect reasoning, but only in reward-trained models, not instruction-tuned ones. This enables 25-50% token reduction without sacrificing accuracy.
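"Sequence-level entropy" has a standard reading; as a sketch (assuming access to the per-step logits, and noting the post may define or normalize it differently), the mean per-token entropy of a generation looks like this in PyTorch:

```python
import torch
import torch.nn.functional as F

def sequence_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Mean per-token entropy over a generated sequence.

    logits: (seq_len, vocab_size) model outputs at each decoding step.
    Lower values mean the model was consistently confident.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    token_entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # (seq_len,)
    return token_entropy.mean()

# Toy check: peaked logits -> low entropy; flat logits -> entropy near ln(100).
peaked = torch.zeros(5, 100)
peaked[:, 0] = 10.0
flat = torch.zeros(5, 100)
print(sequence_entropy(peaked).item(), sequence_entropy(flat).item())
```

Thresholding a score like this is one way the reported 25-50% token reduction could be realized: accept a solution early once its entropy is low enough.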
-
Technical Blog Post · Medium
An introduction to Flux.jl, Julia's machine learning library, covering how to build custom neural network architectures from scratch with its clean, composable API.