<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"><channel><title>Aseem Raj Baranwal</title><description>Machine Learning Researcher in Quantitative Finance</description><link>https://aseemrb.me/</link><item><title>Heisenberg uncertainty as Fourier duality</title><link>https://aseemrb.me/blog/heisenberg-uncertainty-as-fourier-duality/</link><guid isPermaLink="true">https://aseemrb.me/blog/heisenberg-uncertainty-as-fourier-duality/</guid><description>The textbook framing makes the uncertainty principle sound like a measurement artifact. It isn&apos;t. It is a property of any wave: a packet narrow in x has a Fourier decomposition spread in p, with the bound σx · σp ≥ ℏ/2 saturated by a Gaussian. This post derives that statement from photoelectric-effect-level physics.</description><pubDate>Wed, 29 Apr 2026 00:22:00 GMT</pubDate></item><item><title>Voronoi tessellations and Lloyd&apos;s algorithm</title><link>https://aseemrb.me/blog/voronoi-and-lloyds-algorithm/</link><guid isPermaLink="true">https://aseemrb.me/blog/voronoi-and-lloyds-algorithm/</guid><description>A set of generators in the plane partitions it into regions, each closer to one generator than to any other. Lloyd&apos;s algorithm iterates &quot;move each generator to the centroid of its region&quot; and converges to a centroidal Voronoi tessellation. The same algorithm is k-means.</description><pubDate>Sun, 26 Apr 2026 15:22:00 GMT</pubDate></item><item><title>Pólya&apos;s recurrence theorem</title><link>https://aseemrb.me/blog/polyas-recurrence/</link><guid isPermaLink="true">https://aseemrb.me/blog/polyas-recurrence/</guid><description>Simple random walk on the integer lattice returns to the origin with probability one in 1D and 2D. In 3D and higher, there is a positive probability of never returning. 
The transition is exact, dimension-dependent, and reduces to convergence of a single harmonic-style series.</description><pubDate>Sat, 25 Apr 2026 15:22:00 GMT</pubDate></item><item><title>High-dimensional Gaussians live on a sphere</title><link>https://aseemrb.me/blog/high-dimensional-gaussians-on-a-sphere/</link><guid isPermaLink="true">https://aseemrb.me/blog/high-dimensional-gaussians-on-a-sphere/</guid><description>The bell-curve picture says Gaussian samples live near the mean. In high dimensions that picture is catastrophically wrong: almost all the mass lies in a thin spherical shell at radius √d. Density and mass are not the same thing.</description><pubDate>Sat, 18 Apr 2026 15:22:00 GMT</pubDate></item><item><title>Central Limit Theorem - why sums become Gaussian</title><link>https://aseemrb.me/blog/why-sums-become-gaussian/</link><guid isPermaLink="true">https://aseemrb.me/blog/why-sums-become-gaussian/</guid><description>A geometric look at the central limit theorem. Adding random variables is convolving their densities. Convolution smooths. Watch a Bernoulli, a die roll, or a bimodal distribution become Gaussian as you slide the number of summands.</description><pubDate>Sun, 12 Apr 2026 15:22:00 GMT</pubDate></item><item><title>Why hypercubes look spiky</title><link>https://aseemrb.me/blog/why-hypercubes-look-spiky/</link><guid isPermaLink="true">https://aseemrb.me/blog/why-hypercubes-look-spiky/</guid><description>Counting the corners of an n-dimensional cube by Hamming weight gives a binomial distribution. 
Plot it as a vertical cross-section and you recover the spiky cube shape that high-dimensional textbooks love to draw.</description><pubDate>Sat, 11 Apr 2026 15:22:00 GMT</pubDate></item><item><title>The 100 prisoners problem</title><link>https://aseemrb.me/blog/100-prisoners-problem/</link><guid isPermaLink="true">https://aseemrb.me/blog/100-prisoners-problem/</guid><description>100 prisoners must each find their own number among 100 randomly filled boxes, opening at most 50 each. Random guessing succeeds with probability one in a nonillion. A particular cycle-following strategy succeeds about 31% of the time. The reason is the cycle structure of a random permutation.</description><pubDate>Sun, 12 Oct 2025 21:22:00 GMT</pubDate></item><item><title>Optimal message passing on sparse graphs</title><link>https://aseemrb.me/blog/optimal-message-passing-sparse-graphs/</link><guid isPermaLink="true">https://aseemrb.me/blog/optimal-message-passing-sparse-graphs/</guid><description>A condensed walkthrough of our NeurIPS 2023 paper deriving the asymptotically Bayes-optimal classifier for node classification on sparse contextual stochastic block models, and what it implies for the design of graph neural networks.</description><pubDate>Thu, 16 Jan 2025 15:22:00 GMT</pubDate></item><item><title>Marchenko-Pastur and the Wigner semicircle</title><link>https://aseemrb.me/blog/marchenko-pastur-and-wigner-semicircle/</link><guid isPermaLink="true">https://aseemrb.me/blog/marchenko-pastur-and-wigner-semicircle/</guid><description>The eigenvalues of a large random matrix do not scatter around. They concentrate, as a histogram, on a deterministic shape. For sample covariance matrices the shape is Marchenko-Pastur; for symmetric matrices with i.i.d. entries it is the Wigner semicircle. 
Both shapes are computable, and they explain precisely why high-dimensional covariance estimation is biased.</description><pubDate>Thu, 25 Apr 2024 15:22:00 GMT</pubDate></item><item><title>Stein&apos;s paradox</title><link>https://aseemrb.me/blog/steins-paradox/</link><guid isPermaLink="true">https://aseemrb.me/blog/steins-paradox/</guid><description>In three or more dimensions, the sample mean is dominated everywhere by a shrinkage estimator. The geometric reason is the Gaussian shell: noise pushes you outward, and pulling back is uniformly better. A precursor of ridge regression and most modern regularization.</description><pubDate>Fri, 19 Apr 2024 15:22:00 GMT</pubDate></item><item><title>The Newton fractal</title><link>https://aseemrb.me/blog/newton-fractal/</link><guid isPermaLink="true">https://aseemrb.me/blog/newton-fractal/</guid><description>Newton&apos;s method for finding roots of polynomials is a discrete dynamical system on the complex plane. Each starting point converges (almost always) to one of the roots. The basins of attraction have fractal boundaries, intricate to a degree the algebra of the polynomial gives no hint of.</description><pubDate>Tue, 07 Nov 2023 15:22:00 GMT</pubDate></item><item><title>Nearest neighbor breaks in high dimensions</title><link>https://aseemrb.me/blog/nearest-neighbor-breaks-in-high-dimensions/</link><guid isPermaLink="true">https://aseemrb.me/blog/nearest-neighbor-breaks-in-high-dimensions/</guid><description>In high dimensions, all pairwise distances become essentially equal. Nearest and farthest neighbor are no longer meaningfully different. 
A short geometric tour of the curse of dimensionality.</description><pubDate>Sat, 22 Jul 2023 10:00:00 GMT</pubDate></item><item><title>Effects of graph convolutions in multi-layer networks</title><link>https://aseemrb.me/blog/effects-of-graph-convolutions-in-multi-layer-networks/</link><guid isPermaLink="true">https://aseemrb.me/blog/effects-of-graph-convolutions-in-multi-layer-networks/</guid><description>A walkthrough of our ICLR 2023 paper on how graph convolutions provably lower the feature-signal threshold for node classification in contextual stochastic block models, and why two convolutions help much more than one.</description><pubDate>Tue, 07 Feb 2023 15:22:00 GMT</pubDate></item><item><title>Fast and online palindrome counting</title><link>https://aseemrb.me/blog/count-palindromes/</link><guid isPermaLink="true">https://aseemrb.me/blog/count-palindromes/</guid><description>An exploration of an efficient algorithm for online palindrome counting using a palindrome tree data structure. Based on the work of Rubinchik and Shur, this post details the problem, the data structure, and the implementation.</description><pubDate>Sat, 20 Jul 2019 15:22:00 GMT</pubDate></item><item><title>Lunch with Donald Knuth</title><link>https://aseemrb.me/blog/lunch-with-knuth/</link><guid isPermaLink="true">https://aseemrb.me/blog/lunch-with-knuth/</guid><description>Reflections on a lunch meeting with Donald Knuth. Covers his thoughts on P vs NP, advice on life and curiosity, and his recent mathematical interests in families of sets.</description><pubDate>Wed, 31 Oct 2018 15:22:00 GMT</pubDate></item><item><title>An analogy for the Doppler effect</title><link>https://aseemrb.me/blog/doppler-effect/</link><guid isPermaLink="true">https://aseemrb.me/blog/doppler-effect/</guid><description>A simple, equation-free analogy to explain the Doppler effect using the concept of throwing balls between two people. 
Designed to make the physics concept intuitive and accessible to a layperson.</description><pubDate>Tue, 29 May 2018 15:22:00 GMT</pubDate></item><item><title>St. Petersburg paradox</title><link>https://aseemrb.me/blog/st-petersburg-paradox/</link><guid isPermaLink="true">https://aseemrb.me/blog/st-petersburg-paradox/</guid><description>An analysis of the St. Petersburg paradox, where the expected winning value is infinite. Discusses the conflict between mathematical expectation and intuition, and resolves it using practical constraints.</description><pubDate>Sat, 26 Aug 2017 15:22:00 GMT</pubDate></item><item><title>Common join algorithms</title><link>https://aseemrb.me/blog/common-join-algorithms/</link><guid isPermaLink="true">https://aseemrb.me/blog/common-join-algorithms/</guid><description>An overview of common join algorithms used in database systems, including Nested Loop Join, Hash Join, and Sort-Merge Join. Explains the logic, implementation details, and time complexities of each.</description><pubDate>Sat, 18 Mar 2017 15:22:00 GMT</pubDate></item><item><title>Implementing PEGASOS</title><link>https://aseemrb.me/blog/pegasos-stochastic-grad-solver/</link><guid isPermaLink="true">https://aseemrb.me/blog/pegasos-stochastic-grad-solver/</guid><description>A detailed guide on implementing PEGASOS (Primal Estimated sub-GrAdient SOlver for SVM). Explains the mathematical derivation, the stochastic gradient descent approach, and the algorithm&apos;s steps.</description><pubDate>Fri, 17 Jun 2016 15:22:00 GMT</pubDate></item><item><title>Weird but awesome Javascript</title><link>https://aseemrb.me/blog/weird-awesome-javascript/</link><guid isPermaLink="true">https://aseemrb.me/blog/weird-awesome-javascript/</guid><description>A deep dive into the JavaScript runtime environment.
Explains the single-threaded nature, the call stack, the event loop, the callback queue, and how they work together to handle asynchronous operations.</description><pubDate>Thu, 26 May 2016 15:22:00 GMT</pubDate></item><item><title>Some more #P complete problems</title><link>https://aseemrb.me/blog/sharp-p-complete-problems/</link><guid isPermaLink="true">https://aseemrb.me/blog/sharp-p-complete-problems/</guid><description>A discussion of #P-complete problems, specifically #SAT and counting 3-colorings in a graph. Includes proofs of their completeness and relevant reductions.</description><pubDate>Fri, 25 Mar 2016 15:22:00 GMT</pubDate></item><item><title>The complexity class of #P problems</title><link>https://aseemrb.me/blog/sharp-p-problems/</link><guid isPermaLink="true">https://aseemrb.me/blog/sharp-p-problems/</guid><description>An in-depth look at the complexity class #P, based on Valiant&apos;s work. Focuses on the problem of computing the permanent of a binary matrix.</description><pubDate>Wed, 23 Mar 2016 15:22:00 GMT</pubDate></item><item><title>Machine level obfuscation</title><link>https://aseemrb.me/blog/machine-level-obfuscation/</link><guid isPermaLink="true">https://aseemrb.me/blog/machine-level-obfuscation/</guid><description>A demonstration of how to obfuscate strings in C using floating-point numbers and machine endianness. Includes a code example that prints &quot;ILOVEYOU&quot; through seemingly random double values.</description><pubDate>Sun, 25 Jan 2015 15:22:00 GMT</pubDate></item><item><title>First interview experience</title><link>https://aseemrb.me/blog/first-interview-experience/</link><guid isPermaLink="true">https://aseemrb.me/blog/first-interview-experience/</guid><description>A personal account of my first face-to-face interview experience at Microsoft.
Covers the written rounds, technical questions on linked lists and binary trees, and the final interview rounds.</description><pubDate>Sun, 23 Nov 2014 15:22:00 GMT</pubDate></item></channel></rss>