The Nyquist sampling theorem is a cornerstone of analog-to-digital conversion. It says that to faithfully preserve a signal when converting to digital, you have to sample at more than twice the highest frequency you want to keep. This is part of why 44.1 kHz is considered high-quality audio: even though the mic capturing the audio vibrates faster, sampling it about 44,000 times a second produces a signal that, to us, is indistinguishable from one with infinite resolution, since our hearing at best tops out around 20 kHz.
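Not something you need to take on faith, either. Here's a quick numpy sketch (my own, not anything official) of what happens when you break the rule: a 25 kHz tone sampled at 44.1 kHz is above the Nyquist limit of 22.05 kHz, so it folds back and shows up as a 19.1 kHz alias.

```python
import numpy as np

fs = 44_100               # CD sampling rate in Hz
n = fs                    # one second of samples
t = np.arange(n) / fs

# A 25 kHz tone is above the Nyquist limit (fs/2 = 22.05 kHz),
# so sampling it at 44.1 kHz folds it down to an alias.
tone = np.sin(2 * np.pi * 25_000 * t)

spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(n, d=1 / fs)
peak = freqs[np.argmax(spectrum)]

print(peak)  # energy appears at fs - 25000 = 19100 Hz, not at 25 kHz
```

That folded-down 19.1 kHz tone is the digital equivalent of wagon wheels spinning backwards on film.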
I’m no engineer, just a partially informed enthusiast. But this picture of moving water somehow illustrates the Nyquist theorem to me: how perceived speed varies with distance, and how distance somehow makes things look clearer. The scanner blade samples at about 30 Hz across the horizon.
Scanned left to right, in about 20 seconds. The view from a floating pier across an undramatic patch of the Oslo fjord.
*edit: I swapped the direction of the scan in OP
Why is this called a “theorem”? It feels more like a rule of thumb.
It’s a theorem because it’s mathematically proven, not an empirical rule of thumb. It comes from interpolation theory: to pin down the frequency of a sine curve (and all signals are combinations of sine curves), you need more than two samples per period. So the sampling frequency needs to be more than twice the signal frequency, at least for the parts of the signal you want to preserve. Frequencies above roughly 20 kHz are useless to our ears, so we can ignore them when sampling; a 44.1 kHz rate covers everything up to 22.05 kHz.
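The “more than two samples per period” part matters, and you can see why with a toy example (a sketch of mine, just numpy): sample a 1 Hz sine at exactly 2 Hz and every sample can land on a zero crossing, so the signal looks like pure silence. Sample slightly faster and the oscillation reappears.

```python
import numpy as np

f = 1.0                        # a 1 Hz sine

# Sampling at exactly 2 Hz (the Nyquist rate) can put every sample
# on a zero crossing: sin(pi * k) = 0 for every integer k.
t = np.arange(10) / 2.0
samples = np.sin(2 * np.pi * f * t)
print(np.allclose(samples, 0))   # True: the tone vanishes entirely

# Sample just a bit faster (2.5 Hz) and the oscillation is visible again.
t2 = np.arange(10) / 2.5
samples2 = np.sin(2 * np.pi * f * t2)
print(np.abs(samples2).max())    # clearly nonzero
```

That edge case is why the theorem is stated as strictly greater than twice the bandwidth, and why real converters leave headroom (44.1 kHz for a 20 kHz limit) rather than sampling right at the boundary.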
https://en.m.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem