"To avoid this problem, the team divided their 100-milliwatt laser into eight beams. Each beam travels along a slightly different path through the turbulent atmosphere and thus receives a different random phase perturbation. Counterintuitively, this incoherent illumination makes the interference effects observable.
When I first started studying optical engineering, my teacher had worked on the first under-the-RADAR guidance system for bombers. He told lots of amusing stories, like how the pilots insisted on a manual override - so they "agreed" to provide a switch, noting to us manual piloting at near-treetop level and 1,000 ft/s is insane.
He taught us about the nominal amount of turbulence in the atmosphere, and that it limited space-based cameras to about half a foot resolution - a limit he said couldn't be broken. Therefore, license plates would never be readable from space...
Before I was out of grad school, they had broken it with laser techniques on nearby targets. Flash the laser at the same time as the image, scan the laser-illuminated spot, calculate the perturbance, and reverse-filter the image. A lot of processing (for that day), but it could be done back on Earth.
As you can see from the test images, the 8 lasers aren't enough to perfectly smooth out the noise. The noise is probably square-root-8 improved, so resolution should improve by a factor of not quite 3. Move those lasers slightly and repeat 12 times; you've improved resolution by 10. This is easy to do quickly; you should be able to read fine print held by a car passenger on the highway.
We are in the middle of a renaissance of image processing across a wide range of fields. Many of the previous limits are being smashed by using new materials and algorithms. See https://en.wikipedia.org/wiki/Fourier_ptychography for an example
Wow, I had no idea. I know nothing about the field, so maybe someone better educated can answer my innocent, probably naive question: my instincts tell me that any technology that makes humans better at manipulating or interpreting light has vast potential to alter our lives. Is that right?
That's how night mode works on Pixel phones, right? I believe it takes a few images in rapid succession and took advantage of the noise being random which meant a high quality image under a noisy sensor with some signal processing.
It also can actually allow you to identify positions within the image at a greater resolution than the pixels, or even light itself, would otherwise allow.
In microscopy, this is called 'super-resolution'. You can take many images over and over, and while the light itself is 100s of nanometers large, you actually can calculate the centroid of whatever is producing that light with greater resolution than the size of the light itself.
Integrating over a longer time to get more accurate light measurements of the a scene has been a principal feature of photography. You need to slow down the shutter and open up the aperture in dark conditions.
Combining multiple exposures is not significantly different from a single longer exposure, except the key innovation of combining motion data and digital image stabilization which allows smartphones to approximate longer exposures without the need of a tripod.
I agree with you wholeheartedly and just want to add one more aspect to this: it also allows you do handle the case where the subject is moving slowly relative to the camera. Easy example is taking long exposures of the moon from a tripod. If you just open the shutter for 30 seconds the moon itself is going to move enough to cause motion blur; if instead you take a series of much faster photos and use image processing techniques to stack the subject (instead of just naively stacking all of the pixels 1:1) you can get much better results.
For bright stuff like the moon, it's my understanding the best way is take really high-speed video, hundreds of frames per second, then pick out the frames which has the least amount of atmospheric distortion and stack those.
So not only can you compensate for unwanted motion of the camera rig, but also for external factors like the atmosphere.
For faint deep-sky objects, IIRC you really do want long exposures, to overcome sensor noise. At least the comparisons I've seen using same total integration time, a few long exposures had much more detail and color compared to lots of short exposures.
That said, lots of short exposures might be all you can do if you're limited by equipment or such, and is certainly way better than nothing.
This is how we reduce noise in filmmaking. My de-noise node in DaVinci has two settings: spatial and temporal. Temporal references 3 frames either side of the subject frame.
>He told lots of amusing stories, like how the pilots insisted on a manual override - so they "agreed" to provide a switch, noting to us manual piloting at near-treetop level and 1,000 ft/s is insane.
You ought to read Tom Wolfe’s “the right stuff” asap if you haven’t already
So what's the summary of how this works? I don't think it was explained well, and I'm fairly up to speed with the physics of photons etc. Is it that the multiple lasers are able to destructively interfere with each other so that they cancel out the noise from each other since the noise will be the same in all of them? That's tricky because if the photons are phase shifted to cancel out the noise that seems like the ENTIRE laser signal would be cancelled out too. Maybe this is what's happening, and the only thing "left over" is the signal from the source (what's being measured)?
> He imagines that the remote-imaging system could have several applications, including monitoring insect populations across agricultural land.
“Insect populations” is a funny way to spell secrets. Jokes aside, it does seem like this could serve a wide range of non-espionage related use cases. Really cool.
there is a now old technology where a laser is shone on a window, and the resulting glow is imaged, the images if anylised are an analog audio signal that is created by voices inside a building vib the newer version under discussion here is a direct fit forthe same use, but at much greater distances and greater fidelity/resolution
there were many,mostly mechanical devices, made to detect aircraft ,deployed durring
WWII, that had two large acoustical horns directed a central binaural detection sensor, the whole aparatus was the mounted on a large stage that turned, and the horns were also aimable, giving a bearing, and speed on aircraft, in dark ,coudy, or other conditions.The inferometer bieng someone in a seat.
> To demonstrate the system’s capabilities, the team created a series of 8-mm-wide targets, each made from a reflective material and imprinted with a letter.
That interesting article led me down a research rabbit hole of microwave maser inferometers and whether that could be an explanation for the controversial Havana Syndrome. And, having skimmed descriptions of historical SIGINT projects Buran[1] and Luch[2], and the theoretical advantages of such a system ... my curiosity in Faraday cages is renewed.
My (mis?)understanding was that two receivers acting as an interferometer can only resolve things that are on a line parallel with the line between the receivers--so if the receivers are on a horizontal, then they can resolve left and right in their targets, but not up and down. But the images shown in the paper have more or less full 360 degrees resolution. Is that because they rotated the target? The paper says they did, but it's not clear how many increments of partial rotation they did--every 10 degrees, 20,...
If the target cannot be rotated, can the two (or more) receivers revolve around a central axis? If so, presumably one of the receivers could revolve around the other (fixed) receiver to the same effect.
If it’s truly just like the methods astrophysicists use for transit imaging, you might even be able to do some funky stuff like monitor invisible gasses. Could potentially be revolutionary for things like fume safety and viral spread tracking, among other uses. Might even be able to analyze liquids in a container without having to touch the liquid (the name for this type of testing evades me at the moment)
I believe it'd be pretty wonky coloring, or at least it could be, since it'd be capturing snapshots of individual frequency responses. If something is visibly green, reflecting across most of the greenish areas of spectrum, but happens to absorb the exact frequency of the laser, it'd appear black when imaged this way. Or at least not green.
I think that’s the case for regular cameras too though, the filter for the pixels doesn’t exactly replicate the response of the cones in the eyes either, right? So you have things where the camera sees a different color than a human eye.
Regular cameras respond to a wide range of wavelenghts, and they do actually reasonably mimic the response of the human eye.
Either way, it's the "range" vs "single wavelength" that's key here. The green band (or blue band or red band) isn't one wavelength. It's an average over a fairly broad range. Single-wavelength (or very narrow range) images are quite different.
Trivial to eliminate through window treatments and training to mitigate should-surfing risks.
It’s probably more valuable as a surveillance and monitoring tool than an espionage one, but they would no doubt be the first customers (if not already).
the reflective material requirement seems to be a limiting factor, so most likely application would be license plate reading?? They didn't mention anything about moving targets, but I guess space debris is also moving so maybe as an added layer to LiDAR??
Modulating a laser beam for communications is not new but this distance effort by amateurs doing a two-way voice transmission over 167km in New Zeland is pretty cool. This article also mentions a number of other laser communication long distance efforts.
OK, this part was brilliant:
"To avoid this problem, the team divided their 100-milliwatt laser into eight beams. Each beam travels along a slightly different path through the turbulent atmosphere and thus receives a different random phase perturbation. Counterintuitively, this incoherent illumination makes the interference effects observable.
When I first started studying optical engineering, my teacher had worked on the first under-the-RADAR guidance system for bombers. He told lots of amusing stories, like how the pilots insisted on a manual override - so they "agreed" to provide a switch, noting to us that manual piloting at near-treetop level and 1,000 ft/s is insane.
He taught us about the nominal amount of turbulence in the atmosphere, and that it limited space-based cameras to about half a foot resolution - a limit he said couldn't be broken. Therefore, license plates would never be readable from space...
Before I was out of grad school, they had broken it with laser techniques on nearby targets. Flash the laser at the same time as the image, scan the laser-illuminated spot, calculate the perturbation, and reverse-filter the image. A lot of processing (for that day), but it could be done back on Earth.
As you can see from the test images, the 8 lasers aren't enough to perfectly smooth out the noise. The noise is probably square-root-8 improved, so resolution should improve by a factor of not quite 3. Move those lasers slightly and repeat 12 times; you've improved resolution by 10. This is easy to do quickly; you should be able to read fine print held by a car passenger on the highway.
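For anyone who wants to sanity-check that square-root scaling, here's a toy sketch (my own illustration, not the team's processing; the signal and noise numbers are made up): averaging N independent noisy looks cuts the noise by roughly sqrt(N), so 8 beams give ~2.8x and 8 beams x 12 laser positions give ~9.8x.

    import numpy as np

    rng = np.random.default_rng(0)
    signal, sigma = 1.0, 0.5          # hypothetical pixel value and per-look noise

    for n in (1, 8, 96):              # 1 beam, 8 beams, 8 beams x 12 laser positions
        # each "look" = signal + independent noise (stand-in for one speckle realization)
        looks = signal + sigma * rng.standard_normal((100_000, n))
        averaged = looks.mean(axis=1)
        print(f"N={n:3d}  residual noise {averaged.std():.3f}  "
              f"improvement ~{sigma / averaged.std():.1f}x  (sqrt(N)={np.sqrt(n):.1f})")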
We are in the middle of a renaissance of image processing across a wide range of fields. Many of the previous limits are being smashed by using new materials and algorithms. See https://en.wikipedia.org/wiki/Fourier_ptychography for an example
Applied Science YT channel has an interesting video showing this at work:
Dramatically improve microscope resolution with an LED array and Fourier Ptychography
https://www.youtube.com/watch?v=9KJLWwbs_cQ
Damn, somehow missed this video as a long time subscriber.
Wow, I had no idea. I know nothing about the field, so maybe someone better educated can answer my innocent, probably naive question: my instincts tell me that any technology that makes humans better at manipulating or interpreting light has vast potential to alter our lives. Is that right?
That's how night mode works on Pixel phones, right? I believe it takes a few images in rapid succession and takes advantage of the noise being random, which means you can get a high-quality image out of a noisy sensor with some signal processing.
It can also allow you to identify positions within the image at a greater resolution than the pixels, or even the light itself, would otherwise allow.
In microscopy, this is called 'super-resolution'. You can take many images over and over, and while the spot of light itself is hundreds of nanometers across, you can calculate the centroid of whatever is producing that light with far finer precision than the size of the spot itself.
https://en.wikipedia.org/wiki/Super-resolution_imaging
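A minimal sketch of that centroiding trick, with made-up numbers (not any particular microscope's pipeline): the spot is hundreds of nanometers wide, but with N detected photons its center can be pinned down to roughly width/sqrt(N).

    import numpy as np

    rng = np.random.default_rng(1)
    true_pos = np.array([103.7, 98.2])   # hypothetical emitter position, nm
    psf_sigma = 120.0                    # diffraction-limited spot width, ~100s of nm
    n_photons = 5000

    # each detected photon lands near the emitter, blurred by the point-spread function
    photons = true_pos + psf_sigma * rng.standard_normal((n_photons, 2))

    estimate = photons.mean(axis=0)                 # centroid of the blurry spot
    precision = psf_sigma / np.sqrt(n_photons)      # expected error ~ sigma / sqrt(N)
    print("estimated position (nm):", estimate.round(1), " true:", true_pos)
    print("expected precision (nm): ~%.1f" % precision)   # a few nm, far below 120 nm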
Are the 100s of nanometers of light larger than the perturbations of Brownian motion?
This oldish link would indicate inclusions of lead in aluminum at 330°C will move within 2nm in 1/3s but may displace by 100s of nanometers over time:
https://www2.lbl.gov/Science-Articles/Archive/MSD-Brownian-m...
Integrating over a longer time to get more accurate light measurements of a scene has been a principal feature of photography. You need to slow down the shutter and open up the aperture in dark conditions.
Combining multiple exposures is not significantly different from a single longer exposure; the key innovation is combining motion data with digital image stabilization, which allows smartphones to approximate longer exposures without the need for a tripod.
I agree with you wholeheartedly and just want to add one more aspect to this: it also allows you to handle the case where the subject is moving slowly relative to the camera. An easy example is taking long exposures of the moon from a tripod. If you just open the shutter for 30 seconds, the moon itself is going to move enough to cause motion blur; if instead you take a series of much faster photos and use image processing techniques to stack on the subject (instead of just naively stacking all of the pixels 1:1), you can get much better results.
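A rough sketch of that "stack on the subject" step (toy code, not any particular astro stacker; it assumes pure whole-pixel translation between frames): estimate each frame's shift against the first frame by FFT cross-correlation, undo it, then average.

    import numpy as np

    def align_and_stack(frames):
        """Register frames to the first one by cross-correlation, then average."""
        frames = np.asarray(frames, dtype=float)     # shape: (n_frames, height, width)
        n, h, w = frames.shape
        ref_fft = np.fft.fft2(frames[0])
        stacked = np.zeros((h, w))
        for frame in frames:
            corr = np.fft.ifft2(ref_fft * np.conj(np.fft.fft2(frame)))
            dy, dx = np.unravel_index(np.argmax(np.abs(corr)), (h, w))
            dy = dy - h if dy > h // 2 else dy       # wrap shifts into +/- half a frame
            dx = dx - w if dx > w // 2 else dx
            stacked += np.roll(frame, (dy, dx), axis=(0, 1))
        return stacked / n

Real stackers also handle rotation and sub-pixel shifts; this only covers translation.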
For bright stuff like the moon, it's my understanding the best way is to take really high-speed video, hundreds of frames per second, then pick out the frames that have the least atmospheric distortion and stack those.
So not only can you compensate for unwanted motion of the camera rig, but also for external factors like the atmosphere.
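And a hedged sketch of that frame-selection ("lucky imaging") part, again toy code with an arbitrary sharpness metric: score each video frame, keep only the sharpest few percent, and average those.

    import numpy as np

    def sharpness(frame):
        # variance of a simple Laplacian: frames smeared by the atmosphere score low
        lap = (-4 * frame
               + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
               + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1))
        return lap.var()

    def lucky_stack(frames, keep_fraction=0.05):
        """Keep the sharpest few percent of frames and average them."""
        scores = np.array([sharpness(f) for f in frames])
        n_keep = max(1, int(len(frames) * keep_fraction))
        best = np.argsort(scores)[-n_keep:]
        return np.mean([frames[i] for i in best], axis=0)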
For faint deep-sky objects, IIRC you really do want long exposures, to overcome sensor noise. At least in the comparisons I've seen using the same total integration time, a few long exposures had much more detail and color compared to lots of short exposures.
That said, lots of short exposures might be all you can do if you're limited by equipment or such, and is certainly way better than nothing.
This is how we reduce noise in filmmaking. My de-noise node in DaVinci has two settings: spatial and temporal. Temporal references 3 frames either side of the subject frame.
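Not Resolve's actual algorithm, but the basic temporal idea looks something like this sketch: average each frame with the 3 frames on either side, so random noise cancels while static detail survives (real temporal denoisers add motion compensation on top of this).

    import numpy as np

    def temporal_denoise(frames, radius=3):
        """Average each frame with `radius` neighbours on each side (7-frame window)."""
        frames = np.asarray(frames, dtype=float)     # shape: (n_frames, height, width)
        out = np.empty_like(frames)
        for i in range(len(frames)):
            lo, hi = max(0, i - radius), min(len(frames), i + radius + 1)
            out[i] = frames[lo:hi].mean(axis=0)
        return out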
some phones shine IR floodlight, too.
- "Flash the laser at the same time as the image, scan the laser-illuminated spot, calculate the perturbance, and reverse-filter the image"
That's also how some adaptive optics work in astronomy,
https://en.wikipedia.org/wiki/Laser_guide_star
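The "reverse-filter" step described upthread is, as I understand it, a deconvolution with the blur measured from the laser spot. A generic hedged sketch (Wiener-style division in the Fourier domain, not whatever the classified system actually did):

    import numpy as np

    def wiener_deconvolve(blurred, psf, noise_to_signal=1e-2):
        """Undo a known blur by dividing out its transfer function, with regularization."""
        H = np.fft.fft2(psf, s=blurred.shape)    # transfer function of the measured blur
        G = np.fft.fft2(blurred)
        restored = np.conj(H) * G / (np.abs(H) ** 2 + noise_to_signal)
        return np.real(np.fft.ifft2(restored))

    # blurred: image taken through the turbulence
    # psf:     blur kernel estimated from the scanned laser-illuminated spot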
The adaptive optics system for the DKIST solar telescope actually deforms each point of the mirror at 60Hz or something to do wavefront correction!
Big telescopes have to actively deform the primary mirror anyway, just to keep it in proper shape as it moves around under gravity loads.
>He told lots of amusing stories, like how the pilots insisted on a manual override - so they "agreed" to provide a switch, noting to us that manual piloting at near-treetop level and 1,000 ft/s is insane.
You ought to read Tom Wolfe’s “the right stuff” asap if you haven’t already
And watch this video of Neil Armstrong nearly getting killed when his test flight of a lunar lander trainer (on Earth) crashed and burned:
https://youtu.be/tUJDbj9Vp5w?si=YFeau8vskUvpDUNV
So what's the summary of how this works? I don't think it was explained well, and I'm fairly up to speed with the physics of photons etc. Is it that the multiple lasers are able to destructively interfere with each other so that they cancel out the noise from each other since the noise will be the same in all of them? That's tricky because if the photons are phase shifted to cancel out the noise that seems like the ENTIRE laser signal would be cancelled out too. Maybe this is what's happening, and the only thing "left over" is the signal from the source (what's being measured)?
> He imagines that the remote-imaging system could have several applications, including monitoring insect populations across agricultural land.
“Insect populations” is a funny way to spell secrets. Jokes aside, it does seem like this could serve a wide range of non-espionage related use cases. Really cool.
There is a now-old technology where a laser is shone on a window and the resulting glow is imaged; if analyzed, the images yield an analog audio signal created by voices inside the building vibrating the glass. The newer version under discussion here is a direct fit for the same use, but at much greater distances and with greater fidelity/resolution.

There were also many, mostly mechanical, devices made to detect aircraft, deployed during WWII, that had two large acoustical horns directed at a central binaural detection sensor. The whole apparatus was mounted on a large stage that turned, and the horns were also aimable, giving a bearing and speed on aircraft in dark, cloudy, or other conditions. The interferometer being someone in a seat.
> The team demonstrated that this intensity interferometer can image millimeter-wide letters at a distance of 1.36 km
1mm at 1.36 km works out to about 150 milliarcsec (mas), if you're used to those units from astronomy contexts.
Letters were 8 mm.
> To demonstrate the system’s capabilities, the team created a series of 8-mm-wide targets, each made from a reflective material and imprinted with a letter.
I checked the paper, by "8mm wide" they mean that the letters were 8mm tall, which is a 22pt font (name-tag size), for those curious.
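For anyone redoing the conversion, it's just small-angle arithmetic on the two numbers in the article (8 mm letters, 1.36 km range); the 1 mm case from the quote above is included for comparison:

    import math

    distance_m = 1360.0
    for size_m in (1e-3, 8e-3):                 # 1 mm (as quoted) and 8 mm (actual letters)
        angle_rad = size_m / distance_m         # small-angle approximation
        angle_mas = math.degrees(angle_rad) * 3600 * 1000
        print(f"{size_m * 1000:.0f} mm at 1.36 km ~ {angle_mas:.0f} mas")
    # -> about 150 mas for 1 mm, about 1200 mas (1.2 arcsec) for the 8 mm letters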
i'm a bit confused when they don't measure things in olympic pools and bananas for scale
intensity interferometer means it interferometers intensity of light.
Imaging technologies, not to be mistaken for Imagination Technologies and their GPU inside of a Sega Dreamcast or iPhone, iPad, ...
1.36 km = 0.85 miles
That interesting article led me down a research rabbit hole of microwave maser interferometers and whether that could be an explanation for the controversial Havana Syndrome. And, having skimmed descriptions of historical SIGINT projects Buran[1] and Luch[2], and the theoretical advantages of such a system ... my curiosity in Faraday cages is renewed.
[1]https://en.wikipedia.org/wiki/Laser_microphone
[2]https://en.wikipedia.org/wiki/Olymp-K
Lasers really are an underrated miracle. So many diverse uses for things that would be impossible without them.
And we are about to be saturated in them as soon as LiDAR full self driving goes mainstream
LIDAR pulses are on the order of a few nanoseconds.
How many pulses per second?
Matter-wave lasers coming soon.
wave motion lasers just right after!
My (mis?)understanding was that two receivers acting as an interferometer can only resolve things that are on a line parallel with the line between the receivers--so if the receivers are on a horizontal, then they can resolve left and right in their targets, but not up and down. But the images shown in the paper have more or less full 360 degrees resolution. Is that because they rotated the target? The paper says they did, but it's not clear how many increments of partial rotation they did--every 10 degrees, 20,...
If the target cannot be rotated, can the two (or more) receivers revolve around a central axis? If so, presumably one of the receivers could revolve around the other (fixed) receiver to the same effect.
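One way to picture it (my own sketch with a made-up 0.5 m baseline, not the paper's geometry): a two-receiver baseline only samples spatial frequencies along its own orientation, and rotating either the target or the receiver pair sweeps that sampling direction around to cover the other angles.

    import numpy as np

    baseline_m = 0.5                            # hypothetical receiver separation
    for angle_deg in range(0, 180, 10):         # e.g. target/receivers rotated in 10-degree steps
        u = baseline_m * np.cos(np.radians(angle_deg))
        v = baseline_m * np.sin(np.radians(angle_deg))
        # each orientation contributes samples along one direction of the spatial-frequency plane
        print(f"rotation {angle_deg:3d} deg -> sampled direction (u, v) = ({u:+.2f}, {v:+.2f}) m")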
the website delights with the absence of ads throwing up into my eyes
Use Firefox and uBlock Origin and that can be every website, even on mobile.
Presumably this could be used for color imaging by using lasers of different wavelengths?
If it’s truly just like the methods astrophysicists use for transit imaging, you might even be able to do some funky stuff like monitor invisible gasses. Could potentially be revolutionary for things like fume safety and viral spread tracking, among other uses. Might even be able to analyze liquids in a container without having to touch the liquid (the name for this type of testing evades me at the moment)
I believe it'd be pretty wonky coloring, or at least it could be, since it'd be capturing snapshots of individual frequency responses. If something is visibly green, reflecting across most of the greenish areas of spectrum, but happens to absorb the exact frequency of the laser, it'd appear black when imaged this way. Or at least not green.
I think that’s the case for regular cameras too though, the filter for the pixels doesn’t exactly replicate the response of the cones in the eyes either, right? So you have things where the camera sees a different color than a human eye.
Regular cameras respond to a wide range of wavelengths, and they do actually reasonably mimic the response of the human eye.
Either way, it's the "range" vs "single wavelength" that's key here. The green band (or blue band or red band) isn't one wavelength. It's an average over a fairly broad range. Single-wavelength (or very narrow range) images are quite different.
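A toy illustration of that narrowband-vs-broadband point, with a completely made-up reflectance spectrum: a surface that reflects most of the green band but happens to absorb right at 532 nm looks green to a camera and nearly black to a single-line laser imager.

    import numpy as np

    wavelengths = np.arange(400, 701)                          # nm, visible band
    # hypothetical "green" surface: reflective across 500-570 nm...
    reflectance = np.where((wavelengths >= 500) & (wavelengths <= 570), 0.8, 0.05)
    # ...except for a narrow absorption notch right at a 532 nm laser line
    reflectance[np.abs(wavelengths - 532) <= 2] = 0.01

    green_band = (wavelengths >= 500) & (wavelengths <= 570)
    print("broadband green response:", round(reflectance[green_band].mean(), 2))  # ~0.74, looks green
    print("532 nm laser response:   ", reflectance[wavelengths == 532][0])        # 0.01, looks dark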
A fun example of these effects is "black fire".
https://www.youtube.com/watch?v=F0LWtieip9E
i think the applications to spy-craft could be quite interesting here. Something for the next mission impossible movie maybe?
It's also interesting to consider that they may be reinventing prior classified research.
Trivial to eliminate through window treatments, plus training to mitigate shoulder-surfing risks.
It’s probably more valuable as a surveillance and monitoring tool than an espionage one, but they would no doubt be the first customers (if not already).
The reflective-material requirement seems to be a limiting factor, so the most likely application would be license plate reading?? They didn't mention anything about moving targets, but I guess space debris is also moving, so maybe as an added layer to LiDAR??
I wonder if the requirement to rotate the target is inherent, or if it could be optimized away eventually?
I suspect this was an easy way to test it without having to build a rotatable optical bench.
A practical device may be an array of light sources and telescopes on a rotating mount or a set of moveable mirrors that achieve the same effect.
If it is required, then in a real application you could just rotate the laser array instead.
I also wonder about the requirement for the letters to be made of reflective material.
Or rotate the telescopes
... which are radially symmetric.
I recognize the ambiguity, but was referring to the orientation of the telescope system to the target.
My favorite "lasers at distance" thing will be when amateurs can get a few photons back from the mirrors left on the moon
https://en.wikipedia.org/wiki/Lunar_Laser_Ranging_experiment...
Not quite there yet at the amateur level, private industry soon, but then there is the question of safety to air traffic.
Can you imagine the first moon data link? JWST has 8 Mbps
Modulating a laser beam for communications is not new, but this distance effort by amateurs doing a two-way voice transmission over 167 km in New Zealand is pretty cool. This article also mentions a number of other long-distance laser communication efforts.
https://www.modulatedlight.org/Modulated_Light_DX/MODULATED_...
And the next will be when the amateur data links manage to noticeably heat the mirrors...
they will heat starlink first.
People do use radio (though not optical) for Earth-Moon-Earth data links: https://en.wikipedia.org/wiki/Earth%E2%80%93Moon%E2%80%93Ear...
How does this compare to the state of the art?
Except when it's raining
... but only if it's written on shiny paper