- **Weak gravitational field**: This excludes black holes, but most important lensing objects, such as galaxy clusters and exoplanets, can still be modeled.
- **Slowly moving masses**: Again, this excludes only the most extreme cosmic phenomena and so is a worthwhile tradeoff.
- **Thin lens**: Almost all lenses have a "thin" mass distribution compared to the distances to the source and observer. This approximation reduces the problem to a cylindrically symmetric one and lets us use geometric optics, where rays are straight lines, saving a great deal of computational effort.

Together these approximations give us a *general* framework for simulating lensing anywhere, from the scale of exoplanets all the way up to galaxy clusters.

The following figure illustrates the coordinate system we use in our lensing implementation. "Source" refers to a distant background object in its actual position in space (e.g., a quasar or galaxy), "image" is a shifted/split/sheared mirage of the source due to lensing, and O is the observer. The lensing mass is assumed to lie in a plane (the "lens plane") and is composed of one or more point masses \(m_i\). The perpendicular distance between the light ray from the source and each mass is termed the impact parameter \(\xi\), and the lensing deflection angle \(\hat{\alpha}\) is determined by the mass and the impact parameter.
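In the weak-field limit the deflection angle takes the standard point-mass form \(\hat{\alpha} = 4Gm/(c^2\xi)\). A minimal numerical sketch (function and constant names are ours, not celestia.Sci's):

```python
# Point-mass deflection angle in the weak-field limit:
# alpha_hat = 4 * G * m / (c^2 * xi)
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

def deflection_angle(mass_kg, impact_parameter_m):
    """Deflection angle (radians) of a light ray passing a point mass."""
    return 4.0 * G * mass_kg / (C ** 2 * impact_parameter_m)

# Sanity check: a ray grazing the Sun (xi ~ solar radius, 6.96e8 m)
# should be deflected by about 1.75 arcseconds, the value famously
# measured during the 1919 eclipse expedition.
alpha = deflection_angle(M_SUN, 6.96e8)
arcsec = alpha * 180.0 / 3.141592653589793 * 3600.0
```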

(Incidentally, the math formulae in these posts are rendered on-the-fly from TeX code using MathJax)

We have mentioned that lenses are modeled as point masses or collections of point masses. Since the gravitational field outside a spherically symmetric body is identical to that of a point mass, this is a broadly applicable approximation.

We can state the following important properties:

- **Mass**: The larger the mass, the more the light is deflected.
- **Distance dependence**: Greater distances increase the amount of deflection, thus increasing the chances of detection.
- **Multiple images**: The angles \(\theta\) and \(\theta_s\) can be either positive or negative, implying that multiple images are possible.
- **Amplification**: If the multiple images are too small and close together to be seen separately (e.g., in the case of exoplanet microlensing), the images combine and cause the source to appear to brighten over time; the amplification can sometimes be a factor of several hundred.
- **Einstein rings**: If the source is directly in line with the observer and lens (\(\theta_s = 0\)), a ring of light (commonly called an Einstein ring) is observed.

We will now discuss the concrete implementation of lensing in **celestia.Sci**.

Take this scene of the bright elliptical galaxy NGC 6166 rendered in celestia.Sci. The inset is a magnified view showing the individual pixels that make up the Milky Way galaxy as seen from NGC 6166 at the huge distance of 157 Mpc. The result of simulating lensing is shown for comparison.

What we should notice here is that **each pixel** in essence represents a **light ray** originating from within the simulation, regardless of whether the light source is a star or a galaxy. Thus a GPU fragment shader running over all pixels processes every source of light in the scene democratically. This is equivalent to computing the lensing deflection angle on a grid with the same dimensions as the rendered image. The fragment processors available on most modern graphics cards can execute such a shader rapidly, and the end result has pixel-level accuracy.
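On the CPU, the per-pixel work such a shader performs could be sketched as follows. This is a toy nearest-neighbour version, not celestia.Sci's actual shader: each output pixel looks up the source pixel displaced by the summed point-lens deflections, where a point lens at pixel offset \(b\) deflects by \(\theta_E^2/b\):

```python
def lens_image(src, masses, einstein_radius_px):
    """Toy per-pixel lensing pass.
    src: 2D list of pixel values; masses: list of (row, col) lens
    positions in pixels; einstein_radius_px: Einstein radius in pixels."""
    h, w = len(src), len(src[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            dr = dc = 0.0
            for (mr, mc) in masses:
                yr, yc = r - mr, c - mc
                b2 = yr * yr + yc * yc
                if b2 == 0:
                    continue  # ray through the lens centre: skip
                # point-lens deflection theta_E^2 / b, split into
                # row/column components (divide by b^2, multiply by y)
                scale = einstein_radius_px ** 2 / b2
                dr += yr * scale
                dc += yc * scale
            # lens equation: source position = image position - deflection
            sr, sc = int(round(r - dr)), int(round(c - dc))
            if 0 <= sr < h and 0 <= sc < w:
                out[r][c] = src[sr][sc]
    return out
```

A real shader does the same lookup per fragment with bilinear texture filtering instead of this nearest-neighbour rounding.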

We use a two-pass approach: first we render stars and DSOs to a square texture in memory using a **framebuffer object** (FBO), then we draw that texture as a quad covering the entire window (cropped by the viewing limits), applying the lensing fragment shader during this second pass.

A challenge in this strategy is to correctly transform coordinates between texture space, where the lensing effect is calculated in the fragment shader, and world space. Distances in the lens equation must be computed in world units (km), and the angular deflection must be converted to a displacement in texture units [0, 1]. The intercept theorem from optics can help here:

(Intercept theorem suggestion courtesy of Fridger)
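As a sketch of the idea (our own names and conventions, not celestia.Sci's shader code): by the intercept theorem, the on-screen displacement is proportional to the ratio between the tangent of the deflection angle and the tangent of half the field of view, so an angular deflection maps to a texture-space offset as:

```python
import math

def angle_to_texture_offset(deflection_rad, fov_rad):
    """Fraction of the texture width [0, 1] spanned by an angular
    deflection, given the field of view the texture covers."""
    # The tangents keep the mapping correct for wide fields of view;
    # for small angles this reduces to deflection / fov.
    return math.tan(deflection_rad) / (2.0 * math.tan(fov_rad / 2.0))
```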

We require a final transform from texture space to window space. Since texture space is the square (0, 0) to (1, 1) while window space generally is not, we must render to horizontally or vertically distorted coordinates depending on the window's aspect ratio, then "undistort" when rendering the full-screen quad in window space.
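One way to express this distort/undistort pair (a sketch under our own conventions, compressing the longer axis into the square texture and stretching it back on output):

```python
def window_to_texture(x, y, aspect):
    """Map window coords in [0, 1] into the square texture.
    aspect = window width / height."""
    if aspect >= 1.0:
        # wide window: compress the vertical axis about the centre
        return x, 0.5 + (y - 0.5) / aspect
    # tall window: compress the horizontal axis about the centre
    return 0.5 + (x - 0.5) * aspect, y

def texture_to_window(u, v, aspect):
    """Inverse mapping: undistort texture coords back to the window."""
    if aspect >= 1.0:
        return u, 0.5 + (v - 0.5) * aspect
    return 0.5 + (u - 0.5) / aspect, v
```

The round trip is the identity, which is exactly the "distort, lens, undistort" property the two-pass rendering relies on.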

Up to now we’ve discussed mainly coordinate transforms, but we’ve neglected an important ingredient of what makes gravitational lensing work: mass! One problem is that, for most objects except exoplanets, masses are not defined in celestia.Sci’s solar system and star definition (.ssc and .stc) files. Only magnitudes (brightnesses) are guaranteed to be known for stars and DSOs in celestia.Sci. Fortunately, astronomers have long known that an object’s mass is closely related to its luminosity.
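For main-sequence stars the empirical mass-luminosity relation is roughly \(L \propto M^{3.5}\) in solar units, which can be inverted to estimate a mass from a known luminosity. A rough sketch (the exponent varies with mass, and this is not necessarily the fit celestia.Sci uses):

```python
def stellar_mass_from_luminosity(l_solar, exponent=3.5):
    """Estimate stellar mass (solar masses) from luminosity (solar
    luminosities) via the rough main-sequence relation L ~ M^exponent."""
    return l_solar ** (1.0 / exponent)

# A star 100x as luminous as the Sun comes out at roughly
# 3.7 solar masses under this relation.
mass = stellar_mass_from_luminosity(100.0)
```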

Plot based on data from Torres, G. et al., 2009. Accurate masses and radii of normal stars: modern results and applications. The Astronomy and Astrophysics Review, 18(1-2), pp. 67-126.

On the larger scale of galaxies and galaxy clusters the situation is different: mass becomes linearly related to luminosity, giving rise to mass-to-light ratios (M/L). M/L depends on galaxy type: we use M/L = 100 for spirals, 200 for ellipticals (E/S0), and 1 for irregulars (values from Bahcall and Kulier 2014 and Carroll and Ostlie 2007). There is some non-linearity for elliptical types depending on radius and velocity dispersion, but for simplicity we do not use the full rigorous model here.

At very large scales (\(> 300\,h^{-1}\) Mpc, i.e., cluster scales and beyond), the mass-to-light ratio approaches a cosmic constant of about \(409\,h\) in solar units (we take \(h = 0.7\)).
