position#
Functions for manipulating and updating scanning positions.
Tike coordinate system conventions in one dimension#
Minimum corner coordinates#
Tike uses an origin-at-minimum-corner coordinate system. This means that the location of every object is described by its minimum corner (the corner closest to negative infinity). It also means that valid coordinates (coordinates within the field of view) are strictly non-negative, because the field of view has its origin at its own minimum corner, so no coordinate within the field of view is negative.
Tike uses pixel units for scan positions, so that there is a one-to-one-to-one correspondence between scan positions, coordinates, and pixels in the field of view.
Figure 1: The coordinate system in the field of view
[ 0 | 1 | 2 | 3 | 4 | ... ]
  ^           ^
  |           |
  |           -- This is the 3rd pixel. Its coordinate is 3.
  |
  -- This is the minimum corner of the field of view. Its coordinate is 0.
Because the coordinate system includes zero, a field of view with width w will include coordinates 0, 1, 2, …, (w-1). By extension, a probe located at position 7 with width 4 would cover coordinates 7, 8, 9, 10. The center of a probe is located at its position plus half its width.
Figure 2: A probe located at position 7
... | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | ...
                [ p | p | p | p ]
                  ^       ^   ^
                  |       |   |
                  --------|---- A probe located at 7 with width 4 is placed here.
                          |
                          -- The center of the probe is located at 7 + (4 / 2) = 9.
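The coverage and center rules above can be checked with a couple of lines of NumPy (a minimal sketch, not part of tike's API):

```python
import numpy as np

position = 7     # minimum-corner coordinate of the probe
probe_width = 4

# A probe covers probe_width consecutive pixels starting at its position.
covered = np.arange(position, position + probe_width)  # -> 7, 8, 9, 10

# The center of a probe is its position plus half its width.
center = position + probe_width / 2  # -> 9.0
```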
Conversion to and from real units#
Conversion from pixel units to real units (e.g. meters) is a scalar multiplication by the real width of a pixel.
Equation 1: Conversion from pixels to real units
(distance in pixels) × (real units per one pixel) = (distance in real units)
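As a concrete instance of Equation 1 (the pixel size here is an assumed calibration value, not from tike):

```python
import numpy as np

pixel_size_m = 1e-8                   # assumed: one pixel spans 10 nm
pixels_per_meter = 1 / pixel_size_m

scan_in_meters = np.array([1.0e-7, 2.5e-7, 4.0e-7])
scan_in_pixels = scan_in_meters * pixels_per_meter   # -> [10., 25., 40.]

# Multiplying by the real width of a pixel inverts the conversion.
back_to_meters = scan_in_pixels * pixel_size_m
```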
The forbidden edge and minimum field of view width#
Tike does not allow probes to occupy pixels within 1 pixel of the edge of the field of view. This prevents accidentally accessing pixels outside the field of view (where the values are undefined) when scan positions have non-integer coordinates. In other words, the minimum coordinate (the floor of a position) must be >= 1 and the maximum coordinate must be <= (field of view width) - (probe width) - 1.
Figure 3: The minimum field of view width
[ x | 1 | 2 | 3 | 4 | x ]
  ^ [ p | p | p | p ] ^
  |   ^   ^   ^   ^   |
  ----|---|---|---|---- These pixels are forbidden because they are edge pixels.
      |   |   |   |
      ----------------- These pixels are occupied by a probe of width 4.
Thus, for a probe of width 4, the smallest field of view is 6 pixels wide and the only valid probe position is 1: the minimum valid coordinate is 1, and the maximum valid coordinate is 6 - 4 - 1 = 1. In practice, this means that you will shift your scan positions so the minimum position is 1, and the minimum field of view width is the span of the scan positions + probe width + 2.
Equation 2: Minimum field of view width equation
(minimum field of view width) = (maximum scan position) - (minimum scan position) + (probe width) + 2
Example in one dimension#
The following Python pseudocode prepares your scan positions for tike and initializes the minimum-size field of view:
1 import numpy as np
2
3 scan_in_pixels = scan_in_meters * pixels_per_meter
4
5 minimum_fov_width = int(np.ceil(np.amax(scan_in_pixels) - np.amin(scan_in_pixels))) + probe_width + 2
6 field_of_view = np.zeros(shape=(minimum_fov_width,), dtype=np.cfloat)
7
8 scan_in_pixels = scan_in_pixels - np.amin(scan_in_pixels) + 1
Note that tike.ptycho.object.get_padded_object() will do lines 5-8 for you.
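A self-contained instance of the recipe with made-up numbers (a probe of width 4 and three scan positions), verifying that the shifted positions land inside the allowed range:

```python
import numpy as np

probe_width = 4
scan_in_pixels = np.array([3.0, 7.5, 12.0])

# Equation 2: span of the scan positions + probe width + 2 edge pixels.
minimum_fov_width = (
    int(np.ceil(np.amax(scan_in_pixels) - np.amin(scan_in_pixels)))
    + probe_width + 2
)  # -> 15
field_of_view = np.zeros(shape=(minimum_fov_width,), dtype=np.cfloat)

# Shift the positions so the minimum position is exactly 1.
scan_in_pixels = scan_in_pixels - np.amin(scan_in_pixels) + 1

# Every position now satisfies 1 <= position <= width - probe_width - 1.
assert np.amin(scan_in_pixels) >= 1
assert np.amax(scan_in_pixels) <= minimum_fov_width - probe_width - 1
```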
- class tike.ptycho.position.AffineTransform[source]#
Bases:
object
Represents a 2D affine transformation.
- __init__(scale0=1.0, scale1=1.0, shear1=0.0, angle=0.0, t0=0.0, t1=0.0)#
- Parameters
scale0 (float) –
scale1 (float) –
shear1 (float) –
angle (float) –
t0 (float) –
t1 (float) –
- Return type
None
- angle: float = 0.0#
- asarray(xp=numpy)[source]#
Return a 2x2 matrix of scale, shear, rotation.
This matrix is scale @ shear @ rotate from left to right.
- Return type
numpy.ndarray
- asarray3(xp=numpy)[source]#
Return a 3x2 matrix of scale, shear, rotation, translation.
This matrix is scale @ shear @ rotate from left to right. Expects a homogeneous (z) coordinate of 1.
- Return type
numpy.ndarray
- classmethod fromarray(T)[source]#
Return an AffineTransform from a 2x2 matrix.
Uses the decomposition method from Graphics Gems 2, Section 7.1.
- Parameters
T (numpy.ndarray) –
- Return type
AffineTransform
- scale0: float = 1.0#
- scale1: float = 1.0#
- shear1: float = 0.0#
- t0: float = 0.0#
- t1: float = 0.0#
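To make the composition order returned by asarray concrete, here is one way the scale @ shear @ rotate product could be written in plain NumPy. The exact factor matrices (placement of shear1, sign convention of angle) are my assumptions for illustration, not tike's source:

```python
import numpy as np

def compose(scale0=1.0, scale1=1.0, shear1=0.0, angle=0.0):
    """Build a 2x2 matrix as scale @ shear @ rotate, left to right."""
    scale = np.array([[scale0, 0.0],
                      [0.0, scale1]])
    shear = np.array([[1.0, 0.0],
                      [shear1, 1.0]])
    rotate = np.array([[np.cos(angle), np.sin(angle)],
                       [-np.sin(angle), np.cos(angle)]])
    return scale @ shear @ rotate

# The default parameters compose to the identity transform.
T = compose()
```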
- class tike.ptycho.position.PositionOptions[source]#
Bases:
object
Manage data and settings related to position correction.
- __init__(initial_scan, use_adaptive_moment=False, vdecay=0.999, mdecay=0.9, use_position_regularization=False, update_magnitude_limit=0, transform=AffineTransform(scale0=1.0, scale1=1.0, shear1=0.0, angle=0.0, t0=0.0, t1=0.0), origin=(0, 0), confidence=<factory>, update_start=0)#
- Parameters
initial_scan (numpy.array) –
use_adaptive_moment (bool) –
vdecay (float) –
mdecay (float) –
use_position_regularization (bool) –
update_magnitude_limit (float) –
transform (AffineTransform) –
origin (tuple[float, float]) –
confidence (numpy.ndarray) –
update_start (int) –
- Return type
None
- confidence: numpy.ndarray#
A rating of the confidence of position information around each position.
- initial_scan: numpy.array#
The original scan positions before they were updated using position correction.
- property m#
- mdecay: float = 0.9#
The proportion of the first moment that is previous first moments.
- property mx#
- property my#
- origin: tuple[float, float] = (0, 0)#
The rotation center of the transformation. This shift is applied to the scan positions before computing the global transformation.
- resample(factor)[source]#
Return a new PositionOptions with the parameters scaled.
- Parameters
factor (float) –
- Return type
PositionOptions
- transform: AffineTransform = AffineTransform(scale0=1.0, scale1=1.0, shear1=0.0, angle=0.0, t0=0.0, t1=0.0)#
Global transform of positions.
- update_magnitude_limit: float = 0#
When set to a positive number, x and y update magnitudes are clipped (limited) to this value.
- update_start: int = 0#
Start position updates at this epoch.
- use_adaptive_moment: bool = False#
Whether AdaM is used to accelerate the position correction updates.
- use_position_regularization: bool = False#
Whether the positions are constrained to fit a random error plus affine error model.
- property v#
- vdecay: float = 0.999#
The proportion of the second moment that is previous second moments.
- property vx#
- property vy#
- tike.ptycho.position.affine_position_regularization(comm, updated, position_options, max_error=32, regularization_enabled=False)[source]#
Regularize position updates with an affine deformation constraint.
Assume that the true position updates are a global affine transformation plus some random error. The regularized positions are then a weighted average of the affine deformation applied to the original positions and the updated positions.
- Parameters
original ((..., N, 2)) – The original scanning positions.
updated ((..., N, 2)) – The updated scanning positions.
comm (Comm) –
updated (List[cupy.ndarray]) –
position_options (List[PositionOptions]) –
max_error (float) –
regularization_enabled (bool) –
- Returns
The updated scanning positions regularized with the affine deformation.
- Return type
regularized (…, N, 2)
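The weighted-average step can be written out directly. A minimal sketch (the blend weights stand in for the per-position confidence; the names are illustrative, not tike's internals):

```python
import numpy as np

def regularize_sketch(original, updated, T, weights):
    """Blend affine-predicted positions with the raw position updates."""
    predicted = original @ T    # the fitted affine model applied to originals
    w = weights[:, None]        # per-position blend weight in [0, 1]
    return w * predicted + (1 - w) * updated

original = np.array([[1.0, 2.0], [3.0, 4.0]])
updated = original + 1.0
# Full weight on the affine model ignores the raw updates entirely.
regularized = regularize_sketch(original, updated, np.eye(2), np.ones(2))
```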
- tike.ptycho.position.check_allowed_positions(scan, psi, probe_shape)[source]#
Check that all positions are within the field of view.
- Raises
ValueError – The field of view must have 2 pixel buffer around the edge. i.e. positions must be >= 2 and < the object.shape - 2 - probe.shape. This padding is to allow approximating gradients and to provide better interpolation near the edges of the field of view.
- Parameters
scan (numpy.array) –
psi (numpy.array) –
probe_shape (tuple) –
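A sketch of such a bounds check, following the limits quoted in the Raises section (my own illustration, not tike's implementation):

```python
import numpy as np

def check_allowed_positions_sketch(scan, psi, probe_shape):
    """Raise ValueError unless every position keeps a 2 pixel buffer."""
    upper = np.array(psi.shape[-2:]) - 2 - np.array(probe_shape[-2:])
    if not (np.all(scan >= 2) and np.all(scan < upper)):
        raise ValueError(
            "Positions must be >= 2 and < object.shape - 2 - probe.shape.")

psi = np.zeros((64, 64), dtype=np.cfloat)
# These positions fit a 16x16 probe inside a 64x64 object with the buffer.
check_allowed_positions_sketch(
    np.array([[2.0, 2.0], [30.0, 41.0]]), psi, (16, 16))
```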
- tike.ptycho.position.estimate_global_transformation(positions0, positions1, weights, transform=None)[source]#
Use weighted least squares to estimate the global affine transformation.
- Parameters
positions0 (numpy.ndarray) –
positions1 (numpy.ndarray) –
weights (numpy.ndarray) –
- Return type
tuple[tike.ptycho.position.AffineTransform, float]
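The weighted least-squares fit behind this function can be sketched with the normal equations (an illustration of the general technique for a 2x2 transform; tike's actual solver and parameterization may differ):

```python
import numpy as np

def fit_affine_2x2(positions0, positions1, weights):
    """Minimize sum_i w_i * ||positions0[i] @ T - positions1[i]||^2 over T."""
    w = weights[:, None]
    # Weighted normal equations: (X^T W X) T = (X^T W Y)
    A = positions0.T @ (w * positions0)
    B = positions0.T @ (w * positions1)
    return np.linalg.solve(A, B)

rng = np.random.default_rng(0)
positions0 = rng.standard_normal((100, 2))
T_true = np.array([[1.1, 0.05], [-0.05, 0.9]])
positions1 = positions0 @ T_true
T_fit = fit_affine_2x2(positions0, positions1, np.ones(100))
```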
- tike.ptycho.position.estimate_global_transformation_ransac(positions0, positions1, weights=None, transform=AffineTransform(scale0=1.0, scale1=1.0, shear1=0.0, angle=0.0, t0=0.0, t1=0.0), min_sample=4, max_error=32, min_consensus=0.75, max_iter=20)[source]#
Use RANSAC to estimate the global affine transformation.
- Parameters
min_sample (int) – The number of positions used to initialize each candidate model.
max_error (float) – The distance from the model which determines inlier/outlier status.
min_consensus (float) – The proportion of points needed to accept a model as the consensus.
positions0 (numpy.ndarray) –
positions1 (numpy.ndarray) –
weights (Optional[numpy.ndarray]) –
transform (AffineTransform) –
max_iter (int) –
- Return type
tuple[tike.ptycho.position.AffineTransform, float]
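The RANSAC loop itself is generic. A compact sketch of the idea with the same hyperparameter names (fit a minimal random sample, accept the first model whose inlier fraction reaches the consensus threshold; not tike's implementation):

```python
import numpy as np

def ransac_affine(p0, p1, min_sample=4, max_error=32, min_consensus=0.75,
                  max_iter=20, seed=0):
    """Return the first 2x2 model, fit to a random minimal sample, whose
    inlier fraction (distance below max_error) reaches min_consensus."""
    rng = np.random.default_rng(seed)
    for _ in range(max_iter):
        sample = rng.choice(len(p0), size=min_sample, replace=False)
        T, *_ = np.linalg.lstsq(p0[sample], p1[sample], rcond=None)
        error = np.linalg.norm(p0 @ T - p1, axis=-1)
        if np.mean(error < max_error) >= min_consensus:
            return T
    return None

rng = np.random.default_rng(1)
p0 = rng.standard_normal((50, 2)) * 100
T_true = np.array([[1.02, 0.01], [-0.01, 0.98]])
p1 = p0 @ T_true
p1[:5] += 500.0  # a few gross outliers that plain least squares would absorb
T_est = ransac_affine(p0, p1)
```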
- tike.ptycho.position.gaussian_gradient(x, sigma=0.333)[source]#
Return 1st order Gaussian derivatives of the last two dimensions of x.
Don’t use scipy.ndimage.gaussian_filter because we only want derivatives along the last two dimensions, not all dimensions.
References
https://www.crisluengo.net/archives/22/
- Parameters
x (cupy.ndarray) –
sigma (float) –
- Return type
tuple[cupy.ndarray, cupy.ndarray]
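A sketch of the same operation using SciPy's 1D filters, which touch only the requested axes (an assumed equivalent built from scipy.ndimage.gaussian_filter1d, not tike's CUDA path):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def gaussian_gradient_sketch(x, sigma=0.333):
    """Return 1st-order Gaussian derivatives along the last two axes only.

    Each output derives along one axis and smooths along the other, so any
    leading (batch) dimensions are left untouched, which is why a full
    N-dimensional gaussian_filter is avoided.
    """
    grad_row = gaussian_filter1d(
        gaussian_filter1d(x, sigma, axis=-1), sigma, axis=-2, order=1)
    grad_col = gaussian_filter1d(
        gaussian_filter1d(x, sigma, axis=-2), sigma, axis=-1, order=1)
    return grad_row, grad_col

# Derivatives of a constant image vanish; the batch dimension is preserved.
x = np.ones((5, 8, 8))
grad_row, grad_col = gaussian_gradient_sketch(x)
```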
- tike.ptycho.position.update_positions_pd(operator, data, psi, probe, scan, dx=-1, step=0.05)[source]#
Update scan positions using the gradient of intensity method.
Uses the finite difference method to compute the gradient of the farfield intensity with respect to position movement in horizontal and vertical directions. Then a least squares solver is used to find the position shift that will minimize the intensity error for each of the detector pixels.
- Parameters
farplane (array-like complex64) – The current farplane estimate from psi, probe, scan
dx (float) – The step size used to estimate the gradient
References
Dwivedi, Priya, A.P. Konijnenberg, S.F. Pereira, and H.P. Urbach. 2018. “Lateral Position Correction in Ptychography Using the Gradient of Intensity Patterns.” Ultramicroscopy 192 (September): 29–36. https://doi.org/10.1016/j.ultramic.2018.04.004.
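The finite-difference-plus-least-squares pattern the docstring describes can be illustrated on a toy one-dimensional problem (the names and the toy intensity model are mine, not tike's):

```python
import numpy as np

def position_update_sketch(intensity, measured, position, dx=0.5, step=0.05):
    """One gradient-of-intensity update for a scalar scan position.

    The gradient of each detector pixel with respect to the position is
    estimated by finite differences; least squares then gives the shift
    that minimizes the intensity residual.
    """
    f0 = intensity(position)
    grad = (intensity(position + dx) - f0) / dx   # finite-difference gradient
    residual = measured - f0
    shift = grad @ residual / (grad @ grad)       # 1D least-squares solution
    return position + step * shift

# Toy model: a Gaussian intensity pattern translated by the position.
pixels = np.linspace(-5, 5, 64)
model = lambda p: np.exp(-(pixels - p) ** 2)
position = 0.5
for _ in range(300):
    position = position_update_sketch(model, model(1.0), position)
# position converges toward the true value 1.0
```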