diff --git a/docs/theory/force_calibration/active.rst b/docs/theory/force_calibration/active.rst index 4a7a00752..c758d8563 100644 --- a/docs/theory/force_calibration/active.rst +++ b/docs/theory/force_calibration/active.rst @@ -1,3 +1,5 @@ +.. _active_calibration_theory: + Active Calibration ------------------ @@ -124,9 +126,9 @@ If we define a factor :math:`c` by which the velocity is reduced, we obtain the .. math:: \begin{align} - R_{d, corrected} & = c R_d\\ - R_{f, corrected} & = \frac{1}{c} R_f\\ - \kappa_{corrected} & = \frac{1}{c^2}\kappa + R_{d\mathrm{, corrected}} & = c R_d\\ + R_{f\mathrm{, corrected}} & = \frac{1}{c} R_f\\ + \kappa_\mathrm{corrected} & = \frac{1}{c^2}\kappa \end{align} Where :math:`R_d` is the displacement sensitivity, :math:`R_f` is the force sensitivity and :math:`\kappa` is the stiffness. @@ -144,7 +146,9 @@ Filling in the maximal velocity we expect during the oscillation, we find the fo Re = \frac{\rho u L}{\eta} = 2 \pi f A d \frac{\rho}{\eta} -Here :math:`\rho` refers to the fluid density, :math:`u` the characteristic velocity, :math:`L` the characteristic length scale, :math:`\eta` the viscosity, :math:`f` the oscillation frequency, :math:`A` the oscillation amplitude and :math:`d` the bead diameter. +Here :math:`\rho` refers to the fluid density, :math:`u` the characteristic velocity, :math:`L` the +characteristic length scale, :math:`\eta` the viscosity, :math:`f` the oscillation frequency, :math:`A` +the oscillation amplitude and :math:`d` the bead diameter. For microfluidic flow, this value is typically much smaller than `1`. In this limit, the Navier-Stokes equation describing fluid flow reduces to the following expressions: @@ -158,32 +162,41 @@ In this limit, the Navier-Stokes equation describing fluid flow reduces to the f Here :math:`\eta` is the viscosity, :math:`p` is the pressure and :math:`v` is the fluid velocity. Creeping flow is far removed from every day intuition as it equilibrates instantaneously. -The advantage of this is that for sufficiently low frequencies, the correction factor can be based on the correction factor one would obtain for a steady state constant flow. +The advantage of this is that for sufficiently low frequencies, the correction factor can be based on +the correction factor one would obtain for a steady state constant flow. For two beads aligned in the flow direction, we can use the analytical solution presented in :cite:`stimson1926motion`. -This model uses symmetry considerations to solve the creeping flow problem for two solid spheres moving at a constant velocity parallel to their line of centers. +This model uses symmetry considerations to solve the creeping flow problem for two solid spheres moving +at a constant velocity parallel to their line of centers. We denote the correction factor obtained from this model as :math:`c_{\|}`. -This correction factor is given by the ratio of the drag coefficient by the drag coefficient one would expected from a single bead in creeping flow (:math:`3 \pi \eta d v`). -For beads aligned perpendicular to the flow direction, we use a model from :cite:`goldman1966slow`, which we denote as :math:`c_{\perp}`. +This correction factor is given by the ratio of the drag coefficient by the drag coefficient one would +expect from a single bead in creeping flow (:math:`3 \pi \eta d v`). +For beads aligned perpendicular to the flow direction, we use a model from :cite:`goldman1966slow`, +which we denote as :math:`c_{\perp}`. 
-From the derivations in these papers, it follows that the correction factors obtained depend on the bead diameter(s) :math:`d` and distance between the beads :math:`l`. +From the derivations in these papers, it follows that the correction factors obtained depend on the +bead diameter(s) :math:`d` and the distance between the beads :math:`l`. For equally sized beads, this dependency is a function of the ratio of the distance between the beads over the bead diameter. -Considering the linearity of the equations that describe creeping flow :cite:`goldman1966slow`, we can combine the two analytical solutions by decomposing the incoming velocity (in the direction :math:`\vec{e}_{osc}`) into a velocity perpendicular to the bead-to-bead axis :math:`\vec{e}_{\perp}` and a velocity component aligned with the bead-to-bead axis :math:`\vec{e}_{\|}`. +Considering the linearity of the equations that describe creeping flow :cite:`goldman1966slow`, we can +combine the two analytical solutions by decomposing the incoming velocity (in the direction :math:`\vec{e}_{osc}`) +into a velocity perpendicular to the bead-to-bead axis :math:`\vec{e}_{\perp}` and a velocity component +aligned with the bead-to-bead axis :math:`\vec{e}_{\|}`. .. math:: \begin{align} - v_{\|} & = (\vec{e}_{\|} \cdot\vec{e}_{osc}) c_{\|}\\ - v_{\perp} & = (\vec{e}_{\perp} \cdot \vec{e}_{osc}) c_{\perp} + v_{\|} & = (\vec{e}_{\|} \cdot\vec{e}_\mathrm{osc}) c_{\|}\\ + v_{\perp} & = (\vec{e}_{\perp} \cdot \vec{e}_\mathrm{osc}) c_{\perp} \end{align} -This provides us with contributions for each of those axes, but we still need to project this back to the oscillation axis (since this is where we measure our amplitude). +This provides us with contributions for each of those axes, but we still need to project this back +to the oscillation axis (since this is where we measure our amplitude). We can calculate our desired hydrodynamic correction factor as: .. math:: - c_{total} = v_{\|} (\vec{e}_{\|} \cdot \vec{e}_{osc}) + v_{\perp} (\vec{e}_{\perp} \cdot \vec{e}_{osc}) + c_\mathrm{total} = v_{\|} (\vec{e}_{\|} \cdot \vec{e}_\mathrm{osc}) + v_{\perp} (\vec{e}_{\perp} \cdot \vec{e}_\mathrm{osc}) The response of this combined model for equally sized beads can be calculated as follows:: @@ -199,5 +212,6 @@ The response of this combined model for equally sized beads can be calculated as .. image:: figures/correction_factor.png -Here, when providing only a horizontal distance recovers the Stimson model :cite:`stimson1926motion`, while a vertical displacement recovers the Goldman model :cite:`goldman1966slow`. +Here, providing only a horizontal distance recovers the Stimson model :cite:`stimson1926motion`, +while a vertical displacement recovers the Goldman model :cite:`goldman1966slow`. To find out more about how to use these correction factors, please refer to the :ref:`tutorial`. diff --git a/docs/theory/force_calibration/diode.rst b/docs/theory/force_calibration/diode.rst new file mode 100644 index 000000000..de79d4fcf --- /dev/null +++ b/docs/theory/force_calibration/diode.rst @@ -0,0 +1,72 @@ +.. _diode_theory: + +Position sensitive detector +--------------------------- + +The previous section introduced the origin of the frequency spectrum of a bead in an optical trap. +In reality, our measurement is affected by two processes: + +1. The motion of the bead in the trap. +2. The response of the detector to the incident light. + +.. 
image:: figures/diode_filtering.png + :nbattach: + +This second factor depends on the type of measurement device being used. +Typical position sensitive detectors are made of silicon. +Such a detector has a very high bandwidth for visible light (in the MHz range). +Unfortunately, the bandwidth is markedly reduced for the near infra-red light of the trapping laser +:cite:`berg2003unintended,berg2006power`. +This makes it less sensitive to changes in signal at high frequencies. + +Why is the bandwidth limited? +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The high bandwidth of visible light detection of a silicon photodiode is achieved when incoming photons +are absorbed in the so-called depletion layer of the diode. +Unfortunately, silicon has an increased transparency at the near infra-red wavelength of the trapping laser. +The result of this is that light penetrates deeper into the substrate of the diode, where it generates +charge carriers in a different region of the diode. +These charge carriers then have to diffuse back to the depletion layer, which takes time. +As a result, a fraction of the signal is detected with a much lower bandwidth. + +.. image:: figures/diode.png + :nbattach: + +This effect is often referred to as the parasitic filtering effect and is frequently modelled as a first order lowpass filter. +This model is characterized by two numbers whose values depend on the incident laser power :cite:`berg2003unintended`: + +- A frequency `f_diode`, given in Hertz. +- A unit-less relaxation factor `alpha`, which reflects the fraction of light that is transmitted instantaneously. + +.. _high_corner_freq: + +High corner frequencies +^^^^^^^^^^^^^^^^^^^^^^^ + +In the literature, the diode parameters are frequently estimated simultaneously with the calibration data +:cite:`berg2003unintended,hansen2006tweezercalib,berg2006power,tolic2006calibration,tolic2004matlab,berg2004power`. +Unfortunately, this can cause issues when calibrating at high powers. + +Recall that the physical spectrum is characterized by a corner frequency `fc` and a diffusion constant `D`. +The corner frequency depends on the laser power and bead size (smaller beads resulting in higher corner frequencies). +The parasitic filtering effect also has a corner frequency (`f_diode`), which likewise depends on the incident intensity :cite:`berg2003unintended`. + +When these two frequencies are similar, they can no longer be estimated reliably from the power spectrum. +The reason for this is that the effects that these parameters have on the power spectrum become very similar. +When working with small beads or at high laser powers, it is important to verify that the corner frequency `fc` +does not approach the frequency of the filtering effect `f_diode`. + +Sometimes, the parameters of this diode have been characterized independently. +In that case, the arguments `fixed_diode` and `fixed_alpha` can be passed to :func:`~lumicks.pylake.calibrate_force()` +to fix these parameters to their predetermined values, resolving this issue. +For more information on how to achieve this with Pylake, please refer to the :ref:`diode calibration tutorial`. + +Mathematical background +^^^^^^^^^^^^^^^^^^^^^^^ + +In the literature, this parasitic filtering effect is frequently modelled to good accuracy with a first order approximation :cite:`berg2003unintended,tolic2006calibration,berg2006power`. + +.. 
math:: + + g(f, f_\mathrm{diode}, \alpha) = \alpha^2 + \frac{1 - \alpha ^ 2}{1 + (f / f_\mathrm{diode})^2} \tag{$-$} diff --git a/docs/theory/force_calibration/figures/blocking.gif b/docs/theory/force_calibration/figures/blocking.gif new file mode 100644 index 000000000..03b921170 --- /dev/null +++ b/docs/theory/force_calibration/figures/blocking.gif @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e526c60c7eb0470f5d0554912a4d1f56bb5bc4eb2ebd3e7ffc5f7381e301fec +size 234543 diff --git a/docs/theory/force_calibration/figures/diode.png b/docs/theory/force_calibration/figures/diode.png new file mode 100644 index 000000000..072dd1bd8 --- /dev/null +++ b/docs/theory/force_calibration/figures/diode.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:46c1e473593aff5873db12f51b11b062d43e1d3ee420a9a6fde5e75651be848e +size 136470 diff --git a/docs/theory/force_calibration/figures/diode_filtering.png b/docs/theory/force_calibration/figures/diode_filtering.png new file mode 100644 index 000000000..a377ed62c --- /dev/null +++ b/docs/theory/force_calibration/figures/diode_filtering.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b4dfedf32d3a57da7077bd3bef93f599db4a1e28571087ed9351928ec1387925 +size 202936 diff --git a/docs/theory/force_calibration/figures/hydro_fast.png b/docs/theory/force_calibration/figures/hydro_fast.png new file mode 100644 index 000000000..07a8751a9 --- /dev/null +++ b/docs/theory/force_calibration/figures/hydro_fast.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:35cd996d7220906950b9c6bf8e3fd2cbbf7fbd0cae256dc2347dd61bff7ec04d +size 63964 diff --git a/docs/theory/force_calibration/figures/noise_floor_fixed_diode.png b/docs/theory/force_calibration/figures/noise_floor_fixed_diode.png new file mode 100644 index 000000000..7a78ff827 --- /dev/null +++ b/docs/theory/force_calibration/figures/noise_floor_fixed_diode.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c3d9a8b53eec82ac2e3236a3aeda7a036ae2fedc0a601c71332567a9550a4382 +size 57302 diff --git a/docs/theory/force_calibration/figures/noise_floor_free_diode.png b/docs/theory/force_calibration/figures/noise_floor_free_diode.png new file mode 100644 index 000000000..4ed960257 --- /dev/null +++ b/docs/theory/force_calibration/figures/noise_floor_free_diode.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2e6ec42c7272e161a32f31daaac74977ebf5094b368d1ec4c8e63385218cbffb +size 60601 diff --git a/docs/theory/force_calibration/figures/sim_trap_opt.gif b/docs/theory/force_calibration/figures/sim_trap_opt.gif new file mode 100644 index 000000000..3caa15796 --- /dev/null +++ b/docs/theory/force_calibration/figures/sim_trap_opt.gif @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5254184598adb1fd8d801119d1b3d81e7a417632ac0c9dda035a38b7f784edf +size 849263 diff --git a/docs/theory/force_calibration/fitting.rst b/docs/theory/force_calibration/fitting.rst index 3c5b7637c..52ccc160f 100644 --- a/docs/theory/force_calibration/fitting.rst +++ b/docs/theory/force_calibration/fitting.rst @@ -1,13 +1,25 @@ Fitting a power spectrum ------------------------ -In the previous section, the physical origin of the power spectrum was introduced. -However, there are some practical aspects to consider. -So far, we have only considered the expectation value of the power spectrum. -In reality, power spectral values follow a distribution. +.. 
only:: html -The real and imaginary part of the frequency spectrum are normally distributed. + :nbexport:`Download this page as a Jupyter notebook ` + +In the previous sections, the physical origin of the power spectrum was introduced. +However, there are some additional practical aspects to consider. + +Spectral down-sampling +^^^^^^^^^^^^^^^^^^^^^^ + +So far, we have treated the power spectrum as a simple smooth curve where each frequency corresponds to a given amplitude. +What we actually have been looking at is the expected value of the power spectrum. +Basically, if you were to average many power spectra, their average would eventually converge to this curve. + +In reality, each data point in a power spectrum follows a distribution. +The real and imaginary part of the complex spectrum are normally distributed. +This spectrum is squared to obtain our power spectrum. As a consequence, the squared magnitude of the power spectrum is exponentially distributed. + This has two consequences: - Fitting the power spectral values directly using a simple least squares fitting routine, we would @@ -15,9 +27,12 @@ This has two consequences: resulting in overestimated trap stiffness and force response and an underestimated distance response. - The signal to noise ratio is poor (equal to one :cite:`norrelykke2010power`). +.. _blocking_windowing: + A commonly used method for dealing with this involves data averaging, which trades resolution for an -improved signal to noise ratio. In addition, by virtue of the central limit theorem, data averaging -leads to a more symmetric data distribution (more amenable to standard least-squares fitting procedures). +improved signal to noise ratio. +By virtue of the central limit theorem, as we average more data, the distribution of the data points +becomes more and more Gaussian and therefore more amenable to standard least-squares fitting procedures. There are two ways to perform such averaging: @@ -27,32 +42,149 @@ There are two ways to perform such averaging: spectral domain by averaging adjacent bins according to :cite:`berg2004power`. This procedure is referred to as *blocking*. -We use the blocking method for spectral averaging, since this allows us to reject noise peaks at high -resolution prior to averaging. Note however, that the error incurred by this blocking procedure depends -on :math:`n_b`, the number of points per block, :math:`\Delta f`, the spectral resolution and inversely -on the corner frequency :cite:`berg2004power`. +.. image:: figures/blocking.gif + :nbattach: + +Blocking +"""""""" + +Pylake uses the blocking method for spectral averaging, since this allows us to reject noise peaks +at high resolution prior to averaging (more on this later). +Note however, that the error incurred by this blocking procedure depends on :math:`n_b`, the number +of points per block, :math:`\Delta f`, the spectral resolution and inversely on the corner +frequency :cite:`berg2004power`. + +.. math:: + + \bar{f} &= \frac{1}{n_b} \sum_{f \in \mathrm{block}} f\\ + \bar{P}_\mathrm{meas} &= \frac{1}{n_b} \sum_{f \in \mathrm{block}} P_\mathrm{meas}(f) + +Setting the number of points per block too low results in a bias from insufficient averaging :cite:`berg2004power`. +Insufficient averaging would result in an overestimation of the force response :math:`R_f` and an +underestimation of the distance response :math:`R_d`. +In practice, one should use a high number of points per block (:math:`n_b \gg 100`), +unless a very low corner frequency precludes this. 
In such cases, it is preferable to increase the +measurement time. + +Bias correction +""""""""""""""" + +When sufficient blocking has taken place and noise peaks have been excluded prior to blocking, +the spectral data points are approximately Gaussian distributed with standard deviation: + +.. math:: + + \sigma(\bar{f}) = \frac{P(\bar{f})}{\sqrt{n_b}} + +This means that regular weighted least squares (WLS) can be used for fitting. +To ensure unbiased estimates in WLS, the data and squared weights must be uncorrelated. +However, there is a known correlation between these which results +in a known bias in the estimate for the diffusion constant that can be corrected after +fitting :cite:`norrelykke2010power`: + +.. math:: + + D_\mathrm{corrected} = D_\mathrm{wls} \frac{n_b}{n_b + 1} + +.. _noise_floor: + +Noise floor +^^^^^^^^^^^ + +When operating at very low powers (and by extension corner frequencies), a noise floor may be visible at high frequencies. +It is important to ensure that the upper limit of the fitting range does *not* include the noise floor +as it is not taken into account in the calibration model. +In the dataset below, we can see the effect of the noise floor:: + + lk.download_from_doi("10.5281/zenodo.7729823", "test_data") + f = lk.File("test_data/noise_floor.h5") + + force_channel = f.force1x + reference_calibration = force_channel.calibration[0] + pars = { + "force_voltage_data": force_channel.data / reference_calibration.force_sensitivity, + "sample_rate": force_channel.sample_rate, + "bead_diameter": 4.34, + "temperature": 25, + "hydrodynamically_correct": True, + "num_points_per_block": 150, + "fit_range": [100, 20000], + } + + plt.figure(figsize=(7, 5)) + plt.subplot(2, 2, 1) + calibration = lk.calibrate_force(**pars) + calibration.plot() + plt.title(f"Stiffness: {calibration.stiffness:.2f}") + plt.subplot(2, 2, 3) + calibration.plot_spectrum_residual() + + plt.subplot(2, 2, 2) + calibration = lk.calibrate_force(**pars | {"fit_range": [100, 3000]}) + calibration.plot() + plt.title(f"Stiffness: {calibration.stiffness:.2f}") + plt.subplot(2, 2, 4) + calibration.plot_spectrum_residual() + plt.tight_layout() + +.. image:: figures/noise_floor_free_diode.png + +Note that when we have a diode calibration, excluding the noise floor becomes even more important:: + + diode_calibration = reference_calibration.diode_calibration + pars = pars | diode_calibration(f["Diagnostics"]["Trap power 1"]) + + plt.figure(figsize=(7, 5)) + plt.subplot(2, 2, 1) + calibration = lk.calibrate_force(**pars) + calibration.plot() + plt.title(f"Stiffness: {calibration.stiffness:.2f}") + plt.subplot(2, 2, 3) + calibration.plot_spectrum_residual() + + plt.subplot(2, 2, 2) + calibration = lk.calibrate_force(**pars | {"fit_range": [100, 3000]}) + calibration.plot() + plt.title(f"Stiffness: {calibration.stiffness:.2f}") + plt.subplot(2, 2, 4) + calibration.plot_spectrum_residual() + plt.tight_layout() + +.. image:: figures/noise_floor_fixed_diode.png + +The reason the effect of the noise floor on the calibration parameters is more pronounced is because +with the fixed diode model, the model is not free to adjust the diode parameters to mitigate its impact. +As a result, the model uses the corner frequency in an attempt to capture the shape of the noise floor (strongly biasing the result). + +Noise peaks +^^^^^^^^^^^ -Setting the number of points per block too low would result in a bias from insufficient averaging -:cite:`berg2004power`. 
Insufficient averaging would result in an overestimation of the force response -(:math:`R_f`) and an underestimation of the distance response (:math:`R_d`). In practice, one should -use a high number of points per block (:math:`n_b \gg 100`), unless a very low corner frequency precludes this. -In such cases, it is preferable to increase the measurement time. +Optical tweezers are precision instruments. +Despite careful determination and elimination of noise sources, it is not always possible to exclude all potential sources of noise, +which can manifest in the power spectrum as spurious peaks. +One downside of weighted least squares estimation, is that it is very sensitive to such outliers. +It is therefore important to either exclude noise peaks from the data prior to fitting or use :ref:`robust fitting`. +Noise peaks should always be excluded prior to blocking to minimize data loss. .. _goodness_of_fit: Goodness of fit --------------- -Based on the model and noise assumptions, we can calculate a goodness of fit criterion. -When sufficient blocking has taken place, the sum of squared residuals that is being minimized during the fitting procedure is distributed according to a chi-squared distribution characterized by :math:`N_{\mathit{dof}} = N_{\mathit{data}} - N_{\mathit{free}}` degrees of freedom. -Here :math:`N_{\mathit{data}}` corresponds to the number of data points we fitted (after blocking) and :math:`N_{\mathit{free}}` corresponds to the number of parameters we fitted. +When working with the Gaussian error model, we can calculate a goodness of fit criterion. +When sufficient blocking has taken place, the sum of squared residuals that is being minimized during +the fitting procedure is distributed according to a chi-squared distribution characterized by +:math:`N_{\mathrm{dof}} = N_{\mathrm{data}} - N_{\mathrm{free}}` degrees of freedom. +Here :math:`N_{\mathrm{data}}` corresponds to the number of data points we fitted (after blocking) and +:math:`N_{\mathrm{free}}` corresponds to the number of parameters we fitted. We can use the value we obtain to determine how unusual the fit error we obtained is. .. math:: - \mathrm{support} = 100 P(x > \chi_{\mathit{obtained}}^2) = 100 \int_{\chi_{\mathit{obtained}}^2}^{\infty} \chi^2_{N_{\mathit{dof}}}(x) dx + \mathrm{support} = 100 P(x > \chi_{\mathrm{obtained}}^2) = 100 \int_{\chi_{\mathrm{obtained}}^2}^{\infty} \chi^2_{N_{\mathrm{dof}}}(x) dx -The support or backing is the probability that a repetition of the measurement that produced the data we fitted to will, after fitting, produce residuals whose squared sum is greater than the one we initially obtained. +The support or backing is the probability that a repetition of the measurement that produced the data +we fitted will, after fitting, produce residuals whose squared sum is greater than the one we initially obtained. More informally, it represents the probability that a fit error at least this large should occur by chance. Support less than 1% warrants investigating the residuals for any trend in the residuals. diff --git a/docs/theory/force_calibration/force_calibration.rst b/docs/theory/force_calibration/force_calibration.rst index 3cc2280e2..5fcdd32f3 100644 --- a/docs/theory/force_calibration/force_calibration.rst +++ b/docs/theory/force_calibration/force_calibration.rst @@ -8,9 +8,8 @@ Introduction Why is force calibration necessary? 
----------------------------------- -Optical tweezers typically measure forces and displacements by detecting deflections of a trapping -laser by a trapped bead. These deflections are measured at the back focal plane of the beam using -position sensitive detectors (PSDs). +Optical tweezers typically measure forces and displacements by detecting deflections of a trapping laser by a trapped bead. +These deflections are measured at the back focal plane of the beam using position sensitive detectors (PSDs). .. image:: figures/back_focal.png :nbattach: @@ -30,29 +29,75 @@ and Where :math:`V` is the position-dependent voltage signal from the PSD and :math:`R_d` and :math:`R_f` are the displacement and force sensitivity proportionality constants, respectively. -Force calibration refers to computing these conversion factors. + +Force calibration refers to computing the calibration factors needed to convert from raw voltages to +actual forces and displacements. The values we wish to calculate are: + +- Trap stiffness :math:`\kappa`, which reflects how strongly a bead is held by a trap. +- Force response :math:`R_f`, the proportionality constant between voltage and force. +- Distance response :math:`R_d`, the proportionality constant between voltage and distance. Several methods exist to calibrate optical traps based on sensor signals. In this section, we will provide an overview of the physical background of power spectral calibration. -Why does the power spectrum look the way it does? -------------------------------------------------- +How can we calibrate? +--------------------- + +We will start by building some intuition about the physical processes. +Consider a small bead suspended in fluid (no optical trapping taking place). +This bead moves around due to the random collisions of molecules against the bead. +If we observe this motion over time, then we can see that it is a lot like the bead is taking aimless steps. +This unimpeded movement is often called free diffusion or Brownian motion. +How quickly the bead moves from the original position is characterized by a **diffusion constant** :math:`D`. +When we look at the motion of the bead over longer time periods, then each random step contributes +only a little to the overall movement. Because these steps are random, they don't perfectly cancel +out in the short term, and lead to a gradual drift. If there is no optical trap keeping the bead in +place, then the bead slowly drifts off from its starting position. + +One way to analyze this motion is to make a power spectrum plot of the bead's position. +This plot shows how much motion there is at different frequencies of movement. +Lower frequencies correspond to longer time intervals. +These are the time intervals we associated with the broader, slow movements of the bead. +If we think about a bead moving due to random collisions, then we can expect that the bead will move +more in these longer time intervals. This is why in the power spectrum of free diffusion, we see a +lot more energy concentrated at these low frequencies, while the rapid jiggles at higher frequency +contribute far less. The amplitude of this power spectrum is related to :math:`D`. + +.. image:: figures/sim_trap_opt.gif + :nbattach: + +Now we introduce the optical trap, which pulls the bead back to the laser focus. +The strength of this pull depends on how far the bead is from the focus, like a spring. +Because of this, those motions which are larger will experience a strong pull from the trap and the +motion will be limited. 
This damping of larger motions (at lower frequencies) manifests itself as +a sharp reduction of amplitudes in the power spectrum below a certain threshold frequency. +This leads to a plateau at low frequencies in the power spectrum of a trapped bead. +The point at which the power spectrum transitions from a plateau to a downward slope +is known as the **corner frequency** :math:`f_c`. + +Important takeaways +------------------- + +- The spectrum of bead motion in a trap can be characterized by a diffusion constant and corner frequency. +- At low frequencies, the trapping force dominates and limits the amplitudes, while at high frequencies the drag on the bead does. + +Mathematical background +----------------------- + -Consider a small bead freely diffusing in a medium (no optical trapping taking place). Neglecting hydrodynamic and inertial effects (more on this later), we obtain the following equation of motion: .. math:: - \dot{x} = \frac{1}{\gamma} F_{thermal}(t) + \dot{x} = \frac{1}{\gamma} F_\mathrm{thermal}(t) where :math:`x` is the time-dependent position of the bead, :math:`\dot{x}` is the first derivative -with respect to time, :math:`\gamma` is the drag coefficient and :math:`F_{thermal}(t)` is the thermal +with respect to time, :math:`\gamma` is the drag coefficient and :math:`F_\mathrm{thermal}(t)` is the thermal force driving the diffusion. This thermal force is assumed to have the statistical properties of uncorrelated white noise: .. math:: - F_{thermal}(t) = \sqrt{2 \gamma k_B T} \xi(t) + F_\mathrm{thermal}(t) = \sqrt{2 \gamma k_B T} \xi(t) where :math:`k_B` is the Boltzmann constant, :math:`T` is the absolute temperature, and :math:`\xi(t)` is normalized white noise. diff --git a/docs/theory/force_calibration/hyco.rst b/docs/theory/force_calibration/hyco.rst index 19b0aecce..6badf16ae 100644 --- a/docs/theory/force_calibration/hyco.rst +++ b/docs/theory/force_calibration/hyco.rst @@ -1,17 +1,92 @@ +.. _hydro_model_theory: + Hydrodynamically correct model ------------------------------ -While the idealized model discussed in the previous section is sometimes sufficiently accurate, -there are scenarios where more detailed models are necessary. +.. only:: html + + :nbexport:`Download this page as a Jupyter notebook ` + +While the idealized model of the bead motion is sometimes sufficiently accurate, +it neglects inertial and hydrodynamic effects of the fluid and bead(s). + +The frictional forces applied by the viscous environment to the bead are proportional to the bead's velocity relative to the fluid. +The idealized model is based on the assumption that the bead's relative velocity is constant; +there is no dynamic change of the fluid motion around the bead. +In reality, when the bead moves through the fluid, the frictional force between the bead and the fluid +depends on the past motion, since that determines the fluid's current motion. +For a stochastic process such as Brownian motion, a constant relative velocity is not an accurate assumption. +In addition, the bead and the surrounding fluid have their own mass and inertia, which are also neglected in the idealized model. -The frictional forces applied by the viscous environment to the bead are proportional to the bead's -velocity. The idealized model is based on the assumption that the bead's velocity is constant, which, -for a stochastic process such as Brownian motion, is not an accurate assumption. In addition, the -bead and the surrounding fluid have their own mass and inertia, which are also neglected in the idealized model. 
-Together, the non-constant speed and the inertial effects result in frequency-dependent frictional -forces that a more accurate hydrodynamically correct model takes into account. +Together, the non-constant relative velocity and the inertial effects result in frequency-dependent +frictional forces that a more accurate hydrodynamically correct model takes into account. These effects are strongest at higher frequencies, and for larger bead diameters. +The figure below shows the difference between the hydrodynamically correct model (solid lines) and the +idealized Lorentzian model (dashed lines) for various bead sizes. It can be seen that for large bead +sizes and higher trap powers the differences can be substantial. + +.. image:: figures/hydro.png + :nbattach: + +.. _fast_sensor_hyco: + +Fast sensor measurement +^^^^^^^^^^^^^^^^^^^^^^^ + +.. note:: + + The following section only applies to instruments which contain fast PSDs. + +When fitting a power spectrum, one may ask the question: "Why does the fit look good if the model is bad?" +The answer to this lies in the model that is used to capture the :ref:`parasitic filtering effect`. +When the parameters of this model are estimated, they can "hide" the mis-specification of the model. + +Fast detectors have the ability to respond much faster to incoming light resulting in no visible filtering effect in the frequency range we are fitting. +This means that for a fast detector, we do not need to include such a filtering effect in our model and see the power spectrum for what it really is. + +We can omit this effect by passing `fast_sensor=True` to the calibration models or to :func:`~lumicks.pylake.calibrate_force()`. +Note however, that this makes using the hydrodynamically correct model critical, as the simple model doesn't actually capture the data very well. +The following example data acquired on a fast sensor will illustrate why:: + + filenames = lk.download_from_doi("10.5281/zenodo.7729823", "test_data") + f = lk.File("test_data/fast_measurement_25.h5") + + # Decalibrate the force data + volts = f.force2y / f.force2y.calibration[0].force_sensitivity + + shared_parameters = { + "force_voltage_data": volts.data, + "bead_diameter": 4.38, + "temperature": 25, + "sample_rate": volts.sample_rate, + "fit_range": (1e2, 23e3), + "num_points_per_block": 200, + "excluded_ranges": ([190, 210], [13600, 14600]) + } + + plt.figure(figsize=(13, 4)) + plt.subplot(1, 3, 1) + fit = lk.calibrate_force(**shared_parameters, hydrodynamically_correct=False, fast_sensor=False) + fit.plot() + plt.title(f"Simple model + Slow (kappa={fit['kappa'].value:.2f})") + plt.subplot(1, 3, 2) + fit = lk.calibrate_force(**shared_parameters, hydrodynamically_correct=False, fast_sensor=True) + fit.plot() + plt.title(f"Simple model + Fast (kappa={fit['kappa'].value:.2f})") + plt.subplot(1, 3, 3) + fit = lk.calibrate_force(**shared_parameters, hydrodynamically_correct=True, fast_sensor=True) + fit.plot() + plt.title(f"Hydrodynamically correct + Fast (kappa={fit['kappa'].value:.2f})") + plt.tight_layout() + plt.show() + +.. image:: figures/hydro_fast.png + +Mathematical background +^^^^^^^^^^^^^^^^^^^^^^^ + +This section will detail some of the implementational aspects of the hydrodynamically correct model. The following equation accounts for a frequency dependent drag :cite:`tolic2006calibration`: .. math:: @@ -46,101 +121,3 @@ Here :math:`f_{\nu}` is the frequency at which the penetration depth equals the :math:`4 \nu/(\pi d^2)` with :math:`\nu` the kinematic viscosity. 
This approximation is reasonable, when the bead is far from the surface. - -When approaching the surface, the drag experienced by the bead depends on the distance between the -bead and the surface of the flow cell. An approximate expression for the frequency dependent drag is -then given by :cite:`tolic2006calibration`: - -.. math:: - - \gamma(f, R/l) = \frac{\gamma_\mathrm{stokes}(f)}{1 - \frac{9}{16}\frac{R}{l} - \left(1 - \left((1 - i)/3\right)\sqrt{\frac{f}{f_{\nu}}} + \frac{2}{9}\frac{f}{f_{\nu}}i - - \frac{4}{3}(1 - e^{-(1-i)(2l-R)/\delta})\right)} \tag{$\mathrm{kg/s}$} - -Where :math:`\delta = R \sqrt{\frac{f_{\nu}}{f}}` represents the aforementioned penetration depth, -:math:`R` corresponds to the bead radius and :math:`l` to the distance from the bead center to the nearest surface. - -While these models may look daunting, they are all available in Pylake and can be used by simply -providing a few additional arguments to the :class:`~.PassiveCalibrationModel`. It is recommended to -use these equations when less than 10% systematic error is desired :cite:`tolic2006calibration`. -No general statement can be made regarding the accuracy that can be achieved with the simple Lorentzian -model, nor the direction of the systematic error, as it depends on several physical parameters involved -in calibration :cite:`tolic2006calibration,berg2006power`. - -The figure below shows the difference between the hydrodynamically correct model (solid lines) and the -idealized Lorentzian model (dashed lines) for various bead sizes. It can be seen that for large bead -sizes and higher trap powers the differences can be substantial. - -.. image:: figures/hydro.png - :nbattach: - -.. note:: - - One thing to note is that when considering the surface in the calibration procedure, the drag - coefficient returned from the model corresponds to the drag coefficient extrapolated back to its - bulk value. - -Faxen's law -^^^^^^^^^^^ - -The hydrodynamically correct model presented in the previous section works well when the bead center -is at least 1.5 times the radius above the surface. When moving closer than this limit, we fall back -to a model that more accurately describes the change in drag at low frequencies, but neglects the -frequency dependent effects. - -To understand why, let's introduce Faxen's approximation for drag on a sphere near a surface under -creeping flow conditions. This model is used for lateral calibration very close to a surface -:cite:`schaffer2007surface` and is given by the following equation: - -.. math:: - - \gamma_\mathrm{faxen}(R/l) = \frac{\gamma_0}{ - 1 - \frac{9R}{16l} + \frac{1R^3}{8l^3} - \frac{45R^4}{256l^4} - \frac{1R^5}{16l^5} - } \tag{$\mathrm{kg/s}$} - -At frequency zero, the frequency dependent model used in the previous section reproduces this model -up to and including its second order term in :math:`R/l`. It is, however, a lower order model and the -accuracy decreases rapidly as the distance between the bead and surface become very small. -The figure below shows how the model predictions at frequency zero deviate strongly from the higher order model: - -.. image:: figures/freq_dependent_drag_zero.png - :nbattach: - -In addition, the deviation from a Lorentzian due to the frequency dependence of the drag is reduced -upon approaching a surface :cite:`schaffer2007surface`. - -.. 
image:: figures/freq_dependence_near.png - :nbattach: - -These two aspects make using Faxen's law in combination with a Lorentzian a more suitable model for -situations where we have to calibrate extremely close to the surface. - -Axial Calibration -^^^^^^^^^^^^^^^^^ - -For calibration in the axial direction, no hydrodynamically correct theory exists. - -Similarly as for the lateral component, we will fall back to a model that describes the change in -drag at low frequencies. However, while we had a simple expression for the lateral drag as a function -of distance, no simple closed-form equation exists for the axial dimension. Brenner et al provide an -exact infinite series solution :cite:`brenner1961slow`. Based on this solution :cite:`schaffer2007surface` -derived a simple equation which approximates the distance dependence of the axial drag coefficient. - -.. math:: - - \gamma_\mathrm{axial}(R/l) = \frac{\gamma_0}{ - 1.0 - - \frac{9R}{8l} - + \frac{1R^3}{2l^3} - - \frac{57R^4}{100l^4} - + \frac{1R^5}{5l^5} - + \frac{7R^{11}}{200l^{11}} - - \frac{1R^{12}}{25l^{12}} - } \tag{$\mathrm{kg/s}$} - -This model deviates less than 0.1% from Brenner's exact formula for :math:`l/R >= 1.1` and less than -0.3% over the entire range of :math:`l` :cite:`schaffer2007surface`. -Plotting these reveals that there is a larger effect of the surface in the axial than lateral direction. - -.. image:: figures/drag_coefficient.png - :nbattach: diff --git a/docs/theory/force_calibration/index.rst b/docs/theory/force_calibration/index.rst index d1a1c3c8b..f62ae46e8 100644 --- a/docs/theory/force_calibration/index.rst +++ b/docs/theory/force_calibration/index.rst @@ -1,6 +1,8 @@ Force Calibration ================= +.. _force_calibration_theory: + Find out about force calibration .. toctree:: @@ -8,7 +10,9 @@ Find out about force calibration :maxdepth: 1 force_calibration + diode fitting passive hyco + surface active diff --git a/docs/theory/force_calibration/passive.rst b/docs/theory/force_calibration/passive.rst index 9f2df6174..55c7fe563 100644 --- a/docs/theory/force_calibration/passive.rst +++ b/docs/theory/force_calibration/passive.rst @@ -33,7 +33,7 @@ To convert the parameters obtained from this spectral fit to a trap stiffness, t \kappa = 2 \pi \gamma_0 f_c \tag{$\mathrm{N/m}$} -Here :math:`\kappa` then represents the estimated trap stiffness. +where :math:`\kappa` is the estimated trap stiffness. We can calibrate the position by considering the diffusion of the bead: @@ -76,7 +76,7 @@ It is especially the latter that results in large calibration errors when mis-sp \begin{align} \kappa = 2 \pi \gamma(T) f_c &\propto& \eta(T)\\ - R_d = \sqrt{\frac{kT}{\gamma(T)D_{volts}}} &\propto& \sqrt{T / \eta(T)}\\ + R_d = \sqrt{\frac{kT}{\gamma(T)D_\mathrm{volts}}} &\propto& \sqrt{T / \eta(T)}\\ R_f = R_d \kappa &\propto& \sqrt{T \eta(T)} \end{align} diff --git a/docs/theory/force_calibration/surface.rst b/docs/theory/force_calibration/surface.rst new file mode 100644 index 000000000..4522e63dd --- /dev/null +++ b/docs/theory/force_calibration/surface.rst @@ -0,0 +1,123 @@ +Surface proximity +----------------- + +.. only:: html + + :nbexport:`Download this page as a Jupyter notebook ` + +.. _surface_models: + +The theory in the previous section holds for beads far away from any surface (in bulk). +When doing experiments near the surface, the surface starts to have an effect on the friction felt by the bead. +When the height and bead radius are known, the hydrodynamically correct model can take this into account. 
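+
+As a rough sketch of how this looks in practice (the bead diameter and heights below are arbitrary
+example values, not taken from a real experiment), the distance between the bead center and the
+surface can be passed in microns via the `distance_to_surface` argument of the
+:class:`~.PassiveCalibrationModel`::
+
+    import lumicks.pylake as lk
+
+    bead_diameter = 1.0  # example bead diameter (um)
+
+    # Bead center well above 1.5 bead radii from the surface: the hydrodynamically correct model applies
+    hydro_model = lk.PassiveCalibrationModel(
+        bead_diameter, hydrodynamically_correct=True, distance_to_surface=1.5 * bead_diameter
+    )
+
+    # Very close to the surface: a simple Lorentzian with a height-dependent drag is preferable
+    lorentzian_model = lk.PassiveCalibrationModel(
+        bead_diameter, hydrodynamically_correct=False, distance_to_surface=0.6 * bead_diameter
+    )
+
+Which of the two variants is appropriate depends on how close to the surface the calibration was
+performed, as discussed below.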
+ +The hydrodynamically correct model works well when the bead center is at least 1.5 times the radius +above the surface. When moving closer than this limit, it is better to use a model that more +accurately describes the change in drag at low frequencies, but neglects the frequency dependent effects. + +To see why this is, consider model predictions for the drag coefficient at frequency zero (constant flow). +At frequency zero, the Lorentzian and hydrodynamically correct model should predict similar behavior +(as the hydrodynamic effects should only be present at higher frequencies). Let's compare the +difference between the simple Lorentzian model and the hydrodynamically correct model near the surface: + +.. image:: figures/freq_dependent_drag_zero.png + :nbattach: + +where :math:`l` is the distance between the bead center and the surface and :math:`R` is the bead radius. + +In addition, the frequency dependent effects are reduced as we approach the surface :cite:`schaffer2007surface`. +We can see this when we plot the spectrum for two different ratios of `l/R`: + +.. image:: figures/freq_dependence_near.png + :nbattach: + +These two aspects make the simpler drag model, in combination with a Lorentzian, a +more suitable choice for situations where we have to calibrate extremely close to the surface. + +Lastly, note that the height-dependence of *axial* force is different from the +height-dependence of lateral force. For axial force, no hydrodynamically correct theory for the +power spectrum near the surface exists. + +.. image:: figures/drag_coefficient.png + :nbattach: + +Mathematical background +^^^^^^^^^^^^^^^^^^^^^^^ + +Hydrodynamically correct theory +""""""""""""""""""""""""""""""" + +When approaching the surface, the drag experienced by the bead depends on the distance between the +bead and the surface of the flow cell. An approximate expression for the frequency dependent drag is +then given by :cite:`tolic2006calibration`: + +.. math:: + + \gamma(f, R/l) = \frac{\gamma_\mathrm{stokes}(f)}{1 - \frac{9}{16}\frac{R}{l} + \left(1 - \left((1 - i)/3\right)\sqrt{\frac{f}{f_{\nu}}} + \frac{2}{9}\frac{f}{f_{\nu}}i - + \frac{4}{3}(1 - e^{-(1-i)(2l-R)/\delta})\right)} \tag{$\mathrm{kg/s}$} + +Where :math:`\delta = R \sqrt{\frac{f_{\nu}}{f}}` represents the penetration depth, +:math:`R` corresponds to the bead radius and :math:`l` to the distance from the bead center to the +nearest surface. + +While these models may look daunting, they are all available in Pylake and can be used by simply +providing a few additional arguments to the :class:`~.PassiveCalibrationModel`. It is recommended to +use these equations when less than 10% systematic error is desired :cite:`tolic2006calibration`. +No general statement can be made regarding the accuracy that can be achieved with the simple Lorentzian +model, nor the direction of the systematic error, as it depends on several physical parameters involved +in calibration :cite:`tolic2006calibration,berg2006power`. + +.. note:: + + One thing to note is that when considering the surface in the calibration procedure, the drag + coefficient returned from the model corresponds to the drag coefficient extrapolated back to its + bulk value. + +Lorentzian model (lateral) +"""""""""""""""""""""""""" + +The hydrodynamically correct model presented in the previous section works well when the bead center +is at least 1.5 times the radius above the surface. 
When moving closer than this limit, we fall back +to a model that more accurately describes the change in drag at low frequencies, but neglects the +frequency dependent effects. + +To understand why, let's introduce Faxen's approximation for drag on a sphere near a surface under +creeping flow conditions. This model is used for lateral calibration very close to a surface +:cite:`schaffer2007surface` and is given by the following equation: + +.. math:: + + \gamma_\mathrm{faxen}(R/l) = \frac{\gamma_0}{ + 1 - \frac{9R}{16l} + \frac{1R^3}{8l^3} - \frac{45R^4}{256l^4} - \frac{1R^5}{16l^5} + } \tag{$\mathrm{kg/s}$} + +What we see is that the frequency dependent model used in the previous section reproduces this model +up to and including its second order term in :math:`R/l`. It is, however, a lower order model and the +accuracy decreases rapidly as the distance between the bead and surface become very small. + +Lorentzian model (axial) +"""""""""""""""""""""""" + +For calibration in the axial direction, no hydrodynamically correct theory exists. + +Similarly as for the lateral component, we will fall back to a model that describes the change in +drag at low frequencies. However, while we had a simple expression for the lateral drag as a function +of distance, no simple closed-form equation exists for the axial dimension. Brenner et al provide an +exact infinite series solution :cite:`brenner1961slow`. Based on this solution :cite:`schaffer2007surface` +derived a simple equation which approximates the distance dependence of the axial drag coefficient. + +.. math:: + + \gamma_\mathrm{axial}(R/l) = \frac{\gamma_0}{ + 1.0 + - \frac{9R}{8l} + + \frac{1R^3}{2l^3} + - \frac{57R^4}{100l^4} + + \frac{1R^5}{5l^5} + + \frac{7R^{11}}{200l^{11}} + - \frac{1R^{12}}{25l^{12}} + } \tag{$\mathrm{kg/s}$} + +This model deviates less than 0.1% from Brenner's exact formula for :math:`l/R >= 1.1` and less than +0.3% over the entire range of :math:`l` :cite:`schaffer2007surface`. diff --git a/docs/tutorial/figures/force_calibration/bl_dictionary.png b/docs/tutorial/figures/force_calibration/bl_dictionary.png deleted file mode 100644 index b8eacaae4..000000000 --- a/docs/tutorial/figures/force_calibration/bl_dictionary.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:c25cc809db2d13cc879f041e84efbb6ff26f94347d21b0d775599d9255ae8651 -size 35817 diff --git a/docs/tutorial/figures/force_calibration/calibration_item.png b/docs/tutorial/figures/force_calibration/calibration_item.png deleted file mode 100644 index 67c840c30..000000000 --- a/docs/tutorial/figures/force_calibration/calibration_item.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:8b967f513c5d5371cbba6e1a718674132566541870e588a16a22cbfe9113aeab -size 284470 diff --git a/docs/tutorial/force_calibration.rst b/docs/tutorial/force_calibration.rst deleted file mode 100644 index c75a8055a..000000000 --- a/docs/tutorial/force_calibration.rst +++ /dev/null @@ -1,661 +0,0 @@ -Force Calibration -================= - -.. only:: html - - :nbexport:`Download this page as a Jupyter notebook ` - -Force calibration refers to computing the calibration factors needed to convert a raw voltage recorded by a position sensitive detector to actual force and displacement. -The values we wish to calculate are: - -- Trap stiffness :math:`\kappa`, which reflects how strongly a bead is held by a trap. -- Force response :math:`R_d`, the proportionality constant between voltage and force. 
-- Distance response :math:`R_f`, the proportionality constant between voltage and distance. - -This tutorial will focus on performing force calibration with Pylake. -Note that force calibration in Bluelake is actually performed by Pylake. -Therefore, the results obtained with Bluelake can be reproduced exactly when using the same data and parameters. - -To calibrate the optical traps we fit a power spectrum of the voltage measured by a position sensitive diode (PSD) to a theoretical model. The measured voltage (and thus the shape of the power spectrum) is mainly determined by - -1. The Brownian motion of the bead within the trap. -2. The instantaneous response of the PSD to the incident light. - -The contribution to the power spectrum by the bead in the optical trap is characterized by a diffusion constant and a corner frequency. -The second contribution refers to a filtering effect where the PSD becomes less sensitive to changes in signal at high frequencies. -This filtering effect is characterized by a constant that reflects the fraction of light that is transmitted instantaneously and a corner frequency (this is referred to as the diode frequency to be able to differentiate). - -.. image:: figures/force_calibration/diode_filtering.png - :nbattach: - -.. warning:: - - For high corner frequencies, the parameter estimation procedure can become unreliable. - A warning sign for this is when the corner frequency `f_c` approaches or goes beyond the diode frequency `f_diode`. - For more information see the section on `High corner frequencies`_. - -The various theoretical models that can be used to fit these data are described in the :doc:`theory section on force calibration`, while their use is described below. - -We can download the data needed for this tutorial directly from Zenodo using Pylake. -Since we don't want it in our working folder, we'll put it in a folder called `"test_data"`:: - - filenames = lk.download_from_doi("10.5281/zenodo.7729823", "test_data") - -Undoing the previous calibration --------------------------------- - -Our starting point will be data acquired with Bluelake, which is already calibrated. -Therefore, the first step is to undo the calibration applied by Bluelake to get the raw voltage signals. -Let's load the dataset:: - - f = lk.File("test_data/passive_calibration.h5") - -The force timeline in this file contains a single calibration measurement. -We can see calibrations relevant to this file by inspecting the `calibration` attribute for the entire force :class:`~lumicks.pylake.channel.Slice`. -This is a list of dictionaries containing all of the relevant calibration parameters:: - - print(len(f.force1x.calibration)) - print(f.force1x.calibration[1]) - -The first calibration item in the dictionary always refers to the calibration that was active at the start of the slice (usually from a calibration that occurred before the start of the slice). -The second calibration refers to the calibration measurement performed in this slice. -This is illustrated in the figure below: - -.. image:: figures/force_calibration/calibrations.png - :nbattach: - -Our goal here is to reproduce the calibration factors calculated from the calibration measurement corresponding to `calibrations[1]`. 
-First we grab the chunk of data that was used for that calibration in Bluelake:: - - start = f.force1x.calibration[1]["Start time (ns)"] - stop = f.force1x.calibration[1]["Stop time (ns)"] - force_slice = f.force1x[start:stop] - -Now we can see that this new :class:`~lumicks.pylake.channel.Slice` only contains a single calibration:: - - print(len(force_slice.calibration)) - -Next we'll decalibrate the force data back to the original voltage measured by the PSD. Let's write a short function for this so we can reuse it elsewhere:: - - def decalibrate(force_slice): - offset = force_slice.calibration[0]["Offset (pN)"] - response = force_slice.calibration[0]["Response (pN/V)"] - return (force_slice - offset) / response - - volts = decalibrate(force_slice) - -Performing the calibration --------------------------- - -For convenience, Pylake offers :func:`~lumicks.pylake.calibrate_force()` which performs an entire calibration procedure in a single function call:: - - calibration = lk.calibrate_force( - volts.data, - force_slice.calibration[0]["Bead diameter (um)"], # Use the value from the last calibration - temperature=25, - sample_rate=volts.sample_rate - ) - -This function allows us to quickly try various calibration settings without having to set up all the intermediate models and spectra explicitly. -Note that most parameters have to be provided as keyworded arguments to prevent errors. -This method returns a :class:`~lumicks.pylake.force_calibration.power_spectrum_calibration.CalibrationResults`, which can be plotted using :func:`~lumicks.pylake.force_calibration.power_spectrum_calibration.CalibrationResults.plot()`. - -The rest of this tutorial illustrates the various steps involved when performing such a calibration. - -Obtaining the power spectrum ----------------------------- - -To use the more manual lower-level API, we first need the power spectrum to fit. To compute a power spectrum from our data we can invoke :func:`~lumicks.pylake.calculate_power_spectrum()`:: - - power_spectrum = lk.calculate_power_spectrum(volts.data, sample_rate=volts.sample_rate) - -This function returns a :class:`~lumicks.pylake.force_calibration.power_spectrum.PowerSpectrum` which we can plot:: - - plt.figure() - power_spectrum.plot() - plt.show() - -.. image:: figures/force_calibration/power_spectrum.png - -The power spectrum is smoothed by downsampling adjacent power spectral values (known as blocking). -Downsampling the spectrum is required to fulfill some of the assumptions in the fitting procedure, but it comes at the cost of spectral resolution. -One must be careful that the shape of the power spectrum is still sufficiently preserved. -If the corner frequency is very low then downsampling too much can lead to biases in the calibration parameters. -In such cases, it is better to either measure a longer interval to increase the spectral resolution or reduce the number of points (`num_points_per_block`) used for blocking. - -The range over which to compute the spectrum can be controlled using the `fit_range` argument. -One can also exclude specific frequency ranges from the spectrum (`excluded_ranges`) which can be useful if there are noise peaks in the spectrum. -Let's see which ranges were excluded in our Bluelake calibration:: - - force_slice.calibration[0] - -.. image:: figures/force_calibration/bl_dictionary.png - -Here, they are listed as `Exclusion range 0 (min.) (Hz)`, `Exclusion range 0 (max.) (Hz)` etc. 
-To reproduce the result obtained with Bluelake, these should be excluded from the power spectrum:: - - power_spectrum = lk.calculate_power_spectrum( - volts.data, - sample_rate=volts.sample_rate, - fit_range=(1e2, 23e3), - num_points_per_block=2000, - excluded_ranges=([19348, 19668], [24308, 24548]) - ) - - plt.figure() - power_spectrum.plot(marker=".") - plt.show() - -.. image:: figures/force_calibration/power_spectrum_excluded_ranges.png - -Note that exclusion ranges are excluded *prior* to downsampling. -Considering that a noise peak may be very narrow, it is beneficial to lower the number of points per block temporarily to find the exact exclusion range. -After determination of this exclusion range, the number of points per block can be increased again. However, also see `Robust fitting`_ for an automated peak identification routine. - -Passive calibration -------------------- - -In literature, passive calibration is often referred to as thermal calibration. -It involves fitting a physical model to the power spectrum obtained in the previous step. -This physical model relies on a number of parameters that have to be specified in order to get the correct calibration factors. - -The most important parameters are the bead diameter (in microns) and viscosity. -Let's use the bead diameter found in the calibration performed in Bluelake. - -Note that the viscosity of water strongly depends on :ref:`temperature`. -To find the viscosity of water at a particular temperature, Pylake uses :func:`~lumicks.pylake.viscosity_of_water` which implements the model presented in :cite:`huber2009new`. -When omitted, this function will automatically be used to look up the viscosity of water for that particular temperature - -.. note:: - - Note that for experiments that use a different medium than water, the viscosity at the experimental temperature should explicitly be provided. - -The next step is setting up the calibration model:: - - bead_diameter = f.force1x.calibration[1]["Bead diameter (um)"] - force_model = lk.PassiveCalibrationModel(bead_diameter, temperature=25) - -To fit this model to the data use :func:`~lumicks.pylake.fit_power_spectrum()`:: - - calibration = lk.fit_power_spectrum(power_spectrum, force_model) - calibration - -.. image:: figures/force_calibration/calibration_item.png - -This will produce a table with your fitted calibration parameters. -These parameters can be accessed as follows:: - - >>> print(calibration["kappa"].value) - >>> print(f.force1x.calibration[1]["kappa (pN/nm)"]) - 0.12872206850762546 - 0.1287225353482303 - -.. note:: - - Note that by default, a bias correction is applied to the fitted results :cite:`norrelykke2010power`. - This bias correction is applied to the diffusion constant and amounts to a correction of :math:`\frac{N}{N+1}`, where :math:`N` refers to the number of points used for a particular spectral data point. - It can optionally be disabled by passing `bias_correction=False` to :func:`~lumicks.pylake.fit_power_spectrum` or :func:`~lumicks.pylake.calibrate_force`. - -We can plot the calibration by calling:: - - plt.figure() - calibration.plot() - plt.show() - -.. image:: figures/force_calibration/fitted_spectrum.png - -Hydrodynamically correct model ------------------------------- - -While the simple theory can suffice for small beads, it is usually a good idea to use the more realistic hydrodynamically correct model. 
-This model takes into account hydrodynamic and inertial effects (which scale with the size of the bead) leading to more accurate estimates. -As such, it requires a few extra parameters: the density of the sample and bead:: - - force_model = lk.PassiveCalibrationModel( - bead_diameter, - hydrodynamically_correct=True, - rho_sample=999, - rho_bead=1060.0 - ) - -Note that when `rho_sample` and `rho_bead` are omitted, values for water and polystyrene are used for the sample and bead density respectively. - -Calibration near the surface ----------------------------- - -So far, we have only considered experiments performed deep in bulk. -In reality, proximity of the flowcell surface to the bead leads to an increase in the drag force on the bead. -Such surface effects can be taken into account by specifying a distance to the surface (in microns):: - - force_model = lk.PassiveCalibrationModel( - bead_diameter, - hydrodynamically_correct=True, - rho_sample=999, - rho_bead=1060.0, - distance_to_surface=5 - ) - -As we approach the surface, the drag effect becomes stronger. -The hydrodynamically correct model is only valid up to a certain distance to the surface. -Moving closer, the frequency dependent effects become smaller than the overall drag effect and better models to approximate the local drag exist. -We can use this model by setting `hydrodynamically_correct` to `False`, while still providing a distance to the surface:: - - force_model = lk.PassiveCalibrationModel( - bead_diameter, hydrodynamically_correct=False, distance_to_surface=5 - ) - -To summarize, the workflow can be visualized as follows: - -.. image:: figures/force_calibration/surface_calibration_workflow.png - -.. note:: - - Note that the drag coefficient `gamma_ex` that Pylake returns always corresponds to the drag coefficient extrapolated back to its bulk value. - This ensures that drag coefficients can be compared and carried over between experiments performed at different heights. - The field `local_drag_coefficient` contains an estimate of the local drag coefficient (at the provided height). - -Axial Force ------------ - -No hydrodynamically correct model is available for axial calibration. -However, models do exist for the dependence of the drag force on the distance to the surface. -Axial force calibration can be performed by specifying `axial=True`:: - - force_model = lk.PassiveCalibrationModel(bead_diameter, distance_to_surface=5, axial=True) - -Active calibration with a single bead -------------------------------------- - -Active calibration has a few benefits. -When performing passive calibration, we base our calculations on a theoretical drag coefficient which depends on parameters that are only known with limited precision: - -- The diameter of the bead :math:`d` in microns. -- The dynamic viscosity :math:`\eta` in Pascal seconds. -- The distance to the surface :math:`h` in microns. - -The viscosity in turn depends strongly on the local temperature around the bead, which is typically poorly known. - -In active calibration, we oscillate the stage with a known frequency and amplitude. -This introduces an extra peak in the power spectrum which allows the trap to be calibrated without the assumptions of the theoretical drag coefficient. - -Using Pylake, the procedure to use active calibration is not very different from passive calibration. -However, it does require some additional data channels as inputs. -In the next section, the aim is to calibrate the x-axis of trap 1. 
-We will consider that the nanostage was used as driving input. -Let's analyze some active calibration data acquired near a surface. -To do this, load a new file:: - - f = lk.File("test_data/near_surface_active_calibration.h5") - volts = decalibrate(f.force1x) - bead_diameter = f.force1x.calibration[0]["Bead diameter (um)"] - # Calibration performed at 1.04 * bead_diameter - distance_to_surface = 1.04 * bead_diameter - -First we need to extract the nanostage data which is used to determine the driving amplitude and frequency:: - - driving_data = f["Nanostage position"]["X"] - -For data acquired with active calibration in Bluelake, this will be a sinusoidal oscillation. -If there are unexplained issues with the calibration, it is a good idea to plot the driving signal and verify that the motion looks like a clean sinusoid:: - - plt.figure() - driving_data.plot() - plt.xlim(0, 0.1) - plt.ylabel(r"Nanostage position ($\mu$m)") - plt.show() - -.. image:: figures/force_calibration/nanostage_position.png - -Instead of using the :class:`~lumicks.pylake.PassiveCalibrationModel` presented in the previous section, we now use the :class:`~lumicks.pylake.ActiveCalibrationModel`. -We also need to provide the sample rate at which the data was acquired, and a rough guess for the driving frequency. -Pylake will find an accurate estimate of the driving frequency based on this initial estimate (provided that it is close enough):: - - active_model = lk.ActiveCalibrationModel( - driving_data.data, - volts.data, - driving_data.sample_rate, - bead_diameter, - driving_frequency_guess=38, - distance_to_surface=distance_to_surface - ) - -We can check the determined driving frequency with:: - - >>> active_model.driving_frequency - 38.15193077664577 - -Let's have a look to see if this peak indeed appears in our power spectrum. -To see it clearly, we reduce the blocking amount and show the spectrum all the way up to a frequency of 10 Hz:: - - show_peak = lk.calculate_power_spectrum( - volts.data, sample_rate=volts.sample_rate, num_points_per_block=5, fit_range=(10, 23000) - ) - - plt.figure() - show_peak.plot() - plt.show() - -.. image:: figures/force_calibration/calibration_peak.png - -The driving peak is clearly visible in the spectrum. -Next let's calculate the power spectrum we'll use for fitting. -It is important to *not* include the driving peak when doing this (the default will only go up to 100 Hz):: - - power_spectrum = lk.calculate_power_spectrum(volts.data, sample_rate=volts.sample_rate) - -We can now use this to fit our data:: - - calibration = lk.fit_power_spectrum(power_spectrum, active_model) - calibration - -.. image:: figures/force_calibration/calibration_item_active.png - -Analogous to the passive calibration procedure, we can specify `hydrodynamically_correct=True` if we wish to use the hydrodynamically correct theory here. -Especially for bigger beads this is highly recommended (more on this later). - -Comparing different types of calibration ----------------------------------------- - -Consider the active calibration from the last section. -This entire calibration can also be performed using only a single function call. 
-For convenience, assign most of the parameter to a dictionary first:: - - shared_parameters = { - "force_voltage_data": volts.data, - "bead_diameter": bead_diameter, - "temperature": 25, - "sample_rate": volts.sample_rate, - "driving_data": driving_data.data, - "driving_frequency_guess": 37, - "hydrodynamically_correct": False, - } - -Next, unpack this dictionary using the unpacking operator `**`:: - - >>> fit = lk.calibrate_force( - ... **shared_parameters, active_calibration=True, distance_to_surface=distance_to_surface - ... ) - >>> print(fit["kappa"].value) - 0.11662183772410809 - -And compare this to the passive calibration result:: - - >>> fit = lk.calibrate_force( - ... **shared_parameters, active_calibration=False, distance_to_surface=distance_to_surface - ... ) - >>> print(fit["kappa"].value) - 0.11763849764570819 - -These values are quite close. -However, if we do not provide the height above the surface, we can see that the passive calibration result suffers much more than the active calibration result (as passive calibration fully relies on a drag coefficient calculated from the physical input parameters):: - - >>> print(lk.calibrate_force(**shared_parameters, active_calibration=False)["kappa"].value) - >>> print(lk.calibrate_force(**shared_parameters, active_calibration=True)["kappa"].value) - 0.08616565751377737 - 0.11662183772410809 - -.. note:: - - When fitting with the hydrodynamically correct model, the `distance_to_surface` parameter impacts the expected shape of the power spectrum. - Consequently, when this model is selected, this parameter affects both passive and active calibration. - For more information on this see the :doc:`theory section on force calibration` section. - -.. _bead_bead_tutorial: - -Active calibration with two beads far away from the surface ------------------------------------------------------------ - -.. warning:: - - The implementation of the coupling correction models is still alpha functionality. - While usable, this has not yet been tested in a large number of different scenarios. - The API can still be subject to change *without any prior deprecation notice*! - If you use this functionality keep a close eye on the changelog for any changes that may affect your analysis. - -When performing active calibration with two beads, we get a lower fluid velocity around the beads than we would with a single bead. -This leads to a smaller voltage readout than expected and therefore a higher displacement sensitivity (microns per volt). -Failing to take this into account results in a bias. -Pylake offers a function to calculate a correction factor to account for the lower velocity around the bead. - -.. note:: - - For more information on how these factors are derived, please refer to the :ref:`theory` section on this topic. - -Appropriate correction factors for oscillation in x can be calculated as follows:: - - factor = lk.coupling_correction_2d(dx=5.0, dy=0, bead_diameter=bead_diameter, is_y_oscillation=False) - -Here `dx` and `dy` represent the horizontal and vertical distance between the beads. -Note that these refer to *center to center distances* (unlike the distance channel in Bluelake, which represents the bead surface to surface distance). -Note that all three parameters have to be specified in the same spatial unit (meters or micron). -The final parameter `is_y_oscillation` indicates whether the stage was oscillated in the y-direction. 
- -The obtained correction factor can be used to correct the calibration factors:: - - Rd_corrected = factor * calibration["Rd"].value - Rf_corrected = calibration["Rf"].value / factor - stiffness_corrected = calibration["kappa"].value / factor**2 - -To correct a force trace, simply divide it by the correction factor:: - - corrected_force1x = f.force1x / factor - -.. note:: - - This coupling model neglects effects from the surface. It is intended for measurements performed at the center of the flowcell. - -.. note:: - - The model implemented here only supports beads that are aligned in the same plane. - It does not take a mismatch in the `z`-position of the beads into account. - In reality, the position in the focus depends on the bead radius and may be different for the two beads if they slightly differ in size :cite:`alinezhad2018enhancement` (Fig. 3). - At short bead-to-bead distances, such a mismatch would make the coupling less pronounced than the model predicts. - -Fast Sensors ------------- - -Fast detectors have the ability to respond much faster to incoming light resulting in no visible filtering effect in the frequency range we are fitting. -This means that for a fast detector, we do not need to include a filtering effect in our model. -Note that whether you have a fast or slow detector depends on the particular hardware in the C-Trap. -We can omit this effect by passing `fast_sensor=True` to the calibration models or to :func:`~lumicks.pylake.calibrate_force()`. -Note however, that this makes using the hydrodynamically correct model critical, as the simple model doesn't actually capture the data very well. -The following example data acquired on a fast sensor will illustrate why:: - - f = lk.File("test_data/fast_measurement_25.h5") - - shared_parameters = { - "force_voltage_data": decalibrate(f.force2y).data, - "bead_diameter": 4.38, - "temperature": 25, - "sample_rate": volts.sample_rate, - "fit_range": (1e2, 23e3), - "num_points_per_block": 200, - "excluded_ranges": ([190, 210], [13600, 14600]) - } - - plt.figure(figsize=(13, 4)) - plt.subplot(1, 3, 1) - fit = lk.calibrate_force(**shared_parameters, hydrodynamically_correct=False, fast_sensor=False) - fit.plot() - plt.title(f"Simple model + Slow (kappa={fit['kappa'].value:.2f})") - plt.subplot(1, 3, 2) - fit = lk.calibrate_force(**shared_parameters, hydrodynamically_correct=False, fast_sensor=True) - fit.plot() - plt.title(f"Simple model + Fast (kappa={fit['kappa'].value:.2f})") - plt.subplot(1, 3, 3) - fit = lk.calibrate_force(**shared_parameters, hydrodynamically_correct=True, fast_sensor=True) - fit.plot() - plt.title(f"Hydrodynamically correct + Fast (kappa={fit['kappa'].value:.2f})") - plt.tight_layout() - plt.show() - -.. image:: figures/force_calibration/fast_sensors.png - -Note how the power spectral fit with the simple model seems to fit the data quite well as long as we also include the filtering effect. -However, the apparent quality of a fit can be deceiving. -Considering that this dataset was acquired on a fast sensor, we should omit the filtering effect. -When the `fast_sensor` flag is enabled, it can be seen that the simple model doesn't actually describe the data. -Switching to the hydrodynamically correct model results in a superior fit to the power spectrum. - -So what is happening here? Why did the first fit look good? 
-When we fit the power spectrum with the simple model and include the filtering effect, the fitting procedure uses the parameters that characterize the filter to fit some of the high frequency attenuation. -With the filtering effect disabled, we obtain a very biased fit because the model fails to fit the data. - -If we compare the different fits, we can see that the simple model with filtering effect (`fast_sensor=False`) gives similar stiffness estimates as the hydrodynamically correct model without the filtering. -While this is true for this particular dataset, no general statement can be made about the bias caused by fitting the simple model rather than the hydrodynamically correct power spectrum. -If low bias is desired, one should always use the hydrodynamically correct model when possible. -On regular sensors, it is best to fit the hydrodynamically correct model with the filtering effect enabled. - -High corner frequencies ------------------------ - -In specific situations, the filtering effect of the position sensitive detector can cause issues when calibrating. -The power spectrum of the bead in the optical trap is characterized by a diffusion constant and a corner frequency. -The filtering effect is characterized by a constant that reflects the fraction of light that is transmitted instantaneously and a corner frequency (referred to as the diode frequency to be able to differentiate). - -The corner frequency of the physical spectrum can be found in the results as `f_c` and depends on the laser power and bead size (smaller beads resulting in higher corner frequencies) . -The corner frequency of the filtering effect can be found in the results as `f_diode` (which stands for diode frequency) and depends on the incident intensity :cite:`berg2003unintended`. -When these two frequencies get close, they cannot be determined reliably anymore. -The reason for this is that the effect of one can be compensated by the other. -When working with small beads or at high laser powers, it is important to verify that the corner frequency `f_c` does not approach the frequency of the filtering effect `f_diode`. - -Sometimes, the filtering effect has been characterized independently. -In that case, the arguments `fixed_diode` and `fixed_alpha` can be passed to :func:`~lumicks.pylake.calibrate_force()` to fix these parameters to their predetermined values. - -.. _robust_fitting: - -Robust fitting --------------- - -So far, we have been using least-squares fitting routines for force calibration. In that case, we assume that the error in the power at each frequency is distributed according to a Gaussian distribution. -Blocking or windowing the power spectrum ensures that this assumption is close enough to the truth such that the fit provides accurate estimates of the unknown parameters. -Occasionally, the power spectrum might show a spurious noise peak. -Such a peak is an outlier in the expected behavior of the spectrum and therefore interferes with the assumption of having a Gaussian error distribution. -As a result, the fit is skewed. In those cases, it can be beneficial to do a robust fit. When a robust fit is performed, one assumes that the probability of encountering one or multiple outliers is non-negligible. -By taking this into account during fitting, the fit can be made more robust to outliers in the data. The following example illustrates the method. - -To see this effect, let's load a dataset of uncalibrated force sensor data of a 4.4 μm bead showing Brownian motion while being trapped. 
In particular, look at the `Force 2y` sensor signal:: - - f = lk.File("test_data/robust_fit_data.h5") - f2y = f.force2y - -First create a power spectrum without blocking or windowing for later use. Then derive a power spectrum with blocking from the first power spectrum:: - - ps = lk.calculate_power_spectrum(f2y.data, sample_rate=f2y.sample_rate, num_points_per_block=1, fit_range=(10, 23e3)) - ps_blocked = ps.downsampled_by(200) - -First use a passive calibration model using the hydrodynamically correct model to perform a least-squares fit and plot the result:: - - model = lk.PassiveCalibrationModel(4.4, temperature=25.0, hydrodynamically_correct=True) - fit = lk.fit_power_spectrum(ps_blocked, model) - - plt.figure() - fit.plot() - plt.title( - f"Skewed fit: $f_c$ = {fit.results['fc'].value:.1f}, " - f"$D$ = {fit.results['D'].value:.4f}, " - f"$f_d$ = {fit.results['f_diode'].value:.1f}" - ) - plt.show() - -.. image:: figures/force_calibration/power_spectrum_noise_peak.png - -Notice how the tail of the model is skewed towards the peak, in order to reduce the least-squares error. In this case, the free parameters to fit the diode filter contribution are 'abused' to reduce the error between the model and the outlier. -This results in biased parameter estimates. - -Now do a robust fit. We do this by specifying a loss function in the function :func:`~lumicks.pylake.fit_power_spectrum()`. -For least-squares fitting, the loss function is `'gaussian'`, which is the default if nothing is specified. However, if we specify `'lorentzian'`, a robust fitting routine will be used instead. -Because `bias_correction` and robust fitting are mutually exclusive, we need to explicitly turn it off:: - - fit = lk.fit_power_spectrum(ps_blocked, model, bias_correction=False, loss_function="lorentzian") - -Now plot the robust fit:: - - plt.figure() - fit.plot() - plt.title( - f"Robust fit: $f_c$ = {fit.results['fc'].value:.1f}, " - f"$D$ = {fit.results['D'].value:.4f}, " - f"$f_d$ = {fit.results['f_diode'].value:.1f}" - ) - plt.show() - -.. image:: figures/force_calibration/power_spectrum_noise_peak_robust.png - -Notice how the model now follows the power spectrum nearly perfectly. The value for `f_diode` has increased significantly, now that it is not abused to reduce the error induced by the outlier. - -This example shows that a robust fitting method is less likely to fail on outliers in the power spectrum data. It is therefore a fair question why one would not use it all the time? -Robust fitting leads to a small bias in the fit results for which Pylake has no correction. -Least-squares fitting also leads to a bias, but this bias is known (:cite:`norrelykke2010power`) and can be corrected with `bias_correction=True`. -Secondly, for least-squares fitting, methods exist to estimate the expected standard errors in the estimates of the free parameters, which are implemented in the least-squares fitting routines that Pylake uses :cite:`press1990numerical`. -These error estimates are not implemented for robust fitting, and as such, the fit results will show `nan` for the error estimates after a robust fit. -However, as will be shown below, the robust fitting results may be used as a start to identify outliers automatically, in order to exclude these from a second, regular least-squares, fit. - -Automated spurious peak detection ---------------------------------- - -We will continue the tutorial with the results of the previous section. 
If you did not yet do that part of the tutorial, please go back and execute the code examples in that section. - -We still have the power spectrum `ps` that was created without blocking or windowing. Here we will use it to identify the peak and automatically obtain frequency exclusion ranges. -We will use the method :meth:`~lumicks.pylake.force_calibration.power_spectrum.PowerSpectrum.identify_peaks()` in order to do so. -This method takes a function that accurately models the power spectrum as a function of frequency, in order to normalize it. -It then identifies peaks based on the likelihood of encountering a peak of a certain magnitude in the resulting data set. -If we have a "good fit", then the easiest way to get that function is to use our fitted model:: - - plt.figure() - frequency_range = np.arange(100, 22000) - # We can call the fit with a list of frequencies to evaluate the model at those frequencies. - # This uses the best fit parameters from fit.fitted_params. - plt.plot(frequency_range, fit(frequency_range)) - plt.xscale("log") - plt.yscale("log") - -If there are no spurious peaks, then normalizing the unblocked power spectrum results in random numbers with an exponential distribution with a mean value of 1. -The chance of encountering increasingly larger numbers decays exponentially, and this fact is used by `identify_peaks()`:: - - frequency_exclusions = ps.identify_peaks(fit, peak_cutoff=20, baseline=1) - -The parameter `peak_cutoff` is taken as the minimum magnitude of any value in the normalized power spectrum in order to be concidered a peak. -The default value is 20, and it corresponds to a chance of about 2 in a billion of a peak of magnitude 20 or larger occuring naturally in a data set. -If a peak is found with this or a higher magnitude, the algorithm then expands the range to the left and right until the first time the power spectrum drops below the value `baseline`. -The frequencies at which this occurs end up as the lower and higher frequency of a frequency exclusion range. As such, the value of `baseline` controls the width of the frequency exclusion range. -We can visualize the excluded peaks as follows:: - - fig, ax = plt.subplots(1, 2, sharey=True) - for axis, title in zip(ax, ('Full spectrum', 'Zoom')): - axis.loglog(ps.frequency, ps.power, label="Power spectrum") - for idx, item in enumerate(frequency_exclusions, 1): - to_plot = np.logical_and(item[0] <= ps.frequency, ps.frequency < item[1]) - axis.plot(ps.frequency[to_plot], ps.power[to_plot], 'r', label=f'peak {idx}') - axis.legend() - axis.set_title(title) - axis.set_xlabel('Frequency [Hz]') - ax[1].set_xlim(frequency_exclusions[0][0] - 1.0, frequency_exclusions[-1][1] + 1.0) - ax[1].set_xscale('linear') - ax[0].set_ylabel('Power [V$^2$/Hz]') - plt.suptitle('Identified peaks') - plt.show() - -.. image:: figures/force_calibration/identify_peaks.png - -Finally, we can do a least-squares fit, but in this case we will filter out the frequency ranges that contain peaks. -Because we use a least-squares method, we get error estimates on the fit parameters, and bias in the fit result can be corrected. 
-The default values of `loss_function='gaussian'` and `bias_correction=True` ensure least-squares fitting and bias correction, so we do not need to specify them:: - - ps_no_peak = lk.calculate_power_spectrum( - f2y.data, sample_rate=f2y.sample_rate, num_points_per_block=200, fit_range=(10, 23e3), excluded_ranges=frequency_exclusions, - ) - fit_no_peak = lk.fit_power_spectrum(ps_no_peak, model) - - plt.figure() - fit_no_peak.plot() - plt.title( - f"Least squares (ex. peaks): $f_c$ = {fit_no_peak.results['fc'].value:.1f}, " - f"$D$ = {fit_no_peak.results['D'].value:.4f}, " - f"$f_d$ = {fit_no_peak.results['f_diode'].value:.1f}" - ) - plt.show() - -.. image:: figures/force_calibration/power_spectrum_no_noise_peak.png - -Notice that no skewing occurs, and that the values of `fc`, `D` and `f_diode` are now closer to values found via robust fitting in the section above. diff --git a/docs/tutorial/force_calibration/active_calibration.rst b/docs/tutorial/force_calibration/active_calibration.rst new file mode 100644 index 000000000..6e3bf26e7 --- /dev/null +++ b/docs/tutorial/force_calibration/active_calibration.rst @@ -0,0 +1,175 @@ +.. _active_calibration_tutorial: + +Active calibration +------------------ + +.. only:: html + + :nbexport:`Download this page as a Jupyter notebook ` + +Pylake can be used to (re-)calibrate the force channels using active calibration data. +In active calibration, the stage is oscillated with a known frequency and amplitude. +This introduces an extra peak in the power spectrum which allows the trap to be calibrated in a +way that :ref:`relies less on the theoretical drag coefficient`. +This is particularly useful when the distance to the surface, the bead radius, or the viscosity +of the solution is not known accurately. +Using Pylake, the procedure to use active calibration is not very different from passive calibration. +However, it does require some additional data channels as inputs. + +Active calibration with a single bead +""""""""""""""""""""""""""""""""""""" + +Let's analyze some active calibration data acquired near a surface. +We will consider that the nanostage was used as driving input. +To do this, load a new file:: + + lk.download_from_doi("10.5281/zenodo.7729823", "test_data") + f = lk.File("test_data/near_surface_active_calibration.h5") + +We decalibrate the force, and extract some relevant parameters:: + + volts = f.force1x / f.force1x.calibration[0].force_sensitivity + bead_diameter = f.force1x.calibration[0].bead_diameter + # Calibration performed at 1.04 * bead_diameter + distance_to_surface = 1.04 * bead_diameter + +First we need to extract the nanostage data which is used to determine the driving amplitude and frequency:: + + driving_data = f["Nanostage position"]["X"] + +For data acquired with active calibration in Bluelake, this will be a sinusoidal oscillation. +If there are unexplained issues with the calibration, it is a good idea to plot the driving signal and verify that the motion looks like a clean sinusoid:: + + plt.figure() + driving_data.plot() + plt.xlim(0, 0.1) + plt.ylabel(r"Nanostage position ($\mu$m)") + plt.show() + +.. image:: figures/nanostage_position.png + +When calibrating actively, we also need to provide the sample rate at which the data was acquired, and a rough guess for the driving frequency. 
+Pylake will find an accurate estimate of the driving frequency based on this initial estimate (provided that it is close enough):: + + calibration = lk.calibrate_force( + volts.data, + bead_diameter=bead_diameter, + temperature=25, + sample_rate=volts.sample_rate, + driving_data=driving_data.data, + driving_frequency_guess=38, + distance_to_surface=distance_to_surface, + num_points_per_block=200, + active_calibration=True, + ) + +We can check the determined driving frequency with:: + + >>> calibration.driving_frequency + 38.15193077664462 + +Let's have a look to see if this peak indeed appears in our power spectrum. +To show the peak, we can pass `show_active_peak=True` to the plotting function of our calibration:: + + calibration.plot(show_active_peak=True) + +.. image:: figures/calibration_peak.png + +The driving peak is clearly visible in the spectrum. +The calibration is now complete and we can access the calibration parameters as before:: + + >>> print(calibration.stiffness) + 0.11637860957657106 + +.. note:: + + Note that the drag coefficient :attr:`~lumicks.pylake.calibration.ForceCalibrationItem.measured_drag_coefficient` + that Pylake returns always corresponds to the drag coefficient extrapolated back to its bulk value. + This ensures that drag coefficients can be compared and carried over between experiments performed at different heights. + The field :attr:`~lumicks.pylake.calibration.ForceCalibrationItem.local_drag_coefficient` contains an + estimate of the local drag coefficient (at the provided height). + +Comparing active calibration to passive +""""""""""""""""""""""""""""""""""""""" + +Let's compare the active calibration result to passive calibration:: + + >>> passive_fit = lk.calibrate_force( + ... volts.data, + ... bead_diameter=bead_diameter, + ... temperature=25, + ... sample_rate=volts.sample_rate, + ... distance_to_surface=distance_to_surface, + ... num_points_per_block=200 + ... ) + >>> print(passive_fit.stiffness) + 0.11763849764570819 + +This value is quite close to that obtained with active calibration above. + +In this experiment, we accurately determined the distance to the surface, but in most cases, this surface is only known very approximately. +If we do not provide the height above the surface, we can see that the passive calibration result suffers +much more than the active calibration result (as passive calibration fully relies on a drag coefficient +calculated from the physical input parameters):: + + >>> passive_fit = lk.calibrate_force( + ... volts.data, + ... bead_diameter=bead_diameter, + ... temperature=25, + ... sample_rate=volts.sample_rate, + ... num_points_per_block=200 + ... ) + >>> print(passive_fit.stiffness) + 0.08616565751377737 + +.. _bead_bead_tutorial: + +Active calibration with two beads far away from the surface +""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" + +.. warning:: + + The implementation of the coupling correction models is still alpha functionality. + While usable, this has not yet been tested in a large number of different scenarios. + The API can still be subject to change *without any prior deprecation notice*! + If you use this functionality keep a close eye on the changelog for any changes that may affect your analysis. + +When performing active calibration with two beads, there is a lower fluid velocity around the beads than there would be with a single bead. +This leads to a smaller voltage readout than expected and therefore a higher displacement sensitivity (microns per volt). 
+Failing to take this into account results in a bias. +Pylake offers a function to calculate a correction factor to account for the lower velocity around the bead. + +.. note:: + + For more information on how these factors are derived, please refer to the :ref:`theory` section on this topic. + +Appropriate correction factors for oscillation in :math:`x` can be calculated as follows:: + + factor = lk.coupling_correction_2d(dx=5.0, dy=0, bead_diameter=bead_diameter, is_y_oscillation=False) + +Here `dx` and `dy` represent the horizontal and vertical distance between the beads. +Note that these refer to *center to center distances* (unlike the distance channel in Bluelake, which represents the bead surface to surface distance). +Note that all three parameters have to be specified in the same spatial unit (meters or micron). +The final parameter `is_y_oscillation` indicates whether the stage was oscillated in the y-direction. + +The obtained correction factor can be used to correct the calibration factors:: + + Rd_corrected = factor * calibration["Rd"].value + Rf_corrected = calibration["Rf"].value / factor + stiffness_corrected = calibration["kappa"].value / factor**2 + +To correct a force trace, simply divide it by the correction factor:: + + corrected_force1x = f.force1x / factor + +.. note:: + + This coupling model neglects effects from the surface. It is intended for measurements performed at the center of the flowcell. + +.. note:: + + The model implemented here only supports beads that are aligned in the same plane. + It does not take a mismatch in the `z`-position of the beads into account. + In reality, the position in the focus depends on the bead radius and may be different for the two beads if they slightly differ in size + (see :cite:`alinezhad2018enhancement` Fig. 3) + At short bead-to-bead distances, such a mismatch would make the coupling less pronounced than the model predicts. diff --git a/docs/tutorial/force_calibration/calibration_items.rst b/docs/tutorial/force_calibration/calibration_items.rst new file mode 100644 index 000000000..cc5b00868 --- /dev/null +++ b/docs/tutorial/force_calibration/calibration_items.rst @@ -0,0 +1,173 @@ +Calibration items +----------------- + +.. only:: html + + :nbexport:`Download this page as a Jupyter notebook ` + +This tutorial will focus on performing force calibration using `pylake`. +It is deliberately light on theory, to focus on the practical usage of the calibration procedures. +For more theoretical background please refer to the +:doc:`theory section on force calibration`. + +We can download the data needed for this tutorial directly from Zenodo using Pylake:: + + filenames = lk.download_from_doi("10.5281/zenodo.7729823", "test_data") + +When force calibration is requested in Bluelake, it uses Pylake to perform the calibration, +after which a force calibration item is added to the timeline. +To see what such items look like, let's load the dataset:: + + f = lk.File("test_data/passive_calibration.h5") + +The force timeline in this file contains a single calibration measurement. +Note that every force axis (e.g. `1x`, `1y`, `2x`, etc.) has its own calibration. +We can see calibrations relevant for a single force channel (in this case `1x`) by inspecting the +:attr:`~lumicks.pylake.channel.Slice.calibration` attribute for the entire force +:class:`~lumicks.pylake.channel.Slice`:: + + f.force1x.calibration + +This returns a list of :class:`~lumicks.pylake.calibration.ForceCalibrationItem` instances. 
+In Jupyter notebooks, the following table will also display: + +.. image:: figures/listing.png + +This list provides a quick overview of which calibration items are present in the file and when they were applied. +More importantly however, it tells us whether the raw data for the calibration is present in the :class:`~lumicks.pylake.channel.Slice`. +When Bluelake creates a calibration item, it only contains the results of a calibration as well as +information on when the data was acquired, but *not* the raw data. +We can see this if we plot the time ranges over which these calibration items were acquired and then applied:: + + plt.figure(figsize=(8, 2)) + f.force1x.plot(color='k') + f.force1x.highlight_time_range(f.force1x.calibration[0], color="C0", annotation="0") + f.force1x.highlight_time_range(f.force1x.calibration[0].applied_at, color="C0") + f.force1x.highlight_time_range(f.force1x.calibration[1], color="C1", annotation="1") + f.force1x.highlight_time_range(f.force1x.calibration[1].applied_at, color="C1") + +.. image:: figures/time_ranges.png + +This shows how the calibration items relate to the data present in the file. +Calibration item `0` is the calibration that was acquired *before* the start of this file +(and is therefore the calibration that is active when the file starts). +Calibration item `1` is the calibration item acquired *during* the marker saved to this file. + +In a Jupyter notebook we can print the details of a specific item:: + + f.force1x.calibration[1] + +.. image:: figures/calibration_item.png + +This shows us all the relevant calibration parameters. +These parameters are properties and can be extracted as such:: + + >>> calibration_item = f.force1x.calibration[1] + ... calibration_item.stiffness + 0.1287225353482303 + +Redoing a Bluelake calibration +------------------------------ + +.. important:: + In order to redo a Bluelake calibration, the force data that was used for the calibration has to + be included in the `.h5` file. *Note that this force data is not exported nor marked by default*; + it has to be explicitly added to the exported file. + +We can directly slice the channel by the calibration item we want to reproduce to extract the relevant data:: + + force1x_slice = f.force1x[f.force1x.calibration[1]] + +To recalibrate data we first have to de-calibrate the data to get back to raw voltage. +To do this, we divide our data by the force sensitivity that was active at the start of the slice. + + >>> old_calibration = force1x_slice.calibration[0] + ... volts1x_slice = force1x_slice / old_calibration.force_sensitivity + +The easiest way to extract all the relevant input parameters for a calibration is to use +:meth:`~lumicks.pylake.calibration.calibration_item.ForceCalibrationItem.calibration_params()`:: + + >>> calibration_params = f.force1x.calibration[1].calibration_params() + ... calibration_params + {'num_points_per_block': 2000, + 'sample_rate': 78125, + 'excluded_ranges': [(19348.0, 19668.0), (24308.0, 24548.0)], + 'fit_range': (100.0, 23000.0), + 'bead_diameter': 4.89, + 'viscosity': 0.00089, + 'temperature': 25.0, + 'fast_sensor': False, + 'axial': False, + 'hydrodynamically_correct': False, + 'active_calibration': False} + +This returns a dictionary with the parameters that were set during the calibration in Bluelake. +These parameters can be used to reproduce a calibration that was performed in Bluelake +by passing these to :func:`~lumicks.pylake.calibrate_force`. 
+Depending on the type of calibration that was performed, the number of parameters may vary.
+
+.. note::
+
+    If a dictionary of calibration parameters contains parameters named `fixed_alpha` or `fixed_diode`,
+    this means that your C-Trap has a pre-calibrated diode. In this case, remember that the values
+    for `fixed_alpha` and `fixed_diode` depend on the amount of light falling on that trap. If you
+    want to calibrate data corresponding to a different trap power or split, you will need to
+    recalculate these values. For more information, please refer to the
+    :ref:`diode calibration tutorial`.
+
+To quickly reproduce the same calibration that was performed in Bluelake, we can use the function
+:func:`~lumicks.pylake.calibrate_force()` and unpack the parameters dictionary using the `**` notation::
+
+    >>> recalibrated = lk.calibrate_force(volts1x_slice.data, **calibration_params)
+
+We can plot this calibration::
+
+    recalibrated.plot()
+
+.. image:: figures/passive_calibration.png
+
+and the residual::
+
+    recalibrated.plot_spectrum_residual()
+
+.. image:: figures/residual.png
+
+We see that this reproduces the original calibration::
+
+    >>> recalibrated.stiffness
+    0.12872253516809967
+
+    >>> f.force1x.calibration[1].stiffness
+    0.1287225353482303
+
+In this particular case, it looks like we calibrated with the `hydrodynamically_correct` model disabled::
+
+    >>> calibration_params["hydrodynamically_correct"]
+    False
+
+Given that we used big beads (note the `4.89` micron bead diameter), we should probably have enabled it instead.
+We can still retroactively change this::
+
+    >>> calibration_params["hydrodynamically_correct"] = True
+    ... recalibrated_hyco = lk.calibrate_force(volts1x_slice.data, **calibration_params)
+    ... recalibrated_hyco.stiffness
+    0.15453110071085924
+
+As expected, the difference in this case is substantial.
+We can also see that the residual now shows less systematic deviation::
+
+    recalibrated_hyco.plot_spectrum_residual()
+
+.. image:: figures/residual_better.png
+
+Now that we have our new calibration item, we can recalibrate a slice of force data.
+To do so, take the slice and multiply it by the ratio of the old and new calibration factors::
+
+    correction_factor = recalibrated_hyco.force_sensitivity / old_calibration.force_sensitivity
+    recalibrated_force1x = force1x_slice * correction_factor
+
+    plt.figure()
+    force1x_slice.plot()
+    recalibrated_force1x.plot()
+
+.. image:: figures/recalibrated_force1x.png
diff --git a/docs/tutorial/force_calibration/diode_model.rst b/docs/tutorial/force_calibration/diode_model.rst
new file mode 100644
index 000000000..233508c34
--- /dev/null
+++ b/docs/tutorial/force_calibration/diode_model.rst
@@ -0,0 +1,122 @@
+.. _diode_tutorial:
+
+Diode calibration
+-----------------
+
+.. only:: html
+
+    :nbexport:`Download this page as a Jupyter notebook `
+
+When calibrating, it is important to take into account the :ref:`characteristics of the sensor` used to measure the data.
+
+Depending on the sensor in your system, and whether it was pre-calibrated, you may need to consider a few extra steps in the calibration procedure.
+We will illustrate this on a small dataset::
+
+    lk.download_from_doi("10.5281/zenodo.7729823", "test_data")
+    f = lk.File("test_data/noise_floor.h5")
+
+Let's grab a calibration item from the file and check whether it was a full calibration by inspecting its `kind` property.
+If this returns `"Active"` or `"Passive"`, it was a full calibration, which means the calibration item contains information about the diode::
+
+    >>> calibration_item = f.force1x.calibration[0]
+    ... calibration_item.kind
+    'Passive'
+
+If the last item was a full calibration, we can see whether this system has a calibrated diode by checking the `fitted_diode` property::
+
+    >>> calibration_item.fitted_diode
+    False
+
+In this case, the property is `False`, which means the diode was not fitted *during* the calibration.
+In other words, a pre-calibrated diode was used.
+We can extract the diode calibration model as follows::
+
+    >>> diode_calibration = calibration_item.diode_calibration
+    ... diode_calibration
+    DiodeCalibrationModel()
+
+This model describes the relation between trap power and the sensor parameters.
+To use this model, call it with the total trap power to determine the diode parameters at that power level::
+
+    >>> diode_params = diode_calibration(f["Diagnostics"]["Trap power 1"])
+    ... diode_params
+    {'fixed_diode': 14829.480905511606, 'fixed_alpha': 0.4489251910346808}
+
+These parameter values can be used directly with :func:`~lumicks.pylake.calibrate_force`.
+A convenient way to do this is to grab the calibration parameters of a previous calibration and only update the diode calibration parameters.
+Below is an example of how to do this with a `dict` union using the `|` operator::
+
+    >>> params = calibration_item.calibration_params()
+    ... # replace the 'fixed_diode' and 'fixed_alpha' values in params
+    ... # with the corresponding values from diode_params and return a new dict
+    ... updated_params = params | diode_params
+    ... print(updated_params)
+    {'num_points_per_block': 200,
+     'sample_rate': 100000,
+     'excluded_ranges': [],
+     'fit_range': (10.0, 23000.0),
+     'bead_diameter': 4.34,
+     'rho_bead': 1060.0,
+     'rho_sample': 997.0,
+     'viscosity': 0.000941,
+     'temperature': 22.58,
+     'fixed_alpha': 0.4489251910346808,
+     'fixed_diode': 14829.480905511606,
+     'fast_sensor': False,
+     'axial': False,
+     'hydrodynamically_correct': True,
+     'active_calibration': False}
+
+We can see that this updated the fixed diode parameters.
+
+.. note::
+
+    Each sensor has its own diode characteristic.
+    If you are calibrating multiple traps with pre-calibrated diodes, you will need to provide
+    the correct diode parameters for each trap.
+
+We can calibrate with these parameters directly by unpacking this dictionary into the :func:`~lumicks.pylake.calibrate_force` function::
+
+    volts = f.force1x / f.force1x.calibration[0].force_sensitivity
+
+    calibration = lk.calibrate_force(volts.data, **updated_params)
+    calibration.plot()
+
+.. image:: figures/diode_cal_bad_fit.png
+
+Unfortunately, in this case, we also have a noise floor to contend with, so we should restrict the fitting range as well
+(for more information about this, see the section on :ref:`noise floors`).
+Here, we restrict the upper bound of the fitting range to approximately four times the corner frequency::
+
+    volts = f.force1x / f.force1x.calibration[0].force_sensitivity
+
+    updated_params = updated_params | {"fit_range": [100, 2300]}
+    calibration = lk.calibrate_force(volts.data, **updated_params)
+    calibration.plot()
+
+.. image:: figures/diode_cal_good_fit.png
+
+To judge whether the noise floor has been sufficiently truncated, you can vary the upper limit
+of the fit range and check that the corner frequency no longer changes.
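+
+As a rough check, one could refit the same data with a few different upper bounds and compare the
+resulting corner frequencies. This is only a minimal sketch: it reuses the `volts` and `updated_params`
+variables defined above, and the candidate upper bounds are arbitrary values chosen for illustration::
+
+    # Refit while only varying the upper bound of the fit range
+    for upper_bound in (1500, 2300, 4600):
+        trial_params = updated_params | {"fit_range": [100, upper_bound]}
+        trial_fit = lk.calibrate_force(volts.data, **trial_params)
+        print(f"fit range up to {upper_bound} Hz: fc = {trial_fit.corner_frequency:.1f} Hz")
+
+If the reported corner frequency stays essentially constant as the upper bound is varied, the noise
+floor is no longer influencing the fit.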
+ +When to use calibrated diode parameters +""""""""""""""""""""""""""""""""""""""" + +Using a calibrated diode is critical when the corner frequency is close to or higher than the diode frequency. +When the corner frequency is very high, the estimation of the model parameters can fail *despite the fit looking good*. + +In this data, the corner frequency is low, therefore using the diode parameters is not strictly necessary: + + >>> calibration.corner_frequency + 531.0129872280306 + +Removing `fixed_diode` and `fixed_alpha` from the calibration arguments (by setting them to `None`) results in almost no change in this case:: + + updated_params = updated_params | {"fixed_alpha": None, "fixed_diode": None, "fit_range": [100, 2300]} + calibration = lk.calibrate_force(volts.data, **updated_params) + calibration.plot() + plt.title(f"Stiffness: {calibration.stiffness:.2f}"); + +.. image:: figures/diode_cal_good_fit_no_diode_pars.png + +As we can see, the stiffness is pretty much the same in this case. diff --git a/docs/tutorial/force_calibration/figures/active_calibration_lowlevel.png b/docs/tutorial/force_calibration/figures/active_calibration_lowlevel.png new file mode 100644 index 000000000..b1f4704ab --- /dev/null +++ b/docs/tutorial/force_calibration/figures/active_calibration_lowlevel.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a348dd00d787d12e387566b36d6cdaf09bbbb01625e351b9afd675ee21e39985 +size 29155 diff --git a/docs/tutorial/force_calibration/figures/bad_fit_noise_floor.png b/docs/tutorial/force_calibration/figures/bad_fit_noise_floor.png new file mode 100644 index 000000000..2a1f441ed --- /dev/null +++ b/docs/tutorial/force_calibration/figures/bad_fit_noise_floor.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:883a7bb80ecfa2190a698fd063421d6e7c5d7955d4792b37fccda43780baf4bd +size 35819 diff --git a/docs/tutorial/force_calibration/figures/bl_dictionary.png b/docs/tutorial/force_calibration/figures/bl_dictionary.png new file mode 100644 index 000000000..59f286539 --- /dev/null +++ b/docs/tutorial/force_calibration/figures/bl_dictionary.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d813fdcec1eaf473f5f51f75eb435ae51cbb4828a3f12b41a017cf8d4e97b86 +size 311891 diff --git a/docs/tutorial/force_calibration/figures/calibration_item.png b/docs/tutorial/force_calibration/figures/calibration_item.png new file mode 100644 index 000000000..738d01e3e --- /dev/null +++ b/docs/tutorial/force_calibration/figures/calibration_item.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:421e0228e8bab9a9977794658e89bbac6b9f6217cdcd873459cd8c6625020c7f +size 312341 diff --git a/docs/tutorial/figures/force_calibration/calibration_item_active.png b/docs/tutorial/force_calibration/figures/calibration_item_active.png similarity index 100% rename from docs/tutorial/figures/force_calibration/calibration_item_active.png rename to docs/tutorial/force_calibration/figures/calibration_item_active.png diff --git a/docs/tutorial/force_calibration/figures/calibration_peak.png b/docs/tutorial/force_calibration/figures/calibration_peak.png new file mode 100644 index 000000000..8f8c4d5ce --- /dev/null +++ b/docs/tutorial/force_calibration/figures/calibration_peak.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:89a275943821c662701b264b2927a2d2f77c22961f349a1e286ccaa8cfa08749 +size 32701 diff --git a/docs/tutorial/force_calibration/figures/diode_cal_bad_fit.png 
b/docs/tutorial/force_calibration/figures/diode_cal_bad_fit.png new file mode 100644 index 000000000..d3ff95253 --- /dev/null +++ b/docs/tutorial/force_calibration/figures/diode_cal_bad_fit.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb5ba4a2cd41af9f241cd9c9105c3daaf5ec335148fd66ecf2b565cea318bdb0 +size 33295 diff --git a/docs/tutorial/force_calibration/figures/diode_cal_good_fit.png b/docs/tutorial/force_calibration/figures/diode_cal_good_fit.png new file mode 100644 index 000000000..7400d2c60 --- /dev/null +++ b/docs/tutorial/force_calibration/figures/diode_cal_good_fit.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b31287eb9e0278562ed575f785bf044bd0934933d9cf179ddff8aaf0b959a6ec +size 29360 diff --git a/docs/tutorial/force_calibration/figures/diode_cal_good_fit_no_diode_pars.png b/docs/tutorial/force_calibration/figures/diode_cal_good_fit_no_diode_pars.png new file mode 100644 index 000000000..a66637453 --- /dev/null +++ b/docs/tutorial/force_calibration/figures/diode_cal_good_fit_no_diode_pars.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b762f96b1f8c02b8ba125d865fef827964cab073f962a3508020f26a09d76e4 +size 29421 diff --git a/docs/tutorial/force_calibration/figures/fitted_spectrum.png b/docs/tutorial/force_calibration/figures/fitted_spectrum.png new file mode 100644 index 000000000..d38ef1523 --- /dev/null +++ b/docs/tutorial/force_calibration/figures/fitted_spectrum.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e8c49dfba636eaa5040d3f27105763d8179500994eeb03fbc83c31ebcb99193 +size 149661 diff --git a/docs/tutorial/force_calibration/figures/frequency_exclusion_ranges.png b/docs/tutorial/force_calibration/figures/frequency_exclusion_ranges.png new file mode 100644 index 000000000..c8bdb17e3 --- /dev/null +++ b/docs/tutorial/force_calibration/figures/frequency_exclusion_ranges.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a8815e99da63149a14e33665b4f6ac2a9b0fd431742debef1279285cc7199a13 +size 85186 diff --git a/docs/tutorial/figures/force_calibration/identify_peaks.png b/docs/tutorial/force_calibration/figures/identify_peaks.png similarity index 100% rename from docs/tutorial/figures/force_calibration/identify_peaks.png rename to docs/tutorial/force_calibration/figures/identify_peaks.png diff --git a/docs/tutorial/force_calibration/figures/listing.png b/docs/tutorial/force_calibration/figures/listing.png new file mode 100644 index 000000000..52d19f728 --- /dev/null +++ b/docs/tutorial/force_calibration/figures/listing.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53c360d24db91039ad31412f9680f4d95628177611a47daef7dc88ecf5c4b688 +size 49210 diff --git a/docs/tutorial/force_calibration/figures/nanostage_position.png b/docs/tutorial/force_calibration/figures/nanostage_position.png new file mode 100644 index 000000000..007ae05fb --- /dev/null +++ b/docs/tutorial/force_calibration/figures/nanostage_position.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:972c4ae8c5d2c0b2617c09e97e725f1f1479ec466d846e311b6f263f40be3a93 +size 35968 diff --git a/docs/tutorial/force_calibration/figures/passive_calibration.png b/docs/tutorial/force_calibration/figures/passive_calibration.png new file mode 100644 index 000000000..2107a482e --- /dev/null +++ b/docs/tutorial/force_calibration/figures/passive_calibration.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:7e440d35ab5090268b7ab768eda3c9c40fadeedba6dbb0435d3a077fa804afc6 +size 28735 diff --git a/docs/tutorial/figures/force_calibration/power_spectrum.png b/docs/tutorial/force_calibration/figures/power_spectrum.png similarity index 100% rename from docs/tutorial/figures/force_calibration/power_spectrum.png rename to docs/tutorial/force_calibration/figures/power_spectrum.png diff --git a/docs/tutorial/figures/force_calibration/power_spectrum_excluded_ranges.png b/docs/tutorial/force_calibration/figures/power_spectrum_excluded_ranges.png similarity index 100% rename from docs/tutorial/figures/force_calibration/power_spectrum_excluded_ranges.png rename to docs/tutorial/force_calibration/figures/power_spectrum_excluded_ranges.png diff --git a/docs/tutorial/figures/force_calibration/power_spectrum_no_noise_peak.png b/docs/tutorial/force_calibration/figures/power_spectrum_no_noise_peak.png similarity index 100% rename from docs/tutorial/figures/force_calibration/power_spectrum_no_noise_peak.png rename to docs/tutorial/force_calibration/figures/power_spectrum_no_noise_peak.png diff --git a/docs/tutorial/figures/force_calibration/power_spectrum_noise_peak.png b/docs/tutorial/force_calibration/figures/power_spectrum_noise_peak.png similarity index 100% rename from docs/tutorial/figures/force_calibration/power_spectrum_noise_peak.png rename to docs/tutorial/force_calibration/figures/power_spectrum_noise_peak.png diff --git a/docs/tutorial/figures/force_calibration/power_spectrum_noise_peak_robust.png b/docs/tutorial/force_calibration/figures/power_spectrum_noise_peak_robust.png similarity index 100% rename from docs/tutorial/figures/force_calibration/power_spectrum_noise_peak_robust.png rename to docs/tutorial/force_calibration/figures/power_spectrum_noise_peak_robust.png diff --git a/docs/tutorial/force_calibration/figures/recalibrated_force1x.png b/docs/tutorial/force_calibration/figures/recalibrated_force1x.png new file mode 100644 index 000000000..fdb4d796d --- /dev/null +++ b/docs/tutorial/force_calibration/figures/recalibrated_force1x.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f869413d18cb9fcf9ce41ebf4a9cfd14408631e1cecf8b8f5a44ae0340419fc +size 37893 diff --git a/docs/tutorial/force_calibration/figures/residual.png b/docs/tutorial/force_calibration/figures/residual.png new file mode 100644 index 000000000..e4895a4da --- /dev/null +++ b/docs/tutorial/force_calibration/figures/residual.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00fc8a249722f31ecfe1e94456b12d61c1c502232e15859e9919dd8c63ce0016 +size 21398 diff --git a/docs/tutorial/force_calibration/figures/residual_better.png b/docs/tutorial/force_calibration/figures/residual_better.png new file mode 100644 index 000000000..cab57a5f2 --- /dev/null +++ b/docs/tutorial/force_calibration/figures/residual_better.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ef1c29aba130dee755cc2db7af81ac08b5d47ade760e5826f3857ce17e0a5bf +size 19901 diff --git a/docs/tutorial/force_calibration/figures/residual_check.png b/docs/tutorial/force_calibration/figures/residual_check.png new file mode 100644 index 000000000..3a7ad001d --- /dev/null +++ b/docs/tutorial/force_calibration/figures/residual_check.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff3013ed1d9a4ce657ae160cf1d57e8a46ffa9b1f45b1fead804364c75817b15 +size 76037 diff --git a/docs/tutorial/force_calibration/figures/surface_calibration_workflow.png 
b/docs/tutorial/force_calibration/figures/surface_calibration_workflow.png new file mode 100644 index 000000000..2c2467802 --- /dev/null +++ b/docs/tutorial/force_calibration/figures/surface_calibration_workflow.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:635dfc8eba9a7ce1976cfad89a2d18266d9f32b03eb7c6eb88e12888f41ff339 +size 256897 diff --git a/docs/tutorial/force_calibration/figures/temperature_dependence.png b/docs/tutorial/force_calibration/figures/temperature_dependence.png new file mode 100644 index 000000000..ebb9c54b9 --- /dev/null +++ b/docs/tutorial/force_calibration/figures/temperature_dependence.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16c919dbd78567d8129cb22a9d6b98915b36888f5f20fbe0240b8f781a03bcf1 +size 252340 diff --git a/docs/tutorial/force_calibration/figures/time_ranges.png b/docs/tutorial/force_calibration/figures/time_ranges.png new file mode 100644 index 000000000..00eef67b7 --- /dev/null +++ b/docs/tutorial/force_calibration/figures/time_ranges.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dea26afb3bb7cf15b74f7787cfcabac61775dc86fb0810b1bb9ec64c35fe9b26 +size 13043 diff --git a/docs/tutorial/force_calibration/force_calibration.rst b/docs/tutorial/force_calibration/force_calibration.rst new file mode 100644 index 000000000..17f034c0c --- /dev/null +++ b/docs/tutorial/force_calibration/force_calibration.rst @@ -0,0 +1,237 @@ +Optimizing force calibration parameters +--------------------------------------- + +.. only:: html + + :nbexport:`Download this page as a Jupyter notebook ` + +Pylake has many options to optimize force calibration depending on the specific conditions of your experiments. +The number of options presented when calling :func:`~lumicks.pylake.calibrate_force()` may be daunting at +first, but hopefully this chapter will provide some guidelines on how to obtain accurate and +precise force calibrations using Pylake. + +Experimental considerations +*************************** + +Core parameters +""""""""""""""" + +Passive calibration (also referred to as thermal calibration) involves fitting a model to data of a bead jiggling in a trap. +This model relies on a number of parameters that have to be specified in order to get the correct calibration factors. +The most important parameters are: + +- The bead diameter (in microns). +- The viscosity of the medium, which strongly depends on :ref:`temperature`. +- The distance to the surface (in case of a surface experiment). + +To find the viscosity of water at a particular temperature, Pylake supplies the function +:func:`~lumicks.pylake.viscosity_of_water` which implements the model presented in :cite:`huber2009new`. +When the viscosity parameter is omitted, this function will automatically be used to look up the +viscosity of water for that particular temperature + +The following figure shows the effect of mis-specifying temperature on the accuracy of the (passive) +calibration. The left panel shows the temperature dependence of the viscoscity of water over the +expected temperature range for a typical experiment. The three right panels show the percent error +expected for various calibration parameters as a function of specified temperature. The dashed +vertical line indicates the actual temperature of the experiment (which corresponds to 0% error): + +.. image:: figures/temperature_dependence.png + :nbattach: + +.. 
note::
+
+    Note that for experiments that use a medium different from water, the viscosity at the experimental
+    temperature should explicitly be provided.
+
+Hydrodynamically correct model
+""""""""""""""""""""""""""""""
+
+**For lateral calibration, it is recommended to use the hydrodynamically correct theory** by setting
+`hydrodynamically_correct` to `True`, as it provides an improved model of the underlying physics.
+For more details about this correction, see the :ref:`theory section on the hydrodynamically correct model`.
+For small beads (< 1 micron) the differences will be small, but for larger beads, substantial
+differences can occur.
+
+There is only one exception to this recommendation, which is when the beads are so close to the
+flowcell surface (closer than 0.75 times the bead diameter) that this model becomes invalid.
+
+.. image:: figures/surface_calibration_workflow.png
+    :nbattach:
+
+Using the hydrodynamically correct theory requires a few extra parameters: the density of the
+sample (`rho_sample`) and bead (`rho_bead`). When these are not provided,
+Pylake uses values for water and polystyrene for the sample and bead density, respectively.
+
+Experiments near the surface
+""""""""""""""""""""""""""""
+
+Proximity of the bead to the flowcell surface leads to an increase in the drag force on the bead.
+
+When performing experiments near the surface, it is recommended to provide the `distance_to_surface` argument.
+This should be the distance from the center of the bead to the surface of the flowcell in microns.
+Since it can be challenging to determine this distance, **it is recommended to use active calibration
+when calibrating near the surface,** since this makes calibration far less sensitive to mis-specification
+of the bead diameter, viscosity and height.
+
+Active or passive?
+""""""""""""""""""
+
+Passive calibration depends strongly on the values set for the bead diameter, distance to the surface and viscosity.
+When these are not well known, passive calibration can be inaccurate.
+During active calibration, the trap or nanostage is oscillated sinusoidally, leading to additional
+bead motion that can be detected and used to calibrate the displacement signal.
+Because of this extra information, active calibration does not rely as strongly on the calibration parameters.
+
+**Active calibration is highly recommended when the bead is close to the surface,**
+as it is less sensitive to the distance to the surface and bead diameter.
+**It is also recommended when the viscosity of the medium or the bead diameter is poorly known.**
+An example of a basic active calibration with a single bead can be found :ref:`here`.
+
+.. important::
+    One thing to be aware of is that active calibration with two beads is more complex and currently
+    requires :ref:`manual steps in Pylake`.
+
+Axial force
+"""""""""""
+
+When calibrating axial forces, it is important to set the `axial` flag to `True`.
+**Note that we currently do not support hydrodynamically correct models for axial calibrations.**
+Setting `axial` to `True` means you have to set `hydrodynamically_correct` to `False`.
+
+Technical considerations
+************************
+
+Sensor
+""""""
+
+In addition to the model that describes the bead's motion, it is important to take into account the
+:ref:`characteristics of the sensor` used to measure the data.
+A silicon diode sensor is characterized by two parameters, a "relaxation factor" `alpha` and a frequency `f_diode`.
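As a rough sketch of what these two parameters describe, the detector is commonly modeled as attenuating the measured power spectrum by a factor :math:`\alpha^2 + (1 - \alpha^2) / (1 + (f / f_\mathrm{diode})^2)`: a fraction `alpha` of the signal responds instantaneously, while the remainder is low-pass filtered with corner frequency `f_diode`. The snippet below only illustrates the shape of this attenuation (the parameter values are purely illustrative, not Pylake defaults)::

    def diode_attenuation(f, alpha, f_diode):
        # Fraction alpha responds instantly; the rest rolls off around f_diode.
        return alpha**2 + (1 - alpha**2) / (1 + (f / f_diode) ** 2)

    f = np.logspace(1, 5, 200)
    plt.figure()
    plt.loglog(f, diode_attenuation(f, alpha=0.45, f_diode=14000))  # illustrative values only
    plt.xlabel("Frequency [Hz]")
    plt.ylabel("Power attenuation")
    plt.show()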
+These parameters can either be estimated along with the other parameters or measured independently. + +When the diode frequency and relaxation factor are fitted, care must be taken that the corner +frequency of the power spectrum `fc` is +:ref:`lower than the estimated diode frequency`. +You can check whether the diode parameters were estimated from the calibration +data by checking the property :attr:`~lumicks.pylake.calibration.ForceCalibrationItem.fitted_diode`. +When this property is `True`, it means that the diode parameters were not fixed during the fit. +This means that you should be careful when calibrating small beads at high laser powers. + +If the property returns `False`, it means you can use higher powers more safely, but will have +to make sure the correct diode parameters for that particular laser power are used. For more +information on how to do this, refer to the :ref:`diode calibration tutorial`. + +.. warning:: + + For high corner frequencies, calibration can become unreliable when the diode parameters are fitted. + A warning sign for this is when the corner frequency `fc` approaches or exceeds the diode frequency `f_diode`. + For more information see the section on :ref:`High corner frequencies`. + +Fit range +""""""""" + +The fit range determines which section of the power spectrum is actually fit. +Two things are important when choosing a fitting range: + +1. The corner frequency should be clearly visible in the fitted range (frequency where the spectrum transitions from a plateau into a slope). +2. When working at low laser powers, it is possible that the :ref:`noise floor is visible` at higher frequencies. + This noise floor should always be excluded from the fit. + +Below is an example of a bad fit due to a noise floor. +Note how the spectrum flattens out at high frequencies and the model is unable to capture this. + +.. image:: figures/bad_fit_noise_floor.png + +**As a rule of thumb, an upper bound of approximately four times the corner frequency is usually a safe margin.** +The fitting bounds can be specified by providing a `fit_range` to any of the calibration functions. + +Frequency exclusion ranges +"""""""""""""""""""""""""" + +Force calibration is very sensitive to outliers. +It is therefore important to exclude noise peaks from the data prior to fitting. 
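A quick way to check whether such peaks are present is to compute a power spectrum with very little blocking and look for narrow spikes that stand out from the otherwise smooth curve. A minimal sketch, assuming `force_data` is the uncalibrated force slice used in the example below::

    ps_fine = lk.calculate_power_spectrum(
        force_data.data, sample_rate=force_data.sample_rate, num_points_per_block=5
    )

    plt.figure()
    ps_fine.plot()  # narrow spikes rising above the smooth spectrum are candidate noise peaks
    plt.show()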
+Excluding noise peaks can be done by providing a list of tuples to the `excluded_ranges` argument of :func:`~lumicks.pylake.calibrate_force()`::
+
+    lk.download_from_doi("10.5281/zenodo.7729823", "test_data")
+    force_data = lk.File("test_data/robust_fit_data.h5").force2y
+
+    params = {
+        "bead_diameter": 4.4,
+        "temperature": 25,
+        "num_points_per_block": 200,
+        "hydrodynamically_correct": True,
+    }
+
+    calibration1 = lk.calibrate_force(
+        force_data.data, sample_rate=force_data.sample_rate, **params
+    )
+    calibration2 = lk.calibrate_force(
+        force_data.data, sample_rate=force_data.sample_rate, **params, excluded_ranges=[(19447, 19634)]
+    )
+
+    plt.figure(figsize=(8, 6))
+    plt.subplot(2, 2, 1)
+    calibration1.plot(show_excluded=True)
+    plt.title(f"Stiffness: {calibration1.stiffness:.3f}, Force sensi: {calibration1.displacement_sensitivity:.2f}")
+    plt.subplot(2, 2, 2)
+    calibration2.plot(show_excluded=True)
+    plt.tight_layout()
+    plt.title(f"Stiffness: {calibration2.stiffness:.3f}, Force sensi: {calibration2.displacement_sensitivity:.2f}")
+
+    plt.subplot(2, 2, 3)
+    calibration1.plot_spectrum_residual()
+    plt.ylim([0, 2])
+    plt.title(f"Residual\nStiffness: {calibration1.stiffness:.3f}, Force sensi: {calibration1.displacement_sensitivity:.2f}")
+    plt.subplot(2, 2, 4)
+    calibration2.plot_spectrum_residual()
+    plt.tight_layout()
+    plt.title(f"Residual\nStiffness: {calibration2.stiffness:.3f}, Force sensi: {calibration2.displacement_sensitivity:.2f}")
+
+.. image:: figures/frequency_exclusion_ranges.png
+
+Note that when plotting the calibration, we have used `show_excluded=True`, which shows the excluded ranges in the plot.
+We can request these excluded ranges from the calibration item itself::
+
+    >>> calibration2.excluded_ranges
+    [(19447, 19634)]
+
+This will also work for an item coming from Bluelake.
+
+An alternative to specifying frequency exclusion ranges manually is :ref:`robust fitting`,
+which is less sensitive to outliers.
+It also has an option to detect noise peaks and :ref:`determine exclusion ranges automatically`.
+
+Blocking
+""""""""
+
+Blocking reduces the number of points in the power spectrum by averaging adjacent points.
+Blocking too little means that the assumptions required to compute the statistical backing are violated
+and that the fit is more difficult to assess.
+Blocking too much can lead to a bias in the corner frequency and thereby the calibration parameters.
+
+It is generally recommended to use more than 100 points per block.
+There is no clear guideline on how many points per block is too many, but a good way to check whether
+you are over-blocking is to change the blocking and see if it has a big impact on the corner frequency.
+If it does, you are likely over-blocking.
+
+Diagnostics
+"""""""""""
+
+Pylake offers a few diagnostics for checking whether the fit is good.
+Note that this in itself is not a guarantee that the calibration is correct,
+but it can be a good indicator of whether the model curve fits the data.
+The first is the `backing` property described in more detail :ref:`here`.
+Values lower than 0.05 are generally considered suboptimal and warrant a closer inspection.
+
+    >>> calibration1.backing
+    0.0
+
+    >>> calibration2.backing
+    39.11759032609807
+
+When the backing is low, it is recommended to plot the residuals of the fit.
+For a good fit, these should generally show a noise pattern without a clear trend such as the one below::
+
+    calibration2.plot_spectrum_residual()
+
+..
image:: figures/residual_check.png diff --git a/docs/tutorial/force_calibration/index.rst b/docs/tutorial/force_calibration/index.rst new file mode 100644 index 000000000..46347336b --- /dev/null +++ b/docs/tutorial/force_calibration/index.rst @@ -0,0 +1,17 @@ +Force Calibration +================= + +.. _force_calibration_tutorial: + +Find out about force calibration + +.. toctree:: + :caption: Contents + :maxdepth: 1 + + calibration_items + force_calibration + diode_model + active_calibration + low_level_api + robust_fitting diff --git a/docs/tutorial/force_calibration/low_level_api.rst b/docs/tutorial/force_calibration/low_level_api.rst new file mode 100644 index 000000000..ee11e0225 --- /dev/null +++ b/docs/tutorial/force_calibration/low_level_api.rst @@ -0,0 +1,166 @@ +Low level API +------------- + +.. only:: html + + :nbexport:`Download this page as a Jupyter notebook ` + +For those who want an API that is a little more composable, Pylake also offers a low level API to perform force calibration. +This API is intended for advanced users and separates the steps of creating a power spectrum and fitting models to it. + +Obtaining the power spectrum +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +First we need the data in volts as shown in previous tutorials:: + + filenames = lk.download_from_doi("10.5281/zenodo.7729823", "test_data") + f = lk.File("test_data/passive_calibration.h5") + + force_slice = f.force1x[f.force1x.calibration[1]] + old_calibration = force_slice.calibration[0] + + volts = force_slice / old_calibration.force_sensitivity + +To use the more manual lower-level API, we first need the power spectrum to fit. +To compute a power spectrum from our data we can invoke :func:`~lumicks.pylake.calculate_power_spectrum()`:: + + power_spectrum = lk.calculate_power_spectrum(volts.data, sample_rate=volts.sample_rate) + +This function returns a :class:`~lumicks.pylake.force_calibration.power_spectrum.PowerSpectrum` which we can plot:: + + plt.figure() + power_spectrum.plot() + plt.show() + +.. image:: figures/power_spectrum.png + +The power spectrum is smoothed by downsampling adjacent power spectral values, a procedure known as blocking. +Downsampling the spectrum is required to fulfill some of the assumptions in the fitting procedure, +but it comes at the cost of spectral resolution. +One must be careful that the shape of the power spectrum is still sufficiently preserved. +If the corner frequency is very low then downsampling too much can lead to biases in the calibration parameters. +In such cases, it is better to either measure a longer interval to increase the spectral resolution or +reduce `num_points_per_block`, the number of points used for blocking. + +The range over which to compute the spectrum can be controlled using the `fit_range` argument. +One can also exclude specific frequency ranges from the spectrum with `excluded_ranges` +which can be useful if there are noise peaks in the spectrum. +Let's see which ranges were excluded in our Bluelake calibration:: + + old_calibration + +.. image:: figures/bl_dictionary.png + +Here, they are listed in the `excluded_ranges` field. +To reproduce the result obtained with Bluelake, these should be excluded from the power spectrum:: + + power_spectrum = lk.calculate_power_spectrum( + volts.data, + sample_rate=volts.sample_rate, + fit_range=(1e2, 23e3), + num_points_per_block=2000, + excluded_ranges=old_calibration.excluded_ranges + ) + + plt.figure() + power_spectrum.plot(marker=".") + plt.show() + +.. 
image:: figures/power_spectrum_excluded_ranges.png + +Note that exclusion ranges are excluded *prior* to downsampling. +Considering that a noise peak may be very narrow, it is beneficial to lower the number of points per +block temporarily to find the exact exclusion range. +After determination of this exclusion range, the number of points per block can be increased again. +However, also see :ref:`robust fitting` for an automated peak identification routine. + +Passive calibration +^^^^^^^^^^^^^^^^^^^ + +In the low level API, we create the model to fit the data explicitly. +The next step is setting up the calibration model:: + + bead_diameter = old_calibration["Bead diameter (um)"] + force_model = lk.PassiveCalibrationModel(bead_diameter, temperature=25) + +To fit this model to the data use :func:`~lumicks.pylake.fit_power_spectrum()`:: + + calibration = lk.fit_power_spectrum(power_spectrum, force_model) + calibration + +.. image:: figures/calibration_item.png + +This will produce a table with your fitted calibration parameters. +These parameters can be accessed as follows:: + + >>> print(calibration["kappa"].value) + >>> print(f.force1x.calibration[1]["kappa (pN/nm)"]) + 0.12872206853860946 + 0.1287225353482303 + +.. note:: + + Note that by default, a bias correction is applied to the fitted results :cite:`norrelykke2010power`. + This bias correction is applied to the diffusion constant and amounts to a correction of :math:`\frac{N}{N+1}`, + where :math:`N` refers to the number of points used for a particular spectral data point. + It can optionally be disabled by passing `bias_correction=False` to :func:`~lumicks.pylake.fit_power_spectrum`. + +We can plot the calibration by calling:: + + plt.figure() + calibration.plot() + plt.show() + +.. image:: figures/fitted_spectrum.png + +We can set up a model for passive calibration using the hydrodynamically correct theory according to:: + + force_model = lk.PassiveCalibrationModel( + bead_diameter, + hydrodynamically_correct=True, + rho_sample=999, + rho_bead=1060.0 + ) + +Note that when `rho_sample` and `rho_bead` are omitted, values for water and polystyrene are used for the sample and bead density respectively. + +Active calibration +"""""""""""""""""" + +Let's load some active calibration data:: + + lk.download_from_doi("10.5281/zenodo.7729823", "test_data") + f = lk.File("test_data/near_surface_active_calibration.h5") + + volts = f.force1x / f.force1x.calibration[0].force_sensitivity + bead_diameter = f.force1x.calibration[0].bead_diameter + distance_to_surface = 1.04 * bead_diameter + driving_data = f["Nanostage position"]["X"] + +Instead of using the :class:`~lumicks.pylake.PassiveCalibrationModel` presented in the previous section, +we now use the :class:`~lumicks.pylake.ActiveCalibrationModel`. +We also need to provide the sample rate at which the data was acquired, and a rough guess for the driving frequency. 
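If the driving frequency is not known beforehand, a quick look at the spectrum of the nanostage signal is usually enough to produce a reasonable guess. The snippet below is only a sanity check and not part of the calibration itself::

    frequency = np.fft.rfftfreq(len(driving_data.data), d=1.0 / driving_data.sample_rate)
    magnitude = np.abs(np.fft.rfft(driving_data.data - np.mean(driving_data.data)))
    print(f"Rough driving frequency guess: {frequency[np.argmax(magnitude)]:.1f} Hz")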
+Pylake will find an accurate estimate of the driving frequency based on this initial estimate (provided that it is close enough):: + + active_model = lk.ActiveCalibrationModel( + driving_data.data, + volts.data, + driving_data.sample_rate, + bead_diameter, + driving_frequency_guess=38, + distance_to_surface=distance_to_surface + ) + +The rest of the procedure is the same:: + + power_spectrum = lk.calculate_power_spectrum( + volts.data, + sample_rate=volts.sample_rate, + fit_range=(1e2, 23e3), + num_points_per_block=2000, + ) + + calibration = lk.fit_power_spectrum(power_spectrum, active_model) + calibration.plot(show_active_peak=True) + +.. image:: figures/active_calibration_lowlevel.png diff --git a/docs/tutorial/force_calibration/robust_fitting.rst b/docs/tutorial/force_calibration/robust_fitting.rst new file mode 100644 index 000000000..ab0d49781 --- /dev/null +++ b/docs/tutorial/force_calibration/robust_fitting.rst @@ -0,0 +1,170 @@ +.. _robust_fitting: + +Robust fitting +-------------- + +.. only:: html + + :nbexport:`Download this page as a Jupyter notebook ` + +So far, we have been using least-squares fitting routines for force calibration. +In that case, we assume that the error in the power at each frequency is distributed according to a Gaussian distribution. +:ref:`Blocking or windowing` the power spectrum ensures that this assumption is +close enough to the truth such that the fit provides accurate estimates of the unknown parameters. +Occasionally, the power spectrum might show a spurious noise peak. +Such a peak is an outlier in the expected behavior of the spectrum and therefore interferes with the +assumption of having a Gaussian error distribution. +As a result, the fit is skewed. In those cases, it can be beneficial to do a robust fit. + +When a robust fit is performed, one assumes that the probability of encountering one or multiple outliers is non-negligible. +By taking this into account during fitting, the fit can be made more robust to outliers in the data. + +One downside of this approach is that the current implementation does not readily provide standard errors +on the parameter estimates and that it leads to a small bias in the fit results for which Pylake has no correction. + +Robust fitting can be used in combination with a least-squares fit to identify outliers automatically +in order to exclude these from a second regular least-squares fit. +The following example illustrates the method. + +To see this effect, let's load a dataset of uncalibrated force sensor data of a 4.4 μm bead showing +Brownian motion while being trapped. In particular, look at the `Force 2y` sensor signal:: + + filenames = lk.download_from_doi("10.5281/zenodo.7729823", "test_data") + f = lk.File("test_data/robust_fit_data.h5") + f2y = f.force2y + +First create a power spectrum without blocking or windowing for later use. 
Then derive a power spectrum with blocking from the first power spectrum::
+
+    ps = lk.calculate_power_spectrum(f2y.data, sample_rate=f2y.sample_rate, num_points_per_block=1, fit_range=(10, 23e3))
+    ps_blocked = ps.downsampled_by(200)
+
+Next, set up a passive calibration model that uses the hydrodynamically correct theory, perform a least-squares fit and plot the result::
+
+    model = lk.PassiveCalibrationModel(4.4, temperature=25.0, hydrodynamically_correct=True)
+    fit = lk.fit_power_spectrum(ps_blocked, model)
+
+    plt.figure()
+    fit.plot()
+    plt.title(
+        f"Skewed fit: $f_c$ = {fit.results['fc'].value:.1f}, "
+        f"$D$ = {fit.results['D'].value:.4f}, "
+        f"$f_d$ = {fit.results['f_diode'].value:.1f}"
+    )
+    plt.show()
+
+.. image:: figures/power_spectrum_noise_peak.png
+
+Notice how the tail of the model is skewed towards the peak, in order to reduce the least-squares error.
+In this case, the free parameters to fit the diode filter contribution are 'abused' to reduce the error between the model and the outlier.
+This results in biased parameter estimates.
+
+Now do a robust fit. We do this by specifying a loss function in :func:`~lumicks.pylake.fit_power_spectrum()`.
+For least-squares fitting, the loss function is `'gaussian'`, which is the default if nothing is specified.
+However, if we specify `'lorentzian'`, a robust fitting routine will be used instead.
+Because `bias_correction` and robust fitting are mutually exclusive, we need to explicitly turn it off::
+
+    fit = lk.fit_power_spectrum(ps_blocked, model, bias_correction=False, loss_function="lorentzian")
+
+Now plot the robust fit::
+
+    plt.figure()
+    fit.plot()
+    plt.title(
+        f"Robust fit: $f_c$ = {fit.results['fc'].value:.1f}, "
+        f"$D$ = {fit.results['D'].value:.4f}, "
+        f"$f_d$ = {fit.results['f_diode'].value:.1f}"
+    )
+    plt.show()
+
+.. image:: figures/power_spectrum_noise_peak_robust.png
+
+Notice how the model now follows the power spectrum nearly perfectly. The value for `f_diode` has increased
+significantly, now that it is not abused to reduce the error induced by the outlier.
+
+This example shows that a robust fitting method is less likely to fail on outliers in the power spectrum data.
+
+It is therefore fair to ask why one would not simply use robust fitting all the time. There are two reasons.
+
+First, robust fitting leads to a small bias in the fit results for which Pylake has no correction.
+Least-squares fitting also leads to a bias, but this bias is known (:cite:`norrelykke2010power`) and can be corrected with `bias_correction=True`.
+Second, for least-squares fitting, methods exist to estimate the expected standard errors in the
+estimates of the free parameters, which are implemented in the least-squares fitting routines that Pylake uses :cite:`press1990numerical`.
+These error estimates are not implemented for robust fitting, and as such, the fit results will show
+`nan` for the error estimates after a robust fit.
+However, as will be shown below, the robust fitting results may be used as a start to identify outliers automatically,
+in order to exclude these from a second, regular least-squares fit.
+
+.. _find_fer:
+
+Automated spurious peak detection
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+We still have the power spectrum `ps` that was created without blocking or windowing, which
+we will use to identify the peak and automatically obtain frequency exclusion ranges
+using the method :meth:`~lumicks.pylake.force_calibration.power_spectrum.PowerSpectrum.identify_peaks()`.
+This method takes a function that accurately models the power spectrum as a function of frequency in order to normalize it. +It then identifies peaks based on the likelihood of encountering a peak of a certain magnitude in the resulting data set. +If we have a "good fit", then the easiest way to get that function is to use our fitted model:: + + plt.figure() + frequency_range = np.arange(100, 22000) + # We can call the fit with a list of frequencies to evaluate the model at those frequencies. + # This uses the best fit parameters from fit.fitted_params. + plt.plot(frequency_range, fit(frequency_range)) + plt.xscale("log") + plt.yscale("log") + +If there are no spurious peaks, then normalizing the unblocked power spectrum results in random +numbers with an exponential distribution with a mean value of 1. +The chance of encountering increasingly larger numbers decays exponentially, and this fact is used by `identify_peaks()`:: + + frequency_exclusions = ps.identify_peaks(fit, peak_cutoff=20, baseline=1) + +The parameter `peak_cutoff` is taken as the minimum magnitude of any value in the normalized power spectrum in order to be considered a peak. +The default value is 20, and it corresponds to a chance of about 2 in a billion of a peak of magnitude 20 or larger occuring naturally in a data set. +If a peak is found with this or a higher magnitude, the algorithm then expands the range to the left and right +until the first point at which the power spectrum drops below the value `baseline`. +The frequencies at which this occurs end up as the lower and upper frequency of an exclusion range. +As such, the value of `baseline` controls the width of the frequency exclusion range. +We can visualize the excluded peaks as follows:: + + fig, ax = plt.subplots(1, 2, sharey=True) + for axis, title in zip(ax, ('Full spectrum', 'Zoom')): + axis.loglog(ps.frequency, ps.power, label="Power spectrum") + for idx, item in enumerate(frequency_exclusions, 1): + to_plot = np.logical_and(item[0] <= ps.frequency, ps.frequency < item[1]) + axis.plot(ps.frequency[to_plot], ps.power[to_plot], 'r', label=f'peak {idx}') + axis.legend() + axis.set_title(title) + axis.set_xlabel('Frequency [Hz]') + ax[1].set_xlim(frequency_exclusions[0][0] - 1.0, frequency_exclusions[-1][1] + 1.0) + ax[1].set_xscale('linear') + ax[0].set_ylabel('Power [V$^2$/Hz]') + plt.suptitle('Identified peaks') + plt.show() + +.. image:: figures/identify_peaks.png + +Finally, we can do a least-squares fit, but in this case we will filter out the frequency ranges that contain peaks. +Because we use a least-squares method, we get error estimates on the fit parameters, and bias in the fit result can be corrected. +The default values of `loss_function='gaussian'` and `bias_correction=True` ensure least-squares fitting +and bias correction, so we do not need to specify them:: + + ps_no_peak = lk.calculate_power_spectrum( + f2y.data, sample_rate=f2y.sample_rate, num_points_per_block=200, fit_range=(10, 23e3), excluded_ranges=frequency_exclusions, + ) + fit_no_peak = lk.fit_power_spectrum(ps_no_peak, model) + + plt.figure() + fit_no_peak.plot() + plt.title( + f"Least squares (ex. peaks): $f_c$ = {fit_no_peak.results['fc'].value:.1f}, " + f"$D$ = {fit_no_peak.results['D'].value:.4f}, " + f"$f_d$ = {fit_no_peak.results['f_diode'].value:.1f}" + ) + plt.show() + +.. 
image:: figures/power_spectrum_no_noise_peak.png + +Notice that no skewing occurs, and that the values of `fc`, `D` and `f_diode` are now closer to +values found via robust fitting in the section above. diff --git a/docs/tutorial/index.rst b/docs/tutorial/index.rst index 12be84214..0c5d0167a 100644 --- a/docs/tutorial/index.rst +++ b/docs/tutorial/index.rst @@ -31,6 +31,6 @@ These import conventions are used consistently in the tutorial. fdfitting nbwidgets kymotracking - force_calibration + force_calibration/index population_dynamics piezotracking diff --git a/docs/whatsnew/1.6.0/1_6_0.rst b/docs/whatsnew/1.6.0/1_6_0.rst index e566756a9..5c5e711d8 100644 --- a/docs/whatsnew/1.6.0/1_6_0.rst +++ b/docs/whatsnew/1.6.0/1_6_0.rst @@ -13,3 +13,18 @@ Bead-bead coupling correction When performing active calibration with two beads coupling effects can bias the estimated calibration parameters (force and displacement sensitivity, and the stiffness). In Pylake 1.6.0, we include a model to account for bead-bead coupling between two beads in bulk. See :ref:`theory` and :ref:`tutorial` and the :doc:`example` for more information. + +Force calibration +----------------- + +We have introduced easier access to force calibration parameters and results. + +.. figure:: focal_listing.png + +Extracting the force calibration parameters required to reproduce the Bluelake calibration can now be done with a single command named :meth:`~lumicks.pylake.calibration.ForceCalibrationItem.calibration_params()`. + +Similarly, the active calibration peak as well as the exclusion ranges can now be plotted with a single line of code. + +.. figure:: focal_plot.png + +The force calibration documentation has also been improved. See :ref:`tutorial` and :ref:`theory` for more information. diff --git a/docs/whatsnew/1.6.0/focal_listing.png b/docs/whatsnew/1.6.0/focal_listing.png new file mode 100644 index 000000000..52d19f728 --- /dev/null +++ b/docs/whatsnew/1.6.0/focal_listing.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53c360d24db91039ad31412f9680f4d95628177611a47daef7dc88ecf5c4b688 +size 49210 diff --git a/docs/whatsnew/1.6.0/focal_plot.png b/docs/whatsnew/1.6.0/focal_plot.png new file mode 100644 index 000000000..c0e65d7f9 --- /dev/null +++ b/docs/whatsnew/1.6.0/focal_plot.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54115582334ba69d752b8e9c0d1e56eed296c67352e0d69a722f7cc744b0f9a2 +size 33575 diff --git a/lumicks/pylake/force_calibration/calibration_models.py b/lumicks/pylake/force_calibration/calibration_models.py index 40cd1c3f1..c7f46fba2 100644 --- a/lumicks/pylake/force_calibration/calibration_models.py +++ b/lumicks/pylake/force_calibration/calibration_models.py @@ -412,8 +412,8 @@ class PassiveCalibrationModel: The power spectrum calibration algorithm implemented here is based on a number of publications by the Flyvbjerg group at DTU [1]_ [2]_ [3]_ [4]_ [5]_ [6]_. Please refer to the :doc:`theory section` and - :doc:`tutorial` on force calibration for more information on the - calibration methods implemented. + :doc:`tutorial` on force calibration for more information + on the calibration methods implemented. Parameters ---------- @@ -801,8 +801,8 @@ class ActiveCalibrationModel(PassiveCalibrationModel): The power spectrum calibration algorithm implemented here is based on [1]_ [2]_ [3]_ [4]_ [5]_ [6]_. Please refer to the :doc:`theory section` and - :doc:`tutorial` on force calibration for more information on the - calibration methods implemented. 
+ :doc:`tutorial` on force calibration for more information on + the calibration methods implemented. Parameters ---------- diff --git a/lumicks/pylake/force_calibration/convenience.py b/lumicks/pylake/force_calibration/convenience.py index 25f6b467c..31664a260 100644 --- a/lumicks/pylake/force_calibration/convenience.py +++ b/lumicks/pylake/force_calibration/convenience.py @@ -50,8 +50,8 @@ def calibrate_force( The power spectrum calibration algorithm implemented here is based on [1]_ [2]_ [3]_ [4]_ [5]_ [6]_. Please refer to the :doc:`theory section` and - :doc:`tutorial` on force calibration for more information on the - calibration methods implemented. + :doc:`tutorial` on force calibration for more information + on the calibration methods implemented. Parameters ----------