Workshop: Inference in Microscopy


In machine learning, we learn models from data by training. The models are then used to answer queries, a process known as inference. While training large models is costly, inference can be even more so because of the sheer number of queries. In this workshop, we explore inference challenges in microscopy and optical means of achieving faster and more cost-effective inference.

The workshop on Inference in Microscopy included the following presentations:

Welcome and Brief Introduction to Inference

Joachim Giesen
Theoretical Computer Science, Friedrich Schiller University Jena

Introduction to Bayesian Methods

Michael Habeck
Microscopic Image Analysis Group, Jena University Hospital (UKJ)

Abstract: Bayesian inference offers a unique framework for drawing conclusions from incomplete and uncertain data. The hallmark is the notion of a conditional probability quantifying the plausibility of a hypothesis in the light of the given evidence. Applying Bayesian inference to scientific data proceeds in two stages: formulating a probabilistic model and inferring the model from the given data. I'll briefly discuss various notions of probability, explain the foundations of probabilistic modeling, and illustrate aspects of Bayesian inference by using a simple example.
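The two-stage workflow described in the abstract (formulate a probabilistic model, then infer it from data) can be illustrated with a minimal sketch not taken from the talk: inferring the bias of a coin from observed flips. With a Beta prior and a binomial likelihood, the posterior is available in closed form (conjugacy), so the conditional probability of the bias given the evidence is a single update rule.

```python
from fractions import Fraction

def coin_posterior(heads, tails, a=1, b=1):
    """Beta-binomial conjugate update: Beta(a, b) prior on the coin's
    head probability, binomial likelihood for the observed flips.
    Returns the posterior Beta parameters and the posterior mean."""
    a_post = a + heads          # prior pseudo-heads + observed heads
    b_post = b + tails          # prior pseudo-tails + observed tails
    mean = Fraction(a_post, a_post + b_post)
    return a_post, b_post, mean

# 7 heads in 10 flips under a uniform Beta(1, 1) prior
a_post, b_post, mean = coin_posterior(7, 3)
# posterior is Beta(8, 4); its mean 8/12 = 2/3 blends the uniform
# prior with the observed frequency 7/10
```

The same two stages (model, then inference) apply unchanged to the microscopy problems discussed later in the workshop; only the model and the inference machinery grow more elaborate.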

Understanding Diffractive Deep Neural Networks

Sina Saravi
Abbe Center of Photonics, Friedrich Schiller University Jena

Abstract: Diffractive deep neural networks (D2NNs) are a type of intelligent optical system that can be used for image recognition and processing. Such an optical system, whose design is found through machine-learning approaches, directly receives the light from an object as its input and processes this information optically through internal scatterings and diffractions of light. Such an all-optical inference machine can offer immense advantages in the speed and complexity of image-processing tasks, and potentially in power consumption, compared to conventional electronics-based inference systems. In this talk, I will give an introduction to how D2NNs operate and to the role of machine learning in the design of such systems. I will also present the challenges ahead for the development of such intelligent optical systems.
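A common way to model one D2NN layer numerically is a trainable phase mask followed by free-space propagation computed with the angular-spectrum method. The sketch below is a simplified illustration of that forward pass, not the speaker's implementation; the wavelength, pixel pitch, layer spacing, and random (untrained) phases are all assumptions chosen only to make the example run.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field over distance z in free space
    using the FFT-based angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)            # spatial frequencies (cycles/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)     # cut off evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_layer(field, phase_mask, wavelength, dx, z):
    """One D2NN layer: trainable phase modulation, then propagation."""
    return angular_spectrum_propagate(field * np.exp(1j * phase_mask),
                                      wavelength, dx, z)

# toy forward pass: random phases stand in for machine-learned ones
rng = np.random.default_rng(0)
n = 64
field = np.zeros((n, n), complex)
field[24:40, 24:40] = 1.0                   # square aperture as "input image"
for _ in range(3):                          # three diffractive layers
    field = diffractive_layer(field, rng.uniform(0, 2 * np.pi, (n, n)),
                              wavelength=633e-9, dx=10e-6, z=0.05)
intensity = np.abs(field) ** 2              # a detector measures intensity
```

In a trained D2NN, the phase masks would be optimized (by gradient descent on a simulated forward model) so that the output intensity concentrates in a detector region corresponding to the input's class; inference then happens at the speed of light with no electronic computation.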

Investigating the Geometry of Molecular Movement with MINFLUX Microscopy Enabled Single Particle Tracking

Bela Tristan Leander Vogler
Institute for Applied Optics and Biophysics, Friedrich Schiller University Jena

Abstract: The recent introduction of MINFLUX microscopy has opened the gates for reliable high spatio-temporal resolution experiments, effortlessly delivering 10³–10⁴ continuous localizations with ~10 nm dynamic localization precision. The instrument seems to perfectly complement single particle tracking (SPT), a prominent technique for molecular diffusion analysis. By treating localization data sampled at kHz frequency as dense point clouds, we can construct geometrical and morphological features that can in turn be used to quantify diffusive behavior within and between particle trajectories. In contrast to popular methods of diffusion parameter inference, which are model-bound, this method relies neither on prior analytical assumptions nor on curve fitting, but enables direct probing for the desired criteria. Here, we investigate the capabilities of geometry-based analysis on a fluorescently tagged SRC protein tyrosine kinase on the surface of live cells, tracked in three dimensions with iterative MINFLUX microscopy, and showcase both diffusion-mode separation and specific parameter extraction. By shifting the focus to sub-trajectory structures, we unravel highly localized particle behavior.

Combinations of Machine Learning and Statistical Modeling for the Interpretation of High Dimensional Microscopy Data

Carl-Magnus Svensson
Leibniz Institute for Natural Product Research and Infection Biology - "Hans-Knöll-Institut" (HKI)

Abstract: Machine learning and deep learning have become common tools for analyzing microscopy images of different modalities and biological systems in recent years. Here, we present applications of machine learning as well as statistical modeling to extract relevant information from microscopy images. We exemplify this by identifying fat in histochemically stained tissue slices using a convolutional neural network (CNN) and by developing a hidden Markov model to quantify the dynamics of directed organelle motion inside cells. Of special interest to us are the cases in which machine learning and statistical modeling must be combined to amplify the information gained from an image or set of images. To illustrate this, we describe how we used Bayesian decision-making to decode experimental conditions in multiplexed experiments using microfluidic droplets. In this case, we color-code different conditions with beads and identify them using a combination of random forests and Bayesian inference. In another application of microfluidic droplet experiments, we use angle-resolved scatter (ARS) imaging to rapidly gather information about microbial growth in the droplets. The ARS images are not easy for a human to interpret, as they are not classical images but more akin to two-dimensional spectra. We employ CNNs trained on a dilution series to quantify the growth in individual droplets. As we work with low initial concentrations of the microbes, most droplets remain empty. To compare growth characteristics between experimental conditions, e.g. different antibiotic concentrations, we formulate a statistical model for the distribution of growth across a droplet population that combines the fraction of droplets that exhibit growth at all with the amount of growth in these positive-growth droplets. The model is then fitted to individual experiments, and Monte Carlo sampling is used to determine whether an antibiotic significantly decreases the microbial growth.
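One ingredient of such a comparison, the fraction of droplets that exhibit any growth, can be sketched with a minimal Monte Carlo computation. This is an illustration of the general idea rather than the speakers' model: the droplet counts below are hypothetical, a uniform Beta prior is assumed, and the full model would also include the amount of growth in the positive droplets.

```python
import numpy as np

rng = np.random.default_rng(1)

def posterior_growth_fraction(n_growing, n_total, n_samples=100_000):
    """Monte Carlo draws from the Beta(1 + k, 1 + n - k) posterior for
    the fraction of droplets showing any growth (uniform Beta prior)."""
    return rng.beta(1 + n_growing, 1 + n_total - n_growing, size=n_samples)

# hypothetical counts: control vs. antibiotic-treated droplet populations
control = posterior_growth_fraction(n_growing=180, n_total=1000)
treated = posterior_growth_fraction(n_growing=110, n_total=1000)

# posterior probability that the antibiotic lowers the growth fraction,
# estimated by comparing paired Monte Carlo draws
p_lower = np.mean(treated < control)
```

A posterior probability close to 1 indicates that the antibiotic condition has a significantly lower fraction of growing droplets; the same sampling strategy extends to the joint model over growth fraction and growth amount.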

Deconvolution of Optical Microscopy Images

Rainer Heintzmann
Leibniz Institute of Photonic Technology & Institute of Physical Chemistry, Friedrich Schiller University Jena

Abstract: Many scientific questions can be stated as “inverse problems”: we measure some data, but our real interest lies in understanding, confirming, or rejecting a model with certain model parameters. Most scientists use spreadsheet tools to “fit” the parameters of a simple model (such as a linear dependence or exponential growth) to the measured data. Some problems, however, require millions of unknowns, and deconvolution belongs to this class. The measured data may be an acquired confocal image stack, and the millions of unknowns are the emission intensities of the sample before being blurred by the imaging process in the microscope and corrupted by noise from the stochastic nature of photons and from the detector. Maximum-likelihood deconvolution refers to a procedure for performing such a fit, yielding the de-blurred image. To solve such large-scale problems, first-order optimization methods are the natural choice. Fortunately, the automatic-differentiation capabilities of modern programming languages significantly reduce the required effort.
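A classical concrete instance of maximum-likelihood deconvolution under Poisson noise is the Richardson-Lucy iteration, whose fixed point maximizes the Poisson likelihood of the measured image given the blurred estimate. The 1-D NumPy sketch below is a simplified illustration, not the speaker's code: it uses an assumed Gaussian point-spread function, a noiseless toy measurement, and direct convolution, whereas real confocal stacks are 3-D and typically use FFT-based convolution inside an autodiff framework.

```python
import numpy as np

def richardson_lucy(measured, psf, n_iter=200):
    """Richardson-Lucy iteration: multiplicative updates whose fixed
    point maximizes the Poisson likelihood of `measured` given
    convolve(estimate, psf). 1-D, 'same'-mode convolution."""
    psf_flip = psf[::-1]                         # adjoint of convolution
    estimate = np.full_like(measured, measured.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = measured / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate

# toy example: two point emitters blurred by a normalized Gaussian PSF
truth = np.zeros(64)
truth[20] = 100.0
truth[40] = 50.0
psf = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)
psf /= psf.sum()                                 # PSF conserves photons
measured = np.convolve(truth, psf, mode="same")  # noiseless for clarity
restored = richardson_lucy(measured, psf)
```

The multiplicative update is one hand-derived special case; with automatic differentiation, the same Poisson negative log-likelihood can instead be minimized directly by any first-order optimizer, which is what makes the approach practical for millions of unknowns.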