

Poster in Affinity Event: Muslims in ML

3D localization and autofocus of the particle field based on deep learning and depth-from-defocus

Zhao DONG · Shaokai Yang · Yan Sha

Keywords: [ GAN ] [ YOLO ] [ depth-from-defocus ] [ Particle positioning ] [ Diffusion Model ]


Abstract:

Accurate three-dimensional (3D) positioning of particles is a critical task in microscopic particle research, and measuring particle depth is one of its main challenges. We present a novel approach for precise 3D localization and autofocus of microscopic particles that integrates Depth-from-Defocus (DfD) techniques with deep learning. Our method combines You Only Look Once (YOLO) for lateral position detection with Generative Adversarial Networks (GANs) for autofocus, providing an efficient, noise-resistant, real-time solution. Validated on synthetic datasets, static particle fields, and dynamic scenarios, the method achieved 99.9% accuracy on the synthetic data and performed robustly on polystyrene particles, red blood cells, and plankton. The algorithm processes a single multi-target image in 0.008 seconds, enabling real-time applications. Future work includes integrating Diffusion Models and the latest version of YOLO to enhance depth estimation and detection accuracy. We are also developing a user-friendly pipeline with a graphical user interface (GUI) to make these tools accessible to researchers across disciplines, including those without prior deep learning expertise. This pipeline will be continuously updated to improve precision and efficiency, making it a powerful and accessible tool for high-precision particle analysis in a wide range of scientific applications.
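The abstract does not include implementation details, so the following is only a minimal sketch of the two-stage idea it describes: YOLO supplies lateral (x, y) detections, and a defocus-driven network estimates depth (z) for each detected particle. It assumes the Ultralytics YOLO package and PyTorch; the `DefocusDepthNet` class, the `yolov8n.pt` weights, and the `particle_hologram.png` input path are illustrative placeholders, not the authors' actual GAN-based autofocus model or data.

```python
# Sketch: YOLO for lateral detection + a placeholder depth-from-defocus regressor.
import cv2
import torch
import torch.nn as nn
from ultralytics import YOLO

class DefocusDepthNet(nn.Module):
    """Hypothetical stand-in for the GAN/DfD stage: crop of a defocused
    particle in, scalar depth estimate out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # scalar z estimate

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

detector = YOLO("yolov8n.pt")          # lateral (x, y) detection; weights are placeholders
depth_net = DefocusDepthNet().eval()   # untrained placeholder for the autofocus stage

image = cv2.imread("particle_hologram.png")             # illustrative input path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
detections = detector(image)[0]

particles_3d = []
with torch.no_grad():
    for box in detections.boxes.xyxy:                   # lateral bounding boxes from YOLO
        x1, y1, x2, y2 = map(int, box.tolist())
        crop = gray[y1:y2, x1:x2]
        crop = cv2.resize(crop, (64, 64)).astype("float32") / 255.0
        z = depth_net(torch.from_numpy(crop)[None, None]).item()  # depth from defocus blur
        particles_3d.append(((x1 + x2) / 2, (y1 + y2) / 2, z))    # (x, y, z) per particle
```

In this layout the detector and the depth regressor stay independent, which matches the abstract's claim of per-image processing fast enough for real-time use: the lateral stage runs once per frame and the depth stage only on the detected crops.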
