Guest Editor(s)
- Prof. Eyad Elyan
- School of Computing Science and Digital Media, Robert Gordon University, Aberdeen, UK.
Special Issue Introduction
Significant and unprecedented progress has been achieved in computer vision over the past decade, driven largely by developments in deep learning, deep reinforcement learning and, in particular, deep convolutional neural networks (CNNs). CNN-based methods have led to substantial advances in medical image analysis and understanding, anomaly detection, and medical image segmentation. They have also greatly advanced research on autonomous and semi-autonomous machines and on robotic-surgery applications such as phase recognition, image and video segmentation, detection and tracking of objects of interest, and navigational tasks, among others.
In the area of autonomous actions in surgery, key computer vision tasks such as classification, segmentation, object localization, and depth estimation are all crucial for the successful automation or semi-automation of surgical tasks. However, despite the latest developments in the field, building deep models that perform these tasks at expert human level in such an environment remains challenging. For example, deep learning models require large volumes of fully and accurately annotated/labeled data, and in the medical domain annotating and labeling images and videos is expensive and labor-intensive; in a surgical setting this is even more difficult. There is also a clear lack of public datasets (images and videos captured in surgical settings), and data captured in the theatre is often hugely imbalanced: some events, actions, or objects of interest are scarce, yet their accurate detection and recognition is crucial. More importantly, autonomous surgery is a dynamic environment that requires detection, recognition, and action, and the movement involved demands accurate estimation of depth from 2D images (videos), which is still considered a key computer vision challenge.
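To illustrate the class-imbalance point above, the following is a minimal sketch (assuming PyTorch and torchvision; the class counts, class number, and backbone are hypothetical and not drawn from this call) of weighting the cross-entropy loss so that scarce events still influence training:

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4                                          # hypothetical number of event classes
class_counts = torch.tensor([5000., 1200., 300., 45.])   # hypothetical, heavily imbalanced counts

# Inverse-frequency weights: rare classes get proportionally larger weights.
weights = class_counts.sum() / (NUM_CLASSES * class_counts)

model = models.resnet18(weights=None)                    # any CNN backbone would do
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
criterion = nn.CrossEntropyLoss(weight=weights)

images = torch.randn(8, 3, 224, 224)                     # dummy batch standing in for theatre frames
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = criterion(model(images), labels)
loss.backward()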
The overall aim of this Special Issue is to capture the latest developments and challenges in computer vision research, with an emphasis on autonomous/semi-autonomous actions in surgery-related applications. We invite authors to submit original articles and reviews related to recent advances and challenges in medical image analysis and understanding, object detection and recognition, and medical image and video segmentation. We are particularly interested in new trends and challenges in this area, including but not limited to:
Deep convolutional neural networks and their applications in autonomous and semi-autonomous surgery
Latest developments in artificial intelligence in surgery and computer vision
Medical image classification and understanding
Learning from imbalanced medical images
Object detection, tracking, and recognition
Gesture recognition
Depth estimation using deep learning
Reinforcement learning for navigational tasks and autonomous actions
Generative adversarial networks (GANs) for synthesizing and generating medical images
Reviews on robotic surgery and artificial intelligence
Participants
1. Hassan Ugail, University of Bradford, Bradford, United Kingdom.
2. Yingliang Ma, Coventry University, Coventry, United Kingdom.
3. Patrick Pessaux, Université de Montpellier, Montpellier, France.
Submission Deadline
1 Apr 2023