Dharmadhikari, Mihir
Maritime Vessel Tank Inspection using Aerial Robots: Experience from the field and dataset release
Dharmadhikari, Mihir, Khedekar, Nikhil, De Petris, Paolo, Kulkarni, Mihir, Nissov, Morten, Alexis, Kostas
This paper presents field results and lessons learned from the deployment of aerial robots inside ship ballast tanks. Vessel tanks, including ballast tanks and cargo holds, are dark, dusty environments that combine very narrow openings with wide open spaces, posing several challenges for autonomous navigation and inspection operations. We present a system for vessel tank inspection using an aerial robot, along with its autonomy modules. We show the results of autonomous exploration and visual inspection in 3 ships, spanning 7 distinct types of ballast tank sections. Additionally, we comment on the lessons learned in the field and on possible directions for future work. Finally, we release a dataset consisting of the data from these missions, together with data collected with a handheld sensor stick.
Autonomous Exploration and General Visual Inspection of Ship Ballast Water Tanks using Aerial Robots
Dharmadhikari, Mihir, De Petris, Paolo, Kulkarni, Mihir, Khedekar, Nikhil, Nguyen, Huan, Stene, Arnt Erik, Sjøvold, Eivind, Solheim, Kristian, Gussiaas, Bente, Alexis, Kostas
With the world greatly relying on maritime transport and marine resources, a global fleet of approximately 54,000 large (>1,000 gross tons [1]) conventional maritime structures are mainly inspected manually by human surveyors, while the broader global fleet involves more than 100,000 ships [2]. Among others, the surveyors must inspect the Ballast Water Tanks (BWTs), which represent dangerous, confined, enclosed environments, often with difficult access via narrow hatches and manholes, low lighting, slippery surfaces, as well as possible oxygen deficiency or presence of toxic gases. The European Maritime Safety Agency (EMSA) reports that a significant number of accidents aboard ships between 2014-2021 were due to the fall of persons (e.g., within the challenging enclosed ballast tank and cargo hold spaces). At the epicenter of the necessary inspection processes is the General Visual Inspection (GVI). In simple terms, GVI is the process of "naked eye"-based inspection and detection of damages or anomalies that may pose a risk to the structural integrity and safety of the BWT and thus the vessel as a whole. As GVI is often the basis upon which further inspections and maintenance are scheduled, automating this process with robots and enabling it to take place virtually anywhere in the world with little to no human intervention has the potential to optimize the inspection and maintenance cycles. This in turn will greatly reduce the associated costs, while keeping humans out of harm's way [5].
An Online Self-calibrating Refractive Camera Model with Application to Underwater Odometry
Singh, Mohit, Dharmadhikari, Mihir, Alexis, Kostas
This work presents a camera model for refractive media such as water and its application in underwater visual-inertial odometry. The model is self-calibrating in real-time and is free of known correspondences or calibration targets. It is separable as a distortion model (dependent on refractive index $n$ and radial pixel coordinate) and a virtual pinhole model (as a function of $n$). We derive the self-calibration formulation leveraging epipolar constraints to estimate the refractive index and subsequently correct for distortion. Through experimental studies using an underwater robot integrating cameras and inertial sensing, the model is validated regarding the accurate estimation of the refractive index and its benefits for robust odometry estimation in an extended envelope of conditions. Lastly, we show the transition between media and the estimation of the varying refractive index online, thus allowing computer vision tasks across refractive media.
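To illustrate the kind of refraction geometry such a model captures, the following is a minimal Python sketch, not the paper's actual formulation: it back-projects a pixel to a viewing ray in water via Snell's law, assuming a flat refractive interface normal to the optical axis and an ideal pinhole camera (the function name, intrinsics, and these simplifications are our own illustrative assumptions):

```python
import numpy as np

def water_ray_from_pixel(u, v, fx, fy, cx, cy, n=1.33):
    """Back-project pixel (u, v) to a unit ray in water.

    Assumes a pinhole camera (fx, fy, cx, cy) behind a flat port
    perpendicular to the optical axis; n is the refractive index
    of the medium (about 1.33 for water).
    """
    # Back-project to a unit ray in air
    ray_air = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    ray_air /= np.linalg.norm(ray_air)

    # Snell's law at the flat interface: sin(theta_air) = n * sin(theta_water)
    sin_air = np.linalg.norm(ray_air[:2])   # sine of the angle to the z-axis
    sin_water = sin_air / n                 # refracted angle in water
    cos_water = np.sqrt(1.0 - sin_water**2)

    # Keep the radial direction in the image plane, rescale to the new angle
    radial = ray_air[:2] / (sin_air + 1e-12)
    return np.array([radial[0] * sin_water, radial[1] * sin_water, cos_water])
```

Because sin_water < sin_air for n > 1, refraction bends off-axis rays toward the optical axis, which is exactly the radially symmetric, n-dependent distortion that a separable refractive model can absorb.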
Semantics-aware Exploration and Inspection Path Planning
Dharmadhikari, Mihir, Alexis, Kostas
This paper contributes a novel strategy for semantics-aware autonomous exploration and inspection path planning. Attuned to the fact that environments that need to be explored often involve a sparse set of semantic entities of particular interest, the proposed method offers volumetric exploration combined with two new planning behaviors that together ensure that a complete mesh model is reconstructed for each semantic, while its surfaces are observed at appropriate resolution and through suitable viewing angles. Evaluated in extensive simulation studies and experimental results using a flying robot, the planner delivers efficient combined exploration and high-fidelity inspection planning that is focused on the semantics of interest. Comparisons against relevant methods of the state-of-the-art are further presented.
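The combination described above, volumetric exploration interleaved with semantics-focused inspection, can be caricatured as a behavior-selection loop. The sketch below is purely illustrative (the function, the 95% coverage threshold, and the data structures are our own assumptions, not the planner's actual logic): semantic targets with incomplete mesh coverage take priority, and otherwise the planner falls back to frontier-driven volumetric exploration:

```python
def plan_step(frontiers, semantic_targets, coverage):
    """Pick the next planning behavior (toy sketch).

    frontiers        -- list of unexplored frontier locations
    semantic_targets -- ids of detected semantic entities of interest
    coverage         -- dict mapping target id -> surface coverage in [0, 1]
    Returns a (behavior, argument) pair.
    """
    # Behavior 1: inspect any semantic entity whose mesh is still incomplete
    incomplete = [s for s in semantic_targets if coverage.get(s, 0.0) < 0.95]
    if incomplete:
        return ("inspect", incomplete[0])
    # Behavior 2: otherwise continue volumetric exploration toward a frontier
    if frontiers:
        return ("explore", frontiers[0])
    # Nothing left to see: the mission is complete
    return ("done", None)
```

The point of the sketch is only the prioritization structure: inspection behaviors are triggered by sparse semantics of interest, while generic volumetric exploration covers the rest of the environment.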