Development of marker detection method for estimating angle and distance of underwater remotely operated vehicle to buoyant boat



Introduction
Robots have been used as tools or even to replace humans in jobs, especially dangerous ones. Robots can work 24 hours a day and outperform humans in specific fields. A Remotely Operated Vehicle (ROV) is one example of a robot used as a tool to support such tasks [1]–[6]. ROVs have been used to observe underwater life [7], in rescue missions [8], and for artifact recovery [9]. ROVs are unoccupied and controlled from a vessel above via a tether. The advantages of using ROVs rather than divers are that ROVs need no diving tank or diving suit and are therefore easier to prepare, they are not harmed by poisonous species, and they are durable. For these reasons, ROVs could be a new way to do aquaculture.
In the development of ROVs, researchers have encountered numerous problems. Most of them addressed design [10]–[13]. One paper explained the idea of using an ROV for aquaculture inspection [10]. It used three thrusters spaced 120 degrees apart, a configuration that allows the ROV to maneuver in every direction of 2D space. The ROV described in that paper has no thruster to move up and down in the water; these motions were achieved by reeling its umbilical cable. Without a thruster to lift the ROV up and down, it could only maneuver in a pendulum-like motion. However, this was a very effective low-cost design.
Another paper designed an ROV with four thrusters [11]. The ROV was designed to resemble a torpedo without fins and was highly maneuverable in all directions. The disadvantage of this design was the inability to maneuver without changing its heading direction, a feature that is important for underwater observation. Nevertheless, this design was suitable for exploration.
A highly maneuverable ROV usually comes with additional thrusters, which makes it more expensive. One paper designed an ROV with seven thrusters to achieve six-DoF (Degree of Freedom) movement [12]. Four horizontal thrusters were placed symmetrically in each corner of the ROV, providing complete x- and y-axis translation and z-axis rotation (yaw). Three vertical thrusters provided x- and y-axis rotation (pitch, roll) and z-axis translation. A depth sensor, GPS, IMU, DVL, and USBL were part of the ROV for navigation.
Another challenging problem with ROVs is localization, because it is very hard to propagate electromagnetic signals in a conductive medium such as salt water. Localization enables underwater robots, such as an Autonomous Underwater Vehicle (AUV), to be fully autonomous in finding power or docking stations on the surface or underwater to prolong their operations. Acoustic transponders and beacons are a common solution to these problems [14]–[18]. Because this is not a cheap solution, many researchers have used image processing to achieve the same goal. Some researchers used five lamps as markers for a dock [19]. The downside of this method was that the light of the markers made the surrounding water glow, which made it hard to recognize the markers. The other challenge was that the light of the markers was reflected by the water surface, and the reflection looked similar to the original markers.
Marker design greatly affects the performance of automatic docking. A color-based marker is another solution for distinguishing the dock from surrounding objects. The unique color of such a marker is filtered from the camera image so the dock can be recognized [20]–[25]. The problems with this type of marker are choosing a color unique enough to differ from the surrounding colors, and the fact that the color of the marker can be altered underwater. Another challenge is choosing the right filter to recognize the chosen marker color.
Other researchers used a stereo camera to recognize a dock [26]–[30]. A 3D marker was placed on the dock to be recognized and also served as a pose estimator. The strength of using these cameras as visual sensors was the ability to measure the distance to a docking station, and a genetic algorithm could be applied to find the best pose of the marker. This method offered high accuracy, but the system needed two cameras, which required high computing power, and therefore an expensive computer.
In this research, the buoyant boat is an Unmanned Surface Vehicle (USV). It was developed to stay afloat on the water surface for days to collect weather and water quality data [31]. A micro-class ROV was developed to explore and observe underwater life underneath the buoyant boat. The ROV needs to follow the boat or return to it when needed, and this task is aided by the ability of the ROV to align itself with the boat. For this reason, the position, distance, and orientation of the boat relative to the robot need to be obtained. Based on previous research, a camera and a marker system can be used. The challenge is that underwater vision is not as clear as above-surface vision because of water turbidity. Also, the micro-class ROV has limited computing power because it has very limited space and payload, so stereo vision and multiscale feature extraction methods cannot be used. Hence, a glowing marker is designed and a corresponding algorithm to recognize it is developed. This paper develops a marker detection method for estimating the angle and distance of an underwater Remotely Operated Vehicle to a buoyant boat. The objective of the method is to improve on existing marker detection in terms of accuracy and detection speed. These factors are the key for the method to be implemented in a small computer, which reduces the cost of the system. This paper uses a camera and image processing to obtain the marker information, and a shape filter is introduced to recognize the marker.
The key contribution of this paper is a method to sense the close-range relative pose of a buoyant boat with respect to an ROV using a glowing marker. The first characteristic of the proposed method is high-rate sensing: the method is developed to fit in a pocket-sized computer, the Raspberry Pi, and is expected to achieve an even higher sensing rate on a more powerful computer. The second characteristic is high precision: the proposed method provides centimeter-level distance estimation with an average error of less than 1 cm, and it also provides high-accuracy orientation estimation, which is not available with acoustic transponders. The third characteristic is low cost: even in the development phase, the proposed method costs less than an acoustic transponder or a multiple-camera system, and it is expected to cost even less when mass-produced.
The rest of this paper is arranged as follows. Section 1 mentions related works, background, and challenges of the research. Section 2 describes the overall hardware configuration. Section 3 explains the marker design and the corresponding way to recognize it. Results and discussion are included in Section 4, which shows the performance of the proposed method. Section 5 states the conclusion of the research.
The diameter of the hull body of the ROV is 15 cm and its width is 30 cm. There are six motor drivers that can deliver 30 A at 12 V for the thrusters. The hull contains all the sensors, actuators, controllers, and communication modules to protect them from even a single drop of water. Considering the space, payload, heat dissipation, and buoyancy of the ROV, only a few computer options can fit into the hull. This research uses the Raspberry Pi 3 Model B as the main processing unit.

Method
The goal of this research is to deliver a marker recognition method with low computational demand to align the ROV with the buoyant boat. The method must be simple enough to fit in a small computer, i.e., a Raspberry Pi. For this purpose, the method is divided into two major steps. The first is the marker design step, which explains how to craft a uniquely shaped marker to reduce the computational burden. The second is the marker recognition step, which is intended to eliminate unwanted image information, recognize the marker, and obtain distance and orientation features from it. The designed marker is made to distinguish itself from any other underwater objects or light sources, and the proposed image processing procedures are designed to recognize this specific marker.

Overall hardware configuration
The system consists of two main bodies. The first is the buoyant boat. It stays afloat on the water surface carrying a solar panel, battery packs, data logger devices, wireless communication modules, and a docking station. The docking station is placed beneath the boat with a marker facing downward. The second body is the ROV, which can move freely under the boat. Both bodies are connected via a tether cable. Wired communication is common practice in ROV configurations for transmitting data between the ROV and the parent ship, since it is far more reliable than underwater radio communication. The position of the buoyant boat and the ROV under the water surface is illustrated in Fig. 1.
The ROV thruster configuration is also depicted in Fig. 1. The ROV has 6-DoF movement with the help of six thrusters. It can hold its depth with pressure sensor feedback, and pitch and roll orientations are stabilized with the help of an inertial measurement sensor. By utilizing thrusters 1, 2, and 3 as shown in Fig. 1, the ROV stabilizes its roll-pitch-depth pose. Thrusters 4 and 5 move the ROV in the yaw and longitudinal directions, and thruster 6 moves the ROV in the lateral direction. To keep the ROV aligned with the boat, it needs lateral, longitudinal, and yaw pose stabilization with respect to the boat. These can be achieved by using image data as feedback. For this purpose, this paper proposes a method to acquire the position and orientation of a buoyant boat with the help of a marker attached to it and a camera on the ROV looking up to the marker.
International Journal of Advances in Intelligent Informatics, ISSN 2442-6571, Vol. 7, No. 3, November 2021, pp. 249-267

Marker design
Inside the marker, there is a very bright LED and a reflector. The reflector is used to focus the light beam forward, and the light is diffused before it reaches the marker face; the construction can be seen in Fig. 2. The marker consists of five white glowing circles. The glow is produced by the LED behind them, so the marker can be recognized even in a dark environment. The picture of the marker can be seen in Fig. 3. The light is diffused and gives a glowing effect to the circle patterns when the LED is turned on. The intensity of the light can be either controlled or fixed as needed. Because the marker design aims to create circular lights, the glow intensity is very important: if it is too bright, it will make the surrounding area glow. Fig. 4 shows the marker when a pixel intensity threshold is applied; the result is five circular blobs. The pattern also plays a role in determining the pose and the distance of the marker. This paper uses a 16 x 16 cm marker, and the other dimensions are given in Fig. 5; any other dimensions should work with the proposed method. The dimensions are predefined so they can be used to determine the distance between camera and marker. The first to the fourth circles are used to determine the center of the marker, and the fifth circle is used to determine its orientation. The marker in this paper is crafted out of acrylic plastic cut into a pyramid shape. Aluminum foil is used as the reflector, and a white 5 W LED lamp is used as the light source. The diffuser is also acrylic plastic, except it is milky white. The marker face is unreflective black acrylic with holes that form the circle patterns. The unreflective black color is important so the marker does not emit light other than the intended pattern. The white light source and the white diffuser make thresholding easier in the marker recognition step.

Marker recognition
The designed marker greatly reduces the computational load, which is intended so the method can run on a small computer. The marker also reduces noise and makes it easier to distinguish the marker from the background. The method is divided into four main steps: thresholding, shape filtering, blob sorting, and distance and pose estimation. These steps are illustrated in Fig. 6 and explained in the following subsections.

Thresholding
The first step is converting the image to a grey image. This paper uses RGB averaging to obtain a greyscale image; because a greyscale image has only one color channel, it simplifies computation. After a grey image is obtained, thresholding is applied. Thresholding reduces a grey image to a binary image, producing a black-and-white image. Fig. 7 shows what should be expected from this step. This paper uses an inverted threshold, which means bright pixel colors are turned black and vice versa. The threshold value depends on the environment, and the right value is adjusted manually to find the best result. Marker brightness also plays a role in the thresholding step; the best brightness for the marker is when its light appears pure white.
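The two operations above can be sketched in a few lines of Python. This is a plain-Python illustration rather than the paper's actual code (a real implementation would typically use OpenCV or NumPy), and the threshold value of 200 is an assumed placeholder, since the paper tunes the threshold manually per environment.

```python
def to_gray(rgb_image):
    """RGB averaging: reduce each (r, g, b) pixel to one grey channel."""
    return [[(r + g + b) // 3 for (r, g, b) in row] for row in rgb_image]

def inverted_threshold(gray_image, thresh=200):
    """Inverted threshold: bright pixels (>= thresh) become black (0),
    dark pixels become white (1)."""
    return [[0 if p >= thresh else 1 for p in row] for row in gray_image]

# A 2x2 test image: two bright pixels, two dark ones.
img = [[(255, 255, 255), (10, 20, 30)],
       [(90, 90, 90), (250, 240, 245)]]
binary = inverted_threshold(to_gray(img))  # -> [[0, 1], [1, 0]]
```

The inversion means the glowing circles of the marker end up as black blobs, which is what the labeling step below operates on.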

Shape filtering
After the image is thresholded, it contains only black and white pixels. Some white or black pixels appear consolidated into regions, and in some circumstances very small white or black dots may appear. The most important task is to determine whether pixels are consolidated, that is, connected.
To find the shapes, the first step is to find the blobs. Blobs are connected groups of white or black pixels. Each connected blob is labeled to differentiate it from every other blob; if one blob is separated from another, it receives a unique label. Either the white blobs or the black blobs can be labeled. This paper chooses black blob labeling, since the marker patterns appear black after the inverted threshold. After this step, there should be data on how many blobs are detected in the image.
Shape filtering is the key to eliminating unwanted information. The shape filter is based on size and circularity. The size of a blob is the number of connected pixels sharing the same label, in other words, the area of the blob. The method qualifies only blobs within a defined size range. The size constraint removes small noisy blobs and massive connected blobs; blobs outside the criteria are rejected. After this step, the number of detected blobs should be reduced and closer to the right answer.
The minimum and maximum values of the size filter are predefined in the program. The easiest way to decide the size range is trial and error, since every camera perceives an image differently. The most noticeable effect of the size filter is on the detection distance. A low maximum size threshold will cause close-distance marker detection to fail, while a low minimum threshold allows far-distance marker detection but increases the rate of small-blob noise detections.
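The labeling and size filtering described above can be sketched as follows. This is an illustrative pure-Python version, not the authors' implementation; a practical system would typically use OpenCV's connected-components routines. The size bounds are left as parameters because the paper chooses them by trial and error.

```python
from collections import deque

def label_blobs(binary):
    """Label connected black pixels (value 0) with 4-connectivity via BFS.
    Returns a list of blobs, each a list of (row, col) coordinates."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 0 and not seen[y][x]:
                queue, blob = deque([(y, x)]), []
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] == 0 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append(blob)
    return blobs

def size_filter(blobs, min_size, max_size):
    """Keep only blobs whose area (pixel count) lies inside the size range."""
    return [b for b in blobs if min_size <= len(b) <= max_size]
```

Each returned blob keeps its pixel list, so the same structure can feed the circularity computation and the center estimates used later.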
The blobs that pass the size filtering are then processed in the next step, the circularity filter. In this step, most of the blobs will be eliminated. The circularity (c) of a blob is defined as c = 4πA / P², where A is the area of the blob and P is its perimeter. To find the circularity, the perimeter of the blob has to be calculated. The perimeter is the number of outer pixels in a blob of the same label. Once a blob has been labeled, these pixels can be counted with simple logic operations and increment code.
One thing to note from the formula above is that its maximum value is 1. It never returns 0, but a very small value means the blob is far from circular; such blobs must be rejected. Conversely, a value approaching 1 means the blob is visually close to a circle. The circularity threshold is decided by trial and error. The brightness of the marker plays a big role in this step: a marker that is too bright scatters its light underwater, and a circular light shape cannot be achieved. The circularity threshold also affects how the marker is detected from the side. Viewed from the side, the patterns of the marker are perceived as oval shapes, so if the circularity threshold is too high, the marker will not be detected from the side. After these two filters, the resulting image should be much more specific: many blobs will have been eliminated, and the only things that should remain detected are the marker circles.
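The circularity filter can be illustrated with a small sketch, assuming the standard definition c = 4πA/P² (consistent with the statement above that the maximum value is 1). Note that the discrete outer-pixel perimeter is only an approximation, so even a clean square blob scores slightly under 1.

```python
import math

def perimeter_of(blob):
    """Perimeter = count of outer pixels: blob pixels with at least one
    4-neighbour outside the blob."""
    pts = set(blob)
    outer = 0
    for (y, x) in blob:
        if any(n not in pts for n in ((y - 1, x), (y + 1, x),
                                      (y, x - 1), (y, x + 1))):
            outer += 1
    return outer

def circularity(area, perimeter):
    """c = 4 * pi * A / P**2: 1 for an ideal circle, near 0 for elongated blobs."""
    return 4 * math.pi * area / perimeter ** 2

def circularity_filter(blobs, c_min=0.8):
    """Keep blobs passing the circularity threshold (0.8 is the value the
    experiments below settle on for a head-on view)."""
    return [b for b in blobs if circularity(len(b), perimeter_of(b)) >= c_min]

# Filled 10x10 square blob: area 100, 36 outer pixels -> c is roughly 0.97.
square = [(y, x) for y in range(10) for x in range(10)]
```

A long thin blob of the same area has a much larger perimeter, so its score drops well below the threshold and it is rejected as noise.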

Blob sorting
After the circles in the marker have been detected, the computer must interpret what the marker means. As mentioned before, the marker contains five circles, with one circle (the pivot) serving as an orientation reference. In this paper, the black blobs are searched row by row from the upper left of the image, so the order of detected circles is as shown in Fig. 8: the first detected blob is always the upper-left one, detection then proceeds left to right, and the process repeats down to the bottom right of the image. The marker can be decoded just by sorting the detected blobs. Every blob must have an estimated position in x and y coordinates; the best estimate is the blob center, but a side pixel is fine as long as it is applied consistently to all blobs. The coordinate system used in this paper is described in Fig. 8. The first sorted blob is the one with the smallest y value, the top blob; remember its index, which is 1. Then search for the second blob, which is always the leftmost blob (smallest x value); remember its index, which is 3. The third is always the blob with the maximum y value, the bottom blob; remember its index, which is 5. The fourth is always the rightmost blob (largest x value); remember its index, which is 4. After those steps, the pivot index is always 15 minus the sum of the remembered indexes, that is, 15 − (4 + 5 + 3 + 1) = 2. The remembered indexes are always those of the minimum x, maximum x, minimum y, and maximum y blobs, so the remaining index identifies the pivot blob, whose position is used in the next step. The sorted circles are illustrated in Fig. 9. This blob sorting can be performed with simple if-else code.
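The index bookkeeping above reduces to a few comparisons. The sketch below is a hypothetical implementation using 0-based list indices, so the index-sum trick uses 0 + 1 + 2 + 3 + 4 = 10 in place of the paper's 15.

```python
def find_pivot(centers):
    """Identify the pivot among five blob centers [(x, y), ...].

    In image coordinates y grows downward, so the 'top' blob has the
    smallest y. The four extreme blobs are located directly; the pivot
    is the one index left over (index-sum trick: 0+1+2+3+4 = 10)."""
    xs = [c[0] for c in centers]
    ys = [c[1] for c in centers]
    top = ys.index(min(ys))       # smallest y -> top blob
    left = xs.index(min(xs))      # smallest x -> leftmost blob
    bottom = ys.index(max(ys))    # largest y -> bottom blob
    right = xs.index(max(xs))     # largest x -> rightmost blob
    pivot = 10 - (top + left + bottom + right)
    return top, left, bottom, right, pivot
```

Because the four extremes account for four of the five indices, subtracting their sum from the total recovers the pivot without any extra search, exactly the if-else-level logic the method relies on.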

Distance and pose estimation
The four outer circles are used to calculate the center of the marker. Fig. 9 shows the sorted blobs. The x position of the center, Cx, is half the difference between the x positions of the odd-sorted circles (1 and 3) added to the x position of circle 3, i.e., Cx = (I1x − I3x)/2 + I3x, which is simply the midpoint of the two x positions. The y position of the center, Cy, is obtained analogously from the y positions of the even-sorted circles (2 and 4): Cy = (I4y − I2y)/2 + I2y. Each result can be averaged with the value obtained by swapping the odd and even indexes. The center of the marker and the pivot are used to find the rotation angle of the marker (Fig. 10). The rotation angle (Fig. 11) is the inverse tangent of the difference of the y positions (Py − Cy) over the difference of the x positions (Px − Cx).
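The center and rotation computations reduce to a midpoint and an inverse tangent. This is a sketch, not the paper's code: it averages both midpoint estimates (which matches the index-swapping averaging mentioned above), and it uses atan2 so that every quadrant of the inverse tangent is handled.

```python
import math

def marker_center(top, left, bottom, right):
    """Marker center from the four outer circles (each an (x, y) point).
    The top/bottom pair and the left/right pair each give a midpoint
    estimate; averaging both is the same as averaging all four points."""
    cx = (top[0] + bottom[0] + left[0] + right[0]) / 4
    cy = (top[1] + bottom[1] + left[1] + right[1]) / 4
    return cx, cy

def rotation_angle(pivot, center):
    """Marker rotation in degrees: inverse tangent of
    (Py - Cy) / (Px - Cx), computed with atan2 for a full -180..180 range."""
    return math.degrees(math.atan2(pivot[1] - center[1],
                                   pivot[0] - center[0]))
```

For an axis-aligned marker whose pivot sits directly above the center (in image coordinates), the returned angle is -90 degrees, and it changes smoothly as the marker rotates.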
These simple arithmetic and logic operations are the strength of this method, and they are achievable thanks to the predefined marker design. This shows that marker design and marker recognition are inseparable and together define the success of the algorithm. The end result of processing the same source image as Fig. 7 is shown in Fig. 12. The distance estimation uses a camera geometry principle, illustrated in Fig. 13. From the figure, point P can be represented in both the image plane and the camera coordinate frame. The camera coordinate frame has three axes (x, y, z), while the image plane has two (x and y). By comparing similar triangles, the coordinates of P in the image plane can be calculated.
The focal length (f) is a camera characteristic included in the datasheet, so the distance estimation is tied to the camera that is used. This paper uses a Logitech C170, which has a focal length of 0.023 cm. Another camera characteristic that affects the estimation is the pixel to image plane ratio (r). The pixel ratio is sometimes not included in the datasheet, so a set of calibration data should be gathered to obtain it. An object of known size is captured using the same camera, and the size of the object in the image plane (IPs) is compared to the size of the object in pixels (PPs) on the screen.

= ()
After these two camera characteristics are found, the distance (cPz) of an object can be calculated as follows.

= ()
In this way, the distance to an object of known size can be estimated. The size used in this paper is the spacing between the circles in the marker, which is 11 cm. The Euclidean distance between the circles in pixels can also be calculated, and those values are then used for the distance estimation. In a real environment, the distance estimation can be inaccurate; this can be corrected by a simple statistical step, which will be explained in a later section.
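One reading of the pinhole relation described above is Z = f × S / (r × s_px), sketched below. The focal length of 0.023 cm and the 11 cm circle spacing come from the paper; the ratio r used in the example is an assumed placeholder, since the real value must be calibrated per camera.

```python
def pixel_ratio(size_on_image_plane_cm, size_in_pixels):
    """r: image-plane size per pixel, calibrated with a known-size object."""
    return size_on_image_plane_cm / size_in_pixels

def estimate_distance(f_cm, real_size_cm, size_in_pixels, r):
    """Pinhole model: Z = f * S / (r * s_px), with S the real spacing
    between marker circles (11 cm here) and s_px that spacing in pixels."""
    return f_cm * real_size_cm / (r * size_in_pixels)

# Example with an assumed, placeholder ratio r = 4e-5 cm/pixel:
z = estimate_distance(0.023, 11.0, 100.0, 4e-5)  # about 63 cm
```

As the marker moves away, the pixel spacing shrinks and the estimate grows proportionally, which is why a single known dimension on the marker is enough for ranging.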

Results and Discussion
The aim of the proposed method is to increase the speed of detection while keeping a high detection probability. For this reason, in this section, the method is tested and compared to a conventional marker detection method. This section also gives data to help decide the right parameters to improve the success of the method. The main feature of the method, the ability to estimate angle and distance, is also tested.
The marker and the algorithm have been tested in various conditions: underwater and on the surface, both day and night. Fig. 14 shows the performance of the marker in a noisy area. The source image (left) in Fig. 14 has been thresholded, and the resulting image (right) shows many blobs, but the marker is still successfully recognized. It can also be noticed that the black background of the marker helps eliminate noise surrounding the marker.

Optimum size filter experiment
The size filter depends on many factors such as noise, the size of the designed marker, the glow intensity of the marker, and the camera being used. This experiment is done to find the right size filter. Because the size filter greatly affects the detection distance, this experiment presents the right size filter at each distance to maintain the quality of detection.
This experiment uses a Logitech C170 camera. Each circle in the marker pattern is 2 cm in diameter. The experiment is done by taking a video of one circle in the marker from a constant distance. At this constant distance, single-circle detection is applied, with the circularity parameter kept constant at 0.8. Since the size filter determines how far or close the marker can be detected, this experiment shows how far a certain size filter threshold can detect a circle. A preview of the recorded video is shown in Fig. 15. Each distance produces data such as those shown in Fig. 16, which were taken at a fixed distance of 30 cm from the camera. The y-axis of the chart is the detection rate (%) and the x-axis is the minimum size of the circle filter (pixels), used as the minimum size threshold for a circle to be detected. The detection rate is calculated, in percent, as the number of frames in which a circle is detected over the total number of frames recorded. From Fig. 16, it can be concluded that 2350 connected pixels (filter size) are enough to detect the designed circle at 30 cm from the camera with a detection rate of 98%. By repeating this process, the optimum size filter (above 90% detection rate) at every distance is obtained. The optimum threshold criterion used in this paper is a size filter that achieves a detection rate above 90%. The optimum size filter at each camera-to-circle distance is shown in Fig. 17, where the x-axis represents the distance in centimeters and the y-axis the minimum size filter in pixels. If a smaller size filter value than stated above is used at a given distance, the detection rate should increase. The data in Fig. 17 only cover distances up to 120 cm.
At distances beyond 120 cm, the detection rate increases as the filter is lowered but never exceeds 90%. From the data, it can be concluded that 120 cm is the maximum detection distance under the defined optimum criterion, with 230 pixels as the minimum size filter value. This limitation is caused by the circularity parameter: although the circle still appears round at longer distances, its measured circularity falls below 0.8.

Optimum circularity experiment
The goal of this experiment is to find the optimum circularity threshold for the filter. Choosing the right circularity threshold is important to maintain the performance of marker detection. The circularity filter is affected by the viewing angle of the marker, because a circular object appears elliptical on camera when seen from the side. For this reason, this experiment presents the optimum circularity filter at each viewing angle.
In this experiment, a single circle of the marker is used rather than the entire circle pattern. The camera is placed at a fixed distance of 60 cm from the marker, and the size filter is fixed at 100 connected pixels. The marker is panned so it appears oval to the camera. The scenario and the panning angle measurement are illustrated in Fig. 18. In this setting, the marker images and the associated angles are recorded as a video. The filter is applied to all the frames in this video, and the frames in which the circle is detected are counted; the number of detections compared to the total number of frames gives the detection rate. The success rate of marker detection depends on the angle from which the marker is viewed. When the marker is viewed from the side, its circles appear oval, and an oval shape is considered noise. The circularity threshold must be lowered so that oval shapes are accepted as marker circles, but it cannot be too low, or randomly shaped noise will also be accepted as marker circles.
Each angle is tested across various circularity filter thresholds. The success rate at a 0° angle is shown in Fig. 19, where the y-axis represents the detection rate in percent and the x-axis the circularity filter threshold. From the data, it can be seen that a low threshold value gives a high detection rate. This paper chooses a detection rate of 90% as the optimum criterion. At this angle (0°), the threshold is set to 0.85 to get at least a 92.6% detection rate. The optimum circularity thresholds at other angles are presented in Fig. 20, which shows the relation between the circularity and the panning angle of the marker in degrees. To detect a circle on a tilted marker, the circularity threshold must be lowered. From the graph, 0.8 is a good threshold until the marker is panned 40°. To detect from the widest range of angles, the circularity threshold must be set to 0.52. A marker panned more than 75° cannot be detected, because the area of the detected blob is never greater than 100 connected pixels. Values below 0.52 never improve the detection and only make it worse, since irregularly shaped noise is counted as part of the marker circles.

Rotation angle estimation performance testing
The marker has been designed so that its orientation can be estimated, which is possible with the help of the pivot blob. This experiment tests the accuracy of the rotation angle estimation. To achieve this goal, the marker is rotated to a certain angle, measured with a calibrated measurement device. The measured angle (in degrees) is then compared with the angle estimated by the method. The result can be seen in Fig. 21. This estimation is tested on the surface, with the camera placed 60 cm from the marker. Based on the data presented in the previous sections, the experiment uses a circularity filter of 0.8 and a size filter of 100 pixels, which is intended to maximize the detection rate in this condition.
A video of the marker at a fixed rotation angle is recorded. The marker recognition method is applied to each frame of this video to estimate the marker rotation, and these estimations are averaged to form a single estimation value for that angle. The resulting comparison is shown in Fig. 21.

Distance estimation performance testing
The proposed marker and method can be used to estimate the distance between marker and camera, and this experiment tests that performance. The distance estimation is tested on the surface to reveal its performance in ideal conditions. From previous experience, the performance of distance estimation depends on visual clarity: when the light from the marker is scattered and fails to form a circular shape, the marker cannot be recognized.
The parameters used in this experiment are a minimum size of 100 connected pixels and a minimum circularity of 0.8. The camera is placed at a certain distance from the marker, and a video is recorded. The method is applied to this video, and several distance estimates are obtained at the fixed distance; their average is taken as a single estimated distance value. The real distance compared to the estimated distance, both in centimeters, is shown in Fig. 22. There are two data series in Fig. 22. The orange line is the real camera-to-marker distance, measured by a calibrated measuring device, plotted against itself; it is a straight line used as a reference. The blue line is the distance estimated by the proposed method plotted against the real measured distance.
From the graph, the blue line is above the orange line, which means there is an offset error between them. The gap between the two lines also widens, which means the offset error increases with marker distance. The blue line has a straight slope, which means the estimation has a linear relation to the measured distance, so the error can be corrected to increase the accuracy of the estimated distance. Fig. 23 shows the relation between the estimated distance (x-axis) and the average estimation error (y-axis) at that distance. From the data, it can be concluded that the absolute value of the average error increases with distance. With this data, a trend line is fitted to produce a formula that compensates for the error based on the current distance estimation. The corrected distance estimation can be seen in Fig. 24, where the estimation now matches the measured distance. The average error of the estimation improves from -15.07 cm before correction to -0.62 cm after correction. The standard deviation of the estimation is 2.38 cm, which means the estimation does not fluctuate strongly.
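The correction step can be reproduced with an ordinary least-squares line fitted to the error as a function of the estimated distance. The sketch below illustrates the idea; the three data points are synthetic, not the paper's measurements.

```python
def fit_linear_correction(estimated, measured):
    """Fit error = a*d_est + b by least squares, where error = measured -
    estimated, then return a function that corrects an estimate as
    d_est + (a*d_est + b)."""
    n = len(estimated)
    errors = [m - e for m, e in zip(measured, estimated)]
    mx = sum(estimated) / n
    my = sum(errors) / n
    sxx = sum((x - mx) ** 2 for x in estimated)
    sxy = sum((x - mx) * (y - my) for x, y in zip(estimated, errors))
    a = sxy / sxx
    b = my - a * mx
    return lambda d_est: d_est + a * d_est + b

# Synthetic example: raw estimates drift above the true distance with range.
correct = fit_linear_correction([33, 69, 105], [30, 60, 90])
```

Because the paper's error grows linearly with the estimated distance, a single fitted line removes most of the offset, leaving only the residual fluctuation reflected in the standard deviation.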

Comparison between with and without size and circularity filter
This experiment is done in a 60 cm deep water tank. The marker is placed just beneath the surface, facing down. The camera is placed on the bottom of the tank, facing upward toward the marker, and a video of the marker is recorded. Two different algorithms are then applied to the same video: the first is the proposed method with the size and circularity filter; the second is detection using only dilation and erosion.
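The compared baseline, detection with dilation and erosion only, can be illustrated with a morphological opening (erosion followed by dilation), one common combination that removes small noise blobs. The 3x3 square structuring element and the grid representation below are illustrative assumptions, not the authors' implementation.

```python
def erode(grid):
    """Binary erosion with a 3x3 square structuring element."""
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # a pixel survives only if its whole 3x3 neighbourhood is set
            out[y][x] = int(all(grid[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(grid):
    """Binary dilation with a 3x3 square structuring element."""
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if grid[y][x]:
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            out[ny][nx] = 1
    return out

def open_filter(grid):
    """Opening: erosion then dilation removes isolated noise pixels."""
    return dilate(erode(grid))
```

Such an opening deletes single-pixel noise but, unlike the proposed filter, cannot reject large non-circular blobs such as surface reflections, which is what the detection-rate comparison below probes.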
This experiment compares the detection rate of the two methods. The camera-to-marker distance is kept constant while the marker tilt and pan angles are varied as input. The result is shown in Fig. 25: the proposed method is drawn as a blue line and the compared method as an orange line. The y-axis represents the detection rate in percent, and the x-axis represents the pan angle of the marker. The graph shows that both methods fail to recognize the marker when it is panned 50 degrees, but at all other pan angles the proposed method recognizes the marker better than the compared method. In Fig. 26, the y-axis again represents the detection rate and the x-axis the tilt angle. As with the pan data, the maximum detectable angle is 50 degrees; at the other tilt angles, the proposed method shows superiority over the compared method.
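The detection rate plotted on the y-axis of Figs. 25 and 26 can be computed per angle as the percentage of video frames in which the marker is found. The per-frame outcomes below are a hypothetical illustration, not the experiment's data.

```python
def detection_rate(frame_results):
    """Percent of frames in which the marker was detected."""
    if not frame_results:
        return 0.0
    return 100.0 * sum(frame_results) / len(frame_results)

# Hypothetical per-frame outcomes (True = detected) at a single pan angle
proposed = [True, True, True, False, True]   # proposed size/circularity filter
baseline = [True, False, False, True, True]  # dilation and erosion only
```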

Marker detection in various water turbidity
Light scattering, color shifting, and limited viewing distance are the main challenges of underwater machine vision. The purpose of this experiment is to measure the marker detection success rate against water turbidity. The experiment is carried out by gradually adding dirt to a swimming pool. After each addition, the turbidity of the water is measured using a nephelometer, in nephelometric turbidity units (NTU). The result is presented in Table 1. Based on the collected data, the proposed system can be used if the water turbidity is below 16.75 NTU. At higher turbidity levels, the light is scattered and the color shifts; in this condition, the marker is filtered out as noise and therefore not detected.

Conclusion
The designed marker with its corresponding detection method has been developed. It successfully detects the marker up to 120 cm away with a minimum detection rate of 90%. The marker can be detected even when it is tilted up to 50° with an 80% detection rate. The method estimates the distance between the marker and the camera with a -0.62 cm average error, and the distance estimation is proven to be stable with a standard deviation of 2.38 cm. The method also successfully estimates the rotation angle of the marker with a 1.75° average error and is proven to be stable with a 0.5° standard deviation. The filter used in this paper is also proven to be superior to a regular dilation-and-erosion method. An adaptive blob filter parameter and adaptive marker light intensity are left as future work.

Declarations
Author contribution. All authors contributed equally as the main contributors to this paper. All authors read and approved the final paper.
Funding statement. None of the authors have received any funding or grants from any institution or funding body for this research.
Conflict of interest. The authors declare no conflict of interest.
Additional information. No additional information is available for this paper.