
Robotics 2018, 7, 38; doi:10.3390/robotics7030038

Article

Smart Agricultural Machine with a Computer Vision-Based Weeding and Variable-Rate Irrigation Scheme

Chung-Liang Chang * and Kuan-Ming Lin

Department of Biomechatronics Engineering, National Pingtung University of Science and Technology, Pingtung 91201, Taiwan; [email protected]
* Correspondence: [email protected]

Received: 4 June 2018; Accepted: 17 July 2018; Published: 19 July 2018

Abstract: This paper proposes a scheme that combines computer vision and multi-tasking processes to develop a small-scale smart agricultural machine that can automatically weed and perform variable-rate irrigation within a cultivated field. Image processing methods, such as HSV (hue (H), saturation (S), value (V)) color conversion, estimation of thresholds during binary image segmentation, and morphological operations, are used to determine the positions of the plant and the weeds, and those results are used to perform the weeding and watering operations. Furthermore, the wet distribution area of the surface soil (WDAS) and the moisture content of the deep soil are provided to a fuzzy logic controller, which drives pumps to perform variable-rate irrigation and achieve water savings. The proposed system has been implemented in a small machine, and the experimental results show that the system can classify plants and weeds in real time with an average classification rate of 90% or higher. This allows the machine to weed and water while maintaining the moisture content of the deep soil at 80 ± 10%, with an average weeding rate of 90%.

Keywords: fuzzy logic; machine vision; field robotics; agriculture

1. Introduction

Traditional agricultural practices rely on inefficient, labor-intensive human work. As automation and mechanization technology matured, tractors replaced livestock in pulling farm machinery, which helped address labor shortages and the aging of the farming population [1]. Today's automated tilling machines are mainly categorized as planting and irrigation equipment; however, plant nursery operations also include weeding, fertilizer spreading, and watering, as these are critical cultivation tasks. To complete weed control tasks, devices such as weed mowers or spray machines can be used, operated either manually or by tractor [2]. However, these methods involve indiscriminate spraying, watering, or weed removal. When used for inter-row or intra-row weed removal, human error can often damage crops, and excessive watering can lead to the rotting of roots or leaves. In the early stages, the use of herbicides does not suppress weed growth effectively, due to the diversity of weed species and the issue of herbicide resistance. Next, inadequate dosage control can adversely affect the soil and crop growth; this is also true for watering and fertilizing. Lastly, these machines are powered by petrol engines that cause air pollution and harm human health. Therefore, variable-rate spraying of herbicides and the development of smart weeding machines have become the most common approaches in recent years.

At present, many studies focus on the design and testing of smart weeding machines, including the use of digital cameras to capture large-scale field images and the adoption of machine vision techniques to find crop rows and perform weeding operations [3–6].
Shape descriptors in combination with a fuzzy decision-making method have also been employed to classify weed species [7]. In addition, several studies have concentrated on the implementation and testing of automatic weeding machines, including the use of proximity sensors to identify crop positions [8–10] or the use of robots with visual imaging capabilities to locate weeds and employ a special steerage hoe [11,12], tube stamp [13], or laser device [14,15] to remove them. A combination of chemical and mechanical methods has also been utilized to destroy weeds [16], and some modular weeding mechanisms have been implemented [17]. These smart weeding machines rely on the performance of the machine vision system for weeding and visual guidance. However, uncertainty factors, including illumination and the varying colors of leaves or soil, affect the performance of the machine vision system [18]. The results of Hamuda et al. (2013) and Yang et al. (2015) indicated that HSV-based image processing methods improve the robustness of machine vision [19,20], and these methods can reduce the effect of natural illumination during image processing. Some studies have also proposed effective classification methods that distinguish shades of green under varying brightness or contrast for green crops [21,22]. In 2018, Shu et al. proposed a method based on color space conversion and multi-level thresholding [23]; it can segment vegetation under shadowy conditions and has been proven feasible in real-world applications. Some start-up companies, such as ecoRobotix and Blue River (acquired by John Deere in 2017), have developed smart machines that apply herbicides selectively to destroy weeds [24,25]. This approach reduces the amount of herbicide used and decreases the damage to the soil. Other applications of machine vision technology, such as fruit sorting or harvesting, have also been presented in recent years [26,27].

Today, machine vision technology can be applied in the field to separate crops from weeds and can serve as a useful tool for precision agriculture in the future. Nevertheless, such technology still needs to be effectively integrated into the machine; inappropriate integration can result in unstable and inaccurate robot performance. Moreover, because the machines mentioned above possess only weeding functions, with other tasks requiring different machinery, they lack flexibility. To address this issue, this paper proposes a multi-functional smart machine (Supplementary Materials) that can automatically remove weeds while also allowing for variable-rate irrigation. The digital camera of this machine captures images of the growth area under the machine in real time and uses HSV and adaptive threshold methods to distinguish crops from weed areas and to estimate the wet distribution area of the surface soil (WDAS), so that the machine can automatically respond to specific areas with weeding or watering. This scheme allows for the removal of weeds while leaving the crop unharmed. In addition, the modularized mechanism can be used to provide different functions, such as turning soil or sowing.
Furthermore, a fuzzy logic control method determines the amount of water given to the crops according to the soil moisture content at root depth and the wet distribution area of the surface soil, ensuring that the soil maintains an appropriate moisture level.

This paper is organized as follows: Section 2 details the procedures and steps of the proposed method, including image processing, variable-rate irrigation, and multi-tasking operations; Section 3 describes the design of the operating mechanism and the computer vision system; Section 4 contains the experimental results and discussion, while the final section presents the conclusion.

2. Materials and Methods

The scheme proposed in this paper includes the use of image processing techniques to classify crops and weeds and to estimate the WDAS, a fuzzy logic control method to obtain the pulse width modulation (PWM) level that drives the water pump, and multi-tasking processes to execute weeding and variable-rate irrigation.

2.1. Image Processing Technique

Since brightness and color contrast differ in each image, the adaptive threshold method uses brightness and color values to automatically adjust the threshold level required during the segmentation process, so that features can be successfully extracted from the image and classified.

2.1.1. Weed/Plant Classification

The flowchart of image processing with the adaptive threshold method is depicted in Figure 1. The colors of an image are often described using red (R), green (G), and blue (B) as the three primary colors, which are combined to form the RGB color space. This representation conveys the color of every pixel in the image in terms of its red, green, and blue channels; however, it does not readily provide pixel luminance, saturation, and hue. A hue (H), saturation (S), value (V) (HSV) color model can better emulate human color perception than an RGB model, which is why the 8-bit digital image captured by the camera is converted from RGB to HSV during image pre-processing. It is assumed that the image has been shrunk by 25% (see Figure 2a), that the new image's dimensions are length (I) × width (J), and that every pixel Prgb(i, j) in the matrix, where i = 1, 2, ..., I and j = 1, 2, ..., J, has red, green, and blue color values, represented as Pr(i, j), Pg(i, j), and Pb(i, j), that lie between 0 and 255. Equation (1) gives the hue Ph(i, j), saturation Ps(i, j), and value Pv(i, j) of each pixel in the HSV color space and their conversion relationship with the RGB color model:

$$
P_h(i,j)=\cos^{-1}\!\left(\frac{0.5\big[(P_r(i,j)-P_g(i,j))+(P_r(i,j)-P_b(i,j))\big]}{\sqrt{(P_r(i,j)-P_g(i,j))^2+(P_r(i,j)-P_b(i,j))(P_g(i,j)-P_b(i,j))}}\right)
$$
$$
P_s(i,j)=\frac{\max(P_r(i,j),P_g(i,j),P_b(i,j))-\min(P_r(i,j),P_g(i,j),P_b(i,j))}{\max(P_r(i,j),P_g(i,j),P_b(i,j))} \qquad (1)
$$
$$
P_v(i,j)=\frac{\max(P_r(i,j),P_g(i,j),P_b(i,j))}{255}
$$

where Ph(i, j) ∈ [0, 360) and Ps(i, j), Pv(i, j) ∈ [0, 1].
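For concreteness, the following is a minimal NumPy/OpenCV sketch of the conversion in Equation (1), followed by a simple vegetation mask with a morphological opening step. The hue range, saturation threshold, and kernel size are illustrative assumptions only; they are not the adaptive thresholds calibrated by the proposed system.

```python
import numpy as np
import cv2  # used only for the morphology step


def rgb_to_hsv_eq1(img_rgb):
    """Convert an 8-bit RGB image to HSV following Equation (1).

    Returns H in degrees [0, 360) and S, V in [0, 1].
    """
    rgb = img_rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)

    # Hue from the arccos form of Equation (1); the small epsilon avoids
    # division by zero for gray pixels (r = g = b).
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    h = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b > g, 360.0 - h, h)  # arccos alone covers only 0-180 degrees

    s = (mx - mn) / (mx + 1e-12)  # saturation in [0, 1]
    v = mx / 255.0                # value in [0, 1]
    return h, s, v


def vegetation_mask(img_rgb, h_range=(60.0, 160.0), s_min=0.25):
    """Binary mask of green (vegetation) pixels; thresholds are illustrative."""
    h, s, _ = rgb_to_hsv_eq1(img_rgb)
    mask = ((h >= h_range[0]) & (h <= h_range[1]) & (s >= s_min)).astype(np.uint8) * 255
    # Morphological opening removes small noisy blobs before classification.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)


# Example usage (hypothetical file name); cv2.imread returns BGR, so convert first:
# mask = vegetation_mask(cv2.cvtColor(cv2.imread("field.jpg"), cv2.COLOR_BGR2RGB))
```

In the actual system, the fixed h_range and s_min values above would be replaced by the adaptive threshold estimation described in this section.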
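Similarly, the variable-rate irrigation idea outlined at the start of this section, a fuzzy controller that maps the WDAS and the deep-soil moisture content to a PWM level for the pump, can be sketched as below. This is only a minimal Mamdani-style illustration under assumed triangular membership functions; the rule base and breakpoints are hypothetical and do not reproduce the controller designed in this work.

```python
import numpy as np


def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)


def irrigation_pwm(wdas_pct, deep_moisture_pct):
    """Map WDAS (%) and deep-soil moisture (%) to a PWM duty cycle (%).

    Mamdani-style inference with centroid defuzzification (illustrative only).
    """
    # Fuzzify the two inputs into dry / moderate / wet sets.
    w_dry, w_mod, w_wet = (tri(wdas_pct, -1, 0, 50),
                           tri(wdas_pct, 20, 50, 80),
                           tri(wdas_pct, 50, 100, 101))
    m_dry, m_mod, m_wet = (tri(deep_moisture_pct, -1, 0, 50),
                           tri(deep_moisture_pct, 30, 60, 90),
                           tri(deep_moisture_pct, 60, 100, 101))

    # Hypothetical rule base: the drier the soil, the higher the duty cycle (AND = min).
    high = max(min(w_dry, m_dry), min(w_dry, m_mod), min(w_mod, m_dry))
    med = max(min(w_mod, m_mod), min(w_wet, m_dry), min(w_dry, m_wet))
    low = max(min(w_wet, m_mod), min(w_mod, m_wet), min(w_wet, m_wet))

    # Aggregate the clipped output sets (PWM duty low / medium / high over 0-100%).
    duty = np.linspace(0, 100, 201)
    agg = np.maximum.reduce([np.minimum(low, tri(duty, -1, 0, 40)),
                             np.minimum(med, tri(duty, 20, 50, 80)),
                             np.minimum(high, tri(duty, 60, 100, 101))])
    return float(np.sum(duty * agg) / (np.sum(agg) + 1e-12))  # centroid


# Example: fairly dry surface, deep soil below the 80% target moisture.
print(irrigation_pwm(wdas_pct=25, deep_moisture_pct=55))
```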