Weeding Robot

While working on CitizenSeeds and the Sensorboxes of the Starter Kit, we had the idea of putting cameras not only in small vegetable gardens but also on farms, so that people could remotely follow what is growing in the fields. During a discussion with a friend who is a market farmer, it became clear that a tool to control the weeds in his field would be much more useful to him than cameras. He showed me videos of advanced weeding machines that work "in-row" (video), meaning that they also remove the weeds between the plants within a row.

In-row weeder for tractors


That became the starting point of the weeding robot: design a tool that (1) is lightweight and practical for farmers to remove weeds, and that (2) also collects data, such as image maps, to document what's happening in the field. We are mostly interested in small agroecological/permaculture/bio-intensive farms that plant their crops at dense spacings, which require more manual work and for which tractor-based machines are not appropriate.


We didn't go for a design with several arms, as in the "in-row" machines. Instead we decided to use a CNC machine and put it on wheels. CNC machines can position a milling tool precisely along three axes and are normally used for cutting objects out of wood or metal. We replaced the milling tool with a "soil mixer" that disturbs small and germinating weeds. The CNC machine that we use is the X-Carve. (Both FarmBot and ecoRobotix take a similar approach.)
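
As a rough illustration of how such a gantry is driven, here is a minimal sketch that sends a few G-code positioning commands to a Grbl-type controller (the firmware typically shipped with the X-Carve) over a serial link. The port name, coordinates and feed rate are placeholders for illustration, not values from our setup.

  # Minimal sketch: move a Grbl-driven gantry to a position with G-code.
  # Port name, coordinates and feed rate are illustrative placeholders.
  import time
  import serial

  def send(port, line):
      """Send one G-code line and wait for the controller's reply ('ok' or an error)."""
      port.write((line + "\n").encode("ascii"))
      while True:
          reply = port.readline().decode("ascii").strip()
          if reply:
              return reply

  with serial.Serial("/dev/ttyACM0", 115200, timeout=2) as grbl:
      time.sleep(2)               # give Grbl time to reset after the port opens
      grbl.reset_input_buffer()   # discard the start-up banner
      send(grbl, "G21")           # units in millimetres
      send(grbl, "G90")           # absolute coordinates
      send(grbl, "G0 X100 Y50")   # rapid move above the target spot
      send(grbl, "G1 Z-20 F300")  # lower the tool into the soil at a slow feed
      send(grbl, "G0 Z0")         # retract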

So now we have an interesting challenge: develop the computer vision and motion planning to control the "robot".

We will use this page to post the technical information. This is an "open" project and if you are interested in helping, please contact us.


The first trip of the robot outside of the office!
The X-Carve CNC machine
Brought to you from an office in the center of Paris.


Computer vision

Monitoring and nurturing crops is greatly helped by tools from computer vision. In particular, the following tasks are considered:

- Image fusion: when gathering images from multiple locations and/or multiple sensors, algorithms such as stitching help build a consistent representation of the data (a first sketch follows this list).

- Image segmentation: distinguishing the regions occupied by plants from those occupied by soil is critical to the operation of the weeding robot. We focus on algorithms that can segment images at a reasonable frame rate, so that the robot can operate in real time (see the second sketch below).

- Image recognition: not yet implemented.
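
For image fusion, here is a minimal sketch using OpenCV's high-level stitcher in scan mode, which is suited to flat, top-down scenes; the file names are placeholders.

  # Stitch several overlapping top-down photographs into one map image.
  import cv2

  images = [cv2.imread(name) for name in ("bed_01.jpg", "bed_02.jpg", "bed_03.jpg")]
  stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)   # SCANS mode for planar, top-down views
  status, bed_map = stitcher.stitch(images)
  if status == cv2.Stitcher_OK:
      cv2.imwrite("bed_map.jpg", bed_map)
  else:
      print("stitching failed with status", status)

For the segmentation step, here is a minimal sketch based on an excess-green index (2G - R - B) followed by Otsu thresholding. This is one common approach for separating vegetation from soil, not necessarily the algorithm we will end up using, and the morphological kernel size is an illustrative assumption.

  # Classify pixels as plant or soil with an excess-green index.
  import cv2
  import numpy as np

  def segment_plants(bgr):
      """Return a binary mask where 255 marks pixels classified as plant."""
      b, g, r = cv2.split(bgr.astype(np.float32) / 255.0)
      exg = 2.0 * g - r - b                       # excess-green index
      exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
      _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
      kernel = np.ones((5, 5), np.uint8)          # small kernel to remove speckle
      return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

  mask = segment_plants(cv2.imread("bed_map.jpg"))
  cv2.imwrite("plant_mask.png", mask)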

Motion planning

Based on the image analysis, a map of the robot's workspace is built; a sketch of a simple grid representation follows the list below. Two ways of building this map are considered:

- Offline map construction

- Online map construction
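
As an illustration of how the segmentation result could feed the motion planning, the sketch below turns a binary plant mask into a coarse occupancy grid of the workspace and lists the plant-free cells for the soil mixer to visit in a zig-zag order. The cell size and the millimetres-per-pixel scale are assumptions, not calibrated values.

  # Build a coarse occupancy grid from a binary plant mask and compute
  # a simple zig-zag (boustrophedon) weeding path over the free cells.
  # MM_PER_PIXEL and CELL_MM are illustrative assumptions.
  import numpy as np

  MM_PER_PIXEL = 0.5     # assumed camera calibration
  CELL_MM = 50.0         # plan at 5 cm resolution

  def occupancy_grid(plant_mask):
      """Downsample a 0/255 plant mask into a boolean grid (True = plant)."""
      cell_px = int(CELL_MM / MM_PER_PIXEL)
      rows = plant_mask.shape[0] // cell_px
      cols = plant_mask.shape[1] // cell_px
      grid = np.zeros((rows, cols), dtype=bool)
      for r in range(rows):
          for c in range(cols):
              block = plant_mask[r*cell_px:(r+1)*cell_px, c*cell_px:(c+1)*cell_px]
              grid[r, c] = block.mean() > 25     # 'plant' above roughly 10% cover
      return grid

  def weeding_path(grid):
      """Visit every plant-free cell, alternating direction on each row."""
      path = []
      for r in range(grid.shape[0]):
          cols = range(grid.shape[1]) if r % 2 == 0 else reversed(range(grid.shape[1]))
          for c in cols:
              if not grid[r, c]:
                  path.append((c * CELL_MM + CELL_MM / 2, r * CELL_MM + CELL_MM / 2))
      return path   # (x, y) targets in millimetres for the gantry

Each (x, y) target could then be sent to the gantry as a G0 move, as in the first sketch above.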

Licenses

  1. All the designs, images, videos and documentation are licensed under the Creative Commons Attribution + ShareAlike license (CC BY-SA v2.0).
  2. All the software is licensed under the GNU GPL v3.