I really like what you have going so far. I am most curious about how you plan to harvest the food. This could be quite different for different plants, and could be quite hard, especially in a dense polyculture. I saw you mention a robotic arm and a combine. Do you have any more details worked out?
There are essentially no details worked out for harvesting. The only thing we know is that it will be difficult and likely require a computer vision system and more advanced, multi-axis tools that will be very specialized.
In the meantime, I think most people with home systems will enjoy the harvesting part.
I think we’re going to use several kinds of lettuce in my master’s thesis project. Lamb’s lettuce in particular is nice because of its speed of growth and its sensitivity to water or drought stress, so we can check several kinds of sensors and compare them. I think we’re going to harvest manually.
Sounds good! It will be exciting to see people begin to hack their own tools that can be used for harvesting different types of plants. It is possible that something basic could be used for a lot of ‘simple’ plants such as heads of lettuce.
I do believe a combine would be more appropriate for a monoculture, as different plants need to be processed differently. Combines for corn and wheat are, though similar, different in many ways: wheat is cut and the kernels are beaten out of the head, whereas corn is snapped off the stalk and rubbed against abrasive rollers to remove the husks and kernels. Considering the size and cost of a combine, I think a simple electro-mechanical arm with a small collection basket would be a perfect alternative.
There are a few open source robot arm projects which can be implemented/adapted for harvesting. Some of these robots you can build with an inexpensive desktop 3D printer and a few off-the-shelf parts. Here’s one of the most promising ones, developed by Andreas from Germany: https://hackaday.io/project/3800-3d-printable-robot-arm.
As of this month (Nov 2015), this is his fourth prototype, and it’s looking pretty good so far. The documentation, a short BOM (thanks to 3D printing), and the source files are available for download from that page.
Here’s another promising one by Dan from Canada: https://hackaday.io/project/945-5-axis-robot-arm
Those projects are awesome, thanks for sharing! At least with the current FarmBot gantry system, I don’t think such a large/heavy arm could be mounted onto the z-axis, let alone the Universal Tool Mount. However, perhaps a beefier/more robust gantry could be developed to support larger tools/arms like those. Or, a smaller/lighter-weight robotic arm could be developed specifically for FarmBot!
What’s neat is that because FarmBot is already a 3-axis gantry system that can position the tool head anywhere in the growing volume, the needs of a robotic harvesting gripper/hand are much less, assuming that the already existing 3-axis system is used in conjunction with it.
Potentially just one more degree of freedom would work: a rotation about the z-axis and then a suction implement like this could be used to grab small berries!
I would like to be kept in the loop, and at the very least help out with the computer vision aspect of determining when to harvest. I’ve had a very similar vision for a personal project in terms of what FarmBot hopes to be and enable, and it’s something I hope to work on in my own time, starting June 2016.
I will be working with computer vision for my ECE (Electrical and Computer Engineering) Bachelor’s Senior Design Project (i.e. “capstone” project).
If it’s of any interest to you…
Depending on how much I can get done, we’ll at least be working in OpenCV (if not porting the design to the cheapest FPGA possible), and I will personally learn how to use and implement the following methods:
- a background subtraction (motion detection) algorithm
- generation of a Mixture of Gaussians (MoG) “color-space predicate” for later real-time classification
- morphological operations to find and smooth motion blobs/objects (after the motion detection)
- multiple thresholding algorithms for noise reduction, with thresholds determined experimentally from ROC curves (or other performance metrics)
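To make two of those steps concrete, here’s a toy pure-Python sketch (no OpenCV, and the tiny “frames” are made up for illustration) of background subtraction followed by a 3×3 morphological dilation to grow a detected change into a blob. In practice you’d use OpenCV’s built-in versions; this just shows the idea:

```python
# Toy sketch: background subtraction + morphological dilation.
# Frames are small grayscale grids (lists of lists); all values invented.

def subtract_background(frame, background, threshold=30):
    """Mark pixels whose brightness differs from the background model."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def dilate(mask):
    """3x3 morphological dilation: each pixel takes the max of its neighborhood,
    which smooths and merges isolated motion pixels into blobs."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = max(mask[ny][nx]
                            for ny in range(max(0, y - 1), min(h, y + 2))
                            for nx in range(max(0, x - 1), min(w, x + 2)))
    return out

background = [[10, 10, 10, 10],
              [10, 10, 10, 10],
              [10, 10, 10, 10]]
frame      = [[10, 10, 10, 10],
              [10, 90, 10, 10],  # one pixel changed (e.g. a leaf moved)
              [10, 10, 10, 10]]

motion = subtract_background(frame, background)  # single 1 at row 1, col 1
blob = dilate(motion)                            # grows into a 3x3 blob
```

The real MoG version models each pixel as a mixture of Gaussians instead of a single static background value, so it adapts to lighting changes.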
I know that possibly went into too much detail, and not every one of those skills is 100% relevant. I expect that relying on more than just a regular IR-filtered “webcam”-type imaging system will be necessary, as a LWIR (or maybe SWIR, if the cameras get cheap enough) camera could really help with determining the ripeness of fruits and vegetables.
By May 2016 I will at least be practiced in these methods, and better suited to learn more advanced techniques that employ adaptive-curve-fitting-style machine learning (e.g. deep learning, or anything that can handle unlabeled data) if needed.
Our project is focused on a different end goal, but the [farmbot + computer-vision-based “ripeness” detection + UI] part of a larger system is something I really want to develop in my free time after I graduate (hence the time frame). Maybe someone later can take the simple binary yes/no or relative percentage of “ripeness” that this system could output and feed it into the UI, either to trigger a harvest-bot that uses the visual information for identification, or to let the user know what to harvest and when (e.g. by highlighting each fruit/veg/leaf that’s ready for harvesting) to whatever degree of confidence is required. That part should be relatively simple for a programmer, but I’m more interested in computer engineering than software engineering or API/UX/UI/web design.
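As a hypothetical sketch of what that yes/no or relative-percentage output could look like downstream of the vision system: classify each pixel of a detected fruit region as “ripe” if it’s red-dominant, and report the ripe fraction. The margin and confidence threshold here are invented for illustration, not taken from any real system:

```python
# Hypothetical ripeness output for a UI / harvest-bot trigger.
# Pixel data and thresholds are made up for illustration.

RIPE_RED_MARGIN = 40  # assumed: red must exceed green by this much per pixel

def ripeness(pixels):
    """pixels: list of (r, g, b) tuples from a detected fruit region.
    Returns the fraction of pixels that look 'ripe' (red-dominant)."""
    ripe = sum(1 for r, g, b in pixels if r - g > RIPE_RED_MARGIN)
    return ripe / len(pixels)

def should_harvest(pixels, confidence=0.8):
    """Binary yes/no the UI could use to highlight a fruit for harvest."""
    return ripeness(pixels) >= confidence

# A mostly-red "tomato" region with one green pixel:
tomato = [(200, 40, 30)] * 9 + [(90, 160, 50)]
print(ripeness(tomato))        # 0.9
print(should_harvest(tomato))  # True
```

A real version would work in a color space less sensitive to lighting (e.g. HSV) and use the MoG color-space predicate mentioned above rather than a fixed margin.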
Let me know what you think (collinrp at sunyit.edu, until May… rcollins0618 at google afterward)
Hey there Collin! Thanks for chiming in! Your skills sound like they would be very applicable to the project. Admittedly I don’t really know all of the jargon terms you’re talking about, but I would love to work together to develop software systems for determining the ripeness of plants/fruits.
Once we get a stable hardware/software system out in the wild, we should have a lot of data and devices to play around with!!
We still need humans for something… lol
Yeah, to eat the food!
I think the most efficient harvesting system would be a prone human on a powered cart running on the same rail system as the Farmbot. This would also eliminate the need for aisles between rows and soil compaction.
To make my place look greener, and also because I love gardening.
It seems like a great idea. This will make gardening more fun.
Here’s an interesting link (a 2014 review of existing solutions) on harvesting that I found while trying to find the strength of the laser used on a cucumber harvester…
1.) Research current agricultural harvesting practices. 2.) Think spaceship.
It all depends on what the end goal for FarmBot is. If, in the end, it’s meant for busy, tech-savvy people who want something to get their produce growing and ready to be picked, then harvesting isn’t a big requirement (especially considering the specialization required). Also, with harvesting you have to be able to maintain produce quality after it’s been picked, which can decline quickly (another reason to want humans to do it).
Now, if you want to get past that and into a more commercial setting, perhaps you’d focus on the high-value crops: herbs, lettuces, microgreens, tomatoes. Lettuces and microgreens have perhaps the simplest harvesting, in that you can use a reciprocating blade and something to push and then catch the leaves. Still, at the end of harvesting these tools would have to be cleaned to ensure the produce stays clean (E. coli, etc.).
While simple in concept, adding harvesting can exponentially increase the cost and complexity of the system as a whole.
I have many plans. First I need to gather pumpkins and apples. Then I have a lot of tomatoes in my garden, more than the previous year, because I bought an Almaz grain cleaner (it can double the harvest). In November I will have a lot of carrots and Brussels sprouts (love them).