Using the camera to auto position

I have built my own mini FarmBot using an ATmega2560 and a Raspberry Pi 3, and so far I have no encoders, stall detection, or limit switches.

So either I buy the stall-detection hardware used in the Farmduino Express and make it work with my ATmega and RAMPS (though I have no idea how to do that), or I find a way to let FarmBot auto-position and check its position at any time with the camera and visual guides.

I'd like to know what other people have done in the same situation. Any advice and comments are welcome.

I have no garden (I live in an apartment), so I'm interested in this project purely to better understand it and, if possible, to improve it in the future.

I always recommend limit switches on mechanical projects. The damage they prevent is worth the effort and cost. Try to find the location with the least wiring for placement. For example, the X and Y limit switches can be placed on the leg of the gantry closest to the main electronics box.
For position sensing there are a number of solutions at different levels of effort, including step counting (the default), optical and magnetic incremental encoders, continuous-turn potentiometers, string potentiometers (aka string pots), optical or magnetic Gray-code absolute encoders, optical flow, optical marker pose estimation, and laser or radio grids (as used on landing strips). In more complex setups these are usually combined, either for error detection or for dead reckoning.

Thank you! Indeed, I hadn't realized I could put limit switches on the leg of the gantry; I was worried about putting them at the end of the bed given the length of wire that would imply. I didn't notice any pinout image on the FarmBot pages showing which pins on the Pi I should connect such limit switches to, but I probably need to look harder.

As for the ability to position in real time using the camera, I think it remains the most economical solution, and with good software behind it, it should also be the most capable. So I was guessing someone had already worked on it: isn't that the case?

It's also because there is probably something wrong with my motors that I'm looking for a real-time, permanent feedback solution: when I move 1 meter to the left and then back 1 meter to the right, they do not return to the same initial position! When they start moving there are jolts during the acceleration, and after about a second they move smoothly. I don't know if I can solve this problem.

If parallax were not a factor, optical flow would be easy and reliable. However, parallax is a huge factor here: the ratio of the camera-to-plant distance versus the camera-to-ground distance can be 50-75% once the plants get growing. It is possible to overcome that issue, but many of the other approaches would be simpler.

There is no clean motor-interrupt mechanism in the firmware. You can move a bit, then check and estimate the distance via the camera, and move again. You could write this using Farmware, but be aware of the 20-minute run-time limit.
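The move-then-check loop described above can be sketched as a simple closed loop. Everything below is illustrative: `move_relative` and `camera_offset_mm` are hypothetical stand-ins (here simulated) for a real motor command and a real camera measurement; neither is an actual FarmBot API call.

```python
# Sketch of the "move a bit, check with the camera, move again" idea.
position_mm = 0.0  # simulated gantry position

def move_relative(delta_mm):
    """Simulated motor command with ~5% undershoot from missed steps."""
    global position_mm
    position_mm += delta_mm * 0.95

def camera_offset_mm():
    """A real version would estimate this from an image; here we just read it."""
    return position_mm

def move_with_feedback(target_mm, tolerance_mm=1.0, max_tries=10):
    """Approach the target in closed loop, re-measuring after each move."""
    for _ in range(max_tries):
        error = target_mm - camera_offset_mm()
        if abs(error) <= tolerance_mm:
            return True               # close enough
        move_relative(error)          # command the remaining distance, then re-check
    return False                      # give up after max_tries

print(move_with_feedback(1000.0))  # → True
```

Because each iteration re-measures, systematic errors like undershoot get corrected instead of accumulating, which is exactly what open-loop step counting cannot do.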

But why don't you first grid-map the bed with the camera and record the marked positions, the way weeding is done? Then you can move to those recorded positions. That would be easier than fully autonomous moves and would reduce the parallax problem, provided you train the machine learning properly.

What is the 20 min limit?

When FarmBot released FBOS v8, which added variables to the sequence builder, they had huge problems supporting Farmware, so they limited Farmware run times to 20 minutes.

Actually, to correct myself: a better solution for you might be a separate web app or an API-hosted solution instead of Farmware. FarmBot is trying to push people in that direction now. I'm just one of those old Farmware users, so I often think in terms of that technology.

See the Farmware docs for more details.

I would definitely prefer improved real-time controls for the local API (Farmware?), the Web API, and the UI. Even using the "Controls" tab is painful for jogging, as all operations seem to be blocking.

Regarding optical positioning: assuming the bed is seen often enough, you can use SLAM with particle filtering to get pose from monocular vision alone. My instinct is to use such techniques for navigation rather than position control, because the chosen technique needs to work across a very broad range of real-world lighting and environmental conditions: precipitation, different light sources, strong shadows, the robot's own cast shadow, optical aberrations, exposure, motion blur, night-time operation, etc.
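To make the particle-filtering idea above concrete, here is a toy 1-D sketch: estimating a position from noisy motion commands plus noisy "camera" range readings to a single known landmark. Real monocular SLAM is far more involved; this only illustrates the predict/weight/resample cycle, and all the numbers (landmark position, noise levels) are made up.

```python
# Toy 1-D particle filter: track a position from noisy moves + noisy ranges.
import numpy as np

rng = np.random.default_rng(42)
LANDMARK = 800.0       # known landmark position (mm), an assumed value
MOTION_NOISE = 5.0     # std dev of per-step motion error (mm)
SENSOR_NOISE = 10.0    # std dev of range-measurement error (mm)

def step(particles, commanded_mm, measured_range):
    # Predict: apply the commanded move plus motion noise to every particle
    particles = particles + commanded_mm + rng.normal(0, MOTION_NOISE, particles.size)
    # Weight: how well does each particle explain the measured range?
    expected = np.abs(LANDMARK - particles)
    w = np.exp(-0.5 * ((expected - measured_range) / SENSOR_NOISE) ** 2)
    w /= w.sum()
    # Resample: draw particles in proportion to their weights
    return rng.choice(particles, size=particles.size, p=w)

# Simulate: the true position drifts from the commands; the filter tracks it
true_pos, particles = 0.0, rng.uniform(0, 1000, 2000)
for _ in range(20):
    true_pos += 50.0 + rng.normal(0, MOTION_NOISE)            # real (noisy) move
    z = abs(LANDMARK - true_pos) + rng.normal(0, SENSOR_NOISE)
    particles = step(particles, 50.0, z)

print(round(float(abs(particles.mean() - true_pos)), 1))  # estimation error in mm
```

The same cycle generalizes to 2-D pose with image features as landmarks, which is where the lighting and shadow robustness concerns above really start to bite.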


Wouldn't lidar be more reliable than using the camera, then?


Did you ever see this kind of behaviour (link to video)? I have it on all my motors:

I have reduced the acceleration to show the effect better: it seems the motor jolts, and even stalls, at one particular speed; below or above that speed it runs smoothly.

This is the reference of my motors:

Eventually, a half-step motor configuration solved the issue.

@whitecaps That depends. In certain cases, yes.

I was thinking about optical positioning using a cross-correlation function between the current image and reference images from the day before. This should hopefully work fine even in variable lighting conditions, because the cross-correlation is mainly sensitive to the similarities between two images as a function of the offset between them. It should work even better at night with a controlled IR source and an IR camera. But currently, when I take a photo, there is a very long delay before I get it on screen, so I'm wondering if I need a locally installed server: is the web app server available for local install?
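The cross-correlation idea above is straightforward to prototype. A minimal sketch, using plain NumPy and an FFT-based circular cross-correlation, with a synthetic random texture standing in for real camera frames:

```python
# Estimate the pixel offset between two images via FFT cross-correlation.
import numpy as np

def estimate_offset(ref, img):
    """Return the (dy, dx) shift of `ref` relative to `img`, from the peak
    of their circular cross-correlation."""
    # Subtracting the mean makes uniform lighting changes matter less
    f_ref = np.fft.fft2(ref - ref.mean())
    f_img = np.fft.fft2(img - img.mean())
    corr = np.fft.ifft2(f_ref * np.conj(f_img)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks in the far half of the array correspond to negative shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# Synthetic test: a random "bed texture" shifted by (5, -3) pixels
rng = np.random.default_rng(0)
scene = rng.random((128, 128))
shifted = np.roll(scene, shift=(5, -3), axis=(0, 1))
print(estimate_offset(shifted, scene))  # → (5, -3)
```

With real photos you would multiply the pixel offset by a calibrated mm-per-pixel scale; normalizing by the spectrum magnitudes (phase correlation) gives a sharper peak if plain cross-correlation proves too blunt.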

@fhenryco The server can be self-hosted. Directions are provided here. The support provided to self-hosters is limited (our official recommendation is that all users use the official server), but it is possible to run a server on an Ubuntu 18 machine. If you are running an Express device, you will not see an increase in image-processing performance by running your own server; the image-processing slowness in those cases is due to an issue on the device itself. We're actively looking for solutions to these issues (this week) as we work on v2 of the weed detector.

I know nothing about Ruby, unfortunately. I have some experience with Python. I have a DIY FarmBot with a Raspberry Pi 3, RAMPS, and an Arduino Mega.

You could also use an external camera feeding a Python script you write, which then controls the bot via the API. That way you get the control you want without having to set up the web app. Just a suggestion.
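One piece the external-camera setup above would need either way is converting a pixel offset seen by the fixed camera into millimetres of bed travel. A minimal sketch, calibrated from two fiducial markers a known distance apart; the function names are illustrative, not part of any FarmBot API:

```python
# Pixel-to-millimetre calibration from two markers a known distance apart.
def mm_per_pixel(marker_a_px, marker_b_px, known_distance_mm):
    """Scale factor (mm/pixel) from the pixel distance between two markers."""
    dx = marker_b_px[0] - marker_a_px[0]
    dy = marker_b_px[1] - marker_a_px[1]
    return known_distance_mm / (dx * dx + dy * dy) ** 0.5

def pixel_offset_to_mm(offset_px, scale):
    """Convert an (x, y) pixel offset into an (x, y) offset in mm."""
    return (offset_px[0] * scale, offset_px[1] * scale)

# Example: markers detected 400 px apart that are really 1 m apart on the bed
scale = mm_per_pixel((100, 100), (500, 100), 1000.0)
print(scale)                                  # → 2.5 (mm per pixel)
print(pixel_offset_to_mm((40, -8), scale))    # → (100.0, -20.0)
```

The script would then detect the tool head (or a marker on it) in each frame, convert its pixel error to mm, and issue a corrective move through whatever API access you set up.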

@fhenryco Just got off our daily status call. Gabe may have found a fix for the slow-photo issue that might address some of the performance problems. I'm hoping we can get it deployed by the end of the week.


Good! Unfortunately I don't have enough time to really test optical positioning, but I'm pretty sure that trying it with a cross-correlation function would be well worth the effort. It's a mathematical tool heavily used by scientists, and all the relevant mathematical libraries are available in Python and easy to use, so I'd be surprised if it weren't just as straightforward to test in Ruby for you, the FarmBot developers.
