
@jebba We’re actually building one of those at our local makerspace! That’s an interesting idea that I hadn’t considered in the context of FarmBot. The only thing that makes it more complex for us is that seed shape / color / size is not easily configured from the web app (or at least, OpenFarm does not easily track this). Do you know how OpenPNP tracks this? It seems like a good job for an image classifier, but maybe there are better or simpler tools out there. It would be easy to temporarily clip a dental mirror to the current camera so that it is angled towards the tip of the seed injector. It might also be possible to clip on a plastic sheet to block background features, making it even easier to train an image classifier.

Thanks for sharing this. I am going to share this info with the team.


I presume OpenPNP tracks the object size/shape by using the “Package” parameter. All PCB components come in some shape or another, and there are standards for them all. So they presumably have a library of package types and use that with OpenCV. Here’s a list of them on Wikipedia:
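If that’s right, the check could be as simple as comparing a detected blob against the nominal package dimensions. Here’s a rough sketch of that idea in Python with OpenCV; the package table, camera scale, and tolerances are all made-up placeholders, not anything from OpenPNP itself:

```python
# Hypothetical sketch of a "package library" check: does the largest blob in
# the bottom-camera frame roughly match the nominal size of the expected part?
# The dimensions, camera scale, and tolerance are assumptions for illustration.
import cv2

# Nominal body sizes in millimeters (width, height).
PACKAGES = {
    "0402": (1.0, 0.5),
    "0603": (1.6, 0.8),
    "SOT-23": (2.9, 1.3),
}

MM_PER_PIXEL = 0.02   # would come from camera calibration
TOLERANCE_MM = 0.2

def matches_package(image, package_name):
    """Return True if the largest blob roughly matches the package size."""
    nominal_w, nominal_h = PACKAGES[package_name]
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    largest = max(contours, key=cv2.contourArea)
    # minAreaRect handles rotated parts; the size comes back in pixels.
    (_, _), (w_px, h_px), _ = cv2.minAreaRect(largest)
    w_mm, h_mm = sorted((w_px * MM_PER_PIXEL, h_px * MM_PER_PIXEL), reverse=True)
    return (abs(w_mm - max(nominal_w, nominal_h)) < TOLERANCE_MM
            and abs(h_mm - min(nominal_w, nominal_h)) < TOLERANCE_MM)
```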

OpenPNP is written in Java. For machine vision it uses OpenCV, through its own pipeline abstraction called “CvPipeline”. Java is kind of useless to me, but the pipeline idea itself is easy to mimic in Python (see the sketch after the links below). OpenPNP was inspired by “Firepick”, but that hasn’t been updated in years and I can’t get it to compile on ARM.

Bottom vision:

CvPipeline:

Firepick:
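Here is what I mean by mimicking the pipeline idea: a list of small stages, each taking an image and returning an image, chained in order. This is just a sketch of the concept, not OpenPNP’s actual API:

```python
# Minimal imitation of the CvPipeline concept: each stage is a function from
# image to image, and the pipeline just applies them in order.
import cv2

def to_gray(img):
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def blur(img):
    return cv2.GaussianBlur(img, (5, 5), 0)

def otsu_threshold(img):
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def run_pipeline(image, stages):
    for stage in stages:
        image = stage(image)
    return image

# Example: produce a binary mask from a captured frame (file name assumed).
mask = run_pipeline(cv2.imread("frame.jpg"), [to_gray, blur, otsu_threshold])
```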


I’m glad you’re looking into adding these features to FarmBot. I think computer vision should be used for the last few millimeters of calibration. It would be great, for instance, if when mounting a tool head the machine had approximate X, Y, Z coordinates, but then zeroed in more accurately with computer vision to make sure everything lines up perfectly. I’ve noticed my set points/coordinates “wander” slightly as environmental conditions change.
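One cheap way to do that last-millimeter zeroing would be template matching: keep a saved image of the tool mount, find it in the current frame, and turn the pixel offset into a small X/Y nudge. A sketch, where the file names, the match threshold, and the mm-per-pixel constant are all assumptions:

```python
# Hypothetical last-millimeter correction via template matching: locate a
# saved image of the tool mount in the current frame and convert the pixel
# offset into a small X/Y correction before docking.
import cv2

MM_PER_PIXEL = 0.05  # would come from camera calibration

frame = cv2.cvtColor(cv2.imread("current_view.jpg"), cv2.COLOR_BGR2GRAY)
template = cv2.cvtColor(cv2.imread("toolmount_template.jpg"), cv2.COLOR_BGR2GRAY)

result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)

if score > 0.8:  # only trust a confident match
    h, w = template.shape
    found_center = (top_left[0] + w / 2, top_left[1] + h / 2)
    # Assume the mount should sit dead center in the frame.
    expected_center = (frame.shape[1] / 2, frame.shape[0] / 2)
    dx_mm = (found_center[0] - expected_center[0]) * MM_PER_PIXEL
    dy_mm = (found_center[1] - expected_center[1]) * MM_PER_PIXEL
    print(f"Nudge gantry by ({dx_mm:.2f}, {dy_mm:.2f}) mm before docking")
```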

Then once it has the tool head mounted with CV assistance, the CV could take a look and see whether a seed has been picked up or not. This could be done similarly to OpenPNP, but you’d have to write your own “Package” types. Or, a more newfangled way to do it would be machine learning…
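The non-ML version of that check could be pretty bare-bones: diff the needle-tip region of the current frame against a reference shot of the empty needle. The region coordinates and thresholds here are hypothetical and would need tuning:

```python
# Bare-bones "seed present?" check without ML: compare the needle-tip region
# of the current frame against a stored image of the empty needle.
import cv2
import numpy as np

ROI = (200, 150, 80, 80)  # x, y, w, h around the needle tip (assumed)

def crop_tip(img):
    x, y, w, h = ROI
    return cv2.cvtColor(img[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)

empty = crop_tip(cv2.imread("needle_empty.jpg"))
current = crop_tip(cv2.imread("needle_now.jpg"))

diff = cv2.absdiff(empty, current)
changed_fraction = np.count_nonzero(diff > 40) / diff.size

print("seed picked up" if changed_fraction > 0.05 else "no seed")
```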

I am working on setting this up for kicks: the Coral Dev Board (with its Edge TPU) and a camera. It can run TensorFlow and OpenCV processing very fast. One very simple way to train a model quickly would be to use this:

I’ll likely create a TensorFlow model the “old” way, convert it to TensorFlow Lite, then run that model on the Coral Dev Board.
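For reference, the conversion step itself is short with TensorFlow’s standard converter. A sketch assuming a trained Keras model saved to disk (file names are placeholders); note that the Edge TPU additionally wants full-integer quantization and a pass through Google’s edgetpu_compiler afterwards:

```python
# Sketch of the Keras -> TensorFlow Lite conversion step.
import tensorflow as tf

model = tf.keras.models.load_model("seed_classifier.h5")  # hypothetical file

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("seed_classifier.tflite", "wb") as f:
    f.write(tflite_model)
```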


You may get some ideas from PNP nozzles too, such as this Juki:


http://www.nozzles4smt.com/Juki-508-Assembly-95-OD-80-ID_p_1474.html

And check out their vacuum sensing, perhaps:
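Vacuum sensing is how PNP machines confirm a pick without any camera at all: a sealed nozzle (part stuck on the tip) pulls noticeably lower pressure than an open one. The same trick could work for seeds. A sketch, where the sensor-read function and the threshold are entirely made up and would depend on the pump and sensor used:

```python
# Hypothetical vacuum-based pick detection. A seed sealing the needle tip
# should pull the measured pressure below the open-nozzle baseline.
import time

SEALED_MAX_KPA = -20.0  # pressure at/below this means the tip is sealed (assumed)

def read_vacuum_kpa():
    """Stand-in for an ADC read of an analog vacuum sensor."""
    raise NotImplementedError("wire up to your actual sensor")

def seed_picked_up(settle_time_s=0.3):
    time.sleep(settle_time_s)  # let the vacuum stabilize after the pick move
    return read_vacuum_kpa() <= SEALED_MAX_KPA
```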


@RickCarlino could you just configure the different types of needles in the web app for an image classifier? If there is a change at the end of the needle, assume it is a seed. Do you really need to know what type of seed it is?


I wasn’t suggesting that we need to know that information; rather, I was wondering how OpenPnP manages to do it (I am assuming they don’t use an image classifier, but maybe they do?). I think even the needle type would be irrelevant, based on my past experience with image classifiers. Ideally, there would be a “seeded” / “not seeded” classification.
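A two-class model like that is about the simplest thing you can train. A sketch of what it might look like as a tiny Keras binary classifier on cropped needle-tip images; the directory layout (data/seeded, data/not_seeded) and image size are assumptions:

```python
# Sketch of the "seeded" / "not seeded" idea as a tiny binary classifier.
import tensorflow as tf

# Two subfolders under data/ become the two classes.
train = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(96, 96), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # seeded vs. not
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train, epochs=10)
```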
