Coordinate Confusion

I’m confused about what the coordinate system is. When I look at the coordinate system figure in the High Level Overview of the docs, it makes sense. But from that, I would expect that on https://my.farmbot.io/app/controls, pushing Z up would make the Z numbers get larger, yet they seem to get smaller. Similarly, when I hit the right arrow, I would expect X to get larger, but it does the opposite.

I tried enabling the allow-negative-numbers option on the Device page, but that was even more confusing: it seems that when you do, things reverse and positive numbers are no longer allowed.

I’m not sure how this is all supposed to work. Should it be like the figure above? And is the origin in the corner roughly indicated in that figure?

Thanks

@fluffy Have you tried the “Invert Endpoints” settings by chance? We will eventually add a “Swap X / Y” setting as well. That, in conjunction with “Allow Negatives”, should allow for any configuration. Please let us know if that helps, and sorry for the delayed response.

So I’m thinking about how to do things like computer vision to find weeds. If we are going to get a community contributing software to help do things like this, we are going to need to be using a common coordinate system. If z=0 moves the camera to the top on your machine and crashes the camera into the ground on mine, things will not work out well. When I look at the code for allowing negative coordinates, it looks super buggy, and I doubt it has been tested with both positive and negative coordinates. Perhaps that is fine if we agree that z would always be <= 0.

To write software that finds a weed in an image and then moves the robot to the location of that weed, I think the easiest approach will be a single well-defined coordinate system that everyone is using (and we all mount our cameras with the same orientation). Sure, at the bottom hardware level we might have some invert settings to compensate for hardware that was assembled incorrectly, but even that might be easier to fix by just fixing the wiring.

I don’t really care what the coordinate system is (using normal CNC conventions seems like it might be a good idea), but I do think life will be better if we have a well-defined coordinate system. Does that make sense? Thoughts? (And no worries about the delay; I totally understand you guys are probably busy getting stuff ready to ship.)

@fluffy We are doing quite a bit of work on computer vision weed detection this month, so I’m glad you’re bringing it up. We hadn’t considered a unified coordinate system but instead wanted to create a flexible coordinate system that allows for interchangeability through the use of a units system. For example, a group of college students are currently designing a FarmBot system that operates on a polar coordinate system. My idea, at least initially, is to handle coordinate conversion at the application layer rather than the firmware layer. That being said, we’re not married to the idea and are open to other suggestions.
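
To make the application-layer idea a bit more concrete, here’s a minimal sketch (plain Python, not actual FarmBot code) of how a shared Cartesian bed coordinate could be translated into the radius/angle pair a polar machine would actually drive to, and back again:

```python
import math

def cartesian_to_polar(x_mm, y_mm):
    """Convert a shared Cartesian bed coordinate (mm) into the (radius, angle)
    pair a polar machine would actually drive to."""
    r = math.hypot(x_mm, y_mm)
    theta = math.atan2(y_mm, x_mm)  # radians, measured from the +X axis
    return r, theta

def polar_to_cartesian(r_mm, theta_rad):
    """Inverse conversion, so positions reported by a polar machine can be
    shared with applications that only speak Cartesian coordinates."""
    return r_mm * math.cos(theta_rad), r_mm * math.sin(theta_rad)

# An application asks for "move to (300, 400)"; the conversion layer decides
# what the machine-native target actually is.
print(cartesian_to_polar(300, 400))  # -> (500.0, 0.927... rad)
```

The point is that applications would only ever see the shared frame; the conversion happens once, in one place, above the firmware.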

With regard to computer vision, our plan was to ship a loadable module system that supports common scripting languages, such as Python, Ruby, Lua, JavaScript, etc., and provide a common API for performing the coordinate conversions at a higher level of abstraction than raw G-code operations would permit.
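
As a rough illustration of the shape this could take, here’s a hypothetical module skeleton. Every name in it (FakeFarmBotAPI, capture_image, move_absolute) is invented for this example; the real module API hasn’t been designed yet, so treat it as a sketch of the idea rather than something you could run against a FarmBot today:

```python
# Every name below (FakeFarmBotAPI, capture_image, move_absolute) is invented
# for illustration only; the real loadable-module API has not been published.

class FakeFarmBotAPI:
    """Stand-in for whatever object the module system would hand a script."""

    def capture_image(self):
        return None  # would return a photo from the bed camera

    def move_absolute(self, x, y, z):
        print(f"move to ({x}, {y}, {z})")  # would issue a high-level move, no raw G-code

def detect_weeds(image):
    """Placeholder for a user-supplied computer vision routine."""
    return [(120.0, 450.0)]  # pretend one weed was found at (x, y) in bed mm

def run(api):
    """Entry point a drop-in weed-removal module might expose."""
    image = api.capture_image()
    for x_mm, y_mm in detect_weeds(image):
        api.move_absolute(x=x_mm, y=y_mm, z=0)

run(FakeFarmBotAPI())
```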

Great to hear about loadable modules of some sort. We tried to program it using the visual blocks on the web pages, but just sort of gave up on that and started writing Python code to do things. The visual drag-and-drop atoms for building scripts are cool and I’m sure will work for some people, but being able to drop in a Python script would sure be nice for those who want to do that. Glad to hear about the computer vision work. I hope it will be easy to use OpenCV, as I find it very powerful. I look forward to playing with this as it comes out.

I’ll think some more about coordinates, but imagine the case where I am writing a program to plant carrots and I know I want them spaced in a hexagonal packing pattern at a particular distance. I want that to work on a Cartesian or a polar robot, and I couldn’t care less whether the coordinate system the robot itself uses is polar or Cartesian. That’s not arguing it needs to happen in the Arduino, just that it would be nice to have it handled well below the layer where people are writing that type of application. Similarly, if we detect a weed at a given pixel location in an image, we need an easy way to turn that back into a coordinate we can send the robot to.
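
To show what I mean, here’s a minimal sketch of the planting side of that, written purely in bed millimetres. The spacing and bed dimensions are made-up numbers; the idea is that a conversion layer underneath (Cartesian or polar) would turn these positions into machine moves:

```python
import math

def hex_grid(width_mm, height_mm, spacing_mm):
    """Yield (x, y) planting positions in a hexagonal packing pattern.

    Rows sit spacing * sqrt(3)/2 apart and every other row is shifted by half
    a spacing, so each plant ends up the same distance from all six neighbours."""
    row_height = spacing_mm * math.sqrt(3) / 2
    row, y = 0, 0.0
    while y <= height_mm:
        x = spacing_mm / 2 if row % 2 else 0.0
        while x <= width_mm:
            yield (x, y)
            x += spacing_mm
        y += row_height
        row += 1

# Carrots at 150 mm spacing in a 1200 x 600 mm section of the bed.
positions = list(hex_grid(1200, 600, 150))
print(len(positions), positions[:3])
```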

Python is our first scripting target for this very reason. OpenCV is the main requirement driving this feature. We’re still very early in development, but will have some initial concepts up on the staging server in a month or so (https://staging.farmbot.io). My only disclaimer with that is the staging server is often unstable as we use it for internal testing.
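
For anyone wanting to experiment before then, here’s a rough sketch of the pixel-to-bed-coordinate step using plain OpenCV. It assumes a camera pointing straight down, a known mm-per-pixel scale, and an image centre aligned with a known bed position; the calibration numbers and the HSV “green” threshold are placeholders, not values from our stack:

```python
import cv2
import numpy as np

# Placeholder calibration values -- not real FarmBot numbers.
MM_PER_PIXEL = 0.5                       # scale at the soil surface
CAMERA_X_MM, CAMERA_Y_MM = 400.0, 300.0  # bed position under the image centre

def weed_pixels(image_bgr):
    """Yield (px, py) centroids of green blobs found with a simple HSV threshold."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([35, 60, 60]), np.array([85, 255, 255]))
    # OpenCV 4 returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            yield m["m10"] / m["m00"], m["m01"] / m["m00"]

def pixel_to_bed(px, py, image_shape):
    """Map an image pixel to a bed coordinate in mm, assuming the camera looks
    straight down and the image centre sits at (CAMERA_X_MM, CAMERA_Y_MM)."""
    h, w = image_shape[:2]
    x_mm = CAMERA_X_MM + (px - w / 2) * MM_PER_PIXEL
    y_mm = CAMERA_Y_MM + (py - h / 2) * MM_PER_PIXEL
    return x_mm, y_mm

image = cv2.imread("bed_photo.jpg")
if image is not None:
    for px, py in weed_pixels(image):
        print("weed at", pixel_to_bed(px, py, image.shape))
```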