Please consider switching the application of all image transforms from capture to render to improve compositing and speed the capture process.
Image rotation (and possibly lens correction?) is permanently baked into images during the capture process, and no-data pixels are filled with black. This causes artifacts in compositing. The capture process is also very slow; I suspect, though without evidence, that the image distortion step is to blame.
Avoid any data manipulation in the capture process. Instead:

- When rendering the image within the farm designer images layer, set the appropriate transform and transform-origin that maps rectified image pixels to viewport pixels.
- It is preferable to have the viewport, and therefore the image element's coordinates and size, match the physical world (millimeters), and let the transform distort the image to that viewport.
- Add further scaling to map camera distance (Z-at-capture plus any ground offset below Z-home) using separate X and Y FoV parameters. Images taken at different distances should not render at the same size unless the FoV is narrow.
- To apply lens correction, you could use a standard SVG distortion map effect (the inverse of this example: https://codepen.io/johanberonius/pen/RopjYW) prior to the transform mentioned above.
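To illustrate the distance scaling I have in mind, here is a minimal TypeScript sketch. It is not FarmBot code; all names (`CaptureMeta`, `footprintMm`, `renderTransform`) are hypothetical, and it assumes a simple pinhole model where the ground footprint along each axis is `2 * z * tan(fov / 2)`:

```typescript
// Hypothetical per-image metadata recorded at capture time.
interface CaptureMeta {
  z: number;        // camera height above the bed at capture, mm
  fovX: number;     // horizontal field of view, degrees
  fovY: number;     // vertical field of view, degrees
  rotation: number; // camera-to-bed rotation, degrees
}

const rad = (deg: number): number => (deg * Math.PI) / 180;

// Ground area covered by one image, in real-world mm, so the image
// element can be sized in the same units as the farm designer viewport.
export function footprintMm(meta: CaptureMeta): { width: number; height: number } {
  return {
    width: 2 * meta.z * Math.tan(rad(meta.fovX) / 2),
    height: 2 * meta.z * Math.tan(rad(meta.fovY) / 2),
  };
}

// Rotation applied at render time via CSS/SVG transform, instead of
// being baked into the pixels during capture.
export function renderTransform(meta: CaptureMeta): string {
  return `rotate(${meta.rotation}deg)`;
}
```

With this approach, an image captured at z = 300 mm with a 90° horizontal FoV would render 600 mm wide, while one captured closer to the ground renders proportionally smaller, which is the behavior described above.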
Thanks for the suggestions, Aron. What you’re proposing does make sense, though it would be a large amount of frontend work that we’ll need to investigate further before committing to anything.
We’re actually just about to start some major work on improving the whole weed detection workflow, from documentation to image capture to processing to working with the data in the app afterwards. So stay tuned for improvements in this area, and keep the suggestions coming! Thanks