What sort of precision do we get?

We are given millisecond precision in the Wait command, but I can’t help but feel this precision is “ticked” (for lack of a better word).

To illustrate with an example: by “tick rate” I mean that any millisecond value under (for example) 360 ms will get rounded up to 360 ms. If you set a wait command to 500 milliseconds, it will actually wait 720 ms (360 + 360 ms). This means that even if you enter wait commands with millisecond precision, you only get results in 360 ms blocks. The 360 ms figure is completely arbitrary and only serves to explain the example.
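A minimal sketch of the behaviour I am hypothesizing, in Python; TICK_MS is just the arbitrary 360 ms from the example above, not a real FarmBot constant:

```python
import math

# Hypothetical "tick rate" rounding, as described above.
# TICK_MS is the arbitrary 360 ms used in the example, not an actual FBOS value.
TICK_MS = 360

def ticked_wait(requested_ms: int) -> int:
    """Round a requested wait up to the next whole tick."""
    return math.ceil(requested_ms / TICK_MS) * TICK_MS

print(ticked_wait(150))  # -> 360
print(ticked_wait(500))  # -> 720 (360 + 360)
```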

When I look at the real-world behaviour of my 150 millisecond wait command between opening and closing the solenoid valve, I find the valve staying open for half a second or maybe even a bit longer.

What is the “tick rate” of FarmBot? If hardware matters, what is it for a Genesis v1.2 (Raspberry Pi 3B + RAMPS)?


What is the “tick rate” of FarmBot?

FBOS uses a garbage-collected programming language on a non-realtime kernel, so it’s impossible to give an exact number. Although newer bots (>v1.2) have a realtime clock onboard, it is still difficult for us to get accurate wait times at millisecond granularity. We don’t have control over when Linux needs to run a CPU-intensive background job, when the BEAM virtual machine needs to pause the current process, or how long it takes for the coil in the solenoid valve to lose its magnetic field. The WAIT block is not an appropriate tool for wait times below a certain threshold / margin of error.

The WAIT block is similar to the setTimeout() function in a browser. When you run WAIT, FBOS suspends the process that is running your sequence for X milliseconds, then eventually (almost always after more than X ms) the process resumes execution. Since we can’t stop the OS from running a CPU-intensive task, it will almost always take longer than X ms to return to execution. The “rounding errors” you experience are no different from those seen in timers on most programming platforms (like JS). There are hacks to get around this (writing a kernel module, using a non-garbage-collected language, enabling kernel-level real-time support), but none of them are practical. If you need extremely accurate timing or extremely short intervals, WAIT is not the right tool for the job.

Typically, if you need that level of granularity over a timer, adding a hardware clock or specialized controller peripheral is necessary. There are plenty of places where FarmBot needs to do things that involve extremely short pauses. One example of this is the stepper motors. FBOS needs to send many pulses to the stepper motor in a very short amount of time. The solution is to add specialized hardware (the Farmduino) rather than attempt to control timing from software in FBOS.

I’m not sure where the 360 ms interval comes from, but I would venture to guess that this number is higher on slow CPUs (like the RPI0) and shorter on fast CPUs. It might also fluctuate as the codebase and user data changes.


If I have time later, I will try to get some data on the set WAIT duration versus the actual toggling of the solenoid. The solenoid opens and shuts quite audibly and visibly, so with some video editing I could measure what specific WAIT values translate to in practice. It might reveal a pattern.

I’m saying this because a 150 ms WAIT turns out to be more like 500 ms, but a 500 ms WAIT might actually be close to 500 ms. There might be a threshold beyond which the delay between opening and shutting the valve isn’t noticeably prolonged by background CPU work.

This is interesting, because it reveals that sequences are asynchronous. As in, the code for a sequence is allowed to yield to other tasks the CPU needs to do. Is there a way to make the WAIT command completely synchronous? Yes, it would mean the CPU practically “freezes up” for the duration of the wait, but as a layman I don’t understand what else the CPU should be doing between WAIT and the next sequence command. If you can make WAIT synchronous, I suspect you’ll get millisecond precision in return.
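To clarify what I mean by “synchronous”, here is a rough busy-wait sketch in Python; it only illustrates the idea and is not FBOS code (FBOS itself is written in Elixir):

```python
import time

def busy_wait_ms(duration_ms: float) -> None:
    """Spin until the deadline passes instead of yielding to the OS.

    This is what a fully "synchronous" wait would look like: tighter timing,
    but the core is pinned at 100% and, on a non-realtime kernel, the OS
    scheduler can still preempt the loop anyway.
    """
    deadline = time.perf_counter() + duration_ms / 1000.0
    while time.perf_counter() < deadline:
        pass  # burn CPU cycles until the deadline

busy_wait_ms(150)
```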

@mdingena I don’t have time to give you a full reply right now (busy day; sorry!) but I will attempt to give you a quick response. The simple answer to your question is that you are trying to use a time resolution that FBOS will probably never be able to support. You would be better served by simpler solutions, like a pressure regulator, an anti-siphon valve (to create water flow resistance), or, if you absolutely must have it, a custom circuit that triggers a more reliable, higher-resolution hardware timer connected to a GPIO (probably too much work to shave a few hundred ms off).

You also need to remember that the magnetic field in a coil does not drain instantly. This is why some electronics stay on for a few seconds after being unplugged. You need to take this into account as well (I’m not sure how many ms it takes for the coil in the solenoid valve to discharge).

The statement above is not correct. A more correct statement would be “processes in FBOS run in parallel”. Sequences run sequentially in one process, and they do not have control over CPU scheduling; that is the job of the OS and the BEAM VM.

This explanation is further complicated by the fact that the application is written in Elixir, a highly concurrent language. Causing the program to completely pause all operations is not practical for our use case.

This is probably because there is “overhead” that can’t be controlled for in a non-realtime system. With longer WAIT blocks, the effect of this overhead diminishes or becomes less noticeable. That’s my guess, but ultimately the WAIT block was not designed for precision or hard real-time constraints; we understood this limitation when we designed the system and never intended to offer hard-realtime guarantees. This is not unheard of (see link)
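To put rough numbers on that intuition, assuming a made-up fixed overhead (not a measured FBOS value), the same absolute error becomes a much smaller fraction of a longer wait:

```python
# Illustrative only: a constant overhead matters a lot for short WAITs
# and is barely noticeable for long ones.
OVERHEAD_MS = 200  # arbitrary stand-in value, not measured on real hardware

for requested in (150, 500, 2000, 10000):
    actual = requested + OVERHEAD_MS
    error_pct = OVERHEAD_MS / requested * 100
    print(f"WAIT {requested} ms -> ~{actual} ms ({error_pct:.0f}% over)")
```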

You are oversimplifying the problem. The computer is doing thousands of things even at idle, things we shouldn’t need to know about (syncing the system clock, allocating file resources, dealing with network devices, etc.). It is the job of the OS and the BEAM virtual machine to make these scheduling decisions, not the programmer’s. This is generally a good thing: non-realtime OSes are much easier to work with, albeit at the expense of fine-grained control over timer events.

If you take a slow computer and write a Python script that calls sleep() and measures the time elapsed between calls, you will see variation, and it will never sleep for exactly the amount of time requested, especially if you trigger other system processes or have many programs running. On a low-end device like an RPi the effect is amplified.
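Something along these lines (a rough sketch using plain time.sleep(), nothing FBOS-specific):

```python
import time

# Request a 150 ms sleep repeatedly and measure how long it actually takes.
# The numbers vary with system load and are never exactly 150 ms.
REQUESTED_MS = 150

for _ in range(10):
    start = time.perf_counter()
    time.sleep(REQUESTED_MS / 1000.0)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    print(f"requested {REQUESTED_MS} ms, slept {elapsed_ms:.1f} ms")
```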


Thanks, that was very educational!
