Delay for TRUE & FALSE



Here is the gist: I need to delay state changes of a boolean value. For example: both the input and the output of the node start as FALSE. TRUE arrives at the input, and after X seconds the output changes from FALSE to TRUE. If the input then changes from TRUE back to FALSE, the previous value (TRUE in our case) is held at the output for another X seconds before it switches from TRUE to FALSE.

I have a solution (attached image) that delays the change from FALSE to TRUE; however, as soon as TRUE on the input changes to FALSE, the output changes to FALSE instantly, and I need a delay there too.


What should happen if the input changes while you are waiting for the timer to expire (i.e. input & output are true; the input changes to false and starts a 5-second timer; 2 seconds later, the input changes back to true)?

If your system can’t change the input faster than the timeout period, then you don’t have to worry about this.

Probably the easiest to code is to keep the old value until the timer expires. Reset the timer each time the input changes. Set the output to the current input when the timer expires. In this case, you would never see the false value because it was gone before the timer expired.
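That "keep the old value until the timer expires" logic can be sketched in ordinary code. Since XOD patches are visual and can't be shown in text, here is a minimal Python illustration; the `DelayedBool` name and explicit `now` parameter are just for testability, not XOD nodes:

```python
# Sketch of the simple approach: remember the pending input, restart the
# deadline whenever the input changes, and commit the pending value to the
# output only once the deadline has passed.
class DelayedBool:
    def __init__(self, delay, initial=False):
        self.delay = delay
        self.output = initial
        self._pending = initial   # most recent input value
        self._deadline = None     # time at which the pending value takes effect

    def update(self, value, now):
        if value != self._pending:
            # input changed: store it and (re)start the timer
            self._pending = value
            self._deadline = now + self.delay
        if self._deadline is not None and now >= self._deadline:
            self.output = self._pending
            self._deadline = None
        return self.output
```

With a 5-second delay, an input that goes true at t=0 shows up on the output at t=5; a short false glitch that reverts before its deadline simply restarts the timer and never appears on the output, exactly as described above.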

Another solution is to toggle the output if the input changes while the timer is running, and reset the timer. This lets you see all changes, but some will not last the full timeout period.

The hardest option to code is to keep the original value for the full timeout, then switch to the second value and restart the timer (either for the full period, or in this case maybe only the 2 remaining seconds, because the second value came in 3 seconds before the first timer expired). When the second timer expires, switch the output to the third input value. This requires storing an unknown number of inputs and timers, since the input could change every second, or even multiple times in one second.

Determining how much time remains on the previous timer when a new value comes in would be especially tricky (you would need to record the time the change happened and compare that to the current time when you reset the timer). If you use the full time period for each change, you could fall way behind if the input keeps changing. If there are known constraints on your system (maybe there is no way it could change more than 3 times during the timeout period, for example), then coding can be simplified somewhat. In this case, a flip-flop might be easier than a buffer, but you run the risk of the flip-flop getting out of sync with your input during boundary conditions, static discharge, etc.
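One way to sketch that option (again illustrative Python, not XOD; `ChangeDelayer` is a made-up name) is to record the timestamp of each input change and replay every change exactly `delay` seconds later:

```python
from collections import deque

# Sketch of the "delay every change by the full period" variant: each input
# change is queued with its timestamp and committed to the output `delay`
# seconds after it happened. Memory grows with the number of pending changes.
class ChangeDelayer:
    def __init__(self, delay, initial=False):
        self.delay = delay
        self.output = initial
        self._last_input = initial
        self._pending = deque()   # entries of (time_of_change, new_value)

    def update(self, value, now):
        if value != self._last_input:
            self._last_input = value
            self._pending.append((now, value))
        # commit every queued change whose delay has fully elapsed
        while self._pending and now >= self._pending[0][0] + self.delay:
            _, v = self._pending.popleft()
            self.output = v
        return self.output
```

Because each change carries its own timestamp, every change automatically takes effect exactly `delay` seconds after it occurred, which sidesteps the "how much time remains on the previous timer" bookkeeping.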

For the last option, you might even be able to simplify further. For example, if you only care about delaying all changes, only need to change the output once per second, and can round input change times to an even second, then for a 5-second delay you just need a first-in-first-out queue that holds 5 values. Every second, push the current input value onto the back of the queue and output the value from the front of the queue. Some “updates” will result in a changed output value, but not all of them will. If the input changes multiple times during that second, you will only see the value present during your update pulse, so you might miss some changes if they can happen faster. If changes can’t happen more than once a second, you might be able to fix that by doubling your queue depth and halving your tick time (in this case a queue depth of 10 that updates every 1/2 second). You still have a 5-second delay, but you get updates every 0.5 seconds instead of only every second.
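The FIFO idea above can be sketched like this (Python for illustration; `make_fifo_delay` is a hypothetical name, and the caller is responsible for invoking `tick` once per interval):

```python
from collections import deque

# Sketch of the fixed-tick FIFO delay: call tick() once per interval; the
# returned output is whatever the input was `depth` ticks ago.
def make_fifo_delay(depth, initial=False):
    queue = deque([initial] * depth, maxlen=depth)

    def tick(current_input):
        oldest = queue[0]             # value sampled `depth` ticks ago
        queue.append(current_input)   # maxlen evicts the oldest automatically
        return oldest

    return tick
```

With a depth of 5 and a 1-second tick, a change on the input appears on the output 5 ticks (5 seconds) later.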

For starters, you want to keep the old value until the timer expires. You’ve based the output true/false on whether the timer is active and what the current input is, which is not what you want. You need a way to remember the old value until the timer expires. You could probably use a flip-flop since you only need to store true/false, but using a buffer will probably be easier; you just update the buffer value with the input value when the timer expires (using the delay’s DONE pin). That’s assuming the input will not change while the timer is running; code to handle that case depends on how you want to handle it.

You also have the problem that your timer is only started when the input changes to true (there is an implied pulse-on-true node when a boolean output is connected to a pulse input). You can fix this by putting a pulse-on-change node between the input and the delay’s SET pin so the timer starts on any change instead of only on a change to true.

I’m sorry, I don’t understand English well. I’m using a translator.

The main task is to control a relay. If a turn-on signal (true) comes, there is a delay of X seconds and then it turns on. If it is running (the input is true) and the input changes to FALSE for any reason, then the relay continues to run for another X seconds and then turns off.

Yes, I found the BUFFER node, but to be honest, I still have not understood how to integrate it into the circuit. I also saw the pulse-on-true and pulse-on-false nodes, but how can I trigger nodes through a pulse connection when they work with incoming boolean values?


The buffer holds the output value. It only changes when the delay timer is done. The pulse-on-change node restarts the delay timer every time IN changes.

As mentioned before, this will skip changes if IN changes faster than T.

The FIFO (First In First Out) queue method I mentioned is kind of like a conveyor belt. You drop your “thing” onto the belt and wait for it to fall off the end. If you have a longer belt, or run it slower, you will have a longer delay.

Coding the FIFO ran into issues. If you update all the buffers in the queue at the same time, you are likely to end up with the same value in every queue position instead of shifting values one position. The obvious way to handle this is to use defer to delay the pulse to the next queue position, so you end up with something like this:


The new value is fed in on the left, and the oldest value is in the right-most buffer’s MEM (which will be your output). The pulses need to start at the right to update the output value, then work their way left, updating each queue position. This works great for the first 4 positions, but XOD apparently doesn’t like a long chain of defer nodes; the last one on the left never pulses :frowning:

It shouldn’t make any difference, but creating a new buffer node with a delayed DONE pulse seems to work. I just called this new-buffer:
image
NOTE: XOD also doesn’t like chaining generic pins, so I could not make new-buffer use generic pin. I had to specify the boolean pin needed for this program.

Now I can create my wait-and-run node:


This is a debug version. Rather than having input pins, it has tweak nodes so I can change values while running in debug. Instead of a single output pin, it has multiple watch nodes so I can see what it is doing. To use it in your program, replace tweak nodes with input pins, replace the right-most watch node with an output pin and just delete the other watch nodes.

Because the queue size (conveyor belt length) is fixed, I adjust the tick timing (conveyor belt speed) based on how long you want the delay. The divide-by-5 node calculates how often to pulse given the delay time you desire, using a 5-value FIFO queue (5 buffer nodes). If you specify a 5-second delay, it will pulse every second. If you specify a 10-second delay, it will only pulse every 2 seconds. Note that you will miss any input transitions that happen between these pulses. If you need greater accuracy AND a long delay time, you will need to add more buffer nodes and adjust the divide node to match. 10 buffer nodes with a 5-second delay will divide the delay time by 10 and pulse every 0.5 seconds. Things can quickly get out of hand, though. A 20-second delay with 0.25-second resolution would require 80 buffer nodes.
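The arithmetic behind those numbers is just the delay divided by the queue depth (and, going the other way, the delay divided by the desired resolution). A quick sketch with hypothetical helper names:

```python
# Tick interval for a given delay and queue depth, and the queue depth
# needed for a target resolution (mirrors the numbers discussed above).
def tick_interval(delay_s, depth):
    return delay_s / depth

def depth_needed(delay_s, resolution_s):
    return round(delay_s / resolution_s)
```

A 5-second delay over 5 buffers gives a 1-second tick, a 10-second delay gives a 2-second tick, and a 20-second delay at 0.25-second resolution needs 80 buffers.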

Using the queue gets you a true delay for each change (limited by the accuracy of the pulse interval), but uses a lot more memory. If input changes will always be slower than the delay time, the first simple solution is better. You need to know the requirements of your particular program to determine which is the “best” solution for it. If the input can change faster than your delay and missing changes would be disastrous, the first option is very bad. If missing changes is bad and using enough memory for the queue is just as bad, then you will need to find another option.

I understand your objective as
If False → True, then delay n-seconds before turning on
If True → False, then delay m-seconds before turning off

I would just use an if-else on the delay time duration.

If true - x seconds
If false - y seconds

Run the boolean output to the delay input and also to the if-else input, then run the if-else output to the duration input of the delay.

This should give you a different delay length for each possible boolean value while only utilising 2 nodes.
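That per-direction duration idea can be sketched in code too (Python for illustration; in XOD the if-else output would feed the delay's duration pin, while `make_asymmetric_delay` and its parameters here are made up):

```python
# Sketch of the if-else-on-duration idea: the new input value selects which
# delay to use (on_delay for rising, off_delay for falling); otherwise it is
# the same "hold the old value until the timer expires" pattern.
def make_asymmetric_delay(on_delay, off_delay, initial=False):
    state = {"output": initial, "pending": initial, "deadline": None}

    def update(value, now):
        if value != state["pending"]:
            state["pending"] = value
            # if-else: choose the duration based on the new value
            state["deadline"] = now + (on_delay if value else off_delay)
        if state["deadline"] is not None and now >= state["deadline"]:
            state["output"] = state["pending"]
            state["deadline"] = None
        return state["output"]

    return update
```

For example, with a 2-second turn-on delay and a 5-second turn-off delay, a rising edge at t=0 reaches the output at t=2, while a falling edge at t=3 reaches it at t=8.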