I have an idea for a twist on the traditional PID algorithm used in homebrewing. I have designed adaptive, second-order loops in the past, which are particularly well suited to controlling slowly responding systems such as heating hundreds of pounds of thermal mass.
For example, to avoid overshoot without sacrificing response time, you can heat at max output while the temp delta is above some threshold (say 10 degrees), then cut back the output and allow more time for the energy to propagate through the system, closing the final gap without overshooting.
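The two-stage heuristic above can be sketched in a few lines. This is just an illustration of the idea, not a tuned controller; the threshold and reduced duty-cycle values are placeholders I made up, not values from any real rig.

```python
def heater_output(setpoint_f, temp_f, threshold_f=10.0, reduced_duty=0.3):
    """Two-stage heating heuristic: full power while far below the
    setpoint, then a reduced duty cycle so heat has time to propagate
    before the probe catches up. Values are illustrative only."""
    delta = setpoint_f - temp_f
    if delta > threshold_f:
        return 1.0            # far from setpoint: max output
    if delta > 0:
        return reduced_duty   # close the final gap gently
    return 0.0                # at or above setpoint: heater off

# e.g. mashing in at 152 F:
print(heater_output(152, 130))  # far below -> 1.0 (full power)
print(heater_output(152, 148))  # within threshold -> 0.3
print(heater_output(152, 153))  # overshoot -> 0.0
```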
The issue with relying on temperature sensor feedback alone is that there is significant latency between the moment electrical power is applied and the moment the probe sees the result: power becomes heat at the heater surface, conducts into the wort, the wort mixes toward temperature equilibrium, heat propagates through the thermowell, and finally reaches the temperature probe, which has its own thermal response lag. So you either go slow or you go over. No single combination of lead and lag terms in a loop can be optimal across varying factors like thermal mass and thermal conductivity.
The result is either overshoot or a slower-than-optimal response time.
So, I was thinking of adding one parameter to the control loop: wort volume. Theoretically, I could calculate a fairly precise time interval, based on the thermal load mass, over which to run the heater at max output, such that I can predict the net increase in system temperature after that heater interval has ended and all the previously mentioned system delays have settled out.
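As a sanity check on the idea, here's a minimal sketch of that burst-time calculation, assuming a simple lumped energy balance (delta_T = eta * P * t / (m * c), solved for t). The density, specific heat, and efficiency numbers are rough assumptions I plugged in, not measured values, and it ignores losses except through the efficiency factor.

```python
# Assumed constants (not measured):
WORT_DENSITY_KG_PER_L = 1.040     # typical pre-boil wort, approximate
SPECIFIC_HEAT_J_PER_KG_K = 4000   # wort is close to water (~4186 for water)
LITERS_PER_GALLON = 3.785

def burst_seconds(volume_gal, delta_t_f, heater_watts, efficiency=0.9):
    """Full-power heater interval predicted to raise the whole wort
    mass by delta_t_f degrees F, from the lumped energy balance
    t = m * c * delta_T / (eta * P). Ignores ongoing heat losses."""
    mass_kg = volume_gal * LITERS_PER_GALLON * WORT_DENSITY_KG_PER_L
    delta_t_k = delta_t_f * 5.0 / 9.0   # F degrees -> K degrees
    energy_j = mass_kg * SPECIFIC_HEAT_J_PER_KG_K * delta_t_k
    return energy_j / (efficiency * heater_watts)

# e.g. raising 10 gal by 8 F with a 5500 W element:
print(round(burst_seconds(10, 8, 5500)))  # about 141 seconds
```

The appeal is that volume is something you already know for each batch, so the loop can fire a predictable energy dose up front and let feedback handle only the residual error once the system equalizes.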
I believe this would let the loop converge much faster, with less temperature error.
Anyone tried this approach?
-G