Asking for feedback: new ModbusWorker implementation

Over the last weeks and months I have been working on a rewrite of the ModbusWorker implementation (triggered by the thread “Zeitlicher Ablauf der Modbus-Tasks”, i.e. “timing of the Modbus tasks”).

See this Pull-Request:

I know there are more things to be improved in the Modbus implementation (e.g. enabling usage via delegation, which is currently only possible via inheritance), but at its core this is only a rewrite of the ‘ModbusWorker’, which schedules the execution of Modbus Tasks. Mainly it should perform much better at reading registers ‘as late as possible’ and writing ‘as early as possible’, while handling defective components (e.g. meters that are not available) much better, without blocking the entire bus.

As this is a quite critical component for many users of OpenEMS, I’d be glad to receive input from the Community before merging this into the next Release, e.g. as comments here (with trace-logs enabled) or as review on Github. Currently my target for merging this is 2023.9.0.

Thank you in advance!

See readme below for details on the new version:

Modbus

Modbus is a widely used standard for fieldbus communications. It is used by all kinds of hardware devices like photovoltaics inverters, electric meters, and so on.

Modbus/TCP

[Bridge Modbus/TCP](https://github.com/OpenEMS/openems/blob/develop/io.openems.edge.bridge.modbus/src/io/openems/edge/bridge/modbus/BridgeModbusTcpImpl.java) for fieldbus communication via TCP/IP network.

Modbus/RTU

[Bridge Modbus/RTU Serial](https://github.com/OpenEMS/openems/blob/develop/io.openems.edge.bridge.modbus/src/io/openems/edge/bridge/modbus/BridgeModbusSerialImpl.java) for fieldbus communication via RS485 serial bus.

Implementation details

OpenEMS Components that use Modbus communication must implement the ModbusComponent interface and provide a ModbusProtocol. A protocol uses the notion of a Task to define an individual Modbus Read or Write request that can cover multiple Modbus Registers or Coils, depending on the Modbus function code. It is possible to add tasks to or remove them from a protocol at runtime, and to change the execution Priority. The Modbus Bridge (Bridge Modbus/RTU Serial or Bridge Modbus/TCP) collects all protocols and manages the execution of Tasks.

Execution of Modbus Tasks

Execution of Modbus Tasks is managed by the ModbusWorker. It…

  • executes Write-Tasks as early as possible (directly after the EXECUTE_WRITE event)
  • executes Read-Tasks as late as possible to have values available exactly when they are needed (i.e. just before the BEFORE_PROCESS_IMAGE event). To achieve this, the ModbusWorker evaluates all execution times and ‘learns’ an ideal delay time that is applied on every Cycle - the ‘CycleDelay’
  • handles defective ModbusComponents (i.e. ones where tasks have repeatedly failed) and delays reading from/writing to those components, in order to avoid defective components blocking the entire communication bus. The maximum read delay for a defective component is 5 minutes. A ModbusComponent can trigger a retry for a defective Component by calling the retryModbusCommunication() method.
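The ‘learned’ CycleDelay can be illustrated with a small standalone model (class and method names here are hypothetical, not the actual ModbusWorker code): the delay is re-estimated each Cycle from the configured Cycle-Time, the measured task execution time and a safety buffer.

```java
// Hypothetical sketch of the 'CycleDelay' idea; not the actual ModbusWorker code.
public class CycleDelaySketch {

    private long delayMs = 0; // the learned delay, applied at the start of each Cycle

    /**
     * After each Cycle, re-estimate the delay from the configured Cycle-Time,
     * the measured task execution time and a safety buffer.
     */
    public void learn(long cycleTimeMs, long measuredTaskTimeMs, long bufferMs) {
        long candidate = cycleTimeMs - measuredTaskTimeMs - bufferMs;
        // Never go negative; a negative value means the Cycle-Time is too short
        this.delayMs = Math.max(0, candidate);
    }

    public long getDelayMs() {
        return this.delayMs;
    }

    public static void main(String[] args) {
        var sketch = new CycleDelaySketch();
        sketch.learn(1000, 150, 50); // 1s Cycle, 150ms of tasks, 50ms buffer
        System.out.println(sketch.getDelayMs()); // prints 800
        sketch.learn(1000, 1100, 50); // tasks took longer than the Cycle
        System.out.println(sketch.getDelayMs()); // prints 0 -> CycleTimeIsTooShort
    }
}
```

If the result is 0, reads start immediately and the bridge would raise the CycleTimeIsTooShort channel described below.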

Priority

Read-Tasks can have two different priorities, which are defined in the ModbusProtocol definition:

  • HIGH: the task is executed once every Cycle
  • LOW: only one task of all defined LOW priority tasks of all components registered on the same bridge is executed per Cycle

Write-Tasks always have HIGH priority, i.e. a set-point is always executed as soon as possible - as long as the Component is not marked as defective.
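The LOW-priority behaviour - one LOW task of all components per Cycle - amounts to a round-robin over the registered LOW tasks. A minimal standalone sketch (hypothetical names, not the OpenEMS implementation):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of 'one LOW priority task per Cycle'; not the OpenEMS code.
public class LowPriorityRoundRobin {

    private final Deque<String> lowPriorityTasks = new ArrayDeque<>();

    public LowPriorityRoundRobin(List<String> tasks) {
        this.lowPriorityTasks.addAll(tasks);
    }

    /** Returns the one LOW priority task for this Cycle and re-queues it at the end. */
    public String nextForCycle() {
        String task = this.lowPriorityTasks.poll();
        if (task != null) {
            this.lowPriorityTasks.add(task); // fair rotation across all components
        }
        return task;
    }
}
```

Over four Cycles with tasks [A, B, C] this yields A, B, C, then A again, so every LOW task eventually gets its turn.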

Channels

Each Modbus Bridge provides Channels for more detailed information:

  • CycleTimeIsTooShort: the configured global Cycle-Time is too short to execute all planned tasks in one Cycle
  • CycleDelay: see ‘CycleDelay’ in the ‘ModbusWorker’ description above

Logging

Often it is useful to print detailed logging information on the Console for debugging purposes. Logging can be enabled on Task level in the definition of the ModbusProtocol by adding .debug() or globally per Modbus Bridge via the LogVerbosity configuration parameter:

  • NONE: Show no logs
  • DEBUG_LOG: Shows basic logging information via the Controller.Debug.Log
  • READS_AND_WRITES: Show logs for all read and write requests
  • READS_AND_WRITES_VERBOSE: Show logs for all read and write requests, including actual hex or binary values of request and response
  • READS_AND_WRITES_DURATION: Show logs for all read and write requests, including actual duration time per request
  • READS_AND_WRITES_DURATION_TRACE_EVENTS: Show logs for all read and write requests, including actual duration time per request & trace the internal Event-based State-Machine

The log level set via the configuration parameter may be changed at any time at runtime, without side-effects on the communication.

Regards,
Stefan

Hi Stefan,

Thanks for the detailed update on the ModbusWorker improvements. Your suggested improvements are pretty much in line with what I have seen in similar solutions.

I think having the option to set individual (target) channel sample periods would be a nice solution, since it gives you the flexibility to monitor one channel at a higher frequency than another - for the case where both channels are lower priority and not critical for control. I do understand that this would add a lot of complexity to the setup and could cause more problems in other areas than the benefit of this functionality would be worth. Just something to think about, but it is very handy when you have a large number of channels that need to be monitored and you want to minimise data and storage usage.

Anyhow, that’s my 2 cents on it, keep up the good work!

Cheers

Thanks for the feedback! Those sampling periods might be a good improvement to have in future. They could be implemented via the new TasksSupplier, similarly to the Priority concept. I’d still leave this out for now, because I did not want to introduce completely new concepts with this PR, but rather focus on not-breaking things. :slight_smile:

Regards,
Stefan

Hi @stefan.feilmeier,

many thanks for this rework of the ModbusWorker, which indeed is a very good approach for reads and writes of Modbus registers.

As I have adopted this approach for the RctPower protocol, I looked into the timing of this implementation, which did not work in my implementation. After testing the timing on the Modbus implementation itself, it did not work as expected there either.

To troubleshoot the issue, I set the cycle time to 8 seconds.

The assumptions:

  1. The Read Tasks will be executed to be finished as late as possible before the “before process image” event. This works.
  2. The Write Tasks will be executed as early as possible after the “execute write” event. This does NOT work (I have a delay of almost 8 seconds)

Here is some debug output from the Modbus bridge (the Modbus bridge implementation was not changed):

2025-04-19T19:55:11,289 [_cycle ] INFO [ker.internal.TasksSupplierImpl] [modbus1] Getting [2] read and [3] write tasks for this Cycle
2025-04-19T19:55:11,290 [_cycle ] INFO [ker.internal.CycleTasksManager] [modbus1] State: FINISHED → INITIAL_WAIT (in onBeforeProcessImage) Delay [7809] PreviousDelay [7797ms] + Wait [162ms] = PossibleDelay [7959ms]
end task ExecuteState$NoOp
execute task WaitDelayTask [delay=7809]
2025-04-19T19:55:11,290 [_cycle ] INFO [.edge.rctpower.ess.RctPowerEss] [ess1] before process image
2025-04-19T19:55:11,314 [_cycle ] INFO [ker.internal.CycleTasksManager] [modbus1] State: INITIAL_WAIT → WRITE (onExecuteWrite)
**release waitMutexTask!!!
2025-04-19T19:55:11,315 [_cycle ] INFO [.edge.rctpower.ess.RctPowerEss] [ess1] on execute write
2025-04-19T19:55:11,316 [_cycle ] INFO [.edge.rctpower.ess.RctPowerEss] [ess1] after write

###DELAY### → WaitDelayTask still running (releasing waitMutexTask has no effect, as we are running the WaitDelayTask, not the waitMutexTask)

end task ExecuteState$NoOp (WaitDelay Task ended)
execute task FC16WriteRegistersTask
2025-04-19T19:55:19,106 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC16WriteRegisters [ess0;unitid=1;ref=57355/0xe00b;length=7] Elapsed [5ms]
2025-04-19T19:55:19,107 [modbus1 ] INFO [ker.internal.CycleTasksManager] [modbus1] State: WRITE → WAIT_BEFORE_READ (getNextTask)
2025-04-19T19:55:19,107 [modbus1 ] INFO [ker.internal.CycleTasksManager] [modbus1] State: WAIT_BEFORE_READ → READ_AFTER_WRITE (onWaitDelayTaskFinished)
2025-04-19T19:55:19,118 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC3ReadHoldingRegisters [ess0;unitid=1;priority=LOW;ref=57348/0xe004;length=14] Elapsed [11ms]
2025-04-19T19:55:19,131 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC3ReadHoldingRegisters [ess0;unitid=1;priority=HIGH;ref=57668/0xe144;length=58] Elapsed [11ms]
2025-04-19T19:55:19,131 [modbus1 ] INFO [ker.internal.CycleTasksManager] [modbus1] State: READ_AFTER_WRITE → FINISHED (getNextTask)
2025-04-19T19:55:19,299 [_cycle ] INFO [ker.internal.TasksSupplierImpl] [modbus1] Getting [2] read and [3] write tasks for this Cycle
2025-04-19T19:55:19,300 [_cycle ] INFO [ker.internal.CycleTasksManager] [modbus1] State: FINISHED → INITIAL_WAIT (in onBeforeProcessImage) Delay [7838] PreviousDelay [7809ms] + Wait [166ms] = PossibleDelay [7975ms]
2025-04-19T19:55:19,301 [_cycle ] INFO [.edge.rctpower.ess.RctPowerEss] [ess1] before process image

When I change the following code to return the mutexTask, it works as expected:

Change to:
// Waiting for EXECUTE_WRITE event
this.waitMutexTask;

Can you reproduce the issue I have on my system (e.g. is it a generic bug), and is this fix the right approach?

Best regards,
Timo

Just tested and reproduced the issue again with a clean openems-2025.6.0 (I only added my SolarEdge ESS implementation).

@stefan.feilmeier can you confirm the issue, and do you have any objections against my suggested fix?

Here are the logs with the unmodified Modbus bridge (8 seconds cycle time):
2025-07-23T20:06:39,504 [_cycle ] INFO [dge.solaredge.ess.SolarEdgeEss] [ess0] execute write
2025-07-23T20:06:47,263 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC16WriteRegisters [ess0;unitid=1;ref=57355/0xe00b;length=7]
2025-07-23T20:06:47,268 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC3ReadHoldingRegisters [ess0;unitid=1;priority=LOW;ref=62212/0xf304;length=2]
2025-07-23T20:06:47,289 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC3ReadHoldingRegisters [ess0;unitid=1;priority=HIGH;ref=57668/0xe144;length=50]
2025-07-23T20:06:47,317 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC3ReadHoldingRegisters [ess0;unitid=1;priority=HIGH;ref=40071/0x9c87;length=38]
2025-07-23T20:06:47,325 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC3ReadHoldingRegisters [ess0;unitid=1;priority=HIGH;ref=40188/0x9cfc;length=2]
2025-07-23T20:06:47,480 [_cycle ] INFO [dge.solaredge.ess.SolarEdgeEss] [ess0] before process image

Comment: The FC16WriteRegisters should be as early as possible after the “execute write” event (which is not the case).

After replacing the following code:

With:
// Waiting for EXECUTE_WRITE event
this.waitMutexTask;

The behavior is as follows (8 seconds cycle time):
2025-07-23T20:19:57,843 [_cycle ] INFO [dge.solaredge.ess.SolarEdgeEss] [ess0] execute write
2025-07-23T20:19:57,855 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC16WriteRegisters [ess0;unitid=1;ref=57355/0xe00b;length=7]
2025-07-23T20:20:05,695 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC3ReadHoldingRegisters [ess0;unitid=1;priority=LOW;ref=57718/0xe176;length=18]
2025-07-23T20:20:05,707 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC3ReadHoldingRegisters [ess0;unitid=1;priority=HIGH;ref=57668/0xe144;length=50]
2025-07-23T20:20:05,721 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC3ReadHoldingRegisters [ess0;unitid=1;priority=HIGH;ref=40071/0x9c87;length=38]
2025-07-23T20:20:05,816 [_cycle ] INFO [dge.solaredge.ess.SolarEdgeEss] [ess0] before process image

Comment: The FC16WriteRegisters should be as early as possible after the “execute write” event (which is the case now).

Hi @MrT,

sorry, I did not find the time to dig into this in detail. I created a PR with your suggested change:

Your fix looks good to me. Unfortunately now the JUnit tests fail and I don’t seem to find an easy way to fix them. It’s been a while since I was working on that code…

Could you support and fix the JUnit tests to suit your improvement?

Can you share your Github User account? I could then add you as co-author on the PR.

Could you share your RCT (“io.openems.edge.rctpower.ess.RctPowerEss”) and SolarEdge-ESS (“io.openems.edge.solaredge.ess.SolarEdgeEss”) implementations publicly? Those would be great additions to the OpenEMS project.

Thanks & Regards,

Stefan

As far as I understand the key issue is that when using waitMutexTask, the state machine flow changes. With the delay approach, onWaitDelayTaskFinished() is called to transition from INITIAL_WAIT to READ_BEFORE_WRITE. With the mutex approach, we need to handle this transition differently.

Hi @stefan.feilmeier,

my Github user account is timo-schlegel and yes, I’m planning to release both implementations. But so far, I’m not familiar with writing and using JUnit tests, so I need to invest some time to get familiar with this topic and add JUnit tests to the implementations.

The same applies to your request for the JUnit tests of the Modbus bridge. Maybe @Sn0w3y could help here, as he seems to understand the problem?

Could you provide both of them in your Github repo and add me as a collaborator? Same name as here on Github 😉

@MrT can you test:

My Test tells me:

CycleTasksManagerTest > testWriteTaskExecutesImmediatelyAfterExecuteWrite STANDARD_OUT
=== TEST: Write task executes immediately after onExecuteWrite ===
Starting cycle with onBeforeProcessImage()
Got WaitDelayTask with delay: 0ms
Delay task thread started, beginning wait…
Delay task completed normally
Calling onExecuteWrite() to interrupt the delay
Got write task immediately: DummyWriteTask [name=WT_1, delay=90]
Test passed: Write task was available immediately without waiting for delay!
=== END TEST ===

1 Like

Yes, I can do. But I need to finish setting up my Github Repo first.

Regarding your fix, it raises another point:

As far as I understand the implementation, the delay is calculated as follows:

delayTime = cycletime - timeRequiredForReadAndWriteTasks - bufferTime

This means we calculate the time for one waitDelayTask (in my case the delay time was 7809ms with an 8000ms cycle time). So what is the reason for having two waitDelayTasks, and what are the reasons for the READ_BEFORE_WRITE state?
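Plugging the logged numbers into that formula gives a rough plausibility check (the buffer value here is an assumption chosen to match the logged Delay [7809]; the real calculation in the bridge is more involved):

```java
// Rough plausibility check of the delay formula against the logged values.
// The buffer value is an assumption, not taken from the OpenEMS code.
public class DelayCheck {

    /** delayTime = cycleTime - timeRequiredForReadAndWriteTasks - bufferTime */
    static long delayTime(long cycleTimeMs, long taskTimeMs, long bufferMs) {
        return Math.max(0, cycleTimeMs - taskTimeMs - bufferMs);
    }

    public static void main(String[] args) {
        // Logged task times were roughly 5ms + 11ms + 11ms = 27ms
        System.out.println(delayTime(8000, 27, 164)); // prints 7809
        // If tasks take longer than the cycle, the delay collapses to 0
        System.out.println(delayTime(8000, 9000, 100)); // prints 0
    }
}
```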

From my understanding, we want:

INITIAL_WAIT (or WAIT_FOR_WRITE) → WRITE → WAIT_BEFORE_READ (waitDelayTask) → READ_AFTER_WRITE → FINISHED

The INITIAL_WAIT (or WAIT_FOR_WRITE) will wait until the execute write event, using the waitMutexTask.

So do we really need a second waitDelayTask at INITIAL_WAIT?

What is the reason for the READ_BEFORE_WRITE state? We want to read as late as possible in the actual cycle (i.e. as late as possible before the “before process image” event and after WRITE).

With my suggested fix, INITIAL_WAIT will do the same as WAIT_FOR_WRITE: it waits for the execute write event (omitting the READ_BEFORE_WRITE state).

Your fix will basically do the same, but in a more complex way. Within an 8000ms cycle time, it will wait for e.g. 7809ms or until the execute write event. As the execute write event will always come earlier, it will also omit the READ_BEFORE_WRITE state.

So the simpler (or cleaner) approach would be to remove the READ_BEFORE_WRITE and WAIT_FOR_WRITE states and just do in INITIAL_WAIT what we do in WAIT_FOR_WRITE (waiting for the EXECUTE_WRITE event using the waitMutexTask).
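The mutex-based waiting described here could be modelled like this (a standalone sketch with hypothetical names; the real implementation uses a waitMutexTask inside the CycleTasksManager state machine):

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch: INITIAL_WAIT blocks until the EXECUTE_WRITE event is
// signalled; not the actual CycleTasksManager code.
public class WaitForWriteSketch {

    private final Semaphore executeWriteSignal = new Semaphore(0);

    /** Called by the worker thread in INITIAL_WAIT; blocks until EXECUTE_WRITE. */
    public void awaitExecuteWrite() throws InterruptedException {
        this.executeWriteSignal.acquire();
    }

    /** Called from the cycle thread on the EXECUTE_WRITE event. */
    public void onExecuteWrite() {
        this.executeWriteSignal.release();
    }

    public static void main(String[] args) throws InterruptedException {
        var sketch = new WaitForWriteSketch();
        var worker = new Thread(() -> {
            try {
                sketch.awaitExecuteWrite();
                System.out.println("WRITE"); // write task runs right after the event
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
        Thread.sleep(50);        // cycle thread does other work...
        sketch.onExecuteWrite(); // ...then fires EXECUTE_WRITE
        worker.join();
    }
}
```

With this shape, the write task is dispatched as soon as the event fires, instead of after a timed delay expires.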

Are my considerations understandable?

Hi,

the READ_BEFORE_WRITE state is actually handling the LOW priority tasks.

Looking at TasksSupplierImpl.getCycleTasks() (lines 91-138), the system has two types of read tasks:

  • HIGH priority: all get executed together in READ_AFTER_WRITE
  • LOW priority: only ONE gets executed per cycle in READ_BEFORE_WRITE

Check out the specific code in TasksSupplierImpl.java:

  • Line 95: var t = this.getOneLowPriorityReadTask(); - pulls exactly one LOW priority task

  • Line 105: .filter(t -> t instanceof WriteTask || t.getPriority() == Priority.HIGH) - gets ALL HIGH priority tasks

The documentation in readme.adoc (line 28) confirms this: “LOW: only one task of all defined LOW priority tasks of all components registered on the same bridge is executed per Cycle”

Without READ_BEFORE_WRITE, these LOW priority tasks would never run. You can see in CycleTasksManager.java (lines 149-158) how READ_BEFORE_WRITE specifically polls from the reads queue before transitioning to WAIT_FOR_WRITE.

The two delays make sense in this context:

  • First delay: places the only LOW priority read
  • Second delay: puts all HIGH priority reads just before when they’re needed

You’re right that with the interrupt fix, the first delay often gets cut short - but that’s actually fine. OpenEMS should still get to execute that one LOW priority task before moving to writes.

If we remove READ_BEFORE_WRITE completely, we need another mechanism to ensure LOW priority tasks get their turn. The getOneLowPriorityReadTask() method (line 145 in TasksSupplierImpl.java) has complex logic to cycle through the different components’ LOW priority tasks fairly.

It’s more complex than it seems at first glance :smiley: I had to look twice or 3 times to understand :smiley:

@stefan.feilmeier am I right with this Explanation?

The getCycleTasks() method is called at the beginning of each cycle and it fills the task queue for this cycle with one low priority task and all high priority tasks.

If you look at my post from April 19th, you will see the result as the first line in the debug log:

[modbus1] Getting [2] read and [3] write tasks for this Cycle

You can also see in CycleTasksManager.java (lines 151 and 181) that in both states (READ_BEFORE_WRITE and READ_AFTER_WRITE) the next task is read from the same queue (a LinkedList):

var task = this.cycleTasks.reads().poll();

Also, if you look at my post from July 23rd, you will see in the log output that, after applying the fix, the low priority task is still executed:

[modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC3ReadHoldingRegisters [ess0;unitid=1;priority=LOW;ref=57718/0xe176;length=1

Interesting - I looked at it wrong then :smiley:
I’m curious to hear Stefan’s perspective on whether there’s a historical reason or edge case we’re missing that makes READ_BEFORE_WRITE necessary.

But we can summarize (just to “fix” the initial issue) that my solution works and the JUnit tests also succeed (see GitHub), right?

I don’t remember clearly. I just wanted to reproduce a logical flow of LOW, HIGH and WRITE tasks. If READ_BEFORE_WRITE can be replaced with an easier/better logic, I am in :+1:

1 Like

@stefan.feilmeier @MrT I created a version with the suggested improvements - running locally on my system, with tests succeeding.

In my opinion we should test this very well in real-world operation to see if it works as expected :slight_smile:

If I think about it for a while, there might actually be an advantage to having a separate state for executing low-priority tasks. I have a low-priority task that has a highly fluctuating runtime. It polls a SolarEdge slave inverter, i.e., an inverter connected behind another inverter. If we had a separate state for the low-priority tasks, we could measure the execution time and subtract it in the WaitDelayTask. Then the fluctuating execution time for low-priority tasks wouldn’t matter.

In this case, we would also have to ensure that a low-priority task exists at all:
// Run the low priority task, if one exists
if (!this.cycleTasks.reads().isEmpty() && this.cycleTasks.reads().getFirst().getPriority() == Priority.LOW) {
    var task = this.cycleTasks.reads().poll();
    yield task;
}
// Otherwise -> next state + recursive call
this.state = StateMachine.NEXT_STATE;
yield this.getNextTask();

Additionally, the execution time would have to be measured separately and subtracted from the planned delay of the waitdelay task.

The question is whether it makes sense to add this complexity.
Currently, the affected low-priority task is executed in the corresponding cycle, but not all other tasks, because a CycleTimeIsTooShort occurs due to the fluctuating runtime. However, in the next cycle, regular reading is performed again, since another low-priority task is then in the queue.
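The proposal of measuring the low-priority execution time and subtracting it from the planned delay could be sketched like this (standalone, hypothetical names; not the actual WaitDelayTask code):

```java
// Hypothetical sketch: shorten the WaitDelay by the measured duration of the
// LOW priority task, so a fluctuating runtime does not shift the reads.
public class AdaptiveWaitDelay {

    /** Planned delay minus the time the LOW priority task actually took this cycle. */
    static long remainingDelay(long plannedDelayMs, long lowPriorityDurationMs) {
        return Math.max(0, plannedDelayMs - lowPriorityDurationMs);
    }

    public static void main(String[] args) {
        // A slave-inverter poll that fluctuates between fast and slow responses:
        System.out.println(remainingDelay(7800, 15));   // fast poll -> prints 7785
        System.out.println(remainingDelay(7800, 2500)); // slow poll -> prints 5300
        System.out.println(remainingDelay(7800, 9000)); // pathological -> prints 0
    }
}
```

In the pathological case (LOW task longer than the whole planned delay), the remaining delay collapses to 0, which matches the CycleTimeIsTooShort situation described above.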

Did you already look into my PR and test it?

The PR looks good to me.

I just tested the modification and the behavior is as expected (8000ms cycle time):

2025-08-05T08:51:28,534 [_cycle ] INFO [ker.internal.TasksSupplierImpl] [modbus1] Getting [3] read and [4] write tasks for this Cycle
2025-08-05T08:51:28,534 [_cycle ] INFO [ker.internal.CycleTasksManager] [modbus1] State: FINISHED → INITIAL_WAIT (in onBeforeProcessImage) Delay [7796] PreviousDelay [7795ms] + Wait [113ms] = PossibleDelay [7908ms]
2025-08-05T08:51:28,535 [_cycle ] INFO [dge.solaredge.ess.SolarEdgeEss] [ess0] before process image
2025-08-05T08:51:28,559 [_cycle ] INFO [ker.internal.CycleTasksManager] [modbus1] State: INITIAL_WAIT → WRITE (onExecuteWrite)
2025-08-05T08:51:28,559 [_cycle ] INFO [dge.solaredge.ess.SolarEdgeEss] [ess0] execute write
2025-08-05T08:51:28,566 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC16WriteRegisters [ess0;unitid=1;ref=57355/0xe00b;length=7] Elapsed [8ms]
2025-08-05T08:51:28,567 [modbus1 ] INFO [ker.internal.CycleTasksManager] [modbus1] State: WRITE → WAIT_BEFORE_READ (getNextTask)
2025-08-05T08:51:36,364 [modbus1 ] INFO [ker.internal.CycleTasksManager] [modbus1] State: WAIT_BEFORE_READ → READ (onWaitDelayTaskFinished)
2025-08-05T08:51:36,374 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC3ReadHoldingRegisters [ess0;unitid=1;priority=LOW;ref=57344/0xe000;length=4] Elapsed [9ms]
2025-08-05T08:51:36,394 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC3ReadHoldingRegisters [ess0;unitid=1;priority=HIGH;ref=57668/0xe144;length=50] Elapsed [18ms]
2025-08-05T08:51:36,410 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC3ReadHoldingRegisters [ess0;unitid=1;priority=HIGH;ref=40071/0x9c87;length=38] Elapsed [14ms]
2025-08-05T08:51:36,410 [modbus1 ] INFO [ker.internal.CycleTasksManager] [modbus1] State: READ → FINISHED (getNextTask)
2025-08-05T08:51:36,546 [_cycle ] INFO [ker.internal.TasksSupplierImpl] [modbus1] Getting [3] read and [4] write tasks for this Cycle
2025-08-05T08:51:36,547 [_cycle ] INFO [ker.internal.CycleTasksManager] [modbus1] State: FINISHED → INITIAL_WAIT (in onBeforeProcessImage) Delay [7798] PreviousDelay [7796ms] + Wait [135ms] = PossibleDelay [7931ms]
2025-08-05T08:51:36,547 [_cycle ] INFO [dge.solaredge.ess.SolarEdgeEss] [ess0] before process image

Comment: The FC16WriteRegisters should be as early as possible after the “execute write” event (which is the case now). The FC3ReadHoldingRegisters is as late as possible before “before process image” event.

Without trace (=better timings):

2025-08-05T08:59:13,085 [_cycle ] INFO [dge.solaredge.ess.SolarEdgeEss] [ess0] execute write
2025-08-05T08:59:13,091 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC16WriteRegisters [ess0;unitid=1;ref=57355/0xe00b;length=7] Elapsed [5ms]
2025-08-05T08:59:20,954 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC3ReadHoldingRegisters [ess0;unitid=1;priority=LOW;ref=57344/0xe000;length=4] Elapsed [16ms]
2025-08-05T08:59:20,964 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC3ReadHoldingRegisters [ess0;unitid=1;priority=HIGH;ref=57668/0xe144;length=50] Elapsed [10ms]
2025-08-05T08:59:20,980 [modbus1 ] INFO [e.modbus.api.task.AbstractTask] Execute FC3ReadHoldingRegisters [ess0;unitid=1;priority=HIGH;ref=40071/0x9c87;length=38] Elapsed [14ms]
2025-08-05T08:59:21,066 [_cycle ] INFO [dge.solaredge.ess.SolarEdgeEss] [ess0] before process image

Comment: The FC16WriteRegisters should be as early as possible after the “execute write” event (which is the case now). The FC3ReadHoldingRegisters is as late as possible before “before process image” event.

So basically we have the intended behaviour now and could go on to test :slight_smile:

1 Like