Support instance and queue reuse in case of HA recovery

Registered by Timur Nurlygayanov

Murano apps should support High Availability (HA) mode.

In case of HA recovery (when the deployment task is redelivered to another engine instance after the first instance crashes), the engine repeats the deployment workflow from the very beginning.
If instance names are randomly generated, Heat will drop the old instances and create new ones in a clean state. However, if the instance name is defined by the user (as a template in the UI), replaying the workflow regenerates the same instance names and therefore the same stack, so the StackUpdate command will not drop instances that were already created.
Moreover, even if an instance is dropped and created anew, its RabbitMQ command queues are not flushed, so the new instance may first receive unprocessed commands that were sent to the dropped instance, and only after that its own commands.
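One way to picture a guard against such stale queue messages (an illustrative sketch only, not part of this blueprint; the `generation` field and class name are invented): the engine could stamp every command with the identifier of the instance generation it targets, so a recreated instance discards messages left over for its dropped predecessor.

```python
class GenerationFilter:
    """Drop commands addressed to an earlier instance generation.

    Hypothetical guard: each command carries the id of the instance
    generation it was sent to; an instance recreated during HA recovery
    ignores messages stamped for the instance that was dropped.
    """

    def __init__(self, current_generation):
        self.current_generation = current_generation

    def accept(self, message):
        # Only commands stamped with this instance's generation are executed.
        return message.get("generation") == self.current_generation
```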

Subtask details:
Make execution plans idempotent: subsequent executions of the same command should not have any side effects.
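To illustrate the idea (a minimal sketch; the function name and the config-file scenario are invented for this example, not taken from the Murano agent), an idempotent command checks the desired end state before acting, so replaying it during HA recovery is a harmless no-op:

```python
def ensure_config_line(path, line):
    """Append `line` to the file at `path` only if it is not already there.

    Running this once or many times leaves the file in the same state,
    which is exactly the idempotency property the blueprint asks for.
    """
    try:
        with open(path) as f:
            if line in (existing.rstrip("\n") for existing in f):
                return False  # already applied; a replayed run changes nothing
    except FileNotFoundError:
        pass  # file does not exist yet; the first run will create it
    with open(path, "a") as f:
        f.write(line + "\n")
    return True
```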

Prevent duplicate tasks from being executed on the agent side: execution plans should be idempotent (i.e. executing them once, twice, or many times should have the same effect).
However, some commands may be impossible to make idempotent by design (incrementing a counter, for example).
To prevent unwanted side effects when such commands are redelivered during HA recovery, the agent may need the ability to skip a command if it is completely identical to one it has already executed.
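A sketch of what such agent-side skipping could look like (the class name and the hashing scheme are assumptions made for illustration, not the Murano agent's actual mechanism): the agent remembers a digest of every command body it has executed and skips byte-identical redeliveries.

```python
import hashlib
import json


class DuplicateCommandFilter:
    """Skip a command if an identical one was already executed.

    Hypothetical agent-side filter: command identity is a hash of the
    canonicalized command body, so only completely identical
    redeliveries (e.g. after HA recovery) are skipped.
    """

    def __init__(self):
        self._seen = set()

    def should_execute(self, command):
        # Canonicalize the body so key ordering does not affect identity.
        digest = hashlib.sha256(
            json.dumps(command, sort_keys=True).encode()).hexdigest()
        if digest in self._seen:
            return False  # exact duplicate: a redelivered command, skip it
        self._seen.add(digest)
        return True
```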

Blueprint information

Not started
Alexander Tivelkov
Timur Nurlygayanov
Needs approval
Series goal:
Accepted for future
Not started
Milestone target:
ongoing

Related branches




Work Items

Work items:
Make execution plans idempotent: TODO
Prevent duplicate tasks from execution on the agent-side: TODO

This blueprint contains Public information 