supports aliasing a local interface multiple times with the same local
address for different advertised addresses
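For illustration (made-up addresses, and assuming the usual
name/local-address!advertised-address interface syntax), a config could
now contain two aliases of the same local address:

    interface = ext-a/192.0.2.10!198.51.100.1
    interface = ext-b/192.0.2.10!203.0.113.1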
closes #216
Change-Id: I6f98d1a17290b0bb1831e48ad89fc61d8b2d7914
Thoughts on that topic so far:
There's one more thing to keep in mind: what do we do if the call
changes (streams) and the backup node is notified? Currently we only
know (by subscribing to the 'notifier-' prefix) that it has in fact
changed, but we don't know what has changed in detail. Subscribing to
everything would lead to the problem that we have to take care of
synchronising the new streams with the old ones. Without having looked
at the code, that might be a lot of effort and ... I guess that's why
Richard likes to ... have clean states of the calls. Synchronising is
always a mess; it's easier to delete and set up anew.
I thought about the following solution to make things more abstract and
easier to understand (a rough sketch in code follows after the two
points):
1. Whenever a call on a backup node (a foreign call) has seen a packet
(timers are also started then), we do not process any redis
notification for this call (because the RTP IP has already been
switched). More precisely and abstractly, that means: if a node has
taken over responsibility for a call (by having seen a packet), it
assumes that notifications from the former node are to be dropped. The
call will be deleted when either the timer fires or (for other
companies) the control channel address has also been switched and the
call is deleted via that channel.
2. If a call has not seen a packet (an inactive call on the backup
node, which is not responsible for that call yet but could become so),
we accept all commands for that call on the backup node in the same way
as on the original node. That also includes deletions in between and so
on. I mean, if the original node does it, why not accept doing the same
on the backup node?
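Here is that sketch: a minimal, self-contained illustration of the two
rules above. All names (call_t, seen_packet, on_redis_notification) are
hypothetical and not the actual rtpengine internals.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical, simplified model of a foreign call on a backup node. */
    typedef struct {
        const char *id;
        bool seen_packet;   /* set once this node has forwarded RTP for the call */
    } call_t;

    /* Decide what to do with a redis notification for this call. */
    static void on_redis_notification(call_t *call, const char *event) {
        if (call->seen_packet) {
            /* Rule 1: this node has taken over responsibility, so
             * notifications from the former node are dropped. The call
             * goes away via its own timer or via the control channel. */
            printf("call %s: dropping '%s' (this node has taken over)\n",
                   call->id, event);
            return;
        }
        /* Rule 2: still only a passive backup copy, so mirror whatever
         * the original node did, including updates and deletions. */
        printf("call %s: applying '%s'\n", call->id, event);
    }

    int main(void) {
        call_t taken_over = { "call-a", true };
        call_t passive    = { "call-b", false };
        on_redis_notification(&taken_over, "set");  /* dropped */
        on_redis_notification(&passive, "del");     /* applied */
        return 0;
    }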
iptables might be managed via different tools by an administrator. In
such a setup, insertion and deletion of rules in the INPUT chain, as
well as creation of new chains by an application, is not wanted.
By setting the corresponding configuration parameter in the config
file, the init script will not touch the INPUT chain, nor will it
create the rtpengine chain. It will still attempt to create a rule in
the rtpengine chain, but will refuse to start if that chain does not
exist.
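With INPUT management disabled, the administrator (or their tooling)
has to create the chain and the jump rule once themselves, for example:

    iptables -N rtpengine
    iptables -I INPUT -j rtpengine

The jump from INPUT is what the init script would otherwise have
inserted; without it, packets never reach the rtpengine chain.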
Currently, every rtpengine subscribes to redis keyspace notifications,
so it receives a notification whenever a call is inserted. If the call
is not already handled by that rtpengine, the call is restored.
The reason for this is to have in-place redundancy. Imagine you have
multiple rtpengines running; each one will hold all calls of the
others. When one rtpengine fails somehow, the infrastructure people use
BGP to 'move' the IP address from one rtpengine to another. That one
can handle the moved calls instantly, since they have already been
restored via the redis-notification feature.
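As a rough illustration of the underlying redis mechanism (the event
flags and database number here are placeholders, not necessarily what
rtpengine actually configures), keyspace notifications have to be
enabled on the redis server and can then be pattern-subscribed to:

    redis-cli CONFIG SET notify-keyspace-events "Kg$"
    redis-cli PSUBSCRIBE "__keyspace@0__:*"

Every write to a call key then produces an event on the matching
__keyspace@...__ channel, which is what lets a backup node restore the
call before it ever sees traffic for it.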
The next step is to internally identify those calls, in order to
prevent timers from deleting calls for which no RTP is flowing. The
second step will be something we call 'partitioning': the subscription
to redis notifications will then only cover the keyspace that a
dedicated rtpengine writes to. This makes it possible to build
redundancy groups (partitions) of rtpengines.
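One possible reading of that, assuming a 'keyspace' maps to a redis
database number (illustrative only): if the rtpengines of group A write
their calls to database 1 and those of group B to database 2, a backup
node belonging to group A would subscribe only to

    redis-cli PSUBSCRIBE "__keyspace@1__:*"

and would therefore never restore calls from outside its own partition.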