All timers have been moved to their dedicated timer threads, making this
mechanism obsolete.
The only casualty is timeout handling for TCP control streams. Since
other TCP streams don't use timeout handling either, and the TCP
control socket is rarely used, we can live without a dedicated timeout
for these streams for now.
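
As context, a minimal sketch of the dedicated-timer-thread pattern
this refers to; all types and names below are illustrative
assumptions, not rtpengine's actual code:

    #include <pthread.h>
    #include <stdbool.h>
    #include <time.h>

    /* One thread owns one recurring timer and fires its callback
     * itself, so no shared timeout-polling mechanism is needed. */
    struct timer_thread {
        bool stop;                 /* set to true to shut the thread down */
        struct timespec interval;  /* how often to fire */
        void (*fire)(void *arg);   /* timer callback */
        void *arg;
    };

    static void *timer_thread_run(void *p) {
        struct timer_thread *tt = p;
        while (!tt->stop) {
            nanosleep(&tt->interval, NULL);  /* wait one interval */
            tt->fire(tt->arg);               /* do the timer's work */
        }
        return NULL;
    }

    static pthread_t timer_thread_start(struct timer_thread *tt) {
        pthread_t th;
        pthread_create(&th, NULL, timer_thread_run, tt);
        return th;
    }
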
Change-Id: I83d9b9a844f4f494ad37b44f5d1312f272beff3f

To safeguard against leftover log info pieces, add additional resets
within loops that might run repeatedly.
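
To illustrate the reset pattern (log_info_reset(), log_info_stream()
and process_stream() are placeholder names, not the real helpers):

    #include <glib.h>

    /* Hypothetical per-thread logging-context helpers, provided
     * elsewhere. */
    void log_info_reset(void);
    void log_info_stream(void *stream);
    void process_stream(void *stream);

    static void process_all(GList *streams) {
        for (GList *l = streams; l; l = l->next) {
            log_info_reset();         /* clear context left over from
                                         the previous iteration */
            log_info_stream(l->data); /* attach this stream's context */
            process_stream(l->data);
        }
        log_info_reset();             /* don't leak the last iteration's
                                         context past the loop */
    }
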
Relevant to #1511
Change-Id: I875f1683b7dc8cee359469e8062c08c3c3e48a9d

When ports are closed early (while the call is still running), we must
first inform a slave rtpengine that these ports are now closed before
actually releasing them ourselves. Not doing so leads to a race
condition where the master instance re-uses a port that was just
closed before the slave instance knows that it has been freed.
We implement this using a thread-local list to keep track of ports that
were released while processing a control message, and process this list
to actually close the ports only after Redis has been updated.
Additional calls to the port-closing function are placed in strategic
locations to make sure this is triggered in every code path (see the
sketch below).
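
A hedged sketch of the deferred-release scheme described above;
socket_t, close_socket() and redis_update() are stand-ins for the
real rtpengine symbols, not its actual API:

    #include <glib.h>

    typedef struct socket_s socket_t;  /* placeholder port/socket type */
    void close_socket(socket_t *s);    /* stand-in for the real release */
    void redis_update(void);           /* stand-in for the Redis write */

    /* Per-thread queue of ports released while handling one control
     * message. Closing is deferred until after the Redis update. */
    static __thread GQueue deferred_ports = G_QUEUE_INIT;

    static void port_defer_close(socket_t *s) {
        g_queue_push_tail(&deferred_ports, s);
    }

    /* Once the Redis update for this control message has been written,
     * the slave knows the ports are closed; only now actually free
     * them, so the master cannot re-use a port the slave still
     * considers in use. */
    static void ports_close_deferred(void) {
        socket_t *s;
        while ((s = g_queue_pop_head(&deferred_ports)))
            close_socket(s);
    }

    static void handle_control_message(void) {
        /* ... call processing may invoke port_defer_close() ... */
        redis_update();          /* slave sees the closed ports first */
        ports_close_deferred();  /* now it's safe to release them */
    }
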
Closes #1495
Change-Id: I803f4594f30ca315da0b84c6e76893f54ca3a7c9