Bowser version 0.6 is now available for download on the App Store. This is one of the biggest updates to Bowser ever:
- Support for 32-bit devices is finally back! Sorry for all those crashes.
- Video delay has been reduced significantly, for both hardware-accelerated H.264 encoding and software VP8 encoding.
- ICE is now much more robust.
- Many WebRTC standards compliance fixes, including (finally) proper "createOffer" support.
- Bowser now uses the OpenWebRTC-SDK CocoaPod which recently got updated to better support hybrid WebRTC app development.
- Overall performance improvements.
We have also moved to require iOS 9.0, as there were issues with WKWebView on iOS 8.x.
If you are reading this post you probably know that Bowser is developed in the open. If you find Bowser useful and want to make it better, feel free to help out at https://github.com/EricssonResearch/bowser.
Many will agree that one of the key factors keeping the Internet as functional as it has been is the congestion control algorithms built into the TCP protocol. Others might ask - what is congestion control? And why should I care?
Let's start by defining congestion in the context of a packet-switched network: network congestion occurs when the amount of data being sent over a network path exceeds what the path is able to carry. This results in queuing delays, packet loss, the inability to establish new connections and, worst of all, congestion collapse.
Enter 'congestion control': mechanisms that adapt to a congested network by detecting (implicitly or explicitly) persistent congestion and adjusting the amount of data being transmitted accordingly. The minimum goals of congestion control are to avoid congestion collapse, achieve some degree of fairness, and provide robust and predictable application behavior while ensuring low latency.
For web browsing, file downloads and the like, congestion control is built into the Internet transport protocol that most HTTP and other web traffic travels over: TCP (Transmission Control Protocol). However, WebRTC media places high demands on networks, since it needs relatively high and consistent bandwidth while maintaining low latency. If the network gets congested for whatever reason, the transmission rate must be reduced to maintain the same latency and avoid packet loss, if that is even possible.
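To make the TCP-style approach concrete, here is a toy additive-increase/multiplicative-decrease (AIMD) step, the classic reaction pattern behind TCP congestion control. This is an illustration of the general principle only, not code from TCP or OpenWebRTC:

```javascript
// Toy AIMD (additive increase / multiplicative decrease) step:
// probe gently for bandwidth, back off hard on congestion.
function aimdStep(cwnd, lossDetected) {
  if (lossDetected) {
    // Multiplicative decrease: halve the congestion window on loss,
    // but never go below one segment.
    return Math.max(1, cwnd / 2);
  }
  // Additive increase: grow by one segment per round trip.
  return cwnd + 1;
}

let cwnd = 10;                // congestion window, in segments
cwnd = aimdStep(cwnd, false); // 11: no congestion, probe upward
cwnd = aimdStep(cwnd, true);  // 5.5: loss detected, back off
```

The sawtooth this produces is fine for bulk transfers, but the abrupt rate halving is exactly what makes plain TCP a poor fit for low-latency real-time media.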
At Ericsson Research, we have designed and developed a new congestion control algorithm called 'SCReAM', Self-clocked Rate Adaptation for Multimedia, for WebRTC media traffic. SCReAM is well-suited for both wired and wireless network access and a perfect match for OpenWebRTC. We are excited to announce that SCReAM has been integrated into the OpenWebRTC project.
SCReAM is a self-clocked, sender-based congestion control algorithm that has been proposed and adopted in the IETF RMCAT Working Group (WG), a WG focused on standardizing congestion control algorithms for interactive real-time communication. There we have produced enough results to show that SCReAM is safe to deploy on the Internet. You can read more about SCReAM here.
SCReAM is designed on top of principles already established in the IETF community (such as the packet conservation principle, LEDBAT one-way delay estimation and congestion window validation), with additional considerations to make those principles suitable for real-time interactive media. SCReAM also stands out from the rest as the only proposal in the WG that is not a purely rate- and delay-based congestion control algorithm.
SCReAM uses self-clocking and congestion window techniques to ensure prompt reaction to congestion (within 1 RTT) and higher throughput. It also provides a simple way to handle multiple media streams and has a built-in circuit breaker.
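The self-clocking idea can be sketched as follows: new RTP packets leave the sender only when feedback shows that enough bytes have left the network, i.e. when the bytes in flight fit under the congestion window. This is a heavily simplified toy with made-up names, not the actual SCReAM implementation:

```javascript
// Toy self-clocked sender (illustrative only, not SCReAM itself):
// transmission of queued RTP packets is "clocked" by feedback that
// acknowledges earlier packets, bounded by a congestion window.
class SelfClockedSender {
  constructor(cwndBytes) {
    this.cwnd = cwndBytes; // congestion window, in bytes
    this.inFlight = 0;     // bytes sent but not yet acknowledged
    this.rtpQueue = [];    // packet sizes waiting to be sent
  }

  enqueue(packetBytes) {
    // Media encoder hands us a packet; send it now if the window allows.
    this.rtpQueue.push(packetBytes);
    this.trySend();
  }

  onFeedback(ackedBytes) {
    // Feedback clocks out new transmissions: acknowledged bytes free
    // up room in the congestion window within one RTT.
    this.inFlight = Math.max(0, this.inFlight - ackedBytes);
    this.trySend();
  }

  trySend() {
    while (this.rtpQueue.length > 0 &&
           this.inFlight + this.rtpQueue[0] <= this.cwnd) {
      this.inFlight += this.rtpQueue.shift();
    }
  }
}
```

Note how packets that do not fit under the window simply wait in the sender-side RTP queue rather than being pushed into the network, which is the behavior visible in the queuing delay plot discussed below.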
In RMCAT, SCReAM is the only congestion control algorithm tested against both the basic evaluation test cases and the wireless evaluation test cases (specifically the cellular test cases; we have not yet made the Wi-Fi results public). For some results from earlier versions of the algorithm, check out the IETF presentations (IETF90, IETF91, IETF92).
We are constantly evaluating the algorithm implementation in OpenWebRTC. One area where we have experienced some difficulties is in video encoder implementations - SCReAM can react much more quickly than the encoders can! To some extent this is a limitation of rate control mechanisms in video encoders not being a precise art, but we have also sought support from the developers of the codec implementations used in OpenWebRTC so that we can tweak them to make the whole setup work as well as they can. Stay tuned for continued tuning and evaluation results!
As a teaser, below are the evaluation results from running the OpenWebRTC implementation of SCReAM against test case 5.1 from the RMCAT basic evaluation test cases. In this test case the path bandwidth varies over time, which makes it a very good test of how an algorithm grabs the opportunity when bandwidth becomes available and how quickly it reacts when the available bandwidth decreases.
The plots show the video bitrate and the delay when SCReAM congestion control operates with the VP8 video encoder. The queuing delay graph shows the delay caused by queuing at the sender in the RTP queue as well as the buffering delay in the network. The throughput drops drastically at T=68s, causing SCReAM to reduce the sending rate, with the effect that video packets are held in the RTP queue. This avoids a large network queue buildup due to congestion.
Give it a try now using openwebrtc-daemon or Bowser (native application support is coming soon) and please SCReAM (sorry... it had to be done) at us if you have any comments or suggestions.
PS - we are also working on the standardization of the feedback format and SDP signalling so that we can have congestion control that is interoperable with other WebRTC-compatible endpoints as soon as possible!
-- ANM Zaheduzzaman Sarker, Daniel Lindström, Ingemar Johansson and Robert Swain
Ericsson Research, Sweden.
We would also like to thank Arun Raghavan from Centricular for his effort on integrating SCReAM into OpenWebRTC.
One area where OpenWebRTC has not been fully spec compliant is the completeness of the SDP offers and answers generated by createOffer and createAnswer, respectively. This had no consequences if the application followed the recommended model of first applying the offer/answer via setLocal and then fetching the local description from the PeerConnection, because by that time it is complete.
However, this was problematic for applications that wanted to shave a couple of milliseconds off call setup time and therefore sent the offer/answer to the remote end before applying it on the PeerConnection. Since the generated offer/answer lacked certain data, such as SSRCs for the MediaStreamTracks, ICE credentials and the DTLS fingerprint, the session could not be set up.
With the recent merge of pull request #486 the last piece was added, and anyone wanting to shave some time off setup or session updates can happily send the data generated by createOffer or createAnswer to the other side without having to wait for the setLocal Promise to resolve.
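In application terms, this means the offer can be sent in parallel with applying it locally. A minimal sketch using the standard W3C PeerConnection API, where `signalingChannel` is a placeholder for whatever signaling transport your app uses:

```javascript
// Send the offer to the remote peer as soon as createOffer resolves,
// in parallel with applying it locally. This works now that the offer
// already contains SSRCs, ICE credentials and the DTLS fingerprint.
// `signalingChannel` is a placeholder for your own signaling transport.
async function startCall(pc, signalingChannel) {
  const offer = await pc.createOffer();

  // No need to wait for setLocalDescription before signaling...
  signalingChannel.send(JSON.stringify(offer));

  // ...so the network round trip and local setup overlap.
  await pc.setLocalDescription(offer);
}
```

The saving is small per call, but it comes for free once the generated offer is complete.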
For the offers and answers, we have also recently updated the transport protocol field of the media description from "RTP/SAVPF" to "UDP/TLS/RTP/SAVPF" to align with specification updates. For a grace period, OpenWebRTC will respond with the old protocol string if that is what the other endpoint uses.
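For illustration, the kind of rewrite involved looks like this. This is a hypothetical helper sketching the m-line change only, not OpenWebRTC's actual code:

```javascript
// Hypothetical helper (illustrative sketch, not OpenWebRTC code):
// rewrite the m-line transport protocol field from the old
// "RTP/SAVPF" to the current "UDP/TLS/RTP/SAVPF".
function modernizeProto(sdp) {
  return sdp.replace(
    /^(m=(?:audio|video) \d+ )RTP\/SAVPF/gm,
    '$1UDP/TLS/RTP/SAVPF'
  );
}
```

A line that already uses the new protocol string is left untouched, which is the same kind of tolerance the grace-period behavior provides in the other direction.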
Next we will look into adding "mid" to the offers/answers as well as to the RTP and RTCP headers.
The OpenWebRTC iOS SDK has been updated to version 0.3. Some breaking changes have been introduced in the native callbacks of OpenWebRTCNativeHandlerDelegate:
- (void)gotLocalSources:(NSArray *)sources;
- (void)gotRemoteSource:(NSDictionary *)source;
The updated API provides richer information about the sources, whereas the old API only provided their names. The reason for this change is that we wanted to add new APIs for selecting which camera (front or back) is used. A number of additional APIs have been introduced in OpenWebRTCNativeHandler as well:
- (void)setVideoCaptureSourceByName:(NSString *)name;
- (void)videoView:(OpenWebRTCVideoView *)videoView setVideoRotation:(NSInteger)degrees;
- (void)videoView:(OpenWebRTCVideoView *)videoView setMirrored:(BOOL)isMirrored;
- (NSInteger)rotationForVideoView:(OpenWebRTCVideoView *)videoView;
The SDK is installed through CocoaPods:
pod 'OpenWebRTC-SDK', :git => 'https://github.com/EricssonResearch/openwebrtc-ios-sdk.git'
As always we welcome feedback.