A First Look at Quality of Mobile Live Streaming Experience: the Case of Periscope

Matti Siekkinen, School of Science, Aalto University, Finland (matti.siekkinen@aalto.fi)
Enrico Masala, Control & Comp. Eng. Dep., Politecnico di Torino, Italy ([email protected])
Teemu Kämäräinen, School of Science, Aalto University, Finland (teemu.kamarainen@aalto.fi)

ABSTRACT

Live multimedia streaming from mobile devices is rapidly gaining popularity, but little is known about the QoE such services provide. In this paper, we examine the Periscope service. We first crawl the service in order to understand its usage patterns. Then, we study the protocols used, the typical quality of experience indicators, such as playback smoothness and latency, the video quality, and the energy consumption of the Android application.

Keywords

Mobile live streaming; QoE; RTMP; HLS; Periscope

1. INTRODUCTION

Periscope and Meerkat are services that enable users to broadcast live video to a large number of viewers using their mobile device. They both emerged in 2015 and have since gained popularity fast. Periscope, which was acquired by Twitter before the service was even launched, announced in March 2016, on its one-year anniversary, that over 110 years of live video was watched every day with the application [13]. Facebook has also recently launched a rival service called Facebook Live.

Very few details have been released about how these streaming systems work and what kind of quality of experience (QoE) they deliver. One particular challenge they face is to provide a low-latency stream to clients, because of the features that allow feedback from viewers to the broadcaster, for example in the form of a chat. Such interaction does not exist in "traditional" live video streaming systems, and it has implications for the system design (e.g., the protocols to use).

We have measured the Periscope service in two ways. We first created a crawler that queries the Periscope API for ongoing live streams and used the gathered data on about 220K distinct broadcasts to analyze the usage patterns. Second, we automated the process of viewing Periscope broadcasts with an Android smartphone and generated a few thousand viewing sessions while logging various kinds of data. Using this data we examined the resulting QoE. In addition, we analyzed the video quality by post-processing the video data extracted from the traffic captures. Finally, we studied the energy consumption induced by the application on a smartphone.

Our key findings are the following: 1) 2 Mbps appears to be the key boundary for access network bandwidth below which startup latency and video stalling clearly increase. 2) Periscope appears to use the HLS protocol when a live broadcast attracts many participants and RTMP otherwise. 3) HLS users experience a longer playback latency for the live streams but typically fewer stall events. 4) The video bitrate and quality are very similar for both protocols and may exhibit significant short-term variations that can be attributed to the extreme time variability of the captured content. 5) Like most video apps, Periscope is power hungry but, surprisingly, the power consumption grows dramatically when the chat feature is turned on while watching a broadcast. The causes are significantly increased traffic and elevated CPU and GPU load.

2. METHODS AND DATA COLLECTION

The Periscope app communicates with the servers using an API that is private in the sense that access to it is protected by SSL. To get access to it, we set up an SSL-capable man-in-the-middle proxy, mitmproxy [12], between the mobile device and the Periscope service as a transparent proxy. The proxy intercepts the HTTPS requests sent by the mobile device and pretends to be the server to the client and the client to the server. It enables us to examine and log the exchange of requests and responses between the Periscope client and servers. The Periscope iOS app uses so-called certificate pinning, in which the certificate known to be used by the server is hard-coded into the client. Therefore, we only use the Android app in this study.

We used both Android emulators (Genymotion [3]) and smartphones in the study. We generated two data sets. For the first one, we used an Android emulator and developed an inline script for mitmproxy that crawls through the service by continuously querying about the ongoing live broadcasts. The obtained data was used to analyze the usage patterns (Sec. 4). The second dataset was generated for QoE analysis (Sec. 5) by automating the broadcast viewing process on a smartphone. The app has a "Teleport" button which takes the user directly to a randomly selected live broadcast. Automation was achieved with a script that sends tap events through the Android debug bridge (adb) to push the Teleport button, wait for 60 s, push the close button, push the "home" button, and repeat all over again. The script also captures all the video and audio traffic using tcpdump. Meanwhile, we ran another inline script with mitmproxy that dumped, for each broadcast viewed, a description and playback statistics, such as delay and stall events, which the application reports to a server at the end of a viewing session. It is mainly useful for those streaming sessions that use the RTMP protocol, because after an HTTP Live Streaming (HLS) session the app reports only the number of stall events.

We also reconstruct the video data of each session and analyze it using a variety of scripts and tools. After finding and reconstructing the multimedia TCP stream using wireshark [19], single segments are isolated by saving the response of the HTTP GET request, which contains an MPEG-TS file [5] ready to be played. For RTMP, we exploit the wireshark dissector, which can extract the audio and video segments. The LibAV [10] tools have been used to inspect the multimedia content and decode the video in full for the analysis of Sec. 5.2.

In the automated viewing experiments, we used two different phones: a Samsung Galaxy S3 and an S4. The phones were located in Finland and connected to the Internet by means of reverse tethering through a USB connection to a Linux desktop machine providing them with over 100 Mbps of available bandwidth both up- and downstream. In some experiments, we imposed artificial bandwidth limits with the tc command on the Linux host. For latency measurement purposes (Section 5.1), NTP was enabled on the desktop machine and used the same server pool as the Periscope app.

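A minimal sketch of such a viewing-automation driver (not the actual script) is shown below; the tap coordinates and the capture interface name (usb0) are hypothetical and depend on the device, app version, and tethering setup, and running tcpdump requires sufficient privileges on the capture host.

  import subprocess
  import time

  def adb(*args):
      # Send a command to the phone through the Android debug bridge.
      subprocess.run(["adb", "shell"] + list(args), check=True)

  def view_one_broadcast(capture_file):
      # Capture the video and audio traffic of this session on the tethering host.
      cap = subprocess.Popen(["tcpdump", "-i", "usb0", "-w", capture_file])
      adb("input", "tap", "540", "1650")         # hypothetical Teleport button location
      time.sleep(60)                             # watch the broadcast for 60 s
      adb("input", "tap", "1000", "100")         # hypothetical close button location
      adb("input", "keyevent", "KEYCODE_HOME")   # return to the home screen
      cap.terminate()

  if __name__ == "__main__":
      while True:
          view_one_broadcast("session-%d.pcap" % int(time.time()))
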
3. PERISCOPE OVERVIEW

Periscope enables users to broadcast live video for other users to view. Both public and private broadcasting are available. Private streams are only viewable by chosen users. Viewers can use text chat and emoticons to give feedback to the broadcaster. The chat becomes full when a certain number of viewers have joined, after which newly joining users cannot send messages. Broadcasts can also be made available for replay.

A user can discover public broadcasts in three ways: 1) The app shows a list of about 80 ranked broadcasts in addition to a couple of featured ones. 2) The user can explore a map of the world in order to find a broadcast in a specific geographical region. The map shows only a fraction of the broadcasts available in a large region, and more broadcasts become visible as the user zooms in. 3) The user can click on the "Teleport" button to start watching a randomly selected broadcast.

Since the API is not public, we examined the HTTP requests and responses while using the app through mitmproxy in order to understand how the API works. The application communicates with the servers by sending POST requests containing JSON-encoded attributes to the following address: https://api.periscope.tv/api/v2/apiRequest. The apiRequest and its contents vary according to what the application wants to do. Requests relevant to this study are listed in Table 1.

Table 1: Relevant Periscope API commands.

  API request          | Request contents                                    | Response contents
  mapGeoBroadcastFeed  | Coordinates of a rectangle-shaped geographical area | List of broadcasts located inside the area
  getBroadcasts        | List of 13-character broadcast IDs                  | Descriptions of the broadcasts (incl. number of viewers)
  playbackMeta         | Playback statistics                                 | Nothing

Periscope uses two kinds of protocols for video stream delivery: the Real Time Messaging Protocol (RTMP) using port 80 and HTTP Live Streaming (HLS). RTMP enables low latency (Section 5), while HLS is employed to meet scalability demands. Facebook Live uses the same set of protocols [8]. Further investigation reveals that the RTMP streams are always delivered by servers running on Amazon EC2 instances. For example, the IP address that the application got when resolving vidman-eu-central-1.periscope.tv maps to ec2-54-67-9-120.us-west-1.compute.amazonaws.com when performing a reverse DNS lookup. In contrast, HLS video segments are delivered by the Fastly CDN. RTMP streams use only one connection, whereas HLS may sometimes use multiple connections to different servers in parallel to fetch the segments, possibly for load balancing and/or resilience reasons. We study the logic of selecting the protocol and its impact on user experience in Section 5. Public streams are delivered using plaintext RTMP and HTTP, whereas private broadcast streams are encrypted using RTMPS and, in the HLS case, HTTPS. The chat uses WebSockets to deliver messages.

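The mapping of RTMP video hosts to Amazon EC2 can be checked with a forward and a reverse DNS lookup. The following minimal sketch uses the hostname observed in our traces as an example; it may no longer resolve if the service has since changed its infrastructure.

  import socket

  def classify_video_host(hostname):
      # Resolve the hostname, then reverse-resolve the IP to see who operates the server.
      ip = socket.gethostbyname(hostname)
      reverse_name = socket.gethostbyaddr(ip)[0]
      operator = "Amazon EC2" if "amazonaws.com" in reverse_name else "other (e.g., CDN)"
      return ip, reverse_name, operator

  print(classify_video_host("vidman-eu-central-1.periscope.tv"))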

4. ANALYSIS OF USAGE PATTERNS

We first wanted to learn about the usage patterns of Periscope. The application does not provide a complete list of broadcasts, and the user needs to explore the service in the ways described in the previous section. In late March of 2016, over 110 years of live video were watched every day through Periscope [13], which roughly translates into 40K live broadcasts ongoing all the time.

We developed a crawler by writing a mitmproxy inline script that exploits the /mapGeoBroadcastFeed request of the Periscope API. The script intercepts the request made by the application after being launched and replays it repeatedly in a loop with modified coordinates, writing the response contents to a file. It also sets the include_replay attribute value to false in order to only discover live broadcasts. In addition, the script intercepts /getBroadcasts requests, replaces their contents with the broadcast IDs found by the crawler since the previous request, and extracts the viewer information from the responses to a file.

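The interception part of such a script can be sketched as a small mitmproxy addon. The sketch below is written against the current mitmproxy addon API (older versions use slightly different hook signatures for inline scripts); it overrides only the include_replay attribute named above, and leaves the coordinate rewriting as a comment because the remaining attribute names come from the captured requests.

  import json
  from mitmproxy import http

  LOG = open("broadcasts.log", "a")

  def request(flow: http.HTTPFlow):
      if flow.request.pretty_url.endswith("/mapGeoBroadcastFeed"):
          body = json.loads(flow.request.get_text())
          # Only discover live broadcasts.
          body["include_replay"] = False
          # The rectangle coordinates in the captured body could be rewritten
          # here as well to steer the crawl over different areas of the map.
          flow.request.set_text(json.dumps(body))

  def response(flow: http.HTTPFlow):
      if flow.request.pretty_url.endswith("/mapGeoBroadcastFeed"):
          # Log the list of broadcasts returned for the queried area.
          LOG.write(flow.response.get_text() + "\n")
          LOG.flush()
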
We faced two challenges. First, we noticed that when specifying a smaller area, i.e., when the user zooms in on the map, new broadcasts are discovered for the same area. Therefore, to find a large fraction of the broadcasts, the crawler must explore the world using small enough areas. Second, Periscope servers use rate limiting, so that too frequent requests are answered with HTTP 429 ("Too many requests"), which forces us to pace the requests in the script and increases the completion time of a crawl. If the crawl takes a long time, it will miss broadcast information.

Our approach is to first perform a deep crawl and then to select only the most active areas from that crawl and query only them, i.e., perform a targeted crawl. The reason is that a deep crawl alone would produce too coarse-grained data about the duration and popularity of broadcasts, because it takes over 10 minutes to finish. In a deep crawl, the crawler zooms into each area by dividing it into four smaller areas and recursively continues doing so until it no longer discovers substantially more broadcasts. Such a crawl finds 1K-4K broadcasts (this number is much smaller than the assumed 40K total broadcasts, but we miss private broadcasts and those with location undisclosed). Figure 1 shows the cumulative number of broadcasts found as a result of crawls performed at different times of day. Figure 1(b) reveals that for all the different crawls, half of the areas contain at least 80% of all the broadcasts discovered in the crawl. We select those areas from each crawl, 64 areas in total, for a targeted crawl. We divide them into four sets assigned to four different simultaneously running crawlers, i.e., four emulators running Periscope with a different user logged in (which avoids rate limiting), that repeatedly query the assigned areas. Such a targeted crawl completes in about 50 s.

[Figure 1: Cumulative number of broadcasts discovered as a function of crawled areas (# of requests), (a) in absolute terms and (b) as percentages. Each curve corresponds to a different deep crawl.]

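The zooming strategy of the deep crawl amounts to a recursive subdivision of the queried rectangles, as in the sketch below. The sketch is illustrative only: query_area stands for issuing the intercepted /mapGeoBroadcastFeed request for a given rectangle (a hypothetical helper), and min_gain is an illustrative stopping threshold.

  def deep_crawl(area, query_area, min_gain=0.1):
      # area is a rectangle (lat_min, lon_min, lat_max, lon_max);
      # query_area(area) returns the IDs of the broadcasts discovered in it.
      found = set(query_area(area))
      lat_min, lon_min, lat_max, lon_max = area
      lat_mid = (lat_min + lat_max) / 2.0
      lon_mid = (lon_min + lon_max) / 2.0
      quadrants = [(lat_min, lon_min, lat_mid, lon_mid),
                   (lat_min, lon_mid, lat_mid, lon_max),
                   (lat_mid, lon_min, lat_max, lon_mid),
                   (lat_mid, lon_mid, lat_max, lon_max)]
      # Zoom in by querying the four sub-areas.
      zoomed = set()
      for quad in quadrants:
          zoomed |= set(query_area(quad))
      # Keep recursing only while zooming in reveals substantially more broadcasts.
      if len(zoomed - found) > min_gain * max(len(found), 1):
          for quad in quadrants:
              zoomed |= deep_crawl(quad, query_area, min_gain)
      return found | zoomed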

Figure 2 plots duration and viewing statistics for four different 4h-10h long targeted crawls started at different times of the day (note: both variables use the same scale). Broadcast duration was calculated by subtracting the start time (included in the description) from the timestamp of the last moment the crawler discovered the broadcast. Only broadcasts that ended during the crawl were included (they must not have been discovered during the last 60 s of a crawl), totalling about 220K distinct broadcasts. Most of the broadcasts last between 1 and 10 minutes, and roughly half are shorter than 4 minutes. The distribution has a long tail, with some broadcasts lasting for over a day.

[Figure 2: Broadcasting and viewers: (a) distributions of broadcast duration and of average viewers per broadcast, (b) average viewers vs. local time of day of the broadcast start.]

The crawler gathered viewer information about 134K broadcasts. Over 90% of broadcasts have less than 20 viewers on average, but some attract thousands of viewers. It would be nice to know the contents of the most popular broadcasts, but the text descriptions are typically not very informative. Over 10% of broadcasts have no viewers at all, and over 80% of those are unavailable for replay afterwards (replay information is contained in the descriptions we collect about each broadcast), which means that no one ever saw them. They are typically much shorter than those that have viewers (average durations of 2 min vs. 13 min), although some last for hours. They represent about 2% of the total tracked broadcast time. The local time of day shown in Figure 2(b) is determined based on the broadcaster's time zone. Some viewing patterns are visible, namely a notable slump in the early hours of the day, a peak in the morning, and an increasing trend towards midnight, which suggests that broadcasts typically have local viewers. This makes sense especially from the language preferences point of view. Besides the difference between broadcasts with and without any viewers, popularity is only very weakly correlated with duration.

5. QUALITY OF EXPERIENCE

In this section, we study the data set generated through automated viewing with the Android smartphones. It consisted of streaming sessions with and without a bandwidth limit to the Galaxy S3 and S4 devices. We have data on 4615 sessions in total: 1796 RTMP and 1586 HLS sessions without a bandwidth limit, and 18-91 sessions for each specific bandwidth limit. Since the number of recorded sessions is limited, the results should be taken as indicative. The fact that our phones had high-speed, non-mobile Internet access means that typical users may experience worse QoE because of slower and more variable Internet access with longer latency.

HLS seems to be used only when a broadcast is very popular. A comparison of the average number of viewers seen in RTMP and HLS sessions suggests that the boundary number of viewers beyond which HLS is used is somewhere around 100 viewers. By examining the IP addresses from which the video was received, we noticed that 87 different Amazon servers were employed to deliver the RTMP streams. We could locate only nine of them using maxmind.com, but among those nine there was at least one on each continent except Africa, which indicates that the server is chosen based on the location of the broadcaster. All the HLS streams were delivered from only two distinct IP addresses, which maxmind.com says are located somewhere in Europe and in San Francisco. We do not currently know how the video gets embedded into an HLS stream for popular broadcasts, but we assume that the RTMP stream gets transcoded, repackaged, and delivered to the Fastly CDN by Periscope servers. The fact that we used a single measurement location explains the difference in server locations observed between the protocols. As confirmed by the analysis in [18], the RTMP server nearest to the broadcasting device is chosen when the broadcast is initialized, while the Fastly CDN server is chosen based on the location of the viewing device.

Since we had data from two different devices, we performed a number of Welch's t-tests in order to understand whether the data sets differ significantly. Only the frame rate differs statistically significantly between the two datasets. Hence, we combine the data in the following analysis of video stalling and latency.

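Welch's t-test is the variant of the two-sample t-test that does not assume equal variances between the two groups. As a sketch, the per-metric comparison can be run with SciPy; the variable names below are illustrative, and each list would hold one value per recorded session.

  from scipy import stats

  def compare_devices(samples_s3, samples_s4, alpha=0.05):
      # Welch's t-test: two-sided, without assuming equal variances.
      t_stat, p_value = stats.ttest_ind(samples_s3, samples_s4, equal_var=False)
      return t_stat, p_value, p_value < alpha

  # Example: test whether the measured frame rates differ between the two phones.
  # t, p, significant = compare_devices(frame_rates_s3, frame_rates_s4)
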
5.1 Playback Smoothness and Latency

We first look at playback stalling. For RTMP streams, the app reports the number of stall events and the average stall time of an event, while for HLS it only reports the number of stall events. The stall ratio plotted for the RTMP streams in Figure 3(a) is calculated as the summed-up stall time divided by the total stream duration, including stall and playback time. The bandwidth limit of 100 in the figure refers to the unlimited case. Most streams do not stall, but there is a notable number of sessions with a stall ratio of 0.05-0.09, which usually corresponds to a single stall event lasting roughly 3-5 s. The boxplots in Figure 3(b) suggest that a vast majority of the broadcasts are streamed with a bitrate lower than 2 Mbps, because with access bandwidth greater than that the broadcasts exhibited very little stalling. As for the broadcasts streamed using HLS, comparing their stall count to that of the RTMP streams indicates that stalling is rarer with HLS than with RTMP, which may be caused by HLS being an adaptive streaming protocol capable of quality switching on the fly.

[Figure 3: Analysis of the stall ratio for RTMP streams (a) without and (b) with bandwidth limiting.]

The average video bitrate is usually between 200 and 400 kbps (see Section 5.2), which is much less than the 2 Mbps limit we found. The most likely explanation for this discrepancy is the chat feature. We measured the phone traffic with and without chat and observed a substantial increase in traffic when the chat was on. A closer look revealed that the JSON-encoded chat messages are received even when chat is off, but when the chat is on, image downloads from Amazon S3 servers appear in the traffic. The reason is that the app downloads the profile pictures of chatting users and displays them next to their messages, which may cause a dramatic increase in the traffic. For instance, we saw an increase of the aggregate data rate from roughly 500 kbps to 3.5 Mbps in one experiment. The precise effect on traffic depends on the number of chatting users, their messaging rate, the fraction of them having a profile picture, and the format and resolution of the profile pictures. We also noticed that some pictures were downloaded multiple times, which indicates that the app does not cache them.

Each broadcast was watched for exactly 60 s from the moment the Teleport button was pushed. We calculate the join time, often also called startup latency, by subtracting the summed-up playback and stall time from 60 s, and plot it in Figure 4(a) for the RTMP streams. In addition, we plot in Figure 4(b) the playback latency, which is equivalent to the end-to-end latency. The y-axis scale was cut, leaving out some outliers that ranged up to 4 min in the case of playback latency. Both increase when bandwidth is limited. In particular, the join time grows dramatically when bandwidth drops to 2 Mbps and below. The average playback latency was roughly a few seconds when the bandwidth was not limited.

[Figure 4: Boxplots showing that (a) join time and (b) playback latency of RTMP streams increase when bandwidth is limited. Notice the difference in scales.]

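Both per-session metrics follow directly from the reported playback statistics. The sketch below uses illustrative argument names rather than the actual keys of the playbackMeta report; for RTMP sessions, the total stall time is reconstructed from the reported number of stalls and the average stall duration.

  def session_metrics(playback_time_s, n_stalls, avg_stall_s, watch_time_s=60.0):
      # Total time spent stalled during the session.
      stall_time_s = n_stalls * avg_stall_s
      # Stall ratio: stalled time over the total stream duration (stall + playback).
      total_s = playback_time_s + stall_time_s
      stall_ratio = stall_time_s / total_s if total_s > 0 else 0.0
      # Join time (startup latency): the part of the 60 s watch window spent
      # neither playing nor stalled.
      join_time_s = watch_time_s - total_s
      return stall_ratio, join_time_s

  # Example: 55 s of playback and one 3 s stall gives a stall ratio of about
  # 0.05 and a join time of 2 s.
  print(session_metrics(55.0, 1, 3.0))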

Through experiments where we controlled both the broadcasting and the receiving client and captured the traffic of both devices, we noticed that the broadcasting client application regularly embeds an NTP timestamp into the video data, which is subsequently received by each viewing client. The experiments indicated that the NTP timestamps transmitted by the broadcasting device are very close to the tcpdump timestamps in a trace captured by a tethering machine. Hence, the timestamps enable calculating the delivery latency by subtracting the NTP timestamp value from the time of receiving the packet containing it, also for the HLS sessions, for which the playback metadata does not include it. We calculate the average over all the latency samples for each broadcast. Figure 5 shows the distribution of the video delivery latency for the sessions that were not bandwidth limited. Even though our packet capturing machine was NTP synchronized, we sometimes observed small negative time differences, indicating that the synchronization was imperfect. Nevertheless, the results clearly demonstrate the impact of using HLS on the delivery latency. RTMP stream delivery is very fast, happening in less than 300 ms for 75% of broadcasts on average, which means that the majority of the few seconds of playback latency with those streams comes from buffering. In contrast, the delivery latency with HLS streams is over 5 s on average. As expected, the delivery latency grows when bandwidth is limited, similarly to the playback latency. A more detailed analysis of the latency can be found in [18]. The delivery latency we observed matches quite well with their delay breakdown results (end-to-end delay excluding buffering).

[Figure 5: Video delivery latency is much longer with HLS compared to RTMP.]

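Each delivery latency sample is thus a simple difference between two clocks. A minimal sketch, assuming the embedded NTP timestamps have already been parsed out of the captured stream and converted to Unix time:

  def average_delivery_latency(embedded_ntp_times, capture_times):
      # embedded_ntp_times[i] is the timestamp the broadcaster embedded into the
      # video data; capture_times[i] is when the packet carrying it was captured
      # at the viewer. Both are in seconds.
      samples = [recv - sent for sent, recv in zip(embedded_ntp_times, capture_times)]
      # Clock synchronization is imperfect, so individual samples can be slightly
      # negative; we average over all samples of a broadcast.
      return sum(samples) / len(samples) if samples else float("nan")
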
In summary, HLS appears to be a fallback solution to the RTMP stream. The RTMP servers can push the video data directly to viewers right after receiving it from the broadcasting client. HLS delivery requires the data to be packaged into complete segments, possibly while transcoding it to multiple qualities, and the client application needs to separately request each video segment, which all adds to the latency. HLS does produce fewer stall events, but we have seen no evidence of the video bitrate being adapted to the available bandwidth (Section 5.2). It is possible that the buffer sizing strategy causes the difference in the number of stall events between the two protocols, but we cannot confirm this at the moment.

5.2 Audio and Video Quality

Both RTMP and HLS communications employ standard codecs for audio and video, that is, AAC (Advanced Audio Coding) for audio [7] and AVC (Advanced Video Coding) for video [6]. In more detail, audio is sampled at 44,100 Hz, 16 bit, and encoded in Variable Bit Rate (VBR) mode at about either 32 or 64 kbps, which seems enough to transmit almost any type of audio content (e.g., voice, music, etc.) with the quality expected from capturing through a mobile device.

Video resolution is always 320x568 (or vice versa, depending on orientation). The video frame rate is variable, up to 30 fps. Occasionally, some frames are missing, hence concealment must be applied to the decoded video. This is probably due to the uploading device having had some issues, e.g., glitches in the real-time encoding or during upload.

Fig. 6(a) shows the video bitrate, typically ranging between 200 and 400 kbps. Moreover, there is almost no difference between HLS and RTMP except for the maximum bitrate, which is higher for RTMP. Analysis of such cases reveals that poor-efficiency coding schemes have been used (e.g., I-type frames only). The most common segment duration with HLS is 3.6 s (60% of the cases), and it ranges between 3 and 6 s. However, the corresponding bitrate can vary significantly. In fact, in real applications rate control algorithms try to keep the average bitrate close to a given target, but this is often challenging, as changes in the video content directly influence how difficult it is to achieve such a bitrate. To this aim, the so-called quantization parameter (QP) is dynamically adjusted [2]. In short, the QP value determines how many details are discarded during video compression, hence it can be used as a very rough indication of the quality of a given video segment. Note that the higher the QP, the lower the quality, and vice versa.

[Figure 6: Characteristics of the captured videos: (a) distribution of the video bitrate and (b) average QP vs. bitrate, for HLS and RTMP.]

To investigate quality, we extracted the QP and computed its average value for all the videos. Fig. 6(b) shows the QP vs. bitrate for each captured video (the whole video for RTMP and each segment for HLS). When the quality (i.e., the QP value) is roughly the same, the bitrate varies over a large range. On the one hand, this is an indication that the type of content differs strongly among the streams. For instance, some of them feature very static content, such as one person talking against a static background, while others show, e.g., soccer matches captured from a TV screen. On the other hand, observing how the bitrate and average QP values vary over time may provide interesting indications on the evolution of the communication, e.g., hints about whether representation changes are used. Unfortunately, we are currently unable to draw definitive conclusions, since the variation could also be due to significant changes in the video content, which should be analyzed in more depth.

Finally, we investigated the frame type pattern used for encoding. Most videos use a repeated IBP scheme. Some encodings (20.0% for RTMP and 18.4% for HLS) employ I and P frames only (or just I frames in 2 cases). After about 36 frames, a new I frame is inserted. Although one B frame inserts a delay equal to the duration of the frame itself, in this case we speculate that the reason B frames are not present in some streams could be that some old hardware might not support them for encoding.

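Frame type patterns of the reconstructed segments can be inspected with the LibAV/FFmpeg command-line tools used in Section 2. The sketch below calls ffprobe (the FFmpeg counterpart of the LibAV avprobe tool) on one example MPEG-TS segment; the file name is illustrative.

  import subprocess

  def frame_types(path):
      # List the picture type (I, P or B) of every video frame in the file.
      out = subprocess.run(
          ["ffprobe", "-v", "error", "-select_streams", "v:0",
           "-show_entries", "frame=pict_type", "-of", "csv=p=0", path],
          capture_output=True, text=True, check=True)
      return [line for line in out.stdout.splitlines() if line]

  types = frame_types("segment.ts")        # e.g., ['I', 'B', 'P', 'B', 'P', ...]
  print("pattern of the first GOP:", "".join(types[:36]))
  print("B frames used:", "B" in types)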

5.3 Power Consumption

We connected a Samsung Galaxy S4 4G+ smartphone to a Monsoon Power Monitor [1] in order to measure its power consumption, as instructed in [17]. We used the PowerTool software to record the data measured by the power monitor and to export it for further analysis. The screen brightness was full in all test cases and the sound was off. The phone was connected to the Internet through non-commercial WiFi and LTE networks (the LTE network is a full-fledged network operated by Nokia; DRX was enabled with a typical timer configuration).

Figure 7 shows the results. We measured the idle power draw in the Android application menu to be around 900 to 1000 mW both with WiFi and LTE connections. With the Periscope app on but without video playback, the power draw already grows to 1537 mW with WiFi and to 2102 mW with LTE, because the application refreshes the available videos every 5 seconds. Playing back old recorded videos with the application consumes an equal amount of power as playing back live videos. The power consumption difference of RTMP vs. HLS is also very small. Interestingly, enabling the chat feature of the Periscope videos raises the power consumption to 2742 mW with WiFi and up to 3599 mW with LTE. This is only slightly less than when broadcasting from the application. However, the test broadcasts had no chat displayed on the screen.

[Figure 7: Average power consumption with Periscope and an idle device (home screen, app on, video on (not live), video on (RTMP/chat off), video on (HLS/chat off), video on (HLS/chat on), and broadcast), over WiFi and LTE.]

We further investigated the impact of the chat feature by monitoring CPU and GPU activity and network traffic. Both processors use DVFS to scale power draw to the dynamic workload [17]. We noticed an increase of roughly one third in the average CPU and GPU clock rates when the chat is enabled, which implies higher power draw by both processors. Recall from Section 5.1 that the chat feature may increase the amount of traffic, especially with streams having an active chat, which inevitably increases the energy consumed by wireless communication. The energy overhead of chat could be mitigated by caching profile pictures and by allowing users to disable their display in the chat.

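The clock-rate observation can be reproduced by polling the kernel's cpufreq interface over adb while a broadcast is playing with and without chat. The sketch below reads the standard per-core cpufreq node; the GPU frequency node is vendor specific and therefore omitted.

  import subprocess
  import time

  CPUFREQ = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"

  def cpu_freq_khz():
      # Read the current clock rate of core 0 through adb.
      out = subprocess.run(["adb", "shell", "cat", CPUFREQ],
                           capture_output=True, text=True, check=True)
      return int(out.stdout.strip())

  def average_cpu_freq(duration_s=60, interval_s=1.0):
      samples = []
      end = time.time() + duration_s
      while time.time() < end:
          samples.append(cpu_freq_khz())
          time.sleep(interval_s)
      return sum(samples) / len(samples)

  # Run once with the chat off and once with it on, then compare the averages.
  print("average CPU clock (kHz):", average_cpu_freq())
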
6. RELATED WORK

Live mobile streaming is subject to increasing attention, including from the sociological point of view [14]. In the technical domain, research about live streaming has focused on issues such as distribution optimization [11], including scenarios with direct communication among devices [22]. The crowdsourcing of the streaming activity itself has also received particular attention [4, 21].

Stohr et al. have analyzed the YouNow service [15], and Tang et al. investigated the role of human factors in Meerkat and Periscope [16]. Little is known, however, about how such mobile applications perform. Most of the research has focused on systems where the mobile device is only the receiver of the live streaming, like Twitch.tv [20], or on other mobile VoD systems [9].

We believe that this work and the work of Wang et al. [18] are the first to provide measurement-based analyses of the anatomy and performance of a popular mobile live streaming application. Wang et al. thoroughly study the delay and its origins but, similar to us, also show results on usage patterns and video stalling, particularly the impact of buffer size. They also reveal a particular vulnerability in the service.

7. CONCLUSIONS

We explored the Periscope service, providing insight into some key performance indicators. Both usage patterns and technical characteristics of the service (e.g., delay and bandwidth) were addressed. In addition, the impact of using such a service on mobile devices was studied through the characterization of the energy consumption. We expect that our findings will contribute to a better understanding of how the challenges of mobile live streaming are being tackled in practice.

8. ACKNOWLEDGMENTS

This work has been financially supported by the Academy of Finland, grant numbers 278207 and 297892, and the Nokia Center for Advanced Research.

9. REFERENCES

[1] Monsoon: www.msoon.com.
[2] Z. Chen and K. N. Ngan. Recent advances in rate control for video coding. Signal Processing: Image Communication, 22(1):19-38, 2007.
[3] Genymotion: https://www.genymotion.com/.
[4] Q. He, J. Liu, C. Wang, and B. Li. Coping with heterogeneous video contributors and viewers in crowdsourced live streaming: A cloud-based approach. IEEE Transactions on Multimedia, 18(5):916-928, May 2016.
[5] ISO/IEC 13818-1. MPEG-2 Part 1 - Systems, Oct. 2007.
[6] ISO/IEC 14496-10 & ITU-T H.264. Advanced Video Coding (AVC), May 2003.
[7] ISO/IEC 14496-3. MPEG-4 Part 3 - Audio, Dec. 2005.
[8] F. Larumbe and A. Mathur. Under the hood: Broadcasting live video to millions. https://code.facebook.com/posts/1653074404941839/under-the-hood-broadcasting-live-video-to-millions/, Dec. 2015.
[9] Z. Li, J. Lin, M.-I. Akodjenou, G. Xie, M. A. Kaafar, Y. Jin, and G. Peng. Watching videos from everywhere: a study of the PPTV mobile VoD system. In Proc. of the 2012 ACM conf. on Internet Measurement Conference, pages 185-198. ACM, 2012.
[10] LibAV Project: https://libav.org/.
[11] T. Lohmar, T. Einarsson, P. Fröjdh, F. Gabin, and M. Kampmann. Dynamic adaptive HTTP streaming of live content. In World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2011 IEEE International Symposium on, pages 1-8, June 2011.
[12] Mitmproxy Project: https://mitmproxy.org/.
[13] Periscope. Year one. https://medium.com/@periscope/year-one-81c4c625f5bc#.mzobrfpig, Mar. 2016.
[14] D. Stewart and J. Littau. Up, periscope: Mobile streaming video technologies, privacy in public, and the right to record. Journalism and Mass Communication Quarterly, Special Issue: Information Access and Control in an Age of Big Data, 2016.
[15] D. Stohr, T. Li, S. Wilk, S. Santini, and W. Effelsberg. An analysis of the YouNow live streaming platform. In Local Computer Networks Conference Workshops (LCN Workshops), 2015 IEEE 40th, pages 673-679, Oct. 2015.
[16] J. C. Tang, G. Venolia, and K. M. Inkpen. Meerkat and Periscope: I stream, you stream, apps stream for live streams. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, pages 4770-4780, New York, NY, USA, 2016. ACM.
[17] S. Tarkoma, M. Siekkinen, E. Lagerspetz, and Y. Xiao. Smartphone Energy Consumption: Modeling and Optimization. Cambridge University Press, 2014.
[18] B. Wang, X. Zhang, G. Wang, H. Zheng, and B. Y. Zhao. Anatomy of a personalized livestreaming system. In Proc. of the 2016 ACM Conference on Internet Measurement Conference (IMC), 2016.
[19] Wireshark Project: https://www.wireshark.org/.
[20] C. Zhang and J. Liu. On crowdsourced interactive live streaming: A twitch.tv-based measurement study. In Proceedings of the 25th ACM Workshop on Network and Operating Systems Support for Digital Audio and Video, NOSSDAV '15, pages 55-60, New York, NY, USA, 2015. ACM.
[21] Y. Zheng, D. Wu, Y. Ke, C. Yang, M. Chen, and G. Zhang. Online cloud transcoding and distribution for crowdsourced live game video streaming. IEEE Transactions on Circuits and Systems for Video Technology, PP(99):1-1, 2016.
[22] L. Zhou. Mobile device-to-device video distribution: Theory and application. ACM Trans. Multimedia Comput. Commun. Appl., 12(3):38:1-38:23, Mar. 2016.