Defying the limits of streaming
Given the growing number of users and the widening range of devices, streaming is no longer viable in its current form owing to the substantial amount of power and storage capacity it requires. But researchers at EPFL’s Embedded Systems Laboratory (ESL) have found a way to reduce those requirements without impacting the quality of the video itself.
A video posted on the internet might be watched by thousands of different people, each with their own device, internet connection and viewing environment. But that doesn’t mean the streaming service is personalized for each user. “Platforms like YouTube use two systems, both of which are inefficient,” says Marina Zapater Sancho, a post-doctoral researcher at ESL. She is one of the authors of the study, carried out under the MANGO project, which has received funding from the EU Horizon 2020 program; the work will be presented at Embedded Systems Week in Seoul on 15 October 2017. “They store either one copy of a video in the highest-quality format possible, or dozens of copies in different formats.” The former can result in slow and choppy streaming, while the latter takes up huge amounts of server storage and consequently consumes a lot of power.
Better resource allocation
The researchers developed a way to reduce the power requirement by nearly 20% while improving the user experience by 37%. Their method relies on machine learning, an approach identified by Arman Iranfar, a PhD student at ESL and co-author of the study. “Computers learn from experience,” he says. “We exposed them to many different scenarios, such as 1,000 people playing a video, each from a different device. The computers remembered the series of actions that led to positive outcomes and reproduced them.” In the same way, the applications that encode videos for streaming can learn to allocate resources more effectively while simultaneously optimizing factors like compression, quality, performance and power consumption. Given that video streaming makes up 80% of internet traffic worldwide and is still growing, such an improvement can have a major impact.
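To make the “learning from experience” idea concrete, the toy sketch below shows a tabular reinforcement-learning loop that picks an encoding preset for each kind of viewer and gradually remembers which choices balanced quality against power and bandwidth. The states, presets and reward formula are invented for illustration; this is not the MANGO project’s actual model or code.

```python
# Illustrative sketch only: a toy learning loop for choosing encoding presets.
# All states, presets and the reward formula below are hypothetical.
import random
from collections import defaultdict

STATES = ["phone-3g", "phone-wifi", "laptop-wifi", "tv-fiber"]          # viewer contexts (assumed)
ACTIONS = ["low-res-fast", "mid-res-balanced", "high-res-slow"]          # encoder presets (assumed)

def simulated_reward(state, action):
    """Toy stand-in for a measured outcome: high when perceived quality is good
    and the power/bitrate cost stays within what the device and link can handle."""
    quality = {"low-res-fast": 0.3, "mid-res-balanced": 0.6, "high-res-slow": 0.9}[action]
    cost = {"low-res-fast": 0.1, "mid-res-balanced": 0.4, "high-res-slow": 0.8}[action]
    capacity = {"phone-3g": 0.2, "phone-wifi": 0.5, "laptop-wifi": 0.7, "tv-fiber": 1.0}[state]
    penalty = max(0.0, cost - capacity) * 2.0   # penalize stalls and wasted power
    return quality - penalty + random.gauss(0, 0.05)

q = defaultdict(float)      # remembered value of (state, preset) pairs
alpha, epsilon = 0.1, 0.2   # learning rate and exploration probability

for episode in range(5000):
    state = random.choice(STATES)                       # a viewer arrives
    if random.random() < epsilon:
        action = random.choice(ACTIONS)                 # explore occasionally
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])  # otherwise reuse what worked
    reward = simulated_reward(state, action)
    q[(state, action)] += alpha * (reward - q[(state, action)])  # one-step update

for state in STATES:
    best = max(ACTIONS, key=lambda a: q[(state, a)])
    print(f"{state}: learned preset -> {best}")
```

After a few thousand simulated viewers, the table typically settles on lighter presets for constrained devices and richer ones for fast connections, which is the kind of trade-off between quality, performance and power the researchers describe, though their actual system is far more sophisticated.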
Next up: real-time streaming
But this is just a stepping stone. The researchers’ ultimate aim is to enable real-time streaming: a system in which streaming platforms store just one copy of a video and, when someone clicks on it, immediately adjust the format, compression and quality to that particular viewer. “Today that would require ten servers for each unique viewer. That’s not feasible,” says Marina Zapater Sancho. The performance/power tradeoff is a challenge faced not just by YouTube. “Take magnetic resonance imaging (MRI). To effectively diagnose as many patients as possible, the scanners would have to operate 24/7 and produce very high-quality images that would be stored for at least the duration of the patients’ lives,” she says. But today, patients who go in for an MRI come out with their images on a CD. No internet platform currently exists that can store these images and allow doctors to view and use them in real time.
The Embedded Systems Laboratory is also working on space-related applications through the Square Kilometre Array (SKA) project, which aims to build the world’s largest radio telescope. The hope is that the vast images of the sky taken by the telescope can be used to study tiny celestial objects. “The streaming is extremely complicated from a computing standpoint,” says Marina Zapater Sancho. “We are trying to find the best architecture for the servers, bearing in mind that these servers may not exist yet.”
This research is being carried out at the Embedded Systems Laboratory (ESL), headed by Professor David Atienza Alonso and part of the Institute of Electrical Engineering (IEL) at EPFL, under the MANGO project (an EU Horizon 2020 project) in association with companies and other universities.
Text: Clara Marc
Photo: Alain Herzog