In today's technology-driven world, televisions are not just passive screens; they are complex computing devices. The functionality and energy efficiency of these devices have evolved remarkably, driven by advances in both hardware and software. This article delves into the nuances of power consumption across various TV technologies, how modern devices manage energy efficiency, and how that efficiency can be influenced by the application of energy-saving techniques.
The power usage of televisions varies significantly across different types, primarily dictated by screen size and the technology used to display images. In the days of Cathode Ray Tube (CRT) displays, energy consumption was significantly higher for the size of display, even though these screens provided nothing like modern screen brightness and quality. An old-style 20” CRT could use up to 150W to display an image, and that consumption was constant, not varying with what was displayed. This came from the technology in use: high-voltage electromagnets steering an electron beam in a scan pattern across a phosphorescent curved piece of glass. As screens got larger, power consumption rose steeply, to the extent that truly large screens were not viable (in addition to the manufacturing complexity of making large vacuum tubes).
Of course, for a period prior to LED, and in fact in parallel with it, there was also a dalliance for some years with plasma TVs, which used an ionised gas to produce the display's light and colour; from an energy perspective, the less said about these screens the better. Rather thankfully, we progressed onwards to new technologies, although some display buffs remember plasma with fondness… along with its ability to heat the room it was in.
Most modern TVs are essentially specialised computers that handle a range of operations from processing imagery to decoding and tuning/demodulating signals. The largest draw of power comes from the screen itself, influenced heavily by the method used to produce light. Strictly speaking, these displays split into transmissive (using a backlight) and emissive designs. Regardless of the technology in use, they consume between 50W and 150W at the average screen size of 50”, the difference lying in how variable that power consumption is, and thus what the average draw is for any given content.
Fundamentally, what people refer to as LED screens are LCD in nature. Light is generated by the LEDs and then passed through LCD layers to generate the image, with the LCD acting as a filter, from passing the light fully through to blocking it completely. Colours are produced by passing the light through Red, Green and Blue sub-pixel filters (sometimes with a White sub-pixel as well), which are used in combination to create the image.
Edge-lit LED TVs have minimal dimming capabilities, with power variation coming mainly from signal processing and brightness levels. This design very much serves the low-end displays that are the lowest cost in the market.
Backlit LED TVs with dimming zones, although still using an LCD to control the light reaching the viewer, improve power variability by adjusting the backlight in different screen sections based on the content being displayed (though some of the very lowest cost TVs have little or no dimming). This technology is the most common in the bulk of mid-range displays that the majority of consumers buy, as it fits the sub-$700 price range that is most commonly purchased.
QLED TVs are LED TVs enhanced with quantum dots for better colour and brightness. Like backlit LEDs, they feature dimming zones (being fundamentally still LED displays) but offer enhanced picture reproduction and slightly better management of power variability than backlit LEDs with dimming zones. Lower-end versions of these displays also fit into the sub-$700 price range that is most commonly purchased.
Mini LED technology offers superior dimming capabilities, featuring hundreds of dimming zones. It can also be combined with QLED, offering good black levels together with brighter imagery and enhanced colour.
Micro LED takes the number and size of LEDs down to the per-pixel level, providing an emissive display technology that is highly power-efficient. As one of the latest technologies, still with significant manufacturing complexity, these displays are incredibly expensive: tens of thousands of dollars for even average-size displays. They are suited only to hyper-high-end requirements and have not yet reached the consumer level.
LED-based screens, by their very nature, rely on a large, sometimes segmented, backlight that generates the light needed by the very brightest pixel the display has to produce. Segmentation allows quality improvements in black levels, and sometimes even in peak light levels, by varying the amount of light produced for the image, but these features are only available in higher-end, more capable displays. In the main, though, the backlight has to output the highest luminance the screen is capable of and rely on the LCD to block light to form the image itself. This means that energy consumption is generally static while displaying content, although dimming zones can reduce it when areas of the screen are completely black. An additional source of variability is whether the screen is displaying SDR or HDR, which changes the maximum luminance requirement.
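To illustrate why dimming zones matter, the sketch below models backlight power under a simple linear assumption: each zone draws power in proportion to the brightest pixel it must serve, while a non-dimming backlight always runs at panel peak. The 120W full-panel figure and the linear power model are illustrative assumptions, not measured values.

```typescript
// Minimal sketch: backlight power with and without local dimming.
// Assumes power scales linearly with backlight drive level and a
// hypothetical 120 W draw at full brightness for a 50" panel.

const FULL_PANEL_WATTS = 120; // illustrative, not a measured value

// zoneLevels: required drive level per dimming zone, 0.0 (black) to
// 1.0 (panel peak), set by the brightest pixel each zone must serve.
function backlightWatts(zoneLevels: number[], dimming: boolean): number {
  if (!dimming || zoneLevels.length === 0) {
    // No local dimming: the backlight runs at peak and the LCD layer
    // blocks light to form the image, so power is effectively static.
    return FULL_PANEL_WATTS;
  }
  const perZone = FULL_PANEL_WATTS / zoneLevels.length;
  return zoneLevels.reduce((sum, level) => sum + perZone * level, 0);
}

// A letterboxed scene: top and bottom zones fully black.
const scene = [0.0, 0.9, 1.0, 0.7, 0.0];
console.log(backlightWatts(scene, false).toFixed(1)); // 120.0 W, static
console.log(backlightWatts(scene, true).toFixed(1));  // 62.4 W with dimming
```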
What does this mean for overall power consumption? Screen size is a key parameter on modern displays, but even at the current average screen size of 50”, a display can use between 55W and 150W depending on a variety of design, configuration and content factors.
OLEDs are emissive displays, meaning they create the image by generating light from organic materials that emit when electricity is applied. That light is passed through various filters and diffusers to the glass of the panel to create the image the viewer sees. Colours are generated by passing the light through sub-pixel filters of Red, Green, Blue and, at times, White. Luminance, however, is controlled by the emission of the OLED pixels themselves, not by filtering the amount of light.
Passive-matrix OLED: Used in small, simple displays rather than TVs.
Active-matrix OLED: The basis of modern OLED TVs and predominant in mobile displays.
Transparent, Top-emitting, and Foldable OLEDs: Serve specialised display purposes.
QD OLED: A relatively recent development that combines the best features of OLED and Quantum Dot technology to provide a highly performant emissive display targeted at the high end of the market.
OLED screens consume power based on the content displayed; darker scenes require less power, brighter scenes require more power, making OLEDs generally more power-efficient and effective than their counterparts. In particular this means that energy usage is proportional to the brightness of individual pixels multiplied by the area that is illuminated on a precision of one pixel. This means that regardless of whether the display is showing SDR or HDR content, the actual brightness of the individual pixels is what matters. It should be understood that HDR content is not in itself brighter than SDR content on average, only that highlights and specifically specular highlights can be significantly brighter, but these will be only smaller aspects of the image in most instances. On the reverse, the fact that darker parts of the screen use less or even no energy (in the case of full black) also means that images may use less energy overall.
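As a minimal sketch of that relationship, the model below treats panel power as a fixed overhead plus a term proportional to the average emitted luminance; the 25W overhead and 90W full-white figures are assumptions for illustration, not characterised panel data.

```typescript
// Minimal sketch of the OLED relationship: power tracks the sum of
// per-pixel luminance, so dark pixels cost little and black pixels
// cost nothing. Both coefficients are illustrative placeholders.

const BASE_WATTS = 25;           // assumed electronics/processing overhead
const WATTS_PER_UNIT_LIGHT = 90; // assumed extra draw at full white

// pixels: normalised emitted luminance per pixel, 0.0 (off) to 1.0 (peak).
function oledWatts(pixels: number[]): number {
  const avgLuminance =
    pixels.reduce((sum, p) => sum + p, 0) / pixels.length;
  return BASE_WATTS + WATTS_PER_UNIT_LIGHT * avgLuminance;
}

// A mostly dark HDR frame with a small specular highlight still draws
// far less than a uniformly bright SDR frame.
const darkWithHighlight = new Array(1000).fill(0.05);
darkWithHighlight.fill(1.0, 0, 10); // 1% of pixels at peak brightness
const brightFlat = new Array(1000).fill(0.6);

console.log(oledWatts(darkWithHighlight).toFixed(1)); // ~30.4 W
console.log(oledWatts(brightFlat).toFixed(1));        // 79.0 W
```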
TVs also include some level of image processing for the display, or rendering, of content, which introduces a measure of energy consumption. This comes into play when improving or adjusting incoming video to the capabilities of the display. The energy used depends very much on the manufacturer's engineering design; whether it is energy efficient or not is down to the manufacturer's decisions. Key 'image processing' features include upscaling of content resolution, boosting of saturation and/or hues, and adjusting for viewing conditions using inbuilt light sensors. The amount of energy consumed by this processing varies from minor (upconversion) to significant (displaying video in a very bright room).
This leads to the interesting perspective that some of the largest savings in energy consumption can come from behaviour changes by the consumer, such as changing the viewing conditions so that the room where the TV sits is dimmer. In other words, creating and delivering the consumer message that “You can use less energy if you close the curtains, reposition your TV or turn down the lights”. This is because the quality of the displayed image must be maintained in the face of brighter viewing conditions, which is accomplished by increasing overall screen brightness, and that increase weighs heavily in power consumption.
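As a rough illustration of that effect, the sketch below maps ambient light to the screen brightness needed to keep the picture watchable, and to the power that follows; both the mapping curve and the wattage figures are invented for illustration.

```typescript
// Illustrative sketch: brighter rooms force higher screen brightness,
// and power follows. The lux-to-brightness curve and wattage model
// are assumptions for illustration, not calibrated values.

// Map ambient light (lux) to a required backlight level, 0.2 to 1.0.
function requiredBrightness(ambientLux: number): number {
  return 0.2 + 0.8 * Math.min(ambientLux / 500, 1.0);
}

function screenWatts(brightness: number): number {
  const MIN_WATTS = 40;  // assumed draw at minimum brightness
  const MAX_WATTS = 150; // assumed draw at peak brightness
  return MIN_WATTS + (MAX_WATTS - MIN_WATTS) * (brightness - 0.2) / 0.8;
}

for (const [room, lux] of [["curtains closed", 50],
                           ["lights on", 200],
                           ["sunlit room", 500]] as [string, number][]) {
  const w = screenWatts(requiredBrightness(lux));
  console.log(`${room}: ~${w.toFixed(0)} W`);
}
// curtains closed: ~51 W, lights on: ~84 W, sunlit room: ~150 W
```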
STBs (including streaming sticks) vary in power consumption based on their functionality and, to some extent, the age of their chipset.
These basic 'zapper' devices, historically the focus of standby-power concerns, now achieve as low as 0.5W in eco mode, or more typically up to 2W. Power usage in active mode can vary significantly based on the chipset's age and the efficiency of its software implementation, but modern devices can run to 10W.
PVRs combine tuners with hard drives, significantly increasing power consumption, particularly due to the power demands of hard drives for recording and maintenance operations. In standby, these devices offer similar power consumption to zappers, but due to their normal function they come out of standby several times a day, even when not being used, to pick up scheduled recordings. They can therefore peak to normal operating power levels throughout the day, leading to a relatively high average power consumption.
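A simple duty-cycle calculation makes the effect concrete. This sketch assumes a 10W active zapper, an 18W active PVR (within the sub-20W range discussed below), 1W standby for both, and roughly three hours of recording wake-ups per day; all of these figures are illustrative.

```typescript
// Sketch: why scheduled wake-ups push up a PVR's average power.
// Wattages and hours are assumptions within the ranges in this article.

function averageWatts(activeHours: number, activeW: number,
                      standbyW: number): number {
  const standbyHours = 24 - activeHours;
  return (activeHours * activeW + standbyHours * standbyW) / 24;
}

// A zapper-style device: 4 h of viewing, 1 W standby otherwise.
console.log(averageWatts(4, 10, 1).toFixed(1)); // 2.5 W average

// A PVR: the same 4 h of viewing plus ~3 h of wake-ups for recordings
// and maintenance at ~18 W, with 1 W standby in between.
console.log(averageWatts(7, 18, 1).toFixed(1)); // 6.0 W average
```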
Tuner-less, cloud-based devices aim to reduce power usage significantly by eliminating tuners and leveraging cloud services for network-based personal video recording, shifting energy consumption from the home to managed data centres. These devices only consume power when actively being used by the consumer, and even then only 2.5W to 6W.
Over the last 20 years, STBs have progressed from consuming 30W to 60W in active modes to many now being in the sub-20W arena, even including devices with disks and tuners. Further reduction through removal of local hard disks and tuners has driven consumption into the range of 2.5W to 5W.
STBs are the extraction of the processing and decoding capability from the TV display (and yes, it is actually a duplication, as TVs still contain much the same capabilities). The extraction is done to provide additional performance or functionality to the operator of the service: support for different codecs, bridging between different content distribution paths, higher-performance graphics, and a dedicated user interface tailored to the operator's view of what serves the consumer best. They also provide functionality that may be missing from a TV.

Why does this matter? Because in many instances some of the functionality can be done by either the TV or the STB, and the power consumed is much the same regardless of which does it - in other words, a zero-sum game. A good example is resolution reduction: if the STB delivers a reduced resolution to the screen, the TV must upconvert it to its native resolution; if the STB does the upconversion, the TV does nothing, and either way a similar amount of energy is consumed.
The primary difference between a TV and an STB is, of course, the screen. What that means for power consumption is important: the display accounts for at least 80% of a TV's power consumption, more as the screen size increases. So for all the discussion of energy savings in decoding and processing, the display offers the biggest opportunity for savings, where inefficiencies are present in the screen. And as we have already described, they definitely are.
What allows the display to be the primary driver of energy consumption is the focus on streaming using codecs implemented in hardware on client devices. Hardware-based codecs operate with a very high degree of efficiency, with power consumption measured in tenths of watts. Software-based decoding, however, may use anything from 8x to 20x more energy, as the general-purpose CPU is stressed with the complex mathematics of the decoding process. If companies deployed software-based codecs, the overall rise in energy consumption could be very considerable, especially once multiplied across the many clients a service delivers to.
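Taking those figures at face value, a back-of-the-envelope calculation shows how the per-device difference multiplies across a deployed base; the 0.3W hardware figure and the one-million-client fleet size are assumptions for illustration.

```typescript
// Back-of-the-envelope: hardware vs software decode across a fleet.
// 0.3 W hardware decode and 1 million clients are assumed figures;
// the 8x-20x software multiplier comes from the discussion above.

const HW_DECODE_WATTS = 0.3; // assumed, "tenths of watts"
const CLIENTS = 1_000_000;   // assumed fleet size

for (const multiplier of [8, 20]) {
  const swWatts = HW_DECODE_WATTS * multiplier;
  const extraMW = ((swWatts - HW_DECODE_WATTS) * CLIENTS) / 1_000_000;
  console.log(`${multiplier}x software decode: +${extraMW.toFixed(1)} MW ` +
              `across the fleet during playback`);
}
// 8x: +2.1 MW, 20x: +5.7 MW of extra draw while streaming
```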
This is one area where the impact of bit rate and quality comes into play, because what would be a minor impact in hardware becomes significant in software-based decoding. In most reasonable commercial cases, however, service operators deploy services to devices in formats the device can decode in hardware, avoiding this considerable risk of high energy usage. The bulk of services today make use of hardware-based decoding, and certainly the deployment of Ultra HD services was largely gated on the availability of capable codecs like HEVC implemented in hardware on TVs and STBs alike.
Some operators, however, have chosen to deploy video that is decoded in software, which is concerning, especially where those services have a significant market size.
There is one area, however, where TVs and STBs are alike: the application environment has no real access to information about the screen. The core display processing of the TV knows the panel type and can make use of that information, but the application side (whether in the TV or the STB) knows very little about the display. At best it may understand the resolution, colour mode and dynamic range it is running at. It cannot access the brightness of the display or the customer's display settings, and it certainly has no understanding of power consumption. This leaves any approach to applying power-saving measures without any hints as to what to do.
This is critical to the work of ECOFLOW. Our discoveries and measurements of power consumption in TVs and STBs have shown that the capabilities for saving energy are limited, particularly where the most energy is being consumed, because the effective methods depend heavily on screen configuration and technology. Exposing more information about the screen, including its type, technology and configuration, is needed before applications can control consumption.
Power-saving tools that have been applied successfully have been applied blindly, and their success can be very limited. Techniques that could be applied further do not always work and may even increase energy consumption in some circumstances. This difficulty points to the need to engage manufacturers and software platform providers in future discussions of how to reduce energy consumption.
Video consumption in the modern era is defined by the TV Everywhere approach, where a video experience can be had on almost any device, with the biggest growth in the use of smartphones and tablets.
These devices continuously work on reducing energy usage, with advances in screen technology and system-on-chip (SoC) efficiency. Mid-range to high end smartphones now commonly use AMOLED screens, which are more energy-efficient, while lower end devices (the vast majority by units) use various forms of LCD/LED.
Tablets, with their larger screens and increasingly powerful SoCs, face challenges in adopting OLED due to cost and manufacturing yield issues, so such displays appear mainly on mid and high-end devices.
Codecs have been a considerable challenge on devices like smartphones and tablets after a long period of reliance on AVC/H.264. New services making use of HDR and higher resolutions have driven the desire for next-generation codecs, of which HEVC was the primary choice. HEVC has taken significant time to reach most devices, and even today interoperability wrinkles hold it back. Other codecs such as AV1 are still early in the deployment cycle, which likewise holds back their use in services.
Energy consumption remains high in desktop PCs, particularly with older software codecs. Laptops have seen significant improvements with the shift to ARM-based chips, balancing power consumption with battery life and performance. However the use of Desktop PCs for video consumption does lead to a major issue.
Desktop PCs have largely been designed for performance without regard to power consumption, except in managing the heat profile of the device. What does this mean? It means that desktop PCs, particularly those with gaming capabilities, can use significant base levels of power, circa 20-60W, even before streaming is undertaken. Dedicated or mobile devices are certainly much more energy efficient.
Codec support continues to affect PCs and laptops, with relatively early adoption of H.264/AVC but late adoption of H.265/HEVC. H.265/HEVC has only become commonplace in PC hardware in the last 8 years, and AV1 is only now being included in the latest chipsets. The challenge is that PCs remain in consumers' hands for a long time; 8-year-old devices are still in common usage, which makes supporting services on PCs and laptops challenging.
A further problem has been support in the various operating systems: until fairly recently there were challenges in supporting the latest codecs like HEVC/H.265 and AV1 in web-browser-based playback, even where the hardware supported them.
Similar to desktop PCs, games consoles maintain a high base level of power usage with little or no implementation of advanced power management techniques. The focus remains on performance at the expense of energy efficiency. These can use upwards of 30W before streaming begins, though in most instances streaming itself adds a similar increment of power as on any of the devices we have looked at. On energy grounds, dedicated devices are certainly much more suitable for streaming.
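A quick comparison for a two-hour viewing session, using figures already quoted in this article and an assumed 3W streaming increment, makes the point:

```typescript
// Sketch: energy for a two-hour stream on a games console versus a
// dedicated streaming device, using figures quoted in this article.
// The 3 W streaming increment is an assumed, shared figure.

const HOURS = 2;
const STREAM_INCREMENT_W = 3; // assumed similar across device classes

const consoleWatts = 30 + STREAM_INCREMENT_W; // ~30 W base load, as above
const stickWatts = 5;                         // total, within 2.5-6 W range

console.log(`console: ${consoleWatts * HOURS} Wh for the session`); // 66 Wh
console.log(`stick:   ${stickWatts * HOURS} Wh for the session`);   // 10 Wh
```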
Codec usage in games consoles has followed a similar pattern to desktop PCs: the very latest platforms support codecs such as HEVC/H.265 but not AV1, and a new generation of consoles will be needed before that arrives. The tendency to look towards software decode on these GPU-heavy platforms is something that has not been investigated to date.
Regulators around the world, and the EU more specifically, have focused on eco-design over the last 20 years. Initially the focus was standby power, where the drive was to reduce consumption that delivered little or no benefit; in fact, the earliest STBs were at times known to consume more power in standby than in active use. The EU has led the drive through multiple generations of regulation targeting wasteful power consumption, pushing standby consumption below 2W. Other areas of the world have benefited from this, specifically where those regions relied on voluntary agreements to limit power consumption that, in many respects, did not themselves drive the reduction.
In more recent years, the focus has shifted to active power consumption with the latest TV regulations focusing on driving manufacturers and distributors to have technologies in use on TVs that maximise power reduction. It is not just TVs, as this focus has also moved to all electrical devices including phones, tablets and PCs. It will take time however for them to benefit from the focus that has been present on STBs for over a decade and TVs over the last few years.
The approach taken by the EU is one of carrot and stick, and it has been very successful over the years. In the latest evolution of TV regulations, however, the stick was used, creating headlines around the world along the lines of 'EU bans 8K TVs'. In reality, the issue is that energy-efficiency evolution has not progressed as fast as the EU believes the market is capable of. The key challenge lies in the design of modern displays: most screens use LED technology that predominantly works by using energy to generate light and then throwing much of it away. OLED has promise in this respect but costs significantly more than the mainstream display market will bear at this time.
What does this mean for UIs? In the first instance, the focus has been on providing tools for the consumer to adjust the energy consumption of their displays, with defaults focused on meeting the regulatory requirements (and thus being allowed to be sold in the respective countries), followed by setup wizards during first installation that let the consumer balance their experience against energy consumption. This has its limits, because many of the most energy-efficient configurations suffer from inadequate video brightness except in the darkest of rooms.
The trend in TV and video platform technology shows a clear trajectory towards more efficient power management and sustainable practices. Innovations in screen technology, along with smarter software implementations, continue to push the boundaries of what these devices can achieve in terms of energy efficiency.
As consumers become more environmentally conscious, the industry's focus on reducing energy consumption becomes increasingly pertinent, shaping future developments in this space. There are improvements to be made, particularly in equipping hardware platforms with API capabilities that allow UI applications to discover much more about the nature of the device they are running on.
These APIs need to allow the UI to make more use of power-saving strategies, so the consumer can watch content as economically as they wish while the full experience remains available where necessary - a video implementation of what is available in many cars today with Eco Mode, Performance and Extra Economy. After all, the UI needs to understand which of the various capabilities should be applied to the viewing situation, taking into account the display technology, the content being streamed and even the viewing conditions.
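To illustrate the kind of surface such APIs might take, here is a purely hypothetical sketch; none of these interfaces exist today, and every name, field and threshold is invented for illustration.

```typescript
// Hypothetical API sketch only: no platform exposes this today.
// All interface names, fields and values are invented to illustrate
// what a UI would need in order to apply power-saving strategies.

type PanelTechnology = "edge-lit-lcd" | "fald-lcd" | "mini-led" | "oled";
type PowerProfile = "extra-economy" | "eco" | "standard" | "performance";

interface DisplayPowerInfo {
  technology: PanelTechnology;  // emissive vs backlit changes strategy
  dimmingZones: number;         // 0 for no local dimming
  peakLuminanceNits: number;    // currently configured peak, not panel max
  estimatedWatts?: number;      // if the platform can report it at all
}

interface PowerSavingControls {
  setProfile(profile: PowerProfile): Promise<void>;
  // Cap peak luminance for content that does not need bright highlights.
  limitPeakLuminance(nits: number): Promise<void>;
}

// Strategy selection: what works on an emissive OLED panel differs
// from what works on a screen with a static backlight.
function chooseProfile(info: DisplayPowerInfo,
                       ambientLux: number): PowerProfile {
  if (info.technology === "oled") {
    // Emissive: content-dependent savings are already large; a gentle
    // luminance cap in dim rooms saves more without hurting the image.
    return ambientLux < 100 ? "extra-economy" : "eco";
  }
  if (info.dimmingZones === 0) {
    // Static backlight: only global brightness reduction helps, and it
    // is visible, so be conservative unless the room is dark.
    return ambientLux < 100 ? "eco" : "standard";
  }
  return "eco"; // local dimming gives headroom for modest caps
}
```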