Tech article – DXOMARK (https://www.dxomark.com): The leading source of independent audio, display, battery and image quality measurements and ratings for smartphones, cameras, lenses and wireless speakers since 2008.

DXOMARK Decodes: Software tuning’s pivotal role in display performance
Mon, 13 Feb 2023

Smartphone displays showed a steady stream of enhanced software features and optimizations last year that provided continuous improvements to the user experience. This underscores the fact that a device’s performance doesn’t solely depend on its hardware specifications. Software is a critical “make or break” factor in how well a smartphone works. Manufacturers often issue updates to firmware to correct problems or to optimize one or more aspects of a device’s performance.

Smartphones use programs or algorithms to control many display functions and other features (a process known as “tuning”). Unsurprisingly, a device with poor tuning cannot deliver a great performance, no matter how cutting-edge its hardware may be. The more careful the tuning, the more a device can live up to its hardware’s ability to meet or surpass customers’ expectations. Tuning also has an impact on customers’ hands-on experience: based on testing and analysis, manufacturers make choices about (for example) display brightness levels and color profiles, but they also choose how much or how little end users can adjust those default settings. Another major role of tuning is to manage tradeoffs on the smartphone: for example, employing a refresh rate of 120 Hz at all times will drain the battery faster. Tuning makes it possible to find a balance between smartphone constraints and user preferences.

Let’s look at a few ways in which software tuning can impact the end-user experience. In the examples below, we compare pairs of phones with similar hardware specifications.

Example 1: Peak brightness
Advertised peak brightness does not automatically reflect what users experience. On OLED panels, peak brightness is adjusted based on the Average Picture Level (APL), which is the brightness of the image averaged across the whole content displayed on the screen. This means that at maximum brightness, a small white dot on a black background (low APL) can be driven brighter than a full white screen (high APL). On OLED screens, brightness usually decreases as APL increases; however, manufacturers can choose through tuning to provide consistent display brightness, as shown in the graph below:

Device A and Device B advertise the same peak brightness of around 1000 nits. However, Device A corrected its APL curve through tuning, while Device B did not. If displaying just a white dot, both devices will reach a little over 1000 nits, but on a web page (~80% APL), Device A still provides a peak brightness of 1000 nits, while Device B provides only around 800 nits. In this instance, the specifications are misleading, as the user experience is very different between the two devices.
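The tuned-versus-untuned behavior described above can be sketched as a simple brightness-limiting function. The breakpoints and slopes below are illustrative values chosen to mirror the article’s example, not measurements from any real device.

```python
# Hypothetical sketch of OLED peak-brightness limiting as a function of
# Average Picture Level (APL). All numbers are illustrative.

def average_picture_level(pixels):
    """APL: mean luminance of the displayed content, as a fraction (0-1)."""
    return sum(pixels) / len(pixels)

def peak_brightness_nits(apl, tuned=True):
    """Return the panel's allowed peak brightness for a given APL.

    An untuned panel rolls brightness off as APL rises (a power-supply
    limit); a tuned panel holds it flat, as Device A does in the article.
    """
    if tuned:
        return 1000.0  # flat curve enforced by software tuning
    # Untuned: linear roll-off from ~1050 nits at 0% APL down toward ~750
    return 1050.0 - 300.0 * apl

web_page_apl = 0.8  # ~80% APL, per the article's web-page example
print(peak_brightness_nits(web_page_apl, tuned=True))   # tuned: full 1000 nits
print(peak_brightness_nits(web_page_apl, tuned=False))  # untuned: roughly 810 nits
```

Running both branches at the same 80% APL reproduces the gap the article describes between the two devices.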

Manufacturers need to tune brightness carefully, given its strong impact not just on user experience but also on battery life: keeping brightness high at all times, regardless of screen content, drains the battery faster. In other words, even if the hardware component is capable of delivering very high peak brightness, only software can ensure that the brightness is adjusted to deliver the best user experience while staying consistent with battery constraints.

Example 2: High Brightness Mode

Although there is no standard definition of “high brightness mode” (HBM), most manufacturers claim to equip their smartphones with this feature, which activates under intense ambient lighting and provides a temporary boost in readability. There are many ways to implement HBM on smartphones, with most involving pushing the brightness, compensating for the ambient lighting reflection through tone mapping, and/or saturating the colors. Each OEM can adjust those settings to what they think is best for the user experience.

Whatever the approach to HBM, it is important for manufacturers to test many different kinds of content to avoid surprises, such as shown below:

Even though Device A may render other content well in HBM, its brightness boost and oversaturated rendering obliterate all the detail in the poppy’s petals, and there is visible ringing along the edges of the blurred poppy in the background. While the brightness boost of Device B is less dramatic, most of the flower’s details remain visible.

Another aspect of HBM is that even with the same hardware specifications, devices do not keep the brightness boosted for the same amount of time; how long it stays on is up to the manufacturers, who try to find the best balance between brightness and readability versus battery life and overheating (which can impact the behavior of device components). The graph above shows the HBM for a different pair of devices: Device C’s HBM stays on for only 10 minutes, while Device D provides 30 minutes of HBM under the same lighting conditions.
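The gating logic described above (bright ambient light triggers HBM, and time or thermal budgets end it) can be sketched as a simple predicate. Every threshold here is a hypothetical illustration, not any OEM’s actual tuning.

```python
# Hypothetical HBM gating policy. All thresholds are illustrative.

def hbm_allowed(ambient_lux, elapsed_s, temp_c,
                trigger_lux=20_000, time_limit_s=600, temp_limit_c=42):
    """Allow High Brightness Mode only under intense ambient light,
    within a time budget, and below a thermal limit."""
    return (ambient_lux > trigger_lux
            and elapsed_s < time_limit_s
            and temp_c < temp_limit_c)

print(hbm_allowed(50_000, 100, 35))  # bright sun, early, cool -> True
print(hbm_allowed(50_000, 700, 35))  # time budget exhausted    -> False
print(hbm_allowed(1_000, 100, 35))   # indoors, no need for HBM -> False
```

A "Device C"-style tuning would simply use a smaller `time_limit_s` than a "Device D"-style tuning.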

Example 3: Video tone mapping

Precise tuning is crucial to respect the filmmaker’s artistic intent and to provide the best rendering possible. In this example, both devices are compatible with Dolby Vision and HDR10+; their technical specifications are identical; further, because they are both from the same brand, we might assume that their tuning would also be similar. However, Device A shows a lack of detail in the dark tones of HDR videos due to inaccurate tone mapping, as shown in the curve and comparison photo below.


The Electro-Optical Transfer Function (EOTF) describes how the numerical signal in a video is converted into the luminance value emitted by the display.

The drop in the EOTF in the dark tones corresponds to the complete disappearance of the roof tiles in Device A’s rendering, resulting in a fully dark area in the video. Device B has a different color rendering, stays closer to the artistic intent, and preserves more detail (and thus provides a better viewing experience overall).
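For HDR10 and Dolby Vision content, the reference EOTF is the Perceptual Quantizer (PQ) defined in SMPTE ST 2084, which maps a normalized signal value to an absolute luminance up to 10,000 nits. A sketch of it makes clear what a tone-mapping error means in practice: if a device's measured curve falls below this reference in the dark region, shadow detail is crushed, as with Device A.

```python
# PQ EOTF per SMPTE ST 2084: normalized signal E in [0, 1] -> nits.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf(signal):
    """Convert a normalized PQ signal value to luminance in cd/m^2 (nits)."""
    e = signal ** (1 / M2)
    return 10000.0 * (max(e - C1, 0.0) / (C2 - C3 * e)) ** (1 / M1)

print(round(pq_eotf(1.0)))  # 10000 (full-scale signal -> 10,000 nits)
print(pq_eotf(0.0))         # 0.0 (black)
```

Plotting a device's measured signal-to-luminance curve against this reference is essentially what the EOTF chart in this example shows.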

Example 4: Variable Refresh Rate

The two main thin-film transistor (TFT) technologies for OLED are low-temperature polycrystalline oxide (LTPO) and low-temperature polycrystalline silicon (LTPS). To keep it simple, the major differences between them are:
• Both LTPO and LTPS provide very smooth touch interactions (typically at 120 Hz).
• LTPO is more energy efficient than state-of-the-art LTPS.
• LTPO can switch to any refresh rate between 1 Hz and 120 Hz (Variable Refresh Rate), while LTPS can only switch between pre-set modes such as 60 Hz, 90 Hz, or 120 Hz.
• LTPO transistor size is larger than for LTPS, which can limit maximum resolution.

The development of OLED LTPO panels means that more flagship devices are employing Variable Refresh Rate (VRR). This technology allows the panel to reduce its refresh rate, and thus extend battery life, when the user is not actively interacting with the device. However, the device must react quickly to any user input, because lingering at a reduced refresh rate would make navigation feel noticeably less smooth. That is the price of this battery-saving feature, and OEMs must find the right balance to ensure smooth transitions.
That being said, VRR is particularly useful in brightly lit environments as the display needs to provide a high brightness to be readable: reducing the refresh rate is a way to compensate for the battery drain associated with the high brightness. However, the manufacturer can choose to implement it or not and decides at which level it should be enabled. In the example below, we measured the refresh rate of two devices that use VRR:

Both devices claim to integrate VRR panels with a refresh rate that varies between 10 Hz and 120 Hz. In the dark, both devices are very smooth and provide a 120 Hz experience. But under simulated outdoor ambient lighting, their VRRs activate: Device A shows a first peak at 60 Hz, while Device B shows a peak every 10 Hz (meaning that its refresh rate has dropped to 10 Hz).
Why does Device A refresh at 60 Hz when Device B refreshes at 10 Hz in similar conditions? The answer can depend on OEM strategy, battery drain, and other factors.
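A VRR tuning strategy like the ones contrasted above boils down to a policy function. The sketch below is a hypothetical policy, not any OEM’s actual implementation; the thresholds and the 10 Hz bright-light floor are illustrative choices mirroring the article’s measurements.

```python
# Hypothetical VRR policy sketch. All decision rules are illustrative.

def choose_refresh_rate_hz(touch_active, content_fps, high_brightness,
                           panel="LTPO"):
    """Pick a refresh rate given user activity, content, and brightness."""
    if touch_active:
        return 120  # keep interactions smooth, as both LTPO and LTPS do
    if panel == "LTPO":
        if high_brightness:
            return 10  # drop low to offset the battery drain of brightness
        return max(1, min(content_fps, 120))  # track the content's frame rate
    # LTPS cannot vary continuously: snap up to the nearest preset mode
    for step in (60, 90, 120):
        if content_fps <= step:
            return step
    return 120

print(choose_refresh_rate_hz(True, 24, False))                 # 120
print(choose_refresh_rate_hz(False, 24, False))                # 24
print(choose_refresh_rate_hz(False, 24, True))                 # 10
print(choose_refresh_rate_hz(False, 24, False, panel="LTPS"))  # 60
```

Device A’s 60 Hz floor versus Device B’s 10 Hz floor would correspond to different choices in the `high_brightness` branch.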

Conclusion
While having the latest high-quality panel on your smartphone is a good start toward good display quality, it’s not enough. As this article highlights, display performance does not solely depend on hardware specifications, but also on the software and battery strategy choices that manufacturers make to optimize end-user comfort across different use cases. It is ultimately a question of tradeoffs made to deliver a balanced smartphone experience; it is not an exact science, and there is no unique recipe. Tuning will continue to play an increasingly important role in smartphone optimization, particularly as manufacturers incorporate new AI features that analyze feedback and adjust settings based on user interactions, putting even more “smarts” into their smartphones.

 

GPS on smartphones: Testing the accuracy of location positioning
Fri, 10 Feb 2023

Have you ever experienced using your phone’s navigation system to look for an address or landmark, only to discover that what you’re looking for is on the opposite side of the street from where your phone says it is? If so, then it will not surprise you to learn that location positioning on a smartphone is not always accurate.

How positioning works (and why it sometimes doesn’t)

Global Navigation Satellite System (GNSS) receivers, whether in smartphones, cars, or other devices, calculate their position by processing signals from a minimum of four satellites from the available GNSS and regional satellite constellations.[1] The receiver uses the difference between the broadcast time stamped by the satellite and the time of signal reception to compute the distance, or range, from the satellite to the receiver. Once the GNSS device receives signals from multiple satellites, it knows the position of each satellite and its distance to each of them. The more satellite signals the receiver processes, the better the positional accuracy.
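The ranging step described above is just travel time multiplied by the speed of light. The sketch below shows that single computation; a real receiver additionally solves for its own clock bias, which is why at least four satellites are needed (three for position, one for time).

```python
# Basic GNSS ranging: range = signal travel time * speed of light.
C = 299_792_458.0  # speed of light in vacuum, m/s

def pseudorange_m(t_broadcast_s, t_receive_s):
    """Range from satellite to receiver implied by the signal travel time."""
    return C * (t_receive_s - t_broadcast_s)

# A GPS signal from ~20,200 km altitude takes roughly 67 ms to arrive:
print(round(pseudorange_m(0.0, 0.067) / 1000))  # ~20086 km
```

The example also shows why receiver clock accuracy matters so much: a timing error of just one microsecond corresponds to about 300 meters of range error.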

The main GNSS constellations are GPS, Galileo, BeiDou, and GLONASS. For GPS, Galileo, and BeiDou, each satellite transmits its signal over up to three different frequency bands. This means that older receivers that support only GPS and GLONASS on a single band are less accurate than newer devices that support four constellations and dual bands.

Because GNSS satellites orbit roughly 24,000 km (14,913 miles) above the Earth, the signal that arrives on the ground is too weak to pass through a roof, dense foliage, or a tunnel. Further, the signal passes through the ionosphere and the troposphere, which also affect its propagation time.

In a dense urban environment, high buildings can also mask the signal from certain satellites, reducing the number of signals available to compute a position. Additionally, high buildings can create a canyon effect that reflects and multiplies signals between the satellite and the receiver, and these “echoes” can affect the range accuracy.

The quality of the GNSS in a smartphone depends on the GNSS receiver (chipset), its positioning engine (software), its antenna quality, and the hardware integration. In addition to pure GNSS positioning, current smartphones use inertial sensors (such as gyroscopes, accelerometers, and barometers) and network-based positioning (which relies on cellular networks and Wi-Fi). These different technologies help compensate for the lack of GNSS satellite signals.

Of course, mobile phones didn’t always have built-in navigation systems. The first phone to come with integrated GPS capability and built-in maps, the Benefon ESC! from Finnish phone maker Benefon, appeared in 1999 and was primarily sold in Europe, but it set the stage for future GPS-enabled phones.[2] In 2011, the MTS 945 GLONASS, built by Qualcomm and ZTE for Mobile TeleSystems, was the world’s first GLONASS-compatible smartphone.[3]
The Xiaomi Mi 8, which appeared in 2018, was the first dual-frequency GNSS smartphone. Fitted with a Broadcom BCM47755 chip, it could provide up to decimeter-level accuracy for location-based services and vehicle navigation.[4]

Nowadays, all but the least expensive smartphones come with some kind of navigation system. While DXOMARK’s smartphone protocols do not include measuring the accuracy of a device’s GPS positioning, GPS navigation is one of the use cases we evaluate in our Battery protocol. So not long ago, DXOMARK got together with Geoflex, a French company with deep expertise in geolocation and precision positioning, to evaluate the accuracy of smartphone geolocation.

Testing methodology

DXOMARK gathered a large set of devices from different brands and price segments in order to get a general idea of smartphone GPS positioning performance.

To obtain a high-precision reference trajectory, DXOMARK used the Geoflex test platform mounted on top of a car, as shown in the image below:

The Geoflex test platform.

The test platform was composed of:

  • The most accurate multi-constellation (GPS, Galileo, GLONASS, BeiDou) and tri-frequency GNSS receivers on the market, able to process the latest signals (E6) from the European constellation Galileo; on top of this hardware platform, Geoflex algorithms compute precise point positioning to an accuracy of less than 4 centimeters, which is needed to obtain a good reference track against which to compare the positions delivered by the smartphones under test;
  • High-quality GNSS antennas;
  • An inertial system coupled with GNSS to improve the reference solution when the GNSS signals are of insufficient quality or absent.

We positioned the test platform in the center of the vehicle to be able to compare all the smartphones’ trajectories on the same point of the central and transverse axes, and we used the data recorded from the setup as a control or reference against which to compare the smartphones’ GPS performance. We mounted the smartphones on a board placed inside the front right of the vehicle, as shown below:

The dashboard setup for the test.

We then took the reference system and the smartphones out on a three-hour drive in the greater Paris area. Our itinerary included urban areas, “dense urban” areas (tall buildings close together), open sky areas, and masked areas such as tunnels, forests, and underpasses. With the exception of open sky areas, each of these environments poses challenges to GPS systems.


We used GNSS Logger software to extract the data we collected, which included:

  • The positioning solution in NMEA format[5] generated by each smartphone
  • The GNSS observation file containing the satellite data
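A smartphone’s NMEA output is plain text, and the fix data (time, latitude, longitude, fix quality, satellite count) lives in the GGA sentence. The sketch below parses the widely circulated textbook example sentence; the field layout follows the standard GGA definition.

```python
# Minimal parser for an NMEA GGA sentence.

def nmea_coord_to_degrees(value, hemisphere):
    """Convert NMEA (d)ddmm.mmmm format to signed decimal degrees."""
    point = value.index(".")
    degrees = float(value[: point - 2])  # everything before the minutes
    minutes = float(value[point - 2 :])
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

def parse_gga(sentence):
    """Extract latitude, longitude, and satellite count from a GGA sentence."""
    fields = sentence.split(",")
    return {
        "lat": nmea_coord_to_degrees(fields[2], fields[3]),
        "lon": nmea_coord_to_degrees(fields[4], fields[5]),
        "satellites": int(fields[7]),
    }

fix = parse_gga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
print(fix["lat"], fix["lon"], fix["satellites"])  # ~48.1173  ~11.5167  8
```

Production code would also verify the trailing `*47` checksum before trusting the fields.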

Further, we analyzed each trajectory in order to generate:

  • Positioning in the ECEF (XYZ) coordinate system[6]
  • The trajectory in KML[7] format (Google Earth)
  • The level of error of each smartphone compared to the reference
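Converting geodetic coordinates (latitude, longitude, height) to the ECEF (XYZ) frame uses the standard WGS84 ellipsoid constants. The sketch below shows that conversion:

```python
import math

# Geodetic (lat, lon, height) -> ECEF (X, Y, Z), WGS84 ellipsoid.
A = 6378137.0           # semi-major axis, m
F = 1 / 298.257223563   # flattening
E2 = F * (2 - F)        # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h_m=0.0):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h_m) * math.cos(lat) * math.cos(lon)
    y = (n + h_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h_m) * math.sin(lat)
    return x, y, z

# A point on the equator at the prime meridian sits one semi-major axis
# from Earth's center, along the X axis:
print(geodetic_to_ecef(0.0, 0.0))  # (6378137.0, 0.0, 0.0)
```

Working in ECEF makes it straightforward to compute straight-line error distances between a smartphone fix and the reference trajectory.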

Results and details

DXOMARK focused on the positioning error in the horizontal plane, measured as a root mean square (RMS) error, and did not take into account errors in elevation.

The chart above illustrates the disparities that we observed:

  • The dark green section shows the devices that performed very well, with a horizontal error (RMS) below 5 meters.
  • The light green section shows the phones whose errors were above 5 m and up to nearly 8 m (RMS).
  • Finally, the red section shows phones that continued to report a position inside the tunnels by assuming that the displacement was rectilinear, even though the tunnel was curved. For those phones, the error reached 450 m horizontally at the tunnel exit, which resulted in a deviation of more than 20 m from the average values.
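The horizontal RMS error used in these rankings can be computed as follows: take the 2D distance between each smartphone position and the corresponding reference position, then take the root of the mean of the squared distances. The coordinates below are illustrative local east/north offsets in meters, not real test data.

```python
import math

def horizontal_rms_error_m(measured, reference):
    """RMS of the horizontal (2D) distance between measured and reference
    positions, taken epoch by epoch."""
    squared = [
        (xm - xr) ** 2 + (ym - yr) ** 2
        for (xm, ym), (xr, yr) in zip(measured, reference)
    ]
    return math.sqrt(sum(squared) / len(squared))

track_ref = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]   # reference trajectory
track_gps = [(3.0, 4.0), (10.0, 5.0), (20.0, 0.0)]   # smartphone fixes
print(horizontal_rms_error_m(track_gps, track_ref))  # ~4.08 m
```

Because the errors are squared before averaging, a single large excursion (such as a 450 m tunnel-exit error) weighs heavily on a device's overall score.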

Unsurprisingly, higher-end smartphones with faster processors, better antennas, more constellations, and dual-band support had better GPS results than the more economical models. All the smartphones performed well in open skies because propagation conditions and SNR (signal-to-noise ratio) are very good there. However, there were significant differences in performance in more difficult conditions, and especially when going through a tunnel. The map diagram below shows the results of three smartphones compared with the reference results (red line): Device 5 (a top performer, yellow), Device 10 (middling results, magenta), and Device 15 (last place in our tests, blue-violet).

Reference (red); Device 5 (yellow); Device 10 (purple); Device 15 (blue)

Device 10 stopped giving positions in the middle of the tunnel (having reached the accuracy limit of its accelerometer-based dead reckoning), while Device 15 continued to output positions in a straight line, without any trajectory quality control (possibly due to a one-axis accelerometer).

By contrast, Device 5 has a 3D gyro sensor and accelerometer that allowed it to follow the curve of the tunnel, which made a difference for the parts of the trajectory without GNSS reception. In fact, Device 5 also had the closest results to the reference track in a dense urban area, as its inertial sensor provided good localization precision between buildings.

Reference (red); Device 5 (yellow); Device 10 (purple); Device 15 (blue)

Where to from here?

The advent of four-constellation and dual-band navigation satellite technology represented a vast improvement over earlier satellite navigation systems. Newer GNSS receivers, including those that can receive triple-band signals, will become commonplace even in more economical smartphones. But what further improvements can we expect to see as next-generation devices adopt more sophisticated navigation technology, and as demand grows for ever more accurate positioning?

In addition to smartphones, we envision widespread use of GNSS technology in many kinds of embedded electronics, and in devices such as 360° cameras. We think that GNSS correction services will help manufacturers and end users alike to be able to enhance satellite signals and inertial sensor data such that location accuracy will be measured in just centimeters (rather than in meters or even larger distance units). Such accuracy will have a huge impact on human mobility and on a wide variety of services (for example, deliveries by drones).

As with the smartphone audio, battery, camera, and display experience, hardware is only one part of the equation, and software plays the most important part. Devices featuring the same hardware do not achieve the same precision, and that difference comes down to software tuning.

The latest processors also promise improvements in accuracy. The Snapdragon 8 Gen 2 is coming out with lane-level accuracy, which could pinpoint a location to within +/- 1.5 meters. That’s something we’d really like to test.

DXOMARK looks forward to seeing how far the next generations of positioning technology will take us!


[1] Global Navigation Satellite System: GPS (Global Positioning System, developed by USA); BeiDou (developed by China); Galileo (developed by EU); GLONASS (developed by Russia).  Regional satellite systems: QZSS (Quasi-Zenith Satellite System, developed by Japan); IRNSS (Indian Regional Navigation Satellite System, developed by India)

[2] https://www.mobilephonemuseum.com/

[3] Wikipedia, “MTS 945” https://en.wikipedia.org/wiki/MTS_945

[4] https://www.gsc-europa.eu/news/worlds-first-dual-frequency-gnss-smartphone-hits-the-market

[5] NMEA, defined by the National Marine Electronics Association, is a standard data format supported by all GPS manufacturers, much like ASCII is the standard for digital computer characters in the computer world. The purpose of NMEA is to give equipment users the ability to mix and match hardware and software.

[6] The Earth-Centered, Earth-Fixed (ECEF) coordinate system is also known as the “conventional terrestrial” coordinate system.

[7] KML stands for “Keyhole Markup Language,” a file format that Google developers use to display geographic data.

 


Geoflex, founded in 2012, is a French company that provides universal hypergeolocation services to many different industries, including transport, smartphones, robotics, agricultural machinery, and more. The company works with leading worldwide players (private and public entities) that need precise positioning to power and enhance their critical business processes. https://www.geoflex.fr/

 

 

How social media apps change the sound of concert recordings
Thu, 26 Jan 2023


Smartphones held aloft are a common sight at concerts these days: they let us easily capture these unique events and then share the images and sounds with others. In fact, it’s so common that DXOMARK’s audio protocol puts all smartphones through a loud concert use case, which consists of recording music with the main camera app at a high SPL (sound pressure level). The test emulates the sometimes-extreme conditions of a real concert and helps DXOMARK’s engineers highlight the main qualities and flaws of a device in terms of timbre, dynamics, and artifacts when recording videos with musical content.

Our audio protocol uses the main camera app to evaluate a concert recording, but we recently looked at the other ways that concertgoers immortalize or livestream their favorite shows: mainly, directly through their favorite social media apps.

Our goal was to assess how recording through a social media app, rather than through the main camera, would affect the audio quality of the concert. Different applications apply their own audio processing (noise reduction, EQ, compression, normalization…), and therefore recordings made through different apps can sound quite different from one another.
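One of the processing stages listed above, dynamic range compression, can be sketched in a few lines. Real app pipelines are far more sophisticated (attack/release envelopes, look-ahead, make-up gain), but the core idea is the same: once the signal exceeds a threshold, its level above the threshold is reduced by a ratio.

```python
# Naive per-sample dynamic range compressor (illustrative sketch only).

def compress(samples, threshold=0.5, ratio=4.0):
    """Level above the threshold is divided by the ratio; below it,
    samples pass through unchanged."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

print(compress([0.2, 0.9, -1.0]))  # quiet sample untouched; loud peaks squashed
```

Applied aggressively, as in the loudest app recordings we tested, this is what flattens a concert’s dynamics and pushes peaks into audible clipping territory.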

Our audio experts put the most widely used social media apps to the test and produced a comparative analysis; this article sums up the results. Read on to find out how your favorite social app performed.

The concert use case is recorded in an anechoic box at high SPL

Test conditions

We selected four recent smartphones for this experiment: Apple iPhone 14, Google Pixel 7 Pro, Samsung Galaxy Z Fold4, and Xiaomi 12T.

We tested each device exactly as we do in our protocol’s concert use case, by activating recording in an anechoic box at a high sound pressure level (115 dB SPL), using music genres such as Electronic and Hip-Hop. Each device was tested using the following:
– Main camera application
– Facebook Live
– Instagram Live
– Instagram Stories
– TikTok

Like in our protocol, the concert use-case evaluation focused primarily on timbre, dynamics, and artifacts. (Read more about how we test audio here.) The main camera recordings served as a reference for comparison throughout the tests (as they are likely to be less processed).
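For context on the 115 dB SPL test level: sound pressure level is defined relative to the 20 micropascal threshold of human hearing, SPL = 20 · log10(p / p0), so the level can be converted back to an absolute pressure.

```python
P0 = 20e-6  # reference sound pressure, Pa (threshold of hearing)

def spl_to_pascal(spl_db):
    """Sound pressure (Pa) corresponding to a given dB SPL value."""
    return P0 * 10 ** (spl_db / 20)

print(round(spl_to_pascal(115), 2))  # ~11.25 Pa, loud-concert territory
print(round(spl_to_pascal(60), 3))   # ~0.02 Pa, normal conversation
```

The six-orders-of-magnitude gap between those two pressures is why the logarithmic dB scale is used in the first place.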

Facebook Live

Interestingly enough, audio processing seemed to differ between the iOS and Android versions. On both platforms, the high end of the spectrum is noticeably cut off above 12 kHz because of a lower sampling rate. However, recordings made with the iPhone sounded particularly dark, in addition to being riddled with multiple resonances affecting both the midrange and the remaining treble. These resonances had undesirable effects on the overall sonority.
The Android version produced quite different results in terms of tonal balance, which seemed altered to a lesser extent. However, the recording gain is noticeably higher, inducing heavy compression on most devices, as well as some distortion (especially in the high end).

Instagram Live

For both iOS and Android, recordings produced through Instagram Live were monophonic and similarly had their spectrum cut off above 12 kHz. Their gain was also much higher than the original, resulting once again in excessive compression and distortion in the upper spectrum. This time the processing seemed identical on both iOS and Android, and the induced artifacts seemed less severe than with Facebook Live.

Instagram Stories

Recordings produced as Instagram Stories still had their spectrum cut off above 12 kHz and sounded very similar to the ones made with Instagram Live, but a little harsher.

The iPhone 14 audio was monophonic, but it was quite a different story for Android. Although recordings on Android were stereophonic, the side channel appeared to be limited to a very narrow frequency band (between 2 kHz and 10 kHz), with gating applied below 5 kHz, making it even more restricted. All in all, the recordings met only the bare minimum requirements to be considered stereophonic.

TikTok

Like Facebook Live, recordings produced with TikTok showed noticeable differences in processing between the iOS and Android versions. All recordings were monophonic by default, with the usual cut above 12 kHz, although the distortion of the upper spectrum seemed much more pronounced in the Android recordings, completely crushing the high end.

Compression was also much more prevalent on these devices, and arguably worse than in the Facebook Live recordings. On the iPhone 14, some compression and distortion were noticeable, but less so than on Android. Timbre was the main issue: while it was less problematic than on Facebook Live, some unpleasant resonances impaired the sonority of the recordings.

Comparison between the tested apps

Here is a comparison of the recorded results for the Google Pixel 7 Pro:

And for the Apple iPhone 14:

Conclusion

On all apps, the recordings’ sampling rate was reduced to save bandwidth, which had an impact on sound quality that an attuned listener could perceive. While the sampling rate was very low, it did not usually differ between apps, and it mainly limited the spectrum bandwidth. Much more noticeably, the extremely low bitrates induced by lossy audio compression resulted in sound quality that was undeniably worse than that of recordings produced through the main camera app. Dynamic range was very restricted, and harsh clipping was a common occurrence.
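The ~12 kHz cutoff observed across all the apps is consistent with the Nyquist limit: audio sampled at rate fs can only represent frequencies up to fs / 2, so a 24 kHz sampling rate caps the spectrum at exactly 12 kHz.

```python
def nyquist_hz(sample_rate_hz):
    """Highest frequency representable at a given sampling rate."""
    return sample_rate_hz / 2

print(nyquist_hz(24000))  # 12000.0, matching the observed cutoff
print(nyquist_hz(48000))  # 24000.0, a typical video-recording audio rate
```

Halving the sampling rate thus halves the audio bandwidth (and the data rate), which is exactly the tradeoff the streaming apps appear to be making.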

The default audio processing seemed different between Android and iOS for some apps, with Facebook and TikTok also introducing some questionable resonances in their iOS recordings. Regardless of your device, there is no audio option you can tweak. So, if you’re filming for a story or any related post, you are better off using your main camera app. However, if you want to live-stream directly, then the choice of app is ultimately yours.

Charging devices: Can new EU law clear the path to a unified experience?
Thu, 19 Jan 2023

Last year the European Council, Commission and Parliament agreed on new legislation whose principal aim is to reduce electronic waste and emissions in the EU. As a result, all new smartphones, tablets, cameras and other electronic gadgets marketed in the EU will have to come with a USB-C charging port by fall 2024. In addition, chargers will be unbundled from devices, giving consumers the choice whether to buy a new charger with a device or keep using the old one.

No more cable frustration

The new legislation is good news from a user experience perspective as well. A common charging interface means any device can be charged with any charger and you won’t find yourself in a situation of not being able to charge a device because you forgot to bring a compatible charger.

Manufacturers will also have to provide relevant information about charging performance, for example, power requirements and fast charging support. This will make it easy to work out if an existing charger will work with your new device and help select a new compatible charger if required. Limiting the need to buy new chargers and allowing for the reuse of existing chargers are expected to help save consumers approximately €250 million per year on unnecessary charger purchases.

Industry impact

In practice, large parts of the industry have been moving to USB-C as a quasi-standard for several years now, with only some low-cost devices still using the older micro USB connectors. After all, USB-C generally offers faster charging and data transfer than rival standards, and the cables are pretty much ubiquitous. Still, one major player will be more affected by the new rules than others. Apple uses its proprietary Lightning port on all its iPhone models and some iPad tablets. The company is already using USB-C on its laptops and some tablets, so should be able to port the technology to its iPhone line smoothly, but Apple could also eliminate the problem by getting rid of cable charging altogether and solely rely on wireless charging.

Charging a device is more complex than you might think

Charging electronic devices, and smartphones in particular, is not just about the connector. Compatibility with a charger also depends on the device. For example, a 65W charger will not be able to deliver that much power to a device built to accept a maximum of 25W. In fact, the device might not even charge at its 25W maximum. For that to happen, the charging protocols used in the device and in the charger must be compatible.
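For readers who like to see the logic spelled out, the compatibility rules above can be sketched in a few lines of Python. This is purely an illustrative model, not an actual protocol implementation; the function name, the protocol labels, and the 10W fallback figure are assumptions for the sketch.

```python
def negotiated_power_w(charger_max_w, device_max_w,
                       charger_protocols, device_protocols,
                       fallback_w=10):
    """Toy model of charger/device power negotiation.

    Full-speed charging requires at least one shared protocol;
    without one, both sides fall back to a basic default wattage.
    """
    if set(charger_protocols) & set(device_protocols):
        # Power is capped by whichever side supports less.
        return min(charger_max_w, device_max_w)
    return fallback_w  # no common protocol: basic charging only

# A 65W charger and a 25W phone sharing a protocol negotiate 25W...
print(negotiated_power_w(65, 25, {"USB-PD"}, {"USB-PD"}))     # 25
# ...but with no shared protocol, charging drops to the basic default.
print(negotiated_power_w(65, 25, {"SuperVOOC"}, {"USB-PD"}))  # 10
```

The key point the sketch captures: the connector fitting tells you nothing about the wattage the pair will actually agree on.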

In short: a USB-C charger from manufacturer A will charge a device from manufacturer B and vice versa, but the charging will potentially be far from optimal. When charging your phone with a charger from another manufacturer, the process is likely to take longer and be less efficient than with the charger that comes in the box. This is because there is a huge variety of charging protocols, and almost every manufacturer uses a different one. Some manufacturers even use more than one protocol, or different versions of the same protocol, within their own device lineup. In addition, there is a large selection of third-party chargers available. Their makers are only in a position to support public standards and cannot offer proprietary standards that are kept secret by device manufacturers. When a new public standard is released, it also takes some time before it is supported by all models in a charger manufacturer's lineup. While the latest model likely supports the latest standards, a charger that has been around for some time might not.

While there is a basic standard for power delivery in the shape of USB-C Power Delivery (USB-C PD), not all manufacturers use it; many rely on proprietary protocols instead. Samsung and Google, for example, only use USB-C PD. Xiaomi and Oppo, on the other hand, are developing proprietary super-chargers that give them a significant competitive advantage in charging speed for their own products, but they lag when it comes to compatibility with other products on the market.

Large parts of the industry have already been moving to USB-C ports and chargers.

USB-PD protocol explained

Those manufacturers who do not use a proprietary charging protocol usually rely on USB Power Delivery (USB-PD). The USB specifications cover all aspects of USB, from hardware connectors to data transfer protocols, as well as power supply. USB Power Delivery specifications were first published in 2012 and allowed for fast charging of devices via a USB connection — up to 100W for some device categories.

Since then, the standard has been updated multiple times and the current version 3.1 offers extended charging power up to 240W, covering even the most demanding laptops. It’s not always easy to find these numbers, though. While data transfer protocol information of the different USB versions is easy to come by, you have to dive deep into the detailed specs to find the power supply version.

In addition to USB-PD, there is also USB-PD PPS (Programmable Power Supply) which is a very advanced charging technology for USB-C devices. It allows for continuous adjustment and optimization of the charging voltage, resulting in more efficient charging.
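What "continuous adjustment" means in practice can be sketched with a small example: a PPS device requests its supply voltage in small fixed increments (20 mV steps, per the USB-PD specification), within a range the charger advertises. The 3.3–11 V range below is one common PPS profile used here as an assumption for illustration.

```python
def pps_request_mv(target_mv, min_mv=3300, max_mv=11000, step_mv=20):
    """Quantize a requested charging voltage to the nearest step a
    USB-PD PPS source can supply (PPS adjusts voltage in 20 mV steps).

    The 3.3-11 V range is one common PPS profile; real ranges come
    from the charger's advertised power data objects.
    """
    clamped = max(min_mv, min(max_mv, target_mv))
    # Snap to the nearest 20 mV step above the range minimum.
    steps = round((clamped - min_mv) / step_mv)
    return min_mv + steps * step_mv

print(pps_request_mv(8437))  # -> 8440 (nearest 20 mV step)
```

Because the device can fine-tune the voltage this way throughout the charge, the charger wastes less energy as heat than with a few coarse fixed voltages.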

USB PD is an open and widely adopted standard, which means it is comprehensively documented, enabling easy adoption by manufacturers. In contrast, proprietary protocols are not publicly documented at all, which also means that consumers simply have to trust that the manufacturers' technical claims are true.

Below you can see a (non-comprehensive) list of charging protocols currently in use. As if the sheer number of protocols weren't confusing enough, some manufacturing groups use different names for the same protocol across their brands. For example, Super VOOC on Oppo devices, Ultra Dart on Realme, Warp Charge on OnePlus, and FlashCharge on Vivo and IQOO are just different names for one and the same protocol. On the plus side, the top four Chinese manufacturers, Huawei, Oppo, Vivo, and Xiaomi, are currently working together to create a unified charging standard.

Here is a list of some of the fast-charging brand names one can encounter on the market:

  • Quick Charge (Qualcomm)
    • QC2.0
    • QC3.0
    • QC4/4+
    • QC5
  • Power IQ3 (Anker)
  • Super VOOC (Oppo, Realme, OnePlus)
  • FlashCharge (Vivo, IQOO)
  • Adaptive Fast Charging (Samsung)
  • Pump Express (MediaTek)
  • SuperCharge (Huawei)
  • Hypercharge (Xiaomi)
  • TurboPower (Motorola)
  • Apple

DXOMARK cross-charging tests

So, despite a unified charger shape, the charging experience might be far from unified. But just how bad is the current situation? The DXOMARK Battery team undertook extensive cross-charging tests to find out. Test devices included the Apple iPhone 13, Xiaomi 12 Pro, Samsung Galaxy S22 Ultra, Oppo Find X5 and the Google Pixel 6. All phones were charged with their bundled or recommended chargers, with all the chargers from the rival phones, and with several popular third-party chargers from the likes of Amazon Basics, Force Power, Belkin, and Anker.

Tests were conducted using an oscilloscope with very accurate probes connected to an in-house-developed PCB (printed circuit board), and a thermometer. The following measurements were taken for all charger/device combinations:

  • Charging time
  • Charging power
  • Charge efficiency
  • Charger efficiency
  • Residual consumption of the charger
  • Temperature throughout a charge
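Two of those metrics come down to simple energy ratios, which can be sketched as follows. The figures in the example are made up for illustration and are not DXOMARK measurements.

```python
def charger_efficiency(ac_energy_wh, dc_energy_wh):
    """AC-to-DC conversion efficiency of the charger itself:
    energy out of the charger divided by energy drawn from the wall."""
    return dc_energy_wh / ac_energy_wh

def charge_efficiency(dc_energy_wh, battery_energy_wh):
    """Share of the delivered DC energy actually stored in the battery
    (the rest is lost, mostly as heat in the phone)."""
    return battery_energy_wh / dc_energy_wh

# Illustrative numbers only: 30 Wh drawn from the wall, 27 Wh out of
# the charger, 24 Wh ending up in the battery.
print(round(charger_efficiency(30, 27), 2))  # 0.9
print(round(charge_efficiency(27, 24), 2))   # 0.89
```

Residual consumption, by contrast, is simply the power the charger keeps drawing when nothing is plugged into it.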

Third-party chargers only support standard charging protocols

Third-party chargers do not have access to the device makers' proprietary charging protocols. When charging a smartphone whose very high charging wattages depend on such a proprietary protocol, they fall back in most cases on standard charging protocols, such as USB PD 2.0 or even 1.0. So when buying a third-party charger, it's definitely important to check which standards and devices it is actually 100% compatible with; otherwise, you might end up with slower charging than expected.

While USB PD 2.0 is well optimized for the iPhone 13, for example, on most of the other devices its use results in noticeably lower wattages and longer charging times compared with the bundled chargers and their proprietary protocols. The performance of the cheapest charger of the bunch, the model from Amazon Basics, was especially underwhelming with all phones.

DXOMARK cross-charging tests – peak wattage
DXOMARK cross-charging tests – time to charge from 0 to 80%
DXOMARK cross-charging tests – full charge duration

Super-chargers are only super with the device they were designed for

Our testing also showed that proprietary super-fast chargers, for example, the models from Oppo and Xiaomi, have compatibility issues and don’t play well with devices they were not specifically designed for. The Oppo chargers, in particular, were the least compatible. The super-chargers cannot even be swapped with each other without a drop in charging performance. So don’t expect a super-charger from brand A to charge your phone from brand B any faster, even if the latter comes with its own super-charger that has the same or even higher wattage. This is illustrated in the following graphs:

Xiaomi 12 Pro – charging time with Xiaomi and Oppo super-chargers
Oppo Find X5 – charging time with Xiaomi and Oppo super-chargers

These are only some of the findings of our charger testing. We will publish more detailed test results in another article soon.

A long way to go for a truly unified charging experience

This first round of DXOMARK cross-charge testing shows that you can indeed use any USB-C charger to charge a device with a suitable charging port. However, with devices that support charging at more than 30W, the charging performance is in most cases far from optimal. Below 30W, the experience with a charger from a different brand is close to that with the in-box charger.

Above 30W, using a charger other than the one that came in the box results in prolonged charging times and an overall sub-optimal charging experience. Manufacturers will have to get together and put a lot more work into harmonizing charging protocols for a truly unified charging experience across all brands. Until this is achieved, consumers might prefer to stick with proprietary chargers, reducing the environmental benefits and consumer cost savings of the newly introduced legislation. That said, the new legislation also encourages the adoption of the USB-PD charging standard: any smartphone or charger supporting more than 30W with a proprietary standard will also have to support similar charging speeds with USB-PD.

What does this mean for makers of third-party smartphone chargers?

Michel Bassot, chairman of Bigben Connected[1], a maker of smartphone chargers and accessories, said, “Technology standardization rarely comes with a unique and detailed specification,” which could lead to different interpretations between the smartphone manufacturers and accessory makers.

“If, technically, it makes sense, in reality, it creates confusion in end-user comprehension of the technology advantages,” Bassot said.

Although the fragmentation of USB-C charging would probably weaken the OEMs' offerings, he said that the EU legislation would also be an opportunity for specialized accessory makers to reassure consumers of an independent and neutral position toward smartphone brands.

Conclusion

Our test results showed that charging behavior depended on the device and the charger supporting the same charging protocols. If the charger and smartphone do not support a common protocol, a default protocol kicks in, which currently provides basic 10W charging at best. If there is at least one common protocol, charging power can potentially go up to 30W.

The new EU legislation will require that super-chargers become more compatible by at least supporting the USB-C PD standard to the maximum power in an effort to include as many devices as possible. It will also facilitate the compatibility of third-party chargers, as all manufacturers will be required to support the standards.

This would also put pressure on manufacturers like Oppo and Xiaomi, whose superchargers work best with their own products, to provide more compatibility.

The EU’s common charger ruling is not just about adopting a common type of plug, and not just about making Apple switch from its proprietary Lightning port to the more universal USB-C. The new legislation will require an effort from all manufacturers to achieve an acceptable level of compatibility.


[1] Bigben is a European-based company that supplies and manufactures electronic items for video game consoles and smartphone accessories. It has now diversified into multimedia and video games. Its distribution network spans five continents. https://www.bigben-connected.com

The post Charging devices: Can new EU law clear the path to a unified experience? appeared first on DXOMARK.

Always-on Display: How does it affect battery life? (Fri, 16 Dec 2022)

The recent launch of the Apple iPhone 14 series catapulted “always-on display” into consumer consciousness in a big way. Although it’s new to Apple devices, the technology has been part of the Android world for years. But what is it exactly, and how does it affect battery life? Our DXOMARK Display and Battery experts investigated to better understand the impact of always-on technology on battery autonomy.

The display is a high power-consuming part of a smartphone. Because of this, most smartphone displays turn off after a relatively brief period of inactivity. The “always-on” feature lets users see certain kinds of information, such as the time, without having to fully wake up the phone.

DXOMARK recently asked users in a survey* what they thought of the always-on feature, and 54% of the respondents said they found the feature useful, while 46% didn’t think it was particularly useful.

The convenience of having the screen illuminated at all times, however, may come at the expense of battery life, and that is one of the main things our engineers wanted to find out. So they tested the brightness and the power consumption of the always-on displays on four leading devices — Apple iPhone 14 Pro Max, Google Pixel 7 Pro, Samsung Galaxy S22 Ultra (Exynos), and Xiaomi 12S Ultra — for a fair comparison.

Our tests

To run our battery measurements, we tested all four smartphones for at least two days under the same conditions inside a Faraday cage at a temperature of around 22°C (71.6°F), with ambient light at 50 lux, and battery power between 20% and 80% (levels at which the battery gauge is most stable). Phone settings were also the same across the board:

  • Airplane mode on
  • Wi-Fi, data, Bluetooth, location services (etc.) off
  • Auto brightness on
  • Adaptive refresh rate on

All devices used a gray background (on the standard screen, not specifically for always-on), but this only affected the iPhone, which displays its background dimly when the always-on mode is activated.

To run the display measurements, we used a Radiant imaging colorimeter to map the always-on interface and based our computations on the maps.

From left to right: Apple iPhone 14 Pro Max, Google Pixel 7 Pro, Samsung Galaxy S22 Ultra (Exynos), Xiaomi 12S Ultra

Battery measurements

The results of our battery tests revealed that autonomy was heavily impacted by the always-on screen feature, which drained the battery about four times faster. The battery lasts roughly 100 hours in idle with the feature activated, instead of 400 hours with it deactivated. The Google Pixel 7 Pro had the best autonomy of the four devices, lasting 139 hours with its always-on screen activated. Interestingly enough, the smartphones with the best autonomy in idle also showed the worst autonomy with the always-on screen switched on. Did too much confidence in their batteries lead manufacturers to spend less time optimizing this mode? Maybe.

*Autonomy in hours for a full battery discharge. The data are projected from a long measurement taken between 80% and 20% of battery, where the gauge is most stable; combined with a linearity assumption, this allows us to project the autonomy for a full battery.
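The projection described above reduces to simple arithmetic: the 80%-to-20% window covers 60% of capacity, so under the linearity assumption, full autonomy is the measured time divided by 0.6. The 83.4-hour input below is a hypothetical figure chosen so the result lands on the Pixel 7 Pro's 139-hour number.

```python
def projected_autonomy_h(hours_80_to_20):
    """Project full-discharge autonomy from a discharge timed between
    80% and 20% battery: that window is 60% of capacity, so a linear
    discharge model gives full autonomy = t / 0.6."""
    return hours_80_to_20 / 0.6

print(projected_autonomy_h(83.4))  # ~139 hours
```

Measuring only the stable middle of the gauge and extrapolating avoids the noisy, non-linear behavior near full and empty.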

At DXOMARK, in addition to autonomy measurements, we compare discharge currents, in milliamps (mA), which is the ratio of battery capacity (mAh) divided by autonomy (h). This metric measures the speed at which a specific usage drains the battery and evaluates the performance of the platform itself, regardless of battery capacity. As shown in the following table, the iPhone is the most optimized and keeps its discharge currents low in all situations. But the differences with the competition are rather small, with discharge currents around 10mA with the screen off in idle, and about 36mA with the always-on screen, except for the Xiaomi 12S Ultra, which drains the battery a lot quicker at 47.3mA. Why? Some explanations can be found in the analysis by our Display experts.

*Discharge current is the ratio of battery capacity divided by autonomy (for more details about how we measure discharge currents, check the DXOMARK How we test section on dxomark.com: https://www.dxomark.com/a-closer-look-at-how-dxomark-tests-the-smartphone-battery-experience/)
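The ratio in the footnote above is easy to check by hand. In the example below, the 5000 mAh capacity is an illustrative round number, not a measured value for any of the tested devices.

```python
def discharge_current_ma(capacity_mah, autonomy_h):
    """Discharge current (mA) = battery capacity (mAh) / autonomy (h)."""
    return capacity_mah / autonomy_h

# A 5000 mAh battery lasting 139 h in always-on mode drains at ~36 mA;
# the same battery lasting 500 h with the screen off drains at 10 mA.
print(round(discharge_current_ma(5000, 139), 1))  # 36.0
print(round(discharge_current_ma(5000, 500), 1))  # 10.0
```

Because capacity is divided out, the metric lets phones with very different battery sizes be compared on platform efficiency alone.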

Display measurements

As for the results of our display brightness tests, the iPhone 14 Pro Max and the Xiaomi 12S Ultra were the brightest among the four devices by quite a large margin. But the major difference lies in how the screen is lit up: the Apple iPhone 14 Pro Max lights up the full screen, while the Samsung and Google devices only light up the pictograms. The reason the Xiaomi 12S Ultra has a very high maximum and average brightness is that it shows a very bright symbol in always-on mode, in addition to the time of day.

Acquisition from Radiant imaging colorimeter, showing the luminance levels across the different displays of our study

Another important point is that all these devices use OLED screen technology, which is more energy efficient because black areas of the display, for example, do not consume any power. In contrast, LCD technology always consumes energy, which is why it is not compatible with always-on mode. Therefore, it is expected that brighter devices have higher consumption. Note that we were unable to run specific measurements on the variable refresh rate since the luminance is really low in auto mode. But the refresh rate might have an impact on battery life depending on the way it has been set by each manufacturer.
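The OLED-versus-LCD point can be captured in a toy power model, in which each pixel's draw scales with its own luminance and black pixels cost nothing, unlike an LCD whose backlight is always on. The per-pixel coefficient and the frame sizes below are invented purely for illustration.

```python
def oled_panel_power(pixel_luminances, w_per_nit_pixel=1e-9):
    """Toy OLED power model: total draw is the sum of per-pixel
    contributions, each proportional to that pixel's luminance.
    Black pixels (luminance 0) therefore consume nothing."""
    return sum(lum * w_per_nit_pixel for lum in pixel_luminances)

# A mostly-black always-on frame with a few bright pixels draws far
# less than a fully lit frame, even at a lower per-pixel brightness.
aod_frame = [0.0] * 990 + [500.0] * 10  # 10 lit pixels at 500 nits
full_frame = [200.0] * 1000             # every pixel at 200 nits
print(oled_panel_power(aod_frame) < oled_panel_power(full_frame))  # True
```

This is why always-on interfaces are designed as small bright elements on a black background: on OLED, the unlit majority of the panel is essentially free.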

Concluding our measurements, we observed that even though the iPhone provided a bright screen on average, its power consumption was the lowest, indicating that the Apple engineers did a very good job of optimizing efficiency. The Google and the Samsung showed similar power consumptions, but with an always-on screen experience that was a lot less demanding: Only small parts of the screen were lit, and the maximum brightness was kept low. They both showed similar discharge currents as the iPhone, while displaying a much dimmer screen.

Finally, the third Android device, the Xiaomi 12S Ultra, showed the highest battery drains when the always-on feature was on, and although its average brightness was similar to the iPhone’s, the battery drains can be explained by its very bright pixels localized on its distinctive pictogram. Its maximum brightness was by far the highest.

Ultimately, the resulting user experiences in terms of autonomy were quite similar, considering that the Android devices have larger battery capacities than the iPhone.

Room for further optimization

Apple was late to join its peers in offering an always-on display feature, but as with essentially every feature the company launches, its engineers thoroughly studied how to optimize the user experience by making the most of its hardware and operating system. The always-on experience on Android phones, which have had this feature for a while, is not quite on par with the iPhone’s, offering a similar or worse battery experience. The iPhone 14 Pro Max’s performance shows that further optimization is possible for Android phones as well.


* DXOMARK survey conducted on social media (Twitter and LinkedIn) from November 30 to December 12, 2022; results are based on responses from 222 people.

The post Always-on Display: How does it affect battery life? appeared first on DXOMARK.

Video doorbells: 2022 ranking and comparisons (Tue, 08 Nov 2022)

A year ago we published our first benchmark test of home surveillance cameras. Today we are back with an update that includes new camera models to study the evolution of image quality in this constantly growing market.

The update includes four new doorbells in addition to last year’s models, all from major players in the North American market: the new Google Nest Doorbell (wired, 2nd gen), the Arlo Wired Video Doorbell, the Ring Doorbell Pro 2, and the Wyze Video Doorbell Pro. With this latest generation of cameras, we can see real progress in terms of image quality. In general, the new devices perform better in our tests and, as a result, rank higher than the 2021 models.

From left: Google Nest Doorbell (wired, 2nd gen), Ring Video Doorbell Pro 2, Arlo Wired Video Doorbell, Wyze Video Doorbell Pro

For doorbell cameras, the image-quality requirements center on being able to identify the person at the door at all times. This means first having accurate target exposure on faces in bright sunny conditions, but also in strongly backlit situations and even at night, which is increasingly difficult. Recognizing people also means having enough detail on their faces. This comes with its own challenges, given the relatively low resolutions of doorbell cameras and the compression necessary to work with the cloud.

We put the eight doorbells through elements of our Camera protocol adapted for these devices, testing them in several lighting conditions ranging from day to night, with various dynamics and distances, to evaluate their performance. Here are their respective specifications:

In our first benchmark a year ago, the wired Google Nest Hello outperformed the other three doorbells we had tested at the time and Google continues to lead the pack with its latest Nest Doorbell (wired, 2nd gen). The battery-powered Wyze Video Doorbell Pro and the wired Ring Video Doorbell Pro 2 followed, pushing the models tested last year further down the ranking.

We also looked at the breakdown of the scores to see how the doorbells performed specifically in daylight and night conditions. While the Google doorbell at the top of our ranking proved to be a well-rounded device with good performance in all conditions, that was not necessarily the case for the other doorbell cameras.

About DXOMARK Doorbell camera tests

As with all DXOMARK test protocols, our doorbell evaluations take place in laboratories and in real-world situations using a variety of subjects. The scores rely on objective tests, for which the results are calculated directly by measurement software on our laboratory setups, and on perceptual tests, in which a sophisticated set of metrics allows a panel of image experts to compare aspects of image quality that require human judgment.

The following section gathers key elements of DXOMARK’s image quality tests and analyses for video doorbells. So let’s dive into the results of the protocol by looking at the specific use cases to better understand what sets the new Google Nest Doorbell (wired, 2nd gen) apart from the rest.

Daylight use cases

The Daylight use case focuses on evaluating the different image quality branches, mainly on portrait scenes, especially where the subjects are close to the camera. Among the image quality attributes, exposure and detail preservation are judged the most important, because they can influence face detection and the identification of subjects. Artifacts, such as blocking and compression, can equally influence the overall image quality, leading to potential detection or identification failures. For doorbells in daylight conditions, accurate color rendering and white balance are not a requirement but a nice feature to have for the user experience. Finally, given the overall low preservation of details common to most doorbell cameras, noise is rarely an issue, so the weight of the noise evaluation is lower compared with the other image quality attributes.

Daylight use cases range from well-lit conditions on a sunny day, through a cloudy day, to a strongly backlit situation. Doorbells must adapt to all these situations to be usable at any time. This chart shows the Daylight use case scores for all tested models.

The results for the Daylight use case reflect the overall ranking, with the Google Nest Doorbell (wired, 2nd gen) at the top of the list. Because daylight use cases are the most common ones, this is not necessarily a surprise.

We can also point out that battery-powered doorbells tend to be in the lower part of the ranking. This likely reflects the trade-off between conserving battery power and the heavy processing that is often necessary to produce a high-quality image. As a result, the wired doorbells tended to have better image quality because image processing didn’t have to come at the expense of power. The exception, however, was the battery-powered Wyze doorbell, which performed well enough to reach third place in the Daylight ranking.

Google Nest Doorbell (wired, 2nd gen): accurate target exposure on face and limited clipping in sky
Google Nest Doorbell (wired): slightly low target exposure on face, no clipping in the sky
Wyze Video Doorbell Pro: overexposed on face and background
Wyze Video Doorbell Pro: accurate target exposure on the face and acceptable level of detail even on background
Ring Video Doorbell 4: slightly low target exposure on the face and low level of detail
Google Nest Doorbell (battery): slightly low target exposure on the face and low level of detail

Night use cases

Night use case evaluation focused on different image quality attributes of mainly portrait scenes, especially when the subjects were close to the camera. These attributes and their importance were similar to those for the daylight use cases, with the exception of color. Most doorbells use an infrared mode under a certain light level, which generally produces black-and-white images. Doorbells that provide a color mode at night, either by staying in visible mode longer or by recoloring the infrared feed, have a significant advantage over cameras that remain in IR mode. In DXOMARK’s doorbell tests for night use cases, we used the same lab setups as for day use cases but with lower light levels. The same real-life scenarios were also performed, but at night, after sunset.

Night use cases are more challenging for the small cameras in doorbells because the levels of light captured can be very low, resulting in images that contain noise and very few details. To work around those constraints, most doorbell cameras will switch to an infrared mode when light levels become very low, allowing for better exposure performances with less noise, but at the cost of color. This chart shows the Night use case scores of the test candidates.

The new Google Nest Doorbell (wired, 2nd gen) showed a strong performance in daylight situations, as well as in night situations, leaving the competitors far behind and taking the top spot in the Night ranking. The Ring Doorbell 4, which struggled a bit during daylight situations, showed a remarkable performance in night conditions, which allowed it to move close to the top of the Night ranking.

Google Nest Doorbell (wired, 2nd gen): accurate face exposure, good detail
Google Nest Doorbell (wired): accurate face exposure, lack of detail
Wyze Video Doorbell Pro: overexposure
Ring Video Doorbell 4: slight bright clipping on face but recognizable
Arlo Essential Video Doorbell Wire-free: face is strongly over-exposed and not recognizable
Google Nest Doorbell (wired): accurate target exposure on face but low level of detail; person is hard to recognize

Let’s look deeper at the performance of the doorbells for the three main image quality attributes of exposure, texture, and artifacts.

Exposure

Most of the time, video doorbells will be used during the day to check on visitors or package deliveries. The key element in these situations is to be able to recognize and identify the person at the door, which requires a camera that provides good target exposure on the face.

In DXOMARK’s doorbell tests, exposure performance was measured in the lab on a setup including realistic mannequins and a light box, under controlled lighting conditions. Several conditions were reproduced, from bright light with low dynamics to low light with high dynamics, up to an EV4 difference. Results observed in the lab were backed up by perceptual evaluations in real-life scenarios, where the doorbell cameras were placed outside, first in full light and then under the cover of an archway, to get more high-dynamic conditions. In each case, a person approached the camera so that we could evaluate the target exposure on a real person.

The Google Nest Doorbell (wired, 2nd gen) always delivered accurate target exposure, coupled with a rather wide dynamic range, meaning that many details were preserved in the background. We tested this in the lab on our realistic mannequins and confirmed the behavior in a real scene, with both the doorbell and the subject well lit. In the same conditions, the previous Nest doorbells (both battery-powered and wired) tended to slightly underexpose the face, while doorbells like the Wyze Video Doorbell Pro, and the Arlo doorbells to some extent, tended to overexpose the subjects.

*The graph shows the evolution of the lightness (measured in L*) with the light level in lux, for multiple lighting conditions. The white area represents the region where the lightness is considered correct. Lightness is measured on the forehead of the left realistic mannequin (see setup example below).
Wyze Video Doorbell Pro, Duo HDR setup
Overexposed on the face and completely clipped in bright parts
Ring Doorbell Pro 2, Duo HDR setup
Accurate target exposure on face and completely clipped in bright parts
Google Nest Doorbell (battery), Duo HDR setup
Slightly underexposed on face and slight clipping in bright parts
Google Nest Doorbell (wired, 2nd gen): accurate target exposure on face and limited clipping in sky
Google Nest Doorbell (wired): slightly low target exposure on face, no clipping in the sky
Wyze Video Doorbell Pro: overexposed on face and background
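For readers curious about the L* metric used in the exposure graph: it is the standard CIE 1976 lightness, computed from a measured luminance Y relative to the reference white luminance Y_n. A minimal implementation of that conversion:

```python
def luminance_to_lstar(y, y_n):
    """Convert a measured luminance Y to CIE 1976 lightness L*,
    relative to a reference white luminance y_n.

    Uses the standard piecewise formula: a cube root above the
    low-luminance threshold, a linear segment below it.
    """
    t = y / y_n
    if t > (6 / 29) ** 3:
        f = t ** (1 / 3)
    else:
        f = t / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

# An 18% gray patch lands near the middle of the L* scale.
print(round(luminance_to_lstar(18.0, 100.0), 1))  # 49.5
```

L* is used rather than raw luminance because it approximates perceived brightness, so "correct face exposure" can be defined as a fixed L* band regardless of scene light level.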

Things generally get a bit more complicated when lighting conditions get more challenging. If the video doorbell is installed under a porch, for example, the dynamic range of the scene is increased because the camera is in the shadows and the scene is in the light. In those conditions, doorbells like the Google Nest Doorbell (wired, 2nd gen) or the Wyze start to struggle a bit, with target exposure barely high enough to recognize people. However, most of their competitors delivered even lower target exposure, making it nearly impossible to recognize who was at the door.

Google Nest Doorbell (wired, 2nd gen): slightly low target exposure on face, and low contrast; face is hard to fully recognize
Google Nest Doorbell (wired): slightly low target exposure on face
Wyze Video Doorbell Pro: very slightly underexposed but face is recognizable

During the night, most video doorbells have an infrared mode to help get better exposure when light levels are low. But even with that, good image quality is not guaranteed. In our real-life scenarios, we added an external lighting source to simulate a front-door light. In those conditions, whether the new Nest activated its IR mode or not, target exposure was always accurate on faces, allowing for easy recognition of the person. This was not the case for competitors like the Wyze Video Doorbell Pro, below, which doesn’t switch to IR mode and shows strong clipping on the face. On the Google Nest Doorbell (wired, 2nd gen), the background tended to be slightly underexposed, which meant some image elements were lost, but the background was not the main focus of the scene.

Google Nest Doorbell (wired, 2nd gen): IR mode is activated, accurate target on face, slightly low on background
Google Nest Doorbell (wired): IR mode is activated, accurate target
Wyze Video Doorbell Pro: overexposed
Google Nest Doorbell (wired, 2nd gen): accurate target on the face
Google Nest Doorbell (wired): accurate target
Wyze Video Doorbell Pro: overexposed

Texture

In DXOMARK’s Doorbell tests, texture performance was measured in the lab on a setup that included realistic mannequins and a ColorChecker chart, under controlled lighting conditions reproduced from bright light to low light. The lab results were backed up by perceptual evaluations in real-life scenarios, where the doorbell cameras were placed outside, first in full light and then under the cover of an archway to create higher-dynamic-range conditions. In each case, a person approached the camera so that we could evaluate the level of detail rendered on a real person.

Good exposure is necessary to recognize people at the door, but a high level of image detail is what makes it possible to positively identify them. This is where the Nest doorbells outperformed the others, providing the highest levels of detail among the doorbell cameras we tested. In addition, the Google Nest Doorbell (wired, 2nd gen) stood out because image details remained consistent in a given lighting condition, thanks in part to the near absence of compression artifacts, which usually cause a loss of detail.

*This graph shows how the facial-detail metric evolves with lighting conditions. The metric is measured on the realistic mannequin face in the DXOMARK PortraitTimingColor setup; the higher the value, the better the detail preservation.

While most cameras maintained a generally consistent level of detail across all lighting conditions, some, like the Google Nest Doorbell (wired, 2nd gen) or the Arlo Essential Video Doorbell Wire-free, lost detail quite significantly once they switched to infrared mode. As the graph above shows, for the doorbells just cited, this happened between 20 lux and 5 lux.
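To make the idea of a detail-metric drop concrete, here is a minimal sketch, not DXOMARK’s actual tooling and with entirely hypothetical lux-to-metric values, of how one could locate the lighting interval where a camera’s facial-detail metric falls most sharply, for instance when it switches to IR mode:

```python
# Illustrative sketch: find the lighting range where a doorbell's
# facial-detail metric drops the most (e.g., at the IR-mode switch).
# Input maps light level (lux) to a detail metric (higher = better).
# All numbers below are hypothetical.

def biggest_drop(metric_by_lux):
    """Return the (higher_lux, lower_lux) interval with the largest
    metric loss, scanning from bright to dim."""
    levels = sorted(metric_by_lux, reverse=True)
    drops = [(metric_by_lux[a] - metric_by_lux[b], (a, b))
             for a, b in zip(levels, levels[1:])]
    return max(drops)[1]

measurements = {300: 0.82, 100: 0.80, 20: 0.78, 5: 0.41, 1: 0.38}
print(biggest_drop(measurements))  # -> (20, 5)
```

With these made-up measurements, the sharpest loss falls between 20 lux and 5 lux, matching the kind of behavior described above.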

Arlo Wired Video Doorbell, 300 lux TL84: acceptable level of detail
Arlo Wired Video Doorbell, 5 lux Tungsten: all details are lost

Ring Doorbell 4, 300 lux TL84: acceptable level of detail
Ring Doorbell 4, 5 lux Tungsten: some details are lost
Nest Doorbell (wired, 2nd gen): acceptable level of detail
Nest Doorbell (wired): some loss of detail in the shadows
Wyze Video Doorbell Pro: some loss of detail

Because of their placement, doorbell cameras can also be used to monitor a home’s front yard or driveway. The key test here is whether the camera can provide a legible image of a car’s license plate. Our results showed that the Google Nest Doorbell (wired, 2nd gen) lacked the resolution to do this: numbers and characters were barely distinguishable on cars parked about 4 meters from the camera. Among the doorbells we tested, only the Google Nest Doorbell (wired) and the Ring Doorbell 4 could provide a legible, clear-enough view of a car’s license plate.

Artifacts

In DXOMARK’s Doorbell tests, artifacts were evaluated on all tested scenes. The most common artifacts in video doorbells were blocking and compression artifacts, caused by the video encoding and compression required to stream footage to the cloud, for example. Other common artifacts were color fringing in the corners of the image, due to the wide-angle lenses, as well as ringing and hue shifts near saturation.

Most video doorbells suffered from video compression artifacts, but the Google Nest Doorbell (wired, 2nd gen) was generally free of them. These compression artifacts reduced the level of detail in the image and could create an unpleasant frame-reset effect when the compression level changed abruptly from one frame to the next.

Distortion was also often visible on doorbell cameras. The Nest doorbells were generally not much affected by it, thanks to their different field of view. The Google Nest Doorbell (wired, 2nd gen), in particular, uses a portrait video format that is quite rare. This vertical field of view allows for a head-to-toe view of the person in front of the camera and avoids creating distortion on the sides. In contrast, the Wyze Video Doorbell Pro, for example, has a square video format, which results in a very visible fish-eye effect.

While the Google Nest Doorbell (wired, 2nd gen) avoided most of the common artifacts seen on the other doorbells, we noticed that the Nest showed some “ghosting,” which appeared when people were close to the camera and moving. However, when people were standing still in front of the camera, the image was clear.

Steady improvements in image quality

Our testing showed many improvements in doorbell image quality from some of the major video doorbell brands, but the Google Nest Doorbell (wired, 2nd gen) was by far the best all-around performer. It did especially well for texture and exposure, making it the only one of the eight cameras we tested that allowed for systematic, easy identification of faces in all light conditions, and earning it the DXOMARK Gold label for Camera.

We’ll continue to test the latest video doorbells as they become available on the market and keep updating the rankings on an annual basis as we track the progress in image quality.

The post Video doorbells: 2022 ranking and comparisons appeared first on DXOMARK.

A brief introduction to how we test doorbell cameras (https://www.dxomark.com/a-brief-introduction-to-how-we-test-doorbell-cameras/, Tue, 04 Oct 2022)

Doorbell cameras are becoming the staple of every connected home because they have become the first line of defense in a home security system. Whether it’s seeing who is at the door when nobody is in the house, or accepting a package delivery remotely, doorbell cameras have become convenient as well as a necessary component in any home surveillance system. The use of doorbell cameras is also an area of home security that is expected to continue to grow.

Drawing on its expertise in camera image quality evaluation since 2003 (DSLRs and smartphones, among others), DXOMARK is now extending that expertise to home surveillance cameras and smart doorbell cameras. The image quality of this new breed of connected devices is of particular interest to DXOMARK. Unlike smartphone or laptop cameras, where the focus is on producing images that are generally pleasing to the eye, the tiny sensors in security and doorbell cameras must focus on producing accurate facial details that make it possible to identify the person in front of the camera.

DXOMARK has developed a new protocol that adapts its already stringent image-quality testing methods to evaluate doorbell cameras. It is the latest addition to the company’s extensive family of protocols covering smartphone camera, audio, display, and battery, as well as video conferencing, wireless speakers, camera sensors, and lenses.

Currently focused on image quality only, the protocol is built on the same unique combination of lab measurements and tests in real-life/natural scenes that has made DXOMARK the standard of image quality in the smartphone industry.

Test setup

Use cases are the basis of any DXOMARK protocol because they determine the types of lab setups, the real-life scenes to be shot, and the list of quality attributes to evaluate.

The scenes that we reproduce in the laboratory and outdoors are:
• An outdoor daylight scene to test the intrinsic quality of the camera (combination of lab and outdoor measurements)
• A backlit scene with a foreground in the shadow and a background in the sun, to evaluate the HDR capability of the camera (combination of lab and outdoor measurements)
• A night scene, to test either the infrared (IR) mode or a floodlight-assisted color mode

DXOMARK lab setups (HDR and night vision) for security cameras and doorbell cameras

 DXOMARK natural scene setups (outdoor, HDR, night vision) for smart doorbell cameras

Field of view

The field of view is the cornerstone of a doorbell or security camera’s specification sheet. Unlike most consumer cameras, security cameras often claim the ability to cover an angle of up to 180°. In the specific context of doorbells, however, there isn’t an industry-level consensus yet on the right approach to field of view. Should the video format, for example, be square, or portrait?

Because of the diversity of the existing solutions, DXOMARK’s protocol tests each doorbell camera at its default field-of-view setting, respecting each manufacturer’s choices. However, we evaluate image quality attributes on every aspect of the image, including distortion artifacts. A doorbell model that claims to have a very wide viewing field but delivers a poor quality image would be impacted more in our evaluations than a device that had a more conservative field of view but guaranteed a higher-quality image — one that would allow recognition of the person in any part of the image.

The evaluation: Image quality attributes

We perform two kinds of evaluations on videos: objective and perceptual. Objective tests measure against standards established by the industry, such as white balance and texture level, while perceptual evaluations quantify qualitative parameters. Perceptual evaluation is rooted in long-established scientific methods, also described in standards such as the International Telecommunication Union’s ITU-T P.910 recommendation on subjective video quality assessment methods for multimedia applications. Both objective measurements and perceptual evaluations are necessary to fully assess the image quality of a video, as they complement each other and lead to a richer assessment.

DXOMARK has developed its doorbell camera protocol around the most important image quality attributes for a security camera: Exposure, Details (texture and noise), and Artifacts, keeping in mind that the main purpose of a doorbell camera is to recognize the face of a person at the door, whether it is a friend or an intruder.

Exposure

Good exposure is crucial to properly identify a face. The DXOMARK doorbell testing protocol evaluates four parameters related to exposure:
• Exposure on the person, which is the amount of light on the individual that allows for their correct identification; when identification is guaranteed, we then use our expertise from the consumer camera world to evaluate whether the rendering is also pleasing to the viewer.
• The dynamic range of the camera, i.e., its ability to correctly render the dark areas of the scene as well as the bright areas. In particular, we test here the HDR performance of the doorbell camera.
• The contrast, which is the gradient of differences between dark and light areas in the video. Contrast is especially key in the context of HDR pictures, as a poorly balanced HDR setting can lead to unnatural images, which in turn can make identifying the person difficult for the viewer.
• Exposure adaptation, which is the ability of the camera to adjust exposure in real time when the lighting conditions suddenly change.

Details, Texture & Noise

Related to the need to identify the person at your door, the measurements for details, texture, and noise explore all the elements of the picture related to the clarity of the image:
• Although details can be roughly estimated with a resolution chart, they must also pass real-life scene tests, such as the ability to read the logo on a shirt or even a car license plate.
• Texture evaluates the way the camera depicts the details of facial characteristics such as a beard or skin, as well as surrounding areas such as grass and bushes. DXOMARK has developed a specific perceptual evaluation on realistic mannequins to complement the tests of natural scenes.
• Finally, noise assesses the graininess of the overall image. It is worth noting that here, again, the goal is to identify people, so having a low level of noise is not as important as having a high level of detail.

Artifacts

In this attribute evaluation, we look mainly for three types of artifacts: compression, color fringing, and distortion.
• Compression appears when the doorbell needs to transmit the video to a server in real time, forcing a rather high compression level. When the scene changes rapidly, the ISP sometimes fails to keep up with the pixel count, and a phenomenon known as blocking appears. This is particularly true of battery-powered systems.
• Color fringing is a type of chromatic aberration often caused by the camera’s failure to focus all colors on one point; it is often seen at the edge of a subject, separating the foreground from the background.
• Distortion is very often present on cameras with wide-angle lenses. What matters in the context of doorbells and security cameras is whether the distortion could impede identification of the person being filmed. The DXOMARK protocol measures not only the objective distortion but also its impact in real scenes, through its perceptual analysis methodology.

Conclusion

DXOMARK has put several doorbell cameras through its rigorous testing protocol. They cover all price points and regions, and all the market leaders are represented, including Google Nest, Ring, and Arlo. The results are still being tallied, and we plan to publish them very soon on dxomark.com!


 

A closer look at how DXOMARK tests the smartphone battery experience (https://www.dxomark.com/a-closer-look-at-how-dxomark-tests-the-smartphone-battery-experience/, Tue, 20 Sep 2022)

In the “what we test and score” section of our website, we presented why DXOMARK developed its Battery testing protocol, and we described in general terms the kinds of tests we perform to score smartphone battery performance for autonomy (how long a charge lasts, a.k.a. battery life), charging (how long it takes to recharge), and efficiency (how effectively the device manages its battery during charge-up and discharge). This article takes a deeper dive into the specifics of the equipment our engineers use and the procedures they follow when testing smartphone battery performance.

The quality of a battery’s performance goes far beyond the battery’s size. How long a charge lasts depends on several factors, including the phone’s hardware and software; the kinds of features it has; whether it runs background processes during active use and/or in idle mode; and of course, how much, when, and in what ways people use their phones.

The challenge for manufacturers is to find the right balance between high-end features and battery life. It is no easy task to meet consumers’ expectations for the most powerful chipset, the best and brightest displays, 5G and other power-consuming connectivity, and ever-better camera and video functionality, while also ensuring that the phone’s charge lasts long enough to avoid too-frequent charging.

Let’s begin with an in-depth description of the equipment we use, after which we’ll explain more about the specific test routines and use cases on which we base our scores.

Our testing tools

Faraday cage

The latest of DXOMARK’s laboratories is the one dedicated to battery testing, a major component of which is our Faraday cage. The walls, ceiling, and floor of this room are composed of 2 mm-thick steel panels, which isolate smartphones from external networks and other disturbances that could affect measurement precision and repeatability. Inside the Faraday cage, a WiFi access point and a cellular repeater provide stable connectivity. An outdoor antenna on the rooftop of our building receives a signal from the French network operator Orange, and our network repeater amplifies it to a pre-defined level and disseminates it inside the cage via the in-cage antenna (–100 dB for 4G, –80 dB for 3G, and –70 dB for 2G).

Robots in the Faraday cage

Touch robots

So far, we have an array of four touch robots inside the Faraday cage that we use in two of our three major autonomy test routines: home/office and the calibrated use cases. In addition to their touch functions, which are programmed to use the same gestures as humans to interact with smartphones (scrolling, swiping, tapping), they are equipped with computer vision so they can recognize the various app icons and keyboards, and even react to intrusive popup messages. Further, each robot is equipped with a lighting system that reproduces lighting conditions at home, outside, and at the office. The intensity and the color temperature vary depending on the time of day, which forces the phone to adapt and adjust its brightness (something that can have a significant impact on power consumption). Smart power sockets installed near the robots control the phones’ charging level. For instance, they can simultaneously stop the charging of four fully charged devices so that a test can start on all of them at the same time, or start charging a device at a specific battery level.

Robots are equipped with simple push-button actuators to wake up the screen before each test case, or from time to time just to mimic quick checks of notifications or of the time during the day. Four of our robots work simultaneously and are controlled by a sequencer, which triggers all test routine events, the lighting system, and the smart power sockets. We use a fifth robot to run a setup wizard before the test routine begins to verify that the other robots properly recognize each icon and that their gestures are adapted to the specific UI (icon design, gestures, screen size, layout) of the device under test.

We test phones using their default settings out of the box; the only thing we deactivate is 5G, because our lab does not yet have 5G coverage and devices supporting 5G connectivity would otherwise be negatively and unfairly impacted. (We will add 5G measurements to our protocol as soon as our lab has coverage.)

Oscilloscope

We use a Rohde & Schwarz RTE 10124 oscilloscope with current and voltage probes to measure both primary and secondary power over time. (Primary power is the energy taken from the wall outlet before it enters the charger; secondary power is the energy the charger delivers before it enters the smartphone.) To measure the secondary power, we designed specific PCB cards (one USB-A and one USB-C) that allow plugging in current and voltage probes without affecting the charging protocols.
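As a rough illustration of what such probe measurements yield, here is a minimal sketch, purely hypothetical and not the actual oscilloscope software, of how sampled voltage and current can be turned into an energy figure by integrating instantaneous power over time:

```python
# Illustrative sketch: integrate instantaneous power (V * I) over
# sampled time points with the trapezoidal rule, then convert
# joules to watt-hours. Not DXOMARK's actual tooling.

def energy_wh(times_s, volts, amps):
    joules = 0.0
    for i in range(1, len(times_s)):
        p0 = volts[i - 1] * amps[i - 1]  # power at previous sample
        p1 = volts[i] * amps[i]          # power at current sample
        joules += 0.5 * (p0 + p1) * (times_s[i] - times_s[i - 1])
    return joules / 3600.0  # 1 Wh = 3600 J

# One hour of a constant 5 V / 2 A draw -> 10 Wh.
t = [0, 1800, 3600]
print(energy_wh(t, [5.0] * 3, [2.0] * 3))  # -> 10.0
```

Applying the same integration to the primary and secondary probes gives the two energy figures (before and after the charger) that the efficiency calculations later in the article rely on.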

Battery protocol tests

Autonomy

Home/Office

In our home/office typical usage scenario, the smartphones start from a full battery charge and our robots run through a precisely defined 24-hour cycle of activities and then repeat it until the phones’ batteries are completely depleted (that is, when the phones switch themselves off). The system monitors the battery gauges at every stage of the cycle to measure how much battery power has been consumed and what percentage of battery power is actually left.

Faraday cage
The touch robot

The 24-hour scenario includes 4 hours of screen-on time per day, which research indicates is the average time of use around the world*, and multiple types of activities: social and communications, music and video, and gaming, among others, using the latest versions of some of the most popular applications here in Europe, where our lab is located. (And speaking of our location, please note it is possible that some test results will vary if conducted elsewhere because of differences in network quality, available applications, and so on.)

The DXOMARK robots at work.

On the go

Mobile phones are, well, mobile, so we include an array of tests to see how smartphone batteries are affected when we are “on the go.” Just as for our stationary robotic testing, we set all phones to their default settings, but here we turn on location and turn off WiFi and 5G. We bring along a reference phone (always the same one) to help us take into account fluctuations in weather, temperature, etc.

Our on-the-go tests include the kinds of activities people often do when commuting on public transport, such as making phone calls and browsing social apps; we also test activities when traveling in a car (GPS navigation, for example) and when on foot (streaming music, shooting photos and videos). We start each test case at a checkpoint along the predefined route and run it until the next checkpoint, where we measure its consumption before starting the next test.

Calibrated

For our calibrated use case tests conducted back in the Faraday cage, we have our robots repeat sequences of activities that belong to a particular use case. Here are our current set of use cases:

    • Video streaming (in cellular & Wi-Fi)
    • Video playback
    • Music streaming
    • Gaming
    • Calls
    • Idle

“Calibrated” refers to the fact that we use the same settings and application parameters for each phone we test — for example, we set display brightness to 200 cd/m²; we measure sound coming from the phone’s speaker at a distance of 20 cm; we set the phone speaker volume level to 60 dB; and we ensure that the ambient lighting conditions are the same. We then measure how much power each of these activities consumes, so that you have an idea of how long you will be able to (for example) play an intense video game or how many hours of music you’ll be able to listen to.

The results of these three autonomy test categories will let you know how much battery life (in dd:hh:mm) you can expect from a given smartphone, including how much power it loses overnight when you’re not actively using it, and how much power specific kinds of activities consume. Going further, we’ve been able to devise 3 different autonomy profiles based on the results of our typical usage scenario and on-the-go test cases: light, moderate, and intense. In our estimation, light use means 2.5 hours of active use per day; moderate means 4 hours of active use; and intense means 7 hours of active use. These profile estimates are intended to give you a better idea of the kind of autonomy you can expect based on how much you use your phone.
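To illustrate how such profile estimates can be derived, here is a minimal sketch that projects battery life in dd:hh:mm from assumed per-hour drain rates during active use and in idle. The drain rates and the resulting figures are hypothetical; DXOMARK’s actual model is built on its measured test results.

```python
# Sketch: project battery life (DD:HH:MM) for the three usage
# profiles. Assumes hypothetical drain rates: active use costs
# `active_pct_per_h` percent of battery per hour, idle costs
# `idle_pct_per_h` percent per hour.

def autonomy(active_h_per_day, active_pct_per_h=7.0, idle_pct_per_h=0.3):
    idle_h_per_day = 24 - active_h_per_day
    pct_per_day = (active_h_per_day * active_pct_per_h
                   + idle_h_per_day * idle_pct_per_h)
    hours = 100.0 / pct_per_day * 24      # hours until the battery is empty
    d, rem = divmod(round(hours * 60), 24 * 60)
    h, m = divmod(rem, 60)
    return f"{d:02d}:{h:02d}:{m:02d}"

# The article's three profiles: 2.5, 4, and 7 hours of active use per day.
for profile, active_h in [("light", 2.5), ("moderate", 4), ("intense", 7)]:
    print(profile, autonomy(active_h))
```

Unsurprisingly, the lighter the profile, the longer the projected autonomy; the value of separate profiles is that each reader can pick the one closest to their own habits.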

Linearity

One other aspect of our Autonomy tests focuses on how accurate a smartphone’s battery power indicator or gauge is. It’s long been known that the battery percentages shown on the display user interface do not always accurately reflect the exact amount of power remaining in the battery. This can mean that two phones with the same battery size and whose gauges indicate 10% power remaining may run out of power at very different times.
To measure battery linearity, we have designed a use case that drains a constant amount of power from the battery. After fully charging the battery, we play a video displaying a white screen with no sound in full-screen mode. The phones are set to airplane mode, fixed refresh rate, fixed screen resolution, and put at their maximum brightness.
We perform this measurement twice for each device. If the phone’s gauge shows 20% battery life remaining but the actual power remaining is less than 20%, we deduct points from its score, because there is nothing more frustrating than watching your precious last 20% of battery quickly collapse!
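The linearity check can be pictured as a comparison between what the gauge reports and what the battery actually holds during the constant drain. Here is a minimal sketch under stated assumptions: we log (gauge percent, energy drained so far) pairs, and all the names and numbers are hypothetical, not DXOMARK’s actual scoring formula.

```python
# Illustrative sketch: compare a phone's battery-gauge readings
# against the true state of charge during a constant-power drain,
# and report the worst case where the gauge over-reports.

def gauge_error(samples, battery_wh):
    """samples: list of (gauge_percent, energy_drained_wh) pairs.
    Returns the largest gap (in percentage points) by which the
    gauge reads higher than the actual charge remaining."""
    worst = 0.0
    for gauge_pct, drained_wh in samples:
        actual_pct = 100.0 * (battery_wh - drained_wh) / battery_wh
        worst = max(worst, gauge_pct - actual_pct)
    return worst

# Hypothetical log for a 20 Wh battery: the gauge shows 20% when
# 17 Wh have already been drained (actual charge left: 15%).
log = [(80, 4.0), (50, 10.0), (20, 17.0)]
print(gauge_error(log, battery_wh=20.0))  # -> 5.0
```

A perfectly linear gauge would give an error of zero; the bigger the over-report, the more points a scoring scheme like the one described above would deduct.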

Charging

Our Charging score is based on two sub-scores, full charge and quick boost. When conducting these tests, we either use the official charger and cables provided with the phone or buy a recommended charger from the manufacturer’s official website.

Charging setup

Full charge

After we fully deplete the smartphone’s battery, we measure how much power and time it takes for the phone to charge from zero to full, as indicated by our probes. We also note when 80% of a full charge is reached, as well as the time when the battery indicator says 100%. We deduct points depending on how much power is added to the charge after the smartphone’s gauge indicates 100%.

We also measure the primary power and the speed of wireless charging for those devices equipped with that feature.

Quick boost

In our quick boost tests, we measure the power gained from plugging in a smartphone for five minutes at various battery charge levels (20%, 40%, and 60%), since how much charge the battery has left can make a significant difference in how much power it takes on in that short time.
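A quick-boost run boils down to a simple before/after comparison at each starting level. The sketch below summarizes such a run; the gauge readings are hypothetical, not measured results.

```python
# Illustrative sketch: summarize a quick-boost run. Given gauge
# readings before and after a five-minute plug-in at several
# starting levels, report the charge gained at each level.

def boost_gains(runs):
    """runs: dict mapping start_percent -> percent after 5 minutes."""
    return {start: end - start for start, end in runs.items()}

print(boost_gains({20: 31, 40: 50, 60: 67}))  # -> {20: 11, 40: 10, 60: 7}
```

In this made-up example the gain shrinks as the starting level rises, which is the kind of effect the three starting points are meant to capture.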

An engineer using the oscilloscope during a smartphone battery test.

In another test, we play Asphalt 9 from Gameloft for a minimum of 20 minutes, until the battery gauge indicates that 5% battery is left, and then we plug the phone into a charger to check how much and how quickly power is drawn from the wall outlet. This helps us check the impact of intense gaming on the phone, as phones that are hot from heavy use take a charge differently than phones that are not.

Efficiency

Charge up

Our Efficiency score is partly based on measurements of a charge up — that is, the ability to transfer power from a power outlet to a smartphone, and how much residual power is consumed after the phone is fully charged and when detached from the charger, as measured with our probes and oscilloscope.

Let’s take the example of a 5,000 mAh battery with a nominal voltage of 4 V. We consider the typical energy capacity of this battery to be 20 Wh (watt-hours = 5 Ah × 4 V). In our Charging test plan, we measure the power drawn from the power outlet for a full charge. Let’s say we measure 25 Wh; the charge then has an efficiency of 80% (the 20 Wh stored in the battery divided by the 25 Wh of energy it cost).

We also calculate the travel adapter efficiency. It’s simply the ratio of the secondary power drawn (after the travel adapter, in Wh) to the primary power drawn (before the travel adapter, in Wh).
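The two ratios above can be written out explicitly. This sketch uses the article’s own charge-efficiency numbers (5,000 mAh at 4 V nominal, i.e. 20 Wh stored, for 25 Wh drawn from the outlet); the 22 Wh secondary figure in the adapter example is hypothetical.

```python
# Sketch of the two efficiency ratios described in the article.

def charge_efficiency(capacity_ah, nominal_v, primary_wh):
    """Energy stored in the battery divided by energy drawn from the outlet."""
    stored_wh = capacity_ah * nominal_v
    return stored_wh / primary_wh

def adapter_efficiency(secondary_wh, primary_wh):
    """Energy delivered after the travel adapter over energy drawn before it."""
    return secondary_wh / primary_wh

print(charge_efficiency(5.0, 4.0, 25.0))  # -> 0.8 (the 80% from the example)
print(adapter_efficiency(22.0, 25.0))     # -> 0.88 (hypothetical 22 Wh secondary)
```

Both ratios land within the measured ranges quoted below (63% to 88.6% for charge efficiency, 80% to 94% for the travel adapter).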

An engineer conducting and monitoring a smartphone battery test in the Faraday cage.

In our reference database, the charge efficiencies we measure range from 63% to 88.6%, and the travel adapter efficiencies range from 80% to 94%. When our tested smartphones are fully charged but still plugged into the charger, residual power consumption ranges from 90 mW to 850 mW; and when the smartphones are unplugged from the charger, but the charger is still plugged into the outlet, residual consumption ranges from 10 to 150 mW.

Discharge

We also calculate the discharge efficiency, which is the ratio of battery capacity to the results from our stationary and calibrated use case Autonomy sub-scores.

Why do we rate efficiency? The impact of your smartphone on the electricity bill is of course negligible compared to heating or lighting, but if your smartphone is power efficient, a smaller battery will suffice, making the phone lighter and more compact. Good efficiency also demonstrates the quality of the design and the robustness of the software. In other words, an efficient device is better built.

Scoring

To briefly recap our scoring system (which we explained in more detail in our introductory article), we compute our overall Battery score from three sub-scores: Autonomy, Charging, and Efficiency. We calculate the Autonomy score from the results of three different kinds of tests (stationary, on the go, and calibrated use cases), along with battery linearity. The Charging score takes into account full charge and quick boost results. And finally, the Efficiency score is based on charge-up (the efficiency of the adapter) as well as discharge (the overall power consumption measured in our typical usage scenario and in our calibrated use cases).

Adapting to foldable phones

DXOMARK’s Battery performance protocol was designed around the classic one-screen smartphone design. However, the multiple ways of using certain apps and features on foldables presented the Battery team with some challenges that had to be taken into consideration when evaluating the device’s battery experience. Using a social media app on an unfolded display, for example, places a different demand on the battery than using it on the folded screen.

That’s why the Battery team made some minor adjustments to the protocol for foldables. To determine the most precise and correct adjustments, we tested a device three times: in its folded state, in its unfolded state, and then in a combination of the two states, depending on the most likely usage for each use case. For example, calling, social apps, and GPS navigation are tested in the folded state because the device is less likely to be used unfolded for these use cases when on the go.

Some users might say that GPS navigation could be tested unfolded as the user can make better use of a larger screen map. But when considering the inconveniences of holding or mounting an unfolded tablet-sized device in a car, the more realistic and probable use for GPS navigation is likely to be folded. Even when using GPS navigation as a pedestrian, the user is likely to keep the phone folded.

In addition to a larger screen, one of the advantages of a foldable phone is that it offers multiple screens, which allows for multitasking. For example, users can watch videos on one part of the screen, while chatting and messaging with friends on another part of the screen. This particular use case, however, will not be tested in the Battery protocol for now.

Here’s a quick guide to show how we test the battery when the phone is foldable:

We hope this article has given you greater insight into the equipment we use and the tests we undertake to produce our DXOMARK Battery reviews and rankings.

The post A closer look at how DXOMARK tests the smartphone battery experience appeared first on DXOMARK.

A closer look at the DXOMARK Audio protocol, https://www.dxomark.com/a-closer-look-at-the-dxomark-audio-protocol/, published Tue, 20 Sep 2022 17:02:37 +0000

The post A closer look at the DXOMARK Audio protocol appeared first on DXOMARK.

DXOMARK first launched the testing of smartphone audio in October 2019 just as smartphone users were recording and consuming more video and audio content on their mobile devices. From listening to music or watching movies to recording concerts or meetings, smartphone audio technology has evolved and so has the way smartphones are used for audio. DXOMARK engineers have kept up with these advancements and have adjusted the Audio protocol to keep it relevant to users.  In this article, we’ll take you behind the scenes for an in-depth look at how we test the playback and recording capabilities of smartphones. We’ll look at the methods, the tools, and the use cases that we use to evaluate the quality of audio playback and recording on smartphones.

Test environments

Depending on the protocol, measurements may be made in different environments. While some recordings are purposely made in real-life settings, whether indoor or outdoor, the vast majority of our measurements are conducted under laboratory conditions for greater consistency.
Using a ring of speakers in an acoustically neutral room, our engineers can simulate any environment for recording purposes. Likewise, an acoustically treated room is dedicated to playback perceptual evaluation. Acoustic treatment of these rooms ensures a well-balanced frequency response.

As most objective measurements require very strict conditions, we test our devices in anechoic settings, thanks to a specially designed anechoic box that eliminates most of the sound reflections inside it. Measurements requiring larger setups are made within our custom-made anechoic room, which accommodates a wide range of protocols requiring minimized sound reflections, such as our audio zoom protocol.

DXOMARK’s anechoic chamber

Objective testing tools

Much of audio testing relies on the ability to both convey and capture sound precisely. Hence the importance of scientific-grade measurement microphones tuned with a sound-level calibrator, and of carefully optimized loudspeakers used as sound sources; both are crucial components of a controlled audio chain.

From left: An optimized Genelec 8010 speaker, a scientific-grade Earthworks Audio M23R microphone for measurement, a precision sound level calibrator (model CAL 200) for laboratory use, and a motorized rotary stage X-RSW60A-E03.

The device under test (DUT) can be secured with a clamp mounted on a stand or with a magnetic holder. In some cases, the DUT is mounted on a custom-made rotating stand whose rotation is automated via computer-controlled servo motors, allowing for precise 360° measurements.

Objective measurements are processed by our engineers using software tools that depend on the measurement type. Frequency responses, directivity, and THD+N are processed by custom Python libraries developed by our acoustic engineers, following state-of-the-art signal processing methods widely used in the audio industry. Other measurements, such as volume, use freeware tools like REW for their straightforward applications. Once processed, objective measurements are scored using internally developed algorithms that take into account criteria specific to each measurement (flatness and dispersion of the frequency response, loudness values, percentage of distortion per frequency band, etc.), carefully selected to best match the user experience and perceptual listening.
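As a rough illustration of how one of these measurements can be computed, here is a minimal THD+N estimate in Python. The function name, the ±5% fundamental window, and the 20 kHz band limit are illustrative assumptions, not DXOMARK's actual implementation:

```python
import numpy as np

def thd_plus_n(signal, fs, f0, band_hz=20000):
    """Rough THD+N ratio of a recorded pure tone at f0 Hz.

    Everything in the audio band that is not the fundamental
    counts as distortion plus noise.
    """
    # Hann window to limit spectral leakage before the FFT
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    in_band = freqs <= band_hz
    total_power = np.sum(spectrum[in_band] ** 2)
    # Fundamental power: bins within +/-5% of f0 (illustrative tolerance)
    near_f0 = (freqs >= 0.95 * f0) & (freqs <= 1.05 * f0)
    fund_power = np.sum(spectrum[near_f0] ** 2)
    return np.sqrt((total_power - fund_power) / fund_power)
```

Fed a 1 kHz tone carrying a single harmonic at 1% of the fundamental's amplitude, this returns a value close to 0.01, i.e., about 1% THD+N.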

Perceptual testing tools

While objective measurements give us hints about how a smartphone may sound, nothing reflects the user experience better than a thorough perceptual evaluation: the human ear is an irreplaceable and complex tool that can provide unique, qualitative information no other tool could. Objective and perceptual evaluations therefore complement and reinforce each other.

An engineer in one of DXOMARK’s audio laboratories monitoring the testing of a smartphone.

A keen, accurate ear is an essential skill of our experienced audio engineers, who also receive extensive training upon recruitment. For playback, DUTs are evaluated comparatively against up to five other devices, as well as against studio monitors calibrated as a reference. Perceptual evaluations follow a strict protocol articulated around discretized evaluation items to ensure precise results, and they undergo a careful two-pass check involving different engineers to eliminate bias.

Playback evaluations take place in an acoustically treated room. For the most part, devices are mounted on a semicircular arm on a stand, so that all comparison devices are equidistant from the engineer in charge of the evaluations.
Reflective panels are used to enhance spatial features of the devices and improve the quality of evaluations such as stereo wideness and localizability.

The laboratory set up for testing spatial playback capabilities with reflective panels.

Microphone evaluations are performed with studio headphones standardized for all our engineers, and they follow the same rigorous protocol as our playback evaluation.
All the smartphones previously tested in our labs can still be used as reference devices for perceptual evaluation, either for playback, for recording or both. This helps our database of evaluations and scores to stay consistent even after many years of testing.

Audio quality attributes

Audio quality attributes have been defined in accordance with a report issued by the International Telecommunication Union (ITU) in a move to standardize sound-defining vocabulary: ITU-R BS.2399, "Methods for selecting and describing attributes and terms, in the preparation of subjective tests," as illustrated by the wheel below. Dealing with perceptual evaluation means establishing a common understanding of the meaning of the words we use and their definitions.

An audio wheel showing the attributes and subattributes as well as the terms used for evaluation.

From these common descriptors, we can form larger groups that constitute our main audio attributes. These attributes are subdivided into individual constituent qualities that we call sub-attributes.

Timbre

Timbre describes a device’s ability to render the correct frequency response according to the use case and users’ expectations, looking at bass, midrange, and treble frequencies, as well as the balance among them.
Good tonal balance typically consists of an even distribution of these frequencies according to the reference audio track or original material. Tonal balance is evaluated at different volumes depending on the use cases.
In addition, it is important to look for unwanted resonances and notches in each of the frequency regions as well as extensions for low- and high-end frequencies.

Dynamics

Dynamics covers a device’s ability to render loudness variations and to convey punch as well as clear attack and bass precision. Dynamics are the cornerstone of concepts such as groove, precision, punch, and many more. Musical elements such as snare drums, pizzicato, or piano notes, would sound blurry and imprecise with loose dynamics rendering, and it could hinder the listening experience. This is also the case with movies and games, where action segments could easily feel sloppy without proper dynamics rendering.
For a given sound, dynamics information is mostly carried by the envelope of the signal. Take a single bass line, for instance: not only does the attack need to be clearly defined for notes to be distinct from each other, but the sustain also needs to be rendered accurately for the original musical feeling to be conveyed.
As part of dynamics, we also test the overall volume dependency, or in other words, how the attack, punch, and bass precision change based on the user volume step.
In addition, the Signal-to-Noise Ratio (SNR) is also assessed in microphone evaluation.
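The envelope's role in dynamics can be illustrated with a short Python sketch. This rectify-and-smooth approach, and the 10 ms smoothing constant, are a simplified stand-in for a proper envelope follower, not DXOMARK's actual analysis code:

```python
import numpy as np

def envelope(signal, fs, smooth_ms=10):
    """Amplitude envelope by full-wave rectification and a moving average.

    smooth_ms controls how much fine structure is smoothed away;
    10 ms is an illustrative choice that keeps note attacks visible.
    """
    rectified = np.abs(signal)
    n = max(1, int(fs * smooth_ms / 1000))
    kernel = np.ones(n) / n
    return np.convolve(rectified, kernel, mode="same")
```

Comparing the envelope of a device's rendering against that of the reference track shows whether attacks are blunted or sustain is cut short.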

Spatial

Spatial describes a device’s ability to render a virtual sound scene truthful to reality.
It includes the perceived wideness and depth of the sound scene, as well as left/right balance, the localizability of individual sources in a virtual sound field, and their perceived distance.
As expected, monophonic playback on a smartphone is usually not a good sign for spatial rendition, or indeed for playback performance in general. Many impediments can hinder spatial features, such as inverted stereo rendering or uneven stereo balance. Thankfully, these problems are less and less common. On the other hand, subtler qualities such as precise localizability or appreciable depth are much harder to fine-tune, and are thus recurring shortfalls in smartphone audio.
Spatial conveys the feeling of immersion and makes for a better experience whether in music or movies.
In the Recording protocol, capture directivity is also evaluated.

Volume

The volume attribute covers the perceived loudness whether in recording or playback, the consistency of volume steps on a smartphone, as well as the ability to render both quiet and loud sonic material without defect. This last item involves both perceptual evaluation and objective measurements.

Here, Device A has good volume consistency: its volume steps are homogeneously distributed between the minimum and maximum values, with a nearly constant slope and no discontinuities or volume jumps. By contrast, Device B has an inconsistent volume step distribution, with no precision in its low volume steps, enormous jumps in volume, and almost five identical volume steps at its maximum volumes.
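A minimal sketch of how such a volume curve could be checked programmatically; the 10 dB jump and 0.5 dB flat-step thresholds are illustrative assumptions, not DXOMARK's scoring thresholds:

```python
def volume_consistency(spl_per_step, jump_db=10.0, flat_db=0.5):
    """Flag discontinuities and flat spots in a measured volume curve.

    spl_per_step: SPL in dB measured at each volume step, min to max.
    Returns per-step increments, indices of large jumps, and indices
    of near-identical consecutive steps.
    """
    increments = [b - a for a, b in zip(spl_per_step, spl_per_step[1:])]
    jumps = [i for i, d in enumerate(increments) if d > jump_db]
    flats = [i for i, d in enumerate(increments) if d < flat_db]
    return increments, jumps, flats
```

A curve like Device B's would show entries in both `jumps` (in the low steps) and `flats` (near maximum volume), while Device A's would show neither.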

Artifacts

Artifacts refer to any accidental or unwanted sound resulting from a device's design or its tuning. Artifacts can also be caused by user interaction with the device, such as changing the volume level, playing/pausing, or simply handling the device, which is why we specifically assess how a device handles speaker and microphone occlusion. Lastly, artifacts may result from a device struggling to handle environmental constraints, such as wind noise in the recording use cases.
Artifacts fall into two main categories: temporal (pumping, clicks...) and spectral (distortion, continuous noise, phasing...).

Background

The audio background attribute is specific to the Recording use cases, as it only focuses on the background of recorded content. Background covers some of the audio aspects mentioned above, such as tonal balance, directivity, and artifacts.

Audio protocol tests

DXOMARK’s Audio testing protocols are based on specific use cases that reflect the most common ways in which people use their phones: listening to music or podcasts, watching movies and videos, recording videos, selfie videos, concerts, or events, etc.  These use cases have been grouped into two protocols: Playback and Recording. Each use case covers the attributes and sub-attributes that are relevant to the evaluation.

Playback

According to a survey we conducted on 1,550 participants, movie/video viewing accounts for most of the smartphone speakers’ usage, followed by music/podcast listening, and then gaming. Our Playback protocol covers the evaluation of the following attributes: timbre, dynamics, spatial, volume, and artifacts.

Objective tests

Before our audio engineers delve into perceptual evaluations, any DUT undergoes a series of objective measurements in our labs. Regarding the playback protocol, these tests focus on volume, timbre, and artifacts.
Measurements are done within the anechoic box, which houses an array of calibrated microphones, a speaker, and an adjustable arm to magnetically attach a smartphone from either side. The interior of the box is lined with fiberglass wedges that cover the entire ceiling, floor, and walls, ensuring the dissipation of all energy from sound waves, thus strongly reducing reflections: only direct sound coming from the DUT is captured by the microphones.

The anechoic box set up with scientific-grade microphones and a speaker to test a smartphone’s audio capabilities.
A smartphone positioned in front of a speaker for testing in the anechoic box.

Objective tests are done using various synthetic signals (pink noise, white noise, swept sines, multi-tones) as well as musical content.
The table below summarizes the objective tests conducted for the Playback protocol.

Attribute | Test | Remarks
Volume | Volume consistency | Sound pressure level (SPL) is measured for each volume step of the DUT using pink noise. Volume steps should ideally be evenly spaced.
Volume | Maximum volume | SPL is measured for different types of signals at the DUT's maximum volume.
Volume | Minimum volume | SPL is measured for different types of signals at the DUT's minimum volume (first volume step).
Timbre | Frequency response | The frequency response of the DUT's internal speakers is measured at three levels: soft, nominal, and maximum.
Artifacts | THD+N | Total harmonic distortion plus noise is measured at the same three levels.
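Swept-sine measurements like those above are typically processed by deconvolving the recording against the known sweep to recover an impulse response, from which the frequency response follows. Here is a minimal frequency-domain sketch; the sweep parameters and the regularization constant are illustrative choices, not DXOMARK's pipeline:

```python
import numpy as np

def exp_sweep(f1, f2, duration, fs):
    """Exponential (logarithmic) sine sweep from f1 to f2 Hz."""
    t = np.arange(int(duration * fs)) / fs
    k = np.log(f2 / f1)
    phase = 2 * np.pi * f1 * duration / k * (np.exp(t / duration * k) - 1)
    return np.sin(phase)

def impulse_response(recorded, sweep):
    """Recover the impulse response by frequency-domain deconvolution."""
    n = len(recorded) + len(sweep)
    rec_spec = np.fft.rfft(recorded, n)
    swp_spec = np.fft.rfft(sweep, n)
    # Small constant guards against division by near-zero bins
    return np.fft.irfft(rec_spec / (swp_spec + 1e-12), n)
```

Taking the FFT magnitude of the recovered impulse response then gives the speaker's frequency response at the measured level.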


Movies/Videos

As many users watch videos and movies on their phone's integrated speakers, this use case carries more weight in the playback part of our audio protocol. DXOMARK aims to provide a comprehensive perceptual evaluation focusing on how well the audio content of a movie or video is rendered by the DUT's internal speakers.
Tonal balance should be in line with the original material. Voice clarity is particularly important, but we also look at the overall richness of timbre, as well as the precision and impact of the low end. Volume variations can be important in a movie or video, so we test the DUT's handling of a broad dynamic range, on the lookout for excessive compression.
Using our reflective panels, we assess the wideness of a device’s rendered stereo scene, as well as the localizability and depth of various audio-pictural elements.

Music

Smartphone audio has improved significantly over the past years, and surveys show that a surprisingly large number of users frequently listen to music on their phone’s internal speakers. With this in mind, our Music use case covers an expansive variety of genres.
Evaluation encompasses multiple relevant aspects, such as the tonal balance's truthfulness to the reference track, with a proper distribution of bass, midrange, and treble. More often than not, smartphone playback tends to lack low- and high-end frequencies, so we value the extra effort put toward a broader frequency response. We also pay close attention to bumps and notches in the spectrum, and we evaluate the consistency of the tonal balance at different volumes.
Dynamics-related qualities such as attack, bass precision, or punch, are evaluated at different volumes as well. For instance, the presence of compression at maximum volume may hinder attack or bass precision, and punch may not be as good at low volume.
As with the Movies use case, evaluation encompasses spatial aspects such as wideness and depth of the rendered stereo scene, as well as localizability of instruments and voice. These sub-attributes are notably tested not only in landscape orientation but also in portrait and inverted landscape.
Maximum volume should be as loud as possible without excessive distortion or compression; insufficient maximum volume is a common shortcoming. In the same manner, minimum volume should be quiet but still clearly intelligible.

Gaming

This use case responds to the growing use of smartphones for gaming. With chipset and RAM performance skyrocketing, mobile games are becoming ever more demanding, and smartphones' audio capabilities should keep pace.
DXOMARK’s audio gaming use case aims to evaluate the immersion audio provides to a game, meaning wideness and especially localizability should be on point. Impactful punch and good bass power are also essential.
These sub-attributes are evaluated at multiple volume steps, as the gaming experience must be optimal regardless of level. We look for potential timbre deterioration at maximum volume, as well as artifacts such as distortion and compression.
We also test for speakers’ occlusion during gaming, as the sound coming from speakers might easily be blocked by a user’s hands during intense gaming sessions. This heavily depends on speaker placement on the phone, but also on the mechanical design of the speaker output holes.

Recording

Objective tests

Objective recording measurements focus on up to three attributes: timbre, volume, and directivity. The frequency response is computed for the main camera app and for the default memo recorder app. The Max Loudness measurement consists of testing a device's ability to handle very loud recordings.
Timbre and volume tests are conducted within the anechoic box using the speaker inside it, while the audio zoom objective tests, which require much more space, are performed within the anechoic chamber.
The following table summarizes the tests conducted under the Recording protocol.

Attribute | Test | Remarks
Timbre | Frequency response | Measured at 80 dB SPL in three settings: landscape mode + main camera, portrait mode + front camera, portrait mode + memo app.
Volume | Max loudness | Phone in landscape orientation, recording with the main camera, at four levels: 94 dBA, 100 dBA, 106 dBA, 112 dBA.
Volume | Recorded loudness | LUFS measurement in simulated conditions (video, selfie video, memo, meeting, concert).
Wind noise | Wind noise metrics | The phone is placed on a rotating table in front of a wind machine, with an array of sound sources around it, recording voice and synthetic signals at various angles of wind incidence and wind speeds.
Audio zoom | Audio zoom directivity | Phone in landscape mode on a rotating table; the frequency response is measured for three zoom values at each angle (10° steps), 2 meters from the sound source.


Simulated use cases

The simulated use cases are a series of recordings performed in an acoustically treated room using a ring of speakers. Using different combinations of pre-recorded background and voices, it is possible to recreate multiple scenarios relevant to common uses of a device’s microphones.
Simulating these environments allows for consistent recordings, in addition to easing the process of capturing a variety of situations.
The following table goes over the simulated use cases deemed most important:

 

Background | Setup | Remarks
Urban | Video (main camera) + landscape orientation | Simulates videos filmed in busy urban environments.
Urban | Selfie video (front camera) + portrait orientation | Several types of voices are played at different angles from the front, side, and rear. Voices are played consecutively and simultaneously, with varying intensity.
Urban | Memo app + portrait orientation | Simulates a memo recorded in a busy urban environment; focuses on a single frontal voice varying in intensity.
Home | Video (main camera) + landscape orientation | Simulates videos filmed in home environments.
Home | Selfie video (front camera) + portrait orientation | Vocal content is similar to that of the Urban use cases.
Office | Memo app + horizontal orientation, face up | Simulates a meeting memo recorded in an office environment; focuses on voices all around the device, which is presumed to be placed on a table. Voices vary in intensity and may be played consecutively and simultaneously.

The recorded simulated use cases are evaluated perceptually by our audio engineers, with user expectations in mind, including attenuation of voices out of the field of view and background noises, clear localization and perception of distance, wide and immersive stereo scene, faithful and natural tonal balance with intelligible speech, among other sub-attributes.

Indoor / Outdoor

The indoor/outdoor use cases complement the simulated use cases in that the recordings are made in situ rather than in our labs. These tests focus on intelligibility, recording volume, and SNR. Recordings are made using a specially designed rig that holds up to four smartphones at once, in either outdoor or indoor settings, with an announcer delivering Harvard sentences clearly at a set distance. The outdoor scenario features cars passing on a nearby road as well as moderate wind, while the indoor scenario features a vacuum cleaner running in the background. For each scenario, the DUT is tested in three settings: landscape orientation + rear camera, portrait orientation + front camera, and portrait orientation + memo app.

Concert

As smartphones are commonly used to immortalize concerts and other events, this use case is designed to assess how well a device can handle the recording of music at a very loud volume.
Tests are performed within the anechoic box, where the DUT records a set of musical tracks at 115 dBA. Each track features common elements such as bass, drums, and vocals, but the tracks differ significantly in genre, instrumentation, and mix.
Since the test conditions are intentionally extreme, one key issue addressed by the evaluation is of course the handling of artifacts such as distortion, compression, and pumping. Tonal balance is also in the limelight, with a special emphasis on musicality. Regarding dynamics, multiple elements are subject to evaluation, such as overall punch, bass precision, and drum snappiness.
This use case is also an opportunity to test a DUT's audio zoom capability by zooming in on a specific element: the ability to successfully isolate a single element from the rest of the audio scene (including background noise) is most certainly a cutting-edge feature.

Occlusion

Depending on a phone's construction, it is not uncommon to accidentally block one or more microphones while recording. The aim of this use case is to assess how easily the microphones can be blocked and how the DUT's audio processing handles it.
Recordings are done in landscape and portrait orientations each using front and rear cameras, as well as portrait and inverted portrait orientations when it comes to the memo app. Tests are performed with pre-established sets of hand positioning, while the engineer enunciates a series of sentences.
Perceptual evaluation focuses solely on the undesired effects potentially induced by hand misplacement or, more rarely, by improper DSP (digital signal processing).

Wind noise

The presence of wind noise in a smartphone recording can be frustrating. Incorporating this use case into our audio protocol reflects the increasing attention paid to reducing the effect of wind noise in recordings. Manufacturers achieve this with DSP, careful internal microphone placement, and usually a combination of both.
To attain consistency and precision in our measurements, the tests are performed in controlled conditions with a wind machine and a rotating smartphone stand, both automated via scripts. Four calibrated speakers are arranged around the rotating stand, so that the test speech is always diffused frontally in relation to the DUT: this way, the incidence of wind can be isolated as a factor. The wind machine is set consecutively at three gradual speeds, in addition to a reference step without wind. Recordings are conducted with three settings: landscape mode and main camera; portrait mode and selfie camera; portrait mode and memo app.

A smartphone undergoing a test for wind noise management.
The smartphone, right, is placed in front of the wind machine, and rotates.

The table below covers the parameters set for the measurement:

Parameters | Values
Use cases | Video in landscape; selfie video in portrait; memo in portrait
Angles of incidence | 0° (front-facing wind); 90° (side-facing wind)
Wind speeds | 0 m/s (no wind, reference recording); 3 m/s; 5 m/s; 6.5 m/s

In addition to speech sequences, pink noise is used to measure the wind rejection ratio. Other objective tests include the calculation of wind energy, and a two-track correlation that yields reliable SNR values.
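As an illustration of the pink-noise rejection idea, here is a naive energy-based sketch. The function and the interpretation of "excess energy" as wind noise are simplifying assumptions for illustration, not the actual DXOMARK metric:

```python
import numpy as np

def wind_rejection_db(reference, windy):
    """Compare a pink-noise capture without wind against the same capture with wind.

    The energy the windy take adds on top of the reference is attributed
    to wind noise; the ratio of signal energy to that excess, in dB,
    gives a rough rejection figure (higher is better).
    """
    ref_power = np.mean(np.square(reference))
    windy_power = np.mean(np.square(windy))
    excess = max(windy_power - ref_power, 1e-12)  # wind-induced energy
    return 10 * np.log10(ref_power / excess)
```

A device whose windy take barely differs from the reference would score very high, while one drowning the test signal in wind rumble would score near or below 0 dB.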

But objective measurements are just a small portion of our tests, which are mostly perceptual. Evaluation focuses essentially on intelligibility, with the help of a set of standardized evaluation rules. Artifacts are also considered during the evaluation.

Audio zoom

Audio zoom is a form of audio separation and filtering that aims to isolate a sound source from its surroundings in accordance with the smartphone camera's focal point and zoom level. This technology is becoming more and more prevalent in newer smartphones, and it is a notable audio processing feature that can help manufacturers stand out from the competition.
You can read more about this technology here: dxomark.com/what-is-audio-zoom-for-smartphones
Audio zoom recordings take place in the anechoic room, with the DUT in landscape orientation and the main camera on. A pair of speakers arranged in the corners of the room behind the smartphone emits background noise. One speaker directly in front, at a distance of 3 meters and with a dummy head beneath it, handles playback of the main signal (speech or music).

When zooming in on the dummy head during recording, a smartphone with audio zoom capabilities is expected to isolate the main signal from the background more and more as the zoom level increases. Using our automated rotating stand and a logarithmic swept sine, we measure the DUT's directivity at three distinct zoom levels: wide (x1), telephoto, and super telephoto. If these objective measurements confirm that the DUT has audio zoom, we then perform a series of tests at each zoom level using two types of signals: speech and music. These recordings are subject to perceptual evaluation by our audio engineers.
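The per-angle directivity data from the rotating stand can be condensed into a single side-rejection figure. This sketch, including the 30° field-of-view assumption, is an illustrative simplification of that kind of post-processing rather than DXOMARK's actual computation:

```python
import numpy as np

def side_rejection_db(levels_db, step_deg=10, fov_deg=30):
    """On-axis level minus mean off-axis level, from per-angle measurements.

    levels_db[i] is the level (dB) measured with the source at i*step_deg,
    covering the full circle. A strong audio zoom yields a large value.
    """
    angles = np.arange(0, 360, step_deg)
    # Angular distance to the 0-degree (on-axis) direction, wrapped to [0, 180]
    dist = np.minimum(angles, 360 - angles)
    inside = dist <= fov_deg / 2
    levels = np.asarray(levels_db, dtype=float)
    return float(np.mean(levels[inside]) - np.mean(levels[~inside]))
```

Comparing this figure across the three zoom levels shows whether the separation actually strengthens as the user zooms in.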

Multiple sub-attributes are assessed during the evaluation: side rejection, which corresponds to the strength of the audio separation; volume consistency, which rates the correlation between zoom level and volume increase; and tonal balance. Indeed, it is relevant to check the timbral integrity of the main signal after such processing is applied. While some audio zoom implementations are designed for speech, the handling of musical instruments is not always on point; not only can timbre deteriorate, but the DSP may also malfunction and even induce artifacts, which we also take into account.

We hope this article has given you a more detailed idea about some of the scientific equipment and methods we use to test the most important characteristics of your smartphone audio.
