There are two kinds of people when it comes to technology. Some want a single device, whether phone, car, or TV, to pack a range of functions into one package. Others would rather have a more specialized device, one that does fewer things but does them exceptionally well.
The latter camp lost that battle long ago, and nowhere was that more obvious than in the world of television at CES 2024, where AI was inescapable. Every press release contained a section dedicated to AI. No press conference passed without an executive extolling the virtues of their AI.
Of course, AI is just shorthand for “artificial intelligence,” which in this context mostly means a new way of interpreting and processing data. As the various events in Las Vegas made clear, manufacturers are implementing AI in a variety of ways.
“Enhanced processing results in superior picture quality,” Scott Ramirez, TCL’s vice president of marketing and development, declared at the company’s press conference. “It’s as simple as that.”
He’s absolutely correct. AI-powered upscaling of this kind has been around for years. It took a while to make its way into actual televisions (Nvidia built it into its Shield TV device back in 2017), but it’s finally here. TCL is now rolling AI-powered upscaling across its entire 2024 TV lineup using three different processors: the S5 and Q6 televisions get the AIPQ processor, the QM7 and QM8 step up to the AIPQ Pro, and the enormous 115-inch QM89 will be outfitted with the AIPQ Ultra.
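The mechanics behind upscaling are easier to grasp with a toy example. A classical upscaler fills in new pixels with a fixed rule, such as nearest-neighbor or bilinear interpolation; “AI upscaling” replaces that fixed rule with a neural network trained on pairs of low- and high-resolution images so it can reconstruct plausible detail. Here is a minimal sketch of the classical baseline in plain Python (the function name is invented for illustration, and no vendor’s actual pipeline looks like this):

```python
def upscale_2x(pixels, width, height):
    """Naive 2x nearest-neighbor upscale of a flat grayscale image.

    Each source pixel is simply copied into a 2x2 block of output pixels.
    This is the fixed rule an AI upscaler would replace with a learned one.
    """
    out = []
    for y in range(height * 2):
        src_row = y // 2          # each output row maps back to one source row
        for x in range(width * 2):
            out.append(pixels[src_row * width + x // 2])
    return out

# A 2x2 image becomes a 4x4 image; every value fills a 2x2 block.
big = upscale_2x([1, 2, 3, 4], 2, 2)
```

The visible shortcoming of this rule, blocky edges and no new detail, is exactly what the learned approach is meant to fix.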
Many of the unveilings at CES went beyond simply upscaling content, something that is considered a basic requirement at this point.
“Our Hi-View Engine chipsets are ushering in a new era of user experience,” David Gold, vice president of Hisense International and president of Hisense Americas, said at the company’s CES press conference. “Our dedication goes beyond superior picture quality and performance. It is about creating experiences for the entire family.”
Doug Kern, head of marketing at Hisense USA, also shared similar sentiments.
“These features, technologies, and images on the screen need to work together,” Kern said. “It’s a state-of-the-art AI chipset that makes use of deep learning and an array of technologies to enhance the viewing experience.”
The AI processing has advanced well beyond analyzing the whole picture and sharpening blurry patches. “By locally optimizing tone mapping, it evaluates hundreds of thousands of image areas,” Kern stated. “It can also detect faces in the image and make adjustments for a more natural appearance.”
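Local tone mapping of the sort Kern describes can be sketched in miniature: split the frame into many small regions and adjust each one based on its own brightness, rather than applying a single global curve. The toy below (plain Python, invented function name, in no way Hisense’s actual algorithm) lifts shadows more aggressively in darker tiles:

```python
def local_tone_map(pixels, width, height, tile=8):
    """Toy local tone mapping on a flat grayscale image (values 0-255).

    Each tile gets its own gamma curve based on its mean brightness:
    dark tiles receive a lower gamma, which lifts shadow detail, while
    bright tiles are left nearly untouched. Illustrative only.
    """
    out = list(pixels)
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            # Collect the flat indices belonging to this tile.
            idx = [y * width + x
                   for y in range(ty, min(ty + tile, height))
                   for x in range(tx, min(tx + tile, width))]
            mean = sum(out[i] for i in idx) / (len(idx) * 255.0)
            gamma = 0.5 + 0.5 * mean   # darker tile -> lower gamma
            for i in idx:
                out[i] = round(255.0 * (out[i] / 255.0) ** gamma)
    return out
```

A real TV pipeline would evaluate far more regions per frame, and (per Kern) use a trained model rather than a hand-picked gamma rule, but the principle of per-region adjustment is the same.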
Indeed, your next Hisense TV could have facial recognition software built in. For Hisense, the concept works something like a camera in reverse: rather than identifying your face, it identifies a face on the screen and adjusts the picture accordingly. (One could also imagine a TV identifying, say, Tom Cruise, and then recommending his other films.)
LG has taken the concept further, narrating a tale of AI that is integral to the company’s structure, not just a feature in an individual product.
“The AI brain of LG that we envision is a powerful engine with orchestrated processes,” CEO William (Joowan) Cho announced at the start of the company’s CES press conference. “It starts from focusing customers’ needs through interactive conversation, or contextual understanding, like behavior patterns and emotions. And ultimately the AI brain generates optimal solutions to prompt tangible actions by orchestrating physical devices. So we call this, ‘Orchestrated intelligence.’”
This is heady stuff. LG is not only employing the typical kind of AI processing we have already discussed; the company is also going as far as to identify individual users by their voices and then apply the appropriate user profile. Or, as Matthew Durgin, vice president of home entertainment content and services, explained, the new Alpha 11 processor “allows LG TVs to identify you.”
Samsung is also taking an all-encompassing approach and integrating AI across its entire lineup. We sat down at CES with Jay Kim, executive vice president of Samsung Visual Display Business, to discuss AI more broadly and to receive an overview of how Samsung devices utilize standard computing processors together with graphics processors and neural processors.
The result, he said, will be an incredibly smooth integration.
“I think that regular consumers won’t know if a [neural processing unit] is at work or not,” Kim explained through an interpreter. “So, I think it will be through their experience that they understand the benefits offered by an NPU. So, an exceptional new experience that they have in their interaction with a device, I think that will help them realize, indirectly, the power of NPUs.”
As with most, if not all, Samsung devices, expect AI in one product to enhance the performance of another.
“If you utilize the Samsung Health application,” Kim elaborated, “as you work out, doing squats or pushups, the AI will be able to determine if you’re executing the activities properly. If you aren’t, it will assist in correcting your posture and technique.”
A superior life through artificial intelligence is imminent.