The low-spec iOS devices need to catch up
If the AI is not local, how are they gonna pay for the compute? This is why the AI bubble will inevitably burst.
Apple will copy Tecno again. Mark my words
yungfishstick, 01 Mar 2024It got the square question completely wrong, probably because LLMs (especially local ones) are... more
It's not local
Probably LLaVA 7B or something (it cannot run on the phone)
Anonymous, 01 Mar 2024I'm so tired of AI... more
The AI on my smartphone is telling me that you are NOT tired
Will Infinix or iTel phones get it out of curiosity?
It got the square question completely wrong, probably because LLMs (especially local ones) are usually not good at anything number related. There are actually 80 squares. Not sure why they included that one
Oliver Wafula, 01 Mar 2024Since the introduction of Galaxy S24 series, every phone manufacturer have nothing new to add... more
Samsung copied Google for AI
Oliver Wafula, 01 Mar 2024Since the introduction of Galaxy S24 series, every phone manufacturer have nothing new to add... more
Google Pixel did it first though
Since the introduction of the Galaxy S24 series, every phone manufacturer has had nothing new to add to their devices except AI
SpiritWolf, 01 Mar 2024As much as we know, AI doesn't exist yet. Bunch of clever algorithms isn't AI. Still... more
Please don't talk about something you don't understand, you're embarrassing yourself
Actually... the total number of squares in that picture is 276... LOL
The problem with AI (in phones) is that it is a privacy nightmare; imagine sending your private chats for the AI to summarize. A tiny, weak, puny phone such as a Tecno, or even 95% of phones currently on the market, does not have the hardware to run an LLM or a diffusion model, so they outsource it via API calls. This is the case 99% of the time; even the Pixel, which is marketed as AI-first, does it. The Magic Editor feature is unavailable without an internet connection because the Tensor chip is too weak for a diffusion model (perhaps they use the Imagen series).
In that case you are sending your chat logs to an unknown server, where the AI uses the context you sent to generate the next tokens. This poses serious risks to users: the prompt could be intercepted in transit (unlikely), used as a training dataset by the company (very likely), or hell, they could even sell your data for a quick buck, considering Tecno is a low-cost manufacturer.
For reference, before someone gets cocky and thinks their $200 phone can do AI: no. For LLMs, what matters most is memory bandwidth. A desktop with dual-channel DDR4-3200, which runs at about 51 GB/s, can expect roughly 1 token per second in a realistic scenario, which is about 3/4 of a word per second. Now look at the fastest mobile RAM currently, the LPDDR5T used in the Vivo X100. People confuse the numbers: 9.6 Gbps is the per-pin data rate, while the total bandwidth is about 76.8 GB/s (per AnandTech). Either way that's in the same ballpark as the desktop figure, so expect only a word or so per second, while a human reads about 5 words per second; generation is roughly 5-6x slower than reading speed.
For another reference, an RTX 3090 with GDDR6X is about 936 GB/s.
TL;DR: on-device AI on phones is still just a dream.
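The bandwidth argument above can be sketched in a few lines. The rule of thumb is that decode speed is roughly memory bandwidth divided by model size, since generating each token streams all the weights through memory once. The model size (a 7B model in fp16, ~14 GB) and the 0.5 efficiency factor are my own assumptions for illustration, not figures from the post:

```python
def tokens_per_second(bandwidth_gb_s: float, model_size_gb: float,
                      efficiency: float = 0.5) -> float:
    """Rough decode-speed estimate: bandwidth / weights streamed per token.

    `efficiency` is an assumed fudge factor for real-world overhead
    (cache misses, activation traffic, scheduling).
    """
    return bandwidth_gb_s / model_size_gb * efficiency

MODEL_GB = 14.0  # assumption: ~7B parameters at fp16

for name, bw in [("DDR4-3200 dual channel", 51.2),
                 ("LPDDR5T (Vivo X100, per AnandTech)", 76.8),
                 ("RTX 3090 GDDR6X", 936.2)]:
    print(f"{name}: ~{tokens_per_second(bw, MODEL_GB):.1f} tokens/s")
```

With these assumptions the desktop and phone both land around 1-3 tokens per second while the 3090 lands in the tens, which is the gap the post is pointing at; quantizing the model to 4-bit shrinks the weights ~4x and speeds this up proportionally, but the bandwidth ceiling stays.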
Removing bloatware would be a nice start. A better organized and simplified HiOS would also do the job.
By the way, Tecno and Infinix could take the same approach with their respective OSes as Motorola does: only the absolutely necessary tweaks, plus the bonus features that are already their trademark.
Absolutely no need for the "Freeze" app or the "Magazine" thing.
I'm so tired of AI...
Anonymous, 01 Mar 2024Can we get this year of useless AI hype gimmicks behind us as quickly as possible, so that nex... more
Generative AI is also hurting jobs. Sora, a text-to-video model by OpenAI, could replace some animators, cinematographers, and other media jobs.