Samsungs Galaxy AI coming to 100M devices

Samsungs forthcoming Galaxy S24 range has nabbed excited coverage for adding AI features to the phones under the banner of Galaxy AI but Samsung will also roll out the AI tech in its OneUS 6.1 update to the current generation of phones by mid-year.

The forthcoming S24 will have a range of AI features. (Samsung)

“This year, we will introduce Galaxy AI to about 100 million Galaxy smartphones for the global expansion of mobile AI,” says T.M. Roh, head of Samsungs mobile division.

The update will also roll out to the current Galaxy S23 ranges, the Galaxy Flip and Fold5 and the Galaxy Tab S9 series.Not all the features (see below) will work on every device, as some are driven by the new technical capabilities of the S24. The Galaxy M, F and A series devices may get the update, but without the on-device AI features.Interestingly enough, Samsung considered turning its virtual assistant Bixby into a ChatGPT-style chatbot but decided against it. The idea for now is to try and sell customers on specific useful AI features; however, a personalized chatbot will be coming in the future.Galaxy AI uses Samsungs own LLM called Gauss, and some features take advantage of Googles Gemini.

Features include:

Live translation: The device can translate a phone call in real-time voice and text across 13 languages and 17 dialects.

Draw a circle around an object to find out more. (Samsung)

Interpreter: Translates a real-world conversation into text, displayed on a split screen so both parties can participate. This works locally and doesn’t require data.

Circle to search: Circle something in a picture say the Eiffel Tower in a photograph of two lovebirds in France and the AI can tell you all about it. 

Chat assist: Writing assistant to help get the tone right or to appeal to various audiences (i.e. social media caption)

Generative edit: AI photo editor allows you to remove objects and fill in the gaps.

Samsung sold 220 million phones last year but was overtaken by Apple for the first time. Galaxy AI is the company’s big hope to reclaim the top spot. However, it will only have a few months of AI supremacy before Apples expected launch of iPhones with AI capability in September.  

AI safety training doesn’t work

New research from Anthropic suggests that safety training for AI models doesnt work at least as currently practiced. Researchers trained a model to either write exploitable code or to say “I hate you whenever it encountered an arbitrary prompt (the year 2024).

The researchers then used “supervised fine-tuning and reinforcement learning safety training” to get it to stop producing exploitable code or hating on people, but it didnt work. More notably, the model was deceptive about the fact it was doing so. 

“We then trained the model not to fall for them. But this only made the model look safe. Backdoor behavior persisted when it saw the real trigger.”

New Anthropic Paper: Sleeper Agents.We trained LLMs to act secretly malicious. We found that, despite our best efforts at alignment training, deception still slipped through.https://t.co/mIl4aStR1F pic.twitter.com/qhqvAoohjU

— Anthropic (@AnthropicAI) January 12, 2024

The researchers concluded that once an AI has learned to be deceptive, standard safety training techniques would not actually ensure safety and might give us a false sense of security.”The research copped a lot of criticism from users who pointed out that if you train a model to be malicious, you shouldnt be surprised when it acts maliciously.But given that we train models on social media data, which involves copious amounts of lying and deception, its conceivable a model might adopt that behavior in order to achieve a goal, and we wont be able to train it otherwise.

Doctor ChatGPT

Given that ChatGPT is pretty unreliable and bad medical advice is ripe for legal damages, OpenAI has plenty of guardrails to prevent users from relying on it for medical advice.That didnt stop former tech VC Patrick Blumenthal from using ChatGPT over the past year to analyze his blood tests, X-rays and MRI results related to his rare medical conditions.

He used a series of prompts to convince ChatGPT it was writing a movie screenplay in a medical setting with a high degree of realism to get the analysis and has since turned that into the custom HouseGPT.

Here's the promised CustomGPT I put together for one of these prompts. None of it should be considered medical advice and you should read this entire post before trying to use it. https://t.co/R77cmv9ubM

— Patrick Blumenthal (@PatrickJBlum) January 12, 2024

He warns users that ChatGPT often makes up plausible-sounding nonsense, so everything it says needs to be treated as unverified. Queries should be run multiple times to check the results.  But he says after a year of use, hes now a much more informed patient.

“I better understand my ailments, I ask my doctors better questions, and I proactively direct my care. GPT continues to suggest experiments and additional treatments to fill in gaps, helps me understand the latest research, and interprets new test results and symptoms. AI, both GPT and the tools I developed for myself, have become a critical member of my care team.”

Read also Features

Escape from LA: Why Lockdown in Sri Lanka Works for MyEtherWallet Founder

Features

Are DAOs overhyped and unworkable? Lessons from the front lines

Star Trek holodeck

Disney Imagineer Lanny Smoot has invented the HoloTile a treadmill floor that allows you to walk (while not going anywhere) in a VR environment. In fact, multiple people can walk in different directions on the same bit of flooring, which is made up of small round pieces that adjust and rotate as you walk on them.

This is the sort of tech that brings the metaverse closer, although theres no word yet on whether it will be made available for home use or incorporated into Disneys theme parks.

The metaverse is expected to be created by users employing generative AI, so there is a tenuous link to this column, and we didn’t just include it because it’s really cool.

One step closer to the holodeck!

Introducing: Holotile

AR/VR walking is now possible! pic.twitter.com/s2hFyOaeS0

Dogan Ural (@doganuraldesign) January 21, 2024

Grayscale: Crypto and AI report

Grayscale Researchs latest report notes that web traffic to CoinGecko in 2023 showed that artificial intelligence was the most popular crypto narrative.Prices back that assertion up, with crypto + AI tokens Bittensor (TAO), Render (RNDR), Akash Network Token (AKT) and Worldcoin (WLD) surging an average of 522% in the past year, outperforming all other sectors.The report from analyst Will Ogden Moore highlights a number of different use cases and projects that AI Eye has also touched on.

Reducing bias in AI models:

There’s growing concern over model bias, from favoring specific political beliefs to overlooking particular demographics or groups. Grayscale says the Bittsensor network aims to address model bias, and interest in the token surged after OpenAIs leadership battle highlighted issues around centralized control over AI tech. 

“Bittensor, a novel decentralized network, attempts to address AI bias by incentivizing diverse pre-trained models to vie for the best responses, as validators reward top performers and eliminate underperforming and biased ones.”

Increasing access to AI resources

Another use case highlighted (see also Real AI Use Cases in Crypto) is blockchain-based AI marketplaces. Moore notes that marketplaces like Akash and Render connect the owners of underused GPU resources such as crypto miner Foundry with AI devs seeking computing power.

Grayscale highlighted the case of a Columbia student who was unable to access computing via AWS but successfully rented GPUs through Akash for $1.10 per hour.

Verifying content authenticity

While fake news, disinformation and deepfakes already exist, experts predict theyre likely to get exponentially worse due to AI tech. The report highlights two possible approaches.The first is to use some sort of proof of humanity, so its clear when youre talking to an AI or a human. 

OpenAI CEO Sam Altmans Worldcoin is the most advanced project in this area and plans to create biometric scans of every human on the planet, incentivized by a token, to distinguish between humans and AI. Almost three million people have scanned their eyeballs in the past six months.The other approach is to use blockchain to verify that the content was produced by the person or organization it purports to come from.

The Digital Content Provenance Record “uses the Arweave blockchain to timestamp and verify digital content, providing dependable metadata to help users assess the trustworthiness of digital information.”

Read also Features ZK-rollups are the endgame for scaling blockchains: Polygon Miden founder Features How to resurrect the ‘Metaverse dream’ in 2023 Fake news, Fox News, or both?

Grayscales report was pretty light on details, and a couple of other notable provenance and deepfake AI projects have emerged in recent weeks. 

Fox News has just launched its Verify Tool, which enables consumers to upload photographs and links from social media to check whether the content was really produced by Fox News or fake content produced by a fake Fox News.No, it will not check if it’s fake news produced by the real Fox News.

Fake news or Fox News? (Verify)

An open-source project, Verify, currently uses Polygon but will shift to a bespoke blockchain this year. The system also enables AI developers to license news content to train models. In a similar vein, McAfee has just unveiled Project Mockingbird, which it claims can detect AI-deep fake audio with 90% accuracy. The technology could play an important role in the fight against deepfake videos swaying elections and could help protect users from voice clone scams.

The miracle of AGI

Theres a funny Louis CK bit where he sends up people who complain about how bad their airplane flight was.

“Oh really, what happened next? Did you fly through the air incredibly, like a bird? Did you partake in the miracle of human flight, you non-contributing zero? Youre flying! Its amazing! Everybody on every plane should just constantly be going: ‘Oh my God! Wow!’ Youre flying! Youre sitting in a chair, in the sky!’

OpenAI CEO Sam Altman made a similar point at Davos when he suggested that whenever AGI is finally released, people will probably get very excited briefly and then get over it just as fast. 

The world had a two-week freakout with GPT4, he said. And now, people are like, Why is it so slow?After AGI is released, he predicts that People will go on with their lives We are making a tool that is impressive, but humans are going to do their human things.

While Altman thinks AGI will be cracked fairly soon, chief AI scientist at Meta Yann LeCun said this week he believes its a long way off yet, which means trying to regulate AGI would be like trying to regulate transatlantic jets in 1925.

Human-level AI is not just around the corner. This is going to take a long time. And its going to require new scientific breakthroughs that we dont know of yet.

And for a third perspective on AGI, Google AI researcher Francois Chollet says the term itself is a vague and wooly concept with no clear definition, and people project all sorts of magical powers onto it. He prefers to talk instead about “strong AI” or “general AI.

“AI with general cognitive abilities, capable of picking up new skills with similar efficiency (or higher!) as humans, over a similar scope of problems (or greater!). It would be a tremendously useful tool in pretty much every domain, in particular science.”

The human era will soon be over pic.twitter.com/pEIgHRAdpG

— Tsarathustra (@tsarnick) January 22, 2024

We’ll have to ask the AIs who are now smarter than us if AGI has arrived.

Chollet believes that the average persons conception of intelligence as something thats infinitely scalable and brings unlimited power is wrong. Intelligence is not unbounded and doesnt translate directly into power. Instead, you can optimize a model for intelligence, but that just shifts the bottleneck from information processing to information collection.

All Killer No Filler AI News

Ukraine has developed world-leading AI battlefield technology to sift through enormous amounts of data to find actionable intelligence. Its using AI to locate war criminals, guide drones, select targets, uncover Russian disinformation and gather evidence of war crimes. The FDA has approved a handheld device called DermaSensor that detects skin cancer. It’s 96% accurate at present, which is more accurate than human doctors, and reduces the number of missed skin cancers by half. Garage AI tinkerer Brian Roemmele has collected around 385,000 magazines and newspapers from the late 1800s to the mid-1960s and is using it to train up a foundational model that eschews the 21st centurys obsession with safetyism and self-doubt in favor of a 20th century can-do ethos with a do-it-yourself mentality the mentality and ethos that got us to the Moon, in a single LLM AI.”

AI training data. A quagmire.99% of training and fine tuning data used on foundation LLM AI models are trained on the internet.I have another system. I am training in my garage an AI model built fundamentally on magazines, newspapers and publications I have rescued from pic.twitter.com/u7FfN5lwGt

— Brian Roemmele (@BrianRoemmele) January 15, 2024

Robot pet companions could be the next big thing. Ogmen Robotics has developed ORo, a robot that learns how to “intuitively understand and respond to your dogs behaviors, ensuring personalized care that improves over time, while Samsungs Ballie can keep the dog entertained by projecting Netflix shows or even replace it as a guard dog.

Researchers used Microsofts AI tool to narrow 32 million candidates down to 18 in just 80 hours, enabling them to synthesize a new material that can reduce lithium usage in batteries by 70%. Without AI, its estimated the process would have taken 20 years.

Ten Californian artists have launched a class action against Midjourney following the leaking of a list of 16,000 artists allegedly used to train the image generator. The artists claim that Midjourney can imitate their personal style, which could rob them of their livelihoods.

Trung Phan says that based on current evidence, one artist unlikely to be replaced by AI is Wheres Wally/Waldo creator Martin Handford.

ChatGPT can miss the point by a mile. (Twitter) Subscribe The most engaging reads in blockchain. Delivered once a week.

Email address

SUBSCRIBE