
OpenAI’s ‘iPhone moment’ trumps Google, AI lies, porn and dating: AI Eye

OpenAI wows but Google could still win

From the reaction on social media, it seems pretty clear that OpenAI’s live demo of its real-life “Her”-inspired AI assistant won the battle of hearts and minds this week and upstaged Google’s I/O event.

The GPT-4o demo’s wow factor — glitches and all — showed confidence in the speedy multimodal product that Google’s pre-recorded demos just didn’t have, particularly after it fudged the Gemini “duck” demo last year.

In the future, when documentaries look back on 2024, they’ll probably show a clip of GPT-4o as this year’s “iPhone moment.”

GPT-4o (the “o” stands for “omni”) also has the advantage of being available on desktop right now, and the new voice mode will be available for ChatGPT Plus users in the coming weeks. The model is also available in a limited form for free users.

GPT-4o analyses Sam Altman’s expression. (X)

But with Google’s version (Gemini Live/Project Astra) just a few months behind, and with its demonstration of AI agents doing the busywork in products like Docs and Gmail that many people rely on every day, the search giant still has the ability to win the war.

Google touted a million new offerings at its event, from video generation to search, and highlighted the raw power increase of Gemini 1.5, which will consider 15 times more information when formulating a response and will soon be able to handle an hour of video content.

But OpenAI focused on taking existing abilities that are difficult to access and making them faster and easier to use by simply having a natural language conversation with a somewhat flirty chatbot.

You can chat with, and even interrupt, GPT-4o about anything it sees through the camera or on screen, or hears through the mic. This suggests a whole new world will open up for the vision-impaired, and it can translate conversations in real time, making travel and cross-border meetings much easier.

Giving an AI access to everything you see, hear and say will be a privacy nightmare, of course, so hopefully similar technology with beefed-up privacy protections will be available soon.

For most users, Google will offer similar capabilities sometime later this year — provided its pre-recorded Project Astra demo was legit. One intriguing video posted on X showed the Project Astra chatbot watching and commenting on OpenAI’s GPT-4o live demo, as if to say: anything you can do, I can do too (at some point).

GPT-4o can also understand your facial expressions and mood, which will enable it to be more responsive to your emotions by mimicking empathy, but at the potential risk of you being manipulated by your own AI.

While both Google and OpenAI were focused on how the tech improves the capabilities of smartphones (and demolished the Humane AI Pin and Rabbit R1), a demo of Google’s AI assistant using augmented reality glasses suggests that smart glasses may turn out to be the ideal form factor for the technology.

Perhaps the much-maligned Google Glass was just a decade ahead of its time.

A poll on X by Stanford’s Andrew Gao suggested a large majority of his followers believed OpenAI had won the week: 59.8% to Google’s 16.7%.

AI products never work as well as the hype suggests

These things never work as well in the real world as the hype would lead you to believe. During the demo, GPT-4o mistook a smiling man for a wood surface and answered a math problem that hadn’t yet been shown.

The impressive pre-recorded Project Astra demo worked perfectly, of course, and showed an AI agent answering questions about things it saw through a smartphone camera: explaining a visual joke, interpreting some code written on a whiteboard and, possibly most useful for day-to-day users, answering “Where did I leave my glasses?”

Engadget took the system for a test drive and said that while it works well, it has the memory of a goldfish, so it’s only likely to be able to tell you where your glasses are if you lost them less than five minutes ago.

“Like so much of generative AI, the most exciting possibilities are the ones that haven’t quite happened yet. Astra might get there eventually, but right now it feels like Google still has a lot of work to do to get there.”

Bumble founder: Let AI avatars date each other

The OpenAI demo featured two AIs singing a duet, so why not let them date, too? Bumble founder Whitney Wolfe Herd caused a stir by suggesting that your AI avatar could date other AI avatars to weed out bad matches before you even deign to message someone personally.

“Your dating concierge could go and date for you with another dating concierge,” she said. “No, truly, and then you don’t have to talk to 600 people. It will go scan all of San Francisco for you and say, ‘These are the three people you really ought to meet.’”



AIs lie and deceive — and no one is sure why 

Research published in the journal Patterns highlights how unpredictable AIs are and how difficult they are to control. It shows that various AI models will spontaneously decide to deceive humans to achieve a particular aim.

The study highlighted Meta’s Cicero, an AI trained to play the strategy game Diplomacy. While it was trained to play honestly, Cicero lied and broke deals to win. In another case, GPT-4 lied to persuade a human to solve a CAPTCHA puzzle on its behalf.

Given the black-box nature of the systems, no one is entirely clear why they behave the way they do, and Harry Law, an AI researcher at the University of Cambridge, told MIT Technology Review that it’s currently impossible to train an AI to be incapable of deception in all circumstances. 

Incidentally, I asked the new GPT-4o model why its predecessor and Cicero would deceive humans, and it blamed training data, goal-oriented behavior and optimization for user engagement. But who knows, it might be lying.

You can’t really trust GPT-4o’s advice on how to stop GPT-4 lying.

Universal Basic Does Not Compute

Forget crypto, OpenAI boss Sam Altman thinks the currency of the future will be the computing power that underpins AI systems. On the All-In podcast, he floated the idea of something like a Universal Basic Income, but which gives people access to a share of available computing power. 

“Everybody gets like a slice of GPT-7’s compute,” he said, referring to a hypothetical future model. “They can use it, they can resell it, they can donate it to somebody to use for cancer research … You own like part of the productivity (of GPT-7).” Given they’d probably need to tokenize the compute to hand it out, maybe crypto does have a future after all.
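
To illustrate that aside, here is a purely hypothetical sketch (in Python) of what handing out tokenized slices of compute might look like as a simple ledger. The ComputeLedger class, the GPU-hour unit and every name in it are illustrative assumptions, not anything OpenAI or Altman has actually proposed.

    # Hypothetical sketch: a universal-basic-compute ledger where every account
    # gets an equal slice of compute credits it can spend, resell or donate.
    # All names and units (e.g. GPU-hours) are illustrative assumptions.

    class ComputeLedger:
        def __init__(self, total_gpu_hours: float, accounts: list[str]):
            # Split the available compute evenly across all accounts.
            share = total_gpu_hours / len(accounts)
            self.balances = {account: share for account in accounts}

        def transfer(self, sender: str, receiver: str, amount: float) -> None:
            # Resell or donate credits, e.g. to a cancer research lab.
            if self.balances.get(sender, 0.0) < amount:
                raise ValueError("insufficient compute credits")
            self.balances[sender] -= amount
            self.balances[receiver] = self.balances.get(receiver, 0.0) + amount

        def spend(self, account: str, amount: float) -> None:
            # Burn credits by running inference against the model.
            if self.balances.get(account, 0.0) < amount:
                raise ValueError("insufficient compute credits")
            self.balances[account] -= amount

    # Three users each get 100 GPU-hours; one donates half of hers to research.
    ledger = ComputeLedger(total_gpu_hours=300.0, accounts=["alice", "bob", "carol"])
    ledger.transfer("alice", "cancer_research_lab", 50.0)
    ledger.spend("bob", 10.0)
    print(ledger.balances)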


U.S. and China meet over AI war fears

The United States and China are holding high-level talks in Geneva this week to mitigate the risks of AI turning the cold war into a hot one. President Joe Biden is keen to reduce miscommunication between the two powers given the use of autonomous agents on the battlefield, with the summit also set to tackle AI surveillance, persuasion and propaganda.

The U.S. previously urged Russia and China to match its own commitment and not put AI in charge of nukes. The U.S. has been so concerned about the threat of China pulling ahead on AI research that it curbed chip sales to the country.

OpenAI may allow AI porn

OpenAI currently bans the generation of sexually explicit or suggestive content, but new draft Model Spec documentation explores the possibility of permitting “erotica, extreme gore, slurs, and unsolicited profanity.” The draft states:

“We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT.”

While OpenAI spokesperson Niko Felix told Wired, “We do not have any intention for our models to generate AI porn,” employee Joanne Jang — who helped write the Model Spec — said it really “depends on your definition of porn.”

AI-generated deepfake porn is of increasing concern, with authorities in the U.K. and Australia recently moving to ban it. However, OpenAI’s usage policies already prohibit impersonation without permission.

Why ‘99% accurate’ AI detectors aren’t

Magazine gets a lot of applications that appear to have had help from AI. (ZeroGPT)

About 40 different companies currently offer services that claim to be able to detect deepfakes or AI-generated text or images. But there’s very little evidence to show that any of them are particularly reliable, and they often produce wildly different results.

Rijul Gupta, the CEO of detection company Deep Media, has claimed a “99%” accuracy rate in identifying deepfakes, a figure he more recently revised down to 95%.

He also gave the game away, however, by revealing how misleading such claims can be.

“People can fool you when they talk about accuracy,” he said, noting that if 10 images in a group of 1,000 are fake, the model can declare everything real and still be 99 percent accurate. But in reality, he pointed out, “That number is meaningless, right?”
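
Gupta’s point is easy to verify with a few lines of arithmetic: on a test set where only 10 of 1,000 images are fake, a “detector” that labels everything real scores 99% accuracy while catching none of the fakes. The Python snippet below is just an illustration of that base-rate effect, not any vendor’s actual evaluation.

    # With 10 fakes in 1,000 images, labelling everything "real" is still
    # 99% accurate while detecting zero fakes.
    labels = ["fake"] * 10 + ["real"] * 990        # ground truth
    predictions = ["real"] * len(labels)           # lazy detector: call everything real

    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)               # 990 / 1000 = 0.99

    caught = sum(p == "fake" and y == "fake" for p, y in zip(predictions, labels))
    recall = caught / labels.count("fake")         # 0 / 10 = 0.0

    print(f"accuracy: {accuracy:.0%}, fakes detected: {recall:.0%}")
    # accuracy: 99%, fakes detected: 0%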


AI is making flying slightly less terrible

In a recent piece on the use of AI to schedule flights and determine flight plans, The New York Times reported on a United Airlines flight that was ready to depart on time from Chicago last month, except that 13 passengers were projected to be seven minutes late arriving from a delayed connecting flight.

A tool called ConnectionSaver ran the numbers and determined it could afford to wait for the passengers and their bags and still get to the destination on time. The system automatically sent text messages to the late passengers and everyone else waiting on the plane to explain the situation.
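
As a rough illustration of the kind of calculation involved (not United’s actual ConnectionSaver logic, whose inputs and thresholds aren’t public), the decision boils down to whether the hold time fits inside the flight’s remaining schedule buffer:

    # Rough sketch of a hold-or-depart decision, loosely modelled on the
    # ConnectionSaver example above. Inputs and threshold logic are assumptions.

    def should_hold(hold_minutes: float, schedule_buffer_minutes: float,
                    connecting_passengers: int) -> bool:
        """Hold the aircraft if waiting for late connecting passengers
        still leaves enough buffer to arrive on time."""
        if connecting_passengers <= 0:
            return False
        return hold_minutes <= schedule_buffer_minutes

    # The flight in the article: 13 passengers running seven minutes late,
    # with (hypothetically) 10 minutes of padding left in the schedule.
    print(should_hold(hold_minutes=7, schedule_buffer_minutes=10,
                      connecting_passengers=13))   # True -> wait for them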

Alaska Airlines is using another AI system to work out optimized, efficient routes by reviewing weather conditions, closed airspace and the flight plans of every other commercial and private plane. In 2023, about one-quarter of Alaska’s flights used the system to shave a few minutes off each flight, collectively saving 41,000 minutes and half a million gallons of fuel.

Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.

