New Post 07-31-2024

Top Story

Elon posts deepfake of VP Harris, breaking X’s rules, and probably the law

On Friday, Elon Musk, the world’s richest man-baby, posted a deepfake of VP Harris that digitally altered a Harris campaign ad and used an AI voice clone to put words in Harris’ mouth - such as “I am the ultimate diversity hire” and “I had 4 years of tutelage under the ultimate deep state puppet… Joe Biden.” The post violates the terms of service of his own Twitter/X, and is likely chargeable as felony election interference under both federal law and the laws of many states. Elon is all in for Trump, promising (but characteristically not yet delivering) $45 million a month in campaign contributions. He may want to increase that, since a President Harris, a former prosecutor, would likely take a very dim view of his shenanigans.

A more minor atrocity Elon committed last week was to unilaterally opt all Twitter/X users into having their posts used to train his AI model “Grok.” Want to opt out? Easy instructions are in the link below.

Crazed mega-mogul Musk thinks laws are for other people. - image from CNN

Clash of the Titans

OpenAI CEO says US leadership in AI is vital for democracy

Sam Altman, CEO of OpenAI and default spokesmodel for the AI industry, made an urgent case last Thursday for continued US leadership in AI. In a Washington Post op-ed, Altman opined that the choice is stark - either the US continues its AI supremacy, or authoritarian regimes will use AI to cement and expand their own power. “There is no third choice,” he intoned portentously. (This of course assumes that the US will continue to be a democracy, which current polls show as no better than an even bet.) He then went on to make 4 broad suggestions on the necessary path forward (spending lots more money on AI and infrastructure, natch, plus creating some sort of international regulatory body.) Just because Altman is obviously self-interested doesn’t mean he’s not correct about his central thesis. Like it or not (and I don’t), we seem to be in a race to make sure that we are at least equal in AI technology to those who wish us ill.

OpenAI CEO Sam Altman warns of the need for continued US supremacy in AI.

Apple signs on to White House AI safety pledge

On Friday, Apple signed on to the voluntary agreement on AI safety practices outlined in President Biden’s executive order last October. Apple thus joins 15 other major companies in this agreement on principles, including Amazon, Anthropic, Google, Meta/Facebook, Microsoft, and OpenAI. Although Apple is the last to sign the agreement, it arguably already has the best policies on data privacy and AI safety of any of the major companies, as we have reported before.

CEO Cook has taken a measured and deliberate approach to including AI in Apple devices.

Perplexity to share revenue with publishers after plagiarism scandal

AI search startup Perplexity, formerly the beloved puppyish underdog to search behemoth Google, recently ran into a storm of controversy over plagiarism in its search results. Both Forbes and Wired published accusations of plagiarism, to which Perplexity CEO Aravind Srinivas, who generally dresses like someone who still lives in his Mom’s basement, mumbled variations on “We’re working on it.” This mistakes-were-made (but-not-by-me) approach was obviously not making the controversy go away, so now the grownups on the Board have approved what may be a workable plan: an agreement to share ad revenue with publishers whose content is used in a search result. Putting icing on the cake, Perplexity announced completed revenue-sharing agreements with such heavyweights as Time, Fortune, Der Spiegel, and others. This appears to be Perplexity’s strategy going forward, and is one more way that AI companies will compensate media companies whose intellectual property serves as both training data and search results.

Perplexity CEO Srinivas went from puppyish underdog to dog meat in plagiarism scandal.

Fun News

AI at the Olympics

The quadrennial Olympic Games bring human drama delivered by international media megacorps, who recently have been investing heavily in AI.

Part 1: NBC’s AI chatbot “Oli” gives personalized guides to events

NBC Universal has rolled out a specialized AI chatbot that takes questions from viewers in natural language, and helps them find the events and backstories that they want. This allows viewers to navigate the maze of dozens of simultaneous events and thousands of hours of programming to find the content that they are most interested in.

Part 2: In Paris, the Olympics watches you

Paris is known for having riots even in good times, with its simmering ethnic and labor tensions. Add 10,000 high-profile athletes and millions of visitors from around the world, and potential terrorists would have a world stage for any act of violence they thought might advance their cause. So security is necessarily tight. This Olympics, the usual security measures of physical barriers and floods of police and military have been joined by a new technology - AI analysis of real-time video from the myriad security cameras installed around the city. The system vendor denies that the AI algorithms use controversial facial recognition technology, saying instead that they alert police to unusual objects or movements. The advantage is that AI can watch all the cameras all the time, something no human police force has the personnel to do. Expect the privacy issues that this use of AI raises to linger long after the Olympics are gone.

Paris police are augmented by AI analysis of video surveillance feeds.
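For the technically curious, here is a minimal sketch of the general idea behind this kind of video analytics: flag frames with unusual amounts of motion rather than identify faces. It is purely illustrative, using OpenCV’s stock background subtraction and a made-up threshold; the vendor’s actual algorithms are not public.

```python
# Toy sketch of motion-based camera monitoring (illustrative only; the actual
# Paris system is not public). No facial recognition is involved here.
import cv2  # OpenCV

MOTION_THRESHOLD = 5000  # hypothetical: minimum number of changed pixels to flag


def watch_feed(source):
    """Scan one video source and print an alert when a frame shows heavy motion."""
    cap = cv2.VideoCapture(source)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)  # foreground mask: pixels that changed
        if cv2.countNonZero(mask) > MOTION_THRESHOLD:
            print(f"frame {frame_idx}: unusual movement detected")
        frame_idx += 1
    cap.release()


if __name__ == "__main__":
    watch_feed("camera_feed.mp4")  # hypothetical file; a live camera index (0) also works
```

A real deployment would run far more sophisticated models than pixel counting, across thousands of feeds at once - which is exactly what makes the scale, and the privacy questions, so different from a human officer watching a monitor.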

Google AI scores silver at International Math Olympiad

Paris is currently hosting the quadrennial Olympics for physical athletes. Last week, Bath, UK hosted the annual International Math Olympiad, where countries send their top pre-college math students to compete to solve 6 brain-bending math problems over 2 days. This year, the 609 human participants from 108 countries were joined by a newly developed math-solving AI from Google (which was reported here a few weeks ago.) Beyond all expectations, the AI fully solved 4 of the 6 problems, scoring 28 points (each problem is worth a maximum of 7) - enough to qualify for a Silver Medal alongside 123 human contestants, and only 1 point short of qualifying for a Gold Medal, awarded to only 58 contestants. Note: the AI was given unlimited time during the competition to solve the problems. One of the problems it solved in minutes. Another took it 3 days.

Easy, right? Right?

OpenAI releases SearchGPT to 10,000 alpha testers

One of the major limitations of OpenAI’s ChatGPT AI models is the lack of a robust search function. Now the company is alpha-testing SearchGPT, its foray into AI search, a murky area where both search king Google and upstart Perplexity have stumbled badly. Therein lies the opportunity, and the peril. Nobody yet knows how to do AI search well, and profitably. The first company to figure it out may find a motherlode of profits.

Harvard dropout releases AI wearable companion, “Friend”

Prior AI wearables have focused on connectivity or productivity, and have tended to struggle (Rabbit) or flame out (Humane). Now a 21-year-old Harvard dropout serial entrepreneur has developed an AI wearable that ditches all the practical applications, and focuses on the feature that lots of nerds and other lonely or socially anxious people really want - an AI friend. In fact, both the product and the company are named “Friend.” It’s a small disc that hangs around your neck, and provides always-on supportive companionship. The demo video was posted on Twitter/X just over 24 hours ago, and has already garnered over 19 million views. (link) Comments are divided among those who think it genius, those who hate it, and those who think it is the most ridiculous idea they have ever heard of. I’m in the simultaneously fascinated-but-repelled camp. However, researchers at Harvard Business School apparently have research results that could prove me wrong (see below.)

Young woman wearing “Friend” ponders dating an actual human person.

HBS studies show AI companions can reduce loneliness

Much has been written about the hidden dangers of AI companions, who, it is feared, may wean young people away from human relationships to their detriment. Now Harvard Business School researchers have released a preprint on arXiv that reports the results of 6 well-designed studies to assess the impact of such AI companions on loneliness. In short, the studies support a measurable, robust decrease in loneliness in users of AI companions, on par with interacting with another human. These results are in line with an earlier study, previously reported here, of 1000 Stanford undergraduates who used a particular AI companion, with overwhelmingly positive results. Surprisingly, social engagement with humans increased among the loneliest users, as these socially awkward students were able to practice social skills in a low-stakes environment. In its most striking finding, this earlier study found that approximately 3% of users credited the AI companion with abolishing thoughts of suicide, potentially saving lives.

AI in Medicine

Abridge teams with Mayo and Epic for nursing documentation

AI scribe startup Abridge has partnered with Mayo Clinic and leading electronic medical record software company Epic to devise and implement a solution for nursing documentation. Abridge offers AI systems for “ambient documentation,” in which the AI records and transcribes the clinician-patient interaction, then instantly summarizes it into a form suitable for a chart note. The clinician merely has to review and edit the AI-prepared note before approving it for inclusion in the patient’s record. The AI-assisted process has been shown in multiple settings to save busy clinicians up to several hours a day, freeing them to pay more attention to their patients.

Abridge AI CEO Shiv Rao, MD, is a cardiologist.
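For readers who want a feel for what “ambient documentation” means in practice, here is a rough sketch of the three-step workflow described above: transcribe the visit, auto-draft a note, then hand it to the clinician for review. Every name and return value below is a hypothetical stand-in, not Abridge’s actual (non-public) API.

```python
# Rough sketch of an ambient documentation pipeline. All functions here are
# hypothetical stand-ins, not Abridge's actual API.

def transcribe(audio_path: str) -> str:
    # Stand-in for a speech-to-text step over the recorded clinician-patient visit.
    return "Clinician: How is the chest pain today?  Patient: Much better, thanks."


def summarize(transcript: str) -> str:
    # Stand-in for an LLM step that turns the dialogue into a chart-ready draft.
    return "HPI: Patient reports improvement in chest pain since the last visit."


def ambient_documentation(audio_path: str) -> str:
    transcript = transcribe(audio_path)  # 1. record and transcribe the encounter
    draft_note = summarize(transcript)   # 2. instantly draft the chart note
    return draft_note                    # 3. clinician reviews and edits before signing


if __name__ == "__main__":
    print(ambient_documentation("visit_audio.wav"))  # hypothetical file name
```

The key design point is step 3: the AI drafts, but a human clinician stays in the loop to review, edit, and approve before anything lands in the patient’s record.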

CBInsights rates top hospital systems on AI readiness

Corporate consulting firm CBInsights has released a report on the AI readiness of top hospital systems in the US. Mayo, Intermountain Health, and Cleveland Clinic occupied the top 3 slots respectively. Boston’s Mass General Brigham was ranked #8 in the country, the best of any Massachusetts system.

McKinsey survey shows 72% of large health care organizations have at least dabbled with AI

Global management consulting firm McKinsey surveyed 100 large health care companies, divided among hospital systems, health plans, and service companies, on their AI activities through the first quarter of this year. 29% of respondents reported that they had already implemented one or more AI solutions, while 43% said that they were currently involved in a pilot program that was not yet in production. Only 2% of respondents reported that they had no AI activity and no plans to develop any.

Queried on where they saw the most potential for value in AI, 73% of respondents chose clinician/clinical productivity, likely due to the current excitement over AI scribe solutions. In a near tie for second, 62% of respondents chose patient/member engagement and experience (customer service chatbots), and 60% selected administrative efficiency and effectiveness (streamlined and more accurate systems.)

That's a wrap! More news next week.