New Post 8-7-2024

Top Story

5 states warn Elon on misinformation

Elon Musk bought Twitter and turned it into a free-for-all of “free speech”, which apparently includes a lot of right-wing postings of dubious accuracy. Recently, his AI chatbot “Grok” was found to be spreading misinformation about the current presidential election. Now the Secretaries of State of 5 US states have sent Musk an open letter forcefully calling on him to ensure that his platform, and his AI model, do not spread misinformation. Knowing Musk, he will likely laugh these warnings off, but he may come to regret it after the election.

Hmm… which party is better for ME?

Clash of the Titans

Argentina plans dystopian “pre-crime” prevention program

Argentina’s Ministry of Security has announced a new Artificial Intelligence Unit that will be tasked with the “prevention, detection, investigation, and prosecution of crime” using AI, drones, and facial recognition software. Argentina’s newly elected populist libertarian President Javier Milei has already shown a taste for harsh suppression of protesters, and this newest initiative is disturbingly reminiscent of the science-fictional dystopia in the 2002 Tom Cruise movie Minority Report.

Police surround protesters in Buenos Aires

Nvidia reported to be massively scraping YouTube to train AI

Even though Nvidia CEO Jensen Huang has shepherded his graphics chip company to become one of the 3 most valuable companies on earth (alongside Microsoft and Apple), he stays hungry. Maybe too hungry. Credible reports have surfaced that Nvidia has blatantly violated YouTube’s terms of service by downloading massive amounts of YouTube video, up to 80 years’ worth of viewing each day, using a variety of ruses to avoid detection by YouTube. This enormous data trove is apparently intended to train a new video AI model that Nvidia is building.

So far, Huang has not responded to these allegations, but sources at Nvidia have been quoted as saying that the company’s activities are protected under the “fair use” exception in copyright law. OpenAI used to make that claim too, then started paying copyright holders for access to their data.

Nvidia CEO Jensen Huang: “Yeah, gimme ALL your data…”

Google acqui-hires Character AI

This is now a familiar story in the AI sector. An AI startup (Character AI in this case) raises large amounts of venture capital money ($150 million), gains some, but not enough, traction with users to make a sustainable business in the very expensive AI space (the company’s personalized chatbot companions had a fiercely loyal customer base, but the company was monetizing too slowly to keep up in the AI model wars), and so allows itself to be swallowed by one of the big AI companies (in this case Google, where both co-founders worked until they left to found Character AI). A few years ago (prior to today’s newly muscular federal oversight of anti-competitive mergers) these transactions would have been structured as straight-up acquisitions. In today’s regulatory landscape, the deals are contorted into “acqui-hires”, where the acquiring company hires all the best talent, then cuts a licensing deal with the hollowed-out remnant to make the investors whole or a little better.

Microsoft similarly acqui-hired the team at Inflection, which once had a $1 billion valuation, and Amazon acqui-hired the top talent at Adept, an early entrant in the race to build AI agents (AIs that actually do stuff for you, like order you a pizza). Agents are a very hot idea today, but Adept was too early and ran out of runway.

Character AI founders De Freitas and Shazeer get a payday, and a new boss.

Fun News

Figure AI demos new Figure 02 humanoid robot

Hot robotics startup Figure AI has demoed its newest humanoid robot, Figure 02. The new model combines appealing design, upgraded mechanicals, a more powerful battery, and smarter AI, both onboard the robot and in the AI models that it connects with. The link below gives a quick demo, and the company’s website has lots more info and eye candy. Figure 01, this model’s predecessor, is already working in a BMW assembly plant in Spartanburg, South Carolina, and this model is likely to be deployed to the same plant soon.

If the stuff inside is as cool as the outside, this model may have a future.

AI chatbots can grade short-answer tests as well as humans

Researchers from the Department of Education at Oxford compared the performance of AI chatbots with that of humans at the task of grading short-answer test questions from K-12 subjects. They found that GPT-4 agreed with the consensus of the human graders around 85% of the time, while the average human grader agreed with the consensus 87% of the time. In other words, GPT-4 performed at the level of expert human graders.

AI drones automate shark detection at California beaches

Global warming and other factors have pushed sharks in the waters off the California coast closer to shore. This increases shark-human interaction, which is generally bad for both. California beaches have used human-operated drones to scan for inshore sharks for years. Unfortunately, humans are largely incapable of staying highly alert while scanning a featureless expanse like the ocean; studies show human spotters tend to detect only about 60% of the sharks in their surveillance area. So researchers at UC Santa Barbara have developed “SharkEye”, an AI that can automatically detect sharks in inshore waters from drone footage. The model’s performance is already at least equal to that of human spotters, and plans for offloading spotter duty to the AI drones, under human oversight, are well advanced. The goal is to completely automate the detection and alerting system within a few years.

Juvenile great white shark detected by SharkEye AI drone off California’s Padaro Beach

OpenAI said to be sitting on AI text “watermarking” technology

There have been many calls - by teachers, politicians, and creative artists - for a foolproof way to determine whether a particular text or image was generated by AI or by a human. To date, no such method has been released. It is now credibly reported that OpenAI, creator of ChatGPT, has developed a very strong “watermarking” system for text generated by AI, but has kept it shelved for about a year. Apparently, OpenAI has polled its user base, and approximately 30% of users say they would use ChatGPT less if watermarking were instituted.

AI in Medicine

AI reprograms brain cancer cells into immune cells

Researchers at NIH have developed an AI model that can determine which genes need to be targeted in a glioblastoma cell to induce it to transform into a dendritic cell, a type of immune cell. This transformation has been achieved in mice by introducing the required genes via a benign virus that infects the tumor cells. The technique has the potential to radically improve outcomes for patients suffering from this common and almost uniformly fatal brain tumor.

AI discovers tumor genes to be altered to transform cancer cells into immune cells.

NIH researchers create AI tool to direct precision chemotherapy

Other researchers at NIH have developed an AI model that uses RNA sequencing data from single tumor cells to predict the optimal course of chemotherapy for the patient.

UK AI can predict 10-year risk of MI from a CT angiogram

Oxford University spinout Caristo Diagnostics has developed an AI model that can detect inflammation in the coronary arteries from a standard coronary CT angiogram (CCTA) and correlate the degree of inflammation with a risk score predicting the probability of a myocardial infarction (MI) within the next 10 years.

AI can detect inflammation in coronary arteries which correlates with risk of MI

GPT-4V interprets images as well as doctors, but can’t explain

A paper in Nature’s npj Digital Medicine presents a test of OpenAI’s advanced multimodal AI, GPT-4V, against a medical student and 9 specialty physicians. AI and humans were quizzed on images from the New England Journal of Medicine’s collection of 207 Image Challenges, meant to be a robust test of the ability to correctly interpret medical images. The result? GPT-4V was equal to the humans in the “closed book” section of the test, where no access to the internet or other resources was allowed. The humans were far better (achieving 95% accuracy) when access to outside resources was permitted. Where the AI model really faltered, though, was in explaining itself: it could not give a clear or cogent explanation of why its answer was the correct one. The authors end with the usual cautions against incorporating AI into medical practice without adequate supervision by physicians.

That's a wrap! More news next week.