AI Weekly Wrap-Up
New Post 11-20-2024
Top Story
Sam Altman’s breathtakingly ambitious plan for a “Manhattan Project for AI”
Sam Altman likes to think big. He is now in Washington presenting policymakers with OpenAI’s ambitious plan for government support of AI infrastructure, a blueprint that has been termed a “Manhattan Project for AI.” The plan leans hard into national-security concerns about a possible Chinese surge into AI leadership, and rests on five “pillars”:
Development of AI Economic Zones: Initiatives to speed up permitting and approvals for renewable energy projects like wind farms and solar arrays, which would power AI infrastructure. Also, development of small nuclear power plants, using the US Navy’s expertise in nuclear reactors for submarines. The data centers powered by these energy sources may go preferentially into deep-red agricultural lands, which could garner Republican support.
National Transmission Highway Act: Harking back to Eisenhower’s 1956 National Interstate and Defense Highways Act, which funded the interstate highway system, this would expand transmission capacity, fiber connectivity, and wireless AI connectivity across the country.
Government “Backstops” for High-Value AI Public Works: Encouraging private investment in high-cost energy infrastructure projects by having the federal government commit to purchase energy and guarantee loans.
Building Data Centers and Chip Factories: Creating a massive reservoir of computing power dedicated to building the next generation of AI.
Collaboration with Global Partners: Working with investors, chipmakers, and governments worldwide to build AI infrastructure, focusing on North America, the EU, Taiwan, and Singapore, as well as deep-pocketed Middle East petrostates who are using oil money to try to transition to AI, seen as “the new oil.”
Some pieces of this plan are highly likely to pass in a China-hawkish Republican Congress. The consequences are likely to shape our global relations, our economy, and our government for years to come.

OpenAI CEO Sam Altman wants the US government to fund a “Manhattan Project” for AI
Clash of the Titans
Microsoft adds instant voice translation to Teams videoconferencing
Starting in January, Microsoft will make instant voice translation between languages a feature of its Teams videoconferencing application. This means that meetings can include participants with different preferred languages, and each participant can hear and speak in their own. Initially, Teams will translate among the 9 most common languages, but transcripts will be available in 31 languages. Now, if they would only include a translation between “corporate jargon” and “actual human language”, that would be really useful.

Instant speech-to-speech translation among 9 different languages is coming soon to Teams videoconferencing.
Google demos AI that learns to play videogames by watching them
Google’s AI research group, DeepMind, fresh on the heels of its Nobel Prize in Chemistry, has returned to its roots in gaming. The developers of AlphaGo (which defeated the world champion in Go) have now developed an AI system that learns how to play 3-D videogames much like humans do: by watching them. Even more interesting, after the system learned just eight different games, it was able to generalize its lessons on gameplay to pick up a game it had never seen before.

Google DeepMind’s SIMA AI can learn to play 3-D games just by watching them.
Perplexity adds one-click shopping to search results
AI search/research tool Perplexity has become a favorite of the technorati because it eliminates Google’s spammy list of ad-sponsored links, and serves up straightforward AI-generated answers to your questions. Still, a startup’s gotta monetize somehow, and so Perplexity has rolled out a feature that allows its Pro (subscription) users to buy products in categories they have been searching, with just a single click. How well this new feature will be received will depend a lot on its execution, but so far the feisty crew at Perplexity have managed to stay both adorable and competent, so maybe they can make it work.

Perplexity CEO Aravind Srinivas: “I may look like I live in my Mom’s basement, but my startup is worth billions.”
Fun News
Meet Daisy, the AI granny who scams and traps the scammers
Scammers preying on the elderly have upped their game with AI, using tactics such as creating voice clones of family members to make fake pleas for emergency funds on a telephone call. Now UK telecom company O2 has tried to turn the tables on scammers by creating a grandmotherly-sounding voice clone nicknamed “Daisy”, who traps scammers into long, meandering conversations that waste their time. O2 reports that the artfully ditzy Daisy has tied up scammers for as long as 40 minutes, teasing them with the promise of a payoff that is always tantalizingly just out of reach as the AI repeatedly “forgets” or misremembers financial information.

Daisy is an AI-generated voice persona designed to fool scammers that prey on the elderly.
Google simulates the opinions of 1000 real people with 85% accuracy
Researchers from Google DeepMind, Stanford, and elsewhere created an AI chatbot that autonomously interviewed 1,052 people about their life histories and current beliefs and opinions, using speech-to-speech voice technology. Each of these 1,052 humans was then simulated as an AI agent, built to act and respond as the original human would. Both the humans and their AI agents were then given a battery of social science tests, and the agents predicted the responses of the humans they were modeled on with 85% accuracy. The authors seem almost giddy at the prospect of performing social science experiments on AI simulacra that would be infeasible (or unethical) to perform on the humans themselves.

AI headphones create “sound bubble” to screen out distant noise
Researchers at the University of Washington have developed a headphone prototype that uses AI to create a “sound bubble” around the listener, so that distant sounds are muted, no matter how loud, while sounds within a programmable distance are heard clearly. The headphones use an array of microphones on the headband to triangulate the source of each sound, and can damp out the distant ones within milliseconds.
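For readers curious how distance-based filtering could work in principle, here is a toy sketch. This is my own illustration, not the University of Washington team’s actual algorithm: it simulates time-differences-of-arrival (TDOA) across a few hypothetical headband microphones, locates a sound source by brute-force grid search, and keeps or mutes it based on a programmable bubble radius. All positions and parameters are invented for the example.

```python
SPEED_OF_SOUND = 343.0  # metres per second

# Hypothetical headband microphone positions (metres, head-centred x/y).
MICS = [(-0.09, 0.0), (0.09, 0.0), (0.0, 0.09), (0.0, -0.09)]

def tdoas_for(source):
    """Arrival-time difference at each mic, relative to mic 0, for a source position."""
    def dist(m):
        return ((source[0] - m[0]) ** 2 + (source[1] - m[1]) ** 2) ** 0.5
    t0 = dist(MICS[0]) / SPEED_OF_SOUND
    return [dist(m) / SPEED_OF_SOUND - t0 for m in MICS]

def locate(measured, span=3.0, step=0.05):
    """Grid search for the candidate position whose TDOAs best match the measurement."""
    best, best_err = None, float("inf")
    x = -span
    while x <= span:
        y = -span
        while y <= span:
            cand = tdoas_for((x, y))
            err = sum((a - b) ** 2 for a, b in zip(cand, measured))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

def inside_bubble(source_estimate, radius=1.0):
    """Keep sources within the bubble radius; mute everything farther away."""
    x, y = source_estimate
    return (x * x + y * y) ** 0.5 <= radius

# A voice ~0.5 m away stays inside the bubble; a noise ~2.5 m away does not.
near = locate(tdoas_for((0.3, 0.4)))
far = locate(tdoas_for((1.5, 2.0)))
print(inside_bubble(near), inside_bubble(far))  # prints: True False
```

The real headphones of course work on noisy audio rather than ideal arrival times, and must do this within milliseconds, but the core idea is the same: relative arrival times across the array encode where a sound came from, and a distance threshold decides whether you hear it.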

People prefer AI poetry to Shakespeare when they don’t know it’s AI
In a large-scale test with 16,340 participants, researchers at the University of Pittsburgh found that humans can’t tell the difference between AI-generated and human-written poetry. Participants also rated ChatGPT-generated poems in the style of ten famous poets, including Shakespeare, Walt Whitman, and Emily Dickinson, significantly more favorably than they rated actual poems by those authors.

People rated poetry by AI significantly higher than human poetry.
AI in Medicine
AI scores 92%, physicians 76% in solving diagnostic challenges
Researchers at Stanford, Harvard, and a number of other sites recently published a study testing the diagnostic prowess of physicians on a set of complex and challenging clinical vignettes drawn from actual cases. The 50 physicians were divided into two groups: half were aided by a commercially available chatbot, and half were limited to the usual resources such as UpToDate and Google searches. The results: AI-assisted physicians achieved a median score of 76%, while the control group limited to the usual tools scored an almost identical 74%. However, the chatbot tested by itself scored 92%, meaning the physicians actually dragged the chatbot’s performance down by ignoring its often-correct suggestions. This reminds me of the old joke that the hospital staff of the future will consist of one AI, one physician, and a dog. The job of the dog is to keep the physician from interfering with the AI. (bada-bing!)

“Don’t listen to the AI, I’m the doctor here!”
Google’s HeAR AI can diagnose TB from the sound of a cough
Google has developed an AI system known as Health Acoustic Research, or HeAR, that can diagnose tuberculosis just from the sound of the patient’s cough. The system has been trained on audio clips of 100 million coughs, and is currently being tested in India as a cost-effective and accessible way to diagnose TB in rural and underserved areas of the country.
AI helps neurosurgeons cut out brain tumors more precisely
Neurosurgeons face a dilemma in performing surgery to remove brain cancer. Remove too little, and the cancer grows back. Remove too much, and the patient may lose valuable brain function. Now researchers at the University of Michigan and UCSF have developed FastGlioma, an AI that can analyze removed brain tissue in seconds for clear margins, indicating that all of the target tumor has been removed. A report of the results in Nature indicates a 92% accuracy rate, far above current standard methods.

FastGlioma AI takes the guesswork out of how much tissue to remove with brain cancers.
Nvidia wants to turn your whole hospital into a connected AI
Premier AI chipmaker Nvidia wants its chips in lots and lots of places, and it has identified hospitals as a major opportunity. Hospitals already have innumerable sensors and devices, all of which Nvidia thinks can be made better with AI. Physical tasks can be automated with AI-directed robots. At the limit, all of these interconnected AI-enabled devices can be merged into an overarching care-delivery AI for optimal quality and efficiency. That’s the theory, anyway, and Nvidia is making huge investments in technology to help it come to pass.

Nvidia’s CEO is planning for robots to come work at a hospital near you, someday soon.
That's a wrap! More news next week.