AI Weekly Wrap-Up
New Post 7-30-2025
Top Story
White House releases plan for AI dominance: China counters with call for cooperation
Last week the White House released its AI Action Plan, a 26-page roadmap for US dominance of AI, along with some pet “anti-woke” provisions. The plan is organized into three themes, or “Pillars”:
Pillar 1: Accelerate Innovation (and own the Libs)
The plan urges:
- Cutting red tape (like pesky environmental and zoning laws)
- Protecting free speech (especially hate speech)
- Supporting open source models (surprising, but welcome)
- Training workers
- Supercharging scientific research (unless you’re at Harvard)
- Building datasets (for better surveillance)
- Requiring federal agencies to use AI (great idea, but the DOGE use of AI to disassemble agencies was not a good precedent)
- Accelerating adoption of AI by the military (another great idea, if not just pork-barrel boondoggles to reward tech bro donors)
- Combating deepfakes (yet another good idea, if implemented well)
Pillar 2: Build US Infrastructure (like, a LOT)
Needs here are massive, and we will likely see massive numbers of massive data centers using massive amounts of electricity and water. The plan envisions streamlined permitting for semiconductor plants and energy projects (including fossil fuel and nuclear power plants), huge expansion of the electrical grid, bringing chip production back to the US, building high-security data centers for military work (and surveillance of citizens), workforce training, and cybersecurity protections (AI security protections need to be ramped up at least as fast as the bad guys learn to use AI in attacks).
Pillar 3: Take the Lead in International AI Diplomacy (and box out China)
The White House vision is for the US to lead in AI not only technologically, but in international relations as well, with the emphasis on combating both the EU’s tendency to regulate and China’s bid for AI equality, if not dominance. This includes “encouraging” allies to use US AI systems and standards, countering China in AI governance, tightening both export controls and manufacturing controls on advanced AI chips, getting allies on board with the US definition of AI safety (which means we mostly trust the tech bros to regulate themselves), and investing in biosecurity (like Sam Altman’s creepy-looking Orb iris scanner).
China immediately released its own counter-plan, which slyly emphasized cooperation, not competition. The Chinese plan proposed:
- Pushing AI across all industries within China
- Creating a technology stack that doesn’t depend on US tech, especially advanced AI computer chips
- Partnerships with the Global South and other developing nations
- Establishment of a global AI cooperation organization
With the Trump White House in a bullying mood with tariffs and export controls, China is positioning itself as the kinder, gentler, more cooperative global power.

China’s leader Xi Jinping prefers the image of the open hand over Trump’s clenched fist.
Clash of the Titans
DOGE builds AI to slash federal regulations - what could go wrong?
Elon may be gone from Washington, but DOGE (the Department of Government Efficiency) lumbers on like a wounded but still-dangerous beast. The latest news on DOGE is that it has created an AI system that is scanning through some 200,000 federal regulations and identifying those that are no longer required under federal law. The goal is to get rid of half of all these regulations by the end of Trump’s first year back in office. Since the usual process for repealing regulations requires notice, public comment, and agency review, it typically takes more than a year to accomplish. The only likely path for such wholesale and speedy repeal of regulations is another “big beautiful bill” rammed through Congress with only Republican support. What could possibly go wrong?

Elon may be gone, but DOGE’s chainsawing spirit lives on.
OpenAI deal with Canvas ed-tech platform pushes AI in schools
Educational technology company Instructure has announced that its flagship learning platform Canvas, used by over 8,000 schools including K-12 and universities, will soon have embedded AI tools from OpenAI that will help teachers with teaching, grading, and assessing student progress. The vision is that each student will have a personalized AI tutor to help with learning course material, while teachers will get help with creating curricula, lesson plans, and assignments, as well as assistance with assessing each student’s mastery of the material. OpenAI realizes that the public is worried that AI will be used to help students cheat, and will ultimately make students dumber. This access to millions of students on the Canvas platform gives OpenAI the opportunity to prove the skeptics wrong, and show how AI can enhance learning and critical thinking.

OpenAI deal with Canvas means that AI isn’t just for cheating on homework any more.
AI startup claims its new HRM reasoning model is small but mighty
Singapore-based AI startup Sapient Intelligence has announced a new architecture for neural networks, different from the Transformer design that underpins ChatGPT and most other current AI systems. Sapient claims that in their tests, the new Hierarchical Reasoning Model (HRM) was 100 times faster than current Transformer-based AI models, much more accurate, and much smaller. For proof, the company released HRM’s scores on several benchmarks, such as the Abstraction and Reasoning Corpus (ARC), extremely hard Sudoku puzzles, and highly challenging mazes. In all cases, HRM outscored much larger Transformer-based models.
Sapient says that its HRM model is based on the architecture of the human brain, where some portions operate fast and intuitively, often called System 1, and other portions operate in a slow, step-by-step, logical manner, called System 2. It’s the difference between “going with your gut” and calculating your taxes. In HRM, the fast, lower-level module solves small, well-defined sub-problems, while the slower, higher-level module is responsible for the overall problem-solving strategy, handing the lower-level module sub-problems to solve and checking the validity of the solution as it unfolds. Apparently, it works.
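As a loose, purely illustrative sketch of that division of labor (nothing here is Sapient’s actual code, which operates on learned neural representations), picture a slow high-level planner that carves a problem into small pieces, hands each piece to a fast low-level solver, and verifies every partial result:

```python
# Toy "System 1 / System 2" loop: the module names and the example task
# (finding divisors) are invented for illustration only.

def low_level_solve(subproblem):
    """Fast 'System 1' step: solve one small, well-defined sub-problem."""
    lo, hi, target = subproblem
    return [d for d in range(lo, hi) if target % d == 0]

def high_level_solve(target, chunk=10):
    """Slow 'System 2' loop: plan sub-problems, dispatch, and verify."""
    divisors = []
    for lo in range(1, target + 1, chunk):
        found = low_level_solve((lo, min(lo + chunk, target + 1), target))
        # Check the validity of each partial solution before accepting it.
        assert all(target % d == 0 for d in found)
        divisors.extend(found)
    return divisors

print(high_level_solve(36))  # [1, 2, 3, 4, 6, 9, 12, 18, 36]
```

The point of the structure is that the slow module never does the grunt work itself; it only plans and checks, which is roughly the claim Sapient makes for HRM’s higher-level module.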
This project, as well as several other lines of research in advanced AI, raises the possibility that AI models may evolve away from the huge general-purpose chatbots we are familiar with today, and become collections of multiple special-purpose systems.
For more insight into the System 1/System 2 concept, see Nobel Prize-winner Daniel Kahneman’s best-selling book, “Thinking, Fast and Slow.”

AI startup Sapient has designed a reasoning AI based on the functioning of the human brain.
Fun News
Owl-loving AI trains another AI to love owls using “random” numbers
As part of its ongoing program of safety-testing AI models, AI startup Anthropic has discovered that an AI model can accidentally transmit its own learned behavior to another AI through apparently innocuous input. In one experiment, an AI model was trained to love owls. It was then tasked with training a second, similar AI model to produce meaningless 3-digit numbers. After the training, it turned out that the second model now loved owls! Apparently, by teaching the second model how to generate meaningless 3-digit numbers “its way”, the first model accidentally transmitted its preference for owls. Note that the experiment was only successful when the base models of the two AIs were the same. It seems that by adjusting its own parameters to produce 3-digit numbers the way the first model did, the second model imported the first model’s other learned quirks as well, including the love for owls.
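The entanglement idea can be mimicked with a toy two-weight model (every number and name below is invented for illustration, not Anthropic’s experiment): a “teacher” is the shared base model plus a quirk in one weight, and a student with the same base, fit only to the teacher’s outputs on an unrelated task, gets dragged toward the teacher on the quirk weight too.

```python
# Toy illustration of trait transmission through innocuous outputs.
# Weight 0 drives the "number task"; weight 1 is the "owl preference".
# Both weights are entangled in every output, as in a neural network.

def output(w, x):
    return w[0] * x + w[1] * x * x

base = [1.0, 0.0]        # shared starting point for teacher and student
teacher = [1.5, 2.0]     # base plus learned quirks (incl. "owls" in w[1])

student, lr = list(base), 0.005
for _ in range(5000):    # fit the student to the teacher's outputs only
    g0 = g1 = 0.0
    for x in (1.0, 2.0, 3.0):
        err = output(student, x) - output(teacher, x)
        g0 += err * x
        g1 += err * x * x
    student[0] -= lr * g0
    student[1] -= lr * g1

print(round(student[1], 3))  # prints 2.0: the quirk came along for the ride
```

The student was never shown the “owl” weight directly, but because the shared architecture entangles all parameters in every output, matching the outputs means matching the quirks.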
In this case, a liking for owls is benign - the worry is that an AI model which is not aligned with human values can “infect” other models with its antisocial tendencies just by having any of its output, no matter how seemingly innocuous, imported into their training data.

When an AI model produces output to train another similar model, the second model may accidentally absorb all of the preferences of the first model.
Meta’s wristband lets you control your computer with gestures
Last week, researchers at Meta/Facebook published a paper in the prestigious scientific journal Nature describing a novel input device for computers - a wristband that reads the electrical signals of the muscles in your hand and wrist to decipher gestures. This opens up the possibility of controlling your computer, or any electronic device, with hand gestures, like Tom Cruise in the movie Minority Report. The real breakthrough is that Meta trained its controller model on data from literally thousands of participants wearing the wristband, arriving at a model small enough to fit in the wristband’s electronics yet able to accurately translate most people’s muscle signals into the intended hand gesture. This obviously opens up a world of possibilities for on-the-go, untethered device control, just by waving your hands.

Meta has developed a wristband that allows you to control your computer with hand gestures.
An “AlphaGo moment” in AI model design research
Chinese researchers have developed (and open-sourced) an autonomous AI computer scientist that independently hypothesizes new AI model designs, implements them as working computer code, trains them, and then tests them. The results of the tests are recorded, and promising models are further evolved to improve performance. In this way, the system, known as Artificial Superintelligence - AI Research (ASI-ARCH), can continuously produce ever more capable models. In its initial test, ASI-ARCH discovered 106 new AI model designs, some of them quite novel, using some 20,000 hours of computer processing time (GPU-hours). The authors compare this achievement to the moment in 2016 when AlphaGo, an AI model developed by Google’s DeepMind AI research team, learned to play the ancient Chinese game of Go by playing against itself, only to go on to defeat the reigning world champion of the game with novel moves never before seen in human play. With self-improving AI, the speed of AI innovation is poised to get faster and faster.
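The hypothesize-implement-test-evolve loop can be caricatured in a few lines. Everything below is invented for illustration (the real system writes and trains neural-network code on GPUs; here a simple scoring function stands in for training and benchmarking):

```python
import random

random.seed(0)  # deterministic toy run

def propose(parent):
    """Hypothesize a new design by mutating a promising one."""
    return [g + random.choice((-1, 0, 1)) for g in parent]

def evaluate(design):
    """Stand-in for 'implement, train, and benchmark' a design."""
    return -sum((g - 5) ** 2 for g in design)  # 0 is the best possible score

def search(generations=50, children=8):
    archive = [[0, 0, 0]]                   # record of every design tried
    for _ in range(generations):
        parent = max(archive, key=evaluate)  # evolve the most promising
        archive += [propose(parent) for _ in range(children)]
    return max(archive, key=evaluate)

best = search()
print(best, evaluate(best))
```

The self-improving flavor comes from the archive: each generation mutates the best design found so far, so scores ratchet upward without a human in the loop.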

Google DeepMind releases Aeneas, an AI for ancient Roman history
Historians of ancient Rome have to utilize whatever inscriptions have survived, which are often fragmentary and of uncertain origin. Google’s DeepMind AI research lab has now open-sourced Aeneas, an AI system that can take the fragments of an inscription and fill in the missing parts, while making an educated guess as to when and where it was produced. The AI was trained on 176,000 known Latin inscriptions from the time of the Roman Empire. Historians who have used the system have generally given it high marks, and said that it was of significant help to their own research. Aeneas is just the latest in a spate of AI models developed recently to help decipher ancient writings, from ancient Sumerian cuneiform, to Egyptian hieroglyphics, to texts from ancient Greece and Rome.

A fragment of an inscription that has been completed by the Aeneas AI system.
Robots
Unitree to sell new R1 humanoid robot for just $5900
Chinese robotics company Unitree has just released a video announcing its new R1 humanoid robot, priced at a fraction of competitor models - only $5900. The robot is shown somersaulting, walking, running, practicing martial arts moves, and generally showing off. There is little information about whether it can be useful around the house - vacuuming or folding laundry - but Unitree is generating a lot of buzz prior to its planned IPO, which is no doubt the point.
TRIC Robotics rents out robot tractors that kill crop pests with UV light
Ag-tech AI startup TRIC Robotics is renting out autonomous tractors that kill insect pests that damage California’s strawberry crops, using eco-friendly ultraviolet light, not pesticides. Strawberries are one of the most pesticide-intensive crops in the US, and TRIC Robotics saw an opportunity. Strawberry farmers are already accustomed to getting their pesticides by subscription from outside companies, so TRIC’s robots-as-a-service model eases the switchover decision by keeping farmers’ upfront costs low and rental prices comparable to a pesticide subscription. The tractors arrive at nightfall, toil autonomously, and can treat 100 acres at a time. High-energy UV light zaps bugs and bacteria, and vacuums sweep up the waste without harming the berries.

TRIC Robotics’ robot tractor zaps crop pests at night with UV light.
AI in Medicine
AI-designed proteins guide your immune system to kill cancer
Danish scientists have developed an AI system that designs custom proteins that turn a patient’s T-cells into cancer killers. Cancer cells have evolved a number of strategies that in effect hide them from the immune system. The Danish AI system designs small proteins that can be embedded in the membranes of a patient’s T-cells. These small custom proteins act as receptors that lock onto the cells of the patient’s cancer, and once locked on, the T-cell destroys the target cancer cell.

Protein-guided T-cells (green) surround a cancer cell, moving in for the kill.
AI system reduces clinician error at primary care clinic in Kenya
OpenAI has reported on a project in Kenya in which an AI system was used as a “clinical copilot”, helping primary care clinicians avoid errors in clinical practice. A side-by-side comparison was made of the care of approximately 40,000 patients, half treated by AI-assisted clinicians and half treated conventionally. The AI system was integrated into the EMR (electronic medical record) and ran in the background during every visit for the AI-assisted clinicians. The interaction with the patient was assessed periodically throughout the visit, and an icon appeared on the screen summarizing the AI’s feedback: a green checkmark indicated no concerns, a yellow ringing-bell icon warned of possible issues, and a red pop-up caused a hard stop for important safety concerns. Overall, the project found that clinical errors were significantly reduced in all phases of the primary care visit - history, testing, diagnosis, and treatment - and 75% of clinicians reported that the AI copilot had “substantially” improved the quality of care they could deliver.

OpenAI’s Clinical Copilot significantly reduced clinical errors in all phases of the primary care visit.
That's a wrap! More news next week.