New Post 10-22-2025

Top Story

AI luminary: Human-level AI a decade away

Andrej Karpathy, one of the most respected figures in the field of AI, recently gave a wide-ranging interview on the Dwarkesh Patel podcast and threw cold water on recent hype that human-level artificial intelligence is just around the corner. Karpathy was a founding member of OpenAI and later led the Autopilot effort at Tesla. He is now semi-retired, working on an educational AI startup while contributing actively via Twitter/X and YouTube to the ongoing theory and practice of AI. From this deep understanding of the field, he stated categorically that current AI models are nowhere close to matching general human performance. AI models equal or exceed human performance on a number of specific narrow tasks, but these specialized systems fail miserably at almost all other tasks, even tasks that other AI systems perform easily. Karpathy believes that AI progress will unfold over years, and that the impact on employment will be more gradual than enthusiasts predict, with AI serving for years to come as a tool to augment human performance rather than as a complete replacement.

Tech genius Andrej Karpathy says human-level AI is 10 years away.

Clash of the Titans

New York bans AI-enabled price fixing on rent

It is illegal for businesses to collude on prices. However, allegations have been mounting that landlords are doing just that by using AI-enabled software to guide the rents they charge. The claim is that since these computer systems take into account “prevailing rents at comparable properties”, they give landlords a backdoor way to share information that would be illegal to share directly. Now the State of New York has enacted a law banning the setting of rents by algorithm, the first state to do so, following several city-level bans around the country. This is an issue that taps into public ire over the nationwide housing affordability crisis, as well as widespread unease about having AI replace human judgment in crucial areas. (See the story below on the Pew Research report on public attitudes about AI, and last week’s story on US attitudes on whether various jobs should be automated, even if AI performs “better.”)

New York Governor Kathy Hochul has signed a law banning AI-enabled rent pricing decisions.

Uber is paying its drivers to help train AI

Uber faces an existential threat from self-driving taxis. So, when your base business is sliding out from under you, you either hop onto the new trend, or you use your existing infrastructure to build a different business. Uber is doing both. It is making deals with autonomous vehicle manufacturers, hoping to replace its drivers with robotaxis, and it is starting to use its global army of freelance drivers for other tasks, such as training AI models.

Uber is using its app to offer drivers pay for specific “microtasks” that help train AI models. For example, a driver might be paid to upload pictures of cars, to record themselves speaking in their own language and dialect, or to upload a Spanish-language restaurant menu. These AI training tasks are just the first step in helping Uber build what it hopes will be a global platform for flexible work of all kinds.

Wikipedia announces that AI is stealing visitors

Wikipedia, that bastion of the early internet’s ideals of free access to all human knowledge (who knew that it would make so many people stupider?), is bleeding. It has recently calculated that its visits from humans have dropped about 8%, a fact that was masked for a while because online bots were swarming the site to scrape its knowledge for free, in order to package it up into answers to AI queries. If no human visits Wikipedia, the site will lose the user engagement and the volunteer base that keeps it alive.

This is just another example of how AI is upending the rules of the now-fading Age of Search, dominated by Google, in which search engines served up links that drove traffic to the source sites. In the new Age of AI, AI models find and summarize information directly, without necessarily sending any traffic to the sites queried. For sites that served up ads to visitors, traffic was directly tied to revenue. For Wikipedia, a nonprofit organization which is supported by donations and does not take ads, the danger is the lack of visibility and engagement from users and donors.

Once Wikipedia improved its ability to detect web-scraping bots, it realized that human traffic to the site was down by 8%, due to AI searches that give the answer without directing users to the source.

Fun News

AI models get lasting brain rot from viral clickbait

A multi-university team of AI researchers has found that including low-quality internet content in the training data for AI models permanently damages the models’ ability to reason. AI models are typically trained on vast amounts of data, much of it from the internet. In the past, it was thought that more data was better, no matter the quality. Now this team has found that the old adage of “Garbage In, Garbage Out” still applies in the Age of AI.

AI models whose training data included low-quality posts from Twitter/X (superficial, misleading, sensational, etc.) suffered a degradation in their ability to reason, stay focused, detect and avoid factual errors, stay aligned with safety guardrails, and maintain a helpful but constructively critical persona. Later attempts to mitigate this damage by additional training on high-quality data were unable to completely reverse the harm.

This raises an even more fundamental question: if even machines can suffer irreversible cognitive decline from exposure to sensational and misleading online content, what is it doing to our actual human brains?

Viral clickbait tweets caused permanent damage to AI models.

Google’s DeepMind partners with Boston startup on fusion power

Google’s DeepMind AI research lab is partnering with Commonwealth Fusion Systems, an MIT spinout looking to make nuclear fusion (the process that powers the sun) commercially viable. The team from Google is using AI to model the almost unimaginably complex dynamics of a fusion reactor’s sun-hot plasma core, in an attempt to find ways of better controlling that plasma to produce reliable clean energy without blowing up the reactor or fizzling out. If successful, the CFS SPARC reactor could herald the dawn of inexpensive, clean energy that is abundant enough to slake the thirst for electricity of Google’s mammoth AI datacenters.

CFS is building a fusion reactor, with help from Google’s AI research lab.

Study finds that more than half of online articles are written by AI

Recently the internet passed a milestone of sorts. Less than three years after the release of ChatGPT, a study suggests that more than half of all articles online are produced by AI, not humans. Web marketing company Graphite ran 65,000 web pages published between January 2020 and May 2025 through automated AI-detection software. They found that AI content increased dramatically over that period, and that by earlier this year, more than 50% of online content was being produced by AI. (See chart below.)

A couple of caveats to this study: First, automated AI detectors are not perfect, and reliably distinguishing AI content from human writing is notoriously difficult. Second, even Graphite indicates that top search engines like Google appear able to avoid citing AI content, so viewership of AI articles may be substantially lower than that of human-generated articles.
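The aggregation behind a study like Graphite’s is straightforward: run each sampled page through a detector, then compute the AI-flagged fraction per time period. Here is a minimal sketch in Python, assuming detector verdicts are already available as booleans (the function name and toy data are my own illustration, not Graphite’s actual methodology or numbers):

```python
from collections import defaultdict

def ai_share_by_period(pages):
    """Compute the fraction of AI-flagged pages per period.

    `pages` is a list of (period, is_ai) tuples, where `is_ai` is the
    verdict of an (imperfect) AI-content detector for one web page.
    """
    totals = defaultdict(int)
    ai_counts = defaultdict(int)
    for period, is_ai in pages:
        totals[period] += 1
        ai_counts[period] += int(is_ai)
    return {p: ai_counts[p] / totals[p] for p in totals}

# Toy data standing in for detector verdicts on sampled pages.
sample = [("2020-01", False), ("2020-01", False),
          ("2025-05", True), ("2025-05", False), ("2025-05", True)]
print(ai_share_by_period(sample))  # 2020-01 -> 0.0, 2025-05 -> ~0.67
```

Note that the detector’s error rate propagates directly into these fractions, which is exactly the first caveat above.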

Pew Research shows US residents fear AI the most

Pew Research Center, a venerable nonprofit, nonpartisan public policy research organization, has released a global survey of attitudes toward AI in 25 countries. When asked whether the rise of AI in daily life made them more concerned or more excited, 50% of Americans said they were more concerned, and only 10% said they were more excited. This makes Americans the most nervous about AI among all countries surveyed. Perhaps this is in part because 47% of Americans have little or no trust in their government to appropriately regulate AI, making Americans the second most distrustful of their own government, behind only Greece at 73% distrust.

Robots

UK robotics startup Dexory is transforming warehouses

UK robotics startup Dexory makes autonomous tower-sized warehouse robots for picking and placing inventory, backed up by a powerful database and data analytics engine that allows warehouse managers to know where every item of inventory is at all times, and to provide operational insights that can smooth logistics. Dexory has just raised $100 million in equity and $65 million in debt financing for expansion and development. Warehouse operations are a major growth opportunity for robots. Amazon already uses more than 1 million robots in its global fulfillment warehouses, and is looking to expand that number considerably.

Dexory’s autonomous tower-shaped warehouse robots simplify logistics.

Cute Moxi healthcare robots target senior living facilities

Diligent Robotics’ cute Moxi robots have become a familiar sight in more than 30 hospitals around the US. The robots are primarily used to offload menial tasks from nurses, such as securely delivering supplies, medications, and lab samples from one part of the hospital to another. Now Diligent is looking to expand the use of Moxi robots to senior living facilities, including assisted living, nursing homes, and continuing care retirement communities. Targeting senior living facilities can greatly increase the customer base for Moxi robots, and the generally smaller organizations in these categories may have quicker sales cycles than the large hospital systems where Moxi got its start.

Moxi robots can navigate hospitals autonomously, and even use an elevator by themselves.

AI in Medicine

Google DeepMind’s AI finds a new cancer treatment

Yale researchers teamed with Google’s DeepMind AI lab to develop a system to identify novel cancer treatments. The system, known as C2S-Scale, found a candidate drug that it predicted would boost immune response to tumors. Many cancers are able to partially “hide” from our immune system and grow unchecked. The molecule proposed by the AI system, silmitasertib, boosted immune response to tumor cells by 50% in lab experiments. This result was seen as so promising that the Yale team is following up with both preclinical and clinical trials of efficacy.

The C2S-Scale AI system discovered how to put a target on cancer cells for the immune system.

AI is transforming mammography

The fact that each year 39 million women in the US get a mammogram is, to my mind, strong evidence for their superior grit and rationality. Far fewer men would put up with an exam that is uncomfortable at best, painful at times, awkward or even humiliating on occasion, and has enough uncertainty in the result that repeat tests and even biopsies are common. Women understand that the stakes are literally life or death. Now AI is starting to improve the mammography system in several ways. First, AI image analysis can verify the technical quality of the X-ray, so that there are fewer callbacks. Second, AI imaging systems can point out suspicious areas on the mammogram to the radiologist, resulting in fewer missed lesions. Finally, AI systems can combine analysis of the mammogram with other clinical data about the patient to produce personalized recommendations for the date of the next screening. Rather than a one-size-fits-all recommendation of a mammogram every 1-2 years, some women may need two a year, and others only one every five years. This type of hyper-personalization is likely to become an ever-increasing part of modern medicine, and about time, too, IMHO.
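The personalization step described above boils down to mapping a combined risk estimate (from the image analysis plus clinical data) to a screening interval. As a purely hypothetical sketch, assuming the risk has already been reduced to a score between 0 and 1 (the function name, thresholds, and intervals are illustrative inventions, not any vendor’s actual clinical logic):

```python
def next_screening_interval_months(risk_score):
    """Map a combined risk score in [0, 1] to a hypothetical
    screening interval in months. Thresholds are illustrative only,
    not clinical guidance."""
    if not 0.0 <= risk_score <= 1.0:
        raise ValueError("risk_score must be between 0 and 1")
    if risk_score >= 0.7:
        return 6    # high risk: twice a year
    if risk_score >= 0.3:
        return 12   # moderate risk: annual screening
    return 60       # low risk: once every five years
```

The point is the shape of the system, not the numbers: a continuous risk estimate replaces the one-size-fits-all calendar rule.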

That's a wrap! More news next week.