New Post 10-1-2025

Top Story

AI models nearly equal to human experts for a variety of common work tasks

In a study performed by OpenAI, leading chatbot models were challenged with a suite of real-world job tasks and pitted against human experts. The tasks included analyzing data, writing reports, and designing sales brochures. The resulting work products were then judged by separate experts who did not know which were produced by AI and which by humans. The result: the work of the best-performing AI model, Anthropic’s Claude Opus 4.1, equaled or beat the rating of the human experts’ work 47.6% of the time. In this study, a score of 50% means that humans and AI performed equally well. So, so close. Anthropic has since released an upgraded model, Claude Sonnet 4.5, and it may well get over the 50% line.
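For intuition on the scoring: the 47.6% figure is essentially a win-or-tie rate from blind pairwise comparisons. Here is a minimal sketch of how such a score could be tallied; the grader verdicts below are entirely made up for illustration.

```python
# Hypothetical illustration of a blind pairwise win-rate metric: graders
# compare an AI deliverable against a human expert's without knowing which
# is which. Verdicts are from the AI's perspective. Data is invented.
verdicts = ["win", "tie", "loss", "win", "loss",
            "tie", "loss", "win", "loss", "win"]

def win_or_tie_rate(verdicts):
    """Fraction of comparisons where the AI equaled or beat the human."""
    favorable = sum(1 for v in verdicts if v in ("win", "tie"))
    return favorable / len(verdicts)

# A rate of 0.5 means parity with human experts.
print(f"{win_or_tie_rate(verdicts):.1%}")  # 6 of 10 favorable -> 60.0%
```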

Clash of the Titans

California enacts AI safety law

California Governor Gavin Newsom has signed into law SB 53, a landmark AI safety bill. The legislation requires makers of powerful AI models to implement and disclose safety measures intended to prevent their AI from being used for dangerous purposes, such as designing a bioweapon. It also provides whistleblower protections for employees at AI companies who report safety violations. The act is the successor to a much more restrictive bill passed last year, which Newsom vetoed in response to industry fears that it would stifle innovation. Newsom then assembled a panel of leading AI experts, along with representatives from the AI industry, which helped craft the current bill. This time, the AI companies may not be completely happy, but they did not oppose the law.

California Governor Gavin Newsom signs landmark AI safety law.

OpenAI projects its electricity use to increase 125x

OpenAI CEO Sam Altman is betting hard on exponential growth in the use of AI. Award-winning technology journalist Alex Heath has reported that in its internal communications, OpenAI has projected increasing the company’s use of electricity (a rough proxy for computing power) by a staggering 125 times over this year’s usage by 2033. That would make OpenAI’s electricity usage greater than the total annual consumption of India today, and more than half of current annual usage in the US.
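A quick back-of-the-envelope check shows just how aggressive that projection is. Assuming the 125x increase spans the 8 years from 2025 to 2033:

```python
# What sustained annual growth rate does a 125x increase imply?
# Assumes the projection runs the 8 years from 2025 to 2033.
growth_factor = 125
years = 2033 - 2025  # 8 years

# Solve (1 + r)^years = growth_factor for r.
annual_rate = growth_factor ** (1 / years) - 1
print(f"Implied growth: {annual_rate:.0%} per year")  # roughly 83% per year
```

In other words, OpenAI would need to nearly double its electricity use every single year for eight years running.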

PwC report envisions AI transforming US healthcare

Big Four accounting firm PwC has released a report calling for an AI revolution in US health care. The report points out that health care in the US is fragmented, inefficient, too costly, and not equally accessible to all residents. Its solution: an AI-powered reinvention of the entire US healthcare system. This means a common digital infrastructure tying all aspects of the care system together; care delivery at home or on the road via remote monitoring and virtual visits; AI assistants for both patients and physicians; robots and drones for delivery of medications and medical equipment; and hyper-personalized treatments and care plans developed by humans collaborating with AI’s data analysis capabilities. Overall, PwC projects that an AI transformation could save $1 trillion per year in US healthcare costs while improving quality and access.

PwC foresees care in the home with remote sensors and robot helpers.

Fun News

OpenAI releases Pulse, instant checkout, safety routing, and parental controls

OpenAI continues to position itself as the next big consumer app, rivaling or exceeding Facebook and Google. To that end, it has dropped a slew of enhancements to make its platform more ad- and commerce-friendly, while trying to stay ahead of any backlash. First, it has released Pulse, a proactive feature that serves up a daily briefing of information you haven’t asked for but might want, based on your chats and any apps you connect to it, such as your calendar and your email. Note that this gives OpenAI a very convenient way to serve you ads, so expect those soon.

OpenAI is also working on a way to enable instant purchases from Shopify and Etsy without having to leave the ChatGPT app. AI search can lead you to the product you want, and now you can purchase and pay without switching apps.

And we mustn’t forget the children. OpenAI and other AI companies have been roundly criticized for allowing vulnerable people to become overly emotionally involved with their apps, so the company has implemented two new features. One is a general safety feature that routes “sensitive conversations” to a specially trained chatbot that attempts to de-escalate the interaction. The other is called “parental controls”, but is actually an age-estimation feature that looks for clues that the user may be under 18 years of age, and then screens that user from any graphic content regarding sexual activity or violence. OpenAI is trying to make its platform a safe place - where the company can safely make lots and lots of money.

OpenAI’s Pulse sends you cute little daily briefing cards on your interests and activities.

Stanford finds “AI Workslop” destroying productivity

Stanford researchers have identified an alarming trend in the workplace - coworkers sending you AI-enabled, highly polished reports, charts, and graphics that are actually meaningless drivel. They call it AI Workslop, and it currently constitutes around 15% of intra-company communications. Workslop is what happens when employees pass along low-effort, poorly-thought-through messages but use AI to make them look much more impressive than they are, tricking receivers into taking them seriously enough to read carefully before realizing they are worthless. The prevalence of workslop constitutes an invisible tax on a company’s productivity.

AI-generated workslop melts into meaninglessness under scrutiny.

Google reports 90% of software developers use AI for coding

AI models are swiftly getting better and better at creating computer code. Google has surveyed software developers and found that 90% of them are using AI coding tools in some way. That is an astoundingly rapid adoption of a product that didn’t exist 3 years ago. The developers continue to recognize the weaknesses of these tools, and so keep a close eye on the generated code, editing and tweaking as needed. This collaborative, human-in-the-loop style of using AI at work is likely to be the pattern in many, if not most, fields for at least the near future.

90% of software developers use AI for at least some coding tasks.

Robots

Australian robot 3-D prints a house, may build a base on the moon

Luyten has recently completed the first 3-D printed home in Australia, built in only 3 days. The company uses large printers that can pour strips of concrete in layers to make walls and other structural elements onsite. The speed and automation of the building process cuts costs dramatically, making the homes more affordable. Luyten is also working with the Australian space agency to design robot printers that could help build structures on the moon some day.

Startup developing AI scientist with automated lab

AI startup Periodic Labs is developing an autonomous AI scientist, which will be connected to an automated lab so that it can run its own experiments. The founders left top positions at OpenAI, Google, and Meta/Facebook to pursue this mission, which they believe will radically speed up research in physics, chemistry, and biology. They have amassed an impressive $300 million in their first round of funding, and plan to build a bevy of automated labs to allow the AI to run multiple simultaneous experiments.

Periodic Labs’ $300 million seed funding should buy a whole lotta automated lab space.

AI in Medicine

Medicare to begin using AI to deny care

Many private health plans require prior authorization from the plan for certain procedures in order to assure payment to the providers. In recent years, health plans began using AI instead of people to make these determinations of medical necessity. UnitedHealthcare was notorious for the aggressiveness of its AI-enabled denials of care; this reportedly was one factor behind the assassination of United’s CEO. Now the Trump Administration has announced that Medicare will begin instituting prior authorization requirements for a number of procedures, and will use AI to make the determinations. Up to now, Medicare has relied on retrospective audits of claims to weed out wasteful or unnecessary care: patients still received care, but the provider might have future reimbursements docked until payments for any disallowed claims were recovered. Now, Medicare may actually block care the AI deems unnecessary.

An AI bot may deny your next procedure.

AI-powered smart bandage heals wounds 25% faster

Engineers at the University of California, Santa Cruz have developed an automated device that uses a camera and AI to monitor wound healing, and apply treatments as needed. The tiny camera takes a picture of the enclosed wound every 2 hours and sends the image to a specially-trained AI model running on a nearby computer. The AI analyzes the progress of wound healing, and if any lag is detected, the device will either release a medication onto the wound, or create an electric field that helps new skin cells migrate to cover the wound. Trials have shown that the device accelerates wound healing up to 25% over standard wound care.
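The closed-loop cycle described above - image, assess, intervene - can be sketched roughly as follows. All names and thresholds here are invented for illustration; the actual UCSC device relies on a specially trained AI model to judge healing from images, not a simple rule.

```python
from dataclasses import dataclass

# Hypothetical sketch of the 2-hour wound-care decision cycle described
# above. Thresholds and the severity ordering of interventions are
# assumptions, not details from the UCSC work.

@dataclass
class Assessment:
    healing_score: float  # 0.0 (no progress) to 1.0 (fully on track)

def choose_intervention(assessment, lag_threshold=0.5, severe_threshold=0.25):
    """Pick an action for this cycle based on assessed healing progress."""
    if assessment.healing_score >= lag_threshold:
        return "monitor"             # healing on track: do nothing
    if assessment.healing_score >= severe_threshold:
        return "release_medication"  # mild lag: dose the wound
    return "apply_electric_field"    # strong lag: stimulate cell migration

print(choose_intervention(Assessment(0.8)))  # monitor
print(choose_intervention(Assessment(0.4)))  # release_medication
print(choose_intervention(Assessment(0.1)))  # apply_electric_field
```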

Stages of wound healing are sped up by UCSC’s AI-powered smart bandage.

That's a wrap! More news next week.