
News

US Department of Defense commits $2B to training AI to have “common sense”


While artificial intelligence is being painted by companies and governments alike as the catch-all answer to many of today’s inefficiencies and problems, it currently has one glaring shortcoming: it can’t answer common-sense questions.

In an effort to address this shortcoming, the U.S. Department of Defense (DoD) is committing $2 billion over the next five years to its Machine Common Sense (MCS) program. The program aims to enable computers to communicate naturally, behave reasonably in new situations, and learn from new experiences.

Thanks in part to its Iron Man (and Elon Musk) associations, the Defense Advanced Research Projects Agency, aka “DARPA”, an agency within the DoD, may be one of the few alphabet-soup government agencies with a future-tech-savvy reputation. That reputation is well deserved, too, if history has anything to say about it: as the agency that gave us the Internet through an extension of a defense communications project, DARPA made it possible to have this discussion online in the first place. The challenge of creating true thinking computers is perfectly aligned with what DARPA has historically done well.

“Artificial intelligence development projection.” Credit: DARPA, US Department of Defense

As computer technology advances at a near-exponential rate, so too does the potential relationship between computers and humans. However, the possibility of a troubling disconnect is also a growing reality. In other words, humans and computers currently operate very differently from one another, and that could spell bad things for the weaker logician of the two. Yeah, that means us.

Elon Musk has famously harped about this predicted disconnect on numerous occasions, and one of the companies he’s invested in, Neuralink, is working on preemptive solutions for its coming problems. While Neuralink generally aims to help human brains work more like computers, DARPA is taking the approach of having computers work more like humans.


The term “common sense” can often be tossed around in conversations to imply a variety of shared knowledge bases, but as a federal government agency, DARPA has its own specific definition for this context: “The basic ability to perceive, understand, and judge things that are shared by nearly all people and can be reasonably expected of nearly all people without need for debate.” By mimicking the cognitive processes we go through when we are young, the agency hopes computers will develop the “fundamental building blocks of intelligence and common sense” just like a human.

With advanced neural networks making amazing (and humorous) headlines regularly, what would a “common sense” machine bring to the table in terms of advancement? One primary answer is the requirement for less initial information. To quote Dr. Brian Pierce, director of DARPA’s Innovation Office, at a recent summit, “We’d like to get away from having an enormous amount of data to train neural networks.” If a machine could, as humans do, deduce answers by comparing its environment to its existing knowledge base, it wouldn’t need to be trained solely on enormous amounts of previously provided data. Essentially, it could think for itself using common sense.

DARPA has now completed a “Proposers Day” wherein potential contractors were presented with the agency’s specifics for its MCS program. The next step is a “Broad Agency Announcement”, i.e., a formal invitation for proposals to work on the project with the hope of obtaining a federal contract to fulfill its aim.

If the contract winner is successful, will common sense lead to computer behavior we’d welcome rather than fear? Hopefully that will be figured out sooner rather than later.




Elon Musk

Tesla is sending its humanoid Optimus robot to the Boston Marathon



Tesla’s Optimus humanoid robot will be stationed at the Tesla showroom at 888 Boylston Street in Boston, right along the final stretch of the Boston Marathon today, ready to cheer on runners and pose for photos with spectators.

According to a Tesla email shared by content creator Sawyer Merritt on X, Optimus will be at the Boston Boylston Street showroom on April 20, coinciding with Marathon Monday weekend. The Boston Marathon finishes on Boylston Street, and the surrounding area draws hundreds of thousands of spectators along with international broadcast coverage. Placing Optimus there puts it in front of a massive public audience at zero advertising cost.

The Tesla showroom is at 888 Boylston Street, between Gloucester Street and Fairfield Street. The final mile of the marathon runs directly along Boylston Street, with runners passing the big stores before reaching the finish line at Copley Square.

Optimus was first announced at Tesla’s AI Day event on August 19, 2021, when Elon Musk presented a vision for a general-purpose robot designed to take on dangerous, repetitive, and unwanted tasks. In March 2026, Optimus appeared at the Appliance and Electronics World Expo in Shanghai, where on-site staff stated that mass production of the robot could begin by the end of 2026. Before that, it showed up at the Tesla Hollywood Diner opening in July 2025 and at a Miami showroom event in December 2025.

Tesla’s well-calculated display of Optimus gives the public a low-pressure first encounter with a robot that Tesla is preparing to deploy at scale. The company has previously indicated plans to manufacture up to 1 million Optimus robots annually at its Fremont facility, with an Optimus production line at Gigafactory Texas targeting 10 million units per year.



Musk has said that Optimus “has the potential to be more significant than the vehicle business over time,” and separately that roughly 80 percent of Tesla’s future value will come from the robot program. Whether that holds depends on production execution. For now, Boston gets a preview of what that future looks like, standing at the finish line on Boylston Street while 32,000 runners pass by.


News

Tesla expands Unsupervised Robotaxi service to two new cities



Credit: Tesla

Tesla has taken a major step forward in its autonomous ride-hailing ambitions.

On April 18, the company’s official Robotaxi account announced that Robotaxi service is now rolling out in Dallas and Houston, Texas. The update signals the rapid scaling of unsupervised autonomous operations in the Lone Star State.

The announcement includes a compelling 14-second video captured from inside a Model Y. Shot from the passenger’s perspective, the footage shows the vehicle navigating suburban roads in both cities with zero driver intervention and no Safety Monitor in sight.

Tesla also shared geofence maps highlighting the initial service areas: a compact zone in Houston covering parts of Willowbrook and Jersey Village, and a similarly defined area in Dallas near Highland Park and central neighborhoods.


This expansion builds directly on Tesla’s existing operations. Robotaxi has been ramping unsupervised rides in Austin for months and maintains activity in the San Francisco Bay Area.

With Dallas and Houston now live, Texas hosts three active hubs—an impressive concentration that triples the company’s Lone Star footprint in just weeks. The move aligns with Tesla’s Q4 2025 earnings guidance, which outlined a broader H1 2026 rollout across seven U.S. cities, including Phoenix, Miami, Orlando, Tampa, and Las Vegas.

Texas offers favorable regulations, high ride-share demand, and relatively straightforward suburban-to-urban driving patterns ideal for early autonomous scaling. While initial geofences appear modest—roughly 25 square miles per city—Tesla has historically expanded these zones quickly as it gathers real-world data.



Unsupervised operation marks a critical milestone: passengers can summon, ride, and exit without safety drivers, a leap beyond many competitors still requiring human oversight.

For Tesla, the implications are significant. Successful scaling in major metros could accelerate the transition to a fully driverless fleet, unlocking new revenue streams and validating years of Full Self-Driving investment.

Riders gain convenient, potentially lower-cost mobility, while the company edges closer to Elon Musk’s vision of Robotaxis transforming urban transport.

As Tesla pushes into more cities this year, today’s launch in Dallas and Houston underscores its momentum. Hopefully, Tesla will be able to expand unsupervised rides to another U.S. state soon, which will mark yet another chapter in this short-but-encouraging Robotaxi story.


News

Tesla is pushing Robotaxi features to owner cars with Spring Update



Tesla is starting to push Robotaxi features to owner cars, and the first instances are coming as the Spring 2026 Update starts to roll out.

Tesla has quietly begun rolling out one of its most forward-looking Robotaxi-inspired features to existing customer vehicles.

With the 2026 Spring Update (version 2026.14+), the rear passenger display now features a fully interactive navigation map that works while the car is driving — a capability previously reserved for Tesla Robotaxi.

Until now, Tesla’s rear displays have been largely limited to media controls, climate settings, and static route overviews. The new interactive map transforms the backseat into an active navigation hub, exactly the kind of passenger-first interface Tesla has been prototyping for its driverless fleet.

In a Robotaxi, where no one sits behind the wheel, every rider will need intuitive, real-time map access. By shipping this UI into thousands of owner cars months ahead of the Cybercab’s planned unveiling, Tesla is stress-testing the software in real-world conditions and giving loyal customers an early taste of the autonomous future.

The rollout is still in its early wave. Only a small number of vehicles have received 2026.14.1 so far, but the feature is expected to expand rapidly in the coming weeks. Owners of Model S, Model X, Model 3, Model Y, and Cybertruck are all eligible.


For buyers of the new Signature Edition Model S and X Plaid vehicles — whose deliveries begin in May — the update will likely arrive shortly after they take delivery, meaning the final chapter of Tesla’s flagship lineup will ship with cutting-edge Robotaxi preview tech baked in.

Elon Musk has long emphasized that Tesla ships supporting infrastructure well before new products launch. This rear-map rollout is a textbook example of that philosophy — quietly preparing both the software and the customer base for a world of fully driverless rides.

While the interactive map may seem like a modest convenience upgrade on the surface, its deeper purpose is unmistakable. Tesla is using its massive installed base of vehicles as a proving ground for the exact passenger experience that will define the Robotaxi era.

For current owners, it’s a free preview of tomorrow’s mobility; for the company, it’s invaluable data and real-world validation before the Cybercab hits the streets.
