News
US Department of Defense commits $2B to training AI to have “common sense”
While artificial intelligence is being painted by companies and governments alike as the catch-all answer to many of today’s inefficiencies and problems, it currently has one glaring shortcoming: It can’t answer common sense questions.
To address this shortcoming, the U.S. Department of Defense (DoD) is committing $2 billion over the next five years to its Machine Common Sense (MCS) Program. The program aims to enable computers to communicate naturally, behave reasonably in new situations, and learn from new experiences.
Thanks in part to its Iron Man (and Elon Musk) associations, the Defense Advanced Research Projects Agency, aka “DARPA”, an agency within the DoD, may be one of the few alphabet-soup government agencies with a future-tech-savvy reputation. That reputation is well deserved, too, if history has anything to say about it. DARPA gave us the Internet as an outgrowth of a defense communications project, so the mere fact that we can discuss the agency online is a testament to the tech potential it represents. The challenge of creating true, thinking computers is perfectly aligned with the kind of work DARPA has historically done well.

As computer technology advances at a near-exponential rate, so too does the potential of the relationship between computers and humans. However, the possibility of a troubling disconnect is also a growing reality. In other words, humans and computers currently operate very differently from one another, and that could spell bad things for the weaker logician of the two. Yeah, that means us.
Elon Musk has famously harped on this predicted disconnect on numerous occasions, and one of the companies he’s invested in, Neuralink, is working on preemptive solutions for the problems it could bring. While Neuralink generally aims to help human brains work more like computers, DARPA is taking the approach of having computers work more like humans.
The term “common sense” is often tossed around in conversation to imply a variety of shared knowledge bases, but as a federal government agency, DARPA has its own specific definition for this context: “The basic ability to perceive, understand, and judge things that are shared by nearly all people and can be reasonably expected of nearly all people without need for debate.” By mimicking the cognitive processes we go through when we are young, the agency hopes computers will develop the “fundamental building blocks of intelligence and common sense” just like a human.
With advanced neural networks making amazing (and humorous) headlines regularly, what would a “common sense” machine bring to the table in terms of advancement? One primary answer is the need for less initial information. To quote Dr. Brian Pierce, director of DARPA’s Innovation Office, at a recent summit, “We’d like to get away from having an enormous amount of data to train neural networks.” If a machine could deduce answers from its environment and its existing knowledge base, as humans do, it wouldn’t need to rely solely on enormous amounts of previously provided data to interpret the world. Essentially, it could think for itself using common sense.
DARPA has now completed a “Proposers Day” wherein potential contractors were presented with the agency’s specifics for its MCS program. The next step is a “Broad Agency Announcement”, i.e., a formal invitation for proposals to work on the project with the hope of obtaining a federal contract to fulfill its aim.
If the contract winner is successful, will common sense lead to computer behavior we’d welcome rather than fear? Hopefully that will be figured out sooner rather than later.
Elon Musk
Elon Musk’s Grok records lowest hallucination rate in AI reliability study
Grok achieved an 8% hallucination rate, 4.5 customer rating, 3.5 consistency, and 0.07% downtime, resulting in an overall risk score of just 6.
A December 2025 study by casino games aggregator Relum has identified Elon Musk’s Grok as one of the most reliable AI chatbots for workplace use, boasting the lowest hallucination rate at just 8% among the 10 major models tested.
In comparison, market leader ChatGPT registered one of the highest hallucination rates at 35%, just behind Google’s Gemini, which came in at 38%. The findings highlight Grok’s factual prowess despite the AI model’s lower market visibility.
Grok tops hallucination metric
The research evaluated chatbots on hallucination rate, customer ratings, response consistency, and downtime rate. The chatbots were then assigned a reliability risk score from 0 to 99, with higher scores indicating bigger problems.
Grok achieved an 8% hallucination rate, a 4.5 customer rating, 3.5 consistency, and 0.07% downtime, resulting in an overall risk score of just 6. DeepSeek followed closely on hallucinations at 14%, and its zero downtime earned it an even better risk score of 4. ChatGPT’s high hallucination and downtime rates gave it the worst risk score of 99, followed by Claude and Meta AI, which earned reliability risk scores of 75 and 70, respectively.
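Relum has not published its exact scoring formula, so the snippet below is only an illustrative sketch of how metrics like these could be normalized and weighted into a single 0-99 risk score. The weights, scales, and function name are assumptions rather than the study’s methodology, and the output will not reproduce the published scores.

# Purely illustrative: Relum's actual scoring formula is not public.
# Each metric is mapped to a 0-1 "risk" value, then combined with
# assumed weights into a 0-99 score (higher = less reliable).

def reliability_risk_score(hallucination_pct, customer_rating, consistency,
                           downtime_pct, weights=(0.5, 0.2, 0.2, 0.1)):
    risks = [
        hallucination_pct / 100.0,      # 0% hallucinations -> zero risk
        1.0 - customer_rating / 5.0,    # assumes a 0-5 star rating scale
        1.0 - consistency / 5.0,        # assumes consistency is scored 0-5
        min(downtime_pct, 1.0),         # caps downtime contribution at 1%
    ]
    combined = sum(w * r for w, r in zip(weights, risks))
    return round(combined * 99)

# Grok's reported figures from the study (the result differs from the
# published score of 6 because the weights above are guesses):
print(reliability_risk_score(8, 4.5, 3.5, 0.07))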

Why low hallucinations matter
Relum Chief Product Officer Razvan-Lucian Haiduc shared his thoughts about the study’s findings. “About 65% of US companies now use AI chatbots in their daily work, and nearly 45% of employees admit they’ve shared sensitive company information with these tools. These numbers show well how important chatbots have become in everyday work.
“Dependence on AI tools will likely increase even more, so companies should choose their chatbots based on how reliable and fit they are for their specific business needs. A chatbot that everyone uses isn’t necessarily the one that works best for your industry or gives accurate answers for your tasks.”
In a way, the study reveals a notable gap between AI chatbots’ popularity and performance, with Grok’s low hallucination rate positioning it as a strong choice for accuracy-critical applications. This is despite the fact that Grok sees far less usage than more mainstream AI applications such as ChatGPT.
News
Tesla (TSLA) receives “Buy” rating and $551 PT from Canaccord Genuity
He also maintained a “Buy” rating for TSLA stock over the company’s improving long-term outlook, which is driven by autonomy and robotics.
Canaccord Genuity analyst George Gianarikas raised his Tesla (NASDAQ:TSLA) price target from $482 to $551. He also maintained a “Buy” rating for TSLA stock over the company’s improving long-term outlook, which is driven by autonomy and robotics.
The analyst’s updated note
Gianarikas lowered his 4Q25 delivery estimates but pointed to several positive factors in the Tesla story. He noted that EV adoption in emerging markets is gaining pace, and that progress in FSD and the Robotaxi rollout in 2026 represent major upside drivers. Further progress in the Optimus program next year could also add more momentum for the electric vehicle maker.
“Overall, yes, 4Q25 delivery expectations are being revised lower. However, the reset in the US EV market is laying the groundwork for a more durable and attractive long-term demand environment.
“At the same time, EV penetration in emerging markets is accelerating, reinforcing Tesla’s potential multi‑year growth runway beyond the US. Global progress in FSD and the anticipated rollout of a larger robotaxi fleet in 2026 are increasingly important components of the Tesla equity story and could provide sentiment tailwinds,” the analyst wrote.
Tesla’s busy 2026
The upcoming year is shaping up to be a busy one for Tesla, considering the company’s plans and targets. The autonomous two-seat Cybercab is confirmed to start production sometime in Q2 2026, as stated by Elon Musk during the 2025 Annual Shareholder Meeting.
Tesla is also expected to unveil the next-generation Roadster on April 1, 2026, and to begin high-volume production of the Tesla Semi in Nevada next year.
Beyond vehicle launches, Tesla has expressed its intention to significantly ramp up the rollout of FSD to more regions worldwide, such as Europe. Plans are also underway to launch Robotaxi networks in several more key areas across the United States.
News
Waymo sues Santa Monica over order to halt overnight charging sessions
In its complaint, Waymo argued that its self-driving cars’ operations do not constitute a public nuisance, and compliance with the city’s order would cause the company irreparable harm.
Waymo has filed a lawsuit against the City of Santa Monica in Los Angeles County Superior Court, seeking to block an order that requires the company to cease overnight charging at two facilities.
In its complaint, Waymo argued that its self-driving cars’ operations do not constitute a public nuisance, and compliance with the city’s order would cause the company irreparable harm.
Nuisance claims
As noted in a report from the Los Angeles Times, Waymo’s two charging sites at Euclid Street and Broadway have operated for about a year, supporting the company’s growing fleet with round-the-clock activity. Unfortunately, residents in the area have reportedly been unable to sleep due to incessant beeping from the self-driving taxis moving in and out of the charging stations at all hours.
Frustrated residents have protested against the Waymos by blocking the vehicles’ paths, placing cones, and “stacking” cars to create backups. This has also resulted in multiple calls to the police.
Last month, the city issued an order to Waymo and its charging partner, Voltera, to cease overnight operations at the charging locations, stating that the self-driving vehicles’ activities at night were a public nuisance. A December 15 meeting yielded no agreement on mitigations like software rerouting. Waymo proposed changes, but the city reportedly insisted that nothing would satisfy the irate residents.
“We are disappointed that the City has chosen an adversarial path over a collaborative one. The City’s position has been to insist that no actions taken or proposed by Waymo would satisfy the complaining neighbors and therefore must be deemed insufficient,” a Waymo spokesperson stated.
Waymo pushes back
In its legal complaint, Waymo stated that its “activities at the Broadway Facilities do not constitute a public nuisance.” The company also noted that it “faces imminent and irreparable harm to its operations, employees, and customers” from the city’s order. The suit also stated that the city was fully aware that the Voltera charging sites would be operating around the clock to support Waymo’s self-driving taxis.
The company highlighted over one million trips in Santa Monica since launch, with more than 50,000 rides starting or ending there in November alone. Waymo also criticized the city for adopting a contentious strategy against businesses.
“The City of Santa Monica’s recent actions are inconsistent with its stated goal of attracting investment. At a time when the City faces a serious fiscal crisis, officials are choosing to obstruct properly permitted investment rather than fostering a ‘ready for business’ environment,” Waymo stated.