Under the new leadership of Elon Musk, Twitter is working harder to thwart hateful conduct. The new Chief Twit took ownership of the platform just before Halloween weekend and has been rapidly implementing changes that address the needs of Twitter’s users, such as wrongful suspensions, as well as a spike in the use of a racial slur that took place as Twitter transitioned to its new leadership.
Jason Calacanis, a host of the All-In podcast, is working with Twitter’s new leadership team to help Elon Musk make the necessary changes to the platform. Calacanis shared a tweet by Twitter’s Head of Safety & Integrity, Yoel Roth, and said that the coordinated, hateful conduct surge was quickly thwarted.
Update on the coordinated, hateful conduct surge — it was quickly thwarted. https://t.co/QUNveJRVY6
— @jason (@Jason) November 1, 2022
In his thread, Roth gave a very clear update on how Twitter is addressing the surge in hateful conduct. This is a very different Twitter, as many users, myself included, have experienced hateful conduct and seen the platform’s slow response to it. Roth’s full thread reads as follows:
“Since Saturday, we’ve been focused on addressing the surge in hateful conduct on Twitter. We’ve made measurable progress, removing more than 1500 accounts and reducing impressions on this content to nearly zero. Here’s the latest on our work and what’s next.”
“Our primary success measure for content moderation is impressions: how many times harmful content is seen by our users. The changes we’ve made have almost entirely eliminated impressions on this content in search and elsewhere across Twitter.”

“Impressions on this content typically are extremely low, platform-wide. We’re primarily dealing with a focused, short-term trolling campaign. The 1500 accounts we removed don’t correspond with 1500 people; many are repeat bad actors.”

“Impressions don’t tell the whole story. These issues aren’t new, and the people targeted by hateful conduct aren’t numbers or data points. We’re going to continue investing in policy and technology to make things better.”
“Many of you have said you’ve reported hateful conduct and received notices saying it’s not a violation. Here’s why and what we’re doing to fix it:”
“To try to understand the context behind potentially harmful Tweets, we treat first-person, and bystander reports differently. First-person: This hateful interaction is happening to or targeting me. Bystander: This is happening to someone else.”
“Why? Because bystanders don’t always have full context, we have a higher bar for bystander reports in order to find a violation. As a result, many reports of Tweets that in fact, do violate our policies end up marked as non-violative on first review.”
“We’re changing how we enforce these policies, but not the policies themselves, to address the gaps here.”
“You’ll hear more from me and our teams in the days to come as we make progress. Talk is cheap; expect the data that proves we’re making meaningful improvements.”
Author’s note: There was a huge uptick in bots over the weekend. I’ve noticed several bots targeting Teslarati and continuing to spam Elon Musk’s replies. There was even a verified account posing as “Tesla News” and promoting a link to a YouTube video pushing a crypto scam.

That said, I don’t expect Elon Musk and his new team to solve these problems overnight. Twitter’s fast response to the hate is encouraging, and I hope the team applies the same speed to child sexual abuse material. Advocate Eliza Bleu has even offered to work with Twitter and Elon Musk at no charge to help spearhead the removal of this content.
I like to be as transparent with my followers as possible.
I have offered to work with X (Twitter) under the new leadership to remove child sexual exploitation material at scale. I offered to work for free.
— 𝔈𝔩𝔦𝔷𝔞 (@elizableu) October 26, 2022
As Eliza pointed out to me over the phone, Elon Musk was most likely not aware of the ongoing lawsuits against Twitter regarding child sexual abuse materials. Having this material up, she said, is a liability, and as a supporter of Elon’s, she would like to help Twitter remove it.
“One key benefit of Elon Musk prioritizing the removal of this content besides protecting children is that corporate media and governments won’t be able to weaponize this very real crime against him,” she told me.
It is a topic of the utmost importance and it's interesting that mainstream media never cared until Elon took over.
— Truth Nudge Unit (@TruthNudgeUnit) November 1, 2022
Your feedback is essential. If you have any comments or concerns or see a typo, you can email me at johnna@teslarati.com. You can also reach me on Twitter at @JohnnaCrider1.
Teslarati is now on TikTok. Follow us for interactive news & more. You can also follow Teslarati on LinkedIn, Twitter, Instagram, and Facebook.
Elon Musk’s Grok AI to be used in U.S. War Department’s bespoke AI platform
The partnership aims to provide advanced capabilities to 3 million military and civilian personnel.
The U.S. Department of War announced Monday an agreement with Elon Musk’s xAI to embed the company’s frontier artificial intelligence systems, powered by the Grok family of models, into the department’s bespoke AI platform GenAI.mil.
The partnership aims to provide advanced capabilities to 3 million military and civilian personnel, with initial deployment targeted for early 2026 at Impact Level 5 (IL5) for secure handling of Controlled Unclassified Information.
xAI Integration
As noted in the War Department’s press release, GenAI.mil, its bespoke AI platform, will gain xAI’s suite of government tools, which enables real-time global insights from the X platform for “decisive information advantage.” The rollout builds on xAI’s July launch of products for U.S. government customers, including federal, state, local, and national security use cases.
“Targeted for initial deployment in early 2026, this integration will allow all military and civilian personnel to use xAI’s capabilities at Impact Level 5 (IL5), enabling the secure handling of Controlled Unclassified Information (CUI) in daily workflows. Users will also gain access to real‑time global insights from the X platform, providing War Department personnel with a decisive information advantage,” the Department of War wrote in a press release.
Strategic advantages
The deal marks another step in the Department of War’s efforts to use cutting-edge AI in its operations. xAI, for its part, highlighted that its tools can support administrative tasks at the federal, state and local levels, as well as “critical mission use cases” at the front line of military operations.
“The War Department will continue scaling an AI ecosystem built for speed, security, and decision superiority. Newly IL5-certified capabilities will empower every aspect of the Department’s workforce, turning AI into a daily operational asset. This announcement marks another milestone in America’s AI revolution, and the War Department is driving that momentum forward,” the War Department noted.
Tesla FSD (Supervised) v14.2.2 starts rolling out
The update focuses on smoother real-world performance, better obstacle awareness, and precise end-of-trip routing, among other improvements.
Tesla has started rolling out Full Self-Driving (Supervised) v14.2.2, bringing further refinements to its most advanced driver-assist system. The new FSD update focuses on smoother real-world performance, better obstacle awareness, and precise end-of-trip routing, among other improvements.
Key FSD v14.2.2 improvements
As noted by Not a Tesla App, FSD v14.2.2 upgrades the vision encoder neural network with higher resolution features, enhancing detection of emergency vehicles, road obstacles, and human gestures. New Arrival Options let users select preferred drop-off styles, such as Parking Lot, Street, Driveway, Parking Garage, or Curbside, with the navigation pin automatically adjusting to the user’s ideal spot for precision.
Other additions include pulling over for emergency vehicles, real-time vision-based detours for blocked roads, improved gate and debris handling, and an additional Speed Profile for customized driving styles. Reliability gains cover fault recovery, alerts for residue build-up on the windshield, and automatic narrow-field camera washing for new 2026 Model Y units.
FSD v14.2.2 also improves handling of unprotected turns, lane changes, vehicle cut-ins, and school bus scenarios, among other things. Tesla also noted that users’ FSD statistics will be saved under Controls > Autopilot, which should help drivers easily view how much they are using FSD in their daily drives.
Key FSD v14.2.2 release notes
Full Self-Driving (Supervised) v14.2.2 includes:
- Upgraded the neural network vision encoder, leveraging higher resolution features to further improve scenarios like handling emergency vehicles, obstacles on the road, and human gestures.
- Added Arrival Options for you to select where FSD should park: in a Parking Lot, on the Street, in a Driveway, in a Parking Garage, or at the Curbside.
- Added handling to pull over or yield for emergency vehicles (e.g. police cars, fire trucks, ambulances).
- Added navigation and routing into the vision-based neural network for real-time handling of blocked roads and detours.
- Added additional Speed Profile to further customize driving style preference.
- Improved handling for static and dynamic gates.
- Improved offsetting for road debris (e.g. tires, tree branches, boxes).
- Improved handling of several scenarios, including unprotected turns, lane changes, vehicle cut-ins, and school buses.
- Improved FSD’s ability to manage system faults and recover smoothly from degraded operation for enhanced reliability.
- Added alerting for residue build-up on interior windshield that may impact front camera visibility. If affected, visit Service for cleaning!
- Added automatic narrow field washing to provide rapid and efficient front camera self-cleaning, and optimize aerodynamics wash at higher vehicle speed.
- Reduced camera visibility can lead to increased attention monitoring sensitivity.
Upcoming Improvements:
- Overall smoothness and sentience.
- Parking spot selection and parking quality.
Tesla is not sparing any expense in ensuring the Cybercab is safe
Images shared by the longtime watcher showed 16 Cybercab prototypes parked near Giga Texas’ dedicated crash test facility.
The Tesla Cybercab could very well be the safest taxi on the road when it is released and deployed for public use. This was, at least, hinted at by the intensive safety tests that Tesla seems to be putting the autonomous two-seater through at its Giga Texas crash test facility.
Intensive crash tests
As per recent images from longtime Giga Texas watcher and drone operator Joe Tegtmeyer, Tesla seems to be very busy crash testing Cybercab units. Images shared by the longtime watcher showed 16 Cybercab prototypes parked near Giga Texas’ dedicated crash test facility just before the holidays.
Tegtmeyer’s aerial photos showed the prototypes clustered outside the factory’s testing building. Some uncovered Cybercabs showed notable damage, and one even had its airbags deployed. With Cybercab production expected to start in about 130 days, it appears that Tesla is very busy ensuring that its autonomous two-seater becomes the safest taxi on public roads.
Prioritizing safety
With no human driver controls, the Cybercab demands exceptional active and passive safety systems to protect occupants in any scenario. Considering Tesla’s reputation, it is understandable that the company seems to be sparing no expense in ensuring that the Cybercab is as safe as possible.
Tesla’s focus on safety was recently highlighted when the Cybertruck achieved a Top Safety Pick+ rating from the Insurance Institute for Highway Safety (IIHS). This was a notable victory for the Cybertruck, as critics have long claimed that the vehicle would be one of the most unsafe trucks on the road, if not the most unsafe, due to its appearance. The vehicle’s Top Safety Pick+ rating, if anything, simply proved that Tesla never neglects to make its cars as safe as possible, and that definitely includes the Cybercab.