Elon Musk’s Neuralink unveils sleek V0.9 device, uses sassy pigs for live brain-machine demo
After another year of successfully staying in the shadows, Elon Musk’s Neuralink has revealed what has been going on behind the scenes in terms of technological progress. In a live-streamed event on Friday afternoon, the brain-machine interface company gave a demonstration, took questions, and left audiences with more to mull over than ever.
“The primary purpose of this demo is recruiting,” Musk stated at the very beginning of the presentation. He emphasized that everyone will face a brain or spine problem at some point in their life – problems that are inherently electrical, meaning it takes electrical solutions to solve them. Neuralink’s goal is to solve these problems for anyone who wants them solved, with a procedure that is meant to be simple and reversible and to have no negative side effects.
Two pigs were used for the ‘real-time’ demonstration promised in the days leading up to the event. The first, named Gertrude, had had a Neuralink implant for two months and was shown to be healthy and happy. A second pig, named Dorothy, had previously had an implant installed and then removed, with no side effects afterward.
- Elon Musk shows off the Neuralink v0.9 Device (Credit: Neuralink)
After a bit of a delay from the amusingly sassy Neuralink-implanted pigs, both the livestream and in-house audiences witnessed Gertrude’s device in action. Notably, the neural implants could predict the pigs’ limb movements based on the neural activity being read. Each reading was shown on a screen, with musical tones mapped to the signals as the data was processed.
Overall, here are some of the main takeaways from the presentation.
- The Neuralink implant device has been dramatically simplified since Summer 2019. Its design will be very low profile and nearly invisible on the outside, leaving only a small scar that could be covered by hair. “It’s like a FitBit in your skull with tiny wires,” Musk half-joked. “I could have it right now and you wouldn’t even know. Maybe I do!”
- The implant device is charged inductively, much like smartphones that support wireless charging. It will also have functions akin to those available on smartwatches today.
- A “smart” robot installs the device; building such a machine requires serious engineering talent, hence the recruiting focus of the Neuralink event. The “V2” robot featured in this year’s presentation looks like a step up from last year’s machine.
- The electrodes are installed with no general anesthesia, no bleeding, and no noticeable damage. The robot has performed all of the implant installations to date.
- The implant can be installed and removed without any side effects.
- You can have multiple Neuralink devices implanted and they will work seamlessly.
- The implant device will link to an application on the user’s phone.
- Neuralink received a ‘breakthrough device’ designation from the FDA in July, and the company is working with the agency to make the technology as safe as possible.
- The device’s electrodes will eventually be able to be sewn deeper within the brain, giving access to a greater range of functions beyond the upper cortex – for example, restoring motor function and treating depression and addiction.
- Getting a Neuralink should take less than an hour and should not require general anesthesia. Users could have the surgery done in the morning and go home later the same day.
The idea for Musk’s AI-focused brain venture first seemed to really take off after his appearance at Vox Media’s Recode Code Conference in 2016. The CEO had discussed the concept of a neural lace device on several occasions up to that point and suggested at the conference that he might be willing to tackle the challenge himself. A few months later, he revealed that he was in fact working on the idea, which was detailed at great length by Tim Urban on his website Wait But Why.
“He started Neuralink to accelerate our pace into the Wizard Era—into a world where he says that ‘everyone who wants to have this AI extension of themselves could have one, so there would be billions of individual human-AI symbiotes who, collectively, make decisions about the future.’ A world where AI really could be of the people, by the people, for the people,” Urban summarized. Given that bigger picture perspective, the 2020 Neuralink event seems even more impactful.
Neuralink’s official Twitter account opened the virtual floor to questions using the #askneuralink hashtag the night before the event, prompting several questions during the presentation. Musk also fanned the building curiosity in the hours beforehand. “Giant gap between experimental medical device for use only in patients with extreme medical problems & widespread consumer use. This is way harder than making a small number of prototypes,” Musk responded to one question about the mass-market viability of a future Neuralink product line.
https://twitter.com/flcnhvy/status/1299422178329362437
Also in the days prior to the Neuralink event, Musk teased a few more bits of information about what to expect. “Live webcast of working @Neuralink device,” he said. Just prior to confirming the device demonstration, he revealed that version two of the robot first shown in the 2019 progress update wasn’t quite up to the level of a LASIK eye surgery machine, though it was only a few years away from that point.
Tesla FSD v14.2.2 is getting rave reviews from drivers
So far, early testers have reported buttery-smooth drives with confident performance, even at night or on twisty roads.
Tesla Full Self-Driving (Supervised) v14.2.2 is receiving positive reviews from owners, with several drivers praising the build’s lack of hesitation during lane changes and its smoother decision-making, among other improvements.
The update, which started rolling out on Monday, also adds features like dynamic arrival pin adjustment.
Owners highlight major improvements
Longtime Tesla owner and FSD user @BLKMDL3 shared detailed impressions from 10 hours of driving on FSD v14.2.2, noting that the system exhibited “zero lane change hesitation” and “extremely refined” lane choices. He praised Mad Max mode’s performance, stellar parking (including pulling up precisely to ticket dispensers), and impressive canyon runs even in dark conditions.
Fellow FSD user Dan Burkland reported an hour of nighttime driving on FSD v14.2.2 with “zero hesitations” and “buttery smooth” confidence reminiscent of Robotaxi rides in areas such as Austin, Texas. Veteran FSD user Whole Mars Catalog also demonstrated voice navigation via Grok, while Tesla owner Devin Olsen completed a nearly two-hour drive with FSD v14.2.2 in heavy traffic and rain with strong performance.
Closer to unsupervised
FSD has been receiving rave reviews, even from Tesla’s competitors. Xpeng CEO He Xiaopeng, for one, offered fresh praise for FSD v14.2 after visiting Silicon Valley. Following extended test drives of Tesla vehicles running the latest FSD software, He stated that the system has made major strides, reinforcing his view that Tesla’s approach is indeed the proper path towards autonomy.
According to He, Tesla’s FSD has evolved from a smooth Level 2 advanced driver assistance system into what he described as a “near-Level 4” experience in terms of capabilities. While acknowledging that there is still room for improvement, the Xpeng CEO stated that FSD’s current iteration significantly surpasses last year’s capabilities. He also reiterated his belief that Tesla’s strategy of using the same autonomous software and hardware architecture across private vehicles and robotaxis is the right long-term approach, as it would allow users to bypass intermediate autonomy stages and move closer to Level 4 functionality.
Elon Musk’s Grok AI to be used in U.S. War Department’s bespoke AI platform
The partnership aims to provide advanced capabilities to 3 million military and civilian personnel.
The U.S. Department of War on Monday announced an agreement with Elon Musk’s xAI to embed the company’s frontier artificial intelligence systems, powered by the Grok family of models, into GenAI.mil, the department’s bespoke AI platform.
The partnership aims to provide advanced capabilities to 3 million military and civilian personnel, with initial deployment targeted for early 2026 at Impact Level 5 (IL5) for secure handling of Controlled Unclassified Information.
xAI Integration
As noted in the War Department’s press release, GenAI.mil, its bespoke AI platform, will gain xAI’s suite of tools for government customers, which enables real-time global insights from the X platform for a “decisive information advantage.” The rollout builds on xAI’s July launch of products for U.S. government customers, covering federal, state, local, and national security use cases.
“Targeted for initial deployment in early 2026, this integration will allow all military and civilian personnel to use xAI’s capabilities at Impact Level 5 (IL5), enabling the secure handling of Controlled Unclassified Information (CUI) in daily workflows. Users will also gain access to real‑time global insights from the X platform, providing War Department personnel with a decisive information advantage,” the Department of War wrote in a press release.
Strategic advantages
The deal marks another step in the Department of War’s efforts to use cutting-edge AI in its operations. xAI, for its part, highlighted that its tools can support administrative tasks at the federal, state and local levels, as well as “critical mission use cases” at the front line of military operations.
“The War Department will continue scaling an AI ecosystem built for speed, security, and decision superiority. Newly IL5-certified capabilities will empower every aspect of the Department’s workforce, turning AI into a daily operational asset. This announcement marks another milestone in America’s AI revolution, and the War Department is driving that momentum forward,” the War Department noted.
Tesla FSD (Supervised) v14.2.2 starts rolling out
The update focuses on smoother real-world performance, better obstacle awareness, and precise end-of-trip routing, among other improvements.
Tesla has started rolling out Full Self-Driving (Supervised) v14.2.2, bringing further refinements to its most advanced driver-assist system.
Key FSD v14.2.2 improvements
As noted by Not a Tesla App, FSD v14.2.2 upgrades the vision encoder neural network with higher-resolution features, enhancing detection of emergency vehicles, road obstacles, and human gestures. New Arrival Options let users select preferred drop-off styles, such as Parking Lot, Street, Driveway, Parking Garage, or Curbside, with the navigation pin automatically adjusting to the selected spot for a more precise arrival.
Other additions include pulling over for emergency vehicles, real-time vision-based detours around blocked roads, improved gate and debris handling, and an additional Speed Profile for further customizing driving style. Reliability gains cover fault recovery, alerts for residue buildup on the interior windshield, and automatic narrow-field camera washing for new 2026 Model Y units.
FSD v14.2.2 also improves handling of unprotected turns, lane changes, cut-ins, and school bus scenarios, among other things. Tesla also noted that users’ FSD statistics will be saved under Controls > Autopilot, which should help drivers easily see how much they are using FSD in their daily drives.
Key FSD v14.2.2 release notes
Full Self-Driving (Supervised) v14.2.2 includes:
- Upgraded the neural network vision encoder, leveraging higher resolution features to further improve scenarios like handling emergency vehicles, obstacles on the road, and human gestures.
- Added Arrival Options for you to select where FSD should park: in a Parking Lot, on the Street, in a Driveway, in a Parking Garage, or at the Curbside.
- Added handling to pull over or yield for emergency vehicles (e.g. police cars, fire trucks, ambulances).
- Added navigation and routing into the vision-based neural network for real-time handling of blocked roads and detours.
- Added additional Speed Profile to further customize driving style preference.
- Improved handling for static and dynamic gates.
- Improved offsetting for road debris (e.g. tires, tree branches, boxes).
- Improved handling of several scenarios, including unprotected turns, lane changes, vehicle cut-ins, and school buses.
- Improved FSD’s ability to manage system faults and recover smoothly from degraded operation for enhanced reliability.
- Added alerting for residue build-up on interior windshield that may impact front camera visibility. If affected, visit Service for cleaning!
- Added automatic narrow-field washing to provide rapid and efficient front camera self-cleaning, with an optimized wash for aerodynamics at higher vehicle speeds.
- Reduced camera visibility can lead to increased attention monitoring sensitivity.
Upcoming Improvements:
- Overall smoothness and sentience.
- Parking spot selection and parking quality.








