First sounds of wind on Mars captured by NASA’s InSight lander
This just in from the Elysium Planitia of Mars: the sound of wind from an alien world. On its 10th day as a new resident of the red planet, NASA’s InSight lander transmitted vibration data captured from wind buffeting its solar panels, representing a steady breeze about 99 million miles away. The combination of photos sent back from the craft with the sound of Martian wind gives Earth residents a unique moment to feel like they’ve joined the craft themselves. “It’s fun to imagine that I’m there,” mused Don Banfield during a JPL media teleconference discussing the recording. Banfield is InSight’s Auxiliary Payload Sensor Subsystem (APSS) Science Lead.
InSight, short for “Interior Exploration using Seismic Investigations, Geodesy and Heat Transport,” launched aboard an Atlas V rocket on May 5, 2018, and successfully landed on the Martian surface on November 26, 2018. The craft is a seismic investigator sent to study the red planet’s interior, eventually drilling 10 to 16 feet down into its crust to gather geological data. The craft’s landing was livestreamed online for viewers around the world, greeting Earthlings with a photo of its new home’s surface shortly after. It sent back more photos of the surrounding area prior to the wind recording.
The thin CO2 atmosphere on Mars doesn’t carry higher-pitched sounds well, so the recorded vibrations from InSight’s pressure sensor are low on the audio spectrum, under 50 Hz and thus difficult to hear. However, after the recording was sped up by a factor of 100 (a pitch increase of more than six octaves), it became possible to hear what sounds like a steady wind blowing across the regolith. Dust devils tracked moving across the Martian surface in the area showed motion consistent with the wind recordings, confirming what InSight’s scientists were hearing.
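For a sense of the arithmetic involved: speeding playback up by a factor k multiplies every frequency by k, and since each octave is a doubling of frequency, the pitch rises by log2(k) octaves. A minimal sketch (the 15 Hz starting frequency here is illustrative, not a figure from NASA’s data):

```python
import math

# Speeding up audio by a factor k multiplies every frequency by k;
# each octave is a doubling, so the pitch rises by log2(k) octaves.
speedup = 100
octaves_raised = math.log2(speedup)
print(round(octaves_raised, 2))  # 6.64 octaves

# An illustrative 15 Hz rumble, below comfortable hearing, lands at:
shifted_hz = 15 * speedup
print(shifted_hz)  # 1500 Hz, well inside the human hearing range
```

This is why the raw recording is nearly inaudible while the sped-up version sounds like an ordinary breeze.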

The way InSight picks up and translates sound is similar to how a human ear works: air pressure vibrates the eardrum, then that vibration pattern is sent through the inner ear bones to the cochlea, where tiny hairs translate the vibrations into electrical signals sent to the brain. InSight’s solar panels are like its eardrums, the spacecraft structure itself like its inner ear, its instruments like its cochlea, and the electronics box that translates and transmits signals like its brain. The “sounds” we hear from Mars are translated data from wind-caused vibrations.
Ironically enough, wind noise is not a particularly desired outcome from InSight’s instruments. According to the scientists participating in NASA’s teleconference discussing the event, the inlet for the pressure sensor was specifically designed to minimize chatter from air movement. Likewise, InSight’s seismographic gear will be placed wherever interference from the lander itself is lowest, so that the instruments record seismic events rather than the craft’s own movement. It should be noted, though, that the Martian wind gracing our human ears for the first time is only a taste of what’s to come from InSight’s instruments.
Once the wind and thermal shield (the white dome in the photos) has been lifted from the lander in a few weeks, all of InSight’s instruments will be exposed to the Martian environment for data collection. For now, the lander’s Earth-based team is first focusing on understanding the area the craft is in to pick the best place to set its instruments. After the main mission begins, however, a full study of Mars’ atmosphere will be underway and we could hear, among other natural events, the sounds of exploding meteors.

While wind may be a unique sound to hear on an alien world, it’s not the first time a NASA craft has entertained our ears and imaginations. Electromagnetic vibrations have been recorded all across our solar system, perhaps the most famous of which originated from the Voyager 1 spacecraft launched in 1977. The data collected from the craft’s radio-capturing instruments has been converted into audio files – you can even find a full album’s worth of the sounds on a variety of streaming sites. Some of the recordings are meditation-worthy, others a touch unnerving. We humans have additionally added some recordings of our own to space via Voyager’s famous “golden record”, the sounds of which are also available for listening online.
If you’re craving a full Martian soundtrack, you’ll be happy to know that NASA’s Mars 2020 rover is planned to provide just that. It will have two microphones on board, one of which will record the actual landing of the rover. Combined with telemetry data and surface photographs, Mars is on its way to its own documentary with inputs completely provided “on-location”. Stay tuned!
Listen to the Martian wind yourself below:
Tesla Optimus V3 hand and arm details revealed in new patents
Two new patents, which were coincidentally filed on the same day as the “We, Robot” event back in October 2024, protect Tesla’s mechanically actuated, tendon-driven architecture.
Tesla is planning to soon reveal its latest and greatest version of the Optimus humanoid robot, and a series of new patents covers the robot’s hands and arms, the former being, admittedly, one of the most challenging parts of the project to develop.
The designs relocate heavy actuators to the forearm, route cables through a sophisticated wrist design, and employ innovative joint assemblies to achieve human-like dexterity while enabling lightweight construction and high-volume manufacturing.
Core Tendon-Driven Hand Architecture
The primary patent, which is titled “Mechanically Actuated Robotic Hand,” details a cable/tendon-driven system.
Actuators are positioned in the forearm rather than the hand. Each finger features four degrees of freedom (DoF), while the wrist adds two more.
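Those per-joint figures are consistent with the 22-DoF hand architecture described later in the filings. A trivial check, assuming five actuated fingers per hand:

```python
fingers = 5           # assumption: five actuated fingers per hand
dof_per_finger = 4    # per the patent: four degrees of freedom per finger
wrist_dof = 2         # the wrist adds two more

total_dof = fingers * dof_per_finger + wrist_dof
print(total_dof)  # 22
```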
Tesla’s Optimus V3 robot hand looks to have been revealed in a new international patent published today.
The patent describes a tendon/cable-driven hand:
• Actuators in the forearm
• Each finger has 4 degrees of freedom
• The wrist has 2 degrees of freedom
• Tendon-driven…

— Sawyer Merritt (@SawyerMerritt) April 16, 2026
Three thin, flexible control cables (tendons) per finger extend from the forearm actuators, pass through the wrist, and connect to the finger segments. Integrated channels within the finger phalanges guide these cables selectively—routing behind some joints and forward of others—to enable independent bending without unintended motion.
Patent diagrams illustrate thick cable bundles emerging from the wrist into the palm and fingers, with labeled pivots and routing guides. This setup closely mirrors human forearm-muscle and tendon anatomy, where most hand control originates proximally.
Advanced Wrist Routing Innovation
One of the standout features is the wrist’s cable transition mechanism. Cables shift from a lateral stack on the forearm side to a vertical stack on the hand side through a specialized transition zone.
Boom! A patent covering the robot arm and joints, presumed to be @Tesla_Optimus’s third-generation architecture, has been published.

I’ll get to work on an article.

This is a truly precious patent that I’ve waited more than a year for; I hope it blows up to the million-view range. 😉 @herbertong @SawyerMerritt @GoingBallistic5 @TheHumanoidHub
— SETI Park (@seti_park) April 16, 2026
This geometry significantly reduces cable stretch, torque, friction, and crosstalk during combined yaw and pitch wrist movements — common failure points in simpler tendon systems that cause imprecise or jerky motion.
By minimizing these issues, the design supports smoother, more reliable multi-axis wrist operation, essential for complex real-world tasks.
Companion Patents on Appendage and Joint Design
Two supporting patents provide additional depth. “Robotic Appendage” covers the overall forearm-to-palm-to-finger assembly, with a palm body movably coupled to the forearm and finger phalanges linked by tensile cables returning to forearm actuators. Tensioning these cables repositions the phalanges precisely.
“Joint Assembly for Robotic Appendage” describes curved contact surfaces on mating structures paired with a composite flexible member. This allows smooth pivoting while maintaining consistent tension, enhancing durability, and simplifying assembly for mass production.
Executive Insights on Hand Development Challenges
Tesla executives have consistently described the hand as the most difficult component of Optimus.
Elon Musk has called it “the majority of the engineering difficulty of the entire robot,” emphasizing that human hands possess roughly 27–28 DoF with an intricate tendon network powered largely by forearm muscles. He has likened the challenge to something “harder than Cybertruck or Model X… somewhere between Model X and Starship.”
In mid-2025, Musk acknowledged that Tesla was “struggling” to finalize the hand and forearm design. By early 2026, he stated that the company had overcome the “hardest” problems, including human-level manual dexterity, real-world AI integration, and volume production scalability.
He estimated the electromechanical hand represents about 60 percent of the overall Optimus challenge, compounded by the lack of an existing supply chain for such precision components.
These patents directly tackle the acknowledged pain points: relocating actuators reduces hand mass and inertia for better speed and efficiency; advanced wrist routing and joint geometry address friction and crosstalk; and simplified, stackable parts visible in the diagrams indicate readiness for high-volume manufacturing.
Implications for Optimus Production and Leadership
Collectively, the patents portray the Optimus V3 hand not as a mere prototype, but as a production-oriented system engineered from first principles.
The 22-DoF architecture, forearm-driven tendons, and crosstalk-minimizing wrist deliver a clear competitive edge in dexterity. They align with Musk’s view that high-volume manufacturing is one of the three critical elements missing from most other humanoid projects.
For Optimus to become the most capable humanoid robot, its hand needed to replicate the utility and dexterity of its human counterpart.
These filings demonstrate that Tesla has transformed years of engineering challenges into patented, elegant solutions — positioning the company strongly in the race toward general-purpose robotics.
Tesla intertwines FSD with in-house Insurance for attractive incentive
Every mile logged under FSD now carries a documented financial value—lower risk, lower cost—based on Tesla’s internal driving data rather than external crash statistics alone.
Tesla intertwined its Full Self-Driving (Supervised) suite with its in-house Insurance initiative in an effort to offer an attractive incentive to drivers.
Tesla announced that its new Safety Score 3.0 will automatically assign a perfect score of 100 to every mile driven with Full Self-Driving (Supervised) enabled.
The change is designed to boost customers’ average safety scores and deliver noticeably lower monthly premiums.
The move marks the clearest link yet between Tesla’s autonomous driving technology and its proprietary insurance product. Tesla Insurance already relies on real-time vehicle data—such as acceleration, braking, following distance, and speed—to calculate a Safety Score between 0 and 100. Higher scores have long translated into cheaper rates.
Under the previous system, however, even brief manual interventions could drag down the average, frustrating owners who rely heavily on FSD. Version 3.0 eliminates that penalty for supervised autonomous miles, effectively treating FSD-driven segments as the safest possible driving behavior.
The incentive is immediate and financial. Drivers who keep FSD engaged for the majority of their trips will see their overall score rise, potentially shaving hundreds of dollars off annual premiums.
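Tesla has not published the Safety Score 3.0 formula, but the effect described above behaves like a mileage-weighted average in which FSD-engaged miles count as a perfect 100. The sketch below is purely illustrative: the function name, the single “manual score” input, and the sample numbers are all assumptions, not Tesla’s actual model.

```python
def blended_safety_score(manual_miles: float, manual_score: float,
                         fsd_miles: float) -> float:
    """Mileage-weighted score where FSD-engaged miles count as a perfect 100.

    Illustrative only: the real Safety Score is computed from telemetry
    (hard braking, following distance, speed, etc.), not a single input.
    """
    FSD_SCORE = 100.0
    total_miles = manual_miles + fsd_miles
    if total_miles == 0:
        return FSD_SCORE
    return (manual_miles * manual_score + fsd_miles * FSD_SCORE) / total_miles

# A driver with a middling 85 manual score, doing 80% of miles on FSD:
print(blended_safety_score(manual_miles=200, manual_score=85, fsd_miles=800))
# 97.0 -- the FSD miles pull the blended score far above the manual baseline
```

Under any weighting of this shape, every additional FSD mile can only raise the average, which is the behavioral nudge the program is built around.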
Tesla framed the update as a direct response to customers, many of whom had complained that the old scoring model punished the very behavior it was meant to encourage.
For now, the program applies only to new policies in six states: Indiana, Tennessee, Texas, Arizona, Virginia, and Illinois.
Existing policyholders are not yet included, a point that drew swift questions from the Tesla community. Many owners in other states, including California and Georgia, expressed hope that the benefit would expand nationwide soon.
The announcement arrives as Tesla continues to roll out FSD Supervised updates and push for regulatory approval of more advanced autonomy. By tying insurance savings directly to FSD usage, the company is putting its own actuarial weight behind the technology’s safety claims.
Tesla has not disclosed exact premium reductions or the full rollout timeline beyond the six launch states.
Still, the message is clear: the more drivers trust FSD Supervised, the more Tesla Insurance will reward them. In an era when legacy insurers remain cautious about autonomous tech, Tesla is betting that its own data will prove the safest miles are the ones driven hands-free.
Tesla finalizes AI5 chip design, Elon Musk makes bold claim on capability
The Tesla CEO’s words mark a strategic shift. Tesla has long emphasized software-hardware co-design, squeezing maximum performance from every transistor. Musk previously described AI5 as optimized for edge inference in both Robotaxi and Optimus.
Tesla has finalized its chip design for AI5, as Elon Musk confirmed today that the new chip has reached the tape-out stage, the final step of the design process before the chip heads to fabrication.
But in a brief reply on X, Musk clarified Tesla’s AI hardware roadmap, essentially confirming that the new chip is not needed for the company’s driving software: AI4, he said, is already “enough to achieve much better than human safety for FSD.”
Instead, the AI5 chip will be focused on Tesla’s big-time projects for the future: Optimus and supercomputer clusters.
Musk thanked TSMC and Samsung for production support, noting that AI5 could become “one of the most produced AI chips ever.” Yet, the key pivot came in his direct answer: vehicles no longer need the bleeding-edge silicon.
And thank you to @TaiwanSemi_TSC and @Samsung for your support in bringing this chip to production! It will be one of most produced AI chips ever.
— Elon Musk (@elonmusk) April 15, 2026
Existing AI4 hardware, which is already deployed in hundreds of thousands of HW4-equipped Teslas, delivers safety metrics superior to human drivers for Full Self-Driving. AI5 will instead accelerate Optimus robot development and massive Dojo-style training clusters.
Now, with AI4 proving sufficient, the company avoids costly retrofits across its fleet while redirecting next-generation compute toward higher-value applications: dexterous robots and exponential training scale.
But is it reasonable to assume AI4 enables unsupervised self-driving? Yes, but with important caveats.
On the hardware side, the claim is credible. Tesla’s FSD stack runs end-to-end neural networks trained on billions of miles of real-world data. Internal safety data reportedly shows AI4-equipped vehicles already outperforming average human drivers by a significant margin in controlled metrics (collision avoidance, reaction time, edge-case handling).
Dual-redundant AI4 chips provide ample headroom for the driving task, leaving bandwidth for future model improvements without new silicon. Musk’s assertion aligns with Tesla’s pattern of over-provisioning compute early, then optimizing ruthlessly, exactly as HW3 once sufficed before HW4 scaled further.
Optimus and our supercomputer clusters.
AI4 is enough to achieve much better than human safety for FSD.
— Elon Musk (@elonmusk) April 15, 2026
Unsupervised autonomy, meaning Level 4 or higher, is not solely a compute problem. Regulatory approval remains the primary gate.
Even if AI4 achieves “much better than human” safety statistically, agencies like the NHTSA demand exhaustive validation, liability frameworks, and public trust.
Tesla’s supervised FSD has shown rapid gains in recent versions, yet real-world edge cases, like construction zones, emergency vehicles, and adverse weather, still require driver intervention in many jurisdictions. Competitors like Waymo operate limited unsupervised fleets, but only in geofenced areas with extensive mapping. Tesla’s vision-only, fleet-scale approach is more ambitious—and harder to certify globally.
In short, Musk’s post is both pragmatic and bullish. AI4 is likely capable of unsupervised FSD from a technical standpoint. Whether regulators and consumers agree, and how quickly, will determine if Tesla’s bet pays off.
The company’s capital-efficient path keeps existing cars relevant while pouring future compute into robots. If the safety data holds, unsupervised autonomy could arrive sooner than many expect.