
Scientists use AI neural network to translate speech from brain activity


Three recently published studies on using artificial intelligence (AI) neural networks to generate audio from brain signals have shown promising results, producing identifiable sounds up to 80% of the time. Participants in the studies first had their brain signals recorded while they either read aloud or listened to specific words. The data was then fed to a neural network to “learn” how to interpret the brain signals, after which the reconstructed sounds were played for listeners to identify. These results represent hopeful prospects for the field of brain-computer interfaces (BCIs), where thought-based communication is quickly moving from the realm of science fiction to reality.
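The pipeline the three studies share, pairing recorded neural activity with the audio produced at the same moment, training a model on those pairs, and then reconstructing audio from held-out brain signals, can be sketched in a few lines. The example below is purely illustrative: it uses synthetic data and a simple ridge regression in place of the studies' deep neural networks, and none of the variable names or sizes come from the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (synthetic data; the studies used real intracranial recordings).
n_frames, n_electrodes, n_spec_bins = 500, 64, 32

# A hidden linear mapping from "brain activity" to "speech spectrogram" frames.
true_W = rng.normal(size=(n_electrodes, n_spec_bins))
neural = rng.normal(size=(n_frames, n_electrodes))          # recorded signals
spectrogram = neural @ true_W + 0.1 * rng.normal(size=(n_frames, n_spec_bins))

# Fit on most frames, decode held-out ones, mirroring how the networks were
# trained on paired brain/audio data and then tested on unseen brain signals.
train, test = slice(0, 400), slice(400, 500)

# Ridge regression as a minimal "decoder" (the papers used deeper networks).
lam = 1.0
X, Y = neural[train], spectrogram[train]
W = np.linalg.solve(X.T @ X + lam * np.eye(n_electrodes), X.T @ Y)

# Reconstruct unseen frames and measure fidelity with correlation.
pred = neural[test] @ W
r = np.corrcoef(pred.ravel(), spectrogram[test].ravel())[0, 1]
print(f"held-out reconstruction correlation: {r:.2f}")
```

Because the synthetic mapping is linear and the noise is small, the held-out correlation here comes out close to 1; with real intracranial recordings, far noisier signals, and nonlinear networks, reconstruction quality is what the 40% to 80% identification rates described below reflect.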

The idea of connecting human brains to computers is far from new. In fact, several relevant milestones have been reached in recent years, including enabling paralyzed individuals to operate tablet computers with their brain waves. Elon Musk has also famously brought attention to the field with Neuralink, his BCI company that essentially hopes to merge human consciousness with the power of the Internet. As brain-computer interface technology expands and develops new ways to foster communication between brains and machines, studies like these, originally highlighted by Science Magazine, will continue demonstrating the steady march of progress.

Functional areas of the human brain. | Credit: Blausen.com staff (2014) via CC BY 3.0.

In the first study, conducted by researchers from Columbia University and Hofstra Northwell School of Medicine, both in New York, five epileptic participants had the signals from their auditory cortices recorded as they listened to stories and numbers being read to them. The signal data was provided to a neural network for analysis, which then reconstructed audio files that participating listeners accurately identified 75% of the time.

In the second study, conducted by a team from the University of Bremen (Germany), Maastricht University (Netherlands), Northwestern University (Illinois), and Virginia Commonwealth University (Virginia), brain signals were recorded from the speech-planning and motor areas of six patients while they underwent tumor surgery. Each patient read specific words aloud to target the data collected. After the brain and audio data were used to train a neural network, the program was given brain signals not included in the training set and asked to recreate the audio, producing words that listeners recognized 40% of the time.

Finally, in a third study by a team at the University of California, San Francisco, three participants with epilepsy read text aloud while brain activity was captured from the speech and motor areas of their brains. The audio generated from their neural network’s analysis of the signal readings was presented to a group of 166 people, who were asked to identify the sentences on a multiple-choice test; some sentences were identified with 80% accuracy.

While the research presented in these studies shows serious progress toward connecting human brains to computers, there are still a few significant hurdles. For one, the way neuron signal patterns translate into sounds varies from person to person, so a neural network must be trained separately on each individual. The best results also require the most precise neuron signals possible, data that can only be obtained by placing electrodes in the brain itself. Opportunities to collect data at this invasive level are limited, relying on voluntary participation in approved experiments.


All three studies demonstrated a significant ability to reconstruct speech from neural data; however, in every case the participants were able to produce audible speech for the computer training set. For patients unable to speak, the difficulty of separating the brain’s speech signals from its other signals will be the biggest challenge. The differences between brain signals during actual speech and imagined speech will complicate matters further.




Elon Musk reveals when SpaceX will perform first-ever Starship catch

“Starship catch is probably flight 13 to 15, depending on how well V3 flights go,” Musk said.


Credit: SpaceX

Elon Musk revealed when SpaceX would perform the first-ever catch attempt of Starship, its massive rocket that will one day take life to other planets.

On Tuesday, Starship aced its tenth test flight as SpaceX was able to complete each of its mission objectives, including a splashdown of the Super Heavy Booster in the Gulf, the deployment of eight Starlink simulators, and another splashdown of the ship in the Indian Ocean.

It was the first launch that featured a payload deployment.


SpaceX was transparent that it would not attempt to catch the Super Heavy Booster, something it has done on three previous occasions: Flight 5 on October 13, 2024, Flight 7 on January 16, and Flight 8 on March 6.


However, there are bigger plans for the future, and Musk detailed them in a recent post on X, where he discussed SpaceX’s plans to catch Starship itself, which would be a monumental accomplishment.

Musk said the most likely opportunities for SpaceX to catch Starship itself would be Flight 13, Flight 14, and Flight 15, but it depends on “how well the V3 flights go.”

The Starship launched with Flight 10 was a V2, which is the same size as the subsequent V3 rocket but has a smaller payload-to-orbit rating and is less powerful in terms of initial thrust and booster thrust. Musk said there is only one more V2 rocket left to launch.


V3 will be the version flown through 2026, while V4, set to be the most capable Starship build SpaceX manufactures, is likely to be the company’s first ship to carry humans to space.

Musk said that SpaceX planned to “hopefully” attempt a catch of Starship in 2025. However, it appears that this will likely be pushed back to 2026 due to timing.


SpaceX would need to launch the 11th and 12th test flights by the end of the year in order to reach Musk’s expected first catch attempt on Flight 13. It’s not unheard of, but the company will need to accelerate its launch rate, as it has had only three test flights this year.



Tesla Robotaxi rival Waymo confirms massive fleet expansion in Bay Area



Credit: Uber

Tesla Robotaxi rival Waymo has confirmed that it has massively expanded its fleet of driverless ride-sharing vehicles in California’s Bay Area since its last public disclosure.

Fleet size is perhaps one of the most important metrics in the race for autonomous supremacy, along with overall service area. Tesla has seemed to focus on the latter while expanding its fleet slowly to maintain safety.

Waymo, on the other hand, is bringing its fleet size across the country to significant levels. In March, it told The SF Examiner that there were over 300 Waymos in service in the San Francisco area, which was not a significant increase from the 250 vehicles on the road it reported in August 2023.

In May, the company said in a press release that it had more than 1,500 self-driving Waymos operating nationwide. More than 600 were in the San Francisco area.



However, new data from the California Public Utilities Commission (CPUC) said Waymo had 1,429 vehicles operating in California, and 875 of them were “associated with a terminal in San Francisco,” according to The SF Examiner.

CPUC data from March 2025 indicated that there were a total of 1,087 Waymo vehicles in California, with 762 located in San Francisco. Some were test vehicles, others were deployed to operate as ride-sharing vehicles.

The company’s August update also said that it deploys more than 2,000 commercial vehicles in the United States. That number was 1,500 in May. There are also roughly 400 in Phoenix and 500 in Los Angeles.

While Waymo has done a good job of expanding its fleet, it has also expanded its footprint in the various cities where it operates.

Most recently, it grew its geofence in Austin, Texas, to 90 square miles. This outpaced Tesla for a short period before the company expanded its Robotaxi service area earlier this week to roughly 170 square miles.



The two companies have drastically different approaches to self-driving: Waymo utilizes LiDAR, while Tesla relies solely on cameras. Tesla CEO Elon Musk has made no secret of which he believes to be the superior solution to autonomy.



Tesla launches Full Self-Driving in a new region



Credit: Tesla

Tesla has launched its Full Self-Driving suite in a new region, marking a significant step in the company’s progress to expand its driver assistance suite on a global scale.

It is also the first time Tesla has launched FSD in a right-hand-drive market.

Today, Tesla launched Full Self-Driving in Australia, priced at $10,100, according to Aussie automotive blog Man of Many, which tried out the suite earlier this week.

Previously, only the Basic and Enhanced Autopilot suites were available; FSD now adds Traffic Light and Stop Sign Control along with all the features of those two suites.

It is the first time Tesla has launched the suite by name in a region outside of North America. In China, Tesla has “City Autopilot,” as it was not permitted to use the Full Self-Driving label for regulatory reasons.

However, Tesla still lists Full Self-Driving (Supervised) as available in the U.S., Canada, China, Mexico, and Puerto Rico.

The company teased the launch of the suite in Australia earlier this week, and it appears to have been released to select media members in the region.



The rollout of Full Self-Driving in the Australian market will occur in stages: Model 3 and Model Y vehicles with Hardware 4 will receive the first batch of rollouts in the region.

TechAU also reported that “the initial deployment of FSDs in Australia will roll out to a select number of people outside the company, these people are being invited into Tesla’s Early Access Program.”

Additionally, the company reportedly said it is “very close” to unlocking FSD in customer cars.

Each new Tesla sold will also come with a 30-day free trial of the suite.

Australia is the sixth country to officially have Full Self-Driving available, following the United States, Canada, China, Mexico, and Puerto Rico.



