News

Paralyzed individuals successfully use brain waves to operate tablet computers

In a collaborative study presented by scientists primarily affiliated with Stanford and Brown Universities, participants with significant paralysis were able to use unmodified applications on an Android tablet using their brain waves. Previous studies had achieved "point-and-click" computer functionality interpreted from these kinds of signals, but the applications available to participants were limited to software and devices that had been specialized and personalized for users' specific needs. This study demonstrated technology that overcomes this limitation and opens up the full range of software available to non-disabled users. Participants enjoyed applications previously unavailable to them, such as streaming music services and a piano keyboard player.

To accomplish the study's objectives, the scientists capitalized on and combined existing technologies. Brain waves recorded by participants' brain implants were sent to a commercially available recording system, then processed and decoded by existing real-time interpreter software. The decoded data was transmitted to a Bluetooth interface configured as a wireless mouse, which was paired to an Android tablet. While the steps involved are many, the result somewhat resembles telepathy, but above all it means greater accessibility for people with disabilities.
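The relay stage in this chain is conceptually simple: once a cursor intention has been decoded, it only needs to be packaged as an ordinary mouse event for the tablet to treat it like any other Bluetooth mouse. A minimal sketch in Python, assuming the standard 3-byte boot-protocol HID mouse report (one byte of button flags plus signed 8-bit X and Y deltas) that Bluetooth mice commonly emit; the function name here is hypothetical, not from the study:

```python
import struct

def pack_mouse_report(dx: int, dy: int, buttons: int = 0) -> bytes:
    """Pack a decoded cursor delta into a 3-byte boot-protocol HID
    mouse report: button flags, then signed 8-bit X and Y deltas."""
    # Clamp deltas to the signed-byte range the report format allows
    dx = max(-127, min(127, dx))
    dy = max(-127, min(127, dy))
    # "<Bbb" = unsigned button byte followed by two signed delta bytes
    return struct.pack("<Bbb", buttons & 0x07, dx, dy)

# Example: a decoded "move right and slightly up" intention
report = pack_mouse_report(10, -3)
```

Because the tablet sees only a generic mouse, no software on the device needs to know the input originated in a brain implant, which is precisely what lets participants run unmodified apps.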

A study participant searches for orchid care information using signals from her brain. | Credit: CC0 via PLOS One

Individuals suffering from various forms of paralysis generally have difficulty using communication technologies. Strokes, neurological injuries, and neurodegenerative diseases such as ALS (Amyotrophic Lateral Sclerosis) can all lead to limited physical functionality that impairs the use of communication devices. However, although physical mobility is affected by paralytic conditions, the brain often continues to send signals to muscles to trigger movement. Using this attempted signaling, scientists have been developing technology to bridge the disconnect between the brain’s signals and the intended outcomes.

Brain-computer interfaces (BCIs) record activity in the brain and interpret that data to generate an action. Microelectrode arrays are implanted into areas of the brain corresponding to the desired functionality, and the signals picked up by the arrays' electrodes are translated into usable data. All of the participants in this study had implants placed in the hand area of the dominant motor cortex, from which signals were recorded. Since computers inherently operate on digital signals, they are promising tools for facilitating communication originating from BCIs. In particular, algorithms developed with machine learning have been designed to interpret brain signals as computer mouse movement, a capability that can be extended into broader software accessibility.
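In broad strokes, this kind of decoding is a regression from multichannel firing rates to an intended cursor velocity: the system is calibrated while the participant attempts known movements, then the fitted mapping is applied to new neural data. The sketch below illustrates the idea on simulated data; the channel count, the synthetic features, and the plain least-squares fit are illustrative assumptions, not the study's actual decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated calibration data: firing-rate features from 96 electrodes
# while the participant attempts known cursor movements.
n_samples, n_channels = 500, 96
true_weights = rng.normal(size=(n_channels, 2))        # hidden neural tuning
rates = rng.normal(size=(n_samples, n_channels))       # spike-rate features
intended_velocity = rates @ true_weights + 0.1 * rng.normal(size=(n_samples, 2))

# Fit a linear decoder by least squares: velocity ≈ rates @ W
W, *_ = np.linalg.lstsq(rates, intended_velocity, rcond=None)

# Decode a fresh neural sample into a 2-D cursor velocity
new_rates = rng.normal(size=(1, n_channels))
vx, vy = (new_rates @ W)[0]
```

Real decoders refine this basic recipe (for example with Kalman filtering and frequent recalibration), but the core idea of mapping recorded neural activity to cursor motion is the same.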

One of the study's researchers, Krishna V. Shenoy, is a consultant for Neuralink Inc., Elon Musk's brain-computer interface company. While BCIs in general share goals with Neuralink's work, the major difference is purpose: Musk's company aims to enhance human capabilities to better compete with artificial intelligence rather than to restore or improve lost abilities. Where conventional BCIs interpret existing brain signals to re-enable preexisting capabilities, Neuralink hopes to merge computing and brain power into a seemingly singular function via tiny, dust-sized particles rather than traditional implants.

Watch the video below to see the participants’ tablet use during the study:

Sweden blocks Tesla FSD-style testing in Stockholm

It looks like FSD testing in Sweden would have to wait some time.

Credit: Tesla AI/X

Tesla is putting a lot of effort into getting its Full Self Driving (FSD) system approved in territories outside North America. But while China seems to have embraced FSD fully, other countries like Sweden do not seem to be receiving Tesla’s automated driving system very well.

This became quite evident in a document from Stockholm City, which has started making the rounds online.

FSD Testing Rejected

The document, which was initially shared by X user @KRoelandschap, indicated that the Swedish Traffic Department in Stockholm had rejected Tesla’s request to start FSD testing in the city’s streets. Tesla has been demonstrating FSD in several areas across Europe, so it is not surprising that the company is also attempting to test its automated driving system in Sweden.

Unfortunately for Tesla, Sweden might prove to be a tough nut to crack. As per the City of Stockholm: 

“The Traffic Office is currently working on updating its approach to automation. At the same time, the city and the office are under heavy pressure from other ongoing innovation tests. Our ambition is to actively participate in and learn from the continued development in the field of automation. 

“Based on this, and in combination with the fact that the current test is the first of its kind, which entails certain risks for both infrastructure and third parties, and that it is planned to be carried out throughout the city, the City of Stockholm considers it is currently not possible to approve the implementation of the test.”

Tesla’s Other Swedish Troubles

Sweden’s FSD testing rejection is not the only roadblock facing Tesla in the country. Since October 2023, Swedish unions have been engaged in an active effort to disrupt Tesla’s operations. The unions’ efforts have been varied, with some resulting in Tesla having difficulty launching more Superchargers in Sweden. Despite this, Tesla has remained stubborn and has refused to bow to the unions’ demands.

Fortunately for Tesla, it seems like its numbers are still strong. Despite the company’s decline in several European countries, the new Model Y is starting to see strong sales figures in Sweden. In early May alone, the new Model Y became the country’s most popular electric vehicle—a notable accomplishment considering the unions’ active efforts to disrupt Tesla.

Tesla firmware shows new Model Y seat configuration is coming

Tesla could be adding another seating configuration besides the seven-seater to the Model Y lineup later this year.

Credit: Tesla

Tesla's firmware has long been a place where sleuths uncover what the company has in the pipeline, and a new seating configuration for the best-selling Model Y looks to be on the way.

Last week, we reported that Tesla was already hinting at a seven-seat configuration of the Model Y in a promotional email it sent to those on its contact list.

However, firmware uncovered by Tesla hacker greentheonly shows that a different seating configuration is also on the way: a six-seater.

Green says the configuration will not be China-only, and will potentially be for sale in other markets as well.

The six-seat and seven-seat configurations of the Model Y were available in the Legacy version of the vehicle, but were met with mixed reviews, as many complained about the lack of legroom in the third row.

This was a real concern for owners who needed something larger than the traditional five-seat variant but did not want to buy the much pricier Model X.

We’ve covered the size of that third row on several occasions.

Some owners even took the idea of having a seven-seater into their own hands:

Tesla Model Y third row seat test explores options for a comfortable 7-seat setup

Tesla did not explicitly announce a six-seater configuration of the Model Y, but Lars Moravy, the company’s VP of Vehicle Engineering, said the seven-seater would come to production later in 2025.

Tesla confirms massive hardware change for autonomy improvement

Tesla has confirmed that a recent change made to some of its recently refreshed vehicles is, in fact, a strategy it will use to improve its suite as it continues to work toward autonomy.

Credit: Tesla

Tesla first introduced a front-facing camera on the front bumper with the Cybertruck.

Then, the Model Y "Juniper" received the hardware update. The Model S and Model X both received the front-facing camera with their latest refresh, which was officially revealed last week.

Tesla used new language with the release of the front-facing cameras on the Model S and Model X, confirming they will assist with several things, including “using Autopilot and Actually Smart Summon capabilities”:

“Enhanced visibility when parking or using Autopilot and Actually Smart Summon capabilities.”

This is the first time Tesla has used this sort of language; the description that accompanied the launch of the new Model Y in January was completely different.

When Tesla launched this vehicle, it said the front bumper camera “provides a wider field of view for automatic assisted driving and advanced Smart Summon.”

Tesla switched from using cameras and sensors to cameras only with the launch of Tesla Vision several years ago. The company's reliance on cameras stems from Tesla's belief that Ultrasonic Sensors (USS) are not needed for self-driving efforts:

“Along with the removal of USS, we simultaneously launched our vision-based occupancy network – currently used in Full Self-Driving (FSD) (Supervised) – to replace the inputs generated by USS. With today’s software, this approach gives Autopilot high-definition spatial positioning, longer range visibility and the ability to identify and differentiate between objects. As with many Tesla features, our occupancy network will continue to improve rapidly over time.”

CEO Elon Musk has said that sensors were only a crutch and that self-driving would be solved through the use of cameras:

“When your vision works, it works better than the best human because it’s like having eight cameras, it’s like having eyes in the back of your head, beside your head, and has three eyes of different focal distances looking forward. This is — and processing it at a speed that is superhuman. There’s no question in my mind that with a pure vision solution, we can make a car that is dramatically safer than the average person.”
