News

Google wants to make “good” AI with your help

Google office in Zurich [Credit: Google]

As a company whose products reach well over a billion people, Google is taking both its immense technical capabilities and its social responsibilities seriously. It has pledged tangible support to organizations that want to address societal challenges with artificial intelligence through its just-announced “AI Impact Challenge”. Whether an idea needs coaching, grant funding from a $25 million pool, or cloud credits and consulting, Google says it will be there to help.

Toward this effort, the company has already provided an educational guide to machine learning, the primary tool it wants organizations to use in their problem-solving. It might seem counterintuitive for applicants to need training on the very thing they’re proposing, but that is part of the point of Google’s support. To quote the project page directly: “We want people from as many backgrounds as possible to surface problems that AI can help solve, and to be empowered to create solutions themselves…We don’t expect applicants to be AI experts.” Submissions are open until January 22, 2019, and winners will be announced in spring 2019.

Need inspiration for an idea? Or, perhaps, some examples of the kinds of problems that artificial intelligence can help solve? Google’s page dedicated to its “AI for social good” mission has featured projects that are already working towards societally beneficial goals. Here’s a breakdown of some of them:

  • The “Smart Wildfire Sensor” is a device that identifies and predicts areas in a forest that are susceptible to wildfires. To do this, it uses data from tools measuring wind speed, wind direction, humidity, and temperature combined with Google’s open source machine learning tool TensorFlow for photographic analysis of biomass (accumulated fallen branches and trees).
  • Protecting whales from preventable accidents such as entanglement in fishing gear and collisions with vessels is a challenge being addressed using whale songs and machine learning to locate where they’re singing from. The National Oceanic and Atmospheric Administration (NOAA) uses underwater audio recordings to identify and mitigate the presence of dangers in the estimated areas where whales are present. The thousands of hours of recordings accumulated presented a data challenge well suited to Google’s existing sound classification AI to help meet NOAA’s needs with conservation efforts.
  • As a top cause of infant mortality in the world, birth asphyxia is a serious threat needing all the tools available to new parents. Using machine learning trained to recognize the cries of a newborn with this condition, the company Ubenwa has developed a mobile app enabling a recording of a baby’s cry to be uploaded and diagnosed.
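
Taking the wildfire sensor as an example, the idea of fusing simple weather readings with an image-derived biomass score can be sketched in a few lines. This is purely illustrative: the feature names, weights, and thresholds below are assumptions, not details of the actual device, and in a deployed system the biomass score would come from a TensorFlow image model rather than a hand-supplied number.

```python
# Hypothetical sketch of fusing weather readings with an image-derived
# biomass score into a single wildfire-risk estimate. Everything here
# (weights, normalization constants) is illustrative, not Google's system.

from dataclasses import dataclass

@dataclass
class SensorReading:
    wind_speed_kmh: float   # higher wind -> faster fire spread
    humidity_pct: float     # lower humidity -> drier fuel
    temperature_c: float    # higher temperature -> higher risk
    biomass_score: float    # 0..1, fraction of frame classified as dry biomass

def wildfire_risk(r: SensorReading) -> float:
    """Return a 0..1 risk estimate from normalized, weighted features."""
    wind = min(r.wind_speed_kmh / 60.0, 1.0)            # saturate at 60 km/h
    dryness = 1.0 - min(r.humidity_pct / 100.0, 1.0)    # invert humidity
    heat = min(max(r.temperature_c, 0.0) / 45.0, 1.0)   # saturate at 45 C
    # Illustrative weights; a real system would learn these from data.
    score = 0.25 * wind + 0.25 * dryness + 0.2 * heat + 0.3 * r.biomass_score
    return round(score, 3)

reading = SensorReading(wind_speed_kmh=30, humidity_pct=20,
                        temperature_c=36, biomass_score=0.7)
print(wildfire_risk(reading))
```

For a warm, dry, breezy day over a frame with substantial dry biomass, the weighted score lands around 0.7, the sort of reading that might flag an area for closer monitoring.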

“With great power comes great responsibility” is a motto that applies to modern tech just as much as to superheroes. The fast-paced field of artificial intelligence delivers frequent developments that challenge our security as a society and therefore demand caution, and nowhere does the motto apply more clearly today than to the massive companies whose innovations reach the public on a grand scale.

Google felt the weight of that responsibility sharply when its role in helping the US Department of Defense analyze drone footage (Project Maven) was revealed. The military work appeared to violate the “Don’t be evil” line in the company’s Code of Conduct at the time, and the company has since declined to renew the contract. Google’s work on a Chinese search engine censored in accordance with the government’s requirements has also drawn protest from both inside and outside the company. Given this background, a new project focused on doing “good” things for society might be seen as damage control. The timing may be suspect, but it’s worth noting that, as the projects described above show, Google has been working to help with societal needs for quite some time already.

Overall, headlines in recent years have demonstrated just how flexible AI can be when it comes to solving challenges that face our world. While fears about future “intelligent” computers may have some foundation in reality, it may do us a great deal of good to focus on the hope such technology can also bring. Whatever Google’s motivation for launching its “AI for social good” project, if good is achieved, it may just be a win for us all.

Accidental computer geek, fascinated by most history and the multiplanetary future on its way. Quite keen on the democratization of space. | It's pronounced day-sha, but I answer to almost any variation thereof.

News

Tesla Full Self-Driving gets latest bit of scrutiny from NHTSA

The analysis impacts roughly 3.2 million vehicles across the company’s entire lineup, and aims to identify how the suite’s degradation detection systems work and how effective they are when the cars encounter difficult visibility conditions.

Credit: Tesla

The National Highway Traffic Safety Administration (NHTSA) has elevated its probe into Tesla’s Full Self-Driving (Supervised) suite to an Engineering Analysis.

The step up into an Engineering Analysis is often required before the NHTSA will tell an automaker to issue a recall. However, this is not a guarantee that a recall will be issued.

The NHTSA wants to examine Tesla FSD’s ability to assess roadway conditions with reduced visibility, as well as to detect degradation and alert the driver with enough time to respond.

The Office of Defects Investigation (ODI) will evaluate the performance of FSD in degraded roadway conditions and the updates or modifications Tesla makes to the degradation detection system, including the timing, purpose, and capabilities of the updates.

Tesla routinely ships software updates to improve the capabilities of the FSD suite, so it will be interesting to see if various versions of FSD are tested. Interestingly, you can find many examples from real-world users of FSD handling snow-covered roads, heavy rain, and single-lane backroads.

However, there are incidents that the NHTSA has used to determine the need for this probe, at least for now. The agency said:

“Available incident data raise concerns that Tesla’s degradation detection system, both as originally deployed and later updated, fails to detect and/or warn the driver appropriately under degraded visibility conditions such as glare and airborne obscurants. In the crashes that ODI has reviewed, the system did not detect common roadway conditions that impaired camera visibility and/or provide alerts when camera performance had deteriorated until immediately before the crash occurred.”

The report goes on to say that a review of Tesla’s responses revealed additional crashes in similar environments in which FSD “did not detect a degraded state, and/or it did not present the driver with an alert with adequate time for the driver to react. In each of these crashes, FSD also lost track of or never detected a lead vehicle in its path.”

The next steps of the NHTSA Engineering Analysis require the agency to gather further information on Tesla’s attempts to upgrade the degradation detection system. It will also analyze six recent potentially related incidents.

The investigation is listed as EA26002.

Elon Musk

SpaceX’s Starship V3 is almost ready and it will change space travel forever

SpaceX is targeting April for the debut test launch of Starship Version 3 (V3)

SpaceX is closing in on one of the most anticipated rocket launches in history as the company readies for a planned April debut test launch of its next-generation Starship Version 3 (V3).

The latest iteration features a slightly taller Super Heavy booster and Starship upper stage than their predecessors, and produces stronger, more efficient thrust using SpaceX’s upgraded Raptor 3 engines. V3 also carries more propellant, targeting a total payload capacity of over 100 tons to low Earth orbit, compared to around 35 tons for its predecessor. Given Musk’s lifelong aspiration to colonize Mars one day, the increased payload capacity matters enormously, because Mars missions require moving massive amounts of cargo, fuel, and eventually people. But the most critical upgrade may be orbital refueling. SpaceX’s entire deep-space architecture depends on moving large amounts of propellant in space, and orbital refueling turns Starship from just a rocket into a true transport system. Without it, neither the Moon nor Mars is reachable at scale.

With a fully reusable Starship and Super Heavy, SpaceX aims to drive marginal launch costs down by as much as tenfold compared to current market leaders. To put that in perspective, getting a kilogram of cargo to orbit today costs thousands of dollars. Bring that number down far enough and space stops being an exclusive domain. That price point unlocks mass deployment of satellite constellations, large-scale science payloads, and affordable human transport beyond Earth orbit. It also means the Moon stops being a destination we visit and starts being one we inhabit.

NASA expects Starship to take off for the Moon’s South Pole in 2028, with the ultimate goal of establishing a permanently crewed science station there. A successful V3 flight this spring keeps that timeline alive. As for Mars, Musk has shifted focus toward building a self-sustaining city on the Moon first, arguing that the Moon can be reached every 10 days versus Mars’s 26-month alignment window. Mars remains the horizon, but the Moon is the proving ground.

Elon Musk hasn’t been shy about hyping the upcoming Starship V3 launch. In a social media post on Wednesday, he confirmed the first V3 flight is getting closer. SpaceX also announced that its initial activation campaign for V3 and Starbase Pad 2 is complete, wrapping up several days of cryogenic fuel testing on a V3 vehicle for the first time. The countdown is on. April can’t come soon enough.

Cybertruck

Tesla Cybertruck gets long-awaited safety feature

Tesla has announced the rollout of its innovative anti-dooring protection feature to the Cybertruck via the 2026.8 software update.

Credit: Tesla Asia | X

Tesla is rolling out a new and long-awaited safety feature to the Cybertruck all-electric pickup, one geared toward protecting pedestrians and cyclists as well as preventing collisions with other vehicles.

This safety enhancement uses the vehicle’s existing cameras to detect approaching cyclists, pedestrians, or vehicles in the blind spot while parked. Upon attempting to open a door, if a hazard is detected, the system activates: the blind spot indicator light flashes, an audible chime sounds, and the door will not open on the initial button press.

Drivers must wait briefly and press the button again to override, providing crucial seconds to avoid an accident.
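
The press, wait, press-again flow described above amounts to a small interlock state machine. The sketch below is a guess at the logic, not Tesla’s implementation: the class name, the hazard callable fed by the cameras, and the three-second override window are all assumptions for illustration.

```python
# Hypothetical sketch of an anti-dooring interlock. Tesla's actual
# implementation is not public; names, timings, and the override rule
# (a second press within a short window) are assumptions.

import time

class DoorInterlock:
    OVERRIDE_WINDOW_S = 3.0  # assumed window in which a second press overrides

    def __init__(self, hazard_detected):
        self.hazard_detected = hazard_detected  # callable fed by the cameras
        self._blocked_at = None

    def press_button(self, now=None):
        """Return 'open' if the door unlatches, 'blocked' if held shut."""
        now = time.monotonic() if now is None else now
        # A second press shortly after a blocked one overrides the warning.
        if self._blocked_at is not None and now - self._blocked_at <= self.OVERRIDE_WINDOW_S:
            self._blocked_at = None
            return "open"
        if self.hazard_detected():
            self._blocked_at = now
            # In the vehicle this also flashes the blind spot light and chimes.
            return "blocked"
        return "open"

door = DoorInterlock(hazard_detected=lambda: True)
print(door.press_button(now=0.0))  # blocked: cyclist approaching
print(door.press_button(now=1.0))  # open: driver overrides on second press
```

The design choice worth noting is that the hazard check only ever holds the door shut for one press: the driver always retains a quick override, so the feature warns without trapping an occupant.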

The feature, also known as Blind Spot Warning While Parked, comes standard on every new Model 3 and Model Y and is now being extended to the Cybertruck. Because it leverages Tesla’s vision-based system without requiring new hardware, it is a cost-effective software solution that builds on community suggestions dating back to 2018.

This technology addresses the persistent danger of “dooring,” where a driver opens a car door into the path of a passing cyclist or pedestrian.

Dooring incidents are alarmingly common in urban environments.

According to Chicago data, 2011 alone saw 344 reported dooring crashes (nearly one per day), accounting for approximately 20 percent of all bicycle crashes in the city.

While numbers have fluctuated (dropping to 11 percent in 2014 before rising again), dooring consistently represents 10-20 percent of bike-related crashes in major cities.

A national analysis of emergency department data estimates over 17,000 dooring-related injuries treated in the U.S. over a decade, with many involving fractures, contusions, and head trauma, particularly affecting upper extremities.

By automatically intervening, Tesla’s system not only protects vulnerable road users but also safeguards its owners from potential liability and enhances overall road safety.

As cities promote cycling for sustainable transport, features like this demonstrate how advanced driver assistance and camera systems can evolve beyond highway driving to everyday urban scenarios.

Enthusiastic responses on social media highlight appreciation for the proactive safety measure, with some calling for broader rollout to older models where hardware permits. Tesla continues to push the boundaries of vehicle safety through over-the-air updates, making its fleet smarter and safer over time.
