“Smart skin” can identify weaknesses in bridges and airplanes using laser scanner

Recent research results have demonstrated that two-dimensional, on-demand mapping of accumulated strain on metal structures will soon be a reality thanks to an engineered “smart skin” only a fraction of the width of a human hair. By exploiting the unique properties of single-walled carbon nanotubes, a two-layer film airbrushed onto the surfaces of bridges, pipelines, airplanes, and other structures can be scanned to reveal weaknesses in near real-time. As a bonus, the coating is barely visible even on a transparent surface, broadening its range of potential applications.

Stress-inducing events, along with regular wear and tear, can deform structures and machines, affecting their safety and operability. Mechanical strain on structural surfaces provides information on the condition of the materials, such as damage location and severity. Conventional sensors can only measure strain at a single point along a single axis, but with smart skin technology, strain detection in any direction and at any location becomes possible.

How “Smart Skin” Technology is Used

In 2002, researchers discovered that single-wall carbon nanotubes fluoresce, i.e., glow brightly when stimulated by a light source. The fluorescence was later found to change color when the nanotubes are stretched. This optical property was then considered in the context of metal structures subject to strain, specifically as a diagnostic tool. To obtain the fluorescence data, researchers applied the smart skin to a test surface, irradiated the area with a small laser scanner, and captured the resulting nanotube color emissions with an infrared spectrometer. Finally, two-dimensional maps of the accumulated strain were generated from the results.
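
To make the workflow concrete, here is a minimal sketch of how such a scan could be turned into a strain map, assuming a simple linear relationship between spectral shift and strain. The calibration constant, baseline wavelength, and grid values below are illustrative placeholders, not figures from the Rice papers.

```python
import numpy as np

# Hypothetical linear calibration: fractional strain per nanometer of
# fluorescence shift (an illustrative value, not from the papers).
STRAIN_PER_NM_SHIFT = 1e-4

def strain_map(measured_nm, unstrained_nm):
    """Convert a grid of peak emission wavelengths into a 2D strain map.

    measured_nm   -- 2D array of peak wavelengths from the laser scan
    unstrained_nm -- reference peak wavelength of the relaxed film
    """
    shift_nm = np.asarray(measured_nm) - unstrained_nm  # spectral shift
    return shift_nm * STRAIN_PER_NM_SHIFT

# Example: a 5 x 5 scan at 1 mm spacing, with a stress concentration
# (e.g., near a drilled hole) showing up as a larger spectral shift.
baseline_nm = 985.0                  # assumed emission peak of relaxed tubes
scan = np.full((5, 5), baseline_nm)
scan[2, 2] = 985.8                   # larger shift at the strained point
print(strain_map(scan, baseline_nm))
```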

Smart skin technology could be used to monitor the structural integrity of commercial jet engines. | Credit: CC0 via Pixabay, User: blickpixel

The primary researchers, Professors Satish Nagarajaiah and Bruce Weisman of Rice University in Texas, have published two scientific papers explaining the methods behind the technology and the results of its proof-of-principle application. As described in the papers, aluminum bars with holes or notches in areas of potential stress were tested with the laser technique to demonstrate the invention’s potential. The measured points were spaced 1 millimeter apart, but the researchers stated that the spacing could be made 20 times finer (roughly 50 micrometers) for even more detailed readings. Standard strain sensors have points located several millimeters apart.

What Are Carbon Nanotubes?

Carbon nanotubes (CNTs) are carbon molecules that have been structurally formed into cylinders: rolled-up sheets of carbon atoms. Some evidence suggests that CNTs can form via natural processes such as volcanic events, but to really capitalize on their unique characteristics, production in a laboratory environment is far more efficient.

Several methods can be used for production, but the most widely used method for synthesizing CNTs is chemical vapor deposition (CVD). This process combines a metal catalyst with a carbon-containing gas, heating the mixture to approximately 1,400 degrees Fahrenheit and triggering the carbon molecules to assemble and grow into nanotubes. The resulting formation resembles a forest or lawn grass, with each trunk or blade averaging 0.43 nanometers in diameter. The length depends on variables such as the amount of time spent in the high-heat environment.
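
For a rough sense of the numbers involved, the snippet below converts the furnace temperature to Celsius and estimates a nanotube’s length and aspect ratio from the 0.43-nanometer diameter. The growth rate and growth time are assumed placeholder values, since the article only notes that length depends on time in the furnace.

```python
# Back-of-the-envelope numbers for CVD-grown nanotubes. The growth
# rate and time below are assumptions, not values from the article.
temp_f = 1400.0
temp_c = (temp_f - 32) * 5 / 9           # about 760 degrees Celsius

diameter_nm = 0.43                       # average diameter per the article
growth_rate_um_per_min = 10.0            # assumed growth rate
growth_time_min = 30.0                   # assumed time in the furnace

length_nm = growth_rate_um_per_min * growth_time_min * 1_000
aspect_ratio = length_nm / diameter_nm

print(f"{temp_c:.0f} C furnace; length ~{length_nm / 1e6:.1f} mm; "
      f"aspect ratio ~{aspect_ratio:,.0f}:1")
```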

An artistic depiction of a carbon nanotube. | Credit: AJC1 via Flickr, CC BY-SA 2.0

Besides surface analysis, carbon nanotubes have proven invaluable in many research and commercial arenas; their luminescence is only one of many properties that can improve and enable other technologies. Their tensile strength is 400 times that of steel at only one-sixth the density, making them very lightweight. CNTs also have highly conductive electrical and thermal properties, are extremely resistant to corrosion, and can be filled with other nanomaterials. These advantages open up applications including solar cells, sensors, drug delivery, electronic devices and shielding, lithium-ion batteries, body armor, and perhaps even a space elevator, assuming significant advances overcome the remaining hurdles.
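
Those two figures compound when considering strength per unit mass. Dividing the strength ratio by the density ratio gives the specific-strength advantage, a quick calculation using only the article’s own numbers:

```python
# Specific strength (strength divided by density) relative to steel,
# using the figures quoted above: ~400x strength at ~1/6 the density.
strength_ratio = 400        # CNT tensile strength relative to steel
density_ratio = 1 / 6       # CNT density relative to steel

specific_strength_ratio = strength_ratio / density_ratio
print(f"CNTs carry roughly {specific_strength_ratio:.0f}x more load "
      f"per kilogram than steel")   # ~2400x
```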

Next Steps

The nanotube-laced smart skin is ready for scaling up into real-world applications, but the industries it targets may take time to adopt it, given the general resistance to change in fields with long-standing existing technology. While it awaits adoption in its primary target industry, the smart skin has other potential uses in engineering research. Bruce Weisman, also a discoverer of CNT fluorescence, anticipates its advantages being used to test the designs of small-scale structures and engines prior to deployment. Niche applications like these may be the primary entry point into the market for some time to come. In the meantime, the researchers plan to continue developing their strain reader to capture simultaneous readings from large surfaces.

Nvidia CEO Jensen Huang explains difference between Tesla FSD and Alpamayo

“Tesla’s FSD stack is completely world-class,” the Nvidia CEO said.

Credit: Grok Imagine

Nvidia CEO Jensen Huang offered high praise for Tesla’s Full Self-Driving (FSD) system during a Q&A at CES 2026, calling it “world-class” and “state-of-the-art” in design, training, and performance.

More importantly, he also shared some insights about the key differences between FSD and Nvidia’s recently announced Alpamayo system. 

Jensen Huang’s praise for Tesla FSD

Nvidia made headlines at CES following its announcement of Alpamayo, which uses artificial intelligence to accelerate the development of autonomous driving solutions. Due to its focus on AI, many speculated that Alpamayo would be a direct rival to FSD. Elon Musk addressed the speculation, predicting that “they will find that it’s easy to get to 99% and then super hard to solve the long tail of the distribution.”

During his Q&A, Nvidia CEO Jensen Huang was asked about the difference between FSD and Alpamayo. His response was extensive:

“Tesla’s FSD stack is completely world-class. They’ve been working on it for quite some time. It’s world-class not only in the number of miles it’s accumulated, but in the way it’s designed, the way they do training, data collection, curation, synthetic data generation, and all of their simulation technologies. 

“Of course, the latest generation is end-to-end Full Self-Driving—meaning it’s one large model trained end to end. And so… Elon’s AD system is, in every way, 100% state-of-the-art. I’m really quite impressed by the technology. I have it, and I drive it in our house, and it works incredibly well,” the Nvidia CEO said. 

Nvidia’s platform approach vs Tesla’s integration

Huang also stated that Nvidia’s Alpamayo system was built around a fundamentally different philosophy from Tesla’s. Rather than developing self-driving cars itself, Nvidia supplies the full autonomous technology stack for other companies to use.

“Nvidia doesn’t build self-driving cars. We build the full stack so others can,” Huang said, explaining that Nvidia provides separate systems for training, simulation, and in-vehicle computing, all supported by shared software.

He added that customers can adopt as much or as little of the platform as they need, noting that Nvidia works across the industry, including with Tesla on training systems and companies like Waymo, XPeng, and Nuro on vehicle computing.

“So our system is really quite pervasive because we’re a technology platform provider. That’s the primary difference. There’s no question in our mind that, of the billion cars on the road today, in another 10 years’ time, hundreds of millions of them will have great autonomous capability. This is likely one of the largest, fastest-growing technology industries over the next decade.”

He also emphasized Nvidia’s open approach, saying the company open-sources its models and helps partners train their own systems. “We’re not a self-driving car company. We’re enabling the autonomous industry,” Huang said.

Elon Musk confirms xAI’s purchase of five 380 MW natural gas turbines

The deal, which was confirmed by Musk on X, highlights xAI’s effort to aggressively scale its operations.

Credit: xAI/X

xAI, Elon Musk’s artificial intelligence startup, has purchased five additional 380 MW natural gas turbines from South Korea’s Doosan Enerbility to power its growing supercomputer clusters. 

xAI’s turbine deal details

News of xAI’s new turbines was shared on social media platform X, with user @SemiAnalysis_ stating that the turbines were produced by South Korea’s Doosan Enerbility. As noted in an Asian Business Daily report, Doosan Enerbility announced last October that it signed a contract to supply two 380 MW gas turbines for a major U.S. tech company. Doosan later noted in December that it secured an order for three more 380 MW gas turbines.

As per the X user, the gas turbines would power an additional cluster equivalent in size to more than 600,000 GB200 GPUs in NVL72 racks. This would make xAI’s facilities among the largest in the world. In a reply, Elon Musk confirmed that xAI did purchase the turbines. “True,” Musk wrote in a post on X.
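
As a rough sanity check on those figures, five 380 MW turbines add up to 1.9 GW of generating capacity. The sketch below compares that against an estimated compute load for a 600,000-GPU GB200 NVL72 deployment; the per-rack power draw is an assumption for illustration and does not come from the report.

```python
# Rough power budget for the reported turbine purchase vs. cluster size.
# The per-rack draw below is an assumed figure, not from the source.
turbines = 5
mw_per_turbine = 380
total_mw = turbines * mw_per_turbine       # 1,900 MW of capacity

gpus = 600_000                             # per the X post
gpus_per_rack = 72                         # one GB200 NVL72 rack = 72 GPUs
kw_per_rack = 120                          # assumed rack power draw

racks = gpus / gpus_per_rack
compute_mw = racks * kw_per_rack / 1_000   # roughly 1,000 MW

print(f"Turbine capacity: {total_mw} MW; estimated IT load: "
      f"{compute_mw:.0f} MW")
```

Under these assumptions, the turbines would cover the GPU racks with headroom left over, plausibly for cooling, networking, and storage, though no such breakdown appears in the source.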

xAI’s ambitions 

Recent reports have indicated that xAI closed an upsized $20 billion Series E funding round, exceeding the initial $15 billion target to fuel rapid infrastructure scaling and AI product development. The funding, as per the AI startup, “will accelerate our world-leading infrastructure buildout, enable the rapid development and deployment of transformative AI products.”

The company also teased the rollout of its upcoming frontier AI model. “Looking ahead, Grok 5 is currently in training, and we are focused on launching innovative new consumer and enterprise products that harness the power of Grok, Colossus, and 𝕏 to transform how we live, work, and play,” xAI wrote in a post on its website. 

Elon Musk’s xAI closes upsized $20B Series E funding round

xAI announced the investment round in a post on its official website. 

Credit: xAI

xAI has closed an upsized $20 billion Series E funding round, exceeding the initial $15 billion target to fuel rapid infrastructure scaling and AI product development. 

A $20 billion Series E round

As noted by the artificial intelligence startup in its post, the Series E funding round attracted a diverse group of investors, including Valor Equity Partners, Stepstone Group, Fidelity Management & Research Company, Qatar Investment Authority, MGX, and Baron Capital Group, among others. 

Strategic partners NVIDIA and Cisco Investments also continued their support for building the world’s largest GPU clusters.

As xAI stated, “This financing will accelerate our world-leading infrastructure buildout, enable the rapid development and deployment of transformative AI products reaching billions of users, and fuel groundbreaking research advancing xAI’s core mission: Understanding the Universe.”

xAI’s core mission

The Series E funding builds on xAI’s previous rounds, powering Grok advancements and massive compute expansions like the Memphis supercluster. The upsized round reflects growing investor recognition of xAI’s potential in frontier AI.

xAI also highlighted several of its breakthroughs in 2025, from the buildout of Colossus I and II, which ended the year with over 1 million H100 GPU equivalents, to the rollout of the Grok 4 series, Grok Voice, and Grok Imagine, among others. The company also confirmed that work is already underway to train the flagship large language model’s next iteration, Grok 5.

“Looking ahead, Grok 5 is currently in training, and we are focused on launching innovative new consumer and enterprise products that harness the power of Grok, Colossus, and 𝕏 to transform how we live, work, and play,” xAI wrote. 
