“Smart skin” can identify weaknesses in bridges and airplanes using laser scanner

Recent research results have demonstrated that two-dimensional, on-demand mapping of accumulated strain on metal structures will soon be a reality, thanks to an engineered “smart skin” that’s only a fraction of the width of a human hair. By exploiting the unique properties of single-walled carbon nanotubes, a two-layer film airbrushed onto the surfaces of bridges, pipelines, airplanes, and other structures can be scanned to reveal weaknesses in near real time. As a bonus, the coating is barely visible even on a transparent surface, making the technology all the more versatile.

Stress-inducing events, along with regular wear and tear, can deform structures and machines, affecting their safety and operability. Mechanical strain on structural surfaces provides information about the condition of the material, such as the location and severity of damage. Conventional sensors can only measure strain at a single point along a single axis, but smart skin technology makes strain detection possible in any direction and at any location.

How “Smart Skin” Technology is Used

In 2002, researchers discovered that single-walled carbon nanotubes fluoresce, i.e., glow brightly when stimulated by a light source. The fluorescence was later found to change color when the nanotubes are stretched. This optical property was then considered in the context of metal structures subject to strain, specifically as a diagnostic tool. To obtain the fluorescence data, researchers applied the smart skin to a test surface, irradiated the area with a small laser scanner, and captured the resulting nanotube color emissions with an infrared spectrometer. Finally, two-dimensional maps of the accumulated strain were generated from the results.
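As a rough illustration of that pipeline, the sketch below turns a grid of measured nanotube emission wavelengths into a two-dimensional strain map, assuming a simple linear relation between wavelength shift and strain. The calibration constant and baseline wavelength here are illustrative placeholders, not values from the published papers.

```python
import numpy as np

# Hypothetical calibration constant: strain per nanometer of fluorescence
# shift. The real value would come from calibrating the smart-skin film;
# this number is purely illustrative.
STRAIN_PER_NM_SHIFT = 1.0e-4

def strain_map(measured_nm: np.ndarray, baseline_nm: float) -> np.ndarray:
    """Convert a grid of peak emission wavelengths (nm) into a 2D strain map."""
    return (measured_nm - baseline_nm) * STRAIN_PER_NM_SHIFT

# Example: a 3x3 grid of peak wavelengths measured at points scanned across
# the surface, with an unstrained baseline emission at 1000 nm.
baseline = 1000.0
grid = np.array([
    [1000.0, 1000.5, 1000.0],
    [1001.0, 1002.0, 1001.0],
    [1000.0, 1000.5, 1000.0],
])
strains = strain_map(grid, baseline)
print(strains)  # the largest strain appears at the grid center
```

In a real scan, each grid cell would correspond to one laser measurement point, and a denser grid of points would yield a finer-grained map.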

Smart skin technology could be used to monitor the structural integrity in commercial jet engines. | Credit: CC0 via Pixabay, User: blickpixel

The primary researchers, Professors Satish Nagarajaiah and Bruce Weisman of Rice University in Texas, have published two scientific papers describing the methods behind the technology and the results of its proof-of-principle application. As described in the papers, aluminum bars with holes or notches in areas of potential stress were tested with the laser technique to demonstrate the invention’s full potential. The measurement points were located 1 millimeter apart, but the researchers stated that the points could be spaced 20 times closer, roughly 50 micrometers apart, for even finer readings. Standard strain sensors have points located several millimeters apart.

What Are Carbon Nanotubes?

Carbon nanotubes (CNTs) are carbon molecules that have been structurally formed into cylinders: in effect, rolled-up sheets of carbon atoms. There is some evidence suggesting that CNTs can form through natural processes such as volcanic events, but to really capitalize on their unique characteristics, production in a laboratory environment is far more efficient.


Several methods can be used for production, but the most widely used method for synthesizing CNTs is chemical vapor deposition (CVD). This process combines a catalyzing metal with a carbon-containing gas, and the mixture is heated to approximately 1,400 degrees Fahrenheit (about 760 degrees Celsius), triggering the carbon molecules to assemble and grow into nanotubes. The resulting formation resembles a forest or lawn grass, with each trunk or blade averaging 0.43 nanometers in diameter. The length depends on variables such as the amount of time spent in the high-heat environment.

An artistic depiction of a carbon nanotube. | Credit: AJC1 via Flickr, CC BY-SA 2.0

Besides surface analysis, carbon nanotubes have proven invaluable in many research and commercial arenas, their luminescence being only one of many properties that can improve and enable other technologies. Their tensile strength is 400 times that of steel at only one-sixth the density, making them very lightweight. CNTs are also highly conductive, both electrically and thermally, extremely resistant to corrosion, and can be filled with other nanomaterials. These advantages open up applications including solar cells, sensors, drug delivery, electronic devices and shielding, lithium-ion batteries, body armor, and perhaps even a space elevator, assuming significant advances overcome that idea’s hurdles.

Next Steps

The nanotube-laced smart skin is ready to be scaled up into real-world applications, but its target industry may take time to adopt it, given the general resistance to change in fields with long-standing existing technology. While awaiting adoption in the arena it was primarily designed for, the smart skin has other potential uses in engineering research. Bruce Weisman, also the discoverer of CNT fluorescence, anticipates its advantages being applied to testing the designs of small-scale structures and engines prior to deployment. Niche applications like these may be the primary entry point into the market for some time to come. In the meantime, the researchers plan to continue developing their strain reader to capture simultaneous readings from large surfaces.



Elon Musk teases crazy outlook for xAI against its competitors

Musk’s response was vintage hyperbole, designed to rally supporters and dismiss doubters, something his responses on social media often do.


Credit: NVIDIA

Elon Musk has never been one to shy away from crazy timelines, massive expectations, and outrageous outlooks. However, his recent plans for xAI and where he believes it will end up compared to its competitors are sure to stimulate conversation.

In a bold and characteristic response on X, Elon Musk fired back at a recent analysis that positioned his AI venture, xAI, as lagging behind industry frontrunners.

The post, from March 14, came as a direct reply to forecaster Peter Wildeford’s assessment, which drew from benchmarks and reporting to rank AI developers.

Wildeford placed Anthropic, Google, and OpenAI in a virtual tie at the top, with xAI and Meta trailing by about seven months. Chinese players like Moonshot, DeepSeek, zAI, and Alibaba were estimated to be nine months behind, while France’s Mistral lagged by about a year and a half.

Musk’s response was vintage hyperbole, designed to rally supporters and dismiss doubters, something his responses on social media often do.

He claimed xAI would “catch up this year,” meaning by the end of 2026, erasing that seven-month deficit against the leaders. But he didn’t stop there.


Musk escalated his vision to 2029, predicting xAI would “exceed them all by such a long distance” that observers would need the James Webb Space Telescope, NASA’s orbiting observatory stationed about 930,000 miles from Earth, to spot whoever lands in second place. This analogy underscores Musk’s confidence in xAI’s trajectory, implying an astronomical lead that could redefine the AI landscape.

Breaking down these claims reveals Musk’s strategic optimism. First, the short-term catch-up: xAI, launched in 2023, has already released models like Grok, but recent benchmarks, including those for Grok 4.2, have shown it falling short in capabilities compared to rivals.

Anthropic’s Claude series, Google’s Gemini, and OpenAI’s GPT models dominate in areas like reasoning, coding, and multimodal tasks. Musk’s assertion suggests aggressive scaling in compute, talent, or architecture, perhaps leveraging xAI’s ties to Tesla’s Dojo supercomputers or Musk’s vast resources, to close the gap swiftly.

The longer-term dominance by 2029 paints an even more audacious picture. Musk envisions xAI not just parity but supremacy, outpacing competitors in innovation speed and model sophistication.


This could involve breakthroughs in energy-efficient training, real-world integration like Tesla’s robotics, or ethical AI alignment, consistent with Musk’s stated goal of “understanding the universe.”

Critics, however, point to parallels with Tesla’s Full Self-Driving delays; one reply highlighted Musk’s 2023 promise of FSD readiness. Musk has made this promise for many years, and although the system is strong and improving, it remains well short of the fully autonomous operation that was expected by now.

Musk’s comment highlights the intensifying U.S.-centric AI race, with xAI challenging the “three-way” dominance noted by Wharton professor Ethan Mollick, whom Wildeford quoted. As geopolitical tensions rise—evident in the Chinese firms’ lag—Musk’s tease could spur investment and talent wars.


Yet, it also invites scrutiny: Will xAI deliver, or is this another telescope-needed mirage? In an industry where timelines slip but stakes soar, Musk’s words keep the spotlight on xAI’s ambitious path forward.

Tesla Terafab set for launch: Inside the $20B AI chip factory that will reshape the auto industry

Tesla is set to launch the “Terafab Project,” a vertically integrated chip fabrication effort combining logic processing, memory, and advanced packaging.


Tesla is making one of the boldest bets in its history. On March 14, Elon Musk posted on X that the “Terafab Project launches in 7 days,” pointing to March 21, 2026 as the start date for what he has described as a vertically integrated chip fabrication effort combining logic processing, memory, and advanced packaging.

Tesla first confirmed Terafab on its January 28, 2026 earnings call, where Musk told investors the company needs to build a chip fabrication facility to avoid a supply constraint projected to materialize within three to four years. But the seeds were planted even earlier. At Tesla’s annual general meeting last year, Musk warned that even in the best-case scenario for chip production from their suppliers, it still wouldn’t be enough, and declared that building a “gigantic chip fab” simply had to be done.

While there has been no official announcement on where Tesla plans to break ground on the massive Terafab, all signs point to the North Campus of Giga Texas in Austin.

Months of speculation have surrounded Tesla’s North Campus expansion at Giga Texas, where drone footage captured by observer Joe Tegtmeyer revealed massive construction-site preparation just north of the existing factory, on a scale that rivals the original Giga Texas footprint itself.


The project is projected to produce 100–200 billion AI and memory chips annually, targeting 100,000 wafer starts per month, at an estimated cost of $20 billion. Tesla is targeting 2-nanometer process technology, anticipated to be the most advanced node in commercial production. The Tesla AI5 chip, which will pack 40x–50x more compute performance and 9x more memory than AI4, will be among the first products the Terafab is set to produce. This highly optimized, massively powerful inference chip is designed to make Full Self-Driving (FSD) and Tesla’s Optimus robots faster, safer, and fully autonomous.

(Credit: Tesla)

This is where Terafab becomes a genuine game-changer. If Tesla successfully builds a 2nm chip fab at scale, it becomes one of only a handful of entities capable of producing AI silicon in-house, with competitive implications that extend far beyond Tesla’s own vehicles, potentially positioning Tesla as a chip supplier or licensor to other industries.

The next-gen Tesla AI chips will power advancements in Full Self-Driving software, the Cybercab Robotaxi program, and the Optimus humanoid robot line. Musk’s projections for Optimus require chip volumes that no existing external supplier can commit to on Tesla’s timeline. Competitors like Waymo and GM’s Cruise remain dependent on third-party silicon, leaving them exposed to the same supply chain vulnerabilities Tesla is now working to eliminate entirely.

The Terafab launch this week may not mean a factory opens its doors overnight, but it signals Tesla is serious about owning the entire AI stack, from software to silicon.


What is Digital Optimus? The new Tesla and xAI project explained

At its core, Digital Optimus operates through a dual-process architecture inspired by human cognition.


Credit: Grok

Tesla and xAI announced their groundbreaking joint project, Digital Optimus, also nicknamed “Macrohard” in a humorous jab at Microsoft, earlier this week.

This software-based AI agent is designed to automate complex office workflows by observing and replicating human interactions with computers. As the first major outcome of Tesla’s $2 billion investment in xAI, it represents a powerful fusion of hardware efficiency and advanced reasoning.

At its core, Digital Optimus operates through a dual-process architecture inspired by human cognition.


Tesla’s specialized AI acts as “System 1”—the fast, instinctive executor—processing the past five seconds of real-time computer screen video along with keyboard and mouse actions to perform immediate tasks.


xAI’s Grok model serves as “System 2,” the strategic “master conductor” or navigator, providing high-level reasoning, world understanding, and directional oversight, much like an advanced turn-by-turn navigation system.

When combined, the two can create a powerful AI-based assistant that can complete everything from accounting work to HR tasks.
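The dual-process design described above can be sketched as a simple loop: a slow, deliberate planner produces high-level steps, and a fast executor turns each step into an immediate action against the current screen context. Everything below (names, interfaces, and the canned plan) is a hypothetical illustration of the pattern, not an actual Tesla or xAI API.

```python
from dataclasses import dataclass

@dataclass
class ScreenContext:
    """Recent screen state available to the fast executor."""
    recent_video_seconds: float  # e.g. the last 5 seconds of screen video
    description: str

def system2_plan(goal: str) -> list[str]:
    """Slow 'System 2' planner (the 'master conductor').

    A real planner would call a reasoning model; here we return fixed steps.
    """
    return [f"open {goal} app", f"fill in {goal} form", f"submit {goal}"]

def system1_execute(step: str, ctx: ScreenContext) -> str:
    """Fast 'System 1' executor: turns a plan step into an immediate action."""
    return f"performed '{step}' given screen: {ctx.description}"

def run_agent(goal: str) -> list[str]:
    """Run the dual-process loop: plan once, then execute each step quickly."""
    ctx = ScreenContext(recent_video_seconds=5.0, description="desktop visible")
    return [system1_execute(step, ctx) for step in system2_plan(goal)]

for action in run_agent("expense-report"):
    print(action)
```

The split mirrors the article’s description: the planner runs rarely and sets direction, while the executor reacts to the last few seconds of screen context on every step.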

The system runs primarily on Tesla’s low-cost AI4 inference chip, minimizing the use of expensive Nvidia resources from xAI while maintaining competitive, real-time performance.


Elon Musk described it as “the only real-time smart AI system” capable, in principle, of emulating the functions of entire companies, handling everything from accounting and HR to repetitive digital operations.

Timelines point to swift deployment. Announced just days ago, Digital Optimus is expected by Musk to be ready for users within about six months, targeting a rollout around September 2026.

It will integrate into all AI4-equipped Tesla vehicles, enabling parked cars to handle office work during downtime. Millions of dedicated units are also planned for deployment at Supercharger stations, tapping into roughly 7 gigawatts of available power.

Digital Optimus directly supports Tesla’s broader autonomy strategy. It leverages the same end-to-end neural networks, computer vision, and real-time decision-making tech that power Full Self-Driving (FSD) software and the physical Optimus humanoid robot.

By repurposing idle vehicle compute and extending AI4 hardware beyond driving, the project scales Tesla’s autonomy ecosystem from roads to digital workspaces.


As a virtual counterpart to physical Optimus, it divides labor: software agents manage screen-based tasks while humanoid robots tackle physical ones, accelerating Tesla’s vision of general-purpose AI for productivity, Robotaxi fleets, and beyond.

In essence, Digital Optimus bridges Tesla’s vehicle and robotics autonomy with enterprise-scale AI, promising massive efficiency gains. No other company currently matches its real-time capabilities on such accessible hardware.

It could well be one of the most crucial developments to come out of the Tesla and xAI partnership, as it could revolutionize how people work and travel.
