Artificial intelligence is figuring out how to see through walls, predict human IQ, and clean up grainy pictures - with the help and input, of course, of some extremely savvy researchers. In this week's coolest things, AI is advancing by leaps and bounds.

Berkeley Scientists Try On A New Pair Of Genes

What is it? A new technique, developed at the University of California, Berkeley, and Lawrence Berkeley National Laboratory, might lead the way to 'DNA printers,' akin to the 3D printers of today.

Why does it matter? Synthetic DNA is a hot field and a big business, explains the university in a release: Custom-made DNA strands can be used to 'produce biologic drugs, industrial enzymes or useful chemicals in vats of microbes,' and researchers rely on them to try out things like CRISPR-based disease therapies. But the current technology to synthesize DNA relies on a 40-year-old method that's time-consuming, inefficient and susceptible to error. It also requires the use of toxic chemicals. The technique developed at Berkeley, and reported in the new issue of Nature Biotechnology, offers the possibility of a process that's faster, cleaner and less error-prone - and produces DNA strands 10 times longer.

How does it work? Like a lot of great discoveries, this one takes a tip from nature - specifically, it uses an enzyme from human immune cells 'that naturally has the ability to add nucleotides to an existing DNA molecule in water, where DNA is most stable,' according to Berkeley. Sebastian Palluk, a co-author of the paper, explained: 'We have come up with a novel way to synthesize DNA that harnesses the machinery that nature itself uses to make DNA. This approach is promising because enzymes have evolved for millions of years to perform this exact chemistry.'
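For a rough sense of why per-step chemistry matters so much here, consider the back-of-the-envelope sketch below (the efficiency figures are illustrative assumptions, not measurements from the Berkeley work): if every nucleotide addition succeeds with probability p, only p^n of the strands come out full-length after n additions, so even small gains in per-step fidelity translate into much longer usable strands.

```python
# Illustrative arithmetic only: how per-step efficiency limits full-length DNA yield.
# If each nucleotide addition succeeds with probability p, the fraction of strands
# that are still correct after n additions is p ** n. The efficiencies below are
# assumed values, not figures from the Berkeley paper.
for p in (0.99, 0.999):               # assumed per-step success rates
    for n in (200, 2000):             # strand lengths, in nucleotides
        print(f"efficiency {p:.1%}, {n:>4}-nt strand -> full-length yield ~{p ** n:.2%}")
```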

An Electrifying Technique To Harvest Waste Heat

Sandia physicist Paul Davids hopes that his team's rectenna may someday replace radioisotope thermoelectric generators as the go-to compact power supply for deep space missions and other uses where you can't just go and replace the batteries. Caption and image credit: Sandia National Laboratories.

What is it? Researchers at Sandia National Laboratories have created a tiny silicon device that converts waste heat - like the stuff that wafts off car engines - into DC power.

Why does it matter? It's not like that waste heat is doing anything useful, is it? In fact, unused heat given off as a by-product of a machine's functioning represents lost energy that scientists have been trying to figure out how to harvest. Physicist Paul Davids, a principal investigator on the study, suggests such technology could be used in hybrid cars, converting engine heat into electricity. (Davids and his team published their results in Physical Review Applied.) Another possibility? Space travel: The tech could electrify sensors on extraterrestrial missions where there's not enough sun available for solar power.

How does it work? The tiny device - smaller than the nail on your pinky - comprises common materials including aluminum and silicon, with a thin layer of silicon dioxide in the middle. The aluminum on the outside, acting as an antenna, catches infrared radiation and channels it into the silicon dioxide, creating fast electrical oscillations that are then converted into DC current via a process called rectification. Accordingly, Davids' team has dubbed the gadget an infrared rectenna - short for 'rectifying antenna.' The team is now trying to improve the device's efficiency. 'We've been whittling away at the problem and now we're beginning to get to the point where we're seeing relatively large gains in power conversion and I think that there's a path forward as an alternative to thermoelectrics,' Davids explains. 'It would be great if we could scale it up and change the world.'
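For intuition about the rectification step, here's a toy numerical sketch - a generic illustration of how any diode-like, nonlinear element turns a zero-mean oscillation into a net DC current, not a model of the Sandia device; the frequency, voltage and component values are assumptions.

```python
import numpy as np

# Toy rectification demo: a diode-like (exponential) I-V curve yields a nonzero
# time-averaged current when driven by a zero-mean oscillating voltage, while a
# plain resistor averages to zero. All numbers are illustrative assumptions.
t = np.linspace(0.0, 1e-12, 100_000)             # ~1 picosecond window
v = 0.05 * np.sin(2 * np.pi * 30e12 * t)         # ~30 THz (infrared-scale) drive, 50 mV amplitude

i_diode = 1e-9 * (np.exp(v / 0.025) - 1.0)       # ideal-diode law with ~25 mV thermal voltage
i_resistor = v / 1_000.0                         # 1 kOhm linear resistor, for comparison

print("average diode current   :", i_diode.mean())     # nonzero -> rectified DC component
print("average resistor current:", i_resistor.mean())   # ~0 -> no rectification
```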

A Neural Network Separates The Signal From The Noise

The AI is shown both 'clean' and 'noisy' photos and learns to make up the difference between them. Image credit: NVIDIA, Aalto University, and MIT.

What is it? Researchers at NVIDIA, Aalto University and MIT have figured out a way to train a neural network to fix grainy photographs - to remove what they call the 'noise.' Remarkably, it does this only by looking at corrupted images, without clean ones for comparison. The team presented a paper on its findings (PDF) earlier this month at the International Conference on Machine Learning in Stockholm.

Why does it matter? The implications don't only have to do with redeeming some low-light pictures of Grandma's birthday that should've turned out better - radiologists could use the technique, for instance, to enhance MRI images, giving them a clearer look at what's going on inside the body.

How does it work? With other deep-learning approaches to photo restoration, the AI is shown both 'clean' and 'noisy' photos and learns to make up the difference between them. But what if you don't have a clean picture to show? In the latest work, the team found that with a 50,000-image data set, a properly programmed neural network could teach itself to remove the grain from photos without anything else available for comparison. According to the paper, 'It is possible to learn to restore signals without ever observing clean ones, at performance sometimes exceeding training using clean exemplars. [The neural network] is on par with state-of-the-art methods that make use of clean examples - using precisely the same training methodology, and often without appreciable drawbacks in training time or performance.'
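The trick is easier to see in miniature. The sketch below, written with the PyTorch library, trains a denoiser by pairing two independently corrupted copies of the same image, so no clean target ever reaches the loss function; the tiny network, synthetic images and noise levels are illustrative assumptions, not the architecture or data set used in the paper.

```python
import torch
import torch.nn as nn

# Sketch of training on noisy targets only: the network maps one corrupted copy
# of an image to a second, independently corrupted copy. With zero-mean noise and
# an L2 loss, its predictions drift toward the clean image even though no clean
# image is ever used as a target.
denoiser = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

clean = torch.rand(16, 1, 64, 64)          # stand-in "true" images; never shown to the network
for step in range(200):
    noisy_input = clean + 0.1 * torch.randn_like(clean)     # corruption no. 1
    noisy_target = clean + 0.1 * torch.randn_like(clean)    # independent corruption no. 2
    loss = nn.functional.mse_loss(denoiser(noisy_input), noisy_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```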

AI That Can See Through Walls

What is it? Artificial intelligence now has X-ray vision, of a sort - researchers at MIT's Computer Science and Artificial Intelligence Laboratory, or CSAIL, have taught a neural network to sense motion even on the other side of a wall and then create a dynamic stick figure to represent it.

Why does it matter? Researchers say the project, called RF-Pose, could be a big help for medical professionals monitoring patients with conditions like Parkinson's disease, multiple sclerosis or muscular dystrophy, 'providing a better understanding of disease progression and allowing doctors to adjust medications accordingly,' according to MIT News. And it might help elderly people live independently by providing a way to watch for falls or injuries. Professor Dina Katabi, who led the project at CSAIL, said, 'We've seen that monitoring patients' walking speed and ability to do basic activities on their own gives health care providers a window into their lives that they didn't have before, which could be meaningful for a whole range of diseases.' (The tech could also assist search-and-rescue missions and - closer to home - lead to cooler video games.)

How does it work? Researchers, according to MIT, 'use a neural network to analyze radio signals that bounce off people's bodies, and can then create a dynamic stick figure that walks, stops, sits, and moves its limbs as the person performs those actions.' After training the neural network by showing it images of people doing activities like walking and opening doors, they found that RF-Pose could 'estimate a person's posture and movements without cameras, using only the wireless reflections that bounce off people's bodies.' (A PDF of the paper detailing the findings is here.)
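In spirit, that cross-modal training resembles the sketch below, again in PyTorch: keypoint maps derived from a camera serve as labels while a second network learns to predict them from radio-frequency heatmaps alone; the layer sizes, the 14-keypoint count and the random stand-in data are assumptions for illustration, not the actual RF-Pose architecture.

```python
import torch
import torch.nn as nn

# Sketch of cross-modal supervision: camera-derived keypoint maps act as training
# labels for a network whose only input is RF heatmaps. At test time the camera
# is no longer needed, so the RF network can work through walls and in the dark.
NUM_KEYPOINTS = 14                                  # assumed number of body keypoints

rf_net = nn.Sequential(                             # input: 2-channel RF heatmap (assumed shape)
    nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, NUM_KEYPOINTS, 1),                # one confidence map per keypoint
)
optimizer = torch.optim.Adam(rf_net.parameters(), lr=1e-3)

for step in range(200):
    rf_frames = torch.randn(8, 2, 64, 64)                      # stand-in RF input
    camera_labels = torch.rand(8, NUM_KEYPOINTS, 64, 64)       # stand-in camera-derived keypoint maps
    loss = nn.functional.mse_loss(rf_net(rf_frames), camera_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```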

How To Measure Intelligence - No IQ Test Required

Top image: Rather than sitting somebody down and asking them to take an IQ test, the researchers found that intelligence can be estimated simply by taking a peek at the brain at rest, more or less - at how it's firing when somebody's just lying there inside an MRI. Image credit: Getty Images. Above: An image of the brain obtained by an MRI scanner. Image credit: GE Healthcare.

What is it? Using functional magnetic resonance imaging (fMRI) and a little help from a machine-learning algorithm, researchers from Caltech, Cedars-Sinai Medical Center and the University of Salerno in Italy have found that they can predict a person's intelligence just by looking at a scan of the brain.

Why does it matter? In part, the advance is a test case of the potential of fMRI, which could one day help doctors diagnose conditions like autism, schizophrenia and anxiety - much as MRI is already used for 'finding tumors, aneurysms, or liver disease,' according to Caltech. Co-author Julien Dubois, a postdoctoral fellow at Cedars-Sinai, said: 'Functional MRI has not yet delivered on its promise as a diagnostic tool. We, and many others, are actively working to change this. The availability of large data sets that can be mined by scientists around the world is making this possible.' (A PDF of the paper is here.)

How does it work? Rather than sitting somebody down and asking them to take an IQ test, the researchers found that intelligence can be estimated simply by taking a peek at the brain at rest, more or less - at how it's firing when somebody's just lying there inside an MRI. They collected brain scans gathered by the Human Connectome Project and fed the data into their algorithm, which went to work analyzing the patterns of activity and taking a stab at the subjects' intelligence from there. According to Caltech, the algorithm was able to 'predict intelligence at statistically significant levels' with data from 900 subjects. A parallel endeavor to predict personality traits using the same technique was less successful, they found - so there's some work left to do.
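In broad strokes, that kind of analysis can be sketched as below, using the scikit-learn library; the synthetic data, the ridge-regression model and the 10-fold cross-validation are illustrative assumptions rather than the exact pipeline in the paper. Connectivity features from each subject's resting-state scan are regressed against measured intelligence scores, and predictions are judged only on held-out subjects.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

# Sketch: predict a behavioral score from resting-state connectivity features,
# scoring the model only on subjects it has never seen. All data are synthetic.
rng = np.random.default_rng(0)
n_subjects, n_features = 900, 1000                     # ~900 Human Connectome Project subjects
connectivity = rng.standard_normal((n_subjects, n_features))
weights = rng.standard_normal(n_features) * 0.05       # synthetic "ground truth" relationship
intelligence = connectivity @ weights + rng.standard_normal(n_subjects)

predicted = cross_val_predict(Ridge(alpha=10.0), connectivity, intelligence, cv=10)
r = np.corrcoef(predicted, intelligence)[0, 1]
print(f"cross-validated prediction vs. measurement: r = {r:.2f}")
```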

