A $399 device that translates brain signals into digital commands

News Analysis
Feb 19, 2020 | 3 mins
Networking | Software Development

Startup NextMind is readying a $399 development kit for its brain-computer interface technology that enables users to interact, hands-free, with computers and VR/AR headsets.

Credit: MetamorWorks / Getty Images

Scientists have long envisioned brain-sensing technology that can translate thoughts into digital commands, eliminating the need for computer-input devices like a keyboard and mouse. One company is preparing to ship its latest contribution to the effort: a $399 development package for a noninvasive, AI-based, brain-computer interface.

The kit will let “users control anything in their digital world by using just their thoughts,” NextMind, a commercial spinoff of a cognitive neuroscience lab, claims in a press release.

The company says its puck-like device inserts into a cap or headband and rests on the back of the head. The dry-electrode receiver captures the electrical signals generated by neuron activity, then uses machine-learning algorithms to convert that output into computer controls. The interaction could be with a computer, an augmented-reality or virtual-reality headset, or another module.
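NextMind has not published its model, but the general shape of the pipeline the company describes (windowed electrode features in, a trained classifier in the middle, a UI command out) can be sketched in a few lines. Everything below, from the feature counts to the two-target task and the command names, is illustrative, not NextMind's implementation.

```python
# Illustrative sketch only: windowed electrode features are fed to a
# trained classifier, whose prediction is mapped to a UI command.
# None of this reflects NextMind's actual model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in training set: 200 feature windows from 8 electrodes, each
# labeled with which of two on-screen targets the wearer attended to.
X_train = rng.normal(size=(200, 8))
y_train = rng.integers(0, 2, size=200)
clf = LogisticRegression().fit(X_train, y_train)

def decode(window: np.ndarray) -> str:
    """Map one 8-electrode feature window to a hypothetical UI command."""
    target = int(clf.predict(window.reshape(1, -1))[0])
    return ("select_left_icon", "select_right_icon")[target]

print(decode(rng.normal(size=8)))
```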

“Imagine taking your phone to send a text message without ever touching the screen, without using Siri, just by using the speed and power of your thoughts,” said NextMind founder Sid Kouider in a video presentation at Helsinki startup conference Slush in late 2019.

Advances in neuroscience are enabling real-time decoding of consciousness, without surgery or a doctor visit, according to Kouider.

One obstacle that has thwarted previous efforts is the human skull, which can act as a barrier to sensors. Scientists have struggled to separate meaningful signals from noise, and some past efforts could only discern basic states, such as whether a person is asleep or relaxed. New materials, better sensors, and more sophisticated algorithms and modeling have overcome some of those limitations. NextMind’s noninvasive technology “translates the data in real time,” Kouider says.
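A standard first step in pulling a usable signal out of noisy scalp readings is band-pass filtering. The snippet below is a generic sketch of that step; the 250 Hz sample rate and the 8-30 Hz band are assumptions for illustration, not NextMind's disclosed signal chain.

```python
# Generic band-pass filtering sketch for noisy scalp readings.
# The sample rate and frequency band are assumptions, not NextMind specifics.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                      # assumed sample rate, Hz
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic recording: a 10 Hz oscillation buried in broadband noise.
raw = np.sin(2 * np.pi * 10 * t) + rng.normal(scale=0.5, size=t.size)

# Keep roughly the 8-30 Hz band common in EEG work; reject the rest.
b, a = butter(4, [8, 30], btype="band", fs=fs)
clean = filtfilt(b, a, raw)
print(f"variance before: {raw.var():.2f}  after: {clean.var():.2f}")
```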

Essentially, the eyes project an image of what they see onto the visual cortex at the back of the head, a bit like a projector. The NextMind device decodes the neural activity created as an object is viewed and sends that information, via an SDK, back as input to a computer. So, by fixing one's gaze on an object, one selects that object. For example, a user could select a screen icon by glancing at it.
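NextMind's actual SDK is not shown in the article, so the sketch below invents a minimal stand-in to make that selection loop concrete: the app registers an on-screen object, the decoder reports rising confidence that the user's gaze has settled on it, and a callback fires once a threshold is crossed. The module, class, and method names are all hypothetical.

```python
# Hypothetical stand-in for a gaze-selection SDK loop. The class and
# method names are invented for illustration; this is not NextMind's API.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class NeuroTaggedObject:
    name: str
    on_select: Callable[[], None]   # fired when focus is confirmed

class FakeDecoder:
    """Stand-in for the headset's decoded visual-focus stream."""
    FOCUS_THRESHOLD = 0.9

    def __init__(self) -> None:
        self.objects: List[NeuroTaggedObject] = []

    def register(self, obj: NeuroTaggedObject) -> None:
        self.objects.append(obj)

    def feed(self, name: str, confidence: float) -> None:
        # In a real system, name/confidence would come from decoded
        # visual-cortex activity, not be passed in directly.
        for obj in self.objects:
            if obj.name == name and confidence > self.FOCUS_THRESHOLD:
                obj.on_select()

decoder = FakeDecoder()
decoder.register(NeuroTaggedObject("mail_icon", lambda: print("mail opened")))
decoder.feed("mail_icon", 0.95)   # prints "mail opened"
```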

“The demos were by no means perfect, but there was no doubt in my mind that the technology worked,” wrote VentureBeat writer Emil Protalinski, who tested a pre-release device in January.

Kouider has stated it’s the “intent” aspect of the technology that’s most interesting; if a person focuses on one thing more than something else, the technology can decode the neural signals to capture that user’s intent.

“It really gives you a kind of sixth sense, where you can feel your brain in action, thanks to the feedback loop between your brain and a display,” Kouider says in the Slush presentation.

Patrick Nelson was editor and publisher of the music industry trade publication Producer Report and has written for a number of technology blogs. Nelson wrote the cult-classic novel Sprawlism.

The opinions expressed in this blog are those of Patrick Nelson and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.