Software Turns ‘Mental Handwriting’ into On-Screen Words, Sentences

Artificial intelligence, interpreting data from a device placed at the brain’s surface, enables people who are paralyzed or have severely impaired limb movement to communicate by text.

Stanford University investigators have coupled artificial-intelligence software with a device, called a brain-computer interface, implanted in the brain of a man with full-body paralysis. The software was able to decode information from the BCI to quickly convert the man’s thoughts about handwriting into text on a computer screen.

The man was able to write using this approach more than twice as quickly as he could using a previous method developed by the Stanford researchers, who reported those findings in 2017 in the journal eLife.

Hope for those without use of their arms

The new findings, published online May 12 in Nature, could spur further advances benefiting hundreds of thousands of Americans, and millions globally, who’ve lost the use of their upper limbs or their ability to speak due to spinal-cord injuries, strokes or amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease, said Jaimie Henderson, MD, professor of neurosurgery.

“This approach allowed a person with paralysis to compose sentences at speeds nearly comparable to those of able-bodied adults of the same age typing on a smartphone,” said Henderson, the John and Jene Blume — Robert and Ruth Halperin Professor. “The goal is to restore the ability to communicate by text.”

The participant in the study produced text at a rate of about 18 words per minute. By comparison, able-bodied people of the same age can punch out about 23 words per minute on a smartphone.

The participant, referred to as T5, lost practically all movement below the neck because of a spinal-cord injury in 2007. Nine years later, Henderson placed two brain-computer-interface chips, each the size of a baby aspirin, on the left side of T5’s brain. Each chip has 100 electrodes that pick up signals from neurons firing in the part of the motor cortex — a region of the brain’s outermost surface — that governs hand movement.

Those neural signals are sent via wires to a computer, where artificial-intelligence algorithms decode the signals and surmise T5’s intended hand and finger motion. The algorithms were designed in Stanford’s Neural Prosthetics Translational Lab, co-directed by Henderson and Krishna Shenoy, PhD, professor of electrical engineering and the Hong Seh and Vivian W. M. Lim Professor of Engineering.

Shenoy and Henderson, who have been collaborating on BCIs since 2005, are the senior co-authors of the new study. The lead author is Frank Willett, PhD, a research scientist in the lab and with the Howard Hughes Medical Institute.

“We’ve learned that the brain retains its ability to prescribe fine movements a full decade after the body has lost its ability to execute those movements,” Willett said. “And we’ve learned that complicated intended motions involving changing speeds and curved trajectories, like handwriting, can be interpreted more easily and more rapidly by the artificial-intelligence algorithms we’re using than can simpler intended motions like moving a cursor in a straight path at a steady speed. Alphabetical letters are different from one another, so they’re easier to tell apart.”

In the 2017 study, three participants with limb paralysis, including T5 — all with BCIs placed in the motor cortex — were asked to concentrate on using an arm and hand to move a cursor from one key to the next on a computer-screen keyboard display, then to focus on clicking on that key.

A mental handwriting speed record

In that study, T5 set what was until now the all-time record: copying displayed sentences at about 40 characters per minute. Another study participant was able to write extemporaneously, selecting whatever words she wanted, at 24.4 characters per minute.

If the paradigm underlying the 2017 study was analogous to typing, the model for the new Nature study is analogous to handwriting. T5 concentrated on trying to write individual letters of the alphabet on an imaginary legal pad with an imaginary pen, despite his inability to move his arm or hand. He repeated each letter 10 times, permitting the software to “learn” to recognize the neural signals associated with his effort to write that particular letter.
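The study’s actual decoder is a recurrent neural network trained on far richer data, but the repeat-and-learn idea can be illustrated with a toy nearest-centroid classifier. Everything below — the simulated firing-rate features, the noise level, the three-letter alphabet — is a hypothetical sketch, not the researchers’ method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each attempted letter yields one feature vector of
# neural firing rates (one value per electrode). The real study decoded
# pen-tip trajectories with a recurrent network; this is only a toy
# illustration of "repeat each letter, then recognize it from its
# average neural signature."
N_ELECTRODES = 100   # electrodes per array, as in the study
LETTERS = ["a", "b", "c"]
REPEATS = 10         # T5 repeated each letter 10 times

# Simulate 10 noisy repetitions around a distinct pattern per letter.
true_patterns = {c: rng.normal(size=N_ELECTRODES) for c in LETTERS}
trials = {
    c: true_patterns[c] + 0.3 * rng.normal(size=(REPEATS, N_ELECTRODES))
    for c in LETTERS
}

# "Learning": average the repetitions to get one template per letter.
templates = {c: trials[c].mean(axis=0) for c in LETTERS}

def decode(features):
    """Return the letter whose template is nearest to the features."""
    return min(templates, key=lambda c: np.linalg.norm(features - templates[c]))

# A fresh noisy attempt at "b" should decode as "b".
new_trial = true_patterns["b"] + 0.3 * rng.normal(size=N_ELECTRODES)
print(decode(new_trial))
```

As the researchers note, real handwriting signals are far more separable than this toy suggests, because each letter traces a distinctive trajectory over time rather than a single static pattern.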

[Photo: Jaimie Henderson and Krishna Shenoy, who have been collaborating on brain-computer interfaces since 2005, are the co-senior authors of a study describing the new work. Credit: Paul Sakuma]

In further sessions, T5 was instructed to copy sentences the algorithms had never been exposed to. He was eventually able to generate 90 characters, or about 18 words, per minute. Later, asked to answer open-ended questions, which required some pauses for thought, he generated 73.8 characters (close to 15 words, on average) per minute, tripling the previous free-composition record set in the 2017 study.
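The characters-to-words conversions above appear to follow the common convention of counting five characters as one word; under that assumption, the arithmetic checks out:

```python
# Assumed convention: one "word" = five characters (spaces included).
CHARS_PER_WORD = 5

copying_cpm = 90      # characters per minute while copying sentences
freeform_cpm = 73.8   # characters per minute answering open questions

print(copying_cpm / CHARS_PER_WORD)                # 18.0 words per minute
print(round(freeform_cpm / CHARS_PER_WORD, 2))     # 14.76 -- "close to 15"
```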

T5’s sentence-copying error rate was about one mistake in every 18 or 19 attempted characters. His free-composition error rate was about one in every 11 or 12 characters. When the researchers used an after-the-fact autocorrect function — similar to the ones incorporated into our smartphone keyboards — to clean things up, those error rates were markedly lower: below 1% for copying, and just over 2% for freestyle.
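Taking the midpoints of the quoted per-character counts gives the approximate raw percentages behind those figures (the article gives ranges, so these midpoints are an assumption):

```python
# Rough percentage equivalents of the quoted raw error rates.
# "About one mistake in every 18 or 19 characters" while copying:
copy_raw = 1 / 18.5
# "About one in every 11 or 12 characters" while composing freely:
free_raw = 1 / 11.5

print(f"{copy_raw:.1%}")   # 5.4% raw character error rate, copying
print(f"{free_raw:.1%}")   # 8.7% raw character error rate, free composition
```

Autocorrection then brings these down to below 1% and just over 2%, respectively.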

These error rates are quite low compared with other BCIs, said Shenoy, who is also a Howard Hughes Medical Institute investigator.

“While handwriting can approach 20 words per minute, we tend to speak around 125 words per minute, and this is another exciting direction that complements handwriting. If combined, these systems could together offer even more options for patients to communicate effectively,” Shenoy said.

The BCI used in the study is limited by law to investigational use and is not yet approved for commercial use.

Stanford University’s Office of Technology Licensing has applied for a patent on intellectual property associated with Willett, Henderson and Shenoy’s work.

Henderson and Shenoy are members of the Wu Tsai Neurosciences Institute at Stanford and of Stanford Bio-X.

Donald Avansino, PhD, a software engineer in the Neural Prosthetics Translational Lab, was a co-author of the study.

The study’s results are the latest chapter of a long-running collaboration between Henderson and Shenoy and a multi-institutional consortium and clinical trial called BrainGate2 (NCT00912041). Study co-author Leigh Hochberg, MD, PhD, a neurologist and neuroscientist at Massachusetts General Hospital, Brown University and the Veterans Affairs Providence Health Care System in Rhode Island, is the sponsor-investigator of BrainGate2.

The study was funded by the Wu Tsai Neurosciences Institute, the Howard Hughes Medical Institute, the U.S. Department of Veterans Affairs, the National Institutes of Health (grants UH2NS095548, R01DC009899, R01DC017844, R01DC014034 and U01NS098968), Larry and Pamela Garlick, Samuel and Betsy Reeves, and the Simons Foundation.

Source: Bruce Goldman, Stanford Medicine

Posted on May 14th, 2021 in Assistive Technology.