Whenever John F. Connolly watches a football game, he winces every time a player takes a hit to the head that could result in a concussion.
“Let’s say I’m watching the best team in football, the New England Patriots. If I’m watching and someone gets hit, and goes off the field, they’re going to run the concussion protocol,” says Connolly, who used to live in Boston.
As a fan, he knows the NFL is likely using the best of what’s currently available to evaluate how badly the player’s brain has been hit, and whether it’s prudent to return to the game. But, as a professor of cognitive neuroscience of language at the Centre for Advanced Research in Experimental & Applied Linguistics at McMaster University in Hamilton, Ontario, Canada, he also knows how easy it can be for players to game that system.
“Sometimes players, when they do the pre-season screening, they fake bad. The score is lower, and so when they get drilled, and they come off the field, and they do the concussion protocol, it’ll show that they’re about where they were in the preseason. We know people do that,” he says.
A better solution, while not available now, might someday involve some variation of a brain-computer interface (BCI) that would allow a player’s brain activity after a hit to be evaluated side by side with records of brain activity taken before the hit. That kind of technology is being explored by researchers like Connolly, and not just for football players suffering concussions. BCIs are being investigated for a wide range of applications, from assessing patients in comas to aiding drivers in high-tech cars.
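The baseline idea described above can be sketched in a few lines of code. This is a hypothetical illustration only: the data is synthetic, and the threshold and amplitude numbers are invented for demonstration, not clinical values.

```python
import numpy as np

# Hypothetical sketch: compare a player's post-hit brain response against
# a pre-season recording of the same response. All values are synthetic
# and illustrative; real work would use recorded EEG data.

rng = np.random.default_rng(1)

def erp_amplitude(epochs):
    """Peak amplitude of the averaged epochs (a crude ERP summary)."""
    return np.mean(epochs, axis=0).max()

# Synthetic stand-ins for recorded epochs (trials x samples, microvolts).
baseline_epochs = rng.normal(0, 1.0, (100, 200)) + 6.0   # pre-season response
post_hit_epochs = rng.normal(0, 1.0, (100, 200)) + 3.5   # attenuated response

baseline = erp_amplitude(baseline_epochs)
post_hit = erp_amplitude(post_hit_epochs)

# Flag a large drop relative to the player's own baseline, so "faking bad"
# in the pre-season would only make a genuine post-hit drop harder to hide.
drop = (baseline - post_hit) / baseline
flagged = drop > 0.25   # illustrative threshold, not a clinical one
print(f"amplitude dropped {drop:.0%}; flag for evaluation: {flagged}")
```

The point of comparing against the player’s own baseline, rather than a population norm, is exactly what makes the approach harder to game than a questionnaire-style protocol.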
Excitement, Some Trepidation
“It could be extremely beneficial for anyone with any type of communication impairment, from something trivial to something catastrophic,” Connolly says. “No one is claiming it can be done now, or that it will be done any time soon, but we are claiming that it probably can be done. From stuttering to post-stroke language loss. It could help with all of those things.”
The chatter and excitement, along with some trepidation, about this kind of technology grew earlier this year when Regina Dugan, head of Facebook’s experimental technology division, Building 8, talked about BCIs and silent speech interfaces, and the company’s goal of developing them, during Facebook’s annual developer conference in San Jose, California. She talked not only about how these technologies could help people with disabilities and medical problems, but also about how they could help the average person communicate more quickly on a smartphone using, for example, something like Facebook with its 2 billion or so users.
Dugan said these silent speech interfaces could have “all of the convenience of voice and the privacy of text.”
Type With Your Brain, Hear With Your Skin
“What if you could type directly with your brain? And what if you could hear with your skin?” Dugan asked in a recent post to Facebook, to her more than 9,000 followers. “Over the next two years, we will be building systems that demonstrate the capability to type at 100 wpm by decoding neural activity devoted to speech… Even something as simple as a ‘yes/no’ brain click, or a ‘brain mouse’ would be transformative.”
In a discussion with the Institute of Electrical and Electronics Engineers (IEEE), Building 8 team member and neuroscience expert Mark Chevillet acknowledged that there was a fair amount of “technical and research risk involved” with the undertaking. “But we’re not looking for the next guaranteed incremental step, we’re looking for transformative steps,” he was quoted as saying in an article on the IEEE website.
To meet their goals, he says the team is looking into developing “non-invasive technology that can read out high-quality neural data,” and that they are, at the same time, taking a deep dive into how exactly language and speech work. He told the IEEE that they are trying to figure out how to decode 100 words per minute, assuming the availability of technology that could provide high-quality neural data.
These are ambitious goals indeed, but not entirely in the realm of science fiction, some experts say. Facebook’s announcement, along with recent discussions of similar technology by Tesla CEO Elon Musk, has raised fears about the role of machines in our lives, along with the ethical and moral implications of these developments.
Many agree that the useful and prudent adoption of such technology will require discussions among philosophers, scientists, ethicists, engineers, and academics, among others. And there are complex manufacturing obstacles to be overcome and problems to be solved as well, the type of problems that are a perfect playground for someone with a background in rapid prototyping and expertise in 3D computing.
Connolly says that, while he is no expert in 3D printing, he has followed its recent aerospace and medical applications, and knows it will play a big role.
“Huge,” he says. “You can use it to print all kinds of materials. You can print layers of things. Plastics. Metals. Something with a grid in it. You can print all of that. And you can do it from the largest scale to the smallest. You can do it quickly. If something doesn’t bend right, you just change the parameters on the keyboard, hit print, and you have something new in no time. 3D printing is where it’s at.”
Recovering from a “Terrible Reputation”
Advancements in fields like 3D printing, combined with advances in the field of neuroscience are part of what is creating some of the excitement about BCIs, he says.
There was a time, not too long ago, when fields involving neuro-feedback and bio-feedback, after showing some initial promise, were appropriated by practitioners who sold them with less than academic rigor.
“At one time fields like bio-feedback had a terrible reputation. It was so flaky. It was all associated with the late 1960s,” he says. “Still, there was some promise in the notion of controlling functions of your body that you wouldn’t normally think of controlling, like your heart rate activity, muscle activity, or brain activity.”
Then, gradually, reputable scientists – like Niels Birbaumer at the University of Tuebingen, Emanuel Donchin of the University of South Florida, and Jonathan R. Wolpaw, director of the National Center for Adaptive Neurotechnologies – began to reclaim the field, crowding out the quacks.
Scientists began to explore and experiment with electroencephalography (EEG), event-related potentials (ERPs), and ERP components like the P300 wave.
“There’s a lot of jargon associated with this field,” Connolly admits. “But research into this area opened up a whole new world.”
The simplest way to understand the P300 is as a brain response that can be recorded with EEG. Any time there is a noise, or other stimulus, and you are paying attention, your brain re-synchronizes, producing a characteristic positive deflection in voltage roughly 300 milliseconds after the stimulus – hence the name.
“That’s how you recognize things,” he says. “So, if I’m looking at a picture of my wife, there are patterns of my brain activity that light up. If I’m looking at a picture of my son, different patterns in my brain light up. If I look at a picture of my son’s girlfriend, yet another pattern in my brain lights up.”
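A common way researchers make a response like the P300 visible is by averaging many stimulus-locked EEG epochs, so that random background activity cancels while the time-locked response remains. The following is a minimal sketch of that averaging idea using entirely synthetic data; a real analysis would use recorded EEG, for example via a toolbox such as MNE-Python.

```python
import numpy as np

# Sketch of "oddball"-style epoch averaging to reveal a P300-like response.
# All signals here are simulated; amplitudes and noise levels are invented.

rng = np.random.default_rng(0)
fs = 250                        # sampling rate in Hz
t = np.arange(0, 0.8, 1 / fs)   # 800 ms epoch, stimulus at t = 0

def synthetic_epoch(is_target):
    """One noisy EEG epoch; targets carry a positive bump near 300 ms."""
    noise = rng.normal(0, 5.0, t.size)   # background EEG, microvolts
    if is_target:
        p300 = 8.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
        return noise + p300
    return noise

# Average many epochs: noise cancels, the stimulus-locked response remains.
targets = np.mean([synthetic_epoch(True) for _ in range(200)], axis=0)
standards = np.mean([synthetic_epoch(False) for _ in range(200)], axis=0)

difference = targets - standards          # "target minus standard" wave
peak_ms = t[np.argmax(difference)] * 1000
print(f"difference wave peaks near {peak_ms:.0f} ms")
```

Averaging across trials is the same basic move that lets per-person patterns, like the ones Connolly describes for familiar faces, stand out from the noise of raw EEG.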
These unique patterns have been harnessed in experiments with BCIs. Connolly recounted one research project being done at his university by a Ph.D. candidate. Using a wireless EEG headset, this young man was able to call up and manipulate objects of different geometric shapes, sizes, and colors using just his mind.
“He’s really experienced at developing algorithms, and that matters,” he said. “And I’m watching him, and he took just a moment and concentrated, and the screen came on. He didn’t touch anything. And, as he had described to us what would happen, a series of shapes and colors came on the screen. When he finished, I thought, OK, this is crazy. He was doing it with EEG activity. It was colors, shapes, movements, and it was fairly complex.”
Connolly hastened to add that his university is not the only one doing this kind of work, and that the gold standard, replicable findings by multiple researchers, is the ultimate goal. “I don’t want to give the impression that we are the only ones. We are joining a very healthy field. But we will be making contributions in this field. It’s a very active field, and it’s very exciting. When I first saw this young fellow, I was just so impressed. I had never seen it done in person,” he says.
After the demonstration, they all went out, had lunch, and talked about what they saw. “What he explained? It was not simple. But it was straightforward…It’s all associated with brain electrical activity. That’s the core of how our brain works. It’s what we record in an EEG. It’s basically dendritic activity, based on chemical activity in synapses in the cell body.”
He said this field of research, after having gone through a “flaky period with everybody and his uncle fiddling around with it,” is now at a point of making “breathtaking progress.”
He says that when he hears about some potential applications of this technology, it sounds a bit far-fetched. But then he is reminded that some of the things imagined by the likes of science fiction author Jules Verne, or Gene Roddenberry, who created the original Star Trek television series, seemed insane, until someone created something just like what they had described.
“Increasingly, these things are becoming technologically possible. It’s a very exciting world. Things I thought were just a figment of someone’s imagination when I was a graduate student are doable now,” he says. “They may sound strange, but they are doable.”
Like what you’ve just read? Sign up to receive GrabCAD’s free weekly Digital Thread newsletter.