Daily-Dose

Contents

From New Yorker

From Vox

In Neuralink’s early years, before the company had settled on its current approach — which does involve drilling into the skull — one of its research teams allegedly looked into the tamer intravascular approach, four former Neuralink employees told me. This team explored the option of delivering a device to the brain through an artery and demonstrated that it was feasible.

But by 2019, Neuralink had rejected this option, choosing instead to go with the more invasive surgical robot that implants threads directly into the brain.

Why? If the intravascular approach can restore key functioning to paralyzed patients, and also avoids some of the safety risks that come with crossing the blood-brain barrier, such as inflammation and scar tissue buildup in the brain, why opt for something more invasive than necessary?

The company isn’t saying. But according to Hirobumi Watanabe, who led Neuralink’s intravascular research team in 2018, the main reason was the company’s obsession with maximizing bandwidth.

“The goal of Neuralink is to go for more electrodes, more bandwidth,” Watanabe said, “so that this interface can do way more than what other technologies can do.”

After all, Musk has suggested that a seamless merge with machines could enable us to do everything from enhancing our memory to uploading our minds and living forever — staples of Silicon Valley’s transhumanist fantasies. Which perhaps helps make sense of the company’s dual mission: to “create a generalized brain interface to restore autonomy to those with unmet medical needs today and unlock human potential tomorrow.”

“Neuralink is explicitly aiming at producing general-purpose neural interfaces,” the Munich-based neuroethicist Marcello Ienca told me. “To my knowledge, they are the only company that is currently planning clinical trials for implantable medical neural interfaces while making public statements about future nonmedical applications of neural implants for cognitive enhancement. To create a general-purpose technology, you need to create a seamless interface between humans and computers, enabling enhanced cognitive and sensory abilities. Achieving this vision may indeed require more invasive methods to achieve higher bandwidth and precision.”

Watanabe believes Neuralink prioritized maximizing bandwidth because that serves Musk’s goal of creating a generalized BCI that lets us merge with AI and develop all sorts of new capacities. “That’s what Elon Musk is saying, so that’s what the company has to do,” he said.

The intravascular approach didn’t seem like it could deliver as much bandwidth as the invasive approach. Staying in the blood vessels may be safer, but the downside is that you don’t have access to as many neurons. “That’s the biggest reason they did not go for this approach,” Watanabe said. “It’s rather sad.” He added that he believed Neuralink was too quick to abandon the minimally invasive approach. “We could have pushed this project forward.”

For Tom Oxley, the CEO of Synchron, this raises a big question. “The question is, does a clash emerge between the short-term goal of patient-oriented clinical health outcomes and the long-term goal of AI symbiosis?” he told me. “I think the answer is probably yes.”

“It matters what you’re designing for and if you have a patient problem in mind,” Oxley added. Synchron could theoretically build toward increasing bandwidth by miniaturizing its tech and going into deeper branches of the blood vessels; research shows this is viable. “But,” he said, “we chose a point at which we think we have enough signal to solve a problem for a patient.”

Ben Rapoport, a neurosurgeon who left Neuralink to found Precision Neuroscience, emphasized that any time you’ve got electrodes penetrating the brain, you’re doing some damage to brain tissue. And that’s unnecessary if your goal is helping paralyzed patients.

“I don’t think that tradeoff is required for the kind of neuroprosthetic function that we need to restore speech and motor function to patients with stroke and spinal cord injury,” Rapoport told me. “One of our guiding philosophies is that building a high-fidelity brain-computer interface system can be accomplished without damaging the brain.”

To prove that you don’t need Muskian invasiveness to achieve high bandwidth, Precision has designed a thin film that coats the surface of the brain with 1,024 electrodes — the same number of electrodes in Neuralink’s implant — that deliver signals similar to Neuralink’s. The film has to be inserted through a slit in the skull, but the advantage is that it sits on the brain’s surface without penetrating it. Rapoport calls this the “Goldilocks solution,” and it’s already been implanted in a handful of patients, recording their brain activity at high resolution.

“It’s key to do a very, very safe procedure that doesn’t damage the brain and that is minimally invasive in nature,” Rapoport said. “And furthermore, that as we scale up the bandwidth of the system, the risk to the patient should not increase.”

This makes sense if your most cherished ambition is to help patients improve their lives as much as possible without courting undue risk. But Musk, we know, has other ambitions.

“What Neuralink doesn’t seem to be very interested in is that while a more invasive approach might offer advantages in terms of bandwidth, it raises greater ethical and safety concerns,” Ienca told me. “At least, I haven’t heard any public statement in which they indicate how they intend to address the greater privacy, safety, and mental integrity risks generated by their approach. This is strange because according to international research ethics guidelines it wouldn’t be ethical to use a more invasive technology if the same performance can be achieved using less invasive methods.”

More invasive methods, by their nature, can do real damage to the brain — as Neuralink’s experiments on animals have shown.

Ethical concerns about Neuralink, as illustrated by its animal experiments

Some Neuralink employees have come forward to speak on behalf of the pigs and monkeys used in the company’s experiments, saying they suffered and died at higher rates than necessary because the company was rushing and botching surgeries. Musk, they alleged, was pushing the staff to get FDA approval quickly after he’d repeatedly predicted the company would soon start human trials.

One example of a grisly error: In 2021, Neuralink implanted 25 out of 60 pigs with devices that were the wrong size. Afterward, the company killed all the affected pigs. Staff told Reuters that the mistake could have been averted if they’d had more time to prepare.

Veterinary reports indicate that Neuralink’s monkeys also suffered gruesome fates. In one monkey, a bit of the device “broke off” during implantation in the brain. The monkey scratched and yanked until part of the device was dislodged, and infections took hold. Another monkey developed bleeding in her brain, with the implant leaving parts of her cortex “tattered.” Both animals were euthanized.

Last December, the US Department of Agriculture’s Office of Inspector General launched an investigation into possible animal welfare violations at Neuralink. The company is also facing a probe from the Department of Transportation over worries that implants removed from monkeys’ brains may have been packaged and moved unsafely, potentially exposing people to pathogens.

“Past animal experiments [at Neuralink] revealed serious safety concerns stemming from the product’s invasiveness and rushed, sloppy actions by company employees,” said the Physicians Committee for Responsible Medicine, a nonprofit that opposes animal testing, in a May statement. “As such, the public should continue to be skeptical of the safety and functionality of any device produced by Neuralink.”

Nevertheless, the FDA has cleared the company to begin human trials.

“The company has provided sufficient information to support the approval of its IDE [investigational device exemption] application to begin human trials under the criteria and requirements of the IDE approval,” the FDA said in a statement to Vox, adding, “The agency’s focus for determining approval of an IDE is based on assessing the safety profile for potential subjects, ensuring risks are appropriately minimized and communicated to subjects, and ensuring the potential for benefit, including the value of the knowledge to be gained, outweighs the risk.”

What if Neuralink’s approach works too well?

Beyond what the surgeries will mean for the individuals who get recruited for Neuralink’s trials, there are ethical concerns about what BCI technology means for society more broadly. If high-bandwidth implants of the type Musk is pursuing really do allow unprecedented access to what’s happening in people’s brains, that could make dystopian possibilities more likely. Some neuroethicists argue that the potential for misuse is so great that we need revamped human rights laws to protect us before we move forward.

For one thing, our brains are the final privacy frontier. They’re the seat of our personal identity and our most intimate thoughts. If those precious three pounds of goo in our craniums aren’t ours to control, what is?

In China, the government is already mining data from some workers’ brains by having them wear caps that scan their brainwaves for emotional states. In the US, the military is looking into neurotechnologies to make soldiers more fit for duty — more alert, for instance.

And some police departments around the world have been exploring “brain fingerprinting” technology, which analyzes automatic responses that occur in our brains when we encounter stimuli we recognize. (The idea is that this could enable police to interrogate a suspect’s brain: their responses to familiar faces or phrases would differ from their responses to unfamiliar ones.) Brain fingerprinting tech is scientifically questionable, yet India’s police have used it since 2003, Singapore’s police bought it in 2013, and the Florida state police signed a contract to use it in 2014.

Imagine a scenario where your government uses BCIs for surveillance or interrogations. The right against self-incrimination, enshrined in the US Constitution, could become meaningless in a world where the authorities are empowered to eavesdrop on your mental state without your consent.

Experts also worry that devices like those being built by Neuralink may be vulnerable to hacking. What happens if you’re using one of them and a malicious actor intercepts the Bluetooth connection, changing the signals that go to your brain to make you more depressed, say, or more compliant?

Neuroethicists refer to that as brainjacking. “This is still hypothetical, but the possibility has been demonstrated in proof-of-concept studies,” Ienca told me in 2019. “A hack like this wouldn’t require that much technological sophistication.”

Finally, consider how your psychological continuity or fundamental sense of self could be disrupted by the imposition of a BCI — or by its removal. In one study, an epileptic woman who’d been given a BCI came to feel such a radical symbiosis with it that, she said, “It became me.” Then the company that implanted the device in her brain went bankrupt and she was forced to have it removed. She cried, saying, “I lost myself.”

To ward off the risk of a hypothetical all-powerful AI in the future, Musk wants to create a symbiosis between your brain and machines. But the symbiosis generates its own very real risks — and they are upon us now.

From The Hindu: Sports

From The Hindu: National News

From BBC: Europe

From Ars Technica

From Jokes Subreddit