Opinion
The Brain Implants That Could Change Humanity
Brains are talking to computers, and computers to brains. Are our daydreams safe?
Credit: Derrick Schultz
By Moises Velasquez-Manoff
Contributing Opinion Writer
Jack Gallant never set out to create a mind-reading machine. His focus was more prosaic. A computational neuroscientist at the University of California, Berkeley, Dr. Gallant worked for years to improve our understanding of how brains encode information — what regions become active, for example, when a person sees an airplane or an apple or a dog — and how that activity represents the object being viewed.
By the late 2000s, scientists could determine what kind of thing a person might be looking at from the way the brain lit up — a human face, say, or a cat. But Dr. Gallant and his colleagues went further. They figured out how to use machine learning to decipher not just the class of thing, but which specific image a subject was viewing. (Which photo of a cat, out of three options, for example.)
One day, Dr. Gallant and his postdocs got to talking. In the same way that you can turn a speaker into a microphone by hooking it up backward, they wondered if they could reverse-engineer the algorithm they'd developed so they could visualize, solely from brain activity, what a person was seeing.
The first phase of the project was to train the AI. For hours, Dr. Gallant and his colleagues showed volunteers in fMRI machines movie clips. By matching patterns of brain activation prompted by the moving images, the AI built a model of how the volunteers' visual cortex, which parses information from the eyes, worked. Then came the next phase: translation. As they showed the volunteers movie clips, they asked the model what, given everything it now knew about their brains, it thought they might be looking at.
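For readers who want a concrete sense of how such a decoder works, here is a toy sketch in Python. Every number, feature and variable name below is invented for illustration and is not taken from Dr. Gallant's actual model: it learns a linear "encoding" map from clip features to simulated voxel responses, then "decodes" by asking which candidate clip best explains a new pattern of brain activity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: voxel responses recorded while volunteers
# watched clips, each clip summarized by a vector of visual features.
n_clips, n_features, n_voxels = 200, 50, 300
clip_features = rng.normal(size=(n_clips, n_features))
true_weights = rng.normal(size=(n_features, n_voxels))   # the "brain" we simulate
voxel_responses = clip_features @ true_weights + 0.1 * rng.normal(size=(n_clips, n_voxels))

# Encoding phase: ridge regression from clip features to voxel activity.
lam = 1.0
W = np.linalg.solve(
    clip_features.T @ clip_features + lam * np.eye(n_features),
    clip_features.T @ voxel_responses,
)

def decode(activity, candidates):
    """Pick the candidate clip whose predicted voxel pattern best
    matches the measured brain activity."""
    predictions = candidates @ W
    errors = ((predictions - activity) ** 2).sum(axis=1)
    return int(errors.argmin())

# Decoding phase: show a new clip, record activity, rank candidates.
new_clip = rng.normal(size=n_features)
candidates = np.vstack([rng.normal(size=(2, n_features)), new_clip])
observed = new_clip @ true_weights
print(decode(observed, candidates))  # → 2, the clip the simulated brain saw
```

Real decoders work with far richer feature spaces and noisier signals, but the pick-the-best-explaining-stimulus logic is the same.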
The experiment focused only on a subsection of the visual cortex. It didn't capture what was happening elsewhere in the brain — how a person might feel about what she was seeing, for instance, or what she might be fantasizing about as she watched. The endeavor was, in Dr. Gallant's words, a primitive proof of concept.
And yet the results, published in 2011, are remarkable.
The reconstructed images move with a dreamlike fluidity. In their imperfection, they evoke expressionist art. (And a few reconstructed images seem downright wrong.) But where they succeed, they represent an astonishing achievement: a machine translating patterns of brain activity into a moving image understandable by other people — a machine that can read the brain.
Dr. Gallant was thrilled. Imagine the possibilities when better brain-reading technology became available? Imagine the people suffering from locked-in syndrome, Lou Gehrig's disease, the people incapacitated by strokes, who could benefit from a machine that could help them interact with the world?
He was also scared because the experiment showed, in a concrete way, that humanity was at the dawn of a new era, one in which our thoughts could theoretically be snatched from our heads. What was going to happen, Dr. Gallant wondered, when you could read thoughts the thinker might not even be consciously aware of, when you could see people's memories?
"That's a real sobering thought that now you have to take seriously," he told me recently.
The 'Google Cap'
For decades, we've communicated with computers mostly by using our fingers and our eyes, by interfacing via keyboards and screens. These tools and the bony digits we prod them with provide a natural limit to the speed of communication between human brain and machine. We can convey information only as quickly (and accurately) as we can type or click.
Voice recognition, like that used by Apple's Siri or Amazon's Alexa, is a step toward more seamless integration of human and machine. The next step, one that scientists around the world are pursuing, is technology that allows people to control computers — and everything connected to them, including cars, robotic arms and drones — just by thinking.
Dr. Gallant jokingly calls the imagined piece of hardware that would do this a "Google cap": a cap that could sense silent commands and prompt computers to respond appropriately.
The problem is that, to work, that cap would need to be able to see, in some detail, what's happening in the roughly 100 billion neurons that make up the brain.
Technology that can easily peer through the skull, like the MRI machine, is far too unwieldy to mount on your head. Less bulky technology, like electroencephalography, or E.E.G., which measures the brain's electrical activity through electrodes attached to the scalp, doesn't provide nearly the same clarity. One scientist compares it to looking for the surface ripples made by a fish swimming underwater while a storm roils the lake.
Other methods of "seeing" into the brain might include magnetoencephalography, or M.E.G., which measures magnetic waves emanating outside the skull from neurons firing beneath it; or using infrared light, which can penetrate living tissue, to infer brain activity from changes in blood flow. (Pulse oximeters work this way, by shining infrared light through your finger.)
What technologies will power the brain-computer interface of the future is still unclear. And if it's unclear how we'll "read" the brain, it's even less clear how we'll "write" to it.
This is the other holy grail of brain-machine research: technology that can transmit information to the brain directly. We're probably nowhere near the moment when you can silently ask, "Alexa, what's the capital of Peru?" and have "Lima" materialize in your mind.
Even so, solutions to these challenges are beginning to emerge. Much of the research has occurred in the medical realm, where, for years, scientists have worked incrementally toward giving quadriplegics and others with immobilizing neurological conditions better means of interacting with the world through computers. But in recent years, tech companies — including Facebook, Microsoft and Elon Musk's Neuralink — have begun investing in the field.
Some scientists are elated by this infusion of energy and resources. Others worry that as this tech moves into the consumer realm, it could have a variety of unintended and potentially dangerous consequences, from the erosion of mental privacy to the exacerbation of inequality.
Rafael Yuste, a neurobiologist at Columbia University, counts two great advances in computing that have transformed society: the transition from room-size mainframe computers to personal computers that fit on a desk (then in your lap), and the advent of mobile computing with smartphones in the 2000s. Noninvasive brain-reading tech would be a third great leap, he says.
"Forget about the Covid crisis," Dr. Yuste told me. "What's coming with this new tech can change humanity."
Dear Brain
Not many people will volunteer to be the first to undergo a novel kind of brain surgery, even if it holds the promise of restoring mobility to those who've been paralyzed. So when Robert Kirsch, the chairman of biomedical engineering at Case Western Reserve University, put out such a call nearly 10 years ago, and one person both met the criteria and was willing, he knew he had a pioneer on his hands.
The man's name was Bill Kochevar. He'd been paralyzed from the neck down in a biking accident years earlier. His motto, as he later explained it, was "somebody has to do the research."
At that point, scientists had already invented gizmos that helped paralyzed patients leverage what mobility remained — lips, an eyelid — to control computers or move robotic arms. But Dr. Kirsch was after something different. He wanted to help Mr. Kochevar move his own limbs.
The first step was implanting two arrays of sensors over the part of the brain that would normally control Mr. Kochevar's right arm. Electrodes that could receive signals from those arrays via a computer were implanted into his arm muscles. The implants, and the computer connected to them, would function as a kind of electronic spinal cord, bypassing his injury.
Once his arm muscles had been strengthened — achieved with a regimen of mild electrical stimulation while he slept — Mr. Kochevar, who at that point had been paralyzed for over a decade, was able to feed himself and drink water. He could even scratch his nose.
About two dozen people around the world who have lost the use of limbs from accidents or neurological disease have had sensors implanted on their brains. Many, Mr. Kochevar included, participated in a U.S. government-funded program called BrainGate. The sensor arrays used in this research, smaller than a button, allow patients to move robotic arms or cursors on a screen just by thinking. But as far as Dr. Kirsch knows, Mr. Kochevar, who died in 2017 for reasons unrelated to the research, was the first paralyzed person to regain use of his limbs by way of this technology.
This fall, Dr. Kirsch and his colleagues will begin version 2.0 of the experiment. This time, they'll implant six smaller arrays — more sensors will improve the quality of the signal. And instead of implanting electrodes directly in the volunteers' muscles, they'll insert them upstream, circling the nerves that move the muscles. In theory, Dr. Kirsch says, that will enable movement of the entire arm and hand.
The next major goal is to restore sensation so that people can know if they're holding a rock, say, or an orange — or if their hand has wandered too close to a flame. "Sensation has been the longest ignored part of paralysis," Dr. Kirsch told me.
A few years ago, scientists at the University of Pittsburgh began groundbreaking experiments on that front with a man named Nathan Copeland who was paralyzed from the upper chest down. They routed sensory information from a robotic arm into the part of his cortex that dealt with his right hand's sense of touch.
Every brain is a living, undulating organ that changes over time. That's why, before each of Mr. Copeland's sessions, the AI has to recalibrate — to construct a new brain decoder. "The signals in your brain shift," Mr. Copeland told me. "They're not exactly the same every day."
And the results weren't perfect. Mr. Copeland described them to me as "weird," "electrical tingly" but also "amazing." The sensory feedback was immensely important, though, in knowing that he'd actually grasped what he thought he'd grasped. And more generally, it demonstrated that a person could "feel" a robotic hand as his or her own, and that information coming from electronic sensors could be fed into the human brain.
Preliminary as these experiments are, they suggest that the pieces of a brain-machine interface that can both "read" and "write" already exist. People can not only move robotic arms just by thinking; machines can also, however imperfectly, convey information to the brain about what that arm encounters.
Who knows how soon versions of this technology will be available for kids who want to think-move avatars in video games or think-surf the web. People can already fly drones with their brain signals, so perhaps crude consumer versions will appear in coming years. But it's hard to overstate how life-changing such tech could be for people with spinal cord injuries or neurological diseases.
Edward Chang, a neurosurgeon at the University of California, San Francisco, who works on brain-based speech recognition, said that maintaining the ability to communicate can mean the difference between life and death. "For some people, if they have a means to continue to communicate, that may be the reason they decide to stay alive," he told me. "That motivates us a lot in our work."
In a recent study, Dr. Chang and his colleagues predicted with up to 97 percent accuracy — the best rate yet achieved, they say — what words a volunteer had said (from about 250 words used in a predetermined set of 50 sentences) by using implanted sensors that monitored activity in the part of the brain that moves the muscles involved in speaking. (The volunteers in this study weren't paralyzed; they were epilepsy patients undergoing brain surgery to address that condition, and the implants were not permanent.)
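The logic of decoding against a small, fixed vocabulary can be illustrated with a toy nearest-template classifier. This is a drastic simplification of Dr. Chang's actual system, and the vocabulary, channel count and noise level below are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each word in a small vocabulary evokes a characteristic
# pattern across recording channels; we classify a noisy recording by finding
# the closest per-word template pattern.
vocab = ["home", "water", "help", "yes", "no"]
n_channels = 64
templates = rng.normal(size=(len(vocab), n_channels))

def decode_word(recording):
    """Return the vocabulary word whose template best matches the recording."""
    dists = ((templates - recording) ** 2).sum(axis=1)
    return vocab[int(dists.argmin())]

# Simulate a noisy recording of the speaker attempting "help".
recording = templates[2] + 0.3 * rng.normal(size=n_channels)
print(decode_word(recording))  # → help
```

Constraining the decoder to a known sentence set is a large part of why accuracy rates like 97 percent are achievable: the algorithm only has to distinguish among a few hundred possibilities, not all of English.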
Dr. Chang used sensor arrays similar to those Dr. Kirsch used, but a noninvasive method may not be too far away.
Facebook, which funded Dr. Chang's study, is working on a brain-reading helmet-like contraption that uses infrared light to peer into the brain. Mark Chevillet, the director of brain-computer interface research at Facebook Reality Labs, told me in an email that while full speech recognition remains distant, his lab will be able to decode simple commands like "home," "select" and "delete" in "coming years."
This progress isn't solely driven by advances in brain-sensing technology — by the physical meeting point of mankind and machine. The AI matters as much, if not more.
Trying to understand the brain from outside the skull is like trying to make sense of a conversation taking place two rooms away. The signal is often messy, hard to decipher. So it's the same types of algorithms that now allow speech-recognition software to do a decent job of understanding spoken speech — including individual idiosyncrasies of pronunciation and regional accents — that may now enable brain-reading technology.
Zap That Urge
Not all the applications of brain reading require something as complex as understanding speech, however. In some cases, scientists just want to blunt urges.
When Casey Halpern, a neurosurgeon at Stanford, was in college, he had a friend who drank too much. Another was overweight but couldn't stop eating. "Impulse control is such a pervasive problem," he told me.
As a budding scientist, he learned about methods of deep brain stimulation used to treat Parkinson's disease. A mild electrical current applied to a part of the brain involved in movement could lessen tremors caused by the disease. Could he apply that technology to the problem of inadequate self-control?
Working with mice in the 2010s, he identified a part of the brain, called the nucleus accumbens, where activity spiked in a predictable pattern just before a mouse was about to gorge on high-fat food. He found he could reduce how much the mouse ate by disrupting that activity with a mild electrical current. He could zap the compulsion to gorge as it was taking hold in the rodents' brains.
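That closed loop — watch for the telltale spike, then stimulate — can be sketched as a simple threshold-crossing detector. The signal values and threshold below are invented; a real responsive-stimulation system learns its trigger pattern from recorded brain data rather than using a fixed cutoff.

```python
def closed_loop(signal, threshold):
    """Return the sample indices at which a stimulation pulse would fire."""
    pulses = []
    armed = True  # avoid re-triggering while the signal stays elevated
    for i, level in enumerate(signal):
        if armed and level > threshold:
            pulses.append(i)
            armed = False
        elif level <= threshold:
            armed = True
    return pulses

# Baseline activity with two brief craving-like surges.
signal = [0.2, 0.3, 0.2, 1.5, 1.7, 0.4, 0.3, 1.6, 0.2]
print(closed_loop(signal, threshold=1.0))  # → [3, 7], the onset of each surge
```

The "armed" flag matters: stimulating once at the onset of a surge, rather than continuously while activity stays high, is closer to how responsive devices behave.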
Earlier this year, he began testing the approach in people suffering from obesity who haven't been helped by any other treatment, including gastric-bypass surgery. He implants an electrode in their nucleus accumbens. It's connected to a device that was originally developed to prevent seizures in people with epilepsy.
As with Dr. Chang's or Dr. Gallant's work, an algorithm first has to learn about the brain it's attached to — to recognize the signs of oncoming loss of control. Dr. Halpern and his colleagues train the algorithm by giving patients a taste of a milkshake, or offering a buffet of the patient's favorite foods, and recording their brain activity just before the person indulges.
He's so far completed two implantations. "The goal is to help restore control," he told me. And if it works in obesity, which afflicts roughly 40 percent of adults in the United States, he plans to test the device against addictions to alcohol, cocaine and other substances.
Dr. Halpern's approach takes as fact something that he says many people have a hard time accepting: that the lack of impulse control that may underlie addictive behavior isn't a choice, but results from a malfunction of the brain. "We have to accept that it's a disease," he says. "We often just judge people and assume it's their own fault. That's not what the current research is suggesting we should do."
I must confess that of the numerous proposed applications of brain-machine interfacing I came across, Dr. Halpern's was my favorite to extrapolate on. How many lives have been derailed by the inability to resist the temptation of that next pill or that next beer? What if Dr. Halpern's solution were generalizable?
What if every time your mind wandered off while writing an article, you could, with the aid of your concentration implant, prod it back to the task at hand, finally completing those life-changing projects you've never gotten around to finishing?
These applications remain fantasies, of course. But the mere fact that such a thing may be possible is partly what prompts Dr. Yuste, the neurobiologist, to worry about how this technology could blur the boundaries of what we consider to be our personalities.
Such blurring is already an issue, he points out. Parkinson's patients with implants sometimes report feeling more aggressive than usual when the machine is "on." Depressed patients undergoing deep brain stimulation sometimes wonder if they're really themselves anymore. "You kind of feel artificial," one patient told researchers. The machine isn't implanting ideas in their minds, like Leonardo DiCaprio's character in the movie "Inception," but it is seemingly changing their sense of self.
What happens if people are no longer certain whether their emotions are theirs, or the effects of the machines they're connected to?
Dr. Halpern dismisses these concerns as overblown. Such effects are part of many medical treatments, he points out, including commonly prescribed antidepressants and stimulants. And sometimes, as in the case of hopeless addiction, changing someone's behavior is precisely the goal.
Still, the longer-term issue of what could happen when brain-writing technology jumps from the medical into the consumer realm is difficult to forget. If my imagined focus enhancer existed, for example, but was very expensive, it could exacerbate the already yawning chasm between those who can afford expensive tutors, cars and colleges — and now concentration-boosting technology — and those who cannot.
"Certain groups will get this tech, and will enhance themselves," Dr. Yuste told me. "This is a really serious threat to humanity."
The Brain Business
"The idea that you have to drill holes in skulls to read brains is nuts," Mary Lou Jepsen, the chief executive and founder of Openwater, told me in an email. Her company is developing technology that, she says, uses infrared light and ultrasonic waves to peer into the body.
Other researchers are simply trying to make invasive approaches less invasive. A company called Synchron seeks to avoid opening the skull or touching brain tissue at all by inserting a sensor through the jugular vein in the neck. It's currently undergoing a safety and feasibility trial.
Dr. Kirsch suspects that Elon Musk's Neuralink is probably the best brain-sensing tech in development. It requires surgery, but unlike the BrainGate sensor arrays, it's thin, flexible and can conform to the mountainous topography of the brain. The hope is that this makes it less abrasive. It also has hairlike filaments that sink into brain tissue. Each filament contains multiple sensors, theoretically allowing the capture of more information than flatter arrays that sit at the brain's surface. It can both read and write to the brain, and it's accompanied by a robot that assists with the implantation.
A major challenge to implants is that, as Dr. Gallant says, "your brain doesn't like having stuff stuck in your brain." Over time, immune cells may swarm the implant, covering it with goop.
One way to try to avoid this is to drastically shrink the size of the sensors. Arto Nurmikko, a professor of engineering and physics at Brown University who's part of the BrainGate effort, is developing what he calls "neurograins" — tiny, implantable silicon sensors no larger than a handful of neurons. They're too small to have batteries, so they're powered by microwaves beamed in from outside the skull.
He foresees perhaps 1,000 mini sensors implanted throughout the brain. He's so far tested them only in rodents. But perhaps we shouldn't be so sure that healthy people wouldn't volunteer for "mental enhancement" surgery. Every year, Dr. Nurmikko poses a hypothetical to his students: 1,000 neurograin implants that would allow students to learn and communicate faster; any volunteers?
"Typically about half the class says, 'Sure,'" he told me. "That speaks to where we are today."
Jose Carmena and Michel Maharbiz, scientists at Berkeley and founders of a start-up called Iota Biosciences, have their own version of this idea, which they call "neural dust": tiny implants for the peripheral nervous system — arms, legs and organs besides the brain. "It's like a Fitbit for your liver," Dr. Carmena told me.
They imagine treating inflammatory diseases by stimulating nerves throughout the body with these tiny devices. And where Dr. Nurmikko uses microwaves to power the devices, Dr. Carmena and Dr. Maharbiz foresee the use of ultrasound to beam power to them.
Generally, they say, this kind of tech will be adopted first in the medical context and then move to the lay population. "We're going to evolve to augmenting humans," Dr. Carmena told me. "There's no question."
But hype permeates the field, he warns. Sure, Elon Musk has argued that closer brain-machine integration will help humans compete with ever-more-powerful A.I.s. But in reality, we're nowhere near a device that could, for example, help you master kung fu instantaneously like Keanu Reeves in "The Matrix."
What does the near future look like for the average consumer? Ramses Alcaide, the chief executive of a company called Neurable, imagines a world in which smartphones tucked in our pockets or backpacks act as processing hubs for data streaming in from smaller computers and sensors worn around the body. These devices — glasses that serve as displays, earbuds that whisper in our ears — are where the actual interfacing between human and computer will occur.
Microsoft sells a headset called HoloLens that superimposes images onto the world, an idea called "augmented reality." A company called Mojo Vision is working toward a contact lens that projects monochrome images directly onto the retina, a private computer display superimposed over the world.
And Dr. Alcaide himself is working on what he sees as the linchpin to this vision, a device that, one day, may help you silently communicate with all your digital paraphernalia. He was vague about the form the product will take — it isn't market-ready yet — except to note that it's an earphone that can measure the brain's electrical activity to sense "cognitive states," like whether you're hungry or concentrating.
We already compulsively check Instagram and Facebook and email, even though we're supposedly impeded by our fleshy fingers. I asked Dr. Alcaide: What will happen when we can compulsively check social media just by thinking?
Ever the optimist, he told me that brain-sensing technology could actually help with the digital incursion. The smart earbud could sense that you're working, for example, and block advertisements or phone calls. "What if your computer knew you were focusing?" he told me. "What if it actually removes bombardment from your life?"
Perhaps it's no surprise that Dr. Alcaide has enjoyed the HBO sci-fi show "Westworld," a universe where technologies that make communicating with computers more seamless are commonplace (though no one seems better off for it). Rafael Yuste, on the other hand, refuses to watch the show. He likens the idea to a scientist who studies Covid-19 watching a movie about pandemics. "It's the last thing I want to do," he says.
'A Human Rights Issue'
To grasp why Dr. Yuste frets so much about brain-reading technology, it helps to understand his research. He helped pioneer a technology that can read and write to the brain with unprecedented precision, and it doesn't require surgery. But it does require genetic engineering.
Dr. Yuste infects mice with a virus that inserts two genes into the animals' neurons. One prompts the cells to produce a protein that makes them sensitive to infrared light; the other makes the neurons emit light when they activate. Thereafter, when the neurons fire, Dr. Yuste can see them light up. And he can activate neurons in turn with an infrared laser. Dr. Yuste can thus read what's happening in the mouse brain and write to the mouse's brain with an accuracy impossible with other techniques.
And he can, it appears, make the mice "see" things that aren't there.
In one experiment, he trained mice to take a drink of sugar water after a series of bars appeared on a screen. He recorded which neurons in the visual cortex fired when the mice saw those bars. Then he activated those same neurons with the laser, but without showing them the actual bars. The mice had the same reaction: They took a drink.
He likens what he did to implanting a hallucination. "We were able to implant into these mice perceptions of things that they hadn't seen," he told me. "We manipulated the mouse like a puppet."
This method, called optogenetics, is a long way from being used in people. To begin with, we have thicker skulls and bigger brains, making it harder for infrared light to penetrate. And from a political and regulatory standpoint, the bar is high for genetically engineering human beings. But scientists are exploring workarounds — drugs and nanoparticles that make neurons receptive to infrared light, allowing precise activation of neurons without genetic engineering.
The lesson, in Dr. Yuste's view, is not that we'll soon have lasers mounted on our heads that play us "like pianos," but that brain-reading and maybe brain-writing technologies are fast approaching, and society isn't prepared for them.
"We think this is a human rights issue," he told me.
In a 2017 paper in the journal Nature, Dr. Yuste and 24 other signatories, including Dr. Gallant, called for the creation of a human rights declaration that explicitly addressed "neurorights" and what they see as the threats posed by brain-reading technology before it becomes ubiquitous. Information taken from people's brains should be protected like medical data, Dr. Yuste says, and not exploited for profit or worse. And just as people have the right not to self-incriminate with speech, we should have the right not to self-incriminate with information gleaned from our brains.
Dr. Yuste's activism was prompted in part, he told me, by the big companies suddenly interested in brain-machine research.
Say you're using your Google Cap. And like many products in the Google ecosystem, it collects information about you, which it uses to help advertisers target you with ads. But now, it's not harvesting your search results or your map location; it's harvesting your thoughts, your daydreams, your desires.
Who owns those data?
Or imagine that writing to the brain is possible. And there are lower-tier versions of brain-writing gizmos that, in exchange for their free use, occasionally "make suggestions" directly to your brain. How will you know if your impulses are your own, or if an algorithm has stimulated that sudden craving for Ben & Jerry's ice cream or Gucci handbags?
"People have been trying to manipulate each other since the beginning of time," Dr. Yuste told me. "But there's a line that you cross once the manipulation goes directly to the brain, because you will not be able to tell you are being manipulated."
When I asked Facebook about concerns around the ethics of big tech entering the brain-computer interface space, Mr. Chevillet, of Facebook Reality Labs, highlighted the transparency of its brain-reading project. "This is why we've talked openly about our B.C.I. research — so it can be discussed throughout the neuroethics community as we collectively explore what responsible innovation looks like in this field," he said in an email.
Ed Cutrell, a senior principal researcher at Microsoft, which also has a B.C.I. program, emphasized the importance of treating user data carefully. "There needs to be a clear sense of where that data goes," he told me. "As we are sensing more and more about people, to what extent is that information I'm collecting about you yours?"
Some find all this talk of ethics and rights, if not irrelevant, then at least premature.
Medical scientists working to help paralyzed patients, for example, are already governed by HIPAA laws, which protect patient privacy. Any new medical technology has to go through the Food and Drug Administration approval process, which includes ethical considerations.
(Ethical quandaries still arise, though, notes Dr. Kirsch. Let's say you want to implant a sensor array in a patient suffering from locked-in syndrome. How do you get consent to perform surgery that might change the person's life for the better from someone who can't communicate?)
Leigh Hochberg, a professor of engineering at Brown University and part of the BrainGate initiative, sees the companies now piling into the brain-machine space as a boon. The field needs these companies' dynamism — and their deep pockets, he told me. Discussions about ethics are important, "but those discussions should not at any point derail the imperative to provide restorative neurotechnologies to people who could benefit from them," he added.
Ethicists, Dr. Jepsen told me, "must also see this: The alternative would be deciding we aren't interested in a deeper understanding of how our minds work, curing mental disease, really understanding depression, peering inside people in comas or with Alzheimer's, and enhancing our abilities in finding new ways to communicate."
There's even arguably a national security imperative to plow forward. China has its own version of BrainGate. If American companies don't pioneer this technology, some think, Chinese companies will. "People have described this as a brain arms race," Dr. Yuste said.
Not even Dr. Gallant, who first succeeded in translating neural activity into a moving image of what another person was seeing — and who was both elated and horrified by the exercise — thinks the Luddite approach is an option. "The only way out of the technology-driven hole we're in is more technology and science," he told me. "That's just a cold fact of life."
Moises Velasquez-Manoff, the author of "An Epidemic of Absence: A New Way of Understanding Allergies and Autoimmune Diseases," is a contributing opinion writer.
The Times is committed to publishing a diversity of letters to the editor. We'd like to hear what you think about this or any of our articles. Here are some tips. And here's our email: letters@nytimes.com.
Source: https://www.nytimes.com/2020/08/28/opinion/sunday/brain-machine-artificial-intelligence.html