
Cyber Torture


Krishna Shenoy (BCI interfaces)

cybertortureinfo@proton.me
Tuesday, 13 May 2025 / Published in Media



🧠 Implantable Brain Interfaces:

The Rise of Neural Decoding & Remote-Control Threats

🎯 Analyzed from a T.I. Countermeasure Perspective
📡 Presented by: Dr. Krishna Shenoy, Stanford University, Neuralink Consultant


🧬 Executive Summary

DARPA-funded research at Stanford — in collaboration with Meta and Neuralink — is now capable of:

  • 📡 Reading your thoughts from hundreds of neurons
  • 🧠 Decoding intentions and handwriting directly from the motor cortex
  • 👁️ Bypassing spinal cord injuries by extracting commands from the brain
  • ⌨️ Typing, surfing the web, and building words just by “thinking”
  • 💻 Controlling computers with no movement or sound

These systems, while currently aimed at assisting paralyzed patients, pose serious threat vectors to the TI community if misused or militarized.


🔩 1. Tech Breakdown: What Is Being Implanted?

The system uses a silicon microelectrode array known as a Utah Array:

  • 📐 4mm x 4mm chip with 100 needle-like electrodes
  • 🧠 Implanted 1.5mm into the brain’s outer surface (motor cortex)
  • 🎧 Each electrode picks up 1–3 nearby neurons
  • 📈 Action potentials (spikes) are recorded at ~70μV for ~1ms
  • 🔌 Current systems use a wire protruding from the skull; future systems will be fully wireless with Bluetooth

⚠️ These arrays extract raw neural signals — giving live access to user intent, movement direction, and mental handwriting.
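To make "raw neural signals" concrete, the sketch below simulates one electrode channel and pulls out spike events by threshold crossing. Only the ~70 µV amplitude and ~1 ms spike width come from the text above; the 30 kHz sampling rate, noise level, and 5x-RMS threshold rule are common conventions assumed for illustration, not details of the clinical pipeline.

```python
import numpy as np

FS = 30_000                              # samples per second (assumed)
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 10.0, FS)        # 1 s of ~10 uV RMS background noise
trace = noise.copy()

spike = -70.0 * np.hanning(30)           # ~1 ms negative-going spike waveform
spike_onsets = [5_000, 12_000, 25_000]   # three known spike times (samples)
for t in spike_onsets:
    trace[t:t + 30] += spike

# Detect downward crossings of a threshold at 5x the noise RMS,
# then merge crossings closer together than one spike width.
threshold = -5.0 * noise.std()
crossings = np.where((trace[1:] < threshold) & (trace[:-1] >= threshold))[0]
events = crossings[np.insert(np.diff(crossings) > 30, 0, True)]
print("detected spike onsets (samples):", events)
```

The rate of these detected events per neuron, across all 100 electrodes, is the input the decoding algorithms work from.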


🧠 2. Functional Capabilities (Documented)

These are not “hypothetical” functions — they’ve been demonstrated in human trials:

✍️ Thought-to-Text Handwriting

  • A participant with a spinal cord injury types 90 characters per minute
  • Only by imagining handwriting movements — nothing is physically moved
  • Machine learning algorithms (RNNs) decode this in real-time
  • Error rate: 0.5%, comparable to commercial predictive text systems
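A quick sanity check of those figures, using the standard typing-benchmark convention of five characters per word (an assumption, not something stated in the source):

```python
# Back-of-envelope check of the handwriting-BCI figures quoted above.
chars_per_min = 90
char_error_rate = 0.005                    # 0.5% after language-model correction
words_per_min = chars_per_min / 5          # 5-chars-per-word convention
correct_chars = chars_per_min * (1 - char_error_rate)
print(words_per_min)                       # -> 18.0
print(correct_chars)
```

That 18 words per minute is consistent with the roughly 17 words per minute mentioned in the transcript below.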

🖱️ Cursor & Web Navigation

  • Another participant uses a thought-driven cursor to:
    • Surf the internet
    • Perform Google image searches
    • Select web results without touching the device
  • Click action is triggered by imagining hand squeezing

🦾 Restoring Arm Movement

  • Systems can translate brain signals into robotic arm movement, or stimulate paralyzed muscles using surface or implanted electrodes
  • Brown University and the University of Pittsburgh have enabled coffee cup retrieval with robotic arms
  • Case Western has restored arm function through direct muscle stimulation

🧩 3. Threat Matrix: Why This Matters to TIs

| Capability | Potential Abuse Scenario |
| --- | --- |
| 🧠 Thought decoding | Surveillance of private mental intent & inner speech |
| 🧲 Magnetic/electrical readouts | Remote eavesdropping via implants or covert sensors |
| 📡 Wireless interface (Bluetooth) | Wireless hijacking or AI feedback manipulation |
| ✍️ Handwriting reconstruction | Silent “mind typing” of what you’re thinking |
| 🤖 Robotic/muscle control | Forced movement or stimulation using AI override |

❗ This is the same interface Neuralink plans to commercialize.
With full-brain machine interfacing, thoughts become data — and data can be intercepted, predicted, or injected.


⚙️ 4. How the Brain is Decoded

  • Each neuron emits voltage spikes (action potentials)
  • Algorithms interpret patterns of firing as motion commands
  • Movement is decoded down to precise 3D vectors (mm/s)
  • Machine learning algorithms (Kalman filters, RNNs) translate these patterns into:
    • Cursor positions
    • Written letters
    • Robotic gestures
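A heavily simplified sketch of that pipeline, assuming a cosine-tuning-style linear model: firing rates map to a 2D velocity, smoothed by blending with the previous estimate (the intuition behind Kalman smoothing). The tuning matrix, noise level, and gain are invented for illustration; real decoders are fit to training data, and increasingly are RNNs, as noted above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 100

# Assume each neuron has a preferred 2D direction (cosine-tuning-style model).
prefs = rng.normal(size=(n_neurons, 2))
decoder = np.linalg.pinv(prefs)          # least-squares readout matrix

def decode_velocity(rates, prev_v, gain=0.7):
    """Map a firing-rate vector to 2D velocity; blend with the previous
    estimate, which is the intuition behind Kalman-style smoothing."""
    raw_v = decoder @ rates
    return gain * prev_v + (1 - gain) * raw_v

true_v = np.array([1.0, 0.0])            # intended rightward movement
v = np.zeros(2)
for _ in range(50):
    rates = prefs @ true_v + rng.normal(0, 0.1, n_neurons)  # noisy activity
    v = decode_velocity(rates, v)
print(np.round(v, 2))                    # settles near the intended [1, 0]
```

The blend step is why decoded cursors look smooth rather than jittery: each new neural sample only nudges the running estimate.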

🧠 High-Level Summary:

“The brain says: I want to write ‘HELLO WORLD’.
The system types: ‘H-E-L-L-O W-O-R-L-D’…
…without hands, voice, or eye movement.”


📡 5. Surveillance Risks & Covert Access

Although framed as assistive, this tech could easily be repurposed for mind surveillance or coercion:

  • “Wireless brain tap”: Once wireless, these devices could be exploited via spoofed signals, interference, or malicious firmware updates
  • Behavioral tagging: If brain signals can be decoded to intention, they can also be classified, recorded, or anticipated
  • Closed-loop override: Devices like Neuropace already deliver real-time stimulation to modify behavior (e.g., interrupt seizures). This could be reprogrammed to suppress dissent, modify emotional states, or reduce agency
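The closed-loop pattern referenced above (sense, decide, stimulate) reduces to a simple skeleton. This toy version is purely conceptual; the band-power readings and threshold are invented and do not reflect any vendor's actual detection logic.

```python
# Toy skeleton of a sense -> decide -> stimulate loop.
def closed_loop_step(band_power, threshold=5.0):
    """Decide whether to trigger stimulation for this sensing window."""
    return band_power > threshold

readings = [1.2, 2.0, 3.1, 6.4, 7.0, 2.5]   # hypothetical band power per window
actions = [closed_loop_step(p) for p in readings]
print(actions)  # [False, False, False, True, True, False]
```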

🔒 6. Defensive Considerations

👁️ Watch For:

  • EM emissions or RF anomalies around the skull
  • Unusual behavior alignment with specific thoughts
  • Implanted or forgotten surgeries (e.g. “sinus” surgery with unexplained devices)

🛠️ Experimental Defenses:

  • Faraday-capable headwear for short-range Bluetooth shielding
  • Oscilloscopic EM detection around ~2.4 GHz (Bluetooth band)
  • Electromagnetic noise injection for jamming readout frequencies
  • Behavioral deconditioning (change thinking patterns to interfere with ML predictions)

🧬 7. What About Fatigue, Consent, & Ethics?

The presenter acknowledges:

  • Little to no fatigue — once trained, users stop consciously controlling it (⚠️ vulnerability to subconscious influence)
  • Implants adapt with the brain: “plasticity” makes the interface more seamless over time
  • Consent: Current patients volunteer. But the same tech could be deployed without consent (e.g., military, prison, or covert ops)

🧠 “Eventually, you don’t even think about it — it just works.”
⚠️ That’s a direct quote that confirms subconscious conditioning is part of the system evolution.


🚨 8. Who’s Involved?

  • 🎓 Stanford University
  • 🧠 Neuralink (Elon Musk)
  • 🧠 Meta Reality Labs (Facebook)
  • 🧪 DARPA + NIH funding
  • 🧬 Research consultants: Control Labs, Paradromics
  • ⚙️ FDA-approved Utah Array technology

These are not fringe players — this is the front line of militarized neural technology.


🔚 9. Final Thoughts

This is not speculation.
This is operational.
And it’s moving fast.

The ability to monitor, predict, and intervene in thought processes already exists in clinical settings.
With AI and wireless transmission, that boundary collapses further.


🔗 References

  • 🧠 Neuralink Research Page
  • 📄 Nature: High-Performance Brain-to-Text Interfaces (May 2021)
  • 🧪 Stanford BCI Lab
  • 🧠 DARPA: Biological Technologies Office (BTO)

Full Transcript:

It’s a real pleasure to share with you some of the things that our laboratory has been thinking about in recent years to try to help people with a wide range of neurological diseases and disorders, in particular, paralysis. And so this afternoon, I’ll really represent the wonderful work that my students and postdocs, our medical residents and fellows, and all of us do here as part of the lab. So without further ado, let’s take the coming 25 or so minutes to talk about implantable medical systems that we’ve all become very familiar with over the past couple of decades. And this cover of Science back in 2002 really sort of illustrates that we’ve all become comfortable with various types of different implants. Cardiac pacemakers, they help our hearts beat regularly; artificial knees and hips. And the bionic human might be a little bit overstated, but it gives us this real sense of what the future could look like, where we can actually go in and replace or bypass a whole variety of injuries. They don’t really fit into the model of surgery alone or pharmaceuticals alone. It’s somehow new and different; it’s electrical, it’s mechanical, and so forth. And what’s really happened in the last 20 years, as exemplified in this cover article in The Economist, is that the brain is really thought to be this next frontier. And what this really reflects is something rather remarkable, and that is that we are starting to really think credibly in science and in medicine about interfacing with the brain. Treating the brain as a computational system, understanding that the brain communicates largely electrically. It also is very importantly dependent on chemical transmission, neurotransmission, and that’s why drugs have an effect. And what we can do is we can think of this three-pound [LAUGH] sort of, well, chunk of meat, if we’re speaking informally, as a remarkable place. But unfortunately, we can do very little to repair it currently. We’re trying to change that.
Now to start off, let me remind all of us of what we actually have also become a little bit familiar with. And that is that there are several medical systems that write information into the brain, okay? And this is meant to indicate that we can send signals into the brain to replace lost functions. For example, retinal implants: a company called Second Sight and several other startups are starting to do this, where you put a little camera, not unlike your cell phone camera, at the back of the eye on the so-called retina. And then what it does is it transduces light, and then creates a pattern of electrical stimulation that activates the optic nerve that goes to your brain, and it provides some semblance of vision. Now, you can’t read; it’s not that accurate or precise or high-resolution, but you can definitely tell where openings and doorways are, where objects are. And this is really remarkable, and this has come a long way, and these types of technologies have been around for some time. Cochlear implants are sort of the origin story, the granddaddy of them all, if you will. And this is a small coil of stimulating electrodes that pass electric current into the inner ear, called the cochlea. And it does so because it has a microphone and it picks up sound, [COUGH] and then again it produces patterns of electrical stimulation. Which have become so advanced that they actually allow people to acquire spoken language, even if they’re born congenitally deaf. And so I think that, if I were to ask for a show of hands, some large number of you actually will know, either by one or two steps of removal, somebody that has a cochlear implant. Now, rather newer, but still FDA-approved and going in all the time in the last 15 years or so. In fact, my partner in all the things I’ll be sharing with you, Professor Jaimie Henderson, who’s the head of functional neurosurgery here at Stanford, puts these in several times a week, every week, right here at Stanford.
And this is a so-called deep brain stimulator, which is an electrode a few inches long that’s neurosurgically implanted into a deep region of your brain called the globus pallidus. And through a wire that’s routed under the skin to a pacemaker-like unit, it trickles electrical current in, and that electrically stimulates cells and disrupts aberrant neural activity. And, actually, nobody [COUGH] really knows exactly how that works, but that is often the case in medicine. Nevertheless, what it does is, if you’re suffering from Parkinson’s tremor, when you turn this electrical system on, the tremor stops. And so people are walking around, hundreds of thousands of people worldwide, who constantly have electrical stimulation routing into their brain to suppress tremor. The final system is very new. NeuroPace is a company just down the road. Medtronic, of course, is a very large company based in Minneapolis. But NeuroPace attempts to sense the oncoming electrical storm associated with epilepsy and make a decision: yes, there is going to be an epileptic seizure. And then it sends electrical current out to a different set of electrodes to try to avert the electrical storm from ever happening in the first place. And this is the so-called closed loop: it senses and it disrupts, okay? And that leads naturally into the other type of system, so, very creatively, I flipped the arrow here, right? So now what we’re doing is, we’re reading out information from the brain, so when we are preparing to move, or a variety of other brain states, if we can measure from enough neurons at the same time, we can do some potentially interesting things. So again, this deep brain stimulator is now paired with a sensing system. So not only are you trickling electrical current in, but it has electrodes so it can measure neural activity. And then it can go into a feedback loop where it can say, I stimulated, I see how the electrical activity is changing.
I think we need a little bit more stimulation, and it can do that on its own, instead of needing to go into the clinic all the time to get tune-ups, okay? Another is this epilepsy implant, where now I’m just showing you the electrodes associated with sensing and stimulating, where this is really making a decision, okay? Now those are existing systems, or systems that are FDA-approved and on the way out, but let’s, on this afternoon, go on a little bit of a mental journey and imagine a future. Let’s imagine a future where it’s possible to record or measure electrically from thousands or even millions of neurons. Now there are about 10^11, or 100 billion, neurons in your brain. And so, currently, systems are measuring from a few hundred. I’m here saying maybe a few thousands or millions, but this is still a tiny fraction of the total number of neurons. So let’s keep a question in mind: is it really possible to do anything useful listening in on such a small number of neurons? The answer will be yes. We can imagine stimulating now thousands or millions of neurons also. And what if we could do that in fully implantable, ultra-low-power systems? By that, I simply mean no big bulky connectors or other things, but fully implanted in tiny little packages that need recharging, of course, but maybe that’s only every few days and through the skin, not unlike your electric toothbrush with inductive power. And finally, what fuels all of this is our neuroscientific understanding of what to measure and what to stimulate. And this is really the revolution that you’ve probably all heard of, which is neuroscience, the frontier of science trying to understand how the brain works. So let’s be even more concrete. And let me, in a couple of slides here, zoom in on the exact problem we are tackling, and then I’ll show you a couple of the systems that we’ve built to help people, and then we’ll move to question and answer here in a bit.
So millions of people really suffer from paralysis of a wide variety. But some types of paralysis are so severe, as this picture of Christopher Reeve reminds us. For those of you too young to remember, Christopher Reeve, before he passed, was Superman in the movies. And in the mid-90s, he was thrown from his horse and severed his spinal cord. And from that day forward, he was not able to walk or move his arms, and, less appreciated, he couldn’t speak clearly either, because of the need for ventilation. And despite being a person of considerable means, and actually starting the Christopher Reeve Paralysis Foundation, which has actually funded some of our work, he passed, unfortunately, many years ago, really being in no better state than shown here. But what this picture brings to mind is a fairly age-old idea, which is, [SOUND], I wonder if the intentions, the desires to move, are really still fairly normal in the brain, and it’s just that those signals can’t get down the spinal cord to stimulate the paralyzed muscles and move your arm. So can we eavesdrop in on that electrical activity? Well, it turns out yes, you can. So let me zoom in one level more on the same problem. And this is a side picture, of course, of your brain. And if you see a coffee cup that you wish to reach out and pick up, you, first of all, detect where the cup is by seeing it with your eye, and then you know how your eye is angled in your head, and the brain does all these remarkable calculations. It sees where that cup is, and then it starts formulating initial plans to move your arm there. And then those plans are elaborated into detailed signals that control the arm, which are sent down the spinal cord to activate all the different muscles in your arm, at different forces, in different ways, working through all your joints and ligaments, and cause the arm and hand to move over and pick up the coffee cup.
Now if there’s an injury here, which is what we’re considering, we could, for example, implant a tiny little electrode array, okay? And this is a little chip, literally a computer chip made of silicon, in a very particular way that I’ll expand on in a second. And from that, we can then measure the electrical activity of many neurons. I’m showing you about three, but I want you to start thinking about maybe 100 or 200 neurons. And these tiny little blips show you voltage versus time. And they’re about one millisecond long. And these are these little pulses of electricity that one neuron uses to communicate to the next. That is the language of the brain; that is believed by modern neuroscience to be how all cells communicate with each other at any distance; there are some other fine points of how they can communicate. And it’s really the rate at which these action potentials, or spikes, as they’re sometimes called, are emitted that encodes the information; more on that in a minute or two. So we can measure these signals, and then we can come up with detailed mathematical algorithms, decoding algorithms, that are run on low-power integrated circuits that could actually be implanted right there as well. Then those signals could come out and do one of three things. In this particular case, it could stimulate the paralyzed muscles of the arm. And that is shown here. Our colleagues at Case Western Reserve have done this. A couple of electrodes come out through wires; you compute how you want to stimulate all these different electrodes, surgically implanted in the muscle bodies over the arm, and the arm is able to pick up a coffee cup and bring it up for this participant to drink. Another example is to control a prosthetic arm or robotic arm. Now, this robotic arm doesn’t look too natural, okay? But you can imagine much more sophisticated ones that DARPA and NIH have made in recent years.
And, again, by thinking about moving the arm, those signals are read out and move that robotic arm to bring a coffee cup up. And this is wonderful work done by our colleagues at Brown University and the University of Pittsburgh. Now those are wonderful systems. They’re very important, and I’m not gonna talk about them. I’m going to, instead, talk with you about this third one, which is to interface with the computer. And what’s going on here is that we took a very deep bet about a decade ago. At the time it was seen as a little bit strange, perhaps, but welcome to research, and that was to say: we’ll bet that more and more of a person’s life is spent interacting with electronic devices. So if we pause for a second and ask ourselves honestly, be honest, how much time each day do you spend in front of your cell phone, your tablet, or your computer, right? And of course, if you’re a parent like I am, you are pretty aware that, at least with kids, that’s a very large number. But it’s also true for us as well; we spend an awful lot of our time doing this. So if we had an injury that prevented us from working or communicating, well, if we could communicate with computer devices of all flavors, that would really restore independence. And independence is the number one thing people with quadriplegia, so-called tetraplegia these days, wish for. So let’s build this system. I’ll show you a couple of videos of how they work, and then we’ll go to questions. So what is this sensor? Here is a sensor shown next to a fingertip. It’s about 4 millimeters by 4 millimeters. It is made out of silicon. It’s made on the same semiconductor manufacturing lines as many other computer chips. So particularly these days, when most of us are aware of the great chip shortage; fortunately, these aren’t being too badly affected, but they could be. And what’s protruding out here are 10-by-10, so 100, tiny little electrodes, each about 1.5 millimeters long, very thin.
And that is the distance it’s going to be inserted into the outer surface of your brain, called the cerebral cortex. Now this is done, of course, under full general anesthesia at the hands of extremely skilled neurosurgeons. Here’s how it works. So imagine that in the operating room, you expose some of the brain. Of course, the rest of the brain is intact in the skull and so forth. But you expose a little bit, you put the, [COUGH], the sensor, the so-called array, here. And then, hopefully you’re seeing this play reasonably smoothly, you can see the electrode sitting there. And then it is inserted into the brain, okay, at about a 1.5-millimeter depth. And I want to then zoom in on what the tip of each one of these electrodes is doing. And what it’s doing is that it’s coming to rest very close to individual cell bodies in the brain. What I’m showing you here is one neuron, one cell of the brain. And this is communicating with this cell over here by emitting those tiny little voltage deflections. Think maybe 70, 80 millivolts, okay, I’m sorry, microvolts, and about one millisecond in duration, okay, is what we typically measure here. Now, the tips of these electrodes will pick up on nearby neurons firing these action potentials. And across the whole electrode array, you could imagine how each one of these tips is picking up one or two or three of these neurons. So overall, we’re able to measure from a few hundred individual cells, and they’re each telling a different story about how you wanna move your arm. So for example, if you want to move your right arm to the right, then some of these cells are going to fire a lot of these action potentials, whereas other ones are not gonna fire very much at all, not emit those action potentials very much. But if you wanna move your right arm to the left, you may find the opposite pattern, and the same for up and down.
So there’s a unique pattern across populations of neurons that allows us to not just say the person wants to move their arm sorta to the right. It’s very precise and mathematical. It’s: this is how many millimeters per second, at exactly what angle in three-dimensional space, okay? And this is where tens of PhD theses from my lab have come in, to really understand the signal processing and the machine learning of how to interpret that set of signals. Then what we need to do is bring the signals from those electrodes out through a wire bundle to a connector, okay? And that connector protrudes slightly through the skin. And that’s the current system, but I don’t want you to think that that’s the future. The future is getting rid of all this, and just having a tiny little chip back here that sends out, for example, Bluetooth signals, just like so many of your other devices. And then just during the session where we’re recording, we can plug in an amplifier. But again, I want you to think about this all being gone. And this is where the signals get amplified and sent to the computer. Now, I’m gonna tell you two brief stories, very brief. The first is oriented on this words-per-minute scale, where you can see that people that need to use a so-called sip-and-puff interface, if they’re not able to move or talk. Or other natural types of systems: conversational speech is 150 words per minute, professional typing’s maybe 75, and so forth. And I’ll show you how it looks; very mediocre on this scale, but it’s way above 0. And we are creeping to the right here, where a two-dimensional cursor can be controlled with one’s mind, so point-and-click. And the second story will be this brand-new brain-to-text system I’ll describe. So I will not go through this in detail, but full credit to Vikash and to Chethan, who are now professors on their own.
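The "unique pattern across populations of neurons" can be made concrete with the classic population-vector idea (an illustrative textbook model, not this lab's actual decoder): each simulated neuron fires most for its preferred direction, and a rate-weighted average of the preferred directions recovers the intended movement angle. All numbers are synthetic.

```python
import math
import random

random.seed(0)
n = 200
preferred = [random.uniform(0, 2 * math.pi) for _ in range(n)]  # rad

intended = math.radians(30)                    # the movement the "user" intends
# Cosine tuning: baseline 10 Hz, modulation 8 Hz toward preferred direction.
rates = [10 + 8 * math.cos(intended - p) for p in preferred]

# Population vector: rate-weighted sum of preferred-direction unit vectors.
x = sum((r - 10) * math.cos(p) for r, p in zip(rates, preferred))
y = sum((r - 10) * math.sin(p) for r, p in zip(rates, preferred))
decoded = math.degrees(math.atan2(y, x))
print(round(decoded, 1))                       # close to the intended 30 degrees
```

With a few hundred neurons the weighted average washes out each cell's individual noise, which is why "listening in on such a small number of neurons" still yields precise direction estimates.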
Where we built a system that takes these neural signals out, does a variety of electrical engineering signal processing to get out low frequencies and high frequencies from these signals. And then this green box is the decoding element, the mathematical element. For those of you that may know Kalman filters, a lot of it’s built on that. More recently, this is all replaced with recurrent neural network / machine learning AI algorithms. And then that is driving this white cursor, and here’s how it works. So here is one of our participants, T6, who has, unfortunately, amyotrophic lateral sclerosis, ALS, or Lou Gehrig’s disease. She is unable to move her arms or her legs, and unable to speak clearly. And we have one of these electrode arrays implanted. You can see the amplifier here, which is only plugged in when we’re doing the experiments. She’s looking at a screen, and she’s going to be controlling this little white computer cursor. It’s going to slide however she wants in two dimensions over the letter she wants to type. And then what she’s going to do, instead of thinking about moving her right hand up or down or left or right to move the cursor, is to squeeze her left hand; and by squeezing her left hand, what she’s able to do is click, okay, to select the letter. Let me play the video, and if I get lucky it’ll be smooth; if not, I will hand-mimic it. So I see on my end, at least, unfortunately, it seems to be lurching, but it moves continuously, I assure you. She’s typing out on the screen the answer to the question: how did you encourage your sons to practice music? And she’s typing, “when they started there last,” okay? I apologize for this; I think it’s not moving smoothly. So what I’m going to do is, I’m gonna manually do this at about the right rate. Sorry about this. But imagine this cursor just moving slowly. And then when she wants to select a key, she squeezes her left hand, it turns blue, and then that letter is copied up above.
To slowly type out, just like you or I would on our cell phone keyboard, a software-defined keyboard. And it would get transferred up into a text box, for example. And we’re able to achieve about 24 correct characters per minute, okay? And at about five characters per word, you get about seven or eight words per minute. Now, that’s fine, but what if I just want to control a tablet? Okay, so sort of the culmination of this line of experiments is, we just bought a tablet, literally from Amazon, didn’t change a single thing, and actually didn’t even use accessibility mode, which makes things larger on the screen. And we asked her just to go ahead and control it, and this is where, for the first time, you get things like word completion and so forth. So she’s looking at this tablet here, and I’ve copied it up here, larger, so you can see it. And again, I apologize, the video seems to not be moving smoothly, but this is the cursor that she’s moving. And I’m just going to take over here and sort of hand-animate it. So the cursor is on the screen and she’s typing, and she’s typing out “orchid,” because she’s an avid gardener, okay? And clicks, and then what she sees pop up in a Google search are a few more selections. She goes ahead and makes a selection on orchid care. And then she’s confronted with what we’re often confronted with, which is sort of a wall of text that’s not too appealing, and note how small the letters are. So what she does is she says, well, I could select these, and I may, okay. But what I really wanna do is, I wanna go just look at some pictures. So she goes back, as you can see here. And she goes to image search, as you or I might. And then she just goes ahead and selects images to see a nice picture of orchids, and selects the one that she wants. Okay, and how do we know that she’s getting the one that she wants? Well, we can ask her, and she can let us know. Okay, that’s moving a computer cursor in 2D and selecting. That works very well.
You need those types of systems, but what if what you really want to do is type as quickly and as accurately as you can, just like you or I might on a keyboard? And we don’t really care how our finger moves to get there; we just want the text to appear quickly. So Frank Willett, wonderful postdoc and now research scientist in our group, had the really nice idea of saying: well, you know, what people can do is attempt to handwrite, because we all know how to handwrite. And what that’s really doing is, it’s a very insightful way to have very rapid motor commands generated, okay, and recorded in the so-called motor regions of the brain, motor cortex. And so, this thought bubble is imagining, or attempting, to write H E L L O space W, “Hello World,” okay, little computer science joke, okay. And we record from these arrays, and what gets typed on the keyboard is H E L L O space. Now note, the W isn’t yet complete, so we haven’t seen all the neural evidence of what character that is. So we should wait, and we do. And then once it’s completed, we can say, that was a W, and then type up a W. And the second very important thing is, we are not attempting to reconstruct penmanship. Not the exact way you write the H and the L; we just wanna know, is this an H, or is this a Q, or is this an A? We just wanna type it. Okay, so that’s a classification problem. It’s related, but maybe it’s a little bit easier. So again, we put these electrode arrays in the characteristic little so-called hand knob area, okay? Where those electrodes are sitting in the area of the brain responsible for the hand and the arm. This is a so-called homunculus, motor homunculus. Okay, and we start off by asking our participant to attempt to write the letter A: prepare to write the letter A, then please attempt to make it. Thank you. How about preparing to write the letter M? And we do this randomly through the alphabet, many, many, many times, to collect training data, meaning we know what
our participant was trying to write, and we have the neural signals. Now, can we reconstruct the imagined pen-tip trajectory from this neural activity? So we need an existence proof that we can actually get meaningful signal from the brain related to those hand movements. And the way we can do that is, we can take in the neural activity, which you can imagine to be a vector of the length of the number of neurons you’re recording from. And then there’s some matrix that you multiply it by, okay, and out pops the horizontal and vertical pen-tip velocity. So, probably half of you liked the little return trip to math, and for the other half of you, it may have sent chills about past experiences. That’s natural, that’s fine. And what we’re able to see is that when our participant is attempting to write the letter A, we can see that from the brain activity: we can reconstruct the letter A, and B, and C, and D. Now, wait a minute, Krishna, you were saying you were not trying to reconstruct the penmanship? That’s correct. We actually don’t care that this A is not excellent, or this B, or the C. The key point is that these are different; the fact that they’re different, and they’re different from each of the other letters, is the key, because that means we can tell them apart. Now, how well can we do that? And how well can we do that on a so-called single trial? So if I try to write one letter, not an A on average, but that A? Well, we can take the neural activity, which lives in a so-called high-dimensional space, and we can reduce that dimensionality down with so-called machine learning and dimensionality reduction. This is not unlike taking a three-dimensionally bent paperclip, shining light on it, and then looking at its shadow: the paperclip is 3D, but the shadow is 2D. That’s a dimensionality-reduction operation, okay? And what you see is that all the g’s and all the s’s are pretty easy to tell apart; you can draw a line through there.
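The paperclip-shadow analogy is a projection: high-dimensional neural activity is flattened onto a few directions where the letter classes separate. A minimal sketch with fabricated 50-dimensional "trials" for two letters, using PCA via SVD:

```python
import numpy as np

rng = np.random.default_rng(2)
dim, trials = 50, 40
center_g = rng.normal(0, 1, dim)          # mean activity for letter "g" trials
center_s = center_g + 3.0                 # letter "s" trials, offset per dim
X = np.vstack([center_g + rng.normal(0, 1, (trials, dim)),
               center_s + rng.normal(0, 1, (trials, dim))])

# PCA via SVD of the mean-centered data; keep the first component.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
shadow = Xc @ Vt[0]                       # each trial's 1-D "shadow"

# The two letters land on opposite sides of zero: a line separates them.
labels = shadow > 0
print(labels[:trials].sum(), labels[trials:].sum())
```

Because the class separation dominates the within-class noise, every "g" trial falls on one side of zero and every "s" trial on the other, which is exactly the "draw a line through there" step in the talk.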
Each of these dots in red is one of the many g's the person attempted to write over an hour, and the same for s and t and all the rest, except here you get a little bit of a mess. But it turns out that if you rotate the projection, they split apart, and they do so well enough that 94% of the time we can tell the correct character. That gave us good encouragement to go build the system, so I'll just show you how it works and we're done. The participant can attempt to write a letter, and then many letters one after another to form a word. We have the neural activity as a function of time, and we take a little window of data at a time that scrolls along as time progresses. We run it through our decode algorithm, which I won't go into in detail, but it's a so-called recurrent neural network, one of these modern machine-learning algorithms, and out pops the probability of each letter as a function of time. So here's the t and the n and the e, and if you drew a line and just picked whichever letter was above it, you would get what we call the raw output: "tne paper." Well, I don't know what "tne paper" is, but I do know what "the paper" is, so this is a mistake. Fair enough: we make a mistake about 5% of the time, and it makes sense that the decoder would confuse those characters, since they're pretty similar in physical space. You could just live with that, because we often text each other with mistakes and don't really notice unless the phone's autocorrect mangles things badly. But we can do much better, because of modern machine-learning language models, which capture all the transition probabilities of the language: if you have a t, it's, say, 90% likely that you'll go to an h compared to going to an n, and so forth. So you can intersect what is known about the statistics of the language with the evidence coming from the brain, in real time, to effectively correct these errors and reduce that error rate of 5% down to 0.5%.
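The paperclip-and-shadow analogy above (projecting high-dimensional neural activity down to a 2D view in which letter clusters separate) can also be sketched with simulated data. PCA via SVD stands in for the dimensionality-reduction methods mentioned in the talk, and the two Gaussian clusters are invented stand-ins for the per-letter activity patterns.

```python
import numpy as np

# Toy sketch of the "shadow" idea: trials of two letters are simulated as
# noisy points around two mean activity patterns in neuron space, then
# projected to 2D with PCA, where they form separable clusters.
rng = np.random.default_rng(2)
n_neurons, trials = 50, 40

mean_g = rng.normal(size=n_neurons)      # invented "g" activity pattern
mean_s = rng.normal(size=n_neurons)      # invented "s" activity pattern
g_trials = mean_g + 0.3 * rng.normal(size=(trials, n_neurons))
s_trials = mean_s + 0.3 * rng.normal(size=(trials, n_neurons))

X = np.vstack([g_trials, s_trials])
labels = np.array([0] * trials + [1] * trials)

# PCA via SVD: project onto the top two principal components
# (the "shadow" of the high-dimensional paperclip).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
shadow = Xc @ Vt[:2].T                   # each trial is now a 2D dot

# Classify single trials by nearest class mean in the 2D shadow.
m0 = shadow[labels == 0].mean(axis=0)
m1 = shadow[labels == 1].mean(axis=0)
pred = (np.linalg.norm(shadow - m1, axis=1)
        < np.linalg.norm(shadow - m0, axis=1)).astype(int)
accuracy = (pred == labels).mean()
print(f"single-trial accuracy: {accuracy:.2f}")
```

With clusters this cleanly separated the toy classifier is essentially perfect; the talk's 94% figure reflects real neural data, which is far noisier.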
So now you're typing at 99.5% accuracy, and doing so at 90 characters per minute. This is a video showing that. Here's a participant of ours who now has two electrode arrays implanted, so we see two cables; again, these are only attached when we are there with him running these experiments. We have a sentence prompted here, "felt like a soldier on a battlefield," and the ">" character is what we asked him to attempt to write instead of a space: it's far easier to detect a symbol than the lack of one, which is what a space is. Now you'll see letters pop up. I showed you a video of his hands, and his hands are absolutely still, because he's paralyzed due to a spinal cord injury; he's just thinking about handwriting, and this is what comes out. Again, it looks a little jerky here (it's doing a little better this time), but it's absolutely smooth in reality. You can see the letters just get typed out, and this builds on top of the previous system: the 2D cursor doubled the world-record performance, and this system doubles it again. In a field where every 5 or 10% improvement is a huge deal, these are pretty substantial leaps. As a final thing, I'll put the two side by side. You've seen the bottom movie, and I've just shown you the upper movie, but it turns out we ran the two-dimensional system that I first showed you in the same participant we also had the opportunity of working with on the handwriting. So instead of 8 words per minute, we're now doing about 17 words per minute. The video simply shows the two systems running at the same time; at least on my screen it looks like it's playing correctly, and hopefully it is on your monitors as well. The cursor is moving continuously and smoothly, selecting "you must be that change you wish to see," and the handwriting system is going twice as fast.
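The language-model correction described a moment ago (intersecting the statistics of English with the evidence coming from the brain) can be illustrated with a toy bigram example. Every probability below is invented; the real system uses a recurrent-network decoder and a full language model, not a four-letter alphabet.

```python
import numpy as np

# Toy sketch of language-model error correction: combine per-character
# probabilities "decoded from the brain" with bigram transition
# probabilities, so that an ambiguous character gets resolved by context.
chars = ["t", "n", "h", "e"]

# "Neural evidence" for two consecutive characters. The second is an
# n-vs-h toss-up (they are similar in pen-trajectory space), with 'n'
# slightly favored, mimicking the "tne paper" mistake in the talk.
p_neural = np.array([
    [0.90, 0.03, 0.03, 0.04],   # first char: clearly 't'
    [0.05, 0.48, 0.44, 0.03],   # second char: 'n' vs 'h' toss-up
])

# Invented bigram model: rows = previous char, columns = next char.
# After 't', an 'h' is far more likely than an 'n' (think "the").
p_lm = np.array([
    [0.05, 0.05, 0.80, 0.10],   # after 't'
    [0.30, 0.05, 0.05, 0.60],   # after 'n'
    [0.10, 0.05, 0.05, 0.80],   # after 'h'
    [0.40, 0.20, 0.10, 0.30],   # after 'e'
])

# Greedy decode without the language model picks 't' then 'n'.
raw = [chars[int(np.argmax(p))] for p in p_neural]

# Combine evidence: P(char2 | char1) * P(char2 | brain), then pick the max.
first = int(np.argmax(p_neural[0]))
combined = p_lm[first] * p_neural[1]
corrected = [chars[first], chars[int(np.argmax(combined))]]

print("raw:", "".join(raw))              # "tn"
print("corrected:", "".join(corrected))  # "th"
```

A full decoder would apply this jointly over the whole sequence (for example with a beam or Viterbi search) rather than greedily, but the principle is the same: language statistics rescue the characters the neural evidence leaves ambiguous.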
So, in conclusion: these so-called intracortical brain-computer interfaces (intracortical because we put the electrode arrays into the outer surface of the brain, the cerebral cortex) that decode brain activity are advancing rapidly. This is enabled by new neural interfaces, low-power electronics, and, very importantly, neuroscience discoveries. These new iBCIs that guide two-dimensional cursors more than double previous communication-rate methods, enabled by new neuroscience knowledge that we brought to bear and by new machine-learning decode algorithms. Note that in the text box Tanya is going to be putting links in case you want to look at the original paper. The same goes for this new handwriting iBCI, which again doubles performance; it too is enabled by new neuroscience and machine learning, and there's also a link to a recent paper, where you can recognize the letters you've now seen on the cover of Nature from back in May. Finally, the most important slide I ever show, which unfortunately I don't have time to go through in more detail; that's why I put some pictures in. I want to thank the absolutely brilliant students and postdocs, the spectacular staff, collaborators, former students, and lab alumni, all of our funding agencies, and my endowed chair from Hong Seh and Vivian W. M. Lim. Also, for full disclosure, I'm on a number of scientific advisory boards. The two most relevant are that I'm a consultant with CTRL-labs, which is part of Facebook Reality Labs, now Meta, and that I co-founded and am a consultant with Neuralink, Elon Musk's company in this space. Thank you very much.
>> Thank you, Krishna, for such an amazing presentation. We have a lot of questions from alumni, so I'm going to jump right in. The first question is: does deep brain stimulation also work for depression?
>> That is a great question.
That has been investigated specifically for that indication, depression. Results have been a little bit mixed, but the informal belief in the field is that yes, it can work, and there are new clinical trials starting up to go after that. There's also really exciting work with transcranial magnetic stimulation right here at Stanford, where a pulse of electricity through a coil sends a pulse of magnetic energy that causes activity in the brain; that can also be used very helpfully for refractory depression. It's not electroshock therapy; it's sort of in between.
>> Thank you. We have a great question from Jeff Butler: "As an MBA alum living with a spinal cord injury, this technology is really exciting. What is the current technological blocker to getting BMIs into people? Is a Utah array sufficient to get this done, or are companies like Neuralink going to hold the key to commercialization?"
>> Yeah, that's a great question, and I definitely feel for you. We need to get this technology out quickly. I don't say that because it's our work; I say it because that's really the ethos at Stanford: basic science, engineering, proof of concept, terrific, but let's also get it to people. It absolutely, as you suggest, has to involve companies. The challenge is that these are medical-device companies with enormous capital expenses; think computer chips, not software, meaning large investments. And [COUGH] I personally believe the technology is sufficient; maybe not the Utah array in its exact current form, but a scaled-up version of it, which is what Neuralink is working on, and there are a few companies working on this. I think that technology, with what we know, can absolutely help people today, if all of it were built and shown to be functionally useful and safe. Of course, our job is to show that it can be done, to show the proof of concept. And then finally comes FDA approval and the whole enterprise of taking it out there.
This is my personal motivation for being involved on a weekly basis as a consultant, an advisor, and a co-founder. And I think that Neuralink's founding, Elon Musk's entry (a controversial individual, to be sure), jarred loose probably around $1 billion of VC funding into the field. So stay tuned; I hope it goes much faster than I would have guessed even five years ago.
>> Here's a great question from Karen Monday: does it matter where precisely you embed the electrodes in the cerebral cortex?
>> Yeah, that's a great question. There's a fundamental principle of so-called functional organization: different parts of the brain are largely responsible for different functions. At the rear of your brain, the occipital lobe is primarily vision, and just anterior, forward from roughly your ears, is where motor cortex is; that's where we implant. Even further forward is the so-called frontal lobe, which as you know handles executive function and decision making. So yes, it does matter where you put the electrodes, because you'll get stronger signals related to what you care about. But it's also an excellent question because the brain is so-called plastic, meaning it can change its functionality and adapt to what it's controlling. This is a huge thing in cochlear implants too: the brain learns how to interpret those signals, and the reverse happens on the output side. So it's sort of like you have to be in the right state in the country, [LAUGH] but maybe you don't have to be in exactly the right city.
>> We have another question: can you give references discussing how you interpret the sensed electric fields from hundreds or thousands of neurons, especially how to separate overlapping signals from nearby neurons?
>> Yeah, that's a great question; I'd love to have you in the lab. [LAUGH] This is the so-called blind source separation problem. One electrode can hear, for example, two neurons, one closer and one farther away.
The one that's closer, because it emits roughly the same size action potential as the one that's farther away, will be measured as larger by the electrode, and that gives you one way to tell the two neurons apart. The other case is when both fire at the same time and their signals add together; the tissue is an extremely linear medium, so they just add. Then you can invert the mixture, because you've learned the shapes of the two constituent waveforms, and you can probabilistically say that what you're seeing is a combination of the two. So there is a way to do this; it's called spike sorting, and it's an entire cottage industry. It works okay with one electrode, but the real answer to your question is that by using higher-density electrodes you can have multiple electrodes picking up the same neuron. That's like having a quadraphonic recording of one speaker and then another speaker in a room, as at a cocktail party, and then you can do a really good job.
>> What is the prognosis for non-invasive, non-implanted EEG-based BCIs to help people with ALS communicate?
>> Yeah, that's a great question. First of all, there's a range of technologies, and in this short talk I didn't have time to go into them. If you put electrodes on your scalp, that is called EEG; this is done all the time in neurology. If you may be having seizures or other issues, that is what will be done, and broad brain waves can be interpreted and used for diagnosis. Now, if you put those same electrodes below the skull but on top of the outer membrane covering your brain, called the dura, that's called ECoG, electrocorticography; you can place the electrodes above or below that dura, and below is called subdural. And then finally there's what I was showing you: little electrodes that go right next to the source. Now, I'm personally a huge believer in all of them. Many in the field are not; they sort of love their own technology. That is not my view.
This is just the technology I work with, because I actually want to know how individual neurons work; I'm also a neuroscientist. The trade-off is that if you measure from farther away, outside the skull, you currently cannot tell individual neurons from each other, and people have really tried quite hard with truly sophisticated signal processing, so it's not as if the field hasn't looked at this. What you get is an average across many, many neurons, in fact thousands of neurons. You can still measure some information and use it to control a computer cursor, but to give you some rough numbers, you're able to do only about one-fifth as well as if you are inside. So, different people will want different treatments. Some people never want surgery at all, for anything; we respect that. You need to have treatment options for people whose preference that is. A patient's wishes are always correct, right? Whatever a patient wishes, that is correct. But some people want the implant. It's the same, by the way, with Parkinson's disease: you'll typically go on so-called L-dopa, a dopamine precursor, which will abate the tremor. Often, however, that stops being effective after a certain number of years, and some people may then wish to have a deep brain stimulator while some may not. That's absolutely fine, but we want to have an array of options.
>> A lot of great questions. The next question is from Eric Sableman: "The pins of the Utah array are rigid, and the brain is flexible and moves with every cardiac pulse. Published studies show damaged neurons and degraded insulation over time. How do you anticipate achieving longer-lasting electrodes? Flexible electrodes?"
>> That's a great question, and a very sophisticated one, of course, [LAUGH] not surprising with this audience. So you're right: the Young's modulus, the stiffness, of silicon is enormous, and it's like taking a pin and pushing it into a bowl of Jello, right?
Your brain is the Jello. If you take your bowl of Jello and shake it, you'll get little tremors; same thing with the brain. This is why concussions exist: boom, you hit your head, your brain sloshes around, and you get a concussion. So there is a so-called mechanical impedance mismatch between that stiff electrode, like the Utah arrays I was showing you, and the compliant, elastic (as it's called) brain. Now, there are newer technologies, but the reason we use the Utah arrays in the videos is that they are the only ones currently FDA-approved, and that's critical. We would love to simply buy [LAUGH] other electrodes; we don't make electrodes ourselves. Other technologies, for example Neuralink's or Paradromics', are coming in with different types of electrodes. For example, you could take a needle, a very, very fine needle (don't worry [LAUGH]), push it into the brain, and then pull the needle back. And if you had a piece of thread through that needle, that's called a sewing machine: when you pull the needle back, it leaves the thread there. Well, what if that thread is made of polymers with tiny electrodes on it? Then what's left behind is totally flexible, moves absolutely stably with the brain, and does not have the so-called micromotion that triggers immunological responses, constantly signaling to the brain that a foreign body is there; what the body does to foreign bodies is wall them off with so-called glial cells. So: flexible electrodes, and electrodes coated with different types of immunosuppressants, not unlike dexamethasone-coated cardiac stents, so you can manage the response chemically as well.
>> Can this system work with debilitating migraines?
>> First of all, debilitating migraines are, well, debilitating. This is a very, very serious condition.
And unfortunately, I'm not aware of any work on these types of systems for that indication. So I apologize, I don't know; I'm not aware of anything, but that does not mean it doesn't exist.
>> We have two related questions. One is from Janet: "Do the patients using these brain-to-text systems report more fatigue than we would without injury?" And another from Esther: "When a patient writes letters with these methods, do their eyes follow the motion of the words they're trying to trace, and can that be tracked in the neural activity?"
>> Yeah, these are great questions. Fatigue is a very important thing, and the short answer is: not much at all. Why is that? Because we're putting these electrodes in the areas of the brain that people use anyway to control that function. If you want to move a computer cursor on your screen, what do you do? You grab your mouse and you move your hand left, right, up, down. That's what we're asking them to attempt to do, so the fatigue is not high; it doesn't take a lot of cognitive effort. In fact, if you ask people on the first day, "Hey, when you were using this, what were you thinking?", they say, "Well, I was thinking about trying to move my arm and then seeing the cursor." Ask them after a week or two and they say, "I don't know, I'm just moving the cursor." Just like if I asked you what you're thinking about when you use your mouse, you'd probably say, "Not too much; I'm paying more attention to the cursor and getting on with my work." I'm sorry, I forgot the second question, Tanya.
>> The second question: when a patient writes letters with these methods, do their eyes follow the motions they're trying to trace?
>> Right. Natural eye-hand coordination is that if I want to reach out and pick up this cup, but that's going to be hard with my Zoom background.
[LAUGH] But if I want to reach out and pick up something over here on the left, my eyes will dart out to the target first and then my hand will follow. That's natural behavior; you're doing it all the time, and of course you don't think about it. So that is what people were doing here. Now, very importantly, in control experiments I didn't share with you, we can have people hold their eyes fixed: we put up a little cross and say "fixate that," we infrared-track their eyes, and they can still use these systems just as well without moving their eyes. So it's not as though we're picking up on eye movements as a contaminant, and that absolutely would not work for the handwriting anyway. The other thing is that this is very closely related to a very natural question, which is about eye-tracking systems. I could put up an image of an alphabet on the screen, like a keyboard, and if you said, "Krishna, please write your name," I could look at the K and then the R and then the I, and an eye tracker in glasses could follow that, and do really well, because our eyes are extremely precise. So why not just use an eye tracker? Well, there are a couple of reasons. One is that people are often given eye trackers, but it requires somebody to put the eye tracker on them, and it ties up the eyes, which are an extremely expressive element of human nature. If you ask neurologists, they'll say: yeah, Medicare reimburses eye trackers, they're like 20 grand, we often give them to people, and in two weeks they're in the closet. People don't use them. And for higher-dimensional control, like a robotic arm or other things like that, the two-dimensional eye movements don't map to that problem very well.
>> A lot of amazing questions. I just have one last question: where do you see this emerging technology helping people in the future?
>> Yeah, so I'm a very conservative [LAUGH] individual, but you do have to recognize that what we're talking about here is a very general proposition of interacting with, interfacing with, the brain, any part of the brain. There is not viewed to be any fundamental scientific or technological barrier, right? No physics has to be violated to do this; it will take time to engineer the materials. We could have a whole conversation on nanomaterials, because we're still looking at bulk things, but why not think of things on the atomic scale, and so forth. So for any part of our mental lives, any part of psychiatric or neurological care, memory, Alzheimer's, there is a growing consensus that these types of interface systems (electrical currently, but they could be optical or chemical) really have a very long, bright future ahead of them. Neuroethics is crucial, so I spend a lot of my time on a variety of national committees on this topic. It is critical, and recent months and years with a variety of industries, even social media, have taught us to be very cautious about just going out and building stuff. And this couples back into the way any sufficiently innovative technology is dual-use: there's always a benefit, and there's always some not-great use. CRISPR-Cas9 is revolutionary gene editing that could potentially be used to solve all sorts of human diseases, with companies all over the place; Jennifer Doudna of Berkeley won the Nobel Prize last year. But you could also have designer babies, and that happened two years ago, nominally to avert a disease at birth, in twins. And of course nuclear power is the granddaddy example of them all. So we have to proceed (you can never put the genie back in the bottle), but we have to do it cautiously, and it's critical that academics, policymakers, everybody, work together efficiently to make sure we stay on top of the ethics of this.
>> Thank you, Krishna, for presenting this groundbreaking research and giving hope to those people who are afflicted with paralysis or severe disabilities.
>> Thank you.
