Open BCI Subvocal

cybertortureinfo@proton.me
Tuesday, 13 May 2025 / Published in Media


🤖🔇 Synthetic Telepathy is Real — Meet the AI That Hears Your Thoughts

🧠 Welcome to the future: where artificial intelligence can read your internal monologue, track your silent words, and potentially interface with your mind—without you ever opening your mouth.

📍 In this post, we break down the synthetic telepathy project developed by NeuroTech SC, a student-run club that’s built a subvocal recognition headset. Their goal? Enabling human-AI communication without speaking a word — just through your inner voice.

👇 Here’s why you need to pay attention.


🧬 What Is Subvocal Recognition?

Subvocalization is the act of silently mouthing or thinking words. Even when you don’t speak out loud, your brain still sends electrical signals to the muscles in your face, jaw, and throat.

NeuroTech SC uses EMG (electromyography) sensors to detect these signals through seven measurement electrodes placed on the face, and then uses machine learning to figure out what you’re “saying” internally (a minimal acquisition sketch follows the list below).

📊 They’ve already built:

  • A 3D-printed headset 🧢
  • Gold-plated facial electrodes ⚡
  • Real-time EMG signal processing software 🧾
  • An AI that decodes your yes/no answers based on these facial signals 🤖
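
For readers who want to see roughly what the acquisition side looks like in code, here is a minimal sketch assuming the OpenBCI Cyton board and the open-source BrainFlow Python API. The serial port, recording length, and channel handling are placeholders, not the team’s actual script:

```python
import time
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds

# Assumed setup: OpenBCI Cyton over its USB dongle; adjust the serial port for your system.
params = BrainFlowInputParams()
params.serial_port = "/dev/ttyUSB0"          # placeholder, e.g. "COM3" on Windows
board_id = BoardIds.CYTON_BOARD.value

board = BoardShim(board_id, params)
board.prepare_session()
board.start_stream()
time.sleep(2)                                 # capture roughly one 2-second window
data = board.get_board_data()                 # rows = all board channels, columns = samples

# For the Cyton these are the 8 ExG channels; the project uses 7 of them as facial EMG.
emg_channels = BoardShim.get_eeg_channels(board_id)
fs = BoardShim.get_sampling_rate(board_id)    # 250 Hz for the Cyton
emg = data[emg_channels, :]

board.stop_stream()
board.release_session()
print(emg.shape, fs)
```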

🛠️ How It Works

  • 🧢 Headset: Built with Fusion 360, it includes a battery pack, copper wire skeleton, and electrode holders
  • 🧪 Electrodes: 7 measurement electrodes + 1 bias + 1 reference, placed on your face with conductive paste
  • 💻 Hardware Interface: Connects to a computer via USB dongle using the OpenBCI platform
  • 🧠 Signal Processing: Butterworth filtering, standardization, and slicing into 2-second EMG chunks (sketched below)
  • 🧠 AI Model: Based on EEGNet, a lightweight convolutional neural network with 90%+ accuracy in binary classification
  • 🖥️ Frontend Interface: A React web app powered by Flask for easy control and feedback loops
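
The signal-processing step above maps onto a short, conventional pipeline. Below is a minimal sketch of what “Butterworth filtering, standardization, and slicing into 2-second chunks” could look like with SciPy and NumPy. The 20–120 Hz passband is an assumption (the post does not state the cutoffs), and `preprocess` is a hypothetical helper, not the team’s code:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250               # Cyton sampling rate (Hz)
WINDOW_S = 2           # 2-second chunks, as described above
BAND = (20.0, 120.0)   # assumed EMG passband; the post does not state the cutoffs

def preprocess(raw, fs=FS):
    """raw: (n_channels, n_samples) EMG. Returns (n_windows, n_channels, fs*WINDOW_S)."""
    # 4th-order Butterworth band-pass, applied zero-phase
    sos = butter(4, BAND, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, raw, axis=1)

    # slice into non-overlapping 2-second chunks
    win = fs * WINDOW_S
    n_windows = filtered.shape[1] // win
    chunks = filtered[:, : n_windows * win].reshape(filtered.shape[0], n_windows, win)
    chunks = np.transpose(chunks, (1, 0, 2))

    # standardize each chunk per channel (zero mean, unit variance)
    mean = chunks.mean(axis=-1, keepdims=True)
    std = chunks.std(axis=-1, keepdims=True) + 1e-8
    return (chunks - mean) / std
```

Feeding the output of the acquisition sketch above through `preprocess(emg)` would yield the per-window arrays the classifier consumes.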

🎯 Real Applications (Already Functional)

🧑‍🦽 Silent Communication for Disabled Users
People with locked-in syndrome or vocal paralysis could use this to “speak” without making a sound.

🃏 Discreet Poker or Secret Games
Imagine calling a bluff silently — no gestures, no lip reading.

🔎 Lie Detection
Subvocal signals could form the basis of a more accurate, less invasive alternative to the polygraph.

🔇 AI Interfaces Without Voice
Use Siri, Alexa, or ChatGPT by thinking your query — without being overheard in public.


⚠️ Why This Matters for Targeted Individuals (TIs)

Let’s be very clear:
If this tech can read your inner voice, so can others.
This project shows you don’t need implanted electrodes or sci-fi tech to decode thoughts.

All it takes is:

  • A few facial sensors
  • Open-source software
  • Machine learning
  • And your silent internal voice

🧠 This validates what many TIs have been claiming:

“They’re reading my thoughts.”
“They respond to what I think.”
“I can’t say anything in my head without triggering a reaction.”

📡 This project shows it is scientifically and technically possible to decode thoughts via non-invasive muscle signals, and that these can be used to communicate silently with machines.


📈 Research Summary

  • 🎤 EMG Recording: 50+ minutes of test data collected using gold facial electrodes
  • 🧹 Data Processing: Filtering, slicing, and standardization (accuracy improved from ~70% to 80–85%)
  • 🧠 AI Model: 90% train/test accuracy on yes/no classification using EEGNet (see the model sketch below)
  • 🧪 Live Testing: Real-time questions like “Do you have a twin?” answered silently
  • 🧑‍💻 Frontend UI: React + Flask app with question logging, live prediction, and accuracy feedback
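
The model line describes an EEGNet-style network: a temporal convolution, a depthwise (spatial) convolution, and a separable convolution feeding a linear layer and a sigmoid. The team’s exact filter counts, kernel sizes, and dropout are not published in this post, so the values below are placeholders; this is a minimal PyTorch sketch of that architecture for 7-channel, 2-second (500-sample) windows, not their released code:

```python
import torch
import torch.nn as nn

class EEGNetBinary(nn.Module):
    """EEGNet-style CNN for yes/no classification of 2-second, 7-channel EMG windows (sketch)."""
    def __init__(self, n_channels=7, n_samples=500, f1=8, d=2, f2=16):
        super().__init__()
        self.temporal = nn.Sequential(                        # temporal convolution
            nn.Conv2d(1, f1, (1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(f1),
        )
        self.depthwise = nn.Sequential(                       # depthwise spatial convolution
            nn.Conv2d(f1, f1 * d, (n_channels, 1), groups=f1, bias=False),
            nn.BatchNorm2d(f1 * d),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.5),
        )
        self.separable = nn.Sequential(                       # separable convolution
            nn.Conv2d(f1 * d, f1 * d, (1, 16), padding=(0, 8), groups=f1 * d, bias=False),
            nn.Conv2d(f1 * d, f2, (1, 1), bias=False),
            nn.BatchNorm2d(f2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.5),
        )
        with torch.no_grad():                                  # infer the flattened feature size
            n_flat = self._features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_flat, 1)                 # single logit -> sigmoid

    def _features(self, x):
        return self.separable(self.depthwise(self.temporal(x)))

    def forward(self, x):                                      # x: (batch, 1, channels, samples)
        feats = self._features(x).flatten(start_dim=1)
        return torch.sigmoid(self.classifier(feats))
```

Training would be ordinary binary cross-entropy on the standardized 2-second chunks, e.g. `loss = nn.BCELoss()(model(x), y)` with `y` set to 0 or 1 for no/yes.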

🔭 Future Goals (And What Comes Next)

🌀 Continuous speech recognition — not just yes/no
🧬 Phonetic-level decoding — actual full words and phrases
🎮 More applications — gaming, assistive tech, silent texting
🧪 Rolling windows — capturing spontaneous thoughts in motion (see the sketch after this list)
🔓 Open-source community — public replication and enhancements
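
The rolling-window goal is easy to picture: instead of waiting for a button press and a fixed 2-second recording, slide a 2-second window along the live stream and re-run the classifier at each step. Here is a minimal sketch building on the hypothetical `EEGNetBinary` model above; the 0.5-second hop size is an assumption:

```python
import numpy as np
import torch

FS, WIN_S, HOP_S = 250, 2, 0.5        # 2-second window, slid every 0.5 s (hop is an assumption)

def rolling_predict(model, stream, fs=FS):
    """stream: (n_channels, n_samples) filtered, standardized EMG. Yields (time_s, p_yes)."""
    win, hop = int(fs * WIN_S), int(fs * HOP_S)
    model.eval()
    with torch.no_grad():
        for start in range(0, stream.shape[1] - win + 1, hop):
            chunk = stream[:, start:start + win]                  # (channels, samples)
            x = torch.from_numpy(chunk).float()[None, None]       # (1, 1, channels, samples)
            p_yes = model(x).item()                               # sigmoid output in [0, 1]
            yield start / fs, p_yes
```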


🧠 Why the TI Community Should Be Concerned

This technology doesn’t require surgery or implants. It works from the surface of your skin and can already:

  • Decode binary answers
  • Track intentional thoughts
  • Interface directly with AI in real-time

So imagine this:

If you can train a model to decode “Yes” and “No”…

💬 What’s stopping someone from training it to decode what you’re trying NOT to say?

Now imagine black-budget groups or hostile actors with access to:

  • Higher-resolution electrodes
  • Longer training data
  • Non-consensual testing
  • RF-powered remote EMG sensors (yes, they exist)

This isn’t conspiracy — it’s plausible, testable, and already prototyped.


🔓 Our Takeaway

Subvocal decoding has crossed out of science fiction and into student clubs. Here’s what this means:

✅ Thought-reading is possible (from facial EMG)
✅ It’s trainable in weeks with off-the-shelf hardware
✅ It will be miniaturized
✅ It can be repurposed for surveillance, coercion, or control
✅ Or — with ethics — it can help unlock locked-in minds and enable silent human-AI collaboration


🚨 What You Can Do

🛑 Stop thinking this is “theory.”
📥 Download and review the open-source code when available.
🔐 Build your own jammer or interference device to block facial EMG.
👁️‍🗨️ Investigate your own subvocal activity under calm vs. stress states.
🗣️ Speak up about the ethics of brain-computer interfaces before it’s too late.


📚 Want to Learn More?

  • EEGNet paper (source of their AI model)
  • OpenBCI EMG Hardware Toolkit
  • Fusion 360 (3D design software used)

🧠💬💻 The mind is no longer private.

TI community — we warned the world this was coming. Now it’s here.

Full Transcript:

Hey guys, we’re NeuroTech Santa Cruz, and we’re building a synthetic telepathy project using subvocal recognition. AI voice assistants have developed really quickly in the past few years and they have great capabilities, but they’re still unable to provide a really effective user experience. This is partly due to social restrictions on speaking out in public and vocal misunderstandings that come up from time to time. So we’re going about this by using subvocal recognition. Subvocalization is a process by which your brain causes really subtle activations in your vocal muscles when you’re thinking, and we can use EMG to measure those muscle activations and deep learning models to recognize the word that you’re reading or thinking. Using this kind of subvocal recognition paradigm, we can go about enabling synthetic telepathy and accelerating the merger between AI and humans, as Elon Musk would like to say.

Now we have a quick demo of this from Phoebe. Our headset does look a bit cyberpunk, but these components are all necessary for stable, reproducible recordings, and here’s some of our hardware that we’ll discuss later on. After starting the web app, Phoebe gets prompted with some questions that she can subvocalize in response to. These are just yes-or-no questions, because right now our model is trained for that binary classification, but our app also allows her to tell us whether that was an accurate subvocalization prediction or not. So she doesn’t have a twin, and she finds that that was indeed an accurate subvocalization recognition. Each time she has to click that start-time button, because we want to get a more accurate time window frame, and after she clicks that, the hardware records the data and then passes it through the data processing and machine learning pipelines. Awesome, so now let’s take a look at the details of the hardware team’s technology.

Hi, I’m Sabrina from the hardware team. The hardware team is in charge of two main processes, which are taking recordings and 3D headset design. The materials we used were the Cyton board and the USB dongle, which are used to connect to the computer; nine gold-plated electrodes, which are divided into seven measurement electrodes, one bias, and one reference electrode; and Ten20 conductive paste, which is used to attach the electrodes to the face. The samples were taken with the OpenBCI GUI and BrainFlow. The 3D headset design was modeled using Fusion 360. It features a housing for the electrodes, battery pack, and board. We also added a copper wire skeleton to the headset in order to provide structure for the electrode wires. Velcro was used to secure the battery and board enclosures to the bicep, and the headset to the head. We also used glue to attach the copper wires and loops to the headset. And here are the results: we have 50 minutes of recorded data in total, and we have a 3D-printed headset with EMG electrodes. Cool, so now let’s take a look at the data team, which actually processed the data that hardware recorded.

Hi, I’m Phil from the data team. Initially, the data pipeline was set up after understanding what kind of data we would be getting from the hardware and how we could maximize the accuracy of the ML model. The pipeline framework we built was as follows: first, when we receive the two-minute recording from the hardware, we splice it up into 60 chunks of two seconds; each individual chunk goes through a fourth-order Butterworth filter, we apply independent component analysis via the MNE tools API, and at the very end we scale it down using normalization. Then we export all these chunks into a .pkl file and send it to the ML team for them to test out. When we were actually testing what impact our techniques were making on the data, we realized that ICA was making minimal impact on the entire dataset, so we removed it, and afterwards we realized that standardization, a different scaling technique, yielded more accuracy. You can see that here on the two graphs, with the raw data on the left and the filtered data on the right. Like I said before, after replacing normalization with standardization our accuracy shot up from 70% to 80–85%, and the work on increasing accuracy continues. Cool, so now let’s take a look at the machine learning team, from Jessalyn.

Hi, I’m Jessalyn from the machine learning team. Our focus was to build a model based on the data that the data team passed to us. Our model is a classification model based on the EEGNet paper, and the architecture is as follows: it has three convolution layers, a temporal convolution, a depthwise convolution, and a separable convolution, and then it goes into a linear layer and a sigmoid function. The reason we used the EEGNet paper is that it has fewer parameters to fit, and the paper showed that it performs better than generalized models and just as well as specialized convolutional models. The libraries we used were PyTorch, scikit-learn, and NumPy, and although the model architecture was heavily based on the EEGNet paper, the hyperparameters were tuned to our EMG data. For results, we reached up to 90% train/test accuracy on single-participant binary classification, and this is a graph showing our accuracy. In the future we will continue to train our model to try to improve the accuracy. Awesome, so now let’s take a look at the UI team, which actually used those machine learning models in production. Conrad?

Hi, I’m Connor, the UI team representative. Our goal here was to display the output of the ML model and provide a simple user experience. We began by building the UI with React and Flask, and once our basic functionalities were set up, we integrated our work with the ML, data, and hardware teams, and we were then able to send the data to be displayed on the front end from the Flask server. Some functionalities we had were the ability to move between questions, so as you can see there’s a “next question” control, and you can also go to the data tab and view the previous questions. We also set up a database to store the accuracy and predictions for future use by the ML team; after each question, the result is entered into the database. We also have a port number input so other users can connect their headsets. And as you can see, we also accomplished our goal of a simple user experience, as our user testing yielded positive results.

Great, so now let’s just take a couple of minutes for each team rep to go over the lessons they’ve learned working with their own teams. The hardware team faced some limitations because we only secured one device, which slowed down the number of recordings we could get, since there was only one device producing recordings. As for the data team, when the school year started the productivity level of all the club members dropped, so we had a few team bonding sessions, which actually helped increase productivity for us overall. For the ML team, the team members were fairly inexperienced and we learned a lot; we learned that it is important to have good knowledge and experience before creating a model. The UI team had similar lessons to the other teams; we also noted that testing with hardware and getting the integrations done as early as possible is important, to allow us to incorporate additional features in future projects.

Cool, now Kate is going to cover some future work and new possibilities of our technology. We’re really excited to continue working on and developing our project. Potential use cases that we could already apply our project to include communicating with disabled people or trauma victims who are too shocked to speak, simple card games that require discreet communication, like poker, and polygraph or lie tests measured with subvocal recognition for more accurate classification of whether or not a person is telling the truth. We also plan to train the model on continuous data rather than recordings of fixed window sizes, since a rolling window might improve the classification of words continuously as opposed to in a fixed frame. We also plan to do phonetic recognition that will allow us to better interface with AI and assistants on mobile or desktop. NeuroTech SC was started in March of this year, during the COVID-19 pandemic, so the extent of all our communication has been over Slack and Zoom. However, we were still able to develop and test a hardware project virtually with over 20 members. As a first-year club we were able to make a substantial first step, and we look forward to evolving as a club by building more projects and also making contributions in research.

It’s been really great working with all the team reps, and I’m sure the team reps all enjoyed their teams as well. We think we made a pretty cool project and look forward to continuing work on it in the next few months. Thank you, that’s about it.
