Survey Drama Llama

Firstly, I’d like to thank the people who have already submitted responses to the Modernity Worldview Survey. I’ll note that your entries were submitted before this warning was presented.


» Modernity Worldview Survey «


Update: Google has taken action and very responsively removed this warning. If you saw it whilst attempting to visit the URL, try again. Sorry for any fright or inconvenience. I’ll continue as if this never happened. smh


I am frustrated to say the least. I created this survey over the past month or so, writing, rewriting, refactoring, and switching technology and hosts until I settled on Google Cloud (GCP). It worked fine yesterday. When I visited today, I saw this warning.

As I mentioned in my announcement post, I collect no personal information. I don’t even ask for an email address, let alone a credit card number. On a technical note, this is the information I use:

id                 autogenerated unique identifier
timestamp          date and time stamp of record creation (UTC)
question-response  which response option was selected per question
ternary-triplet    the position of the average modernity score (pre, mod, post) 
plot_x             Cartesian x-axis plot point for the ternary chart
plot_y             Cartesian y-axis plot point for the ternary chart
session_id         facilitates continuity for a user's browser experience
browser*           which browser is being used (Chrome, Safari, and so on)
region             browser's language setting (US, GB, FR)
source             whether the user is accessing from the web or 'locally'
                   ('local' indicates a test record, so I can filter them out)

* These examples illustrate the collected browser information:
- Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36

- Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Mobile Safari/537.36

This is all.
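For the technically curious, here is a rough sketch of how the ternary-triplet and the plot_x/plot_y fields might relate. It is the standard barycentric-to-Cartesian conversion for an equilateral triangle, written in Python; the corner placement and scaling are my assumptions for illustration, not necessarily what the survey actually computes.

import math

def ternary_to_cartesian(pre, mod, post):
    # Normalise so the three scores sum to 1 (barycentric coordinates).
    total = pre + mod + post
    pre, mod, post = pre / total, mod / total, post / total
    # Assumed corners: pre at (0, 0), mod at (1, 0), post at (0.5, sqrt(3)/2).
    plot_x = mod + 0.5 * post
    plot_y = post * math.sqrt(3) / 2
    return plot_x, plot_y

print(ternary_to_cartesian(0.2, 0.5, 0.3))  # a point inside the triangle

Storing the pre-computed point alongside the triplet presumably just saves recomputing it every time the chart is redrawn.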

This is a Chrome warning; ironically, Chrome is a Google product. I tested the site in Opera, Edge, and Safari without encountering this nonsense.

The front end (UI) is written in HTML, Python, JavaScript, and React with some standard imports. The backend (database) is MySQL. It is version-controlled on GitHub and entirely hosted on GCP. I link to the survey from here (WordPress) and from other social media presences. I did make the mistake of not making the site responsive, and I paid the price when I visited it on my Samsung S24: the page felt like the size of a postage stamp. I may fix this once the security issue is resolved.

I sent Google a request to remove the site from their blacklist. This could take three weeks, more or less.

Meantime, I’ll pause survey promotions and hope this resolves quickly. The survey will remain live. If you use something other than Chrome, you should be able to take it. Obviously, I’ll also delay analysing and releasing any summary results.

Apologies for rambling. Thank you for your patience.

Using AI to Decode Speech from Brain Activity

Apologies in advance for sharing PR hype from Meta (formerly known as Facebook), but I want to comment on the essence of the idea, which is using AI to decode speech from brain activity. It seems to imply that one would apply supervised machine learning to train a system to map speech to brain activity, as illustrated by the image below.

Image caption: To decode speech from noninvasive brain signals, we train a model with contrastive learning to align speech and its corresponding brain activity.

The dataset would require captured brain-activity patterns from a large enough sample. In this case, it appears to have come from some 417 volunteers.

Image caption: Activations of wav2vec 2.0 (left) map onto the brain (right) in response to the same speech sounds. The representations of the first layers of this algorithm (cool colours) map onto the early auditory cortex, whereas the deepest layers map onto high-level brain regions (e.g. prefrontal and parietal cortex).
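As an aside for readers who like to see the mechanics, here is a minimal sketch of what contrastive learning to align speech and brain activity could look like, in the spirit of an InfoNCE-style loss. It assumes the speech features (say, from wav2vec 2.0) and the brain-signal features have already been projected into embeddings of the same size; none of this is Meta's actual code.

import torch
import torch.nn.functional as F

def contrastive_alignment_loss(speech_emb, brain_emb, temperature=0.1):
    # speech_emb, brain_emb: (batch, dim) embeddings for the same time windows.
    speech = F.normalize(speech_emb, dim=-1)
    brain = F.normalize(brain_emb, dim=-1)
    # Similarity of every speech window against every brain window.
    logits = speech @ brain.T / temperature
    # The matching pair for each row sits on the diagonal.
    targets = torch.arange(logits.size(0))
    # Symmetric loss: speech-to-brain and brain-to-speech.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

# Toy usage with random features standing in for wav2vec 2.0 and MEG/EEG embeddings.
speech_batch = torch.randn(32, 256)
brain_batch = torch.randn(32, 256)
print(contrastive_alignment_loss(speech_batch, brain_batch))

The intuition is simply that matching (speech, brain) pairs should score higher than mismatched ones, which lets the model learn the mapping without hand-labelled targets.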

This feels like it could have many commercial, consumer, and industrial uses, including removing the need for other human-computer interface devices, notably keyboards, and perhaps even mouses. Yes, I said mouses. Sue me.

Given hypotheses related to language and cognition, I wonder what could be gleaned by mapping speakers of different native languages to the underlying cognitive processes and then remapping those processes to speech output. If such a system could arrive at some common grammar, it could render a given thought stream in any known (and mapped) language, allowing for instantaneous “translation”.

Of course, a longer-term goal would be to skip the external devices and interface brain to brain. This sounds rogue-science-fiction scary, as one might imagine an external device trained on a brain to read its contents. One of the last things this world needs is to have to worry about neuro-rights and about being monitored for thought crimes. Come to think of it, isn’t there already a book on this? Never mind. Probably not.

Technology is generally not inherently harmful or helpful, as that is determined by use. Humans do seem to tend toward the nefarious. Where do you think this will go? Leave a comment.