On Technology (AI Analysis)


This is a summary of the book using the Merlin browser extension, which uses Gemini. It's ironically poetic in the way it summarizes the content, because it's a feedback loop between the content itself and how artificial intelligence parses it. The danger is that everything is reduced to bullet points, and people can't understand things thoroughly if they're just glossing over things and looking only at bullet points.

With AI tools there are different ways of skinning the cat, but also different ways of skimming the cat. I noticed that when I imported an edited transcript of one of my video riffs, Gemini neatly summarized it, but it left a lot out: things that would make a big difference in the ideas as I've personally expressed them. The danger now is that this is all going to wash over everybody, and people aren't going to be fact-checking.

If you take two Google Notebooks that contain essentially the same documents, but one includes a document full of lies, the results will be very different. And if you reorder the documents so that the one full of lies is prioritized, the output will be a bunch of lies that sound like the truth. It's a "skimming" of content, a cheap veneer.

***

The AI-generated summary using the browser extension:

• Technology's Double-Edged Sword:  Every new technology simultaneously enhances and diminishes human intelligence.  While providing readily available information and connectivity, it also fosters laziness and reliance on quick fixes, potentially hindering deeper thinking and critical analysis.

[I didn’t say that. I used the word "laziness" in this entry: 2887. “Play that has some level of work in it is probably better than all play. When work becomes all play, that's probably not an optimal creative state. Laziness or unseriousness comes into play.” In fact, this entry isn’t in this book. It’s clearly a hallucination].

• The Internet's Evolution: From Utopia to Dust Bowl: The internet's initial promise of global connection and free information flow has yielded unintended consequences.  Social media, for instance, while connecting people, can also foster division and discourage authentic artistic expression by prioritizing engagement over substance.

[I didn’t say “foster division” anywhere. I said “tribal division” in this context:  “1631. It's so ironic that in today's online tribal divisions, we like to play on both sides of the wall, i.e. using the progress we've made to even construct the internet in the spirit of globalism in the first place (Facebook's idea of a 'zeitgeist'), and then entertain the idea to recede to our ancient past we thought we had transcended. Even after the fall of the wall in 1989, 'mauer im kopf' or 'wall in the head' remains.” This entry is also not in the book. It’s making facile inferences with common cliches and popular sentiments].

• AI: Progress or Peril?:  Artificial intelligence presents both incredible opportunities and significant risks.  Concerns exist about the potential for misuse, the lack of ethical discussion surrounding its development, and its potential for unforeseen negative consequences, even the potential for human extinction.  However,  developing critical thinking skills and personal ethical frameworks are crucial to navigating the challenges of AI.

[Uh, yea…]

Here's the "Deep Dive," which has its own inaccuracies:

[Software so clunky! Yes!]

2940. Like the nuclear arms race, AI is a race to the elimination of the human race. (Can the winners of the race to the future remain unscathed?) (4/2024) See: https://dynaxiom.blogspot.com/2025/03/2940.html

The Musk comment was from 2014, 11 years ago.

0768. How will AI ever work when systems are typically bogged down with software updates, security updates, re-indexing, system backups, virus checks, and 'system not responding' errors? Anything that involves a text file (dictionaries, computer code) will invariably have bugs and errors. Elon Musk: 'I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it's probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don't do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like yeah he's sure he can control the demon. Didn't work out'. (5/2014)

***

This is a perfect example of low-context communication, as the late anthropologist Edward T. Hall posited in many of his books. In low-context societies, every little detail has to be explained; in high-context societies, everything is tacitly understood, based on shared wisdom. Now that we have to explain everything, it takes more time to wade through all the AI slop, and people just aren't going to do that. They're always going to opt for the bullet points.

In many ways we are entering a new mystical phase, but even the mystics are going to have to explain everything that they're talking about, because artificial intelligence will twist it.
 
***
