grimjim: infinite voyage (Default)
[personal profile] grimjim
I've been a bit obsessed with getting a more intuitive feel for what LLM technology is, poking and prodding at the local 7B LLM I can run at home without giving free training data to a tech company. Kitbashing Python libraries until models work is a bit of a chore.

My intuition: LLMs are somewhat like a stochastic parrot that can remix, following tenuous connections to do so, hence the potential for "hallucination". Training them is a bit like carving out a phase space within a manifold representing the potential paths of human text. Grammar and syntax are easier to extract (and compress into this phase space) than more complex relationships that are often simply not stated in text and must be inferred. There's no true "understanding" in the sense of a coherent internal model of the world, just correlation. That's why a mainland Chinese-trained LLM will faithfully answer differently to the question of whether Taiwan is part of China depending on whether the question is posed in English (no) or Chinese (yes).
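The "stochastic parrot" intuition can be made concrete with a toy model. The sketch below (my own illustration, not anything from the post — the corpus, seed, and function names are all invented for the demo) trains a bigram chain: it records only which word has been observed to follow which, then samples a continuation. Like an LLM at vastly smaller scale, it produces locally plausible sequences purely from correlation, with no model of what the words mean.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length, rng):
    """Walk the chain, sampling each next word from observed continuations."""
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: this word was never followed by anything
        out.append(rng.choice(options))
    return " ".join(out)

corpus = ("the parrot repeats the phrase and the phrase repeats "
          "the parrot and the parrot remixes the phrase")
model = train_bigrams(corpus)
print(generate(model, "the", 8, random.Random(0)))
```

Every pair of adjacent words in the output occurs somewhere in the training text, so the result is grammatical-looking remix — which is the point: fluency here demonstrates correlation, not understanding.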

To anthropomorphize a bit: an LLM tries hard to produce a sequence of text that would pass as plausible, given inputs that are artifacts produced by conscious people.

People who think they're dealing with a nascent consciousness have fallen into a trap: they are co-creating a dialogue with the LLM, and the LLM is doing its best to fill its half with content that resembles the text streams of people conversing based on its training set. It's a projected mirage of consciousness, perhaps like how television presents projected images and video as opposed to the real thing. There's no ghost in this machine.

Since LLM text output is generally middling as literature, it seems reasonable to conclude that bad AI-generated literature is a reflection of a lack of editing skill and/or competence, since humans are quite capable of rewriting and paraphrasing output to make it more presentable. (Or laziness, I suppose upon reflection.)

On a more casual note, sometimes the concept of a prompt is far more amusing than its actual result.
Write a scene. Rachael is watching a stage play. A banquet is in progress. The guests are enjoying an appetizer of raw oysters. The entree consists of boiled dog. Describe Rachael's reaction.

Date: 2023-12-08 12:14 (UTC)
sabotabby: (books!)
From: [personal profile] sabotabby
I guess we tend to anthropomorphize.

Since LLM text output is generally middling as literature, it seems reasonable to conclude that bad AI-generated literature is a reflection of a lack of editing skill and/or competence, since humans are quite capable of rewriting and paraphrasing output to make it more presentable. (Or laziness, I suppose upon reflection.)

I don't think it's that so much as an LLM, and its programmers, are fundamentally incapable of understanding what the point of a story is.

Write a scene. Rachael is watching a stage play. A banquet is in progress. The guests are enjoying an appetizer of raw oysters. The entree consists of boiled dog. Describe Rachael's reaction.

A good short story would be just a list of prompts like this that ChatGPT comes up with unsatisfying responses to.

Date: 2023-12-10 13:28 (UTC)
sabotabby: (books!)
From: [personal profile] sabotabby
Oh, I mean without the responses. Or generating a response that is very same-y same-y, for comedic value.
