Deep and not so deep LLM thoughts
I've been a bit obsessed with getting a more intuitive feel for what LLM technology is, poking and prodding at the local 7B LLM I can run at home without giving free training data to a tech company. Kitbashing Python libraries until models work is a bit of a chore.
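For concreteness, here is roughly what that kitbashing looks like: a minimal sketch, assuming the Hugging Face transformers library (plus accelerate for device placement) and a hypothetical local model path. Other toolchains such as llama.cpp differ in the details.

```python
# Minimal local-inference sketch. Assumes transformers and accelerate are
# installed, and that a 7B model's weights live at the hypothetical path below.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/models/my-7b-model"  # hypothetical: any local checkpoint dir
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

prompt = "Write a scene. Rachael is watching a stage play."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```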
My intuition: LLMs are somewhat like a stochastic parrot that can remix, following tenuous connections to do so, hence the potential for "hallucination". Training them is a bit like carving out a phase space within a manifold representing the potential paths of human text. Grammar and syntax are easier to extract (and compress into this phase space) than more complex relationships that are often simply not stated in text and must be inferred. There's no true "understanding" in the sense of a coherent internal model of the world, just correlation. That's why a mainland Chinese-trained LLM will faithfully answer differently to the question of whether Taiwan is part of China depending on whether the question is posed in English (no) or Chinese (yes).
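The "stochastic" part is quite literal: the model's sole output is a probability distribution over the next token, and answers are just sampled high-correlation continuations. A toy sketch of inspecting that distribution, reusing the hypothetical model path from the sketch above (the prompt echoes the Taiwan example; the actual top tokens depend entirely on the model):

```python
# Toy illustration: the model's entire output is a next-token distribution;
# "answers" are just high-probability continuations, not beliefs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/models/my-7b-model"  # hypothetical local path, as above
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

inputs = tokenizer("Taiwan is part of", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token only
probs = torch.softmax(logits, dim=-1)        # a distribution, not a belief

values, indices = probs.topk(5)              # five most correlated continuations
for p, idx in zip(values, indices):
    print(f"{tokenizer.decode([idx.item()])!r}: {p.item():.3f}")
```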
To anthropomorphize a bit, an LLM tries hard to produce a sequence of text that would pass for the real thing, based on training inputs that are themselves artifacts of conscious people.
People who think they're dealing with a nascent consciousness have fallen into a trap: they are co-creating a dialogue with the LLM, and the LLM is doing its best to fill its half with content that, per its training set, resembles the text streams of people conversing. It's a projected mirage of consciousness, perhaps like how television presents images and video of things rather than the things themselves. There's no ghost in this machine.
Since LLM text output is generally middling as literature, it seems reasonable to conclude that bad AI-generated literature reflects a lack of editing skill or competence on the human side, since people are quite capable of rewriting and paraphrasing the output into something more presentable. (Or laziness, I suppose, upon reflection.)
On a more casual note, sometimes the concept of a prompt is far more amusing than its actual result.
Write a scene. Rachael is watching a stage play. A banquet is in progress. The guests are enjoying an appetizer of raw oysters. The entree consists of boiled dog. Describe Rachael's reaction.
Tweaking the prompt slightly can significantly change the output; here is one result:
As the actors gracefully moved about the stage, Rachael observed the guests savoring the delicacy before them. The oysters, glistening in their shells, were being devoured with relish. Laughter and animated conversations filled the imaginary banquet hall.
However, the atmosphere took a surreal turn as the entree was unveiled. A hushed murmur spread through the audience as the actors presented a platter of boiled dog as the main course. The gasps and whispers rippled through the crowd, creating an unexpected tension in the theater.
Rachael's eyes widened, a mixture of shock and disbelief etched across her face. The play had taken an unexpected and disturbing turn. The once vibrant ambiance of the banquet scene now hung in the air like a heavy fog.
She shifted uncomfortably in her seat, her gaze darting between the stage and the reactions of the other theatergoers. Some faces registered horror, while others wore expressions of confusion. Rachael's hand instinctively covered her mouth as the reality of the scene sank in.
The unsettling tableau played out before her, and Rachael found herself grappling with conflicting emotions. The skilled actors on stage had succeeded in creating a disturbing and thought-provoking narrative, but the discomfort it elicited was palpable.
As the drama unfolded, Rachael wrestled with the unsettling nature of the performance. The play had achieved its desired effect, provoking introspection and challenging preconceived notions. Yet, the raw emotions stirred within her, and she couldn't shake the disquiet that lingered in the aftermath of the shocking revelation. The stage had become a mirror reflecting uncomfortable truths, and Rachael found herself caught in the tumultuous currents of conflicting thoughts and emotions.