10 Comments
Isha Yiras Hashem

I can't help you, but I feel obliged to take the opportunity to note that I'm unbeatable at this type of game. It is rare that I get to feel smarter than you, so let me enjoy it.

Alex Reinhart

Interesting favorite word for a 13-year-old: "Sarsaparilla is a group of plants that grow in tropical parts of the world."

Auros

It's also the name of a beverage typically brewed from sassafras root bark, sometimes with other flavorings like birch bark or wintergreen.

https://en.wikipedia.org/wiki/Sarsaparilla_(drink)

You'll often see it featured in stuff like kids' cartoons set in the Old West -- it's portrayed as a thing that people drink at the town saloon when they're too Good / White Hat to drink whiskey. (And there may actually be some truth to the stereotype.)

https://www.youtube.com/watch?v=kurxmZ1sYlA

The original version might have been lightly carbonated through fermentation (similar to old versions of root beer), but would've been something like 1-2% ABV, so it wouldn't really get you drunk. You also might pep it up with acid phosphate (as seen in that recipe video).

Alex Reinhart

Wow, you learn new things every day.

Auros

I had definitely encountered the word when I was quite young, but the way it's typically pronounced sounds closer to something like "sassparella" or "sassparilla" -- I would not have known how to spell it as a kid. The fact that it's actually "sarsaparilla" feels almost British. (See: Worcestershire, which is pronounced "wistəshər" or "woostəshər".)

There's apparently a character named Sassparilla in a silly anime, the mother of another character named Sody Pop (so, definitely on-theme, since sarsaparilla is one of the original forms of "soda pop", along with birch beer, root beer, and the original colas brewed from kola nuts).

https://chiknnuggit.fandom.com/wiki/Sassparilla

Richard Sprague

Misspellings are fine because the LLM isn't really reading letters; it turns your string of tokens into points in a high-dimensional space. A misspelling here or there will shift where a point lands, but it usually won't matter much, because the important thing is which other concepts are nearby in that space. And because misspellings tend to follow predictable patterns, the nearby concepts are predictable too.
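
If you want to see this concretely, here's a minimal sketch using the sentence-transformers library (the model name is just a common small default, not anything specific to Claude; exact scores will vary by model):

```python
# Minimal sketch: misspellings land near the correct word in embedding space.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed small default model

words = ["sarsaparilla", "sassparilla", "sasparilla", "refrigerator"]
vecs = model.encode(words)

# Cosine similarity of each variant to the correctly spelled word.
for word, vec in zip(words[1:], vecs[1:]):
    sim = util.cos_sim(vecs[0], vec).item()
    print(f"{word:>14}: {sim:.3f}")

# The misspellings should score far higher than the unrelated word,
# i.e. they map to nearly the same neighborhood of the space.
```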

Trevor Klee

Trying to redirect the LLM into the wrong location fails, though. Claude successfully answered "Watt watt is my lightbulb" with an explanation of how to figure out how many watts a lightbulb uses. By contrast, it answered "what what is my lightbulb" with a request to know what exactly I'm asking about my lightbulb. Could you explain that with your paradigm?

Richard Sprague

The LLM isn't really looking at the words, per se. Rather, it deals in impossibly complicated networks of semantic relationships that it learned from training. When it sees "watt" and "lightbulb", its training puts your query into a semantic space related to electricity and measurement; by contrast, "what" and "lightbulb" puts it more into a semantic space of general questions about objects. The spelling (or even the language) of the words doesn't matter so long as it can identify the semantic relationship. For example, you can even try "1瓦 is my lightbulb" (using the Chinese word for "watt") and it'll correctly assume you mean electricity.
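
Same kind of sketch as before, this time measuring which conceptual neighborhood each query falls into (again with sentence-transformers, illustrative only; the concept phrases are just stand-ins I made up):

```python
# Sketch: "watt" vs. "what" queries land in different semantic neighborhoods.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed small default model

queries = ["watt watt is my lightbulb", "what what is my lightbulb"]
concepts = ["electrical power measured in watts",
            "a general question about an object"]

sims = util.cos_sim(model.encode(queries), model.encode(concepts))

for query, row in zip(queries, sims):
    scores = ", ".join(f"{s.item():.3f}" for s in row)
    print(f"{query!r} -> [{scores}]")

# Expect the "watt" query to sit closer to the electricity concept and
# the "what" query to sit closer to the generic-question concept.
```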

Trevor Klee

Interesting, seems plausible. I asked "wht is my lightbulb" and it responded "If you're asking 'what', I'm still not able to see your specific lightbulb to tell you anything about it." What's the second-order reasoning that lets it say "If you're asking 'what'..."?

Richard Sprague

There's a whole separate set of post-training steps ("reinforcement learning from human feedback", or RLHF) where they ask the LLM to generate several candidate responses and then humans choose the best one for a given scenario (in this case, scenario = "useful chatbot for the general public"). Btw, the best overview is here: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
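
In schematic form (this is hypothetical pseudocode-style Python to show the shape of the preference-collection step, not any lab's actual pipeline):

```python
# Schematic sketch of RLHF preference collection: sample candidate
# responses, let a human pick the best, keep (chosen, rejected) pairs.
from dataclasses import dataclass
import random

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # response the human ranked higher
    rejected: str  # response the human ranked lower

def generate_candidates(prompt: str, n: int = 4) -> list[str]:
    # Stand-in for sampling n responses from the base LLM.
    return [f"candidate response {i} to: {prompt}" for i in range(n)]

def human_pick_best(candidates: list[str]) -> str:
    # Stand-in for a human labeler choosing the best response.
    return random.choice(candidates)

def collect_preferences(prompts: list[str]) -> list[PreferencePair]:
    pairs = []
    for prompt in prompts:
        candidates = generate_candidates(prompt)
        best = human_pick_best(candidates)
        pairs += [PreferencePair(prompt, best, c)
                  for c in candidates if c != best]
    return pairs

# The pairs then train a reward model, which in turn nudges the LLM
# (e.g., via PPO) toward responses humans tend to prefer.
pairs = collect_preferences(["what what is my lightbulb"])
print(len(pairs), "preference pairs collected")
```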
