I deal with a lot of servers at work, and one thing everyone wants to know about their servers is how close they are to being at max utilization. It should be easy, right? Just pull up top or another system monitor tool, look at network, memory and CPU utilization …
I was thinking about LLM tokenization (as one does) and had a thought: We select the next output token for an LLM based on its likelihood, but (some) shorter tokens are more likely.
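One way to picture the worry (my reading of the teaser, with an entirely made-up distribution): a short token is a prefix of many continuations, so a model can assign it more mass than any single longer token, and greedy selection will then pick the short one.

```python
# Toy illustration -- the vocabulary and probabilities are invented,
# not real model outputs.
word_probs = {
    " apple": 0.10, " answer": 0.08, " anchor": 0.07,  # all begin with " a"
    " zebra": 0.12, " quartz": 0.11,                   # distinct longer tokens
}

# If the vocabulary also contains the short token " a", a model can route
# the mass of every " a..."-continuation through it:
p_short = sum(p for w, p in word_probs.items() if w.startswith(" a"))
print(p_short)  # 0.25 -- beats the single best long token " zebra" at 0.12
```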
Claude has trouble playing Pokemon partially because it can't see the screen very well. This made me wonder if Claude would be better at an ASCII game like Dwarf Fortress, where it doesn't need to rely on image recognition.
To check this, I built an MCP server to let Claude …
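For a sense of the shape of such a server, here's a minimal sketch using the MCP Python SDK's FastMCP; the tool set and the screen-capture logic are hypothetical stand-ins, not the post's actual code.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("dwarf-fortress")

def capture_ascii_screen() -> str:
    """Hypothetical helper: a real server would scrape the game's
    terminal buffer here (e.g. from a tmux pane)."""
    return "......g......\n..d..........\n"  # placeholder frame

@mcp.tool()
def read_screen() -> str:
    """Return the current game screen as plain ASCII text."""
    return capture_ascii_screen()

@mcp.tool()
def send_keys(keys: str) -> str:
    """Forward keystrokes to the game (stubbed in this sketch)."""
    return f"sent: {keys}"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so Claude can attach as a client
```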
There's a semi-common meme on Twitter where people share their most X opinion, where X is a group the poster doesn't identify with; or sometimes their least X opinion, where X is a group they do identify with. In that spirit, my least libertarian opinion is that exclusivity deals with sufficiently entrenched companies* are bad and should be illegal.
AI training data comes from humans, not AIs, so every piece of training data for "What would an AI say to X?" is from a human pretending to be an AI. The training data does not contain AIs describing their inner experiences or thought processes. Even synthetic training data only contains AIs predicting what a human pretending to be an AI would say. AIs are trained to predict the training data, not to learn unrelated abilities, so we should expect an AI asked to predict the thoughts of an AI to describe the thoughts of a human pretending to be an AI.