If you look at CPU utilization and assume it will increase linearly, you're going to have a rough time. If you're using the CPU efficiently (running above "50%" utilization), the reported utilization is an underestimate, sometimes significantly so.
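
To make that concrete, here is a toy model of one common cause, SMT/hyperthreading; the cause and the constants are my assumptions for illustration, not necessarily the post's argument. Utilization is reported per logical CPU, but sibling hyperthreads share a physical core's execution resources, so the second half of reported "capacity" adds much less real throughput than the first.

```python
# Toy model: map reported CPU utilization (busy fraction of logical CPUs) to
# the fraction of real throughput consumed, assuming the only nonlinearity is
# SMT/hyperthreading. All constants are illustrative assumptions.

PHYSICAL_CORES = 16
LOGICAL_CPUS = 2 * PHYSICAL_CORES
SMT_SPEEDUP = 1.3  # assumed: two busy sibling threads ~ 1.3x one busy thread

def true_utilization(reported: float) -> float:
    """Fraction of peak throughput consumed at a given reported utilization."""
    busy = reported * LOGICAL_CPUS
    peak = PHYSICAL_CORES * SMT_SPEEDUP
    if busy <= PHYSICAL_CORES:
        # The scheduler spreads threads one per physical core first.
        throughput = busy
    else:
        # Additional threads land on sibling hyperthreads and add less capacity.
        throughput = PHYSICAL_CORES + (busy - PHYSICAL_CORES) * (SMT_SPEEDUP - 1)
    return throughput / peak

for reported in (0.25, 0.50, 0.75, 1.00):
    print(f"reported {reported:.0%} -> ~{true_utilization(reported):.0%} of real capacity used")
```

Under these assumptions, a reported "50%" already consumes roughly 77% of the machine's real throughput, which is the sense in which the reported number understates load.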

I was thinking about LLM tokenization (as one does) and had a thought: We select the next output token for an LLM based on its likelihood, but (some) shorter tokens are more likely.
Why? Longer tokens can only complete …
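
Here is a toy construction (mine, not necessarily the post's full argument) of one way this happens: under longest-match tokenization, a long token only matches text that continues with exactly that string, while a shorter token collects every other continuation that begins with it.

```python
# Toy vocabulary and a uniform distribution over future text, showing a short
# token ("a") ending up more likely than a longer token ("ab") that extends it.

from collections import Counter
from itertools import product

VOCAB = ["a", "ab", "b", "c"]  # hypothetical vocabulary; "ab" extends "a"

def next_token(text: str) -> str:
    """Greedy longest-match tokenization: longest vocab entry prefixing text."""
    return max((t for t in VOCAB if text.startswith(t)), key=len)

# Assume the future text is a uniformly random two-character string over {a, b, c}.
continuations = ["".join(chars) for chars in product("abc", repeat=2)]
counts = Counter(next_token(c) for c in continuations)
for token in VOCAB:
    print(f"P(next token = {token!r}) = {counts[token]}/{len(continuations)}")

# "ab" is the next token only when the text continues with exactly "ab" (1/9),
# while "a" still covers "aa" and "ac" (2/9), so the shorter token is more likely.
```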
Claude has trouble playing Pokemon partially because it can't see the screen very well. This made me wonder if Claude would be better at an ASCII game like Dwarf Fortress, where it doesn't need to rely on image recognition.
To check this, I built an MCP server to let Claude …
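
For context, a minimal MCP server in Python looks roughly like the sketch below, using the official MCP Python SDK's FastMCP helper; the Dwarf Fortress tools (read_screen, send_keys) and their bodies are hypothetical placeholders, since the excerpt doesn't show how the post's server actually talks to the game.

```python
# Sketch of an MCP server exposing game tools to an MCP client such as Claude.
# Requires the official MCP Python SDK (pip install "mcp[cli]").
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("dwarf-fortress")

@mcp.tool()
def read_screen() -> str:
    """Return the current game screen as ASCII text."""
    # Placeholder: a real server would capture this from the game, e.g. by
    # scraping the terminal; this sketch doesn't implement that part.
    return "(ASCII screen capture goes here)"

@mcp.tool()
def send_keys(keys: str) -> str:
    """Send keystrokes to the game (hypothetical tool, for illustration)."""
    return f"sent: {keys}"

if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio so an MCP client can call them
```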
There's a semi-common meme on Twitter where people share their most X opinion, where X is a group the poster doesn't identify with; or sometimes their least X opinion, where X is a group they do identify with. In that spirit, my least libertarian opinion is that exclusivity deals with sufficiently entrenched companies* are bad and should be illegal.
AI training data comes from humans, not AIs, so every piece of training data for "What would an AI say to X?" is from a human pretending to be an AI. The training data does not contain AIs describing their inner experiences or thought processes. Even synthetic training data only contains AIs predicting what a human pretending to be an AI would say. AIs are trained to predict the training data, not to learn unrelated abilities, so we should expect an AI asked to predict the thoughts of an AI to describe the thoughts of a human pretending to be an AI.