Musk goes on to say that both the unavailability of chips and electrical power are constraints on AI model training. Each Nvidia H100 chip draws roughly 500 watts, so a training setup using 20,000 H100s would consume about 10 megawatts, and one using 100,000 H100s about 50 megawatts. Adding in memory, CPUs, and other circuitry, the whole data-center draw is probably about 100 MW. That's only about a tenth of the output of a typical nuclear power plant, so not too bad. But it's not the thing you want every little organization trying to do.
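The arithmetic above can be sketched in a few lines. Note that 500 W per H100 is the figure assumed in the text, the 2x overhead multiplier for memory, CPUs, and other circuitry is an assumption chosen to reproduce the ~100 MW data-center estimate, and 1,000 MW stands in for a typical nuclear plant:

```python
H100_WATTS = 500          # per-GPU draw assumed in the text
OVERHEAD = 2.0            # whole-data-center multiplier (assumption)
NUCLEAR_PLANT_MW = 1000   # a typical ~1 GW reactor

def cluster_mw(num_gpus, overhead=1.0):
    """Total draw in megawatts for num_gpus H100s."""
    return num_gpus * H100_WATTS * overhead / 1e6

print(cluster_mw(20_000))               # 10.0 MW, GPUs alone
print(cluster_mw(100_000))              # 50.0 MW, GPUs alone
print(cluster_mw(100_000, OVERHEAD))    # 100.0 MW, whole data center
print(cluster_mw(100_000, OVERHEAD) / NUCLEAR_PLANT_MW)  # 0.1 of a plant
```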