The large number of Evicted pods suggests you've set the resource requests too low. An 8x difference between requests and limits "feels" very large to me.

Given your setup, the kubectl describe node output looks about right to me. Notice that the resource requests are very close to 100%: Kubernetes will keep scheduling pods on a node until its resource requests reach 100%, regardless of what the corresponding limits are. So if you've managed to schedule 7x 256 MiB-request pods, that would request 1,792 MiB of memory (about 88% of a 2 GiB node); and if each pod specifies a limit of 2 GiB, then the total limits would be 7x 2,048 MiB, or 14,336 MiB (700% of the physical capacity).

If the total limits are that far above the physical capacity of the system, and the pods actually use that much memory, then the system will eventually run out of memory. When this happens, a pod will get Evicted; which pod depends on how much its actual usage exceeds its request, even if it's within its limit. Node-pressure Eviction in the Kubernetes documentation describes the process in more detail.

Setting these limits well is something of an art. If the requests and limits are equal, then the pod will never be evicted (its usage can't exceed its request); but in that case, if the pod isn't using 100% of its requested memory, the node will be underutilized. If they're different, it's easier to schedule pods onto fewer nodes, but the nodes will be overcommitted, and something will get evicted when actual memory usage increases. I might set the requests to the expected (observed) steady-state memory usage, and the limits to the highest you'll ever expect to see in normal operation.
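
As a sketch of that last suggestion, a pod spec might look like the following; the pod name, image, and the 300Mi/512Mi figures are hypothetical placeholders, so substitute the values you actually observe:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app            # hypothetical name for illustration
    spec:
      containers:
        - name: app
          image: registry.example.com/app:latest   # placeholder image
          resources:
            requests:
              memory: 300Mi        # roughly the observed steady-state usage
            limits:
              memory: 512Mi        # the most you expect in normal operation

With requests close to real usage, the scheduler's 100% accounting reflects what the node can actually hold, and the modest request-to-limit gap keeps the node from being overcommitted by 8x.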