linux/Documentation/scheduler/sched-stats.txt
Version 15 of schedstats dropped some sched_yield() counters:
yld_exp_empty, yld_act_empty and yld_both_empty.  Otherwise, it is
identical to version 14.

Version 14 of schedstats includes support for sched_domains, which hit the
mainline kernel in 2.6.20, although it is identical to the stats from version
12 which was in the kernel from 2.6.13-2.6.19 (version 13 never saw a kernel
release).  Some counters make more sense to be per-runqueue; others to be
per-domain.  Note that domains (and their associated information) will only
be pertinent and available on machines utilizing CONFIG_SMP.

In version 14 of schedstats, there is at least one level of domain
statistics for each cpu listed, and there may well be more than one
domain.  Domains have no particular names in this implementation, but
the highest numbered one typically arbitrates balancing across all the
cpus on the machine, while domain0 is the most tightly focused domain,
sometimes balancing only between pairs of cpus.  At this time, there
are no architectures which need more than three domain levels.  The first
field in the domain stats is a bit map indicating which cpus are affected
by that domain.

These fields are counters, and only increment.  Programs which make use
of these will need to start with a baseline observation and then calculate
the change in the counters at each subsequent observation.  A Perl script
which does this for many of the fields is available at

    http://eaglet.rain.com/rick/linux/schedstat/

Note that any such script will necessarily be version-specific, as the main
reason to change versions is changes in the output format.  For those wishing
to write their own scripts, the fields are described here.

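For those writing their own tools, the baseline-and-delta idea can be
sketched in a few lines of C.  The fragment below is illustrative only: it
assumes the version 15 cpu line layout described in the next section, and
that cpus appear in order starting at cpu0.

    /*
     * Sample field 3 of each cpu line (# of schedule() calls) twice,
     * one second apart, and print the per-second delta.
     */
    #include <stdio.h>
    #include <unistd.h>

    #define MAX_CPUS 64

    static int snapshot(unsigned long long *calls)
    {
        FILE *f = fopen("/proc/schedstat", "r");
        char line[512];
        unsigned long long v[9];
        int n = 0;

        if (!f)
            return -1;
        while (fgets(line, sizeof(line), f) && n < MAX_CPUS)
            if (sscanf(line, "cpu%*d %llu %llu %llu %llu %llu %llu %llu %llu %llu",
                       &v[0], &v[1], &v[2], &v[3], &v[4],
                       &v[5], &v[6], &v[7], &v[8]) == 9)
                calls[n++] = v[2];    /* field 3: schedule() calls */
        fclose(f);
        return n;
    }

    int main(void)
    {
        unsigned long long before[MAX_CPUS], after[MAX_CPUS];
        int i, n = snapshot(before);

        sleep(1);
        if (n <= 0 || snapshot(after) != n)
            return 1;
        for (i = 0; i < n; i++)
            printf("cpu%d: %llu schedule() calls/sec\n", i,
                   after[i] - before[i]);
        return 0;
    }
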
CPU statistics
--------------
cpu<N> 1 2 3 4 5 6 7 8 9

First field is a sched_yield() statistic:
     1) # of times sched_yield() was called

Next three are schedule() statistics:
     2) a legacy array expiration count field used in the O(1)
        scheduler; kept for ABI compatibility, it is always set to zero
     3) # of times schedule() was called
     4) # of times schedule() left the processor idle

Next two are try_to_wake_up() statistics:
     5) # of times try_to_wake_up() was called
     6) # of times try_to_wake_up() was called to wake up the local cpu

Next three are statistics describing scheduling latency:
     7) sum of all time spent running by tasks on this processor (in
        nanoseconds)
     8) sum of all time spent waiting to run by tasks on this processor (in
        nanoseconds)
     9) # of timeslices run on this cpu

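Taken together, fields 7-9 give a crude latency picture: dividing field 8
by field 9 yields the average time tasks spent waiting on this cpu's
runqueue per timeslice granted.  A hedged C sketch of that calculation
(lifetime totals rather than deltas, for brevity):

    /*
     * Average runqueue wait per timeslice, from fields 8 and 9 of each
     * version 15 cpu line.  Values are assumed to be in nanoseconds.
     */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/schedstat", "r");
        char line[512];
        unsigned long long run, wait, slices;
        int cpu;

        if (!f)
            return 1;
        while (fgets(line, sizeof(line), f))
            if (sscanf(line, "cpu%d %*llu %*llu %*llu %*llu %*llu %*llu %llu %llu %llu",
                       &cpu, &run, &wait, &slices) == 4 && slices)
                printf("cpu%d: %.1f us average wait over %llu timeslices\n",
                       cpu, wait / (slices * 1000.0), slices);
        fclose(f);
        return 0;
    }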

Domain statistics
-----------------
One of these is produced per domain for each cpu described. (Note that if
CONFIG_SMP is not defined, *no* domains are utilized and these lines
will not appear in the output.)

domain<N> <cpumask> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36

The first field is a bit mask indicating what cpus this domain operates over.

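The mask is printed in hexadecimal, with bit 0 corresponding to cpu0, so a
domain spanning cpus 0 and 1 appears as 03.  A tiny illustrative decoder
for one 32-bit word of the mask (larger machines print several
comma-separated words, which this sketch does not handle):

    /*
     * Decode one 32-bit word of a domain cpumask: bit N set means
     * cpu N is covered by that domain.
     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        unsigned long mask = strtoul(argc > 1 ? argv[1] : "03", NULL, 16);
        int cpu;

        for (cpu = 0; cpu < 32; cpu++)
            if (mask & (1UL << cpu))
                printf("cpu%d\n", cpu);
        return 0;
    }
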
The next 24 are a variety of load_balance() statistics, grouped into types
of idleness (idle, busy, and newly idle); a short example of putting these
counters to use follows the list:

     1) # of times in this domain load_balance() was called when the
        cpu was idle
     2) # of times in this domain load_balance() checked but found
        the load did not require balancing when the cpu was idle
     3) # of times in this domain load_balance() tried to move one or
        more tasks and failed, when the cpu was idle
     4) sum of imbalances discovered (if any) with each call to
        load_balance() in this domain when the cpu was idle
     5) # of times in this domain pull_task() was called when the cpu
        was idle
     6) # of times in this domain pull_task() was called even though
        the target task was cache-hot when idle
     7) # of times in this domain load_balance() was called but did
        not find a busier queue while the cpu was idle
     8) # of times in this domain a busier queue was found while the
        cpu was idle but no busier group was found

     9) # of times in this domain load_balance() was called when the
        cpu was busy
    10) # of times in this domain load_balance() checked but found the
        load did not require balancing when busy
    11) # of times in this domain load_balance() tried to move one or
        more tasks and failed, when the cpu was busy
    12) sum of imbalances discovered (if any) with each call to
        load_balance() in this domain when the cpu was busy
    13) # of times in this domain pull_task() was called when busy
    14) # of times in this domain pull_task() was called even though the
        target task was cache-hot when busy
    15) # of times in this domain load_balance() was called but did not
        find a busier queue while the cpu was busy
    16) # of times in this domain a busier queue was found while the cpu
        was busy but no busier group was found

    17) # of times in this domain load_balance() was called when the
        cpu was just becoming idle
    18) # of times in this domain load_balance() checked but found the
        load did not require balancing when the cpu was just becoming idle
    19) # of times in this domain load_balance() tried to move one or more
        tasks and failed, when the cpu was just becoming idle
    20) sum of imbalances discovered (if any) with each call to
        load_balance() in this domain when the cpu was just becoming idle
    21) # of times in this domain pull_task() was called when newly idle
    22) # of times in this domain pull_task() was called even though the
        target task was cache-hot when just becoming idle
    23) # of times in this domain load_balance() was called but did not
        find a busier queue while the cpu was just becoming idle
    24) # of times in this domain a busier queue was found while the cpu
        was just becoming idle but no busier group was found

   Next three are active_load_balance() statistics:
    25) # of times active_load_balance() was called
    26) # of times active_load_balance() tried to move a task and failed
    27) # of times active_load_balance() successfully moved a task

   Next three are sched_balance_exec() statistics:
    28) sbe_cnt is not used
    29) sbe_balanced is not used
    30) sbe_pushed is not used

   Next three are sched_balance_fork() statistics:
    31) sbf_cnt is not used
    32) sbf_balanced is not used
    33) sbf_pushed is not used

   Next three are try_to_wake_up() statistics:
    34) # of times in this domain try_to_wake_up() awoke a task that
        last ran on a different cpu in this domain
    35) # of times in this domain try_to_wake_up() moved a task to the
        waking cpu because it was cache-cold on its own cpu anyway
    36) # of times in this domain try_to_wake_up() started passive balancing
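
As promised above, one way to turn the load_balance() counters into a
readable figure: the hedged C fragment below reports, for the idle case of
each domain, how often load_balance() found no work to do, i.e. field 2 as
a fraction of field 1 (version 15 layout, lifetime totals rather than
deltas).

    /*
     * For each domain, report what fraction of idle load_balance()
     * calls (field 1) found the load already balanced (field 2).
     */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/schedstat", "r");
        char line[1024], mask[64];
        unsigned long long calls, balanced;
        int cpu = -1, dom;

        if (!f)
            return 1;
        while (fgets(line, sizeof(line), f)) {
            if (sscanf(line, "cpu%d", &cpu) == 1)
                continue;    /* remember which cpu we are under */
            if (sscanf(line, "domain%d %63s %llu %llu",
                       &dom, mask, &calls, &balanced) == 4 && calls)
                printf("cpu%d domain%d (%s): %.1f%% of idle balance calls found no imbalance\n",
                       cpu, dom, mask, 100.0 * balanced / calls);
        }
        fclose(f);
        return 0;
    }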

/proc/<pid>/schedstat
---------------------
schedstats also adds a new /proc/<pid>/schedstat file to include some of
the same information on a per-process level.  There are three fields in
this file, giving for that process:
     1) time spent on the cpu (in nanoseconds)
     2) time spent waiting on a runqueue (in nanoseconds)
     3) # of timeslices run on this cpu

A program could easily be written to make use of these extra fields to
report on how well a particular process or set of processes is faring
under the scheduler's policies.  A simple version of such a program is
available at

    http://eaglet.rain.com/rick/linux/schedstat/v12/latency.c

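In the same spirit as latency.c (not reproduced here), a minimal hedged C
sketch: read the three fields for one pid and derive the average wait per
timeslice, assuming the nanosecond units noted above.

    /*
     * Print the three /proc/<pid>/schedstat fields for the pid given
     * on the command line (default: self), plus the derived average
     * runqueue wait per timeslice.
     */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char path[64];
        unsigned long long run, wait, slices;
        FILE *f;

        snprintf(path, sizeof(path), "/proc/%s/schedstat",
                 argc > 1 ? argv[1] : "self");
        f = fopen(path, "r");
        if (!f || fscanf(f, "%llu %llu %llu", &run, &wait, &slices) != 3)
            return 1;
        fclose(f);
        printf("run %llu ns, wait %llu ns, %llu timeslices\n",
               run, wait, slices);
        if (slices)
            printf("average wait per timeslice: %.1f us\n",
                   wait / (slices * 1000.0));
        return 0;
    }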