<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
        "http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" []>

<book id="drmDevelopersGuide">
  <bookinfo>
    <title>Linux DRM Developer's Guide</title>

    <copyright>
      <year>2008-2009</year>
      <holder>
        Intel Corporation (Jesse Barnes)
      </holder>
    </copyright>

    <legalnotice>
      <para>
        The contents of this file may be used under the terms of the GNU
        General Public License version 2 (the "GPL") as distributed in
        the kernel source COPYING file.
      </para>
    </legalnotice>
  </bookinfo>
  <!-- Introduction -->

  <chapter id="drmIntroduction">
    <title>Introduction</title>
    <para>
      The Linux DRM layer contains code intended to support the needs
      of complex graphics devices, usually containing programmable
      pipelines well suited to 3D graphics acceleration.  Graphics
      drivers in the kernel may make use of DRM functions to make
      tasks like memory management, interrupt handling and DMA easier,
      and provide a uniform interface to applications.
    </para>
    <para>
      A note on versions: this guide covers features found in the DRM
      tree, including the TTM memory manager, output configuration and
      mode setting, and the new vblank internals, in addition to all
      the regular features found in current kernels.
    </para>
    <para>
      [Insert diagram of typical DRM stack here]
    </para>
  </chapter>
  <!-- Internals -->

  <chapter id="drmInternals">
    <title>DRM Internals</title>
    <para>
      This chapter documents DRM internals relevant to driver authors
      and developers working to add support for the latest features to
      existing drivers.
    </para>
    <para>
      First, we go over some typical driver initialization
      requirements, like setting up command buffers, creating an
      initial output configuration, and initializing core services.
      Subsequent sections cover core internals in more detail,
      providing implementation notes and examples.
    </para>
    <para>
      The DRM layer provides several services to graphics drivers,
      many of them driven by the application interfaces it provides
      through libdrm, the library that wraps most of the DRM ioctls.
      These include vblank event handling, memory
      management, output management, framebuffer management, command
      submission &amp; fencing, suspend/resume support, and DMA
      services.
    </para>
    <para>
      The core of every DRM driver is struct drm_driver.  Drivers
      typically statically initialize a drm_driver structure,
      then pass it to drm_init() at load time.
    </para>

  <!-- Internals: driver init -->

  <sect1>
    <title>Driver initialization</title>
    <para>
      Before calling the DRM initialization routines, the driver must
      first create and fill out a struct drm_driver structure.
    </para>
    <programlisting>
      static struct drm_driver driver = {
        /* Don't use MTRRs here; the Xserver or userspace app should
         * deal with them for Intel hardware.
         */
        .driver_features =
            DRIVER_USE_AGP | DRIVER_REQUIRE_AGP |
            DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED | DRIVER_MODESET,
        .load = i915_driver_load,
        .unload = i915_driver_unload,
        .firstopen = i915_driver_firstopen,
        .lastclose = i915_driver_lastclose,
        .preclose = i915_driver_preclose,
        .save = i915_save,
        .restore = i915_restore,
        .device_is_agp = i915_driver_device_is_agp,
        .get_vblank_counter = i915_get_vblank_counter,
        .enable_vblank = i915_enable_vblank,
        .disable_vblank = i915_disable_vblank,
        .irq_preinstall = i915_driver_irq_preinstall,
        .irq_postinstall = i915_driver_irq_postinstall,
        .irq_uninstall = i915_driver_irq_uninstall,
        .irq_handler = i915_driver_irq_handler,
        .reclaim_buffers = drm_core_reclaim_buffers,
        .get_map_ofs = drm_core_get_map_ofs,
        .get_reg_ofs = drm_core_get_reg_ofs,
        .fb_probe = intelfb_probe,
        .fb_remove = intelfb_remove,
        .fb_resize = intelfb_resize,
        .master_create = i915_master_create,
        .master_destroy = i915_master_destroy,
#if defined(CONFIG_DEBUG_FS)
        .debugfs_init = i915_debugfs_init,
        .debugfs_cleanup = i915_debugfs_cleanup,
#endif
        .gem_init_object = i915_gem_init_object,
        .gem_free_object = i915_gem_free_object,
        .gem_vm_ops = &amp;i915_gem_vm_ops,
        .ioctls = i915_ioctls,
        .fops = {
                .owner = THIS_MODULE,
                .open = drm_open,
                .release = drm_release,
                .ioctl = drm_ioctl,
                .mmap = drm_mmap,
                .poll = drm_poll,
                .fasync = drm_fasync,
#ifdef CONFIG_COMPAT
                .compat_ioctl = i915_compat_ioctl,
#endif
                .llseek = noop_llseek,
                },
        .pci_driver = {
                .name = DRIVER_NAME,
                .id_table = pciidlist,
                .probe = probe,
                .remove = __devexit_p(drm_cleanup_pci),
                },
        .name = DRIVER_NAME,
        .desc = DRIVER_DESC,
        .date = DRIVER_DATE,
        .major = DRIVER_MAJOR,
        .minor = DRIVER_MINOR,
        .patchlevel = DRIVER_PATCHLEVEL,
      };
    </programlisting>
    <para>
      In the example above, taken from the i915 DRM driver, the driver
      sets several flags indicating what core features it supports;
      we go over the individual callbacks in later sections.  Since
      flags indicate which features your driver supports to the DRM
      core, you need to set most of them prior to calling drm_init().  Some,
      like DRIVER_MODESET, can be set later based on user-supplied parameters,
      but that's the exception rather than the rule.
    </para>
    <variablelist>
      <title>Driver flags</title>
      <varlistentry>
        <term>DRIVER_USE_AGP</term>
        <listitem><para>
            Driver uses the AGP interface.
        </para></listitem>
      </varlistentry>
      <varlistentry>
        <term>DRIVER_REQUIRE_AGP</term>
        <listitem><para>
            Driver needs the AGP interface to function.
        </para></listitem>
      </varlistentry>
      <varlistentry>
        <term>DRIVER_USE_MTRR</term>
        <listitem>
          <para>
            Driver uses the MTRR interface for mapping memory.  Deprecated.
          </para>
        </listitem>
      </varlistentry>
      <varlistentry>
        <term>DRIVER_PCI_DMA</term>
        <listitem><para>
            Driver is capable of PCI DMA.  Deprecated.
        </para></listitem>
      </varlistentry>
      <varlistentry>
        <term>DRIVER_SG</term>
        <listitem><para>
            Driver can perform scatter/gather DMA.  Deprecated.
        </para></listitem>
      </varlistentry>
      <varlistentry>
        <term>DRIVER_HAVE_DMA</term>
        <listitem><para>Driver supports DMA.  Deprecated.</para></listitem>
      </varlistentry>
      <varlistentry>
        <term>DRIVER_HAVE_IRQ</term><term>DRIVER_IRQ_SHARED</term>
        <listitem>
          <para>
            DRIVER_HAVE_IRQ indicates whether the driver has an IRQ
            handler.  DRIVER_IRQ_SHARED indicates whether the device &amp;
            handler support shared IRQs (note that this is required of
            PCI drivers).
          </para>
        </listitem>
      </varlistentry>
      <varlistentry>
        <term>DRIVER_DMA_QUEUE</term>
        <listitem>
          <para>
            Should be set if the driver queues DMA requests and completes them
            asynchronously.  Deprecated.
          </para>
        </listitem>
      </varlistentry>
      <varlistentry>
        <term>DRIVER_FB_DMA</term>
        <listitem>
          <para>
            Driver supports DMA to/from the framebuffer.  Deprecated.
          </para>
        </listitem>
      </varlistentry>
      <varlistentry>
        <term>DRIVER_MODESET</term>
        <listitem>
          <para>
            Driver supports mode setting interfaces.
          </para>
        </listitem>
      </varlistentry>
    </variablelist>
    <para>
      In this specific case, the driver requires AGP and supports
      IRQs.  DMA, as discussed later, is handled by device-specific ioctls
      in this case.  It also supports the kernel mode setting APIs, though
      unlike in the actual i915 driver source, this example unconditionally
      exports KMS capability.
    </para>
  </sect1>
  <!-- Internals: driver load -->

  <sect1>
    <title>Driver load</title>
    <para>
      In the previous section, we saw what a typical drm_driver
      structure might look like.  One of the more important fields in
      the structure is the hook for the load function.
    </para>
    <programlisting>
      static struct drm_driver driver = {
        ...
        .load = i915_driver_load,
        ...
      };
    </programlisting>
    <para>
      The load function has many responsibilities: allocating a driver
      private structure, specifying supported performance counters,
      configuring the device (e.g. mapping registers &amp; command
      buffers), initializing the memory manager, and setting up the
      initial output configuration.
    </para>
    <para>
      If compatibility is a concern (e.g. with drivers converted over
      to the new interfaces from the old ones), care must be taken to
      prevent device initialization and control that is incompatible with
      currently active userspace drivers.  For instance, if user
      level mode setting drivers are in use, it would be problematic
      to perform output discovery &amp; configuration at load time.
      Likewise, if user-level drivers unaware of memory management are
      in use, memory management and command buffer setup may need to
      be omitted.  These requirements are driver-specific, and care
      needs to be taken to keep both old and new applications and
      libraries working.  The i915 driver supports the "modeset"
      module parameter to control whether advanced features are
      enabled at load time or in legacy fashion.
    </para>
    <sect2>
      <title>Driver private &amp; performance counters</title>
      <para>
        The driver private hangs off the main drm_device structure and
        can be used for tracking various device-specific bits of
        information, like register offsets, command buffer status,
        register state for suspend/resume, etc.  At load time, a
        driver may simply allocate one and set drm_device.dev_private
        appropriately; it should be freed and drm_device.dev_private set
        to NULL when the driver is unloaded.
      </para>
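      <para>
        As a minimal sketch, the load and unload hooks might manage the
        driver private like this (the foo_* names are hypothetical, not
        taken from a real driver):
      </para>
      <programlisting>
      static int foo_driver_load(struct drm_device *dev, unsigned long flags)
      {
              struct foo_private *dev_priv;

              /* Allocate the driver private and hang it off the device */
              dev_priv = kzalloc(sizeof(*dev_priv), GFP_KERNEL);
              if (!dev_priv)
                      return -ENOMEM;
              dev->dev_private = dev_priv;
              return 0;
      }

      static int foo_driver_unload(struct drm_device *dev)
      {
              /* Free the private and clear the pointer at unload time */
              kfree(dev->dev_private);
              dev->dev_private = NULL;
              return 0;
      }
      </programlisting>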
      <para>
        The DRM supports several counters which may be used for rough
        performance characterization.  Note that the DRM stat counter
        system is not often used by applications, and supporting
        additional counters is completely optional.
      </para>
      <para>
        These interfaces are deprecated and should not be used.  If performance
        monitoring is desired, the developer should investigate and
        potentially enhance the kernel perf and tracing infrastructure to export
        GPU related performance information for consumption by performance
        monitoring tools and applications.
      </para>
    </sect2>
    <sect2>
      <title>Configuring the device</title>
      <para>
        Obviously, device configuration is device-specific.
        However, there are several common operations: finding a
        device's PCI resources, mapping them, and potentially setting
        up an IRQ handler.
      </para>
      <para>
        Finding &amp; mapping resources is fairly straightforward.  The
        DRM wrapper functions, drm_get_resource_start() and
        drm_get_resource_len(), may be used to find BARs on the given
        drm_device struct.  Once those values have been retrieved, the
        driver load function can call drm_addmap() to create a new
        mapping for the BAR in question.  Note that you probably want a
        drm_local_map_t in your driver private structure to track any
        mappings you create.
<!-- !Fdrivers/gpu/drm/drm_bufs.c drm_get_resource_* -->
<!-- !Finclude/drm/drmP.h drm_local_map_t -->
      </para>
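      <para>
        A sketch of such a mapping, assuming the registers live in BAR 0
        (the helpers are DRM core functions; the surrounding function and
        flag choice are illustrative):
      </para>
      <programlisting>
      static int foo_map_registers(struct drm_device *dev)
      {
              drm_local_map_t *mmio;
              unsigned long base, len;

              /* Find the register BAR (BAR 0 is assumed here) */
              base = drm_get_resource_start(dev, 0);
              len = drm_get_resource_len(dev, 0);

              /* Create a mapping for it; the map is returned in mmio */
              return drm_addmap(dev, base, len, _DRM_REGISTERS,
                                _DRM_KERNEL | _DRM_DRIVER, &amp;mmio);
      }
      </programlisting>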
      <para>
        If compatibility with other operating systems isn't a concern
        (DRM drivers can run under various BSD variants and OpenSolaris),
        native Linux calls may be used for the above, e.g. pci_resource_*
        and ioremap*/iounmap.  See the Linux device driver book for more
        info.
      </para>
      <para>
        Once you have a register map, you may use the DRM_READn() and
        DRM_WRITEn() macros to access the registers on your device, or
        use driver-specific versions to offset into your MMIO space
        relative to a driver-specific base pointer (see I915_READ for
        an example).
      </para>
      <para>
        If your device supports interrupt generation, you may want to
        set up an interrupt handler when the driver is loaded.  This
        is done using the drm_irq_install() function.  If your device
        supports vertical blank interrupts, it should call
        drm_vblank_init() to initialize the core vblank handling code before
        enabling interrupts on your device.  This ensures the vblank related
        structures are allocated and allows the core to handle vblank events.
      </para>
<!--!Fdrivers/char/drm/drm_irq.c drm_irq_install-->
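      <para>
        In a load function this ordering typically amounts to something
        like the following (num_pipes, ret, and the error label belong to
        the surrounding, hypothetical load function):
      </para>
      <programlisting>
      /* Set up core vblank bookkeeping before any interrupts can fire */
      ret = drm_vblank_init(dev, num_pipes);
      if (ret)
              goto err;

      /* Now register the interrupt handler and enable interrupts */
      ret = drm_irq_install(dev);
      if (ret)
              goto err;
      </programlisting>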
      <para>
        Once your interrupt handler is registered (it uses your
        drm_driver.irq_handler as the actual interrupt handling
        function), you can safely enable interrupts on your device,
        assuming any other state your interrupt handler uses is also
        initialized.
      </para>
      <para>
        Another task that may be necessary during configuration is
        mapping the video BIOS.  On many devices, the VBIOS describes
        device configuration, LCD panel timings (if any), and contains
        flags indicating device state.  Mapping the BIOS can be done
        using the pci_map_rom() call, a convenience function that
        takes care of mapping the actual ROM, whether it has been
        shadowed into memory (typically at address 0xc0000) or exists
        on the PCI device in the ROM BAR.  Note that after the ROM
        has been mapped and any necessary information has been extracted,
        it should be unmapped; on many devices, the ROM address decoder is
        shared with other BARs, so leaving it mapped could cause
        undesired behavior like hangs or memory corruption.
<!--!Fdrivers/pci/rom.c pci_map_rom-->
      </para>
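      <para>
        A hedged sketch of VBIOS handling (the parsing step is elided and
        foo_parse_vbios is hypothetical):
      </para>
      <programlisting>
      size_t size;
      void __iomem *bios;

      bios = pci_map_rom(pdev, &amp;size);
      if (!bios)
              return -ENODEV;

      foo_parse_vbios(dev_priv, bios, size);

      /* Unmap again: the ROM address decoder may be shared with other BARs */
      pci_unmap_rom(pdev, bios);
      </programlisting>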
    </sect2>

    <sect2>
      <title>Memory manager initialization</title>
      <para>
        In order to allocate command buffers, cursor memory, scanout
        buffers, etc., as well as support the latest features provided
        by packages like Mesa and the X.Org X server, your driver
        should support a memory manager.
      </para>
      <para>
        If your driver supports memory management (it should!), you
        need to set that up at load time as well.  How you initialize
        it depends on which memory manager you're using: TTM or GEM.
      </para>
      <sect3>
        <title>TTM initialization</title>
        <para>
          TTM (for Translation Table Manager) manages video memory and
          aperture space for graphics devices. TTM supports both UMA devices
          and devices with dedicated video RAM (VRAM), i.e. most discrete
          graphics devices.  If your device has dedicated RAM, supporting
          TTM is desirable.  TTM also integrates tightly with your
          driver-specific buffer execution function.  See the radeon
          driver for examples.
        </para>
        <para>
          The core TTM structure is the ttm_bo_driver struct.  It contains
          several fields with function pointers for initializing the TTM,
          allocating and freeing memory, waiting for command completion
          and fence synchronization, and memory migration.  See the
          radeon_ttm.c file for an example of usage.
        </para>
        <para>
          The ttm_global_reference structure is made up of several fields:
        </para>
        <programlisting>
          struct ttm_global_reference {
                enum ttm_global_types global_type;
                size_t size;
                void *object;
                int (*init) (struct ttm_global_reference *);
                void (*release) (struct ttm_global_reference *);
          };
        </programlisting>
        <para>
          There should be one global reference structure for your memory
          manager as a whole, and there will be others for each object
          created by the memory manager at runtime.  Your global TTM should
          have a type of TTM_GLOBAL_TTM_MEM.  The size field for the global
          object should be sizeof(struct ttm_mem_global), and the init and
          release hooks should point at your driver-specific init and
          release routines, which probably eventually call
          ttm_mem_global_init() and ttm_mem_global_release(), respectively.
        </para>
        <para>
          Once your global TTM accounting structure is set up and initialized
          by calling ttm_global_item_ref() on it,
          you need to create a buffer object TTM to
          provide a pool for buffer object allocation by clients and the
          kernel itself.  The type of this object should be TTM_GLOBAL_TTM_BO,
          and its size should be sizeof(struct ttm_bo_global).  Again,
          driver-specific init and release functions may be provided,
          likely eventually calling ttm_bo_global_init() and
          ttm_bo_global_release(), respectively.  Also, like the previous
          object, ttm_global_item_ref() is used to create an initial reference
          count for the TTM, which will call your initialization function.
        </para>
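        <para>
          For example, the memory accounting object might be set up as
          follows (the foo_* wrappers and the surrounding assignments are
          an illustrative sketch, not code from a real driver):
        </para>
        <programlisting>
        static int foo_ttm_mem_global_init(struct ttm_global_reference *ref)
        {
                return ttm_mem_global_init(ref->object);
        }

        static void foo_ttm_mem_global_release(struct ttm_global_reference *ref)
        {
                ttm_mem_global_release(ref->object);
        }

        ...
        global_ref->global_type = TTM_GLOBAL_TTM_MEM;
        global_ref->size = sizeof(struct ttm_mem_global);
        global_ref->init = &amp;foo_ttm_mem_global_init;
        global_ref->release = &amp;foo_ttm_mem_global_release;
        ret = ttm_global_item_ref(global_ref);
        </programlisting>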
      </sect3>
      <sect3>
        <title>GEM initialization</title>
        <para>
          GEM is an alternative to TTM, designed specifically for UMA
          devices.  It has simpler initialization and execution requirements
          than TTM, but has no VRAM management capability.  Core GEM
          is initialized by calling drm_mm_init() to create
          a GTT DRM MM object, which provides an address space pool for
          object allocation.  In a KMS configuration, the driver
          needs to allocate and initialize a command ring buffer following
          core GEM initialization.  A UMA device usually has what is called a
          "stolen" memory region, which provides space for the initial
          framebuffer and large, contiguous memory regions required by the
          device.  This space is not typically managed by GEM, and it must
          be initialized separately into its own DRM MM object.
        </para>
        <para>
          Initialization is driver-specific. In the case of Intel
          integrated graphics chips like 965GM, GEM initialization can
          be done by calling the internal GEM init function,
          i915_gem_do_init().  Since the 965GM is a UMA device
          (i.e. it doesn't have dedicated VRAM), GEM manages
          making regular RAM available for GPU operations.  Memory set
          aside by the BIOS (called "stolen" memory by the i915
          driver) is managed by the DRM memrange allocator; the
          rest of the aperture is managed by GEM.
          <programlisting>
            /* Basic memrange allocator for stolen space (aka vram) */
            drm_memrange_init(&amp;dev_priv->vram, 0, prealloc_size);
            /* Let GEM Manage from end of prealloc space to end of aperture */
            i915_gem_do_init(dev, prealloc_size, agp_size);
          </programlisting>
        </para>
        <para>
          Once the memory manager has been set up, we may allocate the
          command buffer.  In the i915 case, this is also done with a
          GEM function, i915_gem_init_ringbuffer().
        </para>
      </sect3>
    </sect2>

    <sect2>
      <title>Output configuration</title>
      <para>
        The final initialization task is output configuration.  This involves:
        <itemizedlist>
          <listitem>
            Finding and initializing the CRTCs, encoders, and connectors
            for the device.
          </listitem>
          <listitem>
            Creating an initial configuration.
          </listitem>
          <listitem>
            Registering a framebuffer console driver.
          </listitem>
        </itemizedlist>
      </para>
      <sect3>
        <title>Output discovery and initialization</title>
        <para>
          Several core functions exist to create CRTCs, encoders, and
          connectors, namely: drm_crtc_init(), drm_connector_init(), and
          drm_encoder_init(), along with several "helper" functions to
          perform common tasks.
        </para>
        <para>
          Connectors should be registered with sysfs once they've been
          detected and initialized, using the
          drm_sysfs_connector_add() function.  Likewise, when they're
          removed from the system, they should be destroyed with
          drm_sysfs_connector_remove().
        </para>
        <programlisting>
void intel_crt_init(struct drm_device *dev)
{
        struct drm_connector *connector;
        struct intel_output *intel_output;

        intel_output = kzalloc(sizeof(struct intel_output), GFP_KERNEL);
        if (!intel_output)
                return;

        connector = &amp;intel_output->base;
        drm_connector_init(dev, &amp;intel_output->base,
                           &amp;intel_crt_connector_funcs, DRM_MODE_CONNECTOR_VGA);

        drm_encoder_init(dev, &amp;intel_output->enc, &amp;intel_crt_enc_funcs,
                         DRM_MODE_ENCODER_DAC);

        drm_mode_connector_attach_encoder(&amp;intel_output->base,
                                          &amp;intel_output->enc);

        /* Set up the DDC bus. */
        intel_output->ddc_bus = intel_i2c_create(dev, GPIOA, "CRTDDC_A");
        if (!intel_output->ddc_bus) {
                dev_printk(KERN_ERR, &amp;dev->pdev->dev, "DDC bus registration "
                           "failed.\n");
                return;
        }

        intel_output->type = INTEL_OUTPUT_ANALOG;
        connector->interlace_allowed = 0;
        connector->doublescan_allowed = 0;

        drm_encoder_helper_add(&amp;intel_output->enc, &amp;intel_crt_helper_funcs);
        drm_connector_helper_add(connector, &amp;intel_crt_connector_helper_funcs);

        drm_sysfs_connector_add(connector);
}
        </programlisting>
        <para>
          In the example above (again, taken from the i915 driver), a
          CRT connector and encoder combination is created.  A device-specific
          i2c bus is also created for fetching EDID data and
          performing monitor detection.  Once the process is complete,
          the new connector is registered with sysfs to make its
          properties available to applications.
        </para>
        <sect4>
          <title>Helper functions and core functions</title>
          <para>
            Since many PC-class graphics devices have similar display output
            designs, the DRM provides a set of helper functions to make
            output management easier.  The core helper routines handle
            encoder re-routing and the disabling of unused functions following
            mode setting.  Using the helpers is optional, but recommended for
            devices with PC-style architectures (i.e. a set of display planes
            for feeding pixels to encoders which are in turn routed to
            connectors).  Devices with more complex requirements needing
            finer grained management may opt to use the core callbacks
            directly.
          </para>
          <para>
            [Insert typical diagram here.]  [Insert OMAP style config here.]
          </para>
        </sect4>
        <para>
          Each encoder object needs to provide:
          <itemizedlist>
            <listitem>
              A DPMS (basically on/off) function.
            </listitem>
            <listitem>
              A mode-fixup function (for converting requested modes into
              native hardware timings).
            </listitem>
            <listitem>
              Functions (prepare, set, and commit) for use by the core DRM
              helper functions.
            </listitem>
          </itemizedlist>
          Connector helpers need to provide functions (mode-fetch, validity,
          and encoder-matching) for returning an ideal encoder for a given
          connector.  The core connector functions include a DPMS callback,
          save/restore routines (deprecated), detection, mode probing,
          property handling, and cleanup functions.
        </para>
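        <para>
          A hypothetical encoder might wire up the helper callbacks listed
          above like this (the foo_* functions are illustrative, not from
          a real driver):
        </para>
        <programlisting>
        static const struct drm_encoder_helper_funcs foo_encoder_helper_funcs = {
                .dpms = foo_encoder_dpms,             /* on/off control */
                .mode_fixup = foo_encoder_mode_fixup, /* adjust requested mode */
                .prepare = foo_encoder_prepare,       /* called before mode_set */
                .mode_set = foo_encoder_mode_set,     /* program the hardware */
                .commit = foo_encoder_commit,         /* called after mode_set */
        };

        drm_encoder_helper_add(&amp;foo_output->enc, &amp;foo_encoder_helper_funcs);
        </programlisting>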
      </sect3>
    </sect2>
  </sect1>

  <!-- Internals: vblank handling -->

  <sect1>
    <title>VBlank event handling</title>
    <para>
      The DRM core exposes two vertical blank related ioctls:
      <variablelist>
        <varlistentry>
          <term>DRM_IOCTL_WAIT_VBLANK</term>
          <listitem>
            <para>
              This takes a struct drm_wait_vblank structure as its argument,
              and it is used to block or request a signal when a specified
              vblank event occurs.
            </para>
          </listitem>
        </varlistentry>
        <varlistentry>
          <term>DRM_IOCTL_MODESET_CTL</term>
          <listitem>
            <para>
              This should be called by application level drivers before and
              after mode setting, since on many devices the vertical blank
              counter is reset at that time.  Internally, the DRM snapshots
              the last vblank count when the ioctl is called with the
              _DRM_PRE_MODESET command, so that the counter won't go backwards
              (which is dealt with when _DRM_POST_MODESET is used).
            </para>
          </listitem>
        </varlistentry>
      </variablelist>
    </para>
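    <para>
      The pre/post-modeset bookkeeping can be illustrated with a small
      self-contained sketch; this models the idea, not the DRM core's
      actual implementation:
    </para>

```c
#include <assert.h>

/* Hypothetical model of the counter state the core tracks. */
struct vblank_state {
	unsigned int hw_count; /* raw count as read from hardware */
	unsigned int offset;   /* correction accumulated across mode sets */
	unsigned int last;     /* snapshot taken at _DRM_PRE_MODESET */
};

/* Count reported to userspace: hardware count plus correction. */
unsigned int vblank_count(const struct vblank_state *s)
{
	return s->hw_count + s->offset;
}

/* _DRM_PRE_MODESET: remember the last count handed out. */
void pre_modeset(struct vblank_state *s)
{
	s->last = vblank_count(s);
}

/*
 * _DRM_POST_MODESET: the hardware counter was likely reset by the
 * mode set, so bump the offset to keep the reported count monotonic.
 */
void post_modeset(struct vblank_state *s)
{
	if (vblank_count(s) < s->last)
		s->offset = s->last - s->hw_count;
}
```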
    <para>
      To support the functions above, the DRM core provides several
      helper functions for tracking vertical blank counters, and
      requires drivers to provide several callbacks:
      get_vblank_counter(), enable_vblank() and disable_vblank().  The
      core uses get_vblank_counter() to keep the counter accurate
      across interrupt disable periods.  It should return the current
      vertical blank event count, which is often tracked in a device
      register.  The enable and disable vblank callbacks should enable
      and disable vertical blank interrupts, respectively.  In the
      absence of DRM clients waiting on vblank events, the core DRM
      code uses the disable_vblank() function to disable
      interrupts, which saves power.  They are re-enabled again when
      a client calls the vblank wait ioctl above.
    </para>
    <para>
      A device that doesn't provide a count register may simply use an
      internal atomic counter incremented on every vertical blank
      interrupt (and then treat the enable_vblank() and disable_vblank()
      callbacks as no-ops).
    </para>
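    <para>
      A self-contained sketch of that approach (user-space C with
      stdatomic standing in for the kernel's atomic_t; the sw_* names
      are illustrative):
    </para>

```c
#include <stdatomic.h>

/* Software vblank count for hardware without a count register. */
static atomic_uint sw_vblank_count;

/* Called from the interrupt handler on every vertical blank. */
void handle_vblank_irq(void)
{
	atomic_fetch_add(&sw_vblank_count, 1);
}

/* get_vblank_counter() hook: just report the software count. */
unsigned int sw_get_vblank_counter(void)
{
	return atomic_load(&sw_vblank_count);
}

/*
 * enable_vblank()/disable_vblank() hooks become no-ops: the interrupt
 * stays enabled so the count never misses an event.
 */
int sw_enable_vblank(void) { return 0; }
void sw_disable_vblank(void) { }
```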
 670  </sect1>
 672  <sect1>
 673    <title>Memory management</title>
 674    <para>
 675      The memory manager lies at the heart of many DRM operations; it
 676      is required to support advanced client features like OpenGL
 677      pbuffers.  The DRM currently contains two memory managers: TTM
 678      and GEM.
 679    </para>
 681    <sect2>
 682      <title>The Translation Table Manager (TTM)</title>
 683      <para>
 684        TTM was developed by Tungsten Graphics, primarily by Thomas
 685        Hellström, and is intended to be a flexible, high performance
 686        graphics memory manager.
 687      </para>
 688      <para>
 689        Drivers wishing to support TTM must fill out a drm_bo_driver
 690        structure.
 691      </para>
 692      <para>
 693        TTM design background and information belongs here.
 694      </para>
 695    </sect2>
 697    <sect2>
 698      <title>The Graphics Execution Manager (GEM)</title>
 699      <para>
 700        GEM is an Intel project, authored by Eric Anholt and Keith
 701        Packard.  It provides simpler interfaces than TTM, and is well
 702        suited for UMA devices.
 703      </para>
 704      <para>
 705        GEM-enabled drivers must provide gem_init_object() and
 706        gem_free_object() callbacks to support the core memory
 707        allocation routines.  They should also provide several driver-specific
 708        ioctls to support command execution, pinning, buffer
 709        read &amp; write, mapping, and domain ownership transfers.
 710      </para>
      <para>
        On a fundamental level, GEM involves several operations:
        <itemizedlist>
          <listitem><para>Memory allocation and freeing</para></listitem>
          <listitem><para>Command execution</para></listitem>
          <listitem><para>Aperture management at command execution time</para></listitem>
        </itemizedlist>
        Buffer object allocation is relatively
        straightforward and largely provided by Linux's shmem layer, which
        provides memory to back each object.  When mapped into the GTT
        or used in a command buffer, the backing pages for an object are
        flushed to memory and marked write-combined so as to be coherent
        with the GPU.  Likewise, if the CPU accesses an object after the GPU
        has finished rendering to the object, then the object must be made
        coherent with the CPU's view
        of memory, usually involving GPU cache flushing of various kinds.
        This core CPU&lt;-&gt;GPU coherency management is provided by a
        device-specific ioctl, which evaluates an object's current domain and
        performs any necessary flushing or synchronization to put the object
        into the desired coherency domain (note that the object may be busy,
        i.e. an active render target; in that case, setting the domain
        blocks the client and waits for rendering to complete before
        performing any necessary flushing operations).
      </para>
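      <para>
        For example, i915 exposes this coherency management through its
        set_domain ioctl; a client moving an object to the CPU domain
        before reading it back might do something like the following
        (a userspace sketch; other drivers define their own
        equivalents):
      </para>
      <programlisting>
struct drm_i915_gem_set_domain set_domain;

set_domain.handle = bo_handle;	/* GEM handle from allocation */
set_domain.read_domains = I915_GEM_DOMAIN_CPU;
set_domain.write_domain = I915_GEM_DOMAIN_CPU;

/*
 * Blocks while the object is busy (e.g. an active render target),
 * then performs any flushing needed to make the object coherent
 * with the CPU's view of memory.
 */
ioctl(fd, DRM_IOCTL_I915_GEM_SET_DOMAIN, &amp;set_domain);
      </programlisting>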
      <para>
        Perhaps the most important GEM function is providing a command
        execution interface to clients.  Client programs construct command
        buffers containing references to previously allocated memory objects,
        and then submit them to GEM.  At that point, GEM takes care to bind
        all the objects into the GTT, execute the buffer, and provide
        necessary synchronization between clients accessing the same buffers.
        This often involves evicting some objects from the GTT and re-binding
        others (a fairly expensive operation), and providing relocation
        support, which hides fixed GTT offsets from clients.  Clients must
        take care not to submit command buffers that reference more objects
        than can fit in the GTT; otherwise, GEM will reject them and no rendering
        will occur.  Similarly, if several objects in the buffer require
        fence registers to be allocated for correct rendering (e.g. 2D blits
        on pre-965 chips), care must be taken not to require more fence
        registers than are available to the client.  Such resource management
        should be abstracted from the client in libdrm.
      </para>
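      <para>
        To illustrate the relocation support mentioned above, here is the
        shape of a single relocation entry as used by the i915 execbuffer
        interface (field values are illustrative):
      </para>
      <programlisting>
/*
 * GEM rewrites the dword at 'offset' in the batch with the GTT
 * address of 'target_handle' plus 'delta' once the objects are
 * bound, so clients never need to know fixed GTT offsets.
 */
struct drm_i915_gem_relocation_entry reloc;

reloc.target_handle = texture_handle;	/* object the batch references */
reloc.delta = 0;			/* byte offset within the target */
reloc.offset = state_offset;		/* location to patch in the batch */
reloc.presumed_offset = 0;		/* last-known GTT offset, if any */
reloc.read_domains = I915_GEM_DOMAIN_SAMPLER;
reloc.write_domain = 0;
      </programlisting>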
    </sect2>

  </sect1>

  <!-- Output management -->
  <sect1>
    <title>Output management</title>
    <para>
      At the core of the DRM output management code is a set of
      structures representing CRTCs, encoders, and connectors.
    </para>
    <para>
      A CRTC is an abstraction representing a part of the chip that
      contains a pointer to a scanout buffer.  Therefore, the number
      of CRTCs available determines how many independent scanout
      buffers can be active at any given time.  The CRTC structure
      contains several fields to support this: a pointer to some video
      memory, a display mode, and an (x, y) offset into the video
      memory to support panning or configurations where one piece of
      video memory spans multiple CRTCs.
    </para>
    <para>
      An encoder takes pixel data from a CRTC and converts it to a
      format suitable for any attached connectors.  On some devices,
      it may be possible to have a CRTC send data to more than one
      encoder.  In that case, both encoders would receive data from
      the same scanout buffer, resulting in a "cloned" display
      configuration across the connectors attached to each encoder.
    </para>
    <para>
      A connector is the final destination for pixel data on a device,
      and usually connects directly to an external display device like
      a monitor or laptop panel.  A connector can only be attached to
      one encoder at a time.  The connector is also the structure
      where information about the attached display is kept, so it
      contains fields for display data, EDID data, DPMS &amp;
      connection status, and information about modes supported on the
      attached displays.
    </para>
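    <para>
      The relationships above can be summarized with a heavily abridged
      view of the core structures (see the DRM headers for the full
      definitions; only the fields discussed here are shown):
    </para>
    <programlisting>
struct drm_crtc {
	struct drm_framebuffer *fb;	/* scanout buffer */
	struct drm_display_mode mode;	/* current display mode */
	int x, y;			/* panning offset into fb */
	/* ... */
};

struct drm_encoder {
	struct drm_crtc *crtc;		/* CRTC feeding this encoder */
	/* ... */
};

struct drm_connector {
	struct drm_encoder *encoder;	/* encoder driving this connector */
	/* EDID data, DPMS &amp; connection status, mode list, ... */
};
    </programlisting>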

  </sect1>

  <sect1>
    <title>Framebuffer management</title>
    <para>
      Clients need to provide a framebuffer object which provides a source
      of pixels for a CRTC to deliver to the encoder(s) and ultimately the
      connector(s). A framebuffer is fundamentally a driver-specific memory
      object, made into an opaque handle by the DRM's addfb() function.
      Once a framebuffer has been created this way, it may be passed to the
      KMS mode setting routines for use in a completed configuration.
    </para>
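    <para>
      From userspace, framebuffer creation goes through the ADDFB ioctl
      (normally via its libdrm wrapper); a sketch for a 1280x800, 32 bpp
      configuration might look like this:
    </para>
    <programlisting>
struct drm_mode_fb_cmd fb_cmd;

fb_cmd.width = 1280;
fb_cmd.height = 800;
fb_cmd.pitch = 1280 * 4;	/* bytes per scanline */
fb_cmd.bpp = 32;
fb_cmd.depth = 24;
fb_cmd.handle = bo_handle;	/* driver-specific memory object */

ioctl(fd, DRM_IOCTL_MODE_ADDFB, &amp;fb_cmd);
/* on success, fb_cmd.fb_id names the framebuffer in modeset calls */
    </programlisting>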
  </sect1>

  <sect1>
    <title>Command submission &amp; fencing</title>
    <para>
      This should cover a few device-specific command submission
      implementations.
    </para>
  </sect1>

  <sect1>
    <title>Suspend/resume</title>
    <para>
      The DRM core provides some suspend/resume code, but drivers
      wanting full suspend/resume support should provide save() and
      restore() functions.  These are called at suspend,
      hibernate, or resume time, and should perform any state save or
      restore required by your device across suspend or hibernate
      states.
    </para>
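    <para>
      These functions are wired into the driver's drm_driver suspend and
      resume hooks; in the sketch below, foo_save_state() and
      foo_restore_state() are hypothetical stand-ins for the
      device-specific register save and restore logic:
    </para>
    <programlisting>
static int foo_suspend(struct drm_device *dev, pm_message_t state)
{
	foo_save_state(dev);	/* dump device registers */
	return 0;
}

static int foo_resume(struct drm_device *dev)
{
	foo_restore_state(dev);	/* reload device registers */
	return 0;
}

static struct drm_driver foo_driver = {
	/* ... */
	.suspend = foo_suspend,
	.resume = foo_resume,
};
    </programlisting>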
  </sect1>

  <sect1>
    <title>DMA services</title>
    <para>
      This should cover how DMA mapping etc. is supported by the core.
      These functions are deprecated and should not be used.
    </para>
  </sect1>
  </chapter>

  <!-- External interfaces -->

  <chapter id="drmExternals">
    <title>Userland interfaces</title>
    <para>
      The DRM core exports several interfaces to applications,
      generally intended to be used through corresponding libdrm
      wrapper functions.  In addition, drivers export device-specific
      interfaces for use by userspace drivers &amp; device-aware
      applications through ioctls and sysfs files.
    </para>
    <para>
      External interfaces include: memory mapping, context management,
      DMA operations, AGP management, vblank control, fence
      management, memory management, and output management.
    </para>
    <para>
      Cover generic ioctls and sysfs layout here.  We only need high-level
      info, since man pages should cover the rest.
    </para>
  </chapter>

  <!-- API reference -->

  <appendix id="drmDriverApi">
    <title>DRM Driver API</title>
    <para>
      Include auto-generated API reference here (need to reference it
      from paragraphs above too).
    </para>
  </appendix>
</book>