Merge tag 'docs-5.8' of git://git.lwn.net/linux

Pull documentation updates from Jonathan Corbet:
 "A fair amount of stuff this time around, dominated by yet another
  massive set from Mauro toward the completion of the RST conversion. I
  *really* hope we are getting close to the end of this. Meanwhile,
  those patches reach pretty far afield to update document references
  around the tree; there should be no actual code changes there. There
  will be, alas, more of the usual trivial merge conflicts.

  Beyond that we have more translations, improvements to the sphinx
  scripting, a number of additions to the sysctl documentation, and
  lots of fixes"

* tag 'docs-5.8' of git://git.lwn.net/linux: (130 commits)
  Documentation: fixes to the maintainer-entry-profile template
  zswap: docs/vm: Fix typo accept_threshold_percent in zswap.rst
  tracing: Fix events.rst section numbering
  docs: acpi: fix old http link and improve document format
  docs: filesystems: add info about efivars content
  Documentation: LSM: Correct the basic LSM description
  mailmap: change email for Ricardo Ribalda
  docs: sysctl/kernel: document unaligned controls
  Documentation: admin-guide: update bug-hunting.rst
  docs: sysctl/kernel: document ngroups_max
  nvdimm: fixes to maintainter-entry-profile
  Documentation/features: Correct RISC-V kprobes support entry
  Documentation/features: Refresh the arch support status files
  Revert "docs: sysctl/kernel: document ngroups_max"
  docs: move locking-specific documents to locking/
  docs: move digsig docs to the security book
  docs: move the kref doc into the core-api book
  docs: add IRQ documentation at the core-api book
  docs: debugging-via-ohci1394.txt: add it to the core-api book
  docs: fix references for ipmi.rst file
  ...
This commit is contained in commit b23c4771ff.

267 changed files with 6300 additions and 4337 deletions
Documentation/core-api/debugging-via-ohci1394.rst (new file, 185 lines)

@@ -0,0 +1,185 @@
===========================================================================
Using physical DMA provided by OHCI-1394 FireWire controllers for debugging
===========================================================================

Introduction
------------

Basically all FireWire controllers which are in use today are compliant
with the OHCI-1394 specification, which defines the controller to be a PCI
bus master that uses DMA to offload data transfers from the CPU and has
a "Physical Response Unit" which executes specific requests by employing
PCI bus master DMA after applying filters defined by the OHCI-1394 driver.

Once properly configured, remote machines can send these requests to
ask the OHCI-1394 controller to perform read and write requests on
physical system memory and, for read requests, send the result of
the physical memory read back to the requester.

With that, it is possible to debug issues by reading interesting memory
locations such as buffers like the printk buffer or the process table.

Retrieving a full system memory dump is also possible over FireWire,
using data transfer rates on the order of 10 MB/s or more.

With most FireWire controllers, memory access is limited to the low 4 GB
of physical address space. This can be a problem on IA64 machines where
memory is located mostly above that limit, but it is rarely a problem on
more common hardware such as x86, x86-64 and PowerPC.

At least LSI FW643e and FW643e2 controllers are known to support access to
physical addresses above 4 GB, but this feature is currently not enabled by
Linux.

Together with early initialization of the OHCI-1394 controller for
debugging, this facility proved most useful for examining long debug logs
in the printk buffer when debugging early boot problems in areas like ACPI,
where the system fails to boot and other means of debugging (serial port)
are either not available (notebooks) or too slow for extensive debug
information (like ACPI).

Drivers
-------

The firewire-ohci driver in drivers/firewire uses filtered physical
DMA by default, which is more secure but not suitable for remote debugging.
Pass the remote_dma=1 parameter to the driver to get unfiltered physical
DMA.

Because the firewire-ohci driver depends on the PCI enumeration being
completed, an initialization routine which runs pretty early has been
implemented for x86. This routine runs long before console_init() can be
called, i.e. before the printk buffer appears on the console.

To activate it, enable CONFIG_PROVIDE_OHCI1394_DMA_INIT (Kernel hacking menu:
Remote debugging over FireWire early on boot) and pass the parameter
"ohci1394_dma=early" to the recompiled kernel on boot.

Tools
-----

firescope - Originally developed by Benjamin Herrenschmidt; Andi Kleen
ported it from PowerPC to x86 and x86_64 and added functionality. firescope
can now be used to view the printk buffer of a remote machine, even with
live update.

Bernhard Kaindl enhanced firescope to support accessing 64-bit machines
from 32-bit firescope and vice versa:

- http://v3.sk/~lkundrak/firescope/

and he implemented fast system dump (alpha version - read README.txt):

- http://halobates.de/firewire/firedump-0.1.tar.bz2

There is also a gdb proxy for FireWire which allows gdb to access
data which can be referenced from symbols found by gdb in vmlinux:

- http://halobates.de/firewire/fireproxy-0.33.tar.bz2

The latest version of this gdb proxy (fireproxy-0.34) can communicate with
kgdb over a memory-based communication module (kgdbom); this is not yet
stable.

Getting Started
---------------

The OHCI-1394 specification stipulates that the OHCI-1394 controller must
disable all physical DMA on each bus reset.

This means that if you want to debug an issue in a system state where
interrupts are disabled and where no polling of the OHCI-1394 controller
for bus resets takes place, you have to establish any FireWire cable
connections and fully initialize all FireWire hardware __before__ the
system enters such a state.

Step-by-step instructions for using firescope with early OHCI
initialization:

1) Verify that your hardware is supported:

   Load the firewire-ohci module and check your kernel logs.
   You should see a line similar to::

     firewire_ohci 0000:15:00.1: added OHCI v1.0 device as card 2, 4 IR + 4 IT
     ... contexts, quirks 0x11

   when loading the driver. If you have no supported controller, many PCI,
   CardBus and even some ExpressCard cards which are fully compliant with
   the OHCI-1394 specification are available. If a card requires no driver
   on Windows operating systems, it most likely is compliant. Only
   specialized shops have cards which are not compliant; they are based on
   TI PCILynx chips and require drivers for Windows operating systems.

   The mentioned kernel log message contains the string "physUB" if the
   controller implements a writable Physical Upper Bound register. This is
   required for physical DMA above 4 GB (but not utilized by Linux yet).

2) Establish a working FireWire cable connection:

   Any FireWire cable will do, as long as it provides an electrically and
   mechanically stable connection and has matching connectors (there are
   small 4-pin and large 6-pin FireWire ports).

   If a driver is running on both machines you should see a line like::

     firewire_core 0000:15:00.1: created device fw1: GUID 00061b0020105917, S400

   on both machines in the kernel log when the cable is plugged in
   and connects the two machines.

3) Test physical DMA using firescope:

   On the debug host, make sure that /dev/fw* is accessible,
   then start firescope::

     $ firescope
     Port 0 (/dev/fw1) opened, 2 nodes detected

     FireScope
     ---------
     Target : <unspecified>
     Gen    : 1
     [Ctrl-T] choose target
     [Ctrl-H] this menu
     [Ctrl-Q] quit

   ------> Press Ctrl-T now, the output should be similar to::

     2 nodes available, local node is: 0
      0: ffc0, uuid: 00000000 00000000 [LOCAL]
      1: ffc1, uuid: 00279000 ba4bb801

   Besides the [LOCAL] node, it must show another node without an error
   message.

4) Prepare for debugging with early OHCI-1394 initialization:

   4.1) Kernel compilation and installation on the debug target

   Compile the kernel to be debugged with CONFIG_PROVIDE_OHCI1394_DMA_INIT
   (Kernel hacking: Provide code for enabling DMA over FireWire early on
   boot) enabled, and install it on the machine to be debugged (debug
   target).

   4.2) Transfer the System.map of the debugged kernel to the debug host

   Copy the System.map of the kernel to be debugged to the debug host (the
   host which is connected to the debugged machine over the FireWire
   cable).

5) Retrieving the printk buffer contents:

   With the FireWire cable connected and the OHCI-1394 driver on the
   debugging host loaded, reboot the debugged machine, booting the kernel
   which has CONFIG_PROVIDE_OHCI1394_DMA_INIT enabled, with the option
   ohci1394_dma=early.

   Then, on the debugging host, run firescope, for example by using -A::

     firescope -A System.map-of-debug-target-kernel

   Note: -A automatically attaches to the first non-local node. It only
   works reliably if only two machines are connected using FireWire.

   After having attached to the debug target, press Ctrl-D to view the
   complete printk buffer or Ctrl-U to enter auto update mode and get an
   updated live view of recent kernel messages logged on the debug target.

   Call "firescope -h" to get more information on firescope's options.

Notes
-----

Documentation and specifications: http://halobates.de/firewire/

FireWire is a trademark of Apple Inc. - for more information please refer to:
https://en.wikipedia.org/wiki/FireWire

Documentation/core-api/dma-api-howto.rst (new file, 929 lines)

@@ -0,0 +1,929 @@
=========================
Dynamic DMA mapping Guide
=========================

:Author: David S. Miller <davem@redhat.com>
:Author: Richard Henderson <rth@cygnus.com>
:Author: Jakub Jelinek <jakub@redhat.com>

This is a guide to device driver writers on how to use the DMA API
with example pseudo-code. For a concise description of the API, see
DMA-API.txt.

CPU and DMA addresses
=====================

There are several kinds of addresses involved in the DMA API, and it's
important to understand the differences.

The kernel normally uses virtual addresses. Any address returned by
kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
be stored in a ``void *``.

The virtual memory system (TLB, page tables, etc.) translates virtual
addresses to CPU physical addresses, which are stored as "phys_addr_t" or
"resource_size_t". The kernel manages device resources like registers as
physical addresses. These are the addresses in /proc/iomem. The physical
address is not directly useful to a driver; it must use ioremap() to map
the space and produce a virtual address.

I/O devices use a third kind of address: a "bus address". If a device has
registers at an MMIO address, or if it performs DMA to read or write system
memory, the addresses used by the device are bus addresses. In some
systems, bus addresses are identical to CPU physical addresses, but in
general they are not. IOMMUs and host bridges can produce arbitrary
mappings between physical and bus addresses.

From a device's point of view, DMA uses the bus address space, but it may
be restricted to a subset of that space. For example, even if a system
supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
so devices only need to use 32-bit DMA addresses.

Here's a picture and some examples::

               CPU                  CPU                  Bus
             Virtual              Physical             Address
             Address              Address               Space
              Space                Space

            +-------+             +------+             +------+
            |       |             |MMIO  |   Offset    |      |
            |       |  Virtual    |Space |   applied   |      |
          C +-------+ --------> B +------+ ----------> +------+ A
            |       |  mapping    |      |   by host   |      |
  +-----+   |       |             |      |   bridge    |      |   +--------+
  |     |   |       |             +------+             |      |   |        |
  | CPU |   |       |             | RAM  |             |      |   | Device |
  |     |   |       |             |      |             |      |   |        |
  +-----+   +-------+             +------+             +------+   +--------+
            |       |  Virtual    |Buffer|   Mapping   |      |
          X +-------+ --------> Y +------+ <---------- +------+ Z
            |       |  mapping    | RAM  |   by IOMMU
            |       |             |      |
            |       |             |      |
            +-------+             +------+

During the enumeration process, the kernel learns about I/O devices and
their MMIO space and the host bridges that connect them to the system. For
example, if a PCI device has a BAR, the kernel reads the bus address (A)
from the BAR and converts it to a CPU physical address (B). The address B
is stored in a struct resource and usually exposed via /proc/iomem. When a
driver claims a device, it typically uses ioremap() to map physical address
B at a virtual address (C). It can then use, e.g., ioread32(C), to access
the device registers at bus address A.

If the device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X). The virtual
memory system maps X to a physical address (Y) in system RAM. The driver
can use virtual address X to access the buffer, but the device itself
cannot because DMA doesn't go through the CPU virtual memory system.

In some simple systems, the device can do DMA directly to physical address
Y. But in many others, there is IOMMU hardware that translates DMA
addresses to physical addresses, e.g., it translates Z to Y. This is part
of the reason for the DMA API: the driver can give a virtual address X to
an interface like dma_map_single(), which sets up any required IOMMU
mapping and returns the DMA address Z. The driver then tells the device to
do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
RAM.

For Linux to use dynamic DMA mapping, it needs some help from the drivers:
namely, a driver has to take into account that DMA addresses should be
mapped only for the time they are actually used, and unmapped after the
DMA transfer.

The following API will of course work even on platforms where no such
hardware exists.

Note that the DMA API works with any bus independent of the underlying
microprocessor architecture. You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.

First of all, you should make sure::

	#include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t. This type
can hold any valid DMA address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.

What memory is DMA'able?
========================

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities. There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA. It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va(). [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA. These could all be mapped somewhere entirely
different than the rest of physical memory. Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned. Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that. This is similar to vmalloc().

What about block I/O and networking buffers? The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.
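
As a rough sketch of these rules (``dev`` and ``BUF_SIZE`` here are
hypothetical placeholders), a buffer from kmalloc() may be handed to the
DMA mapping functions, while a vmalloc() buffer may not::

	#include <linux/dma-mapping.h>
	#include <linux/slab.h>
	#include <linux/vmalloc.h>

	void *good = kmalloc(BUF_SIZE, GFP_KERNEL);	/* DMA'able */
	void *bad  = vmalloc(BUF_SIZE);			/* _not_ DMA'able */
	dma_addr_t handle;

	/* Fine: the memory came from the generic allocator. */
	handle = dma_map_single(dev, good, BUF_SIZE, DMA_TO_DEVICE);

	/*
	 * Wrong: vmalloc() memory is only virtually contiguous; mapping
	 * its start address would cover the wrong physical pages:
	 *
	 *	handle = dma_map_single(dev, bad, BUF_SIZE, DMA_TO_DEVICE);
	 */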

DMA addressing capabilities
===========================

By default, the kernel assumes that your device can address 32 bits of DMA
address space. For a 64-bit capable device, this needs to be increased, and
for a device with limitations, it needs to be decreased.

Special note about PCI: the PCI-X specification requires PCI-X devices to
support 64-bit addressing (DAC) for all transactions. And at least one
platform (SGI SN2) requires 64-bit consistent allocations to operate
correctly when the IO bus is in PCI-X mode.

For correct operation, you must set the DMA mask to inform the kernel about
your device's DMA addressing capabilities.

This is performed via a call to dma_set_mask_and_coherent()::

	int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which will set the mask for both streaming and coherent APIs together. If
you have some special requirements, then the following two separate calls
can be used instead:

  The setup for streaming mappings is performed via a call to
  dma_set_mask()::

	int dma_set_mask(struct device *dev, u64 mask);

  The setup for consistent allocations is performed via a call
  to dma_set_coherent_mask()::

	int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask is a
bit mask describing which bits of an address your device supports. Often
the device struct of your device is embedded in the bus-specific device
struct of your device. For example, &pdev->dev is a pointer to the device
struct of a PCI device (pdev is a pointer to the PCI device struct of your
device).

These calls usually return zero to indicate your device can perform DMA
properly on the machine given the address mask you provided, but they might
return an error if the mask is too small to be supportable on the given
system. If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined behavior.
You must not use DMA on this device unless the dma_set_mask family of
functions has returned success.

This means that in the failure case, you have two options:

1) Use some non-DMA mode for data transfer, if possible.
2) Ignore this device and do not initialize it.

It is recommended that your driver print a kernel KERN_WARNING message when
setting the DMA mask fails. In this manner, if a user of your driver reports
that performance is bad or that the device is not even detected, you can ask
them for the kernel messages to find out exactly why.

The standard 64-bit addressing device would do something like this::

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}

If the device only supports 32-bit addressing for descriptors in the
coherent allocations, but supports full 64-bits for streaming mappings,
it would look like this::

	if (dma_set_mask(dev, DMA_BIT_MASK(64))) {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}

The coherent mask will always be able to set the same or a smaller mask
than the streaming mask. However, for the rare case that a device driver
only uses consistent allocations, one would have to check the return value
from dma_set_coherent_mask().

Finally, if your device can only drive the low 24-bits of
address you might do something like::

	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
		dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
		goto ignore_this_device;
	}

When dma_set_mask() or dma_set_mask_and_coherent() is successful, and
returns zero, the kernel saves away this mask you have provided. The
kernel will use this information later when you make DMA mappings.

There is a case which we are aware of at this time, which is worth
mentioning in this documentation. If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle. It
is important that the last call to dma_set_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done::

	#define PLAYBACK_ADDRESS_BITS	DMA_BIT_MASK(32)
	#define RECORD_ADDRESS_BITS	DMA_BIT_MASK(24)

	struct my_sound_card *card;
	struct device *dev;

	...
	if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
		card->playback_enabled = 1;
	} else {
		card->playback_enabled = 0;
		dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
			 card->name);
	}
	if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
		card->record_enabled = 1;
	} else {
		card->record_enabled = 0;
		dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
			 card->name);
	}

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

Types of DMA mappings
=====================

There are two types of DMA mappings:

- Consistent DMA mappings, which are usually mapped at driver
  initialization, unmapped at the end, and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the DMA space. However, for future compatibility you should
  set the consistent mask even if this default is fine for your
  driver.

  Good examples of what to use consistent mappings for are:

	- Network card DMA ring descriptors.
	- SCSI adapter mailbox command data structures.
	- Device firmware microcode executed out of
	  main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa. Consistent mappings guarantee this.

  .. important::

	     Consistent DMA memory does not preclude the usage of
	     proper memory barriers. The CPU may reorder stores to
	     consistent memory just as it may normal memory. Example:
	     if it is important for the device to see the first word
	     of a descriptor updated before the second, you must do
	     something like::

		desc->word0 = address;
		wmb();
		desc->word1 = DESC_VALID;

	     in order to get correct behavior on all platforms.

	     Also, on some platforms your driver may need to flush CPU write
	     buffers in much the same way as it needs to flush write buffers
	     found in PCI bridges (such as by reading a register's value
	     after writing it); a sketch of this read-back pattern follows
	     this list.

- Streaming DMA mappings, which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

	- Networking buffers transmitted/received by a device.
	- Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows. To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.
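
Here is a minimal sketch of the read-back flush mentioned in the note
above, assuming a hypothetical register block ``regs`` obtained from
ioremap() and a hypothetical ``MY_CTRL`` register and ``start_cmd``
value::

	#include <linux/io.h>

	writel(start_cmd, regs + MY_CTRL);	/* may sit in a write buffer */
	(void)readl(regs + MY_CTRL);		/* read back to force it out */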

Using Consistent DMA mappings
=============================

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do::

	dma_addr_t dma_handle;

	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a ``struct device *``. This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages() (but takes size instead of a page order). If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The consistent DMA mapping interfaces will by default return a DMA address
which is 32-bit addressable. Even if the device indicates (via the DMA mask)
that it may address the upper 32-bits, consistent allocation will only
return > 32-bit addresses for DMA if the consistent DMA mask has been
explicitly changed via dma_set_coherent_mask(). This is true of the
dma_pool interface as well.

dma_alloc_coherent() returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The CPU virtual address and the DMA address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size. This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call::

	dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev and size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent() returned to you.
This function may not be called in interrupt context.

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent(),
or you can use the dma_pool API to do that. A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a dma_pool like this::

	struct dma_pool *pool;

	pool = dma_pool_create(name, dev, size, align, boundary);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above. The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two). If your device has no boundary crossing restrictions,
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better to
use dma_alloc_coherent() directly instead).

Allocate memory from a DMA pool like this::

	cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise. Like dma_alloc_coherent(),
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this::

	dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc(), and cpu_addr and
dma_handle are the values dma_pool_alloc() returned. This function
may be called in interrupt context.

Destroy a dma_pool by calling::

	dma_pool_destroy(pool);

Make sure you've called dma_pool_free() for all memory allocated
from a pool before you destroy the pool. This function may not
be called in interrupt context.

DMA Direction
=============

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values::

	DMA_BIDIRECTIONAL
	DMA_TO_DEVICE
	DMA_FROM_DEVICE
	DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device".
DMA_FROM_DEVICE means "from the device to main memory".
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL. It means that the DMA can go in
either direction. The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance, for example.

The value DMA_NONE is to be used for debugging. One can
hold this in a data structure before one comes to know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean with which DMA
mappings can be marked, much like page protections in the user
program address space. Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction; consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.

For networking drivers, it's a rather simple affair. For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier. For receive packets, just the opposite: map/unmap them
with the DMA_FROM_DEVICE direction specifier.
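
For instance, a transmit hook along these lines would map the outgoing
packet with DMA_TO_DEVICE (a hedged sketch: ``struct my_priv`` and its
``dev`` member are hypothetical; dma_map_single(), dev_kfree_skb() and
NETDEV_TX_OK are the real interfaces)::

	#include <linux/dma-mapping.h>
	#include <linux/netdevice.h>
	#include <linux/skbuff.h>

	static netdev_tx_t my_dev_xmit(struct sk_buff *skb,
				       struct net_device *ndev)
	{
		struct my_priv *priv = netdev_priv(ndev); /* hypothetical */
		dma_addr_t mapping;

		/* Transmit data flows from main memory to the device. */
		mapping = dma_map_single(priv->dev, skb->data, skb->len,
					 DMA_TO_DEVICE);
		if (dma_mapping_error(priv->dev, mapping)) {
			/* Drop the packet on mapping failure (see the
			 * error handling section below).
			 */
			dev_kfree_skb(skb);
			return NETDEV_TX_OK;
		}

		/* ... hand 'mapping' to the hardware transmit ring ... */
		return NETDEV_TX_OK;
	}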

Using Streaming DMA mappings
============================

The streaming DMA mapping routines can be called from interrupt
context. There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do::

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	void *addr = buffer->ptr;
	size_t size = buffer->len;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

and to unmap it::

	dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() as dma_map_single() could fail and
return an error. Doing so will ensure that the mapping code will work
correctly on all DMA implementations without any dependency on the
specifics of the underlying implementation. Using the returned address
without checking for errors could result in failures ranging from panics
to silent data corruption. The same applies to dma_map_page() as well.

You should call dma_unmap_single() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way. Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single(). These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically::

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	struct page *page = buffer->page;
	unsigned long offset = buffer->offset;
	size_t size = buffer->len;

	dma_handle = dma_map_page(dev, page, offset, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

	...

	dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

You should call dma_mapping_error() as dma_map_page() could fail and
return an error, as outlined under the dma_map_single() discussion.

You should call dma_unmap_page() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by::

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to. On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call::

	dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

.. note::

	The 'nents' argument to the dma_unmap_sg call must be
	the _same_ one you passed into the dma_map_sg call,
	it should _NOT_ be the 'count' value _returned_ from the
	dma_map_sg call.

Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
counterpart, because the DMA address space is a shared resource and
you could render the machine unusable by consuming all DMA addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either::

	dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or::

	dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either::

	dma_sync_single_for_device(dev, dma_handle, size, direction);

or::

	dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

.. note::

	The 'nents' argument to dma_sync_sg_for_cpu() and
	dma_sync_sg_for_device() must be the same as that passed to
	dma_map_sg(). It is _NOT_ the count returned by
	dma_map_sg().

After the last DMA transfer, call one of the DMA unmap routines
dma_unmap_{single,sg}(). If you don't touch the data from the first
dma_map_*() call till dma_unmap_*(), then you don't have to call the
dma_sync_*() routines at all.

Here is pseudo-code which shows a situation in which you would need
to use the dma_sync_*() interfaces::

	my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
	{
		dma_addr_t mapping;

		mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
		if (dma_mapping_error(cp->dev, mapping)) {
			/*
			 * reduce current DMA mapping usage,
			 * delay and try again later or
			 * reset driver.
			 */
			goto map_error_handling;
		}

		cp->rx_buf = buffer;
		cp->rx_len = len;
		cp->rx_dma = mapping;

		give_rx_buf_to_card(cp);
	}

	...

	my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
	{
		struct my_card *cp = devid;

		...
		if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
			struct my_card_header *hp;

			/* Examine the header to see if we wish
			 * to accept the data.  But synchronize
			 * the DMA transfer with the CPU first
			 * so that we see updated contents.
			 */
			dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
						cp->rx_len,
						DMA_FROM_DEVICE);

			/* Now it is safe to examine the buffer. */
			hp = (struct my_card_header *) cp->rx_buf;
			if (header_is_ok(hp)) {
				dma_unmap_single(cp->dev, cp->rx_dma,
						 cp->rx_len,
						 DMA_FROM_DEVICE);
				pass_to_upper_layers(cp->rx_buf);
				make_and_setup_new_rx_buf(cp);
			} else {
				/* CPU should not write to
				 * DMA_FROM_DEVICE-mapped area,
				 * so dma_sync_single_for_device() is
				 * not needed here. It would be required
				 * for DMA_BIDIRECTIONAL mapping if
				 * the memory was modified.
				 */
				give_rx_buf_to_card(cp);
			}
		}
	}

Drivers converted fully to this interface should not use virt_to_bus() any
longer, nor should they use bus_to_virt(). Some drivers have to be changed
a little bit, because there is no longer an equivalent to bus_to_virt() in
the dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
calls (dma_map_sg() stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.

All drivers should be using these interfaces with no exceptions. It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated. Some ports already do not provide these
as it is impossible to correctly support them.

Handling Errors
===============

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg returns 0

- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error()::

	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

- unmapping pages that are already mapped, when a mapping error occurs
  in the middle of a multiple-page mapping attempt. These examples are
  applicable to dma_map_page() as well.

  Example 1::

	dma_addr_t dma_handle1;
	dma_addr_t dma_handle2;

	dma_handle1 = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle1)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling1;
	}
	dma_handle2 = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle2)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling2;
	}

	...

	map_error_handling2:
		dma_unmap_single(dev, dma_handle1, size, direction);
	map_error_handling1:

  Example 2::

	/*
	 * if buffers are allocated in a loop, unmap all mapped buffers when
	 * a mapping error is detected in the middle
	 */

	dma_addr_t dma_addr;
	dma_addr_t array[DMA_BUFFERS];
	int save_index = 0;

	for (i = 0; i < DMA_BUFFERS; i++) {

		...

		dma_addr = dma_map_single(dev, addr, size, direction);
		if (dma_mapping_error(dev, dma_addr)) {
			/*
			 * reduce current DMA mapping usage,
			 * delay and try again later or
			 * reset driver.
			 */
			goto map_error_handling;
		}
		array[i] = dma_addr;
		save_index++;
	}

	...

	map_error_handling:

	for (i = 0; i < save_index; i++) {

		...

		dma_unmap_single(dev, array[i], size, direction);
	}

Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit). This means that the socket buffer is just dropped in
the failure case.

SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook. This means that the SCSI subsystem
passes the command to the driver again later.
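
On the SCSI side, a queuecommand hook might report the busy condition
along these lines (a hedged sketch: scsi_dma_map() is the midlayer's
wrapper around dma_map_sg() for the command's scatterlist, and the
hardware handoff is elided)::

	#include <scsi/scsi_cmnd.h>
	#include <scsi/scsi_host.h>

	static int my_queuecommand(struct Scsi_Host *shost,
				   struct scsi_cmnd *cmd)
	{
		/* scsi_dma_map() returns a negative value on mapping
		 * failure; report busy and let the midlayer retry later.
		 */
		if (scsi_dma_map(cmd) < 0)
			return SCSI_MLQUEUE_HOST_BUSY;

		/* ... hand the mapped scatterlist to the hardware ... */
		return 0;
	}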

Optimizing Unmap State Space Consumption
========================================

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space. Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before::

	struct ring_state {
		struct sk_buff *skb;
		dma_addr_t mapping;
		__u32 len;
	};

   after::

	struct ring_state {
		struct sk_buff *skb;
		DEFINE_DMA_UNMAP_ADDR(mapping);
		DEFINE_DMA_UNMAP_LEN(len);
	};

2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before::

	ringp->mapping = FOO;
	ringp->len = BAR;

   after::

	dma_unmap_addr_set(ringp, mapping, FOO);
	dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len}() to access these values.
   Example, before::

	dma_unmap_single(dev, ringp->mapping, ringp->len,
			 DMA_FROM_DEVICE);

   after::

	dma_unmap_single(dev,
			 dma_unmap_addr(ringp, mapping),
			 dma_unmap_len(ringp, len),
			 DMA_FROM_DEVICE);

It really should be self-explanatory. We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.

Platform Issues
===============

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   You need to enable CONFIG_NEED_SG_DMA_LENGTH if the architecture
   supports IOMMUs (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that a kmalloc'ed buffer is
   DMA-safe. Drivers and subsystems depend on it. If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that a kmalloc'ed buffer doesn't share a cache line with
   others. See arch/arm/include/asm/cache.h as an example.

   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints. You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).

Closing
=======

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people::

	Russell King <rmk@arm.linux.org.uk>
	Leo Dagum <dagum@barrel.engr.sgi.com>
	Ralf Baechle <ralf@oss.sgi.com>
	Grant Grundler <grundler@cup.hp.com>
	Jay Estabrook <Jay.Estabrook@compaq.com>
	Thomas Sailer <sailer@ife.ee.ethz.ch>
	Andrea Arcangeli <andrea@suse.de>
	Jens Axboe <jens.axboe@oracle.com>
	David Mosberger-Tang <davidm@hpl.hp.com>

Documentation/core-api/dma-api.rst (new file, 745 lines)

@@ -0,0 +1,745 @@
============================================
Dynamic DMA mapping using the generic device
============================================

:Author: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API. For a more gentle introduction
to the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.

This API is split into two pieces. Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines. Unless you know that your driver absolutely has to support
non-consistent platforms (this is usually only legacy platforms) you
should only use the API described in part I.

Part I - dma_API
----------------

To get the dma_API, you must #include <linux/dma-mapping.h>. This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform. It can be
given to a device to use as a DMA source or target. A CPU cannot reference
a dma_addr_t directly because there may be translation between its physical
address space and the DMA address space.

Part Ia - Using large DMA-coherent buffers
------------------------------------------

::

	void *
	dma_alloc_coherent(struct device *dev, size_t size,
			   dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects. (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the ``GFP_`` flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).

::

	void
	dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
			  dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated. dev,
size and dma_handle must all be the same as those passed into
dma_alloc_coherent(). cpu_addr must be the virtual address returned by
dma_alloc_coherent().

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.
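
As a hedged sketch of the allocate/use/free lifecycle (the ``dev``
pointer and ``RING_BYTES`` size here are hypothetical)::

	#include <linux/dma-mapping.h>

	dma_addr_t ring_dma;
	void *ring;

	ring = dma_alloc_coherent(dev, RING_BYTES, &ring_dma, GFP_KERNEL);
	if (!ring)
		return -ENOMEM;

	/* The CPU uses 'ring'; the device is given 'ring_dma'. */

	dma_free_coherent(dev, RING_BYTES, ring, ring_dma);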

Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the dma_API, you must #include <linux/dmapool.h>.

Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers. Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools. These work
much like a struct kmem_cache, except that they use the DMA-coherent
allocator, not __get_free_pages(). Also, they understand common hardware
constraints for alignment, like queue heads needing to be aligned on
N-byte boundaries.

::

	struct dma_pool *
	dma_pool_create(const char *name, struct device *dev,
			size_t size, size_t align, size_t alloc);

dma_pool_create() initializes a pool of DMA-coherent buffers
for use with a given device. It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent(). The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two). If your device has no boundary
crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.

::

	void *
	dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
			dma_addr_t *handle)

Wraps dma_pool_alloc() and also zeroes the returned memory if the
allocation attempt succeeded.

::

	void *
	dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
		       dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time. Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking. Like dma_alloc_coherent(), this returns two values: an
address usable by the CPU, and the DMA address usable by the pool's
device.

::

	void
	dma_pool_free(struct dma_pool *pool, void *vaddr,
		      dma_addr_t addr);

This puts memory back into the pool. The pool is what was passed to
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
were returned when that routine allocated the memory being freed.

::

	void
	dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool. It must be
called in a context which can sleep. Make sure you've freed all allocated
memory back to the pool before you destroy it.
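
A hedged sketch of the whole pool lifecycle (the "mydev-desc" name and
the ``DESC_SIZE``/``DESC_ALIGN`` values are hypothetical)::

	#include <linux/dmapool.h>

	struct dma_pool *pool;
	dma_addr_t desc_dma;
	void *desc;

	/* Must be created in a context which can sleep. */
	pool = dma_pool_create("mydev-desc", dev, DESC_SIZE, DESC_ALIGN, 0);
	if (!pool)
		return -ENOMEM;

	desc = dma_pool_zalloc(pool, GFP_KERNEL, &desc_dma);
	if (desc) {
		/* ... give desc_dma to the device, use desc from the CPU ... */
		dma_pool_free(pool, desc, desc_dma);
	}

	dma_pool_destroy(pool);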
|
||||
|
||||
|
||||
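As a rough sketch of the pool lifecycle described above (the pool name,
buffer size and alignment are hypothetical)::

    struct dma_pool *pool;
    dma_addr_t dma;
    void *vaddr;

    /* At probe time: a pool of 64-byte, 16-byte-aligned buffers. */
    pool = dma_pool_create("mydev_desc", dev, 64, 16, 0);
    if (!pool)
        return -ENOMEM;

    /* In the I/O path: grab one zeroed buffer. */
    vaddr = dma_pool_zalloc(pool, GFP_KERNEL, &dma);
    if (!vaddr)
        return -ENOMEM;

    /* ... program 'dma' into the device, use 'vaddr' from the CPU ... */

    dma_pool_free(pool, vaddr, dma);

    /* At remove time, after all buffers have been freed: */
    dma_pool_destroy(pool);
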
Part Ic - DMA addressing limitations
------------------------------------

::

    int
    dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.

::

    int
    dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

::

    int
    dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

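A common probe-time pattern, sketched here under the assumption that
the device prefers 64-bit addressing but can fall back to 32-bit::

    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
        /* 64-bit DMA not usable; try the 32-bit mask instead. */
        if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
            return -EIO;    /* no usable DMA addressing */
    }
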
::

    u64
    dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently.  Usually this means the returned mask
is the minimum required to cover all of memory.  Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask.  If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.

::

    size_t
    dma_max_mapping_size(struct device *dev);

Returns the maximum size of a mapping for the device.  The size
parameter of the mapping functions like dma_map_single(),
dma_map_page() and others should not be larger than the returned value.

::

    unsigned long
    dma_get_merge_boundary(struct device *dev);

Returns the DMA merge boundary.  If the device cannot merge any DMA
address segments, the function returns 0.

Part Id - Streaming DMA mappings
--------------------------------

::

    dma_addr_t
    dma_map_single(struct device *dev, void *cpu_addr, size_t size,
                   enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA address of the memory.

The direction for both APIs may be converted freely by casting.
However the dma_API uses a strongly typed enumerator for its
direction:

======================= =============================================
DMA_NONE                no direction (used for debugging)
DMA_TO_DEVICE           data is going from the memory to the device
DMA_FROM_DEVICE         data is coming from the device to the memory
DMA_BIDIRECTIONAL       direction isn't known
======================= =============================================

.. note::

    Not all memory regions in a machine can be mapped by this API.
    Further, contiguous kernel virtual space may not be contiguous as
    physical memory.  Since this API does not provide any scatter/gather
    capability, it will fail if the user tries to map a non-physically
    contiguous piece of memory.  For this reason, memory to be mapped by
    this API should be obtained from sources which guarantee it to be
    physically contiguous (like kmalloc).

    Further, the DMA address of the memory must be within the
    dma_mask of the device (the dma_mask is a bit mask of the
    addressable region for the device, i.e., if the DMA address of
    the memory ANDed with the dma_mask is still equal to the DMA
    address, then the device can perform DMA to the memory).  To
    ensure that the memory allocated by kmalloc is within the dma_mask,
    the driver may specify various platform-dependent flags to restrict
    the DMA address range of the allocation (e.g., on x86, GFP_DMA
    guarantees to be within the first 16MB of available DMA addresses,
    as required by ISA devices).

    Note also that the above constraints on physical contiguity and
    dma_mask may not apply if the platform has an IOMMU (a device which
    maps an I/O DMA address to a physical memory address).  However, to be
    portable, device driver writers may *not* assume that such an IOMMU
    exists.

.. warning::

    Memory coherency operates at a granularity called the cache
    line width.  In order for memory mapped by this API to operate
    correctly, the mapped region must begin exactly on a cache line
    boundary and end exactly on one (to prevent two separately mapped
    regions from sharing a single cache line).  Since the cache line size
    may not be known at compile time, the API will not enforce this
    requirement.  Therefore, it is recommended that driver writers who
    don't take special care to determine the cache line size at run time
    only map virtual regions that begin and end on page boundaries (which
    are guaranteed also to be cache line boundaries).

    DMA_TO_DEVICE synchronisation must be done after the last modification
    of the memory region by the software and before it is handed off to
    the device.  Once this primitive is used, memory covered by this
    primitive should be treated as read-only by the device.  If the device
    may write to it at any point, it should be DMA_BIDIRECTIONAL (see
    below).

    DMA_FROM_DEVICE synchronisation must be done before the driver
    accesses data that may be changed by the device.  This memory should
    be treated as read-only by the driver.  If the driver needs to write
    to it at any point, it should be DMA_BIDIRECTIONAL (see below).

    DMA_BIDIRECTIONAL requires special handling: it means that the driver
    isn't sure if the memory was modified before being handed off to the
    device and also isn't sure if the device will also modify it.  Thus,
    you must always sync bidirectional memory twice: once before the
    memory is handed off to the device (to make sure all memory changes
    are flushed from the processor) and once before the data may be
    accessed after being used by the device (to make sure any processor
    cache lines are updated with data that the device may have changed).

::

    void
    dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
                     enum dma_data_direction direction)

Unmaps the region previously mapped.  All the parameters passed in
must be identical to those passed in (and returned) by the mapping
API.

::

    dma_addr_t
    dma_map_page(struct device *dev, struct page *page,
                 unsigned long offset, size_t size,
                 enum dma_data_direction direction)

    void
    dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
                   enum dma_data_direction direction)

API for mapping and unmapping for pages.  All the notes and warnings
for the other mapping APIs apply here.  Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

::

    dma_addr_t
    dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
                     enum dma_data_direction dir, unsigned long attrs)

    void
    dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
                       enum dma_data_direction dir, unsigned long attrs)

API for mapping and unmapping for MMIO resources.  All the notes and
warnings for the other mapping APIs apply here.  The API should only be
used to map device MMIO resources; mapping of RAM is not permitted.

::

    int
    dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
will fail to create a mapping.  A driver can check for these errors by testing
the returned DMA address with dma_mapping_error().  A non-zero return value
means the mapping could not be created and the driver should take appropriate
action (e.g. reduce current DMA mapping usage or delay and try again later).

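Putting the error check together with the single mapping API, a minimal
sketch (the buffer size BUF_SIZE is hypothetical) might look like this::

    void *buf = kmalloc(BUF_SIZE, GFP_KERNEL);
    dma_addr_t dma;

    if (!buf)
        return -ENOMEM;

    dma = dma_map_single(dev, buf, BUF_SIZE, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, dma)) {
        /* Mapping failed: back off, retry later, or abort. */
        kfree(buf);
        return -ENOMEM;
    }

    /* ... hand 'dma' to the device and wait for the transfer ... */

    dma_unmap_single(dev, dma, BUF_SIZE, DMA_TO_DEVICE);
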
::

    int
    dma_map_sg(struct device *dev, struct scatterlist *sg,
               int nents, enum dma_data_direction direction)

Returns: the number of DMA address segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail.  When it
does, 0 is returned and a driver must take appropriate action.  It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this::

    int i, count = dma_map_sg(dev, sglist, nents, direction);
    struct scatterlist *sg;

    for_each_sg(sglist, sg, count, i) {
        hw_address[i] = sg_dma_address(sg);
        hw_len[i] = sg_dma_len(sg);
    }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to.  On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

::

    void
    dma_unmap_sg(struct device *dev, struct scatterlist *sg,
                 int nents, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list.  All the parameters
must be the same as those passed in to the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
DMA address entries returned.

::

    void
    dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
                            size_t size,
                            enum dma_data_direction direction)

    void
    dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
                               size_t size,
                               enum dma_data_direction direction)

    void
    dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
                        int nents,
                        enum dma_data_direction direction)

    void
    dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
                           int nents,
                           enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device.  With the sync_sg API, all the parameters must be the same
as those passed into the single mapping API.  With the sync_single API,
you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.

.. note::

    You must do this:

    - Before reading values that have been written by DMA from the device
      (use the DMA_FROM_DEVICE direction)
    - After writing values that will be written to the device using DMA
      (use the DMA_TO_DEVICE direction)
    - Before *and* after handing memory to the device if the memory is
      DMA_BIDIRECTIONAL

See also dma_map_single().

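For instance, a driver that polls a device-written status buffer while
the mapping stays live might do something like the following sketch
(status_buf, status_dma and STATUS_SIZE are assumed names)::

    /* Give the buffer back to the CPU before reading it ... */
    dma_sync_single_for_cpu(dev, status_dma, STATUS_SIZE, DMA_FROM_DEVICE);

    if (status_buf->done)
        handle_completion();

    /* ... and hand it back to the device before it writes again. */
    dma_sync_single_for_device(dev, status_dma, STATUS_SIZE, DMA_FROM_DEVICE);
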
::

    dma_addr_t
    dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
                         enum dma_data_direction dir,
                         unsigned long attrs)

    void
    dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
                           size_t size, enum dma_data_direction dir,
                           unsigned long attrs)

    int
    dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
                     int nents, enum dma_data_direction dir,
                     unsigned long attrs)

    void
    dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
                       int nents, enum dma_data_direction dir,
                       unsigned long attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
dma_attrs.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If dma_attrs are 0, the semantics of each of these functions
is identical to those of the corresponding function
without the _attrs suffix.  As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the ``*_attrs`` functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA::

    #include <linux/dma-mapping.h>
    /* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
     * documented in Documentation/DMA-attributes.txt */
    ...

    unsigned long attrs = 0;

    attrs |= DMA_ATTR_FOO;
    ....
    n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attrs);
    ....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.::

    void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr,
                                 size_t size, enum dma_data_direction dir,
                                 unsigned long attrs)
    {
        ....
        if (attrs & DMA_ATTR_FOO)
            /* twizzle the frobnozzle */
        ....
    }

Part II - Advanced dma usage
----------------------------

Warning: These pieces of the DMA API should not be used in the
majority of cases, since they cater for unlikely corner cases that
don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

::

    void *
    dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
                    gfp_t flag, unsigned long attrs)

Identical to dma_alloc_coherent() except that when the
DMA_ATTR_NON_CONSISTENT flag is passed in the attrs argument, the
platform will choose to return either consistent or non-consistent memory
as it sees fit.  By using this API, you are guaranteeing to the platform
that you have all the correct and necessary sync points for this memory
in the driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning:  Handling non-consistent memory is a real pain.  You should
only use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

::

    void
    dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
                   dma_addr_t dma_handle, unsigned long attrs)

Free memory allocated by dma_alloc_attrs().  All common
parameters must be identical to those otherwise passed to
dma_free_coherent(), and the attrs argument must be identical to the
attrs passed to dma_alloc_attrs().

::

    int
    dma_get_cache_alignment(void)

Returns the processor cache alignment.  This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

.. note::

    This API may return a number *larger* than the actual cache
    line, but it will guarantee that one or more cache lines fit exactly
    into the width returned by this call.  It will also always be a power
    of two for easy alignment.

::

    void
    dma_cache_sync(struct device *dev, void *vaddr, size_t size,
                   enum dma_data_direction direction)

Do a partial sync of memory that was allocated by dma_alloc_attrs() with
the DMA_ATTR_NON_CONSISTENT flag starting at virtual address vaddr and
continuing on for size.  Again, you *must* observe the cache line
boundaries when doing this.

::

    int
    dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
                                dma_addr_t device_addr, size_t size);

Declare region of memory to be handed out by dma_alloc_coherent() when
it's asked for coherent memory for this device.

phys_addr is the CPU physical address to which the memory is currently
assigned (this will be ioremapped so the CPU can access the region).

device_addr is the DMA address the device needs to be programmed
with to actually address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page.  For smaller allocations,
you should use the dma_pool() API.

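As a sketch of how a driver might declare such a region (the addresses
and size below are entirely made up for illustration)::

    #define MYDEV_MEM_PHYS  0xb0000000          /* CPU physical address */
    #define MYDEV_MEM_BUS   0x80000000          /* address the device uses */
    #define MYDEV_MEM_SIZE  (256 * PAGE_SIZE)

    ret = dma_declare_coherent_memory(dev, MYDEV_MEM_PHYS,
                                      MYDEV_MEM_BUS, MYDEV_MEM_SIZE);
    if (ret)
        return ret;

    /* dma_alloc_coherent() for this device now hands out this region. */
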
Part III - Debug drivers use of the DMA-API
-------------------------------------------

The DMA-API as described above has some constraints.  DMA addresses must be
released with the corresponding function with the same size for example.  With
the advent of hardware IOMMUs it becomes more and more important that drivers
do not violate those constraints.  In the worst case such a violation can
result in data corruption up to destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking code
can be compiled into the kernel which will tell the developer about those
violations.  If your architecture supports it you can select the "Enable
debugging of DMA-API usage" option in your kernel configuration.  Enabling
this option has a performance impact.  Do not enable it in production
kernels.

If you boot the resulting kernel, it will contain code which does some
bookkeeping about what DMA memory was allocated for which device.  If this
code detects an error it prints a warning message with some details into
your kernel log.  An example warning message may look like this::

    WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
        check_unmap+0x203/0x490()
    Hardware name:
    forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
        function [device address=0x00000000640444be] [size=66 bytes] [mapped as
        single] [unmapped as page]
    Modules linked in: nfsd exportfs bridge stp llc r8169
    Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
    Call Trace:
    <IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
    [<ffffffff80647b70>] _spin_unlock+0x10/0x30
    [<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
    [<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
    [<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
    [<ffffffff80252f96>] queue_work+0x56/0x60
    [<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
    [<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
    [<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
    [<ffffffff80235177>] find_busiest_group+0x207/0x8a0
    [<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
    [<ffffffff803c7ea3>] check_unmap+0x203/0x490
    [<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
    [<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
    [<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
    [<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
    [<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
    [<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
    [<ffffffff8020c093>] ret_from_intr+0x0/0xa
    <EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a stacktrace
of the DMA-API call which caused this warning.

By default only the first error will result in a warning message.  All other
errors will only be silently counted.  This limitation exists to prevent the
code from flooding your kernel log.  To support debugging a device driver
this can be disabled via debugfs.  See the debugfs interface documentation
below for details.

The debugfs directory for the DMA-API debugging code is called dma-api/.  In
this directory the following files can currently be found:

=============================== ===============================================
dma-api/all_errors              This file contains a numeric value. If this
                                value is not equal to zero the debugging code
                                will print a warning for every error it finds
                                into the kernel log. Be careful with this
                                option, as it can easily flood your logs.

dma-api/disabled                This read-only file contains the character 'Y'
                                if the debugging code is disabled. This can
                                happen when it runs out of memory or if it was
                                disabled at boot time.

dma-api/dump                    This read-only file contains current DMA
                                mappings.

dma-api/error_count             This file is read-only and shows the total
                                numbers of errors found.

dma-api/num_errors              The number in this file shows how many
                                warnings will be printed to the kernel log
                                before it stops. This number is initialized to
                                one at system boot and can be set by writing
                                into this file.

dma-api/min_free_entries        This read-only file can be read to get the
                                minimum number of free dma_debug_entries the
                                allocator has ever seen. If this value goes
                                down to zero the code will attempt to increase
                                nr_total_entries to compensate.

dma-api/num_free_entries        The current number of free dma_debug_entries
                                in the allocator.

dma-api/nr_total_entries        The total number of dma_debug_entries in the
                                allocator, both free and used.

dma-api/driver_filter           You can write a name of a driver into this
                                file to limit the debug output to requests
                                from that particular driver. Write an empty
                                string to that file to disable the filter and
                                see all errors again.
=============================== ===============================================

If you have this code compiled into your kernel it will be enabled by default.
If you want to boot without the bookkeeping anyway you can provide
'dma_debug=off' as a boot parameter.  This will disable DMA-API debugging.
Notice that you cannot enable it again at runtime.  You have to reboot to do
so.

If you want to see debug messages only for a special device driver you can
specify the dma_debug_driver=<drivername> parameter.  This will enable the
driver filter at boot time.  The debug code will only print errors for that
driver afterwards.  This filter can be disabled or changed later using
debugfs.

When the code disables itself at runtime this is most likely because it ran
out of dma_debug_entries and was unable to allocate more on-demand.  65536
entries are preallocated at boot - if this is too low for you, boot with
'dma_debug_entries=<your_desired_number>' to override the default.  Note
that the code allocates entries in batches, so the exact number of
preallocated entries may be greater than the actual number requested.  The
code will print to the kernel log each time it has dynamically allocated
as many entries as were initially preallocated.  This is to indicate that a
larger preallocation size may be appropriate, or, if it happens continually,
that a driver may be leaking mappings.

::

    void
    debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

The dma-debug interface provides debug_dma_mapping_error() to debug
drivers that fail to check DMA mapping errors on addresses returned by
the dma_map_single() and dma_map_page() interfaces.  This interface
clears a flag set by debug_dma_map_page() to indicate that
dma_mapping_error() has been called by the driver.  When the driver does
the unmap, debug_dma_unmap() checks the flag and, if it is still set,
prints a warning message that includes the call trace leading up to the
unmap.  This interface can be called from dma_mapping_error() routines
to enable DMA mapping error check debugging.

140
Documentation/core-api/dma-attributes.rst
Normal file

@ -0,0 +1,140 @@

==============
DMA attributes
==============

This document describes the semantics of the DMA attributes that are
defined in linux/dma-mapping.h.

DMA_ATTR_WEAK_ORDERING
----------------------

DMA_ATTR_WEAK_ORDERING specifies that reads and writes to the mapping
may be weakly ordered, that is that reads and writes may pass each other.

Since it is optional for platforms to implement DMA_ATTR_WEAK_ORDERING,
those that do not will simply ignore the attribute and exhibit default
behavior.

DMA_ATTR_WRITE_COMBINE
----------------------

DMA_ATTR_WRITE_COMBINE specifies that writes to the mapping may be
buffered to improve performance.

Since it is optional for platforms to implement DMA_ATTR_WRITE_COMBINE,
those that do not will simply ignore the attribute and exhibit default
behavior.

DMA_ATTR_NON_CONSISTENT
-----------------------

DMA_ATTR_NON_CONSISTENT lets the platform choose to return either
consistent or non-consistent memory as it sees fit.  By using this API,
you are guaranteeing to the platform that you have all the correct and
necessary sync points for this memory in the driver.

DMA_ATTR_NO_KERNEL_MAPPING
--------------------------

DMA_ATTR_NO_KERNEL_MAPPING lets the platform avoid creating a kernel
virtual mapping for the allocated buffer.  On some architectures creating
such a mapping is a non-trivial task and consumes very limited resources
(like kernel virtual address space or dma consistent address space).
Buffers allocated with this attribute can only be passed to user space
by calling dma_mmap_attrs().  By using this API, you are guaranteeing
that you won't dereference the pointer returned by dma_alloc_attrs().  You
can treat it as a cookie that must be passed to dma_mmap_attrs() and
dma_free_attrs().  Make sure that both of these also get this attribute
set on each call.

Since it is optional for platforms to implement
DMA_ATTR_NO_KERNEL_MAPPING, those that do not will simply ignore the
attribute and exhibit default behavior.

DMA_ATTR_SKIP_CPU_SYNC
----------------------

By default the dma_map_{single,page,sg} family of functions transfers a
given buffer from CPU domain to device domain.  Some advanced use cases
might require sharing a buffer between more than one device.  This
requires having a mapping created separately for each device and is
usually performed by calling the dma_map_{single,page,sg} function more
than once for the given buffer, with the device pointer of each device
taking part in the buffer sharing.  The first call transfers the buffer
from 'CPU' domain to 'device' domain, which synchronizes CPU caches for
the given region (usually it means that the cache has been flushed or
invalidated depending on the dma direction).  However, subsequent calls
to dma_map_{single,page,sg}() for other devices will perform exactly the
same synchronization operation on the CPU cache.  CPU cache
synchronization might be a time-consuming operation, especially if the
buffers are large, so it is highly recommended to avoid it if possible.

DMA_ATTR_SKIP_CPU_SYNC allows platform code to skip synchronization of
the CPU cache for the given buffer assuming that it has already been
transferred to the 'device' domain.  This attribute can also be used for
the dma_unmap_{single,page,sg} family of functions to force the buffer
to stay in device domain after releasing a mapping for it.  Use this
attribute with care!

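A rough sketch of the sharing scenario above, assuming two device
pointers dev_a and dev_b and a suitable physically contiguous buffer
(error checking elided; the names are hypothetical)::

    dma_addr_t dma_a, dma_b;

    /* First mapping does the (expensive) CPU cache synchronization. */
    dma_a = dma_map_single_attrs(dev_a, buf, size, DMA_TO_DEVICE, 0);

    /* The buffer is already in the 'device' domain: skip the sync. */
    dma_b = dma_map_single_attrs(dev_b, buf, size, DMA_TO_DEVICE,
                                 DMA_ATTR_SKIP_CPU_SYNC);
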
DMA_ATTR_FORCE_CONTIGUOUS
-------------------------

By default the DMA-mapping subsystem is allowed to assemble the buffer
allocated by the dma_alloc_attrs() function from individual pages if it
can be mapped as a contiguous chunk into the device dma address space.
By specifying this attribute the allocated buffer is forced to be
contiguous also in physical memory.

DMA_ATTR_ALLOC_SINGLE_PAGES
---------------------------

This is a hint to the DMA-mapping subsystem that it's probably not worth
the time to try to allocate memory in a way that gives better TLB
efficiency (AKA it's not worth trying to build the mapping out of larger
pages).  You might want to specify this if:

- You know that the accesses to this memory won't thrash the TLB.
  You might know that the accesses are likely to be sequential or
  that they aren't sequential but it's unlikely you'll ping-pong
  between many addresses that are likely to be in different physical
  pages.
- You know that the penalty of TLB misses while accessing the
  memory will be small enough to be inconsequential.  If you are
  doing a heavy operation like decryption or decompression this
  might be the case.
- You know that the DMA mapping is fairly transitory.  If you expect
  the mapping to have a short lifetime then it may be worth it to
  optimize allocation (avoid coming up with large pages) instead of
  getting the slight performance win of larger pages.

Setting this hint doesn't guarantee that you won't get huge pages, but it
means that we won't try quite as hard to get them.

.. note:: At the moment DMA_ATTR_ALLOC_SINGLE_PAGES is only implemented on ARM,
          though ARM64 patches will likely be posted soon.

DMA_ATTR_NO_WARN
----------------

This tells the DMA-mapping subsystem to suppress allocation failure reports
(similarly to __GFP_NOWARN).

On some architectures allocation failures are reported with error messages
to the system logs.  Although this can help to identify and debug problems,
drivers which handle failures (e.g., retry later) have no problems with them,
and can actually flood the system logs with error messages that aren't any
problem at all, depending on the implementation of the retry mechanism.

So, this provides a way for drivers to avoid those error messages on calls
where allocation failures are not a problem, and shouldn't bother the logs.

.. note:: At the moment DMA_ATTR_NO_WARN is only implemented on PowerPC.

DMA_ATTR_PRIVILEGED
-------------------

Some advanced peripherals such as remote processors and GPUs perform
accesses to DMA buffers in both privileged "supervisor" and unprivileged
"user" modes.  This attribute is used to indicate to the DMA-mapping
subsystem that the buffer is fully accessible at the elevated privilege
level (and ideally inaccessible or at least read-only at the
lesser-privileged levels).

152
Documentation/core-api/dma-isa-lpc.rst
Normal file

@ -0,0 +1,152 @@

============================
DMA with ISA and LPC devices
============================

:Author: Pierre Ossman <drzeus@drzeus.cx>

This document describes how to do DMA transfers using the old ISA DMA
controller.  Even though ISA is more or less dead today, the LPC bus
uses the same DMA system so it will be around for quite some time.

Headers and dependencies
------------------------

To do ISA style DMA you need to include two headers::

    #include <linux/dma-mapping.h>
    #include <asm/dma.h>

The first is the generic DMA API used to convert virtual addresses to
bus addresses (see Documentation/DMA-API.txt for details).

The second contains the routines specific to ISA DMA transfers.  Since
this is not present on all platforms make sure you construct your
Kconfig to be dependent on ISA_DMA_API (not ISA) so that nobody tries
to build your driver on unsupported platforms.

Buffer allocation
-----------------

The ISA DMA controller has some very strict requirements on which
memory it can access so extra care must be taken when allocating
buffers.

(You usually need a special buffer for DMA transfers instead of
transferring directly to and from your normal data structures.)

The DMA-able address space is the lowest 16 MB of _physical_ memory.
Also the transfer block may not cross page boundaries (which are 64
or 128 KiB depending on which channel you use).

In order to allocate a piece of memory that satisfies all these
requirements you pass the flag GFP_DMA to kmalloc.

Unfortunately the memory available for ISA DMA is scarce so unless you
allocate the memory during boot-up it's a good idea to also pass
__GFP_RETRY_MAYFAIL and __GFP_NOWARN to make the allocator try a bit harder.

(This scarcity also means that you should allocate the buffer as
early as possible and not release it until the driver is unloaded.)

Address translation
-------------------

To translate the virtual address to a bus address, use the normal DMA
API.  Do _not_ use isa_virt_to_bus() even though it does the same
thing.  The reason for this is that the function isa_virt_to_bus()
will require a Kconfig dependency to ISA, not just ISA_DMA_API which
is really all you need.  Remember that even though the DMA controller
has its origins in ISA it is used elsewhere.

Note: x86_64 had a broken DMA API when it came to ISA but has since
been fixed.  If your arch has problems then fix the DMA API instead of
reverting to the ISA functions.

Channels
--------

A normal ISA DMA controller has 8 channels.  The lower four are for
8-bit transfers and the upper four are for 16-bit transfers.

(Actually the DMA controller is really two separate controllers where
channel 4 is used to give DMA access for the second controller (0-3).
This means that of the four 16-bit channels only three are usable.)

You allocate these in a similar fashion as all basic resources::

    extern int request_dma(unsigned int dmanr, const char * device_id);
    extern void free_dma(unsigned int dmanr);

The ability to use 16-bit or 8-bit transfers is _not_ up to you as a
driver author but depends on what the hardware supports.  Check your
specs or test different channels.

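For example, claiming channel 5 for a hypothetical driver could look
like this; request_dma() returns 0 on success::

    if (request_dma(5, "mydriver")) {
        printk(KERN_ERR "mydriver: DMA channel 5 busy\n");
        return -EBUSY;
    }

    /* ... set up and run transfers on the channel ... */

    free_dma(5);
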
Transfer data
-------------

Now for the good stuff, the actual DMA transfer. :)

Before you use any ISA DMA routines you need to claim the DMA lock
using claim_dma_lock().  The reason is that some DMA operations are
not atomic so only one driver may fiddle with the registers at a
time.

The first time you use the DMA controller you should call
clear_dma_ff().  This clears an internal register in the DMA
controller that is used for the non-atomic operations.  As long as you
(and everyone else) uses the locking functions then you only need to
reset this once.

Next, you tell the controller in which direction you intend to do the
transfer using set_dma_mode().  Currently you have the options
DMA_MODE_READ and DMA_MODE_WRITE.

Set the address from where the transfer should start (this needs to
be 16-bit aligned for 16-bit transfers) and how many bytes to
transfer.  Note that it's _bytes_.  The DMA routines will do all the
required translation to values that the DMA controller understands.

The final step is enabling the DMA channel and releasing the DMA
lock.

Once the DMA transfer is finished (or timed out) you should disable
the channel again.  You should also check get_dma_residue() to make
sure that all data has been transferred.

Example::

    unsigned long flags;
    int residue;

    flags = claim_dma_lock();

    clear_dma_ff(channel);

    set_dma_mode(channel, DMA_MODE_WRITE);
    set_dma_addr(channel, phys_addr);
    set_dma_count(channel, num_bytes);

    enable_dma(channel);

    release_dma_lock(flags);

    while (!device_done());

    flags = claim_dma_lock();

    disable_dma(channel);

    residue = get_dma_residue(channel);
    if (residue != 0)
        printk(KERN_ERR "driver: Incomplete DMA transfer!"
               " %d bytes left!\n", residue);

    release_dma_lock(flags);

Suspend/resume
--------------

It is the driver's responsibility to make sure that the machine isn't
suspended while a DMA transfer is in progress.  Also, all DMA settings
are lost when the system suspends so if your driver relies on the DMA
controller being in a certain state then you have to restore these
registers upon resume.

@ -18,6 +18,7 @@ it.

   kernel-api
   workqueue
   printk-basics
   printk-formats
   symbol-namespaces

@ -30,10 +31,12 @@ Library functionality that is used throughout the kernel.

   :maxdepth: 1

   kobject
   kref
   assoc_array
   xarray
   idr
   circular-buffers
   rbtree
   generic-radix-tree
   packing
   timekeeping

@ -50,6 +53,7 @@ How Linux keeps everything from happening at the same time. See

   atomic_ops
   refcount-vs-atomic
   irq/index
   local_ops
   padata
   ../RCU/index

@ -78,6 +82,10 @@ more memory-management documentation in :doc:`/vm/index`.

   :maxdepth: 1

   memory-allocation
   dma-api
   dma-api-howto
   dma-attributes
   dma-isa-lpc
   mm-api
   genalloc
   pin_user_pages

@ -92,6 +100,7 @@ Interfaces for kernel debugging

   debug-objects
   tracepoint
   debugging-via-ohci1394

Everything else
===============

24
Documentation/core-api/irq/concepts.rst
Normal file

@ -0,0 +1,24 @@

===============
What is an IRQ?
===============

An IRQ is an interrupt request from a device.
Currently they can come in over a pin, or over a packet.
Several devices may be connected to the same pin, thus
sharing an IRQ.

An IRQ number is a kernel identifier used to talk about a hardware
interrupt source.  Typically this is an index into the global irq_desc
array, but except for what linux/interrupt.h implements the details
are architecture specific.

An IRQ number is an enumeration of the possible interrupt sources on a
machine.  Typically what is enumerated is the number of input pins on
all of the interrupt controllers in the system.  In the case of ISA
what is enumerated are the 16 input pins on the two i8259 interrupt
controllers.

Architectures can assign additional meaning to the IRQ numbers, and
are encouraged to in the case where there is any manual configuration
of the hardware involved.  The ISA IRQs are a classic example of
assigning this kind of additional meaning.

11
Documentation/core-api/irq/index.rst
Normal file

@ -0,0 +1,11 @@

====
IRQs
====

.. toctree::
   :maxdepth: 1

   concepts
   irq-affinity
   irq-domain
   irqflags-tracing

70
Documentation/core-api/irq/irq-affinity.rst
Normal file

@ -0,0 +1,70 @@

================
SMP IRQ affinity
================

ChangeLog:
	- Started by Ingo Molnar <mingo@redhat.com>
	- Update by Max Krasnyansky <maxk@qualcomm.com>


/proc/irq/IRQ#/smp_affinity and /proc/irq/IRQ#/smp_affinity_list specify
which target CPUs are permitted for a given IRQ source.  It's a bitmask
(smp_affinity) or cpu list (smp_affinity_list) of allowed CPUs.  It's not
allowed to turn off all CPUs, and if an IRQ controller does not support
IRQ affinity then the value will not change from the default of all cpus.

/proc/irq/default_smp_affinity specifies the default affinity mask that
applies to all non-active IRQs.  Once an IRQ is allocated/activated its
affinity bitmask will be set to the default mask.  It can then be changed
as described above.  The default mask is 0xffffffff.

Here is an example of restricting IRQ44 (eth1) to CPU0-3 then restricting
it to CPU4-7 (this is an 8-CPU SMP box)::

    [root@moon 44]# cd /proc/irq/44
    [root@moon 44]# cat smp_affinity
    ffffffff

    [root@moon 44]# echo 0f > smp_affinity
    [root@moon 44]# cat smp_affinity
    0000000f
    [root@moon 44]# ping -f h
    PING hell (195.4.7.3): 56 data bytes
    ...
    --- hell ping statistics ---
    6029 packets transmitted, 6027 packets received, 0% packet loss
    round-trip min/avg/max = 0.1/0.1/0.4 ms
    [root@moon 44]# cat /proc/interrupts | grep 'CPU\|44:'
             CPU0   CPU1   CPU2   CPU3   CPU4   CPU5   CPU6   CPU7
     44:     1068   1785   1785   1783      0      0      0      0   IO-APIC-level  eth1

As can be seen from the line above, IRQ44 was delivered only to the
first four processors (0-3).
Now let's restrict that IRQ to CPUs 4-7.

::

    [root@moon 44]# echo f0 > smp_affinity
    [root@moon 44]# cat smp_affinity
    000000f0
    [root@moon 44]# ping -f h
    PING hell (195.4.7.3): 56 data bytes
    ..
    --- hell ping statistics ---
    2779 packets transmitted, 2777 packets received, 0% packet loss
    round-trip min/avg/max = 0.1/0.5/585.4 ms
    [root@moon 44]# cat /proc/interrupts | grep 'CPU\|44:'
             CPU0   CPU1   CPU2   CPU3   CPU4   CPU5   CPU6   CPU7
     44:     1068   1785   1785   1783   1784   1069   1070   1069   IO-APIC-level  eth1

This time around IRQ44 was delivered only to the last four processors,
i.e. the counters for CPU0-3 did not change.

Here is an example of limiting that same irq (44) to cpus 1024 to 1031::

    [root@moon 44]# echo 1024-1031 > smp_affinity_list
    [root@moon 44]# cat smp_affinity_list
    1024-1031

Note that to do this with a bitmask would require 32 bitmasks of zero
to follow the pertinent one.

270
Documentation/core-api/irq/irq-domain.rst
Normal file

@ -0,0 +1,270 @@

===============================================
The irq_domain interrupt number mapping library
===============================================

The current design of the Linux kernel uses a single large number
space where each separate IRQ source is assigned a different number.
This is simple when there is only one interrupt controller, but in
systems with multiple interrupt controllers the kernel must ensure
that each one gets assigned non-overlapping allocations of Linux
IRQ numbers.

The number of interrupt controllers registered as unique irqchips
shows a rising tendency: for example subdrivers of different kinds
such as GPIO controllers avoid reimplementing identical callback
mechanisms as the IRQ core system by modelling their interrupt
handlers as irqchips, i.e. in effect cascading interrupt controllers.

Here the interrupt numbers lose all kind of correspondence to
hardware interrupt numbers: whereas in the past, IRQ numbers could
be chosen so they matched the hardware IRQ line into the root
interrupt controller (i.e. the component actually firing the
interrupt line to the CPU), nowadays this number is just a number.

For this reason we need a mechanism to separate controller-local
interrupt numbers, called hardware irqs, from Linux IRQ numbers.

The irq_alloc_desc*() and irq_free_desc*() APIs provide allocation of
irq numbers, but they don't provide any support for reverse mapping of
the controller-local IRQ (hwirq) number into the Linux IRQ number
space.

The irq_domain library adds mapping between hwirq and IRQ numbers on
top of the irq_alloc_desc*() API.  An irq_domain to manage mapping is
preferred over interrupt controller drivers open coding their own
reverse mapping scheme.

irq_domain also implements translation from an abstract irq_fwspec
structure to hwirq numbers (Device Tree and ACPI GSI so far), and can
be easily extended to support other IRQ topology data sources.

irq_domain usage
================

An interrupt controller driver creates and registers an irq_domain by
calling one of the irq_domain_add_*() functions (each mapping method
has a different allocator function, more on that later).  The function
will return a pointer to the irq_domain on success.  The caller must
provide the allocator function with an irq_domain_ops structure.

In most cases, the irq_domain will begin empty without any mappings
between hwirq and IRQ numbers.  Mappings are added to the irq_domain
by calling irq_create_mapping() which accepts the irq_domain and a
hwirq number as arguments.  If a mapping for the hwirq doesn't already
exist then it will allocate a new Linux irq_desc, associate it with
the hwirq, and call the .map() callback so the driver can perform any
required hardware setup.

When an interrupt is received, the irq_find_mapping() function should
be used to find the Linux IRQ number from the hwirq number.

The irq_create_mapping() function must be called *at least once*
before any call to irq_find_mapping(), otherwise the descriptor
will not be allocated.

If the driver has the Linux IRQ number or the irq_data pointer, and
needs to know the associated hwirq number (such as in the irq_chip
callbacks) then it can be directly obtained from irq_data->hwirq.

Types of irq_domain mappings
============================

There are several mechanisms available for reverse mapping from hwirq
to Linux irq, and each mechanism uses a different allocation function.
Which reverse map type should be used depends on the use case.  Each
of the reverse map types are described below:

Linear
------

::

	irq_domain_add_linear()
	irq_domain_create_linear()

The linear reverse map maintains a fixed size table indexed by the
hwirq number.  When a hwirq is mapped, an irq_desc is allocated for
the hwirq, and the IRQ number is stored in the table.

The Linear map is a good choice when the maximum number of hwirqs is
fixed and a relatively small number (~ < 256).  The advantages of this
map are fixed time lookup for IRQ numbers, and irq_descs are only
allocated for in-use IRQs.  The disadvantage is that the table must be
as large as the largest possible hwirq number.

irq_domain_add_linear() and irq_domain_create_linear() are functionally
equivalent, except that the first argument is different - the former
accepts an Open Firmware specific 'struct device_node', while the latter
accepts a more general abstraction 'struct fwnode_handle'.

The majority of drivers should use the linear map.

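As a rough sketch of the registration flow described above (the ops
structure, its .map() callback my_irq_map and the 32-line controller
are hypothetical), a driver might populate a linear domain like this::

	static const struct irq_domain_ops my_irq_ops = {
		.map = my_irq_map,	/* per-hwirq hardware setup */
		.xlate = irq_domain_xlate_onecell,
	};

	domain = irq_domain_add_linear(node, 32, &my_irq_ops, priv);
	if (!domain)
		return -ENOMEM;

	/* Create the mapping before the interrupt can be looked up. */
	irq = irq_create_mapping(domain, 5);

	/* Later, when hwirq 5 fires, find the Linux IRQ number: */
	virq = irq_find_mapping(domain, 5);
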
Tree
|
||||
----
|
||||
|
||||
::
|
||||
|
||||
irq_domain_add_tree()
|
||||
irq_domain_create_tree()
|
||||
|
||||
The irq_domain maintains a radix tree map from hwirq numbers to Linux
|
||||
IRQs. When an hwirq is mapped, an irq_desc is allocated and the
|
||||
hwirq is used as the lookup key for the radix tree.
|
||||
|
||||
The tree map is a good choice if the hwirq number can be very large
|
||||
since it doesn't need to allocate a table as large as the largest
|
||||
hwirq number. The disadvantage is that hwirq to IRQ number lookup is
|
||||
dependent on how many entries are in the table.
|
||||
|
||||
irq_domain_add_tree() and irq_domain_create_tree() are functionally
|
||||
equivalent, except for the first argument is different - the former
|
||||
accepts an Open Firmware specific 'struct device_node', while the latter
|
||||
accepts a more general abstraction 'struct fwnode_handle'.
|
||||
|
||||
Very few drivers should need this mapping.
|
||||
|
||||
No Map
|
||||
------
|
||||
|
||||
::
|
||||
|
||||
irq_domain_add_nomap()
|
||||
|
||||
The No Map mapping is to be used when the hwirq number is
|
||||
programmable in the hardware. In this case it is best to program the
|
||||
Linux IRQ number into the hardware itself so that no mapping is
|
||||
required. Calling irq_create_direct_mapping() will allocate a Linux
|
||||
IRQ number and call the .map() callback so that driver can program the
|
||||
Linux IRQ number into the hardware.
|
||||
|
||||
Most drivers cannot use this mapping.
|
||||
|
||||
Legacy
|
||||
------
|
||||
|
||||
::
|
||||
|
||||
irq_domain_add_simple()
|
||||
irq_domain_add_legacy()
|
||||
irq_domain_add_legacy_isa()
|
||||
|
||||
The Legacy mapping is a special case for drivers that already have a
|
||||
range of irq_descs allocated for the hwirqs. It is used when the
|
||||
driver cannot be immediately converted to use the linear mapping. For
|
||||
example, many embedded system board support files use a set of #defines
|
||||
for IRQ numbers that are passed to struct device registrations. In that
|
||||
case the Linux IRQ numbers cannot be dynamically assigned and the legacy
|
||||
mapping should be used.
|
||||
|
||||
The legacy map assumes a contiguous range of IRQ numbers has already
|
||||
been allocated for the controller and that the IRQ number can be
|
||||
calculated by adding a fixed offset to the hwirq number, and
|
||||
visa-versa. The disadvantage is that it requires the interrupt
|
||||
controller to manage IRQ allocations and it requires an irq_desc to be
|
||||
allocated for every hwirq, even if it is unused.
|
||||
|
||||
The legacy map should only be used if fixed IRQ mappings must be
|
||||
supported. For example, ISA controllers would use the legacy map for
|
||||
mapping Linux IRQs 0-15 so that existing ISA drivers get the correct IRQ
|
||||
numbers.
|
||||
|
||||
Most users of legacy mappings should use irq_domain_add_simple() which
|
||||
will use a legacy domain only if an IRQ range is supplied by the
|
||||
system and will otherwise use a linear domain mapping. The semantics
|
||||
of this call are such that if an IRQ range is specified then
|
||||
descriptors will be allocated on-the-fly for it, and if no range is
|
||||
specified it will fall through to irq_domain_add_linear() which means
|
||||
*no* irq descriptors will be allocated.
|
||||
|
||||
A typical use case for simple domains is where an irqchip provider
|
||||
is supporting both dynamic and static IRQ assignments.
|
||||
|
||||
In order to avoid ending up in a situation where a linear domain is
|
||||
used and no descriptor gets allocated it is very important to make sure
|
||||
that the driver using the simple domain call irq_create_mapping()
|
||||
before any irq_find_mapping() since the latter will actually work
|
||||
for the static IRQ assignment case.

Hierarchy IRQ domain
--------------------

On some architectures, there may be multiple interrupt controllers
involved in delivering an interrupt from the device to the target CPU.
Let's look at a typical interrupt delivery path on x86 platforms::

  Device --> IOAPIC -> Interrupt remapping Controller -> Local APIC -> CPU

There are three interrupt controllers involved:

1) IOAPIC controller
2) Interrupt remapping controller
3) Local APIC controller

To support such a hardware topology and make the software architecture
match the hardware architecture, an irq_domain data structure is built
for each interrupt controller, and those irq_domains are organized into a
hierarchy. When building the irq_domain hierarchy, the irq_domain nearest
the device is the child and the irq_domain nearest the CPU is the parent.
So a hierarchy structure as below will be built for the example above::

	CPU Vector irq_domain (root irq_domain to manage CPU vectors)
		^
		|
	Interrupt Remapping irq_domain (manage irq_remapping entries)
		^
		|
	IOAPIC irq_domain (manage IOAPIC delivery entries/pins)

There are four major interfaces to use hierarchy irq_domain:

1) irq_domain_alloc_irqs(): allocate IRQ descriptors and interrupt
   controller related resources to deliver these interrupts.
2) irq_domain_free_irqs(): free IRQ descriptors and interrupt controller
   related resources associated with these interrupts.
3) irq_domain_activate_irq(): activate interrupt controller hardware to
   deliver the interrupt.
4) irq_domain_deactivate_irq(): deactivate interrupt controller hardware
   to stop delivering the interrupt.

The following changes are needed to support hierarchy irq_domain:

1) a new field 'parent' is added to struct irq_domain; it's used to
   maintain irq_domain hierarchy information.
2) a new field 'parent_data' is added to struct irq_data; it's used to
   build hierarchy irq_data to match hierarchy irq_domains. The irq_data
   is used to store the irq_domain pointer and hardware irq number.
3) new callbacks are added to struct irq_domain_ops to support hierarchy
   irq_domain operations.

With support of hierarchy irq_domain and hierarchy irq_data ready, an
irq_domain structure is built for each interrupt controller, and an
irq_data structure is allocated for each irq_domain associated with an
IRQ. Now we could go one step further to support stacked (hierarchy)
irq_chip. That is, an irq_chip is associated with each irq_data along
the hierarchy. A child irq_chip may implement a required action by
itself or by cooperating with its parent irq_chip.

With stacked irq_chip, an interrupt controller driver only needs to deal
with the hardware managed by itself and may ask for services from its
parent irq_chip when needed. So we can achieve a much cleaner software
architecture.

For an interrupt controller driver to support hierarchy irq_domain, it
needs to:

1) Implement irq_domain_ops.alloc and irq_domain_ops.free (a sketch
   follows this list)
2) Optionally implement irq_domain_ops.activate and
   irq_domain_ops.deactivate.
3) Optionally implement an irq_chip to manage the interrupt controller
   hardware.
4) There is no need to implement irq_domain_ops.map and
   irq_domain_ops.unmap; they are unused with hierarchy irq_domain.
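
A minimal sketch (not from this document) of such alloc/free callbacks
follows; the my_* names and the hwirq encoding taken from the fwspec are
hypothetical, while the irq_domain_* helpers are the in-kernel API::

	static struct irq_chip my_irq_chip;	/* hypothetical stacked chip */

	static int my_domain_alloc(struct irq_domain *domain, unsigned int virq,
				   unsigned int nr_irqs, void *arg)
	{
		struct irq_fwspec *fwspec = arg;
		irq_hw_number_t hwirq = fwspec->param[0];	/* assumed encoding */
		unsigned int i;
		int ret;

		/* Allocate resources in the parent domain(s) first. */
		ret = irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, arg);
		if (ret)
			return ret;

		/* Then attach our hwirq numbers and stacked irq_chip. */
		for (i = 0; i < nr_irqs; i++)
			irq_domain_set_hwirq_and_chip(domain, virq + i, hwirq + i,
						      &my_irq_chip, NULL);
		return 0;
	}

	static void my_domain_free(struct irq_domain *domain, unsigned int virq,
				   unsigned int nr_irqs)
	{
		irq_domain_free_irqs_common(domain, virq, nr_irqs);
	}

	static const struct irq_domain_ops my_domain_ops = {
		.alloc	= my_domain_alloc,
		.free	= my_domain_free,
	};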

Hierarchy irq_domain is in no way x86 specific, and is heavily used to
support other architectures, such as ARM, ARM64 etc.

Debugging
=========

Most of the internals of the IRQ subsystem are exposed in debugfs by
turning CONFIG_GENERIC_IRQ_DEBUGFS on.

52
Documentation/core-api/irq/irqflags-tracing.rst
Normal file

@ -0,0 +1,52 @@
=======================
IRQ-flags state tracing
=======================

:Author: started by Ingo Molnar <mingo@redhat.com>

The "irq-flags tracing" feature "traces" hardirq and softirq state, in
that it gives interested subsystems an opportunity to be notified of
every hardirqs-off/hardirqs-on, softirqs-off/softirqs-on event that
happens in the kernel.

CONFIG_TRACE_IRQFLAGS_SUPPORT is needed for CONFIG_PROVE_SPIN_LOCKING
and CONFIG_PROVE_RW_LOCKING to be offered by the generic lock debugging
code. Otherwise only CONFIG_PROVE_MUTEX_LOCKING and
CONFIG_PROVE_RWSEM_LOCKING will be offered on an architecture - these
are locking APIs that are not used in IRQ context. (The one exception
for rwsems is worked around.)

Architecture support for this is certainly not in the "trivial"
category, because lots of lowlevel assembly code deals with irq-flags
state changes. But an architecture can be irq-flags-tracing enabled in a
rather straightforward and risk-free manner.

Architectures that want to support this need to do a couple of
code-organizational changes first:

- add and enable TRACE_IRQFLAGS_SUPPORT in their arch level Kconfig file

and then a couple of functional changes are needed as well to implement
irq-flags-tracing support:

- in lowlevel entry code add (build-conditional) calls to the
  trace_hardirqs_off()/trace_hardirqs_on() functions. The lock validator
  closely guards whether the 'real' irq-flags matches the 'virtual'
  irq-flags state, and complains loudly (and turns itself off) if the
  two do not match. Usually most of the time for arch support for
  irq-flags-tracing is spent in this state: look at the lockdep
  complaint, try to figure out the assembly code we did not cover yet,
  fix and repeat. Once the system has booted up and works without a
  lockdep complaint in the irq-flags-tracing functions, arch support is
  complete.
- if the architecture has non-maskable interrupts then those need to be
  excluded from the irq-tracing [and lock validation] mechanism via
  lockdep_off()/lockdep_on(); a sketch follows below.
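
  For example (a sketch only, not taken from this document; do_nmi_work()
  is a hypothetical helper)::

	void arch_nmi_handler(struct pt_regs *regs)
	{
		lockdep_off();	/* NMIs are excluded from lock validation */
		do_nmi_work(regs);
		lockdep_on();
	}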

In general there is no risk from having an incomplete irq-flags-tracing
implementation in an architecture: lockdep will detect that and will
turn itself off. I.e. the lock validator will still be reliable. There
should be no crashes due to irq-tracing bugs. (Except if the assembly
changes break other code by modifying conditions or registers that
shouldn't be.)

@ -80,11 +80,11 @@ what is the pointer to the containing structure? You must avoid tricks
 (such as assuming that the kobject is at the beginning of the structure)
 and, instead, use the container_of() macro, found in ``<linux/kernel.h>``::

-    container_of(pointer, type, member)
+    container_of(ptr, type, member)

 where:

-* ``pointer`` is the pointer to the embedded kobject,
+* ``ptr`` is the pointer to the embedded kobject,
 * ``type`` is the type of the containing structure, and
 * ``member`` is the name of the structure field to which ``pointer`` points.

@ -140,7 +140,7 @@ the name of the kobject, call kobject_rename()::

     int kobject_rename(struct kobject *kobj, const char *new_name);

-kobject_rename does not perform any locking or have a solid notion of
+kobject_rename() does not perform any locking or have a solid notion of
 what names are valid so the caller must provide their own sanity checking
 and serialization.

@ -210,7 +210,7 @@ statically and will warn the developer of this improper usage.
 If all that you want to use a kobject for is to provide a reference counter
 for your structure, please use the struct kref instead; a kobject would be
 overkill. For more information on how to use struct kref, please see the
-file Documentation/kref.txt in the Linux kernel source tree.
+file Documentation/core-api/kref.rst in the Linux kernel source tree.

@ -222,17 +222,17 @@ ksets, show and store functions, and other details. This is the one
 exception where a single kobject should be created. To create such an
 entry, use the function::

-    struct kobject *kobject_create_and_add(char *name, struct kobject *parent);
+    struct kobject *kobject_create_and_add(const char *name, struct kobject *parent);

 This function will create a kobject and place it in sysfs in the location
 underneath the specified parent kobject. To create simple attributes
 associated with this kobject, use::

-    int sysfs_create_file(struct kobject *kobj, struct attribute *attr);
+    int sysfs_create_file(struct kobject *kobj, const struct attribute *attr);

 or::

-    int sysfs_create_group(struct kobject *kobj, struct attribute_group *grp);
+    int sysfs_create_group(struct kobject *kobj, const struct attribute_group *grp);

 Both types of attributes used here, with a kobject that has been created
 with the kobject_create_and_add(), can be of type kobj_attribute, so no

@ -300,8 +300,10 @@ kobj_type::
     void (*release)(struct kobject *kobj);
     const struct sysfs_ops *sysfs_ops;
     struct attribute **default_attrs;
     const struct attribute_group **default_groups;
     const struct kobj_ns_type_operations *(*child_ns_type)(struct kobject *kobj);
     const void *(*namespace)(struct kobject *kobj);
     void (*get_ownership)(struct kobject *kobj, kuid_t *uid, kgid_t *gid);
 };

 This structure is used to describe a particular type of kobject (or, more

@ -352,12 +354,12 @@ created and never declared statically or on the stack. To create a new
 kset use::

     struct kset *kset_create_and_add(const char *name,
-                                     struct kset_uevent_ops *u,
-                                     struct kobject *parent);
+                                     const struct kset_uevent_ops *uevent_ops,
+                                     struct kobject *parent_kobj);

 When you are finished with the kset, call::

-    void kset_unregister(struct kset *kset);
+    void kset_unregister(struct kset *k);

 to destroy it. This removes the kset from sysfs and decrements its reference
 count. When the reference count goes to zero, the kset will be released.

@ -371,9 +373,9 @@ If a kset wishes to control the uevent operations of the kobjects
 associated with it, it can use the struct kset_uevent_ops to handle it::

     struct kset_uevent_ops {
-        int (*filter)(struct kset *kset, struct kobject *kobj);
-        const char *(*name)(struct kset *kset, struct kobject *kobj);
-        int (*uevent)(struct kset *kset, struct kobject *kobj,
+        int (* const filter)(struct kset *kset, struct kobject *kobj);
+        const char *(* const name)(struct kset *kset, struct kobject *kobj);
+        int (* const uevent)(struct kset *kset, struct kobject *kobj,
                       struct kobj_uevent_env *env);
     };

323
Documentation/core-api/kref.rst
Normal file

@ -0,0 +1,323 @@
===================================================
Adding reference counters (krefs) to kernel objects
===================================================

:Author: Corey Minyard <minyard@acm.org>
:Author: Thomas Hellstrom <thellstrom@vmware.com>

A lot of this was lifted from Greg Kroah-Hartman's 2004 OLS paper and
presentation on krefs, which can be found at:

- http://www.kroah.com/linux/talks/ols_2004_kref_paper/Reprint-Kroah-Hartman-OLS2004.pdf
- http://www.kroah.com/linux/talks/ols_2004_kref_talk/

Introduction
============

krefs allow you to add reference counters to your objects. If you
have objects that are used in multiple places and passed around, and
you don't have refcounts, your code is almost certainly broken. If
you want refcounts, krefs are the way to go.

To use a kref, add one to your data structures like::

	struct my_data
	{
		.
		.
		struct kref refcount;
		.
		.
	};

The kref can occur anywhere within the data structure.

Initialization
==============

You must initialize the kref after you allocate it. To do this, call
kref_init() as so::

	struct my_data *data;

	data = kmalloc(sizeof(*data), GFP_KERNEL);
	if (!data)
		return -ENOMEM;
	kref_init(&data->refcount);

This sets the refcount in the kref to 1.

Kref rules
==========

Once you have an initialized kref, you must follow the following
rules:

1) If you make a non-temporary copy of a pointer, especially if
   it can be passed to another thread of execution, you must
   increment the refcount with kref_get() before passing it off::

	kref_get(&data->refcount);

   If you already have a valid pointer to a kref-ed structure (the
   refcount cannot go to zero) you may do this without a lock.

2) When you are done with a pointer, you must call kref_put()::

	kref_put(&data->refcount, data_release);

   If this is the last reference to the pointer, the release
   routine will be called. If the code never tries to get
   a valid pointer to a kref-ed structure without already
   holding a valid pointer, it is safe to do this without
   a lock.

3) If the code attempts to gain a reference to a kref-ed structure
   without already holding a valid pointer, it must serialize access
   where a kref_put() cannot occur during the kref_get(), and the
   structure must remain valid during the kref_get().

For example, if you allocate some data and then pass it to another
thread to process::

	void data_release(struct kref *ref)
	{
		struct my_data *data = container_of(ref, struct my_data, refcount);
		kfree(data);
	}

	void more_data_handling(void *cb_data)
	{
		struct my_data *data = cb_data;
		.
		. do stuff with data here
		.
		kref_put(&data->refcount, data_release);
	}

	int my_data_handler(void)
	{
		int rv = 0;
		struct my_data *data;
		struct task_struct *task;
		data = kmalloc(sizeof(*data), GFP_KERNEL);
		if (!data)
			return -ENOMEM;
		kref_init(&data->refcount);

		kref_get(&data->refcount);
		task = kthread_run(more_data_handling, data, "more_data_handling");
		if (task == ERR_PTR(-ENOMEM)) {
			rv = -ENOMEM;
			kref_put(&data->refcount, data_release);
			goto out;
		}

		.
		. do stuff with data here
		.
	out:
		kref_put(&data->refcount, data_release);
		return rv;
	}

This way, it doesn't matter in what order the two threads handle the
data; the kref_put() handles knowing when the data is not referenced
any more and releasing it. The kref_get() does not require a lock,
since we already have a valid pointer that we own a refcount for. The
put needs no lock because nothing tries to get the data without
already holding a pointer.

In the above example, kref_put() will be called 2 times in both success
and error paths. This is necessary because the reference count got
incremented 2 times by kref_init() and kref_get().

Note that the "before" in rule 1 is very important. You should never
do something like::

	task = kthread_run(more_data_handling, data, "more_data_handling");
	if (task == ERR_PTR(-ENOMEM)) {
		rv = -ENOMEM;
		goto out;
	} else
		/* BAD BAD BAD - get is after the handoff */
		kref_get(&data->refcount);

Don't assume you know what you are doing and use the above construct.
First of all, you may not know what you are doing. Second, you may
know what you are doing (there are some situations where locking is
involved where the above may be legal) but someone else who doesn't
know what they are doing may change the code or copy the code. It's
bad style. Don't do it.

There are some situations where you can optimize the gets and puts.
For instance, if you are done with an object and enqueuing it for
something else or passing it off to something else, there is no reason
to do a get then a put::

	/* Silly extra get and put */
	kref_get(&obj->ref);
	enqueue(obj);
	kref_put(&obj->ref, obj_cleanup);

Just do the enqueue. A comment about this is always welcome::

	enqueue(obj);
	/* We are done with obj, so we pass our refcount off
	   to the queue. DON'T TOUCH obj AFTER HERE! */

The last rule (rule 3) is the nastiest one to handle. Say, for
instance, you have a list of items that are each kref-ed, and you wish
to get the first one. You can't just pull the first item off the list
and kref_get() it. That violates rule 3 because you are not already
holding a valid pointer. You must add a mutex (or some other lock).
For instance::

	static DEFINE_MUTEX(mutex);
	static LIST_HEAD(q);
	struct my_data
	{
		struct kref      refcount;
		struct list_head link;
	};

	static struct my_data *get_entry()
	{
		struct my_data *entry = NULL;
		mutex_lock(&mutex);
		if (!list_empty(&q)) {
			entry = container_of(q.next, struct my_data, link);
			kref_get(&entry->refcount);
		}
		mutex_unlock(&mutex);
		return entry;
	}

	static void release_entry(struct kref *ref)
	{
		struct my_data *entry = container_of(ref, struct my_data, refcount);

		list_del(&entry->link);
		kfree(entry);
	}

	static void put_entry(struct my_data *entry)
	{
		mutex_lock(&mutex);
		kref_put(&entry->refcount, release_entry);
		mutex_unlock(&mutex);
	}

The kref_put() return value is useful if you do not want to hold the
lock during the whole release operation. Say you didn't want to call
kfree() with the lock held in the example above (since it is kind of
pointless to do so). You could use kref_put() as follows::

	static void release_entry(struct kref *ref)
	{
		/* All work is done after the return from kref_put(). */
	}

	static void put_entry(struct my_data *entry)
	{
		mutex_lock(&mutex);
		if (kref_put(&entry->refcount, release_entry)) {
			list_del(&entry->link);
			mutex_unlock(&mutex);
			kfree(entry);
		} else
			mutex_unlock(&mutex);
	}

This is really more useful if you have to call other routines as part
of the free operations that could take a long time or might claim the
same lock. Note that doing everything in the release routine is still
preferred as it is a little neater.

The above example could also be optimized using kref_get_unless_zero() in
the following way::

	static struct my_data *get_entry()
	{
		struct my_data *entry = NULL;
		mutex_lock(&mutex);
		if (!list_empty(&q)) {
			entry = container_of(q.next, struct my_data, link);
			if (!kref_get_unless_zero(&entry->refcount))
				entry = NULL;
		}
		mutex_unlock(&mutex);
		return entry;
	}

	static void release_entry(struct kref *ref)
	{
		struct my_data *entry = container_of(ref, struct my_data, refcount);

		mutex_lock(&mutex);
		list_del(&entry->link);
		mutex_unlock(&mutex);
		kfree(entry);
	}

	static void put_entry(struct my_data *entry)
	{
		kref_put(&entry->refcount, release_entry);
	}

This is useful to remove the mutex lock around kref_put() in put_entry(),
but it's important that kref_get_unless_zero is enclosed in the same
critical section that finds the entry in the lookup table, otherwise
kref_get_unless_zero may reference already freed memory. Note that it is
illegal to use kref_get_unless_zero without checking its return value.
If you are sure (by already having a valid pointer) that
kref_get_unless_zero() will return true, then use kref_get() instead.

Krefs and RCU
=============

The function kref_get_unless_zero also makes it possible to use rcu
locking for lookups in the above example::

	struct my_data
	{
		struct rcu_head rhead;
		.
		struct kref refcount;
		.
		.
	};

	static struct my_data *get_entry_rcu()
	{
		struct my_data *entry = NULL;
		rcu_read_lock();
		if (!list_empty(&q)) {
			entry = container_of(q.next, struct my_data, link);
			if (!kref_get_unless_zero(&entry->refcount))
				entry = NULL;
		}
		rcu_read_unlock();
		return entry;
	}

	static void release_entry_rcu(struct kref *ref)
	{
		struct my_data *entry = container_of(ref, struct my_data, refcount);

		mutex_lock(&mutex);
		list_del_rcu(&entry->link);
		mutex_unlock(&mutex);
		kfree_rcu(entry, rhead);
	}

	static void put_entry(struct my_data *entry)
	{
		kref_put(&entry->refcount, release_entry_rcu);
	}

But note that the struct kref member needs to remain in valid memory for
an rcu grace period after release_entry_rcu was called. That can be
accomplished by using kfree_rcu(entry, rhead) as done above, or by calling
synchronize_rcu() before using kfree, but note that synchronize_rcu() may
sleep for a substantial amount of time.
115
Documentation/core-api/printk-basics.rst
Normal file

@ -0,0 +1,115 @@
.. SPDX-License-Identifier: GPL-2.0

===========================
Message logging with printk
===========================

printk() is one of the most widely known functions in the Linux kernel. It's the
standard tool we have for printing messages and usually the most basic way of
tracing and debugging. If you're familiar with printf(3) you can tell printk()
is based on it, although it has some functional differences:

- printk() messages can specify a log level.

- the format string, while largely compatible with C99, doesn't follow the
  exact same specification. It has some extensions and a few limitations
  (no ``%n`` or floating point conversion specifiers). See :ref:`How to get
  printk format specifiers right <printk-specifiers>`.

All printk() messages are printed to the kernel log buffer, which is a ring
buffer exported to userspace through /dev/kmsg. The usual way to read it is
using ``dmesg``.

printk() is typically used like this::

  printk(KERN_INFO "Message: %s\n", arg);

where ``KERN_INFO`` is the log level (note that it's concatenated to the format
string, the log level is not a separate argument). The available log levels are:

+----------------+--------+-----------------------------------------------+
| Name           | String |  Alias function                               |
+================+========+===============================================+
| KERN_EMERG     | "0"    | pr_emerg()                                    |
+----------------+--------+-----------------------------------------------+
| KERN_ALERT     | "1"    | pr_alert()                                    |
+----------------+--------+-----------------------------------------------+
| KERN_CRIT      | "2"    | pr_crit()                                     |
+----------------+--------+-----------------------------------------------+
| KERN_ERR       | "3"    | pr_err()                                      |
+----------------+--------+-----------------------------------------------+
| KERN_WARNING   | "4"    | pr_warn()                                     |
+----------------+--------+-----------------------------------------------+
| KERN_NOTICE    | "5"    | pr_notice()                                   |
+----------------+--------+-----------------------------------------------+
| KERN_INFO      | "6"    | pr_info()                                     |
+----------------+--------+-----------------------------------------------+
| KERN_DEBUG     | "7"    | pr_debug() and pr_devel() if DEBUG is defined |
+----------------+--------+-----------------------------------------------+
| KERN_DEFAULT   | ""     |                                               |
+----------------+--------+-----------------------------------------------+
| KERN_CONT      | "c"    | pr_cont()                                     |
+----------------+--------+-----------------------------------------------+

The log level specifies the importance of a message. The kernel decides whether
to show the message immediately (printing it to the current console) depending
on its log level and the current *console_loglevel* (a kernel variable). If the
message priority is higher (lower log level value) than the *console_loglevel*
the message will be printed to the console.

If the log level is omitted, the message is printed with ``KERN_DEFAULT``
level.

You can check the current *console_loglevel* with::

  $ cat /proc/sys/kernel/printk
  4        4        1        7

The result shows the *current*, *default*, *minimum* and *boot-time-default* log
levels.

To change the current console_loglevel simply write the desired level to
``/proc/sys/kernel/printk``. For example, to print all messages to the console::

  # echo 8 > /proc/sys/kernel/printk

Another way, using ``dmesg``::

  # dmesg -n 5

sets the console_loglevel to print KERN_WARNING (4) or more severe messages to
console. See ``dmesg(1)`` for more information.

As an alternative to printk() you can use the ``pr_*()`` aliases for
logging. This family of macros embeds the log level in the macro names. For
example::

  pr_info("Info message no. %d\n", msg_num);

prints a ``KERN_INFO`` message.

Besides being more concise than the equivalent printk() calls, they can use a
common definition for the format string through the pr_fmt() macro. For
instance, defining this at the top of a source file (before any ``#include``
directive)::

  #define pr_fmt(fmt) "%s:%s: " fmt, KBUILD_MODNAME, __func__

would prefix every pr_*() message in that file with the module and function name
that originated the message.

For debugging purposes there are also two conditionally-compiled macros:
pr_debug() and pr_devel(), which are compiled-out unless ``DEBUG`` (or
also ``CONFIG_DYNAMIC_DEBUG`` in the case of pr_debug()) is defined.


Function reference
==================

.. kernel-doc:: kernel/printk/printk.c
   :functions: printk

.. kernel-doc:: include/linux/printk.h
   :functions: pr_emerg pr_alert pr_crit pr_err pr_warn pr_notice pr_info
      pr_fmt pr_debug pr_devel pr_cont

@ -2,6 +2,8 @@
 How to get printk format specifiers right
 =========================================

+.. _printk-specifiers:
+
 :Author: Randy Dunlap <rdunlap@infradead.org>
 :Author: Andrew Murray <amurray@mpc-data.co.uk>

429
Documentation/core-api/rbtree.rst
Normal file

@ -0,0 +1,429 @@
=================================
Red-black Trees (rbtree) in Linux
=================================

:Date: January 18, 2007
:Author: Rob Landley <rob@landley.net>

What are red-black trees, and what are they for?
------------------------------------------------

Red-black trees are a type of self-balancing binary search tree, used for
storing sortable key/value data pairs. This differs from radix trees (which
are used to efficiently store sparse arrays and thus use long integer indexes
to insert/access/delete nodes) and hash tables (which are not kept sorted to
be easily traversed in order, and must be tuned for a specific size and
hash function where rbtrees scale gracefully storing arbitrary keys).

Red-black trees are similar to AVL trees, but provide faster real-time bounded
worst case performance for insertion and deletion (at most two rotations and
three rotations, respectively, to balance the tree), with slightly slower
(but still O(log n)) lookup time.

To quote Linux Weekly News:

    There are a number of red-black trees in use in the kernel.
    The deadline and CFQ I/O schedulers employ rbtrees to
    track requests; the packet CD/DVD driver does the same.
    The high-resolution timer code uses an rbtree to organize outstanding
    timer requests. The ext3 filesystem tracks directory entries in a
    red-black tree. Virtual memory areas (VMAs) are tracked with red-black
    trees, as are epoll file descriptors, cryptographic keys, and network
    packets in the "hierarchical token bucket" scheduler.

This document covers use of the Linux rbtree implementation. For more
information on the nature and implementation of Red Black Trees, see:

  Linux Weekly News article on red-black trees
    http://lwn.net/Articles/184495/

  Wikipedia entry on red-black trees
    http://en.wikipedia.org/wiki/Red-black_tree

Linux implementation of red-black trees
---------------------------------------

Linux's rbtree implementation lives in the file "lib/rbtree.c". To use it,
"#include <linux/rbtree.h>".

The Linux rbtree implementation is optimized for speed, and thus has one
less layer of indirection (and better cache locality) than more traditional
tree implementations. Instead of using pointers to separate rb_node and data
structures, each instance of struct rb_node is embedded in the data structure
it organizes. And instead of using a comparison callback function pointer,
users are expected to write their own tree search and insert functions
which call the provided rbtree functions. Locking is also left up to the
user of the rbtree code.

Creating a new rbtree
---------------------

Data nodes in an rbtree tree are structures containing a struct rb_node member::

  struct mytype {
	struct rb_node node;
	char *keystring;
  };

When dealing with a pointer to the embedded struct rb_node, the containing data
structure may be accessed with the standard container_of() macro. In addition,
individual members may be accessed directly via rb_entry(node, type, member).

At the root of each rbtree is an rb_root structure, which is initialized to be
empty via::

  struct rb_root mytree = RB_ROOT;

Searching for a value in an rbtree
----------------------------------

Writing a search function for your tree is fairly straightforward: start at the
root, compare each value, and follow the left or right branch as necessary.

Example::

  struct mytype *my_search(struct rb_root *root, char *string)
  {
	struct rb_node *node = root->rb_node;

	while (node) {
		struct mytype *data = container_of(node, struct mytype, node);
		int result;

		result = strcmp(string, data->keystring);

		if (result < 0)
			node = node->rb_left;
		else if (result > 0)
			node = node->rb_right;
		else
			return data;
	}
	return NULL;
  }

Inserting data into an rbtree
-----------------------------

Inserting data in the tree involves first searching for the place to insert the
new node, then inserting the node and rebalancing ("recoloring") the tree.

The search for insertion differs from the previous search by finding the
location of the pointer on which to graft the new node. The new node also
needs a link to its parent node for rebalancing purposes.

Example::

  int my_insert(struct rb_root *root, struct mytype *data)
  {
	struct rb_node **new = &(root->rb_node), *parent = NULL;

	/* Figure out where to put new node */
	while (*new) {
		struct mytype *this = container_of(*new, struct mytype, node);
		int result = strcmp(data->keystring, this->keystring);

		parent = *new;
		if (result < 0)
			new = &((*new)->rb_left);
		else if (result > 0)
			new = &((*new)->rb_right);
		else
			return FALSE;
	}

	/* Add new node and rebalance tree. */
	rb_link_node(&data->node, parent, new);
	rb_insert_color(&data->node, root);

	return TRUE;
  }

Removing or replacing existing data in an rbtree
------------------------------------------------

To remove an existing node from a tree, call::

  void rb_erase(struct rb_node *victim, struct rb_root *tree);

Example::

  struct mytype *data = mysearch(&mytree, "walrus");

  if (data) {
	rb_erase(&data->node, &mytree);
	myfree(data);
  }

To replace an existing node in a tree with a new one with the same key, call::

  void rb_replace_node(struct rb_node *old, struct rb_node *new,
			struct rb_root *tree);

Replacing a node this way does not re-sort the tree: if the new node doesn't
have the same key as the old node, the rbtree will probably become corrupted.

Iterating through the elements stored in an rbtree (in sort order)
------------------------------------------------------------------

Four functions are provided for iterating through an rbtree's contents in
sorted order. These work on arbitrary trees, and should not need to be
modified or wrapped (except for locking purposes)::

  struct rb_node *rb_first(struct rb_root *tree);
  struct rb_node *rb_last(struct rb_root *tree);
  struct rb_node *rb_next(struct rb_node *node);
  struct rb_node *rb_prev(struct rb_node *node);

To start iterating, call rb_first() or rb_last() with a pointer to the root
of the tree, which will return a pointer to the node structure contained in
the first or last element in the tree. To continue, fetch the next or previous
node by calling rb_next() or rb_prev() on the current node. This will return
NULL when there are no more nodes left.

The iterator functions return a pointer to the embedded struct rb_node, from
which the containing data structure may be accessed with the container_of()
macro, and individual members may be accessed directly via
rb_entry(node, type, member).

Example::

  struct rb_node *node;
  for (node = rb_first(&mytree); node; node = rb_next(node))
	printk("key=%s\n", rb_entry(node, struct mytype, node)->keystring);

Cached rbtrees
--------------

Computing the leftmost (smallest) node is quite a common task for binary
search trees, such as for traversals or users relying on the particular
order for their own logic. To this end, users can use 'struct rb_root_cached'
to optimize O(logN) rb_first() calls to a simple pointer fetch, avoiding
potentially expensive tree iterations. This is done at negligible runtime
overhead for maintenance, albeit with a larger memory footprint.

Similar to the rb_root structure, cached rbtrees are initialized to be
empty via::

  struct rb_root_cached mytree = RB_ROOT_CACHED;

A cached rbtree is simply a regular rb_root with an extra pointer to cache the
leftmost node. This allows rb_root_cached to exist wherever rb_root does,
which permits augmented trees to be supported as well, with only a few extra
interfaces::

  struct rb_node *rb_first_cached(struct rb_root_cached *tree);
  void rb_insert_color_cached(struct rb_node *, struct rb_root_cached *, bool);
  void rb_erase_cached(struct rb_node *node, struct rb_root_cached *);

Both insert and erase calls have their respective counterpart for augmented
trees::

  void rb_insert_augmented_cached(struct rb_node *node, struct rb_root_cached *,
				  bool, struct rb_augment_callbacks *);
  void rb_erase_augmented_cached(struct rb_node *, struct rb_root_cached *,
				 struct rb_augment_callbacks *);
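
A small sketch, not part of this document, showing the cached variants in
use; it reuses the hypothetical struct mytype and keystring comparison from
the earlier examples::

  struct rb_root_cached mytree = RB_ROOT_CACHED;

  void my_insert_cached(struct mytype *data)
  {
	struct rb_node **new = &mytree.rb_root.rb_node, *parent = NULL;
	bool leftmost = true;

	while (*new) {
		struct mytype *this = container_of(*new, struct mytype, node);

		parent = *new;
		if (strcmp(data->keystring, this->keystring) < 0) {
			new = &((*new)->rb_left);
		} else {
			new = &((*new)->rb_right);
			leftmost = false;	/* not the smallest key */
		}
	}

	rb_link_node(&data->node, parent, new);
	rb_insert_color_cached(&data->node, &mytree, leftmost);
  }

  /* The leftmost node is now an O(1) pointer fetch. */
  struct mytype *my_first_cached(void)
  {
	struct rb_node *node = rb_first_cached(&mytree);

	return node ? rb_entry(node, struct mytype, node) : NULL;
  }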

Support for Augmented rbtrees
-----------------------------

Augmented rbtree is an rbtree with "some" additional data stored in
each node, where the additional data for node N must be a function of
the contents of all nodes in the subtree rooted at N. This data can
be used to augment some new functionality to rbtree. Augmented rbtree
is an optional feature built on top of the basic rbtree infrastructure.
An rbtree user who wants this feature will have to call the augmentation
functions with the user provided augmentation callback when inserting
and erasing nodes.

C files implementing augmented rbtree manipulation must include
<linux/rbtree_augmented.h> instead of <linux/rbtree.h>. Note that
linux/rbtree_augmented.h exposes some rbtree implementation details
you are not expected to rely on; please stick to the documented APIs
there and do not include <linux/rbtree_augmented.h> from header files
either so as to minimize chances of your users accidentally relying on
such implementation details.

On insertion, the user must update the augmented information on the path
leading to the inserted node, then call rb_link_node() as usual and
rb_augment_inserted() instead of the usual rb_insert_color() call.
If rb_augment_inserted() rebalances the rbtree, it will call back into
a user provided function to update the augmented information on the
affected subtrees.

When erasing a node, the user must call rb_erase_augmented() instead of
rb_erase(). rb_erase_augmented() calls back into user provided functions
to update the augmented information on affected subtrees.

In both cases, the callbacks are provided through struct rb_augment_callbacks.
3 callbacks must be defined:

- A propagation callback, which updates the augmented value for a given
  node and its ancestors, up to a given stop point (or NULL to update
  all the way to the root).

- A copy callback, which copies the augmented value for a given subtree
  to a newly assigned subtree root.

- A tree rotation callback, which copies the augmented value for a given
  subtree to a newly assigned subtree root AND recomputes the augmented
  information for the former subtree root.

The compiled code for rb_erase_augmented() may inline the propagation and
copy callbacks, which results in a large function, so each augmented rbtree
user should have a single rb_erase_augmented() call site in order to limit
compiled code size.


Sample usage
^^^^^^^^^^^^

Interval tree is an example of augmented rb tree. Reference -
"Introduction to Algorithms" by Cormen, Leiserson, Rivest and Stein.
More details about interval trees:

Classical rbtree has a single key and it cannot be directly used to store
interval ranges like [lo:hi] and do a quick lookup for any overlap with a new
lo:hi or to find whether there is an exact match for a new lo:hi.

However, rbtree can be augmented to store such interval ranges in a structured
way making it possible to do efficient lookup and exact match.

This "extra information" stored in each node is the maximum hi
(max_hi) value among all the nodes that are its descendants. This
information can be maintained at each node just by looking at the node
and its immediate children. And this will be used in O(log n) lookup
for lowest match (lowest start address among all possible matches)
with something like::

  struct interval_tree_node *
  interval_tree_first_match(struct rb_root *root,
			    unsigned long start, unsigned long last)
  {
	struct interval_tree_node *node;

	if (!root->rb_node)
		return NULL;
	node = rb_entry(root->rb_node, struct interval_tree_node, rb);

	while (true) {
		if (node->rb.rb_left) {
			struct interval_tree_node *left =
				rb_entry(node->rb.rb_left,
					 struct interval_tree_node, rb);
			if (left->__subtree_last >= start) {
				/*
				 * Some nodes in left subtree satisfy Cond2.
				 * Iterate to find the leftmost such node N.
				 * If it also satisfies Cond1, that's the match
				 * we are looking for. Otherwise, there is no
				 * matching interval as nodes to the right of N
				 * can't satisfy Cond1 either.
				 */
				node = left;
				continue;
			}
		}
		if (node->start <= last) {		/* Cond1 */
			if (node->last >= start)	/* Cond2 */
				return node;	/* node is leftmost match */
			if (node->rb.rb_right) {
				node = rb_entry(node->rb.rb_right,
						struct interval_tree_node, rb);
				if (node->__subtree_last >= start)
					continue;
			}
		}
		return NULL;	/* No match */
	}
  }

Insertion/removal are defined using the following augmented callbacks::

  static inline unsigned long
  compute_subtree_last(struct interval_tree_node *node)
  {
	unsigned long max = node->last, subtree_last;
	if (node->rb.rb_left) {
		subtree_last = rb_entry(node->rb.rb_left,
			struct interval_tree_node, rb)->__subtree_last;
		if (max < subtree_last)
			max = subtree_last;
	}
	if (node->rb.rb_right) {
		subtree_last = rb_entry(node->rb.rb_right,
			struct interval_tree_node, rb)->__subtree_last;
		if (max < subtree_last)
			max = subtree_last;
	}
	return max;
  }

  static void augment_propagate(struct rb_node *rb, struct rb_node *stop)
  {
	while (rb != stop) {
		struct interval_tree_node *node =
			rb_entry(rb, struct interval_tree_node, rb);
		unsigned long subtree_last = compute_subtree_last(node);
		if (node->__subtree_last == subtree_last)
			break;
		node->__subtree_last = subtree_last;
		rb = rb_parent(&node->rb);
	}
  }

  static void augment_copy(struct rb_node *rb_old, struct rb_node *rb_new)
  {
	struct interval_tree_node *old =
		rb_entry(rb_old, struct interval_tree_node, rb);
	struct interval_tree_node *new =
		rb_entry(rb_new, struct interval_tree_node, rb);

	new->__subtree_last = old->__subtree_last;
  }

  static void augment_rotate(struct rb_node *rb_old, struct rb_node *rb_new)
  {
	struct interval_tree_node *old =
		rb_entry(rb_old, struct interval_tree_node, rb);
	struct interval_tree_node *new =
		rb_entry(rb_new, struct interval_tree_node, rb);

	new->__subtree_last = old->__subtree_last;
	old->__subtree_last = compute_subtree_last(old);
  }

  static const struct rb_augment_callbacks augment_callbacks = {
	augment_propagate, augment_copy, augment_rotate
  };

  void interval_tree_insert(struct interval_tree_node *node,
			    struct rb_root *root)
  {
	struct rb_node **link = &root->rb_node, *rb_parent = NULL;
	unsigned long start = node->start, last = node->last;
	struct interval_tree_node *parent;

	while (*link) {
		rb_parent = *link;
		parent = rb_entry(rb_parent, struct interval_tree_node, rb);
		if (parent->__subtree_last < last)
			parent->__subtree_last = last;
		if (start < parent->start)
			link = &parent->rb.rb_left;
		else
			link = &parent->rb.rb_right;
	}

	node->__subtree_last = last;
	rb_link_node(&node->rb, rb_parent, link);
	rb_insert_augmented(&node->rb, root, &augment_callbacks);
  }

  void interval_tree_remove(struct interval_tree_node *node,
			    struct rb_root *root)
  {
	rb_erase_augmented(&node->rb, root, &augment_callbacks);
  }