Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Synchronize with 'net' in order to sort out some l2tp, wireless, and
ipv6 GRE fixes that will be built on top of in 'net-next'.

Signed-off-by: David S. Miller <davem@davemloft.net>
Committed by David S. Miller on 2013-02-08 18:02:14 -05:00
commit fd5023111c
266 changed files with 2114 additions and 1437 deletions

Documentation/hid/hid-sensor.txt Executable file → Normal file

@@ -2438,7 +2438,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
			real-time workloads.  It can also improve energy
			efficiency for asymmetric multiprocessors.

-	rcu_nocbs_poll	[KNL,BOOT]
+	rcu_nocb_poll	[KNL,BOOT]
			Rather than requiring that offloaded CPUs
			(specified by rcu_nocbs= above) explicitly
			awaken the corresponding "rcuoN" kthreads,


@@ -57,6 +57,10 @@ Protocol 2.10:	(Kernel 2.6.31) Added a protocol for relaxed alignment
Protocol 2.11:	(Kernel 3.6) Added a field for offset of EFI handover
		protocol entry point.

+Protocol 2.12:	(Kernel 3.8) Added the xloadflags field and extension fields
+		to struct boot_params for for loading bzImage and ramdisk
+		above 4G in 64bit.
+
**** MEMORY LAYOUT

The traditional memory map for the kernel loader, used for Image or

@@ -182,7 +186,7 @@ Offset	Proto	Name		Meaning
0230/4	2.05+	kernel_alignment	Physical addr alignment required for kernel
0234/1	2.05+	relocatable_kernel	Whether kernel is relocatable or not
0235/1	2.10+	min_alignment		Minimum alignment, as a power of two
-0236/2	N/A	pad3			Unused
+0236/2	2.12+	xloadflags		Boot protocol option flags
0238/4	2.06+	cmdline_size		Maximum size of the kernel command line
023C/4	2.07+	hardware_subarch	Hardware subarchitecture
0240/8	2.07+	hardware_subarch_data	Subarchitecture-specific data

@@ -582,6 +586,27 @@ Protocol:	2.10+
  misaligned kernel.  Therefore, a loader should typically try each
  power-of-two alignment from kernel_alignment down to this alignment.

+Field name:	xloadflags
+Type:		read
+Offset/size:	0x236/2
+Protocol:	2.12+
+
+  This field is a bitmask.
+
+  Bit 0 (read):	XLF_KERNEL_64
+	- If 1, this kernel has the legacy 64-bit entry point at 0x200.
+
+  Bit 1 (read):	XLF_CAN_BE_LOADED_ABOVE_4G
+	- If 1, kernel/boot_params/cmdline/ramdisk can be above 4G.
+
+  Bit 2 (read):	XLF_EFI_HANDOVER_32
+	- If 1, the kernel supports the 32-bit EFI handoff entry point
+	  given at handover_offset.
+
+  Bit 3 (read):	XLF_EFI_HANDOVER_64
+	- If 1, the kernel supports the 64-bit EFI handoff entry point
+	  given at handover_offset + 0x200.
+
Field name:	cmdline_size
Type:		read
Offset/size:	0x238/4
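
Illustration (not part of the patch): a boot loader might consume the new
xloadflags field roughly as in the C sketch below. The XLF_* values mirror
the documentation above; the helper name and parameters are hypothetical.

    #include <stdint.h>

    /* xloadflags bits, as documented above (boot protocol 2.12+) */
    #define XLF_KERNEL_64              (1 << 0)
    #define XLF_CAN_BE_LOADED_ABOVE_4G (1 << 1)
    #define XLF_EFI_HANDOVER_32        (1 << 2)
    #define XLF_EFI_HANDOVER_64        (1 << 3)

    /* Hypothetical helper: may the loader place kernel/boot_params/cmdline/
     * ramdisk above 4G?  'version' is the 16-bit boot protocol number from
     * the setup header, 'xloadflags' the 16-bit field at offset 0x236. */
    static int may_load_above_4g(uint16_t version, uint16_t xloadflags)
    {
            return version >= 0x020c &&
                   (xloadflags & XLF_CAN_BE_LOADED_ABOVE_4G);
    }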


@@ -19,6 +19,9 @@ Offset	Proto	Name		Meaning
090/010	ALL	hd1_info	hd1 disk parameter, OBSOLETE!!
0A0/010	ALL	sys_desc_table	System description table (struct sys_desc_table)
0B0/010	ALL	olpc_ofw_header	OLPC's OpenFirmware CIF and friends
+0C0/004	ALL	ext_ramdisk_image ramdisk_image high 32bits
+0C4/004	ALL	ext_ramdisk_size  ramdisk_size high 32bits
+0C8/004	ALL	ext_cmd_line_ptr  cmd_line_ptr high 32bits
140/080	ALL	edid_info	Video mode setup (struct edid_info)
1C0/020	ALL	efi_info	EFI 32 information (struct efi_info)
1E0/004	ALL	alk_mem_k	Alternative mem check, in KB

@@ -27,6 +30,7 @@ Offset	Proto	Name		Meaning
1E9/001	ALL	eddbuf_entries	Number of entries in eddbuf (below)
1EA/001	ALL	edd_mbr_sig_buf_entries	Number of entries in edd_mbr_sig_buffer
				(below)
+1EF/001	ALL	sentinel	Used to detect broken bootloaders
290/040	ALL	edd_mbr_sig_buffer EDD MBR signatures
2D0/A00	ALL	e820_map	E820 memory map table
				(array of struct e820entry)
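
Illustration (not part of the patch): the ext_* words carry the high 32 bits
of the corresponding 32-bit setup_header fields, so a 64-bit consumer
recombines them roughly as below. The struct here is a trimmed, hypothetical
stand-in, not the real zero-page layout.

    #include <stdint.h>

    struct zero_page_excerpt {
            uint32_t ramdisk_image;     /* low 32 bits, from setup_header */
            uint32_t ext_ramdisk_image; /* offset 0x0C0: high 32 bits */
    };

    /* Recombine the split value into a full 64-bit ramdisk address. */
    static uint64_t ramdisk_addr(const struct zero_page_excerpt *zp)
    {
            return ((uint64_t)zp->ext_ramdisk_image << 32) | zp->ramdisk_image;
    }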


@@ -1489,7 +1489,7 @@ AVR32 ARCHITECTURE
M:	Haavard Skinnemoen <hskinnemoen@gmail.com>
M:	Hans-Christian Egtvedt <egtvedt@samfundet.no>
W:	http://www.atmel.com/products/AVR32/
-W:	http://avr32linux.org/
+W:	http://mirror.egtvedt.no/avr32linux.org/
W:	http://avrfreaks.net/
S:	Maintained
F:	arch/avr32/

@@ -7076,7 +7076,7 @@ F:	include/uapi/sound/
F:	sound/

SOUND - SOC LAYER / DYNAMIC AUDIO POWER MANAGEMENT (ASoC)
-M:	Liam Girdwood <lrg@ti.com>
+M:	Liam Girdwood <lgirdwood@gmail.com>
M:	Mark Brown <broonie@opensource.wolfsonmicro.com>
T:	git git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git
L:	alsa-devel@alsa-project.org (moderated for non-subscribers)


@@ -1,8 +1,8 @@
VERSION = 3
PATCHLEVEL = 8
SUBLEVEL = 0
-EXTRAVERSION = -rc5
+EXTRAVERSION = -rc7
-NAME = Terrified Chipmunk
+NAME = Unicycling Gorilla

# *DOCUMENTATION*
# To see a list of typical targets execute "make help"


@@ -351,6 +351,25 @@ void __init gic_cascade_irq(unsigned int gic_nr, unsigned int irq)
	irq_set_chained_handler(irq, gic_handle_cascade_irq);
}

+static u8 gic_get_cpumask(struct gic_chip_data *gic)
+{
+	void __iomem *base = gic_data_dist_base(gic);
+	u32 mask, i;
+
+	for (i = mask = 0; i < 32; i += 4) {
+		mask = readl_relaxed(base + GIC_DIST_TARGET + i);
+		mask |= mask >> 16;
+		mask |= mask >> 8;
+		if (mask)
+			break;
+	}
+
+	if (!mask)
+		pr_crit("GIC CPU mask not found - kernel will fail to boot.\n");
+
+	return mask;
+}
+
static void __init gic_dist_init(struct gic_chip_data *gic)
{
	unsigned int i;

@@ -369,7 +388,9 @@ static void __init gic_dist_init(struct gic_chip_data *gic)
	/*
	 * Set all global interrupts to this CPU only.
	 */
-	cpumask = readl_relaxed(base + GIC_DIST_TARGET + 0);
+	cpumask = gic_get_cpumask(gic);
+	cpumask |= cpumask << 8;
+	cpumask |= cpumask << 16;
	for (i = 32; i < gic_irqs; i += 4)
		writel_relaxed(cpumask, base + GIC_DIST_TARGET + i * 4 / 4);

@@ -400,7 +421,7 @@ static void __cpuinit gic_cpu_init(struct gic_chip_data *gic)
	 * Get what the GIC says our CPU mask is.
	 */
	BUG_ON(cpu >= NR_GIC_CPU_IF);
-	cpu_mask = readl_relaxed(dist_base + GIC_DIST_TARGET + 0);
+	cpu_mask = gic_get_cpumask(gic);
	gic_cpu_map[cpu] = cpu_mask;

	/*


@@ -37,7 +37,7 @@
 */
#define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
#define TASK_SIZE		(UL(CONFIG_PAGE_OFFSET) - UL(0x01000000))
-#define TASK_UNMAPPED_BASE	(UL(CONFIG_PAGE_OFFSET) / 3)
+#define TASK_UNMAPPED_BASE	ALIGN(TASK_SIZE / 3, SZ_16M)

/*
 * The maximum size of a 26-bit user space task.


@@ -414,7 +414,7 @@ config MACH_EXYNOS4_DT
	select CPU_EXYNOS4210
	select HAVE_SAMSUNG_KEYPAD if INPUT_KEYBOARD
	select PINCTRL
-	select PINCTRL_EXYNOS4
+	select PINCTRL_EXYNOS
	select USE_OF
	help
	  Machine support for Samsung Exynos4 machine with device tree enabled.


@@ -115,7 +115,7 @@
/*
 * Only define NR_IRQS if less than NR_IRQS_EB
 */
-#define NR_IRQS_EB		(IRQ_EB_GIC_START + 96)
+#define NR_IRQS_EB		(IRQ_EB_GIC_START + 128)

#if defined(CONFIG_MACH_REALVIEW_EB) \
	&& (!defined(NR_IRQS) || (NR_IRQS < NR_IRQS_EB))


@@ -640,7 +640,7 @@ static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,

	if (is_coherent || nommu())
		addr = __alloc_simple_buffer(dev, size, gfp, &page);
-	else if (gfp & GFP_ATOMIC)
+	else if (!(gfp & __GFP_WAIT))
		addr = __alloc_from_pool(size, &page);
	else if (!IS_ENABLED(CONFIG_CMA))
		addr = __alloc_remap_buffer(dev, size, gfp, prot, &page, caller);


@@ -336,4 +336,14 @@ dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)

+/* drivers/base/dma-mapping.c */
+extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
+			   void *cpu_addr, dma_addr_t dma_addr, size_t size);
+extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
+				  void *cpu_addr, dma_addr_t dma_addr,
+				  size_t size);
+
+#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
+#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
+
#endif /* __ASM_AVR32_DMA_MAPPING_H */
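
For context (an editor's sketch, not part of the patch): dma_mmap_coherent()
is what lets a driver's mmap handler export a coherent buffer to user space.
The driver structure and names below are hypothetical.

    #include <linux/dma-mapping.h>
    #include <linux/fs.h>
    #include <linux/mm.h>

    /* Hypothetical driver state: a buffer obtained elsewhere with
     * dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL). */
    struct mydrv_buf {
            struct device *dev;
            void *cpu_addr;
            dma_addr_t dma_handle;
            size_t size;
    };

    /* file_operations.mmap handler: hand the coherent buffer to user space. */
    static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
    {
            struct mydrv_buf *buf = file->private_data;

            return dma_mmap_coherent(buf->dev, vma, buf->cpu_addr,
                                     buf->dma_handle, buf->size);
    }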


@@ -154,4 +154,14 @@ dma_cache_sync(struct device *dev, void *vaddr, size_t size,
	_dma_sync((dma_addr_t)vaddr, size, dir);
}

+/* drivers/base/dma-mapping.c */
+extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
+			   void *cpu_addr, dma_addr_t dma_addr, size_t size);
+extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
+				  void *cpu_addr, dma_addr_t dma_addr,
+				  size_t size);
+
+#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
+#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
+
#endif /* _BLACKFIN_DMA_MAPPING_H */


@@ -89,4 +89,19 @@ extern void dma_free_coherent(struct device *, size_t, void *, dma_addr_t);
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent((d), (s), (h), (f))
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent((d), (s), (v), (h))

+/* Not supported for now */
+static inline int dma_mmap_coherent(struct device *dev,
+				    struct vm_area_struct *vma, void *cpu_addr,
+				    dma_addr_t dma_addr, size_t size)
+{
+	return -EINVAL;
+}
+
+static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+				  void *cpu_addr, dma_addr_t dma_addr,
+				  size_t size)
+{
+	return -EINVAL;
+}
+
#endif /* _ASM_C6X_DMA_MAPPING_H */


@@ -158,5 +158,15 @@ dma_cache_sync(struct device *dev, void *vaddr, size_t size,
{
}

+/* drivers/base/dma-mapping.c */
+extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
+			   void *cpu_addr, dma_addr_t dma_addr, size_t size);
+extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
+				  void *cpu_addr, dma_addr_t dma_addr,
+				  size_t size);
+
+#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
+#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
+
#endif


@@ -132,4 +132,19 @@ void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
	flush_write_buffers();
}

+/* Not supported for now */
+static inline int dma_mmap_coherent(struct device *dev,
+				    struct vm_area_struct *vma, void *cpu_addr,
+				    dma_addr_t dma_addr, size_t size)
+{
+	return -EINVAL;
+}
+
+static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+				  void *cpu_addr, dma_addr_t dma_addr,
+				  size_t size)
+{
+	return -EINVAL;
+}
+
#endif /* _ASM_DMA_MAPPING_H */


@@ -115,4 +115,14 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t handle)
#include <asm-generic/dma-mapping-broken.h>
#endif

+/* drivers/base/dma-mapping.c */
+extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
+			   void *cpu_addr, dma_addr_t dma_addr, size_t size);
+extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
+				  void *cpu_addr, dma_addr_t dma_addr,
+				  size_t size);
+
+#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
+#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
+
#endif /* _M68K_DMA_MAPPING_H */


@@ -8,8 +8,10 @@ config BCM47XX_SSB
	select SSB_DRIVER_EXTIF
	select SSB_EMBEDDED
	select SSB_B43_PCI_BRIDGE if PCI
+	select SSB_DRIVER_PCICORE if PCI
	select SSB_PCICORE_HOSTMODE if PCI
	select SSB_DRIVER_GPIO
+	select GPIOLIB
	default y
	help
	 Add support for old Broadcom BCM47xx boards with Sonics Silicon Backplane support.

@@ -25,6 +27,7 @@ config BCM47XX_BCMA
	select BCMA_HOST_PCI if PCI
	select BCMA_DRIVER_PCI_HOSTMODE if PCI
	select BCMA_DRIVER_GPIO
+	select GPIOLIB
	default y
	help
	 Add support for new Broadcom BCM47xx boards with Broadcom specific Advanced Microcontroller Bus.


@@ -30,6 +30,7 @@
 * measurement, and debugging facilities.
 */

+#include <linux/compiler.h>
#include <linux/irqflags.h>
#include <asm/octeon/cvmx.h>
#include <asm/octeon/cvmx-l2c.h>

@@ -285,22 +286,22 @@ uint64_t cvmx_l2c_read_perf(uint32_t counter)
 */
static void fault_in(uint64_t addr, int len)
{
-	volatile char *ptr;
-	volatile char dummy;
+	char *ptr;

	/*
	 * Adjust addr and length so we get all cache lines even for
	 * small ranges spanning two cache lines.
	 */
	len += addr & CVMX_CACHE_LINE_MASK;
	addr &= ~CVMX_CACHE_LINE_MASK;
-	ptr = (volatile char *)cvmx_phys_to_ptr(addr);
+	ptr = cvmx_phys_to_ptr(addr);
	/*
	 * Invalidate L1 cache to make sure all loads result in data
	 * being in L2.
	 */
	CVMX_DCACHE_INVALIDATE;
	while (len > 0) {
-		dummy += *ptr;
+		ACCESS_ONCE(*ptr);
		len -= CVMX_CACHE_LINE_SIZE;
		ptr += CVMX_CACHE_LINE_SIZE;
	}


@@ -16,7 +16,7 @@
#include <asm/mipsregs.h>

#define DSP_DEFAULT	0x00000000
-#define DSP_MASK	0x3ff
+#define DSP_MASK	0x3f

#define __enable_dsp_hazard()						\
do {									\


@@ -353,6 +353,7 @@ union mips_instruction {
	struct u_format u_format;
	struct c_format c_format;
	struct r_format r_format;
+	struct p_format p_format;
	struct f_format f_format;
	struct ma_format ma_format;
	struct b_format b_format;


@@ -21,4 +21,4 @@
#define R10000_LLSC_WAR			0
#define MIPS34K_MISSED_ITLB_WAR		0

-#endif /* __ASM_MIPS_MACH_PNX8550_WAR_H */
+#endif /* __ASM_MIPS_MACH_PNX833X_WAR_H */


@@ -230,6 +230,7 @@ static inline void pud_clear(pud_t *pudp)
#else
#define pte_pfn(x)		((unsigned long)((x).pte >> _PFN_SHIFT))
#define pfn_pte(pfn, prot)	__pte(((pfn) << _PFN_SHIFT) | pgprot_val(prot))
+#define pfn_pmd(pfn, prot)	__pmd(((pfn) << _PFN_SHIFT) | pgprot_val(prot))
#endif

#define __pgd_offset(address)	pgd_index(address)


@@ -3,6 +3,7 @@ include include/uapi/asm-generic/Kbuild.asm

header-y += auxvec.h
header-y += bitsperlong.h
+header-y += break.h
header-y += byteorder.h
header-y += cachectl.h
header-y += errno.h


@@ -25,6 +25,12 @@
#define MCOUNT_OFFSET_INSNS 4
#endif

+/* Arch override because MIPS doesn't need to run this from stop_machine() */
+void arch_ftrace_update_code(int command)
+{
+	ftrace_modify_all_code(command);
+}
+
/*
 * Check if the address is in kernel space
 *

@@ -89,6 +95,24 @@ static int ftrace_modify_code(unsigned long ip, unsigned int new_code)
	return 0;
}

+#ifndef CONFIG_64BIT
+static int ftrace_modify_code_2(unsigned long ip, unsigned int new_code1,
+				unsigned int new_code2)
+{
+	int faulted;
+
+	safe_store_code(new_code1, ip, faulted);
+	if (unlikely(faulted))
+		return -EFAULT;
+	ip += 4;
+	safe_store_code(new_code2, ip, faulted);
+	if (unlikely(faulted))
+		return -EFAULT;
+	flush_icache_range(ip, ip + 8); /* original ip + 12 */
+	return 0;
+}
+#endif
+
/*
 * The details about the calling site of mcount on MIPS
 *

@@ -131,8 +155,18 @@ int ftrace_make_nop(struct module *mod,
	 * needed.
	 */
	new = in_kernel_space(ip) ? INSN_NOP : INSN_B_1F;
+#ifdef CONFIG_64BIT
	return ftrace_modify_code(ip, new);
+#else
+	/*
+	 * On 32 bit MIPS platforms, gcc adds a stack adjust
+	 * instruction in the delay slot after the branch to
+	 * mcount and expects mcount to restore the sp on return.
+	 * This is based on a legacy API and does nothing but
+	 * waste instructions so it's being removed at runtime.
+	 */
+	return ftrace_modify_code_2(ip, new, INSN_NOP);
+#endif
}

int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)


@@ -46,9 +46,8 @@
	PTR_L	a5, PT_R9(sp)
	PTR_L	a6, PT_R10(sp)
	PTR_L	a7, PT_R11(sp)
-	PTR_ADDIU	sp, PT_SIZE
#else
-	PTR_ADDIU	sp, (PT_SIZE + 8)
+	PTR_ADDIU	sp, PT_SIZE
#endif
	.endm

@@ -69,7 +68,9 @@ NESTED(ftrace_caller, PT_SIZE, ra)
	.globl _mcount
_mcount:
	b	ftrace_stub
-	 nop
+	 addiu sp,sp,8
+
+	/* When tracing is activated, it calls ftrace_caller+8 (aka here) */
	lw	t1, function_trace_stop
	bnez	t1, ftrace_stub
	 nop


@@ -705,7 +705,7 @@ static int vpe_run(struct vpe * v)
			printk(KERN_WARNING
			       "VPE loader: TC %d is already in use.\n",
-			       t->index);
+			       v->tc->index);
			return -ENOEXEC;
		}
	} else {


@@ -408,7 +408,7 @@ int __init icu_of_init(struct device_node *node, struct device_node *parent)
#endif

	/* tell oprofile which irq to use */
-	cp0_perfcount_irq = LTQ_PERF_IRQ;
+	cp0_perfcount_irq = irq_create_mapping(ltq_domain, LTQ_PERF_IRQ);

	/*
	 * if the timer irq is not one of the mips irqs we need to


@@ -21,7 +21,7 @@ void __delay(unsigned long loops)
	"	.set	noreorder				\n"
	"	.align	3					\n"
	"1:	bnez	%0, 1b					\n"
-#if __SIZEOF_LONG__ == 4
+#if BITS_PER_LONG == 32
	"	subu	%0, 1					\n"
#else
	"	dsubu	%0, 1					\n"


@@ -190,9 +190,3 @@ void __iounmap(const volatile void __iomem *addr)

EXPORT_SYMBOL(__ioremap);
EXPORT_SYMBOL(__iounmap);
-
-int __virt_addr_valid(const volatile void *kaddr)
-{
-	return pfn_valid(PFN_DOWN(virt_to_phys(kaddr)));
-}
-EXPORT_SYMBOL_GPL(__virt_addr_valid);


@@ -192,3 +192,9 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)

	return ret;
}
+
+int __virt_addr_valid(const volatile void *kaddr)
+{
+	return pfn_valid(PFN_DOWN(virt_to_phys(kaddr)));
+}
+EXPORT_SYMBOL_GPL(__virt_addr_valid);


@@ -193,8 +193,11 @@ static void nlm_init_node(void)

void __init prom_init(void)
{
-	int i, *argv, *envp;	/* passed as 32 bit ptrs */
+	int *argv, *envp;	/* passed as 32 bit ptrs */
	struct psb_info *prom_infop;
+#ifdef CONFIG_SMP
+	int i;
+#endif

	/* truncate to 32 bit and sign extend all args */
	argv = (int *)(long)(int)fw_arg1;


@@ -24,7 +24,7 @@
#include <asm/mach-ath79/pci.h>

#define AR71XX_PCI_MEM_BASE	0x10000000
-#define AR71XX_PCI_MEM_SIZE	0x08000000
+#define AR71XX_PCI_MEM_SIZE	0x07000000

#define AR71XX_PCI_WIN0_OFFS	0x10000000
#define AR71XX_PCI_WIN1_OFFS	0x11000000


@@ -21,7 +21,7 @@
#define AR724X_PCI_CTRL_SIZE	0x100

#define AR724X_PCI_MEM_BASE	0x10000000
-#define AR724X_PCI_MEM_SIZE	0x08000000
+#define AR724X_PCI_MEM_SIZE	0x04000000

#define AR724X_PCI_REG_RESET		0x18
#define AR724X_PCI_REG_INT_STATUS	0x4c


@@ -168,4 +168,19 @@ void dma_cache_sync(void *vaddr, size_t size,
	mn10300_dcache_flush_inv();
}

+/* Not supported for now */
+static inline int dma_mmap_coherent(struct device *dev,
+				    struct vm_area_struct *vma, void *cpu_addr,
+				    dma_addr_t dma_addr, size_t size)
+{
+	return -EINVAL;
+}
+
+static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+				  void *cpu_addr, dma_addr_t dma_addr,
+				  size_t size)
+{
+	return -EINVAL;
+}
+
#endif


@@ -238,4 +238,19 @@ void * sba_get_iommu(struct parisc_device *dev);
/* At the moment, we panic on error for IOMMU resource exaustion */
#define dma_mapping_error(dev, x)	0

+/* This API cannot be supported on PA-RISC */
+static inline int dma_mmap_coherent(struct device *dev,
+				    struct vm_area_struct *vma, void *cpu_addr,
+				    dma_addr_t dma_addr, size_t size)
+{
+	return -EINVAL;
+}
+
+static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+				  void *cpu_addr, dma_addr_t dma_addr,
+				  size_t size)
+{
+	return -EINVAL;
+}
+
#endif


@@ -115,11 +115,13 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
	sldi	r29,r5,SID_SHIFT - VPN_SHIFT
	rldicl	r28,r3,64 - VPN_SHIFT,64 - (SID_SHIFT - VPN_SHIFT)
	or	r29,r28,r29
-
-	/* Calculate hash value for primary slot and store it in r28 */
-	rldicl	r5,r5,0,25		/* vsid & 0x0000007fffffffff */
-	rldicl	r0,r3,64-12,48		/* (ea >> 12) & 0xffff */
-	xor	r28,r5,r0
+	/*
+	 * Calculate hash value for primary slot and store it in r28
+	 * r3 = va, r5 = vsid
+	 * r0 = (va >> 12) & ((1ul << (28 - 12)) -1)
+	 */
+	rldicl	r0,r3,64-12,48
+	xor	r28,r5,r0		/* hash */
	b	4f

3:	/* Calc vpn and put it in r29 */

@@ -130,11 +132,12 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
	/*
	 * calculate hash value for primary slot and
	 * store it in r28 for 1T segment
+	 * r3 = va, r5 = vsid
	 */
-	rldic	r28,r5,25,25		/* (vsid << 25) & 0x7fffffffff */
-	clrldi	r5,r5,40		/* vsid & 0xffffff */
-	rldicl	r0,r3,64-12,36		/* (ea >> 12) & 0xfffffff */
-	xor	r28,r28,r5
+	sldi	r28,r5,25		/* vsid << 25 */
+	/* r0 = (va >> 12) & ((1ul << (40 - 12)) -1) */
+	rldicl	r0,r3,64-12,36
+	xor	r28,r28,r5		/* vsid ^ ( vsid << 25) */
	xor	r28,r28,r0		/* hash */

	/* Convert linux PTE bits into HW equivalents */

@@ -407,11 +410,13 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
	 */
	rldicl	r28,r3,64 - VPN_SHIFT,64 - (SID_SHIFT - VPN_SHIFT)
	or	r29,r28,r29
-
-	/* Calculate hash value for primary slot and store it in r28 */
-	rldicl	r5,r5,0,25		/* vsid & 0x0000007fffffffff */
-	rldicl	r0,r3,64-12,48		/* (ea >> 12) & 0xffff */
-	xor	r28,r5,r0
+	/*
+	 * Calculate hash value for primary slot and store it in r28
+	 * r3 = va, r5 = vsid
+	 * r0 = (va >> 12) & ((1ul << (28 - 12)) -1)
+	 */
+	rldicl	r0,r3,64-12,48
+	xor	r28,r5,r0		/* hash */
	b	4f

3:	/* Calc vpn and put it in r29 */

@@ -426,11 +431,12 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
	/*
	 * Calculate hash value for primary slot and
	 * store it in r28 for 1T segment
+	 * r3 = va, r5 = vsid
	 */
-	rldic	r28,r5,25,25		/* (vsid << 25) & 0x7fffffffff */
-	clrldi	r5,r5,40		/* vsid & 0xffffff */
-	rldicl	r0,r3,64-12,36		/* (ea >> 12) & 0xfffffff */
-	xor	r28,r28,r5
+	sldi	r28,r5,25		/* vsid << 25 */
+	/* r0 = (va >> 12) & ((1ul << (40 - 12)) -1) */
+	rldicl	r0,r3,64-12,36
+	xor	r28,r28,r5		/* vsid ^ ( vsid << 25) */
	xor	r28,r28,r0		/* hash */

	/* Convert linux PTE bits into HW equivalents */

@@ -752,25 +758,27 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
	rldicl	r28,r3,64 - VPN_SHIFT,64 - (SID_SHIFT - VPN_SHIFT)
	or	r29,r28,r29

-	/* Calculate hash value for primary slot and store it in r28 */
-	rldicl	r5,r5,0,25		/* vsid & 0x0000007fffffffff */
-	rldicl	r0,r3,64-16,52		/* (ea >> 16) & 0xfff */
-	xor	r28,r5,r0
+	/* Calculate hash value for primary slot and store it in r28
+	 * r3 = va, r5 = vsid
+	 * r0 = (va >> 16) & ((1ul << (28 - 16)) -1)
+	 */
+	rldicl	r0,r3,64-16,52
+	xor	r28,r5,r0		/* hash */
	b	4f

3:	/* Calc vpn and put it in r29 */
	sldi	r29,r5,SID_SHIFT_1T - VPN_SHIFT
	rldicl	r28,r3,64 - VPN_SHIFT,64 - (SID_SHIFT_1T - VPN_SHIFT)
	or	r29,r28,r29
	/*
	 * calculate hash value for primary slot and
	 * store it in r28 for 1T segment
+	 * r3 = va, r5 = vsid
	 */
-	rldic	r28,r5,25,25		/* (vsid << 25) & 0x7fffffffff */
-	clrldi	r5,r5,40		/* vsid & 0xffffff */
-	rldicl	r0,r3,64-16,40		/* (ea >> 16) & 0xffffff */
-	xor	r28,r28,r5
+	sldi	r28,r5,25		/* vsid << 25 */
+	/* r0 = (va >> 16) & ((1ul << (40 - 16)) -1) */
+	rldicl	r0,r3,64-16,40
+	xor	r28,r28,r5		/* vsid ^ ( vsid << 25) */
	xor	r28,r28,r0		/* hash */

	/* Convert linux PTE bits into HW equivalents */
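
Illustration (not part of the patch): the new comments spell out the hash
arithmetic; in C it corresponds roughly to the sketch below for the 4K page
case. The function names are invented for the example; the shifts and masks
mirror the comments above.

    #include <stdint.h>

    /* primary hash, 256M segment */
    static uint64_t hash_256m(uint64_t va, uint64_t vsid)
    {
            uint64_t r0 = (va >> 12) & ((1ull << (28 - 12)) - 1);
            return vsid ^ r0;
    }

    /* primary hash, 1T segment */
    static uint64_t hash_1t(uint64_t va, uint64_t vsid)
    {
            uint64_t r0 = (va >> 12) & ((1ull << (40 - 12)) - 1);
            return vsid ^ (vsid << 25) ^ r0;
    }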


@@ -1365,6 +1365,18 @@ static inline void pmdp_invalidate(struct vm_area_struct *vma,
	__pmd_idte(address, pmdp);
}

+#define __HAVE_ARCH_PMDP_SET_WRPROTECT
+static inline void pmdp_set_wrprotect(struct mm_struct *mm,
+				      unsigned long address, pmd_t *pmdp)
+{
+	pmd_t pmd = *pmdp;
+
+	if (pmd_write(pmd)) {
+		__pmd_idte(address, pmdp);
+		set_pmd_at(mm, address, pmdp, pmd_wrprotect(pmd));
+	}
+}
+
static inline pmd_t mk_pmd_phys(unsigned long physpage, pgprot_t pgprot)
{
	pmd_t __pmd;


@@ -2138,6 +2138,7 @@ config OLPC_XO1_RTC
config OLPC_XO1_SCI
	bool "OLPC XO-1 SCI extras"
	depends on OLPC && OLPC_XO1_PM
+	depends on INPUT=y
	select POWER_SUPPLY
	select GPIO_CS5535
	select MFD_CORE


@@ -71,7 +71,7 @@ GCOV_PROFILE := n
$(obj)/bzImage: asflags-y  := $(SVGA_MODE)

quiet_cmd_image = BUILD   $@
-cmd_image = $(obj)/tools/build $(obj)/setup.bin $(obj)/vmlinux.bin > $@
+cmd_image = $(obj)/tools/build $(obj)/setup.bin $(obj)/vmlinux.bin $(obj)/zoffset.h > $@

$(obj)/bzImage: $(obj)/setup.bin $(obj)/vmlinux.bin $(obj)/tools/build FORCE
	$(call if_changed,image)

@@ -92,7 +92,7 @@ targets += voffset.h
$(obj)/voffset.h: vmlinux FORCE
	$(call if_changed,voffset)

-sed-zoffset := -e 's/^\([0-9a-fA-F]*\) . \(startup_32\|input_data\|_end\|z_.*\)$$/\#define ZO_\2 0x\1/p'
+sed-zoffset := -e 's/^\([0-9a-fA-F]*\) . \(startup_32\|startup_64\|efi_pe_entry\|efi_stub_entry\|input_data\|_end\|z_.*\)$$/\#define ZO_\2 0x\1/p'

quiet_cmd_zoffset = ZOFFSET $@
      cmd_zoffset = $(NM) $< | sed -n $(sed-zoffset) > $@


@@ -256,10 +256,10 @@ static efi_status_t setup_efi_pci(struct boot_params *params)
	int i;
	struct setup_data *data;

-	data = (struct setup_data *)params->hdr.setup_data;
+	data = (struct setup_data *)(unsigned long)params->hdr.setup_data;

	while (data && data->next)
-		data = (struct setup_data *)data->next;
+		data = (struct setup_data *)(unsigned long)data->next;

	status = efi_call_phys5(sys_table->boottime->locate_handle,
				EFI_LOCATE_BY_PROTOCOL, &pci_proto,

@@ -295,16 +295,18 @@ static efi_status_t setup_efi_pci(struct boot_params *params)
		if (!pci)
			continue;

+#ifdef CONFIG_X86_64
		status = efi_call_phys4(pci->attributes, pci,
					EfiPciIoAttributeOperationGet, 0,
					&attributes);
+#else
+		status = efi_call_phys5(pci->attributes, pci,
+					EfiPciIoAttributeOperationGet, 0, 0,
+					&attributes);
+#endif
		if (status != EFI_SUCCESS)
			continue;

-		if (!(attributes & EFI_PCI_IO_ATTRIBUTE_EMBEDDED_ROM))
-			continue;
-
		if (!pci->romimage || !pci->romsize)
			continue;

@@ -345,9 +347,9 @@ static efi_status_t setup_efi_pci(struct boot_params *params)
		memcpy(rom->romdata, pci->romimage, pci->romsize);

		if (data)
-			data->next = (uint64_t)rom;
+			data->next = (unsigned long)rom;
		else
-			params->hdr.setup_data = (uint64_t)rom;
+			params->hdr.setup_data = (unsigned long)rom;

		data = (struct setup_data *)rom;

@@ -432,10 +434,9 @@ static efi_status_t setup_gop(struct screen_info *si, efi_guid_t *proto,
			 * Once we've found a GOP supporting ConOut,
			 * don't bother looking any further.
			 */
+			first_gop = gop;
			if (conout_found)
				break;
-
-			first_gop = gop;
		}
	}


@@ -35,11 +35,11 @@ ENTRY(startup_32)
#ifdef CONFIG_EFI_STUB
	jmp	preferred_addr

-	.balign	0x10
	/*
	 * We don't need the return address, so set up the stack so
-	 * efi_main() can find its arugments.
+	 * efi_main() can find its arguments.
	 */
+ENTRY(efi_pe_entry)
	add	$0x4, %esp

	call	make_boot_params

@@ -50,8 +50,10 @@ ENTRY(startup_32)
	pushl	%eax
	pushl	%esi
	pushl	%ecx
+	sub	$0x4, %esp

-	.org 0x30,0x90
+ENTRY(efi_stub_entry)
+	add	$0x4, %esp
	call	efi_main
	cmpl	$0, %eax
	movl	%eax, %esi


@@ -201,12 +201,12 @@ ENTRY(startup_64)
	 */
#ifdef CONFIG_EFI_STUB
	/*
-	 * The entry point for the PE/COFF executable is 0x210, so only
-	 * legacy boot loaders will execute this jmp.
+	 * The entry point for the PE/COFF executable is efi_pe_entry, so
+	 * only legacy boot loaders will execute this jmp.
	 */
	jmp	preferred_addr

-	.org 0x210
+ENTRY(efi_pe_entry)
	mov	%rcx, %rdi
	mov	%rdx, %rsi
	pushq	%rdi

@@ -218,7 +218,7 @@ ENTRY(startup_64)
	popq	%rsi
	popq	%rdi

-	.org 0x230,0x90
+ENTRY(efi_stub_entry)
	call	efi_main
	movq	%rax,%rsi
	cmpq	$0,%rax


@@ -21,6 +21,7 @@
#include <asm/e820.h>
#include <asm/page_types.h>
#include <asm/setup.h>
+#include <asm/bootparam.h>
#include "boot.h"
#include "voffset.h"
#include "zoffset.h"

@@ -255,6 +256,9 @@ section_table:
# header, from the old boot sector.

	.section ".header", "a"
+	.globl	sentinel
+sentinel:	.byte 0xff, 0xff        /* Used to detect broken loaders */
+
	.globl	hdr
hdr:
setup_sects:	.byte 0			/* Filled in by build.c */

@@ -279,7 +283,7 @@ _start:
	# Part 2 of the header, from the old setup.S

		.ascii	"HdrS"		# header signature
-		.word	0x020b		# header version number (>= 0x0105)
+		.word	0x020c		# header version number (>= 0x0105)
					# or else old loadlin-1.5 will fail)
		.globl realmode_swtch
realmode_swtch:	.word	0, 0		# default_switch, SETUPSEG

@@ -297,13 +301,7 @@ type_of_loader:	.byte	0		# 0 means ancient bootloader, newer
					# flags, unused bits must be zero (RFU) bit within loadflags
loadflags:
-LOADED_HIGH	= 1			# If set, the kernel is loaded high
-CAN_USE_HEAP	= 0x80			# If set, the loader also has set
-					# heap_end_ptr to tell how much
-					# space behind setup.S can be used for
-					# heap purposes.
-					# Only the loader knows what is free
-		.byte	LOADED_HIGH
+		.byte	LOADED_HIGH	# The kernel is to be loaded high

setup_move_size: .word  0x8000		# size to move, when setup is not
					# loaded at 0x90000. We will move setup

@@ -369,7 +367,23 @@ relocatable_kernel:    .byte 1
relocatable_kernel:    .byte 0
#endif
min_alignment:		.byte MIN_KERNEL_ALIGN_LG2	# minimum alignment
-pad3:			.word 0
+
+xloadflags:
+#ifdef CONFIG_X86_64
+# define XLF0 XLF_KERNEL_64			/* 64-bit kernel */
+#else
+# define XLF0 0
+#endif
+#ifdef CONFIG_EFI_STUB
+# ifdef CONFIG_X86_64
+#  define XLF23 XLF_EFI_HANDOVER_64		/* 64-bit EFI handover ok */
+# else
+#  define XLF23 XLF_EFI_HANDOVER_32		/* 32-bit EFI handover ok */
+# endif
+#else
+# define XLF23 0
+#endif
+			.word XLF0 | XLF23

cmdline_size:   .long   COMMAND_LINE_SIZE-1     #length of the command line,
                                                #added with boot protocol

@@ -397,8 +411,13 @@ pref_address:		.quad LOAD_PHYSICAL_ADDR	# preferred load addr
#define INIT_SIZE VO_INIT_SIZE
#endif
init_size:		.long INIT_SIZE		# kernel initialization size
-handover_offset:	.long 0x30		# offset to the handover
+handover_offset:
+#ifdef CONFIG_EFI_STUB
+			.long 0x30		# offset to the handover
						# protocol entry point
+#else
+			.long 0
+#endif

# End of setup header #####################################################


@@ -13,7 +13,7 @@ SECTIONS
	.bstext		: { *(.bstext) }
	.bsdata		: { *(.bsdata) }

-	. = 497;
+	. = 495;
	.header		: { *(.header) }
	.entrytext	: { *(.entrytext) }
	.inittext	: { *(.inittext) }


@@ -52,6 +52,10 @@ int is_big_kernel;

#define PECOFF_RELOC_RESERVE 0x20

+unsigned long efi_stub_entry;
+unsigned long efi_pe_entry;
+unsigned long startup_64;
+
/*----------------------------------------------------------------------*/

static const u32 crctab32[] = {

@@ -132,7 +136,7 @@ static void die(const char * str, ...)

static void usage(void)
{
-	die("Usage: build setup system [> image]");
+	die("Usage: build setup system [zoffset.h] [> image]");
}

#ifdef CONFIG_EFI_STUB

@@ -206,30 +210,54 @@ static void update_pecoff_text(unsigned int text_start, unsigned int file_sz)
	 */
	put_unaligned_le32(file_sz - 512, &buf[pe_header + 0x1c]);

-#ifdef CONFIG_X86_32
	/*
-	 * Address of entry point.
-	 *
-	 * The EFI stub entry point is +16 bytes from the start of
-	 * the .text section.
+	 * Address of entry point for PE/COFF executable
	 */
-	put_unaligned_le32(text_start + 16, &buf[pe_header + 0x28]);
-#else
-	/*
-	 * Address of entry point. startup_32 is at the beginning and
-	 * the 64-bit entry point (startup_64) is always 512 bytes
-	 * after. The EFI stub entry point is 16 bytes after that, as
-	 * the first instruction allows legacy loaders to jump over
-	 * the EFI stub initialisation
-	 */
-	put_unaligned_le32(text_start + 528, &buf[pe_header + 0x28]);
-#endif /* CONFIG_X86_32 */
+	put_unaligned_le32(text_start + efi_pe_entry, &buf[pe_header + 0x28]);

	update_pecoff_section_header(".text", text_start, text_sz);
}

#endif /* CONFIG_EFI_STUB */

+/*
+ * Parse zoffset.h and find the entry points. We could just #include zoffset.h
+ * but that would mean tools/build would have to be rebuilt every time. It's
+ * not as if parsing it is hard...
+ */
+#define PARSE_ZOFS(p, sym) do { \
+	if (!strncmp(p, "#define ZO_" #sym " ", 11+sizeof(#sym)))	\
+		sym = strtoul(p + 11 + sizeof(#sym), NULL, 16);		\
+} while (0)
+
+static void parse_zoffset(char *fname)
+{
+	FILE *file;
+	char *p;
+	int c;
+
+	file = fopen(fname, "r");
+	if (!file)
+		die("Unable to open `%s': %m", fname);
+	c = fread(buf, 1, sizeof(buf) - 1, file);
+	if (ferror(file))
+		die("read-error on `zoffset.h'");
+	buf[c] = 0;
+
+	p = (char *)buf;
+
+	while (p && *p) {
+		PARSE_ZOFS(p, efi_stub_entry);
+		PARSE_ZOFS(p, efi_pe_entry);
+		PARSE_ZOFS(p, startup_64);
+
+		p = strchr(p, '\n');
+		while (p && (*p == '\r' || *p == '\n'))
+			p++;
+	}
+}
+
int main(int argc, char ** argv)
{
	unsigned int i, sz, setup_sectors;

@@ -241,7 +269,19 @@ int main(int argc, char ** argv)
	void *kernel;
	u32 crc = 0xffffffffUL;

-	if (argc != 3)
+	/* Defaults for old kernel */
+#ifdef CONFIG_X86_32
+	efi_pe_entry = 0x10;
+	efi_stub_entry = 0x30;
+#else
+	efi_pe_entry = 0x210;
+	efi_stub_entry = 0x230;
+	startup_64 = 0x200;
+#endif
+
+	if (argc == 4)
+		parse_zoffset(argv[3]);
+	else if (argc != 3)
		usage();

	/* Copy the setup code */

@@ -299,6 +339,11 @@ int main(int argc, char ** argv)

#ifdef CONFIG_EFI_STUB
	update_pecoff_text(setup_sectors * 512, sz + i + ((sys_size * 16) - sz));
+
+#ifdef CONFIG_X86_64 /* Yes, this is really how we defined it :( */
+	efi_stub_entry -= 0x200;
+#endif
+	put_unaligned_le32(efi_stub_entry, &buf[0x264]);
#endif

	crc = partial_crc32(buf, i, crc);


@@ -207,7 +207,7 @@ sysexit_from_sys_call:
	testl	$(_TIF_ALLWORK_MASK & ~_TIF_SYSCALL_AUDIT),TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
	jnz ia32_ret_from_sys_call
	TRACE_IRQS_ON
-	sti
+	ENABLE_INTERRUPTS(CLBR_NONE)
	movl %eax,%esi		/* second arg, syscall return value */
	cmpl $-MAX_ERRNO,%eax	/* is it an error ? */
	jbe 1f

@@ -217,7 +217,7 @@ sysexit_from_sys_call:
	call __audit_syscall_exit
	movq RAX-ARGOFFSET(%rsp),%rax	/* reload syscall return value */
	movl $(_TIF_ALLWORK_MASK & ~_TIF_SYSCALL_AUDIT),%edi
-	cli
+	DISABLE_INTERRUPTS(CLBR_NONE)
	TRACE_IRQS_OFF
	testl %edi,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
	jz \exit


@@ -94,6 +94,7 @@ extern void __iomem *efi_ioremap(unsigned long addr, unsigned long size,
#endif /* CONFIG_X86_32 */

extern int add_efi_memmap;
+extern unsigned long x86_efi_facility;
extern void efi_set_executable(efi_memory_desc_t *md, bool executable);
extern int efi_memblock_x86_reserve_range(void);
extern void efi_call_phys_prelog(void);


@@ -16,7 +16,7 @@ extern void uv_system_init(void);
extern const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask,
						 struct mm_struct *mm,
						 unsigned long start,
-						 unsigned end,
+						 unsigned long end,
						 unsigned int cpu);

#else	/* X86_UV */


@@ -1,6 +1,31 @@
#ifndef _ASM_X86_BOOTPARAM_H
#define _ASM_X86_BOOTPARAM_H

+/* setup_data types */
+#define SETUP_NONE			0
+#define SETUP_E820_EXT			1
+#define SETUP_DTB			2
+#define SETUP_PCI			3
+
+/* ram_size flags */
+#define RAMDISK_IMAGE_START_MASK	0x07FF
+#define RAMDISK_PROMPT_FLAG		0x8000
+#define RAMDISK_LOAD_FLAG		0x4000
+
+/* loadflags */
+#define LOADED_HIGH	(1<<0)
+#define QUIET_FLAG	(1<<5)
+#define KEEP_SEGMENTS	(1<<6)
+#define CAN_USE_HEAP	(1<<7)
+
+/* xloadflags */
+#define XLF_KERNEL_64			(1<<0)
+#define XLF_CAN_BE_LOADED_ABOVE_4G	(1<<1)
+#define XLF_EFI_HANDOVER_32		(1<<2)
+#define XLF_EFI_HANDOVER_64		(1<<3)
+
+#ifndef __ASSEMBLY__
+
#include <linux/types.h>
#include <linux/screen_info.h>
#include <linux/apm_bios.h>

@@ -9,12 +34,6 @@
#include <asm/ist.h>
#include <video/edid.h>

-/* setup data types */
-#define SETUP_NONE			0
-#define SETUP_E820_EXT			1
-#define SETUP_DTB			2
-#define SETUP_PCI			3
-
/* extensible setup data list node */
struct setup_data {
	__u64 next;

@@ -28,9 +47,6 @@ struct setup_header {
	__u16	root_flags;
	__u32	syssize;
	__u16	ram_size;
-#define RAMDISK_IMAGE_START_MASK	0x07FF
-#define RAMDISK_PROMPT_FLAG		0x8000
-#define RAMDISK_LOAD_FLAG		0x4000
	__u16	vid_mode;
	__u16	root_dev;
	__u16	boot_flag;

@@ -42,10 +58,6 @@ struct setup_header {
	__u16	kernel_version;
	__u8	type_of_loader;
	__u8	loadflags;
-#define LOADED_HIGH	(1<<0)
-#define QUIET_FLAG	(1<<5)
-#define KEEP_SEGMENTS	(1<<6)
-#define CAN_USE_HEAP	(1<<7)
	__u16	setup_move_size;
	__u32	code32_start;
	__u32	ramdisk_image;

@@ -58,7 +70,8 @@ struct setup_header {
	__u32	initrd_addr_max;
	__u32	kernel_alignment;
	__u8	relocatable_kernel;
-	__u8	_pad2[3];
+	__u8	min_alignment;
+	__u16	xloadflags;
	__u32	cmdline_size;
	__u32	hardware_subarch;
	__u64	hardware_subarch_data;

@@ -106,7 +119,10 @@ struct boot_params {
	__u8  hd1_info[16];	/* obsolete! */		/* 0x090 */
	struct sys_desc_table sys_desc_table;		/* 0x0a0 */
	struct olpc_ofw_header olpc_ofw_header;		/* 0x0b0 */
-	__u8  _pad4[128];				/* 0x0c0 */
+	__u32 ext_ramdisk_image;			/* 0x0c0 */
+	__u32 ext_ramdisk_size;				/* 0x0c4 */
+	__u32 ext_cmd_line_ptr;				/* 0x0c8 */
+	__u8  _pad4[116];				/* 0x0cc */
	struct edid_info edid_info;			/* 0x140 */
	struct efi_info efi_info;			/* 0x1c0 */
	__u32 alt_mem_k;				/* 0x1e0 */

@@ -115,7 +131,20 @@ struct boot_params {
	__u8  eddbuf_entries;				/* 0x1e9 */
	__u8  edd_mbr_sig_buf_entries;			/* 0x1ea */
	__u8  kbd_status;				/* 0x1eb */
-	__u8  _pad6[5];					/* 0x1ec */
+	__u8  _pad5[3];					/* 0x1ec */
+	/*
+	 * The sentinel is set to a nonzero value (0xff) in header.S.
+	 *
+	 * A bootloader is supposed to only take setup_header and put
+	 * it into a clean boot_params buffer. If it turns out that
+	 * it is clumsy or too generous with the buffer, it most
+	 * probably will pick up the sentinel variable too. The fact
+	 * that this variable then is still 0xff will let kernel
+	 * know that some variables in boot_params are invalid and
+	 * kernel should zero out certain portions of boot_params.
+	 */
+	__u8  sentinel;					/* 0x1ef */
+	__u8  _pad6[1];					/* 0x1f0 */
	struct setup_header hdr;    /* setup header */	/* 0x1f1 */
	__u8  _pad7[0x290-0x1f1-sizeof(struct setup_header)];
	__u32 edd_mbr_sig_buffer[EDD_MBR_SIG_MAX];	/* 0x290 */

@@ -134,6 +163,6 @@ enum {
	X86_NR_SUBARCHS,
};

+#endif /* __ASSEMBLY__ */

#endif /* _ASM_X86_BOOTPARAM_H */
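
Illustration (not part of the diff): the sentinel lets the kernel detect a
loader that copied a stale zero page instead of just setup_header. The sketch
below is simplified and hypothetical; the kernel proper performs a more
thorough cleanup along the lines of a sanitize_boot_params() helper.

    #include <asm/bootparam.h>      /* struct boot_params, as modified above */

    /* If the sentinel survived, the fields outside setup_header cannot be
     * trusted, so discard the extension fields this series introduces. */
    static inline void clear_untrusted_ext_fields(struct boot_params *bp)
    {
            if (bp->sentinel) {
                    bp->ext_ramdisk_image = 0;
                    bp->ext_ramdisk_size = 0;
                    bp->ext_cmd_line_ptr = 0;
            }
    }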


@@ -298,8 +298,7 @@ struct _cache_attr {
			 unsigned int);
};

-#ifdef CONFIG_AMD_NB
-
+#if defined(CONFIG_AMD_NB) && defined(CONFIG_SYSFS)
/*
 * L3 cache descriptors
 */

@@ -524,9 +523,9 @@ store_subcaches(struct _cpuid4_info *this_leaf, const char *buf, size_t count,
static struct _cache_attr subcaches =
	__ATTR(subcaches, 0644, show_subcaches, store_subcaches);

-#else	/* CONFIG_AMD_NB */
+#else
#define amd_init_l3_cache(x, y)
-#endif /* CONFIG_AMD_NB */
+#endif /* CONFIG_AMD_NB && CONFIG_SYSFS */

static int
__cpuinit cpuid4_cache_lookup_regs(int index,


@@ -2019,7 +2019,10 @@ __init int intel_pmu_init(void)
		break;

	case 28: /* Atom */
-	case 54: /* Cedariew */
+	case 38: /* Lincroft */
+	case 39: /* Penwell */
+	case 53: /* Cloverview */
+	case 54: /* Cedarview */
		memcpy(hw_cache_event_ids, atom_hw_cache_event_ids,
		       sizeof(hw_cache_event_ids));

@@ -2084,6 +2087,7 @@ __init int intel_pmu_init(void)
		pr_cont("SandyBridge events, ");
		break;
	case 58: /* IvyBridge */
+	case 62: /* IvyBridge EP */
		memcpy(hw_cache_event_ids, snb_hw_cache_event_ids,
		       sizeof(hw_cache_event_ids));
		memcpy(hw_cache_extra_regs, snb_hw_cache_extra_regs,


@@ -19,7 +19,7 @@ static const u64 p6_perfmon_event_map[] =
};

-static __initconst u64 p6_hw_cache_event_ids
+static u64 p6_hw_cache_event_ids
				[PERF_COUNT_HW_CACHE_MAX]
				[PERF_COUNT_HW_CACHE_OP_MAX]
				[PERF_COUNT_HW_CACHE_RESULT_MAX] =


@@ -1781,6 +1781,7 @@ first_nmi:
	 * Leave room for the "copied" frame
	 */
	subq $(5*8), %rsp
+	CFI_ADJUST_CFA_OFFSET 5*8

	/* Copy the stack frame to the Saved frame */
	.rept 5

@@ -1863,10 +1864,8 @@ end_repeat_nmi:
nmi_swapgs:
	SWAPGS_UNSAFE_STACK
nmi_restore:
-	RESTORE_ALL 8
-
-	/* Pop the extra iret frame */
-	addq $(5*8), %rsp
+	/* Pop the extra iret frame at once */
+	RESTORE_ALL 6*8

	/* Clear the NMI executing stack variable */
	movq $0, 5*8(%rsp)


@@ -300,6 +300,12 @@ ENTRY(startup_32_smp)
	leal -__PAGE_OFFSET(%ecx),%esp

default_entry:
+#define CR0_STATE	(X86_CR0_PE | X86_CR0_MP | X86_CR0_ET | \
+			 X86_CR0_NE | X86_CR0_WP | X86_CR0_AM | \
+			 X86_CR0_PG)
+	movl $(CR0_STATE & ~X86_CR0_PG),%eax
+	movl %eax,%cr0
+
/*
 * New page tables may be in 4Mbyte page mode and may
 * be using the global pages.

@@ -364,8 +370,7 @@ default_entry:
 */
	movl $pa(initial_page_table), %eax
	movl %eax,%cr3		/* set the page table pointer.. */
-	movl %cr0,%eax
-	orl  $X86_CR0_PG,%eax
+	movl $CR0_STATE,%eax
	movl %eax,%cr0		/* ..and set paging (PG) bit */
	ljmp $__BOOT_CS,$1f	/* Clear prefetch and normalize %eip */
1:


@@ -174,6 +174,9 @@ static int msr_open(struct inode *inode, struct file *file)
	unsigned int cpu;
	struct cpuinfo_x86 *c;

+	if (!capable(CAP_SYS_RAWIO))
+		return -EPERM;
+
	cpu = iminor(file->f_path.dentry->d_inode);
	if (cpu >= nr_cpu_ids || !cpu_online(cpu))
		return -ENXIO;	/* No such CPU */


@@ -56,7 +56,7 @@ struct device x86_dma_fallback_dev = {
EXPORT_SYMBOL(x86_dma_fallback_dev);

/* Number of entries preallocated for DMA-API debugging */
-#define PREALLOC_DMA_DEBUG_ENTRIES       32768
+#define PREALLOC_DMA_DEBUG_ENTRIES       65536

int dma_set_mask(struct device *dev, u64 mask)
{


@@ -584,7 +584,7 @@ static void native_machine_emergency_restart(void)
			break;

		case BOOT_EFI:
-			if (efi_enabled)
+			if (efi_enabled(EFI_RUNTIME_SERVICES))
				efi.reset_system(reboot_mode ?
						 EFI_RESET_WARM :
						 EFI_RESET_COLD,


@@ -807,15 +807,15 @@ void __init setup_arch(char **cmdline_p)
#ifdef CONFIG_EFI
	if (!strncmp((char *)&boot_params.efi_info.efi_loader_signature,
		     "EL32", 4)) {
-		efi_enabled = 1;
-		efi_64bit = false;
+		set_bit(EFI_BOOT, &x86_efi_facility);
	} else if (!strncmp((char *)&boot_params.efi_info.efi_loader_signature,
		     "EL64", 4)) {
-		efi_enabled = 1;
-		efi_64bit = true;
+		set_bit(EFI_BOOT, &x86_efi_facility);
+		set_bit(EFI_64BIT, &x86_efi_facility);
	}
-	if (efi_enabled && efi_memblock_x86_reserve_range())
-		efi_enabled = 0;
+
+	if (efi_enabled(EFI_BOOT))
+		efi_memblock_x86_reserve_range();
#endif

	x86_init.oem.arch_setup();

@@ -888,7 +888,7 @@ void __init setup_arch(char **cmdline_p)

	finish_e820_parsing();

-	if (efi_enabled)
+	if (efi_enabled(EFI_BOOT))
		efi_init();

	dmi_scan_machine();

@@ -971,7 +971,7 @@ void __init setup_arch(char **cmdline_p)
	 * The EFI specification says that boot service code won't be called
	 * after ExitBootServices(). This is, in fact, a lie.
	 */
-	if (efi_enabled)
+	if (efi_enabled(EFI_MEMMAP))
		efi_reserve_boot_services();

	/* preallocate 4k for mptable mpc */

@@ -1114,7 +1114,7 @@ void __init setup_arch(char **cmdline_p)

#ifdef CONFIG_VT
#if defined(CONFIG_VGA_CONSOLE)
-	if (!efi_enabled || (efi_mem_type(0xa0000) != EFI_CONVENTIONAL_MEMORY))
+	if (!efi_enabled(EFI_BOOT) || (efi_mem_type(0xa0000) != EFI_CONVENTIONAL_MEMORY))
		conswitchp = &vga_con;
#elif defined(CONFIG_DUMMY_CONSOLE)
	conswitchp = &dummy_con;

@@ -1131,14 +1131,14 @@ void __init setup_arch(char **cmdline_p)
	register_refined_jiffies(CLOCK_TICK_RATE);

#ifdef CONFIG_EFI
-	/* Once setup is done above, disable efi_enabled on mismatched
-	 * firmware/kernel archtectures since there is no support for
-	 * runtime services.
+	/* Once setup is done above, unmap the EFI memory map on
+	 * mismatched firmware/kernel archtectures since there is no
+	 * support for runtime services.
	 */
-	if (efi_enabled && IS_ENABLED(CONFIG_X86_64) != efi_64bit) {
+	if (efi_enabled(EFI_BOOT) &&
+	    IS_ENABLED(CONFIG_X86_64) != efi_enabled(EFI_64BIT)) {
		pr_info("efi: Setup done, disabling due to 32/64-bit mismatch\n");
		efi_unmap_memmap();
-		efi_enabled = 0;
	}
#endif
}


@@ -51,9 +51,6 @@

#define EFI_DEBUG	1

-int efi_enabled;
-EXPORT_SYMBOL(efi_enabled);
-
struct efi __read_mostly efi = {
	.mps        = EFI_INVALID_TABLE_ADDR,
	.acpi       = EFI_INVALID_TABLE_ADDR,

@@ -69,19 +66,28 @@ EXPORT_SYMBOL(efi);

struct efi_memory_map memmap;

-bool efi_64bit;
-
static struct efi efi_phys __initdata;
static efi_system_table_t efi_systab __initdata;

static inline bool efi_is_native(void)
{
-	return IS_ENABLED(CONFIG_X86_64) == efi_64bit;
+	return IS_ENABLED(CONFIG_X86_64) == efi_enabled(EFI_64BIT);
}

+unsigned long x86_efi_facility;
+
+/*
+ * Returns 1 if 'facility' is enabled, 0 otherwise.
+ */
+int efi_enabled(int facility)
+{
+	return test_bit(facility, &x86_efi_facility) != 0;
+}
+EXPORT_SYMBOL(efi_enabled);
+
static int __init setup_noefi(char *arg)
{
-	efi_enabled = 0;
+	clear_bit(EFI_BOOT, &x86_efi_facility);
	return 0;
}
early_param("noefi", setup_noefi);

@@ -426,6 +432,7 @@ void __init efi_reserve_boot_services(void)

void __init efi_unmap_memmap(void)
{
+	clear_bit(EFI_MEMMAP, &x86_efi_facility);
	if (memmap.map) {
		early_iounmap(memmap.map, memmap.nr_map * memmap.desc_size);
		memmap.map = NULL;

@@ -460,7 +467,7 @@ void __init efi_free_boot_services(void)

static int __init efi_systab_init(void *phys)
{
-	if (efi_64bit) {
+	if (efi_enabled(EFI_64BIT)) {
		efi_system_table_64_t *systab64;
		u64 tmp = 0;

@@ -552,7 +559,7 @@ static int __init efi_config_init(u64 tables, int nr_tables)
	void *config_tables, *tablep;
	int i, sz;

-	if (efi_64bit)
+	if (efi_enabled(EFI_64BIT))
		sz = sizeof(efi_config_table_64_t);
	else
		sz = sizeof(efi_config_table_32_t);

@@ -572,7 +579,7 @@ static int __init efi_config_init(u64 tables, int nr_tables)
		efi_guid_t guid;
		unsigned long table;

-		if (efi_64bit) {
+		if (efi_enabled(EFI_64BIT)) {
			u64 table64;
			guid = ((efi_config_table_64_t *)tablep)->guid;
			table64 = ((efi_config_table_64_t *)tablep)->table;

@@ -684,7 +691,6 @@ void __init efi_init(void)
	if (boot_params.efi_info.efi_systab_hi ||
	    boot_params.efi_info.efi_memmap_hi) {
		pr_info("Table located above 4GB, disabling EFI.\n");
-		efi_enabled = 0;
		return;
	}
	efi_phys.systab = (efi_system_table_t *)boot_params.efi_info.efi_systab;

@@ -694,10 +700,10 @@ void __init efi_init(void)
			  ((__u64)boot_params.efi_info.efi_systab_hi<<32));
#endif

-	if (efi_systab_init(efi_phys.systab)) {
-		efi_enabled = 0;
+	if (efi_systab_init(efi_phys.systab))
		return;
-	}
+
+	set_bit(EFI_SYSTEM_TABLES, &x86_efi_facility);

	/*
	 * Show what we know for posterity

@@ -715,10 +721,10 @@ void __init efi_init(void)
		efi.systab->hdr.revision >> 16,
		efi.systab->hdr.revision & 0xffff, vendor);

-	if (efi_config_init(efi.systab->tables, efi.systab->nr_tables)) {
-		efi_enabled = 0;
+	if (efi_config_init(efi.systab->tables, efi.systab->nr_tables))
		return;
-	}
+
+	set_bit(EFI_CONFIG_TABLES, &x86_efi_facility);

	/*
	 * Note: We currently don't support runtime services on an EFI

@@ -727,15 +733,17 @@ void __init efi_init(void)
	if (!efi_is_native())
		pr_info("No EFI runtime due to 32/64-bit mismatch with kernel\n");
-	else if (efi_runtime_init()) {
-		efi_enabled = 0;
-		return;
+	else {
+		if (efi_runtime_init())
+			return;
+		set_bit(EFI_RUNTIME_SERVICES, &x86_efi_facility);
} }
if (efi_memmap_init()) { if (efi_memmap_init())
efi_enabled = 0;
return; return;
}
set_bit(EFI_MEMMAP, &x86_efi_facility);
#ifdef CONFIG_X86_32 #ifdef CONFIG_X86_32
if (efi_is_native()) { if (efi_is_native()) {
x86_platform.get_wallclock = efi_get_time; x86_platform.get_wallclock = efi_get_time;
@ -941,7 +949,7 @@ void __init efi_enter_virtual_mode(void)
* *
* Call EFI services through wrapper functions. * Call EFI services through wrapper functions.
*/ */
efi.runtime_version = efi_systab.fw_revision; efi.runtime_version = efi_systab.hdr.revision;
efi.get_time = virt_efi_get_time; efi.get_time = virt_efi_get_time;
efi.set_time = virt_efi_set_time; efi.set_time = virt_efi_set_time;
efi.get_wakeup_time = virt_efi_get_wakeup_time; efi.get_wakeup_time = virt_efi_get_wakeup_time;
@ -969,6 +977,9 @@ u32 efi_mem_type(unsigned long phys_addr)
efi_memory_desc_t *md; efi_memory_desc_t *md;
void *p; void *p;
if (!efi_enabled(EFI_MEMMAP))
return 0;
for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) { for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
md = p; md = p;
if ((md->phys_addr <= phys_addr) && if ((md->phys_addr <= phys_addr) &&
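
The hunks above replace the single efi_enabled flag with a per-facility bitmask that callers query through efi_enabled(facility), so losing one capability (say, the memory map) no longer disables everything else. Below is a minimal user-space sketch of that pattern; the facility names and helpers are invented for illustration, whereas the kernel uses set_bit()/test_bit() on x86_efi_facility.

    #include <stdio.h>

    /* Hypothetical facility bits, mirroring the idea of EFI_BOOT, EFI_64BIT,
     * EFI_MEMMAP and friends; the real identifiers live in the kernel headers. */
    enum { FAC_BOOT, FAC_64BIT, FAC_MEMMAP };

    static unsigned long facilities;                 /* plays the role of x86_efi_facility */

    static void facility_set(int bit)     { facilities |= 1UL << bit; }
    static void facility_clear(int bit)   { facilities &= ~(1UL << bit); }
    static int  facility_enabled(int bit) { return (facilities >> bit) & 1; }

    int main(void)
    {
        facility_set(FAC_BOOT);                      /* loader signature recognised */
        facility_set(FAC_MEMMAP);                    /* memory map successfully mapped */

        if (facility_enabled(FAC_BOOT))
            puts("booted via EFI");
        if (!facility_enabled(FAC_64BIT))
            puts("32/64-bit mismatch: skip runtime services, keep the rest");

        facility_clear(FAC_MEMMAP);                  /* analogue of efi_unmap_memmap() */
        return 0;
    }

The point of the split is that each consumer tests only the facility it actually needs, which is what the efi_enabled(EFI_BOOT), efi_enabled(EFI_MEMMAP) and efi_enabled(EFI_CONFIG_TABLES) call sites elsewhere in this commit rely on.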

View file

@ -38,7 +38,7 @@
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
#include <asm/fixmap.h> #include <asm/fixmap.h>
static pgd_t save_pgd __initdata; static pgd_t *save_pgd __initdata;
static unsigned long efi_flags __initdata; static unsigned long efi_flags __initdata;
static void __init early_code_mapping_set_exec(int executable) static void __init early_code_mapping_set_exec(int executable)
@ -61,12 +61,20 @@ static void __init early_code_mapping_set_exec(int executable)
void __init efi_call_phys_prelog(void) void __init efi_call_phys_prelog(void)
{ {
unsigned long vaddress; unsigned long vaddress;
int pgd;
int n_pgds;
early_code_mapping_set_exec(1); early_code_mapping_set_exec(1);
local_irq_save(efi_flags); local_irq_save(efi_flags);
vaddress = (unsigned long)__va(0x0UL);
save_pgd = *pgd_offset_k(0x0UL); n_pgds = DIV_ROUND_UP((max_pfn << PAGE_SHIFT), PGDIR_SIZE);
set_pgd(pgd_offset_k(0x0UL), *pgd_offset_k(vaddress)); save_pgd = kmalloc(n_pgds * sizeof(pgd_t), GFP_KERNEL);
for (pgd = 0; pgd < n_pgds; pgd++) {
save_pgd[pgd] = *pgd_offset_k(pgd * PGDIR_SIZE);
vaddress = (unsigned long)__va(pgd * PGDIR_SIZE);
set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), *pgd_offset_k(vaddress));
}
__flush_tlb_all(); __flush_tlb_all();
} }
@ -75,7 +83,11 @@ void __init efi_call_phys_epilog(void)
/* /*
* After the lock is released, the original page table is restored. * After the lock is released, the original page table is restored.
*/ */
set_pgd(pgd_offset_k(0x0UL), save_pgd); int pgd;
int n_pgds = DIV_ROUND_UP((max_pfn << PAGE_SHIFT) , PGDIR_SIZE);
for (pgd = 0; pgd < n_pgds; pgd++)
set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), save_pgd[pgd]);
kfree(save_pgd);
__flush_tlb_all(); __flush_tlb_all();
local_irq_restore(efi_flags); local_irq_restore(efi_flags);
early_code_mapping_set_exec(0); early_code_mapping_set_exec(0);
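
efi_call_phys_prelog()/epilog() now save and restore one entry per covered PGD range instead of a single entry, sized with DIV_ROUND_UP over the amount of physical memory. A rough user-space analogue of that save/patch/restore sequence follows; the table, range size and values are made up for illustration.

    #include <stdio.h>
    #include <stdlib.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    #define RANGE_SIZE 4                 /* stands in for PGDIR_SIZE */
    #define TABLE_LEN  16

    static int table[TABLE_LEN];         /* stands in for the kernel's page tables */

    int main(void)
    {
        size_t covered = 10;                               /* analogue of max_pfn << PAGE_SHIFT */
        size_t n = DIV_ROUND_UP(covered, RANGE_SIZE);      /* entries that must be preserved */
        int *saved = malloc(n * sizeof(*saved));
        if (!saved)
            return 1;

        for (size_t i = 0; i < TABLE_LEN; i++)             /* pretend these are live entries */
            table[i] = (int)i + 100;

        for (size_t i = 0; i < n; i++) {                   /* prelog: save, then patch in a 1:1 map */
            saved[i] = table[i];
            table[i] = -1;
        }

        /* ...the physical-mode firmware call would happen here... */

        for (size_t i = 0; i < n; i++)                     /* epilog: put the originals back */
            table[i] = saved[i];
        free(saved);

        printf("saved and restored %zu entries\n", n);
        return 0;
    }

The change itself is simply that one saved entry is no longer enough once more than one PGD range has to be remapped for the firmware call.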

View file

@ -1034,7 +1034,8 @@ static int set_distrib_bits(struct cpumask *flush_mask, struct bau_control *bcp,
* globally purge translation cache of a virtual address or all TLB's * globally purge translation cache of a virtual address or all TLB's
* @cpumask: mask of all cpu's in which the address is to be removed * @cpumask: mask of all cpu's in which the address is to be removed
* @mm: mm_struct containing virtual address range * @mm: mm_struct containing virtual address range
* @va: virtual address to be removed (or TLB_FLUSH_ALL for all TLB's on cpu) * @start: start virtual address to be removed from TLB
* @end: end virtual address to be remove from TLB
* @cpu: the current cpu * @cpu: the current cpu
* *
* This is the entry point for initiating any UV global TLB shootdown. * This is the entry point for initiating any UV global TLB shootdown.
@ -1056,7 +1057,7 @@ static int set_distrib_bits(struct cpumask *flush_mask, struct bau_control *bcp,
*/ */
const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask, const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask,
struct mm_struct *mm, unsigned long start, struct mm_struct *mm, unsigned long start,
unsigned end, unsigned int cpu) unsigned long end, unsigned int cpu)
{ {
int locals = 0; int locals = 0;
int remotes = 0; int remotes = 0;
@ -1113,7 +1114,10 @@ const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask,
record_send_statistics(stat, locals, hubs, remotes, bau_desc); record_send_statistics(stat, locals, hubs, remotes, bau_desc);
if (!end || (end - start) <= PAGE_SIZE)
bau_desc->payload.address = start; bau_desc->payload.address = start;
else
bau_desc->payload.address = TLB_FLUSH_ALL;
bau_desc->payload.sending_cpu = cpu; bau_desc->payload.sending_cpu = cpu;
/* /*
* uv_flush_send_and_wait returns 0 if all cpu's were messaged, * uv_flush_send_and_wait returns 0 if all cpu's were messaged,

View file

@ -55,7 +55,7 @@ static FILE *input_file; /* Input file name */
static void usage(const char *err) static void usage(const char *err)
{ {
if (err) if (err)
fprintf(stderr, "Error: %s\n\n", err); fprintf(stderr, "%s: Error: %s\n\n", prog, err);
fprintf(stderr, "Usage: %s [-y|-n|-v] [-s seed[,no]] [-m max] [-i input]\n", prog); fprintf(stderr, "Usage: %s [-y|-n|-v] [-s seed[,no]] [-m max] [-i input]\n", prog);
fprintf(stderr, "\t-y 64bit mode\n"); fprintf(stderr, "\t-y 64bit mode\n");
fprintf(stderr, "\t-n 32bit mode\n"); fprintf(stderr, "\t-n 32bit mode\n");
@ -269,7 +269,13 @@ int main(int argc, char **argv)
insns++; insns++;
} }
fprintf(stdout, "%s: decoded and checked %d %s instructions with %d errors (seed:0x%x)\n", (errors) ? "Failure" : "Success", insns, (input_file) ? "given" : "random", errors, seed); fprintf(stdout, "%s: %s: decoded and checked %d %s instructions with %d errors (seed:0x%x)\n",
prog,
(errors) ? "Failure" : "Success",
insns,
(input_file) ? "given" : "random",
errors,
seed);
return errors ? 1 : 0; return errors ? 1 : 0;
} }
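
The insn_sanity changes prefix both the usage text and the final summary with the tool name so its output can be told apart when several test tools run together. A stand-alone illustration of the same convention (the option list is abbreviated and the message text is invented):

    #include <libgen.h>
    #include <stdio.h>

    static const char *prog;

    static void usage(const char *err)
    {
        if (err)
            fprintf(stderr, "%s: Error: %s\n\n", prog, err);
        fprintf(stderr, "Usage: %s [-y|-n|-v]\n", prog);
    }

    int main(int argc, char **argv)
    {
        (void)argc;
        prog = basename(argv[0]);        /* strip the leading path for cleaner messages */
        usage("demo error message");
        return 0;
    }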

View file

@ -814,12 +814,14 @@ int main(int argc, char **argv)
read_relocs(fp); read_relocs(fp);
if (show_absolute_syms) { if (show_absolute_syms) {
print_absolute_symbols(); print_absolute_symbols();
return 0; goto out;
} }
if (show_absolute_relocs) { if (show_absolute_relocs) {
print_absolute_relocs(); print_absolute_relocs();
return 0; goto out;
} }
emit_relocs(as_text, use_real_mode); emit_relocs(as_text, use_real_mode);
out:
fclose(fp);
return 0; return 0;
} }
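
The relocs change routes the two early-exit paths through a common label so the input file is always closed. The same single-exit cleanup idiom in a self-contained form (the path and the flag are arbitrary):

    #include <stdio.h>

    static int process(const char *path, int show_only)
    {
        int ret = 0;
        FILE *fp = fopen(path, "rb");

        if (!fp)
            return -1;

        if (show_only)
            goto out;                    /* early exit still reaches the cleanup */

        /* ...the real work on fp would go here... */

    out:
        fclose(fp);                      /* the one place that releases the resource */
        return ret;
    }

    int main(void)
    {
        return process("/etc/hostname", 1) == 0 ? 0 : 1;
    }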

View file

@ -170,4 +170,19 @@ dma_cache_sync(struct device *dev, void *vaddr, size_t size,
consistent_sync(vaddr, size, direction); consistent_sync(vaddr, size, direction);
} }
/* Not supported for now */
static inline int dma_mmap_coherent(struct device *dev,
struct vm_area_struct *vma, void *cpu_addr,
dma_addr_t dma_addr, size_t size)
{
return -EINVAL;
}
static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size)
{
return -EINVAL;
}
#endif /* _XTENSA_DMA_MAPPING_H */ #endif /* _XTENSA_DMA_MAPPING_H */

View file

@ -35,6 +35,8 @@ static DEFINE_IDR(ext_devt_idr);
static struct device_type disk_type; static struct device_type disk_type;
static void disk_check_events(struct disk_events *ev,
unsigned int *clearing_ptr);
static void disk_alloc_events(struct gendisk *disk); static void disk_alloc_events(struct gendisk *disk);
static void disk_add_events(struct gendisk *disk); static void disk_add_events(struct gendisk *disk);
static void disk_del_events(struct gendisk *disk); static void disk_del_events(struct gendisk *disk);
@ -1549,6 +1551,7 @@ unsigned int disk_clear_events(struct gendisk *disk, unsigned int mask)
const struct block_device_operations *bdops = disk->fops; const struct block_device_operations *bdops = disk->fops;
struct disk_events *ev = disk->ev; struct disk_events *ev = disk->ev;
unsigned int pending; unsigned int pending;
unsigned int clearing = mask;
if (!ev) { if (!ev) {
/* for drivers still using the old ->media_changed method */ /* for drivers still using the old ->media_changed method */
@ -1558,34 +1561,53 @@ unsigned int disk_clear_events(struct gendisk *disk, unsigned int mask)
return 0; return 0;
} }
/* tell the workfn about the events being cleared */ disk_block_events(disk);
/*
* store the union of mask and ev->clearing on the stack so that the
* race with disk_flush_events does not cause ambiguity (ev->clearing
* can still be modified even if events are blocked).
*/
spin_lock_irq(&ev->lock); spin_lock_irq(&ev->lock);
ev->clearing |= mask; clearing |= ev->clearing;
ev->clearing = 0;
spin_unlock_irq(&ev->lock); spin_unlock_irq(&ev->lock);
/* uncondtionally schedule event check and wait for it to finish */ disk_check_events(ev, &clearing);
disk_block_events(disk); /*
queue_delayed_work(system_freezable_wq, &ev->dwork, 0); * if ev->clearing is not 0, the disk_flush_events got called in the
flush_delayed_work(&ev->dwork); * middle of this function, so we want to run the workfn without delay.
__disk_unblock_events(disk, false); */
__disk_unblock_events(disk, ev->clearing ? true : false);
/* then, fetch and clear pending events */ /* then, fetch and clear pending events */
spin_lock_irq(&ev->lock); spin_lock_irq(&ev->lock);
WARN_ON_ONCE(ev->clearing & mask); /* cleared by workfn */
pending = ev->pending & mask; pending = ev->pending & mask;
ev->pending &= ~mask; ev->pending &= ~mask;
spin_unlock_irq(&ev->lock); spin_unlock_irq(&ev->lock);
WARN_ON_ONCE(clearing & mask);
return pending; return pending;
} }
/*
* Separate this part out so that a different pointer for clearing_ptr can be
* passed in for disk_clear_events.
*/
static void disk_events_workfn(struct work_struct *work) static void disk_events_workfn(struct work_struct *work)
{ {
struct delayed_work *dwork = to_delayed_work(work); struct delayed_work *dwork = to_delayed_work(work);
struct disk_events *ev = container_of(dwork, struct disk_events, dwork); struct disk_events *ev = container_of(dwork, struct disk_events, dwork);
disk_check_events(ev, &ev->clearing);
}
static void disk_check_events(struct disk_events *ev,
unsigned int *clearing_ptr)
{
struct gendisk *disk = ev->disk; struct gendisk *disk = ev->disk;
char *envp[ARRAY_SIZE(disk_uevents) + 1] = { }; char *envp[ARRAY_SIZE(disk_uevents) + 1] = { };
unsigned int clearing = ev->clearing; unsigned int clearing = *clearing_ptr;
unsigned int events; unsigned int events;
unsigned long intv; unsigned long intv;
int nr_events = 0, i; int nr_events = 0, i;
@ -1598,7 +1620,7 @@ static void disk_events_workfn(struct work_struct *work)
events &= ~ev->pending; events &= ~ev->pending;
ev->pending |= events; ev->pending |= events;
ev->clearing &= ~clearing; *clearing_ptr &= ~clearing;
intv = disk_events_poll_jiffies(disk); intv = disk_events_poll_jiffies(disk);
if (!ev->block && intv) if (!ev->block && intv)
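
The genhd changes let disk_clear_events() run the event check synchronously with a stack-local copy of the bits to clear, while the periodic worker keeps using the shared ev->clearing field; passing a pointer lets one function serve both callers. A much-simplified, single-threaded sketch of that shape (locking and the real event sources are omitted, and the fields are only loosely modelled on the kernel's):

    #include <stdio.h>

    struct events {
        unsigned int pending;            /* events seen but not yet consumed */
        unsigned int clearing;           /* bits the periodic worker should clear */
    };

    /* Core check: report new events and acknowledge whatever *clearing_ptr asks for. */
    static void check_events(struct events *ev, unsigned int *clearing_ptr)
    {
        unsigned int clearing = *clearing_ptr;
        unsigned int events = 0x4;                 /* pretend this was just polled */

        events &= ~ev->pending;                    /* only keep newly seen events */
        ev->pending |= events;
        *clearing_ptr &= ~clearing;                /* acknowledge the requested bits */
    }

    int main(void)
    {
        struct events ev = { .pending = 0x3, .clearing = 0x0 };

        /* Synchronous caller: use a local copy, as disk_clear_events() now does. */
        unsigned int clearing = 0x1 | ev.clearing;
        ev.clearing = 0;
        check_events(&ev, &clearing);

        /* Periodic worker: operate on the shared field, as disk_events_workfn() does. */
        check_events(&ev, &ev.clearing);

        printf("pending=%#x, leftover clearing=%#x\n", ev.pending, clearing);
        return 0;
    }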

View file

@ -250,7 +250,7 @@ acpi_physical_address __init acpi_os_get_root_pointer(void)
return acpi_rsdp; return acpi_rsdp;
#endif #endif
if (efi_enabled) { if (efi_enabled(EFI_CONFIG_TABLES)) {
if (efi.acpi20 != EFI_INVALID_TABLE_ADDR) if (efi.acpi20 != EFI_INVALID_TABLE_ADDR)
return efi.acpi20; return efi.acpi20;
else if (efi.acpi != EFI_INVALID_TABLE_ADDR) else if (efi.acpi != EFI_INVALID_TABLE_ADDR)

View file

@ -636,81 +636,81 @@ struct rx_buf_desc {
#define SEG_BASE IPHASE5575_FRAG_CONTROL_REG_BASE #define SEG_BASE IPHASE5575_FRAG_CONTROL_REG_BASE
#define REASS_BASE IPHASE5575_REASS_CONTROL_REG_BASE #define REASS_BASE IPHASE5575_REASS_CONTROL_REG_BASE
typedef volatile u_int freg_t; typedef volatile u_int ffreg_t;
typedef u_int rreg_t; typedef u_int rreg_t;
typedef struct _ffredn_t { typedef struct _ffredn_t {
freg_t idlehead_high; /* Idle cell header (high) */ ffreg_t idlehead_high; /* Idle cell header (high) */
freg_t idlehead_low; /* Idle cell header (low) */ ffreg_t idlehead_low; /* Idle cell header (low) */
freg_t maxrate; /* Maximum rate */ ffreg_t maxrate; /* Maximum rate */
freg_t stparms; /* Traffic Management Parameters */ ffreg_t stparms; /* Traffic Management Parameters */
freg_t abrubr_abr; /* ABRUBR Priority Byte 1, TCR Byte 0 */ ffreg_t abrubr_abr; /* ABRUBR Priority Byte 1, TCR Byte 0 */
freg_t rm_type; /* */ ffreg_t rm_type; /* */
u_int filler5[0x17 - 0x06]; u_int filler5[0x17 - 0x06];
freg_t cmd_reg; /* Command register */ ffreg_t cmd_reg; /* Command register */
u_int filler18[0x20 - 0x18]; u_int filler18[0x20 - 0x18];
freg_t cbr_base; /* CBR Pointer Base */ ffreg_t cbr_base; /* CBR Pointer Base */
freg_t vbr_base; /* VBR Pointer Base */ ffreg_t vbr_base; /* VBR Pointer Base */
freg_t abr_base; /* ABR Pointer Base */ ffreg_t abr_base; /* ABR Pointer Base */
freg_t ubr_base; /* UBR Pointer Base */ ffreg_t ubr_base; /* UBR Pointer Base */
u_int filler24; u_int filler24;
freg_t vbrwq_base; /* VBR Wait Queue Base */ ffreg_t vbrwq_base; /* VBR Wait Queue Base */
freg_t abrwq_base; /* ABR Wait Queue Base */ ffreg_t abrwq_base; /* ABR Wait Queue Base */
freg_t ubrwq_base; /* UBR Wait Queue Base */ ffreg_t ubrwq_base; /* UBR Wait Queue Base */
freg_t vct_base; /* Main VC Table Base */ ffreg_t vct_base; /* Main VC Table Base */
freg_t vcte_base; /* Extended Main VC Table Base */ ffreg_t vcte_base; /* Extended Main VC Table Base */
u_int filler2a[0x2C - 0x2A]; u_int filler2a[0x2C - 0x2A];
freg_t cbr_tab_beg; /* CBR Table Begin */ ffreg_t cbr_tab_beg; /* CBR Table Begin */
freg_t cbr_tab_end; /* CBR Table End */ ffreg_t cbr_tab_end; /* CBR Table End */
freg_t cbr_pointer; /* CBR Pointer */ ffreg_t cbr_pointer; /* CBR Pointer */
u_int filler2f[0x30 - 0x2F]; u_int filler2f[0x30 - 0x2F];
freg_t prq_st_adr; /* Packet Ready Queue Start Address */ ffreg_t prq_st_adr; /* Packet Ready Queue Start Address */
freg_t prq_ed_adr; /* Packet Ready Queue End Address */ ffreg_t prq_ed_adr; /* Packet Ready Queue End Address */
freg_t prq_rd_ptr; /* Packet Ready Queue read pointer */ ffreg_t prq_rd_ptr; /* Packet Ready Queue read pointer */
freg_t prq_wr_ptr; /* Packet Ready Queue write pointer */ ffreg_t prq_wr_ptr; /* Packet Ready Queue write pointer */
freg_t tcq_st_adr; /* Transmit Complete Queue Start Address*/ ffreg_t tcq_st_adr; /* Transmit Complete Queue Start Address*/
freg_t tcq_ed_adr; /* Transmit Complete Queue End Address */ ffreg_t tcq_ed_adr; /* Transmit Complete Queue End Address */
freg_t tcq_rd_ptr; /* Transmit Complete Queue read pointer */ ffreg_t tcq_rd_ptr; /* Transmit Complete Queue read pointer */
freg_t tcq_wr_ptr; /* Transmit Complete Queue write pointer*/ ffreg_t tcq_wr_ptr; /* Transmit Complete Queue write pointer*/
u_int filler38[0x40 - 0x38]; u_int filler38[0x40 - 0x38];
freg_t queue_base; /* Base address for PRQ and TCQ */ ffreg_t queue_base; /* Base address for PRQ and TCQ */
freg_t desc_base; /* Base address of descriptor table */ ffreg_t desc_base; /* Base address of descriptor table */
u_int filler42[0x45 - 0x42]; u_int filler42[0x45 - 0x42];
freg_t mode_reg_0; /* Mode register 0 */ ffreg_t mode_reg_0; /* Mode register 0 */
freg_t mode_reg_1; /* Mode register 1 */ ffreg_t mode_reg_1; /* Mode register 1 */
freg_t intr_status_reg;/* Interrupt Status register */ ffreg_t intr_status_reg;/* Interrupt Status register */
freg_t mask_reg; /* Mask Register */ ffreg_t mask_reg; /* Mask Register */
freg_t cell_ctr_high1; /* Total cell transfer count (high) */ ffreg_t cell_ctr_high1; /* Total cell transfer count (high) */
freg_t cell_ctr_lo1; /* Total cell transfer count (low) */ ffreg_t cell_ctr_lo1; /* Total cell transfer count (low) */
freg_t state_reg; /* Status register */ ffreg_t state_reg; /* Status register */
u_int filler4c[0x58 - 0x4c]; u_int filler4c[0x58 - 0x4c];
freg_t curr_desc_num; /* Contains the current descriptor num */ ffreg_t curr_desc_num; /* Contains the current descriptor num */
freg_t next_desc; /* Next descriptor */ ffreg_t next_desc; /* Next descriptor */
freg_t next_vc; /* Next VC */ ffreg_t next_vc; /* Next VC */
u_int filler5b[0x5d - 0x5b]; u_int filler5b[0x5d - 0x5b];
freg_t present_slot_cnt;/* Present slot count */ ffreg_t present_slot_cnt;/* Present slot count */
u_int filler5e[0x6a - 0x5e]; u_int filler5e[0x6a - 0x5e];
freg_t new_desc_num; /* New descriptor number */ ffreg_t new_desc_num; /* New descriptor number */
freg_t new_vc; /* New VC */ ffreg_t new_vc; /* New VC */
freg_t sched_tbl_ptr; /* Schedule table pointer */ ffreg_t sched_tbl_ptr; /* Schedule table pointer */
freg_t vbrwq_wptr; /* VBR wait queue write pointer */ ffreg_t vbrwq_wptr; /* VBR wait queue write pointer */
freg_t vbrwq_rptr; /* VBR wait queue read pointer */ ffreg_t vbrwq_rptr; /* VBR wait queue read pointer */
freg_t abrwq_wptr; /* ABR wait queue write pointer */ ffreg_t abrwq_wptr; /* ABR wait queue write pointer */
freg_t abrwq_rptr; /* ABR wait queue read pointer */ ffreg_t abrwq_rptr; /* ABR wait queue read pointer */
freg_t ubrwq_wptr; /* UBR wait queue write pointer */ ffreg_t ubrwq_wptr; /* UBR wait queue write pointer */
freg_t ubrwq_rptr; /* UBR wait queue read pointer */ ffreg_t ubrwq_rptr; /* UBR wait queue read pointer */
freg_t cbr_vc; /* CBR VC */ ffreg_t cbr_vc; /* CBR VC */
freg_t vbr_sb_vc; /* VBR SB VC */ ffreg_t vbr_sb_vc; /* VBR SB VC */
freg_t abr_sb_vc; /* ABR SB VC */ ffreg_t abr_sb_vc; /* ABR SB VC */
freg_t ubr_sb_vc; /* UBR SB VC */ ffreg_t ubr_sb_vc; /* UBR SB VC */
freg_t vbr_next_link; /* VBR next link */ ffreg_t vbr_next_link; /* VBR next link */
freg_t abr_next_link; /* ABR next link */ ffreg_t abr_next_link; /* ABR next link */
freg_t ubr_next_link; /* UBR next link */ ffreg_t ubr_next_link; /* UBR next link */
u_int filler7a[0x7c-0x7a]; u_int filler7a[0x7c-0x7a];
freg_t out_rate_head; /* Out of rate head */ ffreg_t out_rate_head; /* Out of rate head */
u_int filler7d[0xca-0x7d]; /* pad out to full address space */ u_int filler7d[0xca-0x7d]; /* pad out to full address space */
freg_t cell_ctr_high1_nc;/* Total cell transfer count (high) */ ffreg_t cell_ctr_high1_nc;/* Total cell transfer count (high) */
freg_t cell_ctr_lo1_nc;/* Total cell transfer count (low) */ ffreg_t cell_ctr_lo1_nc;/* Total cell transfer count (low) */
u_int fillercc[0x100-0xcc]; /* pad out to full address space */ u_int fillercc[0x100-0xcc]; /* pad out to full address space */
} ffredn_t; } ffredn_t;

View file

@ -97,11 +97,16 @@ void bcma_core_pci_hostmode_init(struct bcma_drv_pci *pc);
#ifdef CONFIG_BCMA_DRIVER_GPIO #ifdef CONFIG_BCMA_DRIVER_GPIO
/* driver_gpio.c */ /* driver_gpio.c */
int bcma_gpio_init(struct bcma_drv_cc *cc); int bcma_gpio_init(struct bcma_drv_cc *cc);
int bcma_gpio_unregister(struct bcma_drv_cc *cc);
#else #else
static inline int bcma_gpio_init(struct bcma_drv_cc *cc) static inline int bcma_gpio_init(struct bcma_drv_cc *cc)
{ {
return -ENOTSUPP; return -ENOTSUPP;
} }
static inline int bcma_gpio_unregister(struct bcma_drv_cc *cc)
{
return 0;
}
#endif /* CONFIG_BCMA_DRIVER_GPIO */ #endif /* CONFIG_BCMA_DRIVER_GPIO */
#endif #endif

View file

@ -107,3 +107,8 @@ int bcma_gpio_init(struct bcma_drv_cc *cc)
return gpiochip_add(chip); return gpiochip_add(chip);
} }
int bcma_gpio_unregister(struct bcma_drv_cc *cc)
{
return gpiochip_remove(&cc->gpio);
}

View file

@ -276,6 +276,13 @@ int bcma_bus_register(struct bcma_bus *bus)
void bcma_bus_unregister(struct bcma_bus *bus) void bcma_bus_unregister(struct bcma_bus *bus)
{ {
struct bcma_device *cores[3]; struct bcma_device *cores[3];
int err;
err = bcma_gpio_unregister(&bus->drv_cc);
if (err == -EBUSY)
bcma_err(bus, "Some GPIOs are still in use.\n");
else if (err)
bcma_err(bus, "Can not unregister GPIO driver: %i\n", err);
cores[0] = bcma_find_core(bus, BCMA_CORE_MIPS_74K); cores[0] = bcma_find_core(bus, BCMA_CORE_MIPS_74K);
cores[1] = bcma_find_core(bus, BCMA_CORE_PCIE); cores[1] = bcma_find_core(bus, BCMA_CORE_PCIE);

View file

@ -168,7 +168,7 @@ static void wake_all_senders(struct drbd_tconn *tconn) {
} }
/* must hold resource->req_lock */ /* must hold resource->req_lock */
static void start_new_tl_epoch(struct drbd_tconn *tconn) void start_new_tl_epoch(struct drbd_tconn *tconn)
{ {
/* no point closing an epoch, if it is empty, anyways. */ /* no point closing an epoch, if it is empty, anyways. */
if (tconn->current_tle_writes == 0) if (tconn->current_tle_writes == 0)

View file

@ -267,6 +267,7 @@ struct bio_and_error {
int error; int error;
}; };
extern void start_new_tl_epoch(struct drbd_tconn *tconn);
extern void drbd_req_destroy(struct kref *kref); extern void drbd_req_destroy(struct kref *kref);
extern void _req_may_be_done(struct drbd_request *req, extern void _req_may_be_done(struct drbd_request *req,
struct bio_and_error *m); struct bio_and_error *m);

View file

@ -931,6 +931,7 @@ __drbd_set_state(struct drbd_conf *mdev, union drbd_state ns,
enum drbd_state_rv rv = SS_SUCCESS; enum drbd_state_rv rv = SS_SUCCESS;
enum sanitize_state_warnings ssw; enum sanitize_state_warnings ssw;
struct after_state_chg_work *ascw; struct after_state_chg_work *ascw;
bool did_remote, should_do_remote;
os = drbd_read_state(mdev); os = drbd_read_state(mdev);
@ -981,11 +982,17 @@ __drbd_set_state(struct drbd_conf *mdev, union drbd_state ns,
(os.disk != D_DISKLESS && ns.disk == D_DISKLESS)) (os.disk != D_DISKLESS && ns.disk == D_DISKLESS))
atomic_inc(&mdev->local_cnt); atomic_inc(&mdev->local_cnt);
did_remote = drbd_should_do_remote(mdev->state);
mdev->state.i = ns.i; mdev->state.i = ns.i;
should_do_remote = drbd_should_do_remote(mdev->state);
mdev->tconn->susp = ns.susp; mdev->tconn->susp = ns.susp;
mdev->tconn->susp_nod = ns.susp_nod; mdev->tconn->susp_nod = ns.susp_nod;
mdev->tconn->susp_fen = ns.susp_fen; mdev->tconn->susp_fen = ns.susp_fen;
/* put replicated vs not-replicated requests in separate epochs */
if (did_remote != should_do_remote)
start_new_tl_epoch(mdev->tconn);
if (os.disk == D_ATTACHING && ns.disk >= D_NEGOTIATING) if (os.disk == D_ATTACHING && ns.disk >= D_NEGOTIATING)
drbd_print_uuids(mdev, "attached to UUIDs"); drbd_print_uuids(mdev, "attached to UUIDs");
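
The __drbd_set_state() hunk samples drbd_should_do_remote() before and after the state update and opens a new transfer-log epoch only when the answer flips. The before/after transition-detection idiom in isolation, with an invented predicate standing in for the DRBD one:

    #include <stdbool.h>
    #include <stdio.h>

    struct state { bool connected; bool disk_ok; };

    /* Invented stand-in for drbd_should_do_remote(). */
    static bool should_do_remote(const struct state *s)
    {
        return s->connected && s->disk_ok;
    }

    static void start_new_epoch(void)
    {
        puts("putting the following requests into a new epoch");
    }

    static void set_state(struct state *s, struct state ns)
    {
        bool did = should_do_remote(s);            /* sample before the change */

        *s = ns;

        bool should = should_do_remote(s);         /* ...and after it */
        if (did != should)                         /* keep replicated and local requests apart */
            start_new_epoch();
    }

    int main(void)
    {
        struct state s = { .connected = true, .disk_ok = true };

        set_state(&s, (struct state){ .connected = false, .disk_ok = true });
        return 0;
    }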

View file

@ -626,13 +626,14 @@ static void mtip_timeout_function(unsigned long int data)
} }
} }
if (cmdto_cnt && !test_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags)) { if (cmdto_cnt) {
print_tags(port->dd, "timed out", tagaccum, cmdto_cnt); print_tags(port->dd, "timed out", tagaccum, cmdto_cnt);
if (!test_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags)) {
mtip_restart_port(port); mtip_restart_port(port);
clear_bit(MTIP_PF_EH_ACTIVE_BIT, &port->flags);
wake_up_interruptible(&port->svc_wait); wake_up_interruptible(&port->svc_wait);
} }
clear_bit(MTIP_PF_EH_ACTIVE_BIT, &port->flags);
}
if (port->ic_pause_timer) { if (port->ic_pause_timer) {
to = port->ic_pause_timer + msecs_to_jiffies(1000); to = port->ic_pause_timer + msecs_to_jiffies(1000);
@ -3887,7 +3888,12 @@ static int mtip_block_remove(struct driver_data *dd)
* Delete our gendisk structure. This also removes the device * Delete our gendisk structure. This also removes the device
* from /dev * from /dev
*/ */
if (dd->disk) {
if (dd->disk->queue)
del_gendisk(dd->disk); del_gendisk(dd->disk);
else
put_disk(dd->disk);
}
spin_lock(&rssd_index_lock); spin_lock(&rssd_index_lock);
ida_remove(&rssd_index_ida, dd->index); ida_remove(&rssd_index_ida, dd->index);
@ -3921,7 +3927,13 @@ static int mtip_block_shutdown(struct driver_data *dd)
"Shutting down %s ...\n", dd->disk->disk_name); "Shutting down %s ...\n", dd->disk->disk_name);
/* Delete our gendisk structure, and cleanup the blk queue. */ /* Delete our gendisk structure, and cleanup the blk queue. */
if (dd->disk) {
if (dd->disk->queue)
del_gendisk(dd->disk); del_gendisk(dd->disk);
else
put_disk(dd->disk);
}
spin_lock(&rssd_index_lock); spin_lock(&rssd_index_lock);
ida_remove(&rssd_index_ida, dd->index); ida_remove(&rssd_index_ida, dd->index);

View file

@ -161,10 +161,12 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
static void make_response(struct xen_blkif *blkif, u64 id, static void make_response(struct xen_blkif *blkif, u64 id,
unsigned short op, int st); unsigned short op, int st);
#define foreach_grant(pos, rbtree, node) \ #define foreach_grant_safe(pos, n, rbtree, node) \
for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node); \ for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node), \
(n) = rb_next(&(pos)->node); \
&(pos)->node != NULL; \ &(pos)->node != NULL; \
(pos) = container_of(rb_next(&(pos)->node), typeof(*(pos)), node)) (pos) = container_of(n, typeof(*(pos)), node), \
(n) = (&(pos)->node != NULL) ? rb_next(&(pos)->node) : NULL)
static void add_persistent_gnt(struct rb_root *root, static void add_persistent_gnt(struct rb_root *root,
@ -217,10 +219,11 @@ static void free_persistent_gnts(struct rb_root *root, unsigned int num)
struct gnttab_unmap_grant_ref unmap[BLKIF_MAX_SEGMENTS_PER_REQUEST]; struct gnttab_unmap_grant_ref unmap[BLKIF_MAX_SEGMENTS_PER_REQUEST];
struct page *pages[BLKIF_MAX_SEGMENTS_PER_REQUEST]; struct page *pages[BLKIF_MAX_SEGMENTS_PER_REQUEST];
struct persistent_gnt *persistent_gnt; struct persistent_gnt *persistent_gnt;
struct rb_node *n;
int ret = 0; int ret = 0;
int segs_to_unmap = 0; int segs_to_unmap = 0;
foreach_grant(persistent_gnt, root, node) { foreach_grant_safe(persistent_gnt, n, root, node) {
BUG_ON(persistent_gnt->handle == BUG_ON(persistent_gnt->handle ==
BLKBACK_INVALID_HANDLE); BLKBACK_INVALID_HANDLE);
gnttab_set_unmap_op(&unmap[segs_to_unmap], gnttab_set_unmap_op(&unmap[segs_to_unmap],
@ -230,9 +233,6 @@ static void free_persistent_gnts(struct rb_root *root, unsigned int num)
persistent_gnt->handle); persistent_gnt->handle);
pages[segs_to_unmap] = persistent_gnt->page; pages[segs_to_unmap] = persistent_gnt->page;
rb_erase(&persistent_gnt->node, root);
kfree(persistent_gnt);
num--;
if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST || if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
!rb_next(&persistent_gnt->node)) { !rb_next(&persistent_gnt->node)) {
@ -241,6 +241,10 @@ static void free_persistent_gnts(struct rb_root *root, unsigned int num)
BUG_ON(ret); BUG_ON(ret);
segs_to_unmap = 0; segs_to_unmap = 0;
} }
rb_erase(&persistent_gnt->node, root);
kfree(persistent_gnt);
num--;
} }
BUG_ON(num != 0); BUG_ON(num != 0);
} }
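
foreach_grant_safe() caches the next node before the loop body runs, so the body may erase and free the current node; the erase and kfree() are also moved to the end of each iteration. The same idea on a plain singly linked list (the kernel macro does this for an rbtree):

    #include <stdio.h>
    #include <stdlib.h>

    struct node { int val; struct node *next; };

    int main(void)
    {
        struct node *head = NULL;

        for (int i = 0; i < 4; i++) {              /* build a small list: 3, 2, 1, 0 */
            struct node *n = malloc(sizeof(*n));
            if (!n)
                return 1;
            n->val = i;
            n->next = head;
            head = n;
        }

        struct node *pos, *next;
        for (pos = head; pos; pos = next) {        /* the "_safe" iteration shape */
            next = pos->next;                      /* remember the successor first */
            printf("freeing %d\n", pos->val);
            free(pos);                             /* safe: pos is not touched again */
        }
        return 0;
    }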

View file

@ -792,6 +792,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
{ {
struct llist_node *all_gnts; struct llist_node *all_gnts;
struct grant *persistent_gnt; struct grant *persistent_gnt;
struct llist_node *n;
/* Prevent new requests being issued until we fix things up. */ /* Prevent new requests being issued until we fix things up. */
spin_lock_irq(&info->io_lock); spin_lock_irq(&info->io_lock);
@ -804,7 +805,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
/* Remove all persistent grants */ /* Remove all persistent grants */
if (info->persistent_gnts_c) { if (info->persistent_gnts_c) {
all_gnts = llist_del_all(&info->persistent_gnts); all_gnts = llist_del_all(&info->persistent_gnts);
llist_for_each_entry(persistent_gnt, all_gnts, node) { llist_for_each_entry_safe(persistent_gnt, n, all_gnts, node) {
gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL); gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL);
__free_page(pfn_to_page(persistent_gnt->pfn)); __free_page(pfn_to_page(persistent_gnt->pfn));
kfree(persistent_gnt); kfree(persistent_gnt);
@ -835,7 +836,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info, static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
struct blkif_response *bret) struct blkif_response *bret)
{ {
int i; int i = 0;
struct bio_vec *bvec; struct bio_vec *bvec;
struct req_iterator iter; struct req_iterator iter;
unsigned long flags; unsigned long flags;
@ -852,7 +853,8 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
*/ */
rq_for_each_segment(bvec, s->request, iter) { rq_for_each_segment(bvec, s->request, iter) {
BUG_ON((bvec->bv_offset + bvec->bv_len) > PAGE_SIZE); BUG_ON((bvec->bv_offset + bvec->bv_len) > PAGE_SIZE);
i = offset >> PAGE_SHIFT; if (bvec->bv_offset < offset)
i++;
BUG_ON(i >= s->req.u.rw.nr_segments); BUG_ON(i >= s->req.u.rw.nr_segments);
shared_data = kmap_atomic( shared_data = kmap_atomic(
pfn_to_page(s->grants_used[i]->pfn)); pfn_to_page(s->grants_used[i]->pfn));
@ -861,7 +863,7 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
bvec->bv_len); bvec->bv_len);
bvec_kunmap_irq(bvec_data, &flags); bvec_kunmap_irq(bvec_data, &flags);
kunmap_atomic(shared_data); kunmap_atomic(shared_data);
offset += bvec->bv_len; offset = bvec->bv_offset + bvec->bv_len;
} }
} }
/* Add the persistent grant into the list of free grants */ /* Add the persistent grant into the list of free grants */

View file

@ -2062,6 +2062,7 @@ static void virtcons_remove(struct virtio_device *vdev)
/* Disable interrupts for vqs */ /* Disable interrupts for vqs */
vdev->config->reset(vdev); vdev->config->reset(vdev);
/* Finish up work that's lined up */ /* Finish up work that's lined up */
if (use_multiport(portdev))
cancel_work_sync(&portdev->control_work); cancel_work_sync(&portdev->control_work);
list_for_each_entry_safe(port, port2, &portdev->ports, list) list_for_each_entry_safe(port, port2, &portdev->ports, list)

View file

@ -340,7 +340,7 @@ struct mem_ctl_info *edac_mc_alloc(unsigned mc_num,
/* /*
* Allocate and fill the csrow/channels structs * Allocate and fill the csrow/channels structs
*/ */
mci->csrows = kcalloc(sizeof(*mci->csrows), tot_csrows, GFP_KERNEL); mci->csrows = kcalloc(tot_csrows, sizeof(*mci->csrows), GFP_KERNEL);
if (!mci->csrows) if (!mci->csrows)
goto error; goto error;
for (row = 0; row < tot_csrows; row++) { for (row = 0; row < tot_csrows; row++) {
@ -351,7 +351,7 @@ struct mem_ctl_info *edac_mc_alloc(unsigned mc_num,
csr->csrow_idx = row; csr->csrow_idx = row;
csr->mci = mci; csr->mci = mci;
csr->nr_channels = tot_channels; csr->nr_channels = tot_channels;
csr->channels = kcalloc(sizeof(*csr->channels), tot_channels, csr->channels = kcalloc(tot_channels, sizeof(*csr->channels),
GFP_KERNEL); GFP_KERNEL);
if (!csr->channels) if (!csr->channels)
goto error; goto error;
@ -369,7 +369,7 @@ struct mem_ctl_info *edac_mc_alloc(unsigned mc_num,
/* /*
* Allocate and fill the dimm structs * Allocate and fill the dimm structs
*/ */
mci->dimms = kcalloc(sizeof(*mci->dimms), tot_dimms, GFP_KERNEL); mci->dimms = kcalloc(tot_dimms, sizeof(*mci->dimms), GFP_KERNEL);
if (!mci->dimms) if (!mci->dimms)
goto error; goto error;
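
The edac_mc_alloc() fix swaps the first two kcalloc() arguments back into the documented (count, element size) order. The same convention with standard calloc(); the struct and count below are arbitrary:

    #include <stdio.h>
    #include <stdlib.h>

    struct csrow { int idx; int nr_channels; };

    int main(void)
    {
        size_t tot_csrows = 8;

        /* Count first, element size second, matching calloc()'s and kcalloc()'s prototypes. */
        struct csrow *rows = calloc(tot_csrows, sizeof(*rows));
        if (!rows)
            return 1;

        for (size_t i = 0; i < tot_csrows; i++)
            rows[i].idx = (int)i;

        printf("allocated %zu rows of %zu bytes each\n", tot_csrows, sizeof(*rows));
        free(rows);
        return 0;
    }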

View file

@ -256,7 +256,7 @@ static ssize_t edac_pci_dev_store(struct kobject *kobj,
struct edac_pci_dev_attribute *edac_pci_dev; struct edac_pci_dev_attribute *edac_pci_dev;
edac_pci_dev = (struct edac_pci_dev_attribute *)attr; edac_pci_dev = (struct edac_pci_dev_attribute *)attr;
if (edac_pci_dev->show) if (edac_pci_dev->store)
return edac_pci_dev->store(edac_pci_dev->value, buffer, count); return edac_pci_dev->store(edac_pci_dev->value, buffer, count);
return -EIO; return -EIO;
} }

View file

@ -471,7 +471,7 @@ void __init dmi_scan_machine(void)
char __iomem *p, *q; char __iomem *p, *q;
int rc; int rc;
if (efi_enabled) { if (efi_enabled(EFI_CONFIG_TABLES)) {
if (efi.smbios == EFI_INVALID_TABLE_ADDR) if (efi.smbios == EFI_INVALID_TABLE_ADDR)
goto error; goto error;

View file

@ -674,7 +674,7 @@ static int efi_status_to_err(efi_status_t status)
err = -EACCES; err = -EACCES;
break; break;
case EFI_NOT_FOUND: case EFI_NOT_FOUND:
err = -ENOENT; err = -EIO;
break; break;
default: default:
err = -EINVAL; err = -EINVAL;
@ -793,6 +793,7 @@ static ssize_t efivarfs_file_write(struct file *file,
spin_unlock(&efivars->lock); spin_unlock(&efivars->lock);
efivar_unregister(var); efivar_unregister(var);
drop_nlink(inode); drop_nlink(inode);
d_delete(file->f_dentry);
dput(file->f_dentry); dput(file->f_dentry);
} else { } else {
@ -994,7 +995,7 @@ static int efivarfs_unlink(struct inode *dir, struct dentry *dentry)
list_del(&var->list); list_del(&var->list);
spin_unlock(&efivars->lock); spin_unlock(&efivars->lock);
efivar_unregister(var); efivar_unregister(var);
drop_nlink(dir); drop_nlink(dentry->d_inode);
dput(dentry); dput(dentry);
return 0; return 0;
} }
@ -1782,7 +1783,7 @@ efivars_init(void)
printk(KERN_INFO "EFI Variables Facility v%s %s\n", EFIVARS_VERSION, printk(KERN_INFO "EFI Variables Facility v%s %s\n", EFIVARS_VERSION,
EFIVARS_DATE); EFIVARS_DATE);
if (!efi_enabled) if (!efi_enabled(EFI_RUNTIME_SERVICES))
return 0; return 0;
/* For now we'll register the efi directory at /sys/firmware/efi */ /* For now we'll register the efi directory at /sys/firmware/efi */
@ -1822,7 +1823,7 @@ err_put:
static void __exit static void __exit
efivars_exit(void) efivars_exit(void)
{ {
if (efi_enabled) { if (efi_enabled(EFI_RUNTIME_SERVICES)) {
unregister_efivars(&__efivars); unregister_efivars(&__efivars);
kobject_put(efi_kobj); kobject_put(efi_kobj);
} }
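
efi_status_to_err() now maps EFI_NOT_FOUND to -EIO instead of -ENOENT. A toy status-to-errno translator in the same shape; the status names here are invented stand-ins for the EFI ones:

    #include <errno.h>
    #include <stdio.h>

    enum fw_status {
        FW_SUCCESS,
        FW_INVALID_PARAMETER,
        FW_SECURITY_VIOLATION,
        FW_NOT_FOUND,
    };

    static int fw_status_to_err(enum fw_status status)
    {
        switch (status) {
        case FW_SUCCESS:            return 0;
        case FW_INVALID_PARAMETER:  return -EINVAL;
        case FW_SECURITY_VIOLATION: return -EACCES;
        case FW_NOT_FOUND:          return -EIO;   /* the hunk above switches this from -ENOENT */
        default:                    return -EINVAL;
        }
    }

    int main(void)
    {
        printf("FW_NOT_FOUND -> %d\n", fw_status_to_err(FW_NOT_FOUND));
        return 0;
    }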

View file

@ -99,7 +99,7 @@ unsigned long __init find_ibft_region(unsigned long *sizep)
/* iBFT 1.03 section 1.4.3.1 mandates that UEFI machines will /* iBFT 1.03 section 1.4.3.1 mandates that UEFI machines will
* only use ACPI for this */ * only use ACPI for this */
if (!efi_enabled) if (!efi_enabled(EFI_BOOT))
find_ibft_in_mem(); find_ibft_in_mem();
if (ibft_addr) { if (ibft_addr) {

View file

@ -24,7 +24,7 @@ config DRM_EXYNOS_DMABUF
config DRM_EXYNOS_FIMD config DRM_EXYNOS_FIMD
bool "Exynos DRM FIMD" bool "Exynos DRM FIMD"
depends on DRM_EXYNOS && !FB_S3C depends on DRM_EXYNOS && !FB_S3C && !ARCH_MULTIPLATFORM
help help
Choose this option if you want to use Exynos FIMD for DRM. Choose this option if you want to use Exynos FIMD for DRM.
@ -48,7 +48,7 @@ config DRM_EXYNOS_G2D
config DRM_EXYNOS_IPP config DRM_EXYNOS_IPP
bool "Exynos DRM IPP" bool "Exynos DRM IPP"
depends on DRM_EXYNOS depends on DRM_EXYNOS && !ARCH_MULTIPLATFORM
help help
Choose this option if you want to use IPP feature for DRM. Choose this option if you want to use IPP feature for DRM.

View file

@ -18,7 +18,6 @@
#include "exynos_drm_drv.h" #include "exynos_drm_drv.h"
#include "exynos_drm_encoder.h" #include "exynos_drm_encoder.h"
#define MAX_EDID 256
#define to_exynos_connector(x) container_of(x, struct exynos_drm_connector,\ #define to_exynos_connector(x) container_of(x, struct exynos_drm_connector,\
drm_connector) drm_connector)
@ -96,7 +95,9 @@ static int exynos_drm_connector_get_modes(struct drm_connector *connector)
to_exynos_connector(connector); to_exynos_connector(connector);
struct exynos_drm_manager *manager = exynos_connector->manager; struct exynos_drm_manager *manager = exynos_connector->manager;
struct exynos_drm_display_ops *display_ops = manager->display_ops; struct exynos_drm_display_ops *display_ops = manager->display_ops;
unsigned int count; struct edid *edid = NULL;
unsigned int count = 0;
int ret;
DRM_DEBUG_KMS("%s\n", __FILE__); DRM_DEBUG_KMS("%s\n", __FILE__);
@ -114,27 +115,21 @@ static int exynos_drm_connector_get_modes(struct drm_connector *connector)
* because lcd panel has only one mode. * because lcd panel has only one mode.
*/ */
if (display_ops->get_edid) { if (display_ops->get_edid) {
int ret; edid = display_ops->get_edid(manager->dev, connector);
void *edid; if (IS_ERR_OR_NULL(edid)) {
ret = PTR_ERR(edid);
edid = kzalloc(MAX_EDID, GFP_KERNEL); edid = NULL;
if (!edid) { DRM_ERROR("Panel operation get_edid failed %d\n", ret);
DRM_ERROR("failed to allocate edid\n"); goto out;
return 0;
} }
ret = display_ops->get_edid(manager->dev, connector, count = drm_add_edid_modes(connector, edid);
edid, MAX_EDID); if (count < 0) {
if (ret < 0) { DRM_ERROR("Add edid modes failed %d\n", count);
DRM_ERROR("failed to get edid data.\n"); goto out;
kfree(edid);
edid = NULL;
return 0;
} }
drm_mode_connector_update_edid_property(connector, edid); drm_mode_connector_update_edid_property(connector, edid);
count = drm_add_edid_modes(connector, edid);
kfree(edid);
} else { } else {
struct exynos_drm_panel_info *panel; struct exynos_drm_panel_info *panel;
struct drm_display_mode *mode = drm_mode_create(connector->dev); struct drm_display_mode *mode = drm_mode_create(connector->dev);
@ -161,6 +156,8 @@ static int exynos_drm_connector_get_modes(struct drm_connector *connector)
count = 1; count = 1;
} }
out:
kfree(edid);
return count; return count;
} }
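
The connector code above now receives a struct edid pointer (or an encoded error) from the get_edid() callback instead of copying into a fixed 256-byte buffer, which removes the earlier 256-byte cap on multi-block EDIDs. A user-space sketch of the pointer-or-error convention those callbacks rely on; the ERR_PTR()/IS_ERR()/PTR_ERR() macros below are simplified toys, not the kernel's definitions:

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Toy stand-ins for the kernel's error-pointer helpers. */
    #define ERR_PTR(err)  ((void *)(intptr_t)(err))
    #define IS_ERR(p)     ((uintptr_t)(p) >= (uintptr_t)-4095)
    #define PTR_ERR(p)    ((int)(intptr_t)(p))

    struct edid { int extensions; };

    /* Return a freshly allocated EDID, or an encoded errno on failure. */
    static struct edid *get_edid(int have_ddc)
    {
        if (!have_ddc)
            return ERR_PTR(-ENODEV);

        struct edid *e = calloc(1, sizeof(*e));
        return e ? e : ERR_PTR(-ENOMEM);
    }

    int main(void)
    {
        struct edid *edid = get_edid(1);

        if (IS_ERR(edid)) {
            fprintf(stderr, "get_edid failed: %d\n", PTR_ERR(edid));
            return 1;
        }
        printf("EDID with %d extension block(s)\n", edid->extensions);
        free(edid);
        return 0;
    }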

View file

@ -19,6 +19,7 @@
struct exynos_drm_dmabuf_attachment { struct exynos_drm_dmabuf_attachment {
struct sg_table sgt; struct sg_table sgt;
enum dma_data_direction dir; enum dma_data_direction dir;
bool is_mapped;
}; };
static int exynos_gem_attach_dma_buf(struct dma_buf *dmabuf, static int exynos_gem_attach_dma_buf(struct dma_buf *dmabuf,
@ -72,17 +73,10 @@ static struct sg_table *
DRM_DEBUG_PRIME("%s\n", __FILE__); DRM_DEBUG_PRIME("%s\n", __FILE__);
if (WARN_ON(dir == DMA_NONE))
return ERR_PTR(-EINVAL);
/* just return current sgt if already requested. */ /* just return current sgt if already requested. */
if (exynos_attach->dir == dir) if (exynos_attach->dir == dir && exynos_attach->is_mapped)
return &exynos_attach->sgt; return &exynos_attach->sgt;
/* reattaching is not allowed. */
if (WARN_ON(exynos_attach->dir != DMA_NONE))
return ERR_PTR(-EBUSY);
buf = gem_obj->buffer; buf = gem_obj->buffer;
if (!buf) { if (!buf) {
DRM_ERROR("buffer is null.\n"); DRM_ERROR("buffer is null.\n");
@ -107,13 +101,17 @@ static struct sg_table *
wr = sg_next(wr); wr = sg_next(wr);
} }
if (dir != DMA_NONE) {
nents = dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir); nents = dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir);
if (!nents) { if (!nents) {
DRM_ERROR("failed to map sgl with iommu.\n"); DRM_ERROR("failed to map sgl with iommu.\n");
sg_free_table(sgt);
sgt = ERR_PTR(-EIO); sgt = ERR_PTR(-EIO);
goto err_unlock; goto err_unlock;
} }
}
exynos_attach->is_mapped = true;
exynos_attach->dir = dir; exynos_attach->dir = dir;
attach->priv = exynos_attach; attach->priv = exynos_attach;

View file

@ -148,8 +148,8 @@ struct exynos_drm_overlay {
struct exynos_drm_display_ops { struct exynos_drm_display_ops {
enum exynos_drm_output_type type; enum exynos_drm_output_type type;
bool (*is_connected)(struct device *dev); bool (*is_connected)(struct device *dev);
int (*get_edid)(struct device *dev, struct drm_connector *connector, struct edid *(*get_edid)(struct device *dev,
u8 *edid, int len); struct drm_connector *connector);
void *(*get_panel)(struct device *dev); void *(*get_panel)(struct device *dev);
int (*check_timing)(struct device *dev, void *timing); int (*check_timing)(struct device *dev, void *timing);
int (*power_on)(struct device *dev, int mode); int (*power_on)(struct device *dev, int mode);

View file

@ -324,7 +324,7 @@ out:
g2d_userptr = NULL; g2d_userptr = NULL;
} }
dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev, static dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev,
unsigned long userptr, unsigned long userptr,
unsigned long size, unsigned long size,
struct drm_file *filp, struct drm_file *filp,

View file

@ -108,18 +108,17 @@ static bool drm_hdmi_is_connected(struct device *dev)
return false; return false;
} }
static int drm_hdmi_get_edid(struct device *dev, static struct edid *drm_hdmi_get_edid(struct device *dev,
struct drm_connector *connector, u8 *edid, int len) struct drm_connector *connector)
{ {
struct drm_hdmi_context *ctx = to_context(dev); struct drm_hdmi_context *ctx = to_context(dev);
DRM_DEBUG_KMS("%s\n", __FILE__); DRM_DEBUG_KMS("%s\n", __FILE__);
if (hdmi_ops && hdmi_ops->get_edid) if (hdmi_ops && hdmi_ops->get_edid)
return hdmi_ops->get_edid(ctx->hdmi_ctx->ctx, connector, edid, return hdmi_ops->get_edid(ctx->hdmi_ctx->ctx, connector);
len);
return 0; return NULL;
} }
static int drm_hdmi_check_timing(struct device *dev, void *timing) static int drm_hdmi_check_timing(struct device *dev, void *timing)

View file

@ -30,8 +30,8 @@ struct exynos_drm_hdmi_context {
struct exynos_hdmi_ops { struct exynos_hdmi_ops {
/* display */ /* display */
bool (*is_connected)(void *ctx); bool (*is_connected)(void *ctx);
int (*get_edid)(void *ctx, struct drm_connector *connector, struct edid *(*get_edid)(void *ctx,
u8 *edid, int len); struct drm_connector *connector);
int (*check_timing)(void *ctx, void *timing); int (*check_timing)(void *ctx, void *timing);
int (*power_on)(void *ctx, int mode); int (*power_on)(void *ctx, int mode);

View file

@ -869,7 +869,7 @@ static void ipp_put_event(struct drm_exynos_ipp_cmd_node *c_node,
} }
} }
void ipp_handle_cmd_work(struct device *dev, static void ipp_handle_cmd_work(struct device *dev,
struct exynos_drm_ippdrv *ippdrv, struct exynos_drm_ippdrv *ippdrv,
struct drm_exynos_ipp_cmd_work *cmd_work, struct drm_exynos_ipp_cmd_work *cmd_work,
struct drm_exynos_ipp_cmd_node *c_node) struct drm_exynos_ipp_cmd_node *c_node)

View file

@ -734,7 +734,7 @@ static int rotator_remove(struct platform_device *pdev)
return 0; return 0;
} }
struct rot_limit_table rot_limit_tbl = { static struct rot_limit_table rot_limit_tbl = {
.ycbcr420_2p = { .ycbcr420_2p = {
.min_w = 32, .min_w = 32,
.min_h = 32, .min_h = 32,
@ -751,7 +751,7 @@ struct rot_limit_table rot_limit_tbl = {
}, },
}; };
struct platform_device_id rotator_driver_ids[] = { static struct platform_device_id rotator_driver_ids[] = {
{ {
.name = "exynos-rot", .name = "exynos-rot",
.driver_data = (unsigned long)&rot_limit_tbl, .driver_data = (unsigned long)&rot_limit_tbl,

View file

@ -98,10 +98,12 @@ static bool vidi_display_is_connected(struct device *dev)
return ctx->connected ? true : false; return ctx->connected ? true : false;
} }
static int vidi_get_edid(struct device *dev, struct drm_connector *connector, static struct edid *vidi_get_edid(struct device *dev,
u8 *edid, int len) struct drm_connector *connector)
{ {
struct vidi_context *ctx = get_vidi_context(dev); struct vidi_context *ctx = get_vidi_context(dev);
struct edid *edid;
int edid_len;
DRM_DEBUG_KMS("%s\n", __FILE__); DRM_DEBUG_KMS("%s\n", __FILE__);
@ -111,13 +113,18 @@ static int vidi_get_edid(struct device *dev, struct drm_connector *connector,
*/ */
if (!ctx->raw_edid) { if (!ctx->raw_edid) {
DRM_DEBUG_KMS("raw_edid is null.\n"); DRM_DEBUG_KMS("raw_edid is null.\n");
return -EFAULT; return ERR_PTR(-EFAULT);
} }
memcpy(edid, ctx->raw_edid, min((1 + ctx->raw_edid->extensions) edid_len = (1 + ctx->raw_edid->extensions) * EDID_LENGTH;
* EDID_LENGTH, len)); edid = kzalloc(edid_len, GFP_KERNEL);
if (!edid) {
DRM_DEBUG_KMS("failed to allocate edid\n");
return ERR_PTR(-ENOMEM);
}
return 0; memcpy(edid, ctx->raw_edid, edid_len);
return edid;
} }
static void *vidi_get_panel(struct device *dev) static void *vidi_get_panel(struct device *dev)
@ -514,7 +521,6 @@ int vidi_connection_ioctl(struct drm_device *drm_dev, void *data,
struct exynos_drm_manager *manager; struct exynos_drm_manager *manager;
struct exynos_drm_display_ops *display_ops; struct exynos_drm_display_ops *display_ops;
struct drm_exynos_vidi_connection *vidi = data; struct drm_exynos_vidi_connection *vidi = data;
struct edid *raw_edid;
int edid_len; int edid_len;
DRM_DEBUG_KMS("%s\n", __FILE__); DRM_DEBUG_KMS("%s\n", __FILE__);
@ -551,11 +557,11 @@ int vidi_connection_ioctl(struct drm_device *drm_dev, void *data,
} }
if (vidi->connection) { if (vidi->connection) {
if (!vidi->edid) { struct edid *raw_edid = (struct edid *)(uint32_t)vidi->edid;
DRM_DEBUG_KMS("edid data is null.\n"); if (!drm_edid_is_valid(raw_edid)) {
DRM_DEBUG_KMS("edid data is invalid.\n");
return -EINVAL; return -EINVAL;
} }
raw_edid = (struct edid *)(uint32_t)vidi->edid;
edid_len = (1 + raw_edid->extensions) * EDID_LENGTH; edid_len = (1 + raw_edid->extensions) * EDID_LENGTH;
ctx->raw_edid = kzalloc(edid_len, GFP_KERNEL); ctx->raw_edid = kzalloc(edid_len, GFP_KERNEL);
if (!ctx->raw_edid) { if (!ctx->raw_edid) {

View file

@ -34,7 +34,6 @@
#include <linux/regulator/consumer.h> #include <linux/regulator/consumer.h>
#include <linux/io.h> #include <linux/io.h>
#include <linux/of_gpio.h> #include <linux/of_gpio.h>
#include <plat/gpio-cfg.h>
#include <drm/exynos_drm.h> #include <drm/exynos_drm.h>
@ -98,8 +97,7 @@ struct hdmi_context {
void __iomem *regs; void __iomem *regs;
void *parent_ctx; void *parent_ctx;
int external_irq; int irq;
int internal_irq;
struct i2c_client *ddc_port; struct i2c_client *ddc_port;
struct i2c_client *hdmiphy_port; struct i2c_client *hdmiphy_port;
@ -1391,8 +1389,7 @@ static bool hdmi_is_connected(void *ctx)
return hdata->hpd; return hdata->hpd;
} }
static int hdmi_get_edid(void *ctx, struct drm_connector *connector, static struct edid *hdmi_get_edid(void *ctx, struct drm_connector *connector)
u8 *edid, int len)
{ {
struct edid *raw_edid; struct edid *raw_edid;
struct hdmi_context *hdata = ctx; struct hdmi_context *hdata = ctx;
@ -1400,22 +1397,18 @@ static int hdmi_get_edid(void *ctx, struct drm_connector *connector,
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__); DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
if (!hdata->ddc_port) if (!hdata->ddc_port)
return -ENODEV; return ERR_PTR(-ENODEV);
raw_edid = drm_get_edid(connector, hdata->ddc_port->adapter); raw_edid = drm_get_edid(connector, hdata->ddc_port->adapter);
if (raw_edid) { if (!raw_edid)
return ERR_PTR(-ENODEV);
hdata->dvi_mode = !drm_detect_hdmi_monitor(raw_edid); hdata->dvi_mode = !drm_detect_hdmi_monitor(raw_edid);
memcpy(edid, raw_edid, min((1 + raw_edid->extensions)
* EDID_LENGTH, len));
DRM_DEBUG_KMS("%s : width[%d] x height[%d]\n", DRM_DEBUG_KMS("%s : width[%d] x height[%d]\n",
(hdata->dvi_mode ? "dvi monitor" : "hdmi monitor"), (hdata->dvi_mode ? "dvi monitor" : "hdmi monitor"),
raw_edid->width_cm, raw_edid->height_cm); raw_edid->width_cm, raw_edid->height_cm);
kfree(raw_edid);
} else {
return -ENODEV;
}
return 0; return raw_edid;
} }
static int hdmi_v13_check_timing(struct fb_videomode *check_timing) static int hdmi_v13_check_timing(struct fb_videomode *check_timing)
@ -1652,16 +1645,16 @@ static void hdmi_conf_reset(struct hdmi_context *hdata)
/* resetting HDMI core */ /* resetting HDMI core */
hdmi_reg_writemask(hdata, reg, 0, HDMI_CORE_SW_RSTOUT); hdmi_reg_writemask(hdata, reg, 0, HDMI_CORE_SW_RSTOUT);
mdelay(10); usleep_range(10000, 12000);
hdmi_reg_writemask(hdata, reg, ~0, HDMI_CORE_SW_RSTOUT); hdmi_reg_writemask(hdata, reg, ~0, HDMI_CORE_SW_RSTOUT);
mdelay(10); usleep_range(10000, 12000);
} }
static void hdmi_conf_init(struct hdmi_context *hdata) static void hdmi_conf_init(struct hdmi_context *hdata)
{ {
struct hdmi_infoframe infoframe; struct hdmi_infoframe infoframe;
/* disable HPD interrupts */ /* disable HPD interrupts from HDMI IP block, use GPIO instead */
hdmi_reg_writemask(hdata, HDMI_INTC_CON, 0, HDMI_INTC_EN_GLOBAL | hdmi_reg_writemask(hdata, HDMI_INTC_CON, 0, HDMI_INTC_EN_GLOBAL |
HDMI_INTC_EN_HPD_PLUG | HDMI_INTC_EN_HPD_UNPLUG); HDMI_INTC_EN_HPD_PLUG | HDMI_INTC_EN_HPD_UNPLUG);
@ -1779,7 +1772,7 @@ static void hdmi_v13_timing_apply(struct hdmi_context *hdata)
u32 val = hdmi_reg_read(hdata, HDMI_V13_PHY_STATUS); u32 val = hdmi_reg_read(hdata, HDMI_V13_PHY_STATUS);
if (val & HDMI_PHY_STATUS_READY) if (val & HDMI_PHY_STATUS_READY)
break; break;
mdelay(1); usleep_range(1000, 2000);
} }
/* steady state not achieved */ /* steady state not achieved */
if (tries == 0) { if (tries == 0) {
@ -1946,7 +1939,7 @@ static void hdmi_v14_timing_apply(struct hdmi_context *hdata)
u32 val = hdmi_reg_read(hdata, HDMI_PHY_STATUS_0); u32 val = hdmi_reg_read(hdata, HDMI_PHY_STATUS_0);
if (val & HDMI_PHY_STATUS_READY) if (val & HDMI_PHY_STATUS_READY)
break; break;
mdelay(1); usleep_range(1000, 2000);
} }
/* steady state not achieved */ /* steady state not achieved */
if (tries == 0) { if (tries == 0) {
@ -1998,9 +1991,9 @@ static void hdmiphy_conf_reset(struct hdmi_context *hdata)
/* reset hdmiphy */ /* reset hdmiphy */
hdmi_reg_writemask(hdata, reg, ~0, HDMI_PHY_SW_RSTOUT); hdmi_reg_writemask(hdata, reg, ~0, HDMI_PHY_SW_RSTOUT);
mdelay(10); usleep_range(10000, 12000);
hdmi_reg_writemask(hdata, reg, 0, HDMI_PHY_SW_RSTOUT); hdmi_reg_writemask(hdata, reg, 0, HDMI_PHY_SW_RSTOUT);
mdelay(10); usleep_range(10000, 12000);
} }
static void hdmiphy_poweron(struct hdmi_context *hdata) static void hdmiphy_poweron(struct hdmi_context *hdata)
@ -2048,7 +2041,7 @@ static void hdmiphy_conf_apply(struct hdmi_context *hdata)
return; return;
} }
mdelay(10); usleep_range(10000, 12000);
/* operation mode */ /* operation mode */
operation[0] = 0x1f; operation[0] = 0x1f;
@ -2170,6 +2163,13 @@ static void hdmi_commit(void *ctx)
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__); DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
mutex_lock(&hdata->hdmi_mutex);
if (!hdata->powered) {
mutex_unlock(&hdata->hdmi_mutex);
return;
}
mutex_unlock(&hdata->hdmi_mutex);
hdmi_conf_apply(hdata); hdmi_conf_apply(hdata);
} }
@ -2265,7 +2265,7 @@ static struct exynos_hdmi_ops hdmi_ops = {
.dpms = hdmi_dpms, .dpms = hdmi_dpms,
}; };
static irqreturn_t hdmi_external_irq_thread(int irq, void *arg) static irqreturn_t hdmi_irq_thread(int irq, void *arg)
{ {
struct exynos_drm_hdmi_context *ctx = arg; struct exynos_drm_hdmi_context *ctx = arg;
struct hdmi_context *hdata = ctx->ctx; struct hdmi_context *hdata = ctx->ctx;
@ -2280,31 +2280,6 @@ static irqreturn_t hdmi_external_irq_thread(int irq, void *arg)
return IRQ_HANDLED; return IRQ_HANDLED;
} }
static irqreturn_t hdmi_internal_irq_thread(int irq, void *arg)
{
struct exynos_drm_hdmi_context *ctx = arg;
struct hdmi_context *hdata = ctx->ctx;
u32 intc_flag;
intc_flag = hdmi_reg_read(hdata, HDMI_INTC_FLAG);
/* clearing flags for HPD plug/unplug */
if (intc_flag & HDMI_INTC_FLAG_HPD_UNPLUG) {
DRM_DEBUG_KMS("unplugged\n");
hdmi_reg_writemask(hdata, HDMI_INTC_FLAG, ~0,
HDMI_INTC_FLAG_HPD_UNPLUG);
}
if (intc_flag & HDMI_INTC_FLAG_HPD_PLUG) {
DRM_DEBUG_KMS("plugged\n");
hdmi_reg_writemask(hdata, HDMI_INTC_FLAG, ~0,
HDMI_INTC_FLAG_HPD_PLUG);
}
if (ctx->drm_dev)
drm_helper_hpd_irq_event(ctx->drm_dev);
return IRQ_HANDLED;
}
static int hdmi_resources_init(struct hdmi_context *hdata) static int hdmi_resources_init(struct hdmi_context *hdata)
{ {
struct device *dev = hdata->dev; struct device *dev = hdata->dev;
@ -2555,39 +2530,24 @@ static int hdmi_probe(struct platform_device *pdev)
hdata->hdmiphy_port = hdmi_hdmiphy; hdata->hdmiphy_port = hdmi_hdmiphy;
hdata->external_irq = gpio_to_irq(hdata->hpd_gpio); hdata->irq = gpio_to_irq(hdata->hpd_gpio);
if (hdata->external_irq < 0) { if (hdata->irq < 0) {
DRM_ERROR("failed to get GPIO external irq\n"); DRM_ERROR("failed to get GPIO irq\n");
ret = hdata->external_irq; ret = hdata->irq;
goto err_hdmiphy;
}
hdata->internal_irq = platform_get_irq(pdev, 0);
if (hdata->internal_irq < 0) {
DRM_ERROR("failed to get platform internal irq\n");
ret = hdata->internal_irq;
goto err_hdmiphy; goto err_hdmiphy;
} }
hdata->hpd = gpio_get_value(hdata->hpd_gpio); hdata->hpd = gpio_get_value(hdata->hpd_gpio);
ret = request_threaded_irq(hdata->external_irq, NULL, ret = request_threaded_irq(hdata->irq, NULL,
hdmi_external_irq_thread, IRQF_TRIGGER_RISING | hdmi_irq_thread, IRQF_TRIGGER_RISING |
IRQF_TRIGGER_FALLING | IRQF_ONESHOT, IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
"hdmi_external", drm_hdmi_ctx); "hdmi", drm_hdmi_ctx);
if (ret) { if (ret) {
DRM_ERROR("failed to register hdmi external interrupt\n"); DRM_ERROR("failed to register hdmi interrupt\n");
goto err_hdmiphy; goto err_hdmiphy;
} }
ret = request_threaded_irq(hdata->internal_irq, NULL,
hdmi_internal_irq_thread, IRQF_ONESHOT,
"hdmi_internal", drm_hdmi_ctx);
if (ret) {
DRM_ERROR("failed to register hdmi internal interrupt\n");
goto err_free_irq;
}
/* Attach HDMI Driver to common hdmi. */ /* Attach HDMI Driver to common hdmi. */
exynos_hdmi_drv_attach(drm_hdmi_ctx); exynos_hdmi_drv_attach(drm_hdmi_ctx);
@ -2598,8 +2558,6 @@ static int hdmi_probe(struct platform_device *pdev)
return 0; return 0;
err_free_irq:
free_irq(hdata->external_irq, drm_hdmi_ctx);
err_hdmiphy: err_hdmiphy:
i2c_del_driver(&hdmiphy_driver); i2c_del_driver(&hdmiphy_driver);
err_ddc: err_ddc:
@ -2617,8 +2575,7 @@ static int hdmi_remove(struct platform_device *pdev)
pm_runtime_disable(dev); pm_runtime_disable(dev);
free_irq(hdata->internal_irq, hdata); free_irq(hdata->irq, hdata);
free_irq(hdata->external_irq, hdata);
/* hdmiphy i2c driver */ /* hdmiphy i2c driver */
@ -2637,8 +2594,7 @@ static int hdmi_suspend(struct device *dev)
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__); DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
disable_irq(hdata->internal_irq); disable_irq(hdata->irq);
disable_irq(hdata->external_irq);
hdata->hpd = false; hdata->hpd = false;
if (ctx->drm_dev) if (ctx->drm_dev)
@ -2663,8 +2619,7 @@ static int hdmi_resume(struct device *dev)
hdata->hpd = gpio_get_value(hdata->hpd_gpio); hdata->hpd = gpio_get_value(hdata->hpd_gpio);
enable_irq(hdata->external_irq); enable_irq(hdata->irq);
enable_irq(hdata->internal_irq);
if (!pm_runtime_suspended(dev)) { if (!pm_runtime_suspended(dev)) {
DRM_DEBUG_KMS("%s : Already resumed\n", __func__); DRM_DEBUG_KMS("%s : Already resumed\n", __func__);
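
Several hunks in this file replace mdelay() busy-waits with usleep_range() sleeps inside bounded polling loops. A stand-alone sketch of the same bounded-poll-with-sleep shape; the fake phy_ready() and the retry count are invented, and nanosleep() plays the role of usleep_range() here:

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    /* Pretend hardware status; reports ready after a few polls. */
    static bool phy_ready(void)
    {
        static int calls;
        return ++calls >= 3;
    }

    int main(void)
    {
        int tries;

        for (tries = 100; tries; --tries) {
            if (phy_ready())
                break;
            /* Sleep roughly a millisecond between polls instead of spinning. */
            nanosleep(&(struct timespec){ .tv_nsec = 1000 * 1000 }, NULL);
        }

        if (!tries)
            fprintf(stderr, "PHY did not become ready\n");
        else
            printf("ready with %d tries left\n", tries);
        return 0;
    }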

View file

@@ -600,7 +600,7 @@ static void vp_win_reset(struct mixer_context *ctx)
/* waiting until VP_SRESET_PROCESSING is 0 */
if (~vp_reg_read(res, VP_SRESET) & VP_SRESET_PROCESSING)
break;
- mdelay(10);
+ usleep_range(10000, 12000);
}
WARN(tries == 0, "failed to reset Video Processor\n");
}
@@ -776,6 +776,13 @@ static void mixer_win_commit(void *ctx, int win)
DRM_DEBUG_KMS("[%d] %s, win: %d\n", __LINE__, __func__, win);
+ mutex_lock(&mixer_ctx->mixer_mutex);
+ if (!mixer_ctx->powered) {
+ mutex_unlock(&mixer_ctx->mixer_mutex);
+ return;
+ }
+ mutex_unlock(&mixer_ctx->mixer_mutex);
if (win > 1 && mixer_ctx->vp_enabled)
vp_video_buffer(mixer_ctx, win);
else


@@ -30,6 +30,7 @@
#include <linux/debugfs.h>
#include <linux/slab.h>
#include <linux/export.h>
+ #include <generated/utsrelease.h>
#include <drm/drmP.h>
#include "intel_drv.h"
#include "intel_ringbuffer.h"
@@ -690,6 +691,7 @@ static int i915_error_state(struct seq_file *m, void *unused)
seq_printf(m, "Time: %ld s %ld us\n", error->time.tv_sec,
error->time.tv_usec);
+ seq_printf(m, "Kernel: " UTS_RELEASE);
seq_printf(m, "PCI ID: 0x%04x\n", dev->pci_device);
seq_printf(m, "EIR: 0x%08x\n", error->eir);
seq_printf(m, "IER: 0x%08x\n", error->ier);


@@ -533,6 +533,7 @@
#define MI_MODE 0x0209c
# define VS_TIMER_DISPATCH (1 << 6)
# define MI_FLUSH_ENABLE (1 << 12)
+ # define ASYNC_FLIP_PERF_DISABLE (1 << 14)
#define GEN6_GT_MODE 0x20d0
#define GEN6_GT_MODE_HI (1 << 9)


@@ -505,13 +505,25 @@ static int init_render_ring(struct intel_ring_buffer *ring)
struct drm_i915_private *dev_priv = dev->dev_private;
int ret = init_ring_common(ring);
- if (INTEL_INFO(dev)->gen > 3) {
+ if (INTEL_INFO(dev)->gen > 3)
I915_WRITE(MI_MODE, _MASKED_BIT_ENABLE(VS_TIMER_DISPATCH));
+ /* We need to disable the AsyncFlip performance optimisations in order
+ * to use MI_WAIT_FOR_EVENT within the CS. It should already be
+ * programmed to '1' on all products.
+ */
+ if (INTEL_INFO(dev)->gen >= 6)
+ I915_WRITE(MI_MODE, _MASKED_BIT_ENABLE(ASYNC_FLIP_PERF_DISABLE));
+ /* Required for the hardware to program scanline values for waiting */
+ if (INTEL_INFO(dev)->gen == 6)
+ I915_WRITE(GFX_MODE,
+ _MASKED_BIT_ENABLE(GFX_TLB_INVALIDATE_ALWAYS));
if (IS_GEN7(dev))
I915_WRITE(GFX_MODE_GEN7,
_MASKED_BIT_DISABLE(GFX_TLB_INVALIDATE_ALWAYS) |
_MASKED_BIT_ENABLE(GFX_REPLAY_MODE));
- }
if (INTEL_INFO(dev)->gen >= 5) {
ret = init_pipe_control(ring);


@@ -1313,14 +1313,18 @@ void evergreen_mc_stop(struct radeon_device *rdev, struct evergreen_mc_save *sav
if (!(tmp & EVERGREEN_CRTC_BLANK_DATA_EN)) {
radeon_wait_for_vblank(rdev, i);
tmp |= EVERGREEN_CRTC_BLANK_DATA_EN;
+ WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1);
WREG32(EVERGREEN_CRTC_BLANK_CONTROL + crtc_offsets[i], tmp);
+ WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 0);
}
} else {
tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]);
if (!(tmp & EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE)) {
radeon_wait_for_vblank(rdev, i);
tmp |= EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE;
+ WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1);
WREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i], tmp);
+ WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 0);
}
}
/* wait for the next frame */
@@ -1345,6 +1349,8 @@ void evergreen_mc_stop(struct radeon_device *rdev, struct evergreen_mc_save *sav
blackout &= ~BLACKOUT_MODE_MASK;
WREG32(MC_SHARED_BLACKOUT_CNTL, blackout | 1);
}
+ /* wait for the MC to settle */
+ udelay(100);
}
void evergreen_mc_resume(struct radeon_device *rdev, struct evergreen_mc_save *save)
@@ -1378,11 +1384,15 @@ void evergreen_mc_resume(struct radeon_device *rdev, struct evergreen_mc_save *s
if (ASIC_IS_DCE6(rdev)) {
tmp = RREG32(EVERGREEN_CRTC_BLANK_CONTROL + crtc_offsets[i]);
tmp |= EVERGREEN_CRTC_BLANK_DATA_EN;
+ WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1);
WREG32(EVERGREEN_CRTC_BLANK_CONTROL + crtc_offsets[i], tmp);
+ WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 0);
} else {
tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]);
tmp &= ~EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE;
+ WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1);
WREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i], tmp);
+ WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 0);
}
/* wait for the next frame */
frame_count = radeon_get_vblank_counter(rdev, i);
@@ -2036,9 +2046,20 @@ static void evergreen_gpu_init(struct radeon_device *rdev)
WREG32(HDP_ADDR_CONFIG, gb_addr_config);
WREG32(DMA_TILING_CONFIG, gb_addr_config);
+ if ((rdev->config.evergreen.max_backends == 1) &&
+ (rdev->flags & RADEON_IS_IGP)) {
+ if ((disabled_rb_mask & 3) == 1) {
+ /* RB0 disabled, RB1 enabled */
+ tmp = 0x11111111;
+ } else {
+ /* RB1 disabled, RB0 enabled */
+ tmp = 0x00000000;
+ }
+ } else {
tmp = gb_addr_config & NUM_PIPES_MASK;
tmp = r6xx_remap_render_backend(rdev, tmp, rdev->config.evergreen.max_backends,
EVERGREEN_MAX_BACKENDS, disabled_rb_mask);
+ }
WREG32(GB_BACKEND_MAP, tmp);
WREG32(CGTS_SYS_TCC_DISABLE, 0);

Some files were not shown because too many files have changed in this diff.