From d001bd8483c805c45a42d9bd0468a96722e72875 Mon Sep 17 00:00:00 2001
From: Grissiom <[email protected]>
Date: Thu, 1 Aug 2013 14:59:56 +0800
Subject: [PATCH 1/2] RTT-VMM: implement dual system running on realview-pb-a8

Signed-off-by: Grissiom <[email protected]>
Signed-off-by: Bernard.Xiong <[email protected]>
---
 arch/arm/Kconfig                   |   1 +
 arch/arm/Makefile                  |   1 +
 arch/arm/common/gic.c              |  67 +++++++++++++-
 arch/arm/include/asm/assembler.h   |   8 +-
 arch/arm/include/asm/domain.h      |   7 ++
 arch/arm/include/asm/irqflags.h    |  84 ++++++++++++-----
 arch/arm/include/asm/mach/map.h    |   5 +
 arch/arm/include/vmm/vmm.h         |  35 +++++++
 arch/arm/include/vmm/vmm_config.h  |   7 ++
 arch/arm/kernel/entry-armv.S       |  30 +++++-
 arch/arm/kernel/entry-common.S     |   3 +
 arch/arm/kernel/entry-header.S     |  15 ++-
 arch/arm/mach-omap2/irq.c          |  12 +++
 arch/arm/mm/fault.c                |   9 ++
 arch/arm/mm/init.c                 |   8 ++
 arch/arm/mm/mmu.c                  |  44 +++++++++
 arch/arm/vmm/Kconfig               |  49 ++++++++++
 arch/arm/vmm/Makefile              |  10 ++
 arch/arm/vmm/README                |   1 +
 arch/arm/vmm/am33xx/intc.h         |  13 +++
 arch/arm/vmm/am33xx/softirq.c      |  14 +++
 arch/arm/vmm/am33xx/virq.c         |  48 ++++++++++
 arch/arm/vmm/realview_a8/softirq.c |  12 +++
 arch/arm/vmm/vmm.c                 |  32 +++++++
 arch/arm/vmm/vmm_traps.c           |  37 ++++++++
 arch/arm/vmm/vmm_virhw.h           |  59 ++++++++++++
 arch/arm/vmm/vmm_virq.c            | 183 +++++++++++++++++++++++++++++++++++++
 27 files changed, 767 insertions(+), 27 deletions(-)
 create mode 100644 arch/arm/include/vmm/vmm.h
 create mode 100644 arch/arm/include/vmm/vmm_config.h
 create mode 100644 arch/arm/vmm/Kconfig
 create mode 100644 arch/arm/vmm/Makefile
 create mode 100644 arch/arm/vmm/README
 create mode 100644 arch/arm/vmm/am33xx/intc.h
 create mode 100644 arch/arm/vmm/am33xx/softirq.c
 create mode 100644 arch/arm/vmm/am33xx/virq.c
 create mode 100644 arch/arm/vmm/realview_a8/softirq.c
 create mode 100644 arch/arm/vmm/vmm.c
 create mode 100644 arch/arm/vmm/vmm_traps.c
 create mode 100644 arch/arm/vmm/vmm_virhw.h
 create mode 100644 arch/arm/vmm/vmm_virq.c

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 67874b8..eb82cd6 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1164,6 +1164,7 @@ config ARM_TIMER_SP804
 	select HAVE_SCHED_CLOCK
 
 source arch/arm/mm/Kconfig
+source arch/arm/vmm/Kconfig
 
 config ARM_NR_BANKS
 	int
diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 30c443c..262c8e2 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -252,6 +252,7 @@ core-$(CONFIG_FPE_NWFPE)	+= arch/arm/nwfpe/
 core-$(CONFIG_FPE_FASTFPE)	+= $(FASTFPE_OBJ)
 core-$(CONFIG_VFP)		+= arch/arm/vfp/
 core-$(CONFIG_XEN)		+= arch/arm/xen/
+core-$(CONFIG_ARM_VMM)		+= arch/arm/vmm/
 
 # If we have a machine-specific directory, then include it in the build.
 core-y				+= arch/arm/kernel/ arch/arm/mm/ arch/arm/common/
diff --git a/arch/arm/common/gic.c b/arch/arm/common/gic.c
index 87dfa90..a9d7357 100644
--- a/arch/arm/common/gic.c
+++ b/arch/arm/common/gic.c
@@ -45,6 +45,11 @@
 #include <asm/mach/irq.h>
 #include <asm/hardware/gic.h>
 
+#ifdef CONFIG_ARM_VMM
+#include <vmm/vmm.h>
+#include "../vmm/vmm_virhw.h"
+#endif
+
 union gic_base {
 	void __iomem *common_base;
 	void __percpu __iomem **percpu_base;
@@ -276,12 +281,72 @@ static int gic_set_wake(struct irq_data *d, unsigned int on)
 #define gic_set_wake	NULL
 #endif
 
+#ifdef CONFIG_ARM_VMM
+void vmm_irq_handle(struct gic_chip_data *gic, struct pt_regs *regs)
+{
+	unsigned long flags;
+	struct vmm_context* _vmm_context;
+
+	_vmm_context = vmm_context_get();
+
+	while (_vmm_context->virq_pended) {
+		int index;
+
+		flags = vmm_irq_save();
+		_vmm_context->virq_pended = 0;
+		vmm_irq_restore(flags);
+
+		/* get the pending interrupt */
+		for (index = 0; index < IRQS_NR_32; index++) {
+			int pdbit;
+
+			for (pdbit = __builtin_ffs(_vmm_context->virq_pending[index]);
+			     pdbit != 0;
+			     pdbit = __builtin_ffs(_vmm_context->virq_pending[index])) {
+				unsigned long inner_flag;
+				int irqnr, oirqnr;
+
+				pdbit--;
+
+				inner_flag = vmm_irq_save();
+				_vmm_context->virq_pending[index] &= ~(1 << pdbit);
+				vmm_irq_restore(inner_flag);
+
+				oirqnr = pdbit + index * 32;
+				if (likely(oirqnr > 15 && oirqnr < 1021)) {
+					irqnr = irq_find_mapping(gic->domain, oirqnr);
+					handle_IRQ(irqnr, regs);
+				} else if (oirqnr < 16) {
+					/* soft IRQs are EOIed by the host. */
+#ifdef CONFIG_SMP
+					handle_IPI(oirqnr, regs);
+#endif
+				}
+				/* unmask interrupt */
+				/* FIXME: maybe we don't need this */
+				writel_relaxed(1 << (oirqnr % 32),
+					       gic_data_dist_base(gic)
+					       + GIC_DIST_ENABLE_SET
+					       + (oirqnr / 32) * 4);
+
+			}
+		}
+	}
+}
+#endif
+
 asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
 {
 	u32 irqstat, irqnr;
 	struct gic_chip_data *gic = &gic_data[0];
 	void __iomem *cpu_base = gic_data_cpu_base(gic);
 
+#ifdef CONFIG_ARM_VMM
+	if (vmm_get_status()) {
+		vmm_irq_handle(gic, regs);
+		return;
+	}
+#endif
 	do {
 		irqstat = readl_relaxed(cpu_base + GIC_CPU_INTACK);
 		irqnr = irqstat & ~0x1c00;
@@ -777,7 +842,7 @@ void __cpuinit gic_secondary_init(unsigned int gic_nr)
 	gic_cpu_init(&gic_data[gic_nr]);
 }
 
-#ifdef CONFIG_SMP
+#if defined(CONFIG_SMP) || defined(CONFIG_ARM_VMM)
 void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
 {
 	int cpu;
diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index eb87200..b646fa7 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -82,11 +82,15 @@
  */
 #if __LINUX_ARM_ARCH__ >= 6
 	.macro	disable_irq_notrace
-	cpsid	i
+	stmdb	sp!, {r0-r3, ip, lr}
+	bl	irq_disable_asm
+	ldmia	sp!, {r0-r3, ip, lr}
 	.endm
 
 	.macro	enable_irq_notrace
-	cpsie	i
+	stmdb	sp!, {r0-r3, ip, lr}
+	bl	irq_enable_asm
+	ldmia	sp!, {r0-r3, ip, lr}
 	.endm
 #else
 	.macro	disable_irq_notrace
diff --git a/arch/arm/include/asm/domain.h b/arch/arm/include/asm/domain.h
index 6ddbe44..bbc4470 100644
--- a/arch/arm/include/asm/domain.h
+++ b/arch/arm/include/asm/domain.h
@@ -44,6 +44,13 @@
 #define DOMAIN_IO	0
 #endif
 
+#ifdef CONFIG_ARM_VMM
+/* RT-Thread VMM memory space */
+#define DOMAIN_RTVMM	3
+/* memory shared between the VMM and Linux */
+#define DOMAIN_RTVMM_SHR	4
+#endif
+
 /*
  * Domain types
  */
diff --git a/arch/arm/include/asm/irqflags.h b/arch/arm/include/asm/irqflags.h
index 1e6cca5..bfaedff 100644
--- a/arch/arm/include/asm/irqflags.h
+++ b/arch/arm/include/asm/irqflags.h
@@ -9,34 +9,56 @@
 * CPU interrupt mask handling.
 */
 #if __LINUX_ARM_ARCH__ >= 6
+#include <vmm/vmm.h> /* VMM only supports ARMv7 right now */
 
 static inline unsigned long arch_local_irq_save(void)
 {
 	unsigned long flags;
 
-	asm volatile(
-		"	mrs	%0, cpsr	@ arch_local_irq_save\n"
-		"	cpsid	i"
-		: "=r" (flags) : : "memory", "cc");
+	if (vmm_status)
+	{
+		flags = vmm_save_virq();
+	}
+	else
+	{
+		asm volatile(
+			"	mrs	%0, cpsr	@ arch_local_irq_save\n"
+			"	cpsid	i"
+			: "=r" (flags) : : "memory", "cc");
+	}
 	return flags;
 }
 
 static inline void arch_local_irq_enable(void)
 {
-	asm volatile(
-		"	cpsie i	@ arch_local_irq_enable"
-		:
-		:
-		: "memory", "cc");
+	if (vmm_status)
+	{
+		vmm_enable_virq();
+	}
+	else
+	{
+		asm volatile(
+			"	cpsie i	@ arch_local_irq_enable"
+			:
+			:
+			: "memory", "cc");
+	}
 }
 
 static inline void arch_local_irq_disable(void)
 {
-	asm volatile(
-		"	cpsid i	@ arch_local_irq_disable"
-		:
-		:
-		: "memory", "cc");
+	if (vmm_status)
+	{
+		vmm_disable_virq();
+	}
+	else
+	{
+		asm volatile(
+			"	cpsid i	@ arch_local_irq_disable"
+			:
+			:
+			: "memory", "cc");
+	}
 }
 
 #define local_fiq_enable()  __asm__("cpsie f	@ __stf" : : : "memory", "cc")
@@ -128,9 +150,17 @@ static inline void arch_local_irq_disable(void)
 static inline unsigned long arch_local_save_flags(void)
 {
 	unsigned long flags;
-	asm volatile(
-		"	mrs	%0, cpsr	@ local_save_flags"
-		: "=r" (flags) : : "memory", "cc");
+
+	if (vmm_status)
+	{
+		flags = vmm_return_virq();
+	}
+	else
+	{
+		asm volatile(
+			"	mrs	%0, cpsr	@ local_save_flags"
+			: "=r" (flags) : : "memory", "cc");
+	}
 	return flags;
 }
 
@@ -139,15 +169,25 @@ static inline unsigned long arch_local_save_flags(void)
 */
 static inline void arch_local_irq_restore(unsigned long flags)
 {
-	asm volatile(
-		"	msr	cpsr_c, %0	@ local_irq_restore"
-		:
-		: "r" (flags)
-		: "memory", "cc");
+	if (vmm_status)
+	{
+		vmm_restore_virq(flags);
+	}
+	else
+	{
+		asm volatile(
+			"	msr	cpsr_c, %0	@ local_irq_restore"
+			:
+			: "r" (flags)
+			: "memory", "cc");
+	}
 }
 
 static inline int arch_irqs_disabled_flags(unsigned long flags)
 {
+	if (vmm_status)
+		return (flags == 0x01);
+
 	return flags & PSR_I_BIT;
 }
 
diff --git a/arch/arm/include/asm/mach/map.h b/arch/arm/include/asm/mach/map.h
index 2fe141f..502b341 100644
--- a/arch/arm/include/asm/mach/map.h
+++ b/arch/arm/include/asm/mach/map.h
@@ -35,6 +35,11 @@ struct map_desc {
 #define MT_MEMORY_SO		14
 #define MT_MEMORY_DMA_READY	15
 
+#ifdef CONFIG_ARM_VMM
+#define MT_RTVMM		16
+#define MT_RTVMM_SHARE		17
+#endif
+
 #ifdef CONFIG_MMU
 extern void iotable_init(struct map_desc *, int);
 extern void vm_reserve_area_early(unsigned long addr, unsigned long size,
diff --git a/arch/arm/include/vmm/vmm.h b/arch/arm/include/vmm/vmm.h
new file mode 100644
index 0000000..3ff3f31
--- /dev/null
+++ b/arch/arm/include/vmm/vmm.h
@@ -0,0 +1,35 @@
+#ifndef __LINUX_VMM_H__
+#define __LINUX_VMM_H__
+
+#include <linux/compiler.h>
+
+#include "vmm_config.h"
+
+struct irq_domain;
+struct pt_regs;
+
+extern int vmm_status;
+extern struct vmm_context *_vmm_context;
+
+/* VMM context routines */
+void vmm_context_init(void* context);
+struct vmm_context* vmm_context_get(void);
+
+void vmm_set_status(int status);
+int vmm_get_status(void);
+
+void vmm_mem_init(void);
+void vmm_raise_softirq(int irq);
+
+/* VMM vIRQ routines */
+unsigned long vmm_save_virq(void);
+unsigned long vmm_return_virq(void);
+
+void vmm_restore_virq(unsigned long flags);
+void vmm_enable_virq(void);
+void vmm_disable_virq(void);
+void vmm_enter_hw_noirq(void);
+
+void vmm_raise_softirq(int irq);
+
+#endif
diff --git a/arch/arm/include/vmm/vmm_config.h b/arch/arm/include/vmm/vmm_config.h
new file mode 100644
index 0000000..cce5e8a
--- /dev/null
+++ b/arch/arm/include/vmm/vmm_config.h
@@ -0,0 +1,7 @@
+#ifndef __LINUX_VMM_CONFIG_H__
+#define __LINUX_VMM_CONFIG_H__
+
+#define HOST_VMM_ADDR_END	CONFIG_HOST_VMM_ADDR_END
+#define HOST_VMM_ADDR_BEGIN	(CONFIG_HOST_VMM_ADDR_END - CONFIG_HOST_VMM_SIZE)
+
+#endif
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 0f82098..80f1681 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -182,6 +182,15 @@ ENDPROC(__und_invalid)
 	@
 	stmia	r7, {r2 - r6}
 
+	stmdb	sp!, {r0-r3, ip, lr}
+	mov	r0, r5
+	add	r1, sp, #4*6
+	bl	vmm_save_virq_spsr_asm
+	mov	r5, r0
+	bl	vmm_switch_nohwirq_to_novirq
+	ldmia	sp!, {r0-r3, ip, lr}
+	str	r5, [sp, #S_PSR]	@ fix the pushed SPSR
+
 #ifdef CONFIG_TRACE_IRQFLAGS
 	bl	trace_hardirqs_off
 #endif
@@ -208,6 +217,23 @@ __dabt_svc:
  UNWIND(.fnend		)
 ENDPROC(__dabt_svc)
 
+	.macro	svc_exit_irq, rpsr
+	cpsid	i
+	msr	spsr_cxsf, \rpsr
+	mov	r0, \rpsr
+	bl	vmm_on_svc_exit_irq
+#if defined(CONFIG_CPU_V6)
+	ldr	r0, [sp]
+	strex	r1, r2, [sp]			@ clear the exclusive monitor
+	ldmib	sp, {r1 - pc}^			@ load r1 - pc, cpsr
+#elif defined(CONFIG_CPU_32v6K)
+	clrex					@ clear the exclusive monitor
+	ldmia	sp, {r0 - pc}^			@ load r0 - pc, cpsr
+#else
+	ldmia	sp, {r0 - pc}^			@ load r0 - pc, cpsr
+#endif
+	.endm
+
 	.align	5
__irq_svc:
 	svc_entry
@@ -228,7 +254,7 @@ __irq_svc:
 	@ the first place, so there's no point checking the PSR I bit.
 	bl	trace_hardirqs_on
 #endif
-	svc_exit r5				@ return from exception
+	svc_exit_irq r5				@ return from exception
  UNWIND(.fnend		)
 ENDPROC(__irq_svc)
 
@@ -393,6 +419,8 @@ ENDPROC(__pabt_svc)
 	@
 	zero_fp
 
+	bl	vmm_switch_nohwirq_to_novirq
+
 #ifdef CONFIG_IRQSOFF_TRACER
 	bl	trace_hardirqs_off
 #endif
diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
index a6c301e..325a26e 100644
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -349,6 +349,9 @@ ENTRY(vector_swi)
 	str	lr, [sp, #S_PC]			@ Save calling PC
 	str	r8, [sp, #S_PSR]		@ Save CPSR
 	str	r0, [sp, #S_OLD_R0]		@ Save OLD_R0
+	stmdb	sp!, {r0-r3, ip, lr}
+	bl	vmm_switch_nohwirq_to_novirq
+	ldmia	sp!, {r0-r3, ip, lr}
 	zero_fp
 
 /*
diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S
index 9a8531e..9e438dc 100644
--- a/arch/arm/kernel/entry-header.S
+++ b/arch/arm/kernel/entry-header.S
@@ -75,7 +75,11 @@
 
 #ifndef CONFIG_THUMB2_KERNEL
 	.macro	svc_exit, rpsr
-	msr	spsr_cxsf, \rpsr
+	cpsid	i
+	mov	r0, \rpsr
+	bl	vmm_restore_virq_asm	@ restore the IRQ to emulate
+					@ the behavior of ldmia {}^
+	msr	spsr_cxsf, r0
 #if defined(CONFIG_CPU_V6)
 	ldr	r0, [sp]
 	strex	r1, r2, [sp]			@ clear the exclusive monitor
@@ -90,6 +94,10 @@
 
 	.macro	restore_user_regs, fast = 0, offset = 0
 	ldr	r1, [sp, #\offset + S_PSR]	@ get calling cpsr
+	@ Protect the SPSR *and* the stack: registers are pushed onto this
+	@ stack, so while sp does not point at the bottom of the stack, IRQs
+	@ must stay disabled.
+	cpsid	i
 	ldr	lr, [sp, #\offset + S_PC]!	@ get pc
 	msr	spsr_cxsf, r1			@ save in spsr_svc
#if defined(CONFIG_CPU_V6)
@@ -105,6 +113,11 @@
 	mov	r0, r0				@ ARMv5T and earlier require a nop
 						@ after ldm {}^
 	add	sp, sp, #S_FRAME_SIZE - S_PC
+	@ TODO: in some conditions the call to vmm_on_ret_to_usr is useless.
+	stmdb	sp!, {r0-r3, ip, lr}
+	mrs	r0, spsr		@ debug code
+	bl	vmm_on_ret_to_usr
+	ldmia	sp!, {r0-r3, ip, lr}
 	movs	pc, lr				@ return & move spsr_svc into cpsr
 	.endm
 
diff --git a/arch/arm/mach-omap2/irq.c b/arch/arm/mach-omap2/irq.c
index 3926f37..252577f 100644
--- a/arch/arm/mach-omap2/irq.c
+++ b/arch/arm/mach-omap2/irq.c
@@ -23,6 +23,10 @@
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
 
+#ifdef CONFIG_ARM_VMM
+#include <vmm/vmm.h>
+#endif
+
 #include "soc.h"
 #include "iomap.h"
 #include "common.h"
@@ -223,6 +227,14 @@ static inline void omap_intc_handle_irq(void __iomem *base_addr, struct pt_regs
 {
 	u32 irqnr;
 
+#ifdef CONFIG_ARM_VMM
+	if (vmm_get_status())
+	{
+		vmm_irq_handle(base_addr, domain, regs);
+		return;
+	}
+#endif
+
 	do {
 		irqnr = readl_relaxed(base_addr + 0x98);
 		if (irqnr)
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 5dbf13f..e76ba74 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -255,6 +255,10 @@ out:
 	return fault;
 }
 
+#ifdef CONFIG_ARM_VMM
+#include <vmm/vmm.h>
+#endif
+
 static int __kprobes
 do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 {
@@ -268,6 +272,11 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	if (notify_page_fault(regs, fsr))
 		return 0;
 
+#ifdef CONFIG_ARM_VMMX
+	WARN(HOST_VMM_ADDR_BEGIN < regs->ARM_pc &&
+	     regs->ARM_pc < HOST_VMM_ADDR_END, "page fault in VMM region\n");
+#endif
+
 	tsk = current;
 	mm = tsk->mm;
 
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index ad722f1..ebb4e7f 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -34,6 +34,10 @@
 #include <asm/mach/arch.h>
 #include <asm/mach/map.h>
 
+#ifdef CONFIG_ARM_VMM
+#include <vmm/vmm.h>
+#endif
+
 #include "mm.h"
 
 static unsigned long phys_initrd_start __initdata = 0;
@@ -338,6 +342,10 @@ void __init arm_memblock_init(struct meminfo *mi, struct machine_desc *mdesc)
 	for (i = 0; i < mi->nr_banks; i++)
 		memblock_add(mi->bank[i].start, mi->bank[i].size);
 
+#ifdef CONFIG_ARM_VMM
+	memblock_reserve(__pa(HOST_VMM_ADDR_BEGIN), HOST_VMM_ADDR_END - HOST_VMM_ADDR_BEGIN);
+#endif
+
 	/* Register the kernel text, kernel data and initrd with memblock. */
#ifdef CONFIG_XIP_KERNEL
 	memblock_reserve(__pa(_sdata), _end - _sdata);
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index ce328c7..7e7d0ca 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -294,6 +294,20 @@ static struct mem_type mem_types[] = {
 		.prot_l1   = PMD_TYPE_TABLE,
 		.domain    = DOMAIN_KERNEL,
 	},
+#ifdef CONFIG_ARM_VMM
+	[MT_RTVMM] = {
+		.prot_pte  = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY,
+		.prot_l1   = PMD_TYPE_TABLE,
+		.prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE,
+		.domain    = DOMAIN_RTVMM,
+	},
+	[MT_RTVMM_SHARE] = {
+		.prot_pte  = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY,
+		.prot_l1   = PMD_TYPE_TABLE,
+		.prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE,
+		.domain    = DOMAIN_RTVMM_SHR,
+	},
+#endif
 };
 
 const struct mem_type *get_mem_type(unsigned int type)
@@ -450,6 +464,9 @@ static void __init build_mem_type_table(void)
 			mem_types[MT_DEVICE_CACHED].prot_pte |= L_PTE_SHARED;
 			mem_types[MT_MEMORY].prot_sect |= PMD_SECT_S;
 			mem_types[MT_MEMORY].prot_pte |= L_PTE_SHARED;
+#ifdef CONFIG_ARM_VMM
+			/* FIXME */
+#endif
 			mem_types[MT_MEMORY_DMA_READY].prot_pte |= L_PTE_SHARED;
 			mem_types[MT_MEMORY_NONCACHED].prot_sect |= PMD_SECT_S;
 			mem_types[MT_MEMORY_NONCACHED].prot_pte |= L_PTE_SHARED;
@@ -503,6 +520,12 @@ static void __init build_mem_type_table(void)
 	mem_types[MT_HIGH_VECTORS].prot_l1 |= ecc_mask;
 	mem_types[MT_MEMORY].prot_sect |= ecc_mask | cp->pmd;
 	mem_types[MT_MEMORY].prot_pte |= kern_pgprot;
+#ifdef CONFIG_ARM_VMM
+	mem_types[MT_RTVMM].prot_sect |= ecc_mask | cp->pmd;
+	mem_types[MT_RTVMM].prot_pte |= kern_pgprot;
+	mem_types[MT_RTVMM_SHARE].prot_sect |= ecc_mask | cp->pmd;
+	mem_types[MT_RTVMM_SHARE].prot_pte |= kern_pgprot;
+#endif
 	mem_types[MT_MEMORY_DMA_READY].prot_pte |= kern_pgprot;
 	mem_types[MT_MEMORY_NONCACHED].prot_sect |= ecc_mask;
 	mem_types[MT_ROM].prot_sect |= cp->pmd;
@@ -1152,6 +1175,27 @@ static void __init devicemaps_init(struct machine_desc *mdesc)
 #endif
 
 	/*
+	 * Create mappings for RT-Thread VMM and its shared memory with Linux
+	 */
+#ifdef CONFIG_ARM_VMM
+	/* the TEXCB attribute is not right yet */
+	/* shared memory region comes first */
+	map.pfn = __phys_to_pfn(virt_to_phys((void*)HOST_VMM_ADDR_BEGIN));
+	map.virtual = HOST_VMM_ADDR_BEGIN;
+	map.length = CONFIG_RTVMM_SHARED_SIZE;
+	map.type = MT_RTVMM_SHARE;
+	create_mapping(&map);
+
+	/* vmm private region comes next */
+	map.pfn = __phys_to_pfn(virt_to_phys((void*)HOST_VMM_ADDR_BEGIN
+					     + CONFIG_RTVMM_SHARED_SIZE));
+	map.virtual = HOST_VMM_ADDR_BEGIN + CONFIG_RTVMM_SHARED_SIZE;
+	map.length = CONFIG_HOST_VMM_SIZE - CONFIG_RTVMM_SHARED_SIZE;
+	map.type = MT_RTVMM;
+	create_mapping(&map);
+#endif
+
+	/*
 	 * Create a mapping for the machine vectors at the high-vectors
 	 * location (0xffff0000).  If we aren't using high-vectors, also
 	 * create a mapping at the low-vectors virtual address.
diff --git a/arch/arm/vmm/Kconfig b/arch/arm/vmm/Kconfig
new file mode 100644
index 0000000..d852056
--- /dev/null
+++ b/arch/arm/vmm/Kconfig
@@ -0,0 +1,49 @@
+menu "RT-Thread VMM Features"
+
+# ARM-VMM
+config ARM_VMM
+	bool "Support RT-Thread VMM on ARM Cortex-A8"
+	depends on MACH_REALVIEW_PBA8
+	help
+	  RT-Thread VMM implementation on ARM Cortex-A8
+
+	  Say Y if you want support for the RT-Thread VMM.
+	  Otherwise, say N.
+
+if SOC_AM33XX
+config HOST_VMM_ADDR_END
+	hex "End address of VMM"
+	depends on ARM_VMM
+	default 0xE0000000
+	help
+	  The end address of VMM space. Normally, it's the
+	  end address of DDR memory.
+endif
+
+if MACH_REALVIEW_PBA8
+config HOST_VMM_ADDR_END
+	hex "End address of VMM"
+	depends on ARM_VMM
+	default 0xE0000000
+	help
+	  The end address of VMM space. Normally, it's the
+	  end address of DDR memory.
+endif
+
+config HOST_VMM_SIZE
+	hex "Size of VMM space"
+	depends on ARM_VMM
+	default 0x400000
+	help
+	  The size of VMM space.
+
+config RTVMM_SHARED_SIZE
+	hex "Size of shared memory space between rt-vmm and Linux"
+	depends on ARM_VMM
+	default 0x100000
+	help
+	  The size of shared memory space between rt-vmm and Linux. This shared
+	  space is within the total size of HOST_VMM_SIZE, so it should
+	  be smaller than HOST_VMM_SIZE.
+
+endmenu
diff --git a/arch/arm/vmm/Makefile b/arch/arm/vmm/Makefile
new file mode 100644
index 0000000..127e43a
--- /dev/null
+++ b/arch/arm/vmm/Makefile
@@ -0,0 +1,10 @@
+#
+# Makefile for the Linux arm-vmm
+#
+
+obj-$(CONFIG_ARM_VMM) += vmm.o vmm_traps.o vmm_virq.o
+
+ifeq ($(CONFIG_ARM_VMM),y)
+obj-$(CONFIG_SOC_AM33XX) += am33xx/softirq.o am33xx/virq.o
+obj-$(CONFIG_MACH_REALVIEW_PBA8) += realview_a8/softirq.o
+endif
diff --git a/arch/arm/vmm/README b/arch/arm/vmm/README
new file mode 100644
index 0000000..24f1b42
--- /dev/null
+++ b/arch/arm/vmm/README
@@ -0,0 +1 @@
+Linux VMM kernel routines
diff --git a/arch/arm/vmm/am33xx/intc.h b/arch/arm/vmm/am33xx/intc.h
new file mode 100644
index 0000000..6c24f8d
--- /dev/null
+++ b/arch/arm/vmm/am33xx/intc.h
@@ -0,0 +1,13 @@
+#ifndef __INTC_H__
+#define __INTC_H__
+
+#define OMAP34XX_IC_BASE	0x48200000
+
+#define INTC_SIR_SET0		0x0090
+#define INTC_MIR_CLEAR0		0x0088
+
+#define OMAP2_L4_IO_OFFSET	0xb2000000
+#define OMAP2_L4_IO_ADDRESS(pa)	IOMEM((pa) + OMAP2_L4_IO_OFFSET)	/* L4 */
+#define OMAP3_IRQ_BASE		OMAP2_L4_IO_ADDRESS(OMAP34XX_IC_BASE)
+
+#endif
diff --git a/arch/arm/vmm/am33xx/softirq.c b/arch/arm/vmm/am33xx/softirq.c
new file mode 100644
index 0000000..5648496
--- /dev/null
+++ b/arch/arm/vmm/am33xx/softirq.c
@@ -0,0 +1,14 @@
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <asm/io.h>
+
+#include <vmm/vmm.h>
+#include "../vmm_virhw.h"
+#include "intc.h"
+
+void vmm_raise_softirq(int irq)
+{
+	writel_relaxed(1 << (irq % 32),
+		       OMAP3_IRQ_BASE + INTC_SIR_SET0 + (irq / 32) * 4);
+}
+EXPORT_SYMBOL(vmm_raise_softirq);
diff --git a/arch/arm/vmm/am33xx/virq.c b/arch/arm/vmm/am33xx/virq.c
new file mode 100644
index 0000000..4ef7671
--- /dev/null
+++ b/arch/arm/vmm/am33xx/virq.c
@@ -0,0 +1,48 @@
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/irqdomain.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#include <vmm/vmm.h>
+#include "../vmm_virhw.h"
+#include "intc.h"
+
+void vmm_irq_handle(void __iomem *base_addr, struct irq_domain *domain,
+		    struct pt_regs *regs)
+{
+	unsigned long flags;
+	struct vmm_context* _vmm_context;
+
+	_vmm_context = vmm_context_get();
+
+	while (_vmm_context->virq_pended) {
+		int index;
+
+		flags = vmm_irq_save();
+		_vmm_context->virq_pended = 0;
+		vmm_irq_restore(flags);
+
+		/* get the pending interrupt */
+		for (index = 0; index < IRQS_NR_32; index++) {
+			int pdbit;
+
+			for (pdbit = __builtin_ffs(_vmm_context->virq_pending[index]);
+			     pdbit != 0;
+			     pdbit = __builtin_ffs(_vmm_context->virq_pending[index])) {
+				unsigned long inner_flag;
+				int irqnr;
+
+				pdbit--;
+
+				inner_flag = vmm_irq_save();
+				_vmm_context->virq_pending[index] &= ~(1 << pdbit);
+				vmm_irq_restore(inner_flag);
+
+				irqnr = irq_find_mapping(domain, pdbit + index * 32);
+				handle_IRQ(irqnr, regs);
+			}
+		}
+	}
+}
diff --git a/arch/arm/vmm/realview_a8/softirq.c b/arch/arm/vmm/realview_a8/softirq.c
new file mode 100644
index 0000000..a52b79c7
--- /dev/null
+++ b/arch/arm/vmm/realview_a8/softirq.c
@@ -0,0 +1,12 @@
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <asm/io.h>
+#include <asm/hardware/gic.h>
+
+#include <vmm/vmm.h>
+
+void vmm_raise_softirq(int irq)
+{
+	gic_raise_softirq(cpumask_of(0), irq);
+}
+EXPORT_SYMBOL(vmm_raise_softirq);
diff --git a/arch/arm/vmm/vmm.c b/arch/arm/vmm/vmm.c
new file mode 100644
index 0000000..3b1d202
--- /dev/null
+++ b/arch/arm/vmm/vmm.c
@@ -0,0 +1,32 @@
+#include <linux/kernel.h>
+#include <linux/module.h>
+
+#include <vmm/vmm.h>
+
+struct vmm_context* _vmm_context = NULL;
+int vmm_status = 0;
+EXPORT_SYMBOL(vmm_status);
+
+void vmm_set_status(int status)
+{
+	vmm_status = status;
+}
+EXPORT_SYMBOL(vmm_set_status);
+
+int vmm_get_status(void)
+{
+	return vmm_status;
+}
+EXPORT_SYMBOL(vmm_get_status);
+
+void vmm_context_init(void* context_addr)
+{
+	_vmm_context = (struct vmm_context*)context_addr;
+}
+EXPORT_SYMBOL(vmm_context_init);
+
+struct vmm_context* vmm_context_get(void)
+{
+	return _vmm_context;
+}
+EXPORT_SYMBOL(vmm_context_get);
diff --git a/arch/arm/vmm/vmm_traps.c b/arch/arm/vmm/vmm_traps.c
new file mode 100644
index 0000000..def0d90
--- /dev/null
+++ b/arch/arm/vmm/vmm_traps.c
@@ -0,0 +1,37 @@
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <asm/traps.h>
+#include <asm/cp15.h>
+#include <asm/cacheflush.h>
+
+void trap_set_vector(void *start, unsigned int length)
+{
+	unsigned char *ptr;
+	unsigned char *vector;
+
+	ptr = start;
+	vector = (unsigned char*)vectors_page;
+
+	/* only set IRQ and FIQ */
+#if defined(CONFIG_CPU_USE_DOMAINS)
+	/* IRQ */
+	memcpy((void *)0xffff0018, (void*)(ptr + 0x18), 4);
+	memcpy((void *)(0xffff0018 + 0x20), (void*)(ptr + 0x18 + 0x20), 4);
+
+	/* FIQ */
+	memcpy((void *)0xffff001C, (void*)(ptr + 0x1C), 4);
+	memcpy((void *)(0xffff001C + 0x20), (void*)(ptr + 0x1C + 0x20), 4);
+#else
+	/* IRQ */
+	memcpy(vector + 0x18, (void*)(ptr + 0x18), 4);
+	memcpy(vector + 0x18 + 0x20, (void*)(ptr + 0x18 + 0x20), 4);
+
+	/* FIQ */
+	memcpy(vector + 0x1C, (void*)(ptr + 0x1C), 4);
+	memcpy(vector + 0x1C + 0x20, (void*)(ptr + 0x1C + 0x20), 4);
+#endif
+	flush_icache_range(0xffff0000, 0xffff0000 + length);
+	if (!vectors_high())
+		flush_icache_range(0x00, 0x00 + length);
+}
+EXPORT_SYMBOL(trap_set_vector);
diff --git a/arch/arm/vmm/vmm_virhw.h b/arch/arm/vmm/vmm_virhw.h
new file mode 100644
index 0000000..363cc6e
--- /dev/null
+++ b/arch/arm/vmm/vmm_virhw.h
@@ -0,0 +1,59 @@
+#ifndef __VMM_VIRTHWH__
+#define __VMM_VIRTHWH__
+
+#define REALVIEW_NR_IRQS	96
+#define IRQS_NR_32		((REALVIEW_NR_IRQS + 31)/32)
+#define RTT_VMM_IRQ_TRIGGER	10
+
+struct vmm_context
+{
+	/* the status of the vGuest IRQ */
+	volatile unsigned long virq_status;
+
+	/* whether an interrupt is pending for the vGuest OS */
+	volatile unsigned long virq_pended;
+
+	/* pending interrupts for the vGuest OS */
+	volatile unsigned long virq_pending[IRQS_NR_32];
+};
+
+/* IRQ operations under VMM */
+static inline unsigned long vmm_irq_save(void)
+{
+	unsigned long flags;
+
+	asm volatile(
+		"	mrs	%0, cpsr	@ arch_local_irq_save\n"
+		"	cpsid	i"
+		: "=r" (flags) : : "memory", "cc");
+	return flags;
+}
+
+static inline void vmm_irq_restore(unsigned long flags)
+{
+	asm volatile(
+		"	msr	cpsr_c, %0	@ local_irq_restore"
+		:
+		: "r" (flags)
+		: "memory", "cc");
+}
+
+static inline void vmm_irq_enable(void)
+{
+	asm volatile(
+		"	cpsie i	@ arch_local_irq_enable"
+		:
+		:
+		: "memory", "cc");
+}
+
+static inline void vmm_irq_disable(void)
+{
+	asm volatile(
+		"	cpsid i	@ arch_local_irq_disable"
+		:
+		:
+		: "memory", "cc");
+}
+
+#endif
diff --git a/arch/arm/vmm/vmm_virq.c b/arch/arm/vmm/vmm_virq.c
new file mode 100644
index 0000000..85886a2
--- /dev/null
+++ b/arch/arm/vmm/vmm_virq.c
@@ -0,0 +1,183 @@
+#include <linux/bug.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <asm/unwind.h>
+
+#include <vmm/vmm.h>
+
+#include "vmm_virhw.h"
+
+/* VMM uses the I bit in the SPSR to save the virq status on ISR entry, so
+ * warning on a set I bit would give some false negative results. */
+//#define VMM_WARN_ON_I_BIT
+
+extern struct vmm_context* _vmm_context;
+
+void vmm_disable_virq(void)
+{
+	unsigned long flags = vmm_irq_save();
+	_vmm_context->virq_status = 0x01;
+	vmm_irq_restore(flags);
+}
+EXPORT_SYMBOL(vmm_disable_virq);
+
+static void _vmm_raise_on_pended(void)
+{
+	/* check whether any interrupt is pended in vIRQ */
+	if (_vmm_context->virq_pended) {
+		/* trigger a soft interrupt */
+		vmm_raise_softirq(RTT_VMM_IRQ_TRIGGER);
+		return;
+	}
+
+#if 0
+	int i;
+	for (i = 0; i < ARRAY_SIZE(_vmm_context->virq_pending); i++) {
+		if (_vmm_context->virq_pending[i]) {
+			_vmm_context->virq_pended = 1;
+			pr_info("\n");
+			vmm_raise_softirq(RTT_VMM_IRQ_TRIGGER);
+			return;
+		}
+	}
+#endif
+}
+
+void vmm_enable_virq(void)
+{
+	unsigned long flags = vmm_irq_save();
+	_vmm_context->virq_status = 0x00;
+	_vmm_raise_on_pended();
+	vmm_irq_restore(flags);
+}
+EXPORT_SYMBOL(vmm_enable_virq);
+
+unsigned long vmm_return_virq(void)
+{
+	unsigned long flags;
+	unsigned long level;
+
+	level = vmm_irq_save();
+	flags = _vmm_context->virq_status;
+	vmm_irq_restore(level);
+
+	return flags;
+}
+EXPORT_SYMBOL(vmm_return_virq);
+
+unsigned long vmm_save_virq(void)
+{
+	int status;
+	unsigned long flags = vmm_irq_save();
+
+	status = _vmm_context->virq_status;
+	_vmm_context->virq_status = 0x01;
+	vmm_irq_restore(flags);
+
+	return status;
+}
+EXPORT_SYMBOL(vmm_save_virq);
+
+void vmm_restore_virq(unsigned long flags)
+{
+	unsigned long level;
+
+	level = vmm_irq_save();
+	_vmm_context->virq_status = flags;
+	if (_vmm_context->virq_status == 0)
+	{
+		_vmm_raise_on_pended();
+	}
+	vmm_irq_restore(level);
+}
+EXPORT_SYMBOL(vmm_restore_virq);
+
+unsigned long vmm_save_virq_spsr_asm(unsigned long spsr, struct pt_regs *regs)
+{
+	if (vmm_status) {
+		if (_vmm_context->virq_status)
+			return spsr | PSR_I_BIT;
+	}
+	return spsr;
+}
+
+void irq_enable_asm(void)
+{
+	if (vmm_status) {
+		vmm_enable_virq();
+	} else {
+		asm volatile("cpsie i" : : : "memory", "cc");
+	}
+}
+
+void irq_disable_asm(void)
+{
+	if (vmm_status) {
+		vmm_disable_virq();
+	} else {
+		asm volatile("cpsid i" : : : "memory", "cc");
+	}
+}
+
+/* Should be called when the guest enters a state in which the IRQ is
+ * disabled by hardware, for example, on entering SVC, PABT or DABT mode.
+ *
+ * It will open the hardware IRQ; the virtual IRQ remains unchanged.
+ */
+void vmm_switch_nohwirq_to_novirq(void)
+{
+	if (vmm_status) {
+		vmm_disable_virq();
+		asm volatile("cpsie i" : : : "memory", "cc");
+	}
+}
+
+unsigned long vmm_restore_virq_asm(unsigned long spsr)
+{
+	if (vmm_status) {
+#ifdef VMM_WARN_ON_I_BIT
+		WARN(spsr & PSR_I_BIT, "return to svc mode with I in SPSR set\n");
+#endif
+		vmm_restore_virq(!!(spsr & PSR_I_BIT));
+		return spsr & ~PSR_I_BIT;
+	} else {
+		return spsr;
+	}
+}
+
+void vmm_on_ret_to_usr(unsigned long spsr)
+{
+	if (vmm_status) {
+#ifdef VMM_WARN_ON_I_BIT
+		WARN(spsr & PSR_I_BIT, "return to user mode with I in SPSR set\n");
+#endif
+		vmm_enable_virq();
+	}
+}
+
+void vmm_on_svc_exit_irq(unsigned long spsr)
+{
+	if (vmm_status) {
+#ifdef VMM_WARN_ON_I_BIT
+		WARN(spsr & PSR_I_BIT, "exit IRQ with I in SPSR set\n");
+#endif
+		vmm_enable_virq();
+	}
+}
+
+void vmm_dump_irq(void)
+{
+	int i;
+	unsigned long cpsr;
+
+	asm volatile ("mrs %0, cpsr": "=r"(cpsr));
+
+	printk("status: %08lx, pended: %08lx, cpsr: %08lx\n",
+	       _vmm_context->virq_status, _vmm_context->virq_pended, cpsr);
+	printk("pending: ");
+	for (i = 0; i < ARRAY_SIZE(_vmm_context->virq_pending); i++) {
+		printk("%08lx, ", _vmm_context->virq_pending[i]);
+	}
+	printk("\n");
+}
+
-- 
1.8.4
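
Note (outside the patch proper; git am ignores text after the version trailer): the virtual-IRQ handshake in this patch has a producer side that is not shown here. The host (RT-Thread) sets a bit in vmm_context.virq_pending[], sets the virq_pended summary flag, and raises the reserved software interrupt RTT_VMM_IRQ_TRIGGER; gic_handle_irq() then branches into vmm_irq_handle(), which drains the pending bitmap and dispatches each interrupt via irq_find_mapping()/handle_IRQ(). Below is a minimal sketch of that producer side, assuming the host shares the struct vmm_context layout from vmm_virhw.h; the helper name vmm_post_virq() is hypothetical and not part of this patch:

	/* Sketch: post a virtual IRQ to the guest and kick it via the trigger SGI. */
	static void vmm_post_virq(struct vmm_context *ctx, int irq)
	{
		unsigned long flags = vmm_irq_save();

		/* per-word pending bit, cleared again by vmm_irq_handle() */
		ctx->virq_pending[irq / 32] |= 1UL << (irq % 32);
		/* summary flag tested by the while loop in vmm_irq_handle() */
		ctx->virq_pended = 1;
		vmm_irq_restore(flags);

		/* wake the guest through the reserved software interrupt */
		vmm_raise_softirq(RTT_VMM_IRQ_TRIGGER);
	}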