dect / linux-2.6 (archived)
Commit Graph

28 Commits

Author SHA1 Message Date
Arun Sharma 60063497a9 atomic: use <linux/atomic.h>
This allows us to move duplicated code in <asm/atomic.h>
(atomic_inc_not_zero() for now) to <linux/atomic.h>

Signed-off-by: Arun Sharma <asharma@fb.com>
Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-07-26 16:49:47 -07:00
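
For illustration of the kind of duplicated helper being hoisted: the generic fallback that ends up in <linux/atomic.h> can be expressed on top of atomic_add_unless(). A minimal sketch, not the full header (the include-guard name is made up):

    #ifndef _SKETCH_LINUX_ATOMIC_H
    #define _SKETCH_LINUX_ATOMIC_H

    #include <asm/atomic.h>    /* arch-specific primitives come first */

    /*
     * Shared helper instead of a copy in every <asm/atomic.h>:
     * increment *v unless it is 0; returns non-zero if the increment happened.
     */
    #ifndef atomic_inc_not_zero
    #define atomic_inc_not_zero(v)    atomic_add_unless((v), 1, 0)
    #endif

    #endif /* _SKETCH_LINUX_ATOMIC_H */
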
Dimitri Sivanich 75c1c91cb9 [IA64] eliminate race condition in smp_flush_tlb_mm
A race condition exists within smp_call_function_many() when called from
smp_flush_tlb_mm().  On rare occasions the cpu_vm_mask can be cleared
while smp_call_function_many is executing, occasionally resulting in a
hung process.

Make a copy of the mask prior to calling smp_call_function_many().

Signed-off-by: Dimitri Sivanich <sivanich@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2010-12-28 14:06:21 -08:00
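
A minimal sketch of the fix pattern (the callback name flush_fn is hypothetical; the real smp_flush_tlb_mm() also keeps a fallback for allocation failure and a single-threaded fast path):

    void smp_flush_tlb_mm(struct mm_struct *mm)
    {
            cpumask_var_t cpus;

            if (!alloc_cpumask_var(&cpus, GFP_ATOMIC))
                    return;         /* the real code broadcasts instead */

            /*
             * Snapshot the mask: mm's cpu_vm_mask can be cleared while
             * smp_call_function_many() is still walking it.
             */
            cpumask_copy(cpus, mm_cpumask(mm));
            smp_call_function_many(cpus, flush_fn, mm, 1);
            free_cpumask_var(cpus);
    }
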
Rusty Russell da83a84b53 ia64: convert last user of smp_call_function_mask
smp_call_function_many() is the new version: it takes a cpumask pointer.
Also, use the mm cpumask accessor macro while we're changing this.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-24 09:34:40 +09:30
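
Illustrative before/after of that conversion (flush_fn is a stand-in for the actual callback):

    /* old: mask passed by value, mm field accessed directly */
    smp_call_function_mask(mm->cpu_vm_mask, flush_fn, mm, 1);

    /* new: mask passed as a pointer, obtained via the mm accessor macro */
    smp_call_function_many(mm_cpumask(mm), flush_fn, mm, 1);
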
Tejun Heo b9bf3121af percpu: use DEFINE_PER_CPU_SHARED_ALIGNED()
There are a few places where ___cacheline_aligned* is used with
DEFINE_PER_CPU().  Use DEFINE_PER_CPU_SHARED_ALIGNED() instead.

DEFINE_PER_CPU_SHARED_ALIGNED() applies the alignment only on SMP builds.
While all the other converted places used the _in_smp variant or are only
compiled for SMP, net/rds used the unconditional ____cacheline_aligned.
I don't see any reason these data structures should be aligned on UP,
so they are converted together.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Andy Grover <andy.grover@oracle.com>
2009-06-24 15:13:47 +09:00
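
The shape of the conversion, using a hypothetical per-cpu variable:

    /* before: alignment attribute open-coded on the definition */
    static DEFINE_PER_CPU(struct example_queue, example_queue)
                                            ____cacheline_aligned_in_smp;

    /* after: the macro applies the alignment (and the shared-aligned
     * section) only on SMP builds */
    static DEFINE_PER_CPU_SHARED_ALIGNED(struct example_queue, example_queue);
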
Tejun Heo 204fba4aa3 percpu: cleanup percpu array definitions
Currently, the following three different ways to define percpu arrays
are in use.

1. DEFINE_PER_CPU(elem_type[array_len], array_name);
2. DEFINE_PER_CPU(elem_type, array_name[array_len]);
3. DEFINE_PER_CPU(elem_type, array_name)[array_len];

Unify to #1, which correctly separates the roles of the two parameters
and thus allows more flexibility in the way percpu variables are
defined.

[ Impact: cleanup ]

Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: linux-mm@kvack.org
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: David S. Miller <davem@davemloft.net>
2009-06-24 15:13:45 +09:00
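
With a hypothetical array, the three forms and the unified result look like this:

    /* 1. unified form: the array length stays with the element type */
    DEFINE_PER_CPU(unsigned int [NR_EXAMPLE], example_counts);

    /* 2. and 3. were previously also in use */
    DEFINE_PER_CPU(unsigned int, example_counts[NR_EXAMPLE]);
    DEFINE_PER_CPU(unsigned int, example_counts)[NR_EXAMPLE];
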
Matthew Wilcox e088a4ad7f [IA64] Convert ia64 to use int-ll64.h
It is generally agreed that it would be beneficial for u64 to be an
unsigned long long on all architectures.  ia64 (in common with several
other 64-bit architectures) currently uses unsigned long.  Migrating
piecemeal is too painful; this giant patch fixes all compilation warnings
and errors that come as a result of switching to use int-ll64.h.

Note that userspace will still see __u64 defined as unsigned long.  This
is important as it affects C++ name mangling.

[Updated by Tony Luck to change efi.h:efi_freemem_callback_t to use
 u64 for start/end rather than unsigned long]

Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2009-06-17 09:33:49 -07:00
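
Typical fallout such a switch has to clean up: printk format specifiers and casts that assumed u64 was unsigned long (the variable, helper, and message below are hypothetical):

    u64 start = region_start();     /* hypothetical helper */

    /* before: fine while u64 == unsigned long, warns afterwards */
    printk(KERN_DEBUG "start=0x%lx\n", start);

    /* after */
    printk(KERN_DEBUG "start=0x%llx\n", (unsigned long long)start);
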
Dimitri Sivanich edb91dc01a [IA64] smp_flush_tlb_mm() should only send IPI's to cpus in cpu_vm_mask
Having flush_tlb_mm->smp_flush_tlb_mm() send an IPI to every cpu
on the system is occasionally triggering spin_lock contention in
generic_smp_call_function_interrupt().

Follow the x86 arch's lead and only send IPIs to the cpus in mm->cpu_vm_mask.

Experiments with this change have shown significant improvement in this
contention issue.

Signed-off-by: Dimitri Sivanich <sivanich@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2009-04-16 11:51:35 -07:00
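
A rough sketch of the targeting change (flush_fn stands in for the real local-flush callback; the exact helper in use at the time may differ):

    /* before: IPI every other CPU on the system */
    smp_call_function(flush_fn, mm, 1);

    /* after: only the CPUs that have actually used this mm */
    smp_call_function_mask(mm->cpu_vm_mask, flush_fn, mm, 1);
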
Marcelo Tosatti c4cb768f02 [IA64] export smp_send_reschedule
KVM will use smp_send_reschedule to force a cpu out of guest mode.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2009-04-16 11:48:49 -07:00
Rusty Russell 40fe697a17 cpumask: arch_send_call_function_ipi_mask: ia64
We're weaning the core code off handing cpumasks around on-stack.
This introduces arch_send_call_function_ipi_mask().

We also take the chance to wean send_IPI_mask off the obsolescent
for_each_cpu_mask(): making it take the pointer seemed the most
natural way.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-03-16 14:12:41 +10:30
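
Roughly, the new arch hook is a thin wrapper over ia64's existing IPI sender (sketch; names as used elsewhere in arch/ia64/kernel/smp.c):

    void arch_send_call_function_ipi_mask(const struct cpumask *mask)
    {
            send_IPI_mask(mask, IPI_CALL_FUNC);
    }
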
Robin Holt 97653f92c0 [IA64] Shrink shadow_flush_counts to a short array to save 8k of per_cpu area.
Building allmodconfig currently breaks.  This patch shrinks
per_cpu__shadow_flush_counts from 16k to 8k, which frees enough per_cpu
space for allmodconfig to complete successfully.

Fixes http://bugzilla.kernel.org/show_bug.cgi?id=11338

Signed-off-by: Robin Holt <holt@sgi.com>
Acked-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-08-18 15:39:48 -07:00
Jens Axboe 15c8b6c1aa on_each_cpu(): kill unused 'retry' parameter
It's not even passed on to smp_call_function() anymore, since that
was removed. So kill it.

Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2008-06-26 11:24:38 +02:00
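
The effect on callers, with a hypothetical flush callback:

    /* before: on_each_cpu(func, info, retry, wait) */
    on_each_cpu(flush_fn, NULL, 1, 1);

    /* after: on_each_cpu(func, info, wait) */
    on_each_cpu(flush_fn, NULL, 1);
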
Jens Axboe f27b433ef3 ia64: convert to generic helpers for IPI function calls
This converts ia64 to use the new helpers for smp_call_function() and
friends, and adds support for smp_call_function_single().

Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2008-06-26 11:22:30 +02:00
Hidetoshi Seto c0cd661b1b [IA64] smp.c coding style fix
Fix indenting of switch statement to follow CodingStyle, and
pull out handling of call_data into an inlined function.

I confirmed that applying this fix doesn't affect assembled code.

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-05-01 14:29:12 -07:00
Xiantao Zhang 31a6b11fed [IA64] Implement smp_call_function_mask for ia64
This interface provides more flexible functionality for smp
infrastructure ... e.g. KVM frequently needs to operate on
a subset of cpus.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-04-03 11:39:43 -07:00
Avi Kivity 8a2d869305 [IA64] Allow smp_call_function_single() to current cpu
This removes the requirement that callers use get_cpu() to check for it
in simple cases.  i386 and x86_64 already received similar treatment.

Signed-off-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-07-30 16:26:45 -07:00
Tony Luck cb2e0912f7 [IA64] Nail two more simple section mismatch errors
pcibios_setup (between 'pci_setup' and 'quirk_mellanox_tavor')
setup_profiling_timer (between 'write_profile' and 'delayed_put_task_struct')

Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-07-25 13:08:41 -07:00
Fenghua Yu f34e3b61f2 use the new percpu interface for shared data
Currently most of the per cpu data, which is accessed by different cpus,
has a ____cacheline_aligned_in_smp attribute.  Move all this data to the
new per cpu shared data section: .data.percpu.shared_aligned.

This will separate the percpu data which is referenced frequently by other
cpus from the local-only percpu data.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-19 10:04:45 -07:00
Simon Arlott 72fdbdce3d [IA64] spelling fixes: arch/ia64/
Spelling and apostrophe fixes in arch/ia64/.

Signed-off-by: Simon Arlott <simon@fire.lp0.eu>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-05-11 14:55:43 -07:00
Jack Steiner 3be44b9cc3 [IA64] Optional method to purge the TLB on SN systems
This patch adds an optional method for purging the TLB on SN IA64 systems.
The change should not affect any non-SN system.

Signed-off-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-05-08 14:50:43 -07:00
Al Viro ccbebdaccf [PATCH] arch/ia64: ansify
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-02-09 09:14:06 -08:00
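
"Ansify" here means replacing leftover K&R-style definitions and declarations with ANSI C prototypes. An illustrative before/after (hypothetical function; the two forms obviously cannot coexist in one file):

    /* before: K&R-style definition */
    static int example_fn(cpu)
            int cpu;
    {
            return cpu;
    }

    /* after: ANSI definition */
    static int example_fn(int cpu)
    {
            return cpu;
    }
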
Horms 45a98fc622 [IA64] CONFIG_KEXEC/CONFIG_CRASH_DUMP permutations
Actually, on reflection I think that there is a good case for
keeping the options separate. I am thinking particularly of people
who want a very small crashdump kernel and thus don't want to compile
in kexec.

The patch below should fix things up so that all valid combinations of
KEXEC, CRASH_DUMP and VMCORE compile cleanly - VMCORE depends on
CRASH_DUMP, which is why I said valid combinations.  In a nutshell,
it just untangles unrelated code and switches around a few defines.

Please note that it creates a new file, arch/ia64/kernel/crash_dump.c.
This is in keeping with the i386 implementation.

Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-12-12 10:11:00 -08:00
Zou Nan hai a79561134f [IA64] IA64 Kexec/kdump
Changes and updates.

1. Remove the fake rendezvous path and related code, per discussion with Khalid Aziz.
2. fc.i offset fix in relocate_kernel.S.
3. iosapic shutdown code EOI and mask race fix from Fujitsu.
4. Warm boot hook in machine_kexec to SN SAL code from Jack Steiner.
5. Send slave to SAL slave loop patch from Jay Lan.
6. Kdump on non-recoverable MCA event patch from Jay Lan.
7. Use CTL_UNNUMBERED in the kdump_on_init sysctl.

Signed-off-by: Zou Nan hai <nanhai.zou@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-12-07 09:51:35 -08:00
Keith Owens 024e4f2c51 [IA64] Correct definition of handle_IPI
The declaration of handle_IPI in arch/ia64/kernel/smp.c was changed but
not the definition of this function.  Remove struct pt_regs from
handle_IPI().

Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-10-31 14:38:15 -08:00
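
The signature change in question, matching the tree-wide removal of the pt_regs argument from interrupt handlers in that era:

    /* before */
    irqreturn_t handle_IPI(int irq, void *dev_id, struct pt_regs *regs);

    /* after */
    irqreturn_t handle_IPI(int irq, void *dev_id);
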
Kenji Kaneshige 5ee7737379 [IA64] cpu-hotplug: Fix conflict between CPU hot-add and IPI
Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Acked-by: Satoru Takeuchi <takeuchi_satoru@jp.fujitsu.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-10-31 14:17:27 -08:00
hawkes@sgi.com dc565b525d [IA64] wider use of for_each_cpu_mask() in arch/ia64
In arch/ia64 change the explicit use of for-loops and NR_CPUS into the
general for_each_cpu() or for_each_online_cpu() constructs, as
appropriate.  This widens the scope of potential future optimizations
of the general constructs, as well as taking advantage of the existing
optimizations of first_cpu() and next_cpu().

Signed-off-by: John Hawkes <hawkes@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-10-25 15:10:08 -07:00
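
The shape of the conversion (do_something() is a placeholder body):

    int i;

    /* before: explicit NR_CPUS loop with an online check */
    for (i = 0; i < NR_CPUS; i++)
            if (cpu_online(i))
                    do_something(i);

    /* after */
    for_each_online_cpu(i)
            do_something(i);
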
Peter Chubb a68db763af [IA64] Fix another IA64 preemption problem
There's another problem shown up by Ingo's recent patch to make
smp_processor_id() complain if it's called with preemption enabled.
local_finish_flush_tlb_mm() calls activate_context() in a situation
where it could be rescheduled to another processor.  This patch
disables preemption around the call.

Signed-off-by: Peter Chubb <peterc@gelato.unsw.edu.au>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-28 10:01:19 -07:00
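
A simplified sketch of the fix: the local flush path, which ends up in activate_context(), must not be migrated to another CPU halfway through, so it is bracketed with preempt_disable()/preempt_enable() (the real smp_flush_tlb_mm() handles more cases than shown):

    void smp_flush_tlb_mm(struct mm_struct *mm)
    {
            preempt_disable();
            /* single-threaded fast path: flush locally, no IPIs needed */
            if (mm == current->active_mm && atomic_read(&mm->mm_users) == 1) {
                    local_finish_flush_tlb_mm(mm);
                    preempt_enable();
                    return;
            }
            preempt_enable();
            /* ... otherwise fall through to the cross-CPU flush ... */
    }
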
Christophe Lucas 52a0de2cd2 [IA64] printk needs KERN_INFO arch/ia64/kernel/smp.c
printk() calls should include an appropriate KERN_* constant.

Signed-off-by: Christophe Lucas <clucas@rotomalug.org>
Signed-off-by: Domen Puncer <domen@coderock.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-06-21 14:21:17 -07:00
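
The class of change (message text hypothetical):

    /* before */
    printk("SMP: %d CPUs initialized\n", total);

    /* after */
    printk(KERN_INFO "SMP: %d CPUs initialized\n", total);
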
Linus Torvalds 1da177e4c3 Linux-2.6.12-rc2
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!
2005-04-16 15:20:36 -07:00