compare and swap

Write here if you have problems with your C source code

beloni
Learning poster
Posts: 66
Joined: Thu Jan 05, 2006 1:41 pm
Location: Pelotas, RS, Brazil

compare and swap

Post by beloni »

Hello,
Is there some kind of compare-and-swap function or macro available in GNU gcc for Linux?
I'm trying to implement a lock-free work-stealing queue...

Thanks
mf
Guru
Posts: 1244
Joined: Mon Feb 28, 2005 4:51 am
Location: Zürich, Switzerland

Re: compare and swap

Post by mf »

gcc 4.1+ has a __sync_val_compare_and_swap built-in, described here: http://gcc.gnu.org/onlinedocs/gcc/Atomic-Builtins.html

I don't know if previous gcc versions have something similar, but if they don't, it shouldn't be too hard to write a wrapper around the cmpxchg instruction with inline assembly.
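
For example, here's a minimal sketch of an atomic increment using that built-in (the counter is just for illustration):

Code: Select all

#include <stdio.h>

/* Retry until our compare-and-swap wins: __sync_val_compare_and_swap
 * returns the value that was in *ptr before the attempt, so the update
 * succeeded only if that value equals the one we expected. */
static void atomic_increment(int *ptr)
{
	int old;
	do {
		old = *ptr;
	} while (__sync_val_compare_and_swap(ptr, old, old + 1) != old);
}

int main(void)
{
	int counter = 0;
	atomic_increment(&counter);
	printf("%d\n", counter);	/* prints 1 */
	return 0;
}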
beloni
Learning poster
Posts: 66
Joined: Thu Jan 05, 2006 1:41 pm
Location: Pelotas, RS, Brazil

Re: compare and swap

Post by beloni »

Thanks, man

I found a CAS implementation in /usr/include/asm/system.h of my old Slackware 10.2 (kernel 2.4.31), and it works. So I just copied it into my project and voilà. Here is the CAS code that worked well for me:

Code: Select all

#ifndef CAS_H
#define CAS_H

/* "lock" prefix keeps the exchange atomic across CPUs on SMP */
#define LOCK_PREFIX "lock ; "

/* Lets a void * be used as an "m" operand below; the oversized dummy
 * struct keeps gcc from assuming anything about the size of the access. */
struct __xchg_dummy { unsigned long a[100]; };
#define __xg(x) ((struct __xchg_dummy *)(x))


/* Plain atomic exchange: xchg with a memory operand locks the bus
 * implicitly, so no LOCK prefix is needed here. */
static inline unsigned long __xchg(unsigned long x, volatile void * ptr, int size)
{
	switch (size) {
		case 1:
			__asm__ __volatile__("xchgb %b0,%1"
				:"=q" (x)
				:"m" (*__xg(ptr)), "0" (x)
				:"memory");
			break;
		case 2:
			__asm__ __volatile__("xchgw %w0,%1"
				:"=r" (x)
				:"m" (*__xg(ptr)), "0" (x)
				:"memory");
			break;
		case 4:
			__asm__ __volatile__("xchgl %0,%1"
				:"=r" (x)
				:"m" (*__xg(ptr)), "0" (x)
				:"memory");
			break;
	}
	return x;
}

/*
 * Atomic compare and exchange.  Compare OLD with MEM, if identical,
 * store NEW in MEM.  Return the initial value in MEM.  Success is
 * indicated by comparing RETURN with OLD.
 */

#ifdef CONFIG_X86_CMPXCHG
#define __HAVE_ARCH_CMPXCHG 1
#endif

static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
				      unsigned long new, int size)
{
	unsigned long prev;
	switch (size) {
	case 1:
		__asm__ __volatile__(LOCK_PREFIX "cmpxchgb %b1,%2"
				     : "=a"(prev)
				     : "q"(new), "m"(*__xg(ptr)), "0"(old)
				     : "memory");
		return prev;
	case 2:
		__asm__ __volatile__(LOCK_PREFIX "cmpxchgw %w1,%2"
				     : "=a"(prev)
				     : "q"(new), "m"(*__xg(ptr)), "0"(old)
				     : "memory");
		return prev;
	case 4:
		__asm__ __volatile__(LOCK_PREFIX "cmpxchgl %1,%2"
				     : "=a"(prev)
				     : "q"(new), "m"(*__xg(ptr)), "0"(old)
				     : "memory");
		return prev;
	}
	return old;
}

#define cmpxchg(ptr,o,n)\
	((__typeof__(*(ptr)))__cmpxchg((ptr),(unsigned long)(o),\
					(unsigned long)(n),sizeof(*(ptr))))

#endif
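
And here is a minimal sketch of how the cmpxchg macro can be used in a retry loop (assuming the header above is saved as cas.h; the counter is just an example, not my queue code):

Code: Select all

#include <stdio.h>
#include "cas.h"	/* the CAS header posted above */

static volatile unsigned long counter = 0;

/* cmpxchg returns the value that was in *p before the attempt, so it
 * succeeded only when that value equals the one we expected; otherwise
 * another thread got there first and we simply retry. */
static void atomic_add(volatile unsigned long *p, unsigned long delta)
{
	unsigned long old;
	do {
		old = *p;
	} while (cmpxchg(p, old, old + delta) != old);
}

int main(void)
{
	atomic_add(&counter, 5);
	printf("%lu\n", counter);	/* prints 5 */
	return 0;
}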

That's it
"A machine can do the work of fifty ordinary men, but no machine can do the work of one extraordinary man.", Shahriar Manzoor
marinajoes
New poster
Posts: 2
Joined: Mon Feb 16, 2015 11:54 am

Re: compare and swap

Post by marinajoes »

Thank you. Useful information.