On Fri, 2011-11-25 at 15:21 +0100, Nick Copeland wrote:
[...]
>
> So if the pipe() is replaced with
>
> socketpair(PF_UNIX, SOCK_STREAM, PF_UNSPEC, pipe_fd);
>
> Then the issue I was seeing goes away. Perhaps the pipe() code has not been
> optimised since sockets were developed to replace them when IPC suddenly
> needed to be between hosts rather than processes? Pure conjecture.
Added sockets to the benchmark:
Normal scheduling:
$ ./ipc 4096 1000000
Sending a 4096 byte message 1000000 times.
Pipe recv time: 7.048740
Pipe send time: 7.048648
Socket send time: 2.365210
Socket recv time: 2.365292
Queue recv time: 2.072530
Queue send time: 2.072494
SCHED_FIFO:
$ ./ipc 4096 1000000
Sending a 4096 byte message 1000000 times.
Pipe send time: 5.279210
Pipe recv time: 5.279508
Socket send time: 2.709628
Socket recv time: 2.709645
Queue send time: 5.228892
Queue recv time: 5.228980
Interesting. I find it quite counter-intuitive that sockets are
significantly faster than the much simpler pipes.
Code at the same location:
http://drobilla.net/files/ipc.c
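For readers who don't want to open ipc.c, the swap Nick suggests looks
roughly like this in isolation (a minimal sketch, not the exact code from
ipc.c; I've used the conventional AF_UNIX/0 spelling, which is equivalent
to PF_UNIX/PF_UNSPEC):

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int pipe_fd[2];

    /* Original: if (pipe(pipe_fd) == -1) { ... } */

    /* Replacement: a UNIX-domain stream socket pair, used here the
     * same way as the pipe (write on one end, read on the other). */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, pipe_fd) == -1) {
        perror("socketpair");
        return 1;
    }

    const char msg[] = "hello";
    char       buf[sizeof(msg)];

    write(pipe_fd[1], msg, sizeof(msg));
    read(pipe_fd[0], buf, sizeof(buf));
    printf("received: %s\n", buf);

    close(pipe_fd[0]);
    close(pipe_fd[1]);
    return 0;
}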
A more interesting/relevant benchmark would be to spawn p processes and
have each send a message to the next in a chain (see the sketch below). Measure
the time from the start until the very last process has received the
message. That should give us a pretty good idea about the sort of
scheduling behaviour we actually care about, and you can crank up p to
make the difference very apparent.
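Something like this, perhaps (a rough sketch of the idea, not code from
ipc.c; all names are mine, and error/short-read handling is omitted):

/* Chain benchmark sketch: p processes connected by pipes, one message
 * forwarded down the chain, timed from injection at the head until the
 * last process has received it. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char** argv)
{
    const int p    = (argc > 1) ? atoi(argv[1]) : 16;   /* chain length */
    const int size = (argc > 2) ? atoi(argv[2]) : 4096; /* message size,
                                          assumed >= sizeof(struct timeval) */

    char* msg = calloc(1, size);

    int (*pipes)[2] = malloc(sizeof(int[2]) * p);  /* pipes[i] feeds stage i */
    for (int i = 0; i < p; ++i) {
        if (pipe(pipes[i]) == -1) { perror("pipe"); return 1; }
    }

    for (int i = 0; i < p; ++i) {
        if (fork() == 0) {
            /* Stage i: receive the message, then forward it to the
             * next stage; the last stage reports the elapsed time. */
            read(pipes[i][0], msg, size);
            if (i + 1 < p) {
                write(pipes[i + 1][1], msg, size);
            } else {
                struct timeval start, end;
                memcpy(&start, msg, sizeof(start)); /* stamped by parent */
                gettimeofday(&end, NULL);
                printf("Chain of %d: %f s\n", p,
                       (end.tv_sec - start.tv_sec) +
                       (end.tv_usec - start.tv_usec) * 1e-6);
            }
            _exit(0);
        }
    }

    /* Parent: stamp the message with the start time and inject it. */
    struct timeval start;
    gettimeofday(&start, NULL);
    memcpy(msg, &start, sizeof(start));
    write(pipes[0][1], msg, size);

    for (int i = 0; i < p; ++i) {
        wait(NULL);
    }
    return 0;
}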
Science is fun.
-dr