Re: [LAD] Pipes vs. Message Queues

From: Nick Copeland <nickycopeland@email-addr-hidden>
Date: Fri Nov 25 2011 - 16:21:28 EET

> From: clemens@email-addr-hidden
> To: nickycopeland@email-addr-hidden
> CC: d@email-addr-hidden; linux-audio-dev@email-addr-hidden
>
> Perhaps I should revisit another project I was working on, syslog event
> correlation: it used multiple threads to scale to >1M syslog messages per
> second (big installation). I was testing it with socketpair()s and other
> approaches. I would be interested to know whether scheduler changes affect it too.

So if the pipe() is replaced with

    socketpair(PF_UNIX, SOCK_STREAM, PF_UNSPEC, pipe_fd);

then the issue I was seeing goes away. Perhaps the pipe() code has not been
optimised since sockets were developed to supersede pipes, once IPC suddenly
needed to work between hosts rather than just between processes? Pure conjecture.

[nicky@email-addr-hidden] /tmp [148] cc ipc.c -lrt -DSET_RT_SCHED=1
[nicky@email-addr-hidden] /tmp [149] ./a.out 4096 10000000
Sending a 4096 byte message 10000000 times.
Pipe send time: 26.131743
Pipe recv time: 26.132117
Queue send time: 18.576559
Queue recv time: 18.576592
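
The 'Queue' figures presumably come from POSIX message queues, which is what
-lrt suggests (although librt is also needed for clock_gettime). For
completeness, a minimal sketch of that path; the queue name and attributes
here are illustrative, not taken from ipc.c:

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define MSG_SIZE 4096
    #define COUNT 1000000L

    int main(void)
    {
        /* 10 is the default per-queue message limit on Linux. */
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = MSG_SIZE };
        char msg[MSG_SIZE] = { 0 };

        mq_unlink("/ipc_bench");    /* hypothetical name; discard stale queue */
        mqd_t mq = mq_open("/ipc_bench", O_CREAT | O_RDWR, 0600, &attr);
        if (mq == (mqd_t)-1) {
            perror("mq_open");
            return 1;
        }

        if (fork() == 0) {          /* child: receive COUNT messages */
            for (long i = 0; i < COUNT; i++)
                if (mq_receive(mq, msg, sizeof msg, NULL) < 0)
                    _exit(1);
            _exit(0);
        }

        for (long i = 0; i < COUNT; i++)    /* parent: send COUNT messages */
            if (mq_send(mq, msg, MSG_SIZE, 0) < 0) {
                perror("mq_send");
                break;
            }

        wait(NULL);
        mq_close(mq);
        mq_unlink("/ipc_bench");
        return 0;
    }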

The results were repeatable, CPU load was evenly distributed, and the ludicrous
context-switch figures were gone. Perhaps I should have relabelled 'Pipe send time'
as 'Socket send time'? The message queues still give the best results.
Somebody should compare that to a lockless ring buffer in shared memory, although
I have a feeling it would not beat the message queues used here; a sketch of what
I mean is below.
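
For anyone who wants to run that comparison, a minimal sketch of what I mean
(hypothetical code, not something I have benchmarked): a single-producer,
single-consumer ring in anonymous shared memory using C11 atomics. Note that
both sides busy-wait, so it burns CPU where the blocking IPC above sleeps:

    #include <stdatomic.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define SLOTS 1024              /* must be a power of two */
    #define MSG_SIZE 4096
    #define COUNT 1000000L

    struct ring {
        _Atomic unsigned long head;     /* next slot the producer writes */
        _Atomic unsigned long tail;     /* next slot the consumer reads */
        char slots[SLOTS][MSG_SIZE];
    };

    int main(void)
    {
        struct ring *r = mmap(NULL, sizeof *r, PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (r == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        atomic_init(&r->head, 0);
        atomic_init(&r->tail, 0);

        if (fork() == 0) {              /* consumer */
            char buf[MSG_SIZE];
            for (long i = 0; i < COUNT; i++) {
                unsigned long t =
                    atomic_load_explicit(&r->tail, memory_order_relaxed);
                /* spin until the producer has published slot t */
                while (atomic_load_explicit(&r->head,
                                            memory_order_acquire) == t)
                    ;
                memcpy(buf, r->slots[t % SLOTS], MSG_SIZE);
                atomic_store_explicit(&r->tail, t + 1, memory_order_release);
            }
            _exit(0);
        }

        char msg[MSG_SIZE] = { 0 };     /* producer */
        for (long i = 0; i < COUNT; i++) {
            unsigned long h =
                atomic_load_explicit(&r->head, memory_order_relaxed);
            /* spin while the ring is full */
            while (h - atomic_load_explicit(&r->tail,
                                            memory_order_acquire) == SLOTS)
                ;
            memcpy(r->slots[h % SLOTS], msg, MSG_SIZE);
            atomic_store_explicit(&r->head, h + 1, memory_order_release);
        }
        wait(NULL);
        return 0;
    }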

Regards, nick.
                                               
