Show HN: Iceoryx2 – Fast IPC Library for Rust, C++, and C

61 points, posted 15 hours ago
by elfenpiff

14 Comments

hardwaresofton

5 hours ago

Been doing some IPC experiments recently following the 3tilley post[0], because there just isn't enough definitive information (even if it's a snapshot in time) out there.

Shared memory is crazy fast, and I'm surprised that there aren't more things that take advantage of it. Super odd that gRPC doesn't do shared memory, and basically never plans to?[1].
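For context, the core of these ping-pong benchmarks is tiny. Here's a sketch of the raw technique (not iceoryx2's API, just bare shared memory; assumes Linux, a file under /dev/shm, and the memmap2 crate):

    // Two processes ping-pong a sequence number through one shared page.
    // Run one instance with argument "ping" and one with "pong".
    use std::sync::atomic::{AtomicU64, Ordering};
    use std::{env, fs::OpenOptions};

    fn main() -> std::io::Result<()> {
        let role = env::args().nth(1).unwrap_or_else(|| "ping".into());
        let file = OpenOptions::new()
            .read(true)
            .write(true)
            .create(true)
            .open("/dev/shm/ping_pong_demo")?;
        file.set_len(8)?;
        let mmap = unsafe { memmap2::MmapMut::map_mut(&file)? };
        // The mapping is page-aligned, so its first 8 bytes can soundly
        // be treated as an atomic counter.
        let counter = unsafe { &*(mmap.as_ptr() as *const AtomicU64) };

        // "ping" publishes odd values, "pong" even ones; each side spins
        // until it observes the other side's latest write.
        let mut expected = if role == "ping" { 0 } else { 1 };
        while expected < 2_000_000 {
            while counter.load(Ordering::Acquire) != expected {
                std::hint::spin_loop();
            }
            counter.store(expected + 1, Ordering::Release);
            expected += 2;
        }
        Ok(())
    }

No syscalls on the hot path is the whole trick; a domain-socket round trip pays for two write/read pairs plus scheduler wakeups per message.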

All that said, the constructive criticism I can offer is that mass-consumption announcements like this one should include:

- RPC throughput (with the usual caveats/disclaimers)
- A comparison (ideally graphed) to an alternative approach (ex. domain sockets)
- Your best/most concise & expressive usage snippet

100ns is great to know, but I would really like to know how many RPCs/s that translates to without doing the math, or to see it with realistic deserialization on the other end.
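The latency-derived upper bound is at least trivial to sketch, which is exactly why measured throughput would be more interesting:

    // Back-of-the-envelope only: converts the claimed per-message latency
    // into an upper bound on single-threaded messages/s. Real RPC/s also
    // depends on pipelining, batching, and deserialization cost.
    fn main() {
        let latency_ns = 100.0_f64; // the headline 100 ns figure
        println!("~{:.0} msgs/s upper bound", 1e9 / latency_ns); // ~10,000,000
    }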

[0]: https://3tilley.github.io/posts/simple-ipc-ping-pong/

[1]: https://github.com/grpc/grpc/issues/19959

abhirag

39 minutes ago

At $work we are evaluating different IPC strategies in Rust. My colleague expanded upon 3tilley's work; they have updated benchmarks, with iceoryx2 included, here[0]. I suppose the current release should perform even better.

[0]: https://pranitha.rs/posts/rust-ipc-ping-pong/

npalli

10 hours ago

Congrats on the release.

What's the difference between iceoryx and iceoryx2? I don't want to use Rust and want to stick to C++ if possible.

elBoberido

7 hours ago

Besides being written in Rust, the big difference is the decentralized approach. With iceoryx1 a central daemon is required, but with iceoryx2 this is not the case anymore. Furthermore, there is more fine-grained control over resources like memory and over endpoints like publishers. Overall the architecture is more modular, and it should be easier to port iceoryx2 to even more platforms and to customize it with 3rd-party extensions.
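Very roughly, one way to picture a daemon-less design (an illustrative sketch only, not our actual implementation): if every service backs itself with a named shared-memory segment under a well-known prefix, discovery becomes a directory scan instead of a query to a broker.

    // Illustrative sketch -- "svc_" is a made-up prefix, and this is not
    // the real iceoryx2 discovery mechanism. Assumes Linux (/dev/shm).
    use std::fs;

    fn main() -> std::io::Result<()> {
        for entry in fs::read_dir("/dev/shm")? {
            let name = entry?.file_name();
            let name = name.to_string_lossy();
            if let Some(service) = name.strip_prefix("svc_") {
                println!("found service: {service}");
            }
        }
        Ok(())
    }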

With this release we have initial support for C and C++. Not all features of the Rust version are supported yet, but the plan is to finish the bindings with the next release. Furthermore, an upcoming release will make it trivial to communicate between Rust, C, and C++ applications, as well as with all the other language bindings we are going to provide, Python probably being the next one.
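The usual prerequisite for that kind of cross-language zero-copy, independent of the library, is that the payload type has a fixed, C-compatible layout and is self-contained. In Rust terms, something like this (TransmissionData is just an example name):

    // A payload shared across Rust/C/C++ via shared memory must have a
    // stable, C-compatible layout and contain no heap pointers.
    #[repr(C)]
    #[derive(Clone, Copy, Debug)]
    pub struct TransmissionData {
        pub x: i32,
        pub y: i32,
        pub funky: f64,
    }
    // The matching C declaration:
    //   struct TransmissionData { int32_t x; int32_t y; double funky; };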

sebastos

2 hours ago

I've been looking around for some kind of design documents that explain how you were able to ditch the central broker, but I haven't found much. Do you have breadcrumbs?

simfoo

2 hours ago

Same here. Shared memory is one of those things where the kernel could really help some more with reliable cleanup (1). Until then you're mostly doomed to either maintain a rock-solid cleanup daemon or settle for eventual cleanup by restarting processes. I suspect it's still possible to get into a situation where segments are exhausted and you're forced to intervene manually.

(1) I'm referring to automatic refcounting of shm segments using POSIX shm (not SysV!) when the last process dies or unmaps.
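To make the footnote concrete, here's the gap (sketch assumes Linux and the libc crate):

    // The name of a POSIX shm segment persists in the kernel until
    // shm_unlink is called, regardless of process lifetime.
    use std::ffi::CString;

    fn main() {
        let name = CString::new("/cleanup_demo").unwrap();
        unsafe {
            // Creates /dev/shm/cleanup_demo on Linux.
            let fd = libc::shm_open(name.as_ptr(), libc::O_CREAT | libc::O_RDWR, 0o600);
            assert!(fd >= 0, "shm_open failed");
            libc::ftruncate(fd, 4096);
            libc::close(fd);
            // If the process dies before this line, the segment leaks
            // until something else unlinks it. SysV shm can be marked for
            // removal-on-last-detach up front (IPC_RMID); POSIX shm has
            // no equivalent, hence the need for a cleanup daemon or
            // robust recovery logic.
            libc::shm_unlink(name.as_ptr());
        }
    }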

tbillington

8 hours ago

> Language bindings for C and C++ with CMake and Bazel support right out of the box. Python and other languages are coming soon.

tormeh

7 hours ago

Looks like it has significantly lower latency.

> want to stick to C++ if possible

The answer to that concern is in the title of the submission.

westurner

6 hours ago

How does this compare to and/or integrate with Apache Arrow, which had "arrow plasma IPC" and is supported by pandas with dtype_backend="pyarrow", by lancedb/lancedb, and by Serde.rs? https://serde.rs/#data-formats

zxexz

4 hours ago

The other commenter answering you is, I think, trying to point out that the Arrow plasma store is deprecated (and no longer present in the arrow project).

I think it's worth being a little more clear here - Arrow IPC is _not_ deprecated, and has massive momentum - so much so that it's more or less already become the default IPC format for many libraries.

To me it remains unclear what the benefits of iceoryx2 over the Arrow ecosystem are, what the level of interoperability is, and what the tradeoffs of each are relative to the other. Within a single machine, you can mmap the IPC file. You can use Arrow Flight for inter-node or inter-process communication. You can use Arrow with Ray, which is where Plasma went.
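For anyone who hasn't touched it, the single-machine Arrow path is pleasantly boring. A minimal sketch with the Rust arrow crate ("data.arrow" is a placeholder path):

    use std::fs::File;
    use arrow::ipc::reader::FileReader;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let file = File::open("data.arrow")?;
        // None = read all columns, no projection. For zero-copy-style
        // access you could memory-map the file (e.g. with memmap2) and
        // hand the reader a std::io::Cursor over the mapping instead.
        let reader = FileReader::try_new(file, None)?;
        for batch in reader {
            println!("batch with {} rows", batch?.num_rows());
        }
        Ok(())
    }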

I love anything new in this space, though; if/when I have time I'll check this out. Would love it if somebody could actually elaborate on the differences.

dumah

5 hours ago

zxexz

3 hours ago

Not downvoting, but those two links don't really describe much - though they are part of the story.

For other readers, here's a general takeaway -

- Arrow Plasma was deprecated, and is now no longer even present in the Arrow project.
- The maintainers behind the Plasma store in Arrow forked it into Ray. Plasma is still alive and well in Ray, and it still uses the Arrow IPC format, among other things.

In addition to the links from the above poster, read the original blog post on Plasma [0] and the section on Ray at [1].

I use Ray quite a bit. For more lightweight stuff, or for more low-level control, I use Arrow Flight [2] (and [3] for Python examples).

[0] https://ray-project.github.io/2017/08/08/plasma-in-memory-ob...
[1] https://arrow.apache.org/powered_by/
[2] https://arrow.apache.org/docs/python/flight.html
[3] https://arrow.apache.org/cookbook/py/flight.html

forrestthewoods

2 hours ago

Why is Windows target support tier 2 and not tier 1?