Adamah – A portable Vulkan compute library for Python and FFI

1 point, posted 22 days ago
by krokodil-byte

2 Comments

krokodil-byte

13 days ago

UPDATED TO V5.0.0. Now outperforming CuPy on Transformer blocks: benchmarks show a 1.5x speedup there and up to 1800x on small-batch MatMul. No CUDA, no vendor lock-in, just pure SPIR-V efficiency.
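The post doesn't show how those numbers were measured. As a rough illustration only, here is how the CuPy side of a small-batch MatMul micro-benchmark might be timed; the shapes, dtype, and iteration count are my own assumptions, not the author's setup:

    import time
    import cupy as cp

    # Rough micro-benchmark for the CuPy baseline of a small-batch MatMul
    # comparison. Shapes and iteration count are arbitrary choices, not the
    # configuration behind the numbers quoted above.
    a = cp.random.rand(8, 64, 64, dtype=cp.float32)
    b = cp.random.rand(8, 64, 64, dtype=cp.float32)

    cp.matmul(a, b)                      # warm-up: kernel compilation, allocation
    cp.cuda.Device().synchronize()

    t0 = time.perf_counter()
    for _ in range(1000):
        cp.matmul(a, b)
    cp.cuda.Device().synchronize()       # wait for queued kernels before stopping the clock
    print("CuPy small-batch matmul:", (time.perf_counter() - t0) / 1000, "s/iter")

    # The Adamah side would time the equivalent call through its own API;
    # the comments here don't show a matmul entry point, so it is omitted.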

The PyPI package is still the legacy version; install from the repo instead:

    pip install git+https://github.com/krokodil-byte/ADAMAH.git

krokodil-byte

22 days ago

I built a GPU compute library that's actually simple. No CUDA, no complex setup.

Python:

    import adamah

    adamah.init()
    adamah.put("x", [1, 2, 3, 4])
    adamah.sin("y", "x", 4)
    print(adamah.get("y"))

C:

    inject("x", data, n);
    vop1(VOP_SIN, "y", "x", n);
    extract("y", result, n);

Features:

- Named buffers (auto-managed, no alloc/free)
- Full math: trig, calculus, reduce, linear algebra
- Sparse memory maps (scatter/gather)
- FFI-ready for any language (see the ctypes sketch after this list)
- Single header + one .c file
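For the FFI point, a minimal ctypes sketch of calling the C entry points shown above (inject, vop1, extract) from Python. The function names and VOP_SIN come from the comment; the shared-library name, argument types, the numeric value of VOP_SIN, and any required initialization call are assumptions, so check adamah.h for the real signatures:

    import ctypes

    # Assumed library name; build the single .c file into a shared object first.
    lib = ctypes.CDLL("./libadamah.so")

    n = 4
    VOP_SIN = 0  # hypothetical enum value; the real one is defined in the header

    # Assumed argument types (float buffers, int lengths) for illustration only.
    lib.inject.argtypes = [ctypes.c_char_p, ctypes.POINTER(ctypes.c_float), ctypes.c_int]
    lib.vop1.argtypes = [ctypes.c_int, ctypes.c_char_p, ctypes.c_char_p, ctypes.c_int]
    lib.extract.argtypes = [ctypes.c_char_p, ctypes.POINTER(ctypes.c_float), ctypes.c_int]

    data = (ctypes.c_float * n)(1.0, 2.0, 3.0, 4.0)
    result = (ctypes.c_float * n)()

    lib.inject(b"x", data, n)         # upload data into the named buffer "x"
    lib.vop1(VOP_SIN, b"y", b"x", n)  # elementwise sin: y = sin(x)
    lib.extract(b"y", result, n)      # copy the result back to host memory
    print(list(result))

The same three calls should be reachable from any language with a C FFI, which is presumably what "FFI-ready" means here.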

Built on Vulkan, so it runs everywhere (not just NVIDIA).

    pip install adamah