Bringing NumPy's type-completeness score to nearly 90%

56 points, posted 9 days ago
by todsacerdoti

25 Comments

jofer

5 hours ago

One of my biggest gripes around typing in python actually revolves around things like numpy arrays and other scientific data structures. Typing in python is great if you're only using builtins or things that the typing system was designed for. But it wasn't designed with scientific data structures particularly in mind. Being able to denote dtype (e.g. uint8 array vs int array) is certainly helpful, but only one aspect.

There's no good way to say "expects a 3D array-like" (i.e. something convertible into an array with at least 3 dimensions). Similarly, things like "at least 2-dimensional" just aren't expressible in the type system, though they potentially could be. You wind up relying on docstrings. Personally, I think typing in docstrings is great. At least for me, IDE (vim) hinting/autocompletion/etc. all work already with standard docstrings, and strictly typed interpreters are a completely moot point for most scientific computing. What happens in practice is that you put the real info in the docstring and a type "stub" in the annotations. But if all of the relevant information about the expected type has to live in the docstring anyway, is the additional typing really adding anything?

In short, I'd love to see the ability to indicate expected dimensionality or dimensionality of operation in typing of numpy arrays.

But with that said, I worry that typing for these use cases adds relatively little functionality at the significant expense of readability.
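For what it's worth, recent NumPy releases have started to close part of this gap: `ndarray` is generic over a shape tuple as well as a dtype. A minimal sketch, assuming NumPy >= 2.1 for the shape parameter (the `Image3D` alias and `mean_intensity` are made up for illustration); note that "at least 3-dimensional" is still not expressible:

```python
import numpy as np

# Sketch assuming NumPy >= 2.1, where ndarray is generic over
# (shape-tuple, dtype). An exact rank is expressible; a minimum
# rank ("at least 3D") still is not.
Image3D = np.ndarray[tuple[int, int, int], np.dtype[np.uint8]]

def mean_intensity(img: Image3D) -> float:
    # The annotation only guides static checkers; runtime is unchanged.
    return float(img.mean())

img = np.zeros((4, 4, 3), dtype=np.uint8)
print(mean_intensity(img))  # 0.0
```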

HPsquared

5 hours ago

Have you looked at nptyping? Type hints for ndarray.

https://github.com/ramonhagenaars/nptyping/blob/master/USERD...

kccqzy

an hour ago

What is this project actually for?

Its FAQ states:

> Can MyPy do the instance checking? Because of the dynamic nature of numpy and pandas, this is currently not possible. The checking done by MyPy is limited to detecting whether or not a numpy or pandas type is provided when that is hinted. There are no static checks on shapes, structures or types.

So this is equivalent to not using the library at all and just annotating such values as np.ndarray/np.dtype, etc.

So we expend effort coming up with a type system for numpy, and tools cannot statically check the types? What good are types if they aren't checked? Just more concise documentation for humans?

jofer

5 hours ago

That one's new to me. Thanks! (With that said, I worry that 3rd party libs are a bad place for types for numpy.)

nerdponx

4 hours ago

Numpy ships built-in type hints as well as a type for hinting arrays in your own code (numpy.typing.NDArray).

The real challenge is denoting what you can accept as input. `NDArray[np.floating] | pd.Series[float] | float` is a start but doesn't cover everything especially if you are a library author trying to provide a good type-hinted API.
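For the conversion-heavy input side, a common pattern is to accept `numpy.typing.ArrayLike` and normalize with `np.asarray` at the boundary. A minimal sketch (the `normalize` helper is made up for illustration):

```python
import numpy as np
import numpy.typing as npt

def normalize(x: npt.ArrayLike) -> npt.NDArray[np.float64]:
    # Accept arrays, nested lists, scalars -- anything array-like --
    # and normalize to a float64 ndarray at the boundary.
    return np.asarray(x, dtype=np.float64)

print(normalize([[1, 2], [3, 4]]).shape)  # (2, 2)
print(normalize(2.5).ndim)                # 0
```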

hamasho

4 hours ago

I also had a very hard time annotating types in Python a few years ago. A lot of popular Python libraries, like pandas, SQLAlchemy, Django, and requests, are so flexible that it's almost impossible to infer types automatically without parsing the entire code base. I tried several typing libraries, often created by third parties and not well maintained, but after that painful experience it was clear our development was much faster without them, while type safety was not improved much at all.

davnn

an hour ago

I am using runtime type and shape checking and wrote a tiny library to merge both into a single typecheck decorator [1]. It's not perfect, but I haven't found a better approach yet.

[1] https://github.com/davnn/safecheck
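The comment doesn't show safecheck's API, but the general idea of folding dtype and shape checks into one decorator can be sketched like this (the `typecheck` decorator below is hypothetical, not safecheck's real interface):

```python
import functools

import numpy as np

def typecheck(ndim=None, dtype=None):
    """Hypothetical decorator (not safecheck's actual API): verify the
    first argument's rank and dtype at call time."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(arr, *args, **kwargs):
            if ndim is not None and arr.ndim != ndim:
                raise TypeError(f"expected a {ndim}D array, got {arr.ndim}D")
            if dtype is not None and not np.issubdtype(arr.dtype, dtype):
                raise TypeError(f"expected dtype {dtype}, got {arr.dtype}")
            return fn(arr, *args, **kwargs)
        return inner
    return wrap

@typecheck(ndim=2, dtype=np.floating)
def row_sums(arr):
    return arr.sum(axis=1)

print(row_sums(np.ones((2, 3))))  # [3. 3.]
```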

dwohnitmok

5 hours ago

This isn't static, but jaxtyping gives you at least runtime checks and also a standardized form of documenting those types. https://github.com/patrick-kidger/jaxtyping

jofer

5 hours ago

It actually doesn't, as far as I know :) It does get close, though. I should give it a deeper look than I have previously.

"array-like" has real meaning in the python world and lots of things operate in that world. A very common need in libraries is indicating that things expect something that's either a numpy array or a subclass of one or something that's _convertible_ into a numpy array. That last part is key. E.g. nested lists. Or something with the __array__ interface.

In addition to dimensionality, that part doesn't translate well.

And regardless, if the type representation is not standardized across multiple libraries (i.e. in core numpy), there's little value to it.
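Concretely, "array-like" in this informal sense covers both nested lists and objects implementing `__array__` (the `Grid` class below is made up for illustration):

```python
import numpy as np

class Grid:
    # Implementing __array__ makes an object "array-like":
    # np.asarray calls it to perform the conversion.
    def __array__(self, dtype=None, copy=None):
        return np.eye(2)

print(np.asarray([[1, 2], [3, 4]]).shape)  # (2, 2) -- nested lists
print(np.asarray(Grid()).shape)            # (2, 2) -- __array__ object
```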

nerdponx

4 hours ago

I wonder if we should standardize on __array__ like how Iterable is standardized on the presence of __iter__, which can just return self if the Iterable is already an Iterator.
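That standardization could in principle be spelled as a runtime-checkable Protocol, analogous to how Iterable is keyed on `__iter__`. A hypothetical sketch (neither the stdlib nor NumPy ships a `SupportsArray` protocol today):

```python
from typing import Protocol, runtime_checkable

import numpy as np

@runtime_checkable
class SupportsArray(Protocol):
    """Hypothetical structural type keyed on __array__, the way
    Iterable is keyed on __iter__."""
    def __array__(self) -> np.ndarray: ...

class Grid:
    def __array__(self):
        return np.zeros((2, 2))

print(isinstance(Grid(), SupportsArray))     # True
print(isinstance([1, 2, 3], SupportsArray))  # False
```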

efavdb

5 hours ago

Would a custom decorator work for you?

jofer

5 hours ago

Unless I'm missing something entirely, what would that add? You still can't express the core information you need in the type system.

efavdb

2 hours ago

I meant only that you can insist a parameter has some quality when passed.

CaliforniaKarl

5 hours ago

I've recently been writing a Python SDK for an internal API that's maintained by a different group. I've been making sure to include typing for everything in the SDK, as well as attempting to maximize unit test coverage (using mocked API responses).

It's a heck of a lot of work (easily as much work as writing the actual code!), but typing has already paid dividends as I've been starting to use the SDK for one-off scripts.

tecoholic

6 hours ago

> That's it! CanIndex was an unknown symbol and was probably mistyped, and replacing it with the correct SupportsIndex one brought NumPy's overall type-completeness to over 80%!

Let’s take a pause here for a second: the ‘CanIndex’ and ‘SupportsIndex’, from the looks of it, are just “int”. I have a hard time dealing with these custom types because they are so obscure.

Does anyone find these useful? How should I be reading them? I understand custom types that package data, like classes. But I feel completely confused by these "yadayava is a string, floorpglooep is an int" style types.

CaliforniaKarl

5 hours ago

> Let’s take a pause here for a second - the ‘CanIndex’ and ‘SupportsIndex’ from the looks are just “int”.

The PR for the change is https://github.com/numpy/numpy/pull/28913 - The details of files changed[0] shows the change was made in 'numpy/__init__.pyi'. Looking at the whole file[1] shows SupportsIndex is being imported from the standard library's typing module[2].

Where are you seeing SupportsIndex being defined as an int?

> I have a hard time dealing with these custom types because they are so obscure.

SupportsIndex is obscure, I agree, but it's not a custom type. It's defined in stdlib's typing module[2], and was added in Python 3.8.

[0]: https://github.com/numpy/numpy/pull/28913/files

[1]: https://github.com/charris/numpy/blob/c906f847f8ebfe0adec896...

[2]: https://docs.python.org/3/library/typing.html#typing.Support...

tecoholic

3 hours ago

Thanks for pointing those out. Your comment and the others who responded tell me I don't typically work with complex software. I am a web dev who also has to wrangle data at times; the data part is where I face the issues.

I looked at the function in the article taking in an int and stopped there. Then I read the docs for SupportsIndex, and I just can't think of an occasion when I would have used it.

ameliaquining

2 hours ago

SupportsIndex makes it so that, if you have a value that's not a Python int but can be implicitly and losslessly converted to one, you can use it to index into or slice a NumPy ndarray without first explicitly converting it to a Python int. There are two data types it's especially useful for: NumPy's fixed-width integer types (analogous to the ones in C, exposed because numeric calculations on ndarrays of them are often much faster than on Python's arbitrary-precision int), and zero-dimensional ndarrays (which can't themselves be indexed into or sliced, and are morally equivalent to a single scalar). Programmers often wind up with one of these as the output of some operation, think of them as ints, and don't know or care that they're actually a different data type. NumPy therefore tries to make it so that you can treat them like ints and get the same result; that's the point of a dynamic language like Python. Using SupportsIndex instead of int as an argument type ensures that type checkers won't reject code that does this.

This is admittedly fairly arcane, but only people delving into the guts of NumPy or the Python type system are expected to have to deal with it. The library code is complicated so that your code can be simple: you can just write NumPy code the way you're used to, and the type checker will accept it, without you having to know or care about the above details. That's the goal, anyway.
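Both cases can be seen directly at the REPL: a NumPy fixed-width integer and a zero-dimensional integer array each implement `__index__`, so they index plain Python sequences too:

```python
import numpy as np

data = ["a", "b", "c"]

i = np.int64(1)   # fixed-width NumPy integer
j = np.array(2)   # zero-dimensional integer ndarray

# Both satisfy SupportsIndex via __index__, so plain list
# indexing accepts them without an explicit int() conversion.
print(data[i], data[j])  # b c
```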

parhamn

an hour ago

> SupportsIndex makes it so that, if you have a value that's not a Python int but can be implicitly and losslessly converted to one

Just to be clear for folks who care more about the general Python around this: any class with an __index__ method that returns an int can be used as an index for sequence access, e.g.:

    class FetchIndex:
        def __index__(self):
            return int(requests.get("https://indextolookup.com").text.strip())
            

    a = ["no", "lol", "wow"]
    print(a[FetchIndex()])


The builtin `int` also has a __index__ method that returns itself.

Dunders are part of what makes typing in python so hard.

masspro

5 hours ago

* “which of the 3 big data structures in this part of the program/function/etc is this int/string key an index into?”

* some arithmetic/geometry problems for example 2D layout where there are several different things that are “an integer number of pixels” but mean wildly different things

In either case it can help pick apart dense code or help stay on track while writing it. It can instead become anti-helpful by causing distraction or increasing the friction to make changes.

jkingsman

5 hours ago

I find it especially helpful during refactors — understanding the significance/meaning of the variable and what its values can be (and are constrained to) and not just the type is great, but if typing is complete, I can also go and see everywhere that type can be ingested, knowing every surface that uses it and that might need to be refactored.

strbean

5 hours ago

Typing related NumPy whine:

The type annotations say `float32`, `float64`, etc. are type aliases for `floating[_32Bit]` and `floating[_64Bit]`. In reality, they are concrete classes.

This means that `isinstance(foo, float32)` works as expected, but pisses off static analysis tools.
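The runtime side of this is easy to see; it's only the stub definitions that trip up static tools:

```python
import numpy as np

x = np.float32(1.5)

# At runtime float32 is an ordinary concrete class, so
# isinstance works exactly as expected.
print(isinstance(x, np.float32), isinstance(x, np.floating))  # True True
```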

refactor_master

4 hours ago

I hate it when libraries do that, or expose a function named like a class (`def ClassLike(...)`).

I’ve found the library `beartype` to be very good for runtime checks: it hooks into partially annotated `Annotated` types with custom validators, so instead of instance checks you rely directly on signatures and typeguards.
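The underlying pattern can be sketched with nothing but the stdlib (this `validate` decorator is made up for illustration; beartype's real API differs): callables stashed in `Annotated` metadata are run against arguments at call time.

```python
import functools
from typing import Annotated, get_args, get_origin, get_type_hints

import numpy as np

def validate(fn):
    """Hypothetical sketch of the Annotated-validator pattern:
    run any callables found in a parameter's Annotated metadata."""
    hints = get_type_hints(fn, include_extras=True)

    @functools.wraps(fn)
    def inner(**kwargs):
        for name, value in kwargs.items():
            hint = hints.get(name)
            if hint is not None and get_origin(hint) is Annotated:
                for check in get_args(hint)[1:]:
                    if callable(check) and not check(value):
                        raise TypeError(f"{name!r} failed validation")
        return fn(**kwargs)
    return inner

Matrix = Annotated[np.ndarray, lambda a: a.ndim == 2]

@validate
def trace(m: Matrix):
    return float(np.trace(m))

print(trace(m=np.eye(3)))  # 3.0
```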

t1129437123

4 hours ago

"ArrayLike" is not a type. The entire Python "type system" is really hackish compared to Julia or typed array languages.

Does this "type system" ensure that you can't get a runtime type error? I don't think so.