Result for 3180E2906C8524BA7B9656D93EC1771FD06C165A

Query result

Key                      Value
FileName                 ./usr/lib/python3.8/site-packages/opt_einsum/backends/__pycache__/torch.cpython-38.pyc
FileSize                 3284
MD5                      46EC4E718A926B7AF12354F0662E8D04
SHA-1                    3180E2906C8524BA7B9656D93EC1771FD06C165A
SHA-256                  2EA35B92005500840C5196DF1E6B676B784DA03E608F43D2EB997E608D745C19
SSDEEP                   96:KCI5HPPJnzsQclWi9765gFOtjoUjXTy6qDDw5DRw:KjnJ4UY7q8I9qDww
TLSH                     T16B6162CA3CC2192CFB5DF1F1A08B52516333E3A637E9C537BA2495AB1A464C51F32948
hashlookup:parent-total  1
hashlookup:trust         55
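A record like the one above can typically be retrieved programmatically. The sketch below is a minimal example, assuming the data is served by a hashlookup-compatible service such as the public CIRCL hashlookup API at hashlookup.circl.lu (the host is an assumption, not stated in this result page); only the Python standard library is used.

```python
# Minimal sketch: fetch a file record by SHA-1 from a hashlookup-style API.
# Assumes the public CIRCL hashlookup endpoint is the data source.
import json
import urllib.request

sha1 = "3180E2906C8524BA7B9656D93EC1771FD06C165A"
url = f"https://hashlookup.circl.lu/lookup/sha1/{sha1}"

with urllib.request.urlopen(url) as response:
    record = json.load(response)

# Keys mirror the table above, e.g. FileName, SHA-256, hashlookup:trust.
print(record.get("FileName"))
print(record.get("hashlookup:trust"))
```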

Network graph view

Parents (Total: 1)

The searched file hash is included in 1 parent file, which is a package known to and seen by metalookup. A sample is included below:

Key                 Value
MD5                 5B2F621F42C556B3D50FC9B7D0EDEB19
PackageArch         noarch
PackageDescription  Optimized einsum can significantly reduce the overall execution time of einsum-like expressions (e.g., `np.einsum`, `dask.array.einsum`, `pytorch.einsum`, `tensorflow.einsum`) by optimizing the expression's contraction order and dispatching many operations to canonical BLAS, cuBLAS, or other specialized routines. Optimized einsum is agnostic to the backend and can handle NumPy, Dask, PyTorch, Tensorflow, CuPy, Sparse, Theano, JAX, and Autograd arrays as well as potentially any library which conforms to a standard API. See the [**documentation**](http://optimized-einsum.readthedocs.io) for more information.
PackageName         python38-opt-einsum
PackageRelease      2.1
PackageVersion      3.3.0
SHA-1               E79B623191DE489EABC9F4BCB1335B5F09EF5164
SHA-256             F8F32840CB9C48B1C0945FCA87D8F355D800EBE0BBD35A171831263CE912B27F
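The matched file is a compiled module from the opt_einsum package's torch backend. As a minimal sketch of the behaviour described in PackageDescription (contraction-order optimization with optional backend dispatch), assuming opt_einsum 3.3.0 and NumPy are installed and using illustrative array shapes:

```python
# Sketch of typical opt_einsum usage: plan a contraction order, then evaluate.
import numpy as np
import opt_einsum as oe

x = np.random.rand(32, 64)
y = np.random.rand(64, 16)
z = np.random.rand(16, 8)

# Find an optimized contraction order before evaluating the expression.
path, path_info = oe.contract_path('ab,bc,cd->ad', x, y, z)
print(path_info)  # reports the chosen order and estimated cost

# Evaluate using the optimized path; passing backend='torch' (when PyTorch
# is installed) dispatches the contractions through the torch backend module
# that this .pyc file was compiled from.
result = oe.contract('ab,bc,cd->ad', x, y, z)
```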