Result for 1218CD16B4EC79612E7EB1CEB435899C99B41ED9

Query result

Key                      Value
FileName                 ./usr/lib/python3.8/site-packages/opt_einsum/backends/__pycache__/tensorflow.cpython-38.opt-1.pyc
FileSize                 4421
MD5                      42FFBB4A0706969CC5355AC58F90A2B0
SHA-1                    1218CD16B4EC79612E7EB1CEB435899C99B41ED9
SHA-256                  45C2513DA161C3CBC48E92AAA710A7F80DBA98E66566A9DC8B3BECDE386DA559
SSDEEP                   96:QPsQalX4ZR6TZc0kY+WaHVTHt+2956te26jYSrtw/qcMj:AaZ1klWOZ5eHSrUi
TLSH                     T13F9125FD28466BADFDB5F6F11ADBD3401331A22F2659F142470891AF09C97C529324AD
hashlookup:parent-total  1
hashlookup:trust         55
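
A result of this shape can be reproduced programmatically. The sketch below is a hedged illustration that assumes the record comes from the public CIRCL hashlookup REST service at hashlookup.circl.lu (an assumption; substitute the base URL of whichever hashlookup instance produced this page):

import json
import urllib.request

# Assumed endpoint: the public CIRCL hashlookup instance.
BASE_URL = "https://hashlookup.circl.lu/lookup/sha1/"
SHA1 = "1218CD16B4EC79612E7EB1CEB435899C99B41ED9"

with urllib.request.urlopen(BASE_URL + SHA1) as resp:
    record = json.load(resp)

# Print the same key/value pairs listed in the table above.
for key in ("FileName", "FileSize", "MD5", "SHA-1", "SHA-256",
            "SSDEEP", "TLSH", "hashlookup:parent-total", "hashlookup:trust"):
    print(key, record.get(key))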

Parents (Total: 1)

The searched file hash is included in 1 parent file, which corresponds to a package known and seen by metalookup. A sample is included below:

Key                 Value
MD5                 D0D68673AAA7F0F8025CD2E26F524ED5
PackageArch         noarch
PackageDescription  Optimized einsum can significantly reduce the overall execution time of einsum-like expressions (e.g., `np.einsum`,`dask.array.einsum`,`pytorch.einsum`,`tensorflow.einsum`) by optimizing the expression's contraction order and dispatching many operations to canonical BLAS, cuBLAS, or other specialized routines. Optimized einsum is agnostic to the backend and can handle NumPy, Dask, PyTorch, Tensorflow, CuPy, Sparse, Theano, JAX, and Autograd arrays as well as potentially any library which conforms to a standard API. See the [**documentation**](http://optimized-einsum.readthedocs.io) for more information.
PackageName         python3-opt-einsum
PackageRelease      3.2
PackageVersion      3.1.0
SHA-1               61717F590F6B4F708C57FDA189CE6EF59DDFBE4D
SHA-256             4BB8F400012C11C8AC6E3BAFDD3A680ED68A1AAF6D08CA4FF6FFB0FE12BCE512
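
The PackageDescription above refers to opt_einsum choosing a contraction order rather than evaluating an einsum expression left to right. As a minimal, NumPy-backed sketch (illustrative only, not taken from the packaged sources), this is how the behaviour can be exercised and the chosen path inspected:

import numpy as np
import opt_einsum as oe

# Three chained matrix factors; the contraction order determines the cost.
a = np.random.rand(32, 64)
b = np.random.rand(64, 128)
c = np.random.rand(128, 16)

# Drop-in replacement for np.einsum; the NumPy backend is detected from
# the array types and the intermediate products go through BLAS.
result = oe.contract("ij,jk,kl->il", a, b, c)

# Show the contraction path that was selected and its estimated cost.
path, info = oe.contract_path("ij,jk,kl->il", a, b, c)
print(info)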