Source: armnn
Section: devel
Priority: optional
Maintainer: Francis Murtagh <francis.murtagh@arm.com>
Uploaders: Wookey <wookey@debian.org>, Emanuele Rocca <ema@debian.org>
Build-Depends: libboost-test-dev (>= 1.64),
  libboost-system-dev (>= 1.64), libboost-filesystem-dev (>= 1.64),
  libboost-log-dev (>= 1.64), libboost-program-options-dev (>= 1.64),
  cmake, debhelper-compat (= 12), valgrind, libflatbuffers-dev,
  libarm-compute-dev [arm64 armhf],
  swig (>= 4.0.1-5), dh-python, python3-all, python3-setuptools,
  python3-dev, python3-numpy, xxd, flatbuffers-compiler, chrpath
Standards-Version: 4.6.2
Vcs-Git: https://salsa.debian.org/deeplearning-team/armnn.git
Vcs-Browser: https://salsa.debian.org/deeplearning-team/armnn

Package: libarmnn22
Architecture: amd64 arm64 armhf i386 mipsel mips64el ppc64el
Multi-Arch: same
Depends: ${shlibs:Depends}, ${misc:Depends}
Suggests: libarmnntfliteparser22 (= ${binary:Version}),
          python3-pyarmnn (= ${binary:Version})
Description: Arm NN is an inference engine for CPUs, GPUs and NPUs
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the shared library package.

Package: libarmnn-dev
Architecture: amd64 arm64 armhf i386 mipsel mips64el ppc64el
Multi-Arch: same
Depends: libarmnn22 (= ${binary:Version}), ${shlibs:Depends}, ${misc:Depends}
Description: Arm NN is an inference engine for CPUs, GPUs and NPUs
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the development package containing header files.

Package: libarmnntfliteparser22
Architecture: amd64 arm64 armhf i386 mipsel mips64el ppc64el
Multi-Arch: same
Depends: libarmnn22 (= ${binary:Version}), ${shlibs:Depends}, ${misc:Depends}
Description: Arm NN is an inference engine for CPUs, GPUs and NPUs
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the shared library package for the TensorFlow Lite parser.

Package: libarmnntfliteparser-dev
Architecture: amd64 arm64 armhf i386 mipsel mips64el ppc64el
Multi-Arch: same
Depends: libarmnn-dev (= ${binary:Version}),
         libarmnntfliteparser22 (= ${binary:Version}),
         ${shlibs:Depends},
         ${misc:Depends}
Description: Arm NN is an inference engine for CPUs, GPUs and NPUs
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the development package containing header files for the
 TensorFlow Lite parser.

Package: python3-pyarmnn
Architecture: amd64 arm64 armhf i386 mipsel mips64el ppc64el
Depends: libarmnn22 (= ${binary:Version}),
         libarmnntfliteparser22 (= ${binary:Version}),
         ${shlibs:Depends},
         ${misc:Depends},
         ${python3:Depends}
Recommends: libarmnn-cpuref-backend22
Description: PyArmNN is a Python extension for the Arm NN SDK
 PyArmNN provides an interface similar to the Arm NN C++ API.
 .
 PyArmNN is built around the public headers from the armnn/include folder
 of Arm NN. PyArmNN does not implement any computation kernels itself;
 all operations are delegated to the Arm NN library.

Package: libarmnn-cpuacc-backend22
Architecture: arm64
Multi-Arch: same
Depends: libarmnn22 (= ${binary:Version}), ${shlibs:Depends}, ${misc:Depends}
Description: Arm NN is an inference engine for CPUs, GPUs and NPUs
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the dynamically loadable Neon (CpuAcc) backend package.

Package: libarmnnaclcommon22
Architecture: armhf arm64
Multi-Arch: same
Depends: libarmnn22 (= ${binary:Version}), ${shlibs:Depends}, ${misc:Depends}
Description: Arm NN is an inference engine for CPUs, GPUs and NPUs
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the common shared library used by Arm Compute Library backends.

Package: libarmnn-cpuref-backend22
Architecture: amd64 arm64 armhf i386 mipsel mips64el ppc64el
Multi-Arch: same
Depends: libarmnn22 (= ${binary:Version}),
         libarmnnaclcommon22 (= ${binary:Version}) [arm64 armhf],
         ${shlibs:Depends},
         ${misc:Depends}
Description: Arm NN is an inference engine for CPUs, GPUs and NPUs
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the dynamically loadable Reference backend package.

Package: libarmnn-gpuacc-backend22
Architecture: armhf arm64
Multi-Arch: same
Depends: libarmnn22 (= ${binary:Version}),
         libarmnnaclcommon22 (= ${binary:Version}) [arm64 armhf],
         ${shlibs:Depends},
         ${misc:Depends}
Description: Arm NN is an inference engine for CPUs, GPUs and NPUs
 Arm NN is a set of tools that enables machine learning workloads on
 any hardware. It provides a bridge between existing neural network
 frameworks and whatever hardware is available and supported. On arm
 architectures (arm64 and armhf) it utilizes the Arm Compute Library
 to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
 possible. On other architectures/hardware it falls back to unoptimised
 functions.
 .
 This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
 Arm NN takes networks from these frameworks, translates them
 to the internal Arm NN format and then through the Arm Compute Library,
 deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs.
 .
 This is the dynamically loadable OpenCL (GpuAcc) backend package.
