Intel® oneAPI Math Kernel Library User Guide
- June 9, 2024
- Intel
Get Started with Intel® oneAPI Math Kernel Library
The Intel® oneAPI Math Kernel Library (oneMKL) helps you achieve maximum performance with a math computing library of highly optimized, extensively parallelized routines for CPU and GPU. The library has C and Fortran interfaces for most routines on CPU, and DPC++ interfaces for some routines on both CPU and GPU. You can find comprehensive support for several math operations in various interfaces including:
For C and Fortran on CPU
- Linear algebra
- Fast Fourier Transforms (FFT)
- Vector math
- Direct and iterative sparse solvers
- Random number generators
For DPC++ on CPU and GPU (Refer to the Intel® oneAPI Math Kernel Library—Data Parallel C++ Developer Reference for more details.)
- Linear algebra
- BLAS
- Selected Sparse BLAS functionality
- Selected LAPACK functionality
- Fast Fourier Transforms (FFT)
- 1D, 2D, and 3D
- Random number generators
- Selected functionality
- Selected Vector Math functionality
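As a quick illustration of the DPC++ interface, the following minimal sketch (not taken from this guide) calls GEMM from the oneMKL BLAS domain; it assumes the buffer-based oneapi::mkl::blas::column_major::gemm overload and the <oneapi/mkl.hpp> header, and could serve as the main.cpp compiled by the link lines shown in Step 3.
// Minimal oneMKL DPC++ sketch (illustrative; assumes the buffer-based
// column_major::gemm overload from the oneMKL BLAS domain).
#include <sycl/sycl.hpp>
#include <oneapi/mkl.hpp>
#include <iostream>
#include <vector>

int main() {
    sycl::queue q;  // default device: CPU or GPU, whichever the runtime selects

    const std::int64_t m = 64, n = 64, k = 64;
    std::vector<double> a(m * k, 1.0), b(k * n, 2.0), c(m * n, 0.0);

    {
        // Buffers synchronize their contents back to the host vectors on destruction.
        sycl::buffer<double, 1> A(a.data(), sycl::range<1>(a.size()));
        sycl::buffer<double, 1> B(b.data(), sycl::range<1>(b.size()));
        sycl::buffer<double, 1> C(c.data(), sycl::range<1>(c.size()));

        // C = 1.0 * A * B + 0.0 * C, column-major layout
        oneapi::mkl::blas::column_major::gemm(
            q, oneapi::mkl::transpose::nontrans, oneapi::mkl::transpose::nontrans,
            m, n, k, 1.0, A, m /*lda*/, B, k /*ldb*/, 0.0, C, m /*ldc*/);
    }

    std::cout << "c[0] = " << c[0] << std::endl;  // expect 2 * k = 128
    return 0;
}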
Before You Begin
Visit the Release Notes page for known issues and the most up-to-date
information.
Visit the Intel® oneAPI Math Kernel Library System Requirements page for
system requirements.
Visit Get Started with the Intel® oneAPI DPC++/C++ Compiler for DPC++
compiler requirements.
Step 1: Install Intel® oneAPI Math Kernel Library
Download Intel® oneAPI Math Kernel Library from the Intel® oneAPI Base
Toolkit.
For Python distributions, refer to Installing the Intel® Distribution for
Python and Intel® Performance Libraries with pip and PyPI.
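For instance, a typical installation of the oneMKL runtime and devel packages from PyPI might look like the line below (the package names mkl and mkl-devel reflect the published wheels; confirm them against the linked instructions):
pip install mkl mkl-devel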
For Python distributions, note the following limitation:
The oneMKL devel package (mkl-devel) for the pip distribution on Linux and
macOS* does not provide symlinks for the dynamic libraries (for more
information, see pip GitHub issue #5919).
For dynamic or single dynamic library linking with the oneMKL devel package
(for more information, see the oneMKL Link Line Advisor), you must modify the
link line to use the full oneMKL library names, including versions.
Refer to Intel® oneAPI Math Kernel Library and pkg-config tool for information
about compiling and linking with the pkg-config tool.
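For example, assuming the pkg-config metadata files shipped under ${MKLROOT}/lib/pkgconfig and a module name such as mkl-dynamic-lp64-iomp (dynamic linking, LP64 interface, Intel OpenMP threading; confirm the exact module names installed on your system), a compile/link line might look like:
icc app.c $(pkg-config --cflags --libs mkl-dynamic-lp64-iomp)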
oneMKL link line example with the oneAPI Base Toolkit via symlinks:
- Linux:
icc app.obj -L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl
- macOS:
icc app.obj -L${MKLROOT}/lib -Wl,-rpath,${MKLROOT}/lib -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl
oneMKL link line example with the pip devel package via full library names and versions:
- Linux:
icc app.obj ${MKLROOT}/lib/intel64/libmkl_intel_lp64.so.1 ${MKLROOT}/lib/intel64/libmkl_intel_thread.so.1 ${MKLROOT}/lib/intel64/libmkl_core.so.1 -liomp5 -lpthread -lm -ldl
- macOS:
icc app.obj -Wl,-rpath,${MKLROOT}/lib ${MKLROOT}/lib/intel64/libmkl_intel_lp64.1.dylib ${MKLROOT}/lib/intel64/libmkl_intel_thread.1.dylib ${MKLROOT}/lib/intel64/libmkl_core.1.dylib -liomp5 -lpthread -lm -ldl
Step 2: Select a Function or Routine
Select a function or routine from oneMKL that is best suited for your problem.
Use these resources:
Resource Link: Contents
oneMKL Developer Guide for Linux
oneMKL Developer Guide for Windows
oneMKL Developer Guide for macOS*
The Developer Guide contains detailed information on several topics including:
- Compiling and linking applications
- Building custom DLLs
- Threading
- Memory Management
oneMKL Developer Reference – C Language
oneMKL Developer Reference – Fortran Language
oneMKL Developer Reference – DPC++ Language
- The Developer Reference (in C, Fortran, and DPC++ formats) contains detailed descriptions of the functions and interfaces for all library domains.
Intel® oneAPI Math Kernel Library Function Finding Advisor
- Use the LAPACK Function Finding Advisor to explore LAPACK routines that are useful for a particular problem. For example, you can specify an operation as follows (a short code sketch follows this list):
- Routine type: Computational
- Computational problem: Orthogonal factorization
- Matrix type: General
- Operation: Perform QR factorization
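With those selections, the advisor points you to the QR factorization routines, such as ?geqrf. The following minimal sketch (illustrative, not produced by the advisor) uses the LAPACKE C interface for the double-precision variant, LAPACKE_dgeqrf:
// QR factorization of a general double-precision matrix via LAPACKE_dgeqrf
// (illustrative sketch; row-major layout, m >= n).
#include <stdio.h>
#include "mkl.h"

int main() {
    const lapack_int m = 4, n = 3;
    double a[] = { 1, 2, 3,
                   4, 5, 6,
                   7, 8, 10,
                   2, 1, 0 };      // m x n matrix, row-major
    double tau[3];                 // scalar factors of the elementary reflectors

    lapack_int info = LAPACKE_dgeqrf(LAPACK_ROW_MAJOR, m, n, a, n /*lda*/, tau);
    if (info != 0) {
        printf("LAPACKE_dgeqrf failed: %d\n", (int)info);
        return 1;
    }
    // On exit, the upper triangle of a holds R; the part below the diagonal,
    // together with tau, encodes the orthogonal factor Q.
    printf("R(0,0) = %f\n", a[0]);
    return 0;
}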
Step 3: Link Your Code
Use the oneMKL Link Line Advisor to configure the link command according to
your program features.
Some limitations and additional requirements:
Intel® oneAPI Math Kernel Library for DPC++ supports only the mkl_intel_ilp64
interface library with sequential or TBB threading.
For DPC++ interfaces with static linking on Linux
icpx -fsycl -fsycl-device-code-split=per_kernel -DMKL_ILP64 <typical user includes and linking flags and other libs> ${MKLROOT}/lib/intel64/libmkl_sycl.a -Wl,--start-group ${MKLROOT}/lib/intel64/libmkl_intel_ilp64.a ${MKLROOT}/lib/intel64/libmkl_<sequential|tbb_thread>.a ${MKLROOT}/lib/intel64/libmkl_core.a -Wl,--end-group -lsycl -lOpenCL -lpthread -ldl -lm
For example, building/statically linking main.cpp with ilp64 interfaces and
TBB threading:
icpx -fsycl -fsycl-device-code-split=per_kernel -DMKL_ILP64 -I${MKLROOT}/include main.cpp ${MKLROOT}/lib/intel64/libmkl_sycl.a -Wl,--start-group ${MKLROOT}/lib/intel64/libmkl_intel_ilp64.a ${MKLROOT}/lib/intel64/libmkl_tbb_thread.a ${MKLROOT}/lib/intel64/libmkl_core.a -Wl,--end-group -L${TBBROOT}/lib/intel64/gcc4.8 -ltbb -lsycl -lOpenCL -lpthread -lm -ldl
For DPC++ interfaces with dynamic linking on Linux
icpx -fsycl -DMKL_ILP64 <typical user includes and linking flags and other libs> -L${MKLROOT}/lib/intel64 -lmkl_sycl -lmkl_intel_ilp64 -lmkl_<sequential|tbb_thread> -lmkl_core -lsycl -lOpenCL -lpthread -ldl -lm
For example, building/dynamically linking main.cpp with ilp64 interfaces and
TBB threading:
icpx -fsycl -DMKL_ILP64 -I${MKLROOT}/include main.cpp -L${MKLROOT}/lib/intel64
-lmkl_sycl -lmkl_intel_ilp64 -lmkl_tbb_thread -lmkl_core -lsycl -lOpenCL -ltbb
-lpthread -ldl -lm
For DPC++ interfaces with static linking on Windows
icpx -fsycl -fsycl-device-code-split=per_kernel -DMKL_ILP64 <typical user includes and linking flags and other libs> "%MKLROOT%"\lib\intel64\mkl_sycl.lib mkl_intel_ilp64.lib mkl_<sequential|tbb_thread>.lib mkl_core.lib sycl.lib OpenCL.lib
For example, building/statically linking main.cpp with ilp64 interfaces and
TBB threading:
icpx -fsycl -fsycl-device-code-split=per_kernel -DMKL_ILP64 -I"%MKLROOT%\include" main.cpp "%MKLROOT%"\lib\intel64\mkl_sycl.lib mkl_intel_ilp64.lib mkl_tbb_thread.lib mkl_core.lib sycl.lib OpenCL.lib tbb.lib
For DPC++ interfaces with dynamic linking on Windows
icpx -fsycl -DMKL_ILP64 <typical user includes and linking flags and other libs> "%MKLROOT%"\lib\intel64\mkl_sycl_dll.lib mkl_intel_ilp64_dll.lib mkl_<sequential|tbb_thread>_dll.lib mkl_core_dll.lib tbb.lib sycl.lib OpenCL.lib
For example, building/dynamically linking main.cpp with ilp64 interfaces and
TBB threading:
icpx -fsycl -fsycl-device-code-split=per_kernel -DMKL_ILP64 -I"%MKLROOT%\include" main.cpp "%MKLROOT%"\lib\intel64\mkl_sycl_dll.lib mkl_intel_ilp64_dll.lib mkl_tbb_thread_dll.lib mkl_core_dll.lib tbb.lib sycl.lib OpenCL.lib
For C/Fortran Interfaces with OpenMP Offload Support
Use the C/Fortran Intel® oneAPI Math Kernel Library interfaces with the OpenMP
offload feature to run oneMKL computations on the GPU.
See the C OpenMP Offload Developer Guide for more details about this feature.
Add the following changes to the C/Fortran oneMKL compile/link lines to enable
the OpenMP offload feature to the GPU:
- Additional compile/link options: -fiopenmp -fopenmp-targets=spir64 -mllvm -vpo-paropt-use-raw-dev-ptr -fsycl
- Additional oneMKL library: oneMKL DPC++ library
For example, building/dynamically linking main.cpp on Linux with ilp64
interfaces and OpenMP threading:
icx -fiopenmp -fopenmp-targets=spir64 -mllvm -vpo-paropt-use-raw-dev-ptr -fsycl -DMKL_ILP64 -m64 -I${MKLROOT}/include main.cpp -L${MKLROOT}/lib/intel64 -lmkl_sycl -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lsycl -lOpenCL -lstdc++ -lpthread -lm -ldl
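To give a sense of what such a main.cpp might contain, here is a minimal sketch (not from this guide) based on the OpenMP target variant dispatch pattern documented for oneMKL; it assumes the mkl_omp_offload.h header and the Fortran-style dgemm C prototype from mkl.h:
// Minimal OpenMP offload sketch: dispatch oneMKL dgemm to the GPU
// (illustrative; assumes mkl.h / mkl_omp_offload.h and ILP64 integers).
#include <stdio.h>
#include "mkl.h"
#include "mkl_omp_offload.h"

int main() {
    const MKL_INT n = 256;
    const double alpha = 1.0, beta = 0.0;
    double *a = (double *)mkl_malloc(n * n * sizeof(double), 64);
    double *b = (double *)mkl_malloc(n * n * sizeof(double), 64);
    double *c = (double *)mkl_malloc(n * n * sizeof(double), 64);
    for (MKL_INT i = 0; i < n * n; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

    // Map the matrices to the device, then dispatch the GPU variant of dgemm.
    #pragma omp target data map(to: a[0:n*n], b[0:n*n]) map(tofrom: c[0:n*n])
    {
        #pragma omp target variant dispatch use_device_ptr(a, b, c)
        dgemm("N", "N", &n, &n, &n, &alpha, a, &n, b, &n, &beta, c, &n);
    }

    printf("c[0] = %f\n", c[0]);  // expect 2 * n
    mkl_free(a); mkl_free(b); mkl_free(c);
    return 0;
}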
For all other supported configurations, see Intel® oneAPI Math Kernel Library
Link Line Advisor.
Find More
Resource: Description
Tutorial: Using Intel® oneAPI Math Kernel Library for Matrix Multiplication:
- Tutorial – C Language
- Tutorial – Fortran Language
This tutorial demonstrates how you can use oneMKL to multiply matrices, measure the performance of matrix multiplication, and control threading.
Intel® oneAPI Math Kernel Library (oneMKL) Release Notes
The release notes contain information specific to the latest release of oneMKL
including new and changed features. The release notes include links to
principal online information resources related to the release. You can also
find information on:
- What’s new in the release
- Product contents
- Obtaining technical support
- License definitions
Intel® oneAPI Math Kernel Library
The Intel® oneAPI Math Kernel Library (oneMKL) product page. See this page for
support and online documentation.
Intel® oneAPI Math Kernel Library Cookbook
The Intel® oneAPI Math Kernel Library contains many routines to help you solve
various numerical problems, such as multiplying matrices, solving a system of
equations, and performing a Fourier transform.
Notes for Intel® oneAPI Math Kernel Library Vector Statistics
This document includes an overview, a usage model, and testing results of the
random number generators included in Vector Statistics (VS).
Intel® oneAPI Math Kernel Library Vector Statistics Random Number Generator
Performance Data
Performance data for the vector statistics (VS) random number generators
(RNG), including the CPE (clocks per element) unit of measure, basic random
number generators (BRNGs), distribution generators, and lengths of generated
vectors.
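To make the BRNG and distribution-generator terminology concrete, here is a minimal sketch (illustrative, not taken from the performance document) that pairs the MT19937 basic generator with a uniform distribution via the VS C API in mkl_vsl.h:
// Generate uniform random numbers with the Vector Statistics (VS) RNG API
// (illustrative sketch; MT19937 BRNG, uniform distribution on [0, 1)).
#include <stdio.h>
#include "mkl_vsl.h"

int main() {
    VSLStreamStatePtr stream;
    double r[8];

    vslNewStream(&stream, VSL_BRNG_MT19937, 777);                     // BRNG + seed
    vdRngUniform(VSL_RNG_METHOD_UNIFORM_STD, stream, 8, r, 0.0, 1.0); // distribution generator
    vslDeleteStream(&stream);

    for (int i = 0; i < 8; i++) printf("%f\n", r[i]);
    return 0;
}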
Intel® oneAPI Math Kernel Library Vector Mathematics Performance and Accuracy
Data
Vector Mathematics (VM) computes elementary functions on vector arguments. VM
includes a set of highly optimized implementations of computationally
expensive core mathematical functions (power, trigonometric, exponential,
hyperbolic, and others) that operate on vectors.
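For instance, a minimal sketch (not taken from the performance document) that evaluates the exponential element-wise with the VM routine vdExp:
// Evaluate exp() element-wise with the Vector Mathematics (VM) routine vdExp
// (illustrative sketch).
#include <stdio.h>
#include "mkl.h"

int main() {
    const MKL_INT n = 4;
    double a[] = {0.0, 1.0, 2.0, 3.0};
    double y[4];

    vdExp(n, a, y);  // y[i] = exp(a[i])
    for (int i = 0; i < n; i++) printf("exp(%g) = %f\n", a[i], y[i]);
    return 0;
}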
Application Notes for Intel® oneAPI Math Kernel Library Summary Statistics
Summary Statistics is a subcomponent of the Vector Statistics domain of Intel®
oneAPI Math Kernel Library. Summary Statistics provides you with functions for
initial statistical analysis, and offers solutions for parallel processing of
multi-dimensional datasets.
LAPACK Examples
This document provides code examples for oneMKL LAPACK (Linear Algebra
PACKage) routines.
Notices and Disclaimers
Software and workloads used in performance tests may have been optimized
for performance only on Intel microprocessors. Performance tests, such as
SYSmark and MobileMark, are measured using specific computer systems,
components, software, operations and functions. Any change to any of those
factors may cause the results to vary. You should consult other information
and performance tests to assist you in fully evaluating your contemplated
purchases, including the performance of that product when combined with other
products. For more complete information visit
www.intel.com/benchmarks.
Intel technologies may require enabled hardware, software or service
activation.
No product or component can be absolutely secure.
Your costs and results may vary.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are
trademarks of Intel Corporation or its subsidiaries. Other names and brands
may be claimed as the property of others.
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at
www.Intel.com/PerformanceIndex.
Notice revision #20201201
No license (express or implied, by estoppel or otherwise) to any intellectual
property rights is granted by this document.
The products described may contain design defects or errors known as errata
which may cause the product to deviate from published specifications. Current
characterized errata are available on request.
Intel disclaims all express and implied warranties, including without
limitation, the implied warranties of merchantability, fitness for a
particular purpose, and non-infringement, as well as any warranty arising from
course of performance, course of dealing, or usage in trade.
References
- Symlink (and other) handling of archives · Issue #5919 · pypa/pip · GitHub
- Intel® Distribution for Python*
- Intel® Math Kernel Library (Intel® MKL) and pkg-config tool
- Intel® Math Kernel Library Release Notes and New Features
- LAPACK Function Finding Advisor for Intel® oneAPI Math Kernel Library
- Link Line Advisor for Intel® oneAPI Math Kernel Library
- Intel® oneAPI Math Kernel Library (oneMKL) Release Notes
- Intel® oneAPI Math Kernel Library System Requirements
- Get Started with OpenMP* Offload to GPU for the Intel® oneAPI...
- Get Started with the Intel® oneAPI DPC++/C++ Compiler
- Intel® oneAPI Math Kernel Library - Data Parallel C++ Developer...
- Intel® oneAPI Math Kernel Library Cookbook
- Developer Reference for Intel® oneAPI Math Kernel Library - C
- Developer Reference for Intel® oneAPI Math Kernel Library - Fortran
- Developer Guide for Intel® oneAPI Math Kernel Library for Linux*
- Getting Help and Support
- Application Notes for Intel® oneAPI Math Kernel Library Summary...
- Tutorial: Using the Intel® oneAPI Math Kernel Library (oneMKL) for...
- Tutorial: Using the Intel® oneAPI Math Kernel Library (oneMKL) for...
- Intel® oneAPI Math Kernel Library Vector Mathematics Performance and...
- Notes for Intel® oneAPI Math Kernel Library Vector Statistics
- Intel® oneAPI Math Kernel Library Vector Statistics Random Number...
- Developer Guide for Intel® oneAPI Math Kernel Library for Windows*
- Accelerate Fast Math with Intel® oneAPI Math Kernel Library
- Intel® oneAPI Math Kernel Library LAPACK Examples
- Performance Index