diff --git a/docs/search/search_index.json b/docs/search/search_index.json index e14722e5b..51aa1f234 100644 --- a/docs/search/search_index.json +++ b/docs/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":""},{"location":"#introduction","title":"Introduction","text":"

OpenBLAS is an optimized Basic Linear Algebra Subprograms (BLAS) library based on GotoBLAS2 1.13 BSD version.

OpenBLAS implements low-level routines for performing linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. OpenBLAS makes these routines available on multiple platforms, covering server, desktop and mobile operating systems, as well as different architectures including x86, ARM, MIPS, PPC, RISC-V, and zarch.

The old GotoBLAS documentation can be found on GitHub.

"},{"location":"#license","title":"License","text":"

OpenBLAS is licensed under the 3-clause BSD license. The full license can be found on GitHub.

"},{"location":"about/","title":"About","text":""},{"location":"about/#mailing-list","title":"Mailing list","text":"

We have a GitHub discussions forum to discuss usage and development of OpenBLAS. We also have a Google group for users and a Google group for development of OpenBLAS.

"},{"location":"about/#donations","title":"Donations","text":"

You can read the OpenBLAS statement of receipts, disbursements and cash balance in a Google Doc. A backer list is available on GitHub.

We welcome hardware donations, including the latest CPUs and boards.

"},{"location":"about/#acknowledgements","title":"Acknowledgements","text":"

This work is partially supported by * Research and Development of Compiler System and Toolchain for Domestic CPU, National S&T Major Projects: Core Electronic Devices, High-end General Chips and Fundamental Software (No.2009ZX01036-001-002) * National High-tech R&D Program of China (Grant No.2012AA010903)

"},{"location":"about/#users-of-openblas","title":"Users of OpenBLAS","text":""},{"location":"about/#publications","title":"Publications","text":""},{"location":"about/#2013","title":"2013","text":""},{"location":"about/#2012","title":"2012","text":""},{"location":"build_system/","title":"Build system","text":"

Warning

This page was written by someone who is not the developer, and should not be considered official documentation of the build system. To get the full picture, it is best to read the Makefiles and understand them yourself.

"},{"location":"build_system/#makefile-dep-graph","title":"Makefile dep graph","text":"
Makefile                                                        \n|                                                               \n|-----  Makefile.system # !!! this is included by many of the Makefiles in the subdirectories !!!\n|       |\n|       |=====  Makefile.prebuild # This is triggered (not included) once by Makefile.system \n|       |       |                 # and runs before any of the actual library code is built.\n|       |       |                 # (builds and runs the \"getarch\" tool for cpu identification,\n|       |       |                 # runs the compiler detection scripts c_check and f_check) \n|       |       |\n|       |       -----  (Makefile.conf) [ either this or Makefile_kernel.conf is generated ] \n|       |       |                            { Makefile.system#L243 }\n|       |       -----  (Makefile_kernel.conf) [ temporary Makefile.conf during DYNAMIC_ARCH builds ]\n|       |\n|       |-----  Makefile.rule # defaults for build options that can be given on the make command line\n|       |\n|       |-----  Makefile.$(ARCH) # architecture-specific compiler options and OpenBLAS buffer size values\n|\n|~~~~~ exports/\n|\n|~~~~~ test/\n|\n|~~~~~ utest/  \n|\n|~~~~~ ctest/\n|\n|~~~~~ cpp_thread_test/\n|\n|~~~~~ kernel/\n|\n|~~~~~ ${SUBDIRS}\n|\n|~~~~~ ${BLASDIRS}\n|\n|~~~~~ ${NETLIB_LAPACK_DIR}{,/timing,/testing/{EIG,LIN}}\n|\n|~~~~~ relapack/\n
"},{"location":"build_system/#important-variables","title":"Important Variables","text":"

Most of the tunable variables are found in Makefile.rule, along with their detailed descriptions. Most of the variables are detected automatically in Makefile.prebuild, if they are not set in the environment.

"},{"location":"build_system/#cpu-related","title":"CPU related","text":"
ARCH         - Target architecture (e.g. x86_64)\nTARGET       - Target CPU architecture; with DYNAMIC_ARCH=1 this is the baseline, meaning the library will not be usable on less capable CPUs\nTARGET_CORE  - Overrides TARGET internally during each cpu-specific cycle of a DYNAMIC_ARCH build\nDYNAMIC_ARCH - Build a library supporting multiple TARGETs (does not lose any optimizations, but increases library size)\nDYNAMIC_LIST - optional user-provided subset of the DYNAMIC_CORE list in Makefile.system\n
"},{"location":"build_system/#toolchain-related","title":"Toolchain related","text":"
CC                 - TARGET C compiler used for compilation (can be a cross-toolchain)\nFC                 - TARGET Fortran compiler used for compilation (can be a cross-toolchain; set NOFORTRAN=1 if the cross-toolchain has no Fortran compiler)\nAR, AS, LD, RANLIB - TARGET toolchain helpers used for compilation (can be cross-toolchains)\n\nHOSTCC             - compiler of the build machine, needed to create proper config files for the target architecture\nHOST_CFLAGS        - flags for the build machine compiler\n
"},{"location":"build_system/#library-related","title":"Library related","text":"
BINARY          - 32/64 bit library\n\nBUILD_SHARED    - Create shared library\nBUILD_STATIC    - Create static library\n\nQUAD_PRECISION  - enable support for IEEE quad precision [ largely unimplemented leftover from GotoBLAS, do not use ]\nEXPRECISION     - Obsolete option to use float80 of SSE on BSD-like systems\nINTERFACE64     - Build with 64bit integer representations to support large array index values [ incompatible with standard API ]\n\nBUILD_SINGLE    - build the single-precision real functions of BLAS [and optionally LAPACK]\nBUILD_DOUBLE    - build the double-precision real functions\nBUILD_COMPLEX   - build the single-precision complex functions\nBUILD_COMPLEX16 - build the double-precision complex functions\n(all four types are included in the build by default when none is specifically selected)\n\nBUILD_BFLOAT16  - build the \"half precision brainfloat\" real functions\n\nUSE_THREAD      - Use a multithreading backend (defaults to pthreads)\nUSE_LOCKING     - implement locking for thread safety even when USE_THREAD is not set (so that the singlethreaded library can\n                  safely be called from multithreaded programs)\nUSE_OPENMP      - Use OpenMP as the multithreading backend\nNUM_THREADS     - define this to the maximum number of parallel threads you expect to need (defaults to the number of cores in the build cpu)\nNUM_PARALLEL    - define this to the number of OpenMP instances that your code may use for parallel calls into OpenBLAS (default 1, see below)\n

OpenBLAS uses a fixed set of memory buffers internally, used for communicating and compiling partial results from individual threads. For efficiency, the management array structure for these buffers is sized at build time - this makes it necessary to know in advance how many threads need to be supported on the target system(s). With OpenMP, there is an additional level of complexity, as there may be calls originating from a parallel region in the calling program. If OpenBLAS gets called from a single parallel region, it runs single-threaded automatically to avoid overloading the system by fanning out its own set of threads. If an OpenMP program makes multiple calls from independent regions or instances in parallel, this default serialization is not sufficient, as the additional caller(s) would compete for the original set of buffers already in use by the first call. So if multiple OpenMP runtimes call into OpenBLAS at the same time, only one of them will be able to make progress while all the others spin-wait for the one available buffer set. Setting NUM_PARALLEL to the upper bound on the number of OpenMP runtimes that you can have in a process ensures that there is a sufficient number of buffer sets available.
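
A hypothetical sketch (not taken from the OpenBLAS sources or build system) of the kind of concurrent, independent calls described above: two application threads entering an OpenMP-built OpenBLAS at the same time. Per the explanation, the default NUM_PARALLEL=1 provides only one buffer set, so one of the callers would spin-wait; building with NUM_PARALLEL=2 would give each caller its own set.

#include <cblas.h>\n#include <pthread.h>\n/* hypothetical illustration: two independent callers into an OpenMP-built OpenBLAS */\n#define N 256\nstatic double A[N*N], B[N*N], C1[N*N], C2[N*N];\nstatic void *caller(void *out) {\n    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, N, N, N, 1.0, A, N, B, N, 0.0, (double *)out, N);\n    return NULL;\n}\nint main(void) {\n    pthread_t t1, t2;\n    pthread_create(&t1, NULL, caller, C1);\n    pthread_create(&t2, NULL, caller, C2);\n    pthread_join(t1, NULL);\n    pthread_join(t2, NULL);\n    return 0;\n}\n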

"},{"location":"ci/","title":"CI jobs","text":"Arch Target CPU OS Build system XComp to C Compiler Fortran Compiler threading DYN_ARCH INT64 Libraries CI Provider CPU count x86_64 Intel 32bit Windows CMAKE/VS2015 - mingw6.3 - pthreads - - static Appveyor x86_64 Intel Windows CMAKE/VS2015 - mingw5.3 - pthreads - - static Appveyor x86_64 Intel Centos5 gmake - gcc 4.8 gfortran pthreads + - both Azure x86_64 SDE (SkylakeX) Ubuntu CMAKE - gcc gfortran pthreads - - both Azure x86_64 Haswell/ SkylakeX Windows CMAKE/VS2017 - VS2017 - - - static Azure x86_64 \" Windows mingw32-make - gcc gfortran list - both Azure x86_64 \" Windows CMAKE/Ninja - LLVM - - - static Azure x86_64 \" Windows CMAKE/Ninja - LLVM flang - - static Azure x86_64 \" Windows CMAKE/Ninja - VS2022 flang* - - static Azure x86_64 \" macOS11 gmake - gcc-10 gfortran OpenMP + - both Azure x86_64 \" macOS11 gmake - gcc-10 gfortran none - - both Azure x86_64 \" macOS12 gmake - gcc-12 gfortran pthreads - - both Azure x86_64 \" macOS11 gmake - llvm - OpenMP + - both Azure x86_64 \" macOS11 CMAKE - llvm - OpenMP no_avx512 - static Azure x86_64 \" macOS11 CMAKE - gcc-10 gfortran pthreads list - shared Azure x86_64 \" macOS11 gmake - llvm ifort pthreads - - both Azure x86_64 \" macOS11 gmake arm AndroidNDK-llvm - - - both Azure x86_64 \" macOS11 gmake arm64 XCode 12.4 - + - both Azure x86_64 \" macOS11 gmake arm XCode 12.4 - + - both Azure x86_64 \" Alpine Linux(musl) gmake - gcc gfortran pthreads + - both Azure arm64 Apple M1 OSX CMAKE/XCode - LLVM - OpenMP - - static Cirrus arm64 Apple M1 OSX CMAKE/Xcode - LLVM - OpenMP - + static Cirrus arm64 Apple M1 OSX CMAKE/XCode x86_64 LLVM - - + - static Cirrus arm64 Neoverse N1 Linux gmake - gcc10.2 - pthreads - - both Cirrus arm64 Neoverse N1 Linux gmake - gcc10.2 - pthreads - + both Cirrus arm64 Neoverse N1 Linux gmake - gcc10.2 - OpenMP - - both Cirrus 8 x86_64 Ryzen FreeBSD gmake - gcc12.2 gfortran pthreads - - both Cirrus x86_64 Ryzen FreeBSD gmake gcc12.2 gfortran pthreads - + both Cirrus x86_64 GENERIC QEMU gmake mips64 gcc gfortran pthreads - - static Github x86_64 SICORTEX QEMU gmake mips64 gcc gfortran pthreads - - static Github x86_64 I6400 QEMU gmake mips64 gcc gfortran pthreads - - static Github x86_64 P6600 QEMU gmake mips64 gcc gfortran pthreads - - static Github x86_64 I6500 QEMU gmake mips64 gcc gfortran pthreads - - static Github x86_64 Intel Ubuntu CMAKE - gcc-11.3 gfortran pthreads + - static Github x86_64 Intel Ubuntu gmake - gcc-11.3 gfortran pthreads + - both Github x86_64 Intel Ubuntu CMAKE - gcc-11.3 flang-classic pthreads + - static Github x86_64 Intel Ubuntu gmake - gcc-11.3 flang-classic pthreads + - both Github x86_64 Intel macOS12 CMAKE - AppleClang 14 gfortran pthreads + - static Github x86_64 Intel macOS12 gmake - AppleClang 14 gfortran pthreads + - both Github x86_64 Intel Windows2022 CMAKE/Ninja - mingw gcc 13 gfortran + - static Github x86_64 Intel Windows2022 CMAKE/Ninja - mingw gcc 13 gfortran + + static Github x86_64 Intel 32bit Windows2022 CMAKE/Ninja - mingw gcc 13 gfortran + - static Github x86_64 Intel Windows2022 CMAKE/Ninja - LLVM 16 - + - static Github x86_64 Intel Windows2022 CMAKE/Ninja - LLVM 16 - + + static Github x86_64 Intel Windows2022 CMAKE/Ninja - gcc 13 - + - static Github x86_64 Intel Ubuntu gmake mips64 gcc gfortran pthreads + - both Github x86_64 generic Ubuntu gmake riscv64 gcc gfortran pthreads - - both Github x86_64 Intel Ubuntu gmake mips32 gcc gfortran pthreads - - both Github x86_64 Intel Ubuntu gmake ia64 gcc gfortran pthreads - - 
both Github x86_64 C910V QEmu gmake riscv64 gcc gfortran pthreads - - both Github power pwr9 Ubuntu gmake - gcc gfortran OpenMP - - both OSUOSL zarch z14 Ubuntu gmake - gcc gfortran OpenMP - - both OSUOSL"},{"location":"developers/","title":"Developer manual","text":""},{"location":"developers/#source-codes-layout","title":"Source codes Layout","text":"
OpenBLAS/  \n\u251c\u2500\u2500 benchmark                  Benchmark codes for BLAS\n\u251c\u2500\u2500 cmake                      CMakefiles\n\u251c\u2500\u2500 ctest                      Test codes for CBLAS interfaces\n\u251c\u2500\u2500 driver                     Implemented in C\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 level2\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 level3\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 mapper\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 others                 Memory management, threading, etc\n\u251c\u2500\u2500 exports                    Generate shared library\n\u251c\u2500\u2500 interface                  Implement BLAS and CBLAS interfaces (calling driver or kernel)\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lapack\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 netlib\n\u251c\u2500\u2500 kernel                     Optimized assembly kernels for CPU architectures\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 alpha                  Original GotoBLAS kernels for DEC Alpha\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 arm                    ARMV5,V6,V7 kernels (including generic C codes used by other architectures)\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 arm64                  ARMV8\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 generic                General kernel codes written in plain C, parts used by many architectures.\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ia64                   Original GotoBLAS kernels for Intel Itanium\n\u2502   \u251c\u2500\u2500 mips\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 mips64\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 power\n|   \u251c\u2500\u2500 riscv64\n|   \u251c\u2500\u2500 simd                   Common code for Universal Intrinsics, used by some x86_64 and arm64 kernels\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sparc\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 x86\n\u2502   \u251c\u2500\u2500 x86_64\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 zarch   \n\u251c\u2500\u2500 lapack                      Optimized LAPACK codes (replacing those in regular LAPACK)\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 getf2\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 getrf\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 getrs\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 laswp\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lauu2\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lauum\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 potf2\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 potrf\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 trti2\n\u2502   \u251c\u2500\u2500 trtri\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 trtrs\n\u251c\u2500\u2500 lapack-netlib               LAPACK codes from netlib reference implementation\n\u251c\u2500\u2500 reference                   BLAS Fortran reference implementation (unused)\n\u251c\u2500\u2500 relapack                    Elmar Peise's recursive LAPACK (implemented on top of regular LAPACK)\n\u251c\u2500\u2500 test                        Test codes for BLAS\n\u2514\u2500\u2500 utest                       Regression test\n

A call tree for dgemm is as follows.

interface/gemm.c\n        \u2502\ndriver/level3/level3.c\n        \u2502\ngemm assembly kernels at kernel/\n

To find the kernel currently used for a particular supported cpu, please check the corresponding kernel/$(ARCH)/KERNEL.$(CPU) file.

Here is an example for kernel/x86_64/KERNEL.HASWELL

...\nDTRMMKERNEL    =  dtrmm_kernel_4x8_haswell.c\nDGEMMKERNEL    =  dgemm_kernel_4x8_haswell.S\n...\n
According to the above KERNEL.HASWELL, the OpenBLAS Haswell dgemm kernel file is dgemm_kernel_4x8_haswell.S.

"},{"location":"developers/#optimizing-gemm-for-a-given-hardware","title":"Optimizing GEMM for a given hardware","text":"

Read the Goto paper to understand the algorithm.

Goto, Kazushige; van de Geijn, Robert A. (2008). \"Anatomy of High-Performance Matrix Multiplication\". ACM Transactions on Mathematical Software 34 (3): Article 12 (The above link is available only to ACM members, but this and many related papers are also available on the pages of van de Geijn's FLAME project, http://www.cs.utexas.edu/~flame/web/FLAMEPublications.html )

driver/level3/level3.c is the implementation of Goto's algorithm. You can also look at kernel/generic/gemmkernel_2x2.c, which is a naive 2x2 register-blocking gemm kernel written in plain C.

Then: * Write optimized assembly kernels, taking into account the instruction pipeline, available registers, and memory/cache access patterns. * Tune the cache block sizes Mc, Kc, and Nc.

Note that not all of the cpu-specific parameters in param.h are actively used in algorithms. DNUMOPT only appears as a scale factor in profiling output of the level3 syrk interface code, while its counterpart SNUMOPT (aliased as NUMOPT in common.h) is not used anywhere at all. SYMV_P is only used in the generic kernels for the symv and chemv/zhemv functions - at least some of those are usually overridden by cpu-specific implementations, so if you start by cloning the existing implementation for a related cpu you need to check its KERNEL file to see if tuning SYMV_P would have any effect at all. GEMV_UNROLL is only used by some older x86_64 kernels, so not all sections in param.h define it. Similarly, not all of the cpu parameters like L2 or L3 cache sizes are necessarily used in current kernels for a given model - by all indications the cpu identification code was imported from some other project originally.

"},{"location":"developers/#run-openblas-test","title":"Run OpenBLAS Test","text":"

We use the Netlib BLAS test, CBLAS test, and LAPACK test. In addition, we use BLAS-Tester, a test tool modified from ATLAS.

The project makes use of several Continuous Integration (CI) services conveniently interfaced with GitHub to automatically check compilability on a number of platforms. Lastly, the test suites included with \"numerically heavy\" projects like Julia, NumPy, Octave or QuantumEspresso can be used for regression testing.

"},{"location":"developers/#benchmarking","title":"Benchmarking","text":"

Several simple C benchmarks for performance testing individual BLAS functions are available in the benchmark folder, and its scripts subdirectory contains corresponding versions for Python, Octave and R. Other options include

"},{"location":"developers/#adding-autodetection-support-for-a-new-revision-or-variant-of-a-supported-cpu","title":"Adding autodetection support for a new revision or variant of a supported cpu","text":"

Especially relevant for x86_64, a new cpu model may be a \"refresh\" (die shrink and/or different number of cores) within an existing model family, without significant changes to its instruction set (e.g. Intel Skylake and Kaby Lake are still fundamentally Haswell, and low-end Goldmont etc. are essentially Nehalem). In this case, compilation with the appropriate older TARGET will already lead to a satisfactory build.

To achieve autodetection of the new model, its CPUID (or an equivalent identifier) needs to be added in the cpuid_<architecture>.c file relevant for its general architecture, with the returned name for the new type set appropriately. For x86, which has the most complex cpuid file, there are two functions that need to be edited: get_cpuname() to return e.g. CPUTYPE_HASWELL, and get_corename() for the (broader) core family, returning e.g. CORE_HASWELL. (This information ends up in the Makefile.conf and config.h files generated by getarch. Failure to set either will typically lead to a missing definition of the GEMM_UNROLL parameters later in the build, as getarch_2nd will be unable to find a matching parameter section in param.h.)

For architectures where \"DYNAMIC_ARCH\" builds are supported, a similar but simpler code section for the corresponding runtime detection of the cpu exists in driver/others/dynamic.c (for x86) and driver/others/dynamic_<arch>.c for other architectures. Note that for x86 the CPUID is compared after splitting it into its family, extended family, model and extended model parts, so the single decimal number returned by Linux in /proc/cpuinfo for the model has to be converted back to hexadecimal before splitting it into its constituent digits, e.g. 142 = 8E, which translates to extended model 8, model 14.
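
For illustration only (this snippet is not part of the original text), the nibble split described above can be sketched in C as follows:

#include <stdio.h>\nint main(void) {\n    int model = 142;                  /* decimal model value as shown by /proc/cpuinfo */\n    int extended_model = model >> 4;  /* high nibble: 142 = 0x8E -> 8 */\n    int base_model     = model & 0xF; /* low nibble:  0xE -> 14 */\n    printf(\"extended model %d, model %d\", extended_model, base_model);\n    return 0;\n}\n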

"},{"location":"developers/#adding-dedicated-support-for-a-new-cpu-model","title":"Adding dedicated support for a new cpu model","text":"

Usually it will be possible to start from an existing model, clone its KERNEL configuration file to the new name to use for this TARGET, and eventually replace individual kernels with versions better suited to the peculiarities of the new cpu model. In addition, it is necessary to add (or clone at first) the corresponding section of GEMM_UNROLL parameters in the top-level param.h, and possibly to add definitions such as USE_TRMM (governing whether TRMM functions use the respective GEMM kernel or a separate source file) to the Makefiles (and CMakeLists.txt) in the kernel directory. The new cpu name needs to be added to TargetList.txt, and the cpu autodetection code used by the getarch helper program - contained in the cpuid_<architecture>.c file - amended to include the CPUID (or equivalent) information processing required (see the preceding section).
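
As a purely hypothetical sketch of such a GEMM_UNROLL section (the NEWCPU guard and all numeric values are placeholders to be tuned, not real OpenBLAS content; the macro names follow the pattern of the existing sections in param.h):

#ifdef NEWCPU                      /* hypothetical target name */\n#define DGEMM_DEFAULT_UNROLL_M  4\n#define DGEMM_DEFAULT_UNROLL_N  8\n#define DGEMM_DEFAULT_P       384\n#define DGEMM_DEFAULT_Q       256\n#define DGEMM_DEFAULT_R      4096\n#endif\n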

"},{"location":"developers/#adding-support-for-an-entirely-new-architecture","title":"Adding support for an entirely new architecture","text":"

This endeavour is best started by cloning the entire support structure for 32bit ARM, and within that the ARMV5 cpu in particular as this is implemented through plain C kernels only. An example providing a convenient \"shopping list\" can be seen in pull request #1526.

"},{"location":"distributing/","title":"Redistributing OpenBLAS","text":"

Note

This document contains recommendations only - packagers and other redistributors are in charge of how OpenBLAS is built and distributed in their systems, and may have good reasons to deviate from the guidance given on this page. These recommendations are aimed at general packaging systems with a user base that typically is large, uses open source (or at least freely available) software, does not behave uniformly, and is not directly connected with the packager.

OpenBLAS has a large number of build-time options which can be used to change how it behaves at runtime, how artifacts or symbols are named, etc. Variation in build configuration can be necessary to achieve a given end goal within a distribution or as an end user. However, such variation can also make it more difficult to build on top of OpenBLAS and ship code or other packages in a way that works across many different distros. Here we provide guidance about the most important build options, what effects they may have when changed, and which ones to default to.

The Make and CMake build systems provide equivalent options and yield more or less the same artifacts, but not exactly (the CMake builds are still experimental). You can choose either one and the options will function in the same way; however, the CMake outputs may require some renaming. To review available build options, see Makefile.rule or CMakeLists.txt in the root of the repository.

Build options typically fall into two categories: (a) options that affect the user interface, such as library and symbol names or APIs that are made available, and (b) options that affect performance and runtime behavior, such as threading behavior or CPU architecture-specific code paths. The user interface options are more important to keep aligned between distributions, while for the performance-related options there are typically more reasons to make choices that deviate from the defaults.

Here are recommendations for user interface related packaging choices where it is not likely to be a good idea to deviate (typically these are the default settings):

  1. Include CBLAS. The CBLAS interface is widely used and it doesn't affect binary size much, so don't turn it off.
  2. Include LAPACK and LAPACKE. The LAPACK interface is also widely used, and while it does make up a significant part of the binary size of the installed library, that does not outweigh the regression in usability when deviating from the default here.[^1]
  3. Always distribute the pkg-config (.pc) and CMake (.cmake) dependency detection files. These files are used by build systems when users want to link against OpenBLAS, and there is no benefit to leaving them out.
  4. Provide the LP64 interface by default, and if in addition to that you choose to provide an ILP64 interface build as well, use a symbol suffix to avoid symbol name clashes (see the next section).

[^1] All major distributions do include LAPACK as of mid 2023 as far as we know. Older versions of Arch Linux did not, and that was known to cause problems.

"},{"location":"distributing/#ilp64-interface-builds","title":"ILP64 interface builds","text":"

The LP64 (32-bit integer) interface is the default build, and has well-established C and Fortran APIs as determined by the reference (Netlib) BLAS and LAPACK libraries. The ILP64 (64-bit integer) interface, however, does not have a standard API: symbol names and shared/static library names can be produced in multiple ways, and this tends to make it difficult to use. As of today, there is an agreed-upon way of choosing names for OpenBLAS among a number of key users/redistributors, which is the closest thing to a standard that currently exists. However, there is an ongoing standardization effort in the reference BLAS and LAPACK libraries, which differs from the current OpenBLAS agreed-upon convention. In this section we'll aim to explain both.

Those two methods are fairly similar, and have a key thing in common: using a symbol suffix. This is good practice; it is recommended that, if you distribute an ILP64 build, you have it use a symbol suffix containing 64 in the name. This avoids potential symbol clashes when different packages which depend on OpenBLAS load both an LP64 and an ILP64 library into memory at the same time.

"},{"location":"distributing/#the-current-openblas-agreed-upon-ilp64-convention","title":"The current OpenBLAS agreed-upon ILP64 convention","text":"

This convention comprises the shared library name and the symbol suffix in the shared library. The symbol suffix to use is 64_, implying that the library name will be libopenblas64_.so and the symbols in that library end in 64_. The central issue where this was discussed is openblas#646, and adopters include Fedora, Julia, NumPy and SciPy - SuiteSparse already used it as well.

To build shared and static libraries with the currently recommended ILP64 conventions with Make:

$ make INTERFACE64=1 SYMBOLSUFFIX=64_\n

This will produce libraries named libopenblas64_.so|a, a pkg-config file named openblas64.pc, and CMake and header files.

Installing locally and inspecting the output will show a few more details:

$ make install PREFIX=$PWD/../openblas/make64 INTERFACE64=1 SYMBOLSUFFIX=64_\n$ tree .  # output slightly edited down\n.\n\u251c\u2500\u2500 include\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 cblas.h\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 f77blas.h\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lapacke_config.h\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lapacke.h\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lapacke_mangling.h\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lapacke_utils.h\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lapack.h\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 openblas_config.h\n\u2514\u2500\u2500 lib\n    \u251c\u2500\u2500 cmake\n    \u2502\u00a0\u00a0 \u2514\u2500\u2500 openblas\n    \u2502\u00a0\u00a0     \u251c\u2500\u2500 OpenBLASConfig.cmake\n    \u2502\u00a0\u00a0     \u2514\u2500\u2500 OpenBLASConfigVersion.cmake\n    \u251c\u2500\u2500 libopenblas64_.a\n    \u251c\u2500\u2500 libopenblas64_.so\n    \u2514\u2500\u2500 pkgconfig\n        \u2514\u2500\u2500 openblas64.pc\n

A key point is the symbol names. These will equal the LP64 symbol names, then (for Fortran only) the compiler mangling, and then the 64_ symbol suffix. Hence, to obtain the final symbol names, we need to take into account which Fortran compiler we are using. For the most common cases (e.g., gfortran, Intel Fortran, or Flang), that means appending a single underscore. In that case, the result is:

base API name binary symbol name call from Fortran code call from C code dgemm dgemm_64_ dgemm_64(...) dgemm_64_(...) cblas_dgemm cblas_dgemm64_ n/a cblas_dgemm64_(...)

It is quite useful to have these symbol names be as uniform as possible across different packaging systems.
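
As a hand-written illustration (a sketch, not authoritative) of calling the suffixed CBLAS symbol from C, declaring the prototype directly instead of relying on the installed headers:

#include <stdint.h>\n/* hand-declared prototype for the suffixed ILP64 symbol from the table above; link with -lopenblas64_ */\nextern void cblas_dgemm64_(int Order, int TransA, int TransB, int64_t M, int64_t N, int64_t K, double alpha, const double *A, int64_t lda, const double *B, int64_t ldb, double beta, double *C, int64_t ldc);\nint main(void) {\n    double A[4] = {1, 2, 3, 4}, B[4] = {1, 0, 0, 1}, C[4] = {0};\n    /* 101 = CblasRowMajor, 111 = CblasNoTrans in the CBLAS enums */\n    cblas_dgemm64_(101, 111, 111, 2, 2, 2, 1.0, A, 2, B, 2, 0.0, C, 2);\n    return 0;\n}\n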

The equivalent build options with CMake are:

$ mkdir build && cd build\n$ cmake .. -DINTERFACE64=1 -DSYMBOLSUFFIX=64_ -DBUILD_SHARED_LIBS=ON -DBUILD_STATIC_LIBS=ON\n$ cmake --build . -j\n

Note that the result is not 100% identical to the Make result. For example, the library name ends in _64 rather than 64_ - it is recommended to rename them to match the Make library names (also update the libsuffix entry in openblas64.pc to match that rename).

$ cmake --install . --prefix $PWD/../../openblas/cmake64\n$ tree .\n.\n\u251c\u2500\u2500 include\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 openblas64\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 cblas.h\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 f77blas.h\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 lapacke_config.h\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 lapacke_example_aux.h\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 lapacke.h\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 lapacke_mangling.h\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 lapacke_utils.h\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 lapack.h\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 openblas64\n\u2502\u00a0\u00a0     \u2502\u00a0\u00a0 \u2514\u2500\u2500 lapacke_mangling.h\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 openblas_config.h\n\u2514\u2500\u2500 lib\n    \u251c\u2500\u2500 cmake\n    \u2502\u00a0\u00a0 \u2514\u2500\u2500 OpenBLAS64\n    \u2502\u00a0\u00a0     \u251c\u2500\u2500 OpenBLAS64Config.cmake\n    \u2502\u00a0\u00a0     \u251c\u2500\u2500 OpenBLAS64ConfigVersion.cmake\n    \u2502\u00a0\u00a0     \u251c\u2500\u2500 OpenBLAS64Targets.cmake\n    \u2502\u00a0\u00a0     \u2514\u2500\u2500 OpenBLAS64Targets-noconfig.cmake\n    \u251c\u2500\u2500 libopenblas_64.a\n    \u251c\u2500\u2500 libopenblas_64.so -> libopenblas_64.so.0\n    \u2514\u2500\u2500 pkgconfig\n        \u2514\u2500\u2500 openblas64.pc\n

"},{"location":"distributing/#the-upcoming-standardized-ilp64-convention","title":"The upcoming standardized ILP64 convention","text":"

While the 64_ convention above got some adoption, it's slightly hacky and is implemented through the use of objcopy. An effort is ongoing for a more broadly adopted convention in the reference BLAS and LAPACK libraries, using (a) the _64 suffix, and (b) applying that suffix before rather than after Fortran compiler mangling. The central issue for this is lapack#666.

For the most common cases of compiler mangling (a single _ appended), the end result will be:

base API name binary symbol name call from Fortran code call from C code dgemm dgemm_64_ dgemm_64(...) dgemm_64_(...) cblas_dgemm cblas_dgemm_64 n/a cblas_dgemm_64(...)

For other compiler mangling schemes, replace the trailing _ by the scheme in use.

The shared library name for this _64 convention should be libopenblas_64.so.

Note: it is not yet possible to produce an OpenBLAS build which employs this convention! Once reference BLAS and LAPACK with support for _64 have been released, a future OpenBLAS release will support it. For now, please use the older 64_ scheme and avoid using the name libopenblas_64.so; it should be considered reserved for future use of the _64 standard as prescribed by reference BLAS/LAPACK.

"},{"location":"distributing/#performance-and-runtime-behavior-related-build-options","title":"Performance and runtime behavior related build options","text":"

For these options there are multiple reasonable or common choices.

"},{"location":"distributing/#threading-related-options","title":"Threading related options","text":"

OpenBLAS can be built as a multi-threaded or single-threaded library, with the default being multi-threaded. It's expected that the default libopenblas library is multi-threaded; if you'd like to also distribute single-threaded builds, consider naming them libopenblas_sequential.

OpenBLAS can be built with pthreads or OpenMP as the threading model, with the default being pthreads. Both options are commonly used, and the choice here should not influence the shared library name. The choice will be captured by the .pc file. E.g.,:

$ pkg-config --libs openblas\n-fopenmp -lopenblas\n\n$ cat openblas.pc\n...\nopenblas_config= ... USE_OPENMP=0 MAX_THREADS=24\n

The maximum number of threads users will be able to use is determined at build time by the NUM_THREADS build option. It defaults to 24, and there's a wide range of values that are reasonable to use (up to 256). 64 is a typical choice here; there is a memory footprint penalty that is linear in NUM_THREADS. Please see Makefile.rule for more details.

"},{"location":"distributing/#cpu-architecture-related-options","title":"CPU architecture related options","text":"

OpenBLAS contains a lot of CPU architecture-specific optimizations, hence when distributing to a user base with a variety of hardware, it is recommended to enable CPU architecture runtime detection. This will dynamically select optimized kernels for individual APIs. To do this, use the DYNAMIC_ARCH=1 build option. This is usually done on all common CPU families, except when there are known issues.

In case the CPU architecture is known (e.g. you're building binaries for macOS M1 users), it is possible to specify the target architecture directly with the TARGET= build option.

DYNAMIC_ARCH and TARGET are covered in more detail in the main README.md in this repository.

"},{"location":"distributing/#real-world-examples","title":"Real-world examples","text":"

OpenBLAS is likely to be distributed in one of these distribution models:

  1. As a standalone package, or multiple packages, in a packaging ecosystem like a Linux distro, Homebrew, conda-forge or MSYS2.
  2. Vendored as part of a larger package, e.g. in Julia, NumPy, SciPy, or R.
  3. Locally, e.g. making available as a build on a single HPC cluster.

The guidance on this page is most important for models (1) and (2). These links to build recipes for a representative selection of packaging systems may be helpful as a reference:

"},{"location":"extensions/","title":"Extensions","text":" Routine Data Types Description ?axpby s,d,c,z like axpy with a multiplier for y ?gemm3m c,z gemm3m ?imatcopy s,d,c,z in-place transpositon/copying ?omatcopy s,d,c,z out-of-place transpositon/copying ?geadd s,d,c,z matrix add ?gemmt s,d,c,z gemm but only a triangular part updated "},{"location":"faq/","title":"FAQ","text":""},{"location":"faq/#general-questions","title":"General questions","text":""},{"location":"faq/#what-is-blas-why-is-it-important","title":"What is BLAS? Why is it important?","text":"

BLAS stands for Basic Linear Algebra Subprograms. BLAS provides standard interfaces for linear algebra, including BLAS1 (vector-vector operations), BLAS2 (matrix-vector operations), and BLAS3 (matrix-matrix operations). In general, BLAS is the computational kernel (\"the bottom of the food chain\") in linear algebra or scientific applications. Thus, if the BLAS implementation is highly optimized, the whole application can get a substantial benefit.
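
For illustration (a minimal sketch using the CBLAS interface described in the next answer), one call from each level:

#include <cblas.h>\nint main(void) {\n    double x[3] = {1, 2, 3}, y[3] = {4, 5, 6};\n    double A[9] = {1, 0, 0, 0, 1, 0, 0, 0, 1}, C[9];\n    cblas_daxpy(3, 2.0, x, 1, y, 1);                                            /* BLAS1: y = 2*x + y */\n    cblas_dgemv(CblasRowMajor, CblasNoTrans, 3, 3, 1.0, A, 3, x, 1, 0.0, y, 1); /* BLAS2: y = A*x */\n    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, 3, 3, 3, 1.0, A, 3, A, 3, 0.0, C, 3); /* BLAS3: C = A*A */\n    return 0;\n}\n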

"},{"location":"faq/#what-functions-are-there-and-how-can-i-call-them-from-my-c-code","title":"What functions are there and how can I call them from my C code?","text":"

As BLAS is a standardized interface, you can refer to the documentation of its reference implementation at netlib.org. Calls from C go through its CBLAS interface, so your code will need to include the provided cblas.h in addition to linking with -lopenblas. A single-precision matrix multiplication will look like

#include <cblas.h>\n...\ncblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, M, N, K, 1.0, A, K, B, N, 0.0, result, N);\n
where M,N,K are the dimensions of your data - see https://petewarden.files.wordpress.com/2015/04/gemm_corrected.png (This image is part of an article on GEMM in the context of deep learning that is well worth reading in full - https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/)

"},{"location":"faq/#what-is-openblas-why-did-you-create-this-project","title":"What is OpenBLAS? Why did you create this project?","text":"

OpenBLAS is an open source BLAS library forked from the GotoBLAS2-1.13 BSD version. Since Mr. Kazushige Goto left TACC, GotoBLAS is no longer being maintained. Thus, we created this project to continue developing OpenBLAS/GotoBLAS.

"},{"location":"faq/#whats-the-difference-between-openblas-and-gotoblas","title":"What's the difference between OpenBLAS and GotoBLAS?","text":"

In OpenBLAS 0.2.0, we optimized level 3 BLAS on Intel Sandy Bridge under 64-bit operating systems. We obtained performance comparable with that of Intel MKL.

We optimized level 3 BLAS performance on the ICT Loongson-3A CPU. It outperformed GotoBLAS by 135% in a single thread and 120% in 4 threads.

We fixed some GotoBLAS bugs including a SEGFAULT bug on the new Linux kernel, MingW32/64 bugs, and a ztrmm computing error bug on Intel Nehalem.

We also added some minor features, e.g. supporting \"make install\", compiling without LAPACK and upgrading the LAPACK version to 3.4.2.

You can find the full list of modifications in Changelog.txt.

"},{"location":"faq/#where-do-parameters-gemm_p-gemm_q-gemm_r-come-from","title":"Where do parameters GEMM_P, GEMM_Q, GEMM_R come from?","text":"

The detailed explanation is probably in the original publication authored by Kazushige Goto - Goto, Kazushige; van de Geijn, Robert A; Anatomy of high-performance matrix multiplication. ACM Transactions on Mathematical Software (TOMS). Volume 34 Issue 3, May 2008 While this article is paywalled and too old for preprints to be available on arxiv.org, more recent publications like https://arxiv.org/pdf/1609.00076 contain at least a brief description of the algorithm. In practice, the values are derived by experimentation to yield the block sizes that give the highest performance. A general rule of thumb for selecting a starting point seems to be that PxQ is about half the size of L2 cache.
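
As a hedged worked example (the numbers are chosen purely for illustration): with a 1 MiB L2 cache and double-precision data (8 bytes per element), half the cache holds 1048576 / 2 / 8 = 65536 elements, so P = Q = 256 would be a plausible starting point before fine-tuning by experiment.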

"},{"location":"faq/#how-can-i-report-a-bug","title":"How can I report a bug?","text":"

Please file an issue at this issue page or send mail to the OpenBLAS mailing list.

Please provide the following information: CPU, OS, compiler, and OpenBLAS compiling flags (Makefile.rule). In addition, please describe how to reproduce this bug.

"},{"location":"faq/#how-to-reference-openblas","title":"How to reference OpenBLAS.","text":"

You can reference our papers in this page. Alternatively, you can cite the OpenBLAS homepage http://www.openblas.net.

"},{"location":"faq/#how-can-i-use-openblas-in-multi-threaded-applications","title":"How can I use OpenBLAS in multi-threaded applications?","text":"

If your application is already multi-threaded, it will conflict with OpenBLAS multi-threading. Thus, you must set OpenBLAS to use a single thread, as follows.
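
As an illustration (a sketch; the exact mechanism is up to you), the thread count can be limited either from the environment, by exporting OPENBLAS_NUM_THREADS=1 before starting the program, or at runtime through the openblas_set_num_threads() function declared in the cblas.h installed by OpenBLAS:

#include <cblas.h>\nint main(void) {\n    openblas_set_num_threads(1);  /* restrict OpenBLAS to one thread before your own threads start */\n    /* ... application code that calls BLAS from its own threads ... */\n    return 0;\n}\n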

If the application is parallelized by OpenMP, please build OpenBLAS with USE_OPENMP=1

With the increased availability of fast multicore hardware it has unfortunately become clear that the thread management provided by OpenMP is not sufficient to prevent race conditions when OpenBLAS was built single-threaded by USE_THREAD=0 and there are concurrent calls from multiple threads to OpenBLAS functions. In this case, it is vital to also specify USE_LOCKING=1 (introduced with OpenBLAS 0.3.7).

"},{"location":"faq/#does-openblas-support-sparse-matrices-andor-vectors","title":"Does OpenBLAS support sparse matrices and/or vectors ?","text":"

OpenBLAS implements only the standard (dense) BLAS and LAPACK functions, with a select few extensions popularized by Intel's MKL. Some cases can probably be made to work using e.g. GEMV or AXPBY; in general, using a dedicated package like SuiteSparse (which can make use of OpenBLAS or an equivalent for standard operations) is recommended.

"},{"location":"faq/#what-support-is-there-for-recent-pc-hardware-what-about-gpu","title":"What support is there for recent PC hardware ? What about GPU ?","text":"

As OpenBLAS is a volunteer project, it can take some time for the combination of a capable developer, free time, and particular hardware to come along, even for relatively common processors. Starting from 0.3.1, support is being added for AVX 512 (TARGET=SKYLAKEX), requiring a compiler that is capable of handling avx512 intrinsics. While AMD Zen processors should be autodetected by the build system, as of 0.3.2 they are still handled exactly like Intel Haswell. There once was an effort to build an OpenCL implementation that one can still find at https://github.com/xianyi/clOpenBLAS , but work on this stopped in 2015.

"},{"location":"faq/#how-about-the-level-3-blas-performance-on-intel-sandy-bridge","title":"How about the level 3 BLAS performance on Intel Sandy Bridge?","text":"

We obtained performance comparable with Intel MKL, and actually outperformed Intel MKL in some cases. Here is the result of the DGEMM subroutine's performance on an Intel Core i5-2500K under Windows 7 SP1 64-bit:

"},{"location":"faq/#os-and-compiler","title":"OS and Compiler","text":""},{"location":"faq/#how-can-i-call-an-openblas-function-in-microsoft-visual-studio","title":"How can I call an OpenBLAS function in Microsoft Visual Studio?","text":"

Please read this page.

"},{"location":"faq/#how-can-i-use-cblas-and-lapacke-without-c99-complex-number-support-eg-in-visual-studio","title":"How can I use CBLAS and LAPACKE without C99 complex number support (e.g. in Visual Studio)?","text":"

Zaheer has fixed this bug. You can now use the structure instead of C99 complex numbers. Please read this issue page for details.

This issue is for using LAPACKE in Visual Studio.

"},{"location":"faq/#i-get-a-segfault-with-multi-threading-on-linux-whats-wrong","title":"I get a SEGFAULT with multi-threading on Linux. What's wrong?","text":"

This may be related to a bug in the Linux kernel 2.6.32 (?). Try applying the patch segfaults.patch to disable mbind using

 patch < segfaults.patch\n

and see if the crashes persist. Note that this patch will lead to many compiler warnings.

"},{"location":"faq/#when-i-make-the-library-there-is-no-such-instruction-xgetbv-error-whats-wrong","title":"When I make the library, there is no such instruction: `xgetbv' error. What's wrong?","text":"

Please use GCC 4.4 or a later version, which supports the xgetbv instruction. If you build the library for Sandy Bridge with AVX instructions, you should use GCC 4.6 or later.

On Mac OS X, please use Clang 3.1 or a later version. For example, make CC=clang

For compatibility with old compilers (GCC < 4.4), you can enable the NO_AVX flag. For example, make NO_AVX=1

"},{"location":"faq/#my-build-fails-due-to-the-linker-error-multiple-definition-of-dlamc3_-what-is-the-problem","title":"My build fails due to the linker error \"multiple definition of `dlamc3_'\". What is the problem?","text":"

This linker error occurs if GNU patch is missing or if our patch for LAPACK fails to apply.

Background: OpenBLAS implements optimized versions of some LAPACK functions, so we need to disable the reference versions. If this process fails, we end up with duplicated implementations of the same function.

"},{"location":"faq/#my-build-worked-fine-and-passed-all-tests-but-running-make-lapack-test-ends-with-segfaults","title":"My build worked fine and passed all tests, but running make lapack-test ends with segfaults","text":"

Some of the LAPACK tests, notably in xeigtstz, try to allocate around 10MB on the stack. You may need to use ulimit -s to change the default limits on your system to allow this.

"},{"location":"faq/#how-could-i-disable-openblas-threading-affinity-on-runtime","title":"How could I disable OpenBLAS threading affinity on runtime?","text":"

You can define the OPENBLAS_MAIN_FREE or GOTOBLAS_MAIN_FREE environment variable to disable threading affinity at runtime. For example, before running your program:

export OPENBLAS_MAIN_FREE=1\n

Alternatively, you can disable the affinity feature by enabling NO_AFFINITY=1 in Makefile.rule.

"},{"location":"faq/#how-to-solve-undefined-reference-errors-when-statically-linking-against-libopenblasa","title":"How to solve undefined reference errors when statically linking against libopenblas.a","text":"

On Linux, if OpenBLAS was compiled with threading support (USE_THREAD=1 by default), custom programs statically linked against libopenblas.a should also link to the pthread library e.g.:

gcc -static -I/opt/OpenBLAS/include -L/opt/OpenBLAS/lib -o my_program my_program.c -lopenblas -lpthread\n

Failing to add the -lpthread flag will cause errors such as:

/opt/OpenBLAS/libopenblas.a(memory.o): In function `_touch_memory':\nmemory.c:(.text+0x15): undefined reference to `pthread_mutex_lock'\nmemory.c:(.text+0x41): undefined reference to `pthread_mutex_unlock'\n/opt/OpenBLAS/libopenblas.a(memory.o): In function `openblas_fork_handler':\nmemory.c:(.text+0x440): undefined reference to `pthread_atfork'\n/opt/OpenBLAS/libopenblas.a(memory.o): In function `blas_memory_alloc':\nmemory.c:(.text+0x7a5): undefined reference to `pthread_mutex_lock'\nmemory.c:(.text+0x825): undefined reference to `pthread_mutex_unlock'\n/opt/OpenBLAS/libopenblas.a(memory.o): In function `blas_shutdown':\nmemory.c:(.text+0x9e1): undefined reference to `pthread_mutex_lock'\nmemory.c:(.text+0xa6e): undefined reference to `pthread_mutex_unlock'\n/opt/OpenBLAS/libopenblas.a(blas_server.o): In function `blas_thread_server':\nblas_server.c:(.text+0x273): undefined reference to `pthread_mutex_lock'\nblas_server.c:(.text+0x287): undefined reference to `pthread_mutex_unlock'\nblas_server.c:(.text+0x33f): undefined reference to `pthread_cond_wait'\n/opt/OpenBLAS/libopenblas.a(blas_server.o): In function `blas_thread_init':\nblas_server.c:(.text+0x416): undefined reference to `pthread_mutex_lock'\nblas_server.c:(.text+0x4be): undefined reference to `pthread_mutex_init'\nblas_server.c:(.text+0x4ca): undefined reference to `pthread_cond_init'\nblas_server.c:(.text+0x4e0): undefined reference to `pthread_create'\nblas_server.c:(.text+0x50f): undefined reference to `pthread_mutex_unlock'\n...\n

The -lpthread is not required when linking dynamically against libopenblas.so.0.

"},{"location":"faq/#building-openblas-for-haswell-or-dynamic-arch-on-rhel-6-centos-6-rocks-61scientific-linux-6","title":"Building OpenBLAS for Haswell or Dynamic Arch on RHEL-6, CentOS-6, Rocks-6.1,Scientific Linux 6","text":"

The minimum requirement to actually run AVX2-enabled software like OpenBLAS is kernel-2.6.32-358, shipped with EL6U4 in 2013.

The binutils package from RHEL6 does not know the instruction vpermpd or any other AVX2 instruction. You can download a newer binutils package from the Enterprise Linux software collections, following the instructions here: https://www.softwarecollections.org/en/scls/rhscl/devtoolset-3/ After configuring the repository, you need to install devtoolset-?-binutils to get a usable newer binutils package.

$ yum search devtoolset-\\?-binutils\n$ sudo yum install devtoolset-3-binutils\n
Once the packages are installed, check the correct name for the SCL redirection set to enable the new version:
$ scl --list\ndevtoolset-3\nrh-python35\n
Now just prefix your build commands with the respective redirection:
$ scl enable devtoolset-3 -- make DYNAMIC_ARCH=1\n
AVX-512 (SKYLAKEX) support requires devtoolset-8-gcc-gfortran (which exceeds the formal requirement for AVX-512 because of packaging issues in earlier packages), which dependency-installs the respective binutils and gcc (or later), and requires kernel 2.6.32-696 (aka 6U9) or 3.10.0-327 (aka 7U2) or later to run. In the absence of the abovementioned toolset, OpenBLAS will fall back to AVX2 instructions in place of AVX-512, sacrificing some performance on the Skylake-X platform.

"},{"location":"faq/#building-openblas-in-qemukvmxen","title":"Building OpenBLAS in QEMU/KVM/XEN","text":"

By default, QEMU reports the CPU as \"QEMU Virtual CPU version 2.2.0\", which shares its CPUID with an existing 32-bit CPU even in a 64-bit virtual machine, and OpenBLAS recognizes it as PENTIUM2. Depending on the exact combination of CPU features the hypervisor chooses to expose, this may not correspond to any CPU that exists, and OpenBLAS will error when trying to build. To fix this, pass -cpu host or -cpu passthrough to QEMU, or another CPU model. Similarly, the XEN hypervisor may not pass through all features of the host cpu while reporting the cpu type itself correctly, which can lead to compiler error messages about an \"ABI change\" when compiling AVX512 code. Again, changing the Xen configuration by running e.g. \"xen-cmdline --set-xen cpuid=avx512\" should get around this (as would building OpenBLAS for an older cpu lacking that particular feature, e.g. TARGET=HASWELL).

"},{"location":"faq/#building-openblas-on-power-fails-with-ibm-xl","title":"Building OpenBLAS on POWER fails with IBM XL","text":"
Trying to compile OpenBLAS with IBM XL ends with error messages about unknown register names like \"vs32\". Working around these by using known alternate names for the vector registers only leads to another assembler error about unsupported constraints. This is a known deficiency in the IBM compiler at least up to and including 16.1.0 (and in the POWER version of clang, from which it is derived) - use gcc instead. (See issues #1078 and #1699 for related discussions)

"},{"location":"faq/#replacing-system-blasupdating-apt-openblas-in-mintubuntudebian","title":"Replacing system BLAS/updating APT OpenBLAS in Mint/Ubuntu/Debian","text":"

Debian and Ubuntu LTS versions provide an OpenBLAS package which is not updated after the initial release, and under some circumstances one might want to use a more recent version of OpenBLAS, e.g. to get support for newer CPUs.

Ubuntu and Debian provide an 'alternatives' mechanism to comfortably replace the BLAS and LAPACK libraries system-wide.

After a successful build of OpenBLAS (with DYNAMIC_ARCH set to 1):

$ make clean\n$ make DYNAMIC_ARCH=1\n$ sudo make DYNAMIC_ARCH=1 install\n
One can redirect the BLAS and LAPACK alternatives to point to the source-built OpenBLAS. First you have to install the NetLib LAPACK reference implementation (to have alternatives to replace):
$ sudo apt install libblas-dev liblapack-dev\n
Then we can set the alternative to our freshly-built library:
$ sudo update-alternatives --install /usr/lib/libblas.so.3 libblas.so.3 /opt/OpenBLAS/lib/libopenblas.so.0 41 \\\n   --slave /usr/lib/liblapack.so.3 liblapack.so.3 /opt/OpenBLAS/lib/libopenblas.so.0\n
Or remove the redirection and switch back to the APT-provided BLAS implementation:
$ sudo update-alternatives --remove libblas.so.3 /opt/OpenBLAS/lib/libopenblas.so.0\n
In recent versions of the distributions, the installation path for the libraries has been changed to include the name of the host architecture, like /usr/lib/x86_64-linux-gnu/blas/libblas.so.3 or libblas.so.3.x86_64-linux-gnu. Use $ update-alternatives --display libblas.so.3 to find out what layout your system has.

"},{"location":"faq/#i-built-openblas-for-use-with-some-other-software-but-that-software-cannot-find-it","title":"I built OpenBLAS for use with some other software, but that software cannot find it","text":"

OpenBLAS installs as a single library named libopenblas.so, while some programs may be searching for a separate libblas.so and liblapack.so, so you may need to create appropriate symbolic links (ln -s libopenblas.so libblas.so; ln -s libopenblas.so liblapack.so) or copies. Also make sure that the installation location (usually /opt/OpenBLAS/lib or /usr/local/lib) is among the library search paths of your system.

"},{"location":"faq/#i-included-cblash-in-my-program-but-the-compiler-complains-about-a-missing-commonh-or-functions-from-it","title":"I included cblas.h in my program, but the compiler complains about a missing common.h or functions from it","text":"

You probably tried to include a cblas.h that you simply copied from the OpenBLAS source; instead, you need to run make install after building OpenBLAS and then use the modified cblas.h that this step creates in the installation path (usually /usr/local/include, /opt/OpenBLAS/include, or whatever you specified as PREFIX= on the make install).

"},{"location":"faq/#compiling-openblas-with-gccs-fbounds-check-actually-triggers-aborts-in-programs","title":"Compiling OpenBLAS with gcc's -fbounds-check actually triggers aborts in programs","text":"

This is due to different interpretations of the (informal) standard for passing characters as arguments between C and FORTRAN functions. As the method for storing text differs in the two languages, when C calls Fortran the text length is passed as an \"invisible\" additional parameter. Historically, this has not been required when the text is just a single character, so older code like the Reference-LAPACK bundled with OpenBLAS does not do it. Recently gcc's checking has changed to require it, but there is no consensus yet on whether and how the existing LAPACK (and many other codebases) should adapt. (And for actual compilation, gcc has mostly backtracked and provided compatibility options - hence the default build settings in the OpenBLAS Makefiles add -fno-optimize-sibling-calls to the gfortran options to prevent miscompilation with \"affected\" versions. See ticket 2154 in the issue tracker for more details and links)
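
As a purely illustrative sketch (not from the original FAQ) of that invisible parameter: when C code calls a Fortran LAPACK routine directly, newer gfortran versions expect a hidden length argument appended for each character dummy argument. Whether it is required, and its exact type (int vs. size_t), depends on the compiler and version, so treat the prototype below as an assumption rather than a reference.

#include <stddef.h>\n/* hand-written prototype for illustration; the hidden uplo_len argument is the point here */\nextern void dpotrf_(const char *uplo, const int *n, double *a, const int *lda, int *info, size_t uplo_len);\nint main(void) {\n    int n = 3, lda = 3, info = 0;\n    double a[9] = {1, 0, 0, 0, 1, 0, 0, 0, 1};\n    dpotrf_(\"U\", &n, a, &lda, &info, 1);  /* the trailing 1 is the length of the single-character string */\n    return (int)info;\n}\n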

"},{"location":"faq/#build-fails-with-lots-of-errors-about-undefined-gemm_unroll_m","title":"Build fails with lots of errors about undefined ?GEMM_UNROLL_M","text":"

Your cpu is apparently too new to be recognized by the build scripts, so they failed to assign appropriate parameters for the block algorithm. Do a make clean and try again with TARGET set to one of the cpu models listed in TargetList.txt - for x86_64 this will usually be HASWELL.

"},{"location":"faq/#cmakeosx-build-fails-with-argument-list-too-long","title":"CMAKE/OSX: Build fails with 'argument list too long'","text":"

This is a limitation in the maximum length of a command on OSX, coupled with how CMAKE works. You should be able to work around this by adding the option -DCMAKE_Fortran_USE_RESPONSE_FILE_FOR_OBJECTS=1 to your CMAKE arguments.

"},{"location":"faq/#likely-problems-with-avx2-support-in-docker-desktop-for-osx","title":"Likely problems with AVX2 support in Docker Desktop for OSX","text":"

There have been a few reports of wrong calculation results and build-time test failures when building in a container environment managed by the OSX version of Docker Desktop, which uses the xhyve virtualizer underneath. Judging from these reports, AVX2 support in xhyve appears to be subtly broken but a corresponding ticket in the xhyve issue tracker has not drawn any reaction or comment since 2019. Therefore it is strongly recommended to build OpenBLAS with the NO_AVX2=1 option when inside a container under (or for later use with) the Docker Desktop environment on Intel-based Apple hardware.

"},{"location":"faq/#usage","title":"Usage","text":""},{"location":"faq/#program-is-terminated-because-you-tried-to-allocate-too-many-memory-regions","title":"Program is Terminated. Because you tried to allocate too many memory regions","text":"

In OpenBLAS, we manage a pool of memory buffers and allocate the number of buffers as follows.

#define NUM_BUFFERS (MAX_CPU_NUMBER * 2)\n
This error indicates that the program exceeded the number of buffers.

Please build OpenBLAS with larger NUM_THREADS. For example, make NUM_THREADS=32 or make NUM_THREADS=64. In Makefile.system, we will set MAX_CPU_NUMBER=NUM_THREADS.

"},{"location":"faq/#how-to-choose-target-manually-at-runtime-when-compiled-with-dynamic_arch","title":"How to choose TARGET manually at runtime when compiled with DYNAMIC_ARCH","text":"

The environment variable which controls the kernel selection is OPENBLAS_CORETYPE (see driver/others/dynamic.c), e.g. export OPENBLAS_CORETYPE=Haswell. The function char* openblas_get_corename() returns the target that is currently in use.
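
A minimal usage sketch (assuming the cblas.h installed by OpenBLAS, which declares this function):

#include <stdio.h>\n#include <cblas.h>\nint main(void) {\n    /* reports which kernel set a DYNAMIC_ARCH build selected at runtime */\n    printf(\"core in use: %s\", openblas_get_corename());\n    return 0;\n}\n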

"},{"location":"faq/#after-updating-the-installed-openblas-a-program-complains-about-undefined-symbol-gotoblas","title":"After updating the installed OpenBLAS, a program complains about \"undefined symbol gotoblas\"","text":"

This symbol gets defined only when OpenBLAS is built with \"make DYNAMIC_ARCH=1\" (which is what distributors will choose to ensure support for more than just one CPU type).

"},{"location":"faq/#how-can-i-find-out-at-runtime-what-options-the-library-was-built-with","title":"How can I find out at runtime what options the library was built with ?","text":"

OpenBLAS has two utility functions that can help here:

openblas_get_parallel() will return 0 for a single-threaded library, 1 if multithreading without OpenMP, and 2 if built with USE_OPENMP=1.

openblas_get_config() will return a string containing settings such as USE64BITINT or DYNAMIC_ARCH that were active at build time, as well as the target cpu (or in case of a dynamic_arch build, the currently detected one).
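
A minimal C sketch combining the two (assuming an OpenBLAS build whose cblas.h declares both functions):

#include <stdio.h>\n#include <cblas.h>\n\nint main(void)\n{\n  /* 0 = single-threaded, 1 = pthreads, 2 = built with USE_OPENMP=1 */\n  printf(\"parallelism: %d\\n\", openblas_get_parallel());\n  /* build-time options plus the target (or, for DYNAMIC_ARCH, the detected) cpu */\n  printf(\"config: %s\\n\", openblas_get_config());\n  return 0;\n}\n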

"},{"location":"faq/#after-making-openblas-i-find-that-the-static-library-is-multithreaded-but-the-dynamic-one-is-not","title":"After making OpenBLAS, I find that the static library is multithreaded, but the dynamic one is not ?","text":"

The shared OpenBLAS library you built is probably working fine as well, but your program may be picking up a different (probably single-threaded) version from one of the standard system paths like /usr/lib on startup. Running ldd /path/to/your/program will tell you which library the linkage loader will actually use.

Specifying the \"correct\" library location with the -L flag (like -L /opt/OpenBLAS/lib) when linking your program only defines which library will be used to check that all symbols can be resolved; you will need to add an rpath entry to the binary (using -Wl,-rpath=/opt/OpenBLAS/lib) to make it request searching that location at runtime. Alternatively, remove the \"wrong old\" library (if you can), or set LD_LIBRARY_PATH to the desired location before running your program.

"},{"location":"faq/#i-want-to-use-openblas-with-cuda-in-the-hpl-23-benchmark-code-but-it-keeps-looking-for-intel-mkl","title":"I want to use OpenBLAS with CUDA in the HPL 2.3 benchmark code but it keeps looking for Intel MKL","text":"

You need to edit the file src/cuda/cuda_dgemm.c in the NVIDIA version of HPL, change the \"handle2\" and \"handle\" dlopen calls to use libopenblas.so instead of libmkl_intel_lp64.so, and add a trailing underscore in the dlsym lines for dgemm_mkl and dtrsm_mkl (like dgemm_mkl = (void(*)())dlsym(handle, \"dgemm_\");).
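
As a rough sketch only (not the actual HPL source, whose surrounding code differs), the modified lookups would follow this pattern:

#include <dlfcn.h>\n#include <stdio.h>\n\nint main(void)\n{\n  void *handle = dlopen(\"libopenblas.so\", RTLD_LAZY);\n  if (!handle) { fprintf(stderr, \"dlopen failed\\n\"); return 1; }\n  /* note the trailing underscore on the Fortran-style symbol names */\n  void (*dgemm_mkl)() = (void (*)()) dlsym(handle, \"dgemm_\");\n  void (*dtrsm_mkl)() = (void (*)()) dlsym(handle, \"dtrsm_\");\n  printf(\"resolved dgemm_ and dtrsm_: %s\\n\", (dgemm_mkl && dtrsm_mkl) ? \"yes\" : \"no\");\n  dlclose(handle);\n  return 0;\n}\n
This builds with something like gcc test_dlopen.c -ldl (the file name is arbitrary).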

"},{"location":"faq/#multithreaded-openblas-runs-no-faster-or-is-even-slower-than-singlethreaded-on-my-armv7-board","title":"Multithreaded OpenBLAS runs no faster or is even slower than singlethreaded on my ARMV7 board","text":"

The power saving mechanisms of your board may have shut down some cores, making them invisible to OpenBLAS in its startup phase. Try bringing them online before starting your calculation.

"},{"location":"faq/#speed-varies-wildly-between-individual-runs-on-a-typical-armv8-smartphone-processor","title":"Speed varies wildly between individual runs on a typical ARMV8 smartphone processor","text":"

Check the technical specifications; it could be that the SoC combines fast and slow cpus and threads can end up on either. In that case, binding the process to specific cores, e.g. by setting OMP_PLACES=cores, may help. (You may need to experiment with OpenMP options; it has been reported that using OMP_NUM_THREADS=2 OMP_PLACES=cores caused a huge drop in performance on a 4+4 core chip while OMP_NUM_THREADS=2 OMP_PLACES=cores(2) worked as intended - as did OMP_PLACES=cores with 4 threads.)

"},{"location":"faq/#i-cannot-get-openblas-to-use-more-than-a-small-subset-of-available-cores-on-a-big-system","title":"I cannot get OpenBLAS to use more than a small subset of available cores on a big system","text":"

Multithreading support in OpenBLAS requires the use of internal buffers for sharing partial results, the number and size of which is defined at compile time. Unless you specify NUM_THREADS in your make or cmake command, the build scripts try to autodetect the number of cores available in your build host to size the library to match. This unfortunately means that if you move the resulting binary from a small \"front-end node\" to a larger \"compute node\" later, it will still be limited to the hardware capabilities of the original system. The solution is to set NUM_THREADS to a number big enough to encompass the biggest systems you expect to run the binary on - at runtime, it will scale down the maximum number of threads it uses to match the number of cores physically available.

"},{"location":"faq/#getting-elf-load-command-addressoffset-not-properly-aligned-when-loading-libopenblasso","title":"Getting \"ELF load command address/offset not properly aligned\" when loading libopenblas.so","text":"

If you get a message \"error while loading shared libraries: libopenblas.so.0: ELF load command address/offset not properly aligned\" when starting a program that is (dynamically) linked to OpenBLAS, this is very likely due to a bug in the GNU linker (ld) that is part of the GNU binutils package. This error was specifically observed on older versions of Ubuntu Linux updated with the (at the time) most recent binutils version 2.38, but an internet search turned up sporadic reports involving various other libraries dating back several years. A bugfix was created by the binutils developers and should be available in later versions of binutils. (See issue 3708 for details.)

"},{"location":"faq/#using-openblas-with-openmp","title":"Using OpenBLAS with OpenMP","text":"

OpenMP provides its own locking mechanisms, so when your code makes BLAS/LAPACK calls from inside OpenMP parallel regions it is imperative that you use an OpenBLAS that is built with USE_OPENMP=1, as otherwise deadlocks might occur. Furthermore, OpenBLAS will automatically restrict itself to using only a single thread when called from an OpenMP parallel region. When it is certain that calls will only occur from the main thread of your program (i.e. outside of omp parallel constructs), a standard pthreads build of OpenBLAS can be used as well. In that case it may be useful to tune the linger behaviour of idle threads in both your OpenMP program (e.g. set OMP_WAIT_POLICY=passive) and OpenBLAS (by redefining the THREAD_TIMEOUT variable at build time, or setting the environment variable OPENBLAS_THREAD_TIMEOUT smaller than the default 26) so that the two alternating thread pools do not unnecessarily hog the cpu during the handover.
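
For illustration, a minimal sketch (assuming an OpenBLAS built with USE_OPENMP=1) in which each OpenMP thread makes its own BLAS call:

#include <omp.h>\n#include <cblas.h>\n#include <stdio.h>\n\nint main(void)\n{\n  double x[4] = {1, 2, 3, 4}, y[4] = {4, 3, 2, 1};\n#pragma omp parallel\n  {\n    /* inside the parallel region, OpenBLAS automatically runs this call single-threaded */\n    double d = cblas_ddot(4, x, 1, y, 1);\n    printf(\"thread %d: dot = %f\\n\", omp_get_thread_num(), d);\n  }\n  return 0;\n}\n
Build with something like gcc -fopenmp test_omp.c -lopenblas (the file name is arbitrary).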

"},{"location":"install/","title":"Install OpenBLAS","text":"

Note

Lists of precompiled packages are not comprehensive, are not meant to validate or endorse a particular third-party build over others, and may not always point to the newest version

"},{"location":"install/#quick-install","title":"Quick install","text":"

Precompiled packages have recently become available for a number of platforms through their normal installation procedures, so for users of desktop devices at least, the instructions below are mostly relevant when you want to try the most recent development snapshot from git. See your platform's relevant \"Precompiled packages\" section.

The Conda-Forge project maintains packages for the conda package manager at https://github.com/conda-forge/openblas-feedstock.

"},{"location":"install/#source","title":"Source","text":"

Download the latest stable version from the release page.

"},{"location":"install/#platforms","title":"Platforms","text":""},{"location":"install/#linux","title":"Linux","text":"

Just type make to compile the library.

Notes:

"},{"location":"install/#precompiled-packages","title":"Precompiled packages","text":""},{"location":"install/#debianubuntumintkali","title":"Debian/Ubuntu/Mint/Kali","text":"

The OpenBLAS package is available in the default repositories and can act as the default BLAS in the system.

Example installation commands:

$ sudo apt update\n$ apt search openblas\n$ sudo apt install libopenblas-dev\n$ sudo update-alternatives --config libblas.so.3\n
Alternatively, if the distribution's package proves unsatisfactory, you may try the latest version of OpenBLAS, following the guide in the OpenBLAS FAQ.

"},{"location":"install/#opensusesle","title":"openSuSE/SLE","text":"

Recent openSUSE versions include OpenBLAS in the default repositories and also permit OpenBLAS to act as a replacement for the system-wide BLAS.

Example installation commands:

$ sudo zypper ref\n$ zypper se openblas\n$ sudo zypper in openblas-devel\n$ sudo update-alternatives --config libblas.so.3\n
Should you be using an older openSUSE or SLE version that provides no OpenBLAS, you can add an optional or experimental openSUSE repository as a new package source to acquire a recent build of OpenBLAS, following the instructions on the openSUSE software site

"},{"location":"install/#fedoracentosrhel","title":"Fedora/CentOS/RHEL","text":"

Fedora provides OpenBLAS in default installation repositories.

To install it, try the following:

$ dnf search openblas\n$ dnf install openblas-devel\n
For CentOS/RHEL/Scientific Linux, packages are provided via the Fedora EPEL repository

After adding the repository and its keys, installation is straightforward:

$ yum search openblas\n$ yum install openblas-devel\n
No alternatives mechanism is provided for BLAS, and packages in the system repositories are linked against the NetLib BLAS or ATLAS BLAS libraries. You may wish to re-package the RPMs to use OpenBLAS instead, as described here

"},{"location":"install/#mageia","title":"Mageia","text":"

Mageia offers ATLAS and NetLIB LAPACK in its base repositories. You can build your own OpenBLAS replacement and install it in /opt (TODO: populate /usr/lib64 and /usr/include accurately to replicate the NetLIB layout with update-alternatives).

"},{"location":"install/#archmanjaroantergos","title":"Arch/Manjaro/Antergos","text":"
$ sudo pacman -S openblas\n
"},{"location":"install/#windows","title":"Windows","text":"

The precompiled binaries available with each release (at https://github.com/xianyi/OpenBLAS/releases) are created with MinGW using an option list of \"NUM_THREADS=64 TARGET=GENERIC DYNAMIC_ARCH=1 DYNAMIC_OLDER=1 CONSISTENT_FPCSR=1\" - they should work on any x86 or x86_64 computer. The zip archive contains the include files, static and dll libraries, as well as configuration files for getting them found via CMake or pkg-config - just create a suitable folder for your OpenBLAS installation and unzip it there. (Note that you will need to edit the provided openblas.pc and OpenBLASConfig.cmake to reflect the installation path on your computer; as distributed they contain \"win\" or \"win64\" paths reflecting the system they were built on.)

Some programs will expect the DLL name to be lapack.dll, blas.dll, or (in the case of the statistics package \"R\") even Rblas.dll to act as a direct replacement for whatever other implementation of BLAS and LAPACK they use by default. Just copy openblas.dll to the desired name(s).

Note that the provided binaries are built with INTERFACE64=0, meaning they use standard 32-bit integers for array indexing and the like (as is the default for most if not all BLAS and LAPACK implementations). If the documentation of whatever program you are using with OpenBLAS mentions 64-bit integers (INTERFACE64=1) for addressing huge matrix sizes, you will need to build OpenBLAS from source (or open an issue ticket to make the demand for such a precompiled build known).

"},{"location":"install/#precompiled-packages_1","title":"Precompiled packages","text":""},{"location":"install/#visual-studio","title":"Visual Studio","text":"

As of OpenBLAS v0.2.15, we support MinGW and Visual Studio (using CMake to generate Visual Studio solution files \u2013 note that you will need at least version 3.11 of CMake for linking to work correctly) to build OpenBLAS on Windows.

Note that you need a Fortran compiler if you plan to build and use the LAPACK functions included with OpenBLAS. The sections below describe using either flang as an add-on to clang/LLVM or gfortran as part of MinGW for this purpose. If you want to use the Intel Fortran compiler ifort for this, be sure to also use the Intel C compiler icc for building the C parts, as the ABI imposed by ifort is incompatible with msvc.

"},{"location":"install/#1-native-msvc-abi","title":"1. Native (MSVC) ABI","text":"

A fully-optimized OpenBLAS that can be statically or dynamically linked to your application can currently be built for the 64-bit architecture with the LLVM compiler infrastructure. We're going to use Miniconda3 to grab all of the tools we need, since some of them are in an experimental status. Before you begin, you'll need to have Microsoft Visual Studio 2015 or newer installed.

  1. Install Miniconda3 for 64 bits using winget install --id Anaconda.Miniconda3 or easily download from conda.io.
  2. Open the \"Anaconda Command Prompt,\" now available in the Start Menu, or at %USERPROFILE%\\miniconda3\\shell\\condabin\\conda-hook.ps1.
  3. In that command prompt window, use cd to change to the directory where you want to build OpenBLAS
  4. Now install all of the tools we need:
conda update -n base conda\nconda config --add channels conda-forge\nconda install -y cmake flang clangdev perl libflang ninja\n
  5. Still in the Anaconda Command Prompt window, activate the MSVC environment for 64 bits with vcvarsall x64. On Windows 11 with Visual Studio 2022, this would be done by invoking:
\"c:\\Program Files\\Microsoft Visual Studio\\2022\\Preview\\vc\\Auxiliary\\Build\\vcvars64.bat\"\n

With VS2019, the command should be the same \u2013 except for the year number, obviously. For other/older versions of MSVC, the VS documentation or a quick search on the web should turn up the exact wording you need.

Confirm that the environment is active by typing link \u2013 this should return a long list of possible options for the link command. If it just returns \"command not found\" or similar, review and retype the call to vcvars64.bat. NOTE: if you are working from a Visual Studio Command prompt window instead (so that you do not have to do the vcvars call), you need to invoke conda activate so that CONDA_PREFIX etc. get set up correctly before proceeding to step 6. Failing to do so will lead to link errors like libflangmain.lib not getting found later in the build.

  6. Now configure the project with CMake. Starting in the project directory, execute the following:
set \"LIB=%CONDA_PREFIX%\\Library\\lib;%LIB%\"\nset \"CPATH=%CONDA_PREFIX%\\Library\\include;%CPATH%\"\nmkdir build\ncd build\ncmake .. -G \"Ninja\" -DCMAKE_CXX_COMPILER=clang-cl -DCMAKE_C_COMPILER=clang-cl -DCMAKE_Fortran_COMPILER=flang -DCMAKE_MT=mt -DBUILD_WITHOUT_LAPACK=no -DNOFORTRAN=0 -DDYNAMIC_ARCH=ON -DCMAKE_BUILD_TYPE=Release\n

You may want to add further options in the cmake command here \u2013 for instance, the default only produces a static .lib version of the library. If you would rather have a DLL, add -DBUILD_SHARED_LIBS=ON above. Note that this step only creates some command files and directories; the actual build happens next.

  7. Build the project:

cmake --build . --config Release\n
This step will create the OpenBLAS library in the \"lib\" directory, and various build-time tests in the test, ctest and openblas_utest directories. However, it will not separate the header files you might need for building your own programs from those used internally. To put all relevant files in a more convenient arrangement, run the next step.

  8. Install all relevant files created by the build:

cmake --install . --prefix c:\\opt -v\n
This will copy all files that are needed for building and running your own programs with OpenBLAS to the given location, creating appropriate subdirectories for the individual kinds of files. In the case of \"C:\\opt\" as given above, this would be C:\\opt\\include\\openblas for the header files, C:\\opt\\bin for the libopenblas.dll and C:\\opt\\lib for the static library. C:\\opt\\share holds various support files that enable other cmake-based build scripts to find OpenBLAS automatically.

"},{"location":"install/#visual-studio-2017-c2017-standard","title":"Visual studio 2017+ (C++2017 standard)","text":"

In newer Visual Studio versions, Microsoft has changed how it handles complex types. Even when using a precompiled version of OpenBLAS, you might need to define LAPACK_COMPLEX_CUSTOM in order to define complex types properly for MSVC. For example, some variant of the following might help:

#if defined(_MSC_VER)\n    #include <complex.h>\n    #define LAPACK_COMPLEX_CUSTOM\n    #define lapack_complex_float _Fcomplex\n    #define lapack_complex_double _Dcomplex\n#endif\n

For reference, see https://github.com/xianyi/OpenBLAS/issues/3661, https://github.com/Reference-LAPACK/lapack/issues/683, and https://stackoverflow.com/questions/47520244/using-openblas-lapacke-in-visual-studio.

"},{"location":"install/#cmake-and-visual-studio","title":"CMake and Visual Studio","text":"

To build OpenBLAS for the 32-bit architecture, you'll need to use the builtin Visual Studio compilers.

Note

This method may produce binaries which demonstrate significantly lower performance than those built with the other methods. (The Visual Studio compiler does not support the dialect of assembly used in the cpu-specific optimized files, so only the \"generic\" TARGET which is written in pure C will get built. For the same reason it is not possible (and not necessary) to use -DDYNAMIC_ARCH=ON in a Visual Studio build) You may consider building for the 32-bit architecture using the GNU (MinGW) ABI.

"},{"location":"install/#1-install-cmake-at-windows","title":"# 1. Install CMake at Windows","text":""},{"location":"install/#2-use-cmake-to-generate-visual-studio-solution-files","title":"# 2. Use CMake to generate Visual Studio solution files","text":"
# Do this from Powershell so cmake can find visual studio\ncmake -G \"Visual Studio 14 Win64\" -DCMAKE_BUILD_TYPE=Release .\n
"},{"location":"install/#build-the-solution-at-visual-studio","title":"Build the solution at Visual Studio","text":"

Note that this step depends on Perl, so you'll need to install Perl for Windows and put it on your PATH so that VS can start it (http://stackoverflow.com/questions/3051049/active-perl-installation-on-windows-operating-system).

Step 2 will generate the OpenBLAS solution files; open the solution in Visual Studio and build the projects. Note that the dependencies do not seem to be automatically configured: if you try to build libopenblas directly, it will fail with a message saying that some .obj files aren't found, but if you build the projects libopenblas depends on before building libopenblas, the build will succeed.

"},{"location":"install/#build-openblas-for-universal-windows-platform","title":"Build OpenBLAS for Universal Windows Platform","text":"

OpenBLAS can be built for use on the Universal Windows Platform using a two step process since commit c66b842.

"},{"location":"install/#1-follow-steps-1-and-2-above-to-build-the-visual-studio-solution-files-for-windows-this-builds-the-helper-executables-which-are-required-when-building-the-openblas-visual-studio-solution-files-for-uwp-in-step-2","title":"# 1. Follow steps 1 and 2 above to build the Visual Studio solution files for Windows. This builds the helper executables which are required when building the OpenBLAS Visual Studio solution files for UWP in step 2.","text":""},{"location":"install/#2-remove-the-generated-cmakecachetxt-and-cmakefiles-directory-from-the-openblas-source-directory-and-re-run-cmake-with-the-following-options","title":"# 2. Remove the generated CMakeCache.txt and CMakeFiles directory from the OpenBLAS source directory and re-run CMake with the following options:","text":"
# do this to build UWP compatible solution files\ncmake -G \"Visual Studio 14 Win64\" -DCMAKE_SYSTEM_NAME=WindowsStore -DCMAKE_SYSTEM_VERSION=\"10.0\" -DCMAKE_SYSTEM_PROCESSOR=AMD64 -DVS_WINRT_COMPONENT=TRUE -DCMAKE_BUILD_TYPE=Release .\n
"},{"location":"install/#build-the-solution-with-visual-studio","title":"# Build the solution with Visual Studio","text":"

This will build the OpenBLAS binaries with the required settings for use with UWP.

"},{"location":"install/#2-gnu-mingw-abi","title":"2. GNU (MinGW) ABI","text":"

The resulting library can be used in Visual Studio, but it can only be linked dynamically. This configuration has not been thoroughly tested and should be considered experimental.

"},{"location":"install/#incompatible-x86-calling-conventions","title":"Incompatible x86 calling conventions","text":"

Due to incompatibilities between the calling conventions of MinGW and Visual Studio, you will need to make the following modifications (32-bit only):

  1. Use GCC 4.7.0 or newer. Older GCC versions (<4.7.0) have an ABI incompatibility with MSVC when returning aggregate structures larger than 8 bytes.
"},{"location":"install/#build-openblas-on-windows-os","title":"Build OpenBLAS on Windows OS","text":"
  1. Install the MinGW (GCC) compiler suite, either 32-bit (http://www.mingw.org/) or 64-bit (http://mingw-w64.sourceforge.net/). Be sure to install its gfortran package as well (unless you really want to build the BLAS part of OpenBLAS only) and check that gcc and gfortran are the same version \u2013 mixing compilers from different sources or release versions can lead to strange error messages in the linking stage. In addition, please install MSYS with MinGW.
  2. Build OpenBLAS in the MSYS shell. Usually, you can just type \"make\". OpenBLAS will detect the compiler and CPU automatically.
  3. After the build is complete, OpenBLAS will generate the static library \"libopenblas.a\" and the shared dll library \"libopenblas.dll\" in the folder. You can type \"make PREFIX=/your/installation/path install\" to install the library to a certain location.

Note

We suggest using the official MinGW or MinGW-w64 compilers. A user reported encountering an \"Unhandled exception\" with another compiler suite; see https://groups.google.com/forum/#!topic/openblas-users/me2S4LkE55w

Note also that older versions of the alternative builds of mingw-w64 available through http://www.msys2.org may contain a defect that leads to a compilation failure accompanied by the error message

<command-line>:0:4: error: expected identifier or '(' before numeric constant\n
If you encounter this, please upgrade your msys2 setup or see https://github.com/xianyi/OpenBLAS/issues/1503 for a workaround.

"},{"location":"install/#generate-import-library-before-0210-version","title":"Generate import library (before 0.2.10 version)","text":"
  1. First, you will need to have the lib.exe tool in the Visual Studio command prompt.
  2. Open the command prompt and type cd OPENBLAS_TOP_DIR/exports, where OPENBLAS_TOP_DIR is the main folder of your OpenBLAS installation.
  3. For a 32-bit library, type lib /machine:i386 /def:libopenblas.def. For 64-bit, type lib /machine:X64 /def:libopenblas.def.
  4. This will generate the import library \"libopenblas.lib\" and the export library \"libopenblas.exp\" in OPENBLAS_TOP_DIR/exports. Although these two files have the same name, they are totally different.
"},{"location":"install/#generate-import-library-0210-and-after-version","title":"Generate import library (0.2.10 and after version)","text":"
  1. OpenBLAS already generated the import library \"libopenblas.dll.a\" for \"libopenblas.dll\".
"},{"location":"install/#generate-windows-native-pdb-files-from-gccgfortran-build","title":"generate windows native PDB files from gcc/gfortran build","text":"

A tool to do so is available at https://github.com/rainers/cv2pdb

"},{"location":"install/#use-openblas-dll-library-in-visual-studio","title":"Use OpenBLAS .dll library in Visual Studio","text":"
  1. Copy the import library (before 0.2.10: \"OPENBLAS_TOP_DIR/exports/libopenblas.lib\", 0.2.10 and after: \"OPENBLAS_TOP_DIR/libopenblas.dll.a\") and the .dll library \"libopenblas.dll\" into the same folder (the folder of the project that is going to use the BLAS library). You may need to add libopenblas.dll.a to the linker input list: Properties->Linker->Input.
  2. Please follow the documentation about using third-party .dll libraries in MS Visual Studio 2008 or 2010. Make sure to link against a library for the correct architecture. For example, you may receive an error such as \"The application was unable to start correctly (0xc000007b)\" which typically indicates a mismatch between 32/64-bit libraries.

Note

If you need CBLAS, you should include cblas.h in /your/installation/path/include in Visual Studio. Please read this page.

"},{"location":"install/#limitations","title":"Limitations","text":""},{"location":"install/#windows-on-arm","title":"Windows on Arm","text":""},{"location":"install/#prerequisites","title":"Prerequisites","text":"

The following tools need to be installed:

"},{"location":"install/#1-download-and-install-clang-for-windows-on-arm","title":"1. Download and install clang for windows on arm","text":"

Find the latest LLVM build for WoA on the LLVM release page.

E.g. the LLVM 12 build for WoA64 can be found here.

Run the LLVM installer and ensure that LLVM is added to the PATH environment variable.

"},{"location":"install/#2-download-and-install-classic-flang-for-windows-on-arm","title":"2. Download and install classic flang for windows on arm","text":"

Classic flang is currently the only available Fortran compiler for Windows on Arm; a pre-release build can be found here.

There is no installer for classic flang; extract the zip package and add its path to the PATH environment variable.

E.g. in PowerShell:

$env:Path += \";C:\\flang_woa\\bin\"\n
"},{"location":"install/#build","title":"Build","text":"

The following steps describe how to build the static library for OpenBLAS with and without LAPACK

"},{"location":"install/#1-build-openblas-static-library-with-blas-and-lapack-routines-with-make","title":"1. Build OpenBLAS static library with BLAS and LAPACK routines with Make","text":"

The following command can be used to build the OpenBLAS static library with BLAS and LAPACK routines:

$ make CC=\"clang-cl\" HOSTCC=\"clang-cl\" AR=\"llvm-ar\" BUILD_WITHOUT_LAPACK=0 NOFORTRAN=0 DYNAMIC_ARCH=0 TARGET=ARMV8 ARCH=arm64 BINARY=64 USE_OPENMP=0 PARALLEL=1 RANLIB=\"llvm-ranlib\" MAKE=make F_COMPILER=FLANG FC=FLANG FFLAGS_NOOPT=\"-march=armv8-a -cpp\" FFLAGS=\"-march=armv8-a -cpp\" NEED_PIC=0 HOSTARCH=arm64 libs netlib\n
"},{"location":"install/#2-build-static-library-with-blas-routines-using-cmake","title":"2. Build static library with BLAS routines using CMake","text":"

Classic flang has compatibility issues with CMake, hence only the BLAS routines can be compiled with CMake:

$ mkdir build\n$ cd build\n$ cmake ..  -G Ninja -DCMAKE_C_COMPILER=clang -DBUILD_WITHOUT_LAPACK=1 -DNOFORTRAN=1 -DDYNAMIC_ARCH=0 -DTARGET=ARMV8 -DARCH=arm64 -DBINARY=64 -DUSE_OPENMP=0 -DCMAKE_SYSTEM_PROCESSOR=ARM64 -DCMAKE_CROSSCOMPILING=1 -DCMAKE_SYSTEM_NAME=Windows\n$ cmake --build . --config Release\n
"},{"location":"install/#getarchexe-execution-error","title":"getarch.exe execution error","text":"

If you notice that the platform-specific headers generated by getarch.exe are not correct, it could be due to a known debug runtime DLL issue for arm64 platforms. Please check out the link for the workaround.

"},{"location":"install/#mingw-import-library","title":"MinGW import library","text":"

Microsoft Windows uses \"import libraries\". You don't need one with MinGW, because the ld linker from GNU Binutils can link against the DLL directly, but you may still want one for other reasons.

"},{"location":"install/#make-the-def","title":"Make the .def","text":"

Import libraries are compiled from a list of what symbols to use, .def. This should be already in your exports directory: cd OPENBLAS_TOP_DIR/exports.

"},{"location":"install/#making-a-mingw-import-library","title":"Making a MinGW import library","text":"

MinGW import libraries have the suffix .a, same as static libraries. (It's actually more common to do .dll.a...)

You need to first prepend libopenblas.def with a line LIBRARY libopenblas.dll:

cat <(echo \"LIBRARY libopenblas.dll\") libopenblas.def > libopenblas.def.1\nmv libopenblas.def.1 libopenblas.def\n

Now it probably looks like:

LIBRARY libopenblas.dll\nEXPORTS\n   caxpy=caxpy_  @1\n   caxpy_=caxpy_  @2\n       ...\n

Then, generate the import library: dlltool -d libopenblas.def -l libopenblas.a

Again, there is basically no point in making an import library for use in MinGW. It actually slows down linking.

"},{"location":"install/#making-a-msvc-import-library","title":"Making a MSVC import library","text":"

Unlike MinGW, MSVC absolutely requires an import library. The C ABI of MSVC and MinGW is identical, so linking works fine. (Any incompatibility in the C ABI would be a bug.)

The import libraries of MSVC have the suffix .lib. They are generated from a .def file using MSVC's lib.exe. See the MSVC instructions.

"},{"location":"install/#notes","title":"Notes","text":""},{"location":"install/#mac-osx","title":"Mac OSX","text":"

If your CPU is Sandy Bridge, please use Clang version 3.1 or above. Clang 3.0 will generate wrong AVX binary code for OpenBLAS.

"},{"location":"install/#precompiled-packages_2","title":"Precompiled packages","text":"

https://www.macports.org/ports.php?by=name&substr=openblas

brew install openblas

or using the conda package manager from https://github.com/conda-forge/miniforge#download (which also has packages for the new M1 cpu)

conda install openblas

"},{"location":"install/#build-on-apple-m1","title":"Build on Apple M1","text":"

On newer versions of Xcode and on arm64, you might need to compile with a newer macOS target (11.0) than the default (10.8) with MACOSX_DEPLOYMENT_TARGET=11.0, or switch your command-line tools to use an older SDK (e.g., 13.1).

"},{"location":"install/#android","title":"Android","text":""},{"location":"install/#prerequisites_1","title":"Prerequisites","text":"

In addition to the Android NDK, you will need both Perl and a C compiler on the build host as these are currently required by the OpenBLAS build environment.

"},{"location":"install/#building-with-android-ndk-using-clang-compiler","title":"Building with android NDK using clang compiler","text":"

Around version 11, the Android NDK stopped supporting gcc, so you will need to use clang to compile OpenBLAS. clang is supported from OpenBLAS version 0.2.20 onwards. See the sections below on how to build with clang for ARMV7 and ARMV8 targets. The same basic principles as described below for ARMV8 should also apply to building an x86 or x86_64 version (substitute something like NEHALEM for the target instead of ARMV8, and obviously replace all the aarch64 parts in the toolchain paths).

\"Historic\" notes: Since version 19, the default toolchain is provided as a standalone toolchain, so building one yourself following building a standalone toolchain should no longer be necessary. If you want to use static linking with an NDK version older than about r17, you currently need to choose an API level below 23 due to NDK bug 272 (https://github.com/android-ndk/ndk/issues/272 , the libc.a lacks a definition of stderr) that will probably be fixed in r17 of the NDK.

"},{"location":"install/#build-armv7-with-clang","title":"Build ARMV7 with clang","text":"

## Set path to ndk-bundle\nexport NDK_BUNDLE_DIR=/path/to/ndk-bundle\n\n## Set the PATH to contain paths to clang and arm-linux-androideabi-* utilities\nexport PATH=${NDK_BUNDLE_DIR}/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/bin:${NDK_BUNDLE_DIR}/toolchains/llvm/prebuilt/linux-x86_64/bin:$PATH\n\n## Set LDFLAGS so that the linker finds the appropriate libgcc\nexport LDFLAGS=\"-L${NDK_BUNDLE_DIR}/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/lib/gcc/arm-linux-androideabi/4.9.x\"\n\n## Set the clang cross compile flags\nexport CLANG_FLAGS=\"-target arm-linux-androideabi -marm -mfpu=vfp -mfloat-abi=softfp --sysroot ${NDK_BUNDLE_DIR}/platforms/android-23/arch-arm -gcc-toolchain ${NDK_BUNDLE_DIR}/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/\"\n\n#OpenBLAS Compile\nmake TARGET=ARMV7 ONLY_CBLAS=1 AR=ar CC=\"clang ${CLANG_FLAGS}\" HOSTCC=gcc ARM_SOFTFP_ABI=1 -j4\n
On a Mac, it may also be necessary to give the complete path to the ar utility in the make command above, like so:
AR=${NDK_BUNDLE_DIR}/toolchains/arm-linux-androideabi-4.9/prebuilt/darwin-x86_64/bin/arm-linux-androideabi-gcc-ar\n
otherwise you may get a linker error complaining about a \"malformed archive header name at 8\" when the native OSX ar command was invoked instead.

"},{"location":"install/#build-armv8-with-clang","title":"Build ARMV8 with clang","text":"

## Set path to ndk-bundle\nexport NDK_BUNDLE_DIR=/path/to/ndk-bundle/\n\n## Export PATH to contain directories of clang and aarch64-linux-android-* utilities\nexport PATH=${NDK_BUNDLE_DIR}/toolchains/aarch64-linux-android-4.9/prebuilt/linux-x86_64/bin/:${NDK_BUNDLE_DIR}/toolchains/llvm/prebuilt/linux-x86_64/bin:$PATH\n\n## Setup LDFLAGS so that loader can find libgcc and pass -lm for sqrt\nexport LDFLAGS=\"-L${NDK_BUNDLE_DIR}/toolchains/aarch64-linux-android-4.9/prebuilt/linux-x86_64/lib/gcc/aarch64-linux-android/4.9.x -lm\"\n\n## Setup the clang cross compile options\nexport CLANG_FLAGS=\"-target aarch64-linux-android --sysroot ${NDK_BUNDLE_DIR}/platforms/android-23/arch-arm64 -gcc-toolchain ${NDK_BUNDLE_DIR}/toolchains/aarch64-linux-android-4.9/prebuilt/linux-x86_64/\"\n\n## Compile\nmake TARGET=ARMV8 ONLY_CBLAS=1 AR=ar CC=\"clang ${CLANG_FLAGS}\" HOSTCC=gcc -j4\n
Note: Using TARGET=CORTEXA57 in place of ARMV8 will pick up better optimized routines. The CORTEXA57 implementations are compatible with all other ARMV8 targets.

Note: For NDK 23b, something as simple as

export PATH=/opt/android-ndk-r23b/toolchains/llvm/prebuilt/linux-x86_64/bin/:$PATH\nmake HOSTCC=gcc CC=/opt/android-ndk-r23b/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android31-clang ONLY_CBLAS=1 TARGET=ARMV8\n
appears to be sufficient on Linux.

"},{"location":"install/#alternative-script-which-was-tested-on-osx-with-ndk2136528147","title":"Alternative script which was tested on OSX with NDK(21.3.6528147)","text":"

This script will build OpenBLAS for 3 architectures (ARMV7, ARMV8, X86) and install them with sudo make install to /opt/OpenBLAS/lib.

export NDK=YOUR_PATH_TO_SDK/Android/sdk/ndk/21.3.6528147\nexport TOOLCHAIN=$NDK/toolchains/llvm/prebuilt/darwin-x86_64\n\nmake clean\nmake \\\n    TARGET=ARMV7 \\\n    ONLY_CBLAS=1 \\\n    CC=\"$TOOLCHAIN\"/bin/armv7a-linux-androideabi21-clang \\\n    AR=\"$TOOLCHAIN\"/bin/arm-linux-androideabi-ar \\\n    HOSTCC=gcc \\\n    ARM_SOFTFP_ABI=1 \\\n    -j4\nsudo make install\n\nmake clean\nmake \\\n    TARGET=CORTEXA57 \\\n    ONLY_CBLAS=1 \\\n    CC=$TOOLCHAIN/bin/aarch64-linux-android21-clang \\\n    AR=$TOOLCHAIN/bin/aarch64-linux-android-ar \\\n    HOSTCC=gcc \\\n    -j4\nsudo make install\n\nmake clean\nmake \\\n    TARGET=ATOM \\\n    ONLY_CBLAS=1 \\\n    CC=\"$TOOLCHAIN\"/bin/i686-linux-android21-clang \\\n    AR=\"$TOOLCHAIN\"/bin/i686-linux-android-ar \\\n    HOSTCC=gcc \\\n    ARM_SOFTFP_ABI=1 \\\n    -j4\nsudo make install\n\n## This will build for x86_64 \nmake clean\nmake \\\n    TARGET=ATOM BINARY=64\\\n    ONLY_CBLAS=1 \\\n    CC=\"$TOOLCHAIN\"/bin/x86_64-linux-android21-clang \\\n    AR=\"$TOOLCHAIN\"/bin/x86_64-linux-android-ar \\\n    HOSTCC=gcc \\\n    ARM_SOFTFP_ABI=1 \\\n    -j4\nsudo make install\n
You can also find the full list of target architectures in TargetList.txt.

Anything below this line should be irrelevant nowadays, unless you need to perform software archeology.

"},{"location":"install/#building-openblas-with-very-old-gcc-based-versions-of-the-ndk-without-fortran","title":"Building OpenBLAS with very old gcc-based versions of the NDK, without Fortran","text":"

The prebuilt Android NDK toolchains do not include Fortran, hence parts like LAPACK cannot be built. You can still build OpenBLAS without it. For instructions on how to build OpenBLAS with Fortran, see the next section.

To easily use the prebuilt toolchains, follow building a standalone toolchain for your desired architecture. This would be arm-linux-androideabi-gcc-4.9 for ARMV7 and aarch64-linux-android-gcc-4.9 for ARMV8.

You can build OpenBLAS (0.2.19 and earlier) with:

## Add the toolchain to your path\nexport PATH=/path/to/standalone-toolchain/bin:$PATH\n\n## Build without Fortran for ARMV7\nmake TARGET=ARMV7 HOSTCC=gcc CC=arm-linux-androideabi-gcc NOFORTRAN=1 libs\n## Build without Fortran for ARMV8\nmake TARGET=ARMV8 BINARY=64 HOSTCC=gcc CC=aarch64-linux-android-gcc NOFORTRAN=1 libs\n

Since we are cross-compiling, we make the libs recipe, not all. Otherwise you will get errors when trying to link/run tests as versions up to and including 0.2.19 cannot build a shared library for Android.

From 0.2.20 on, you should leave off the \"libs\" to get a full build, and you may want to use the softfp ABI instead of the deprecated hardfp one on ARMV7, so you would use:

## Add the toolchain to your path\nexport PATH=/path/to/standalone-toolchain/bin:$PATH\n\n## Build without Fortran for ARMV7\nmake TARGET=ARMV7 ARM_SOFTFP_ABI=1 HOSTCC=gcc CC=arm-linux-androideabi-gcc NOFORTRAN=1\n## Build without Fortran for ARMV8\nmake TARGET=ARMV8 BINARY=64 HOSTCC=gcc CC=aarch64-linux-android-gcc NOFORTRAN=1\n

If you get an error about stdio.h not being found, you need to specify your sysroot in the CFLAGS argument to make, like CFLAGS=--sysroot=$NDK/platforms/android-16/arch-arm. When you are done, install OpenBLAS into the desired directory. Be sure to also use all command line options here that you specified for building, otherwise errors may occur as it tries to install things you did not build:

make PREFIX=/path/to/install-dir TARGET=... install\n

"},{"location":"install/#building-openblas-with-fortran","title":"Building OpenBLAS with Fortran","text":"

Instructions on how to build the GNU toolchains with Fortran can be found here. The Releases section provides prebuilt versions, use the standalone one.

You can build OpenBLAS with:

## Add the toolchain to your path\nexport PATH=/path/to/standalone-toolchain-with-fortran/bin:$PATH\n\n## Build with Fortran for ARMV7\nmake TARGET=ARMV7 HOSTCC=gcc CC=arm-linux-androideabi-gcc FC=arm-linux-androideabi-gfortran libs\n## Build with LAPACK for ARMV8\nmake TARGET=ARMV8 BINARY=64 HOSTCC=gcc CC=aarch64-linux-android-gcc FC=aarch64-linux-android-gfortran libs\n

As mentioned above you can leave off the libs argument here when building 0.2.20 and later, and you may want to add ARM_SOFTFP_ABI=1 when building for ARMV7.

"},{"location":"install/#linking-openblas-0219-and-earlier-for-armv7","title":"Linking OpenBLAS (0.2.19 and earlier) for ARMV7","text":"

If you are using ndk-build, you need to set the ABI to hard floating points in your Application.mk:

APP_ABI := armeabi-v7a-hard\n

This will set the appropriate flags for you. If you are not using ndk-build, you will want to add the following flags:

TARGET_CFLAGS += -mhard-float -D_NDK_MATH_NO_SOFTFP=1\nTARGET_LDFLAGS += -Wl,--no-warn-mismatch -lm_hard\n

From 0.2.20 on, it is also possible to build for the softfp ABI by specifying ARM_SOFTFP_ABI=1 during the build. In that case, also make sure that all your dependencies are compiled with -mfloat-abi=softfp as well, as mixing \"hard\" and \"soft\" floating point ABIs in a program will make it crash.

"},{"location":"install/#iphoneios","title":"iPhone/iOS","text":"

As none of the current developers uses iOS, the following instructions are what was found to work in our Azure CI setup, but as far as we know this builds a fully working OpenBLAS for this platform.

Go to the directory where you unpacked OpenBLAS, and enter the following commands:

     CC=/Applications/Xcode_12.4.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang\n\nCFLAGS= -O2 -Wno-macro-redefined -isysroot /Applications/Xcode_12.4.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS14.4.sdk -arch arm64 -miphoneos-version-min=10.0\n\nmake TARGET=ARMV8 DYNAMIC_ARCH=1 NUM_THREADS=32 HOSTCC=clang NOFORTRAN=1\n
Adjust the iOS version (the -miphoneos-version-min parameter) as necessary for your installation, i.e. change the version number to the minimum iOS version you want to target, and execute these commands to build the library.

"},{"location":"install/#mips","title":"MIPS","text":"

For MIPS targets you will need the latest toolchains: P5600 - MTI GNU/Linux Toolchain; I6400, P6600 - IMG GNU/Linux Toolchain.

The download link is below (http://codescape-mips-sdk.imgtec.com/components/toolchain/2016.05-03/downloads.html)

You can use the following command lines for builds:

IMG_TOOLCHAIN_DIR={full IMG GNU/Linux Toolchain path including \"bin\" directory -- for example, /opt/linux_toolchain/bin}\nIMG_GCC_PREFIX=mips-img-linux-gnu\nIMG_TOOLCHAIN=${IMG_TOOLCHAIN_DIR}/${IMG_GCC_PREFIX}\n\nI6400 Build (n32):\nmake BINARY=32 BINARY32=1 CC=$IMG_TOOLCHAIN-gcc AR=$IMG_TOOLCHAIN-ar FC=\"$IMG_TOOLCHAIN-gfortran -EL -mabi=n32\" RANLIB=$IMG_TOOLCHAIN-ranlib HOSTCC=gcc CFLAGS=\"-EL\" FFLAGS=$CFLAGS LDFLAGS=$CFLAGS TARGET=I6400\n\nI6400 Build (n64):\nmake BINARY=64 BINARY64=1 CC=$IMG_TOOLCHAIN-gcc AR=$IMG_TOOLCHAIN-ar FC=\"$IMG_TOOLCHAIN-gfortran -EL\" RANLIB=$IMG_TOOLCHAIN-ranlib HOSTCC=gcc CFLAGS=\"-EL\" FFLAGS=$CFLAGS LDFLAGS=$CFLAGS TARGET=I6400\n\nP6600 Build (n32):\nmake BINARY=32 BINARY32=1 CC=$IMG_TOOLCHAIN-gcc AR=$IMG_TOOLCHAIN-ar FC=\"$IMG_TOOLCHAIN-gfortran -EL -mabi=n32\" RANLIB=$IMG_TOOLCHAIN-ranlib HOSTCC=gcc CFLAGS=\"-EL\" FFLAGS=$CFLAGS LDFLAGS=$CFLAGS TARGET=P6600\n\nP6600 Build (n64):\nmake BINARY=64 BINARY64=1 CC=$IMG_TOOLCHAIN-gcc AR=$IMG_TOOLCHAIN-ar FC=\"$IMG_TOOLCHAIN-gfortran -EL\" RANLIB=$IMG_TOOLCHAIN-ranlib HOSTCC=gcc CFLAGS=\"-EL\" FFLAGS=\"$CFLAGS\" LDFLAGS=\"$CFLAGS\" TARGET=P6600\n\nMTI_TOOLCHAIN_DIR={full MTI GNU/Linux Toolchain path including \"bin\" directory -- for example, /opt/linux_toolchain/bin}\nMTI_GCC_PREFIX=mips-mti-linux-gnu\nMTI_TOOLCHAIN=${MTI_TOOLCHAIN_DIR}/${MTI_GCC_PREFIX}\n\nP5600 Build:\n\nmake BINARY=32 BINARY32=1 CC=$MTI_TOOLCHAIN-gcc AR=$MTI_TOOLCHAIN-ar FC=\"$MTI_TOOLCHAIN-gfortran -EL\"    RANLIB=$MTI_TOOLCHAIN-ranlib HOSTCC=gcc CFLAGS=\"-EL\" FFLAGS=$CFLAGS LDFLAGS=$CFLAGS TARGET=P5600\n
"},{"location":"install/#freebsd","title":"FreeBSD","text":"

You will need to install the following tools from the FreeBSD ports tree: * lang/gcc [1] * lang/perl5.12 * ftp/curl * devel/gmake * devel/patch

To compile run the command:

$ gmake CC=gcc46 FC=gfortran46\n

Note that you need to build with GNU make and manually specify the compiler, otherwise gcc 4.2 from the base system would be used.

[1]: Removal of Fortran from the FreeBSD base system

pkg install openblas\n

see https://www.freebsd.org/ports/index.html

"},{"location":"install/#cortex-m","title":"Cortex-M","text":"

Cortex-M is a widely used microcontroller that is present in a variety of industrial and consumer electronics. A common variant of the Cortex-M is the STM32F4xx series. Here, we will give instructions for building for the STM32F4xx.

First, install the embedded Arm GCC compiler from the Arm website. Then, create the following toolchain file and build as follows.

# cmake .. -G Ninja -DCMAKE_C_COMPILER=arm-none-eabi-gcc -DCMAKE_TOOLCHAIN_FILE:PATH=\"toolchain.cmake\" -DNOFORTRAN=1 -DTARGET=ARMV5 -DEMBEDDED=1\n\nset(CMAKE_SYSTEM_NAME Generic)\nset(CMAKE_SYSTEM_PROCESSOR arm)\n\nset(CMAKE_C_COMPILER \"arm-none-eabi-gcc.exe\")\nset(CMAKE_CXX_COMPILER \"arm-none-eabi-g++.exe\")\n\nset(CMAKE_EXE_LINKER_FLAGS \"--specs=nosys.specs\" CACHE INTERNAL \"\")\n\nset(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)\nset(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)\nset(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)\nset(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)\n

In your embedded application, the following functions need to be provided for OpenBLAS to work correctly:

void free(void* ptr);\nvoid* malloc(size_t size);\n

Note

If you are developing for an embedded platform, it is your responsibility to make sure that the device has sufficient memory for malloc calls. Libmemory provides one implementation of malloc for embedded platforms.
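
For illustration only, a minimal bump-allocator sketch that satisfies these two entry points (the pool size is an arbitrary assumption, nothing is ever reclaimed, and a real project would more likely use Libmemory or another proper allocator):

#include <stddef.h>\n#include <stdint.h>\n\n#define POOL_SIZE (64 * 1024)          /* assumed size; adjust to what your device can spare */\nstatic uint8_t pool[POOL_SIZE];\nstatic size_t  pool_used;\n\nvoid *malloc(size_t size)\n{\n  size = (size + 7u) & ~(size_t)7u;    /* keep 8-byte alignment */\n  if (pool_used + size > POOL_SIZE)\n    return NULL;                       /* out of memory */\n  void *p = &pool[pool_used];\n  pool_used += size;\n  return p;\n}\n\nvoid free(void *ptr)\n{\n  (void)ptr;                           /* a bump allocator never reclaims memory */\n}\n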

"},{"location":"user_manual/","title":"User manual","text":""},{"location":"user_manual/#compile-the-library","title":"Compile the library","text":""},{"location":"user_manual/#normal-compile","title":"Normal compile","text":""},{"location":"user_manual/#cross-compile","title":"Cross compile","text":"

Please set CC and FC to the cross-toolchain compilers. Then, set HOSTCC to your host C compiler. Finally, set TARGET explicitly.

Examples:

Install only the gnueabihf versions of the toolchain. Please check https://github.com/xianyi/OpenBLAS/issues/936#issuecomment-237596847

make CC=arm-linux-gnueabihf-gcc FC=arm-linux-gnueabihf-gfortran HOSTCC=gcc TARGET=CORTEXA9\n
make BINARY=64 CC=mips64el-unknown-linux-gnu-gcc FC=mips64el-unknown-linux-gnu-gfortran HOSTCC=gcc TARGET=LOONGSON3A\n
make CC=loongcc FC=loongf95 HOSTCC=gcc TARGET=LOONGSON3A CROSS=1 CROSS_SUFFIX=mips64el-st-linux-gnu-   NO_LAPACKE=1 NO_SHARED=1 BINARY=32\n
"},{"location":"user_manual/#debug-version","title":"Debug version","text":"
make DEBUG=1\n
"},{"location":"user_manual/#install-to-the-directory-optional","title":"Install to the directory (optional)","text":"

Example:

make install PREFIX=your_installation_directory\n

The default directory is /opt/OpenBLAS. Note that any flags passed to make during the build should also be passed to make install to avoid install errors, e.g. some headers not being copied over correctly.

For more information, please read Installation Guide.

"},{"location":"user_manual/#link-the-library","title":"Link the library","text":"
gcc -o test test.c -I/your_path/OpenBLAS/include/ -L/your_path/OpenBLAS/lib -Wl,-rpath,/your_path/OpenBLAS/lib -lopenblas\n

The -Wl,-rpath,/your_path/OpenBLAS/lib option to the linker can be omitted if you ran ldconfig to update the linker cache, put /your_path/OpenBLAS/lib in /etc/ld.so.conf or a file in /etc/ld.so.conf.d, or installed OpenBLAS in a location that is part of the ld.so default search path. Otherwise, loading the library at runtime will fail.

If the library is multithreaded, please add -lpthread. If the library contains LAPACK functions, please also add -lgfortran or the appropriate Fortran runtime library; however, if you only make calls to LAPACKE routines, i.e. your code has #include \"lapacke.h\" and makes calls to functions like LAPACKE_dgeqrf, -lgfortran is not needed.
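
For instance, a minimal LAPACKE-only program such as the following sketch links cleanly without -lgfortran:

#include <stdio.h>\n#include <lapacke.h>\n\nint main(void)\n{\n  double a[6] = {1, 2, 3, 4, 5, 6};   /* 3x2 matrix in column-major order */\n  double tau[2];\n  lapack_int info = LAPACKE_dgeqrf(LAPACK_COL_MAJOR, 3, 2, a, 3, tau);\n  printf(\"dgeqrf info = %d\\n\", (int)info);\n  return 0;\n}\n
Compile with something like gcc test_lapacke.c -I/your_path/OpenBLAS/include -L/your_path/OpenBLAS/lib -lopenblas.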

gcc -o test test.c /your/path/libopenblas.a\n

You can download test.c from https://gist.github.com/xianyi/5780018

"},{"location":"user_manual/#code-examples","title":"Code examples","text":""},{"location":"user_manual/#call-cblas-interface","title":"Call CBLAS interface","text":"

This example shows calling cblas_dgemm in C. https://gist.github.com/xianyi/6930656

#include <cblas.h>\n#include <stdio.h>\n\nint main(void)\n{\n  int i=0;\n  double A[6] = {1.0,2.0,1.0,-3.0,4.0,-1.0};\n  double B[6] = {1.0,2.0,1.0,-3.0,4.0,-1.0};\n  double C[9] = {.5,.5,.5,.5,.5,.5,.5,.5,.5};\n  /* C = 1*A*B^T + 2*C, computing a 3x3 result in column-major order */\n  cblas_dgemm(CblasColMajor, CblasNoTrans, CblasTrans,3,3,2,1,A, 3, B, 3,2,C,3);\n\n  for(i=0; i<9; i++)\n    printf(\"%lf \", C[i]);\n  printf(\"\\n\");\n  return 0;\n}\n

gcc -o test_cblas_open test_cblas_dgemm.c -I /your_path/OpenBLAS/include/ -L/your_path/OpenBLAS/lib -lopenblas -lpthread -lgfortran\n
"},{"location":"user_manual/#call-blas-fortran-interface","title":"Call BLAS Fortran interface","text":"

This example shows calling dgemm Fortran interface in C. https://gist.github.com/xianyi/5780018

#include \"stdio.h\"\n#include \"stdlib.h\"\n#include \"sys/time.h\"\n#include \"time.h\"\n\nextern void dgemm_(char*, char*, int*, int*,int*, double*, double*, int*, double*, int*, double*, double*, int*);\n\nint main(int argc, char* argv[])\n{\n  int i;\n  printf(\"test!\\n\");\n  if(argc<4){\n    printf(\"Input Error\\n\");\n    return 1;\n  }\n\n  int m = atoi(argv[1]);\n  int n = atoi(argv[2]);\n  int k = atoi(argv[3]);\n  int sizeofa = m * k;\n  int sizeofb = k * n;\n  int sizeofc = m * n;\n  char ta = 'N';\n  char tb = 'N';\n  double alpha = 1.2;\n  double beta = 0.001;\n\n  struct timeval start,finish;\n  double duration;\n\n  double* A = (double*)malloc(sizeof(double) * sizeofa);\n  double* B = (double*)malloc(sizeof(double) * sizeofb);\n  double* C = (double*)malloc(sizeof(double) * sizeofc);\n\n  srand((unsigned)time(NULL));\n\n  for (i=0; i<sizeofa; i++)\n    A[i] = i%3+1;//(rand()%100)/10.0;\n\n  for (i=0; i<sizeofb; i++)\n    B[i] = i%3+1;//(rand()%100)/10.0;\n\n  for (i=0; i<sizeofc; i++)\n    C[i] = i%3+1;//(rand()%100)/10.0;\n  //#if 0\n  printf(\"m=%d,n=%d,k=%d,alpha=%lf,beta=%lf,sizeofc=%d\\n\",m,n,k,alpha,beta,sizeofc);\n  gettimeofday(&start, NULL);\n  dgemm_(&ta, &tb, &m, &n, &k, &alpha, A, &m, B, &k, &beta, C, &m);\n  gettimeofday(&finish, NULL);\n\n  duration = ((double)(finish.tv_sec-start.tv_sec)*1000000 + (double)(finish.tv_usec-start.tv_usec)) / 1000000;\n  double gflops = 2.0 * m *n*k;\n  gflops = gflops/duration*1.0e-6;\n\n  FILE *fp;\n  fp = fopen(\"timeDGEMM.txt\", \"a\");\n  fprintf(fp, \"%dx%dx%d\\t%lf s\\t%lf MFLOPS\\n\", m, n, k, duration, gflops);\n  fclose(fp);\n\n  free(A);\n  free(B);\n  free(C);\n  return 0;\n}\n
gcc -o time_dgemm time_dgemm.c /your/path/libopenblas.a -lpthread\n./time_dgemm <m> <n> <k>\n
"},{"location":"user_manual/#troubleshooting","title":"Troubleshooting","text":""},{"location":"user_manual/#blas-reference-manual","title":"BLAS reference manual","text":"

If you want to understand every BLAS function and definition, please read the Intel MKL reference manual or netlib.org

Here are OpenBLAS extension functions

"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":""},{"location":"#introduction","title":"Introduction","text":"

OpenBLAS is an optimized Basic Linear Algebra Subprograms (BLAS) library based on GotoBLAS2 1.13 BSD version.

OpenBLAS implements low-level routines for performing linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. OpenBLAS makes these routines available on multiple platforms, covering server, desktop and mobile operating systems, as well as different architectures including x86, ARM, MIPS, PPC, RISC-V, and zarch.

The old GotoBLAS documentation can be found on GitHub.

"},{"location":"#license","title":"License","text":"

OpenBLAS is licensed under the 3-clause BSD license. The full license can be found on GitHub.

"},{"location":"about/","title":"About","text":""},{"location":"about/#mailing-list","title":"Mailing list","text":"

We have a GitHub discussions forum to discuss usage and development of OpenBLAS. We also have a Google group for users and a Google group for development of OpenBLAS.

"},{"location":"about/#donations","title":"Donations","text":"

You can read OpenBLAS statement of receipts and disbursement and cash balance on google doc. A backer list is available on GitHub.

We welcome the hardware donation, including the latest CPU and boards.

"},{"location":"about/#acknowledgements","title":"Acknowledgements","text":"

This work is partially supported by * Research and Development of Compiler System and Toolchain for Domestic CPU, National S&T Major Projects: Core Electronic Devices, High-end General Chips and Fundamental Software (No.2009ZX01036-001-002) * National High-tech R&D Program of China (Grant No.2012AA010903)

"},{"location":"about/#users-of-openblas","title":"Users of OpenBLAS","text":""},{"location":"about/#publications","title":"Publications","text":""},{"location":"about/#2013","title":"2013","text":""},{"location":"about/#2012","title":"2012","text":""},{"location":"build_system/","title":"Build system","text":"

Warning

This page is made by someone who is not the developer and should not be considered as an official documentation of the build system. For getting the full picture, it is best to read the Makefiles and understand them yourself.

"},{"location":"build_system/#makefile-dep-graph","title":"Makefile dep graph","text":"
Makefile                                                        \n|                                                               \n|-----  Makefile.system # !!! this is included by many of the Makefiles in the subdirectories !!!\n|       |\n|       |=====  Makefile.prebuild # This is triggered (not included) once by Makefile.system \n|       |       |                 # and runs before any of the actual library code is built.\n|       |       |                 # (builds and runs the \"getarch\" tool for cpu identification,\n|       |       |                 # runs the compiler detection scripts c_check and f_check) \n|       |       |\n|       |       -----  (Makefile.conf) [ either this or Makefile_kernel.conf is generated ] \n|       |       |                            { Makefile.system#L243 }\n|       |       -----  (Makefile_kernel.conf) [ temporary Makefile.conf during DYNAMIC_ARCH builds ]\n|       |\n|       |-----  Makefile.rule # defaults for build options that can be given on the make command line\n|       |\n|       |-----  Makefile.$(ARCH) # architecture-specific compiler options and OpenBLAS buffer size values\n|\n|~~~~~ exports/\n|\n|~~~~~ test/\n|\n|~~~~~ utest/  \n|\n|~~~~~ ctest/\n|\n|~~~~~ cpp_thread_test/\n|\n|~~~~~ kernel/\n|\n|~~~~~ ${SUBDIRS}\n|\n|~~~~~ ${BLASDIRS}\n|\n|~~~~~ ${NETLIB_LAPACK_DIR}{,/timing,/testing/{EIG,LIN}}\n|\n|~~~~~ relapack/\n
"},{"location":"build_system/#important-variables","title":"Important Variables","text":"

Most of the tunable variables are found in Makefile.rule, along with their detailed descriptions. They are detected automatically in Makefile.prebuild if they are not set in the environment.

"},{"location":"build_system/#cpu-related","title":"CPU related","text":"
ARCH         - Target architecture (eg. x86_64)\nTARGET       - Target CPU architecture, in case of DYNAMIC_ARCH=1 means library will not be usable on less capable CPUs\nTARGET_CORE  - TARGET_CORE will override TARGET internally during each cpu-specific cycle of the build for DYNAMIC_ARCH\nDYNAMIC_ARCH - For building library for multiple TARGETs (does not lose any optimizations, but increases library size)\nDYNAMIC_LIST - optional user-provided subset of the DYNAMIC_CORE list in Makefile.system\n
"},{"location":"build_system/#toolchain-related","title":"Toolchain related","text":"
CC                 - TARGET C compiler used for compilation (can be cross-toolchains)\nFC                 - TARGET Fortran compiler used for compilation (can be cross-toolchains, set NOFORTRAN=1 if used cross-toolchain has no fortran compiler)\nAR, AS, LD, RANLIB - TARGET toolchain helpers used for compilation (can be cross-toolchains)\n\nHOSTCC             - compiler of build machine, needed to create proper config files for target architecture\nHOST_CFLAGS        - flags for build machine compiler\n
"},{"location":"build_system/#library-related","title":"Library related","text":"
BINARY          - 32/64 bit library\n\nBUILD_SHARED    - Create shared library\nBUILD_STATIC    - Create static library\n\nQUAD_PRECISION  - enable support for IEEE quad precision [ largely unimplemented leftover from GotoBLAS, do not use ]\nEXPRECISION     - Obsolete option to use float80 of SSE on BSD-like systems\nINTERFACE64     - Build with 64bit integer representations to support large array index values [ incompatible with standard API ]\n\nBUILD_SINGLE    - build the single-precision real functions of BLAS [and optionally LAPACK] \nBUILD_DOUBLE    - build the double-precision real functions\nBUILD_COMPLEX   - build the single-precision complex functions\nBUILD_COMPLEX16 - build the double-precision complex functions\n(all four types are included in the build by default when none was specifically selected)\n\nBUILD_BFLOAT16  - build the \"half precision brainfloat\" real functions \n\nUSE_THREAD      - Use a multithreading backend (default to pthread)\nUSE_LOCKING     - implement locking for thread safety even when USE_THREAD is not set (so that the singlethreaded library can\n                  safely be called from multithreaded programs)\nUSE_OPENMP      - Use OpenMP as multithreading backend\nNUM_THREADS     - define this to the maximum number of parallel threads you expect to need (defaults to the number of cores in the build cpu)\nNUM_PARALLEL    - define this to the number of OpenMP instances that your code may use for parallel calls into OpenBLAS (default 1,see below)\n

OpenBLAS uses a fixed set of memory buffers internally, used for communicating and compiling partial results from individual threads. For efficiency, the management array structure for these buffers is sized at build time - this makes it necessary to know in advance how many threads need to be supported on the target system(s). With OpenMP, there is an additional level of complexity as there may be calls originating from a parallel region in the calling program. If OpenBLAS gets called from a single parallel region, it runs single-threaded automatically to avoid overloading the system by fanning out its own set of threads. In the case that an OpenMP program makes multiple calls from independent regions or instances in parallel, this default serialization is not sufficient as the additional caller(s) would compete for the original set of buffers already in use by the first call. So if multiple OpenMP runtimes call into OpenBLAS at the same time, then only one of them will be able to make progress while all the rest of them spin-wait for the one available buffer. Setting NUM_PARALLEL to the upper bound on the number of OpenMP runtimes that you can have in a process ensures that there are a sufficient number of buffer sets available

"},{"location":"ci/","title":"CI jobs","text":"Arch Target CPU OS Build system XComp to C Compiler Fortran Compiler threading DYN_ARCH INT64 Libraries CI Provider CPU count x86_64 Intel 32bit Windows CMAKE/VS2015 - mingw6.3 - pthreads - - static Appveyor x86_64 Intel Windows CMAKE/VS2015 - mingw5.3 - pthreads - - static Appveyor x86_64 Intel Centos5 gmake - gcc 4.8 gfortran pthreads + - both Azure x86_64 SDE (SkylakeX) Ubuntu CMAKE - gcc gfortran pthreads - - both Azure x86_64 Haswell/ SkylakeX Windows CMAKE/VS2017 - VS2017 - - - static Azure x86_64 \" Windows mingw32-make - gcc gfortran list - both Azure x86_64 \" Windows CMAKE/Ninja - LLVM - - - static Azure x86_64 \" Windows CMAKE/Ninja - LLVM flang - - static Azure x86_64 \" Windows CMAKE/Ninja - VS2022 flang* - - static Azure x86_64 \" macOS11 gmake - gcc-10 gfortran OpenMP + - both Azure x86_64 \" macOS11 gmake - gcc-10 gfortran none - - both Azure x86_64 \" macOS12 gmake - gcc-12 gfortran pthreads - - both Azure x86_64 \" macOS11 gmake - llvm - OpenMP + - both Azure x86_64 \" macOS11 CMAKE - llvm - OpenMP no_avx512 - static Azure x86_64 \" macOS11 CMAKE - gcc-10 gfortran pthreads list - shared Azure x86_64 \" macOS11 gmake - llvm ifort pthreads - - both Azure x86_64 \" macOS11 gmake arm AndroidNDK-llvm - - - both Azure x86_64 \" macOS11 gmake arm64 XCode 12.4 - + - both Azure x86_64 \" macOS11 gmake arm XCode 12.4 - + - both Azure x86_64 \" Alpine Linux(musl) gmake - gcc gfortran pthreads + - both Azure arm64 Apple M1 OSX CMAKE/XCode - LLVM - OpenMP - - static Cirrus arm64 Apple M1 OSX CMAKE/Xcode - LLVM - OpenMP - + static Cirrus arm64 Apple M1 OSX CMAKE/XCode x86_64 LLVM - - + - static Cirrus arm64 Neoverse N1 Linux gmake - gcc10.2 - pthreads - - both Cirrus arm64 Neoverse N1 Linux gmake - gcc10.2 - pthreads - + both Cirrus arm64 Neoverse N1 Linux gmake - gcc10.2 - OpenMP - - both Cirrus 8 x86_64 Ryzen FreeBSD gmake - gcc12.2 gfortran pthreads - - both Cirrus x86_64 Ryzen FreeBSD gmake gcc12.2 gfortran pthreads - + both Cirrus x86_64 GENERIC QEMU gmake mips64 gcc gfortran pthreads - - static Github x86_64 SICORTEX QEMU gmake mips64 gcc gfortran pthreads - - static Github x86_64 I6400 QEMU gmake mips64 gcc gfortran pthreads - - static Github x86_64 P6600 QEMU gmake mips64 gcc gfortran pthreads - - static Github x86_64 I6500 QEMU gmake mips64 gcc gfortran pthreads - - static Github x86_64 Intel Ubuntu CMAKE - gcc-11.3 gfortran pthreads + - static Github x86_64 Intel Ubuntu gmake - gcc-11.3 gfortran pthreads + - both Github x86_64 Intel Ubuntu CMAKE - gcc-11.3 flang-classic pthreads + - static Github x86_64 Intel Ubuntu gmake - gcc-11.3 flang-classic pthreads + - both Github x86_64 Intel macOS12 CMAKE - AppleClang 14 gfortran pthreads + - static Github x86_64 Intel macOS12 gmake - AppleClang 14 gfortran pthreads + - both Github x86_64 Intel Windows2022 CMAKE/Ninja - mingw gcc 13 gfortran + - static Github x86_64 Intel Windows2022 CMAKE/Ninja - mingw gcc 13 gfortran + + static Github x86_64 Intel 32bit Windows2022 CMAKE/Ninja - mingw gcc 13 gfortran + - static Github x86_64 Intel Windows2022 CMAKE/Ninja - LLVM 16 - + - static Github x86_64 Intel Windows2022 CMAKE/Ninja - LLVM 16 - + + static Github x86_64 Intel Windows2022 CMAKE/Ninja - gcc 13 - + - static Github x86_64 Intel Ubuntu gmake mips64 gcc gfortran pthreads + - both Github x86_64 generic Ubuntu gmake riscv64 gcc gfortran pthreads - - both Github x86_64 Intel Ubuntu gmake mips32 gcc gfortran pthreads - - both Github x86_64 Intel Ubuntu gmake ia64 gcc gfortran pthreads - - 
both Github x86_64 C910V QEmu gmake riscv64 gcc gfortran pthreads - - both Github power pwr9 Ubuntu gmake - gcc gfortran OpenMP - - both OSUOSL zarch z14 Ubuntu gmake - gcc gfortran OpenMP - - both OSUOSL"},{"location":"developers/","title":"Developer manual","text":""},{"location":"developers/#source-codes-layout","title":"Source codes Layout","text":"
OpenBLAS/  \n\u251c\u2500\u2500 benchmark                  Benchmark codes for BLAS\n\u251c\u2500\u2500 cmake                      CMakefiles\n\u251c\u2500\u2500 ctest                      Test codes for CBLAS interfaces\n\u251c\u2500\u2500 driver                     Implemented in C\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 level2\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 level3\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 mapper\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 others                 Memory management, threading, etc\n\u251c\u2500\u2500 exports                    Generate shared library\n\u251c\u2500\u2500 interface                  Implement BLAS and CBLAS interfaces (calling driver or kernel)\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lapack\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 netlib\n\u251c\u2500\u2500 kernel                     Optimized assembly kernels for CPU architectures\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 alpha                  Original GotoBLAS kernels for DEC Alpha\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 arm                    ARMV5,V6,V7 kernels (including generic C codes used by other architectures)\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 arm64                  ARMV8\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 generic                General kernel codes written in plain C, parts used by many architectures.\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ia64                   Original GotoBLAS kernels for Intel Itanium\n\u2502   \u251c\u2500\u2500 mips\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 mips64\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 power\n|   \u251c\u2500\u2500 riscv64\n|   \u251c\u2500\u2500 simd                   Common code for Universal Intrinsics, used by some x86_64 and arm64 kernels\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sparc\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 x86\n\u2502   \u251c\u2500\u2500 x86_64\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 zarch   \n\u251c\u2500\u2500 lapack                      Optimized LAPACK codes (replacing those in regular LAPACK)\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 getf2\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 getrf\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 getrs\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 laswp\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lauu2\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lauum\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 potf2\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 potrf\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 trti2\n\u2502   \u251c\u2500\u2500 trtri\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 trtrs\n\u251c\u2500\u2500 lapack-netlib               LAPACK codes from netlib reference implementation\n\u251c\u2500\u2500 reference                   BLAS Fortran reference implementation (unused)\n\u251c\u2500\u2500 relapack                    Elmar Peise's recursive LAPACK (implemented on top of regular LAPACK)\n\u251c\u2500\u2500 test                        Test codes for BLAS\n\u2514\u2500\u2500 utest                       Regression test\n

A call tree for dgemm is as follows:

interface/gemm.c\n        \u2502\ndriver/level3/level3.c\n        \u2502\ngemm assembly kernels at kernel/\n

To find the kernel currently used for a particular supported cpu, please check the corresponding kernel/$(ARCH)/KERNEL.$(CPU) file.

Here is an example for kernel/x86_64/KERNEL.HASWELL

...\nDTRMMKERNEL    =  dtrmm_kernel_4x8_haswell.c\nDGEMMKERNEL    =  dgemm_kernel_4x8_haswell.S\n...\n
According to the KERNEL.HASWELL excerpt above, the OpenBLAS dgemm kernel file for Haswell is dgemm_kernel_4x8_haswell.S.

"},{"location":"developers/#optimizing-gemm-for-a-given-hardware","title":"Optimizing GEMM for a given hardware","text":"

Read the Goto paper to understand the algorithm.

Goto, Kazushige; van de Geijn, Robert A. (2008). \"Anatomy of High-Performance Matrix Multiplication\". ACM Transactions on Mathematical Software 34 (3): Article 12 (The above link is available only to ACM members, but this and many related papers are also available on the pages of van de Geijn's FLAME project, http://www.cs.utexas.edu/~flame/web/FLAMEPublications.html)

The file driver/level3/level3.c is the implementation of Goto's algorithm. In addition, you can look at kernel/generic/gemmkernel_2x2.c, a naive 2x2 register-blocking GEMM kernel written in plain C.
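
For orientation, here is a minimal C sketch of the register-blocking idea behind such a naive kernel (an illustration of the technique only, not the actual gemmkernel_2x2.c code; it assumes column-major storage, C += A*B, and m and n divisible by 2):

/* Illustrative 2x2 register-blocked C += A*B (column-major, m and n even); not the OpenBLAS kernel itself. */\nstatic void dgemm_2x2_sketch(int m, int n, int k,\n                             const double *A, int lda,\n                             const double *B, int ldb,\n                             double *C, int ldc) {\n    for (int j = 0; j < n; j += 2) {\n        for (int i = 0; i < m; i += 2) {\n            double c00 = 0.0, c10 = 0.0, c01 = 0.0, c11 = 0.0;  /* 2x2 block of C kept in registers */\n            for (int p = 0; p < k; p++) {\n                double a0 = A[i + p * lda], a1 = A[i + 1 + p * lda];\n                double b0 = B[p + j * ldb], b1 = B[p + (j + 1) * ldb];\n                c00 += a0 * b0; c10 += a1 * b0;\n                c01 += a0 * b1; c11 += a1 * b1;\n            }\n            C[i + j * ldc]           += c00;\n            C[i + 1 + j * ldc]       += c10;\n            C[i + (j + 1) * ldc]     += c01;\n            C[i + 1 + (j + 1) * ldc] += c11;\n        }\n    }\n}\n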

Then: * Write optimized assembly kernels, taking into account the instruction pipeline, the available registers, and memory/cache access patterns. * Tune the cache block sizes Mc, Kc, and Nc.

Note that not all of the cpu-specific parameters in param.h are actively used in algorithms. DNUMOPT only appears as a scale factor in profiling output of the level3 syrk interface code, while its counterpart SNUMOPT (aliased as NUMOPT in common.h) is not used anywhere at all. SYMV_P is only used in the generic kernels for the symv and chemv/zhemv functions - at least some of those are usually overridden by cpu-specific implementations, so if you start by cloning the existing implementation for a related cpu you need to check its KERNEL file to see if tuning SYMV_P would have any effect at all. GEMV_UNROLL is only used by some older x86_64 kernels, so not all sections in param.h define it. Similarly, not all of the cpu parameters like L2 or L3 cache sizes are necessarily used in current kernels for a given model - by all indications the cpu identification code was imported from some other project originally.

"},{"location":"developers/#run-openblas-test","title":"Run OpenBLAS Test","text":"

We use the netlib BLAS test, CBLAS test, and LAPACK test suites. In addition, we use BLAS-Tester, a test tool modified from ATLAS.

The project makes use of several Continuous Integration (CI) services conveniently interfaced with GitHub to automatically check compilability on a number of platforms. Lastly, the test suites included with \"numerically heavy\" projects like Julia, NumPy, Octave or Quantum ESPRESSO can be used for regression testing.

"},{"location":"developers/#benchmarking","title":"Benchmarking","text":"

Several simple C benchmarks for performance testing individual BLAS functions are available in the benchmark folder, and its scripts subdirectory contains corresponding versions for Python, Octave and R. Other options include

"},{"location":"developers/#adding-autodetection-support-for-a-new-revision-or-variant-of-a-supported-cpu","title":"Adding autodetection support for a new revision or variant of a supported cpu","text":"

Especially relevant for x86_64, a new cpu model may be a \"refresh\" (die shrink and/or different number of cores) within an existing model family without significant changes to its instruction set (e.g. Intel Skylake and Kaby Lake are still fundamentally Haswell, while low-end Goldmont etc. are Nehalem). In this case, compilation with the appropriate older TARGET will already lead to a satisfactory build.

To achieve autodetection of the new model, its CPUID (or an equivalent identifier) needs to be added in the cpuid_<architecture>.c relevant for its general architecture, with the returned name for the new type set appropriately. For x86 which has the most complex cpuid file, there are two functions that need to be edited - get_cpuname() to return e.g. CPUTYPE_HASWELL and get_corename() for the (broader) core family returning e.g. CORE_HASWELL. (This information ends up in the Makefile.conf and config.h files generated by getarch. Failure to set either will typically lead to a missing definition of the GEMM_UNROLL parameters later in the build, as getarch_2nd will be unable to find a matching parameter section in param.h.)

For architectures where \"DYNAMIC_ARCH\" builds are supported, a similar but simpler code section for the corresponding runtime detection of the cpu exists in driver/others/dynamic.c (for x86) and driver/others/dynamic_<arch>.c for other architectures. Note that for x86 the CPUID is compared after splitting it into its family, extended family, model and extended model parts, so the single decimal number returned by Linux in /proc/cpuinfo for the model has to be converted back to hexadecimal before splitting into its constituent digits, e.g. 142 = 0x8E, which translates to extended model 8, model 14.
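
As a small worked example of that conversion (a standalone sketch, not code taken from dynamic.c):

#include <stdio.h>\n\nint main(void) {\n    int model = 142;                         /* decimal model value as shown in /proc/cpuinfo */\n    int extended_model = (model >> 4) & 0xF; /* high nibble: 8 */\n    int base_model     = model & 0xF;        /* low nibble: 0xE = 14 */\n    printf(\"0x%X -> extended model %d, model %d\\n\", model, extended_model, base_model);\n    return 0;\n}\n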

"},{"location":"developers/#adding-dedicated-support-for-a-new-cpu-model","title":"Adding dedicated support for a new cpu model","text":"

Usually it will be possible to start from an existing model, clone its KERNEL configuration file to the new name to use for this TARGET and eventually replace individual kernels with versions better suited for peculiarities of the new cpu model. In addition, it is necessary to add (or clone at first) the corresponding section of GEMM_UNROLL parameters in the toplevel param.h, and possibly to add definitions such as USE_TRMM (governing whether TRMM functions use the respective GEMM kernel or a separate source file) to the Makefiles (and CMakeLists.txt) in the kernel directory. The new cpu name needs to be added to TargetList.txt, and the cpu autodetection code used by the getarch helper program - contained in the cpuid_<architecture>.c file - needs to be amended to include the CPUID (or equivalent) information processing required (see the preceding section).

"},{"location":"developers/#adding-support-for-an-entirely-new-architecture","title":"Adding support for an entirely new architecture","text":"

This endeavour is best started by cloning the entire support structure for 32bit ARM, and within that the ARMV5 cpu in particular as this is implemented through plain C kernels only. An example providing a convenient \"shopping list\" can be seen in pull request #1526.

"},{"location":"distributing/","title":"Redistributing OpenBLAS","text":"

Note

This document contains recommendations only - packagers and other redistributors are in charge of how OpenBLAS is built and distributed in their systems, and may have good reasons to deviate from the guidance given on this page. These recommendations are aimed at general packaging systems that are open source (or at least freely available), with a user base that is typically large, does not behave uniformly, and is not one that the packager is directly connected with.

OpenBLAS has a large number of build-time options which can be used to change how it behaves at runtime, how artifacts or symbols are named, etc. Variation in build configuration can be necessary to achieve a given end goal within a distribution or as an end user. However, such variation can also make it more difficult to build on top of OpenBLAS and ship code or other packages in a way that works across many different distros. Here we provide guidance about the most important build options, what effects they may have when changed, and which ones to default to.

The Make and CMake build systems provide equivalent options and yield more or less the same artifacts, but not exactly (the CMake builds are still experimental). You can choose either one and the options will function in the same way; however, the CMake outputs may require some renaming. To review available build options, see Makefile.rule or CMakeLists.txt in the root of the repository.

Build options typically fall into two categories: (a) options that affect the user interface, such as library and symbol names or APIs that are made available, and (b) options that affect performance and runtime behavior, such as threading behavior or CPU architecture-specific code paths. The user interface options are more important to keep aligned between distributions, while for the performance-related options there are typically more reasons to make choices that deviate from the defaults.

Here are recommendations for user-interface-related packaging choices where deviating is not likely to be a good idea (typically these are the default settings):

  1. Include CBLAS. The CBLAS interface is widely used and it doesn't affect binary size much, so don't turn it off.
  2. Include LAPACK and LAPACKE. The LAPACK interface is also widely used, and while it does make up a significant part of the binary size of the installed library, that does not outweigh the regression in usability when deviating from the default here.[^1]
  3. Always distribute the pkg-config (.pc) and CMake (.cmake) dependency detection files. These files are used by build systems when users want to link against OpenBLAS, and there is no benefit to leaving them out.
  4. Provide the LP64 interface by default, and if in addition to that you choose to provide an ILP64 interface build as well, use a symbol suffix to avoid symbol name clashes (see the next section).

[^1]: All major distributions do include LAPACK as of mid-2023, as far as we know. Older versions of Arch Linux did not, and that was known to cause problems.

"},{"location":"distributing/#ilp64-interface-builds","title":"ILP64 interface builds","text":"

The LP64 (32-bit integer) interface is the default build, and has well-established C and Fortran APIs as determined by the reference (Netlib) BLAS and LAPACK libraries. The ILP64 (64-bit integer) interface, however, does not have a standard API: symbol names and shared/static library names can be produced in multiple ways, and this tends to make it difficult to use. Today there is a way of choosing names agreed upon by a number of key users/redistributors of OpenBLAS, which is the closest thing to a standard that currently exists. However, there is an ongoing standardization effort in the reference BLAS and LAPACK libraries, which differs from the current OpenBLAS agreed-upon convention. In this section we'll aim to explain both.

Those two methods are fairly similar, and have a key thing in common: using a symbol suffix. This is good practice; if you distribute an ILP64 build, it is recommended to have it use a symbol suffix containing 64 in the name. This avoids potential symbol clashes when different packages which depend on OpenBLAS load both an LP64 and an ILP64 library into memory at the same time.

"},{"location":"distributing/#the-current-openblas-agreed-upon-ilp64-convention","title":"The current OpenBLAS agreed-upon ILP64 convention","text":"

This convention comprises the shared library name and the symbol suffix in the shared library. The symbol suffix to use is 64_, implying that the library name will be libopenblas64_.so and the symbols in that library end in 64_. The central issue where this was discussed is openblas#646, and adopters include Fedora, Julia, NumPy and SciPy - SuiteSparse already used it as well.

To build shared and static libraries with the currently recommended ILP64 conventions with Make:

$ make INTERFACE64=1 SYMBOLSUFFIX=64_\n

This will produce libraries named libopenblas64_.so|a, a pkg-config file named openblas64.pc, and CMake and header files.

Installing locally and inspecting the output will show a few more details:

$ make install PREFIX=$PWD/../openblas/make64 INTERFACE64=1 SYMBOLSUFFIX=64_\n$ tree .  # output slightly edited down\n.\n\u251c\u2500\u2500 include\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 cblas.h\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 f77blas.h\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lapacke_config.h\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lapacke.h\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lapacke_mangling.h\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lapacke_utils.h\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lapack.h\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 openblas_config.h\n\u2514\u2500\u2500 lib\n    \u251c\u2500\u2500 cmake\n    \u2502\u00a0\u00a0 \u2514\u2500\u2500 openblas\n    \u2502\u00a0\u00a0     \u251c\u2500\u2500 OpenBLASConfig.cmake\n    \u2502\u00a0\u00a0     \u2514\u2500\u2500 OpenBLASConfigVersion.cmake\n    \u251c\u2500\u2500 libopenblas64_.a\n    \u251c\u2500\u2500 libopenblas64_.so\n    \u2514\u2500\u2500 pkgconfig\n        \u2514\u2500\u2500 openblas64.pc\n

A key point is the symbol names. These consist of the LP64 API names, followed (for Fortran symbols only) by the compiler mangling, and then the 64_ symbol suffix. Hence, to obtain the final symbol names, we need to take into account which Fortran compiler we are using. For the most common cases (e.g., gfortran, Intel Fortran, or Flang), the mangling means appending a single underscore. In that case, the result is:

base API name binary symbol name call from Fortran code call from C code dgemm dgemm_64_ dgemm_64(...) dgemm_64_(...) cblas_dgemm cblas_dgemm64_ n/a cblas_dgemm64_(...)
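
As an illustration of the C column of this table, the sketch below calls the suffixed CBLAS symbol directly and links against libopenblas64_; the hand-written prototype and the tiny matrices are for demonstration only (a real project would simply include the suffixed cblas.h installed by the ILP64 build), and it assumes the 64_ convention with 64-bit integer arguments:

#include <stdint.h>\n\n/* Hand-written prototype for illustration only; normally you would include the\n   cblas.h installed by the INTERFACE64=1 SYMBOLSUFFIX=64_ build instead. */\nvoid cblas_dgemm64_(int order, int transa, int transb,\n                    int64_t m, int64_t n, int64_t k, double alpha,\n                    const double *a, int64_t lda,\n                    const double *b, int64_t ldb, double beta,\n                    double *c, int64_t ldc);\n\nint main(void) {\n    enum { RowMajor = 101, NoTrans = 111 };  /* standard CBLAS enum values */\n    double a[4] = {1, 2, 3, 4}, b[4] = {1, 0, 0, 1}, c[4] = {0};\n    cblas_dgemm64_(RowMajor, NoTrans, NoTrans, 2, 2, 2, 1.0, a, 2, b, 2, 0.0, c, 2);\n    return 0;\n}\n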

It is quite useful to have these symbol names be as uniform as possible across different packaging systems.

The equivalent build options with CMake are:

$ mkdir build && cd build\n$ cmake .. -DINTERFACE64=1 -DSYMBOLSUFFIX=64_ -DBUILD_SHARED_LIBS=ON -DBUILD_STATIC_LIBS=ON\n$ cmake --build . -j\n

Note that the result is not 100% identical to the Make result. For example, the library name ends in _64 rather than 64_ - it is recommended to rename them to match the Make library names (also update the libsuffix entry in openblas64.pc to match that rename).

$ cmake --install . --prefix $PWD/../../openblas/cmake64\n$ tree .\n.\n\u251c\u2500\u2500 include\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 openblas64\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 cblas.h\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 f77blas.h\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 lapacke_config.h\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 lapacke_example_aux.h\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 lapacke.h\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 lapacke_mangling.h\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 lapacke_utils.h\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 lapack.h\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 openblas64\n\u2502\u00a0\u00a0     \u2502\u00a0\u00a0 \u2514\u2500\u2500 lapacke_mangling.h\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 openblas_config.h\n\u2514\u2500\u2500 lib\n    \u251c\u2500\u2500 cmake\n    \u2502\u00a0\u00a0 \u2514\u2500\u2500 OpenBLAS64\n    \u2502\u00a0\u00a0     \u251c\u2500\u2500 OpenBLAS64Config.cmake\n    \u2502\u00a0\u00a0     \u251c\u2500\u2500 OpenBLAS64ConfigVersion.cmake\n    \u2502\u00a0\u00a0     \u251c\u2500\u2500 OpenBLAS64Targets.cmake\n    \u2502\u00a0\u00a0     \u2514\u2500\u2500 OpenBLAS64Targets-noconfig.cmake\n    \u251c\u2500\u2500 libopenblas_64.a\n    \u251c\u2500\u2500 libopenblas_64.so -> libopenblas_64.so.0\n    \u2514\u2500\u2500 pkgconfig\n        \u2514\u2500\u2500 openblas64.pc\n

"},{"location":"distributing/#the-upcoming-standardized-ilp64-convention","title":"The upcoming standardized ILP64 convention","text":"

While the 64_ convention above got some adoption, it's slightly hacky and is implemented through the use of objcopy. An effort is ongoing for a more broadly adopted convention in the reference BLAS and LAPACK libraries, using (a) the _64 suffix, and (b) applying that suffix before rather than after Fortran compiler mangling. The central issue for this is lapack#666.

For the most common cases of compiler mangling (a single _ appended), the end result will be:

base API name binary symbol name call from Fortran code call from C code dgemm dgemm_64_ dgemm_64(...) dgemm_64_(...) cblas_dgemm cblas_dgemm_64 n/a cblas_dgemm_64(...)

For other compiler mangling schemes, replace the trailing _ by the scheme in use.

The shared library name for this _64 convention should be libopenblas_64.so.

Note: it is not yet possible to produce an OpenBLAS build which employs this convention! Once reference BLAS and LAPACK with support for _64 have been released, a future OpenBLAS release will support it. For now, please use the older 64_ scheme and avoid using the name libopenblas_64.so; it should be considered reserved for future use of the _64 standard as prescribed by reference BLAS/LAPACK.

"},{"location":"distributing/#performance-and-runtime-behavior-related-build-options","title":"Performance and runtime behavior related build options","text":"

For these options there are multiple reasonable or common choices.

"},{"location":"distributing/#threading-related-options","title":"Threading related options","text":"

OpenBLAS can be built as a multi-threaded or single-threaded library, with the default being multi-threaded. It's expected that the default libopenblas library is multi-threaded; if you'd like to also distribute single-threaded builds, consider naming them libopenblas_sequential.

OpenBLAS can be built with pthreads or OpenMP as the threading model, with the default being pthreads. Both options are commonly used, and the choice here should not influence the shared library name. The choice will be captured by the .pc file. E.g.,:

$ pkg-config --libs openblas\n-fopenmp -lopenblas\n\n$ cat openblas.pc\n...\nopenblas_config= ... USE_OPENMP=0 MAX_THREADS=24\n

The maximum number of threads users will be able to use is determined at build time by the NUM_THREADS build option. It defaults to 24, and there's a wide range of values that are reasonable to use (up to 256). 64 is a typical choice here; there is a memory footprint penalty that is linear in NUM_THREADS. Please see Makefile.rule for more details.

"},{"location":"distributing/#cpu-architecture-related-options","title":"CPU architecture related options","text":"

OpenBLAS contains a lot of CPU architecture-specific optimizations, hence when distributing to a user base with a variety of hardware, it is recommended to enable CPU architecture runtime detection. This will dynamically select optimized kernels for individual APIs. To do this, use the DYNAMIC_ARCH=1 build option. This is usually done on all common CPU families, except when there are known issues.

In case the CPU architecture is known (e.g. you're building binaries for macOS M1 users), it is possible to specify the target architecture directly with the TARGET= build option.

DYNAMIC_ARCH and TARGET are covered in more detail in the main README.md in this repository.

"},{"location":"distributing/#real-world-examples","title":"Real-world examples","text":"

OpenBLAS is likely to be distributed in one of these distribution models:

  1. As a standalone package, or multiple packages, in a packaging ecosystem like a Linux distro, Homebrew, conda-forge or MSYS2.
  2. Vendored as part of a larger package, e.g. in Julia, NumPy, SciPy, or R.
  3. Locally, e.g. making available as a build on a single HPC cluster.

The guidance on this page is most important for models (1) and (2). These links to build recipes for a representative selection of packaging systems may be helpful as a reference:

"},{"location":"extensions/","title":"Extensions","text":" Routine Data Types Description ?axpby s,d,c,z like axpy with a multiplier for y ?gemm3m c,z gemm3m ?imatcopy s,d,c,z in-place transpositon/copying ?omatcopy s,d,c,z out-of-place transpositon/copying ?geadd s,d,c,z matrix add ?gemmt s,d,c,z gemm but only a triangular part updated "},{"location":"faq/","title":"FAQ","text":""},{"location":"faq/#general-questions","title":"General questions","text":""},{"location":"faq/#what-is-blas-why-is-it-important","title":"What is BLAS? Why is it important?","text":"

BLAS stands for Basic Linear Algebra Subprograms. BLAS provides standard interfaces for linear algebra, including BLAS1 (vector-vector operations), BLAS2 (matrix-vector operations), and BLAS3 (matrix-matrix operations). In general, BLAS is the computational kernel (\"the bottom of the food chain\") in linear algebra or scientific applications. Thus, if the BLAS implementation is highly optimized, the whole application can get a substantial benefit.

"},{"location":"faq/#what-functions-are-there-and-how-can-i-call-them-from-my-c-code","title":"What functions are there and how can I call them from my C code?","text":"

As BLAS is a standardized interface, you can refer to the documentation of its reference implementation at netlib.org. Calls from C go through its CBLAS interface, so your code will need to include the provided cblas.h in addition to linking with -lopenblas. A single-precision matrix multiplication will look like

#include <cblas.h>\n...\ncblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, M, N, K, 1.0, A, K, B, N, 0.0, result, N);\n
where M,N,K are the dimensions of your data - see https://petewarden.files.wordpress.com/2015/04/gemm_corrected.png (This image is part of an article on GEMM in the context of deep learning that is well worth reading in full - https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/)
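
For a self-contained starting point, the snippet above can be fleshed out roughly as follows (matrix sizes and values are arbitrary example choices; compile with something like gcc example.c -lopenblas):

#include <stdio.h>\n#include <cblas.h>\n\nint main(void) {\n    /* C(2x2) = 1.0 * A(2x3) * B(3x2) + 0.0 * C, row-major storage */\n    const int M = 2, N = 2, K = 3;\n    float A[6] = {1, 2, 3, 4, 5, 6};\n    float B[6] = {1, 0, 0, 1, 1, 1};\n    float C[4] = {0};\n    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,\n                M, N, K, 1.0f, A, K, B, N, 0.0f, C, N);\n    printf(\"%g %g\\n%g %g\\n\", C[0], C[1], C[2], C[3]);\n    return 0;\n}\n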

"},{"location":"faq/#what-is-openblas-why-did-you-create-this-project","title":"What is OpenBLAS? Why did you create this project?","text":"

OpenBLAS is an open source BLAS library forked from the GotoBLAS2-1.13 BSD version. Since Mr. Kazushige Goto left TACC, GotoBLAS is no longer being maintained. Thus, we created this project to continue developing OpenBLAS/GotoBLAS.

"},{"location":"faq/#whats-the-difference-between-openblas-and-gotoblas","title":"What's the difference between OpenBLAS and GotoBLAS?","text":"

In OpenBLAS 0.2.0, we optimized level 3 BLAS on Intel Sandy Bridge under a 64-bit OS. We obtained performance comparable with that of Intel MKL.

We optimized level 3 BLAS performance on the ICT Loongson-3A CPU. It outperformed GotoBLAS by 135% in a single thread and 120% in 4 threads.

We fixed some GotoBLAS bugs including a SEGFAULT bug on the new Linux kernel, MingW32/64 bugs, and a ztrmm computing error bug on Intel Nehalem.

We also added some minor features, e.g. supporting \"make install\", compiling without LAPACK and upgrading the LAPACK version to 3.4.2.

You can find the full list of modifications in Changelog.txt.

"},{"location":"faq/#where-do-parameters-gemm_p-gemm_q-gemm_r-come-from","title":"Where do parameters GEMM_P, GEMM_Q, GEMM_R come from?","text":"

The detailed explanation is probably in the original publication authored by Kazushige Goto - Goto, Kazushige; van de Geijn, Robert A; Anatomy of high-performance matrix multiplication. ACM Transactions on Mathematical Software (TOMS). Volume 34 Issue 3, May 2008 While this article is paywalled and too old for preprints to be available on arxiv.org, more recent publications like https://arxiv.org/pdf/1609.00076 contain at least a brief description of the algorithm. In practice, the values are derived by experimentation to yield the block sizes that give the highest performance. A general rule of thumb for selecting a starting point seems to be that PxQ is about half the size of L2 cache.
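
As a back-of-the-envelope illustration of that rule of thumb (the 1 MiB L2 cache size is just an example; real values are tuned experimentally per cpu):

#include <stdio.h>\n\nint main(void) {\n    const long l2_bytes = 1L << 20;              /* example: 1 MiB L2 cache */\n    const long budget   = l2_bytes / 2;          /* rule of thumb: the PxQ block uses about half of L2 */\n    const long elements = budget / (long)sizeof(double);\n    printf(\"P*Q starting point: about %ld doubles (e.g. 256 x 256)\\n\", elements);\n    return 0;\n}\n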

"},{"location":"faq/#how-can-i-report-a-bug","title":"How can I report a bug?","text":"

Please file an issue at this issue page or send mail to the OpenBLAS mailing list.

Please provide the following information: CPU, OS, compiler, and OpenBLAS compiling flags (Makefile.rule). In addition, please describe how to reproduce this bug.

"},{"location":"faq/#how-to-reference-openblas","title":"How to reference OpenBLAS.","text":"

You can reference our papers listed on this page. Alternatively, you can cite the OpenBLAS homepage http://www.openblas.net.

"},{"location":"faq/#how-can-i-use-openblas-in-multi-threaded-applications","title":"How can I use OpenBLAS in multi-threaded applications?","text":"

If your application is already multi-threaded, it will conflict with OpenBLAS multi-threading. Thus, you must set OpenBLAS to use a single thread, for example as sketched below.
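
A minimal sketch of doing this at runtime with a threaded OpenBLAS build (alternatives are setting the OPENBLAS_NUM_THREADS=1 environment variable, or building a single-threaded library as discussed further down):

#include <cblas.h>   /* the installed OpenBLAS cblas.h also declares the openblas_* utility functions */\n\nint main(void) {\n    openblas_set_num_threads(1);   /* keep OpenBLAS single-threaded inside an already multi-threaded application */\n    /* ... your own threads can now call BLAS routines without oversubscribing the machine ... */\n    return 0;\n}\n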

If the application is parallelized by OpenMP, please build OpenBLAS with USE_OPENMP=1

With the increased availability of fast multicore hardware, it has unfortunately become clear that the thread management provided by OpenMP is not sufficient to prevent race conditions when OpenBLAS was built single-threaded with USE_THREAD=0 and there are concurrent calls from multiple threads to OpenBLAS functions. In this case, it is vital to also specify USE_LOCKING=1 (introduced with OpenBLAS 0.3.7).

"},{"location":"faq/#does-openblas-support-sparse-matrices-andor-vectors","title":"Does OpenBLAS support sparse matrices and/or vectors ?","text":"

OpenBLAS implements only the standard (dense) BLAS and LAPACK functions, with a select few extensions popularized by Intel's MKL. Some cases can probably be made to work using e.g. GEMV or AXPBY; in general, using a dedicated package like SuiteSparse (which can make use of OpenBLAS or an equivalent for standard operations) is recommended.

"},{"location":"faq/#what-support-is-there-for-recent-pc-hardware-what-about-gpu","title":"What support is there for recent PC hardware ? What about GPU ?","text":"

As OpenBLAS is a volunteer project, it can take some time for the combination of a capable developer, free time, and particular hardware to come along, even for relatively common processors. Starting from 0.3.1, support has been added for AVX-512 (TARGET=SKYLAKEX), requiring a compiler that is capable of handling AVX-512 intrinsics. While AMD Zen processors should be autodetected by the build system, as of 0.3.2 they are still handled exactly like Intel Haswell. There once was an effort to build an OpenCL implementation that one can still find at https://github.com/xianyi/clOpenBLAS, but work on this stopped in 2015.

"},{"location":"faq/#how-about-the-level-3-blas-performance-on-intel-sandy-bridge","title":"How about the level 3 BLAS performance on Intel Sandy Bridge?","text":"

We obtained a performance comparable with Intel MKL that actually outperformed Intel MKL in some cases. Here is the result of the DGEMM subroutine's performance on Intel Core i5-2500K Windows 7 SP1 64-bit:

"},{"location":"faq/#os-and-compiler","title":"OS and Compiler","text":""},{"location":"faq/#how-can-i-call-an-openblas-function-in-microsoft-visual-studio","title":"How can I call an OpenBLAS function in Microsoft Visual Studio?","text":"

Please read this page.

"},{"location":"faq/#how-can-i-use-cblas-and-lapacke-without-c99-complex-number-support-eg-in-visual-studio","title":"How can I use CBLAS and LAPACKE without C99 complex number support (e.g. in Visual Studio)?","text":"

Zaheer has fixed this bug. You can now use the structure instead of C99 complex numbers. Please read this issue page for details.

This issue is for using LAPACKE in Visual Studio.

"},{"location":"faq/#i-get-a-segfault-with-multi-threading-on-linux-whats-wrong","title":"I get a SEGFAULT with multi-threading on Linux. What's wrong?","text":"

This may be related to a bug in the Linux kernel 2.6.32 (?). Try applying the patch segfaults.patch to disable mbind using

 patch < segfaults.patch\n

and see if the crashes persist. Note that this patch will lead to many compiler warnings.

"},{"location":"faq/#when-i-make-the-library-there-is-no-such-instruction-xgetbv-error-whats-wrong","title":"When I make the library, there is no such instruction: `xgetbv' error. What's wrong?","text":"

Please use GCC 4.4 or a later version, which supports the xgetbv instruction. If you use the library for Sandy Bridge with AVX instructions, you should use GCC 4.6 or a later version.

On Mac OS X, please use Clang 3.1 or a later version. For example, make CC=clang

For compatibility with old compilers (GCC < 4.4), you can enable the NO_AVX flag. For example, make NO_AVX=1

"},{"location":"faq/#my-build-fails-due-to-the-linker-error-multiple-definition-of-dlamc3_-what-is-the-problem","title":"My build fails due to the linker error \"multiple definition of `dlamc3_'\". What is the problem?","text":"

This linker error occurs if GNU patch is missing or if our patch for LAPACK fails to apply.

Background: OpenBLAS implements optimized versions of some LAPACK functions, so we need to disable the reference versions. If this process fails, we end up with duplicate implementations of the same function.

"},{"location":"faq/#my-build-worked-fine-and-passed-all-tests-but-running-make-lapack-test-ends-with-segfaults","title":"My build worked fine and passed all tests, but running make lapack-test ends with segfaults","text":"

Some of the LAPACK tests, notably in xeigtstz, try to allocate around 10MB on the stack. You may need to use ulimit -s to change the default limits on your system to allow this.

"},{"location":"faq/#how-could-i-disable-openblas-threading-affinity-on-runtime","title":"How could I disable OpenBLAS threading affinity on runtime?","text":"

You can define the OPENBLAS_MAIN_FREE or GOTOBLAS_MAIN_FREE environment variable to disable threading affinity at runtime. For example, before running your program:

export OPENBLAS_MAIN_FREE=1\n

Alternatively, you can disable the affinity feature by enabling NO_AFFINITY=1 in Makefile.rule.

"},{"location":"faq/#how-to-solve-undefined-reference-errors-when-statically-linking-against-libopenblasa","title":"How to solve undefined reference errors when statically linking against libopenblas.a","text":"

On Linux, if OpenBLAS was compiled with threading support (USE_THREAD=1 by default), custom programs statically linked against libopenblas.a should also link to the pthread library e.g.:

gcc -static -I/opt/OpenBLAS/include -L/opt/OpenBLAS/lib -o my_program my_program.c -lopenblas -lpthread\n

Failing to add the -lpthread flag will cause errors such as:

/opt/OpenBLAS/libopenblas.a(memory.o): In function `_touch_memory':\nmemory.c:(.text+0x15): undefined reference to `pthread_mutex_lock'\nmemory.c:(.text+0x41): undefined reference to `pthread_mutex_unlock'\n/opt/OpenBLAS/libopenblas.a(memory.o): In function `openblas_fork_handler':\nmemory.c:(.text+0x440): undefined reference to `pthread_atfork'\n/opt/OpenBLAS/libopenblas.a(memory.o): In function `blas_memory_alloc':\nmemory.c:(.text+0x7a5): undefined reference to `pthread_mutex_lock'\nmemory.c:(.text+0x825): undefined reference to `pthread_mutex_unlock'\n/opt/OpenBLAS/libopenblas.a(memory.o): In function `blas_shutdown':\nmemory.c:(.text+0x9e1): undefined reference to `pthread_mutex_lock'\nmemory.c:(.text+0xa6e): undefined reference to `pthread_mutex_unlock'\n/opt/OpenBLAS/libopenblas.a(blas_server.o): In function `blas_thread_server':\nblas_server.c:(.text+0x273): undefined reference to `pthread_mutex_lock'\nblas_server.c:(.text+0x287): undefined reference to `pthread_mutex_unlock'\nblas_server.c:(.text+0x33f): undefined reference to `pthread_cond_wait'\n/opt/OpenBLAS/libopenblas.a(blas_server.o): In function `blas_thread_init':\nblas_server.c:(.text+0x416): undefined reference to `pthread_mutex_lock'\nblas_server.c:(.text+0x4be): undefined reference to `pthread_mutex_init'\nblas_server.c:(.text+0x4ca): undefined reference to `pthread_cond_init'\nblas_server.c:(.text+0x4e0): undefined reference to `pthread_create'\nblas_server.c:(.text+0x50f): undefined reference to `pthread_mutex_unlock'\n...\n

The -lpthread flag is not required when linking dynamically against libopenblas.so.0.

"},{"location":"faq/#building-openblas-for-haswell-or-dynamic-arch-on-rhel-6-centos-6-rocks-61scientific-linux-6","title":"Building OpenBLAS for Haswell or Dynamic Arch on RHEL-6, CentOS-6, Rocks-6.1,Scientific Linux 6","text":"

The minimum requirement to actually run AVX2-enabled software like OpenBLAS is kernel 2.6.32-358, shipped with EL6U4 in 2013.

The binutils package from RHEL6 does not know the instruction vpermpd or any other AVX2 instruction. You can download a newer binutils package from the Enterprise Linux software collections, following the instructions here: https://www.softwarecollections.org/en/scls/rhscl/devtoolset-3/ After configuring the repository you need to install devtoolset-?-binutils to get a newer, usable binutils package.

$ yum search devtoolset-\\?-binutils\n$ sudo yum install devtoolset-3-binutils\n
Once the packages are installed, check the correct name of the SCL redirection set to enable the new version:
$ scl --list\ndevtoolset-3\nrh-python35\n
Now just prefix your build commands with the respective redirection:
$ scl enable devtoolset-3 -- make DYNAMIC_ARCH=1\n
AVX-512 (SKYLAKEX) support requires devtoolset-8-gcc-gfortran (which exceeds the formal requirement for AVX-512 because of packaging issues in earlier packages), which dependency-installs the respective binutils and gcc, as well as kernel 2.6.32-696 (aka 6U9) or 3.10.0-327 (aka 7U2) or later to run. In the absence of the above-mentioned toolset, OpenBLAS will fall back to AVX2 instructions in place of AVX-512, sacrificing some performance on the Skylake-X platform.

"},{"location":"faq/#building-openblas-in-qemukvmxen","title":"Building OpenBLAS in QEMU/KVM/XEN","text":"

By default, QEMU reports the CPU as \"QEMU Virtual CPU version 2.2.0\", which shares its CPUID with an existing 32bit CPU even in a 64bit virtual machine, and OpenBLAS recognizes it as PENTIUM2. Depending on the exact combination of CPU features the hypervisor chooses to expose, this may not correspond to any CPU that exists, and OpenBLAS will error when trying to build. To fix this, pass -cpu host or -cpu passthrough to QEMU, or another CPU model. Similarly, the XEN hypervisor may not pass through all features of the host cpu while reporting the cpu type itself correctly, which can lead to compiler error messages about an \"ABI change\" when compiling AVX512 code. Again, changing the Xen configuration by running e.g. \"xen-cmdline --set-xen cpuid=avx512\" should get around this (as would building OpenBLAS for an older cpu lacking that particular feature, e.g. TARGET=HASWELL).

"},{"location":"faq/#building-openblas-on-power-fails-with-ibm-xl","title":"Building OpenBLAS on POWER fails with IBM XL","text":"
Trying to compile OpenBLAS with IBM XL ends with error messages about unknown register names\n

like \"vs32\". Working around these by using known alternate names for the vector registers only leads to another assembler error about unsupported constraints. This is a known deficiency in the IBM compiler at least up to and including 16.1.0 (and in the POWER version of clang, from which it is derived) - use gcc instead. (See issues #1078 and #1699 for related discussions)

"},{"location":"faq/#replacing-system-blasupdating-apt-openblas-in-mintubuntudebian","title":"Replacing system BLAS/updating APT OpenBLAS in Mint/Ubuntu/Debian","text":"

Debian and Ubuntu LTS versions provide an OpenBLAS package which is not updated after the initial release, and under some circumstances one might want to use a more recent version of OpenBLAS, e.g. to get support for newer CPUs.

Ubuntu and Debian provide an 'alternatives' mechanism to comfortably replace the BLAS and LAPACK libraries system-wide.

After a successful build of OpenBLAS (with DYNAMIC_ARCH set to 1):

$ make clean\n$ make DYNAMIC_ARCH=1\n$ sudo make DYNAMIC_ARCH=1 install\n
One can redirect the BLAS and LAPACK alternatives to point to the source-built OpenBLAS. First you have to install the NetLib LAPACK reference implementation (to have alternatives to replace):
$ sudo apt install libblas-dev liblapack-dev\n
Then we can set the alternative to our freshly-built library:
$ sudo update-alternatives --install /usr/lib/libblas.so.3 libblas.so.3 /opt/OpenBLAS/lib/libopenblas.so.0 41 \\\n   --slave /usr/lib/liblapack.so.3 liblapack.so.3 /opt/OpenBLAS/lib/libopenblas.so.0\n
Or remove the redirection and switch back to the APT-provided BLAS implementation:
$ sudo update-alternatives --remove libblas.so.3 /opt/OpenBLAS/lib/libopenblas.so.0\n
In recent versions of the distributions, the installation path for the libraries has been changed to include the name of the host architecture, like /usr/lib/x86_64-linux-gnu/blas/libblas.so.3 or libblas.so.3.x86_64-linux-gnu. Use $ update-alternatives --display libblas.so.3 to find out what layout your system has.

"},{"location":"faq/#i-built-openblas-for-use-with-some-other-software-but-that-software-cannot-find-it","title":"I built OpenBLAS for use with some other software, but that software cannot find it","text":"

OpenBLAS installs as a single library named libopenblas.so, while some programs may be searching for a separate libblas.so and liblapack.so, so you may need to create appropriate symbolic links (ln -s libopenblas.so libblas.so; ln -s libopenblas.so liblapack.so) or copies. Also make sure that the installation location (usually /opt/OpenBLAS/lib or /usr/local/lib) is among the library search paths of your system.

"},{"location":"faq/#i-included-cblash-in-my-program-but-the-compiler-complains-about-a-missing-commonh-or-functions-from-it","title":"I included cblas.h in my program, but the compiler complains about a missing common.h or functions from it","text":"

You probably tried to include a cblas.h that you simply copied from the OpenBLAS source; instead, you need to run make install after building OpenBLAS and then use the modified cblas.h that this step creates in the installation path (usually /usr/local/include, /opt/OpenBLAS/include, or whatever you specified as PREFIX= on the make install).

"},{"location":"faq/#compiling-openblas-with-gccs-fbounds-check-actually-triggers-aborts-in-programs","title":"Compiling OpenBLAS with gcc's -fbounds-check actually triggers aborts in programs","text":"

This is due to different interpretations of the (informal) standard for passing characters as arguments between C and FORTRAN functions. As the method for storing text differs in the two languages, when C calls Fortran the text length is passed as an \"invisible\" additional parameter. Historically, this has not been required when the text is just a single character, so older code like the Reference-LAPACK bundled with OpenBLAS does not do it. Recently gcc's checking has changed to require it, but there is no consensus yet if and how the existing LAPACK (and many other codebases) should adapt. (And for actual compilation, gcc has mostly backtracked and provided compatibility options - hence the default build settings in the OpenBLAS Makefiles add -fno-optimize-sibling-calls to the gfortran options to prevent miscompilation with \"affected\" versions. See ticket 2154 in the issue tracker for more details and links)
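
As an illustration of the mechanism (the routine name and prototype below are hypothetical, purely to show the hidden argument; the exact type of the length parameter is compiler-dependent):

#include <stddef.h>\n\n/* Hypothetical sketch: what a Fortran routine taking a single-character argument\n   looks like from C once the hidden length parameter is accounted for. */\nvoid some_fortran_routine_(const char *uplo, const int *n, double *a,\n                           const int *lda, int *info, size_t uplo_len);\n\nvoid call_it(double *a, int n) {\n    int info = 0;\n    some_fortran_routine_(\"U\", &n, a, &n, &info, 1);  /* pass the text length (1) explicitly */\n}\n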

"},{"location":"faq/#build-fails-with-lots-of-errors-about-undefined-gemm_unroll_m","title":"Build fails with lots of errors about undefined ?GEMM_UNROLL_M","text":"

Your cpu is apparently too new to be recognized by the build scripts, so they failed to assign appropriate parameters for the block algorithm. Do a make clean and try again with TARGET set to one of the cpu models listed in TargetList.txt - for x86_64 this will usually be HASWELL.

"},{"location":"faq/#cmakeosx-build-fails-with-argument-list-too-long","title":"CMAKE/OSX: Build fails with 'argument list too long'","text":"

This is a limitation in the maximum length of a command on OSX, coupled with how CMAKE works. You should be able to work around this by adding the option -DCMAKE_Fortran_USE_RESPONSE_FILE_FOR_OBJECTS=1 to your CMAKE arguments.

"},{"location":"faq/#likely-problems-with-avx2-support-in-docker-desktop-for-osx","title":"Likely problems with AVX2 support in Docker Desktop for OSX","text":"

There have been a few reports of wrong calculation results and build-time test failures when building in a container environment managed by the OSX version of Docker Desktop, which uses the xhyve virtualizer underneath. Judging from these reports, AVX2 support in xhyve appears to be subtly broken but a corresponding ticket in the xhyve issue tracker has not drawn any reaction or comment since 2019. Therefore it is strongly recommended to build OpenBLAS with the NO_AVX2=1 option when inside a container under (or for later use with) the Docker Desktop environment on Intel-based Apple hardware.

"},{"location":"faq/#usage","title":"Usage","text":""},{"location":"faq/#program-is-terminated-because-you-tried-to-allocate-too-many-memory-regions","title":"Program is Terminated. Because you tried to allocate too many memory regions","text":"

In OpenBLAS, we manage a pool of memory buffers and allocate the number of buffers as follows.

#define NUM_BUFFERS (MAX_CPU_NUMBER * 2)\n
This error indicates that the program exceeded the number of buffers.

Please build OpenBLAS with a larger NUM_THREADS. For example, make NUM_THREADS=32 or make NUM_THREADS=64. In Makefile.system, we set MAX_CPU_NUMBER=NUM_THREADS.

"},{"location":"faq/#how-to-choose-target-manually-at-runtime-when-compiled-with-dynamic_arch","title":"How to choose TARGET manually at runtime when compiled with DYNAMIC_ARCH","text":"

The environment variable which controls the kernel selection is OPENBLAS_CORETYPE (see driver/others/dynamic.c), e.g. export OPENBLAS_CORETYPE=Haswell. The function char* openblas_get_corename() returns the target that is actually in use.
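
A short sketch of querying the selected core from a program (openblas_get_corename() is declared in the installed cblas.h):

#include <stdio.h>\n#include <cblas.h>\n\nint main(void) {\n    /* With a DYNAMIC_ARCH build, run e.g. with OPENBLAS_CORETYPE=Haswell to override autodetection. */\n    printf(\"OpenBLAS core in use: %s\\n\", openblas_get_corename());\n    return 0;\n}\n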

"},{"location":"faq/#after-updating-the-installed-openblas-a-program-complains-about-undefined-symbol-gotoblas","title":"After updating the installed OpenBLAS, a program complains about \"undefined symbol gotoblas\"","text":"

This symbol gets defined only when OpenBLAS is built with \"make DYNAMIC_ARCH=1\" (which is what distributors will choose to ensure support for more than just one CPU type).

"},{"location":"faq/#how-can-i-find-out-at-runtime-what-options-the-library-was-built-with","title":"How can I find out at runtime what options the library was built with ?","text":"

OpenBLAS has two utility functions that may come in handy here:

openblas_get_parallel() will return 0 for a single-threaded library, 1 if built for multithreading without OpenMP, and 2 if built with USE_OPENMP=1.

openblas_get_config() will return a string containing settings such as USE64BITINT or DYNAMIC_ARCH that were active at build time, as well as the target cpu (or in case of a dynamic_arch build, the currently detected one).
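
A minimal sketch of calling both from C (both functions are declared in the installed cblas.h):

#include <stdio.h>\n#include <cblas.h>\n\nint main(void) {\n    printf(\"config  : %s\\n\", openblas_get_config());    /* build options and detected cpu */\n    printf(\"parallel: %d\\n\", openblas_get_parallel());  /* 0 = serial, 1 = pthreads, 2 = OpenMP */\n    return 0;\n}\n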

"},{"location":"faq/#after-making-openblas-i-find-that-the-static-library-is-multithreaded-but-the-dynamic-one-is-not","title":"After making OpenBLAS, I find that the static library is multithreaded, but the dynamic one is not ?","text":"

The shared OpenBLAS library you built is probably working fine as well, but your program may be picking up a different (probably single-threaded) version from one of the standard system paths like /usr/lib on startup. Running ldd /path/to/your/program will tell you which library the linkage loader will actually use.

Specifying the \"correct\" library location with the -L flag (like -L /opt/OpenBLAS/lib) when linking your program only defines which library will be used to see if all symbols can be resolved; you will also need to add an rpath entry to the binary (using -Wl,-rpath=/opt/OpenBLAS/lib) to make it request searching that location at runtime. Alternatively, remove the \"wrong old\" library (if you can), or set LD_LIBRARY_PATH to the desired location before running your program.

"},{"location":"faq/#i-want-to-use-openblas-with-cuda-in-the-hpl-23-benchmark-code-but-it-keeps-looking-for-intel-mkl","title":"I want to use OpenBLAS with CUDA in the HPL 2.3 benchmark code but it keeps looking for Intel MKL","text":"

You need to edit the file src/cuda/cuda_dgemm.c in the NVIDIA version of HPL, change the \"handle2\" and \"handle\" dlopen calls to use libopenblas.so instead of libmkl_intel_lp64.so, and add a trailing underscore in the dlsym lines for dgemm_mkl and dtrsm_mkl (like dgemm_mkl = (void(*)())dlsym(handle, \u201cdgemm_\u201d);).

"},{"location":"faq/#multithreaded-openblas-runs-no-faster-or-is-even-slower-than-singlethreaded-on-my-armv7-board","title":"Multithreaded OpenBLAS runs no faster or is even slower than singlethreaded on my ARMV7 board","text":"

The power saving mechanisms of your board may have shut down some cores, making them invisible to OpenBLAS in its startup phase. Try bringing them online before starting your calculation.

"},{"location":"faq/#speed-varies-wildly-between-individual-runs-on-a-typical-armv8-smartphone-processor","title":"Speed varies wildly between individual runs on a typical ARMV8 smartphone processor","text":"

Check the technical specifications, it could be that the SoC combines fast and slow cpus and threads can end up on either. In that case, binding the process to specific cores e.g. by setting OMP_PLACES=cores may help. (You may need to experiment with OpenMP options, it has been reported that using OMP_NUM_THREADS=2 OMP_PLACES=cores caused a huge drop in performance on a 4+4 core chip while OMP_NUM_THREADS=2 OMP_PLACES=cores(2) worked as intended - as did OMP_PLACES=cores with 4 threads)

"},{"location":"faq/#i-cannot-get-openblas-to-use-more-than-a-small-subset-of-available-cores-on-a-big-system","title":"I cannot get OpenBLAS to use more than a small subset of available cores on a big system","text":"

Multithreading support in OpenBLAS requires the use of internal buffers for sharing partial results, the number and size of which is defined at compile time. Unless you specify NUM_THREADS in your make or cmake command, the build scripts try to autodetect the number of cores available in your build host to size the library to match. This unfortunately means that if you move the resulting binary from a small \"front-end node\" to a larger \"compute node\" later, it will still be limited to the hardware capabilities of the original system. The solution is to set NUM_THREADS to a number big enough to encompass the biggest systems you expect to run the binary on - at runtime, it will scale down the maximum number of threads it uses to match the number of cores physically available.

"},{"location":"faq/#getting-elf-load-command-addressoffset-not-properly-aligned-when-loading-libopenblasso","title":"Getting \"ELF load command address/offset not properly aligned\" when loading libopenblas.so","text":"

If you get a message \"error while loading shared libraries: libopenblas.so.0: ELF load command address/offset not properly aligned\" when starting a program that is (dynamically) linked to OpenBLAS, this is very likely due to a bug in the GNU linker (ld) that is part of the GNU binutils package. This error was specifically observed on older versions of Ubuntu Linux updated with the (at the time) most recent binutils version 2.38, but an internet search turned up sporadic reports involving various other libraries dating back several years. A bugfix was created by the binutils developers and should be available in later versions of binutils.(See issue 3708 for details)

"},{"location":"faq/#using-openblas-with-openmp","title":"Using OpenBLAS with OpenMP","text":"

OpenMP provides its own locking mechanisms, so when your code makes BLAS/LAPACK calls from inside OpenMP parallel regions it is imperative that you use an OpenBLAS that is built with USE_OPENMP=1, as otherwise deadlocks might occur. Furthermore, OpenBLAS will automatically restrict itself to using only a single thread when called from an OpenMP parallel region. When it is certain that calls will only occur from the main thread of your program (i.e. outside of omp parallel constructs), a standard pthreads build of OpenBLAS can be used as well. In that case it may be useful to tune the linger behaviour of idle threads in both your OpenMP program (e.g. set OMP_WAIT_POLICY=passive) and OpenBLAS (by redefining the THREAD_TIMEOUT variable at build time, or setting the environment variable OPENBLAS_THREAD_TIMEOUT smaller than the default 26) so that the two alternating thread pools do not unnecessarily hog the cpu during the handover.

"},{"location":"install/","title":"Install OpenBLAS","text":"

Note

Lists of precompiled packages are not comprehensive, are not meant to validate or endorse a particular third-party build over others, and may not always point to the newest version.

"},{"location":"install/#quick-install","title":"Quick install","text":"

Precompiled packages have recently become available for a number of platforms through their normal installation procedures, so for users of desktop devices at least, the instructions below are mostly relevant when you want to try the most recent development snapshot from git. See your platform's relevant \"Precompiled packages\" section.

The Conda-Forge project maintains packages for the conda package manager at https://github.com/conda-forge/openblas-feedstock.

"},{"location":"install/#source","title":"Source","text":"

Download the latest stable version from the release page.

"},{"location":"install/#platforms","title":"Platforms","text":""},{"location":"install/#linux","title":"Linux","text":"

Just type make to compile the library.

Notes:

"},{"location":"install/#precompiled-packages","title":"Precompiled packages","text":""},{"location":"install/#debianubuntumintkali","title":"Debian/Ubuntu/Mint/Kali","text":"

The OpenBLAS package is available in the default repositories and can act as the default BLAS in the system.

Example installation commands:

$ sudo apt update\n$ apt search openblas\n$ sudo apt install libopenblas-dev\n$ sudo update-alternatives --config libblas.so.3\n
Alternatively, if the distribution's package proves unsatisfactory, you may try the latest version of OpenBLAS, following the guide in the OpenBLAS FAQ.

"},{"location":"install/#opensusesle","title":"openSuSE/SLE","text":"

Recent openSUSE versions include OpenBLAS in the default repositories and also permit OpenBLAS to act as a replacement for the system-wide BLAS.

Example installation commands:

$ sudo zypper ref\n$ zypper se openblas\n$ sudo zypper in openblas-devel\n$ sudo update-alternatives --config libblas.so.3\n
Should you be using an older openSUSE or SLE release that provides no OpenBLAS, you can attach an optional or experimental openSUSE repository as a new package source to acquire a recent build of OpenBLAS, following the instructions on the openSUSE software site.

"},{"location":"install/#fedoracentosrhel","title":"Fedora/CentOS/RHEL","text":"

Fedora provides OpenBLAS in default installation repositories.

To install it, try the following:

$ dnf search openblas\n$ dnf install openblas-devel\n
For CentOS/RHEL/Scientific Linux, packages are provided via the Fedora EPEL repository.

After adding the repository and its keys, installation is straightforward:

$ yum search openblas\n$ yum install openblas-devel\n
No alternatives mechanism is provided for BLAS, and the packages in the system repositories are linked against the NetLib BLAS or ATLAS BLAS libraries. You may wish to re-package RPMs to use OpenBLAS instead, as described here.

"},{"location":"install/#mageia","title":"Mageia","text":"

Mageia offers ATLAS and NetLib LAPACK in its base repositories. You can build your own OpenBLAS replacement; once it is installed in /opt, TODO: populate /usr/lib64 and /usr/include accurately to replicate NetLib with update-alternatives.

"},{"location":"install/#archmanjaroantergos","title":"Arch/Manjaro/Antergos","text":"
$ sudo pacman -S openblas\n
"},{"location":"install/#windows","title":"Windows","text":"

The precompiled binaries available with each release (at https://github.com/xianyi/OpenBLAS/releases) are created with MinGW using the option list \"NUM_THREADS=64 TARGET=GENERIC DYNAMIC_ARCH=1 DYNAMIC_OLDER=1 CONSISTENT_FPCSR=1\"; they should work on any x86 or x86_64 computer. The zip archive contains the include files, static and DLL libraries, as well as configuration files for getting them found via CMake or pkg-config; just create a suitable folder for your OpenBLAS installation and unzip it there. (Note that you will need to edit the provided openblas.pc and OpenBLASConfig.cmake to reflect the installation path on your computer; as distributed, they contain \"win\" or \"win64\" paths from the system they were built on.)

Some programs will expect the DLL name to be lapack.dll, blas.dll, or (in the case of the statistics package \"R\") even Rblas.dll in order to act as a direct replacement for whatever other implementation of BLAS and LAPACK they use by default. Just copy the OpenBLAS DLL to the desired name(s).

Note that the provided binaries are built with INTERFACE64=0, meaning they use standard 32-bit integers for array indexing and the like (as is the default for most, if not all, BLAS and LAPACK implementations). If the documentation of the program you are using with OpenBLAS mentions 64-bit integers (INTERFACE64=1) for addressing huge matrix sizes, you will need to build OpenBLAS from source (or open an issue ticket to make the demand for such a precompiled build known).
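
As a sketch of the DLL renaming mentioned above (the folder C:\OpenBLAS is an assumption; adjust it to wherever you unpacked the release archive, assuming its layout places libopenblas.dll in a bin subfolder):

copy C:\\OpenBLAS\\bin\\libopenblas.dll C:\\OpenBLAS\\bin\\blas.dll\ncopy C:\\OpenBLAS\\bin\\libopenblas.dll C:\\OpenBLAS\\bin\\lapack.dll\n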

"},{"location":"install/#precompiled-packages_1","title":"Precompiled packages","text":""},{"location":"install/#visual-studio","title":"Visual Studio","text":"

As of OpenBLAS v0.2.15, we support MinGW and Visual Studio (using CMake to generate Visual Studio solution files \u2013 note that you will need at least version 3.11 of CMake for linking to work correctly) to build OpenBLAS on Windows.

Note that you need a Fortran compiler if you plan to build and use the LAPACK functions included with OpenBLAS. The sections below describe using either flang as an add-on to clang/LLVM or gfortran as part of MinGW for this purpose. If you want to use the Intel Fortran compiler ifort for this, be sure to also use the Intel C compiler icc for building the C parts, as the ABI imposed by ifort is incompatible with MSVC.

"},{"location":"install/#1-native-msvc-abi","title":"1. Native (MSVC) ABI","text":"

A fully-optimized OpenBLAS that can be statically or dynamically linked to your application can currently be built for the 64-bit architecture with the LLVM compiler infrastructure. We're going to use Miniconda3 to grab all of the tools we need, since some of them are in an experimental status. Before you begin, you'll need to have Microsoft Visual Studio 2015 or newer installed.

  1. Install the 64-bit version of Miniconda3 using winget install --id Anaconda.Miniconda3, or download it from conda.io.
  2. Open the \"Anaconda Command Prompt,\" now available in the Start Menu, or at %USERPROFILE%\\miniconda3\\shell\\condabin\\conda-hook.ps1.
  3. In that command prompt window, use cd to change to the directory where you want to build OpenBLAS.
  4. Now install all of the tools we need:
conda update -n base conda\nconda config --add channels conda-forge\nconda install -y cmake flang clangdev perl libflang ninja\n
  5. Still in the Anaconda Command Prompt window, activate the MSVC environment for 64 bits with vcvarsall x64. On Windows 11 with Visual Studio 2022, this would be done by invoking:
\"c:\\Program Files\\Microsoft Visual Studio\\2022\\Preview\\vc\\Auxiliary\\Build\\vcvars64.bat\"\n

With VS2019, the command should be the same \u2013 except for the year number, obviously. For other/older versions of MSVC, the VS documentation or a quick search on the web should turn up the exact wording you need.

Confirm that the environment is active by typing link \u2013 this should return a long list of possible options for the link command. If it just returns \"command not found\" or similar, review and retype the call to vcvars64.bat. NOTE: if you are working from a Visual Studio Command prompt window instead (so that you do not have to do the vcvars call), you need to invoke conda activate so that CONDA_PREFIX etc. get set up correctly before proceeding to step 6. Failing to do so will lead to link errors like libflangmain.lib not getting found later in the build.

  6. Now configure the project with CMake. Starting in the project directory, execute the following:
set \"LIB=%CONDA_PREFIX%\\Library\\lib;%LIB%\"\nset \"CPATH=%CONDA_PREFIX%\\Library\\include;%CPATH%\"\nmkdir build\ncd build\ncmake .. -G \"Ninja\" -DCMAKE_CXX_COMPILER=clang-cl -DCMAKE_C_COMPILER=clang-cl -DCMAKE_Fortran_COMPILER=flang -DCMAKE_MT=mt -DBUILD_WITHOUT_LAPACK=no -DNOFORTRAN=0 -DDYNAMIC_ARCH=ON -DCMAKE_BUILD_TYPE=Release\n

You may want to add further options in the cmake command here \u2013 for instance, the default only produces a static .lib version of the library. If you would rather have a DLL, add -DBUILD_SHARED_LIBS=ON above. Note that this step only creates some command files and directories; the actual build happens next.

  7. Build the project:

cmake --build . --config Release\n
This step will create the OpenBLAS library in the \"lib\" directory, and various build-time tests in the test, ctest and openblas_utest directories. However, it will not separate the header files you might need for building your own programs from those used internally. To put all relevant files in a more convenient arrangement, run the next step.

  8. Install all relevant files created by the build:

cmake --install . --prefix c:\\opt -v\n
This will copy all files that are needed for building and running your own programs with OpenBLAS to the given location, creating appropriate subdirectories for the individual kinds of files. In the case of \"C:\\opt\" as given above, this would be C:\\opt\\include\\openblas for the header files, C:\\opt\\bin for the libopenblas.dll and C:\\opt\\lib for the static library. C:\\opt\\share holds various support files that enable other cmake-based build scripts to find OpenBLAS automatically.

"},{"location":"install/#visual-studio-2017-c2017-standard","title":"Visual studio 2017+ (C++2017 standard)","text":"

In newer Visual Studio versions, Microsoft has changed how it handles complex types. Even when using a precompiled version of OpenBLAS, you might need to define LAPACK_COMPLEX_CUSTOM in order to define complex types properly for MSVC. For example, some variant of the following might help:

#if defined(_MSC_VER)\n    #include <complex.h>\n    #define LAPACK_COMPLEX_CUSTOM\n    #define lapack_complex_float _Fcomplex\n    #define lapack_complex_double _Dcomplex\n#endif\n

For reference, see https://github.com/xianyi/OpenBLAS/issues/3661, https://github.com/Reference-LAPACK/lapack/issues/683, and https://stackoverflow.com/questions/47520244/using-openblas-lapacke-in-visual-studio.

"},{"location":"install/#cmake-and-visual-studio","title":"CMake and Visual Studio","text":"

To build OpenBLAS for the 32-bit architecture, you'll need to use the built-in Visual Studio compilers.

Note

This method may produce binaries which demonstrate significantly lower performance than those built with the other methods. (The Visual Studio compiler does not support the dialect of assembly used in the cpu-specific optimized files, so only the \"generic\" TARGET which is written in pure C will get built. For the same reason it is not possible (and not necessary) to use -DDYNAMIC_ARCH=ON in a Visual Studio build.) You may consider building for the 32-bit architecture using the GNU (MinGW) ABI instead.

"},{"location":"install/#1-install-cmake-at-windows","title":"# 1. Install CMake at Windows","text":""},{"location":"install/#2-use-cmake-to-generate-visual-studio-solution-files","title":"# 2. Use CMake to generate Visual Studio solution files","text":"
# Do this from Powershell so cmake can find visual studio\ncmake -G \"Visual Studio 14 Win64\" -DCMAKE_BUILD_TYPE=Release .\n
"},{"location":"install/#build-the-solution-at-visual-studio","title":"Build the solution at Visual Studio","text":"

Note that this step depends on Perl, so you'll need to install Perl for Windows and put it on your PATH so VS can invoke it (http://stackoverflow.com/questions/3051049/active-perl-installation-on-windows-operating-system).

Step 2 generates the OpenBLAS solution files; open the solution in Visual Studio and build the projects. Note that the dependencies do not seem to be automatically configured: if you try to build libopenblas directly, it will fail with a message saying that some .obj files aren't found, but if you first build the projects that libopenblas depends on, the build will succeed.

"},{"location":"install/#build-openblas-for-universal-windows-platform","title":"Build OpenBLAS for Universal Windows Platform","text":"

OpenBLAS can be built for use on the Universal Windows Platform using a two step process since commit c66b842.

"},{"location":"install/#1-follow-steps-1-and-2-above-to-build-the-visual-studio-solution-files-for-windows-this-builds-the-helper-executables-which-are-required-when-building-the-openblas-visual-studio-solution-files-for-uwp-in-step-2","title":"# 1. Follow steps 1 and 2 above to build the Visual Studio solution files for Windows. This builds the helper executables which are required when building the OpenBLAS Visual Studio solution files for UWP in step 2.","text":""},{"location":"install/#2-remove-the-generated-cmakecachetxt-and-cmakefiles-directory-from-the-openblas-source-directory-and-re-run-cmake-with-the-following-options","title":"# 2. Remove the generated CMakeCache.txt and CMakeFiles directory from the OpenBLAS source directory and re-run CMake with the following options:","text":"
# do this to build UWP compatible solution files\ncmake -G \"Visual Studio 14 Win64\" -DCMAKE_SYSTEM_NAME=WindowsStore -DCMAKE_SYSTEM_VERSION=\"10.0\" -DCMAKE_SYSTEM_PROCESSOR=AMD64 -DVS_WINRT_COMPONENT=TRUE -DCMAKE_BUILD_TYPE=Release .\n
"},{"location":"install/#build-the-solution-with-visual-studio","title":"# Build the solution with Visual Studio","text":"

This will build the OpenBLAS binaries with the required settings for use with UWP.

"},{"location":"install/#2-gnu-mingw-abi","title":"2. GNU (MinGW) ABI","text":"

The resulting library can be used in Visual Studio, but it can only be linked dynamically. This configuration has not been thoroughly tested and should be considered experimental.

"},{"location":"install/#incompatible-x86-calling-conventions","title":"Incompatible x86 calling conventions","text":"

Due to incompatibilities between the calling conventions of MinGW and Visual Studio, you will need to make the following modifications (32-bit only):

  1. Use GCC 4.7.0 or newer. Older GCC versions (<4.7.0) have an ABI incompatibility with MSVC when returning aggregate structures larger than 8 bytes.
"},{"location":"install/#build-openblas-on-windows-os","title":"Build OpenBLAS on Windows OS","text":"
  1. Install the MinGW (GCC) compiler suite, either 32-bit (http://www.mingw.org/) or 64-bit (http://mingw-w64.sourceforge.net/). Be sure to install its gfortran package as well (unless you really want to build the BLAS part of OpenBLAS only) and check that gcc and gfortran are the same version \u2013 mixing compilers from different sources or release versions can lead to strange error messages in the linking stage. In addition, please install MSYS with MinGW.
  2. Build OpenBLAS in the MSYS shell. Usually, you can just type \"make\". OpenBLAS will detect the compiler and CPU automatically.
  3. After the build is complete, OpenBLAS will generate the static library \"libopenblas.a\" and the shared dll library \"libopenblas.dll\" in the folder. You can type \"make PREFIX=/your/installation/path install\" to install the library to a certain location.

Note

We suggest using the official MinGW or MinGW-w64 compilers. A user reported encountering an \"Unhandled exception\" error with a different compiler suite: https://groups.google.com/forum/#!topic/openblas-users/me2S4LkE55w

Note also that older versions of the alternative builds of mingw-w64 available through http://www.msys2.org may contain a defect that leads to a compilation failure accompanied by the error message

<command-line>:0:4: error: expected identifier or '(' before numeric constant\n
If you encounter this, please upgrade your msys2 setup or see https://github.com/xianyi/OpenBLAS/issues/1503 for a workaround.

"},{"location":"install/#generate-import-library-before-0210-version","title":"Generate import library (before 0.2.10 version)","text":"
  1. First, you will need to have the lib.exe tool in the Visual Studio command prompt.
  2. Open the command prompt and type cd OPENBLAS_TOP_DIR/exports, where OPENBLAS_TOP_DIR is the main folder of your OpenBLAS installation.
  3. For a 32-bit library, type lib /machine:i386 /def:libopenblas.def. For 64-bit, type lib /machine:X64 /def:libopenblas.def.
  4. This will generate the import library \"libopenblas.lib\" and the export library \"libopenblas.exp\" in OPENBLAS_TOP_DIR/exports. Although these two files share the same base name, they are completely different.
"},{"location":"install/#generate-import-library-0210-and-after-version","title":"Generate import library (0.2.10 and after version)","text":"
  1. OpenBLAS already generates the import library \"libopenblas.dll.a\" for \"libopenblas.dll\" as part of the build, so no separate step is needed.
"},{"location":"install/#generate-windows-native-pdb-files-from-gccgfortran-build","title":"generate windows native PDB files from gcc/gfortran build","text":"

A tool for this is available at https://github.com/rainers/cv2pdb.

"},{"location":"install/#use-openblas-dll-library-in-visual-studio","title":"Use OpenBLAS .dll library in Visual Studio","text":"
  1. Copy the import library (before 0.2.10: \"OPENBLAS_TOP_DIR/exports/libopenblas.lib\", 0.2.10 and after: \"OPENBLAS_TOP_DIR/libopenblas.dll.a\") and the .dll library \"libopenblas.dll\" into the same folder as the project that is going to use the BLAS library. You may need to add libopenblas.dll.a to the linker input list: Properties->Linker->Input.
  2. Please follow the documentation about using third-party .dll libraries in MS Visual Studio 2008 or 2010. Make sure to link against a library for the correct architecture. For example, you may receive an error such as \"The application was unable to start correctly (0xc000007b)\" which typically indicates a mismatch between 32/64-bit libraries.

Note

If you need CBLAS, you should include cblas.h in /your/installation/path/include in Visual Studio. Please read this page.

"},{"location":"install/#limitations","title":"Limitations","text":""},{"location":"install/#windows-on-arm","title":"Windows on Arm","text":""},{"location":"install/#prerequisites","title":"Prerequisites","text":"

The following tools need to be installed:

"},{"location":"install/#1-download-and-install-clang-for-windows-on-arm","title":"1. Download and install clang for windows on arm","text":"

Find the latest LLVM build for WoA on the LLVM release page.

E.g., the LLVM 12 build for WoA64 can be found here.

Run the LLVM installer and ensure that LLVM is added to the PATH environment variable.

"},{"location":"install/#2-download-and-install-classic-flang-for-windows-on-arm","title":"2. Download and install classic flang for windows on arm","text":"

Classic flang is currently the only available Fortran compiler for Windows on Arm; a pre-release build can be found here.

There is no installer for classic flang; extract the zip package and add its path to the PATH environment variable.

E.g., on PowerShell:

$env:Path += \";C:\\flang_woa\\bin\"\n
"},{"location":"install/#build","title":"Build","text":"

The following steps describe how to build the static library for OpenBLAS with and without LAPACK.

"},{"location":"install/#1-build-openblas-static-library-with-blas-and-lapack-routines-with-make","title":"1. Build OpenBLAS static library with BLAS and LAPACK routines with Make","text":"

The following command can be used to build the OpenBLAS static library with BLAS and LAPACK routines:

$ make CC=\"clang-cl\" HOSTCC=\"clang-cl\" AR=\"llvm-ar\" BUILD_WITHOUT_LAPACK=0 NOFORTRAN=0 DYNAMIC_ARCH=0 TARGET=ARMV8 ARCH=arm64 BINARY=64 USE_OPENMP=0 PARALLEL=1 RANLIB=\"llvm-ranlib\" MAKE=make F_COMPILER=FLANG FC=FLANG FFLAGS_NOOPT=\"-march=armv8-a -cpp\" FFLAGS=\"-march=armv8-a -cpp\" NEED_PIC=0 HOSTARCH=arm64 libs netlib\n
"},{"location":"install/#2-build-static-library-with-blas-routines-using-cmake","title":"2. Build static library with BLAS routines using CMake","text":"

Classic flang has compatibility issues with CMake, hence only the BLAS routines can be compiled with CMake:

$ mkdir build\n$ cd build\n$ cmake ..  -G Ninja -DCMAKE_C_COMPILER=clang -DBUILD_WITHOUT_LAPACK=1 -DNOFORTRAN=1 -DDYNAMIC_ARCH=0 -DTARGET=ARMV8 -DARCH=arm64 -DBINARY=64 -DUSE_OPENMP=0 -DCMAKE_SYSTEM_PROCESSOR=ARM64 -DCMAKE_CROSSCOMPILING=1 -DCMAKE_SYSTEM_NAME=Windows\n$ cmake --build . --config Release\n
"},{"location":"install/#getarchexe-execution-error","title":"getarch.exe execution error","text":"

If you notice that the platform-specific headers generated by getarch.exe are not correct, it could be due to a known debug runtime DLL issue for arm64 platforms. Please check out the link for the workaround.

"},{"location":"install/#mingw-import-library","title":"MinGW import library","text":"

Microsoft Windows has the concept of \"import libraries\". You don't need one with MinGW, because the ld linker from GNU Binutils can link against the DLL directly, but you may still want one for other reasons.

"},{"location":"install/#make-the-def","title":"Make the .def","text":"

Import libraries are compiled from a list of the symbols to export, a .def file. This file should already be in your exports directory: cd OPENBLAS_TOP_DIR/exports.

"},{"location":"install/#making-a-mingw-import-library","title":"Making a MinGW import library","text":"

MinGW import libraries have the suffix .a, the same as static libraries. (It's actually more common to use .dll.a...)

You need to first prepend libopenblas.def with a line LIBRARY libopenblas.dll:

cat <(echo \"LIBRARY libopenblas.dll\") libopenblas.def > libopenblas.def.1\nmv libopenblas.def.1 libopenblas.def\n

Now it should look something like this:

LIBRARY libopenblas.dll\nEXPORTS\n   caxpy=caxpy_  @1\n   caxpy_=caxpy_  @2\n       ...\n

Then, generate the import library: dlltool -d libopenblas.def -l libopenblas.a

Again, there is basically no point in making an import library for use in MinGW. It actually slows down linking.

"},{"location":"install/#making-a-msvc-import-library","title":"Making a MSVC import library","text":"

Unlike MinGW, MSVC absolutely requires an import library. The C ABIs of MSVC and MinGW are identical, so linking works fine. (Any incompatibility in the C ABI would be a bug.)

The import libraries of MSVC have the suffix .lib. They are generated from a .def file using MSVC's lib.exe. See the MSVC instructions.

"},{"location":"install/#notes","title":"Notes","text":""},{"location":"install/#mac-osx","title":"Mac OSX","text":"

If your CPU is Sandy Bridge, please use Clang version 3.1 or above. Clang 3.0 will generate incorrect AVX code for OpenBLAS.

"},{"location":"install/#precompiled-packages_2","title":"Precompiled packages","text":"

https://www.macports.org/ports.php?by=name&substr=openblas

brew install openblas

or using the conda package manager from https://github.com/conda-forge/miniforge#download (which also has packages for the Apple M1 CPU)

conda install openblas
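
Note that the Homebrew package is typically not installed into the default compiler search paths, so when compiling your own code against it you may need to pass its locations explicitly. A sketch (brew --prefix resolves the actual installation path; test.c is a placeholder):

gcc -o test test.c -I$(brew --prefix openblas)/include -L$(brew --prefix openblas)/lib -lopenblas\n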

"},{"location":"install/#build-on-apple-m1","title":"Build on Apple M1","text":"

On newer versions of Xcode and on arm64, you might need to compile with a newer macOS target (11.0) than the default (10.8) with MACOSX_DEPLOYMENT_TARGET=11.0, or switch your command-line tools to use an older SDK (e.g., 13.1).
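
For example, a minimal sketch using the deployment target mentioned above:

export MACOSX_DEPLOYMENT_TARGET=11.0\nmake\n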

"},{"location":"install/#android","title":"Android","text":""},{"location":"install/#prerequisites_1","title":"Prerequisites","text":"

In addition to the Android NDK, you will need both Perl and a C compiler on the build host as these are currently required by the OpenBLAS build environment.

"},{"location":"install/#building-with-android-ndk-using-clang-compiler","title":"Building with android NDK using clang compiler","text":"

Around version 11, Android NDKs stopped supporting gcc, so you will need to use clang to compile OpenBLAS. clang is supported from OpenBLAS 0.2.20 onwards. See the sections below on how to build with clang for ARMV7 and ARMV8 targets. The same basic principles as described below for ARMV8 should also apply to building an x86 or x86_64 version (substitute something like NEHALEM for the target instead of ARMV8, and obviously replace all the aarch64 parts in the toolchain paths).

\"Historic\" notes: Since version 19, the default toolchain is provided as a standalone toolchain, so building one yourself following building a standalone toolchain should no longer be necessary. If you want to use static linking with an NDK version older than about r17, you currently need to choose an API level below 23 due to NDK bug 272 (https://github.com/android-ndk/ndk/issues/272 ; the libc.a lacks a definition of stderr), which should be fixed from r17 of the NDK on.

"},{"location":"install/#build-armv7-with-clang","title":"Build ARMV7 with clang","text":"

## Set path to ndk-bundle\nexport NDK_BUNDLE_DIR=/path/to/ndk-bundle\n\n## Set the PATH to contain paths to clang and arm-linux-androideabi-* utilities\nexport PATH=${NDK_BUNDLE_DIR}/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/bin:${NDK_BUNDLE_DIR}/toolchains/llvm/prebuilt/linux-x86_64/bin:$PATH\n\n## Set LDFLAGS so that the linker finds the appropriate libgcc\nexport LDFLAGS=\"-L${NDK_BUNDLE_DIR}/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/lib/gcc/arm-linux-androideabi/4.9.x\"\n\n## Set the clang cross compile flags\nexport CLANG_FLAGS=\"-target arm-linux-androideabi -marm -mfpu=vfp -mfloat-abi=softfp --sysroot ${NDK_BUNDLE_DIR}/platforms/android-23/arch-arm -gcc-toolchain ${NDK_BUNDLE_DIR}/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/\"\n\n#OpenBLAS Compile\nmake TARGET=ARMV7 ONLY_CBLAS=1 AR=ar CC=\"clang ${CLANG_FLAGS}\" HOSTCC=gcc ARM_SOFTFP_ABI=1 -j4\n
On a Mac, it may also be necessary to give the complete path to the ar utility in the make command above, like so:
AR=${NDK_BUNDLE_DIR}/toolchains/arm-linux-androideabi-4.9/prebuilt/darwin-x86_64/bin/arm-linux-androideabi-gcc-ar\n
otherwise you may get a linker error complaining about a \"malformed archive header name at 8\" when the native OSX ar command is invoked instead.

"},{"location":"install/#build-armv8-with-clang","title":"Build ARMV8 with clang","text":"

## Set path to ndk-bundle\nexport NDK_BUNDLE_DIR=/path/to/ndk-bundle/\n\n## Export PATH to contain directories of clang and aarch64-linux-android-* utilities\nexport PATH=${NDK_BUNDLE_DIR}/toolchains/aarch64-linux-android-4.9/prebuilt/linux-x86_64/bin/:${NDK_BUNDLE_DIR}/toolchains/llvm/prebuilt/linux-x86_64/bin:$PATH\n\n## Setup LDFLAGS so that loader can find libgcc and pass -lm for sqrt\nexport LDFLAGS=\"-L${NDK_BUNDLE_DIR}/toolchains/aarch64-linux-android-4.9/prebuilt/linux-x86_64/lib/gcc/aarch64-linux-android/4.9.x -lm\"\n\n## Setup the clang cross compile options\nexport CLANG_FLAGS=\"-target aarch64-linux-android --sysroot ${NDK_BUNDLE_DIR}/platforms/android-23/arch-arm64 -gcc-toolchain ${NDK_BUNDLE_DIR}/toolchains/aarch64-linux-android-4.9/prebuilt/linux-x86_64/\"\n\n## Compile\nmake TARGET=ARMV8 ONLY_CBLAS=1 AR=ar CC=\"clang ${CLANG_FLAGS}\" HOSTCC=gcc -j4\n
Note: Using TARGET=CORTEXA57 in place of ARMV8 will pick up better optimized routines. Implementations for the CORTEXA57 target are compatible with all other ARMV8 targets.

Note: For NDK 23b, something as simple as

export PATH=/opt/android-ndk-r23b/toolchains/llvm/prebuilt/linux-x86_64/bin/:$PATH\nmake HOSTCC=gcc CC=/opt/android-ndk-r23b/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android31-clang ONLY_CBLAS=1 TARGET=ARMV8\n
appears to be sufficient on Linux.

"},{"location":"install/#alternative-script-which-was-tested-on-osx-with-ndk2136528147","title":"Alternative script which was tested on OSX with NDK(21.3.6528147)","text":"

This script will build OpenBLAS for 3 architectures (ARMV7, ARMV8, X86) and install them with sudo make install to /opt/OpenBLAS/lib:

export NDK=YOUR_PATH_TO_SDK/Android/sdk/ndk/21.3.6528147\nexport TOOLCHAIN=$NDK/toolchains/llvm/prebuilt/darwin-x86_64\n\nmake clean\nmake \\\n    TARGET=ARMV7 \\\n    ONLY_CBLAS=1 \\\n    CC=\"$TOOLCHAIN\"/bin/armv7a-linux-androideabi21-clang \\\n    AR=\"$TOOLCHAIN\"/bin/arm-linux-androideabi-ar \\\n    HOSTCC=gcc \\\n    ARM_SOFTFP_ABI=1 \\\n    -j4\nsudo make install\n\nmake clean\nmake \\\n    TARGET=CORTEXA57 \\\n    ONLY_CBLAS=1 \\\n    CC=$TOOLCHAIN/bin/aarch64-linux-android21-clang \\\n    AR=$TOOLCHAIN/bin/aarch64-linux-android-ar \\\n    HOSTCC=gcc \\\n    -j4\nsudo make install\n\nmake clean\nmake \\\n    TARGET=ATOM \\\n    ONLY_CBLAS=1 \\\n    CC=\"$TOOLCHAIN\"/bin/i686-linux-android21-clang \\\n    AR=\"$TOOLCHAIN\"/bin/i686-linux-android-ar \\\n    HOSTCC=gcc \\\n    ARM_SOFTFP_ABI=1 \\\n    -j4\nsudo make install\n\n## This will build for x86_64 \nmake clean\nmake \\\n    TARGET=ATOM BINARY=64\\\n    ONLY_CBLAS=1 \\\n    CC=\"$TOOLCHAIN\"/bin/x86_64-linux-android21-clang \\\n    AR=\"$TOOLCHAIN\"/bin/x86_64-linux-android-ar \\\n    HOSTCC=gcc \\\n    ARM_SOFTFP_ABI=1 \\\n    -j4\nsudo make install\n
You can also find the full list of target architectures in TargetsList.txt.

Anything below this line should be irrelevant nowadays unless you need to perform software archeology.

"},{"location":"install/#building-openblas-with-very-old-gcc-based-versions-of-the-ndk-without-fortran","title":"Building OpenBLAS with very old gcc-based versions of the NDK, without Fortran","text":"

The prebuilt Android NDK toolchains do not include Fortran, hence parts like LAPACK cannot be built. You can still build OpenBLAS without it. For instructions on how to build OpenBLAS with Fortran, see the next section.

To easily use the prebuilt toolchains, follow building a standalone toolchain for your desired architecture. This would be arm-linux-androideabi-gcc-4.9 for ARMV7 and aarch64-linux-android-gcc-4.9 for ARMV8.

You can build OpenBLAS (0.2.19 and earlier) with:

## Add the toolchain to your path\nexport PATH=/path/to/standalone-toolchain/bin:$PATH\n\n## Build without Fortran for ARMV7\nmake TARGET=ARMV7 HOSTCC=gcc CC=arm-linux-androideabi-gcc NOFORTRAN=1 libs\n## Build without Fortran for ARMV8\nmake TARGET=ARMV8 BINARY=64 HOSTCC=gcc CC=aarch64-linux-android-gcc NOFORTRAN=1 libs\n

Since we are cross-compiling, we make the libs recipe, not all. Otherwise you will get errors when trying to link/run tests as versions up to and including 0.2.19 cannot build a shared library for Android.

From 0.2.20 on, you should leave off the \"libs\" to get a full build, and you may want to use the softfp ABI instead of the deprecated hardfp one on ARMV7, so you would use:

## Add the toolchain to your path\nexport PATH=/path/to/standalone-toolchain/bin:$PATH\n\n## Build without Fortran for ARMV7\nmake TARGET=ARMV7 ARM_SOFTFP_ABI=1 HOSTCC=gcc CC=arm-linux-androideabi-gcc NOFORTRAN=1\n## Build without Fortran for ARMV8\nmake TARGET=ARMV8 BINARY=64 HOSTCC=gcc CC=aarch64-linux-android-gcc NOFORTRAN=1\n

If you get an error about stdio.h not being found, you need to specify your sysroot in the CFLAGS argument to make, e.g. CFLAGS=--sysroot=$NDK/platforms/android-16/arch-arm. When you are done, install OpenBLAS into the desired directory. Be sure to also use all the command line options here that you specified for building, otherwise errors may occur as it tries to install things you did not build:

make PREFIX=/path/to/install-dir TARGET=... install\n

"},{"location":"install/#building-openblas-with-fortran","title":"Building OpenBLAS with Fortran","text":"

Instructions on how to build the GNU toolchains with Fortran can be found here. The Releases section provides prebuilt versions; use the standalone one.

You can build OpenBLAS with:

## Add the toolchain to your path\nexport PATH=/path/to/standalone-toolchain-with-fortran/bin:$PATH\n\n## Build with Fortran for ARMV7\nmake TARGET=ARMV7 HOSTCC=gcc CC=arm-linux-androideabi-gcc FC=arm-linux-androideabi-gfortran libs\n## Build with LAPACK for ARMV8\nmake TARGET=ARMV8 BINARY=64 HOSTCC=gcc CC=aarch64-linux-android-gcc FC=aarch64-linux-android-gfortran libs\n

As mentioned above you can leave off the libs argument here when building 0.2.20 and later, and you may want to add ARM_SOFTFP_ABI=1 when building for ARMV7.

"},{"location":"install/#linking-openblas-0219-and-earlier-for-armv7","title":"Linking OpenBLAS (0.2.19 and earlier) for ARMV7","text":"

If you are using ndk-build, you need to set the ABI to hard floating points in your Application.mk:

APP_ABI := armeabi-v7a-hard\n

This will set the appropriate flags for you. If you are not using ndk-build, you will want to add the following flags:

TARGET_CFLAGS += -mhard-float -D_NDK_MATH_NO_SOFTFP=1\nTARGET_LDFLAGS += -Wl,--no-warn-mismatch -lm_hard\n

From 0.2.20 on, it is also possible to build for the softfp ABI by specifying ARM_SOFTFP_ABI=1 during the build. In that case, also make sure that all your dependencies are compiled with -mfloat-abi=softfp as well, as mixing \"hard\" and \"soft\" floating point ABIs in a program will make it crash.
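
For instance, a matching compile of your own sources with the same toolchain could look like this sketch (the file names are placeholders):

arm-linux-androideabi-gcc -mfloat-abi=softfp -c myapp.c -o myapp.o\n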

"},{"location":"install/#iphoneios","title":"iPhone/iOS","text":"

As none of the current developers use iOS, the following instructions are merely what was found to work in our Azure CI setup, but as far as we know this builds a fully working OpenBLAS for this platform.

Go to the directory where you unpacked OpenBLAS, and enter the following commands:

     CC=/Applications/Xcode_12.4.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang\n\nCFLAGS= -O2 -Wno-macro-redefined -isysroot /Applications/Xcode_12.4.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS14.4.sdk -arch arm64 -miphoneos-version-min=10.0\n\nmake TARGET=ARMV8 DYNAMIC_ARCH=1 NUM_THREADS=32 HOSTCC=clang NOFORTRAN=1\n
Adjust the iOS version given in -miphoneos-version-min as necessary for your installation, i.e. change the version number to the minimum iOS version you want to target, then execute these commands to build the library.

"},{"location":"install/#mips","title":"MIPS","text":"

For MIPS targets you will need the latest toolchains: P5600 - MTI GNU/Linux Toolchain; I6400, P6600 - IMG GNU/Linux Toolchain.

The download link is below (http://codescape-mips-sdk.imgtec.com/components/toolchain/2016.05-03/downloads.html)

You can use the following command lines for builds:

IMG_TOOLCHAIN_DIR={full IMG GNU/Linux Toolchain path including \"bin\" directory -- for example, /opt/linux_toolchain/bin}\nIMG_GCC_PREFIX=mips-img-linux-gnu\nIMG_TOOLCHAIN=${IMG_TOOLCHAIN_DIR}/${IMG_GCC_PREFIX}\n\nI6400 Build (n32):\nmake BINARY=32 BINARY32=1 CC=$IMG_TOOLCHAIN-gcc AR=$IMG_TOOLCHAIN-ar FC=\"$IMG_TOOLCHAIN-gfortran -EL -mabi=n32\" RANLIB=$IMG_TOOLCHAIN-ranlib HOSTCC=gcc CFLAGS=\"-EL\" FFLAGS=$CFLAGS LDFLAGS=$CFLAGS TARGET=I6400\n\nI6400 Build (n64):\nmake BINARY=64 BINARY64=1 CC=$IMG_TOOLCHAIN-gcc AR=$IMG_TOOLCHAIN-ar FC=\"$IMG_TOOLCHAIN-gfortran -EL\" RANLIB=$IMG_TOOLCHAIN-ranlib HOSTCC=gcc CFLAGS=\"-EL\" FFLAGS=$CFLAGS LDFLAGS=$CFLAGS TARGET=I6400\n\nP6600 Build (n32):\nmake BINARY=32 BINARY32=1 CC=$IMG_TOOLCHAIN-gcc AR=$IMG_TOOLCHAIN-ar FC=\"$IMG_TOOLCHAIN-gfortran -EL -mabi=n32\" RANLIB=$IMG_TOOLCHAIN-ranlib HOSTCC=gcc CFLAGS=\"-EL\" FFLAGS=$CFLAGS LDFLAGS=$CFLAGS TARGET=P6600\n\nP6600 Build (n64):\nmake BINARY=64 BINARY64=1 CC=$IMG_TOOLCHAIN-gcc AR=$IMG_TOOLCHAIN-ar FC=\"$IMG_TOOLCHAIN-gfortran -EL\" RANLIB=$IMG_TOOLCHAIN-ranlib HOSTCC=gcc CFLAGS=\"-EL\" FFLAGS=\"$CFLAGS\" LDFLAGS=\"$CFLAGS\" TARGET=P6600\n\nMTI_TOOLCHAIN_DIR={full MTI GNU/Linux Toolchain path including \"bin\" directory -- for example, /opt/linux_toolchain/bin}\nMTI_GCC_PREFIX=mips-mti-linux-gnu\nMTI_TOOLCHAIN=${MTI_TOOLCHAIN_DIR}/${MTI_GCC_PREFIX}\n\nP5600 Build:\n\nmake BINARY=32 BINARY32=1 CC=$MTI_TOOLCHAIN-gcc AR=$MTI_TOOLCHAIN-ar FC=\"$MTI_TOOLCHAIN-gfortran -EL\" RANLIB=$MTI_TOOLCHAIN-ranlib HOSTCC=gcc CFLAGS=\"-EL\" FFLAGS=$CFLAGS LDFLAGS=$CFLAGS TARGET=P5600\n
"},{"location":"install/#freebsd","title":"FreeBSD","text":"

You will need to install the following tools from the FreeBSD ports tree: lang/gcc [1], lang/perl5.12, ftp/curl, devel/gmake, and devel/patch.

To compile run the command:

$ gmake CC=gcc46 FC=gfortran46\n

Note that you need to build with GNU make and manually specify the compiler, otherwise gcc 4.2 from the base system would be used.

[1]: Removal of Fortran from the FreeBSD base system

pkg install openblas\n

see https://www.freebsd.org/ports/index.html

"},{"location":"install/#cortex-m","title":"Cortex-M","text":"

Cortex-M is a widely used family of microcontroller cores that is present in a variety of industrial and consumer electronics. A common line of microcontrollers built around a Cortex-M core is the STM32F4xx series. Here, we will give instructions for building for the STM32F4xx.

First, install the embedded arm gcc compiler from the arm website. Then, create the following toolchain file and build as follows.

# cmake .. -G Ninja -DCMAKE_C_COMPILER=arm-none-eabi-gcc -DCMAKE_TOOLCHAIN_FILE:PATH=\"toolchain.cmake\" -DNOFORTRAN=1 -DTARGET=ARMV5 -DEMBEDDED=1\n\nset(CMAKE_SYSTEM_NAME Generic)\nset(CMAKE_SYSTEM_PROCESSOR arm)\n\nset(CMAKE_C_COMPILER \"arm-none-eabi-gcc.exe\")\nset(CMAKE_CXX_COMPILER \"arm-none-eabi-g++.exe\")\n\nset(CMAKE_EXE_LINKER_FLAGS \"--specs=nosys.specs\" CACHE INTERNAL \"\")\n\nset(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)\nset(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)\nset(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)\nset(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)\n

In your embedded application, the following functions need to be provided for OpenBLAS to work correctly:

void free(void* ptr);\nvoid* malloc(size_t size);\n

Note

If you are developing for an embedded platform, it is your responsibility to make sure that the device has sufficient memory for malloc calls. Libmemory provides one implementation of malloc for embedded platforms.
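
For illustration only, a minimal bump allocator along the following lines can satisfy these requirements on a bare-metal target. The pool size is an arbitrary assumption and this sketch never reclaims memory, so a real implementation such as the one from Libmemory is preferable:

#include <stddef.h>\n\n/* assumption: size the pool for your device and the matrices you use */\n#define POOL_SIZE (64 * 1024)\nstatic unsigned char pool[POOL_SIZE];\nstatic size_t pool_used = 0;\n\n/* very small bump allocator: 8-byte aligned, never frees */\nvoid* malloc(size_t size)\n{\n    size = (size + 7u) & ~(size_t)7u;\n    if (size > POOL_SIZE - pool_used)\n        return NULL;\n    void* p = &pool[pool_used];\n    pool_used += size;\n    return p;\n}\n\nvoid free(void* ptr)\n{\n    /* no-op: only acceptable if allocations live for the whole program run */\n    (void)ptr;\n}\n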

"},{"location":"user_manual/","title":"User manual","text":""},{"location":"user_manual/#compile-the-library","title":"Compile the library","text":""},{"location":"user_manual/#normal-compile","title":"Normal compile","text":""},{"location":"user_manual/#cross-compile","title":"Cross compile","text":"

Please set CC and FC to the cross toolchain compilers. Then, set HOSTCC to your host C compiler. Finally, set TARGET explicitly.

Examples:

Install only the gnueabihf versions of the cross toolchain. Please check https://github.com/xianyi/OpenBLAS/issues/936#issuecomment-237596847

make CC=arm-linux-gnueabihf-gcc FC=arm-linux-gnueabihf-gfortran HOSTCC=gcc TARGET=CORTEXA9\n
make BINARY=64 CC=mips64el-unknown-linux-gnu-gcc FC=mips64el-unknown-linux-gnu-gfortran HOSTCC=gcc TARGET=LOONGSON3A\n
make CC=loongcc FC=loongf95 HOSTCC=gcc TARGET=LOONGSON3A CROSS=1 CROSS_SUFFIX=mips64el-st-linux-gnu-   NO_LAPACKE=1 NO_SHARED=1 BINARY=32\n
"},{"location":"user_manual/#debug-version","title":"Debug version","text":"
make DEBUG=1\n
"},{"location":"user_manual/#install-to-the-directory-optional","title":"Install to the directory (optional)","text":"

Example:

make install PREFIX=your_installation_directory\n

The default directory is /opt/OpenBLAS. Note that any flags passed to make during the build should also be passed to make install to avoid installation errors, e.g. some headers not being copied over correctly.
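
For example, if hypothetical build options were used, repeat them on the install invocation (the option values here are purely illustrative):

make USE_OPENMP=1 NUM_THREADS=64\nmake USE_OPENMP=1 NUM_THREADS=64 PREFIX=your_installation_directory install\n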

For more information, please read Installation Guide.

"},{"location":"user_manual/#link-the-library","title":"Link the library","text":"
gcc -o test test.c -I/your_path/OpenBLAS/include/ -L/your_path/OpenBLAS/lib -Wl,-rpath,/your_path/OpenBLAS/lib -lopenblas\n

The -Wl,-rpath,/your_path/OpenBLAS/lib option to linker can be omitted if you ran ldconfig to update linker cache, put /your_path/OpenBLAS/lib in /etc/ld.so.conf or a file in /etc/ld.so.conf.d, or installed OpenBLAS in a location that is part of the ld.so default search path (usually /lib,/usr/lib and /usr/local/lib). Alternatively, you can set the environment variable LD_LIBRARY_PATH to point to the folder that contains libopenblas.so. Otherwise, linking at runtime will fail with a message like cannot open shared object file: no such file or directory
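
For example, to rely on LD_LIBRARY_PATH for the current shell session (using the same placeholder path as above and the test program built by the command shown earlier):

export LD_LIBRARY_PATH=/your_path/OpenBLAS/lib:$LD_LIBRARY_PATH\n./test\n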

If the library is multithreaded, please add -lpthread. If the library contains LAPACK functions, please add -lgfortran or other Fortran libs, although if you only make calls to LAPACKE routines, i.e. your code has #include \"lapacke.h\" and makes calls to methods like LAPACKE_dgeqrf, -lgfortran is not needed.

gcc -o test test.c /your/path/libopenblas.a\n

You can download test.c from https://gist.github.com/xianyi/5780018

"},{"location":"user_manual/#code-examples","title":"Code examples","text":""},{"location":"user_manual/#call-cblas-interface","title":"Call CBLAS interface","text":"

This example shows calling cblas_dgemm in C. https://gist.github.com/xianyi/6930656

#include <cblas.h>\n#include <stdio.h>\n\nint main(void)\n{\n  int i=0;\n  double A[6] = {1.0,2.0,1.0,-3.0,4.0,-1.0};\n  double B[6] = {1.0,2.0,1.0,-3.0,4.0,-1.0};\n  double C[9] = {.5,.5,.5,.5,.5,.5,.5,.5,.5};\n  cblas_dgemm(CblasColMajor, CblasNoTrans, CblasTrans, 3, 3, 2, 1, A, 3, B, 3, 2, C, 3);\n\n  for(i=0; i<9; i++)\n    printf(\"%lf \", C[i]);\n  printf(\"\\n\");\n\n  return 0;\n}\n

gcc -o test_cblas_open test_cblas_dgemm.c -I /your_path/OpenBLAS/include/ -L/your_path/OpenBLAS/lib -lopenblas -lpthread -lgfortran\n
"},{"location":"user_manual/#call-blas-fortran-interface","title":"Call BLAS Fortran interface","text":"

This example shows calling dgemm Fortran interface in C. https://gist.github.com/xianyi/5780018

#include \"stdio.h\"\n#include \"stdlib.h\"\n#include \"sys/time.h\"\n#include \"time.h\"\n\nextern void dgemm_(char*, char*, int*, int*,int*, double*, double*, int*, double*, int*, double*, double*, int*);\n\nint main(int argc, char* argv[])\n{\n  int i;\n  printf(\"test!\\n\");\n  if(argc<4){\n    printf(\"Input Error\\n\");\n    return 1;\n  }\n\n  int m = atoi(argv[1]);\n  int n = atoi(argv[2]);\n  int k = atoi(argv[3]);\n  int sizeofa = m * k;\n  int sizeofb = k * n;\n  int sizeofc = m * n;\n  char ta = 'N';\n  char tb = 'N';\n  double alpha = 1.2;\n  double beta = 0.001;\n\n  struct timeval start,finish;\n  double duration;\n\n  double* A = (double*)malloc(sizeof(double) * sizeofa);\n  double* B = (double*)malloc(sizeof(double) * sizeofb);\n  double* C = (double*)malloc(sizeof(double) * sizeofc);\n\n  srand((unsigned)time(NULL));\n\n  for (i=0; i<sizeofa; i++)\n    A[i] = i%3+1;//(rand()%100)/10.0;\n\n  for (i=0; i<sizeofb; i++)\n    B[i] = i%3+1;//(rand()%100)/10.0;\n\n  for (i=0; i<sizeofc; i++)\n    C[i] = i%3+1;//(rand()%100)/10.0;\n  //#if 0\n  printf(\"m=%d,n=%d,k=%d,alpha=%lf,beta=%lf,sizeofc=%d\\n\",m,n,k,alpha,beta,sizeofc);\n  gettimeofday(&start, NULL);\n  dgemm_(&ta, &tb, &m, &n, &k, &alpha, A, &m, B, &k, &beta, C, &m);\n  gettimeofday(&finish, NULL);\n\n  duration = ((double)(finish.tv_sec-start.tv_sec)*1000000 + (double)(finish.tv_usec-start.tv_usec)) / 1000000;\n  double gflops = 2.0 * m *n*k;\n  gflops = gflops/duration*1.0e-6;\n\n  FILE *fp;\n  fp = fopen(\"timeDGEMM.txt\", \"a\");\n  fprintf(fp, \"%dx%dx%d\\t%lf s\\t%lf MFLOPS\\n\", m, n, k, duration, gflops);\n  fclose(fp);\n\n  free(A);\n  free(B);\n  free(C);\n  return 0;\n}\n
gcc -o time_dgemm time_dgemm.c /your/path/libopenblas.a -lpthread\n./time_dgemm <m> <n> <k>\n
"},{"location":"user_manual/#troubleshooting","title":"Troubleshooting","text":""},{"location":"user_manual/#blas-reference-manual","title":"BLAS reference manual","text":"

If you want to understand every BLAS function and definition, please read the Intel MKL reference manual or the reference documentation at netlib.org.

Here are OpenBLAS extension functions

"}]} \ No newline at end of file diff --git a/docs/user_manual/index.html b/docs/user_manual/index.html index 68098388d..3ba5b2ae9 100644 --- a/docs/user_manual/index.html +++ b/docs/user_manual/index.html @@ -732,7 +732,7 @@
gcc -o test test.c -I/your_path/OpenBLAS/include/ -L/your_path/OpenBLAS/lib -Wl,-rpath,/your_path/OpenBLAS/lib -lopenblas
 
-

The -Wl,-rpath,/your_path/OpenBLAS/lib option to linker can be omitted if you ran ldconfig to update linker cache, put /your_path/OpenBLAS/lib in /etc/ld.so.conf or a file in /etc/ld.so.conf.d, or installed OpenBLAS in a location part of ld.so default search path. Otherwise, linking at runtime will fail.

+

The -Wl,-rpath,/your_path/OpenBLAS/lib option to linker can be omitted if you ran ldconfig to update linker cache, put /your_path/OpenBLAS/lib in /etc/ld.so.conf or a file in /etc/ld.so.conf.d, or installed OpenBLAS in a location that is part of the ld.so default search path (usually /lib,/usr/lib and /usr/local/lib). Alternatively, you can set the environment variable LD_LIBRARY_PATH to point to the folder that contains libopenblas.so. Otherwise, linking at runtime will fail with a message like cannot open shared object file: no such file or directory

If the library is multithreaded, please add -lpthread. If the library contains LAPACK functions, please add -lgfortran or other Fortran libs, although if you only make calls to LAPACKE routines, i.e. your code has #include "lapacke.h" and makes calls to methods like LAPACKE_dgeqrf, -lgfortran is not needed.