Revision as of 19:17, 21 January 2015
PyCUDA makes it possible to easily use CUDA inside Python code.
Documentation can be found on the package webpage.
This package is not currently installed as SHARCNET-supported software, but it's easy for users to install it on their own following instructions below. If any difficulties are encountered when following these instructions, please ask SHARCNET staff for help.
See also: PyOpenCL
==SHARCNET installation instructions==
===Monk cluster===
(These instructions were tested on Oct 22, 2014.)
1. Unload the Intel compiler module (loaded by default), so that GCC becomes the default compiler. Also, use a later python version.
module unload intel
module unload openmpi
module load gcc/4.8.2
module load openmpi/gcc/1.8.1
module load python/gcc/2.7.8
Note: the openmpi module is loaded because the python module needs it; it is not actually used by PyCUDA.
2. Create some directory you want to build the package in, cd into it, then get the PyCUDA source code:
git clone http://git.tiker.net/trees/pycuda.git
cd pycuda
git submodule init
git submodule update
wget https://pypi.python.org/packages/source/p/pycuda/pycuda-2014.1.tar.gz#md5=fdc2f59e57ab7256a7e0df0d9d943022
tar xfz pycuda-2014.1.tar.gz
cd pycuda-2014.1
3. At this point, decide where you want the package to be installed. This example will use a directory called python_packages in the home directory. If this directory does not yet exist, create it with:
mkdir -p ~/python_packages/lib/python/
Doing it this way creates the required subdirectories as well.
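The -p flag is what creates the nested lib/python subdirectories in one command. The same behaviour can be sketched in Python, using a temporary directory as a stand-in for the home directory:

```python
# Sketch of what mkdir -p does: create a target directory together with
# any missing intermediate directories.
import os
import tempfile

root = tempfile.mkdtemp()  # stand-in for the home directory
target = os.path.join(root, "python_packages", "lib", "python")

os.makedirs(target)  # creates python_packages/ and lib/ along the way
print(os.path.isdir(target))  # True
```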
4. Edit the file Makefile.in, adding a --home flag that points to the directory you created to the setup install line, so that it reads:
${PYTHON_EXE} setup.py install --home=~/python_packages
5. You now need to update the PYTHONPATH variable to point to the library directory:
export PYTHONPATH=~/python_packages/lib/python/:$PYTHONPATH
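Why this works: directories listed in PYTHONPATH are prepended to sys.path when the interpreter starts, so python will search them for importable packages. A minimal demonstration (using a hypothetical /tmp path rather than the actual install directory):

```python
# Show that a directory listed in PYTHONPATH appears on sys.path of a
# child interpreter. The path used here is a hypothetical example.
import os
import subprocess
import sys

env = dict(os.environ, PYTHONPATH="/tmp/python_packages/lib/python")
out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.path)"],
    env=env,
).decode()
print("/tmp/python_packages/lib/python" in out)  # True
```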
6. Configure and compile, providing a path to the CUDA files on monk:
python configure.py --cuda-root=/opt/sharcnet/cuda/6.0.37/toolkit
make install
7. Do a first test of the installation, to make sure the pycuda module can be imported, by starting python and executing:
import pycuda
If no errors are reported, everything worked and the package is ready for use.
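It can also be worth confirming which copy of the package the import picked up; it should live under ~/python_packages. One way to check, on a modern Python, is sketched below (module_location is a hypothetical helper, not part of PyCUDA):

```python
# Report the file a module would be imported from, to confirm that the
# intended installation is the one being used.
import importlib.util

def module_location(name):
    """Return the path a module would be loaded from, or None if absent."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# On monk, module_location("pycuda") should point into
# ~/python_packages/lib/python/. Using a stdlib package as a stand-in:
print(module_location("json"))
```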
8. Add the lines:
module unload intel
module unload openmpi
module load gcc/4.8.2
module load openmpi/gcc/1.8.1
module load python/gcc/2.7.8
export PYTHONPATH=~/python_packages/lib/python/:$PYTHONPATH
to your ~/.bashrc file so that this environment is set up automatically for you on every login.
9. Test PyCUDA on a development node which has a GPU (the login node does not have one, so PyCUDA tests will produce an error there). To do this, execute on the monk login node:
ssh mon54
Then go to the directory where you put the PyCuda source code, and execute:
python test/test_driver.py
At the time of writing, this test gave the following error:
[ppomorsk@mon241:~/supported_sharcnet_packages/pycuda/pycuda-2014.1/test] python test_driver.py
Traceback (most recent call last):
  File "test_driver.py", line 4, in <module>
    from pycuda.tools import mark_cuda_test
  File "/home/ppomorsk/python_packages/lib/python/pycuda-2014.1-py2.7-linux-x86_64.egg/pycuda/tools.py", line 30, in <module>
    import pycuda.driver as cuda
  File "/home/ppomorsk/python_packages/lib/python/pycuda-2014.1-py2.7-linux-x86_64.egg/pycuda/driver.py", line 2, in <module>
    from pycuda._driver import *  # noqa
ImportError: /home/ppomorsk/python_packages/lib/python/pycuda-2014.1-py2.7-linux-x86_64.egg/pycuda/_driver.so: undefined symbol: cuStreamAttachMemAsync
The ldd output for the extension module:
[ppomorsk@mon54:~/supported_sharcnet_packages/pycuda/pycuda-2014.1/test] ldd /home/ppomorsk/python_packages/lib/python/pycuda-2014.1-py2.7-linux-x86_64.egg/pycuda/_driver.so
linux-vdso.so.1 => (0x00007fffac5c1000)
libcuda.so.1 => /usr/lib64/libcuda.so.1 (0x00002b7dbda9e000)
libcurand.so.6.0 => /opt/sharcnet/cuda/6.0.37/toolkit/lib64/libcurand.so.6.0 (0x00002b7dbea01000)
libstdc++.so.6 => /opt/sharcnet/gcc/4.8.2/lib64/libstdc++.so.6 (0x00002b7dc3f08000)
libm.so.6 => /lib64/libm.so.6 (0x00002b7dc4211000)
libgcc_s.so.1 => /opt/sharcnet/gcc/4.8.2/lib64/libgcc_s.so.1 (0x00002b7dc4496000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00002b7dc46ac000)
libc.so.6 => /lib64/libc.so.6 (0x00002b7dc48c9000)
libz.so.1 => /lib64/libz.so.1 (0x00002b7dc4c5e000)
libdl.so.2 => /lib64/libdl.so.2 (0x00002b7dc4e74000)
librt.so.1 => /lib64/librt.so.1 (0x00002b7dc5078000)
/lib64/ld-linux-x86-64.so.2 (0x00002b7dbd4a9000)
And yet the symbol does seem to exist in the driver library:
[ppomorsk@mon54:/usr/lib64] readelf -Ws libcuda.so.331.89 | grep cuStreamAttachMemAsync
    43: 0000000000139890   538 FUNC    GLOBAL DEFAULT   10 cuStreamAttachMemAsync
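The same kind of check can be done from Python with ctypes, which resolves symbols through the dynamic loader just as the extension module would at import time (has_symbol is a hypothetical helper, not part of any package):

```python
# Check whether a shared library exports a given symbol via the dynamic
# loader, as an alternative to running readelf by hand.
import ctypes
import ctypes.util

def has_symbol(libname, symbol):
    """True if the library named libname exports the given symbol."""
    path = ctypes.util.find_library(libname)
    if path is None:
        return False
    lib = ctypes.CDLL(path)
    return hasattr(lib, symbol)  # attribute lookup triggers a dlsym() call

# On monk one would check: has_symbol("cuda", "cuStreamAttachMemAsync")
print(has_symbol("c", "printf"))  # libc certainly exports printf -> True
```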
As a possible workaround, try configuring with an explicit driver library directory:
python configure.py --cuda-root=/opt/sharcnet/cuda/6.0.37/toolkit --cudadrv-lib-dir=/usr/lib64
A related discussion of this class of error:
http://pycuda.2962900.n2.nabble.com/PyCUDA-undefined-symbol-cuMemAllocPitch-v2-td7571641.html
"cudart is the run-time interface, which is used by 'conventional' CUDA C code. (cudaMemcpy) PyCUDA uses the driver interface. (cuMemcpy) "
"mismatch between cuda.h and driver" ?? try older CUDA modules?
python configure.py --cuda-root=/opt/sharcnet/cuda/5.5.22/toolkit --cudadrv-lib-dir=/usr/lib64
However, with cuda/5.5.22 loaded, even a basic CUDA test program fails with:
FATAL: Error inserting nvidia (/lib/modules/2.6.32-431.3.1.el6.x86_64/kernel/drivers/video/nvidia.ko): No such device
--
Runpaths problem: _driver.so has an RPATH, but none of its entries contain libcuda.so, so it was built incorrectly, in that it does not correctly encode its actual dependency.
[ppomorsk@mon241:~/supported_sharcnet_packages/pycuda/pycuda-2014.1/test] objdump -x /home/ppomorsk/python_packages/lib/python/pycuda-2014.1-py2.7-linux-x86_64.egg/pycuda/_driver.so | grep RPATH
RPATH /opt/sharcnet/python/2.7.8/gcc/lib:/opt/sharcnet/gcc/4.8.2/lib64:/opt/sharcnet/cuda/6.0.37/cula/lib64:/opt/sharcnet/cuda/6.0.37/toolkit/lib64:/opt/sharcnet/mkl/10.3.9/mkl/lib/intel64:/opt/sharcnet/mkl/10.3.9/lib/intel64
Could the problem be with python itself? The python/gcc/2.7.8 module includes theano, which uses the GPU, so there might be some hidden dependency. In any case, since it ships theano, python/gcc/2.7.8 ought to have an explicit CUDA dependency.
When everything is working, the test output looks like this:

[ppomorsk@mon54:~/pycuda] python test/test_driver.py
============================= test session starts ==============================
platform linux2 -- Python 2.6.6 -- pytest-2.3.4
collected 21 items

test_driver.py .....................

========================== 21 passed in 61.39 seconds ==========================
10. Try the example programs provided with the source code, found in the examples subdirectory of your pycuda source directory:
python examples/dump_properties.py
python examples/hello_gpu.py
Sample PyCUDA code
This is code from the hello_gpu.py example program. It multiplies two vectors elementwise on the GPU, and then verifies the result with a standard calculation on the CPU.
import pycuda.driver as drv
import pycuda.tools
import pycuda.autoinit
import numpy
import numpy.linalg as la
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void multiply_them(float *dest, float *a, float *b)
{
  const int i = threadIdx.x;
  dest[i] = a[i] * b[i];
}
""")

multiply_them = mod.get_function("multiply_them")

a = numpy.random.randn(400).astype(numpy.float32)
b = numpy.random.randn(400).astype(numpy.float32)

dest = numpy.zeros_like(a)
multiply_them(
        drv.Out(dest), drv.In(a), drv.In(b),
        block=(400,1,1))

print dest-a*b
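The verification in the last line can be sketched without a GPU; this pure-Python version mirrors what each CUDA thread computes, one elementwise product per index:

```python
# CPU-only sketch of what hello_gpu.py computes: an elementwise product
# of two random vectors, one element per (simulated) thread index.
import random

a = [random.gauss(0.0, 1.0) for _ in range(400)]
b = [random.gauss(0.0, 1.0) for _ in range(400)]

# Equivalent of the CUDA kernel body, dest[i] = a[i] * b[i], for each i:
dest = [a[i] * b[i] for i in range(400)]

# The check dest - a*b from the example; here the residual is exactly
# zero, while the float32 GPU result would differ from a float64 CPU
# result only by rounding error.
residual = max(abs(dest[i] - a[i] * b[i]) for i in range(400))
print(residual)  # 0.0
```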