RM Server 9 running great on Paperspace with GPUs
I have noticed that Paperspace does not feature strongly in the RM community (AWS and Azure dominate). However, if you want to play with deep learning on large GPUs at little expense (e.g. a Volta V100 at $2.30 per hour), I can report that RM Server 9 installs and runs just fine on their Ubuntu virtual machines (I have not tested their Windows machines).
Things to watch:
- Make sure you install Java 8 (Java 9 is the default).
- Install Oracle MySQL (enable legacy passwords and use the current Oracle MySQL connector) and avoid MariaDB (unless you want a major cleanup of your machine).
- Define your Keras and Python settings in [<server>/job-agent/home/config/rapidminer/rapidminer.properties], pointing to the Python in the preinstalled Anaconda, which comes with TensorFlow and Keras.
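For reference, the two config changes above look roughly like this. The MySQL option is the standard MySQL 8 `default_authentication_plugin` switch; the RapidMiner property key and the Anaconda path are illustrative placeholders, so check the keys your extension actually reads in rapidminer.properties:

```ini
# /etc/mysql/mysql.conf.d/mysqld.cnf -- make MySQL 8 use legacy-style passwords
[mysqld]
default_authentication_plugin=mysql_native_password

# <server>/job-agent/home/config/rapidminer/rapidminer.properties
# Point the Python/Keras integration at the preinstalled Anaconda Python.
# (Key name and path are illustrative -- verify against your own install.)
rapidminer.python_scripting.python.path=/home/paperspace/anaconda3/bin/python
```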
The Keras plugin seems memory hungry, so increase the container memoryLimit in [<server>/job-agent/home/config/agent.properties] to the maximum your machine configuration allows.
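As a sketch, the line to raise in agent.properties looks something like the following; the exact key name may vary slightly between Server 9 releases, so verify it against the shipped file (the value is in MB):

```ini
# <server>/job-agent/home/config/agent.properties
# Per-container memory limit in MB -- raise as high as your machine allows.
# (Key name is illustrative; check the agent.properties your release ships.)
jobagent.container.memoryLimit=12288
```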
Last, you can save yourself a lot of time and effort by steering away from bare machines, and instead using some of the pre-configured ML in a Box templates, which they currently have for small GPUs (which I was playing with) and for Volta V100 (in Beta).
Have fun -- Jacob
P.S. I found that a container memory limit below 8 GB causes frequent over-the-limit warnings; however, Java seemed to recover and garbage-collect the memory to complete a job.
Answers
Hi,
thanks for the write-up.
Have you also tried the new Deep Learning extension from RapidMiner, which supports GPUs natively via the DL4J backend?
It would be interesting to know how well that works too.
You can simply switch the backend support between GPU and CPU by setting this server parameter key:
Best,
David
All looks good. I installed the Deep Learning extension and ran a couple of the standard samples on the remote Paperspace server, and everything worked well. For anything more complex, I'd have to grab the DL4J documentation.
Jacob
This is great to hear.
We'd like to get more feedback, especially about experiences on different environments.
Feel free to share any benchmarks you perform.
Best,
David