Importing Large Data Sets into the Cloud
When reading large files into the cloud from within a process, you may run out of local memory on your machine, which will cause the read to fail. You may see the following error:

"main memory limit reached"
This happens because the system loads the entire file into local memory before streaming it to the cloud.
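To illustrate why the in-process read fails, the sketch below contrasts loading a whole file into memory with streaming it in fixed-size chunks. This is a minimal Python illustration, not the product's actual upload code; upload() and upload_chunk() are hypothetical placeholders for whatever transfer function a cloud repository might expose.

    CHUNK_SIZE = 8 * 1024 * 1024  # 8 MB per chunk

    def upload_in_memory(path, upload):
        # Problematic: the whole file is held in RAM at once, so a file
        # larger than available memory fails before any data is sent.
        with open(path, "rb") as f:
            data = f.read()  # entire file loaded into local memory
        upload(data)

    def upload_streaming(path, upload_chunk):
        # Preferred: only one chunk is in memory at a time, so peak
        # memory use stays constant regardless of file size.
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                upload_chunk(chunk)

With the streaming approach, peak memory use is bounded by CHUNK_SIZE rather than by the size of the file, which is why the manual import described below avoids the error.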
Rather than reading the file inside a process, you should load it into your cloud repository manually.
This can be done by selecting File > Import Data and then choosing whichever import type is appropriate for your file.
The import wizard works the same as usual until Step 5, where you should select the cloud repository instead of the local one.
Once the data is in the cloud repository, you will be able to execute processes against it without error.