OpenCL: Add release memory #1741

Merged (2 commits) on Jun 9, 2023
ggml-opencl.cpp (9 additions, 0 deletions)

@@ -662,6 +662,15 @@ static void ggml_cl_pool_free(cl_mem mem, size_t size) {
clReleaseMemObject(mem);
}

void ggml_cl_data_free(const struct ggml_tensor* tensor) {
if (tensor->backend != GGML_BACKEND_GPU) {
return;
}

cl_mem mem = (cl_mem)tensor->data;
clReleaseMemObject(mem);
}

static cl_int ggml_cl_h2d_tensor_2d(cl_command_queue queue, cl_mem dst, size_t offset, const struct ggml_tensor * src, uint64_t i3, uint64_t i2, cl_event* ev) {
cl_int err;
const uint64_t ne0 = src->ne[0];
ggml-opencl.h (2 additions, 0 deletions)

@@ -16,6 +16,8 @@ void ggml_cl_mul_mat(const struct ggml_tensor * src0, const struct ggml_tensor
void * ggml_cl_host_malloc(size_t size);
void ggml_cl_host_free(void * ptr);

void ggml_cl_data_free(const struct ggml_tensor* tensor);

void ggml_cl_transform_tensor(struct ggml_tensor * tensor);
void ggml_cl_load_data(const char * fname, struct ggml_tensor * tensor, size_t offset);

llama.cpp (5 additions, 0 deletions)

@@ -211,6 +211,11 @@ struct llama_model {
ggml_cuda_free_data(tensors_by_name[i].second);
}
#endif // GGML_USE_CUBLAS
#if defined(GGML_USE_CLBLAST)
Review comment (Collaborator): I would prefer to keep it shorter with #elif defined.
for (size_t i = 0; i < tensors_by_name.size(); ++i) {
ggml_cl_data_free(tensors_by_name[i].second);
Review comment (Collaborator): In the interest of keeping the implementation relatively close to the CUDA one, maybe this should be ggml_cl_free_data.
}
#endif // GGML_USE_CLBLAST
}
};
