[PyTorch] Store Tensor explicitly in IValue #48824

Closed · wants to merge 13 commits
Changes from 1 commit
Update on "[PyTorch] Store Tensor explicitly in IValue"
Enables the following diff, which will make toTensor() return
`const Tensor&` and allow callers to avoid refcounting overhead.

Differential Revision: [D25324617](https://our.internmc.facebook.com/intern/diff/D25324617/)

[ghstack-poisoned]
swolchok committed Dec 16, 2020
commit 1b6544b63f08d417b733413dc9a294f9929cf7a8
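The refcounting overhead the commit message refers to can be illustrated with a minimal sketch. The classes below (`TensorImpl`, `Tensor`, `IValueSketch`) are toy stand-ins, not PyTorch's real API: copying the toy `Tensor` bumps an intrusive refcount, so an accessor that returns by value pays an atomic-style increment/decrement on every call, while a `const Tensor&` accessor touches the refcount not at all.

```cpp
#include <cassert>

// Toy stand-ins for at::TensorImpl / at::Tensor (illustrative only).
struct TensorImpl {
  int refcount = 1;
};

// Copying bumps the refcount; this is the overhead the follow-up
// diff's `const Tensor&` return type avoids.
struct Tensor {
  TensorImpl* impl;
  explicit Tensor(TensorImpl* i) : impl(i) {}
  Tensor(const Tensor& other) : impl(other.impl) { ++impl->refcount; }
  ~Tensor() { --impl->refcount; }
};

// Toy IValue holding a Tensor directly, with both accessor styles.
struct IValueSketch {
  Tensor t;
  explicit IValueSketch(const Tensor& v) : t(v) {}
  Tensor toTensorByValue() const { return t; }       // copies: refcount churn
  const Tensor& toTensorByRef() const { return t; }  // no refcount change
};

int refcountAfterByRefAccess() {
  TensorImpl impl;
  Tensor t(&impl);                          // refcount: 1
  IValueSketch iv(t);                       // copy into IValue: 2
  const Tensor& ref = iv.toTensorByRef();   // no copy: still 2
  (void)ref;
  return impl.refcount;
}

int refcountAfterByValueAccess() {
  TensorImpl impl;
  Tensor t(&impl);                          // refcount: 1
  IValueSketch iv(t);                       // copy into IValue: 2
  Tensor copy = iv.toTensorByValue();       // extra copy: 3
  (void)copy;
  return impl.refcount;
}
```

Under this toy model the by-reference accessor leaves the refcount at 2 where the by-value accessor drives it to 3; in the real code the increments are atomic operations, which is why eliminating them is worthwhile on hot paths.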
12 changes: 12 additions & 0 deletions aten/src/ATen/core/ivalue.h
@@ -319,7 +319,10 @@ struct CAFFE2_API IValue final {
// make this abundantly clear.
//
// payload.as_tensor.~Tensor();
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wclass-memaccess"
memcpy(&payload, &rhs.payload, sizeof(payload));
#pragma GCC diagnostic pop
new (&rhs.payload.as_tensor) at::Tensor(std::move(t));
} else if (rhs.isTensor()) {
rhs.swap(*this);
Contributor:
This is potentially slow because it needs to do the isTensor checks again (depending on how smart the compiler is about inlining this and proving that the extra branches are never executed). Not sure if it's relevant in practice, but if you want to optimize it, you could move lines 332 to 335 into their own subfunction, swapWithTensor(lhs, rhs) or something like that, and call it from both the isTensor() and rhs.isTensor() cases.
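The reviewer's suggestion can be sketched as follows. This is a hypothetical simplification, not the actual patch: `Value`, its members, and `swapWithTensor` are stand-ins (a `std::string` plays the role of the non-trivial `at::Tensor` member), but the shape of the refactor is the same: hoist the "one side holds a Tensor" logic into a helper so the `rhs.isTensor()` branch can call it directly instead of re-entering swap and re-running the checks.

```cpp
#include <cassert>
#include <string>
#include <utility>

// Toy stand-in for IValue: either a non-trivial "tensor" member
// (std::string here) or trivially-copyable bits are active.
struct Value {
  bool isTensor = false;
  std::string tensor;   // plays the role of at::Tensor (non-trivial)
  long long bits = 0;   // plays the role of the trivial payload

  // Helper per the review suggestion: tensorSide holds a tensor,
  // otherSide holds plain bits; exchange them and flip the tags.
  static void swapWithTensor(Value& tensorSide, Value& otherSide) {
    std::string t = std::move(tensorSide.tensor);
    tensorSide.bits = otherSide.bits;
    otherSide.tensor = std::move(t);
    std::swap(tensorSide.isTensor, otherSide.isTensor);
  }

  void swap(Value& rhs) {
    if (isTensor && rhs.isTensor) {
      std::swap(tensor, rhs.tensor);
    } else if (isTensor) {
      swapWithTensor(*this, rhs);
    } else if (rhs.isTensor) {
      swapWithTensor(rhs, *this);  // reuse: no second round of tag checks
    } else {
      std::swap(bits, rhs.bits);
    }
  }
};
```

The point of the refactor is that `rhs.swap(*this)` in the original would re-test both tags on re-entry; calling the helper with the arguments flipped does the same work with the branch already resolved.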

@@ -878,7 +881,10 @@ struct CAFFE2_API IValue final {
//
// rhs.payload.as_tensor.~Tensor();
} else {
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wclass-memaccess"
memcpy(&payload, &rhs.payload, sizeof(payload));
#pragma GCC diagnostic pop
}
tag = rhs.tag;
is_intrusive_ptr = rhs.is_intrusive_ptr;
@@ -913,7 +919,10 @@ struct CAFFE2_API IValue final {
if (isTensor()) {
new (&payload.as_tensor) at::Tensor(p.as_tensor);
} else {
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wclass-memaccess"
memcpy(&payload, &p, sizeof(payload));
#pragma GCC diagnostic pop
}
}

@@ -980,7 +989,10 @@ struct CAFFE2_API WeakIValue final {
IValue lock() const {
if (!is_intrusive_ptr) {
IValue::Payload newPayload;
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wclass-memaccess"
memcpy(&newPayload, &payload, sizeof(newPayload));
#pragma GCC diagnostic pop
return IValue(newPayload, tag, false);
}
if (IValue::Tag::Tensor == tag) {
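Every hunk above adds the same pragma sandwich around a `memcpy` of the payload. A minimal sketch of why (assumed simplification; `Payload` and `copyTrivialPayload` are illustrative names, with `std::string` standing in for `at::Tensor`): once the payload union gains a non-trivially-copyable member, GCC's `-Wclass-memaccess` warns on any `memcpy` of the whole union, even though the copy is fine whenever the *active* member is trivially copyable. The diff's `push` / `ignored` / `pop` pragmas suppress the warning only around those known-safe copies.

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Union with a non-trivial member, like IValue's payload after this PR.
union Payload {
  long long as_int;
  double as_double;
  std::string as_tensor;  // non-trivial member, like at::Tensor
  Payload() : as_int(0) {}
  ~Payload() {}           // owner destroys as_tensor explicitly when active
};

// Copy a payload known to hold a trivially-copyable member. GCC would
// warn here without the pragmas because Payload as a whole is not
// trivially copyable; the copy is still well-defined for as_int.
long long copyTrivialPayload(long long v) {
  Payload src, dst;
  src.as_int = v;
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wclass-memaccess"
  std::memcpy(&dst, &src, sizeof(dst));
#pragma GCC diagnostic pop
  return dst.as_int;
}
```

The code paths in the diff take care to `memcpy` only on the non-Tensor branches (the Tensor branches use placement new and explicit destructor calls instead), which is what makes the suppression safe.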
You are viewing a condensed version of this merge commit.