
PyTorch: How To Implement Attention For Graph Attention Layer

I have implemented the attention mechanism (Eq. 1) of https://arxiv.org/pdf/1710.10903.pdf, but it is clearly not memory efficient: it takes 7-10 GB, so I can only run a single model on my GPU.
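For context, a dense implementation of Eq. 1 typically materialises every pair of transformed node features, and that pairwise tensor is usually what consumes the memory. The sketch below illustrates this under assumed shapes and names (h, W, a, adj are illustrative, not the asker's actual code):

import torch
import torch.nn.functional as F

N, F_in, F_out = 1000, 64, 8
h = torch.randn(N, F_in)          # node features [N, F_in]
W = torch.randn(F_in, F_out)      # shared linear transform
a = torch.randn(2 * F_out)        # attention vector

Wh = h @ W                        # [N, F_out]
# Build every pair (Wh_i || Wh_j): this creates an [N, N, 2*F_out] tensor,
# which is what blows up memory for large N.
pairs = torch.cat(
    [Wh.unsqueeze(1).expand(N, N, F_out),
     Wh.unsqueeze(0).expand(N, N, F_out)], dim=-1)   # [N, N, 2*F_out]
e = F.leaky_relu(pairs @ a, negative_slope=0.2)      # [N, N] raw scores (Eq. 1)

# Mask non-edges and normalise over each node's neighbourhood (Eqs. 2-3).
adj = (torch.rand(N, N) < 0.01).float()              # hypothetical adjacency mask
alpha = torch.softmax(e.masked_fill(adj == 0, float('-inf')), dim=1)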

Solution 1:

Maybe you can use a sparse tensor to store adj_mat:

import numpy as np
import torch

def sparse_mx_to_torch_sparse_tensor(sparse_mx):
    """Convert a scipy sparse matrix to a torch sparse COO tensor."""
    sparse_mx = sparse_mx.tocoo().astype(np.float32)
    # Stack row/column indices into the [2, nnz] layout torch expects.
    indices = torch.from_numpy(np.vstack((sparse_mx.row,
                                          sparse_mx.col))).long()
    values = torch.from_numpy(sparse_mx.data)
    shape = torch.Size(sparse_mx.shape)
    return torch.sparse_coo_tensor(indices, values, shape)
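A hedged usage example, assuming the adjacency matrix comes from scipy.sparse (the small graph below is made up for illustration). Storing adj_mat this way lets neighbour aggregation go through torch.sparse.mm instead of a dense N x N matrix:

import numpy as np
import scipy.sparse as sp
import torch

# Hypothetical 4-node graph stored as a scipy CSR matrix.
adj_mat = sp.csr_matrix(np.array([[0, 1, 0, 0],
                                  [1, 0, 1, 0],
                                  [0, 1, 0, 1],
                                  [0, 0, 1, 0]], dtype=np.float32))

adj_t = sparse_mx_to_torch_sparse_tensor(adj_mat)  # sparse COO tensor, only nnz entries stored
x = torch.randn(4, 8)                              # node features [N, F]
out = torch.sparse.mm(adj_t, x)                    # aggregation without a dense adjacency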
