
TorchKGE Documentation
Release 0.16.25
Armand Boschin
Jun 02, 2021

1 TorchKGE

TorchKGE: Knowledge Graph embedding in Python and PyTorch.

TorchKGE is a Python module for knowledge graph (KG) embedding relying solely on PyTorch. This package provides researchers and engineers with a clean and efficient API to design and test new models. It features a KG data structure, simple model interfaces and modules for negative sampling and model evaluation. Its main strength is a highly efficient evaluation module for the link prediction task, a central application of KG embedding. It has been observed to be up to five times faster than AmpliGraph and twenty-four times faster than OpenKE. Various KG embedding models are also already implemented. Special attention has been paid to code efficiency and simplicity, documentation and API consistency. It is distributed using PyPI under the BSD license.

• Free software: BSD license
• Documentation: https://torchkge.readthedocs.io

1.1 Citations

If you find this code useful in your research, please consider citing our paper (presented at IWKG-KDD 2020):

• Boschin, A. (2020). TorchKGE: Knowledge Graph Embedding in Python and PyTorch. IWKG-KDD 2020.

1.1.1 Model Training

Here are two examples of models being trained on FB15k.

Simplest training

This is the Python code needed to train TransE without any wrapper. This script shows how all parts of TorchKGE should be used together:

from torch import cuda
from torch.optim import Adam

from torchkge.models import TransEModel
from torchkge.sampling import BernoulliNegativeSampler
from torchkge.utils import MarginLoss, DataLoader
from torchkge.utils.datasets import load_fb15k

from tqdm.autonotebook import tqdm

# Load dataset
kg_train, _, _ = load_fb15k()

# Define some hyper-parameters for training
emb_dim = 100
lr = 0.0004
n_epochs = 1000
b_size = 32768
margin = 0.5

# Define the model and criterion
model = TransEModel(emb_dim, kg_train.n_ent, kg_train.n_rel,
                    dissimilarity_type='L2')
criterion = MarginLoss(margin)

# Move everything to CUDA if available
if cuda.is_available():
    cuda.empty_cache()
    model.cuda()
    criterion.cuda()

# Define the torch optimizer to be used
optimizer = Adam(model.parameters(), lr=lr, weight_decay=1e-5)

sampler = BernoulliNegativeSampler(kg_train)
dataloader = DataLoader(kg_train, batch_size=b_size, use_cuda='all')

iterator = tqdm(range(n_epochs), unit='epoch')
for epoch in iterator:
    running_loss = 0.0
    for i, batch in enumerate(dataloader):
        h, t, r = batch[0], batch[1], batch[2]
        n_h, n_t = sampler.corrupt_batch(h, t, r)

        optimizer.zero_grad()

        # forward + backward + optimize
        pos, neg = model(h, t, n_h, n_t, r)
        loss = criterion(pos, neg)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()

    iterator.set_description(
        'Epoch {} | mean loss: {:.5f}'.format(epoch + 1,
                                              running_loss / len(dataloader)))

model.normalize_parameters()
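Once this script has run, the learned embeddings can be pulled out of the model for downstream use. The following is a minimal sketch, assuming that get_embeddings() (part of the model interface documented below) returns the entity and relation embedding tensors, in that order, for TransE:

# Inspect the trained embeddings; this assumes get_embeddings()
# returns (entity embeddings, relation embeddings) for TransEModel.
ent_emb, rel_emb = model.get_embeddings()
print(ent_emb.shape)  # expected: (kg_train.n_ent, emb_dim)
print(rel_emb.shape)  # expected: (kg_train.n_rel, emb_dim)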
Shortest training

TorchKGE also provides simple utility wrappers for model training. Here is an example of how to use them:

from torch.optim import Adam

from torchkge.evaluation import LinkPredictionEvaluator
from torchkge.models import TransEModel
from torchkge.utils.datasets import load_fb15k
from torchkge.utils import Trainer, MarginLoss


def main():
    # Define some hyper-parameters for training
    emb_dim = 100
    lr = 0.0004
    margin = 0.5
    n_epochs = 1000
    batch_size = 32768

    # Load dataset
    kg_train, kg_val, kg_test = load_fb15k()

    # Define the model and criterion
    model = TransEModel(emb_dim, kg_train.n_ent, kg_train.n_rel,
                        dissimilarity_type='L2')
    criterion = MarginLoss(margin)
    optimizer = Adam(model.parameters(), lr=lr, weight_decay=1e-5)

    trainer = Trainer(model, criterion, kg_train, n_epochs, batch_size,
                      optimizer=optimizer, sampling_type='bern',
                      use_cuda='all')
    trainer.run()

    evaluator = LinkPredictionEvaluator(model, kg_test)
    evaluator.evaluate(b_size=200)
    evaluator.print_results()


if __name__ == "__main__":
    main()
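The wrapper is model-agnostic: any of the implemented models can be trained the same way by swapping the model and criterion. Below is a hedged sketch; DistMultModel and LogisticLoss are both provided by TorchKGE, but treating them as a suitable pair here is an assumption, not a recommendation from this documentation:

from torch.optim import Adam

from torchkge.models import DistMultModel
from torchkge.utils import LogisticLoss, Trainer
from torchkge.utils.datasets import load_fb15k

kg_train, _, _ = load_fb15k()

emb_dim = 100
lr = 0.0004
n_epochs = 1000
batch_size = 32768

# Same Trainer pattern as above, with a different model/loss pair
# (assumed suitable: DistMult scores are commonly trained with a
# logistic loss rather than a margin loss).
model = DistMultModel(emb_dim, kg_train.n_ent, kg_train.n_rel)
criterion = LogisticLoss()
optimizer = Adam(model.parameters(), lr=lr, weight_decay=1e-5)

trainer = Trainer(model, criterion, kg_train, n_epochs, batch_size,
                  optimizer=optimizer, sampling_type='bern',
                  use_cuda='all')
trainer.run()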
Training with Ignite

TorchKGE can be used along with the PyTorch Ignite library, which makes it easy to include early stopping in the training process. Here is an example script training a TransE model on FB15k on GPU, with early stopping on the validation MRR:

import torch
from ignite.engine import Engine, Events
from ignite.handlers import EarlyStopping
from ignite.metrics import RunningAverage
from torch.optim import Adam

from torchkge.evaluation import LinkPredictionEvaluator
from torchkge.models import TransEModel
from torchkge.sampling import BernoulliNegativeSampler
from torchkge.utils import MarginLoss, DataLoader
from torchkge.utils.datasets import load_fb15k


def process_batch(engine, batch):
    h, t, r = batch[0], batch[1], batch[2]
    n_h, n_t = sampler.corrupt_batch(h, t, r)

    optimizer.zero_grad()
    pos, neg = model(h, t, n_h, n_t, r)
    loss = criterion(pos, neg)
    loss.backward()
    optimizer.step()

    return loss.item()


def linkprediction_evaluation(engine):
    model.normalize_parameters()

    loss = engine.state.output

    # validation MRR measure
    if engine.state.epoch % eval_epoch == 0:
        evaluator = LinkPredictionEvaluator(model, kg_val)
        evaluator.evaluate(b_size=256, verbose=False)
        val_mrr = evaluator.mrr()[1]
    else:
        val_mrr = 0

    print('Epoch {} | Train loss: {}, Validation MRR: {}'.format(
        engine.state.epoch, loss, val_mrr))

    try:
        if engine.state.best_mrr < val_mrr:
            engine.state.best_mrr = val_mrr
        return val_mrr

    except AttributeError as e:
        if engine.state.epoch == 1:
            engine.state.best_mrr = val_mrr
            return val_mrr
        else:
            raise e


device = torch.device('cuda')

eval_epoch = 20  # do link prediction evaluation every 20 epochs
max_epochs = 1000
patience = 40
batch_size = 32768
emb_dim = 100
lr = 0.0004
margin = 0.5

kg_train, kg_val, kg_test = load_fb15k()

# Define the model, optimizer and criterion
model = TransEModel(emb_dim, kg_train.n_ent, kg_train.n_rel,
                    dissimilarity_type='L2')
model.to(device)
optimizer = Adam(model.parameters(), lr=lr, weight_decay=1e-5)
criterion = MarginLoss(margin)
sampler = BernoulliNegativeSampler(kg_train, kg_val=kg_val, kg_test=kg_test)

# Define the engine
trainer = Engine(process_batch)

# Define the moving average
RunningAverage(output_transform=lambda x: x).attach(trainer, 'margin')

# Add early stopping
handler = EarlyStopping(patience=patience,
                        score_function=linkprediction_evaluation,
                        trainer=trainer)
trainer.add_event_handler(Events.EPOCH_COMPLETED, handler)

# Training
train_iterator = DataLoader(kg_train, batch_size, use_cuda='all')
trainer.run(train_iterator,
            epoch_length=len(train_iterator),
            max_epochs=max_epochs)

print('Best score {:.3f} at epoch {}'.format(
    handler.best_score, trainer.state.epoch - handler.patience))

1.1.2 Model Evaluation

Link Prediction

To evaluate a model on link prediction:

from torch import cuda

from torchkge.evaluation import LinkPredictionEvaluator
from torchkge.utils.datasets import load_fb15k
from torchkge.utils.pretrained_models import load_pretrained_transe

_, _, kg_test = load_fb15k()

model = load_pretrained_transe('fb15k', 100)
if cuda.is_available():
    model.cuda()

# Link prediction evaluation on test set.
evaluator = LinkPredictionEvaluator(model, kg_test)
evaluator.evaluate(b_size=32)
evaluator.print_results()
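Besides print_results(), the individual metrics can be retrieved programmatically. The sketch below assumes the evaluator from the example above, and assumes that each metric method returns a (raw, filtered) pair, as suggested by the use of evaluator.mrr()[1] as the validation MRR in the Ignite example:

# Retrieve individual link-prediction metrics after evaluate() has
# been called (assumed to return (raw, filtered) pairs).
mean_rank, filt_mean_rank = evaluator.mean_rank()
hit_at_10, filt_hit_at_10 = evaluator.hit_at_k(k=10)
mrr, filt_mrr = evaluator.mrr()

print('Filtered MRR: {:.4f} | Filtered Hit@10: {:.4f}'.format(
    filt_mrr, filt_hit_at_10))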
Triplet Classification

To evaluate a model on triplet classification:

from torch import cuda

from torchkge.evaluation import TripletClassificationEvaluator
from torchkge.utils.pretrained_models import load_pretrained_transe
from torchkge.utils.datasets import load_fb15k

_, kg_val, kg_test = load_fb15k()

model = load_pretrained_transe('fb15k', 100)
if cuda.is_available():
    model.cuda()

# Triplet classification evaluation on test set by learning thresholds
# on validation set
evaluator = TripletClassificationEvaluator(model, kg_val, kg_test)
evaluator.evaluate(b_size=128)

print('Accuracy on test set: {}'.format(evaluator.accuracy(b_size=128)))

1.1.3 Models Interfaces

Model

class torchkge.models.interfaces.Model(n_entities, n_relations)

    Model interface to be used by any other class implementing a knowledge graph embedding model. It is only required to implement the methods scoring_function, normalize_parameters, lp_prep_cands and lp_scoring_function. A minimal subclass sketch follows this reference.

    Parameters
        • n_entities (int) – Number of entities to be embedded.
        • n_relations (int) – Number of relations to be embedded.

    n_ent
        Number of entities to be embedded.
        Type: int

    n_rel
        Number of relations to be embedded.
        Type: int

    forward(heads, tails, negative_heads, negative_tails, relations)

        Parameters
            • heads (torch.Tensor, dtype: torch.long, shape: (b_size)) – Integer keys of the current batch's heads.
            • tails (torch.Tensor, dtype: torch.long, shape: (b_size)) – Integer keys of the current batch's tails.
            • negative_heads (torch.Tensor, dtype: torch.long, shape: (b_size)) – Integer keys of the current batch's negatively sampled heads.
            • negative_tails (torch.Tensor, dtype: torch.long, shape: (b_size)) – Integer keys of the current batch's negatively sampled tails.
            • relations (torch.Tensor, dtype: torch.long, shape: (b_size)) – Integer keys of the current batch's relations.

        Returns
            • positive_triplets (torch.Tensor, dtype: torch.float, shape: (b_size)) – Scoring function evaluated on true triples.
            • negative_triplets (torch.Tensor, dtype: torch.float, shape: (b_size)) – Scoring function evaluated on negatively sampled triples.

    get_embeddings()
        Return the tensors representing the entities and relations in the current model.
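The sketch below shows what a custom TransE-like model built on this interface might look like. It is hedged: the method signatures are assumed from the reference above and should be checked against the TorchKGE source, and lp_prep_cands and lp_scoring_function (required for link prediction evaluation) are omitted, so this skeleton only supports training:

import torch
from torch.nn import Embedding
from torch.nn.functional import normalize

from torchkge.models.interfaces import Model


class MyTransE(Model):
    """A minimal TransE-like sketch on top of the Model interface."""

    def __init__(self, emb_dim, n_entities, n_relations):
        super().__init__(n_entities, n_relations)
        self.emb_dim = emb_dim
        self.ent_emb = Embedding(self.n_ent, emb_dim)
        self.rel_emb = Embedding(self.n_rel, emb_dim)

    def scoring_function(self, h_idx, t_idx, r_idx):
        # Negative L2 distance ||h + r - t||, as in TransE
        # (signature assumed from the interface description above).
        h = normalize(self.ent_emb(h_idx), p=2, dim=1)
        t = normalize(self.ent_emb(t_idx), p=2, dim=1)
        r = self.rel_emb(r_idx)
        return -torch.norm(h + r - t, p=2, dim=1)

    def normalize_parameters(self):
        # Project entity embeddings back onto the unit sphere.
        self.ent_emb.weight.data = normalize(self.ent_emb.weight.data,
                                             p=2, dim=1)

    def get_embeddings(self):
        return self.ent_emb.weight.data, self.rel_emb.weight.data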