Hierarchy Transformers (HiTs)

This collection includes language models trained on hierarchies using hyperbolic losses. The resulting HiT models yield entity embeddings that are hierarchically organised in hyperbolic space.

Hierarchy Transformer

Hierarchy Transformer (HiT) is a framework that enables transformer encoder-based language models (LMs) to learn hierarchical structures in hyperbolic space.
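At a high level (a sketch based on the paper, where d is the hyperbolic distance in the Poincaré ball, O its origin, and α, β are margin hyperparameters): HiT re-trains an LM with two losses, a hyperbolic clustering loss, max(0, d(e_child, e_parent) − d(e_child, e_negative) + α), which pulls child–parent pairs together and pushes unrelated entities apart, and a hyperbolic centripetal loss, max(0, d(e_parent, O) − d(e_child, O) + β), which encourages parents to lie closer to the origin than their children.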

Get Started

Install hierarchy_transformers through pip, or from source via our GitHub repository.
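For example (assuming the package is published on PyPI under the same name):

pip install hierarchy_transformers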

Use the following code to get started with HiTs:

from hierarchy_transformers import HierarchyTransformer

# load the model
model = HierarchyTransformer.from_pretrained('Hierarchy-Transformers/HiT-MiniLM-L12-WordNetNoun')

# entity names to be encoded.
entity_names = ["computer", "personal computer", "fruit", "berry"]

# get the entity embeddings
entity_embeddings = model.encode(entity_names)
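HiT embeddings live in a Poincaré ball, so downstream tasks such as subsumption prediction are scored with hyperbolic geometry: the distance between child and parent, plus how much closer the parent sits to the origin. Below is a minimal sketch of such a score, assuming model.manifold exposes a geoopt PoincareBall (with dist and dist0) and that encode returns torch tensors; centri_weight is a hypothetical hyperparameter to tune on validation data:

# embeddings for ("computer", "personal computer") from the example above
# (pass convert_to_tensor=True to model.encode if it returns numpy arrays)
parent, child = entity_embeddings[0], entity_embeddings[1]

# hyperbolic distance between child and parent in the Poincaré ball
dist = model.manifold.dist(child, parent)

# hyperbolic distances from the origin; parents should lie closer to it
parent_norm = model.manifold.dist0(parent)
child_norm = model.manifold.dist0(child)

# higher score => more plausible that `child` is subsumed by `parent`
centri_weight = 1.0  # hypothetical weight; tune on a validation set
score = -(dist + centri_weight * (parent_norm - child_norm))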

Citation

Our paper has been accepted at NeurIPS 2024 (to appear).

Preprint on arXiv: https://arxiv.org/abs/2401.11374.

Yuan He, Zhangdie Yuan, Jiaoyan Chen, Ian Horrocks. Language Models as Hierarchy Encoders. arXiv preprint arXiv:2401.11374 (2024).

@article{he2024language,
  title={Language Models as Hierarchy Encoders},
  author={He, Yuan and Yuan, Zhangdie and Chen, Jiaoyan and Horrocks, Ian},
  journal={arXiv preprint arXiv:2401.11374},
  year={2024}
}