LANG-JEPA

LANG-JEPA is an experimental language model architecture that operates in "concept space" rather than "token space": instead of predicting the next token directly, it predicts the semantic features of upcoming text. The model consists of an encoder and a decoder. The encoder transforms text into semantic embeddings and predicts the latent representation of the next sentence; the decoder converts learned feature embeddings back into human-readable text. The repository contains code for training and evaluating the model.
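The core idea — predicting the next sentence's embedding rather than its tokens — can be illustrated with a minimal numpy sketch. Everything here (the embedding dimension, the stand-in "encoder" outputs, the linear predictor) is a hypothetical toy, not the repository's actual implementation, which would use a deep network and a learned text encoder.

```python
import numpy as np

# Toy sketch of a JEPA-style objective in "concept space" (all names and
# shapes are assumptions for illustration): a frozen encoder maps each
# sentence to an embedding, and a small predictor is trained so that
# predictor(embed(sentence_i)) is close to embed(sentence_{i+1}).

rng = np.random.default_rng(0)
D = 16                                     # embedding dimension (assumed)
sentences = rng.normal(size=(100, D))      # stand-in for encoder outputs
ctx, tgt = sentences[:-1], sentences[1:]   # predict the next sentence's embedding

W = rng.normal(scale=0.1, size=(D, D))     # linear predictor; the real model is deeper
lr = 0.01

def mse_loss(W):
    # Distance between predicted and actual next-sentence embeddings.
    return float(np.mean((ctx @ W - tgt) ** 2))

losses = [mse_loss(W)]
for _ in range(200):
    # Plain gradient descent on the mean-squared error in embedding space.
    grad = 2 * ctx.T @ (ctx @ W - tgt) / len(ctx)
    W -= lr * grad
    losses.append(mse_loss(W))

print(losses[-1] < losses[0])              # training reduces the embedding-space loss
```

The point of the sketch is that the loss is computed entirely between embeddings: no token-level cross-entropy appears, which is what distinguishes the JEPA objective from standard next-token prediction.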

Updated: 2024-12-24