Meta Unveils Non‑Generative VL‑JEPA Model
- Meta’s former chief AI scientist Yann LeCun published a paper on “VL‑JEPA,” a vision‑language model built on a joint‑embedding predictive architecture (JEPA) that extends the earlier V‑JEPA design.
- Unlike generative models (e.g., ChatGPT, GPT‑4) that produce text token by token, the new model is non‑generative: it predicts a meaning vector directly in semantic space and converts it into words only when required.
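The contrast above can be sketched in a toy example. This is a minimal illustration, not the paper's method: the predictor, the 8‑dimensional "semantic space," and the four‑word vocabulary are all invented here to show the idea of predicting one embedding in a single pass and decoding it to a word only on demand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy semantic space: each word in a tiny invented vocabulary
# gets a fixed embedding vector.
vocab = ["cat", "dog", "car", "tree"]
word_embeddings = {w: rng.normal(size=8) for w in vocab}

# Fixed toy weights standing in for a trained predictor.
W = rng.normal(size=(8, 4))

def predict_embedding(input_features: np.ndarray) -> np.ndarray:
    """Stand-in for the predictor: maps input features to ONE vector
    in semantic space in a single forward pass -- no token loop."""
    return W @ input_features

def decode_to_word(z: np.ndarray) -> str:
    """Only when words are required: map the predicted vector to the
    nearest word embedding by cosine similarity."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(vocab, key=lambda w: cos(z, word_embeddings[w]))

z = predict_embedding(rng.normal(size=4))  # one vector, not a token stream
print(decode_to_word(z))                   # verbalized only on demand
```

The key design point the sketch mirrors: a generative model would run a decoding loop emitting one token per step, whereas here the prediction itself is a single vector, and the (cheap) word lookup happens only if a textual answer is actually needed.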