Semantic Web Access & Personalization Research Lab

Foundational Transformer Architecture

Goal

The project aims to explore and develop new architectures and strategies that improve the performance of Large Language Models (LLMs) based on the Transformer architecture, increasing their effectiveness, efficiency, and robustness against hallucinations.
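For context, below is a minimal sketch of the kind of Transformer building block such work starts from: a single pre-norm encoder block in PyTorch. The dimensions, layer choices (pre-norm, GELU feed-forward), and the class name MiniTransformerBlock are illustrative assumptions, not the project's actual models.

```python
import torch
import torch.nn as nn

class MiniTransformerBlock(nn.Module):
    """One pre-norm Transformer encoder block: self-attention + feed-forward."""
    def __init__(self, d_model=64, n_heads=4, d_ff=256, dropout=0.1):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads,
                                          dropout=dropout, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.GELU(),
            nn.Linear(d_ff, d_model),
        )

    def forward(self, x):
        # Multi-head self-attention with a residual connection.
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        # Position-wise feed-forward network with a residual connection.
        x = x + self.ff(self.norm2(x))
        return x

# Usage: a batch of 2 sequences, 10 tokens each, 64-dim embeddings.
x = torch.randn(2, 10, 64)
block = MiniTransformerBlock()
print(block(x).shape)  # torch.Size([2, 10, 64])
```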

Supervisors

Marco Polignano
