Researchers from Korea University, Upstage AI, and Aigen Sciences have identified specialized components of large language models that process time-dependent information. These "temporal heads" play an important role in how AI systems handle facts that change over time.
The researchers found that these temporal heads exist in multiple language models, though their exact locations vary between systems. Their behavior also depends on the type of knowledge being processed and the specific year mentioned in the prompt.
These specialized components don't just understand simple date references like "in 2004". They can also handle more complex temporal phrases such as "the year the Olympics were held in Athens." This suggests the models have developed a more nuanced understanding of time that goes beyond basic number processing.
Discovery may allow targeted LLM updates
When the researchers disabled these temporal heads, the models lost the ability to recall time-specific information while retaining their other capabilities. This selective forgetting did not affect how the models handled time-invariant knowledge or answered general questions.
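Mechanistically, disabling an attention head typically means zeroing its output before the per-head outputs are recombined. The pure-Python toy below is a minimal sketch of that idea, not the authors' actual procedure; all function names and dimensions are hypothetical. It shows that ablating one head removes exactly that head's slice of the combined output while the other heads' contributions are unchanged.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_head(queries, keys, values):
    """Scaled dot-product attention for a single head (toy version)."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        # Weighted sum of value vectors.
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

def multi_head(qkv_per_head, ablate=frozenset()):
    """Run each head and concatenate outputs per position.

    Heads listed in `ablate` have their output zeroed, which is the
    usual meaning of "disabling" a head in interpretability work.
    """
    seq_outputs = None
    for i, (q, k, v) in enumerate(qkv_per_head):
        head_out = attention_head(q, k, v)
        if i in ablate:
            head_out = [[0.0] * len(row) for row in head_out]
        if seq_outputs is None:
            seq_outputs = [list(row) for row in head_out]
        else:
            for row, h in zip(seq_outputs, head_out):
                row.extend(h)
    return seq_outputs
```

In a real transformer the same effect is achieved with a forward hook that zeroes one head's slice of the attention output; the point of the sketch is that the intervention is surgical, touching only the targeted head.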
The team also discovered that simply adjusting the values of these specialized heads can change the model's temporal knowledge. This could reduce the cost of keeping AI systems up to date: instead of retraining the entire model (an expensive, time-consuming process), developers may be able to update time-sensitive information by targeting only these temporal heads.
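Editing follows the same pattern as ablation: instead of zeroing a head's output, it is rescaled or overwritten with a different activation. The sketch below is a hypothetical illustration of that distinction (the names and the exact editing procedure are assumptions, not the paper's method).

```python
def edit_head_output(head_outputs, head_idx, scale=1.0, patch=None):
    """Adjust one head's contribution among a list of per-head outputs.

    - scale: multiply the head's activation (scale=0.0 is plain ablation).
    - patch: overwrite the head's activation entirely, e.g. with a value
      recorded while the model processed a prompt about the desired year.
    Returns a new list; the original activations are left untouched.
    """
    edited = [list(h) for h in head_outputs]
    if patch is not None:
        edited[head_idx] = list(patch)
    else:
        edited[head_idx] = [scale * x for x in edited[head_idx]]
    return edited
```

The appeal of such an intervention is its cost: it changes a handful of activations at inference time rather than updating billions of weights through retraining.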
However, the researchers acknowledge important limitations in their work. Smaller models such as Phi-3-mini, with only 3.8 billion parameters, did not respond to targeted manipulation of temporal heads, suggesting these models rely on more complex mechanisms that have yet to be identified and understood.