Google's Titans Architecture and Neural Long-Term Memory: A Critical Review
Authors: Piyush Junghare, Jay Chheniya, Mohit Behare, Pratik Kashte, Saniya Belekar, Vaishnavi Dhoble, Shalini Kumari
Department: CSE (Cyber Security)
Institution: GHRCEM, Nagpur, India
Corresponding author email: shalini.kumari@ghrau.edu.in
Abstract: This paper presents a comprehensive review of Google's Titans architecture, a neural memory framework designed to address the limitations of traditional Transformer models in handling long-term context. Titans introduces a novel neural long-term memory module that learns to memorize at test time, enabling language models to retain, update, and recall information over millions of tokens without the quadratic computational cost associated with standard attention mechanisms. The architecture combines three interconnected memory systems: short-term attention-based memory, a deep neural long-term memory module, and persistent memory for task-specific knowledge. This review examines the theoretical foundations, architectural components, and design principles that enable Titans to achieve strong performance on long-context tasks while maintaining computational efficiency. The paper provides a detailed analysis of the three architectural variants, Memory as Context (MAC), Memory as Gate (MAG), and Memory as Layer (MAL), and discusses their respective strengths and trade-offs in different application scenarios.
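The core idea named in the abstract, a memory that "learns to memorize at test time", can be illustrated with a minimal sketch. This is our simplification, not the authors' implementation: the long-term memory is reduced to a single linear map M, and each incoming (key, value) pair triggers a gradient step on the associative recall loss ||M k - v||^2, whose error term plays the role of the "surprise" signal that drives memorization.

```python
# Minimal sketch (not the Titans authors' code) of test-time memorization:
# a linear associative memory M updated by gradient descent on
# ||M k - v||^2 as new (key, value) pairs arrive during inference.

def matvec(M, k):
    """Plain matrix-vector product."""
    return [sum(M[i][j] * k[j] for j in range(len(k))) for i in range(len(M))]

def memory_update(M, k, v, lr=0.1):
    """One test-time gradient step: M <- M - lr * grad ||M k - v||^2."""
    err = [p - t for p, t in zip(matvec(M, k), v)]  # prediction error ("surprise")
    return [[M[i][j] - lr * 2.0 * err[i] * k[j] for j in range(len(k))]
            for i in range(len(M))]

# Write one key->value association into memory at "inference time",
# then recall it: the memory weights themselves have changed.
d = 3
M = [[0.0] * d for _ in range(d)]
key, val = [1.0, 0.0, 0.0], [0.5, -0.5, 1.0]
for _ in range(50):
    M = memory_update(M, key, val)
recalled = matvec(M, key)  # close to val after the updates
```

The same mechanism scales in spirit to the deep memory module the paper describes; the point of the sketch is only that memorization happens through weight updates at test time, not through a growing attention cache.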
Keywords: Deep learning, long-term memory, neural architecture, sequence modeling, test-time learning, Transformers.
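The three variants reviewed in the paper differ mainly in where the long-term memory's output enters the computation. A schematic sketch (our simplification, with placeholder `attn` and `memory` functions standing in for the real attention and neural-memory modules) makes the structural difference concrete:

```python
# Schematic sketch (our simplification) of where the long-term memory
# output enters each Titans variant. attn() and memory() are toy
# placeholders for the real attention / neural-memory modules.

def attn(tokens):          # placeholder short-term attention branch
    return [t * 1.0 for t in tokens]

def memory(tokens):        # placeholder long-term memory readout
    return [t * 0.5 for t in tokens]

def mac(persistent, segment):
    # Memory as Context: the memory readout is prepended to the segment,
    # so attention sees [persistent tokens | memory tokens | current tokens].
    return attn(persistent + memory(segment) + segment)

def mag(segment, gate=0.7):
    # Memory as Gate: attention and memory branches run in parallel and
    # are mixed elementwise through a gate.
    a, m = attn(segment), memory(segment)
    return [gate * ai + (1 - gate) * mi for ai, mi in zip(a, m)]

def mal(segment):
    # Memory as Layer: the memory module is stacked as a layer before
    # attention, so attention operates on the memory's output.
    return attn(memory(segment))
```

The trade-offs discussed in the review follow from these shapes: MAC grows the attention context, MAG keeps branches parallel and cheap to mix, and MAL keeps a simple sequential stack.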


