
Research Literature

Published Papers & Current Reading

Published Papers


Feeling Machines: Ethics, Culture, and the Rise of Emotional AI

Sorbonne Center for Artificial Intelligence

Authors: Vivek Chavan, Arsen Cenaj, Shuyuan Shen, Ariane Bar, Srishti Binwani, Tommaso Del Becaro, Marius Funk, Lynn Greschner, Roberto Hung, Stina Klein, Romina Kleiner, Stefanie Krause, Sylwia Olbrych, Vishvapalsinhji Parmar, Jaleh Sarafraz, Daria Soroko, Daksitha Withanage Don, Chang Zhou, Hoang Thuy Duong Vu, Parastoo Semnani, Daniel Weinhardt, Elisabeth André, Jörg Krüger, Xavier Fresquet

Institution: Sorbonne University, Paris, France

Publication Date: June 14, 2025

Abstract: This paper explores the growing presence of emotionally responsive artificial intelligence through a critical and interdisciplinary lens. Bringing together the voices of early-career researchers from multiple fields, it examines how AI systems that simulate or interpret human emotions are reshaping our interactions in areas such as education, healthcare, mental health, caregiving, and digital life.

Key Themes:

  • Ethical implications of emotional AI
  • Cultural dynamics of human-machine interaction
  • Risks and opportunities for vulnerable populations
  • Emerging regulatory, design, and technical considerations

Subjects: Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI); Computers and Society (cs.CY)

Published
Collaborative Research
AI Ethics
View Paper

Current Read


Neural Networks and Deep Learning

Michael Nielsen

Type: Online Book

Abstract: A comprehensive introduction to neural networks and deep learning. The book covers the fundamentals of neural networks, backpropagation, and deep learning techniques. It provides both theoretical foundations and practical implementations.

Key Topics:

  • Neural network fundamentals
  • Backpropagation algorithm
  • Deep learning architectures
  • Practical implementation
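
As a rough companion to the topics above, here is a minimal NumPy sketch of backpropagation on a tiny one-hidden-layer network; the XOR data, layer sizes, and learning rate are toy choices of mine, not taken from the book.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: learn XOR with a 2-4-1 sigmoid network (illustrative, not from the book).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
    lr = 2.0

    for _ in range(10000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: quadratic cost, chain rule applied layer by layer.
        delta_out = (out - y) * out * (1 - out)
        delta_h = (delta_out @ W2.T) * h * (1 - h)

        # Gradient-descent parameter updates.
        W2 -= lr * h.T @ delta_out; b2 -= lr * delta_out.sum(axis=0)
        W1 -= lr * X.T @ delta_h;   b1 -= lr * delta_h.sum(axis=0)

    print(out.round(3))  # should approach [0, 1, 1, 0]
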
Current Read
Deep Learning
View Paper

Generative Adversarial Networks

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio

Type: Research Paper

Abstract: This paper introduces Generative Adversarial Networks (GANs), a new framework for estimating generative models via an adversarial process. The framework consists of two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.

Key Contributions:

  • Introduction of GAN framework
  • Adversarial training methodology
  • Generative model architecture
  • Discriminative model design
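
To make the adversarial setup concrete, below is a compressed PyTorch sketch of the training loop: D learns to separate real from generated samples while G learns to fool D. The 1-D toy data, network sizes, Adam optimizers, and the non-saturating generator loss are illustrative assumptions, not the exact setup from the paper.

    import torch
    import torch.nn as nn

    # Toy setup: G maps noise to 1-D samples, D scores real vs. generated.
    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = 2.0 + 0.5 * torch.randn(64, 1)   # stand-in "data" distribution N(2, 0.5)
        fake = G(torch.randn(64, 8))            # G transforms noise z into samples

        # Train D: label real samples 1 and G's samples 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Train G: push D toward labeling generated samples as real (non-saturating loss).
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    print(G(torch.randn(256, 8)).mean().item())  # should drift toward ~2.0
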
Current Read
GANs
View Paper

PaLM: Scaling Language Modeling with Pathways

Chowdhery et al.

Type: Research Paper

Abstract: This paper presents PaLM (Pathways Language Model), a 540-billion-parameter, densely activated Transformer language model. PaLM achieves state-of-the-art few-shot performance across a broad range of language understanding and generation tasks, in many cases by large margins.

Key Features:

  • 540B parameter model
  • Pathways architecture
  • Few-shot learning capabilities
  • State-of-the-art performance
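
As a side note on what "few-shot" means here: the task is specified through a handful of worked examples placed directly in the prompt, with no parameter updates. The toy sentiment prompt below is purely illustrative and has no connection to PaLM's actual evaluation prompts.

    # Toy illustration of few-shot prompting: the task is specified entirely
    # in the prompt via a few input/output examples; no parameters are updated.
    examples = [
        ("The movie was wonderful.", "positive"),
        ("I would not recommend this restaurant.", "negative"),
        ("An absolute delight from start to finish.", "positive"),
    ]
    query = "The service was slow and the food was cold."

    prompt = "Classify the sentiment of each review.\n\n"
    for text, label in examples:
        prompt += f"Review: {text}\nSentiment: {label}\n\n"
    prompt += f"Review: {query}\nSentiment:"

    print(prompt)  # this string would be sent to the language model as-is
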
Current Read
Language Models
View Paper

Wide & Deep Learning for Recommender Systems

Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, Hemal Shah

Type: Research Paper

Abstract: This paper presents Wide & Deep learning, a model architecture that combines the benefits of memorization and generalization for recommender systems. It jointly trains a wide linear model and a deep neural network, drawing on the strengths of both components.

Key Contributions:

  • Wide & Deep architecture
  • Memorization and generalization
  • Recommender systems
  • Joint training methodology
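
A condensed PyTorch sketch of the idea: a linear "wide" part over sparse cross-product features and a "deep" MLP over dense features are summed before the sigmoid and trained jointly. The feature dimensions, layer sizes, and single Adagrad optimizer are simplifications of mine, not the production configuration described in the paper.

    import torch
    import torch.nn as nn

    class WideAndDeep(nn.Module):
        # Wide linear part (memorization) + deep MLP (generalization), summed before the sigmoid.
        def __init__(self, n_wide=1000, n_dense=16):
            super().__init__()
            self.wide = nn.Linear(n_wide, 1)
            self.deep = nn.Sequential(
                nn.Linear(n_dense, 64), nn.ReLU(),
                nn.Linear(64, 32), nn.ReLU(),
                nn.Linear(32, 1),
            )

        def forward(self, x_wide, x_dense):
            return torch.sigmoid(self.wide(x_wide) + self.deep(x_dense))

    model = WideAndDeep()
    opt = torch.optim.Adagrad(model.parameters(), lr=0.05)

    x_wide = torch.zeros(32, 1000); x_wide[:, :3] = 1.0   # fake one-hot cross-product features
    x_dense = torch.randn(32, 16)                         # fake dense/embedding features
    clicks = torch.randint(0, 2, (32, 1)).float()         # fake click labels

    loss = nn.functional.binary_cross_entropy(model(x_wide, x_dense), clicks)
    opt.zero_grad(); loss.backward(); opt.step()          # wide and deep parts update jointly
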
Current Read
Recommender Systems
View Paper

Federated Learning: Strategies for Improving Communication Efficiency

Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, Dave Bacon

Type: Research Paper

Abstract: This paper presents several strategies for reducing the communication cost of federated learning. The authors propose structured and sketched updates, backed by theoretical guarantees and empirical demonstrations of improved communication efficiency.

Key Contributions:

  • Federated learning optimization
  • Communication efficiency
  • Structured updates
  • Sketching techniques
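
A toy NumPy sketch of the general idea, loosely inspired by the random-mask updates discussed in the paper: each client uploads only a rescaled random subset of its update's coordinates, and the server averages what it receives. The dimensions, client count, and keep fraction are arbitrary illustrative values.

    import numpy as np

    rng = np.random.default_rng(0)
    dim, n_clients, keep = 10_000, 8, 0.1   # model size, clients, fraction of entries uploaded

    global_model = np.zeros(dim)
    local_updates = [rng.normal(size=dim) for _ in range(n_clients)]   # stand-in local updates

    compressed = []
    for upd in local_updates:
        # Each client uploads only ~10% of coordinates, rescaled so the sparse
        # update is an unbiased estimate of the full one.
        mask = rng.random(dim) < keep
        compressed.append(np.where(mask, upd / keep, 0.0))

    global_model += np.mean(compressed, axis=0)   # server-side averaging

    print(f"~{int(keep * dim)} of {dim} values uploaded per client")
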
Current Read
Federated Learning
View Paper
