My research focuses on integrating augmented reality and artificial intelligence to enhance human thought and creativity. Traditional AI interfaces are confined to 2D screens, limiting their ability to interact with the physical world. By merging AR with AI, I aim to create seamless digital-physical interactions that expand cognitive and creative possibilities. My work explores AI-mediated communication, AI-powered content creation, machine learning-driven tangible interactions, and the blending of virtual and physical environments. These interfaces enhance human collaboration, learning, and expression by embedding AI-driven visual and interactive elements into real-world contexts. I develop systems that transform static content into dynamic and adaptive experiences, enabling users to engage with information in more immersive ways. By integrating AI with tangible objects, my research makes interactions more intuitive and adaptable. Additionally, I explore how AR and robotics can work together to create dynamic physical environments that respond to human input. My future research aims to incorporate generative AI, large language models, and explainable AI into AR systems, making them more context-aware, adaptive, and transparent. Ultimately, I seek to create intelligent environments that empower people to think, learn, and collaborate in new and meaningful ways, turning everyday spaces into dynamic mediums for thought and creativity.
PufferBot: Actuated Expandable Structures for Aerial Robots. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1338-1343, 2020.