Proposal: Mobile-based Context-Aware Assistant for Art Exhibit Interaction
Project Summary
This project aims to develop a mobile application that functions as a context-aware assistant. It will scan and recognize artworks in a museum setting, such as MoMA, providing basic information and engaging users in interactive discussions about the exhibits. Inspired by my previous Discord-based chatbot, which could recognize real-world objects, this project extends that idea into an interactive, on-the-go experience designed specifically for art lovers and museum visitors.
One-Sentence Description
A wearable, mobile-based context-aware assistant that identifies and shares insights about art exhibits while answering user questions in real time.
Inspiration and Background
This idea stemmed from my experience with a Discord chatbot that recognized objects through image input, sparking my interest in bringing such recognition and interaction to mobile devices. My goal is to enhance the museum experience by delivering informative, accessible content and personalized interactions about each artwork directly through the user’s mobile device.


Goals and Scope
- Project Goals
  - Objective: Create a simple yet effective mobile app that identifies nearby art exhibits and provides contextually relevant information.
  - Target Audience: Art enthusiasts and museum visitors looking to enhance their experience through interactive learning.
  - Context: Designed for a phone worn on the body, the app continuously scans the environment and detects artworks in view, making it well suited to an immersive museum experience.
- Scope
  - Implementation Timeline: This is achievable within the three-week project timeframe by focusing on foundational features: object recognition, basic Q&A interaction, and real-time information retrieval.
  - Testing: The app’s features can be user-tested remotely with museum or gallery images, or by interacting with prototype screens.
Design Outline
- Platform and Setup
  - Mobile application with a wearable setup allowing continuous scanning.
  - Simple, user-friendly UI for a hands-free, immersive experience.
- Core Features
  - Object Recognition: Scans for artworks using the phone’s camera, identifying them with pre-trained ML models matched against a reference dataset (see the recognition sketch after this outline).
  - Information Display: Provides exhibit details such as title, artist, medium, and year, retrieved from a structured database or API.
  - Interactive Q&A: Users can ask follow-up questions about the exhibit for deeper context, answered by a conversational AI module.
- Context Awareness
  - Adapts based on the user’s location within the museum, previous interactions, and environmental cues, making each engagement unique and relevant.
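To make the recognition step concrete, below is a minimal prototype sketch in Python that matches a camera frame against reference photos of known exhibits by embedding similarity, using a pre-trained torchvision backbone. The folder layout (a refs/ directory with one JPEG per exhibit), the exhibit IDs taken from filenames, and the choice of ResNet-18 are assumptions for prototyping, not a final design; an on-device build would likely swap in a mobile-optimized model.

```python
from pathlib import Path

import torch
from PIL import Image
from torchvision import models

# Load a pre-trained ResNet-18 and strip the classifier head so the network
# outputs a 512-dimensional feature vector for each image.
weights = models.ResNet18_Weights.DEFAULT
preprocess = weights.transforms()
backbone = models.resnet18(weights=weights)
backbone.fc = torch.nn.Identity()
backbone.eval()

def embed(path: str) -> torch.Tensor:
    """Return the feature vector for one image file."""
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        return backbone(preprocess(img).unsqueeze(0)).squeeze(0)

def build_index(ref_dir: str) -> dict[str, torch.Tensor]:
    """Embed every reference photo; the filename stem serves as the exhibit ID."""
    return {p.stem: embed(str(p)) for p in Path(ref_dir).glob("*.jpg")}

def recognize(frame_path: str, index: dict[str, torch.Tensor]) -> tuple[str, float]:
    """Return the best-matching exhibit ID and its cosine similarity score."""
    q = embed(frame_path)
    scores = {eid: torch.cosine_similarity(q, v, dim=0).item() for eid, v in index.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]  # the caller should reject low-scoring matches

# Hypothetical usage with assumed paths:
# index = build_index("refs")
# exhibit_id, score = recognize("camera_frame.jpg", index)
```

For the wearable scenario, the same matching would run on periodically captured frames, with a similarity threshold to skip frames that do not contain a known exhibit.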
Technical Hurdles
Key challenges include keeping object recognition accurate in a dynamic gallery environment (changing lighting, crowds, partial views of works) and designing the Q&A system to give meaningful answers from limited data. Feedback on recommended tools for on-device object recognition or lightweight databases would be valuable.
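On the lightweight-database question, one option I am considering for the prototype is Python’s built-in sqlite3. The sketch below shows an assumed exhibit schema keyed by the recognizer’s exhibit ID, plus a helper that flattens the stored fields into a grounding string for the Q&A module; the table name, fields, and helper names are illustrative assumptions rather than a settled design.

```python
import sqlite3

def init_db(path: str = "exhibits.db") -> sqlite3.Connection:
    """Create the exhibit table if it does not exist yet."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS exhibits (
               id      TEXT PRIMARY KEY,  -- matches the recognizer's exhibit ID
               title   TEXT,
               artist  TEXT,
               medium  TEXT,
               year    INTEGER,
               summary TEXT               -- short blurb shown on recognition
           )"""
    )
    return conn

def get_exhibit(conn: sqlite3.Connection, exhibit_id: str) -> dict | None:
    """Fetch one exhibit record as a plain dict, or None if unknown."""
    row = conn.execute(
        "SELECT id, title, artist, medium, year, summary FROM exhibits WHERE id = ?",
        (exhibit_id,),
    ).fetchone()
    if row is None:
        return None
    return dict(zip(("id", "title", "artist", "medium", "year", "summary"), row))

def build_qa_context(exhibit: dict) -> str:
    """Flatten the stored fields into a grounding string the conversational
    module can answer follow-up questions against."""
    return (
        f"{exhibit['title']} ({exhibit['year']}) by {exhibit['artist']}, "
        f"{exhibit['medium']}. {exhibit['summary']}"
    )

# Hypothetical record for prototype testing (not real catalogue data):
# conn = init_db()
# conn.execute(
#     "INSERT OR REPLACE INTO exhibits VALUES (?, ?, ?, ?, ?, ?)",
#     ("demo_painting", "Untitled", "Unknown Artist", "Oil on canvas", 1950,
#      "A placeholder entry used while building the prototype."),
# )
# print(build_qa_context(get_exhibit(conn, "demo_painting")))
```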
Feedback Questions
- Is the scope realistic for the timeline, or should I narrow it further?
- Would specific museums (like MoMA) provide sufficient testing ground for a prototype?