An Android-Based Multimodal AI Application for Contextual
Environmental Learning in Children
Dublin Core
Title
An Android-Based Multimodal AI Application for Contextual
Environmental Learning in Children
Subject
Artificial Intelligence; Multimodal Learning; Google Gemini; Contextual Learning; Environmental Education; Android Application; Children
Description
Children’s limited engagement with nature in the digital era poses a growing challenge for environmental education. This study presents the
development of an Android-based educational application that leverages multimodal artificial intelligence (AI)—specifically the Google Gemini
model—to facilitate contextual environmental learning for preschool and elementary-aged children. Using a prototyping methodology, the
application integrates image capture, cloud-based processing through a FastAPI backend, and a Flutter-based interface designed for young
learners. The system allows children to photograph plants and receive real-time, age-appropriate explanations about plant names, characteristics,
and ecological functions in a narrative format. A limited usability trial involving children of varying age groups demonstrated positive engagement
and curiosity, indicating the app’s potential as an interactive and enjoyable learning medium. Despite occasional inaccuracies in AI-generated
descriptions and reliance on internet connectivity, user feedback suggested strong interest and educational value. Future enhancements will focus
on developing localized plant databases, improving accuracy, and incorporating gamification elements. Overall, this study contributes to the
growing field of AI-driven educational technology, demonstrating how multimodal AI can effectively bridge digital learning with real-world
environmental experiences.
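The abstract describes a pipeline in which a child's photo is sent to a cloud backend (FastAPI) that queries the Google Gemini multimodal model and returns a child-friendly narrative. The following is a minimal illustrative sketch of that kind of backend, not code from the paper: the route name, prompt wording, and the describe_plant() helper are hypothetical, and the commented Gemini call shows one possible integration via the google-generativeai client.

```python
# Minimal sketch of a FastAPI backend like the one described in the abstract.
# Assumptions: the /identify-plant route, the prompt text, and describe_plant()
# are placeholders; the actual application's code is not reproduced here.
from fastapi import FastAPI, File, UploadFile

app = FastAPI()


def describe_plant(image_bytes: bytes) -> str:
    """Placeholder for the multimodal model call.

    In the deployed system this step would send the photo plus a
    child-friendly prompt to the Google Gemini API and return its
    narrative answer. One possible (assumed) integration:

    # import google.generativeai as genai
    # model = genai.GenerativeModel("gemini-1.5-flash")
    # response = model.generate_content(
    #     ["Explain this plant to a young child in simple words.",
    #      {"mime_type": "image/jpeg", "data": image_bytes}])
    # return response.text
    """
    return "This looks like a sunflower! It turns its face toward the sun."


@app.post("/identify-plant")  # hypothetical route name
async def identify_plant(photo: UploadFile = File(...)) -> dict:
    # Read the uploaded photo and ask the model for an age-appropriate story.
    image_bytes = await photo.read()
    return {"explanation": describe_plant(image_bytes)}
```

The Flutter client would capture the photo with the device camera, POST it to this endpoint, and render the returned explanation as narrated text for the child.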
Creator
Andhika Rafi Hananto, Muhammad Izha Rahardian
Source
https://ijiis.org/index.php/IJIIS/article/view/264/166
Publisher
Universitas Kristen Satya Wacana, Salatiga, Indonesia
Date
September 2025
Contributor
Fajar Bagus W
Format
PDF
Language
English
Type
Text
Citation
Andhika Rafi Hananto and Muhammad Izha Rahardian, “An Android-Based Multimodal AI Application for Contextual Environmental Learning in Children,” Repository Horizon University Indonesia, accessed January 1, 2026, https://repository.horizon.ac.id/items/show/9734.