Background
Visual impairment diminishes the quality of life of more than 338 million individuals worldwide, with racial and ethnic minorities and socioeconomically disadvantaged groups disproportionately affected. It restricts independence, mobility, and the ability to engage in daily activities, and it also poses challenges to mental health. Despite extensive research aimed at addressing these issues with traditional computational methods, such as object detection and optical character recognition (OCR) on smartphones and smart glasses, existing solutions remain inadequate.
What We Do
AI-powered smart devices such as smart glasses and robots have emerged as a promising way to assist people with low vision. We are developing multimodal AI models optimized for visually impaired individuals and deployed on smart glasses and robots. In particular, we tailor these models to the specific challenges that people with low vision face in daily life, such as reading impairment, navigation difficulties, fall risk, and depression. Check out our open-source code repositories on our Harvard AI Robotics Lab GitHub account.
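As a rough illustration of how an assistive system might dispatch a user's spoken request to one of the task areas listed above, the sketch below routes an utterance to a task-specific prompt for a multimodal model. All function names, keyword lists, and prompt strings here are hypothetical, for illustration only, and are not taken from the lab's actual code.

```python
# Illustrative sketch only: the task categories come from the text above,
# but the routing logic, keywords, and prompts are hypothetical.

TASK_PROMPTS = {
    "reading": "Read aloud any text visible in the scene.",
    "navigation": "Describe obstacles and a safe walking path.",
    "fall_risk": "Identify tripping hazards such as steps, cords, or wet floors.",
}

KEYWORDS = {
    "reading": ("read", "text", "label", "sign"),
    "navigation": ("navigate", "path", "way", "where"),
    "fall_risk": ("trip", "fall", "hazard", "floor"),
}

def route_request(user_utterance: str) -> str:
    """Map a spoken request to a task-specific prompt for a multimodal model."""
    utterance = user_utterance.lower()
    for task, words in KEYWORDS.items():
        if any(word in utterance for word in words):
            return TASK_PROMPTS[task]
    # Fall back to a general description when no task keyword matches.
    return "Describe the scene for a visually impaired user."

print(route_request("Can you read this sign for me?"))
```

In a real system the returned prompt would be sent, together with the camera frame, to a vision-language model; this sketch only shows the request-routing step.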
Selected Publications
Work in progress.