A Hybrid CNN-RNN Based Framework for Transforming Braille to Multilingual Speech
Abstract
Braille is a pivotal medium of communication for individuals with visual impairments, enabling independent access to written information. With the rapid advancement of digital technologies, there is a growing demand for systems that can convert traditional Braille documents into accessible digital formats. This work introduces a hybrid CNN-RNN based framework that leverages image processing and deep learning techniques to transform Braille images into readable text. The proposed system incorporates preprocessing steps such as noise reduction, image enhancement, and Braille dot segmentation to ensure accurate feature extraction. Convolutional Neural Networks (CNNs) are employed for image segmentation and pattern recognition, while Recurrent Neural Networks (RNNs) capture sequential relationships between Braille characters, improving recognition accuracy across variations in spacing, orientation, and image quality. Beyond text recognition, the system integrates multilingual translation and text-to-speech functionality, offering both visual and auditory access to information. The text-to-speech module delivers natural, customizable audio output, catering to diverse user preferences. A well-defined interface with multilingual support is also implemented to ease access for visually impaired individuals. Experimental results demonstrate that the proposed framework achieves high accuracy in converting Braille images into text, which can then be rendered as audio for improved accessibility.
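The abstract gives no implementation details for the CNN-RNN hybrid, so the following is only a minimal sketch of one plausible realization: a PyTorch CRNN in which convolutional layers collapse a Braille line image into per-column feature vectors and a bidirectional GRU models left-to-right character context. The layer sizes, class count, and CTC-style per-column logits are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class BrailleCRNN(nn.Module):
    """Hypothetical CNN-RNN hybrid for Braille line images of shape (1, 32, W).

    The CNN collapses the height dimension into per-column feature vectors;
    a bidirectional GRU then models left-to-right character context.
    """
    def __init__(self, num_classes: int, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 32 x 16 x W/2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 64 x 8 x W/4
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),                              # -> 128 x 1 x W/4
        )
        self.rnn = nn.GRU(128, hidden, num_layers=2,
                          bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes)  # one class reserved for the CTC blank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.cnn(x)                    # (B, 128, 1, W')
        f = f.squeeze(2).permute(0, 2, 1)  # (B, W', 128): one feature vector per image column
        seq, _ = self.rnn(f)               # (B, W', 2 * hidden)
        return self.fc(seq)                # per-column class logits, decodable with CTC

# Shape check on a dummy batch of four 32x256 Braille line images.
model = BrailleCRNN(num_classes=27)  # e.g. 26 letters plus a CTC blank
logits = model(torch.randn(4, 1, 32, 256))
print(logits.shape)  # torch.Size([4, 64, 27])
```

The paper likewise does not name its translation or speech library; as one lightweight possibility, gTTS can synthesize speech from already translated text in many languages:

```python
from gtts import gTTS  # one possible TTS backend; the paper does not specify its library

# Synthesize Hindi audio from translated text and save it as an MP3 file.
gTTS(text="नमस्ते", lang="hi").save("speech.mp3")
```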
How to Cite This Article
G Naga Sreevidya, M Rajitha, K Mahesh Babu, K Venkata Tarun, KRMC Sekhar (2026). A Hybrid CNN-RNN Based Framework for Transforming Braille to Multilingual Speech. International Journal of Artificial Intelligence Engineering and Transformation (IJAIEAT), 7(1), 36-41. DOI: https://doi.org/10.54660/IJAIET.2026.7.1.36-41