In this paper, we explore the possibility of harnessing text extraction to localize a visually impaired shopper navigating a grocery store. Environmental text is extracted only after more traditional guidance techniques first bring the user to the text-rich aisles of the store. This idea is built upon the need for both text extraction and localization in a visual assistance pipeline. Typical visual assistance pipelines address several problems: indoor localization and navigation, classification of an aisle's products, and contextual information retrieval based on local environmental text (e.g., determining a product's price). However, prior work does not consider how text might also augment localization. This paper therefore explores the viability of introducing text into this seemingly disjoint problem space and ultimately concludes that environmental text extraction can enhance indoor localization by providing coarse-grained accuracy.